A patent recently filed by Apple shows the company researching a way for users to interact with on-screen elements in three dimensions by means of head tracking.
Although this may sound complicated, it is actually quite simple once you see it in action (you will find a very illustrative video on the subject below). Basically, a camera (like the one built into most Macs) analyzes the user's position relative to the screen, and the perspective of the objects on display is adjusted to add a sense of depth.
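To make the idea concrete, here is a minimal sketch of that perspective trick (all names and numbers are hypothetical, not from the patent): the detected face position is turned into a pixel offset for each on-screen object, larger for objects "deeper" in the scene.

```python
def parallax_offset(face_x, face_y, depth, strength=40.0):
    """Shift an object according to head position to fake depth.

    face_x, face_y: face centre in normalized camera coords (0..1).
    depth: object depth; 0 = screen plane, larger = further behind it.
    strength: maximum pixel shift for a full head movement (arbitrary).
    """
    # Centre the coordinates: head dead ahead -> (0, 0), edges -> +/-0.5.
    dx = face_x - 0.5
    dy = face_y - 0.5
    # Deeper objects shift more against the head movement, which is
    # exactly the motion parallax cue our eyes read as depth.
    return (-dx * depth * strength, -dy * depth * strength)
```

With the head centred the offset is zero; lean to one side and background objects slide the other way, just as they would through a real window.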
The idea is not new: companies such as Sony and Microsoft intend to incorporate it into their game consoles in one way or another, most probably by next year. Have you heard of Project Natal? It is essentially the same thing, with voice recognition and the features of a souped-up EyeToy added on top.
See the video on the original site.
The applications Apple proposes range from displaying a chart in 3D, as illustrated in the image above, to revealing the hidden areas of a stack of overlapping windows simply by moving your head. This would add the sense of depth mentioned above even to two-dimensional objects.
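The window case can be sketched the same way (again, purely illustrative names, not Apple's implementation): each window gets a depth value, and deeper windows slide further as the head moves, exposing what lies behind the frontmost one.

```python
def window_positions(windows, head_dx, strength=60.0):
    """Compute parallax-shifted window positions.

    windows: list of (x, y, depth) tuples, depth 0 = frontmost window.
    head_dx: horizontal head displacement, roughly -0.5..0.5.
    strength: pixels of slide per unit of depth (arbitrary constant).
    """
    # Windows further back slide further sideways, so leaning your
    # head peeks "around" the front window at the ones behind it.
    return [(x + head_dx * depth * strength, y)
            for (x, y, depth) in windows]
```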
But Apple goes much further and suggests that the software could be advanced enough to incorporate elements of the user's environment into the scene rendered on screen. For example, the different surfaces could be assigned optical properties (such as refraction or reflection) and portions of the image captured by the camera could be mapped onto them. With this approach, surfaces rendered as chrome could realistically reflect the scene from the user's point of view, and looking at the object from a different angle would change how our reflection appears.
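A very rough sketch of that reflection idea, under loose assumptions (a flat mirror-like surface facing the user, with everything in normalized coordinates; the blend weights are invented for illustration): for each point on the "chromed" surface, decide which webcam pixel to paint there, mirrored horizontally and biased by the viewer's position so the reflection slides as the head moves.

```python
def reflection_sample_point(face_x, face_y, surf_u, surf_v):
    """Pick which camera pixel (normalized 0..1) to paint at surface (u, v).

    A flat mirror reflects the scene mirrored left-to-right; blending the
    viewer's position with the surface coordinate makes the reflection
    track head movement, crudely approximating a real mirror.
    """
    cam_x = min(max(1.0 - (0.5 * face_x + 0.5 * surf_u), 0.0), 1.0)
    cam_y = min(max(0.5 * face_y + 0.5 * surf_v, 0.0), 1.0)
    return (cam_x, cam_y)
```

A real renderer would compute a proper reflection vector per pixel; this just shows the kind of camera-to-surface mapping the patent hints at.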
Even setting aside the ideas in this patent that would be hardest to implement, this is a very interesting field of research. If Apple gets it right, detecting our position fluidly and correctly filtering out our involuntary movements, we could be looking at a major advance in how we interact with our computers.
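That filtering step is a well-understood problem. One common approach (my sketch, not anything from the patent) is a simple exponential moving average over the tracked head position: small involuntary jitters are smoothed away at the cost of a little lag on deliberate movements.

```python
class HeadSmoother:
    """One-pole low-pass filter (exponential moving average) for head tracking.

    Smaller alpha -> heavier smoothing: camera noise and involuntary
    twitches are filtered out, but the cursor of attention lags slightly
    behind deliberate head movements.
    """

    def __init__(self, alpha=0.2):
        self.alpha = alpha
        self.state = None  # last smoothed (x, y), None until first sample

    def update(self, x, y):
        if self.state is None:
            self.state = (x, y)  # initialize on the first measurement
        else:
            sx, sy = self.state
            # Move a fraction alpha of the way toward the new sample.
            self.state = (sx + self.alpha * (x - sx),
                          sy + self.alpha * (y - sy))
        return self.state
```

Tuning `alpha` (or switching to something like a Kalman filter) is exactly the "doing it well" part: too little smoothing and the scene jitters, too much and it feels sluggish.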