
Apple patents a multi-touch gesture dictionary

Via Ars Technica

A few years ago, it seemed that our future interaction with computers and autonomous agents would be by voice: think of Dave talking to HAL in 2001, or R2-D2, to name a couple of well-known examples. Then films like Johnny Mnemonic and later Minority Report came along, and reality seems to be following fiction, because gestural interfaces and multi-touch screens are now all the rage. We already have gesture control in games like Black & White, and in Mac applications such as FlyGesture and xGestures. As for multi-touch screens, our friend the iPhone is making quite a splash.

Today I read that Apple has filed a patent for a multi-touch gesture dictionary, so it seems our favorite CEO has more products with innovative interfaces in mind.

The patent describes a system of finger gestures and movements, organized as a dictionary: a set of signs and their corresponding meanings. Each gesture combines the fingers that touch (or, as we will see later, do not touch) the surface with a movement associated with them, and each gesture could trigger a response in the system along with some representation or feedback for the user. The patent also covers displaying the dictionary itself, for example through animations or graphics superimposed on the application. These visual aids could in turn be launched with a gesture, and the user could modify the predefined gestures or create new ones.
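To make the idea more concrete, here is a minimal sketch of how such a dictionary might be modeled in code. To be clear, this is purely illustrative: the type names, the finger and motion enumerations, and the example gestures are my own assumptions, not anything taken from the patent or from any Apple API.

```swift
// Hypothetical model of the patent's idea: a gesture is a chord of fingers
// plus a motion, and the dictionary maps each such "sign" to its meaning.
enum Finger: Hashable { case thumb, index, middle, ring, pinky }
enum Motion: Hashable { case tap, swipeLeft, swipeRight, pinch, rotate }

struct Gesture: Hashable {
    let chord: Set<Finger>   // which fingers touch the surface
    let motion: Motion       // the movement they perform together
}

// The dictionary: each sign paired with its meaning.
var gestureDictionary: [Gesture: String] = [
    Gesture(chord: [.thumb, .index], motion: .pinch): "Zoom out",
    Gesture(chord: [.index, .middle], motion: .swipeLeft): "Previous page",
]

// The user could remap a predefined gesture or add a new one.
gestureDictionary[Gesture(chord: [.index], motion: .tap)] = "Select"

// Looking up a gesture yields its meaning, which the system could
// present back to the user as feedback (an animation, an overlay, etc.).
if let meaning = gestureDictionary[Gesture(chord: [.thumb, .index], motion: .pinch)] {
    print("Gesture means: \(meaning)")
}
```

Because the mapping is just data, letting users edit predefined gestures or define new ones, as the patent suggests, amounts to inserting or replacing entries in the dictionary.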

Some of you may be wondering why I mentioned Minority Report and Johnny Mnemonic. The answer is in paragraph 65 of the patent's detailed description, which states that the principles of the patent apply equally well to gestures in three-dimensional space. And it doesn't stop there: it also talks about gestures made with the right and left hands, each shown on its corresponding part of the screen. Since my imagination is quite vivid, I started thinking that if we had sensors on both the monitor and the keyboard, we could gesture and interact with the computer without touching anything at all.

Maybe the future is closer than we think, and before we know it we will find ourselves surfing the Internet or modeling objects in three dimensions using nothing but gestures.
