Touch screens may already be on the road to extinction. People will be able to tap the air, causing the kitchen light to brighten or a page to turn on a tablet. A snap of the fingers will start a song. To an outsider, we’ll appear telekinetic. 

Computer scientists have begun developing gesture-recognition technology that can control devices throughout the home, across a range of platforms. Here’s a look at some of the ingenuity that may make our lives feel more like a CSI lab or a Philip K. Dick novel.


EyeSight

Gesture technology frequently uses webcams or other sensors to identify movement. EyeSight, an Israeli firm, requires only a basic webcam, which can be connected to cars, computers, televisions, and other devices. With a range of fifteen feet, the technology can open a browser, turn on the car radio, or answer a call. The company has developed versions for Windows, Android, and Linux, and is working with many hardware vendors; the technology has already been incorporated into Lenovo’s recent Ultrabooks.

Thalmic Labs

Thalmic Labs, based in Ontario, created the MYO armband, which is expected to ship at the end of the year. Worn on the forearm, the device monitors electrical activity and muscle movement, recognizing gestures and bridging the real and digital worlds. The band uses Bluetooth to talk to devices, detecting movements within milliseconds of initiation, which can make it appear almost prescient. As seen in the lab’s promotional video, anything from laptops to speakers to ATVs can be controlled by the band, which costs only $149. Thalmic, founded in 2012, has raised a $1 million seed round and, subsequently, a $14.5 million Series A round.
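Thalmic’s recognition algorithms are proprietary, but the first step in any system built on muscle electrical activity is deciding whether a muscle is active at all. A minimal, illustrative sketch of that step (the window values and threshold here are invented for demonstration):

```python
import math

def window_rms(samples: list[float]) -> float:
    """Root-mean-square amplitude of one window of muscle-signal samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def muscle_active(samples: list[float], threshold: float = 0.2) -> bool:
    """Crude activation detector: a flexed muscle produces a
    higher-amplitude electrical trace than a relaxed one."""
    return window_rms(samples) > threshold

relaxed = [0.01, -0.02, 0.015, -0.01]
flexed = [0.4, -0.35, 0.5, -0.45]
print(muscle_active(relaxed), muscle_active(flexed))  # False True
```

A real armband would go much further, comparing activation patterns across several sensors around the arm to tell one gesture from another, but the thresholding idea is the same.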


MIT

An MIT team led by student Andrea Colaço has invented a gesture platform for smartphones. The team, which won the MIT $100K entrepreneurship competition, is designing technology that digitally projects keyboards onto hard surfaces (e.g., a desk), enabling easier typing on phones and tablets. The system can be adapted to work with wearable devices or those without touchscreens, such as Google Glass.


University of Washington

A University of Washington team has gone a step further, creating ‘WiSee’ technology that uses wi-fi signals to detect movement without a sensor or camera. The system is based on the Doppler effect. Because wi-fi doesn’t require line of sight, the technology could cover an entire household with only a few receivers.

So far, WiSee can recognize nine body gestures by noting the frequency change in wi-fi waves as a body moves. To distinguish intentional gestures from those that arise naturally in conversation, the software requires a repeated gesture that serves as a preamble, or password.
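The frequency changes involved are tiny, which is what makes the approach hard. A back-of-the-envelope sketch of the physics (standard Doppler arithmetic, not WiSee’s actual signal processing) shows why:

```python
# A reflector moving at velocity v shifts a carrier of frequency f by
# roughly delta_f = 2 * v * f / c (the factor 2 covers the signal's
# path to the moving hand and back).

C = 3.0e8          # speed of light, m/s
WIFI_FREQ = 2.4e9  # 2.4 GHz wi-fi carrier, Hz

def doppler_shift(velocity_m_s: float, carrier_hz: float = WIFI_FREQ) -> float:
    """Frequency shift (Hz) of a signal reflected off a moving body."""
    return 2.0 * velocity_m_s * carrier_hz / C

# A hand gesture at roughly 0.5 m/s shifts a 2.4 GHz carrier by only
# a few hertz out of billions, so the receiver must resolve minuscule
# frequency changes.
print(doppler_shift(0.5))  # 8.0
```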

Gesture technology is already a fixture in industry conversations, and some of it is already on the market. The Xbox Kinect uses its cameras to monitor movements and also responds to voice commands. A Norwegian company, Elliptic Labs, has launched gesture technology based on ultrasonic signals transmitted from tiny speakers embedded in a phone. PointGrab tracks hand movements from five meters away, using algorithms to map finger movement onto a cursor coordinate grid.
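PointGrab’s algorithms are proprietary, but the final step it describes, turning a tracked hand position into a cursor position, can be sketched simply. This is an illustrative mapping under assumed conventions (normalized camera coordinates, a mirrored x-axis, and an invented function name), not PointGrab’s implementation:

```python
def hand_to_cursor(hand_x: float, hand_y: float,
                   screen_w: int = 1920, screen_h: int = 1080) -> tuple[int, int]:
    """Map a tracked hand position in normalized camera coordinates
    ([0, 1] on each axis) to a pixel on the screen's cursor grid.

    The x-axis is mirrored so that moving the hand to the right moves
    the cursor to the right from the user's point of view.
    """
    # Clamp to [0, 1] in case the tracker reports a point slightly off-frame.
    x = min(max(hand_x, 0.0), 1.0)
    y = min(max(hand_y, 0.0), 1.0)
    return (round((1.0 - x) * (screen_w - 1)), round(y * (screen_h - 1)))

print(hand_to_cursor(0.5, 0.5))  # centre of a 1920x1080 screen
```

A production tracker would also smooth the position over time to keep the cursor from jittering with small hand tremors.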

New user interfaces are multiplying the ways we can manipulate software, edging closer to the many ways humans naturally communicate. A new computing generation will grow up with these UIs, finding themselves surrounded by, and embedded in, a culture of technology.