Seeing Machines' new product aims to help us better understand how we interact with machines.
You know the scene: Jerry Seinfeld is in his car, executing an ambiguous gesture involving his finger and his nose. The model he's dating catches him. She's sure it was a pick. Jerry insists it was a scratch. The relationship is history.
Now there's a technology that could settle the argument. FaceLab, which was unveiled by its maker, Seeing Machines Inc., in March, is a camera-enabled computer system that tracks head position and facial and eye motion in three dimensions. Installed on a vehicle's dashboard, FaceLab can trip a warning signal when the driver's attention flags. Companies that design air-traffic control or nuclear-reactor panels can use it to determine ergonomically efficient layouts of instruments. And media consultants can track the eye movements of people who are watching TV commercials. "It's for anyone who's trying to understand how people interact when they're using a particular machine," says Alex Zelinsky, CEO of the one-year-old company.
That wasn't FaceLab's original purpose. Zelinsky began developing the technology in 1996 to allow people with disabilities to instruct robots by using head gestures and eye movements. The robots would "act like a helping hand," says Zelinsky, picking up items or bringing food to the user's mouth. The project, which was funded by organizations for the disabled, eventually ran out of money. Volvo Technological Development then picked up the tab and took FaceLab in a mass-market direction. So far, Seeing Machines, which is based in Canberra, Australia, has shipped 10 of the $25,000 devices to customers like Toyota's R&D labs and Delphi Automotive Systems. The start-up hopes for sales of $1 million this year.