Researchers say AI could lead to better gesture recognition

Artificial intelligence could help make gesture recognition systems more effective, according to researchers at Purdue University.

The system they created, DeepHand, requires less processing power than other approaches, they said in a paper released this spring.

It allows users to perform a range of tasks, such as picking up items, driving virtual cars, making virtual pottery and moving objects in virtual and augmented reality environments.

“Environments such as HoloLens by Microsoft or Oculus by Facebook will all need hands to work faithfully,” Karthik Ramani, a professor of mechanical engineering at Purdue and one of the paper’s co-authors, told Hypergrid Business.

The system starts out with some basic data about hands and joints, he said.

“Even though deep neural networks can understand hands in various ways, we use a general structure of the hand to guide the learning,” said Ramani. “For example, we generate the wrist orientations first and use that to identify the center of the palm and configuration of each of the fingers.”
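For readers who want a more concrete picture of that idea, the sketch below shows one way such structure-guided prediction could be wired up: the wrist orientation is estimated first, and the palm-center and finger estimates are conditioned on it. This is a minimal, hypothetical PyTorch example, not DeepHand’s actual architecture; the feature extractor, layer sizes and output parameterization (quaternion wrist orientation, 3D palm center, per-finger joint angles) are all assumptions made for illustration.

```python
# Illustrative sketch only: a hierarchical hand-pose head that predicts the
# wrist orientation first and conditions later estimates on it. Names and
# dimensions are placeholders, not the researchers' actual model.
import torch
import torch.nn as nn

class StructureGuidedHandHead(nn.Module):
    def __init__(self, feat_dim=512, n_fingers=5, joints_per_finger=4):
        super().__init__()
        self.wrist = nn.Linear(feat_dim, 4)              # wrist orientation as a quaternion
        self.palm = nn.Linear(feat_dim + 4, 3)           # palm center (x, y, z), conditioned on wrist
        self.fingers = nn.Linear(feat_dim + 4 + 3,       # finger joint angles, conditioned on both
                                 n_fingers * joints_per_finger)

    def forward(self, features):
        wrist = self.wrist(features)
        wrist = wrist / wrist.norm(dim=-1, keepdim=True)  # normalize the quaternion
        palm = self.palm(torch.cat([features, wrist], dim=-1))
        fingers = self.fingers(torch.cat([features, wrist, palm], dim=-1))
        return wrist, palm, fingers

# Example usage with features from some assumed depth-image backbone
head = StructureGuidedHandHead()
feats = torch.randn(8, 512)          # batch of 8 feature vectors
wrist, palm, fingers = head(feats)
```

Chaining the predictions this way is what injects the hand’s general structure into the learning, rather than asking the network to regress every joint independently.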

Watch a demo video below:

Current input methods, such as typing and using a mouse, are difficult to use in a virtual environment.

“Once the hand shape and gestures are understood, other users can build new interfaces that use the hand,” said Ramani. “So future applications will be what people and developers come up with.”