Artificial intelligence could help make gesture recognition systems more effective, according to researchers at Purdue University.
The system they created, DeepHand, requires less processing power than other approaches, they said in a paper released this spring.
It lets users perform a range of tasks, such as picking up items, driving virtual cars, making virtual pottery, and moving objects in virtual and augmented reality environments.
“Environments such as HoloLens by Microsoft or Oculus by Facebook will all need hands to work faithfully,” Karthik Ramani, a professor of mechanical engineering and one of the co-authors of the paper, told Hypergrid Business.
The system starts out with some basic data about hands and joints, he said.
“Even though deep neural networks can understand hands in various ways, we use a general structure of the hand to guide the learning,” said Ramani. “For example, we generate the wrist orientations first and use that to identify the center of the palm and configuration of each of the fingers.”
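The staged estimation Ramani describes, predicting the wrist first, then conditioning the palm and finger estimates on it, can be sketched as a simple pipeline. This is a minimal illustration of the idea only, not the paper's actual method: all function names are hypothetical, and the per-stage "regressors" are stubbed out with trivial arithmetic where a trained network would sit.

```python
import numpy as np

def estimate_wrist(depth_features: np.ndarray) -> np.ndarray:
    # Stage 1: predict wrist orientation from raw depth features.
    # Stubbed as a feature mean; a real system would use a trained regressor.
    return depth_features.mean(axis=0)

def estimate_palm(depth_features: np.ndarray, wrist: np.ndarray) -> np.ndarray:
    # Stage 2: locate the palm center, conditioned on the wrist estimate.
    # The wrist term stands in for the structural prior guiding this stage.
    return depth_features.mean(axis=0) + 0.1 * wrist

def estimate_fingers(palm: np.ndarray) -> np.ndarray:
    # Stage 3: estimate a configuration for each of the five fingers,
    # conditioned on the palm estimate (here, trivially copied per finger).
    return np.tile(palm, (5, 1))

def hand_pose_pipeline(depth_features: np.ndarray):
    # Chain the stages: wrist -> palm -> fingers, each feeding the next.
    wrist = estimate_wrist(depth_features)
    palm = estimate_palm(depth_features, wrist)
    fingers = estimate_fingers(palm)
    return wrist, palm, fingers
```

The point of the cascade is that each later stage searches a smaller space because it inherits constraints from the earlier ones, which is one way a structural prior about the hand can reduce the processing a network needs.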
Watch a demo video below:
Current input methods, such as typing and using a mouse, are difficult to use in a virtual environment.
“Once the hand shape and gestures are understood, other users can build new interfaces that use the hand,” said Ramani. “So future applications will be what people and developers come up with.”