AltspaceVR solves nausea challenge
I’ve gotten a little wary of trying new virtual reality applications because some of the best ones, graphically, do the nastiest job on my stomach. I still remember the swimming-with-sharks simulation that made me seasick.
AltspaceVR has figured out how to put people in a virtual meeting environment — while they’re wearing the Oculus Rift. In my case, my in-world interview with CEO Eric Romo lasted about an hour, and I felt as good at the end as when I started.
The way they’ve done it is that the only avatar movements possible are looking around in various directions, moving your hands, or teleporting from place to place by clicking on where you want to go.
The teleports were fine. It kind of feels like you wink, and when you open your eyes you’re someplace new. The only problem is that you then have to figure out where you are, because your angles of view look different.
And moving your head around is fine, because as you move your head in-world, you’re also moving it in the real world. So the motions correspond fully, and don’t contribute to motion sickness.
It’s a scene-based world, unlike the map-based worlds of Second Life and OpenSim.
In fact, pretty much all other virtual worlds are scene-based. You load a virtual scene, you do stuff in it, and if you want to go somewhere else, you teleport to a different virtual scene.
It’s easier to program, because you don’t have to keep track of a map, and it doesn’t matter what size each scene is. Depending on how much stuff you pack in, it could be a large, open landscape or a small closet.
One disadvantage of being scene-based is that it’s harder to build up a mental map of your environment when any location can connect to any other location. But this is something that designers can address, if they’re careful about how they link scenes together.
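The scene-based approach described above can be pictured as a simple graph of self-contained spaces connected by teleport links. Here is a minimal, illustrative sketch of that idea — the class and method names are my own, not AltspaceVR’s actual data model — showing why no global map or shared coordinate system is needed:

```python
# Illustrative sketch of a scene-based world (names are hypothetical,
# not AltspaceVR's actual architecture). Each scene is self-contained;
# teleport links connect scenes, so no global map is required.

class Scene:
    def __init__(self, name):
        self.name = name
        self.links = {}  # link label -> destination Scene

    def link(self, label, destination):
        self.links[label] = destination


class AvatarSession:
    def __init__(self, start_scene):
        self.scene = start_scene

    def teleport(self, label):
        # A teleport just swaps which scene is loaded; each scene's
        # size (open landscape or small closet) is independent.
        self.scene = self.scene.links[label]


lobby = Scene("lobby")
theater = Scene("theater")
lobby.link("to_theater", theater)
theater.link("to_lobby", lobby)

user = AvatarSession(lobby)
user.teleport("to_theater")
print(user.scene.name)  # -> theater
```

Because any scene can link to any other, the mental-map problem mentioned above falls entirely on the designer’s choice of links, not on the engine.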
AltspaceVR is a mesh-based world. In fact, currently, all the environments are created by the company itself. The only ways users can affect the environment are the content they put up on in-world screens and the individual 3D objects they can import from websites.
In fact, there’s an API and SDK for developers to create WebGL content and bring it into the world. So, for example, a car company could use one of these environments to hold a meeting about a new car model, and actually bring the car into the virtual space for everyone to look at.
The content that’s brought in is interactive. In the demo that I saw, there was a chess game with knee-high pieces that you could click on with the mouse and move around.
The required hardware is the Oculus Rift, and it can also be used in combination with either the Kinect or Leap Motion to project your hand movements into the virtual world. The company is currently working on adding support for the Samsung Gear VR and HTC Vive headsets, as well.
The platform itself is built on Unity, and in-world media is delivered through a built-in Web browser based on Chromium.
Avatars are created through the central website, and then users can take those avatars into any of the spaces they have access to.
A single scene can support at least 50 avatars, said Bruce Wooden, the company’s head of developer and community relations.
“We haven’t hit our limit yet,” he said.
The platform supports fully spatial voice.
“To the point where you can whisper in someone’s ear,” Wooden said.
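Spatial voice of this kind typically attenuates each speaker’s volume by distance from the listener. The company hasn’t published its actual attenuation curve, so the following is just a hedged sketch of one common approach, an inverse-distance rolloff, with hypothetical parameter names:

```python
import math

# Hedged sketch of distance-based voice attenuation (assumption: AltspaceVR's
# real rolloff curve is not documented here; ref_distance is illustrative).
def voice_gain(speaker_pos, listener_pos, ref_distance=1.0, max_gain=1.0):
    distance = math.dist(speaker_pos, listener_pos)
    if distance <= ref_distance:
        # Within whispering range: full volume.
        return max_gain
    # Beyond that, gain falls off inversely with distance.
    return max_gain * ref_distance / distance

# A speaker half a meter away is at full volume; one ten meters away is faint.
print(voice_gain((0, 0, 0), (0.5, 0, 0)))  # -> 1.0
print(voice_gain((0, 0, 0), (10, 0, 0)))   # -> 0.1
```

This is the same general scheme exposed by common audio engines (for example, Unity’s 3D audio sources or the Web Audio inverse distance model), which is why whispering only works when avatars stand close together.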
Right now, the company’s focus is to provide shared experiences to the general public.
Beta access weekends have included a Super Bowl party, music events, and video game competitions.
People can also use the spaces for presentations, or to watch movies together.
“The intention is to have our core experience be free and to monetize around private experiences or a certain level of customization of avatar or space,” Wooden said. “Or an app store with in-app purchases.”
Right now, the avatars that I saw need quite a bit of work. Arms could be better connected to the bodies. And faces and facial expressions could be connected to voice, as they are in Second Life and OpenSim, for example.
It’s a proprietary platform, which could be a disadvantage in the long term. In the short term, however, I believe that proprietary platforms will initially dominate and help push the field forward.
Eventually, some of the most useful features will wind up in the open source metaverse.
The hand movements, for example, are a nice compromise between full-body puppeteering and limiting people to packaged animations.
For most business environments in particular, hand gestures combined with a selection of sitting and standing poses would be more than enough.
Speaking of sitting, the demo that I saw did not allow me to sit. Instead, clicking on a chair teleported me to a standing position on top of it, rather than to a sitting position. Since it’s a teleport, they could just as easily have placed me in a sit pose; this is something simple to fix, and they may have already taken care of it.
Overall, once the avatars are improved and sitting poses are added, I can see this as an extremely effective platform for business meetings and virtual conferences.
The interface was extremely easy to use, and it took seconds for me to learn how to navigate the built-in media player and pull up videos or websites on in-world screens.
Meanwhile, replacing walking and running with click-to-teleport is something other designers should seriously consider for their virtual environments. It really does eliminate most — if not all — of the motion sickness problems people can experience in virtual reality.
Coming up next
There’s another Beta Access weekend coming up in a couple of days, April 24 through 26. You can request a Beta invite here.
The company will also be at the Silicon Valley VR conference next month in San Jose.