5 magic tools for UI designers

The Oculus Rift is coming to Second Life. It’s already available for the Unity platform. Other virtual world platforms and games are rapidly signing on, including the Unreal engine, Valve, Bioshock, Hawken, Minecraft, Portal, and many, many more.

This 3D headset — and the flood of competitors soon to come — will transform the way we engage with virtual worlds.

For example, when you’re wearing a headset, you can’t see the computer keyboard in front of you. Since not all of us are touch typists, you can no longer expect people to type anything. Or use keyboard shortcuts to activate commands. Or use the arrow keys to move an avatar — or a camera — around.

Speaking of the camera, you can’t have one. Not only does it break immersion, but it can make people sick if their viewpoint is heading in one direction while their avatar body is heading in another.

Heads-up displays — where text floats in front of your face — can work in situations where the avatar is wearing a helmet, a virtual Google Glass, or some other device that enables a heads-up display. But in other contexts it just breaks immersion and blocks the view.

But menus have to go. If something is normally at the top, bottom, or side of the screen, then it’s probably just outside your field of vision in the Oculus Rift. And you can’t turn your head to look at it, because then the whole field of view shifts. Plus, menus break immersion.

I suggest that some of the answers to this problem can come from the world of magic. After all, witches, wizards and wicked queens have been making amazing things happen for generations.

How did they do it?

Magic gestures

Samantha twitched her nose. Jeannie blinked her eyes. Sabrina the Teenage Witch pointed her finger.

The Oculus Rift by itself can’t detect these gestures. But the headset is expected to be used in combination with other devices, such as the Microsoft Kinect or Leap Motion.

Using such a device will actually serve two functions: it will give the system a way to detect your gesture commands, and the ability to see your own hands in-world will add to the realism and immersion of the experience.

The downside to gestures is that they aren’t particularly precise. For example, at home, I use a Wii to watch Hulu videos, and navigating the Wii and Hulu menus with the Wii controller is a pain because it’s hard to get fine control when your hand is up in the air.

Eye tracking has the same problem — eyes naturally move around from place to place, and blink automatically, even when you might not want them to.

I expect, over time, that we’ll develop a common language of gestures, similar to the language quickly evolving for touch devices — swipes, pinches, taps and double taps.

In 3D space, we’ll have pointing — obviously. Maybe raising your arms in the air to fly. Putting palms together and spreading them to pull up a menu. Putting your hands up to your eyes in the “taking a picture” motion to take a snapshot. Pinching the fingers of your hands together, then pulling them apart to make a magic wand or laser pointer appear. Leaning forward — or walking in place — to walk forward. Or actually walking, as with the Virtuix Omni treadmill. (Only two weeks left in the Kickstarter campaign, by the way, which is rapidly approaching $1 million.)
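
To make this concrete, here is a minimal sketch, in Python, of how such a gesture vocabulary might be wired to commands. Everything in it (the gesture names, the handlers, the dispatcher) is invented for illustration; a real client would get its gesture events from a tracker SDK like the Kinect's or Leap Motion's.

```python
# A minimal sketch of a gesture vocabulary wired to commands. All names
# here (gestures, handlers, the avatar dict) are invented for illustration;
# a real client would receive gesture events from a tracker SDK such as
# the Kinect's or Leap Motion's.

def fly_up(avatar):
    avatar["flying"] = True

def open_menu(avatar):
    avatar["menu_open"] = True

def take_snapshot(avatar):
    avatar["snapshots"] = avatar.get("snapshots", 0) + 1

GESTURE_COMMANDS = {
    "arms_raised": fly_up,           # raise both arms in the air to fly
    "palms_spread": open_menu,       # palms together, then spread apart
    "picture_frame": take_snapshot,  # hands up to the eyes, camera-style
}

def on_gesture(avatar, gesture_name):
    """Dispatch a recognized gesture to its command; ignore unknown gestures."""
    handler = GESTURE_COMMANDS.get(gesture_name)
    if handler:
        handler(avatar)

avatar = {}
on_gesture(avatar, "palms_spread")
print(avatar)  # {'menu_open': True}
```

Keeping the vocabulary in one table makes it easy to grow the gesture language over time without touching the dispatcher.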

Magic words

The magic word or phrase — abracadabra, alakazoom, open sesame, avada kedavra — causes things to happen, often in combination with particular gestures.

The words have to be unusual, so that they don’t pop up in casual conversation. In a virtual world, they can be used to activate functionality or devices, or to make a menu appear in front of the avatar.
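
As a rough sketch, magic-word activation could be little more than a lookup table keyed on unusual words, assuming a speech-to-text engine that hands over recognized words one at a time. The trigger words and command names below are invented:

```python
# A sketch of magic-word activation, assuming a speech-to-text engine that
# hands over recognized words one at a time. The trigger words and command
# names are invented; the point is that triggers are unusual enough not to
# fire during ordinary conversation.

MAGIC_WORDS = {
    "alakazoom": "open_main_menu",
    "abracadabra": "summon_wand",
}

def on_recognized_word(word, activate):
    """Fire a command only when an unusual trigger word is heard."""
    command = MAGIC_WORDS.get(word.lower())
    if command:
        activate(command)

# Ordinary words pass through silently; only the magic word triggers.
for word in ["hello", "there", "Alakazoom"]:
    on_recognized_word(word, lambda cmd: print("activating:", cmd))
```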

The Oculus Rift doesn’t have voice support built in, but users are likely to be using a separate headset for sound and voice. In the future, the two functions might be combined.

Magic items

We already know how to use many magic items.

We open the stopper on a genie bottle, and rub a magic lamp. We wave a magic wand around, then point it. We put on seven-league boots and cloaks of invisibility. We sit on a flying carpet. We drink magic potions. We spread out a magic tablecloth. We reach into a magic hat to pull out a rabbit.

Oil painting: The Magic Carpet.

The benefit of magic items is that they can offer unlimited new functionality without having to modify the interface itself.

However, we do need to have some kind of standard indicator that an item is a magic item. A sparkly effect, say, so that we aren’t sitting down on every carpet we come across and trying to get it to fly.
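
One possible implementation, sketched below with invented names, is to give every item a simple flag and have the renderer attach the sparkle effect to flagged items:

```python
# A sketch of a standard "this is magic" indicator: each item carries a
# flag, and the renderer attaches a sparkle effect to flagged items so
# users know which objects are worth touching. All names are invented.

class Item:
    def __init__(self, name, magic=False):
        self.name = name
        self.magic = magic

def items_needing_sparkle(items):
    """Return the items that should get the sparkle effect this frame."""
    return [item.name for item in items if item.magic]

scene = [Item("plain rug"), Item("flying carpet", magic=True)]
print(items_needing_sparkle(scene))  # ['flying carpet']
```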

Magic mirrors

Magic mirrors, popular with evil queens, serve double duty. On the one hand, a magic mirror is a mirror. On the other hand, it’s a personal magical assistant.

In virtual worlds in particular, mirrors — the regular kind — are important, since the user can’t simply rotate the camera to see what the avatar looks like.

The Evil Queen’s magic mirror from Once Upon a Time.

When activated, a magic mirror could turn into a touch screen with a menu, avatar inventory, Web browser, or video phone, or it could bring up a Siri-like personal assistant.

Interaction can be through voice, through pointing, through touching, or some combination of those.
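
A minimal sketch of that dual role, with invented mode names, might look like this: the mirror is a plain mirror by default and switches into an assistant mode only when activated.

```python
# A sketch of a magic mirror's dual role: a plain mirror by default,
# switching into one of several assistant modes when activated, whether
# by voice, pointing, or touch. Mode names are invented for illustration.

MIRROR_MODES = {"mirror", "menu", "inventory", "browser", "video_phone", "assistant"}

class MagicMirror:
    def __init__(self):
        self.mode = "mirror"  # by default, it just reflects the avatar

    def activate(self, requested_mode):
        """Switch modes on a valid request; ignore anything else."""
        if requested_mode in MIRROR_MODES:
            self.mode = requested_mode

    def deactivate(self):
        self.mode = "mirror"

mirror = MagicMirror()
mirror.activate("assistant")  # e.g., triggered by a magic word or a touch
print(mirror.mode)            # assistant
mirror.deactivate()
print(mirror.mode)            # mirror
```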

Magic charms

A special type of magic item, magic charms are worn and give certain abilities. Say, the ability to breathe underwater. Or understand a foreign language.

To activate a charm, you pull it out of your inventory and wear it. To deactivate it, you take it off.

I can imagine wearing a bracelet of charms. For a reminder of what each one does, I could briefly touch it, and words would appear explaining its function, then float off and dissipate.
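
Here is a minimal sketch of how such a charm system might behave, with invented charm names: wearing grants the ability, removing revokes it, and a touch returns the reminder text.

```python
# A sketch of charm activation, assuming a simple inventory model: wearing
# a charm grants its ability, taking it off revokes it, and briefly touching
# a worn charm returns its reminder text. The charm names are invented.

CHARM_DESCRIPTIONS = {
    "gill_charm": "Lets you breathe underwater.",
    "babel_charm": "Lets you understand foreign languages.",
}

class Avatar:
    def __init__(self):
        self.abilities = set()

    def wear(self, charm):
        self.abilities.add(charm)      # activate by putting it on

    def remove(self, charm):
        self.abilities.discard(charm)  # deactivate by taking it off

    def touch(self, charm):
        """Return the floating reminder text for a touched charm."""
        return CHARM_DESCRIPTIONS.get(charm, "An ordinary trinket.")

me = Avatar()
me.wear("gill_charm")
print("can breathe underwater:", "gill_charm" in me.abilities)
print(me.touch("gill_charm"))
```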


Maria Korolov

Maria Korolov is editor and publisher of Hypergrid Business. She has been a journalist for more than twenty years, has worked for the Chicago Tribune, Reuters, and Computerworld, and has reported from over a dozen countries, including Russia and China.

  • LaeMing Ai

    Interesting article. I could see a lot of the less-frequently-used UI being moved to a wrist-mounted device: a small, location-in-space-aware, wrist-mounted touch tablet with no need for an LCD, since the graphics are presented in VR over the location the tablet is detected to be in RL. Such a tablet would give a great deal of precision in a small space for UI interaction, physical feedback to the user via there being something actual at the end of one's fingertips, and it would be out of the way when not needed, in the same way a wristwatch is.

    • I agree — the idea of an in-world tablet or smartphone — whether worn on the wrist, or pulled out of inventory when needed — is a great way to get information and features to a user without breaking immersion.

      I didn’t include it in the story because it’s not particularly magical. 🙂

      • LaeMing Ai

        Well, it could be configured like a Ouija Board 😛

  • Paul

    I like the concepts in the article, but I don’t think such things will be implemented.

    First of all, that kind of interface would be hard on the arms. There is the “gorilla arm” problem caused by current gesture interfaces, because of the effort needed to make the gestures.

    Second, there already exists a form of interface that has had many years of development: console-style controllers.

    These kinds of controllers usually have 1 or 2 analogue joysticks, at least 4 surface buttons, and 2 to 4 shoulder buttons. Not only are there many buttons and joysticks for input, these buttons are also easy to find without looking at the controller. It is pretty easy to connect such controllers to a PC, and the control systems for many FPS games that use console-style controllers could be adapted for virtual worlds, as they are quite similar in basic navigation. Inventories can be accessed by using the shoulder buttons, and so forth.

    So, it would make more sense to use a console-style controller with a headset, as these devices are well established and offer many different options for input.

    • You’re right when it comes to existing first-person shooter games being adapted for the Oculus Rift. Gamers already know those controllers, and how to use them without looking at them.

      I’m thinking more of general-purpose social worlds or business or education environments. You’re not constantly shooting at things, so there’s no need to keep your arms up all the time.

      Plus, these kinds of users are less likely to know how to use a standard gaming controller. For example, I can’t remember ever using one. I must have used one as a kid, trying out a game, but those would have been the early one-stick-and-one-button style controllers, anyway. These days, the only time I use a controller is when I use the Wii remote to browse through the Hulu menu.

      But it’s not just social virtual worlds like Second Life, and business, meeting, and training environments, that would benefit from a more natural user interface. So could casual games, strategy games, role-playing games and other games aimed at a wide audience that don’t depend on fast joystick action.

      • Paul

        A good test of how hard it can be to use a gesture interface for a virtual world is to stand with your arms out in front of you as if you were typing on a keyboard. You will quickly find that your arms become tired, and using such an interface day after day would lead to RSI.

        People are finding this with the Kinect and Leap Motion devices: gesture interfaces, while they sound good and look impressive (especially in movies), are not that great in real life.