Virtual reality and the single office worker

At some distant point in the future, virtual reality will be good enough and immersive enough that we can telecommute to our jobs by simply stepping into some version of a Holodeck or plugging into some version of the Matrix.

Today’s virtual reality has only very limited use for the average corporate employee.

There are some uses. Rapid prototyping, virtual simulations, the occasional virtual meeting or conference. But these are, right now, very much niche applications.

One problem is that a typical job involves dealing with two different types of tasks — ones that involve sharing and processing experiences, and ones that involve processing and sharing information.

Virtual reality is great at the experience part. You and your colleagues can, say, ride a rollercoaster together, visit a mockup of your new office building together, or attend a quarterly earnings presentation together.

And your current computer setup is optimized for dealing with information. You’ve got your word processing, your email, your Internet, your spreadsheets, your Skype — everything you need to move data around.

The problem is when these two things overlap. You don’t want to edit a word processing document by waving your hands in the air — you would get very tired, very quickly. The mouse-and-keyboard combination allows you to work for hours at a time. But to use it, you’d have to leave the virtual world.

This is why so many people are frustrated at attending meetings in Second Life or OpenSim — it takes a long time to get into the world, to get the audio to work, to make sure everyone is wearing clothes — and then if all you do is watch a PowerPoint presentation, then what was the point? You could have watched the same presentation online and had a quick conference call, instead.

But with the online presentation, you don’t get the sense of presence — you’re missing the experience part of the equation. You don’t get to interact with your co-workers, to gossip with them afterwards around the water cooler, to catch lunch together after the meeting. And that’s exactly when a big chunk of work actually happens.

But it just occurred to me that we do actually have a technology that combines the two. And I hate to admit it, but it’s Google Glass.

Imagine that you’re sitting in your home office, working at your computer. But when you look up, instead of seeing the walls around you, you see the rest of your company’s office, your co-workers, your boss. They can walk over and see what you’re working on, catch you up on the latest company gossip.

Google Glass — or some future variant of it — would allow you to see your computer, your keyboard, and your office furniture, while also interacting with a virtual environment.

I got the idea when looking at the Atlas system for the Oculus Rift (now running a Kickstarter campaign).

The Atlas uses an iPhone camera to map your real environment into a virtual environment, so you can walk around your actual living room while you think you’re walking around inside, say, a zombie castle. I wondered if it was possible to use the same idea to integrate a computer, keyboard and your chair into a virtual environment. But then you’d have a situation where you’d have a camera projecting your actual computer screen into a virtual world, which is just weird.

Going the other way — looking at your actual screen, but having virtual people projected into your real office — makes more sense, and also means you’re not wearing a heavy Oculus Rift on your head all day.

Instead, you’re wearing a pair of glasses, which many of us wear anyway.

How long will it take for us to see this hardware?

When I started writing this column a couple of days ago, I would have guessed a year or two for someone to take the Google Glass idea and combine it with full stereoscopic 3D and a video camera.

Turns out, they’ve already done it.

They’re called Space Glasses and they’re already available to order, for $667 each, from Meta, for November delivery. The company was founded just last year by students at Columbia University.

The folks at Meta think their glasses will make computers obsolete because any blank wall can become a computer screen, and any flat surface a keyboard.

I’d probably still want a keyboard, though — I like the tactile feeling of the keys. But I don’t think I’ll mind getting rid of all the monitors I have on my desk.

maria@hypergridbusiness.com

Maria Korolov

Maria Korolov is editor and publisher of Hypergrid Business. She has been a journalist for more than twenty years, has worked for the Chicago Tribune, Reuters, and Computerworld, and has reported from more than a dozen countries, including Russia and China.

  • These guys finished up a Kickstarter last month — how did I miss it? — raising twice their goal: http://www.kickstarter.com/projects/551975293/meta-the-most-advanced-augmented-reality-interface

    • Joe Nickence

I participated in their kickstarter, and I never participate in kickstarters. That’s how much I believe they’re going to take it out of the labs and put it in the hands of the public. Between Meta and the Oculus Rift, traditional screens are doomed. At that point it won’t make a bit of difference what computing hardware you use, as long as it can power the app your HUD needs. But as for mice and keyboards? Those aren’t going anywhere, just like paper and printouts.

ah, this is actually something I like, a lot… the Oculus seemed too bulky to me [and from what little I read it is a standing-up thing]. This is small, and smaller to come, and I can continue to sit on my old couch [should I do a kickstarter for a new one???] and play. The integration seems cool, and I like doing away with monitors a lot.

They say it will be untethered in their consumer editions, which is good also. And they say the first consumer iteration will weigh in at 100 grams, or 3.5 ounces.

      some additional info from their page…

      Can I manipulate 3D objects with my hands?

      Yes, you can move virtual objects using your bare hands (no gloves required). Additionally, you can deform the 3D meshes, allowing you to treat all objects as if they’re made from virtual clay.

      What will be different with the consumer version?

      Smaller, lighter, more fashionable, resembling a pair of popular sunglasses frames. Available in a stand alone option (untethered, fully battery powered).

      Can I use meta if I have a vision impairment in either/both eye(s)?

The meta glasses project an image through prisms that give the illusion the image is 5m away. If you normally need sight aids to see objects clearly at that distance, then you’ll need those sight aids when using meta. Most eyeglasses are compatible with the meta glasses frame (i.e. you can probably wear your eyeglasses and meta glasses at the same time).

  • Ada Radius

We’ve used SL for our nonprofit’s board meetings for years, and recently moved them to Kitely (more server-side bandwidth, better Talk sound, and I can dedicate an entire sim to the event for pennies). It’s more secure than Google Talk or Skype, and it’s easier to figure out who’s talking thanks to the green bands and just enough sense of where sound is coming from. A whole lot cheaper and faster to set up than teleconferencing. I like the combination of live Talk, the feeling that we’re all in the same room rather than in three countries and four time zones, and being able to quickly use typed Chat for specific spellings, numbers and such to get them into the minutes, or to share images or websites if we need to.

  • Tara Yeats

There’s just one problem I have with the “use-a-blank-wall-instead-of-a-monitor” concept: having to stare at a blank wall! My monitors are partially in front of sliding glass doors with a lovely view of the great outdoors – which is great for a periodic change of focus to reduce/prevent eye strain…

    • In one of the demos, they use a blank sheet of paper. They moved the paper around — even crumpled it up — and it continued to show the screen. I can see using plain cardboard stands as screens. Or just have the screen projected into empty air in front of you. I’m curious to see what will actually happen in practice.

      Either way, I think we’re on the cusp of some very significant changes.

  • hack13

To be honest, Google was not the first; “augmented reality” has been around for a very long time now. It just was not an affordable thing to have. For example, the Nintendo DS supports augmented reality: http://www.youtube.com/watch?v=GryGuy3-ZIg <– Nintendo DS Demo

All these companies have done is shove the camera into the glasses and use reflective surfaces to project the images in front of you. I do like how Google has taken it a step further than most and used “bone conduction” instead of a speaker, something that has also been around for a long time. Instead of using a speaker, they pump the vibrations of the sounds directly into the bone that connects to your eardrum. This provides “true” privacy on a phone call, where people beside you can’t listen in.

As far as all of this stuff goes, it is just now becoming popular. It is technology that has been around for years, just taking off. It is kinda like OpenSim: we have had small take-off points, and we may have another big one at some point, which will expand the development even further. But this is technology we all had already, just more compact and more affordable.

As far as computers becoming obsolete, I think maybe for end users, but people like me and other developers will always be tied down to our devices, something we can’t really help. Code is written line by line, and we need the power of PCs to compile these operations. We do have the cloud to compute and compile these days, but it is really not as affordable as people make it seem.

However, this did feature one thing Google had to REMOVE from their glasses due to the major protests against Google Glass. People have asked that before Google adds back facial recognition, it MUST include a DO NOT TRACK feature that keeps a record of people it WILL NOT identify and MUST ignore. This is a hard thing to do, but it is a valid concern. Me, I am not that worried about myself, but I know plenty of very valid and legit reasons to opt out: wives fleeing abusive husbands they have divorced, or children who have not reached an age to choose whether or not to have their data tracked. These are things people MUST keep in mind when making things like this.