The race is on

After Sony made their Project Morpheus announcement, and the Oculus Rift won best of show at CES, and rumors began flying around about Microsoft planning a virtual reality product, I knew I had to get my act in gear.

Mass adoption of virtual reality is coming, and while it’s taken longer than I hoped so far, once it does come, it will probably move faster than either mobile or the Web did before.

With each new technology adoption cycle, there’s an acceleration due to the fact that people can see what’s coming, and everyone tries to get in early in order to get ahead.

And, today, with Facebook’s acquisition of Oculus, that’s exactly what happened. We’re at the starting line and a giant gun just went off.

(Image courtesy Andrew Kicinski via Flickr.)

Now, I’m not saying that all of my readers with OpenSim startups are going to be able to cash out for millions next week, next month, or next year. But I’m not saying that they won’t, either.

There’s going to be a big land grab, a new investment bubble.

Now, I don’t know what software platform Facebook is going to choose for its own Oculus Rift environments. Hopefully, it will be something akin to Second Life, where everyday users can actually modify the environment themselves. In fact, it might well be Second Life itself, where there’s a large collection of ready-made content and a large user base — and there’s already support for the Oculus Rift.

Or it might be OpenSim, which would provide Facebook with a more scalable platform that has the potential of becoming a basic building block of the 3D Web. If I were Facebook — or one of their competitors — I’d be cozying up to Linden Lab and Kitely any minute now.

However it shakes out, there’s going to be a huge spike in demand for virtual world experts.

If you thought the hype around the first coming of Second Life was bad, you ain’t seen nothing yet, kid.

Don’t let this opportunity pass you by. Start polishing your expert credentials. Do that website redesign you’ve always been planning to do. Sweep up your landing area and slap a fresh coat of paint on your freebie avatars.

I know what I’ll be doing for the next 50 weekends.



Maria Korolov

Maria Korolov is editor and publisher of Hypergrid Business. She has been a journalist for more than twenty years, has worked for the Chicago Tribune, Reuters, and Computerworld, and has reported from over a dozen countries, including Russia and China. Follow her on Twitter @MariaKorolov.

86 Responses

  1. Joey1058 says:

    You’re right on the money, so to speak, Maria. This piñata called Virtual Reality has gotten so big over the years from all the swings and misses that the honkin’ big bat called Oculus is gonna smash the thing wide open!

  2. Inara Pey says:

    I admire your optimism, but…

    Given that Gartner points to mass adoption of VR still being some 5 years away, and Zuckerberg has made it clear that FB are in the VR game for the long haul (think, as Oculus VR investor Chris Dixon has said, in terms of Google’s 2005 acquisition of Android), I’m not so convinced FB will be making any immediate jumps in any direction, VW-wise.

    And when they do, I seriously doubt it will be in the direction of SL / OpenSim. Frankly, both have too much associated baggage (even if OpenSim’s is the result of it being seen as sitting in SL’s shadow as far as the world at large is concerned – and no, I’m not knocking OpenSim in saying that).

    Rather, if FB look anywhere, I’d tend to lean towards them looking at the likes of High Fidelity. Rosedale’s start-up just claimed its second round of investment (a further $2.5 million on top of that originally seeded by the likes of Linden Lab, Kapor Enterprises, and Google Ventures).

    If nothing else, this puts them squarely on the map as a potential proposition down the road. In addition, in terms of goals, those Rosedale has voiced for High Fidelity seem to be a remarkably good fit for Zuckerberg’s initial statements on his long-term aspirations for “social” VR …

    Of course, there is the Ondrejka / Rosedale past to consider (assuming that is even an issue now, seven years down the road), but I’m not sure that would stand in the way of a potential deal, were one to be considered at some point.

    • I agree that mass adoption is five years away, AT LEAST. I’m not saying that virtual world companies are going to be profitable any time soon.

      What I am saying is that there’s going to be a massive influx of capital into this space, as all the big players try to jockey for position.

      For example — is Microsoft going to let itself miss this opportunity? It already missed out on cloud, and on mobile. And it has a LOT of money to spend.

      Similarly, is Google going to let Facebook own this space? Or is it going to start looking around for acquisitions of its own?

      Sony already has a hardware project in place, and it might stick with that. But it’s getting rid of some of its other hardware lines of business — maybe it will take the opportunity to move into the software side, and try its hand at building (or acquiring) a chunk of the future.

      And we shouldn’t count Apple out. If anyone is going to make a device, and an interface, that sets the standard for the next couple of decades, it’s them. Or, at least, Steve Jobs. They have a legacy to live up to.

      I think the Facebook acquisition was a wake-up call for everyone that if they don’t step up soon, they’ll be left out of the game.

      And while it’s the big buys — like the Oculus Rift — that make the news, there will be lots of potential for smaller companies to make a mark for themselves in all kinds of different niches.

      Even if Rosedale’s High Fidelity is the best fit for Facebook, that still leaves room for other players to snap up SL, OpenSim vendors, and folks working in realXtend and Project Wonderland — after all, we don’t have a commonly agreed-upon standard yet.

      Any of these might win.

      After all, we all know from experience that the winner is rarely the obvious choice, or the most technically advanced, or the most user-friendly. It comes down to an unpredictable combination of early adopters, industry partners, scalability, interoperability, customizability, and, of course, porn.

      • I agree with Inara, High Fidelity will most likely be the first golden child of this development. I bet Mr. Rosedale was cartwheeling to work in his spangly little codpiece when he heard this one 🙂

        That said, if there is anyone who is likely to incorporate the user-generated paradigm, it’s him.

        When the Oculus or any commercial VR headset goes into mass production, there will be a plethora of platform options following shortly in their wake. People will focus on those, and maybe a few will notice you can use it with SL/OS and think “oh, you can use it with that old weird thing too? …well, ok” and not give it much thought beyond that.

        I think Second Life’s biggest sticking point with people is the fact that it is called Second Life. I cringe every time it leaves my RL tongue.

        I don’t think IMVU is more popular than SL because it’s easier to use; I think it’s because you can use the service without the implication that you might just be a massive loser nagging at the back of your mind. It implies all sorts of negative connotations to people, and a fancy new headset is not going to solve that basic public image problem.

        OpenSim stands more of a chance in the long run, I think, so long as it can break its association with SL and offer something markedly different. We are reaching the point of overtaking SL in many ways now and that needs to continue. We have varregions, NPCs and megaregions. That’s a good start. I would like to see things like voxel terrain with paintable textures and splatmaps, increased materials support and the number 1 supreme-most-important-thing-in-the-world, Avatar 2.0.

        Then I think we will be good for the next 5+ years and stand a chance of not being swept away by newer, sexier platforms.

      • Inara Pey says:

        I don’t disagree that FB’s acquisition of Oculus VR could see a rash of interest in terms of major players poking at VR.

        However, I was addressing your core musing on where Facebook may go with Oculus Rift and the specific mentions of Second Life and OpenSim.

        In terms of other players, I would point out that Microsoft has been there for a while, initially via Project Fortaleza and more recently via Phil Spencer’s March 21st confirmation that Microsoft Studios is working on a VR adjunct to Xbox and Kinect.

        But just because they, Google and Apple may well either be already working on approaches to VR or looking at jumping on the bandwagon doesn’t mean they need to look at LL or OpenSim vendors (or users of either platform) for expertise or will be eyeing them with a view to “snapping-up”. Again, see comment on baggage, above.

        In terms of VR standards, and as something of an aside, I find the fact that Oculus VR walked away from the VR Alliance just a week before the FB deal was announced to be perhaps a tad too coincidental. It’ll be interesting to see how that plays out as VR continues to develop toward an actual consumer product base.

        • There are other reasons to buy up companies besides needing their expertise. You can also buy them to deny their expertise to your competitors, or buy them so that they don’t develop into competitors.

          To my mind, this is the great thing about open source projects — you can’t buy them up and cancel them, like Yahoo just did with Cloud Party. Or, my personal bugbear, like Twitter did with DabbleDB.

          In any case, the question of practicality becomes irrelevant when there is too much money chasing too few projects. Then just about anything can get bought up or get VC funding. Ah, we barely knew ye.

        • Bruce Thomson says:

          Hi Inara,
          – I liked your post, and expect you’re right – about the time needed for VR to become as easy-to-use and consumer-attractive as I expect it will.
          – The most stunning thing to me (even before the Facebook $2B purchase of Oculus) is how rapidly the mainstream public (not just the teenage gamers) is now accidentally being ‘educated’ to enjoy and want VR. Notably the movies ‘Avatar’ and ‘Her’, and soon ‘Transcendence’. Add on Google Glass, and you might agree that the technology is moving beyond appearing to be science fiction and becoming normalized. I’ve joked to friends that perhaps it’s because the Hollywood crowd may be socializing in the same cafes and restaurants as the NASA and VR and Singularity people.
          – But even brute-force money won’t get us (a) the easy interface and data speeds needed for crystal clear, ultra-low-latency experiences (b) the needed immense array of adapted games and useful applications – and also the deeply satisfying haptics that will be the caramel on the ice cream – to make people reach for their wallets.
          – Personally I can hardly wait for this. I want to use high fidelity VR to shape myself mentally, to intensify my visualizations/self-education/exploring, and to commune much more richly with compatible people all over the world.
          Bruce Thomson in New Zealand

      • Google has Glass… and took a serious hit on their reputation creating the awkward “social virtual world” Lively. One wonders if they’ll be so eager to enter this market again.

        Microsoft, of course, has the Kinect, the Xbox, and apparently some kind of head gear, too. It would be highly unlikely that they’d stay out of the race.

        Apple is a big question mark. They used to have a virtual world division and their own 3D virtual world, which they shut down around 2000 or so, without much fuss. They could certainly leverage the brand, which is famous for both its hardware and software, and create a Mac-only VW for Mac fans, using iGlasses and the iController… and probably sell a few million devices… but the question is, this was never Jobs’ vision – he clearly said “there is no money in virtual worlds” – and one can only wonder whether Cook shares that view as well.

        I’d think that Yahoo, now that they have the Cloud Party techs working for them, might launch not a “social” VW, but a “gaming” VW, one that might also be Rift-compatible. At least they can certainly make an experiment. The problem with the “gaming VW” is that it would be subscription-based, and, as such, targeting a completely different market. But it’s a possibility.

        One also wonders what the likes of Blizzard, EA, Valve, etc. will do now. Will they stick to subscription-based games? Will they attempt a social virtual world using a different business model? Hard to predict really…

    • Arielle says:

      I think people are too focused on the Virtual aspect of the VR label. I strongly suspect that Facebook sees this as something beyond virtual reality and more about R/L reality. This is a tool that could allow all their subscribers to immerse themselves in the living rooms, kitchens etc. of their friends and families by simply donning a headset, while being thousands of miles away.

      When we come down to the crunch, this technology is simply a fancy monitor which will allow for a much greater sense of immersion than looking at a standard monitor. Your peripheral vision will be completely filled by the scene of somewhere some distance away, fed by a camera at that location.

      No doubt this will augment games, S/L, OpenSim and High Fidelity worlds also, but I don’t believe that Facebook paid $2 billion for that reason. Rather, they see it as a way to take this beyond computer-generated worlds into the real reality.

      • Inara Pey says:

        Absolutely. Zuckerberg is looking much further down the road at “social VR”. He made that pretty clear in his statements following the acquisition. While that doesn’t exclude the use of VWs in that vision, it does encompass a lot more.

        What resonates with me in some of his statements is the manner in which Philip Rosedale has described the High Fidelity vapourware. Both see this idea of “social VR” as being the marketplace for immersive activities.

        • Bruce Thomson says:

          1. Rosedale quoted a study that said the amount of energy needed to do things in virtuality (e.g. education, design, business conferences, building construction) was between 1/100th and 1/1000th of the real-world energy needed. Therefore, in the (international, fiercely competitive) business world, the economics of transforming as much physicality as possible into virtualities screams at the accountants to steer their money ships to maximize virtualities.
          2. Another HUGE, almost yet-untapped advantage of virtuality is the ability to create an infinitely huge host of ENTIRELY NEW products and services that are physically impossible in the real world. (All monetisable.) Examples: new kinds of music, sex & other entertainment & social communions, research into the human mind – especially immersive experiences that enable us to powerfully train our minds (e.g. contexts helping us vigorously exercise, lose weight, evict a fear of spiders), new workplace sims and countless other educational experiences.
          3. Facebook ‘badly needs to’ ($2B just to start!) thrash Google Glass. The Oculus can be hosed with millions of R&D dollars to make it wireless & sunglasses-compact. From there, Facebook’s victory could lead them onward to corneal contact lens successors, and to brain caps and implants. Remember, Facebook is still the fierce king of social networking, and with an Oculus Facebook virtual world (e.g. don’t waste time, just buy Second Life or High Fidelity and hose them with money too) Google’s Google+ may just die off.
          4. If so, what happens to ‘search’ (strongly AI-powered, with Kurzweil driving)? It’s the mountain that Google is ‘king of’. For fun, ask yourself this: will Kurzweil ask either IBM’s Watson or his own superb AI expert system, “How on earth can I beat Facebook if Oculus and the virtual worlds leaders’ cars are parked outside the Facebook bank?” I bet we see some FEROCIOUS competition from Google now. Bruce Thomson in New Zealand.

          • Inara Pey says:

            I don’t deny anything you say in terms of the potential for consumer VR (as opposed to the more specialised uses we already have for it in a number of fields). I’ve actually made similar arguments elsewhere in terms of possible consumer market impact that goes beyond the hype and hope.

            What I remain unconvinced of is the idea that, if VR does go big – and I use the word “if” purely because we’ve yet to see it unleashed on the wider consumer market and where it goes – it automatically means that the likes of SL and OpenSim will see corporate-level interest and hunger, for the reason I’ve given: baggage.

            “Don’t waste time, just buy Second Life” – with the risk that you’re taking on a decade-old monolithic structure which has demonstrated multiple complexities of maintenance and support, and you then end up pouring time and resources into just maintaining it rather than developing it, while your competitors use their huge resources to bring something to market engineered from the outset to capture it. Something which has none of the baggage or the mistrust associated with SL.

            Hence, better to gobble up the likes of High Fidelity. They have no baggage (arguably, they have no product either, but vapourware still tends to win in the hip market – hence Hi-Fi’s $5 million in venture funding in a year).

            Where interest in Linden Lab might be stirred is whether or not they have something sitting up their collective sleeve we’ve yet to see. In this, it is interesting to note that in October 2012, Rod Humble took time to visit my blog to confirm the Lab was investing in virtual worlds – plural – and that they were some 3-5 years from realising them. His comments came fully 6 months ahead of any direct financial involvement in High Fidelity, so they may not be simply a pointer to LL putting money into High Fidelity (which had its first round of venture capital funding in April 2013).

            If we assume Humble’s comments were indicative that LL are working on something like “son of SL”, they may very well have something of interest to a potential buyer. Although any interest in them depends on them going out and talking to people about what they’re doing in this regard, and whether it is exciting enough to garner interest.

            Beyond this, and as I pointed out, and Maria agreed, it could be at least five years – if not longer – before VR has established itself as a consumer platform. That’s more than enough time for the big players you mention to measure the market sectors for VR, shape them, determine hardware and software standards, and so on, without actually needing to rely on anything like Second Life when it comes to sectors such as virtual environments / worlds.

          • Ener Hax says:

            Implants – like Johnny Mnemonic! 1995 . . .

        • Ener Hax says:

          i dunno, the web is still mostly written text, i don’t see people wearing VR headsets anytime soon (like 5 to 10 years) and i don’t see facebook still being “the” thing in 10 years – the demographic continues to shift but my crystal ball can certainly be way wrong (i’m not worth zillions like Suckerberg)

          my opinion can be seen in detail on myspace . . . =D

          • Inara Pey says:

            Actually, we broadly agree – see my initial comment about VR potentially not reaching widespread adoption for 5-10 years vis-a-vis Gartner. :).

    • Ilan Tochner says:

      Hi Inara,

      The type of vision Facebook and High Fidelity are pursuing is one that various organizations, Kitely included, have spent quite some time developing components for. It’s not just about the virtual world architecture; a lot of what’s required for a deployable service is integrating backend automation, billing, a marketplace, web-based control panels, management infrastructure, etc. In other words, “boring” stuff that isn’t as sexy as the virtual world engine itself, but that takes a lot of time to do right and without which companies aiming to lead the space will have a hard time providing a mass market service.

      Kitely has been developing Virtual Worlds on Demand technology as an enabler for “Virtual Worlds as Apps” or “YouTube for Virtual Worlds” for more than 5 years. Of the more than 330,000 lines of code we’ve written so far, more than 90% are not OpenSim-specific and can be adapted to work with other virtual world architectures. That type of working infrastructure takes time to develop, even for multi-billion dollar companies with thousands of developers.

      In a world where the difference between leading a space and being a distant second has a lot to do with time to market, companies with deep pockets often buy their way into having a technological lead (which is what Facebook did now with the Oculus VR acquisition even though mass market VR may still be years away).

      As a reference, you can see me mentioning both the aforementioned terms in an interview I gave at MetaMeets 2011:

      As an aside, you’ll note I mentioned Emscripten for turning the viewer into an HTML5+WebGL application years before Mozilla embraced it and Unreal Engine, Unity 3D, and IMVU all adopted it as a tool for getting their solutions into the browser.

  3. Tranquillity (InWorldz) says:

    It is funny in a way, when I’m reading through here I can see how the current experiences of various players in this space definitely affect their viewpoints on what is important.

    Those who have put a lot of time into the management of specific components have skewed judgement on the effort required to manage the platform vs. actually making it work at scale. Management can be made simple using modern tools like Puppet, Chef, and Salt, with a little bit of custom code sprinkled in for the business-logic-specific tasks. Companies the size of Facebook already utilize these tools to manage their datacenters.

    But mass scalability is still an active research field. Using “cloud” everywhere to explain it away only goes so far. Eventually you’ll need distributed message queues. You’ll need to understand and deal with the consequences of the CAP theorem. Leftover MySQL infrastructure will become difficult to maintain, and MySQL write masters will become single points of failure. There are just so many concerns here that can’t just be written off by making statements that have never been tested under real web-scale conditions. If you’re running 1000 servers, and one of them failing brings some service down, you are not ready to run at Facebook scale.

    A company like Facebook has experience with all of this and isn’t going to host a virtual world platform on AWS, nor are they going to use components they have already had scaling problems with in the past. They have their own datacenters and distributed storage systems based on HBase and others. They’re going to have a completely unique perspective on solving scaling problems because they have to deal with it on a daily basis. Their product design is going to be primarily based around the ability to scale out to their userbase.

    The important questions they’ll be asking are things like “Can I get 5,000 people in to watch a basketball game?” and “Will the platform support virtual Beyoncé concerts?” I’m not certain any current project has a proven track record with this kind of scale in mind, but the really awesome thing about a big company being involved is that they have and can acquire a ton of people to work on a team and solve these problems. Look at the history behind big data and why Google and Amazon came up with BigTable and Dynamo, respectively. This same kind of revolution can happen with distributed 3D spatial partitioning and load balancing with enough of the right people involved.
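To make the "distributed 3D spatial partitioning and load balancing" idea concrete, here is a minimal sketch: carve the world into coarse cells and hash each cell onto a simulation server, so avatars standing near each other land on the same machine. The cell size, hash choice, and server names are all illustrative assumptions, not how any existing grid actually does it.

```python
import hashlib
from collections import defaultdict

CELL_SIZE = 64.0  # metres per grid cell; an arbitrary choice for this sketch

def cell_for(x, y):
    """Map a world position to a coarse grid cell."""
    return (int(x // CELL_SIZE), int(y // CELL_SIZE))

def server_for(cell, servers):
    """Deterministically assign a cell to one of N simulation servers by
    hashing the cell id (a crude stand-in for real consistent hashing)."""
    h = int(hashlib.sha256(repr(cell).encode()).hexdigest(), 16)
    return servers[h % len(servers)]

def partition(avatars, servers):
    """Group avatars by the server responsible for their cell."""
    load = defaultdict(list)
    for name, (x, y) in avatars.items():
        load[server_for(cell_for(x, y), servers)].append(name)
    return load
```

Two avatars in the same 64 m cell always land on the same server, which is what lets nearby avatars interact cheaply; the hard research problems (cell handoff, hot-spot splitting, rebalancing when a server dies) start where this sketch ends.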

    Exciting times ahead indeed, but then again 3d has always been exciting.

    • It will be VERY exciting to see what happens when large numbers of people — and large amounts of money — are thrown at these problems.

      I just hope in the process that the little guys don’t get locked out. With console gaming, for example, it’s just the big three players and proprietary infrastructure. With the PC, it’s been basically Windows (or Apple) with software distribution channels controlled by the big guys until recently.

      The Web opened everything up — anyone could put up a website, contribute code to Linux, Apache, WordPress, Drupal, cPanel, etc., or create and sell themes, hosting, and services.

      I hope that the virtual reality future is more like the web, and less like the old Microsoft PC days, or the current console days. What worries me is that VR, especially at the start, might be hardware-dependent without open standards. The way that, say, console games are hardware dependent. With the Web, it doesn’t matter what brand of computer you use to access it, or whether you use some other kind of device altogether, like a phone or tablet.

      If there’s device dependency, there’s only so many systems a typical household will buy. For example in my house, we’ve got… two different consoles, both iPhones and Android phones, two Kindles, and both Windows and Linux PCs. So, basically, we’ve the top two in each category. Which does not bode well for third, fourth, and fifth place companies in a non-integrated world.

      • lmpierce says:

        On the one hand, the proprietary infrastructure of console gaming has provided a relatively stable means of support (a living) to thousands of programmers and artists in a marketplace that, for better and for worse, revolves around recognizable brands that have optimized platforms designed to exclusively control certain advantages, including the user experience. At the same time, there has been a meteoric rise in independent game and app development, the likes of which are unprecedented in the history of computer technology. I don’t see where the “little guys” have been locked out, looking at the marketplace as a whole.

        • Tranquillity (InWorldz) says:

          I think a lot of what I say is taken as very capitalist or commercial, but ultimately it comes down to this. People need to eat and deserve to be paid a fair wage for their work. I want VR/VWs to get to a point where that is possible on a massive scale.

    • Ilan Tochner says:

      While you didn’t mention anyone, Tranquillity, you’re making a lot of assumptions about how other companies’ architectures are designed. Not all systems scale the same way, nor are they all managed the same. There are a lot of open source solutions that work great in one scenario and create a lot of problems in others. There is a big difference between managing servers and managing applications. Some people have degrees in computer science and many years of experience developing solutions for telecoms, where tackling multi-million user scenarios is a requirement. Some companies start off with NoSQL-based solutions. Some companies use abstraction layers when writing code so they aren’t tied into using one cloud provider, etc.

      I could go on, but I hope you understand that making insinuations isn’t really called for. If we found it necessary to write hundreds of thousands of lines of code, it isn’t because we didn’t research the available open source and proprietary solutions; it’s because, even when using them (for some things), there was still a lot more that needed to be done that those solutions didn’t provide.

      Most day-to-day group interactions don’t require handling thousands of people; they require handling a few dozen people at best (and most often just 2-3 people). Being able to quickly provision and run 1000 low-concurrency virtual environments as a type of on-demand service requires a very different architecture than what is required for running a persistent world with high concurrency. There are only so many concerts you attend, but you have small group interactions many times per day. Replacing going to a concert is nice, but Kitely is aiming to provide a VR alternative to day-to-day activities. For that you need the type of solution we’ve been developing. Other companies (mostly telecoms and big VOIP providers) have built similar technology, but there is no open source replacement for it and you can’t just buy it off the shelf (and no, SmartFox doesn’t solve all the problems). You can try; you’ll end up spending a lot of time writing glue code and debugging the interactions between all the open source components you’ll utilize.
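The contrast drawn above, many short-lived low-concurrency worlds rather than one big persistent one, can be sketched as a toy on-demand provisioner: start a world instance when someone enters it, and reap it once it has sat idle. The class name, timeout, and behaviour here are hypothetical illustrations, not Kitely's actual implementation.

```python
import time

class WorldPool:
    """Toy on-demand provisioner: a world instance exists only while
    someone is using it, and is reaped after sitting idle too long."""

    def __init__(self, idle_timeout=300.0):
        self.idle_timeout = idle_timeout
        self.running = {}  # world_id -> timestamp of last activity

    def enter(self, world_id, now=None):
        """Record activity; return True on a cold start, False on reuse."""
        now = time.monotonic() if now is None else now
        started = world_id not in self.running
        self.running[world_id] = now
        return started

    def reap_idle(self, now=None):
        """Shut down worlds idle past the timeout; return their ids."""
        now = time.monotonic() if now is None else now
        dead = [w for w, t in self.running.items()
                if now - t > self.idle_timeout]
        for w in dead:
            del self.running[w]  # a real service would snapshot state first
        return dead
```

The design point it illustrates: the expensive resource is the running simulator, not the stored world, so the scheduler's job is mostly lifecycle bookkeeping rather than in-world concurrency.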

      • Tranquillity (InWorldz) says:

        As you’ve already correctly stated different companies solve these problems in different ways. I just don’t see configuration management and deployment as the killer app here. Not when you have hundreds of millions of people concurrently utilizing a platform to worry about.

        We’ll probably have to agree to disagree on this one. Nonetheless I don’t disrespect the work you’ve put in, I just want to highlight there is a lot more going on than CM at the scale we all hope to achieve for the industry.

        • Ilan Tochner says:

          We view the virtual world architecture itself as just part of the solution. It could be OpenSim, it could be something else. It doesn’t really matter. The main challenge is creating an architecture where all the system components will be able to support having millions (or more) simultaneous environments active at the same time just like there can be millions of group IMing sessions going on at the same time on Facebook. The virtual world architecture that suits that need may or may not be the same one that addresses the virtual concert scenario. In any case, that isn’t the problem we’re trying to solve.

          • Tranquillity (InWorldz) says:

            So as you’ve stated above, achieving the goal of millions of virtual environments active and running at the same time with users inside them is a much harder problem than just configuration management and deployment, which was my point. There are far more problems to conquer after the simulations are spun up live and running than before.

            Some other components, like message routing and object storage, can be generically reusable for many use cases, but going into deep technical detail is probably off base for comments on the blog. It’d be helpful to have a VW ideas forum for all of us to brainstorm.

            As always, my best to you and yours.

          • This is where federation comes in. The hypergrid, without any additional technical tweaks, is already fully scalable.

            It reminds me a lot of the early Web. Sure, individual websites crashed when they got too much traffic (or for any of a million other reasons) but the Web as a whole stayed up.

            I know, I know — security.

            But that’s an issue only as long as the current OpenSim business model is based on renting land to store owners who somehow believe that turning off hypergrid protects them against copybotting.

            For major players like Facebook, or Google, who make their money from ads and from tracking consumer behavior, this is a non-issue. After all, you can copy-and-paste anything you find on Google’s and Facebook’s sites. Sure, if you infringe too much, they’ll slap you with a lawsuit, but there is nothing, technically, keeping you from downloading all of Google or Facebook. (Other than storage limits!)

            Lack of security didn’t stop the growth of the Web. If anything, by allowing people to learn from each other, it allowed best design practices to proliferate quickly and helped speed up the Web’s evolution.

            Plus, there’s a strong case to be made that technologies that allow content-sharing — cassette tapes, video tapes, MP3s, movie streaming — have not actually hurt the original creators of content.


            Yes, we could go the direction of AOL-style, centrally curated, very large-scale walled-garden virtual worlds.

            I really hope that Facebook doesn’t. Or that its competitors don’t.

            I believe that this kind of centralization would be bad for everybody except the owners of those worlds, including the content creator groups it’s supposed to protect.

          • Minethere says:

            “But that’s an issue only as long as the current OpenSim business model is based on renting land to store owners who somehow believe that turning off hypergrid protects them against copybotting.”

            But let us not lose sight of the fact that the majority of people are using hypergated aspects of OpenSim: either enabling it when desired, having it open all the time, or being attached to a grid that has it enabled, from which people can then jump to the ones that have not enabled it [there are several simulators attached to Metropolis that turned off HG].

            “I believe that this kind of centralization would be bad for everybody except the owners of those worlds, including the content creator groups it’s supposed to protect.”

            I do also, and it will not last, either.

          • Tranquillity (InWorldz) says:

            Eventually, no matter how you shard the infrastructure, even setting up 100 “HG Facebook grids”, you’ll need to be able to handle the presence and messaging loads from intra/inter-shard communications. How many database lookups are involved in sending a single instant message within a shard? Between shards? How about a group message? How is the load from these lookups distributed at the HG level? Does it create hot spots or an even distribution? Are these lookups even appropriate? What happens when part of a shard goes down?
            These are the questions that would be asked when proposing that something like this go big. Simply throwing around a word as a silver bullet will not suffice for the platform to be taken seriously.
            I think this whole area has great potential, but overselling something can be as bad or worse than underselling it. One of my personal mottos has always been: Let’s make sure WE know for sure before telling the world about it.

          • Ilan Tochner says:

            I completely agree that there is much more to scalability at that size than just being able to quickly start more servers, which is why it required writing a lot of proprietary code. As you said, this isn’t the proper venue to discuss scalability design approaches. Suffice it to say that we know what we’re doing and have done so in the past at companies we worked in or have done projects for (look at our LinkedIn profiles).

            You could say we’re building a type of VR telecom; what you call configuration management is just a small component of our system. Regardless, even if you have a shared-nothing architecture (the simplest case to scale), automation (across the board) is what enables turning a software component that provides some function (VR in this case) into a competitively priced service that is robust enough to handle millions of concurrent users.

          • Bruce Thomson says:

            Are you and Tranquillity (InWorldz) aware of Philip Rosedale’s ‘High Fidelity’ project, where he plans to use the computing power of the vast user base’s home computers to power that virtual world? He’s specifically planning for such low latency and such high resolution that the emotional engagement will be dramatically ‘real’. Bruce Thomson in New Zealand.

          • You can download the code from GitHub and see how Philip’s doing it. I think he’s using an architecture similar to OpenCobalt’s, which was designed around the same concept (the idea that end-users also provide CPU and resources towards the grid).

          • Ilan Tochner says:

            Hi Bruce,

            I can’t speak for Tranquillity, but I can tell you that we at Kitely are following the development of High Fidelity. Our technology isn’t tied to using OpenSim; if a better open solution becomes available we can switch to using it. The hosting component is also just one part of our solution; our service will continue to have value even if the hosting model stops being relevant.

          • Bruce Thomson says:

            Hi Ilan, very interesting! (Folks, the website enables anyone to create their own virtual worlds. A free account allows one virtual world, with options from $14 to $99 a month for serious use.)

  4. The only thing I can say is that developing a VR solution that allows “millions of users” is an order of complexity harder than a world-wide mobile network, and at least two (perhaps more) orders of magnitude harder than the world-wide Web. After all, the first web server — which could pretty much drive a whole website and serve millions of simultaneous users, given enough hardware and bandwidth — was written in a handful of lines (you can still write web servers in one line of code — granted, with some cheating).
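
    That “handful of lines” claim is easy to back up with a sketch: the Python standard library alone can serve a whole directory of static files in a few lines (the localhost address and port are placeholders, and this is a toy, not a production server):

```python
# A whole static-file web server in a handful of standard-library lines --
# the point being how little code the Web's core protocol demands.
from http.server import HTTPServer, SimpleHTTPRequestHandler

def make_server(port: int = 8000) -> HTTPServer:
    # Serves files from the current working directory on the given port.
    return HTTPServer(("127.0.0.1", port), SimpleHTTPRequestHandler)

if __name__ == "__main__":
    make_server().serve_forever()
```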

    Obviously full-fledged servers — Apache, IIS, nginx — have tens of thousands of lines of code, and are extremely complex, but that’s not my point. The point is that the Web is inherently simple, and that’s what allowed it to spread so easily in such a short time.

    The VW “equivalent” of the simplest webserver is a huge behemoth with dozens, nay, hundreds of thousands of lines of code, and it can barely show one avatar. A short calculation made on the source code of High Fidelity, which Inara Pey correctly classifies as “vapourware”, counts some 78 thousand lines of code (of course a lot of those are comments, but that’s irrelevant). OpenSim, if my math is not failing me, has 1.6 million lines of code.

    Obviously I’m aware that as time goes on, we’re able to create way more complex solutions in much less time than in the previous generation, but still, it’s worth pointing out that we’re not merely talking about “technicalities” here. Doing a whole VW from scratch and expecting it to expand to hundreds of millions of users in 5 years — assuming, of course, that there are hundreds of millions of users for that kind of product, no matter what the hype says — is a massive undertaking, and it’s not merely “throwing money” at the problem that will make it disappear. There is a limit to how many programmers in the world can be hired to work on this. There used to be an old rule: if you’re among the best programmers in the world, you want to work for Google or Microsoft. Second-tier companies attracting developers are Facebook, Apple, and, until relatively recently, Yahoo. Below that you get “regular” programmers — all of them experts in their fields, of course, and many of them brilliant — but the utter geniuses are all tied up at Google and Microsoft. I’m not really wishing to say that everybody else is a “low quality programmer” — rather the contrary. There are geniuses out there who would never work for Google or Microsoft as a matter of principle. What I mean is that pulling those master-class programmers out of the clutches of Google and Microsoft will be very hard to do, and it’s not simply a question of money.

    So, where does that leave the noble objective of getting global-scale VR? I’m afraid that the answer, for now, means crowdsourcing — either on projects like OpenSimulator, of course, or with mixed models like High Fidelity is doing (a strange mix of full-time programmers, part-time programmers, programmers who bid to complete small assignments but remain freelancers, and programmers just giving away their code for free) — and of course they’re not the only ones (WordPress, for instance, has a similar model as well). This is, I believe, the only way to get a critical mass of developers to do that kind of thing in merely five years. But that development model might be alien to Facebook’s culture… which, in turn, shows something even more interesting: if Facebook is unwilling to buy a company with a VR solution (no matter whose), and prefers to develop everything in-house, they will need to change their corporate culture, which might not be a bad thing, at the end of the day.

    Honestly, I think that what Tranquility and Ilan are saying is that it’s one thing to develop and run a “prototype” that can help spread the hype, but becoming something massive at a global scale involves problems orders of magnitude more complex than what we can possibly imagine. And while definitely companies like Google, Facebook, or even Amazon, eBay, and PayPal, are well-acquainted with deploying infrastructure that serves content to hundreds of millions of people simultaneously, they are working on top of the Web, which has one of the simplest communication protocols ever designed. YouTube and Skype are another story, and certainly they have much tougher challenges to deal with. But real-time, photorealistic VR tops all of that by several orders of magnitude.

    It’s not that it’s impossible; it’s just that it’s extremely hard to do…

    I hope to be proven wrong in five years, though 🙂

    • Ilan Tochner says:

      Hi Gwyneth,

      I can’t speak for how it is in other parts of the world, but I know that in Israel the most talented developers often found or join startups and don’t seek employment in big companies. They only end up in big companies after their startups get bought, and they often leave those big companies to start new startups a few years after the acquisition.

      Trying to create a centralized virtual world where millions of avatars co-exist in the same virtual space would be challenging and time-consuming. Creating a distributed solution where millions of small virtual worlds each contain only a few hundred avatars at most would actually be pretty straightforward using existing technology. It would require a lot of coding and time to implement, but it wouldn’t be a big architectural challenge to do so using existing scalability solutions. A hypergrid-like connection between these small worlds would allow for almost the same benefit that can be gained from a true multi-million-user solution. As stated previously, there are relatively few scenarios that actually benefit from more than a few hundred people being able to interact with one another at the same time. In most cases, only a few people are relevant to the activity you’re involved with and the rest are background noise you try to filter out when focusing on what it is you’re doing.

      That said, progressively increasing project coordination overhead limits the number of developers that can effectively work on the same codebase at the same time – resulting in a decreasing average number of debugged lines-per-day as the number of developers increases. This is one reason why startups with a few talented developers often create solutions faster than big companies that have a lot more resources. This is also the reason why big companies often buy small companies for their intellectual property.
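
      The coordination overhead he describes is often quantified with Brooks’s observation that pairwise communication channels grow quadratically with team size (a back-of-the-envelope illustration, not anyone’s actual staffing model):

```python
def comm_paths(team_size: int) -> int:
    # Pairwise communication channels in a team of n people: n * (n - 1) / 2.
    # Doubling the team far more than doubles the coordination load, which is
    # one reason debugged lines-per-day per developer drops as teams grow.
    return team_size * (team_size - 1) // 2

# comm_paths(5) -> 10 channels; comm_paths(10) -> 45 channels
```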

      BTW, OpenSim currently has about 413K lines of code and 104K lines of comments (estimated by Ohloh at 109 developer years). See: . Kitely’s proprietary code (written by us and not specific to the OpenSim codebase) is 60% that amount and was written by one person in 5 years. Not all the top developers work at Google 🙂

      • Well-made points, @IlanTochner:disqus!

        As for top programmers starting their own company, that naturally depends on their personal skills: not every top developer is simultaneously an entrepreneur with the required management skills to launch their own operation. While I can admit that this varies from region to region (and I definitely believe that Israeli programmers have much better business skills than programmers elsewhere!), the truth is, not every “top programmer” feels comfortable risking their job security to jump into the unknown waters of being self-employed in their own company…

        As for your analysis that a federated multiverse is far easier to create than a centralized one (especially one that starts from scratch!), I have no doubt that you’re absolutely right. The issue with the federated multiverse is not a technical one — technically, it’s a sound solution! — but a political/social one. When HyperGrid teleporting was introduced — after Linden Lab abandoned their own intergrid teleporting protocol — it sounded like the promise of this federated multiverse you’re talking about, but, as we all know, because of personal reasons and business issues, the largest OpenSim grids are absolutely against “federation” — for the same reason that Linden Lab doesn’t want to “federate” with any grid outside Second Life. I’m not sure what that means long-term, and I have no idea if Zuckerberg has analysed this issue properly, but I remember that Google had similar issues when they adopted XMPP for Google Talk (now mostly defunct, having been replaced by Hangouts) and proposed a federation model for instant messaging — which worked, for a while, but never became widespread. Will a Facebook-driven virtual world become federated? I personally doubt it, as it’s not really part of Facebook’s culture (although the way they allowed games to be “embedded” inside Facebook might point in a different direction… that somehow Facebook is willing to “open up” a bit more, so long as they retain the crucial control).

        High Fidelity’s model looks a bit better in terms of creating a federated virtual world, but one has to ask how exactly Philip plans to make money out of it. I suppose that being the core hub of financial transactions for his virtual world might be a way to get revenue. But it’s waaaay too early to speculate!

        As for the calculations of the lines of code, my own estimates just come from using wc on all OpenSim files ending in .cs 🙂 which, I admit, is probably not the best way to count them 🙂
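
        For reference, that `wc`-style tally can be reproduced with a short script; the root directory is a placeholder, and unlike `wc -l` this version skips blank lines (comment lines are still counted, as in the estimates above):

```python
from pathlib import Path

def count_loc(root: str, suffix: str = ".cs") -> int:
    # Total non-blank lines across every file under `root` ending in
    # `suffix` -- a rough lines-of-code tally that, like a plain wc -l,
    # still includes comment lines.
    total = 0
    for path in Path(root).rglob(f"*{suffix}"):
        with open(path, encoding="utf-8", errors="ignore") as f:
            total += sum(1 for line in f if line.strip())
    return total
```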

        • Minethere says:

          I am not sure why you tout the closed-grid commercial concepts of OpenSim as examples; when you exclude SL, the majority of people are not in those, but in private grids [such as with educators] or in the connected hypergated Meta… and even those that are closed sometimes open to HG for various reasons, and some are open at first [to get content, for example] and then close.

          This is the point Maria often makes about the mostly uncounted masses: from the known and increasing numbers, and from anecdotal evidence, we can easily see that hypergating systems are the majority.

          Of course, by their very nature, they are private and have neither the SL-clone type of commercialism/connection to poach from nor the money to promote themselves.

          Heck, I was just having a conversation with someone who bought something for their own private HG-enabled grid from the Kitely MP, who is completely unknown to the stats [and who also, btw, had some interesting comments on another matter about a certain closed grid’s goofiness… but, I digress].

          • elmoono Dana says:

            How do you get on the stats? There must be loads of Diva standalones like mine; I doubt many are on the stats. I don’t really want mine on the stats until I get fiber later this year, as my present upload speed is useless and I’m still just messing with terrains etc. However, for me on my home network, my 4-sim world is instant and I get to HG wherever I want :)
            So for others reading this: is it necessary to submit to a list somewhere? Will having the hypergrid TP from Hyperica on your land auto-add you to the stats (mine hasn’t)? Or are people meant to join by using the co-ords method of OSgrid via its map? I have no idea! I was so happy that the standalone works so well and I can HG that I never considered getting listed!
            I’m also now wondering how to handle my av when Kitely enables HG. I was going to use the me on Kitely, but now it’s tempting to use the me on my standalone – especially now I can buy content for it (it could come down to whether I want to make sims for the public like I used to in SL or whether I am happy just doing my own thing with friends). Decisions, decisions :)

          • Minethere says:

            there are 2 ways I know of to help get identified, in some regards…

            and here, of course:
          • elmoono Dana says:

            Many thanks, Minethere :)
            It would be cool if the hypergate 7.3 could automate a list –

          • Minethere says:

            7.3? We are on 0.7.6 I think, or perhaps I misunderstand.

            Automating would kinda defeat one of the aspects of free opensim, that of the freedom to also be private, if one chooses. I would not want to see that change.

          • elmoono Dana says:

            I was referring to the gate version 0.7.3, not OpenSim! And yes, you’re correct – privacy. I have read the links and now understand.

        • elmoono Dana says:

          I was wondering the same thing: how to make money from a technology that will use the users’ equipment and bandwidth to make the ‘world’ tick. I would feel a bit annoyed paying SL prices for land etc. when it’s my own equipment, along with everyone else’s, driving Fidelity.
          Being a sucker for anything new tech, I have signed up to alpha test and got a questionnaire yesterday. I note on it they are concerned with the power of the home computer (notably RAM), whether a PC will be left on and connected when you’re out, and how much you would be willing to pay for the hardware (headsets etc., I guess) that goes with the technology. So there’s room to make money on the hardware, then the potential for a joining fee, then the SL-type land/tier fees – perhaps with a decent reduction if you allow your equipment to be used to help drive the infrastructure.
          What they probably should have asked is what your ADSL bandwidth is – easily forgotten about when sitting somewhere all fibered up, I suspect. I get fiber late 2014; at present my ADSL upload is only 320kbps – making my HG standalone great for me and very slow rezzing/useless for visitors! I wonder what amount of bandwidth Fidelity will want to consume from participating equipment.

        • Ilan Tochner says:

          I completely agree that the ideal engineer skillset and the ideal entrepreneur one don’t necessarily match, but some cultures, again the Israeli high-tech one for example, have a very flat hierarchy and are “be your own boss” oriented, so top talent often seeks to either start or join small organizations where they can have a bigger impact on the development of the organization than they could as cogs in a big company. Of course this is generalizing, and older people, especially ones with children, do tend to seek job security more than they did when they were younger. But if they are talented, by that time they have usually progressed far enough up the management track that they no longer spend most of their time doing actual code development.

          A distributed-architecture solution doesn’t have to be a federated one. A big company could self-host all the instances and still use a distributed architecture. They could even allow others to self-host closed-source instances as long as they hard-code them to get various services from the company. That wouldn’t be my preferred outcome as a user (I’d like to see an open-sourced solution win) but companies such as Facebook, Google, Microsoft, Apple, Sony, etc. could easily opt for such a strategy and have live services out in 1-2 years.

          They’d then use something akin to Kitely’s technology to host people’s worlds for free, and monetize virtual goods sales and advanced value-add services. This wouldn’t be different from how most MMORPGs work, except that instead of trying to create a big world with shards, they’d have millions of small worlds that are created, used, and discarded on demand. You could use existing web technologies for the identity, messaging, and community aspects, and the limited world sizes would save you from having to deal with the challenges of scaling up world concurrency.
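
          A minimal sketch of that create-use-discard lifecycle (class and field names are illustrative, not anyone’s actual design):

```python
class WorldPool:
    """Sketch of on-demand world instances: a world is spun up the first
    time someone enters it and discarded once it empties, so server
    capacity tracks *active* worlds rather than total worlds."""

    MAX_AVATARS = 100  # illustrative per-world concurrency cap

    def __init__(self):
        self.active = {}  # world_id -> set of avatar ids

    def enter(self, world_id: str, avatar: str) -> bool:
        users = self.active.setdefault(world_id, set())
        if len(users) >= self.MAX_AVATARS:
            return False  # world full; a caller could offer an overflow copy
        users.add(avatar)
        return True

    def leave(self, world_id: str, avatar: str) -> None:
        users = self.active.get(world_id)
        if users is not None:
            users.discard(avatar)
            if not users:        # last one out: discard the instance
                del self.active[world_id]
```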

          BTW (on a personal note), please see my reply to your question about the Kitely Market HG delivery system on our forums.

          • I have to say that I fully agree on that proposed business model… anything else (i.e. a subscription-based service like most MMORPGs have, or forcing people to pay high fees for displaying their content, like LL/SL does) seems outdated to me and unable to capture the attention of millions of users.

            1-2 years… really? I’d be a bit skeptical about such a short timeframe, unless, of course, they’re not starting from scratch.

          • Ilan Tochner says:

            If they have any sense, they’d buy several specific companies and allocate a lot of R&D resources to complete the remaining integration projects. If they do that, then they can be out with a scalable VR service within 2 years. If they half-ass it or try to build everything in-house, then it will take them much longer.

            This is a race for dominance of the next big digital era. The first-generation good-enough service will be far from perfect, but it can be enough to give the company that rolls it out control of the market. Companies that wait and see will be left behind. Even with a lot of resources, some things just take time to build, and even half a year can mean the difference between market leader and a distant second place.

          • I hope so, @IlanTochner:disqus… even though it’s also important to “do it right”. In terms of social media, AdultFriendFinder still hasn’t disappeared, and neither has MySpace, but they certainly aren’t “dominating” technologies. Hi5 merged with Tagged in order to survive. Friendster became a gamers’ network. Google still keeps Orkut around (mainly because of the Brazilian users) but every time I log back in to it, it looks more and more like Google+ (and is constantly referring to Google+ for finding friends, interesting ideas or people to follow, and so forth). So Facebook, although it was a latecomer and had to start from scratch, did “everything right” and carved out its own market. Twitter, albeit a different product — and perhaps because it is, indeed, different — managed to survive, maybe exactly “because it’s not Facebook”. Google+, by contrast, struggles to become an alternative. Perhaps the same can be said about Android and iOS, even though the visions behind each are completely opposite…

            There are obviously many cases where “being the first” gives a huge advantage, but it’s an advantage that has to be well-managed to become a success!

          • Ilan Tochner says:

            Unless the market leader fumbles the ball, time-to-market in a consumer service usually provides a very clear advantage to the first company that provides a good enough solution.

            The creation of the first social-network-powered ecosystem for third parties (Facebook Apps) and big product-level mistakes by the then market leaders are IMO what brought Facebook to the lead. Its interface was clean enough to allow easy access and extensibility (compared to MySpace at the time), and the app economy succeeded in giving third-party developers a financial incentive to promote the system and extend it with additional features. By the time other social networks started offering similar added value for developers, Facebook had already become the leader, and the network effect made it hard for other social networks to compete for developer mindshare.

      • Minethere says:

        So it seems to my untechy eyes that the hypergrid aspect of opensim is quite similar in conceptual design to what HI-Fi is doing… in that it is distributing the load, similar, as some seem to not know about [or forgot] that does.

    • Minethere says:

      well, I dunno, but Yahoo was once a major player, then went down in popularity some notches, and is apparently rising again, to wit:

      Reminds me of Jack in the Box, sorta…

      • Hehe, discussing Yahoo is one of my favourite topics, but totally off-topic in this thread! Aye, Yahoo is that kind of company that nobody understands how they can survive — they’re not “the best” in any of the areas they’re in, and they’re in so many! — but they continue to exist, they continue to acquire new tech and new companies and enrich their portfolio, and their shares are still worth quite a lot. It’s a mystery. When Yahoo’s Flickr announced that they’d give everybody a terabyte of disk space to store their pictures for free (because they were losing the battle against Picasa/Google+, Facebook, Instagram…), my first thought was: “oh no! and now where will their income come from? No more Flickr Pro users!” The truth is that neither Flickr nor Yahoo disappeared.

        Now, if Yahoo could only upgrade the Mac version of their Messenger… it’s been stuck on “3.0 Beta” for years now 🙂 Maybe their next release will be a virtual world using Cloud Party technology…

        • Minethere says:

          Those commercials used to come on so often all I could think about was …yahooooooooooo….lol

          I can’t really speak to whatever they are doing nowadays though, I just wanted to post the commercial-)) and yea, off-topic, but, that’s ok, too.

    • Tranquillity (InWorldz) says:

      “And while definitely companies like Google, Facebook, or even Amazon, eBay, and PayPal, are well-acquainted with deploying infrastructure that serves content to hundreds of millions of people simultaneously, they are working on top of the Web, which has one of the simplest communication protocols ever designed. ”

      Their infrastructure also demonstrates the place where federation fails. They have attracted a far above average number of visitors and as a result the loads on their part of the web are many orders of magnitude higher than that of someone’s personal blog.

      This was the point of my original post. The conjecture was that a company like Facebook may want to set up a grid, and that possibly OpenSim would be a good product for them to latch onto. A company like Facebook creating the “Facebook grid” would run into all the scenarios where federation fails, since they would be running the largest grid on the planet. Even if they wanted to use HG as a load-balancing technique, they would somehow have to ensure that each person that logged in is evenly balanced between a set of mini Facebook grids. This kind of manual sharding doesn’t scale easily, creates hotspots, and is prone to failures. These are the reasons modern distributed systems were developed over solutions like MySQL sharding, and why distributed systems that use consistent hashing to achieve load balancing and failover are beginning to replace the classical manual models.
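
      The consistent-hashing approach contrasted here with manual sharding can be sketched in a few lines; the node names, replica count, and hash choice are illustrative only:

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hash ring: a key maps to the nearest node
    clockwise, so adding or removing a node only remaps the keys in that
    node's arc -- unlike manual/modulo sharding, where most keys move
    and hot spots are easy to create."""

    def __init__(self, nodes, replicas=100):
        entries = []
        for node in nodes:
            for i in range(replicas):  # virtual nodes smooth the load
                entries.append((self._hash(f"{node}:{i}"), node))
        entries.sort()
        self._hashes = [h for h, _ in entries]
        self._nodes = [n for _, n in entries]

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key: str) -> str:
        idx = bisect.bisect(self._hashes, self._hash(key)) % len(self._nodes)
        return self._nodes[idx]
```

For example, each avatar’s presence record would land on a predictable server, and removing one server reshuffles only that server’s share of the keys.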

      In summary, for large deployments like a facebook grid it would be more cost effective for them to just rewrite the grid backend components to scale using tech they’re already accustomed to and have deployed at scale.

      For hosting at web scale beyond a single organization, yes, federation is vitally important, and must be carefully designed to scale out. The concept behind HG seems like a good first step and highly appropriate for this use. I’d love to see more information about how routing and presence are handled, though… without spelunking through the code.

      Have a great week!

      • Ilan Tochner says:

        You’re trying to tackle a hard problem to solve instead of solving the much easier one that works great for most use cases. Pareto (80-20) the problem and you’ll see that there is no need for a big company to create a big grid that can support high concurrencies. Instead, they only need to be able to affordably run millions of standalones and have a user-friendly way for people to get the world they want, when they want it, for interactions with a quite low number of other users (hundreds of users, tops). Not that there aren’t high data storage requirements but big companies already have solutions for those, and the disjoint nature of the worlds makes it easy to update the data without encountering a lot of state-contention issues. There is no need to load balance people when people get their own virtual environments to share with specific people – the load balancing is inherent in the problem definition.

        Using that approach they can have a scalable system up and running within a couple of years and tackle the few high-concurrency scenarios that remain after they’ve already established themselves as market leaders. For big companies this is a race for market domination; the companies that try to get all the scenarios covered before they go to market will find themselves at a big disadvantage. Smart companies will start buying their way to market lead and develop their ideal solutions while they already have a market presence.

        • Tranquillity (InWorldz) says:

          “You’re trying to tackle a hard problem to solve instead of solving the much easier one”

          And you’re ignoring the fact that there is much more to load balancing than putting people on the actual simulators themselves. Even if you only have 100 people max on a region, you still have to handle the messaging, inventory, presence, and other massive amounts of data being generated and consumed by those users. You honestly think that having millions of completely partitioned standalones with no shared state is the way to go? What if they want to take inventory with them to someone else’s simulator? Does it now have to start their simulator to serve up the inventory? How do you handle fault tolerance? If everyone’s instance is its own isolated “grid” using HG sharding, that means all of the asset, inventory, and world data is contained on that instance. Is this cost-effective to do with any kind of redundancy?

          “Not that there aren’t high data storage requirements but big companies already have solutions for those”

          Yes I believe I covered this already when I said “using tech they’re already accustomed to and have deployed at scale”

          “There is no need to load balance people when people get their own virtual environments to share with specific people – the load balancing is inherent in the problem definition.”

          Really? In a shared-nothing isolated environment this might be true, but I’d hardly consider a virtual world a shared-nothing environment. Your definition of a virtual world is pretty limited if you can only IM, pass objects to, and generally communicate with people in the same instance. Once you go outside the instance scope with a Facebook level of users, you have actual engineering problems to solve.

          “Smart companies will start buying their way to market lead and develop their ideal solutions while they already have a market presence.”

          Thank you for the lecture.

          • Ilan Tochner says:

            It all depends on what you call shared state. If a user in one world only affects users in other worlds via messaging (IM, voice, video), users can only be in one world at a time, and worlds don’t share state for the objects they contain then you could have millions of very complex, frequently changing virtual world simulations running without them creating state contention. User profiles, assets and inventories (file storage), messaging, presence, etc. are already handled by the big data solutions big companies have in place for those things. The only additional components needed are in the virtual world simulators themselves and their state can be persisted without it affecting the state of other worlds.
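
            A toy sketch of that split: isolated world simulators on one side, a single shared presence/messaging layer on the other (all names are hypothetical, and real systems would back these dicts with the big-data stores mentioned above):

```python
class PresenceService:
    """Sketch of the proposed split: world simulators share nothing,
    while a central (web-style) presence/messaging layer routes IMs
    between avatars regardless of which isolated world each is in."""

    def __init__(self):
        self.location = {}  # avatar -> world_id: the only cross-world state
        self.inbox = {}     # avatar -> list of (sender, text) messages

    def login(self, avatar: str, world_id: str) -> None:
        self.location[avatar] = world_id
        self.inbox.setdefault(avatar, [])

    def send_im(self, sender: str, recipient: str, text: str) -> bool:
        # One lookup against shared presence; no world-to-world coupling.
        if recipient not in self.location:
            return False
        self.inbox[recipient].append((sender, text))
        return True
```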

            World sims can crash like phone connections can be dropped. You want to improve it as much as possible and need management code to be able to track it and autorecover from problems but it doesn’t have to be perfect to be viable for market adoption. The number of users and worlds will affect the number of servers you’ll need to have on hand but since the entities in your system (objects, avatars, worlds) are not sharing state between worlds, you only need to be able to handle the complexity of handling one simulator many times instead of the interconnections between a large number of simulators.

            Of course various optimizations could help performance and reducing various load centers in such a system but that is beyond what I’m willing to share about this solution.

          • Tranquillity (InWorldz) says:

            We’re basically in agreement here, you’re just writing off or omitting the parts of OpenSim and SL-style simulators that don’t scale well with federation and saying they’ll just use big data to solve that, and I’m trying to explain those parts as if a non-developer asked about them.

            But that really was my point in this discussion. Out of the box there aren’t solutions for some of the big problems, and HG doesn’t solve them all. Hence a company like Facebook would be smart to use some of their own engineering to solve them, or talk to other companies that have done scaling on these specific areas. Both of our companies have experience here.

            I’m just trying to be realistic about this so that when someone looks at these threads they understand the work that would have to go into a scaled solution beyond just naming a silver bullet or two. I concentrate on this part of the picture because it’s what I do, so of course I have a predilection to talk about it.

            Have a good day!

          • The Oculus Rift isn’t going to come out for a year or two… So, say Facebook wants to have a virtual world that is able to support millions of active simultaneous users on day one. OpenSim is probably as good a starting point as any for development. Most of the big video game engines require that the world be static, and users download and pre-install most of the virtual world assets — and even then they use sharding to actually support all their users.

            With OpenSim, Facebook can simply create an on-demand mini-grid for every user who requests one, and keep it tucked away in storage, Kitely on-demand style, until it’s accessed. All user assets are stored with those mini-grids, so the system can federate indefinitely.
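            Roughly, such an on-demand scheme could look like the following toy manager — a sketch under the assumption that a parked grid’s state persists in storage, not RAM; all names are made up, not Kitely’s actual code:

```python
import time


class MiniGridManager:
    """Spin up a user's mini-grid on request; park it again when idle."""

    def __init__(self, idle_timeout=300):
        self.running = {}              # owner -> last-access timestamp
        self.idle_timeout = idle_timeout

    def request(self, owner):
        """Start (or touch) the owner's mini-grid and report its status."""
        started = owner not in self.running
        self.running[owner] = time.monotonic()
        return "started" if started else "already running"

    def reap_idle(self):
        """Park grids nobody has touched recently, freeing the servers."""
        now = time.monotonic()
        idle = [o for o, t in self.running.items()
                if now - t > self.idle_timeout]
        for o in idle:
            del self.running[o]        # state persists in storage, not RAM
        return idle
```

            The attraction of this model is that server cost tracks concurrent visitors, not total registered users.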

            Companies can pay extra for larger grids that support a bigger number of simultaneous visitors, say, with Intel’s DSG. That gets them to … around 1,000 users in the same space at the same time? More than that, say for large events, you group a bunch of these together, and limit interactivity between sections. Like, say, for a stadium-style concert, you’d go in through your own section and just stay there. You can see people immediately around you, and the stage, and can send texts to your friends, but you can’t interact directly with anyone not sitting immediately near you, which is the way it works in the real world, too (mostly).
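            The sectioned-concert idea could be sketched like this (the capacity number is illustrative, not a real DSG limit):

```python
SECTION_CAPACITY = 100   # illustrative: one sim's worth of attendees


class Event:
    """Shard attendees into fixed-capacity sections; direct interaction
    stays inside a section, while the stage stream goes to everyone."""

    def __init__(self):
        self.sections = [[]]

    def admit(self, user):
        if len(self.sections[-1]) >= SECTION_CAPACITY:
            self.sections.append([])       # open a new section (new sim)
        self.sections[-1].append(user)
        return len(self.sections) - 1      # section the user stays in

    def can_interact(self, a_section, b_section):
        # Texting friends in other sections would go through the
        # grid-wide messaging layer, not through the simulators.
        return a_section == b_section
```

            Each section is an independent sim, so total event capacity grows linearly with the number of sections rather than hitting one simulator’s ceiling.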

            I doubt that a million people are going to want to gather for a virtual concert the day the Oculus Rift is released, so there will probably be time to build the infrastructure out, first. And, sure, it might take a few years.

            During the early days of the web, websites kept crashing all the time. Even today, popular games and apps are often released and have problems as everyone tries to connect at once. It’s embarrassing, but it happens, and it doesn’t mean that the whole platform is doomed.

          • Minethere says:

            Kinda like the OpenSim Conference where they separated the regions, right?

            A friend and I used to do events and put the stream in side-by-side regions, allowing people to choose either side depending on how the load affected them. It worked rather well at the time.

          • Tranquillity (InWorldz) says:

            That’s a bit different. You still had the assets and inventory supplied by back-end grid services first before being cached on the region, which is the focus of my discussion here. The four regions, for example, didn’t each hold a part of your inventory.

            Dividing the users between sims was done to provide the balancing for the actual simulation (moving objects, scripts, avatars, etc.) more so than to provide a balance on the shared state stuff like your inventory and the assets it contains.

          • Minethere says:

            oh, well, I was just referring to Maria saying ” for a stadium-style concert, you’d go in through your own section and just stay there. You can see people immediately around you, and the stage, and can send texts to your friends, but you can’t interact directly with anyone not sitting immediately near you.”, but also many people hypergated into the OpenSim Conference, at least during the load tests, and they had a partition between, I think, even different simulator instances, but maybe just one simulator, I forget, to minimize easy traveling between them. [tho I just cammed over and double click tp’ed, because that’s the way I roll] [breaking rules and conventions][not really, I just like saying that].

            But the main assets, as in my own case, are held by Metropolis…I came in with all my stuff. Though I suppose when we HG the receiving simulator caches some things. [that’s beyond my pay grade, and thankfully so].

            We did the 2 region events because one of them was heavily loaded with prims and such and the other was more decorative/low primmage/mostly terrain…and since you responded I will say it was on mainland regions in your grid that Macaria rented at that time, that I was referring to in that regard.

            Of course the rubberbanding had some effect so when people moved to one or the other they tended to stay in that one…this was, I dunno, 3 or 4 years ago tho, so no need to say that is fixed now….tho it is in OpenSim, for the most part, with V76.

            [2 edits, sorry]

          • Tranquillity (InWorldz) says:

            Yes. The load on the standard OpenSim region is spatially partitioned in X * Y meter squares. So one person standing on region 1 and another standing on region 2 are being simulated (e.g. your avatar movement, attachment scripts, objects) by those regions respectively. A script going nuts in region 1 (as long as it’s not sending out updates) won’t affect the performance of the person in region 2, assuming they’re on separate servers/VMs.

            There are more ways to spread a scene’s load around, and region partitioning is not ideal for all situations. OpenSim inherited the “region” partition from Second Life, and is now expanding on it with tech like DSG. I have a bunch of my own ideas in this area as well since it interests me greatly.
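            The region partitioning just described might be sketched as follows — a toy illustration, not OpenSim’s actual placement code:

```python
REGION_SIZE = 256   # meters per side, as in SL/OpenSim regions


def region_for(x, y):
    """Which region square a world coordinate falls in."""
    return (int(x) // REGION_SIZE, int(y) // REGION_SIZE)


def server_for(region, num_servers):
    """Naive placement: hash the region coordinate onto a server pool.
    A real grid would use an assignment table, not a bare hash."""
    return hash(region) % num_servers


# An avatar at (300, 10) and one at (100, 10) land in different regions,
# so a runaway script near the first need not slow down the second,
# provided the two regions run on separate servers/VMs.
```

            The weakness, as the stadium discussion above shows, is that this only balances load when people spread out spatially; a crowd packed into one square still lands on one server.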

            At least we’ll never be bored.

          • Minethere says:

            Well, varregions are being heavily tested now [Metropolis just added it to the test grid yesterday also]..heck, you could even HG into it to check it out [yea, I suspect you have an avie or 2 or so in HG grids or your own…but if not, cool anyways].

            It is due to be default with bullet in V8 so I hear.

            And of course Kitely has their own proprietary, and very well done megaregions going…tho I guess they will have to eventually go to varregions, or, maybe not.

            But I do that with my rented regions from Zetamex when I do multi regions builds and terraforming, spread out the load…seems simple logic to my mind [which is often not very logical, but then, I don’t care].

            So load issues of such nature will be going away in most regards and we can just run other simulators easily and set them next to each other to spread any load issues, and either do our own grids or find a place in Metropolis…naturally…out in the boondocks.

            ‘Course, I really have no clue what I am talking about, so there is that, also.

          • My issue is just that OpenSim is a technology that can address “hundreds” of avatars, DSG can handle “thousands”, but… super-events on the Facebook Metaverse will need to handle “tens of millions”. That’s so many orders of magnitude above current tech that I have no clue what’s necessary for that.

            And, again, it’s not about storing assets and shuffling them around, or dealing with communication — Facebook knows how to handle that. It’s really simulating avatars moving around with all their attachments (and having a dynamic background as well, not just a picture posted on a prim 🙂 ).

            That’s the area where I personally am a bit skeptical that it could be done in 1-2 years…

            Still… Skype and Hangouts deal with, at most, 10-25 simultaneous calls in a single conference, and that hasn’t stopped people from using them; SL and OpenSim can do way more than that with voice chat!

          • Tranquillity (InWorldz) says:

            This is more along the line of the type of virtual experiences I am most interested in. Different techniques of balancing user and object load, partitioning objects for fast retrieval from storage. I have no doubt the low latency storage and retrieval technologies now exist to do things that could never be done before. All it will take is smart people to put the pieces together.

            The biggest thing I got from SL was that the illusion of a truly boundless and endless world within our real world was very compelling, and the use cases were nearly infinite. That feeling of awe has stuck with me to this day and is with me behind every UML diagram and boring state chart I churn out.

            Hang in there. There are a lot of dreamers in here as well as some very smart people that want to push this envelope to the edge.

          • /me is crossing her fingers. You guys rock 🙂

          • Ener Hax says:

            “boundless and endless world within our real world was very compelling” well put! that was me in 2006 in SL!

          • Minethere says:

            Yea, I had read something or other on the DSG thing but as it does not especially interest me, I never drilled down into it enough to be able to say much on that.

            Others, such as yourself, find that interesting, I kinda like doing my own thing nowadays and finding people when the mood strikes me. Others are more social creatures.

            I do notice, and read about, some aspects that interest me, and find solutions that work for me, if it interests me enough. Mostly I just accept the status quo as pure tech is not interesting to me.

            For example, I use Zetamex for hosting my simulators in Metropolis. I would run my own but either my router is not the right kind, or the tech frustrates me too much to figure it out…the cost is reasonable and affordable for what I get too. And the support has been very good.

            Others like being “taken care of”, either because that is all they know of OS or it just “makes things very easy for them”…if they have some issue they can bring it to the sysops’ attention and usually get it resolved. But this, also, depends on the ability of those in charge to resolve issues properly, and in that, there is a definite problem: other than the Kitely owners [in the commercial closed grid concepts] and SL, of course, owners of those grids simply have no real life business experience. It is all virtual business, which leaves them at a decided disadvantage when it comes to collaborating with pure business people, who actually do have real experience.

            The tech people can talk all day until they are blue in the face, about tech, but without real business acumen, they will eventually find a point where they can go no further without hiring those who do, or finding something else to do.

            So, if anything happens in all this, it will be the techs who work for the business people, and those business people will use those tech types to help them make a purely business decision.

            Then they will use properly educated and experienced Marketing divisions to push it out to the world, another aspect of OS that is very much lacking, due to several reasons but mostly that the money is simply not available.

            So, this bodes well for interesting times coming, for sure.

          • But I would agree with Ilan that “backend stuff” — inventory, assets, communication — is what Facebook already excels in. It wouldn’t be an issue for them. They already do it for a billion users. “Actual simulation”, no, that’s a completely different issue 🙂

          • Tranquillity (InWorldz) says:

            Right. That’s pretty much all I’ve been trying to say: a company like FB would be smart to use some of the tech they already have to support the heavy lifting 🙂

          • Tranquillity (InWorldz) says:

            “All user assets are stored in with those mini-grids, so the system can federate indefinitely.”

            That’s fine if they never have to bring them anywhere other than their own mini-grid, and if their guests never have to bring any of their assets with them. Otherwise that means that N people visiting M sims requires NxM mini-grids to be spun up to supply both the users’ assets and the destination assets. These mini-grids themselves would also need redundancy so that a failure wouldn’t lose all of person X’s things. I don’t think that would be the best, nor most cost-effective, way to handle the problem, and again my original point was that Facebook would have tech to make it work on their own given their scale.

            “you group a bunch of these together, and limit interactivity between sections.”

            I agree we can solve problems by avoiding them until the functionality is requested. I guess I just hope to see a future where more is possible.

            “During the early days of the web, websites kept crashing all the time. Even today, popular games and apps are often released and have problems as everyone tries to connect at once. It’s embarrassing, but it happens, and it doesn’t mean that the whole platform is doomed.”

            This part I agree with to a point, but we’ve also seen tech get one shot to get it right and then written off the instant they have major technical failures for too long of a period. I wouldn’t want to see that happen if VR gets a new second chance.

            Anyways, I think this has gotten to the point where it is clear the technical features of all of this merit their own discussion area.

          • If N people are visiting M other people, then you’d need to spin up N+M mini-grids, not M*N. And you’re right about redundancy — there should definitely be backups of all of these things, so you’d have 2(N+M) mini-grids active. Which is already doable with today’s technology.
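            As a sanity check on that arithmetic (a toy function, not a real capacity planner):

```python
def grids_needed(n_visitors, m_hosts, redundancy=2):
    """Mini-grids that must be online if assets live with each user's
    own grid: every visitor's grid plus every host's grid, times the
    redundancy factor -- not one grid per (visitor, host) pair."""
    active = n_visitors + m_hosts
    return active * redundancy


# grids_needed(1000, 50) -> 2100 active mini-grids,
# versus 1000 * 50 * 2 = 100,000 under the NxM reading.
```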

            But otherwise, I agree with you.

          • @mariakorolov:disqus, I obviously cannot disagree with you. OpenSim is definitely a solution, even if just a temporary one, but good enough to “fill in” until something better comes along (like, say, a C++ version of OpenSim 😉 ). And aye, I remember when even Facebook stopped responding — or Twitter, which suffered a lot from the “dead birdie” syndrome.

            Not being an MMORPG player myself, I would nevertheless expect “flawless performance” on Day One, and a show-off of the technology would need to include a concert where millions could attend (assuming that millions of Oculus Rifts had been sold by that date). That is the “hard” problem to solve.

            On the other hand, Facebook would be stupid to give ultra-high-end performance away for free to everybody. They might be happy to go the Kitely route and give everybody a node in their Metaverse, but people would quickly figure out that the prices to host a multi-million-avatar event would skyrocket (whatever the business model might be).

            Intel DSG can definitely support thousands of avatars. Would it support millions?

          • Ilan Tochner says:

            If people are attending an event that would cost serious money in real life then they (eventually) shouldn’t have a problem paying something for it in a convincing virtual reality setting as well. That’s why I mentioned both virtual goods and value-added services as the business model. 🙂

            You don’t have to have a simulator per person, just per environment (which is likely to contain multiple people), so the number of sims you’d likely need at peak times will still be much lower than the number of users you have.

            The big events are a problem that doesn’t need to be solved on day one. They could conquer the market by addressing the small meetings that can be handled by a single sim, and gain the ability to support the bigger events in the future (when their own R&D team finishes building it or they acquire some startup that solves that problem).

          • Ener Hax says:

            big events in virtual worlds like OpenSim? i don’t see it happening after having been in all the hype in 2006 with SL (and i work for a 4 billion euro Spanish tech company that pretty much owns travel and hospitality CRM platforms, with a Munich data centre that does 1/3 the daily transactions that Google does)

            still way too clunky and way too many man-hours involved for this to be mainstream, even with the promise of huge dollars saved for events

            my 2 cents . . .

          • Ilan Tochner says:

            I think we’ll need to see much more user friendly viewers before we’ll see VR widely used even for small group meetings. In any case, good VR requires the removal of HUDs and other UI elements from the user’s view and an intuitive way to move around and manipulate objects. That requires different input devices as well. The solution that will work will have very little in common with SL-derived viewers being manipulated with a mouse and keyboard. We’re starting to see some possible human interface devices but, to be honest, none are more than barely acceptable at this time. We’ll need to wait a couple of years (not 5 to 10 IMO) until we see commercial availability of a viable basic solution that can be used by the mass market (VR helmet, manipulation devices, first “killer app” for the platform).

          • Ilan Tochner says:

            I’m assuming competency on the part of developers in big companies and assuming they’d prefer using their own in-house big data solutions over adopting third-party systems. The data persistence solution was obvious to both our companies, I’m sure it will be obvious to them as well. Big data is a known field, it isn’t trivial and there are a lot of ways things can be improved, but that requires a standard engineering effort that is similar to what those companies have already done. Ergo, me yada-yadaing what’s required here.

            Domain-specific optimizations do however require systems that are probably not currently used in their existing web architectures. Knowing what is needed, the edge cases, and how to best address them is part of the challenge. The engineering work requires diligence but can be done by many. The know-how of what needs to be done to minimize time to market and avoid pitfalls that will waste a lot of resources is something worth buying. It’s not that big companies can’t build this; it’s that they will lose time to market if they don’t buy expertise and working code to gain the edge over other big companies.

            In any case, my focus in this discussion isn’t OpenSim specific. With Facebook’s resources and what we know now, I’d buy several specific companies and throw developers at some of the remaining integration tasks. The result would be better than SL and its OpenSim kin. I would not, however, start building everything in-house. If they do that then some other big company will beat them to the punch.

          • Minethere says:

            blame Ilan, I had nothing to do with this…


          • Hmm. I guess I start to see what you’re aiming at. The vast majority of Facebook users have merely a handful of followers, most of which won’t be online at the same time. On the other hand, there are a few individuals — say, celebrities — who have millions. But they would be edge cases and handled separately.

            In terms of drawing a parallel to virtual worlds, what this would mean is that the vast majority of users would just require cheap, underpowered instances (call them “nodes” to avoid using SL/OS terminology — they would be “sims” in our words), which don’t even need to be always on, but only very rarely and occasionally.

            By contrast, a huge shop serving millions of users would need an always-on solution which can deal with all those people — as would, say, a concert by Shakira or Beyoncé or Rihanna… But these would be edge cases. Facebook’s focus, thus, would be to deploy a vast array of low-powered nodes for the vast majority of users, and just worry about how to deliver super-performance on those few edge cases. “Few”, of course, might mean hundreds of thousands, but most definitely not hundreds of millions, and that’s where the trick is.

            As for the backend, aye, I hear you, Facebook knows perfectly well how to deal with that… that’s where they excel already, and handling communications, profiles, inventory, etc. is perfectly well handled by their existing backends — so long as it can be stored and transmitted in a web-ish way. And that’s obviously what they’re going to do. Even SL/OpenSim are learning the lesson and doing exactly that…

          •' Ilan Tochner says:

            Exactly, they can capture the market without having to start dealing with the difficult problems until much later or even ever – Skype leads video chat even though it doesn’t support big meetings.

            As they won’t be the only company that understands this, other big companies will try buying various companies for the missing components to get such a solution up and running as quickly as possible (or prevent other companies from buying their way into the race).

            There is no point in trying to overachieve here. The solution just needs to be good enough for most use cases (and those rarely have more than a few dozen people all interacting with each other in the same space).

          • Google is especially good at launching those “beta” concepts which look so cool but might have some issues at the very moment of launch. People are used to things that “look cool even if they only barely work” because they know that, later on, Google will improve them.

            I’m sure that Facebook can pull off the same thing. And, in that regard, we might not need an interactive “FaceWorld” with “millions of avatars” on Day One. If they get “thousands of avatars” listening in on a live concert, that would be awesome enough.

          • Ener Hax says:

            how does net neutrality impact all of this? the US Gov is starting to shift away from supporting net neutrality, and with lobbyists from companies like AT&T, i think (imo) that it’s inevitable that “fast lanes” get set up and the rest of us will be on the public leftovers with congestion

            maybe i’m pessimistic (yes, i am – i’m from Quebec) but this doesn’t seem like a radical opinion in “corporations are people my friend” USA

  5. ThyGeekGoddess says:

    We still need more cowbell out in the PAC area, but there is such great potential for this platform. Some have a grasp on “it’s complicated” and are excited to learn.
    Others, not so much….in a really BIG way.

  6. Ener Hax says:

    i’ll believe that things like VR and mainstream VW are coming when i see a significant shift on the web away from text. for example, on this blog (or mine when i was writing daily), how many of these articles are posted as videos done from topics discussed inworld?

    how about any other place? daily stories done as vw videos? it’s still novel and highly niche . . .

    • Ilan Tochner says:

      Hi Ener πŸ™‚

      Skype and YouTube usage is very high without the web changing from being mostly text based. Not all activities are text-based; some require a different medium. VR offers clear benefits (above alternative options) for education, training, entertainment, and certain types of small group interactions. It can encompass other things with time but that will require much better hardware than currently exists.

      I think people should wait until consumer-level devices start shipping before ruling out the concept as being irrelevant to a mass market. We’re just seeing this generation’s prototypes and development kits at this time. Trying to make projections based on what happened with previous attempts to kick-start VR in previous decades has the same validity as people saying personal mobile devices (now called smartphones) won’t have mass market appeal because Apple’s Newton didn’t have mass market appeal. There are some things that simply don’t work until your hardware is good enough.

      In any case, it will start with gamers (of which there are many millions) and that will suffice to help the continued development of the current generation of VR platforms.