Should you host on the Amazon cloud?

A lot of people have been telling me lately to move into the cloud, and I have been discussing this with many developers, both OpenSimulator developers and web server developers. Most of them, in both groups, agree with me that the cloud is just not great for traditional usage. But that only applies to traditional usage: we all agree that there are some really amazing uses for the cloud.

First, I want to compare the cost of hosting a simulator on a cloud instance versus on a dedicated server. It is documented that a typical OpenSim region of 15,000 prims and 2,000 scripts should run comfortably in about 1 gigabyte of RAM. So let's compare the prices of running a region continuously for a month on both cloud and dedicated hardware.

Cloud:

  • Specs: 1.7 GB RAM, 160 GB instance storage, 1 CPU core
  • Base Cost: $0.06 per hour
  • Bandwidth: $0.01 per gigabyte
  • Access to storage: $0.10 per million requests
  • Storage: $0.10 per gigabyte per month
  • Total Per Month: $48

Traditional server:

  • Specs: 8 GB RAM, 2 TB storage, 2 CPU cores
  • Base Cost: $30.00 per month
  • Bandwidth: unlimited at no extra cost
  • Access to storage: unlimited at no extra cost
  • Storage: 2 terabytes at no extra cost
  • Total Per Month: $30

The examples above use Amazon's On-Demand instances, with no setup costs, and a traditional server provider that requires no contract. You can get better prices from both Amazon and dedicated server providers by committing to a contract.

However, these prices vary based on who you get your traditional server from and whether you use On-Demand or Reserved Instances on Amazon. Or, if you are feeling risky enough, you could use Spot Instances from Amazon, which are priced crazy low but do not have guaranteed uptime.

For me, the biggest issues are bandwidth: whether it is unlimited and whether its speed is capped. One thing many people do not realize is that international connections can be very slow. One of the advantages of Amazon is that you can clone instances in the locations closest to your clients, but the same can be done with traditional providers that have data centers worldwide. It is simply a faster process with Amazon.

So far, Amazon's price of $48 doesn't look too far apart from the traditional server price of $30. But if you look closely at the specs, the Amazon setup is designed to be just enough to run a typical single region. The traditional server, however, has more than four times the RAM and twice the computing power, enough to hold several regions. Now the price differential becomes much more significant.

But you can look at it another way, too. Many regions are empty most of the time. Say you use a region for only 72 hours over the course of a month and shut it down the rest of the time. Now the price adds up to just $5, since Amazon bills based on usage.
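To make the arithmetic explicit, here is a rough back-of-the-envelope sketch in Python using the prices quoted above. The bandwidth, storage, and request volumes are my own ballpark assumptions, not Amazon's figures, so plug in your own usage:

    # Back-of-the-envelope comparison using the prices listed above.
    # Bandwidth, storage and request volumes are assumed ballpark figures.
    HOURLY_RATE       = 0.06   # dollars per instance-hour
    PER_GB_BANDWIDTH  = 0.01   # dollars per gigabyte transferred
    PER_GB_STORAGE    = 0.10   # dollars per gigabyte-month stored
    PER_MILLION_REQS  = 0.10   # dollars per million storage requests

    def cloud_cost(hours, bandwidth_gb=300, storage_gb=10, request_millions=2):
        return (hours * HOURLY_RATE
                + bandwidth_gb * PER_GB_BANDWIDTH
                + storage_gb * PER_GB_STORAGE
                + request_millions * PER_MILLION_REQS)

    print(f"Cloud, always on (~730 hours): ${cloud_cost(730):.2f}")                 # about $48
    print(f"Cloud, used 72 hours a month:  ${cloud_cost(72, bandwidth_gb=30):.2f}")  # roughly $5-6
    print("Dedicated server:              $30.00")                                   # flat monthly rate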

Scaling

One of the biggest things that I really hate hearing from people is that companies have to move into the cloud to scale. This is simply untrue. The cloud does make it very simple to ramp up and down based on needs, much easier than with dedicated servers, but you can still scale with traditional vendors just fine.

Let's take the example of a grid that has been around for a long time, has accumulated a large database of assets and inventory, and suddenly sees a dramatic spike in traffic, resulting in network overload. There are multiple ways to solve this issue. Often the fault lies with the software's inability to handle the stress.

For example, OpenSim's services become exhausted quite easily because they rely on an aging web server built into OpenSim to handle requests and responses. One solution is to set up a reverse proxy, putting a low-overhead server such as NGINX in front of the built-in web server so it can take the hits and keep ticking. When that is not enough, it is time to upgrade and add another server. You can even cluster them, linking the two servers together either over the Internet or within the same data center, then replicate the data and balance the traffic between the two servers.
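As a minimal sketch of that reverse-proxy idea, this is roughly what the NGINX side could look like, assuming your grid services answer on port 8002 on the same machine; the hostname, addresses and ports are placeholders, not a recommended production configuration:

    # /etc/nginx/conf.d/opensim.conf -- illustrative sketch only
    upstream opensim_services {
        server 127.0.0.1:8002;         # OpenSim's built-in web server
        # server 10.0.0.2:8002;        # a second clustered services host, once you add one
    }

    server {
        listen 80;
        server_name grid.example.com;  # placeholder hostname

        location / {
            proxy_pass http://opensim_services;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

With a second entry in the upstream block, NGINX will round-robin requests between the two machines, which is the traffic balancing described above.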

Cloud hosting offers a simpler path, using Amazon's Elastic Load Balancing together with its Auto Scaling service, which can automatically spin up more copies of your simulator when usage increases. That makes it much simpler to maintain, but it can increase costs.

Depending on your needs, the cloud could be cheaper if you only get busy once in a great while. However, a second dedicated server could be better if you see frequent busy periods, or have simply grown to the point where you need more capacity.

Pros and cons

This wouldn’t be a true “versus” article without a bit of pros and cons.

Pros and cons of dedicated servers

  • Pro: Fixed pricing
  • Con: Does not dynamically scale
  • Pro: Normally unlimited bandwidth
  • Con: Sometimes bandwidth is not unlimited
  • Pro: Can be physically owned
  • Con: Hardware issues can occur

Pros and cons of cloud services

  • Pro: Can dynamically scale
  • Con: Pricing is not fixed
  • Pro: Can save by not paying for unused time
  • Con: Pay for all aspects of instances
  • Pro: Can be created instantly
  • Con: Cannot be physically owned
  • Pro: Can restart in a different data center if problems occur
  • Con: Not ideal for full-time usage

Closing Thoughts

I personally prefer using dedicated servers. I like to physically own my hardware. It greatly reduces costs, since all I have to pay each month is rack rental fees. If hardware issues arise, I just purchase replacement hardware. I have taken classes in networking, and I know how to network servers together securely and utilize all the space even on servers that are primarily used for regions. I use the extra space for redundant storage using back-end sync tools. I can manage load balancing using DNS and reverse proxy tools, and I can scale infinitely using these tools combined with network-attached storage.

Now as far as cloud instances go, they are amazing for throwing parties, for running things for a short time with a lot of power. Personally, I still use dedicated servers in these cases, but I can see that cloud instances can be very powerful for that. The main purpose I use cloud instances for is to spin up test servers. There's no need to order a whole dedicated machine to run tests on; just spin up a cloud server, run the tests, and toss it in the trash if it fails, or move it off the cloud to a traditional server if it goes well.

The question of cloud versus dedicated is not so much about which is better as about which you actually need.

The biggest example of OpenSim cloud hosting is Kitely. They take advantage of the fact that most people are not online all the time and don't need their regions to be up and running around the clock. Instead, they have regions online only when people are visiting them. This is a great concept.

But for traditional grids, with regions up and running all the time, cloud isn’t the best possible option.

timothy.f.rogers@gmail.com

Timothy Rogers

Timothy Rogers is founder and owner of Zetamex, a company offering low-cost hosting of OpenSim and Aurora-Sim regions.

  • Hi Timothy,

    As you mentioned Kitely I’d like to note a few things about how we use cloud-based hosting:

    Our system enables people to have as many virtual worlds as they want when they want them, without charging them when those worlds aren’t used. In order to do this in a financially sustainable way we created a system which automatically sets up and provisions virtual world servers when they are required, without creating long delays between the time a user wishes to enter a world and when that world becomes available. One of the ways we handled this challenge was to design our solution to keep a number of servers ready at all times for hosting virtual world simulators. This number is automatically adjusted as virtual worlds are started and shut down in such a way that we will have the servers we need when we need them, while keeping our datacenter costs proportional to our revenues.

    Whenever our system receives a user request to enter a virtual world that isn’t currently running it will automatically set up and provision an OpenSim instance on one of our available servers. Our system automatically sets up virtual worlds on servers that have enough spare capacity to handle them. As server load changes, so does the number of OpenSim instances that share that server.

    The servers Kitely uses are Amazon’s Large EC2 Instances with 7.5 GB of memory running 64-bit Linux. We use each server to host one or more OpenSim instances, depending on the number of people visiting each virtual world. As your world becomes more crowded the proportion of the server’s resources that it receives grows, until eventually your world gets exclusive access to the entire server.

    In other words, Kitely provides its customers with access to a server that has 4 times the CPU and more than 4 times the RAM as the Small Amazon Instance you used in your example. If they have a popular world with many visitors (that requires a lot of server resources to run) then our system automatically gives their OpenSim instance exclusive access to that entire server for as long as that is needed.

    Without being able to start and provision simulators on different servers on demand we wouldn’t be able to do all that. If we had to pay the cost of a traditional dedicated server for each world that can potentially need a lot of server resources then we’d be forced to keep a lot more servers in reserve than we need to when running in Amazon where we can get a new server automatically provisioned in minutes.

    • hack13

      As I pointed out in my original article (not the one posted here, which has been edited slightly by Maria), cloud computing is great in many use cases, and the way you guys use it is a great example of that. I just point out things for people to consider when going with the cloud. You have to build a bunch of tools, such as what you have done, to manage it and make it affordable.

      Personally, the way I manage this is to buy hardware; I have my hardware placed in datacenters around the world. Since I mostly own my hardware, my fees are extremely low, as I only pay for my servers to sit on the racks. I normally rent entire racks, which can be as cheap as $100 a month, and then I can put servers I OWN on that rack and pay only the $100. The overhead cost is practically non-existent for me, and scaling is no issue: if my clusters are getting overloaded, I use DNS-based load balancing and warehouse tools in the various datacenters to just recreate servers on my own physical hardware.

      The biggest thing I love about using my own hardware is that I don't have to worry about what happens at Amazon or whatnot. They are not responsible for your data loss, nor are many other providers. I also have a server at my home that talks with all my existing clusters around the world and daily syncs and performs backups over a privately protected VPN to bring all my backups to one location.

      Again, I am not saying that dedicated is better or worse. It all depends on how you manage and run your setups, and on whether you are someone like me who already has plenty of servers on standby and works with datacenters, some under exclusive contracts that provide servers on lease-to-own terms. Data replication is done on multiple levels, and all my servers run RAID to protect against failures.

      After the issue with our datacenter a few months ago (you can look at blog.zetamex.com; they had a fiber outage), we put in place our new cluster system so that I can instantly bring everything back online from another location, still using the same IP addresses and more, all because of working with the different datacenters and the special VPN routing I run for everything.

      I do use Amazon, but only when I want to test things. It is super easy for me to say I want this and, poof, I have it. I am actually investigating integrating OpenStack soon, so that I will essentially be able to run Zetamex's entirely own cloud backend without relying on a service such as Amazon. These are all things Zetamex's investors and I are looking into, as we are growing and expanding to new heights.

      Cloud computing works great for many, and not so great for others, based on their use cases. That is simply how to put it; there is not really any other way to put it.

  • Key Gruin

    Excellent article, puts things in a good perspective, but $30 sounds way too good to be true for a dedicated server with the specs you listed. To be fair and not give Amazon the only benefit of mention, can you give an example or two of companies that offer such a deal?

    • hack13

      Certainly: first off, try OVH; secondly, try Server4You; and if you are in the market for servers in Germany, go for Hetzner.

      • hack13

        Just to tag onto this one, I forgot another one: VolumeDrive. I have used them in the past and they are a good provider, but they tend to get quite backlogged on orders.

        • Key Gruin

          Fair enough 🙂 Thanks!

      • Samantha Atkins

        Tried an OVH dedicated server. The server itself seemed OK, or was after I removed a modification they made that interfered with installing some common Linux server software. What drove me away was that their website was broken in ways that never allowed me to successfully set up my billing account for autopay. I explained this to their people several times and put money in the account to cover the charges; I just couldn't successfully apply it to the charges. Asked them to do so. Was told they would. Then they suspended my server anyway for false non-payment. Not worth it. Back to EC2 instances for me.
        Likely better if you are on the EU side of the pond. I am in the US.

    • hack13

      Also keep in mind, this is a bit of a fudged version of my original post on my personal blog. Editors do change things, and things lose proper context. That is not a typical dedicated server price; a better ballpark would be $30-60.

  • there is also an environmental aspect to consider. cloud hosting does not need as many servers (all of cloud hosting, not just OpenSim). in theory, fewer servers need to be on 24/7 for cloud-based usage

    while the Amazon Elastic Cloud is made up of many, many real servers, i like to think that my real and true opensim “load” is, hopefully, considerably less now that i am with Kitely than when we had a dedicated 4 core, 8 gig box sitting always powered on in Dallas

    the downside is that it can take a minute or two to spin up my worlds, but for the sake of the real world, i can delay my gratification by a minute or two =)

    • Thank you Ener 🙂

      I think it's worth noting that your world takes 1-2 minutes to start up because it is a 16-region world with quite a few objects. Smaller worlds with fewer objects start up more quickly.

    • hack13

      While yes, that is true, and I think it is a good way to lower the footprint of OpenSimulator, it doesn't solve the problem. I constantly work hard at coming up with new ways to improve OpenSim performance without needing to give it tons of RAM.

      These performance tips and tricks I give away to others who cannot afford things like cloud hosting. For the sake of argument, cloud hosting is great for on-demand hosting, but if you are a standard, run-of-the-mill community-run grid that needs and wants its regions always online, it is not really ideal.

      So this is why I work to come up with new ways to keep performance spot-on while lowering the amount of RAM and CPU usage that is needed. But I will say Bullet is really making that hard; I hope its CPU usage comes down.

    • hack13

      Actually Ener, I want to point out that environmentally it is not really greener. These datacenters are huge, and Amazon actually draws much more processing and physical power than most. Their servers are ALWAYS online. They are not offline; they are constantly drawing power.

      See, the difference is that cloud hosting is tiny virtual machines that float on top of all these physical servers that are already online. So all the physical servers are constantly running and there is no environmental help; in fact, the sad thing is that Amazon's datacenters are much larger than even SoftLayer's (the leading data center provider in the United States).

      When you spin up an instance, it only turns on a virtual machine on top of the physical machines. So in a sense cloud hosting is kind of like VPS or KVM hosting, but you have far more resources and are much less likely to run into the issues you would with a VPS or KVM.

      • Actually Timothy, Amazon with its on-demand hourly billing is greener, when compared using actual usage per watt spent, than standard webhosting datacenters that rent or co-locate dedicated always-on servers or VPS (virtual private servers) to customers on a monthly basis.

        With Amazon's on-demand pay-by-the-hour model, a lot of companies can time-share the same physical servers just when they need that capacity. With the monthly hosting model that non-cloud datacenters use, each of those companies would need to get its own dedicated server or VPS for entire months at a time and keep it active even when it isn't required.

        Take Kitely for example, we host close to 5000 regions each supporting up to 100,000 prims. We would need to maintain more than 1000 dedicated servers if we were using standard always-on hosting. With Amazon, we can run dozens of servers instead and have that number grow and shrink by the minute as the number of servers we actually need changes.

        If you host your own cloud on dedicated servers that you keep active in some datacenter then you don't get the benefit of reducing the number of servers that draw power based on actual usage. You also continue to draw power to run servers that other companies can't use for their own needs when you aren't using them. If you have a lot of servers you are therefore wasting a lot of energy that could either not have been used at all or been spent running those same servers for another company, which then wouldn't need to run additional servers of its own.

        • hack13

          While yes, you are only paying per hour, others are still drawing on the physical servers. I do like cloud computing, and I think it is a great innovation. But I do not like having to trust another company to manage my data; I like having and owning my data and hardware. I find major advantages to this, and we utilize all our servers constantly. We keep data synced across our network, and the servers practically manage themselves until a new one is added.

          If we chose to do a cloud-based system with our servers (which we most likely will not), it would be much the same; we operate a cluster at the moment. This means all our servers talk to each other, just as if they were cloud-based. They can depend on each other, and spin capacity up or down. This allows me not to worry about whether it is going to be more or less expensive this month. I mean, yes, we do have hardware not used to its full potential, but that is fine; it is there for when we need it.

          Again, I am not dissing cloud computing. I am just wary of putting all my cards in one stack, so to speak. If we were going to use cloud computing, it would be Google's or Amazon's services. We have been looking at Google's for testing, as Google's pricing is better and its APIs are easier to work with. But again, this all comes down to how you choose to utilize your hardware, or, if you are in the cloud, IaaS (Infrastructure as a Service).

  • Eddie

    You know, I also considered Amazon cloud in the past. However, after visiting http://weloveourhost.com/semi-dedicated.html I decided to open a semi-dedicated account with Linux Hosts Ltd. That was 8 months ago and I haven't had any serious issues yet. There were some minor problems, but the company's support team resolved them in minutes.

  • I’d like to chip in with a dedicated-versus-cloud rule of thumb. Basically, it’s a question of do you rent, or do you own?

    Owning makes more sense when you know how much you’re going to want, and you know you’re going to want it for a while.

    Renting makes more sense when you don’t know how much you’re going to want, or will be needing it for only a short while.

    Many companies take a combo approach. They run the majority of their systems on dedicated hardware that they pay for up front, and utilize it to the fullest. And if they suddenly have a spike in usage, instead of having unused servers sitting around in reserve like they would have had to do in the past, they send the extra workload up to the cloud.

    I can see that Tim is already doing this for testing, a great case of short-term use. No point in buying a whole new server when you just need it for an hour.

    Over time, I expect large grids to add a little bit of cloud-based hosting to handle sudden spikes in new customers, to run brand-new regions until they buy and provision new servers to hold them, for short-term but large-scale events — and, of course, for testing and development.

    And I understand that Kitely is pretty much committed to the Amazon model, but, as the grid gets bigger, it may make financial sense to move a certain core set of servers to dedicated machines to reduce costs.

    • hack13

      That is kind of what I was bringing to the table: for short-term use, or for operating on demand the way Kitely does, cloud-based hosting makes a lot of sense. But when you operate like I do, with clients who do not want their regions down and want them always up, it makes more sense to use dedicated hardware. Granted, my model looks outdated, but there are plenty of developers who still work this way, and honestly it can be more efficient to keep things hardware-based.

      While we do use the cloud for testing, and we have considered the cloud for bursting, we have never yet run into a situation where our servers couldn't take the hit. We also pay for DDoS protection at the hardware level and for firewalls that throttle traffic if the load becomes too much, making it safer for us to run dedicated without the fear of our servers choking on the stress. We like load balancing, as we just feel it is more cost-effective than the money that has to go into building the applications that monitor cloud instances and spin them up and down, something we viewed as an unneeded expense. But those are things Kitely has taken the time to create, making it more profitable for them in the end.

      We are looking into Linux's new systemd development, which is looking very promising with its kernel-layer APIs, to do something like what Kitely does in the future but on dedicated hardware, where regions will spin up and down based on their needs, things we can do without the need for cloud computing. While we do have some vested interest, I should point out that Zetamex does do our versioned backups in Amazon's S3, and we keep all old clients' data for up to 6 months in Amazon Glacier in case they come back to us and need their data.

      • Hi Timothy,

        I hope that you are aware that there is a lot more to having a working system that automatically starts and stops OpenSim instances when they are actually needed than just starting a new virtual machine using some prepared image.

        You need to optimize various OpenSim components to significantly speed up world startup times. You also need to be able to automatically detect and work around real-time problems that can occur in your application layer (OpenSim, database, etc.). Kitely uses a lot of open-source projects and still needed to write hundreds of thousands of lines of code to automate the handling of the various things that can go wrong in OpenSim and the other components that make up a grid.

        You can’t get those missing components off the shelf and if you only automate part of the solution then you’ll need to hire a growing number of system administrators to fix various problems that Kitely handles automatically. This will increase your operations costs, increase your turnaround times, and reduce your margins compared to what Kitely’s tech allows.

        I wish you luck in trying to emulate our technology but be aware of what’s involved with that undertaking – it’s a LOT more complicated than what you expect.

        • hack13

          As stated before, we understand the complications in all of this. Also, I will let a small cat out of the bag: we just recently finished developing our own private API that allows us to manage OpenSim instances using REST commands. This includes everything: creating instances, regions, OARs, IARs and databases, moving a region, renaming a region, and much more. This took us several weeks to develop, but it is going to be integrated into our new ZetaPanel 2.0 that is due to release soon.

          We know that simply starting and stopping is not the issue; we have already implemented our own proprietary system that spins down OpenSim RAM and CPU usage when there are no users on a region. We are able to do this with simple JSON, cron, and shell scripts. Doing this has saved us significantly in costs.

          An example of the savings we get from spinning down instances: they remain completely online and completely operational, but their resource usage drops by 25% to 45%, so we can save system resources for the people who are not idling. Though this has never been an issue in the first place, because we refuse to oversell. Overselling is something we see far too often in the market; we see spinning down things that are not being used simply as a way of preserving our server hardware.

          Again, we are 3 years into this game; we know what we are doing. Granted, I might not be the best programmer, but that doesn't mean I don't hire good programmers to make sure we have things down. On top of all this, we are planning on open-sourcing our control panel and many of the scripts we use for managing our back end, because we encourage competition. We have even given hours of free consultation to many grids, and tons of setups, code and more to grids who have just called and asked us for assistance.

          We are big believers that people in this business need to share and not keep things to themselves. We are all about sharing, but our API, our spin-down technology, and some of the new services you are going to see with the ZetaWorlds launch are going to be our own systems, not open-sourced, as it would be too much effort to support dozens of people setting them up. Our API alone is about a mile long in code and requires a lot of kernel and Apache integration, so it is not an easy setup process.

          • It’s good that you don’t oversell servers but if you keep sims running (even at reduced CPU usage) then you still need to keep a lot more servers online than you could with a completely automated on-demand system like Kitely uses.

            Kitely automatically changes the actual number of sims running on a server so that all the server resources can go to a sim that needs it (it doesn’t just slow down idling sims). It can potentially give all the worlds it hosts dedicated servers at the same time because it isn’t limited by having a set number of co-located servers it can use to run sims on (automatically getting and provisioning a new instance from Amazon takes minutes and our system does predictive allocation to minimize wait times).

            Unless you develop a true on-demand system like Kitely has and place it in a cloud-based environment that enables on-demand server allocation and deallocation, you'll be forced to continue charging more than Kitely does and continue to offer lower price/performance to your customers, because your underlying costs are much higher.

          • hack13

            As I stated before, we are not in the game to do on-demand; it is not what we think is a good selling angle for us. Honestly, I think it is cool for some uses, but for people like me it is just not what I want. I like my worlds to be always online and always available. You can't offer me that, and that is something that turns me away. But there are many great and amazing use cases for the things you can do with Kitely.

            As far as charging more goes, I don't see the big deal. I have studied this over the past 3 years and have noticed that our average client never uses more than 20k prims, and in many cases never goes over the 15k prim line. Not to say we don't have those that do, because we very much do.

            So in that sense, on Kitely, if I am that average user I need to pay $40 for the 100k prims, but I never use that many prims or that much power, as most people don't use that much scripting or that many prims. Or I can go with my solution for $20, with the 15k prims I am actually going to use, and save 20 bucks.

            Another thing I have noticed is that many people want a lot of regions but don't use a lot of prims. Hence our offering, unlike any other I have seen in the business: you order our $40 plan and get up to 9 regions sharing 30-45k prims, depending on your scripting usage. Again they save the cost of having to pay $80 for that, with extra prims they may or may not need.

            I think Kitely is great if you are going to need all those prims and all that extra power behind your regions. But if you are the average OpenSimulator user, based on our numbers and other studies we have conducted and reviewed, it is really not a profitable choice for me to use Kitely. Not to say your product is bad; in fact, your product allows instant region creation, as well as a secondary option to pay only $35/month for unlimited access for yourself but not your region visitors. So it's great for developers; the $35/month is great if you are just going to build.

          • Kitely's on-demand worlds are always available; they just aren't running when people aren't actually using them. The always-on factor reduces the amount of time it can take to enter a sim if it was previously empty of people, but the inworld experience is the same once you're inworld. That offline world startup time can be less than 30 seconds for Kitely worlds with 15K prims (and this number will continue to improve over time; we developed quite a few optimizations that speed this up compared to regular OpenSim).

            The Kitely option you neglected to mention is that you could go with a time-based billing Kitely world and pay less than $10 to have a prim-rich 4-region world hosted on Kitely while picking up the tab to provide free access to your visitors (you mentioned statistics, well most regions don’t get a lot of visitors who spend a lot of time in them).

            For example, see this testimonial by one of your existing customers (if I’m not mistaken): http://dankojournal.wordpress.com/2013/09/04/osgrid-endmeta-metro-sl-kitely-soas/ “So now I have four regions for 3 USD a month! If I need more than the two hours a month that comes with the free plan, I can simply buy extra minutes as I go. Kitely is the best bargain in the Metaverse.”

            And unlike with your $20/month plan, this isn't limited to just 1024 MB of RAM, 15K prims or 10 concurrent users. This Kitely world, which has a basic cost of $3/month, can have up to 7.5 GB of RAM (on a dedicated server if it's needed), 100K prims and 100 concurrent users.

            Since you mentioned it, Kitely’s $35/month Gold Plan option doesn’t just give you unlimited personal time, it also allows you to host up to 20 regions (in up to 20 separate worlds that can all be active at the same time each with up to 100K prims and 100 concurrent avatars).

          • hack13

            Look, I am not here to argue who is better, because honestly I don't think you're better than any other service provider out there. Nor do I think that Zetamex is any better than any other service provider out there.

            I am simply pointing out, firstly, that people use their simulators more than 2 hours per month; I see more like 3-4 hours per day of usage as common for most of Zetamex's residents. Secondly, the ability to have those 20 regions, or however many you want, is cool, but not what I want. I want to have several regions connected without being a megaregion; I just don't find megaregions to be exciting or a good idea. I like to cluster 30 to 60 regions together, and I cannot do that with your product, as I cannot place individual regions next to each other.

            Lastly, you keep almost wanting to defend Kitely; no one is attacking Kitely. I use cloud services for testing and for experiments, as they are great for that usage for ME, not necessarily anyone else. I am great at networking and the IT back end, not so much the programming side, which is why I hire people to do that part for me. I constantly tell users if I am not the right fit for them. Heck, I just turned down a large contract with a multi-thousand-dollar client because I knew I was not the right fit for what they were looking for. They wanted something I don't offer, nor did I want to offer it.

            Zetamex is not about the money; Zetamex is about helping the OpenSim community and helping it grow. We invest a lot into the community aspect of OpenSim. We still have many clients on our legacy free region program, which is going to be coming back on ZetaWorlds. We have the hardware, the time, the money, and the drive to make OpenSimulator better for the community.

            You told me privately that you want hypergrid to help bring everyone together, but somehow your attitude seems to say otherwise with this whole "well, Kitely does this better." In EVERY post I have written here I have said I am "not putting cloud computing down" and "not attacking or saying Kitely is bad," because I am not. I am just saying that in ZETAMEX's use case the cloud is not profitable for how we operate; we would lose hundreds of dollars a month if we switched our operations over to the cloud. This article highlights the good of both cloud and dedicated, and you have taken it and made it almost a fight to make cloud and Kitely the winner. There are no losers or winners; we all play the same game. Stop fighting and be friendly; no one here is attacking you.

          • Timothy, if you avoided comparing your service to Kitely's and making statements about how the average customer will be better off with the type of service you provide, while misrepresenting the type of services we provide, then we wouldn't get into this type of exchange.

            There are two issues at play here, the backend technology and the pricing scheme. The article was about what makes sense to run in cloud computing environments such as the one provided by Amazon and what would make more sense to run on dedicated servers. You started comparing what you do to how Kitely works while misrepresenting what we do and making slightly disparaging (my-preference-type) remarks about using cloud-hosted servers as opposed to servers you co-locate in a third-party datacenter.

            If you start a comment with “Look I am not here to argue who is better” then make statements about how good you are and the perceived limitations of the party you’re talking to then you’re going to create a negative sentiment with the person you’re having the conversation with. Please reread your last comments to see that they had similar structure.

          • hack13

            Yes, but it seems that people read between my lines, I guess. I speak bluntly and don't beat around the bush, so I think that is why people think I am attacking them, and I think you kind of do the same thing at times.

            I was simply comparing; my original article never brings up Kitely, so if it does here, Maria added that when she edited it. I repeatedly state that the cloud is great, that it works amazingly, and that people should enjoy the cloud. I just don't see the cloud being profitable for ME, MYSELF, TIMOTHY FRANCIS ROGERS. I just don't. I tried it in the past, tried it again a few months ago, and every time I go and crunch my numbers, I seem to need to spend twice as much money as I do now with our dedicated servers.

            I know that is only because I do always-on, and, well, that is just how I operate and how I like to do my business, just as you like the on-demand aspect, and there are many advantages to that. But again, it is not what I, TIMOTHY FRANCIS ROGERS, want. I know of and refer many people to your service.

            The two service providers I refer people to, if I don't meet their budget or their wishes, are Dreamland Metaverse and Kitely. If you have ever read Zetamex's mission statement, we believe in telling you, "We might not be the right choice for you." That is Zetamex's mission statement: to never take on anyone we don't think we are the right fit for. Some people look at that as a weakness; I look at it as a strength. Why waste a customer's time and money if I can't provide what they are looking for? That is just how I do business and how I would want to be treated myself.

          • Timothy, I'm not here to tell you how to run your business. I think you're discounting the amount of work you or the people you employ have to do because you're using co-located dedicated servers. This may not be an issue for you today, when you have a relatively small number of servers, but as you get more servers the numbers game will catch up with you and you'll need to deal with hardware failures on a frequent basis. There are also things that are outside your control, such as router failures, fiber optic disconnects, etc. The datacenter may help you deal with those, but turnaround time for a hardware failure can be hours or days, and that can cost you some business when it happens.

            Those are things that are much less likely to cost you time (and money) when you don’t actually have any servers of your own and can just recreate your entire network architecture in minutes in another datacenter around the world (which is what cloud-computing enables you to do if you set up your system properly).

            Your customers may be very happy with you for years and storm off angry after a few hours' downtime that isn't your fault. Cloud computing with its on-demand provisioning enables you to minimize that business risk. Consider it a type of insurance you're paying for business continuity. If you can then develop your infrastructure to utilize it to reduce your operational expenses as well, then all the better.

            Again, there is a reason why most startups nowadays opt to use cloud computing and not go for co-locating dedicated servers. There is more to total cost of ownership than just looking at server hosting costs.

          • hack13

            As stated before, we currently have about 20 servers online, all high-end and spread around the world. We replicate data across them all using the spare space. As I explained earlier, we also store backups in the cloud, on S3 and Google's Cloud Storage. We can restore instances and reroute traffic in an instant if something goes offline, just as you can with your cloud infrastructure.

            We have had plans in place ever since our datacenter disaster a month ago, when services were pushed to a horridly slow pace due to a fiber line cut. We have tested our new system and have successfully verified that we can restore service and reroute traffic at a moment's notice. Cloud is merely our last resort, as the cost of it is just crazy for constant usage. You are just not going to convince me otherwise; there is absolutely no other way to look at it. But we are utilizing new technologies like Docker, where we have everything in containers that are completely portable and can be reloaded in an instant on another server, or even a cloud instance if we have to.

            Using Docker gives us the ability to just up and move our entire infrastructure from server, to VPS, to cloud, to bare metal. It is quite revolutionary in its design, and it is what gives us our latest cutting edge. But we will never use the cloud as our primary platform, just as a last-resort backup.

          • Joe Builder

            Seems everything Ilan says has a happy Kitely ending.

          • Danko Whitfield

            This is a bit odd, as I have not been misquoted here, but a statement has been made about me which is not true. I am not now, nor have I ever been (to quote a phrase), a paying customer of Tim or Zetamex, nor have I ever used their grid or standalone services or any other service they sell.

            I was a resident of AuroraScape. I have opened an account in ZetaWorlds. I have accounts in close to 30 grids. So I’m a customer in that sense…a free account, non-paying customer. But the statement made by Ilan about me implies that I am a paying customer of Zetamex or Tim. That is not true and never has been. The only two grids I have ever paid money to are Second Life and Kitely.

            And I would be hesitant to spend money with a company whose head one day posts a piece called Why I Don’t Use The Cloud and within the same week announces his company is moving clients to the cloud.

          • I’m sorry Danko, I guess I misunderstood one of your previous blog posts. I read that you had some land in a grid which I believe was hosted on Zetamex (though that grid’s website seems to be no longer available).

          • If I might…Maybe Ilan was referring to endofmeta? That is gone, as far as I know…further details I leave to Danko-))

          • Danko Whitfield

            I had a free parcel on EndMeta; anybody could have one. I don't see how that makes me a customer of Tim's or Zetamex. I never saw, spoke to, or dealt with Tim in connection with that. I was a "customer" of Blady, the owner of the grid. Anyway, she switched hosting companies a couple of weeks after I got my parcel, and that grid has been offline for a couple of months. If Zetamex ever owned that grid, it was before I got there.

    • There is a reason why VCs tell the startups they fund to use cloud computing and not go with dedicated servers. It has to do with total cost of ownership and business flexibility. You have to be a very small or very big company for dedicated servers to make more financial sense when you consider the additional administrative and overhead costs you have with acquiring and running many dedicated servers.

      Let's take Kitely as an example. We charge a lot less than companies that use dedicated servers, even though we provide each running sim with a lot more server resources per dollar. We can do that because, even with our much lower prices, we still have bigger margins than companies that keep sims running on dedicated servers 24/7 can achieve. We can dynamically change the amount of extra unused capacity we pay for, which isn't true for people who are taking space in some datacenter (even if their servers are offline).

      Our datacenter costs won’t be something worth spending R&D and operations dollars trying to minimize until we have a lot more actively used servers than SL has. Splitting servers between cloud-based infrastructure and co-located servers in some other datacenter creates additional operations complexity we currently don’t have to deal with. Dealing with that complexity would cost more than just having everything run in Amazon. This isn’t a particularly unique strategy, it’s the one almost all startups (even ones with millions of active users) are using. Time costs money and cloud-based computing saves a lot of time while minimizing upfront expenses and long-term business obligations.

      • hack13

        You are right: the infrastructure Zetamex maintains is very large and very time-consuming to manage. However, we are able to automate many of the tasks using newer technology that helps us keep data moving across all our systems. It is not that scary to go dedicated; in my opinion it is actually much scarier to go cloud. I have messed with many of the cloud APIs and they are a bit confusing, to say the least, but they are very well documented, so that is one of the major upsides for people who are more fluent with such things.

        Zetamex has also gained a lot of traction, and customers switching from many other OpenSim providers, because our costs are so affordable. On top of all that, we have many raving customer reviews about how well we take care of them. This is because we rely on our datacenter managers to handle many of our hardware-related issues, something we don't have to make any fuss about unless an issue occurs.

        Even though we are on dedicated hardware, our pricing is still amazingly competitive, and on top of that our clients' regions are accessible 24/7 and have their data replicated on backup hardware in case of a disaster, all included. This applies all the way down to our $20-a-month clients too. Having been in this business for 3 years, we are tough, have proven we have many years to come, and are only getting bigger. We just added 2 new servers to our cluster this past week.

      • Samantha Atkins

        Of course there is a middle ground. Many are buying their own hardware and running OpenStack to partition the overall capacity however they wish. This model would support an on-demand region rezzing model like Kitely uses pretty easily. Of course so would dedicated hardware. I wouldn’t go with this approach over EC2 as I have some modest experience helping bring up a rudimentary OpenStack setup. It was not trivial.

        • The problem with going the dedicated server route is that you have bigger upfront costs and less flexibility when it comes to dynamically allocating server resources during peak times. If you have to maintain enough hardware to accommodate uncharacteristic spikes in demand, then you're going to spend a lot more than when you rely on other people's hardware to support your sporadic bursts in resource requirements.

          One could argue for dividing the load between your own data center and Amazon's, but distributing servers across datacenters comes at its own operational expense that would be hard to justify until your actual baseline usage is much higher than what I believe it is even in SL (if they actually provisioned servers on demand instead of just keeping them all online all the time).

  • A concerned citizen

    This whole article has turned into one gigantic rant. I was really fascinated at first, but following through it just seemed to turn into a whole anti-cloud rant.

    As for both of you, Timothy and Ilan: stop acting like children, give each other kisses, make up, and get back to business. There are enough grid wars going on between the bigger grids, AVN and IWZ, without you both fighting and bantering and screaming back and forth. Both of you should be ashamed of yourselves at the way you are acting. It makes me sick to read this filth.

    You are both the ones who should be standing above this childish behavior. Take this private with each other; don't bring it onto a public news site.

    Enough said!

    • Sammy

      Well, it's a public topic, friend. With Timothy Rogers writing an article, he should assume as the author that others might take a different view, or even have more knowledge on the topic, and that it gets discussed in the comment section.
      No one is acting like children but you, friend. Welcome to the internet.

  • I think Amazon Cloud hosting requires not just server and database optimization, but one has to make sure that the cloud server is fully secure.

    http://www.cloudways.com/en/amazon-managed-cloud-hosting.php

  • Merrie Schonbach

    Good article Tim thanks!
