AWS News Blog

Linden Lab: Amazon S3 For The Win

Woke up this morning to find an awesome post from Linden Lab, creator and operator of Second Life.

In an article titled “Amazon S3 For The Win,” developer Jeff Linden describes how they used Amazon S3 to absorb the crush of downloads that had previously hit their web servers every two weeks, whenever they released a new version of their 30 MB client:

The client you download may just seem like a 5-minute nuisance to you. Magnified ten thousand times, it becomes a severe issue for our webservers on days when we release a new version: tens of thousands of people all rushing to download them at the same time. An average of 30 MB per download, multiplied by however many folks who want to login to this Second Life thing, comes out to a lot of bits.

Solving this problem by hosting the bits on Amazon S3 is a perfect illustration of Web-Scale Computing in action. On average, they need only enough infrastructure to handle downloads by newly registered users. At peak times, however, their infrastructure requirements spike and they need enough to accommodate downloads by every active user. Over time, the disparity between average and peak will become more pronounced.

Using a Web-Scale model, you need not gear up for the peak. You certainly have to understand how you will deal with it, but you don’t need to invest money up front in servers, networking equipment, or bandwidth reservations. As Jeff says:

Rather than continue to pile on webservers just for this purpose, which has somewhat diminishing returns, we have elected to move the client download over to Amazon’s S3 service, which is basically a big file server.

In other words, their investment isn’t sitting idle most of the time, waiting for the relatively rare occasions when they need to deal with peak traffic. Or, as a VC friend of mine told me recently, “We’d rather invest in brains than in servers.”

Jeff was able to include some actual numbers in his post, and they are staggering:

In case you’re curious, we switched over about halfway through release day; but even for the tail 8 hours of the download rush, we averaged roughly 70 gigabytes of viewer downloads per hour. Then it settled down to a relatively steady stream of about 20-30 gigabytes per hour. In the last 23 hours we’ve transferred a total of ~900 gigabytes so far, which I’d estimate to be around 30,000-38,000 downloads. This does not include the first several hours of the download rush, which are typically the highest.
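Those figures are easy to sanity-check. At the roughly 30 MB per download mentioned above, 900 GB works out to about 30,000 downloads, and a somewhat smaller average payload pushes the count toward the top of Jeff’s range. Here’s a quick back-of-the-envelope sketch (the 24 MB lower bound is my assumption, not a figure from the post):

```python
# Back-of-the-envelope check of the download estimate (my arithmetic, not Linden Lab's).
total_gb = 900                                 # ~900 GB transferred in the last 23 hours
payload_mb_small, payload_mb_large = 24, 30    # assumed average size per download, in MB

# Smaller downloads imply more of them for the same total traffic.
max_downloads = total_gb * 1000 / payload_mb_small
min_downloads = total_gb * 1000 / payload_mb_large

print(f"~{min_downloads:,.0f} to ~{max_downloads:,.0f} downloads")  # ~30,000 to ~37,500
```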

He also points out another advantage of the Web-Scale Computing model. Specifically, they don’t have to spend their time worrying about this anymore:

Hopefully your SL experience will be either unchanged or changed for the better, but on the webserver, we can all breathe a sigh of relief.

Clearly, downloading bits from a website is important, but ideally you would like this to be part of the infrastructure: reliable, transparent, and doing its job so well that you can almost forget that it’s there.

Welcome to the Web-Scale world!

— Jeff;

Update: As is often the case with a blog, the post itself is only a starting point for an interesting conversation. I found this follow-up note in a comment on the post:

It just turned out that the S3 solution was ready for deployment immediately, whereas Akamai requires more negotiation. In other words, we already had an Amazon S3 account where I was testing something out, and when we noticed the bandwidth was pegged, we made a fast decision to speed up our plans to put our viewer elsewhere, and chose S3.

Yes, there’s that Web-Scale thing again. You need a place to make some bits available for high-volume downloading? Push them up to S3, set the ACL to public-read, and start handing out the URL. No planning, no negotiation, no setup charges or residual fees.
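For anyone curious just how little ceremony that workflow involves, here’s a minimal sketch using Python and the boto3 SDK. The bucket and file names are placeholders, and I’m assuming your credentials are configured and the bucket allows public-read object ACLs:

```python
import boto3

# Minimal sketch: upload a file to S3, make it publicly readable, and hand out the URL.
s3 = boto3.client("s3")

bucket = "example-downloads-bucket"       # placeholder bucket name
key = "downloads/secondlife-client.exe"   # placeholder object key

s3.upload_file(
    Filename="secondlife-client.exe",     # local file to publish
    Bucket=bucket,
    Key=key,
    ExtraArgs={"ACL": "public-read"},     # let anyone download it
)

# The object is now reachable at a plain HTTPS URL you can hand out.
print(f"https://{bucket}.s3.amazonaws.com/{key}")
```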
