Category: Amazon S3


Amazon S3 – the new Aspirin

As more and more people move their production apps to Amazon S3, we are getting emails from CEOs and CTOs about their success and how Amazon S3 helps them sleep better at night. Last week I blogged about live blogging backed by Amazon S3 and Sitening's 2-hour, $10 scaling experiment. This week it's Pictogame. As Louis Choquel, President of zSlide (of Podmailing fame), puts it in his own words:

– with S3 we could sleep better and spend totally cool weekends watching our Digg score climb in total serenity.
– without S3 we would have spent much more money but slept badly anyhow, had nightmares, and actually seen our server crash, probably at the worst time of night, as these things usually do.
Pictogame launched on May 25. It's one of those user-created game widgets that you can build dynamically and embed in your MySpace blog. Technically, the app consists of several general-purpose SWF Flash files (game loader, skin, game template), user media files (currently only pictures: JPEG, GIF, PNG), and an XML file describing how to mix all of that into a customized game. All of these are stored in a single bucket on Amazon S3. The SWF files are stored once, since they are common to all games; user media files are first uploaded to Pictogame's own servers for processing and then pushed to Amazon S3; and an XML file is uploaded to Amazon S3 each time a new game is created. A background process written in PHP stores all of these files asynchronously on Amazon S3, while the Amazon S3 key is kept in their database for later editing and deletion.
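
Pictogame's background uploader is written in PHP and we haven't seen their code, so the snippet below is only a rough sketch of the pattern they describe, written in Python with boto3 (a present-day SDK, not what existed in 2007): push each asset to a single bucket, record the object key locally for later editing and deletion, and hand back the public URL the widget is served from. The bucket name, table layout, and function names are all assumptions.

```python
# Illustrative sketch (not Pictogame's actual PHP code): push game assets to a
# single S3 bucket and record each object's key for later editing/deletion.
# Bucket and table names are hypothetical; boto3 is used purely for illustration.
import sqlite3
import boto3

s3 = boto3.client("s3")
BUCKET = "pictogame-assets"  # hypothetical bucket name

db = sqlite3.connect("games.db")
db.execute("CREATE TABLE IF NOT EXISTS assets (game_id TEXT, s3_key TEXT)")

def store_asset(game_id: str, local_path: str, key: str, content_type: str) -> str:
    """Upload one asset (SWF, image, or game XML) and remember its S3 key."""
    s3.upload_file(
        local_path,
        BUCKET,
        key,
        ExtraArgs={"ContentType": content_type, "ACL": "public-read"},
    )
    db.execute("INSERT INTO assets VALUES (?, ?)", (game_id, key))
    db.commit()
    # Widgets are then served straight from S3 via the public URL.
    return f"https://{BUCKET}.s3.amazonaws.com/{key}"

def delete_asset(game_id: str) -> None:
    """Use the stored keys to remove a game's assets when it is deleted."""
    for (key,) in db.execute("SELECT s3_key FROM assets WHERE game_id = ?", (game_id,)):
        s3.delete_object(Bucket=BUCKET, Key=key)
    db.execute("DELETE FROM assets WHERE game_id = ?", (game_id,))
    db.commit()
```

In the real system the upload runs asynchronously in a background worker; it is shown inline here only to keep the sketch short.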

Their stats are all the more impressive: in the three weeks since launch, they got "dugg" several times and spent a few hours in Digg's Top Ten. Overall, more than 75 GB were downloaded and 1 million widgets were served (each game widget weighs an average of 150 KB), and total costs were less than $20 for storage and bandwidth.

And since the widgets are served directly from Amazon S3, they don't have to worry about scaling. No more post-launch headaches!

Here's the game – try building the famous "AWS building block" (my best time: 400 seconds).
Have Fun!
— Jin

Live Blogging Experiment Results – Sitening.com

I blogged about Sitening.com's Live Blogging Experiment yesterday. They set out to capture a live feed of Steve Jobs' keynote at the WWDC 2007 event. For the past several years, any site that live-blogged the event would become hopelessly buried in traffic.

They built a small admin tool that snapshots the content every minute or so, posts it to an Amazon S3 bucket, and lets Amazon S3 handle the load (a rough sketch of how such a tool might work follows the results below). In their followup blog post, they tell us more about the tool and share the results of their two-hour experiment. The post is a good read.

  • 7 pictures
  • 130 text posts
  • 20k visits
  • 50k pageviews of the page
  • 5M requests – this includes the 15-second content refresh and the images
  • 47 GB of transfer
  • the content page was about 17 KB; images were about 200 KB each

Cost for the event: $10.
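
Sitening hasn't published the tool itself, so the snippet below is only a rough sketch of the pattern described above, in Python with boto3 rather than whatever they actually used: render the current posts to a static, self-refreshing HTML page and overwrite it in an S3 bucket every minute or so, so that S3 rather than the origin server absorbs the read traffic. The bucket name, object key, and page markup are assumptions.

```python
# Rough sketch of a live-blogging publisher in the style Sitening describes:
# render the current posts to static HTML and push the snapshot to S3 so that
# S3, not the origin server, absorbs the read traffic. Names are hypothetical.
import time
import boto3

s3 = boto3.client("s3")
BUCKET = "liveblog-example"          # hypothetical bucket
KEY = "wwdc-keynote/index.html"      # hypothetical object key

def render_page(posts: list[str]) -> str:
    """Turn the list of text posts into a minimal self-refreshing HTML page."""
    body = "\n".join(f"<p>{p}</p>" for p in posts)
    return (
        "<html><head>"
        '<meta http-equiv="refresh" content="15">'   # readers re-fetch every 15 s
        "<title>Live blog</title></head>"
        f"<body>{body}</body></html>"
    )

def publish(posts: list[str]) -> None:
    """Overwrite the snapshot on S3; every reader request is then served by S3."""
    s3.put_object(
        Bucket=BUCKET,
        Key=KEY,
        Body=render_page(posts).encode("utf-8"),
        ContentType="text/html",
        ACL="public-read",
    )

if __name__ == "__main__":
    posts: list[str] = []
    while True:
        # In the real tool an admin UI would append new text posts and images here.
        publish(posts)
        time.sleep(60)  # push a fresh snapshot roughly every minute
```

The 15-second meta refresh mirrors the refresh interval mentioned in the results; each refresh is just another inexpensive GET served by S3.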

This demonstrates the true power of Web-Scale Computing: a massive, scalable infrastructure to back you up when you need it, while you pay only for what you consume. The model proves its worth during exactly these kinds of "live" and "spiky" events.

Now I can't wait to see their next experiment, "Live Video Blogging."

Nice work Sitening.com!

— Jin

Apple’s WWDC Keynote updates – Powered by Amazon S3

If you are following the hype around the iPhone and Leopard, you know how eagerly awaited the WWDC keynote is. Jon Henshaw and Tyler Hall from Sitening are blogging live from the keynote room (10:00 AM) as I type this post, and it's powered by Amazon S3.

You can get Steve Jobs' keynote updates here.

This demonstrates the true power of scale. As he mentions in his blog post, he does not have to worry about how many people will visit or how many times he updates the page; it will simply scale automatically.

This is going to let us reach a ton of people without worrying about bandwidth, infrastructure, etc.

Currently we can see that they have photos and text; it would have been even cooler with live video updates. It looks like they have created a neat little utility that automatically updates a page on Amazon S3 and also posts notifications to a Twitter account.

Maybe Jon or Tyler can tell us more about their cool little "Live Blogging" app.

— Jin

Linden Lab: Amazon S3 For The Win

Woke up this morning to find an awesome post from Linden Lab, creator and operator of Second Life.

In an article titled “Amazon S3 For The Win,” developer Jeff Linden describes how they used Amazon S3 to buffer the crushing blow of downloads that they had previously suffered every two weeks when they released a new version of their 30MB client:

The client you download may just seem like a 5-minute nuisance to you.  Magnified ten thousand times, it becomes a severe issue for our webservers on days when we release a new version- tens of thousands of people all rushing to download them at the same time. An average of 30 MB per download, multiplied by however many folks who want to login to this Second Life thing, comes out to a lot of bits.

Solving this problem by hosting the bits on Amazon S3 is a perfect illustration of Web-Scale Computing in action. On average, they need only enough infrastructure to handle downloads by newly registered users. At peak times, however, their infrastructure requirements spike and they need enough to accommodate downloads by every active user. Over time, the disparity between average and peak will become more pronounced.

Using a Web-Scale model, you don't need to gear up for the peak. You certainly have to understand how you will deal with it, but you don't need to invest money up front in servers, networking equipment, or bandwidth reservations. As Jeff says:

Rather than continue to pile on webservers just for this purpose, which has somewhat diminishing returns, we have elected to move the client download over to Amazon's S3 service, which is basically a big file server.

In other words, their money isn't tied up in servers that sit idle except during those relatively rare times when they need to deal with peak traffic. Or, as a VC friend of mine told me recently, "We'd rather invest in brains than in servers."

Jeff was able to include some actual numbers in his post, and they are staggering:

In case you're curious, we switched over halfway during release day; but even for the tail 8 hours of the download rush, we averaged roughly 70 gigabytes of viewer download per hour. Then it settled down to a relatively steady stream of about 20-30 gigabytes per hour. In the last 23 hours we've transferred a total of ~900 gigabytes so far – which I'd estimate to be around 30,000-38,000 downloads. This does not include the first several hours of the download rush, which are typically the highest.

He also points out another advantage of the Web-Scale Computing model. Specifically, they don’t have to spend their time worrying about this anymore:

Hopefully your SL experience will be either unchanged or changed for the better- but on the webserver, we can all breathe a sigh of relief.

Clearly downloading bits from a website is important, but ideally you would like this to be a part of the infrastructure — reliable, transparent, and doing its job so well that you can almost forget that it's there.

Welcome to the Web-Scale world!

— Jeff;

Update: As is often the case with a blog, the post itself is only a starting point for an interesting conversation. I found this followup note in a comment on the post:

It just turned out that the S3 solution was ready for deployment immediately, whereas Akamai requires more negotiation. In other words, we already had an Amazon S3 account where I was testing something out, and then when we noticed the bandwidth was pegged, we made a fast decision to speed up our plans to put our viewer elsewhere, and chose S3.

Yes, there's that Web-Scale thing again. You need a place to make some bits available for high-volume downloading: you push them up to S3, set the ACL to public-read, and start handing out the URL. No planning, no negotiation, no setup charges or residual fees.
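
For the record, here is roughly what that workflow looks like, as a minimal sketch rather than anything Linden Lab actually ran: upload one large file with a public-read ACL and hand out the resulting URL. It uses Python and boto3, and the bucket, key, and file names are made up.

```python
# Minimal sketch of "push it up to S3, set the ACL to public-read, hand out the
# URL" using boto3. Bucket, key, and local file names are hypothetical.
import boto3

s3 = boto3.client("s3")
bucket = "client-downloads-example"     # hypothetical bucket
key = "viewer/client-installer.exe"     # hypothetical object key

# Note: newly created buckets block public ACLs by default these days, so the
# bucket's public-access settings would need to allow this.
s3.upload_file(
    "client-installer.exe",             # local build artifact (assumed name)
    bucket,
    key,
    ExtraArgs={"ACL": "public-read"},
)

# Hand out this URL; every download is then served directly by S3.
print(f"https://{bucket}.s3.amazonaws.com/{key}")
```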