Category: Amazon EC2


Amazon EC2 Reserved Instances with Windows

It seems to me that every time we release a new service or feature, our customers come up with ideas and requests for at least two more! As they begin to “think cloud” and to get their minds around all that they can do with AWS, their imaginations start to run wild and they aren’t shy about sharing their requests with us. We do our best to listen and to adjust our plans accordingly.

This has definitely been the case with EC2’s Reserved Instances. Reserved Instances allow you to make a one-time payment to reserve an instance of a particular type for a period of one or three years. Once reserved, hourly usage for that instance is billed at a price that is significantly reduced from the On-Demand price for the same instance type. As soon as we released Reserved Instances with Linux and OpenSolaris, our users started asking us to provide Reserved Instances with Microsoft Windows. Later, when we released Amazon RDS, they asked for RDS Reserved Instances (we’ve already committed to providing this feature)!

I’m happy to inform you that we are now supporting Reserved Instances with Windows. Purchases can be made using the AWS Management Console, ElasticFox, the EC2 Command Line (API) tools, or the EC2 APIs.
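If you’d rather script the purchase than click through the console, the flow looks roughly like the sketch below, written in Python with the boto3 SDK (a present-day library, used here purely for illustration; the instance type and filter values are examples, not recommendations):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Find one-year Reserved Instance offerings for Windows.
    offerings = ec2.describe_reserved_instances_offerings(
        InstanceType="m1.large",      # example instance type
        ProductDescription="Windows",
        Filters=[{"Name": "duration", "Values": ["31536000"]}],  # one year, in seconds
    )

    for o in offerings["ReservedInstancesOfferings"]:
        print(o["ReservedInstancesOfferingId"],
              o["FixedPrice"],    # the one-time payment
              o["UsagePrice"])    # the discounted hourly rate

    # Purchase a single instance from the first offering returned.
    offering_id = offerings["ReservedInstancesOfferings"][0]["ReservedInstancesOfferingId"]
    ec2.purchase_reserved_instances_offering(
        ReservedInstancesOfferingId=offering_id,
        InstanceCount=1,
    )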

As always, we will automatically optimize your pricing when we compute your AWS bill. We’ll charge you the lower Reserved Instance rate where applicable, to make sure that you always pay the lowest amount.

You can also estimate your monthly and one-time costs using the AWS Simple Monthly Calculator.
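For a rough sense of the math behind the calculator, here’s a back-of-the-envelope sketch. Every price below is made up for illustration, so check the EC2 price list for real numbers:

    # Hypothetical prices for illustration only.
    on_demand_hourly = 0.48   # On-Demand rate, $/hour
    one_time_fee = 1820.00    # Reserved Instance one-time payment
    reserved_hourly = 0.24    # Reserved Instance hourly rate

    hours_per_month = 730     # average hours in a month
    monthly_on_demand = on_demand_hourly * hours_per_month
    monthly_reserved = reserved_hourly * hours_per_month

    # Months of continuous usage before the one-time fee pays for itself.
    break_even_months = one_time_fee / (monthly_on_demand - monthly_reserved)
    print("On-Demand: $%.2f/month" % monthly_on_demand)
    print("Reserved:  $%.2f/month plus $%.2f up front" % (monthly_reserved, one_time_fee))
    print("Break-even after about %.1f months of steady usage" % break_even_months)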

— Jeff;

PS – If you have any wild and crazy AWS ideas of your own, feel free to post them in the appropriate AWS forum.

New EC2 Instance Type: m2.xlarge

We’ve added a new EC2 instance type to our repertoire. It is called the High Memory Extra Large (m2.xlarge) and has the following specs:

  • 17.1 GB of RAM.
  • 420 GB of local storage.
  • 64-bit platform.
  • 6.5 ECU (EC2 Compute Units), 2 virtual cores each with 3.25 ECU.

You can use this new instance type as a lower-cost option if you are already using Standard Extra Large instances. The new instance type is available now in all of the EC2 Regions (US-East, US-West, and EU).
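If you want to try one out from code, here’s a minimal sketch using Python and boto3 (the AMI ID is a placeholder; substitute one of your own):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch a single High Memory Extra Large instance.
    response = ec2.run_instances(
        ImageId="ami-xxxxxxxx",    # placeholder AMI ID
        InstanceType="m2.xlarge",
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])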

— Jeff;

That’s Flexibility, Baby!

Hi there, this is Simone Brunozzi, AWS Technology Evangelist for Europe and APAC. While Jeff Barr is in Japan, I’ll steal his keyboard to tell you a couple of nice Amazon Web Services success stories.

The first one is from our friends at ZapLive.tv, a German company that lets you launch your own web TV channel.

On Friday, Jan 22nd, 2010, something unprecedented happened: at 11:38 AM CST, Lily the Black Bear, in Minnesota, gave birth LIVE on the internet! Thousands of people rushed to WildEarth.TV to watch this wonderful event.

The broadcast on WildEarth.TV was produced by Doug Hajicek of Whitewolf Entertainment in association with Dr. Lynn Rogers of the North American Bear Center. Peaking at about 27,000 concurrent viewers, the broadcast was streamed across the zaplive.tv dynamic system.

Luckily for WildEarth, they were using Zaplive’s highly scalable infrastructure for load balancing live streams from different locations. The infrastructure is based on Wowza Media Server Pro and Amazon EC2. An origin server measures the load on the repeaters and distributes viewers across different EC2 repeaters, which deliver the stream around the world. Depending on the load, additional EC2 instances are launched. This combination yields a dynamically auto-scaled system that can handle hundreds of thousands of unique visitors in a few hours. At their peak, they had about 35 Large EC2 instances running.
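Zaplive hasn’t published their code, but the pattern just described (watch the load on the repeaters, launch another EC2 instance when they run hot) can be sketched in a few lines of Python with boto3. Every name and threshold below is hypothetical:

    import time
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    MAX_VIEWERS_PER_REPEATER = 800   # hypothetical capacity threshold

    def get_repeater_loads():
        # In a real system, the origin server would report live viewer
        # counts per repeater; this stub stands in for that measurement.
        return [750, 820, 790]

    def launch_repeater():
        # The AMI would contain a preconfigured Wowza media server.
        ec2.run_instances(ImageId="ami-xxxxxxxx",   # placeholder AMI
                          InstanceType="m1.large",
                          MinCount=1, MaxCount=1)

    while True:
        loads = get_repeater_loads()
        if loads and sum(loads) / len(loads) > MAX_VIEWERS_PER_REPEATER:
            launch_repeater()   # add capacity when repeaters run hot
        time.sleep(60)          # re-check every minute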

As you can see, thanks to the flexibility of Amazon EC2, they were able to quickly launch additional servers when needed, run them for as long as needed, and scale down when those servers were no longer necessary.

For the second success story we move from Minnesota to green Switzerland.

The Swiss Geoportal, www.geo.admin.ch, is now online and includes a great map viewer with 35 layers from various Swiss administrations, such as the Dufour Map, the transport network, and ground statistics. More is coming in 2010, including English support and up to 50 additional datasets.

Of course, Amazon Web Services (EC2 and S3) were used to implement this web 2.0 mapping application, which delivers fast performance under very high load.

Dr. David Oesch, the project coordinator, says:

Thanks again for providing such a great service…without AWS we would not have been able to achieve such a performance at such costs!

Hope you liked these two stories. Goodbye from warm Mumbai!
Simone Brunozzi
(@simon on Twitter)
Technology Evangelist for AWS, Europe and APAC.

Server Density – Easy Server Monitoring

The Server Density monitoring service now supports Amazon EC2 using data collected and made available via Amazon CloudWatch and an optional lightweight monitoring agent.

Provided as a fully managed hosting service, Server Density can provide a snapshot of server status at any time. Alerts can be triggered from any of the metrics and can be delivered via cell phone (SMS), email, or iPhone. All of the data can be graphed, and a “tactical overview” dashboard provides a quick look at the latest monitored values for each server under management. 
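For the curious, here’s roughly what pulling the underlying per-instance CloudWatch data looks like in Python with boto3 (the instance ID is a placeholder; this is a generic sketch, not Server Density’s actual code):

    import boto3
    from datetime import datetime, timedelta, timezone

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Average CPU utilization for one instance over the past hour,
    # in five-minute periods.
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-xxxxxxxx"}],  # placeholder
        StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
        EndTime=datetime.now(timezone.utc),
        Period=300,
        Statistics=["Average"],
    )
    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], round(point["Average"], 1), "%")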

There’s a free level (one server and core metrics) and a full level (the whole nine yards) for $16 per server per month, with volume discounts available.

— Jeff;

PS – Check out the demo video!

Cloud MapReduce from Accenture

Accenture is a Global Solution Provider for AWS. As part of their plan to help their clients extend their IT provisioning capabilities into the cloud, they offer a complete Cloud Computing Suite including the Accenture Cloud Computing Accelerator, the Cloud Computing Assessment Tool, the Cloud Computing Data Processing Solution, and the Accenture Web Scaler.

Huan Liu and Dan Orban of Accenture Technology Labs sent me some information about one of their projects, Cloud MapReduce. Cloud MapReduce implements Google’s MapReduce programming model using Amazon EC2, S3, SQS, and SimpleDB as a cloud operating system.

According to the research report on Cloud MapReduce, the resulting system runs at up to 60 times the speed of Hadoop (this depends on the application and the data, of course). There’s no master node, so there’s no single point of failure or processing bottleneck. Because it takes advantage of high-level constructs in the cloud for data (S3) and state (SimpleDB) storage, along with EC2 for processing and SQS for message queuing, the implementation is two orders of magnitude simpler than Hadoop. The research report includes details on the use of each service; they’ve also published some good info about the code architecture.
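The Cloud MapReduce code itself is the authoritative reference, but the core loop (workers pull map tasks from SQS, write intermediate results to S3, and let message visibility handle retries) can be sketched like this in Python with boto3; the queue URL, bucket name, and mapper are all hypothetical:

    import boto3

    sqs = boto3.client("sqs", region_name="us-east-1")
    s3 = boto3.client("s3", region_name="us-east-1")

    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/map-tasks"  # hypothetical
    BUCKET = "mapreduce-intermediate"                                         # hypothetical

    def map_fn(record):
        # A trivial word-count mapper, for illustration.
        return [(word, 1) for word in record.split()]

    while True:
        msgs = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1)
        if "Messages" not in msgs:
            break  # queue drained; no master node coordinates the workers
        msg = msgs["Messages"][0]
        results = map_fn(msg["Body"])
        # Persist intermediate results to S3, keyed by the message ID.
        s3.put_object(Bucket=BUCKET,
                      Key="intermediate/" + msg["MessageId"],
                      Body=repr(results).encode())
        # Deleting the message marks the task complete; if a worker dies
        # first, the message becomes visible again and another worker
        # retries the task.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])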

Download the code, read the tutorial, and give it a shot!

— Jeff;

More on ADFS with Amazon EC2

Thanks to those who wrote to me with ideas about using ADFS to federate with Windows instances running on Amazon EC2. My original post was picked up by a couple of other blogs, which I’d like to acknowledge here.

As part of a joint project between Amazon Web Services and Microsoft, I’m proud to announce the release of a whitepaper written by David Chappell that explores these federation scenarios in more detail. David begins his paper with an additional scenario: your Amazon EC2 resources are placed in an Amazon Virtual Private Cloud (Amazon VPC) and joined to your own corporate domain; here, there’s no use of ADFS. Then he illustrates the two scenarios I mentioned before, and shows how each would work with both ADFS 1.1 and ADFS 2.0.

Soon we’ll release a companion step-by-step guide that walks you through building these federation scenarios in a lab. From this you’ll gain the skills and experience necessary to implement them in your production environment. I’ll announce here when the guide is available for download.

> Steve <

Federation with ADFS in Windows Server 2008

As I’ve talked with customers who have deployed or plan to deploy Windows Server 2008 instances on Amazon EC2, one feature they commonly inquire about is Active Directory Federation Services (ADFS). There seems to be a lot of interest in ADFS v2 with its support for WS-Federation and Windows Identity Foundation. These capabilities are fully supported in our Windows Server 2008 AMIs and will work with applications developed for both the “public” side of AWS and those you might run on instances inside Amazon VPC.

I’d like to get a better sense of how you might use ADFS. When you state that you need “federation,” what do you want to do? I imagine most scenarios involve applications on Amazon EC2 instances obtaining tokens from an ADFS server located inside your corporate network. This makes sense when your users are in your own domains and the applications running on Amazon EC2 are yours.

Another scenario involves a forest living entirely inside Amazon EC2. Imagine you’ve created the next killer SaaS app. As customers sign up, you’d like to let them use their own corpnet credentials rather than bother with creating dedicated logons (your customers will love you for this). You’d create an application domain in which you’d deploy your application, configured to trust tokens only from the application’s ADFS. Your customers would configure their ADFS servers to issue tokens not for your application but for your application domain ADFS, which in turn issues tokens to your application. Signing up new customers is now much easier.

What else do you have in mind for federation? How will you use it? Feel free to join the discussion: I’ve started a thread on the forums, so please add your thoughts there. I’m looking forward to some great ideas.

> Steve <

Third-Party AWS Tracking Sites

A couple of really cool third-party AWS tracking sites have sprung up lately. Some of these sites make use of AWS data directly and others measure it using their own proprietary methodologies. I don’t have any special insight into the design or operation of these sites, but at first glance they appear to be reasonably accurate.

Cloud Exchange

Tim Lossen’s Cloud Exchange site tracks the price of EC2 Spot Instances over time and displays the accumulated data in graphical form, broken down by EC2 Region, Instance Type, and Operating System.

Spot History

The Spot History site also tracks the price of EC2 Spot Instances over time, though this one doesn’t break the prices down by Region.

Cloudelay

Marco Slot’s Cloudelay site measures latency from your current location (i.e., your browser) to Amazon S3 and Amazon CloudFront using some clever scripting techniques.
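Cloudelay does its measurement in the browser; a rough server-side analogue of the same idea looks like this in Python (the URL is a placeholder for any small, publicly readable S3 object):

    import time
    import urllib.request

    URL = "https://s3.amazonaws.com/my-public-bucket/tiny-object"  # placeholder

    samples = []
    for _ in range(5):
        start = time.monotonic()
        urllib.request.urlopen(URL).read()   # timed round trip
        samples.append((time.monotonic() - start) * 1000)

    print("median latency: %.0f ms" % sorted(samples)[len(samples) // 2])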

Timetric

Timetric tracks the price of EC2 Spot Instances and displays them in a number of ways, including spot price as a percentage of the On-Demand price and a bar chart. They also provide access to the underlying data for DIY charting.
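If you’d rather pull the raw numbers yourself, the EC2 API exposes the same spot price data these sites chart. A minimal sketch in Python with boto3:

    import boto3
    from datetime import datetime, timedelta, timezone

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Spot prices for one instance type over the past day.
    history = ec2.describe_spot_price_history(
        InstanceTypes=["m1.large"],            # example instance type
        ProductDescriptions=["Linux/UNIX"],
        StartTime=datetime.now(timezone.utc) - timedelta(days=1),
    )
    for p in history["SpotPriceHistory"]:
        print(p["Timestamp"], p["AvailabilityZone"], p["SpotPrice"])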

— Jeff;

Fotopedia and AWS

Hi there, this is Simone Brunozzi, Technology Evangelist for AWS in Europe. I’ll steal the keyboard from Jeff Barr for a few minutes to share something really interesting with you: it is always fascinating to see how our customers are using Amazon Web Services to power their businesses.

Olivier Gutknecht, Director of Server Software at French-based Fotonauts Inc., spent some time with me to describe how they use AWS to power Fotopedia, a collaborative photo encyclopedia.

We have been very lucky with our development timeframe: we developed this project while Amazon was building its rich set of services. Early in the development we tested Amazon S3 as the main data store for our images and thumbnails. Switching our first implementation to S3 was a matter of days. Last year, when our widgets were featured on the LeWeb 08 site, we enabled Amazon CloudFront for distribution of our images – literally days after the official CloudFront introduction. Before this, we moved our processing to EC2 instances and persistent EBS volumes. And in recent months, we integrated Elastic Load Balancing and Elastic MapReduce into our stack.

It is interesting to see how the AWS services replaced our initial implementation. We’re not in the business of configuring Hadoop for the cloud, for example, so we’re quite happy to use such a service if it fits our needs. The same happened to our HTTP fault tolerance layer, quickly replaced with AWS ELB.

So Amazon S3, CloudFront, and EC2 (with Elastic Block Store (EBS) volumes for the data stores) are the three key services that they use to power Fotopedia, but they also take advantage of other AWS services.

We regularly analyze a full Wikipedia dump to extract abstracts and compute a graph of related articles to build our photo encyclopedia. We use Elastic MapReduce with custom Hadoop jobs and Pig scripts to analyze the Wikipedia content – it’s nice to be able to go from eight hours to less than two hours of processing time.

We’re also using on-demand instances and Hadoop to analyze our logs: all service logs are aggregated and archived into an S3 bucket, and we regularly analyze these to extract business metrics and user-visible stats that we then integrate into the site.
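Fotopedia’s actual jobs are their own, but launching this kind of Elastic MapReduce job flow from code looks roughly like the sketch below, written with today’s boto3 SDK (the bucket names, the JAR, and the cluster sizing are all hypothetical):

    import boto3

    emr = boto3.client("emr", region_name="us-east-1")

    response = emr.run_job_flow(
        Name="wikipedia-related-articles",
        ReleaseLabel="emr-6.15.0",
        Applications=[{"Name": "Hadoop"}],
        Instances={
            "MasterInstanceType": "m5.xlarge",
            "SlaveInstanceType": "m5.xlarge",
            "InstanceCount": 4,
            "KeepJobFlowAliveWhenNoSteps": False,  # shut down when done
        },
        Steps=[{
            "Name": "related-articles",
            "ActionOnFailure": "TERMINATE_CLUSTER",
            "HadoopJarStep": {
                "Jar": "s3://my-bucket/wikipedia-analysis.jar",   # hypothetical job
                "Args": ["s3://my-bucket/wikipedia-dump/",        # input
                         "s3://my-bucket/related-articles/"],     # output
            },
        }],
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )
    print(response["JobFlowId"])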

And there’s the secret sauce that binds this together: Chef. Chef is a young and extremely powerful systems integration framework. The Fotonauts team is working on a detailed “how we use Chef” blog post, because they consider Chef to be an essential component of their stack.

For instance, when we provision a new EC2 instance, we set up the instance with a simple boot script. On first boot, the instance automatically configures our ssh keys, installs some base packages (essentially Ruby), and registers itself in our DNS. Finally, Chef registers the instance into our Chef server. At this point we have a “generic”, passive machine added to our grid. Then we just associate a new role with this instance – let’s say we need a new backend for our main Rails application. At this point, it is just a matter of waiting for the instance to configure itself: installing Rails and monitoring probes, checking out our source code, and finally launching the application. A few minutes later, the machine running our load balancer and web cache notices the new backend and immediately reconfigures itself.
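Fotonauts’ scripts are their own, but the general shape of that first boot is easy to sketch: pass a small script as EC2 user data and let the instance run it at startup. Everything below (the AMI and the helper scripts) is hypothetical:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # A first-boot script, passed as user data and run by the instance at startup.
    bootstrap = """#!/bin/bash
    set -e
    apt-get update && apt-get install -y ruby       # base packages
    /usr/local/bin/register-dns.sh "$(hostname)"    # hypothetical DNS helper
    /usr/local/bin/register-chef-node.sh            # hypothetical Chef registration
    """

    ec2.run_instances(
        ImageId="ami-xxxxxxxx",     # placeholder AMI
        InstanceType="m1.large",
        MinCount=1, MaxCount=1,
        UserData=bootstrap,
    )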

It will be interesting to see how they benefit from the Boot-From-EBS feature that we added recently.

What is great about this Amazon & Chef setup is that it helps you think about your application globally. Running a complex application like Fotopedia is not just a matter of running some Rails code and a MySQL database, but of coordinating a long list of software services: some written by us, some installed as packages from the operating system, some built and installed from source code (sometimes because the software is so recent that it is not available in our Linux distribution, sometimes because we need to patch it for our needs). Automation is the rule, not the exception.

But putting the technical questions aside, our decision to base our infrastructure on Amazon Web Services has had several positive consequences for our process and workflow: less friction to experiment and prototype, an easy way to set up a testing and development platform, and more control over our production costs and requirements. We also recently migrated some instances to Reserved Instance billing.

I asked Olivier what’s next in their AWS Experiments and this is what he told me: “Amazon Relational Database Service.”

Thanks Olivier, and good luck with Fotopedia!

Simone Brunozzi (@simon on Twitter)
Technology Evangelist for AWS in Europe

Amazon Virtual Private Cloud Opens Up

I am happy to announce that the Amazon Virtual Private Cloud (VPC) is now available to all current and future Amazon EC2 customers. VPC users are charged only for VPN connection hours and for data transfer, making this a very cost-efficient way to create a secure and seamless bridge between a company’s existing IT infrastructure and the AWS cloud.

During the limited beta test, VPC users have seen that they can easily add a scalable, on-demand component to their infrastructure repertoire. They’ve used it to support a number of scenarios including development and testing, batch processing, and disaster recovery. We’re excited to be able to open up the VPC to the entire EC2 user base and look forward to hearing about even more usage scenarios.

We will enable all EC2 accounts for VPC today (this will take a couple of hours).

Start out by reading the VPC Technical Documentation, including the VPC Getting Started Guide, the VPC Network Administrator Guide, and the VPC Developer Guide.
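For a feel for the moving parts, here’s a minimal setup sketch in Python with boto3; the CIDR blocks and the customer gateway address are placeholders, and the guides above cover the real procedure:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create the VPC and a subnet inside it (CIDR blocks are examples).
    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
    subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.1.0/24")["Subnet"]

    # Describe your side of the VPN: the public IP of your on-premises
    # VPN device (placeholder address from the documentation range).
    cgw = ec2.create_customer_gateway(
        Type="ipsec.1", PublicIp="203.0.113.12", BgpAsn=65000
    )["CustomerGateway"]

    # Create and attach the AWS side, then bring up the VPN connection.
    vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
    ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGatewayId"], VpcId=vpc["VpcId"])
    vpn = ec2.create_vpn_connection(
        Type="ipsec.1",
        CustomerGatewayId=cgw["CustomerGatewayId"],
        VpnGatewayId=vgw["VpnGatewayId"],
    )["VpnConnection"]
    print(subnet["SubnetId"], vpn["VpnConnectionId"])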

— Jeff;