Elastic Load Balancing, Auto Scaling, and CloudWatch Resources

by Jeff Barr | on | in Amazon EC2 |

Here are some good resources for current and potential users of our Elastic Load Balancing, Auto Scaling, and Amazon CloudWatch features:

Version 1.8a of the popular Boto library for AWS now supports all three of the new features. Written in Python, Boto provides access to Amazon EC2, Amazon S3, Amazon SQS, Amazon Mechanical Turk, Amazon SimpleDB, and Amazon CloudFront. The Elastician Blog has some more info.


The Elastician Blog also has a good article with a complete example of how to use CloudWatch from Boto. After creating the connection object, one call initiates the monitoring operation and two other calls provide access to the collected statistics.


The Paglo monitoring system can now make use of the statistics collected by CloudWatch. You will need to install the open source Paglo Crawler on your EC2 instances. More info on Paglo can be found here.


The IT Architects at The Server Labs have put together some great blog posts. The first one, Setting up a load-balanced Oracle Weblogic cluster in Amazon EC2, contains all of the information needed to set up a two node cluster. The second one, Full Weblogic Load-Balancing in EC2 with Amazon ELB, shows how to use the Elastic Load Balancer to front a pair of Apache servers which, in turn, direct traffic to a three node Weblogic cluster to increase scalability and availability.


Speaking of availability and durability, you should definitely check out the DZone reference card on the topic. The card provides a detailed yet concise introduction to the two topics in just 6 pages. Topics covered include horizontal scalability, vertical scalability, high availability, measurement, analysis, load balancing, application caching, web caching, clustering, redundancy, fault detection, and fault tolerance.


Author and blogger Ramesh Rajamani wrote a detailed paper on the topic of Dynamically Scaling Web Applications in Amazon EC2. Although the paper predates the release of the Elastic Load Balancer and Auto Scaling, the approach to scaling is still valid. Ramesh shows how to use Nginx and Nagios to build a scalable cluster.


The Serk Tools Blog has a post on Amazon Elastic Load Balancer Setup. The post includes an architectural review of the Elastic Load Balancer service, detailed directions to create an Elastic Load Balancer instance, information about how to set up a CNAME record in your DNS server, and directions on how to set up health checks.


Arfon Smith wrote a blog post detailing his experience moving the Galaxy Zoo from HAProxy to Elastic Load Balancing. He notes that it took him just 15 minutes to make the switch and that he’s now saving $150 per month.


Update: After I wrote this post, two more good resources were brought to my attention!

Shlomo Swidler of MyDrifts.com wrote to tell me about his post. He covers the two-level elasticity of Elastic Load Balancing and describes some testing strategies. The first level of elasticity is provided by DNS when it maps the CNAME of an Elastic Load Balancer instance to the actual endpoint of the instance. Shlomo correctly points out that this allows inbound network traffic to scale. The second level is provided by the Elastic Load Balancer itself as it distributes traffic across multiple EC2 instances. The latter sections of the post provide a testing strategy for a system powered by one or more Elastic Load Balancer instances.


The Typica AWS library for Java has included CloudWatch support for a few months. You can read this post to learn more about enabling and fetching CloudWatch metrics through Typica.


I hope you find these resources to be helpful!

— Jeff;

Amazon Elastic MapReduce Now Available in Europe

by Jeff Barr | on | in Amazon EC2 |

Earlier this year I wrote about Amazon Elastic MapReduce and the ways in which it can be used to process large data sets on a cluster of processors. Since the announcement, our customers have wholeheartedly embraced the service and have been doing some very impressive work with it (more on this in a moment).

Today I am pleased to announce that Amazon Elastic MapReduce job flows can now be run in our European region. You can launch jobs in Europe by simply choosing the new region from the menu. The jobs will run on EC2 instances in Europe and usage will be billed at the European rates.

Because the input and output locations for Elastic MapReduce jobs are specified in terms of URLs to S3 buckets, you can process data from US-hosted buckets in Europe, storing the results in Europe or in the US. Since this is an internet data transfer, the usual EC2 and S3 bandwidth charges will apply.
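For example, a job flow running in Europe could read from a US bucket and write to a European one simply by virtue of the S3 URLs it is given (the bucket names below are hypothetical, and the s3n:// scheme is the one Hadoop-based job flows typically use):

```
Input:   s3n://my-us-logs/2009/07/        (bucket located in the US)
Output:  s3n://my-eu-results/wordcount/   (bucket located in Europe)
```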

Our customers are doing some interesting things with Elastic MapReduce.

At the recent Hadoop Summit, online shopping site ExtraBux described their multi-stage processing pipeline. The pipeline is fed with data supplied by their merchant partners. This data is preprocessed on some EC2 instances and then stored on a collection of Elastic Block Store volumes. The first MapReduce step processes this data into a common format and stores it in HDFS form for further processing. Additional processing steps transform the data and product images into final form for presentation to online shoppers. You can learn more about this work in Jinesh Varia’s Hadoop Summit Presentation.

Online dating site eHarmony is also making good use of Elastic MapReduce, processing tens of gigabytes of data representing hundreds of millions of users, each with several hundred attributes to be matched. According to an article on SearchCloudComputing.com, they are doing this work for $1,200 per month, a considerable savings from the $5,000 per month that they estimated it would cost them to do it internally.

We’ve added some articles to our Resource Center to help you to use Elastic MapReduce in your own applications. Here’s what we have so far:

You should also check out AWS Evangelist Jinesh Varia in this video from the Hadoop Summit:

— Jeff;

PS – If you have a lot of data that you would like to process on Elastic MapReduce, don’t forget to check out the new AWS Import/Export service. You can send your physical media to us and we’ll take care of loading it into Amazon S3 for you.

Scaling to the Stars

by Jeff Barr | on | in Amazon EC2 |

Recently I blogged about The Server Labs, a consultancy that specializes in high-performance computing, including on Amazon Web Services.

Here's another story that I found fascinating: nominally it is about how The Server Labs uses Amazon Web Services as a scale-out solution that also involves Oracle databases; really, though, it is about space exploration (or should I say nebula computing?). It began with an email asking whether there would be a problem running up to 1,000 Amazon EC2 High-CPU Extra-Large instances.

The Server Labs is a software development/consulting group based in Spain and the UK that works closely with the European Space Agency, and they needed to prove the scalability of an application that they helped build for ESA’s Gaia project. In addition to the instances, they also requested 2 large and 3 X-Large instances to host Oracle databases that coordinate the work being performed by the high-CPU instances.

Gaia's goal is to make the largest, most precise three-dimensional map of our Galaxy by surveying an unprecedented number of stars: more than one billion. This, by the way, is less than 1% of the stars in our galaxy! The plan is to launch the mission in 2011, collect data until 2017, and then publish a completed catalog no later than 2019.

I had the opportunity to see a PowerPoint deck created and presented by The Server Labs founder, Paul Parsons, and their software architect, Alfonso Olias, who is currently assigned to this project.

The deck explained that the expected number of samples for Gaia is 1 billion stars x 80 observations x 10 readouts, which is approximately 1 x 10^12 samples, or as much as 42 GB per day transferred back to Earth. There's a slide in the deck that says: "Put another way, if it took 1 millisecond to process one image, the processing time for just one pass through the data on a single processor would take 30 years."
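Those estimates are easy to sanity-check (assuming, as the slide does, one millisecond per sample on a single processor):

```python
# Verify the Gaia sample-count and processing-time estimates from the deck.
stars = 1_000_000_000          # one billion stars
observations = 80              # observations per star
readouts = 10                  # readouts per observation

samples = stars * observations * readouts
print(f"{samples:.1e} samples")   # 8.0e+11, rounded up to ~1e12 in the deck

# At 1 millisecond per sample on a single processor:
seconds = samples * 0.001
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.0f} years")       # ~25 years, the same order as the quoted 30
```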

As the spacecraft travels, it will continuously scan the sky in 0.7 degree arcs, sending the data back to Earth. Processing that data calls for some involved algorithms, and the result is a fairly complex computing architecture linked to an Oracle database. Scheduling the cluster of computational servers is not quite so complicated; it is based on a scheduler focused on keeping each machine as busy as possible.

However, the amount of data to process is not steady; it will increase over time, which means that infrastructure needs will also vary over time. And of course idle computing capacity is deadly to a budget.

Solving large computational problems usually calls for grid computing, and this case is no different, except that, as mentioned above, the required size of the grid is not constant. Because Amazon Web Services is on-demand, it's possible to apply just enough computational resources to the problem at any given time.

In their test, The Server Labs set up an Oracle database on an AWS Large instance running a pre-defined public AMI. They then created 5 EBS volumes of 100 GB each and attached them to the instance.

Then they created Amazon Machine Images (AMIs) to run the actual analysis software. These images were based on large instances and included Java, Tomcat, the AGIS software, and an rc.local script to self-configure an instance when it's launched.

The requirements break down as follows:

To process 5 years of data for 2 million stars, they will need to run 24 iterations of 100 minutes each, which works out to 40 hours on a grid of 20 Amazon EC2 instances. A secondary update has to be run once and requires 30 minutes per run, or 5 hours on a grid of 20 EC2 instances.

The numbers for the full one-billion-star project extrapolate out more or less as follows: they calculated that analyzing 100 million primary stars, plus 6 years of data, will require a total of 16,200 hours of a 20-node EC2 cluster. That's an estimated total computing cost of 344,000 EUR. By comparison, an in-house solution would cost roughly 720,000 EUR (at today's prices), which doesn't include electricity, storage, or sys-admin costs. (Storage alone would be an additional 100,000 EUR.)
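A quick sanity check of the arithmetic above, using only the figures quoted in the deck:

```python
# Test run: 2 million stars, 5 years of data
iterations = 24
minutes_per_iteration = 100
hours = iterations * minutes_per_iteration / 60
print(hours)                     # 40.0 hours on the 20-instance grid

# Full project: 16,200 hours on a 20-node cluster
node_hours = 16_200 * 20
print(node_hours)                # 324000 EC2 node-hours
print(344_000 / node_hours)      # ~1.06 EUR per node-hour, all-in
```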

It's really exciting to see the Cloud used in this manner, especially when you realize that an entire set of problems that were beyond economic possibility can now be solved because the Cloud became a reality.

Mike

Webinar: How to Create Secure Test and Dev Environments on the Cloud

by Jeff Barr | on | in Amazon EC2 |

Amazon Web Services, CohesiveFT, and RightScale will participate in a webinar titled “How to Create Secure Test and Dev Environments on the Cloud.”

Along with Michael Crandell and Edward Goldberg of RightScale, Simone Brunozzi of Amazon Web Services and Patrick Kerpan of CohesiveFT will show you how you can save time and money by running your entire application testing infrastructure in the cloud. They will discuss an agile approach to rapid prototyping, the creation of a test and development environment that replicates the final deployment environment, and how to build a secure VPN environment.

The webinar is free but registration is required.

— Jeff;

Setting up a Load-Balanced Oracle Weblogic Cluster in Amazon EC2

by Jeff Barr | on | in Amazon EC2 |

Update (January 29, 2016) – This blog post is six years old and many of the original links are now out of date. Take a look at the newer (albeit not by much) Oracle AMIs page for some alternatives.

— Jeff;


Oracle recently released several middleware Amazon Machine Images (AMIs) to the community. I want to point out a detailed blog entry by Paul Parsons from The Server Labs that describes how to run a Weblogic Server on Amazon EC2, incorporating the load balancing feature inside the Monitoring, Auto Scaling and Elastic Load Balancing Service for Amazon EC2.

The following paragraphs are straight out of Paul's full blog post, which of course you should read in its entirety if this topic is relevant to your interests.

Oracle recently made available a set of AMI images suitable for use with the Amazon EC2 cloud computing platform. I found the two images (32-bit and 64-bit) that contain Weblogic (along with Oracle Enterprise Linux 5 and JRockit) the most interesting of the lot. This article will explain how to set up a basic two-node Weblogic cluster using the 32-bit Weblogic image provided by Oracle with an Amazon Elastic Load Balancer (ELB). In future articles, I will demonstrate how to set up a more complicated cluster with Apache Web Server instances balancing the load between many weblogic cluster members.

You can set up a Weblogic cluster in EC2 in very little time, which makes it great for testing complicated Weblogic setups without having to mess around on your local machine or trying to scrape together the necessary hardware. This type of configuration would also be suitable for deploying a production application, though you'd have to check the licensing implications with Oracle if you wanted to do this.

Note that this article assumes a basic level of familiarity with using Amazon web services.

Mike

AWS Management Console Support for Reserved Instances

by Jeff Barr | on | in Amazon EC2 |

The AWS Management Console now has support for our new Reserved Instances feature, previously announced in this very blog. You can now purchase new Reserved Instances and see your existing holdings with point-and-click ease.

The EC2 tab of the console has a new button:

You can see your existing set of Reserved Instances:

And you can purchase additional Reserved Instances:

This new feature should make it easier than ever for you to enjoy the cost benefit that comes with the use of one or more Reserved Instances.

— Jeff;

EC2 and Wowza Media Support Belgium’s Largest Live Streaming Event

by Jeff Barr | on | in Amazon EC2 |

Imagine that you need to prepare the internet infrastructure to support a live event that:

  • Will host a streaming video,
  • Will start at a time that you can’t control,
  • Will be of an unknown duration,
  • May attract a worldwide audience, and
  • Happens once in a blue moon.

You can’t buy the infrastructure, since you’ll need it just once. Even then you wouldn’t know how much to get. Traditional hosting would require you to make a long term commitment and you still wouldn’t know how much to reserve. Cloud computing, once again, turns out to solve these problems and to enable hundreds of thousands of people to witness a relatively rare event — the birth of an elephant in captivity!

On May 16th and 17th, over 350,000 unique visitors were able to watch the birth of Kai-Mook, the first elephant ever born at the Antwerp Zoo in Belgium. This amazing event was streamed live from a number of Amazon EC2 servers running the Wowza Media Server Pro product.

The statistics for this event were themselves elephantine! In advance of the event, about 50,000 people registered to receive an SMS alert when the birth was imminent. When the alert went out, the system scaled up quickly and was soon streaming live video to over 30,000 concurrent users, helped by some good media coverage including a BBC article. The users watched for an average of 1 hour and 35 minutes and the live event lasted for a total of 42 hours. Behind the scenes, 170 EC2 Large instances handled the streaming.

Note: The original version of the preceding paragraph claimed that the event pumped out 34 Gbps (gigabits per second) of data. It turns out that this was an optimistic and somewhat fuzzy estimate.

Videos, photos, and more are available at the Baby-Oliphant site, developed by interactive agency Boondoggle. CDN provider Rambla used a combination of AWS and their own infrastructure for this project. The original video is here. There’s also a Flickr Photostream. Naturally enough, there’s also a Kai-Mook blog and a complete genealogy. Weighing in at 80 kilograms, Kai-Mook has 12 siblings on his father’s side and 3 more on his mother’s.

As you can see, EC2 and Wowza Media Server Pro were able to support this event in fine style. Billing for the Wowza product is handled through Amazon DevPay so they didn’t have to pay an arm and a leg (or a trunk?) for an excessive number of software licenses to support this unique event.

— Jeff;

New Features for Amazon EC2: Elastic Load Balancing, Auto Scaling, and Amazon CloudWatch

by Jeff Barr | on | in Amazon EC2 |

We are working to make it even easier for you to build sophisticated, scalable, and robust web applications using AWS. As soon as you launch some EC2 instances, you want visibility into resource utilization and overall performance. You want your application to be able to scale on demand based on traffic and system load. You want to spread the incoming traffic across multiple web servers for high availability and better performance. You want to focus on building an application that takes advantage of the powerful infrastructure available in the cloud, while avoiding system administration and operational burdens (“The Muck,” as Jeff Bezos once called it).

Today, we are bringing you a lot closer to that world! The load balancing, auto scaling, and cloud monitoring features that I talked about earlier are now available. The features work together to help you to build highly scalable and highly available applications. Amazon CloudWatch monitors your Amazon EC2 capacity, Auto Scaling dynamically scales it based on demand, and Elastic Load Balancing distributes load across multiple instances in one or more Availability Zones. The measurements collected by Amazon CloudWatch provide Auto Scaling with the information needed to run enough Amazon EC2 instances to deal with the traffic load. Auto Scaling updates the Elastic Load Balancing service when new instances are launched or terminated to automatically scale the load-balanced capacity. You can instantiate, configure, and deploy these important system architecture components in seconds.

Amazon CloudWatch tracks and stores a number of per-instance performance metrics including CPU load, Disk I/O rates, and Network I/O rates. The metrics are rolled up at one-minute intervals and are retained for two weeks. Once stored, you can retrieve metrics across a number of dimensions including Availability Zone, Instance Type, AMI ID, or Auto Scaling Group. Because the metrics are measured inside Amazon EC2, you do not have to install or maintain monitoring agents on every instance that you want to monitor. You get real-time visibility into the performance of each of your Amazon EC2 instances and can quickly detect underperforming or underutilized instances.

Auto Scaling lets you define scaling policies driven by metrics collected by Amazon CloudWatch. Your Amazon EC2 instances will scale automatically based on actual system load and performance but you won’t be spending money to keep idle instances running. The service maintains a detailed audit trail of all scaling operations. Auto Scaling uses a concept called an Auto Scaling Group to define what to scale, how to scale, and when to scale. Each group tracks the status of an application running across one or more EC2 instances. A set of rules or Scaling Triggers associated with each group define the system conditions under which additional EC2 instances will be launched or unneeded EC2 instances terminated. Each group includes an EC2 launch configuration to allow for specification of an AMI ID, instance type, and so forth.

Finally, the Elastic Load Balancing feature makes it easy for you to distribute web traffic across Amazon EC2 instances residing in one or more Availability Zones. You can create a new Elastic Load Balancer in minutes. Each one contains a list of EC2 instance IDs, a public-facing URL, and a port number. You will need to use a CNAME record in your site’s DNS entry to associate this URL with your application. You can use Health Checks to ascertain the health of each instance via pings and URL fetches; the load balancer stops sending traffic to unhealthy instances until they recover.
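For example, if your new Elastic Load Balancer's DNS name were mylb-1234567890.us-east-1.elb.amazonaws.com (a made-up name for illustration), the CNAME record in your zone file would look something like this:

```
www.example.com.   300   IN   CNAME   mylb-1234567890.us-east-1.elb.amazonaws.com.
```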

Here’s how the services fit together:

All of this functionality is provided in web service and command-line form:

  • You can call ListMetrics to get a list of statistics collected by Amazon CloudWatch, and then call GetMetricStatistics to retrieve them. Your call to GetMetricStatistics can include a number of parameters to specify the date range, desired metrics and statistics, metric granularity, and more. You can also use mon-list-metrics and mon-get-stats from the command line. There’s a lot more info in the Developer Guide (HTML or PDF) and the Quick Reference Card.
  • On the load balancing side, you start out by calling CreateLoadBalancer to create an Elastic Load Balancer, and will receive a DNS name in return. You can include a list of Availability Zones in the call or you can add them later using EnableAvailabilityZonesForLoadBalancer. From there you can add any number of health checks using ConfigureHealthCheck. A call to RegisterInstancesWithLoadBalancer will add your Amazon EC2 instances to the Elastic Load Balancer, and load balancing will commence. You can use elb-create-lb, elb-enable-zones-for-lb, elb-configure-healthcheck, and elb-register-instances-with-lb from the command line. Again, there’s a lot more info in the Developer Guide (HTML or PDF) and the Quick Reference Card.
  • For Auto Scaling you begin by calling CreateAutoScalingGroup, naming the group and providing the information needed to launch suitably configured Amazon EC2 instances. You then establish the scaling parameters using the CreateOrUpdateScalingTrigger function. The service will then launch Amazon EC2 instances as indicated by the scaling parameters. You can call DescribeScalingActivities at any point to fetch a list of recent scaling activities (instance launches and terminations). Command line equivalents are as-create-autoscaling-group, as-create-or-update-trigger, and as-describe-scaling-activities. Again, there’s a lot more info in the Developer Guide (HTML or PDF) and the Quick Reference Card.
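As a concrete illustration, here is roughly what the Elastic Load Balancing command-line sequence looks like. The load balancer name, zone, ports, and instance IDs below are placeholders, and the exact flag syntax may vary by tool version; see the Quick Reference Card for the authoritative details.

```shell
# Create a load balancer in one Availability Zone
elb-create-lb my-lb --availability-zones us-east-1a \
    --listener "protocol=http, lb-port=80, instance-port=8080"

# Add a health check: fetch /index.html on port 8080 every 30 seconds
elb-configure-healthcheck my-lb --target "HTTP:8080/index.html" \
    --interval 30 --timeout 3 --healthy-threshold 2 --unhealthy-threshold 2

# Register two instances; load balancing commences
elb-register-instances-with-lb my-lb --instances i-11111111,i-22222222
```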

If you’re signed up for the Amazon EC2 service, you’re already registered to use all of these new features and can begin using them via the web service APIs or Command Line tools. These new features are currently available in the U.S. region with EU region availability coming in the next few months.

You can use these services to make your AWS applications perform better without sacrificing control of your application, freedom of development, choice of tools, speed of deployment, or any other kind of flexibility. You can be up and running with these new services in a matter of minutes. All of these new features are supported through our public forums and also through AWS Premium Support.

Morning Update: As always, a few interesting things came up after I put this post out last night:

  1. Amazon CTO Werner Vogels wrote about these new features in his blog post, Automating the management of Amazon EC2 using Amazon CloudWatch, Auto Scaling and Elastic Load Balancing.
  2. RightScale founder Thorsten von Eicken also wrote about them in his post, Amazon adds Load balancing, Monitoring, and Auto-Scaling.
  3. There’s a good discussion taking place on a Hacker News thread.

— Jeff;

Quetzall CloudCache

by Jeff Barr | on | in Amazon EC2 |

Marc from Quetzall sent me a note about their new CloudCache product. CloudCache is a fast, lightweight key-value caching system designed for use within the cloud. Each key can optionally have an associated TTL (time to live); once the TTL is reached, the key and its value are removed from the cache.

Running on Amazon EC2 via DevPay, CloudCache is fast, with latency measured at just 1.5 ms. It can be run in multiple EC2 regions to minimize latency concerns. Customers can start small (1 cache) and grow large (1000 caches). CloudCache returns data in Ajax (JSON) or XML format. Bindings are available for Ruby, Java, PHP, and Python. Read the API documentation to learn more.
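The core idea, a key-value store whose entries expire after a time-to-live, can be sketched in a few lines of Python. This is just an illustration of the concept, not CloudCache's actual implementation or API:

```python
import time

class TTLCache:
    """A minimal key-value cache with per-key time-to-live, illustrating
    the expiration behavior described above (not CloudCache itself)."""

    def __init__(self):
        self._store = {}  # key -> (value, expiry timestamp or None)

    def put(self, key, value, ttl=None):
        # A key without a TTL never expires.
        expiry = time.time() + ttl if ttl is not None else None
        self._store[key] = (value, expiry)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expiry = entry
        if expiry is not None and time.time() >= expiry:
            del self._store[key]  # TTL reached: remove key and value
            return None
        return value

cache = TTLCache()
cache.put("session", "abc123", ttl=0.1)   # expires after 100 ms
print(cache.get("session"))               # abc123
time.sleep(0.2)
print(cache.get("session"))               # None -- the entry has expired
```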

Since CloudCache is accessible via DevPay, you can sign up here and start using it right away. All pricing and usage charges are available on that page.

— Jeff;