Category: Amazon EC2


New release: tutorial for ADFS with Amazon EC2

In January I wrote about the availability of a conceptual whitepaper describing various scenarios for using Windows ADFS to federate with services running on Amazon EC2 and mentioned that a step-by-step guide was forthcoming. I’m very pleased to announce that the guide is now finished and available for download. To give you a flavor for what you can learn by following the steps in the guide, I’ll quote from its introduction:

This document provides step-by-step instructions for creating a test lab demonstrating identity federation between an on-premises Windows Server Active Directory domain and an ASP.NET web application hosted on Amazon’s Elastic Compute Cloud (EC2) service, using Microsoft’s Active Directory Federation Services (ADFS) technology. The document is organized as a series of scenarios, with each building on the ones before it. It is strongly recommended that the reader follow the document’s instructions in the order they are presented. The scenarios covered are:

  1. Corporate application, accessed internally: Domain-joined Windows client (i.e. in the corporate office) accessing an Amazon EC2-hosted application operated by the same company, using ADFS v1.1.
  2. Corporate application, accessed from anywhere: External, non-domain-joined client (i.e. at the coffee shop) accessing the same EC2-hosted application, using ADFS v1.1 with an ADFS proxy. In addition to external (forms-based) authentication, the proxy also provides added security for the corporate federation server.
  3. Service provider application: Domain-joined and external Windows clients accessing an EC2-hosted application operated by a service provider, using one ADFS v1.1 federation server for each organization (with the service provider’s federation server hosted in EC2) and a federated trust between the parties.
  4. Service provider application with added security: The same clients accessing the same vendor-owned EC2-hosted application, but with an ADFS proxy deployed by the software vendor for security purposes.
  5. Corporate application, accessed internally (ADFS 2.0): Domain-joined Windows client accessing an EC2-based application owned by the same organization (same as Scenario 1), but using the currently-in-beta ADFS 2.0 as the federation server and the recently released Windows Identity Foundation (WIF) .NET libraries on the web server.

We hope you find this information useful and that it helps to simplify migrating existing applications or developing entirely new solutions that leverage the power of Amazon EC2 with your existing internal IT environment.

> Steve <

New Elastic Load Balancing Feature: Sticky Sessions

Amazon EC2’s Elastic Load Balancing feature just became a bit more powerful. Up until now each load balancer had the freedom to forward each incoming HTTP or TCP request to any of the EC2 instances under its purview. This resulted in a reasonably even load on each instance, but it also meant that each instance would have to retrieve, manipulate, and store session data for each request without any possible benefit from locality of reference.

Suppose two separate web browsers each request three separate web pages in turn. Each request can go to any of the EC2 instances behind the load balancer, like this:

When a particular request reaches a given EC2 instance, the instance must retrieve information about the user from state data that must be stored globally. There’s little opportunity for the instance to cache any data, since the odds that several requests from the same user / browser will reach the same instance go down as more instances are added to the load balancer.

With the new sticky session feature, it is possible to instruct the load balancer to route repeated requests to the same EC2 instance whenever possible.

In this case, the instances can cache user data locally for better performance. A series of requests from the user will be routed to the same EC2 instance if possible. If the instance has been terminated or has failed a recent health check, the load balancer will route the request to another instance. Of course, in a real world scenario, there would be more than two users, and the third EC2 instance wouldn’t be sitting idle.
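If you’d like to try this out yourself, here’s a rough sketch using the ELB command line API tools (the load balancer name, policy name, and expiration period below are placeholders, and the exact syntax is covered in the documentation linked below):

elb-create-lb-cookie-stickiness-policy MyLoadBalancer --policy-name my-sticky-policy --expiration-period 300
elb-set-lb-policies-of-listener MyLoadBalancer --lb-port 80 --policy-names my-sticky-policy

The first command creates a duration-based stickiness policy that keeps a given browser pinned to one instance for up to 300 seconds; the second attaches that policy to the load balancer’s port 80 listener.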

Full information on this new feature can be found in the Elastic Load Balancing documentation.

Update: Shlomo Swidler wrote a really nice post on Elastic Load Balancing with Sticky Sessions. I’d encourage you to check it out to learn more about why sticky sessions can improve your application.

— Jeff;

From Our Support Team: Elastic Load Balancing Tips and Tricks

A couple of members of the AWS Developer Support team put together the following tips and tricks to help you get the most from the Elastic Load Balancer.

— Jeff;

Are you thinking about using Amazon EC2 with Elastic Load Balancing, but want to make sure you set it up right the first time? Are you already using an ELB but are seeing intermittent problems with your page loads? Well, you’ve come to the right place! Let’s uncover a couple of common pitfalls. They’re easy to avoid, once you know about them.

For those of you who aren’t familiar, Elastic Load Balancing helps you distribute incoming network traffic across multiple Amazon EC2 instances. Your Elastic Load Balancer (ELB) will automatically route traffic to only the EC2 instances it deems to be healthy, so you needn’t worry about manually enforcing which instances handle the traffic. With ELB, it’s all managed for you. Furthermore, your ELB will also scale itself up and down to meet the demands of your traffic load. You can ensure that the EC2 instances themselves do the same by using Amazon Auto Scaling, but that’s beyond the scope of today’s discussion. You can read more about Auto Scaling here.
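Since your ELB only sends traffic to instances that pass its health check, it’s also worth configuring that check deliberately. Here’s a minimal sketch (the target page, interval, and thresholds below are placeholder values; consult the ELB documentation for the exact flag names):

elb-configure-healthcheck Your_ELB_Name_Goes_Here --target "HTTP:80/index.html" --interval 30 --timeout 3 --healthy-threshold 2 --unhealthy-threshold 2

This asks each registered instance for /index.html on port 80 every 30 seconds, marks an instance unhealthy after 2 consecutive failures, and marks it healthy again after 2 consecutive successes.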

A key feature of ELB is that it will distribute incoming traffic equally across all of the Availability Zones you’ve configured it to use. This means that if you enable, say, Availability Zones us-east-1a and us-east-1d, but register instances only in us-east-1a, half of your traffic will go to us-east-1d, where there are no EC2 instances to handle it. The traffic will be redirected back to us-east-1a, but this redirection could increase latency for your users. Thus, you’ll want to keep track of which Availability Zones your ELB is set up to use. You can use the ELB API command line tools to do this:

elb-describe-lbs --show-xml

If you don’t already have the ELB command line API tools, then you can grab them here. Once you know which Availability Zones are enabled for your ELB, you can run this next command to see which instances are currently registered with your ELB:

elb-describe-instance-health Your_ELB_Name_Goes_Here

The command above will return the instance IDs of the registered instances, which you can then use to determine which Availability Zone each one is in:

ec2-describe-instances instance_ID_1 instance_ID_2 …

You can glean a lot of potentially useful information at this point regarding each of the instances behind your ELB. Here are some things to check:

1) Does each enabled Availability Zone contain at least one instance registered with your ELB?

If not, you have two approaches to remedy the situation. The quick fix is to simply disable the empty Availability Zones:

elb-disable-zones-for-lb ELB_Name_Goes_Here -z Availability_Zone_Name_Goes_Here

The better fix is to register instances with your ELB in the empty Availability Zones:

elb-register-instances-with-lb ELB_Name_Goes_Here --instances instance_ID_5 instance_ID_6

Great, so at this point you should have at least one instance in each of your ELB’s Availability Zones. But could we strengthen the setup even further? How about digging into the details of the individual instances behind your ELB? How robust is your configuration? This brings us to a second important item to check:

2) Do you have an equal number of instances in each Availability Zone? And are they the same type?

Since your ELB will distribute incoming traffic equally across your Availability Zones, you really don’t want to have, say, one m1.large instance in us-east-1a and five c1.medium instances in us-east-1d. The single m1.large instance will receive roughly 50% of all of your traffic and, under high traffic volume, may not be able to keep up. Meanwhile, your five c1.medium instances are each under a much lower load. This is definitely a suboptimal arrangement.

The ec2-describe-instances command above returned not only the Availability Zone of each instance but also its instance type. We suggest populating each Availability Zone with an equal number of instances of the same type. You may even want to check if they are all based on the same AMI. Cycling out older instances for replacements based on your most recent AMI will help ensure that your instances remain up-to-date and service requests in a consistent manner, and can simplify debugging in the future.
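If you have a lot of registered instances, a quick shell one-liner can tally them per Availability Zone. This is just a sketch that pattern-matches on zone names in the ec2-describe-instances output (substitute your own instance IDs and Region prefix):

ec2-describe-instances instance_ID_1 instance_ID_2 | grep "^INSTANCE" | grep -o "us-east-1[a-z]" | sort | uniq -c

Each line of the output shows how many of your instances live in a given zone, which makes lopsided configurations easy to spot.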

We hope this helps you understand Elastic Load Balancing. Do you have more questions? Post them to the AWS forums!

Update: George Cook left a good question as a comment. I took it to the leader of the ELB team and here’s what he told me:

Thank you for the feedback. You make some good points. Under certain failure modes, the behavior you described is the right thing to do, and that is on our roadmap. However, in other cases, it is still necessary to bounce traffic between Availability Zones. For example, it is possible that all instances in an Availability Zone become unhealthy (or get deregistered) while there are requests in-flight to that Availability Zone. The load balancer will then bounce these requests to a different Availability Zone in order to minimize any failed requests.

Introducing QC2 – the Quantum Compute Cloud

We’ve had more than our fair share of technical challenges along the way, but the time is right for me to talk about our newest product, the Quantum Compute Cloud, or QC2 for short.

This is the first production-ready quantum computer. You can use it to solve certain types of math and logic problems with breathtaking speed.

Ordinary computers use collections of bits to represent their state. Each bit is definitively 0 or 1, and the number of possible states for n bits is 2^n: 1 bit can be in either of 2 states, 2 bits can be in any one of 4 states, and so forth.

Quantum computers such as the QC2 use a more sophisticated data representation known as a qubit, or quantum bit. Each qubit exists in all of its possible states simultaneously, but the probability of finding it in any given state can change. Quantum computers work by manipulating the probability distribution over those states.
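For the mathematically inclined (this is standard quantum notation, not anything QC2-specific): a single qubit’s state can be written as |ψ⟩ = α|0⟩ + β|1⟩, where α and β are complex amplitudes with |α|² + |β|² = 1; measuring the qubit yields 0 with probability |α|² and 1 with probability |β|².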

How do you program a quantum computer? With quantum algorithms, of course. Pretty much everything that you know about traditional programming becomes obsolete when you step up to the QC2. You need to think in terms of probabilities, distributions of probabilities, and so forth. Take a look at Shor’s Algorithm for finding prime factors to get a better idea of the power of a quantum computer.

 

We are also planning to support Bernhard Ömer’s QCL programming language. Take a look at his thesis on Structured Quantum Programming to learn more and to see some QCL code samples.

Once you’ve launched a QC2 instance and loaded up your algorithm, you must sample the output (also known as “collapsing the quantum state”) in order to retrieve the probability distribution which represents your answer. You’ll want to do this more than once for any particular problem in order to increase your confidence in the solution. Collapsing the quantum state is a destructive operation (much like reading from a magnetic core memory); be sure to account for this in your algorithm. In effect, the answer doesn’t exist until you ask for it.

Until now, the largest quantum computer contained fewer than 8 qubits. Because we’re really, really smart, we’ve been able to push this all the way to 32 in the first-generation QC2. This will allow you to represent problems with up to 2^32 distinct states.

We’re launching the QC2 in the US East Region in multiple Availability Zones. Using the amazing “spooky action at a distance” property of quantum entanglement, you can actually replicate QC2 instances across Zones.

The QC2 beta is limited, and will definitely close before the end of the day.

— Jeff;

PS – We need to hire lots of world-class people to help us with leading edge technologies like QC2, EC2, and the like. Please check out our AWS jobs page.

East Coast US AWS Events – April 2010

Here’s some information on AWS events coming up in April, all on the East Coast of the US:

  • I will be speaking at the Rochester AWS User Group at 6:00 PM on Monday, April 5th. My talk will cover some of the latest AWS developments including the Virtual Private Cloud and the Relational Database Service.
  • I will be speaking at the New York City Cloud Computing Group at 6:00 PM on Tuesday, April 6th. I’ll cover VPC and RDS again.
  • I will be speaking at the Emerging Tech for the Enterprise conference in Philadelphia on April 9th. I am looking forward to this visit to my home town! If you will be at ETE, please say hello, and also plan to see Chris Cera and David Brussin talk about Enterprise Cloud Computing: Pitfalls, Puzzles, and Great Rewards.
  • As the final talk of my trip to the East Coast, I will be speaking at the RubyNation conference in Reston, Virginia on Saturday, April 10th. I worked in Reston back when the unofficial motto was “We’re not dead, we’re Reston.” Things have livened up considerably since then and I’m looking forward to connecting with some old friends and colleagues while I am in the area.
  • There will be a Mechanical Turk Meetup in New York at 6:00 PM on April 13th. Learn more about Mechanical Turk’s global on-demand workforce, discover best practices, talk to existing Requesters, and mingle with members of the Mechanical Turk team. Preregister here.
  • Terry Wise, Director of Business Development for Amazon Web Services, will be speaking at PegaWORLD in Philadelphia on April 26th. Terry will talk about how Tenet Healthcare uses Pega’s Cloud Computing solution to radically improve the way it builds its business process applications, reducing delivery time and cost by a factor of 5. Discount registrations for the conference are available here.

— Jeff;

PS – Despite the route implied by my map, I will be traveling by plane and train!

Configurable Reverse DNS for Amazon EC2’s Elastic IP Addresses

I’d like to call your attention to a new feature that we rolled out earlier this month. You can now provide us with a configurable Reverse DNS record for any of your Elastic IP addresses. Once you’ve supplied us with the record, reverse DNS lookups (from IP address to domain name) will work as expected: the Elastic IP address in question will resolve to the domain that you specified in the record.

If you are using an EC2 instance to send email, you’ll appreciate this one. Some other types of applications and protocols (FTP and Secure FTP come to mind) can also benefit from it, but most of our customers have asked for it after they tried to send email from Amazon EC2.

You can provide us with your Reverse DNS records using this form. We’ll set up the mappings as quickly as possible and we’ll send you an email once everything is all set up.
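Once you receive that email, you can verify the mapping yourself from any machine that has the dig utility installed (the address below is just a placeholder; substitute your own Elastic IP):

dig -x 203.0.113.25 +short

If the record is in place, the command will print the domain name that you supplied on the form.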

We count on our customers to provide us with the feedback needed to assign the proper priority to this and to other features. We’re always happy to hear from you; send your feature requests to awseditor@amazon.com and I’ll make sure that they are routed directly to the proper team.

— Jeff;

Save Money With Combined AWS Bandwidth Pricing

I never tire of telling our customers that they’ll be saving money by using AWS!

Effective April 1, 2010, we’ll add up the bandwidth you use for Amazon Simple Storage Service (Amazon S3), Amazon Elastic Compute Cloud (Amazon EC2), Amazon SimpleDB, Amazon Relational Database Service (Amazon RDS), and the Amazon Simple Queue Service (SQS) on a Region-by-Region basis and use that value to set your bandwidth tier for each Region. Depending on your bandwidth usage, this could mean your overall bandwidth charge will be reduced, since you will be able to reach higher volume tiers more quickly if you use multiple services.
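To make that concrete with purely illustrative numbers: if you transfer 4 TB out of Amazon S3 and 8 TB out of Amazon EC2 in the same Region during a month, your data transfer tier for that Region will be set by the combined 12 TB rather than by each service’s individual usage, so more of your traffic lands in the higher-volume (lower per-GB) tiers. Check the pricing pages for the actual tier boundaries.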

We’re also going to make the first gigabyte of outbound data transfer each month free.

You’ll see both of these benefits in a new entry on your AWS Account Activity page. We’ll combine the bandwidth used by the services listed above.

Sound good?

— Jeff;

Bring Your Own EA Windows Server License to Amazon EC2!

Update (November 2011): The information in this post is no longer relevant. Read our post Extending Microsoft Licensing Mobility to the AWS Cloud: Now You Can Run Several Windows Server Applications on AWS for an update.

When we talk about AWS with potential users, they often ask if Windows Server is available. As you know, we’ve supported Windows Server 2003 for a while, and we recently added support for Windows Server 2008. Once we’ve let them know that they can run Windows Server and their existing Microsoft Windows applications on Amazon EC2, the larger customers often tell us that they’ve already set up an Enterprise Agreement (EA) with Microsoft, and ask if it can be applied to EC2 instances.

As of today, the answer is a conditional (yet still enthusiastic) “Yes!”

Under a new Microsoft pilot program, you can bring your EA Windows Server licenses into the cloud, activate them, and then launch Amazon EC2 instances running Microsoft Windows Server for the same price as Linux/UNIX On-Demand or Reserved Instances.

Enrollment starts today and will continue until September 23, 2010, so it is important to act fast. Your participation and feedback will have a definite impact on the long-term prospects for this pilot program.

To participate in the pilot, Microsoft requires that your company meet the following criteria:

  • Your company must be based (or have a legal entity) in the United States.
  • Your company must have an existing Microsoft Enterprise Agreement that does not expire within 12 months of your entry into the Pilot.
  • You must already have purchased Software Assurance from Microsoft for your EA Windows Server licenses.
  • You must be an Enterprise customer (Academic and Government institutions are not covered by this pilot).

Once enrolled, you can move your Enterprise Agreement Windows Server Standard, Windows Server Enterprise, or Windows Server Datacenter Edition licenses to Amazon EC2 for 1 year. Each of your Windows Server Standard licenses will let you launch one EC2 instance. Each of your Windows Server Enterprise or Windows Server Datacenter licenses will let you launch up to four EC2 instances. In either case, you can use any of the EC2 instance types. The licenses you bring to EC2 can only be moved between EC2 and your on-premises machines every 90 days. You can use your licenses in the US East (Northern Virginia) or US West (Northern California) Regions. You will still be responsible for maintaining your Client Access Licenses and External Connector licenses appropriately.

To apply for this program, dig up your Enterprise Agreement Number and fill out the Windows License Mobility Form. We’ll verify your eligibility with Microsoft, and then we’ll need you to sign some paperwork and return it to us. We’ll do some final checking, pass the paperwork along to Microsoft, and we’ll enable your AWS account for the program. We’ve set up SLAs for each step to make it possible to have you up and running in less than two weeks.

Operationally, you’ll use a few new EC2 APIs (ActivateLicense, DeactivateLicense, and DescribeLicenses) to tell us how many licenses you’d like to use on EC2. The ActivateLicense and DescribeLicenses requests return a “License Pool” that you’ll then reference in an EC2 RunInstances request, which launches an EC2 instance using your license at the lower price.
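As a very rough sketch of that flow (the tool names below simply follow the usual ec2-* convention for the API actions mentioned above, and the IDs and flags are placeholders rather than confirmed syntax):

ec2-describe-licenses
ec2-activate-license License_Pool_ID_Goes_Here -c Number_Of_Licenses_Goes_Here

The first command lists your license pools and their capacity; the second makes a number of those licenses available for use on EC2, after which you reference the license pool in your RunInstances request.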

— Jeff;

JumpBox Jumps Ahead – Open Source as a Service on Amazon EC2

Kimbro and Sean at JumpBox have been breaking a lot of new ground as they strive to make it even easier to run a wide variety of open source applications in a service-oriented fashion.

I spent some time talking to Sean yesterday and he walked me through their latest step into the self-service, push-button world of the future. They’ve streamlined and simplified the process of launching a good-sized catalog of useful applications on Amazon EC2.

Once you’ve created your JumpBox account and entered your AWS Security Credentials, you can launch a polished and easy to manage application with just a few clicks. Let’s say that your boss (or your spouse) says “Jeff, we need a wiki of our very own, and we need it now!” You browse through the JumpBox catalog and decide that the MoinMoin wiki is just the thing:

You click on the Launch on Amazon EC2 link and log in to JumpBox. You confirm your intent, and the EC2 instance is launched and ready to go in a few minutes:




As you can see from the final screen shot, the next step is to access the configuration page for the JumpBox:

You fill in the form and you are all set. The final page provides you with links to the admin page for your Wiki and for the JumpBox Administration Portal running on the instance:

Your wiki is all set and you are a hero:

You can use the Administration Portal to manage backups and restores, naming, and much more:

Once you’ve supplied your AWS Credentials, backups can be scheduled to occur at any desired frequency:

You can restore any of the backups to the same instance of the Wiki or to an entirely new one:

You’ll receive a confirmation email after you’ve finished setting up your JumpBox. The email includes all of the URLs needed to access, administer, and shut down the instance. I think they’ve pretty much thought of everything.

This looks pretty cool and I think that it will give lots of folks a jump-start into the cloud. Check out the application catalog and give it a spin.

–Jeff;

SIOS CloudStation – Cloud-Powered High Availability and Disaster Recovery

Late last week I met Jim Kaskade of SIOS at a Seattle-area Starbucks for a meeting and a product demo. With the very cool (and appropriate) title “Chief of Cloud”, Jim was the right person to demonstrate his company’s new cloud-powered high availability and disaster recovery solution.

Jim’s Mac laptop was running CentOS. He used Xen and Red Hat’s Virtual Machine Manager to host virtual machines representing the web, application, and database tiers of a SugarCRM installation. Each of the guest operating systems was running a copy of the new SIOS CloudStation product. Each copy of CloudStation was configured (using a web-based GUI) to replicate the state of the virtual machine to an Amazon EC2 instance running in a user-selected Region.

Once everything was up and running, Jim showed me how he could selectively kill the local virtual machines while keeping the application running. The demo was designed to feature a very short RPO (Recovery Point Objective) so that changes made locally just seconds before the database was killed were available from the cloud-based virtual mirror. Jim walked me through a number of different failure and recovery scenarios.

It was quite impressive and makes a great demo of the cloud-based DR (Disaster Recovery) and HA (High Availability) that I’ve been telling my audiences about for the last couple of years. Once configured, CloudStation can fail over from local processing to the cloud, from one cloud region to another, or even from one cloud provider to another. It can also be used as a migration tool, performing what is sometimes called P2V (Physical to Virtual) or P2C (Physical to Cloud).

Read more in the Solution Brief (PDF) or sign up for the March 24th webinar.

— Jeff;