Category: Amazon EC2


Configurable Reverse DNS for Amazon EC2’s Elastic IP Addresses

by Jeff Barr | on | in Amazon EC2 |

I’d like to call your attention to a new feature that we rolled out earlier this month. You can now provide us with a configurable Reverse DNS record for any of your Elastic IP addresses. Once you’ve supplied us with the record, reverse DNS lookups (from IP address to domain name) will work as expected: the Elastic IP address in question will resolve to the domain that you specified in the record.

If you are using an EC2 instance to send email, you’ll appreciate this one. Some other types of applications and protocols (FTP and Secure FTP come to mind) can also benefit from it, but most of our customers have asked for it after they tried to send email from Amazon EC2.

You can provide us with your Reverse DNS records using this form. We’ll set up the mappings as quickly as possible and we’ll send you an email once everything is all set up.
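
Once you get that confirmation email, you can check the mapping yourself. Here’s a minimal Python sketch; the Elastic IP address and domain below are placeholders for your own values.

    import socket

    elastic_ip = "203.0.113.25"           # placeholder Elastic IP address
    expected_domain = "mail.example.com"  # placeholder domain from your request

    # gethostbyaddr performs the reverse (PTR) lookup
    hostname, aliases, addresses = socket.gethostbyaddr(elastic_ip)

    if hostname == expected_domain:
        print("Reverse DNS is set up:", hostname)
    else:
        print("Unexpected PTR record:", hostname)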

We count on our customers to provide us with the feedback needed to assign the proper priority to this and to other features. We’re always happy to hear from you; send your feature requests to awseditor@amazon.com and I’ll make sure that they are routed directly to the proper team.

— Jeff;

Save Money With Combined AWS Bandwidth Pricing

by Jeff Barr | on | in Amazon EC2, Amazon RDS, Amazon S3, Amazon SDB, Amazon SQS |

I never tire of telling our customers that they’ll be saving money by using AWS!

Effective April 1, 2010, we’ll add up the bandwidth you use for Amazon Simple Storage Service (Amazon S3), Amazon Elastic Compute Cloud (Amazon EC2), Amazon SimpleDB, Amazon Relational Database Service (Amazon RDS), and Amazon Simple Queue Service (SQS) on a Region-by-Region basis and use that total to set your bandwidth tier for each Region. Depending on your usage, this could reduce your overall bandwidth charge, since using multiple services will let you reach the higher-volume (lower-priced) tiers more quickly.

We’re also going to make the first gigabyte of outbound data transfer each month free.

You’ll see both of these benefits in a new entry on your AWS Account Activity page. We’ll combine the bandwidth used by the services listed above.
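
To see why this helps, here’s a rough back-of-the-envelope sketch in Python. The tier boundaries and per-GB rates are illustrative placeholders, not actual AWS prices; check the pricing pages for the real numbers.

    # Illustrative outbound transfer tiers (GB, $/GB) -- NOT actual AWS prices
    TIERS = [(10 * 1024, 0.15),     # hypothetical: first 10 TB
             (40 * 1024, 0.11),     # hypothetical: next 40 TB
             (float("inf"), 0.08)]  # hypothetical: everything beyond that

    def transfer_cost(gb):
        """Tiered cost of `gb` gigabytes of outbound transfer (the new free
        first GB per month is ignored here to keep the example short)."""
        cost, remaining = 0.0, gb
        for size, rate in TIERS:
            used = min(remaining, size)
            cost += used * rate
            remaining -= used
            if remaining <= 0:
                break
        return cost

    # Billed separately, each service sits in its own (more expensive) tier
    separate = transfer_cost(8000) + transfer_cost(6000)   # e.g. S3 and EC2
    # Billed together, the Region-wide total reaches the cheaper tier sooner
    combined = transfer_cost(8000 + 6000)
    print(separate, combined)   # the combined number is the smaller one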

Sound good?

— Jeff;

Bring Your Own EA Windows Server License to Amazon EC2!

by Jeff Barr | on | in Amazon EC2, Windows |

Update (November 2011): The information in this post is no longer relevant. Read our post Extending Microsoft Licensing Mobility to the AWS Cloud: Now You Can Run Several Windows Server Applications on AWS for an update.

When we talk about AWS with potential users, they often ask if Windows Server is available. As you know, we’ve supported Windows Server 2003 for a while, and we recently added support for Windows Server 2008. Once we’ve let them know that they can run Windows Server and their existing Microsoft Windows applications on Amazon EC2, the larger customers often tell us that they’ve already set up an Enterprise Agreement (EA) with Microsoft, and ask if it can be applied to EC2 instances.

As of today, the answer is a conditional (yet still enthusiastic) “Yes!”

Under a new Microsoft pilot program, you can bring your EA Windows Server licenses into the cloud, activate them, and then launch Amazon EC2 instances running Microsoft Windows Server for the same price as Linux/UNIX On-Demand or Reserved Instances.

Enrollment starts today and will continue until September 23, 2010, so it is important to act fast. Your participation and feedback will have a definite impact on the long-term prospects for this pilot program.

To participate in the pilot, Microsoft requires that your company meets the following criteria:

  • Your company must be based (or have a legal entity) in the United States.
  • Your company must have an existing Microsoft Enterprise Agreement that does not expire within 12 months of your entry into the Pilot.
  • You must already have purchased Software Assurance from Microsoft for your EA Windows Server licenses.
  • You must be an Enterprise customer (Academic and Government institutions are not covered by this pilot).

Once enrolled, you can move your Enterprise Agreement Windows Server Standard, Windows Server Enterprise, or Windows Server Datacenter Edition licenses to Amazon EC2 for 1 year. Each of your Windows Server Standard licenses will let you launch one EC2 instance. Each of your Windows Server Enterprise or Windows Server Datacenter licenses will let you launch up to four EC2 instances. In either case, you can use any of the EC2 instance types. The licenses you bring to EC2 can only be moved between EC2 and your on-premises machines every 90 days. You can use your licenses in the US East (Northern Virginia) or US West (Northern California) Regions. You will still be responsible for maintaining your Client Access Licenses and External Connector licenses appropriately.

To apply for this program, dig up your Enterprise Agreement Number and fill out the Windows License Mobility Form. We’ll verify your eligibility with Microsoft, and then we’ll need you to sign some paperwork and return it to us. We’ll do some final checking, pass the paperwork along to Microsoft, and we’ll enable your AWS account for the program. We’ve set up SLAs for each step to make it possible to have you up and running in less than two weeks.

Operationally, you’ll use a few new EC2 APIs (ActivateLicense, DeactivateLicense, and DescribeLicenses) to tell us how many licenses you’d like to use on EC2. The ActivateLicense and DescribeLicenses requests will return a “License Pool” that you’ll reference in an EC2 RunInstances request, which will launch an EC2 instance using your license at the lower price.
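
Here’s a rough sketch of how that flow might look. The ec2_request helper is hypothetical (it stands in for whatever signed EC2 Query API client you already use), and the parameter names are my guesses based on the description above, so treat this as an outline rather than working code against the real API.

    def ec2_request(action, **params):
        """Hypothetical stand-in for a signed EC2 Query API call.
        Replace with your own EC2 client; here it just echoes the request."""
        print(action, params)
        return {}

    # Move four of your EA Windows Server licenses into EC2 (IDs are placeholders)
    ec2_request("ActivateLicense", LicenseId="l-12345678", Capacity=4)

    # DescribeLicenses reports the License Pool associated with your licenses
    pool = ec2_request("DescribeLicenses").get("pool", "windows-ea")  # placeholder pool name

    # Launch a Windows Server instance against that pool at the Linux/UNIX price
    ec2_request("RunInstances",
                ImageId="ami-12345678",       # placeholder Windows Server AMI
                MinCount=1, MaxCount=1,
                InstanceType="m1.large",
                **{"License.Pool": pool})     # parameter name is a guess

    # When moving the licenses back on-premises (no sooner than 90 days later)
    ec2_request("DeactivateLicense", LicenseId="l-12345678", Capacity=4)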

— Jeff;

JumpBox Jumps Ahead – Open Source as a Service on Amazon EC2

by Jeff Barr | on | in Amazon EC2, Cool Sites |

Kimbro and Sean at JumpBox have been breaking a lot of new ground as they strive to make it even easier to run a wide variety of open source applications in a service-oriented fashion.

I spent some time talking to Sean yesterday and he walked me through their latest step into the self-service, push-button world of the future. They’ve streamlined and simplified the process of launching a good-sized catalog of useful applications on Amazon EC2.

Once you’ve created your JumpBox account and entered your AWS Security Credentials, you can launch a polished, easy-to-manage application with just a few clicks. Let’s say that your boss (or your spouse) says “Jeff, we need a wiki of our very own, and we need it now!” You browse through the JumpBox catalog and decide that the MoinMoin wiki is just the thing:

You click on the Launch on Amazon EC2 link and log in to JumpBox. You confirm your intent, and the EC2 instance is launched and ready to go in a few minutes:

As you can see from the final screen shot, the next step is to access the configuration page for the JumpBox:

You fill in the form and you are all set. The final page provides you with links to the admin page for your Wiki and for the JumpBox Administration Portal running on the instance:

Your wiki is all set and you are a hero:

You can use the Administration Portal to manage backups and restores, naming, and much more:

Once you’ve supplied your AWS Credentials, backups can be scheduled to occur at any desired frequency:

You can restore any of the backups to the same instance of the Wiki or to an entirely new one:

You’ll receive a confirmation email after you’ve finished setting up your JumpBox. The email includes all of the URLs needed to access, administer, and shut down the instance. I think they’ve pretty much thought of everything.

This looks pretty cool and I think that it will give lots of folks a jump-start into the cloud. Check out the application catalog and give it a spin.

–Jeff;

SIOS CloudStation – Cloud-Powered High Availability and Disaster Recovery

by Jeff Barr | on | in Amazon EC2, Developer Tools, Enterprise |

Late last week I met Jim Kaskade of SIOS at a Seattle-area Starbucks for a meeting and a product demo. With the very cool (and appropriate) title “Chief of Cloud”, Jim was the right person to demonstrate his company’s new cloud-powered high availability and disaster recovery solution.

Jim’s Mac laptop was running CentOS. He used Xen and Red Hat’s Virtual Machine Manager to host a set of virtual machines representing the web, application, and database tiers of a SugarCRM installation. Each of the guest operating systems was running a copy of the new SIOS CloudStation product. Each copy of CloudStation was configured (using a web-based GUI) to replicate the state of the virtual machine to an Amazon EC2 instance running in a user-selected Region.

Once everything was up and running, Jim showed me how he could selectively kill the local virtual machines while keeping the application running. The demo was designed to feature a very short RPO (Recovery Point Objective) so that changes made locally just seconds before the database was killed were available from the cloud-based virtual mirror. Jim walked me through a number of different failure and recovery scenarios.

It was quite impressive and makes a great demo of the cloud-based DR (Disaster Recovery) and HA (High Availability) that I’ve been telling my audiences about for the last couple of years. Once configured, CloudStation can fail over from local processing to the cloud, from one cloud region to another, or even from one cloud provider to another. It can also be used as a migration tool, for what is sometimes called P2V (Physical to Virtual) or P2C (Physical to Cloud).

Read more in the Solution Brief (PDF) or sign up for the March 24th webinar.

— Jeff;

Amazon EC2 Reserved Instances with Windows

by Jeff Barr | on | in Amazon EC2, Windows |

It seems to me that every time we release a new service or feature, our customers come up with ideas and requests for at least two more! As they begin to “think cloud” and to get their minds around all that they can do with AWS, their imaginations start to run wild and they aren’t shy about sharing their requests with us. We do our best to listen and to adjust our plans accordingly.

This has definitely been the case with EC2’s Reserved Instances. The Reserved Instances allow you to make a one-time payment to reserve an instance of a particular type for a period of one or three years. Once reserved, hourly usage for that instance is billed at a price that is significantly reduced from the On-Demand price for the same instance type. As soon as we released Reserved Instances with Linux and OpenSolaris, our users started asking for us to provide Reserved Instances with Microsoft Windows. Later, when we released Amazon RDS, they asked for RDS Reserved Instances (we’ve already committed to providing this feature)!

I’m happy to inform you that we are now supporting Reserved Instances with Windows. Purchases can be made using the AWS Management Console, ElasticFox, the EC2 Command Line (API) tools, or the EC2 APIs.

As always, we will automatically optimize your pricing when we compute your AWS bill. We’ll charge you the lower Reserved Instance rate where applicable, to make sure that you always pay the lowest amount.
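
If you’re weighing a Reserved Instance against On-Demand for a particular workload, the break-even arithmetic is straightforward. The prices in this sketch are placeholders rather than actual rates; plug in the current numbers for your instance type and Region, or just use the calculator mentioned below.

    # Placeholder prices -- substitute the real rates for your instance type and Region
    ON_DEMAND_HOURLY = 0.125   # hypothetical On-Demand $/hour
    RESERVED_HOURLY  = 0.05    # hypothetical Reserved $/hour
    ONE_TIME_FEE     = 350.00  # hypothetical one-year reservation fee

    HOURS_PER_MONTH = 730

    def monthly_costs(hours):
        on_demand = hours * ON_DEMAND_HOURLY
        reserved = ONE_TIME_FEE / 12 + hours * RESERVED_HOURLY  # amortize the fee
        return on_demand, reserved

    for utilization in (0.25, 0.50, 1.00):
        od, ri = monthly_costs(HOURS_PER_MONTH * utilization)
        better = "Reserved" if ri < od else "On-Demand"
        print("%3d%% utilization: On-Demand $%6.2f, Reserved $%6.2f -> %s"
              % (utilization * 100, od, ri, better))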

You can also estimate your monthly and one-time costs using the AWS Simple Monthly Calculator.

— Jeff;

PS – If you have any wild and crazy AWS ideas of your own, feel free to post them in the appropriate AWS forum.

New EC2 Instance Type: m2.xlarge

by Jeff Barr | on | in Amazon EC2 |

We’ve added a new EC2 instance type to our repertoire. It is called the High Memory Extra Large (m2.xlarge) and has the following specs:

  • 17.1 GB of RAM.
  • 420 GB of local storage.
  • 64-bit platform.
  • 6.5 ECU (EC2 Compute Units), 2 virtual cores each with 3.25 ECU.

You can leverage this new instance type as a lower-cost option if you are already using Standard Extra Large instances. The new instance type is available now in all of the EC2 Regions (US-East, US-West, and EU).
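
If you use boto (my assumption; any EC2 library or tool works the same way), taking advantage of the new type is just a matter of passing the new instance type string. A minimal sketch with a placeholder AMI ID:

    import boto.ec2

    # Credentials come from your environment or boto config
    conn = boto.ec2.connect_to_region("us-east-1")

    # Launch a High Memory Extra Large instance; the AMI ID is a placeholder
    reservation = conn.run_instances("ami-12345678",
                                     min_count=1, max_count=1,
                                     instance_type="m2.xlarge")
    print(reservation.instances[0].id)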

— Jeff;

That’s Flexibility, Baby!

by Jeff Barr | on | in Amazon EC2, Europe, Events |

Hi there, this is Simone Brunozzi, AWS Technology Evangelist for Europe and APAC. While Jeff Barr is in Japan, I’ll steal his keyboard to tell you a couple of nice Amazon Web Services success stories.

The first one is from our friends at ZapLive.tv, a German company that allows you to launch your own web TV.

On Friday, Jan 21st, 2010, something unprecedented happened: at 11:38 AM CST, Lily the Black Bear, in Minnesota, gave birth LIVE on the internet!  Thousands of people rushed to WildEarth.TV to watch this wonderful event.


The broadcast on WildEarth.TV was produced by Doug Hajicek of Whitewolf Entertainment in association with Dr. Lynn Rogers of the North American Bear Center. Peaking at about 27,000 concurrent viewers, it was streamed across the zaplive.tv dynamic system.

Luckily for WildEarth, they were using ZapLive’s highly scalable infrastructure for load balancing live streams from different locations. The infrastructure is based on Wowza Media Server Pro and Amazon EC2. An origin server measures the load on the repeaters and distributes viewers across different EC2 repeaters, which deliver the stream around the world. Depending on the load, additional EC2 instances are launched. This combination allows a dynamically auto-scaled system to handle hundreds of thousands of unique visitors in a few hours. At their peak, they had about 35 Large EC2 instances running.
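
ZapLive’s actual system isn’t shown here, but the scale-out decision described above boils down to a loop along these lines. Everything in this sketch is illustrative: the per-repeater capacity, the helper functions, and the polling interval are all made up.

    import time

    VIEWERS_PER_REPEATER = 800   # hypothetical capacity of one EC2 repeater

    def current_viewers():
        """Hypothetical: ask the origin server how many viewers are connected."""
        return 0

    def launch_repeater():
        """Hypothetical: start another EC2 repeater instance (e.g. via RunInstances)."""

    running = 1
    while True:
        needed = -(-current_viewers() // VIEWERS_PER_REPEATER)   # ceiling division
        while running < needed:
            launch_repeater()
            running += 1
        time.sleep(60)   # re-check the load every minute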

As you can see, thanks to the flexibility of Amazon EC2, they were able to quickly launch additional servers when needed, run them for as long as needed, and scale down when those servers were no longer necessary.

For the second success story we move from Minnesota to green Switzerland.

The Swiss Geoportal, www.geo.admin.ch, is now online and includes a great map viewer with 35 layers from various Swiss administrations, such as the Dufour Map, Transport Network, and Ground Statistics, with more to come in 2010, including English support and up to 50 more datasets.

Of course, Amazon Web Services (EC2 and S3) were used to implement this Web 2.0 mapping application. It delivers fast performance and handles very high load.

Dr. David Oesch, the project coordinator, says:

Thanks again for providing such a great service…without AWS we would not have been able to achieve such a performance at such costs!

Hope you liked these two stories. Goodbye from warm Mumbai!
Simone Brunozzi
(@simon on Twitter)
Technology Evangelist for AWS, Europe and APAC.

Server Density – Easy Server Monitoring

by Jeff Barr | on | in Amazon CloudWatch, Amazon EC2, Developer Tools |

The Server Density monitoring service now supports Amazon EC2 using data collected and made available via Amazon CloudWatch and an optional lightweight monitoring agent.

Provided as a fully managed hosting service, Server Density can provide a snapshot of server status at any time. Alerts can be triggered from any of the metrics and can be delivered via cell phone (SMS), email, or iPhone. All of the data can be graphed, and a “tactical overview” dashboard provides a quick look at the latest monitored values for each server under management. 
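
Server Density pulls its EC2 numbers from CloudWatch; if you’d like to see the same raw data yourself, a boto-based sketch along these lines fetches an hour of CPU utilization (boto and the instance ID are my assumptions, not part of the Server Density product):

    import datetime
    import boto.ec2.cloudwatch

    conn = boto.ec2.cloudwatch.connect_to_region("us-east-1")

    end = datetime.datetime.utcnow()
    start = end - datetime.timedelta(hours=1)

    # Average CPU utilization for one instance, in five-minute periods
    datapoints = conn.get_metric_statistics(
        period=300,
        start_time=start,
        end_time=end,
        metric_name="CPUUtilization",
        namespace="AWS/EC2",
        statistics=["Average"],
        dimensions={"InstanceId": "i-12345678"},   # placeholder instance ID
    )

    for point in sorted(datapoints, key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], point["Average"])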

There’s a free level (one server and core metrics) and a full level (the whole nine yards) for $16 per server per month, with volume discounts available.

— Jeff;

PS – Check out the demo video!

Cloud MapReduce from Accenture

by Jeff Barr | on | in Amazon EC2, Amazon Elastic MapReduce, Amazon S3, Amazon SDB, Amazon SQS |

Accenture is a Global Solution Provider for AWS. As part of their plan to help their clients extend their IT provisioning capabilities into the cloud, they offer a complete Cloud Computing Suite including the Accenture Cloud Computing Accelerator, the Cloud Computing Assessment Tool, the Cloud Computing Data Processing Solution, and the Accenture Web Scaler.

Huan Liu and Dan Orban of Accenture Technology Labs sent me some information about one of their projects, Cloud MapReduce. Cloud MapReduce implements Google’s MapReduce programming model using Amazon EC2, S3, SQS, and SimpleDB as a cloud operating system.

According to the research report on Cloud MapReduce, the resulting system runs at up to 60 times the speed of Hadoop (this depends on the application and the data, of course). There’s no master node, so there’s no single point of failure or processing bottleneck. Because it takes advantage of high-level constructs in the cloud for data (S3) and state (SimpleDB) storage, along with EC2 for processing and SQS for message queuing, the implementation is two orders of magnitude simpler than Hadoop. The research report includes details on the use of each service; they’ve also published some good info about the code architecture.
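
The research report describes the real architecture; purely to illustrate the pattern (pull a map task from SQS, process it, write the output to S3, record progress in SimpleDB), a simplified worker loop might look roughly like this. The queue, bucket, and domain names are placeholders, and this is not Accenture’s actual code.

    import boto
    from boto.s3.key import Key

    sqs = boto.connect_sqs()
    s3 = boto.connect_s3()
    sdb = boto.connect_sdb()

    tasks = sqs.get_queue("map-tasks")           # placeholder queue name
    results = s3.get_bucket("mapreduce-output")  # placeholder bucket name
    state = sdb.get_domain("job-state")          # placeholder SimpleDB domain

    def run_map(payload):
        """Placeholder for the user-supplied map function."""
        return payload.upper()

    while True:
        message = tasks.read(visibility_timeout=120)
        if message is None:
            break                                # queue drained; this worker exits

        task_id, payload = message.get_body().split("|", 1)

        # Store the map output in S3 ...
        key = Key(results, "map-output/%s" % task_id)
        key.set_contents_from_string(run_map(payload))

        # ... record completion in SimpleDB so progress is visible to all workers ...
        state.put_attributes(task_id, {"status": "done"})

        # ... and delete the message so no other worker repeats the task
        tasks.delete_message(message)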

Download the code, read the tutorial, and give it a shot!

–Jeff;