Category: Amazon EC2


New Amazon EC2 Fresh Servers

Let’s face it. Sometimes you just need a local server. Perhaps your office is too cold, or you have the urge to pull the cover off and reseat the memory. Or, you might have some data on floppy disks that you simply cannot live without.

Because we will leave no stone unturned in our efforts to bring on-demand computing to the masses, I would like to tell you about today's release, the Amazon Fresh Server.

Starting today, if you live within 45 degrees North or South of the Equator, we can deliver a fresh EC2 server to you in 15 minutes or less. This is a genuine, physical server. We’ve launched (literally) some brand new technology in order to make this a reality. Read on to learn more.

There are two delivery modes: terrestrial and atmospheric.

Terrestrial Delivery
If you live in a densely populated urban area, a uniformed delivery person will have your new server on your doorstep in a matter of minutes. As I write this, trucks loaded with servers are circling the 100 largest cities in the country. Here’s one of our delivery people in action:


Actual delivery person delivering actual server.

Atmospheric Delivery
The Atmospheric Delivery model is a lot more interesting. In conjunction with our friends at NASA JPL, we’ve launched a fleet of satellites into low Earth orbit. Each satellite is stocked with a considerable number of Cluster Compute Eight Extra Large (cc2.8xlarge) servers, individually packaged in our proprietary re-entry shields.


Genuine fake button (thanks, jQuery).

When you order a server (currently limited to one per customer) using the new Deliver Instance button, we’ll select a satellite and place your order in the appropriate delivery queue. After a set of careful (checked, double-checked, and then re-checked) ballistic calculations, the satellite will release your order on a trajectory that will deliver it to the latitude and longitude of your choice, accurate to a 1 meter radius, within 10 minutes. You need do nothing more than fill out this dialog:

 


Actual fake picture of genuine AWS Management Console.

So far so good, right? Read on, it gets even better!

As you probably know, a satellite in LEO is traveling at approximately 7.8 km/second. The amount of heat generated on re-entry to the atmosphere is considerable, since the payload must shed all of that speed within a few minutes. We capture that “delta-v” energy and use it to power the server for up to two weeks. Because the server includes a built-in Wi-Fi card and a preconfigured Elastic IP address, you don’t have to connect any cables. You can simply leave the server where it lands and start using it. In fact, under optimal conditions, you can start using it while it is still decelerating. You’ll be up and running in minutes.


Actual conceptual diagram by genuine artist.

As part of the pre-release beta, my son Stephen ordered a server and took delivery in the courtyard of his graduate student housing complex. It was up and running when it landed and we were happily coding away in no time flat:


Actual developers using a genuine Amazon Fresh Server.

When you are done with your server, you can initiate the return process via a single click in the AWS Management Console.

This is a pilot program, and we’ll be taking orders starting today. Get your server now!

— Jeff;

New Whitepaper: The Total Cost of (Non) Ownership of a NoSQL Database Service

We have received tremendously positive feedback from customers and partners since we launched Amazon DynamoDB two months ago. Amazon DynamoDB enables customers to offload the administrative burden of operating and scaling a highly available distributed database cluster while paying only for the system resources they actually consume. We have also received a ton of great feedback about how simple it is to get started and how easy it is to scale the database. Because Amazon DynamoDB introduced a new provisioned throughput pricing model, we have also received several questions about how to think about its Total Cost of Ownership (TCO).

We are very excited to publish our new TCO whitepaper, The Total Cost of (Non) Ownership of a NoSQL Database Service (download PDF).


In this whitepaper, we explain the TCO of Amazon DynamoDB and highlight the different cost factors involved in deploying and managing a scalable NoSQL database, whether on-premises or in the cloud.

When calculating TCO, we recommend that you start with a specific use case or application that you plan to deploy in the cloud instead of relying on a generic comparison analysis. Hence, in this whitepaper, we walk through an example scenario (a social game to support the launch of a new movie) and highlight the total costs for three different deployment options over three different usage patterns. The graph below summarizes the results of our whitepaper.

When determining the TCO of a cloud-based service, it's easy to overlook cost factors such as administration and redundancy costs, which can lead to inaccurate and incomplete comparisons. Additionally, in the case of a NoSQL database solution, people often forget to include database administration costs. Hence, in the paper, we provide a detailed breakdown of costs for the lifecycle of an application.

It's challenging to do a true apples-to-apples comparison between on-premises software and a cloud service, especially since some costs are up-front capital expenditures while others are ongoing operating expenditures. To simplify the calculations and the cost comparison between options, we have amortized the costs over a three-year period for the on-premises option. We have clearly stated our assumptions for each option so you can adjust them based on your own research or quotes from your hardware vendors and co-location providers.

Amazon DynamoDB frees you from the headaches of provisioning hardware and systems software, setting up and configuring a distributed database cluster, and managing ongoing cluster operations. There are no hardware administration costs because there is no hardware to maintain. There are no NoSQL database administration costs, such as patching the OS or managing and scaling the NoSQL cluster, because there is no software to maintain. This is an important point, because NoSQL database administrators are not easy to find these days.

We hope that the whitepaper provides the TCO information you need to make the right decision when it comes to deploying and running a NoSQL database solution. If you have any questions, comments, suggestions, or feedback, feel free to reach out to us.

— Jinesh

Updated Amazon Linux AMI (2012.03) Now Available

We’ve just released version 2012.03 of the Amazon Linux AMI. Coming six months after our previous major release, this version of the AMI is loaded with new features that EC2 users will enjoy.

One of our goals for this release has been to make multiple major versions of important packages available. This allows code that relies on different versions of core languages, databases, or applications to migrate from older AMIs with minimal changes. For example:

Tomcat 7: Support is included for both Tomcat 6 and Tomcat 7. Both are included in the package repository, and can be installed via yum install tomcat6 or yum install tomcat7.

MySQL 5.5: New Amazon Linux AMI 2012.03 users who yum install mysql (or yum install mysql55) will get MySQL 5.5 by default, unless they explicitly choose to install the older MySQL 5.1. Users upgrading via yum from Amazon Linux AMI 2011.09 instances with the older MySQL 5.1 installed will stay with MySQL 5.1, which is still available as mysql51 in the package repository.

PostgreSQL 9: Similar to MySQL, new Amazon Linux AMI 2012.03 users who yum install postgresql (or yum install postgresql9) will get PostgreSQL 9 by default, unless they explicitly choose to install the older PostgreSQL 8. Users upgrading via yum from Amazon Linux AMI 2011.09 instances with the older PostgreSQL 8.4.x installed will stay with PostgreSQL 8, which is still available as postgresql8 in the package repository.

GCC 4.6: While GCC 4.4.6 remains the default, we have included GCC 4.6.2, specifically for use on EC2 instances that support Intel’s new AVX instruction set. Run yum install gcc46 in order to get the packages. GCC 4.6 enables the Amazon Linux AMI to take advantage of the AVX support available on the Cluster Compute Eight Extra Large (cc2.8xlarge) instance type.

Python 2.7: While Python 2.6 is still the default, you can yum install python27 to install version 2.7. We are constantly working on getting more modules built and available for the new Python version, and will be pushing those modules into the repository as they become available.

Ruby 1.9.3: While Ruby 1.8.7 is still the default, you can yum install ruby19 to run version 1.9.3 of Ruby.
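
Here's a quick recap of those opt-in installs as they would look from a shell on a new 2012.03 instance (a minimal sketch using the package names listed above; exact dependency sets will vary):

    # Both major Tomcat versions are in the package repository
    sudo yum install tomcat7          # or: sudo yum install tomcat6

    # MySQL 5.5 and PostgreSQL 9 are the defaults on new 2012.03 instances
    sudo yum install mysql55          # plain "yum install mysql" also resolves to 5.5
    sudo yum install postgresql9     # older releases remain available as mysql51 / postgresql8

    # Newer toolchain and language versions, alongside the defaults
    sudo yum install gcc46            # GCC 4.6.2, with AVX support for cc2.8xlarge
    sudo yum install python27         # Python 2.7 (2.6 remains the default)
    sudo yum install ruby19           # Ruby 1.9.3 (1.8.7 remains the default)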

We have also upgraded the kernel to version 3.2, updated all the AWS command line tools, and refreshed many of the packages that are available in the Amazon Linux AMI to the latest upstream versions.

The Amazon Linux AMI 2012.03 is available for launch in all regions, and the Amazon Linux AMI package repositories have been updated in all regions as well. Users of the 2011.09 or 2011.02 versions of the Amazon Linux AMI can easily upgrade using yum. Users who prefer to lock to the 2011.09 package set even after the release of 2012.03 should consult the instructions on the EC2 forum.

For more information, see the Amazon Linux 2012.03 release notes.

— Jeff;

PS – If you’d like to help make the Amazon Linux AMI even better, please take a look at our open positions.

 

CloudSpokes Coding Challenge – Build an EC2 Spot Instance Tool

There’s a new coding challenge on CloudSpokes with a top prize of $2,000 and additional prizes ranging from $100 to $1,000.

The goal: Create tools to help more people to make use of Amazon EC2 Spot Instances!

This is an opportunity to be imaginative and creative, and to show us what you can do. You could focus on the business side and work on price visualization, better ways to determine optimal bid prices, or a strategy-based system that uses relative workload priorities to get the most work done at the lowest price.

Or, you could focus on the technical side. You could create tools or libraries to handle long-running processes that might be interrupted, a checkpointing framework, setup tools, or an interesting integration with Elastic MapReduce.
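
If you need a starting point, the existing EC2 command line tools already expose the raw data and operations such a tool would build on. Here's a rough sketch (option names are from the EC2 API tools of this era; double-check them against your installed version, and note that the AMI ID is a placeholder):

    # Pull recent Spot price history for an instance type (raw input for a visualization)
    ec2-describe-spot-price-history --instance-type m1.small

    # Place a one-time Spot request at a chosen bid
    ec2-request-spot-instances ami-xxxxxxxx --price 0.05 --instance-count 1 --type one-time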

The challenge ends in less than 13 days, so you’d best be getting started now! I look forward to seeing what you come up with.

— Jeff;

 

 

Cost Savings in the Cloud – foursquare and Global Blue

We’ve received great feedback from customers on the recent AWS price reductions. I’ve personally received a number of stories from customers who are using AWS to reduce the cost of running their businesses.

We’ve written up two of these recent cost savings stories as AWS case studies. foursquare Labs, Inc. and Global Blue may be dramatically different companies in terms of longevity (3 years versus 3 decades) and industry focus, but they do have one thing in common: they are both saving time and money by using Amazon Web Services as an alternative to on-premises infrastructure. Here’s the scoop:

foursquare (new AWS case study) is a location-based social network. Over 10 million foursquare users check in via a smartphone app or SMS to exchange travel tips and to share their location with friends. By checking in frequently, users earn points and virtual badges. To perform analytics across more than 5 million daily check-ins, foursquare uses Amazon Elastic MapReduce, Amazon EC2 Spot Instances, Amazon S3, and the open-source technologies MongoDB and Apache Flume. By taking advantage of the new Amazon EC2 price reductions and using a combination of On-Demand and Reserved Instances, foursquare saves 53% in costs over self-hosting while still meeting their scalability needs.

Global Blue (new AWS case study) is a multi-national firm that has been instrumental in delivering tax-free shopping and refund points to international travelers for nearly 30 years. The company’s network has helped 270,000 retailers, shopping brands, and hotels in 40 countries. In 2010, Global Blue handled over 20 million transactions worldwide, and an estimated 55,000 travelers use their services every day. To help track the transactions occurring between merchants, banks, and international travelers, the company needed to create more capacity for their business intelligence (BI) needs. As a result of moving to AWS, Global Blue has increased speed, capacity, and scalability, all while avoiding $800,000 in CapEx and $78,000 in OpEx costs that would have been spent self-hosting. That’s nearly $1M in cost savings from moving to the cloud!

Companies like foursquare and Global Blue illustrate not only the cost savings that can be achieved with cloud computing, but also the diverse approaches that can be taken to maximize performance and scalability.

— Jeff;

 

 

New AMI Catalog and New AMI Launch Button Are Now Available

A few days ago we launched a simple, yet very important new feature on our AWS website: the self-service AWS AMI catalog.

As you know, AMIs are disk images that contain a pre-configured operating system and, often, application software; they serve as the base from which an EC2 instance is launched.

Before this change, customers had to spend considerable time navigating the site; there was no way to efficiently filter results by multiple categories or to easily find details about a specific AMI. In fact, some customers sent us feedback about this, and since we always listen to customer feedback, we decided it was time to improve the catalog.

With the new catalog, available today, customers can easily search for AMIs by specifying desired categories, such as Provider, Region, Architecture, Root device type, and Operating System (Platform), and they can sort the results by date or title (A-Z).

Here is an example of the results you get when you search for a 64-bit, EBS-boot, South America AMI:

Figure 1

You can launch the AMI directly with the new Launch button shown in the figure above, or you can click the link and take a look at the details before deciding in which region you want to launch it:

Figure 2

The Launch button is particularly useful, since you no longer need to copy and paste AMI IDs from the old catalog to your Management Console.

We also made it easier for customers to publish their own AMIs into the catalog. Simply find the relevant AMI ID on your Management Console, and then submit it as shown below.

Figure 3

The system will verify the ID and then submit it. You can check the status of your submission on the appropriate page, and view your community contributions, where you can see and edit the AMIs you own.

As a final note, always remember to check out our security guidelines on how to use Shared AMIs (also called Community AMIs).

Let us know how you like the new catalog.

– Simone (@simon)

Two New AWS Getting Started Guides

We’ve put together a pair of new Getting Started Guides for Linux and Microsoft Windows. Both guides will show you how to use EC2, Elastic Load Balancing, Auto Scaling, and CloudWatch to host a web application.

The Linux version of the guide (HTML, PDF) is built around the popular Drupal content management system. The Windows version (HTML, PDF) is built around the equally popular DotNetNuke CMS.

These guides are comprehensive. You will learn how to:

  • Sign up for the services
  • Install the command line tools
  • Find an AMI
  • Launch an Instance
  • Deploy your application
  • Connect to the Instance using the MindTerm SSH Client or PuTTY
  • Configure the Instance
  • Create a custom AMI
  • Create an Elastic Load Balancer
  • Update a Security Group
  • Configure and use Auto Scaling
  • Create a CloudWatch Alarm
  • Clean up
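
To give you a feel for the command line portions of the guides, here's a compressed sketch of the launch-and-connect steps (the AMI ID, key pair, and security group names are placeholders, and the exact commands in the guides may differ):

    # Find an Amazon-owned AMI, then launch an instance from it
    ec2-describe-images -o amazon
    ec2-run-instances ami-xxxxxxxx -t m1.small -k my-key -g my-group

    # Open the web and SSH ports on the security group
    ec2-authorize my-group -p 80
    ec2-authorize my-group -p 22

    # Connect to the instance (or use the MindTerm client or PuTTY instead)
    ssh -i my-key.pem ec2-user@<instance-public-dns>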

Other sections cover pricing, costs, and potential cost savings.

We also have Getting Started Guides for Web Application Hosting, Big Data, and Static Website Hosting.

— Jeff;

 

The Next Type of EC2 Status Check: EBS Volume Status

We’ve gotten great feedback on the EC2 instance status checks that we introduced back in January. As I said at the time, we expect to add more of these checks throughout the year. Our goal is to give you the information that you need in order to understand when your EC2 resources are impaired.

Status checks help identify problems that may impair an instance’s ability to run your applications. They show the results of automated tests, performed by EC2 on every running instance, that detect hardware and software issues. Today we are happy to introduce the first volume status check for EBS volumes. In rare cases, bad things happen to good volumes. The new status check is updated when the automated tests detect a potential inconsistency in a volume’s data. In addition, we’ve added API and Console support so you can control how a potentially inconsistent volume will be processed.

Here’s what’s new:

  • Status Checks and Events – The new DescribeVolumeStatus API reflects the status of the volume and lists an event when a potential inconsistency is detected. The event tells you why a volume’s status is impaired and when the impairment started. By default, when we detect a problem, we disable I/O on the volume to prevent application exposure to potential data inconsistency.
  • Re-Enabling I/O – The IO Enabled status check fails when I/O is blocked. You can re-enable I/O by calling the new EnableVolumeIO API.
  • Automatically Enable I/O – Using the ModifyVolumeAttribute and DescribeVolumeAttribute APIs, you can configure a volume to automatically re-enable I/O. We provide this for cases in which you might favor immediate volume availability over consistency. For example, in the case of an instance’s boot volume where you’re only writing logging information, you might choose to accept possible inconsistency of the latest log entries in order to get the instance back online as quickly as possible.
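
From the command line, the new calls look something like this (a sketch using what we believe are the EC2 API tools equivalents of the APIs above; vol-xxxxxxxx is a placeholder):

    # Check the volume's status and list any impairment events
    ec2-describe-volume-status vol-xxxxxxxx

    # If I/O was disabled after a potential inconsistency, re-enable it
    ec2-enable-volume-io vol-xxxxxxxx

    # Or tell EC2 to re-enable I/O automatically for this volume
    ec2-modify-volume-attribute vol-xxxxxxxx --auto-enable-io true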

Console Support
The status of each of your volumes is displayed in the volume list (you may have to add the Status Checks column to the table using the selections accessed via the Show/Hide button):

(I don’t have that many volumes; this screen shot came from a colleague’s test environment).

The console displays detailed information about the status check when a volume is selected:

And you can set the volume attribute to auto-enable I/O by accessing this option in the volume actions drop-down list:

To learn more, go to the Monitoring Volume Status section of the Amazon EC2 User Guide.

We’re happy to be delivering another EC2 resource status check that gives you information on impaired resources and the tools to take rapid action on them. As I noted before, we look forward to providing more of these status checks over time.

Help Wanted
If you are interested in helping us build systems like EBS, we’d love to hear from you! EBS is hiring software engineers, product managers, and experienced engineering managers. For more information about these positions, please contact us at ebs-jobs at amazon.com.

— Jeff;

EC2 Updates: New Medium Instance, 64-bit Ubiquity, SSH Client

Big News
I have three important announcements for EC2 users:

  1. We have introduced a new instance type, the Medium (m1.medium).
  2. You can now launch 64-bit operating systems on the m1.small and c1.medium instances.
  3. You can now log in to an EC2 instance from the AWS Management Console using an integrated SSH client.

New Instance Type
The new Medium instance type fills a gap in the m1 family, splitting the difference, price- and performance-wise, between the existing Small and Large types and bringing our count to thirteen EC2 instance types. Here are the specs:

  • 3.75 GB of RAM
  • 1 virtual core running at 2 ECUs (EC2 Compute Units)
  • 410 GB of instance storage
  • 32- and 64-bit platforms
  • Moderate I/O performance

The Medium instance type is available now in all 8 AWS Regions. See the EC2 pricing page for more information on On-Demand and Reserved Instance pricing (you can also acquire Medium instances in Spot form).

64-bit Ubiquity
You can now launch 64-bit operating systems on the Small and Medium instance types. This means that you can now create a single Amazon Machine Image (AMI) and run it on an extremely wide range of instance types, from the Micro all the way up to the High-CPU Extra Large and the High-Memory Quadruple Extra Large, as you can see from the console menu:

This will make it easier for you to scale vertically (to larger and smaller instances) without having to maintain parallel (32 and 64-bit) AMIs.
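
In practice, this means the same AMI ID can be launched at several sizes. Here's a minimal sketch using the EC2 API tools (ami-xxxxxxxx and my-key are placeholders):

    # One 64-bit AMI, three different sizes in the m1 family
    ec2-run-instances ami-xxxxxxxx -t m1.small  -k my-key
    ec2-run-instances ami-xxxxxxxx -t m1.medium -k my-key
    ec2-run-instances ami-xxxxxxxx -t m1.large  -k my-key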

SSH Client
We’ve integrated the MindTerm SSH client into the AWS Management Console to simplify the process of connecting to an EC2 instance. There’s now a second option on the Connect window:

And there you have it! What do you think?

— Jeff;

 

New Amazon CloudWatch Monitoring Scripts

Update (January 6, 2016) – The scripts described in this blog post have been deprecated and are no longer available.

For updated information on how to perform the same tasks in a more modern fashion, please take a look at Sending Performance Counters to CloudWatch and Logs to CloudWatch Logs, Configuring a Windows Instance Using the EC2Config Service, and Monitoring Memory and Disk Statistics for Amazon EC2 Linux Instances.


The Amazon CloudWatch team has just released new sample scripts for monitoring memory and disk space usage on your Amazon EC2 instances running Linux.

You can run these scripts on your instances and configure them to report memory and disk space usage metrics to Amazon CloudWatch. Once the metrics are submitted to CloudWatch, you can view graphs, calculate statistics and set alarms on them in the CloudWatch console or via the CloudWatch API.

Available metrics include:

  • Memory Utilization (%)
  • Memory Used (MB)
  • Memory Available (MB)
  • Swap Utilization (%)
  • Swap Used (MB)
  • Disk Space Utilization (%)
  • Disk Space Used (GB)
  • Disk Space Available (GB)

The instance memory and disk space usage metrics are reported as Amazon CloudWatch custom metrics; standard Amazon CloudWatch free tier quantities and pricing apply. These are unsupported samples, but we appreciate all feedback, comments, and questions posted to the AWS forums.
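
To give you a flavor of how the scripts are used, here's a sketch based on the Developer Guide: a cron entry that reports memory, swap, and root volume metrics every five minutes (the flag set shown is illustrative; check the guide for the full list):

    # Report memory, swap, and disk space metrics to CloudWatch every 5 minutes
    */5 * * * * ~/aws-scripts-mon/mon-put-instance-data.pl --mem-util --mem-used --mem-avail --swap-util --disk-space-util --disk-space-used --disk-space-avail --disk-path=/ --from-cron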

To learn more about how to use the scripts, including installation, setup and configuration, please visit “Amazon CloudWatch Monitoring Scripts for Linux” in the Amazon CloudWatch Developer Guide.

— Henry Hahn, Product Manager, Amazon CloudWatch.