Category: Amazon EC2

AWS CloudTrail Update – Seven New Services & Support From CloudCheckr

AWS CloudTrail records the API calls made in your AWS account and publishes the resulting log files to an Amazon S3 bucket in JSON format, with optional notification to an Amazon SNS topic each time a file is published.

Our customers use the log files generated by CloudTrail in many different ways. Popular use cases include operational troubleshooting, analysis of security incidents, and archival for compliance purposes. If you need to meet the requirements posed by ISO 27001, PCI DSS, or FedRAMP, be sure to read our new white paper, Security at Scale: Logging in AWS, to learn more.

Over the course of the last month or so, we have expanded CloudTrail with support for additional AWS services. I would also like to tell you about the work that AWS partner CloudCheckr has done to support CloudTrail.

New Services
At launch time, CloudTrail supported eight AWS services. We have added support for seven additional services over the past month or so. Here’s the full list:

Here’s an updated version of the diagram that I published when we launched CloudTrail:

News From CloudCheckr
CloudCheckr (an AWS Partner) integrates with CloudTrail to provide visibility and actionable information for your AWS resources. You can use CloudCheckr to analyze, search, and understand changes to AWS resources and the API activity recorded by CloudTrail.

Let’s say that an AWS administrator needs to verify that a particular AWS account is not being accessed from outside a set of dedicated IP addresses. They can open the CloudTrail Events report, select the month of April, and group the results by IP address. This will display the following report:

As you can see, the administrator can use the report to identify all the IP addresses that are being used to access the AWS account. If any of the IP addresses were not on the list, the administrator could dig in further to determine the IAM user name being used, the calls being made, and so forth.
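
Because CloudTrail delivers its log files as JSON documents, this kind of grouping is also easy to reproduce with a short script. Here is a minimal sketch in Python; the sample records are made up, and real log files contain many more fields:

```python
import json
from collections import Counter

def events_by_ip(log_documents):
    """Count CloudTrail records per source IP address."""
    counts = Counter()
    for doc in log_documents:
        # Each CloudTrail log file is a JSON object with a "Records" array.
        for record in json.loads(doc).get("Records", []):
            counts[record.get("sourceIPAddress", "unknown")] += 1
    return counts

# Two made-up records from one address, one from another.
sample = json.dumps({"Records": [
    {"eventName": "RunInstances",      "sourceIPAddress": "203.0.113.10"},
    {"eventName": "StopInstances",     "sourceIPAddress": "203.0.113.10"},
    {"eventName": "DescribeInstances", "sourceIPAddress": "198.51.100.7"},
]})
print(events_by_ip([sample]))
```

Any address that appears in the output but is not on the approved list is a starting point for a deeper look at the associated IAM user and API calls.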

CloudCheckr is available in Freemium and Pro versions. You can try CloudCheckr Pro for 14 days at no charge. At the end of the evaluation period you can upgrade to the Pro version or stay with CloudCheckr Freemium.

— Jeff;

Tag Your Auto Scaled EC2 Instances

EC2’s Auto Scaling feature gives you the ability to define and then launch an Auto Scaling Group of Amazon EC2 instances that expands and contracts as necessary in order to handle a workload that varies over time. You can define scale-up and scale-down events that are triggered by the Amazon CloudWatch metrics that are most indicative of your application’s performance.

Because the instances are launched automatically, identifying them can sometimes be difficult. We’re solving that problem today by giving you the ability to define up to ten tags that will be associated with each EC2 instance launched by a particular Auto Scaling Group.

Here’s how you define the tags in the AWS Management Console:

You can, of course, see the tags in the EC2 section of the console:

You can also edit the tags of an existing Auto Scaling Group; they will be applied to all newly launched EC2 instances.
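
Behind the scenes, each tag on an Auto Scaling Group carries a flag that controls whether the tag is copied to the instances that the group launches. Here is a small Python sketch of that behavior, assuming a PropagateAtLaunch-style flag; the helper is illustrative, not the actual AWS API:

```python
MAX_ASG_TAGS = 10  # the limit described in the post

def tags_for_new_instance(group_tags):
    """Return the tags an instance launched by the group would receive."""
    if len(group_tags) > MAX_ASG_TAGS:
        raise ValueError("an Auto Scaling Group supports at most 10 tags")
    # Only tags flagged to propagate at launch are copied to the instance.
    return {t["Key"]: t["Value"]
            for t in group_tags if t.get("PropagateAtLaunch", True)}

group_tags = [
    {"Key": "Name",       "Value": "web-fleet", "PropagateAtLaunch": True},
    {"Key": "CostCenter", "Value": "1234",      "PropagateAtLaunch": True},
    {"Key": "Internal",   "Value": "asg-only",  "PropagateAtLaunch": False},
]
print(tags_for_new_instance(group_tags))
```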

This new feature is available now and you can start using it today!


EC2 Update – Previous Generation Instances

We have made some important changes to the EC2 pricing and instance type pages. We are introducing the concept of previous generations of EC2 instances.

Amazon EC2 has been around since the summer of 2006. We started with a single instance type (the venerable and still-popular m1.small) and have added many more over the years. We have broadened our selection by adding specialized instance families such as Compute-Optimized, Memory-Optimized, and Cluster, and by adding a wide variety of sizes within each family.

As newer and more powerful processors have become available, we have added to the lineup in order to provide you with access to the best performance at a given price point. The newest instances are a better fit for new applications, and we want to make this clear on our website. To this end, we have moved some of the instance families to a new Previous Generations page. Instances in these families are still available as On-Demand Instances, Reserved Instances, and Spot Instances. Here’s a list of some previous generations and their contemporary equivalents:

Instance Family     Previous Generation   Current Generation
General Purpose     M1                    M3
Compute-Optimized   C1 & CC2              C3
Memory-Optimized    M2, CR1               R3
Storage-Optimized   HI1                   I2

While we have no current plans to deprecate any of the instances listed above, we do recommend that you choose the latest generation of instances for new applications.

— Jeff;

Now Available – New Memory-Optimized EC2 Instances (R3)

I talked about the upcoming memory-optimized EC2 instance type (R3) last week and provided you with configuration and pricing information so that you could start thinking about how to put them to use in your environment. I am happy to report that the R3 instances are now available for use in the following AWS Regions:

  • US East (Northern Virginia)
  • US West (Northern California)
  • US West (Oregon)
  • EU (Ireland)
  • Asia Pacific (Tokyo)
  • Asia Pacific (Sydney)
  • Asia Pacific (Singapore)

R3 instances are recommended for applications that require high memory performance at the best price point per GiB of RAM. The instances include the following features:

  • Intel Xeon E5-2670 v2 “Ivy Bridge” Processors
  • Hardware Virtualization (HVM) only
  • SSD-backed instance storage, including TRIM support
  • Enhanced Networking with lower latency, low jitter, and high packet-per-second performance

The R3 instances are available in five sizes, as follows (prices are in US East (Northern Virginia); see the EC2 pricing page for full information):

Instance Name   vCPU Count   RAM        Instance Storage (SSD)   Price/Hour
r3.large        2            15 GiB     1 x 32 GB                $0.175
r3.xlarge       4            30.5 GiB   1 x 80 GB                $0.350
r3.2xlarge      8            61 GiB     1 x 160 GB               $0.700
r3.4xlarge      16           122 GiB    1 x 320 GB               $1.400
r3.8xlarge      32           244 GiB    2 x 320 GB               $2.800

You can launch the r3.xlarge, r3.2xlarge, and r3.4xlarge instances in EBS-Optimized form, with additional, dedicated I/O capacity for EBS volumes. The r3.8xlarge instance features 10 Gigabit networking.
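
A quick calculation against the table above shows what “best price point per GiB of RAM” means in practice: hourly pricing is nearly linear in memory across the whole family. The figures below are taken directly from the US East prices listed in the post:

```python
# Hourly US East price and RAM (GiB) for each size, from the table above.
r3 = {
    "r3.large":   (0.175, 15),
    "r3.xlarge":  (0.350, 30.5),
    "r3.2xlarge": (0.700, 61),
    "r3.4xlarge": (1.400, 122),
    "r3.8xlarge": (2.800, 244),
}
per_gib = {name: round(price / ram, 4) for name, (price, ram) in r3.items()}
print(per_gib)  # every size lands near $0.0115 per GiB of RAM per hour
```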

Customer Reaction
Several AWS customers have been working with the R3 instances in preparation for today’s launch:

Netflix is the world’s leading Internet television network, with over 44 million members in 41 countries enjoying more than one billion hours of TV shows and movies per month, including original series. Coburn Watson, Manager of Performance Engineering at Netflix, told us:

We run many memory-hungry applications to support the volume of content our customers access. These applications require instances with a high memory footprint and high memory bandwidth. By delivering high memory capacity and high performance, R3 instances address these needs at a low cost, and we are already planning to utilize them to support many of our applications and services.

MongoDB is one of the most popular NoSQL options on AWS. It uses aggressive memory caching for its data file management and benefits from access to copious amounts of memory. Matt Asay, VP of Marketing and Business Development at MongoDB, told us:

R3 instances provide a broad spectrum of compute and memory scaling options for our customers to realize the full memory caching potential of MongoDB. Our customers can start with a smaller instance for testing and early development, and scale to larger R3 instances as they move to production.

Metamarkets enables buyers and sellers of digital advertising to understand and visualize large quantities of data in real-time. Patrick McBride, Head of Technical Operations for Metamarkets, told us:

A key part of our analytics platform is Druid, our open source datastore that’s built to analyze tens of billions of records in under a second. For certain query types, R3 instances help us reduce Druid’s median query time by nearly 50%. That means a better experience for our clients, who rely on us to deliver insights right when they need them.

Partner Support
Many APN (Amazon Partner Network) Technology Members are working to make their offerings available on the R3 instances. Here’s a sampling:

Buddha Labs – Hardened Red Hat Enterprise Linux 6 x64 for Cluster Instances and DISA STIG Red Hat Enterprise Linux 6.4 x64 for Cluster Instances.

Parallel Universe – Parallel Universe with Cluster Instances (Red Hat Enterprise Linux, SUSE Linux Enterprise Server, Ubuntu Server, Amazon Linux, GPU Amazon Linux).

SoftNAS – SoftNAS Cloud High Performance Cloud NAS and SAN.

MathWorks – MATLAB and Simulink.

— Jeff;

Use Oracle GoldenGate with Amazon RDS for Oracle Database

Many organizations face the need to move transactional data from one location to another. As organizations continue to make the cloud a central part of their overall IT architecture, this need seems to grow in tandem with the size, scope, and complexity of the organization. The use cases range from migrating data from a master transactional database to a readable secondary database, to moving applications from on-premises to the cloud, to maintaining a redundant copy in another data center. Transactions that are generated and stored within a database run by one application may need to be copied over so that they can be processed, analyzed, and aggregated in a central location.

In many cases, one part of the organization has moved to a cloud-based data storage model that’s powered by the Amazon Relational Database Service (RDS). With support for the four most popular relational databases (Oracle, MySQL, SQL Server, and PostgreSQL), RDS has been adopted by organizations of all shapes and sizes. Users of Amazon RDS love the fact that it takes care of many important yet tedious deployment, maintenance, and backup tasks that are traditionally part and parcel of an on-premises database.

Oracle GoldenGate
Today we are giving RDS Oracle customers the ability to use Oracle GoldenGate with Amazon RDS. Your RDS Oracle Database Instances can be used as the source or the target of GoldenGate-powered replication operations.

Oracle GoldenGate can collect, replicate, and manage transactional data between a pair of Oracle databases. These databases can be hosted on-premises or in the AWS cloud. If both databases are in the AWS cloud, they can be in the same Region or in different Regions. The cloud-based databases can be RDS DB Instances or Amazon EC2 Instances that are running a supported version of Oracle Database. In other words, you have a lot of flexibility! Here are four example scenarios:

  1. On-premises database to RDS DB Instance.
  2. RDS DB Instance to RDS DB Instance.
  3. EC2-hosted database to RDS DB Instance.
  4. Cross-region replication from one RDS DB Instance to another RDS DB Instance.

You can also use GoldenGate for Amazon RDS to upgrade to a new major version of Oracle.

Getting Started
As you can see from the scenarios listed above, you will need to run the GoldenGate Hub on an EC2 Instance. This instance must have sufficient processing power, storage, and RAM to handle the anticipated transaction volume. Supplemental logging must be enabled for the source database and it must retain archived redo logs. The source and target database need user accounts for the GoldenGate user, along with a very specific set of privileges.

After everything has been configured, you will use the Extract and Replicat utilities provided by Oracle GoldenGate.
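
After the hub and the databases are set up, each Extract and Replicat process is driven by a parameter file on the hub. As a rough, hypothetical sketch of the shape of an Extract parameter file (every process name, credential, host, port, trail path, and schema below is a placeholder, not a value from this post):

```
EXTRACT ehub
USERID gg_user@source-db, PASSWORD ********
RMTHOST hub.example.com, MGRPORT 7809
RMTTRAIL ./dirdat/ab
TABLE APP_SCHEMA.*;
```

The matching Replicat parameter file on the target side names the trail to read and maps source tables to target tables; the RDS User Guide covers the exact settings for RDS-hosted sources and targets.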

The Amazon RDS User Guide contains the information that you will need to have in order to install and configure the hub and to run the utilities.

— Jeff;

Coming Soon – New Memory-Optimized EC2 Instances

At last week’s AWS Summit in San Francisco, Senior VP Andy Jassy announced the forthcoming R3 instance type (watch Andy’s presentation), and presented a map to illustrate the choices:

I’d like to provide you with some additional technical and pricing information so that you can start thinking about how you will put this powerful new instance to work.

Soon to be available in five instance sizes, this instance type is recommended for applications that require high memory performance at the best price point per GiB of RAM. The R3 instances include the following features:

  • Intel Xeon E5-2670 v2 “Ivy Bridge” Processors
  • Hardware Virtualization (HVM) only
  • SSD-backed instance storage, including TRIM support
  • Enhanced Networking with lower latency, low jitter, and high packet-per-second performance

The R3 instances will be available in five sizes, as follows (prices are in US East (Northern Virginia); see the EC2 pricing page for full information):

Instance Name   vCPU Count   RAM        Instance Storage (SSD)   Price/Hour
r3.large        2            15 GiB     1 x 32 GB                $0.175
r3.xlarge       4            30.5 GiB   1 x 80 GB                $0.350
r3.2xlarge      8            61 GiB     1 x 160 GB               $0.700
r3.4xlarge      16           122 GiB    1 x 320 GB               $1.400
r3.8xlarge      32           244 GiB    2 x 320 GB               $2.800

You will be able to launch the r3.xlarge, r3.2xlarge, and r3.4xlarge instances in EBS-Optimized form, with additional, dedicated I/O capacity for EBS volumes. The r3.8xlarge instance features 10 Gigabit networking.

Stay tuned to this blog, or follow me on Twitter, and you’ll be among the first to know when you can start launching R3 instances.

— Jeff;

Amazon Linux AMI 2014.03 is Now Available

The Amazon Linux AMI is a supported and maintained Linux image for use on Amazon EC2.

We release new versions of the Amazon Linux AMI every six months after a public testing phase that includes one or more Release Candidates. The Release Candidates are announced in the EC2 forum and are available to all EC2 users.

Launch Time
Today marks the release of the 2014.03 Amazon Linux AMI, which is available in both PV and HVM modes, with both EBS-backed and Instance Store-backed AMIs. The Amazon Linux AMI is supported on all EC2 instance types.

You can launch this new version of the AMI in the usual ways. You can also upgrade existing EC2 instances by running yum update and rebooting your instance.

Updates & New Features
The Amazon Linux AMI was designed to provide a stable, secure, and high performance execution environment for applications running on EC2.

Here are the new features:

Linux kernel 3.10.34 – The AMI is built around the 3.10 series of Linux kernel releases. This is a long-term stable release that includes many performance and functionality improvements.

CloudInit 0.7.2 – This handy package has been upgraded to the 0.7 series. It supports dracut-modules-growroot, which automatically resizes your root filesystem on boot.

Java 7 – Java 7 (java-1.7.0-openjdk) is now the default; Java 6 (java-1.6.0-openjdk) is still available in the AMI repositories.

Ruby 2.0 – Ruby 2.0 is now the default Ruby interpreter. Core Ruby gems have been updated to work with both Ruby 1.8 and Ruby 2.0.

glibc 2.17 – The GNU C library has been upgraded to version 2.17, bringing in numerous bug fixes and optimizations.

GCC 4.8 – Version 4.8 of GCC is now the default; versions 4.4, 4.6, and 4.7 are still available in the repositories.

Docker 0.9 – You can now run Docker containers on the Amazon Linux AMI.

LXC 0.9 – The newest version of LXC is available; you can now use the Linux containment features on the Amazon Linux AMI.

GoLang 1.2 – You can now build Go programs.

We have also added a number of new packages to the repositories and re-synced other packages to the latest upstream versions.

Please read the entire set of Amazon Linux AMI 2014.03 release notes for more information.

Going Going Gone
This release marks the third anniversary of the launch of the Amazon Linux AMI. We are now starting to make plans to deprecate and ultimately remove some of the older packages. Check the release notes for more information about our plans in this area.

Choosing Alternatives
As you can see from the list of updates and new features, the Amazon Linux AMI incorporates multiple versions of a number of important packages. The Alternatives package is part of the AMI and can be used to switch between versions. Under the covers, this command uses symbolic links to effect a system-wide change that will persist across reboots.

To show you how to do this, I installed four separate versions of GCC on my instance. I can switch between them using the command alternatives --config gcc. The command lists the available versions and allows me to make a change by selecting the desired version:

The new version of the Amazon Linux AMI is available today in all of the public AWS Regions.

— Jeff;

AWS Price Reduction #42 – EC2, S3, RDS, ElastiCache, and Elastic MapReduce

It is always fun to write about price reductions. I enjoy knowing that our customers will find AWS to be an even better value as we work on their behalf to make it more and more cost-effective. If you’ve been reading this blog for a while, you know that we reduce prices on our services from time to time, and today’s announcement marks the 42nd price reduction since 2008.

We’re more than happy to continue this tradition with our latest price reduction.

Effective April 1, 2014, we are reducing prices for Amazon EC2, Amazon S3, the Amazon Relational Database Service, Amazon ElastiCache, and Elastic MapReduce.

Amazon EC2 Price Reductions
We are reducing prices for On-Demand instances as shown below. Note that these changes will automatically be applied to your AWS bill with no additional action required on your part.

Instance Type   Linux / Unix Price Reduction   Microsoft Windows Price Reduction
M1, M2, C1      10-40%                         7-35%
C3              30%                            19%
M3              38%                            24-27%

We are also reducing the prices for Reserved Instances for all new purchases. With today’s announcement, you can save up to 45% on a one-year RI and 60% on a three-year RI relative to the On-Demand price. Here are the details:

                Linux / Unix Price Reduction   Microsoft Windows Price Reduction
Instance Type   1 Year       3 Year            1 Year      3 Year
M1, M2, C1      10%-40%      10%-40%           Up to 23%   Up to 20%
C3              30%          30%               Up to 16%   Up to 13%
M3              30%          30%               Up to 18%   Up to 15%

Also keep in mind that as you scale your footprint of EC2 Reserved Instances, you will benefit from the Reserved Instance volume discount tiers, increasing your overall discount over On-Demand to as much as 68%.

Consult the EC2 Price Reduction page for more information.

Amazon S3 Price Reductions
We are reducing prices for Standard and Reduced Redundancy Storage by an average of 51%. The price reductions in the individual S3 pricing tiers range from 36% to 65%, as follows:

Tier              New S3 Price / GB / Month   Price Reduction
0-1 TB            $0.0300                     65%
1-50 TB           $0.0295                     61%
50-500 TB         $0.0290                     52%
500-1000 TB       $0.0285                     48%
1000-5000 TB      $0.0280                     45%
5000 TB or More   $0.0275                     36%

These prices are for the US Standard Region; consult the S3 Price Reduction page for more information on pricing in the other AWS Regions.
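
Because the rates are tiered, each rate applies only to the gigabytes that fall inside its tier. Here is a small Python sketch of how a monthly storage bill for the US Standard Region would combine the tiers; the tier boundaries and rates are from the table above, and the helper itself is illustrative:

```python
# (tier size in GB, price per GB-month), in tier order.
TIERS = [
    (1024,         0.0300),  # 0-1 TB
    (50176,        0.0295),  # 1-50 TB
    (460800,       0.0290),  # 50-500 TB
    (512000,       0.0285),  # 500-1000 TB
    (4096000,      0.0280),  # 1000-5000 TB
    (float("inf"), 0.0275),  # 5000 TB or more
]

def monthly_storage_cost(gb):
    """Bill each GB at the rate of the tier it falls into."""
    cost, remaining = 0.0, gb
    for size, rate in TIERS:
        used = min(remaining, size)
        cost += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return cost

# 10 TB: the first 1 TB at $0.0300, the remaining 9 TB at $0.0295.
print(round(monthly_storage_cost(10 * 1024), 2))
```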

Amazon RDS Price Reductions
We are reducing prices for Amazon RDS DB Instances by an average of 28%. There’s more information on the RDS Price Reduction page, including pricing for Reserved Instances and Multi-AZ deployments of Amazon RDS.

Amazon ElastiCache Price Reductions
We are reducing prices for Amazon ElastiCache cache nodes by an average of 34%. Check out the ElastiCache Price Reduction page for more information.

Amazon Elastic MapReduce Price Reductions
We are reducing prices for Elastic MapReduce by 27% to 61%. Note that this is in addition to the EC2 price reductions described above. Here are the details:

Instance Type   EMR Price Before Change   New EMR Price   Price Reduction
m1.small        $0.015                    $0.011          27%
m1.medium       $0.03                     $0.022          27%
m1.large        $0.06                     $0.044          27%
m1.xlarge       $0.12                     $0.088          27%
cc2.8xlarge     $0.50                     $0.270          46%
cg1.4xlarge     $0.42                     $0.270          36%
m2.xlarge       $0.09                     $0.062          32%
m2.2xlarge      $0.21                     $0.123          41%
m2.4xlarge      $0.42                     $0.246          41%
hs1.8xlarge     $0.69                     $0.270          61%
hi1.4xlarge     $0.47                     $0.270          43%
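
The reduction percentages follow directly from the before and after prices. A quick check of a few rows:

```python
def reduction_pct(before, after):
    """Percentage saved, rounded to the nearest whole percent."""
    return round((before - after) / before * 100)

for name, before, after in [("m1.small",    0.015, 0.011),
                            ("cc2.8xlarge", 0.50,  0.270),
                            ("hs1.8xlarge", 0.69,  0.270)]:
    print(name, f"{reduction_pct(before, after)}%")
```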

With this price reduction, you can now run a large Hadoop cluster using the hs1.8xlarge instance for less than $1000 per Terabyte per year (this includes both the EC2 and the Elastic MapReduce costs).

Consult the Elastic MapReduce Price Reduction page for more information.

We’ve often talked about the benefits that AWS’s scale and focus creates for our customers. Our ability to lower prices again now is an example of this principle at work.

Remember that an added advantage of using AWS services such as Amazon S3 and Amazon EC2 over your own on-premises solution is that the price reductions we regularly roll out apply not only to any new storage that you might add but also to the data that you have already stored in AWS. With no action on your part, your cost to store existing data goes down over time.

Once again, all of these price reductions go into effect on April 1, 2014 and will be applied automatically.

— Jeff;

New VPC Peering for the Amazon Virtual Private Cloud

The Amazon Virtual Private Cloud (VPC) gives you the power to create a logically isolated section of the AWS Cloud, which you can think of as a virtual network. You can launch AWS resources, including Amazon EC2 instances, within the network, and you have full control over the virtual networking environment, including the IP address range and the subnet model. You also have full control over network routing, both within the VPC (using route tables) and between networks (using network gateways).

VPC Peering
Today we are making the VPC model even more flexible! You now have the ability to create a VPC peering connection between VPCs in the same AWS Region. Once established, EC2 instances in the peered VPCs can communicate with each other across the peering connection using their private IP addresses, just as if they were within the same network.

You can create a peering connection between two of your own VPCs, or with a VPC in another AWS account. A VPC can have one-to-one peering connections with up to 50 other VPCs in the same Region.

VPC peering enables a number of interesting use cases; let’s take a look at a couple of them.

Within a single organization, you can set up peering relationships between VPCs that are run by different departments. One VPC can encompass resources that are shared across the entire organization, with additional, per-department VPCs for resources that are specific to each department. Here’s a very simple example:

After you set up the peering connections and add entries to the routing tables (to direct packets out of one VPC and into another), the EC2 instances in the Accounting VPC can access the Shared Resources VPC, as can the instances in the Engineering VPC. However, the Accounting instances cannot access the Engineering instances, or vice versa. Peering connections are not transitive; you would need to set up a peering connection between Engineering and Accounting in order to establish connectivity. Think about extending this model with an Operations VPC that is peered with all of the other VPCs in your organization.

As I mentioned earlier, you can also establish VPC peering between a pair of VPCs that are owned by different accounts. Suppose your organization is a member of an industry consortium or a party to a joint venture. You can use VPC peering to share common resources between members of the consortium or other joint venture, all within AWS and with full control of the networking topology:

As was the case in the previous scenario, each participant in the consortium will be able to see and access the shared resources, but not those of the other participants. We’ve documented a number of common peering scenarios in our VPC Peering Guide.

Peering Details
In just a minute, I’ll show you how easy it is to create a VPC peering connection. Before I do that, I’d like to review the rules that govern the use of this very powerful new feature.

You can connect any two VPCs that are in the same AWS Region, regardless of ownership, as long as both parties agree. We plan to extend this feature to support cross-Region peering in the future. Connections are requested by sending an invitation from one VPC to the other. The invitation must be accepted in order to establish the connection. Needless to say, you should only accept invitations from VPCs that you know. You are free to ignore unwanted or incorrect invitations; they’ll expire before too long.

The VPCs to be peered must have non-overlapping CIDR blocks. This ensures that all of the private IP addresses are unique, allowing direct access (as allowed by the peering and routing tables) without the need for any form of network address translation.
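
This check is easy to perform before sending an invitation; Python’s standard ipaddress module can test for overlap directly (the CIDR blocks below are examples):

```python
import ipaddress

def can_peer(cidr_a, cidr_b):
    """Two VPCs can peer only if their CIDR blocks do not overlap."""
    a = ipaddress.ip_network(cidr_a)
    b = ipaddress.ip_network(cidr_b)
    return not a.overlaps(b)

print(can_peer("10.0.0.0/16", "10.1.0.0/16"))  # distinct ranges
print(can_peer("10.0.0.0/16", "10.0.1.0/24"))  # second range sits inside the first
```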

As you can see from the scenarios that I described above, VPC peering connections do not generate transitive trust. Just because A is peered with B and B is peered with C, it doesn’t mean that A is peered with C.
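
One way to make the non-transitivity concrete is to model peering connections as an undirected edge set: two VPCs can communicate only if an edge connects them directly. The VPC names below are illustrative:

```python
# Each peering connection is an unordered pair of VPCs.
peerings = {frozenset(p) for p in [
    ("Accounting",  "SharedResources"),
    ("Engineering", "SharedResources"),
]}

def can_communicate(vpc_a, vpc_b):
    """Direct peering only; there is no transitive routing."""
    return frozenset((vpc_a, vpc_b)) in peerings

print(can_communicate("Accounting", "SharedResources"))  # directly peered
print(can_communicate("Accounting", "Engineering"))      # no direct peering, so no path
```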

The connections are implemented within the VPC fabric; this avoids single points of failure and bandwidth bottlenecks.

There is no charge for setting up or running a VPC peering connection. Data transferred across peering connections is charged at $0.01/GB for send and receive, regardless of the Availability Zones involved.

You can set up VPC peering connections from the AWS Management Console, the VPC APIs, or the AWS Command Line Interface (CLI).

VPC Peering Example
I used the AWS Management Console to set up a VPC peering connection between two of my VPCs, which were named corporate-vpc and branch-east-vpc. Here are the IDs and the CIDRs:

Before I go any further, I should note that these features are available in the “Preview” version of the VPC console. In addition to support for the creation and management of VPC peering connections, the new console includes a multitude of tagging features to simplify and enhance your VPC management operations.

I clicked on Peering Connections in the VPC Dashboard, selected corporate-vpc, and then used the Create VPC Peering Connection button to invite branch-east-vpc to peer:

The invite appeared in the list of connections. I selected it and clicked Accept:

The peering connection was created and became visible immediately:

Then I created an entry in the route table of each VPC. As you can see, the console provided me with a helpful popup when it was time for me to choose the Target for the route:

Peer Now
The new VPC peering feature is available now and you can start using it today. I am very interested in seeing how this feature is put to use. Leave me a comment and let me know what you think!

— Jeff;

Eight Years (And Counting) of Cloud Computing

We launched Amazon S3 on March 14, 2006 with a press release and a simple blog post. We knew that the developer community was interested in and hungry for powerful, scalable, and useful web services and we were eager to see how they would respond.

S3 and the Amazon Values
Almost every company has a mission statement of some kind. At Amazon, we are guided by our Leadership Principles. We use these principles as part of the interviewing process, and revisit them during our annual reviews. I thought back to the launch of S3 and the long string of additional features that we have added to it since then, and tried to match them up to some of the leadership principles.

Customer Obsession – Before we wrote a line of code, we talked to lots of potential customers so that we could have a good understanding of the features that they would like to have in an Internet-scale storage service. We talked to individuals and groups within the company, and to outside developers. The listening process didn’t stop when S3 launched. We talk to customers every day and we do our best to listen, understand, and to respond.

Invent and Simplify – True innovation calls for a lot of difficult decisions. The innovator must decide what the product is, and what it is not. We were breaking new ground when we were designing and building S3, and had to figure out how to handle identity, authentication, billing, security, and hundreds of other issues before we could launch the product.

Are Right, A Lot – The first time I heard about S3 internally, I was told that we were building “Malloc for the Internet.” As a long-time C programmer, I knew exactly what this meant. Malloc is a very basic C library function: it allocates the requested amount of memory and returns a pointer to it. It is a simple building block for more complex forms of memory management. Equating S3 to Malloc was a key insight, and one that served as a guiding principle when making those early (and crucial) design decisions. Moving forward, we continually remind ourselves that the first “S” in S3 stands for “Simple.”

Think Big – Because S3 was designed for the Internet, we had to make sure that the architecture and the implementation contained no intrinsic limits. Today, with trillions of objects stored and an access rate of over one million requests per second, we continue to look to the future, with a well-tuned model that allows us to forecast, plan for, and accommodate the never-ending inflow of new data. Like Malloc, S3 is a dependable architectural component. Amazon EC2, Elastic MapReduce, Elastic Block Store, Amazon Glacier, CloudTrail, Redshift, the Relational Database Service, and other services all make use of S3 for object-style storage.

Earn Trust of Others – It is kind of fun to be in a crowded elevator at a tech conference. The conference attendees talk about S3 very casually, and take its scale, durability, and cost-effectiveness pretty much for granted. I often hear them say things like “Just throw it into S3 and stop worrying about it.” S3 has become, as we envisioned at design time, the de facto storage system for the Internet.

AWS at Airbnb
Accommodation booking site Airbnb has been on AWS since they launched. They now use a wide variety of services including S3, EC2, the Relational Database Service (RDS), Route 53, ElastiCache, Redshift, and DynamoDB. With 9 million customers, 1000 EC2 instances, 2 billion rows of data in RDS, and 50 terabytes of photos stored in S3, Airbnb is run by an operations team of just five people.

In the video below, Airbnb VP of Engineering Mike Curtis talks about the benefits that they have seen from using AWS:

Onward and Upward!
Eight years down the road from the launch of S3, I remain as excited as ever about the future of AWS and of cloud computing. I have written and published 1,645 posts since the launch of S3 (972,246 words, but who’s counting?) and am doing my best to keep up with all of the cool stuff that our teams build. Stay tuned for the next eight years and you won’t be disappointed!

— Jeff;