Category: Amazon EC2


Updates to the AWS SDKs

We’ve made some important updates to the AWS SDK for Java, the AWS SDK for PHP, and the AWS SDK for .NET. The newest versions of the respective SDKs are available now.

AWS SDK for Java

The AWS SDK for Java now supports the new Amazon S3 Multipart Upload feature in two different ways. First, you can use the new APIs — InitiateMultipartUpload, UploadPart, CompleteMultipartUpload, and so forth. Second, you can use the SDK’s new TransferManager class. This class implements an asynchronous, higher-level interface for uploading data to Amazon S3. The TransferManager will use multipart uploads if the object to be uploaded is larger than a configurable threshold. You can simply initiate the transfer (using the upload method) and proceed. Your application can poll the TransferManager to track the status of the upload.
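
Here’s a minimal sketch of that flow; the credentials, bucket, key, and file names below are hypothetical placeholders:

    import java.io.File;
    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.services.s3.transfer.TransferManager;
    import com.amazonaws.services.s3.transfer.Upload;

    public class TransferManagerSketch {
        public static void main(String[] args) throws Exception {
            TransferManager tm = new TransferManager(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

            // Initiate the transfer and proceed; TransferManager switches to
            // a multipart upload when the file exceeds the size threshold.
            Upload upload = tm.upload("my-bucket", "backups/archive.tar",
                new File("archive.tar"));

            // Poll for status while the upload runs in the background.
            while (!upload.isDone()) {
                System.out.println("Upload state: " + upload.getState());
                Thread.sleep(1000);
            }
        }
    }

If you’d rather block until the transfer finishes, you can call upload.waitForCompletion() instead of polling.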

The SDK’s PutObject method can now provide status updates via a new ProgressListener interface. This can be used to implement a status bar or for other tracking purposes.
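
Here’s a minimal sketch of how such a listener might be wired up, again with hypothetical bucket and key names:

    import java.io.File;
    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.services.s3.AmazonS3Client;
    import com.amazonaws.services.s3.model.ProgressEvent;
    import com.amazonaws.services.s3.model.ProgressListener;
    import com.amazonaws.services.s3.model.PutObjectRequest;

    public class ProgressSketch {
        public static void main(String[] args) {
            AmazonS3Client s3 = new AmazonS3Client(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

            PutObjectRequest request = new PutObjectRequest(
                "my-bucket", "photos/large-image.jpg", new File("large-image.jpg"));

            // The listener is invoked as bytes are transferred; a GUI could
            // advance a status bar here instead of printing to the console.
            request.setProgressListener(new ProgressListener() {
                public void progressChanged(ProgressEvent event) {
                    if (event.getEventCode() == ProgressEvent.COMPLETED_EVENT_CODE) {
                        System.out.println("Upload complete");
                    }
                }
            });

            s3.putObject(request);
        }
    }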

We’ve also fixed a couple of bugs.

AWS SDK for PHP

The AWS SDK for PHP now supports even more services. We’ve added support for Elastic Load Balancing, the Relational Database Service, and the Virtual Private Cloud.

We have also added support for S3 Multipart Upload and for CloudFront Custom Origins. You can now stream to (when writing) or from (when reading) an open file when transferring an S3 object, and you can seek to a specific file position before initiating a streaming transfer.

The 1000-item limit has been removed from the convenience functions: get_bucket_filesize, get_object_list, delete_all_objects, delete_all_object_versions, and delete_bucket will now operate on all of the entries in a bucket.

We’ve also fixed a number of bugs.

AWS SDK for .NET

The AWS SDK for .NET now supports the Amazon S3 Multipart Upload feature using the new APIs — InitiateMultipartUpload, UploadPart, CompleteMultipartUpload, and so forth — as well as a new TransferUtility class that automatically determines when to upload objects using the Multipart Upload feature.

We’ve also added support for CloudFront Custom Origins and fixed a few bugs.

These SDKs (and a lot of other things) are produced by the AWS Developer Resource team. They are hiring and have a number of open positions.

— Jeff;

 

FameTown – A New AWS-Powered Facebook Game

Amir of Diversion, Inc. wrote to tell me about his company’s newest release, an AWS-powered Facebook game called FameTown.

FameTown lets you play the role of a movie star in a digital version of Hollywood. You can start on the D-List (a total unknown) and attempt to progress to the A-List, earning points by completing tasks such as meeting with cast members and directors. You can also improve your social standing by attending parties and charity events, and you can hire agents, assistants, and publicists to further boost your career.

Under the hood, this Facebook game makes good use of AWS and a number of other technologies. Here’s a summary:

  • The game is written using Sinatra, a domain-specific language (DSL) for creating Ruby web apps with minimal effort. The code runs on Amazon EC2.
  • Traffic to the EC2 instances is load balanced using Nginx.
  • Membase is used for data storage, hosted on a number of Elastic Block Store (EBS) volumes.
  • Scalr is used to scale and manage the application.

Amir and I chatted about scalability. He told me that each application server runs on a High-CPU Extra Large (c1.xlarge) instance and can process 3000 to 3500 requests per second. Membase runs on a set of three Extra Large (m1.xlarge) instances and can handle over 100,000 requests per second.

I’ve not yet played FameTown (but I will), and I hope that Amir and company have a lot of success with it.

— Jeff;

 

Servers for Nothing, Bits for Free

In the last year or two we’ve added free tiers of service to Amazon SQS, Amazon SNS, and Amazon SimpleDB. We have learned that developers like to be able to try out our services without having to pay to do so. In many cases, they have created non-trivial applications that can run entirely within the free tier of a particular service.

Today, we’re going to go a lot farther. How far? Really far!

Effective today (November 1, 2010), we’re opening up a new free tier for all new AWS developers. Here’s what you get each month when you combine the existing free tier with this announcement:

  • 750 hours of free time on an Amazon EC2 Micro instance running Linux. You can use this to run one of the Amazon Linux AMIs or any other non-paid Linux AMI. This time cannot be used to run the new SUSE Linux AMIs, the IBM AMIs, or the Microsoft Windows AMIs.
  • 10 GB-months of Elastic Block Storage, 1 GB of snapshot storage, and 1 million I/O requests. This is enough space for the Amazon Linux AMI, among others.
  • 750 hours of Elastic Load Balancer time and 15 GB of data transfer through it.
  • 5 GB-months of Amazon S3 storage, along with 20K GETs and 2K PUTs.
  • 15 GB of internet data transfer out, and 15 GB of internet data transfer in.
  • 100K Amazon SQS requests.
  • 100K Amazon SNS requests, along with 100K HTTP notifications and 1K email notifications.
  • 25 Amazon SimpleDB machine hours and 1 GB of storage.

In plain English, you get everything that you need to build and deploy a very functional web application and run it full time, for free! The AWS Management Console and Auto Scaling are already available at no charge, of course.

You don’t need to send us cookies, put an AWS sticker on your cat, or write a blog post about this. You do need to create an AWS account with a valid credit card attached, in case your usage in a given month exceeds what we’ve made available in the free tier.

You will be able to see what pay-as-you-go really means, and you will be able to get some valuable experience with AWS.

Your free usage starts on the day that you create your AWS account and ends one year after that. Accounts are limited to one per customer.

You can get started by reading the EC2 Getting Started Guide or my new AWS book.

We are very interested in learning more about the uses that you find for these new resources. In fact, we’re curious about what folks are working on this week. If you create a cool tool, a great application, or a compelling web site that runs entirely within the free tier, send us a note by this Friday. You can leave a comment on this post or drop us a line at awseditor@amazon.com. We’d love to hear from you.

— Jeff;

 

Cloud-Powered Software Development Lifecycle – Innovative Social Gaming and LBS Platform in the AWS Cloud – TAPTIN

As AWS technology evangelists, we often meet startups working on cool stuff. Every so often we discover startups that have done incredible things on AWS. Recently, I came across Navitas, a Berkeley-based company with development teams in Silicon Valley, Ecuador, and Thailand. Since I am deeply interested in location-based services and geo apps on AWS, I dove a little deeper to learn more about the company and its architecture.

Navitas is the creator of TAPTIN, a location-based service similar to Foursquare and Gowalla. However, TAPTIN goes beyond mere check-ins. The TAPTIN platform enables the creation of locally branded apps, such as Berkeley Local, which has events and recommendations for UC Berkeley and the city of Berkeley. TAPTIN is thus a new form of local media, with built-in Foursquare-style check-in features as well as services (coupons, loyalty campaigns, and so forth) that merchants can use to engage with their customers, making it possible to build locally branded apps for every city around the world. Another example of an app built on the same platform is the We Love Beer app. This app has a beer catalog, and pubs can link to the catalog categories to create their own beer menus. It enables you to find what beers are available nearby, to locate a particular kind of beer, and to find your friends at local pubs.

Recently, Navitas abandoned their server farm and moved their entire development and production environments to AWS; TAPTIN now runs 100% in the AWS cloud, scaling across multiple tiers of servers. The founder of the company, Kevin Leong, was kind enough to explain their architecture in detail, below.

What Kevin and his team have done is commendable, especially given that they did it by bootstrapping, which Kevin says would not have been possible without AWS.

Production Environment
The figure below depicts the Navitas production environment, which consists of seven scalable layers, all using open source enterprise technologies. Load balancers are employed in multiple tiers.  Search is based on Solr, a popular open source search platform from the Apache Lucene project. Solr is also used for geospatial search. The search tier uses HAProxy on an Amazon EC2 instance to apply write updates to a Solr master server, and these updates are then distributed to the two Solr read-only slaves.

Navitas Cloud Architecture 
 
The application tiers consist of three layers. Web pages are implemented in PHP and consume REST APIs running on Jetty servers; some PHP pages also call Solr directly. The company originally started with Enterprise Java Beans (EJB) running on JBoss servers but then decided to use the lightweight Java Persistence API (JPA) with Hibernate and Spring Framework HTTP Remoting. The caching layer runs memcached, which provides dynamically load-balanced cache services. They employ two layers of cache: memcached is deployed in the web tier, and if an object is not found there, it is requested from the persistence tier, where the most recently used objects are likely to already be in cache. This technique yields higher performance. Memcached is configured to scale automatically with new servers.
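
To make the two-layer approach concrete, here is a minimal, generic sketch of this look-aside caching pattern in Java using the spymemcached client. This is an illustration rather than Navitas’s actual code; the host name, expiry, and loader are all hypothetical:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import net.spy.memcached.MemcachedClient;

    public class LookAsideCache {
        private final MemcachedClient cache;

        public LookAsideCache(String host) throws IOException {
            this.cache = new MemcachedClient(new InetSocketAddress(host, 11211));
        }

        // Check the web-tier cache first; on a miss, fall back to the
        // persistence tier and repopulate memcached for the next request.
        public Object fetch(String key) {
            Object value = cache.get(key);
            if (value == null) {
                value = loadFromPersistenceTier(key);  // hypothetical loader
                cache.set(key, 3600, value);           // cache for one hour
            }
            return value;
        }

        private Object loadFromPersistenceTier(String key) {
            // Placeholder: query PostgreSQL (via pgpool) or another store here.
            return key.toUpperCase();
        }
    }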

While load balancing and automatic instance deployment ensure high availability as the TAPTIN and Grand Poker apps scale, Kevin’s team also implemented a failover strategy, automatic data backups, and data recovery steps, as well as recovery of Solr search indexes. Because all of this traffic stays within AWS, there are no bandwidth charges.

Navitas uses PostgreSQL on Amazon EC2 to store structured data, with pgpool for load balancing, failover, and asynchronous replication. It is easy for them to add another replica to pgpool to support load balancing and parallel queries.

Media, such as photos, are transcoded and stored in Amazon S3.

Sandbox and Development Environment
Having the sandbox and source code repository (SVN) on AWS was not only cost-effective but also a huge productivity gain for the team, since it was easy to launch another instance. With Amazon Machine Images (AMIs), developers create and launch consistent environments for development, test, and production. Kevin said that his development team, which is spread out around the world (in California, Latin America, and Asia Pacific), can launch the same pre-configured sandboxes within minutes. This saves a lot of time and increases developer productivity. The company uses cheaper Spot Instances for development work whenever they are available.

They also create a new sandbox environment on AWS for testing. With SVN on Amazon EC2, Navitas does their nightly build in the cloud. Source code is checked out to a build directory where it’s compiled, built, and deployed. Unit test harnesses are also run to ensure that no code is broken and that performance is maintained. Kevin said that automated performance testing will come later, when the company has more resources.

With an automated build on AWS, they were able to migrate their extreme programming development methodology to the cloud by having developers commit their code daily for the nightly build. They commit all development code to the SVN trunk for all projects, and can build as required for testing in their sandbox environment. They create SVN branches for all production releases, allowing them to fix bugs quickly and efficiently, immediately release new application binaries, or plan and stage back-end upgrades.

The company maintains a series of build scripts and uses Maven to manage the build dependencies. The server configuration is externalized so that the build scripts can pick up the appropriate configuration for sandbox and production. Their sandbox is a mirror of production, and the bootstrapped AMI does all the magic. A cloud-powered SDLC clearly has a lot of advantages.

Final Thought
What I really liked about Kevin’s strategy was the “Think Scale” approach. Not many startups invest in designing a scalable architecture early on, especially because it’s time-consuming and distracting; some think it’s too expensive. His message to startups was “think scale” from the beginning, as it is really not too expensive to do in the cloud. To quote him, “I did it by bootstrapping. They can also. Amazon AWS is the way to go.”

– Jinesh Varia

Restore The Gulf – US Government Site Hosted on Amazon EC2

Ho hum – another web site running on Amazon EC2. No big deal, right?

Actually, it is a pretty big deal.

Take a look at the top left of the site. What does it say?

An Official Website of the United States Government

A number of US Government regulations, including an important one called FISMA (the Federal Information Security Management Act), establish stringent information security requirements that had to be satisfied before this site was brought online.

The prime contractor for this project was a company called Government Acquisitions. They worked with Acquia for hosting and with SiteWorx (an Acquia partner) to build the site.

The site itself was built with the very popular Drupal content management system (CMS). You can read more about Acquia Hosting and their use of AWS in this update from Drupal founder Dries Buytaert. This is a nice step forward for Drupal and an unmistakable sign that the US Government is paying attention to open source software like Drupal.

If your application requires FISMA certification and you’d like to learn more about running it on AWS, please use the AWS Sales and Business Development form to get in touch with us.

Speaking of the US Government, we’ll be participating in the Adobe Government Assembly on November 3rd, 2010. At this event, government IT innovators will discuss ways to engage, innovate, and improve government efficiency. We hope to see you there!

— Jeff;

 

New: Amazon EC2 Running SUSE Linux Enterprise Server

A critical aspect of the Amazon Web Services value proposition revolves around choice. This takes many forms, each of which gives you the freedom to choose the best fit for your particular situation:

  • A wide variety of services that you can choose to use, or not, as determined by your needs.
  • Ten EC2 instance types, with instances spanning a very wide range of CPU power, RAM, instance storage, and network performance.
  • Five RDS DB instance classes, also spanning a wide range.
  • Four EC2 regions (US East, US West, EU, and Asia Pacific).
  • Multiple EC2 pricing models (On-Demand, Spot, and Reserved).
  • Multiple Operating Systems including a number of Linux Distributions, two versions of Microsoft Windows, and OpenSolaris.

Today we are giving you an additional operating system choice – you can now run SUSE Linux Enterprise Server (version 10 or 11) on Amazon EC2 in any of our regions and on any of our instance types. You’ll also have access to a maintenance subscription that automatically installs the most current security patches, bug fixes, and new features from a repository hosted within AWS.

With more than 6,000 certified applications from over 1,500 independent software vendors, SUSE Linux Enterprise is a proven, commercially supported Linux platform that is ideal for development, test, and production workloads.

All of this is available on a pay as you go basis, with no long-term commitments and no minimum fees. Reserved Instances and Spot Instances are also available; you can run SUSE in the cloud using Reserved Instances very economically.

Pricing and other details can be found on our SUSE Linux Enterprise Server page. You can launch SLES from the Quick Start tab of the AWS Management Console; On-Demand, Reserved, and Spot Instances are available.

— Jeff;

 

Now Available: Host Your Web Site in the Cloud

I am very happy to announce that my first book, Host Your Web Site in the Cloud, is now available! Weighing in at over 355 pages, this book is designed to show developers how to build sophisticated AWS applications using PHP and the CloudFusion toolkit.

Here is the table of contents:

  1. Welcome to Cloud Computing.
  2. Amazon Web Services Overview.
  3. Tooling Up.
  4. Storing Data with Amazon S3.
  5. Web Hosting with Amazon EC2.
  6. Building a Scalable Architecture with Amazon SQS.
  7. EC2 Monitoring, Auto Scaling, and Elastic Load Balancing.
  8. Amazon SimpleDB: A Cloud Database.
  9. Amazon Relational Database Service.
  10. Advanced AWS.
  11. Putting It All Together: CloudList.

After an introduction to the concept of cloud computing and a review of each of the Amazon Web Services in the first two chapters, you will set up your development environment in chapter three. Each of the next six chapters focuses on a single service. In addition to a more detailed look at each service, each of these chapters includes lots of fully functional code. The final chapter shows you how to use AWS to implement a simple classified advertising system.

Although I am really happy with all of the chapters, I have to say that Chapter 6 is my favorite. In that chapter I show you how to use the Amazon Simple Queue Service to build a scalable, multistage image crawling, processing, and rendering pipeline. I build the code step by step, creating a queue, writing the code for a single step, running it, and then turning my attention to the next step. Once I had it all up and running, I opened up five PuTTY windows, ran a stage in each, and watched the work flow through the pipeline with great rapidity.
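
The book’s pipeline code is written in PHP with CloudFusion, but the general shape of a pipeline stage is easy to sketch with the AWS SDK for Java; the queue names and the work step here are hypothetical:

    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.services.sqs.AmazonSQSClient;
    import com.amazonaws.services.sqs.model.*;

    public class PipelineStage {
        public static void main(String[] args) {
            AmazonSQSClient sqs = new AmazonSQSClient(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

            // Each stage reads from one queue and writes to the next one.
            String inUrl  = sqs.createQueue(new CreateQueueRequest("crawl-queue")).getQueueUrl();
            String outUrl = sqs.createQueue(new CreateQueueRequest("render-queue")).getQueueUrl();

            while (true) {
                ReceiveMessageResult result = sqs.receiveMessage(
                    new ReceiveMessageRequest(inUrl).withMaxNumberOfMessages(1));
                for (Message message : result.getMessages()) {
                    String output = doWork(message.getBody());  // hypothetical work step
                    sqs.sendMessage(new SendMessageRequest(outUrl, output));
                    // Delete only after the result is safely queued downstream;
                    // if this stage crashes first, the message simply reappears.
                    sqs.deleteMessage(new DeleteMessageRequest(inUrl, message.getReceiptHandle()));
                }
            }
        }

        private static String doWork(String body) {
            return body;  // placeholder for crawling, processing, or rendering
        }
    }

Because a message is deleted from its input queue only after the result has been sent downstream, a crashed stage simply leaves the message to become visible again and be processed by another worker.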

I had a really good time writing this book and I hope that you will have an equally good time as you read it and put what you learn to good use in your own AWS applications.

Today (September 21) at 4 PM PT I will be participating in a webinar with the good folks from SitePoint. Sign up now if you would like to attend.

— Jeff;

PS – If you are interested in the writing process and how I stayed focused, disciplined, and organized while I wrote the book, check out this post on my personal blog.

 

Run Oracle Applications on Amazon EC2

A wide variety of Oracle applications have been certified for use on Amazon EC2 with virtualization provided by the Oracle VM (OVM). A number of products are now fully certified and supported, and you’ll be able to run them in the cloud on production workloads before too long.

These applications will be available in the form of Amazon Machine Images (AMIs) that you can launch from the AWS Management Console and from other EC2 management tools.

You can use your existing Oracle licenses at no additional license cost or you can acquire new licenses from Oracle. We implemented OVM support on Amazon EC2 with hard partitioning so Oracle’s standard partitioned processor licensing models apply.

Working together with Oracle, we will publish a set of pre-configured AMIs based on the Oracle VM Templates so that you can be up and running in a matter of minutes instead of weeks or even months.

We’ll start with Oracle Linux, Oracle Database 11gR2, Oracle E-Business Suite, and a number of Oracle Fusion Middleware technologies including Oracle Weblogic Server and Oracle Business Process Management. After that, we’ll add AMIs for PeopleSoft, Siebel, and JD Edwards.

You’ll be able to take advantage of notable EC2 features such as Elastic Load Balancing, Auto Scaling, Security Groups, Amazon CloudWatch, and Reserved Instance pricing.

To learn more about running Oracle applications on EC2 and to register to be notified when application templates become available, visit the Oracle and Amazon Web Services page.

If you are at Oracle OpenWorld this week (September 19-23, 2010), please stop by the AWS booth and say hello to our team. 

— Jeff;

 

New Amazon EC2 Features: Resource Tagging, Idempotency, Filtering, Bring Your Own Keys

We’ve just introduced four cool new features for Amazon EC2. Instead of trying to squeeze all of the information into one ridiculously long post, I’ve written four separate posts. Here’s what we introduced:

  • Resource Tagging – Tag the following types of resources: EC2 instances, Amazon Machine Images (AMIs), EBS volumes, EBS snapshots, and Amazon VPC resources such as VPCs, subnets, connections, and gateways.
  • Idempotent Instance Creation – Ensure that multiple EC2 instances are not accidentally created when you needed just one (see the sketch after this list).
  • Filtering – Filter the information returned by an EC2 Describe call using one or more key/value pairs as filters.
  • Bring Your Own Keypair – Import your own RSA keypair for use with EC2.
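
As a quick taste of the idempotency feature, here’s a minimal sketch using the AWS SDK for Java; the AMI ID and client token below are hypothetical, and the token just needs to be unique per logical launch:

    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.services.ec2.AmazonEC2Client;
    import com.amazonaws.services.ec2.model.RunInstancesRequest;

    public class IdempotentLaunch {
        public static void main(String[] args) {
            AmazonEC2Client ec2 = new AmazonEC2Client(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

            // Retrying with the same client token will not launch a second
            // instance if the first request actually succeeded.
            RunInstancesRequest request = new RunInstancesRequest("ami-12345678", 1, 1)
                .withInstanceType("m1.small")
                .withClientToken("billing-launch-001");  // caller-chosen, unique per logical launch

            ec2.runInstances(request);  // safe to retry on a timeout
        }
    }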

The posts are linked to each other, so you can start at Resource Tagging and read each of them in turn.

— Jeff;

New Amazon EC2 Feature: Filtering

Many of our customers create large numbers of EC2 resources. Some of them run hundreds or thousands of EC2 instances, create thousands of EBS volumes, and retain tens of thousands of EBS volume snapshots.

This growth has meant that the corresponding Describe APIs (DescribeInstances, DescribeVolumes, and DescribeSnapshots, to name a few) can return results that are very long and somewhat tedious to process.

In order to make client applications simpler and more efficient, you can now specify filters when you call the EC2 “Describe” functions (except those having to do with attributes or datafeed subscriptions for Spot instances).

You can provide one or more filters as part of your call to a Describe function. Each filter consists of a case-sensitive name and a value. Values are text strings or XML text string representations of non-textual (e.g. Boolean) values. Filter values can use the “*” to match zero or more characters, or the “?” to match a single character.

You can also combine multiple filters. Multiple filters with the same name are OR-ed together and then AND-ed with the other filters. You could, for example, call DescribeInstances and ask for all of your m1.large instances that are running an Ubuntu AMI in the us-east-1a or us-east-1b Availability Zones.
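
Here’s a minimal sketch of the instance-type and Availability Zone parts of that query using the AWS SDK for Java (the credentials are placeholders); note how the values within a single filter are OR-ed together:

    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.services.ec2.AmazonEC2Client;
    import com.amazonaws.services.ec2.model.DescribeInstancesRequest;
    import com.amazonaws.services.ec2.model.DescribeInstancesResult;
    import com.amazonaws.services.ec2.model.Filter;
    import com.amazonaws.services.ec2.model.Reservation;

    public class FilterSketch {
        public static void main(String[] args) {
            AmazonEC2Client ec2 = new AmazonEC2Client(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

            // Values within one filter are OR-ed; separate filters are AND-ed.
            DescribeInstancesRequest request = new DescribeInstancesRequest().withFilters(
                new Filter("instance-type").withValues("m1.large"),
                new Filter("availability-zone").withValues("us-east-1a", "us-east-1b"));

            DescribeInstancesResult result = ec2.describeInstances(request);
            for (Reservation reservation : result.getReservations()) {
                System.out.println(reservation.getInstances());
            }
        }
    }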

The filters are also supported by the EC2 command-line tools via the “--filter name=value” option.

Tool vendors will be able to make use of this new flexibility to create faster and more powerful EC2 management tools.

Read more about filtering in the newest version of the EC2 User Guide.

Next feature: Bring Your Own Keypair.

— Jeff;