Category: Amazon EC2

Additional VM Import Functionality – Windows 2003, XenServer, Hyper-V

We’ve extended EC2’s VM Import feature to handle additional image formats and operating systems.

The first release of VM Import handled Windows 2008 images in the VMware ESX VMDK format. You can now import Windows 2003 and Windows 2008 images in any of the following formats:

  • VMware ESX VMDK
  • Citrix XenServer VHD
  • Microsoft Hyper-V VHD

I see VM Import as a key tool that will help our enterprise customers move into the cloud. There are many ways to use this service; two popular ones are extending a data center into the cloud and using AWS as a disaster recovery repository for the enterprise.

You can use the EC2 API tools, or, if you use VMware vSphere, the EC2 VM Import Connector to import your VM into EC2. Once the import process is done, you will be given an instance ID that you can use to boot your new EC2 instance. You have complete control of the instance size, security group, and (optionally) the VPC destination. Here’s a flowchart of the import process:

You can also import data volumes, turning them into Elastic Block Store (EBS) volumes that can be attached to any instance in the target Availability Zone.

As I’ve said in the past, we plan to support additional operating systems, versions, and virtualization platforms in the future. We are also planning to enable the export of EC2 instances to common image formats.

Read more:

— Jeff;

Four New Amazon EC2 Spot Instance Videos

If you have read about the Amazon EC2 Spot Instances but are not quite sure how to get started, how to make cost-effective bids, or how to build applications that can manage interruptions, I’ve got a treat for you. We’ve put together a set of four short videos that should give you all of the information you’ll need to get started. Here is what we have:

  1. Getting Started with EC2 Spot Instances – This video shows you how to launch, view, and cancel spot instance requests using the AWS Management Console. You will see how to log in to the console, select an AMI, and create a bid.
  2. Spot Instance Bidding Strategies – This video shows how customers choose bid strategies that have resulted in savings of 50-66% from the On-Demand pricing. Four strategies are covered: bidding near the hourly rate for Reserved Instances, bidding based on previous trends, bidding at the On-Demand price, and bidding higher than the On-Demand price. Watch this video to learn when it is appropriate to use each of these strategies. The video also references a particle physics project run by the University of Melbourne and the University of Barcelona on Spot Instances. More information about this project is available in a recent case study.
  3. Spot Instance Interruption Management – This video outlines multiple ways to create applications that can run in an environment where they may be interrupted and subsequently restarted. Strategies include using Elastic MapReduce or Hadoop, checkpointing, using a grid architecture, or driving all processing from a message queue.
  4. Using EC2 Spot Instances With EMR – This video shows how to run Elastic MapReduce jobs on a combination of On-Demand and Spot Instances to get work done more quickly and at a lower cost. The video includes guidelines for choosing the right type of instance for the Master, Core, and Task instance groups.

I hope that you find these videos to be interesting and informative, and that you are now ready to make use of EC2 Spot Instances in your own application.

— Jeff;


Run Amazon Elastic MapReduce on EC2 Spot Instances

We’ve combined two popular Amazon EC2 features, Spot Instances and Elastic MapReduce, to allow you to launch managed Hadoop clusters using unused EC2 capacity. You will be able to run long-running jobs, cost-driven workloads, data-critical workloads, and application testing at a discount that has historically ranged between 50% and 66%.

The EC2 instances used to run an Elastic MapReduce job flow fall into one of three categories, or instance groups:

Master – The Master instance group contains a single EC2 instance. This instance schedules Hadoop tasks on the Core and Task nodes.

Core – The Core instance group contains one or more EC2 instances. These instances use HDFS to store the data for the job flow. They also run mapper and reducer tasks as specified in the job flow. This group can be expanded in order to accelerate a running job flow.

Task – The Task instance group contains zero or more EC2 instances and runs mapper and reducer tasks. Since they don’t store any data, this group can expand or contract during the course of a job flow.

You can choose to use either On-Demand or Spot Instances for each of your job flows. If you run your Master or Core groups on Spot Instances, these instances will be terminated if the market price rises above your bid price, and the entire job flow will fail. If you run your Task group on Spot Instances, the unfinished work running on those instances will be returned to the processing queue.
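The interruption rules above boil down to a simple decision table. Here is a minimal sketch in Python; it is an illustration of the rules as described in this post, not an actual Elastic MapReduce API:

```python
def outbid_outcome(instance_group, on_spot):
    """What happens to an Elastic MapReduce job flow when the Spot
    market price rises above your bid, per the rules described above."""
    if not on_spot:
        return "unaffected"        # On-Demand instances keep running
    if instance_group in ("master", "core"):
        return "job flow fails"    # these nodes hold HDFS data and scheduling state
    if instance_group == "task":
        return "work requeued"     # Task nodes store no data; unfinished work is re-run
    raise ValueError("unknown instance group: %r" % instance_group)

print(outbid_outcome("task", on_spot=True))     # work requeued
print(outbid_outcome("core", on_spot=True))     # job flow fails
print(outbid_outcome("master", on_spot=False))  # unaffected
```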

If you have purchased one or more EC2 Reserved Instances, Elastic MapReduce will also take advantage of them (this is not new but I wanted to make sure that you knew about it).

Here are some guidelines to get you started with Elastic MapReduce on Spot Instances:

Long-running Job Flows and Data Warehouses – If you maintain a long-running Elastic MapReduce cluster with some predictable variations in load, you can handle peak demand at lower cost using Spot Instances. Run the Master and Core instance groups on On-Demand instances and supplement the cluster with Spot Instances in a Task instance group at peak times.

Cost-Driven Workloads – If your jobs are relatively short-lived (generally several hours or less), the time to completion is less important than the cost, and losing partial work is acceptable, run the entire job flow on Spot Instances for the largest potential cost savings.

Data-Critical Workloads – If the overall cost is more important than the time to completion but you don’t want to lose any partial work, run the Master and Core instance groups on On-Demand instances, making sure that the Core group contains enough instances to hold all of your data in HDFS. Add Spot Instances to a Task instance group as needed to reduce the overall processing time and the total cost.

Application Testing – If you want to test an entire application before moving it to production, run the entire job (Master and Core instance groups) on Spot Instances.
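To make the cost tradeoff in these guidelines concrete, here is some back-of-the-envelope arithmetic for a hypothetical ten-node job flow. The hourly rate below is a made-up placeholder rather than an actual EC2 price; the 60% discount is simply the midpoint of the 50% to 66% historical range mentioned above:

```python
ON_DEMAND_RATE = 0.34  # hypothetical hourly rate per instance (USD), not a real EC2 price
SPOT_DISCOUNT = 0.60   # midpoint of the 50-66% historical Spot discount range

def hourly_cost(on_demand_nodes, spot_nodes):
    """Hourly cost of a cluster mixing On-Demand and Spot instances."""
    spot_rate = ON_DEMAND_RATE * (1 - SPOT_DISCOUNT)
    return on_demand_nodes * ON_DEMAND_RATE + spot_nodes * spot_rate

# All ten nodes On-Demand vs. Master + Core On-Demand with Task on Spot:
print("all On-Demand: $%.2f/hour" % hourly_cost(10, 0))  # $3.40/hour
print("4 + 6 mix:     $%.2f/hour" % hourly_cost(4, 6))   # $2.18/hour, roughly 36% less
```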

You can start to use Spot Instances for all or part of a job flow by specifying a bid price for one or more of the flow’s instance groups. You can do this from the AWS Management Console, the command line, or the Elastic MapReduce APIs. To determine how that maximum price compares to past Spot Prices, the Spot Price history for the past 90 days is available via the EC2 API and the AWS Management Console. Here’s a screen shot of the AWS Management Console. As you can see, all you need to do is to check “Request Spot Instances” and enter a Spot Bid Price to benefit from Spot Instances:

You can also add additional Task instance groups to a running job flow, and you can specify a bid price for the instances as you add each group. You could use this feature to create a layered set of bids if you’d like. As you probably know, each job flow is limited to 20 EC2 instances by default. If you would like to run larger job flows, you need to fill out the instance request form.
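A layered set of bids might look like the sketch below: split the extra Task capacity across several groups, each with a successively lower bid, so the cheapest layers fall away first if the Spot price climbs. The group sizes and bid prices here are hypothetical:

```python
def layered_bids(total_instances, bid_prices):
    """Split total_instances across one Task instance group per bid price,
    highest bid first, distributing any remainder to the higher bids."""
    per_group, extra = divmod(total_instances, len(bid_prices))
    groups = []
    for i, bid in enumerate(sorted(bid_prices, reverse=True)):
        size = per_group + (1 if i < extra else 0)
        groups.append({"instances": size, "bid": bid})
    return groups

for group in layered_bids(10, [0.08, 0.10, 0.12]):
    print("%d instances at $%.2f" % (group["instances"], group["bid"]))
# 4 instances at $0.12, then 3 at $0.10, then 3 at $0.08
```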

We expect that Elastic MapReduce users with several types of job flows will really enjoy and make good use of Spot Instances. Two areas that come to mind are:

  1. Batch-processing workloads that are not particularly time-sensitive such as image and video processing, data processing for scientific research, financial modeling, and financial analysis.
  2. Data warehouses that have a recurring workload variance at peak times.

Our customers have been using Elastic MapReduce to process large volumes of data quickly and economically. For example:

Fliptop (full case study) helps brands convert email lists into social media profiles. They are able to do this using Spot Instances and have realized a cost savings of over 50%.

Foursquare (full case study) performs analytics across more than 3 million daily check-ins using Elastic MapReduce, Spot Instances, Amazon S3, MongoDB, and Apache Flume. This is what Matthew Rathbone of Foursquare told us:

Elastic MapReduce had already significantly reduced the time, effort, and cost of using Hadoop to generate customer insights. Now, by expanding our clusters with Spot Instances, we have reduced our analytics costs by over 50% while decreasing processing time for urgent data-analysis, all without requiring additional application development or adding risk to our analytics.

We have put together a new video, using EC2 Spot Instances with EMR, to show you how to run an Elastic MapReduce job using a combination of On-Demand and Spot Instances.

Read More

I am a big fan of our Spot Instances and I am really looking forward to hearing about new and interesting ways that our customers put them to use. You now have the opportunity to fine-tune your business processes to reduce your costs, and you can now make some very explicit tradeoffs between cost, time to completion, and what happens if the market price rises above your bid. If you are an IT professional, you have some shiny new tools that will allow you to reduce costs while getting work done more quickly.

And what do you think?

 — Jeff;


Run Oracle Applications on Amazon EC2 Now

Ever since we announced the availability of the first set of Oracle applications on Amazon EC2, we’ve been working to add additional instance types, locations, and applications.

We’ve made some really good progress; you can now run a wide variety of applications on all 32- and 64-bit instance types in all five AWS Regions, all running on the Oracle VM hypervisor (OVM).

The following applications are now available:

For more information, check out the Oracle and AWS page.

If you are ready to start running any of these applications on EC2 or if you have any business or technical questions, we’re here to help. Fill in and submit our contact-us page and we’ll get back to you right away.

Finally, we continue to grow our database team. Here are some of our openings:

 — Jeff;


New – AWS GovCloud (US) Region – ITAR Compliant

A New Region
Our new AWS GovCloud (US) Region was designed to meet the unique regulatory requirements of the United States Government. The US federal government, state and local governments, and the contractors who support their mission now have access to secure, flexible, and cost-effective AWS services running in an environment that complies with US Government regulations for processing of sensitive workloads and storing sensitive data as described below.

The AWS GovCloud (US) Region supports the processing and storage of International Traffic in Arms Regulations (ITAR) controlled data and the hosting of ITAR controlled applications. As you may know, ITAR stipulates that all controlled data must be stored in an environment where logical and physical access is limited to US Persons (US citizens and permanent residents). This Region (like all of the AWS Regions) also provides FISMA Moderate controls. This means that we have completed the implementation of a series of controls and have also passed an independent security test and evaluation. Needless to say, it also supports existing security controls and certifications such as PCI DSS Level 1, ISO 27001, and SAS 70.

To demonstrate that GovCloud complies with ITAR, we have commissioned a third-party review of the ITAR compliance program for AWS GovCloud (US) and have received a favorable letter of attestation with respect to the stated ITAR objectives.

The Details
The new Region is located on the west coast of the US.

All EC2 instances launched within this Region must reside within a Virtual Private Cloud (VPC). In addition to Amazon EC2, the following services are now available:

If you are currently using one of the other AWS Regions, I’d like you to take note of one really important aspect of this release:

Other than the restriction to US persons and the requirement that EC2 instances are launched within a VPC, we didn’t make any other changes to our usual operational systems or practices. In other words, the security profile of the existing Regions was already up to the task of protecting important processing and data. In effect, we simply put a gateway at the door — “Please show your passport or green card before entering.”

You can read more about our security processes, certifications, and accreditations in the AWS Security Center.

Full pricing information is available on the new GovCloud (US) page.

AWS in Action
I recently learned that more than 100 federal, state, and local government agencies are already using AWS in various ways. Here are some examples:

The AWS Federal Government page contains a number of additional case studies and use cases.

Getting Access
Agencies with a need to access the AWS GovCloud must sign an AWS GovCloud (US) Enterprise Agreement. We will also make this Region accessible to government contractors, software integrators, and service providers with a demonstrated need for access. Those of you in this category will need to meet the requirements set out in ITAR Regulation 120.15.

Help Wanted
The AWS team enjoys taking on large, complex challenges to deliver new services, features, and regions to our customers. A typical release represents the combined efforts of a multitude of developers, testers, writers, program managers, and business leaders.

If you would like to work on large, complicated offerings such as AWS GovCloud, we’d love to talk to you. Here’s a small sampling of our current job postings (there’s a full list on the AWS careers page):

— Jeff;

PS – As you might be able to guess from the name of this Region, we would be interested in talking to other sovereign nations about their cloud computing needs.

AWS Direct Connect

The new AWS Direct Connect service allows enterprises to create a connection to an AWS Region via a dedicated network circuit. In addition to enhancing privacy, dedicated circuits will generally result in more predictable data transfer performance and will also increase bandwidth between your data center and AWS. Additionally, users of dedicated circuits will frequently see a net reduction in bandwidth costs.

AWS Direct Connect has one location available today, located at Equinix’s Ashburn, Virginia colocation facility. From this location, you can connect to services in the AWS US-East (Virginia) region. Additional AWS Direct Connect locations are planned for San Jose, Los Angeles, London, Tokyo, and Singapore in the next several months.

There are two ways to get started:

  • If you already have your own hardware in an Equinix data center in Ashburn, Virginia, you can simply ask them to create a cross-connect from your network to ours. They can generally get this set up in 72 hours or less.
  • If you don’t have hardware in this data center, you can work with one of the AWS Direct Connect solution providers (our initial list includes AboveNet, Equinix, and Level 3) to procure a circuit to the same datacenter or obtain colocation space. If you procure a circuit, the AWS Direct Connect solution provider will take care of the cross-connect for you.

You can select 1 Gbps or 10 Gbps networking for each connection, and you can create multiple connections for redundancy if you’d like. Each connection can be used to access all AWS services. It can also be used to connect to one or more Virtual Private Clouds.

Billing will be based on the number of ports and the speed of each one. Data transfer out of AWS across the circuit will be billed at $0.02 / GB (2 cents per GB). There is no charge for data transfer in to AWS.

I expect to see AWS Direct Connect used in a number of different scenarios. Here are a few of them:

  • Data Center Replacement – Migrate an existing data center to AWS and then use Direct Connect to link AWS to the corporate headquarters using a known private connection.
  • Custom Hosting – Place some custom network or storage devices in a facility adjacent to an AWS Region, and enjoy high-bandwidth, low-latency access to the devices from AWS.
  • High Volume Data Transfer – Move extremely large amounts of data in and out of HPC-style applications.

In order to make the most of a dedicated high speed connection, you will want to look at a category of software often known as WAN optimization (e.g. Riverbed’s Cloud Steelhead) or high speed file transfer (e.g. Aspera’s On-Demand Direct for AWS). Late last month I saw a demonstration from Aspera. They showed me that they were able to achieve 700 Mbps of data transfer across a 1 Gbps line. At this rate they are able to transfer 5 Terabytes of data to AWS in 17 hours.
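Aspera’s figures are easy to sanity-check with a little arithmetic, along with the $0.02/GB outbound transfer rate (decimal units assumed; the extra hour or so in the reported figure presumably covers protocol and storage overhead):

```python
def transfer_hours(terabytes, line_mbps):
    """Line-rate transfer time: decimal terabytes over a link in megabits/sec."""
    bits = terabytes * 1e12 * 8
    return bits / (line_mbps * 1e6) / 3600.0

def egress_cost(terabytes, rate_per_gb=0.02):
    """Data transfer out of AWS at the Direct Connect rate of $0.02/GB."""
    return terabytes * 1000 * rate_per_gb

print("5 TB at 700 Mbps: %.1f hours" % transfer_hours(5, 700))  # ~15.9 hours at line rate
print("5 TB out of AWS:  $%.2f" % egress_cost(5))               # $100.00
```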

— Jeff;


AWS Summer Startups: Mediology

Over the summer months, we’d like to share a few stories from startups around the world: what are they working on and how they are using the cloud to get things done. Today, we’re profiling Mediology Software from India!

The Story
2010 was the first year in which we opened the start-up challenge to entries from the Asia Pacific region. We were very impressed with the quality of the entries and, in particular, one of them, Mediology Software, caught our eye and made it to the final round in Palo Alto.

Mediology Software is a one-year-old start-up based in India, employing 35 people. Mediology DigitalEdition, their main product, is a SaaS platform that enables print publishers to digitize their content, add interactivity, create workflows, and then distribute the content via web, mobile, and e-reading platforms. The system achieves its massive scale for content digitization and delivery using event-centric cloud computing services from AWS.

As an example of the type of work Mediology does, I encourage you to take a look at the case study we recently published, describing how AWS and Mediology teamed up to help CozyCot, a website geared to East Asian and South Asian women on a wide range of topics including family, health, and beauty. In addition to offering CozyCot a better website hosting and scaling solution through the AWS infrastructure, Mediology has helped them distribute and promote their content through a wide variety of platforms, increasing CozyCot’s bottom line.

From the Founders
I caught up with Manish Dhingra and Gaurav Bhatnagar, Co-Founders at Mediology Software, a few days ago, as I was checking on how they’re doing almost a year after being named finalists in the AWS Start-up Challenge.

Since January 2011, we have had some high-profile launches on our DigitalEdition platform. Naturally, the usage of AWS, not just in terms of the instance volume, but across the set of AWS services, has enabled us to create a very scalable, yet cost-effective architecture. We’re 100% built on and reliant on AWS. For instance, we use EC2, CloudFront, S3, SES, SimpleDB, RDS, SNS, CloudWatch, and IAM, all orchestrated together to enable our SaaS platform, Mediology DigitalEdition.

How Has the AWS Start-up Challenge Helped Mediology?
I asked Manish to tell me how the AWS Start-up Challenge has helped their business. Here’s what he told me:

Consumer and Customer confidence in our solution has definitely taken a giant leap, since we returned from Palo Alto in December 2010. Although the same has also led to higher expectations, our grasp of AWS has enabled us to meet the customer expectations quite easily.

Sharing the Wisdom with other Asia-Pacific Start-ups

AWS gives you the ability to enable application or solution heavy-lifting. We believe Asia is a growth market and many new age concepts around value-based computing, value-added services (specifically around mobile, which works on the core tenets of SaaS and SOA) will find great traction here. 

The key is to not get fazed during the stealth and growth stages of your start-up. Think of AWS as something that gives the wings to your creativity and enables very effective working-capital utilization. In fact, if the pricing benefits are passed on to the consumers, then there is a great chance of leveling the playing field and being the best at what you do, without compromising on the bottom line.

The AWS Startup Challenge
We’re getting ready to launch this year’s edition of our own annual contest, the AWS Startup Challenge. You can sign up to get notified when we launch it, or you can follow @AWSStartups on Twitter.

— Simone;

Summer Startups: Sportaneous

Over the summer months, we’d like to share a few stories from startups around the world: what are they working on and how they are using the cloud to get things done. Today, we’re profiling Sportaneous from New York City!

The Story
I first learned about Sportaneous after reading about NYC BigApps, an application contest launched by the City of New York and organized under Mayor Bloomberg. Its goal is to reward apps that improve NYC by using public data sets released by the local government.

Sportaneous jumped at the opportunity to enter the contest because their applications already offered users a database of public sports facilities to choose from, many of which are obtained from Parks & Recreation data. Sportaneous makes it easy for busy people to play sports or engage in group fitness activities. Through the Sportaneous mobile app and website, a person can quickly view all sports games and fitness activities that have been proposed in her surrounding neighborhoods. The user can choose to join whichever game best fits her schedule, location, and skill level. Alternatively, a Sportaneous user may spontaneously propose a game herself (for example, a beginner soccer game in Central Park three hours from now), which allows all Sportaneous users in the Manhattan area to join the game until the maximum number of players has been reached.

Of the more than 50 applications that entered the NYC BigApps competition, Sportaneous won two of the main awards: the Popular Vote Grand Prize, based on over 9,500 people voting for their favorite app in the contest, and the Second Overall Grand Prize, voted on by a panel of very distinguished judges including Jack Dorsey (Co-founder, Twitter), Naveen Selvadurai (Co-founder, Foursquare), and prominent tech investors in NYC.

Here is a video of Sportaneous in action:

From the CEO
I spoke to Omar Haroun, CEO and Co-Founder at Sportaneous about how they got started and ended up using AWS. He shared a bit about their humble beginnings and how their growth plans continued to include AWS:

We initially bootstrapped the service using a single EC2 instance. We used an off-the-shelf AMI with a backing EBS volume, so we could fine-tune the machine’s configuration as we started seeing higher traffic numbers. We wanted a low cost, reliable hosting option which we knew had the ability to scale gracefully (and very quickly) when needed. EC2 allowed us to get up and running in a matter of hours, without forcing any design compromises which we’d later regret.

As traffic has grown and we’ve begun preparing for a public launch, we’re planning to move our MySQL databases to RDS and to take advantage of some additional ELB features (including SSL termination). RDS was also a no-brainer. We realize that any data-loss event would be devastating to our momentum, but we don’t have the resources for a full time DBA (or even a database expert). RDS and its cross-AZ replication take a huge amount of pressure off of our shoulders.

Behind the Scenes
I asked Omar to tell me a bit about the technology behind Sportaneous. Here’s what he told me:

Our web app is written in Scala, using the very awesome Lift Framework. Our iPhone app is written in Objective-C. Both the web app and the iPhone app are thin clients on top of a backend implemented in Java, using the Hibernate persistence framework. Our EC2 boxes (which serve both our web app and our backend) run Jetty behind nginx.

He wrapped up on a very positive note:

Using EC2 with off the shelf AMIs, we went from zero to scalable, performant web app in under two hours.

The AWS Startup Challenge
We’re getting ready to launch this year’s edition of our own annual contest, the AWS Startup Challenge. You can sign up to get notified when we launch it, or you can follow @AWSStartups on Twitter.

— Jeff;

EC2 Spot Pricing – Now Specific to Each Availability Zone

We have made an important change to the way pricing works for EC2 Spot Instances. We are replacing the original Region-based pricing information with more detailed information that is specific to each Availability Zone. This change ensures that both the supply (the amount of unused EC2 capacity) and the demand (the amount and number of bids for the capacity) reflect the status of a particular Availability Zone, enabling you to submit bids that are more likely to be fulfilled quickly and to use Spot Instances in more types of applications.

As you may know, Spot Instances allow you to bid for unused EC2 capacity, often allowing you to significantly reduce your Amazon EC2 bill. After you place a bid, your instances continue to run as long as the bid exceeds the current Spot Market price. You can also create persistent requests that will automatically be considered again for fulfillment if you are ever outbid.
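A detail worth emphasizing: while your instance runs, you are charged the current Spot Market price for each hour, not your bid. The simplified simulation below captures the mechanics; the prices are hypothetical, and among other simplifications it ignores the real-world rule about partial hours when AWS terminates an instance:

```python
def simulate_spot(bid, hourly_market_prices):
    """Run a Spot instance hour by hour: charge the market price (not the
    bid) each hour, and stop at the first hour whose price exceeds the bid.
    Returns (hours_run, total_cost)."""
    hours, cost = 0, 0.0
    for price in hourly_market_prices:
        if price > bid:
            break          # outbid: the instance is terminated
        hours += 1
        cost += price      # you pay the market price, never your bid
    return hours, cost

hours, cost = simulate_spot(0.10, [0.06, 0.07, 0.09, 0.12, 0.08])
print("ran %d hours for $%.2f" % (hours, cost))  # ran 3 hours for $0.22
```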

Over the last year and a half, our customers have successfully leveraged Spot Instances to obtain compute cycles at substantial discounts for use cases like batch processing, scientific research, image processing and encoding, data and web crawling, testing, and financial analysis. Here are some examples:

Social analytics platform BackType (full case study) uses Spot Instances to handle large-scale Hadoop-based batch data processing (tens of terabytes of data representing over 100 billion records). They have been able to reduce their costs by up to 66% when compared to On-Demand instances. Their Spot strategy includes placing high bids to reduce the chance of being interrupted (they pay the current price regardless of their bid, so this does not increase their operating costs).

Monitoring and load testing company BrowserMob (full case study) uses a combination of Spot and On-Demand instances to meet their capacity needs. Their provisioning system forecasts capacity needs 5 minutes ahead of time and submits suitably priced bids for Spot Instances, resorting to On-Demand instances as needed based on pricing and availability.

Biotechnology drug design platform Numerate (full case study) has incorporated Amazon EC2 as a production computational cluster and Amazon S3 for cache storage. Numerate enjoys around 50% cost savings by using Amazon EC2 Spot Instances after spending just 5 days of engineering effort.

Image rendering tool Litmus (full case study) takes snapshots of an email in various email clients and consolidates the images for their customers. Litmus enjoys a 57% cost savings by using Amazon EC2 Spot Instances for their compute needs.

When Spot Instances were first launched, there was a Spot Price for each EC2 instance type and platform (Linux/Unix or Windows) in each Region:

This model worked well but we think we can do even better based on your feedback. We have made some improvements to the Spot Instance model to make it easier for you to implement a cost-effective bidding strategy.

  1. The market price for each type of Spot Instance is now specific to an Availability Zone, not a Region.
  2. We will now publish historical pricing for each Availability Zone. This change will provide fine-grained data that you can use to determine a suitable bid for capacity in a particular Availability Zone. Because this will make a great deal of additional data available to you, we have made an API change to allow you to paginate the results of the DescribeSpotPriceHistory function.
  3. Spot requests that target a particular Availability Zone now have a greater chance of being fulfilled.
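With zone-level history in hand, one simple heuristic is to bid a small margin above the recent peak price in your target Availability Zone. The sketch below uses made-up sample prices; it is one possible strategy, not a recommendation:

```python
def suggest_bid(zone_price_history, headroom=0.10):
    """Bid `headroom` (10% by default) above the highest recent Spot
    price observed in a single Availability Zone."""
    return round(max(zone_price_history) * (1 + headroom), 4)

# Hypothetical recent prices for one instance type in us-east-1a:
history_us_east_1a = [0.031, 0.029, 0.035, 0.033, 0.030]
print("suggested bid: $%.4f" % suggest_bid(history_us_east_1a))  # $0.0385
```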

With these changes you now have the information needed to do additional fine tuning of your Spot requests. In particular, we believe that these changes will allow you to more easily use Spot Instances for applications that are sensitive to latency or that need to run in a particular Availability Zone in order to be co-located with other services or data. For example, it will now be easier to run a set of Hadoop-based processes in the same Availability Zone, paving the way for you to use Spot Instances with Elastic MapReduce.

It’s easy to get started. Simply go to the AWS Management Console and launch an instance as you normally would. In the Request Instance Wizard, click the Request Spot Instances radio button and set your bid to the maximum that you are willing to pay for an instance-hour of the desired instance type. Here’s a screenshot of a persistent request for four Micro instances at a maximum price of $0.01 (one penny) per hour per instance:

I will look forward to hearing about the ways that you find to put Spot Instances to use in your application.

Read More:

— Jeff;

Summer Startups: GoSquared

AWS is a pay-as-you-use mix of tools and services that help businesses of all sizes build innovative products. Over the summer months, we’d like to share a few stories from startups around the world: what are they working on and how they are using the cloud to get things done. Today, we’re profiling GoSquared.



GoSquared is a real-time web analytics platform, built entirely on AWS, that enables businesses to improve and adapt their online presence quickly. The real-time metrics allow rapid website optimisation based on buyer conversion, signups, engagement, or other measurements important to a site.

The company was founded in 2006 by three 15-year-old school friends, James Gill, Geoff Wagstaff, and James Taylor, and has been rolling on AWS since 2009. It is now funded and run out of the legendary White Bear Yard offices in Clerkenwell, London.

I talked to Geoff, GoSquared CTO, about their use of AWS. In his own words:

“Initially running on a low budget with experimental technology, we needed flexibility not only for our compute resources but for billing, so that we could develop our system without worrying about over or under provisioning resources and expenses. It was clear we needed the cloud.”

The analytics platform runs on a wide range of AWS services. EC2 is the workhorse for compute resources, including web, processing, development, application, database, and cache servers. The GoSquared architecture is configured for high availability with fault tolerance and cost-effective vertical and horizontal scaling. The site uses Elastic Load Balancing and the AWS Auto Scaling service with CloudWatch to distribute workloads and drive down costs. The team also use CloudFront to deliver low-latency assets, including tracking code for customer websites. A few other details that make GoSquared interesting:

Price-Aware Architecture

A really nice architectural feature of the GoSquared platform is the integration of Spot Instances. For their data analysis and tracking platform, GoSquared balance incoming data across a collection of EC2 instances. Some of those instances are under Auto Scaling control, which means they automatically scale up and down based on demand, but the remainder are provisioned as Spot Instances. A low bid price ensures that costs stay down, and a collection of CloudWatch metrics and alarms gracefully replace terminated Spot Instances to ensure availability should the EC2 spot price exceed the bid price.
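GoSquared hasn’t published the exact alarm logic, but the pattern is straightforward: when Spot capacity is terminated, a CloudWatch alarm triggers an action that tops the fleet back up. A minimal sketch of that top-up calculation, with all names and numbers hypothetical:

```python
def on_demand_top_up(desired_capacity, healthy_spot, on_demand_running):
    """How many replacement instances an alarm action should launch after
    Spot instances have been terminated out from under the fleet."""
    shortfall = desired_capacity - healthy_spot - on_demand_running
    return max(0, shortfall)

# The Spot price spiked above the bid and 4 of 6 Spot workers were lost:
print(on_demand_top_up(10, healthy_spot=2, on_demand_running=4))  # 4
```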

Looking Forward

The small team (which has just hired employee #4!) have done a great job in taking advantage of the AWS services to build a robust, available, scalable, joyful product and we couldn’t be happier to help GoSquared as it continues to grow by leaps and bounds.

“The overall flexibility and diversity of the AWS platform has been an intrinsic ingredient to the agility of our technology and business, and has lowered barriers to entry in our market. Before AWS, the kind of infrastructure required to run a real-time web analytics operation was largely only available to highly skilled datacenter technicians managing their own physical hardware, accounting for all the overheads associated with that. By attacking this problem, AWS has brought infrastructure right to the fingertips of everyone.”, says Geoff, GoSquared CTO.

If you’re interested in learning more, visit GoSquared’s site, or read our case study.

~ Matt


Related topics:

Join us in London!

We’re hosting an evening meetup in Shoreditch on 12th July for startups and entrepreneurs. The AWS team will be joined by GoSquared and the fine folks of Mendeley to discuss how they’re using the cloud to build their businesses. Join us! The event is free, but you’ll need to register to attend.

AWS Start-up Challenge

We’re getting close to launching our yearly contest. Sign up to get notified the second we open it up for submissions.