Category: Amazon EC2


Scientific Computing with EC2 Spot Instances

Do you use EC2 Spot Instances in your application? Do you understand how they work and how they can save you a lot of money? If you answered no to any of these questions, then you are behind the times and you need to catch up.

I’m dead-serious.

The scientific community was quick to recognize that their compute-intensive batch workloads (often known as HPC or Big Data) are a perfect fit for EC2 Spot Instances. These AWS customers have seen cost savings of 50% to 66% compared to running the same job on On-Demand instances, allowing them to make the best possible use of their research funds. Moreover, they can set their maximum Spot bid to reflect the priority of the work, bidding higher in order to increase their access to compute cycles.
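
If you want to experiment with this bidding model from code, here is a minimal sketch using the boto3 Python SDK; the AMI ID, instance type, key pair, and maximum price are placeholders rather than values from any of the case studies below:

    # Hypothetical sketch: requesting Spot capacity with a maximum price via boto3.
    # The AMI ID, instance type, key pair, and price are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.request_spot_instances(
        SpotPrice="0.25",              # the most you are willing to pay per instance-hour
        InstanceCount=10,              # size of the batch worker pool
        Type="one-time",               # let the request lapse when capacity is reclaimed
        LaunchSpecification={
            "ImageId": "ami-0123456789abcdef0",   # placeholder AMI with your HPC job baked in
            "InstanceType": "c5.large",
            "KeyName": "research-keypair",
        },
    )

    for req in response["SpotInstanceRequests"]:
        print(req["SpotInstanceRequestId"], req["State"])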

Our friends over at Cycle Computing have used Spot Instances to create a 30,000 core cluster that spans 3 AWS Regions. They were able to run this cluster for nine hours at a cost of $1279 per hour (a 57% savings vs. On-Demand). The molecular modeling job running on the cluster consumed 10.9 compute years and had access to over 30 TB of RAM.

Harvard's Laboratory for Personalized Medicine (LPM) uses Amazon EC2 Spot Instances to run genetic testing models and simulations, stretching their grant money even further. One day of engineering allowed them to save roughly 50% on their instance costs going forward.

Based on the number of success stories that we have seen in the scientific arena, we have created a brand new (and very comprehensive) resource page dedicated to scientific researchers using Spot Instances. We’ve collected a number of scientific success stories, videos, and other resources. Our new Scientific Computing Using Spot Instances page has all sorts of goodies for you.

Among the many new and unique things, you will find:

  • A case study from Harvard Medical School. They run patient (virtual avatar) simulations on EC2. After one day’s worth of engineering effort, they now run their simulations on Spot Instances and have realized a cost savings of over 50%. Some of the work described in this case study is detailed in a new paper, Biomedical Cloud Computing With Amazon Web Services.
  • A video tutorial that will show you how to use the newest version of MIT’s StarCluster to launch an entire cluster of Spot Instances in minutes. This video was produced by our friends at BioTeam.
  • A video tutorial that will show you how to launch your Elastic MapReduce job flows on Spot Instances (see the sketch after this list for the equivalent API calls).
  • Detailed technical and business information about the use of Spot Instances for scientific applications including a guide to getting started and information on migrating your applications.
  • Common architectures (MapReduce, Grid, Queue, and Checkpoint) and best practices.
  • Additional case studies from DNAnexus, Numerate, University of Melbourne/University of Barcelona, BioTeam, Cycle Computing, and EagleGenomics.
  • A list of great Solution Providers who can help you get started if you need a little extra assistance migrating to Spot Instances.
  • Documentation and tutorials.
  • Links to a number of research papers on the use of Spot Instances.
  • Other resources like our Public Data Sets on AWS and AWS Academic programs.
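
For reference, here is a rough sketch of what a Spot-backed Elastic MapReduce job flow looks like through the API, using the boto3 Python SDK; the release label, instance types, counts, and bid price are placeholders rather than recommendations:

    import boto3

    emr = boto3.client("emr", region_name="us-east-1")

    cluster = emr.run_job_flow(
        Name="spot-backed-job-flow",
        ReleaseLabel="emr-6.15.0",          # placeholder EMR release
        Applications=[{"Name": "Hadoop"}],
        Instances={
            "InstanceGroups": [
                {   # keep the master on-demand so the cluster survives Spot reclamation
                    "Name": "master",
                    "Market": "ON_DEMAND",
                    "InstanceRole": "MASTER",
                    "InstanceType": "m5.xlarge",
                    "InstanceCount": 1,
                },
                {   # bid for the workers on the Spot market
                    "Name": "core",
                    "Market": "SPOT",
                    "BidPrice": "0.10",      # placeholder maximum price per instance-hour
                    "InstanceRole": "CORE",
                    "InstanceType": "m5.xlarge",
                    "InstanceCount": 8,
                },
            ],
            "KeepJobFlowAliveWhenNoSteps": False,   # shut down when the steps finish
        },
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )
    print(cluster["JobFlowId"])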

Spot Instances work great for scientific research, but there are a huge number of other customers out there who also love Spot. For example, Spot works really well for lots of other use cases such as analytics, big data, financial modeling, geospatial analysis, image and media encoding, testing, and web crawling. Check out this brand new video for more information on common use cases and example customers who leverage them.

Again, if you don’t grasp the value of Spot Instances, you are behind the times. Check out our new page and bring yourself up to date today.

If you have a scientific computing success story of your own (with or without Spot) or have feedback on how to make Spot even better, we’d love to hear more about it. Please feel free to post a comment to the blog or to email it to us at spot-instance-feedback@amazon.com.

Finally, if you are excited about Spot and want to join our team, please contact Kelly O'Mara at komara@amazon.com to learn more about the team and our open positions.

— Jeff;

 

AWS Summer Startups: Peritor/Scalarium

Over the summer months, we’d like to share a few stories from startups around the world: what they are working on and how they are using the cloud to get things done. Today I'm speaking to Jonathan and Thomas, two of the creators of Scalarium, from Berlin, Germany!

[Photo: the Peritor team]


R: Hi guys, could you briefly describe Scalarium and the background of your team?

Thomas:
With Scalarium, we’ve created an easy management service for EC2 clusters. Scalarium helps our customers deploy Rails, Node.js, PHP, Java, Python or any other stack. It automates the initial setup and continuous configuration of servers. Scalarium also takes care of scaling, security, monitoring, and a lot more.
We started as an IT consultancy in 2005 and used EC2 from the early days on to help our clients scale out. Doing so, we realized that we kept repeating ourselves in these kinds of projects, so we created Scalarium as a framework that helps customers automate EC2 deployments.

 

R: How have you incorporated Amazon Web Services as part of your own architecture? What services are you using and how?
 
Jonathan:
We heavily use EC2, EBS and S3. And in our stack you will find Ruby, CouchDB, Redis, RabbitMQ, Chef and other nice and shiny stuff. We brought you a little illustration that shows you how we run Scalarium on EC2. But before that, you will need to understand a little more about what we do.
As mentioned, Scalarium helps customers run apps on EC2. But instead of offering you a restrictive and expensive PaaS solution, we offer you an elegant way to automate everything on your servers. You still maintain root access to all servers and are able to configure each and every setting.

 

R: How does Scalarium help customers run apps on AWS?
 
Thomas:
In the cloud, each server goes through something that we would describe as a server life cycle. Each and every server in your cluster comes into existence at some time, experiences some changes, and goes away at some later point. Some of them have a rather short lifespan, like application servers that are used to handle bursts; others have long lifespans, like database servers. But all of them go through this cycle.
We defined events in this life cycle which we, and you, can hook into to execute scripts on the servers. The life cycle events used in Scalarium are the following (a toy sketch of the idea follows the list):
[Image: the Scalarium server life cycle]

  • Setup is used to update a base image and install everything you need on the fly as soon as the server comes into existence.
  • Configure is triggered by any change in the cluster – new servers coming or old ones going.
  • Deploy executes scripts that should run during the deployment of an application on the servers. You can hook into the deployment with before_migrate or any other hook you know from Capistrano.
  • Undeploy – this is triggered if you want to remove an application.
  • Shutdown is triggered if you gracefully stop a server. You can copy stuff around or inform other servers about the absence of the server in advance.
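
To make the event model concrete, here is a toy sketch in Python of the underlying idea – each event simply triggers a set of hook scripts on the server. It is purely illustrative and is not Scalarium's actual agent or API; the script paths are hypothetical.

    # Toy illustration of the life cycle idea above -- not Scalarium's actual agent or API.
    # Each event name maps to a list of hypothetical hook scripts that run when the event fires.
    import subprocess

    HOOKS = {
        "setup":     ["hooks/install_packages.sh", "hooks/configure_services.sh"],
        "configure": ["hooks/rewrite_peer_config.sh"],   # runs on every cluster change
        "deploy":    ["hooks/before_migrate.sh", "hooks/restart_app.sh"],
        "undeploy":  ["hooks/remove_app.sh"],
        "shutdown":  ["hooks/drain_and_notify.sh"],
    }

    def fire(event: str) -> None:
        """Run every script hooked to the given life cycle event, in order."""
        for script in HOOKS.get(event, []):
            subprocess.run(["bash", script], check=True)

    if __name__ == "__main__":
        fire("setup")       # e.g. right after the instance boots
        fire("configure")   # e.g. whenever a server joins or leaves the cluster
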
Now imagine a very basic setup with one load balancer, a couple of app servers, and a single database. What would you need to do if you wanted to add another app server to your stack?
[Image: example cluster with a load balancer, app servers, and a database]
You would need to boot an AMI (Amazon Machine Image), log in to the machine, install updates and dependencies, configure all services, cron jobs, and so on, and last but not least deploy your application. But you are not done yet. You also need to log in to the database server and grant access to the new app server by adding its IP to your ACL. After that, you have to log in to your load balancer and add the app server to the load cycle.
This procedure is rather tedious even for easy and basic setups like this, but as you can imagine, the number of dependencies and tasks grows very fast as soon as you have more tiers and servers in your cluster.
What would you do if one of your servers dies or isn't reachable due to some temporary network issue? Have a look at the Netflix Tech Blog and learn about the chaos monkey and his friends if you think your servers will always be on and flawless forever.
We created Scalarium to take care of these kinds of concerns automatically. You can extend the abilities of Scalarium as you like because you can react to all life cycle events and hook into them. This enables you to do just about everything. You always start with a vanilla OS, and in the end you have a totally customized setup on your server, and all other servers in the cluster know how to react and reconfigure themselves. We offer a broad selection of predefined stacks and examples. You can change them easily or add your own.

 

R: How does the bootstrapping of an instance work?
Jonathan:
In this picture you can see roughly what happens behind the scenes when a new server is added to a cluster:
[Image: how a new server bootstraps and connects back to Scalarium]
As soon as a new server is requested, we ask Amazon for it. Once the server has finished booting, it downloads the Scalarium agent and a custom certificate, installs the agent, and connects back to Scalarium over an encrypted and signed channel. We check what kind of server you instructed it to be and execute the appropriate Chef recipes. Chef is an open-source system integration framework, similar to Puppet or CFEngine. Check out our example cookbooks on GitHub to get a feeling for how easy it is to use Chef. You will find the main Scalarium cookbooks there too.
The server bootstraps and will be your new app server, database or whatever you wanted it to be. This process usually takes just one or two minutes depending on the stack you want to install and the size of the server.
After the successful bootstrapping of a new server, all existing servers in the cluster get informed. This step is very important because now recipes bound to the configure event are executed on each server in the cluster. That way, load balancers can execute recipes that ensure they are aware of all running app servers and can safely remove stopped app servers from their load cycle. A database server can check whether it has granted access to the available app servers. But of course you could also do more advanced things, like adding new database servers and rebalancing your data, or updating your Nagios alerting or your Graylog2 server to catch all the logs you want.
If you are done with your basic setup, you can easily add time- or load-based servers, add and deploy applications to your cluster, or clone the complete environment to create a staging system. All of that can be done via the UI or the Scalarium API.

R: How do you run on AWS yourself?
Thomas:

Below is a simplified visualization of our own architecture. We use two main databases for Scalarium. One is CouchDB, used to store information like cluster configurations, server descriptions and current state, applications, and deployment definitions. The other one is Redis, used for accounting, events, monitoring, and metering data.

We chose CouchDB for high availability, easy replication, clustering, robustness, and a short recovery time. Redis is awesome for the very dynamic, fast-growing, and non-critical data we have.

Scalarium itself is a Rails app, and the Scalarium API is a Sinatra app. Workers are based on RabbitMQ/Nanite.

Our setup spans multiple regions and availability zones to guarantee high uptime. CouchDB's awesome replication features are used for master/master replication across regions. Redis uses a master/slave setup for data replication.

[Image: the Scalarium architecture]
R: Why did you decide to use AWS?
Thomas:

That's simple. We use AWS because it's the only big, globally distributed, and reliable source of IaaS out there. Amazon kicks some serious ass and develops tons of new features and services. Last but not least, we eat our own dog food – Scalarium runs on Amazon and is managed with Scalarium.

By using AWS and Scalarium, we can grow in no time to handle as many customers as we like, spin up staging environments, deploy fast and often, and do all of that in a completely automated way. All failover, scaling, backup tasks, monitoring, and so on are automated. You will love working that way. You can concentrate on developing your app without hassling with data centers and servers.

Amazon enables us to serve clients ranging from startups with one server, through SaaS offerings and agencies with a couple of servers, to the world's biggest social game providers like wooga or Plinga, with an incredible number of servers running their games all over the globe.

If you like, you can watch a rather old video in which Jonathan explains the complete process of creating a Rails cluster, adding a Rails app, and deploying it. Or even better, sign up and try Scalarium for yourself.

R: Any last words?
Thomas:

Yes. Take part in the Global AWS Start-up Challenge! It's a short application form, and you can win cash and AWS credits and get a lot of visibility. And if that's not enough, we'll give every semifinalist half a year of Scalarium for free on top.

So apply for the Start-Up Challenge now!

 -rodica

AWS Summer Startups: ShowNearby

 

Over the summer months, we’d like to share a few stories from startups around the world: what they are working on and how they are using the cloud to get things done. Today, we’re profiling ShowNearby, from Singapore!

[Photo: the ShowNearby team]

 
About ShowNearby

ShowNearby is a leading location-based service in Singapore and an early adopter of the Android platform. Unlike many mobile apps out there, ShowNearby started with deployment on Android and then moved on to the iPhone by mid-2010 and BlackBerry by fall of 2010. Today, the ShowNearby flagship app is available on Android, iPhone, and BlackBerry, and reports approximately 100 million mobile searches conducted across all its platforms.

I spoke to Stephen Bylo, Senior Cloud Architect at ShowNearby, who added a bit of color to the experience of running, planning, and meeting the requirements of a popular mobile app. If you’re not from Singapore and would like to see the app, here’s a quick video demo of ShowNearby.
Surviving Our Success with AWS

Due to the success of our application, we experienced very big growth in a short period of time. When we launched on the popular iOS platform and subsequently on BlackBerry, we were blown away by the huge surge of users who started using ShowNearby. In fact, in December 2010, ShowNearby became the top downloaded app in the App Store, edging out thousands of other popular free apps in Singapore! It was then that we realized we needed a scalable solution to handle the increasing load and strain on our servers, which our existing provider was unable to provide.

Our infrastructure at the time was hosted with a local service provider, but it was unable to cope with the high traffic peaks we were facing. We analyzed a few vendors and decided to go ahead with Amazon because of its reliability, high availability, range of services, and pricing, but mostly because of its solid customer support.

As part of our deployment, we added AWS services incrementally. Currently, we make extensive use of Amazon EC2 instances with Auto Scaling, the Relational Database Service (RDS), the Simple Queue Service (SQS), CloudWatch, and the Simple Storage Service (S3).
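
As an illustration of the auto scaling piece, here is a minimal sketch using the boto3 Python SDK and present-day Auto Scaling features (launch templates and target tracking); the group name, launch template, and subnet IDs are placeholders, not ShowNearby's actual configuration:

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="ap-southeast-1")

    # Placeholder launch template and subnet IDs -- not a real configuration.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-tier",
        LaunchTemplate={"LaunchTemplateName": "web-tier-template", "Version": "$Latest"},
        MinSize=2,
        MaxSize=20,
        VPCZoneIdentifier="subnet-0123abcd,subnet-0456efgh",
    )

    # Track average CPU across the group and add or remove instances to stay near 60%.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-tier",
        PolicyName="target-60-percent-cpu",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
            "TargetValue": 60.0,
        },
    )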

The next item on our list is to focus on automating the deployment of infrastructure environments with CloudFormation, as well as optimizing content delivery globally with CloudFront.

Choosing the Tech Stack That Makes Business Sense

ShowNearby currently leverages the LAMP stack for most of our web services. Delivery of accurate, always-available, location-based data is ShowNearby's top priority. That is why we chose AWS.

Other important reasons to choose the cloud and AWS:

  • Speed and agility to create and tear down infrastructure as and when it is needed.
  • Good and fast network accessibility for our app.
  • Ability to scale up and out when needed.
  • Ability to duplicate infrastructure into new regions.

Reaching Automation Nirvana with AWS

We chose to use AWS's Linux-based AMI and dynamically build on top of it using well-defined, automatic configuration. Now, every time an instance is started, we are sure the infrastructure is in a known state. Admittedly, a lot of hard work is involved to achieve Automation Nirvana, but knowing precisely what works at the end of the day helps us sleep at night.

  • We use Amazon S3 to store infrastructure configuration and user-provided content and images. ShowNearby's business is currently in several regions and marching into new ones, so S3 is a natural precursor to AWS's CloudFront content distribution service.
  • We use SQS to help process user behaviour and to determine usage patterns, which we use to provide our dear users with a better and, hopefully, more personalised experience (a sketch of this pattern follows this list).
  • We use Spot Instances for early development and testing servers.
  • We use CloudWatch extensively – how could we do without it?
  • We use RDS for our hosted MySQL database needs, of course.
  • We use the command-line and PHP AWS API tools to a large extent, which gives us increased business agility.
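
Here is a minimal sketch of that SQS pattern using the boto3 Python SDK; the queue name and event fields are placeholders, not ShowNearby's actual schema:

    import json
    import boto3

    sqs = boto3.client("sqs", region_name="ap-southeast-1")
    queue_url = sqs.get_queue_url(QueueName="usage-events")["QueueUrl"]   # hypothetical queue

    # Producer side: the app enqueues one message per user action.
    sqs.send_message(
        QueueUrl=queue_url,
        MessageBody=json.dumps({"user_id": "u-42", "action": "search", "query": "coffee"}),
    )

    # Consumer side: a worker drains the queue and feeds an analytics store.
    messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in messages.get("Messages", []):
        event = json.loads(msg["Body"])
        # ... aggregate the event into usage patterns here ...
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])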

Words of Wisdom for Mobile Startups

We would tell them to find partners who can be good friends at the same time. The race is long and tough, so you'd better enjoy every step of the way. There is a window of opportunity open in Asia right now; unleash your full potential, show what you are capable of, and you'll be rewarded.

Today, if we need to refresh or update a web application, we start new instances and flush out the old. Moving forward, we are looking to reduce the time between releases still further, so we are working to improve our already solid infrastructure and configuration management. Further automation in the form of Chef, Puppet, or something similar is being investigated.

——————————————————

Enter Your Startup in the AWS Start-up Challenge!
This year’s AWS Start-up Challenge is a worldwide competition with prizes at all levels, including up to $100,000 in cash, AWS credits, and more for the grand prize winner. Seven finalists receive $10,000 in AWS credits and five regional semi-finalists receive $2,500 in AWS credits. All eligible entries receive $25 in AWS credits. Learn more and enter today!

You can also follow @AWSStartups on Twitter for updates.

-rodica

Now Available: Windows Server 2008 R2 Cluster Compute and Cluster GPU

You can now run Microsoft Windows Server 2008 R2 on the EC2 Cluster Compute and Cluster GPU instances using new Windows 2008 R2 and Windows 2008 R2 SQL Server AMIs.

To reiterate, here are the specs for these compute-intensive instance types:

Cluster Compute Quadruple Extra Large:

  • 23 GB of memory
  • 33.5 EC2 Compute Units (2 x Intel Xeon X5570, quad-core Nehalem architecture)
  • 1690 GB of instance storage
  • 64-bit platform
  • I/O Performance: Very High (10 Gigabit Ethernet)

Cluster GPU Quadruple Extra Large:

  • 22 GB of memory
  • 33.5 EC2 Compute Units (2 x Intel Xeon X5570, quad-core Nehalem architecture)
  • 2 x NVIDIA Tesla Fermi M2050 GPUs
  • 1690 GB of instance storage
  • 64-bit platform
  • I/O Performance: Very High (10 Gigabit Ethernet)

These instances provide you with plenty of RAM, cycles, and network performance for heavy-duty workloads. With this release, you can now run Microsoft Windows on every one of the eleven EC2 instance types, from the Micro on up.

You can select the Windows Server 2008 R2 AMI for Cluster Instances from the AWS Management Console.
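
If you prefer to script the launch rather than use the console, here is a minimal sketch using the boto3 Python SDK; the AMI ID, key pair, and placement group names are placeholders, and you would substitute the actual Windows Server 2008 R2 cluster AMI for your Region:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Placeholder AMI ID -- look up the current Windows Server 2008 R2 cluster AMI yourself.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="cc1.4xlarge",            # Cluster Compute Quadruple Extra Large
        MinCount=1,
        MaxCount=1,
        Placement={"GroupName": "hpc-group"},  # cluster instances benefit from a placement group
        KeyName="windows-keypair",
    )
    print(response["Instances"][0]["InstanceId"])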

— Jeff;

AWS Summer Startups: Mendeley

Although summer is starting to ebb into autumn in the northern hemisphere, it’s just getting going south of the equator, so there is still time to profile another start-up in our ongoing series of profiles!

 

[Photo: the Mendeley team]

Introducing Mendeley

Today I’m very happy to introduce you to Mendeley, a London-based startup that harnesses cloud computing to help the academic community manage existing libraries of research, discover new research, and collaborate with researchers around the world. They are simultaneously building the world's largest crowd-sourced database of research, covering all disciplines from Arts to Zoology. Mendeley's software also anonymously aggregates all usage data in the cloud and tracks which articles are being read, by whom, when, and how often.

Like a lot of great ideas, the founders of Mendeley set out to solve their own problem, and came up with the concept for Mendeley while studying for higher degrees in business, psychology and machine learning. The team includes many people with backgrounds in software development, academia and publishing.

I spoke to Dan Harvey, a Data Mining Engineer at Mendeley about how they came to use AWS:

“We started out buying our own hardware 3–4 years ago. Initially our main reasons for using AWS were due to being able to scale up far more quickly and cheaply than we could ourselves for document storage. Over time this is still true with regard to cost and scaling, but the elastic properties of EC2 mean we only have to pay for resources when we are using them. More recently we’re finding that AWS gives our developers more flexibility to have the resources they need to test out new code and ideas, rather than stepping on one another’s toes on shared servers.”

Mendeley are using a wide collection of AWS services to power their fast growing business, which now manages over 100 million papers.

“We wanted to produce previews of these documents for use on our article pages on the web. This was done using a combination of Elastic Beanstalk to host a Java app to render PDFs into raw images, S3 to store the data, CloudFront to serve the images to end users, and SQS to glue this all together”, said Dan.
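
As a rough illustration of how SQS can glue such a pipeline together, here is a hedged worker sketch in Python; the queue URL, bucket names, and renderer endpoint are placeholders, standing in for the Elastic Beanstalk-hosted Java renderer described above:

    import boto3
    import requests   # third-party HTTP client

    s3 = boto3.client("s3")
    sqs = boto3.client("sqs")

    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/render-requests"  # placeholder
    RENDERER = "http://pdf-renderer.example.elasticbeanstalk.com/render"            # placeholder

    while True:
        resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20)
        for msg in resp.get("Messages", []):
            key = msg["Body"]                                   # e.g. "documents/12345.pdf"
            pdf = s3.get_object(Bucket="source-documents", Key=key)["Body"].read()
            png = requests.post(RENDERER, data=pdf, timeout=60).content   # hosted renderer
            s3.put_object(Bucket="document-previews", Key=key.replace(".pdf", ".png"), Body=png)
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
            # CloudFront would then serve the preview object from the previews bucket.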

 

Data driven

With such a rich collection of documents and data, Mendeley also provides tailored recommendations to its users, making use of Elastic MapReduce and Mahout. Dan Harvey continues:

“Our latest use of AWS is with the Apache Mahout project. This is distributed collaborative filtering on top of the Hadoop framework; we use it to provide tailored recommendations for our users. We have our own Hadoop cluster internally but chose EMR for this because Mahout requires a different task granularity than our existing workload; we can optimise Hadoop on EMR for the specific recommendation task. It also allows us to have a simple way of calculating the daily cost of recommendations, based on the on-demand EC2 instances EMR uses with each run; with a multi-use Hadoop cluster it is very hard to allocate costs between the different tasks that run on the shared infrastructure. Finally, when we’re done running recommendations, we can shut the cluster down and it costs us nothing.”
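
To make the mechanics concrete, here is a hedged boto3 sketch of submitting a Mahout recommendation job as a step on a transient EMR cluster; the job flow ID, S3 paths, and Mahout driver arguments are placeholders and not Mendeley's actual pipeline:

    import boto3

    emr = boto3.client("emr", region_name="eu-west-1")

    # "j-XXXXXXXX" stands in for the job flow ID of a transient cluster created with
    # KeepJobFlowAliveWhenNoSteps=False, so it shuts itself down once the step completes.
    emr.add_job_flow_steps(
        JobFlowId="j-XXXXXXXX",
        Steps=[{
            "Name": "tailored-recommendations",
            "ActionOnFailure": "TERMINATE_CLUSTER",
            "HadoopJarStep": {
                "Jar": "s3://my-bucket/jars/mahout-core-job.jar",   # placeholder Mahout job jar
                "MainClass": "org.apache.mahout.cf.taste.hadoop.item.RecommenderJob",
                "Args": [
                    "--input",  "s3://my-bucket/readership/",
                    "--output", "s3://my-bucket/recommendations/",
                    "--similarityClassname", "SIMILARITY_LOGLIKELIHOOD",
                ],
            },
        }],
    )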

Introduction to AWS

Dan will join us to talk about Mendeley’s use of AWS in more detail at our upcoming Introduction to AWS event in London, where newcomers to the cloud can learn about how to build scalable, elastic applications on AWS. Attendance is free, but you’ll need to register.

 

More information

  • Mendeley have their own API, with which developers can build applications… for science! The Mendeley Binary Battle, an API competition judged by Amazon CTO Werner Vogels and others, runs until the end of September.
     
  • If you’re a start-up running on AWS, don’t forget that there is still time to enter this year’s AWS Start-up Challenge, a worldwide competition with prizes at all levels including $100,000 in cash and AWS credits for the grand prize winner. Learn more, and enter today.

 

~ Matt

AWS Summer Startups: Classle

Over the summer months, we’d like to share a few stories from startups around the world: what they are working on and how they are using the cloud to get things done. Today, we’re profiling Classle, from Chennai, India!

[Photo: the Classle team]


I recently read Mark Suster's blog post on Avoiding Monoculture, which is why I'm happy to share with you what I've learned about Classle, a startup from India focused on solving education problems for areas of the world that face serious resource constraints. Classle has the big goal of changing the world around them by encouraging students and experts to share knowledge and expertise, and it uses the AWS cloud to facilitate this exchange.

I reached out to Vaidya Nathan, Founder and CEO of Classle:

About Classle

Classle is a social learning infrastructure company with a specific focus on education, learning, and knowledge communities. Using its main product, the Cloud Campus platform, Classle creates and manages private and public social learning environments and offers services based on it.

Classle helps rural students access higher education and reach opportunities that were unavailable before. Our company partners with a wide network of colleges throughout India, which act as internet-connected “learning nodes” that distribute educational materials to students. When students go home for the day with lectures and other materials downloaded from the library, Classle uses mobile technology and SMS-based quizzes to keep them engaged and actively learning. The entire system was designed to work with simple $10 phones, not smartphones, and the students are entirely addicted to these quizzes – they can't get enough of them.
All these services are provided free of charge to both students and colleges. Classle monetizes by partnering with companies who are looking to hire top talent from among the students, and by selling their cloud-based learning platform for training purposes within companies.

Starting Small and Growing with Business

We have been using AWS since our inception in early 2009. Our first steps involved two small Amazon EC2 instances and Amazon EBS to store our database. Over the years, our use has expanded to match our business growth. Our selection criteria covered tactical as well as strategic points. From a tactical perspective, we wanted quicker provisioning, which AWS On-Demand instances enabled, and the option to secure our resource needs through Reserved Instances.

At a strategic level, we wanted to provide the best experience for our customers and it was key to build Classle on top of services, products, and infrastructure designed for growth and scale. To date, we have established relationships with over 30 educational organizations and that list is constantly growing. Thanks to AWS, we are effectively competing with some large and strong players in the e-learning space.

Sharing the AWS Lessons

We are a small, LAMP-stack team and we started using AWS in 2009. The products we currently use are listed below. For reference, we are also happy to share our Classle architecture diagram, which is included in our case study with AWS.

  • Amazon Elastic Compute Cloud (EC2)
  • Elastic Load Balancing (ELB)
  • Auto Scaling
  • Amazon Elastic Block Store (EBS)
  • Amazon Simple Storage Service (S3)
  • Amazon Reduced Redundancy Storage (RRS)
  • Amazon CloudFront with both streaming and download
  • Amazon CloudWatch
  • Amazon Relational Database Service (Amazon RDS) with Multi-AZ deployments and read replication (see the sketch below).
  • Amazon SimpleDB
  • Amazon Simple Notification Service (Amazon SNS)
  • Amazon Route 53

Pretty soon, we will also be using Amazon Elastic MapReduce clusters for our analytics requirements.
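
As a rough illustration of the RDS item above, here is a minimal boto3 sketch of creating a Multi-AZ MySQL instance plus a read replica; the identifiers, instance class, and credentials are placeholders, not Classle's actual configuration:

    import boto3

    rds = boto3.client("rds", region_name="ap-south-1")

    # Placeholder identifiers and credentials.
    rds.create_db_instance(
        DBInstanceIdentifier="classle-primary",
        Engine="mysql",
        DBInstanceClass="db.m5.large",
        AllocatedStorage=100,
        MasterUsername="admin",
        MasterUserPassword="change-me-please",
        MultiAZ=True,                      # synchronous standby in a second Availability Zone
    )

    # Asynchronous read replica to offload read-heavy course-content queries.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="classle-replica-1",
        SourceDBInstanceIdentifier="classle-primary",
    )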

Words of Wisdom to Startups

Starting a company is always hard, whether you're from India or anywhere else. However, it's worth keeping in mind that it's never been easier to go out there and try things out – with open source for robust software and cloud service providers like AWS for infrastructure, you can test your ideas and run a business at very low cost.

Being in India, where we don't have as strong a start-up mentality as in the U.S., certainly poses some unique challenges. There are many more problems to solve, and it is exciting to try to translate the existing limitations into innovations, solutions, and hence opportunities.

If I had to boil down my advice to my fellow entrepreneurs, I would say: venture with confidence, design for scale, start small, and architect for growth.

——————————————————

Enter Your Startup in the AWS Start-up Challenge!
This year’s AWS Start-up Challenge is a worldwide competition with prizes at all levels, including up to $100,000 in cash, AWS credits, and more for the grand prize winner. Learn more and enter today!

You can also follow @AWSStartups on Twitter for updates.

 -rodica

AWS Direct Connect Heads West

We introduced AWS Direct Connect last month and invited AWS users with a need for a dedicated network connection to the US East (Northern Virginia) Region to give it a try.

The service is geared for those who have big data transfer requirements or are looking for more consistent network performance when accessing the cloud.

Today we are ready to create connections to our US West Region through the AWS Direct Connect location in the Equinix San Jose facility (SV1 and SV5). To get started, visit the Direct Connect Contact page and we’ll get back to you ASAP.

Additional AWS Direct Connect locations are planned for Los Angeles, London, Tokyo and Singapore in the next several months. Please feel free to use the contact page to express your interest in connections to these locations.

— Jeff;

AWS Summer Startups: TellApart

Over the summer months, we’d like to share a few stories from startups around the world: what they are working on and how they are using the cloud to get things done. Today, we’re profiling one of last year’s finalists from the AWS Start-up Challenge: TellApart, from Burlingame, California!


TellApart's application stood out to all AWS reviewers as a very well-written entry to the AWS Start-up Challenge – pun intended, it was indeed easy from the beginning to tell it apart from the thousands that we reviewed. As we met the team at our final event and got to know them better over the past year, we grew to be as excited as they are about the future of advertising and real-time ad bidding systems on AWS. We took some time to catch up with Josh McFarland, CEO of TellApart:

About TellApart

TellApart helps online retailers identify (or tell apart) their best customers and prospects. They do so by employing a suite of marketing tools – a customer data platform, predictive customer analytics, and a next-generation display ad retargeting application. TellApart believes it is the first company to bring all of these components together in a cloud-based platform. They've also introduced a fresh, innovative business model to the market: their customers, typically online retailers, pay TellApart only when shoppers click through on TellApart-served display ads and actually make a purchase.

Real-Time Ad Bidding System, At Scale

TellApart's vision revolves around helping our e-commerce clients tell their highest-value shoppers apart from the rest. We assess each of our clients' customers and then use our predictions to place personalized display ads in front of the best visitors to drive new sales.

We do this in real time… and that's not hyperbole; we must respond to bid requests from the Google/DoubleClick Ad Exchange in under 120 ms. And we handle more than 10,000 of these requests per second today, making TellApart one of the top five companies in this space. These capabilities require big data and serious compute power – something that, as an ex-Google founding and early engineering team, we're used to having on demand. AWS makes this possible at scale, as we outlined in our recent case study.

No Technical Debt Burden

Some of you might say that data warehousing, predictive analytics, and customized marketing solutions are not new concepts at all – and you would be right. Many organizations provide these services today, and many of them have extensive data centers that support those computational needs. What makes TellApart interesting is that their technology choices and cloud-based implementation make their business decision process much more agile and easy, while keeping costs low.

When you're growing a customer base and product usage as quickly as we are, you quickly learn that scalability is the first order of business. Getting that right frees up engineering for innovation instead of maintenance. AWS has allowed TellApart to quickly build an infrastructure that is elastically scalable, redundant, fast, and cost-efficient – something that was not possible just a few years ago.

On Growing, AWS Monthly Bill, and Impact of AWS Start-up Challenge

Since being honored as an AWS Start-up Challenge finalist in December of last year, TellApart has continued on an insane trajectory, more than doubling our client list, revenue, and headcount! About the only thing that hasn't grown by at least 2x is our AWS bill. We've made smart use of the recently announced EC2 Spot Instances for our less urgent Hadoop-based data processing jobs, and we've implemented an architecture based on EC2 Reserved Instances to reduce costs for our more predictable front-end ad serving. Innovations like these pricing tiers are awesome, and they show that the AWS team is responding to our needs. P.S. – We're hiring! (For all positions except NOC Admin…)

Words of Wisdom to Entrepreneurs

Our advice for aspiring entrepreneurs is borrowed from the Greek goddess of victory: just do it! You'll be surprised at how much you can accomplish quickly as you work toward finding product-market fit. We recently completed a $13M Series B funding round, and the venture capitalists we spoke with during the process were astounded at how much we had accomplished in our first two years – and how little we spent to do it! It goes without saying that AWS continues to be a pillar of our success and efficiency.

——————————————————

Enter Your Startup in the AWS Start-up Challenge!
This year’s AWS Start-up Challenge is a worldwide competition with prizes at all levels, including up to $100,000 in cash, AWS credits, and more for the grand prize winner. Learn more and enter today!

You can also follow @AWSStartups on Twitter for updates.

 -rodica

Happy 5th Birthday to Amazon EC2

I woke up this morning and realized that today is the 5th anniversary of the launch of Amazon EC2. I published the initial announcement in the summer of 2006 while on vacation with my family.

We (the AWS Marketing Team) have put together an interactive infographic to highlight some of the more significant changes and improvements that we’ve made to EC2 over the past five years. You can click on the items in the dark blue center column to learn more.

We’ve come a long way, but from what I can tell we are still getting warmed up and the best is yet to come.

— Jeff;

PS – We are hiring for a number of great positions. If you would like to join the team, check out the complete list of AWS jobs.

Additional VM Import Functionality – Windows 2003, XenServer, Hyper-V

We’ve extended EC2’s VM Import feature to handle additional image formats and operating systems.

The first release of VM Import handled Windows 2008 images in the VMware ESX VMDK format. You can now import Windows 2003 and Windows 2008 images in any of the following formats:

  • VMware ESX VMDK
  • Citrix XenServer VHD
  • Microsoft Hyper-V VHD

I see VM Import as a key tool that will help our enterprise customers move into the cloud. There are many ways to use this service; two popular ones are extending data centers into the cloud and using the cloud as a disaster recovery repository for enterprises.

You can use the EC2 API tools or, if you use VMware vSphere, the EC2 VM Import Connector to import your VM into EC2. Once the import process is done, you will be given an instance ID that you can use to boot your new EC2 instance. You have complete control of the instance size, security group, and (optionally) the VPC destination.
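
The tooling has evolved since this post was written; purely as a point of reference, here is a hedged sketch of the same kind of import using the current import_image API in the boto3 Python SDK (which produces an AMI to launch from rather than a running instance); the bucket, key, and description are placeholders:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # The VHD or VMDK file must first be uploaded to S3; bucket and key are placeholders.
    task = ec2.import_image(
        Description="Windows Server 2003 web server",
        DiskContainers=[{
            "Description": "boot volume",
            "Format": "VHD",                       # also accepts "VMDK"
            "UserBucket": {"S3Bucket": "my-import-bucket", "S3Key": "images/webserver.vhd"},
        }],
    )
    print(task["ImportTaskId"])   # poll describe_import_image_tasks until it completes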

You can also import data volumes, turning them into Elastic Block Store (EBS) volumes that can be attached to any instance in the target Availability Zone.

As I’ve said in the past, we plan to support additional operating systems, versions, and virtualization platforms in the future. We are also planning to enable the export of EC2 instances to common image formats.


— Jeff;