AWS Blog

New – Worldwide Delivery of Amazon SNS Messages via SMS

by Jeff Barr | in Amazon SNS, Mobile Development

Many applications use Amazon Simple Notification Service (SNS) to deliver SMS messages to individuals or to groups. Because SNS is simple and easy to use, you can add push notifications to your application with a modest amount of code.

Today we are making SNS even more useful by adding support for worldwide delivery of SMS messages. You can now use SNS to send SMS text messages to mobile phone numbers in more than 200 countries.

Along with worldwide delivery, we are also adding several other SMS features:

  • No Opt-In – Message recipients no longer need to opt in to SNS before they can receive messages.
  • Additional Regions – You can now send messages with SMS delivery from the US East (Northern Virginia), US West (Oregon), Europe (Ireland), Asia Pacific (Tokyo), Asia Pacific (Sydney), and Asia Pacific (Singapore) Regions.
  • Direct Publishing – You can now send messages to a phone number without first subscribing the number to an SNS topic (see the sketch after this list). If a user replies with “STOP”, they will be added to the opt-out list associated with your AWS account (this feature applies to messages sent to phone numbers located in countries where local regulations require opt-out capabilities).
  • Opt-Out Management – You can now manage opt-out numbers using the AWS Management Console, API, or CLI (learn more).
  • Delivery Status – You can now enable logging of delivery status for SMS messages and access the status in a pair of CloudWatch Log Groups (Successful and Failed).
  • Spending Limits – You can now set a spending limit on a per-account, per-month basis. You can also set a per-message spending limit.
  • Usage Reports – You can now access daily SMS usage reports, with information on successful and failed deliveries (learn more).
  • Long/Short Code Pool – Messages sent via SNS will now appear to come from one of several long or short codes maintained by Amazon SNS. Multiple messages sent from one account to a particular phone number will all be sent from the same long or short code.
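
Here is a quick sketch of what direct publishing and opt-out management look like from the AWS CLI (the phone number and spending limit below are placeholders):

# Send an SMS message directly to a phone number (no topic or subscription needed)
$ aws sns publish --phone-number "+14155550100" --message "Your order has shipped!"

# Check whether a recipient has opted out of receiving SMS messages
$ aws sns check-if-phone-number-is-opted-out --phone-number "+14155550100"

# Cap the account's monthly SMS spend (in US dollars)
$ aws sns set-sms-attributes --attributes MonthlySpendLimit=30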

To learn more about all of these new features, read Amazon SNS Worldwide SMS Delivery on the AWS Mobile Development Blog.

Jeff;

Amazon Elastic File System – Production-Ready in Three Regions

by Jeff Barr | in Amazon EC2, Amazon Elastic File System

The portfolio of AWS storage products has grown increasingly rich and diverse over time. Amazon S3 started out with a single storage class and has grown to include storage classes for regular, infrequently accessed, and archived objects. Similarly, Amazon Elastic Block Store (EBS) began with a single volume type and now offers a choice of four types of SAN-style block storage, each designed to be a great fit for a particular set of access patterns and data types.

With object storage and block storage capably addressed by S3 and EBS, we turned our attention to the file system. We announced the Amazon Elastic File System (EFS) last year in order to provide multiple EC2 instances with shared, low-latency access to a fully-managed file system.

I am happy to announce that EFS is now available for production use in the US East (Northern Virginia), US West (Oregon), and Europe (Ireland) Regions.

We are launching today after an extended preview period that gave us insights into an extraordinarily wide range of customer use cases. The EFS preview was a great fit for large-scale, throughput-heavy processing workloads, along with many forms of content and web serving. During the preview we received a lot of positive feedback about the performance of EFS for these workloads, along with requests to provide equally good support for workloads that are sensitive to latency and/or make heavy use of file system metadata. We’ve been working to address this feedback and today’s launch is designed to handle a very wide range of use cases. Based on what I have heard so far, our customers are really excited about EFS and plan to put it to use right away.

Why We Built EFS
Many AWS customers have asked us for a way to more easily manage file storage on a scalable basis. Some of these customers run farms of web servers or content management systems that benefit from a common namespace and easy access to a corporate or departmental file hierarchy. Others run HPC and Big Data applications that create, process, and then delete many large files, resulting in storage utilization and throughput demands that vary wildly over time. Our customers also insisted on high availability and durability, along with a strongly consistent model for access and modification.

Amazon Elastic File System
EFS lets you create POSIX-compliant file systems and attach them to one or more of your EC2 instances via NFS. The file system grows and shrinks as necessary (there’s no fixed upper limit and you can grow to petabyte scale) and you don’t pre-provision storage space or bandwidth. You pay only for the storage that you use.

EFS protects your data by storing copies of your files, directories, links, and metadata in multiple Availability Zones.

In order to provide the performance needed to support large file systems accessed by multiple clients simultaneously, Elastic File System performance scales with storage (I’ll say more about this later).

Each Elastic File System is accessible from a single VPC, and is accessed by way of mount targets that you create within the VPC. You have the option to create a mount target in any desired subnet of your VPC. Access to each mount target is controlled, as usual, via Security Groups.

EFS offers two distinct performance modes. The first mode, General Purpose, is the default. You should use this mode unless you expect to have tens, hundreds, or thousands of EC2 instances access the file system concurrently. The second mode, Max I/O, is optimized for higher levels of aggregate throughput and operations per second, but incurs slightly higher latencies for file operations. In most cases, you should start with general purpose mode and watch the relevant CloudWatch metric (PercentIOLimit). When you begin to push the I/O limit of General Purpose mode, you can create a new file system in Max I/O mode, migrate your files, and enjoy even higher throughput and operations per second.

Elastic File System in Action
It is very easy to create, mount, and access an Elastic File System. I used the AWS Management Console; I could have used the EFS API, the AWS Command Line Interface (CLI), or the AWS Tools for Windows PowerShell as well.
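
If you would rather script the steps that follow, here is a minimal CLI sketch of the same flow (the creation token, subnet ID, and security group ID are placeholders):

# Create a file system in the default General Purpose performance mode
$ aws efs create-file-system --creation-token my-efs --performance-mode generalPurpose

# Create a mount target in a subnet of the VPC, guarded by a security group
$ aws efs create-mount-target --file-system-id fs-12345678 \
    --subnet-id subnet-abcd1234 --security-groups sg-abcd1234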

I opened the console and clicked on the Create file system button:

Then I selected one of my VPCs and created a mount target in my public subnet:

My security group (corp-vpc-mount-target) allows my EC2 instance to access the mount point on port 2049. Here’s the inbound rule; the outbound one is the same:

I added Name and Owner tags, and opted for the General Purpose performance mode:

Then I confirmed the information and clicked on Create File System:

My file system was ready right away (the mount targets took another minute or so):

I clicked on EC2 mount instructions to learn how to mount my file system on an EC2 instance:

I mounted my file system as /efs, and there it was:
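
The instructions boil down to a single NFS mount; here is the general shape (the file system ID and region are placeholders):

$ sudo mkdir /efs
$ sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /efs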

I copied a bunch of files over, and spent some time watching the NFS stats:

The console reports on the amount of space consumed by my file systems (this information is collected every hour and is displayed 2-3 hours after it is collected):

CloudWatch Metrics
Each file system delivers the following metrics to CloudWatch:

  • BurstCreditBalance – The amount of data that can be transferred at the burst level of throughput.
  • ClientConnections – The number of clients that are connected to the file system.
  • DataReadIOBytes – The number of bytes read from the file system.
  • DataWriteIOBytes – The number of bytes written to the file system.
  • MetadataIOBytes – The number of bytes of metadata read and written.
  • TotalIOBytes – The sum of the preceding three metrics.
  • PermittedThroughput – The maximum allowed throughput, based on file system size.
  • PercentIOLimit – The percentage of the available I/O utilized in General Purpose mode.

You can see the metrics in the CloudWatch Console:
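
You can also pull the metrics from the command line. Here is a sketch of a PercentIOLimit query (the file system ID and time range are placeholders):

$ aws cloudwatch get-metric-statistics --namespace AWS/EFS \
    --metric-name PercentIOLimit --statistics Average --period 300 \
    --dimensions Name=FileSystemId,Value=fs-12345678 \
    --start-time 2016-06-28T00:00:00Z --end-time 2016-06-28T01:00:00Z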

EFS Bursting, Workloads, and Performance
The throughput available to each of your EFS file systems will grow as the file system grows. Because file-based workloads are generally spiky, with demands for high levels of throughput for short amounts of time and low levels the rest of the time, EFS is designed to burst to high throughput levels on an as-needed basis.

All file systems can burst to 100 MB per second of throughput. Those over 1 TB can burst to an additional 100 MB per second for each TB stored. For example, a 2 TB file system can burst to 200 MB per second and a 10 TB file system can burst to 1,000 MB per second of throughput. File systems larger than 1 TB can always burst for 50% of the time if they are inactive for the other 50%.

EFS uses a credit system to determine when a file system can burst. Each one accumulates credits at a baseline rate (50 MB per second per TB of storage) that is determined by the size of the file system, and spends them whenever it reads or writes data. The accumulated credits give the file system the ability to drive throughput beyond the baseline rate.

Here are some examples to give you a better idea of what this means in practice:

  • A 100 GB file system can burst up to 100 MB per second for up to 72 minutes each day, or drive up to 5 MB per second continuously.
  • A 10 TB file system can burst up to 1 GB per second for 12 hours each day, or drive 500 MB per second continuously.
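
If you are curious where the 72-minute figure comes from, here is the arithmetic, using the baseline rate described above:

# 100 GB = 0.1 TB, so the baseline rate is 0.1 TB x 50 MB/s per TB = 5 MB/s
# Credits accrued per day: 5 MB/s x 86,400 s = 432,000 MB
# Time to spend them at 100 MB/s: 432,000 MB / 100 MB/s = 4,320 s = 72 minutes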

To learn more about how the credit system works, read about File System Performance in the EFS documentation.

In order to gain a better understanding of this feature, I spent a couple of days copying and concatenating files, ultimately using well over 2 TB of space on my file system. I watched the PermittedThroughput metric grow in concert with my usage as soon as my file collection exceeded 1 TB. Here’s what I saw:

As is the case with any file system, the throughput you’ll see is dependent on the characteristics of your workload. The average I/O size, the number of simultaneous connections to EFS, the file access pattern (random or sequential), the request model (synchronous or asynchronous), the NFS client configuration, and the performance characteristics of the EC2 instances running the NFS clients each have an effect (positive or negative). Briefly:

  • Average I/O Size – The work associated with managing the metadata associated with small files via the NFS protocol, coupled with the work that EFS does to make your data highly durable and highly available, combine to create some per-operation overhead. In general, overall throughput will increase in concert with the average I/O size since the per-operation overhead is amortized over a larger amount of data. Also, reads will generally be faster than writes.
  • Simultaneous Connections – Each EFS file system can accommodate connections from thousands of clients. Environments that can drive highly parallel behavior (from multiple EC2 instances) will benefit from the ability that EFS has to support a multitude of concurrent operations.
  • Request Model – If you enable asynchronous writes to the file system by including the async option at mount time, pending writes will be buffered on the instance and then written to EFS asynchronously. Accessing a file system that has been mounted with the sync option or opening files using an option that bypasses the cache (e.g. O_DIRECT) will, in turn, issue synchronous requests to EFS.
  • NFS Client Configuration – Some NFS clients use laughably small (by today’s standards) default values for the read and write buffers. Consider increasing them to 1 MiB (again, this is an option to the mount command; see the example after this list). You can use an NFS 4.0 or 4.1 client with EFS; the latter will provide better performance.
  • EC2 Instances – Applications that perform large amounts of I/O sometimes require a large amount of memory and/or compute power as well. Be sure that you have plenty of both; choose an appropriate instance size and type. If you are performing asynchronous reads and writes, the kernel will use additional memory for caching. As a side note, the performance characteristics of EFS file systems are not dependent on the use of EBS-optimized instances.
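
For example, here is roughly what a mount command with 1 MiB buffers and asynchronous writes looks like (a sketch; the file system ID is a placeholder and the options should be tuned to your workload):

$ sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,async \
    fs-12345678.efs.us-east-1.amazonaws.com:/ /efs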

Benchmarking of file systems is a blend of art and science. Make sure that you use mature, reputable tools, run them more than once, and make sure that you examine your results in light of the considerations listed above. You can also find some detailed data regarding expected performance on the Amazon Elastic File System page.

Available Now
EFS is available now in the US East (Northern Virginia), US West (Oregon), and Europe (Ireland) Regions and you can start using it today. Pricing is based on the amount of data that you store, sampled several times per day and charged by the gigabyte-month, pro-rated as usual, starting at $0.30 per GB per month in the US East (Northern Virginia) Region. There are no minimum fees and no setup costs (see the EFS Pricing page for more information). If you are eligible for the AWS Free Tier, you can use up to 5 GB of EFS storage per month at no charge.

Jeff;

Elastic Network Adapter – High Performance Network Interface for Amazon EC2

by Jeff Barr | in Amazon EC2

Many AWS customers create high-performance systems that run across multiple EC2 instances and make good use of all available network bandwidth. Over the years, we have been working to make EC2 an ever-better host for this use case. For example, the CC1 instances were the first to feature 10 Gbps networking. Later, the Enhanced Networking feature reduced latency, increased the packet rate, and reduced variability. Through the years, the goal has remained constant—ensure that the network is not the bottleneck for the vast majority of customer workloads.

We’ve been talking to lots of customers and watching technology trends closely. Along with a short-term goal of enabling higher throughput by providing more bandwidth, we established some longer-term goals for the next generation of EC2 networking. We wanted to be able to take advantage of the increased concurrency (more vCPUs) found in today’s processors, and we wanted to lay a solid foundation for driver support in order to allow our customers to take advantage of new developments as easily as possible.

I’m happy to be able to report that we are making great progress toward these goals with today’s launch of the new Elastic Network Adapter (ENA) to provide even better support for high performance networking. Available now for the new X1 instance type, ENA provides up to 20 Gbps of consistent, low-latency performance when used within a Placement Group, at no extra charge!

Per our longer-term goals, ENA will scale as network bandwidth grows and the vCPU count increases; this will allow you to take advantage of higher bandwidth options in the future without the need to install newer drivers or to make other changes to your configuration, as was required by earlier network interfaces.

ENA Advantages
We designed ENA to work well in conjunction with modern processors, such as those found on X1 instances. Because these processors feature a large number of virtual CPUs (128 in the case of X1), efficient use of shared resources such as the network adapter is important. While delivering high throughput and great packet per second (PPS) performance, ENA minimizes the load on the host processor in a number of ways, and also does a better job of distributing the packet processing workload across multiple vCPUs. Here are some of the features that enable this improved performance:

  • Checksum Generation – ENA handles IPv4 header checksum generation and TCP / UDP partial checksum generation in hardware.
  • Multi-Queue Device Interface – ENA makes use of multiple transmit and receive queues to reduce internal overhead and to increase scalability. The presence of multiple queues simplifies and accelerates the process of mapping incoming and outgoing packets to a particular vCPU.
  • Receive-Side Steering – ENA is able to direct incoming packets to the proper vCPU for processing. This reduces bottlenecks and increases cache efficacy.

All of these features are designed to keep as much of the workload off of the processor as possible and to create a short, efficient path between the network packets and the vCPU that is generating or processing them.

Using ENA
In order to make use of ENA, you need to use our new driver and tag the AMI as having ENA support.

The new driver is available in the latest Amazon Linux AMIs and will soon be available in the Windows AMIs. The Open Source Linux driver is available in source form on GitHub for use in your own AMIs. Also, a driver for the Intel® Data Plane Developer Kit (Intel® DPDK) is available for developers that are building network processing applications such as load balancers or virtual routers.

If you are creating your own AMI, you also need to set the enaSupport attribute when you register it. Here’s how you do that from the command line (see the register-image documentation for a full list of parameters):

$ aws ec2 register-image --ena-support ...

You can still use the AMI on instances that do not support ENA.
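
Here is a quick way to check whether ENA is in play (the image ID is a placeholder):

# Confirm that an AMI was registered with ENA support
$ aws ec2 describe-images --image-ids ami-12345678 --query "Images[].EnaSupport"

# On the instance itself, confirm that the ena driver is bound to the interface
$ ethtool -i eth0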

Going Forward
As noted earlier, ENA is available today for X1 instances. We are also planning to make it available for future EC2 instance types.

Jeff;

Now Open – AWS Asia Pacific (Mumbai) Region

by Jeff Barr | in Announcements

We are expanding the AWS footprint again, this time with a new region in Mumbai, India. AWS customers in the area can use the new Asia Pacific (Mumbai) Region to better serve end users in India.

New Region
The new Mumbai region has two Availability Zones, raising the global total to 35. It supports Amazon Elastic Compute Cloud (EC2) (C4, M4, T2, D2, I2, and R3 instances are available) and related services including Amazon Elastic Block Store (EBS), Amazon Virtual Private Cloud, Auto Scaling, and Elastic Load Balancing.

It also supports the following services:

There are now three edge locations (Mumbai, Chennai, and New Delhi) in India. The locations support Amazon Route 53, Amazon CloudFront, and S3 Transfer Acceleration. AWS Direct Connect support is available via our Direct Connect Partners (listed below).

This is our thirteenth region (see the AWS Global Infrastructure map for more information). As usual, you can see the list of regions in the region menu of the Console:

Customers
There are over 75,000 active AWS customers in India, representing a diverse base of industries. In the time leading up to today’s launch, we have provided some of these customers with access to the new region in preview form. Two of them (Ola Cabs and NDTV) were kind enough to share some of their experience and observations with us:

Ola Cabs’ mobile app leverages AWS to redefine point-to-point transportation in more than 100 cities across India. AWS allows Ola to innovate faster, delivering new features and services to their customers without compromising on the availability or the customer experience of their service. Ankit Bhati (CTO and Co-Founder) told us:

We are using technology to create mobility for a billion Indians, by giving them convenience and access to transportation of their choice. Technology is a key enabler, where we use AWS to drive supreme customer experience, and innovate faster on new features & services for our customers. This has helped us reach 100+ cities & 550K driver partners across India. We do petabyte scale analytics using various AWS big data services and deep learning techniques, allowing us to bring our driver-partners close to our customers when they need them. AWS allows us to make 30+ changes a day to our highly scalable micro-services based platform consisting of 100s of low latency APIs, serving millions of requests a day. We have tried the AWS India region. It is great and should help us further enhance the experience for our customers.


NDTV, India’s leading media house, is watched by millions of people across the world. NDTV has been using AWS since 2009 to run its video platform and all of its web properties. During the Indian general elections in May 2014, NDTV fielded an unprecedented amount of web traffic that scaled 26x, from 500 million hits per day to 13 billion hits on Election Day (regularly peaking at 400K hits per second), all running on AWS. According to Kawaljit Singh Bedi (CTO of NDTV Convergence):

NDTV is pleased to report very promising results in terms of reliability and stability of AWS’ infrastructure in India in our preview tests. Based on tests that our technical teams have run in India, we have determined that the network latency from the AWS India infrastructure Region is far superior compared to other alternatives. Our web and mobile traffic has jumped by over 30% in the last year, and as we expand to new territories like eCommerce and platform integration we are very excited about the new AWS India region launch. With the portfolio of services AWS will offer at launch, low latency, great reliability, and the ability to meet regulatory requirements within India, NDTV has decided to move these critical applications and IT infrastructure all-in to the AWS India region from our current set-up.

Here are some of our other customers in the region:

Tata Motors Limited, a leading Indian multinational automotive manufacturer, runs its telematics systems on AWS. Fleet owners use this solution to monitor all of the vehicles in their fleet in real time. AWS has helped Tata Motors become more agile and has increased their speed of experimentation and innovation.

redBus is India’s leading bus ticketing platform, selling tickets via web, mobile, and bus agents. They now cover over 67K routes in India with over 1,800 bus operators. redBus has scaled to sell more than 40 million bus tickets annually, up from just 2 million in 2010. At peak season, there are over 100 bus ticketing transactions every minute. The company also recently developed a new SaaS app on AWS that gives bus operators the option of handling their own ticketing and managing seat inventories. Using AWS, redBus has gone global, expanding to new locations such as Singapore and Peru.

Hotstar is India’s largest premium streaming platform, with more than 85K hours of drama and movies and coverage of every major global sporting event. Launched in February 2015, Hotstar quickly became one of the fastest-adopted new apps anywhere in the world. It has now been downloaded by more than 68M users, drawn by its highly evolved video streaming technology and attention to quality of experience across devices and platforms.

Macmillan India has provided publishing services to the education market in India for more than 120 years. Macmillan India moved its core enterprise applications — Business Intelligence (BI), Sales and Distribution, Materials Management, Financial Accounting and Controlling, Human Resources, and a customer relationship management (CRM) system — from an existing data center in Chennai to AWS. By moving to AWS, Macmillan India has boosted SAP system availability to almost 100 percent and reduced the time it takes to provision infrastructure from 6 weeks to 30 minutes.

Partners
We are pleased to be working with a broad selection of partners in India. Here’s a sampling:

  • AWS Premier Consulting Partners – Cognizant, BlazeClan Technologies Pvt. Limited, Minjar Cloud Solutions Pvt Ltd, and Wipro.
  • AWS Consulting Partners – Accenture, BluePi, Cloudcover, Frontier, HCL, Powerupcloud, TCS, and Wipro.
  • AWS Technology Partners – Freshdesk, Druva, Indusface, Leadsquared, Manthan, Mithi, Nucleus Software, Newgen, Ramco Systems, Sanovi, and Vinculum.
  • AWS Managed Service Providers – Progressive Infotech and Spruha Technologies.
  • AWS Direct Connect Partners – AirTel, Colt Technology Services, Global Cloud Xchange, GPX, Hutchison Global Communications, Sify, and Tata Communications.

Amazon Offices in India
We have opened six offices in India since 2011 – Delhi, Mumbai, Hyderabad, Bengaluru, Pune, and Chennai. These offices support our diverse customer base in India including enterprises, government agencies, academic institutions, small-to-mid-size companies, startups, and developers.

Support
The full range of AWS Support options (Basic, Developer, Business, and Enterprise) is also available for the Mumbai Region. All AWS support plans include an unlimited number of account and billing support cases, with no long-term contracts.

Compliance
Every AWS region is designed and built to meet rigorous compliance standards, including ISO 27001, ISO 9001, ISO 27017, ISO 27018, SOC 1, SOC 2, and PCI DSS Level 1 (to name a few). AWS implements an Information Security Management System (ISMS) that is independently assessed by qualified third parties. These assessments address a wide variety of requirements, which are communicated to customers by making certifications and audit reports available, either on our public-facing website or upon request.

To learn more, take a look at the AWS Cloud Compliance page and our Data Privacy FAQ.

Use it Now
This new region is now open for business and you can start using it today! You can find additional information about the new region, documentation on how to migrate, customer use cases, information on training and other events, and a list of AWS Partners in India on the AWS site.

We have set up a seller of record in India (known as AISPL); please see the AISPL customer agreement for details.

Jeff;


New – Cross-Account Copying of Encrypted EBS Snapshots

by Jeff Barr | in Amazon Elastic Block Store, Security

AWS already supports the use of encrypted Amazon Elastic Block Store (EBS) volumes and snapshots, with keys stored in and managed by AWS Key Management Service (KMS). It also supports sharing EBS snapshots with other AWS accounts so that they can be used to create new volumes. Today we are combining these features to give you the ability to copy encrypted EBS snapshots between accounts, with the flexibility to move between AWS regions as you do so.

This announcement builds on three important AWS best practices:

  1. Take regular backups of your EBS volumes.
  2. Use multiple AWS accounts, one per environment (dev, test, staging, and prod).
  3. Encrypt stored data (data at rest), including backups.

Encrypted EBS Volumes & Snapshots
As a review, you can create an encryption key using the IAM Console:

And you can create an encrypted EBS volume by specifying an encryption key (you must use a custom key if you want to copy a snapshot to another account):

Then you can create an encrypted snapshot from the volume:

As you can see, I have already enabled the longer volume and snapshot IDs for my AWS account (read They’re Here – Longer EBS and Storage Gateway Resource IDs Now Available for more information).

Cross-Account Copying
None of what I have shown you so far is new. Let’s move on to the new part! To create a copy of the encrypted EBS snapshot in another account you need to complete four simple steps:

  1. Share the custom key associated with the snapshot with the target account.
  2. Share the encrypted EBS snapshot with the target account.
  3. In the context of the target account, locate the shared snapshot and make a copy of it.
  4. Use the newly created copy to create a new volume.

You will need the target account number in order to perform the first two steps. Here’s how you share the custom key with the target account from within the IAM Console:

Then you share the encrypted EBS snapshot. Select it and click on Modify Permissions:

Enter the target account number again and click on Save:

Note that you cannot share the encrypted snapshots publicly.

Before going any further I should say a bit about permissions! Here’s what you need to know in order to set up your policies and/or roles:

Source Account – The IAM user or role in the source account needs to be able to call the ModifySnapshotAttribute function and to perform the DescribeKey and ReEncrypt operations on the key associated with the original snapshot.

Target Account – The IAM user or role in the target account needs to be able to perform the DescribeKey, CreateGrant, and Decrypt operations on the key associated with the original snapshot. The user or role must also be able to perform the CreateGrant, Encrypt, Decrypt, DescribeKey, and GenerateDataKeyWithoutPlaintext operations on the key associated with the call to CopySnapshot.

With that out of the way, let’s copy the snapshot…

Switch to the target account, visit the Snapshots tab, and click on Private Snapshots. Locate the shared snapshot via its Snapshot ID (the name is stored as a tag and is not copied), select it, and choose the Copy action:

Select an encryption key for the copy of the snapshot and create the copy (here I am copying my snapshot to the Asia Pacific (Tokyo) Region):

Using a new key for the copy provides an additional level of isolation between the two accounts. As part of the copy operation, the data will be re-encrypted using the new key.
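
Steps 2 through 4 can also be scripted. Here is a rough CLI sketch (the snapshot ID, account number, key alias, and regions are placeholders; sharing the key itself in step 1 is done through its key policy):

# Source account: share the encrypted snapshot with the target account
$ aws ec2 modify-snapshot-attribute --snapshot-id snap-1234567890abcdef0 \
    --attribute createVolumePermission --operation-type add --user-ids 111122223333

# Target account: copy the shared snapshot, re-encrypting it under a new key
$ aws ec2 copy-snapshot --region ap-northeast-1 --source-region us-east-1 \
    --source-snapshot-id snap-1234567890abcdef0 \
    --encrypted --kms-key-id alias/target-account-key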

Available Now
This feature is available in all AWS Regions where AWS Key Management Service (KMS) is available. It is designed for use with data & root volumes and works with all volume types, but cannot be used to share encrypted AMIs at this time. You can use the snapshot to create an encrypted boot volume by copying the snapshot and then registering it as a new image.

Jeff;


Guest Post – Zynga Gets in the Game with Amazon Aurora

by Jeff Barr | in Amazon Aurora, Guest Post

Long-time AWS customer Zynga is making great use of Amazon Aurora and other AWS database services. In today’s guest post you can learn about how they use Amazon Aurora to accommodate spikes in their workload. This post was written by Chris Broglie of Zynga.

Jeff;

Zynga has long operated various database technologies, ranging from simple in-memory caches like Memcached, to persistent NoSQL stores like Redis and Membase, to traditional relational databases like MySQL. We loved the features these technologies offered, but running them at scale required lots of manual time to recover from instance failure and to script and monitor mundane but critical jobs like backup and recovery. As we migrated from our own private cloud to AWS in 2015, one of the main objectives was to reduce the operational burden on our engineers by embracing the many managed services AWS offered.

We’re now using Amazon DynamoDB and Amazon ElastiCache (Memcached and Redis) widely in place of their self-managed equivalents. Now, engineers are able to focus on application code instead of managing database tiers, and we’ve improved our recovery times from instance failure (spoiler alert: machines are better at this than humans). But the one component missing here was MySQL. We loved the automation Amazon RDS for MySQL offers, but it relies on general-purpose Amazon Elastic Block Store (EBS) volumes for storage. Being able to dynamically allocate durable storage is great, but you trade off having to send traffic over the network, and traditional databases suffer from this additional latency. Our testing showed that the performance of RDS for MySQL just couldn’t compare to what we could obtain with i2 instances and their local (though ephemeral) SSDs. Provisioned IOPS narrow the gap, but they cost more. For these reasons, we used self-managed i2 instances wherever we had really strict performance requirements.

However, for one new service we were developing during our migration, we decided to take a measured bet on Amazon Aurora. Aurora is a MySQL-compatible relational database offered through Amazon RDS. Aurora was only in preview when we started writing the service, but it was expected to become generally available before production, and we knew we could always fall back to running MySQL on our own i2 instances. We were naturally apprehensive of any new technology, but we had to see for ourselves if Aurora could deliver on its claims of exceeding the performance of MySQL on local SSDs, while still using network storage and providing all the automation of a managed service like RDS. And after 8 months of production, Aurora has been nothing short of impressive. While our workload is fairly modest – the busiest instance is an r3.2xl handling ~9k selects/second during peak for a 150 GB data set – we love that so far Aurora has delivered the necessary performance without any of the operational overhead of running MySQL.

An example of what this kind of automation has enabled for us was an ops incident where a change in traffic patterns resulted in a huge load spike on one of our Aurora instances. Thankfully, the instance was able to keep serving traffic despite 100% CPU usage, but we needed even more throughput. With Aurora we were able to scale up the reader to an instance that was 4x larger, fail over to it, and then watch it handle 4x the traffic, all with just a few clicks in the RDS console. Days later, after we released a patch to prevent the incident from recurring, we were able to scale back down to smaller instances using the same procedure. Before Aurora we would have had to either get a DBA online to manually provision, replicate, and fail over to a larger instance, or try to ship a code hotfix to reduce the load on the database. Manual changes are always slower and riskier, so Aurora’s automation is a great addition to our ops toolbox, and in this case it led to a resolution measured in minutes rather than hours.

Most of the automation we’re enjoying has long been standard for RDS, but using Aurora has delivered the automation of RDS along with the performance of self-managed i2 instances. Aurora is now our first choice for new services using relational databases.

Chris Broglie, Architect (Zynga)


New – AWS Marketplace for the U.S. Intelligence Community

by Jeff Barr | in AWS Marketplace

AWS built and now runs a private cloud for the United States Intelligence Community.

In order to better meet the needs of this unique community, we have set up an AWS Marketplace designed specifically for them. Much like the existing AWS Marketplace, this new marketplace makes it easy to discover, buy, and deploy software packages and applications, with a focus on products in the Big Data, Analytics, Cloud Transition Support, DevOps, Geospatial, Information Assurance, and Security categories.

Selling directly to the Intelligence Community can be a burdensome process, one that limits the community’s options when purchasing software. Our goal is to give the Intelligence Community as broad a selection of software as possible, so we are working to help our AWS Marketplace sellers through the onboarding process so that the Intelligence Community can benefit from use of their software.

If you are an AWS Marketplace seller and have products in one of the categories above, listing your product in the AWS Marketplace for the Intelligence Community offers some important benefits to your ISV or Authorized Reseller business:

Access – You get to reach a new market that may not have been visible or accessible to you.

Efficiency – You get to bypass the contract negotiation that is otherwise a prerequisite to selling to the US government. With no contract negotiation to contend with, you’ll have less business overhead.

To the greatest extent possible, we hope to make the products in the AWS Marketplace also available in the AWS Marketplace for the U.S. Intelligence Community. In order to get there, we are going to need your help!

Come on Board
Completing the steps necessary to make products available in the AWS Marketplace for the U.S. Intelligence Community can be challenging due to security and implementation requirements. Fortunately, the AWS team is here to help; here are the important steps:

  1. Have your company and your products listed commercially in AWS Marketplace if they are not already there.
  2. File for FOCI (Foreign Ownership, Control and Influence) approval and sign the AWS Marketplace IC Marketplace Publisher Addendum.
  3. Ensure your product will work in the Commercial Cloud Services (C2S) environment. This includes ensuring that your software does not make any calls out to the public internet.
  4. Work with AWS to publish your software on the AWS Marketplace for the U.S. Intelligence Community. You will be able to take advantage of your existing knowledge of AWS and your existing packaging tools and processes that you use to prepare each release of your product for use in AWS Marketplace.

Again, we are here to help! After completing step 1, email us (icmp@amazon.com). We’ll help with the paperwork and the security and do our best to get you going as quickly as possible. To learn more about this process, read my colleague Kate Miller’s new post, AWS Marketplace for the Intelligence Community, on the AWS Partner Network Blog.

Jeff;

New AWS Competency – AWS Migration

by Jeff Barr | in AWS Partner Network

More and more, I hear from customers who want to migrate large-scale workloads to AWS, and seek advice regarding their cloud migration strategy. We provide customers with a number of cloud migration tools and services, including the AWS Database Migration Service, and resources, such as the AWS Professional Services Cloud Adoption Framework. Further, we have a strong and mature ecosystem of AWS Partner Network (APN) Consulting and Technology Partners who’ve demonstrated expertise in helping customers like you successfully migrate to AWS.

In an effort to make it as easy as we can for you to identify APN Partners who’ve demonstrated technical proficiency and proven customer success in migration, I’m pleased to announce the launch of the AWS Migration Competency.

New Migration Competency – Migration Partner Solutions
Migration Competency Partners provide solutions or have deep experience helping businesses move successfully to AWS, through all phases of complex migration projects: discovery, planning, migration, and operations.

The AWS Partner Competency Program has validated that the partners below have demonstrated that they can help enterprise customers migrate applications and legacy infrastructure to AWS.

Categories and Launch Partners

Migration Delivery Partners – Help customers through every stage of migration, accelerating results by providing personnel, tools, and education in the form of professional services. These partners either are, or have a relationship with, an AWS-audited Managed Service Provider to help customers with ongoing support of AWS workloads. Here are the launch partners:

Migration Consulting Partners – Provide expertise and training to help enterprises quickly develop specific capabilities or achieve specific outcomes. They provide consulting services to enable adoption of DevOps practices, to modernize applications, and implement solutions. Here are our launch partners:

Migration Technology for Discovery & Planning – Discover IT assets across your application portfolio, identify dependencies and requirements, and build your comprehensive migration plan with these technology solutions. Here are our launch partners:

Migration Technology for Workload Mobility – Execute migrations to AWS by capturing your host server, configuration, storage, and network states, then provision and configure your AWS target resources. Here are our launch partners:

Migration Technology for Application Profiling – Gain valuable insights into your applications by capturing and analyzing performance data, usage, and monitoring dependencies before and after migration. Here are our launch partners:

Launch Partners in Action
Do you want to hear from a few of our launch Partners? Watch the videos below to hear Cloud Technology Partners (CTP), REAN Cloud, and Slalom discuss the evolution of enterprise cloud migrations and the value of the AWS Migration Competency for customers:

Cloud Technology Partners – The Evolution of Cloud Migration



REAN Cloud – The Role of DevOps in Cloud Migrations



Slalom – The Value of the AWS Migration Competency for Customers

Jeff;


Semi-Autonomous Driving Using EC2 Spot Instances at Mapbox

by Jeff Barr | in Customer Success, EC2 Spot Instances, Guest Post

Will White of Mapbox shared the following guest post with me. In the post, Will describes how they use EC2 Spot Instances to economically process the billions of data points that they collect each day.

I do have one note to add to Will’s excellent post. We know that many AWS customers would like to create Spot Fleets that automatically scale up and down in response to changes in demand. This is on our near-term roadmap and I’ll have more to say about it before too long.

Jeff;

The largest automotive tech conference, TU-Automotive, kicked off in Detroit this morning with almost every conversation focused on strategies for processing the firehose of data coming off connected cars. The volume of data is staggering – last week alone we collected and processed over 100 million miles of sensor data into our maps.

Collecting Street Data
Rather than driving a fleet of cars down every street to make a map, we turn phones, cars, and other devices into a network of real-time sensors. EC2 Spot Instances process the billions of points we collect each day and let us see every street, analyze the speed of traffic, and connect the entire road network. This anonymized and aggregated data protects user privacy while allowing us to quickly detect road changes. The result is Mapbox Drive, the map built specifically for semi-autonomous driving, ride sharing, and connected cars.

Bidding for Spot Capacity
We use the Spot market to bid on spare EC2 instances, letting us scale our data collection and processing at 1/10th the cost. When you launch an EC2 Spot instance, you set a bid price for how much you are willing to pay for the instance. The market price (the price you actually pay) constantly changes based on supply and demand in the market. If the market price ever exceeds your bid price, your EC2 Spot instance is terminated. Since Spot instances can spontaneously terminate, they have become a popular cost-saving tool for non-critical environments like staging, QA, and R&D – services that don’t require high availability. However, if you can architect your application to handle this kind of sudden termination, it becomes possible to run extremely resource-intensive services on Spot and save a massive amount of money while maintaining high availability.
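
To make that concrete, here is roughly what checking the market and placing a bid looks like from the CLI (the price and launch specification are placeholders):

# Review recent market prices for an instance type
$ aws ec2 describe-spot-price-history --instance-types c4.xlarge \
    --product-descriptions "Linux/UNIX" --max-items 5

# Bid for one Spot instance at a maximum price of $0.10 per hour
$ aws ec2 request-spot-instances --spot-price "0.10" --instance-count 1 \
    --launch-specification file://spec.json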

The infrastructure that processes the 100 million miles of sensor data we collect each week is critical and must always be online, yet it runs on EC2 Spot Instances. We do it by running two Auto Scaling groups, a Spot group and an On-Demand group, that share a single Elastic Load Balancer. When Spot prices spike and instances get terminated, we simply fall back by automatically launching On-Demand instances to pick up the slack.

Handling Termination Notices
We use termination notices, which give us a two-minute warning before any EC2 Spot instance is terminated. When an instance receives a termination notice, it immediately makes a call to the Auto Scaling API to scale up the On-Demand Auto Scaling group, seamlessly adding stable capacity behind the Elastic Load Balancer. We have to pay On-Demand prices for the replacement instances, but only for as long as the Spot interruption lasts. When the Spot market price falls back below our bid price, our Spot Auto Scaling group automatically launches new Spot instances. As the Spot capacity scales back up, an aggressive Auto Scaling policy scales down the On-Demand group, terminating the more expensive instances.
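
A minimal version of that polling loop might look like this (a sketch; the group name and capacity are placeholders, and a real implementation would calculate the replacement capacity instead of hard-coding it):

# The metadata endpoint returns a timestamp about two minutes before termination
while true; do
  if curl -sf http://169.254.169.254/latest/meta-data/spot/termination-time; then
    # Scale up the On-Demand group to absorb the lost Spot capacity
    aws autoscaling set-desired-capacity \
      --auto-scaling-group-name on-demand-group --desired-capacity 10
    break
  fi
  sleep 5
done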

Building our data processing pipeline on Spot worked so well that we have now moved nearly every Mapbox service over to Spot too. As the traffic generated by over 170 million unique users of apps like Foursquare, MapQuest, and Weather.com grows each month, our cost of goods sold (COGS) continues to fall. Spot interruptions are relatively rare for the instance types we use, so the fallback is only triggered 1-2 times per month. This means we are running on discounted Spot instances more than 98% of the time. On our maps service alone, this has resulted in a 90% savings on our EC2 costs each month.

Going Further with Spot
To further optimize our COGS, we’re working on a “waterfall” approach to fallback, pushing traffic to other configurations of Spot Instances first and only using On-Demand as an absolute last resort. For example, an application that normally runs on c4.xlarge instances is often compatible with other instance sizes in the same family (c4.2xlarge, c4.4xlarge, etc.) and instance types in other families (m4.2xlarge, m4.4xlarge, etc.). When our Spot instances get terminated, we’ll bid on the next cheapest option on the Spot market. This will result in more Spot interruptions, but our COGS will decrease further because we’ll fall back to other Spot instances instead of paying full price for On-Demand instances. This maximizes our COGS savings while maintaining high availability for our enterprise customers.

It’s worth noting that similar fallback functionality is built into EC2 Spot Fleet, but we prefer Auto Scaling groups due to a few limitations with Spot Fleet (for example, there’s no support for Auto Scaling without implementing it yourself) and because Auto Scaling groups give us the most flexibility.

Over the last 12 months, data collection and processing has increased our consumption of EC2 compute hours by 1044%, but our COGS actually decreased. We used to see our costs increase linearly with consumption, but now see these hockey stick increases in consumption while costs stay basically flat for the same period.

If you’re building a resource-hungry application that requires high availability and the costs for On-Demand EC2 instances make it unsustainable to run, take a close look at EC2 Spot Instances. Combined with the right architecture and some creative orchestration, EC2 Spot Instances will allow you to run your application with extremely low COGS.

Will White, Development Engineering, Mapbox

Amazon EMR 4.7.0 – Apache Tez & Phoenix, Updates to Existing Apps

by Jeff Barr | in Amazon EMR

Amazon EMR allows you to quickly and cost-effectively process vast amounts of data. Since its launch in 2009, we have added many new features and support for an ever-increasing roster of applications from the Hadoop ecosystem. Here are a few of the additions that we have made this year:

Today we are pushing forward once again, with new support for Apache Tez (dataflow-driven data processing task orchestration) and Apache Phoenix (fast SQL for OLTP and operational analytics), along with updates to several of the existing apps. In order to make use of these new and/or updated applications, you will need to launch a cluster that runs release 4.7.0 of Amazon EMR.

New – Apache Tez (0.8.3)
Tez runs on top of Apache Hadoop YARN and provides you with a set of dataflow definition APIs that allow you to define a DAG (Directed Acyclic Graph) of data processing tasks. Tez can be faster than Hadoop MapReduce, and can be used with both Hive and Pig. To learn more, read the EMR Release Guide. The Tez UI includes a graphical view of the DAG:

The UI also displays detailed information about each DAG:

New – Apache Phoenix (4.7.0)
Phoenix uses HBase (another member of the Hadoop ecosystem) as its datastore. You can connect to Phoenix using a JDBC driver included on the cluster or from other applications that are running on or off of the cluster. Either way, you get access to fast, low-latency SQL with full ACID transaction capabilities. Your SQL queries are compiled into a series of HBase scans, the scans are run in parallel, and the results are aggregated to produce the result set. To learn more, read the Phoenix Quick Start Guide or review the Apache Phoenix Overview Presentation.

Updated Applications
We have also updated the following applications:

  • HBase 1.2.1 – HBase provides low-latency, random access to massive datasets. The new version includes some bug fixes.
  • Mahout 0.12.0 – Mahout provides scalable machine learning and data mining. The new version includes a large set of math and statistics features.
  • Presto 0.147 – Presto is a distributed SQL query engine designed for large data sets. The new version adds features and fixes bugs.

Amazon Redshift JDBC Driver
You can use the new Redshift JDBC driver to allow applications running on your EMR clusters to access and update data stored in your Redshift clusters. Two versions of the driver are included on your cluster:

  • JDBC 4.0-compatible – /usr/share/aws/redshift/jdbc/RedshiftJDBC4.jar
  • JDBC 4.1-compatible – /usr/share/aws/redshift/jdbc/RedshiftJDBC41.jar

To start using the new and updated applications, simply launch a new EMR cluster and select release 4.7.0 along with the desired set of applications.
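
For example, a cluster with the two new applications might be launched like this (the instance type and count are placeholders; Phoenix needs HBase on the cluster):

$ aws emr create-cluster --name "Tez-Phoenix" --release-label emr-4.7.0 \
    --applications Name=Hadoop Name=Tez Name=HBase Name=Phoenix \
    --instance-type m3.xlarge --instance-count 3 --use-default-roles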

Jeff;