Category: Amazon EC2

New – Scheduled Reserved Instances

Many AWS customers run some of their mission-critical applications on a periodic (daily, weekly, or monthly), part-time basis. Here are some of the kinds of things that they like to do:

  • A bank or mutual fund performs Value at Risk calculations every weekday afternoon.
  • A phone company does a multi-day bill calculation run at the start of each month.
  • A trucking company optimizes routes and shipments on Monday, Wednesday, and Friday mornings.
  • An animation studio performs a detailed, compute-intensive 3D rendering every night.

Our new Scheduled Reserved Instances are a great fit for use cases of this type (and many more). They allow you to reserve capacity on a recurring basis with a daily, weekly, or monthly schedule over the course of a one-year term. After you complete your purchase, the instances are available to launch during the time windows that you specified.

Purchasing Scheduled Instances
Let’s step through the purchase process using the EC2 Console. I start by selecting Scheduled Instances on the left:

Then I click on the Purchase Scheduled Instances button and find a schedule that suits my needs.

Let’s say that I am based in Seattle and want to set up a schedule for Monday, Wednesday, and Friday mornings. I convert my time (6 AM) to UTC, choose my duration (8 hours of processing for my particular use case), and set my recurrence. Then I specify a c3.4xlarge instance (I can select one or more types using the menu):
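Seattle observes Pacific Standard Time (UTC-8) in winter, so 6 AM local is 14:00 UTC. If you want to double-check a conversion like this, GNU date can do it locally:

```shell
# Convert 6:00 AM Seattle time on a winter Monday to UTC (GNU date syntax).
date -u -d 'TZ="America/Los_Angeles" 2016-01-04 06:00' '+%H:%M UTC'
```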

I can see the local starting time while I am setting up the schedule:

When I click on Find schedules, I can see what’s available at my desired time:

As you can see, the results include instances in several different Availability Zones because I chose Any in the previous step. Leaving the Availability Zone and/or the instance type unspecified will give me more options.

I can add the desired instance(s) to my cart, adjusting the quantity if necessary. I can see my choices in my cart:

Once I have what I need, I click on Review and purchase to proceed, verify my intent, and click on Purchase:


I can then see all of my Scheduled Reserved instances in the console:

Launching Scheduled Instances
Each Scheduled Reserved instance becomes active according to the schedule that you chose when you made the purchase. You can then launch the instance by selecting it in the Console and clicking on Launch Scheduled Instances:

Then I configure the launch as usual and click on Review:

Scheduled Reserved instances can also be launched via the AWS Command Line Interface (CLI), the AWS Tools for Windows PowerShell, and the new RunScheduledInstances function. We are also working on support for Auto Scaling, AWS Lambda, and AWS CloudFormation.
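The console flow above can be approximated from the CLI as well. This is a sketch with made-up identifiers (the purchase token and scheduled instance ID come from the responses to the earlier calls); check `aws ec2 help` for the full option set:

```shell
# 1. Find available schedules (weekly on Mon/Wed/Fri; days are 1-7, with 1 = Sunday):
aws ec2 describe-scheduled-instance-availability \
    --recurrence "Frequency=Weekly,Interval=1,OccurrenceDays=[2,4,6]" \
    --first-slot-start-time-range "EarliestTime=2016-01-31T14:00:00Z,LatestTime=2016-02-01T00:00:00Z" \
    --filters Name=instance-type,Values=c3.4xlarge

# 2. Purchase, using a PurchaseToken from the previous response (truncated placeholder):
aws ec2 purchase-scheduled-instances \
    --purchase-requests "PurchaseToken=eyJ2IjoiMSIs...,InstanceCount=1"

# 3. Launch during an active window:
aws ec2 run-scheduled-instances \
    --scheduled-instance-id sci-1234-1234-1234-1234-123456789012 \
    --instance-count 1 \
    --launch-specification file://launch-spec.json
```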

Things to Know
With this launch, we now have two types of Reserved Instances. The original model (now called Standard Reserved Instances) allows you to reserve EC2 compute capacity for a one- or three-year term and use it at any time. The new Scheduled Reserved Instance model allows you to reserve instances for predefined blocks of time on a recurring basis for a one-year term, with prices that are generally 5 to 10% lower than the equivalent On-Demand rates.

This feature is available today in the US East (Northern Virginia), US West (Oregon), and EU (Ireland) regions, with support for the C3, C4, M4, and R3 instance types.


Now Open – AWS Asia Pacific (Seoul) Region

We are expanding the AWS footprint once again, this time with a new region in Seoul, South Korea. AWS customers in the area can use the new Asia Pacific (Seoul) region for fast, low-latency access to the suite of AWS infrastructure services.

New Region
The new Seoul region has two Availability Zones (raising the global total to 32). It supports Amazon EC2 (T2, M4, C4, I2, D2, and R3 instances are available) and related services including Amazon Elastic Block Store (EBS), Amazon Virtual Private Cloud, Auto Scaling, and Elastic Load Balancing.

It also supports the following services:

There are two edge locations in Seoul for Amazon Route 53 and Amazon CloudFront. AWS Direct Connect support is available via KINX.

This is our twelfth region (see the AWS Global Infrastructure map for more information). As usual, you can see the full list in the region menu of the AWS Management Console:

There is already a very broad base of AWS customers in Korea. Here are a couple of examples:

Samsung used AWS to build the Samsung Printing App Center. This complex app can deploy mobile printing, scanning, and copying applications to a global customer base in real time. They chose AWS in order to be cost-effective, agile, and scalable.

Nexon is Korea’s premier gaming company, providing 150 games in 150 countries. AWS allows them to address a global customer base and to experiment with different games without having to invest in local infrastructure. Their newest MMORPG, HIT, recently achieved the number one sales rank in the Korean mobile gaming industry in record time, running 100% on AWS.

Mirae Asset Global Investments Group migrated their web properties from on-premises data centers to AWS. This allowed them to stay competitive while reducing their management costs by 50%. With the launch of the new region, they will move additional sensitive, mission-critical workloads to AWS.

Eastar Jet was the first Korean airline company to migrate workloads to the public cloud. As one of the fastest-growing low-cost carriers (4 domestic and 6 international routes), they needed to reduce costs, increase availability, and ensure reliability as the total passenger count grew to over 14 million. They plan to move additional workloads to the new region.

The Beatpacking Company runs a popular music streaming app, with traffic that sometimes surges to 300% of the usual level. Since launching on AWS in March of 2014, they have grown to over 6 million users. Despite this growth, they reduced their AWS cost per user by 97% in the past year.

We are pleased to be working with a very wide variety of partners in Korea. Here is a sampling:

Offices and Support
We opened an AWS office in Seoul in 2012. This office supports enterprises, government agencies, academic institutions, small-to-mid size companies, startups, and developers. The full range of AWS Support options is also available.

Every AWS region is built and designed to meet rigorous compliance standards including ISO 27001, ISO 9001, ISO 27017, ISO 27018, SOC 1, SOC 2, and PCI DSS Level 1 (to name a few); see the AWS Compliance page for more info.

AWS implements an Information Security Management System (ISMS) that is independently assessed by qualified third parties. These assessments address a wide variety of requirements which are communicated by making certifications and audit reports available, either on our public-facing website or upon request.

As customer trust is our top priority, AWS adopts global privacy and data protection best practices. Our most recent example of this commitment is our validation by an independent third party attesting that we align with ISO 27018 – the first international code of practice to focus on protection of personal data in the cloud. This demonstrates to customers that AWS has a system of controls in place specifically to address the privacy protection of their content.

For more information on how we handle data privacy, take a look at our Data Privacy FAQ.

Use it Now
This new region is open for business now and you can start using it today! If you are able to read Korean and want to know more about this region, please visit the new Seoul Region microsite. You’ll find additional information about the new region, documentation on how to migrate, customer use cases, information on training and other events, and a list of AWS Partners in Korea.



AWS Cost Explorer Update – Access to EC2 Usage Data

The AWS Cost Explorer (read The New Cost Explorer for AWS to learn more) is a set of tools that help you to track and manage your AWS costs. Last year we added saved reports, budgets & forecasts, and additional filtering & grouping dimensions.

Today we are adding EC2 usage data to Cost Explorer, along with additional dimensions for filtering and grouping:

  • The EC2 cost data is now broken down into three elements: EC2 instances (EC2-Instances), Elastic Load Balancing (ELB), and Elastic Block Store (EBS).
  • You can now filter, group, and view costs on additional dimensions, including Instance Type and Region.

Here’s a screen shot of the new usage data and dimensions:

The new features are available now and you can start using them today. To learn more, read about Analyzing Your Costs with Cost Explorer.



Happy New Year – EC2 Price Reduction (C4, M4, and R3 Instances)

I am happy to be able to announce that we are making yet another EC2 price reduction!

We are reducing the On-Demand, Reserved instance, and Dedicated host prices for C4 and M4 instances running Linux by 5% in the US East (Northern Virginia), US West (Northern California), US West (Oregon), EU (Ireland), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Singapore), and Asia Pacific (Sydney) regions.

We are also reducing the On-Demand, Reserved instance, and Dedicated host prices for R3 instances running Linux by 5% in the US East (Northern Virginia), US West (Northern California), US West (Oregon), EU (Ireland), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), and South America (São Paulo) regions.

Finally, we are reducing the On-Demand and Reserved instance prices for R3 instances running Linux by 5% in the AWS GovCloud (US) regions.

Smaller reductions apply to the same instance types that run SLES and RHEL in the regions mentioned.

Changes to the On-Demand and Dedicated host pricing are retroactive to the beginning of the month (January 1, 2016); the new Reserved instance pricing is in effect today. During the month, your billing estimates may not reflect the reduced prices. They will be reflected in the statement at the end of the month.
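The arithmetic is straightforward. As an illustration (using a hypothetical $0.126/hour On-Demand rate, not an official price from the table), a 5% cut works out like this:

```shell
# Apply a 5% reduction to a hypothetical $0.126/hour On-Demand rate.
awk 'BEGIN { old = 0.126; printf "%.4f\n", old * 0.95 }'
```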

The new AWS Price List API will be updated later in the month.

If you are keeping score, this is our 51st price reduction!

— Jeff;

EC2 Container Registry – Now Generally Available

My colleague Andrew Thomas wrote the guest post below to introduce you to the new EC2 Container Registry!

— Jeff;

I am happy to announce that Amazon EC2 Container Registry (ECR) is now generally available!

Amazon ECR is a fully-managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images. We pre-announced the service at AWS re:Invent and have been receiving a lot of interest and enthusiasm from developers ever since.

We built Amazon ECR because many of you told us that running your own private Docker image registry presented many challenges like managing the infrastructure and handling large scale deployments that involve pulling hundreds of images at once. Self-hosted solutions, you said, are especially hard when deploying container images to clusters that span two or more AWS regions. Additionally, you told us that you needed fine-grained access control to repositories/images without having to manage certificates or credentials.

Amazon ECR was designed to meet all of these needs and more. You do not need to install, operate, or scale your own container registry infrastructure. Amazon ECR hosts your images in a highly available and scalable architecture, allowing you to reliably deploy containers for your applications. Amazon ECR is also highly secure. Your images are transferred to the registry over HTTPS and automatically encrypted at rest in S3. You can configure policies to manage permissions and control access to your images using AWS Identity and Access Management (IAM) users and roles without having to manage credentials directly on your EC2 instances. This enables you to share images with specific users or even AWS accounts.

Amazon EC2 Container Registry also integrates with Amazon ECS and the Docker CLI, allowing you to simplify your development and production workflows. You can easily push your container images to Amazon ECR using the Docker CLI from your development machine, and Amazon ECS can pull them directly for production deployments.

Let’s take a look at how easy it is to store, manage, and deploy Docker containers with Amazon ECR and Amazon ECS.

Amazon ECR Console
The Amazon ECR Console simplifies the process of managing images and setting permissions on repositories. To access the console, simply navigate to the “Repositories” section in the Amazon ECS console. In this example I will push a simple PHP container image to Amazon ECR, configure permissions, and deploy the image to an Amazon ECS cluster.

After navigating to the Amazon ECR Console and selecting “Get Started”, I am presented with a simple wizard to create and configure my repository.

After entering the repository name, I see the repository endpoint URL that I will use to access Amazon ECR. By default I have access to this repository, so I don’t have to worry about permissions now and can set them later in the ECR console.

When I click Next step, I see the commands I need to run in my terminal to build my Docker image and push it to the repository I just created. I am using the Dockerfile from the ECS Docker basics tutorial. The commands that appear in the console require that I have the AWS Command Line Interface (CLI) and the Docker CLI installed on my development machine (the Amazon Linux AMI includes the AWS CLI; on other systems you may need to install it manually). Next, I copy and run each command to log in, tag the image with the ECR URI, and push the image to my repository.
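For reference, the console-generated commands follow this general shape (the account ID, region, and repository name here are placeholders, not the ones from my walkthrough):

```shell
# Authenticate the Docker CLI against ECR; get-login emits a docker login command.
$(aws ecr get-login --region us-east-1)

# Build the image, tag it with the repository URI, and push it.
docker build -t my-php-app .
docker tag my-php-app:latest 012345678910.dkr.ecr.us-east-1.amazonaws.com/my-php-app:latest
docker push 012345678910.dkr.ecr.us-east-1.amazonaws.com/my-php-app:latest
```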

After completing these steps, I click Done to navigate to the repository where I can manage my images.

Setting Permissions
Amazon ECR uses AWS Identity and Access Management to control and monitor who and what (e.g., EC2 instances) can access your container images. We built a permissions tool in the Amazon ECR Console to make it easier to create resource-based policies for your repositories.

To use the tool I click on the Permissions tab in the repository and select Add. I now see that the fields in the form correspond to an IAM statement within a policy document. After adding the statement ID, I select whether this policy should explicitly deny or allow access. Next I can set who this statement should apply to by either entering another AWS account number or selecting users and roles in the entities table.

After selecting the desired entities, I can then configure the actions that should apply to the statement. For convenience, I can use the toggles on the left to easily select the actions required for pull, push/pull, and administrative capabilities.
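Behind the scenes, the tool assembles a resource-based repository policy. A pull-only, cross-account statement might look roughly like this (the account number and Sid are made up for illustration):

```json
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowCrossAccountPull",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::210987654321:root" },
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability"
      ]
    }
  ]
}
```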

Integration With Amazon ECS
Once I’ve created the repository, pushed the image, and set permissions I am now ready to deploy the image to ECS.

Navigating to the Task Definitions section of the ECS console, I create a new Task Definition and specify the Amazon ECR repository in the Image field. Once I’ve configured the Task Definition, I can go to the Clusters section of the console and create a new service for my Task Definition. After creating the service, the ECS Agent will automatically pull down the image from ECR and start running it on an ECS cluster.
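In a task definition, the ECR repository URI simply takes the place of a Docker Hub image name. A minimal sketch (the names, sizes, and account ID are illustrative, not prescriptive):

```json
{
  "family": "my-php-app",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "012345678910.dkr.ecr.us-east-1.amazonaws.com/my-php-app:latest",
      "cpu": 128,
      "memory": 128,
      "essential": true,
      "portMappings": [{ "containerPort": 80, "hostPort": 80 }]
    }
  ]
}
```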

Updated First-Run
We have also updated our Amazon ECS Getting Started Wizard to include the ability to push an image to Amazon ECR and deploy that image to ECS:

Partner Support for ECS
At re:Invent we announced partnerships with a number of CI/CD providers to help automate deploying containers on ECS. Today we are excited to announce that our partners have added support for Amazon ECR, making it easy for developers to create and orchestrate a full, end-to-end container pipeline to automatically build, store, and deploy images on AWS. To get started, check out the solutions from our launch partners: Shippable, Codeship, Solano Labs, CloudBees, and CircleCI.

We are also excited to announce a partnership with Twistlock to provide vulnerability scanning of images stored within ECR. This makes it even easier for developers to evaluate potential security threats before pushing to Amazon ECR and allows developers to monitor their containers running in production. See the Container Partners page for more information about our partnerships.

Launch Region
Effective today, Amazon ECR is available in US East (Northern Virginia) with more regions on the way soon!

With Amazon ECR you only pay for the storage used by your images and data transfer from Amazon ECR to the internet or other regions. See the ECR Pricing page for more details.

Get Started Today
Check out our Getting Started with EC2 Container Registry page to start using Amazon ECR today!

Andrew Thomas, Senior Product Manager

New – Managed NAT (Network Address Translation) Gateway for AWS

You can use Amazon Virtual Private Cloud to create a logically isolated section of the AWS Cloud. Within the VPC, you can define your desired IP address range, create subnets, configure route tables, and so forth. You can also use a virtual private gateway to connect the VPC to your existing on-premises network using a hardware Virtual Private Network (VPN) connection.

An interesting network challenge arises when EC2 instances in a private VPC subnet need to connect to the Internet. Because the subnet is private, the IP addresses assigned to the instances cannot be used in public. Instead, it is necessary to use Network Address Translation (NAT) to map the private IP addresses to a public address on the way out, and then map the public IP address to the private address on the return trip.

New Managed NAT Gateway
Performing this translation at scale can be challenging. In order to simplify the task (and, as usual, to let you spend more time on your application and on your business), we are launching a new Managed NAT Gateway for AWS!

Instead of configuring, running, monitoring, and scaling a cluster of EC2 instances (you’d need at least 2 in order to ensure high availability), you can now create and configure a gateway with a couple of clicks.

The gateway has built-in redundancy for high availability. Each gateway that you create can handle up to 10 Gbps of bursty TCP, UDP, and ICMP traffic, and is managed by Amazon. You control the public IP address by assigning an Elastic IP Address when you create the gateway.

Creating a Managed NAT Gateway
Let’s create a Managed NAT Gateway! Open up the VPC Console, and take a peek at the navigation area on the left. Locate and click on NAT Gateways:

Then click on Create NAT Gateway and choose one of your subnets:

Choose one of your existing Elastic IP addresses, or create a new one:

Then click on Create a NAT Gateway, and observe the confirmation:

As you can see from the confirmation, you will need to edit your VPC’s route tables to send traffic destined for the Internet toward the gateway. The gateway’s internal (private) IP address will be chosen automatically, and will be on the subnet associated with the gateway. Here’s a sample route table:
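The same setup can be scripted. Here is a sketch with placeholder resource IDs (the NAT gateway ID comes back in the response to the create call):

```shell
# Create the gateway in a public subnet, using an existing Elastic IP allocation.
aws ec2 create-nat-gateway \
    --subnet-id subnet-1a2b3c4d \
    --allocation-id eipalloc-0a1b2c3d

# Point Internet-bound traffic from the private subnet's route table at the gateway.
aws ec2 create-route \
    --route-table-id rtb-1a2b3c4d \
    --destination-cidr-block 0.0.0.0/0 \
    --nat-gateway-id nat-0123456789abcdef0
```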

And that’s all you need to do. You don’t need to size, scale, or manage the gateway.

You can use VPC Flow Logs to capture the traffic flowing through your gateway, and then use the information in the logs to create CloudWatch metrics based on packets, bytes, and protocols. You can use the following filter pattern as a starting point (be sure to enter actual values for ENI_ID and NGW_IP):

[version, accountid, interfaceid=ENI_ID, srcaddr, dstaddr=NGW_IP, srcport, dstport, protocol, packets, bytes, start, end, action, log_status]
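Each flow log record is a space-separated line in the field order shown by the pattern, with the byte count in the tenth field. A quick local sanity check on two sample records (the ENI ID and addresses are fabricated):

```shell
# Sum the bytes sent to the gateway's private IP (field 5 = dstaddr, field 10 = bytes).
printf '%s\n' \
  '2 123456789010 eni-abc123de 10.0.1.5 10.0.0.220 49761 443 6 20 4249 1418530010 1418530070 ACCEPT OK' \
  '2 123456789010 eni-abc123de 10.0.1.6 10.0.0.220 49762 443 6 10 1000 1418530010 1418530070 ACCEPT OK' |
awk '$5 == "10.0.0.220" { bytes += $10 } END { print bytes }'
```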

The resulting graph will look like this:

If you create a new VPC using the VPC Wizard, it will offer to create a NAT Gateway and the route table rules for you. This makes the setup process even easier!

To learn more, read about the VPC NAT Gateway in the VPC User Guide.

Pricing and Availability
You can start using this new feature today in the US East (Northern Virginia), US West (Oregon), US West (Northern California), EU (Ireland), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo) regions.

Pricing starts at $0.045 per NAT gateway hour plus data processing and data transfer charges. Data processing costs are based on the amount of data processed by the NAT Gateway; data transfer costs are the usual costs to move data between an EC2 instance and the Internet. For more information, read about VPC Pricing.



EC2 Run Command Update – Now Available for Linux Instances

When we launched EC2 Run Command seven weeks ago (see my post, New EC2 Run Command – Remote Instance Management at Scale to learn more), I promised similar functionality for instances that run Linux. I am happy to be able to report that this functionality is available now and that you can start using it today.

Run Command for Linux
Like its Windows counterpart, this feature is designed to help you to administer your EC2 instances in an easy and secure fashion, regardless of how many you are running. You can install patches, alter configuration files, and more. To recap, we built this feature to serve the following management needs:

  • Implement configuration changes across instances on a consistent yet ad hoc basis.
  • Get reliable and consistent results across multiple instances.
  • Control who can perform changes and what can be done.
  • Keep a clear audit trail of the actions that were taken.
  • Do all of the above without the need for unfettered SSH access.

This new feature makes command execution secure, reliable, convenient, and scalable. You can create your own commands and exercise fine-grained control over execution privileges using AWS Identity and Access Management (IAM). All of the commands are centrally logged to AWS CloudTrail for easy auditing.

Run Command Benefits
The Run Command feature was designed to provide you with the following benefits (these apply to both Linux and Windows):

Control / Security – You can use IAM policies and roles to regulate access to commands and to instances. This allows you to reduce the number of users who have direct access to the instances.

Reliability – You can increase the reliability of your system by creating templates for your configuration changes. This will give you more control while also increasing predictability and reducing configuration drift over time.

Visibility – You will have more visibility into configuration changes because Run Command supports command tracking and is also integrated with CloudTrail.

Ease of Use – You can choose from a set of predefined commands, run them, and then track their progress using the Console, CLI, or API.

Customizability – You can create custom commands to tailor Run Command to the needs of your organization.

Using Run Command on Linux
Run Command makes use of an agent (amazon-ssm-agent) that runs on each instance. It is available for the following Linux distributions:

  • Amazon Linux AMI (64 bit) – 2015.09, 2015.03, 2014.09, and 2014.03.
  • Ubuntu Server (64 bit) – 14.04 LTS, 12.04 LTS
  • Red Hat Enterprise Linux (64 bit) – 7.x

Here are some of the things that you can do with Run Command:

  • Run shell commands or scripts
  • Add users or groups
  • Configure user or group permissions
  • View all running services
  • Start or stop services
  • View system resources
  • View log files
  • Install or uninstall applications
  • Update a scheduled (cron) task
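Most of these tasks come down to sending a shell script to the instances via the AWS-RunShellScript command document. A CLI sketch, with placeholder instance and command IDs:

```shell
# Run a couple of shell commands on a Linux instance via Run Command.
aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --instance-ids "i-0123456789abcdef0" \
    --parameters '{"commands":["uptime","df -h /"]}'

# Check the output later, using the CommandId from the response above.
aws ssm list-command-invocations --command-id "a1b2c3d4-placeholder" --details
```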

You can launch new Linux instances and bootstrap the agent by including a few lines in the UserData like this (to learn more, read Configure the SSM Agent in the EC2 Documentation):
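A minimal UserData sketch for Amazon Linux follows; the per-region download URL (the amazon-ssm-<region> bucket pattern) is an assumption based on the SSM agent documentation, so verify it there before use:

```shell
#!/bin/bash
# UserData sketch: download and install the SSM agent at first boot.
# NOTE: the bucket name below is an assumed per-region pattern; verify in the docs.
cd /tmp
curl -O https://amazon-ssm-us-east-1.s3.amazonaws.com/latest/linux_amd64/amazon-ssm-agent.rpm
yum install -y /tmp/amazon-ssm-agent.rpm
```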

Here’s how I choose a command document (separate command documents are available for Linux and for Windows):

And here’s how I select the target instances and enter in a command or a set of commands to run:

Here’s the output from the command:

Here’s how I review the output from commands that I have already run:

Run a Command Today
This feature is available now and you can start using it today in the US East (Northern Virginia), US West (Oregon), and EU (Ireland) regions. There is no charge for this feature, but you will be billed for other AWS resources that you consume.

To learn more, visit the Run Command page.


EC2 Update – T2.Nano Instances Now Available

We announced the t2.nano instances earlier this year. Like their larger siblings (t2.micro, t2.small, t2.medium, and t2.large), these instances provide a baseline level of processing power, along with the ability to save up unused cycles and use them when the need arises.

As I noted in my earlier post (New T2.Large Instances), this model has proven to be extremely popular with our customers. In fact, we did some research and found that, over the course of a couple of days, over 96% of the T2 instances always maintained a positive CPU Credit balance. In effect, you are paying for a very modest amount of processing power, yet have access to far more when the need arises. The pricing (which I will get to in a moment) becomes even more compelling when you purchase a 1 year or 3 year Reserved Instance.

I expect to see the t2.nano used to host low-traffic websites, run microservices, support dev / test environments, and to be used as cost-effective monitoring vehicles. There are also plenty of ways to use these instances in training and educational settings.

The Specs
Each t2.nano instance has 512 MiB of memory and 1 vCPU, and can run 32 or 64 bit operating systems and applications. They support EBS encryption and up to two Elastic Network Interfaces per instance.

The t2.nano offers the full performance of a high frequency Intel CPU core if your workload utilizes less than 5% of the core on average over 24 hours. You get full access to the CPU core when needed, as long as you maintain a positive CPU credit balance. Each newly launched t2.nano starts out with a CPU credit balance of 30 credits, and earns 3 more credits per hour, up to a maximum of 72. This means that each instance can burst to full-core performance for up to 72 minutes at a stretch.
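The numbers are self-consistent: one CPU credit buys one minute of a full core, so earning 3 credits per hour is exactly the 5% baseline, and a full balance of 72 credits is 72 minutes of bursting:

```shell
# Sanity-check the t2.nano credit arithmetic (1 credit = 1 minute of full core).
awk 'BEGIN {
    baseline = 3 / 60                 # 3 credits earned per hour, 60 minutes per hour
    burst_minutes = 72                # credits at the cap, at 1 minute per credit
    printf "baseline=%.0f%% burst=%dmin\n", baseline * 100, burst_minutes
}'
```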

You can run Linux or Windows on these instances. However, our data shows that Windows instances consume more CPU and memory than Linux instances, so you’ll want to do some testing and evaluation in order to decide which instance size will work best for your application. If you do not need the Windows GUI, you may want to take a look at the Server Core AMI.

EC2 Pricing & Sample Configurations
The t2.nano instances are priced at exactly half of the t2.micro for a given region. Here are some sample prices (see the EC2 Pricing page for more information):

Region                        Price/Hour (On-Demand)   Price/Month (On-Demand)   1-Year RI/Month   3-Year RI/Month
US East (Northern Virginia)   $0.0065                  $4.75                     $3.125            $2.10
US West (Oregon)              $0.0065                  $4.75                     $3.125            $2.10
EU (Ireland)                  $0.0070                  $5.11                     $3.42             $2.31
Asia Pacific (Tokyo)          $0.0100                  $7.30                     $5.25             $3.44
South America (São Paulo)     $0.0135                  $9.85                     $5.67             $4.17

Let’s take a look at the full-system cost to host and run a low-traffic website (up to 25,000 visits or so per month) on AWS using a t2.nano for one month. This is a real-world configuration that is more than adequate to handle the load.

In addition to the instance itself, the sample configuration includes an 8 GB EBS SSD volume for storage and domain hosting with Amazon Route 53. The pricing includes 2 gigabytes of network-out traffic. In other words, this is the all-in cost to run the site on AWS. Here’s the monthly pricing in US West (Oregon):

AWS Service   Configuration            On-Demand   1-Year RI/Month   3-Year RI/Month
EC2           t2.nano                  $4.75       $3.17             $2.11
EBS Volume    8 GB SSD                 $0.80       $0.80             $0.80
Network Out   2 GB                     $0.09       $0.09             $0.09
Route 53      1 Domain + 25K Queries   $0.51       $0.51             $0.51
Total Price                            $6.15       $4.57             $3.51
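As a check on the On-Demand column, the line items do add up to the stated total:

```shell
# t2.nano + 8 GB EBS + 2 GB network out + Route 53, On-Demand, per month.
awk 'BEGIN { printf "$%.2f\n", 4.75 + 0.80 + 0.09 + 0.51 }'
```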

Let’s say you really hit the jackpot and draw in 10 times as many visits as you planned for. You’ll pay less than $1 in additional Network Out charges, $0.81 to be precise. If you are running a small site and want to keep a watchful eye over your variable costs, don’t forget to create a billing alert.

This is a powerful starter system that can easily scale to handle more traffic or to host a more complex site or application. Over time, you can expand to make use of other AWS services such as S3, Elastic Load Balancing, Auto Scaling, Amazon Relational Database Service (RDS), and AWS CloudFormation. You also have access to T2 instances in other sizes, and to the full range of EC2 instance types.

Our friends at Bitnami provide a very wide range of packaged tools and applications that can be used on AWS with a couple of clicks. They have optimized their very popular WordPress AMI for use on the t2.nano. You can find this and many other applications in the AWS Marketplace.

Available Now
You can launch t2.nano instances today in the US East (Northern Virginia), US West (Oregon), US West (Northern California), EU (Ireland), Asia Pacific (Singapore), Asia Pacific (Tokyo), South America (São Paulo), and AWS GovCloud (US) regions. The instances will be available soon in EU (Frankfurt) and Asia Pacific (Sydney). You can use them with AWS CloudFormation today; support for AWS Elastic Beanstalk (in the form of updated containers) is in the works.


PS – Several of the comments ask about the free tier. Here’s our perspective on this:

The AWS Free Tier is designed to enable customers to get hands-on experience with AWS Cloud Services. For EC2, we offer new customers up to 750 hours per month (expiring 12 months after signing up for AWS) of usage on t2.micro instances, which allows users to run an instance continuously through their first year. We believe this is enough time for new customers to get familiar with all the tools available to interact with, configure, and monitor their EC2 instances (APIs, CLI, Console, CloudWatch, etc.). T2.micro is designed to work with a broad set of AMIs and AWS Marketplace software eligible for Free Tier, whereas the t2.nano is best suited for workloads that fit within 512 MiB of memory and have more moderate burst requirements. The t2.micro is the best starting point to get hands-on experience with AWS, and customers can later scale up or down depending on their workloads.

New – Encrypted EBS Boot Volumes

Encryption is an important part of any data protection strategy. Over the past year or two, we have introduced many features that are designed to simplify the task of storing your cloud-based information in encrypted form. Many of these features make use of the AWS Key Management Service (KMS); here are some of the more recent announcements on that topic:

To learn more, check out the AWS Services That Offer Encryption Integrated with AWS KMS.

Many customers tell me that they appreciate the fact that AWS makes it very easy for them to encrypt their data. They enable it as needed, and rely on AWS for the heavy lifting.

Encrypted EBS Boot Volumes
Today we are launching encryption for EBS boot volumes. This feature builds on a recent release that allowed you to copy an EBS snapshot while also applying encryption.

You can now create Amazon Machine Images (AMIs) that make use of encrypted EBS boot volumes and use the AMIs to launch EC2 instances. The stored data is encrypted, as is the data transfer path between the EBS volume and the EC2 instance. The data is decrypted on the instance on an as-needed basis, then stored only in memory.

This feature will aid your security, compliance, and auditing efforts by allowing you to verify that all of the data that you store on EBS is encrypted, whether it is stored on a boot volume or on a data volume. Further, because this feature makes use of KMS, you can track and audit all uses of the encryption keys.

Each EBS-backed AMI contains references to one or more snapshots of EBS volumes. The first reference is to a snapshot of the boot volume. The others (if present) are to snapshots of data volumes. When you launch the AMI, an EBS volume is created from each snapshot. Because EBS already supports encryption of data volumes (and, by implication, the snapshots associated with those volumes), you can now create a single AMI with a fully encrypted set of volumes. If you like, you can use an individual Customer Master Key (CMK) in KMS for each volume.
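As an illustration, you can inspect an AMI's snapshot references and their encryption status with the AWS CLI (the AMI ID below is a placeholder):

```shell
# List each EBS snapshot referenced by an AMI, along with its Encrypted flag.
# Replace ami-0abcd1234 with a real AMI ID in your account.
aws ec2 describe-images --image-ids ami-0abcd1234 \
  --query 'Images[].BlockDeviceMappings[].Ebs.[SnapshotId,Encrypted]' \
  --output table
```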

Creating an Encrypted EBS Boot Volume
The process of creating an encrypted EBS boot volume begins with an existing AMI (either Linux or Windows). If you own the AMI, or if it is both public and free, you can use it directly. Otherwise, you will need to launch the AMI, create an image from it, and then use that image to create the encrypted EBS boot volume (this applies, for example, to Windows AMIs). The resulting encrypted AMI will be private; you cannot share it with another AWS account.

With the AMI and the encrypted snapshot in hand, you simply create a new AMI using the AWS CLI copy-image command as follows:

$ aws ec2 copy-image --source-region source_region \
  --source-image-id source_ami_id [--name ami_name] \
  [--description ami_description] [--client-token token] \
  [--encrypted] [--kms-key-id keyid]

If you request encryption with --encrypted and do not supply the --kms-key-id parameter, the default EBS Customer Master Key (CMK) for your account will be used.

For example, here is how you would make a copy of the Amazon Linux AMI:

$ aws ec2 copy-image --source-region us-east-1 --source-image-id ami-60b6c60a \
  --encrypted --kms-key-id arn:aws:kms:us-east-1:012345678910:key/abcd1234-a123-456a-a12b-a123b4cd56ef

You can also create an AMI with an encrypted boot volume from the EC2 Console:

Using an Encrypted EBS Boot Volume
After you create your new AMI, you can use it to launch new instances as usual. You don’t need to make any other changes to your code or your operational practices.
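For example, launching an instance from the encrypted AMI uses the same run-instances call as any other AMI (all of the identifiers below are placeholders; substitute your own):

```shell
# Launch one instance from the newly created encrypted AMI.
# The AMI ID, key pair name, and subnet ID are placeholders.
aws ec2 run-instances \
  --image-id ami-0abcd1234 \
  --instance-type m4.large \
  --key-name my-key-pair \
  --subnet-id subnet-0abcd1234 \
  --count 1
```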

Available Now
This new feature is available now in all AWS regions except China (Beijing) and you can start using it today at no additional charge.


PS – The EBS team is hiring! Check out the EBS Careers page for more info.

Now Available – EC2 Dedicated Hosts

Last month, I announced that we would soon be making EC2 Dedicated Hosts available. As I wrote at the time, this model allows you to control the mapping of EC2 instances to the underlying physical servers. Dedicated Hosts allow you to:

  • Bring Your Own Licenses – You can bring your existing server-based licenses for Windows Server, SQL Server, SUSE Linux Enterprise Server, and other enterprise systems and products to the cloud. Dedicated Hosts provide you with visibility into the number of sockets and physical cores that are available so that you can obtain and use software licenses that are a good match for the actual hardware.
  • Help Meet Compliance and Regulatory Requirements – You can allocate Dedicated Hosts and use them to run applications on hardware that is fully dedicated to your use.
  • Track Usage – You can use AWS Config to track the history of instances that are started and stopped on each of your Dedicated Hosts. This data can be used to verify usage against your licensing metrics.
  • Control Instance Placement – You can exercise fine-grained control over the placement of EC2 instances on each of your Dedicated Hosts.

Available Now
I am happy to announce that Dedicated Hosts are available now and that you can start using them today. You can allocate and manage them from the AWS Management Console, AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, or via code that makes calls to the AWS SDKs.

Let’s provision a Dedicated Host and then launch some EC2 instances on it via the Console! I simply open up the EC2 Console, select Dedicated Hosts in the left-side navigation bar, and click on Allocate a Host.

I choose the instance type (Dedicated Hosts are available for M3, M4, C3, C4, G2, R3, D2, and I2 instances), the Availability Zone, and the quantity (each Dedicated Host can accommodate one or more instances of a particular type, all of which must be the same size).

If I choose to allow instance auto-placement, subsequent launches of the designated instance type in the chosen Availability Zone are eligible for automatic placement on the Dedicated Host. Such a launch will land there if capacity is available on the host and the launch specifies a tenancy of Host without targeting a particular host. If I do not allow auto-placement, I must explicitly target this Dedicated Host when I launch an instance.
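The same allocation can be performed with the AWS CLI. This is a minimal sketch, assuming an m4.large host in the us-east-1a Availability Zone:

```shell
# Allocate a single m4.large Dedicated Host with auto-placement enabled.
# Billing for the host begins as soon as this call succeeds.
aws ec2 allocate-hosts \
  --instance-type m4.large \
  --availability-zone us-east-1a \
  --auto-placement on \
  --quantity 1
```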

When I click Allocate host, I receive confirmation that the host was allocated:

Billing for the Dedicated Host begins at this point. The number and size of the instances running on it do not affect the cost.

I can see all of my Dedicated Hosts at a glance. Selecting one displays detailed information about it:

As you can see, my Dedicated Host has 2 sockets and 24 cores. It can host up to 22 m4.large instances, but is currently not hosting any. The next step is to run some instances on my Dedicated Host. I click on Actions and choose Launch Instance(s) onto Host (I can also use the existing EC2 launch wizard):

Then I pick an AMI. Some AMIs (currently RHEL, SUSE Linux, and those that include Windows licenses) cannot be used with Dedicated Hosts; they cannot be selected in the screen below or from the AWS Marketplace:

The instance type is already selected:

Instances launched on a Dedicated Host must always reside within a VPC. A single Dedicated Host can accommodate instances that run in more than one VPC.

The remainder of the instance launch process proceeds in the usual way, and I have access only to the options that make sense when running on a Dedicated Host (you cannot, for example, run Spot instances on a Dedicated Host).

I can also choose to target one of my Dedicated Hosts when I launch an EC2 instance in the traditional way. I simply set the Tenancy option to Dedicated host and choose one of my Dedicated Hosts (I can also leave it set to No preference and have AWS make the choice for me):

If I select Affinity, a persistent relationship will be created between the Dedicated Host and the instance. This gives you confidence that the instance will restart on the same Host, and minimizes the possibility that you will inadvertently run licensed software on the wrong Host. If you import a Windows Server image (to pick one that we expect to be popular), you can keep it assigned to a particular physical server for at least 90 days, in accordance with the terms of the license.
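From the CLI, the equivalent targeted launch goes through the placement structure of run-instances; the AMI and host IDs below are placeholders:

```shell
# Launch an instance onto a specific Dedicated Host with host affinity,
# so that the instance always restarts on the same physical server.
aws ec2 run-instances \
  --image-id ami-0abcd1234 \
  --instance-type m4.large \
  --placement "Tenancy=host,HostId=h-0123456789abcdef0,Affinity=host"
```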

I can return to the Dedicated Hosts section of the Console, select one of my Hosts, and learn more about the instances that are running on it:

Using & Tracking Licensed Software
You can use your existing software licenses on Dedicated Hosts. Verify that the terms allow the software to be used in a virtualized environment, and use VM Import/Export to bring your existing machine images into the cloud. To learn more, read about Bring Your Own License in the EC2 Documentation. To learn more about Windows licensing options as they relate to AWS, read about Microsoft Licensing on AWS and our detailed Windows BYOL Licensing FAQ.

You can use AWS Config to record configuration changes for your Dedicated Hosts and the instances that are launched, stopped, or terminated on them. This information will prove useful for license reporting. You can use the Edit Config Recording button in the Console to change the settings (hovering your mouse over the button will display the current status):
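If you prefer the CLI, you can check the recording setup with the AWS Config service commands:

```shell
# Show the configuration recorders in the current region and
# whether each one is actively recording.
aws configservice describe-configuration-recorders
aws configservice describe-configuration-recorder-status
```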

To learn more, read about Using AWS Config.

Some Important Details
As I mentioned earlier, billing begins when you allocate a Dedicated Host. For more information about pricing, visit the Dedicated Host Pricing page.

EC2 automatically monitors the health of each of your Dedicated Hosts and communicates it to you via the Console. The state is normally available; it switches to under-assessment if we are exploring a possible issue with the Dedicated Host.
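You can also retrieve the state of each host from the CLI; this sketch uses field names from the EC2 DescribeHosts API:

```shell
# List each Dedicated Host with its current state (e.g. available or
# under-assessment) and the number of vCPUs still available on it.
aws ec2 describe-hosts \
  --query 'Hosts[].[HostId,State,AvailableCapacity.AvailableVCpus]' \
  --output table
```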

Instances launched on Dedicated Hosts must always reside within a VPC, but cannot make use of Placement Groups. Auto Scaling is not supported, and neither is RDS.

Dedicated Hosts are available in the US East (Northern Virginia), US West (Oregon), US West (Northern California), EU (Ireland), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), and South America (São Paulo) regions. You can allocate up to 2 Dedicated Hosts per instance family (M4, C4, and so forth) per region; if you need more, just ask.