

New – VPC Endpoint for Amazon S3

I would like to tell you about a new AWS feature that will allow you to make even better use of Amazon Virtual Private Cloud and Amazon Simple Storage Service (S3). As you probably know, S3 provides you with secure, durable, and highly scalable object storage. You can use the Virtual Private Cloud to create a logically isolated section of the AWS Cloud, with full control over a virtual network that you define.

When you create a VPC, you use security groups and access control lists (ACLs) to control inbound and outbound traffic. Until now, if you wanted your EC2 instances to be able to access public resources, you had to use an Internet Gateway, and potentially manage some NAT instances.

New VPC Endpoint for S3
Today we are simplifying access to S3 resources from within a VPC by introducing the concept of a VPC Endpoint. These endpoints are easy to configure, highly reliable, and provide a secure connection to S3 that does not require a gateway or NAT instances.

EC2 instances running in private subnets of a VPC can now have controlled access to S3 buckets, objects, and API functions that are in the same region as the VPC. You can use an S3 bucket policy to indicate which VPCs and which VPC Endpoints have access to your S3 buckets.

Creating and Using VPC Endpoints
You can create and configure VPC Endpoints using the AWS Management Console, AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, and the VPC API. Let’s create one using the console! Start by opening up the VPC Dashboard and selecting the desired region. Locate the Endpoints item in the navigation bar and click on it:

If you have already created some VPC Endpoints, they will appear in the list:

Now click on Create Endpoint, choose the desired VPC, and customize the access policy (if you want):

The access policy on the VPC Endpoint allows you to disallow requests to untrusted S3 buckets (by default a VPC Endpoint can access any S3 bucket). You can also use access policies on your S3 buckets to control access from a specific VPC or VPC Endpoint. These access policies would use the new aws:SourceVpc and aws:SourceVpce conditions (read the documentation to learn more).
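
For example, a bucket policy along the following lines denies any request that does not arrive through a specific endpoint. This is just a sketch (the bucket name and endpoint ID are placeholders), applied here with the AWS SDK for Python:

```python
import json
import boto3

s3 = boto3.client("s3")

# Placeholders; substitute your own bucket name and VPC Endpoint ID.
bucket = "my-example-bucket"
endpoint_id = "vpce-1a2b3c4d"

# Deny all S3 actions on the bucket unless the request arrives through
# the specified VPC Endpoint (the aws:SourceVpce condition key).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AccessViaSpecificEndpointOnly",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::" + bucket,
                "arn:aws:s3:::" + bucket + "/*",
            ],
            "Condition": {"StringNotEquals": {"aws:SourceVpce": endpoint_id}},
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```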

As you might be able to guess from the screen above, you will eventually be able to create VPC Endpoints for other AWS services!

Now choose the VPC subnets that will be allowed to access the endpoint:

As indicated in the note on the screen above, open connections using an instance’s public IP address in the affected subnets will be dropped when you create the VPC Endpoint.

Once you create the VPC Endpoint, the S3 public endpoints and DNS names will continue to work as expected. The Endpoint simply changes the way in which the requests are routed from EC2 to S3.
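
If you would rather script the setup than click through the console, here is a minimal sketch using the AWS SDK for Python (boto3); the VPC and route table IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholders; use your own VPC and the route tables of the private
# subnets that should reach S3 through the endpoint.
response = ec2.create_vpc_endpoint(
    VpcId="vpc-11aa22bb",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123abcd"],
)

print(response["VpcEndpoint"]["VpcEndpointId"])
```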

Available Now
Amazon VPC Endpoints for Amazon S3 are available now in the US East (Northern Virginia) (for access to the US Standard region), US West (Oregon), US West (Northern California), EU (Ireland), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Singapore), and Asia Pacific (Sydney) regions. You can start using them today. Read the documentation to learn more.

Jeff;

AWS GovCloud (US) Update – AWS Key Management Service Now Available

The AWS Key Management Service (KMS) provides you with seamless, centralized control over your encryption keys. As I noted when we launched the service (see my post, New AWS Key Management Service, for more information), this service gives you a new option for data protection and relieves you of many of the more onerous scalability and availability issues that inevitably surface when you implement key management at enterprise scale. KMS uses Hardware Security Modules to protect the security of your keys. It is also integrated with AWS CloudTrail for centralized logging of all key usage.

AWS GovCloud (US), as you probably know, is an AWS region designed to allow U.S. government agencies (federal, state, and local), along with contractors, educational institutions, enterprises, and other U.S. customers to run regulated workloads in the cloud. AWS includes many security features and is also subject to many compliance programs. AWS GovCloud (US) allows customers to run workloads that are subject to U.S. International Traffic in Arms Regulations (ITAR), the Federal Risk and Authorization Management Program (FedRAMP), and levels 1-5 of the Department of Defense Cloud Security Model (CSM).

KMS in GovCloud (US)
Today we are making AWS Key Management Service (KMS) available in AWS GovCloud (US).  You can use it to encrypt data in your own applications and within the following AWS services, all using keys that are under your control:

  • Amazon EBS volumes.
  • Amazon S3 objects using Server-Side Encryption (SSE-KMS) or client-side encryption using the encryption client in the AWS SDKs (see the sketch after this list).
  • Output from Amazon EMR clusters to S3 using the EMRFS client.
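
As a quick illustration of the S3 case, here's a sketch (using the AWS SDK for Python) of a server-side encrypted PUT; the bucket, object key, and KMS key ID are placeholders:

```python
import boto3

s3 = boto3.client("s3", region_name="us-gov-west-1")

# Placeholders for the bucket, object key, and the KMS key ID under your
# control. SSE-KMS asks S3 to encrypt the object server-side using that key.
s3.put_object(
    Bucket="my-example-bucket",
    Key="reports/2015-q1.csv",
    Body=b"example payload",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="1234abcd-12ab-34cd-56ef-1234567890ab",
)
```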

To learn more, visit the AWS Key Management Service (KMS) page. To get started in the AWS GovCloud (US) region, contact us today!

Jeff;

New AWS Quick Start – SAP Business One, version for SAP HANA

We have added another AWS Quick Start Reference Deployment. The new SAP Business One, Version for SAP HANA document will show you how to get on the fast track to plan, deploy, and configure this enterprise resource planning (ERP) solution. It is powered by SAP HANA, SAP’s in-memory database.

This deployment builds on our existing SAP HANA on AWS Quick Start. It makes use of Amazon Elastic Compute Cloud (EC2) and Amazon Virtual Private Cloud, and is launched via an AWS CloudFormation template.

The CloudFormation template creates the following resources, all within a new or existing VPC:

  • A NAT instance in the public subnet to support inbound SSH access and outbound Internet access.
  • A Microsoft Windows Server instance in the public subnet for downloading SAP HANA media and to provide a remote desktop connection to the SAP Business One client instance.
  • Security groups and IAM roles.
  • A SAP HANA system installed with Amazon Elastic Block Store (EBS) volumes configured to meet HANA’s performance requirements.
  • SAP Business One, version for SAP HANA, client and server components.

The document will help you to choose the appropriate EC2 instance types for both production and non-production scenarios. It also includes a comprehensive, step-by-step walk-through of the entire setup process. During the process, you will need to log in to the Windows instance using an RDP client in order to download and stage the SAP media.

After you make your choices, you simply launch the template, fill in the blanks, and sit back while the resources are created and configured. Exclusive of the media download (a manual step), this process will take about 90 minutes.

The quick start reference guide is available now and you can read it today!

Jeff;

VM Import Update – Faster and More Flexible, with Multi-Volume Support

Enterprise IT architects and system administrators often ask me how to go about moving their existing compute infrastructure to AWS. Invariably, they have spent a long time creating and polishing their existing system configurations and are hoping to take advantage of this work when they migrate to the cloud.

We introduced VM Import quite some time ago in order to address this aspect of the migration process. Since then, many AWS customers have used it as part of their migration, backup, and disaster recovery workflows.

Even Better
Today we are improving VM Import by adding new ImportImage and ImportSnapshot functions to the API.  These new functions are faster and more flexible than the existing ImportInstance function and should be used for all new applications. Here’s a quick comparison of the benefits of ImportImage with respect to ImportInstance:

  • Source: ImportInstance uses an S3 manifest + objects (usually uploaded from an on-premises image file); ImportImage uses an image file in S3 or an EBS Snapshot.
  • Destination: ImportInstance produces a stopped EC2 instance; ImportImage produces an Amazon Machine Image (AMI).
  • VM Complexity: ImportInstance handles a single volume and a single disk; ImportImage handles multiple volumes and multiple disks.
  • Concurrent Imports: ImportInstance allows 5; ImportImage allows 20.
  • Operating Systems: Windows Server 2003, Windows Server 2008, Windows Server 2012, Red Hat Enterprise Linux (RHEL), CentOS, Ubuntu, and Debian.
  • VM Formats: ImportInstance accepts VMDK, VHD, and RAW; ImportImage accepts VMDK, VHD, RAW, and OVA.

Because ImportImage and ImportSnapshot use an image file in S3 as a starting point, you now have several choices when it comes to moving your images to the cloud. You can use the AWS Management Console, AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, or custom tools built around the S3 API (be sure to take advantage of multipart uploads if you do this).  You can also use AWS Import/Export to transfer your images using physical devices.

The image file that you provide to ImportImage will typically be an OVA package, but other formats are also supported. The file contains images of one or more disks, a manifest file, certificates, and other data associated with the image.

As noted in the comparison above, ImportImage accepts image files that contain multiple disks and/or multiple disk volumes. This makes it a better match for the complex storage configurations that are often a necessity within an enterprise-scale environment.

ImportImage generates an AMI that can be launched as many times as needed. This is simpler, more flexible, and easier to work with than the stopped instance built by ImportInstance. ImportSnapshot generates an EBS Snapshot that can be used to create an EBS volume.

Behind the scenes, ImportImage and ImportSnapshot are able to distribute the processing and storage operations of each import operation across multiple EC2 instances. This optimization speeds up the import process and also makes it easier for you to predict how long it will take to import an image of a given size.

In addition to building your own import programs that make use of ImportImage and ImportSnapshot (by way of the AWS SDK for Java and the AWS SDK for .NET), you can also access this new functionality from the AWS Command Line Interface (CLI).
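
The same functions are exposed in the other AWS SDKs as well. Here's a rough sketch using the AWS SDK for Python (boto3); the bucket and object key are placeholders, and you would poll the resulting task until the AMI is ready:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholders: an S3 bucket and key holding an exported OVA package.
task = ec2.import_image(
    Description="Web tier VM",
    DiskContainers=[
        {
            "Description": "OVA exported from the on-premises environment",
            "Format": "ova",
            "UserBucket": {"S3Bucket": "my-import-bucket", "S3Key": "vms/web-tier.ova"},
        }
    ],
)
print(task["ImportTaskId"])

# Check on the import task; repeat until the status indicates completion.
status = ec2.describe_import_image_tasks(ImportTaskIds=[task["ImportTaskId"]])
print(status["ImportImageTasks"][0]["Status"])
```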

To learn more, read the API documentation for ImportImage and ImportSnapshot.

Available Now
These new functions are available now in all AWS regions except China (Beijing) and AWS GovCloud (US).

Jeff;

New G2 Instance Type with 4x More GPU Power

The GPU-powered G2 instance family is home to molecular modeling,  rendering, machine learning, game streaming, and transcoding jobs that require massive amounts of parallel processing power. The NVIDIA GRID GPU includes dedicated, hardware-accelerated video encoding; it generates an H.264 video stream that can be displayed on any client device that has a compatible video codec. Here’s the block diagram from my original post:

Today we are adding a second member to the G2 family. The new g2.8xlarge instance has the following specifications:

  • Four NVIDIA GRID GPUs, each with 1,536 CUDA cores, 4 GB of video memory, and the ability to encode either four real-time HD video streams at 1080p or eight real-time HD video streams at 720p.
  • 32 vCPUs.
  • 60 GiB of memory.
  • 240 GB (2 x 120) of SSD storage.

This new instance size was designed to meet the needs of customers who are building and running high-performance CUDA, OpenCL, DirectX, and OpenGL applications.

From our Customers
AWS customer OpenEye Scientific provides software to the pharmaceutical industry for molecular modeling and cheminformatics. The additional memory and compute power of the g2.8xlarge allow them to accelerate their modeling and shape-fitting process. Brian Cole (their GPU Computing Lead) told us:

FastROCS is an extremely fast shape comparison application, based on the idea that molecules have similar shape if their volumes overlay well and any volume mismatch is a measure of dissimilarity. The unprecedented speed of FastROCS represents a paradigm shift in the potential for 3D shape screening as part of the drug discovery process. To meet the high performance of FastROCS on NVIDIA GPUs, the molecular database must reside in main memory.

The 15GB of memory provided by the g2.2xlarge was a limiting factor in OpenEye’s ability to use AWS for FastROCS. The only piece of our cloud offering not yet running in AWS is an on-premises dedicated FastROCS machine. Now that the g2.8xlarge instance provides nearly four times more memory, FastROCS can be run on production-sized pharmaceutically-relevant datasets in AWS.

In addition, we have observed a four times performance increase from scaling across four GPUs on the g2.8xlarge instance. This will bring with it all the great flexibility and maintainability we have come to rely on from the AWS cloud.

Here’s a visual representation of the scaling that they have been able to achieve by using all four GPUs in a g2.8xlarge instance:

AWS customer OTOY builds GPU-based software that is designed to create cutting-edge digital content. Their AWS-powered Octane Render Cloud (ORC) provides users with high-quality, cloud-based rendering.

3D artists and visual effects (VFX) houses can use ORC to access essentially unlimited rendering capacity (including computationally intensive tasks such as light fields and path-traced cloud gaming), all powered by EC2 instances equipped with GPUs. This frees up their local workstations for creative work.

ORC’s web-based UI allows users to log in, upload projects, and create render jobs. The jobs are rendered on g2.2xlarge and g2.8xlarge instances and the user receives an email notification when the rendering is complete. Visual assets are deduplicated and stored in S3; this allows for space efficiency even if render scenes from different users make use of some of the same assets.

Brigade is OTOY’s real-time GPU path tracer. They are currently using ORC to port Octane scenes to Brigade for live, path-traced cloud gaming. Take a look at this video to see what this looks like:

Finally, AWS customer Butterfly Network (“Transforming Diagnostic and Therapeutic Imaging with Devices, Deep Learning, and the Cloud”) uses the g2.8xlarge to support their machine learning platform. Alex Rothberg (Senior Scientist) told us:

With the benefit of the new g2.8xlarge instances, we can now leverage data parallelism across multiple GPUs to speed up training our neural networks. This will allow us to more rapidly iterate on deep learning methods which will enable us to democratize medical imaging.

Go GPU Today!
You can launch these instances today in the US East (Northern Virginia), US West (Northern California), US West (Oregon), EU (Ireland), Asia Pacific (Singapore), and Asia Pacific (Tokyo) regions in On-Demand or Spot form; Reserved Instances are also available.

Jeff;

MongoDB on the AWS Cloud – New Quick Start Reference Deployment

We have added a new AWS Quick Start Reference Deployment. The new MongoDB on the AWS Cloud document will show you how to design and configure a MongoDB (an open source NoSQL database) cluster that runs on AWS.

The MongoDB cluster (version 2.6 or 3.0) makes use of Amazon Elastic Compute Cloud (EC2) and Amazon Virtual Private Cloud, and is launched via an AWS CloudFormation template. You can use the template directly or you can copy and then customize it as needed. The template creates the following resources:

  • VPC with private and public subnets (you can also launch the cluster into an existing VPC).
  • A NAT instance in the public subnet to support SSH access to the cluster and outbound Internet connectivity.
  • An IAM instance role with fine-grained permissions.
  • Security groups.
  • A fully customized MongoDB cluster with replica sets, shards, and config servers, along with customized EBS storage, all running in the private subnet.

The document examines scaling, replication, and performance tradeoffs in depth, and provides guidance to help you to choose appropriate types of EC2 instances and EBS volumes.

After you make your choices, you simply launch the template, fill in the blanks, and wait about 15 minutes while the resources are created and configured:
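
If you would rather launch the template programmatically, a minimal sketch with the AWS SDK for Python looks something like this; the stack name, template URL, and parameter keys shown here are placeholders, so check the guide for the real ones:

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Placeholders: point TemplateURL at the Quick Start template and supply the
# parameters that the reference guide documents for your deployment.
cfn.create_stack(
    StackName="mongodb-quickstart",
    TemplateURL="https://s3.amazonaws.com/my-bucket/mongodb-template.json",
    Parameters=[
        {"ParameterKey": "KeyPairName", "ParameterValue": "my-key-pair"},
        {"ParameterKey": "MongoDBVersion", "ParameterValue": "3.0"},
    ],
    Capabilities=["CAPABILITY_IAM"],  # the template creates IAM resources
)
```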

The document is available now and you can read it today!

Jeff;

EC2 Container Service – Long-Running Applications, Load Balancing, and More

Amazon EC2 Container Service is a highly scalable container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon Elastic Compute Cloud (EC2) instances. We launched the preview at AWS re:Invent and have been receiving a lot of great feedback ever since.

We built Amazon ECS because many of you told us that you are using Docker to encapsulate your applications and services, and that you want to run one or (more typically) more of them across a cluster of EC2 instances without having to worry about managing it. You asked for a service that was reliable and scalable, and that allowed you to take advantage of the advanced EC2 features that you are already using.

You also told us that managing a cluster, including the state of the EC2 instances and containers, can be tricky, especially as the environment grows. You need to have access to accurate and timely state information so that you can make container placement decisions, such as what instances are available and have the requisite capacity. Tracking of state grows increasingly difficult (and yet ever-more important) as the number of instances and containers grows into the thousands or tens of thousands.

Amazon ECS was designed to meet all of these needs and more. You do not need to install, operate, and scale your own cluster management infrastructure. With simple API calls, you can launch and stop container-enabled applications and discover the state of your cluster. You can use Amazon ECS in conjunction with other AWS services, and you can take advantage of familiar features such as Elastic Load Balancing, EBS volumes, EC2 security groups, and IAM roles.

You also get several options for container scheduling, allowing you to run a wide variety of applications and to manage the placement and utilization of containers, applications, and services.

Now Generally Available
I am happy to be able to announce that Amazon ECS is now generally available! We have added some powerful new features including support for long-running applications, a shiny new Amazon ECS Console, and CloudTrail integration. We are also making Amazon ECS available in the Asia Pacific (Tokyo) region.

Let’s take a look at each of these new features.

Long-Running Applications
Previously, Amazon ECS included two ways to schedule Docker containers on a cluster. You could run tasks once for processes such as batch jobs that perform work and then stop. You could also make calls to the Amazon ECS APIs to retrieve state information about the cluster and then use it to power a third-party or custom-written scheduler.

With today’s launch you can also use the new Amazon ECS Service scheduler to manage long-running applications and services. The Service scheduler helps you maintain application availability and allows you to scale your containers up or down to meet your application’s capacity requirements. Here’s what it does for you:

  • Load Balancing – The Service scheduler allows you to distribute traffic across your containers using Elastic Load Balancing. Amazon ECS will automatically register and deregister your containers from the associated load balancer.
  • Health Management – The Service scheduler will also automatically recover containers that become unhealthy (fail ELB health checks) or stop running, so that you have the desired number of healthy containers available to run your application.
  • Scale-Up and Scale-Down – You can scale your application up and down by changing the number of containers you want the service to run.
  • Update Management – You can update your application by changing its definition or using a new image. The scheduler will automatically start new containers using the new definition and stop containers running the previous version. It will wait for the ELB connections to drain if ELB is used.

You can also use these new facilities to implement a basic service discovery model. You can list the services that are running in a cluster and then use the ELB as the service endpoint.
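
Here's a sketch of what creating such a service looks like through the AWS SDK for Python; the cluster, task definition, role, and load balancer names are placeholders, and the load balancer must already exist:

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Placeholders throughout; the task definition must expose the container
# port referenced in the load balancer mapping below.
ecs.create_service(
    cluster="default",
    serviceName="web",
    taskDefinition="web-app:1",
    desiredCount=3,
    role="ecsServiceRole",  # IAM role that lets ECS register instances with ELB
    loadBalancers=[
        {
            "loadBalancerName": "web-elb",
            "containerName": "web",
            "containerPort": 80,
        }
    ],
)
```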

A Word From Gilt
Online retailer Gilt provides their customers with insider access to today’s top brands and experiences. Co-founder Phong Nguyen told us:

“We were early adopters of Docker. Using Docker has helped us move faster and allowed us to improve and simplify end-to-end continuous delivery of our micro-services architecture. As we Dockerize all our services, it is very important for us to have a platform that can help us speed up deployments, automate our services, and gain greater efficiencies. The new service scheduler and ELB integration make Amazon ECS an excellent platform for our services, and we are looking forward to partnering with AWS as our Docker platform.”

Amazon ECS Console
The new Amazon ECS Console simplifies the process of setting up and running a cluster. I’ll demonstrate it by creating a service that deploys a simple PHP application.  The application displays content (a message and the time of day) provided by a linked container.

I start by opening up the Console and opting to create a custom task definition (the set of containers that I want to run together on an EC2 instance). I choose Custom:

Then I create my task definition. I can build it visually, one container at a time:

Or I can paste in an existing JSON definition (this one came straight from the Docker Basics section of the documentation):
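
A definition of that general shape, registered through the SDK rather than the console, might look like the following sketch (the container names and images here are placeholders, not the exact example from the documentation):

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# A two-container task: a web container linked to a container that supplies
# the content it displays. Names, images, and sizes are placeholders.
ecs.register_task_definition(
    family="console-sample-app",
    containerDefinitions=[
        {
            "name": "simple-app",
            "image": "httpd:2.4",
            "cpu": 10,
            "memory": 300,
            "portMappings": [{"containerPort": 80, "hostPort": 80}],
            "links": ["content-provider"],
            "essential": True,
        },
        {
            "name": "content-provider",
            "image": "busybox",
            "cpu": 10,
            "memory": 200,
            "essential": False,
        },
    ],
)
```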

I clicked on Create a service and accepted all of the default values in the next step:

Now that the service is defined I can create a cluster (3 t2.micro instances in this case) to run it. I used the Select/Create Roles button to set up the requisite IAM roles in one-click fashion:

After reviewing my choices and confirming my intent, Amazon ECS launched the EC2 instances into my default cluster. I was able to watch the progress from within the Console:

Everything was up and running within a couple of minutes and I was able to do some exploration. I started by looking at my list of clusters (just one):

And then zooming in for a closer look  at the cluster:

From there I can check on the service:

And I can make changes as needed:

You can also exercise all of this control, and retrieve all of the same information, by using the ECS API or the Command-Line Interface (CLI).
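
For example, here's a small sketch (AWS SDK for Python) that walks the same information the console shows, listing each cluster's services along with their desired and running counts:

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Enumerate clusters, then the services in each one, and print a summary.
for cluster_arn in ecs.list_clusters()["clusterArns"]:
    service_arns = ecs.list_services(cluster=cluster_arn)["serviceArns"]
    if not service_arns:
        continue
    detail = ecs.describe_services(cluster=cluster_arn, services=service_arns)
    for svc in detail["services"]:
        print(cluster_arn, svc["serviceName"], svc["desiredCount"], svc["runningCount"])
```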

CloudTrail Integration
Calls to the ECS APIs are now logged to AWS CloudTrail.

Another Region
Effective today, Amazon ECS is available in the Asia Pacific (Tokyo) region. It is also available in the US East (Northern Virginia), US West (Oregon), and EU (Ireland) regions.

Get Started Today
If you are new to the world of container-based computing and Amazon ECS, start by reading What is Amazon EC2 Container Service.

Jeff;

 

Amazon Elastic File System – Shared File Storage for Amazon EC2

I’d like to give you a preview of a new AWS service that will make its debut later this year.

Let’s take a quick look at the AWS storage portfolio. We currently offer block storage with Amazon Elastic Block Store (EBS) and EC2 instance storage, object storage with Amazon Simple Storage Service (S3), and archival storage with Amazon Glacier.

Today we are introducing Amazon Elastic File System, our newest storage service. It provides multiple EC2 instances with low-latency, shared access to a fully-managed file system via the popular NFSv4 protocol, and is designed to perform well for a wide variety of workloads, with the ability to scale to thousands of concurrent connections.

We expect to see EFS used for content repositories, development environments, web server farms, home directories, and Big Data applications, to name just a few. If you’ve got the files, we’ve got the storage!

The SSD-based file systems are highly available and highly durable (files, directories, and links are stored redundantly across multiple Availability Zones within an AWS region) and grow or shrink as needed (there’s no need to pre-provision capacity). You’ll be able to create them using the AWS Management Console, the AWS Command Line Interface (CLI), and a simple set of APIs, and start using them within seconds.

File systems can grow to petabyte scale, and throughput and IOPS scale accordingly. You’ll pay only for the storage that you use (billed monthly based on the average used during the month) at the rate of $0.30 per gigabyte per month.

EFS is designed to support the security requirements of large, complex organizations. You’ll be able to use IAM to grant access to the EFS APIs, along with VPC security groups to control network access to file systems. You’ll be able to use standard file and directory permissions (good old chown and chmod) to control access to the directories, files, and links stored on your file systems.

Coming Soon
We will be opening up EFS in preview form in the near future. Visit the Amazon Elastic File System page and sign up for the preview today; we will let you know as soon as it is ready for you to use. I will have more information on using Amazon EFS to share with you at that time.

Jeff;

AWS CodeDeploy Update – New Support for On-Premises Instances

My colleague Andy Troutman wrote up the guest post below to share news of a powerful new way to use AWS CodeDeploy.

— Jeff;


Customers use AWS CodeDeploy to manage application updates to their Amazon EC2 instances. CodeDeploy allows developers and administrators to centrally control and track their application deployments across different development, testing, and production environments. CodeDeploy is built using many of the lessons learned from Amazon’s internal deployment systems, and focuses on coordinating the rollout of application updates in a way that’s clear, reproducible, and non-impactful to application users. To learn more about how to integrate CodeDeploy into your existing app management process, visit the CodeDeploy detail page.

Customers who are using CodeDeploy to manage their Amazon Elastic Compute Cloud (EC2) instances have asked to be able to use the same fleet coordination features to deploy code to their on-premises instances. Today we’re happy to make the functionality of CodeDeploy available for use on a customer’s own servers, in addition to Amazon EC2.

Advantages of CodeDeploy for On-Premises Instances
Customers can now manage their EC2 and on-premises application deployments using a single solution. Here are some of the advantages:

  • Coordinate a rolling update across a collection of EC2 and/or on-premises instances. CodeDeploy will actively track the outcome of each instance update, and use this data to safely select the next set of updates, or to stop the deployment in a case where an application update isn’t going as expected.
  • Tag on-premises instances the same as with EC2 instances to define deployment groups. You can create multiple unique groups to define separate application stages such as “Development”, “Test”, “QA”, “Production”, and so forth.
  • Deployment groups can be composed of on-premises instances, static EC2 tags, or dynamically changing AWS Auto Scaling groups. CodeDeploy automatically deploys to new Auto Scaling capacity as it is launched.
  • In cases where a deployment fails on an EC2 instance or on-premises instance, CodeDeploy will immediately make the last 4k of deployment log data available to customers in the console, making it easy to quickly troubleshoot a failed application deployment.
  • Monitor and update the state of your on-premises instances directly from the CodeDeploy Console or AWS Command Line Interface (CLI).
  • Since CodeDeploy uses a pull-based model for our agent, on-premises instances only need outbound HTTPS access to the appropriate CodeDeploy endpoint (read about Regions and Endpoints to learn more).  Your instance and firewall do not need to allow SSH access or any other inbound connection to trigger a deployment.
  • Use CodeDeploy to update both Linux and Windows EC2 instances or on-premises instances. From 1 to 10,000+, CodeDeploy is architected to do deployments at any scale.

Getting Started
The easiest way to start managing on-premises instances with AWS CodeDeploy is via the AWS Command Line Interface (CLI). You can install the AWS CLI on your desktop or directly on the on-premises instance. Once you have the CLI installed, you'll be able to get your first on-premises instance running in three steps:

  1. Issue the aws deploy register command, which will create a new IAM user and associate it with the on-premises instance, register the instance with AWS CodeDeploy, and tag the instance with any tags specified.
  2. Issue the aws deploy install command from your on-premises instance. This will install the CodeDeploy agent onto the instance and configure it to communicate with a supported AWS Region. Today, CodeDeploy is available in the US East (Northern Virginia) and US West (Oregon) regions.
  3. Update or create a new CodeDeploy Deployment Group to add your on-premises instance to a new or existing application. You can do this step from the CLI or via the CodeDeploy console.

Once complete, you can deploy to your on-premises instance in the exact same way as any EC2 instance. For more details on the setup steps, please take a look at the CodeDeploy documentation.
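
If you prefer to script the registration side with an SDK instead of the aws deploy register wrapper, here is a sketch of the underlying calls using the AWS SDK for Python; the instance name, IAM user ARN, and tags are placeholders, and the agent install from step 2 still happens on the instance itself:

```python
import boto3

codedeploy = boto3.client("codedeploy", region_name="us-east-1")

# Placeholders: the IAM user must already exist, and its credentials are
# what the on-premises instance uses to authenticate to CodeDeploy.
codedeploy.register_on_premises_instance(
    instanceName="datacenter-rack12-host03",
    iamUserArn="arn:aws:iam::123456789012:user/codedeploy-onprem",
)

# Tags are how the instance gets matched to a deployment group.
codedeploy.add_tags_to_on_premises_instances(
    tags=[{"Key": "Environment", "Value": "Production"}],
    instanceNames=["datacenter-rack12-host03"],
)
```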

Pricing and Availability
You pay $0.02 per on-premises instance update using AWS CodeDeploy; there are no minimum fees and no upfront commitments. You will only be charged if CodeDeploy begins an update on an instance. There is no additional charge for code deployments to Amazon EC2 instances through AWS CodeDeploy.

Customers may register their on-premises instances against any of our available regions. For the best availability, we recommend that customers segment their on-premises instances to talk to the closest available region, in much the same way they would segment EC2 instances by region. For more information, please see the CodeDeploy Pricing and regions FAQ.

Andy Troutman, Software Development Manager

The Next Generation of Dense-storage Instances for EC2

Perhaps you, like many other AWS users, store and process  huge amounts of data in the cloud. Today we are announcing a new generation of Dense-storage instances that will provide you additional options for processing multi-terabyte data sets.

New D2 Instances
The new D2 instances are designed to provide you with additional compute power and memory (when compared to the first-generation HS1 instances) and the ability to sustain a high rate of sequential disk I/O for access to extremely large data sets, all at a very affordable price. The instances are based on Intel Xeon E5-2676 v3 (code name Haswell) processors running at a base clock frequency of 2.4 GHz and come in four instance sizes as follows:

  • d2.xlarge: 4 vCPUs, 30.5 GiB of RAM, 6 TB of instance storage (3 x 2 TB), moderate network performance, 437 MB/second disk read throughput (with 2 MiB blocks), $0.690 per hour for Linux On-Demand.
  • d2.2xlarge: 8 vCPUs, 61 GiB of RAM, 12 TB of instance storage (6 x 2 TB), high network performance, 875 MB/second disk read throughput, $1.380 per hour.
  • d2.4xlarge: 16 vCPUs, 122 GiB of RAM, 24 TB of instance storage (12 x 2 TB), high network performance, 1,750 MB/second disk read throughput, $2.760 per hour.
  • d2.8xlarge: 36 vCPUs, 244 GiB of RAM, 48 TB of instance storage (24 x 2 TB), 10 Gbps network performance, 3,500 MB/second disk read throughput, $5.520 per hour.

The prices listed above are for the US East (Northern Virginia) and US West (Oregon) regions. For more pricing information, take a look at the EC2 Pricing page.

You can also launch multiple D2 instances in a placement group for high-bandwidth, low-latency networking between the instances.
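
Here's a quick sketch, using the AWS SDK for Python, of launching a set of D2 instances into a cluster placement group; the AMI ID and group name are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a cluster placement group (name is a placeholder).
ec2.create_placement_group(GroupName="d2-analytics", Strategy="cluster")

# Launch four d2.8xlarge instances into the group; use an HVM Linux AMI.
ec2.run_instances(
    ImageId="ami-12345678",      # placeholder AMI ID
    InstanceType="d2.8xlarge",
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "d2-analytics"},
)
```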

Notes on Storage
The largest D2 instances (d2.8xlarge) are capable of providing up to 3,500 MB/second read and 3,100 MB/second write performance with a 2 MiB block size when launched with a Linux AMI.

In order to ensure the best disk throughput performance from your D2 instances on Linux, we recommend that you use the most recent version of the Amazon Linux AMI, or another Linux AMI with a kernel version of 3.8 or later. The D2 instances provide the best disk performance when you use a Linux kernel that supports Persistent Grants – an extension to the Xen block ring protocol that significantly improves disk throughput and scalability. The following Linux AMIs support this feature:

  • Amazon Linux AMI 2015.03 (HVM)
  • Ubuntu Server 14.04 LTS (HVM)
  • Red Hat Enterprise Linux 7.1 (HVM)
  • SUSE Linux Enterprise Server 12 (HVM)

For more information, read about Persistent Grants in the Xen Project  Blog.

The storage on this instance family is local, and has a lifetime equal to that of the instance. Therefore, you should think of these instances as building blocks that you can use to build a complete storage system. For example, you should build some redundancy into your storage architecture (e.g. RAID 1, 5, or 6) and you should use a fault-tolerant file system such as HDFS or Gluster. You should also back up your data to Amazon Simple Storage Service (S3) or Amazon Elastic Block Store (EBS) for increased durability.

Enhanced Networking
With Enhanced Networking and extremely high sequential I/O rates, these instances will chew through your Massively Parallel Processing (MPP) data warehouse, log processing, and MapReduce jobs. They will also make great hosts for your network file systems and data warehouses. In order to take advantage of Enhanced Networking, you need to use recent versions of the appropriate Windows or Linux AMIs and run inside of a VPC.

Amazon EBS–Optimized by Default
Each D2 instance type is EBS-optimized by default, and delivers dedicated block storage throughput ranging from 500 Mbps to 4,000 Mbps at no additional cost. EBS-optimized instances enable you to get consistently high performance for your Amazon EBS volumes by eliminating contention between Amazon EBS I/O and other network traffic from your D2 instance. For more information, see Amazon EBS-Optimized Instances.

Power to the People
Each virtual CPU (vCPU) is a hardware hyperthread on an Intel Xeon E5-2676 v3 (Haswell) processor.

The D2 instances take advantage of Intel Turbo for increased performance. As I have explained in the past, this technology allows the processor to run faster than its baseline speed (2.4 GHz) as long as it remains within predefined thermal and power limits, with an upper limit of 3.0 GHz.

The largest instance (d2.8xlarge) also gives you a pair of bonus features: NUMA support and CPU power management. NUMA (Non-Uniform Memory Access) allows you to specify an affinity between an application and a processor that will result in use of memory that is “closer” to the processor and therefore more rapidly accessed. CPU power management gives you control over the C-states and P-states to enable higher turbo frequencies and to lower performance variability, respectively.

Available Worldwide Now
You can launch D2 instances today in the US East (Northern Virginia), US West (Oregon), EU (Ireland), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Singapore), and Asia Pacific (Sydney) regions as On-Demand, Reserved Instances, or Spot Instances.

Jeff;