Category: Amazon EC2


Heads-Up – Longer EC2 & EBS Resource IDs Coming in 2016

My colleague Angela Chapman wrote the guest post below to make you aware of longer instance, reservation, volume, and snapshot IDs that we will be rolling out in 2016.

– Jeff;


EC2 and EBS are planning to increase the length of some of their resource IDs over the coming year. In 2016, we will be introducing longer IDs for instances, reservations, volumes, and snapshots. You will have until the end of next year to opt in to receiving the longer IDs, and the switch-over will not impact most customers. However, we wanted to make you aware of these upcoming changes so you can schedule time in 2016 to test your systems with the longer IDs.

We need to do this given how fast AWS is continuing to grow; we will start to run low on IDs for certain EC2 and EBS resources within a year or so.  In order to enable the long-term, uninterrupted creation of new instances, reservations, volumes, and snapshots, we will need to introduce a longer ID format for these resources.  The new IDs will be the same format as existing IDs, but longer. The current ID format is a resource identifier followed by an 8-character string, and the new format will be the same resource identifier followed by a 17-character string.

The vast majority of our customers will not be impacted by this change. Only systems that parse or store resource IDs might be impacted. If yours does, you can continue using the existing 8-character IDs for your existing resources; these will not change and will continue to be supported. Only resources that you create after opting in to the new format will receive the 17-character IDs.
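If your system does parse these IDs, a pattern that accepts both lengths is an easy way to prepare. Here is a minimal Python sketch (the regular expression and function name are mine, not part of any AWS SDK):

```python
import re

# Accept both the current 8-character and the upcoming 17-character
# hexadecimal suffixes for instance, reservation, volume, and snapshot IDs.
RESOURCE_ID = re.compile(r"^(i|r|vol|snap)-[0-9a-f]{8}(?:[0-9a-f]{9})?$")

def is_valid_resource_id(resource_id):
    """Return True if the ID matches either the old or the new format."""
    return RESOURCE_ID.match(resource_id) is not None
```

Testing your parsers against both forms now will save a scramble at the end of the transition period.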

We’ll enable you to opt in to receiving longer IDs over a transition period that starts in January 2016 and lasts through December 2016. During this period, use of the longer IDs will be optional. After December 2016, about 13 months from now, all new instances, volumes, reservations, and snapshots will receive the longer IDs.

The AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, and AWS SDKs are already compatible with longer resource IDs and do not need to be updated to support the longer ID format. We will be introducing new APIs to manage the opt-in process.

Additional information, including a detailed timeline and FAQs, can be found here.  If you have any questions, you can contact the AWS support team on the community forums and via AWS Premium Support.

Angela Chapman, Senior Product Manager

EC2 VPC VPN Update – NAT Traversal, Additional Encryption Options, and More

You can use Amazon Virtual Private Cloud to create a logically isolated section of the AWS Cloud. Within the VPC, you can define your desired IP address range, create subnets, configure route tables, and so forth. You can also use a network gateway to connect the VPC to your existing on-premises network using a hardware Virtual Private Network (VPN) connection. The VPN running in the AWS Cloud (also known as a VPN gateway or VGW) communicates with a customer gateway (CGW) on your network or in your data center (read about Your Customer Gateway to learn more).

Today we are adding several new features to the VPN. Here’s a summary:

  • NAT Traversal
  • Additional Encryption Options
  • Reusable IP addresses for the CGW

In order to take advantage of any of these new features, you will need to create a new VGW and then create new VPN tunnels with the desired attributes.

NAT Traversal
Network Address Translation (NAT) maps one range of IP addresses to another. Let’s say that you have private IP space on your local LAN that all connects to the internet through a single router or firewall, so you aren’t able to put your VPN device (CGW) on a public IP address of its own. You can now use Network Address Translation to map the CGW from a private IP address to a public one, and use NAT-Traversal, or NAT-T, to connect your CGW to your Virtual Private Gateway (VGW). NAT-T allows you to create IP connections that originate on-premises behind a NAT device and connect to a VPC using addresses that have been translated. This mapping takes place when the VPN connection is established.

You don’t need to do anything to set this up in the AWS Management Console. You just need to configure your NAT device for NAT-Traversal. You will also need to open up UDP port 4500 in your firewall in order to make use of NAT-T.

Additional Encryption Options
You can now make use of several new encryption options.

When the VPC’s hardware VPN is in the process of establishing a connection with your on-premises VPN, it proposes several different encryption options, each with a different strength. You can now configure the VPN on the VPC to propose AES256 as an alternative to the older and weaker AES128. If you decide to make use of this new option, you should configure your device so that it no longer accepts a proposal to use AES128 encryption.

The two endpoints participate in a Diffie-Hellman key exchange in order to establish a shared secret. The Diffie-Hellman groups used in the exchange will determine the strength of the hash on the keys. You can now configure the use of a wider range of groups:

  • Phase 1 can now use DH groups 2, 14-18, 22, 23, and 24.
  • Phase 2 can now use DH groups 1, 2, 5, 14-18, 22, 23, and 24.

Packets that flow across the VPN connection are verified using a hash algorithm. A matching hash gives a very high-quality indicator that the packet has not been maliciously modified along the way. You can now configure the VPN on the VPC to use the SHA-2 hashing algorithm with a 256-bit digest (also known as SHA-256). Again, you should configure your device to disallow the use of the weaker hash algorithms.
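To see the difference in digest size for yourself, Python’s standard hashlib module can compute both hashes locally (this is just an illustration; it is not how the VPN endpoints compute their integrity checks):

```python
import hashlib

payload = b"example packet payload"

sha1_digest = hashlib.sha1(payload).hexdigest()      # 160-bit digest
sha256_digest = hashlib.sha256(payload).hexdigest()  # 256-bit digest

# 160 bits -> 40 hex characters; 256 bits -> 64 hex characters
print(len(sha1_digest), len(sha256_digest))
```

The longer digest makes brute-force collision attacks dramatically harder, which is why disallowing the weaker algorithms matters.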

Reusable CGW IP Addresses
You no longer need to specify a unique IP address for each customer gateway connection that you create. Instead, you can now reuse an existing IP address. Many VPC users have been asking for this feature and I expect it to be well-used.

To learn more, read our FAQ and the VPC Network Administrator Guide.

Jeff;

New EC2 Run Command – Remote Instance Management at Scale

When you move from a relatively static and homogeneous computing environment where you have a small number of persistent, well-known servers (or instances, using Amazon Elastic Compute Cloud (EC2) terminology) to a larger and more dynamic and heterogeneous environment, you may need to think about managing and controlling those instances in a new way.

New EC2 Run Command
Today we are introducing EC2 Run Command. This new feature will help you to administer your instances (no matter how many you have) in a manner that is both easy and secure. This feature was designed to support a wide range of enterprise scenarios including installing software, running ad hoc scripts or Microsoft PowerShell commands, configuring Windows Update settings, and more. It is accessible from the AWS Management Console, the AWS Command Line Interface (CLI), the AWS Tools for Windows PowerShell, and the AWS SDKs. If you currently administer individual Windows instances by running PS1 scripts or individual PowerShell commands, you can now run them on one or more instances.

We built this feature after talking to many users about their management needs. Here are some of the themes that came about as a result of these conversations:

  • A need to implement configuration changes across their instances on a consistent yet ad hoc basis.
  • A need for reliable and consistent results across multiple instances.
  • Control over who can perform changes and what can be done.
  • A clear audit path of what actions were taken.
  • A desire to be able to do all of the above without the need for full remote desktop (RDP) access.

Command execution is secure, reliable, convenient, and scalable. You can create your own commands and exercise fine-grained control over execution privileges by using AWS Identity and Access Management (IAM). For example, you can specify that administrative commands can be run on a specific set of instances by a tightly controlled group of trusted users. All of the commands are centrally logged to AWS CloudTrail for easy auditing.

Run Command Benefits
The new Run Command feature was designed to provide you with the following benefits:

Control / Security – You can use IAM policies and roles to regulate access to commands and to instances. This allows you to reduce the number of users who have direct access to the instances.

Reliability – You can increase the reliability of your system by creating templates for your configuration changes. This will give you more control while also increasing predictability and reducing configuration drift over time.

Visibility – You will have more visibility into configuration changes because Run Command supports command tracking and is also integrated with CloudTrail.

Ease of Use – You can choose from a set of predefined commands, run them, and then track their progress using the Console, CLI, or API.

Customizability – You can create custom commands to tailor Run Command to the needs of your organization.

Exercising Run Command from the EC2 Console
Run Command works across all of your Windows instances and uses the existing EC2Config agent on the instances. Open the Console, select Commands, and review the prerequisites for using Run Command:

Click on Run a command to take you to the main Run Command screen. You’ll see your existing runs (if any) and the Run a command button:

Each row on the display represents a command that has been executed on an instance. Click on Run a command to start a new command:

The Command document menu contains seven predefined commands, along with any custom commands that you have created for your account:

Choose the appropriate document based on your use case and the change that you want to make to the target instance(s). Each document has a description and an explanation that will help you to make the right choice. For common administrative tasks, use the AWS-RunPowerShellScript document. This will allow you to run any PowerShell command or to call an existing PowerShell script.

After choosing the document, fill in the command (I used ipconfig), and choose the instances of interest (you can filter by attributes, tags, or keywords):

If you are running a command or script that will generate a lot of output on StdOut, you can specify an S3 bucket and a key prefix and the output will be routed there. If you don’t do this, Run Command will capture and display the first 2500 characters of console output.
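In other words, when no bucket is specified the inline view is a truncation of the full output. Conceptually (the constant comes from the limit described above; the helper function is hypothetical):

```python
MAX_INLINE_CHARS = 2500

def inline_output(stdout):
    """Keep only the portion of command output that is displayed inline."""
    return stdout[:MAX_INLINE_CHARS]
```

For anything chattier than a quick ipconfig, routing the full output to S3 is the safer choice.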

When you are ready to proceed, click on Run. The Console will display a confirmation message:

Return to the command history and inspect it to find the results:

Select the desired command, and click on the Output tab:

Then click on View Output:

Using Run Command in Production
Here are some of the ways that you can make use of Run Command in your AWS environment:

  • Install and configure third-party agents and software.
  • Manage local groups and users.
  • Check for installed software or patches, and act on the results.
  • Restart a Windows service.
  • Update a scheduled task.

Available Now
You can use Run Command today in the US East (Northern Virginia), US West (Oregon), and EU (Ireland) regions. Simply open the Run Command Console or use the latest AWS Tools for Windows PowerShell or AWS Command Line Interface (CLI). There is no charge for this feature; you pay only for the AWS resources that you consume.

Jeff;

PS – We plan to provide similar functionality for instances that run Linux. Stay tuned to the blog for more info!

Learn About the Newest AWS Services – Attend Our October Webinars

If you attended AWS re:Invent, you were among the first to know about Amazon QuickSight, AWS IoT, Kinesis Firehose, and our other new offerings. Perhaps you had time to attend a session to learn more about the new service or services that were of interest to you. If you didn’t attend re:Invent or missed a session or two and are ready to learn more, I’ve got you covered. We will be running nine new-product webinars later this month. Each webinar is designed to provide you with the information that you need to have in order to be up and running as quickly as possible.

Here’s what we have for you! The webinars are free but “seating” is limited and you should definitely sign up ahead of time if you want to attend (all times are Pacific):

Tuesday, October 27
QuickSight is a fast, cloud-powered business intelligence tool. You can build visualizations, perform ad-hoc analysis, and get business insights from your data.

AWS IoT is a managed cloud platform that lets connected devices easily and securely interact with cloud applications and other devices.

Amazon Kinesis Firehose is the easiest way to load streaming data into AWS.

Wednesday, October 28
Spot Blocks allow you to launch Spot instances that will run for a finite duration (1 to 6 hours).

AWS WAF is a web application firewall that helps protect your web applications from common exploits.

Amazon Elasticsearch Service is a managed service that makes it easy to deploy, operate, and scale Elasticsearch in the AWS Cloud.

Thursday, October 29
AWS Lambda lets you run code in the cloud without provisioning or managing servers.

AWS Mobile Hub provides an integrated console that helps you build, test, and monitor your mobile apps.

AWS Import/Export Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of AWS.

Jeff;


EC2 Container Service Update – Container Registry, ECS CLI, AZ-Aware Scheduling, and More

I’m really excited by the Docker-driven, container-based deployment model that is quickly becoming the preferred way to build, run, scale, and quickly update new applications. Since we launched Amazon EC2 Container Service last year, we have seen customers use it to host and run their microservices, web applications, and batch jobs.

Many developers have told me that containers have allowed them to speed up their development, testing, and deployment efforts. They love the fact that individual containers can hold “standardized” application components, each of which can be built using the language, framework, and middleware best suited to the task at hand. The isolation provided by the containers gives them the freedom to innovate at a more granular level instead of putting the entire system at risk due to large-scale changes.

Based on the feedback that we have received from our contingent of Amazon ECS and Docker users, we are announcing some powerful new features – the Amazon EC2 Container Registry and the Amazon EC2 Container Service CLI. We are also making the Amazon ECS scheduler more aware of Availability Zones and adding some new container configuration options.

Let’s dive in!

Amazon EC2 Container Registry
Docker (and hence EC2 Container Service) is built around the concept of an image. When you launch a container, you reference an image, which is pulled from a Docker registry. This registry is a critical part of the deployment process. Based on conversations with our customers, we have learned that they need a registry that is highly available, exceptionally scalable, and globally accessible, with the ability to support deployments that span two or more AWS regions. They also want it to integrate with AWS Identity and Access Management (IAM) to simplify authorization and to provide fine-grained control.

While customers could meet most of these requirements by hosting their own registry, they have told us that this would impose an operational burden that they would strongly prefer to avoid.

Later this year we will make the Amazon EC2 Container Registry (Amazon ECR) available. This fully managed registry will address the issues that I mentioned above by making it easy for you to store, manage, distribute, and collaborate around Docker container images.

Amazon ECR is integrated with ECS and will be easy to integrate into your production workflow. You can use the Docker CLI running on your development machine to push images to Amazon ECR, where Amazon ECS can retrieve them for use in production deployments.

Images are stored durably in S3 and are encrypted at rest and in transit, with access controls via IAM users and roles. You will pay only for the data that you store and for data that you transfer to the Internet.

Here’s a sneak peek at the console interface:

You can visit the signup page to learn more and to sign up for early access. If you are interested in participating in this program, I would encourage you to sign up today.

We are already working with multiple launch partners including Shippable, CloudBees, CodeShip, and Wercker to provide integration with Amazon ECS and Amazon ECR, with a focus on automatically building and deploying Docker images. To learn more, visit our Container Partners page.

Amazon EC2 Container Service CLI
The ECS Command Line Interface (ECS CLI) provides high-level commands that simplify creating, updating, and monitoring clusters and tasks from a local development environment.

The Amazon ECS CLI supports Docker Compose, a popular open-source tool for defining and running multi-container applications. You can use the ECS CLI as part of your everyday development and testing cycle as an alternative to the AWS Management Console.

You can get started with the ECS CLI in a couple of minutes. Download it (read the directions first), install it, and then configure it as follows (you have other choices and options, of course):

$ ecs-cli configure --region us-east-1 --cluster my-cluster

Launch your first cluster like this:

$ ecs-cli up --keypair my-keys --capability-iam --size 2

Docker Compose requires a configuration file. Here’s a simple one to get started (put this in docker-compose.yml):

web:
  image: amazon/amazon-ecs-sample
  ports:
  - "80:80"

Now run this on the cluster:

$ ecs-cli compose up
INFO[0000] Found task definition TaskDefinition=arn:aws:ecs:us-east-1:980116778723:task-definition/ecscompose-bin:1
INFO[0000] Started task with id:arn:aws:ecs:us-east-1:9801167:task/fd8d5a69-87c5-46a4-80b6-51918092e600

Then take a peek at the running tasks:

$ ecs-cli compose ps
Name                                      State    Ports
fd8d5a69-87c5-46a4-80b6-51918092e600/web  RUNNING  54.209.244.64:80->80/tcp

Point your web browser at that address to see the sample app running in the cluster.

The ECS CLI includes lots of other options (run it with --help to see all of them). For example, you can create and manage long-running services. Here’s the list of options:

The ECS CLI is available under an Apache 2.0 license (code is available at https://github.com/aws/amazon-ecs-cli) and we are looking forward to seeing your pull requests.

New Docker Container Configuration Options
A task definition is a description of an application that lets you define the containers that are scheduled together on an EC2 instance. Some of the parameters you can specify in a task definition include which Docker images to use, how much CPU and memory to use with each container, and which (if any) ports from the container are mapped to the host.

Task definitions now support lots of Docker options including Docker labels, working directory, networking disabled, privileged execution, read-only root filesystem, DNS servers, DNS search domains, ulimits, log configuration, extra hosts (hosts to add to /etc/hosts), and security options for Multi-Level Security (MLS) systems such as SELinux.
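Here is a sketch of a container definition that exercises a few of the new options; the parameter names follow the task definition schema, but all of the values are purely illustrative:

```json
{
  "name": "web",
  "image": "amazon/amazon-ecs-sample",
  "memory": 128,
  "essential": true,
  "workingDirectory": "/srv",
  "readonlyRootFilesystem": true,
  "dnsServers": ["10.0.0.2"],
  "dockerLabels": {"team": "frontend"},
  "ulimits": [{"name": "nofile", "softLimit": 1024, "hardLimit": 4096}],
  "extraHosts": [{"hostname": "db.internal", "ipAddress": "10.0.0.10"}],
  "logConfiguration": {"logDriver": "syslog"}
}
```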

The Task Definition Editor in the ECS Console has been updated and now accepts the new configuration options:

For more information, read about Task Definition Parameters.

Scheduling is Now Aware of Availability Zones
We introduced the Amazon ECS service scheduler earlier this year as a way to easily schedule containers for long running stateless services and applications. The service scheduler optionally integrates with Elastic Load Balancing. It ensures that the specified number of tasks are constantly running and restarts tasks if they fail. The service scheduler is the primary way customers deploy and run production services with ECS and we want to continue to make it easier to do so.

Today the service scheduler is availability zone aware. As new tasks are launched, the service scheduler will spread the tasks to maintain balance across AWS availability zones.

Amazon ECS at re:Invent
If you are at AWS re:Invent and want to learn more about how your colleagues (not to mention your competitors) are using container-based computing and Amazon ECS, check out the following sessions:

  • CMP302 – Amazon EC2 Container Service: Distributed Applications at Scale (to be live streamed).
  • CMP406 – Amazon ECS at Coursera.
  • DVO305 – Turbocharge Your Continuous Deployment Pipeline with Containers.
  • DVO308 – Docker & ECS in Production: How We Migrated Our Infrastructure from Heroku to AWS.
  • DVO313 – Building Next-Generation Applications with Amazon ECS.
  • DVO317 – From Local Docker Development to Production Deployments.

Jeff;

EC2 Instance Update – X1 (SAP HANA) & T2.Nano (Websites)

AWS customers love to share their plans and their infrastructure needs with us. We, in turn, love to listen and to do our best to meet those needs. A look at the EC2 instance history should tell you a lot about our ability to listen to our customers and to respond with an increasingly broad range of instances (check out the EC2 Instance History for a detailed look).

Lately, we have been hearing two types of requests, both driven by some important industry trends:

  • On the high end, many of our enterprise customers are clamoring for instances that have very large amounts of memory. They want to run SAP HANA and other in-memory databases, generate analytics in real time, process giant graphs using Neo4j or Titan, or create enormous caches.
  • On the low end, other customers need a little bit of processing power to host dynamic websites that usually get very modest amounts of traffic, or to run their microservices or monitoring systems.

In order to meet both of these needs, we are planning to launch two new EC2 instances in the coming months. The upcoming X1 instances will have loads of memory; the t2.nano will provide that little bit of processing power, along with bursting capabilities similar to those of its larger siblings.

X1 – Tons of Memory
X1 instances will feature up to 2 TB of memory, a full order of magnitude larger than the current generation of high-memory instances. These instances are designed for demanding enterprise workloads including production installations of SAP HANA, Microsoft SQL Server, Apache Spark, and Presto.

The X1 instances will be powered by up to four Intel® Xeon® E7 processors. The processors have high memory bandwidth and large L3 caches, both designed to support high-performance, memory-bound applications. With over 100 vCPUs, these instances will be able to handle highly concurrent workloads with ease.

We expect to have the X1 available in the first half of 2016. I’ll share pricing and other details at launch time.

T2.Nano – A Little (Burstable) Processing Power
The T2 instances provide a baseline level of processing power, along with the ability to save up unused cycles (“CPU Credits”) and use them when the need arises (read about New Low Cost EC2 Instances with Burstable Performance to learn more). We launched the t2.micro, t2.small, and t2.medium a little over a year ago. The burstable model has proven to be extremely popular with our customers. It turns out that most of them never actually consume all of their CPU Credits and are able to run at full core performance. We extended this model with the introduction of t2.large just a few months ago.

The next step is to go in the other direction. Later this year we will introduce the t2.nano instance. You’ll get 1 vCPU and 512 MB of memory, and the ability to run at full core performance for over an hour on a full credit balance. Each newly launched t2.nano starts out with sufficient CPU Credits to allow you to get started as quickly as possible.

Due to the burstable performance, these instances are going to be a great fit for websites that usually get modest amounts of traffic. During those quiet times, CPU Credits will accumulate, providing a reserve that can be drawn upon when traffic surges.

Again, I’ll share more info as we get closer to the launch!

Jeff;

New – EC2 Spot Blocks for Defined-Duration Workloads

I do believe that there’s a strong evolutionary aspect to the continued development of AWS. Services start out simple and gain new features over time. Our customers start to use those features, provide us with ample feedback, and we respond by enhancing existing features and building new ones. As an example, consider the history of Amazon Elastic Compute Cloud (EC2) payment models. We started with On-Demand pricing, and then added Reserved Instances (further enhanced with three different options). We also added Spot instances and later enhanced them with the new Spot Fleet option. Here’s a simple evolutionary tree:

Spot instances are a great fit for applications that are able to checkpoint and continue after an interruption, along with applications that might need to run for an indeterminate amount of time.  They also work really well for stateless applications such as web and application servers and can offer considerable savings over On-Demand prices.

Some existing applications are not equipped to generate checkpoints over the course of a multi-hour run. Many applications of this type are compute-intensive and (after some initial benchmarking) run in a predictable amount of time. Applications of this type often perform batch processing, encoding, rendering, modeling, analysis, or continuous integration.

New Spot Block Model
In order to make EC2 an even better fit for this type of defined-duration workload, you can now launch Spot instances that will run continuously for a finite duration (1 to 6 hours). Pricing is based on the requested duration and the available capacity, and is typically 30% to 45% less than On-Demand, with an additional 5% off during non-peak hours for the region. Spot blocks and Spot instances are priced separately; you can view the current Spot pricing to learn more.
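As a rough back-of-the-envelope sketch of what those discounts mean (the function is illustrative; actual prices depend on the requested duration and available capacity):

```python
def spot_block_estimate(on_demand_price, discount=0.30, off_peak=False):
    """Rough hourly estimate for a Spot block, given an On-Demand price."""
    price = on_demand_price * (1 - discount)
    if off_peak:
        price *= 0.95  # additional 5% off during non-peak hours
    return price

# a $1.00/hour On-Demand instance at the low end of the discount range
print(spot_block_estimate(1.00))
```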

You simply submit a Spot instance request and use the new BlockDuration parameter to specify the number of hours you want your instance(s) to run, along with the maximum price that you are willing to pay. When Spot instance capacity is available for the requested duration, your instances will launch and run continuously for a flat hourly price. They will be terminated automatically at the end of the time block (you can also terminate them manually). This model is a good fit for situations where you have jobs that need to run continuously for up to 6 hours.

Here’s how you would submit a request of this type using the AWS Command Line Interface (CLI):

$ aws ec2 request-spot-instances \
  --block-duration-minutes 360 \
  --instance-count 2 \
  --spot-price "0.25" ...

You can also do this by calling the RequestSpotInstances function (Console support is in the works).

Here’s the revised evolutionary tree:

Available Now
You can start to make use of Spot blocks today. To learn more, read about Using Spot Blocks.

— Jeff;

Coming Soon – EC2 Dedicated Hosts

Sometimes business enables technology, and sometimes technology enables business!

If you are migrating from an existing environment to AWS, you may have purchased volume licenses for software that is licensed for use on a server with a certain number of sockets or physical cores. Or, you may be required to run it on a specific server for a given period of time. Licenses for Windows Server, Windows SQL Server, Oracle Database, and SUSE Linux Enterprise Server often include this requirement.

We want to make sure that you can continue to derive value from these licenses after you migrate to AWS. In general, we call this model Bring Your Own License, or BYOL. In order to do this while adhering to the terms of the license, you are going to need to control the mapping of the EC2 instances to the underlying, physical servers.

Introducing EC2 Dedicated Hosts
In order to give you control over this mapping, we are announcing a new model that we call Amazon EC2 Dedicated Hosts.  This model will allow you to allocate an actual physical server (the Dedicated Host) and then launch one or more EC2 instances of a given type on it. You will be able to target and reuse specific physical servers and stay within the confines of your existing software licenses.

In addition to allowing you to Bring Your Own License to the cloud to reduce costs,  Amazon EC2 Dedicated Hosts can help you to meet stringent compliance and regulatory requirements, some of which require control and visibility over instance placement at the physical host level. In these environments, detailed auditing of changes is also a must; AWS Config will help out by recording all changes to your instances and your Amazon EC2 Dedicated Hosts.

Using Dedicated Hosts
You will start by allocating a Dedicated Host in a specific region and Availability Zone, and for a particular type of EC2 instance (we’ll have API, CLI, and Console support for doing this).

Each host has room for a predefined number of instances of a particular type. For example, a specific host could have room for eight c3.xlarge instances (this is a number that I made up for this post).  After you allocate the host, you can then launch up to eight c3.xlarge instances on it.
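The capacity model can be sketched as a simple slot counter (a toy illustration, not an AWS API):

```python
class DedicatedHost:
    """Toy model: a host has room for a fixed number of instances of one type."""

    def __init__(self, instance_type, capacity):
        self.instance_type = instance_type
        self.capacity = capacity
        self.running = 0

    def launch(self):
        """Launch one instance on this host, if a slot is free."""
        if self.running >= self.capacity:
            raise RuntimeError("no capacity left on this host")
        self.running += 1

# a hypothetical host with room for eight c3.xlarge instances
host = DedicatedHost("c3.xlarge", 8)
for _ in range(8):
    host.launch()
# a ninth launch would raise RuntimeError
```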

You will have full control over placement. You can launch instances on a specific Dedicated Host, or you can have EC2 place them automatically onto your Dedicated Hosts. Dedicated Hosts also support affinity, so that instances are placed on the same host even after they are rebooted, or stopped and then restarted.

With Dedicated Hosts, the same “cloudy” benefits that you get with using EC2 instances apply but you have additional controls and visibility at your disposal to address your requirements, even as they change.

Purchase Options
Amazon EC2 Dedicated Hosts will be available in Reserved and On-Demand form. In either case, you pay (or consume a previously purchased Reserved Dedicated Host) when you allocate the host, regardless of whether you choose to run instances on it or not.

You will be able to bring your existing machine images to AWS using VM Import and the AWS Management Portal for vCenter. You can also find machine images in the AWS Marketplace and launch them on Amazon EC2 Dedicated Hosts using your existing licenses, and you can make use of the Amazon Linux AMI and other Linux operating systems.

Stay Tuned
I’ll have more to say about this feature before too long. Stay tuned to the blog for details!

Jeff;

Spot Fleet Update – Console Support, Fleet Scaling, CloudFormation

There’s a lot of buzz about Spot instances these days. Customers are really starting to understand the power that comes with the ability to name their own price for compute power!

After launching the Spot fleet API in May to allow you to manage thousands of Spot instances with a single request, we followed up with resource-oriented bidding in August and the option to distribute your fleet across multiple instance pools in September.

One quick note before I dig in: While the word “fleet” might make you think that this model is best suited to running hundreds or thousands of instances at a time, everything that I have to say here applies regardless of the size of your fleet, whether it is composed of one, two, three, or three thousand instances! As you will see in a moment, you get a console that’s flexible and easy to use, along with the ability to draw resources from multiple pools of Spot capacity, when you create and run a Spot fleet.

Today we are adding three more features to the roster: a new Spot console, the ability to change the size of a running fleet, and CloudFormation support.

New Spot Console (With Fleet Support)
In addition to CLI and API support, you can now design and launch Spot fleets using the new Spot Instance Launch Wizard. The new wizard allows you to create resource-oriented bids that are denominated in instances, vCPUs, or arbitrary units that you can specify when you design your fleet.  It also helps you to choose a bid price that is high enough (given the current state of the Spot market) to allow you to launch instances of the desired types.

I start by choosing the desired AMI (stock or custom), the capacity unit (I’ll start with instances), and the amount of capacity that I need. I can specify a fixed bid price across all of the instance types that I select, or set it to a percentage of the On-Demand price for each type. Either way, the wizard will indicate (with the “caution” icon) any bid prices that are too low to succeed:
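To make the percentage-based bidding and the “caution” check concrete, here is a rough sketch of the arithmetic involved. The function names and all of the prices are illustrative only; this is not an AWS API, just a model of what the wizard is doing on your behalf:

```python
def bid_from_percentage(on_demand_price, percentage):
    """Convert a percentage-of-On-Demand bid into a dollar price."""
    return round(on_demand_price * percentage / 100.0, 4)

def bid_status(bid, current_spot_price):
    """Mimic the wizard's caution icon: a bid at or below the
    current Spot market price is unlikely to launch instances."""
    return "ok" if bid > current_spot_price else "caution"

# Illustrative prices only.
bid = bid_from_percentage(0.209, 50)   # 50% of a $0.209/hr On-Demand rate
print(bid)                             # 0.1045
print(bid_status(bid, 0.0331))         # ok
print(bid_status(bid, 0.2500))         # caution
```

A bid that draws the caution icon can still be submitted; it simply will not be fulfilled until the Spot price drops below it.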

When I find a set of prices and instance types that satisfies my requirements, I can select them and click on Next to move forward.

I can also make resource-oriented bids using a custom capacity unit. When I do this I have even more control over the bid. First, I can specify the minimum requirements (vCPUs, memory, instance storage, and generation) for the instances that I want in my fleet:

The display will update to indicate the instance types that meet my requirements.

The second element that I can control is the amount of capacity per instance type (as I explained in an earlier post, this might be driven by the amount of throughput that a particular instance type can deliver for my application). I can control this by clicking in the Weighted Capacity column and entering the designated amount of capacity for each instance type:

As you can see from the screen shot above, I have chosen all of the instance types that offer weighted capacity at less than $0.35 / unit.
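Behind the scenes, the weighted capacity of each instance type determines how many instances are needed to reach the fleet's target capacity. A quick back-of-the-envelope sketch (the helper function here is my own, not part of any AWS API):

```python
import math

def instances_needed(target_capacity, weighted_capacity):
    """Number of instances of a given type needed to reach the
    target capacity, when each instance supplies weighted_capacity
    units. The count rounds up, so the fleet can slightly exceed
    the target."""
    return math.ceil(target_capacity / weighted_capacity)

print(instances_needed(100, 8))   # 13 instances of an 8-unit type
print(instances_needed(100, 16))  # 7 instances of a 16-unit type
```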

Now that I have designed my fleet, I can configure it by choosing the allocation strategy (diversified or lowest price), the VPC, security groups, availability zones / subnets, and a key pair for SSH access:

I can also click on Advanced to create requests that are valid only between certain dates and times, and to set other options:

After that I review my settings and click on Launch to move ahead:

My Spot fleet is visible in the Console. I can select it and see which instances were used to satisfy my request:

If I plan to make requests for similar fleets from time to time, I can download a JSON version of my settings:

Fleet Size Modification
We are also giving you the ability to modify the size of an existing fleet. The new ModifySpotFleetRequest function allows you to make an existing fleet larger or smaller by specifying a new target capacity.

When you increase the capacity of one of your existing fleets, new bids will be placed in accordance with the fleet’s allocation strategy (lowest price or diversified).

When you decrease the capacity of one of your existing fleets, you can request that excess instances be terminated based on the allocation strategy. Alternatively, you can leave the instances running, and manually terminate them using a strategy of your own.
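To make the two allocation strategies concrete, here is a toy model of how a fleet's capacity might be spread across Spot capacity pools when it scales. This is illustrative only (the pool names and prices are made up, and the real EC2 allocation logic is more sophisticated):

```python
def allocate(target, pools, strategy):
    """Toy model of spreading target capacity across Spot pools.
    pools: list of (name, price_per_unit) tuples."""
    if strategy == "lowestPrice":
        # Put all capacity into the cheapest pool.
        cheapest = min(pools, key=lambda p: p[1])
        return {cheapest[0]: target}
    elif strategy == "diversified":
        # Spread capacity as evenly as possible across all pools.
        share, extra = divmod(target, len(pools))
        return {name: share + (1 if i < extra else 0)
                for i, (name, _) in enumerate(pools)}
    raise ValueError("unknown strategy: " + strategy)

# Hypothetical pools (availability zone / instance type, $ per unit).
pools = [("us-east-1a/c4.xlarge", 0.05),
         ("us-east-1b/c4.xlarge", 0.07),
         ("us-east-1a/m4.xlarge", 0.06)]

print(allocate(10, pools, "lowestPrice"))  # {'us-east-1a/c4.xlarge': 10}
print(allocate(10, pools, "diversified"))
```

Scaling down simply runs the same logic in reverse: excess capacity is removed from pools according to the strategy, unless you opt to terminate instances yourself.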

You can also modify the size of your fleet using the Console:

CloudFormation Support
We are also adding support for the creation of Spot fleets via a CloudFormation template. Here’s a sample:

"SpotFleet": {
  "Type": "AWS::EC2::SpotFleet",
  "Properties": {
    "SpotFleetRequestConfigData": {
      "IamFleetRole": { "Ref": "IAMFleetRole" },
      "SpotPrice": "1000",
      "TargetCapacity": { "Ref": "TargetCapacity" },
      "LaunchSpecifications": [
      {
        "EbsOptimized": "false",
        "InstanceType": { "Ref": "InstanceType" },
        "ImageId": { "Fn::FindInMap": [ "AWSRegionArch2AMI", { "Ref": "AWS::Region" },
                     { "Fn::FindInMap": [ "AWSInstanceType2Arch", { "Ref": "InstanceType" }, "Arch" ] }
                   ]},
        "WeightedCapacity": "8"
      },
      {
        "EbsOptimized": "true",
        "InstanceType": { "Ref": "InstanceType" },
        "ImageId": { "Fn::FindInMap": [ "AWSRegionArch2AMI", { "Ref": "AWS::Region" },
                     { "Fn::FindInMap": [ "AWSInstanceType2Arch", { "Ref": "InstanceType" }, "Arch" ] }
                   ]},
        "Monitoring": { "Enabled": "true" },
        "SecurityGroups": [ { "GroupId": { "Fn::GetAtt": [ "SG0", "GroupId" ] } } ],
        "SubnetId": { "Ref": "Subnet0" },
        "IamInstanceProfile": { "Arn": { "Fn::GetAtt": [ "RootInstanceProfile", "Arn" ] } },
        "WeightedCapacity": "8"
      }
      ]
    }
  }
}

Available Now
The new Spot Fleet Console, the new ModifySpotFleetRequest function, and the CloudFormation support are available now and you can start using them today!

Jeff;

Now Available – Amazon Linux AMI 2015.09

My colleague Max Spevack runs the team that produces the Amazon Linux AMI. He wrote the guest post below to announce the newest release!

Jeff;


The Amazon Linux AMI is a supported and maintained Linux image for use on Amazon EC2.

We offer new major versions of the Amazon Linux AMI after a public testing phase that includes one or more Release Candidates. The Release Candidates are announced in the EC2 forum and we welcome feedback on them.

Launching 2015.09 Today
Today we announce the 2015.09 Amazon Linux AMI, which is supported in all regions and on all current-generation EC2 instance types.  The Amazon Linux AMI supports both PV and HVM mode, as well as both EBS-backed and Instance Store-backed AMIs.

You can launch this new version of the AMI in the usual ways. You can also upgrade an existing EC2 instance by running the following commands:

$ sudo yum clean all
$ sudo yum update

Then reboot the instance to pick up the updated kernel.

New Kernel
A major new feature in this release is the 4.1.7 kernel, which is the most recent long-term stable release kernel. Of particular interest to many customers is the support for OverlayFS in the 4.x kernel series.

New Features
The roadmap for the Amazon Linux AMI is driven in large part by customer requests. During this release cycle, we have added a number of features as a result of these requests; here’s a sampling:

  • Based on numerous customer requests and in order to support joining Amazon Linux AMI instances to an AWS Directory Service directory, we have added Samba 4.1 to the Amazon Linux AMI repositories, available via sudo yum install samba.
  • Numerous customers have asked for PostgreSQL 9.4 and it is now available in our Amazon Linux AMI repositories as a separate package from PostgreSQL 9.2 and 9.3. PostgreSQL 9.4 is available via sudo yum install postgresql94 and the 2015.09 Amazon Linux AMI repositories include PostgreSQL 9.4.4.
  • A frequent customer request has been MySQL 5.6, and we are pleased to offer it in the 2015.09 repositories as a separate package from MySQL 5.1 and 5.5. MySQL 5.6 is available via sudo yum install mysql56 and the 2015.09 Amazon Linux AMI repositories include MySQL 5.6.26.
  • We introduced support for Docker and Go in our 2014.03 AMI, and we continue to follow upstream developments in each. The lead-up to the 2015.09 release included an update to Go 1.4 and to Docker 1.7.1.
  • We already provide Python 2.6, 2.7 (default), and 3.4 in the Amazon Linux AMI, but several customers have also asked for the PyPy implementation of Python. We’re pleased to include PyPy 2.4 in our preview repository. PyPy 2.4 is compatible with Python 2.7.8 and is installable via sudo yum --enablerepo=amzn-preview install pypy.
  • In our 2015.03 release we added an initial preview of the Rust programming language. Upstream development has continued on this language, and we have updated from Rust 1.0 to Rust 1.2 for the 2015.09 release. You can install the Rust compiler by running sudo yum --enablerepo=amzn-preview install rust.

The release notes contain a longer discussion of the new features and updated packages, including an updated version of Emacs prepared specially for Jeff in order to ensure timely publication of this blog post!

— Max Spevack, Development Manager, Amazon Linux AMI.

PS – If you enjoy the Amazon Linux AMI offering and would like to work on future versions, let us know!