Category: Amazon EC2


Amazon EBS Update – New Elastic Volumes Change Everything

by Jeff Barr | in Amazon EC2, Amazon Elastic Block Store, Launch

It is always interesting to speak with our customers and to learn how the dynamic nature of their business and their applications drives their block storage requirements. These needs change over time, creating the need to modify existing volumes to add capacity or to change performance characteristics. Today’s 24×7 operating models leave no room for downtime; as a result, customers want to make changes without going offline or otherwise impacting operations.

Over the years, we have introduced new EBS offerings that support an ever-widening set of use cases. For example, we introduced two new volume types in 2016 – Throughput Optimized HDD (st1) and Cold HDD (sc1). Our customers want to use these volume types as storage tiers, modifying the volume type to save money or to change the performance characteristics, without impacting operations.

In other words, our customers want their EBS volumes to be even more elastic!

New Elastic Volumes
Today we are launching a new EBS feature we call Elastic Volumes and making it available for all current-generation EBS volumes attached to current-generation EC2 instances. You can now increase volume size, adjust performance, or change the volume type while the volume is in use. You can continue to use your application while the change takes effect.

This new feature will greatly simplify (or even eliminate) many of your planning, tuning, and space management chores. Instead of a traditional provisioning cycle that can take weeks or months, you can make changes to your storage infrastructure instantaneously, with a simple API call.

You can address the following scenarios (and many more that you can come up with on your own) using Elastic Volumes:

Changing Workloads – You set up your infrastructure in a rush and used the General Purpose SSD volumes for your block storage. After gaining some experience you figure out that the Throughput Optimized volumes are a better fit, and simply change the type of the volume.

Spiking Demand – You are running a relational database on a Provisioned IOPS volume that is set to handle a moderate amount of traffic during the month, with a 10x spike during the final three days due to month-end processing. You can use Elastic Volumes to dial up the provisioning in order to handle the spike, and then dial it down afterward.

Increasing Storage – You provisioned a volume for 100 GiB and an alarm goes off indicating that it is now at 90% of capacity. You increase the size of the volume and expand the file system to match, with no downtime, and in a fully automated fashion.

Using Elastic Volumes
You can manage all of this from the AWS Management Console, via API calls, or from the AWS Command Line Interface (CLI).

To make a change from the Console, simply select the volume and choose Modify Volume from the Action menu:

Then make any desired changes to the volume type, size, and Provisioned IOPS (if appropriate). Here I am changing my 75 GiB General Purpose (gp2) volume into a 400 GiB Provisioned IOPS volume, with 20,000 IOPS:

When I click on Modify, I confirm my intent and click on Yes:

The volume’s state reflects the progress of the operation (modifying, optimizing, or complete):

The next step is to expand the file system so that it can take advantage of the additional storage space. To learn how to do that, read Expanding the Storage Space of an EBS Volume on Linux or Expanding the Storage Space of an EBS Volume on Windows. You can expand the file system as soon as the state transitions to optimizing (typically a few seconds after you start the operation). The new configuration is in effect at this point, although optimization may continue for up to 24 hours. Billing for the new configuration begins as soon as the state turns to optimizing (there’s no charge for the modification itself).
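
If you prefer to script the change, here’s a minimal sketch using boto3 that mirrors the gp2-to-io1 example above (the volume ID is a placeholder):

import time

import boto3

ec2 = boto3.client('ec2')
VOLUME_ID = 'vol-0123456789abcdef0'  # placeholder volume ID

# Request the change: 400 GiB Provisioned IOPS (io1) with 20,000 IOPS
ec2.modify_volume(VolumeId=VOLUME_ID, VolumeType='io1', Size=400, Iops=20000)

# Poll the modification; the new configuration is usable once the state
# reaches 'optimizing' (optimization can continue for up to 24 hours)
while True:
    mod = ec2.describe_volumes_modifications(
        VolumeIds=[VOLUME_ID])['VolumesModifications'][0]
    if mod['ModificationState'] in ('optimizing', 'completed'):
        break
    time.sleep(5)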

Automatic Elastic Volume Operations
While manual changes are fine, there’s plenty of potential for automation. Here are a couple of ideas:

Right-Sizing – Use a CloudWatch alarm to watch for a volume that is running at or near its IOPS limit. Initiate a workflow and approval process that could provision additional IOPS or change the type of the volume. Or, publish a “free space” metric to CloudWatch and use a similar approval process to resize the volume and the filesystem.

Cost Reduction – Use metrics or schedules to reduce IOPS or to change the type of a volume. Last week I spoke with a security auditor at a university. He collects tens of gigabytes of log files from all over campus each day and retains them for 60 days. Most of the files are never read, and those that are can be scanned at a leisurely pace. They could address this use case by creating a fresh General Purpose volume each day, writing the logs to it at high speed, and then changing the type to Throughput Optimized.
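
As a sketch of the right-sizing idea, an instance could publish its free-space percentage to CloudWatch like this (the Custom/Storage namespace and the /data mount point are examples; a CloudWatch alarm on this metric would then trigger the resize workflow):

import shutil

import boto3

cloudwatch = boto3.client('cloudwatch')

# Measure free space on an example mount point and publish it as a custom metric
usage = shutil.disk_usage('/data')
cloudwatch.put_metric_data(
    Namespace='Custom/Storage',
    MetricData=[{
        'MetricName': 'FreeSpacePercent',
        'Dimensions': [{'Name': 'MountPoint', 'Value': '/data'}],
        'Value': 100.0 * usage.free / usage.total,
        'Unit': 'Percent'}])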

As I mentioned earlier, you need to resize the file system in order to be able to access the newly provisioned space on the volume. In order to show you how to automate this process, my colleagues built a sample that makes use of CloudWatch Events, AWS Lambda, EC2 Systems Manager, and some PowerShell scripting. The rule matches the modifyVolume event emitted by EBS and invokes the logEvents Lambda function:

The function locates the volume, confirms that it is attached to an instance that is managed by EC2 Systems Manager, and then adds a “maintenance tag” to the instance:

from __future__ import print_function
import boto3

ec2 = boto3.client('ec2')
ssm = boto3.client('ssm')

# Tags must be a list of Key/Value dictionaries (the value is arbitrary here)
tags = [{'Key': 'maintenance', 'Value': 'true'}]

def lambda_handler(event, context):
    # The event's resource ARN ends with '.../volume/vol-xxxxxxxx'
    volume = [event['resources'][0].split('/')[1]]
    attach = ec2.describe_volumes(VolumeIds=volume)['Volumes'][0]['Attachments']
    if attach:
        instance = attach[0]['InstanceId']
        # Confirm that the instance is managed by EC2 Systems Manager
        filters = [{'key': 'InstanceIds', 'valueSet': [instance]}]
        info = ssm.describe_instance_information(
            InstanceInformationFilterList=filters)['InstanceInformationList']
        if info:
            ec2.create_tags(Resources=[instance], Tags=tags)
            print('{} Instance {} has been tagged for maintenance'.format(
                info[0]['PlatformName'], instance))

Later (either manually or on a schedule), EC2 Systems Manager is used to run a PowerShell script on all of the instances that are tagged for maintenance. The script looks at the instance’s disks and partitions, and resizes all of the drives (filesystems) to the maximum allowable size. Here’s an excerpt:

foreach ($DriveLetter in $DriveLetters) {
    $Error.Clear()
    # Find the largest size that this partition can grow to...
    $SizeMax = (Get-PartitionSupportedSize -DriveLetter $DriveLetter).SizeMax
    # ...and grow the partition (and its filesystem) to that size
    Resize-Partition -DriveLetter $DriveLetter -Size $SizeMax
}

Available Today
The Elastic Volumes feature is available today and you can start using it right now!

To learn about some important special cases and a few limitations on instance types, read Considerations When Modifying EBS Volumes.

Jeff;

PS – If you would like to design and build cool, game-changing storage services like EBS, take a look at our EBS Jobs page!

 

AWS IPv6 Update – Global Support Spanning 15 Regions & Multiple AWS Services

by Jeff Barr | in Amazon EC2, Amazon Elastic Load Balancer, Amazon VPC

We’ve been working to add IPv6 support to many different parts of AWS over the last couple of years, starting with Elastic Load Balancing, AWS IoT, AWS Direct Connect, Amazon Route 53, Amazon CloudFront, AWS WAF, and S3 Transfer Acceleration, all building up to last month’s announcement of IPv6 support for EC2 instances in Virtual Private Clouds (initially available for use in the US East (Ohio) Region).

Today I am happy to share the news that IPv6 support for EC2 instances in VPCs is now available in a total of fifteen regions, along with Application Load Balancer support for IPv6 in nine of those regions.

You can now build and deploy applications that can use IPv6 addresses to communicate with servers, object storage, load balancers, and content distribution services. In accord with the latest guidelines for IPv6 support from Apple and other vendors, your mobile applications can now make use of IPv6 addresses when they communicate with AWS.

IPv6 Now in 15 Regions
IPv6 support for EC2 instances in new and existing VPCs is now available in the US East (Northern Virginia), US East (Ohio), US West (Northern California), US West (Oregon), South America (São Paulo), Canada (Central), EU (Ireland), EU (Frankfurt), EU (London), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Seoul), Asia Pacific (Sydney), Asia Pacific (Mumbai), and AWS GovCloud (US) Regions and you can start using it today!

You can enable IPv6 from the AWS Management Console when you create a new VPC:
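
You can do the same thing from code; here’s a minimal boto3 sketch (the IPv4 CIDR is an example) that requests an Amazon-provided IPv6 block at VPC creation time:

import boto3

ec2 = boto3.client('ec2')

# Create a VPC with an Amazon-provided /56 IPv6 CIDR block alongside its IPv4 block
vpc = ec2.create_vpc(
    CidrBlock='10.0.0.0/16',
    AmazonProvidedIpv6CidrBlock=True)
print(vpc['Vpc']['VpcId'])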

Application Load Balancer
Application Load Balancers in the US East (Northern Virginia), US West (Northern California), US West (Oregon), South America (São Paulo), EU (Ireland), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), and AWS GovCloud (US) Regions now support IPv6 in dual-stack mode, making them accessible via IPv4 or IPv6 (we expect to add support for the remaining regions within a few weeks).

Simply enable the dualstack option when you configure the ALB and then make sure that your security groups allow or deny IPv6 traffic in accord with your requirements. Here’s how you select the dualstack option:

You can also enable this option by running the set-ip-address-type command or by making a call to the SetIpAddressType function. To learn more about this new feature, read the Load Balancer Address Type documentation.
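
Here’s what the equivalent call looks like from boto3 (the load balancer ARN is a placeholder):

import boto3

elbv2 = boto3.client('elbv2')

# Switch an existing Application Load Balancer to dual-stack operation
elbv2.set_ip_address_type(
    LoadBalancerArn='arn:aws:elasticloadbalancing:us-east-1:123456789012:'
                    'loadbalancer/app/my-alb/1234567890abcdef',
    IpAddressType='dualstack')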

IPv6 Recap
Here are the IPv6 launches that we made in the run-up to the launch of IPv6 support for EC2 instances in VPCs:

CloudFront, WAF, and S3 Transfer Acceleration – This launch let you enable IPv6 support for individual CloudFront distributions. Newly created distributions supported IPv6 by default and existing distributions could be upgraded with a couple of clicks (if you are using Route 53 alias records, you also need to add an AAAA record to the domain). With IPv6 support enabled, the new addresses will show up in the CloudFront Access Logs. The launch also let you use AWS WAF to inspect requests that arrive via IPv4 or IPv6 addresses and to use a new, dual-stack endpoint for S3 Transfer Acceleration.

Route 53 – This launch added support for DNS queries over IPv6 (support for the requisite AAAA records was already in place). A subsequent launch added support for Health Checks of IPv6 Endpoints, allowing you to monitor the health of the endpoints and to arrange for DNS failover.

IoT – This product launch included IPv6 support for message exchange between devices and AWS IoT.

S3 – This launch added support for access to S3 buckets via dual-stack endpoints.

Elastic Load Balancing – This launch added publicly routable IPv6 addresses for Elastic Load Balancers.

Direct Connect – Supports single and dual-stack configurations on public and private VIFs (virtual interfaces).

Jeff;

 

Look Before You Leap – December 31, 2016 Leap Second on AWS

by Jeff Barr | in Amazon EC2, Amazon RDS, Announcements

If you are counting down the seconds before 2016 is history, be sure to add one at the very end!

The next leap second (the 27th so far) will be inserted on December 31, 2016 at 23:59:60 UTC. This will keep Earth time (Coordinated Universal Time) close to mean solar time and means that the last minute of the year will have 61 seconds.

The information in our last post (Look Before You Leap – The Coming Leap Second and AWS) still applies, with a few nuances and new developments:

AWS Adjusted Time – We will spread the extra second over the 24 hours surrounding the leap second (11:59:59 on December 31, 2016 to 12:00:00 on January 1, 2017). AWS Adjusted Time and Coordinated Universal Time will be in sync at the end of this time period.

Microsoft Windows – Instances that are running Microsoft Windows AMIs supplied by Amazon will follow AWS Adjusted Time.

Amazon RDS – The majority of Amazon RDS database instances will show “23:59:59” twice. Oracle versions 11.2.0.2, 11.2.0.3, and 12.1.0.1 will follow AWS Adjusted Time. For Oracle versions 11.2.0.4 and 12.1.0.2 contact AWS Support for more information.

Need Help?
If you have any questions about this upcoming event, please contact AWS Support or post in the EC2 Forum.

Jeff;

 

EC2 Systems Manager – Configure & Manage EC2 and On-Premises Systems

by Jeff Barr | in Amazon EC2, EC2 Systems Manager, Windows

Last year I introduced you to the EC2 Run Command and showed you how to use it to do remote instance management at scale, first for EC2 instances and then in hybrid and cross-cloud environments. Along the way we added support for Linux instances, making EC2 Run Command a widely applicable and incredibly useful administration tool.

Welcome to the Family
Werner announced the EC2 Systems Manager at AWS re:Invent and I’m finally getting around to telling you about it!

This is a new management service that includes an enhanced version of EC2 Run Command along with eight other equally useful functions. Like EC2 Run Command, it supports hybrid and cross-cloud environments composed of instances and services running Windows and Linux. You simply open up the AWS Management Console, select the instances that you want to manage, and define the tasks that you want to perform (API and CLI access is also available).

Here’s an overview of the improvements and new features:

Run Command – Now allows you to control the velocity of command executions, and to stop issuing commands if the error rate grows too high.

State Manager – Maintains a defined system configuration via policies that are applied at regular intervals.

Parameter Store – Provides centralized (and optionally encrypted) storage for license keys, passwords, user lists, and other values.

Maintenance Window – Specifies a time window for the installation of updates and other system maintenance.

Software Inventory – Gathers a detailed software and configuration inventory (with user-defined additions) from each instance.

AWS Config Integration – In conjunction with the new software inventory feature, AWS Config can record software inventory changes to your instances.

Patch Management – Simplifies and automates the patching process for your instances.

Automation – Simplifies AMI building and other recurring AMI-related tasks.

Let’s take a look at each one…

Run Command Improvements
You can now control the number of concurrent command executions. This can be useful in situations where the command references a shared, limited resource such as an internal update or patch server and you want to avoid overloading it with too many requests.

This feature is currently accessible from the CLI and from the API. Here’s a CLI example that limits the number of concurrent executions to 2:

$ aws ssm send-command \
  --instance-ids "i-023c301591e6651ea" "i-03cf0fc05ec82a30b" "i-09e4ed09e540caca0" "i-0f6d1fe27dc064099" \
  --document-name "AWS-RunShellScript" \
  --comment "Run a shell script or specify the commands to run." \
  --parameters commands="date" \
  --timeout-seconds 600 --output-s3-bucket-name "jbarr-data" \
  --region us-east-1 --max-concurrency 2

Here’s a more interesting variant that is driven by tags and tag values by specifying --targets instead of --instance-ids:

$ aws ssm send-command \
  --targets "Key=tag:Mode,Values=Production" ... 

You can also stop issuing commands if they are returning errors, with the option to specify either a maximum number of errors or a failure rate:

$ aws ssm send-command --max-errors 5 ... 
$ aws ssm send-command --max-errors 5% ...

State Manager
State Manager helps to keep your instances in a specific state, as defined by a document. You create the document, identify a set of target instances, and then create an association to specify when and how often the document should be applied. Here’s a document that updates the message of the day file:

And here’s the association (this one uses tags so that it applies to current instances and to others that are launched later and are tagged in the same way):

Specifying targets using tags makes the association future-proof, and allows it to work as expected in dynamic, auto-scaled environments. I can see all of my associations, and I can run the new one by selecting it and clicking on Apply Association Now:

Parameter Store
This feature simplifies storage and management for license keys, passwords, and other data that you want to distribute to your instances. Each parameter has a type (string, string list, or secure string), and can be stored in encrypted form. Here’s how I create a parameter:

And here’s how I reference the parameter in a command:
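
For reference, here’s a rough boto3 sketch of both steps; the parameter name and values are examples, and the {{ssm:name}} syntax is how Run Command documents reference stored parameters:

import boto3

ssm = boto3.client('ssm')

# Store a secret as an encrypted (SecureString) parameter
ssm.put_parameter(
    Name='prod-db-password',                      # example name
    Value='correct-horse-battery-staple',         # example value
    Type='SecureString',
    Overwrite=True)

# Reference the parameter from a Run Command invocation
ssm.send_command(
    Targets=[{'Key': 'tag:Mode', 'Values': ['Production']}],
    DocumentName='AWS-RunShellScript',
    Parameters={'commands': ['echo {{ssm:prod-db-password}}']})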

Maintenance Window
This feature lets you specify a time window for the installation of updates and other system maintenance. Here’s how I create a weekly time window that opens for four hours every Saturday:

After I create the window, I need to assign a set of instances to it. I can do this by instance ID or by tag:
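
Both of these steps can also be scripted; here’s a hedged boto3 sketch (the name, schedule, and tag are examples; registering the actual task with register_task_with_maintenance_window would be the next step):

import boto3

ssm = boto3.client('ssm')

# A window that opens for four hours every Saturday at 09:00 UTC (example schedule)
window = ssm.create_maintenance_window(
    Name='weekly-saturday-maintenance',
    Schedule='cron(0 9 ? * SAT *)',
    Duration=4,                      # hours the window stays open
    Cutoff=1,                        # stop starting new tasks 1 hour before close
    AllowUnassociatedTargets=False)

# Assign instances to the window by tag
ssm.register_target_with_maintenance_window(
    WindowId=window['WindowId'],
    ResourceType='INSTANCE',
    Targets=[{'Key': 'tag:Environment', 'Values': ['Production']}])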

And then I need to register a task to perform during the maintenance window. For example, I can run a Linux shell script:

Software Inventory
This feature collects information about software and settings for a set of instances. To access it, I click on Managed Instances and Setup Inventory:

Setting up the inventory creates an association between an AWS-owned document and a set of instances. I simply choose the targets, set the schedule, and identify the types of items to be inventoried, then click on Setup Inventory:

After the inventory runs, I can select an instance and then click on the Inventory tab in order to inspect the results:

The results can be filtered for further analysis. For example, I can narrow down the list of AWS Components to show only development tools and libraries:

I can also run inventory-powered queries across all of the managed instances. Here’s how I can find Windows Server 2012 R2 instances that are running a version of .NET older than 4.6:
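
The inventory data is also accessible from the API; here’s a minimal sketch that lists the applications collected from a single (placeholder) instance:

import boto3

ssm = boto3.client('ssm')

# List the applications that the inventory collected from one instance
apps = ssm.list_inventory_entries(
    InstanceId='i-0123456789abcdef0',    # placeholder instance ID
    TypeName='AWS:Application')
for entry in apps['Entries']:
    print(entry.get('Name'), entry.get('Version'))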

AWS Config Integration
The results of the inventory can be routed to AWS Config, allowing you to track changes to applications, AWS components, instance information, network configuration, and Windows Updates over time. To access this information, I click on Managed instance information above the Config timeline for the instance:

The three lines at the bottom lead to the inventory information. Here’s the network configuration:

Patch Management
This feature helps you to keep the operating system on your Windows instances up to date. Patches are applied during maintenance windows that you define, and are done with respect to a baseline. The baseline specifies rules for automatic approval of patches based on classification and severity, along with an explicit list of patches to approve or reject.

Here’s my baseline:

Each baseline can apply to one or more patch groups. Instances within a patch group have a Patch Group tag. I named my group Win2016:
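
The tagging step can also be scripted; here’s a minimal boto3 sketch (the instance ID is a placeholder):

import boto3

ec2 = boto3.client('ec2')

# Put an instance into the Win2016 patch group by tagging it
ec2.create_tags(
    Resources=['i-0123456789abcdef0'],   # placeholder instance ID
    Tags=[{'Key': 'Patch Group', 'Value': 'Win2016'}])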

Then I associated the value with the baseline:

The next step is to arrange to apply the patches during a maintenance window using the AWS-ApplyPatchBaseline document:

I can return to the list of Managed Instances and use a pair of filters to find out which instances are in need of patches:

Automation
Last but definitely not least, the Automation feature simplifies common AMI-building and updating tasks. For example, you can build a fresh Amazon Linux AMI each month using the AWS-UpdateLinuxAmi document:
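
If you’d rather start the automation from code, here’s a sketch; the parameter names follow the public AWS-UpdateLinuxAmi document, but the values are placeholders:

import boto3

ssm = boto3.client('ssm')

# Kick off the AMI-update automation (IDs and role names are placeholders)
ssm.start_automation_execution(
    DocumentName='AWS-UpdateLinuxAmi',
    Parameters={
        'SourceAmiId': ['ami-0123456789abcdef0'],
        'InstanceIamRole': ['AutomationInstanceRole'],
        'AutomationAssumeRole': [
            'arn:aws:iam::123456789012:role/AutomationServiceRole']})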

Here’s what happens when this automation is run:

Available Now
All of the EC2 Systems Manager features and functions that I described above are available now and you can start using them today at no charge. You pay only for the resources that you manage.

Jeff;

 

Amazon EFS Update – On-Premises Access via Direct Connect

by Jeff Barr | in Amazon EC2, Amazon Elastic File System, AWS Direct Connect, AWS re:Invent

I introduced you to Amazon Elastic File System last year (Amazon Elastic File System – Shared File Storage for Amazon EC2) and announced production readiness earlier this year (Amazon Elastic File System – Production-Ready in Three Regions). Since then, thousands of AWS customers have used it to set up, scale, and operate shared file storage in the cloud.

Today we are making EFS even more useful with the introduction of simple and reliable on-premises access via AWS Direct Connect. This has been a much-requested feature and I know that it will be useful for migration, cloudbursting, and backup. To use this feature for migration, you simply attach an EFS file system to your on-premises servers, copy your data to it, and then process it in the cloud as desired, leaving your data in AWS for the long term.  For cloudbursting, you would copy on-premises data to an EFS file system, analyze it at high speed using a fleet of Amazon Elastic Compute Cloud (EC2) instances, and then copy the results back on-premises or visualize them in Amazon QuickSight.

You’ll get the same file system access semantics including strong consistency and file locking, whether you access your EFS file systems from your on-premises servers or from your EC2 instances (of course, you can do both concurrently). You will also be able to enjoy the same multi-AZ availability and durability that is part-and-parcel of EFS.

In order to take advantage of this new feature, you will need to use Direct Connect to set up a dedicated network connection between your on-premises data center and an Amazon Virtual Private Cloud. Then you need to make sure that your filesystems have mount targets in subnets that are reachable via the Direct Connect connection:

You also need to add a rule to the mount target’s security group in order to allow inbound TCP and UDP traffic to port 2049 (NFS) from your on-premises servers:
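
Here’s a boto3 sketch of that rule; the security group ID and the on-premises CIDR are placeholders:

import boto3

ec2 = boto3.client('ec2')

# Open NFS (port 2049, TCP and UDP) to the on-premises address range
ec2.authorize_security_group_ingress(
    GroupId='sg-0123456789abcdef0',       # the mount target's security group
    IpPermissions=[
        {'IpProtocol': proto,
         'FromPort': 2049,
         'ToPort': 2049,
         'IpRanges': [{'CidrIp': '192.168.0.0/16'}]}  # example on-premises CIDR
        for proto in ('tcp', 'udp')])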

After you create the file system, you can reference the mount targets by their IP addresses, NFS-mount them on-premises, and start copying files. The IP addresses are available from within the AWS Management Console:

The Management Console also provides you with access to step-by-step directions! Simply click on the On-premises mount instructions:

And follow along:

This feature is available today at no extra charge in the US East (Northern Virginia), US West (Oregon), EU (Ireland), and US East (Ohio) Regions.

Jeff;

 

New – IPv6 Support for EC2 Instances in Virtual Private Clouds

by Jeff Barr | in Amazon EC2, Amazon VPC, AWS re:Invent

The continued growth of the Internet, particularly in the areas of mobile applications, connected devices, and IoT, has spurred an industry-wide move to IPv6. In accord with a mandate that dates back to 2010, United States government agencies have been working to move their public-facing servers and services to IPv6 as quickly as possible. With 128 bits of address space, IPv6 has plenty of room for growth and also opens the door to new applications and new use cases.

IPv6 for EC2
Earlier this year we launched IPv6 support for S3 (including Transfer Acceleration), CloudFront, WAF, and Route 53. Today we are taking the next big step forward with the launch of IPv6 support for Virtual Private Cloud (VPC) and EC2 instances running in a VPC. This support is launching today in the US East (Ohio) Region and is in the works for the others.

IPv6 support works for new and existing VPCs; you can opt in on a VPC-by-VPC basis by simply checking a box on the Console (API and CLI support is also available):

Each VPC is given a unique /56 address prefix from Amazon’s GUA (Global Unicast Address) space; you can assign a /64 address prefix to each subnet in your VPC:
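
Here’s a minimal boto3 sketch of opting an existing VPC in to IPv6 and assigning a /64 to a subnet (the IDs and prefixes are placeholders):

import boto3

ec2 = boto3.client('ec2')

# Opt an existing VPC in to IPv6; Amazon assigns the /56 prefix
ec2.associate_vpc_cidr_block(
    VpcId='vpc-0123456789abcdef0',        # placeholder VPC ID
    AmazonProvidedIpv6CidrBlock=True)

# Carve a /64 out of the VPC's /56 for one subnet (prefix is a placeholder)
ec2.create_subnet(
    VpcId='vpc-0123456789abcdef0',
    CidrBlock='10.0.1.0/24',
    Ipv6CidrBlock='2600:1f16:abcd:1200::/64')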

As we did with S3, we make use of a dual-stack model that assigns each instance an IPv4 address and an IPv6 address, along with corresponding DNS entries. Support for both versions of the protocol ensures compatibility and flexibility to access resources and applications.

Security Groups, Route Tables, Network ACLs, VPC Peering, Internet Gateway, Direct Connect, VPC Flow Logs, and DNS resolution within a VPC all operate in the same way as today. Application Load Balancer support for the dual-stack model is on the near-term roadmap and I’ll let you know as soon as it is available.

IPv6 Support for Direct Connect
The Direct Connect Console lets you create virtual interfaces (VIFs) with your choice of IPv4 or IPv6 addresses:

Each VIF supports one BGP peering session over IPv4 and one BGP peering session over IPv6.

New Egress-Only Internet Gateway for IPv6
One of the interesting things about IPv6 is that every address is internet-routable and can talk to the Internet by default. In an IPv4-only VPC, assigning a public IP address to an EC2 instance sets up 1:1 NAT (Network Address Translation) to a private address that is associated with the instance. In a VPC where IPv6 is enabled, the address associated with the instance is public. This direct association removes a host of networking challenges, but it also means that you need another mechanism to create private subnets.

As part of today’s launch, we are introducing a new Egress-Only Internet Gateway (EGW) that you can use to implement private subnets for your VPCs. The EGW is easier to set up and to use than a fleet of NAT instances, and is available to you at no cost. It allows you to block incoming traffic while still allowing outbound traffic (think of it as an Internet Gateway mated to a Security Group). You can create an EGW in all of the usual ways, and use it to impose restrictions on inbound IPv6 traffic. You can continue to use NAT instances or NAT Gateways for IPv4 traffic.
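
Here’s a boto3 sketch of creating an EGW and routing outbound IPv6 traffic through it (the IDs are placeholders):

import boto3

ec2 = boto3.client('ec2')

# Create the egress-only gateway for the VPC...
egw = ec2.create_egress_only_internet_gateway(VpcId='vpc-0123456789abcdef0')

# ...and send all outbound IPv6 traffic from the private subnet through it
ec2.create_route(
    RouteTableId='rtb-0123456789abcdef0',   # the private subnet's route table
    DestinationIpv6CidrBlock='::/0',
    EgressOnlyInternetGatewayId=egw['EgressOnlyInternetGateway']
                                   ['EgressOnlyInternetGatewayId'])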

Available Now
IPv6 support for EC2 is now available in the US East (Ohio) Region and you can start using it today at no extra charge. It works with all current-generation EC2 instance types with the exception of M3 and G2, and will be supported on upcoming instance types as well.

IPv6 support for other AWS Regions is in the works and I’ll let you know (most likely via a tweet), just as soon as it is ready!

Jeff;

 

Developer Preview – EC2 Instances (F1) with Programmable Hardware

by Jeff Barr | in Amazon EC2, AWS re:Invent

Have you ever had to decide between a general purpose tool and one built for a very specific purpose? The general purpose tools can be used to solve many different problems, but may not be the best choice for any particular one. Purpose-built tools excel at one task, but you may need to do that particular task infrequently.

Computer engineers face this problem when designing architectures and instruction sets, almost always pursuing an approach that delivers good performance across a very wide range of workloads. From time to time, new types of workloads and working conditions emerge that are best addressed by custom hardware. This requires another balancing act: trading off the potential for incredible performance vs. a development life cycle often measured in quarters or years.

Enter the FPGA
One of the more interesting routes to a custom, hardware-based solution is known as a Field Programmable Gate Array, or FPGA. In contrast to a purpose-built chip which is designed with a single function in mind and then hard-wired to implement it, an FPGA is more flexible. It can be programmed in the field, after it has been plugged in to a socket on a PC board. Each FPGA includes a fixed, finite number of simple logic gates. Programming an FPGA is “simply” a matter of connecting them up to create the desired logical functions (AND, OR, XOR, and so forth) or storage elements (flip-flops and shift registers). Unlike a CPU which is essentially serial (with a few parallel elements) and has fixed-size instructions and data paths (typically 32 or 64 bit), the FPGA can be programmed to perform many operations in parallel, and the operations themselves can be of almost any width, large or small.

This highly parallelized model is ideal for building custom accelerators to process compute-intensive problems. Properly programmed, an FPGA has the potential to provide a 30x speedup to many types of genomics, seismic analysis, financial risk analysis, big data search, and encryption algorithms and applications.

I hope that this sounds awesome and that you are chomping at the bit to use FPGAs to speed up your own applications! There are a few interesting challenges along the way. First, FPGAs have traditionally been a component of a larger, purpose-built system. You cannot simply buy one and plug it in to your desktop. Instead, the route to FPGA-powered solutions has included hardware prototyping, construction of a hardware appliance, mass production, and a lengthy sales & deployment cycle. The lead time can limit the applicability of FPGAs, and also means that Moore’s Law has time to make CPU-based solutions more cost-effective.

We think we can do better here!

The New F1 Instance
Today we are launching a developer preview of the new F1 instance. In addition to building applications and services for your own use, you will be able to package them up for sale and reuse in AWS Marketplace.  Putting it all together, you will be able to avoid all of the capital-intensive and time-consuming steps that were once a prerequisite to the use of FPGA-powered applications, using a business model that is more akin to that used for every other type of software. We are giving you the ability to design your own logic, simulate and verify it using cloud-based tools, and then get it to market in a matter of days.

Equipped with Intel Xeon E5-2686 v4 (Broadwell) processors (2.3 GHz base speed, 2.7 GHz Turbo mode on all cores, and 3.0 GHz Turbo mode on one core), up to 976 GiB of memory, up to 4 TB of NVMe SSD storage, and one to eight FPGAs, the F1 instances provide you with plenty of resources to complement your core, FPGA-based logic. The FPGAs are dedicated to the instance and are isolated for use in multi-tenant environments.

Here are the specs on the FPGA (remember that there are up to eight of these in a single F1 instance):

  • Xilinx UltraScale+ VU9P fabricated using a 16 nm process.
  • 64 GiB of ECC-protected memory on a 288-bit wide bus (four DDR4 channels).
  • Dedicated PCIe x16 interface to the CPU.
  • Approximately 2.5 million logic elements.
  • Approximately 6,800 Digital Signal Processing (DSP) engines.
  • Virtual JTAG interface for debugging.

In instances with more than one FPGA, a dedicated PCIe fabric allows the FPGAs to share the same memory address space and to communicate with each other across it at up to 12 gigabytes per second in each direction. The FPGAs within an instance share access to a 400 Gbps bidirectional ring for low-latency, high bandwidth communication (you’ll need to define your own protocol in order to make use of this advanced feature).

The FPGA Development Process

As part of the developer preview we are also making an FPGA developer AMI available. You can launch this AMI on a memory-optimized or compute-optimized instance for development and simulation, and then use an F1 instance for final debugging and testing.

This AMI includes a set of developer tools that you can use in the AWS Cloud at no charge. You write your FPGA code using VHDL or Verilog and then compile, simulate, and verify it using tools from the Xilinx Vivado Design Suite (you can also use third-party simulators, higher-level language compilers, graphical programming tools, and FPGA IP libraries).

Here’s the Verilog code for a simple 8-bit counter:

module up_counter(out, enable, clk, reset);
output [7:0] out;
input enable, clk, reset;
reg [7:0] out;
always @(posedge clk)
if (reset) begin
  out <= 8'b0;
end else if (enable) begin
  out <= out + 1;
end
endmodule

Although these languages are often described as using C-like syntax (and that’s what I used to stylize the code), this does not mean that you can take existing code and recompile it for use on an FPGA. Instead, you need to start by gaining a strong understanding of the FPGA programming model, learn Boolean algebra, and start to learn about things like propagation delays and clock edges. With that as a foundation, you will be able to start thinking about ways to put FPGAs to use in your environment. If this is too low-level for you, rest assured that you can also use many existing High Level Synthesis tools, including OpenCL, to program the FPGA.

After I launched my instance, I logged in, installed a bunch of packages, and set up the license manager so that I could run the Vivado tools. Then I RDP’ed in to the desktop, opened up a terminal window, and started Vivado in GUI mode:

I opened up the sample project (counter.xpr) and was rewarded with my first look at how FPGAs are designed and programmed:

After a bit of exploration I managed to synthesize my first FPGA (I was doing little more than clicking interesting stuff at this point; I am not even a novice at this stuff):

From here, I would be able to test my design, package it up as an Amazon FPGA Image (AFI), and then use it for my own applications or list it in AWS Marketplace. I hope to be able to show you how to do all of these things within a couple of weeks.

The F1 Hardware Development Kit
After I learned about the F1 instances, one of my first questions had to do with the interfaces between the FPGA(s), the CPU(s), and main memory. The F1 Hardware Development Kit (HDK) includes preconfigured I/O interfaces and sample applications for multiple communication methods including host-to-FPGA, FPGA-to-memory, and FPGA-to-FPGA. It also includes compilation scripts, reference examples, and an in-field debug toolset. The kit is accessible to members of the F1 developer preview.

The Net-Net
The bottom line here is that the combination of the F1 instances, the cloud-based development tools, and the ability to sell FPGA-powered applications is unique and powerful. The power and flexibility of the FPGA model is now accessible to all AWS users; I am sure that this will inspire entirely new types of applications and businesses.

Get Started Today
As I mentioned earlier, we are launching in developer preview form today in the US East (Northern Virginia) Region (we will expand to multiple regions when the instances become generally available in early 2017). If you have prior FPGA programming experience and are interested in getting started, you should sign up now.

If you’d like to learn even more, we have a webinar on December 15th.

Jeff;

In the Works – Amazon EC2 Elastic GPUs

by Jeff Barr | in Amazon EC2, AWS re:Invent

I have written about the benefits of GPU-based computing in the past, most recently as part of the launch of the P2 instances with up to 16 GPUs. As I have noted in the past, GPUs offer incredible power and scale, along with the potential to simultaneously decrease your time-to-results and your overall compute costs.

Today I would like to tell you a little bit about a new GPU-based feature that we are working on.  You will soon have the ability to add graphics acceleration to existing EC2 instance types. When you use G2 or P2 instances, the instance size determines the number of GPUs. While this works well for many types of applications, we believe that many other applications are now ready to take advantage of a newer and more flexible model.

Amazon EC2 Elastic GPUs
The upcoming Amazon EC2 Elastic GPUs give you the best of both worlds. You can choose the EC2 instance type and size that works best for your application and then indicate that you want to use an Elastic GPU when you launch the instance, and take your pick of four different sizes:

Name          GPU Memory
eg1.medium    1 GiB
eg1.large     2 GiB
eg1.xlarge    4 GiB
eg1.2xlarge   8 GiB

Today, you have the ability to set up freshly created EBS volumes when you launch new instances. You’ll be able to do something similar with Elastic GPUs, specifying the desired size during the launch process, with the option to stop, modify, and then start a running instance in order to make a change.

Starting with OpenGL
Our Amazon-optimized OpenGL library will automatically detect and make use of Elastic GPUs. We’ll start out with Windows support for OpenGL, and plan to add support for the Amazon Linux AMI and other versions of OpenGL after that. We are also giving consideration to support for other 3D APIs including DirectX and Vulkan (let us know if these would be of interest to you). We will include the Amazon-optimized OpenGL library in upcoming revisions to the existing Microsoft Windows AMI.

OpenGL is great for rendering, but how do you see what’s been rendered? Great question! One option is to use the NICE Desktop Cloud Visualization (acquired earlier this year — Amazon Web Services to Acquire NICE) to stream the rendered content to any HTML5-compatible browser or device. This includes recent versions of Firefox and Chrome, along with all sorts of phones and tablets.

I believe that this unique combination of hardware and software will be a great host for all sorts of 3D visualization and technical computing applications. Two of our customers have already shared some of their feedback with us.

Ray Milhem (VP of Enterprise Solutions & Cloud) at ANSYS told us:

ANSYS Enterprise Cloud delivers a virtual simulation data center, optimized for AWS. It delivers a rich interactive graphics experience critical to supporting the end-to-end engineering simulation processes that allow our customers to deliver innovative product designs. With Elastic GPU, ANSYS will be able to more easily deliver this experience right-sized to the price and performance needs of our customers. We are certifying ANSYS applications to run on Elastic GPU to enable our customers to innovate more efficiently on the cloud.

Bob Haubrock (VP of NX Product Management) at Siemens PLM also had some nice things to say:

Elastic GPU is a game-changer for Computer Aided Design (CAD) in the cloud. With Elastic GPU, our customers can now run Siemens PLM NX on Amazon EC2 with professional-grade graphics, and take advantage of the flexibility, security, and global scale that AWS provides. Siemens PLM is excited to certify NX on the EC2 Elastic GPU platform to help our customers push the boundaries of Design & Engineering innovation.

New Certification Program
To help software vendors and developers make sure that their applications take full advantage of Elastic GPUs and our other GPU-based offerings, we are launching the AWS Graphics Certification Program today. This program offers credits and tools that will help to quickly and automatically test applications across the supported matrix of instance and GPU types.

Stay Tuned
As always, I will share additional information just as soon as it becomes available!

Jeff;

EC2 Instance Type Update – T2, R4, F1, Elastic GPUs, I3, C5

by Jeff Barr | in Amazon EC2, AWS re:Invent

Earlier today, AWS CEO Andy Jassy announced the next wave of updates to the EC2 instance roadmap.  We are making updates to our high I/O, compute-optimized, memory-optimized instances, expanding the range of burstable instances, and expanding into new areas of hardware acceleration including FPGA-based computing. This blog post summarizes today’s announcements and links to a couple of other posts that contain additional information.

As part of our planning process for these new instances, we have spent a lot of time talking to our customers in order to make sure that we have a good understanding of the challenges they face and the workloads that they plan to run on EC2 going forward. While the responses were diverse, in-memory analytics, multimedia processing, machine learning (aided by the newest AVX-512 instructions), and large-scale, storage-intensive ERP (Enterprise Resource Planning) applications were frequently mentioned.

These instances are available today:

New F1 Instances – The F1 instances give you access to game-changing programmable hardware known as a Field-Programmable Gate Array or FPGA. You can write code that runs on the FPGA and speeds up many types of genomics, seismic analysis, financial risk analysis, big data search, and encryption algorithms by up to 30 times. Today we are launching a developer preview of the F1 instances and a Hardware Development Kit, and are also giving you the ability to build FPGA-powered applications and services and sell them in AWS Marketplace. To learn more, read Developer Preview – EC2 Instances (F1) with Programmable Hardware.

New R4 Instances – The R4 instances are designed for today’s memory-intensive Business Intelligence, in-memory caching, and database applications and offer up to 488 GiB of memory. The R4 instances improve on the popular R3 instances with a larger L3 cache and higher memory speeds. On the network side, the R4 instances support up to 20 Gbps of ENA-powered network bandwidth when used within a Placement Group, along with 12 Gbps of dedicated throughput to EBS. Instances are available in six sizes, with up to 64 vCPUs and 488 GiB of memory. To learn more, read New – Next Generation (R4) Memory-Optimized EC2 Instances.

Expanded T2 Instances – The T2 instances offer great performance for workloads that do not need to use the full CPU on a consistent basis. Our customers use them for general purpose workloads such as application servers, web servers, development environments, continuous integration servers, and small databases. We’ll be adding the t2.xlarge (16 GiB of memory) and the t2.2xlarge (32 GiB of memory). Like the existing T2 instances, the new sizes will offer a generous amount of baseline performance (up to 4x that of the existing instances), along with the ability to burst to an entire core when you need more compute power. To learn more, read New T2.Xlarge and T2.2Xlarge Instances.

And these are in the works:

New Elastic GPUs – You will soon be able to add high performance graphics acceleration to existing EC2 instance types, with your choice of 1 GiB to 8 GiB of GPU memory and compute power to match. The Amazon-optimized OpenGL library will automatically detect and make use of Elastic GPUs. We are launching this new EC2 feature in preview form today, along with the AWS Graphics Certification Program. To learn more about both items, read In the Works – Amazon EC2 Elastic GPUs.

New I3 Instances – I3 instances will be equipped with fast, low-latency, Non-Volatile Memory Express (NVMe) based Solid State Drives. They’ll deliver up to 3.3 million random IOPS at a 4 KB block size and up to 16 GB/second of disk throughput. These instances are designed to meet the needs of the most demanding I/O intensive relational & NoSQL databases, transactional, and analytics workloads. I3 instances will be available in six sizes, with up to 64 vCPUs, 488 GiB of memory, and 15.2 TB of storage (perfect for those ERP applications). All stored data will be encrypted at rest and the instances will support the new Elastic Network Adapter (ENA).

New C5 Instances – C5 instances will be based on Intel’s brand new Xeon “Skylake” processor, running faster than the processors in any other EC2 instance. As the successor to Broadwell, Skylake supports AVX-512 for machine learning, multimedia, scientific, and financial operations which require top-notch support for floating point calculations. Instances will be available in six sizes, with up to 72 vCPUs and 144 GiB of memory. On the network side, they’ll support ENA and will be EBS-optimized by default.

I’ll be sharing more information about each of these instances as soon as it becomes available, so stay tuned!

Jeff;

Update! We have a webinar coming up on January 19th, where you can learn more. Sign up for it here.

New – Next Generation (R4) Memory-Optimized EC2 Instances

by Jeff Barr | in Amazon EC2, AWS re:Invent

In-memory processing is a huge deal. With workloads growing larger by the day and CPUs gaining power with every successive generation, the ability to fit entire datasets into memory is becoming a prerequisite for high-quality Business Intelligence, analytics, data mining, and other real-time workloads that are sensitive to latency. Distributed caching and batch processing workloads can also benefit from fast access to lots of memory.

Today we are launching the next generation of memory-optimized EC2 instances. These instances improve upon the popular R3 instances with a larger L3 cache and faster memory. On the network side, they support up to 20 Gbps of ENA-powered network bandwidth when used within a Placement Group, along with 12 Gbps of dedicated throughput to EBS.

The R4 instances include the following features:

  • Dual socket Intel Xeon E5 Broadwell Processors (2.3 GHz)
  • DDR4 memory
  • Hardware Virtualization (HVM) only

Here’s the lineup:

Model        vCPUs  Memory (GiB)  Networking Performance
r4.large         2         15.25  Up to 10 Gigabit
r4.xlarge        4         30.5   Up to 10 Gigabit
r4.2xlarge       8         61     Up to 10 Gigabit
r4.4xlarge      16        122     Up to 10 Gigabit
r4.8xlarge      32        244     10 Gigabit
r4.16xlarge     64        488     20 Gigabit

The R4 instances are available in On-Demand and Reserved Instance form in the US East (Northern Virginia), US East (Ohio), US West (Oregon), US West (Northern California), EU (Ireland), EU (Frankfurt), Asia Pacific (Sydney), China (Beijing), and AWS GovCloud (US) Regions. Visit the EC2 Pricing page for more information.

Jeff;

Update: There will be a webinar on January 19th where you can learn more! Register here.