In the Works – Amazon EC2 Elastic GPUs

I have written about the benefits of GPU-based computing in the past, most recently as part of the launch of the P2 instances with up to 16 GPUs. As I have noted in the past, GPUs offer incredible power and scale, along with the potential to simultaneously decrease your time-to-results and your overall compute costs.

Today I would like to tell you a little bit about a new GPU-based feature that we are working on.  You will soon have the ability to add graphics acceleration to existing EC2 instance types. When you use G2 or P2 instances, the instance size determines the number of GPUs. While this works well for many types of applications, we believe that many other applications are now ready to take advantage of a newer and more flexible model.

Amazon EC2 Elastic GPUs
The upcoming Amazon EC2 Elastic GPUs give you the best of both worlds. You can choose the EC2 instance type and size that works best for your application, indicate that you want to use an Elastic GPU when you launch the instance, and take your pick of four different sizes:

Name          GPU Memory
eg1.medium    1 GiB
eg1.large     2 GiB
eg1.xlarge    4 GiB
eg1.2xlarge   8 GiB

Today, you have the ability to set up freshly created EBS volumes when you launch new instances. You’ll be able to do something similar with Elastic GPUs, specifying the desired size during the launch process, with the option to stop, modify, and then start a running instance in order to make a change.
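Elastic GPUs are still in the works, so the final interface may well change, but I expect the launch-time attachment to look something like this sketch (the AMI ID, instance type, and GPU size below are simply illustrative):

$ aws ec2 run-instances --image-id ami-12345678 \
    --instance-type m4.large \
    --elastic-gpu-specification Type=eg1.medium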

Starting with OpenGL
Our Amazon-optimized OpenGL library will automatically detect and make use of Elastic GPUs. We’ll start out with Windows support for OpenGL, and plan to add support for the Amazon Linux AMI and other versions of OpenGL after that. We are also giving consideration to support for other 3D APIs including DirectX and Vulkan (let us know if these would be of interest to you). We will include the Amazon-optimized OpenGL library in upcoming revisions to the existing Microsoft Windows AMI.

OpenGL is great for rendering, but how do you see what’s been rendered? Great question! One option is to use the NICE Desktop Cloud Visualization (acquired earlier this year — Amazon Web Services to Acquire NICE) to stream the rendered content to any HTML5-compatible browser or device. This includes recent versions of Firefox and Chrome, along with all sorts of phones and tablets.

I believe that this unique combination of hardware and software will be a great host for all sorts of 3D visualization and technical computing applications. Two of our customers have already shared some of their feedback with us.

Ray Milhem (VP of Enterprise Solutions & Cloud) at ANSYS told us:

ANSYS Enterprise Cloud delivers a virtual simulation data center, optimized for AWS. It delivers a rich interactive graphics experience critical to supporting the end-to-end engineering simulation processes that allow our customers to deliver innovative product designs. With Elastic GPU, ANSYS will be able to more easily deliver this experience right-sized to the price and performance needs of our customers. We are certifying ANSYS applications to run on Elastic GPU to enable our customers to innovate more efficiently on the cloud.

Bob Haubrock (VP of NX Product Management) at Siemens PLM also had some nice things to say:

Elastic GPU is a game-changer for Computer Aided Design (CAD) in the cloud. With Elastic GPU, our customers can now run Siemens PLM NX on Amazon EC2 with professional-grade graphics, and take advantage of the flexibility, security, and global scale that AWS provides. Siemens PLM is excited to certify NX on the EC2 Elastic GPU platform to help our customers push the boundaries of Design & Engineering innovation.

New Certification Program
To help software vendors and developers make sure that their applications take full advantage of Elastic GPUs and our other GPU-based offerings, we are launching the AWS Graphics Certification Program today. This program offers credits and tools that will help you to quickly and automatically test applications across the supported matrix of instance and GPU types.

Stay Tuned
As always, I will share additional information just as soon as it becomes available!

Jeff;

Amazon Lightsail – The Power of AWS, the Simplicity of a VPS

Some people like to assemble complex systems (houses, computers, or furniture) from parts. They relish the planning process, carefully researching each part and selecting those that give them the desired balance of power and flexibility. With planning out of the way, they enjoy the process of assembling the parts into a finished unit. Other people do not find this do-it-yourself (DIY) approach attractive or worthwhile, and are simply interested in getting to the results as quickly as possible without having to make too many decisions along the way.

Sound familiar?

I believe that this model applies to systems architecture and system building as well. Sometimes you want to take the time to hand-select individual AWS components (servers, storage, IP addresses, and so forth) and put them together on your own. At other times you simply need a system that is preconfigured and preassembled, and is ready to run your web applications with no system-building effort on your part.

In many cases, those seeking a preassembled system have turned to a Virtual Private Server, or VPS. With a VPS, you are presented with a handful of options, each ready to run, and available to you for a predictable monthly fee.

While the VPS is a perfect getting-started vehicle, over time the environment can become constrained. At a certain point you may need to step outside the boundaries of the available plans as your needs grow, only to find that you have no options for incremental improvement, and are faced with the need to make a disruptive change. Or, you may find that your options for automated scaling or failover are limited, and that you need to set it all up yourself.

Introducing Amazon Lightsail
Today we are launching Amazon Lightsail. With a couple of clicks you can choose a configuration from a menu and launch a virtual machine preconfigured with SSD-based storage, DNS management, and a static IP address. You can launch your favorite operating system (Amazon Linux AMI or Ubuntu), developer stack (LAMP, LEMP, MEAN, or Node.js), or application (Drupal, Joomla, Redmine, GitLab, and many others), with flat-rate pricing plans that start at $5 per month including a generous allowance for data transfer.

Here are the plans and the configurations:

You get the simplicity of a VPS, backed by the power, reliability, and security of AWS. As your needs grow, you will have the ability to smoothly step outside of the initial boundaries and connect to additional AWS database, messaging, and content distribution services.

All in all, Lightsail is the easiest way for you to get started on AWS and jumpstart your cloud projects, while giving you a smooth, clear path into the future.

A Quick Tour
Let’s take a quick tour of Amazon Lightsail! Each page of the Lightsail console includes a Quick Assist tab. You can click on it at any time in order to access context-sensitive documentation that will help you to get the most out of Lightsail:

I start at the main page. I have no instances or other resources at first:

I click on Create Instance to get moving. I choose my machine image (an app and an OS, or simply an OS), select an instance plan, and give my instance a name, all on one page:

I can launch multiple instances, set up a configuration script, or specify an alternate SSH keypair if I’d like. I can also choose an Availability Zone. I’ll choose WordPress on the $10 plan, leave everything else as-is, and click on Create. It is up and running within seconds:

I can manage the instance by clicking on it:

My instance has a public IP address that I can open in my browser. WordPress is already installed, configured, and running:

I’ll need the WordPress password in order to finish setting it up. I click on Connect using SSH on the instance management page and I’m connected via a browser-based SSH terminal window without having to do any key management or install any browser plugins. The WordPress admin password is stored in the bitnami_application_password file in the ~bitnami directory (the image below shows a made-up password):
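Once I’m connected, reading the password is a one-liner (the file and directory come from the Bitnami-based image, as noted above):

$ cat ~bitnami/bitnami_application_password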

You can bookmark the terminal window in order to be able to access it later with just a click or two.

I can manage my instance from the menu bar:

For example, I can access the performance metrics for my instance:

And I can manage my firewall settings:

I can capture the state of my instance by taking a Snapshot:

Later, I can restore the snapshot to a fresh instance:

I can also create static IP addresses and make use of domain names:

Advanced Lightsail – APIs and VPC Peering
Before I wrap up, let’s talk about a few of the more advanced features of Amazon Lightsail – APIs and VPC Peering.

As is almost always the case with AWS, there’s a full set of APIs behind all of the console functionality that we just reviewed. Here are just a few of the more interesting functions:

  • GetBundles – Get a list of the bundles (machine configurations).
  • CreateInstances – Create one or more Lightsail instances.
  • GetInstances – Get a list of all Lightsail instances.
  • GetInstance – Get information about a specific instance.
  • CreateInstanceSnapshot – Create a snapshot of an instance.
  • CreateInstanceFromSnapshot – Create an instance from a snapshot.
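If you would rather script your Lightsail setup than click through the console, here’s a sketch of the equivalent AWS CLI commands (the instance name is mine, and the blueprint and bundle IDs are illustrative; use get-blueprints and get-bundles to list the real values):

$ aws lightsail get-bundles
$ aws lightsail create-instances \
    --instance-names my-blog \
    --availability-zone us-east-1a \
    --blueprint-id wordpress \
    --bundle-id micro_1_0
$ aws lightsail get-instance --instance-name my-blog
$ aws lightsail create-instance-snapshot \
    --instance-name my-blog --instance-snapshot-name my-blog-snap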

All of the Lightsail instances within an account run within a “shadow” VPC that is not visible in the AWS Management Console. If the code that you are running on your Lightsail instances needs access to other AWS resources, you can set up VPC peering between the shadow VPC and another one in your account, and create the resources therein. Click on Account (top right), scroll down to Advanced features, and check VPC peering:

You can now connect your Lightsail apps to other AWS resources that are running within a VPC.
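The same toggle is available programmatically as well; assuming the PeerVpc and IsVpcPeered operations, a quick sketch looks like this:

$ aws lightsail peer-vpc
$ aws lightsail is-vpc-peered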

Pricing and Availability
We are launching Amazon Lightsail today in the US East (Northern Virginia) Region, and plan to expand it to other regions in the near future.

Prices start at $5 per month.

You can learn more about Lightsail on January 17th during our webinar. Sign up here.

Jeff;

EC2 Instance Type Update – T2, R4, F1, Elastic GPUs, I3, C5

Earlier today, AWS CEO Andy Jassy announced the next wave of updates to the EC2 instance roadmap. We are making updates to our high I/O, compute-optimized, and memory-optimized instances, expanding the range of burstable instances, and expanding into new areas of hardware acceleration including FPGA-based computing. This blog post summarizes today’s announcements and links to a couple of other posts that contain additional information.

As part of our planning process for these new instances, we have spent a lot of time talking to our customers in order to make sure that we have a good understanding of the challenges they face and the workloads that they plan to run on EC2 going forward. While the responses were diverse, in-memory analytics, multimedia processing, machine learning (aided by the newest AVX-512 instructions), and large-scale, storage-intensive ERP (Enterprise Resource Planning) applications were frequently mentioned.

These instances are available today:

New F1 Instances – The F1 instances give you access to game-changing programmable hardware known as a Field-Programmable Gate Array or FPGA. You can write code that runs on the FPGA and speeds up many types of workloads, including genomics, seismic analysis, financial risk analysis, big data search, and encryption algorithms, by up to 30 times. Today we are launching a developer preview of the F1 instances and a Hardware Development Kit, and are also giving you the ability to build FPGA-powered applications and services and sell them in AWS Marketplace. To learn more, read Developer Preview – EC2 Instances (F1) with Programmable Hardware.

New R4 Instances – The R4 instances are designed for today’s memory-intensive Business Intelligence, in-memory caching, and database applications and offer up to 488 GiB of memory. The R4 instances improve on the popular R3 instances with a larger L3 cache and higher memory speeds. On the network side, the R4 instances support up to 20 Gbps of ENA-powered network bandwidth when used within a Placement Group, along with 12 Gbps of dedicated throughput to EBS. Instances are available in six sizes, with up to 64 vCPUs and 488 GiB of memory. To learn more, read New – Next Generation (R4) Memory-Optimized EC2 Instances.

Expanded T2 Instances – The T2 instances offer great performance for workloads that do not need to use the full CPU on a consistent basis. Our customers use them for general purpose workloads such as application servers, web servers, development environments, continuous integration servers, and small databases. We’ll be adding the t2.xlarge (16 GiB of memory) and the t2.2xlarge (32 GiB of memory). Like the existing T2 instances, the new sizes will offer a generous amount of baseline performance (up to 4x that of the existing instances), along with the ability to burst to an entire core when you need more compute power. To learn more, read New T2.Xlarge and T2.2Xlarge Instances.

And these are in the works:

New Elastic GPUs – You will soon be able to add high performance graphics acceleration to existing EC2 instance types, with your choice of 1 GiB to 8 GiB of GPU memory and compute power to match. The Amazon-optimized OpenGL library will automatically detect and make use of Elastic GPUs. We are launching this new EC2 feature in preview form today, along with the AWS Graphics Certification Program. To learn more about both items, read In the Works – Amazon EC2 Elastic GPUs.

New I3 Instances – I3 instances will be equipped with fast, low-latency, Non-Volatile Memory Express (NVMe) based Solid State Drives. They’ll deliver up to 3.3 million random IOPS at a 4 KB block size and up to 16 GB/second of disk throughput. These instances are designed to meet the needs of the most demanding I/O-intensive relational & NoSQL database, transactional, and analytics workloads. I3 instances will be available in six sizes, with up to 64 vCPUs, 488 GiB of memory, and 15.2 TB of storage (perfect for those ERP applications). The instances will support the new Elastic Network Adapter (ENA).

New C5 Instances – C5 instances will be based on Intel’s brand new Xeon “Skylake” processor, running faster than the processors in any other EC2 instance. As the successor to Broadwell, Skylake supports AVX-512 for machine learning, multimedia, scientific, and financial operations that require top-notch support for floating point calculations. Instances will be available in six sizes, with up to 72 vCPUs and 144 GiB of memory. On the network side, they’ll support ENA and will be EBS-optimized by default.

I’ll be sharing more information about each of these instances as soon as it becomes available, so stay tuned!

Jeff;

Update! We have a webinar coming up on January 19th, where you can learn more. Sign up for it here.

New – Next Generation (R4) Memory-Optimized EC2 Instances

In-memory processing is a huge deal. With workloads growing larger by the day and CPUs gaining power with every successive generation, the ability to fit entire datasets into memory is becoming a prerequisite for high-quality Business Intelligence, analytics, data mining, and other real-time workloads that are sensitive to latency. Distributed caching and batch processing workloads can also benefit from fast access to lots of memory.

Today we are launching the next generation of memory-optimized EC2 instances. These instances improve upon the popular R3 instances with a larger L3 cache and faster memory. On the network side, they support up to 20 Gbps of ENA-powered network bandwidth when used within a Placement Group, along with 12 Gbps of dedicated throughput to EBS.

The R4 instances include the following features:

  • Dual socket Intel Xeon E5 Broadwell Processors (2.3 GHz)
  • DDR4 memory
  • Hardware Virtualization (HVM) only

Here’s the lineup:

Model         vCPUs   Memory (GiB)   Networking Performance
r4.large      2       15.25          Up to 10 Gigabit
r4.xlarge     4       30.5           Up to 10 Gigabit
r4.2xlarge    8       61             Up to 10 Gigabit
r4.4xlarge    16      122            Up to 10 Gigabit
r4.8xlarge    32      244            10 Gigabit
r4.16xlarge   64      488            20 Gigabit
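If you want the full 20 Gbps of bandwidth between your r4.16xlarge instances, remember to launch them into a Placement Group. Here’s a sketch (the AMI ID is a placeholder):

$ aws ec2 create-placement-group --group-name r4-pg --strategy cluster
$ aws ec2 run-instances --image-id ami-12345678 \
    --instance-type r4.16xlarge --count 2 \
    --placement GroupName=r4-pg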

The R4 instances are available in On-Demand and Reserved Instance form in the US East (Northern Virginia), US East (Ohio), US West (Oregon), US West (Northern California), EU (Ireland), EU (Frankfurt), Asia Pacific (Sydney), China (Beijing), and AWS GovCloud (US) Regions. Visit the EC2 Pricing page for more information.

Jeff;

Update: There will be a webinar on January 19th where you can learn more! Register here.

New T2.Xlarge and T2.2Xlarge Instances

AWS customers love the cost-effective, burst-based model that they get when they use T2 instances. These customers use T2 instances to run general purpose workloads such as web servers, development environments, continuous integration servers, test environments, and small databases. These instances provide a generous amount of baseline performance and the ability to automatically and transparently scale up to full-core processing power on an as-needed basis (refer back to New Low Cost EC2 Instances with Burstable Performance if this is news to you).

Today we are adding two new larger T2 instance sizes – t2.xlarge with 16 GiB of memory and t2.2xlarge with 32 GiB of memory. These new sizes enable customers to benefit from the price and performance of the T2 burst model for applications with larger resource requirements (this is the third time that we have expanded the range of T2 instances; we added t2.large instances last June and t2.nano instances last December).

Here are the specs for all of the sizes of T2 instances (the prices reflect the most recent EC2 Price Reduction):

Name         vCPUs   Baseline Performance   Platform           Memory (GiB)   CPU Credits / Hour   Price / Hour (Linux)
t2.nano      1       5%                     32-bit or 64-bit   0.5            3                    $0.0059
t2.micro     1       10%                    32-bit or 64-bit   1              6                    $0.012
t2.small     1       20%                    32-bit or 64-bit   2              12                   $0.023
t2.medium    2       40%                    32-bit or 64-bit   4              24                   $0.047
t2.large     2       60%                    64-bit             8              36                   $0.094
t2.xlarge    4       90%                    64-bit             16             54                   $0.188
t2.2xlarge   8       135%                   64-bit             32             81                   $0.376
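If you are curious about how the baseline numbers relate to the credits, recall that one CPU credit allows one vCPU to run at 100% utilization for one minute. The t2.xlarge earns 54 credits per hour, and 54 credits / 60 minutes works out to 0.9 core-hours per hour, or the 90% baseline shown above; the same arithmetic gives the t2.large its 60% (36 / 60) and the t2.2xlarge its 135% (81 / 60).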

Here are a couple of ways that you might be able to move existing workloads to the new instances:

  • t2.large workloads can scale up to t2.xlarge or t2.2xlarge in order to gain access to more memory.
  • Intermittent c4.2xlarge workloads can move to t2.xlarge at a significant cost reduction, with similar burst performance.
  • Intermittent m4.xlarge workloads can move to t2.xlarge at a slight cost reduction, with higher burst performance.

The new instances are available today as On-Demand & Reserved Instances in all AWS regions.

Jeff;

Update: We have a webinar coming up on January 19th, where you can learn more. Sign up here.

Use Amazon Aurora for Dev & Test Workloads with new T2.Medium DB Instance Class

Amazon Aurora already allows you to choose from five DB instance classes, ranging from the db.r3.large (2 vCPUs and 15 GiB of RAM) up to the db.r3.8xlarge (32 vCPUs and 244 GiB of RAM). These instances support a very wide range of production-scale applications and use cases.

Today we are giving you a sixth choice, the new db.t2.medium DB instance class with 2 vCPUs and 4 GiB of RAM. Steady-state, this instance class has access to 40% of the performance of a single core, and can burst to full-core performance when processing CPU-intensive queries and other database tasks. Like the similarly named EC2 instances, this new instance class starts out with a full allocation of CPU Credits, which are spent when the instance is active and accumulate when it is not (my post, New Low Cost EC2 Instances with Burstable Performance, contains a full explanation).

The db.t2.medium should be a great fit for many of your development and test scenarios, and you should also consider it for some of your less-demanding production workloads. You can monitor the CPUCreditUsage and CPUCreditBalance metrics to track the usage and accumulation of credits over time.
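Here’s a sketch of how you might pull the credit balance from the command line (the DB instance identifier and time range are placeholders):

$ aws cloudwatch get-metric-statistics \
    --namespace AWS/RDS \
    --metric-name CPUCreditBalance \
    --dimensions Name=DBInstanceIdentifier,Value=my-aurora-instance \
    --start-time 2016-11-30T00:00:00Z \
    --end-time 2016-11-30T12:00:00Z \
    --period 3600 --statistics Average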

Now Available
You can create Amazon Aurora database instances that make use of this new DB instance class today. It is available in all regions where Amazon Aurora is currently available. Prices start at $0.082 per hour.

Jeff;

 

AWS Storage Update – S3 & Glacier Price Reductions + Additional Retrieval Options for Glacier

Back in 2006, we launched S3 with a revolutionary pay-as-you-go pricing model, with an initial price of 15 cents per GB per month. Over the intervening decade, we reduced the price per GB by 80%, launched S3 in every AWS Region, and enhanced the original one-size-fits-all model with user-driven features such as web site hosting, VPC integration, and IPv6 support, while adding new storage options including S3 Infrequent Access.

Because many AWS customers archive important data for legal, compliance, or other reasons and reference it only infrequently, we launched Glacier in 2012, and then gave you the ability to transition data between S3, S3 Infrequent Access, and Glacier by using lifecycle rules.

Today I have two big pieces of news for you: we are reducing the prices for S3 Standard Storage and for Glacier storage. We are also introducing additional retrieval options for Glacier.

S3 & Glacier Price Reduction
As long-time AWS customers already know, we work relentlessly to reduce our own costs, and to pass the resulting savings along in the form of a steady stream of AWS Price Reductions.

We are reducing the per-GB price for S3 Standard Storage in most AWS regions, effective December 1, 2016. The bill for your December usage will automatically reflect the new, lower prices. Here are the new prices for Standard Storage, quoted per GB per month for the 0-50 TB, 51-500 TB, and 500+ TB tiers, respectively:

  • US East (Northern Virginia), US East (Ohio), US West (Oregon), and EU (Ireland): $0.0230 / $0.0220 / $0.0210 (reductions range from 23.33% to 23.64%)
  • US West (Northern California): $0.0260 / $0.0250 / $0.0240 (reductions range from 20.53% to 21.21%)
  • EU (Frankfurt): $0.0245 / $0.0235 / $0.0225 (reductions range from 24.24% to 24.38%)
  • Asia Pacific (Singapore, Tokyo, Sydney, Seoul, and Mumbai): $0.0250 / $0.0240 / $0.0230 (reductions range from 16.36% to 28.13%)

As you can see from the list above, we are also simplifying the pricing model by consolidating six pricing tiers into three new tiers.

We are also reducing the price of Glacier storage in most AWS Regions. For example, you can now store 1 GB for 1 month in the US East (Northern Virginia), US West (Oregon), or EU (Ireland) Regions for just $0.004 (less than half a cent) per month, a 43% decrease. For reference purposes, this amount of storage cost $0.010 when we launched Glacier in 2012, and $0.007 after our last Glacier price reduction (a 30% decrease).

The lower pricing is a direct result of the scale that comes about when our customers trust us with trillions of objects, but it is just one of the benefits. Based on the feedback that I get when we add new features, the real value of a cloud storage platform is the rapid, steady evolution. Our customers often tell me that they love the fact that we anticipate their needs and respond with new features accordingly.

New Glacier Retrieval Options
Many AWS customers use Amazon Glacier as the archival component of their tiered storage architecture. Glacier allows them to meet compliance requirements (either organizational or regulatory) while allowing them to use any desired amount of cloud-based compute power to process and extract value from the data.

Today we are enhancing Glacier with two new retrieval options for your Glacier data. You can now pay a little bit more to expedite your data retrieval. Alternatively, you can indicate that speed is not of the essence and pay a lower price for retrieval.

We launched Glacier with a pricing model for data retrieval that was based on the amount of data that you had stored in Glacier and the rate at which you retrieved it. While this was an accurate reflection of our own costs to provide the service, it was somewhat difficult to explain. Today we are replacing the rate-based retrieval fees with simpler per-GB pricing.

Our customers in the Media and Entertainment industry archive their TV footage to Glacier. When an emergent situation calls for them to retrieve a specific piece of footage, minutes count and they want fast, cost-effective access to the footage. Healthcare customers are looking for rapid, “while you wait” access to archived medical imagery and genome data; photo archives and companies selling satellite data turn out to have similar requirements. On the other hand, some customers have the ability to plan their retrievals ahead of time, and are perfectly happy to get their data in 5 to 12 hours.

Taking all of this into account, you can now select one of the following options for retrieving your data from Glacier (the original rate-based retrieval model is no longer applicable):

Standard retrieval is the new name for what Glacier already provides, and is the default for all API-driven retrieval requests. You get your data back in a matter of hours (typically 3 to 5), and pay $0.01 per GB along with $0.05 for every 1,000 requests.

Expedited retrieval addresses the need for “while you wait” access. You can get your data back quickly, with retrieval typically taking 1 to 5 minutes. If you store (or plan to store) more than 100 TB of data in Glacier and need to make infrequent, yet urgent requests for subsets of your data, this is a great model for you (if you have less data, S3’s Infrequent Access storage class can be a better value). Retrievals cost $0.03 per GB and $0.01 per request.

Expedited retrieval times depend on overall demand. If you need your data back within that 1 to 5 minute window even in rare situations where demand is exceptionally high, you can provision retrieval capacity. Once you have done this, all Expedited retrievals will automatically be served via your Provisioned capacity. Each unit of Provisioned capacity costs $100 per month and ensures that you can perform at least 3 Expedited retrievals every 5 minutes, with up to 150 MB/second of retrieval throughput.

Bulk retrieval is a great fit for planned or non-urgent use cases, with retrieval typically taking 5 to 12 hours at a cost of $0.0025 per GB (75% less than for Standard Retrieval) along with $0.025 for every 1,000 requests. Bulk retrievals are perfect when you need to retrieve large amounts of data within a day, and are willing to wait a few extra hours in exchange for a very significant discount.

If you do not specify a retrieval option when you call InitiateJob to retrieve an archive, a Standard Retrieval will be initiated. Your existing jobs will continue to work as expected, and will be charged at the new rate.
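Selecting a tier is a matter of adding a Tier value to the job parameters. Here’s a sketch (the vault name and archive ID are placeholders):

$ aws glacier initiate-job --account-id - --vault-name my-vault \
    --job-parameters '{"Type": "archive-retrieval", "ArchiveId": "ARCHIVE_ID", "Tier": "Expedited"}'

If you decide that you need Provisioned capacity, you can purchase a unit in similar fashion:

$ aws glacier purchase-provisioned-capacity --account-id -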

To learn more, read about Data Retrieval in the Glacier FAQ.

As always, I am thrilled to be able to share this news with you, and I hope that you are equally excited!

If you want to learn more, we have a webinar coming up on December 12th. Register here.

Jeff;

 

New – Web Access for Amazon WorkSpaces

We launched WorkSpaces in late 2013 (Amazon WorkSpaces – Desktop Computing in the Cloud) and have been adding new features at a rapid clip. Here are some highlights from 2016:

Today we are adding to this list with Amazon WorkSpaces Web Access. You can now access your WorkSpace from recent versions of Chrome or Firefox running on Windows, Mac OS X, or Linux. This lets you be productive on heavily restricted networks and in situations where installing a WorkSpaces client is not an option. You don’t have to download or install anything, and you can use Web Access from a public computer without leaving any private or cached data behind.

To use Amazon WorkSpaces Web Access, simply visit the registration page using a supported browser and enter the registration code for your WorkSpace:

Then log in with your user name and password:

And here you go (yes, this is IE and Firefox running on WorkSpaces, displayed in Chrome):

This feature is available for all new WorkSpaces that are running the Value, Standard, or Performance bundles or their Plus counterparts. You can access it at no additional charge after your administrator enables it:

Existing WorkSpaces must be rebuilt and custom images must be refreshed in order to take advantage of Web Access.

Jeff;

 

New for AWS Lambda – Environment Variables and Serverless Application Model (SAM)

I am thrilled by all of the excitement that I see around AWS Lambda and serverless application development. I have shared many serverless success stories, tools, and open source projects in the AWS Week in Review over the last year or two.

Today I would like to tell you about two important additions to Lambda: environment variables and the new Serverless Application Model.

Environment Variables
Every developer likes to build code that can be used in more than one environment. In order to do this in a clean and reusable fashion, the code should be able to accept configuration values at run time. The configuration values customize the environment for the code: table names, device names, file paths, and so forth. For example, many projects have distinct configurations for their development, test, and production environments.

You can now supply environment variables to your Lambda functions. This allows you to effect configuration changes without modifying or redeploying your code, and should make your serverless application development even more efficient. Each environment variable is a key/value pair. The keys and the values are encrypted using AWS Key Management Service (KMS) and decrypted on an as-needed basis. There’s no per-function limit on the number of environment variables, but the total size can be no more than 4 KB.

When you create a new version of a Lambda function, you also set the environment variables for that version of the function. You can modify the values for the latest version of the function, but not for older versions. Here’s how I would create a simple Python function, set some environment variables, and then reference them from my code (note that I had to import the os library):
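Here’s a sketch of what that can look like from the command line; the variable names, values, and role ARN are placeholders:

$ cat > lambda_function.py <<'EOF'
import os

def lambda_handler(event, context):
    # Configuration comes from the environment rather than from the code
    return 'Using table %s in stage %s' % (
        os.environ['TABLE_NAME'], os.environ['STAGE'])
EOF
$ zip function.zip lambda_function.py
$ aws lambda create-function --function-name ShowEnv \
    --runtime python2.7 \
    --handler lambda_function.lambda_handler \
    --zip-file fileb://function.zip \
    --role arn:aws:iam::123456789012:role/LambdaGeneralRole \
    --environment 'Variables={TABLE_NAME=metrics,STAGE=test}'

Changing a value later does not require a redeployment of the code:

$ aws lambda update-function-configuration --function-name ShowEnv \
    --environment 'Variables={TABLE_NAME=metrics,STAGE=prod}'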

There’s no charge for this feature if you use the default service key provided by Lambda (the usual per-request KMS charges apply if you choose to use your own key).

To learn more and to get some ideas for other ways to make use of this new feature, read Simplify Serverless Applications With Lambda Environment Variables on the AWS Compute Blog.

AWS Serverless Application Model
Lambda functions, Amazon API Gateway resources, and Amazon DynamoDB tables are often used together to build serverless applications. The new AWS Serverless Application Model (AWS SAM) allows you to describe all of these components using a simplified syntax that is natively supported by AWS CloudFormation. In order to use this syntax, your CloudFormation template must include a Transform section (this is a new aspect of CloudFormation) that looks like this:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

The remainder of the template is used to specify the Lambda functions, API Gateway endpoints & resources, and DynamoDB tables. Each function declaration specifies a handler, a runtime, and a URI to a ZIP file that contains the code for the function.

APIs can be declared implicitly by defining events, or explicitly, by providing a Swagger file.

DynamoDB tables are declared using a simplified syntax that requires just a table name, a primary key (name and type), and the provisioned throughput. The full range of options is also available for you to use if necessary.
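To make that concrete, here’s a sketch of a function with an implicit API event plus a simple table, in the same template format (the resource names, path, and throughput values are illustrative):

Resources:
  GetItems:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: lambda_function.lambda_handler
      Runtime: python2.7
      CodeUri: .
      Events:
        ListItems:
          Type: Api
          Properties:
            Path: /items
            Method: get
  Items:
    Type: 'AWS::Serverless::SimpleTable'
    Properties:
      PrimaryKey:
        Name: id
        Type: String
      ProvisionedThroughput:
        ReadCapacityUnits: 5
        WriteCapacityUnits: 5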

You can now generate AWS SAM files and deployment packages for your Lambda functions using a new Export operation in the Lambda Console. Simply click on the Actions menu and select Export function:

Then click on Download AWS SAM file or Download deployment package:

Here is the AWS SAM file for my function:

AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: A starter AWS Lambda function.
Resources:
  ShowEnv:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: lambda_function.lambda_handler
      Runtime: python2.7
      CodeUri: .
      Description: A starter AWS Lambda function.
      MemorySize: 128
      Timeout: 3
      Role: 'arn:aws:iam::99999999999:role/LambdaGeneralRole'

The deployment package is a ZIP file with the code for my function inside. I would simply upload the file to S3 and update the CodeUri in the SAM file in order to use it as part of my serverless application. You can do this manually or you can use a pair of new CLI commands (aws cloudformation package and aws cloudformation deploy) to automate it. To learn more about this option, read the section on Deploying a Serverless app in the new Introducing Simplified Serverless Application Management and Deployment post.
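In a nutshell, the automated path looks like this (the template, bucket, and stack names are placeholders):

$ aws cloudformation package --template-file app.yaml \
    --s3-bucket my-deployment-bucket \
    --output-template-file packaged.yaml
$ aws cloudformation deploy --template-file packaged.yaml \
    --stack-name my-serverless-app \
    --capabilities CAPABILITY_IAM

The package command uploads the local code to S3 and rewrites CodeUri to point to it; deploy then creates or updates the stack.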

You can also export Lambda function blueprints. Simply click on the download link in the corner:

And click on Download blueprint:

The ZIP file contains the AWS SAM file and the code:

To learn more and to see this new specification in action, read Introducing Simplified Serverless Application Management and Deployment on the AWS Compute Blog.

Jeff;

 

New – Auto Scaling for EMR Clusters

The Amazon EMR team is cranking out new features at an impressive pace (guess they have lots of worker nodes)! So far this quarter they have added all of these features:

Today we are adding to this list with the addition of automatic scaling for EMR clusters. You can now use scale out and scale in policies to adjust the number of core and task nodes in your clusters in response to changing workloads and to optimize your resource usage:

Scale out Policies add additional capacity and allow you to tackle bigger problems. Applications like Apache Spark and Apache Hive will automatically take advantage of the increased processing power as it comes online.

Scale in Policies remove capacity, either at the end of an instance billing hour or as tasks complete. If a node is removed while it is running a YARN container, YARN will rerun that container on another node (read Configure Cluster Scale-Down Behavior for more info).

Using Auto Scaling
In order to make use of Auto Scaling, an IAM role that gives Auto Scaling permission to launch and terminate EC2 instances must be associated with your cluster. If you create a cluster from the EMR Console, it will create the EMR_AutoScaling_DefaultRole for you. You can use it as-is or customize it as needed. If you create a cluster programmatically or via the command line, you will need to create the role yourself. You can also create the default roles from the command line like this:

$ aws emr create-default-roles

From the console, you can edit the Auto Scaling policies by clicking on the Advanced Options when you create your cluster:

Simply click on the pencil icon to begin editing your policy. Here’s my Scale out policy:

Because this policy is driven by YARNMemoryAvailablePercentage, it will be activated under low-memory conditions when I am running a YARN-based framework such as Spark, Tez, or Hadoop MapReduce. I can choose many other metrics as well; here are some of my options:

And here’s my Scale in policy:

I can choose from the same set of metrics, and I can set a Cooldown period for each policy. This value sets the minimum amount of time between scaling activities, and allows the metrics to stabilize as the changes take effect.

Default policies (driven by YARNMemoryAvailablePercentage and ContainerPendingRatio) are also available in the console.
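If you are attaching policies programmatically, put-auto-scaling-policy accepts the policy as JSON. Here’s a sketch of a scale out rule similar to the one above (the cluster and instance group IDs are placeholders, and the thresholds are illustrative):

$ aws emr put-auto-scaling-policy \
    --cluster-id j-EXAMPLE --instance-group-id ig-EXAMPLE \
    --auto-scaling-policy '{
      "Constraints": {"MinCapacity": 2, "MaxCapacity": 10},
      "Rules": [{
        "Name": "ScaleOutOnLowMemory",
        "Action": {"SimpleScalingPolicyConfiguration": {
          "AdjustmentType": "CHANGE_IN_CAPACITY",
          "ScalingAdjustment": 1,
          "CoolDown": 300}},
        "Trigger": {"CloudWatchAlarmDefinition": {
          "ComparisonOperator": "LESS_THAN",
          "EvaluationPeriods": 1,
          "MetricName": "YARNMemoryAvailablePercentage",
          "Period": 300,
          "Statistic": "AVERAGE",
          "Threshold": 15,
          "Unit": "PERCENT",
          "Dimensions": [{"Key": "JobFlowId", "Value": "${emr.clusterId}"}]}}}]}'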

Available Now
To learn more about Auto Scaling, read about Scaling Cluster Resources in the EMR Management Guide.

This feature is available now and you can start using it today. Simply select emr-5.1.0 from the Release menu to get started!

Jeff;