AWS Blog

AWS Enterprise Support Update – Training Credits, Operations Review, Well-Architected

by Jeff Barr | in AWS Support, Enterprise

I often speak to new and potential AWS customers in our EBC (Executive Briefing Center) in Seattle. The vast majority of them have already bought in to the promise of the cloud and are already making plans that involve a combination of “lifting and shifting” existing applications and building new, cloud-native ones. Because their move to AWS is often part of a larger organizational transformation and modernization, the senior leaders that I talk to want to make sure that their technical team is properly equipped to skillfully design, build, and operate cloud-powered systems.

Many of these customers are taking advantage of the AWS Enterprise Support plan as they move their mission-critical applications to the cloud. Like traditional support plans, this one provides access to technical support people and resources in the event that an issue surfaces. However, unlike traditional plans, it also focuses on helping them to build applications that are robust, cost-effective, easily maintained, and scalable. Our customers tell me that they enjoy the unique combination of hands-on, concierge-quality support and the automated, data-driven recommendations provided to them by AWS Trusted Advisor.

New Enterprise Support Benefits
Today we are making the AWS Enterprise Support Plan even better, adding three new benefits that are available to new and existing plan subscribers at no additional charge:

Training Credits – In conjunction with our training partner qwikLabs, each Enterprise Support customer is entitled to receive 500 qwikLabs training credits annually, along with a 30% discount on additional credits. The qwikLabs courses address a wide range of AWS topics; introductory courses are free and the remainder cost between 1 and 15 credits each (read the course catalog to learn more).

If you have an Enterprise Support plan and would like to gain access to your credits and discounts, please contact your AWS Technical Account Manager (TAM).

Cloud Operations Review – Enterprise Support customers are eligible for a Cloud Operations Review designed to help them to identify gaps in their approach to operating in the cloud. Based on operational best practices distilled from our experience with a large number of representative customers, this program provides you with a review of your cloud operations and the associated management practices. The program uses a four-pillared approach with a focus on preparing, monitoring, operating, and optimizing cloud-based systems in pursuit of operational excellence.

You can work with your TAM to set up a Cloud Operations Review.

Well-Architected Review – Enterprise Support customers are also eligible for a Well-Architected Review of their mission-critical workloads. While the Cloud Operations Review focuses on people and processes, this review allows our customers to measure their architecture against AWS best practices. Our goal is to help our customers to construct architectures that are secure, reliable, performant, and cost-effective. For more information about our Well-Architected program, read Are You Well-Architected?



Additional At-Rest and In-Transit Encryption Options for Amazon EMR

by Jeff Barr | in Amazon EMR, Launch

Our customers use Amazon EMR (including Apache Hadoop and the full range of tools that make up the Apache Spark ecosystem) to handle many types of mission-critical big data use cases. For example:

  • Yelp processes over a terabyte of log files and photos every day.
  • Expedia processes streams of clickstream, user interaction, and supply data.
  • FINRA analyzes billions of brokerage transaction records daily.
  • DataXu evaluates 30 trillion ad opportunities monthly.

Because customers like these (see our big data use cases for many others) are processing data that is mission-critical and often sensitive, they need to keep it safe and sound.

We already offer several data encryption options for EMR, including server-side and client-side encryption for Amazon S3 with EMRFS and Transparent Data Encryption for HDFS. While these solutions do a good job of protecting data at rest, they do not address data stored in temporary files or data that is in flight, moving between job steps. Each of these encryption options must be individually enabled and configured, making the process of implementing encryption more tedious than it needs to be.

It is time to change this!

New Encryption Support
Today we are launching a new, comprehensive encryption solution for EMR. You can now easily enable at-rest and in-transit encryption for Apache Spark, Apache Tez, and Hadoop MapReduce on EMR.

The at-rest encryption addresses the following types of storage:

  • Data stored in S3 via EMRFS.
  • Data stored in the local file system of each node.
  • Data stored on the cluster using HDFS.

The in-transit encryption makes use of the open-source encryption features native to the following frameworks:

  • Apache Spark
  • Apache Tez
  • Apache Hadoop MapReduce

This new feature can be configured using an Amazon EMR security configuration. You can create a configuration from the EMR Console, the EMR CLI, or via the EMR API.
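When created via the API, a security configuration is simply a JSON document. Here's a sketch using boto3 (the configuration name, bucket, and KMS key ARN are placeholders, and you should check the exact JSON layout against the EMR documentation):

```python
import json

# Sketch of an EMR security configuration that enables both at-rest
# and in-transit encryption. All names and ARNs are placeholders.
security_configuration = {
    "EncryptionConfiguration": {
        "EnableAtRestEncryption": True,
        "EnableInTransitEncryption": True,
        "AtRestEncryptionConfiguration": {
            # Data stored in S3 via EMRFS.
            "S3EncryptionConfiguration": {"EncryptionMode": "SSE-S3"},
            # Data on each node's local disks.
            "LocalDiskEncryptionConfiguration": {
                "EncryptionKeyProviderType": "AwsKms",
                "AwsKmsKey": "arn:aws:kms:us-west-2:123456789012:key/example",
            },
        },
        "InTransitEncryptionConfiguration": {
            # PEM provider: a ZIP of PEM files stored in S3.
            "TLSCertificateConfiguration": {
                "CertificateProviderType": "PEM",
                "S3Object": "s3://example-bucket/certs.zip",
            },
        },
    }
}

# To create the configuration (requires AWS credentials):
# import boto3
# emr = boto3.client("emr", region_name="us-west-2")
# emr.create_security_configuration(
#     Name="my-encryption-config",
#     SecurityConfiguration=json.dumps(security_configuration))
```

The same JSON document can then be referenced by name when you launch a cluster.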

The EMR Console now includes a list of security configurations:

Click on Create to make a new one:

Enter a name, and then choose the desired mode and type for each aspect of this new feature. Based on the mode or the type, the console will prompt you for additional information.

S3 Encryption:

Local disk encryption:

In-transit encryption:

If you choose PEM as the certificate provider type, you will need to enter the S3 location of a ZIP file that contains the PEM file(s) that you want to use for encryption. If you choose Custom, you will need to enter the S3 location of a JAR file and the class name of the custom certificate provider.

After you make all of your choices and click on Create, your security configuration will appear in the console:

You can then specify the configuration when you create a new EMR Cluster. This feature is available for clusters that are running Amazon EMR release 4.8.0 or 5.0.0. To learn more, read about Amazon EMR Encryption with Security Configurations.



API Gateway Update – New Features Simplify API Development

by Jeff Barr | in Amazon API Gateway

Amazon API Gateway allows you to quickly and easily build and run application backends that are robust and scalable. With the recent addition of usage plans, you can create an ecosystem of partner developers around your APIs. Let’s review some terminology to start things off:

Endpoint – A URL (provided by API Gateway) that responds to HTTP requests. These requests use HTTP methods such as GET, PUT, and POST.

Resource – A named entity that exists (symbolically) within an endpoint, referred to by a hierarchical path.

Behavior – The action that your code will take in response to an HTTP request on a particular resource, using an HTTP method.

Integration – The API Gateway mapping from the endpoint, resource, and HTTP method to the actual behavior, and back again.

Today we are extending the integration model provided by API Gateway with support for some new features that will make it even easier for you to build new API endpoints and to port existing applications:

Catch-all Path Variables – Instead of specifying individual paths and behaviors for groups of requests that fall within a common path (such as /store/), you can now specify a catch-all route that intercepts all requests to the path and routes them to the same function. For example, a single greedy path (/store/{proxy+}) will intercept requests made to /store/list-products, /store/add-product, and /store/delete-product.

ANY Method – Instead of specifying individual behaviors for each HTTP method (GET, POST, PUT, and so forth) you can now use the catch-all ANY method to define the same integration behavior for all requests.

Lambda Function Integration – A new default mapping template will send the entire request to your Lambda function and then turn the return value into an HTTP response.

HTTP Endpoint Integration – Another new default mapping template will pass the entire request through to your HTTP endpoint and then return the response without modification. This allows you to use API Gateway as an HTTP proxy with very little in the way of setup work.

Let’s dive in!

Catch-all Path Variables
Suppose I am creating a new e-commerce API. I start like this:

And then create the /store resource:

Then I use a catch-all path variable to intercept all requests to any resource within /store (I also had to check Configure as proxy resource):

Because {proxy+} routes requests for sub-resources to the actual resource, it must be used as the final element of the resource path; it does not make sense to use it elsewhere. The {proxy+} can match a path of any depth; the example above would also match /store/us/clothing, /store/us/clothing/children, and so forth.

The proxy can connect to a Lambda function or an HTTP endpoint:

ANY Method
I no longer need to specify individual behaviors for each HTTP method when I define my resources and the methods on them:

Instead, I can select ANY and use the same integration behavior for all of the methods on the resource:

This is cleaner, simpler, and easier to set up. Your code (the integration point for all of the methods on the resource) can inspect the method name and take an appropriate action.

The ANY method is created automatically when I use a greedy path variable, as shown above. It can also be used for individual resources. You can override the configuration for an individual method (perhaps you want to handle DELETE differently), by simply creating it and changing the settings.

Lambda Function Integration
It is now easier than ever to implement a behavior using a Lambda function. A new, built-in Lambda integration template automatically maps the HTTP request elements (headers, query parameters, and payload) into a form directly consumable by the function. The template also maps the function's return value (an object with statusCode, headers, and body elements) to a properly structured HTTP response.
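As a rough sketch (not the exact function from the documentation), a minimal proxy-integration handler has this shape:

```python
import json

def lambda_handler(event, context):
    """Handle any method on any sub-path via the Lambda proxy integration."""
    # With a greedy path ({proxy+}) and the ANY method, the request's
    # method and the remaining path arrive inside the event itself.
    method = event.get("httpMethod", "GET")
    path = (event.get("pathParameters") or {}).get("proxy", "")

    # The returned object maps directly onto the HTTP response.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"method": method, "path": path}),
    }
```

This example simply echoes the method and sub-path; real code would dispatch on them, for example treating /store/list-products differently from /store/add-product.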

Here’s a simple function that I copied from the documentation (you can find it in Lambda Function for Proxy Integration):

I connected it to /store like this:

Then I deployed it (not shown), and tested it out like this:

The function ran as expected; the console displayed the response body, the headers, and the log files for me. Here’s the first part:

Then I hopped over to the Lambda Console and inspected the CloudWatch Logs for my function:

As you can see, line 10 of my function produced the message that I highlighted in yellow.

So, to sum it all up: you can now write Lambda functions that respond to HTTP requests on your API’s resources without having to spend any time setting up mappings or transformations. In fact, a new addition to the Lambda Console makes this process even easier! You can now configure the API Gateway endpoint as one of the first steps in creating a new Lambda function:

HTTP Function Integration
You can also pass API requests through to an HTTP endpoint running on an EC2 instance or on-premises. Again, you don’t have to spend any time setting up mappings or transformations. Instead, you simply select HTTP for the integration type, click on Use HTTP Proxy integration, and enter the name of your endpoint:

If you specify an HTTP method of ANY, the method of the incoming request will be passed to the endpoint as-is. Otherwise, the method will be set to the indicated value as part of the call.

Available Now
The features described above are available now and you can start using them today at no extra charge.




AWS Week in Review – September 12, 2016

by Jeff Barr | in Week in Review

Wow! Twenty-five (25) external and internal contributors worked together to create this edition of the AWS Week in Review. If you would like to join the party (with the possibility of a free lunch at re:Invent), please visit the AWS Week in Review on GitHub.


September 12


September 13


September 14


September 15


September 16


September 17


September 18

New & Notable Open Source

New SlideShare Presentations

New Customer Success Stories

  • City of Chicago used AWS for the flexibility and agility to launch OpenGrid, Chicago’s highest profile technology release to date. OpenGrid is a real-time, open source situational awareness program intended to improve the quality of life for citizens and improve efficiency of city operations.
  • Dable develops omni-channel personalization platforms that recommend products and content that customers might be interested in, based on big data. The company uses AWS Lambda and Amazon Redshift to analyze data in real time so it can provide customers with recommended services quickly and in the most cost-effective way.
  • Gett scales to keep up with 300 percent annual growth, saves $800,000 yearly, and gains new business insights using AWS. The company provides an online taxi reservation service used by millions of people in Europe, Israel, and the US. Gett runs its website and mobile web application on AWS, relying on Amazon EC2 Spot Instances to optimize costs.
  • Kyowa Hakko Kirin began using AWS with its production run for SAP ERP, an enterprise resource-planning software. Kyowa Hakko Kirin is a pharmaceutical company that manufactures and sells prescription drugs. As the company continues migrating nearly all its systems and data off of physical servers and into the cloud, it is making further progress and reducing costs through strategies such as using reserved instances and stopping unnecessary instances on weekends. The company is developing its cloud data center as a result.
  • National Bank of Canada’s Global Equity Derivatives Group (GED) uses AWS to process and analyze hundreds of terabytes of financial data, conduct data manipulations in one minute instead of days, and scale and optimize its operations. GED provides stock-trading solutions and services to a range of organizations throughout the world. The organization runs its data analysis using the TickVault platform on the AWS Cloud.

New YouTube Videos

Upcoming Events

Help Wanted

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.


AWS CloudFormation Update – YAML, Cross-Stack References, Simplified Substitution

by Jeff Barr | in AWS CloudFormation, Launch

AWS CloudFormation gives you the ability to express entire stacks (collections of related AWS resources) declaratively, by constructing templates. You can define a stack, specify and configure the desired resources and their relationship to each other, and then launch as many copies of the stack as desired. CloudFormation will create and set up the resources for you, while also taking care to address any ordering dependencies between the resources.

Today we are making three important additions to CloudFormation:

  • YAML Support – You can now write your CloudFormation templates in YAML.
  • Cross Stack References – You can now export values from one stack and use them in another.
  • Simplified Substitution – You can more easily perform string replacements within templates.

Let’s take a look!

YAML Support
You can now write your CloudFormation templates in YAML (short for YAML Ain’t Markup Language). Up until now, templates were written in JSON. While YAML and JSON have similar expressive powers, YAML was designed to be human-readable while JSON was (let’s be honest) not. YAML-based templates use less punctuation and should be substantially easier to write and to read. They also allow the use of comments. CloudFormation supports essentially all of YAML, with the exception of hash merges, aliases, and some tags (binary, omap, pairs, timestamp, and set).

When you write a CloudFormation template in YAML, you will use the same top-level structure (Description, Metadata, Mappings, Outputs, Parameters, Conditions, and Resources). Here’s what a parameter definition looks like:

    AllowedPattern: '[a-zA-Z][a-zA-Z0-9]*'
    ConstraintDescription: must begin with a letter and contain only alphanumeric
    Default: wordpressdb
    Description: The WordPress database name
    MaxLength: '64'
    MinLength: '1'
    Type: String

When you use YAML, you can also use a new, abbreviated syntax to refer to CloudFormation functions such as GetAtt, Base64, and FindInMap. You can now use the existing syntax ("Fn::GetAtt") or the new, tag-based syntax (!GetAtt). Note that the “!” is part of the YAML syntax for tags; it is not the “logical not” operator. Here’s the old syntax:

- Fn::FindInMap:
    - AWSInstanceType2Arch
    - Ref: InstanceType
    - Arch

And the new one:

!FindInMap [AWSInstanceType2Arch, !Ref InstanceType, Arch]

As you can see, the newer syntax is shorter and cleaner. Note, however, that you cannot put two tags next to each other. You can intermix the two forms and you can also nest them. For example, !Base64 !Sub is invalid but !Base64 Fn::Sub is fine.

The CloudFormation API functions (CreateChangeSet, CreateStack, UpdateStack, and so forth) now accept templates in either JSON or YAML. The GetTemplate function returns the template in the original format. The CloudFormation designer does not support YAML templates today, but this is on our roadmap.
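For example, a YAML template can be passed directly as the TemplateBody of CreateStack. Here's a sketch using boto3 (the stack name and resources are illustrative, and the call is shown commented out because it requires AWS credentials):

```python
# A tiny YAML template, passed verbatim to the CloudFormation API.
# The resource here is illustrative.
template_body = """\
Description: Minimal YAML template example
Resources:
  MyBucket:
    Type: AWS::S3::Bucket
"""

# To launch the stack (requires AWS credentials):
# import boto3
# cfn = boto3.client("cloudformation", region_name="us-west-2")
# cfn.create_stack(StackName="yaml-demo", TemplateBody=template_body)
```

No conversion step is needed; the service detects the template format for you.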

Cross Stack References
Many AWS customers use one “system” CloudFormation stack to set up their environment (VPCs, VPC subnets, security groups, IP addresses, and so forth) and several other “application” stacks to populate it (EC2 & RDS instances, message queues, and the like). Until now there was no easy way for the application stacks to reference resources created by the system stack.

You can now create and export values from one stack and make use of them in other stacks without going to the trouble of creating custom CloudFormation resources. The first stack exports values like this:

    Value: !Ref TroubleShootingSG
    Export:
      Name: AccountSG

The other stacks then reference them using the new ImportValue function:

  Type: AWS::EC2::Instance
  Properties:
    SecurityGroupIds:
      - !ImportValue AccountSG

The exported names must be unique within the AWS account and the region. A stack that is referenced by another stack cannot be deleted, and it cannot modify or remove the exported value.

Simplified Substitution
Many CloudFormation templates perform some intricate string manipulation in order to construct command lines, file paths, and other values that cannot be fully determined until the stack is created. Until now, this required the use of Fn::Join. In combination with the JSON syntax, this resulted in messy templates that were hard to understand and maintain. In order to simplify this important aspect of template development, we are introducing a new substitution function, Fn::Sub. This function replaces variables (denoted by the syntax ${variable_name}) with their evaluated values. For example:

      command: !Sub |
           mysqladmin -u root password '${DBRootPassword}'
      test: !Sub |
           $(mysql ${DBName} -u root --password='${DBRootPassword}' >/dev/null 2>&1 </dev/null); (( $? != 0 ))
      command: !Sub |
           mysql -u root --password='${DBRootPassword}' < /tmp/setup.mysql
      test: !Sub |
           $(mysql ${DBName} -u root --password='${DBRootPassword}' >/dev/null 2>&1 </dev/null); (( $? != 0 ))

If you need to generate ${} or ${variable}, simply write ${!} or ${!variable}.

Coverage Updates
As part of this release we also added additional support for AWS Key Management Service (KMS), EC2 Spot Fleet, and Amazon EC2 Container Service. See the CloudFormation Release History for more information.

Available Now
All of these features are available now and you can start using them today!

If you are interested in learning more about CloudFormation, please plan to attend our upcoming webinar, AWS Infrastructure as Code. You will learn how to take advantage of best practices for planning and provisioning your infrastructure, and you will have the opportunity to see the new features in action.



New – Additional Filtering Options for AWS Cost Explorer

by Jeff Barr | in Cost Explorer

AWS Cost Explorer is a powerful tool that helps you to visualize, understand, and manage your AWS spending (read The New Cost Explorer for AWS to learn more). You can view your spend by service or by linked account, with your choice of daily or monthly granularity. You can also create custom filters based on the accounts, time period, services, or tags that are of particular interest to you.

In order to give you even more visibility into your spending, we are introducing some additional filtering options today. You can now filter at a more fine-grained level, zooming in to see costs at the most fundamental, as-metered units. You can also zoom out, categorizing your usage at a high level that is nicely aligned with the primary components of AWS usage and billing.

Zooming In
As you may have noticed, AWS tracks your usage at a very detailed level. Each gigabyte-hour of S3 storage, each gigabyte-month of EBS usage, each hour of EC2 usage, each gigabyte of data transfer in or out, and so forth. You can now explore these costs in depth using the Usage Type filtering option. After you enter the Cost Explorer and choose Usage Type from the Filtering menu, you can now filter on the fundamental, as-billed units. For example, I can take a look at my day-by-day usage of m4.xlarge instances:

Zooming Out
Sometimes you need more detail, and sometimes you need a summary. Maybe you want to know how much you spent on RDS, on S3 API requests, or on EBS magnetic storage. You can do this by filtering on a Usage Type Group. Here is my overall EC2 usage, day by day:

Here are some of the other usage type groups that you can use for filtering (I had to do some browser tricks to make the menu this tall):

Available Now
These new features are available now and you can start using them today in all AWS Regions.




Earth on AWS: A Home for Geospatial Data on AWS

by Jeff Barr | in Public Data Sets

My colleague Joe Flasher is part of our Open Data team. He wrote the guest post below in order to let you know about our new Earth on AWS project.



In March 2015, we launched Landsat on AWS, a Public Dataset made up of imagery from the Landsat 8 satellite. Within the first year of launching Landsat on AWS, we logged over 1 billion requests for Landsat data and have been inspired by our customers’ innovative uses of the data. Landsat on AWS showed that sharing data in the cloud makes it possible for anyone to build planetary-scale applications without the bandwidth, storage, memory, and processing power limitations of conventional IT infrastructure.

Today, we are launching Earth on AWS and making more large geospatial datasets openly available in the cloud so you can bring your algorithms to the data instead of downloading the data to your local machine. But more than just making the data openly available, the Earth on AWS initiative will focus on providing resources to help you understand how to work with the data. We are also announcing an associated Call for Proposals for research utilizing the Earth on AWS datasets.

Making More Data Available
Earth on AWS currently contains the following data sets:

NAIP 1m Imagery
The National Agriculture Imagery Program (NAIP) acquires aerial imagery during the agricultural growing seasons in the continental U.S. Roughly 1 meter aerial imagery (Red, Green, Blue, NIR) is available on Amazon S3. Learn more about NAIP on AWS.

Terrain Tiles
Worldwide elevation data available in terrain vector tiles. Additionally, in the United States 10 meter NED data now augments the earlier NED 3 meter and 30 meter SRTM data for crisper, more consistent mountain detail. Tiles are available via Amazon S3. Learn more about terrain tiles.

GDELT – A Global Database of Society
The GDELT Project monitors the world’s broadcast, print, and web news from nearly every corner of every country in over 100 languages and identifies the people, locations, organizations, counts, themes, sources, emotions, quotes, images, and events driving our global society every second of every day. Learn more about GDELT.

Landsat 8 Satellite Imagery
Landsat 8 data is available for anyone to use via Amazon Simple Storage Service (S3). All Landsat 8 scenes from 2015 are available along with a selection of cloud-free scenes from 2013 and 2014. All new Landsat 8 scenes are made available each day, often within hours of production. The satellite images the entire Earth every 16 days at a roughly 30 meter resolution. Learn more about Landsat on AWS.

NEXRAD Weather Radar
The Next Generation Weather Radar (NEXRAD) is a network of 160 high-resolution Doppler radar sites that detects precipitation and atmospheric movement and disseminates data in approximately 5 minute intervals from each site. NEXRAD enables severe storm prediction and is used by researchers and commercial enterprises to study and address the impact of weather across multiple sectors. Learn more about NEXRAD on AWS.

SpaceNet Machine Learning Corpus
SpaceNet is a corpus of very high-resolution DigitalGlobe satellite imagery and labeled training data for researchers to utilize to develop and train machine learning algorithms. The dataset is made up of roughly 1,990 square kilometers of imagery at 50 cm resolution and 220,594 corresponding building footprints. Learn more about the SpaceNet corpus.

NASA Earth Exchange
The NASA Earth Exchange (NEX) makes it easier and more efficient for researchers to access and process earth science data. NEX datasets available on Amazon S3 include downscaled climate projections (including newly available Localized Constructed Analogs), global MODIS vegetation indices, and Landsat Global Land Survey data. Learn more about the NASA Earth Exchange.

Beyond Opening Data
Open data is only useful when you understand what it is and how to use it for your own purposes. To that end, Earth on AWS features videos and articles of customers talking about how they use geospatial data within their own workflows. From using Lambda to replace geospatial servers to studying migrating flocks of birds with radar data, there are a wealth of examples that you can learn from.

If you have an idea of how to use Earth on AWS data, we want to hear about it! There is an open Call for Proposals for research related to Earth on AWS datasets. Our goal with this Call for Proposals is to remove traditional barriers and allow students, educators and researchers to be key drivers of technological innovation and make new advances in their fields.

Thanks to Our Customers
We’d like to thank our customers at DigitalGlobe, Mapzen, Planet, and Unidata for working with us to make these datasets available on AWS.

We are always looking for new ways to work with large datasets and if you have ideas for new data we should be adding or ways in which we should be providing the data, please contact us.

Joe Flasher, Open Geospatial Data Lead, Amazon Web Services

AWS Webinars – September 2016

by Jeff Barr | in Webinars

At the beginning of the month I blogged about the value of continuing education and shared an infographic that illustrated the link between continued education and increased pay, higher effectiveness, and decreased proclivity to seek other employment. The pace of AWS innovation means that there’s always something new to learn. One way to do this is to attend some of our webinars. We design these webinars with a focus on training and education, and strongly believe that you can walk away from them ready, willing, and able to use a new AWS service or to try a new aspect of an existing one.

To that end, we have another great selection of webinars on the schedule for September. As always they are free, but they do fill up and I strongly suggest that you register ahead of time. All times are PT, and each webinar runs for one hour:

September 20

September 21

September 22

September 26

September 27

September 28

September 29



PS – Check out the AWS Webinar Archive for more great content!


Amazon RDS for PostgreSQL – New Minor Versions, Logical Replication, DMS, and More

by Jeff Barr | in Amazon Relational Database Service

Amazon Relational Database Service (RDS) simplifies the process of setting up, operating, and scaling a relational database in the cloud. With support for six database engines (Amazon Aurora, Oracle, Microsoft SQL Server, PostgreSQL, MySQL, and MariaDB) RDS has become a foundation component for many cloud-based applications.

We launched support for PostgreSQL in late 2013 and followed up that launch with support for more features and additional versions of PostgreSQL:

Today we are launching several enhancements to Amazon RDS for PostgreSQL. Here’s a summary:

  • New Minor Versions – Existing RDS for PostgreSQL database instances can be upgraded to new minor versions.
  • Logical Replication – RDS for PostgreSQL now supports logical replication and the associated logical decoding.
  • DMS Support – The new logical replication feature allows an RDS for PostgreSQL database instance to be used as the source for AWS Database Migration Service.
  • Event Triggers – Newer versions of PostgreSQL support event triggers at the database instance level.
  • RAM Disk Size – RDS for PostgreSQL now supports control of the size of the RAM disk.

Let’s take a closer look!

New Minor Versions
We are adding support for versions 9.3.14, 9.4.9, and 9.5.4 of PostgreSQL. Each of these versions includes fixes and enhancements as documented in the linked release notes. You can also upgrade your database instances using the RDS Console or the AWS Command Line Interface (CLI). Here’s how to upgrade from 9.5.2 to 9.5.4 using the console:

Be sure to check Apply immediately if you don’t want to wait until the next maintenance window.

Here’s how you can initiate the upgrade operation from the command line (I decided to give the command line some extra attention in this post in order to make sure that my skills were still current):

$ aws rds modify-db-instance --region us-west-2 \
  --db-instance-identifier "pg95" \
  --engine-version "9.5.4" \
  --apply-immediately

You can check on the progress of the upgrade like this:

$ aws rds describe-events --region us-west-2 \
  --source-type db-instance --source-identifier "pg95" \
  --duration 10 --output table

The following part of the output will let you know that the instance has been upgraded:

||                      Events                       ||
||  Date              |  2016-09-13T00:07:54.547Z    ||
||  Message           |  Database instance patched   ||
||  SourceIdentifier  |  pg95                        ||
||  SourceType        |  db-instance                 ||
|||                 EventCategories                 |||
|||  maintenance                                    |||

If you take a look at the entire series of events for the database instance, you’ll also see that RDS performs backups before and after the patch. You can find these backups in the console or via the command line:

$ aws rds describe-db-snapshots --region us-west-2 \
  --db-instance-identifier "pg95" \
  --snapshot-type automated --output table

The output will look like this:

|                                                                                                                                                               DescribeDBSnapshots
||                                                                                                                                                                  DBSnapshots
|| AllocatedStorage | AvailabilityZone  | DBInstanceIdentifier  |   DBSnapshotIdentifier     | Encrypted  |  Engine   | EngineVersion  |    InstanceCreateTime     | Iops  |    LicenseModel     | MasterUsername  |    OptionGroupName    |
||  100             |  us-west-2b       |  pg95                 |  rds:pg95-2016-09-12-23-22 |  False     |  postgres |  9.5.2         |  2016-09-12T23:15:07.999Z |  1000 |  postgresql-license |  root           |  default:postgres-9-5 |
||  100             |  us-west-2b       |  pg95                 |  rds:pg95-2016-09-13-00-01 |  False     |  postgres |  9.5.2         |  2016-09-12T23:15:07.999Z |  1000 |  postgresql-license |  root           |  default:postgres-9-5 |
||  100             |  us-west-2b       |  pg95                 |  rds:pg95-2016-09-13-00-07 |  False     |  postgres |  9.5.4         |  2016-09-12T23:15:07.999Z |  1000 |  postgresql-license |  root           |  default:postgres-9-5 |

Logical Replication
Amazon RDS for PostgreSQL now supports logical replication. You can now efficiently create database replicas by streaming high-level database changes from an Amazon RDS for PostgreSQL database instance to a non-RDS database that supports the complementary logical decoding feature. (PostgreSQL also supports physical streaming replication, an earlier and less efficient byte/block-based mechanism for creating and maintaining replicas.) Replication takes place via logical slots; each slot contains a stream of changes that can be replayed exactly once (you can read about Logical Decoding Slots in the PostgreSQL documentation to learn more).
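As a minimal sketch of working with logical slots, here's how you can create a slot and peek at its change stream using the built-in test_decoding output plugin and PostgreSQL's standard slot functions (the endpoint, user, and slot name below are placeholders, not values from this post):

```shell
# Create a logical replication slot named "my_slot" using the
# built-in test_decoding output plugin (endpoint is a placeholder)
psql "host=<your-instance-endpoint> user=root dbname=postgres" \
  -c "SELECT * FROM pg_create_logical_replication_slot('my_slot', 'test_decoding');"

# Peek at the pending changes in the slot without consuming them
psql "host=<your-instance-endpoint> user=root dbname=postgres" \
  -c "SELECT * FROM pg_logical_slot_peek_changes('my_slot', NULL, NULL);"
```

Using pg_logical_slot_peek_changes (rather than the get variant) leaves the changes in the slot, which is handy while you are experimenting.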

In order to implement logical replication to a non-RDS database, you will need to ensure that the user account for the PostgreSQL database has the rds_superuser and rds_replication roles. You also need to set the rds.logical_replication parameter to 1 in the DB parameter group for your database instance and then reboot the instance. When this parameter is applied, several PostgreSQL parameters are configured to allow replication.
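The steps above can be carried out with the AWS CLI. This sketch assumes a custom parameter group (named "pg95-logical" here for illustration; the default group cannot be modified):

```shell
# Create a custom parameter group for the PostgreSQL 9.5 family
aws rds create-db-parameter-group --region us-west-2 \
  --db-parameter-group-name "pg95-logical" \
  --db-parameter-group-family "postgres9.5" \
  --description "Logical replication enabled"

# Enable logical replication (a static parameter, applied at reboot)
aws rds modify-db-parameter-group --region us-west-2 \
  --db-parameter-group-name "pg95-logical" \
  --parameters "ParameterName=rds.logical_replication,ParameterValue=1,ApplyMethod=pending-reboot"

# Attach the parameter group to the instance, then reboot it
aws rds modify-db-instance --region us-west-2 \
  --db-instance-identifier "pg95" \
  --db-parameter-group-name "pg95-logical"

aws rds reboot-db-instance --region us-west-2 \
  --db-instance-identifier "pg95"
```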

With the roles in place and the database instance configured, you can create a logical slot and then instruct the non-RDS database (or other client) to read and process records from the slot. For example, the pg_recvlogical command connects to the database instance and streams data from a replication slot into a local file.
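For example, an invocation of pg_recvlogical along those lines might look like this (the endpoint and slot name are placeholders):

```shell
# Connect to the instance and stream decoded changes from the
# replication slot into a local file
pg_recvlogical -h <your-instance-endpoint> \
  -U root -d postgres \
  --slot my_slot --start -f changes.log
```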

To learn more, read Logical Replication for PostgreSQL in the Amazon RDS for PostgreSQL User Guide.

DMS Support
AWS Database Migration Service (DMS) helps you to migrate databases to AWS. In conjunction with the new support for logical replication, you can now migrate your data from a PostgreSQL database (running on RDS or on a self-managed host) to another open source or commercial database. To do this, you will need to create a logical replication slot as described above.

Event Triggers
Newer versions of PostgreSQL (9.4.9+ and 9.5.4+) support event triggers at the database level. Because these triggers exist outside of any particular database table, they are able to capture a wider range of events, including DDL-level events that create, modify, and delete tables (here’s a full list of events that fire triggers). To learn more and to see a sample implementation of a trigger, take a look at Event Triggers for PostgreSQL in the User Guide.
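As an illustration of the idea (not the sample from the User Guide), here's a minimal event trigger that logs every DDL command as it completes; the function and trigger names are hypothetical:

```shell
# Define a trigger function, then fire it at the end of every DDL command
psql "host=<your-instance-endpoint> user=root dbname=postgres" <<'SQL'
CREATE OR REPLACE FUNCTION log_ddl() RETURNS event_trigger AS $$
BEGIN
  -- TG_TAG holds the command tag, e.g. CREATE TABLE or DROP TABLE
  RAISE NOTICE 'DDL command executed: %', tg_tag;
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER log_ddl_trigger
  ON ddl_command_end
  EXECUTE PROCEDURE log_ddl();
SQL
```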

RAM Disk Size
You can now use the rds.pg_stat_ramdisk_size parameter to control the amount of memory used for PostgreSQL’s stats_temp_directory. This directory is used to temporarily store statistics on run-time performance and behavior; making more memory available can reduce I/O requirements and improve performance.
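For example, here's one way to allocate 256 MB to the RAM disk via a custom parameter group (the group name is a placeholder; the parameter is static, so a reboot is required):

```shell
# Set the RAM disk size for stats_temp_directory to 256 MB
aws rds modify-db-parameter-group --region us-west-2 \
  --db-parameter-group-name "pg95-custom" \
  --parameters "ParameterName=rds.pg_stat_ramdisk_size,ParameterValue=256,ApplyMethod=pending-reboot"

# Reboot the instance so the new value takes effect
aws rds reboot-db-instance --region us-west-2 \
  --db-instance-identifier "pg95"
```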

Available Now
The new versions and features described above are available now and you can start using them today.




AWS Week in Review – September 5, 2016

by Jeff Barr | in Week in Review

This is the third community-driven edition of the AWS Week in Review. Special thanks are due to the 15 internal and external contributors who helped to make this happen. If you would like to contribute, please take a look at the AWS Week in Review on GitHub.


New & Notable Open Source

  • s3logs-cloudwatch is a Lambda function that parses S3 server access log files and publishes additional bucket metrics to CloudWatch.
  • is a curated list of AWS resources used to prepare for AWS certifications.
  • RedEye is a utility to monitor Redshift performance.
  • Dockerfile will build a Docker image, push it to the EC2 Container Registry, and deploy it to Elastic Beanstalk.
  • lambda-contact-form supports contact form posts from static websites hosted on S3/CloudFront.
  • dust is an SSH cluster shell for EC2.
  • aws-ssh-scp-connector is a utility to help connect to EC2 instances.
  • lambda-comments is a blog commenting system built with Lambda.


New Customer Stories

  • MYOB uses AWS to scale its infrastructure to support demand for new services and saves up to 30 percent by shutting down unused capacity and using Reserved Amazon EC2 Instances. MYOB provides business management software to about 1.2 million organizations in Australia and New Zealand. MYOB uses a wide range of AWS services, including Amazon Machine Learning to build smart applications incorporating predictive analytics and AWS CloudFormation scripts to create new AWS environments in the event of a disaster.
  • PATI Games needed IT solutions that would guarantee the stability and scalability of their game services for global market penetration, and AWS provided the safest and most cost-efficient solution. PATI Games is a Korean company primarily engaged in the development of games based on SNS platforms. AWS services including Amazon EC2, Amazon RDS (Aurora), and Amazon CloudFront enable PATI Games to maintain high reliability, decrease latency, and ultimately boost customer satisfaction.
  • Rabbi Interactive scales to support a live-broadcast, second-screen app and voting system for hundreds of thousands of users, gives home television viewers real-time interactive capabilities, and reduces monthly operating costs by 60 percent by using AWS. Based in Israel, the company provides digital experiences such as second-screen apps used to interact with popular television shows such as “Rising Star” and “Big Brother.” Rabbi Interactive worked with AWS partner CloudZone to develop an interactive second-screen platform.


Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.