AWS Official Blog

  • AWS Mobile Hub – Build, Test, and Monitor Mobile Applications

    by Jeff Barr | on | in Amazon Mobile Hub |

    The new AWS Mobile Hub (Beta) simplifies the process of building, testing, and monitoring mobile applications that make use of one or more AWS services. It helps you skip the heavy lifting of integrating and configuring services by letting you add and configure features to your apps, including user authentication, data storage, backend logic, push notifications, content delivery, and analytics—all from a single, integrated console.

    The AWS Mobile Hub helps you at each stage of development: configuring, building, testing, and usage monitoring. The console is feature-oriented; instead of picking individual services you select higher-level features, each composed of one or more services, SDKs, and client code. What once took a day to choose and configure properly can now be done in 10 minutes or so.

    Diving In
    Let’s dive into the console and take a look!

    The Mobile Hub outlines (and helps with) each step of the mobile app development process:

    I will call my project SuperMegaMobileApp:

    Each feature is backed by one or more AWS services. For example, User Sign-In is powered by Amazon Cognito and Push Notification is powered by Amazon Simple Notification Service (SNS). I simply click on a feature to select and configure it.

    I click on Push Notifications and Enable push, then choose the destination platform(s):

    I want to push to Android devices, so I select it. Then I need to enter an API Key and a Sender ID:

    I can add logic to my application by connecting it to my AWS Lambda functions:

    After I have selected and configured the features that I need, I click on Build to move forward:

    The Mobile Hub creates a source package that I can use to get started, and provides me with the links and other information that I need to have at hand in order to get going:

    I can download the entire package, open it up in my preferred IDE, and keep going from there:

    I can use this code as a starter app and edit it as desired. I can also copy selected pieces of code and paste them into my existing mobile app.

    I can also make use of the AWS Device Farm for testing and Amazon Mobile Analytics to collect operational metrics.

    Visit the Hub
    Whether you are creating a brand new mobile app or adding features to an existing app, AWS Mobile Hub lets you take advantage of the features, scalability, reliability, and low cost of AWS in minutes. As you have seen, AWS Mobile Hub walks you through feature selection and configuration. It then automatically provisions the AWS services required to power these features, and generates working quickstart apps for iOS and Android that use your provisioned services.

    You can now spend more time adding features to your app and less time taking care of all of the details behind the scenes!

    To learn more, visit the Mobile Hub page.



  • AWS IoT – Cloud Services for Connected Devices

    by Jeff Barr | on | in AWS IoT, re:Invent |

    Have you heard about the Internet of Things (IoT)? Although critics sometimes dismiss it as nothing more than “put a chip in it,” I believe that the concept builds upon some long-term technology trends and that there’s something really interesting and valuable going on.

    To me, the most relevant trends are the decreasing cost of mass-produced compute power, the widespread availability of IP connectivity, and the ease with which large amounts of information can be distilled into intelligence using any number of big data tools and techniques:

    • Mass-produced compute power means that it is possible to crank out powerful processors that consume modest amounts of power, occupy very little space, and cost very little. These attributes allow the processors to be unobtrusively embedded in devices of all shapes and sizes.
    • Widespread IP connectivity (wired or wireless) lets these processors talk to each other and to the cloud. While this connectivity is fairly widespread, it is definitely not ubiquitous.
    • Big data allows us to make sense of the information measured, observed, or collected by the processors running in these devices.

    We could also add advances in battery & sensor technology to the list of enabling technologies for the Internet of Things. Before too long, factory floors, vehicles, health care systems, household appliances, and much more will become connected “things.” Two good introductory posts on the topic are 20 Real World Problems Solved by IoT and Smart IoT: IoT as a Human Agent, Human Extension, and Human Complement. My friend Sudha Jamthe has also written on the topic; her book IoT Disruptions focuses on new jobs and careers that will come about as IoT becomes more common.

    Taking all of these trends as givens, it should not come as a surprise that we are working to make sure that AWS is well-equipped to support many different types of IoT devices and applications. Although I have described things as connected devices, they can also take the form of apps running on mobile devices.

    New AWS IoT
    Today we are launching AWS IoT (Beta).

    This new managed cloud service provides the infrastructure that allows connected cars, factory floors, aircraft engines, sensor grids, and the like (AWS IoT refers to them as “things”) to easily and securely interact with cloud services and with other devices, all at world-scale. The connection to the cloud is fast and lightweight (MQTT or REST), making it a great fit for devices that have limited memory, processing power, or battery life.

    Let’s take a look at the components that make up AWS IoT:

    • Things are devices of all types, shapes, and sizes including applications, connected devices, and physical objects. Things measure and/or control something of interest in their local environment. The AWS IoT model is driven by state and state changes. This allows things to work properly even when connectivity is intermittent; applications interact with things by way of cloud-based Thing Shadows. Things have names, attributes, and shadows.
    • Thing Shadows are virtual, cloud-based representations of things. They track the state of each connected device, and allow that state to be tracked even if the thing loses connectivity for an extended period of time.
    • The real-time Rules Engine transforms messages based on expressions that you define, and routes them to AWS endpoints (Amazon DynamoDB, Amazon Simple Storage Service (S3), AWS Lambda, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Kinesis, and Amazon Kinesis Firehose), all expressed using a SQL-like syntax. Routing is driven by the contents of individual messages and by context. For example, routine readings from a temperature sensor could be tracked in a DynamoDB table; an aberrant reading that exceeds a value stored in the thing shadow can trigger a Lambda function.
    • The Message Broker speaks MQTT (and also HTTP 1.1) so your devices can take advantage of alternative protocols even if your cloud backend does not speak them. The Message Broker can scale to accommodate billions of responsive long-lived connections between things and your cloud applications. Things use a topic-based pub/sub model to communicate with the broker, and can also publish via HTTP request/response. They can publish their state and can also subscribe to incoming messages. The pub/sub model allows a single device to easily and efficiently share its status with any number of other devices (thousands or even millions).
    • Device SDKs are client libraries that are specific to individual types of devices. The functions in the SDK allow code running on the device to communicate with the AWS IoT Message Broker over encrypted connections. The devices identify themselves using X.509 certificates or Amazon Cognito identities. The SDK also supports direct interaction with Thing Shadows.
    • The Thing Registry assigns a unique identity to each thing. It also tracks descriptive metadata such as the attributes and capabilities for each thing.

    All of these components can be created, configured, and inspected using the AWS Management Console, the AWS Command Line Interface (CLI), or through the IoT API.

    AWS IoT lets billions of things keep responsive connections to the cloud, and lets cloud applications interact with things via thing shadows, the rules engine, and the real-time messaging functionality. It receives messages from things and filters, records, transforms, augments, or routes them to other parts of AWS or to your own code.

    Getting Started with AWS IoT
    We have been working with a large group of IoT Partners to create AWS-powered starter kits:

    Once you have obtained a kit and connected it to something interesting, you are ready to start building your first IoT application using AWS IoT. You will make use of several different SDKs during this process:

    The AWS IoT Console will help you get started. With a few clicks you can create your first thing, and then download the SDK, security credentials, and sample code you will need to connect a device to AWS IoT.

    You can also build AWS IoT applications that communicate with an Amazon Echo via the Alexa Skills Kit. AWS IoT can trigger an Alexa Skill via a Lambda function and Alexa Skills can interact with thing shadows. Alexa Skills can also take advantage of AWS IoT’s bidirectional messaging capability (which traverses NAT and firewalls found in home networks) to wake devices with commands from the cloud. Manufacturers can use thing shadows to store responses to application-specific messages.

    AWS IoT in the Console
    The Console includes an AWS IoT tutorial to get you started:

    It also provides complete details on each thing, including the thing’s API endpoint, MQTT topic, and the contents of its shadow:

    AWS IoT Topics, Messages, and Rules
    All of the infrastructure that I described can be seen as a support system for the message and rule system that forms the heart of AWS IoT. Things disclose their state by publishing messages to named topics. Publishing a message to a topic will create the topic if necessary; you don't have to create it in advance. The topic namespace is hierarchical (“myfactories/seattle/sensors/door”).
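
    As a rough sketch (the topic segments and payload fields here are made up for illustration, and the publish call is shown only as a comment), building a hierarchical topic name and a state message from Python might look like this:

```python
import json

def make_topic(*parts):
    """Join path components into a hierarchical MQTT topic name."""
    return "/".join(parts)

# The topic mirrors the physical layout: factory -> site -> sensor type -> sensor.
topic = make_topic("myfactories", "seattle", "sensors", "door")

# A thing typically publishes a small JSON payload describing its state.
payload = json.dumps({"state": "open", "battery_pct": 87})

# An MQTT client (e.g. from the AWS IoT Device SDK) would then publish it:
#   client.publish(topic, payload, qos=1)
print(topic)  # myfactories/seattle/sensors/door
```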

    Rules use a SQL-like SELECT statement to filter messages. In the IoT Rules Engine, the FROM clause references an MQTT topic and the WHERE clause references JSON properties in the message. When a rule matches a message, it can invoke one or more of the following actions:

    The SELECT statement can use all (*) or specifically chosen fields of the message in the invocation.

    The endpoints above can be used to reach the rest of AWS. For example, you can reach Amazon Redshift via Kinesis, and external endpoints via Lambda, SNS, or Kinesis.
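
    To make the rule shape concrete, here is a sketch of a rule expressed as the payload that the boto3 create_topic_rule call accepts (the rule name, topic, threshold, and function ARN are all hypothetical):

```python
import json

# Hypothetical rule: only messages whose temperature field exceeds the
# threshold reach the Lambda action.
rule = {
    "ruleName": "HighTempToLambda",
    "topicRulePayload": {
        # FROM references an MQTT topic; WHERE references JSON properties
        # of the message; SELECT picks the fields passed to the action.
        "sql": "SELECT temperature, sensor_id "
               "FROM 'myfactories/seattle/sensors/#' "
               "WHERE temperature > 60",
        "actions": [
            {"lambda": {"functionArn":
                "arn:aws:lambda:us-east-1:123456789012:function:HandleHighTemp"}}
        ],
    },
}

# boto3.client("iot").create_topic_rule(**rule) would submit this rule.
print(json.dumps(rule, indent=2))
```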

    Thing Shadows also participate in the message system. Shadows respond to HTTP GET requests with JSON documents (the documents are also accessible via MQTT for environments that don’t support HTTP). Each document contains the thing’s state, its metadata, and a version number for the state. Each piece of state information is stored as both “reported” (what the device last said) and “desired” (what the application wants it to be). Each shadow accepts changes to the desired state via HTTP POST, and publishes “delta” and “accepted” messages to topics associated with the thing shadow. The device listens on these topics and changes its state accordingly.
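
    The reported/desired/delta mechanics can be illustrated with a few lines of Python (the shadow document and its fields are hypothetical; the delta helper just mimics what the service computes):

```python
def shadow_delta(desired, reported):
    """Return the fields where desired state differs from reported state.

    This mirrors what AWS IoT publishes on the shadow's "delta" topic:
    only the keys the device still needs to change.
    """
    return {k: v for k, v in desired.items() if reported.get(k) != v}

# Hypothetical shadow document for a connected light.
doc = {
    "state": {
        "reported": {"power": "off", "brightness": 40},  # what the device last said
        "desired":  {"power": "on",  "brightness": 40},  # what the app wants
    },
    "version": 7,
}

delta = shadow_delta(doc["state"]["desired"], doc["state"]["reported"])
print(delta)  # {'power': 'on'} -- the device would react by powering on
```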

    IoT at re:Invent
    If you are at re:Invent, be sure to check out our Mobile Developer & IoT track. Here are some of the sessions we have in store:

    • MBL203 – From Drones to Cars: Connecting the Devices in Motion to the Cloud.
    • MBL204 – Connecting the Unconnected – State of the Union – Internet of Things Powered by AWS.
    • MBL303 – Build Mobile Apps for IoT Devices and IoT Apps for Mobile Devices.
    • MBL305 – You Have Data from the Devices, Now What? Getting Value out of the IoT.
    • WRK202 – Build a Scalable Mobile App on Serverless, Event-Triggered, Back-End Logic.

    More to Come
    There’s a lot more to talk about and I have barely scratched the surface with this introductory blog post. Once I recover from AWS re:Invent, I will retire to my home lab and cook up a thing or two of my own and share the project with you. Stay tuned!


    PS – Check out the AWS IoT Mega Contest!

  • AWS Lambda Update – Python, VPC, Increased Function Duration, Scheduling, and More

    by Jeff Barr | on | in AWS Lambda, re:Invent |

    We launched AWS Lambda at re:Invent 2014 and the reception has been incredible. Developers and system architects soon figured out that they can quickly and easily build serverless systems that need no administration and can scale to handle a very large number of requests. As a recap, Lambda functions can run in response to the following events:

    Over the past year we have added lots of new features to Lambda. We launched in three AWS regions (US East (Northern Virginia), US West (Oregon), and Europe (Ireland)) and added support for Asia Pacific (Tokyo) earlier this year. Lambda launched with support for functions written in Node.js; we added support for Java functions earlier this year. As you can see from the list above, we also connected Lambda to many other parts of AWS. Over on the AWS Compute Blog, you can find some great examples of how to put Lambda to use in powerful and creative ways, including (my personal favorite), Microservices Without the Servers.

    New Features for re:Invent
    Today we are announcing a set of features that will make Lambda even more useful. Here’s a summary of what we just announced from the stage:

    • VPC Support
    • Python functions
    • Increased function duration
    • Function versioning
    • Scheduled functions

    As you can see, it is all about the functions! Let’s take a look at each of these new features.

    Accessing Resources in a VPC From a Lambda Function
    Many AWS customers host microservices within an Amazon Virtual Private Cloud and would like to be able to access them from their Lambda functions. Perhaps they run a MongoDB cluster with lookup data, or want to use Amazon ElastiCache as a stateful store for Lambda functions, but don’t want to expose these resources to the Internet.

    You will soon be able to access resources of this type by setting up one or more security groups within the target VPC, configuring them to accept inbound traffic from Lambda, and attaching them to the target VPC subnets. Then you will need to specify the VPC, the subnets, and the security groups when you create your Lambda function (you can also add them to an existing function). You’ll also need to give your function permission (via its IAM role) to access a couple of EC2 functions related to Elastic Networking.

    This feature will be available later this year. I’ll have more info (and a walk-through) when we launch it.
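
    Once the feature launches, wiring a function into a VPC will presumably come down to supplying subnet and security group IDs at creation time. Here is a sketch of what the boto3 parameters might look like (all resource IDs and names below are placeholders, and the actual API call is left as a comment):

```python
# All resource IDs below are hypothetical placeholders.
vpc_config = {
    "SubnetIds": ["subnet-0ab1c2d3", "subnet-4ef5a6b7"],  # target VPC subnets
    "SecurityGroupIds": ["sg-0123abcd"],                  # allow traffic from Lambda
}

create_function_params = {
    "FunctionName": "LookupFromMongo",
    "Runtime": "python2.7",
    "Role": "arn:aws:iam::123456789012:role/lambda-vpc-role",  # needs ENI permissions
    "Handler": "lookup.handler",
    "VpcConfig": vpc_config,
}

# boto3.client("lambda").create_function(Code={...}, **create_function_params)
print(sorted(create_function_params["VpcConfig"]))
```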

    Python Functions
    You can already write your Lambda functions in Node.js and Java. Today we are adding support for Python 2.7, complete with built-in access to the AWS SDK for Python. Python is easy to learn and easy to use, and you’ll be up and running in minutes. We have received many, many requests for Python support and we are very happy to be able to deliver it. You can start using Python today. Here’s what it looks like in action:
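
    A minimal Python handler along those lines (the event shape here is hypothetical) looks like this:

```python
def lambda_handler(event, context):
    # Lambda passes the triggering event as a dict and invocation
    # metadata (request id, remaining time, etc.) in the context object.
    name = event.get("name", "World")
    return {"message": "Hello, {}!".format(name)}

# Local smoke test; in Lambda the service supplies event and context.
print(lambda_handler({"name": "re:Invent"}, None))  # {'message': 'Hello, re:Invent!'}
```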

    Increased Function Duration
    Lambda is a great fit for Extract-Transform-Load (ETL) applications. It can easily scale up to ingest and process large volumes of data, without requiring any persistent infrastructure. In order to support this very popular use case, your Lambda functions can now run for up to 5 minutes. As has always been the case, you simply specify the desired timeout when you create the function. Your function can consult the context object to see how much more time it has available.

    Here’s how you can access and log that value using Python:

    print(" Remaining time (ms): " + str(context.get_remaining_time_in_millis()) + "\n")

    Functions that consume all of their time will be terminated, as has always been the case.
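
    A common pattern is to check the remaining budget between units of work and exit cleanly before the timeout hits. Here is a sketch using a stand-in context object for local experimentation (the real context is supplied by Lambda; the per-record work is hypothetical):

```python
import time

class FakeContext(object):
    """Stand-in for the Lambda context object, for local experimentation."""
    def __init__(self, timeout_ms):
        self._deadline = time.time() * 1000 + timeout_ms
    def get_remaining_time_in_millis(self):
        return max(0, int(self._deadline - time.time() * 1000))

def lambda_handler(records, context):
    # Process as many records as the remaining time budget allows,
    # keeping a safety margin so we stop cleanly instead of being killed.
    done = []
    for record in records:
        if context.get_remaining_time_in_millis() < 2000:
            break  # persist progress; a follow-up invocation can continue
        done.append(record.upper())  # hypothetical per-record work
    return done

print(lambda_handler(["a", "b"], FakeContext(300000)))  # 5-minute budget
```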

    Function Versioning & Aliasing
    When you start to build complex systems with Lambda, you will want to evolve them on a controlled basis. We have added a new versioning feature to simplify this important aspect of development & testing.

    Each time you upload a fresh copy of the code for a particular function, Lambda will automatically create a new version and assign it a number (1, 2, 3, and so forth). The Amazon Resource Name (ARN) for the function now accepts an optional version qualifier at the end (a “:” and then a version number). An ARN without a qualifier always refers to the newest version of the function for ease of use and backward compatibility. A qualified ARN such as “arn:aws:lambda:us-west-2:123456789012:function:PyFunc1:2” refers to a particular version (2, in this case).

    Here are a couple of things to keep in mind as you start to think about this new feature:

    • Each version of a function has its own description and configuration (language / runtime, memory size, timeout, IAM role, and so forth).
    • Each version of a given function generates a unique set of CloudWatch metrics.
    • The CloudWatch Logs for the function will include the function version as part of the stream name.
    • Lambda will store multiple versions for each function. Each Lambda account can store up to 1.5 gigabytes of code and you can delete older versions as needed.

    You can also create named aliases and assign them to specific versions of the code for a function. For example, you could initially assign “prod” to version 3, “test” to version 5, and “dev” to version 7 for a function. Then you would use the alias as part of the ARN that you use to invoke the function, like this:

    • Production – “arn:aws:lambda:us-west-2:123456789012:function:PyFunc1:prod”
    • Testing – “arn:aws:lambda:us-west-2:123456789012:function:PyFunc1:test”
    • Development – “arn:aws:lambda:us-west-2:123456789012:function:PyFunc1:dev”

    You can use ARNs with versions or aliases (which we like to call qualified ARNs) anywhere you’d use an existing non-versioned or non-aliased ARN.  In fact, we recommend using them as a best practice.

    This feature makes it easy to promote code between stages or to rollback to earlier versions if a problem arises. For example, you can point your prod alias to version 3 of the code, and then remap it to point to version 5 (effectively promoting it from test to production) without having to make any changes to the client applications or to the event source that triggers invocation of the function.
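
    A qualified ARN is just the base ARN with the version or alias appended, and the promotion flow maps onto a few SDK calls. Here is a sketch (the helper function is mine, and the commented-out boto3 calls assume credentials and an existing PyFunc1 function):

```python
def qualify(function_arn, qualifier):
    """Append a version number or alias name to an unqualified function ARN."""
    return "{}:{}".format(function_arn, qualifier)

base = "arn:aws:lambda:us-west-2:123456789012:function:PyFunc1"

print(qualify(base, "prod"))  # ...:function:PyFunc1:prod
print(qualify(base, 2))       # ...:function:PyFunc1:2

# With boto3, publishing a version and moving the "prod" alias between
# versions looks roughly like this:
#   lam = boto3.client("lambda")
#   lam.publish_version(FunctionName="PyFunc1")
#   lam.create_alias(FunctionName="PyFunc1", Name="prod", FunctionVersion="3")
#   lam.update_alias(FunctionName="PyFunc1", Name="prod", FunctionVersion="5")
```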

    Scheduled Functions (Cron)
    You can now invoke a Lambda function on a regular, scheduled basis. You can specify a fixed rate (number of minutes, hours, or days between invocations) or you can specify a Cron-like expression:

    This feature is available now in the console, with API and CLI support in the works.

    — Jeff;


  • CloudWatch Dashboards – Create & Use Customized Metrics Views

    by Jeff Barr | on | in Amazon CloudWatch, re:Invent |

    Amazon CloudWatch monitors your AWS cloud resources and your cloud-powered applications. It tracks the metrics so that you can visualize and review them. You can also set alarms that will fire when a metric goes beyond a limit that you specify. CloudWatch gives you visibility into resource utilization, application performance, and operational health.

    New CloudWatch Dashboards
    Today we are giving you the power to build customized dashboards for your CloudWatch metrics. Each dashboard can display multiple metrics, and can be accessorized with text and images. You can build multiple dashboards if you’d like, each one focusing on providing a distinct view of your environment. You can even pull data from multiple regions into a single dashboard in order to create a global view.

    Let’s build one!

    Building a Dashboard
    I open up the CloudWatch Console and click on Create dashboard to get started. Then I enter a name:

    Then I add my first “Widget” (a graph or some text) to my dashboard. I’ll display some metrics using a line graph:

    Now I need to choose the metric. This is a two step process. First I choose by category:

    I clicked on EC2 Metrics. Now I can choose one or more metrics and create the widget. I sorted the list by Metric Name, selected all of my EC2 instances, and clicked on the Create widget button (not shown in the screen shot):

    As I noted earlier, you can access and make use of metrics drawn from multiple AWS regions; this means that you can create a single global status dashboard for your complex, multi-region applications and deployments.

    And here’s my dashboard:

    I can resize the graph, and I can also interact with it. For example, I can focus on a single instance with a click (this will also highlight the other metrics from that instance on the dashboard):

    I can add several widgets. The vertical line allows me to look for correlation between metrics that are shown in different widgets:

    The graphs can be linked or independent with respect to zooming (the Actions menu allows me to choose which option I want). I can click and drag on a desired time-frame and all of the graphs will zoom (if they are linked) when I release the mouse button:

    The Actions menu allows me to reset the zoom and to initiate many other operations on my dashboards:

    I can also add static text and images to my dashboard by using a text widget. The contents of the widget are specified in GitHub Flavored Markdown:

    Here’s my stylish new dashboard:

    Text widgets can also include buttons and tables. I can link to help pages, troubleshooting guides, internal and external status pages, phone directories, and so forth.

    I can create several dashboards and switch between them with a click:

    I can also create a link that takes me from one dashboard to another one:

    I can also control the time range for the graphs, and I can enable automatic refresh, with fine-grained control of both:

    Dashboard Ownership and Access
    The individual dashboards are stored at the AWS account level and can be accessed by IAM users within the account. However, in many cases administrators will want to set up dashboards for use across the organization in a controlled fashion.

    In order to support this important scenario, IAM permissions on a pair of CloudWatch functions can be used to control the ability to see metrics and to modify dashboards. Here’s how it works:

    • If an IAM user has permission to call PutMetricData, they can create, edit, and delete dashboards.
    • If an IAM user has permission to call GetMetricStatistics, they can view dashboard content.
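
    A minimal view-only policy following that model might look like this sketch (expressed here as a Python dict for readability; attach it to the relevant IAM users or groups):

```python
import json

# View-only sketch: GetMetricStatistics lets the user see dashboard content,
# but without PutMetricData they cannot create, edit, or delete dashboards.
view_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["cloudwatch:GetMetricStatistics"],
            "Resource": "*",
        }
    ],
}

print(json.dumps(view_only_policy, indent=2))
```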

    Available Now
    CloudWatch Dashboards are available now and you can start using them today in all AWS regions! You can create up to three dashboards (each with up to 50 metrics) at no charge. After that, each additional dashboard costs $3 per month.

    Share Your Dashboards
    I am looking forward to seeing examples of this feature in action. Take it for a spin and let me know what you come up with!


  • EC2 Container Service Update – Container Registry, ECS CLI, AZ-Aware Scheduling, and More

    by Jeff Barr | on | in EC2 Container Service, re:Invent |

    I’m really excited by the Docker-driven, container-based deployment model that is quickly becoming the preferred way to build, run, scale, and quickly update new applications. Since we launched Amazon EC2 Container Service last year, we have seen customers use it to host and run their microservices, web applications, and batch jobs.

    Many developers have told me that containers have allowed them to speed up their development, testing, and deployment efforts. They love the fact that individual containers can hold “standardized” application components, each of which can be built using the language, framework, and middleware best suited to the task at hand. The isolation provided by the containers gives them the freedom to innovate at a more granular level instead of putting the entire system at risk due to large-scale changes.

    Based on the feedback that we have received from our contingent of Amazon ECS and Docker users, we are announcing some powerful new features – the Amazon EC2 Container Registry and the Amazon EC2 Container Service CLI. We are also making the Amazon ECS scheduler more aware of Availability Zones and adding some new container configuration options.

    Let’s dive in!

    Amazon EC2 Container Registry
    Docker (and hence EC2 Container Service) is built around the concept of an image. When you launch a container, you reference an image, which is pulled from a Docker registry. This registry is a critical part of the deployment process. Based on conversations with our customers, we have learned that they need a registry that is highly available, exceptionally scalable, and globally accessible, with the ability to support deployments that span two or more AWS regions. They also want it to integrate with AWS Identity and Access Management (IAM) to simplify authorization and to provide fine-grained control.

    While customers could meet most of these requirements by hosting their own registry, they have told us that this would impose an operational burden that they would strongly prefer to avoid.

    Later this year we will make the Amazon EC2 Container Registry (Amazon ECR) available. This fully managed registry will address the issues that I mentioned above by making it easy for you to store, manage, distribute, and collaborate around Docker container images.

    Amazon ECR is integrated with ECS and will be easy to integrate into your production workflow. You can use the Docker CLI running on your development machine to push images to Amazon ECR, where Amazon ECS can retrieve them for use in production deployments.

    Images are stored durably in S3 and are encrypted at rest and in transit, with access controls via IAM users and roles. You will pay only for the data that you store and for data that you transfer to the Internet.

    Here’s a sneak peek at the console interface:

    You can visit the signup page to learn more and to sign up for early access. If you are interested in participating in this program, I would encourage you to sign up today.

    We are already working with multiple launch partners including Shippable, CloudBees, CodeShip, and Wercker to provide integration with Amazon ECS and Amazon ECR, with a focus on automatically building and deploying Docker images. To learn more, visit our Container Partners page.

    Amazon EC2 Container Service CLI
    The ECS Command Line Interface (ECS CLI) is a command line interface for Amazon EC2 Container Service (ECS) that provides high-level commands to simplify creating, updating, and monitoring clusters and tasks from a local development environment.

    The Amazon ECS CLI supports Docker Compose, a popular open-source tool for defining and running multi-container applications. You can use the ECS CLI as part of your everyday development and testing cycle as an alternative to the AWS Management Console.

    You can get started with the ECS CLI in a couple of minutes. Download it (read the directions first), install it, and then configure it as follows (you have other choices and options, of course):

    $ ecs-cli configure --region us-east-1 --cluster my-cluster

    Launch your first cluster like this:

    $ ecs-cli up --keypair my-keys --capability-iam --size 2

    Docker Compose requires a configuration file. Here’s a simple one to get started (put this in docker-compose.yml):

      web:
        image: amazon/amazon-ecs-sample
        ports:
          - "80:80"

    Now run this on the cluster:

    $ ecs-cli compose up
    INFO[0000] Found task definition TaskDefinition=arn:aws:ecs:us-east-1:980116778723:task-definition/ecscompose-bin:1
    INFO[0000] Started task with id:arn:aws:ecs:us-east-1:9801167:task/fd8d5a69-87c5-46a4-80b6-51918092e600

    Then take a peek at the running tasks:

    $ ecs-cli compose ps
    Name                                      State    Ports
    fd8d5a69-87c5-46a4-80b6-51918092e600/web  RUNNING>80/tcp

    Point your web browser at that address to see the sample app running in the cluster.

    The ECS CLI includes lots of other options (run it with --help to see all of them). For example, you can create and manage long-running services. Here’s the list of options:

    The ECS CLI is available under an Apache 2.0 license (the code is available on GitHub) and we are looking forward to seeing your pull requests.

    New Docker Container Configuration Options
    A task definition is a description of an application that lets you define the containers that are scheduled together on an EC2 instance. Some of the parameters you can specify in a task definition include which Docker images to use, how much CPU and memory to use with each container, and which (if any) container ports are mapped to host ports.

    Task definitions now support lots of Docker options including Docker labels, working directory, networking disabled, privileged execution, read-only root filesystem, DNS servers, DNS search domains, ulimits, log configuration, extra hosts (hosts to add to /etc/hosts), and security options for Multi-Level Security (MLS) systems such as SELinux.

    The Task Definition Editor in the ECS Console has been updated and now accepts the new configuration options:

    For more information, read about Task Definition Parameters.
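
    As a sketch of how several of the new options fit into a container definition (the container name, image aside from the sample, and all values below are illustrative; the register_task_definition call is shown only as a comment):

```python
# Hypothetical web container exercising several of the new options.
container = {
    "name": "web",
    "image": "amazon/amazon-ecs-sample",
    "cpu": 256,
    "memory": 128,
    "portMappings": [{"containerPort": 80, "hostPort": 80}],
    "dockerLabels": {"team": "frontend"},            # Docker labels
    "workingDirectory": "/srv/app",                  # working directory
    "readonlyRootFilesystem": True,                  # read-only root filesystem
    "dnsServers": ["10.0.0.2"],                      # DNS servers
    "extraHosts": [{"hostname": "db",                # extra /etc/hosts entries
                    "ipAddress": "10.0.0.10"}],
    "ulimits": [{"name": "nofile",
                 "softLimit": 1024, "hardLimit": 4096}],
}

# boto3.client("ecs").register_task_definition(
#     family="web-task", containerDefinitions=[container])
print(sorted(container))
```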

    Scheduling is Now Aware of Availability Zones
    We introduced the Amazon ECS service scheduler earlier this year as a way to easily schedule containers for long running stateless services and applications. The service scheduler optionally integrates with Elastic Load Balancing. It ensures that the specified number of tasks are constantly running and restarts tasks if they fail. The service scheduler is the primary way customers deploy and run production services with ECS and we want to continue to make it easier to do so.

    Today the service scheduler is availability zone aware. As new tasks are launched, the service scheduler will spread the tasks to maintain balance across AWS availability zones.

    Amazon ECS at re:Invent
    If you are at AWS re:Invent and want to learn more about how your colleagues (not to mention your competitors) are using container-based computing and Amazon ECS, check out the following sessions:

    • CMP302 – Amazon EC2 Container Service: Distributed Applications at Scale (to be live streamed).
    • CMP406 – Amazon ECS at Coursera.
    • DVO305 – Turbocharge Your Continuous Deployment Pipeline with Containers.
    • DVO308 – Docker & ECS in Production: How We Migrated Our Infrastructure from Heroku to AWS.
    • DVO313 – Building Next-Generation Applications with Amazon ECS.
    • DVO317 – From Local Docker Development to Production Deployments.


  • EC2 Instance Update – X1 (SAP HANA) & T2.Nano (Websites)

    by Jeff Barr | on | in Amazon EC2, re:Invent |

    AWS customers love to share their plans and their infrastructure needs with us. We, in turn, love to listen and to do our best to meet those needs. A look at the EC2 instance history should tell you a lot about our ability to listen to our customers and to respond with an increasingly broad range of instances (check out the EC2 Instance History for a detailed look).

    Lately, we have been hearing two types of requests, both driven by some important industry trends:

    • On the high end, many of our enterprise customers are clamoring for instances that have very large amounts of memory. They want to run SAP HANA and other in-memory databases, generate analytics in real time, process giant graphs using Neo4j or Titan, or create enormous caches.
    • On the low end, other customers need a little bit of processing power to host dynamic websites that usually get very modest amounts of traffic,  or to run their microservices or monitoring systems.

    In order to meet both of these needs, we are planning to launch two new EC2 instances in the coming months. The upcoming X1 instances will have loads of memory; the t2.nano will provide that little bit of processing power, along with bursting capabilities similar to those of its larger siblings.

    X1 – Tons of Memory
    X1 instances will feature up to 2 TB of memory, a full order of magnitude larger than the current generation of high-memory instances. These instances are designed for demanding enterprise workloads including production installations of SAP HANA, Microsoft SQL Server, Apache Spark, and Presto.

    The X1 instances will be powered by up to four Intel® Xeon® E7 processors. The processors have high memory bandwidth and large L3 caches, both designed to support high-performance, memory-bound applications. With over 100 vCPUs, these instances will be able to handle highly concurrent workloads with ease.

    We expect to have the X1 available in the first half of 2016. I’ll share pricing and other details at launch time.

    T2.Nano – A Little (Burstable) Processing Power
    The T2 instances provide a baseline level of processing power, along with the ability to save up unused cycles (“CPU Credits”) and use them when the need arises (read about New Low Cost EC2 Instances with Burstable Performance to learn more). We launched the t2.micro, t2.small, and t2.medium a little over a year ago. The burstable model has proven to be extremely popular with our customers. It turns out that most of them never actually consume all of their CPU Credits and are able to run at full core performance. We extended this model with the introduction of t2.large just a few months ago.

    The next step is to go in the other direction. Later this year we will introduce the t2.nano instance. You’ll get 1 vCPU and 512 MB of memory, and the ability to run at full core performance for over an hour on a full credit balance. Each newly launched t2.nano starts out with sufficient CPU Credits to allow you to get started as quickly as possible.

    Due to the burstable performance, these instances are going to be a great fit for websites that usually get modest amounts of traffic. During those quiet times, CPU Credits will accumulate, providing a reserve that can be drawn upon when traffic surges.
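    The credit bucket described above can be modeled with simple arithmetic. In this sketch, a CPU Credit is one minute of a full core, and the earn rate and cap are assumed values chosen for illustration only; check the EC2 documentation for the actual t2.nano figures.

```javascript
// Back-of-the-envelope model of the T2 credit bucket.
// These two numbers are assumptions for illustration, not published specs.
var EARN_PER_HOUR = 3;   // assumed credits earned per quiet hour
var MAX_BALANCE = 72;    // assumed cap: ~72 minutes of full-core burst

// Credits on hand after some quiet hours, capped at the maximum balance.
function balanceAfter(quietHours, startBalance) {
    return Math.min(MAX_BALANCE, startBalance + quietHours * EARN_PER_HOUR);
}

// After a quiet day the bucket is full: over an hour of full-core burst.
var credits = balanceAfter(24, 10);
console.log(credits + ' credits = ' + credits + ' minutes at full core');
```

    The takeaway matches the paragraph above: quiet periods fill the bucket, and the bucket is large enough to absorb a sizable traffic surge at full core performance.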

    Again, I’ll share more info as we get closer to the launch!


  • Amazon Inspector – Automated Security Assessment Service

    by Jeff Barr | on | in Amazon Inspector, re:Invent |

    As systems, configurations, and applications become more and more complex, detecting potential security and compliance issues can be challenging. Agile development methodologies can shorten the time between “code complete” and “code tested and deployed,” but can occasionally allow vulnerabilities to be introduced by accident and overlooked during testing. Also, many organizations do not have enough security personnel on staff to perform time-consuming manual checks on individual servers and other resources.

    New Amazon Inspector
    Today we are announcing a preview of the new Amazon Inspector. As the name implies, it analyzes the behavior of the applications that you run in AWS and helps you to identify potential security issues.

    Inspector works on an application-by-application basis. You start by defining a collection of AWS resources that make up your application:

    Then you create and run a security assessment of the application:

    The EC2 instances and other AWS resources that make up your application are identified by tags. When you create the assessment, you also define a duration (15 minutes, 1 / 8 / 12 hours, or 1 day).

    During the assessment, an Inspector Agent running on each of the EC2 instances that play host to the application monitors network, file system, and process activity. It also collects other information including details of communication with AWS services, use of secure channels, network traffic between instances, and so forth. This information provides Inspector with a complete picture of the application and its potential security or compliance issues.

    After the data has been collected, it is correlated, analyzed, and compared to a set of built-in security rules. The rules include checks against best practices, common compliance standards, and vulnerabilities and represent the collective wisdom of the AWS security team. The members of this team are constantly on the lookout for new vulnerabilities and best practices, which they codify into new rules for Inspector.

    The initial launch of Inspector will include the following sets of rules:

    • Common Vulnerabilities and Exposures
    • Network Security Best Practices
    • Authentication Best Practices
    • Operating System Security Best Practices
    • Application Security Best Practices
    • PCI DSS 3.0 Assessment

    Issues identified by Inspector (we call them “findings”) are gathered together and grouped by severity in a comprehensive report.

    You can access the Inspector from the AWS Management Console, AWS Command Line Interface (CLI), or API.

    More to Come
    I plan to share more information about Inspector shortly after re:Invent wraps up and I have some time to catch my breath, so stay tuned!

    — Jeff;

  • AWS Config Rules – Dynamic Compliance Checking for Cloud Resources

    by Jeff Barr | on | in AWS Config, re:Invent |

    The flexible, dynamic nature of the AWS cloud gives developers and admins the flexibility to launch, configure, use, and terminate processing, storage, networking, and other resources as needed. In any fast-paced agile environment, security guidelines and policies can be overlooked in the race to get a new product to market before the competition.

    Imagine that you had the ability to verify that existing and newly launched AWS resources conformed to your organization’s security guidelines and best practices without creating a bureaucracy or spending your time manually inspecting cloud resources.

    Last year I announced that you could Track AWS Resource Configurations with AWS Config. In that post I showed you how AWS Config captured the state of your AWS resources and the relationships between them. I also discussed Config’s auditing features, including the ability to select a resource and then view its configuration changes on a timeline.

    New AWS Config Rules
    Today we are extending Config with a powerful new rule system. You can use existing rules from AWS and from partners, and you can also define your own custom rules. Rules can be targeted at specific resources (by id), specific types of resources, or at resources tagged in a particular way. Rules are run when those resources are created or changed, and can also be evaluated on a periodic basis (hourly, daily, and so forth).

    Rules can look for any desirable or undesirable condition. For example, you could:

    • Ensure that EC2 instances launched in a particular VPC are properly tagged.
    • Make sure that every instance is associated with at least one security group.
    • Check to make sure that port 22 is not open in any production security group.

    Each custom rule is simply an AWS Lambda function. When the function is invoked to evaluate a resource, it is provided with the resource’s Configuration Item. The function can inspect the item and can also make calls to other AWS API functions as desired (based on permissions granted via an IAM role, as usual). After the Lambda function makes its decision (compliant or not) it calls the PutEvaluations function to record the decision and returns.

    The results of all of these rule invocations (which you can think of as compliance checks) are recorded and tracked on a per-resource basis and then made available to you in the AWS Management Console. You can also access the results in a report-oriented form, or via the Config API.

    Let’s take a quick tour of AWS Config Rules, with the proviso that some of what I share with you will undoubtedly change as we progress toward general availability. As usual, we will look forward to your feedback and will use it to shape and prioritize our roadmap.

    Using an Existing Rule
    Let’s start by using one of the rules that’s included with Config. I open the Config Console and click on Add Rule:

    I browse through the rules and decide to start with instances-in-vpc. This rule verifies that an EC2 instance belongs to a VPC, with the option to check that it belongs to a specific VPC. I click on the rule and customize it as needed:

    I have a lot of choices here. The Trigger type tells Config to run the rule when the resource is changed, or periodically. The Scope of changes tells Config which resources are of interest. The scope can be specified by resource type (with an optional identifier), by tag name, or by a combination of tag name and value. If I am checking EC2 instances, I can trigger on any of the following:

    • All EC2 instances.
    • Specific EC2 instances, identified by a resource identifier.
    • All resources tagged with the key “Department.”
    • All resources tagged with the key “Stage” and the value “Prod.”
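    The scope choices above boil down to a simple matching test. Here is a toy matcher that mirrors the console options (resource type, optional resource id, tag key, tag key/value); it is a sketch of the idea, not the Config service’s own logic, and the sample instance values are made up.

```javascript
// Does a resource fall within a rule's scope? A scope may name a resource
// type, a specific resource id, a tag key, or a tag key/value pair.
function inScope(resource, scope) {
    if (scope.resourceType && resource.type !== scope.resourceType) return false;
    if (scope.resourceId && resource.id !== scope.resourceId) return false;
    if (scope.tagKey && !(scope.tagKey in resource.tags)) return false;
    if (scope.tagValue && resource.tags[scope.tagKey] !== scope.tagValue) return false;
    return true;
}

// A hypothetical tagged EC2 instance.
var instance = { type: 'AWS::EC2::Instance', id: 'i-1234567a',
                 tags: { Department: 'Engineering', Stage: 'Prod' } };

console.log(inScope(instance, { resourceType: 'AWS::EC2::Instance' })); // true
console.log(inScope(instance, { tagKey: 'Stage', tagValue: 'Prod' }));  // true
console.log(inScope(instance, { tagKey: 'CostCenter' }));               // false
```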

    The Rule parameters section allows me to pass additional key/value pairs to the Lambda function. The parameter names, and their meanings, are specific to the function. In this case, supplying a value for the vpcid parameter tells the function to verify that the EC2 instance is running within the specified VPC.

    The rule goes into effect after I click on Save. When I return to the Rules page I can see that my AWS configuration is now noncompliant:

    I can investigate the issue by examining the Config timeline for the instance in question:

    It turns out that this instance has been sitting around for a while (truth be told I forgot about it). This is a perfect example of how useful the new Config Rules can be!

    I can also use the Config Console to look at the compliance status of all instances of a particular type:

    Creating a New Rule
    I can create a new rule using any language supported by Lambda. The rule receives the Configuration Item and the rule parameters that I mentioned above, and can implement any desired logic.

    Let’s look at a couple of excerpts from a sample rule. The rule applies to EC2 instances, so it first checks to see if it was invoked on one:

    function evaluateCompliance(configurationItem, ruleParameters) {
        if (configurationItem.resourceType !== 'AWS::EC2::Instance') {
            return 'NOT_APPLICABLE';
        } else {
            var securityGroups = configurationItem.configuration.securityGroups;
            var expectedSecurityGroupId = ruleParameters.securityGroupId;
            if (hasExpectedSecurityGroup(expectedSecurityGroupId, securityGroups)) {
                return 'COMPLIANT';
            } else {
                return 'NON_COMPLIANT';
            }
        }
    }
    If the rule was invoked on an EC2 instance, it checks to see if any one of a list of expected security groups is attached to the instance:

    function hasExpectedSecurityGroup(expectedSecurityGroupId, securityGroups) {
        for (var i = 0; i < securityGroups.length; i++) {
            var securityGroup = securityGroups[i];
            if (securityGroup.groupId === expectedSecurityGroupId) {
                return true;
            }
        }
        return false;
    }

    Finally, the rule stores the result of the compliance check by calling the Config API’s putEvaluations function:

    config.putEvaluations(putEvaluationsRequest, function (err, data) {
        if (err) {
            // Handle the error here.
        } else {
            // The evaluation was recorded successfully.
        }
    });

    The rule can record results for the item being checked or for any related item. Let’s say you are checking to make sure that an Elastic Load Balancer is attached only to a specific kind of EC2 instance. You could decide to report compliance (or noncompliance) for the ELB or for the instance, depending on what makes the most sense for your organization and your compliance model. You can do this for any resource type that is supported by Config.

    Here’s how I create a rule that references my Lambda function:

    On the Way
    AWS Config Rules are being launched in preview form today and you can sign up now. Stay tuned for additional information!


    PS – re:Invent attendees can attend session SEC 314: Use AWS Config Rules to Improve Governance of Your AWS Resources (5:30 PM on October 8th in Palazzo K).

  • Amazon RDS Update – MariaDB is Now Available

    by Jeff Barr | on | in Amazon Relational Database Service, re:Invent |

    We launched the Amazon Relational Database Service (RDS) almost six years ago, in October of 2009. The initial launch gave you the power to launch a MySQL database instance from the command line. From that starting point we have added a multitude of features, along with support for the SQL Server, Oracle Database, PostgreSQL, and Amazon Aurora databases. We have made RDS available in every AWS region, and on a very wide range of database instance types. You can now run RDS in a geographic location that is well-suited to the needs of your user base, on hardware that is equally well-suited to the needs of your application.

    Hello, MariaDB
    Today we are adding support for the popular MariaDB database, beginning with version 10.0.17. This engine was forked from MySQL in 2009, and has developed at a rapid clip ever since, adding support for two storage engines (XtraDB and Aria) and other leading-edge features. Based on discussions with potential customers, some of the most attractive features include parallel replication and thread pooling.

    As is the case with all of the databases supported by RDS, you can launch MariaDB from the Console, AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, via the RDS API, or from a CloudFormation template.

    I started out with the CLI and launched my database instance like this:

    $ rds-create-db-instance jeff-mariadb-1 \
      --engine mariadb \
      --db-instance-class db.r3.xlarge \
      --db-subnet-group-name dbsub \
      --allocated-storage 100 \
      --publicly-accessible false \
      --master-username root --master-user-password PASSWORD

    Let’s break this down, option by option:

    • Line 1 runs the rds-create-db-instance command and specifies the name (jeff-mariadb-1) that I have chosen for my instance.
    • Line 2 indicates that I want to run the MariaDB engine, and line 3 says that I want to run it on a db.r3.xlarge instance type.
    • Line 4 points to the database subnet group that I have chosen for the database instance. This group lists the network subnets within my VPC (Virtual Private Cloud) that are suitable for my instance.
    • Line 5 requests 100 gigabytes of storage, and line 6 specifies that I don’t want the database instance to have a publicly accessible IP address.
    • Finally, line 7 provides the name and credentials for the master user of the database.

    The command displays the following information to confirm my launch:

    DBINSTANCE  jeff-mariadb-1  db.r3.xlarge  mariadb  100  root  creating  1  ****  db-QAYNWOIDPPH6EYEN6RD7GTLJW4  n  10.0.17  general-public-license  n  standard  n
          VPCSECGROUP  sg-ca2071af  active
    SUBNETGROUP  dbsub  DB Subnet for Testing  Complete  vpc-7fd2791a
          SUBNET  subnet-b8243890  us-east-1e  Active
          SUBNET  subnet-90af64e7  us-east-1b  Active
          SUBNET  subnet-b3af64c4  us-east-1b  Active
          PARAMGRP  default.mariadb10.0  in-sync
          OPTIONGROUP  default:mariadb-10-0  in-sync

    The RDS CLI includes a full set of powerful, high-level commands, all documented here. For example, I can create read replicas (rds-create-db-instance-read-replicas) and take snapshot backups (rds-create-db-snapshot) in minutes.

    Here’s how I would launch the same instance using the AWS Management Console:

    Get Started Today
    You can launch RDS database instances running MariaDB today in all AWS regions. Supported database instance types include M3 (standard), R3 (memory optimized), and T2 (standard).


  • AWS Import/Export Snowball – Transfer 1 Petabyte Per Week Using Amazon-Owned Storage Appliances

    by Jeff Barr | on | in AWS Import/Export, re:Invent |

    Even though high speed Internet connections (T3 or better) are available in many parts of the world, transferring terabytes or petabytes of data from an existing data center to the cloud remains challenging. Many of our customers find that the data migration aspect of an all-in move to the cloud presents some surprising issues. In many cases, these customers are planning to decommission their existing data centers after they move their apps and their data; in such a situation, upgrading their last-generation networking gear and boosting connection speeds makes little or no sense.

    We launched the first-generation AWS Import/Export service way back in 2009. As I wrote at the time, “Hard drives are getting bigger more rapidly than internet connections are getting faster.” I believe that remains the case today. In fact, the rapid rise in Big Data applications, the emergence of global sensor networks, and the “keep it all just in case we can extract more value later” mindset have made the situation even more dire.
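    The arithmetic behind that claim is worth making concrete. This sketch estimates how many days a dataset takes to push over a network link, ignoring protocol overhead and assuming the link is fully dedicated to the transfer; the link speeds are illustrative.

```javascript
// Days needed to move a dataset over a link (decimal terabytes, megabits/s).
// Ignores protocol overhead; assumes the link is fully dedicated.
function daysToTransfer(terabytes, linkMbps) {
    var bits = terabytes * 1e12 * 8;          // terabytes -> bits
    var seconds = bits / (linkMbps * 1e6);    // bits / (bits per second)
    return seconds / 86400;
}

// One 50 TB appliance's worth of data over a 45 Mbps T3 line:
console.log(daysToTransfer(50, 45).toFixed(0) + ' days');   // ~103 days
// The same data over a dedicated 10 Gbps connection:
console.log(daysToTransfer(50, 10000).toFixed(1) + ' days'); // under a day
```

    Months over a T3 versus about a week of round-trip shipping makes the case for moving the bytes physically.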

    The original AWS Import/Export model was built around devices that you had to specify, purchase, maintain, format, package, ship, and track. While many AWS customers have used (and continue to use) this model, some challenges remain. For example, it does not make sense for you to buy multiple expensive devices as part of a one-time migration to AWS. In addition to data encryption requirements and device durability issues, creating the requisite manifest files for each device and each shipment adds overhead and leaves room for human error.

    New Data Transfer Model with Amazon-Owned Appliances
    After gaining significant experience with the original model, we are ready to unveil a new one, formally known as AWS Import/Export Snowball. Built around appliances that we own and maintain, the new model is faster, cleaner, simpler, more efficient, and more secure. You don’t have to buy storage devices or upgrade your network.

    Snowball is designed for customers that need to move lots of data (generally 10 terabytes or more) to AWS on a one-time or recurring basis. You simply request one or more appliances from the AWS Management Console and wait a few days for them to be delivered to your site. If you need to import a very large amount of data, you can run multiple Snowball appliances in parallel.

    The new Snowball appliance is purpose-built for efficient data storage and transfer. It is rugged enough to withstand a 6 G jolt, and (at 50 lbs) light enough for one person to carry. It is entirely self-contained, with 110 Volt power and a 10 Gbps network connection on the back and an E Ink display/control panel on the front. It is weather-resistant and serves as its own shipping container; it can go from your mail room to your data center and back again with no packing or unpacking hassle to slow things down. In addition to being physically rugged and tamper-resistant, AWS Snowball detects tampering attempts. Here’s what it looks like:

    Once you receive a Snowball, you plug it in, connect it to your network, configure the IP address (you can use your own or the device can fetch one from your network using DHCP), and install the AWS Snowball client. Then you return to the Console to download the job manifest and a 25-character unlock code. With all of that info in hand, you start the appliance with one command:

    $ snowball start -i DEVICE_IP -m PATH_TO_MANIFEST -u UNLOCK_CODE

    At this point you are ready to copy data to the Snowball. The data is encrypted on the host using 256-bit encryption and stored on the appliance in encrypted form. The appliance can be hosted on a private subnet with limited network access.

    From there you simply copy up to 50 terabytes of data to the Snowball, disconnect it (a shipping label will automatically appear on the E Ink display), and ship it back to us for ingestion. We’ll decrypt the data and copy it to the S3 bucket(s) that you specified when you made your request. Then we’ll sanitize the appliance in accordance with National Institute of Standards and Technology Special Publication 800-88 (Guidelines for Media Sanitization).

    At each step along the way, notifications are sent to an Amazon Simple Notification Service (SNS) topic and email address that you specify. You can use the SNS notifications to integrate the data import process into your own data migration workflow system.

    Creating an Import Job
    Let’s step through the process of creating an AWS Snowball import job from the AWS Management Console. I create a job by entering my name and address (or choosing an existing one if I have done this before):

    Then I give the job a name (mine is import-photos), and select a destination (an AWS region and one or more S3 buckets):

    Next, I set up my security (an IAM role and a KMS key to encrypt the data):

    I’m almost ready! Now I choose the notification options. I can create a new SNS topic and create an email subscription to it, or I can use an existing topic. I can also choose the status changes that are of interest to me:

    After I review and confirm my choices, the job becomes active:

    The next step (which I didn’t have time for in the rush to re:Invent) would be to receive the appliance, install it and copy my data over, and ship it back.

    In the Works
    We are launching AWS Import/Export Snowball with import functionality so that you can move data to the cloud. We are also aware of many interesting use cases that involve moving data the other way, including large-scale data distribution, and plan to address them in the future.

    We are also working on other enhancements including continuous, GPS-powered chain-of-custody tracking.

    Pricing and Availability
    There is a usage charge of $200 per job, plus shipping charges that are based on your destination and the selected shipment method. As part of this charge, you have up to 10 days (starting the day after delivery) to copy your data to the appliance and ship it out. Extra days are $15 each.
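    The job charge described above (excluding the variable shipping fees) reduces to a short calculation:

```javascript
// Cost of a Snowball job, per the pricing above: $200 flat, plus $15 for
// each day on site beyond the included 10. Shipping charges vary by
// destination and method, so they are omitted here.
function snowballJobCost(daysOnSite) {
    var extraDays = Math.max(0, daysOnSite - 10);
    return 200 + 15 * extraDays;
}

console.log(snowballJobCost(7));   // 200 -- within the included window
console.log(snowballJobCost(14));  // 260 -- four extra days at $15 each
```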

    You can import data to the US Standard and US West (Oregon) regions, with more on the way.