AWS Blog

Congratulations to the Winners of the Serverless Chatbot Competition!

by Jeff Barr | in Amazon API Gateway, AWS Lambda, Developers

I announced the AWS Serverless Chatbot Competition in August and invited you to build a chatbot for Slack using AWS Lambda and Amazon API Gateway.

Last week I sat down with fellow judges Tim Wagner (General Manager of AWS Lambda) and Cecilia Deng (a Software Development Engineer on Tim’s team) to watch the videos and to evaluate all 62 submissions. We were impressed by the functionality and diversity of the entries, as well as the effort that the entrants put into producing attractive videos to show their submissions in action.

After hours of intense deliberation we chose a total of 9 winners: 8 from individuals, teams & small organizations and one from a larger organization. Without further ado, here you go:

Individuals, Teams, and Small Organizations
Here are the winners of the Serverless Slackbot Hero Award. Each winner receives one ticket to AWS re:Invent, access to discounted hotel room rates, public announcement and promotion during the Serverless Computing keynote, some cool swag, and $100 in AWS Credits. You can find the code for many of these bots on GitHub. In alphabetical order, the winners are:

AWS Network Helper – “The goal of this project is to provide an AWS network troubleshooting script that runs on a serverless architecture, and can be interacted with via Slack as a chat bot.” GitHub repo.

B0pb0t – “Making Mealtime Awesome.” GitHub repo.

Borges – “Borges is a real-time translator for multilingual Slack teams.” GitHub repo.

CLIve – “CLIve makes managing your AWS EC2 instances a doddle. He understands natural language, so no need to learn a new CLI!”

Litlbot – “Litlbot is a Slack bot that enables realtime interaction with students in class, creating a more engaged classroom and learning experience.” GitHub repo.

Marbot – “Forward alerts from Amazon Web Services to your DevOps team.”

Opsidian – “Collaborate on your AWS infra from Slack using natural language.”

ServiceBot – “Communication platform between humans, machines, and enterprises.” GitHub repo.

Larger Organization
And here’s the winner of the Serverless Slackbot Large Organization Award:

Eva – “The virtual travel assistant for your team.” GitHub repo.

Thanks & Congratulations
I would like to personally thank each of the entrants for taking the time to submit their entries to the competition!

Congratulations to all of the winners; I hope to see you all at AWS re:Invent.



PS – If this list has given you an idea for a chatbot of your very own, please watch our Building Serverless Chatbots video and take advantage of our Serverless Chatbot Sample.

AWS Budgets Update – Track Cloud Costs and Usage

by Jeff Barr | in AWS Budgets, AWS Cost Explorer, Enterprise

As Spider-Man and others before him have said, “with great power comes great responsibility.” In the on-demand, pay-as-you-go cloud world, this means that you need to be an informed, responsible consumer. In a corporate environment, this means that you need to pay attention to budgets and to spending, and to make sure that your actual spend is in line with your projections. With AWS in use across multiple projects and departments, tracking and forecasting becomes more involved.

Today we are making some important upgrades to the AWS Budgets feature (read New – AWS Budgets and Forecasts for background information). This feature is designed to be used by Finance Managers, Project Managers, and VP-level DevOps folks (please feel free to share this post with similarly-titled members of your organization if you are not directly responsible for your cloud budget).  You can use AWS Budgets to maintain a unified view of your costs and usage for specific categories that you define, and you can sign up for automated notifications that provide you with detailed status information (over or under budget) so that you can identify potential issues and take action to prevent undesired actual or forecasted overruns.

AWS Budgets Updates
You can create up to 20,000 budgets per payer account. In order to allow you to stay on top of your spending in environments where costs and resource consumption are changing frequently, the budgets are evaluated four times per day. Notifications are delivered via email or programmatically (an Amazon Simple Notification Service (SNS) message), so that you can take manual, semi-automated, or fully automated corrective action. This gives you the power to address all of the following situations, along with others that may arise within your organization:

VP – Optimize your overall cloud spend, with budgets for each business unit and for the company as a whole, tracking spending by region and other dimensions and comparing actual usage against budgets.

Project Manager – Manage costs within your department, watching multiple services, tags, and regions. Alert stakeholders when thresholds have been breached, and ask them to take action. When necessary, give resource budgets to individual team members to encourage adoption and experimentation.

Finance Manager – Analyze historical costs for your organization and use your insight into future plans to develop suitable budgets. Examine costs across the entire company, or on a per-account, per-service, business unit, or project team level.

Creating a Budget
Let’s create a budget or two!

Start by opening up Billing and Cost Management:

And then click on Budgets:

If you are new to AWS Budgets, you may have to wait up to 24 hours after clicking Create budget before you can proceed to the next step. During this time, we’ll prepare the first set of Detailed Billing Reports for your account.

Click on Create budget, decide whether you want the budget to be based on costs or usage, and give your budget a name. Then select Monthly, Quarterly, or Annual. I’ll go for a cost-based ($1000) monthly budget named MainBudget to get started:

By not checking any of the options next to Include costs related to, my budget will apply to my entire account. Checking a box opens the door to all sorts of additional options that give you a lot of flexibility. Here’s how I could create a budget for usage of EC2 instances where the Owner tag is set to jbarr:

I could be even more specific, and choose to set a very modest budget for usage that is on Non-Reserved instances. This would be a great way to make sure that I am making good use of any Reserved Instances that my organization owns.

The next step is to set up email or programmatic notifications:

The programmatic notification option can be used in many different ways. I could create a new web app with a fixed budget, and then invoke an AWS Lambda function if costs are approaching the budgeted amount. The app could take corrective action to ensure that the budget is not exceeded. For example, it could temporarily disable some of the more computationally intensive features, or it could switch over to a statically hosted alternative site.
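
To make this concrete, here is a minimal sketch (Python) of what such a Lambda function might look like, assuming the budget notification is delivered to an SNS topic that triggers the function; the feature flag stored in a hypothetical DynamoDB table named feature-flags is purely illustrative:

  import boto3

  # Hypothetical flag store; any mechanism for toggling features would do.
  FLAGS_TABLE = "feature-flags"

  dynamodb = boto3.resource("dynamodb")

  def handler(event, context):
      """Invoked via SNS when an AWS Budgets notification fires."""
      for record in event["Records"]:
          message = record["Sns"]["Message"]
          print("Budget notification received:", message)

          # Degrade gracefully: turn off a compute-intensive feature so that
          # actual spend stays at or below the budgeted amount.
          table = dynamodb.Table(FLAGS_TABLE)
          table.update_item(
              Key={"feature": "thumbnail-rendering"},
              UpdateExpression="SET enabled = :off",
              ExpressionAttributeValues={":off": False},
          )
      return {"status": "ok"}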

With everything set up as desired, I simply click on Create. My budget is visible right away (I clicked on the triangle in order to display the details before I took this screenshot):

As you can see, I have already overspent my $1000 budget, with a forecast of almost $5,600 for the month. Given that we are a frugal company (read our leadership principles to learn more), I really need to see what’s going on and clean up some of my extra instances! Because I had opted for email notification, I received the following message not too long after I created my budget:

Suppose that my data transfer budget is separate from my compute budget, and that I am allowed to transfer up to 100 GB of data out of S3 every month, regardless of the cost at the time. I can create a budget that looks like this:

And I can see at a glance that I am in no danger of exceeding my data transfer budget:

I can also download the on-screen information in CSV form for further inspection or as input to another part of my budgeting process:

As you can see, this new feature gives you the power to set up very detailed budgets. Although I have introduced this feature using the AWS Management Console, you can also set up budgets by making calls to the new Budget API or by using the AWS Command Line Interface (CLI). This API includes functions like CreateBudget, DescribeBudget, and UpdateBudget that you can use from within your own applications.
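
As a rough illustration of the API (not a definitive recipe), here is how a budget similar to my MainBudget could be created and inspected with the AWS SDK for Python (boto3); the account ID and e-mail address are placeholders, and the request shape follows the current Budgets API, which may differ slightly from the version available in your SDK:

  import boto3

  budgets = boto3.client("budgets")

  ACCOUNT_ID = "123456789012"  # placeholder payer account ID

  budgets.create_budget(
      AccountId=ACCOUNT_ID,
      Budget={
          "BudgetName": "MainBudget",
          "BudgetType": "COST",
          "TimeUnit": "MONTHLY",
          "BudgetLimit": {"Amount": "1000", "Unit": "USD"},
      },
      NotificationsWithSubscribers=[
          {
              "Notification": {
                  "NotificationType": "ACTUAL",
                  "ComparisonOperator": "GREATER_THAN",
                  "Threshold": 80.0,  # percent of the budgeted amount
              },
              "Subscribers": [
                  {"SubscriptionType": "EMAIL", "Address": "budget-owner@example.com"}
              ],
          }
      ],
  )

  # DescribeBudget returns the budget along with actual and forecasted spend.
  print(budgets.describe_budget(AccountId=ACCOUNT_ID, BudgetName="MainBudget"))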

Available Now
This new feature is available now and you can start using it today! You can create two budgets per account at no charge; additional budgets cost $0.02 per day (again, you can have up to 20,000 budgets per account).

To learn more, read Managing Your Costs with Budgets.



AWS Developer Tool Recap – Recent Enhancements to CodeCommit, CodePipeline, and CodeDeploy

by Jeff Barr | in AWS CodeCommit, AWS CodeDeploy, AWS CodePipeline

The AWS Developer Tools help you to put modern DevOps practices to work! Here’s a quick overview (read New AWS Tools for Code Management and Deployment for an in-depth look):

AWS CodeCommit is a fully-managed source code control service. You can use it to host secure and highly scalable private Git repositories while continuing to use your existing Git tools and workflows (watch the Introduction to AWS CodeCommit video to learn more).

AWS CodeDeploy automates code deployment to Amazon Elastic Compute Cloud (EC2) instances and on-premises servers. You can update your application at a rapid clip, while avoiding downtime during deployment (watch the Introduction to AWS CodeDeploy video to learn more).

AWS CodePipeline is a continuous delivery service that you can use to streamline and automate your release process. Checkins to your repo (CodeCommit or Git) will initiate build, test, and deployment actions (watch Introducing AWS CodePipeline for an introduction). The build can be deployed to your EC2 instances or on-premises servers via CodeDeploy, AWS Elastic Beanstalk, or AWS OpsWorks.

You can combine these services with your existing build and testing tools to create an end-to-end software release pipeline, all orchestrated by CodePipeline.

We have made a lot of enhancements to the Code* products this year and today seems like a good time to recap all of them for you! Many of these enhancements allow you to connect the developer tools to other parts of AWS so that you can continue to fine-tune your development process.

CodeCommit Enhancements
Here’s what’s new with CodeCommit:

  • Repository Triggers
  • Code Browsing
  • Commit History
  • Commit Visualization
  • Elastic Beanstalk Integration

Repository Triggers – You can create Repository Triggers that Send Notification or Run Code whenever a change occurs in a CodeCommit repository (these are sometimes called webhooks — user-defined HTTP callbacks). These hooks will allow you to customize and automate your development workflow. Notifications can be delivered to an Amazon Simple Notification Service (SNS) topic or can invoke a Lambda function.
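
Here is a hedged sketch of how a trigger might be configured with the AWS SDK for Python (boto3); the repository name, branch, and destination ARN are placeholders, and the destination can be either an SNS topic or a Lambda function (a Lambda destination also needs a resource policy that lets CodeCommit invoke it):

  import boto3

  codecommit = boto3.client("codecommit")

  # Repository name, topic ARN, and branch below are placeholders.
  codecommit.put_repository_triggers(
      repositoryName="my-repo",
      triggers=[
          {
              "name": "notify-on-master-push",
              "destinationArn": "arn:aws:sns:us-east-1:123456789012:repo-updates",
              "branches": ["master"],         # an empty list means all branches
              "events": ["updateReference"],  # or "all", "createReference", ...
          }
      ],
  )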

Code Browsing – You can Browse Your Code in the Console. This includes navigation through the source code tree and the code:

Commit History – You can View the Commit History for your repositories (mine is kind of quiet, hence the 2015-era dates):

Commit Visualization – You can View a Graphical Representation of the Commit History for your repositories:

Elastic Beanstalk Integration – You can Use CodeCommit Repositories with Elastic Beanstalk to store your project code for deployment to an Elastic Beanstalk environment.

CodeDeploy Enhancements
Here’s what’s new with CodeDeploy:

  • CloudWatch Events Integration
  • CloudWatch Alarms and Automatic Deployment Rollback
  • Push Notifications
  • New Partner Integrations

CloudWatch Events Integration – You can Monitor and React to Deployment Changes with Amazon CloudWatch Events by configuring CloudWatch Events to stream changes in the state of your instances or deployments to an AWS Lambda function, an Amazon Kinesis stream, an Amazon Simple Queue Service (SQS) queue, or an SNS topic. You can build workflows and processes that are triggered by your changes. You could automatically terminate EC2 instances when a deployment fails or you could invoke a Lambda function that posts a message to a Slack channel.
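
As a rough sketch of the Slack scenario, here is a Lambda function (Python) that a CloudWatch Events rule could invoke for CodeDeploy state changes; the webhook URL is a placeholder and the event fields shown are simplified:

  import json
  import urllib.request

  # Hypothetical incoming-webhook URL for the target Slack channel.
  SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

  def handler(event, context):
      """Invoked by a CloudWatch Events rule that matches CodeDeploy state changes."""
      detail = event.get("detail", {})
      text = "CodeDeploy deployment {} for {} is now {}".format(
          detail.get("deploymentId", "?"),
          detail.get("application", "?"),
          detail.get("state", "?"),
      )
      payload = json.dumps({"text": text}).encode("utf-8")
      request = urllib.request.Request(
          SLACK_WEBHOOK_URL,
          data=payload,
          headers={"Content-Type": "application/json"},
      )
      urllib.request.urlopen(request)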

CloudWatch Alarms and Automatic Deployment Rollback – CloudWatch Alarms give you another type of Monitoring for your Deployments. You can monitor metrics for the instances or Auto Scaling Groups managed by CodeDeploy and, if they cross a threshold for a defined period of time, stop a deployment or change the state of an instance by rebooting, terminating, or recovering it. You can also automatically roll back a deployment in response to a deployment failure or a CloudWatch Alarm.
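
Here is an illustrative boto3 snippet that attaches an existing CloudWatch Alarm to a deployment group and enables automatic rollback; the application, deployment group, and alarm names are placeholders:

  import boto3

  codedeploy = boto3.client("codedeploy")

  codedeploy.update_deployment_group(
      applicationName="MyApp",                  # placeholder application
      currentDeploymentGroupName="Production",  # placeholder deployment group
      alarmConfiguration={
          "enabled": True,
          "alarms": [{"name": "HighErrorRate"}],  # an existing CloudWatch Alarm
      },
      autoRollbackConfiguration={
          "enabled": True,
          "events": ["DEPLOYMENT_FAILURE", "DEPLOYMENT_STOP_ON_ALARM"],
      },
  )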

Push Notifications – You can Receive Push Notifications via Amazon SNS for events related to your deployments and use them to track the state and progress of your deployment.

New Partner Integrations – Our CodeDeploy Partners have been hard at work, connecting their products to ours. Here are some of the most recent offerings:

CodePipeline Enhancements
And here’s what’s new with CodePipeline:

  • AWS OpsWorks Integration
  • Triggering of Lambda Functions
  • Manual Approval Actions
  • Information about Committed Changes
  • New Partner Integrations

AWS OpsWorks Integration – You can Choose AWS OpsWorks as a Deployment Provider in the software release pipelines that you model in CodePipeline:

You can also configure CodePipeline to use OpsWorks to deploy your code using recipes contained in custom Chef cookbooks.

Triggering of Lambda Functions – You can now Trigger a Lambda Function as one of the actions in a stage of your software release pipeline. Because Lambda allows you to write functions to perform almost any task, you can customize the way your pipeline works:
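
For reference, a Lambda function used as a pipeline action receives a CodePipeline job and is expected to report the outcome back to the pipeline; here is a minimal Python sketch in which do_custom_work is a hypothetical stand-in for the real work:

  import boto3

  codepipeline = boto3.client("codepipeline")

  def do_custom_work():
      # Hypothetical placeholder: run a smoke test, create a resource, etc.
      pass

  def handler(event, context):
      """Invoked by CodePipeline as an action in a pipeline stage."""
      job_id = event["CodePipeline.job"]["id"]
      try:
          do_custom_work()
          codepipeline.put_job_success_result(jobId=job_id)
      except Exception as err:
          codepipeline.put_job_failure_result(
              jobId=job_id,
              failureDetails={"type": "JobFailed", "message": str(err)},
          )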

Manual Approval Actions – You can now add Manual Approval Actions to your software release pipeline. Execution pauses until the code change is approved or rejected by someone with the required IAM permission:

Information about Committed Changes – You can now View Information About Committed Changes to the code flowing through your software release pipeline:


New Partner Integrations – Our CodePipeline Partners have been hard at work, connecting their products to ours. Here are some of the most recent offerings:

New Online Content
In order to help you and your colleagues to understand the newest development methodologies, we have created some new introductory material:

Thanks for Reading!
I hope that you have enjoyed this quick look at some of the most recent additions to our development tools.

In order to help you to get some hands-on experience with continuous delivery, my colleagues have created a new Pipeline Starter Kit. The kit includes an AWS CloudFormation template that will create a VPC with two EC2 instances inside, a pair of applications (one for each EC2 instance, both deployed via CodeDeploy), and a pipeline that builds and then deploys the sample application, along with all of the necessary IAM service and instance roles.


Run Windows Server 2016 on Amazon EC2

by Jeff Barr | in Amazon EC2, Launch, Windows

You can now run Windows Server 2016 on Amazon Elastic Compute Cloud (EC2). This version of Windows Server is packed with new features including support for Docker and Windows containers.  We are making it available in all AWS regions today, in four distinct forms:

  • Windows Server 2016 Datacenter with Desktop Experience – The mainstream version of Windows Server, designed with security and scalability in mind, with support for both traditional and cloud-native applications. To learn a lot more about Windows Server 2016, download The Ultimate Guide to Windows Server 2016 (registration required).
  • Windows Server 2016 Nano Server – A cloud-native, minimal install that takes up a modest amount of disk space and boots more swiftly than the Datacenter version, while leaving more system resources (memory, storage, and CPU) available to run apps and services. You can read Moving to Nano Server to learn how to migrate your code and your applications. Nano Server does not include a desktop UI so you’ll need to administer it remotely using PowerShell or WMI. To learn how to do this, read Connecting to a Windows Server 2016 Nano Server Instance.
  • Windows Server 2016 with Containers – Windows Server 2016 with Windows containers and Docker already installed.
  • Windows Server 2016 with SQL Server 2016 – Windows Server 2016 with SQL Server 2016 already installed.

Here are a couple of things to keep in mind with respect to Windows Server 2016 on EC2:

  • Memory – Microsoft recommends a minimum of 2 GiB of memory for Windows Server. Review the EC2 Instance Types to find the type that is the best fit for your application.
  • Pricing – The standard Windows EC2 Pricing applies; you can launch On-Demand and Spot Instances, and you can purchase Reserved Instances.
  • Licensing – You can (subject to your licensing terms with Microsoft) bring your own license to AWS.
  • SSM Agent – An upgraded version of our SSM Agent is now used in place of EC2Config. Read the User Guide to learn more.

Containers in Action
I launched the Windows Server 2016 with Containers AMI and logged in to it in the usual way:

Then I opened up PowerShell and ran the command docker run microsoft/sample-dotnet. Docker downloaded the image and launched it. Here’s what I saw:

We plan to add Windows container support to Amazon ECS by the end of 2016. You can register here to learn more.

Get Started Today
You can get started with Windows Server 2016 on EC2 today. Try it out and let me know what you think!


Amazon Aurora Update – Call Lambda Functions From Stored Procedures; Load Data From S3

by Jeff Barr | in Amazon Aurora, Amazon S3, AWS Lambda

Many AWS services work just fine by themselves, but even better together! This important aspect of our model allows you to select a single service, learn about it, get some experience with it, and then extend your reach to other related services over time. On the other hand, opportunities to make the services work together are ever-present, and we have a number of them on our customer-driven roadmap.

Today I would like to tell you about two new features for Amazon Aurora, our MySQL-compatible relational database:

Lambda Function Invocation – The stored procedures that you create within your Amazon Aurora databases can now invoke AWS Lambda functions.

Load Data From S3 – You can now import data stored in an Amazon Simple Storage Service (S3) bucket into a table in an Amazon Aurora database.

Because both of these features involve Amazon Aurora and another AWS service, you must grant Amazon Aurora permission to access the service by creating an IAM Policy and an IAM Role, and then attaching the Role to your Amazon Aurora database cluster. To learn how to do this, see Authorizing Amazon Aurora to Access Other AWS Services On Your Behalf.

Lambda Function Integration
Relational databases use a combination of triggers and stored procedures to enable the implementation of higher-level functionality. The triggers are activated before or after some operations of interest are performed on a particular database table. For example, because Amazon Aurora is compatible with MySQL, it supports triggers on the INSERT, UPDATE, and DELETE operations. Stored procedures are scripts that can be run in response to the activation of a trigger.

You can now write stored procedures that invoke Lambda functions. This new extensibility mechanism allows you to wire your Aurora-based database to other AWS services. You can send email using Amazon Simple Email Service (SES), issue a notification using Amazon Simple Notification Service (SNS), publish metrics to Amazon CloudWatch, update an Amazon DynamoDB table, and more.

At the application level, you can implement complex ETL jobs and workflows, track and audit actions on database tables, and perform advanced performance monitoring and analysis.

Your stored procedure must call the mysql.lambda_async procedure. This procedure, as the name implies, invokes your desired Lambda function asynchronously, and does not wait for it to complete before proceeding. As usual, you will need to give your Lambda function permission to access any desired AWS services or resources.
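
As a quick illustration, here is how the call looks when issued from a Python client using pymysql against a placeholder cluster endpoint; in practice the CALL would typically live inside a stored procedure or trigger body, and the function ARN, credentials, and payload shown here are hypothetical:

  import json
  import pymysql  # any MySQL-compatible client will work against Aurora

  conn = pymysql.connect(
      host="my-aurora-cluster.cluster-example.us-east-1.rds.amazonaws.com",
      user="admin",
      password="secret",
      database="mydb",
  )

  payload = json.dumps({"order_id": 1234, "status": "shipped"})

  with conn.cursor() as cur:
      # mysql.lambda_async(function_arn, payload) queues the invocation and
      # returns immediately; it does not wait for the function to finish.
      cur.execute(
          "CALL mysql.lambda_async(%s, %s)",
          ("arn:aws:lambda:us-east-1:123456789012:function:order-events", payload),
      )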

To learn more, read Invoking a Lambda Function from an Amazon Aurora DB Cluster.

Load Data From S3
As another form of integration, data stored in an S3 bucket can now be imported directly into Aurora (up until now you would have had to copy the data to an EC2 instance and import it from there).

The data can be located in any AWS region that is accessible from your Amazon Aurora cluster and can be in text or XML form.

To import data in text form, use the new LOAD DATA FROM S3 command. This command accepts many of the same options as MySQL’s LOAD DATA INFILE, but does not support compressed data. You can specify the line and field delimiters and the character set, and you can ignore any desired number of lines or rows at the start of the data.
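
Here is a hedged example of the statement, issued from Python via pymysql; the endpoint, credentials, bucket, table, and column names are all placeholders, and the cluster must already have the IAM role described above:

  import pymysql

  conn = pymysql.connect(
      host="my-aurora-cluster.cluster-example.us-east-1.rds.amazonaws.com",
      user="admin",
      password="secret",
      database="mydb",
  )

  load_stmt = """
      LOAD DATA FROM S3 's3://my-bucket/imports/customers.csv'
      INTO TABLE customers
      FIELDS TERMINATED BY ','
      LINES TERMINATED BY '\\n'
      IGNORE 1 LINES
      (id, name, email)
  """

  with conn.cursor() as cur:
      cur.execute(load_stmt)  # skips the header row, then loads id, name, email
  conn.commit()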

To import data in XML form, use the new LOAD XML FROM S3 command. Your XML can look like this:

<row column1="value1" column2="value2" />
<row column1="value1" column2="value2" />

Or like this:

<row>
  <column1>value1</column1>
  <column2>value2</column2>
</row>

Or like this:

<row>
  <field name="column1">value1</field>
  <field name="column2">value2</field>
</row>

To learn more, read Loading Data Into a DB Cluster From Text Files in an Amazon S3 Bucket.

Available Now
These new features are available now and you can start using them today!

There is no charge for either feature; you’ll pay the usual charges for the use of Amazon Aurora, Lambda, and S3.



AWS Week in Review – October 10, 2016

by Jeff Barr | in Week in Review

Twenty-four (24) external and internal contributors worked together to create this edition of the AWS Week in Review. If you would like to join the party, please visit the AWS Week in Review on GitHub. I am also about to open up some discussion on a simplified and streamlined submission process.


October 10


October 11


October 12


October 13


October 14


October 15


October 16

New & Notable Open Source

New SlideShare Presentations

New Customer Success Stories

Upcoming Events

Help Wanted

New AWS Marketplace Listings

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

Now Open – AWS US East (Ohio) Region

by Jeff Barr | in Announcements

As part of our ongoing plan to expand the AWS footprint, I am happy to announce that our new US East (Ohio) Region is now available. In conjunction with the existing US East (Northern Virginia) Region, AWS customers in the Eastern part of the United States have fast, low-latency access to the suite of AWS infrastructure services.

The Details
The new Ohio Region supports Amazon Elastic Compute Cloud (EC2) and related services including Amazon Elastic Block Store (EBS), Amazon Virtual Private Cloud, Auto Scaling, Elastic Load Balancing, NAT Gateway, Spot Instances, and Dedicated Hosts.

It also supports (deep breath) Amazon API Gateway, Amazon Aurora, AWS Certificate Manager (ACM), AWS CloudFormation, Amazon CloudFront, AWS CloudHSM, Amazon CloudWatch (including CloudWatch Events and CloudWatch Logs), AWS CloudTrail, AWS CodeCommit, AWS CodeDeploy, AWS CodePipeline, AWS Config, AWS Database Migration Service, AWS Direct Connect, Amazon DynamoDB, Amazon EC2 Container Registry, Amazon ECS, Amazon Elastic File System, Amazon ElastiCache, AWS Elastic Beanstalk, Amazon EMR, Amazon Elasticsearch Service, Amazon Glacier, AWS Identity and Access Management (IAM), AWS Import/Export Snowball, AWS Key Management Service (KMS), Amazon Kinesis, AWS Lambda, AWS Marketplace, Mobile Hub, AWS OpsWorks, Amazon Relational Database Service (RDS), Amazon Redshift, Amazon Route 53, Amazon Simple Storage Service (S3), AWS Service Catalog, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), AWS Storage Gateway, Amazon Simple Workflow Service (SWF), AWS Trusted Advisor, VM Import/Export, and AWS WAF.

The Region supports all sizes of C4, D2, I2, M4, R3, T2, and X1 instances. As is the case with all of our newer Regions, instances must be launched within a Virtual Private Cloud (read Virtual Private Clouds for Everyone to learn more).

Well Connected
Here are some round-trip network metrics that you may find interesting (all names are airport codes, as is apparently customary in the networking world; all times are +/- 2 ms):

  • 10 ms to ORD (home to a pair of Direct Connect locations hosted by QTS and Equinix and an Internet exchange point).
  • 12 ms to IAD (home of the US East (Northern Virginia) Region).
  • 18 ms to JFK (home to another exchange point).
  • 52 ms to SFO (home of the US West (Northern California) Region).
  • 68 ms to PDX (home of the US West (Oregon) Region).

With just 12 ms of round-trip latency between US East (Ohio) and US East (Northern Virginia), you can make good use of unique AWS features such as S3 Cross-Region Replication, Cross-Region Read Replicas for Amazon Aurora, Cross-Region Read Replicas for MySQL, and Cross-Region Read Replicas for PostgreSQL. Data transfer between the two Regions is priced at the Inter-AZ price ($0.01 per GB), making your cross-region use cases even more economical.

Also on the networking front, we have agreed to work together with Ohio State University to provide AWS Direct Connect access to OARnet. This 100-gigabit network connects colleges, schools, medical research hospitals, and state government across Ohio. This connection provides local teachers, students, and researchers with a dedicated, high-speed network connection to AWS.

14 Regions, 38 Availability Zones, and Counting
Today’s launch of this 3-AZ Region expands our global footprint to a grand total of 14 Regions and 38 Availability Zones. We are also getting ready to open up a second AWS Region in China, along with other new AWS Regions in Canada, France, and the UK.

Since there’s been some industry-wide confusion about the difference between Regions and Availability Zones of late, I think it is important to understand the differences between these two terms. Each Region is a physical location where we have one or more Availability Zones or AZs. Each Availability Zone, in turn, consists of one or more data centers, each with redundant power, networking, and connectivity, all housed in separate facilities. Having two or more AZs in each Region gives you the ability to run applications that are more highly available, fault tolerant, and durable than would be the case if you were limited to a single AZ.

Around the office, we sometimes play with analogies that can serve to explain the difference between the two terms. My favorites are “Hotels vs. hotel rooms” and “Apple trees vs. apples.” So, pick your analogy, but be sure that you know what it means!



In the Works – VMware Cloud on AWS

by Jeff Barr | in Announcements

The long-standing trend toward on-premises virtualization has helped many enterprises to increase operational efficiency and to wring out as much value from their data center as possible. Along the way, they have built up a substantial repertoire of architectural skills and operational experience, but now find that they are struggling to match public cloud economics and the AWS pace of innovation.

Because of this, many enterprises are now looking at the AWS Cloud and like what they see. They are enticed by the fact that AWS has data centers in 35 Availability Zones across 13 different locations around the world (with construction underway in five more) and see considerable value in the rich set of AWS Services and the flexible pay-as-you-go model, and are looking at ways to move in to the future while building on an investment in virtualization that often dates back a decade or more.

VMware + AWS = Win
In order to help these organizations take advantage of the benefits that AWS has to offer while building on their existing investment in virtualization, we are working with our friends at VMware to build and deliver VMware Cloud on AWS.

This new offering is a native, fully managed VMware environment on the AWS Cloud that can be accessed on an hourly, on-demand basis or in subscription form. It includes the same core VMware technologies that customers run in their data centers today, including vSphere Hypervisor (ESXi), Virtual SAN (vSAN), and the NSX network virtualization platform, and is designed to provide a clean, seamless experience.

VMware Cloud on AWS runs directly on the physical hardware, while still taking advantage of a host of network and hardware features designed to support our security-first design model. This allows VMware to run their virtualization stack on AWS infrastructure without having to use nested virtualization.

If you find yourself in the situation that I described above—running on-premises virtualization but looking forward to the cloud—I think you’ll find a lot to like here. Your investment in packaging, tooling, and training will continue to pay dividends, as will your existing VMware licenses, agreements, and discounts. Everything that you and your team know about ESXi, vSAN, and NSX remains relevant and valuable. You will be able to manage your entire VMware environment (on-premises and AWS) using your existing copy of vCenter, along with tools and scripts that make use of the vCenter APIs.

The entire roster of AWS compute, storage, database, analytics, mobile, and IoT services can be directly accessed from your applications. Because your VMware applications will be running in the same data centers as the AWS services, you’ll be able to benefit from fast, low-latency connectivity when you use these services to enhance or extend your applications. You’ll also be able to take advantage of AWS migration tools such as AWS Database Migration Service, AWS Import/Export Snowball, and AWS Storage Gateway.

Plenty of Options
VMware Cloud on AWS will give you a lot of different options when it comes to migration, data center consolidation, modernization, and globalization:

On the migration side, you can use vSphere vMotion to live-migrate individual VMs, workloads, or entire data centers to AWS with a couple of clicks. Along the way, as you migrate individual components, you can use AWS Direct Connect to set up a dedicated network connection from your premises to AWS.

When it comes to data center consolidation, you can migrate code and data to AWS without having to alter your existing operational practices, tools, or policies.

When you are ready to modernize, you can take advantage of unique and  powerful features such as Amazon Aurora (a highly scalable relational database designed to be compatible with MySQL), Amazon Redshift (a fast, fully managed, petabyte-scale data warehouse), and many other services.

When you need to globalize your business, you can spin up your existing applications in multiple AWS regions with a couple of clicks.

Stay Tuned
I will share more information on this development as it becomes available. To learn more, visit the VMware Cloud on AWS page.


Amazon ElastiCache for Redis Update – Sharded Clusters, Engine Improvements, and More

by Jeff Barr | in Amazon ElastiCache

Many AWS customers use Amazon ElastiCache to implement a fast, in-memory data store for their applications.

We launched Amazon ElastiCache for Redis in 2013 and have added snapshot exports to S3, a refreshed engine, scale-up capabilities, tagging, and support for Multi-AZ operation with automatic failover over the past year or so.

Today we are adding a healthy collection of new features and capabilities to ElastiCache for Redis. Here’s an overview:

Sharded Cluster Support – You can now create sharded clusters that can hold more than 3.5 TiB of in-memory data.

Improved Console – Creation and maintenance of clusters is now more straightforward and requires far fewer clicks.

Engine Update – You now have access to the features of the Redis 3.2 engine.

Geospatial Data – You can now store and process geospatial data.

Let’s dive in!

Sharded Cluster Support / New Console
Until now, ElastiCache for Redis allowed you to create a cluster containing a single primary node and up to 5 read replicas. This model limited the size of the in-memory data store to 237 GiB per cluster.

You can now create clusters with up to 15 shards, expanding the overall in-memory data store to more than 3.5 TiB. Each shard can have up to 5 read replicas, giving you the ability to handle 20 million reads and 4.5 million writes per second.

The sharded model, in conjunction with the read replicas, improves overall performance and availability. Data is spread across multiple nodes and the read replicas support rapid, automatic failover in the event that a primary node has an issue.

In order to take advantage of the sharded model, you must use a Redis client that is cluster-aware. The client will treat the cluster as a hash table with 16,384 slots spread equally across the shards, and will then map the incoming keys to the proper shard.
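
For example, with redis-py 4.1 or later (which includes cluster support; older applications often used the separate redis-py-cluster package), connecting through the cluster's configuration endpoint might look like this, with the endpoint name being a placeholder:

  from redis.cluster import RedisCluster  # redis-py 4.1+ ships cluster support

  # Placeholder for your cluster's configuration endpoint.
  rc = RedisCluster(
      host="my-cluster.abcdef.clustercfg.use1.cache.amazonaws.com",
      port=6379,
      decode_responses=True,
  )

  # The client hashes each key to one of the 16,384 slots and routes the
  # command to the shard that owns that slot; callers never pick shards.
  rc.set("user:42:name", "Jeff")
  print(rc.get("user:42:name"))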

ElastiCache for Redis treats the entire cluster as a unit for backup and restore purposes; you don’t have to think about or manage backups for the individual shards.

The Console has been improved and I can create my first Scale Out cluster with ease (note that I checked Cluster Mode enabled (Scale Out) after I chose Redis as my Cluster engine):

The Console helps me to choose a suitable node type with a handy new menu:

You can also create sharded clusters using the AWS Command Line Interface (CLI), the AWS Tools for Windows PowerShell, the ElastiCache API, or via an AWS CloudFormation template.

Engine Update
Amazon ElastiCache for Redis is compatible with version 3.2 of the Redis engine. The engine includes three new features that may be of interest to you:

Enforced Write Consistency – The new WAIT command blocks the caller until all previous write commands have been acknowledged by the primary node and a specified number of read replicas. This change does not make Redis into a strongly consistent data store, but it does improve the odds that a freshly promoted read replica will include the most recent writes to the previous primary.

SPOP with COUNT – The SPOP command removes and then returns a random element from a set. You can now request more than one element at a time.

Bitfields – Bitfields are a memory-efficient way to store a collection of many small integers as a bitmap, stored as a Redis string. Using the BITFIELD command, you can address (GET) and manipulate (SET, increment, or decrement) fields of varying widths without having to think about alignment to byte or word boundaries.
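
Here is a short Python sketch (redis-py, with a placeholder endpoint) that exercises all three features; the keys and values are purely illustrative:

  import redis

  # Placeholder for a primary endpoint in your ElastiCache for Redis cluster.
  r = redis.Redis(host="my-redis.abcdef.0001.use1.cache.amazonaws.com", port=6379)

  # WAIT: block until 1 replica has acknowledged all prior writes (up to 500 ms).
  r.set("order:1001", "pending")
  acked = r.wait(1, 500)

  # SPOP with COUNT: remove and return up to 3 random members of a set.
  r.sadd("raffle", "alice", "bob", "carol", "dave")
  winners = r.spop("raffle", 3)

  # BITFIELD: pack small counters into a single string and increment an
  # unsigned 8-bit field without worrying about byte or word alignment.
  views = r.execute_command("BITFIELD", "page:views", "INCRBY", "u8", "#0", 1)

  print(acked, winners, views)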

Our implementation of Redis includes a snapshot mechanism that does not need to fork the server process into parent and child processes. Under heavy load, the standard, fork-based snapshot mechanism can lead to degraded performance due to swapping. Our alternative implementation comes into play when memory utilization is above 50% and neatly sidesteps the issue. It is a bit slower, so we use it only when necessary.

We have improved the performance of the syncing mechanism that brings a fresh read replica into sync with its primary node. We made a similar improvement to the mechanism that brings the remaining read replicas back into sync with the newly promoted primary node.

As I noted earlier, our engine is compatible with the comparable open source version and your applications do not require any changes.

Geospatial Data
You can now store and query geospatial data (a latitude and a longitude). Here are the commands, with a short usage sketch after the list:

  • GEOADD – Insert a geospatial item.
  • GEODIST – Get the distance between two geospatial items.
  • GEOHASH – Get a Geohash (geocoding) string for an item.
  • GEOPOS – Return the positions of items identified by a key.
  • GEORADIUS – Return items that are within a specified radius of a location.
  • GEORADIUSBYMEMBER – Return items that are within a specified radius of another item.

Available Now
Sharded cluster creation and all of the features that I mentioned are available now and you can start using them today in all AWS regions.


New AWS Quick Starts for Atlassian JIRA Software and Bitbucket Data Center

by Jeff Barr | in Quick Start

The AWS Quick Starts help you to rapidly deploy reference implementations of software solutions on the AWS Cloud. You can use the Quick Starts to easily test drive and consume software while taking advantage of best practices promoted by AWS and the software partner.

Today I would like to tell you about a pair of Quick Start guides that were developed in collaboration with APN Advanced Technology Partner (and DevOps competency holder) Atlassian to help you to deploy their JIRA Software Data Center and Bitbucket Data Center on AWS.

Atlassian’s Data Center offerings are designed for customers that have large development teams and a need for scalable, highly available development and project management tools. Because these tools are invariably mission-critical, robustness and resilience are baseline requirements, and production deployments are always run in a multi-node or cluster configuration.

New Quick Starts
JIRA Software Data Center is a project and issue management solution for agile teams and Bitbucket Data Center is a Git repository solution, both of which provide large teams working on multiple projects with high availability and performance at scale. With these two newly introduced Atlassian Quick Starts, you have access to a thoroughly tested, fully supported reference architecture that greatly simplifies and accelerates the deployment of these products on AWS.

The Quick Starts include AWS CloudFormation templates that allow you to deploy Bitbucket and/or JIRA Software into a new or existing Virtual Private Cloud (VPC). If you want to use a new VPC, the template will create it, along with public and private subnets and a NAT Gateway to allow EC2 instances in the private subnet to connect to the Internet (in regions where the NAT Gateway is not available, the template will create a NAT instance instead). If you are already using AWS and have a suitable VPC, you can deploy JIRA Software Data Center and Bitbucket Data Center there instead.

You will need to sign up for evaluation licenses for the Atlassian products that you intend to launch.

Bitbucket Data Center
The Bitbucket Data Center Quick Start deploys the following components as part of the deployment:

Amazon RDS PostgreSQL – Bitbucket Data Center requires a supported external database. Amazon RDS for PostgreSQL in a Multi-AZ configuration allows failover in the event the master node fails.

NFS Server –  Bitbucket Data Center uses a shared file system to store the repositories in a common location that is accessible to multiple Bitbucket nodes. The Quick Start architecture implements the shared file system in an EC2 instance with an attached Amazon Elastic Block Store (EBS) volume.

Bitbucket Auto Scaling Group – The Bitbucket Data Center product is installed on Amazon Elastic Compute Cloud (EC2) instances in an Auto Scaling group. The deployment will scale out and in, based on utilization.

Amazon Elasticsearch Service – Bitbucket Data Center uses Elasticsearch for indexing and searching.  The Quick Start architecture uses Amazon Elasticsearch Service, a managed service that makes it easy to deploy, operate, and scale Elasticsearch in the AWS Cloud.

JIRA Software Data Center
The JIRA Software Data Center Quick Start deploys the following components as part of the deployment:

Amazon RDS PostgreSQL – JIRA Data Center requires a supported external database. Amazon RDS for PostgreSQL in a Multi-AZ configuration allows failover in the event the master node fails.

Amazon Elastic File System – JIRA Software Data Center uses a shared file system to store artifacts in a common location that is accessible to multiple JIRA nodes. The Quick Start architecture implements a highly available shared file system using Amazon Elastic File System.

JIRA Auto Scaling Group – The JIRA Data Center product is installed on Amazon Elastic Compute Cloud (EC2) instances in an Auto Scaling group. The deployment will scale out and in, based on utilization.

We will continue to work with Atlassian to update and refine these two new Quick Starts.  We’re also working on two additional Quick Starts for Atlassian Confluence and Atlassian JIRA Service Desk and hope to have them ready before AWS re:Invent.

To get started, please visit the Bitbucket Data Center Quick Start or the JIRA Software Data Center Quick Start. You can also head over to Atlassian’s Quick Start page. The templates are available today; give them a whirl and let us know what you think!