AWS Management Tools Blog
Getting Started with Patch Manager and Amazon EC2 Systems Manager
At last year’s re:Invent, AWS launched Amazon EC2 Systems Manager, which helps you automatically apply OS patches within customized maintenance windows, collect software inventory, and configure Windows and Linux operating systems. These capabilities enable automated configuration and ongoing management of systems at scale and help maintain software compliance for instances running in Amazon EC2 or on-premises.
One of the capabilities of Systems Manager is Patch Manager, which can automate the process of patching Windows managed instances at scale. With Patch Manager, you can scan instances for missing patches, or scan for and install missing patches on individual instances or large groups of instances by using EC2 tags. Patch Manager can also be used with Systems Manager Maintenance Windows, so you can create a schedule to perform patch operations on your instances within a customized maintenance window.
In this post, I guide you through using Patch Manager to patch your Windows instances. If you run the demo, you are charged for the EC2 resources, but Systems Manager is free of charge.
Walkthrough
To get you on the fast track of experiencing Patch Manager, these examples use newly created Windows EC2 instances.
Running Ansible Playbooks using EC2 Systems Manager Run Command and State Manager
If you are running complex workloads on AWS and managing large groups of instances, chances are you are using some form of configuration management. Configuration management tools are effective in automating the deployment and configuration of applications on hybrid instances. However, efficiently managing the distribution and execution of playbooks or recipes, centrally managing the code, having a secure and scalable deployment mechanism, and properly logging system changes are all challenges. To address them, some of our customers use tools like cron, Rundeck, or others provided by configuration management vendors.
State Manager and Run Command, part of EC2 Systems Manager, automate management tasks by providing a secure, easy-to-use platform to maintain state and remotely execute commands on large groups of instances. Using these tools also addresses many of the common challenges of managing infrastructure at scale. Here are some of the benefits of these tools:
- Better security
  - There is no need to open inbound ports to remotely execute directives, which eliminates the need for SSH.
  - You can use IAM to restrict and control access to the platform.
  - All command execution is audited via AWS CloudTrail.
- Performance and reliability
  - Asynchronous execution of commands.
  - Commands are delivered and executed even after a system comes back from being offline.
  - Execute at scale by taking advantage of velocity control.
  - Control the deployment rate if errors increase during deployment.
In this blog post, I show you how to execute configuration management directives with Ansible on your instances, using State Manager, Run Command, and the new “AWS-RunAnsiblePlaybook” public document. This document runs Ansible locally on your instances.
Getting Started
Prerequisites
- Target instances must be set up as managed instances. For details, see the Systems Manager documentation on setting up managed instances
- Ansible must be pre-installed on the instances. See the following section on installing Ansible
- For Amazon S3 URLs in the playbook field, the AWS Command Line Interface (AWS CLI) must be installed on the target instance or server
Installing Ansible on target instances
Ansible can be installed as part of the bootstrapping of the instance or with Run Command. The following is some reference information you can use to install Ansible on different Linux distributions:
Amazon Linux
On Amazon Linux, you can install Ansible by using pip. You can use the following command.
Ubuntu
On Ubuntu, you can install Ansible by using the default package manager. Use this command.
RedHat 7
On Red Hat Enterprise Linux 7, you can install Ansible by enabling the EPEL repository. Use the following commands:
Document Parameters
The “AWS-RunAnsiblePlaybook” document has a few parameters that can be used to execute playbooks locally:
- Playbook – Use this parameter to pass the YAML text for the playbook directly
- Playbookurl – Use this parameter to specify a URL where the playbook file is stored. The URL can be a web URL (http or https) or an S3 URL in the form s3://bucket-name/playbook.yml. S3 URLs require the AWS CLI to be installed on the instance
- Extravars – This is a string list of additional variables that are passed to Ansible when the playbook is executed
- Check – This flag tells Ansible not to execute the actions in the playbook, but to predict and log the outcome instead. This is helpful for testing playbook execution
Workflow
The document performs a few checks and then executes the playbook specified in the parameters. Here is a summary of how the logic works:
- Check the Ansible version to determine whether Ansible is present on the system. Having Ansible installed is a prerequisite of this document; see the Prerequisites section and the section on installing Ansible
- Determine whether the playbook was passed as YAML text through the Playbook parameter or as a URL through the Playbookurl parameter. Based on the input, copy the data to a temporary playbook file for later execution
- Determine whether the Check option was selected and, based on the input, execute the appropriate ansible-playbook command
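The steps above can be sketched in shell form. This is a simplified illustration, not the actual document source; the variable names are hypothetical:

```shell
#!/bin/bash
# Simplified sketch of the document's logic; parameter names are illustrative.
PLAYBOOK='- hosts: all
  tasks:
    - debug: msg=hello'
PLAYBOOKURL=''
CHECK='True'

# 1. Verify that Ansible is present (a prerequisite of the document).
if command -v ansible-playbook >/dev/null 2>&1; then
  echo "Ansible found: $(ansible --version | head -1)"
else
  echo "Ansible is not installed; the real document would abort here." >&2
fi

# 2. Materialize the playbook into a temporary file, from either
#    the inline YAML text or the supplied URL.
TMPFILE=$(mktemp /tmp/ansible-playbook.XXXXXX)
if [ -n "$PLAYBOOKURL" ]; then
  case "$PLAYBOOKURL" in
    s3://*) aws s3 cp "$PLAYBOOKURL" "$TMPFILE" ;;   # S3 URLs need the AWS CLI
    *)      curl -fsSL "$PLAYBOOKURL" -o "$TMPFILE" ;;
  esac
else
  printf '%s\n' "$PLAYBOOK" > "$TMPFILE"
fi

# 3. Run ansible-playbook, honoring the Check flag.
if [ "$CHECK" = "True" ]; then
  echo "would run: ansible-playbook --check $TMPFILE"
else
  echo "would run: ansible-playbook $TMPFILE"
fi
```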
State Manager Walkthrough
Let’s walk through the process of using State Manager to set the desired state for an instance using an Ansible playbook. To better understand how you can use this new public document with State Manager, imagine you have a fleet of servers running Apache. You want to make sure that the web server software is installed and running at all times.
Step 1: Create Ansible playbook
This playbook installs Apache and makes sure it is running. It includes some logic to use different package management tools depending on the operating system:
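A playbook with that behavior might look like the following (a sketch, not the post's original file):

```yaml
---
- hosts: all
  become: true
  tasks:
    - name: Install Apache on RedHat-family systems
      yum:
        name: httpd
        state: present
      when: ansible_os_family == "RedHat"

    - name: Install Apache on Debian-family systems
      apt:
        name: apache2
        state: present
        update_cache: yes
      when: ansible_os_family == "Debian"

    - name: Ensure Apache is running and enabled
      service:
        name: "{{ 'httpd' if ansible_os_family == 'RedHat' else 'apache2' }}"
        state: started
        enabled: yes
```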
Step 2: Create State Manager Association
From the EC2 section of the AWS Management Console, select “State Manager” in the “Systems Manager Services” section of the left navigation pane, and click “Create Association”.
Select the “AWS-RunAnsiblePlaybook” document, your target instances (or tag), and the application schedule. In the Parameters section, because the playbook is specified as direct YAML text, paste it into the Playbook field and leave the Playbookurl field empty.

In the Extravars field, we can enter any additional variables we would like to pass to the playbook for execution. In this case, we are not using any additional variables, so we accept the default. Select whether you want to use the Check option.
After you click the “Run” button, the console displays the Association ID of this run, which you can use to see the result and verify the output from the console.
After the association runs for the first time, you can see the results of the execution by navigating to the association and looking at the status column. From then on, every time the association runs, it executes the playbook, which in turn ensures that the software is installed and running.
Run Command Walkthrough
Run Command lets you rate-control remote execution by configuring the maximum number of concurrent invocations and the number of errors allowed. This feature sets a threshold to detect errors and stops the execution if the threshold is passed.
Let’s walk through an example of using velocity control when running the AWS-RunAnsiblePlaybook document. In this example, we run the AWS-RunAnsiblePlaybook document on three target instances and pass the playbook as an S3 URL. To do this, you can use the following command:
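A command along these lines would match the actions listed below. The instance IDs and bucket name are placeholders, and the parameter key follows the document schema described earlier:

```shell
# Run the Ansible playbook from S3 on three instances, stopping on
# the first error and timing out after 600 seconds
aws ssm send-command \
    --document-name "AWS-RunAnsiblePlaybook" \
    --instance-ids "i-11111111" "i-22222222" "i-33333333" \
    --max-errors 1 \
    --timeout-seconds 600 \
    --parameters '{"playbookurl":["s3://my-bucket/playbook.yml"]}'
```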
This command performs the following actions:
- Executes the AWS-RunAnsiblePlaybook document
- Sets the target instances
- Defines the maximum number of errors as 1, meaning that if the execution encounters one error, it stops on the remaining targets
- Passes the parameters for the Ansible document
- Sets a timeout of 600 seconds
Conclusion
In this post, we’ve shown you how to use State Manager and Run Command to deploy Ansible playbooks at scale. These tools are secure, easy-to-use platforms that let you perform remote administration and maintain state on your hybrid instances. You can control the rate at which you send commands, use fine-grained permissions, and use notifications to simplify your workflow.
About the Author

Andres Silva is a Senior Technical Account Manager for AWS Enterprise Support. He has been working with AWS technology for more than 6 years. Andres works with Enterprise customers to design, implement and support complex cloud infrastructures. When he is not building cloud automation he enjoys skateboarding with his 2 kids.
Use Application Load Balancers with your AWS OpsWorks Chef 12 Stacks
Want to build scalable applications that take advantage of Elastic Load Balancing Application Load Balancer features? You can add capabilities such as content-based routing, HTTP/2 and WebSocket protocols, support for containers, enhanced metrics, and more.
AWS OpsWorks Stacks users have been asking AWS how they can use the new Application Load Balancer option with their layers. So AWS decided to develop and open source a set of Chef 12 recipes to make this integration simple. This post walks you through the steps required to make any Chef 12 Linux layer in OpsWorks Stacks work with Application Load Balancers.
More Automation Actions for Amazon EC2 Systems Manager
Recently, AWS released five new Amazon EC2 Systems Manager Automation actions. These actions allow you to:
- Launch an AWS CloudFormation stack
- Delete the stack
- Insert a delay in your workflow
- Copy and encrypt Amazon Machine Images (AMIs)
- Tag AWS resources
These actions extend the existing collection of actions, which can be used to orchestrate tasks such as instance launch, OS-level instance configuration and patching, AWS Lambda function invocation, and AMI creation.
In this post, I introduce the actions, discuss possible uses, and include examples.
Automation Overview
Automation allows you to patch, update agents, or bake applications into an AMI. With Automation, you can avoid the time and effort associated with manual image updates, and instead build AMIs through a streamlined, repeatable, and auditable process. Automation workflows are composed of a series of steps, where each step is based on an action.
Automation actions
Here are the actions:
- aws:createStack
- aws:deleteStack
- aws:sleep
- aws:createTags
- aws:copyImage
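To give a feel for how these actions appear in a workflow, here is a hypothetical fragment combining two of them. The step names and values are placeholders, and it assumes an earlier step named createImage produced an ImageId output:

```yaml
mainSteps:
  - name: waitBeforeTagging
    action: aws:sleep
    inputs:
      Duration: PT5M          # ISO 8601 duration: pause for 5 minutes
  - name: tagNewAmi
    action: aws:createTags
    inputs:
      ResourceType: EC2
      ResourceIds:
        - "{{ createImage.ImageId }}"
      Tags:
        - Key: Environment
          Value: Production
```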
Streamline AWS CloudTrail Logs Using Event Filters
In November 2016, AWS CloudTrail announced a new feature that provides the ability to filter events that are collected within a CloudTrail trail. This simple feature helps AWS customers save time and money by creating trails that contain a subset of overall API operations and account activity.
In this post, I show you how to add event filters when creating a trail from the AWS Management Console or the AWS CLI.
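From the CLI, event filters are configured through event selectors. For example, the following sketch (the trail name is a placeholder) keeps only write, or mutating, management events:

```shell
# Record only write management events for the trail
aws cloudtrail put-event-selectors \
    --trail-name my-audit-trail \
    --event-selectors '[{"ReadWriteType": "WriteOnly", "IncludeManagementEvents": true}]'
```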
A common and often recommended CloudTrail setup is to have two or more trails configured within your AWS account. One trail is for security and auditing purposes, leverages Amazon S3 file encryption and log file validation, and is stored in an S3 bucket with a policy allowing only security or audit team access.
Additional trails are often stored in a separate S3 bucket and used to send data to third-party tools, set up for the DevOps team to access and use, or leveraged by support teams to troubleshoot and better investigate account issues.

Administering a Group of Instances using Run Command
Emily Freebairn, Software Development Engineer with Amazon Web Services.
Frequently, engineers want to perform operational tasks across a group of instances. However, many of these tasks need to be performed at a controlled speed, and return feedback when there is a problem. Furthermore, administrators often want to ensure that engineers can perform only specific actions.
Run Command, which is part of Amazon EC2 Systems Manager (SSM), is designed to let you remotely and securely manage instances. Run Command provides a simple way of automating common administrative tasks like running shell scripts, installing software or patches, and more. Run Command allows you to execute these commands across multiple instances and provides visibility into the results. Through integration with AWS Identity and Access Management (IAM), you can apply granular permissions to control the actions users can perform on instances. All actions taken with Run Command are recorded by AWS CloudTrail, allowing you to audit changes in your fleet.
In this post, I demonstrate how to send a command to collect diagnostic information on my instances. Because capacity is added to the fleet on demand, the fleet composition changes over time. To reduce the likelihood of unintentional issues on instances, commands can be run at a controlled rate across instances. You get notified of any failures for later analysis. To make sure you can’t accidentally run other commands, you use a custom action with locked-down permissions to perform only specific tasks.
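A rate-controlled invocation along those lines might look like this. The tag values and commands are illustrative:

```shell
# Collect network configuration across a tagged fleet, at most 10
# instances at a time, stopping if more than 5 invocations fail
aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --targets "Key=tag:Environment,Values=Production" \
    --parameters '{"commands":["ifconfig -a"]}' \
    --max-concurrency 10 \
    --max-errors 5
```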
Configure Amazon EC2 Instances in an Auto Scaling Group Using State Manager
When you are managing instances at scale, it’s important to be able to define and apply software configurations and to ensure that the instances don’t deviate from the expected state. That way, you can make sure that your applications and infrastructure operate as you’d expect.
State Manager, which was launched as part of Amazon EC2 Systems Manager, helps you define and maintain consistent configuration of operating systems and applications. Using State Manager, you can control configuration details such as instance configurations, anti-virus definitions, firewall settings, and more. Based on a schedule that you define, State Manager automatically reviews your fleet and compares it against the specified configuration policy. If your configuration changes and does not match the desired state, State Manager reapplies the policy to bring it back to the desired state.
Auto Scaling helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application. You create collections of EC2 instances, called Auto Scaling groups. You can specify the maximum and minimum number of instances in each group, and Auto Scaling ensures that your group never goes above or below the values you set.
In this post, I discuss how you can use State Manager to define how to configure your instances in an Auto Scaling group. As new instances are added, State Manager automatically configures them to bring them to their desired state. In addition, State Manager can also periodically reapply the configuration to your instances, minimizing configuration drift.
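As a sketch of what such an association could look like from the CLI (the group name, commands, and schedule are placeholders):

```shell
# Associate a configuration document with every instance carrying the
# Auto Scaling group tag, reapplying it every 30 minutes
aws ssm create-association \
    --name "AWS-RunShellScript" \
    --targets "Key=tag:aws:autoscaling:groupName,Values=my-asg" \
    --parameters '{"commands":["yum update -y"]}' \
    --schedule-expression "rate(30 minutes)"
```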
Replacing a Bastion Host with Amazon EC2 Systems Manager
Bastion hosts (also called “jump servers”) are often used as a best practice for accessing privately accessible hosts within a system environment. For example, your system might include an application host that is not intended to be publicly accessible. To access it for product updates or system patching, you typically log in to a bastion host and then access (or “jump to”) the application host from there.
In this post, I demonstrate how you can reduce your system’s attack surface while also offering greater visibility into commands issued on your hosts. The solution is to replace your bastion host by using Amazon EC2 Systems Manager.
Bastion host access
Access to the bastion host is ideally restricted to a specific IP range, typically from your organization’s corporate network. The benefit of using a bastion host in this regard is that access to any of the internal hosts is isolated to one means of access: through either a single bastion host or a group. For further isolation, the bastion host generally resides in a separate VPC.
The following diagram illustrates this design:

The application host resides in a private subnet in a VPC that is peered with the management VPC. The application host has a security group rule that allows port 22 access only from the management VPC’s bastion host security group. (The examples in this post refer to port 22 and SSH, but Windows users can substitute port 3389 and RDP.) Similarly, the bastion host has a security group rule that allows port 22 access only from the corporate network IP space.
Because the application host resides in a private subnet, it is able to establish outbound Internet connections only through a NAT gateway that resides in the VPC’s public subnet.
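The two security group rules described above could be created like this. The group IDs are placeholders, and 203.0.113.0/24 stands in for the corporate network range:

```shell
# Allow SSH to the bastion host only from the corporate network
aws ec2 authorize-security-group-ingress \
    --group-id sg-11111111 \
    --protocol tcp --port 22 \
    --cidr 203.0.113.0/24

# Allow SSH to the application host only from the bastion security group
aws ec2 authorize-security-group-ingress \
    --group-id sg-22222222 \
    --protocol tcp --port 22 \
    --source-group sg-11111111
```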
To put all of this into context, say that you want to view the network interfaces for the application host. To do so, you would follow these steps:
- Install the application host’s private key on the bastion host.
- Establish an SSH (Secure Shell) session on the bastion host. This is generally done from a trusted network, such as your corporate network.
- Establish an SSH session from the bastion host to the application host.
- Run the “ifconfig” command.
- To save the results, you can copy and paste the output, pipe the output to a file, or save the output to a storage device.
The security controls in this system help restrict access to the application and the bastion host. However, the bastion model does have some downsides:
- Like any other infrastructure host, it must be managed and patched.
- It accrues a cost while it is running.
- Each security group that allows bastion access requires a security group ingress rule, normally port 22 for SSH (usually for Linux hosts) or port 3389 for RDP (usually for Windows hosts).
- Private RSA keys for the bastion host and application hosts need to be managed, protected, and rotated.
- SSH activity isn’t natively logged.
Use Parameter Store to Securely Access Secrets and Config Data in AWS CodeDeploy
Customers use AWS CodeDeploy to automate application deployment because it provides a uniform method for:
- Updating applications across development, staging, and production environments.
- Handling the complexity of updating applications and avoiding service downtime.
However, deploying and configuring applications often requires access to secrets and configuration data, such as API keys or database passwords, in source code. This is challenging because:
- Config data might be hard-coded in plaintext in the source code or accessed manually from config files (or other locations). This is not scalable and, more importantly, not recommended from a security standpoint.
- Providing granular access control is difficult, especially if the parameters are used in customer-facing or production infrastructure.
- Data is sometimes stored outside your environment, crossing trust boundaries and requiring more tools to manage.
You’ll find more information about Parameter Store in a blog post, Managing Secrets for Amazon ECS Applications Using Parameter Store and IAM Roles for Tasks, recently published by my colleague, Stas Vonholsky.
In this blog post, I will talk about how you can simplify your AWS CodeDeploy workflows by using Parameter Store to store and reference a configuration secret. This not only improves your security posture, but also automates your deployment because you don’t have to manually change configuration data in your source code.
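For example, a database password could be stored once as a SecureString and then resolved at deployment time. The parameter name and value here are placeholders:

```shell
# Store the secret, encrypted with the account's default KMS key
aws ssm put-parameter \
    --name "/prod/app/db-password" \
    --type "SecureString" \
    --value "example-placeholder-password"

# At deployment time, fetch and decrypt it
aws ssm get-parameters \
    --names "/prod/app/db-password" \
    --with-decryption \
    --query "Parameters[0].Value" \
    --output text
```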
Interesting Articles on EC2 Systems Manager Parameter Store
Recently, we have seen a few interesting articles on using Parameter Store, part of EC2 Systems Manager, to store and access secrets on AWS.
In his post, Simple Secrets Management via AWS’ EC2 Parameter Store, Matt Adorjan shows how to protect your AWS environment by securely storing secrets with Parameter Store and controlling access to secrets with AWS Identity and Access Management (IAM).
In the Secrets in AWS post, Stephen Price describes how you can use Parameter Store to manage and use secrets in your favorite programming language. This Parameter Store capability makes it easy to handle secrets for cloud-based architectures, such as microservices or containerized applications.
Finally, in the Using Parameter Store with AWS CodePipeline post by Trey McElhattan from Stelligent, you can learn about using Parameter Store and AWS CodePipeline as part of your continuous delivery pipeline.