AWS Management Tools Blog
Emily Freebairn, Software Development Engineer with Amazon Web Services.
Frequently, engineers want to perform operational tasks across a group of instances. However, many of these tasks need to be performed at a controlled speed, and return feedback when there is a problem. Furthermore, administrators often want to ensure that engineers can perform only specific actions.
Run Command, which is part of Amazon EC2 Systems Manager (SSM), is designed to let you remotely and securely manage instances. Run Command provides a simple way of automating common administrative tasks like running shell scripts, installing software or patches, and more. Run Command allows you to execute these commands across multiple instances and provides visibility into the results. Through integration with AWS Identity and Access Management (IAM), you can apply granular permissions to control the actions users can perform on instances. All actions taken with Run Command are recorded by AWS CloudTrail, allowing you to audit changes in your fleet.
In this post, I demonstrate how to send a command to collect diagnostic information on my instances. Because capacity is added to the fleet on demand, the fleet composition changes over time. To reduce the likelihood of unintentional issues, commands can be run at a controlled rate across instances, and you are notified of any failures so that you can analyze them later. To make sure other commands can’t be run accidentally, you use a custom action with locked-down permissions that performs only specific tasks.
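As a rough illustration of the velocity and notification controls described above, the following sketch builds the parameters for a rate-controlled Run Command invocation. The tag values, SNS topic ARN, and IAM role ARN are placeholders, not values from this post:

```python
# Sketch of a rate-controlled Run Command invocation. Tag values, the SNS
# topic ARN, and the service role ARN below are placeholders.
import json

send_command_args = {
    "DocumentName": "AWS-RunShellScript",
    # Target by tag rather than instance ID, so the command follows the
    # fleet as its composition changes over time.
    "Targets": [{"Key": "tag:Role", "Values": ["diagnostics"]}],
    "Parameters": {"commands": ["ifconfig"]},
    # Velocity controls: run on at most 10% of targets at a time, and
    # stop dispatching to new instances after one error.
    "MaxConcurrency": "10%",
    "MaxErrors": "1",
    # Get notified of failures for analysis later on.
    "NotificationConfig": {
        "NotificationArn": "arn:aws:sns:us-east-1:123456789012:run-command-failures",
        "NotificationEvents": ["Failed"],
        "NotificationType": "Command",
    },
    "ServiceRoleArn": "arn:aws:iam::123456789012:role/RunCommandSNSRole",
}

print(json.dumps(send_command_args, indent=2))
# In a live account: boto3.client("ssm").send_command(**send_command_args)
```

Targeting by tag and capping `MaxConcurrency` and `MaxErrors` is what keeps an accidental bad command from sweeping the whole fleet at once.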
When you are managing instances at scale, it’s important to be able to define and apply software configurations, as well as ensure that the instances don’t deviate from the expected state. That way, you can make sure that your applications and infrastructure operate as you’d expect.
State Manager, which was launched as part of Amazon EC2 Systems Manager, helps you define and maintain consistent configuration of operating systems and applications. Using State Manager, you can control configuration details such as instance configurations, anti-virus definitions, firewall settings, and more. Based on a schedule that you define, State Manager automatically reviews your fleet and compares it against the specified configuration policy. If your configuration changes and no longer matches the desired state, State Manager reapplies the policy to bring the instance back to the desired state.
Auto Scaling helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application. You create collections of EC2 instances, called Auto Scaling groups. You can specify the minimum and maximum number of instances in each group, and Auto Scaling ensures that your group never goes below or above the values you set.
In this post, I discuss how you can use State Manager to define how to configure your instances in an Auto Scaling group. As new instances are added, State Manager automatically configures them to bring them to their desired state. In addition, State Manager can periodically reapply the configuration to your instances, minimizing configuration drift.
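As a sketch of the idea, the following builds the parameters for a State Manager association that targets an Auto Scaling group by the tag Auto Scaling stamps on its instances. The group name, document, and schedule are placeholders for this example:

```python
# Sketch: a State Manager association that keeps Auto Scaling instances in
# their desired state. The group name, commands, and schedule are placeholders.
import json

create_association_args = {
    # An SSM document that defines the desired configuration.
    "Name": "AWS-RunShellScript",
    "Parameters": {"commands": ["yum -y update"]},
    # Target by the tag the Auto Scaling group applies to its instances,
    # so new instances are configured automatically as the group scales up.
    "Targets": [{"Key": "tag:aws:autoscaling:groupName", "Values": ["my-asg"]}],
    # Reapply on a schedule to minimize configuration drift.
    "ScheduleExpression": "rate(30 minutes)",
}

print(json.dumps(create_association_args, indent=2))
# Live call: boto3.client("ssm").create_association(**create_association_args)
```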
Bastion hosts (also called “jump servers”) are often used as a best practice for accessing privately accessible hosts within a system environment. For example, your system might include an application host that is not intended to be publicly accessible. To access it for product updates or managing system patches, you typically log in to a bastion host and then access (or “jump to”) the application host from there.
In this post, I demonstrate how you can reduce your system’s attack surface while also offering greater visibility into commands issued on your hosts. The solution is to replace your bastion host by using Amazon EC2 Systems Manager.
Bastion host access
Access to the bastion host is ideally restricted to a specific IP range, typically from your organization’s corporate network. The benefit of using a bastion host in this regard is that access to any of the internal hosts is isolated to one means of access: through either a single bastion host or a group. For further isolation, the bastion host generally resides in a separate VPC.
The following diagram illustrates this design:
The application host resides in a private subnet in a VPC that is peered with the management VPC. The application host has a security group rule that allows port 22 access only from the management VPC’s bastion host security group. (The examples in this post refer to port 22 and SSH, but Windows users can substitute port 3389 and RDP.) Similarly, the bastion host has a security group rule that allows port 22 access only from the corporate network IP space.
Because the application host resides in a private subnet, it is able to establish outbound Internet connections only through a NAT gateway that resides in the VPC’s public subnet.
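The two security group rules described above can be sketched as CloudFormation resources. The resource names and the corporate IP range below are placeholders, not values from this post:

```yaml
# Sketch of the two ingress rules in the bastion design. Logical names
# and the corporate CIDR are placeholders.
BastionIngress:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: !Ref BastionSecurityGroup
    IpProtocol: tcp
    FromPort: 22
    ToPort: 22
    CidrIp: 203.0.113.0/24            # corporate network IP space only

ApplicationIngress:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: !Ref ApplicationSecurityGroup
    IpProtocol: tcp
    FromPort: 22
    ToPort: 22
    SourceSecurityGroupId: !Ref BastionSecurityGroup  # bastion hosts only
```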
To put all of this into context, say that you want to view the network interfaces for the application host. To do so, you would follow these steps:
- Install the application host’s private key on the bastion host.
- Establish an SSH (Secure Shell) session to the bastion host. This is generally done from a trusted network, such as your corporate network.
- Establish an SSH session from the bastion host to the application host.
- Run the “ifconfig” command.
- To save the results, you can copy and paste the output, pipe the output to a file, or save the output to a storage device.
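With the Systems Manager approach this post demonstrates, those steps collapse into a single Run Command invocation, with no keys to install and no inbound port 22 required. A sketch, where the instance ID and S3 bucket name are placeholders:

```python
# Sketch: running ifconfig on the application host through Run Command
# instead of SSH. The instance ID and bucket name are placeholders.
import json

args = {
    "DocumentName": "AWS-RunShellScript",
    "InstanceIds": ["i-0123456789abcdef0"],
    "Parameters": {"commands": ["ifconfig"]},
    # Save the output to S3 instead of copying it from a terminal.
    "OutputS3BucketName": "my-command-output-bucket",
}

print(json.dumps(args, indent=2))
# Live call: boto3.client("ssm").send_command(**args)
```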
The security controls in this system help restrict access to the application and the bastion host. However, the bastion model does have some downsides:
- Like any other infrastructure host, it must be managed and patched.
- It accrues a cost while it is running.
- Each security group that allows bastion access requires an ingress rule, normally port 22 for SSH (usually Linux hosts) or port 3389 for RDP (usually Windows hosts).
- Private RSA keys for the bastion host and application hosts need to be managed, protected, and rotated.
- SSH activity isn’t natively logged.
Customers use AWS CodeDeploy to automate application deployment because it provides a uniform method for:
- Updating applications across development, staging, and production environments.
- Handling the complexity of updating applications and avoiding service downtime.
However, deploying and configuring applications often requires access to secrets and configuration data, such as API keys or database passwords, in source code. This is challenging because:
- Config data might be hard-coded in plaintext in the source code or accessed manually from config files (or other locations). This is not scalable and, more importantly, is not recommended from a security standpoint.
- Providing granular access control is difficult, especially if the parameters are used in customer-facing or production infrastructure.
- Data is sometimes stored outside your environment, crossing trust boundaries and requiring more tools to manage.
You’ll find more information about Parameter Store in a blog post, Managing Secrets for Amazon ECS Applications Using Parameter Store and IAM Roles for Tasks, recently published by my colleague, Stas Vonholsky.
In this blog post, I will talk about how you can simplify your AWS CodeDeploy workflows by using Parameter Store to store and reference a configuration secret. This not only improves your security posture but also automates your deployments, because you don’t have to manually change configuration data in your source code.
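As a minimal sketch of the pattern, a deployment step can fetch the secret from Parameter Store at deploy time instead of reading it from source code. The parameter name below is a placeholder:

```python
# Sketch: fetching a SecureString secret from Parameter Store during a
# deployment step. The parameter name is a placeholder.
def get_secret(ssm_client, name):
    """Return a SecureString parameter's value, decrypted by SSM."""
    resp = ssm_client.get_parameter(Name=name, WithDecryption=True)
    return resp["Parameter"]["Value"]

# In a CodeDeploy lifecycle hook you would pass a real client:
#   secret = get_secret(boto3.client("ssm"), "/prod/app/db-password")
```

Because the function takes the client as an argument, it is easy to exercise with a stub in tests and to lock down in production with an IAM policy scoped to the parameter path.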
In his post, Simple Secrets Management via AWS’ EC2 Parameter Store, Matt Adorjan shows how to protect your AWS environment by securely storing secrets with Parameter Store and controlling access to secrets with AWS Identity and Access Management (IAM).
In the Secrets in AWS post, Stephen Price describes how you can use Parameter Store to manage and use secrets in your favorite programming language. This Parameter Store capability makes it easy to handle secrets for cloud-based architectures, such as microservices or containerized applications.
Finally, in the Using Parameter Store with AWS CodePipeline post by Trey McElhattan from Stelligent, you can learn about using Parameter Store and AWS CodePipeline as part of your continuous delivery pipeline.
A few days ago, The AWS Big Data Blog published a new blog post: “Analyze Security, Compliance, and Operational Activity Using AWS CloudTrail and Amazon Athena.”
In this blog post, AWS Professional Services Consultant Sai Sriparasa shows how to set up and use the recently released Amazon Athena CloudTrail SerDe to query AWS CloudTrail log files for Amazon EC2 security group modifications, console sign-in activity, and operational account activity. This post assumes that you already have CloudTrail configured.
To read the whole post, see Analyze Security, Compliance, and Operational Activity Using AWS CloudTrail and Amazon Athena.
Amazon EC2 instances are often created and destroyed as demand dictates. Auto Scaling is great for dynamically scaling servers so that EC2 resources are consumed only when they are necessary. This blog post will show you how to connect EC2 instances created by an Auto Scaling group to an AWS OpsWorks for Chef Automate server. When EC2 instances are launched in an Auto Scaling group, they will be added to the Chef Automate node list and configured. New nodes will be added automatically when the group scales up and removed when it scales down.
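One way a scale-up hook can register a new instance is the OpsWorks for Chef Automate `AssociateNode` API; a matching `DisassociateNode` call handles scale-down. The server name, node name, and key below are placeholders, and this is only a sketch of the call, not the post's full solution:

```python
# Sketch: registering a newly launched instance with an OpsWorks for Chef
# Automate server. Server name, node name, and the key are placeholders.
import json

associate_node_args = {
    "ServerName": "my-chef-server",
    "NodeName": "i-0123456789abcdef0",   # e.g., the EC2 instance ID
    "EngineAttributes": [
        {"Name": "CHEF_ORGANIZATION", "Value": "default"},
        {"Name": "CHEF_NODE_PUBLIC_KEY", "Value": "<node public key PEM>"},
    ],
}

print(json.dumps(associate_node_args, indent=2))
# Live call: boto3.client("opsworkscm").associate_node(**associate_node_args)
# On scale-down: boto3.client("opsworkscm").disassociate_node(...)
```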
AWS Config is a fully managed service that provides AWS resource inventory, configuration history, and configuration change notifications to enable security and governance. AWS Config Rules enables you to create rules that automatically check the configuration of AWS resources recorded by AWS Config. Over the last year, we expanded the service coverage for Config in 7 new regions, and expanded support for Config Rules in 9 new regions. We added support for 15 resource types from 6 new services, and developed 18 new managed rules. Let’s look back on these significant new features and updates to Config and Config Rules that we introduced in 2016.
AWS CloudFormation allows developers and systems administrators to create and manage a collection of related AWS resources (called a stack) by provisioning and updating them in an orderly and predictable way. In this blog post, we will look back on the CloudFormation features and updates introduced in 2016, including:
- New AWS resources you can provision with CloudFormation.
- AWS CodePipeline integration to enable continuous delivery of infrastructure.
- Support for YAML and the AWS Serverless Application Model (AWS SAM) to improve the developer experience.
- Change sets and cross-stack references to enhance CloudFormation stack management capabilities.
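To illustrate two of those features together, here is a minimal sketch of YAML syntax with a cross-stack reference. The resource names, CIDR, and export name are placeholders:

```yaml
# Sketch: Stack A exports a value in YAML syntax. Names are placeholders.
Resources:
  MyVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
Outputs:
  VpcId:
    Value: !Ref MyVPC
    Export:
      Name: shared-vpc-id

# Stack B (a separate template) imports the exported value:
# Resources:
#   AppSecurityGroup:
#     Type: AWS::EC2::SecurityGroup
#     Properties:
#       GroupDescription: App access
#       VpcId: !ImportValue shared-vpc-id
```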
Today, we are excited to launch the new Management Tools Blog. The AWS Management Tools are a group of services that help you provision, configure, monitor, track, audit, and cost manage your AWS and on-premises resources.
This blog will cover a range of topics, including new feature updates, tips and tricks, and sample apps and templates. In addition to providing deep technical coverage of the latest features, we will also spend time discussing existing features and use cases for the suite of Management Tools services. We hope this blog will be a useful resource in helping you operate your infrastructure at scale on AWS.
To see future updates, check back often, follow our social media accounts, or subscribe to our blog using the RSS feed button at the top of the page.