AWS Management Tools Blog

The Right Way to Store Secrets using Parameter Store

This guest post was written by Evan Johnson, who works in the Security team at Segment.

The way companies manage application secrets is critical. Even today, the most high-profile security companies can suffer breaches through improper secrets-management practices. Having internet-facing credentials is like leaving your house key under a doormat that millions of people walk over daily. Even if the secrets are hard to find, it is a game of hide-and-seek that you eventually lose.

At Segment, we centrally and securely manage our secrets with Amazon EC2 Systems Manager Parameter Store, lots of Terraform code, and chamber. Parameter Store is a great tool for achieving secrets management. If you are running workloads on AWS, then using Parameter Store as a managed secrets store is worth serious consideration. This post has all the information you need to get up and running with Parameter Store in production.

Service Identity

At Segment, we run hundreds of services that communicate with one another, AWS APIs, and third-party APIs. The services we run have different needs and should only have access to systems that are strictly necessary. This is called the ‘principle of least privilege’.

As an example, our main webserver should never have access to security audit logs for our infrastructure. Without giving containers and services an identity, it is not possible to protect and restrict access to secrets with access control policies. Our services identify themselves using IAM roles. From the AWS docs – “An IAM role … is an AWS identity with permission policies that determine what the identity can and cannot do in AWS.”

For example, our IAM roles for instances have write-only access to an Amazon S3 bucket for appending audit logs, but prevent the deletion and reading of those logs.
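Concretely, a write-only grant like that can be expressed by allowing s3:PutObject and nothing else; the bucket name below is a hypothetical stand-in for illustration:

```json
{
  "Effect": "Allow",
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::example-audit-logs/*"
}
```

Because neither s3:GetObject nor s3:DeleteObject is granted, a role with only this statement can append audit logs but cannot read or delete them.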

How do containers get their role securely?

A requirement of using Amazon ECS is that every container instance runs the Amazon ECS container agent (ecs-agent). The agent runs as a container that orchestrates the other containers and provides an API with which they can communicate. The agent is the central nervous system of how containers fetch IAM role credentials.

One important piece of the agent is the HTTP API it runs, which must be accessible to the other containers running in the cluster. To make this API available, an iptables rule is set on the host instance. This iptables rule redirects traffic destined for a magic IP address to the ecs-agent container.

iptables -t nat \
-A OUTPUT \
-d \
-p tcp \
-m tcp \
--dport 80 \
-j REDIRECT \
--to-ports 51679

Before the agent starts a container, it first fetches credentials for the container’s task role from the AWS credential service. The agent then embeds the credentials key ID, a UUID, in the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI environment variable that is set inside the container when it is started.

$ env
AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=/v2/credentials/<credential-uuid>

Using this relative URI and UUID, containers fetch AWS credentials from the agent over HTTP. One container cannot access the authentication credentials to impersonate another container because the UUID is sufficiently difficult to guess.

$ curl$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
{
  "RoleArn": "arn:aws:iam::111111111111:role/test-service",
  "AccessKeyId": "REDACTED",
  "SecretAccessKey": "REDACTED",
  "Token": "REDACTED",
  "Expiration": "2017-08-10T02:01:43Z"
}

Additional security details

As heavy Amazon ECS users, we did find security foot-guns associated with ECS task roles. It’s important to realize that any container that can access the Amazon EC2 metadata service on behalf of its host can become any other task role on the system. This could allow containers to circumvent access control policies and gain access to unauthorized systems.

The two ways a container can access the metadata service are using host networking and over the Docker bridge. When a container is run with --network=host, it is always able to connect to the EC2 metadata service through its host's network. Setting the ECS_ENABLE_TASK_IAM_ROLE_NETWORK_HOST variable to false in the ecs-agent config file prevents containers from running with this permission.

Additionally, it’s important to block access to the metadata service IP address over the Docker bridge using iptables. The IAM task role documentation recommends preventing access to the EC2 metadata service with this specific rule.

$ iptables --insert FORWARD 1 --in-interface docker+ --destination --jump DROP

The principle of least privilege is always important to keep in mind when building a security system. Setting ECS_DISABLE_PRIVILEGED to true in the host’s ecs-agent config file prevents privileged Docker containers from being run, avoiding other more nuanced security problems.
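Both agent settings mentioned above are plain key=value entries in the ecs-agent config file (typically /etc/ecs/ecs.config on each container instance). A hardened configuration combines them:

```
ECS_DISABLE_PRIVILEGED=true
```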

Parameter Store

Parameter Store is an AWS service that stores strings. It can store secret data and non-secret data alike. Secrets stored in Parameter Store are secure strings, encrypted with a customer-specific AWS KMS key.

Under the hood, a lot happens when a service requests secure strings from Parameter Store:

  1. The ECS container agent requests the host instance’s temporary credentials.
  2. The agent continuously generates temporary credentials for each ECS task role running on ECS, using an undocumented service called ACS.
  3. When the agent starts each task, it sets a secret UUID in the environment of the container.
  4. When the task needs its task role credentials, it requests them from the ecs-agent API and authenticates with the secret UUID.
  5. The ECS task requests its secrets from Parameter Store using the task role credentials.
  6. Parameter Store transparently decrypts these secure strings before returning them to the ECS task.

Using roles with Parameter Store is especially nice because it does not require maintaining additional authentication tokens, which would create additional headaches and additional secrets to manage!

Parameter Store IAM Policies

Each role that accesses Parameter Store requires the ssm:GetParameters permission. “SSM” stands for “Simple Systems Manager”, the previous name for Systems Manager, and is how AWS denotes Parameter Store operations.

The ssm:GetParameters permission is used in the policy that enforces access control and protects one service’s secrets from another. Segment gives all services an IAM role that grants access to secrets that match the format {{service_name}}/*. Parameter Store supports hierarchies natively, so this permission provides each service with its own directory of secrets.

{
  "Sid": "",
  "Effect": "Allow",
  "Action": "ssm:GetParameters",
  "Resource": [
    "arn:aws:ssm:us-west-2:111111111111:parameter/{{service_name}}/*"
  ]
}

In addition to the access control policies, Segment uses a dedicated AWS KMS key to encrypt secure strings within the Parameter Store. Each IAM role is granted a small set of KMS permissions in order to decrypt the secrets they store in Parameter Store.

{
  "Sid": "",
  "Effect": "Allow",
  "Action": [
    "kms:Decrypt"
  ],
  "Resource": "parameter_store_key"
}

Automating service identity and policies

Segment has a small Terraform module that abstracts away the creation of a unique IAM role, load balancers, DNS records, Auto Scaling, and CloudWatch alarms. Below, I show how our nginx load balancer is defined using our service module.

module "nginx" {
  source            = "../modules/service"

  name              = "nginx"
  image             = "segment/nginx"
  product_area      = "foundation-security"
  health_check_path = "/healthcheck"
  environment       = "${var.environment}"
}

Under the hood, the task role given to each service has all of the IAM policies we previously listed, restricting access to Parameter Store by the value in the name field. No configuration required.

Additionally, developers have the option to override which secrets their service has access to by providing a “secret label”. This secret label replaces their service name in their IAM policy. If NGINX were to need the same secrets as an HAProxy instance, the two services can share credentials by using the same secret label.

module "nginx" {
  source            = "../modules/service"

  name              = "nginx"
  image             = "segment/nginx"
  product_area      = "foundation-security"
  health_check_path = "/healthcheck"
  environment       = "${var.environment}"

  # Share secrets with loadbalancers
  secret_label      = "loadbalancers"
}

Parameter Store in production

All Segment employees authenticate with AWS using aws-vault, which can securely store AWS credentials in the macOS keychain or in an encrypted file for Linux users. Segment has several AWS accounts. Engineers can interact with each account using aws-vault, and execute commands locally with their AWS credentials populated in their environment.

$ aws-vault exec development -- aws s3 ls s3://segmentio-bucket

Using Chamber with Parameter Store

Chamber is a CLI tool that Segment built to let developers and code communicate with Parameter Store in a consistent manner. Because developers use the same tools that run in production, we decrease the differences between code running in development, staging, and production.

Chamber works with aws-vault, and has only a few key subcommands:

  • exec—executes a command after loading secrets into the environment.
  • history—views the history of changes made to a secret in Parameter Store.
  • list—lists the names of all secrets in a service’s path.
  • write—writes a secret to Parameter Store.

Chamber leverages Parameter Store’s built-in search and history mechanisms to implement the list and history subcommands. All strings stored in Parameter Store are automatically versioned. The subcommand used to fetch secrets from Parameter Store is exec. When developers use the exec subcommand, they use it with aws-vault.

$ aws-vault exec development -- chamber exec loadbalancers -- nginx

In the preceding command, chamber is executed with the credentials and permissions of the employee in the development account, and it fetches the secrets associated with loadbalancers from Parameter Store. After chamber populates the environment, it runs the NGINX server.
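Conceptually, chamber’s exec subcommand fetches a service’s parameters, maps them into environment variables, and then replaces itself with the target program. A minimal Python sketch of that idea follows; the upper-casing rule here is an assumption for illustration, not chamber’s documented behavior:

```python
import os

def env_from_parameters(params):
    """Merge Parameter Store entries such as {"/loadbalancers/ssl-cert": "..."}
    into a copy of the current environment, upper-casing the final path segment."""
    env = dict(os.environ)
    for name, value in params.items():
        key = name.rsplit("/", 1)[-1].upper().replace("-", "_")
        env[key] = value
    return env

def exec_with_secrets(cmd, params):
    """Replace the current process with cmd so it inherits the secret-laden environment."""
    os.execvpe(cmd[0], cmd, env_from_parameters(params))
```

Replacing the process with exec (rather than spawning a child) is what lets chamber act as a thin entry point while the real server keeps PID-level control.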

Running chamber in production

Chamber is packaged inside our Docker containers as a binary and is the entry point of the container. Chamber passes signals to the program it executes in order to allow the program to gracefully handle them.

Here’s a diff of what was required to make our main website chamber-ready.

-ENTRYPOINT ["node", "server/boot.js"] 
+ENTRYPOINT ["chamber", "exec", "app", "--", "node", "server/boot.js"]

Non-containerized programs can also use chamber to populate the environment before creating configuration files from templates, running daemons, and so on.


Auditing Parameter Store

All access to Parameter Store is logged with AWS CloudTrail. This makes keeping a full audit trail for all parameters simple and inexpensive. It also makes building custom alerting and audit logging straightforward.

{
  "eventTime": "2017-08-02T18:54:06Z",
  "eventSource": "",
  "eventName": "GetParameters",
  "awsRegion": "us-west-2",
  "sourceIPAddress": "",
  "userAgent": "aws-sdk-go/1.8.1 (go1.8.3; linux; amd64)",
  "requestParameters": {
    "withDecryption": true,
    "names": [
      "REDACTED"
    ]
  },
  "responseElements": null,
  "requestID": "88888888-4444-4444-4444-121212121212",
  "eventID": "88888888-4444-4444-4444-121212121212",
  "readOnly": true
}

CloudTrail makes it possible to determine exactly which secrets are used, and to discover unused secrets or unauthorized access to secrets.
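For example, given parsed CloudTrail records, finding secrets that were never fetched takes only a few lines of Python. This is a sketch; the field names follow the GetParameters event shown above:

```python
def secrets_accessed(records):
    """Collect every parameter name fetched via GetParameters."""
    used = set()
    for record in records:
        if record.get("eventName") == "GetParameters":
            used.update(record.get("requestParameters", {}).get("names", []))
    return used

def unused_secrets(all_names, records):
    """Parameters that exist in Parameter Store but never appear in the audit trail."""
    return set(all_names) - secrets_accessed(records)
```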

AWS logs all Parameter Store access for free as CloudTrail management events. Most security information and event management (SIEM) solutions can be configured to watch and read data from S3.


Conclusion

Using Parameter Store and IAM, Segment was able to build a small tool that provides all of the properties most important in a secrets management system:

  • Protection of secrets at rest with strong encryption.
  • Strong access control policies.
  • Audit logs of authentication and access history.
  • A great developer experience.

Secrets management is very challenging to get right. Many products have been built to manage secrets, but none fit the use cases needed by Segment better than Parameter Store.

About the Author

Evan Johnson works on security at Segment. Segment is the infrastructure for customer data. Businesses use Segment’s API to unlock 200+ tools for every team across their organization. With Segment, developers can stop building tedious and expensive one-off data integrations, turning on their favorite apps right from the Segment dashboard.



AWS is not responsible for the content or accuracy of this post. The content and opinions in this blog are solely those of the third party author.