AWS Partner Network (APN) Blog

How to Secure Enterprise Cloud Environments with AWS and HashiCorp

By Kevin Cochran, Senior Solutions Engineer at HashiCorp


Securing applications can be a tricky topic: security isn’t always top of mind for developers, since it can slow down software releases.

HashiCorp Vault helps eliminate much of the security burden developers experience while trying to comply with security team requirements.

Vault was built to address the difficult task of passing sensitive data to users and applications without it being compromised. Within Vault, all transactions are token-based, which limits potential malicious activity and provides greater visibility into who and what is accessing that information.

Vault achieves this through a number of secrets engines and authentication methods that leverage trusted sources of identity, like AWS Identity and Access Management (IAM).

In this post, I will walk you through several of Vault’s features that can help you get started. You’ll see just how simple security can be!

HashiCorp is an AWS Partner Network (APN) Advanced Technology Partner with AWS Competencies in both DevOps and Containers.

Vault Auto Unseal with AWS Key Management Service

First, let’s cover how to unseal a Vault cluster. This can be simplified by protecting Vault’s master key with AWS Key Management Service (KMS) and enabling the auto unseal feature.

By default, when a Vault server is created or reset, it starts in a sealed state. This is important because when Vault is sealed it can’t encrypt or decrypt anything. Basically, it forgets its master key and locks out any potential threats from gaining access. This is a critical feature when you know or suspect your environment has been compromised.

Unsealing Vault is not a simple task—and for good reason. Vault uses Shamir’s secret sharing technique when a new server is initialized: the master key is split into key shards. To unseal Vault with this technique, you must provide the threshold number of key shards, as determined by the team that created the Vault server.

Should you need to unseal Vault (either from a manual seal or a restart), getting the requisite number of keys may take longer than your service level agreement (SLA) can support.

With auto unseal, Vault reaches out to KMS to decrypt its master key rather than reconstructing it from key shards. This means that to unseal Vault, all you need to do is restart the Vault service. You can still manually seal Vault in the case of a security issue, but unsealing can be done safely, securely, and easily.
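
For reference, manually sealing the cluster is a single command, issued with a token that has sufficient privileges:

vault operator seal

Once the issue has been addressed, simply restart the service and Vault unseals itself through KMS.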

Setting Up Auto Unseal

Setting up auto unseal with KMS takes only a few minutes, and the configuration is very simple. To get started, make sure your server’s environment is set up with your AWS credentials.
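
For example, if you’re providing credentials through environment variables, something like this should be in place before the Vault service starts (the values below are placeholders; an instance profile attached to the server works just as well):

export AWS_ACCESS_KEY_ID="AKIAXXXXXXXXXXXXXXXX"
export AWS_SECRET_ACCESS_KEY="XXXXXXXXXXXXXXXXXXXX"
export AWS_REGION="us-west-2"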

First, we need to add a stanza to Vault’s configuration file. Add the following lines:

seal "awskms" {
    region = "AWS_REGION"
    kms_key_id = "AWS_KMS_KEY_ID"
}

Next, issue a service restart command, such as:

service vault restart

If you’re starting with a brand new instance of Vault, you can go ahead and initialize Vault now by issuing the following command:

vault operator init -key-shares=1 -key-threshold=1

However, if you’re migrating your existing master key to KMS, you’ll need a couple more steps. Moving the key to KMS takes place during the unseal process by adding a -migrate flag:

vault operator unseal -migrate UNSEAL_KEY_1
...
vault operator unseal -migrate UNSEAL_KEY_N

Your master key is now protected by KMS. Since KMS is considered a trusted source, we no longer need to use key shares. However, we still need to rekey Vault to have a single key.

We’ll first need to initialize a rekey and reduce the key shares and key threshold each to 1:

vault operator rekey -init -target=recovery -key-shares=1 -key-threshold=1

Vault provides you with a nonce token, which we’ll need for the next step. Now, we need to complete the process by using our nonce token with each of our original unseal keys:

vault operator rekey -target=recovery -key-shares=1 -key-threshold=1 -nonce=NONCE_TOKEN UNSEAL_KEY_1
...
vault operator rekey -target=recovery -key-shares=1 -key-threshold=1 -nonce=NONCE_TOKEN UNSEAL_KEY_N

That’s all there is to it! You can test it by restarting your Vault service and checking the status:

service vault restart

Then:

vault status

Your Vault should be automatically unsealed:

Key                      Value
---                      -----
Recovery Seal Type       shamir
Initialized              true
Sealed                   false
Total Recovery Shares    1
Threshold                1
Version                  1.1.0
Cluster Name             vault-cluster-e4e06553
Cluster ID               29314997-4388-f66d-4b5a-3ac892504ee9
HA Enabled               false

Now that Vault can be safely sealed or unsealed, you’re ready to use your Vault instance for secrets management.

Dynamic Database Credentials

Databases are where we typically find the most sensitive data of any organization.

It would make sense to take extra precautions with database access, but in the past this access was managed at a local level on a username and password basis. In the cloud, we need to manage credentials for users and applications at a much greater scale.

Vault allows you to dynamically create database credentials, which opens up a whole world of possibilities. For instance, your application may get a 24-hour lease on database credentials, and upon expiration, have a new set of credentials generated. Or you may want to generate short-lived credentials with read-only database permissions through a self-service portal.

These credentials are removed from the database upon expiry, meaning Vault manages the clean-up and reduces the burden of password rotations. In addition, you can ensure that each instance of an application has its own unique credentials for provenance.

Setup is actually quite simple. In this example, we’re using MySQL.

To get started, you’ll need a username and password for a database account that can create users. We’re going to use root, and the credentials Vault creates will carry whatever permissions we define in the role—in this example, read-only SELECT access.

The first thing we need to do is enable the database secrets engine:

vault secrets enable database

Next, we need to configure a database connection. Vault currently supports eight major database engines with multiple variants of each, and custom configurations.

For MySQL, a database configuration can be issued like this:

vault write database/config/mysqlvaultdb \
    plugin_name="mysql-database-plugin" \
    connection_url="{{username}}:{{password}}@tcp(mysql.example.com:3306)/" \
    allowed_roles="db-app-role" \
    username="root" \
    password="password"

In the command above, we create a configuration named mysqlvaultdb. The connection URL contains a reference to the username and password, and points to your MySQL instance.

We haven’t created any roles just yet, but we’re letting Vault know that only the db-app-role role is allowed to use this connection.

Finally, we provide the username and password to be used in the connection string which Vault uses to interact with the database.
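
Since Vault now manages this connection, you can optionally have Vault rotate the root password so that only Vault knows it. Be aware that the original password will no longer work afterward:

vault write -f database/rotate-root/mysqlvaultdb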

Next, we need to create a role which executes a CREATE USER statement in MySQL:

vault write database/roles/db-app-role \
    db_name=mysqlvaultdb \
    creation_statements="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}';GRANT SELECT ON *.* TO '{{name}}'@'%';" \
    default_ttl="1h" \
    max_ttl="24h"

This role is named db-app-role—the same name we referenced in allowed_roles in the connection configuration. The db_name is the Vault connection we created just prior: mysqlvaultdb.

The creation_statements parameter is where the action takes place, and it gives the Vault administrator total control over what Vault is allowed to do within the database.

Here, Vault internally creates a username and a password, interpolates creation_statements to plug in the username ({{name}}) and password ({{password}}), then passes the final SQL statement to the connection to be executed on the server.

Upon success, Vault returns the username and password, valid for default_ttl—in this case one hour.

Now, let’s put this to use by creating a policy and a user. We’ll create a file called getdbcreds.hcl and put the following contents in it:

path "database/creds/*" {
    capabilities = ["read"]
}

Then, we need to create the policy in Vault. We’ll call the policy getdbcreds:

vault policy write getdbcreds getdbcreds.hcl

We’re going to create a user with a simple username/password scheme. This authentication method needs to be enabled first:

vault auth enable userpass

Finally, we’ll create our user and assign the policy we just created:

vault write auth/userpass/users/james \
    password="superpass" \
    policies="getdbcreds"

To test that everything works, simply log in to Vault as our new user:

vault login -method=userpass username=james

Enter the password, then issue the following command:

vault read database/creds/db-app-role

You’ll see something similar to this:

Key                Value
---                -----
lease_id           database/creds/db-app-role/iaIWuTCjE4KszxSHPFbpS6V7
lease_duration     1h
lease_renewable    true
password           A1a-ClBMDtllDELhA47d
username           v-userpass-j-app-role-o1msTfFl1e

Our user james is now able to log in to the MySQL database using these credentials. After one hour, those credentials will expire and he’ll need to request a new set of credentials.
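
If an application needs the credentials for longer than the default TTL, it can renew the lease (up to max_ttl) instead of requesting a new set. A sketch using the lease_id returned above; in an emergency, you can also revoke every credential issued under the role:

vault lease renew database/creds/db-app-role/iaIWuTCjE4KszxSHPFbpS6V7

# Revoke all outstanding credentials for this role
vault lease revoke -prefix database/creds/db-app-role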

Amazon EC2 Authentication

Manually passing around secrets and tokens to applications and servers is a security hazard. Once they get loose, it’s hard to reel them all back in.

At HashiCorp, we call this challenge secrets sprawl. Vault provides several mechanisms by which users and applications can authenticate without passing secrets, keys, or tokens. One way is by using Amazon Elastic Compute Cloud (Amazon EC2) authentication. Though the IAM authentication method is preferred, Amazon EC2 allows us to use existing resources.

The Amazon EC2 authentication method allows Vault to identify an instance based on any number of attributes.

For our example, we’re just going to be using the Amazon Machine Image (AMI) ID to validate that the instance can log in. If the attributes don’t match, login is denied. The full set of attributes can be found in our AWS Auth API documentation.

We’ll need to start by enabling the AWS authentication method in Vault:

vault auth enable aws

Vault will also need to communicate with our AWS account, so we need to provide our access credentials:

vault write auth/aws/config/client \
    secret_key=XXXXXX \
    access_key=XXXXXX

Our Amazon EC2 instance will be requesting access to our MySQL database, so we can use the policy we created in our previous example: getdbcreds.

We want a role that authenticates an Amazon EC2 instance based on its AMI ID, grants a session lasting one hour, and has the ability to get database credentials:

vault write \
    auth/aws/role/app-db-role \
    auth_type=ec2 \
    policies=getdbcreds \
    max_ttl=1h \
    disallow_reauthentication=false \
    bound_ami_id=ami-0475f60cdd8fd2120

Vault is now ready to authenticate Amazon EC2 instances. To validate, we need an instance that is using the AMI we specified. We’ll log in to that system and use the HTTP API to communicate with our Vault server.

Once logged in, we’ll want to get the PKCS7 signature from the instance’s metadata:

pkcs7=$(curl -s \
  "http://169.254.169.254/latest/dynamic/instance-identity/pkcs7" | tr -d '\n')

Along with the signature, we need to tell Vault what role we are requesting access to:

data=$(cat <<EOF
{
  "role": "app-db-role",
  "pkcs7": "$pkcs7"
}
EOF
)

Now, we’re ready to log in to Vault:

curl --request POST \
  --data "$data" \
  "http://vault.example.com:8200/v1/auth/aws/login"

Vault responds with a JSON payload, which looks something like this:

{
  "request_id": "b30f4111-95b7-4481-e98f-f7a86ba9c0b9",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": null,
  "wrap_info": null,
  "warnings": [
    "TTL of \"768h0m0s\" exceeded the effective max_ttl of \"1h0m0s\"; TTL value is capped accordingly"
  ],
  "auth": {
    "client_token": "s.FErTfpbFlkfDX3pUjkgldXT8",
    "accessor": "22pd7MJLBTMK2gvRmw7tM3Ku",
    "policies": [
      "default",
      "getdbcreds"
    ],
    "token_policies": [
      "default",
      "getdbcreds"
    ],
    "metadata": {
      "account_id": "753646501470",
      "ami_id": "ami-0475f60cdd8fd2120",
      "instance_id": "i-0e50b4b3e6fce4853",
      "nonce": "03a6eb04-931d-d602-8bb2-9065134144d8",
      "region": "us-west-2",
      "role": "app-db-role",
      "role_tag_max_ttl": "0s"
    },
    "lease_duration": 3600,
    "renewable": true,
    "entity_id": "f02d29a9-f72c-fa34-2bbf-31baeb8c5fee",
    "token_type": "service",
    "orphan": true
  }
}

The value we’re most interested in is the client_token, which tells us we authenticated successfully and can now communicate with Vault using the specified role.

We can now simply pass that token as a header value and get our database credentials:

curl \
    --header "X-Vault-Token: CLIENT_TOKEN" \
    http://vault.example.com:8200/v1/database/creds/db-app-role

This returns the following:

{
  "request_id": "1aac4536-97e1-8121-17d1-656ab953a963",
  "lease_id": "database/creds/db-app-role/wPefgAXF5rZjiRJfdC2S7fik",
  "renewable": true,
  "lease_duration": 3600,
  "data": {
    "password": "A1a-kENCugtGPxPDq4tn",
    "username": "v-aws-app-role-Clp0KoQNv5TdOvXzx"
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}

Your application can now use dynamic database credentials through Amazon EC2 authentication. By running the Vault agent on the instance itself, the instance can stay logged in without needing to reauthenticate.
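
As a sketch, a minimal Vault agent configuration for this pattern might look like the following (the Vault address, role name, and sink path are illustrative):

vault {
  address = "http://vault.example.com:8200"
}

auto_auth {
  # Authenticate using the aws auth method with the EC2 instance identity signature
  method "aws" {
    config = {
      type = "ec2"
      role = "app-db-role"
    }
  }

  # Write the resulting client token to a file the application can read
  sink "file" {
    config = {
      path = "/tmp/vault-token"
    }
  }
}

Start the agent with vault agent -config=agent.hcl, and it logs in with the Amazon EC2 method and keeps a valid token written to the sink for your application to use.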

You can easily test that no other instances can log in by spinning up an Amazon EC2 instance with a different AMI ID and running through the same commands.

Encryption as a Service

Encryption is complicated. We need it for all kinds of data—both at rest and in transit. Security teams understand how to put it all together, but within the developer community encryption remains a highly specialized skill.

Unfortunately, some organizations fully expect developers to responsibly handle the encryption and decryption of sensitive data.

Yes, there are encryption libraries available, but they’re not as easy to use as most developers would like. They are general-purpose libraries that must support a multitude of use cases, and developers still need to know which encryption algorithm is right for their project.

Vault’s transit engine solves this dilemma by providing an API for developers to use for encrypting and decrypting data. This makes encryption a part of their existing workflow.

The developer simply passes in the data, and Vault returns the ciphertext. That text can be stored in place of the original data, and should your database ever be compromised, the attacker will only see useless, encrypted text.

As you might know by now, enabling the transit engine is quite simple:

vault secrets enable transit

Creating a key is just as simple:

vault write -f transit/keys/customer-key

Next, we need to add a policy which allows clients to encrypt and decrypt with our new key. The encrypt and decrypt endpoints are write operations, so they require the update capability:

vault policy write "custkey" -<<EOF
path "transit/encrypt/customer-key" {
    capabilities = ["update"]
}

path "transit/decrypt/customer-key" {
    capabilities = ["update"]
}
EOF

Now, assign this policy to any entity that you’d like to have access to this key, such as an app role, an IAM role, or an instance.
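
For example, to give the Amazon EC2 role from the previous section access to this key, you could rewrite that role with custkey added to its policy list (a sketch; writing a role replaces its previous settings, so the original parameters are repeated):

vault write \
    auth/aws/role/app-db-role \
    auth_type=ec2 \
    policies=getdbcreds,custkey \
    max_ttl=1h \
    disallow_reauthentication=false \
    bound_ami_id=ami-0475f60cdd8fd2120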

Once your resource has authenticated, use the token with the API to encrypt/decrypt:

curl -s \
    --header "X-Vault-Token: $CLIENT_TOKEN" \
    --request POST \
    --data '{ "plaintext": "SGFzaGlDb3JwIFZhdWx0IFJvY2tzIQ==" }' \
    http://vault.example.com:8200/v1/transit/encrypt/customer-key

In return, we receive a payload with a ciphertext value. The ciphertext contains three fields delimited by colons (:). The first field is the word vault, which makes it easy for developers to recognize encrypted data. The second field—currently v1—is the version of the key used for encryption, and the third field is the encrypted data itself.

The entire payload looks like this:

{
  "request_id": "f87aab69-5b96-4311-358e-d157cc5a4e77",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": {
    "ciphertext": "vault:v1:ctwlaZ4QI+hzwZJwMsQo0zJzGNfhhLoCoQh4PV1lPO0QhgxLhNZfXeM4KvJj0CKq9gM="
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}
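
Because the key version travels with the ciphertext, you can rotate the key without breaking existing data, then rewrap older ciphertext under the newest key version. A sketch (note that the rewrap endpoint would need its own entry in your policy):

vault write -f transit/keys/customer-key/rotate

curl -s \
    --header "X-Vault-Token: $CLIENT_TOKEN" \
    --request POST \
    --data '{ "ciphertext": "vault:v1:ctwlaZ4QI+hzwZJwMsQo0zJzGNfhhLoCoQh4PV1lPO0QhgxLhNZfXeM4KvJj0CKq9gM=" }' \
    http://vault.example.com:8200/v1/transit/rewrap/customer-key

The response contains the same data re-encrypted under the new key version, with a vault:v2: prefix.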

Decrypting the data follows the same process. We just pass over the entire ciphertext we first received from Vault’s transit engine:

curl -s \
    --header "X-Vault-Token: $CLIENT_TOKEN" \
    --request POST \
    --data '{ "ciphertext": "vault:v1:ctwlaZ4QI+hzwZJwMsQo0zJzGNfhhLoCoQh4PV1lPO0QhgxLhNZfXeM4KvJj0CKq9gM=" }' \
    http://vault.example.com:8200/v1/transit/decrypt/customer-key

Here, we get back the plaintext we originally sent over:

{
  "request_id": "eee22c0d-5674-2171-9df3-398d3d231f78",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": {
    "plaintext": "SGFzaGlDb3JwIFZhdWx0IFJvY2tzIQ=="
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}

As you may have noticed, our plaintext is actually a base64-encoded bit of text.
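
That’s by design: the transit engine accepts and returns base64 so it can handle binary data as well as plain strings. Encoding and decoding on the client side is a one-liner:

echo -n "HashiCorp Vault Rocks!" | base64
# SGFzaGlDb3JwIFZhdWx0IFJvY2tzIQ==

echo "SGFzaGlDb3JwIFZhdWx0IFJvY2tzIQ==" | base64 --decode
# HashiCorp Vault Rocks!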

Summary

HashiCorp Vault is specifically designed for public and private clouds operating in low- or zero-trust environments. In this post, we’ve covered a few of the many features Vault offers for IT organizations.

For enterprise customers, Vault offers a host of robust features meeting the requirements of governance and compliance, such as HSM integration, FIPS 140-2 compliance, disaster recovery, replication (performance, cross-region, and filter sets), namespaces, and more.

If you’ve never used Vault, download our open source version and run through the tutorials on learn.hashicorp.com.

The content and opinions in this blog are those of the third party author and AWS is not responsible for the content or accuracy of this post.


AWS Competency Partners: The Next Smart

HashiCorp is an AWS Competency Partner, and if you want to be successful in today’s complex IT environment and remain that way tomorrow and into the future, teaming up with an AWS Competency Partner is The Next Smart.




HashiCorp – APN Partner Spotlight

HashiCorp is an AWS DevOps Competency Partner. Enterprise versions of products like Vault enhance the open source tools with features that promote collaboration, operations, governance, and multi-data center functionality.

Contact HashiCorp | Solution Overview

*Already worked with HashiCorp? Rate this Partner

*To review an APN Partner, you must be an AWS customer that has worked with them directly on a project.