AWS Partner Network (APN) Blog
How to Secure Enterprise Cloud Environments with AWS and HashiCorp
By Kevin Cochran, Senior Solutions Engineer at HashiCorp
Securing applications can be tricky. Security isn’t always top of mind for developers, in part because it can slow down software releases.
HashiCorp Vault helps eliminate much of the security burden developers experience while trying to comply with security team requirements.
Vault was built to address the difficult task of passing sensitive data to users and applications without it being compromised. Within Vault, all transactions are token-based, which limits potential malicious activity and provides greater visibility into who and what is accessing that information.
Vault achieves this through a number of secrets engines and authentication methods that leverage trusted sources of identity, like AWS Identity and Access Management (IAM).
In this post, I will walk you through several of Vault’s features that can help you get started. You’ll see just how simple security can be!
HashiCorp is an AWS Partner Network (APN) Advanced Technology Partner with AWS Competencies in both DevOps and Containers.
Vault Auto Unseal with AWS Key Management Service
First, let’s cover how to unseal a Vault cluster. This can be simplified by storing Vault’s master key in AWS Key Management Service (KMS) and enabling the auto unseal feature.
By default, when a Vault server is created or reset it initiates in a sealed state. This is important because when Vault is sealed it can’t encrypt or decrypt anything. Basically, it forgets what its master key is and locks out any potential threats from gaining access. This is a critical feature when you know or suspect your environment has been compromised.
Unsealing Vault is deliberately not simple, and for good reason. Vault uses Shamir’s secret-sharing technique when a new server is initialized. To unseal Vault with this technique, you must provide the minimum number of keys, as determined by the team that created the Vault server.
Should you need to unseal Vault (either from a manual seal or a restart), getting the requisite number of keys may take longer than your service level agreement (SLA) can support.
With auto unseal, Vault reaches out to KMS to retrieve its master key rather than reconstructing it from key shards. This means that to unseal Vault, all you need to do is restart the Vault service. You can still manually seal Vault in the case of a security issue, but unsealing can be done safely, securely, and easily.
Setting Up Auto Unseal
Setting up auto unseal with KMS takes only a few minutes, and the configuration is very simple. To get started, make sure your server’s environment is set up with your AWS credentials.
First, we need to add a stanza to Vault’s configuration file. Add the following lines:
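A minimal `awskms` seal stanza looks like this; the region and KMS key ID below are placeholders for your own values:

```hcl
# Tells Vault to retrieve its master key from AWS KMS for auto unseal.
seal "awskms" {
  region     = "us-east-1"                             # region of your KMS key
  kms_key_id = "12345678-abcd-1234-abcd-123456789012"  # your KMS key ID
}
```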
Next, issue a service restart command, such as:
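On a systemd-based Linux distribution, for example:

```shell
sudo systemctl restart vault
```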
If you’re starting with a brand new instance of Vault, you can go ahead and initialize Vault now by issuing the following command:
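```shell
vault operator init
```

With the `awskms` seal in place, Vault initializes already unsealed and prints recovery keys instead of unseal keys.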
However, if you’re migrating your master key to KMS, we’ll need a couple more steps. Moving the key to KMS takes place during the unseal process by adding a `-migrate` flag.
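Run the unseal command with the flag once for each unseal key until the threshold is met; Vault prompts for a key each time:

```shell
vault operator unseal -migrate
```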
Your master key is now stored in KMS. Since KMS is considered a trusted source, we no longer need to use key shares. However, we still need to rekey Vault to have a single key.
We’ll first need to initialize a rekey and reduce the key shares and key threshold each to 1:
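A command along these lines starts the rekey (exact flags may vary slightly by Vault version):

```shell
vault operator rekey -init -key-shares=1 -key-threshold=1
```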
Vault provides you with a nonce token, which we’ll need for the next step. Now, we need to complete the process by using our nonce token with each of our original unseal keys:
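The nonce and keys below are placeholders; repeat the command once for each original unseal key until the rekey completes:

```shell
vault operator rekey -nonce=<nonce> <original-unseal-key>
```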
That’s all there is to it! You can test it by restarting your Vault service and checking the status:
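```shell
sudo systemctl restart vault
```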
Then:
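```shell
vault status
```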
Your Vault should be automatically unsealed:
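The output should show `Sealed` reporting `false`, something like this (version and share counts will vary):

```
Key                      Value
---                      -----
Recovery Seal Type       shamir
Initialized              true
Sealed                   false
Total Recovery Shares    1
Threshold                1
```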
Now that Vault can be safely sealed or unsealed, you’re ready to use your Vault instance for secrets management.
Dynamic Database Credentials
Databases are where we typically find the most sensitive data of any organization.
It would make sense to take extra precautions with database access, but in the past this access was managed at a local level on a username and password basis. In the cloud, we need to manage credentials for users and applications at a much greater scale.
Vault allows you to dynamically create database credentials, which opens up a whole world of possibilities. For instance, your application may get a 24-hour lease on database credentials, and upon expiration, have a new set of credentials generated. Or you may want to generate short-lived credentials with read-only database permissions through a self-service portal.
These credentials are removed from the database upon expiry, meaning Vault manages the clean-up and reduces the burden of password rotations. In addition, you can ensure that each instance of an application has its own unique credentials for provenance.
Setup is actually quite simple. In this example, we’re using MySQL.
To get started, you’ll need a username and password with the ability to create users. We’re going to use root, and the credentials we’ll create will have the equivalent of root access.
The first thing we need to do is enable the database secrets engine:
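```shell
vault secrets enable database
```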
Next, we need to configure a database connection. Vault currently supports eight major database engines with multiple variants of each, and custom configurations.
For MySQL, a database configuration can be issued like this:
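Here’s a sketch of that command; the host, port, and root password are placeholders for your own environment:

```shell
vault write database/config/mysqlvaultdb \
    plugin_name=mysql-database-plugin \
    connection_url="{{username}}:{{password}}@tcp(127.0.0.1:3306)/" \
    allowed_roles="db-app-role" \
    username="root" \
    password="<your-root-password>"
```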
In the command above, we create a configuration named `mysqlvaultdb`. The connection URL contains a reference to the username and password, and points to your MySQL instance. We haven’t created any roles just yet, but we’re letting Vault know that only the `db-app-role` role is allowed to use this connection. Finally, we provide the username and password to be used in the connection string, which Vault uses to interact with the database.
Next, we need to create a role which executes a `CREATE USER` statement in MySQL:
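Since we said these credentials will have the equivalent of root access, a role along these lines would work; the grant is illustrative, so scope it to what your application actually needs:

```shell
vault write database/roles/db-app-role \
    db_name=mysqlvaultdb \
    creation_statements="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}'; GRANT ALL PRIVILEGES ON *.* TO '{{name}}'@'%';" \
    default_ttl="1h" \
    max_ttl="24h"
```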
This role, which we named `db-app-role`, is the same name we referenced in `allowed_roles` in the connection configuration. The `db_name` is the Vault connection we created just prior: `mysqlvaultdb`.
The `creation_statements` parameter is where the action takes place, and it gives the Vault administrator total control over what Vault is allowed to do within the database. Here, Vault internally creates a username and a password, interpolates `creation_statements` to plug in the username (`{{name}}`) and password (`{{password}}`), then passes the final SQL statement to the connection to be executed on the server. Upon success, Vault returns the username and password, valid for the `default_ttl`, in this case one hour.
Now, let’s put this to use by creating a policy and a user. We’ll create a file called `getdbcreds.hcl` and put the following contents in it:
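A minimal policy granting read access to the role’s credentials endpoint:

```hcl
# Allow reading dynamic credentials for the db-app-role database role.
path "database/creds/db-app-role" {
  capabilities = ["read"]
}
```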
Then, we need to create the policy in Vault. We’ll call the policy `getdbcreds`:
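```shell
vault policy write getdbcreds getdbcreds.hcl
```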
We’re going to create a user with a simple username/password scheme. This authentication method needs to be enabled first:
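```shell
vault auth enable userpass
```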
Finally, we’ll create our user and assign the policy we just created:
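The password below is a placeholder; choose your own:

```shell
vault write auth/userpass/users/james \
    password="<a-strong-password>" \
    policies="getdbcreds"
```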
To test that everything works, simply login to Vault as our new user:
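```shell
vault login -method=userpass username=james
```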
Enter the password, then issue the following command:
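```shell
vault read database/creds/db-app-role
```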
You’ll see something similar to this:
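The actual lease ID, username, and password are generated by Vault:

```
Key                Value
---                -----
lease_id           database/creds/db-app-role/<lease-id>
lease_duration     1h
lease_renewable    true
password           <generated-password>
username           <generated-username>
```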
Our user james is now able to log in to the MySQL database using these credentials. After one hour, those credentials will expire and he’ll need to request a new set of credentials.
Amazon EC2 Authentication
Manually passing around secrets and tokens to applications and servers is a security hazard. Once they get loose, it’s hard to reel them all back in.
At HashiCorp, we call this challenge secrets sprawl. Vault provides several mechanisms by which users and applications can authenticate without passing secrets, keys, or tokens. One way is by using Amazon Elastic Compute Cloud (Amazon EC2) authentication. Though the IAM authentication method is preferred, Amazon EC2 allows us to use existing resources.
The Amazon EC2 authentication method allows Vault to identify an instance based on any number of attributes.
For our example, we’re just going to use the Amazon Machine Image (AMI) ID to validate that the instance can log in. If the attributes don’t match, login is denied. The full set of attributes can be found in our AWS Auth API documentation.
We’ll need to start by enabling the AWS authentication method in Vault:
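```shell
vault auth enable aws
```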
Vault will also need to communicate with our AWS account, so we need to provide our access credentials:
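The keys below are placeholders for IAM credentials with permission to describe EC2 instances:

```shell
vault write auth/aws/config/client \
    access_key="<AWS_ACCESS_KEY_ID>" \
    secret_key="<AWS_SECRET_ACCESS_KEY>"
```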
Our Amazon EC2 instance will be requesting access to our MySQL database, so we can use the policy we created in our previous example: `getdbcreds`.
We want a role which authenticates an Amazon EC2 instance based on its AMI ID, for a session that will last one hour, and has the ability to get database credentials:
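The role name `dbcreds-ec2-role` and the AMI ID below are placeholders; the role name is referenced again in the login steps:

```shell
vault write auth/aws/role/dbcreds-ec2-role \
    auth_type=ec2 \
    bound_ami_id="<your-ami-id>" \
    policies="getdbcreds" \
    ttl=1h
```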
Vault is now ready to authenticate Amazon EC2 instances. To validate, we need an instance which is using the AMI we specified. We’ll login to that system and use the HTTP API to communicate with our Vault server.
Once logged in, we’ll want to get the PKCS7 signature from the instance’s metadata:
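Assuming a shell on the instance with curl available, we can pull the signature from the instance metadata service and strip its newlines:

```shell
pkcs7=$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/pkcs7 | tr -d '\n')
```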
Along with the signature, we need to tell Vault what role we are requesting access to:
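One way is to build a small JSON payload; the role name matches the one we created above:

```shell
cat > payload.json <<EOF
{
  "role": "dbcreds-ec2-role",
  "pkcs7": "$pkcs7"
}
EOF
```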
Now, we’re ready to log in to Vault:
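Replace the Vault address below with your own server’s:

```shell
curl -s --request POST --data @payload.json \
    http://<vault-server>:8200/v1/auth/aws/login
```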
Vault responds with a JSON payload, which looks something like this:
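Abridged here to the fields that matter:

```json
{
  "auth": {
    "client_token": "<client-token>",
    "policies": ["default", "getdbcreds"],
    "lease_duration": 3600,
    "renewable": true
  }
}
```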
The value we’re most interested in is the `client_token`, which tells us we authenticated successfully and can now communicate with Vault using the specified role.
We can now simply pass that token as a header value and get our database credentials:
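```shell
curl -s --header "X-Vault-Token: <client-token>" \
    http://<vault-server>:8200/v1/database/creds/db-app-role
```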
Which returns the following:
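Again abridged, with placeholders where Vault generates values:

```json
{
  "lease_id": "database/creds/db-app-role/<lease-id>",
  "lease_duration": 3600,
  "renewable": true,
  "data": {
    "username": "<generated-username>",
    "password": "<generated-password>"
  }
}
```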
Your application can now obtain dynamic database credentials using Amazon EC2 authentication. By running the Vault agent on the instance itself, the instance can stay logged in without needing to reauthenticate.
You can easily test that no other instances can log in by spinning up an Amazon EC2 instance with a different AMI ID and running through the same commands.
Encryption as a Service
Encryption is complicated. We need it for all kinds of data, both at rest and in transit. Outside of application development, we have security teams who understand how to put it all together, but within the developer community encryption remains a highly specialized skill.
Unfortunately, some organizations fully expect developers to responsibly handle the encryption and decryption of sensitive data.
Yes, there are encryption libraries available, but they’re not as easy to use as most developers would like. They are, in fact, libraries and must support a multitude of use cases. Developers need to know which encryption algorithm should be used for their project.
Vault’s transit engine solves this dilemma by providing an API for developers to use for encrypting and decrypting data. This makes encryption a part of their existing workflow.
The developer simply passes in the data, and Vault returns the ciphertext. That text can be stored in place of the original data, and should your database ever be compromised, the attacker will only see useless, encrypted text.
As you might know by now, enabling the transit engine is quite simple:
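```shell
vault secrets enable transit
```

From there, a developer can create a named encryption key and encrypt data against it. The key name below is a placeholder, and note that the transit API expects base64-encoded plaintext:

```shell
vault write -f transit/keys/my-app-key
vault write transit/encrypt/my-app-key plaintext=$(echo -n "sensitive data" | base64)
```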