Simplify your AWS SFTP Structure with chroot and logical directories
UPDATE: The AWS CloudFormation template link provided in the “Try it for yourself” section was updated on 11/5/2020. Correspondingly, the blog post mentioned in the opening paragraph and shortly after the CloudFormation template as “my last blog post” has also been updated.
In my last blog post, I showed how you can easily set up AWS Secrets Manager as an identity provider for AWS Transfer for SFTP (AWS SFTP) and enable password authentication. This post discusses how you can leverage that identity provider setup to pass configuration information for a virtual namespace for your users, using a new feature called Logical directories.
Bucket visibility
Let us start by looking at the current setup. When a user logs into their AWS SFTP endpoint, they are dropped into their HomeDirectory. As an example, for the user ‘Jess’ this might be: /mybucket/home/jess.

However, using SFTP client commands, Jess can see her current path within the Amazon S3 hierarchy and can use cd .. to navigate up that folder structure.

Even if the user Jess is scoped down to access only /mybucket/home/${transfer:UserName}, some clients still allow the user to traverse up a folder to /mybucket/home. Only logging out and back in again will land them back in their HomeDirectory.
Let us look at a new way to prevent this from happening.
Logical directories
Logical directories give you the ability to construct a virtual directory structure that your users can navigate. Your end users get a better experience, while you gain the flexibility to keep your absolute S3 bucket paths (including the bucket name) hidden from them.
There are a couple of scenarios where this is applicable. You could choose to reset the user’s root directory to a desired location within your Amazon S3 bucket hierarchy; this is known as a “chroot” operation. In this mode, no matter how hard the user tries to cd .., their root directory remains the one you set for them.
In the second scenario, you can create your own directory structure across buckets and prefixes. This is useful if you have a workflow that expects a specific directory structure that you are unable to replicate through bucket prefixes, or if you want to link to multiple non-contiguous locations within S3. You can think of this a bit like creating a symbolic link in a Linux file system, where a directory path references a different location in the file system.
Both of these modes work by providing a list of Entry and Target pairings.
The first thing we have to do is turn on the feature for our user. To do this we need to return a new parameter as part of our user configuration:
Key: HomeDirectoryType
Value: LOGICAL
Chroot
Next, we need to construct our directory structure. In the case of chroot, we have a single pairing: the root folder is the Entry, and the location in our bucket structure we want to map it to is the Target:
[{"Entry": "/", "Target": "/mybucket/jess"}]
This can be an absolute path, as shown above, or you can use a dynamic substitution for the username with ${Transfer:UserName}:

[{"Entry": "/", "Target": "/mybucket/${Transfer:UserName}"}]
Back on the command line, the result is that we are locked into our root folder and cannot navigate higher up the hierarchy.
Virtual structure
If you want to create a complete virtual directory structure, you can supply multiple pairings, with targets anywhere in your S3 buckets (including across buckets), as long as the user’s S3 role mapping has permissions to access them.
[
{"Entry": "/pics", "Target": "/bucket1/pics"},
{"Entry": "/doc", "Target": "/bucket1/anotherpath/docs"},
{"Entry": "/reporting", "Target": "/reportingbucket/Q1"},
{"Entry": "/anotherpath/subpath/financials", "Target": "/reportingbucket/financials"}
]
Using this virtual structure example, when I log into AWS SFTP as this user, I land in the root directory with subdirectories of /pics, /doc, /reporting, and /anotherpath/subpath/financials.
Try it for yourself
OK, let’s give it a go.
Download and deploy this AWS CloudFormation template.
For full details on what is being deployed in this template, see my last blog post and follow the instructions to set up a user in AWS Secrets Manager. Remember, though: don’t include the HomeDirectory parameter.
If you choose to let the template deploy the AWS SFTP server, you can skip those parts of the instructions too.
We now need to add a new Key Value pair to our user in Secrets Manager:
Key: HomeDirectoryDetails
Value: Your Entry/Target pairings
Here’s a more specific example. You’ll need to change this to match your bucket structure:

Key: HomeDirectoryDetails
Value: [{"Entry": "/", "Target": "/mybucket/${Transfer:UserName}"}]
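If you’d rather make this change with an SDK than in the console, here is a minimal sketch using boto3. It assumes the user configuration is stored as a JSON map of key/value pairs, and the secret name (SFTP/jess here) is a placeholder that should match whatever naming convention your identity provider Lambda looks up.

import json
import boto3

secrets = boto3.client("secretsmanager")
secret_id = "SFTP/jess"  # placeholder; use the naming convention your Lambda expects

# Read the existing user configuration, stored as a JSON map of key/value pairs.
config = json.loads(secrets.get_secret_value(SecretId=secret_id)["SecretString"])

# Add (or overwrite) the HomeDirectoryDetails key with the Entry/Target pairings.
config["HomeDirectoryDetails"] = json.dumps(
    [{"Entry": "/", "Target": "/mybucket/${Transfer:UserName}"}]
)

# Write the updated configuration back as a new secret version.
secrets.put_secret_value(SecretId=secret_id, SecretString=json.dumps(config))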
Remember when I mentioned the HomeDirectoryType=LOGICAL setting? This is automatically included in the response by the AWS Lambda integration when it finds the HomeDirectoryDetails key in the user config.
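As a rough illustration (not the exact code deployed by the template), the relevant logic inside the identity provider Lambda might look something like this sketch, where user_config is assumed to be the key/value map read from Secrets Manager:

def build_transfer_response(user_config):
    # Sketch only: start from the IAM role stored for the user.
    response = {"Role": user_config["Role"]}

    if "HomeDirectoryDetails" in user_config:
        # Logical directories: pass the Entry/Target pairings through and
        # switch the directory type to LOGICAL automatically.
        response["HomeDirectoryType"] = "LOGICAL"
        response["HomeDirectoryDetails"] = user_config["HomeDirectoryDetails"]
    elif "HomeDirectory" in user_config:
        # Otherwise fall back to a plain home directory path.
        response["HomeDirectory"] = user_config["HomeDirectory"]

    return response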
As these settings are applied dynamically on every user login, you can get creative by updating these mappings on demand, either in the Lambda function or by editing the mappings stored for the user. How about dynamically mapping /reporting to a prefix for the current month, as an example?
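Here is a minimal sketch of that idea, assuming the mapping is computed inside the Lambda function and that the reporting bucket uses month-named prefixes (both assumptions made for illustration, not something the template does for you):

from datetime import datetime, timezone
import json

def monthly_reporting_mapping():
    # Hypothetical: map /reporting to a prefix such as /reportingbucket/2020-11,
    # recomputed from the current month on every login.
    month_prefix = datetime.now(timezone.utc).strftime("%Y-%m")
    return json.dumps([
        {"Entry": "/reporting", "Target": f"/reportingbucket/{month_prefix}"},
    ])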
Before you build your mappings, there are a few rules to understand:
- When Entry is “/” you can only have one mapping, as overlapping paths are not allowed.
- Targets can use the ${Transfer:UserName} variable (if the bucket path has been parameterized based on the username).
- Targets can be paths in different buckets, but you’ll need to make sure the mapped IAM role (the Role parameter in the response) provides access to those buckets; see the sketch after this list.
- You no longer need to specify the HomeDirectory parameter, because this value is implied by the Entry/Target pairs when using the LOGICAL home directory type option.
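To illustrate the cross-bucket rule, here is a rough sketch (not the policy deployed by the CloudFormation template) of an S3 access policy for the mapped role that would cover both buckets from the virtual structure example above; tighten the actions and resources to match your own requirements.

import json

# Sketch of an S3 access policy spanning both example buckets. The bucket
# names come from the virtual structure example; everything else is illustrative.
s3_access_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
            "Resource": [
                "arn:aws:s3:::bucket1",
                "arn:aws:s3:::reportingbucket",
            ],
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": [
                "arn:aws:s3:::bucket1/*",
                "arn:aws:s3:::reportingbucket/*",
            ],
        },
    ],
}
print(json.dumps(s3_access_policy, indent=2))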
There you have it, that’s the new Logical directories feature that gives you “chroot”- and “symlink”-like capabilities! It should reduce the need to build complex scope-down policies or modify existing client-side scripts, and it gives you the privacy you need by keeping your S3 bucket hierarchy hidden from your end users. This means more and more of your SFTP workflows can be migrated seamlessly to AWS.
Don’t forget to clean up any resources you might have deployed in trying this out!
To get even more information on Logical directories, check out the recent AWS Storage Blog post: ‘Using AWS SFTP Logical Directories to Build a Simple Data Distribution Service.’