AWS Compute Blog

Providing durable storage for AWS Outposts servers using AWS Snowcone

This blog post is written by Rob Goodwin, Specialist Solutions Architect, Secure Hybrid Edge. 

With the announcement of AWS Outposts servers, you now have a streamlined way to deploy AWS Cloud infrastructure to regional offices using 1 rack unit (1U) or 2 rack unit (2U) Outposts servers in locations where the 42U AWS Outposts rack isn't an economical or physical fit.

This post discusses how you can use AWS Snowcone to provide persistent storage for AWS Outposts servers, so that data survives Amazon Elastic Compute Cloud (Amazon EC2) instance termination or a failure of the Outposts server itself. In this post, we show:

  1. How to leverage the built-in features of Snowcone to provide persistent storage to an EC2 instance.
  2. How to optionally replicate the data back to an AWS Region with AWS DataSync. Replicating data back to an AWS Region with DataSync provides a seamless way to copy data offsite to improve resiliency, and it lets you leverage Regional AWS services, such as machine learning (ML) training.

Background

Outposts servers ship with internal NVMe SSD instance storage. Just as in the Regions, instance storage is allocated directly to the EC2 instance and tied to the lifecycle of the instance. This means that if the EC2 instance is terminated, the data associated with the instance is deleted. If you want data to persist after the instance is terminated, you must use operating system (OS) functions to back it up to other media, or save your data to external network attached storage or a file system.

Mounting an external file system to an EC2 instance is not a new concept in AWS. Using Amazon Elastic File System (Amazon EFS), you can mount an EFS file system on one or more EC2 instances.
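
For example, mounting an EFS file system from an EC2 instance in the Region typically looks like the following sketch. The file system ID, Region, and mount path are placeholders, and the standard NFS client utilities are assumed to be installed on the instance.

# Create a mount point and mount the EFS file system over NFSv4.1 (placeholder file system ID and Region)
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ /mnt/efs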

This architecture may look similar to the following diagram:


Figure 1: AWS VPC showing EC2 Instances mounting Amazon EFS in the Region

In this architecture, EC2 instances are using Amazon EFS for a shared file system.

A main use case for Outposts servers is to deploy applications closer to an end user for quicker response times. If we move our application to the Outposts server to improve the response time to the end user, then we could still use Amazon EFS as a shared file system. However, the latency to read the file system over the service link may affect application performance.

There are third-party network attached storage systems available that could work with Outposts servers. However, Snowcone includes a built-in DataSync Agent to replicate data back to the Region, and it is ideal where physical space and power are limited.

By leveraging Snowcone, we can provide persistent and durable network attached storage external to the Outposts server along with a means to replicate data to and from an AWS Region. Snowcone is a small, rugged, and secure device offering edge computing, data storage, and data transfer.

Solution overview

In this solution, we combine multiple AWS services to provide a durable environment. We use Snowcone as our Network File System (NFS) mount point and leverage the built-in DataSync Agent to replicate the bucket on the Snowcone back to an Amazon Simple Storage Service (Amazon S3) bucket in-Region.

When EC2 instances are launched on the Outposts server, we map the NFS mount point from the Snowcone into the file system of a Linux host through the Outposts server’s Logical Network Interface (LNI). For a Windows system, using the NFS Client for Windows, we can map a drive letter to the NFS mount point as well. The following diagram illustrates this.


Figure 2: EC2 instances on Outposts server attaching to the NFS mount on Snowcone with DataSync replicating data back to Amazon S3 in the AWS Region

Prerequisites

To deploy this solution, you must:

  1. Have the Outposts server installed and authorized.
    1. The Outposts server must be fully capable of launching an EC2 instance and of communicating through the LNI to local network resources.
  2. Have an AWS Snowcone ordered, connected to the local network, and unlocked.
    1. To make sure that NFS is available, the job type must be either Import into Amazon S3 or Export from Amazon S3, as shown in the following figure.

Figure 3: Screenshot of Job Type when ordering Snow devices

  3. Have a local client with AWS OpsHub installed.
    1. You can use an instance launched on the Outposts server to configure the Snowcone, provided that the LNI is connected on the instance and the Snowcone is on the network. A quick way to verify this connectivity is sketched after this list.
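
As a sanity check, you can confirm from an instance on the Outposts server that the LNI reaches the Snowcone and that its NFS export is visible. This is an illustrative sketch only: 10.0.0.32 is the example NFS IP used later in this post, and showmount requires the NFS client utilities (for example, nfs-utils) to be installed.

# Confirm the LNI can reach the Snowcone over the local network
ping -c 3 10.0.0.32

# List the NFS exports offered by the Snowcone (you should see the /buckets share)
showmount -e 10.0.0.32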

Steps to activate

  1. Configure NFS on the Snowcone manually.
    1. Either statically assign the IP address or, if you're using DHCP, create an IP reservation to make sure that the NFS mount point is consistent. In this example, we use 10.0.0.32 as the static IP assigned to the NFS mount.
  2. (Optional) Start the DataSync Agent on the Snowcone.
    1. We assume that the Snowcone has access to the internet in the same way the Outposts server does. Configure the Agent, and then enable tasks. The Agent is used to replicate data from the Snowcone to the Region or from the Region to the Snowcone. The tasks created in this step enable replication. An AWS CLI sketch of this configuration appears after the launch steps below.
  3. Launch the EC2 instance (either a. or b.).
    a. Using a Linux OS – When launching an instance on the Outposts server to attach to the NFS mount, make sure that the LNI is configured when launching the instance. In the User data section, enter the commands shown in the following figure to mount the NFS file system from the Snowcone.

Figure 5: Screenshot of User Data section within the Amazon EC2 Launch Wizard

#!/bin/bash
# Create a mount point, mount the NFS share exported by the Snowcone, and persist the mount across reboots
sudo mkdir -p /var/snowcone
sudo mount -t nfs SNOW-NFS-IP:/buckets /var/snowcone
sudo sh -c "echo 'SNOW-NFS-IP:/buckets /var/snowcone nfs defaults 0 0' >> /etc/fstab"

In this user data script, we create a directory and then mount the NFS file system to that directory. The echo command adds the mount to /etc/fstab to make sure that the mount persists if the instance is rebooted.
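
Once the instance has booted, a quick check confirms that the user data ran and that the share is mounted where expected (the paths match the script above):

# Verify the Snowcone share is mounted and recorded in fstab
df -hT /var/snowcone
grep snowcone /etc/fstab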

    b. Using a Windows OS – The AMI used during launch must include the NFS client, which is required to mount the NFS share (on Windows Server, for example, the Client for NFS feature can be added with Install-WindowsFeature NFS-Client). When launching an instance on the Outposts server to attach to the NFS mount, make sure that the LNI is configured when launching the instance. In the User data section, enter the commands shown in the following figure to mount the NFS share from the Snowcone as a drive letter.


Figure 6: A screenshot of User Data section of Amazon EC2 Launch wizard with commands to mount NFS to the Windows File System

<powershell>
# Map the Z: drive to the Snowcone NFS share and keep the mapping across reboots
NET USE Z: \\SNOW-NFS-IP\buckets /PERSISTENT:YES
</powershell>

The NET USE command maps the Z: drive to the NFS mount, and the /PERSISTENT:YES switch keeps the mapping across reboots.
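
If you started the DataSync Agent in step 2, the replication back to the Region can also be scripted. The following is a minimal AWS CLI sketch; the activation key, ARNs, bucket name, IAM role, and IP address are placeholders that you would replace with your own values.

# Register the DataSync Agent running on the Snowcone (the activation key is obtained from the agent)
aws datasync create-agent --agent-name snowcone-agent --activation-key EXAMPLE-ACTIVATION-KEY

# Define the source location: the NFS share exported by the Snowcone
aws datasync create-location-nfs \
    --server-hostname 10.0.0.32 \
    --subdirectory /buckets \
    --on-prem-config AgentArns=arn:aws:datasync:us-east-1:111122223333:agent/agent-0123456789abcdef0

# Define the destination location: the S3 bucket in-Region
aws datasync create-location-s3 \
    --s3-bucket-arn arn:aws:s3:::example-replication-bucket \
    --s3-config BucketAccessRoleArn=arn:aws:iam::111122223333:role/example-datasync-role

# Create the task that copies data from the Snowcone NFS share to the S3 bucket
aws datasync create-task \
    --source-location-arn arn:aws:datasync:us-east-1:111122223333:location/loc-0123456789abcdef0 \
    --destination-location-arn arn:aws:datasync:us-east-1:111122223333:location/loc-abcdef01234567890 \
    --name snowcone-to-s3

A transfer can then be run on demand with aws datasync start-task-execution, or on a schedule, so that new data written to the Snowcone is copied to Amazon S3 regularly.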

This solution also works with Snowball Edge Storage Optimized. When ordering the Snowball Edge, choose NFS based data transfer for the storage type.


Figure 7: Screenshot of Select the storage type for the Snowball Edge

Conclusion

In this post, we examined how to mount the NFS file system on a Snowcone to EC2 instances running on Outposts servers. We also covered starting the DataSync Agent on the Snowcone to enable data transfer from the edge to an AWS Region. By pairing these services together, you can build persistent and durable storage external to the Outposts server and replicate your data back to the AWS Region.

If you want to learn more about how to get started with Outposts servers, my colleague Josh Coen and I have published a video series on this topic. The demo series shows you how to unbox an Outposts server, activate the Outposts server, and what you can do with your Outposts server after it is activated. Make sure to check it out!