AWS Storage Blog
Implementing a backup solution with AWS Storage Gateway
Backups are the insurance policy for our data. We hope to never use them, but if the time comes when we need them, they had better be there for us.
At a high level, there are two different variations: local and offsite. A local backup can be as simple as copying data to another physical device. If your device fails, you have a copy on another device. But what if you lose both copies of your data? The physical location in which both devices live could be destroyed, for example. In that case, you must have access to an offsite copy of your data.
I’ve been using AWS to keep offsite backups of my data since 2010. It’s a simple setup; I launch an Amazon EC2 instance and just rsync all my data to it periodically. However:
- It’s not serverless – I need to fire up an EC2 instance each time I want to make a backup. Not a big deal, but it’s an extra step that can be avoided.
- It requires me to use Amazon EBS storage instead of Amazon S3. Again, not a showstopper, but the elastic, pay-as-you-use model of Amazon S3 provides a compelling argument for data storage (no need to pre-provision capacity).
In this blog, I want to show you how you can back up your local data to AWS. With scalability a non-issue, this could be your company’s TiBs or PiBs of corporate data or, as in my case, a few hundred GiB of home-based data.
AWS Storage Gateway and picking a gateway type
I recently turned my attention to AWS Storage Gateway. This is a hybrid cloud storage service that enables you to efficiently, securely, and cost-effectively back up data from your on-premises environment into Amazon S3. It comes in three different types – Tape Gateway, File Gateway, and Volume Gateway. Each type enables you to easily leverage Amazon S3 storage, with its inherent security, durability, and availability. In addition to backup, many AWS customers leverage AWS Storage Gateway to great effect in data center disaster recovery (DR) and migration scenarios.
AWS Storage Gateway is a fully managed service, comprising both in-cloud and on-premises components. You have a number of options for implementing AWS Storage Gateway on premises, depending on your requirements. You can deploy it as a virtual machine (VM), running on Linux QEMU/KVM, VMware ESXi, or Microsoft Hyper-V, or as a hardware appliance. Alternatively, you can implement in-cloud using an Amazon Machine Image (AMI) in Amazon EC2.
It addresses three key hybrid cloud use cases:
- Move backups and archives to the cloud.
- Reduce on-premises storage with cloud-backed file shares.
- Provide on-premises applications low-latency access to data stored in AWS.
Let’s take a quick look at the various incarnations of AWS Storage Gateway.
- File Gateway, as its name suggests, provides a file-based NFS/SMB facility for storing and accessing your data in Amazon S3.
- Tape Gateway enables you to use your existing on-premises backup application, presenting an iSCSI-VTL interface. Your backup software uses virtual tapes, presented by AWS Storage Gateway, and your data is stored in Amazon S3.
- Volume Gateway presents iSCSI block devices on your local LAN, with data backed up automatically to Amazon S3 in one of two different modes – stored volumes or cached volumes. With stored volumes, your entire dataset is stored locally, while being asynchronously backed up to S3. With cached volumes, your entire dataset lives in S3, with frequently accessed data cached locally.
Figure 1: AWS Storage Gateway options – File, Volume, and Tape Gateways
After examining the features of the three types, I decided that Volume Gateway in cached volume mode was the option for me, implemented on premises. In this blog, I will walk you through the implementation process. As mentioned earlier, there are a number of components to the AWS Storage Gateway, on both the AWS side and the client side. It is my aim with this blog to collate the entire process into a single source of reference, delivering a functional Storage Gateway configuration. For the purposes of this blog, a degree of familiarity with Linux, networking, virtualization, and security is assumed. In addition, a functional DHCP server on the local network is necessary for the specific procedure I describe. You are free to implement using a static IP if you prefer.
Figure 2: AWS Storage Gateway – Volume Gateway, local cache
Note that AWS Storage Gateway manages all Amazon S3 storage in AWS, the cost of which is included in the usage costs of the service. In contrast, any snapshots you make are stored, and therefore accrue charges, in your own account.
Disclaimer: I offer this procedure purely as a proof of concept exercise. I describe the end-to-end mechanics of implementing AWS Storage Gateway. I do not fully address security, sizing/performance, availability, full-versus-incremental backups, or scalability, all of which are dependent on your specific circumstances and requirements. Note, however, that AWS Storage Gateway automatically encrypts your in-transit data, with at-rest data in AWS encrypted by default, or by using your own encryption keys.
At a high level, the process is as follows:
- Prepare a local Linux host.
- Install the AWS Storage Gateway VM on your local Linux host.
- Activate AWS Storage Gateway.
- Allocate storage for client usage.
- Connect to AWS Storage Gateway’s iSCSI storage.
- Procedure to access backup data in AWS.
- Command line tips.
1. Prepare a local Linux host
For my AWS Storage Gateway VM, I’m using a spare Intel® NUC, Intel® Core™ i3-3217U Processor @ 1.80 GHz, 8 GiB RAM, running Fedora 32. My installation spec comes in under the AWS-recommended minimum requirements (4 dedicated cores, 16 GiB RAM), at 2 dedicated cores and 4 GiB RAM. But, for the backup volumes in my use case, this reduced spec works just fine. The resident set varies between 3.1 GiB and 3.9 GiB with little in the way of swapping. Storage Gateway is tested and supported on CentOS/RHEL 7.7, Ubuntu 16.04 LTS, and Ubuntu 18.04 LTS. I’m running it on Fedora, and the steps in this blog should work equally well on CentOS and RHEL. I am opting to use QEMU/KVM for the AWS Storage Gateway VM.
The first step is to create a bridge network on the host, to provide connectivity for the VM.
Ensure that the following packages are installed on this host:
bridge-utils
libvirt
virt-install
qemu-kvm
libguestfs-tools
iscsi-initiator-utils
Install any that are missing using the rpm or dnf command. Once you have confirmed that all of the packages are installed, ensure that the libvirtd and iscsid services are both running and enabled.
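As a minimal sketch, assuming a dnf-based distribution such as Fedora, CentOS, or RHEL, the package and service checks might look like this:

# Install any packages from the list above that are missing
dnf install bridge-utils libvirt virt-install qemu-kvm libguestfs-tools iscsi-initiator-utils

# Enable and start the libvirtd and iscsid services, now and at boot
systemctl enable --now libvirtd iscsid

# Confirm that both services are active
systemctl is-active libvirtd iscsid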
For reference, my network configuration initially looked like this:
# nmcli con show
NAME                UUID                                  TYPE      DEVICE
virbr0              8a308340-56fc-4d2f-958a-fbabf3b980bd  bridge    virbr0
Wired connection 1  5d600641-9557-3c63-8ebe-9fd86a388977  ethernet  eth0
Add a bridge device:
# nmcli con add ifname br0 type bridge con-name br0
Connection 'br0' (922b5437-a160-4e9b-a092-d24a1cb2a8f5) successfully added.
Create a slave interface:
# nmcli con add type bridge-slave ifname eth0 master br0
Connection 'bridge-slave-eth0' (7d5d61d8-6984-41a3-a19d-32e395435485) successfully added.
Activate the bridge and its slave interface by removing the old wired connection and bringing up br0. Note that the network drops here, so you must be on the host console at this point, not connected via SSH.
# nmcli con down "Wired connection 1"
Connection 'Wired connection 1' successfully deactivated
# nmcli con del "Wired connection 1"
Connection 'Wired connection 1' (5d600641-9557-3c63-8ebe-9fd86a388977) successfully deleted.
# nmcli con up br0
Connection successfully activated (master waiting for slaves)
Reboot and the system should come back up with the new network configuration.
# nmcli con show
NAME               UUID                                  TYPE      DEVICE
br0                4e4932f1-ac5b-47d5-9a91-a4d3351fadb0  bridge    br0
virbr1             b3b3e841-19d0-4dad-901a-24e61811dcb8  bridge    virbr1
bridge-slave-eth0  f5c4d82b-c13a-43da-8178-f5b2e6756118  ethernet  eth0
Note: the br0 interface is the one we will use for the AWS Storage Gateway VM in the next step.
Finally, open port 3260/tcp in your Linux firewall, to allow VM connectivity from your network clients.
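On Fedora, CentOS, and RHEL the default firewall is usually firewalld; a quick sketch, assuming that is what you are running:

# Permanently open the iSCSI port used by the gateway VM, then reload the firewall
firewall-cmd --permanent --add-port=3260/tcp
firewall-cmd --reload

# Verify that the port is now listed
firewall-cmd --list-ports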
2. Install the AWS Storage Gateway VM on your local Linux host
First, in the AWS Management Console, navigate to the AWS Storage Gateway service, and click on Create gateway. Select the gateway type – Volume Gateway, Cached Volumes – followed by Next. Select the host platform – Linux KVM – and follow the instructions in the Setup Instructions for Linux KVM drop-down, to download the QCOW2 image onto your Linux host.
Next, back on your local Linux host, unzip the file you just downloaded and import the image into KVM, using the virt-manager interface, to create the AWS Storage Gateway VM (if you are not using a desktop interface, see section “7. Command line tips”). Select the option to Import existing disk image, followed by Forward.
Browse to the directory you unzipped earlier and select the .qcow2 image file. For the operating system type, select Generic Default, followed by Forward.
On the Choose Memory and CPU settings screen, enter details of the required memory and number of CPU cores, followed by Forward.
On the Ready to begin the installation screen, click on the drop-down options for Network selection, choose Specify shared device name, and enter br0 for Bridge name. Hit Finish and the VM creation process will start. The VM console will open and you may be looking at a blank screen for a minute or two while the image file is imported. Once the image import is complete, and the VM is up and running, you will be at the VM’s login prompt.
Use the default credentials to log in (admin/password – don’t forget to change these) and you will be at the AWS Appliance Activation – Configuration screen.
Have a browse around the options, in particular the Test Network Connectivity option, to confirm all looks well.
Back in virt-manager, create two virtual 20-GiB IDE disks on the gateway (for the cache and upload buffer volumes) and reboot. Note that this needs to be a cold “stop,” followed by “start,” not just a single “reboot.” You will see a warning advising you that these volumes should be at least 150 GB each, which you can safely ignore for our purposes.
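If you are working without a desktop, the same step can be sketched on the command line; the disk image paths and virtio target names here are illustrative, and SGW_VMx is the VM name used in section 7:

# Create two 20-GiB qcow2 images for the cache and upload buffer
qemu-img create -f qcow2 /var/lib/libvirt/images/sgw-cache.qcow2 20G
qemu-img create -f qcow2 /var/lib/libvirt/images/sgw-buffer.qcow2 20G

# Attach them to the gateway VM as additional persistent disks
virsh attach-disk SGW_VMx /var/lib/libvirt/images/sgw-cache.qcow2 vdb \
    --driver qemu --subdriver qcow2 --persistent
virsh attach-disk SGW_VMx /var/lib/libvirt/images/sgw-buffer.qcow2 vdc \
    --driver qemu --subdriver qcow2 --persistent

# Cold-restart the VM so it detects the new disks (wait for it to stop before starting)
virsh shutdown SGW_VMx
virsh start SGW_VMx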
3. Activate AWS Storage Gateway
Log in to the AWS Storage Gateway VM and select 0: Get activation key. Specify a network type of Public, an endpoint of Standard, and note the activation key shown on the screen. With this key, register the gateway using the AWS CLI, specifying the Region in which to activate (I’m activating in the us-west-2 Region):
$ aws --region us-west-2 storagegateway activate-gateway --activation-key <key> \
    --gateway-type CACHED --gateway-name SGW_VMx --gateway-timezone GMT-7:00 \
    --gateway-region us-west-2
{
    "GatewayARN": "arn:aws:storagegateway:us-west-2:978528203459:gateway/sgw-6677990F"
}
Once activation is complete, the Status of the AWS Storage Gateway service in the AWS Management Console switches from Offline to Shutdown.
Select the gateway, click on Start gateway, and the service will start, confirmed by a status of Running. You will see a highlighted message that states: You need to allocate local storage. This refers to the two volumes you created earlier for the cache and upload buffer. From the Actions drop-down, click on Edit local disks, allocate one disk to Upload Buffer and the other to Cache, and click Save.
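If you prefer the AWS CLI, here is a sketch of the same allocation, using the gateway ARN returned by activate-gateway (the disk IDs are placeholders, taken from the list-local-disks output):

# List the gateway's local disks to obtain their disk IDs
aws --region us-west-2 storagegateway list-local-disks \
    --gateway-arn arn:aws:storagegateway:us-west-2:978528203459:gateway/sgw-6677990F

# Allocate one disk as upload buffer and the other as cache
aws --region us-west-2 storagegateway add-upload-buffer \
    --gateway-arn arn:aws:storagegateway:us-west-2:978528203459:gateway/sgw-6677990F \
    --disk-ids <disk-id-1>
aws --region us-west-2 storagegateway add-cache \
    --gateway-arn arn:aws:storagegateway:us-west-2:978528203459:gateway/sgw-6677990F \
    --disk-ids <disk-id-2>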
4. Allocate storage for client usage
Back in the AWS Storage Gateway console, create the application data volume, which will be presented locally to your on-premises clients. Select your gateway, followed by Create Volume. The iSCSI target name is the identifier your clients will use to discover the volume, for example “local-app-vol-01.” Click on Create Volume, followed by Skip on the CHAP authentication screen (for this exercise, I am skipping CHAP authentication, but you are strongly encouraged to enable it for security).
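For reference, the same cached volume can also be created from the AWS CLI; a sketch, assuming a 150-GiB volume, the gateway activated earlier, and its IP address of 192.168.1.142 (used again in the next section). The client token is any unique string you choose:

# Create a 150-GiB cached volume exposed as iSCSI target "local-app-vol-01"
aws --region us-west-2 storagegateway create-cached-iscsi-volume \
    --gateway-arn arn:aws:storagegateway:us-west-2:978528203459:gateway/sgw-6677990F \
    --volume-size-in-bytes 161061273600 \
    --target-name local-app-vol-01 \
    --network-interface-id 192.168.1.142 \
    --client-token my-unique-token-01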
5. Connect to AWS Storage Gateway’s iSCSI storage
On a client host that requires access to your AWS Storage Gateway, connect to the iSCSI volume, mount it, and write some test data to it.
Before you connect to the iSCSI target, you must make some recommended changes to the client’s iSCSI settings, which make connectivity more tolerant of varying network conditions. Edit the client’s /etc/iscsi/iscsid.conf and modify the following parameters to reflect the specified values:
node.session.timeo.replacement_timeout = 600
node.conn[0].timeo.noop_out_interval = 60
node.conn[0].timeo.noop_out_timeout = 600
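These timeouts are picked up when new sessions are established; to be safe, I restart the iSCSI daemon before discovering the target. A sketch for systemd-based distributions:

# Restart iscsid so the new timeout values take effect
systemctl restart iscsid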
Note: In the following steps, 192.168.1.142 is the IP address of the AWS Storage Gateway VM.
Now, on the client host, you are ready to discover the local-app-vol-01 iSCSI target on AWS Storage Gateway:
# iscsiadm --mode discovery --type sendtargets --portal 192.168.1.142:3260
192.168.1.142:3260,1 iqn.1997-05.com.amazon:local-app-vol-01
Show details of the newly discovered iSCSI target:
# iscsiadm --mode node --targetname iqn.1997-05.com.amazon:local-app-vol-01 \
    --portal 192.168.1.142:3260,1
# BEGIN RECORD 2.1.0
node.name = iqn.1997-05.com.amazon:local-app-vol-01
node.tpgt = 1
node.startup = automatic
node.leading_login = No
iface.iscsi_ifacename = default
iface.net_ifacename = <empty>
iface.ipaddress = <empty>
<snip>
Connect to the local-app-vol-01 volume by using the preceding command appended with --login:
# iscsiadm --mode node --targetname iqn.1997-05.com.amazon:local-app-vol-01 \
    --portal 192.168.1.142:3260,1 --login
Logging in to [iface: default, target: iqn.1997-05.com.amazon:local-app-vol-01, portal: 192.168.1.142,3260]
Login to [iface: default, target: iqn.1997-05.com.amazon:local-app-vol-01, portal: 192.168.1.142,3260] successful.
You have now attached the volume to your iSCSI initiator:
# ls -l /dev/disk/by-path | grep -i iscsi
lrwxrwxrwx. 1 root root 9 Oct 13 19:00 ip-192.168.1.142:3260-iscsi-iqn.1997-05.com.amazon:local-app-vol-01-lun-0 -> ../../sdc
You now have a new block device, /dev/sdc in the example. Just lay down a file system and mount it, and you’re done. Any data that’s written to this volume from this point on will be automatically and securely synced by AWS Storage Gateway to Amazon S3.
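As a sketch, assuming a single partition, an XFS file system, and a mount point of /sgw-data-vol001 (matching the output below):

# Create one partition spanning the whole disk
parted -s /dev/sdc mklabel gpt mkpart primary 0% 100%

# Build an XFS file system and mount it
mkfs.xfs /dev/sdc1
mkdir -p /sgw-data-vol001
mount /dev/sdc1 /sgw-data-vol001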
# df -k | grep sgw
/dev/sdc1      264090692  47938696  202713892  28% /sgw-data-vol001

# lsblk --scsi
NAME HCTL       TYPE VENDOR   MODEL                    REV  TRAN
sda  2:0:0:0    disk ATA      MTFDDAV256TBN-1AR15ABHA  0T14 sata
sdb  3:0:0:0    disk Samsung  Samsung_Portable_SSD_T5  0    usb
sdc  4:0:0:0    disk Amazon   Storage_Gateway          1.0  iscsi
6. Procedure to access backup data in AWS
After we have written some test data to the AWS Storage Gateway volume, we can access that data in AWS to confirm that everything is working correctly (see the CLI sketch after this list) by:
- Making a snapshot of the volume.
- Creating a new EBS volume from the snapshot.
- Mounting the new EBS volume on an EC2 instance.
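Here is a rough AWS CLI sketch of those three steps; the volume ARN, snapshot ID, volume ID, instance ID, and Availability Zone are placeholders for your own values, and the snapshot must have completed before you create the EBS volume from it:

# 1. Take a point-in-time snapshot of the Storage Gateway volume
aws --region us-west-2 storagegateway create-snapshot \
    --volume-arn <volume-arn> --snapshot-description "SGW test restore"

# 2. Create a new EBS volume from the resulting snapshot
aws --region us-west-2 ec2 create-volume \
    --snapshot-id <snapshot-id> --availability-zone us-west-2a

# 3. Attach the EBS volume to an EC2 instance, then log in and mount it there
aws --region us-west-2 ec2 attach-volume \
    --volume-id <ebs-volume-id> --instance-id <instance-id> --device /dev/sdf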
We can monitor the AWS Storage Gateway service in the AWS Management Console. If you do this while writing to the volume, you will see that Upload buffer usage reflects the activity.
In addition, you can observe the increase in space usage on the volume, shown as Used in the column on the right.
For enterprise operations, AWS Backup integrates with Volume Gateway, and you can use the service to fully automate snapshots and retention periods.
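Setting up backup plans is beyond the scope of this post, but as a pointer, an on-demand backup of the volume through AWS Backup might look like the following sketch (the vault name, IAM role ARN, and volume ARN are placeholders):

# Start an on-demand AWS Backup job for the Storage Gateway volume
aws --region us-west-2 backup start-backup-job \
    --backup-vault-name Default \
    --resource-arn <volume-arn> \
    --iam-role-arn <backup-service-role-arn>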
7. Command line tips
You may not always have access to the virt-manager interface, in which case you must use the command line. Here are some tips and examples on using the virsh libvirt utility for installing and administering the local AWS Storage Gateway components.
- Import the AWS Storage Gateway disk image into KVM:
# virt-install --name "SGW_VMx" --description "SGW VMx" --os-type=generic --ram=4096 \
    --vcpus=2 --disk path=fgw-kvm.qcow2,bus=virtio,size=80,sparse=false \
    --disk path=aws-storage-gateway-1599010968.qcow2,bus=virtio,size=1024,sparse=false \
    --network default,model=virtio --graphics none --import
- Start up the AWS Storage Gateway KVM domain and confirm it is running:
# virsh list --state-running
 Id   Name   State
--------------------

# virsh start SGW_VMx
Domain SGW_VMx started

# virsh list --state-running
 Id   Name      State
-------------------------
 5    SGW_VMx   running
- Connect to an AWS Storage Gateway KVM domain console:
# virsh list --state-running
 Id   Name      State
-------------------------
 5    SGW_VMx   running

# virsh console 5
Connected to domain SGW_VMx
Escape character is ^]

AWS Appliance

Login to change your network configuration and other settings

ip-172-31-45-15 login:
To disconnect from the console and return to the host shell, use ctrl-] (ctrl + right square bracket).
- Shut down the AWS Storage Gateway and KVM domain:
Connect to the AWS Storage Gateway – Configuration console and select 0: Stop AWS Storage gateway:
Gateway is stopping. This may take a few minutes. Please wait...
Gateway has stopped successfully.
You can now shutdown your VM safely.
Press Return to return to the Main Menu
- You can then stop the KVM domain:
# virsh list --state-running
 Id   Name      State
-------------------------
 5    SGW_VMx   running

# virsh shutdown SGW_VMx
Domain SGW_VMx is being shutdown

# virsh list --state-running
 Id   Name   State
--------------------
Cleaning up
If you are not implementing AWS Storage Gateway for actual usage, you should remove all AWS resources that you created, to avoid incurring unnecessary costs. Go to the Storage Gateway page in the AWS Management Console, select the gateway you wish to delete, followed by Actions and Delete gateway. Doing so deletes the gateway and all associated S3 resources. Also delete any snapshots, EBS volumes, and EC2 resources you created while testing in section 6, as these are stored, and billed, in your own account.
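If you prefer the AWS CLI, a sketch of the same cleanup, using the ARNs from earlier:

# Delete the test volume, then the gateway itself
aws --region us-west-2 storagegateway delete-volume --volume-arn <volume-arn>
aws --region us-west-2 storagegateway delete-gateway \
    --gateway-arn arn:aws:storagegateway:us-west-2:978528203459:gateway/sgw-6677990F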
Conclusion
Congratulations, you now have a functional AWS Storage Gateway on your local network!
In this blog, I showed you how to deploy Volume Gateway on a Linux KVM hypervisor, presenting cloud-backed block storage, durably stored in Amazon S3, to your on-premises applications. This significantly enhances your range of options for backup, disaster recovery (DR), and migration scenarios, keeping your data secure and highly available.
For next steps, you should look at security, capacity/scalability, and resiliency. You can find good starting points in the AWS Storage Gateway FAQs and the documentation on Security in AWS Storage Gateway.
I hope this exercise proved useful, and I would be happy to answer any questions in the comments section.