AWS Storage Blog

Clustered storage simplified: GFS2 on Amazon EBS Multi-Attach enabled volumes

One of the design patterns for high availability of mission-critical applications is the use of shared storage. This architectural pattern enables you to access storage from multiple hosts simultaneously, making your applications resilient to node failures. Customers with demanding transaction processing systems, analytics workloads, or high-performance computing clusters need highly available, high-performance storage for those workloads.

Amazon FSx and Amazon EFS provide simple, scalable, fully managed network file systems that are well suited to the shared storage needs of most customer applications. For customers who want to lift-and-shift their existing on-premises SAN architecture to AWS without refactoring their cluster-aware file systems, such as Red Hat Global File System 2 (GFS2) or Oracle Cluster File System (OCFS2), another option is to use Amazon EBS volumes with the Multi-Attach feature. You can use Multi-Attach to build highly available shared storage with a cluster-aware file system such as GFS2, which safely coordinates storage access between instances to prevent data inconsistencies.

This post is for customers who want to build highly available applications using clustered storage on Amazon EBS volumes. It walks through the process of setting up GFS2 on a Multi-Attach enabled EBS volume attached to two EC2 instances that are part of a Linux cluster.

Solution overview

The following are the high-level steps for setting up GFS2 on Multi-Attach enabled EBS volumes:

  1. Setting up EC2 instances and Multi-Attach enabled EBS volumes.
  2. Installing the cluster software.
  3. Configuring the cluster.
  4. Setting up GFS2.

The setup used in the post includes:

  1. Two EC2 instances running a Red Hat Linux cluster (ma-host-1 and ma-host-2).
  2. A Multi-Attach enabled EBS volume.
  3. A GFS2 file system mounted as /sharedFS on both nodes.

Figure 1: Layout for the use case

Prerequisites for setting up EC2 instances and Multi-Attach enabled EBS volumes

This post uses the Red Hat Enterprise Linux 7 AMI available to customers through their Red Hat subscription. This setup requires access to the following repositories from Red Hat:

  1. Red Hat Enterprise Linux 7 Server (RPMs)
  2. Red Hat Enterprise Linux High Availability (for RHEL 7 server) (RPMs)
  3. Red Hat Enterprise Linux Resilient Storage (for RHEL 7 server) (RPMs)

Click here for steps on setting up GFS2 using CentOS.

Create the Multi-Attach enabled EBS volume

Navigate to the Create Volume menu in the Amazon EBS section of the Amazon EC2 console and create a Multi-Attach enabled EBS volume.

Figure 2: Creating Multi-Attach enabled EBS volume
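If you prefer to work from the command line, the following AWS CLI sketch creates an equivalent volume. The size, IOPS, volume type, and Availability Zone shown here are assumptions for this example; the volume must be a Provisioned IOPS volume and must reside in the same Availability Zone as both instances.

$ aws ec2 create-volume --volume-type io1 --size 50 --iops 1000 \
    --multi-attach-enabled --availability-zone us-east-1a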

Once you have created the volume successfully, attach it to both EC2 instances. You can do this by selecting Attach Volume from the Actions drop-down menu in the console.

Figure 3: Attaching the EBS volume

Select the hosts that you want to attach the volume to. In this case, they are ma-host-1 and ma-host-2.

Figure 4: Selecting hosts
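Alternatively, you can attach the volume to each instance with the AWS CLI. The volume ID, instance IDs, and device name below are placeholders for this example:

$ aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id <Instance-ID-1> --device /dev/sdf
$ aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id <Instance-ID-2> --device /dev/sdf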

Once you have attached the EBS volume to both nodes, run the lsblk command to confirm that the volume is now visible on both hosts.

Figure 5: lsblk output from both hosts

Now that the volume is attached to both EC2 instances, you can start the cluster setup.

Installing the cluster software

Before setting up the cluster software, run yum update on both nodes. Also, ensure that the nodes can reach each other over the network and can resolve each other's hostnames.
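If you are not using DNS for name resolution, one simple approach is to add entries to /etc/hosts on both nodes. The private IP addresses below are examples only; replace them with the private IPs of your instances.

# /etc/hosts entries on both nodes (example IPs)
10.0.1.10    ma-host-1
10.0.1.11    ma-host-2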

Install the cluster software using the yum command on both nodes:

$ sudo yum install pcs pacemaker fence-agents-aws

You will be using pcs to configure your cluster. To start and enable the pcsd daemon, run the following on both nodes:

$ sudo systemctl start pcsd.service
$ sudo systemctl enable pcsd.service

The cluster software creates a user named hacluster, which is used to configure the cluster and perform cluster tasks such as syncing the configuration and starting and stopping services on cluster nodes. To get started, the password for hacluster must be set on both nodes and must be the same on each. To set the password for the hacluster user, run the following command on both nodes:

$ sudo passwd hacluster
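If you are scripting the setup, you can also set the password non-interactively with chpasswd; the password shown here is a placeholder:

$ echo "hacluster:YourStrongPasswordHere" | sudo chpasswd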

Configuring the cluster

With the required software installed, you can proceed to configuring the cluster.

Use the pcs cluster auth command on any node to authenticate as the hacluster user. Enter hacluster as the username and, when prompted, the password you set in the previous step:

$ sudo pcs cluster auth ma-host-1 ma-host-2
Username: hacluster
Password: *********

If your cluster nodes are able to communicate with each other using their registered hostnames, you should see an output like the following one:

Figure 6: Authenticating hacluster user for both the nodes

If the command fails to complete, check whether the instances can resolve each other's hostnames properly. Also, check whether the security group configuration allows traffic between instances belonging to the same security group.
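As a rough guide, the RHEL 7 High Availability components used in this setup communicate over TCP port 2224 (pcsd), UDP ports 5404-5405 (corosync), and TCP port 21064 (dlm). If both instances share a security group, you can allow this traffic within the group with rules like the following sketch; the security group ID is a placeholder:

$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 2224 --source-group sg-0123456789abcdef0
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol udp --port 5404-5405 --source-group sg-0123456789abcdef0
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 21064 --source-group sg-0123456789abcdef0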

Next, configure a name for the cluster and add the two nodes as its members using the following command (run on any one node):

$ sudo pcs cluster setup --name macluster ma-host-1 ma-host-2

If the command is successful, then you should see an output like this:

Figure 7: Cluster name and membership

Once the cluster has been set up successfully, you can start it using the pcs cluster start command:

$ sudo pcs cluster start --all

Figure 8: Starting the cluster

You can check the status of the cluster using the following commands:

$ sudo pcs status corosync
$ sudo pcs status

Figure 9: Displaying cluster status

Setting up fencing

The next step is to set up a fencing device for the cluster. Fencing is an important component of the cluster configuration, used to prevent I/O from nodes that are unresponsive on the network but still have access to the shared EBS volume. Use the fence_aws agent installed earlier to set up fencing for your cluster. To confirm that the fence agent is installed, run the following command:

$ sudo pcs stonith list

Figure 10: Displaying the fence_aws agent

The fence_aws agent needs the credentials of an IAM user with permissions to describe, start, reboot, and stop the two EC2 instances. If you don’t have one already, create an IAM user with the required permissions. You need the user credentials (access key and secret key) in the next step.
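As a sketch, you could create such a user and a minimal inline policy with the AWS CLI; the user name and policy name below are arbitrary, and in production you should scope the start, stop, and reboot actions to the ARNs of the two instances:

$ aws iam create-user --user-name fence-aws-user
$ aws iam put-user-policy --user-name fence-aws-user --policy-name fence-aws-minimal \
    --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":["ec2:DescribeInstances","ec2:StartInstances","ec2:StopInstances","ec2:RebootInstances"],"Resource":"*"}]}'
$ aws iam create-access-key --user-name fence-aws-user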

To configure the fencing agent, use the pcs stonith create command on one of the hosts:

$ sudo pcs stonith create clusterfence fence_aws \
access_key=<your access key> \
secret_key=<your secret key> \
region=us-east-1 \
pcmk_host_map="ma-host-1:Instance-ID-1;ma-host-2:Instance-ID-2" \
power_timeout=240 pcmk_reboot_timeout=480 pcmk_reboot_retries=4

On completion, run the pcs status command to check the configuration:

$ sudo pcs status

Figure 11: Setting up fencing
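Optionally, you can verify that fencing works by fencing one node from the other. Note that this reboots the target instance, so only do it in a test window; once the fenced node is back up, rejoin it to the cluster with pcs cluster start.

$ sudo pcs stonith fence ma-host-2
$ sudo pcs cluster start ma-host-2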

Setting up GFS2

After completing the cluster configuration, you must set up GFS2. The Red Hat Enterprise Linux (RHEL) Resilient Storage add-on provides GFS2, and it depends on the RHEL High Availability add-on to provide the cluster management required by GFS2.

To begin, you need the gfs2-utils package, which provides GFS2 and the utilities required to manage the GFS2 file system. Also, because you will be using LVM to create volumes on the disk, you need the lvm2-cluster package, which provides the cluster extension for the LVM tools.

To install, run the following on both the nodes:

$ sudo yum install lvm2-cluster gfs2-utils

Before proceeding with the next step, create the mountpoint /sharedFS on both nodes.

$ sudo mkdir /sharedFS

The default cluster behavior is to stop a node that has lost quorum. However, for GFS2 it is a best practice to instead freeze I/O until quorum is regained. To make the change, run the following on any of the nodes:

$ sudo pcs property set no-quorum-policy=freeze

Figure 12: Setting no-quorum policy to “freeze”

Set up the distributed lock manager (dlm) resource by running the following on any node:

$ sudo pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s \
 on-fail=fence clone interleave=true ordered=true

Execute the following command on both the nodes to enable clustered locking:

$ sudo /sbin/lvmconf --enable-cluster

The clustered LVM daemon, clvmd, is responsible for distributing LVM metadata updates across the cluster. Execute the following command on any node to create clvmd as a cluster resource:

$ sudo pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s \
on-fail=fence clone interleave=true ordered=true

The clvmd resource must start after dlm, and it must run on the same node as the dlm resource. The following commands (run on any node) define these constraints:

$ sudo pcs constraint order start dlm-clone then clvmd-clone
$ sudo pcs constraint colocation add clvmd-clone with dlm-clone

Create the volume group and the logical volume using the following commands on any of the nodes. Replace /dev/nvme1n1 with the device name visible for the Multi-Attach enabled EBS volume in your setup:

$ sudo pvcreate /dev/nvme1n1  
$ sudo vgcreate -Ay -cy clustervg /dev/nvme1n1  
$ sudo lvcreate -L49G -n clusterlv clustervg
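The -L49G value above assumes a volume of roughly 50 GiB. If you would rather have the logical volume use all of the available space regardless of the volume size, an alternative sketch is to size it by extents:

$ sudo lvcreate -l 100%FREE -n clusterlv clustervg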

The next step is to create a GFS2 file system on the logical volume. Run the following command on any of the nodes:

$ sudo mkfs.gfs2 -j2 -p lock_dlm -t macluster:sharedFS /dev/clustervg/clusterlv

It is important to specify the correct value after the -t switch. The format is cluster_name:FSName, which is macluster:sharedFS in this setup. The -j2 option creates two journals, one for each node that mounts the file system:

Figure 13: Configuring gfs2

Some points to note before you proceed to mount the GFS2 file system:

  1. Do not add mount entries to the /etc/fstab file, because the resource is managed by the cluster. Mount options can be specified while creating the resource.
  2. The ‘noatime’ option is recommended if your workload does not need file access times to be recorded every time a file is accessed on the GFS2 file system.

Create a file system resource by running the following command on any node:

$ sudo pcs resource create clusterfs Filesystem device="/dev/clustervg/clusterlv" \
directory="/sharedFS" fstype="gfs2" options="noatime" op monitor interval=10s \
on-fail=fence clone interleave=true

Finally, to set up the GFS2 and clvmd startup order and colocation dependency, use the following commands:

$ sudo pcs constraint order start clvmd-clone then clusterfs-clone
$ sudo pcs constraint colocation add clusterfs-clone with clvmd-clone

The preceding commands mount the newly created GFS2 file system on both nodes:

Figure 14: File system mounted on both hosts

Congratulations! You have successfully set up a GFS2 file system and can use the EBS volume on both nodes simultaneously.
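As a quick check, you can confirm the mount on each node, then write to the shared file system from one node and read it from the other; the file name below is arbitrary:

$ df -hT /sharedFS
$ sudo touch /sharedFS/hello-from-$(hostname)
$ ls -l /sharedFS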

Cleaning up

If you no longer need the setup, remember to terminate the EC2 instances and delete the EBS volume. If you have any data in the setup, take a backup before shutting down the resources.
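As a sketch, the cleanup can also be done from the command line; the volume and instance IDs below are placeholders. Stop the cluster first so that the file system is unmounted cleanly:

$ sudo pcs cluster stop --all
$ aws ec2 detach-volume --volume-id vol-0123456789abcdef0 --instance-id <Instance-ID-1>
$ aws ec2 detach-volume --volume-id vol-0123456789abcdef0 --instance-id <Instance-ID-2>
$ aws ec2 delete-volume --volume-id vol-0123456789abcdef0
$ aws ec2 terminate-instances --instance-ids <Instance-ID-1> <Instance-ID-2>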

Conclusion

In this blog post, we showed how you can build a highly available setup using Amazon EBS Multi-Attach enabled volumes. We used a cluster-aware file system, GFS2, which safely coordinates storage access between instances to prevent data inconsistencies. This post used a sample configuration to set up a simple Red Hat Linux cluster with a GFS2 file system. It is important to note that both the cluster and GFS2 need detailed planning and testing based on several factors unique to every environment.

To read about Red Hat High Availability, refer to the Red Hat documentation. The Red Hat documentation on GFS2 is a great resource for understanding and planning your GFS2 configuration. To learn more about Amazon EBS and the Multi-Attach feature, refer to the Amazon EC2 documentation.

Thanks for reading this blog post on GFS2 on Amazon EBS Multi-Attach enabled volumes. Please don’t hesitate to leave a comment in the comments section if you have any questions or feedback.