AWS Partner Network (APN) Blog

Integrating iSCSI Storage with VMware Cloud on AWS Virtual Machines Using Amazon FSx for NetApp ONTAP

By Sheng Chen, Sr. Specialist Solutions Architect – AWS
By Kiran Reid, Sr. Partner Solutions Architect – AWS
By Karthik Coimbatore Varadaraj, Sr. Partner Solutions Architect – AWS

In a previous AWS blog post, we discussed how customers can use Amazon FSx for NetApp ONTAP to launch and run fully-managed NetApp ONTAP file systems on Amazon Web Services (AWS) as an option for providing storage to virtual machines (VMs) running on VMware Cloud on AWS.

In this post, we’ll focus on how you can use Amazon FSx for NetApp ONTAP to provide block storage for VM workloads running on VMware Cloud on AWS via the iSCSI protocol.

Enterprise applications such as corporate email and database systems require enterprise-grade block storage management features, as do cloud applications that perform server-side processing. These workloads depend on consistent I/O performance and low-latency storage connectivity.

With Amazon FSx for ONTAP, you can launch, run, and scale fully managed NetApp ONTAP storage file systems on AWS. The solution helps customers address their block storage requirements for workloads running on VMware Cloud on AWS.

iSCSI over Elastic Network Interface

The VMware Cloud on AWS software-defined data center (SDDC) is directly connected to the customer’s virtual private cloud (VPC) using an elastic network interface (ENI) that allows access to AWS services. This connectivity method is ideal for customers who wish to use iSCSI to access their storage volumes.

Customers can leverage the ENI to interconnect managed NetApp ONTAP volumes running on Amazon FSx, and present them directly to virtual machines running on VMware Cloud on AWS using the iSCSI protocol.

iSCSI is an IP-based protocol designed to share logical units of storage (LUNs) over IP networks, making it well suited to cloud implementations. These volumes allow customers to scale file systems to meet application performance and durability requirements independently of compute, resulting in a more cost-effective solution.

iSCSI is routable over the Amazon VPC architecture and is supported by native multipath I/O (MPIO) drivers in most operating systems.

With iSCSI, you have a “target” port in each AWS Availability Zone (AZ) connected to the local FSx for NetApp ONTAP node the host is configured against, with the ability to fail over between those ports in the event of a service disruption. The ports and IP addresses are created by the FSx for NetApp ONTAP service.

Figure 1 – iSCSI over ENI.

This connectivity is the most cost-efficient path to access AWS storage, particularly when the SDDC resides in the same AZ as the file system’s preferred AZ. In this scenario, your storage traffic is exempt from data transfer charges.

There are no data transfer charges when accessing an Amazon FSx file system from the file system’s preferred Availability Zone.

If you access your data from the same AWS Region, but from an AZ other than your file system’s preferred AZ:

  • For multi-AZ file systems created on or after February 23, 2022, there are no data transfer charges.
  • For multi-AZ file systems created before February 23, 2022 or for single-AZ file systems, you will be charged $0.01/GB in each direction.

Note that for all multi-AZ file systems, data transfer incurred for replication of data across AZs is included in the throughput capacity price.
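As an illustrative example, reading 100 GB from a single-AZ file system out of an AZ other than its preferred AZ would incur roughly 100 GB × $0.01/GB = $1.00 of data transfer charges in that direction, whereas the same access from the preferred AZ incurs none.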

Use Cases

  • Address block storage requirements for workloads and applications running on VMware Cloud on AWS.
  • Access on-demand block storage from the connected VPC for workloads with low latency requirements.
  • Enable customers to continue using the on-premises, application-consistent snapshotting solutions (such as SnapMirror for SQL) they have already invested in as part of their data protection mechanisms.

Setup

In the following example, we are going to deploy an Amazon FSx for NetApp ONTAP file system to provide iSCSI-based block storage for a Windows Server client running on VMware Cloud on AWS.

iSCSI access is also possible by using a VMware Transit Gateway (vTGW) to reach file systems in an existing or new customer VPC.

In this scenario and for the purpose of this demo, we have deployed the FSx service in a connected VPC environment to provide iSCSI storage via the ENI.

Figure 2 – Demo setup.

As illustrated in the above diagram, we have provisioned and prepared the following items as prerequisites to this lab:

  • VMware Cloud on AWS SDDC cluster
  • Windows Server 2019 virtual machine (as the iSCSI client) running on the SDDC
  • FSx for NetApp ONTAP Multi-AZ file system

You can refer to this AWS blog post for setting up the FSx for NetApp ONTAP service.

To provision iSCSI LUNs, customers can use a variety of NetApp management tools such as NetApp Cloud Manager, NetApp ONTAP REST API, or the NetApp ONTAP CLI.

For this example, we’ll use the ONTAP CLI to walk through the process of configuring the following:

  • Setting up an iSCSI LUN via ONTAP CLI
  • Windows client configurations

Part 1: Setting Up an iSCSI LUN via ONTAP CLI

To begin with, we’ll first locate the IP addresses of the FSx ONTAP management endpoint and iSCSI endpoints within the AWS Management Console.

  • Open the Amazon FSx service page.
  • Click into the FSx for NetApp ONTAP file system you have just provisioned.
  • On the Summary page, click Administration and you’ll find the ONTAP Management endpoint IP address. We’ll soon need this to access the ONTAP CLI.
  • Also on the Summary page, click Storage virtual machines (SVMs), and then click the SVM you have deployed during the FSx service provisioning.
  • On the SVM Summary page, you’ll find the iSCSI IP addresses under the Endpoints section.

Take a note of the two IP addresses for the iSCSI Logical Interfaces (LIFs) deployed across two Availability Zones.
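
If you prefer the command line, the same endpoint information can also be retrieved with the AWS CLI. The following is a minimal sketch using hypothetical file system and SVM IDs; the iSCSI and management IP addresses are returned under the respective Endpoints sections:

  # List the iSCSI endpoint IP addresses of the SVM (hypothetical SVM ID)
  aws fsx describe-storage-virtual-machines \
      --storage-virtual-machine-ids svm-0123456789abcdef0 \
      --query "StorageVirtualMachines[0].Endpoints.Iscsi.IpAddresses"

  # List the file system's ONTAP management endpoint IP addresses (hypothetical file system ID)
  aws fsx describe-file-systems \
      --file-system-ids fs-0123456789abcdef0 \
      --query "FileSystems[0].OntapConfiguration.Endpoints.Management.IpAddresses"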

Figure 3 – iSCSI endpoints.

Next, we’ll SSH to the ONTAP management IP address to access the ONTAP CLI. You will need the administrative password for the default service account (fsxadmin) that was supplied during the FSx service provisioning.

Once logged into the ONTAP CLI, we can run the following commands to verify the SVM iSCSI details, including the iSCSI Target name and iSCSI LIF IP addresses, which should match the endpoint addresses from the AWS console.

Figure 4 – iSCSI target name and LIFs via ONTAP CLI.
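
For reference, the checks shown in Figure 4 can be run with ONTAP CLI commands along these lines (a sketch assuming an SVM named svm01; your SVM name will differ). The first command shows the iSCSI target name and service status, and the second lists the iSCSI LIFs and their IP addresses:

  vserver iscsi show -vserver svm01
  network interface show -vserver svm01 -data-protocol iscsi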

Now, we’ll create a 20GB LUN for the Windows client running on VMware Cloud on AWS.

Figure 5 – Create an iSCSI LUN.
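
The LUN creation in Figure 5 can be done with a single ONTAP CLI command. The following sketch assumes a volume named iscsi_vol already exists on the SVM svm01 and names the LUN lun01; adjust the names and the -ostype value to match your environment:

  lun create -vserver svm01 -path /vol/iscsi_vol/lun01 -size 20GB -ostype windows_2008 -space-reserve disabled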

For LUN masking, we’ll configure an initiator group (igroup) that consists of the iSCSI initiator name from the Windows client. Also note that the Asymmetric Logical Unit Access (ALUA) feature is enabled by default on the igroup.

Figure 6 – Create an igroup.
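
A sketch of the corresponding igroup creation, assuming the Windows client’s initiator name is iqn.1991-05.com.microsoft:winclient01 (you can find the actual value on the Configuration tab of the Windows iSCSI Initiator) and continuing the hypothetical SVM name from above:

  lun igroup create -vserver svm01 -igroup winclient01-ig -protocol iscsi -ostype windows -initiator iqn.1991-05.com.microsoft:winclient01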

ALUA is an industry standard protocol for discovering and managing multiple paths to access storage LUNs. The ONTAP file system uses ALUA and MPIO to manage load sharing and path failover over different iSCSI paths.

In the event of an FSx file system failure in the primary AZ, the client host will automatically reroute iSCSI traffic to the standby LIF in the secondary AZ with minimal service disruption.

Finally, we’ll map the LUN to the igroup to make the LUN visible to the client. This concludes the iSCSI configurations on the FSx ONTAP side, and we’ll move to the Windows client side.

Figure 7 – Map LUN to the igroup.
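
Continuing the same hypothetical names, the LUN mapping can be created as follows; once mapped, the LUN becomes visible to any initiator in the igroup:

  lun mapping create -vserver svm01 -path /vol/iscsi_vol/lun01 -igroup winclient01-ig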

Part 2: Windows Client Configurations

On the Windows client, we’ll first need to ensure the MPIO feature is installed, which can be added via Server Manager > Manage > Add Roles and Features. Once the MPIO driver is installed, we need to add support for iSCSI devices.

Figure 8 – Enable MPIO for iSCSI devices.
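
If you prefer to script these steps, the same result can be achieved from an elevated PowerShell session; a minimal sketch (a reboot is typically required after installing the MPIO feature):

  # Install the Multipath I/O feature
  Install-WindowsFeature -Name Multipath-IO

  # After the reboot, claim iSCSI devices with the Microsoft DSM
  Enable-MSDSMAutomaticClaim -BusType iSCSI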

Now go to Control Panel > iSCSI Initiator, and add the iSCSI target of the FSx for NetApp ONTAP file system. Make sure to enable multi-path, and create two iSCSI sessions to connect to both target IPs of the iSCSI LIFs from the primary and secondary AZs.

Figure 9 – Add multiple iSCSI paths.
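
These initiator steps can also be scripted. The following PowerShell sketch uses placeholder values for the two iSCSI LIF addresses (10.0.1.10 and 10.0.2.10); substitute the endpoint addresses noted earlier from your own environment:

  # Ensure the iSCSI initiator service is running and starts automatically
  Set-Service -Name MSiSCSI -StartupType Automatic
  Start-Service -Name MSiSCSI

  # Register both iSCSI LIFs as target portals
  New-IscsiTargetPortal -TargetPortalAddress 10.0.1.10
  New-IscsiTargetPortal -TargetPortalAddress 10.0.2.10

  # Connect one persistent, multipath-enabled session per portal
  Get-IscsiTarget | Connect-IscsiTarget -TargetPortalAddress 10.0.1.10 -IsMultipathEnabled $true -IsPersistent $true
  Get-IscsiTarget | Connect-IscsiTarget -TargetPortalAddress 10.0.2.10 -IsMultipathEnabled $true -IsPersistent $true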

Finally, go to Computer Management > Storage > Disk Management, and you should see the 20 GB LUN discovered via the iSCSI protocol from the FSx ONTAP file system.

The MPIO driver should also report two different iSCSI paths, with one Active/Optimized path going to the active LIF in the primary AZ, and one Active/Non-Optimized path going to the standby LIF in the secondary AZ.

Figure 10 – Access the iSCSI LUN with MPIO.
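
To confirm the multipath configuration from the command line, you can inspect the discovered disk and its paths; a sketch (disk numbering will vary by environment):

  # The FSx for ONTAP LUN should appear as an iSCSI disk
  Get-Disk | Where-Object BusType -eq "iSCSI"

  # Show MPIO path state for all MPIO-claimed disks
  mpclaim -s -d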

Design Considerations

Amazon FSx for NetApp ONTAP provides a highly available and resilient storage service across multiple AWS Availability Zones. If you have strict service-level agreement (SLA) requirements, you should also consider deploying a stretched SDDC cluster across multiple AZs to provide end-to-end high availability from both the compute and storage perspectives.

It’s important to note that accessing multi-AZ FSx for NetApp ONTAP file systems over other storage protocols, such as NFS or SMB, requires VMware Transit Connect to inject the static floating IP range; this range lies outside the connected VPC CIDR address space and therefore cannot be routed to the SDDC via the ENI.

Additional considerations on cost and performance were discussed in detail in a previous AWS blog post.

Summary

In this post, we took a closer look at the architecture and design options of integrating iSCSI Storage with VMware Cloud on AWS workloads using Amazon FSx for NetApp ONTAP.

We went through a real example of deploying an iSCSI LUN via ONTAP CLI and providing the block storage to a virtual machine running on VMware Cloud on AWS.

To learn more, we recommend you review these additional resources: