AWS Storage Blog
Optimize WordPress performance on Amazon EKS with Amazon FSx for OpenZFS
As users progress in their cloud journey, they increasingly need robust storage options that integrate natively with containers to help them increase operational efficiency, improve performance, and reduce costs. Amazon Elastic Kubernetes Service (Amazon EKS) meets this demand through Container Storage Interface (CSI) drivers.
In this post, we dive into the integration between Amazon EKS and Amazon FSx for OpenZFS, exploring how the CSI driver can help unlock workflow efficiencies. EKS clusters and applications often require low-latency, high-speed access to shared configuration files, metadata, assets, or multi-pod data. FSx for OpenZFS provides a flexible storage option that meets these performance requirements. This is due to its high-throughput, low-latency performance profile, and ability to use the service-native CSI driver to orchestrate storage workflows. The FSx for OpenZFS CSI driver streamlines persistent storage for your stateful containerized applications on Amazon EKS.
Solution overview
By default, WordPress stores uploads on the local file system. To enable horizontal scaling, you need to move the WordPress installation and all user customizations (such as configuration, plugins, themes, and user-generated uploads) into a shared file system, like FSx for OpenZFS. This reduces load on the web servers and makes the web tier stateless. We walk through how to dynamically provision and mount FSx for OpenZFS volumes in EKS pods using the FSx for OpenZFS CSI Driver for Amazon EKS. This grants containers native access to process and share data across multiple pods, which serves the web tier for the WordPress application.
When it comes to user session data storage, the WordPress core is completely stateless because it relies on cookies stored in the client's web browser. Session storage isn't a concern unless you have installed custom code (for example, a WordPress plugin) that instead relies on native PHP sessions. We use MySQL running in a pod for demonstration purposes in this post, but recommend using Amazon Aurora for the data tier of WordPress in production. Aurora MySQL-Compatible Edition increases MySQL performance and availability by tightly integrating the database engine with a purpose-built distributed storage system, backed by SSD. You also have the option to offload all static assets, such as images, CSS, and JavaScript files, to an Amazon S3 bucket with Amazon CloudFront caching in front, using WordPress plugins for AWS.
Solution architecture
The solution architecture demonstrates a scalable WordPress deployment on Amazon EKS with shared storage using a Multi-Availability Zone (Multi-AZ) FSx for OpenZFS file system:
Figure 1: WordPress pods in an EKS cluster mounting the FSx for OpenZFS file system
Key components
- User access: Users access the WordPress application through an Application Load Balancer (ALB).
- EKS cluster:
- Hosts WordPress application pods (using a Deployment with 2 replicas for high availability (HA))
- Hosts MySQL database pod and FSx for OpenZFS CSI driver
- Storage layer:
- PersistentVolumeClaim (PVC): Requests storage for WordPress data
- StorageClass (fsxz-vol-sc): Defines storage provisioning parameters
- FSx for OpenZFS: Provides high-performance shared NFS storage
- Data flow: WordPress pods mount the shared /var/www/html directory from FSx for OpenZFS. The FSx for OpenZFS CSI driver manages dynamic provisioning and lifecycle of storage volumes. Both WordPress pods share the same persistent storage, enabling stateless web tier scaling. The MySQL database pod provides the database backend for WordPress content and configuration.
- Color-coded connections: Blue for HTTP traffic, green for NFS storage, purple for MySQL, red dashed for replication.
Understanding Amazon FSx for OpenZFS
FSx for OpenZFS provides fully managed, cost-effective, high-performance shared NFS (v3, v4.0, v4.1, and v4.2) file storage built on the open source OpenZFS file system. The service offers Single-AZ, Single-AZ HA, and Multi-AZ deployment options.
FSx for OpenZFS provides up to 10 GB/s throughput and 400,000 IOPS for disk operations, with even greater performance when serving data from cache. You can configure the throughput, capacity, and IOPS of each of your file systems independently, enabling you to provision only the storage capacity necessary and to scale performance dynamically as operational needs evolve.
FSx for OpenZFS provides several key features that enable highly efficient storage use when paired with an Amazon EKS workload:
- Clones: Zero-copy clones are created from snapshots through a PVC, producing a new read/write PersistentVolume without duplicating data.
- Compression: The CSI driver supports enabling either the LZ4 compression algorithm for effectively penalty-free compression or the Zstandard (zstd) algorithm for higher compression ratios.
Walkthrough
In this post, we bootstrap the EKS cluster with Auto Mode enabled, and then install the FSx for OpenZFS CSI driver. You can use Amazon EKS Auto Mode to automate cluster management without deep Kubernetes expertise: it chooses optimal compute instances, dynamically scales resources, continuously optimizes costs, manages core add-ons, patches operating systems, and integrates with AWS security services. With EKS Auto Mode, AWS takes on more of the operational responsibility than with user-managed infrastructure in your EKS clusters. When enabled, EKS Auto Mode configures cluster capabilities with AWS best practices included, making sure that clusters are ready for application deployment.
Prerequisites
The following prerequisites are needed to implement this solution:
- Intermediate Kubernetes and Linux skills as an administrator
- An Amazon Virtual Private Cloud (Amazon VPC) with a private subnet
- A configured and operating AWS Command Line Interface (AWS CLI), jq package, and gettext package
- kubectl CLI
- eksctl CLI
- Helm
Step 1: Create the cluster with EKS Auto Mode
1.1 Create the necessary environment variables:
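For example, using a hypothetical cluster name, Region, and Kubernetes version (substitute your own values):

```shell
# Hypothetical values -- substitute your own Region, cluster name, and version
export AWS_REGION=us-east-1
export CLUSTER_NAME=wordpress-eks
export K8S_VERSION="1.31"
```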
1.2 Prepare the eksctl config file:
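A minimal eksctl config for an Auto Mode cluster might look like the following; the file name cluster.yaml and the fallback values are illustrative, and you can pin the cluster to an existing VPC through eksctl's vpc section if needed:

```shell
# Write the eksctl config; variables fall back to illustrative defaults
cat > cluster.yaml <<EOF
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ${CLUSTER_NAME:-wordpress-eks}
  region: ${AWS_REGION:-us-east-1}
  version: "${K8S_VERSION:-1.31}"

# EKS Auto Mode manages compute, networking, and core add-ons for you
autoModeConfig:
  enabled: true
EOF
```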
1.3 Create the cluster using the config file:
1.4 Confirm the cluster creation:
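Steps 1.3 and 1.4 can be run as follows; cluster creation typically takes around 15 minutes:

```shell
# 1.3 Create the cluster from the config file
eksctl create cluster -f cluster.yaml

# 1.4 Confirm the cluster is ACTIVE; with Auto Mode, worker nodes appear
# only after workloads are scheduled
aws eks describe-cluster --name "$CLUSTER_NAME" \
  --query 'cluster.status' --output text
kubectl get nodes
```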
Step 2: Install FSx for OpenZFS CSI driver
The FSx for OpenZFS CSI Driver provides a CSI interface used by container orchestrators to manage the lifecycle of FSx for OpenZFS file systems and volumes. We deploy the driver in the ‘system’ node pool, which was automatically created.
2.1 Add the aws-fsx-openzfs-csi-driver Helm repository:
2.2 Install the latest release of the driver:
2.3 When the driver has been deployed, verify the pods are running:
You should see output similar to the following:
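The Helm repository URL and chart name below follow the kubernetes-sigs/aws-fsx-openzfs-csi-driver project README; verify them against the release you install. Note that the driver also needs IAM permissions to call the FSx API (for example, through EKS Pod Identity or IRSA).

```shell
# 2.1 Add the Helm repository and refresh the index
helm repo add aws-fsx-openzfs-csi-driver \
  https://kubernetes-sigs.github.io/aws-fsx-openzfs-csi-driver
helm repo update

# 2.2 Install the latest release of the driver into kube-system
helm upgrade --install aws-fsx-openzfs-csi-driver \
  --namespace kube-system \
  aws-fsx-openzfs-csi-driver/aws-fsx-openzfs-csi-driver

# 2.3 Verify the controller and node pods are running (pod names and
# READY counts vary by chart version)
kubectl get pods -n kube-system | grep -i fsx
```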
Step 3: Create the FSx for OpenZFS file system
We need to create an FSx for OpenZFS file system. You can do this through either the AWS Management Console or AWS CLI. This post demonstrates the deployment process using the AWS CLI.
The file system is deployed in the same VPC using the same security group as the EKS cluster. This setup makes sure that application pods in the EKS cluster can successfully mount storage from the FSx for OpenZFS file system.
3.1 Set the security group, subnet, and route table environment variables for the OpenZFS filesystem:
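The security group and VPC can be looked up from the cluster; the subnet and route table IDs below are placeholders you must replace with values from your own VPC:

```shell
# Reuse the cluster's VPC and security group so pods can reach the file system
export VPC_ID=$(aws eks describe-cluster --name "$CLUSTER_NAME" \
  --query 'cluster.resourcesVpcConfig.vpcId' --output text)
export SECURITY_GROUP_ID=$(aws eks describe-cluster --name "$CLUSTER_NAME" \
  --query 'cluster.resourcesVpcConfig.clusterSecurityGroupId' --output text)

# Placeholders -- use two private subnets (in different AZs) from the same
# VPC, plus the route table associated with them
export SUBNET_ID_1=subnet-aaaaaaaaaaaaaaaaa
export SUBNET_ID_2=subnet-bbbbbbbbbbbbbbbbb
export ROUTE_TABLE_ID=rtb-ccccccccccccccccc
```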
3.2 Create the FSx for OpenZFS file system:
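A Multi-AZ file system might be created as follows; the storage capacity (64 GiB) and throughput (160 MB/s) values are examples sized for a demo, not a production recommendation:

```shell
# Create a Multi-AZ FSx for OpenZFS file system in the cluster's VPC
aws fsx create-file-system \
  --file-system-type OPENZFS \
  --storage-capacity 64 \
  --subnet-ids "$SUBNET_ID_1" "$SUBNET_ID_2" \
  --security-group-ids "$SECURITY_GROUP_ID" \
  --open-zfs-configuration "DeploymentType=MULTI_AZ_1,ThroughputCapacity=160,PreferredSubnetId=$SUBNET_ID_1,RouteTableIds=$ROUTE_TABLE_ID" \
  --tags Key=Name,Value=wordpress-fsxz

# Wait for the Lifecycle state to become AVAILABLE (several minutes)
aws fsx describe-file-systems \
  --query 'FileSystems[*].[FileSystemId,Lifecycle]' --output table
```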
Step 4: Dynamic provisioning of an FSx for OpenZFS volume
When creating an FSx for OpenZFS volume, we assume that an FSx for OpenZFS file system and root volume have already been created. This is what we created in the previous step by deploying the file system using the AWS CLI.
As a best practice, avoid storing data directly in the root volume of the file system and instead create separate data volumes mounted beneath it. These mounted data volumes are referred to as children of the parent root volume.
Figure 2: Parent-child volume relationship in FSx for OpenZFS
In this step we first create the storage class for the volume. When the storage class is created, we can dynamically provision an FSx for OpenZFS volume using the CSI driver installed earlier.
4.1 Set the VPC ID, VPC CIDR, file system ID, and root volume ID needed to create the volume storage class:
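These values can be looked up with the AWS CLI; the first query assumes the file system created above is your only OpenZFS file system in the Region:

```shell
export VPC_CIDR=$(aws ec2 describe-vpcs --vpc-ids "$VPC_ID" \
  --query 'Vpcs[0].CidrBlock' --output text)

# Assumes a single OpenZFS file system; otherwise filter by tag or ID
export FILE_SYSTEM_ID=$(aws fsx describe-file-systems \
  --query 'FileSystems[?FileSystemType==`OPENZFS`]|[0].FileSystemId' \
  --output text)
export ROOT_VOLUME_ID=$(aws fsx describe-file-systems \
  --file-system-ids "$FILE_SYSTEM_ID" \
  --query 'FileSystems[0].OpenZFSConfiguration.RootVolumeId' --output text)
```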
4.2 Create the volume storage class kustomization and YAML file:
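A sketch of the two files follows. The parameter names and the JSON-in-string quoting style follow the driver's dynamic volume provisioning examples, which map parameters onto the FSx CreateVolume API; confirm them against the driver version you installed.

```shell
# StorageClass for dynamically provisioned child volumes of the root volume;
# parameter values are JSON-encoded strings, as in the driver's examples
cat > storageclass.yaml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fsxz-vol-sc
provisioner: fsx.openzfs.csi.aws.com
parameters:
  ResourceType: "volume"
  ParentVolumeId: '"${ROOT_VOLUME_ID}"'
  DataCompressionType: '"LZ4"'
  NfsExports: '[{"ClientConfigurations": [{"Clients": "${VPC_CIDR}", "Options": ["rw", "crossmnt"]}]}]'
reclaimPolicy: Delete
mountOptions:
  - nfsvers=4.1
EOF

# Kustomization that references the StorageClass manifest
cat > kustomization.yaml <<EOF
resources:
  - storageclass.yaml
EOF
```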
4.3 Create the volume storage class by applying the kustomization file:
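Assuming both files are in the current directory:

```shell
kubectl apply -k .
kubectl get storageclass fsxz-vol-sc
```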
4.4 Now that the volume storage class has been created, we can create a persistent volume claim for storing the WordPress data on the FSx for OpenZFS file system and dynamically provision a persistent volume:
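A sketch of the claim; ReadWriteMany lets both WordPress replicas mount the same NFS-backed volume, and the requested size is illustrative:

```shell
cat > volume-pvc.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: fsxz-vol-sc
  resources:
    requests:
      storage: 100Gi
EOF
```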
4.5 Create the persistent volume claim by applying the volume-pvc.yaml file:
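Apply the claim and confirm it binds:

```shell
kubectl apply -f volume-pvc.yaml
kubectl get pvc wordpress-pvc   # STATUS should move to Bound
```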
Step 5: Kubernetes resources deployment for WordPress application
5.1 Create MySQL Database (for demo purposes)
5.1.1 Create a MySQL deployment for demonstration after inserting mysql-root-password and mysql-password base64 encoded values in the YAML. In production, you should use Amazon Aurora MySQL as mentioned in the solution overview.
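A minimal demo manifest might look like the following; the file name mysql.yaml is illustrative, and the Secret data values are placeholders you must replace with your own base64-encoded passwords:

```shell
cat > mysql.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: mysql-pass
type: Opaque
data:
  mysql-root-password: <base64-encoded-root-password>
  mysql-password: <base64-encoded-wordpress-password>
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
spec:
  clusterIP: None        # headless service; WordPress connects by DNS name
  ports:
    - port: 3306
  selector:
    app: wordpress-mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress-mysql
  template:
    metadata:
      labels:
        app: wordpress-mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: mysql-root-password
            - name: MYSQL_DATABASE
              value: wordpress
            - name: MYSQL_USER
              value: wordpress
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: mysql-password
EOF
```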
5.1.2 Apply the MySQL deployment:
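Apply the manifest and wait for the rollout:

```shell
kubectl apply -f mysql.yaml
kubectl rollout status deployment/wordpress-mysql
```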
5.2 Create WordPress deployment
5.2.1 Create the WordPress deployment that uses our FSx for OpenZFS shared storage after inserting your base64 encoded wordpress-db-password in the YAML:
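A sketch of the WordPress manifest; the file name, image tag, and Secret placeholder are illustrative. Both replicas mount the wordpress-pvc claim at /var/www/html, and the readiness probe uses the 30-second initial delay referenced later in this post:

```shell
cat > wordpress.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: wordpress-db
type: Opaque
data:
  wordpress-db-password: <base64-encoded-wordpress-password>
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  type: ClusterIP       # internal only; see the security note below
  ports:
    - port: 80
  selector:
    app: wordpress
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 2
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress
          image: wordpress:6-apache
          ports:
            - containerPort: 80
          env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql
            - name: WORDPRESS_DB_NAME
              value: wordpress
            - name: WORDPRESS_DB_USER
              value: wordpress
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: wordpress-db
                  key: wordpress-db-password
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 30
          volumeMounts:
            - name: wordpress-data
              mountPath: /var/www/html   # shared FSx for OpenZFS volume
      volumes:
        - name: wordpress-data
          persistentVolumeClaim:
            claimName: wordpress-pvc
EOF
```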
5.2.2 Apply the WordPress deployment:
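Apply the manifest:

```shell
kubectl apply -f wordpress.yaml
```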
SECURITY NOTE: The WordPress service is configured as ClusterIP for security. This makes sure that the application is only accessible internally within the cluster. Never use the LoadBalancer type without proper security controls, because it can expose WordPress directly to the internet, creating a significant security vulnerability.
5.3 Verify the deployment
5.3.1 Check that all pods are running:
You should see output similar to the following:
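For example (pod name suffixes and ages are illustrative):

```shell
kubectl get pods
# NAME                               READY   STATUS    RESTARTS   AGE
# wordpress-6d8f9c7b54-abcde         1/1     Running   0          2m
# wordpress-6d8f9c7b54-fghij         1/1     Running   0          2m
# wordpress-mysql-5f6d8c9b7-klmno    1/1     Running   0          4m
```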
Note: WordPress pods may take 30-90 seconds to become fully ready (1/1 READY status). During initial startup, you may see the pods in Running state but 0/1 READY.
This is normal as WordPress:
- Connects to the MySQL database
- Initializes the shared file system on FSx for OpenZFS
- Completes its readiness probe checks (30-second initial delay)
If pods remain 0/1 READY after 3 minutes, check the events and logs:
- Check events for issues:
- Check WordPress logs:
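Both checks can be run as follows:

```shell
# Recent events, most recent last
kubectl get events --sort-by=.lastTimestamp | tail -20

# Logs from the WordPress pods
kubectl logs -l app=wordpress --tail=50
```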
5.3.2 Check the services:
You should see output similar to the following:
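List the services:

```shell
kubectl get svc
# Expect wordpress (ClusterIP, port 80) and wordpress-mysql (headless)
```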
Check that the PVC is bound:
You should see output similar to the following:
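Check the claim:

```shell
kubectl get pvc wordpress-pvc
# STATUS should be Bound, using the fsxz-vol-sc storage class
```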
5.4 Verify shared storage
5.4.1 Get both pod names:
5.4.2 Create a test file from the first pod:
5.4.3 Verify the file exists on the second pod demonstrating shared storage:
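Steps 5.4.1 through 5.4.3 can be run as follows; the test file name is arbitrary:

```shell
# 5.4.1 Capture both WordPress pod names in a bash array
PODS=($(kubectl get pods -l app=wordpress \
  -o jsonpath='{.items[*].metadata.name}'))

# 5.4.2 Write a test file from the first pod
kubectl exec "${PODS[0]}" -- \
  sh -c 'echo "hello from the first pod" > /var/www/html/shared-test.txt'

# 5.4.3 Read it back from the second pod -- both mount the same FSx volume
kubectl exec "${PODS[1]}" -- cat /var/www/html/shared-test.txt
```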
5.4.4 Verify data persists across scaling:
- Scale down to 1 replica
- Wait for scale down to complete
- Scale back up to 2 replicas
- Verify the test file still exists after scaling
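The four bullets above can be run as:

```shell
kubectl scale deployment wordpress --replicas=1
kubectl rollout status deployment/wordpress
kubectl scale deployment wordpress --replicas=2
kubectl rollout status deployment/wordpress

# The file survives pod churn because it lives on FSx, not in the pod
kubectl exec deploy/wordpress -- cat /var/www/html/shared-test.txt
```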
This confirms the shared storage on FSx for OpenZFS is working correctly and data persists across pod scaling operations.
5.5 Test local access (optional)
Note: This step uses kubectl port-forward which only works if you’re running kubectl from your local machine. If you’re using AWS CloudShell or running kubectl from an Amazon EC2 instance, skip to step 5.6 to create an Ingress resource.
5.5.1 Test the WordPress application locally using port-forward:
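Forward a local port to the ClusterIP service:

```shell
# Keep this terminal open while browsing; Ctrl+C stops the forward
kubectl port-forward svc/wordpress 8080:80
```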
5.5.2 Open your browser and navigate to http://localhost:8080 to access the WordPress installation.
5.6 Create ingress resource (optional)
For production access, you can create an Ingress resource.
5.6.1 Ensure you have an ingress controller installed:
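EKS Auto Mode includes a built-in ALB capability; on other clusters you would install the AWS Load Balancer Controller. Either way, confirm an ALB ingress class is available (with Auto Mode you may need to create one that references its built-in controller):

```shell
kubectl get ingressclass
```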
5.6.2 Create the internal VPC Ingress resource:
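A sketch of an internal (VPC-only) ALB Ingress; the file name is illustrative and the annotations follow the AWS Load Balancer Controller's conventions:

```shell
cat > wordpress-ingress.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wordpress
  annotations:
    alb.ingress.kubernetes.io/scheme: internal   # VPC-only ALB
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: wordpress
                port:
                  number: 80
EOF
```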
5.6.3 Apply the Ingress:
5.6.4 Get the Ingress URL:
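Steps 5.6.3 and 5.6.4 can be run as follows; the ALB takes a few minutes to provision before the hostname appears:

```shell
# 5.6.3 Apply the Ingress
kubectl apply -f wordpress-ingress.yaml

# 5.6.4 Get the ALB DNS name once it has provisioned
kubectl get ingress wordpress \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```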
5.7 Monitor the application
5.7.1 Check logs from WordPress pods:
5.7.2 Monitor resource usage:
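Steps 5.7.1 and 5.7.2 can be run as follows:

```shell
# 5.7.1 Stream logs from all WordPress pods
kubectl logs -l app=wordpress -f --tail=100

# 5.7.2 Resource usage (requires metrics-server in the cluster)
kubectl top pods
kubectl top nodes
```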
Security considerations
For production deployments, avoid exposing WordPress directly to the internet without proper security controls. We recommend the following access methods.
Option 1: Port-forward (most secure for testing)
Important notes: Keep the terminal window open while using port-forward. If the page doesn’t load immediately, then wait 10-15 seconds and refresh. WordPress redirects to the installation page automatically. You can also access the setup directly at: http://localhost:8080/wp-admin/install.php
Troubleshooting: If the connection fails, then try a different port: kubectl port-forward svc/wordpress -n default 8081:80. Make sure that no other applications are using port 8080. Check that WordPress pods are running: kubectl get pods -n default -l app=wordpress
Option 2: Internal ALB (VPC-Only Access)
Option 3: Internet-facing with security controls (production)
Security best practices
- Use internal ALBs for demo/testing environments
- Implement IP restrictions for internet-facing deployments
- Add AWS WAF protection
- Enable SSL/TLS certificates
- Use authentication/authorization (ALB OIDC, etc.)
- Regular security scanning and updates
Cleaning up
- Delete the FSx for OpenZFS file system:
- Delete the EKS cluster:
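Delete resources in this order so the CSI driver can release the dynamically provisioned FSx volume before the file system and cluster go away; the file names are from the earlier steps:

```shell
# Remove the application and PVC first; reclaimPolicy Delete cleans up
# the dynamically provisioned FSx child volume
kubectl delete -f wordpress-ingress.yaml --ignore-not-found
kubectl delete -f wordpress.yaml -f mysql.yaml --ignore-not-found
kubectl delete pvc wordpress-pvc --ignore-not-found

# Delete the FSx for OpenZFS file system (child volumes must be gone first)
aws fsx delete-file-system --file-system-id "$FILE_SYSTEM_ID"

# Delete the EKS cluster
eksctl delete cluster --name "$CLUSTER_NAME" --region "$AWS_REGION"
```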
Conclusion
Integrating Amazon FSx for OpenZFS with WordPress on Amazon EKS enhances performance, scalability, and reliability through high-throughput, low-latency shared storage. This provides fast access to files and data, efficiently handling high traffic loads for a smoother user experience. Dynamic storage provisioning optimizes resource management and costs. FSx for OpenZFS also supports horizontal scaling, allowing multiple WordPress pods to share persistent storage without data inconsistency, and streamlines operations with CSI driver integration. It enhances reliability and data integrity, providing content availability across pods and protecting against data loss with HA and disaster recovery features. These benefits make FSx for OpenZFS an ideal choice for running WordPress on Amazon EKS.

