AWS Storage Blog

Optimize WordPress performance on Amazon EKS with Amazon FSx for OpenZFS

As organizations progress in their cloud journey, they increasingly need robust storage options that integrate natively with containers to increase operational efficiency, improve performance, and reduce costs. Amazon Elastic Kubernetes Service (Amazon EKS) meets this demand through Container Storage Interface (CSI) drivers that connect clusters to AWS storage services.

In this post, we dive into the integration between Amazon EKS and Amazon FSx for OpenZFS, exploring how the CSI driver can help unlock workflow efficiencies. EKS clusters and applications often require low-latency, high-speed access to shared configuration files, metadata, assets, or multi-pod data. With its high-throughput, low-latency performance profile and a service-native CSI driver to orchestrate storage workflows, FSx for OpenZFS is a flexible storage option that meets these performance requirements. The FSx for OpenZFS CSI driver streamlines persistent storage for your stateful containerized applications on Amazon EKS.

Solution overview

By default, WordPress stores uploads on the local file system. To enable horizontal scaling, you need to move the WordPress installation and all user customizations (such as configuration, plugins, themes, and user-generated uploads) into a shared file system, like FSx for OpenZFS. This reduces load on the web servers and makes the web tier stateless. We walk through how to dynamically provision and mount FSx for OpenZFS volumes in EKS pods using the FSx for OpenZFS CSI Driver for Amazon EKS. This grants containers native access to process and share data across multiple pods, which serves the web tier for the WordPress application.

When it comes to user session data storage, the WordPress core is completely stateless because it relies on cookies stored in the client's web browser. User session storage isn't a concern unless you have installed custom code (for example, a WordPress plugin) that instead relies on native PHP sessions. We use MySQL running in a pod for demonstration purposes in this post, but recommend Amazon Aurora for the data tier of WordPress in production. Aurora MySQL increases MySQL performance and availability by tightly integrating the database engine with a purpose-built, SSD-backed distributed storage system. You also have the option to offload all static assets, such as images, CSS, and JavaScript files, to an Amazon S3 bucket fronted by Amazon CloudFront caching, using WordPress plugins for AWS.

Solution architecture

The solution architecture demonstrates a scalable WordPress deployment on Amazon EKS with shared storage using a Multi-Availability Zone (Multi-AZ) FSx for OpenZFS file system:

Figure 1: WordPress pods in an EKS cluster mounting the FSx for OpenZFS file system

Key components

  1. User access: Users access the WordPress application through an Application Load Balancer (ALB).
  2. EKS cluster:
    1. Hosts WordPress application pods (using a Deployment with 2 replicas for high availability (HA))
    2. Hosts MySQL database pod and FSx for OpenZFS CSI driver
  3. Storage layer:
    1. PersistentVolumeClaim (PVC): Requests storage for WordPress data
    2. StorageClass (fsxz-vol-sc): Defines storage provisioning parameters
    3. FSx for OpenZFS: Provides high-performance shared NFS storage
  4. Data flow: WordPress pods mount the shared /var/www/html directory from FSx for OpenZFS. The FSx for OpenZFS CSI driver manages dynamic provisioning and lifecycle of storage volumes. Both WordPress pods share the same persistent storage, enabling stateless web tier scaling. The MySQL database pod provides the database backend for WordPress content and configuration.
  5. Color-coded connections: Blue for HTTP traffic, green for NFS storage, purple for MySQL, red dashed for replication.

Understanding Amazon FSx for OpenZFS

FSx for OpenZFS provides fully managed, cost-effective, high-performance shared NFS (v3, v4.0, v4.1, and v4.2) file storage built on the open source OpenZFS file system. The service offers Single-AZ, Single-AZ HA, and Multi-AZ deployment options.

FSx for OpenZFS provides up to 10 GB/s of throughput and 400,000 IOPS for disk operations, with even greater performance when serving data from cache. You can configure the throughput capacity, storage capacity, and IOPS of each of your file systems independently, provisioning only the storage capacity necessary and scaling performance dynamically as your operational needs evolve.
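For example, throughput capacity can be scaled in place with the AWS CLI; the file system ID below is a placeholder:

```shell
# Scale an existing file system's throughput capacity in place.
# fs-0123456789abcdef0 is a placeholder ID; valid ThroughputCapacity
# values depend on the file system's deployment type.
aws fsx update-file-system \
  --file-system-id fs-0123456789abcdef0 \
  --open-zfs-configuration ThroughputCapacity=640
```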

FSx for OpenZFS provides several key features that enable highly efficient storage use when paired with an Amazon EKS workload:

Snapshots: A zero-copy, point-in-time reference to a volume, enabling instant file-level restores of data.
Clones: Zero-copy, read/write volumes created from snapshots through a PVC, each backed by a new PersistentVolume.
Compression: The CSI driver supports either the LZ4 compression algorithm for near-penalty-free compression or the Zstandard (zstd) algorithm for higher compression ratios.
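As a sketch of how snapshots and clones surface in Kubernetes (this assumes the external-snapshotter CRDs and controller are installed, and that a VolumeSnapshotClass for the fsx.openzfs.csi.aws.com driver exists; all resource names here are illustrative):

```shell
cat << EOF > fsx-snapshot.yaml
# Zero-copy, point-in-time snapshot of an existing PVC's volume.
# "fsx-snapshot-class" and "app-data" are illustrative names.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: app-data-snapshot
  namespace: default
spec:
  volumeSnapshotClassName: fsx-snapshot-class
  source:
    persistentVolumeClaimName: app-data
---
# Zero-copy clone: a new read/write PVC restored from the snapshot.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data-clone
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: fsxz-vol-sc
  dataSource:
    name: app-data-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  resources:
    requests:
      storage: 1Gi
EOF
kubectl apply -f fsx-snapshot.yaml
```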

Walkthrough

In this post, we bootstrap the EKS cluster with Auto Mode enabled, and then install the FSx for OpenZFS CSI driver. Amazon EKS Auto Mode automates cluster management without requiring deep Kubernetes expertise: it chooses optimal compute instances, dynamically scales resources, continuously optimizes costs, manages core add-ons, patches operating systems, and integrates with AWS security services. With EKS Auto Mode, AWS takes on more of the operational responsibility than with user-managed infrastructure in your EKS clusters. When enabled, EKS Auto Mode configures cluster capabilities with AWS best practices included, making sure that clusters are ready for application deployment.

Prerequisites

The following prerequisites are needed to implement this solution:

      • An AWS account with permissions to create Amazon EKS and Amazon FSx resources
      • The AWS CLI, eksctl, kubectl, helm, jq, and envsubst installed and configured

Step 1: Create the cluster with EKS Auto Mode

1.1 Create the necessary environment variables:

export CLUSTER_NAME=eks-wordpress
export AWS_REGION=<Your Region>

1.2 Prepare the eksctl config file:

cd /tmp
cat << EOF > cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ${CLUSTER_NAME} # your cluster's name.
  region: ${AWS_REGION} # your cluster's region.
  version: "1.34" # your cluster's current kubernetes version.
autoModeConfig:
  enabled: true
  nodePools: ["general-purpose", "system"]
iam:
  withOIDC: true
  serviceAccounts:
    - metadata:
        name: fsx-openzfs-csi-controller-sa
        namespace: kube-system
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonFSxFullAccess
    - metadata:
        name: aws-load-balancer-controller
        namespace: kube-system
      wellKnownPolicies:
        awsLoadBalancerController: true
EOF

1.3 Create the cluster using the config file:

eksctl create cluster -f cluster.yaml

1.4 Confirm the cluster creation:

eksctl get cluster --region $AWS_REGION

Step 2: Install FSx for OpenZFS CSI driver

The FSx for OpenZFS CSI Driver provides a CSI interface used by container orchestrators to manage the lifecycle of FSx for OpenZFS file systems and volumes. We deploy the driver in the ‘system’ node pool, which was automatically created.

2.1 Add the aws-fsx-openzfs-csi-driver Helm repository:

helm repo add aws-fsx-openzfs-csi-driver https://kubernetes-sigs.github.io/aws-fsx-openzfs-csi-driver
helm repo update

2.2 Install the latest release of the driver:

helm upgrade --install aws-fsx-openzfs-csi-driver \
--namespace kube-system \
--set controller.serviceAccount.create=false \
aws-fsx-openzfs-csi-driver/aws-fsx-openzfs-csi-driver

2.3 When the driver has been deployed, verify the pods are running:

kubectl get pods -n kube-system -l app.kubernetes.io/part-of=aws-fsx-openzfs-csi-driver

You should see output similar to the following:

NAME                                          READY   STATUS    RESTARTS   AGE
fsx-openzfs-csi-controller-7d6bb75bf5-nw4hl   5/5     Running   0          30s
fsx-openzfs-csi-controller-7d6bb75bf5-zwtk9   5/5     Running   0          30s
fsx-openzfs-csi-node-9fqh4                    3/3     Running   0          30s
fsx-openzfs-csi-node-rlj9l                    3/3     Running   0          30s

Step 3: Create the FSx for OpenZFS file system

We need to create an FSx for OpenZFS file system, which you can do through either the AWS Management Console or the AWS CLI. This post demonstrates the deployment process using the AWS CLI.

The file system is deployed in the same Amazon Virtual Private Cloud (Amazon VPC) using the same security group as the EKS cluster. This setup makes sure that application pods in the EKS cluster can successfully mount storage from the FSx for OpenZFS file system.

3.1 Set the security group, subnet, and route table environment variables for the OpenZFS filesystem:

export FSX_SECURITY_GROUP=$(aws eks describe-cluster \
--name $CLUSTER_NAME \
--query 'cluster.resourcesVpcConfig.clusterSecurityGroupId' \
--output text)
export FSX_SUBNET0=$(aws eks describe-cluster --name $CLUSTER_NAME | \
jq --raw-output '.cluster.resourcesVpcConfig.subnetIds[0]')
export FSX_SUBNET1=$(aws eks describe-cluster --name $CLUSTER_NAME | \
jq --raw-output '.cluster.resourcesVpcConfig.subnetIds[1]')
export FSX_ROUTE_TABLES=$(aws eks describe-cluster --name $CLUSTER_NAME | \
  jq -r '.cluster.resourcesVpcConfig.subnetIds[]' | \
  while read subnet; do
    aws ec2 describe-route-tables \
      --filters "Name=association.subnet-id,Values=$subnet" \
      --query "RouteTables[*].RouteTableId" \
      --output text
  done | sort -u | jq -R -s 'split("\n") | map(select(length > 0))')
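Before creating the file system, it's worth sanity-checking that each lookup succeeded; an empty value here would make the create call below fail in a non-obvious way:

```shell
# Verify the discovered networking values; none of these should be empty.
echo "Security group: ${FSX_SECURITY_GROUP}"
echo "Subnets:        ${FSX_SUBNET0} ${FSX_SUBNET1}"
echo "Route tables:   ${FSX_ROUTE_TABLES}"
```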

3.2 Create the FSx for OpenZFS file system:

aws fsx create-file-system \
  --file-system-type OPENZFS \
  --storage-capacity 64 \
  --subnet-ids $FSX_SUBNET0 $FSX_SUBNET1 \
  --security-group-ids $FSX_SECURITY_GROUP \
  --storage-type SSD \
  --open-zfs-configuration "{\"DeploymentType\": \"MULTI_AZ_1\",\"ThroughputCapacity\":320,\"PreferredSubnetId\":\"${FSX_SUBNET0}\",\"RouteTableIds\":${FSX_ROUTE_TABLES}}" \
  --tags '[{"Key": "Name", "Value": "wordpress-data"}]'
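File system creation typically takes several minutes. As a sketch, you can poll the lifecycle state and wait for AVAILABLE before proceeding (the query matches on the Name tag applied above):

```shell
# Look up the new file system by its Name tag, then poll until ready.
FS_ID=$(aws fsx describe-file-systems \
  --query "FileSystems[?Tags[?Key=='Name' && Value=='wordpress-data']].FileSystemId" \
  --output text)
until [ "$(aws fsx describe-file-systems --file-system-id $FS_ID \
  --query 'FileSystems[0].Lifecycle' --output text)" = "AVAILABLE" ]; do
  echo "Waiting for file system $FS_ID to become AVAILABLE..."
  sleep 30
done
echo "File system $FS_ID is AVAILABLE"
```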

Step 4: Dynamic provisioning of an FSx for OpenZFS volume

When creating an FSx for OpenZFS volume, we assume that an FSx for OpenZFS file system and root volume have already been created, which is what we did in the last step by deploying the file system with the AWS CLI.

As a best practice, avoid storing data directly in the root volume of the file system and instead create separate data volumes mounted beneath it. These mounted data volumes are referred to as children of the parent root volume.

Figure 2: Parent-child volume relationship in FSx for OpenZFS

In this step we first create the storage class for the volume. When the storage class is created, we can dynamically provision an FSx for OpenZFS volume using the CSI driver installed earlier.

4.1 Set the VPC ID, VPC CIDR, file system ID, and root volume ID needed to create the volume storage class:

export VPC_ID=$(aws eks describe-cluster \
--name $CLUSTER_NAME \
--query "cluster.resourcesVpcConfig.vpcId" \
--output text)
export VPC_CIDR=$(aws ec2 describe-vpcs \
--vpc-ids $VPC_ID --query "Vpcs[*].CidrBlock" \
--output text)
export FILE_SYSTEM_ID=$(aws fsx describe-file-systems | \
  jq -r '.FileSystems[] | 
    select(any(.Tags[]; .Key=="Name" and .Value=="wordpress-data")) | 
    .FileSystemId')
export ROOT_VOL_ID=$(aws fsx describe-file-systems \
--file-system-id $FILE_SYSTEM_ID | \
jq -r '.FileSystems[].OpenZFSConfiguration.RootVolumeId')
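With the IDs in hand, you can inspect the parent-child hierarchy from Figure 2 directly; dynamically provisioned volumes appear as children of the root volume:

```shell
# List all volumes on the file system with their parent volume IDs.
aws fsx describe-volumes \
  --filters Name=file-system-id,Values=$FILE_SYSTEM_ID \
  --query 'Volumes[].{Id:VolumeId,Name:Name,Parent:OpenZFSConfiguration.ParentVolumeId}' \
  --output table
```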

4.2 Create the volume storage class kustomization and YAML file:

cd /tmp
cat << EOF > kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - fsxz-vol-sc.yaml
EOF
cat << EOF > fsxz-vol-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fsxz-vol-sc
provisioner: fsx.openzfs.csi.aws.com
parameters:
  ResourceType: "volume"
  ParentVolumeId: '"${ROOT_VOL_ID}"'
  CopyTagsToSnapshots: "false"
  DataCompressionType: '"LZ4"'
  NfsExports: '[{"ClientConfigurations": [{"Clients": "${VPC_CIDR}", "Options": ["rw","crossmnt","no_root_squash"]}]}]'
  ReadOnly: "false"
  RecordSizeKiB: "128"
  Tags: '[{"Key": "Name", "Value": "wordpress-data"}]'
  OptionsOnDeletion: '["DELETE_CHILD_VOLUMES_AND_SNAPSHOTS"]'
reclaimPolicy: Delete
allowVolumeExpansion: false
mountOptions:
  - nfsvers=4.2
  - rsize=1048576
  - wsize=1048576
  - timeo=600
  - nconnect=16
  - async
EOF

4.3 Create the volume storage class by applying the kustomization file:

kubectl kustomize /tmp | envsubst | kubectl apply -f-

4.4 Now that the volume storage class has been created, we can create a persistent volume claim for storing the WordPress data on the FSx for OpenZFS file system and dynamically provision a persistent volume:

cd /tmp
cat << EOF > volume-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-data
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: fsxz-vol-sc
  resources:
    requests:
      storage: 1Gi
EOF

4.5 Create the persistent volume claim by applying the volume-pvc.yaml file:

kubectl apply -f volume-pvc.yaml

Step 5: Kubernetes resources deployment for WordPress application

5.1 Create MySQL Database (for demo purposes)

5.1.1 Create a MySQL deployment for demonstration purposes. Before applying, insert your base64-encoded mysql-root-password and mysql-password values in the YAML. In production, use Amazon Aurora MySQL as mentioned in the solution overview.
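The Secret values must be base64 encoded. For example (the passwords shown are placeholders; use your own):

```shell
# Base64 encode the Secret values. -n prevents a trailing newline from
# being encoded into the value. Replace the example passwords with your own.
echo -n 'MyRootPassw0rd!' | base64
echo -n 'MyWordpressPassw0rd!' | base64
```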

cd /tmp
cat << EOF > mysql-deployment.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
  namespace: default
type: Opaque
data:
  mysql-root-password: [REDACTED] #Replace with your base64 encoded password
  mysql-password: [REDACTED] #Replace with your base64 encoded password
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: default
  labels:
    app: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: default
  labels:
    app: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:8.0
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: mysql-root-password
        - name: MYSQL_DATABASE
          value: wordpress
        - name: MYSQL_USER
          value: wordpress
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: mysql-password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        emptyDir: {}
EOF

5.1.2 Apply the MySQL deployment:

kubectl apply -f mysql-deployment.yaml

5.2 Create WordPress deployment

5.2.1 Create the WordPress deployment that uses our FSx for OpenZFS shared storage. Before applying, insert your base64-encoded wordpress-db-password in the YAML:

cd /tmp
cat << EOF > wordpress-deployment.yaml
apiVersion: v1
kind: Secret
metadata:
  name: wordpress-secret
  namespace: default
type: Opaque
data:
  wordpress-db-password: [REDACTED] #Replace with your base64 encoded password
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  namespace: default
  labels:
    app: wordpress
  annotations:
    # Use internal load balancer if LoadBalancer type is needed
    service.beta.kubernetes.io/aws-load-balancer-scheme: internal
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    app: wordpress
  # SECURITY: Use ClusterIP for internal-only access
  # Only change to LoadBalancer if external access is absolutely required
  # and proper security controls are in place
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  namespace: default
  labels:
    app: wordpress
spec:
  replicas: 2
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - image: wordpress:6.4-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: mysql
        - name: WORDPRESS_DB_NAME
          value: wordpress
        - name: WORDPRESS_DB_USER
          value: wordpress
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: wordpress-secret
              key: wordpress-db-password
        - name: WORDPRESS_DEBUG
          value: "1"
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 60
          periodSeconds: 30
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wordpress-data
EOF

5.2.2 Apply the WordPress deployment:

kubectl apply -f wordpress-deployment.yaml

SECURITY NOTE: The WordPress service is configured as ClusterIP for security. This makes sure that the application is only accessible internally within the cluster. Never use the LoadBalancer type without proper security controls, because it can expose WordPress directly to the internet, creating a significant security vulnerability.

5.3 Verify the deployment

5.3.1 Check that all pods are running:

kubectl get pods -n default

You should see output similar to the following:

NAME                         READY   STATUS    RESTARTS   AGE
mysql-674c6dc557-j6gm7       1/1     Running   0          14m
wordpress-679f68768c-fbs4x   1/1     Running   0          13m
wordpress-679f68768c-zjmjj   1/1     Running   0          13m

Note: WordPress pods may take 30-90 seconds to become fully ready (1/1 READY status). During initial startup, you may see the pods in Running state but 0/1 READY.

This is normal as WordPress:

  1. Connects to the MySQL database
  2. Initializes the shared file system on FSx for OpenZFS
  3. Completes its readiness probe checks (30-second initial delay)
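Instead of polling manually, you can block until the readiness probes pass:

```shell
# Wait up to 3 minutes for every WordPress pod to report Ready.
kubectl wait --for=condition=ready pod \
  -l app=wordpress -n default --timeout=180s
```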

If pods remain 0/1 READY after 3 minutes, check the events and logs:

  1. Check events for issues:
    kubectl get events -n default --sort-by='.lastTimestamp' | tail -20
  2. Check WordPress logs:
    kubectl logs -l app=wordpress -n default --tail=20

5.3.2 Check the services:

kubectl get svc -n default

You should see output similar to the following:

NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.100.0.1       <none>        443/TCP    152m
mysql        ClusterIP   None             <none>        3306/TCP   16m
wordpress    ClusterIP   10.100.145.137   <none>        80/TCP     15m

Check that the PVC is bound:

kubectl get pvc -n default

You should see output similar to the following:

NAME                STATUS   VOLUME                                    CAPACITY   ACCESS MODES    STORAGECLASS     VOLUMEATTRIBUTECLASS
wordpress-data      Bound    pvc-9f8a52bb-7d80-477c-b1de-d5685c5e57e1  1Gi        RWX             fsxz-vol-sc      <unset>

5.4 Verify shared storage

5.4.1 Get both pod names:

kubectl get pods -n default -l app=wordpress -o wide

5.4.2 Create a test file from the first pod:

POD1=$(kubectl get pods -n default -l app=wordpress -o jsonpath='{.items[0].metadata.name}')

kubectl exec -it $POD1 -- touch /var/www/html/test-shared-storage.txt

5.4.3 Verify the file exists on the second pod demonstrating shared storage:

POD2=$(kubectl get pods -n default -l app=wordpress -o jsonpath='{.items[1].metadata.name}')

kubectl exec -it $POD2 -- ls -la /var/www/html/ | grep test-shared-storage

5.4.4 Verify data persists across scaling:

  1. Scale down to 1 replica
    kubectl scale deployment wordpress --replicas=1 -n default
  2. Wait for scale down to complete
    kubectl get pods -n default -l app=wordpress
  3. Scale back up to 2 replicas
    kubectl scale deployment wordpress --replicas=2 -n default
  4. Verify the test file still exists after scaling
    kubectl exec -it deployment/wordpress -n default -- ls -la /var/www/html/ | grep test-shared-storage

This confirms the shared storage on FSx for OpenZFS is working correctly and data persists across pod scaling operations.

5.5 Test local access (optional)

Note: This step uses kubectl port-forward, which only works if you're running kubectl from your local machine. If you're using AWS CloudShell or running kubectl from an Amazon EC2 instance, skip to step 5.6 to create an Ingress resource.

5.5.1 Test the WordPress application locally using port-forward:

kubectl port-forward svc/wordpress -n default 8080:80

5.5.2 Open your browser and navigate to http://localhost:8080 to access the WordPress installation.

5.6 Create ingress resource (optional)

For production access, you can create an Ingress resource.

5.6.1 Ensure you have an ingress controller installed:

# Install AWS Load Balancer Controller (if not already installed)
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=eks-wordpress \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set region=$AWS_REGION \
  --set vpcId=$VPC_ID

5.6.2 Create the internal VPC Ingress resource:

cd /tmp
cat << EOF > wordpress-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wordpress-ingress
  namespace: default
  annotations:
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/success-codes: 200,302
    alb.ingress.kubernetes.io/load-balancer-attributes: routing.http.drop_invalid_header_fields.enabled=true
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: wordpress
                port:
                  number: 80
EOF

5.6.3 Apply the Ingress:

kubectl apply -f wordpress-ingress.yaml

5.6.4 Get the Ingress URL:

kubectl get ingress wordpress-ingress -n default

5.7 Monitor the application

5.7.1 Check logs from WordPress pods:

kubectl logs -f deployment/wordpress -n default

5.7.2 Monitor resource usage:

kubectl top pods -n default

Security considerations

For production deployments, avoid exposing WordPress directly to the internet without proper security controls. We recommend the following access methods.

Option 1: Port-forward (most secure for testing)

kubectl port-forward svc/wordpress -n default 8080:80
# Access via: http://localhost:8080

Important notes: Keep the terminal window open while using port-forward. If the page doesn’t load immediately, then wait 10-15 seconds and refresh. WordPress redirects to the installation page automatically. You can also access the setup directly at: http://localhost:8080/wp-admin/install.php

Troubleshooting: If the connection fails, then try a different port: kubectl port-forward svc/wordpress -n default 8081:80. Make sure that no other applications are using port 8080. Check that WordPress pods are running: kubectl get pods -n default -l app=wordpress

Option 2: Internal ALB (VPC-only access)

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wordpress-ingress
  namespace: default
  annotations:
    alb.ingress.kubernetes.io/scheme: internal  # VPC-only access
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/load-balancer-attributes: routing.http.drop_invalid_header_fields.enabled=true
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: wordpress
                port:
                  number: 80

Option 3: Internet-facing with security controls (production)

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wordpress-ingress
  namespace: default
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    # Restrict to specific IP ranges
    alb.ingress.kubernetes.io/inbound-cidrs: "YOUR-OFFICE-IP/32"
    # Add WAF protection
    alb.ingress.kubernetes.io/wafv2-acl-arn: "arn:aws:wafv2:region:account:webacl/wordpress-protection/id"
    # Enable SSL/TLS
    alb.ingress.kubernetes.io/certificate-arn: "arn:aws:acm:region:account:certificate/cert-id"
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/load-balancer-attributes: routing.http.drop_invalid_header_fields.enabled=true
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: wordpress
                port:
                  number: 80

Security best practices

      • Use internal ALBs for demo/testing environments
      • Implement IP restrictions for internet-facing deployments
      • Add AWS WAF protection
      • Enable SSL/TLS certificates
      • Use authentication/authorization (ALB OIDC, etc.)
      • Regular security scanning and updates

Cleaning up

      1. Delete the FSx for OpenZFS file system:
        aws fsx delete-file-system \
          --file-system-id $FILE_SYSTEM_ID \
          --open-zfs-configuration 'SkipFinalBackup=true,Options=["DELETE_CHILD_VOLUMES_AND_SNAPSHOTS"]'
      2. Delete the EKS cluster:
        cd /tmp
        eksctl delete cluster -f cluster.yaml
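Before step 1 above, you may want to delete the Kubernetes resources first, so that the CSI driver (via the PVC's Delete reclaim policy) cleans up the dynamically provisioned child volume; a sketch:

```shell
# Remove the application resources and PVC so the CSI driver deletes the
# dynamically provisioned FSx volume before the file system is removed.
cd /tmp
kubectl delete -f wordpress-ingress.yaml --ignore-not-found
kubectl delete -f wordpress-deployment.yaml
kubectl delete -f mysql-deployment.yaml
kubectl delete -f volume-pvc.yaml
```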

Conclusion

Integrating Amazon FSx for OpenZFS with WordPress on Amazon EKS enhances performance, scalability, and reliability through high-throughput, low-latency shared storage. This provides fast access to files and data, efficiently handling high traffic loads for a smoother user experience. Dynamic storage provisioning optimizes resource management and costs. FSx for OpenZFS also supports horizontal scaling, allowing multiple WordPress pods to share persistent storage without data inconsistency, and streamlines operations with CSI driver integration. It enhances reliability and data integrity, providing content availability across pods and protecting against data loss with HA and disaster recovery features. These benefits make FSx for OpenZFS an ideal choice for running WordPress on Amazon EKS.

Aaron Dailey

Aaron Dailey is a Senior Solutions Architect in the Worldwide Specialist Organization specializing in Storage services at AWS. Aaron has over 20 years of experience partnering with business teams to design and implement infrastructure solutions. When not at AWS, Aaron enjoys day hikes, traveling, and spending time with his family.

Abhi Karode

Abhi Karode is a Senior Solutions Architect in the AWS ISV team based in San Francisco Bay Area. He has deep expertise in AWS, Kubernetes, and cloud-native architectures. He is passionate about helping businesses leverage the benefits of containerization and cloud computing to achieve their goals.

Munish Dabra

Munish Dabra is a Principal Solutions Architect at Amazon Web Services (AWS). His current areas of focus are AI/ML and Observability. He has a strong background in designing and building scalable distributed systems. He enjoys helping customers innovate and transform their businesses on AWS. LinkedIn: /mdabra

Miraj Ranpura

Miraj Ranpura is a Senior Solutions Architect with the AWS Enterprise SA team in Ireland. He has worked in IT infrastructure, architecture, consultancy and system administration for over 12 years and holds a degree in computer engineering from India. He works with companies of all sizes in Ireland to innovate in the cloud using the latest technologies. These days, he is passionate about AWS Containers, Application Modernisation and AWS GenAI offerings.