Building a GitOps pipeline with Amazon EKS

This post is contributed by Anita Buehrle, Director of Content at Weaveworks.

In the first part of this series, we discussed the history of GitOps, its benefits, and how it works. Now that you have an idea of what GitOps is all about, we’re going to dive into how to configure a GitOps pipeline with Flux and Amazon EKS for application deployments.

What do we need for GitOps on AWS?

These are the tools that you’ll be working with in order to create a basic GitOps pipeline for application deployments or for cluster components. If you were running this in production, you would also need an additional component to scan base images for added security.

Component            Implementation   Notes
Amazon EKS           eksctl           Managed Kubernetes cluster.
Git repo             GitHub           A Git repository containing your application and cluster manifest files.
CI system            GitHub Actions   Tests and integrates the code. This can be anything from CircleCI to GitHub Actions.
CD system            Flux v2          Cluster <-> repo synchronization.
Container registry   Amazon ECR       Can be any image registry or even a directory.
Secrets management   Amazon ECR       Can use Sealed Secrets or Vault. In this case, we are using Amazon Elastic Container Registry (ECR), which provides resource-level security.


You will need the following services:

  • AWS account with the ability to create EKS clusters
  • GitHub account
  • Amazon ECR
  • kubectl version 1.18 or newer
  • Kustomize

In this example, you will install the standard Guestbook sample application to EKS. Then you’ll make a change to the button style and deploy the change to EKS with GitHub Actions and GitOps.

Part 1: Fork the Guestbook app repository

You will need a GitHub account for this step.

  • Fork and clone the repository: `weaveworks/guestbook-gitops`
  • Keep the repo handy as you’ll be adding a deployment key to the repo that Flux requires as well as a few credentials once you have your ECR container registry set up.

Part 2: Create an EKS cluster

1. Set up permissions for eksctl:

  • Authenticate your CLI and ensure your IAM credentials are set up properly before running eksctl.
  • To pull images from the ECR registry, you’ll need to set the correct IAM permissions for the same user that set up your cluster.
  • You can set this from the console or from the command line by attaching the AWS-managed “AmazonEC2ContainerRegistryPowerUser” policy, which allows the user to create repositories and to push/pull images.

2. Stand up the cluster: 

  • Download and install, or update, the eksctl command line tool.
  • Run . <(eksctl completion bash)
  • Run eksctl create cluster
  • The cluster will take a few minutes to come up.
  • Once the cluster creation is completed, run:

kubectl get pods -A

You should see something like this:

NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   aws-node-284tq             1/1     Running   0          20m
kube-system   aws-node-csvs7             1/1     Running   0          20m
kube-system   coredns-59dfd6b59f-24qzg   1/1     Running   0          27m
kube-system   coredns-59dfd6b59f-knvbj   1/1     Running   0          27m
kube-system   kube-proxy-25mks           1/1     Running   0          20m
kube-system   kube-proxy-nxf4n           1/1     Running   0          20m

Part 3: Create an ECR container registry and set up a GitHub Actions workflow

In this section, you will set up an ECR registry and a mini CI pipeline using GitHub Actions. The action builds a new container image on a `git push`, tags it with the git SHA, and then pushes it to the ECR registry. It also updates and commits the image tag change to your kustomize file. Once the new image is in the repository, Flux notices it and deploys it to the cluster. This entire flow will become more apparent in the next section, after you’ve configured Flux.
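The flow just described can be sketched as a minimal workflow file. This is a hypothetical outline, not the actual file in the forked repo — the real version lives at .github/workflows/main.yml, and the job, step, and bot identity names here are made up:

```yaml
# Hypothetical sketch of the CI flow described above; see the forked
# repo's .github/workflows/main.yml for the real version.
name: build-and-push
on: push
env:
  AWS_DEFAULT_REGION: eu-west-1
  AWS_ACCOUNT_ID: ${{ secrets.AWS_ACCOUNT_ID }}
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build and push the image, tagged with the git SHA
        run: |
          REGISTRY=$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
          aws ecr get-login-password | docker login --username AWS --password-stdin "$REGISTRY"
          docker build -t "$REGISTRY/guestbook:$GITHUB_SHA" .
          docker push "$REGISTRY/guestbook:$GITHUB_SHA"
      - name: Commit the new image tag to the kustomize file
        run: |
          cd deploy && kustomize edit set image "frontend=*:$GITHUB_SHA"
          git config user.name ci-bot && git config user.email ci-bot@example.com
          git commit -am "update frontend image tag" && git push
```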

1. Spin up ECR

Open ECR in the same region where your cluster is running. You can call the repository guestbook.

2. Specify secrets for ECR

ECR is an encrypted container repository, and as a result any images pushed to or pulled from it need to be authenticated. You can specify secrets for ECR in the Settings → Secrets tab on your forked guestbook-gitops repository. These are needed by the GitHub Actions script before it can push the new image to the container registry.

Create the following three GitHub secrets:

  • AWS_ACCOUNT_ID
  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY

Your AWS account ID is obtainable from the AWS Management Console or by running aws sts get-caller-identity; the user’s access key ID and secret access key are found in your ~/.aws/credentials file.

3. Configure the GitHub Actions workflow

View the workflow in your repo under .github/workflows/main.yml. Ensure you have the environment variables on lines 16-20 of main.yml set up properly.

16 AWS_DEFAULT_REGION: eu-west-1
18 AWS_ACCOUNT_ID: ${{ secrets.AWS_ACCOUNT_ID }}

Line 16 is the region where your EKS cluster resides. This should match the region in your ~/.aws/config or the region you specified when you ran `aws configure`.

Part 4: Install Flux and start shipping

Flux keeps Kubernetes clusters in sync with configuration kept under source control like Git repositories, and automates updates to that configuration when there is new code to deploy. It is built using Kubernetes’ API extension server, and can integrate with Prometheus and other core components of the Kubernetes ecosystem. Flux supports multi-tenancy and syncs an arbitrary number of Git repositories.

In this section, we’ll set up Flux to synchronize changes in the guestbook-gitops repository. This example is a very simple pipeline that demonstrates how to sync one application repo to a single cluster, but as mentioned, Flux is capable of a lot more than that. For an overview of all of Flux’s features, refer to this page for more detail on what’s in Flux v2.

1. For macOS users, you can install the Flux CLI with Homebrew: 

brew install fluxcd/tap/flux

For other installation options, see installing the Flux CLI.

2. Once installed, check that your EKS cluster satisfies the prerequisites: 

flux check --pre

If successful, it returns something similar to this:

► checking prerequisites
✔ kubectl 1.19.3 >=1.18.0
✔ Kubernetes 1.17.9-eks-a84824 >=1.16.0
✔ prerequisites checks passed

Flux supports synchronizing manifests in a single directory, but when you have a lot of YAML, it is more efficient to use Kustomize to manage them. For the Guestbook example, all of the manifests were copied into a deploy directory and a kustomization file was added. For this example, the kustomization file contains a `newTag` directive for the frontend images section of your deployment manifest:


images:
- name: frontend
  newTag: new

As mentioned above, the GitHub Actions script updates the image tag in this file after the image is built and pushed, indicating to Flux that a new image is available in ECR.
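To see that tag update in isolation, here is a minimal, self-contained sketch of the step, using sed against a throwaway kustomization.yaml (the real script may do this differently, e.g. with `kustomize edit set image`; the tag value here is a made-up git SHA):

```shell
# Create a minimal kustomization file like the one in deploy/ (hypothetical contents).
cat > kustomization.yaml <<'EOF'
images:
- name: frontend
  newTag: old
EOF

# Replace the tag the way CI would, using the new image's git SHA.
NEW_TAG=35147c4
sed -i "s/newTag: .*/newTag: $NEW_TAG/" kustomization.yaml

# Show the updated directive.
grep newTag kustomization.yaml
```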

3. Before we see that in action, let’s install Flux and its controllers to your cluster.

  • Set up a GitHub token:

In order to create the reconciliation repository for Flux, you’ll need a personal access token for your GitHub account that has permissions to create repositories. The token must have all permissions under repo checked. Copy your token and keep it in a safe place.

  • On the command line, export your GitHub personal access token and username:

export GITHUB_TOKEN=[your-github-token]
export GITHUB_USER=[your-github-username]

4. Create the Flux reconciliation repository. In this case we’ll call it fleet-infra, but you can call it anything you want. 

In this step, a private repository is created and all of the controllers will also be installed to your EKS cluster. When bootstrapping a repository with Flux, it’s also possible to apply only a sub-directory in the repo and therefore connect to multiple clusters or locations on which to apply configuration. To keep things simple, this example sets the name of one cluster as the apply path:

flux bootstrap github \
  --owner=$GITHUB_USER \
  --repository=fleet-infra \
  --branch=main \
  --path=[cluster-name] \
  --personal


Flux version 2 lets you easily work with multiple clusters and multiple repositories. New cluster configurations and applications can be applied from the same repo by specifying a new path for each.
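Put together, the fleet-infra repository ends up with one directory per cluster. A hypothetical layout for a cluster named dev-cluster, after the sync files from the following steps are added:

```
fleet-infra/
└── dev-cluster/
    ├── flux-system/                   # components committed by flux bootstrap
    ├── guestbook-gitops-source.yaml   # added in a later step
    └── guestbook-gitops-sync.yaml     # added in a later step
```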

Once it’s finished bootstrapping, you will see the following:

► connecting to github.com
✔ repository cloned
✚ generating manifests
✔ components manifests pushed
► installing components in flux-system namespace
deployment "source-controller" successfully rolled out
deployment "kustomize-controller" successfully rolled out
deployment "helm-controller" successfully rolled out
deployment "notification-controller" successfully rolled out

  • Check the cluster for the flux-system namespace with:

kubectl get namespaces

NAME              STATUS   AGE
default           Active   5h25m
flux-system       Active   5h13m
kube-node-lease   Active   5h25m
kube-public       Active   5h25m
kube-system       Active   5h25m

  • Clone and then cd into the newly created private fleet-infra repository:

git clone https://github.com/$GITHUB_USER/fleet-infra

cd fleet-infra

  • Connect the guestbook-gitops repo to the fleet-infra repo with:

flux create source git [guestbook-gitops] \
  --url=https://github.com/[github-user-id/guestbook-gitops] \
  --branch=master \
  --interval=30s \
  --export > ./[cluster-name]/[guestbook-gitops]-source.yaml


[guestbook-gitops] is the name of your app or service

[cluster-name] is the cluster name

[github-user-id/guestbook-gitops] is the forked guestbook repository
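The exported file is a Flux GitRepository object. A sketch of roughly what [guestbook-gitops]-source.yaml will contain, given the flags above (the exact apiVersion depends on your Flux release):

```yaml
# Sketch of the GitRepository emitted by `flux create source git --export`;
# apiVersion may vary by Flux version.
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: guestbook-gitops
  namespace: flux-system
spec:
  interval: 30s
  ref:
    branch: master
  url: https://github.com/[github-user-id]/guestbook-gitops
```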

  • Configure a Flux kustomization to apply the ./deploy directory from your new repo with:

flux create kustomization guestbook-gitops \
  --source=guestbook-gitops \
  --path="./deploy" \
  --prune=true \
  --validation=client \
  --interval=1h \
  --export > ./[cluster-name]/guestbook-gitops-sync.yaml
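The exported guestbook-gitops-sync.yaml is a Flux Kustomization object. A sketch of roughly what it contains, given the flags above (again, the apiVersion may vary by Flux release):

```yaml
# Sketch of the Kustomization emitted by `flux create kustomization --export`;
# apiVersion may vary by Flux version.
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: guestbook-gitops
  namespace: flux-system
spec:
  interval: 1h0m0s
  path: ./deploy
  prune: true
  sourceRef:
    kind: GitRepository
    name: guestbook-gitops
  validation: client
```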

  • Commit all of the changes to the repository with:

git add -A && git commit -m "add guestbook-gitops deploy" && git push

Then watch Flux synchronize the changes:

watch flux get kustomizations

You should now see the latest revisions for the flux toolkit components as well as the guestbook-gitops source pulled and deployed to your cluster:

NAME            REVISION                                        SUSPENDED       READY   MESSAGE
flux-system     main/e1c2a084e398b9d36ce7f5067c44178b5cf9a126   False           True    Applied revision: main/e1c2a084e398b9d36ce7f5067c44178b5cf9a126
guestbook       master/35147c43026fec5a49ae31371ae8c046e4d5860e False           True    Applied revision: master/35147c43026fec5a49ae31371ae8c046e4d5860e

Check that all of the services of the guestbook are deploying and running in the cluster with:

kubectl get pods -A

You should see something like the following:

NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE
default       redis-master-545d695785-cjnks   1/1     Running   0          9m33s
default       redis-slave-84548fdbc-kbnj      1/1     Running   0          9m33s
default       redis-slave-84548fdbc-tqf52     1/1     Running   0          9m33s
flux          flux-75888db95c-9vsp6           1/1     Running   0          17m
flux          memcached-86869f57fd-42f5m      1/1     Running   0          17m
kube-system   aws-node-284tq                  1/1     Running   0          41m
kube-system   aws-node-csvs7                  1/1     Running   0          41m
kube-system   coredns-59dfd6b59f-24qzg        1/1     Running   0          48m
kube-system   coredns-59dfd6b59f-knvbj        1/1     Running   0          48m
kube-system   kube-proxy-25mks                1/1     Running   0          41m
kube-system   kube-proxy-nxf4n                1/1     Running   0          41m

Flux troubleshooting tips

If you have any trouble configuring Flux, run these commands to see how everything is set up and to help you troubleshoot any setup errors:

flux get sources git

flux get kustomizations

To see all of the commands you can run with Flux, run:

flux --help

There are many new features in Flux version 2 that you can explore on your own in the Flux version 2 documentation.

5. Make a change to the guestbook app and deploy it with a `git push`

Note: If you’d like to see the Guestbook UI before you make a change, kick off a build by adding a space to the index.html file and pushing it to git.

Let’s make a simple change to the buttons on the app and push it to Git:

Open the index.html file and change line 15 from:

<button type="button" class="btn btn-primary btn-lg" ng-click="controller.onRedis()">Submit</button>

to:

<button type="button" class="btn btn-primary btn-lg btn-block" ng-click="controller.onRedis()">Submit</button>

Once you’ve made the change to the buttons, do a `git add`, `git commit`, and `git push`. Click on the Actions tab in your repo to watch the pipeline test, build, and tag the image.

Now check to see that the new frontend images are deploying:

kubectl get pods -A

NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE
default       frontend-6f9b84d75d-g48hf       1/1     Running   0          95s
default       frontend-6f9b84d75d-ncqj6       1/1     Running   0          84s
default       frontend-6f9b84d75d-v5pfs       1/1     Running   0          107s
default       redis-master-545d695785-r8ckm   1/1     Running   0          58m
default       redis-slave-84548fdbc-nk4mf     1/1     Running   0          58m
default       redis-slave-84548fdbc-vvmws     1/1     Running   0          58m
flux          flux-75888db95c-pnztj           1/1     Running   0          61m
flux          memcached-86869f57fd-hhqnk      1/1     Running   0          61m
kube-system   aws-node-bcw7j                  1/1     Running   0          67m
kube-system   aws-node-gt52t                  1/1     Running   0          67m
kube-system   coredns-6f6c47b49d-57w8q        1/1     Running   0          74m
kube-system   coredns-6f6c47b49d-k2dc5        1/1     Running   0          74m
kube-system   kube-proxy-mgzwv                1/1     Running   0          67m
kube-system   kube-proxy-pxbfk                1/1     Running   0          67m

Display the Guestbook application

Display the Guestbook frontend in your browser by retrieving the URL from the app running in the cluster with:

kubectl get service frontend

The response should be similar to this:

NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
frontend   ClusterIP   ...          ...           80:32372/TCP   1m

Now that you have Flux set up, you can keep making changes to the UI, and run the change through GitHub Actions to build and push new images to ECR. Flux will notice the new image and deploy your changes to the cluster, kicking your software development into overdrive.

Cleaning up

To uninstall the Flux components from your cluster, run:

flux uninstall

To remove the Flux CLI installed with Homebrew, run:

brew uninstall flux

To delete the cluster, run:

eksctl delete cluster --name [name of your cluster]

Final Thoughts

In these two posts, we explained GitOps concepts and their origins, and then demonstrated how to pull together a GitOps pipeline for application deployments. This is just one example of how to leverage GitOps.