AWS Open Source Blog
Connecting AWS managed services to your Argo CD pipeline with open source Crossplane
This article is a guest post from Dan Mangum, a software engineer at Upbound.
Cloud infrastructure is maturing rapidly, enabling businesses to take advantage of new architectures and services alongside applications running on Amazon Elastic Container Service (Amazon ECS). Infrastructure teams find that they are managing both traditional cloud environments, using tools such as AWS CloudFormation, as well as managed container-native systems, like Amazon ECS or Kubernetes.
To make sense of this increased capability and complexity, users have turned to GitOps and tools such as Argo CD and Flux CD as a way of managing their workflows. This allows organizations to have an opinionated workflow to author and deploy applications, but the applications must be specific to that environment or platform.
This is where the Crossplane project comes in. Crossplane, which is released under the Apache 2.0 license, enables complex applications and infrastructure to be defined, deployed, and managed all from kubectl. Crossplane uses the Kubernetes API to declaratively define, deploy, and manage cloud infrastructure, including SaaS services. Crossplane’s functionality can be included in your CI/CD pipeline, giving a singular approach to defining and deploying any resource, whether that resource is Kubernetes-native or a component of a managed service.
In this article, we explain how to use Crossplane and Argo CD to deploy a simple application using Amazon Relational Database Service (Amazon RDS) to two AWS regions.
Prerequisites
To set up our deployment pipeline, we will need to install Crossplane and Argo CD in a “control” Kubernetes cluster. A control cluster is similar to the concept of a “bootstrap” cluster, but differs in that it continues to manage the “bootstrapped” clusters after they are created. From the control cluster, we will be able to provision more Kubernetes clusters, deploy applications into them, and deploy managed services that our applications will consume. While any compute service that exposes the Kubernetes API is suitable for our use case, we will choose Amazon Elastic Kubernetes Service (Amazon EKS) for our control cluster.
Argo CD allows us to deploy continuously from any hosted Git repository. We will be using GitHub for this article, and a public repository with our infrastructure already exists on GitHub. If you want to use your own repository, or if you want to deploy from a fork of the existing repository, you will need to have a GitHub account.
Lastly, Crossplane is distributed using Helm. In order to easily install Crossplane and the necessary providers, Helm must be installed. We will be using Helm 3 to install Crossplane, but instructions for using older versions of Helm can be found in the Crossplane installation documentation.
Before getting started, make sure you have done all of the following:
- Installed and configured the AWS Command Line Interface (AWS CLI) tool with administrative privileges.
- Provisioned and connected to an Amazon EKS cluster.
- Logged in to GitHub or signed up for a new account, if you want to deploy your own custom infrastructure.
- Installed Helm.
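Before continuing, a quick local check that the required tooling is on your PATH can save debugging later. This is a minimal sketch: it only reports whether each CLI is present, not whether it is configured correctly.

```shell
# Report whether each prerequisite CLI is installed.
# Prints "MISSING" for any tool that is not on the PATH.
for tool in aws kubectl helm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```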
Install and set up Crossplane
To deploy managed services on AWS, we must install Crossplane and provider-aws into our Amazon EKS cluster. This can be accomplished with the following commands when using Helm 3:
kubectl create namespace crossplane-system
helm repo add crossplane-alpha https://charts.crossplane.io/alpha
helm install crossplane --namespace crossplane-system crossplane-alpha/crossplane --version 0.8.0 --set clusterStacks.aws.deploy=true --set clusterStacks.aws.version=v0.6.0 --disable-openapi-validation
After you have completed the installation, you should see the following four pods in the crossplane-system namespace:
$ kubectl get pods -n crossplane-system
NAME READY STATUS RESTARTS AGE
crossplane-65bdd6599c-sxtr9 1/1 Running 0 2m26s
crossplane-stack-manager-5556749f76-9zvl4 1/1 Running 0 2m26s
stack-aws-578bt 0/1 Completed 0 2m18s
stack-aws-858b7b8bb9-v2cz6 1/1 Running 0 2m1s
We also want to load our AWS credentials into the control cluster so that Crossplane is able to provision infrastructure on our behalf. The Crossplane docs contain extensive documentation on how to add your AWS credentials. However, we are going to create two separate AWS Provider objects in this tutorial: one to provision resources in us-west-2 and one for us-east-1. Both of these objects can reference the same account Secret, but the region field should be different.
To create the credentials Secret, run the following commands (this assumes use of the default profile):
BASE64ENCODED_AWS_ACCOUNT_CREDS=$(echo -e "[default]\naws_access_key_id = $(aws configure get aws_access_key_id --profile default)\naws_secret_access_key = $(aws configure get aws_secret_access_key --profile default)" | base64 | tr -d "\n")
cat > aws-credentials.yaml <<EOF
---
apiVersion: v1
kind: Secret
metadata:
  name: aws-account-creds
  namespace: crossplane-system
type: Opaque
data:
  credentials: ${BASE64ENCODED_AWS_ACCOUNT_CREDS}
---
apiVersion: aws.crossplane.io/v1alpha3
kind: Provider
metadata:
  name: aws-provider-west
spec:
  credentialsSecretRef:
    name: aws-account-creds
    namespace: crossplane-system
    key: credentials
  region: us-west-2
---
apiVersion: aws.crossplane.io/v1alpha3
kind: Provider
metadata:
  name: aws-provider-east
spec:
  credentialsSecretRef:
    name: aws-account-creds
    namespace: crossplane-system
    key: credentials
  region: us-east-1
EOF
kubectl apply -f "aws-credentials.yaml"
After completing the steps, you should see creation of the following resources:
secret/aws-account-creds created
provider.aws.crossplane.io/aws-provider-west created
provider.aws.crossplane.io/aws-provider-east created
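If you want to sanity-check the encoding step locally, the same pipeline round-trips with dummy keys as follows (this touches neither AWS nor the cluster; the key values are placeholders):

```shell
# Encode a dummy credentials file with the same pipeline used above,
# then decode it to show the INI format Crossplane reads from the Secret.
DUMMY_CREDS=$(printf '[default]\naws_access_key_id = AKIAEXAMPLE\naws_secret_access_key = wJalrEXAMPLEKEY' | base64 | tr -d '\n')
echo "$DUMMY_CREDS" | base64 --decode
```

The decoded output is the [default] profile block, which is what provider-aws parses out of the credentials key.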
Lastly, create a new Namespace that can be used for the resources created in the remainder of this guide:
kubectl create namespace wordpress-app
Install Argo CD
Argo CD can be installed with the following commands:
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
To view the Argo CD UI on your local machine, you can port-forward from your Amazon EKS cluster:
kubectl port-forward svc/argocd-server -n argocd 8080:443
Note that Argo CD serves a self-signed certificate by default, so your browser may warn about an insecure connection when you access the port-forwarded UI on localhost.
Now if you navigate to localhost:8080, you should be able to view the Argo CD UI. For the initial login, the username is admin and the password is the pod name of the Argo CD API server. To find the generated pod name, run the following command:
kubectl get pods -n argocd -l app.kubernetes.io/name=argocd-server -o name | cut -d'/' -f 2
For further information, take a look at the Argo CD Getting Started guide.
Deploy infrastructure
Because Crossplane allows an application to be deployed in an infrastructure-agnostic manner, we will provision infrastructure in both us-west-2 and us-east-1, then deploy WordPress to each. To start, we can set up an Argo CD project by launching the UI, logging in, and going to Settings | Projects | New Project. You may configure your project however you see fit, but it is easiest to grant the most permissive access for this tutorial and narrow the scope if you intend to run in a long-term production scenario. Importantly, you must at least whitelist all of the Crossplane cluster-scoped object types that you intend to use, and you must enable in-cluster as a destination.
Argo CD comes with a default project with full permissions, which we will use for simplicity throughout this tutorial:
Now that we have a project configured, we can provision our infrastructure. In Argo CD, the term Application refers to a set of configuration files that should be deployed as a single unit. An application allows you to specify a source repository for your configuration files; Argo CD then watches for updates and creates or updates objects in your Kubernetes cluster based on observed changes.
As previously mentioned, we already have our infrastructure defined for this tutorial in a GitHub repository. If you take a look at the infra/ directory, you will notice subdirectories for us-west-2 and us-east-1. The configuration files in these directories specify identical infrastructure in two different regions, including everything from the VPC to the Amazon EKS cluster. Some of the resources are used to statically provision resources on AWS (i.e., create the external resource immediately when the Kubernetes object is created), while others are classes used for dynamic provisioning (i.e., create a configuration that can be used to provision resources in an abstract manner at a later time). You can read more about static and dynamic provisioning in the Crossplane documentation.
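As a rough illustration of the dynamic-provisioning side, a class resembles the following sketch. The names and fields below are illustrative and only follow the general shape of the provider-aws v0.6 API; the actual manifests live in the repository's infra/ directories and may differ in detail.

```yaml
# Illustrative sketch of a class used for dynamic provisioning.
# Nothing is created on AWS when this object is applied; a claim
# that selects it by label later triggers the actual provisioning.
apiVersion: database.aws.crossplane.io/v1beta1
kind: RDSInstanceClass
metadata:
  name: rds-mysql-west          # hypothetical name
  labels:
    region: west                # matched by a claim's classSelector
specTemplate:
  forProvider:
    dbInstanceClass: db.t2.small
    engine: mysql
  providerRef:
    name: aws-provider-west     # the Provider we created above
  reclaimPolicy: Delete
```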
To create our infrastructure Application, go to Application | New Application and set up the Source to point to the infra/ directory of our repository. The Destination should be set to https://kubernetes.default.svc, meaning we intend for these resources to be created in the same Kubernetes cluster in which we have installed Crossplane and Argo CD. The full configuration for the application should look as follows:
Click Create, and you should be able to view each of the resources and their status by clicking on the application. You will notice that we are creating a VPC in each region, subnets and networking components for those VPCs, as well as an Amazon EKS cluster in each. If you go to the AWS console for the account whose credentials were used in your account credentials Secret, you should see these resources being created in their respective dashboards.
We are specifically interested in the readiness of argo-west-cluster and argo-east-cluster, which are the claims for the Amazon EKS clusters we have created in each region. A claim in Crossplane refers to a Kubernetes object that serves as an abstract request for a concrete implementation of a managed service.
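As a sketch, such a claim looks roughly like the following (illustrative only; the real manifests are in the repository's infra/ directories and may differ in detail):

```yaml
# Illustrative KubernetesCluster claim: an abstract request for a cluster.
# The classSelector binds it to a matching class (here, one backed by EKS),
# and the connection details are written to the referenced Secret.
apiVersion: compute.crossplane.io/v1alpha1
kind: KubernetesCluster
metadata:
  name: argo-west-cluster
  namespace: wordpress-app
spec:
  classSelector:
    matchLabels:
      region: west
  writeConnectionSecretToRef:
    name: argo-west-cluster-connection   # hypothetical Secret name
```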
In this case, the claims argo-west-cluster and argo-east-cluster are KubernetesCluster claims, which are being satisfied by an EKSCluster. They could just as easily be satisfied by a managed Kubernetes offering from a different provider. It may take some time, but when the clusters are fully provisioned, Crossplane will be able to schedule our applications onto each of them. You will see a corresponding Secret and KubernetesTarget appear for each cluster in the Argo CD UI when they are ready:
Deploy application in us-west-2
We will first deploy our application, a WordPress blog, into our us-west-2 Amazon EKS cluster. To do so, create a new Argo CD Application, this time pointing the Source to the /app-1 directory.
On creation, we should immediately see two resources being created: a KubernetesApplication and a MySQLInstance. The KubernetesApplication object specifies the resources we want deployed into our us-west-2 Amazon EKS cluster. You will find templates for a Namespace, Deployment, and Service in the /app-1/kubernetesapplication.yaml file. These are the necessary Kubernetes components to run a public-facing WordPress blog in Kubernetes, but we also need a database to back the application. Although we could run a MySQL database within our cluster, taking advantage of managed services like Amazon RDS allows us to offload that responsibility to an experienced cloud provider. The /app-1/mysqlinstanceclaim.yaml file defines a claim for a MySQL database, which will be satisfied by the RDSInstanceClass that we created as part of our infrastructure deployment.
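The shape of that claim is roughly as follows (a sketch, not the file's exact contents; see /app-1/mysqlinstanceclaim.yaml in the repository for the real manifest, and note that the names here are hypothetical):

```yaml
# Illustrative MySQLInstance claim: requests a MySQL database abstractly.
# Because the selected class is an RDSInstanceClass, Crossplane fulfills
# the claim with an Amazon RDS instance and writes the connection
# details to the referenced Secret.
apiVersion: database.crossplane.io/v1alpha1
kind: MySQLInstance
metadata:
  name: wordpress-west-db      # hypothetical name
  namespace: wordpress-app
spec:
  classSelector:
    matchLabels:
      region: west
  engineVersion: "5.6"
  writeConnectionSecretToRef:
    name: wordpress-west-db-connection
```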
Creating these resources causes Crossplane to provision an Amazon RDS instance, obtain the connection information, and inject it into the WordPress application that it deploys to our Amazon EKS cluster in us-west-2. When this process is complete, you should see a Secret appear in the Argo CD UI that is associated with the MySQLInstance.
Shortly after, you should be able to click on the wordpress-west-service KubernetesApplicationResource and see a host name at the bottom of the YAML manifest. Copying and pasting it into your browser should take you to a WordPress setup page.
Deploy application in us-east-1
Being able to deploy an application alongside claims for its dependent infrastructure is valuable, but the true power of this model is its portability. To demonstrate, we will deploy the same application configuration to a different region, us-east-1.
For the purposes of this tutorial, we will create a new Argo CD Application with the Source pointed to the /app-2 directory. However, if you were the owner of the repository, you could simply modify the configuration we used for us-west-2 by changing the targetSelector on your KubernetesApplication to app: wordpress-east and the classSelector on your MySQLInstance claim to region: east. In fact, if you compare the two application directories, you will notice almost identical configuration outside of these changes.
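Those region-specific deltas amount to roughly the following fragments (illustrative; compare the actual files under /app-1 and /app-2 for the complete manifests):

```yaml
# In the KubernetesApplication: target the east cluster instead of the west one.
spec:
  targetSelector:
    matchLabels:
      app: wordpress-east   # was: wordpress-west
---
# In the MySQLInstance claim: select the class provisioned for us-east-1.
spec:
  classSelector:
    matchLabels:
      region: east          # was: west
```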
After creating the Argo CD application for our us-east-1 WordPress application, we should once again see a host name in the wordpress-east-service KubernetesApplicationResource. Navigate to the address and you will be greeted by the WordPress setup page.
Clean up
To clean up all of our deployed application and infrastructure components, you can simply delete each of the Argo CD applications we created. All of the AWS infrastructure components, as well as their corresponding Kubernetes resources, will be removed from your cluster.
Conclusion
The Crossplane project enables infrastructure owners to define their custom cloud resources, including managed services, in a standardized way using the Kubernetes API. That in turn enables application developers to author workloads in an abstract way that can be deployed anywhere, and that can be declaratively managed.
Get involved
The Crossplane.io project is entirely open source, and we’d love for you to join the community to help us shape the future of cloud computing. Join us on Slack and GitHub, tune in to our biweekly livestream “The Binding Status”, and follow the project on Twitter.
The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post.