AWS Open Source Blog

Introducing kro: Kube Resource Orchestrator

Today, we’re excited to release Kube Resource Orchestrator (kro), a new experimental open source project that simplifies and empowers the use of custom APIs and resources with Kubernetes. With kro, you can define complex, multi-resource API extensions for reuse in any Kubernetes cluster. Defining new resource groups with kro is like creating your own custom application and infrastructure building blocks for Kubernetes.

Kube Resource Orchestrator (kro, which we’re pronouncing “crow”) provides a powerful abstraction layer that handles all of the dependency and configuration ordering of your resources, and then creates and manages the resources you need. You define your custom resources using the project’s fundamental custom resource, ResourceGroup. Rather than creating multiple Custom Resource Definitions (CRDs) and then using a secondary solution to make them work together, you just create a single ResourceGroup. This single resource serves as a blueprint for creating and managing collections of underlying Kubernetes resources. When your ResourceGroup is applied to a cluster, kro will validate your specification and create the required CRDs for you automatically. Then it will deploy and manage dedicated dynamic controllers to orchestrate the lifecycle of your new custom resources.

The Evolution of Platform APIs in Kubernetes

Kubernetes’ extensibility through Custom Resource Definitions (CRDs) and custom controllers has revolutionized how customers build platforms. This powerful combination of custom resources and dedicated controllers is known as the Operator Pattern, and it unlocks the power of Kubernetes control loop automation for use beyond the native Kubernetes functionality. This extensibility enables organizations to create their own APIs that encapsulate operational knowledge, best practices, and business requirements into reusable building blocks.

With broad adoption of Kubernetes at scale, a common pattern has emerged. Platform and DevOps teams want to create standardized ways for application teams to deploy their workloads. These platform APIs need to handle everything from resource creation to security configurations, monitoring setup, and more.

However, creating these platform APIs today presents significant challenges. First, teams must write and maintain often complex controller code. Deep Kubernetes expertise is required, and a significant amount of time is invested in infrastructure and platform extensions, as each custom API needs its own controller. Once built, these controllers require monitoring, updating, and lifecycle management operations like upgrades and patching. While this pattern is powerful, you may end up managing large fleets of bespoke controllers, and might have to build a dedicated team just to manage it all. Some customers have told us they often feel like they are building for Kubernetes, rather than with Kubernetes.

How does kro help?

kro is a Kubernetes-native solution that lets you create reusable APIs for deploying multiple resources as a single unit. It transforms complex Kubernetes deployments into simple, reusable components that your teams can use. By handling resource dependencies and configuration under the hood, kro lets you focus more on building your applications and services, and less on the operational burden of managing CRD versions and controller lifecycles.

kro has one fundamental custom resource, the ResourceGroup. When you install kro in your cluster, it installs ResourceGroup as a Custom Resource Definition (CRD). Platform and DevOps teams can create custom Kubernetes APIs with operational standards and best practices baked in by defining specific custom resources and configuration as part of the ResourceGroup.

For example, you can create a ResourceGroup called WebAppRG which defines a new Kubernetes resource called WebApp. This new resource encapsulates all the necessary resources for a base web application environment, including Deployments, Services, monitoring agents, and managed cloud resources like Amazon Simple Storage Service (Amazon S3) buckets or Amazon Simple Queue Service (Amazon SQS) queues. The WebAppRG specification can specify any default configuration your organization wants to standardize on, and also defines which configuration elements can be set at deployment time. This gives you the flexibility to decide what is immutable, and only allow a limited set of mutable configuration for deployments.

When this new ResourceGroup is applied to a Kubernetes cluster, kro generates the CRD for WebApp, registers it with the Kubernetes API server, and deploys a dedicated controller to respond to events for the new custom resource. Then, developers can create instances of WebApp, providing the configuration you've defined in the resource group, and the underlying resources will be created, configured, and managed by the dedicated controller.

With kro, organizations can encode their operational knowledge and organizational standards into their custom resource groups. Developers use these to deploy applications, and don’t need to work directly with the discrete underlying resources. kro handles all the complexity, by sorting out the correct order to create resources, handling the inter-dependencies between resources and injecting configuration as specified (for example, adding a value from one resource to the specification of another). kro then helps you manage the lifecycle of the entire stack of dependencies, and keeps everything in sync with the desired state.
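For instance, a resource template can read a value from a sibling resource in the same ResourceGroup using an expression; kro infers the creation order from these references. The following sketch is illustrative, not a runnable manifest (the Deployment is abbreviated, and the ACK S3 Bucket kind is just one example of a managed cloud resource):

```yaml
resources:
  - id: bucket
    template:
      apiVersion: s3.services.k8s.aws/v1alpha1   # example: an ACK S3 Bucket
      kind: Bucket
      metadata:
        name: ${schema.spec.name}-bucket
  - id: deployment
    template:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: ${schema.spec.name}
      spec:
        template:
          spec:
            containers:
              - name: app
                image: ${schema.spec.image}
                env:
                  # The value is read from the bucket resource, so kro
                  # creates the bucket before the deployment.
                  - name: BUCKET_NAME
                    value: ${bucket.metadata.name}
```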

Let’s see this in action with a complete example.

Prerequisites

Before you begin, ensure you have the following:

  1. A Kubernetes cluster: kind, Amazon Elastic Kubernetes Service (Amazon EKS), or any other cluster you have access to
  2. Helm 3.x installed
  3. jq installed (used below to look up the latest kro release version)
  4. kubectl installed and configured to interact with your Kubernetes cluster

Installation

First, install kro in your cluster.

export KRO_VERSION=$(curl -sL \
    https://api.github.com/repos/awslabs/kro/releases/latest | \
    jq -r '.tag_name | ltrimstr("v")'
  )
helm install kro oci://public.ecr.aws/kro/kro \
  --namespace kro \
  --create-namespace \
  --version=${KRO_VERSION}
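The `jq` expression above extracts the latest release tag and strips its leading `v` (for example, `v0.2.1` becomes `0.2.1`), since the Helm chart version has no prefix. If `jq` is unavailable, shell parameter expansion can do the same trim; a minimal sketch with a hard-coded example tag:

```shell
# Example tag as it would appear in the GitHub release's .tag_name field.
TAG="v0.2.1"

# ${TAG#v} removes a leading "v" if present, leaving the bare version.
KRO_VERSION="${TAG#v}"

echo "${KRO_VERSION}"
```

After installation, `kubectl get pods -n kro` should show the kro controller running.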

Create a Resource Group

Now, we will create a simple web application API that platform teams might provide to their developers.

This API will deploy:

  • a Deployment running the application containers
  • a Service exposing the application within the cluster
  • an optional Ingress for external access

Once everything is set up in the cluster, developers will work with a simple resource specification, and all they need to provide is a name and, optionally, a container image URI. kro will handle all the proper configuration and connections between resources.

First, define a new ResourceGroup in a file named web-app-rg.yaml.

apiVersion: kro.run/v1alpha1
kind: ResourceGroup
metadata:
  name: web-app-rg
spec:
  schema:
    apiVersion: v1alpha1
    # The name of your new Kubernetes resource
    kind: WebApp
    # Define what users can configure
    spec:
      name: string
      image: string | default="nginx"
      ingress:
        enabled: boolean | default=false
    status:
      deploymentConditions: ${deployment.status.conditions}
      availableReplicas: ${deployment.status.availableReplicas}
  # Define your resources
  resources:
    - id: deployment
      template:
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: ${schema.spec.name} # User-provided name
        spec:
          replicas: 3
          selector:
            matchLabels:
              app: ${schema.spec.name}
          template:
            metadata:
              labels:
                app: ${schema.spec.name}
            spec:
              containers:
                - name: ${schema.spec.name}
                  image: ${schema.spec.image} # Optionally user-provided image
                  ports:
                    - containerPort: 80
    - id: service
      template:
        apiVersion: v1
        kind: Service
        metadata:
          name: ${schema.spec.name}-service
        spec:
          selector: ${deployment.spec.selector.matchLabels} # Connect the deployment to the service
          ports:
            - protocol: TCP
              port: 80
              targetPort: 80
    - id: ingress
      includeWhen:
        - ${schema.spec.ingress.enabled} # Optionally created if ingress is true
      template:
        apiVersion: networking.k8s.io/v1
        kind: Ingress
        metadata:
          name: ${schema.spec.name}-ingress
          annotations:
            kubernetes.io/ingress.class: alb
            alb.ingress.kubernetes.io/scheme: internet-facing
            alb.ingress.kubernetes.io/target-type: ip
            alb.ingress.kubernetes.io/healthcheck-path: /health
            alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
            alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=60
        spec:
          rules:
            - http:
                paths:
                  - path: "/"
                    pathType: Prefix
                    backend:
                      service:
                        name: ${service.metadata.name} # Use the service name
                        port:
                          number: 80

Next, deploy the ResourceGroup to your cluster.

kubectl apply -f web-app-rg.yaml

You can inspect the ResourceGroup and check its status using kubectl.

kubectl get rg web-app-rg -o wide

The output should show the ResourceGroup in the Active state, along with relevant information to help you understand your application. Note that kro has detected the correct order for resource creation.

NAME         APIVERSION   KIND     STATE    TOPOLOGICALORDER                      AGE
web-app-rg   v1alpha1     WebApp   Active   ["deployment","service","ingress"]    30s
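Behind the scenes, kro has translated the simple schema into a regular CRD; you can confirm it exists with `kubectl get crd webapps.kro.run` (the generated name should follow the usual `<plural>.<group>` convention). The generated OpenAPI schema looks roughly like this abbreviated, illustrative sketch:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: webapps.kro.run
spec:
  group: kro.run
  names:
    kind: WebApp
    plural: webapps
  scope: Namespaced
  versions:
    - name: v1alpha1
      schema:
        openAPIV3Schema:
          properties:
            spec:
              properties:
                name:
                  type: string
                image:
                  type: string      # default "nginx" applied by kro
                ingress:
                  properties:
                    enabled:
                      type: boolean
```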

Create a Resource Group Instance

Now that your ResourceGroup is created, kro has generated a new API (WebApp) that orchestrates creation of a Deployment, a Service, and an optional Ingress. Let’s use it to create an application.

Create a new file named my-app-instance.yaml with the following content. Note, we’re opting to create an Ingress.

apiVersion: kro.run/v1alpha1
kind: WebApp
metadata:
  name: my-app-instance
spec:
  name: my-awesome-app
  ingress:
    enabled: true

Use the kubectl command to deploy the WebApp instance to your Kubernetes cluster.

kubectl apply -f my-app-instance.yaml

Inspect the WebApp instance and check the status of its resources.

kubectl get webapps

After a few seconds, you should see the WebApp instance in the ACTIVE state:

NAME              STATE    SYNCED   AGE
my-app-instance   ACTIVE   True     10s
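The status fields declared in the ResourceGroup schema (`deploymentConditions` and `availableReplicas`) are projected onto the instance itself. Fetching the full object with `kubectl get webapp my-app-instance -o yaml` should show them populated from the underlying Deployment; an abbreviated, illustrative status:

```yaml
status:
  availableReplicas: 3
  deploymentConditions:
    - type: Available
      status: "True"
      reason: MinimumReplicasAvailable
```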

Now check the resources created by the WebApp instance.

kubectl get deployments,services,ingresses

The output should show the Deployment, Service, and Ingress created by the WebApp instance.

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-awesome-app   3/3     3            3           69s

NAME                             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/my-awesome-app-service   ClusterIP   10.100.167.72   <none>        80/TCP    65s

NAME                                               CLASS    HOSTS   ADDRESS   PORTS   AGE
ingress.networking.k8s.io/my-awesome-app-ingress   <none>   *                 80      62s

kro can also help you clean up resources when you’re done with them.

kubectl delete webapp my-app-instance

By deleting the WebApp instance, all of its underlying resources are also deleted. If you created a cluster in Amazon EKS specifically for this demo, don’t forget to delete it.
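Once no instances remain, you can also remove the custom API itself by deleting the ResourceGroup, and uninstall kro when you are finished experimenting. For example (this assumes the install command above, which used release name kro in namespace kro):

```shell
# Delete the ResourceGroup; kro removes the generated WebApp CRD
# and its dedicated controller.
kubectl delete resourcegroup web-app-rg

# Uninstall kro from the cluster.
helm uninstall kro --namespace kro
```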

What’s next?

This new project is experimental and in active development. It’s not yet intended for production use, as the ResourceGroup CRD and other aspects of the project have not solidified and are highly subject to change. We will be actively working on the project, and would like to hear from you about where we should take it.

Some of the big ideas we will be working on include:

  • Resource group versioning
  • Resource change management
  • External references and adoption
  • Building collections (with for loop-like support)
  • Garbage collection
  • Enhanced status reporting
  • Enhanced security integrations

We’re excited to see how customers will use kro, and to hear from you on what works and doesn’t work for you with the project.

Check out the project on the AWS Labs GitHub, and read our docs to learn more about kro’s concepts and capabilities, and how to get started building your own resource groups.

Amine Hilaly

Kubernetes things at AWS.

Christina Andonov

Christina is a Senior Specialist Solutions Architect at AWS where she guides organizations through the adoption of Kubernetes along with Open Source Software and AWS Managed Services.

Lukonde Mwila

Lukonde is a Senior Product Manager at AWS focusing on Kubernetes. He has years of experience in application development, solution architecture, cloud engineering, and DevOps workflows.