AWS Open Source Blog
Introducing kro: Kube Resource Orchestrator
Today, we’re excited to release Kube Resource Orchestrator (kro), a new experimental open source project that simplifies and empowers the use of custom APIs and resources with Kubernetes. With kro, you can define complex, multi-resource API extensions for reuse in any Kubernetes cluster. Defining new resource groups with kro is like creating your own custom application and infrastructure building blocks for Kubernetes.
Kube Resource Orchestrator (kro, which we’re pronouncing “crow”) provides a powerful abstraction layer that handles all of the dependency and configuration ordering of your resources, and then creates and manages the resources you need. You define your custom resources using the project’s fundamental custom resource, ResourceGroup. Rather than creating multiple Custom Resource Definitions (CRDs) and then using a secondary solution to make them work together, you just create a single ResourceGroup. This single resource serves as a blueprint for creating and managing collections of underlying Kubernetes resources. When your ResourceGroup is applied to a cluster, kro will validate your specification and create the required CRDs for you automatically. Then it will deploy and manage dedicated dynamic controllers to orchestrate the lifecycle of your new custom resources.
The Evolution of Platform APIs in Kubernetes
Kubernetes’ extensibility through Custom Resource Definitions (CRDs) and custom controllers has revolutionized how customers build platforms. This powerful combination of custom resources and dedicated controllers is known as the Operator Pattern, and it unlocks the power of Kubernetes control loop automation for use beyond the native Kubernetes functionality. This extensibility enables organizations to create their own APIs that encapsulate operational knowledge, best practices, and business requirements into reusable building blocks.
With broad adoption of Kubernetes at scale, a common pattern has emerged. Platform and DevOps teams want to create standardized ways for application teams to deploy their workloads. These platform APIs need to handle everything from resource creation to security configurations, monitoring setup, and more.
However, creating these platform APIs today presents significant challenges. First, teams must write and maintain often complex controller code. Deep Kubernetes expertise is required, and a significant amount of time is invested in infrastructure and platform extensions, as each custom API needs its own controller. Once built, these controllers require monitoring, updating, and lifecycle management operations like upgrades and patching. While this pattern is powerful, you may end up managing large fleets of bespoke controllers, and might have to build a dedicated team just to manage it all. Some customers have told us they often feel like they are building for Kubernetes, rather than with Kubernetes.
How does kro help?
kro is a Kubernetes-native solution that lets you create reusable APIs for deploying multiple resources as a single unit. It transforms complex Kubernetes deployments into simple, reusable components that your teams can use. By handling resource dependencies and configuration under the hood, kro lets you spend more time building your applications and services and less time on the operational burden of managing CRD versions and controller lifecycles.
kro has one fundamental custom resource, the ResourceGroup. When you install kro in your cluster, it installs ResourceGroup as a Custom Resource Definition (CRD). Platform and DevOps teams can create custom Kubernetes APIs with operational standards and best practices baked in by defining specific custom resources and configuration as part of the ResourceGroup.
For example, you can create a ResourceGroup called WebAppRG which defines a new Kubernetes resource called WebApp. This new resource encapsulates all the necessary resources for a base web application environment, including Deployments, Services, monitoring agents, and managed cloud resources like Amazon Simple Storage Service (Amazon S3) buckets or Amazon Simple Queue Service (Amazon SQS) queues. The WebAppRG specification can specify any default configuration your organization wants to standardize on, and also defines which configuration elements can be set at deployment time. This gives you the flexibility to decide what is immutable, and only allow a limited set of mutable configuration for deployments.
When this new ResourceGroup is applied to a Kubernetes cluster, kro generates the CRD for WebApp, registers it with the Kubernetes API server, and deploys a dedicated controller to respond to events for the new custom resource. Then, developers can create instances of WebApp, providing the configuration specifications you’ve defined in the resource group, and the underlying resources will be created, configured, and managed by the dedicated controller.
With kro, organizations can encode their operational knowledge and organizational standards into their custom resource groups. Developers use these to deploy applications, and don’t need to work directly with the discrete underlying resources. kro handles all the complexity by sorting out the correct order in which to create resources, handling the interdependencies between them, and injecting configuration as specified (for example, adding a value from one resource to the specification of another). kro then helps you manage the lifecycle of the entire stack of dependencies, and keeps everything in sync with the desired state.
Let’s see this in action with a complete example.
Prerequisites
Before you begin, ensure you have the following:
- A Kubernetes cluster, in kind, Amazon Elastic Kubernetes Service (Amazon EKS), or any cluster you have
- Helm 3.x installed
- kubectl installed and configured to interact with your Kubernetes cluster
Installation
First, install kro in your cluster.
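As a sketch, this installs kro with Helm; the registry path and version lookup follow the project’s documentation at the time of writing, and may change as the project evolves:

```shell
# Look up the latest kro release version (assumes curl and jq are available)
export KRO_VERSION=$(curl -sL \
  https://api.github.com/repos/awslabs/kro/releases/latest | \
  jq -r '.tag_name | ltrimstr("v")')

# Install the kro Helm chart into its own namespace
helm install kro oci://public.ecr.aws/kro/kro \
  --namespace kro \
  --create-namespace \
  --version=${KRO_VERSION}
```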
Create a Resource Group
Now, we will create a simple web application API that platform teams might provide to their developers.
This API will deploy:
- a Deployment running the application containers
- a Service exposing the application within the cluster
- an optional Ingress for external access
Once everything is set up in the cluster, developers will work with a simple resource specification, and all they need to provide is a name and a container image URI. kro will handle all the proper configuration and connections between resources.
First, define a new ResourceGroup in a file named web-app-rg.yaml.
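A minimal ResourceGroup for this API could look like the following sketch. Field names follow the kro v1alpha1 API as documented and are subject to change; the resource group name, default image, replica count, and labels are illustrative:

```yaml
apiVersion: kro.run/v1alpha1
kind: ResourceGroup
metadata:
  name: my-application
spec:
  # The schema defines the new API developers will use
  schema:
    apiVersion: v1alpha1
    kind: Application
    spec:
      # Fields developers can set at deployment time
      name: string
      image: string | default="nginx"
      ingress:
        enabled: boolean | default=false
    status:
      # Surface useful status from the underlying resources
      availableReplicas: ${deployment.status.availableReplicas}
  # The underlying resources, with values injected via expressions
  resources:
    - id: deployment
      template:
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: ${schema.spec.name}
        spec:
          replicas: 3
          selector:
            matchLabels:
              app: ${schema.spec.name}
          template:
            metadata:
              labels:
                app: ${schema.spec.name}
            spec:
              containers:
                - name: ${schema.spec.name}
                  image: ${schema.spec.image}
                  ports:
                    - containerPort: 80
    - id: service
      template:
        apiVersion: v1
        kind: Service
        metadata:
          name: ${schema.spec.name}-service
        spec:
          selector:
            app: ${schema.spec.name}
          ports:
            - protocol: TCP
              port: 80
              targetPort: 80
    - id: ingress
      # Only created when the developer enables it
      includeWhen:
        - ${schema.spec.ingress.enabled}
      template:
        apiVersion: networking.k8s.io/v1
        kind: Ingress
        metadata:
          name: ${schema.spec.name}-ingress
        spec:
          rules:
            - http:
                paths:
                  - path: /
                    pathType: Prefix
                    backend:
                      service:
                        name: ${schema.spec.name}-service
                        port:
                          number: 80
```

Note how the Service and Ingress reference values from the developer-supplied spec, so kro can wire the resources together and infer their creation order.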
Next, deploy the ResourceGroup to your cluster.
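For example (assuming the file name used above):

```shell
kubectl apply -f web-app-rg.yaml
```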
You can inspect the ResourceGroup and check its status using kubectl.
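Assuming the ResourceGroup was named my-application, something like:

```shell
kubectl get resourcegroup my-application -o wide
```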
The output should show the ResourceGroup in the Active state, along with relevant information to help you understand your application. Note that kro has detected the correct order for resource creation.
Create a Resource Group Instance
Now that your ResourceGroup is created, kro has generated a new API (Application) that orchestrates creation of a Deployment, a Service, and an optional Ingress. Let’s use it to create an application.
Create a new file named my-app-instance.yaml with the following content. Note that we’re opting to create an Ingress.
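A sketch of such an instance, assuming the new API was defined with kind Application and the spec fields shown earlier (name, image, and an ingress toggle):

```yaml
apiVersion: kro.run/v1alpha1
kind: Application
metadata:
  name: my-application-instance
spec:
  name: my-awesome-app
  image: nginx
  ingress:
    enabled: true   # opt in to external access via an Ingress
```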
Use the kubectl command to deploy the Application instance to your Kubernetes cluster.
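For example (assuming the file name used above):

```shell
kubectl apply -f my-app-instance.yaml
```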
Inspect the Application instance and check the status of its resources.
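Assuming the instance was named my-application-instance:

```shell
kubectl get application my-application-instance
```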
After a few seconds, you should see the Application instance in the Active state.
Now check the resources created by the Application instance.
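Assuming the resource names derive from the instance’s spec.name as in the sketch above:

```shell
kubectl get deployment my-awesome-app
kubectl get service my-awesome-app-service
kubectl get ingress my-awesome-app-ingress
```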
The output should show the Deployment, Service, and Ingress created by the Application instance.
kro can also help you clean up resources when you’re done with them.
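For example, assuming the instance name used earlier:

```shell
kubectl delete application my-application-instance
```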
By deleting the Application instance, all of its underlying resources are also deleted. If you created a cluster in Amazon EKS specifically for this demo, don’t forget to delete it.
What’s next?
This new project is experimental and in active development. It’s not yet intended for production use, as the ResourceGroup CRD and other aspects of the project are not yet solidified and are highly subject to change. We will be actively working on the project, and would like to hear from you about where we should take it.
Some of the big ideas we will be working on include:
- Resource group versioning
- Resource change management
- External references and adoption
- Building collections (with for loop-like support)
- Garbage collection
- Enhanced status reporting
- Enhanced security integrations
We’re excited to see how customers will use kro, and to hear from you on what works and doesn’t work for you with the project.
Check out the project on the AWS Labs GitHub, and read our docs to learn more about kro’s concepts and capabilities, and how to get started building your own resource groups.