Serverless or Kubernetes on AWS
Helping you choose
Amazon Web Services (AWS) provides customers with the flexibility to choose a strategy for building modern applications that maps to their business needs. Customers developing a strategy for building modern apps on AWS often take one of two high-level approaches: a Serverless with AWS strategy using AWS Lambda and serverless containers, or a Kubernetes on AWS strategy.
AWS is designed to provide you with access to the right compute and supporting services (such as database, messaging, and orchestration) for your workloads. AWS provides the ability to create serverless applications using an event-driven compute service such as AWS Lambda and serverless container options such as AWS Fargate and AWS App Runner. AWS offers ways to build container applications with Amazon Elastic Container Service (Amazon ECS) and Kubernetes options such as Amazon Elastic Kubernetes Service (Amazon EKS), Red Hat OpenShift Service on AWS (ROSA), and self-managed Kubernetes on Amazon Elastic Compute Cloud (Amazon EC2).
The following content will help you choose the approach that is best suited for your environment and help you get started implementing your approach on AWS.
Step 1: Determine your criteria
Modern applications are built with modular architectural patterns and can take advantage of serverless operational models to build more sustainable, scalable, and resilient applications. They allow you to innovate faster, reduce risk, accelerate time to market, and decrease your TCO. When building new modern applications or modernizing existing applications, architects and developers have a large number of building blocks available – and one of the most important is the choice of compute services. AWS is designed to provide you with access to the right compute service for your workloads – a choice that supports whatever modern app development strategy you decide makes sense to meet your needs.
Before determining which modern app development strategy you should implement, you need to determine your criteria based on the needs of your organization and workloads. The sections that follow cover criteria such as:

- Managed services and operational overhead
- Open-source ecosystem
- Workload patterns and architecture characteristics
- Integrations
- Prototyping
- Portability
- Organization size and skills
- Cost
- Security and compliance
- Performance, scalability, and resiliency
Organizations may choose the cloud to reduce operational cost by standardizing on managed services that shift the operational burden to AWS. Higher levels of abstraction allow developers and operators to focus on their own unique value-add activities, instead of undifferentiated tasks.
Building with AWS Serverless uses services with higher levels of abstraction to shift the operational overhead of maintaining infrastructure to AWS.
AWS also provides several managed offerings for Kubernetes. These offerings differ in the layer of technology an organization needs to manage. AWS also provides accelerators, such as add-ons and EKS Blueprints, to speed configuration.
Many AWS customers standardize on open-source technologies that are broadly supported. Open source can help an organization find the right skills and avoid some risk around lock-in. Making the wrong choices in an open-source ecosystem, however, can lead to being locked into abstractions and homegrown integrations. Additionally, the responsibility for making different open-source components work together often sits with the organization making the choice. It's common for organizations to spend too much time maintaining open-source integrations.
- Consider community sources and backing from enterprises or foundations when making these investments. An investment in these projects is not just financial. You also need to invest in training and educating your team on the technologies used.
- You may also incur some technical debt (due to the work involved in maintaining those integrations) as these components and associated integrations will typically need updating.
- Visit the AWS Open Source blog for ideas on open source implementations.
When selecting a modern app development strategy, it's important that your strategy accommodates a variety of workload patterns. You can more easily make architecture choices by understanding your workload patterns: for example, web applications, API-based microservices, event-driven applications, streaming and messaging, data pipelines, and IT automation. Some workloads will perform better or be more cost effective in one compute environment than in another.
When building with AWS Serverless, services such as AWS App Runner and AWS Batch are designed to readily support the demands of a given workload type. Workloads using this approach can be easier to build, perform better, or be more cost effective, although this can come at the expense of flexibility.
Kubernetes provides consistency across clouds and on-premises environments if required by your organization.
Workloads do not live in isolation. They are supported by technologies such as databases, messaging, streaming, orchestration, and other services. An effective modern app development strategy requires integration with these services. Managed integrations can reduce operational overhead as much as managed infrastructure does.
AWS Serverless options are deeply integrated with the AWS ecosystem. AWS Lambda can subscribe to events from more than 200 other services. AWS Lambda extensions, for example, enable integration with monitoring, observability, security, and governance tools. Lambda invokes a function in an execution environment, which provides a secure and isolated runtime where your function code runs.
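To make the event-subscription model concrete, the following is a minimal sketch of a Lambda handler subscribed to Amazon S3 "ObjectCreated" notifications. The event fields follow the standard S3 notification shape; the bucket name, object keys, and the idea of simply echoing back what was processed are illustrative placeholders, not a prescribed implementation.

```python
import json
import urllib.parse

def handler(event, context):
    """Sketch of a Lambda function subscribed to S3 object-created events."""
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 event notifications,
        # so decode them before use.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # A real function would process the object here (for example,
        # extract text from an uploaded document).
        processed.append({"bucket": bucket, "key": key})
    return {"statusCode": 200, "body": json.dumps(processed)}
```

Wiring the subscription itself (the S3 event notification or an Amazon EventBridge rule) is configuration, not code, which is part of what keeps the operational surface small in this model.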
AWS managed offerings for Kubernetes provide integrations with AWS managed services. Kubernetes itself has a rich partner ecosystem, offering integration with numerous other technologies.
Many customers need to run experiments to validate ideas. A prototype might become a new application, or be thrown away if the idea is unsuccessful. The ability to provide an environment where you can quickly write, deploy, and validate ideas is essential. This environment is often overlooked when developing a modern app development strategy, but a company's ability to innovate may depend on it. Enabling teams to use services that let them rapidly build, test, and iterate is invaluable in discovering new business opportunities.
Many customers want to ensure that their applications can run in, and be easily migrated to, a different environment. They want to preserve the choice to move cloud providers, or to run an application both on-premises and in the cloud. This often includes the need to support popular language frameworks and development scenarios. For example, Java developers might want to use Spring, and data engineers might want to use Python with PyTorch. Selecting a modern app development approach will not be enough by itself; achieving application portability also requires best practices and architecture. We recommend building competency in software architectures and build packaging that lets you more readily port differentiating business logic between compute services.
Applications built using some technologies may run more effectively on some compute services than others. Container services are usually a better migration target for legacy applications than Lambda, which will require a change in architecture.
Distinguish between application portability and automation portability. Most of the time, the choice between Serverless with AWS and Kubernetes on AWS actually has little to do with whether an application is portable. Instead, the tools used by infrastructure and DevOps teams are a key factor. Customers who choose the Kubernetes on AWS approach are often looking for portability of infrastructure and deployment tools.
The skills of your organization are a major factor when deciding on whether to take a serverless or container-based approach to modern app development. Both serverless and containers will require some investment in DevOps and Site Reliability Engineer (SRE) teams. Building out an automated pipeline to deploy applications is common for most modern applications.
Some choices increase the amount of management required. For example, some organizations have the skills and resources to run and manage a Kubernetes implementation because they invest in strong SRE teams to manage Kubernetes clusters. These teams handle frequent cluster upgrades (for example, Kubernetes has three minor releases a year and deprecates old versions).
Organization size is a key factor, as smaller startups might have a small IT staff made up of people fulfilling multiple roles, while larger enterprises may support hundreds of workloads in production at once.
Architecture decisions come with tradeoffs, one of which is often cost. It's common to struggle with estimating the cost of modern applications due to the dynamic nature of the components involved. While the cost of a fixed set of servers is easier to estimate, you also pay for those servers even when they are not adding business value.
Both approaches offer multiple levers to meet cost targets for your workloads. The high-level choice between approaches should consider the cost around not just the resources but also the effort in creating and maintaining that strategy. The AWS Pricing Calculator can be useful in understanding costs of a given workload.
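As a back-of-the-envelope illustration of the pay-per-use versus fixed-capacity tradeoff mentioned above, the sketch below compares a Lambda-style per-request cost model against an always-on fleet. All prices and workload numbers here are illustrative assumptions, not current AWS list prices; use the AWS Pricing Calculator for real estimates.

```python
def lambda_monthly_cost(requests, avg_duration_ms, memory_gb,
                        price_per_request=0.20 / 1_000_000,
                        price_per_gb_second=0.0000166667):
    """Pay-per-use model: cost scales with invocations and duration.
    Default prices are illustrative assumptions only."""
    gb_seconds = requests * (avg_duration_ms / 1000) * memory_gb
    return requests * price_per_request + gb_seconds * price_per_gb_second

def fixed_instance_monthly_cost(instance_count, hourly_price=0.04):
    """Fixed-capacity model: a fleet bills for every hour, busy or idle.
    The hourly price is an illustrative assumption."""
    return instance_count * hourly_price * 730  # ~730 hours per month

# Hypothetical spiky workload: 2M requests/month, 200 ms at 0.5 GB memory.
on_demand = lambda_monthly_cost(2_000_000, 200, 0.5)
always_on = fixed_instance_monthly_cost(2)
```

With these assumed numbers the pay-per-use model is cheaper because the workload is idle most of the time; a sustained, high-throughput workload can flip the comparison, which is why the choice should be made per workload.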
Meeting security and compliance requirements is a cornerstone of every workload. AWS services across both strategies can allow you to meet stringent compliance requirements.
Building with AWS Serverless allows you to take advantage of integration with AWS Identity and Access Management (IAM) to control access, native AWS networking constructs, and support from AWS governance tools.
Kubernetes on AWS requires an understanding of both the Kubernetes and AWS security models, and may require mapping between security policies. Managed AWS offerings, such as Amazon EKS, provide accelerators for running applications securely.
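One common way this mapping shows up in practice on Amazon EKS is IAM Roles for Service Accounts (IRSA), where a Kubernetes service account is annotated with an AWS IAM role so that pods using it receive AWS credentials scoped to that role. The following is a minimal sketch; the account ID and role name are hypothetical placeholders.

```yaml
# Sketch: granting pods AWS permissions on EKS via IAM Roles for
# Service Accounts (IRSA). Account ID and role name are placeholders.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-service-account
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/app-role
```

The IAM role itself must also trust the cluster's OIDC identity provider, which is the AWS-side half of the mapping between the two security models.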
Both Serverless with AWS and Kubernetes on AWS allow you to build performant, scalable, and resilient workloads that meet your technical requirements. Services with the highest levels of abstraction manage placement across AWS Availability Zones and scaling of the environment, improving performance and resiliency. Lower levels of abstraction give you more control over how your workload scales. The primary tradeoffs are cost and management complexity.
Step 2: Determine how much you want to manage
One of the key benefits of modernization is the ability to shift operational responsibilities, allowing you to free up resources for more value-add and innovation-led activities. There is a spectrum of shared responsibility options across different levels of modernization, ranging from Amazon EC2, where you build and run your code while managing integrations, scaling, security configurations, provisioning, patching, and more, to serverless functions like AWS Lambda, where all you manage is your application code.
Step 3: Determine your use case
We recommend that you evaluate the most appropriate compute option on a workload-by-workload basis within your default strategy: Serverless with AWS or Kubernetes on AWS.
Use case examples for a Serverless on AWS strategy might include building a document processing system or handling website workloads. Within that strategy, you could select AWS Lambda to build the primarily event-driven document processing workload and AWS App Runner for the low latency and scalability needed by a transactional website.
Meanwhile, an example use case for containers – and Kubernetes on AWS – might be as part of a progressive approach to moving existing applications to microservices. Many legacy middleware applications can be moved to containers with minimal modification.
Note: Some organizations will support multiple options or workload patterns to allow for workload or developer choice.
The table in the following step presents a number of other use cases you may want to consider when choosing your compute services.
Step 4: Compare and make the right choices for your workloads
AWS offers different container options, such as Amazon ECS, Serverless Containers with AWS Fargate, and AWS App Runner, and different Kubernetes options, such as Amazon EKS, ROSA, and self-managed Kubernetes on Amazon EC2.
The following comparison table can help you determine your approach based on your workload requirements. You might choose elements of both approaches, or a compromise, and have different teams that use different approaches. It is not uncommon to see a very large enterprise have departments with different strategies.
| | Serverless with AWS | Kubernetes on AWS |
| --- | --- | --- |
| Workloads | Supports a range of workload patterns with specific services optimized for specific workloads. For example, AWS Lambda might be used for asynchronous workloads, with AWS Fargate used for synchronous workloads. | Supports a wide range of workload patterns where consistent deployment models across clouds or on-premises data centers are preferred. |
| Architecture characteristics | Supports most architectures and patterns with specific services that provide optimizations for performance, scalability, reliability, and cost. | Supports most architectures where consistency across the technology stack is preferred. Some optimizations are available, but they require more integration and management effort. |
| Integrations | AWS Serverless offers integrations with many managed services. Some options, such as AWS Lambda, can subscribe to events from more than 200 services in a managed fashion. Many partners provide AWS integrations. | AWS managed offerings for Kubernetes provide integrations with AWS services. The partner ecosystem for Kubernetes is rich and provides integration with other open-source technologies. Many technology partners use Amazon EKS as their first choice to validate their Kubernetes integrations. |
| Prototyping | AWS Serverless is optimized for allowing customers to write code quickly, deploy it, and change it, making it a useful option for fast prototyping work that doesn't require making a lot of choices up front. | Customers who have a Kubernetes strategy often need to set up special clusters dedicated to prototyping and maintain them like any other environment. |
| Overhead | AWS Serverless is designed around making the most use of managed services, requiring only a minimal amount of management effort for servers, networking, and integrations. | AWS provides several patterns for Kubernetes. These may mean managing multiple layers of technology (such as cluster and cloud networking) or both Kubernetes and AWS security roles, although AWS provides accelerators such as add-ons and EKS Blueprints. |
| Ecosystem | AWS has a large ecosystem of partners that build on top of AWS services and provide solutions and integrations. AWS builds on and with many open-source technologies in various spaces. | Kubernetes is a large ecosystem that has major support in every cloud and a vast amount of community support. The CNCF landscape is an example of the vast ecosystem. AWS provides add-ons and blueprints to help customers adopt some of these popular tools. |
| Application portability | Application architecture (for applications that use managed services) can be designed in such a way that business logic can easily be ported from Lambda, App Runner, or Amazon ECS to other compute environments. | Kubernetes is available on most clouds and on-premises, and workloads are mostly portable. Certain Kubernetes patterns that rely on a service mesh in code (like Istio) or programming models (like Knative) mean that your application requires Kubernetes and is not portable to other platforms. Some features can lock you into a specific version. You may need to rebuild images to support various Linux builds or specific hardware architectures (such as ARM or x86). |
| Automation | AWS-specific automation support using flavors based on open technology. Examples include AWS CloudFormation, AWS Serverless Application Model (AWS SAM), and some AWS Cloud Development Kit (AWS CDK) libraries. Customers can use open-source options such as Terraform, and many AWS customers use open-source DevOps tools such as Jenkins. | Customers that standardize on Kubernetes usually choose tools that automate Kubernetes. Automation for cluster creation is needed; you can use AWS APIs, AWS tools such as AWS CloudFormation, or the AWS CDK. Many customers use technology like Terraform. Amazon EKS Blueprints is an example of an accelerator for Terraform and the CDK for the creation of clusters and add-ons. Many customers are adopting GitOps tools like Flux or Argo CD to deploy through the Kubernetes API, and use tools like AWS Controllers for Kubernetes or Crossplane to provision native cloud resources. |
| Organization size and skills | Some enterprises standardize on a primary cloud and choose AWS Serverless first; while they may have the money and resources to create SRE teams, they opt to optimize for agility and reduce operational cost by going all in with AWS. | Medium to large enterprises (or software vendors whose products are meant to run in different clouds or on-premises data centers) often use a Kubernetes-first approach. The customers who are successful usually invest much more in infrastructure and SRE teams that manage, run, and maintain many clusters. |
| Scalability/Resiliency | AWS Lambda customers only pay while a function is invoked. If Lambda isn't a fit, you can use AWS Fargate or Amazon ECS, which offer further choice in hardware architectures and compute pricing options (such as Amazon EC2 Spot Instances). | The Kubernetes on AWS approach can be well-architected to achieve your performance, scalability, and resiliency requirements. Amazon EKS also offers the ability to use options such as Amazon EC2 Spot Instances and AWS Graviton. |
| Security | The AWS Serverless approach makes use of AWS security, networking, and practices. | The Kubernetes approach requires an understanding of both Kubernetes security and AWS security, and additional operational overhead such as mapping Kubernetes security policies to AWS policies, securing container images and runtimes, and validating third-party container tools. |
Knowing the type of workloads and applications your organization builds and deploys is a key factor in your choice of modern app development strategy. Each workload has different characteristics and requirements. For example, a document processing workload has very different latency and uptime requirements than a transactional website.
Step 5: Avoid common pitfalls
Understand standardization in your environment
Many organizations that run multiple workload patterns want to standardize a set of patterns they can effectively support and run. Some large organizations attempt to standardize on compute options and select a default platform to run most workloads, providing exceptions when required.
Standardization can help provide consistency and minimize the number of people with specialist skills an organization needs to hire. These can become the preferred compute choices, with other options only considered when the preferred choices don’t work. Often, the standard choice effectively supports a set of workloads, but struggles with others. Therefore, some organizations will support multiple options or workload patterns to allow for workload or developer choice.
Understand integration in your environment
It's common that some organizations spend more time than they would like on maintaining open-source integrations.
We recommend that you consider community sources and backing from enterprises or foundations when making these investments. An investment in these projects is not just financial, but also an investment in knowledge capital and potentially technical debt, as these components and associated integrations will typically need updating. For more information, refer to the AWS Open Source blog.
Understand your architecture characteristics
The ability to support a wide range of architectures is important. We recommend that you leverage the AWS Well-Architected Framework as a guide to help you understand the pros and cons of the decisions you make when building systems on AWS. Additionally, using the Framework allows you to learn architectural best practices for designing and operating reliable, scalable, secure, efficient, and cost-effective systems in the Cloud.
Step 6: Decide your approach
Based on your organization and workload size, patterns, and business requirements, you might need to choose certain elements of both approaches, or have different teams use different approaches. It is not uncommon to have teams use different strategies.
Serverless with AWS
- Use AWS managed services and tools as your first choice, such as AWS Lambda, AWS App Runner, Amazon ECS, and AWS Fargate.
- Invest in developing discipline around AWS including provisioning, DevOps, infrastructure automation, security, networking, and observability/operations.
- Increase productivity and minimize operational burden.
Kubernetes on AWS
- Use Kubernetes as your primary compute platform interface.
- Adopt discipline around running and managing several Kubernetes clusters and the workloads and tools on them, including advanced patterns like GitOps.
- Integrate with different ecosystems and partner tools.
- Kubernetes customers can use managed AWS services and other compute options like AWS Lambda for certain use cases.
Implement your approach
Now that you have determined which approach best fits your workload for your environment, we recommend that you review the following resources to help you begin implementing your approach.
Building with AWS Serverless
- Use a serverless-first strategy to build modern applications in a way that increases agility throughout your application stack. This guide highlights serverless services for all three layers of your stack: compute, integration, and data stores.
- In this tutorial, you'll learn how to create a simple serverless web application using AWS Lambda, Amazon API Gateway, AWS Amplify, Amazon DynamoDB, and Amazon Cognito.
- These free, guided workshops introduce the basics of building serverless applications and microservices using services such as AWS Lambda, AWS Step Functions, Amazon API Gateway, Amazon DynamoDB, Amazon Kinesis, and Amazon S3.
- AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers. AWS Fargate is compatible with both Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS).
- Use AWS App Runner to build, deploy, and run containerized web applications and API services without prior infrastructure or container experience.
Building on Kubernetes with AWS
- Review your options for using the Amazon Elastic Kubernetes Service (Amazon EKS) managed Kubernetes service to run Kubernetes in the AWS cloud and on-premises data centers.
- Get a step-by-step guide to getting started with Amazon EKS, with links to useful blogs, videos, and a detailed tutorial.
- Get hands-on with step-by-step instructions for how to get the most out of Amazon EKS.
- AWS Controllers for Kubernetes (ACK) is a tool that lets you directly manage AWS services from Kubernetes. ACK makes it simple to build scalable and highly available Kubernetes applications that use AWS services.
- Explore using Red Hat OpenShift Service on AWS (ROSA) to create Kubernetes clusters using the ROSA APIs and tools, with access to the full breadth and depth of AWS services.
Security is a critical component of configuring and maintaining Kubernetes clusters and applications. Amazon EKS provides secure, managed Kubernetes clusters by default, but you still need to configure the nodes and applications you run in the cluster for a secure implementation. For more information, see the Amazon EKS Best Practices Guide for Security.
Use Amazon Inspector, an automated vulnerability management service that continually scans AWS workloads for software vulnerabilities and unintended network exposure.