SaaS deployment architectures with Amazon EKS
As companies scale their software as a service (SaaS) offerings, they’re expanding their market reach by offering flexible deployment options directly within their customers’ environments. This versatile deployment model enables organizations to maintain data sovereignty, meet compliance standards, achieve optimal performance through reduced latency, and maximize efficiency by running applications close to existing customer datasets. SaaS providers can embrace this approach to serve a broader range of industries and unlock new business opportunities, particularly in highly regulated sectors and performance-sensitive markets. In this post, we refer to this approach of extending deployments into tenant-owned environments as “SaaS Anywhere”.
Although SaaS Anywhere solves important challenges related to operating in remote customers’ environments, it introduces significant complexity in management and operation. SaaS providers must develop robust systems for provisioning and maintaining their application stack across numerous customer environments, implement cross-account monitoring solutions, manage distributed lifecycle updates, and provide consistent security controls. All of this is done while maintaining operational excellence at scale. In this post, we explore patterns and practices for building and operating these distributed Amazon Elastic Kubernetes Service (Amazon EKS)-based applications effectively.
Patterns for managing remote environments with Amazon EKS at their core
When designing SaaS solutions, organizations typically employ one of three deployment models, each offering distinct advantages for specific use cases:
- SaaS Provider Hosted: Both data and control planes reside in the provider’s Amazon Web Services (AWS) account, optimizing for operational efficiency and rapid customer onboarding. Although lightweight agents may exist in customer environments for telemetry, all core processing remains provider hosted.
- Remote Application Plane: The data plane runs in the customer’s environment while the control plane stays in the provider’s account. This model balances compliance needs with operational efficiency, allowing customers to maintain data sovereignty while using AWS services.
- Hybrid Nodes: Amazon EKS control plane runs in the provider’s AWS account while compute nodes operate in the customer’s on-premises infrastructure. This enables local data processing and workload execution while maintaining the benefits of cloud-managed Kubernetes.
Each model represents a different point on the spectrum between operational simplicity and customer control, so that customers can choose the approach that best aligns with their market requirements and operational capabilities. Before exploring the technical implementation details, we first establish a shared responsibility model across deployment patterns.
Shared responsibility
AWS clearly defines the shared responsibility model, where AWS is responsible for security “of” the cloud, while customers are responsible for security “in” the cloud. Likewise, companies building SaaS must establish clear boundaries of responsibility with their customers across different SaaS deployment models. In a SaaS Provider Hosted model, the provider bears the most operational responsibility, including infrastructure management, application security, data protection, and compliance. Meanwhile, customers are primarily responsible for their user access management, data governance, and service configuration.
As different organizations might define their responsibilities differently, it’s worth focusing on the decisions necessary for creating a shared responsibility model rather than trying to build a “one size fits all” model, as shown in the following figure. This is especially important for the “Remote” and “Hybrid Nodes” models, because this is where parts of the service architecture are deployed in a remote environment.

Figure 1: Progression of Hosting Models
Building distributed SaaS on Kubernetes
In the following sections, we explore how to implement these deployment models using containerized applications running on Kubernetes. We focus on both open source and AWS-specific tools that enable efficient management of distributed environments, and we examine DevOps practices that support these deployment patterns.
Application packaging and deployment
Success with distributed SaaS environments begins with consistent application packaging. When operating across multiple environments, applications must be packaged to enable reliable and repeatable deployments through environment-agnostic bundles that can be customized for specific tenant needs. Kubernetes-native tools such as AWS Controllers for Kubernetes (ACK), Crossplane, or the Tofu Controller, combined with Helm charts or Kube Resource Orchestrator (Kro), enable you to package both application resources and infrastructure definitions into a single, versioned source of truth, ensuring consistent deployments across all environments.
The second crucial requirement is maintaining these environments over time. As your application evolves, you need systematic ways to roll out updates across different environments while coordinating both infrastructure and application changes. This is where GitOps practices become important, providing a reliable, automated process for keeping all environments in sync with your desired state. By treating infrastructure and application configurations as code, stored in version control, you can create a scalable foundation for managing updates across your entire customer base.
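As a concrete illustration of treating configuration as code, a tenant’s desired state might be rendered as a small, versioned document before being committed to the GitOps repository. The following is a minimal sketch: the function, field names, and tier-to-namespace mapping are all hypothetical, not a real chart schema.

```python
import json

def render_tenant_values(tenant_id: str, tier: str, region: str) -> dict:
    """Build the desired-state document a GitOps controller would reconcile.

    Hypothetical schema: 'basic' tenants share a pooled namespace, while
    higher tiers get a dedicated (siloed) namespace and more replicas.
    """
    return {
        "tenant": {"id": tenant_id, "tier": tier},
        "deployment": {
            "region": region,
            "namespace": "pooled" if tier == "basic" else f"tenant-{tenant_id}",
            "replicas": 1 if tier == "basic" else 3,
        },
    }

def to_git_blob(values: dict) -> str:
    """Serialize deterministically (sorted keys) so Git diffs stay minimal."""
    return json.dumps(values, indent=2, sort_keys=True)
```

Deterministic serialization matters here: if the same desired state always produces the same bytes, the GitOps repository only shows diffs when the tenant’s configuration actually changes.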
SaaS Provider Hosted
In the SaaS Provider Hosted model, the entire application stack — including both control and data planes — operates within the SaaS provider’s AWS account. This deployment approach maximizes operational efficiency and streamlines management by consolidating all infrastructure and application components under the provider’s direct control, as shown in the following figure.

Figure 2: Composition of Application and Control Planes
Although some implementations may need lightweight agents in customer environments for specific functions such as telemetry collection, the core application processing and data storage remain within the provider’s infrastructure. This model is particularly well-suited for organizations prioritizing rapid customer onboarding and those whose customers don’t have strict data residency requirements, as shown in the following figure.

Figure 3: SaaS Provider Hosted Model
Tenant onboarding
- The tenant initiates the onboarding process by interacting with the SaaS Control Plane.
- Tenant metadata is stored in Amazon DynamoDB within the SaaS provider’s account.
- The SaaS provider’s GitOps repository is updated with the new tenant’s configuration.
- The GitOps controller deploys the relevant microservices in the provider’s EKS cluster.
- Tenant’s deployment can be either a pooled environment (shared) or a siloed environment (dedicated).
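To illustrate the metadata step above, the following sketch builds the `PutItem` request shape a control plane might issue for tenant metadata. The table name, attributes, and isolation values are assumptions; the resulting dict could be passed to a boto3 DynamoDB client as `client.put_item(**request)`.

```python
def tenant_metadata_item(tenant_id: str, plan: str, isolation: str) -> dict:
    """Build a DynamoDB PutItem request for a new tenant record.

    'isolation' captures the pooled (shared) vs siloed (dedicated)
    deployment choice. Table and attribute names are illustrative.
    """
    if isolation not in ("pooled", "siloed"):
        raise ValueError("isolation must be 'pooled' or 'siloed'")
    return {
        "TableName": "saas-tenants",
        "Item": {
            "tenant_id": {"S": tenant_id},   # partition key
            "plan": {"S": plan},
            "isolation": {"S": isolation},   # drives pooled vs siloed deploy
        },
        # Guard against overwriting a tenant on duplicate onboarding calls.
        "ConditionExpression": "attribute_not_exists(tenant_id)",
    }
```

The condition expression makes onboarding idempotent-safe: a retried or duplicate request fails cleanly instead of silently overwriting an existing tenant’s record.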
Shared responsibility
The SaaS provider maintains full responsibility for infrastructure management, security, scalability, and application maintenance, including EKS clusters, networking, storage, monitoring, and compliance. They handle all aspects of platform security, from encryption to patch management, while providing high availability and disaster recovery. Customers are responsible for managing their user access controls, role assignments, and application-specific configurations.
Day-2 operations
Day-2 operations in the SaaS Provider Hosted model are streamlined because the provider has complete control over the entire stack. The provider manages all aspects of the Amazon EKS environment, including automated control plane upgrades, worker node updates, and add-on management through GitOps workflows. Using EKS Auto Mode streamlines operations by automating node and managed add-on upgrades. A single pane of glass (SPOG) observability solution should be implemented to provide centralized visibility into both technical metrics (cluster health, application performance) and business metrics (tenant usage, feature adoption), enabling data-driven decisions. For example, the Amazon EKS Dashboard can be used to centralize cluster metrics.
Remote Application Plane
The Remote Application Plane model splits the SaaS architecture into two parts: a provider-managed control plane and a customer-hosted data plane. This approach enables customers to maintain data sovereignty while using the provider’s expertise.
The provider’s control plane handles tenant management and orchestration, while the application runs in the customer’s AWS account. Deployment can happen either through customer-run scripts or provider-managed remote provisioning. Secure channels such as AWS Identity and Access Management (IAM) roles with cross-account assume permissions and GitOps reconciliation processes maintain communication between both planes.
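A minimal sketch of the cross-account piece: constructing the parameters for an STS `AssumeRole` call targeting a role in the tenant’s account. The role name and the use of the tenant ID as the external ID are illustrative conventions, not prescribed ones; the dict could be passed to a boto3 STS client as `sts.assume_role(**request)`.

```python
def assume_role_request(tenant_account_id: str, tenant_id: str) -> dict:
    """Build the parameters for assuming a cross-account role in the
    tenant's AWS account. Role name and external-ID scheme are assumptions."""
    return {
        "RoleArn": f"arn:aws:iam::{tenant_account_id}:role/saas-provider-access",
        "RoleSessionName": f"provider-{tenant_id}",
        # An external ID mitigates the confused-deputy problem when the
        # provider acts on behalf of many tenants.
        "ExternalId": tenant_id,
        # Short-lived credentials limit blast radius if they leak.
        "DurationSeconds": 900,
    }
```

The tenant would create the role in their account with a trust policy that names the provider’s account as principal and requires the agreed external ID.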
This hybrid model particularly appeals to organizations with strict data governance needs, offering the perfect balance between SaaS convenience and data control. During onboarding, the provider sets up the necessary resources in the customer’s account and connects them to the central control plane, creating a seamless operational experience, as shown in the following figure.

Figure 4: Remote Application Plane Model
Tenant onboarding
- The tenant initiates the onboarding process by interacting with the SaaS Control Plane.
- Tenant metadata is stored in DynamoDB within the SaaS provider’s account.
- The tenant’s DevOps team sets up the SaaS Data Plane in their own AWS account. This section can be completely automated by the SaaS provider, with an appropriate responsibility model, making sure that the future patches, updates, and upgrades to the underlying services are completely managed by the SaaS provider.
- The SaaS provider’s GitOps repository is updated with the new tenant’s configuration.
- AWS CodeBuild is triggered to build the necessary components for the new tenant.
- Built artifacts are pushed to the Amazon Elastic Container Registry (Amazon ECR) registry in the tenant’s AWS account.
- The GitOps controller in the customer’s account deploys the relevant microservices in the remote EKS cluster.
This reference architecture demonstrates a typical flow for onboarding a new tenant in a Customer Hosted Data Plane model. The process ensures secure deployment and integration of the tenant’s environment with the SaaS provider’s control plane. Specific implementations and tools may vary based on individual SaaS provider requirements and customer needs.
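As a small illustration of the artifact-push step, image locations in each tenant’s account follow ECR’s registry URI convention. The repository and tag below are hypothetical.

```python
def tenant_ecr_uri(account_id: str, region: str, repo: str, tag: str) -> str:
    """Registry URI in the *tenant's* account where CodeBuild pushes
    built artifacts, following ECR's standard naming convention."""
    if not (account_id.isdigit() and len(account_id) == 12):
        raise ValueError("account_id must be a 12-digit AWS account ID")
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repo}:{tag}"
```

Keeping images in the tenant’s own registry means the workload never pulls across accounts at runtime, which simplifies both network policy and the customer’s compliance story.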
Shared responsibility
In the Remote Application Plane model, the SaaS provider manages the SaaS Control Plane and all application components running in the customer’s data plane. This includes application deployments, updates, monitoring, and the GitOps workflow that maintains these components. The SaaS provider is responsible for application availability, performance, and feature updates across all tenant environments.

The customer owns and manages the AWS account infrastructure where the data plane operates. This encompasses network security, IAM policies, compliance requirements, and cost management of AWS resources. Although the customer provides and maintains the EKS cluster, they should not modify the SaaS provider-managed applications or their configurations, because these are controlled through the GitOps workflow and any manual changes are reverted upon the next reconciliation. As mentioned previously in the Tenant Onboarding section, the SaaS provider can take ownership one step further and provision the base infrastructure services in the customer’s account, while maintaining proper processes to maintain and upgrade those services along with the application.

Kubernetes cluster upgrades represent a key intersection of these responsibilities. Although the customer owns the EKS cluster, any upgrades must be coordinated or automated with the SaaS provider to maintain application compatibility and minimize service disruption. This coordination process and its specific procedures are detailed in the day-2 operations section.
Day-2 operations
In the Remote Application Plane model, day-2 operations primarily revolve around upgrade management. The SaaS provider maintains control over application-level changes through GitOps, while infrastructure management responsibilities are shared.

Amazon EKS upgrades need a coordinated effort. EKS Auto Mode streamlines this process significantly, automatically upgrading worker nodes and core components such as VPC CNI and CoreDNS. For those not using EKS Auto Mode, Karpenter offers an alternative for efficient node management, although it lacks some of EKS Auto Mode’s add-on capabilities. In these cases, the SaaS provider typically manages critical add-ons through GitOps.

It’s also crucial to stay ahead of Kubernetes version lifecycles. Clusters running outdated versions face automatic upgrades after 26 months, which could disrupt workloads if not planned properly. This approach uses AWS automation to minimize manual work while maintaining system integrity. However, clear communication between the provider and customer remains essential, especially for major upgrades and version lifecycle management. By embracing these practices, SaaS providers can offer a more robust, manageable solution that balances control and convenience for their customers.
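For lifecycle planning, the forced-upgrade date can be approximated by adding 26 calendar months to a version’s release date. The sketch below is a back-of-the-envelope aid, and it assumes the release day exists in the target month (for example, not the 31st).

```python
import datetime

def forced_upgrade_date(release_date: datetime.date,
                        months: int = 26) -> datetime.date:
    """Approximate the end of extended support (and thus the automatic
    upgrade date) by adding calendar months to the version release date.

    Assumes release_date.day is valid in the target month; days like the
    31st would need clamping in a production implementation.
    """
    month_index = release_date.month - 1 + months
    year = release_date.year + month_index // 12
    month = month_index % 12 + 1
    return release_date.replace(year=year, month=month)
```

A provider can run this against the release dates of every Kubernetes version deployed across its tenant fleet to schedule coordinated upgrades well before AWS forces them.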
Customer Hosted Data Plane with EKS Hybrid Nodes
The Customer Hosted Data Plane with EKS Hybrid Nodes is an extension of the SaaS Provider Hosted model that allows for deployment into customer on-premises environments while keeping the SaaS Control Plane in the provider account. This is shown in the following figure. EKS Hybrid Nodes function as standard Kubernetes worker nodes but operate on customer-managed infrastructure. This hybrid approach enables SaaS providers to offer flexible deployment options that can accommodate diverse customer needs.

Figure 5: Customer Hosted Data Plane with EKS Hybrid Nodes
Tenant onboarding
- The tenant initiates the onboarding process by interacting with the SaaS Control Plane.
- Tenant metadata is stored in DynamoDB within the SaaS provider’s account.
- The tenant’s DevOps team uses the OS image provided by the SaaS provider to provision and onboard their EKS Hybrid Nodes.
- The customer establishes the necessary routes and AWS Site-to-Site VPN to create a connection to the SaaS provider’s environment.
- The SaaS provider’s GitOps repository is updated with the new tenant’s configuration.
- The GitOps controller watches GitOps Repo for changes to tenant configuration.
- The GitOps controller deploys the tenant’s microservices in their EKS cluster on the Hybrid Nodes.
The key differentiator in this hybrid model is the SaaS provider’s distribution of standardized OVA images containing the complete EKS Hybrid Node software stack. These pre-configured virtual appliances include the Kubernetes runtime, container runtime, security configurations, and pre-baked AWS Systems Manager activation codes that enable automatic registration with the SaaS provider’s AWS environment upon deployment.
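The registration settings baked into such an image can be pictured as a small configuration document. The following sketch follows the general shape of the EKS Hybrid Nodes `nodeadm` NodeConfig, but field names here are approximate and should be checked against the current schema before use.

```python
def hybrid_node_config(cluster: str, region: str,
                       activation_id: str, activation_code: str) -> dict:
    """Approximate node-registration config pre-baked into the provider's
    OS image. Shaped after nodeadm's NodeConfig; fields are illustrative."""
    return {
        "apiVersion": "node.eks.aws/v1alpha1",
        "kind": "NodeConfig",
        "spec": {
            "cluster": {"name": cluster, "region": region},
            # The embedded Systems Manager activation lets the node register
            # with the SaaS provider's AWS account on first boot.
            "hybrid": {
                "ssm": {
                    "activationId": activation_id,
                    "activationCode": activation_code,
                },
            },
        },
    }
```

In practice each shipped image would carry a distinct, short-lived activation so that a leaked image cannot register arbitrary nodes against the provider’s cluster.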
Shared responsibility
The shared responsibility model for EKS Hybrid Nodes creates a complex division of duties because the infrastructure spans AWS and customer on-premises environments. The SaaS provider maintains control over application-level operations through GitOps, creates and distributes standardized OVA images with embedded Systems Manager activation codes, and manages monitoring solutions across the hybrid infrastructure. Furthermore, because all hybrid nodes connect to the same cluster, the SaaS provider needs to implement extended security and networking controls to make sure that no tenant can gain access to hybrid nodes belonging to a different tenant.

Customers bear significant responsibility for their on-premises infrastructure, including maintaining the underlying virtualization platform, providing adequate resources, managing network connectivity and VPN connections to AWS, and maintaining physical security. Critically, customers are explicitly responsible for initiating and managing all hybrid node upgrades, including Kubernetes version updates and OS patches, while the SaaS provider should supply best practices and guidelines for conducting these operations. Although the SaaS provider can technically deliver updates through the embedded Systems Manager activation, customers maintain full control over when and how upgrades are applied to avoid unexpected disruptions to their workloads.
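One building block for that tenant isolation is a default-deny-style NetworkPolicy per tenant namespace. The sketch below generates such a manifest as a plain dict; the namespace convention is an assumption, and a complete design would also need node-level controls and network segmentation, since NetworkPolicy alone does not isolate the nodes themselves.

```python
def tenant_network_policy(tenant_id: str) -> dict:
    """Generate a NetworkPolicy that only admits ingress traffic from pods
    in the same tenant namespace, blocking cross-tenant pod traffic on the
    shared cluster. Namespace naming convention is hypothetical."""
    ns = f"tenant-{tenant_id}"
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "tenant-isolation", "namespace": ns},
        "spec": {
            "podSelector": {},  # empty selector: applies to every pod in ns
            "policyTypes": ["Ingress"],
            "ingress": [{
                "from": [{
                    # Match only the tenant's own namespace via the
                    # well-known name label Kubernetes sets automatically.
                    "namespaceSelector": {
                        "matchLabels": {"kubernetes.io/metadata.name": ns},
                    },
                }],
            }],
        },
    }
```

A GitOps controller can stamp one of these out per tenant during onboarding, so isolation policy is versioned alongside the rest of the tenant’s desired state.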
Day-2 operations
Kubernetes upgrades in hybrid environments are streamlined by the SaaS provider’s control over both the Amazon EKS control plane and Kubernetes manifests. The provider manages which API versions are deployed through their GitOps workflow; thus, API compatibility issues are minimized. The SaaS provider initiates control plane upgrades and provides upgrade instructions or scripts to customers, who then perform kubelet upgrades and apply security patches. Customers are responsible for initiating and managing all hybrid node upgrades, and maintaining full control over when and how these upgrades are applied, to avoid unexpected disruptions to their workloads.
A significant operational challenge in hybrid deployments involves provisioning supporting services needed by applications. When applications need databases, message queues, or other infrastructure services, the SaaS provider must decide whether to deploy these services as containerized workloads within Kubernetes or necessitate that customers provide equivalent services from their on-premises infrastructure. For example, if an application needs a PostgreSQL database, then the provider can either ship a containerized database instance that runs on hybrid nodes or establish connectivity requirements for the customer to provide their own database infrastructure.
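That provisioning decision can be captured as a simple per-tenant switch. The sketch below is illustrative only; the function, modes, and the in-cluster service name are assumptions.

```python
def database_binding(mode: str, tenant_id: str, customer_endpoint=None) -> dict:
    """Resolve where a tenant's application finds its PostgreSQL database:
    a containerized instance shipped on the hybrid nodes, or an endpoint
    the customer provides from their own infrastructure."""
    if mode == "containerized":
        # Provider deploys PostgreSQL as a workload; apps use the
        # in-cluster service DNS name (naming convention is hypothetical).
        return {
            "deploy": True,
            "endpoint": f"postgres.tenant-{tenant_id}.svc.cluster.local:5432",
        }
    if mode == "customer-provided":
        if not customer_endpoint:
            raise ValueError("customer must supply a database endpoint")
        return {"deploy": False, "endpoint": customer_endpoint}
    raise ValueError(f"unknown mode: {mode}")
```

Modeling the choice explicitly keeps the rest of the deployment pipeline identical in both cases: the application only ever consumes an endpoint, regardless of who operates the database behind it.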
Conclusion
SaaS Anywhere deployment models represent a strategic evolution for software providers looking to expand into regulated industries and performance-sensitive markets. The spectrum from Provider Hosted to Distributed/Remote Application Plane to Amazon EKS Hybrid Nodes offers increasing customer control at the cost of operational complexity. Success necessitates establishing clear shared responsibility boundaries, investing in robust automation such as GitOps workflows, and building specialized platform engineering capabilities to manage cross-account provisioning, monitoring, and lifecycle operations at scale.
The technical foundations continue to mature rapidly, with AWS services such as Amazon EKS Auto Mode and Amazon EKS Hybrid Nodes, and cloud-native tooling such as AWS Controllers for Kubernetes (ACK) and resource orchestration with kro reducing operational barriers. Combined with GitOps practices and infrastructure as code (IaC), these technologies make distributed SaaS economically viable for organizations that are willing to invest in the necessary platform capabilities.
About the authors
Tsahi Duek is a Principal Container Specialist Solutions Architect at Amazon Web Services. He has over 20 years of experience building systems, applications, and production environments, with a focus on reliability, scalability, and operational aspects. He is a system architect with a software engineering mindset.
Tiago Reichert is a Principal Containers Specialist SA at AWS, where he helps startups optimize their container and cloud-native strategies. With experience in DevOps, GenAI, and SaaS architectures, he collaborates with businesses to design scalable and efficient cloud solutions. Tiago also actively contributes to the tech community as an organizer of KCD Brazil and meetups focused on promoting cloud-native technologies.
Lucas Duarte is a Principal Containers Specialist SA at AWS, dedicated to supporting ISV customers in AMER through AWS Container services. Beyond his Solutions Architect role, Lucas brings extensive hands-on experience in Kubernetes and DevOps leadership. He’s been a key contributor to multiple companies in Brazil and US, driving DevOps excellence.
Eric Anderson is an Associate Specialist SA at AWS specializing in container technologies. He assists customers in implementing containerized solutions and optimizing their use of AWS services.