Connecting Google Kubernetes Engine (GKE) Clusters to Amazon EKS

Customers running Google Kubernetes Engine (GKE) clusters can now use the Amazon Elastic Kubernetes Service (Amazon EKS) console to visualize GKE cluster resources. This post describes how to use Amazon EKS Connector to connect a GKE cluster to the Amazon EKS console.

The EKS console provides a single pane of glass to visualize all your Kubernetes clusters. Customers who prefer a graphical user interface use the Amazon EKS console to view a Kubernetes cluster’s status, configuration, and workloads. With Amazon EKS Connector, customers can also view their GKE clusters’ information alongside their Amazon EKS clusters.

In addition to GKE, Amazon EKS Connector allows you to register and connect any conformant Kubernetes cluster to Amazon EKS. Any external cluster information shown in the EKS console is view-only.


You can connect your GKE clusters to Amazon EKS using the AWS CLI, the AWS Management Console, or eksctl. This post uses eksctl. You can find the steps for the AWS Management Console and the AWS CLI in the Amazon EKS documentation.

You’ll need the following tools:

  • AWS CLI
  • eksctl
  • kubectl

You’ll also need a GKE cluster on which you can install EKS Connector.

With eksctl, connecting an external Kubernetes cluster to Amazon EKS is a two-step process:

  • Generate configuration for the external cluster
  • Install EKS Connector

Generate external cluster configuration

eksctl makes it easy to register clusters by creating the required AWS resources and generating Kubernetes manifests for EKS Connector.

Register your GKE cluster:

eksctl register cluster \
--name <Cluster Name> \
--provider GKE \
--region <GCP Region>

Change the cluster name and Region in the command above to match your environment.
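For example, a filled-in invocation might look like the following; the cluster name and Region shown here are placeholders, not values from this post:

```shell
# Hypothetical example: register a GKE cluster named "my-gke-cluster"
# with Amazon EKS in the us-east-1 Region. Replace both values with your own.
eksctl register cluster \
  --name my-gke-cluster \
  --provider GKE \
  --region us-east-1
```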

eksctl register cluster registers the external cluster with Amazon EKS and creates three files:

  • eks-connector.yaml
  • eks-connector-clusterrole.yaml
  • eks-connector-console-dashboard-full-access-group.yaml

These manifests create the eks-connector StatefulSet in a new namespace named eks-connector. They permit EKS Connector to get and list resources in all namespaces of the cluster to which they are applied.

eksctl also creates an IAM role that EKS Connector uses to invoke Systems Manager APIs.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SsmControlChannel",
      "Effect": "Allow",
      "Action": [
        "ssmmessages:CreateControlChannel"
      ],
      "Resource": "arn:aws:eks:*:*:cluster/*"
    },
    {
      "Sid": "ssmDataplaneOperations",
      "Effect": "Allow",
      "Action": [
        "ssmmessages:CreateDataChannel",
        "ssmmessages:OpenDataChannel",
        "ssmmessages:OpenControlChannel"
      ],
      "Resource": "*"
    }
  ]
}

The next step is to apply these manifests to the GKE cluster.

Install EKS Connector on GKE

Apply the manifests that eksctl generated in the previous step:

kubectl apply -f eks-connector.yaml,eks-connector-clusterrole.yaml,eks-connector-console-dashboard-full-access-group.yaml

Verify that EKS Connector pods are running:

kubectl get all --namespace eks-connector

NAME                  READY    STATUS    RESTARTS   AGE
pod/eks-connector-0   2/2      Running   0          5m49s
pod/eks-connector-1   2/2      Running   0          5m31s

NAME                              READY     AGE
statefulset.apps/eks-connector    2/2       5m51s

EKS Connector uses user impersonation to authorize against the GKE cluster’s API server. Kubernetes administrators can further customize EKS Connector’s permissions to limit its access to cluster resources. Please see Granting access to a user to view a cluster to configure more restrictive access.
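As a rough sketch of what more restrictive access could look like, the following manifest grants get/list permissions in a single namespace instead of cluster-wide. This is a hypothetical example, not the manifest eksctl generates: the namespace (demo), the Role name, and the IAM principal ARN in the RoleBinding subject are all placeholders; see the EKS documentation for the exact subjects EKS Connector impersonates.

```shell
# Hypothetical sketch: limit console visibility to the "demo" namespace.
# All names and the IAM ARN below are placeholders for your environment.
cat <<'EOF' | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: eks-connector-demo-viewer
  namespace: demo
rules:
  - apiGroups: ["", "apps", "batch"]
    resources: ["*"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: eks-connector-demo-viewer
  namespace: demo
subjects:
  - kind: User
    name: arn:aws:iam::111122223333:role/console-user   # placeholder IAM principal
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: eks-connector-demo-viewer
  apiGroup: rbac.authorization.k8s.io
EOF
```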

For this post, we have used a role that permits view access to all cluster resources.

Once the GKE cluster is registered, you can use eksctl to list your clusters, including your GKE cluster.

~# eksctl get clusters
2021-11-11 06:32:42 [i] eksctl version 0.74.0-rc.0
2021-11-11 06:32:42 [i] using region us-east-1
NAME                               REGION       EKSCTL CREATED
eks-cluster-aws                    us-east-1    True
gke-connected-by-eks-connector     us-east-1    False

You can also use the AWS CLI to list and describe registered clusters:

~# aws eks describe-cluster --name <Cluster Name>

{
  "cluster": {
    "name": "gke-connected-by-eks-connector",
    "arn": "arn:aws:eks:us-east-1:831141539580:cluster/gke-connected-by-eks-connector",
    "createdAt": "2021-11-10T21:33:43.179000+00:00",
    "status": "ACTIVE",
    "tags": {},
    "connectorConfig": {
      "activationId": "397acad4-4ecf-4fca-8beb-e71bc6128481",
      "activationExpiry": "2021-11-13T21:33:42.779000+00:00",
      "provider": "GKE",
      "roleArn": "arn:aws:iam::831141539580:role/eksctl-20211110213333349284"
    }
  }
}
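The AWS CLI can also list connected external clusters alongside your EKS-managed clusters. The sketch below uses the list-clusters command's --include parameter; the Region is a placeholder:

```shell
# List both EKS-managed and connected external clusters in the Region.
# Passing "all" to --include adds registered external clusters to the output.
aws eks list-clusters --include all --region us-east-1
```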

The Amazon EKS console will now show the GKE cluster along with your EKS clusters and any other registered external clusters. Note that the IAM user or role you use to sign in to the AWS Management Console should be the same identity that was used to generate the manifests in the previous step.

Here’s a screenshot of the EKS console that shows an overview of the GKE cluster’s data plane.

You can view node-level details such as compute resources, kernel version, and operating system.

The panel also displays the resource allocation and pods running on the specific node.

The Workloads tab shows workloads (pods, deployments, statefulsets, and daemonsets) running in the cluster. It also allows filtering by namespace or resource type.

You can select a workload to view pod details such as events, status, labels, and annotations.

When pods contain multiple containers, you can further drill down to view container-level details such as image, mounts, ports, and environment variables.

The EKS Console also provides information on the objects in JSON format. Here’s information about a node in JSON:

Similarly, you can enable the Raw view toggle to view the JSON of other Kubernetes objects in the cluster.
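The same raw JSON can also be retrieved directly from the cluster with kubectl; for example, for a node (the node name below is a placeholder):

```shell
# Fetch the full JSON representation of a node, similar to the console's Raw view.
kubectl get node gke-example-node -o json
```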

Tags can be applied to registered clusters to help you track each cluster’s owner, organization, cluster function, and so on. You can then search and filter the clusters based on the tags that you add.
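For example, tags can be added to a registered cluster with the AWS CLI's eks tag-resource command; the account ID and tag values below are placeholders:

```shell
# Tag a registered cluster so it can be searched and filtered in the console.
aws eks tag-resource \
  --resource-arn arn:aws:eks:us-east-1:111122223333:cluster/gke-connected-by-eks-connector \
  --tags owner=platform-team,function=demo
```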


Cleanup

Run the following commands to delete the resources created in this post:

kubectl delete -f eks-connector.yaml,eks-connector-clusterrole.yaml,eks-connector-console-dashboard-full-access-group.yaml

eksctl deregister cluster <Cluster Name>


Conclusion

Using EKS Connector, you can view information about any conformant Kubernetes cluster in the Amazon EKS console. You can connect any conformant Kubernetes cluster, including Amazon EKS Anywhere clusters running on-premises, self-managed clusters on Amazon Elastic Compute Cloud (Amazon EC2), and other Kubernetes clusters running outside of AWS such as GKE. Regardless of where your cluster is running, you can use the Amazon EKS console to get a centralized view of all connected clusters and the Kubernetes resources running on them.

EKS Connector is open-source. Visit the EKS documentation for more details.

*Google Kubernetes Engine and icon are trademarks of Google LLC. AWS is not affiliated with Google LLC or Google Kubernetes Engine.

Gokul Chandra

Gokul is a Specialist Solutions Architect at Amazon Web Services. He helps customers modernize with containers, using AWS container services to design scalable and secure applications. He is passionate about the cloud native space and Kubernetes. Gokul's areas of interest include containers, microservices, public and private cloud platforms, cloud native for telco, edge computing, hybrid and multi-cloud architectures, and NFV. You can find him on Medium @gokulchandrapr and LinkedIn @gokulchandra.

Re Alvarez-Parmar

In his role as Containers Specialist Solutions Architect at Amazon Web Services, Re advises engineering teams on modernizing and building distributed services in the cloud. Prior to joining AWS, he spent more than 15 years as an enterprise and software architect. He is based out of Seattle. Connect on LinkedIn at: