Getting Started with Istio on Amazon EKS
Note: Broken links have been removed. (July 27, 2020)
Service Meshes enable service-to-service communication in a secure, reliable, and observable way. In this blog post, Matt Turner, CTO at Native Wave, explains the concept of a Service Mesh, shows how Istio can be installed as a Service Mesh on a Kubernetes cluster running on AWS using Amazon EKS, and then explains some key features of Istio and how it helps make your applications more resilient.
- 9 April 2019: Minor re-write for Istio 1.1 & recent EKS improvements
- 12 October 2018: Update to reflect EKS now supporting Webhook Admission Controllers
The Istio project just reached version 1.1. Istio is the leading example of a new class of projects called Service Meshes. Service meshes manage traffic between microservices at layer 7 of the OSI Model. Using this in-depth knowledge of the traffic semantics – for example HTTP request hosts, methods, and paths – traffic handling can be much more sophisticated.
In this post about Istio on Amazon Elastic Container Service for Kubernetes (Amazon EKS), we’ll walk through installation, then see a motivating example in action.
Oh, and to explain all the terrible nautical puns in this post: Istio is Greek for “sail.”
Istio works by having a small network proxy sit alongside each microservice. This so-called “sidecar” intercepts all of the service’s traffic, and handles it more intelligently than a simple layer 3 network can. Istio uses the Envoy proxy as its sidecar. Envoy was originally written at Lyft and is now a CNCF project. The whole set of sidecars, one per microservice, is called the data plane. The work of the sidecars is coordinated by a small number of central components called the control plane. Control and data plane architectures are very common in distributed systems, from network switches to compute farms.
Istio aims to run in multiple environments, but by far the most common is Kubernetes. In this configuration, Istio’s control plane components are run as Kubernetes workloads themselves, like any other Controller in Kubernetes. In addition, Kubernetes’s Pod construct lends itself very well to Istio’s sidecar model for the data plane.
Recall that a Pod is a tightly coupled set of containers, all sharing one IP address (technically, one network namespace) – this is perfect for a network sidecar.
Istio’s layer 7 proxy runs as another container in the same network context as the main service. From that position it is able to intercept, inspect, and manipulate all network traffic heading through the Pod, yet the primary container needs no alteration or even knowledge that this is happening. The practical upshot of this is that Istio can augment any set of services, however old, and written in any language.
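To make this concrete, here is a simplified sketch of what a Pod looks like after sidecar injection. The container name `istio-proxy` and the `proxyv2` image follow Istio's conventions; the application name, image, and port are made up for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-service          # hypothetical application Pod
spec:
  containers:
  - name: my-service        # the unmodified application container
    image: example/my-service:1.0
    ports:
    - containerPort: 8080
  - name: istio-proxy       # the injected Envoy sidecar
    image: docker.io/istio/proxyv2:1.1.2
    # Both containers share the Pod's network namespace, so the sidecar
    # can intercept all traffic entering and leaving my-service.
```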
Istio on EKS
Enough theory; let’s get going with Istio!
In another post of mine, I covered how to install the pre-1.0 nightly builds of Istio into Amazon EKS. That was a bit of a minefield, but with the 1.x releases of Istio, the process has gotten a lot simpler.
Provision an Amazon EKS Cluster
The first thing you’ll need is an Amazon EKS cluster.
Whichever instructions you follow should include information on installing the client-side aws-iam-authenticator as well; if not, see aws-iam-authenticator.
For example, to bring up a basic Amazon EKS cluster with
eksctl:
eksctl create cluster \
--region us-west-2 \
--name istio-on-eks \
--nodes 2 \
--node-type m5.large
This command will bring up a Kubernetes cluster with a managed (and hidden) control plane, and two
m5.large worker nodes.
That’s enough worker capacity to accommodate Istio’s control plane and the example app we’ll be using, without having to wait for the cluster autoscaler.
eksctl adds connection information for this cluster to your
~/.kube/config and sets your current context to that cluster, so we can just start using it. If you’d rather
eksctl didn’t edit that file, you can pass
--kubeconfig to have it write a standalone file, which you can use in select terminals by pointing the KUBECONFIG environment variable at it.
A really nice feature of Amazon EKS clusters is that they use your AWS IAM users and groups for authentication, rather than the cluster having a separate set of users (as you’re probably accustomed to). Although the authentication is different, authorization uses the same RBAC system – you’re just binding your existing AWS Identity and Access Management (IAM) users to Roles instead of Kubernetes-internal users.
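Under the hood, that mapping from IAM identities to Kubernetes users lives in the aws-auth ConfigMap in the kube-system namespace. A sketch of what an entry looks like (the account ID and user name here are made up):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/alice  # hypothetical IAM user
      username: alice
      groups:
      - system:masters   # granted access via ordinary Kubernetes RBAC
```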
For this authentication to work, your
kubectl needs to be able to present your AWS credentials to the cluster, rather than the Kubernetes-specific x509 certificate you probably use now.
To do that,
kubectl needs a plugin:
go get -u -v github.com/kubernetes-sigs/aws-iam-authenticator/cmd/aws-iam-authenticator
Now let’s check that everything works, and poke around our new cluster using
kubectl get nodes. Notice a few things about the output:
- There are no master nodes visible.
- The worker nodes are running a recent version of Kubernetes.
- The workers are running Amazon Linux. This is actually an opinion of
eksctl; Amazon EKS lets you bring your own worker node AMI if you have specific requirements, and the Amazon EKS AMI Build Specification is publicly available to help you create images to use as a starting point for customization.
Download Istio Deployment Files
At the time of writing, Istio is at version 1.1.2.
Istio provides a convenient script which downloads and extracts the latest Istio release for you:
curl -L https://git.io/getLatestIstio | sh -
For the more security-conscious, the tarballs are available from the Istio GitHub releases page.
We’ll be using Helm, a common package manager for Kubernetes, to install Istio into our Amazon EKS cluster.
First, make sure you have Helm installed. Instructions specific to your platform are available in Helm’s comprehensive documentation.
Next, you need to deploy Helm’s server-side component, Tiller, to your Amazon EKS cluster. Due to Kubernetes’s RBAC security mechanisms, this can get quite complicated. Luckily, the Istio release provides a simple configuration to get up and running.
From inside the
istio-1.* directory that the script created, deploy that config, and then Tiller:
kubectl create -f install/kubernetes/helm/helm-service-account.yaml
helm init --service-account tiller
NB: This configuration will get you going, but it is not an example of best security practice. Do not do this in a production cluster! Helm’s documentation on Role-Based Access Control will show you how to set up securely.
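For reference, the supplied helm-service-account.yaml amounts to a ServiceAccount for Tiller bound to the cluster-admin role – which is exactly why it isn’t production-grade:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin   # full cluster access: fine for a demo, not for prod
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
```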
Install Istio on EKS
We’ll again use Helm, this time to simplify our Istio installation to a couple of commands. These instructions have been tested with Helm 2.13. First, we must install some prerequisites:
helm install \
--name istio-init \
--namespace istio-system \
install/kubernetes/helm/istio-init
Then you can simply install the Helm chart:
helm install \
--name istio \
--namespace istio-system \
install/kubernetes/helm/istio \
--values install/kubernetes/helm/istio/values-istio-demo.yaml
For those not familiar with Helm: we name our Helm-managed deployment “istio”, as there may be more than one of these in advanced configurations; this gives us an easy name to use to manage and uninstall it later. We keep it in its own Kubernetes namespace,
istio-system – again, just to make our lives easier.
The only special parameter we’re using here is the last one, which enables a few more features than the basic install, some of which we’ll explore in this post.
Bookinfo, a Sample Application
The Istio project provides a sample microservices app, Bookinfo, which is designed to help demonstrate many of Istio’s features.
Now that we have Istio installed, let’s take a tour!
Bookinfo is designed to run in Kubernetes, and the Istio release we downloaded comes with a YAML file declaring all of the cluster resources for a Bookinfo deployment.
Recall that in order for Istio to add intelligence to these services, it needs its sidecar alongside all of Bookinfo’s code, intercepting and managing all the network traffic.
These sidecars are automatically added by a Mutating Admission Controller Webhook (configured by a resource of kind
MutatingWebhookConfiguration, installed along with the rest of Istio). This is a webhook, registered with the Kubernetes control plane, to which all new resource definitions are sent for inspection. These webhooks can either check resources and possibly reject them (Validating), or make changes to them (Mutating). Due to Istio’s use of a Mutating Webhook Admission Controller, the whole system is transparent not only to the developers of the application, but also to its operators.
However, Istio still operates on an opt-in basis. We’ll be lazy and install Bookinfo into the Kubernetes namespace
default, so we need to add a label to that namespace to tell Istio’s webhook to inject the sidecars into any Pod deployed there:
$ kubectl label namespace default istio-injection=enabled
Now, we can deploy a vanilla (Istio-unaware) definition of the Bookinfo application, and the Mutating Webhook will alter the definition of any Pod it sees to include the Envoy sidecar container.
$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
(FYI: as far as I’m aware, there’s no canonical pronunciation of
kubectl, so argue kube-control / cuddle / cuttle / c-t-l amongst yourselves!).
The one thing that Bookinfo’s supplied resources don’t do is expose the front-end service to the world.
Networking, especially the low-level aspects like this, is complex, difficult, and environment-specific.
For this reason, the basic Bookinfo install leaves this aspect out.
In place of the more familiar
nginx Ingress Controller, Istio will be handling ingress for us (adding all its layer 7 goodness as it does so).
The actual ingress traffic is handled by Envoy instances (separate from the sidecars for various reasons), but, as with the rest of the mesh, these are configured by the Istio control plane.
While Istio can interpret the Kubernetes Ingress resources that the
nginx Ingress Controller uses, it has its own preferred networking resource types which offer more control.
Since we’re in a greenfield cluster, we’ll use these new ingress types, starting with the Gateway:
$ kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
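The Gateway in that file looks roughly like this – it binds Istio’s ingress Envoys to plain HTTP on port 80 for any host:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway   # use Istio's default ingress gateway deployment
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"                   # accept traffic for any hostname
```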
These resources are not unlike an Ingress resource, in that the routing apparatus they configure is ultimately placed behind a “physical” load balancer external to the Kubernetes cluster – in our case, an AWS Elastic Load Balancer.
The following commands will locate the host and port we ultimately need to hit to access our Bookinfo application from across the internet:
$ export INGRESS_HOST=$(kubectl -n istio-system \
    get service istio-ingressgateway \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
$ export INGRESS_PORT=$(kubectl -n istio-system \
    get service istio-ingressgateway \
    -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
$ export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
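To give a feel for the result, here is the shape of the final URL with made-up values – a real INGRESS_HOST will be the DNS name of the Elastic Load Balancer provisioned for the ingress gateway:

```shell
# Hypothetical values, for illustration only
INGRESS_HOST="a1b2c3d4e5-1234567890.us-west-2.elb.amazonaws.com"
INGRESS_PORT=80
GATEWAY_URL="$INGRESS_HOST:$INGRESS_PORT"
echo "http://$GATEWAY_URL/productpage"
```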
You can now browse to
http://$GATEWAY_URL/productpage, Bookinfo’s landing page (replacing
$GATEWAY_URL with the value we just assigned to it; on a Mac, open http://$GATEWAY_URL/productpage in a shell will do it in one step).
In this post I’ve shown you how to provision an Amazon EKS cluster, use the Helm package manager to install Istio in an EKS-conformant way, and install an example microservices application with Istio augmentation.
To round out this post, let’s take a quick peek at one of Istio’s many features – some advanced traffic routing enabled by the fact that Istio deals with traffic at layer 7.
Load Bookinfo a few times by again visiting
http://$GATEWAY_URL/productpage and hitting refresh a bunch.
Notice how sometimes the reviews on the right have star ratings, sometimes in color, and sometimes there are no stars at all.
This is because these reviews come from a separate
reviews service, and in the system we just deployed there are three separate versions of it, as you can see with
kubectl get pods.
There’s just one Kubernetes Service pointing at all of them, so the other Pods can call the reviews service just by using the name reviews.
The upshot of this is that we get just Kubernetes’ basic round-robin “load balancing,” as you would during a rolling upgrade.
Layer 7 Routing
So, let’s get things under control and pin all calls to
reviews v1 for now.
The Bookinfo sample has a few pre-made Istio configs we can use, and this is one of them.
First we need to tell Istio about the different versions that exist and how to tell them apart (in this case, labels on the Kubernetes Deployment).
$ kubectl apply -f samples/bookinfo/networking/destination-rule-all.yaml
Let’s take a look through the part of that file that pertains to our reviews service.
It’s of kind
DestinationRule, which specifies how to talk to the workloads, e.g. Pods, comprising the service. Examples of rules include strategies for load balancing between the Pods, the maximum connections to allow to any one Pod, etc. In this example we’re not actually using any of these, but rather telling Istio how to tell the different versions of destinations (Pods) apart.
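As an aside – not part of the Bookinfo sample – a DestinationRule that did set such policies might look like this; the load-balancing strategy and connection cap here are illustrative values, not recommendations:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN      # send requests to the least-loaded Pod
    connectionPool:
      tcp:
        maxConnections: 100   # cap concurrent connections per destination
```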
The destination in question is anything with hostname reviews, i.e. our reviews Service (in the Kubernetes sense). Any HTTP request with a header of Host: reviews will have this rule applied. As we said, this is necessary but not sufficient to tell the different versions apart.
- name: v1
  labels:
    version: v1
- name: v2
  labels:
    version: v2
- name: v3
  labels:
    version: v3
So, the final section of the file states that the service’s workloads should be treated as three separate subsets. Because of Istio’s tight integration with Kubernetes, it can identify endpoints by the labels on their Pods.
With those subsets of the
reviews Service defined, we can tell Istio that anyone looking to call
reviews should always be directed to v1.
$ kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
In this file we define a resource called a VirtualService, which again matches the traffic to
reviews and says that all of it should go to version 1. A more advanced VirtualService would match traffic on HTTP paths and methods as well, and support URL rewrites, giving us a lot of the power of a more traditional reverse proxy. This simple example only matches the
host header, so it looks fairly similar to the DestinationRule, but, whereas that resource specifies how to talk to workloads, VirtualServices are about which workloads to route to, for various request formats.
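The heart of that file is short – all traffic addressed to reviews goes to the v1 subset:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1   # the subset named in the DestinationRule
```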
Hit Bookinfo again a few times and you’ll see just the basic reviews with no stars at all. Notice that we didn’t change any Kubernetes Services here, let alone delete the unwanted versions; other clients could still be routed to
reviews at other versions.
An Amazon EKS cluster still has a normal IP network, so in any language you can continue to make use of the normal socket routines – no special calls to weird RPC libraries here.
productpage still makes DNS requests for “reviews,” so it will still work without Istio, or even outside Kubernetes.
However, this would mean that, when a request leaves the
productpage container, its destination would be the Kubernetes Service’s
ClusterIP, and kube-proxy would pick a Pod, of any version, at random.
So how does Istio handle this request? How does it even know where the request is meant to be headed?
Remember that Istio understands the HTTP content of the request, so it looks at the HTTP
Host: header, matches that against the
VirtualService, and sends the request where we really want it to go: v1 of reviews.
While I concede there are other ways to achieve what we just did (though I would argue that Istio’s way is neater and more flexible), this HTTP-aware routing has much more power up its sleeve.
Let’s say you’re doing exploratory testing of the new version – using it in your browser as a user would, poking at the edges, looking for bugs.
You want to have the
productpage use v2 of
reviews, but only for you.
Let’s also say you’re called Jason.
Apply the following file and once again hit that refresh button.
$ kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml
Compare this file, shown below, with the previous all-v1 version. That original routing rule is still there at the end of the file, but rules are applied in order, so we’ve inserted a new statement just before the old rule that catches just Jason’s traffic and directs it elsewhere. All other traffic continues to fall through to the original, default rule.
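The reviews section of that file looks like this – note the ordered list under http:, with the header match first and the catch-all last:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason     # only requests carrying "end-user: jason"
    route:
    - destination:
        host: reviews
        subset: v2
  - route:                 # everyone else falls through to v1
    - destination:
        host: reviews
        subset: v1
```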
Everyone but Jason should still be seeing the plain, starless v1 reviews.
Now hit Sign in in the top right, and sign in as “jason” (case-sensitive, but any password will do – I think we found a bug there!).
Refreshing one more time, you should now see the new shiny star ratings that your co-worker wanted you to kick the tires on.
Have a look in that latest YAML file, and you’ll see that network traffic routing is now contingent on an HTTP header.
Try doing that with a simple layer 3 network!
Of course, your criterion could be user-agent, logged-in vs logged-out – anything that can be inferred from an HTTP header (and of course any metadata from further down the stack, such as a port number).
I hope this has given you a taste for what Istio can do, and shown you that it isn’t so hard to install either.
The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post.