AWS Open Source Blog

Getting Started with Istio on Amazon EKS

Service Meshes enable service-to-service communication in a secure, reliable, and observable way. In this multi-part blog series, Matt Turner, founding engineer at Tetrate, will explain the concept of a Service Mesh, show how Istio can be installed as a Service Mesh on a Kubernetes cluster running on AWS using Amazon EKS, and then explain some key features of Istio and how it helps make your applications more resilient.
Arun


Update: Automatic sidecar injection now supported

Update: As of today, 12th October 2018, EKS now supports Webhook Admission Controllers. This means Istio’s automatic sidecar injection now works. We’ve preserved this original post for reference, but to use this feature, make the following changes to the instructions below:

  • Use the latest eksctl; at least 0.1.6
  • Remove the following two lines from the installation of Istio:
    --set global.configValidation=false \
    --set sidecarInjectorWebhook.enabled=false
  • Run the following command after using Helm to install Istio:
    kubectl label namespace default istio-injection=enabled

Istio

The Istio project just reached version 1.0. Istio is the leading example of a new class of projects called Service Meshes. Service meshes manage traffic between microservices at Layer 7 of the OSI Model. Using this in-depth knowledge of the traffic semantics – for example HTTP request hosts, methods, and paths – traffic handling can be much more sophisticated.

In this first in a series of posts about Istio on EKS, we’ll walk through installation, then see a motivating example in action.

Oh, and to explain all the terrible nautical puns in this post: Istio is Greek for “sail.”

Architecture

Istio works by having a small network proxy sit alongside each microservice. This so-called “sidecar” intercepts all of the service’s traffic, and handles it more intelligently than a simple layer 3 network can. Istio uses the Envoy proxy as its sidecar. Envoy was originally written at Lyft and is now a CNCF project. The whole set of sidecars, one per microservice, is called the data plane. The work of the sidecars is coordinated by a small number of central components called the control plane. Control and data plane architectures are very common in distributed systems, from network switches to compute farms.

Istio Control Plane API
Istio aims to run in multiple environments, but by far the most common is Kubernetes. In this configuration, Istio’s control plane components are run as Kubernetes workloads themselves, like any other Controller in Kubernetes. Kubernetes’s Pod construct lends itself very well to Istio’s sidecar model.

Recall that a Pod is a tightly coupled set of containers, all sharing one IP address (technically, one network namespace) – this is perfect for a network sidecar.

Istio’s layer 7 proxy runs as another container in the same network context as the main service. From that position it is able to intercept, inspect, and manipulate all network traffic heading through the Pod, yet the primary container needs no alteration or even knowledge that this is happening. The practical upshot of this is that Istio can augment any set of services, however old, and written in any language.

It retrofits all the features of a library like Hystrix or Finagle, but, while those are JVM-only, Istio is language-agnostic.

Istio on EKS

Enough theory; let’s get going with Istio!

In another post of mine, I covered how to install the pre-1.0 nightly builds of Istio into EKS. That was a bit of a minefield, but with the 1.0 release of Istio, the process has gotten a lot simpler. There are still a couple of things to work around, as we’ll see.

Provision an EKS Cluster

The first thing you’ll need is an EKS cluster.

If you don’t yet have one, there are various ways to provision one, including eksctl, the AWS Console, or Terraform.

Whatever instructions you follow should include information on installing the client-side aws-iam-authenticator as well; if not, see the aws-iam-authenticator project documentation.

For example, to bring up a basic EKS cluster with eksctl, run:

eksctl create cluster \
    --region us-west-2 \
    --name istio-on-eks \
    --nodes 2 \
    --ssh-public-key "~/.ssh/id_rsa.pub"

This command will bring up a Kubernetes cluster with a managed (and hidden) control plane, and two m5.large worker nodes.

That’s enough worker capacity to accommodate Istio’s control plane and the example app we’ll be using, without having to wait for the cluster autoscaler.

eksctl adds connection information for this cluster to your ~/.kube/config and sets your current context to that cluster, so we can just start using it. If you’d rather eksctl didn’t edit that file, you can pass --kubeconfig to have it write a standalone file, which you can use in select terminals with export KUBECONFIG=.
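If you take the standalone-file route, the create command from above gains one flag (the file name here is arbitrary):

```shell
# Write the cluster's connection details to a standalone file
# instead of merging them into ~/.kube/config.
eksctl create cluster \
    --region us-west-2 \
    --name istio-on-eks \
    --nodes 2 \
    --ssh-public-key "~/.ssh/id_rsa.pub" \
    --kubeconfig ./istio-on-eks.kubeconfig

# Point kubectl at that file in this terminal only.
export KUBECONFIG=$PWD/istio-on-eks.kubeconfig
```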

A really nice feature of EKS clusters is that they use your AWS IAM users and groups for authentication, rather than the cluster having a separate set of users (as you’re probably accustomed to). Although the authentication is different, authorization uses the same RBAC system – you’re just binding your existing AWS IAM users to Roles instead of Kubernetes-internal users.

For this authentication to work, your kubectl needs to be able to present your AWS credentials to the cluster, rather than the Kubernetes-specific x509 certificate you probably use now.

To do that, kubectl needs a plugin (make sure $GOPATH/bin, where go get installs binaries, is on your PATH):

go get -u -v \
github.com/kubernetes-sigs/aws-iam-authenticator/cmd/aws-iam-authenticator

Now let’s check that everything works, and poke around our new cluster using kubectl get nodes. Notice a few things about the output:

  • There are no master nodes visible.
  • The worker nodes are running a recent version of Kubernetes.
  • The workers are running Amazon Linux. This is actually an opinion of eksctl; EKS lets you bring your own worker node AMI if you have specific requirements, and the Amazon EKS AMI Build Specification is publicly available to help you create images to use as a starting point for customization.
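A quick way to confirm all three points is kubectl's wide output, which adds the OS image and kubelet version for each node:

```shell
# -o wide adds OS-IMAGE, KERNEL-VERSION, and CONTAINER-RUNTIME columns,
# so you can see Amazon Linux and the Kubernetes version at a glance.
kubectl get nodes -o wide
```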

Download Istio Deployment Files

At the time of writing, Istio is at version 1.0.2.
Istio provides a convenient script which downloads and extracts the latest Istio release for you:

curl -L https://git.io/getLatestIstio | sh -
cd istio-1.*

For the more security-conscious, the tarballs are available from the Istio GitHub releases page.

Configure Helm

We’ll be using Helm, the de facto package manager for Kubernetes, to install Istio into our EKS cluster.

First, make sure you have Helm installed. Instructions specific to your platform are available in Helm’s comprehensive documentation.
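You can confirm the client is installed, and check its version (which will matter shortly):

```shell
# Print the Helm client version only; Tiller, the server side,
# isn't deployed to the cluster yet.
helm version --client
```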

Next, you need to deploy Helm’s server-side component, Tiller, to your EKS cluster. Due to Kubernetes’s RBAC security mechanisms, this can get quite complicated. Luckily, the Istio release provides a simple configuration to get up and running.

Still in the istio-1.* directory, deploy that config, and then Tiller:

kubectl create -f install/kubernetes/helm/helm-service-account.yaml
helm init --service-account tiller

NB: This configuration will get you going, but it is not an example of best security practice. Do not do this in a production cluster! Helm’s documentation on Role-based Access Control will show you how to set up securely.
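For the curious, that file is short: it amounts to a ServiceAccount for Tiller bound to the cluster-admin ClusterRole, roughly like this (a sketch, not necessarily the exact file contents):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
```

That cluster-admin binding is exactly why this setup isn't production-grade: Tiller ends up with full control of the cluster.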

Install Istio on EKS

Istio’s Helm chart relies on the recently-improved ability of Helm to correctly order the deployment of resources that depend on one another. This requires Helm 2.10 or later. Make sure you have at least that version installed, then you can simply install the Helm chart:

helm install \
--wait \
--name istio \
--namespace istio-system \
install/kubernetes/helm/istio \
--set global.configValidation=false \
--set sidecarInjectorWebhook.enabled=false

For those not familiar with Helm: we name our Helm-managed deployment “istio”, as there may be more than one in advanced configurations; this gives us an easy name to use to manage and uninstall it later. We keep it in its own Kubernetes namespace, istio-system – again, just to make our lives easier.

The only special parameters we’re using here are the last two, which both disable Istio features that rely on Kubernetes Mutating or Validating Webhook Admission Controllers.

I won’t go into the details, but these mechanisms let Kubernetes-hosted applications (e.g., the Istio control plane) register webhooks to be called when new resources are deployed to the cluster. They can either check them and possibly reject them (Validating), or make changes to them (Mutating).

Istio wants to use a Validating hook to perform extra checks on its custom resource types (Istio uses Kubernetes Custom Resource Definitions (CRDs) to store its config, and these natively support only minimal validation). It also tries to use a Mutating hook to inject the sidecar proxy Container into every Pod deployed to the system.

However, today webhook admission controllers aren’t enabled on EKS, so we have to tell Istio that it can’t rely on those features.
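Once the helm install completes, you can check that the control plane components came up:

```shell
# All Istio control-plane Pods should reach Running
# (or Completed, for one-shot setup jobs).
kubectl get pods -n istio-system
```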

Bookinfo – a Sample Application

The Istio project provides a sample microservices app, Bookinfo, which is designed to help demonstrate many of Istio’s features.

Now that we have Istio installed, let’s take a tour!

Install Bookinfo

Bookinfo is designed to run in Kubernetes, and the Istio release we downloaded comes with a YAML file declaring all of the cluster resources for a Bookinfo deployment.

Recall that, in order for Istio to add intelligence to these services, it needs its sidecar alongside all of Bookinfo’s code, intercepting and managing all the network traffic.

Normally these sidecars would be automatically added by a Mutating Admission Webhook (configured by resource kind MutatingWebhookConfiguration), and the whole system would be transparent not only to the developers of the application, but also to its operators.

However, with webhook admission controllers not available in EKS at the time of writing, we must manually add the Container to Bookinfo’s Pod definitions.

Istio’s command line, istioctl, has a command for just this, reading in a set of standard Kubernetes Deployments and emitting the injected versions.

(As with kubectl, as far as I’m aware there’s no canonical pronunciation of this command, so argue istio-control / cuddle / cuttle / c-t-l amongst yourselves!).

With a bit of Bash magic, we can inject the sidecar and deploy the resulting resources in one easy-to-copy command:

$ kubectl apply -f \
<(istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml)
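You can verify the injection worked by looking at the containers in each Pod; alongside every Bookinfo container you should find the Envoy sidecar, named istio-proxy:

```shell
# Each Bookinfo Pod should show 2/2 containers ready: the app plus the sidecar.
kubectl get pods

# List the container names explicitly for one service, e.g. productpage.
kubectl get pods -l app=productpage \
    -o jsonpath='{.items[*].spec.containers[*].name}'
```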

Expose Bookinfo

The one thing that Bookinfo’s supplied resources don’t do is expose the front-end service to the world.

Networking, especially the low-level aspects like this, is complex, difficult, and environment-specific.

For this reason, the basic Bookinfo install leaves this aspect out.

In place of the more familiar nginx Ingress Controller, Istio will be handling ingress for us (adding all its layer-7 goodness as it does so).

The actual ingress traffic is handled by Envoy instances (separate from the sidecars for various reasons), but, as with the rest of the mesh, these are configured by the Istio control plane.

While Istio can interpret the Kubernetes Ingress resources that the nginx Ingress Controller uses, it has its own preferred networking resource types which offer more control.

Since we’re in a greenfield cluster, we’ll use these new ingress types, starting with the Gateway resource:

$ kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml

These resources are not unlike an Ingress resource, in that the routing apparatus they configure is ultimately placed behind a “physical” load balancer external to the Kubernetes cluster – in our case, an AWS ELB.

The following commands will locate the host and port we ultimately need to hit to access our Bookinfo application from across the internet:

$ export INGRESS_HOST=$(kubectl -n istio-system \
get service istio-ingressgateway \
-o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
$ export INGRESS_PORT=$(kubectl -n istio-system \
get service istio-ingressgateway \
-o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
$ export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT

You can now browse to http://$GATEWAY_URL/productpage, Bookinfo’s landing page (replacing $GATEWAY_URL with the value we just assigned to it; on a Mac you can simply run open http://$GATEWAY_URL/productpage).
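If you prefer a command-line smoke test (assuming GATEWAY_URL is exported as above):

```shell
# Expect an HTTP 200 status code back from the productpage service.
curl -o /dev/null -s -w "%{http_code}\n" "http://$GATEWAY_URL/productpage"
```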

It should look a little like this:

Bookinfo - Istio demo app

Traffic Routing

In this post I’ve shown you how to provision an EKS cluster, use the Helm package manager to install Istio in an EKS-conformant way, and install an example microservices application with Istio augmentation.

To round out this post, let’s take a quick peek at one of Istio’s many features – some advanced traffic routing enabled by the fact that Istio deals with traffic at Layer 7.

Default Behaviour

Load Bookinfo a few times by again visiting http://$GATEWAY_URL/productpage and hitting refresh a bunch.

Notice how sometimes the reviews on the right have star ratings, sometimes in color, and sometimes there are no stars at all.

This is because these reviews come from a separate reviews service, and in the system we just deployed there are three separate versions of it, as you can see with kubectl get pods.

There’s just one Kubernetes Service pointing at all of them, so the other Pods can call for the reviews service just by using the name reviews.

The upshot of this is that we get just Kubernetes’ basic round-robin “load balancing,” as you would during a rolling upgrade.

Kubernetes round-robin diagram
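You can watch the round-robin from the shell, too. This rough check counts star glyphs over ten requests (it assumes GATEWAY_URL is still set, and that Bookinfo renders stars with a "glyphicon-star" CSS class, as its stock HTML does):

```shell
# v1 renders no stars, so roughly a third of the responses
# should report a count of 0.
for i in $(seq 1 10); do
  curl -s "http://$GATEWAY_URL/productpage" \
    | grep -c 'glyphicon-star' || true
done
```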

Layer 7 Routing

So, let’s get things under control and pin all calls to reviews v1 for now.

The Bookinfo sample has a few pre-made Istio configs we can use, and this is one of them.

First we need to tell Istio about the different versions that exist and how to tell them apart (in this case, labels on the Kubernetes Deployment).

$ kubectl apply -f samples/bookinfo/networking/destination-rule-all.yaml

Let’s take a look through the part of that file that pertains to our reviews service.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
...

It’s of kind DestinationRule, which specifies how to talk to the workloads, e.g. Pods, comprising the service. Examples of rules include strategies for load balancing between the Pods, the maximum connections to allow to any one Pod, etc. In this example we’re not actually using any of these, but rather telling Istio how to tell the different versions of destinations (Pods) apart.

...
spec:
  host: reviews
...

The destination in question is anything with hostname reviews, i.e. our reviews Service (in the Kubernetes sense). Any HTTP request with a header of Host: reviews will have this rule applied. As we said, this is necessary but not sufficient to tell the different versions apart.

...
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3

So, the final section of the file states that the service’s workloads should be treated as three separate subsets. Because of Istio’s tight integration with Kubernetes, it can identify endpoints by the labels on their Pods.

With those subsets of the reviews Service defined, we can tell Istio that anyone looking to call reviews should always be directed to v1.

$ kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml

Istio host:reviews diagram

In this file we define a resource called a VirtualService, which again matches the traffic to reviews and says that all of it should go to version 1. A more advanced VirtualService would match traffic on HTTP paths and methods as well, and support URL rewrites, giving us a lot of the power of a more traditional reverse proxy. This simple example only matches the host header, so it looks fairly similar to the DestinationRule, but, whereas that resource specifies how to talk to workloads, VirtualServices are about which workloads to route to, for various request formats.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1

Hit Bookinfo again a few times and you’ll see just the basic reviews with no stars at all. Notice that we didn’t change any Kubernetes Services here, let alone delete the unwanted versions. Other pods can have reviews at other versions.

An EKS cluster still has a normal IP network, so in any language you can continue to make use of the normal socket routines – no special calls to weird RPC libraries here.

productpage still makes DNS requests for “reviews,” so it will still work without Istio, or even outside Kubernetes.

However, this would mean that, when a request leaves the productpage container, its destination IP address would be chosen by the Kubernetes Service’s ClusterIP, which would choose a Pod, of any version, at random.

So how does Istio handle this request? How does it even know where the request is meant to be headed?

Remember that Istio understands the HTTP content of the request, so it looks at the HTTP Host: header, matches that against the VirtualService, and sends the request where we really want it to go: v1 of reviews only.
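You can inspect the rule Istio is acting on at any time:

```shell
# Show the VirtualService we applied; all matching traffic
# is routed to subset v1.
kubectl get virtualservice reviews -o yaml
```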

Advanced Routing

While I concede there are other ways to achieve what we just did (though I would argue that Istio’s way is neater and more flexible), this HTTP-aware routing has much more power up its sleeve.

Let’s say you’re doing exploratory testing of the new version – using it in your browser as a user would, poking at the edges, looking for bugs.

You want to have the productpage use v2 of reviews, but only for you.

Let’s also say you’re called Jason.

Apply the following file and once again hit that refresh button.

$ kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml

Istio host:reviews user:Jason diagram

Compare this file, shown below, with the previous all-v1 version. That original routing rule is still there at the end of the file, but rules are applied in order, so we’ve inserted a new statement just before the old rule that catches just Jason’s traffic and directs it elsewhere. All other traffic continues to fall through to the original, default rule.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v1

Everyone but Jason should still be seeing reviews v1.

Now hit Sign in in the top right, and sign in as “jason” (case-sensitive, but any password will do – I think we found a bug there!).

Refreshing one more time, you should now see the new shiny star ratings that your co-worker wanted you to kick the tires on.

Have a look in that latest YAML file, and you’ll see that network traffic routing is now contingent on an HTTP header.

Try doing that with iptables!

Of course, your criterion could be user-agent, logged-in vs logged-out – anything that can be inferred from an HTTP header (and of course any metadata from further down the stack, such as a port number).
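As a sketch of what such a rule might look like, here is a hypothetical variant of the VirtualService above that sends mobile browsers to v3, matching on the User-Agent header with the regex support in Istio 1.0’s v1alpha3 API (the regex itself is illustrative, not a robust mobile detector):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        user-agent:
          regex: ".*Mobile.*"
    route:
    - destination:
        host: reviews
        subset: v3
  - route:
    - destination:
        host: reviews
        subset: v1
```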

I hope this has given you a taste of what Istio can do, and shown you that, at version 1.0, it isn’t so hard to install either.

This is just the first post in a series I’ll be writing on Istio in EKS and AWS. In future posts we’ll be looking at Ingress, Telemetry, advanced Traffic Routing, Traffic Management, Security, and more.

Check back soon!


Matt Turner Matt Turner is a founding engineer at Tetrate, a cloud native startup focusing on application management for the hybrid and multi-cloud world. Matt is working on Istio-related products at Tetrate, and in the past has done software engineering, sometimes with added operations, for ten years. His idea of “full-stack” is Linux, Kubernetes, and now Istio, too. He’s given several talks and workshops on Kubernetes and Istio, and is co-organiser of the Istio London meetup. He tweets @mt165pro and blogs at mt165.co.uk.

The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post.

Arun Gupta


Arun Gupta is a Principal Open Source Technologist at Amazon Web Services. He focuses on everything containers and open source at AWS. He is responsible for CNCF strategy within AWS, and participates at CNCF Board and technical meetings actively. He has built and led developer communities for 12+ years at Sun, Oracle, Red Hat and Couchbase. He has extensive speaking experience in more than 40 countries on myriad topics and is a JavaOne Rock Star for four years in a row. Gupta also founded the Devoxx4Kids chapter in the US and continues to promote technology education among children. A prolific blogger, author of several books, an avid runner, a globe trotter, a Docker Captain, a Java Champion, a JUG leader, NetBeans Dream Team member, he is easily accessible at @arungupta.