AWS Open Source Blog

Deploy OpenFaaS on AWS EKS

OpenFaaS + EKS

We’ve talked about FaaS (Functions as a Service) in Running FaaS on a Kubernetes Cluster on AWS Using Kubeless by Sebastien Goasguen. In this post, Alex Ellis, founder of the OpenFaaS project, walks you through how to use OpenFaaS on Amazon EKS. OpenFaaS is one of the most popular tools in the FaaS space, with around 13k stars and 130 contributors on GitHub. Running OpenFaaS on Amazon EKS lets you co-locate all of your workloads, from functions to containerized microservices, and make full use of the resources available.

— Chris


In this blog post we will deploy OpenFaaS – Serverless Functions Made Simple for Kubernetes – on AWS using Amazon Elastic Container Service for Kubernetes (Amazon EKS). We will start by installing CLIs to manage EKS, Kubernetes, and Helm, and then move on to deploy OpenFaaS using its Helm chart repo.

Once installed, we can deploy serverless functions to Kubernetes using the OpenFaaS CLI from the community Function Store, or build our own using popular languages like Golang.

Set Up CLI Tooling

A CLI tool called eksctl from Weaveworks will help us automate the task of creating the cluster. Several other CLI tools are also required for the setup: kubectl, helm, and the OpenFaaS faas-cli.

Create the Cluster with eksctl

Create a Kubernetes cluster with two worker nodes in the us-west-2 region. The config file for kubectl will be saved into a separate file in ~/.kube/eksctl/clusters/openfaas-eks.

eksctl create cluster --name=openfaas-eks --nodes=2 --auto-kubeconfig --region=us-west-2

The cluster may take 10-15 minutes to provision but, once it’s ready, we will be able to list the nodes, specifying the alternative kubeconfig file for kubectl:

$ export KUBECONFIG=~/.kube/eksctl/clusters/openfaas-eks
$ kubectl get nodes
NAME                                            STATUS    ROLES     AGE       VERSION
ip-192-168-121-190.us-west-2.compute.internal   Ready     <none>    8m        v1.10.3
ip-192-168-233-148.us-west-2.compute.internal   Ready     <none>    8m        v1.10.3

If you get an error about missing heptio-authenticator-aws, follow the instructions to configure kubectl for Amazon EKS.

Install Helm

Many open source projects can be installed on Kubernetes by using the Helm tool. Helm consists of server (Tiller) and client (Helm) components. We installed the Helm CLI earlier; now it’s time to install the server component into the cluster.

First, create a Kubernetes service account for the server component of Helm called Tiller:

kubectl -n kube-system create sa tiller
kubectl create clusterrolebinding tiller-cluster-rule \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:tiller 

Now deploy Tiller into the cluster:

helm init --upgrade --service-account tiller

Create Kubernetes Namespaces for OpenFaaS

To make management easier, the core OpenFaaS services will be deployed to the openfaas namespace, and any functions will be deployed to a separate namespace, openfaas-fn.

Create the namespaces below:

kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml

Prepare Authentication for OpenFaaS

You should always enable authentication for OpenFaaS, even in development and testing environments, so that only users with credentials can deploy and manage functions.

Generate a random password by reading random bytes, then create a hash:

PASSWORD=$(head -c 12 /dev/urandom | shasum | cut -d' ' -f1)

Your password will be a hash and will look similar to this:

783987b5cb9c4ae45a780b81b7538b44e660c700

This command creates a secret in Kubernetes; you can change the username from admin to something else if you want to customise it:

kubectl -n openfaas create secret generic basic-auth \
--from-literal=basic-auth-user=admin \
--from-literal=basic-auth-password=$PASSWORD

OpenFaaS architecture

In OpenFaaS, any binary, process, or container can be packaged as a serverless function using the well-known Docker image format.

At a technical level, every function deployed to OpenFaaS causes Kubernetes to create a Deployment and a Service API object. The Deployment object will have a minimum scale defaulting to one replica, meaning we have one Pod ready to serve traffic at all times. This can be configured, and we will read more about it in the final section.

In the diagram below we see the OpenFaaS operator, which is accessible via kubectl or via the faas-cli using an AWS LoadBalancer.

Image provided by Stefan Prodan

You can read more on the OpenFaaS Architecture in the official documentation.

Deploy OpenFaaS with Helm

You can pick the classic OpenFaaS Kubernetes controller called faas-netes or the CRD-based controller called OpenFaaS-Operator by passing the flag operator.create=true. Read more on the differences in Introducing the OpenFaaS Operator for Serverless on Kubernetes.
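With the operator enabled, each deployed function is represented by a Function custom resource that you can manage with kubectl like any other API object. A minimal sketch is shown below (the apiVersion and field names assume the v1alpha2 CRD from this era of the project, and the image name is hypothetical; check against your installed version):

```yaml
apiVersion: openfaas.com/v1alpha2
kind: Function
metadata:
  name: hello
  namespace: openfaas-fn
spec:
  name: hello
  image: alexellis2/hello:latest   # hypothetical image, for illustration only
```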

Add the OpenFaaS Helm chart repo:

helm repo add openfaas https://openfaas.github.io/faas-netes/

If you already have the repo listed, you can use helm repo update to synchronise with the latest chart.

We can now install OpenFaaS with authentication enabled by default using Helm:

helm upgrade openfaas --install openfaas/openfaas \
    --namespace openfaas  \
    --set functionNamespace=openfaas-fn \
    --set serviceType=LoadBalancer \
    --set basic_auth=true \
    --set operator.create=true \
    --set gateway.replicas=2 \
    --set queueWorker.replicas=2

The settings above mean that functions will be installed into a separate namespace for easier management, a LoadBalancer will be created (read below), basic authentication will protect the gateway UI, and operator.create=true opts us into the OpenFaaS Operator to manage the cluster. We run two replicas of the gateway for high availability, and two queue-workers for high availability and increased concurrent processing.
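The same flags can be kept in a values file instead of being repeated as --set arguments; the keys below mirror the flags above one-for-one, and the file would be passed with -f values.yaml:

```yaml
# values.yaml - equivalent to the --set flags used above
functionNamespace: openfaas-fn
serviceType: LoadBalancer
basic_auth: true
operator:
  create: true
gateway:
  replicas: 2
queueWorker:
  replicas: 2
```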

Run the following command until all of the deployments show at least "1" as Available:

kubectl --namespace=openfaas get deployments -l "release=openfaas, app=openfaas"

A LoadBalancer will be created for your OpenFaaS Gateway. It may take 10-15 minutes for this to be provisioned; once ready, its public address (a DNS name on AWS) can be found with this command:

kubectl get svc -n openfaas -o wide

When the EXTERNAL-IP field shows an address, you can save the value into an environment variable for use with the CLI and the rest of the tutorial:

export OPENFAAS_URL=$(kubectl get svc -n openfaas gateway-external -o jsonpath='{.status.loadBalancer.ingress[*].hostname}'):8080 \
&& echo Your gateway URL is: $OPENFAAS_URL

Use the OpenFaaS CLI to save your login credentials (they will be written to ~/.openfaas/config.yaml):

echo $PASSWORD | faas-cli login --username admin --password-stdin

In a follow-up post we will cover how to install HTTPS certificates with Let’s Encrypt and Kubernetes Ingress.

Deploy Your First Function

The first function we will deploy will come from the built-in OpenFaaS Function Store, a collection of project- and user-created functions designed to help new users experience functions.

You can see all available functions via faas-cli store list.

Let’s deploy a machine-learning function called inception, which takes the URL of an image as input and returns a JSON structure showing which objects it was able to detect:

faas-cli store deploy inception

The function will be made available from the OpenFaaS API Gateway via the following default route:

$OPENFAAS_URL/function/inception

You can invoke the function using the OpenFaaS UI, the faas-cli (via faas-cli invoke), or an API-testing tool such as Postman.

Find an image with a Creative Commons license from a website such as Wikipedia; search for your favourite animal (such as a bear) to test out the function. Here’s a bear image I found.

Now invoke the function using one of the methods shown above:

curl -i $OPENFAAS_URL/function/inception --data "https://upload.wikimedia.org/wikipedia/commons/7/79/2010-brown-bear.jpg"

Make sure you surround any URLs with quotes when using curl. You should see the results appear in one or two seconds, depending on the size and specification of the machines picked for the EKS cluster.

[
  {
    "name": "brown bear",
    "score": 0.767388105392456
  },
  {
    "name": "ice bear",
    "score": 0.006604922469705343
  },
  {
    "name": "bottlecap",
    "score": 0.003021928481757641
  },
  {
    "name": "reel",
    "score": 0.0026519917882978916
  },
  {
    "name": "American black bear",
    "score": 0.0018049173522740602
  }
]

With my sample image, you can see a score of 76% for a brown bear, which indicates a positive match. You could use this function, or one of several similar functions, to create new features for your existing applications by invoking it through the API Gateway.

Working with Events

Functions can also be invoked via events such as Webhooks over HTTP/S, AWS SNS, CloudEvents, Kafka, and RabbitMQ. In a follow-up post, we will cover how to combine events with AWS services to create an event-driven pipeline. If you’d like to read more about events, see the OpenFaaS documentation.

To invoke a function asynchronously, just change the route from /function/ to /async-function/. You can even provide a callback URL where the result will be published once complete. This is a useful combination with machine-learning functions, which can take several seconds per invocation to execute, and is especially important when working with events and webhooks. Webhook producers such as GitHub require a response within 10 seconds (though responses as fast as 1-2 seconds are common). Decoupling the event from its execution enables you to work around this restriction.

To use this with inception, you can set up a temporary HTTP receiver with RequestBin and then use that URL as the callback URL to see a response. Visit RequestBin and click Create, then substitute your bin’s URL in the command below. After one or two seconds you can refresh the page, and you’ll see the same data we received above transmitted to the endpoint.

curl -i $OPENFAAS_URL/async-function/inception -H "X-Callback-Url: http://requestbin.fullcontact.com/10ds1ob1" --data "https://upload.wikimedia.org/wikipedia/commons/7/79/2010-brown-bear.jpg"

You can also chain functions together by picking another function as the callback URL.

Write Your Own Function

You can write your own function in one of the supported languages, or add your own template. An OpenFaaS template consists of a Dockerfile and an entrypoint (hidden from the user). The user sees a way to specify packages, and writes code in the handler file. You can explore the official OpenFaaS templates to find your preferred language.

Let’s pull down the latest templates from GitHub and then list what’s available:

faas-cli template pull
faas-cli new --list

The faas-cli new command can be used to scaffold a function. All functions are built into immutable Docker images so that the function works the same way on your local machine as on your production cluster. This means the function’s image needs a prefix: either your Docker Hub account name or the address of a private registry such as Amazon Elastic Container Registry (ECR) (for example, alexellis2/hello-golang). Use the --prefix flag or edit the YAML file at any time.

faas-cli new --lang go hello-golang --prefix=alexellis2

This creates the following files:

./hello-golang.yml
./hello-golang/
./hello-golang/handler.go
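The generated hello-golang.yml stack file will look roughly like this (a sketch; the exact provider block and gateway address depend on your faas-cli version):

```yaml
provider:
  name: faas
  gateway: http://127.0.0.1:8080

functions:
  hello-golang:
    lang: go
    handler: ./hello-golang
    image: alexellis2/hello-golang
```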

Edit handler.go:

package function

import (
	"fmt"
)

// Handle a serverless request
func Handle(req []byte) string {
	return fmt.Sprintf("Hello, Go. You said: %s", string(req))
}

Now edit the message, and then run the following command:

faas-cli up -f hello-golang.yml

If you name your OpenFaaS stack file stack.yml and it is in the current working directory, the CLI will load it automatically.

The up command saves typing; it runs the following commands in sequence:

faas-cli build && \
faas-cli push && \
faas-cli deploy

You can now invoke your new function in the same way we did with the inception function, or navigate to the OpenFaaS UI to manage your functions.

Note: If you need to add additional dependencies to your Golang function, you can use the dep tool and vendoring.

You can also use the OpenFaaS UI to deploy functions from the Function Store, which gives you a chance to explore how to use them and find their source code.

Monitoring with the Grafana Dashboard

The OpenFaaS Gateway collects metrics on how many replicas of your functions exist, how often they are invoked, their HTTP codes (success/failure) and the latency of each request. You can view this data in the OpenFaaS UI, or via faas-cli list, but the most effective way to monitor the data is through a dashboard using Grafana.

Stefan Prodan has pre-packaged the OpenFaaS dashboard in a Docker image, which we can run on the cluster and then access using kubectl.

kubectl -n openfaas run --image=stefanprodan/faas-grafana:4.6.3 --port=3000 grafana
kubectl expose deploy/grafana --type ClusterIP -n openfaas --port 3000

Now use kubectl port-forward to access the service without having to expose it over the Internet:

kubectl port-forward -n openfaas svc/grafana 3000:3000

Open http://127.0.0.1:3000/ in a browser:

Navigate to the dashboard, where you will see the invocations made to your custom function and to the inception function. The credentials are admin/admin, and can also be changed through this page.

Trigger Auto-Scaling

Auto-scaling in OpenFaaS can be managed through the built-in Prometheus metrics, with AlertManager firing alerts when thresholds on requests per second are met; functions can scale up and down depending on demand, even to zero.

Note: Kubernetes’ own auto-scaling HPAv2 can also be used with OpenFaaS. For a detailed overview on auto-scaling options, see the OpenFaaS documentation.

I’ll demonstrate by triggering auto-scaling with a function from the OpenFaaS store, then monitoring the scaling in Grafana.

Create a new directory and deploy a store function:

$ mkdir -p openfaas-eks-scaling
$ cd openfaas-eks-scaling
$ faas-cli store deploy figlet --label com.openfaas.scale.min=2 --label com.openfaas.scale.max=8
Deployed. 202 Accepted.
URL: http://my-lb.us-west-2.elb.amazonaws.com:8080/function/figlet

This will deploy the function named “figlet” that generates ASCII text logos (which I hope will be more entertaining than seeing “hello world” repeated several thousand times).

Now generate some load (just enough to trigger the scaling alert):

for i in {1..10000} ; do
  echo $i | faas-cli invoke figlet
done

How fast you generate load will depend on how close you are to your chosen AWS region. If you are unable to trigger the scaling, open several Terminal windows and paste the command into each. Make sure you set the OPENFAAS_URL variable in any new terminal windows.

Grafana dashboard during auto-scaling

Here we see the function’s invocation rate in the top left quadrant increasing dramatically as I generate load. The replicas of the figlet function increased in steps from 2 to 7, and would have continued to the upper limit. Then, when I stopped the traffic, another alert fired, causing the figlet function to scale back to its lower limit. This type of scaling allows us to make use of all of the nodes in the EKS cluster.

Summary

In this post we deployed a Kubernetes cluster to AWS using EKS, deployed OpenFaaS with authentication enabled, then deployed a machine learning function and invoked it both synchronously and asynchronously. We also wrote a custom Golang function, deployed that to the cluster, and monitored it with a Grafana dashboard.

In upcoming posts in this series, we will explore how to enable TLS using Let’s Encrypt and CertManager and AWS Route53 to manage DNS entries. We will then create an event-driven pipeline for Optical Character Recognition (OCR) using AWS services and events such as S3 and SNS. Stay tuned, and subscribe to the OpenFaaS blog for more tutorials!

Alex Ellis

Alex Ellis is the founder and lead of OpenFaaS, the open-source Serverless project. Alex has 12 years of experience writing enterprise software and scaling distributed systems for over 500k clients, and currently works as a Senior Staff Engineer at VMware’s Open Source Technology Center. He is very active in the cloud and container community, building community, mentoring, writing, and speaking at global events on everything from Kubernetes to Go and Raspberry Pi.

Alex Ellis – Senior Staff Engineer (Open Source) @ VMware OSTC, UK

GitHub: alexellis

Twitter: alexellisuk

The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post.

Chris Hein

Chris Hein is a Partner Solutions Architect for the Amazon Partner Network where he specializes in all things containers. Before Amazon, Chris worked for a number of large and small companies like GoPro, Sproutling, & Mattel. Follow him at @christopherhein