In this post we will dive into how to configure our own serverless architecture with the help of Kubernetes and OpenFaaS, with a big focus on doing it in a secure manner.

As I was looking into the serverless ecosystem, and lately specifically into OpenFaaS on top of Kubernetes, I noticed that the default deployment does not cover some basic security concerns, such as isolation between pods and authentication. So I thought it might be a good idea to show how to address those issues.

Briefly about serverless

Serverless is a term that was first used to describe applications that significantly or fully depend on third-party services to manage server-side logic and state, but nowadays it is usually used to describe applications where some amount of server-side logic is still written by the application developer. Unlike traditional architectures, the app runs in stateless, ephemeral compute containers (Function as a Service, or “FaaS”), the best-known vendor host of which is currently AWS Lambda.

Other vendors like Microsoft and Google are also in the game of serverless with Azure Functions and Google Cloud Functions. There are numerous open-source projects such as OpenFaas, Apache OpenWhisk, IronFunctions and many others.

I chose to focus on OpenFaaS on Kubernetes (GKE), as it feels like the most widespread open-source combination at the moment. So, let’s start with OpenFaaS, and later explore Kubernetes and see how we can seal the holes in the wall.

OpenFaaS

From OpenFaaS documentation:

If you plan to expose OpenFaaS on the public Internet you need to enable basic authentication with a proxy such as Kong or Traefik at a minimum. TLS is also highly recommended and freely available with LetsEncrypt.org.

Note: We are also looking to automate authentication “out the box” to cover edge cases.

Until automated authentication is implemented, we will have to sort things out in a kind-of manual way 🙂

First, let’s prepare our GKE environment for OpenFaaS by installing Helm, creating RBAC permissions for Tiller and initializing it:

$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash

$ kubectl -n kube-system create sa tiller \
  && kubectl create clusterrolebinding tiller \
  --clusterrole cluster-admin \
  --serviceaccount=kube-system:tiller

$ helm init --service-account=tiller

After that, make sure you have Tiller up and running by checking the status of the pods with:
$ kubectl -n kube-system get pods
You should see tiller running there. Next, let’s clone the OpenFaaS Helm chart to our server (the openfaas chart is not in the main repo yet):

$ git clone https://github.com/openfaas/faas-netes.git
$ cd faas-netes/chart

Next, create namespaces specifically for OpenFaaS:
$ kubectl create ns openfaas
$ kubectl create ns openfaas-fn

Notice: You can deploy OpenFaaS in the default namespace, but security-wise I would not recommend doing that!

We are deploying OpenFaaS in two namespaces: one will hold our functions, and the other will hold the management pods of OpenFaaS itself, both kept away from the default namespace. As they say, don’t put all of your eggs in one basket!

This gives us even more security control once combined with the network policies that we can create in Kubernetes.

In order to avoid the dangers of MitM attacks, we will also want to set up TLS on our OpenFaaS portals, such as the gateway.

Fortunately for us, Kubernetes allows us to easily set up basic authentication with secrets on ingress traffic (when the ingress controller supports it). Unfortunately, GKE uses GCE as the ingress controller by default, which does not support basic-auth through ingress rules.

One of the ways we can overcome this issue is to set up nginx as the ingress controller on our cluster (if you prefer, Traefik can also be used). Once we change the controller to nginx, it’s a matter of one kubectl command to incorporate our basic-auth secret into the OpenFaaS namespace. But first, let’s set up our nginx ingress controller:

$ helm install stable/nginx-ingress --set rbac.create=true

Notice: If RBAC (role-based access control) is for some reason disabled in your environment, you will have to remove the rbac.create flag. RBAC is enabled by default in recent versions of GKE.

For those of you who are unfamiliar with RBAC: it is an implementation of access control policies based on roles in the organization; users logged into the system take on one or more roles, and permissions are enforced according to those roles.

We can now check for the nginx ingress service and test that it works by trying to access the exposed LoadBalancer IP. It should return a 404 error on every request except /healthz, which will return 200.
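
For example, something along these lines should do the trick (the exact service name and labels depend on the Helm release, so adjust accordingly; EXTERNAL_IP stands for the LoadBalancer IP reported by the first command):

$ kubectl get svc -l app=nginx-ingress
$ curl -i http://EXTERNAL_IP/           # expect a 404
$ curl -i http://EXTERNAL_IP/healthz    # expect a 200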

Now we should have nginx running as the ingress controller, and we can go back to installing OpenFaaS:

$ helm upgrade --install openfaas openfaas/ \
--namespace openfaas \
--set functionNamespace=openfaas-fn \
--set ingress.enabled=true \
--set rbac=true

Again, if for some reason RBAC is disabled for you, please remove the RBAC flag from the command.

Note the ingress.enabled flag. This flag will automatically create a basic ingress rule for OpenFaaS, which we will now edit in order to add basic-auth:
$ kubectl -n openfaas edit ing openfaas-ingress

To enable basic-auth, we will add the following annotations inside the ingress rule:

kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/auth-realm: "Authentication Required - foo"
nginx.ingress.kubernetes.io/auth-secret: basic-auth
nginx.ingress.kubernetes.io/auth-type: basic

The nginx.ingress.kubernetes.io/auth-secret annotation points to our basic-auth secret, which we haven’t created yet, so let’s create it now.

For this we will need to install apache2-utils in order to generate a password with the help of htpasswd:
$ sudo apt-get install apache2-utils

Now, with htpasswd, we will create a credentials file named auth for our gateway (admin below is just an example username; htpasswd will prompt you for a password):
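
$ htpasswd -c ./auth admin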

And finally incorporate the secret into the openfaas namespace:
$ kubectl -n openfaas create secret generic basic-auth --from-file=auth

After this, your OpenFaaS UI should be protected, and you won’t be able to access it without the ingress controller slamming the basic-auth prompt on you. But let’s tighten things a little bit more with isolation.
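
As a quick sanity check, requests without credentials should now get a 401, while requests with the right credentials should go through (GATEWAY_HOST is a placeholder for whatever host your ingress rule defines, and admin is the example user from above):

$ curl -i http://GATEWAY_HOST/                         # expect HTTP/1.1 401 Unauthorized
$ curl -i -u admin:YOUR_PASSWORD http://GATEWAY_HOST/  # expect HTTP/1.1 200 OK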

Configure TLS with LetsEncrypt

Now that we can access our OpenFaaS through our Nginx ingress controller, the next step will be to enable TLS with LetsEncrypt.

Kube-lego allows us to automatically request and renew LetsEncrypt certificates for public domains with the help of a few additions to our ingress rule from the previous steps.

Note that in order for kube-lego to work, a public domain name is required. In our example we will use DOMAIN_NAME; these references should be replaced with your real domain name, and you should also pick a secret name of your own.

In order to register with LetsEncrypt, an email must be provided when installing kube-lego. You will have to replace YOUR_EMAIL with a valid email address in the following command:

$ helm install stable/kube-lego --namespace kube-system --set config.LEGO_EMAIL=YOUR_EMAIL,config.LEGO_URL=https://acme-v02.api.letsencrypt.org/directory

Now, in our ingress rule, we will have to add a tls section and set the hostname to match DOMAIN_NAME, in addition to adding the kubernetes.io/tls-acme: 'true' annotation.

Here’s roughly what our ingress rule looks like after the additions (the backend service name and port below are assumptions based on the defaults of the OpenFaaS chart, so adjust them to your deployment):
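
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: openfaas-ingress
  namespace: openfaas
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: 'true'
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required - foo"
spec:
  tls:
  - hosts:
    - DOMAIN_NAME
    secretName: openfaas-tls-cert
  rules:
  - host: DOMAIN_NAME
    http:
      paths:
      - path: /
        backend:
          serviceName: gateway
          servicePort: 8080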

kube-lego will pick up the change to the ingress rule, request the certificate from LetsEncrypt and store it in the openfaas-tls-cert secret. In turn, the nginx ingress controller will read the TLS configuration and load the certificate from the secret. Once the nginx server is updated, a visit to the domain in the browser should present OpenFaaS over a secure TLS connection.
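
If everything went well, the certificate secret should show up in the openfaas namespace after a short while:

$ kubectl -n openfaas get secret openfaas-tls-cert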

Manually Configuring TLS Certificates

While LetsEncrypt is great for getting fresh certificates, sometimes we need to use our existing ones. Fortunately, there are no special requirements for that: all you have to do is create a secret from your certificate by issuing the following command:

$ kubectl create secret tls some-tls-cert --key /path/to/tls.key --cert /path/to/tls.crt

Now just reference this secret in the tls section of the ingress rule, like so:

...
  tls:
  - secretName: some-tls-cert
    hosts:
    - DOMAIN_NAME

Configuring Client Certificate Authentication

You can follow similar steps in order to enable client certificate authentication, simply by using different annotations:

  • Create a file named ca.crt containing the trusted certificate authority chain, to verify client certificates
  • Create a secret from this file by issuing the following command:
    kubectl create secret generic auth-tls-chain --from-file=ca.crt --namespace=openfaas
  • Add the following annotations to your ingress rule:

# Enable client certificate authentication
nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"

# Point to the secret containing the trusted CA certificates, created above with `kubectl create secret generic auth-tls-chain --from-file=ca.crt --namespace=openfaas`
nginx.ingress.kubernetes.io/auth-tls-secret: "openfaas/auth-tls-chain"

# Specify the verification depth in the client certificates chain
nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"

# Specify an error page to be redirected to on verification errors
nginx.ingress.kubernetes.io/auth-tls-error-page: "http://www.mysite.com/error-cert.html"

# Specify whether certificates are passed to the upstream server
nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "false"
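
With this in place, clients have to present a certificate signed by the trusted CA in order to get through, for example with curl (client.crt and client.key are placeholders for your client certificate and key):

$ curl https://DOMAIN_NAME/ --cert ./client.crt --key ./client.key -u admin:YOUR_PASSWORD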

Kubernetes

Taken from the Kubernetes documentation:

By default, pods are non-isolated; they accept traffic from any source.
Pods become isolated by having a NetworkPolicy that selects them. Once there is any NetworkPolicy in a namespace selecting a particular pod, that pod will reject any connections that are not allowed by any NetworkPolicy. (Other pods in the namespace that are not selected by any NetworkPolicy will continue to accept all traffic.)

This means that if attackers manage to break into one of your functions running in a pod, they will be able to communicate with any other pod inside your Kubernetes cluster. For example, they could send requests to ‘internal’ functions that should not be accessible to the world, poison the network with bogus traffic, run tcpdump on the cluster’s network, or simply connect to your OpenFaaS gateway and start spawning malicious functions to abuse your resources.

In any case, it is a bad idea to leave this configuration as it is if we are planning to set up a secure serverless architecture.

We can now see that Network Policies are pretty important — let’s see what those actually are:

A network policy is a specification of how groups of pods are allowed to communicate with each other and with other network endpoints. Network policies are implemented by a network plugin such as kube-router, so you must be using a networking solution which supports the NetworkPolicy resource — simply creating the resource without a controller to implement it will have no effect.

Other network plugin options include Weave Net and Romana.

Note: In this post we are exploring Kubernetes on GKE and not minikube, although you can achieve the same results with minikube by using the --network-plugin flag.

A NetworkPolicy is represented in a YAML format. For example, a default “deny all ingress traffic” policy for a specific namespace will look like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress

This ensures that even pods that aren’t selected by any other NetworkPolicy will still be isolated. This policy does not change the default egress isolation behavior.
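
If we also wanted to isolate pods for outgoing traffic by default, the equivalent “deny all egress traffic” policy looks the same, just with Egress as the policy type:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
spec:
  podSelector: {}
  policyTypes:
  - Egress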

Now let’s look at a more complex policy. Here is our example serverless-policy.yaml file content:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: serverless-policy
  namespace: openfaas
spec:
  podSelector:
    matchLabels:
      role: functions
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.136/29
        except:
        - 172.17.0.142/32
    - namespaceSelector:
        matchLabels:
          project: openfaas-functions
    - podSelector:
        matchLabels:
          role: function-gate
    ports:
    - protocol: TCP
      port: 6435
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.34/32
    ports:
    - protocol: TCP
      port: 6987

A full definition of the NetworkPolicy resource can be found in the Kubernetes API reference.

Let’s break down the NetworkPolicy:

The first three fields, apiVersion, kind and metadata, are mandatory and required by any Kubernetes config. The one that is important for us here is the namespace entry under metadata; as its name suggests, it indicates in which namespace the network policy will be enforced.

The fourth field is spec, and it holds all the information required to create the network policy. Let’s go over its properties:

podSelector: Each NetworkPolicy includes a podSelector which selects the grouping of pods to which the policy applies. An empty podSelector selects all pods in the namespace. In our example we select all the pods with the label role: functions inside the namespace openfaas, as stated in the metadata field.

policyTypes: Each NetworkPolicy includes a policyTypes list which may include Ingress, Egress, or both. The policyTypes field indicates whether the given policy applies to ingress traffic to the selected pods, egress traffic from the selected pods, or both. If no policyTypes are specified on a NetworkPolicy, then by default Ingress will always be set, and Egress will be set if the NetworkPolicy has any egress rules. In our example we define both Ingress and Egress.

ingress: Each NetworkPolicy may include a list of whitelist ingress rules. Each rule allows traffic which matches both the from and ports sections. The example policy contains a single rule, which matches traffic on a single port from one of three sources: the first specified via an ipBlock, the second via a namespaceSelector and the third via a podSelector.
The except entry inside the ipBlock lists CIDRs that should be excluded from this rule.

egress: Each NetworkPolicy may include a list of whitelist egress rules. Each rule allows traffic which matches both the to and ports sections. The example policy contains a single rule, which matches traffic on a single port to any destination in 10.0.0.34/32.

So, our example NetworkPolicy is doing the following:

  1. Isolates “role=functions” pods in the “openfaas” namespace for both ingress and egress traffic
  2. Allows connections to TCP port 6435 of “role=functions” pods in the “openfaas” namespace from any pod in the “openfaas” namespace with the label “role=function-gate”
  3. Allows connections to TCP port 6435 of “role=functions” pods in the “openfaas” namespace from any pod in a namespace with the label “project=openfaas-functions”
  4. Allows connections to TCP port 6435 of “role=functions” pods in the “openfaas” namespace from IP addresses that are in CIDR 172.17.0.136/29 and not 172.17.0.142
  5. Allows connections from any pod in the “openfaas” namespace with the label “role=functions” to CIDR 10.0.0.34/32 on TCP port 6987.

Of course, all of this is scenario-dependent, and if you simply copy-paste this policy, chances are it will not do your deployment any good.

Once we have our desired NetworkPolicy constructed, we can put it into action by issuing a kubectl command:
$ kubectl create -f serverless-policy.yaml
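
We can then verify that the policy was created and check how its selectors were interpreted:

$ kubectl -n openfaas get networkpolicy
$ kubectl -n openfaas describe networkpolicy serverless-policy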

By carefully labeling pods and picking the groups that are allowed to communicate with each other, we can achieve a great level of isolation between our pods. Combined with the basic-auth on our ingress controller, we should be safe from malicious actors accessing our gateway, even from inside the cluster.

Conclusions

In this post we saw once again how important it is not to blindly trust default settings, and we walked through some steps to make our environment less exposed to attacks. With Twistlock on the stack, we could secure the platform even further with the help of our behavioral network firewall and runtime prevention engine.

OpenFaaS is still young and it will be interesting to see how things will develop for it and the serverless ecosystem as a whole.

Thank you for reading! Stay tuned for more juicy content by following us on Twitter: @TwistlockLabs
