cc by-sa flurdy

Kubernetes Ingress and TLS

Add Ingress and TLS to your Kubernetes

Using Helm

Started: March 2019. Last updated: 15th Mar 2019.

Aim

To show how to add an Ingress to a Kubernetes cluster so that you can route traffic to multiple applications and fully utilise the cluster.

And to then show how to easily add a TLS certificate, using Let's Encrypt, to secure your sites' traffic.

All this will be done using Helm, the package manager for Kubernetes.

Prerequisite

This howto follows on from my Kubernetes 101: Launch your first application with Kubernetes. There is no need to have followed every step in that howto, as we will mostly build from scratch here and refer back to it where applicable to avoid duplication, but it may help to have read it in full.

Basic Kubernetes

You do need some basic understanding of Kubernetes. Please read my Kubernetes basics to get up to speed. It's brief but gets you going.

Kubernetes cluster

You do need a Kubernetes cluster up and running. Please follow my create a Kubernetes cluster instructions. A fresh new cluster is preferable to avoid any confusion and mistakes, but it should work with existing clusters.

kubectl

You do need to have kubectl installed. Please follow my install kubectl instructions.

And you do need to make sure you have downloaded the cluster configuration and authenticated kubectl with it. Again, refer to my kubectl connect section of the introduction howto.

Helm

Helm is the package manager for Kubernetes. Think apt, Homebrew, npm, RubyGems, Maven, etc., but for k8s.

Helm lets you install complicated applications with one command. A chart often includes RBAC rules, namespaces, multiple services, several deployments and other dependencies.

Helm uses charts to define what to install. The public chart library includes most of the applications you might use with Kubernetes. You can also create your own charts.

Helm (version 2) consists of a local part, the Helm client, and an in-cluster server part, the Tiller service.

Install Helm

To install Helm locally you can use Homebrew or Snap, or there are binary downloads.

brew install kubernetes-helm

sudo snap install helm --classic

Or download the binary, e.g. for Linux 64 bit (note the tarball extracts into a linux-amd64 directory):

wget https://storage.googleapis.com/kubernetes-helm/helm-v2.13.0-linux-amd64.tar.gz
tar xzf helm-v2.13.0-linux-amd64.tar.gz
sudo mv linux-amd64/helm linux-amd64/tiller /usr/local/bin/

Install Tiller

To install Tiller we will first create a service account for it:

kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller

Then actually install Tiller by initializing Helm.

helm init --service-account tiller

You can confirm installation by listing any packages installed (none at this time).

helm list
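
To confirm the client and Tiller can talk to each other, and to browse what charts are available, a couple of handy commands (Helm v2 syntax; the search term here is just an illustration):

helm version

helm search nginx-ingress

helm version should report both a client and a server version once Tiller is up.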

Applications

Let's set up a simple application deployment and service, similar to the previous howto. But this time we will set up two applications.

Deployment

First, let's create our first echo deployment. (Make sure you version-control these files.)

If you are building from the previous howto this deployment may already exist.

vi echo1-deployment.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo1
spec:
  selector:
    matchLabels:
      app: echo1
  replicas: 2
  template:
    metadata:
      labels:
        app: echo1
    spec:
      containers:
      - name: echo1
        image: hashicorp/http-echo
        args:
        - "-text=echoNumberOne"
        ports:
        - containerPort: 5678

kubectl apply -f echo1-deployment.yml

And let's quickly create a second echo deployment, which we did not have in the previous howto.

vi echo2-deployment.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo2
spec:
  selector:
    matchLabels:
      app: echo2
  replicas: 2
  template:
    metadata:
      labels:
        app: echo2
    spec:
      containers:
      - name: echo2
        image: hashicorp/http-echo
        args:
        - "-text=echoNumberTwo"
        ports:
        - containerPort: 5678

kubectl apply -f echo2-deployment.yml

This should give us two deployments:

kubectl get deployments

The output should be something like:

NAME    DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
echo1   2        2        2           0          21s
echo2   2        2        2           0          2s
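
If AVAILABLE stays at 0 for a while, you can check the rollout and the pods behind each deployment:

kubectl rollout status deployment/echo1

kubectl get pods -l app=echo1

All four echo pods should reach the Running state shortly after creation.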
Services

If you have come from the previous howto, we need to delete the load-balanced service. On a fresh cluster this is not needed.

kubectl delete service echo1

Let's (re)add a service in front of echo1.

vi echo1-service.yml

apiVersion: v1
kind: Service
metadata:
  name: echo1
spec:
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: echo1

kubectl apply -f echo1-service.yml

And the same for the second echo:

vi echo2-service.yml

apiVersion: v1
kind: Service
metadata:
  name: echo2
spec:
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: echo2

kubectl apply -f echo2-service.yml

This should create two services listed like this:

kubectl get services

NAME        TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)  AGE
echo1       ClusterIP  10.24.40.234   <none>       80/TCP   15s
echo2       ClusterIP  10.24.65.74    <none>       80/TCP   48s
kubernetes  ClusterIP  10.24.0.1      <none>       443/TCP  2d

Note that neither echo service has an external IP address.

So now we have two applications exposed as services internally. To expose these we need an Ingress.
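
You can verify each internal service before adding an Ingress, for example with a port-forward (the port numbers here are just for illustration):

kubectl port-forward service/echo1 8080:80

Then in another terminal:

curl localhost:8080

which should respond with echoNumberOne.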

Ingress controller

Before we set up a custom Ingress we need an Ingress controller, which is basically a type of load balancer. We will install it with Helm, as otherwise there is a complicated list of RBAC rules, namespaces, etc. to configure by hand.

A common Ingress controller is Nginx, and there are many alternatives to the one we use below, including an Nginx-based one made by Nginx Inc themselves. Another popular traffic manager is Istio.

Install Nginx Ingress

This Nginx chart uses a ConfigMap to configure Nginx. We don't need to configure anything for our use case.

helm install --name nginx-ingress stable/nginx-ingress

As you can see from the output, it installs a lot of things that we now don't need to worry about.

We now have a few more services:

kubectl get services

NAME                          TYPE         CLUSTER-IP   EXTERNAL-IP PORT(S)                    AGE
echo1                         ClusterIP    10.24.40.234 <none>      80/TCP                     1d
echo2                         ClusterIP    10.24.65.74  <none>      80/TCP                     1d
kubernetes                    ClusterIP    10.24.0.1    <none>      443/TCP                    2d
nginx-ingress-controller      LoadBalancer 10.24.22.205 1.2.3.4     80:30617/TCP,443:32262/TCP 2m
nginx-ingress-default-backend ClusterIP    10.24.10.74  <none>      80/TCP                     2m

As you can see we now have two more services: nginx-ingress-controller and nginx-ingress-default-backend. If you look up that external IP you will see the default response from nginx-ingress-default-backend, basically a 404. The default backend is what responds when no Ingress rules are matched.

curl 1.2.3.4

default backend - 404

Application Ingress

Let's add an Ingress to direct request traffic to our echo services.

vi echo-ingress.yml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-ingress
spec:
  rules:
  - host: echo1.ex.com
    http:
      paths:
      - backend:
          serviceName: echo1
          servicePort: 80
  - host: echo2.ex.com
    http:
      paths:
      - backend:
          serviceName: echo2
          servicePort: 80

Note this routes on echo1.ex.com and echo2.ex.com (abbreviated from example.com for display purposes). This will only work if you map these hostnames in your /etc/hosts file to the external IP 1.2.3.4 of the ingress controller service. However, you may prefer real DNS names so that others can also use the service, and so you can later add TLS etc.
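
For local testing the /etc/hosts entries would look something like this, using the ingress controller's external IP:

1.2.3.4  echo1.ex.com
1.2.3.4  echo2.ex.com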

kubectl create -f echo-ingress.yml

kubectl get ingress

NAME          HOSTS                      ADDRESS  PORTS  AGE
echo-ingress  echo1.ex.com,echo2.ex.com  5.4.3.2  80     3s

You now have an Ingress routing traffic to either echo service, depending on the hostname in the request.

curl 1.2.3.4

default backend - 404

curl echo1.ex.com

echoNumberOne

curl echo2.ex.com

echoNumberTwo
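
If you prefer not to edit /etc/hosts, you can instead send the hostname in a Host header directly to the controller's IP:

curl -H 'Host: echo1.ex.com' 1.2.3.4

echoNumberOne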

TLS Certificate with Cert Manager

These days there is no excuse for all web traffic not to use https. To do that we need to add a TLS certificate to our echo sites.

SSL and TLS certificates used to be a convoluted and expensive ordeal, but not any more since Let's Encrypt was launched.

With Kubernetes there is Cert Manager, which can act as a cluster issuer, generating and managing certificates with Let's Encrypt. This makes TLS very easy to configure and automate.

Install Cert Manager

With Helm, installing Cert Manager takes a few steps (compared to a lot of steps with raw manifests). First:

kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.6/deploy/manifests/00-crds.yaml

The output should list various apiextensions.k8s.io resources created.

kubectl label namespace kube-system certmanager.k8s.io/disable-validation="true"

Then add the Jetstack chart repository, if you do not already have it, and install Cert Manager:

helm repo add jetstack https://charts.jetstack.io

helm install \
  --name cert-manager \
  --namespace kube-system \
  jetstack/cert-manager

I sometimes have to specify a version, e.g. "--version v0.5.2", but at the time of writing the default works fine. Towards the end, the output should say:

cert-manager has been deployed successfully!

Staging issuer

Whilst testing, let's create an issuer that uses Let's Encrypt's staging server, to avoid flooding the production one with bad requests and hitting its rate limits.

vi staging-issuer.yml

apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: youremail@ex.com
    privateKeySecretRef:
      name: letsencrypt-staging-secret
    http01: {}

kubectl apply -f staging-issuer.yml

kubectl get clusterissuer

NAME                  AGE
letsencrypt-staging   16s

Add TLS to Ingress

Let's now modify the echo ingress to include the cluster issuer annotation and the hosts to create a certificate for.

vi echo-issuer.yml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-staging
spec:
  tls:
  - hosts:
    - echo1.ex.com
    - echo2.ex.com
    secretName: letsencrypt-staging-secret
  rules:
  - host: echo1.ex.com
    http:
      paths:
      - backend:
          serviceName: echo1
          servicePort: 80
  - host: echo2.ex.com
    http:
      paths:
      - backend:
          serviceName: echo2
          servicePort: 80

kubectl apply -f echo-issuer.yml

kubectl get ingress

NAME          HOSTS                      ADDRESS   PORTS    AGE
echo-ingress  echo1.ex.com,echo2.ex.com  3.5.8.13  80, 443  3s

You should now also have port 443 routed to this ingress. Note, the IP listed in the output is not exposed, so ignore it.
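
You can already make an https request, although you will need to skip certificate verification while the staging issuer is in use (this assumes the hostnames resolve to the ingress controller, e.g. via /etc/hosts):

curl -k https://echo1.ex.com

echoNumberOne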

You can inspect the certificate generated:

kubectl describe certificate letsencrypt-staging-secret

It should list your domains under Spec/Acme/Config/Domains. You can also use describe on the certificate and the ingress to check any recent events, e.g. certificate creation.

Let's inspect the certificate in an https call with curl and wget.

curl -I echo1.ex.com

HTTP/1.1 308 Permanent Redirect
Server: nginx/1.15.8
...
Location: https://echo1.ex.com/

wget --save-headers -O- echo1.ex.com

...
Connecting to echo1.ex.com (echo1.ex.com)|1.2.3.4|:443... connected.
ERROR: cannot verify echo1.ex.com's certificate, issued by ‘CN=Fake LE Intermediate X1’:
Unable to locally verify the issuer's authority.
To connect to echo1.ex.com insecurely, use `--no-check-certificate'.

So a normal http request is redirected to the https URL, and the https call works and presents a certificate. As expected, the certificate cannot be verified, since it was issued by the Let's Encrypt staging API. Let's fix that.

Production issuer

Now that the staging issuer works, let's switch to the real production one.

vi production-issuer.yml

apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: youremail@ex.com
    privateKeySecretRef:
      name: letsencrypt-production-secret
    http01: {}

kubectl apply -f production-issuer.yml

kubectl get clusterissuer

NAME                    AGE
letsencrypt-production  6s
letsencrypt-staging     15m

And then update the cluster issuer and secret in the ingress configuration.

vi echo-issuer.yml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-production
spec:
  tls:
  - hosts:
    - echo1.ex.com
    - echo2.ex.com
    secretName: letsencrypt-production-secret
  rules:
  - host: echo1.ex.com
    http:
      paths:
      - backend:
          serviceName: echo1
          servicePort: 80
  - host: echo2.ex.com
    http:
      paths:
      - backend:
          serviceName: echo2
          servicePort: 80

kubectl apply -f echo-issuer.yml

kubectl get certificate

NAME                           AGE
letsencrypt-production-secret  7s
letsencrypt-staging-secret     1d

kubectl describe certificate letsencrypt-production-secret

...
Events:
  Type    Reason         Age  From          Message
  ----    ------         ---  ----          -------
  Normal  Generated      3m   cert-manager  Generated new private key
  Normal  OrderCreated   3m   cert-manager  Created Order resource "letsencrypt-prod-secret-9876"
  Normal  OrderComplete  2m   cert-manager  Order "letsencrypt-prod-secret-9876" completed successfully
  Normal  CertIssued     2m   cert-manager  Certificate issued successfully

When the events say the certificate has been created successfully:

wget --save-headers -O- echo1.ex.com

...
Connecting to echo1.ex.com (echo1.ex.com)|1.2.3.4|:443... connected.
...
HTTP/1.1 200 OK
...

There should be no certificate errors, and wget will simply download echo1's response of echoNumberOne.

So now you have load-balanced services, routing traffic via an Ingress and over secure TLS.

If you want to try out your own Docker images, read how to use 3rd party Docker registry in my previous howto.

Feedback

Please fork and send a pull request to correct any typos or make useful additions.

Buy a t-shirt if you found this guide useful. Hire me for short term advice or long term consultancy.

Otherwise contact me, especially for anything factually incorrect. Apologies for procrastinated replies.