Kubernetes Federation With Google Global Load Balancer

Konrad Rotkiewicz
9 July 2017 · 15 min read

Ever tried deploying your application to 4 clusters around the world and load balancing it across all of them? It can turn out to be a puzzling and painstaking process. It needn't be like that though: with Kubernetes Federation and the Google Global Load Balancer, the job can be done in a matter of minutes.

FEDERATED CLUSTERS

Kubernetes Federation gives you the ability to manage Deployments and Services across all the clusters located in different regions. Put simply: if you know how to deploy to a single cluster, you can use the same commands to deploy to multiple clusters.

How does it work behind the scenes though? Well, Kubernetes creates a so-called "virtual" cluster with an API endpoint that you can use to manage Deployments and Services across all the clusters. The clusters can even be located in different IaaS providers (we tested it across GCP and AWS), which is something worth shouting about.

[Figure: overview of the federated clusters]

Wondering about load balancing? Perhaps you are curious as to how we can distribute the load to an app across all the clusters. We have found 2 options, depending on whether GCP or AWS is used.

GCP gives you the Global Load Balancer, which comes in very handy indeed. As GCP is deeply integrated with Kubernetes, it automatically recognizes your Kubernetes Services across all clusters and load balances between them. This essentially means you have a single Load Balancer (at only $20 + traffic cost) for your federated application across all clusters. Not too shabby at all!

AWS does not provide any such integration with Kubernetes, so our suggestion is to use a LoadBalancer Service type to create an Elastic Load Balancer for each app in each cluster, and then use Route53 latency-based routing on top of them.
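
For reference, here is a rough sketch of that AWS variant. It is an illustration only; the hosted zone ID, domain and ELB hostname below are hypothetical placeholders, not values from this walkthrough:

apiVersion: v1
kind: Service
metadata:
  name: get-region
spec:
  type: LoadBalancer   # each cluster provisions its own ELB
  selector:
    app: get-region
  ports:
  - port: 80
    targetPort: 80

$ # one latency-based record per cluster, all sharing the same record name
$ aws route53 change-resource-record-sets --hosted-zone-id Z123EXAMPLE --change-batch '{
    "Changes": [{"Action": "CREATE", "ResourceRecordSet": {
      "Name": "app.example.com", "Type": "CNAME", "TTL": 60,
      "SetIdentifier": "us-east-1", "Region": "us-east-1",
      "ResourceRecords": [{"Value": "my-elb-123.us-east-1.elb.amazonaws.com"}]}}]}'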

LET’S FEDERATE AN APP

Let's use GKE to create a 4-cluster federation, and the Global Load Balancer to distribute the load. We will use a simple Python app that returns the region in which it is located.
What you need here is an account in GCP and the gcloud CLI installed. Let's start!

First, we need a DNS zone. The Federation uses it to provide DNS records for local cross-cluster service discovery, so the clusters' nodes need permission to add entries to it.

$ gcloud dns managed-zones create federation --description=federation --dns-name=demo.madeden.com.
$ export SCOPES="cloud-platform,storage-ro,service-control,service-management,https://www.googleapis.com/auth/ndev.clouddns.readwrite"
$ gcloud container clusters create us-central --zone us-central1-a -m n1-standard-1 --scopes $SCOPES
$ gcloud container clusters create eu-west --zone europe-west1-d -m n1-standard-1 --scopes $SCOPES
$ gcloud container clusters create us-east --zone us-east1-c -m n1-standard-1 --scopes $SCOPES
$ gcloud container clusters create us-west --zone us-west1-b -m n1-standard-1 --scopes $SCOPES
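
To double-check the zone before moving on, you can ask gcloud to describe it (just a sanity check, not a required step):

$ gcloud dns managed-zones describe federation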

gcloud automatically adds the clusters to our ~/.kube/config, so we can list them using kubectl config get-contexts:

$ kubectl config get-contexts
CURRENT   NAME                                      CLUSTER                                   AUTHINFO                                  NAMESPACE
          gke_test-c63bb_us-west1-b_us-west         gke_test-c63bb_us-west1-b_us-west         gke_test-c63bb_us-west1-b_us-west
          gke_test-c63bb_europe-west1-d_eu-west     gke_test-c63bb_europe-west1-d_eu-west     gke_test-c63bb_europe-west1-d_eu-west
          gke_test-c63bb_us-east1-c_us-east         gke_test-c63bb_us-east1-c_us-east         gke_test-c63bb_us-east1-c_us-east
          gke_test-c63bb_us-central1-a_us-central   gke_test-c63bb_us-central1-a_us-central   gke_test-c63bb_us-central1-a_us-central

Let’s choose the cluster on which we will install the Federated Control Plane (it should be the closest one to you):

$ kubectl config use-context gke_test-c63bb_europe-west1-d_eu-west

and check if we can communicate with it:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.2", GitCommit:"922a86cfcd65915a9b2f69f3f193b8907d741d9c", GitTreeState:"clean", BuildDate:"2017-07-21T08:23:22Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.7", GitCommit:"095136c3078ccf887b9034b7ce598a0a1faff769", GitTreeState:"clean", BuildDate:"2017-07-05T16:40:42Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}

Now we can install the Control Plane:

$ kubefed init fed --host-cluster-context gke_test-c63bb_europe-west1-d_eu-west --dns-provider="google-clouddns" --dns-zone-name="demo.madeden.com."
Federation API server is running at: 104.155.10.255
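
Under the hood, kubefed runs the federation API server and controller manager as pods on the host cluster (in the federation-system namespace, at least in the kubefed version used here), so if anything looks off you can inspect them directly:

$ kubectl get pods --namespace=federation-system --context=gke_test-c63bb_europe-west1-d_eu-west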

It then appears as a new context:

$ kubectl config get-contexts
CURRENT   NAME                                      CLUSTER                                   AUTHINFO                                  NAMESPACE
          gke_test-c63bb_us-west1-b_us-west         gke_test-c63bb_us-west1-b_us-west         gke_test-c63bb_us-west1-b_us-west
*         fed                                       fed                                       fed
          gke_test-c63bb_europe-west1-d_eu-west     gke_test-c63bb_europe-west1-d_eu-west     gke_test-c63bb_europe-west1-d_eu-west
          gke_test-c63bb_us-east1-c_us-east         gke_test-c63bb_us-east1-c_us-east         gke_test-c63bb_us-east1-c_us-east
          gke_test-c63bb_us-central1-a_us-central   gke_test-c63bb_us-central1-a_us-central   gke_test-c63bb_us-central1-a_us-central
$ kubectl get clusters
No resources found.

Currently there are no clusters attached to it, so let's add all of our clusters:

$ kubectl config use-context fed
Switched to context "fed".
$ kubefed join eu-west --host-cluster-context=gke_test-c63bb_europe-west1-d_eu-west --cluster-context=gke_test-c63bb_europe-west1-d_eu-west
cluster "eu-west" created
$ kubefed join us-central --host-cluster-context=gke_test-c63bb_europe-west1-d_eu-west --cluster-context=gke_test-c63bb_us-central1-a_us-central
cluster "us-central" created
$ kubefed join us-east --host-cluster-context=gke_test-c63bb_europe-west1-d_eu-west --cluster-context=gke_test-c63bb_us-east1-c_us-east
cluster "us-east" created
$ kubefed join us-west --host-cluster-context=gke_test-c63bb_europe-west1-d_eu-west --cluster-context=gke_test-c63bb_us-west1-b_us-west
cluster "us-west" created

and wait until they become ready:

$ kubectl get clusters
NAME         STATUS    AGE
eu-west      Ready     9m
us-central   Ready     7m
us-east      Ready     7m
us-west      Ready     7m

With our 4-cluster federation ready, it's now time to deploy the get-region app and wait until we have 16 pods ready:

$ kubectl create deployment get-region --image=ulamlabs/get-region
deployment "get-region" created
$ kubectl scale deployment get-region --replicas=16
deployment "get-region" scaled
$ kubectl get deploy
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
get-region   16        16        0            16          1m
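
The federation control plane splits those 16 replicas across the member clusters for us. If you are curious how many landed in each cluster, a quick loop over the contexts listed earlier will show it:

$ for ctx in gke_test-c63bb_us-west1-b_us-west \
             gke_test-c63bb_europe-west1-d_eu-west \
             gke_test-c63bb_us-east1-c_us-east \
             gke_test-c63bb_us-central1-a_us-central; do
    kubectl get deploy get-region --context=$ctx
  done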

Now we create a Federated Service of the NodePort type, with port 30040 exposed on each cluster's nodes. You can check that the Deployment and Service have been propagated by inspecting one of the clusters:

$ kubectl create service nodeport get-region --tcp=80:80 --node-port=30040
service "get-region" created
$ kubectl get deploy --context=gke_test-c63bb_us-central1-a_us-central
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
get-region   1         1         1            1           31m
$ kubectl get svc --context=gke_test-c63bb_us-central1-a_us-central
NAME         CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
get-region   10.7.243.24   <nodes>       80:30040/TCP   5m

You can see that we have used the NodePort type; the reason behind this is that we can now easily create a Global Load Balancer that connects to all our clusters.
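
If you want to curl a NodePort directly, before the Load Balancer exists, you first need a firewall rule for that port. The rule name below is arbitrary, and NODE_IP stands for any node's external IP from gcloud compute instances list:

$ gcloud compute firewall-rules create get-region-nodeport --allow=tcp:30040
$ curl http://NODE_IP:30040/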

The easiest way to do that is to create an Ingress on the Federation Control Plane; GKE will then automatically create and connect the Global Load Balancer.
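
One detail worth noting: the global-static-ip-name annotation below refers to a reserved global static IP address, so if you have not reserved one under that name yet, do it first:

$ gcloud compute addresses create get-region --global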

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: get-region
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "get-region"
spec:
  backend:
    serviceName: get-region
    servicePort: 80
$ kubectl create -f ingress.yml
ingress "get-region" created

You can check the status of the Load Balancer in the GCP console, on the Load balancing page. After you see 3/3 Healthy for all backends, it means it is ready.
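
You can also keep an eye on it from the command line; once provisioning finishes, the Ingress should report the Load Balancer's address:

$ kubectl describe ingress get-region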

[Figure: Load Balancer backend health in the GCP console]

The next thing to do is to test whether it really works. Do you remember that our Python app returns the region in which it is located? To check that, I created 3 DigitalOcean nodes in San Francisco, New York and London, and ran curl on each of them.
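
The check itself is just a handful of requests against the Load Balancer's IP. A minimal sketch, assuming the static IP we reserved earlier:

$ LB_IP=$(gcloud compute addresses describe get-region --global --format='value(address)')
$ for i in 1 2 3; do curl -s http://$LB_IP/; done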

[Figure: curl results from the 3 test nodes]

By going to the Monitoring tab on the Load Balancer's page, you can see how the traffic is distributed:

[Figure: traffic distribution shown in the Monitoring tab]

CONCLUSIONS

The results look impressive, and this simple setup can be the first step towards some great enterprise-grade solutions, like a custom CDN or a multi-region web application.

Is it ready to use in practice? We believe so; we already use it with confidence, but it does require some additional effort:

- The Control Plane uses a single, separate etcd instance. This has to be adjusted to use a dedicated etcd cluster to provide minimal fault tolerance.
- On GKE, the Global Load Balancer is an ideal way to distribute traffic and is very much the way to go; achieving the same on AWS or Azure would be harder due to the reliance on DNS, which is not as flexible.

Last but not least, it is also worth mentioning that there are pending proposals to make Helm work with Federation. This is great news, because it would make it easier to maintain a federated application, whether it be internal or open source.
