Konrad Rotkiewicz · 19 min read · Last Update: March 29, 2023

Ever tried deploying your application to 4 clusters around the world and load balancing it across all of them? It can turn out to be a puzzling and painstaking process. It needn’t be like that though, as with Kubernetes Federation and Google Global Load Balancer the job can be done in a matter of minutes.

FEDERATED CLUSTERS

Kubernetes Federation gives you the ability to manage Deployments and Services across clusters located in different regions. Put simply, the same command you use to deploy to a single cluster can deploy to multiple clusters.

How does it work behind the scenes though? Well, Kubernetes creates a so-called “virtual” cluster with an API endpoint that you can use to manage Deployments and Services across all the clusters. The clusters can even be located in different IaaS providers (we tested it across GCP and AWS), which is something worth shouting about.

[Image: overview]

Wondering about load balancing? Perhaps you are curious how to distribute load to the app across all the clusters. We have found two options, depending on whether you use GCP or AWS.

GCP gives you a Global Load Balancer, which comes in very handy indeed. As GCP is deeply integrated with Kubernetes, it automatically recognizes your Kubernetes Services across all clusters and load balances between them. This essentially means you have a single Load Balancer (at only $20 + traffic cost) for your federated application across all clusters. Not too shabby at all!

AWS does not provide any such integration with Kubernetes, so our suggestion is to use a Service of type LoadBalancer to create an Elastic Load Balancer for the app in each cluster, and then use Route53 latency-based routing on top of them.
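As a rough sketch of that approach (all names and the application port below are placeholders), the per-cluster Service could look like this; each cluster then gets its own ELB, whose hostname you add to Route53 as a latency-based record:

    # get-region-elb.yaml -- creates one Elastic Load Balancer per cluster
    apiVersion: v1
    kind: Service
    metadata:
      name: get-region
    spec:
      type: LoadBalancer
      selector:
        app: get-region
      ports:
      - port: 80
        targetPort: 5000     # assumed application port

You would apply this in each AWS cluster (kubectl --context=<aws-cluster-context> create -f get-region-elb.yaml) and then point the Route53 latency records at the resulting ELB hostnames.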

LET’S FEDERATE AN APP

Let’s use GKE to create a 4-cluster federation, and the Global Load Balancer to distribute the load. We will use a simple Python app that returns the region in which it is located.
What you need here is a GCP account and the gcloud CLI installed. Let’s start!

First, we need a DNS zone and the clusters themselves; the clusters’ nodes need permission to add entries to that zone, because the Federation uses it to publish DNS records for cross-cluster service discovery.
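A rough sketch of those two steps, assuming a placeholder DNS name (federation.example.com), three-node clusters and one region per cluster:

    # Managed DNS zone that the Federation will use for service discovery records
    gcloud dns managed-zones create federation-zone \
        --dns-name="federation.example.com." \
        --description="Kubernetes Federation DNS records"

    # One cluster per region (repeat for us-west1-b, europe-west1-b and asia-east1-a).
    # The clouddns scope lets the nodes write DNS entries; note that --scopes
    # replaces the default scopes, so add the ones you normally rely on too.
    gcloud container clusters create gce-us-east1 \
        --zone=us-east1-b \
        --num-nodes=3 \
        --scopes="https://www.googleapis.com/auth/ndev.clouddns.readwrite"

    # Fetch kubectl credentials; this creates a context named gke_<project>_<zone>_<cluster>
    gcloud container clusters get-credentials gce-us-east1 --zone=us-east1-b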

Let’s choose the cluster on which we will install the Federated Control Plane (it should be the closest one to you):
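Assuming the context names created by gcloud above (with my-project as a placeholder project ID):

    # The us-east1 cluster will host the Federation Control Plane
    kubectl config use-context gke_my-project_us-east1-b_gce-us-east1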

and check if we can communicate with it:
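For example:

    kubectl cluster-info
    kubectl get nodes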

Now we can install the Control Plane:
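With the federation v1 tooling this is done with kubefed init; a sketch using the placeholder names from above:

    # Installs the Federation API server and controller manager into the host cluster
    # and adds a kubeconfig context named "federation"
    kubefed init federation \
        --host-cluster-context=gke_my-project_us-east1-b_gce-us-east1 \
        --dns-provider=google-clouddns \
        --dns-zone-name=federation.example.com.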

It then appears as a new context:
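For example:

    # The new "federation" entry should show up alongside the GKE contexts
    kubectl config get-contexts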

Currently there are no clusters attached to it so let’s add all our clusters:
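Joining is done with kubefed join, run against the federation context; the cluster and context names below are the placeholders from earlier:

    kubectl config use-context federation

    # Register each cluster with the federation
    kubefed join gce-us-east1 \
        --host-cluster-context=gke_my-project_us-east1-b_gce-us-east1 \
        --cluster-context=gke_my-project_us-east1-b_gce-us-east1
    kubefed join gce-us-west1 \
        --host-cluster-context=gke_my-project_us-east1-b_gce-us-east1 \
        --cluster-context=gke_my-project_us-west1-b_gce-us-west1
    # ...and the same for gce-europe-west1 and gce-asia-east1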

and wait until they become ready:
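You can watch them with:

    # All four clusters should eventually report Ready
    kubectl --context=federation get clusters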

With our 4-cluster federation ready, it’s now time to deploy the get-region app and wait until we have 16 pods ready.
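A minimal manifest could look like the one below; the image name and the application port are placeholders for your own build of the app:

    # get-region-deployment.yaml -- 16 replicas, spread by the Federation across the 4 clusters
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: get-region
    spec:
      replicas: 16
      template:
        metadata:
          labels:
            app: get-region
        spec:
          containers:
          - name: get-region
            image: <your-registry>/get-region:latest   # placeholder image
            ports:
            - containerPort: 5000                      # assumed application port

Create it through the federation context and watch the rollout:

    kubectl --context=federation create -f get-region-deployment.yaml
    kubectl --context=federation get deployment get-region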

Now we create the Federated Service of type NodePort, with port 30040 exposed on every cluster’s nodes. You can then check that the Deployment and Service have been propagated by looking at one of the clusters:
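A sketch of the Service (the application port is again an assumption):

    # get-region-service.yaml -- exposed on port 30040 of every node in every cluster
    apiVersion: v1
    kind: Service
    metadata:
      name: get-region
    spec:
      type: NodePort
      selector:
        app: get-region
      ports:
      - port: 80
        targetPort: 5000     # assumed application port
        nodePort: 30040

Create it and spot-check a single cluster (the context name is a placeholder):

    kubectl --context=federation create -f get-region-service.yaml
    kubectl --context=gke_my-project_europe-west1-b_gce-europe-west1 get deployment,service get-region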

You can see that we have used the NodePort type; the reason is that we can now easily create a Global Load Balancer that connects to all our clusters.

The easiest way to do that is to create an Ingress on the Federation Control Plane; GKE will then automatically create and connect the Global Load Balancer.
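A minimal federated Ingress pointing at the Service above could look like this:

    # get-region-ingress.yaml -- GKE turns this into a single Global Load Balancer
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: get-region
    spec:
      backend:
        serviceName: get-region
        servicePort: 80

Create it on the federation context:

    kubectl --context=federation create -f get-region-ingress.yaml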

You can check the status of the Load Balancer in the GCP console, on the Load balancing page under Network services. Once you see 3/3 Healthy for all backends, it is ready.
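If you prefer the command line, the backend services that GKE creates for the Ingress (their exact names are auto-generated) can be inspected with gcloud:

    gcloud compute backend-services list
    gcloud compute backend-services get-health <backend-service-name> --global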

[Image: network]

The next thing to do is to test whether it really works. Do you remember that our Python app returns the region in which it is located? To check that, I created 3 DigitalOcean nodes in San Francisco, New York and London, and ran a curl on each of them.
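The check itself is just a curl against the global IP of the federated Ingress (shown, for example, by kubectl --context=federation get ingress get-region):

    # Run from each DigitalOcean node; <global-lb-ip> is a placeholder for the Ingress address
    curl http://<global-lb-ip>/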

[Image: terminal]

By going to the Monitoring tab in the Load Balancer list page you can see how the traffic is distributed:

[Image: network2]

CONCLUSIONS

The results look impressive, and this simple setup can be the first step towards some great enterprise-grade solutions such as a custom CDN or a multi-region web application.

Is it ready to use in practice? We believe so; we already use it with confidence, but it does require some additional effort.

The Control Plane uses a single, separate etcd instance; this has to be adjusted to use a dedicated etcd cluster to provide at least minimal fault tolerance.
On GKE, the Global Load Balancer is an ideal way to distribute traffic and is very much the way to go; on AWS or Azure it could be harder due to the reliance on DNS, which is not as flexible.

Last but not least, it is also worth mentioning that there are pending proposals to make Helm work with Federation here and here. This is great news, because it would make it easier to maintain a federated application, whether internal or open source.

Written by
Konrad Rotkiewicz
CEO
