Was it worth migrating to Kubernetes?

Allan John · Published in CodeX · 4 min read · Aug 27, 2021


Introduction

I would like to share my experience with a migration project I was involved in: the issues I faced, the solutions I found, and how I am looking to improve my setup going forward.

Existing setup

A single server ran all applications via a docker-compose setup. The entrypoint was an NGINX proxy container, with a LetsEncrypt companion container handling certificate management, and 15 different applications running behind the proxy. These included Atlassian apps, Postgres databases, blogs running on NGINX, and some testing apps run by developers. New applications were added to the docker-compose file using an Ansible playbook, which was also capable of configuring the server itself.
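
For context, a simplified sketch of that kind of setup is below. The image names and the example app are my own illustration (the article does not name them), following the common nginx-proxy/letsencrypt-companion pattern:

    # Simplified sketch of the old setup; image names and the "blog"
    # app are illustrative assumptions, not taken from the article.
    version: "3"
    services:
      nginx-proxy:
        image: jwilder/nginx-proxy
        ports:
          - "80:80"
          - "443:443"
        volumes:
          - /var/run/docker.sock:/tmp/docker.sock:ro
          - certs:/etc/nginx/certs
          - vhost:/etc/nginx/vhost.d
          - html:/usr/share/nginx/html
      letsencrypt:
        image: jrcs/letsencrypt-nginx-proxy-companion
        environment:
          - NGINX_PROXY_CONTAINER=nginx-proxy
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro
          - certs:/etc/nginx/certs
          - vhost:/etc/nginx/vhost.d
          - html:/usr/share/nginx/html
      blog:
        image: nginx:alpine
        environment:
          # the proxy and companion pick these up to route traffic
          # and issue a certificate for the host
          - VIRTUAL_HOST=blog.example.com
          - LETSENCRYPT_HOST=blog.example.com
    volumes:
      certs:
      vhost:
      html: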

Problems

It started as a simple setup, with Ansible configuring the servers and deploying new applications by adding their config to docker-compose.yml. This was easy at first, but as the ecosystem grew to a lot of applications, maintaining them became a real problem.

The docker-compose file was a Jinja template, so adding a new app meant adding a new block of config to the template. We could have refactored the playbook to generate those blocks from a loop (a sketch of the idea is below), but the playbook had not been structured properly from the start.
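
For illustration only, a refactored template could have generated every app's block from a single loop; a minimal sketch, assuming a hypothetical apps variable in the playbook:

    # docker-compose.yml.j2 -- hypothetical refactor: one loop over
    # app definitions instead of one hand-written block per app.
    version: "3"
    services:
    {% for app in apps %}
      {{ app.name }}:
        image: {{ app.image }}
        environment:
          - VIRTUAL_HOST={{ app.host }}
        volumes:
          - {{ app.name }}-data:/data
    {% endfor %}

    volumes:
    {% for app in apps %}
      {{ app.name }}-data:
    {% endfor %}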

Solution

I had multiple solutions in mind:

  1. Fix the Ansible playbook: This was the first solution that came to mind. It had its own drawbacks, though: the docker-compose file would still be a problem, and working with Ansible's docker_compose module was a pain across operating systems with different Python versions.
  2. Use a container orchestration solution: This is what I decided to go forward with. We already had a lot of applications running in containers, so migrating them to a container orchestration platform would be easy. Kubernetes was the obvious candidate, as it is the de facto container orchestration tool unless there are specific or special requirements, and we had none. So I chose Kubernetes.

Plan and Preparation

Since most of the applications were already containerised, my setup was straightforward. The replacements were:

  • Docker volumes to PersistentVolumeClaims (PVCs)
  • Containers to Deployments
  • Configuration to Secrets and ConfigMaps
  • The NGINX frontend to Traefik as the Ingress controller, with LetsEncrypt for certificates

Since PersistentVolumes were going to be a lot of trouble with multiple nodes, I decided to get one big server first and migrate all the apps onto it, so that I could back the PVs with the hostPath plugin.
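
As a sketch of that approach, a hostPath-backed PV with a matching PVC might look like this (names, paths, and sizes are illustrative):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: registry-pv
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      storageClassName: manual
      hostPath:
        path: /data/registry   # directory on the single big server
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: registry-pvc
      namespace: system
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: manual   # binds to the hostPath PV above
      resources:
        requests:
          storage: 10Gi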

There were already some plugins and tools to convert a docker-compose file into Kubernetes YAML manifests (Kompose, for example), and I tested a few. But the output was messy, generating a lot of resources I did not like, so I started doing everything from scratch. For every application, I began by creating the PV, then the PVC, ConfigMaps, and Secrets, and then the Deployment, Service, and Ingress.

Migration

The first app to migrate was the Docker registry. I created the PV, PVC, Deployment, and Service, and migrated all of the registry's data into the PV folder.
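
A minimal sketch of what those registry manifests could look like, assuming the standard registry:2 image and reusing the registry-pvc claim from the earlier sketch (names and ports are illustrative):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: registry
      namespace: system
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: registry
      template:
        metadata:
          labels:
            app: registry
        spec:
          containers:
            - name: registry
              image: registry:2
              ports:
                - containerPort: 5000
              volumeMounts:
                - name: data
                  mountPath: /var/lib/registry   # registry image data dir
          volumes:
            - name: data
              persistentVolumeClaim:
                claimName: registry-pvc
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: registry
      namespace: system
    spec:
      selector:
        app: registry
      ports:
        - port: 5000
          targetPort: 5000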

Once that was ready, Traefik was deployed with LetsEncrypt configured inside it, so every Ingress created got a certificate automatically.
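
As an illustration, assuming Traefik v2, the LetsEncrypt piece is a certificate resolver in Traefik's static configuration; the email and storage path here are placeholders:

    # traefik.yml (static configuration) -- assumes Traefik v2
    entryPoints:
      web:
        address: ":80"
      websecure:
        address: ":443"

    certificatesResolvers:
      letsencrypt:
        acme:
          email: admin@example.com      # placeholder
          storage: /data/acme.json      # persisted certificate store
          httpChallenge:
            entryPoint: web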

After that, the rest of the applications were migrated quickly, since the YAML files were all similar. Applications that communicated with each other needed configuration changes, because Kubernetes Service names were now used for the connections. For example, to connect Confluence to its own Postgres server, the configuration had to point at the Postgres Service, which was a ClusterIP.
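
For example, a minimal sketch of such a Postgres Service (all names hypothetical):

    apiVersion: v1
    kind: Service
    metadata:
      name: confluence-postgres
      namespace: apps
    spec:
      type: ClusterIP
      selector:
        app: confluence-postgres
      ports:
        - port: 5432
    # Inside the cluster, Confluence can then reach the database at
    # confluence-postgres.apps.svc.cluster.local:5432 (or simply at
    # confluence-postgres:5432 from within the same namespace).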

Once all the applications were set up, observability was a must. I decided to go with the Prometheus, Loki, and Grafana stack, so that I could have metrics and logs in one place and set up alerts based on metrics. It also gives developers a single place to view the logs of all apps, instead of tailing container logs. I went with the Prometheus Operator, which was easy to deploy, although, to be honest, Prometheus was the reason I had to get a bigger server.
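
With the Prometheus Operator, scrape targets are declared as ServiceMonitor resources. A hedged sketch, assuming an app Service that exposes a named metrics port:

    apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      name: registry
      namespace: monitoring
    spec:
      selector:
        matchLabels:
          app: registry          # matches the Service's labels
      namespaceSelector:
        matchNames:
          - system
      endpoints:
        - port: metrics          # assumes a named "metrics" Service port
          interval: 30s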

Applications were categorized by purpose, and a namespace was created for each category to deploy its apps into. For example, Traefik and the Docker registry were deployed in a system namespace, while the testing apps went into a testing namespace. This segregation was really useful, because proper roles for access control can now be applied per namespace.
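
Creating such a namespace is a one-resource manifest; for example:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: testing
      labels:
        purpose: testing   # illustrative label for the category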

Now that all the applications were migrated, a GitHub repo was created to hold all the YAML files. This was done for two reasons: to keep every manifest in a single place, and to make the GitHub repo the single source of truth. It also led me to the idea of adopting a GitOps approach for the cluster.

So I installed Argo CD, with an app-of-apps Application pointing at the GitHub repo. Any change to the repo's master branch is synced by Argo CD, which updates the applications on the cluster.
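
A sketch of what such an app-of-apps Application can look like; the repo URL and path are hypothetical placeholders:

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: app-of-apps
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://github.com/example/cluster-manifests.git  # placeholder
        targetRevision: master
        path: apps          # directory holding child Application manifests
      destination:
        server: https://kubernetes.default.svc
        namespace: argocd
      syncPolicy:
        automated:
          prune: true       # remove resources deleted from the repo
          selfHeal: true    # revert manual drift on the cluster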

Finally, since Traefik was exposed through a NodePort Service on the cluster, I set up HAProxy on the server to forward incoming requests to Traefik.
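
For illustration, a NodePort Service for Traefik might look like the sketch below; HAProxy on the host would then forward ports 80 and 443 to the assumed node ports 30080 and 30443:

    apiVersion: v1
    kind: Service
    metadata:
      name: traefik
      namespace: system
    spec:
      type: NodePort
      selector:
        app: traefik
      ports:
        - name: web
          port: 80
          nodePort: 30080    # assumed; must be in the 30000-32767 range
        - name: websecure
          port: 443
          nodePort: 30443    # assumed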

Next Steps

The next steps would be to:

  1. Migrate the application manifests to Helm charts, where possible.
  2. Add new nodes to the cluster, so apps run on worker nodes rather than only on the master node.
  3. Set up NFS inside the cluster to replace the hostPath PVs. This will be tricky, because NFS can be slow compared to hostPath. In a cloud environment it would be easy, as cloud disks could be used through a provisioner; since this is an on-prem solution, I will stick with NFS.
  4. Set up proper RBAC policies to improve cluster security (a sketch follows below).
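
As an illustration of point 4, a namespaced Role and RoleBinding could scope developers to the testing namespace; the group name is an assumption:

    # Hypothetical example: give a developer group read/write access
    # to workloads in the testing namespace only.
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: testing-developer
      namespace: testing
    rules:
      - apiGroups: ["", "apps"]
        resources: ["pods", "pods/log", "services", "configmaps", "deployments"]
        verbs: ["get", "list", "watch", "create", "update", "patch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: testing-developer
      namespace: testing
    subjects:
      - kind: Group
        name: developers          # assumed group name
        apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: testing-developer
      apiGroup: rbac.authorization.k8s.io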

Conclusion

The application migration required a bit of time to plan and prepare, but the migration itself was easy. In my opinion it was worth it, because the devs can now deploy apps through a GitOps workflow instead of running Ansible code against the server. They also get the added benefit of moving from Docker to Kubernetes and learning to use it.

Hope you liked it :)
