Understanding Kubernetes Ingress
I’ve become super excited about Kubernetes and its model of declarative configurations backed by controllers. You tell Kubernetes what you want, and the system (hopefully) eventually converges to your desired state. The docs, including books like the excellent Kubernetes: Up and Running, do a great job explaining how to specify things like Pods and Deployments, and giving a high-level intuition for how their controllers work (are there enough running pods for this deployment? If not, start a new one).
The concept that has taken me the longest to get a grasp on has been Ingress controllers.
For me, the Ingress controller is the first feature I’ve used where the declarative config isn’t backed by a single baked-in controller, but instead by your choice of several controllers. Due to the rapidly developing nature of Kubernetes, not all of them interpret the Ingress resource the same way.
The intro docs show you how to apply a Deployment and a Service of type LoadBalancer to expose your first app to the world. This is great for experimentation and development, and for company-internal clusters you might even be able to stop here.
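For concreteness, the approach from the intro docs boils down to a Service like the following sketch; the name, selector labels, and ports here are placeholders for your own Deployment’s values, not anything from my setup:

```yaml
# A minimal Service of type LoadBalancer.
# "hello-web" and the ports are placeholders for your own app.
apiVersion: v1
kind: Service
metadata:
  name: hello-web
spec:
  type: LoadBalancer   # asks the cloud provider for a dedicated load balancer
  selector:
    app: hello-web     # must match the labels on your Deployment's pods
  ports:
    - port: 80         # port exposed by the load balancer
      targetPort: 8080 # port your container actually listens on
```

Each Service of this type provisions its own cloud load balancer, which is exactly the per-service cost that makes this approach stop scaling.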
However, if you want to host a number of different internet-facing domains and projects, paying for individual load balancers for each Service and hand-managing a corresponding number of TLS certificates quickly becomes daunting and un-fun.
I have a number of active and “legacy” personal projects currently living on a hand-curated EC2 instance. The situation looks like this:
An nginx instance handles HTTPS requests from the internet. Some requests are reverse proxied to local HTTP-based services (like a personal music service), where these services are managed as systemd units and deployed with RPMs. Others, like the previous version of this blog, are directories on the filesystem with content served directly by nginx.
I’m hoping that by moving my projects into containers and onto Kubernetes I can lower the barrier to writing new blog posts and sharing experiments (in addition to learning Kubernetes just for the sake of it!). I love the amount of automation around build, testing and deployment I see at work and would love to have that for personal projects. I’ve largely followed the trail Will Larson laid out in his Trying out Google Container Engine and Simple Continuous Deployment posts, except around Ingress.
Ingress Basics
A Kubernetes Ingress resource is a declarative configuration detailing the hosts and routes to the services you are exposing to the world. An Ingress controller watches those resources and configures a proxy or load balancer to match. Together, they are a centralized alternative to sprinkling LoadBalancer services across your cluster.
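A minimal Ingress resource, written against the current networking.k8s.io/v1 API, might look like the sketch below. The hostnames and service names are placeholders, not my actual configuration:

```yaml
# One Ingress routing two hosts to two Services behind a single entry point.
# Hostnames, service names, and ports are all placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: personal-sites
spec:
  rules:
    - host: blog.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: blog      # a ClusterIP Service for the blog pods
                port:
                  number: 80
    - host: music.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: music     # a ClusterIP Service for the music app
                port:
                  number: 80
```

One load balancer and one resource now front any number of hosts, instead of one LoadBalancer Service per project.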
The Kubernetes version of my personal domain looks like this:
Instead of RPMs and scripts to push files around, all projects are deployed as containers in pods. For projects that used to be directories on the filesystem, they are now deployed in a container with their own nginx. The memory usage of this nginx is single-digit megabytes, and nothing to lose sleep over.
Choosing an Ingress controller
There are four Ingress controllers that seem viable at first glance: Google’s GLBC, NGINX Ingress, Istio Ingress, and Heptio Contour. GLBC is an Ingress controller that configures a Google Cloud L7 load balancer to route traffic into Kubernetes. It is exciting to think you are using the same high-performance load balancers as google.com for your piddly blog, but GLBC has limits on the number of backend services it can route to that make it non-ideal here.
Also, unfortunately, the configuration syntax between GLBC and NGINX isn’t actually the same. For example, with NGINX you specify / to route all requests to a service, while GLBC requires /*.
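Under the extensions/v1beta1 Ingress API that was current when these controllers diverged this way, the same “send everything on this host to one service” intent looked like this; the service name and port are placeholders:

```yaml
# Fragments of an Ingress rule's paths list (extensions/v1beta1 era).
# nginx-ingress treats a bare "/" as a catch-all for the host:
- path: /
  backend:
    serviceName: blog
    servicePort: 80
# ...while GLBC wants an explicit wildcard for the same intent:
- path: /*
  backend:
    serviceName: blog
    servicePort: 80
```

So the “portable” Ingress resource is only portable up to a point, and switching controllers can mean auditing your path rules.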
I ended up choosing NGINX Ingress, as it was straightforward to deploy. The ingress-nginx nginx build is terrifying, but since everyone else is using it, we can just ignore that for now.
Cert Manager
Cert-manager will automatically watch for Ingress resources, obtain Let’s Encrypt certificates for their hosts, and renew those certificates behind the scenes. It works great once you get the incantation right.
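With the current cert-manager.io/v1 API, the incantation is roughly a ClusterIssuer plus an annotation on the Ingress. This is a sketch, and the email address and resource names are placeholders:

```yaml
# A cluster-wide issuer that solves ACME HTTP-01 challenges
# through the nginx Ingress class. Email and names are placeholders.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com          # Let's Encrypt expiry notices go here
    privateKeySecretRef:
      name: letsencrypt-prod-key    # Secret storing the ACME account key
    solvers:
      - http01:
          ingress:
            class: nginx            # challenges served via the nginx controller
```

The Ingress then gets a cert-manager.io/cluster-issuer: letsencrypt-prod annotation and a tls: section listing its hosts, and cert-manager takes it from there.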
Bringing it Together
NGINX Ingress with cert-manager provides a clean way to route traffic into your cluster.
I was going to write more here, but in the interest of not having this blog post sit as a draft for six more months, I’m publishing it as is.