Kubernetes Traffic Routing - Part 1 of 2
This blog introduces, in a practical manner, several ways Kubernetes (a.k.a. K8S) routes inbound traffic from outside the cluster. By the end, you will have enough background to choose a best practice for building a load balancer on K8S.
In order to route inbound traffic, you need to expose Pods running in the K8S cluster to the outside world. There are several ways to achieve this: the Service REST object in K8S, PortProxy, and Ingress rule sets in K8S.
A Service is a logical abstraction: a communication layer in front of a set of Pods, together with a policy by which to access them.
A Service provides a stable, always-on endpoint for its Pods: clients connect to the Service, whose address stays stable throughout the Pod life cycle. An important attribute of a Service is its type, which determines how the Service exposes Pods outside the cluster.
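As a concrete illustration, here is a minimal Service manifest; the name, the `app: my-app` selector, and the ports are hypothetical placeholders, not values from a real deployment:

```yaml
# A minimal Service of the default type (ClusterIP).
# Name, selector, and ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app        # routes to Pods labeled app=my-app
  ports:
    - port: 80         # port the Service listens on inside the cluster
      targetPort: 8080 # port the selected Pods actually serve on
```

Because no `type` is set, this Service gets a ClusterIP and is reachable only from within the cluster network.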
Typically, Services and Pods have virtual IPs of type "ClusterIP", which are routable only within the cluster network; traffic that ends up at an edge router is usually either dropped or forwarded elsewhere. Exposing Pods with a Service of type "NodePort" or "LoadBalancer", on the other hand, will route inbound traffic into the cluster.
With a Service of type "NodePort" or "LoadBalancer", a specific port on each node is mapped to the Service, and those ports come from a special range set aside for allocations (30000-32767 by default). This means you are not allowed to choose a well-known port, like 80, to expose a Service on your nodes.
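The NodePort variant of the sketch above might look like the following; again, the name, selector, and ports are illustrative, and the chosen `nodePort` must fall inside the allocation range (30000-32767 by default):

```yaml
# The same illustrative Service, exposed on every node as a NodePort.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080  # reachable on every node at <node-ip>:30080
```

If you omit `nodePort`, K8S picks a free port from the range for you.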
PortProxy refers to a very small container image that lets you forward a Pod port or a host port to a Service in a port-forwarding manner. You can choose any Pod port or host port, and they are not limited in the way NodePort Services are.
Ingress is a K8S resource that defines a set of rules mapping to Services. It can be configured to give Services externally reachable URLs, load balance traffic, terminate SSL, offer name-based virtual hosting, and so on. For Ingress to work, the K8S cluster must have an Ingress controller running: an application that watches Ingress resources in the cluster and configures a balancer to apply the rules they define. Ingress controller implementations are backed by third-party load balancers such as nginx, HAProxy, Vulcand, or Traefik.
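A name-based routing rule can be sketched as below. This assumes the current `networking.k8s.io/v1` Ingress API; the host name and backend Service are hypothetical, and an Ingress controller (e.g. nginx) must be running in the cluster for the rule to take effect:

```yaml
# An Ingress routing HTTP traffic for one host to a backend Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
    - host: my-app.example.com   # name-based virtual hosting
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app     # the Service to route to
                port:
                  number: 80
```

Note that the backend Service itself can stay type ClusterIP; only the Ingress controller needs to be reachable from outside.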
Pros and Cons
Because taking a port on the cluster's nodes is the key factor in exposing Pods to the outside, the way each approach handles ports largely determines the best practice for exposing Pods behind a load balancer.
Although there are a few differences among the approaches above, if you simply want to expose Pods, any of them will do as long as you follow its specification. But taking into account the issues you may face after exposing Pods, such as the limited port range, the hard-to-manage ports that come with running an L4 load balancer, or lower performance, I think Ingress is more worth pushing as a solution than Service or PortProxy, because its powerful L7 load balancing gives you a way to handle those issues.
In the next part, diving deeply into Ingress, I will introduce how it works as an underlying router to build an L7 load balancer.