Hidden feature of Kubernetes Service

Expose multiple apps on the same LoadBalancer Service

Have you ever wondered what the difference is between referencing ports by name and by number in a Kubernetes Service?

The documentation says that you can reference container port names in the targetPort attribute (https://kubernetes.io/docs/concepts/services-networking/service/#field-spec-ports), so it can look like targetPort: 8080 or like targetPort: my-http.
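
For illustration only, here is a minimal (incomplete) ports fragment showing both forms; my-http stands for whatever name the container port declares:

ports:
  - protocol: TCP
    port: 80
    targetPort: 8080    # referencing the container port by number

# ...or...

ports:
  - protocol: TCP
    port: 80
    targetPort: my-http # referencing the container port by its name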

Let's see it in practice.

Environment information

  • Kubernetes 1.26 on GKE

  • For demo purposes, I will use the following Pod to test services:

apiVersion: v1
kind: Pod
metadata:
  name: tester
  namespace: tests
spec:
  containers:
    - name: debug
      image: nicolaka/netshoot
      command: ["/bin/sh"]
      args: ["-c", 'trap "exit" TERM; while true; do sleep 1; done']

The nicolaka/netshoot image contains curl and many other tools.
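
A minimal sketch for spinning this up, assuming the manifest above is saved as tester.yaml (the file name is just an example):

kubectl create namespace tests
kubectl apply -f tester.yaml
kubectl -n tests exec -it tester -- bash

All curl commands shown in the results below are executed from inside this tester Pod.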

We will use two different Pods that expose the same port number (80), and a single Service that routes traffic to both of them.

We have 4 possibilities

Service \ Pod                  | http as port name | different name as port name
port number as targetPort      | 1                 | 2
port name as targetPort        | 3                 | 4

Scenario 1.

  • http as the port name in containers

  • port number as targetPort in the Service

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: tests
  labels:
    type: server-www
spec:
  containers:
    - name: nginx-container
      image: nginx
      ports:
        - name: http # HERE
          containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: apache
  namespace: tests
  labels:
    type: server-www
spec:
  containers:
    - name: apache-container
      image: httpd
      ports:
        - name: http # HERE
          containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: tests
spec:
  selector:
    type: server-www
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80 # HERE

Results

The Service load-balances traffic across both Pods (kube-proxy in its default iptables mode picks an endpoint pseudo-randomly, so the order is not strictly round-robin):

tester:~# curl -s -I my-service | grep Server
Server: Apache/2.4.57 (Unix)
tester:~# curl -s -I my-service | grep Server
Server: nginx/1.25.2
tester:~# curl -s -I my-service | grep Server
Server: nginx/1.25.2
tester:~# curl -s -I my-service | grep Server
Server: Apache/2.4.57 (Unix)

The same would happen if the Pods were under ReplicaSet control, but then they would run the same image, so we would not see a difference like the one above.

kubectl -n tests describe svc my-service
Name:              my-service
Namespace:         tests
...
Selector:          type=server-www
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.52.0.28:80,10.52.2.38:80
...
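
To double-check which Pods back the Service, you can also list its EndpointSlices; the kubernetes.io/service-name label used below is the standard label the control plane puts on slices:

kubectl -n tests get endpointslices -l kubernetes.io/service-name=my-service -o wide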

Scenario 2.

  • a different port name in each container (nginx-port and apache-port)

  • port number as targetPort in the Service

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: tests
  labels:
    type: server-www
spec:
  containers:
    - name: nginx-container
      image: nginx
      ports:
        - name: nginx-port # HERE
          containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: apache
  namespace: tests
  labels:
    type: server-www
spec:
  containers:
    - name: apache-container
      image: httpd
      ports:
        - name: apache-port # HERE
          containerPort: 80
          protocol: TCP
---
# Service YAML the same as in Scenario 1.

Results

The same results as in Scenario 1: when targetPort is a number, the container port names are not used at all.

Scenario 3.

  • http as the port name in containers

  • port name as targetPort in the Service

# Pods YAML the same as in Scenario 1.
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: tests
spec:
  selector:
    type: server-www
  ports:
    - protocol: TCP
      port: 80
      targetPort: http # HERE

Results

The same results as in Scenarios 1 and 2: both containers name their port http, so targetPort: http resolves to port 80 on each Pod.

Scenario 4.

  • a different port name in each container (nginx-port and apache-port)

  • port name as targetPort in the Service

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: tests
  labels:
    type: server-www
spec:
  containers:
    - name: nginx-container
      image: nginx
      ports:
        - name: nginx-port # HERE - different than apache has
          containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: apache
  namespace: tests
  labels:
    type: server-www
spec:
  containers:
    - name: apache-container
      image: httpd
      ports:
        - name: apache-port # HERE - different than nginx has
          containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: tests
spec:
  selector:
    type: server-www
  ports:
    # HERE
    - name: nginx
      protocol: TCP
      port: 90
      targetPort: nginx-port
    - name: apache
      protocol: TCP
      port: 91
      targetPort: apache-port

As you can see, the Service has two port definitions, because we want to reference two different port names in targetPort, and each Service port needs its own unique port number and name. A targetPort given by name is resolved per Pod: only Pods whose containers define that port name become endpoints for that Service port.

Results

# Nginx application is exposed on port 90
tester:~# curl -s -I my-service:90 | grep Server
Server: nginx/1.25.2
tester:~# curl -s -I my-service:90 | grep Server
Server: nginx/1.25.2
tester:~# curl -s -I my-service:90 | grep Server
Server: nginx/1.25.2

# Apache application is exposed on port 91
tester:~# curl -s -I my-service:91 | grep Server
Server: Apache/2.4.57 (Unix)
tester:~# curl -s -I my-service:91 | grep Server
Server: Apache/2.4.57 (Unix)
tester:~# curl -s -I my-service:91 | grep Server
Server: Apache/2.4.57 (Unix)

kubectl -n tests describe svc my-service
Name:              my-service
Namespace:         tests
Selector:          type=server-www
Type:              ClusterIP
...
Port:              nginx  90/TCP
TargetPort:        nginx-port/TCP
Endpoints:         10.92.0.30:80
Port:              apache  91/TCP
TargetPort:        apache-port/TCP
Endpoints:         10.92.2.42:80
...

BINGO!

In this scenario we can reach each application separately by using different port numbers on the same Service (and therefore the same DNS name).

The Service maps each container's named port to its own Service port.

Conclusion

We can expose multiple different applications (from different Deployments/StatefulSets/ReplicaSets) through a single Service object. Each application gets its own Service port, and each has to declare a unique container port name that the corresponding targetPort references.

Where is the magic? You might think this is not useful, and indeed it does not make much sense with a Service of type ClusterIP.

The magic begins when this is used with type LoadBalancer in private clusters.

Private clusters in cloud environments (like GKE) need external Load Balancers to expose applications outside the cluster. Thanks to this hidden feature, you can expose multiple applications behind the same Load Balancer, as in the sketch below.
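
A minimal sketch of the Scenario 4 Service as a LoadBalancer; the only required change is type: LoadBalancer, and the GKE annotation shown for requesting an internal load balancer is an assumption that may differ per provider and version:

apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: tests
  annotations:
    # Assumption: on GKE this requests an internal (private) load balancer
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer # the only change compared to Scenario 4
  selector:
    type: server-www
  ports:
    - name: nginx
      protocol: TCP
      port: 90
      targetPort: nginx-port
    - name: apache
      protocol: TCP
      port: 91
      targetPort: apache-port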

One of the benefits is cost optimization: you pay for a single Load Balancer instead of one per application.