So in the last few days, I tried to find a way to dynamically attach hostnames (like game-1.myapp.com) and route TCP & UDP traffic to Steam Dedicated Servers on Kubernetes. I have attached a diagram of how I planned it, but there are some issues I encountered.
I can dynamically create Namespaces, Pods (controlled by StatefulSets), PVCs, Services, and Ingresses for each individual game server using the Kubernetes API. Each game server lives in its own namespace, completely separated from the others. I verified that the server itself runs fine under the hood: the Pod is Running and active, and the logs look good.
I got stuck when I needed to attach the StatefulSet's Service to an Ingress that can continuously relay TCP/UDP traffic via a namespaced DNS name, routing to the cluster's Ingress Controller (in Minikube; in production an ALB/NLB should be used, AFAIK).
Somehow, I need a way to route game-xxxxx.myapp.com to the specific game-xxxxx namespace's Pod. It doesn't really matter whether they have appended ports or not.
For this, I could simply call the DNS provider's API for myapp.com and add or remove A records when needed. That seems okay, but I have found out that I can use ExternalDNS (https://github.com/bitnami/charts/tree/master/bitnami/external-dns) to do this automatically for me, based on the existing Services.
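For reference, ExternalDNS can pick the hostname up from a Service annotation, so no manual A-record API calls are needed. A minimal sketch, assuming the game-xxxxx names from my setup and an externally reachable Service for the record to point at:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rust-service
  namespace: game-xxxxx
  annotations:
    # ExternalDNS watches Services and creates/removes this A record automatically
    external-dns.alpha.kubernetes.io/hostname: game-xxxxx.myapp.com
spec:
  type: LoadBalancer # ExternalDNS needs an external address to point the record at
  selector:
    game: rust
  ports:
    - name: rust-server-udp
      protocol: UDP
      port: 28015
      targetPort: 28015
```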
What I have tried, no luck yet:
Setting up NGINX, but I had to define the exposed ports for each Service. Based on their documentation (https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services), it is overkill to modify that ConfigMap and recreate the NGINX pods on every change; with many servers coming and going, this does not seem viable. Plus, I doubt NGINX will hold up well under heavy load; it seems more suitable for web servers than game servers.
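For context, this is roughly what those per-port ConfigMap entries look like per the linked docs (the namespace/service names are placeholders from my setup); every new server needs another entry here plus another exposed port on the controller, which is why it doesn't scale:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # external port -> <namespace>/<service>:<port>
  "28015": "game-xxxxx/rust-service:28015"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  "28015": "game-xxxxx/rust-service:28015"
```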
Also, I need a way to allow duplicate ports. For example, I cannot assign the same 28015 port in NGINX to multiple servers, even when they are in different namespaces. If I use Agones (https://github.com/googleforgames/agones/blob/release-1.9.0/examples/gameserver.yaml) to assign random ports, at some point I might run out of ports to assign.
I have tried to use Traefik, but had no luck. The IngressRouteTCP/IngressRouteUDP resources route TCP/UDP traffic from an EntryPoint through a Router to the assigned Service. I am not really sure how this works; I tried setting annotations on Services and defining entry points, but it still refuses to work: https://doc.traefik.io/traefik/routing/providers/kubernetes-crd/#kind-ingressroutetcp
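If anyone wants to reproduce, this is roughly the shape I tried (the entry point names are my own and must also exist in Traefik's static configuration). Note that without TLS there is no SNI, so plain TCP routes can only match HostSNI(`*`) and UDP routes have no matcher at all, which rules out per-subdomain routing of raw game traffic:

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: rust-tcp
  namespace: game-xxxxx
spec:
  entryPoints:
    - rust-tcp # static config: --entrypoints.rust-tcp.address=:28015/tcp
  routes:
    # Plain (non-TLS) TCP can only match HostSNI(`*`), i.e. catch-all
    - match: HostSNI(`*`)
      services:
        - name: rust-service
          port: 28015
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteUDP
metadata:
  name: rust-udp
  namespace: game-xxxxx
spec:
  entryPoints:
    - rust-udp # static config: --entrypoints.rust-udp.address=:28015/udp
  routes:
    - services:
        - name: rust-service
          port: 28015
```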
Agones is built for game servers and supports the TCPUDP protocol for service ports, but again, no luck with this either.
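For completeness, this is the shape of GameServer I tried, following the linked example (the image and port are from my setup):

```yaml
apiVersion: agones.dev/v1
kind: GameServer
metadata:
  name: rust-server
spec:
  ports:
    - name: default
      portPolicy: Dynamic # Agones assigns a free host port from its configured range
      containerPort: 28015
      protocol: TCPUDP # exposes the same port for both TCP and UDP
  template:
    spec:
      containers:
        - name: rust
          image: didstopia/rust-server:latest
```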
I have posted below the diagram of how things should work. I also have the following YAML file that creates the StatefulSet, a PVC, and the Service. You can see I tried an ExternalName setup, hoping I could point that name at the Minikube IP and connect; yet again, no luck:
Steam Dedicated Server workflow
apiVersion: v1
kind: Service
metadata:
  name: rust-service
  labels:
    game: rust
spec:
  # type: ExternalName
  # externalName: rust-1.rust.coal.app
  # clusterIP: ""
  selector:
    game: rust
  ports:
    - name: rust-server-tcp
      protocol: TCP
      port: 28015
      targetPort: 28015
    - name: rust-server-udp
      protocol: UDP
      port: 28015
      targetPort: 28015
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rust-server
spec:
  selector:
    matchLabels:
      game: rust
  replicas: 1
  serviceName: rust-service
  template:
    metadata:
      name: rust-server
      labels:
        game: rust
    spec:
      containers:
        - name: rust
          image: didstopia/rust-server:latest
          ports:
            - name: rust-server-tcp
              protocol: TCP
              containerPort: 28015
            - name: rust-server-udp
              protocol: UDP
              containerPort: 28015
          volumeMounts:
            - name: local-disk
              # mount was missing, so the PVC went unused; adjust the path to the image's data directory
              mountPath: /steamcmd/rust
  volumeClaimTemplates:
    - metadata:
        name: local-disk
      spec:
        resources:
          requests:
            storage: "10Gi"
        accessModes: ["ReadWriteOnce"]
A side note!
If an Ingress resource is used/mentioned, it refers to HTTP/HTTPS traffic.
The diagram that you've posted looks like a good opportunity to use a Service of type LoadBalancer.
A Service of type LoadBalancer handles external TCP/UDP traffic (Layer 4).
Disclaimer!
This solution supports only one protocol at a time, either TCP or UDP. To have both protocols on the same port, you will need to fall back to a Service of type NodePort (which allocates a port on a node from the 30000-32767 range). You can read more about creating a cloud-agnostic LoadBalancer that uses a NodePort type of Service by following this link:
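A sketch of such a NodePort Service, reusing the asker's labels and ports; as far as I know the same nodePort number can be reused within one Service when the protocols differ:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rust-service
spec:
  type: NodePort
  selector:
    game: rust
  ports:
    - name: rust-server-tcp
      protocol: TCP
      port: 28015
      targetPort: 28015
      nodePort: 30015 # must fall within the 30000-32767 range
    - name: rust-server-udp
      protocol: UDP
      port: 28015
      targetPort: 28015
      nodePort: 30015 # same number, different protocol
```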
In this setup, Ingress controllers like Traefik or NGINX are not needed, as they would only be an additional hop between your client and the Pod.
You already have an example of such a LoadBalancer in your YAML definition (I slightly modified it):
apiVersion: v1
kind: Service
metadata:
  name: rust-service
  labels:
    game: rust
spec:
  type: LoadBalancer # <-- THE CHANGE
  selector:
    game: rust
  ports:
    - name: rust-server-tcp
      protocol: TCP
      port: 28015
      targetPort: 28015

If you are intending to use AWS with its EKS, please refer to its documentation:
Example of a possible setup (steps):
- namespace "game-X-namespace"
- deployment "game-X-deployment"
- service of type LoadBalancer "game-X-" that would point to "game-X-deployment"
- DNS record pointing "game-X.com" to the IP of the LoadBalancer created in the previous step.

Each LoadBalancer would have its own IP and a DNS name associated with it, like:
- awesome-game.com with IP 123.123.123.123, connecting on port 28015/TCP
- magnificent-game.com with IP 234.234.234.234, connecting on port 28015/TCP

I reckon this Medium guide to creating a dedicated Steam server could prove useful:
Additional resources:
So as the previous reply said, Ingresses are for web traffic. I didn't get a working setup using Ingresses, BUT I have managed to use a Service with NodePort and create a DNS record with ExternalDNS that binds the custom sub-domain name to the right IP: https://github.com/kubernetes-sigs/external-dns
Now the remaining issue comes after that: once I create a Deployment and wait for the Pod to get scheduled, I have to make sure the Pod sticks with that node, or, in case the node gets drained, that I somehow keep the same node IP (a non-ephemeral one) associated with the Pod for as long as the Deployment exists. For this, I should label nodes so that an existing Deployment keeps placing its Pods on the same node.
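A sketch of the node pinning I mean, assuming a hypothetical game-server=rust label applied beforehand (e.g. kubectl label node <node-name> game-server=rust):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rust-server
spec:
  replicas: 1
  selector:
    matchLabels:
      game: rust
  template:
    metadata:
      labels:
        game: rust
    spec:
      nodeSelector:
        game-server: rust # only schedule onto the labeled node(s)
      containers:
        - name: rust
          image: didstopia/rust-server:latest
```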