I want a DaemonSet-based Redis where every node has its own cache, and each deployment pod talks to the DaemonSet Redis on its local node. How can I achieve this? How do I reference the DaemonSet pod on the same node from within a container?
UPDATE: I would rather not use a Service, and instead make sure each pod accesses its node-local DaemonSet pod.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: redislocal
spec:
  selector:
    matchLabels:
      name: redislocal
  template:
    metadata:
      labels:
        name: redislocal
    spec:
      hostNetwork: true
      containers:
      - name: redislocal
        image: redis:5.0.5-alpine
        ports:
        - containerPort: 6379
          hostPort: 6379
There is a way of doing this without a Service. You can expose Pod information to containers through environment variables (the Downward API), and use status.hostIP to get the IP address of the node the pod is running on. This was introduced in Kubernetes 1.7.
You can add this to your pod or deployment YAML:
env:
- name: HOST_IP
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
This sets a variable HOST_IP to the IP of the node the pod is running on, which you can then use to connect to the local DaemonSet pod.
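For illustration, here is a minimal sketch of a Deployment that consumes HOST_IP this way; the image name and the REDIS_ADDR variable are placeholders, assuming your application reads its Redis address from an environment variable:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:latest          # placeholder image
        env:
        - name: HOST_IP              # node IP injected via the Downward API
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: REDIS_ADDR           # hypothetical variable; $(HOST_IP) expands because HOST_IP is defined above
          value: "$(HOST_IP):6379"

Because the DaemonSet runs with hostNetwork: true and hostPort: 6379, connecting to HOST_IP:6379 reaches the Redis instance on the same node.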
This is an old question, but I recently had to tackle the exact same problem, and just using the node IP did not really do it when deploying the Redis DaemonSet through the Bitnami Redis Helm chart. The solution was simpler: set internalTrafficPolicy: Local on the Service, which forces calls made to the Service's clusterIP to be routed to a node-local endpoint (and only a node-local one):
The kube-proxy filters the endpoints it routes to based on the spec.internalTrafficPolicy setting. When it's set to Local, only node local endpoints are considered. When it's Cluster (the default), or is not set, Kubernetes considers all endpoints.
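A minimal sketch of such a Service, assuming the name: redislocal label from the DaemonSet above (the internalTrafficPolicy field went GA in Kubernetes 1.26):

apiVersion: v1
kind: Service
metadata:
  name: redislocal
spec:
  internalTrafficPolicy: Local   # only endpoints on the calling pod's node are used
  selector:
    name: redislocal
  ports:
  - port: 6379
    targetPort: 6379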
You should define a Service (selecting all the Redis pods) and then communicate with Redis through it from the other pods.
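For completeness, a plain Service along those lines might look like the sketch below (the service name is assumed); note that without internalTrafficPolicy: Local it load-balances across the Redis pods on all nodes rather than pinning each caller to its local one:

apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    name: redislocal   # matches the DaemonSet pods
  ports:
  - port: 6379
    targetPort: 6379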