My setup
I have a one-physical-node K8s cluster where I taint the master node so it can also act as a worker. The node runs CentOS 7 with 512 GB of memory in total. I am limiting my experiments to a one-node cluster; once a solution is found, I will test it on my small-scale K8s cluster where the master and worker services are on separate nodes.
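For reference, a quick way to check the memory capacity the node reports (`<node-name>` is a placeholder for the actual node name):

    # Memory under Capacity should show the full 512 GB at this point
    kubectl describe node <node-name> | grep -A 5 Capacity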
What I am trying to do
I want the K8s worker to use only 256 GB of memory to begin with. Later on, if a particular node condition is met, I want to increase the memory allocation for the K8s worker to (let's say) 400 GB.
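One alternative worth noting: instead of offlining physical memory, kubelet's systemReserved setting can shrink the allocatable memory the scheduler sees. A minimal sketch, assuming the kubeadm default config path /var/lib/kubelet/config.yaml (merge the key by hand if systemReserved already exists in that file):

    # Reserve 256 GiB for "the system" so only ~256 GiB stays allocatable to pods
    cat >> /var/lib/kubelet/config.yaml <<'EOF'
    systemReserved:
      memory: 256Gi
    EOF
    systemctl restart kubelet

Raising the worker's allocation later would then mean lowering systemReserved and restarting kubelet, rather than onlining memory blocks.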
Where I am right now with this
1. chmem -d <range> to offline 256 GB of memory. Now the OS only sees 256 GB of memory as available.
2. kubeadm init, then kubectl taint nodes --all node-role.kubernetes.io/master- so the single node can also schedule pods.
3. Run a pod that just does sleep 100000, so there is no memory stress.
4. chmem -e <range> to enable some memory; now the OS sees 400 GB.

It's a long shot, but you can try to restart kubelet via systemctl restart kubelet. The containers should not be restarted this way, and there's hope that once restarted, it will notice the increased memory configuration.
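The steps above as a runnable sketch (assumptions on my part: chmem sizes instead of explicit address ranges, and a busybox image for the idle pod):

    # Offline 256 GB so the OS (and kubelet) only sees 256 GB
    chmem -d 256g

    # Bring up the single-node cluster and let the master schedule pods
    kubeadm init
    kubectl taint nodes --all node-role.kubernetes.io/master-

    # Idle pod so there is no memory stress
    kubectl run sleeper --image=busybox --restart=Never -- sleep 100000

    # Later: online another 144 GB, so the OS now sees 400 GB
    chmem -e 144g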
    kubectl describe nodes
    systemctl restart kubelet
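To confirm the kubelet restart did not bounce the running containers, compare the pod's restart count before and after (the sleeper pod name comes from the sketch above):

    # RESTARTS should stay at 0 and AGE should be unchanged
    kubectl get pod sleeper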