I created a Helm chart which has secrets.yaml as:

apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: appdbpassword
stringData:
  password: password@1
My pod is:

apiVersion: v1
kind: Pod
metadata:
  name: expense-pod-sample-1
spec:
  containers:
    - name: expense-container-sample-1
      image: exm:1
      command: [ "/bin/sh", "-c", "--" ]
      args: [ "while true; do sleep 30; done;" ]
      envFrom:
        - secretRef:
            name: appdbpassword
Whenever I run the kubectl get secrets command, I get the following secrets:

NAME                                      TYPE                 DATA   AGE
appdbpassword                             Opaque               1      41m
sh.helm.release.v1.myhelm-1572515128.v1   helm.sh/release.v1   1      41m
Why am I getting that extra secret? Am I missing something here?
Helm v2 used ConfigMaps by default to store release information. The ConfigMaps were created in the same namespace as the Tiller (generally kube-system).
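On a Helm v2 cluster you could list those release ConfigMaps yourself; this sketch assumes Tiller runs in kube-system and relies on the OWNER=TILLER label that Helm v2 applies to its release ConfigMaps:

kubectl get configmaps -n kube-system -l "OWNER=TILLER"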
In Helm v3 the Tiller was removed, and the information about each release version had to go somewhere:
In Helm 3, release information about a particular release is now stored in the same namespace as the release itself.
Furthermore, Helm v3 uses Secrets as its default storage driver instead of ConfigMaps (i.e., it's expected that you see one of these Helm secrets in every namespace that contains a release revision).
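You can inspect that extra secret to confirm it holds release data. A sketch, using the release name from your output and the owner=helm label Helm v3 puts on its release Secrets (the decode pipeline assumes the payload is base64-encoded gzipped JSON inside the Secret's base64-encoded data field, and that base64, gunzip, and a default namespace apply):

kubectl get secrets -l "owner=helm"
kubectl get secret sh.helm.release.v1.myhelm-1572515128.v1 -o jsonpath='{.data.release}' | base64 -d | base64 -d | gunzip -c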
There is an option to helm upgrade to limit the number of old release secrets that are kept:

--history-max int   limit the maximum number of revisions saved per release. Use 0 for no limit (default 10)
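For example, using the release name from your output (the chart path ./mychart is a placeholder for wherever your chart lives):

helm upgrade myhelm-1572515128 ./mychart --history-max 3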
In short: there is no Tiller anymore in Helm 3, so release information is now stored in the same namespace as the release itself, as a Secret, which is Helm's default storage driver.
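If you prefer the Helm v2 behaviour, the storage backend can be switched per invocation with the HELM_DRIVER environment variable (documented values include secret, configmap, and memory); a minimal sketch:

HELM_DRIVER=configmap helm list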