I have a service account in Kubernetes:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: testsa
  namespace: project-1

And I've assigned it the view role:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: testsa-view
  namespace: project-1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: testsa
  namespace: project-1

This should grant the service account read access to all resources. Inside a pod in the project-1 namespace I am trying to run the following Python code:

>>> from kubernetes import client, config
>>> config.load_incluster_config()
>>> api = client.CoreV1Api()
>>> api.list_pod_for_all_namespaces()
But this fails with a 403 error:
kubernetes.client.rest.ApiException: (403)
Reason: Forbidden
[...]
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods is forbidden: User \"system:serviceaccount:project-1:testsa\" cannot list resource \"pods\" in API group \"\" at the cluster scope","reason":"Forbidden","details":{"kind":"pods"},"code":403}

The pod is associated with the service account:

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testsa
  name: testsa-2-l929g
  namespace: project-1
spec:
  serviceAccountName: testsa
  automountServiceAccountToken: true
  containers:
  - image: larsks/testsa
    imagePullPolicy: Always
    name: testsa
    ports:
    - containerPort: 8080
      protocol: TCP
    resources: {}

And inside the container, I can see the mounted secrets:

/src $ find /run/secrets/ -type f
/run/secrets/kubernetes.io/serviceaccount/..2020_09_04_16_30_26.292719465/ca.crt
/run/secrets/kubernetes.io/serviceaccount/..2020_09_04_16_30_26.292719465/token
/run/secrets/kubernetes.io/serviceaccount/..2020_09_04_16_30_26.292719465/service-ca.crt
/run/secrets/kubernetes.io/serviceaccount/..2020_09_04_16_30_26.292719465/namespace
/run/secrets/rhsm/ca/redhat-uep.pem
/run/secrets/rhsm/ca/redhat-entitlement-authority.pem

What am I missing here?
The error says cannot list resource "pods" in API group "" at the cluster scope because list_pod_for_all_namespaces() requests pods across the entire cluster, not just the pods in the project-1 namespace.
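If the service account only needs to see pods in its own namespace, the existing RoleBinding is already enough; a minimal sketch with the Python client, using the namespaced call instead of the cluster-wide one:

from kubernetes import client, config

# Load credentials from the service account token mounted into the pod
config.load_incluster_config()
api = client.CoreV1Api()

# Namespaced call: permitted by the RoleBinding to the view ClusterRole
pods = api.list_namespaced_pod(namespace="project-1")
for pod in pods.items:
    print(pod.metadata.name)

To keep the cluster-wide call working, though, the binding itself must be cluster-scoped.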
So change the RoleBinding to a ClusterRoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: testsa-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: testsa
  namespace: project-1

As the examples in the Kubernetes RBAC documentation show, a RoleBinding only grants access to resources within its own namespace, even when it references a ClusterRole. Cluster-scoped access requires a ClusterRoleBinding.
You can use the commands below to check the permissions of a service account:
kubectl auth can-i --list --as=system:serviceaccount:project-1:testsa
kubectl auth can-i --list --as=system:serviceaccount:project-1:testsa -n project-1
kubectl auth can-i list pods --as=system:serviceaccount:project-1:testsa
kubectl auth can-i list pods --as=system:serviceaccount:project-1:testsa -n project-1
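You can also run the same check from inside the pod with a SelfSubjectAccessReview; a minimal sketch using the Python client (omitting namespace makes it a cluster-scope check, mirroring the failing call):

from kubernetes import client, config

config.load_incluster_config()
authz = client.AuthorizationV1Api()

# Ask the API server whether the pod's service account may list pods cluster-wide
review = client.V1SelfSubjectAccessReview(
    spec=client.V1SelfSubjectAccessReviewSpec(
        resource_attributes=client.V1ResourceAttributes(
            verb="list",
            resource="pods",
        )
    )
)
result = authz.create_self_subject_access_review(review)
print(result.status.allowed)

Add namespace="project-1" to the resource attributes to check the namespaced permission instead; with only the original RoleBinding that check should come back allowed while the cluster-scoped one does not.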