{
	"id": "c02ebb8e-fccf-486d-97c4-32ae17967a9b",
	"created_at": "2026-04-06T00:19:42.181139Z",
	"updated_at": "2026-04-10T03:21:52.963921Z",
	"deleted_at": null,
	"sha1_hash": "8f4aea0e0e29729576f7aebcb41764e9d5c1acb8",
	"title": "Configure a Security Context for a Pod or Container",
	"llm_title": "",
	"authors": "",
	"file_creation_date": "0001-01-01T00:00:00Z",
	"file_modification_date": "0001-01-01T00:00:00Z",
	"file_size": 243685,
	"plain_text": "Configure a Security Context for a Pod or Container\r\nArchived: 2026-04-05 17:45:59 UTC\r\nA security context defines privilege and access control settings for a Pod or Container. Security context settings\r\ninclude, but are not limited to:\r\nDiscretionary Access Control: Permission to access an object, like a file, is based on user ID (UID) and\r\ngroup ID (GID).\r\nSecurity Enhanced Linux (SELinux): Objects are assigned security labels.\r\nRunning as privileged or unprivileged.\r\nLinux Capabilities: Give a process some privileges, but not all the privileges of the root user.\r\nAppArmor: Use program profiles to restrict the capabilities of individual programs.\r\nSeccomp: Filter a process's system calls.\r\nallowPrivilegeEscalation : Controls whether a process can gain more privileges than its parent process.\r\nThis bool directly controls whether the no_new_privs flag gets set on the container process.\r\nallowPrivilegeEscalation is always true when the container:\r\nis run as privileged, or\r\nhas CAP_SYS_ADMIN\r\nreadOnlyRootFilesystem : Mounts the container's root filesystem as read-only.\r\nThe above bullets are not a complete set of security context settings -- please see SecurityContext for a\r\ncomprehensive list.\r\nBefore you begin\r\nYou need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate\r\nwith your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as\r\ncontrol plane hosts. If you do not already have a cluster, you can create one by using minikube or you can use one\r\nof these Kubernetes playgrounds:\r\niximiuz Labs\r\nKillercoda\r\nKodeKloud\r\nTo check the version, enter kubectl version .\r\nSet the security context for a Pod\r\nhttps://kubernetes.io/docs/tasks/configure-pod-container/security-context/\r\nPage 1 of 18\n\nTo specify security settings for a Pod, include the securityContext field in the Pod specification. 
The securityContext field is a PodSecurityContext object. The security settings that you specify for a Pod apply to all Containers in the Pod. Here is a configuration file for a Pod that has a securityContext and an emptyDir volume:\r\napiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  name: security-context-demo\r\nspec:\r\n  securityContext:\r\n    runAsUser: 1000\r\n    runAsGroup: 3000\r\n    fsGroup: 2000\r\n    supplementalGroups: [4000]\r\n  volumes:\r\n  - name: sec-ctx-vol\r\n    emptyDir: {}\r\n  containers:\r\n  - name: sec-ctx-demo\r\n    image: busybox:1.28\r\n    command: [ \"sh\", \"-c\", \"sleep 1h\" ]\r\n    volumeMounts:\r\n    - name: sec-ctx-vol\r\n      mountPath: /data/demo\r\n    securityContext:\r\n      allowPrivilegeEscalation: false\r\nIn the configuration file, the runAsUser field specifies that for any Containers in the Pod, all processes run with user ID 1000. The runAsGroup field specifies the primary group ID of 3000 for all processes within any containers of the Pod. If this field is omitted, the primary group ID of the containers will be root (0). Any files created will also be owned by user 1000 and group 3000 when runAsGroup is specified. Since the fsGroup field is specified, all processes of the container are also part of the supplementary group ID 2000. The owner of the volume /data/demo and of any files created in that volume will be group ID 2000. Additionally, when the supplementalGroups field is specified, all processes of the container are also part of the specified groups.
If this field is omitted, it defaults to empty.\r\nCreate the Pod:\r\nkubectl apply -f https://k8s.io/examples/pods/security/security-context.yaml\r\nVerify that the Pod's Container is running:\r\nkubectl get pod security-context-demo\r\nGet a shell to the running Container:\r\nkubectl exec -it security-context-demo -- sh\r\nIn your shell, list the running processes:\r\nps\r\nThe output shows that the processes are running as user 1000, which is the value of runAsUser:\r\nPID USER TIME COMMAND\r\n 1 1000 0:00 sleep 1h\r\n 6 1000 0:00 sh\r\n...\r\nIn your shell, navigate to /data, and list the one directory:\r\ncd /data\r\nls -l\r\nThe output shows that the /data/demo directory has group ID 2000, which is the value of fsGroup.\r\ndrwxrwsrwx 2 root 2000 4096 Jun 6 20:08 demo\r\nIn your shell, navigate to /data/demo, and create a file:\r\ncd demo\r\necho hello \u003e testfile\r\nList the file in the /data/demo directory:\r\nls -l\r\nThe output shows that testfile has group ID 2000, which is the value of fsGroup.\r\n-rw-r--r-- 1 1000 2000 6 Jun 6 20:08 testfile\r\nRun the following command:\r\nid\r\nThe output is similar to this:\r\nuid=1000 gid=3000 groups=2000,3000,4000\r\nFrom the output, you can see that gid is 3000, which is the same as the runAsGroup field. If runAsGroup was omitted, the gid would remain 0 (root), and the process would be able to interact with files that are owned by the root (0) group and by groups that have the required group permissions for the root (0) group.
You can also see that groups contains the group IDs which are specified by fsGroup and supplementalGroups, in addition to gid.\r\nExit your shell:\r\nexit\r\nImplicit group memberships defined in /etc/group in the container image\r\nBy default, Kubernetes merges group information from the Pod with information defined in /etc/group in the container image.\r\napiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  name: security-context-demo\r\nspec:\r\n  securityContext:\r\n    runAsUser: 1000\r\n    runAsGroup: 3000\r\n    supplementalGroups: [4000]\r\n  containers:\r\n  - name: sec-ctx-demo\r\n    image: registry.k8s.io/e2e-test-images/agnhost:2.45\r\n    command: [ \"sh\", \"-c\", \"sleep 1h\" ]\r\n    securityContext:\r\n      allowPrivilegeEscalation: false\r\nThis Pod security context contains runAsUser, runAsGroup and supplementalGroups. However, you can see that the actual supplementary groups attached to the container process will include group IDs which come from /etc/group in the container image.\r\nCreate the Pod:\r\nkubectl apply -f https://k8s.io/examples/pods/security/security-context-5.yaml\r\nVerify that the Pod's Container is running:\r\nkubectl get pod security-context-demo\r\nGet a shell to the running Container:\r\nkubectl exec -it security-context-demo -- sh\r\nCheck the process identity:\r\nid\r\nThe output is similar to this:\r\nuid=1000 gid=3000 groups=3000,4000,50000\r\nYou can see that groups includes group ID 50000.
This is because the user (uid=1000), which is defined in the image, belongs to the group (gid=50000), which is defined in /etc/group inside the container image.\r\nCheck the /etc/group in the container image:\r\ncat /etc/group\r\nYou can see that uid 1000 belongs to group 50000.\r\n...\r\nuser-defined-in-image:x:1000:\r\ngroup-defined-in-image:x:50000:user-defined-in-image\r\nExit your shell:\r\nexit\r\nNote:\r\nImplicitly merged supplementary groups may cause security problems, particularly when accessing volumes (see kubernetes/kubernetes#112879 for details). If you want to avoid this, see the section below.\r\nFine-grained SupplementalGroups control\r\nFEATURE STATE: Kubernetes v1.33 [beta] (enabled by default)\r\nThis feature can be enabled by setting the SupplementalGroupsPolicy feature gate for kubelet and kube-apiserver, and setting the .spec.securityContext.supplementalGroupsPolicy field for a pod.\r\nThe supplementalGroupsPolicy field defines the policy for calculating the supplementary groups for the container processes in a pod. There are two valid values for this field:\r\nMerge: The group membership defined in /etc/group for the container's primary user will be merged. This is the default policy if not specified.\r\nStrict: Only group IDs in the fsGroup, supplementalGroups, or runAsGroup fields are attached as the supplementary groups of the container processes. This means no group membership from /etc/group for the container's primary user will be merged.\r\nWhen the feature is enabled, it also exposes the process identity attached to the first container process in the .status.containerStatuses[].user.linux field.
This is useful for detecting whether implicit group IDs are attached.\r\napiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  name: security-context-demo\r\nspec:\r\n  securityContext:\r\n    runAsUser: 1000\r\n    runAsGroup: 3000\r\n    supplementalGroups: [4000]\r\n    supplementalGroupsPolicy: Strict\r\n  containers:\r\n  - name: sec-ctx-demo\r\n    image: registry.k8s.io/e2e-test-images/agnhost:2.45\r\n    command: [ \"sh\", \"-c\", \"sleep 1h\" ]\r\n    securityContext:\r\n      allowPrivilegeEscalation: false\r\nThis pod manifest defines supplementalGroupsPolicy=Strict. You can see that no group memberships defined in /etc/group are merged to the supplementary groups for container processes.\r\nCreate the Pod:\r\nkubectl apply -f https://k8s.io/examples/pods/security/security-context-6.yaml\r\nVerify that the Pod's Container is running:\r\nkubectl get pod security-context-demo\r\nCheck the process identity:\r\nkubectl exec -it security-context-demo -- id\r\nThe output is similar to this:\r\nuid=1000 gid=3000 groups=3000,4000\r\nSee the Pod's status:\r\nkubectl get pod security-context-demo -o yaml\r\nYou can see that the status.containerStatuses[].user.linux field exposes the process identity attached to the first container process.\r\n...\r\nstatus:\r\n  containerStatuses:\r\n  - name: sec-ctx-demo\r\n    user:\r\n      linux:\r\n        gid: 3000\r\n        supplementalGroups:\r\n        - 3000\r\n        - 4000\r\n        uid: 1000\r\n...\r\nNote:\r\nPlease note that the value in the status.containerStatuses[].user.linux field is the identity initially attached to the first container process in the container. If the container has sufficient privilege to make system calls related to process identity (e.g. setuid(2), setgid(2) or setgroups(2), etc.), the container process can change its identity.
Thus, the actual process identity will be dynamic.\r\nNote: This section links to third party projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for these projects, which are listed alphabetically. To add a project to this list, read the content guide before submitting a change. More information.\r\nThe following container runtimes are known to support fine-grained SupplementalGroups control.\r\nCRI-level:\r\ncontainerd, since v2.0\r\nCRI-O, since v1.31\r\nYou can see if the feature is supported in the Node status.\r\napiVersion: v1\r\nkind: Node\r\n...\r\nstatus:\r\n  features:\r\n    supplementalGroupsPolicy: true\r\nNote:\r\nIn the alpha releases (v1.31 to v1.32), when a pod with SupplementalGroupsPolicy=Strict is scheduled to a node that does NOT support this feature (i.e. .status.features.supplementalGroupsPolicy=false), the pod's supplemental groups policy silently falls back to the Merge policy.\r\nHowever, since the beta release (v1.33), to enforce the policy more strictly, such pod creation will be rejected by kubelet because the node cannot ensure the specified policy. When your pod is rejected, you will see warning events with reason=SupplementalGroupsPolicyNotSupported like below:\r\napiVersion: v1\r\nkind: Event\r\n...\r\ntype: Warning
For large volumes, checking\r\nand changing ownership and permissions can take a lot of time, slowing Pod startup. You can use the\r\nfsGroupChangePolicy field inside a securityContext to control the way that Kubernetes checks and manages\r\nownership and permissions for a volume.\r\nfsGroupChangePolicy - fsGroupChangePolicy defines behavior for changing ownership and permission of the\r\nvolume before being exposed inside a Pod. This field only applies to volume types that support fsGroup\r\ncontrolled ownership and permissions. This field has two possible values:\r\nOnRootMismatch: Only change permissions and ownership if the permission and the ownership of root\r\ndirectory does not match with expected permissions of the volume. This could help shorten the time it\r\ntakes to change ownership and permission of a volume.\r\nAlways: Always change permission and ownership of the volume when volume is mounted.\r\nFor example:\r\nsecurityContext:\r\n runAsUser: 1000\r\n runAsGroup: 3000\r\n fsGroup: 2000\r\n fsGroupChangePolicy: \"OnRootMismatch\"\r\nDelegating volume permission and ownership change to CSI driver\r\nFEATURE STATE: Kubernetes v1.26 [stable]\r\nIf you deploy a Container Storage Interface (CSI) driver which supports the VOLUME_MOUNT_GROUP\r\nNodeServiceCapability , the process of setting file ownership and permissions based on the fsGroup specified\r\nin the securityContext will be performed by the CSI driver instead of Kubernetes. 
In this case, since Kubernetes doesn't perform any ownership and permission change, fsGroupChangePolicy does not take effect, and as specified by CSI, the driver is expected to mount the volume with the provided fsGroup, resulting in a volume that is readable/writable by the fsGroup.\r\nSet the security context for a Container\r\nTo specify security settings for a Container, include the securityContext field in the Container manifest. The securityContext field is a SecurityContext object. Security settings that you specify for a Container apply only to the individual Container, and they override settings made at the Pod level when there is overlap. Container settings do not affect the Pod's Volumes.\r\nHere is the configuration file for a Pod that has one Container. Both the Pod and the Container have a securityContext field:\r\napiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  name: security-context-demo-2\r\nspec:\r\n  securityContext:\r\n    runAsUser: 1000\r\n  containers:\r\n  - name: sec-ctx-demo-2\r\n    image: gcr.io/google-samples/hello-app:2.0\r\n    securityContext:\r\n      runAsUser: 2000\r\n      allowPrivilegeEscalation: false\r\nCreate the Pod:\r\nkubectl apply -f https://k8s.io/examples/pods/security/security-context-2.yaml\r\nVerify that the Pod's Container is running:\r\nkubectl get pod security-context-demo-2\r\nGet a shell into the running Container:\r\nkubectl exec -it security-context-demo-2 -- sh\r\nIn your shell, list the running processes:\r\nps aux\r\nThe output shows that the processes are running as user 2000. This is the value of runAsUser specified for the Container. It overrides the value 1000 that is specified for the Pod.\r\nUSER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\r\n2000 1 0.0 0.0 4336 764 ? Ss 20:36 0:00 /bin/sh -c node server.js\r\n2000 8 0.1 0.5 772124 22604 ? Sl 20:36 0:00 node server.js\r\n...\r\nExit your shell:\r\nexit\r\nSet capabilities for a Container\r\nWith Linux capabilities, you can grant certain privileges to a process without granting all the privileges of the root user. To add or drop Linux capabilities for a Container, include the capabilities field in the securityContext section of the Container manifest.\r\nFirst, see what happens when you don't include a capabilities field. Here is a configuration file that does not add or drop any Container capabilities:\r\napiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  name: security-context-demo-3\r\nspec:\r\n  containers:\r\n  - name: sec-ctx-3\r\n    image: gcr.io/google-samples/hello-app:2.0\r\nCreate the Pod:\r\nkubectl apply -f https://k8s.io/examples/pods/security/security-context-3.yaml\r\nVerify that the Pod's Container is running:\r\nkubectl get pod security-context-demo-3\r\nGet a shell into the running Container:\r\nkubectl exec -it security-context-demo-3 -- sh\r\nIn your shell, list the running processes:\r\nps aux\r\nThe output shows the process IDs (PIDs) for the Container:\r\nUSER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\r\nroot 1 0.0 0.0 4336 796 ? Ss 18:17 0:00 /bin/sh -c node server.js\r\nroot 5 0.1 0.5 772124 22700 ? Sl 18:17 0:00 node server.js\r\nIn your shell, view the status for process 1:\r\ncd /proc/1\r\ncat status\r\nThe output shows the capabilities bitmap for the process:\r\n...\r\nCapPrm: 00000000a80425fb\r\nCapEff: 00000000a80425fb\r\n...\r\nMake a note of the capabilities bitmap, and then exit your shell:\r\nexit\r\nNext, run a Container that is the same as the preceding container, except that it has additional capabilities set.\r\nHere is the configuration file for a Pod that runs one Container.
The configuration adds the CAP_NET_ADMIN and CAP_SYS_TIME capabilities:\r\napiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  name: security-context-demo-4\r\nspec:\r\n  containers:\r\n  - name: sec-ctx-4\r\n    image: gcr.io/google-samples/hello-app:2.0\r\n    securityContext:\r\n      capabilities:\r\n        add: [\"NET_ADMIN\", \"SYS_TIME\"]\r\nCreate the Pod:\r\nkubectl apply -f https://k8s.io/examples/pods/security/security-context-4.yaml\r\nGet a shell into the running Container:\r\nkubectl exec -it security-context-demo-4 -- sh\r\nIn your shell, view the capabilities for process 1:\r\ncd /proc/1\r\ncat status\r\nThe output shows the capabilities bitmap for the process:\r\n...\r\nCapPrm: 00000000aa0435fb\r\nCapEff: 00000000aa0435fb\r\n...\r\nCompare the capabilities of the two Containers:\r\n00000000a80425fb\r\n00000000aa0435fb\r\nIn the capability bitmap of the first container, bits 12 and 25 are clear. In the second container, bits 12 and 25 are set. Bit 12 is CAP_NET_ADMIN, and bit 25 is CAP_SYS_TIME. See capability.h for definitions of the capability constants.\r\nNote:\r\nLinux capability constants have the form CAP_XXX. But when you list capabilities in your container manifest, you must omit the CAP_ portion of the constant. For example, to add CAP_SYS_TIME, include SYS_TIME in your list of capabilities.\r\nSet the Seccomp Profile for a Container\r\nTo set the Seccomp profile for a Container, include the seccompProfile field in the securityContext section of your Pod or Container manifest. The seccompProfile field is a SeccompProfile object consisting of type and localhostProfile. Valid options for type include RuntimeDefault, Unconfined, and Localhost. localhostProfile must only be set if type: Localhost.
It indicates the path of the pre-configured profile on the node, relative to the kubelet's configured Seccomp profile location (configured with the --root-dir flag).\r\nHere is an example that sets the Seccomp profile to the node's container runtime default profile:\r\n...\r\nsecurityContext:\r\n  seccompProfile:\r\n    type: RuntimeDefault\r\nHere is an example that sets the Seccomp profile to a pre-configured file at \u003ckubelet-root-dir\u003e/seccomp/my-profiles/profile-allow.json:\r\n...\r\nsecurityContext:\r\n  seccompProfile:\r\n    type: Localhost\r\n    localhostProfile: my-profiles/profile-allow.json\r\nSet the AppArmor Profile for a Container\r\nTo set the AppArmor profile for a Container, include the appArmorProfile field in the securityContext section of your Container. The appArmorProfile field is an AppArmorProfile object consisting of type and localhostProfile. Valid options for type include RuntimeDefault (default), Unconfined, and Localhost. localhostProfile must only be set if type is Localhost. It indicates the name of the pre-configured profile on the node. The profile needs to be loaded onto all nodes suitable for the Pod, since you don't know where the pod will be scheduled. Approaches for setting up custom profiles are discussed in Setting up nodes with profiles.\r\nNote: If containers[*].securityContext.appArmorProfile.type is explicitly set to RuntimeDefault, then the Pod will not be admitted if AppArmor is not enabled on the Node. However, if containers[*].securityContext.appArmorProfile.type is not specified, then the default (which is also RuntimeDefault) will only be applied if the node has AppArmor enabled.
If the node has AppArmor disabled, the Pod will be admitted, but the Container will not be restricted by the RuntimeDefault profile.\r\nHere is an example that sets the AppArmor profile to the node's container runtime default profile:\r\n...\r\ncontainers:\r\n- name: container-1\r\n  securityContext:\r\n    appArmorProfile:\r\n      type: RuntimeDefault\r\nHere is an example that sets the AppArmor profile to a pre-configured profile named k8s-apparmor-example-deny-write:\r\n...\r\ncontainers:\r\n- name: container-1\r\n  securityContext:\r\n    appArmorProfile:\r\n      type: Localhost\r\n      localhostProfile: k8s-apparmor-example-deny-write\r\nFor more details, please see Restrict a Container's Access to Resources with AppArmor.\r\nAssign SELinux labels to a Container\r\nTo assign SELinux labels to a Container, include the seLinuxOptions field in the securityContext section of your Pod or Container manifest. The seLinuxOptions field is an SELinuxOptions object. Here's an example that applies an SELinux level:\r\n...\r\nsecurityContext:\r\n  seLinuxOptions:\r\n    level: \"s0:c123,c456\"\r\nNote:\r\nTo assign SELinux labels, the SELinux security module must be loaded on the host operating system. On Windows and Linux worker nodes without SELinux support, this field and any SELinux feature gates described below have no effect.\r\nEfficient SELinux volume relabeling\r\nFEATURE STATE: Kubernetes v1.28 [beta] (enabled by default)\r\nNote:\r\nKubernetes v1.27 introduced an early limited form of this behavior that was only applicable to volumes (and PersistentVolumeClaims) using the ReadWriteOncePod access mode.\r\nKubernetes v1.33 promotes the SELinuxChangePolicy and SELinuxMount feature gates to beta to widen that performance improvement to other kinds of PersistentVolumeClaims, as explained in detail below.
While in beta, SELinuxMount is still disabled by default.\r\nWith the SELinuxMount feature gate disabled (the default in Kubernetes 1.33 and any previous release), the container runtime recursively assigns an SELinux label to all files on all Pod volumes by default. To speed up this process, Kubernetes can change the SELinux label of a volume instantly by using the mount option -o context=\u003clabel\u003e.\r\nTo benefit from this speedup, all these conditions must be met:\r\nThe feature gate SELinuxMountReadWriteOncePod must be enabled.\r\nThe Pod must use a PersistentVolumeClaim with applicable accessModes and feature gates:\r\nEither the volume has accessModes: [\"ReadWriteOncePod\"], and the feature gate SELinuxMountReadWriteOncePod is enabled.\r\nOr the volume can use any other access modes, the feature gates SELinuxMountReadWriteOncePod, SELinuxChangePolicy and SELinuxMount must all be enabled, and the Pod has spec.securityContext.seLinuxChangePolicy either nil (default) or MountOption.\r\nThe Pod (or all its Containers that use the PersistentVolumeClaim) must have seLinuxOptions set.\r\nThe corresponding PersistentVolume must be either:\r\nA volume that uses the legacy in-tree iscsi, rbd or fc volume type.\r\nOr a volume that uses a CSI driver. The CSI driver must announce that it supports mounting with -o context by setting spec.seLinuxMount: true in its CSIDriver instance.\r\nWhen any of these conditions is not met, SELinux relabeling happens another way: the container runtime recursively changes the SELinux label for all inodes (files and directories) in the volume. To call it out explicitly, this applies to Kubernetes ephemeral volumes like secret, configMap and projected, and to all volumes whose CSIDriver instance does not explicitly announce mounting with -o context.\r\nWhen this speedup is used, all Pods that use the same applicable volume concurrently on the same node must have the same SELinux label.
A Pod with a different SELinux label will fail to start and will remain in ContainerCreating until all Pods with other SELinux labels that use the volume are deleted.\r\nFEATURE STATE: Kubernetes v1.33 [beta] (enabled by default)\r\nPods that want to opt out of relabeling using mount options can set spec.securityContext.seLinuxChangePolicy to Recursive. This is required when multiple pods share a single volume on the same node, but run with different SELinux labels that allow simultaneous access to the volume. For example, a privileged pod running with label spc_t and an unprivileged pod running with the default label container_file_t. With spec.securityContext.seLinuxChangePolicy unset (or with the default value MountOption), only one of such pods is able to run on a node; the other one gets stuck in ContainerCreating with the error conflicting SELinux labels of volume \u003cname of the volume\u003e: \u003clabel of the running pod\u003e and \u003clabel of the pod that can't start\u003e.\r\nSELinuxWarningController\r\nTo make it easier to identify Pods that are affected by the change in SELinux volume relabeling, a new controller called SELinuxWarningController has been introduced in kube-controller-manager. It is disabled by default and can be enabled by either setting the --controllers=*,selinux-warning-controller command line flag, or by setting the genericControllerManagerConfiguration.controllers field in KubeControllerManagerConfiguration. This controller requires the SELinuxChangePolicy feature gate to be enabled.\r\nWhen enabled, the controller observes running Pods, and when it detects that two Pods use the same volume with different SELinux labels:\r\n1. It emits an event to both of the Pods.
kubectl describe pod \u003cpod-name\u003e then shows SELinuxLabel \"\u003clabel on the pod\u003e\" conflicts with pod \u003cthe other pod name\u003e that uses the same volume as this pod with SELinuxLabel \"\u003cthe other pod label\u003e\". If both pods land on the same node, only one of them may access the volume.\r\n2. It raises the selinux_warning_controller_selinux_volume_conflict metric. The metric has both pods' names and namespaces as labels to identify the affected pods easily.\r\nA cluster admin can use this information to identify pods affected by the planned change and proactively opt Pods out of the optimization (i.e. set spec.securityContext.seLinuxChangePolicy: Recursive).\r\nWarning:\r\nWe strongly recommend that clusters that use SELinux enable this controller and make sure that the selinux_warning_controller_selinux_volume_conflict metric does not report any conflicts before enabling the SELinuxMount feature gate or upgrading to a version where SELinuxMount is enabled by default.\r\nFeature gates\r\nThe following feature gates control the behavior of SELinux volume relabeling:\r\nSELinuxMountReadWriteOncePod: enables the optimization for volumes with accessModes: [\"ReadWriteOncePod\"]. This is a very safe feature gate to enable, as two pods cannot share one single volume with this access mode. This feature gate is enabled by default since v1.28.\r\nSELinuxChangePolicy: enables the spec.securityContext.seLinuxChangePolicy field in Pod and the related SELinuxWarningController in kube-controller-manager. This feature can be used before enabling SELinuxMount to check Pods running on a cluster, and to proactively opt Pods out of the optimization. This feature gate requires SELinuxMountReadWriteOncePod to be enabled. It is beta and enabled by default in 1.33.\r\nSELinuxMount enables the optimization for all eligible volumes.
Since it can break existing workloads, we recommend enabling the SELinuxChangePolicy feature gate and the SELinuxWarningController first to check the impact of the change. This feature gate requires SELinuxMountReadWriteOncePod and SELinuxChangePolicy to be enabled. It is beta, but disabled by default in 1.33.\r\nManaging access to the /proc filesystem\r\nFEATURE STATE: Kubernetes v1.33 [beta] (enabled by default)\r\nFor runtimes that follow the OCI runtime specification, containers default to running in a mode where there are multiple paths that are both masked and read-only. The result is that the container has these paths present inside its mount namespace, and they can function similarly to how they would if the container were an isolated host, but the container process cannot write to them. The list of masked and read-only paths is as follows:\r\nMasked Paths:\r\n/proc/asound\r\n/proc/acpi\r\n/proc/kcore\r\n/proc/keys\r\n/proc/latency_stats\r\n/proc/timer_list\r\n/proc/timer_stats\r\n/proc/sched_debug\r\n/proc/scsi\r\n/sys/firmware\r\n/sys/devices/virtual/powercap\r\nRead-Only Paths:\r\n/proc/bus\r\n/proc/fs\r\n/proc/irq\r\n/proc/sys\r\n/proc/sysrq-trigger\r\nFor some Pods, you might want to bypass that default masking of paths. The most common context for wanting this is if you are trying to run containers within a Kubernetes container (within a pod).\r\nThe securityContext field procMount allows a user to request that a container's /proc be Unmasked, or be mounted as read-write by the container process. This also applies to /sys/firmware, which is not in /proc.\r\n...\r\nsecurityContext:\r\n  procMount: Unmasked\r\nNote:\r\nSetting procMount to Unmasked requires the spec.hostUsers value in the pod spec to be false. In other words: a container that wishes to have an Unmasked /proc or unmasked /sys must also be in a user namespace.
Kubernetes v1.12 to v1.29 did not enforce that requirement.\r\nDiscussion\r\nThe security context for a Pod applies to the Pod's Containers and also to the Pod's Volumes when applicable. Specifically fsGroup and seLinuxOptions are applied to Volumes as follows:\r\nfsGroup: Volumes that support ownership management are modified to be owned and writable by the GID specified in fsGroup. See the Ownership Management design document for more details.\r\nseLinuxOptions: Volumes that support SELinux labeling are relabeled to be accessible by the label specified under seLinuxOptions. Usually you only need to set the level section. This sets the Multi-Category Security (MCS) label given to all Containers in the Pod as well as the Volumes.\r\nWarning:\r\nAfter you specify an MCS label for a Pod, all Pods with the same label can access the Volume. If you need inter-Pod protection, you must assign a unique MCS label to each Pod.\r\nClean up\r\nDelete the Pod:\r\nkubectl delete pod security-context-demo\r\nkubectl delete pod security-context-demo-2\r\nkubectl delete pod security-context-demo-3\r\nkubectl delete pod security-context-demo-4\r\nWhat's next\r\nPodSecurityContext\r\nSecurityContext\r\nCRI Plugin Config Guide\r\nSecurity Contexts design document\r\nOwnership Management design document\r\nPodSecurity Admission\r\nAllowPrivilegeEscalation design document\r\nFor more information about security mechanisms in Linux, see Overview of Linux Kernel Security Features (Note: Some information is out of date)\r\nRead about User Namespaces for Linux pods.\r\nMasked Paths in the OCI Runtime Specification\r\nSource: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/",
	"extraction_quality": 1,
	"language": "EN",
	"sources": [
		"MITRE"
	],
	"references": [
		"https://kubernetes.io/docs/tasks/configure-pod-container/security-context/"
	],
	"report_names": [
		"security-context"
	],
	"threat_actors": [],
	"ts_created_at": 1775434782,
	"ts_updated_at": 1775791312,
	"ts_creation_date": 0,
	"ts_modification_date": 0,
	"files": {
		"pdf": "https://archive.orkl.eu/8f4aea0e0e29729576f7aebcb41764e9d5c1acb8.pdf",
		"text": "https://archive.orkl.eu/8f4aea0e0e29729576f7aebcb41764e9d5c1acb8.txt",
		"img": "https://archive.orkl.eu/8f4aea0e0e29729576f7aebcb41764e9d5c1acb8.jpg"
	}
}