{
	"id": "e222ff78-caff-4fb8-84ab-1939b4ed8a44",
	"created_at": "2026-04-06T01:29:50.336741Z",
	"updated_at": "2026-04-10T03:23:51.895313Z",
	"deleted_at": null,
	"sha1_hash": "26d4e4e7b07eb287d0762e37536e7f9c03a55bd2",
	"title": "DaemonSet",
	"llm_title": "",
	"authors": "",
	"file_creation_date": "0001-01-01T00:00:00Z",
	"file_modification_date": "0001-01-01T00:00:00Z",
	"file_size": 102874,
	"plain_text": "DaemonSet\r\nArchived: 2026-04-06 01:10:58 UTC\r\nA DaemonSet defines Pods that provide node-local facilities. These might be fundamental to the operation of your\r\ncluster, such as a networking helper tool, or be part of an add-on.\r\nA DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are\r\nadded to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet\r\nwill clean up the Pods it created.\r\nSome typical uses of a DaemonSet are:\r\nrunning a cluster storage daemon on every node\r\nrunning a logs collection daemon on every node\r\nrunning a node monitoring daemon on every node\r\nIn a simple case, one DaemonSet, covering all nodes, would be used for each type of daemon. A more complex\r\nsetup might use multiple DaemonSets for a single type of daemon, but with different flags and/or different\r\nmemory and cpu requests for different hardware types.\r\nWriting a DaemonSet Spec\r\nCreate a DaemonSet\r\nYou can describe a DaemonSet in a YAML file. 
For example, the daemonset.yaml file below describes a\r\nDaemonSet that runs the fluentd-elasticsearch Docker image:\r\napiVersion: apps/v1\r\nkind: DaemonSet\r\nmetadata:\r\n  name: fluentd-elasticsearch\r\n  namespace: kube-system\r\n  labels:\r\n    k8s-app: fluentd-logging\r\nspec:\r\n  selector:\r\n    matchLabels:\r\n      name: fluentd-elasticsearch\r\n  template:\r\n    metadata:\r\n      labels:\r\n        name: fluentd-elasticsearch\r\n    spec:\r\n      tolerations:\r\n      # these tolerations are to have the daemonset runnable on control plane nodes\r\n      # remove them if your control plane nodes should not run pods\r\n      - key: node-role.kubernetes.io/control-plane\r\n        operator: Exists\r\n        effect: NoSchedule\r\n      - key: node-role.kubernetes.io/master\r\n        operator: Exists\r\n        effect: NoSchedule\r\n      containers:\r\n      - name: fluentd-elasticsearch\r\n        image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1\r\n        resources:\r\n          limits:\r\n            memory: 200Mi\r\n          requests:\r\n            cpu: 100m\r\n            memory: 200Mi\r\n        volumeMounts:\r\n        - name: varlog\r\n          mountPath: /var/log\r\n      # it may be desirable to set a high priority class to ensure that a DaemonSet Pod\r\n      # preempts running Pods\r\n      # priorityClassName: important\r\n      terminationGracePeriodSeconds: 30\r\n      volumes:\r\n      - name: varlog\r\n        hostPath:\r\n          path: /var/log\r\nCreate a DaemonSet based on the YAML file:\r\nkubectl apply -f https://k8s.io/examples/controllers/daemonset.yaml\r\nRequired Fields\r\nAs with all other Kubernetes config, a DaemonSet needs apiVersion , kind , and metadata fields. 
For general\r\ninformation about working with config files, see running stateless applications and object management using\r\nkubectl.\r\nThe name of a DaemonSet object must be a valid DNS subdomain name.\r\nA DaemonSet also needs a .spec section.\r\nPod Template\r\nThe .spec.template is one of the required fields in .spec .\r\nThe .spec.template is a pod template. It has exactly the same schema as a Pod, except it is nested and does not\r\nhave an apiVersion or kind .\r\nIn addition to required fields for a Pod, a Pod template in a DaemonSet has to specify appropriate labels (see pod\r\nselector).\r\nA Pod Template in a DaemonSet must have a RestartPolicy equal to Always , or be unspecified, which\r\ndefaults to Always .\r\nPod Selector\r\nThe .spec.selector field is a pod selector. It works the same as the .spec.selector of a Job.\r\nYou must specify a pod selector that matches the labels of the .spec.template . Also, once a DaemonSet is\r\ncreated, its .spec.selector can not be mutated. Mutating the pod selector can lead to the unintentional\r\norphaning of Pods, and it was found to be confusing to users.\r\nThe .spec.selector is an object consisting of two fields:\r\nmatchLabels - works the same as the .spec.selector of a ReplicationController.\r\nmatchExpressions - allows you to build more sophisticated selectors by specifying a key, a list of values, and an\r\noperator that relates the key and values.\r\nWhen both are specified, the result is ANDed.\r\nThe .spec.selector must match the .spec.template.metadata.labels . Config with these two not matching\r\nwill be rejected by the API.\r\nRunning Pods on select Nodes\r\nIf you specify a .spec.template.spec.nodeSelector , then the DaemonSet controller will create Pods on nodes\r\nwhich match that node selector. 
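The nodeSelector mechanism can be sketched as a fragment of a DaemonSet manifest; the disktype: ssd label below is a hypothetical example, not part of the original document:

```yaml
# Hypothetical fragment of a DaemonSet spec.
# With this nodeSelector, the DaemonSet controller creates Pods only on
# nodes carrying the label disktype=ssd (an assumed example label).
spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd
```

Nodes without a matching label simply get no Pod from this DaemonSet; labeling a node later causes the controller to add one.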
Likewise, if you specify a .spec.template.spec.affinity , then the DaemonSet\r\ncontroller will create Pods on nodes which match that node affinity. If you do not specify either, then the\r\nDaemonSet controller will create Pods on all nodes.\r\nHow Daemon Pods are scheduled\r\nA DaemonSet can be used to ensure that all eligible nodes run a copy of a Pod. The DaemonSet controller creates\r\na Pod for each eligible node and adds the spec.affinity.nodeAffinity field of the Pod to match the target host.\r\nAfter the Pod is created, the default scheduler typically takes over and then binds the Pod to the target host by\r\nsetting the .spec.nodeName field. If the new Pod cannot fit on the node, the default scheduler may preempt\r\n(evict) some of the existing Pods based on the priority of the new Pod.\r\nNote:\r\nIf it's important that the DaemonSet pod run on each node, it's often desirable to set the\r\n.spec.template.spec.priorityClassName of the DaemonSet to a PriorityClass with a higher priority to ensure\r\nthat this eviction occurs.\r\nThe user can specify a different scheduler for the Pods of the DaemonSet, by setting the\r\n.spec.template.spec.schedulerName field of the DaemonSet.\r\nThe original node affinity specified at the .spec.template.spec.affinity.nodeAffinity field (if specified) is\r\ntaken into consideration by the DaemonSet controller when evaluating the eligible nodes, but is replaced on the\r\ncreated Pod with the node affinity that matches the name of the eligible node.\r\nnodeAffinity:\r\n  requiredDuringSchedulingIgnoredDuringExecution:\r\n    nodeSelectorTerms:\r\n    - matchFields:\r\n      - key: metadata.name\r\n        operator: In\r\n        values:\r\n        - target-host-name\r\nTaints and tolerations\r\nThe DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:\r\nnode.kubernetes.io/not-ready (NoExecute): DaemonSet Pods can be scheduled onto nodes that are not healthy\r\nor ready to accept Pods. Any DaemonSet Pods running on such nodes will not be evicted.\r\nnode.kubernetes.io/unreachable (NoExecute): DaemonSet Pods can be scheduled onto nodes that are\r\nunreachable from the node controller. Any DaemonSet Pods running on such nodes will not be evicted.\r\nnode.kubernetes.io/disk-pressure (NoSchedule): DaemonSet Pods can be scheduled onto nodes with disk\r\npressure issues.\r\nnode.kubernetes.io/memory-pressure (NoSchedule): DaemonSet Pods can be scheduled onto nodes with\r\nmemory pressure issues.\r\nnode.kubernetes.io/pid-pressure (NoSchedule): DaemonSet Pods can be scheduled onto nodes with process\r\npressure issues.\r\nnode.kubernetes.io/unschedulable (NoSchedule): DaemonSet Pods can be scheduled onto nodes that are\r\nunschedulable.\r\nnode.kubernetes.io/network-unavailable (NoSchedule): Only added for DaemonSet Pods that request host\r\nnetworking, i.e., Pods having spec.hostNetwork: true . Such DaemonSet Pods can be scheduled onto nodes\r\nwith unavailable network.\r\nYou can add your own tolerations to the Pods of a DaemonSet as well, by defining these in the Pod template of the\r\nDaemonSet.\r\nBecause the DaemonSet controller sets the node.kubernetes.io/unschedulable:NoSchedule toleration\r\nautomatically, Kubernetes can run DaemonSet Pods on nodes that are marked as unschedulable.\r\nIf you use a DaemonSet to provide an important node-level function, such as cluster networking, it is helpful that\r\nKubernetes places DaemonSet Pods on nodes before they are ready. 
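The point about adding your own tolerations can be sketched as a fragment of a DaemonSet Pod template; the taint key example.com/dedicated and its value are hypothetical, and the automatic tolerations listed above are still added by the controller:

```yaml
# Hypothetical fragment of a DaemonSet Pod template.
# The toleration key example.com/dedicated is an assumed example taint;
# it lets these Pods run on nodes tainted example.com/dedicated=logging:NoSchedule.
spec:
  template:
    spec:
      tolerations:
      - key: example.com/dedicated
        operator: Equal
        value: logging
        effect: NoSchedule
```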
For example, without that special toleration,\r\nyou could end up in a deadlock situation where the node is not marked as ready because the network plugin is not\r\nrunning there, and at the same time the network plugin is not running on that node because the node is not yet\r\nready.\r\nCommunicating with Daemon Pods\r\nSome possible patterns for communicating with Pods in a DaemonSet are:\r\nPush: Pods in the DaemonSet are configured to send updates to another service, such as a stats database.\r\nThey do not have clients.\r\nNodeIP and Known Port: Pods in the DaemonSet can use a hostPort , so that the pods are reachable via\r\nthe node IPs. Clients know the list of node IPs somehow, and know the port by convention.\r\nDNS: Create a headless service with the same pod selector, and then discover DaemonSets using the\r\nendpoints resource or retrieve multiple A records from DNS.\r\nService: Create a service with the same Pod selector, and use the service to reach a daemon on a random\r\nnode. Use Service Internal Traffic Policy to limit to pods on the same node.\r\nUpdating a DaemonSet\r\nIf node labels are changed, the DaemonSet will promptly add Pods to newly matching nodes and delete Pods from\r\nnewly not-matching nodes.\r\nYou can modify the Pods that a DaemonSet creates. However, Pods do not allow all fields to be updated. Also, the\r\nDaemonSet controller will use the original template the next time a node (even with the same name) is created.\r\nYou can delete a DaemonSet. If you specify --cascade=orphan with kubectl , then the Pods will be left on the\r\nnodes. If you subsequently create a new DaemonSet with the same selector, the new DaemonSet adopts the\r\nexisting Pods. 
If any Pods need replacing, the DaemonSet replaces them according to its updateStrategy .\r\nYou can perform a rolling update on a DaemonSet.\r\nAlternatives to DaemonSet\r\nInit scripts\r\nIt is certainly possible to run daemon processes by directly starting them on a node (e.g. using init , upstartd ,\r\nor systemd ). This is perfectly fine. However, there are several advantages to running such processes via a\r\nDaemonSet:\r\nAbility to monitor and manage logs for daemons in the same way as applications.\r\nSame config language and tools (e.g. Pod templates, kubectl ) for daemons and applications.\r\nRunning daemons in containers with resource limits increases isolation between daemons and app\r\ncontainers. However, this can also be accomplished by running the daemons in a container but not in a Pod.\r\nBare Pods\r\nIt is possible to create Pods directly which specify a particular node to run on. However, a DaemonSet replaces\r\nPods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node\r\nmaintenance, such as a kernel upgrade. For this reason, you should use a DaemonSet rather than creating\r\nindividual Pods.\r\nStatic Pods\r\nIt is possible to create Pods by writing a file to a certain directory watched by Kubelet. These are called static\r\npods. Unlike DaemonSet, static Pods cannot be managed with kubectl or other Kubernetes API clients. Static Pods\r\ndo not depend on the apiserver, making them useful in cluster bootstrapping cases. Also, static Pods may be\r\ndeprecated in the future.\r\nDeployments\r\nDaemonSets are similar to Deployments in that they both create Pods, and those Pods have processes which are\r\nnot expected to terminate (e.g. web servers, storage servers).\r\nUse a Deployment for stateless services, like frontends, where scaling up and down the number of replicas and\r\nrolling out updates are more important than controlling exactly which host the Pod runs on. 
Use a DaemonSet\r\nwhen it is important that a copy of a Pod always run on all or certain hosts, if the DaemonSet provides node-level\r\nfunctionality that allows other Pods to run correctly on that particular node.\r\nFor example, network plugins often include a component that runs as a DaemonSet. The DaemonSet component\r\nmakes sure that the node where it's running has working cluster networking.\r\nWhat's next\r\nLearn about Pods:\r\nLearn about static Pods, which are useful for running Kubernetes control plane components.\r\nFind out how to use DaemonSets:\r\nPerform a rolling update on a DaemonSet.\r\nPerform a rollback on a DaemonSet (for example, if a roll out didn't work how you expected).\r\nUnderstand how Kubernetes assigns Pods to Nodes.\r\nLearn about device plugins and add-ons, which often run as DaemonSets.\r\nDaemonSet is a top-level resource in the Kubernetes REST API. Read the DaemonSet object definition to\r\nunderstand the API for daemon sets.\r\nSource: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/",
	"extraction_quality": 1,
	"language": "EN",
	"sources": [
		"MITRE"
	],
	"references": [
		"https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/"
	],
	"report_names": [
		"daemonset"
	],
	"threat_actors": [
		{
			"id": "d90307b6-14a9-4d0b-9156-89e453d6eb13",
			"created_at": "2022-10-25T16:07:23.773944Z",
			"updated_at": "2026-04-10T02:00:04.746188Z",
			"deleted_at": null,
			"main_name": "Lead",
			"aliases": [
				"Casper",
				"TG-3279"
			],
			"source_name": "ETDA:Lead",
			"tools": [
				"Agentemis",
				"BleDoor",
				"Cobalt Strike",
				"CobaltStrike",
				"RbDoor",
				"RibDoor",
				"Winnti",
				"cobeacon"
			],
			"source_id": "ETDA",
			"reports": null
		}
	],
	"ts_created_at": 1775438990,
	"ts_updated_at": 1775791431,
	"ts_creation_date": 0,
	"ts_modification_date": 0,
	"files": {
		"pdf": "https://archive.orkl.eu/26d4e4e7b07eb287d0762e37536e7f9c03a55bd2.pdf",
		"text": "https://archive.orkl.eu/26d4e4e7b07eb287d0762e37536e7f9c03a55bd2.txt",
		"img": "https://archive.orkl.eu/26d4e4e7b07eb287d0762e37536e7f9c03a55bd2.jpg"
	}
}