DaemonSets are a core Kubernetes workload type that ensures a copy of a Pod runs on all (or a subset of) nodes in a cluster. They are essential for node-level services: log collectors, monitoring agents, storage plugins, and network daemons that must run on every node. While the built-in Kubernetes DaemonSet controller handles basic scheduling, managing DaemonSets at scale requires careful planning around resource allocation, update strategies, node selection, and health monitoring.
In this guide, we explore the best practices for managing DaemonSets in self-hosted Kubernetes clusters, along with the tools and patterns that make DaemonSet operations reliable and maintainable.
What Is a DaemonSet?
A DaemonSet is a Kubernetes controller that guarantees a Pod runs on every node matching its selector. Unlike Deployments (which manage a desired replica count across the cluster) or StatefulSets (which manage ordered, identity-aware replicas), DaemonSets are node-centric — they scale automatically as nodes are added or removed from the cluster.
Common DaemonSet use cases include:
- Log collection — Fluentd, Fluent Bit, Vector, or Filebeat agents running on every node
- Monitoring agents — Prometheus Node Exporter, Datadog agent, or custom metrics collectors
- Storage daemons — Ceph OSD, GlusterFS, or Rook storage agents
- Networking plugins — Calico, Cilium, or kube-router networking agents
- Security agents — Falco, Tetragon, or other runtime security monitors
Built-In DaemonSet Controller
Kubernetes includes a native DaemonSet controller as part of its core control plane. It handles the basic lifecycle: creating Pods on eligible nodes, updating Pods when the DaemonSet spec changes, and removing Pods when nodes are deleted.
Key features:
- OnDelete update strategy — Pods are only updated when manually deleted
- RollingUpdate strategy — Pods are updated node-by-node with configurable maxUnavailable
- Node selector and toleration support for targeting specific node groups
- Automatic Pod creation on new nodes and cleanup on node deletion
Basic DaemonSet Example
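A minimal sketch of a log-collector DaemonSet. The `fluent-bit` name, `logging` namespace, and image tag are illustrative — adapt them to your environment:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
spec:
  selector:
    matchLabels:
      app: fluent-bit
  template:
    metadata:
      labels:
        app: fluent-bit
    spec:
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:2.2.0  # illustrative tag; pin your own
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            memory: 256Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log   # read host logs from every node
```

Note there is no `replicas` field: the DaemonSet controller derives the Pod count from the number of eligible nodes.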
DaemonSet Management Tools and Patterns
While the built-in controller handles basic operations, several tools and patterns extend DaemonSet management capabilities:
1. Helm Charts for DaemonSet Deployment
Helm provides a declarative, templated approach to DaemonSet management. Popular chart repositories include pre-built DaemonSets for common node-level services:
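A typical workflow, sketched with the upstream Fluent Bit chart (the repository name and URL are the project's published defaults — verify them before use):

```shell
# Register the chart repository and refresh the index
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update

# Install the DaemonSet-based chart with environment-specific overrides
helm install fluent-bit fluent/fluent-bit \
  --namespace logging --create-namespace \
  -f values.yaml

# Roll out a new version and block until all node Pods are ready
helm upgrade fluent-bit fluent/fluent-bit \
  -f values.yaml --wait --timeout 5m
```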
Helm charts handle DaemonSet lifecycle management including upgrades, rollbacks, and configuration overrides. The helm upgrade command supports --wait and --timeout flags for controlled rolling updates.
2. Kustomize for DaemonSet Overlays
Kustomize provides a patch-based approach to customizing DaemonSets across different environments:
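A sketch of a production overlay that patches a shared base DaemonSet (the directory layout and DaemonSet name are illustrative):

```yaml
# overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base          # shared DaemonSet manifest lives here
patches:
- target:
    kind: DaemonSet
    name: fluent-bit
  patch: |-
    # Production nodes get a larger memory request
    - op: replace
      path: /spec/template/spec/containers/0/resources/requests/memory
      value: 256Mi
```

Render and apply with `kubectl apply -k overlays/production/`.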
Kustomize is ideal for managing DaemonSet variations across development, staging, and production clusters without duplicating base configurations.
3. Argo CD for GitOps DaemonSet Management
Argo CD provides continuous reconciliation for DaemonSets defined in Git repositories. When the DaemonSet manifest in Git changes, Argo CD automatically applies the update to the cluster:
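A sketch of an Argo CD Application watching a directory of DaemonSet manifests (the repository URL, path, and names are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: node-agents
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/cluster-config  # your Git repo
    targetRevision: main
    path: daemonsets/
  destination:
    server: https://kubernetes.default.svc
    namespace: logging
  syncPolicy:
    automated:
      prune: true     # delete resources removed from Git
      selfHeal: true  # revert manual kubectl changes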
Argo CD’s self-healing capability ensures DaemonSets always match their Git-defined state, even if someone manually modifies them with kubectl.
DaemonSet Best Practices
Resource Management
DaemonSets run on every node, so resource requests multiply across the cluster. A DaemonSet requesting 200m CPU and 256Mi memory on a 50-node cluster consumes 10 CPU cores and 12.5Gi of memory — resources that could otherwise run application workloads.
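Always set explicit requests and limits on DaemonSet containers. A common pattern for node agents, shown here as a container-level fragment of the Pod template (the values are illustrative starting points):

```yaml
resources:
  requests:
    cpu: 200m        # 200m × 50 nodes = 10 cores cluster-wide
    memory: 256Mi    # 256Mi × 50 nodes = 12.5Gi cluster-wide
  limits:
    memory: 512Mi    # memory limit prevents a leaking agent from evicting workloads
    # Omitting a CPU limit avoids throttling latency-sensitive node agents;
    # set one if your policy requires it.
```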
Update Strategy Configuration
The RollingUpdate strategy should be tuned based on the service’s criticality:
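A conservative configuration for a critical DaemonSet, updating one node at a time:

```yaml
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one node without a ready Pod during rollout
```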
For non-critical DaemonSets (like experimental monitoring agents), maxUnavailable: 20% allows faster rollouts. For critical services (networking, storage), maxUnavailable: 1 ensures minimal disruption.
Node Selection and Tolerations
DaemonSets should be explicitly scoped to avoid running on inappropriate nodes:
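A Pod-template fragment combining the three scoping mechanisms. The `dedicated=logging` taint and worker-role label are illustrative — substitute the labels and taints your cluster actually uses:

```yaml
spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/worker: ""   # only nodes carrying this label
      tolerations:
      - key: dedicated                        # allow scheduling onto tainted
        operator: Equal                       # logging-dedicated nodes
        value: logging
        effect: NoSchedule
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values: ["linux"]             # skip Windows nodes
```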
Health Monitoring
Monitor DaemonSet health with standard Kubernetes metrics:
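Quick checks from the command line (DaemonSet name and namespace are illustrative):

```shell
# Desired vs. current vs. ready Pod counts for every DaemonSet
kubectl get daemonsets -A

# Track an in-progress rollout node by node
kubectl rollout status daemonset/fluent-bit -n logging
```

For alerting, a PromQL expression such as `kube_daemonset_status_desired_number_scheduled - kube_daemonset_status_number_ready > 0` (using kube-state-metrics) flags DaemonSets with missing or unready Pods.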
Comparison Table
| Aspect | Built-In Controller | Helm Charts | Kustomize | Argo CD (GitOps) |
|---|---|---|---|---|
| Deployment | Manual YAML | Templated | Patch-based | Git-synced |
| Rollback | Manual | helm rollback | git revert | Auto-sync |
| Multi-env | Duplicate YAML | Values files | Overlays | App-of-apps |
| Drift Detection | None | helm diff | None | Continuous |
| Learning Curve | Low | Medium | Medium | High |
| Best For | Single cluster | Reusable configs | Env variations | GitOps workflows |
Why Self-Host Kubernetes DaemonSet Management?
Self-hosting your Kubernetes cluster means you have full control over DaemonSet deployment, configuration, and lifecycle management. In managed Kubernetes services (EKS, GKE, AKS), many DaemonSet capabilities are abstracted away — and in some cases, restricted. Self-hosted clusters offer:
Custom DaemonSet scheduling: Define exactly which nodes run which DaemonSets, using node selectors, tolerations, and affinity rules tailored to your infrastructure. Managed services often restrict access to control-plane nodes or enforce predefined DaemonSets.
Custom update strategies: Configure rolling update parameters (maxUnavailable, partition-based rollouts) that match your organization’s change management policies. Self-hosted clusters let you pause, resume, and debug DaemonSet rollouts without vendor-imposed constraints.
Full observability: Run custom monitoring, logging, and security DaemonSets that integrate with your self-hosted observability stack. Managed services may limit which DaemonSets you can deploy on their infrastructure.
Cost control: DaemonSets consume cluster resources. In self-hosted environments, you can optimize resource requests, use lightweight alternatives (Fluent Bit vs Fluentd), and right-size nodes to minimize waste.
For Kubernetes storage management, see our CSI drivers comparison. For cluster autoscaling, check our Karpenter vs Cluster Autoscaler vs KEDA guide. For network policies, our CNI deep-dive covers networking DaemonSets.
FAQ
What is a Kubernetes DaemonSet?
A DaemonSet is a Kubernetes workload controller that ensures a copy of a specific Pod runs on every node (or a subset of nodes) in the cluster. Unlike Deployments that manage a fixed number of replicas, DaemonSets scale automatically as nodes are added or removed. They are ideal for node-level services like log collectors, monitoring agents, and networking plugins.
How does a DaemonSet differ from a Deployment?
Deployments manage a desired number of Pod replicas that can run on any node. DaemonSets ensure exactly one Pod per eligible node. If you add a node to the cluster, a Deployment’s replica count stays the same, but a DaemonSet automatically creates a new Pod on the new node. Use Deployments for stateless application workloads and DaemonSets for infrastructure services that must run on every node.
What update strategies are available for DaemonSets?
DaemonSets support two update strategies: OnDelete (Pods are only updated when manually deleted) and RollingUpdate (Pods are updated node-by-node). RollingUpdate supports maxUnavailable to control how many nodes can be without the updated Pod during the rollout. For most use cases, RollingUpdate with maxUnavailable: 1 is the safest choice.
How do I prevent a DaemonSet from running on control-plane nodes?
A DaemonSet targets every node, but its Pods are only scheduled onto nodes whose taints they tolerate — so a DaemonSet that does not tolerate the node-role.kubernetes.io/control-plane:NoSchedule taint already skips tainted control-plane nodes. To scope it explicitly, add a nodeSelector that targets worker nodes only, or use nodeAffinity with requiredDuringSchedulingIgnoredDuringExecution to specify eligible nodes.
How do I monitor DaemonSet health?
Use kubectl get daemonsets -A to see the desired, current, and ready Pod counts for each DaemonSet. The kubectl rollout status daemonset/<name> command tracks rollout progress. For automated monitoring, use Prometheus with the kube_daemonset_* metrics exported by kube-state-metrics: kube_daemonset_status_current_number_scheduled, kube_daemonset_status_desired_number_scheduled, and kube_daemonset_status_number_misscheduled.
Can I run multiple instances of the same DaemonSet on a single node?
By default, Kubernetes ensures only one Pod from a DaemonSet runs per node. However, you can run multiple DaemonSets with different selectors or tolerations on the same node. For example, one DaemonSet for general log collection and another for security-specific log collection, each with different configurations and resource profiles.