Managing Kubernetes across multiple clusters — whether for disaster recovery, geographic distribution, or hybrid cloud — requires a federation layer. Three open-source projects address this challenge with different philosophies: KubeFed (API-level federation), Open Cluster Management (application lifecycle across clusters), and Cluster API (declarative infrastructure provisioning).
This guide compares their architectures, self-hosting requirements, and deployment approaches for teams running multi-cluster Kubernetes.
What is Kubernetes Multi-Cluster Federation?
Kubernetes federation allows you to manage multiple clusters as a unified entity. Instead of configuring each cluster independently, a federation control plane propagates resources (Deployments, Services, ConfigMaps) across member clusters. This enables:
- Disaster recovery: Automatic failover between clusters
- Geo-distribution: Serve users from the nearest cluster
- Cloud independence: Run workloads across providers without vendor lock-in
- Unified governance: Apply consistent policies across all clusters
Quick Comparison
| Feature | KubeFed | Open Cluster Management (OCM) | Cluster API (CAPI) |
|---|---|---|---|
| GitHub | kubernetes-sigs/kubefed | open-cluster-management-io | kubernetes-sigs/cluster-api |
| Stars | 2,482 | 2,100+ | 4,200 |
| Language | Go | Go | Go |
| Primary Focus | API federation & resource propagation | Application lifecycle & governance | Infrastructure provisioning |
| Cluster Creation | ❌ Assumes clusters exist | ❌ Assumes clusters exist | ✅ Creates & manages clusters |
| Resource Propagation | ✅ Federated resources | ✅ Placement & manifests | ❌ Cluster-level only |
| Placement Policies | ✅ Override & scheduling | ✅ PlacementRules | ✅ MachineDeployments |
| Helm Install | ✅ | ✅ | ✅ |
| Last Active | 2023-03 (stable) | 2026-05 | 2026-05 |
KubeFed: API-Level Federation
KubeFed is the original Kubernetes federation project, incubated under kubernetes-sigs. It creates a federated control plane that watches Federated* CRDs and propagates changes to member clusters.
Architecture
KubeFed installs a federation API server and controller manager. You create FederatedDeployment, FederatedService, and other federated resource types. The federation controller watches these and applies equivalent resources to each registered member cluster.
Installation via Helm
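A typical install pulls the chart from the kubernetes-sigs repository and registers members with the `kubefedctl` CLI; the cluster and context names below are placeholders:

```bash
# Add the KubeFed Helm repository and install the control plane
helm repo add kubefed-charts \
  https://raw.githubusercontent.com/kubernetes-sigs/kubefed/master/charts
helm repo update
helm install kubefed kubefed-charts/kubefed \
  --namespace kube-federation-system --create-namespace

# Register a member cluster (context names are placeholders)
kubefedctl join cluster1 \
  --cluster-context cluster1 \
  --host-cluster-context host-cluster \
  --v=2
```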
Federated Deployment Example
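A minimal FederatedDeployment propagates an nginx Deployment to two member clusters and overrides the replica count on one of them (cluster names are illustrative):

```yaml
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: nginx
  namespace: test-namespace
spec:
  template:                # the Deployment to propagate
    metadata:
      labels:
        app: nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.25
  placement:               # which member clusters receive it
    clusters:
    - name: cluster1
    - name: cluster2
  overrides:               # per-cluster customization
  - clusterName: cluster2
    clusterOverrides:
    - path: "/spec/replicas"
      value: 5
```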
Key Strengths
- Mature API design: Federated resource types follow Kubernetes API conventions
- Override support: Per-cluster customization of propagated resources
- DNS and ingress federation: Built-in support for federated DNS and ingress
- Lightweight: Only requires the federation control plane
Limitations
- Development pace: Last significant commit was March 2023; the project is in maintenance mode
- No cluster lifecycle: Assumes clusters already exist
- Limited observability: No built-in dashboard for federation status
Open Cluster Management (OCM): Application Lifecycle Federation
Open Cluster Management, originally from Red Hat, focuses on managing applications and policies across multiple clusters. It uses a hub-and-spoke model where the hub cluster manages spoke clusters.
Architecture
OCM installs a hub control plane (multicluster-engine) on a management cluster. Spoke clusters register with the hub as ManagedCluster resources. Applications are delivered via ManifestWork resources, with Placement and ManagedClusterSetBinding objects controlling which clusters receive them.
Installation
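OCM is typically bootstrapped with the `clusteradm` CLI; the hub token and API server URL used on the spoke come from the output of `clusteradm init`:

```bash
# Install the clusteradm CLI
curl -L https://raw.githubusercontent.com/open-cluster-management-io/clusteradm/main/install.sh | bash

# Initialize the hub cluster (run against the hub's kubeconfig)
clusteradm init --wait

# On each spoke cluster, join the hub using the token printed by `clusteradm init`
clusteradm join --hub-token <token> --hub-apiserver <hub-api-url> \
  --cluster-name cluster1 --wait

# Back on the hub, accept the registration request
clusteradm accept --clusters cluster1
```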
Application Deployment via ManifestWork
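A ManifestWork wraps ordinary Kubernetes manifests; the hub applies them to the managed cluster named by the ManifestWork's namespace (all names here are illustrative):

```yaml
apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  name: nginx-work
  namespace: cluster1        # namespace = name of the target managed cluster
spec:
  workload:
    manifests:
    - apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx
        namespace: default
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: nginx
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
            - name: nginx
              image: nginx:1.25
```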
Policy-Based Placement
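Placement decisions can be expressed with the `Placement` API (the successor to the legacy `PlacementRule`); this sketch selects up to two clusters labeled `environment: production`:

```yaml
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: prod-placement
  namespace: default
spec:
  numberOfClusters: 2        # select at most two matching clusters
  predicates:
  - requiredClusterSelector:
      labelSelector:
        matchLabels:
          environment: production
```

Note that a Placement only considers clusters in ManagedClusterSets that are bound to its namespace via a ManagedClusterSetBinding.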
Key Strengths
- Active development: Regular releases and active community
- Policy framework: Built-in governance and compliance (Gatekeeper integration)
- Application lifecycle: Full deploy/update/rollback workflow
- Add-on ecosystem: Submariner, Klusterlet, and observability add-ons
Cluster API (CAPI): Declarative Infrastructure Provisioning
Cluster API takes a different approach: instead of federating resources across existing clusters, it manages the clusters themselves as Kubernetes resources. You define Cluster, MachineDeployment, and Machine objects, and CAPI provisions the infrastructure.
Architecture
CAPI uses a provider model. The core CAPI defines abstractions (Cluster, Machine), while infrastructure providers (AWS, Azure, Docker, Metal3) implement the actual provisioning. This means CAPI can create clusters on any supported platform.
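The provider split is visible in a minimal Cluster object: the core resource delegates its control plane and infrastructure to provider-specific kinds (Docker provider names are used here purely as an illustration):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster
spec:
  controlPlaneRef:           # who manages the control plane nodes
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: my-cluster-control-plane
  infrastructureRef:         # which provider creates the machines
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: DockerCluster
    name: my-cluster
```

Swapping `DockerCluster` for `AWSCluster` or `AzureCluster` retargets the same abstraction at different infrastructure.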
Installation
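Installation is driven by the `clusterctl` CLI; the Docker provider shown here is for local testing — swap in `aws`, `azure`, etc. for real infrastructure:

```bash
# Download the clusterctl CLI (Linux amd64 shown; pick the binary for your platform)
curl -L https://github.com/kubernetes-sigs/cluster-api/releases/latest/download/clusterctl-linux-amd64 \
  -o clusterctl
chmod +x clusterctl && sudo mv clusterctl /usr/local/bin/

# Turn the current cluster into a CAPI management cluster
clusterctl init --infrastructure docker
```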
MachineDeployment for Worker Nodes
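A MachineDeployment manages a replicated set of worker Machines, much as a Deployment manages Pods; the cluster and template names below are placeholders:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: my-cluster-md-0
  namespace: default
spec:
  clusterName: my-cluster
  replicas: 3
  selector:
    matchLabels:
      cluster.x-k8s.io/cluster-name: my-cluster
  template:
    spec:
      clusterName: my-cluster
      version: v1.29.0          # Kubernetes version for the worker nodes
      bootstrap:
        configRef:              # how nodes join the cluster (kubeadm here)
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: my-cluster-md-0
      infrastructureRef:        # provider-specific machine template
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: DockerMachineTemplate
        name: my-cluster-md-0
```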
Key Strengths
- Infrastructure as Code: Clusters defined as Kubernetes resources
- Provider extensibility: AWS, Azure, GCP, VMware, bare metal (Metal3)
- Cluster lifecycle: Create, upgrade, scale, and delete clusters declaratively
- Self-hosted: Can run the management cluster on any Kubernetes
Choosing the Right Tool
| Use Case | Best Choice | Why |
|---|---|---|
| Propagate Deployments across existing clusters | KubeFed | Simple API-level federation |
| Multi-cluster governance & policy | Open Cluster Management | Built-in policy framework |
| Provision clusters on multiple clouds | Cluster API | Infrastructure provider model |
| Active development & community | OCM or CAPI | KubeFed is in maintenance mode |
| Disaster recovery (active-passive) | KubeFed | Federated resource propagation |
| Hybrid cloud (on-prem + cloud) | CAPI + OCM | CAPI for provisioning, OCM for management |
Why Multi-Cluster Federation Matters
Running multiple Kubernetes clusters is no longer optional for many organizations. Regulatory requirements demand data residency. Geographic distribution reduces latency. Multi-cloud strategies prevent vendor lock-in. But managing clusters independently is operationally expensive.
Federation tools solve this by providing a single control plane. Instead of running kubectl apply against 10 clusters, you define the desired state once and the federation layer handles propagation. This is especially valuable for:
- Platform teams managing shared infrastructure across business units
- SRE teams implementing consistent observability and alerting
- Security teams enforcing uniform policies and network configurations
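The difference is easiest to see as commands; the cluster context names here are hypothetical:

```bash
# Without federation: apply the same manifest to every cluster by hand
for ctx in prod-us prod-eu prod-ap; do
  kubectl --context "$ctx" apply -f deployment.yaml
done

# With a federation layer: apply once to the hub;
# the control plane propagates to the member clusters
kubectl --context hub apply -f federated-deployment.yaml
```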
For teams needing cluster-level automation beyond federation, our Cluster management guide covers Rancher, KubeSpray, and Kind. For multi-cluster networking, the Karmada vs Liqo vs Submariner comparison covers the networking layer.
FAQ
What is the difference between KubeFed and Cluster API?
KubeFed federates resources (Deployments, Services) across existing clusters, while Cluster API manages the clusters themselves as infrastructure. KubeFed answers “how do I run the same app across 5 clusters?” while CAPI answers “how do I create and manage 5 clusters?”
Is KubeFed still maintained?
KubeFed’s last significant release was in 2023. The project is in maintenance mode — it works well for existing deployments but is not receiving major new features. For active development, consider Open Cluster Management or Cluster API.
Can I use multiple federation tools together?
Yes. A common pattern is Cluster API for cluster provisioning, Open Cluster Management for application lifecycle, and KubeFed (or Karmada) for resource propagation. Each tool addresses a different layer of the multi-cluster stack.
Does Open Cluster Management require Red Hat OpenShift?
No. OCM runs on any Kubernetes cluster, including vanilla K8s, EKS, GKE, and AKS. The hub and spoke clusters can be on different platforms.
How does Cluster API handle cluster upgrades?
CAPI handles upgrades through rolling updates of MachineDeployments. You change the version field in the MachineDeployment spec, and CAPI replaces nodes one at a time with the new version. Control plane upgrades use KubeadmControlPlane’s rolling update strategy.
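As a sketch (the object name is hypothetical), a worker upgrade is a one-field change:

```bash
# Bump the Kubernetes version on a MachineDeployment;
# CAPI replaces the nodes one at a time with the new version
kubectl patch machinedeployment my-cluster-md-0 --type merge \
  -p '{"spec":{"template":{"spec":{"version":"v1.30.0"}}}}'
```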
What happens when a federated member cluster goes offline?
KubeFed marks the cluster as offline and stops propagating changes to it. When the cluster reconnects, pending changes are applied. OCM uses a similar pattern with ManagedClusterConditionAvailable.
Local Testing with the Docker Provider
For local development, you can test CAPI with the Docker infrastructure provider:
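A local flow, assuming `kind`, `clusterctl`, and `kubectl` are installed (cluster names and the Kubernetes version are illustrative):

```bash
# Create a local management cluster with kind
kind create cluster --name capi-mgmt

# Install core CAPI plus the Docker infrastructure provider (CAPD)
clusterctl init --infrastructure docker

# Generate a workload cluster manifest and apply it
clusterctl generate cluster dev-cluster \
  --infrastructure docker \
  --kubernetes-version v1.29.0 \
  --control-plane-machine-count=1 \
  --worker-machine-count=1 > dev-cluster.yaml
kubectl apply -f dev-cluster.yaml

# Retrieve the workload cluster's kubeconfig once it is provisioned
clusterctl get kubeconfig dev-cluster > dev-cluster.kubeconfig
```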
For more on Kubernetes operator patterns, see our Operators guide. For virtual cluster approaches to multi-tenancy, check vCluster vs Capsule vs Loft.