Traditional perimeter firewalls are insufficient for modern containerized workloads. Once traffic passes the network boundary, lateral movement between services is unrestricted. Network microsegmentation solves this by enforcing security policies at the individual workload level — controlling which pods, containers, or VMs can communicate with each other.
In this guide, we compare three self-hosted microsegmentation platforms: Cilium Network Policies (eBPF-based), AccuKnox (policy discovery and enforcement), and Calico Network Policies (traditional iptables/eBPF). Each takes a different approach to container network security.
## What Is Network Microsegmentation?
Microsegmentation divides your network into isolated security zones at the workload level. Unlike traditional VLANs or security groups that operate at the subnet or VM level, microsegmentation enforces policies per individual process, container, or pod.
Key capabilities of a microsegmentation platform include:
- Workload-level policies: Allow/deny traffic between specific containers or pods
- Application-aware filtering: Layer 7 rules based on HTTP methods and paths, DNS names, or Kafka topics
- Automatic policy discovery: Observing traffic patterns to suggest least-privilege rules
- Policy enforcement: Dropping unauthorized traffic at the kernel level
- Visibility and auditing: Logging all allowed and denied connections for compliance
Without microsegmentation, a compromised container in your web frontend tier can freely scan and attack your database tier. Proper segmentation limits the blast radius of any single compromise.
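To make the idea concrete, a deny-by-default baseline can be expressed as a standard Kubernetes NetworkPolicy — a minimal sketch, assuming a hypothetical `web` namespace; every platform compared below can enforce this resource type:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: web        # hypothetical namespace
spec:
  podSelector: {}       # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
  # No ingress/egress rules listed: all traffic to and from these pods is denied
  # until more specific allow policies are added.
```

From this starting point, each allow rule you add is an explicit, auditable exception — the essence of least-privilege segmentation.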
## Comparison: Microsegmentation Platforms
| Feature | Cilium Network Policies | AccuKnox | Calico Network Policies |
|---|---|---|---|
| Enforcement Engine | eBPF (Linux kernel) | eBPF + iptables | iptables / eBPF |
| Layer 7 Filtering | Yes (HTTP, DNS, Kafka, gRPC) | Yes | Limited (iptables mode) |
| Policy Discovery | Hubble observability | Auto-discovery + CIS benchmarks | Manual |
| Multi-Cluster | ClusterMesh | Yes (multi-cloud) | Global network policy |
| Service Mesh | Built-in mTLS | Integrates with Istio | No |
| Host Firewall | Yes | Yes | Yes |
| Complexity | Medium | Medium-High | Medium |
| GitHub Stars | 24,300+ | Discovery engine 1,000+ | 13,500+ |
| Best For | K8s-native with L7 policies | Policy automation + compliance | Production-proven reliability |
## 1. Cilium Network Policies (eBPF-Based)
Cilium uses eBPF (Extended Berkeley Packet Filter) to enforce network policies directly in the Linux kernel — without iptables rules. This provides faster packet processing and enables Layer 7 filtering for HTTP, DNS, Kafka, and gRPC protocols.
### Docker Compose Setup (Single-Node Test Lab)
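Cilium is normally installed into Kubernetes via Helm or the `cilium` CLI; for a rough single-node lab, the agent can be run as a privileged host-network container. The sketch below is illustrative only — the image tag, command flags, and mounts are assumptions, not a supported deployment:

```yaml
# Illustrative sketch — not a supported install method; image tag, flags,
# and mounts are assumptions. Production deployments use Helm on Kubernetes.
services:
  cilium:
    image: quay.io/cilium/cilium:v1.15.0
    network_mode: host        # agent needs the host's network namespace
    privileged: true          # required to load eBPF programs
    volumes:
      - /sys/fs/bpf:/sys/fs/bpf             # BPF filesystem for pinned maps
      - /var/run/cilium:/var/run/cilium     # agent state and API socket
    command:
      - cilium-agent
      - --enable-ipv4=true
```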
Network Policy Example — restrict frontend to only talk to backend on port 8080:
| |
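A CiliumNetworkPolicy along these lines might look as follows (the `app: frontend` / `app: backend` labels are assumed placeholders):

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: frontend-to-backend
spec:
  endpointSelector:
    matchLabels:
      app: backend          # policy applies to backend pods
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend   # only frontend pods may connect
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
```

Because the policy selects the backend as its endpoint, all other ingress to the backend is implicitly denied once this policy is in place.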
Layer 7 Policy — allow only GET requests to /api/v1/*:
| |
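Building on the same selectors, an HTTP-aware rule can restrict both method and path — path values in Cilium's HTTP rules are regular expressions:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: frontend-to-backend-l7
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:
              - method: "GET"
                path: "/api/v1/.*"   # regex: any path under /api/v1/
```

A POST to the same endpoint, or a GET to `/admin`, is rejected at the proxy even though the L3/L4 connection itself is allowed.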
Cilium’s Hubble observability platform provides real-time service dependency maps, flow logs, and policy enforcement visibility — essential for understanding which workloads communicate with each other.
## 2. AccuKnox (Policy Discovery + Enforcement)
AccuKnox focuses on automated policy discovery — observing your workloads’ actual traffic patterns and generating least-privilege security policies. It combines Cilium’s eBPF enforcement with an intelligent policy engine that suggests rules based on observed behavior and CIS benchmarks.
### Docker Compose Setup
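AccuKnox's open-source enforcement agent is KubeArmor, which can also run outside Kubernetes as a privileged container. Treat the following as an illustrative sketch — the image tag and mounts are assumptions; consult the AccuKnox/KubeArmor documentation for supported install methods:

```yaml
# Illustrative sketch — image tag and mounts are assumptions; see the
# AccuKnox/KubeArmor docs for supported installation paths.
services:
  kubearmor:
    image: kubearmor/kubearmor:stable
    privileged: true       # required to attach eBPF/LSM hooks
    pid: host              # observe processes across the host
    volumes:
      - /sys/kernel/debug:/sys/kernel/debug           # tracing hooks
      - /var/run/docker.sock:/var/run/docker.sock     # container discovery
```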
The AccuKnox agent runs in “monitor mode” initially, observing traffic without enforcing policies. After a discovery period (typically 24-48 hours), it generates recommended policies that you can review and apply. This prevents accidentally blocking legitimate traffic.
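In Kubernetes environments the discovered rules arrive as KubeArmor policy objects, which distinguish an `Audit` action (log only) from `Block` (enforce) — a convenient handle for the monitor-then-enforce workflow above. A hypothetical example (names and paths are placeholders):

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: audit-package-managers   # hypothetical policy name
  namespace: web                 # hypothetical namespace
spec:
  selector:
    matchLabels:
      app: frontend
  process:
    matchPaths:
      - path: /usr/bin/apt       # flag unexpected package-manager execution
  action: Audit                  # switch to Block once the rule is verified
```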
## 3. Calico Network Policies
Calico is the most widely adopted network policy engine for Kubernetes. It supports both traditional iptables-based enforcement and modern eBPF dataplane. Calico’s strength lies in its maturity, extensive documentation, and integration with major cloud platforms.
### Docker Compose Setup
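A non-Kubernetes lab can pair `calico/node` with an etcd datastore. The sketch below is illustrative — image tags and environment values are assumptions; Kubernetes installs normally use the Tigera operator instead:

```yaml
# Illustrative sketch — image tags and environment values are assumptions;
# Kubernetes clusters should install Calico via the Tigera operator.
services:
  etcd:
    image: quay.io/coreos/etcd:v3.5.9
    command: ["etcd",
              "--advertise-client-urls=http://0.0.0.0:2379",
              "--listen-client-urls=http://0.0.0.0:2379"]
    ports:
      - "2379:2379"                          # expose datastore to the host
  calico-node:
    image: calico/node:v3.27.0
    network_mode: host
    privileged: true
    environment:
      - ETCD_ENDPOINTS=http://127.0.0.1:2379 # datastore published on the host
      - CALICO_NETWORKING_BACKEND=bird       # BGP routing backend
    depends_on:
      - etcd
```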
Calico GlobalNetworkPolicy — deny all inter-namespace traffic by default:
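One common way to express this, adapted from Calico's default-deny pattern (the excluded system namespaces are examples), is a low-priority GlobalNetworkPolicy with no allow rules — matched traffic that no higher-priority policy permits is dropped. Same-namespace traffic is then re-allowed with ordinary namespaced NetworkPolicy objects:

```yaml
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: default-deny
spec:
  order: 1000   # high order value = evaluated after more specific policies
  selector: projectcalico.org/namespace not in {"kube-system", "calico-system"}
  types:
    - Ingress
    - Egress
  # No allow rules: any traffic not permitted by a higher-priority
  # policy is denied for all selected workloads.
```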
Calico’s BGP-based routing also provides high-performance pod-to-pod networking without overlay networks, reducing latency compared to VXLAN-based CNI plugins.
## Choosing the Right Microsegmentation Platform
**Use Cilium when:**
- You need Layer 7 filtering (HTTP, DNS, Kafka)
- eBPF observability (Hubble) is important for debugging
- You want built-in mTLS without a separate service mesh
- Your team is comfortable with Kubernetes-native tooling
**Use AccuKnox when:**
- You want automated policy discovery to avoid manual rule writing
- Compliance (CIS benchmarks) is a requirement
- You operate mixed environments (K8s + VMs + bare metal)
- You need a visual policy management interface
**Use Calico when:**
- You need battle-tested, production-proven network policies
- BGP-based routing (no overlay) is preferred
- You operate on-premises bare-metal Kubernetes clusters
- Your team has existing iptables/networking expertise
## Why Self-Host Your Microsegmentation Platform?
Network security is not an area where you want to rely on cloud-provider-specific tools. When you self-host microsegmentation, your security policies travel with your workloads — whether they run on-premises, in AWS, or at the edge. This portability is critical for hybrid cloud strategies and multi-cloud deployments.
Self-hosted microsegmentation also gives you full visibility into every allowed and denied connection. Cloud-provider security groups are opaque — you see the final allow/deny decision but not the enforcement path. eBPF-based platforms like Cilium provide kernel-level visibility into exactly which policy matched each packet, which is invaluable for incident response and compliance auditing.
The cost argument is equally compelling. Managed container security platforms charge per node or per workload. At 500+ nodes, these costs can exceed $10,000/year. Self-hosted microsegmentation runs on the same infrastructure as your workloads with minimal overhead.
For foundational network policies, see our Calico vs Cilium vs kube-router guide. If you need deeper eBPF visibility, our XDP/eBPF network firewalls guide covers packet-level filtering. For multi-cluster connectivity, check our Kubernetes multi-cluster service mesh guide.
## FAQ
### What is the difference between network policies and microsegmentation?
Network policies define which traffic is allowed between workloads. Microsegmentation is the broader practice of dividing your network into isolated security zones — of which network policies are the enforcement mechanism. Microsegmentation also includes visibility, policy discovery, compliance reporting, and incident response capabilities.
### Does eBPF-based microsegmentation require a specific kernel version?
Yes. eBPF features used by Cilium require Linux kernel 4.19 or later for basic functionality, and 5.10+ for advanced features like socket-level filtering. Most modern distributions (Ubuntu 22.04, RHEL 9, Debian 12) ship with compatible kernels.
### Can I run microsegmentation on non-Kubernetes workloads?
Cilium and AccuKnox both support non-Kubernetes environments. Cilium can run in “host mode” to protect VMs and bare-metal servers. AccuKnox works with Docker containers and traditional Linux hosts. Calico supports VMs via its BGP integration but requires manual configuration.
### How does microsegmentation affect network performance?
eBPF-based enforcement (Cilium, AccuKnox) adds negligible overhead — typically under 1 microsecond per packet. iptables-based enforcement (Calico in iptables mode) has slightly higher latency due to rule chain traversal. At scale, eBPF consistently outperforms iptables because it avoids linear rule matching.
### How do I migrate from permissive to enforcing mode safely?
Start with all policies in “log” or “monitor” mode. Observe traffic patterns for 1-2 weeks to understand baseline communication. Generate least-privilege policies from observed traffic. Apply them in monitor mode first, verify no legitimate traffic is flagged, then switch to enforcing mode. Both AccuKnox and Cilium support this workflow natively.
### What happens to existing connections when a policy is updated?
With eBPF-based enforcement, policy updates are applied atomically at the kernel level — existing connections are not dropped unless the new policy explicitly blocks them. With iptables-based enforcement, rule updates may cause brief connection resets during the reload cycle.