When your Kubernetes microservices start misbehaving, the first question every engineer asks is: what is actually happening on the wire? Pod-to-pod communication in a Kubernetes cluster involves multiple abstraction layers — Container Network Interface (CNI) plugins, service mesh sidecars, iptables or eBPF-based routing, and encrypted TLS tunnels. Traditional packet capture tools struggle in this environment because traffic is ephemeral, distributed across nodes, and often encrypted end-to-end.

In this guide, we compare three approaches to Kubernetes traffic analysis: Kubeshark (eBPF-powered live dashboard), ksniff (kubectl plugin streaming tcpdump to Wireshark), and the classic Wireshark desktop application. Each has a distinct place in the debugging workflow, and understanding when to reach for which tool will save you hours during incident response.

Why Self-Hosted Kubernetes Traffic Analysis Matters

Debugging network issues in a Kubernetes cluster is fundamentally different from debugging a single server. A typical request flows through multiple pods, services, ingress controllers, and possibly a service mesh — each adding headers, encryption, or routing decisions. When latency spikes or connections fail, you need visibility into the actual packets traversing your cluster.

Self-hosted traffic analysis tools give you that visibility without sending sensitive network data to third-party SaaS platforms. For teams running workloads with compliance requirements (HIPAA, SOC 2, GDPR), keeping packet captures and traffic metadata within your own infrastructure is non-negotiable. Additionally, self-hosted tools work in air-gapped environments and do not depend on external API availability during outages.

For related reading, see our eBPF networking observability guide with Cilium, Pixie, and Tetragon and our Kubernetes debugging tools comparison covering Stern, Kubetail, and Inspektor Gadget.

Kubeshark: eBPF-Powered Live Traffic Dashboard

Kubeshark is an open-source network observability tool built specifically for Kubernetes. It uses eBPF (Extended Berkeley Packet Filter) to hook into the kernel’s networking stack and capture all cluster-wide traffic at the socket level. Unlike traditional packet capture, Kubeshark decrypts TLS/mTLS traffic automatically using eBPF — no sidecar proxies, no key injection, no application changes required.

With 11,883 GitHub stars and active weekly development, Kubeshark has become one of the most popular K8s-native traffic analysis tools. Its web-based dashboard provides real-time visibility into every connection, with the ability to filter by namespace, workload, protocol, or custom query expressions.
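
For example, the dashboard's filter bar accepts KFL (Kubeshark Filter Language) expressions. The queries below are illustrative; exact field names can vary between Kubeshark versions:

http and response.status == 500
src.namespace == "production" and dst.port == "8080"
dns

The first shows only HTTP exchanges that returned a 500, the second narrows traffic to one namespace and destination port, and the third isolates DNS queries and responses.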

Key Features

  • Automatic TLS decryption — eBPF hooks extract plaintext from encrypted traffic without certificate access
  • L7 protocol dissection — HTTP, gRPC, DNS, Kafka, AMQP, Redis, PostgreSQL, and more are automatically parsed
  • Kubernetes-aware queries — filter traffic by namespace, pod, service, deployment, or label selectors
  • Retrospective PCAP downloads — export cluster-wide packet captures filtered by time range and workload
  • MCP server integration — expose captured traffic data to external tools via the Model Context Protocol
  • Real-time streaming — see traffic as it happens with sub-second latency on the dashboard

Installing Kubeshark via Helm

The recommended installation method uses the official Helm chart. Kubeshark deploys a set of worker pods (one per node) that run eBPF probes, plus a central hub that aggregates and indexes the captured data:

# Add the Kubeshark Helm repository
helm repo add kubeshark https://helm.kubeshark.com
helm repo update

# Install Kubeshark into your cluster
helm install kubeshark kubeshark/kubeshark \
  --namespace kubeshark \
  --create-namespace

# Port-forward to the frontend dashboard
kubectl port-forward -n kubeshark svc/kubeshark-front 8899:80

Once the port-forward is active, open http://localhost:8899 in your browser. The dashboard immediately begins showing live traffic from all pods in the cluster.
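
Before diving in, it is worth confirming the capture stack is healthy (pod names below reflect the default Helm chart and may vary slightly by version):

# Expect one worker pod per node, plus the hub and front pods, all Running
kubectl get pods -n kubeshark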

Running Kubeshark via CLI

For quick debugging sessions, the kubeshark CLI provides a one-command alternative:

# Install the CLI (Linux/macOS)
sh <(curl -Ls https://kubeshark.co/install)

# Start capturing in the current cluster context
kubeshark tap

# Tap specific namespaces only
kubeshark tap -n production -n staging

# Set resource limits for worker pods
kubeshark tap --set tap.resources.worker.limits.cpu=500m \
  --set tap.resources.worker.limits.memory=1Gi
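
When the session is over, the CLI can remove everything it deployed:

# Tear down all Kubeshark resources from the cluster
kubeshark clean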

Production Deployment with Ingress

For persistent deployments, expose Kubeshark through an ingress controller rather than relying on port-forwarding:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubeshark
  namespace: kubeshark
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: kubeshark-auth
    nginx.ingress.kubernetes.io/auth-realm: "Kubeshark Dashboard"
spec:
  ingressClassName: nginx
  rules:
  - host: traffic.internal.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubeshark-front
            port:
              number: 80

Important: Always protect the Kubeshark dashboard with authentication. It exposes plaintext network traffic — including credentials and tokens — to anyone with access to the web UI.
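
The ingress above references a kubeshark-auth secret, which you must create yourself. A minimal sketch using htpasswd (from apache2-utils; the admin username is only an example, and ingress-nginx expects the file under the key auth):

# Generate an htpasswd file and store it as the secret the ingress references
htpasswd -c auth admin
kubectl create secret generic kubeshark-auth --from-file=auth -n kubeshark
rm auth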

ksniff: tcpdump Meets Wireshark on Kubernetes

ksniff takes a different approach. Instead of deploying a persistent monitoring stack, ksniff is a kubectl plugin that uploads a statically compiled tcpdump binary to your target pod, starts a capture, and streams the output directly to your local Wireshark instance. This gives you the full power of Wireshark’s protocol dissectors with minimal impact on the running pod.

With 3,446 GitHub stars, ksniff is a lightweight tool ideal for targeted debugging of individual pods. However, note that the project has not seen major updates since August 2024, and the maintainers explicitly state it is not production-ready.

Installing ksniff

ksniff installs via krew, the kubectl plugin manager:

# Install krew if you don't have it
(
  set -x; cd "$(mktemp -d)" &&
  OS="$(uname | tr '[:upper:]' '[:lower:]')" &&
  ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')" &&
  KREW="krew-${OS}_${ARCH}" &&
  curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz" &&
  tar zxvf "${KREW}.tar.gz" &&
  ./"${KREW}" install krew
)
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"

# Install ksniff
kubectl krew install sniff

Using ksniff to Capture Pod Traffic

The basic workflow is simple — point ksniff at a pod and Wireshark opens automatically:

# Capture traffic from a specific pod (opens Wireshark live)
kubectl sniff <pod-name> -n <namespace>

# Capture with a BPF filter (only traffic on port 8080)
kubectl sniff <pod-name> -n production -f "port 8080"

# Save to a pcap file instead of opening Wireshark
kubectl sniff <pod-name> -n production -o capture.pcap

# Capture on a specific network interface inside the pod
kubectl sniff <pod-name> -n production -i eth0

# Specify a capture filter and output file
kubectl sniff my-app-7d9f8b6c4-x2k9p \
  -n production \
  -c my-app-container \
  -f "tcp port 443" \
  -o /tmp/encrypted-traffic.pcap

How ksniff Works Under the Hood

When you run kubectl sniff, the plugin:

  1. Connects to the target pod using kubectl exec
  2. Uploads a statically compiled tcpdump binary to /tmp/tcpdump inside the container
  3. Starts tcpdump with your specified filter
  4. Streams the pcap output back through the kubectl exec channel
  5. Pipes the stream into your local Wireshark for real-time analysis
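
Conceptually, the whole pipeline is close to this manual one-liner, which works only if the target image already ships tcpdump (pod name and filter are illustrative):

# -U flushes packets as they arrive; wireshark -k -i - reads the live stream from stdin
kubectl exec -n production my-pod -- tcpdump -i eth0 -U -w - "port 8080" | wireshark -k -i -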

This approach is clever but has limitations. It requires that the target pod allow kubectl exec, and the uploaded tcpdump binary must match the pod's CPU architecture. Pods running as non-root with restrictive security contexts may reject the binary upload; ksniff's privileged mode, shown below, is the usual workaround.
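
In privileged mode, ksniff schedules a privileged pod on the same node and captures from there instead of injecting anything into the target container (flag per the ksniff README; behavior may vary by version):

# Capture without uploading tcpdump into the target pod
kubectl sniff <pod-name> -n production -p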

Wireshark: The Classic Desktop Packet Analyzer

Wireshark is the industry-standard network protocol analyzer. With 9,270+ stars on its GitHub mirror and decades of development, it supports more protocols than any other tool. However, Wireshark is a desktop application — it captures traffic from a network interface on the machine where it runs, not from inside a Kubernetes cluster.

Using Wireshark with Kubernetes

To use Wireshark for Kubernetes traffic analysis, you have several options:

Option 1: Capture on a cluster node

# SSH into a Kubernetes node
ssh node-1.cluster.internal

# List available interfaces
ip link show

# Capture on the CNI bridge (interface name varies by CNI plugin);
# on headless nodes, use tshark -i cni0 instead of the GUI
sudo wireshark -i cni0 &

# Or capture on the node's primary interface for external traffic
sudo wireshark -i eth0 &

Option 2: Run tcpdump in a debug pod and open the pcap locally

# Run a debug pod with network tools
kubectl run debug --rm -it --image=nicolaka/netshoot -- /bin/bash

# Inside the debug pod, capture traffic
tcpdump -i any -w /tmp/capture.pcap host 10.244.1.5

# From a second terminal (the --rm pod is deleted when you exit it),
# copy the pcap to your local machine
kubectl cp default/debug:/tmp/capture.pcap ./capture.pcap

# Open in Wireshark
wireshark capture.pcap

Option 3: Use kubectl debug with ephemeral containers

# Add an ephemeral debug container to a running pod
kubectl debug -it <pod-name> --image=nicolaka/netshoot --target=<container-name>

# Run tcpdump inside the ephemeral container
tcpdump -i any -w /tmp/capture.pcap port 8080

# Copy out and analyze; pass the ephemeral container's auto-generated
# name with -c, since the pcap lives in its filesystem
kubectl cp <pod-name>:/tmp/capture.pcap ./capture.pcap -c <debug-container-name>
wireshark capture.pcap

Feature Comparison

| Feature | Kubeshark | ksniff | Wireshark |
| --- | --- | --- | --- |
| Architecture | eBPF kernel probes | tcpdump binary injection | Desktop packet capture |
| Deployment | Helm chart / CLI | kubectl krew plugin | Desktop application |
| Traffic scope | Entire cluster | Single pod | Local machine interface |
| TLS decryption | Automatic via eBPF | Manual (requires keys) | Manual (requires key log) |
| L7 protocol parsing | HTTP, gRPC, DNS, Kafka, Redis, PostgreSQL | Via Wireshark dissectors | 3,000+ protocols |
| Real-time dashboard | Web-based UI | Wireshark desktop | Wireshark desktop |
| Kubernetes awareness | Native (namespace, labels, workloads) | Pod-level only | None |
| PCAP export | Cluster-wide, filtered by time/workload | Single pod stream | Local interface capture |
| Production readiness | Yes | No (per maintainer) | Yes |
| GitHub stars | 11,883 | 3,446 | 9,270+ |
| Last updated | April 2026 | August 2024 | April 2026 |
| Resource overhead | Moderate (one worker per node) | Minimal (per-capture only) | None on cluster |
| Multi-node capture | Yes (automatic) | No (one pod at a time) | No (one interface at a time) |
| Air-gapped support | Yes (self-hosted) | Yes (offline plugin) | Yes (offline desktop) |

Choosing the Right Tool

Use Kubeshark when:

  • You need cluster-wide visibility into all network traffic
  • TLS/mTLS decryption without sidecars is required
  • You want a persistent monitoring stack with a web dashboard
  • Incident response requires retrospective traffic queries across multiple pods
  • You need L7 protocol parsing for HTTP, gRPC, Kafka, or Redis traffic

Use ksniff when:

  • You need to debug a specific pod’s network traffic quickly
  • Your team is already comfortable with Wireshark’s interface
  • You want minimal resource overhead (no persistent deployment)
  • The target pod allows kubectl exec and has a compatible architecture
  • You are debugging in a development or staging environment

Use Wireshark directly when:

  • You are analyzing traffic at the node level (CNI bridges, host networking)
  • You need deep protocol analysis for obscure or custom protocols
  • You are working in an environment where cluster tool deployment is not possible
  • You are reviewing saved pcap files from any source
  • You need offline analysis with no cluster connectivity

For more on packet capture fundamentals, check our tcpdump vs tshark vs termshark comparison.

Common Debugging Scenarios

Scenario 1: Intermittent Connection Timeouts Between Services

When pod A intermittently fails to reach pod B on port 8080:

# Use Kubeshark to watch traffic in real-time
kubeshark tap -n production --grep "pod-a.*pod-b"

# Or use ksniff for targeted capture on the source pod
kubectl sniff pod-a-xyz -n production -f "port 8080" -o timeout-capture.pcap

# Analyze the TCP handshake in Wireshark:
# - Look for SYN without SYN-ACK (timeout)
# - Check for TCP retransmissions
# - Verify the destination IP matches the Service ClusterIP
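
If you saved the capture, tshark (Wireshark's command-line counterpart) can run those checks non-interactively; the display filters below are standard Wireshark filters:

# Initial SYNs (SYN set, ACK clear); repeated SYNs to the same peer suggest unanswered handshakes
tshark -r timeout-capture.pcap -Y "tcp.flags.syn == 1 && tcp.flags.ack == 0"
# Count packets flagged as retransmissions by Wireshark's TCP analysis
tshark -r timeout-capture.pcap -Y "tcp.analysis.retransmission" | wc -l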

Scenario 2: Debugging gRPC Connection Errors

gRPC uses HTTP/2, which requires protocol-specific analysis:

# Kubeshark automatically dissects gRPC traffic
# Filter by gRPC status code in the dashboard:
#   grpc.status.code != 0

# For ksniff + Wireshark:
kubectl sniff grpc-server-abc -n production -f "port 9090" -o grpc.pcap
# In Wireshark: Filter with `grpc` or `http2` display filter
# Look for: GOAWAY frames, RST_STREAM, HTTP/2 HEADERS with error status

Scenario 3: Verifying mTLS Is Working Correctly

# Kubeshark: TLS decryption happens automatically, so traffic reads as
# plaintext either way. Instead, check how each entry is labeled: traffic
# in an mTLS-enabled namespace should be marked as TLS on the wire; if it
# shows as plain, unencrypted HTTP, mTLS is not being applied

# Wireshark: you need the TLS key log from the application
# Set the environment variable on the target deployment (it only takes
# effect if the application honors SSLKEYLOGFILE):
kubectl set env deployment/my-app SSLKEYLOGFILE=/tmp/sslkey.log
# Then mount and extract the keylog for Wireshark analysis
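
A sketch of that extraction step with an illustrative pod name; the -o flag preloads the key log preference when Wireshark opens the capture:

# Copy the key log out of the pod, then open the capture with keys loaded
kubectl cp production/my-app-7d9f8b6c4-x2k9p:/tmp/sslkey.log ./sslkey.log
wireshark -o tls.keylog_file:./sslkey.log capture.pcap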

FAQ

Does Kubeshark work with all CNI plugins?

Yes. Kubeshark uses eBPF to capture traffic at the kernel socket level, which is independent of the CNI plugin. It works with Calico, Cilium, Flannel, Weave Net, and any other CNI that operates on standard Linux networking. The eBPF probes attach to network socket events rather than specific network interfaces.

Is Kubeshark safe to run in production?

Kubeshark is designed for production use and runs as read-only eBPF probes that do not modify network traffic. However, the dashboard exposes plaintext data — including credentials, tokens, and sensitive payloads. Always enable authentication (basic auth, OAuth, or network policies) and restrict access to the dashboard using Kubernetes NetworkPolicies or ingress authentication.
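
As a sketch, a NetworkPolicy along these lines limits dashboard access to the ingress controller's namespace (the pod label and namespace name are assumptions; verify them against your actual deployment):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-kubeshark-front
  namespace: kubeshark
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: kubeshark   # assumed label; check your deployed pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: ingress-nginx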

The ksniff maintainers explicitly state the tool is not production-ready. It uploads arbitrary binaries into running containers via kubectl exec, which can violate security policies, trigger audit alerts, or fail on hardened pods with read-only filesystems or restricted security contexts. For production environments, prefer Kubeshark or kubectl debug ephemeral containers.

Can I use Wireshark to decrypt TLS traffic from Kubernetes?

Yes, but you need the TLS session keys. For applications that support SSLKEYLOGFILE (curl, Firefox, Chrome, and Go programs that wire the variable to tls.Config.KeyLogWriter), set the environment variable and collect the key log file. Then configure Wireshark (Edit → Preferences → Protocols → TLS) to read the key log. For mTLS traffic in a service mesh, per-session keys are generally not exposed at all, which is the problem Kubeshark sidesteps by hooking the plaintext with eBPF before encryption happens.

How much cluster resource does Kubeshark consume?

Kubeshark deploys one worker pod per node, each typically using 100-500m CPU and 256Mi-1Gi memory depending on traffic volume. The hub pod uses approximately 200m CPU and 512Mi memory. For a 10-node cluster, expect a total overhead of roughly 1-2 CPU cores and 3-6 GB RAM. You can tune resource limits via Helm values:

helm install kubeshark kubeshark/kubeshark \
  --set tap.resources.worker.limits.cpu=250m \
  --set tap.resources.worker.limits.memory=512Mi

Does Kubeshark support offline or air-gapped environments?

Yes. Kubeshark is fully self-hosted and does not require any external connectivity. You can pull the Docker images (kubeshark/worker, kubeshark/hub, kubeshark/front) in an internet-connected environment, transfer them to your air-gapped registry using docker save / docker load, and install via Helm with --set global.imageRegistry=your-internal-registry.example.com.
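
A rough sketch of that transfer, with placeholder tags and registry names:

# On a connected machine: pull the images and bundle them with the chart
# (assumes the Helm repo was added as shown earlier)
docker pull kubeshark/worker:latest
docker pull kubeshark/hub:latest
docker pull kubeshark/front:latest
docker save -o kubeshark-images.tar kubeshark/worker:latest kubeshark/hub:latest kubeshark/front:latest
helm pull kubeshark/kubeshark

# Inside the air gap: load, retag for the internal registry, and install
docker load -i kubeshark-images.tar
docker tag kubeshark/worker:latest your-internal-registry.example.com/kubeshark/worker:latest
docker push your-internal-registry.example.com/kubeshark/worker:latest
# ...repeat the tag/push for hub and front, then:
helm install kubeshark ./kubeshark-*.tgz \
  --namespace kubeshark --create-namespace \
  --set global.imageRegistry=your-internal-registry.example.com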