InterPlanetary File System (IPFS) has become the backbone of decentralized content storage, but running a single IPFS node isn’t enough for production workloads. Content eviction, unreliable pinning, and lack of redundancy make standalone nodes unsuitable for serving websites, hosting datasets, or distributing software packages. This guide compares three self-hosted approaches to IPFS pinning and content distribution that keep your data persistent and accessible.

Understanding IPFS Pinning

IPFS is a content-addressed network — files are identified by their cryptographic hash (CID), not by location. By default, nodes only cache content temporarily. When a node restarts or runs low on disk space, unpinned content gets garbage-collected. Pinning tells a node to permanently retain specific content.
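
Content addressing can be illustrated with a plain SHA-256 digest. This is only a sketch of the idea — real CIDs wrap the digest in multihash/multibase encoding and usually address a DAG of chunks rather than raw bytes:

```python
import hashlib

def digest(content: bytes) -> str:
    # Identical bytes always produce the identical digest -- the essence
    # of content addressing. Real IPFS CIDs are multihash-encoded.
    return hashlib.sha256(content).hexdigest()

a = digest(b"hello ipfs")
b = digest(b"hello ipfs")
c = digest(b"hello ipfs!")
print(a == b)  # same content, same address
print(a == c)  # any change yields a different address
```

Because the address is derived from the content itself, a node can verify retrieved data without trusting the peer that served it.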

A pinning strategy answers three questions:

  • Which CIDs should persist? (pin selection)
  • How many nodes should hold each CID? (replication factor)
  • What happens when a node goes offline? (fault tolerance)
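
These questions translate into simple arithmetic. A sketch of the fault-tolerance math (the Raft quorum formula is standard; the rest follows directly from the replication factor):

```python
def surviving_failures(replication_factor: int) -> int:
    # Content stays retrievable as long as at least one replica remains.
    return replication_factor - 1

def raft_tolerated_failures(cluster_size: int) -> int:
    # Raft needs a majority quorum, so a cluster of n peers tolerates
    # floor((n - 1) / 2) simultaneous peer failures.
    return (cluster_size - 1) // 2

print(surviving_failures(3))       # rf=3: content survives 2 node losses
print(raft_tolerated_failures(3))  # 3-peer Raft cluster tolerates 1 failure
print(raft_tolerated_failures(5))  # 5-peer cluster tolerates 2
```

Note that content availability (replicas remaining) and consensus availability (quorum) are separate concerns: a cluster can lose quorum while every replica is still intact.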
| Feature | Single Node Pinning | IPFS Cluster | Crust Network |
| --- | --- | --- | --- |
| Replication | Manual | Automatic (configurable) | Incentivized (crypto-economic) |
| Multi-node coordination | No | Yes (Raft or CRDT consensus) | Yes (order-preserving consensus) |
| Garbage collection control | Manual | Automatic across cluster | Via storage orders |
| Incentive layer | None | None | CRU token staking |
| Dashboard | CLI only | Web UI + API | Web UI + CLI |
| Kubernetes support | Manual | Helm chart available | Operator available |
| Minimum nodes | 1 | 3 (Raft quorum) | 1+ (with guarantee nodes) |

IPFS Cluster: Coordinated Pinset Management

IPFS Cluster (1,500+ GitHub stars) provides a distributed pinning system built on top of IPFS Kubo nodes. It maintains a consistent pinset across all cluster peers via either CRDT or Raft consensus, ensuring every pinned CID exists on a configurable number of nodes.

Architecture

IPFS Cluster runs as a sidecar alongside each IPFS Kubo node. Cluster peers communicate via libp2p and keep the shared pinset consistent through the configured consensus component (CRDT or Raft). When you pin a CID through any peer, the cluster allocates it to the appropriate number of nodes based on your replication factor configuration.

Docker Compose Deployment

Here’s a Docker Compose configuration showing one peer of a three-node IPFS Cluster (duplicate the cluster/Kubo service pair, with unique names and host ports, for the other two peers):

version: "3.8"

services:
  ipfs-cluster0:
    image: ipfs/ipfs-cluster:latest
    depends_on:
      - ipfs0
    ports:
      - "9094:9094"   # REST API
      - "9095:9095"   # IPFS proxy
      - "9096:9096"   # cluster swarm
    volumes:
      - ./cluster0/data:/data/ipfs-cluster
    environment:
      - CLUSTER_PEERNAME=cluster0
      - CLUSTER_SECRET=${CLUSTER_SECRET}
      - CLUSTER_IPFSHTTP_NODEMULTIADDRESS=/dns/ipfs0/tcp/5001
      # "*" trusts every peer that holds the cluster secret; list
      # specific peer IDs instead for production CRDT deployments
      - CLUSTER_CRDT_TRUSTEDPEERS=*
    networks:
      - ipfsnet

  ipfs0:
    image: ipfs/kubo:latest
    ports:
      - "4001:4001"
      - "5001:5001"
      - "8080:8080"
    volumes:
      - ./ipfs0/data:/data/ipfs
    command: ["daemon", "--migrate", "--enable-gc"]
    networks:
      - ipfsnet

networks:
  ipfsnet:
    driver: bridge

Key Configuration Options

{
  "cluster": {
    "replication_factor_min": 3,
    "replication_factor_max": 5,
    "monitor_ping_interval": "2s",
    "leave_on_shutdown": true
  },
  "ipfs_connector": {
    "ipfshttp": {
      "node_multiaddress": "/dns/ipfs0/tcp/5001"
    }
  }
}

Cluster supports both the legacy Raft-based consensus and the newer CRDT (Conflict-free Replicated Data Type) mode. CRDT is the default in current releases and is recommended for larger deployments, as it eliminates the single-leader bottleneck and lets peers join or leave without cluster-wide reconfiguration.

Crust Network: Incentivized Decentralized Storage

Crust Network (500+ GitHub stars) adds an economic incentive layer to IPFS pinning. Instead of running your own cluster, you place storage orders on the Crust network, and independent storage providers are financially motivated to pin and serve your content. The node software is written in Rust on the Substrate blockchain framework.

How Crust Works

Crust uses a guarantee model where storage providers stake CRU tokens to participate in the network. When you place a storage order:

  1. Your file is split into segments with redundancy
  2. Storage providers bid to host segments
  3. Providers must periodically prove they still store the data (via on-chain storage proofs)
  4. Providers earn CRU tokens for successful storage proofs
  5. Failed proofs result in slashing (token penalties)
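
The incentive loop above can be sketched as a toy model — all numbers here are illustrative, not real Crust parameters:

```python
def settle_epoch(stake: float, reward_per_proof: float,
                 slash_fraction: float, proof_ok: bool) -> tuple[float, float]:
    # One proof period: a successful storage proof earns tokens; a
    # failed proof slashes a fraction of the provider's stake.
    if proof_ok:
        return stake, reward_per_proof
    return stake * (1 - slash_fraction), 0.0

stake = 1000.0
earned = 0.0
for ok in [True, True, False, True]:   # one missed proof
    stake, reward = settle_epoch(stake, reward_per_proof=2.0,
                                 slash_fraction=0.05, proof_ok=ok)
    earned += reward

print(round(stake, 2))   # stake after one 5% slash
print(earned)            # rewards from three successful proofs
```

The point of the design is that, over many epochs, honest storage is the profit-maximizing strategy.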

Docker Compose Deployment

version: "3.8"

services:
  crust-node:
    image: crustio/crust:latest
    ports:
      - "33333:33333"
      - "33344:33344"
    volumes:
      - ./crust/data:/crust/data
      - ./crust/config:/crust/config
    environment:
      - CRUST_SEEDS=${CRUST_SEEDS}
      - CRUST_BASE_PATH=/crust/data
    restart: unless-stopped

  ipfs-gateway:
    image: ipfs/kubo:latest
    ports:
      - "8080:8080"
    command: ["daemon", "--migrate", "--enable-namesys-pubsub"]
    volumes:
      - ./ipfs-data:/data/ipfs
    depends_on:
      - crust-node

Crust also provides a web-based management dashboard and supports integration with existing IPFS gateways for seamless content access.

IPFS Kubo: Single-Node Pinning

The baseline approach uses a single IPFS Kubo node with manual or automated pinning. While this lacks the redundancy of cluster solutions, it’s the simplest starting point and works well for development, personal use, or when paired with external backup strategies.

Docker Compose for Production Pinning

version: "3.8"

services:
  ipfs-kubo:
    image: ipfs/kubo:latest
    ports:
      - "4001:4001/tcp"
      - "4001:4001/udp"
      - "5001:5001"
      - "8080:8080"
    volumes:
      - ./ipfs-data:/data/ipfs
      - ./ipfs-staging:/data/staging
    environment:
      # Set LIBP2P_FORCE_PNET=1 only if you also provide a swarm.key;
      # without one, it prevents the daemon from starting.
      - IPFS_PROFILE=server
    command: >
      daemon --migrate
      --enable-gc
      --enable-namesys-pubsub
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 4G
        reservations:
          memory: 2G
Automated Pinning Script

For single-node setups, automate pinning with a cron-driven script:

#!/bin/bash
# /usr/local/bin/ipfs-pin-critical.sh
# Pins critical content and verifies persistence

IPFS_API="/ip4/127.0.0.1/tcp/5001"  # Kubo HTTP API multiaddress
CRITICAL_CIDS=(
  "QmYourWebsiteCID"
  "QmYourDatasetCID"
)

for cid in "${CRITICAL_CIDS[@]}"; do
  if ! ipfs --api "$IPFS_API" pin ls --type=recursive "$cid" > /dev/null 2>&1; then
    echo "Pinning missing CID: $cid"
    ipfs --api "$IPFS_API" pin add "$cid"
  fi
done

# Run garbage collection to reclaim space from unpinned content
ipfs --api "$IPFS_API" repo gc > /dev/null 2>&1
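
To run the script on a schedule, a crontab entry along these lines works (the path and 30-minute interval are assumptions; adjust to taste):

```shell
# Re-check critical pins every 30 minutes; log output for auditing
*/30 * * * * /usr/local/bin/ipfs-pin-critical.sh >> /var/log/ipfs-pin.log 2>&1
```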

Performance Comparison

| Metric | IPFS Cluster | Crust Network | Kubo Single Node |
| --- | --- | --- | --- |
| Read latency (local) | ~50 ms | ~100 ms (depends on nearest node) | ~30 ms |
| Write propagation | <1 s across cluster | ~5-30 s (network-dependent) | Instant (local only) |
| Storage overhead | Replication factor × size | Redundant encoding (~1.5×) | 1× (no replication) |
| Bandwidth cost | Internal cluster traffic | Public network | Public network |
| Fault tolerance | ⌊(n−1)/2⌋ peer failures (Raft quorum) | Provider-dependent | None (single point of failure) |
| Setup complexity | Moderate | Low (join network) | Minimal |

When to Choose Each Approach

IPFS Cluster is the right choice when:

  • You control all infrastructure nodes
  • You need predictable, low-latency access
  • You want fine-grained control over replication
  • You operate in environments where crypto incentives aren’t appropriate

Crust Network fits when:

  • You want to leverage a global storage network
  • You prefer paying per-use rather than maintaining infrastructure
  • You need geographic distribution without managing nodes
  • Your use case benefits from economic guarantees

Single Node Kubo works for:

  • Development and testing environments
  • Personal websites or small projects
  • When paired with cloud backup for critical content
  • Budget-constrained deployments with single-node tolerance

For related infrastructure topics, see our decentralized storage comparison, P2P container distribution guide, and S3 object storage comparison.

Why Self-Host IPFS Pinning?

Running your own IPFS pinning infrastructure gives you complete control over content persistence without depending on commercial pinning services that may change pricing, discontinue operations, or impose content restrictions. Self-hosted pinning ensures your data remains accessible on your terms, with configurable replication factors and geographic distribution.

For organizations handling sensitive datasets, academic repositories, or software distribution, self-hosted IPFS pinning eliminates the risk of third-party content eviction. Combined with IPFS Cluster’s automatic replication, you get enterprise-grade content persistence without vendor lock-in.

Cost analysis: A 3-node IPFS Cluster on commodity hardware (4 CPU, 16 GB RAM, 2 TB SSD each) costs approximately $150-200/month in cloud compute. Commercial pinning services charge roughly $0.05-0.15/GB/month, so 1 TB of pinned content runs $50-150/month. As your pinned volume grows toward the cluster's ~2 TB usable capacity (with replication factor 3), self-hosting becomes the cheaper option — and the gap widens with every additional terabyte of capacity you add.
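
The comparison is easy to recompute for your own numbers; a small sketch (the figures mirror the rough estimates above and are assumptions, not quotes):

```python
def monthly_costs(pinned_gb: float, selfhost_monthly: float = 175.0,
                  service_per_gb: float = 0.10) -> tuple[float, float]:
    # Self-hosting is roughly flat up to cluster capacity; commercial
    # pinning scales linearly with pinned volume.
    return selfhost_monthly, pinned_gb * service_per_gb

for gb in (500, 1000, 2000):
    selfhost, service = monthly_costs(gb)
    cheaper = "self-host" if selfhost < service else "service"
    print(f"{gb} GB: self-host ${selfhost:.0f}/mo vs "
          f"service ${service:.0f}/mo -> {cheaper}")
```

Remember to count operator time as well as compute when comparing: a managed service's premium partly buys you out of on-call duty.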

FAQ

What happens to pinned content if an IPFS node restarts?

Pinned content survives node restarts because the pin record is stored in the node’s datastore. However, unpinned content may be garbage-collected during startup if the node detects disk pressure. Always configure --enable-gc carefully and monitor disk usage.

How many replicas does IPFS Cluster need for production?

For production workloads, set replication_factor_min to at least 3. This ensures content survives up to 2 simultaneous node failures. For critical content, use 5 replicas across geographically distributed nodes.

Can I mix IPFS Cluster with commercial pinning services?

Yes. IPFS Cluster supports remote pinning services, allowing you to pin content to both your cluster peers and external services like Pinata or web3.storage simultaneously. This provides an additional disaster recovery layer.

Does Crust Network support existing IPFS CIDs?

Crust is fully IPFS-compatible. Any CID can be pinned to the Crust network by placing a storage order. The content is verified using IPFS’s standard content-addressing, ensuring interoperability with the broader IPFS ecosystem.

How do I migrate from single-node pinning to IPFS Cluster?

  1. Deploy the IPFS Cluster alongside your existing Kubo node
  2. Connect the cluster peer to your Kubo node via the IPFS HTTP API
  3. Export your existing pins: ipfs pin ls --type=recursive -q > pins.txt (the -q flag prints bare CIDs)
  4. Import them into the cluster: while read -r cid; do ipfs-cluster-ctl pin add "$cid"; done < pins.txt
  5. Gradually add more cluster peers for redundancy
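
Note that `ipfs pin ls` without `-q` prints `<cid> recursive` per line; a small helper (illustrative only) to extract bare CIDs before feeding them to `ipfs-cluster-ctl`:

```python
def parse_pin_ls(output: str) -> list[str]:
    # Each line of `ipfs pin ls --type=recursive` looks like
    # "<cid> recursive"; keep just the CID column.
    cids = []
    for line in output.splitlines():
        fields = line.split()
        if fields:
            cids.append(fields[0])
    return cids

sample = "QmAaaa recursive\nQmBbbb recursive\n"
print(parse_pin_ls(sample))  # ['QmAaaa', 'QmBbbb']
```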

What is the minimum disk space for an IPFS Cluster node?

Each node needs roughly (total pinned content × replication factor) ÷ number of nodes, plus headroom for the datastore and garbage collection. For a 3-node cluster with replication factor 3 and 100 GB of pinned content, each node stores ~100 GB — the full set, since the replication factor equals the node count in small clusters.
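
The rule of thumb in the answer above, as a sketch:

```python
def per_node_gb(total_pinned_gb: float, replication_factor: int,
                node_count: int) -> float:
    # Total bytes stored cluster-wide = pinned content x replication
    # factor, spread across the nodes; no node ever stores more than
    # one full copy of the pinset.
    if replication_factor > node_count:
        raise ValueError("replication factor cannot exceed node count")
    return min(total_pinned_gb,
               total_pinned_gb * replication_factor / node_count)

print(per_node_gb(100, 3, 3))  # every node holds the full 100 GB set
print(per_node_gb(100, 3, 5))  # replicas spread across 5 nodes: 60 GB each
```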