Modern storage management often requires combining multiple drives of different sizes, speeds, and purposes into a single coherent view. Whether you are building a home NAS, managing a media server, or organizing backup volumes, union file systems provide the flexibility to pool heterogeneous storage without the rigidity of traditional RAID.
This guide compares three leading open-source approaches to union storage: SnapRAID (2,479 stars), MergerFS (5,616 stars), and UnionFS-Fuse (366 stars). We cover deployment strategies, parity configuration, pooling policies, and Docker integration so you can build the storage layer that fits your infrastructure.
What Are Union File Systems?
A union file system overlays multiple directories (potentially on different physical drives) into a single mount point. Unlike traditional RAID, which requires identical drives and dedicates capacity to parity, union file systems let you:
- Pool drives of different sizes — mix 4 TB, 8 TB, and 16 TB drives without wasted space
- Add or remove drives on the fly — no rebuild process, no waiting for resync
- Choose pooling policies — control which drive receives new writes (most free space, first available, round-robin)
- Separate parity from data — SnapRAID stores parity on dedicated drives, protecting against disk failure without locking your array
Traditional RAID (mdadm, ZFS, Btrfs RAID) offers strong data integrity but at the cost of flexibility. Union file systems trade some of that rigidity for operational simplicity and heterogeneous drive support.
SnapRAID: Parity-Based Protection
SnapRAID is a user-space parity tool inspired by hardware RAID5/6. It computes parity files across a set of data drives and can reconstruct data from a single failed drive (or two, with dual parity).
Key characteristics:
- Runs as a scheduled task, not in real time — parity is computed when you run `snapraid sync`
- Supports up to 6 parity drives — one parity drive is RAID5-equivalent, two are RAID6-equivalent, and each additional parity drive tolerates one more simultaneous failure
- Works with any filesystem underneath (ext4, XFS, Btrfs, NTFS)
- Does NOT protect against simultaneous multi-drive failure beyond your parity count
- Ideal for cold storage, media libraries, and archives where data is written once and read often
SnapRAID Docker Deployment
SnapRAID does not have an official Docker image, but LinuxServer.io provides a community container; alternatively, you can run it in a privileged container with host drive mounts:
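A minimal Compose sketch for this setup — the image name is a placeholder (no official image exists; substitute the community image you actually use), and the drive paths are illustrative:

```yaml
# docker-compose.yml — sketch only; image name and mount paths are placeholders
services:
  snapraid:
    image: community/snapraid:latest   # placeholder: substitute your community image
    privileged: true                   # direct access to host block devices
    volumes:
      - ./snapraid.conf:/config/snapraid.conf
      - /mnt/disk1:/mnt/disk1
      - /mnt/disk2:/mnt/disk2
      - /mnt/parity1:/mnt/parity1
    restart: unless-stopped
```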
SnapRAID configuration (/config/snapraid.conf):
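An illustrative configuration for two data drives and one parity drive — the paths are examples, but the directives shown (`parity`, `content`, `data`, `exclude`) are standard SnapRAID configuration keywords:

```conf
# Parity file lives on a dedicated drive at least as large as the biggest data drive
parity /mnt/parity1/snapraid.parity

# Content files track the state of the array; keep copies on multiple drives
content /mnt/disk1/snapraid.content
content /mnt/disk2/snapraid.content

# Data drives to protect
data d1 /mnt/disk1/
data d2 /mnt/disk2/

# Skip transient files
exclude *.tmp
exclude /lost+found/
```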
Running a sync:
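Typical commands, run inside the container with `docker exec` or directly on the host:

```shell
# Update parity after files change
snapraid sync

# Verify roughly 10% of the array against parity (bit-rot check)
snapraid scrub -p 10

# Report array state and any detected errors
snapraid status
```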
MergerFS: Real-Time Drive Pooling
MergerFS is a FUSE-based union filesystem that creates a pooled view of multiple directories in real time. Unlike SnapRAID, it does not provide parity — it focuses on flexible write policies and a unified namespace.
Key characteristics:
- Real-time file operations — reads and writes go directly to underlying drives
- Rich policy engine: `epmfs` (existing path, most free space), `mfs` (most free space), `ff` (first found), `rand` (random)
- Supports hard links, extended attributes, and POSIX permissions
- Can be combined with SnapRAID: MergerFS pools drives for writes, SnapRAID provides parity protection
- Ideal for active storage, download directories, and media servers
MergerFS Docker Deployment
MergerFS can be deployed in a privileged container with FUSE support:
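A sketch of the required `docker run` flags, assuming a community MergerFS image (the image name and paths are placeholders). FUSE inside a container needs the `/dev/fuse` device and the `SYS_ADMIN` capability:

```shell
# Image name and drive paths are placeholders; adjust to your setup
docker run -d --name mergerfs \
  --device /dev/fuse \
  --cap-add SYS_ADMIN \
  --security-opt apparmor:unconfined \
  -v /mnt/disk1:/mnt/disk1 \
  -v /mnt/disk2:/mnt/disk2 \
  -v /srv/pool:/srv/pool:shared \
  community/mergerfs \
  mergerfs -f -o category.create=epmfs,cache.files=partial \
  /mnt/disk1:/mnt/disk2 /srv/pool
```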
Direct host installation (recommended for production):
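On Debian or Ubuntu, MergerFS is in the standard repositories; an fstab entry keeps the pool mounted across reboots (drive paths are illustrative):

```shell
# Install (Debian/Ubuntu)
sudo apt install mergerfs

# One-off mount of two drives into a pool
sudo mergerfs -o category.create=epmfs,cache.files=partial,dropcacheonclose=true \
  /mnt/disk1:/mnt/disk2 /srv/pool

# Or persist it in /etc/fstab:
# /mnt/disk1:/mnt/disk2 /srv/pool fuse.mergerfs category.create=epmfs,cache.files=partial 0 0
```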
Useful MergerFS policies:
- `category.create=epmfs` — write to the drive with the most free space among those that already contain the file's parent directory
- `category.create=mfs` — write to the drive with the most free space
- `category.create=ff` — write to the first drive with enough space
- `cache.files=partial` — enable page caching for reads (a caching option rather than a create policy)
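To make the policy names concrete, here is a small Python sketch — not MergerFS code; the branch names and free-space figures are invented — of how `mfs` and `epmfs` choose a target branch:

```python
import os

def pick_mfs(free):
    """mfs: pick the branch with the most free space."""
    return max(free, key=free.get)

def pick_epmfs(free, relpath, existing_dirs):
    """epmfs: among branches that already contain the file's parent
    directory, pick the one with the most free space; fall back to
    mfs across all branches if none qualify."""
    parent = os.path.dirname(relpath)
    candidates = {b: f for b, f in free.items() if (b, parent) in existing_dirs}
    return pick_mfs(candidates) if candidates else pick_mfs(free)

# Invented example state: free space in GB per branch
free = {"/mnt/disk1": 120, "/mnt/disk2": 800, "/mnt/disk3": 450}
existing = {("/mnt/disk1", "movies"), ("/mnt/disk3", "movies")}

print(pick_mfs(free))                              # /mnt/disk2 (most free overall)
print(pick_epmfs(free, "movies/a.mkv", existing))  # /mnt/disk3 (most free of those with movies/)
```

This is why `epmfs` keeps directory trees together on a drive where possible, while `mfs` spreads files purely by capacity.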
UnionFS-Fuse: Layered Read/Write Overlay
UnionFS-Fuse is a FUSE implementation of the UnionFS concept, designed to overlay a read-write branch on top of read-only branches. This is commonly used for:
- Creating writable overlays on read-only filesystems (Live CD environments)
- Combining a fast SSD cache layer with slower HDD storage
- Testing environments where changes should be discardable
Key characteristics:
- Supports copy-on-write (COW) semantics — writes to read-only branches are redirected to the read-write branch
- Simpler feature set than MergerFS — no policy engine for write distribution
- Best suited for overlay scenarios rather than drive pooling
- 366 stars, actively maintained but smaller community than MergerFS
UnionFS-Fuse Docker Deployment
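A sketch with placeholder image and paths; as with MergerFS, FUSE in a container needs `/dev/fuse` and `SYS_ADMIN`. The `dir=RW:dir=RO` branch syntax is unionfs-fuse's own:

```shell
# Image name and paths are placeholders; the binary may be named
# unionfs or unionfs-fuse depending on the distribution
docker run -d --name unionfs \
  --device /dev/fuse \
  --cap-add SYS_ADMIN \
  -v /mnt/rw:/branches/rw \
  -v /mnt/ro:/branches/ro:ro \
  -v /srv/union:/srv/union:shared \
  community/unionfs-fuse \
  unionfs -f -o cow /branches/rw=RW:/branches/ro=RO /srv/union
```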
Direct host installation:
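On Debian or Ubuntu (paths are illustrative):

```shell
# Install; the binary may be named unionfs or unionfs-fuse depending on distro
sudo apt install unionfs-fuse

# Overlay a writable branch on top of a read-only branch with copy-on-write
unionfs-fuse -o cow /mnt/rw=RW:/mnt/ro=RO /srv/union

# Unmount when done
fusermount -u /srv/union
```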
Comparison: SnapRAID vs MergerFS vs UnionFS-Fuse
| Feature | SnapRAID | MergerFS | UnionFS-Fuse |
|---|---|---|---|
| Stars | 2,479 | 5,616 | 366 |
| Language | C | C++ | C |
| Primary purpose | Parity protection | Drive pooling | RW overlay on RO |
| Real-time | No (scheduled sync) | Yes | Yes |
| Parity/RAID | Yes (up to 6 parity) | No | No |
| Write policies | N/A | epmfs, mfs, ff, rand, lfs | COW only |
| Hot-add drives | Yes (add, then sync) | Yes (instant) | Yes (remount) |
| Filesystem agnostic | Yes | Yes | Yes |
| Bit-rot detection | Yes (scrub) | No | No |
| Docker deployment | Community image (LSIO) | Privileged container | Privileged container |
| Best use case | Archive/media library | Active storage pool | Read-only overlay |
Recommended Architecture: Combining SnapRAID + MergerFS
The most popular self-hosted storage setup combines both tools:
- MergerFS pools all data drives into a single mount point (`/srv/pool`) for real-time read/write access
- SnapRAID runs on a cron schedule (for example, every 6 hours) to compute parity from the pooled data
- New files land on the drive selected by the MergerFS `epmfs` policy (most free space among branches that already hold the parent directory)
- If a drive fails, SnapRAID reconstructs the lost data from parity
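Putting it together — an illustrative fstab entry for the pool plus a cron job for parity. All paths and the schedule are examples; note that the parity drive stays out of the MergerFS pool:

```conf
# /etc/fstab — MergerFS pools the data drives (parity drive NOT included)
/mnt/disk1:/mnt/disk2:/mnt/disk3 /srv/pool fuse.mergerfs category.create=epmfs,cache.files=partial,dropcacheonclose=true 0 0

# /etc/cron.d/snapraid — recompute parity every 6 hours
0 */6 * * * root /usr/bin/snapraid sync
```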
Why Self-Host Your Storage Pool?
Commercial NAS devices from Synology and QNAP offer polished interfaces but come with hardware lock-in, limited upgrade paths, and premium pricing. Building your own storage pool with open-source tools gives you:
Complete hardware freedom — use any combination of drives, controllers, and chassis. Mix enterprise SAS drives with consumer SATA drives. Add NVMe cache when needed. Upgrade one drive at a time without replacing the entire array.
No vendor lock-in — your data lives on standard filesystems (ext4, XFS). If your software stack fails, you can mount any individual drive on any Linux system and read the data directly. Proprietary RAID formats often require the original controller or software to recover.
Cost efficiency — a used server chassis with 12 drive bays costs less than a 4-bay NAS. Drives are the biggest expense, and buying used enterprise drives (with proper health checks) cuts costs by 50-70%.
Data sovereignty — parity data never leaves your network. Cloud backup services require uploading your entire library, which is impractical for multi-terabyte collections. SnapRAID parity stays local.
For decentralized storage alternatives, see our IPFS vs Storj vs Sia comparison. For S3-compatible object storage, check our MinIO vs SeaweedFS vs Garage guide. If you need NFS sharing for your pooled storage, our NFS server guide covers deployment options.
FAQ
What is the difference between SnapRAID and traditional RAID?
SnapRAID computes parity on a scheduled basis (not in real-time), so there is a window between writes and parity updates where data is unprotected. Traditional RAID (mdadm, ZFS) protects data instantly. However, SnapRAID works with any filesystem, supports heterogeneous drives, and does not require rebuilding the entire array when a drive fails — you simply restore the affected files from parity.
Can I use SnapRAID and MergerFS together?
Yes, this is the recommended setup. MergerFS creates a pooled view of all your data drives for real-time access, while SnapRAID periodically computes parity across those same drives. The combination gives you the flexibility of pooling with the safety of parity protection.
How many parity drives does SnapRAID support?
SnapRAID supports up to 6 parity drives. With 1 parity drive, you can survive a single disk failure (RAID5 equivalent). With 2 parity drives, you can survive 2 simultaneous failures (RAID6 equivalent). Additional parity drives provide further protection at the cost of storage capacity.
Does MergerFS provide any data protection?
No. MergerFS is purely a pooling tool — it provides no redundancy, parity, or backup functionality. If a drive in a MergerFS pool fails, files stored on that drive are lost. Combine MergerFS with SnapRAID or a separate backup solution for data protection.
Can I add a new drive to an existing SnapRAID array?
Yes. Add the new drive to your `snapraid.conf` file, then run `snapraid sync`. The new drive will be included in the next parity calculation. No data migration or array rebuild is required.
Is UnionFS-Fuse suitable for a home NAS?
UnionFS-Fuse is better suited for overlay scenarios (e.g., writable layer on read-only media) than for general NAS drive pooling. For NAS use, MergerFS (pooling) combined with SnapRAID (parity) is the more feature-complete solution. UnionFS-Fuse shines in Live CD environments and container base image layering.
Choosing the Right Union File System
Choose SnapRAID if: You need parity protection for large, mostly-static datasets (media libraries, archives). Your data is written infrequently and you can tolerate running sync on a schedule.
Choose MergerFS if: You need real-time drive pooling with flexible write policies. You have drives of different sizes and want to maximize usable capacity without wasting space on parity.
Choose UnionFS-Fuse if: You need a read-write overlay on top of read-only storage. You are building a Live CD environment or need COW semantics for testing.
Choose SnapRAID + MergerFS if: You want the best of both worlds — real-time pooling with scheduled parity protection. This is the most popular combination in the self-hosted community and the recommended architecture for home NAS builds.