Network File System (NFS) remains one of the most widely used protocols for network file sharing on Linux and UNIX systems. Whether you’re serving build artifacts to CI/CD runners, sharing media libraries across homelab nodes, or providing storage for Kubernetes PersistentVolumes, choosing the right NFS server implementation matters for performance, security, and operational simplicity.
This guide compares three self-hosted NFS server options: NFS-Ganesha (user-space, feature-rich), UNFS3 (lightweight NFSv3-only), and the Kernel NFS Server (nfsd, built into Linux). We’ll cover architecture differences, Docker deployment, configuration examples, and help you pick the right tool for your use case.
For a broader comparison of self-hosted file sharing protocols including SMB/CIFS and WebDAV, see our Samba vs NFS vs WebDAV guide.
Why Self-Host an NFS Server?
NFS provides native POSIX-compliant file sharing with near-local disk performance. Unlike object storage (S3) or block storage (iSCSI), NFS lets applications read and write files using standard system calls — no special SDKs or mounting protocols required.
Key advantages of self-hosting your own NFS server:
- Zero licensing costs — all three options are fully open source
- Full data sovereignty — files never leave your infrastructure
- Native POSIX semantics — file locking, permissions, and extended attributes work correctly
- Wide client support — Linux, macOS, BSD, and Windows (with NFS client enabled) all connect natively
- Low overhead — NFS runs over TCP with minimal protocol overhead compared to HTTP-based alternatives
For distributed file system needs spanning multiple nodes, check out our JuiceFS vs Alluxio vs CephFS comparison.
Architecture Comparison
NFS-Ganesha: User-Space Feature-Rich Server
NFS-Ganesha is an NFSv3, v4, and v4.1 file server that runs entirely in user space. With 1,749 GitHub stars and active development (last updated April 2026), it’s the most feature-complete open source NFS server available.
Key architectural features:
- Runs as a standard userspace process — no kernel module dependencies
- Pluggable backend architecture (FSAL — File System Abstraction Layer) supporting local filesystems (VFS), CephFS, GlusterFS, Ceph RGW, and more
- Full NFSv4.1 support including pNFS (parallel NFS) for performance scaling
- Built-in Kerberos authentication (krb5, krb5i, krb5p)
- Dynamic export configuration — add/remove shares without restarting
- Multi-threaded I/O with configurable thread pools
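Dynamic exports are managed over DBus while the server is running. A sketch of adding an export at runtime — the config file path and export expression are placeholders, and this assumes Ganesha's DBus interface is enabled:

```shell
# Add a new export without restarting Ganesha (paths are placeholders)
dbus-send --system --print-reply \
  --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr \
  org.ganesha.nfsd.exportmgr.AddExport \
  string:/etc/ganesha/export.d/new-share.conf \
  string:"EXPORT(Export_Id=77)"
```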
UNFS3: Lightweight NFSv3 Server
UNFS3 is a user-space implementation of the NFSv3 specification. At 155 stars, it’s a smaller project but serves a specific niche: minimal, easy-to-deploy NFSv3 sharing.
Key characteristics:
- Implements only NFSv3 — no NFSv4 features (no Kerberos, no pNFS)
- Runs entirely in user space with minimal dependencies
- Single binary deployment — simple to install and configure
- Suitable for basic file sharing where NFSv4 features aren’t needed
- Limited to TCP transport (no UDP)
Kernel NFS Server (nfsd): Built Into Linux
The kernel NFS server (nfsd) has shipped with Linux since the late 1990s (the 2.2 kernel series). It’s the most battle-tested NFS implementation, powering everything from home NAS boxes to enterprise storage arrays.
Key characteristics:
- Runs in kernel space — no user/kernel context switches on the I/O path
- Supports NFSv2, v3, v4, v4.1, and v4.2
- Managed via the nfs-kernel-server package on Debian/Ubuntu or nfs-utils on RHEL/CentOS
- Tightly integrated with the kernel VFS layer for optimal performance
- Configuration via /etc/exports — simple, though changes must be re-applied with exportfs -ra
Feature Comparison Table
| Feature | NFS-Ganesha | UNFS3 | Kernel NFS (nfsd) |
|---|---|---|---|
| NFS Versions | v3, v4.0, v4.1 | v3 only | v2, v3, v4.0, v4.1, v4.2 |
| Execution Mode | User-space | User-space | Kernel-space |
| Kerberos Auth | Yes (krb5/krb5i/krb5p) | No | Yes |
| pNFS Support | Yes | No | Yes (v4.1+) |
| Dynamic Exports | Yes (DBUS/API) | No | Partial (reload via exportfs -ra) |
| Pluggable Backends | Yes (FSAL: VFS, CEPH, GLUSTER, RGW) | No (POSIX only) | No (kernel VFS only) |
| NFSv4 Delegations | Yes | N/A | Yes |
| TCP Wrappers | Yes | Yes | Yes |
| NLM (Lock Manager) | Yes | Yes | Yes |
| Docker Friendly | Excellent | Good | Limited (requires privileged) |
| GitHub Stars | 1,749 | 155 | In-kernel |
| Last Active | April 2026 | July 2025 | Always updated with kernel |
Installation & Configuration
NFS-Ganesha: Docker Compose Deployment
NFS-Ganesha deploys cleanly in Docker because it runs in user space. A typical docker-compose.yml looks like this:
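A minimal sketch — the image name is a placeholder (there is no single official Ganesha image; build your own or pick a community one), and the host paths are examples:

```yaml
services:
  nfs-ganesha:
    image: my-registry/nfs-ganesha:latest   # placeholder image
    container_name: nfs-ganesha
    restart: unless-stopped
    cap_add:
      - SYS_ADMIN          # needed for mount operations inside the container
      - DAC_READ_SEARCH
    ports:
      - "2049:2049"        # NFSv4 needs only this port
    volumes:
      - ./ganesha.conf:/etc/ganesha/ganesha.conf:ro
      - /srv/exports:/exports
```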
Create ganesha.conf for the export configuration:
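A basic export block using the VFS FSAL, matching the /exports path mounted above (a sketch — tune the export ID and squash policy to your environment):

```
EXPORT {
    Export_Id = 1;
    Path = /exports;
    Pseudo = /exports;          # NFSv4 pseudo-filesystem path
    Access_Type = RW;
    Squash = Root_Squash;
    Protocols = 3, 4;
    Transports = TCP;
    SecType = sys;
    FSAL {
        Name = VFS;
    }
}
```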
Start the server and verify:
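For example (assuming the compose file above; showmount queries the NFSv3 mount protocol):

```shell
docker compose up -d
showmount -e localhost                      # list exports (NFSv3)
sudo mount -t nfs4 localhost:/exports /mnt  # test-mount over NFSv4
```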
UNFS3: Docker Compose Deployment
UNFS3 is simpler to deploy but limited to NFSv3. Here’s a working compose configuration:
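A sketch — as with Ganesha, the image name is a placeholder (UNFS3 is typically built from source); host networking keeps portmapper registration simple:

```yaml
services:
  unfs3:
    image: my-registry/unfs3:latest   # placeholder image
    restart: unless-stopped
    network_mode: host                # unfs3 registers with rpcbind on the host
    volumes:
      - ./exports:/etc/exports:ro
      - /srv/share:/srv/share
```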
The exports file format is identical to the standard /etc/exports:
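For example:

```
/srv/share 192.168.1.0/24(rw,root_squash)
```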
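Start and verify (assuming the compose file above):

```shell
docker compose up -d
showmount -e localhost
sudo mount -o vers=3 localhost:/srv/share /mnt   # NFSv3 only
```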
Kernel NFS Server: Native Installation
The kernel NFS server cannot run cleanly inside an unprivileged Docker container because it requires kernel module access. The recommended deployment is native on the host:
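Installation on the two major package families:

```shell
# Debian/Ubuntu
sudo apt install nfs-kernel-server

# RHEL/CentOS/Fedora
sudo dnf install nfs-utils
```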
Configure exports in /etc/exports:
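For example, sharing /srv/nfs with a single subnet:

```
/srv/nfs 192.168.1.0/24(rw,sync,no_subtree_check,root_squash)
```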
Apply and start:
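Re-export and enable the service (the unit is nfs-server on most distributions; older Debian releases use nfs-kernel-server):

```shell
sudo exportfs -ra                        # (re)apply /etc/exports
sudo systemctl enable --now nfs-server
sudo exportfs -v                         # verify active exports
```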
For clients to connect:
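On a client (server.example.com and the paths are placeholders):

```shell
sudo mount -t nfs4 server.example.com:/srv/nfs /mnt/nfs

# Or persist in /etc/fstab:
# server.example.com:/srv/nfs  /mnt/nfs  nfs4  defaults,_netdev  0  0
```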
Performance Considerations
Kernel NFS: Maximum Throughput
Running in kernel space means nfsd avoids context switching between user and kernel mode for every I/O operation. For high-throughput workloads (video editing, database backups, large file transfers), the kernel server delivers the best raw performance.
NFS-Ganesha: Competitive With Tuning
NFS-Ganesha’s user-space design adds some overhead, but configurable thread pools and the FSAL architecture let you optimize for specific workloads. For most homelab and small-business workloads, the performance difference is negligible (< 5%).
Key tuning parameters in ganesha.conf:
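A sketch of commonly tuned settings — block and parameter names vary between Ganesha versions, so check the ganesha.conf documentation for your release:

```
NFS_CORE_PARAM {
    Nb_Worker = 64;            # size of the worker thread pool
}
MDCACHE {
    Entries_HWMark = 100000;   # metadata cache high-water mark
}
```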
UNFS3: Adequate for Light Workloads
UNFS3 is not optimized for high throughput. It handles basic file sharing well but should not be used for performance-critical workloads like database storage or video editing NAS.
Security Best Practices
All three NFS servers support the following security hardening measures:
1. Restrict exports to specific subnets — never use * in export definitions:
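For example, in /etc/exports:

```
# Good: scoped to one subnet
/srv/nfs 192.168.1.0/24(rw,sync,no_subtree_check)

# Bad: open to any client
# /srv/nfs *(rw)
```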
2. Use root_squash to prevent root access on the server:
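root_squash (the kernel server's default) maps client root to an unprivileged user:

```
/srv/nfs 192.168.1.0/24(rw,sync,no_subtree_check,root_squash)
```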
3. Enable Kerberos authentication (NFS-Ganesha and kernel NFS only):
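For example — krb5i adds integrity protection, krb5p adds encryption:

```
# Kernel NFS (/etc/exports)
/srv/secure 192.168.1.0/24(rw,sec=krb5i)

# NFS-Ganesha (inside the EXPORT block)
SecType = krb5, krb5i, krb5p;
```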
4. Use TCP-only transport — disable UDP to prevent spoofing attacks:
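For example:

```
# NFS-Ganesha (ganesha.conf, in the EXPORT block)
Transports = TCP;

# Kernel NFS (/etc/nfs.conf)
[nfsd]
udp=n
tcp=y
```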
5. Firewall the NFS ports:
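For example, allowing NFSv4 from the LAN only (adjust the subnet to yours):

```shell
# ufw
sudo ufw allow from 192.168.1.0/24 to any port 2049 proto tcp

# firewalld
sudo firewall-cmd --permanent --zone=internal --add-service=nfs
sudo firewall-cmd --reload
```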
For a comprehensive self-hosted firewall guide, see our UFW vs firewalld vs iptables comparison.
When to Choose Which Server
| Scenario | Recommendation | Rationale |
|---|---|---|
| Kubernetes PersistentVolumes | NFS-Ganesha | NFSv4.1 + pNFS, dynamic exports, container-native |
| Homelab media sharing | Kernel NFS (nfsd) | Maximum throughput, zero overhead, proven reliability |
| Quick NFSv3 file share | UNFS3 | Single binary, minimal config, no kernel dependencies |
| Ceph backend storage | NFS-Ganesha | Native CEPH FSAL — no kernel Ceph client needed |
| Enterprise with Kerberos | NFS-Ganesha or Kernel NFS | Both support krb5/krb5i/krb5p |
| High-performance video editing NAS | Kernel NFS (nfsd) | Lowest latency, no context switching overhead |
| Docker container deployment | NFS-Ganesha | Clean user-space architecture, no privileged mode needed |
Troubleshooting
Common NFS Client Errors
“mount.nfs: access denied by server” — Check that the client IP matches an export rule and the exports file has been reloaded (exportfs -ra for kernel, restart for Ganesha/UNFS3).
“Stale file handle” — The server-side file was modified outside NFS (e.g., directly on the server filesystem). Remount the share:
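For example (server name and paths are placeholders):

```shell
sudo umount -f /mnt/nfs     # force unmount (-l for lazy if busy)
sudo mount -t nfs4 server.example.com:/srv/nfs /mnt/nfs
```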
“Permission denied” despite RW export — Check root_squash is set correctly and file permissions on the server allow the mapped UID/GID:
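Compare numeric IDs on both sides:

```shell
ls -ln /srv/nfs   # on the server: numeric UID/GID of files
id                # on the client: the user's UID/GID
```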
Debugging NFS-Ganesha
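Typical steps — the container name assumes the compose file earlier; the log flags are those of the ganesha.nfsd binary:

```shell
docker logs -f nfs-ganesha                 # follow server logs

# Run in the foreground with debug-level logging:
# ganesha.nfsd -F -L /dev/stdout -N NIV_DEBUG
```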
Debugging Kernel NFS
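rpcdebug toggles kernel-side NFS logging into the kernel ring buffer:

```shell
sudo rpcdebug -m nfsd -s all   # enable all nfsd debug flags
sudo dmesg -w                  # watch the log stream
nfsstat -s                     # server-side operation counters
sudo rpcdebug -m nfsd -c all   # disable again
```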
FAQ
What is the difference between NFSv3 and NFSv4?
NFSv4 introduced several major improvements over NFSv3: integrated file locking (no separate NLM protocol), stateful protocol with better recovery, built-in security (Kerberos), compound operations for reduced latency, and pNFS for parallel data access. NFSv3 remains popular for its simplicity and wide client support, but NFSv4 is recommended for new deployments.
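Clients choose the protocol version at mount time:

```shell
sudo mount -o vers=3 server:/srv/nfs /mnt   # force NFSv3
sudo mount -t nfs4  server:/srv/nfs /mnt    # NFSv4.x
```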
Can I run NFS inside a Docker container?
Yes, but with caveats. NFS-Ganesha runs cleanly in Docker because it operates in user space — you only need SYS_ADMIN and DAC_READ_SEARCH capabilities. UNFS3 also works well. The kernel NFS server (nfsd) requires loading kernel modules (nfsd, lockd, sunrpc), which means running with --privileged or on the host directly. For container-native deployments, NFS-Ganesha is the best choice.
Is NFS secure for use over the internet?
NFS was designed for trusted local networks and should never be exposed directly to the internet. Always restrict NFS to private networks using firewall rules and export restrictions. For cross-network file sharing, use an SSH tunnel, WireGuard VPN, or a higher-level protocol like SFTP or WebDAV. See our WireGuard VPN guide for setting up secure network tunnels.
How do I back up an NFS share?
Since NFS exports are backed by a local filesystem on the server, use standard backup tools on the server side — not from NFS clients. Tools like rsync, restic, or borgbackup should run directly on the NFS server machine accessing the underlying filesystem. This ensures consistent snapshots and avoids network transfer overhead during backup.
What port does NFS use and do I need to open firewall rules?
NFS primarily uses TCP port 2049. NFSv3 additionally requires port 111 (portmapper/rpcbind, TCP+UDP) plus ports for mountd, statd, and lockd, which are assigned dynamically unless pinned in /etc/nfs.conf (or /etc/sysconfig/nfs on older RHEL). For NFSv4-only deployments, you only need port 2049 (TCP) open — NFSv4 eliminates the need for portmapper and mountd. For NFSv3, all the above ports must be accessible between client and server.
Can NFS-Ganesha use Ceph or other distributed storage as a backend?
Yes. NFS-Ganesha’s FSAL (File System Abstraction Layer) supports multiple backends: VFS (local filesystem), CEPH (CephFS via libcephfs), RGW (Ceph object gateway), GLUSTER (GlusterFS), MEM (in-memory), PROXY (re-exporting another NFS server), and more. This makes it ideal for serving NFS exports backed by distributed storage without requiring kernel Ceph clients on the NFS server host.