iSCSI (Internet Small Computer Systems Interface) enables block-level storage access over IP networks, allowing servers to mount remote storage volumes as if they were local disks. For self-hosted infrastructure, running your own iSCSI target server provides cost-effective SAN (Storage Area Network) capabilities without expensive proprietary hardware.
In this guide, we compare three open-source iSCSI target implementations: LIO (Linux IO Target, managed via targetcli-fb), TGT (Linux SCSI Target Framework), and SCST (SCSI Target Subsystem).
What Is an iSCSI Target?
In iSCSI terminology, a target is the storage server that provides block devices, and an initiator is the client that connects to those block devices. The target exports storage (disk partitions, LVM volumes, or files) as iSCSI LUNs (Logical Unit Numbers) that initiators can format and mount.
Key components:
- Target Portal Group (TPG): Defines the IP addresses and ports the target listens on
- LUN: A logical unit of storage exported to initiators
- IQN (iSCSI Qualified Name): Unique identifier for targets and initiators (e.g., iqn.2026-05.com.example:storage)
- ACL (Access Control List): Restricts which initiators can connect to which LUNs
- CHAP authentication: Optional mutual authentication between target and initiator (a targetcli sketch follows this list)
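As a concrete example, here is a minimal sketch of enabling one-way CHAP on an initiator ACL with LIO's targetcli (covered in detail below); the IQNs, username, and password are illustrative:

```
# Inside a targetcli session: attach CHAP credentials to an existing ACL
/> cd /iscsi/iqn.2026-05.com.example:storage/tpg1/acls/iqn.2026-05.com.example:client01
set auth userid=iscsiuser
set auth password=examplepassword
cd /
saveconfig
```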
LIO (Linux IO Target)
LIO is the default iSCSI target in modern Linux kernels (since 2.6.38). It supports iSCSI, Fibre Channel, FCoE, and SRP protocols through a unified target framework. Configuration is managed through the targetcli-fb interactive shell.
Installation and Configuration
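A minimal sketch, assuming a Debian/Ubuntu or RHEL-family host (package and service names vary by distribution):

```
# Debian/Ubuntu
sudo apt install targetcli-fb

# RHEL/Fedora
sudo dnf install targetcli

# Load the LIO kernel modules (targetcli normally does this on first use)
sudo modprobe target_core_mod
sudo modprobe iscsi_target_mod

# Restore the saved configuration at boot (RHEL uses "target";
# some Debian releases use "rtslib-fb-targetctl" instead)
sudo systemctl enable --now target
```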
Create an iSCSI Target with targetcli
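A sketch of an interactive session that exports a 10 GiB file-backed LUN; the image path and the client IQN are illustrative:

```
sudo targetcli

# File-backed backstore (the image file is created if it does not exist)
/> backstores/fileio create disk01 /srv/iscsi/disk01.img 10G

# Create the target; targetcli-fb auto-creates tpg1 with a default
# portal listening on 0.0.0.0:3260
/> iscsi/ create iqn.2026-05.com.example:storage

# Map the backstore to a LUN
/> iscsi/iqn.2026-05.com.example:storage/tpg1/luns create /backstores/fileio/disk01

# Allow one initiator by its IQN
/> iscsi/iqn.2026-05.com.example:storage/tpg1/acls create iqn.2026-05.com.example:client01

# Persist to /etc/target/saveconfig.json
/> saveconfig
/> exit
```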
LIO with Block Storage (LVM)
For better performance, use block-backed storage instead of file-backed:
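A sketch assuming an existing LVM volume group named vg0; targetcli also accepts one-shot commands, which is convenient for scripting:

```
# Dedicated logical volume for the LUN
sudo lvcreate --size 50G --name iscsi_lun01 vg0

# Export it through a block backstore instead of fileio
sudo targetcli backstores/block create lun01 /dev/vg0/iscsi_lun01
sudo targetcli /iscsi/iqn.2026-05.com.example:storage/tpg1/luns create /backstores/block/lun01
sudo targetcli saveconfig
```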
LIO Docker Deployment
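Because LIO's I/O path lives in the kernel, the container only hosts the management tooling. A sketch; the lio-targetcli image name is a placeholder for an image you build yourself with targetcli-fb installed:

```
# On the host: load the modules the container cannot load itself
sudo modprobe target_core_mod
sudo modprobe iscsi_target_mod

# Privileged container with host networking and access to configfs
docker run -d --name lio-target \
  --privileged \
  --network host \
  -v /sys/kernel/config:/sys/kernel/config \
  -v /etc/target:/etc/target \
  -v /srv/iscsi:/srv/iscsi \
  lio-targetcli   # placeholder image containing targetcli-fb

# Manage the target from inside the container
docker exec -it lio-target targetcli ls
```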
TGT (Linux SCSI Target Framework)
TGT (tgtadm) is an older userspace iSCSI target implementation. While largely superseded by LIO in modern kernels, it remains simpler to configure for basic use cases and doesn’t require kernel-space target drivers.
Installation
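A sketch; package and service names differ between distribution families:

```
# Debian/Ubuntu (daemon and service are both called "tgt")
sudo apt install tgt
sudo systemctl enable --now tgt

# RHEL/CentOS (EPEL package "scsi-target-utils", service "tgtd")
sudo yum install scsi-target-utils
sudo systemctl enable --now tgtd
```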
Configuration via /etc/tgt/targets.conf
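A sketch of a file-backed target definition; the image path, subnet, and CHAP credentials are illustrative:

```
<target iqn.2026-05.com.example:storage>
    # Exported as LUN 1 (tgt reserves LUN 0 for the controller)
    backing-store /srv/iscsi/disk01.img

    # Only accept initiators from the storage subnet
    initiator-address 192.168.1.0/24

    # Optional CHAP credentials the initiator must present
    incominguser iscsiuser examplepassword
</target>
```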
Apply configuration:
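Either restart the tgt service or reload the configuration in place:

```
# Re-read targets.conf without restarting the daemon; targets with
# active sessions are skipped unless --force is given
sudo tgt-admin --update ALL

# Verify the exported targets, LUNs, and ACLs
sudo tgtadm --lld iscsi --mode target --op show
```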
SCST (SCSI Target Subsystem)
SCST is a high-performance in-kernel SCSI target framework. It is designed for enterprise workloads and supports a wide range of transport protocols including iSCSI, SRP, and FCoE. SCST is commonly used in high-end storage appliances and provides the best performance for demanding I/O workloads.
Installation
SCST requires kernel module compilation:
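A sketch, assuming the SCST GitHub source tree and a Debian/Ubuntu build host:

```
# Build dependencies
sudo apt install build-essential linux-headers-$(uname -r) git

# Fetch and build against the running kernel
git clone https://github.com/SCST-project/scst.git
cd scst
make 2release        # switch to the release (non-debug) build configuration
make
sudo make install

# Load the core, the vdisk handler, and the iSCSI transport module
sudo modprobe scst
sudo modprobe scst_vdisk
sudo modprobe iscsi_scst
```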
Configuration
Configure the target in /etc/scst.conf, which scstadmin applies (older iscsi-scst setups used /etc/iscsi-scst.conf):
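A sketch of a file-backed iSCSI target; the device name and path are illustrative:

```
# Applied with: scstadmin -config /etc/scst.conf
HANDLER vdisk_fileio {
    DEVICE disk01 {
        filename /srv/iscsi/disk01.img
    }
}

TARGET_DRIVER iscsi {
    enabled 1

    TARGET iqn.2026-05.com.example:storage {
        enabled 1
        LUN 0 disk01
    }
}
```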
Start the daemon:
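A sketch; the unit name comes from SCST's bundled init scripts and may vary by install method:

```
# The scst service loads the modules, starts the iscsi-scstd daemon,
# and applies /etc/scst.conf
sudo systemctl enable --now scst

# Or apply the configuration by hand and inspect the result
sudo scstadmin -config /etc/scst.conf
sudo scstadmin -list_target
```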
Comparison: LIO vs TGT vs SCST
| Feature | LIO (targetcli) | TGT (tgtadm) | SCST |
|---|---|---|---|
| Kernel integration | Mainline since 2.6.38 | Userspace daemon | Out-of-tree kernel module |
| Configuration method | targetcli shell | XML config file | Config file + CLI |
| Performance | High (kernel-space) | Medium (userspace) | Very high (kernel-space) |
| Protocol support | iSCSI, FC, FCoE, SRP | iSCSI only | iSCSI, SRP, FCoE |
| Backstore types | fileio, block, pscsi, ramdisk | file, block | fileio, block, pscsi |
| CHAP auth | Yes | Yes | Yes |
| Active development | Yes (kernel tree) | Maintenance | Yes (community) |
| Installation complexity | Low | Low | High (kernel build) |
| Production readiness | Excellent | Good | Excellent |
| Docker support | Good | Good | Poor (kernel modules) |
Initiator (Client) Configuration
Connect to an iSCSI target from a client:
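A sketch using open-iscsi, assuming the target portal from the examples above lives at 192.168.1.10:

```
# Install the initiator (RHEL: "iscsi-initiator-utils")
sudo apt install open-iscsi

# Discover the targets exported by the portal
sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.10

# Log in; a new block device (e.g. /dev/sdb) appears
sudo iscsiadm -m node -T iqn.2026-05.com.example:storage -p 192.168.1.10 --login
lsblk

# Format and mount it like a local disk
sudo mkfs.ext4 /dev/sdb
sudo mkdir -p /mnt/iscsi
sudo mount /dev/sdb /mnt/iscsi
```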
Add to /etc/fstab for persistent mounting:
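A sketch; mount by UUID, since /dev/sdX names are not stable across reboots, and mark the session for automatic login:

```
# Log in automatically at boot
sudo iscsiadm -m node -T iqn.2026-05.com.example:storage \
  -p 192.168.1.10 --op update -n node.startup -v automatic

# Find the filesystem UUID
sudo blkid /dev/sdb

# /etc/fstab entry -- substitute the UUID reported by blkid
UUID=<uuid-from-blkid>  /mnt/iscsi  ext4  _netdev  0  2
```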
The _netdev mount option tells the system to mount the filesystem only after the network (and thus the iSCSI session) is available, preventing boot-time hangs.
Why Self-Host iSCSI Targets?
Running your own iSCSI target infrastructure provides centralized block storage that multiple servers can share. Instead of purchasing expensive SAN hardware, you can build a software-defined storage system using commodity servers and standard network equipment.
Self-hosted iSCSI is ideal for virtualization clusters (Proxmox, oVirt, XCP-ng) where VMs need shared block storage for live migration. For file-level sharing alternatives, see our NFS server comparison. For distributed file systems, check our JuiceFS vs Alluxio vs CephFS guide. And for object storage needs, our MinIO vs SeaweedFS vs Garage comparison covers S3-compatible alternatives.
FAQ
What is the difference between iSCSI and NFS?
iSCSI provides block-level storage (raw disks), while NFS provides file-level storage (shared directories). With iSCSI, each initiator formats and manages its own filesystem on the exported LUN. With NFS, the server manages the filesystem and clients mount shared directories. Choose iSCSI for databases and VMs, NFS for shared files.
Can multiple initiators access the same iSCSI LUN simultaneously?
Multiple initiators can connect to the same LUN, but simultaneous read/write access requires a cluster-aware filesystem (OCFS2, GFS2) or a distributed filesystem. Without cluster filesystem support, simultaneous writes will corrupt data. For shared access, consider using a distributed filesystem instead.
Is iSCSI secure over untrusted networks?
Standard iSCSI transmits data in plaintext. For untrusted networks, use iSCSI over IPsec or deploy iSCSI on an isolated storage VLAN. CHAP authentication provides initiator-level access control but does not encrypt the data payload.
What backstore type should I use for best performance?
Block-backed storage (LVM volumes, raw partitions) provides the best performance because it bypasses the host filesystem. File-backed storage (fileio) adds filesystem overhead but is easier to manage and resize. Use block-backed for production databases and VMs, fileio for testing.
How do I back up iSCSI LUNs?
For file-backed LUNs, snapshot the underlying filesystem. For block-backed LUNs, use LVM snapshots or storage-level snapshots. The initiator can also run traditional backup tools (rsync, restic, borg) on the mounted filesystem.
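A sketch of the block-backed approach, reusing the illustrative LVM names from the LIO example above:

```
# Point-in-time snapshot of the LUN's logical volume
sudo lvcreate --snapshot --size 5G --name lun01_snap /dev/vg0/iscsi_lun01

# Stream the snapshot to a compressed image on backup storage
sudo dd if=/dev/vg0/lun01_snap bs=4M status=progress \
  | gzip > /backup/lun01-$(date +%F).img.gz

# Drop the snapshot once the backup completes
sudo lvremove -y /dev/vg0/lun01_snap
```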
Does LIO work with Docker containers?
LIO can run in Docker using privileged containers with host network mode, as shown in the deployment example above. However, the kernel modules must be loaded on the host system. The container runs the management interface (targetcli) while the kernel handles the actual I/O path.