comparison guide · self-hosted · 14 min read

Complete Guide to MinIO: Self-Hosted S3 Object Storage 2026

A comprehensive guide to deploying and managing MinIO as a self-hosted S3-compatible object storage solution. Covers installation, configuration, lifecycle policies, and production best practices.

Editorial Team

Cloud object storage is convenient, but it comes with ongoing costs, vendor lock-in, and data sovereignty concerns. If you store terabytes of backups, media assets, or application data, the monthly bills from AWS S3, Google Cloud Storage, or Azure Blob Storage add up fast. MinIO gives you the exact same S3-compatible API — but running on your own hardware, under your full control.

Why Self-Host Your Object Storage

Moving object storage in-house solves several problems at once:

Cost predictability. AWS S3 Standard pricing starts around $0.023 per GB per month. That sounds cheap until you hit 10 TB — suddenly you are paying $230 every month just to store files. With MinIO on your own server, you pay for the hardware once and storage is essentially free after that. For large datasets, the break-even point is often reached within the first few months.

No egress fees. Cloud providers charge you to download your own data. AWS charges $0.09 per GB for data transfer out to the internet. Moving 5 TB out costs $450. MinIO has zero egress fees — your data moves on your network at the speed of your connection.

Full data control. When you self-host, data never leaves your infrastructure. This matters for healthcare, finance, and any organization bound by GDPR, HIPAA, or SOC 2 requirements. You control encryption keys, access policies, and retention schedules.

S3 API compatibility. MinIO implements the full Amazon S3 API. Any application or tool that works with S3 works with MinIO without code changes — just point the endpoint URL to your MinIO server instead of s3.amazonaws.com.
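For example, the stock AWS CLI talks to a local MinIO server with nothing more than an endpoint override (the credentials and bucket name here match the examples used later in this guide):

# Point the AWS CLI at MinIO instead of s3.amazonaws.com
export AWS_ACCESS_KEY_ID=admin
export AWS_SECRET_ACCESS_KEY=securepassword123
aws --endpoint-url http://localhost:9000 s3 ls
aws --endpoint-url http://localhost:9000 s3 cp backup.tar.gz s3://backups/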

What Is MinIO

MinIO is a high-performance, distributed object storage server written in Go. It was designed from the ground up to be S3-compatible and to run efficiently on commodity hardware. Key features include:

  • Full S3 API compatibility — works with any S3 SDK, CLI tool, or application
  • Erasure coding — protects data against drive and node failures without full replication
  • Bitrot protection — detects and transparently repairs silent data corruption
  • Encryption at rest — supports server-side encryption with SSE-S3, SSE-KMS, and SSE-C
  • Lifecycle management — automatic tiering, expiration, and transition policies
  • Multi-tenant support — isolate teams and projects with separate buckets and policies
  • Horizontal scaling — add nodes to a distributed cluster to increase capacity and throughput

MinIO consistently ranks at the top of object storage benchmarks. In MinIO's published benchmarks, distributed deployments on NVMe hardware sustain aggregate read/write throughput exceeding 100 GB/s, making it suitable for machine learning pipelines, video processing, and large-scale analytics workloads.

Installation Methods

The fastest way to get MinIO running is with Docker. This single command starts a standalone server with the web console accessible on port 9001:

docker run -d \
  --name minio \
  -p 9000:9000 \
  -p 9001:9001 \
  -e MINIO_ROOT_USER=admin \
  -e MINIO_ROOT_PASSWORD=securepassword123 \
  -v /srv/minio/data:/data \
  -v /srv/minio/config:/root/.minio \
  minio/minio server /data --console-address ":9001"

The -v flags map persistent directories on your host into the container. Replace securepassword123 with a strong password — MinIO requires the root password to be at least 8 characters.
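If you need a quick way to generate one, any random string of 8 or more characters works:

# Generate a random 32-character password
openssl rand -base64 24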

Docker Compose

For production deployments, a Compose file gives you version-controlled configuration:

services:
  minio:
    image: minio/minio:latest
    container_name: minio
    restart: unless-stopped
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      MINIO_ROOT_USER: ${MINIO_ROOT_USER:-admin}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD:-securepassword123}
      MINIO_PROMETHEUS_AUTH_TYPE: public
    volumes:
      - ./data:/data
      - ./config:/root/.minio
    command: server /data --console-address ":9001"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 10s
      retries: 3
    labels:
      com.centurylinklabs.watchtower.enable: "true"

Store credentials in a .env file and never commit them to version control:

MINIO_ROOT_USER=admin
MINIO_ROOT_PASSWORD=YourStrongPasswordHere

Binary Installation

For bare-metal deployments without container overhead, download the MinIO binary directly:

# Download the latest binary
wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio
sudo mv minio /usr/local/bin/

# Create system user and directories
sudo useradd -r minio -s /sbin/nologin
sudo mkdir -p /srv/minio/data
sudo chown minio:minio /srv/minio/data

# Create systemd service
sudo tee /etc/systemd/system/minio.service > /dev/null << 'EOF'
[Unit]
Description=MinIO Object Storage
Documentation=https://min.io/docs/minio/linux/index.html
Wants=network-online.target
After=network-online.target

[Service]
WorkingDirectory=/usr/local/

User=minio
Group=minio

EnvironmentFile=-/etc/default/minio
ExecStartPre=/bin/bash -c "if [ -z \"${MINIO_VOLUMES}\" ]; then echo \"Variable MINIO_VOLUMES not set in /etc/default/minio\"; exit 1; fi"

ExecStart=/usr/local/bin/minio server $MINIO_OPTS $MINIO_VOLUMES

Restart=always
LimitNOFILE=1048576
TimeoutStopSec=infinity
SendSIGKILL=no

[Install]
WantedBy=multi-user.target
EOF

# Set environment variables
sudo tee /etc/default/minio > /dev/null << 'EOF'
MINIO_ROOT_USER=admin
MINIO_ROOT_PASSWORD=YourStrongPasswordHere
MINIO_VOLUMES="/srv/minio/data"
MINIO_OPTS="--console-address :9001"
EOF

# Start the service
sudo systemctl daemon-reload
sudo systemctl enable minio
sudo systemctl start minio
sudo systemctl status minio
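Once the unit is active, confirm the server is healthy:

# Follow the service logs
journalctl -u minio -f

# The liveness endpoint returns HTTP 200 when the server is up
curl -I http://localhost:9000/minio/health/live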

Getting Started with the mc Client

MinIO ships with a command-line client called mc that replaces aws s3 for most operations. Install it with:

wget https://dl.min.io/client/mc/release/linux-amd64/mc
chmod +x mc
sudo mv mc /usr/local/bin/

Configure an Alias

Point mc at your local MinIO instance:

mc alias set local http://localhost:9000 admin securepassword123
mc admin info local

You should see server information including total capacity, used space, and the number of drives.

Create Buckets and Upload Files

# Create a bucket
mc mb local/backups

# Upload a single file
mc cp /path/to/database.sql local/backups/

# Upload an entire directory recursively
mc cp --recursive /var/log/app/ local/backups/app-logs/

# List bucket contents
mc ls local/backups

# Mirror a directory (keeps destination in sync)
mc mirror /srv/data/ local/data-bucket/

# Download files
mc cp local/backups/database.sql ./restored-database.sql

Set Bucket Policies

Control public and private access with policies:

# Make a bucket publicly readable (for static assets)
mc anonymous set download local/public-assets

# Make a bucket fully private (default)
mc anonymous set private local/backups

# Upload and download with public bucket
mc cp logo.png local/public-assets/images/
# Now accessible at http://localhost:9000/public-assets/images/logo.png
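To grant temporary access to a single object without making a bucket public, generate a presigned URL instead:

# Create a download link that expires after 24 hours
mc share download --expire 24h local/backups/database.sql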

Security Hardening

Enable TLS

MinIO serves HTTPS automatically once it finds certificates in its certs directory. For production, use certificates from Let’s Encrypt or your own CA:

# Create the certs directory
mkdir -p /srv/minio/certs

# Copy your certificates (must be named cert.pem and private.key)
cp /etc/letsencrypt/live/storage.example.com/fullchain.pem /srv/minio/certs/cert.pem
cp /etc/letsencrypt/live/storage.example.com/privkey.pem /srv/minio/certs/private.key

# Ensure correct ownership
chown minio:minio /srv/minio/certs/cert.pem /srv/minio/certs/private.key
chmod 600 /srv/minio/certs/private.key

# MinIO looks in ~/.minio/certs by default; point it at this directory
# by adding --certs-dir /srv/minio/certs to MINIO_OPTS

Restart MinIO and it will automatically detect the certificates and serve HTTPS. Update your mc alias:

mc alias set local https://storage.example.com:9000 admin securepassword123
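You can confirm TLS is working with the unauthenticated health endpoint (the same one the Compose healthcheck uses):

# Should return HTTP 200 over HTTPS
curl -I https://storage.example.com:9000/minio/health/live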

Create Service Accounts

Never use the root credentials for applications. Create scoped service accounts with limited permissions:

# Write a scoped policy granting read-only access to the backups bucket
cat > backup-reader-policy.json << 'EOF'
{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":["s3:GetObject","s3:ListBucket"],"Resource":["arn:aws:s3:::backups","arn:aws:s3:::backups/*"]}]}
EOF

# Create a service account under the admin user
# (--policy takes a file path; secret keys must be 8-40 characters)
mc admin user svcacct add local admin \
  --access-key backup-reader \
  --secret-key "$(openssl rand -base64 24)" \
  --policy backup-reader-policy.json
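To confirm the scoped credentials behave as intended, register them under their own alias; reads should succeed and writes should be denied (substitute the secret key you generated above):

# Use the service-account credentials under a separate alias
mc alias set reader http://localhost:9000 backup-reader '<generated-secret>'
mc ls reader/backups             # allowed: s3:ListBucket
mc cp notes.txt reader/backups/  # denied: no s3:PutObject in the policy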

Enable Server-Side Encryption

Configure automatic encryption for all objects:

# Enable SSE-S3 encryption on a bucket
mc encrypt set sse-s3 local/backups

# Enable SSE-KMS with a custom key
# (requires a KMS backend such as KES configured for the deployment)
mc admin kms key create local my-encryption-key
mc encrypt set sse-kms my-encryption-key local/backups
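To check which encryption settings are active on a bucket:

# Show the bucket's current server-side encryption configuration
mc encrypt info local/backups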

Distributed Mode for Production

For high availability and data durability, run MinIO in distributed mode across multiple nodes. The minimum recommended configuration is 4 nodes:

services:
  minio1:
    image: minio/minio:latest
    hostname: minio1
    volumes:
      # Each node needs its own drives; when testing all four nodes on one
      # host, give every container separate directories to avoid collisions
      - /srv/minio/minio1/data1:/data1
      - /srv/minio/minio1/data2:/data2
    environment:
      MINIO_ROOT_USER: admin
      MINIO_ROOT_PASSWORD: securepassword123
    command: server http://minio{1...4}/data{1...2} --console-address ":9001"
    ports:
      - "9001:9001"

  minio2:
    image: minio/minio:latest
    hostname: minio2
    volumes:
      - /srv/minio/minio2/data1:/data1
      - /srv/minio/minio2/data2:/data2
    environment:
      MINIO_ROOT_USER: admin
      MINIO_ROOT_PASSWORD: securepassword123
    command: server http://minio{1...4}/data{1...2} --console-address ":9001"

  minio3:
    image: minio/minio:latest
    hostname: minio3
    volumes:
      - /srv/minio/minio3/data1:/data1
      - /srv/minio/minio3/data2:/data2
    environment:
      MINIO_ROOT_USER: admin
      MINIO_ROOT_PASSWORD: securepassword123
    command: server http://minio{1...4}/data{1...2} --console-address ":9001"

  minio4:
    image: minio/minio:latest
    hostname: minio4
    volumes:
      - /srv/minio/minio4/data1:/data1
      - /srv/minio/minio4/data2:/data2
    environment:
      MINIO_ROOT_USER: admin
      MINIO_ROOT_PASSWORD: securepassword123
    command: server http://minio{1...4}/data{1...2} --console-address ":9001"

In this configuration, each node contributes 2 drives. MinIO erasure-codes objects across all 8 drives (4 nodes × 2 drives). The parity is chosen automatically: with 8 drives, MinIO defaults to a 4+4 scheme (4 data blocks + 4 parity blocks), so the cluster can lose up to 4 drives without losing data (reads need any 4 healthy drives; writes need 5).
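You can inspect or override the parity through the storage_class configuration subsystem; a quick sketch (EC:4 matches the default for this 8-drive layout):

# Check the current erasure-coding parity
mc admin config get local storage_class

# Set standard storage-class parity explicitly, then restart to apply
mc admin config set local storage_class standard=EC:4
mc admin service restart local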

Lifecycle Policies and Storage Tiering

Automate data lifecycle management to reduce storage costs and keep only what you need:

# Set a lifecycle rule: delete objects older than 90 days
mc ilm rule add \
  --expire-days 90 \
  local/backups

# Set a rule: transition objects older than 30 days to a cold tier
# (the COLD tier must be registered first with `mc admin tier add`)
mc ilm rule add \
  --transition-days 30 \
  --transition-tier COLD \
  local/media

# List all lifecycle rules
mc ilm rule ls local/backups

# Export lifecycle configuration as JSON
mc ilm export local/backups > lifecycle.json

A typical lifecycle configuration for a backups bucket might look like this:

{
  "Rules": [
    {
      "ID": "Delete old backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 90 }
    },
    {
      "ID": "Transition large files to cold storage",
      "Status": "Enabled",
      "Filter": { "Prefix": "archives/" },
      "Transition": { "Days": 30, "StorageClass": "COLD" }
    }
  ]
}
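The exported JSON can be edited and re-applied, which makes lifecycle rules easy to keep under version control:

# Apply a lifecycle configuration from a JSON file
mc ilm import local/backups < lifecycle.json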

Integration with Applications

MinIO works with any S3-compatible SDK. Here are common integration patterns.

Python with boto3

import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",
    aws_access_key_id="admin",
    aws_secret_access_key="securepassword123",
    config=Config(signature_version="s3v4"),
)

# Upload a file
s3.upload_file("report.pdf", "reports", "2026/Q1/report.pdf")

# Download a file
s3.download_file("reports", "2026/Q1/report.pdf", "./report.pdf")

# List objects
response = s3.list_objects_v2(Bucket="reports", Prefix="2026/")
for obj in response.get("Contents", []):
    print(f"{obj['Key']} ({obj['Size']} bytes)")

# Generate a presigned URL (temporary public access)
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "reports", "Key": "2026/Q1/report.pdf"},
    ExpiresIn=3600,
)
print(f"Download link (valid for 1 hour): {url}")

Go SDK

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/minio/minio-go/v7"
    "github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
    client, err := minio.New("localhost:9000", &minio.Options{
        Creds:  credentials.NewStaticV4("admin", "securepassword123", ""),
        Secure: false,
    })
    if err != nil {
        log.Fatal(err)
    }

    // Create a bucket if it does not exist
    exists, err := client.BucketExists(context.Background(), "app-data")
    if err != nil {
        log.Fatal(err)
    }
    if !exists {
        err = client.MakeBucket(context.Background(), "app-data", minio.MakeBucketOptions{})
        if err != nil {
            log.Fatal(err)
        }
    }

    // Upload a file
    info, err := client.FPutObject(context.Background(), "app-data", "config.yaml", "/etc/app/config.yaml", minio.PutObjectOptions{})
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("Uploaded %s (%d bytes)\n", info.Key, info.Size)
}

Nginx Reverse Proxy

Put MinIO behind Nginx for TLS termination and custom domain support:

upstream minio {
    server 127.0.0.1:9000;
}

upstream minio-console {
    server 127.0.0.1:9001;
}

server {
    listen 80;
    server_name storage.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name storage.example.com;

    ssl_certificate /etc/letsencrypt/live/storage.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/storage.example.com/privkey.pem;

    # Client body size — increase for large uploads
    client_max_body_size 500M;

    # MinIO API
    location / {
        proxy_pass http://minio;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Required for MinIO
        proxy_buffering off;
        proxy_request_buffering off;
        chunked_transfer_encoding on;
    }

    # MinIO Console
    location ~ ^/console/?(.*) {
        proxy_pass http://minio-console/$1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket support for the console
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
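One caveat when proxying the console under a subpath like this: MinIO needs to know its externally visible console URL so redirects and asset links resolve correctly. Set this alongside the other environment variables (in /etc/default/minio or the container environment):

# Tell MinIO where browsers reach the console through the proxy
MINIO_BROWSER_REDIRECT_URL=https://storage.example.com/console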

Backup and Disaster Recovery

Even with erasure coding, you need offsite backups for disaster recovery. Use mc mirror to replicate to a secondary MinIO cluster:

# Add the secondary cluster as an alias
mc alias set remote https://backup.example.com:9000 admin backup-password

# Mirror all buckets to the remote cluster
mc mirror local/ remote/

# Set up continuous server-side replication
# (versioning must be enabled on both source and destination buckets)
mc replicate add local/backups \
  --remote-bucket http://backup-admin:backup-password@backup.example.com:9000/backups \
  --replicate delete,delete-marker,existing-objects

For simpler setups, a cron job with mc mirror runs periodic syncs:

# /etc/cron.d/minio-backup
0 2 * * * root /usr/local/bin/mc mirror --quiet --overwrite local/backups remote/backups >> /var/log/minio-mirror.log 2>&1
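After a sync, you can verify that both sides match (using the aliases defined above):

# List objects that differ between source and target
mc diff local/backups remote/backups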

Monitoring

MinIO exposes Prometheus-compatible metrics out of the box. Add this to your Prometheus configuration:

scrape_configs:
  - job_name: minio
    metrics_path: /minio/v2/metrics/cluster
    scheme: http
    static_configs:
      - targets: ["localhost:9000"]
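The Compose example earlier sets MINIO_PROMETHEUS_AUTH_TYPE to public, which is what the scrape config above assumes. If you leave metrics authenticated (the default), let mc generate a scrape config with a bearer token instead:

# Print a ready-made scrape_config including the bearer token
mc admin prometheus generate local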

Key metrics to alert on:

| Metric | Alert condition | Meaning |
| --- | --- | --- |
| minio_cluster_disk_offline_total | > 0 | A drive has gone offline |
| minio_node_file_descriptor_open_total | > 80% of limit | File descriptor exhaustion risk |
| minio_bucket_usage_total_bytes | Approaching capacity | Storage running full |
| minio_cluster_health | != 1 | Cluster is degraded |
| minio_s3_requests_errors_total | Spiking | Application errors increasing |

Comparison: MinIO vs Cloud Object Storage

| Feature | MinIO (Self-Hosted) | AWS S3 | Cloudflare R2 |
| --- | --- | --- | --- |
| Storage cost | Hardware cost only | $0.023/GB/month | $0.015/GB/month |
| Egress fees | $0 | $0.09/GB | $0 |
| API compatibility | Full S3 API | Native S3 API | Full S3 API |
| Data sovereignty | Your servers | AWS regions | Configurable |
| Max throughput | 100+ GB/s (NVMe) | Region dependent | Region dependent |
| Erasure coding | Built-in | Built-in | Built-in |
| Lifecycle management | Built-in | Built-in | Limited |
| Setup complexity | Medium | Low | Low |
| Operational overhead | Your responsibility | AWS managed | Cloudflare managed |
| Encryption | SSE-S3, SSE-KMS, SSE-C | SSE-S3, SSE-KMS, SSE-C | SSE-S3 |

Comparison: MinIO vs Other Self-Hosted Options

| Feature | MinIO | Ceph | SeaweedFS |
| --- | --- | --- | --- |
| Primary focus | Object storage | Unified storage (block + file + object) | Object + file storage |
| S3 compatibility | Full (gold standard) | Partial (via RGW) | Good |
| Setup complexity | Low | Very high | Medium |
| Scalability | Excellent | Excellent | Excellent |
| Performance on NVMe | Best-in-class | Good | Good |
| Learning curve | Shallow | Steep | Moderate |
| Community size | Large (20k+ GitHub stars) | Large | Growing |
| Best for | S3 replacement, app storage | Enterprise unified storage | Simple distributed FS + S3 |

MinIO is the simplest to deploy and the most S3-compatible. Ceph is more feature-rich but significantly more complex to operate. SeaweedFS is a good middle ground if you also need a distributed filesystem alongside object storage.

Production Checklist

Before running MinIO in production, verify each item:

  1. Use separate credentials — never use root credentials in application code. Create service accounts with scoped policies.
  2. Enable TLS — encrypt data in transit with certificates from Let’s Encrypt or your internal CA.
  3. Set up monitoring — connect Prometheus and Grafana for visibility into cluster health.
  4. Configure lifecycle rules — automatically expire or transition objects to control storage growth.
  5. Enable server-side encryption — protect data at rest, especially for sensitive buckets.
  6. Test backup and restore — practice restoring from your backup target before you need it.
  7. Set resource limits — configure ulimit -n to at least 65536 for open file descriptors.
  8. Use dedicated drives — do not share storage drives with the OS or other services.
  9. Plan for growth — MinIO distributed clusters expand by adding complete server pools rather than individual nodes. Plan your initial topology accordingly.
  10. Document your setup — record your topology, credentials location, and recovery procedures.

Conclusion

MinIO delivers a production-grade, S3-compatible object storage layer that you fully control. Whether you are replacing AWS S3 to cut costs, building a backup target for your infrastructure, or providing storage for an internal application, MinIO gives you the right balance of simplicity, performance, and compatibility. The Docker setup gets you running in minutes, and the distributed mode scales to petabytes when you need it.

Frequently Asked Questions (FAQ)

Which object storage solution should I choose in 2026?

The best choice depends on your specific requirements:

  • For beginners: MinIO is the simplest to deploy; the Docker setup above gets you running in minutes
  • For production: weigh community size and documentation; MinIO and Ceph both have large, active ecosystems
  • For teams: look for multi-tenancy and user management; MinIO covers this with bucket policies and service accounts
  • For privacy: all three self-hosted options keep data entirely on infrastructure you control

Refer to the comparison tables above for detailed feature breakdowns.

Can I migrate between these tools?

Yes. MinIO, Ceph (via RGW), and SeaweedFS all expose S3-compatible APIs, so mc mirror can copy data between any two of them. Always:

  1. Back up your current data
  2. Test the migration in a staging environment
  3. Check the official migration guides in the documentation

Are there free versions available?

All tools in this guide offer free, open-source editions. Some also provide paid plans with additional features, priority support, or managed hosting.

How do I get started?

  1. Review the comparison table to identify your requirements
  2. Visit the official documentation (links provided above)
  3. Start with a Docker Compose setup for easy testing
  4. Join the community forums for troubleshooting