TL;DR — Quick Summary

Complete MinIO guide: architecture, distributed mode, erasure coding, IAM, SSE, bucket notifications, replication, and production Docker Compose deployment.

MinIO is a high-performance, Kubernetes-native object storage system that speaks the full AWS S3 API. Where the existing MinIO setup guide covers basic installation and troubleshooting, this guide goes deeper: distributed architecture with erasure coding, built-in IAM, server-side encryption, bucket notifications, cross-site replication, and a production Docker Compose stack with nginx. Every S3-compatible tool — boto3, rclone, Terraform, the aws CLI — works with MinIO by changing a single endpoint URL.
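
That wire-level compatibility comes down to the S3 Signature V4 protocol over plain HTTP. As an illustration only (endpoint, bucket, and credentials below are placeholders, and in practice boto3 or mc does this for you), a presigned GET URL for a MinIO endpoint can be produced with nothing but the Python standard library:

```python
import hashlib
import hmac
import urllib.parse
from datetime import datetime, timezone

def presign_get(endpoint, bucket, key, access_key, secret_key,
                region="us-east-1", expires=3600, now=None):
    """Build an S3 SigV4 presigned GET URL (path-style addressing)."""
    now = now or datetime.now(timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    host = urllib.parse.urlparse(endpoint).netloc
    scope = f"{datestamp}/{region}/s3/aws4_request"
    path = urllib.parse.quote(f"/{bucket}/{key}", safe="/")
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    # Canonical query string: sorted keys, everything percent-encoded
    canonical_query = "&".join(
        f"{urllib.parse.quote(k, safe='')}={urllib.parse.quote(v, safe='')}"
        for k, v in sorted(params.items()))
    canonical_request = "\n".join([
        "GET", path, canonical_query,
        f"host:{host}\n", "host", "UNSIGNED-PAYLOAD"])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256", amz_date, scope,
        hashlib.sha256(canonical_request.encode()).hexdigest()])
    # Derive the signing key: HMAC chain over date, region, service
    def _hmac(k, msg):
        return hmac.new(k, msg.encode(), hashlib.sha256).digest()
    k = _hmac(("AWS4" + secret_key).encode(), datestamp)
    for part in (region, "s3", "aws4_request"):
        k = _hmac(k, part)
    signature = hmac.new(k, string_to_sign.encode(), hashlib.sha256).hexdigest()
    return f"{endpoint}{path}?{canonical_query}&X-Amz-Signature={signature}"

url = presign_get("http://minio.example.com:9000", "backups",
                  "backup.tar.gz", "minioadmin", "StrongPassword123!")
print(url.split("?")[0])  # http://minio.example.com:9000/backups/backup.tar.gz
```

The same function works against AWS S3 or MinIO; only the endpoint argument changes, which is exactly the point.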

Prerequisites

  • Linux server (Ubuntu 22.04+ or RHEL 9+), or Docker 24+.
  • At least 8 GB RAM and dedicated drives (not the OS disk) for production.
  • Ports 9000 (API) and 9001 (Console) accessible.
  • For distributed mode: identical hardware across nodes, DNS or /etc/hosts resolving every node name, and synchronized clocks (NTP).

MinIO Architecture: Erasure Coding and Bitrot Protection

MinIO stores objects using Reed-Solomon erasure coding. When you write an object, MinIO splits it into data shards and parity shards and spreads them across a group of drives called an erasure set. A 16-drive erasure set with EC:8 holds 8 data + 8 parity shards: MinIO survives losing any 8 drives simultaneously without data loss.

Key architectural concepts:

  • Erasure set — the unit of redundancy, typically 4–16 drives. MinIO selects the largest supported set size that divides evenly into the total drive count.
  • Server pool — a group of nodes that form a single namespace. Add pools online to expand capacity without downtime.
  • Bitrot protection — every shard is checksummed (HighwayHash). Reads verify checksums; corrupted shards are healed from parity automatically.
  • Inline data encryption — objects are encrypted at the shard level before being written to disk, not at the volume level.
  • Read quorum / Write quorum — with the default parity of EC:N/2, reads need N/2 drives online and writes need N/2 + 1 (one extra to break split-brain ties). Below quorum, MinIO returns errors rather than corrupt data.
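
The quorum arithmetic is worth internalizing before sizing a cluster. A small sketch of the rules above (parity EC:M on an N-drive erasure set; a simplified model, not MinIO's code):

```python
def quorums(total_drives: int, parity: int) -> tuple[int, int]:
    """Return (read_quorum, write_quorum) in drives for one erasure set.

    Reads need the data-shard count (N - M) online; writes need one more
    drive when parity is exactly N/2, to break split-brain ties.
    """
    data = total_drives - parity
    read_q = data
    write_q = data + 1 if parity == total_drives // 2 else data
    return read_q, write_q

# 16-drive erasure set with EC:8, the example above
r, w = quorums(16, 8)
print(r, w)  # 8 9
# Losing 8 drives leaves 8 online: reads still served, writes blocked (need 9)
```

With a lower parity such as EC:4 on 16 drives, read and write quorum are both 12, so the cluster tolerates fewer failures but wastes less capacity.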

Installation Methods

Method 1: Binary + systemd

# Download binary
wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio
sudo mv minio /usr/local/bin/

# Dedicated user and data directory
sudo useradd -r minio-user -s /sbin/nologin
sudo mkdir -p /data/minio
sudo chown minio-user:minio-user /data/minio

Environment file /etc/default/minio:

MINIO_ROOT_USER=minioadmin
MINIO_ROOT_PASSWORD=StrongPassword123!
MINIO_VOLUMES="/data/minio"
MINIO_OPTS="--console-address :9001"

Systemd unit /etc/systemd/system/minio.service:

[Unit]
Description=MinIO Object Storage
After=network-online.target
Wants=network-online.target

[Service]
User=minio-user
Group=minio-user
EnvironmentFile=/etc/default/minio
ExecStart=/usr/local/bin/minio server $MINIO_VOLUMES $MINIO_OPTS
Restart=always
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

sudo systemctl daemon-reload
sudo systemctl enable --now minio

Method 2: Docker

docker run -d \
  --name minio \
  -p 9000:9000 -p 9001:9001 \
  -e MINIO_ROOT_USER=minioadmin \
  -e MINIO_ROOT_PASSWORD=StrongPassword123! \
  -v /data/minio:/data \
  quay.io/minio/minio server /data --console-address ":9001"

Method 3: Kubernetes Operator

kubectl apply -f https://raw.githubusercontent.com/minio/operator/master/deploy/namespace.yaml
helm install minio-operator minio/operator --namespace minio-operator

The Operator manages Tenant custom resources, each representing an independent MinIO cluster with its own storage, TLS certificates, and IAM.


Distributed Mode: Multi-Node Multi-Drive

Distributed MinIO requires launching the same command on every node simultaneously. All nodes must share identical credentials and MinIO version.

For a 4-node cluster with 4 drives each (16 drives total, EC:8):

# Run on ALL 4 nodes — identical command
export MINIO_ROOT_USER=minioadmin
export MINIO_ROOT_PASSWORD=StrongPassword123!

minio server \
  http://minio-node{1...4}/data/{1...4} \
  --console-address ":9001"

MinIO uses its own ellipsis notation, not shell brace expansion (note the three dots) — minio-node{1...4} expands to minio-node1, minio-node2, minio-node3, minio-node4. Each node must be resolvable via DNS or /etc/hosts.
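
The expansion is easy to reason about offline. A minimal re-implementation of the ellipsis notation for illustration (this is my own sketch, not MinIO's code):

```python
import re
from itertools import product

def expand(template: str) -> list[str]:
    """Expand MinIO-style {a...b} ellipsis ranges into all combinations."""
    ranges = re.findall(r"\{(\d+)\.\.\.(\d+)\}", template)
    parts = re.split(r"\{\d+\.\.\.\d+\}", template)
    combos = product(*(range(int(a), int(b) + 1) for a, b in ranges))
    results = []
    for combo in combos:
        out = parts[0]
        for val, part in zip(combo, parts[1:]):
            out += str(val) + part
        results.append(out)
    return results

endpoints = expand("http://minio-node{1...4}/data/{1...4}")
print(len(endpoints))  # 16
print(endpoints[0])    # http://minio-node1/data/1
```

Four nodes times four drives yields the 16 endpoints of the EC:8 example above.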

Adding a server pool to an existing cluster:

minio server \
  http://minio-node{1...4}/data/{1...4} \
  http://minio-node{5...8}/data/{1...4} \
  --console-address ":9001"

Pools are independent groups of erasure sets that share the same namespace. New objects are placed across pools weighted by available free space, so an added pool fills up without rebalancing existing data.


MinIO Console and mc CLI

MinIO Console (Web UI)

The Console runs on port 9001 and covers all operational tasks:

  • Buckets — create, configure versioning, lifecycle, encryption, notifications.
  • Identity — users, groups, service accounts, IAM policies.
  • Monitoring — real-time metrics, drive health, erasure set status.
  • Logs — audit log viewer, server log streaming.

mc CLI Reference

# Install
wget https://dl.min.io/client/mc/release/linux-amd64/mc
chmod +x mc && sudo mv mc /usr/local/bin/

# Configure alias
mc alias set myminio https://minio.example.com minioadmin StrongPassword123!
Command                                       Description
mc mb myminio/backups                         Create bucket
mc cp file.tar.gz myminio/backups/            Upload file
mc mirror /local/dir myminio/backups/dir/     Sync directory
mc ls myminio/backups                         List objects
mc cat myminio/backups/config.json            Stream object to stdout
mc rm myminio/backups/old.tar.gz              Delete object
mc version enable myminio/backups             Enable versioning
mc anonymous set download myminio/public      Make bucket public-read
mc admin info myminio                         Cluster status
mc admin prometheus generate myminio          Prometheus scrape config

Bucket Management: Versioning, Lifecycle, and Object Locking

Versioning

mc version enable myminio/my-bucket
# Every PUT creates a new version; DELETEs insert a delete marker

Lifecycle Rules (ILM)

# Expire current objects and non-current versions after 30 days
mc ilm add --expiry-days 30 --noncurrentversion-expiration-days 30 myminio/my-bucket

# Transition objects to a cold tier after 90 days
# (requires a remote tier configured first with: mc admin tier add)
mc ilm add --transition-days 90 --transition-storage-class COLD myminio/my-bucket

Object Locking (WORM)

# Enable at bucket creation — cannot be disabled later
mc mb --with-lock myminio/compliance-bucket

# Set default retention: COMPLIANCE mode, 7 years
mc retention set --default COMPLIANCE "7y" myminio/compliance-bucket

COMPLIANCE mode locks objects even from the root user. GOVERNANCE mode allows root to override. Use COMPLIANCE for regulatory WORM requirements.
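
A default retention like "7y" is applied as a retain-until timestamp on each new object version. A rough sketch of that calculation (simplified: years counted as 365 days for illustration; MinIO's own date arithmetic may differ):

```python
from datetime import datetime, timedelta, timezone

UNIT_DAYS = {"d": 1, "y": 365}  # simplified unit map, illustration only

def retain_until(duration: str, start=None):
    """Compute the retain-until date for a duration like '30d' or '7y'."""
    start = start or datetime.now(timezone.utc)
    value, unit = int(duration[:-1]), duration[-1]
    return start + timedelta(days=value * UNIT_DAYS[unit])

t0 = datetime(2025, 1, 1, tzinfo=timezone.utc)
print(retain_until("30d", t0).date())  # 2025-01-31
```

Until that timestamp passes, DELETE requests against the locked version fail; in COMPLIANCE mode there is no override path at all.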


Identity and Access Management

MinIO’s IAM is a subset of AWS IAM. Policies are JSON documents with Version, Statement, Effect, Action, and Resource fields.

Custom Policy Example

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::backups",
        "arn:aws:s3:::backups/*"
      ]
    }
  ]
}

# Save the JSON above as policy.json, then:
mc admin policy create myminio readonly-backups policy.json
mc admin user add myminio appuser secretkey123
mc admin policy attach myminio readonly-backups --user appuser
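
The Action/Resource matching follows AWS wildcard semantics, which you can sanity-check offline. A deliberately simplified evaluator sketch (Allow statements and fnmatch-style wildcards only; real IAM evaluation also handles Deny, conditions, and principal scoping):

```python
from fnmatch import fnmatch

# The readonly-backups policy from the example above
POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::backups", "arn:aws:s3:::backups/*"],
    }],
}

def is_allowed(policy: dict, action: str, resource: str) -> bool:
    """True if any Allow statement matches both the action and the resource."""
    for stmt in policy["Statement"]:
        if stmt["Effect"] != "Allow":
            continue
        if any(fnmatch(action, a) for a in stmt["Action"]) and \
           any(fnmatch(resource, r) for r in stmt["Resource"]):
            return True
    return False

print(is_allowed(POLICY, "s3:GetObject", "arn:aws:s3:::backups/db.dump"))  # True
print(is_allowed(POLICY, "s3:PutObject", "arn:aws:s3:::backups/db.dump"))  # False
```

Note the two Resource entries: the bare bucket ARN is what s3:ListBucket checks, while object-level actions match against the /* form.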

Service Accounts

# Create a service account (scoped access key) for an application
mc admin user svcacct add myminio appuser
# Returns: AccessKey + SecretKey for application config

LDAP / Active Directory Integration

export MINIO_IDENTITY_LDAP_SERVER_ADDR="ldap.corp.example.com:389"
export MINIO_IDENTITY_LDAP_LOOKUP_BIND_DN="cn=minio,dc=corp,dc=example,dc=com"
export MINIO_IDENTITY_LDAP_LOOKUP_BIND_PASSWORD="ldappassword"
export MINIO_IDENTITY_LDAP_USER_DN_SEARCH_BASE_DN="ou=users,dc=corp,dc=example,dc=com"
export MINIO_IDENTITY_LDAP_USER_DN_SEARCH_FILTER="(uid=%s)"

OpenID Connect (Keycloak, Okta, Azure AD)

export MINIO_IDENTITY_OPENID_CONFIG_URL="https://keycloak.example.com/realms/myrealm/.well-known/openid-configuration"
export MINIO_IDENTITY_OPENID_CLIENT_ID="minio"
export MINIO_IDENTITY_OPENID_CLIENT_SECRET="secret"
export MINIO_IDENTITY_OPENID_CLAIM_NAME="policy"

Map the policy claim in your OIDC token to a MinIO IAM policy name, and users inherit their permissions from the IdP.


Server-Side Encryption

SSE-S3 (Internal KMS)

# Single-key internal encryption — simplest option.
# The value is name:key, where key is exactly 32 bytes, base64-encoded
# (generate one with: head -c 32 /dev/urandom | base64)
export MINIO_KMS_SECRET_KEY="my-minio-key:MDEyMzQ1Njc4OWFiY2RlZjAxMjM0NTY3ODlhYmNkZWY="
# Enable default encryption on a bucket
mc encrypt set sse-s3 myminio/my-bucket

SSE-KMS with HashiCorp Vault

export MINIO_KMS_KES_ENDPOINT="https://kes.example.com:7373"
export MINIO_KMS_KES_KEY_NAME="minio-default-key"
export MINIO_KMS_KES_CERT_FILE="/etc/minio/kes-client.crt"
export MINIO_KMS_KES_KEY_FILE="/etc/minio/kes-client.key"
export MINIO_KMS_KES_CAPATH="/etc/minio/kes-ca.crt"

# Enable SSE-KMS default encryption on a bucket
mc encrypt set sse-kms minio-default-key myminio/sensitive-bucket

MinIO uses KES (Key Encryption Service) as a bridge between MinIO and external KMS providers (Vault, AWS KMS, Azure Key Vault, GCP KMS). Each object gets a unique data encryption key (DEK) wrapped by the master key via envelope encryption.
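
Envelope encryption is simple to sketch. The toy below shows only the structure (a fresh per-object DEK, wrapped by a master key); the XOR-with-SHAKE-256 keystream stands in for the authenticated AES-GCM/ChaCha20 ciphers MinIO actually uses, so treat it strictly as an illustration, never as real crypto:

```python
import hashlib
import os

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR with a SHAKE-256 keystream. NOT production crypto."""
    stream = hashlib.shake_256(key + nonce).digest(len(data))
    return bytes(a ^ b for a, b in zip(data, stream))

def seal_object(master_key: bytes, plaintext: bytes):
    dek = os.urandom(32)                       # fresh data encryption key per object
    obj_nonce, key_nonce = os.urandom(16), os.urandom(16)
    ciphertext = keystream_xor(dek, obj_nonce, plaintext)
    wrapped_dek = keystream_xor(master_key, key_nonce, dek)  # the envelope step
    # MinIO stores the ciphertext plus (wrapped DEK, nonces) as object metadata
    return ciphertext, wrapped_dek, obj_nonce, key_nonce

def open_object(master_key, ciphertext, wrapped_dek, obj_nonce, key_nonce):
    dek = keystream_xor(master_key, key_nonce, wrapped_dek)  # unwrap the DEK
    return keystream_xor(dek, obj_nonce, ciphertext)

master = os.urandom(32)
sealed = seal_object(master, b"backup contents")
assert open_object(master, *sealed) == b"backup contents"
```

The payoff of the structure: rotating or revoking the master key in KES only requires re-wrapping small DEKs, never re-encrypting the objects themselves.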


Bucket Notifications

MinIO fires events for object operations (PUT, DELETE, GET, and more) and delivers them to external targets.

Configure a Webhook Target

mc admin config set myminio notify_webhook:1 \
  endpoint="https://hooks.example.com/minio" \
  auth_token="Bearer mytoken"
mc admin service restart myminio

# Enable notifications on a bucket
mc event add myminio/my-bucket arn:minio:sqs::1:webhook \
  --event put,delete

Configure a Kafka Target

mc admin config set myminio notify_kafka:1 \
  brokers="kafka1.example.com:9092" \
  topic="minio-events" \
  tls=off
mc admin service restart myminio
mc event add myminio/logs-bucket arn:minio:sqs::1:kafka --event put

Supported targets: notify_webhook, notify_amqp (RabbitMQ), notify_kafka, notify_redis, notify_postgres, notify_mysql, notify_nats, notify_elasticsearch.
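
On the receiving side, a webhook handler just parses JSON. The abridged payload below follows the S3 event structure MinIO emits (field names per the S3 notification format; the real envelope carries additional fields such as timestamps and request metadata):

```python
import json

# Abridged example of a MinIO ObjectCreated:Put webhook body
payload = json.dumps({
    "EventName": "s3:ObjectCreated:Put",
    "Key": "my-bucket/reports/2024.csv",
    "Records": [{
        "eventName": "s3:ObjectCreated:Put",
        "s3": {
            "bucket": {"name": "my-bucket"},
            "object": {"key": "reports/2024.csv", "size": 1048576},
        },
    }],
})

def summarize(body: str) -> list[str]:
    """Extract 'event bucket/key (size)' lines from a notification body."""
    event = json.loads(body)
    return [
        f"{r['eventName']} {r['s3']['bucket']['name']}/"
        f"{r['s3']['object']['key']} ({r['s3']['object']['size']} bytes)"
        for r in event.get("Records", [])
    ]

print(summarize(payload)[0])
# s3:ObjectCreated:Put my-bucket/reports/2024.csv (1048576 bytes)
```

A batch upload can deliver several Records in one POST, which is why the handler iterates rather than reading Records[0] alone.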


Site Replication and Bucket Replication

Site-to-Site Replication (Active-Active)

# Set up aliases for both sites
mc alias set site-a https://minio-a.example.com adminA passA
mc alias set site-b https://minio-b.example.com adminB passB

# Enable site replication — both sites must have admin access
mc admin replicate add site-a site-b

Site replication mirrors all buckets, IAM, and metadata between two MinIO clusters. Both sites serve reads and writes; changes replicate asynchronously.

Bucket-Level Replication (Disaster Recovery)

# Source and destination buckets must have versioning enabled
mc version enable site-a/important-bucket
mc version enable site-b/important-bucket-replica

# Create replication rule
mc replicate add site-a/important-bucket \
  --remote-bucket https://adminB:passB@minio-b.example.com/important-bucket-replica \
  --replicate "delete,delete-marker,existing-objects"

Monitoring with Prometheus and Grafana

# Generate Prometheus scrape config
mc admin prometheus generate myminio > /etc/prometheus/minio.yml

Key MinIO metrics exported:

Metric                                        Description
minio_cluster_capacity_raw_total_bytes        Total raw storage
minio_cluster_capacity_usable_free_bytes      Usable free space
minio_cluster_nodes_online_total              Online node count
minio_cluster_drive_offline_total             Offline drive count
minio_s3_requests_total                       Request rate by type
minio_s3_errors_total                         Error rate

MinIO’s official Grafana dashboard ID is 13502 — import it directly from grafana.com.
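
Alert rules on these metrics reduce to simple threshold checks. A sketch that parses the Prometheus text exposition format and flags offline drives (the scrape sample is illustrative; a real rule would live in Prometheus/Alertmanager, not Python):

```python
def parse_metrics(text: str) -> dict[str, float]:
    """Parse 'name value' lines of Prometheus text format (labels ignored)."""
    out = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip HELP/TYPE comments and blanks
        name, _, value = line.rpartition(" ")
        out[name.split("{")[0]] = float(value)
    return out

SCRAPE = """\
# HELP minio_cluster_drive_offline_total Total drives offline
minio_cluster_drive_offline_total 1
minio_cluster_nodes_online_total 4
"""

metrics = parse_metrics(SCRAPE)
if metrics.get("minio_cluster_drive_offline_total", 0) > 0:
    print("ALERT: offline drive detected")  # fire well before quorum is at risk
```

Alerting on the first offline drive, rather than waiting for quorum loss, leaves the parity budget intact while you replace hardware.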


MinIO vs Alternatives

Feature                MinIO                Ceph RGW             SeaweedFS           GlusterFS          AWS S3
S3 API                 Full                 Full                 Partial             No                 Full (canonical)
Erasure coding         Yes (native)         Yes (via RADOS)      Yes (Reed-Solomon)  No                 Managed
Setup complexity       Low                  Very high            Low                 Medium             None
Kubernetes-native      Yes                  Partial              Partial             No                 N/A
Object locking (WORM)  Yes                  Yes                  No                  No                 Yes
Bucket notifications   Yes                  Limited              No                  No                 Yes (SNS)
Site replication       Yes (active-active)  Yes                  Manual              No                 Yes (CRR)
IAM / OIDC             Yes                  Yes                  No                  No                 Yes (full IAM)
Inline encryption      SSE-S3/KMS           SSE-S3/KMS           SSE                 No                 SSE-S3/KMS
Best for               S3 workloads         Scale-out all types  High file count     Shared filesystem  Managed cloud

Production Docker Compose with nginx

# docker-compose.yml
services:
  minio:
    image: quay.io/minio/minio:latest
    container_name: minio
    restart: unless-stopped
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: "${MINIO_ROOT_USER}"
      MINIO_ROOT_PASSWORD: "${MINIO_ROOT_PASSWORD}"
      MINIO_KMS_SECRET_KEY: "${MINIO_KMS_SECRET_KEY}"
    volumes:
      - minio_data:/data
    healthcheck:
      test: ["CMD", "mc", "ready", "local"]
      interval: 30s
      timeout: 10s
      retries: 3
    networks:
      - minio_net

  nginx:
    image: nginx:alpine
    container_name: minio-nginx
    restart: unless-stopped
    ports:
      - "443:443"
      - "9001:9001"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./certs:/etc/nginx/certs:ro
    depends_on:
      minio:
        condition: service_healthy
    networks:
      - minio_net

volumes:
  minio_data:

networks:
  minio_net:
    driver: bridge

nginx configuration (nginx.conf):

events { worker_connections 1024; }

http {
  upstream minio_api {
    server minio:9000;
  }
  upstream minio_console {
    server minio:9001;
  }

  server {
    listen 443 ssl;
    server_name s3.example.com;
    ssl_certificate /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    client_max_body_size 0;
    proxy_buffering off;

    location / {
      proxy_pass http://minio_api;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      # HTTP/1.1 with an empty Connection header enables upstream keepalive
      # and chunked transfer encoding for large multipart uploads
      proxy_http_version 1.1;
      proxy_set_header Connection "";
    }
  }

  server {
    listen 9001 ssl;
    server_name s3.example.com;
    ssl_certificate /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    location / {
      proxy_pass http://minio_console;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
    }
  }
}

Gotchas and Edge Cases

  • Clock skew — S3 signatures expire if the client clock differs from the server by more than 15 minutes. Keep all nodes and clients NTP-synchronized.
  • Drive count must form valid erasure sets — MinIO partitions drives into erasure sets of 4–16 drives each. A total drive count that cannot be divided evenly into a supported set size fails at startup; plan counts like 4, 8, 12, or 16 drives per pool.
  • Object locking is irreversible — You cannot disable object locking on a bucket once enabled. Create compliance-locked buckets only when required.
  • Distributed mode: all nodes start together — MinIO will not start until it can contact a quorum of nodes. Node startup order matters during cluster initialization.
  • Path-style addressing for non-AWS endpoints — AWS SDKs default to virtual-hosted style (bucket.endpoint). MinIO deployments without wildcard DNS need path-style (endpoint/bucket); the flag name varies by SDK (e.g. force_path_style, s3ForcePathStyle, addressing_style: path).
  • Console WebSocket — nginx must pass Upgrade and Connection headers for the Console’s real-time metrics to work.

Summary

  • MinIO provides a complete, self-hosted S3-compatible object storage stack — erasure coding, IAM, SSE, notifications, and replication.
  • Run single-node for dev/small deployments; distributed mode with server pools for production scale-out.
  • Use SSE-KMS + KES for regulated workloads requiring external key management.
  • Object locking (WORM) in COMPLIANCE mode satisfies SEC 17a-4, FINRA, and similar regulations.
  • Monitor with Prometheus + Grafana dashboard 13502; set alerts on minio_cluster_drive_offline_total.
  • The Docker Compose + nginx stack provides TLS termination and a production-ready starting point.