System containers give you the isolation of virtual machines with the performance of bare metal. Unlike application containers (Docker) that run a single process, LXC containers run a full operating system with init, networking, and multiple services — making them ideal for running traditional workloads without the overhead of a hypervisor. If you manage infrastructure and need lightweight VMs for development, testing, or production isolation, LXC with the LXD management layer is a powerful tool in your arsenal.
This guide walks you through installing LXD, launching containers, configuring networking and storage, building reusable profiles, and managing container lifecycles with snapshots and backups.
Prerequisites
Before you begin, ensure you have:
- Ubuntu 22.04 LTS or newer (or any Linux distribution with snap support)
- sudo access on the host machine
- At least 10 GB free disk space for images and container storage
- Basic familiarity with the Linux command line
What Are LXC System Containers?
LXC (Linux Containers) is a userspace interface for the Linux kernel’s containment features. It combines kernel namespaces, cgroups, and security policies to create isolated environments that share the host kernel but have their own process trees, network stacks, and filesystems.
Key characteristics of system containers:
- Full OS environment — Containers run systemd (or another init), have their own users, and can run multiple services
- Shared kernel — All containers use the host’s Linux kernel, eliminating the need for a guest kernel
- Near-native performance — No hardware emulation means containers start in seconds and have negligible CPU overhead
- Persistent — Unlike Docker containers, LXC containers are designed to be long-lived and stateful
LXC vs LXD
LXC is the low-level container runtime. LXD is the modern management daemon built on top of LXC that provides:
- A clean CLI (the lxc command — note: this is the LXD client, not the raw LXC tools)
- A REST API for programmatic management
- Image management with remote image servers
- Built-in clustering, live migration, and project isolation
- Storage pool and network management
For this guide, we use LXD to manage containers.
Step 1: Install and Initialize LXD
Install LXD via Snap
The recommended installation method is the snap package, which provides the latest stable release:
sudo snap install lxd
If you have an older lxd deb package installed, remove it first:
sudo apt remove --purge lxd lxd-client
sudo snap install lxd
Add your user to the lxd group to run commands without sudo:
sudo usermod -aG lxd $USER
newgrp lxd
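Group membership only applies to new login sessions, which is a common stumbling block. A quick way to confirm the change took effect in your current shell (a plain shell sketch, not LXD tooling):

```shell
# Check whether the current shell session has picked up the lxd group.
# usermod only affects sessions started after the change.
if id -nG | grep -qw lxd; then
  echo "lxd group active"
else
  echo "not yet active: log out and back in, or run: newgrp lxd"
fi
```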
Initialize LXD
Run the interactive initialization wizard:
lxd init
For most setups, accept the defaults or use these recommended options:
Would you like to use LXD clustering? (yes/no) [default=no]: no
Do you want to configure a new storage pool? (yes/no) [default=yes]: yes
Name of the new storage pool [default=default]: default
Name of the storage backend to use (dir, zfs, btrfs) [default=zfs]: zfs
Create a new ZFS pool? (yes/no) [default=yes]: yes
Would you like to use an existing empty block device? (yes/no) [default=no]: no
Size in GiB of the new loop device (1GiB minimum) [default=30GiB]: 30
Would you like to connect to a MAAS server? (yes/no) [default=no]: no
Would you like to create a new local network bridge? (yes/no) [default=yes]: yes
What should the new bridge be called? [default=lxdbr0]: lxdbr0
What IPv4 address should be used? (CIDR subnet notation) [default=auto]: auto
What IPv6 address should be used? (CIDR subnet notation) [default=auto]: none
Would you like the LXD server to be available over the network? (yes/no) [default=no]: no
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: yes
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
For automated setups, use a preseed file:
# lxd-preseed.yaml
config: {}
networks:
- config:
    ipv4.address: 10.10.10.1/24
    ipv4.nat: "true"
    ipv6.address: none
  description: ""
  name: lxdbr0
  type: bridge
storage_pools:
- config:
    size: 30GiB
  description: ""
  name: default
  driver: zfs
profiles:
- config: {}
  description: Default LXD profile
  devices:
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
Apply it with:
cat lxd-preseed.yaml | lxd init --preseed
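If you provision several hosts, it can help to generate the preseed from variables rather than hand-editing copies. A minimal sketch (the BRIDGE and POOL_SIZE variable names are my own, and the generated file covers only the network and storage sections):

```shell
#!/usr/bin/env bash
# Sketch: generate a partial LXD preseed from variables so the same
# script can target different bridge names and pool sizes.
set -euo pipefail

BRIDGE=lxdbr0
POOL_SIZE=30GiB

make_preseed() {
  cat <<EOF
config: {}
networks:
- config:
    ipv4.address: 10.10.10.1/24
    ipv4.nat: "true"
    ipv6.address: none
  name: ${BRIDGE}
  type: bridge
storage_pools:
- config:
    size: ${POOL_SIZE}
  name: default
  driver: zfs
EOF
}

make_preseed > lxd-preseed.yaml
# Then, on the LXD host: lxd init --preseed < lxd-preseed.yaml
```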
Step 2: Launch Your First Container
Browse Available Images
LXD downloads container images from remote image servers. List available Ubuntu images:
lxc image list ubuntu: | head -30
Or search for a specific distribution:
lxc image list images: debian/12
lxc image list images: alpine/3.19
lxc image list images: rocky/9
Launch a Container
Create and start a container in one command:
lxc launch ubuntu:22.04 web-01
This downloads the Ubuntu 22.04 image (if not cached), creates a container named web-01, and starts it. The first launch may take a minute; subsequent launches from the cached image take 2-3 seconds.
Basic Container Operations
# List running containers
lxc list
# Get detailed info about a container
lxc info web-01
# Open a shell inside the container
lxc exec web-01 -- bash
# Run a single command in the container
lxc exec web-01 -- apt update
# Stop and start containers
lxc stop web-01
lxc start web-01
# Delete a container (must be stopped first)
lxc stop web-01 && lxc delete web-01
# Force-delete a running container
lxc delete web-01 --force
Push and Pull Files
Transfer files between host and container:
# Push a file into the container
lxc file push /path/to/local/file web-01/etc/nginx/nginx.conf
# Pull a file from the container
lxc file pull web-01/var/log/syslog ./container-syslog.txt
# Push an entire directory
lxc file push -r ./myapp/ web-01/opt/
Step 3: Container Networking
Default NAT Networking
By default, LXD creates a bridge (lxdbr0) with NAT. Containers get IP addresses from LXD’s built-in DHCP server and can access the internet through the host. However, they are not directly accessible from the external network.
# View the network configuration
lxc network show lxdbr0
# List container IP addresses
lxc list -c n,s,4,6
Assign a Static IP Address
Set a static IP for a container using a device override:
lxc config device override web-01 eth0 ipv4.address=10.10.10.101
lxc restart web-01
Proxy Devices (Port Forwarding)
Forward a host port to a container port:
# Forward host port 80 to container port 80
lxc config device add web-01 http proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80
# Forward host port 3306 to a database container
lxc config device add db-01 mysql proxy listen=tcp:0.0.0.0:3306 connect=tcp:127.0.0.1:3306
# Remove a proxy device
lxc config device remove web-01 http
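When a container exposes several services, adding proxy devices one by one gets tedious. A sketch that loops over a port list; the device name port-<n> is an arbitrary label of my choosing (LXD only requires device names to be unique per container), and DRY_RUN=1 prints each command instead of executing it:

```shell
#!/usr/bin/env bash
# Sketch: add one proxy device per port for a container.
set -euo pipefail

CONTAINER=web-01
PORTS="80 443"
DRY_RUN=${DRY_RUN:-1}

add_proxies() {
  for port in $PORTS; do
    cmd="lxc config device add $CONTAINER port-$port proxy listen=tcp:0.0.0.0:$port connect=tcp:127.0.0.1:$port"
    if [ "$DRY_RUN" = 1 ]; then
      echo "$cmd"       # preview mode: print the command only
    else
      eval "$cmd"       # on a real LXD host, run it
    fi
  done
}

add_proxies
```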
Bridged Networking (Direct LAN Access)
For containers that need their own IP on the physical network, use macvlan or a bridge on the physical interface. Note that with macvlan the host itself cannot reach the containers over that interface (a kernel limitation of macvlan); use a bridged setup if host-to-container traffic is required:
# Create a macvlan network attached to enp0s3
lxc network create macvlan0 --type=macvlan parent=enp0s3
# Attach a container to the macvlan network
lxc config device add web-01 eth1 nic network=macvlan0
Step 4: Profiles and Resource Limits
Understanding Profiles
Profiles are reusable configuration templates applied to containers. Every container uses the default profile unless specified otherwise. You can stack multiple profiles.
# View the default profile
lxc profile show default
# List all profiles
lxc profile list
Create a Web Server Profile
lxc profile create webserver
lxc profile edit webserver
Paste this configuration:
config:
  limits.cpu: "2"
  limits.memory: 1GB
  limits.memory.swap: "false"
  security.nesting: "false"
description: Profile for web server containers
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    size: 10GB
    type: disk
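If you prefer to avoid the interactive editor (for example in provisioning scripts), you can pipe YAML straight into lxc profile edit. A sketch that writes the webserver profile above to a file first (the filename webserver-profile.yaml is my own choice):

```shell
#!/usr/bin/env bash
# Sketch: create a profile non-interactively by piping YAML into
# `lxc profile edit` instead of opening $EDITOR.
set -euo pipefail

cat > webserver-profile.yaml <<'EOF'
config:
  limits.cpu: "2"
  limits.memory: 1GB
description: Profile for web server containers
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    size: 10GB
    type: disk
EOF

# Then, on the LXD host:
#   lxc profile create webserver
#   lxc profile edit webserver < webserver-profile.yaml
```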
Create a Database Profile
lxc profile create database
lxc profile edit database
config:
  limits.cpu: "4"
  limits.memory: 4GB
  limits.memory.swap: "false"
description: Profile for database containers
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    size: 50GB
    type: disk
Launch Containers with Profiles
# Launch with a specific profile
lxc launch ubuntu:22.04 web-02 --profile webserver
# Launch with multiple profiles (applied in order)
lxc launch ubuntu:22.04 db-01 --profile default --profile database
Set Resource Limits Directly
You can also set limits on individual containers:
# CPU limit (number of cores)
lxc config set web-01 limits.cpu 2
# Memory limit
lxc config set web-01 limits.memory 512MB
# Disk I/O priority (0-10, 10 = highest)
lxc config set web-01 limits.disk.priority 5
# Network bandwidth limit
lxc config device set web-01 eth0 limits.ingress 100Mbit
lxc config device set web-01 eth0 limits.egress 50Mbit
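To keep per-container limits consistent across a fleet without a profile, a loop works; a sketch where the container list and limit values are illustrative, and DRY_RUN=1 previews the commands rather than running them:

```shell
#!/usr/bin/env bash
# Sketch: apply one set of limits to several containers in one pass.
set -euo pipefail

CONTAINERS="web-01 web-02"
DRY_RUN=${DRY_RUN:-1}

# In preview mode, print the command; otherwise execute it.
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

for c in $CONTAINERS; do
  run lxc config set "$c" limits.cpu 2
  run lxc config set "$c" limits.memory 512MB
  run lxc config device set "$c" eth0 limits.ingress 100Mbit
done
```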
Step 5: Storage Management
Storage Pools
LXD supports multiple storage backends. Check your current pools:
lxc storage list
lxc storage info default
Create additional pools for different workloads:
# Create a ZFS pool for containers that need fast snapshots
lxc storage create fast-pool zfs size=50GiB
# Create a directory-backed pool (no special filesystem needed)
lxc storage create archive-pool dir source=/mnt/archive
Custom Storage Volumes
Create persistent volumes that can be attached to containers:
# Create a volume
lxc storage volume create default app-data
# Attach it to a container
lxc config device add web-01 appdata disk pool=default source=app-data path=/opt/data
# Detach and attach to a different container
lxc config device remove web-01 appdata
lxc config device add web-02 appdata disk pool=default source=app-data path=/opt/data
Step 6: Snapshots and Backups
Create Snapshots
Snapshots capture the complete state of a container at a point in time:
# Create a snapshot
lxc snapshot web-01 before-upgrade
# List snapshots
lxc info web-01 | grep -A 20 Snapshots
# Restore a snapshot
lxc restore web-01 before-upgrade
# Delete a snapshot
lxc delete web-01/before-upgrade
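A typical use is wrapping a risky change: snapshot first, restore automatically if the change fails. A sketch of that pattern (the timestamped snapshot name is my own convention; DRY_RUN=1 prints the lxc commands instead of running them, so set DRY_RUN=0 on a real host):

```shell
#!/usr/bin/env bash
# Sketch: snapshot before an upgrade, restore on failure.
set -euo pipefail

CONTAINER=web-01
SNAP="pre-upgrade-$(date +%Y%m%d)"
DRY_RUN=${DRY_RUN:-1}

# In preview mode, print the command; otherwise execute it.
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run lxc snapshot "$CONTAINER" "$SNAP"
if ! run lxc exec "$CONTAINER" -- sh -c "apt update && apt upgrade -y"; then
  echo "upgrade failed, rolling back"
  run lxc restore "$CONTAINER" "$SNAP"
fi
```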
Automated Snapshots
Configure automatic snapshot schedules:
# Take a snapshot every day, keep the last 7
lxc config set web-01 snapshots.schedule "0 2 * * *"
lxc config set web-01 snapshots.schedule.stopped false
lxc config set web-01 snapshots.expiry 7d
lxc config set web-01 snapshots.pattern "auto-%d"
Export and Import Containers
Back up containers as tarball files:
# Export a container (includes all snapshots)
lxc export web-01 web-01-backup.tar.gz
# Import on the same or different host
lxc import web-01-backup.tar.gz web-01-restored
# Publish a container as an image
lxc publish web-01 --alias my-web-image
lxc image list local: | grep my-web-image
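For scheduled backups, date-stamped exports with simple retention are usually enough. A sketch suitable for a cron job; BACKUP_DIR and the 7-day retention are assumptions, and DRY_RUN=1 previews the commands:

```shell
#!/usr/bin/env bash
# Sketch: nightly container export with date-stamped filenames
# and age-based cleanup of old archives.
set -euo pipefail

CONTAINER=web-01
BACKUP_DIR=${BACKUP_DIR:-/var/backups/lxd}
DRY_RUN=${DRY_RUN:-1}

# In preview mode, print the command; otherwise execute it.
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

backup() {
  stamp=$(date +%Y%m%d)
  run mkdir -p "$BACKUP_DIR"
  run lxc export "$CONTAINER" "$BACKUP_DIR/$CONTAINER-$stamp.tar.gz"
  # Drop exports older than 7 days
  run find "$BACKUP_DIR" -name "$CONTAINER-*.tar.gz" -mtime +7 -delete
}

backup
```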
Practical Example: Multi-Container Web Stack
Let’s deploy a practical web application stack with three containers:
# Create the containers
lxc launch ubuntu:22.04 nginx-proxy --profile webserver
lxc launch ubuntu:22.04 app-server --profile webserver
lxc launch ubuntu:22.04 pg-server --profile database
# Install nginx on the proxy
lxc exec nginx-proxy -- bash -c "apt update && apt install -y nginx"
# Install Node.js on the app server
lxc exec app-server -- bash -c "curl -fsSL https://deb.nodesource.com/setup_20.x | bash - && apt install -y nodejs"
# Install PostgreSQL on the database server
lxc exec pg-server -- bash -c "apt update && apt install -y postgresql postgresql-contrib"
# Verify all containers are running
lxc list -c n,s,4,P
Configure nginx as a reverse proxy pointing to the app server’s container IP, and configure the app to connect to PostgreSQL on the database container’s IP. Because all containers share the same bridge network, they can communicate directly.
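A sketch of that reverse-proxy configuration inside nginx-proxy; the upstream address 10.10.10.102 and app port 3000 are assumptions here, so substitute the IP that lxc list reports for app-server and whatever port your app listens on:

```nginx
# /etc/nginx/sites-available/default inside the nginx-proxy container
server {
    listen 80;

    location / {
        # Forward to the app-server container (IP and port assumed)
        proxy_pass http://10.10.10.102:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```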
Troubleshooting Common Issues
Container Fails to Start
# Check the container log
lxc info web-01 --show-log
# Check LXD daemon logs
sudo journalctl -u snap.lxd.daemon -f
# Verify storage pool health
lxc storage info default
Networking Not Working Inside Container
# Verify the bridge is up
ip addr show lxdbr0
# Check iptables NAT rules
sudo iptables -t nat -L -n | grep lxd
# Restart LXD networking
sudo systemctl restart snap.lxd.daemon
Permission Denied Errors
LXD uses unprivileged containers by default (UID/GID mapping). If a service inside the container needs to bind to a privileged port or access specific devices:
# Check the id map
lxc config get web-01 raw.idmap
# For containers that need raw host access (use with caution)
lxc config set web-01 security.privileged true
lxc restart web-01
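For bind-mounted host directories, a lighter-weight alternative to full privilege is mapping your host UID/GID into the container with raw.idmap. A sketch assuming your host user is UID/GID 1000:

```
# Value for: lxc config set web-01 raw.idmap
# "both" maps the UID and GID in a single line; restart the
# container afterwards for the mapping to take effect.
both 1000 1000
```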
Image Download Failures
# Check remote image servers
lxc remote list
# Inspect locally cached images (remove stale ones with lxc image delete)
lxc image list local:
# Manually copy an image
lxc image copy ubuntu:22.04 local: --alias ubuntu-22.04
LXD Useful Commands Reference
| Command | Description |
|---|---|
| lxc list | List all containers and their status |
| lxc info <name> | Detailed container information |
| lxc exec <name> -- bash | Open a shell inside a container |
| lxc config show <name> | Display container configuration |
| lxc profile list | List all profiles |
| lxc network list | List networks and bridges |
| lxc storage list | List storage pools |
| lxc snapshot <name> <snap> | Create a snapshot |
| lxc copy <name> <new-name> | Clone a container |
| lxc move <name> <new-name> | Rename a container |
Summary
LXC/LXD system containers provide a lightweight, high-performance alternative to traditional virtual machines for running full operating system environments. You’ve learned how to install and initialize LXD, launch containers from remote images, configure networking with bridges and proxy devices, create reusable profiles with resource limits, manage persistent storage volumes, and protect your data with snapshots and backups.
For production deployments, consider enabling LXD clustering for high availability, setting up automated snapshots with retention policies, and integrating container management into your existing infrastructure automation tools like Ansible. System containers excel for development environments, CI/CD pipelines, multi-tenant hosting, and any workload that benefits from OS-level isolation without hypervisor overhead.