Proxmox Virtual Environment (Proxmox VE) is a powerful open-source server virtualization platform that combines KVM hypervisor and LXC container technology under a single management interface. Whether you are building a home lab to learn system administration, self-hosting applications, or running a small production environment, Proxmox VE provides enterprise-grade features without licensing costs. This guide walks you through the complete setup process, from bare-metal installation to running your first virtual machines and containers, with storage, networking, and backup configurations along the way.
Prerequisites
Before you begin, make sure you have:
- A dedicated machine (server, desktop PC, or mini PC) with a 64-bit CPU supporting Intel VT-x or AMD-V
- At least 8 GB of RAM (16 GB or more recommended for running multiple VMs)
- A dedicated SSD or NVMe drive (64 GB minimum) for the Proxmox OS
- Additional storage (HDD or SSD) for VM and container disks
- A USB flash drive (2 GB or larger) for the installer
- A wired Ethernet connection to your local network
- A separate computer with a web browser for management
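If the target machine currently runs Linux, you can confirm virtualization support before wiping it; this is a quick sketch (the `vmx`/`svm` flag names are standard, but exact `/proc/cpuinfo` output varies by distro):

```shell
# Count CPU virtualization flags: vmx = Intel VT-x, svm = AMD-V.
# A result of 0 means virtualization is unsupported or disabled in firmware.
grep -Ec '(vmx|svm)' /proc/cpuinfo
```

Any non-zero count means the CPU advertises hardware virtualization; if it reads 0 on a capable CPU, check that VT-x/AMD-V is enabled in the BIOS/UEFI.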
What Is Proxmox VE?
Proxmox VE is a Debian-based Linux distribution purpose-built for virtualization. It integrates two virtualization technologies into one platform:
- KVM (Kernel-based Virtual Machine) — full hardware virtualization that allows you to run complete operating systems including Windows, Linux, and BSD with dedicated virtual hardware
- LXC (Linux Containers) — operating-system-level virtualization that runs isolated Linux instances sharing the host kernel, providing near-native performance with minimal overhead
The platform is managed through a web-based interface accessible on port 8006, and it supports clustering, live migration, software-defined storage, and software-defined networking. Proxmox VE is licensed under the GNU AGPL v3, meaning it is completely free to use, with optional paid subscriptions for enterprise support and repository access.
Proxmox VE vs VMware vs Hyper-V
Understanding how Proxmox compares to other popular hypervisors helps you make an informed decision for your home lab:
| Feature | Proxmox VE | VMware ESXi | Microsoft Hyper-V |
|---|---|---|---|
| Cost | Free (open-source) | Free tier limited, vSphere licensed | Free with Windows Server |
| Hypervisor type | Type 1 (KVM) | Type 1 | Type 1 |
| Container support | Built-in LXC | Requires separate Docker/K8s | Requires separate setup |
| Web interface | Built-in (port 8006) | vSphere Client required | Windows Admin Center |
| Storage | ZFS, Ceph, NFS, LVM | VMFS, vSAN, NFS | ReFS, CSV, SMB |
| Clustering | Free (up to 32 nodes) | Requires vCenter license | Requires Windows Server |
| Live migration | Free | Requires vMotion license | Free with clustering |
| Backup | Built-in vzdump + PBS | Requires third-party tools | Requires Windows Server Backup |
| Base OS | Debian Linux | Proprietary | Windows Server |
| Community | Large, active forums | Large, enterprise-focused | Microsoft ecosystem |
Proxmox VE stands out for home labs because it is fully featured at zero cost, natively supports both VMs and containers, and includes built-in backup and clustering without additional licensing.
Installing Proxmox VE
Download the ISO
Download the latest Proxmox VE ISO installer from the official download page at https://www.proxmox.com/en/downloads. At the time of writing, the current version is Proxmox VE 8.x.
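Before writing the ISO to USB, it is worth verifying its checksum against the value published on the download page; a corrupted download is a common cause of installer failures. The filename below is an example, so substitute the version you actually downloaded:

```shell
# Compute the SHA-256 checksum of the downloaded ISO (filename is an example)
sha256sum proxmox-ve_8.x.iso

# Or save the published checksum to a file and verify automatically:
# echo "<published-checksum>  proxmox-ve_8.x.iso" > SHA256SUMS
sha256sum -c SHA256SUMS
```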
Create a Bootable USB Drive
On Linux, use dd to write the ISO to a USB drive:
# Identify your USB drive (be very careful to select the correct device)
lsblk
# Write the ISO to the USB drive (replace /dev/sdX with your USB device)
sudo dd bs=1M conv=fdatasync if=proxmox-ve_8.*.iso of=/dev/sdX status=progress
On Windows, use Etcher or Rufus in DD mode to write the ISO.
Run the Installer
Boot the target machine from the USB drive and follow these steps:
- Select Install Proxmox VE (Graphical) from the boot menu
- Accept the EULA
- Select the target disk for installation — Proxmox will use the entire disk. For ZFS, you can select multiple disks in a RAID configuration
- Set your country, timezone, and keyboard layout
- Enter a strong root password and a valid email address for notifications
- Configure the management network interface: set a hostname (e.g., pve.home.lab), an IP address, gateway, and DNS server
- Review the summary and click Install
The installation takes approximately 5-10 minutes. After completion, remove the USB drive and reboot.
Post-Installation Configuration
Access the Web Interface
Once the server boots, open a browser on another computer and navigate to:
https://YOUR_SERVER_IP:8006
Log in with the username root and the password you set during installation. The realm should be Linux PAM standard authentication.
Remove the Subscription Notice
If you are using Proxmox VE without a paid subscription, you will see a subscription reminder dialog on every login. You can safely dismiss it, or remove it by modifying the JavaScript file. Note that updates to the proxmox-widget-toolkit package restore the original file, so you may need to repeat this after upgrades:
# Backup the original file first
cp /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js.bak
# Remove the subscription notice
sed -Ei "s/res === null \|\| res === undefined \|\| \!res \|\| res.data.status.toLowerCase\(\) !== 'active'/false/g" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js
# Restart the web service
systemctl restart pveproxy.service
Switch to the No-Subscription Repository
The enterprise repository requires a paid subscription key. Switch to the free no-subscription repository:
# Disable the enterprise repository
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
# Add the no-subscription repository
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
# Update and upgrade the system
apt update && apt full-upgrade -y
Disable the Ceph Enterprise Repository
If you see an error about the Ceph enterprise repository:
# Disable the Ceph enterprise repository
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/ceph.list
Creating Virtual Machines (KVM)
Upload an ISO Image
Before creating a VM, upload an installation ISO to Proxmox. In the web interface:
- Navigate to your node in the left sidebar
- Click local (pve) under the node
- Select ISO Images from the content menu
- Click Upload and select your ISO file, or use Download from URL to fetch it directly
Alternatively, use the command line:
# Download Ubuntu Server ISO directly to the ISO storage
cd /var/lib/vz/template/iso/
wget https://releases.ubuntu.com/24.04/ubuntu-24.04.1-live-server-amd64.iso
Create a New VM
From the web interface, click Create VM in the top-right corner. Configure the following tabs:
General:
- Node: your Proxmox node
- VM ID: auto-assigned or choose your own numbering scheme (e.g., 100+)
- Name: a descriptive name (e.g., ubuntu-server-01)
OS:
- Select the uploaded ISO image
- Type: Linux, Version: 6.x - 2.6 Kernel
System:
- Machine: q35 (recommended for modern guests)
- BIOS: OVMF (UEFI) for modern OS or SeaBIOS for legacy
- Add EFI Disk if using UEFI
- SCSI Controller: VirtIO SCSI single
Disks:
- Bus: VirtIO Block or SCSI
- Disk size: as needed (e.g., 32 GB for a server)
- Enable Discard for thin provisioning on SSDs
- Enable SSD emulation if the backing storage is SSD
CPU:
- Cores: allocate based on workload (e.g., 2-4)
- Type: host for best performance (or x86-64-v2-AES for migration compatibility)
Memory:
- Set the desired RAM (e.g., 2048 MB for a lightweight server)
- Enable Ballooning for dynamic memory allocation
Network:
- Bridge: vmbr0 (the default bridge)
- Model: VirtIO (paravirtualized)
Click Finish to create the VM, then start it and open the console to complete the OS installation.
Optimal VM Settings via CLI
You can also create and configure VMs using the qm command:
# Create a VM with ID 101
qm create 101 --name ubuntu-server-01 --memory 2048 --cores 2 --net0 virtio,bridge=vmbr0
# Import a disk image
qm importdisk 101 ubuntu-cloud.img local-lvm
# Attach the imported disk
qm set 101 --scsi0 local-lvm:vm-101-disk-0
# Set boot order
qm set 101 --boot order=scsi0
# Start the VM
qm start 101
Creating LXC Containers
LXC containers are the lightweight alternative to full VMs. They share the host kernel, start in seconds, and use a fraction of the resources.
Download Container Templates
In the web interface:
- Navigate to your node > local (pve) > CT Templates
- Click Templates and select from the available list (Ubuntu, Debian, Alpine, etc.)
Or from the command line:
# List available templates
pveam available --section system
# Download a template
pveam download local ubuntu-24.04-standard_24.04-2_amd64.tar.zst
Create a New Container
Click Create CT in the web interface and configure:
# Create a container with ID 200
pct create 200 local:vztmpl/ubuntu-24.04-standard_24.04-2_amd64.tar.zst \
--hostname docker-host \
--memory 2048 \
--swap 512 \
--cores 2 \
--net0 name=eth0,bridge=vmbr0,ip=dhcp \
--storage local-lvm \
--rootfs local-lvm:8 \
--unprivileged 1 \
--features nesting=1 \
--password
# Start the container
pct start 200
# Enter the container console
pct enter 200
Key options explained:
- --unprivileged 1 — runs the container without root-level access to the host (recommended for security)
- --features nesting=1 — enables nesting, required for running Docker inside an LXC container
- --rootfs local-lvm:8 — creates an 8 GB root filesystem on local-lvm storage
Privileged vs Unprivileged Containers
Unprivileged containers map the container’s UID/GID range to an unprivileged range on the host, providing better security isolation. Use privileged containers only when necessary, such as when you need direct access to host hardware (e.g., GPU passthrough or NFS mounts).
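As an illustration of the ID-mapping issue, a host directory can be bind-mounted into a container with a mount point option; in an unprivileged container, the host-side files must be owned by the shifted UID range to be writable inside the guest. The path and container ID below are hypothetical:

```shell
# Bind-mount a host directory into container 200 as /mnt/media
pct set 200 --mp0 /tank/media,mp=/mnt/media

# In an unprivileged container, container root maps to host UID 100000
# by default, so make the host directory owned by that mapped UID
chown -R 100000:100000 /tank/media
```

If ownership is left as host root, the directory appears inside the container as owned by nobody:nogroup and is read-only in practice.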
Storage Configuration
Proxmox VE supports multiple storage backends. Choosing the right one depends on your hardware and use case.
Local Storage (Default)
The default installation creates two storage entries:
- local — directory-based storage at /var/lib/vz for ISOs, templates, and backups
- local-lvm — an LVM thin pool for VM disks and container volumes
Check your current storage configuration:
# List all configured storage
pvesm status
# Show detailed information
pvesm list local
pvesm list local-lvm
ZFS Storage
ZFS provides built-in RAID, snapshots, compression, and data integrity checks. If you selected ZFS during installation, it is already configured. To add a new ZFS pool:
# Create a ZFS mirror pool from two disks
zpool create -f tank mirror /dev/sdb /dev/sdc
# Enable compression
zfs set compression=lz4 tank
# Add the ZFS pool to Proxmox storage
pvesm add zfspool zfs-tank --pool tank --content images,rootdir
# Verify the storage
pvesm status
NFS Storage
NFS is ideal for shared storage across multiple Proxmox nodes, especially for ISOs, templates, and backups:
# Add NFS storage from a NAS
pvesm add nfs nas-backup \
--server 192.168.1.50 \
--export /volume1/proxmox-backup \
--content backup,iso,vztmpl \
--options vers=4.1
# Verify connectivity
pvesm status
Ceph Storage
For multi-node clusters, Ceph provides distributed, replicated storage:
# Install Ceph packages on all nodes (run on each node)
pveceph install
# Initialize the Ceph cluster (run on the first node)
pveceph init --network 10.10.10.0/24
# Create monitors on each node
pveceph mon create
# Add OSDs (one per disk, on each node)
pveceph osd create /dev/sdb
pveceph osd create /dev/sdc
# Create a storage pool
pveceph pool create vm-pool --pg_num 128
# Add the pool to Proxmox storage
pvesm add rbd ceph-pool --pool vm-pool --content images,rootdir
Networking
Default Bridge Configuration
During installation, Proxmox creates vmbr0, a Linux bridge bound to your physical network interface. Review the network configuration:
# View current network configuration
cat /etc/network/interfaces
The default configuration looks like:
auto lo
iface lo inet loopback
iface enp0s31f6 inet manual
auto vmbr0
iface vmbr0 inet static
address 192.168.1.100/24
gateway 192.168.1.1
bridge-ports enp0s31f6
bridge-stp off
bridge-fd 0
VLAN Configuration
VLANs allow you to segment network traffic. Enable VLAN awareness on the bridge:
# Edit the network configuration
nano /etc/network/interfaces
Add bridge-vlan-aware yes to the bridge definition:
auto vmbr0
iface vmbr0 inet static
address 192.168.1.100/24
gateway 192.168.1.1
bridge-ports enp0s31f6
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094
Apply the configuration:
# Apply network changes (be careful on remote systems)
ifreload -a
Now you can assign VLANs to VMs and containers in their network configuration by adding a VLAN tag (e.g., tag=10 for VLAN 10).
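For example, to place guests on VLAN 10 from the CLI (the VM and container IDs below are taken from the earlier examples):

```shell
# Tag VM 101's first NIC with VLAN 10 on the VLAN-aware bridge
qm set 101 --net0 virtio,bridge=vmbr0,tag=10

# The same for container 200
pct set 200 --net0 name=eth0,bridge=vmbr0,ip=dhcp,tag=10
```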
Network Bonding
For redundancy and increased throughput, bond multiple network interfaces:
auto bond0
iface bond0 inet manual
bond-slaves enp0s31f6 enp1s0
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer3+4
auto vmbr0
iface vmbr0 inet static
address 192.168.1.100/24
gateway 192.168.1.1
bridge-ports bond0
bridge-stp off
bridge-fd 0
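After applying the bond configuration with ifreload -a, you can confirm that the bond negotiated correctly and that the bridge is using it (interface names follow the example above):

```shell
# Show bonding mode, LACP negotiation state, and member link status
cat /proc/net/bonding/bond0

# Confirm the bridge has the bond as its port
ip link show master vmbr0
```

Note that 802.3ad (LACP) mode requires matching link-aggregation configuration on the switch; without it, fall back to a switch-independent mode such as balance-alb.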
Backup and Restore
Backup with vzdump
Proxmox includes vzdump for backing up VMs and containers. There are three backup modes:
- Snapshot — takes a live snapshot without stopping the guest (recommended)
- Suspend — briefly suspends the guest for consistency
- Stop — stops the guest before backup (most consistent but causes downtime)
Create a manual backup:
# Backup VM 101 in snapshot mode
vzdump 101 --mode snapshot --compress zstd --storage local
# Backup container 200 in snapshot mode
vzdump 200 --mode snapshot --compress zstd --storage local
# Backup all guests on the node
vzdump --all --mode snapshot --compress zstd --storage local
Schedule Automatic Backups
From the web interface, navigate to Datacenter > Backup > Add to create a scheduled backup job. Or configure via CLI:
# Edit the backup job configuration
nano /etc/pve/jobs.cfg
Add a backup job:
vzdump: daily-backup
enabled 1
schedule daily
all 1
mode snapshot
compress zstd
storage local
mailnotification always
mailto admin@example.com
Restore from Backup
Restore a VM or container from a backup:
# List available backups
ls /var/lib/vz/dump/
# Restore a VM backup
qmrestore /var/lib/vz/dump/vzdump-qemu-101-2026_01_21-02_00_00.vma.zst 101
# Restore a container backup
pct restore 200 /var/lib/vz/dump/vzdump-lxc-200-2026_01_21-02_00_00.tar.zst
Proxmox Backup Server (PBS)
For advanced backup management, deploy a dedicated Proxmox Backup Server. PBS provides:
- Incremental backups with data deduplication
- Client-side encryption
- Backup verification and integrity checks
- Detailed backup history and statistics
Add PBS as a storage backend in Proxmox VE:
# Add PBS storage
pvesm add pbs pbs-server \
--server 192.168.1.51 \
--datastore backups \
--username backup@pbs \
--password \
--fingerprint <PBS_FINGERPRINT>
Templates and Cloud-Init
Creating Templates
Templates allow you to quickly deploy identical VMs. Convert an existing VM into a template:
# Stop the VM first
qm stop 101
# Convert to template
qm template 101
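Once converted, the template can be cloned. By default qm clone creates a linked clone, which is nearly instant and space-efficient but remains tied to the template's base disk; a full clone is an independent copy. The IDs and names below are examples:

```shell
# Linked clone (default): fast, shares the template's base disk
qm clone 9000 105 --name test-clone

# Full clone: independent copy, safe to migrate or keep long-term
qm clone 9000 106 --name prod-clone --full
```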
Cloud-Init Integration
Cloud-Init enables automatic configuration of VMs at first boot (hostname, SSH keys, network, users). Create a cloud-init ready template:
# Download a cloud image
wget https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img
# Create a new VM
qm create 9000 --name ubuntu-cloud-template --memory 2048 --cores 2 --net0 virtio,bridge=vmbr0
# Import the cloud image as a disk
qm importdisk 9000 noble-server-cloudimg-amd64.img local-lvm
# Attach the disk
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
# Add a Cloud-Init drive
qm set 9000 --ide2 local-lvm:cloudinit
# Set boot order and serial console
qm set 9000 --boot order=scsi0 --serial0 socket --vga serial0
# Configure Cloud-Init defaults
qm set 9000 --ciuser admin --cipassword <password> --sshkeys ~/.ssh/id_rsa.pub
qm set 9000 --ipconfig0 ip=dhcp
# Resize the disk to your desired size
qm disk resize 9000 scsi0 32G
# Convert to a template
qm template 9000
To deploy a new VM from the template:
# Clone the template into a full VM
qm clone 9000 110 --name web-server-01 --full
# Customize Cloud-Init for the clone
qm set 110 --ipconfig0 ip=192.168.1.110/24,gw=192.168.1.1
qm set 110 --nameserver 192.168.1.1
qm set 110 --searchdomain home.lab
# Start the cloned VM
qm start 110
Proxmox CLI Commands Reference
The following table provides a quick reference for the most commonly used Proxmox CLI commands:
| Command | Description | Example |
|---|---|---|
qm list | List all virtual machines | qm list |
qm start <vmid> | Start a virtual machine | qm start 101 |
qm stop <vmid> | Stop a virtual machine (hard stop) | qm stop 101 |
qm shutdown <vmid> | Gracefully shut down a VM | qm shutdown 101 |
qm reboot <vmid> | Reboot a virtual machine | qm reboot 101 |
qm create <vmid> | Create a new virtual machine | qm create 102 --name myvm --memory 2048 |
qm destroy <vmid> | Delete a virtual machine | qm destroy 102 --purge |
qm config <vmid> | Show VM configuration | qm config 101 |
qm set <vmid> | Modify VM configuration | qm set 101 --memory 4096 |
qm clone <vmid> <newid> | Clone a VM or template | qm clone 9000 103 --name clone-01 --full |
qm snapshot <vmid> | Create a snapshot | qm snapshot 101 pre-upgrade |
qm template <vmid> | Convert VM to template | qm template 9000 |
qm importdisk <vmid> | Import a disk image | qm importdisk 101 image.img local-lvm |
pct list | List all containers | pct list |
pct start <ctid> | Start a container | pct start 200 |
pct stop <ctid> | Stop a container | pct stop 200 |
pct enter <ctid> | Open a container shell | pct enter 200 |
pct create <ctid> | Create a new container | pct create 201 local:vztmpl/ubuntu-24.04-standard.tar.zst |
pct destroy <ctid> | Delete a container | pct destroy 201 --purge |
pct config <ctid> | Show container configuration | pct config 200 |
pct set <ctid> | Modify container configuration | pct set 200 --memory 4096 |
pvesm status | List all storage backends | pvesm status |
pvesm add <type> <id> | Add a storage backend | pvesm add nfs nas --server 192.168.1.50 --export /backup |
pvesm list <storage> | List contents of a storage | pvesm list local |
vzdump <vmid/ctid> | Backup a VM or container | vzdump 101 --mode snapshot --compress zstd |
qmrestore <file> <vmid> | Restore a VM from backup | qmrestore backup.vma.zst 101 |
pct restore <ctid> <file> | Restore a container from backup | pct restore 200 backup.tar.zst |
pveam available | List available templates | pveam available --section system |
pveam download | Download a template | pveam download local ubuntu-24.04-standard.tar.zst |
pvecm status | Show cluster status | pvecm status |
pveversion | Show Proxmox version | pveversion --verbose |
Troubleshooting
VM Will Not Start: IOMMU or Virtualization Errors
If a VM fails to start with KVM-related errors, verify that hardware virtualization is enabled:
# Check if KVM is available (kvm-ok is provided by the cpu-checker package,
# which may need to be installed first: apt install cpu-checker)
kvm-ok
# If not available, check CPU virtualization support
egrep -c '(vmx|svm)' /proc/cpuinfo
# Verify IOMMU is enabled (for PCI passthrough)
dmesg | grep -e DMAR -e IOMMU
Ensure Intel VT-x or AMD-V is enabled in your BIOS/UEFI settings.
Web Interface Not Accessible
If you cannot reach the web interface at port 8006:
# Check if pveproxy is running
systemctl status pveproxy
# Restart the proxy service
systemctl restart pveproxy
# Check the firewall
iptables -L -n | grep 8006
# Verify the IP configuration
ip addr show vmbr0
Storage Full or Disk Space Issues
When storage fills up, VMs may freeze or fail to start:
# Check disk usage
df -h
# Check LVM thin pool usage
lvs -a
# Check ZFS pool usage (if using ZFS)
zpool list
zfs list
# Remove old backups to free space
ls -la /var/lib/vz/dump/
rm /var/lib/vz/dump/vzdump-qemu-101-OLD_DATE*.vma.zst
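To prune old backups in bulk, a find-based sweep can help; this sketch targets dump files older than 30 days (adjust the path and age to taste, and always dry-run with -print first). Note that configuring backup retention (prune-backups) on the storage is the built-in way to handle this; the sweep below is a manual fallback:

```shell
# List backup archives older than 30 days (dry run)
find /var/lib/vz/dump -name 'vzdump-*' -mtime +30 -print

# Delete them once you are satisfied with the list
find /var/lib/vz/dump -name 'vzdump-*' -mtime +30 -delete
```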
Container Fails to Start with Permission Errors
Unprivileged containers may have permission issues with certain operations:
# Start the container in foreground debug mode to see detailed error output
pct start 200 --debug
# If you need nesting (e.g., for Docker), ensure it is enabled
pct set 200 --features nesting=1
# For mount issues in unprivileged containers, check ID mapping
cat /etc/pve/lxc/200.conf
Cluster Communication Issues
If nodes lose contact with each other in a cluster:
# Check the cluster status
pvecm status
# Verify Corosync is running
systemctl status corosync
# If a single surviving node has lost quorum, lower the expected votes
# so it can operate alone (use with caution)
pvecm expected 1
# Review the Corosync log
journalctl -u corosync -f
Slow VM Performance
If a VM runs slower than expected:
# Check if VirtIO drivers are installed in the guest
# For Windows guests, install the VirtIO drivers from the ISO
# Available at: https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/
# Enable the host CPU type for best performance
qm set 101 --cpu host
# Check if ballooning is causing memory pressure
qm monitor 101
# In the monitor, type: info balloon
# Verify disk I/O is not bottlenecked
iostat -x 1
Summary
Proxmox VE is an excellent choice for a home lab virtualization platform. It combines the power of KVM virtual machines and LXC containers in a single, web-managed, open-source platform at zero cost. In this guide, you learned how to install Proxmox VE, configure the post-installation settings, create virtual machines and containers, set up multiple storage backends including ZFS and NFS, configure networking with VLANs and bonding, automate backups with vzdump and Proxmox Backup Server, and deploy VMs rapidly with templates and Cloud-Init.
With your Proxmox VE home lab running, you can now deploy services like Docker containers inside your VMs or LXC containers. Check out our guide on How to Install Docker on Ubuntu to get started with containerized applications. To secure your virtual machines, follow our guide on How to Configure UFW Firewall on Ubuntu Server to set up proper firewall rules on your guest operating systems.