Setting up an NFS server on Linux is one of the fastest ways to share files across machines on the same network without installing heavy software stacks. NFS (Network File System) lets you mount a remote directory as if it were a local disk, making it ideal for shared home directories, lab clusters, build caches, and media libraries. In this guide you will configure a full NFS server and client from scratch, understand the key /etc/exports options, choose between NFSv3 and NFSv4, tune read/write buffer sizes for performance, and persist mounts reliably with /etc/fstab.

Prerequisites

  • Two Linux machines on the same network — one acting as the server, one as the client. Virtual machines work fine for either role.
  • Root or sudo access on both machines.
  • Firewall access to port 2049/TCP (and additional ports for NFSv3 if used).
  • Debian/Ubuntu or RHEL/Fedora/Rocky family — commands for both are shown throughout.
  • Basic knowledge of IP addressing and Linux file permissions.
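
If a firewall runs on the server, open the NFS port before you begin. A sketch for the two common firewall frontends — the 192.168.1.0/24 source subnet is an example; substitute your own network:

```shell
# firewalld (RHEL / Fedora / Rocky): allow the built-in nfs service (2049/TCP)
sudo firewall-cmd --permanent --add-service=nfs
sudo firewall-cmd --reload

# ufw (Debian / Ubuntu): allow NFS only from the client subnet
sudo ufw allow from 192.168.1.0/24 to any port 2049 proto tcp
```

If you later decide to run NFSv3, additional ports are required; see the gotchas section below.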

Installing the NFS Server

On the server node, install the NFS kernel server package:

# Debian / Ubuntu
sudo apt update && sudo apt install -y nfs-kernel-server

# RHEL / Fedora / Rocky / AlmaLinux
sudo dnf install -y nfs-utils

Enable and start the service so it survives reboots:

sudo systemctl enable --now nfs-server
sudo systemctl status nfs-server

Verify that the NFS server is listening on port 2049:

ss -tlnp | grep 2049

You should see an entry for nfsd bound to 0.0.0.0:2049.

Configuring /etc/exports

The /etc/exports file is the heart of NFS server configuration. Each line defines one exported path and which clients can access it, along with mount options.

Basic syntax:

/path/to/export  client_spec(options)

A minimal working example that exports /srv/nfs to an entire subnet:

/srv/nfs  192.168.1.0/24(rw,sync,no_subtree_check)

Create the shared directory and set permissions before exporting:

sudo mkdir -p /srv/nfs
sudo chown nobody:nogroup /srv/nfs   # Debian/Ubuntu anonymous user; use nobody:nobody on RHEL-family systems
sudo chmod 755 /srv/nfs

Common /etc/exports options explained:

Option            Meaning
------            -------
rw                Allow read and write access
ro                Allow read-only access
sync              Write data to disk before acknowledging; safer but slower
async             Acknowledge writes before flushing to disk; faster but risky on crash
no_subtree_check  Disable subtree checking; avoids spurious errors when files move
subtree_check     Enable subtree checking; adds a mild security check when exporting a subdirectory of a filesystem
root_squash       Map root on the client to the nobody user (default, recommended)
no_root_squash    Allow root on the client to act as root; use only in fully trusted environments
all_squash        Map all client users to the anonymous user
anonuid=1000      Set the UID for anonymous mappings
fsid=0            Mark this export as the NFSv4 pseudo root
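
The fsid=0 option deserves a concrete example. NFSv4 presents clients with a single pseudo filesystem rooted at the export marked fsid=0, and other exports appear beneath it, typically via bind mounts. A sketch — the /export root and the bind-mount layout are illustrative choices, not requirements:

```shell
# Create a pseudo root and bind-mount the real data beneath it
sudo mkdir -p /export/projects
sudo mount --bind /srv/nfs/projects /export/projects

# /etc/exports: fsid=0 marks /export as the NFSv4 pseudo root
# /export           192.168.10.0/24(ro,sync,no_subtree_check,fsid=0)
# /export/projects  192.168.10.0/24(rw,sync,no_subtree_check)

# Clients mount paths relative to the pseudo root, not the server's real paths:
# sudo mount -t nfs4 server:/projects /mnt/projects
```

Make the bind mount persistent with an fstab entry on the server, or clients will lose the share after a server reboot.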

A production-ready exports file with multiple shares:

# Read-write share for the dev team subnet
/srv/nfs/projects  192.168.10.0/24(rw,sync,no_subtree_check,root_squash)

# Read-only ISO repository available to all internal clients
/srv/nfs/isos  192.168.0.0/16(ro,sync,no_subtree_check)

# Shared home directories — specific hosts only
/home  192.168.10.5(rw,sync,no_subtree_check) 192.168.10.6(rw,sync,no_subtree_check)

After editing /etc/exports, apply the changes without restarting the server:

sudo exportfs -ra    # reload all exports
sudo exportfs -v     # list active exports with their options

NFSv3 vs NFSv4 Comparison

Choosing the right NFS version matters for security, firewall configuration, and feature support. Modern deployments should default to NFSv4 unless legacy systems require NFSv3.

Feature          NFSv3                                            NFSv4
-------          -----                                            -----
Protocol state   Stateless                                        Stateful
Ports used       2049, 111 (rpcbind), dynamic mountd/statd/lockd  2049 only
Firewall rules   Complex (multiple dynamic ports)                 Simple (single port)
Security         AUTH_SYS (UID/GID only)                          AUTH_SYS plus Kerberos (RPCSEC_GSS)
File locking     NLM (separate lockd daemon)                      Built into the protocol (lease-based)
ACL support      POSIX draft ACLs via a side protocol             Native NFSv4 ACLs
UTF-8 filenames  Optional                                         Required
Delegation       No                                               Yes (client-side caching)
Pseudo root      No                                               Yes (unified namespace)
Recommended for  Legacy systems, simpler setups                   Modern Linux clusters, cross-firewall use

To force a specific NFS version on the client:

# Mount using NFSv4 explicitly
sudo mount -t nfs4 server:/srv/nfs /mnt/nfs

# Mount using NFSv3 (if required by legacy server)
sudo mount -t nfs -o vers=3 server:/srv/nfs /mnt/nfs
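
After mounting, you can confirm which version the client actually negotiated: the kernel records it in /proc/mounts. A sketch using a sample line — on a live client, read /proc/mounts directly; the vers value may be 4.0, 4.1, or 4.2 depending on what both ends support:

```shell
# Sample /proc/mounts entry for an NFSv4 mount (illustrative)
sample='server:/srv/nfs /mnt/nfs nfs4 rw,relatime,vers=4.2,rsize=1048576,wsize=1048576 0 0'

# Extract the negotiated protocol version
echo "$sample" | grep -o 'vers=[0-9.]*'
# prints vers=4.2

# On a real client: grep ' nfs' /proc/mounts | grep -o 'vers=[0-9.]*'
```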

Installing the NFS Client and Mounting Shares

On the client node, install the NFS client utilities:

# Debian / Ubuntu
sudo apt install -y nfs-common

# RHEL / Fedora / Rocky
sudo dnf install -y nfs-utils

Create a local mount point and mount the export:

sudo mkdir -p /mnt/nfs
sudo mount -t nfs4 192.168.1.100:/srv/nfs /mnt/nfs

Confirm the mount is active and check available space:

df -hT /mnt/nfs
mount | grep nfs

Test write access from the client:

touch /mnt/nfs/testfile && echo "Write access confirmed"

Persisting NFS Mounts with /etc/fstab

A manual mount disappears after reboot. Add the share to /etc/fstab to make it permanent:

# Format: <server>:<export>  <mountpoint>  <type>  <options>  <dump>  <pass>
192.168.1.100:/srv/nfs  /mnt/nfs  nfs4  defaults,_netdev  0  0

Key fstab options for NFS:

Option      Purpose
------      -------
_netdev     Delay mounting until the network is up; essential for NFS at boot
nofail      Continue booting even if the NFS server is unreachable
soft        Return an error if the server stops responding (instead of hanging)
hard        Retry indefinitely until the server responds (default; safer for data)
timeo=30    Time before retransmitting a request, in tenths of a second (30 = 3 seconds)
retrans=3   Retransmissions before soft mounts return an error (hard mounts log a warning and keep retrying)

A robust fstab entry for production:

192.168.1.100:/srv/nfs  /mnt/nfs  nfs4  defaults,_netdev,nofail,hard,timeo=30  0  0

After editing fstab, test it without rebooting:

sudo mount -a
df -hT /mnt/nfs
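
As an alternative to mounting at boot, systemd can mount the share on first access and unmount it when idle, which sidesteps boot-time hangs entirely. A sketch using systemd's automount fstab options — the 10-minute idle timeout is an arbitrary example:

```shell
# /etc/fstab entry using systemd automount instead of a boot-time mount:
# x-systemd.automount     mount on first access rather than at boot
# x-systemd.idle-timeout  unmount after 600 seconds of inactivity
192.168.1.100:/srv/nfs  /mnt/nfs  nfs4  _netdev,noauto,x-systemd.automount,x-systemd.idle-timeout=600  0  0
```

After editing, run sudo systemctl daemon-reload so systemd regenerates its mount and automount units.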

Performance Tuning: rsize and wsize

Modern Linux clients usually negotiate 1 MB read and write buffers automatically, but older kernels and some NAS appliances default to 32 KB or 64 KB. On a gigabit or faster LAN, explicitly requesting a 1 MB rsize (read buffer) and wsize (write buffer) guarantees the larger size and can significantly improve throughput against servers that would otherwise offer less.

Mount with tuned buffers:

sudo mount -t nfs4 -o rsize=1048576,wsize=1048576 192.168.1.100:/srv/nfs /mnt/nfs

Benchmark throughput before and after tuning:

# Write test (client to server)
dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=512 conv=fdatasync

# Read test (server to client)
dd if=/mnt/nfs/testfile of=/dev/null bs=1M

Additional performance options:

Option               Effect
------               ------
async (server-side)  Increases write speed at the cost of data safety on crash
noatime              Disables access time updates; reduces write traffic for read-heavy workloads
actimeo=60           Caches file attributes for 60 seconds; reduces metadata RPCs
nconnect=4           Uses multiple TCP connections to the server (Linux 5.3+, NFSv4.1+)

For NVMe-backed servers on a 10 GbE network, combine rsize=1048576,wsize=1048576,nconnect=4,noatime for maximum throughput.

Add tuning options to fstab:

192.168.1.100:/srv/nfs  /mnt/nfs  nfs4  defaults,_netdev,nofail,rsize=1048576,wsize=1048576,noatime  0  0
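
Note that buffer sizes are negotiated: the mount may silently reduce a requested rsize/wsize to what the server supports, so verify the values actually agreed. A sketch parsing a sample /proc/mounts line — on a live client, read /proc/mounts directly:

```shell
# Sample /proc/mounts entry after mounting with tuned buffers (illustrative)
line='192.168.1.100:/srv/nfs /mnt/nfs nfs4 rw,noatime,rsize=1048576,wsize=1048576 0 0'

# Extract the negotiated buffer sizes
echo "$line" | grep -oE '(rsize|wsize)=[0-9]+'
# prints:
# rsize=1048576
# wsize=1048576
```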

Real-World Scenario: Shared /home Across a Lab Cluster

You have a four-node Linux lab cluster — one management node (lab-mgmt) and three worker nodes (lab-worker-1, lab-worker-2, lab-worker-3). All nodes share the same user accounts, and you want users to see their home directories regardless of which node they log into. NFS makes this seamless.

On lab-mgmt (NFS server):

# Install server
sudo apt install -y nfs-kernel-server

# The home directories already exist under /home
# Export /home to all worker nodes
sudo bash -c 'cat >> /etc/exports <<EOF

/home  192.168.10.11(rw,sync,no_subtree_check) 192.168.10.12(rw,sync,no_subtree_check) 192.168.10.13(rw,sync,no_subtree_check)
EOF'

sudo exportfs -ra
sudo exportfs -v

On each worker node (lab-worker-1/2/3):

# Install client
sudo apt install -y nfs-common

# Mount the export over the existing /home
# (back up any local /home contents first; the mount will hide them)
sudo mount -t nfs4 192.168.10.10:/home /home

# Test: log in as a regular user and check that files are present
ls /home/jcarlos

Persist with fstab on each worker:

192.168.10.10:/home  /home  nfs4  defaults,_netdev,hard,timeo=30,rsize=1048576,wsize=1048576  0  0

Now when a user logs into any worker node, their home directory — including shell history, SSH keys, and configuration files — is identical across all nodes. Pair this with a shared /etc/passwd or LDAP for fully consistent user accounts.

Gotchas and Edge Cases

  • UID/GID mismatch: NFS relies on numeric UIDs and GIDs, not usernames. If a user has UID 1001 on the server but UID 1002 on the client, they will see each other’s files as the wrong owner. Synchronize UIDs/GIDs across all nodes with LDAP or a shared /etc/passwd, or use NFSv4 with idmapd.
  • root_squash and privileged operations: With root_squash enabled (the default), the root user on a client is mapped to nobody. This breaks operations like chown and some backups. Use no_root_squash only on fully trusted internal networks.
  • Firewall rules for NFSv3: NFSv3 uses dynamic RPC ports for mountd, statd, and lockd. Pin these to static ports (in /etc/nfs.conf on modern distributions, or /etc/sysconfig/nfs on older RHEL releases) and open them in your firewall, or switch to NFSv4, which needs only port 2049.
  • async vs sync: The async option improves write performance but data not yet flushed to disk can be lost if the server crashes. Always use sync for shared databases, build artifacts, or anything where data integrity matters.
  • Stale NFS handles: If the server reboots or exports change while clients are mounted, processes may get “Stale file handle” errors. Unmount and remount the share to clear the stale state.
  • Automounting with autofs: For large environments, consider autofs instead of static fstab entries. Autofs mounts shares on demand and unmounts them when idle, reducing boot time and avoiding failures when NFS servers are temporarily unavailable.
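
The autofs suggestion above can be made concrete. A minimal direct-map setup — the map file name /etc/auto.nfs is an arbitrary choice:

```shell
# Install autofs
sudo apt install -y autofs   # Debian/Ubuntu; use dnf on RHEL-family systems

# /etc/auto.master: register a direct map
# /-  /etc/auto.nfs

# /etc/auto.nfs: one line per on-demand mount point
# /mnt/nfs  -fstype=nfs4,rw,hard  192.168.1.100:/srv/nfs

sudo systemctl enable --now autofs
# The share now mounts automatically on first access to /mnt/nfs
# and unmounts again after the default idle timeout.
```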

Troubleshooting Common Issues

Mount hangs indefinitely: The client cannot reach the server. Check connectivity with ping, verify that the NFS service is running on the server (systemctl status nfs-server), and confirm port 2049 is open (nc -zv server 2049).

Permission denied on mount: The client’s IP address is not in /etc/exports. Verify with exportfs -v and add the correct client CIDR or hostname, then run exportfs -ra.

Files owned by nobody on the client: UID/GID mismatch. Align UIDs across systems or configure idmapd for NFSv4 identity mapping. Check the server logs for idmapd errors (journalctl -u nfs-idmapd on RHEL-family systems, /var/log/syslog on Debian/Ubuntu).
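
For NFSv4 identity mapping, the server and all clients must agree on the idmapd domain; a mismatch is a common reason everything shows up as nobody. A sketch of the relevant setting — example.lan is a placeholder for your DNS domain:

```shell
# /etc/idmapd.conf — the Domain value must match on server and clients:
# [General]
# Domain = example.lan

# Apply the change (service name varies by distribution)
sudo systemctl restart nfs-idmapd    # RHEL-family
# On Debian/Ubuntu, restarting nfs-server (server) or remounting (client)
# is usually sufficient.
```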

Slow NFS performance: Check rsize and wsize values with mount | grep nfs. Increase to 1048576. Also verify that the server-side export uses async if write speed is critical and durability risk is acceptable.

“exportfs: /etc/exports: No such file or directory”: Create the file: sudo touch /etc/exports, then add your export lines and run exportfs -ra.

Summary

  • NFS shares directories from a server to multiple clients using a lightweight, kernel-native protocol.
  • Install nfs-kernel-server on the server and nfs-common on clients (Debian/Ubuntu), or nfs-utils on both for RHEL systems.
  • Configure exports in /etc/exports with the format /path client(options) and reload with exportfs -ra.
  • Prefer NFSv4 for modern deployments: single port (2049), stateful connections, built-in locking, and better firewall compatibility.
  • Use _netdev in /etc/fstab to ensure NFS mounts wait for the network at boot, and test with mount -a.
  • Tune rsize=1048576,wsize=1048576 on the client for high-throughput LAN transfers; add nconnect=4 on Linux 5.3+ for multi-connection NFSv4.1.
  • Synchronize UIDs/GIDs across all nodes to avoid permission mismatches — the most common NFS headache.