iperf3 is the standard tool for measuring network throughput between two endpoints on Linux. When you need to answer “how fast is this link actually performing?” — whether you’re troubleshooting slow transfers, validating a new network segment, or benchmarking after a switch upgrade — iperf3 gives you precise TCP and UDP performance numbers in seconds.

This guide covers installation, common test scenarios, UDP testing, parallel streams, and how to interpret results to pinpoint network bottlenecks.

Prerequisites

  • Two Linux machines connected over the network you want to test
  • iperf3 installed on both endpoints
  • Port 5201/TCP open between the machines (or custom port with -p)
  • Root or sudo access for installation

Installing iperf3

On Debian/Ubuntu:

sudo apt update && sudo apt install -y iperf3

On RHEL/Fedora/AlmaLinux:

sudo dnf install -y iperf3

On Alpine:

sudo apk add iperf3

Verify the installation:

iperf3 --version

iperf3 vs Other Network Testing Tools

Before running tests, understand what iperf3 measures and when to use alternatives:

Tool     | Measures                | Best For
---------|-------------------------|-----------------------------------
iperf3   | Throughput (bandwidth)  | Max speed between two points
ping     | Latency (RTT)           | Basic connectivity, response time
mtr      | Latency per hop         | Path analysis, finding slow hops
tcpdump  | Packet capture          | Deep protocol analysis
netperf  | Throughput + latency    | Request-response benchmarks
ethtool  | Link speed/settings     | Physical interface verification

iperf3 answers “how much data can I push through this link?” — it’s not a latency tool. Use it alongside ping/mtr for a complete picture.
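
For example, a quick latency baseline to pair with a throughput run might look like this (10.0.0.10 is the example server used throughout this guide):

# Round-trip latency and per-hop path quality before the bandwidth test
ping -c 10 10.0.0.10
mtr -rwc 50 10.0.0.10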

Running Your First TCP Bandwidth Test

Every iperf3 test requires two machines: a server (listener) and a client (sender).

Start the Server

On Machine A (e.g., 10.0.0.10):

iperf3 -s
-----------------------------------------------------------
Server listening on 5201 (test #1)
-----------------------------------------------------------

The server listens on port 5201 by default and accepts one client at a time.
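
Two server-side flags are worth knowing here: -D backgrounds the server as a daemon, and -1 makes it exit after serving a single client.

# Run the server in the background as a daemon
iperf3 -s -D

# Accept exactly one client, then exit
iperf3 -s -1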

Run the Client

On Machine B:

iperf3 -c 10.0.0.10
Connecting to host 10.0.0.10, port 5201
[  5] local 10.0.0.20 port 43218 connected to 10.0.0.10 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   112 MBytes   941 Mbits/sec    0    378 KBytes
[  5]   1.00-2.00   sec   112 MBytes   940 Mbits/sec    0    378 KBytes
[  5]   2.00-3.00   sec   112 MBytes   940 Mbits/sec    0    378 KBytes
...
[  5]   9.00-10.00  sec   112 MBytes   941 Mbits/sec    0    378 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.09 GBytes   940 Mbits/sec    0             sender
[  5]   0.00-10.04  sec  1.09 GBytes   936 Mbits/sec                  receiver

Reading the Output

Column   | Meaning
---------|------------------------------------------
Transfer | Total data transferred per interval
Bitrate  | Throughput in Mbits/sec or Gbits/sec
Retr     | TCP retransmits (packet loss indicator)
Cwnd     | TCP congestion window size

In this example, a 1 Gbps link shows 940 Mbits/sec — that’s 94% efficiency, which is normal. TCP headers, Ethernet framing, and protocol overhead consume the remaining 6%.

Gotcha: If you see significantly lower numbers (e.g., 100 Mbits on a 1 Gbps link), check the physical link speed with ethtool eth0 | grep Speed — auto-negotiation failures are a common culprit.

Testing UDP Performance

UDP testing reveals jitter and packet loss — critical metrics for VoIP, video streaming, and real-time applications.

iperf3 -c 10.0.0.10 -u -b 500M

The -b flag sets the target bitrate. Unlike TCP, UDP has no congestion control: iperf3 sends at exactly the rate you specify regardless of what the link can handle, and without -b it defaults to a conservative 1 Mbit/sec.

[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-1.00   sec  59.6 MBytes   500 Mbits/sec  0.018 ms  0/43200 (0%)
[  5]   1.00-2.00   sec  59.6 MBytes   500 Mbits/sec  0.021 ms  3/43198 (0.0069%)
...
[  5]   0.00-10.00  sec   596 MBytes   500 Mbits/sec  0.019 ms  12/432000 (0.0028%)  receiver

Metric      | Acceptable Range | Problem Threshold
------------|------------------|--------------------------------
Jitter      | < 1 ms           | > 5 ms (VoIP degrades)
Packet loss | < 0.1%           | > 1% (noticeable quality loss)

Real-world scenario: You have a VoIP system with call quality complaints. Run a UDP test at the codec bitrate (e.g., -b 100K for G.711) between the phone server and the network segment. If jitter exceeds 5 ms or loss exceeds 1%, the issue is network-side, not application-side.
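
A sketch of that check, assuming the phone server at 10.0.0.10 runs the iperf3 server and that a 60-second window is long enough to catch the problem:

# Simulate one G.711 call: ~100 Kbit/sec of UDP for 60 seconds, report every 10 s
iperf3 -c 10.0.0.10 -u -b 100K -t 60 -i 10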

Advanced Testing Scenarios

Parallel Streams

A single TCP stream may not saturate a high-bandwidth link due to congestion window limits. Use parallel streams:

iperf3 -c 10.0.0.10 -P 4

This creates 4 parallel TCP connections. The summary shows per-stream and aggregate bandwidth. On 10 Gbps links, you’ll often need 4-8 parallel streams to reach full throughput.
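
One way to find the stream count that saturates a link is to sweep a few values and compare the aggregate [SUM] lines (a minimal sketch; the host is the same example server):

# Compare aggregate throughput at 2, 4, and 8 parallel streams
for p in 2 4 8; do
  echo "=== $p parallel streams ==="
  iperf3 -c 10.0.0.10 -P "$p" -t 10 | grep SUM
done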

Reverse Mode (Server Sends to Client)

Test the opposite direction without switching roles:

iperf3 -c 10.0.0.10 -R

The -R flag tells the server to send data to the client. Useful for testing asymmetric links or when you can’t run the client on both ends.

Custom Test Duration and Interval

iperf3 -c 10.0.0.10 -t 60 -i 5

  • -t 60 — run for 60 seconds instead of the default 10
  • -i 5 — report every 5 seconds instead of every 1

Longer tests are essential for detecting intermittent issues like microbursts or periodic congestion.
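
For example, a five-minute run written to a dated log makes periodic dips easy to spot afterwards (assumes iperf3 3.1+ for --logfile; the file name is just an example):

# 5-minute test, 10-second reporting, results saved for later review
iperf3 -c 10.0.0.10 -t 300 -i 10 --logfile "iperf3-$(date +%F).log"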

Set TCP Window Size

iperf3 -c 10.0.0.10 -w 256K

The TCP window size limits how much data can be in-flight before waiting for acknowledgment. For high-latency links (WAN, VPN), increasing the window size can dramatically improve throughput:

Bandwidth-Delay Product (BDP): Required window = Bandwidth × RTT

For a 1 Gbps link with 20 ms RTT: 1,000,000,000 × 0.020 / 8 = 2.5 MB

iperf3 -c 10.0.0.10 -w 2500K
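
If you'd rather not do the arithmetic by hand, a one-liner computes the same number (the bandwidth and RTT values below are the 1 Gbps / 20 ms example; substitute your own):

# BDP in bytes = bandwidth (bits/s) x RTT (s) / 8
awk 'BEGIN { bw = 1e9; rtt = 0.020; bdp = bw * rtt / 8; printf "BDP = %.0f bytes (~%.1f MB)\n", bdp, bdp / 1e6 }'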

JSON Output for Scripting

iperf3 -c 10.0.0.10 -J > result.json

Parse with jq:

jq '.end.sum_sent.bits_per_second / 1000000' result.json

This outputs the bandwidth in Mbits/sec — perfect for monitoring scripts and trend analysis.
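
A slightly fuller sketch for scripted trend logging, assuming the .end.sum_sent / .end.sum_received fields produced by recent iperf3 versions and a writable log path of your choosing:

#!/bin/sh
# Append timestamp, sender/receiver Mbit/s, and retransmits as one CSV row
HOST=10.0.0.10
iperf3 -c "$HOST" -J | jq -r --arg ts "$(date -Is)" \
  '[$ts,
    (.end.sum_sent.bits_per_second / 1e6 | floor),
    (.end.sum_received.bits_per_second / 1e6 | floor),
    .end.sum_sent.retransmits] | @csv' >> /var/log/iperf3-trend.csv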

Custom Port

If port 5201 is blocked:

# Server
iperf3 -s -p 9999

# Client
iperf3 -c 10.0.0.10 -p 9999

Running iperf3 as a Persistent Service

For ongoing testing, run the server as a systemd service:

sudo tee /etc/systemd/system/iperf3.service > /dev/null << 'EOF'
[Unit]
Description=iperf3 Network Performance Server
After=network.target

[Service]
Type=simple
ExecStart=/usr/bin/iperf3 -s
Restart=on-failure
User=nobody

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl enable --now iperf3.service

Security note: iperf3 has no authentication. Only expose it on trusted networks. Use firewall rules to restrict access by source IP.
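
For example, with ufw you might allow only a known test client (10.0.0.20 is the example client from earlier; adapt to whatever firewall you run):

# Allow iperf3 only from the trusted client, for both TCP and UDP tests
sudo ufw allow from 10.0.0.20 to any port 5201 proto tcp
sudo ufw allow from 10.0.0.20 to any port 5201 proto udp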

Troubleshooting iperf3 Network Performance Tests

“iperf3: error - unable to connect to server”: The server isn’t listening or a firewall is blocking port 5201. Check with ss -tlnp | grep 5201 on the server and verify firewall rules allow the port.
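
A quick sequence for checking both sides (the firewall command assumes firewalld; substitute your distribution's tooling):

# On the server: confirm iperf3 is listening
ss -tlnp | grep 5201

# On the server: open the port if the firewall is blocking it (firewalld example)
sudo firewall-cmd --add-port=5201/tcp --permanent && sudo firewall-cmd --reload

# On the client: confirm the port is reachable
nc -zv 10.0.0.10 5201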

“iperf3: error - the server is busy”: iperf3 handles only one client at a time. Wait for the current test to finish, or run multiple server instances on different ports: iperf3 -s -p 5202.

Wildly inconsistent results between runs: Other traffic is competing for bandwidth. Run tests during maintenance windows, or use -t 60 for longer averages. Also check for CPU saturation — iperf3 runs each test on a single thread (multi-threaded parallel streams only arrived in version 3.16), so a slow CPU core can bottleneck before the network does.

Throughput drops after a few seconds: Likely TCP buffer bloat or switch buffer overflow. Watch the Retr (retransmit) column — rising retransmits mean packets are being dropped. Reduce parallel streams or check switch QoS settings.

Near-zero throughput: Check MTU mismatches. If one side has MTU 9000 (jumbo frames) and the other has 1500, packets get fragmented or dropped. Verify with ip link show eth0 | grep mtu.
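
You can confirm the path MTU directly with ping and the don't-fragment flag; 8972 here is the 9000-byte jumbo MTU minus 28 bytes of IP and ICMP headers:

# Check the configured MTU on each side
ip link show eth0 | grep mtu

# Probe with fragmentation prohibited; errors or 100% loss point to an MTU mismatch
ping -c 3 -M do -s 8972 10.0.0.10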

Summary

  • iperf3 measures TCP and UDP throughput between two endpoints — install on both sides, run -s on one and -c on the other
  • TCP tests show bandwidth, retransmits, and congestion window; UDP tests add jitter and packet loss metrics
  • Use -P 4 parallel streams to saturate high-bandwidth links and -R for reverse direction testing
  • For high-latency links (WAN/VPN), calculate the bandwidth-delay product and set the window size with -w
  • UDP testing with -u -b is essential for VoIP and streaming quality assessment — watch jitter and loss
  • JSON output (-J) enables scripted monitoring and trend analysis
  • Run iperf3 as a systemd service for persistent availability on test endpoints
  • Always check physical link speed with ethtool and firewall rules before blaming the network
