If you manage Linux servers, Bash scripting is not optional — it is a core skill. Every repetitive task you perform manually is a candidate for automation: checking disk space, rotating logs, creating backups, auditing user accounts, monitoring services. A well-written Bash script executes in seconds what would take you minutes (or hours) of clicking and typing. This guide teaches you Bash scripting from the ground up, with a focus on practical sysadmin tasks that you can put to work immediately.
By the end of this article, you will know how to write scripts that use variables, conditionals, loops, and functions. You will understand error handling patterns that prevent scripts from causing damage when something goes wrong. And you will have four complete, production-ready scripts that solve real-world sysadmin problems.
Why Bash Scripting for Sysadmins?
Bash is the default shell on virtually every Linux distribution. It is preinstalled on Ubuntu, Debian, CentOS, RHEL, Fedora, and Arch, and it is available on macOS as well (though macOS now defaults to zsh for interactive use and ships an older Bash 3.2). This means your scripts will run on any Linux server without installing additional software.
Here is what Bash scripting gives you:
- Automation: Turn any sequence of terminal commands into a repeatable script
- Consistency: Scripts execute the same way every time, eliminating human error
- Speed: A script can process thousands of files or users in seconds
- Scheduling: Combined with cron, scripts run automatically at any interval
- Portability: Bash scripts run on any Linux system without dependencies
- Auditability: Scripts serve as documentation of exactly what was done and when
When to use Bash vs. Python: Bash is ideal for tasks that chain together existing command-line tools (file operations, text processing, service management). If your task requires complex data structures, API calls, or processing JSON/YAML, consider Python instead.
Prerequisites
Before you begin, make sure you have:
- A Linux system (Ubuntu 22.04 or 24.04 recommended, but any distribution works)
- Terminal access with a regular user account that has `sudo` privileges
- A text editor (`nano` for beginners, `vim` or VS Code for more advanced users)
- Basic familiarity with the Linux command line (navigating directories, listing files, reading file contents)
Verify your Bash version:
bash --version
You should see version 4.0 or newer. Ubuntu 22.04 ships with Bash 5.1, and Ubuntu 24.04 ships with Bash 5.2.
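If a script depends on Bash 4+ features (associative arrays, for example), it can verify the version at runtime instead of failing partway through. A minimal sketch using the built-in `BASH_VERSINFO` array:

```shell
#!/bin/bash
# Refuse to run on Bash older than 4.0.
# BASH_VERSINFO[0] is the major version, BASH_VERSINFO[1] the minor.
if ((BASH_VERSINFO[0] < 4)); then
    echo "ERROR: this script requires Bash 4.0+ (found $BASH_VERSION)" >&2
    exit 1
fi
echo "Bash ${BASH_VERSINFO[0]}.${BASH_VERSINFO[1]} detected -- OK"
```

Put a guard like this near the top of any script you expect to be copied between machines of different ages.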
Your First Script
Every Bash script starts with a shebang line that tells the system which interpreter to use. Create a file called hello.sh:
#!/bin/bash
# My first Bash script
echo "Hello, $(whoami)! Today is $(date '+%A, %B %d, %Y')."
echo "You are running Bash version: $BASH_VERSION"
echo "Your home directory is: $HOME"
Make it executable and run it:
chmod +x hello.sh
./hello.sh
Output:
Hello, jc! Today is Tuesday, January 27, 2026.
You are running Bash version: 5.2.21(1)-release
Your home directory is: /home/jc
Key concepts:
- `#!/bin/bash` — the shebang tells Linux to use Bash to interpret this file
- `chmod +x` — sets the executable permission so you can run the script directly
- `$(command)` — command substitution: runs the command and inserts its output
- `$VARIABLE` — variable expansion: inserts the value of the variable
Best practice: Always use `#!/bin/bash` (not `#!/bin/sh`) when your script uses Bash-specific features like arrays, `[[ ]]` tests, or `{1..10}` brace expansion. If you need maximum portability across different shells, write POSIX-compliant `sh` scripts instead.
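As a quick illustration of why the shebang matters, brace expansion is one Bash-only feature that a POSIX `sh` (such as dash, Debian's `/bin/sh`) simply passes through literally:

```shell
#!/bin/bash
# Under Bash, {1..5} expands before echo runs.
# Under a POSIX sh, the braces are treated as literal characters,
# so a #!/bin/sh shebang would silently change this script's output.
echo {1..5}    # Bash prints: 1 2 3 4 5
```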
Variables and Data Types
Bash variables store strings by default. There are no separate integer, float, or boolean types — everything is a string that can be interpreted as a number in arithmetic contexts.
Defining Variables
#!/bin/bash
# String variables (no spaces around =)
HOSTNAME="webserver01"
BACKUP_DIR="/var/backups"
LOG_FILE="/var/log/myscript.log"
# Integer arithmetic using $(( ))
MAX_RETRIES=5
CURRENT_RETRY=0
TOTAL=$((MAX_RETRIES - CURRENT_RETRY))
# Command substitution
CURRENT_DATE=$(date +%Y-%m-%d)
DISK_USAGE=$(df -h / | awk 'NR==2 {print $5}')
UPTIME=$(uptime -p)
echo "Server: $HOSTNAME"
echo "Backup directory: $BACKUP_DIR"
echo "Date: $CURRENT_DATE"
echo "Disk usage: $DISK_USAGE"
echo "Uptime: $UPTIME"
echo "Retries remaining: $TOTAL"
Special Variables
#!/bin/bash
echo "Script name: $0"
echo "First argument: $1"
echo "Second argument: $2"
echo "All arguments: $@"
echo "Number of arguments: $#"
echo "Exit status of last command: $?"
echo "Process ID of this script: $$"
echo "Process ID of last background command: $!"
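A short runnable sketch of the status-related variables (`$?`, `$$`, `$!`) in action:

```shell
#!/bin/bash
# $? holds the exit status of the most recent command
true
echo "status after true: $?"                             # prints 0
grep -q needle <<< "haystack" || echo "grep status: $?"  # prints 1 (no match)

# $$ is this script's own PID; $! is the PID of the last background job
sleep 1 &
BG_PID=$!
echo "script PID: $$, background PID: $BG_PID"
wait "$BG_PID"    # block until the background job finishes
```

Capturing `$!` immediately after launching a background job, as above, is the reliable way to wait on (or kill) that specific job later.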
Arrays
#!/bin/bash
# Indexed array
SERVERS=("web01" "web02" "db01" "db02" "cache01")
echo "First server: ${SERVERS[0]}"
echo "All servers: ${SERVERS[@]}"
echo "Number of servers: ${#SERVERS[@]}"
# Loop through array
for server in "${SERVERS[@]}"; do
echo "Checking $server..."
done
# Associative array (Bash 4+)
declare -A SERVICE_PORTS
SERVICE_PORTS[http]=80
SERVICE_PORTS[https]=443
SERVICE_PORTS[ssh]=22
SERVICE_PORTS[mysql]=3306
for service in "${!SERVICE_PORTS[@]}"; do
echo "$service -> port ${SERVICE_PORTS[$service]}"
done
Common mistake: Never put spaces around the `=` sign when assigning variables. `NAME="value"` is correct. `NAME = "value"` will fail because Bash interprets `NAME` as a command.
Conditionals: if, elif, else
Conditionals control the flow of your script based on test conditions.
Basic Syntax
#!/bin/bash
FILE="/etc/ssh/sshd_config"
if [[ -f "$FILE" ]]; then
echo "$FILE exists."
elif [[ -d "$FILE" ]]; then
echo "$FILE is a directory, not a file."
else
echo "$FILE does not exist."
fi
File Test Operators
#!/bin/bash
TARGET="/var/log/syslog"
# File existence and type tests
[[ -e "$TARGET" ]] && echo "Exists"
[[ -f "$TARGET" ]] && echo "Is a regular file"
[[ -d "$TARGET" ]] && echo "Is a directory"
[[ -L "$TARGET" ]] && echo "Is a symbolic link"
[[ -r "$TARGET" ]] && echo "Is readable"
[[ -w "$TARGET" ]] && echo "Is writable"
[[ -x "$TARGET" ]] && echo "Is executable"
[[ -s "$TARGET" ]] && echo "Has size greater than zero"
String Comparisons
#!/bin/bash
ENVIRONMENT="production"
if [[ "$ENVIRONMENT" == "production" ]]; then
echo "Running in production mode -- extra caution!"
VERBOSE=false
elif [[ "$ENVIRONMENT" == "staging" ]]; then
echo "Running in staging mode."
VERBOSE=true
else
echo "Unknown environment: $ENVIRONMENT"
exit 1
fi
# String checks
[[ -z "$VAR" ]] && echo "VAR is empty or unset"
[[ -n "$VAR" ]] && echo "VAR is not empty"
# Pattern matching with [[ ]]
if [[ "$HOSTNAME" == web* ]]; then
echo "This is a web server."
fi
Numeric Comparisons
#!/bin/bash
DISK_PERCENT=$(df / | awk 'NR==2 {print $5}' | tr -d '%')
if [[ "$DISK_PERCENT" -gt 90 ]]; then
echo "CRITICAL: Disk usage at ${DISK_PERCENT}%"
elif [[ "$DISK_PERCENT" -gt 75 ]]; then
echo "WARNING: Disk usage at ${DISK_PERCENT}%"
else
echo "OK: Disk usage at ${DISK_PERCENT}%"
fi
Tip: Use `[[ ]]` (double brackets) instead of `[ ]` (single brackets). Double brackets are a Bash extension that handles spaces in variables better, supports pattern matching with `==`, and allows `&&`/`||` inside the test expression.
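The difference shows up as soon as a variable contains a space. A sketch using a hypothetical file under /tmp:

```shell
#!/bin/bash
FILE="/tmp/quarterly report.txt"   # note the space in the name
touch "$FILE"

# [[ ]] does not word-split unquoted expansions, so this is safe:
if [[ -f $FILE ]]; then
    echo "double brackets: file found"
fi

# [ ] DOES word-split: [ -f $FILE ] becomes [ -f /tmp/quarterly report.txt ]
# (two separate words after -f) and errors out. With single brackets,
# you must always quote:
if [ -f "$FILE" ]; then
    echo "single brackets: file found (because we quoted)"
fi

rm -f "$FILE"
```

Quoting `"$FILE"` works in both forms, so the safest habit is to quote everywhere and still prefer `[[ ]]`.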
Loops: for, while, until
Loops let you repeat operations across files, servers, users, or any list of items.
For Loop
#!/bin/bash
# Loop over a list
for PACKAGE in nginx curl wget htop; do
echo "Installing $PACKAGE..."
sudo apt install -y "$PACKAGE" > /dev/null 2>&1
echo "$PACKAGE installed."
done
# Loop over a range
for i in {1..10}; do
echo "Iteration $i"
done
# C-style for loop
for ((i = 0; i < 5; i++)); do
echo "Counter: $i"
done
# Loop over files
for FILE in /var/log/*.log; do
SIZE=$(stat -c%s "$FILE" 2>/dev/null || echo 0)
echo "$FILE: $SIZE bytes"
done
While Loop
#!/bin/bash
# Read a file line by line
while IFS= read -r line; do
echo "Processing: $line"
done < /etc/hosts
# Counter-based loop
COUNTER=0
MAX=5
while [[ $COUNTER -lt $MAX ]]; do
echo "Attempt $((COUNTER + 1)) of $MAX"
COUNTER=$((COUNTER + 1))
done
# Wait for a service to become available
RETRIES=0
MAX_RETRIES=30
while ! curl -s http://localhost:8080/health > /dev/null 2>&1; do
RETRIES=$((RETRIES + 1))
if [[ $RETRIES -ge $MAX_RETRIES ]]; then
echo "Service did not start within $MAX_RETRIES attempts."
exit 1
fi
echo "Waiting for service... (attempt $RETRIES/$MAX_RETRIES)"
sleep 2
done
echo "Service is ready!"
Until Loop
#!/bin/bash
# Until loop -- runs until the condition becomes true
COUNT=0
until [[ $COUNT -ge 5 ]]; do
echo "Count is $COUNT"
COUNT=$((COUNT + 1))
done
Functions
Functions let you organize your scripts into reusable blocks. This is critical for larger scripts that perform multiple related tasks.
#!/bin/bash
# Function definition
log_message() {
local LEVEL="$1"
local MESSAGE="$2"
local TIMESTAMP
TIMESTAMP=$(date '+%Y-%m-%d %H:%M:%S')
echo "[$TIMESTAMP] [$LEVEL] $MESSAGE"
}
# Function with return value
check_service() {
local SERVICE_NAME="$1"
if systemctl is-active --quiet "$SERVICE_NAME"; then
return 0 # success
else
return 1 # failure
fi
}
# Function that outputs a value (capture with command substitution)
get_memory_usage() {
free -m | awk 'NR==2 {printf "%.1f", $3/$2 * 100}'
}
# Using the functions
log_message "INFO" "Starting system check..."
SERVICES=("nginx" "ssh" "cron")
for svc in "${SERVICES[@]}"; do
if check_service "$svc"; then
log_message "INFO" "$svc is running."
else
log_message "WARN" "$svc is NOT running!"
fi
done
MEM_USAGE=$(get_memory_usage)
log_message "INFO" "Memory usage: ${MEM_USAGE}%"
Key points about Bash functions:
- Use `local` to declare function-scoped variables (prevents polluting the global scope)
- Functions use `return` for exit codes (0-255), not for returning data
- To return data, use `echo` inside the function and capture with `$(function_name)`
- Arguments are accessed with `$1`, `$2`, `$@` inside the function
Input/Output and Redirection
Understanding I/O redirection is essential for writing scripts that log their output, process files, and handle errors correctly.
#!/bin/bash
# Redirect stdout to a file (overwrite)
echo "Log entry" > /tmp/output.log
# Redirect stdout to a file (append)
echo "Another entry" >> /tmp/output.log
# Redirect stderr to a file
command_that_fails 2> /tmp/error.log
# Redirect both stdout and stderr to the same file
some_command > /tmp/all_output.log 2>&1
# Shorter Bash-specific syntax for redirecting both (use &>> in Bash 4+ to append)
some_command &> /tmp/all_output.log
# Discard output entirely
noisy_command > /dev/null 2>&1
# Here document (heredoc) for multi-line input
cat << 'EOF' > /tmp/config.txt
server {
listen 80;
server_name example.com;
root /var/www/html;
}
EOF
# Pipe output to another command
ps aux | grep nginx | grep -v grep
# Tee: write to file AND stdout simultaneously
echo "System check passed" | tee -a /var/log/checks.log
# Process substitution
diff <(ls /dir1) <(ls /dir2)
Reading User Input
#!/bin/bash
read -p "Enter the server name: " SERVER_NAME
read -sp "Enter the password: " PASSWORD
echo "" # newline after hidden input
read -p "Proceed with deployment to $SERVER_NAME? (y/n): " CONFIRM
if [[ "$CONFIRM" != "y" ]]; then
echo "Deployment cancelled."
exit 0
fi
Error Handling: set -euo pipefail and trap
Proper error handling separates production-quality scripts from fragile ones. Without error handling, a script can silently fail halfway through, leaving your system in an inconsistent state.
The Strict Mode Header
#!/bin/bash
set -euo pipefail
IFS=$'\n\t'
What each option does:
- `set -e` — Exit immediately if any command returns a non-zero exit status
- `set -u` — Treat references to unset variables as errors
- `set -o pipefail` — If any command in a pipeline fails, the entire pipeline fails (not just the last command)
- `IFS=$'\n\t'` — Set the Internal Field Separator to newline and tab only (prevents word splitting on spaces in filenames)
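The effect of these options is easy to see with deliberately failing commands. A sketch (with `-e` left off so the script can report its own findings):

```shell
#!/bin/bash
set -uo pipefail

# pipefail: grep finds nothing and exits non-zero, so the whole pipeline
# fails even though sort (the last command) succeeds
if ! grep "beta" <<< "alpha" | sort > /dev/null; then
    echo "pipefail caught the failing grep"
fi

# -u: expanding an unset variable is a fatal error, not an empty string
# (run in a child bash so the failure does not kill this script)
if ! bash -uc 'echo "$NO_SUCH_VAR"' 2>/dev/null; then
    echo "-u caught the unset variable"
fi
```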
Using trap for Cleanup
#!/bin/bash
set -euo pipefail
TEMP_DIR=""
cleanup() {
local EXIT_CODE=$?
if [[ -n "$TEMP_DIR" && -d "$TEMP_DIR" ]]; then
rm -rf "$TEMP_DIR"
echo "Cleaned up temp directory: $TEMP_DIR"
fi
if [[ $EXIT_CODE -ne 0 ]]; then
echo "Script failed with exit code: $EXIT_CODE"
fi
exit $EXIT_CODE
}
trap cleanup EXIT  # EXIT alone is enough: it also fires on errors under set -e, and avoids running cleanup twice
TEMP_DIR=$(mktemp -d)
echo "Working in $TEMP_DIR"
# Your script logic here...
# If anything fails, cleanup() runs automatically
cp /some/important/file "$TEMP_DIR/"
process_data "$TEMP_DIR"
Handling Expected Failures
#!/bin/bash
set -euo pipefail
# Method 1: Use || true to allow a command to fail
grep "pattern" /var/log/syslog || true
# Method 2: Use if to check the result
if grep -q "error" /var/log/syslog; then
echo "Errors found in syslog"
else
echo "No errors found"
fi
# Method 3: Capture and check exit code
set +e # temporarily disable exit-on-error
risky_command
EXIT_CODE=$?
set -e # re-enable
if [[ $EXIT_CODE -ne 0 ]]; then
echo "Command failed with code $EXIT_CODE"
fi
Working with Files and Directories
Sysadmin scripts frequently need to create, check, move, and process files. Here are the most common patterns:
#!/bin/bash
set -euo pipefail
# Create a directory structure
BACKUP_BASE="/var/backups/myapp"
BACKUP_DIR="${BACKUP_BASE}/$(date +%Y-%m-%d)"
mkdir -p "$BACKUP_DIR"
# Check if a file exists before operating on it
CONFIG_FILE="/etc/myapp/config.yml"
if [[ ! -f "$CONFIG_FILE" ]]; then
echo "ERROR: Config file not found: $CONFIG_FILE"
exit 1
fi
# Find files older than 30 days
find /var/log/myapp -name "*.log" -mtime +30 -type f -print
# Find and delete old files (with confirmation)
find /tmp -name "*.tmp" -mtime +7 -type f -delete
# Get file size in bytes
FILE_SIZE=$(stat -c%s "$CONFIG_FILE")
echo "Config file size: $FILE_SIZE bytes"
# Count lines in a file
LINE_COUNT=$(wc -l < "$CONFIG_FILE")
echo "Config has $LINE_COUNT lines"
# Read a config file, skipping comments and empty lines
while IFS='=' read -r key value; do
# Skip comments and empty lines
[[ "$key" =~ ^#.*$ || -z "$key" ]] && continue
echo "Config: $key = $value"
done < "$CONFIG_FILE"
# Safe temporary file creation
TEMP_FILE=$(mktemp /tmp/myapp.XXXXXX)
echo "Using temp file: $TEMP_FILE"
# Always clean up temp files (use trap as shown earlier)
Practical Sysadmin Scripts
Here are four complete, production-ready scripts that solve common sysadmin problems.
Script 1: Disk Space Monitor with Email Alert
#!/bin/bash
set -euo pipefail
# Configuration
THRESHOLD=80
ALERT_EMAIL="admin@example.com"
LOG_FILE="/var/log/disk-monitor.log"
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}
log "Starting disk space check..."
ALERT_TRIGGERED=false
while IFS= read -r line; do
# Skip the header line
[[ "$line" == Filesystem* ]] && continue
USAGE=$(echo "$line" | awk '{print $5}' | tr -d '%')
MOUNT=$(echo "$line" | awk '{print $6}')
FILESYSTEM=$(echo "$line" | awk '{print $1}')
if [[ "$USAGE" -ge "$THRESHOLD" ]]; then
ALERT_TRIGGERED=true
log "WARNING: $MOUNT is at ${USAGE}% ($FILESYSTEM)"
else
log "OK: $MOUNT is at ${USAGE}%"
fi
done < <(df -h --output=source,size,used,avail,pcent,target -x tmpfs -x devtmpfs)
if [[ "$ALERT_TRIGGERED" == true ]]; then
SUBJECT="[ALERT] Disk space warning on $(hostname)"
BODY="One or more filesystems exceeded ${THRESHOLD}% usage on $(hostname) at $(date).\n\n$(df -h)"
echo -e "$BODY" | mail -s "$SUBJECT" "$ALERT_EMAIL" 2>/dev/null || \
log "WARNING: Could not send email alert"
fi
log "Disk space check complete."
Script 2: Log Cleanup and Rotation
#!/bin/bash
set -euo pipefail
# Configuration
LOG_DIRS=("/var/log/myapp" "/var/log/nginx" "/home/*/logs")
MAX_AGE_DAYS=30
COMPRESS_AGE_DAYS=7
DRY_RUN=false
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1"
}
# Parse arguments
while [[ $# -gt 0 ]]; do
case "$1" in
--dry-run) DRY_RUN=true; shift ;;
--max-age) MAX_AGE_DAYS="$2"; shift 2 ;;
*) echo "Unknown option: $1"; exit 1 ;;
esac
done
TOTAL_FREED=0
for DIR_PATTERN in "${LOG_DIRS[@]}"; do
for DIR in $DIR_PATTERN; do
[[ -d "$DIR" ]] || continue
log "Processing $DIR..."
# Compress logs older than COMPRESS_AGE_DAYS
while IFS= read -r -d '' file; do
if [[ "$DRY_RUN" == true ]]; then
log " [DRY RUN] Would compress: $file"
else
gzip "$file"
log " Compressed: $file"
fi
done < <(find "$DIR" -name "*.log" -mtime +"$COMPRESS_AGE_DAYS" -type f -print0)
# Delete compressed logs older than MAX_AGE_DAYS
while IFS= read -r -d '' file; do
SIZE=$(stat -c%s "$file")
TOTAL_FREED=$((TOTAL_FREED + SIZE))
if [[ "$DRY_RUN" == true ]]; then
log " [DRY RUN] Would delete: $file ($SIZE bytes)"
else
rm -f "$file"
log " Deleted: $file ($SIZE bytes)"
fi
done < <(find "$DIR" -name "*.gz" -mtime +"$MAX_AGE_DAYS" -type f -print0)
done
done
FREED_MB=$((TOTAL_FREED / 1024 / 1024))
log "Total space freed: ${FREED_MB} MB"
Script 3: Automated Backup Script
#!/bin/bash
set -euo pipefail
# Configuration
BACKUP_SOURCE="/var/www /etc/nginx /etc/myapp"
BACKUP_DEST="/var/backups"
RETENTION_DAYS=14
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
HOSTNAME=$(hostname)
BACKUP_FILE="${BACKUP_DEST}/${HOSTNAME}_backup_${TIMESTAMP}.tar.gz"
LOG_FILE="/var/log/backup.log"
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}
cleanup() {
local EXIT_CODE=$?
if [[ $EXIT_CODE -ne 0 ]]; then
log "ERROR: Backup failed with exit code $EXIT_CODE"
rm -f "$BACKUP_FILE"
fi
}
trap cleanup EXIT
# Ensure backup directory exists
mkdir -p "$BACKUP_DEST"
log "Starting backup of: $BACKUP_SOURCE"
log "Destination: $BACKUP_FILE"
# Create compressed tarball
# shellcheck disable=SC2086
tar -czf "$BACKUP_FILE" $BACKUP_SOURCE 2>/dev/null
BACKUP_SIZE=$(stat -c%s "$BACKUP_FILE")
BACKUP_SIZE_MB=$((BACKUP_SIZE / 1024 / 1024))
log "Backup created: ${BACKUP_SIZE_MB} MB"
# Verify the backup
if tar -tzf "$BACKUP_FILE" > /dev/null 2>&1; then
log "Backup integrity check: PASSED"
else
log "ERROR: Backup integrity check FAILED"
exit 1
fi
# Remove old backups
DELETED_COUNT=0
while IFS= read -r -d '' old_backup; do
rm -f "$old_backup"
DELETED_COUNT=$((DELETED_COUNT + 1))
log "Removed old backup: $old_backup"
done < <(find "$BACKUP_DEST" -name "${HOSTNAME}_backup_*.tar.gz" -mtime +"$RETENTION_DAYS" -type f -print0)
log "Backup complete. Removed $DELETED_COUNT old backup(s)."
Script 4: User Account Audit
#!/bin/bash
set -euo pipefail
# Configuration
OUTPUT_FILE="/tmp/user_audit_$(date +%Y%m%d).txt"
MIN_UID=1000
log() {
echo "$1" | tee -a "$OUTPUT_FILE"
}
# Clear previous output
> "$OUTPUT_FILE"
log "============================================"
log " USER ACCOUNT AUDIT REPORT"
log " Host: $(hostname)"
log " Date: $(date '+%Y-%m-%d %H:%M:%S')"
log "============================================"
log ""
# List all human users (UID >= 1000)
log "--- Human User Accounts (UID >= $MIN_UID) ---"
while IFS=: read -r username _ uid gid _ home shell; do
if [[ $uid -ge $MIN_UID ]]; then
LAST_LOGIN=$(lastlog -u "$username" 2>/dev/null | tail -1 | awk '{print $4, $5, $6, $7, $8, $9}')
USER_GROUPS=$(groups "$username" 2>/dev/null | cut -d: -f2)  # note: GROUPS is a special Bash variable; assignments to it are silently ignored
HAS_SUDO="No"
if groups "$username" 2>/dev/null | grep -qw "sudo\|wheel"; then
HAS_SUDO="Yes"
fi
log " User: $username (UID: $uid)"
log " Home: $home"
log " Shell: $shell"
log " Sudo: $HAS_SUDO"
log " Groups:$USER_GROUPS"
log " Last Login: $LAST_LOGIN"
log ""
fi
done < /etc/passwd
# Check for users with empty passwords
log "--- Users with Empty Passwords ---"
EMPTY_PASS=$(sudo awk -F: '($2 == "") {print $1}' /etc/shadow 2>/dev/null || true)
if [[ -n "$EMPTY_PASS" ]]; then
log " WARNING: $EMPTY_PASS"
else
log " None found (good)."
fi
log ""
# Check for users with UID 0 (root equivalents)
log "--- Users with UID 0 (Root Privileges) ---"
while IFS=: read -r username _ uid _; do
if [[ $uid -eq 0 ]]; then
log " $username (UID: 0)"
fi
done < /etc/passwd
log ""
# List SSH authorized keys
log "--- SSH Authorized Keys ---"
for HOME_DIR in /home/*; do
USERNAME=$(basename "$HOME_DIR")
AUTH_KEYS="$HOME_DIR/.ssh/authorized_keys"
if [[ -f "$AUTH_KEYS" ]]; then
KEY_COUNT=$(wc -l < "$AUTH_KEYS")
log " $USERNAME: $KEY_COUNT key(s) in $AUTH_KEYS"
fi
done
log ""
log "Report saved to: $OUTPUT_FILE"
Scheduling with Cron
Cron is the standard Linux job scheduler. It runs scripts at defined intervals without manual intervention.
Editing the Crontab
# Edit crontab for the current user
crontab -e
# Edit crontab for a specific user (requires root)
sudo crontab -u www-data -e
# List current crontab entries
crontab -l
Cron Syntax
# ┌───────────── minute (0 - 59)
# │ ┌───────────── hour (0 - 23)
# │ │ ┌───────────── day of month (1 - 31)
# │ │ │ ┌───────────── month (1 - 12)
# │ │ │ │ ┌───────────── day of week (0 - 7, 0 and 7 = Sunday)
# │ │ │ │ │
# * * * * * command
Common Cron Schedules
# Run disk monitor every hour
0 * * * * /usr/local/bin/disk-monitor.sh >> /var/log/disk-monitor.log 2>&1
# Run backup daily at 2 AM
0 2 * * * /usr/local/bin/backup.sh >> /var/log/backup.log 2>&1
# Run log cleanup every Sunday at 3 AM
0 3 * * 0 /usr/local/bin/log-cleanup.sh >> /var/log/cleanup.log 2>&1
# Run user audit on the 1st of every month
0 9 1 * * /usr/local/bin/user-audit.sh >> /var/log/user-audit.log 2>&1
# Run a health check every 5 minutes
*/5 * * * * /usr/local/bin/health-check.sh > /dev/null 2>&1
Important cron rules: Always use absolute paths in cron jobs (not relative paths). Cron runs with a minimal environment, so `PATH` may not include `/usr/local/bin`. Always redirect output to a log file or `/dev/null` to prevent cron from sending email on every execution.
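One more cron habit worth adopting for long-running jobs like backups: wrap them in `flock` so an overlapping run is skipped rather than stacked on top of the previous one. A sketch, assuming util-linux's `flock` is installed (it is by default on Ubuntu/Debian) and a hypothetical lock file path:

```
# Skip this run if the previous backup is still holding the lock (-n = non-blocking)
0 2 * * * /usr/bin/flock -n /tmp/backup.lock /usr/local/bin/backup.sh >> /var/log/backup.log 2>&1
```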
Using Environment Variables in Cron
# Set environment variables at the top of your crontab
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
MAILTO=admin@example.com
# Now your jobs have access to common paths
0 2 * * * /usr/local/bin/backup.sh
Bash Operators Reference Table
| Category | Operator | Description | Example |
|---|---|---|---|
| File Tests | `-f` | File exists and is regular file | `[[ -f /etc/hosts ]]` |
| | `-d` | Directory exists | `[[ -d /var/log ]]` |
| | `-e` | File or directory exists | `[[ -e /tmp/lock ]]` |
| | `-r` | File is readable | `[[ -r config.yml ]]` |
| | `-w` | File is writable | `[[ -w /var/log/app.log ]]` |
| | `-x` | File is executable | `[[ -x script.sh ]]` |
| | `-s` | File size is greater than zero | `[[ -s output.log ]]` |
| | `-L` | File is a symbolic link | `[[ -L /usr/bin/python ]]` |
| String | `==` | Strings are equal | `[[ "$a" == "$b" ]]` |
| | `!=` | Strings are not equal | `[[ "$a" != "$b" ]]` |
| | `-z` | String is empty | `[[ -z "$var" ]]` |
| | `-n` | String is not empty | `[[ -n "$var" ]]` |
| | `=~` | Regex match | `[[ "$s" =~ ^[0-9]+$ ]]` |
| Numeric | `-eq` | Equal | `[[ $a -eq $b ]]` |
| | `-ne` | Not equal | `[[ $a -ne $b ]]` |
| | `-gt` | Greater than | `[[ $a -gt $b ]]` |
| | `-ge` | Greater than or equal | `[[ $a -ge $b ]]` |
| | `-lt` | Less than | `[[ $a -lt $b ]]` |
| | `-le` | Less than or equal | `[[ $a -le $b ]]` |
| Logical | `&&` | AND (inside `[[ ]]`) | `[[ $a -gt 0 && $a -lt 100 ]]` |
| | `\|\|` | OR (inside `[[ ]]`) | `[[ $a -eq 0 \|\| $a -eq 1 ]]` |
| | `!` | NOT | `[[ ! -f /tmp/lock ]]` |
Troubleshooting and Debugging
When a script does not work as expected, use these techniques to find the problem.
Debug with set -x
#!/bin/bash
set -x # Print each command before executing it
NAME="world"
echo "Hello, $NAME"
# Output:
# + NAME=world
# + echo 'Hello, world'
# Hello, world
You can enable debugging for specific sections:
#!/bin/bash
echo "Normal output here"
set -x # Start debug output
RESULT=$(some_complex_command)
process "$RESULT"
set +x # Stop debug output
echo "Back to normal output"
Run a Script in Debug Mode
# Debug the entire script without modifying it
bash -x ./myscript.sh
# Verbose mode (print lines as they are read)
bash -v ./myscript.sh
# Combined verbose + trace
bash -xv ./myscript.sh
ShellCheck: Static Analysis
ShellCheck is an essential tool that catches common Bash mistakes, quoting errors, and portability issues.
# Install ShellCheck
sudo apt install -y shellcheck
# Analyze a script
shellcheck myscript.sh
# Example output:
# In myscript.sh line 5:
# echo $VARIABLE
# ^--------^ SC2086: Double quote to prevent globbing and word splitting.
Common Debugging Checklist
- Check the shebang: Is it `#!/bin/bash` (not `#!/bin/sh`)?
- Check permissions: Did you run `chmod +x script.sh`?
- Check line endings: Windows line endings (`\r\n`) cause `/bin/bash^M: bad interpreter`. Fix with `dos2unix script.sh`.
- Quote your variables: Always use `"$VARIABLE"`, not `$VARIABLE`, to handle spaces and special characters.
- Check exit codes: After each critical command, verify `$?` or use `set -e`.
- Test with hardcoded values: Replace variables with known values to isolate the problem.
- Check cron environment: If a script works manually but not in cron, the issue is usually `PATH` or missing environment variables.
# Quick script to dump the cron environment
* * * * * env > /tmp/cron-env.txt 2>&1
# (remove after checking)
Summary
Bash scripting is one of the most valuable skills a Linux system administrator can develop. In this guide, you learned how to:
- Write scripts with proper structure (shebang, permissions, strict mode)
- Use variables, arrays, and command substitution
- Control flow with conditionals and loops
- Organize code with functions
- Handle errors safely with `set -euo pipefail` and `trap`
- Build practical scripts for disk monitoring, log cleanup, backups, and user auditing
- Schedule scripts with cron for fully automated execution
- Debug scripts with `set -x` and ShellCheck
The four practical scripts in this article are starting points. Customize them for your specific environment, add email notifications, integrate them with monitoring systems, and build on them as your needs grow.
For related server administration topics, check out our Linux Server Security Checklist: 20 Essential Steps to secure the servers your scripts run on, and SSH Hardening: 12 Steps to Secure Your Linux Server to lock down the SSH access you use to manage them.