Slow queries are the silent killers of database performance. A single unoptimized query running thousands of times per day can bring an otherwise healthy MySQL server to its knees. The MySQL slow query log is your first line of defense — it captures every query exceeding a time threshold, giving you the raw data needed to identify and eliminate performance bottlenecks. Combined with analysis tools like mysqldumpslow and pt-query-digest, you can systematically find, prioritize, and fix the worst offenders in your workload.

Prerequisites

  • MySQL 5.7 or 8.0 (commands work on both; minor syntax differences noted)
  • Root or administrative access to the MySQL server
  • Shell access to the server running MySQL
  • Percona Toolkit installed (for pt-query-digest)
  • Basic understanding of SQL and MySQL configuration files

Enabling the Slow Query Log

The slow query log is disabled by default in most MySQL installations. You can enable it permanently via configuration or temporarily at runtime.

Permanent Configuration (my.cnf)

Add these lines to your MySQL configuration file, typically at /etc/mysql/my.cnf or /etc/my.cnf:

[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 1
log_output = FILE

Restart MySQL to apply:

sudo systemctl restart mysql

Runtime Configuration (No Restart)

Enable the slow query log without restarting the server:

SET GLOBAL slow_query_log = 1;
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
SET GLOBAL long_query_time = 1;
SET GLOBAL log_output = 'FILE';

Verify the settings:

SHOW VARIABLES LIKE 'slow_query%';
SHOW VARIABLES LIKE 'long_query_time';

Expected output:

+---------------------+-------------------------------+
| Variable_name       | Value                         |
+---------------------+-------------------------------+
| slow_query_log      | ON                            |
| slow_query_log_file | /var/log/mysql/slow.log       |
+---------------------+-------------------------------+

log_output Options

MySQL supports three output destinations:

  • FILE — writes to the file specified by slow_query_log_file (default, recommended)
  • TABLE — writes to mysql.slow_log table (queryable but adds overhead)
  • NONE — disables logging despite slow_query_log=1

Use FILE for production. The TABLE option adds measurable write overhead and makes the mysql schema grow unpredictably.

Configuring Log Parameters

long_query_time

This threshold determines which queries get logged. The default of 10 seconds is far too high for most workloads.

-- Log queries taking more than 0.5 seconds
SET GLOBAL long_query_time = 0.5;

-- Log ALL queries (useful for short audits, not production)
SET GLOBAL long_query_time = 0;

Recommended starting values:

Environment    long_query_time    Rationale
Development    0                  Capture everything for analysis
Staging        0.1                Catch queries >100 ms
Production     1.0                Focus on clearly slow queries
Aggressive     0.5                Balance between noise and coverage

log_queries_not_using_indexes

This flag logs queries that perform full table scans, regardless of execution time:

SET GLOBAL log_queries_not_using_indexes = 1;

This is extremely valuable for catching queries that are fast now but will degrade as tables grow. However, it can generate a lot of log entries on databases with many small tables.

min_examined_row_limit

Filter out trivial queries by requiring a minimum number of examined rows:

SET GLOBAL min_examined_row_limit = 1000;

This prevents the log from filling up with fast single-row lookups that happen to miss an index.
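
Taken together, the parameters above form a compact my.cnf sketch for a production server (the thresholds are the starting points discussed above, not prescriptions):

```ini
[mysqld]
# Enable the slow log and write it to a dedicated file
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
log_output = FILE

# Log anything slower than 1 second
long_query_time = 1

# Also log full table scans, but ignore trivial lookups
log_queries_not_using_indexes = 1
min_examined_row_limit = 1000
```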

Analyzing with mysqldumpslow

mysqldumpslow ships with MySQL and requires no additional installation. It parses the slow query log, abstracts literal values, and groups similar queries.

Basic Usage

# Top 10 queries by total time
mysqldumpslow -s t -t 10 /var/log/mysql/slow.log

# Top 10 queries by count (frequency)
mysqldumpslow -s c -t 10 /var/log/mysql/slow.log

# Top 10 queries by average time
mysqldumpslow -s at -t 10 /var/log/mysql/slow.log

# Filter queries matching a pattern
mysqldumpslow -s t -t 10 -g "orders" /var/log/mysql/slow.log

Sort Options

Flag     Sorts By
-s t     Total time
-s c     Count (frequency)
-s at    Average time
-s l     Lock time
-s al    Average lock time
-s r     Rows sent
-s ar    Average rows sent

Example Output

Count: 1523  Time=2.45s (3731s)  Lock=0.00s (1s)  Rows=245.3 (373534), root[root]@localhost
  SELECT * FROM orders WHERE customer_id = N AND status = 'S' ORDER BY created_at DESC LIMIT N;

This tells you the query ran 1,523 times, averaging 2.45 seconds per execution, returning roughly 245 rows each time, and consuming 3,731 seconds in total. Note that Rows in mysqldumpslow output is rows sent to the client, not rows examined.
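
The summary above is aggregated from raw log entries, each of which begins with a `# Query_time:` header line. A minimal shell sketch of the same totaling, run against an illustrative (not real) two-entry log:

```shell
#!/bin/sh
# Build a small illustrative slow log in the standard FILE format.
cat > /tmp/sample-slow.log <<'EOF'
# Time: 2025-01-15T14:03:12.123456Z
# User@Host: root[root] @ localhost []
# Query_time: 2.450000  Lock_time: 0.000100 Rows_sent: 245  Rows_examined: 1382
SET timestamp=1736949792;
SELECT * FROM orders WHERE customer_id = 42;
# Time: 2025-01-15T14:03:15.000000Z
# User@Host: root[root] @ localhost []
# Query_time: 1.550000  Lock_time: 0.000200 Rows_sent: 10  Rows_examined: 5000
SET timestamp=1736949795;
SELECT * FROM products WHERE category_id = 15;
EOF

# Sum Query_time across all entries and count them.
awk '/^# Query_time:/ { total += $3; n++ }
     END { printf "entries=%d total_time=%.2fs\n", n, total }' /tmp/sample-slow.log
```

This is the same "total time" metric that `mysqldumpslow -s t` ranks by, computed by hand.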

Limitations

mysqldumpslow is useful for quick checks but has significant limitations:

  • No percentile analysis (P95, P99)
  • No query response time distribution
  • Cannot read logs written to the mysql.slow_log table (log_output=TABLE)
  • Limited filtering and reporting options

Deep Analysis with pt-query-digest

pt-query-digest from the Percona Toolkit is the industry standard for slow query log analysis. It provides far more detailed insights than mysqldumpslow.

Installation

# Debian/Ubuntu
sudo apt-get install percona-toolkit

# RHEL/CentOS
sudo yum install percona-toolkit

# From source
wget https://www.percona.com/downloads/percona-toolkit/LATEST/tarball/percona-toolkit-LATEST.tar.gz
tar xzf percona-toolkit-LATEST.tar.gz
cd percona-toolkit-* && perl Makefile.PL && make && sudo make install

Basic Usage

# Full report
pt-query-digest /var/log/mysql/slow.log

# Report for last 24 hours
pt-query-digest --since '24h' /var/log/mysql/slow.log

# Filter by database
pt-query-digest --filter '$event->{db} eq "myapp"' /var/log/mysql/slow.log

# Output to file
pt-query-digest /var/log/mysql/slow.log > /tmp/slow-report.txt
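
For ongoing visibility, the digest can be scheduled. A hypothetical crontab fragment that writes a dated nightly report (the report directory, the schedule, and the assumption that pt-query-digest is on cron's PATH are all illustrative):

```
# m  h  dom mon dow  command
0    2  *   *   *    pt-query-digest --since 24h /var/log/mysql/slow.log > /var/reports/mysql/slow-$(date +\%F).txt 2>&1
```

Note the escaped \% in the date format: an unescaped % is treated as a newline in crontab entries.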

Report Structure

The output has three sections:

1. Overall Summary

# 2.1M QPS, 1.2x concurrency, 45% of time in query
# Attribute          total     min     max     avg     95%  stddev  median
# ============     ======= ======= ======= ======= ======= ======= =======
# Exec time         3731s    100ms     45s   2.45s   4.12s   1.33s   1.85s
# Lock time            1s       0     15ms    10us    22us    38us     8us
# Rows sent        373534       0    1000     245   480.0   180.3   198.0
# Rows examine     2.1M         0   50000    1382    3200    2100     850

2. Profile (Query Ranking)

# Rank Query ID                         Response time  Calls  R/Call  Item
# ==== ================================ ============== ====== ======= ====
#    1 0xE77769C62EF669AA1A6...          1800.0 48.2%   1523  1.1819  SELECT orders
#    2 0xA4B9D8C1F3E5A7B2C9...           950.3 25.5%    892  1.0654  SELECT products
#    3 0xF1C2D3E4A5B6C7D8E9...           480.1 12.9%   3201  0.1500  SELECT users

3. Per-Query Detail — for each ranked query, you get the full fingerprint, time distribution, EXPLAIN recommendations, and sample query with literal values.

mysqldumpslow vs pt-query-digest vs MySQL Enterprise Monitor vs PMM

Feature                 mysqldumpslow       pt-query-digest      MySQL Enterprise Monitor  Percona PMM
Cost                    Free (built-in)     Free (open source)   Commercial license        Free (open source)
Installation            None                Percona Toolkit      Dedicated server          Docker/VM
Real-time monitoring    No                  No                   Yes                       Yes
Historical analysis     Basic               Excellent            Excellent                 Excellent
Query fingerprinting    Basic               Advanced             Advanced                  Advanced
Percentile analysis     No                  Yes (P95, P99)       Yes                       Yes
Visual dashboard        No                  No                   Yes                       Yes (Grafana)
EXPLAIN integration     No                  Partial              Yes                       Yes
Alerting                No                  No                   Yes                       Yes
Best for                Quick spot checks   Deep batch analysis  Enterprise teams          Teams wanting free UI

Optimizing Slow Queries

Once you have identified the slowest queries, follow this systematic approach.

Step 1: EXPLAIN the Query

EXPLAIN SELECT * FROM orders
WHERE customer_id = 42 AND status = 'pending'
ORDER BY created_at DESC LIMIT 10;

Key columns to examine:

  • type: Should be ref, range, or const. Avoid ALL (full table scan)
  • key: Which index MySQL chose. NULL means no index used
  • rows: Estimated rows examined. High numbers indicate missing indexes
  • Extra: Watch for Using filesort and Using temporary

In MySQL 8.0, use EXPLAIN ANALYZE for actual execution statistics:

EXPLAIN ANALYZE SELECT * FROM orders
WHERE customer_id = 42 AND status = 'pending'
ORDER BY created_at DESC LIMIT 10;

Step 2: Add Missing Indexes

Based on EXPLAIN output, create composite indexes that cover the WHERE and ORDER BY clauses:

-- Covers WHERE customer_id AND status, ORDER BY created_at
CREATE INDEX idx_orders_customer_status_created
ON orders (customer_id, status, created_at DESC);

Index design rules:

  • Put equality columns first (customer_id, status)
  • Put range/sort columns last (created_at)
  • Include columns from SELECT if creating a covering index
  • Avoid over-indexing — each index slows down writes
  • DESC in an index definition takes effect only in MySQL 8.0; MySQL 5.7 parses it but builds the index in ascending order

Step 3: Rewrite Problematic Patterns

Common rewrites that improve performance:

-- BAD: Subquery executed per row
SELECT * FROM orders WHERE customer_id IN (
  SELECT id FROM customers WHERE region = 'US'
);

-- GOOD: JOIN executes once
SELECT o.* FROM orders o
INNER JOIN customers c ON o.customer_id = c.id
WHERE c.region = 'US';

-- BAD: SELECT * fetches unused columns
SELECT * FROM orders WHERE id = 42;

-- GOOD: Select only needed columns
SELECT id, status, total, created_at FROM orders WHERE id = 42;

-- BAD: Function on indexed column prevents index use
SELECT * FROM orders WHERE YEAR(created_at) = 2025;

-- GOOD: Range scan uses index
SELECT * FROM orders
WHERE created_at >= '2025-01-01' AND created_at < '2026-01-01';

Real-World Scenario

You manage the database for an e-commerce platform. Users report that product pages load slowly during peak hours (2-6 PM), but the rest of the site feels fine. Server metrics show CPU spikes correlating with the complaints, but memory and disk I/O are normal.

Step 1: Enable slow query log with long_query_time=0.5 during peak hours.

Step 2: After 2 hours of logging, run pt-query-digest:

pt-query-digest --since '2h' /var/log/mysql/slow.log

Step 3: The report reveals the top query consuming 62% of total time:

SELECT p.*, c.name as category_name,
       (SELECT AVG(rating) FROM reviews WHERE product_id = p.id) AS avg_rating,
       (SELECT COUNT(*) FROM reviews WHERE product_id = p.id) AS review_count
FROM products p
JOIN categories c ON p.category_id = c.id
WHERE p.category_id = 15 AND p.active = 1
ORDER BY p.created_at DESC LIMIT 20;

Step 4: EXPLAIN shows the correlated subqueries execute once per candidate product row before the LIMIT is applied (2,400 subquery executions per page load). Fix by rewriting both as a single derived-table JOIN:

SELECT p.*, c.name as category_name,
       COALESCE(r.avg_rating, 0) AS avg_rating,
       COALESCE(r.review_count, 0) AS review_count
FROM products p
JOIN categories c ON p.category_id = c.id
LEFT JOIN (
  SELECT product_id, AVG(rating) as avg_rating, COUNT(*) as review_count
  FROM reviews GROUP BY product_id
) r ON r.product_id = p.id
WHERE p.category_id = 15 AND p.active = 1
ORDER BY p.created_at DESC LIMIT 20;

Step 5: Add a composite index:

CREATE INDEX idx_products_category_active_created
ON products (category_id, active, created_at DESC);

Result: Query time drops from 2.4 seconds to 12 milliseconds. Page load returns to normal.

Gotchas and Edge Cases

  • Log rotation is essential. Without rotation, the slow log can consume all available disk space. Use logrotate (or a cron job) to rename the file, then run the MySQL FLUSH SLOW LOGS command so mysqld reopens a fresh one.
  • Disk space monitoring. Setting long_query_time=0 on a busy server can generate gigabytes of log data per hour. Always monitor disk usage when lowering the threshold.
  • Replication lag from logging. On replicas with log_slow_replica_statements=1 (MySQL 8.0.26+), logging replayed queries can increase replication delay. Monitor Seconds_Behind_Source.
  • log_queries_not_using_indexes flood. Small lookup tables (countries, statuses) will trigger this flag even though full scans on 50-row tables are optimal. Use min_examined_row_limit to filter them.
  • Sparse log entries. The default entry format records little beyond timing and row counts. Set log_slow_extra=1 (MySQL 8.0.14+) to capture additional fields such as the thread ID, error number, and bytes sent per query.
  • Runtime changes don’t persist. SET GLOBAL changes are lost on restart. Update my.cnf as well, or on MySQL 8.0 use SET PERSIST, which writes the setting to mysqld-auto.cnf.
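
The rotation concern above is commonly handled with logrotate. A sketch of an /etc/logrotate.d/mysql-slow entry, assuming the mysql client can authenticate non-interactively (for example via a credentials file); paths and retention are illustrative:

```
/var/log/mysql/slow.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
    create 640 mysql mysql
    postrotate
        # Ask mysqld to close and reopen the slow log after rotation
        mysql -e 'FLUSH SLOW LOGS;' || true
    endscript
}
```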

Troubleshooting

Slow query log file is not created: Check that MySQL has write permissions to the log directory:

sudo mkdir -p /var/log/mysql
sudo chown mysql:mysql /var/log/mysql
sudo chmod 750 /var/log/mysql

Log file shows zero queries despite slow_query_log=ON: Verify the long_query_time is not set too high. Check with:

SELECT @@global.long_query_time;

If it shows 10.000000, queries must take over 10 seconds to be logged. Lower it to 1.

pt-query-digest reports “No events processed”: The log file may be empty or the format unrecognized. Ensure log_output=FILE (not TABLE). Check file permissions:

ls -la /var/log/mysql/slow.log
file /var/log/mysql/slow.log

mysqldumpslow shows “Permission denied”: The tool needs read access to the log file. Run with sudo or add your user to the mysql group:

sudo mysqldumpslow -s t -t 10 /var/log/mysql/slow.log

Queries appear in slow log but EXPLAIN shows fast execution: This usually means lock contention. Check the Lock_time field in the log entry. High lock times indicate table-level or row-level locking issues, not query optimization problems.
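
To check for lock-bound entries straight from the log, a small shell sketch that flags any entry whose Lock_time exceeds half a second (the sample lines are illustrative, and the 0.5 s threshold is an assumption to tune):

```shell
#!/bin/sh
# Create an illustrative two-entry log: one lock-bound, one not.
cat > /tmp/lock-check.log <<'EOF'
# Query_time: 2.100000  Lock_time: 1.900000 Rows_sent: 1  Rows_examined: 1
# Query_time: 2.100000  Lock_time: 0.000100 Rows_sent: 1  Rows_examined: 1
EOF

# Print entries where the Lock_time field ($5) exceeds 0.5 seconds.
awk '/^# Query_time:/ && $5 + 0 > 0.5 {
  print "lock-bound: Query_time=" $3 " Lock_time=" $5
}' /tmp/lock-check.log
```

Entries this reports are contention problems; tuning the query plan alone will not fix them.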

Summary

  • Enable the slow query log with slow_query_log=1 and set long_query_time to 1 second or lower
  • Use log_queries_not_using_indexes to catch queries that will degrade as data grows
  • Start with mysqldumpslow for quick analysis; switch to pt-query-digest for production-grade insights
  • Focus on total time (frequency × average time) rather than just the slowest single execution
  • Use EXPLAIN and EXPLAIN ANALYZE (MySQL 8.0) to understand execution plans before adding indexes
  • Design composite indexes with equality columns first, range/sort columns last
  • Rewrite correlated subqueries as JOINs for order-of-magnitude improvements
  • Always implement log rotation and monitor disk space when running the slow query log in production