How to Find and Fix Slow Queries in PostgreSQL Using EXPLAIN ANALYZE

Jan 18, 2026

Is your application feeling a bit... sluggish? In the world of database management, slow queries are the silent killers of user retention. If your dashboard takes five seconds to load, your users aren't waiting—they're leaving.

To fix a slow database, you need more than just a "gut feeling." You need to master the most powerful diagnostic tool in the Postgres ecosystem: EXPLAIN ANALYZE.

In this guide, we’ll show you how to move from "Why is this slow?" to "Here is exactly how we fixed it."

1. Locating the Bottlenecks

You can't fix what you can't find. Before diving into execution plans, you need to identify your "Top Offenders."

  • pg_stat_statements: This is the gold standard. It’s a module that tracks execution statistics of all SQL statements. Look for queries with high total_exec_time or those that consistently show high mean_exec_time.

  • The Slow Query Log: Set your log_min_duration_statement to a threshold (e.g., 250ms). This catches the "outliers" that occasionally spike and ruin your p99 latency.
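Putting both together, here is a minimal sketch of finding your top offenders. The column names `total_exec_time` and `mean_exec_time` are for PostgreSQL 13+ (older versions use `total_time` and `mean_time`), and `pg_stat_statements` must be listed in `shared_preload_libraries` before the extension can be created:

```sql
-- Enable the extension (requires pg_stat_statements in shared_preload_libraries)
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Top 10 queries by total execution time (PostgreSQL 13+ column names)
SELECT query,
       calls,
       round(total_exec_time::numeric, 2) AS total_ms,
       round(mean_exec_time::numeric, 2)  AS mean_ms
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;

-- Log any statement that runs longer than 250 ms
ALTER SYSTEM SET log_min_duration_statement = '250ms';
SELECT pg_reload_conf();
```

Sorting by `total_exec_time` surfaces the queries that cost you the most overall; re-sort by `mean_exec_time` to find the ones that are individually slow but rarely called.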

2. EXPLAIN vs. EXPLAIN ANALYZE

This is a common point of confusion for developers. Think of it this way:

  • EXPLAIN: This is the "Weather Forecast." It tells you what Postgres thinks will happen based on internal statistics. It doesn't actually run the query.

  • EXPLAIN ANALYZE: This is the "Weather Report." It executes the query and tells you exactly what happened, where the time was spent, and how many rows were actually processed.

⚠️ Pro Tip: Because EXPLAIN ANALYZE actually runs the query, be careful with UPDATE or DELETE statements in production. Always wrap them in a transaction block: BEGIN; EXPLAIN ANALYZE [Your Query]; ROLLBACK;
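Here is that pattern in full, using a hypothetical `orders` table. Adding the `BUFFERS` option also reports how many data pages were read from cache versus disk:

```sql
BEGIN;

EXPLAIN (ANALYZE, BUFFERS)
UPDATE orders SET status = 'shipped' WHERE id = 42;

ROLLBACK;  -- the plan is printed, but the write never commits
```

Because the statement really executes inside the transaction, you see the true cost of the write path (index maintenance, trigger time), while the ROLLBACK guarantees no data changes.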

3. Decoding the Execution Plan

When you run EXPLAIN ANALYZE, Postgres returns a tree of operations. Your job is to find the "heaviest" nodes.

What to Look For:

  • Sequential Scan (Seq Scan): The DB is reading every single row in the table. Often a sign of a missing index.

  • Index Scan: Good! The DB is using an index. But watch out for high "Actual Time" here.

  • Bitmap Heap Scan: A middle ground. The DB found many rows via an index and is now fetching them in bulk.

  • Actual Time vs. Cost: If the "Cost" (the estimate) is low but the "Actual Time" is high, your table statistics are likely outdated.
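To make this concrete, here is an illustrative (made-up) plan fragment for a query on a hypothetical `users` table. Note the gap between the planner's row estimate and reality:

```
Seq Scan on users  (cost=0.00..18334.00 rows=500 width=72)
                   (actual time=0.041..312.580 rows=48213 loops=1)
  Filter: (country = 'DE'::text)
  Rows Removed by Filter: 951787
Planning Time: 0.110 ms
Execution Time: 318.204 ms
```

Three red flags in one node: it's a Seq Scan over roughly a million rows, the estimate (`rows=500`) is wildly below the actual row count (`rows=48213`), and nearly all scanned rows are thrown away by the filter. That combination points to stale statistics (run ANALYZE) plus a missing index on `country`.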

4. Common Performance Killers (And Quick Fixes)

  • The "N+1" Query Trap: Are you hitting the database 100 times for 100 rows? Check your ORM logs.

  • Missing Indexes: If you see a Seq Scan on a table with 1M+ rows, you likely need a CREATE INDEX on your WHERE clause columns.

  • Data Type Mismatches: If you compare a VARCHAR to a BIGINT, Postgres might ignore your index entirely.

  • Memory Spills: If you see Sort Method: external merge  Disk: ...kB in your plan, your work_mem is too low, and Postgres is sorting data on disk instead of in RAM.
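The fixes for the last three killers can be sketched in a few statements. The table and column names (`orders`, `customer_id`) are placeholders for your own schema:

```sql
-- Missing index: cover the column(s) in your WHERE clause.
-- CONCURRENTLY avoids locking out writes while the index builds.
CREATE INDEX CONCURRENTLY idx_orders_customer_id ON orders (customer_id);

-- Data type mismatch: cast the parameter, not the column,
-- so the index on the BIGINT column can still be used.
SELECT * FROM orders WHERE customer_id = '42'::bigint;

-- Memory spill: raise work_mem for this session and re-run the plan.
SET work_mem = '64MB';

-- Outdated statistics: refresh the planner's row estimates.
ANALYZE orders;
```

Prefer a session-level `SET work_mem` while experimenting; each sort or hash node in each concurrent query can use up to that amount, so raising it globally multiplies memory usage.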

5. Scaling Your Optimization Strategy

Manual tuning is great for a single query, but it’s a nightmare for a growing platform. Real-time performance monitoring tools take the guesswork out of the equation.

Instead of staring at text-heavy EXPLAIN outputs, modern monitoring tools provide:

  • Visual Query Plans: Heatmaps that highlight the exact node causing the delay.

  • Index Suggestions: AI-driven insights that tell you exactly which index to add.

  • Historical Context: See if a query became slow because of a code deploy or a sudden increase in data volume.

Stop Guessing. Start Optimizing.

The difference between a "laggy" app and a "snappy" one is often just a few well-placed indexes and a bit of EXPLAIN ANALYZE detective work.