Query Optimization in Postgres with pg_stat_statements
"I want to work on optimizing all my queries all day long because it will definitely be worth the time and effort," is a statement that has hopefully never been said. So when it comes to query optimizing, how should you pick your battles? Luckily, in PostgreSQL we have a way to take a system-wide look at database queries:
- Which ones have taken up the most amount of time cumulatively to execute
- Which ones are run the most frequently
- And how long on average they take to execute
And that's just a few places you can shine a light on, using pg_stat_statements.
pg_stat_statements is an extension that has been around in PostgreSQL since version 8.4. It's evolved over the years (notably since 9.2) to become such a helpful tool. So we thought now would be as good a time as any to highlight it here on our Crunchy blog.
You can technically set Postgres to log all calls made to the database, but if you're interested in something that gives you more of an overview, pg_stat_statements gives you easy access to query statistics. It doesn't prevent runaway queries from bogging down your system, but it could be a simple yet important addition to your Postgres performance toolkit. You don't need to be an expert to start using and getting value out of this extension. Let's give it a quick spin.
Getting started
pg_stat_statements is included in the contrib module, so it ships with standard Postgres, but might not be automatically enabled:
- If you're using a managed Postgres system such as Crunchy Bridge, pg_stat_statements may have already been added to shared_preload_libraries in postgresql.conf. (Tip: run SHOW shared_preload_libraries; to check.) In that case you can skip to the next step. Otherwise, add pg_stat_statements to shared_preload_libraries and restart Postgres.
- Log back in to Postgres. In the database in which you want to access pg_stat_statements, run:
CREATE EXTENSION pg_stat_statements;
And that's it, you can start using it. You do need to enable the extension in each database where you want to access it, but pg_stat_statements actually tracks queries across all databases on that server from the start.
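If you want to double-check that the extension is in place in the current database, one quick way is to look it up in the pg_extension catalog:

SELECT extname, extversion
FROM pg_extension
WHERE extname = 'pg_stat_statements';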
pg_stat_statements on Crunchy Bridge
pg_stat_statements gives us a view that contains aggregated query statistics. I'm using a Crunchy Bridge instance running PG 13, and here's the result when describing the view:
\d pg_stat_statements
View "public.pg_stat_statements"
Column | Type | Collation | Nullable | Default
---------------------+------------------+-----------+----------+---------
userid | oid | | |
dbid | oid | | |
queryid | bigint | | |
query | text | | |
plans | bigint | | |
total_plan_time | double precision | | |
min_plan_time | double precision | | |
max_plan_time | double precision | | |
mean_plan_time | double precision | | |
stddev_plan_time | double precision | | |
calls | bigint | | |
total_exec_time | double precision | | |
min_exec_time | double precision | | |
max_exec_time | double precision | | |
mean_exec_time | double precision | | |
stddev_exec_time | double precision | | |
rows | bigint | | |
...
blk_read_time | double precision | | |
blk_write_time | double precision | | |
wal_records | bigint | | |
wal_fpi | bigint | | |
wal_bytes | numeric | | |
We'll revisit a few of these attributes in just a bit, but for now let's just dive in. I spun up a new Crunchy Bridge cluster, and since pg_stat_statements had already been added, all I really needed to do was run CREATE EXTENSION. We can then query against the pg_stat_statements view, just like so:
SELECT * FROM pg_stat_statements;
Usage
Here's a basic example of how you can query pg_stat_statements:
SELECT
(total_exec_time / 1000 / 60) as total_min,
mean_exec_time as avg_ms,
calls,
query
FROM pg_stat_statements
ORDER BY 1 DESC
LIMIT 500;
total_exec_time and mean_exec_time are in milliseconds, and calls is the number of times the query has been run. min_exec_time, max_exec_time, and stddev_exec_time are available as well. You could also look at the total number of cumulative rows returned by the query. There are many different ways you could slice and dice the information from pg_stat_statements.
You may have noticed that stats on query planning are included in addition to just execution. These are new additions as of PG 13, and could provide some more helpful insights (for example, where planning is taking significantly longer compared to execution).
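One caveat: on PG 13 the plan-time columns are only populated when the pg_stat_statements.track_planning parameter is turned on (it's off by default). With that enabled, a rough sketch of such a comparison might look like:

-- Queries whose cumulative planning time exceeds their execution time
-- (requires pg_stat_statements.track_planning = on)
SELECT
    total_plan_time,
    total_exec_time,
    calls,
    query
FROM pg_stat_statements
WHERE total_plan_time > total_exec_time
ORDER BY total_plan_time DESC
LIMIT 20;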
In case you were wondering if pg_stat_statements also records queries issued against the view itself: yes, by default it does.
Resetting statistics
You could also do a manual reset, which flushes out the statistics gathered so far:
SELECT pg_stat_statements_reset();
It can make sense to do this after you've carried out some tuning on your end, such as adding an index. Or, you may have made some changes to the database schema. Either way, even if you do periodic resets you should still be able to make comparisons across time by taking snapshots of the view, for example by saving the query output.
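One simple way to take such a snapshot (just an illustration; the stats_snapshot table name is made up) is to copy the view into a regular table right before resetting:

-- Hypothetical snapshot table: capture the current stats, then reset
CREATE TABLE stats_snapshot AS
    SELECT now() AS captured_at, *
    FROM pg_stat_statements;

SELECT pg_stat_statements_reset();

Later snapshots can be appended with INSERT INTO stats_snapshot SELECT now(), * FROM pg_stat_statements; so you can diff the counters between two points in time.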
A look at how queries are matched
Let's say I run the following statements:
CREATE TABLE payment (
id INTEGER PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY,
date timestamptz NOT NULL DEFAULT CURRENT_TIMESTAMP,
type TEXT NOT NULL
CHECK (type in ('Event', 'Package')),
status TEXT
CHECK (status in ('Waive', 'Refund')),
client_id INTEGER NOT NULL REFERENCES client(id),
event_id INTEGER REFERENCES event(id),
amount NUMERIC(8,2) NOT NULL,
balance NUMERIC(8,2) NOT NULL
);
INSERT INTO payment (date, type, client_id, event_id, amount, balance)
VALUES ('2020-01-10 13:00 PST',
'Event',
1,
4,
20.00,
0.00)
;
Here's what I get in pg_stat_statements (slightly reformatted for easier reading):
-[ RECORD 12 ]---------------------------------------------------------------------
total_min | 0.00022264500000000002
avg_ms | 13.3587
calls | 1
query | CREATE TABLE payment (id INTEGER PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY, date timestamptz NOT NULL DEFAULT CURRENT_TIMESTAMP, type TEXT NOT NULL CHECK (type in ('Event', 'Package')), status TEXT CHECK (status in ('Waive', 'Refund')), client_id INTEGER NOT NULL REFERENCES client(id), event_id INTEGER REFERENCES event(id), amount NUMERIC(8,2) NOT NULL, balance NUMERIC(8,2) NOT NULL)
...
-[ RECORD 50 ]---------------------------------------------------------------------
total_min | 6.687333333333334e-05
avg_ms | 4.0124
calls | 1
query | INSERT INTO payment (date, type, client_id, event_id, amount, balance) VALUES ($1, $2, $3, $4, $5, $6)
Notice the difference? The CREATE TABLE query is saved "as-is," but the INSERT has the literal values stripped out and replaced by parameter symbols ($1, $2, etc.). (You may also see this being referred to as the "normalized form" of the query.) That allows semantically equivalent queries to be evaluated in the aggregate. That is, if I were to run another insert into the payment table with the same syntax, we do want to consider it to be the same operation, but don't necessarily care that we may be inserting different values.
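To see that aggregation in action, if I run a second insert with different values, I'd expect the existing entry to simply have its counters bumped rather than a new row appearing (a quick sketch, continuing with the payment table):

-- A second insert with different literal values...
INSERT INTO payment (date, type, client_id, event_id, amount, balance)
VALUES ('2020-02-20 09:30 PST', 'Package', 2, 7, 35.00, 10.00);

-- ...shows up under the same normalized entry, now with calls = 2
SELECT calls, rows, query
FROM pg_stat_statements
WHERE query LIKE 'INSERT INTO payment%';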
Note on I/O stats
A tip from my colleague Greg Smith: if you want to use the block read and write time statistics (blk_read_time and blk_write_time), you also need to turn on the track_io_timing parameter in postgresql.conf. That's recommended if you want all the features pg_stat_statements can deliver. It's disabled by default because timing things is very slow on some systems.
To check how fast the timers in your system are, use the pg_test_timing utility. The overhead of collecting all this timing data is low for most hardware.
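If you decide to turn it on, one option (a sketch, assuming you have the privileges to change server settings) is ALTER SYSTEM, since track_io_timing only needs a configuration reload rather than a restart:

-- Enable I/O timing collection and reload the configuration
ALTER SYSTEM SET track_io_timing = on;
SELECT pg_reload_conf();

-- From then on, blk_read_time and blk_write_time (in milliseconds) start accumulating
SELECT blk_read_time, blk_write_time, calls, query
FROM pg_stat_statements
ORDER BY blk_read_time DESC
LIMIT 10;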
I hadn't enabled track_io_timing, so we've focused on simpler examples for now. Do keep an eye out for a future deep dive into this topic on our blog.
pg_stat_statements for the easy win
I describe pg_stat_statements as giving you an aggregate look at query stats. Which attributes are most important and what thresholds you want to set will probably depend on how your database is used and what your requirements are. But, once you're able to zero in on some specific queries, one of the next things you'll probably want to do is dig in individually using EXPLAIN (ANALYZE). Then, you might go on to optimize these queries perhaps by creating new indexes. That's just one example, but I hope this helps illustrate how pg_stat_statements can be a really handy tool for improving the performance of your database.
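For example, once pg_stat_statements has pointed you at a suspect query, you might substitute representative values for the $1-style parameters and run it through EXPLAIN (ANALYZE) (a sketch against the payment table from earlier):

-- Inspect the actual plan and runtime for one candidate query
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM payment
WHERE client_id = 1;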
One thing to keep in mind is that the query texts are "kept in an external disk file, and do not consume shared memory" (taken from the official docs). pg_stat_statements should leave only a relatively small footprint on your system, especially compared to logging all of the things. That said, you could also set a lower threshold on pg_stat_statements.max, or track only certain types of statements using the pg_stat_statements.track parameter.
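For reference, both of those are server settings; a minimal sketch (example values, not recommendations) sets them with ALTER SYSTEM, keeping in mind that pg_stat_statements.max only takes effect after a restart:

-- Example value only; takes effect after a server restart
ALTER SYSTEM SET pg_stat_statements.max = 1000;
-- Track only top-level statements ('top', the default), everything ('all'), or nothing ('none')
ALTER SYSTEM SET pg_stat_statements.track = 'top';
SELECT pg_reload_conf();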
pg_stat_statements saves you from the effort of parsing through all your logs just to get a pulse on what's going on with your queries. For those who weren't familiar with the extension, I hope this encourages you to start digging in. And while we're here, we do have a class on EXPLAIN (ANALYZE) in the Crunchy Learning Portal, for those of you who'd also like to give that a whirl. Enjoy!