Active-Active in Postgres 16
Support for logical replication arrived in Postgres just over five years ago with Postgres 10. Since then it has seen a steady stream of improvements, but logical replication has mostly been limited to data migrations or unidirectional change data capture workflows. With Postgres 16 freshly released today, Postgres now has a better foundation for leveraging logical replication in active-active setups.
What is logical replication and active-active?
If you're unfamiliar with logical replication or what active-active means, we've got you covered.
Logical replication is a method of replicating data changes based on the logical contents of the database, rather than at the physical level (bytes on disk). In simplified terms, you can think of it as a stream of INSERT, UPDATE, and DELETE statements. Logical replication allows you to selectively replicate tables, specific columns, or even specific rows based on defined replication rules. This flexibility makes logical replication ideal for scenarios where you need to replicate only a subset of the data or perform transformations during replication.
Active-active replication - when referring to databases - is the ability to write to any of two (or more) Postgres instances, with each holding a full, live set of the data. Active-active is generally appealing as a way to improve availability, but it brings complexity that can be a significant tradeoff. Until now, using logical replication for bi-directional replication was difficult and, at best, not very efficient.
Transaction loopback
Prior to Postgres 16, making this work required special processing to prevent transaction loopback.
Transaction loopback occurs when a transaction is replicated from the source to the target and then replicated back to the source. Postgres 16 adds a feature that solves this problem: when creating a subscription, the subscriber can ask the publisher to skip transactions that were applied via the replication apply process. This is possible thanks to the origin messages in the WAL stream.
If you're still with us up to here, let's dig in and actually work on setting this up with Postgres 16.
Origin filter
The WAL stream contains information referred to in the documentation as 'origin messages'. These messages identify the source of the transaction, whether it was local or from an apply process. Let's take a look at the following to gain some insight into these messages.
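If you'd like to inspect these messages on your own instance, one way to locate the relevant WAL file is from SQL, then run pg_waldump against that file in the pg_wal directory (the function below is standard; the exact file name will differ on your system):

-- Returns the file name of the WAL segment containing the current write position
SELECT pg_walfile_name(pg_current_wal_lsn());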
Below is an excerpt from pg_waldump from a local transaction:
rmgr: Standby len (rec/tot): 50/ 50, tx: 0, lsn: 0/47000028, prev 0/46000A40, desc: RUNNING_XACTS nextXid 900 latestCompletedXid 899 oldestRunningXid 900
rmgr: Heap len (rec/tot): 114/ 114, tx: 900, lsn: 0/47000060, prev 0/47000028, desc: HOT_UPDATE off 17 xmax 900 flags 0x10 ; new off 18 xmax 0, blkref #0: rel 1663/5/24792 blk 0
rmgr: Transaction len (rec/tot): 46/ 46, tx: 900, lsn: 0/470000D8, prev 0/47000060, desc: COMMIT 2023-06-20 16:43:03.908882 EDT
Now let's compare it with the COMMIT line from the logical replication apply process:
rmgr: Heap len (rec/tot): 54/ 54, tx: 901, lsn: 0/47000108, prev 0/470000D8, desc: LOCK off 18: xid 901: flags 0x00 LOCK_ONLY EXCL_LOCK KEYS_UPDATED , blkref #0: rel 1663/5/24792 blk 0
rmgr: Heap len (rec/tot): 117/ 117, tx: 901, lsn: 0/47000140, prev 0/47000108, desc: HOT_UPDATE off 18 xmax 901 flags 0x10 KEYS_UPDATED ; new off 19 xmax 901, blkref #0: rel 1663/5/24792 blk 0
rmgr: Transaction len (rec/tot): 65/ 65, tx: 901, lsn: 0/470001B8, prev 0/47000140, desc: COMMIT 2023-06-20 16:43:17.412369 EDT; origin: node 1, lsn 6/A95C2780, at 2023-06-20 16:43:17.412675 EDT
Notice the origin message in the COMMIT entry. It indicates that the transaction originated from 'node 1' at source LSN '6/A95C2780'. With Postgres 16, setting 'origin = none' on the subscription instructs the publisher to only send transactions that carry no origin information, meaning they were performed locally.
Sample environment
Let's do a quick test of setting up active-active replication. Start by creating two Postgres 16 instances and set the following Postgres parameters to configure each instance for logical replication:
- wal_level = 'logical'
- max_worker_processes = 16
Setting the WAL level to logical enables logical decoding. Since we are adding several processes on both sides to extract and replay changes, we also increased max_worker_processes so replication workers don't interfere with other activities. After setting the above parameters, restart Postgres. For this example, the two Postgres instances will be referred to as pg1 and pg2.
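If you prefer setting these from SQL rather than editing postgresql.conf, ALTER SYSTEM works as well (both settings still require the restart):

-- Written to postgresql.auto.conf; takes effect after a restart
ALTER SYSTEM SET wal_level = 'logical';
ALTER SYSTEM SET max_worker_processes = 16;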
In pg1, execute the following to configure the sample database objects.
CREATE SEQUENCE emp_eid_seq
START 1
INCREMENT 2;
CREATE TABLE emp (eid int NOT NULL DEFAULT nextval('emp_eid_seq') primary key,
first_name varchar(40),
last_name varchar(40),
email varchar(100),
hire_dt timestamp);
INSERT INTO emp (FIRST_NAME,LAST_NAME,EMAIL,HIRE_DT) VALUES ('John', 'Doe', 'johndoe@example.com', '2021-01-15 09:00:00'),
('Jane', 'Smith', 'janesmith@example.com', '2022-03-20 14:30:00'),
('Michael', 'Johnson', 'michaelj@example.com', '2020-12-10 10:15:00'),
('Emily', 'Williams', 'emilyw@example.com', '2023-05-05 08:45:00'),
('David', 'Brown', 'davidbrown@example.com', '2019-11-25 11:20:00'),
('Sarah', 'Taylor', 'saraht@example.com', '2022-09-08 13:00:00'),
('Robert', 'Anderson', 'roberta@example.com', '2021-07-12 16:10:00'),
('Jennifer', 'Martinez', 'jenniferm@example.com', '2023-02-18 09:30:00'),
('William', 'Jones', 'williamj@example.com', '2020-04-30 12:45:00'),
('Linda', 'Garcia', 'lindag@example.com', '2018-06-03 15:55:00');
In pg2, a slightly different script is used to prepare the database objects.
CREATE SEQUENCE emp_eid_seq
START 2
INCREMENT 2;
CREATE TABLE emp (eid int NOT NULL DEFAULT nextval('emp_eid_seq') primary key,
first_name varchar(40),
last_name varchar(40),
email varchar(100),
hire_dt timestamp);
Notice special design considerations are already taking shape. To avoid primary key conflicts, pg1 generates odd primary key values (START 1, INCREMENT 2), and pg2 will use even numbers (START 2, INCREMENT 2).
The last setup piece is to create a user for replication on both systems.
CREATE ROLE repuser WITH REPLICATION LOGIN PASSWORD 'welcome1';
GRANT ALL ON ALL TABLES IN SCHEMA public TO repuser;
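Depending on your network setup, you may also need a pg_hba.conf entry on each instance so repuser can connect from the other host. The line below is only illustrative; the address and auth method will vary in your environment:

# On pg1: allow repuser to connect to the postgres database from pg2
host    postgres    repuser    pg2    scram-sha-256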
Publication
Using a publish/subscribe model, changes captured in one Postgres instance (publisher) can be replicated to multiple Postgres instances (subscribers). Using the commands below, create a publication on each instance.
pg1:
CREATE PUBLICATION hrpub1
FOR TABLE emp;
pg2:
CREATE PUBLICATION hrpub2
FOR TABLE emp;
The publication name could have been the same for each side, but having different names will help us later on as we measure latency using a custom heartbeat table.
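To confirm each publication was created and includes the emp table, you can query the pg_publication_tables view:

-- Run on each instance; expect a row listing the emp table
SELECT pubname, schemaname, tablename FROM pg_publication_tables;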
Subscription
With the publications ready, the next step is to create the subscriptions. By default, logical replication starts with an initial snapshot on the publisher and copies the data over to the subscriber. Since we are setting up bi-directional replication, we will allow the snapshot to copy from pg1 to pg2, but we don't need the reverse copy to happen and will therefore disable it.
pg1:
CREATE SUBSCRIPTION hrsub1
CONNECTION 'host=pg2 port=5432 user=repuser password=welcome1 dbname=postgres'
PUBLICATION hrpub2
WITH (origin = none, copy_data = false);
pg2:
CREATE SUBSCRIPTION hrsub2
CONNECTION 'host=pg1 port=5432 user=repuser password=welcome1 dbname=postgres'
PUBLICATION hrpub1
WITH (origin = none, copy_data = true);
The key is the origin setting in the subscription (origin = none). The default, origin = any, instructs the publisher to send all transactions to the subscriber regardless of their source. For bi-directional replication this is bad: an update performed on pg1 would be replicated to pg2 (so far so good), but that replicated transaction would then be captured and sent back to pg1, and so on. This is what we call transaction loopback.
By setting origin to none, the subscriber will request the publisher to only send changes that do not have an origin and thus ignore replicated transactions. Now, Postgres is ready for bi-directional logical replication.
After a few seconds, verify that the initial copy of the emp table has occurred between pg1 and pg2.
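A couple of quick checks on pg2 can confirm the copy completed (pg_subscription_rel is the catalog that tracks per-table sync state):

-- The ten rows seeded on pg1 should now be present
SELECT count(*) FROM emp;

-- Relation sync state: 'r' = ready, 'd' = data is being copied
SELECT srrelid::regclass AS relation, srsubstate FROM pg_subscription_rel;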
Replication test
With the replication configured, update data on each side and verify replication.
pg1:
SELECT * FROM emp WHERE eid=1;
UPDATE emp SET first_name='Bugs', last_name='Bunny' WHERE eid=1;
pg2:
SELECT * FROM emp WHERE eid=1;
SELECT * FROM emp WHERE eid=3;
UPDATE emp SET first_name='Road', last_name='Runner' WHERE eid=3;
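Once replication has had a moment to catch up, each change should be visible from the opposite side:

-- On pg2: the update made on pg1 should be visible
SELECT first_name, last_name FROM emp WHERE eid=1;

-- On pg1: the update made on pg2 should be visible
SELECT first_name, last_name FROM emp WHERE eid=3;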
Don't get too carried away
Setting up bi-directional replication is easy, but not without risk. Before you go open a pull request against prod, there are many things to consider: monitoring, restrictions, change volume, application behavior, backup and recovery, data reconciliation, and more. Let's do a quick exercise to demonstrate application behavior that results in data integrity issues. For the following example, open a database connection to both pg1 and pg2.
Start a transaction in each session using the following, and note the values of email and last_name.
BEGIN;
SELECT * FROM emp WHERE eid=1;
In pg1, update the email address of the employee with eid = 1, but do not commit.
UPDATE emp SET email='bugs.bunny@acme.com' WHERE eid=1;
In pg2, update the last name but do not commit.
UPDATE emp SET last_name='Jones' WHERE eid=1;
The expectation after committing is that last_name will be equal to Jones and email will be bugs.bunny@acme.com. Commit the transaction in pg1 and then in pg2. What happens?
--pg1
SELECT * FROM emp WHERE eid=1;
eid | first_name | last_name | email | hire_dt
-----+------------+-----------+-------------------------+---------------------
1 | Bugs | Jones | johndoe@example.com | 2022-09-25 16:04:47
(1 row)
--pg2
SELECT * FROM emp WHERE eid=1;
eid | first_name | last_name | email | hire_dt
-----+------------+-----------+---------------------+---------------------
1 | Bugs | Bunny | bugs.bunny@acme.com | 2022-09-25 16:04:47
(1 row)
Now both rows are out of sync. On pg1, the update of the email was lost, and on pg2 the update of last_name was lost. This happens because logical replication sends the entire row, not just the fields that were updated. In such cases, even eventual consistency is not possible.
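There's no built-in reconciliation here, but a rough way to spot divergence is to hash each row on both sides and compare the output by hand. This is just a sketch, not a complete reconciliation strategy:

-- Run on both pg1 and pg2 and diff the results; mismatched hashes flag divergent rows
SELECT eid, md5(ROW(first_name, last_name, email, hire_dt)::text) AS row_hash
FROM emp
ORDER BY eid;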
Conclusion
Logical replication with PostgreSQL offers a flexible and powerful solution for replicating data changes across multiple database instances. Its ability to selectively replicate data, scalability features, and high availability options make it a valuable tool for various use cases. New features in Postgres 16 take an already powerful feature and make it even better. Bi-directional replication is now within reach using native replication. However, one must plan and test to maintain data integrity and consistency.
If you want to test it out, try Crunchy Bridge. The default WAL level is logical and we're running Postgres 16.