<!-- doc/src/sgml/logical-replication.sgml -->

<chapter id="logical-replication">
 <title>Logical Replication</title>

 <para>
  Logical replication is a method of replicating data objects and their
  changes, based upon their replication identity (usually a primary key). We
  use the term logical in contrast to physical replication, which uses exact
  block addresses and byte-by-byte replication. PostgreSQL supports both
  mechanisms concurrently, see <xref linkend="high-availability"/>. Logical
  replication allows fine-grained control over both data replication and
  security.
 </para>
 <para>
  Logical replication uses a <firstterm>publish</firstterm>
  and <firstterm>subscribe</firstterm> model with one or
  more <firstterm>subscribers</firstterm> subscribing to one or more
  <firstterm>publications</firstterm> on a <firstterm>publisher</firstterm>
  node. Subscribers pull data from the publications they subscribe to and may
  subsequently re-publish data to allow cascading replication or more complex
  configurations.
 </para>

 <para>
  Logical replication of a table typically starts with taking a snapshot of
  the table's data on the publisher database and copying it to the subscriber.
  Once that is complete, changes on the publisher since the initial copy are
  sent continually to the subscriber. The subscriber applies the data in the
  same order as the publisher so that transactional consistency is guaranteed
  for publications within a single subscription. This method of data
  replication is sometimes referred to as transactional replication.
 </para>
 <para>
  The typical use-cases for logical replication are:

  <itemizedlist>
   <listitem><para>
    Sending incremental changes in a single database or a subset of a
    database to subscribers as they occur.
   </para></listitem>

   <listitem><para>
    Firing triggers for individual changes as they arrive on the subscriber.
   </para></listitem>

   <listitem><para>
    Consolidating multiple databases into a single one (for example for
    analytical purposes).
   </para></listitem>

   <listitem><para>
    Replicating between different major versions of PostgreSQL.
   </para></listitem>

   <listitem><para>
    Replicating between PostgreSQL instances on different platforms (for
    example Linux to Windows).
   </para></listitem>

   <listitem><para>
    Giving access to replicated data to different groups of users.
   </para></listitem>

   <listitem><para>
    Sharing a subset of the database between multiple databases.
   </para></listitem>
  </itemizedlist>
 </para>
 <para>
  The subscriber database behaves in the same way as any other PostgreSQL
  instance and can be used as a publisher for other databases by defining its
  own publications. When the subscriber is treated as read-only by the
  application, there will be no conflicts from a single subscription. On the
  other hand, if there are other writes done either by an application or by other
  subscribers to the same set of tables, conflicts can arise.
 </para>
 <sect1 id="logical-replication-publication">
  <title>Publication</title>

  <para>
   A <firstterm>publication</firstterm> can be defined on any physical
   replication primary. The node where a publication is defined is referred to
   as <firstterm>publisher</firstterm>. A publication is a set of changes
   generated from a table or a group of tables, and might also be described as
   a change set or replication set. Each publication exists in only one database.
  </para>

  <para>
   Publications are different from schemas and do not affect how the table is
   accessed. Each table can be added to multiple publications if needed.
   Publications may currently only contain tables and all tables in schema.
   Objects must be added explicitly, except when a publication is created for
   <literal>ALL TABLES</literal>.
  </para>
  <para>
   Publications can choose to limit the changes they produce to
   any combination of <command>INSERT</command>, <command>UPDATE</command>,
   <command>DELETE</command>, and <command>TRUNCATE</command>, similar to how
   triggers are fired by particular event types. By default, all operation
   types are replicated. These publication specifications apply only for DML
   operations; they do not affect the initial data synchronization copy.
   (Row filters have no effect for <command>TRUNCATE</command>. See
   <xref linkend="logical-replication-row-filter"/>).
  </para>

  <para>
   Every publication can have multiple subscribers.
  </para>

  <para>
   A publication is created using the
   <link linkend="sql-createpublication"><command>CREATE PUBLICATION</command></link>
   command and may later be altered or dropped using corresponding commands.
  </para>
  <para>
   The individual tables can be added and removed dynamically using
   <link linkend="sql-alterpublication"><command>ALTER PUBLICATION</command></link>.
   Both the <literal>ADD TABLE</literal> and <literal>DROP TABLE</literal>
   operations are transactional, so the table will start or stop replicating
   at the correct snapshot once the transaction has committed.
  </para>
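  <para>
   As a minimal sketch (the publication and table names here are hypothetical
   and not part of the examples later in this chapter), a publication might be
   created and then adjusted like this:
<programlisting>
test_pub=# CREATE PUBLICATION mypub FOR TABLE users, orders;
test_pub=# ALTER PUBLICATION mypub ADD TABLE invoices;
test_pub=# ALTER PUBLICATION mypub DROP TABLE orders;
</programlisting>
   Because these operations are transactional, several such changes can also
   be grouped into a single transaction block.
  </para>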
  <sect2 id="logical-replication-publication-replica-identity">
   <title>Replica Identity</title>

   <para>
    A published table must have a <firstterm>replica identity</firstterm>
    configured in order to be able to replicate <command>UPDATE</command>
    and <command>DELETE</command> operations, so that appropriate rows to
    update or delete can be identified on the subscriber side.
   </para>

   <para>
    By default, this is the primary key, if there is one. Another unique index
    (with certain additional requirements) can also be set to be the replica
    identity. If the table does not have any suitable key, then it can be set
    to replica identity <literal>FULL</literal>, which means the entire row
    becomes the key. When replica identity <literal>FULL</literal> is
    specified, indexes can be used on the subscriber side for searching the
    rows. Candidate indexes must be btree or hash, non-partial, and the
    leftmost index field must be a column (not an expression) that references
    the published table column. These restrictions on the non-unique index
    properties adhere to some of the restrictions that are enforced for
    primary keys. If there are no such suitable indexes, the search on the
    subscriber side can be very inefficient, therefore replica identity
    <literal>FULL</literal> should only be used as a fallback if no other
    solution is possible.
   </para>
   <para>
    If a replica identity other than <literal>FULL</literal> is set on the
    publisher side, a replica identity comprising the same or fewer columns
    must also be set on the subscriber side.
   </para>

   <para>
    Tables with a replica identity defined as <literal>NOTHING</literal>,
    <literal>DEFAULT</literal> without a primary key, or
    <literal>USING INDEX</literal> with a dropped index, cannot support
    <command>UPDATE</command> or <command>DELETE</command> operations when
    included in a publication replicating these actions. Attempting such
    operations will result in an error on the publisher.
   </para>

   <para>
    <command>INSERT</command> operations can proceed regardless of any replica identity.
   </para>

   <para>
    See <link linkend="sql-altertable-replica-identity"><literal>ALTER TABLE ... REPLICA IDENTITY</literal></link>
    for details on how to set the replica identity.
   </para>
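   <para>
    As a sketch (using a hypothetical table <literal>mytable</literal> and a
    hypothetical unique index <literal>mytable_col_idx</literal>), the replica
    identity could be set as follows:
<programlisting>
-- use a suitable unique index (non-partial, on NOT NULL columns) as the replica identity
test_pub=# ALTER TABLE mytable REPLICA IDENTITY USING INDEX mytable_col_idx;

-- or, as a fallback when no suitable key exists, use the whole row as the key
test_pub=# ALTER TABLE mytable REPLICA IDENTITY FULL;
</programlisting>
   </para>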
 <sect1 id="logical-replication-subscription">
  <title>Subscription</title>

  <para>
   A <firstterm>subscription</firstterm> is the downstream side of logical
   replication. The node where a subscription is defined is referred to as
   the <firstterm>subscriber</firstterm>. A subscription defines the connection
   to another database and set of publications (one or more) to which it wants
   to subscribe.
  </para>

  <para>
   The subscriber database behaves in the same way as any other PostgreSQL
   instance and can be used as a publisher for other databases by defining its
   own publications.
  </para>

  <para>
   A subscriber node may have multiple subscriptions if desired. It is
   possible to define multiple subscriptions between a single
   publisher-subscriber pair, in which case care must be taken to ensure
   that the subscribed publication objects don't overlap.
  </para>

  <para>
   Each subscription will receive changes via one replication slot (see
   <xref linkend="streaming-replication-slots"/>). Additional replication
   slots may be required for the initial data synchronization of
   pre-existing table data and those will be dropped at the end of data
   synchronization.
  </para>
  <para>
   A logical replication subscription can be a standby for synchronous
   replication (see <xref linkend="synchronous-replication"/>). The standby
   name is by default the subscription name. An alternative name can be
   specified as <literal>application_name</literal> in the connection
   information of the subscription.
  </para>
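  <para>
   As a sketch (the host, database, publication, subscription, and application
   names here are hypothetical), the subscriber could pass an alternative name
   via <literal>application_name</literal>, and the publisher could then list
   that name in <varname>synchronous_standby_names</varname>:
<programlisting>
test_sub=# CREATE SUBSCRIPTION mysub
test_sub-# CONNECTION 'host=pub.example.com dbname=appdb application_name=sub_sync_1'
test_sub-# PUBLICATION mypub;

test_pub=# ALTER SYSTEM SET synchronous_standby_names = 'sub_sync_1';
test_pub=# SELECT pg_reload_conf();
</programlisting>
  </para>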
  <para>
   Subscriptions are dumped by <command>pg_dump</command> if the current user
   is a superuser. Otherwise a warning is written and subscriptions are
   skipped, because non-superusers cannot read all subscription information
   from the <structname>pg_subscription</structname> catalog.
  </para>

  <para>
   The subscription is added using
   <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> and
   can be stopped/resumed at any time using the
   <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION</command></link> command
   and removed using
   <link linkend="sql-dropsubscription"><command>DROP SUBSCRIPTION</command></link>.
  </para>
  <para>
   When a subscription is dropped and recreated, the synchronization
   information is lost. This means that the data has to be resynchronized
   afterwards.
  </para>

  <para>
   The schema definitions are not replicated, and the published tables must
   exist on the subscriber. Only regular tables may be
   the target of replication. For example, you can't replicate to a view.
  </para>

  <para>
   The tables are matched between the publisher and the subscriber using the
   fully qualified table name. Replication to differently-named tables on the
   subscriber is not supported.
  </para>

  <para>
   Columns of a table are also matched by name. The order of columns in the
   subscriber table does not need to match that of the publisher. The data
   types of the columns do not need to match, as long as the text
   representation of the data can be converted to the target type. For
   example, you can replicate from a column of type <type>integer</type> to a
   column of type <type>bigint</type>. The target table can also have
   additional columns not provided by the published table. Any such columns
   will be filled with the default value as specified in the definition of the
   target table. However, logical replication in binary format is more
   restrictive. See the
   <link linkend="sql-createsubscription-params-with-binary"><literal>binary</literal></link>
   option of <command>CREATE SUBSCRIPTION</command> for details.
  </para>
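  <para>
   As a sketch (the subscription name and connection details are hypothetical),
   a subscription requesting binary-format transfer, subject to the
   restrictions described for the <literal>binary</literal> option, might be
   created like this:
<programlisting>
test_sub=# CREATE SUBSCRIPTION binary_sub
test_sub-# CONNECTION 'host=localhost dbname=test_pub'
test_sub-# PUBLICATION pub1
test_sub-# WITH (binary = true);
</programlisting>
  </para>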
  <sect2 id="logical-replication-subscription-slot">
   <title>Replication Slot Management</title>

   <para>
    As mentioned earlier, each (active) subscription receives changes from a
    replication slot on the remote (publishing) side.
   </para>

   <para>
    Additional table synchronization slots are normally transient, created
    internally to perform initial table synchronization and dropped
    automatically when they are no longer needed. These table synchronization
    slots have generated names: <quote><literal>pg_%u_sync_%u_%llu</literal></quote>
    (parameters: Subscription <parameter>oid</parameter>,
    Table <parameter>relid</parameter>, system identifier <parameter>sysid</parameter>)
   </para>

   <para>
    Normally, the remote replication slot is created automatically when the
    subscription is created using
    <link linkend="sql-createsubscription"><command>CREATE SUBSCRIPTION</command></link> and it
    is dropped automatically when the subscription is dropped using
    <link linkend="sql-dropsubscription"><command>DROP SUBSCRIPTION</command></link>.
    In some situations, however, it can
    be useful or necessary to manipulate the subscription and the underlying
    replication slot separately. Here are some scenarios:

    <itemizedlist>
     <listitem>
      <para>
       When creating a subscription, the replication slot already exists. In
       that case, the subscription can be created using
       the <literal>create_slot = false</literal> option to associate with the
       existing slot.
      </para>
     </listitem>

     <listitem>
      <para>
       When creating a subscription, the remote host is not reachable or in an
       unclear state. In that case, the subscription can be created using
       the <literal>connect = false</literal> option. The remote host will then not
       be contacted at all. This is what <application>pg_dump</application>
       uses. The remote replication slot will then have to be created
       manually before the subscription can be activated.
      </para>
     </listitem>
     <listitem>
      <para>
       When dropping a subscription, the replication slot should be kept.
       This could be useful when the subscriber database is being moved to a
       different host and will be activated from there. In that case,
       disassociate the slot from the subscription using
       <link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION</command></link>
       before attempting to drop the subscription. (A sketch of this sequence
       is shown after this list.)
      </para>
     </listitem>

     <listitem>
      <para>
       When dropping a subscription, the remote host is not reachable. In
       that case, disassociate the slot from the subscription
       using <command>ALTER SUBSCRIPTION</command> before attempting to drop
       the subscription. If the remote database instance no longer exists, no
       further action is then necessary. If, however, the remote database
       instance is just unreachable, the replication slot (and any still
       remaining table synchronization slots) should then be
       dropped manually; otherwise it/they would continue to reserve WAL and might
       eventually cause the disk to fill up. Such cases should be carefully
       investigated.
      </para>
     </listitem>
    </itemizedlist>
   </para>
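   <para>
    For the last two scenarios, a sketch of the commands involved (using a
    hypothetical subscription named <literal>mysub</literal> whose slot has the
    default name) could look like this:
<programlisting>
test_sub=# ALTER SUBSCRIPTION mysub DISABLE;
test_sub=# ALTER SUBSCRIPTION mysub SET (slot_name = NONE);
test_sub=# DROP SUBSCRIPTION mysub;

-- if the old publisher is still running and the slot is no longer wanted
test_pub=# SELECT pg_drop_replication_slot('mysub');
</programlisting>
   </para>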
  <sect2 id="logical-replication-subscription-examples">
   <title>Examples: Set Up Logical Replication</title>

   <para>
    Create some test tables on the publisher.
<programlisting>
test_pub=# CREATE TABLE t1(a int, b text, PRIMARY KEY(a));
test_pub=# CREATE TABLE t2(c int, d text, PRIMARY KEY(c));
test_pub=# CREATE TABLE t3(e int, f text, PRIMARY KEY(e));
</programlisting></para>

   <para>
    Create the same tables on the subscriber.
<programlisting>
test_sub=# CREATE TABLE t1(a int, b text, PRIMARY KEY(a));
test_sub=# CREATE TABLE t2(c int, d text, PRIMARY KEY(c));
test_sub=# CREATE TABLE t3(e int, f text, PRIMARY KEY(e));
</programlisting></para>

   <para>
    Insert data to the tables at the publisher side.
<programlisting>
test_pub=# INSERT INTO t1 VALUES (1, 'one'), (2, 'two'), (3, 'three');
test_pub=# INSERT INTO t2 VALUES (1, 'A'), (2, 'B'), (3, 'C');
test_pub=# INSERT INTO t3 VALUES (1, 'i'), (2, 'ii'), (3, 'iii');
</programlisting></para>
   <para>
    Create publications for the tables. The publications <literal>pub2</literal>
    and <literal>pub3a</literal> disallow some
    <link linkend="sql-createpublication-params-with-publish"><literal>publish</literal></link>
    operations. The publication <literal>pub3b</literal> has a row filter (see
    <xref linkend="logical-replication-row-filter"/>).
<programlisting>
test_pub=# CREATE PUBLICATION pub1 FOR TABLE t1;
test_pub=# CREATE PUBLICATION pub2 FOR TABLE t2 WITH (publish = 'truncate');
test_pub=# CREATE PUBLICATION pub3a FOR TABLE t3 WITH (publish = 'truncate');
test_pub=# CREATE PUBLICATION pub3b FOR TABLE t3 WHERE (e > 5);
</programlisting></para>

   <para>
    Create subscriptions for the publications. The subscription
    <literal>sub3</literal> subscribes to both <literal>pub3a</literal> and
    <literal>pub3b</literal>. All subscriptions will copy initial data by default.
<programlisting>
test_sub=# CREATE SUBSCRIPTION sub1
test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub1'
test_sub-# PUBLICATION pub1;
test_sub=# CREATE SUBSCRIPTION sub2
test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub2'
test_sub-# PUBLICATION pub2;
test_sub=# CREATE SUBSCRIPTION sub3
test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=sub3'
test_sub-# PUBLICATION pub3a, pub3b;
</programlisting></para>
   <para>
    Observe that initial table data is copied, regardless of the
    <literal>publish</literal> operation of the publication.
<programlisting>
test_sub=# SELECT * FROM t1;

test_sub=# SELECT * FROM t2;
</programlisting></para>

   <para>
    Furthermore, because the initial data copy ignores the <literal>publish</literal>
    operation, and because publication <literal>pub3a</literal> has no row filter,
    it means the copied table <literal>t3</literal> contains all rows even when
    they do not match the row filter of publication <literal>pub3b</literal>.
<programlisting>
test_sub=# SELECT * FROM t3;
</programlisting></para>
   <para>
    Insert more data to the tables at the publisher side.
<programlisting>
test_pub=# INSERT INTO t1 VALUES (4, 'four'), (5, 'five'), (6, 'six');
test_pub=# INSERT INTO t2 VALUES (4, 'D'), (5, 'E'), (6, 'F');
test_pub=# INSERT INTO t3 VALUES (4, 'iv'), (5, 'v'), (6, 'vi');
</programlisting></para>

   <para>
    Now the publisher side data looks like:
<programlisting>
test_pub=# SELECT * FROM t1;

test_pub=# SELECT * FROM t2;

test_pub=# SELECT * FROM t3;
</programlisting></para>

   <para>
    Observe that during normal replication the appropriate
    <literal>publish</literal> operations are used. This means publications
    <literal>pub2</literal> and <literal>pub3a</literal> will not replicate the
    <literal>INSERT</literal>. Also, publication <literal>pub3b</literal> will
    only replicate data that matches the row filter of <literal>pub3b</literal>.
    Now the subscriber side data looks like:
<programlisting>
test_sub=# SELECT * FROM t1;

test_sub=# SELECT * FROM t2;

test_sub=# SELECT * FROM t3;
</programlisting></para>
  <sect2 id="logical-replication-subscription-examples-deferred-slot">
   <title>Examples: Deferred Replication Slot Creation</title>

   <para>
    There are some cases (e.g.
    <xref linkend="logical-replication-subscription-slot"/>) where, if the
    remote replication slot was not created automatically, the user must create
    it manually before the subscription can be activated. The steps to create
    the slot and activate the subscription are shown in the following examples.
    These examples specify the standard logical decoding output plugin
    (<literal>pgoutput</literal>), which is what the built-in logical
    replication uses.
   </para>

   <para>
    First, create a publication for the examples to use.
<programlisting>
test_pub=# CREATE PUBLICATION pub1 FOR ALL TABLES;
</programlisting></para>
   <para>
    Example 1: Where the subscription says <literal>connect = false</literal>
   </para>

   <para>
    Create the subscription.
<programlisting>
test_sub=# CREATE SUBSCRIPTION sub1
test_sub-# CONNECTION 'host=localhost dbname=test_pub'
test_sub-# PUBLICATION pub1
test_sub-# WITH (connect=false);
WARNING:  subscription was created, but is not connected
HINT:  To initiate replication, you must manually create the replication slot, enable the subscription, and refresh the subscription.
</programlisting></para>

   <para>
    On the publisher, manually create a slot. Because the name was not
    specified during <literal>CREATE SUBSCRIPTION</literal>, the name of the
    slot to create is the same as the subscription name, e.g. "sub1".
<programlisting>
test_pub=# SELECT * FROM pg_create_logical_replication_slot('sub1', 'pgoutput');
 slot_name |    lsn
-----------+-----------
</programlisting></para>

   <para>
    On the subscriber, complete the activation of the subscription. After
    this the tables of <literal>pub1</literal> will start replicating.
<programlisting>
test_sub=# ALTER SUBSCRIPTION sub1 ENABLE;
test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;
</programlisting></para>
   <para>
    Example 2: Where the subscription says <literal>connect = false</literal>,
    but also specifies the
    <link linkend="sql-createsubscription-params-with-slot-name"><literal>slot_name</literal></link>
    option.
   </para>

   <para>
    Create the subscription.
<programlisting>
test_sub=# CREATE SUBSCRIPTION sub1
test_sub-# CONNECTION 'host=localhost dbname=test_pub'
test_sub-# PUBLICATION pub1
test_sub-# WITH (connect=false, slot_name='myslot');
WARNING:  subscription was created, but is not connected
HINT:  To initiate replication, you must manually create the replication slot, enable the subscription, and refresh the subscription.
</programlisting></para>

   <para>
    On the publisher, manually create a slot using the same name that was
    specified during <literal>CREATE SUBSCRIPTION</literal>, e.g. "myslot".
<programlisting>
test_pub=# SELECT * FROM pg_create_logical_replication_slot('myslot', 'pgoutput');
 slot_name |    lsn
-----------+-----------
</programlisting></para>

   <para>
    On the subscriber, the remaining subscription activation steps are the
    same as before.
<programlisting>
test_sub=# ALTER SUBSCRIPTION sub1 ENABLE;
test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;
</programlisting></para>
   <para>
    Example 3: Where the subscription specifies <literal>slot_name = NONE</literal>
   </para>

   <para>
    Create the subscription. When <literal>slot_name = NONE</literal> then
    <literal>enabled = false</literal>, and
    <literal>create_slot = false</literal> are also needed.
<programlisting>
test_sub=# CREATE SUBSCRIPTION sub1
test_sub-# CONNECTION 'host=localhost dbname=test_pub'
test_sub-# PUBLICATION pub1
test_sub-# WITH (slot_name=NONE, enabled=false, create_slot=false);
</programlisting></para>

   <para>
    On the publisher, manually create a slot using any name, e.g. "myslot".
<programlisting>
test_pub=# SELECT * FROM pg_create_logical_replication_slot('myslot', 'pgoutput');
 slot_name |    lsn
-----------+-----------
</programlisting></para>

   <para>
    On the subscriber, associate the subscription with the slot name just
    created.
<programlisting>
test_sub=# ALTER SUBSCRIPTION sub1 SET (slot_name='myslot');
</programlisting></para>

   <para>
    The remaining subscription activation steps are the same as before.
<programlisting>
test_sub=# ALTER SUBSCRIPTION sub1 ENABLE;
test_sub=# ALTER SUBSCRIPTION sub1 REFRESH PUBLICATION;
</programlisting></para>
 <sect1 id="logical-replication-failover">
  <title>Logical Replication Failover</title>

  <para>
   To allow subscriber nodes to continue replicating data from the publisher
   node even when the publisher node goes down, there must be a physical standby
   corresponding to the publisher node. The logical slots on the primary server
   corresponding to the subscriptions can be synchronized to the standby server by
   specifying <literal>failover = true</literal> when creating subscriptions. See
   <xref linkend="logicaldecoding-replication-slots-synchronization"/> for details.
   Enabling the
   <link linkend="sql-createsubscription-params-with-failover"><literal>failover</literal></link>
   parameter ensures a seamless transition of those subscriptions after the
   standby is promoted. They can continue subscribing to publications on the
   new primary server.
  </para>
  <para>
   Because the slot synchronization logic copies asynchronously, it is
   necessary to confirm that replication slots have been synced to the standby
   server before the failover happens. To ensure a successful failover, the
   standby server must be ahead of the subscriber. This can be achieved by
   configuring
   <link linkend="guc-synchronized-standby-slots"><varname>synchronized_standby_slots</varname></link>.
  </para>
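  <para>
   As a sketch (the connection details, the publication name, and the physical
   replication slot name <literal>standby1_slot</literal> used by the standby
   are hypothetical), a failover-enabled subscription and the corresponding
   publisher setting might look like this:
<programlisting>
test_sub=# CREATE SUBSCRIPTION failover_sub
test_sub-# CONNECTION 'host=primary.example.com dbname=appdb'
test_sub-# PUBLICATION pub1
test_sub-# WITH (failover = true);

test_pub=# ALTER SYSTEM SET synchronized_standby_slots = 'standby1_slot';
test_pub=# SELECT pg_reload_conf();
</programlisting>
  </para>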
  <para>
   To confirm that the standby server is indeed ready for failover, follow these
   steps to verify that all necessary logical replication slots have been
   synchronized to the standby server:
  </para>

  <procedure>
   <step performance="required">
    <para>
     On the subscriber node, use the following SQL to identify which replication
     slots should be synced to the standby that we plan to promote. This query
     will return the relevant replication slots associated with the
     failover-enabled subscriptions.
<programlisting>
test_sub=# SELECT
               array_agg(quote_literal(s.subslotname)) AS slots
           FROM pg_subscription s
           WHERE s.subfailover AND
                 s.subslotname IS NOT NULL;
 slots
-------
 {'sub1','sub2','sub3'}
(1 row)
</programlisting></para>
   </step>
   <step performance="required">
    <para>
     On the subscriber node, use the following SQL to identify which table
     synchronization slots should be synced to the standby that we plan to promote.
     This query needs to be run on each database that includes the failover-enabled
     subscription(s). Note that the table sync slot should be synced to the standby
     server only if the table copy is finished
     (See <xref linkend="catalog-pg-subscription-rel"/>).
     We don't need to ensure that the table sync slots are synced in other scenarios
     as they will either be dropped or re-created on the new primary server in those
     cases.
<programlisting>
test_sub=# SELECT
               array_agg(quote_literal(slot_name)) AS slots
           FROM
           (
               SELECT CONCAT('pg_', srsubid, '_sync_', srrelid, '_', ctl.system_identifier) AS slot_name
               FROM pg_control_system() ctl, pg_subscription_rel r, pg_subscription s
               WHERE r.srsubstate = 'f' AND s.oid = r.srsubid AND s.subfailover
           );
 slots
-------
 {'pg_16394_sync_16385_7394666715149055164'}
(1 row)
</programlisting></para>
   </step>
   <step performance="required">
    <para>
     Check that the logical replication slots identified above exist on
     the standby server and are ready for failover.
<programlisting>
test_standby=# SELECT slot_name, (synced AND NOT temporary AND NOT conflicting) AS failover_ready
               FROM pg_replication_slots
               WHERE slot_name IN
                   ('sub1','sub2','sub3', 'pg_16394_sync_16385_7394666715149055164');
                 slot_name                  | failover_ready
--------------------------------------------+----------------
 sub1                                       | t
 sub2                                       | t
 sub3                                       | t
 pg_16394_sync_16385_7394666715149055164    | t
(4 rows)
</programlisting></para>
   </step>
  </procedure>

  <para>
   If all the slots are present on the standby server and the result
   (<literal>failover_ready</literal>) of the above SQL query is true, then
   existing subscriptions can continue subscribing to publications now on the
   new primary server.
  </para>
 <sect1 id="logical-replication-row-filter">
  <title>Row Filters</title>

  <para>
   By default, all data from all published tables will be replicated to the
   appropriate subscribers. The replicated data can be reduced by using a
   <firstterm>row filter</firstterm>. A user might choose to use row filters
   for behavioral, security or performance reasons. If a published table sets a
   row filter, a row is replicated only if its data satisfies the row filter
   expression. This allows a set of tables to be partially replicated. The row
   filter is defined per table. Use a <literal>WHERE</literal> clause after the
   table name for each published table that requires data to be filtered out.
   The <literal>WHERE</literal> clause must be enclosed by parentheses. See
   <xref linkend="sql-createpublication"/> for details.
  </para>
  <sect2 id="logical-replication-row-filter-rules">
   <title>Row Filter Rules</title>

   <para>
    Row filters are applied <emphasis>before</emphasis> publishing the changes.
    If the row filter evaluates to <literal>false</literal> or <literal>NULL</literal>
    then the row is not replicated. The <literal>WHERE</literal> clause expression
    is evaluated with the same role used for the replication connection (i.e.
    the role specified in the
    <link linkend="sql-createsubscription-params-connection"><literal>CONNECTION</literal></link>
    clause of the <xref linkend="sql-createsubscription"/>). Row filters have
    no effect for the <command>TRUNCATE</command> command.
   </para>
  <sect2 id="logical-replication-row-filter-restrictions">
   <title>Expression Restrictions</title>

   <para>
    The <literal>WHERE</literal> clause allows only simple expressions. It
    cannot contain user-defined functions, operators, types, and collations,
    system column references or non-immutable built-in functions.
   </para>

   <para>
    If a publication publishes <command>UPDATE</command> or
    <command>DELETE</command> operations, the row filter <literal>WHERE</literal>
    clause must contain only columns that are covered by the replica identity
    (see <xref linkend="sql-altertable-replica-identity"/>). If a publication
    publishes only <command>INSERT</command> operations, the row filter
    <literal>WHERE</literal> clause can use any column.
   </para>
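   <para>
    For example (a sketch using a hypothetical table), the following publication
    replicates all operation types, so its row filter may reference only the
    replica identity column <literal>id</literal> (the primary key by default);
    a row filter referencing <literal>region</literal> instead would cause
    published <command>UPDATE</command> and <command>DELETE</command> operations
    on the table to fail unless the publication were limited to
    <literal>publish = 'insert'</literal>:
<programlisting>
test_pub=# CREATE TABLE accounts (id int PRIMARY KEY, region text, balance numeric);
test_pub=# CREATE PUBLICATION pub_accounts FOR TABLE accounts WHERE (id > 1000);
</programlisting>
   </para>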
  <sect2 id="logical-replication-row-filter-transformations">
   <title>UPDATE Transformations</title>

   <para>
    Whenever an <command>UPDATE</command> is processed, the row filter
    expression is evaluated for both the old and new row (i.e. using the data
    before and after the update). If both evaluations are <literal>true</literal>,
    it replicates the <command>UPDATE</command> change. If both evaluations are
    <literal>false</literal>, it doesn't replicate the change. If only one of
    the old/new rows matches the row filter expression, the <command>UPDATE</command>
    is transformed to <command>INSERT</command> or <command>DELETE</command>, to
    avoid any data inconsistency. The row on the subscriber should reflect what
    is defined by the row filter expression on the publisher.
   </para>

   <para>
    If the old row satisfies the row filter expression (it was sent to the
    subscriber) but the new row doesn't, then, from a data consistency
    perspective the old row should be removed from the subscriber.
    So the <command>UPDATE</command> is transformed into a <command>DELETE</command>.
   </para>

   <para>
    If the old row doesn't satisfy the row filter expression (it wasn't sent
    to the subscriber) but the new row does, then, from a data consistency
    perspective the new row should be added to the subscriber.
    So the <command>UPDATE</command> is transformed into an <command>INSERT</command>.
   </para>
   <para>
    <xref linkend="logical-replication-row-filter-transformations-summary"/>
    summarizes the applied transformations.
   </para>

   <table id="logical-replication-row-filter-transformations-summary">
    <title><command>UPDATE</command> Transformation Summary</title>
    <tgroup cols="3">
     <thead>
      <row>
       <entry>Old row</entry><entry>New row</entry><entry>Transformation</entry>
      </row>
     </thead>
     <tbody>
      <row>
       <entry>no match</entry><entry>no match</entry><entry>don't replicate</entry>
      </row>
      <row>
       <entry>no match</entry><entry>match</entry><entry><literal>INSERT</literal></entry>
      </row>
      <row>
       <entry>match</entry><entry>no match</entry><entry><literal>DELETE</literal></entry>
      </row>
      <row>
       <entry>match</entry><entry>match</entry><entry><literal>UPDATE</literal></entry>
      </row>
     </tbody>
    </tgroup>
   </table>
  <sect2 id="logical-replication-row-filter-partitioned-table">
   <title>Partitioned Tables</title>

   <para>
    If the publication contains a partitioned table, the publication parameter
    <link linkend="sql-createpublication-params-with-publish-via-partition-root"><literal>publish_via_partition_root</literal></link>
    determines which row filter is used. If <literal>publish_via_partition_root</literal>
    is <literal>true</literal>, the <emphasis>root partitioned table's</emphasis>
    row filter is used. Otherwise, if <literal>publish_via_partition_root</literal>
    is <literal>false</literal> (default), each <emphasis>partition's</emphasis>
    row filter is used.
   </para>
  <sect2 id="logical-replication-row-filter-initial-data-sync">
   <title>Initial Data Synchronization</title>

   <para>
    If the subscription requires copying pre-existing table data
    and a publication contains <literal>WHERE</literal> clauses, only data that
    satisfies the row filter expressions is copied to the subscriber.
   </para>

   <para>
    If the subscription has several publications in which a table has been
    published with different <literal>WHERE</literal> clauses, rows that satisfy
    <emphasis>any</emphasis> of the expressions will be copied. See
    <xref linkend="logical-replication-row-filter-combining"/> for details.
   </para>

   <para>
    Because initial data synchronization does not take into account the
    <link linkend="sql-createpublication-params-with-publish"><literal>publish</literal></link>
    parameter when copying existing table data, some rows may be copied that
    would not be replicated using DML. Refer to
    <xref linkend="logical-replication-snapshot"/>, and see
    <xref linkend="logical-replication-subscription-examples"/> for examples.
   </para>

   <para>
    If the subscriber is in a release prior to 15, the copy of pre-existing data
    doesn't use row filters even if they are defined in the publication.
    This is because old releases can only copy the entire table data.
   </para>
  <sect2 id="logical-replication-row-filter-combining">
   <title>Combining Multiple Row Filters</title>

   <para>
    If the subscription has several publications in which the same table has
    been published with different row filters (for the same
    <link linkend="sql-createpublication-params-with-publish"><literal>publish</literal></link>
    operation), those expressions get ORed together, so that rows satisfying
    <emphasis>any</emphasis> of the expressions will be replicated. This means all
    the other row filters for the same table become redundant if:

    <itemizedlist>
     <listitem><para>
      One of the publications has no row filter.
     </para></listitem>

     <listitem><para>
      One of the publications was created using
      <link linkend="sql-createpublication-params-for-all-tables"><literal>FOR ALL TABLES</literal></link>.
      This clause does not allow row filters.
     </para></listitem>

     <listitem><para>
      One of the publications was created using
      <link linkend="sql-createpublication-params-for-tables-in-schema"><literal>FOR TABLES IN SCHEMA</literal></link>
      and the table belongs to the referred schema. This clause does not allow
      row filters.
     </para></listitem>
    </itemizedlist></para>
  <sect2 id="logical-replication-row-filter-examples">
   <title>Examples</title>

   <para>
    Create some tables to be used in the following examples.
<programlisting>
test_pub=# CREATE TABLE t1(a int, b int, c text, PRIMARY KEY(a,c));
test_pub=# CREATE TABLE t2(d int, e int, f int, PRIMARY KEY(d));
test_pub=# CREATE TABLE t3(g int, h int, i int, PRIMARY KEY(g));
</programlisting></para>

   <para>
    Create some publications. Publication <literal>p1</literal> has one table
    (<literal>t1</literal>) and that table has a row filter. Publication
    <literal>p2</literal> has two tables. Table <literal>t1</literal> has no row
    filter, and table <literal>t2</literal> has a row filter. Publication
    <literal>p3</literal> has two tables, and both of them have a row filter.
<programlisting>
test_pub=# CREATE PUBLICATION p1 FOR TABLE t1 WHERE (a > 5 AND c = 'NSW');
test_pub=# CREATE PUBLICATION p2 FOR TABLE t1, t2 WHERE (e = 99);
test_pub=# CREATE PUBLICATION p3 FOR TABLE t2 WHERE (d = 10), t3 WHERE (g = 10);
</programlisting></para>
   <para>
    <command>psql</command> can be used to show the row filter expressions (if
    defined) for each publication.
<programlisting>
test_pub=# \dRp+
                              Publication p1
  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Via root
----------+------------+---------+---------+---------+-----------+----------
 postgres | f          | t       | t       | t       | t         | f
Tables:
    "public.t1" WHERE ((a > 5) AND (c = 'NSW'::text))

                              Publication p2
  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Via root
----------+------------+---------+---------+---------+-----------+----------
 postgres | f          | t       | t       | t       | t         | f
Tables:
    "public.t1"
    "public.t2" WHERE (e = 99)

                              Publication p3
  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Via root
----------+------------+---------+---------+---------+-----------+----------
 postgres | f          | t       | t       | t       | t         | f
Tables:
    "public.t2" WHERE (d = 10)
    "public.t3" WHERE (g = 10)
</programlisting></para>
   <para>
    <command>psql</command> can be used to show the row filter expressions (if
    defined) for each table. See that table <literal>t1</literal> is a member
    of two publications, but has a row filter only in <literal>p1</literal>.
    See that table <literal>t2</literal> is a member of two publications, and
    has a different row filter in each of them.
<programlisting>
test_pub=# \d t1
                 Table "public.t1"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 a      | integer |           | not null |
 b      | integer |           |          |
 c      | text    |           | not null |
Indexes:
    "t1_pkey" PRIMARY KEY, btree (a, c)
Publications:
    "p1" WHERE ((a > 5) AND (c = 'NSW'::text))
    "p2"

test_pub=# \d t2
                 Table "public.t2"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 d      | integer |           | not null |
 e      | integer |           |          |
 f      | integer |           |          |
Indexes:
    "t2_pkey" PRIMARY KEY, btree (d)
Publications:
    "p2" WHERE (e = 99)
    "p3" WHERE (d = 10)

test_pub=# \d t3
                 Table "public.t3"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 g      | integer |           | not null |
 h      | integer |           |          |
 i      | integer |           |          |
Indexes:
    "t3_pkey" PRIMARY KEY, btree (g)
Publications:
    "p3" WHERE (g = 10)
</programlisting></para>
   <para>
    On the subscriber node, create a table <literal>t1</literal> with the same
    definition as the one on the publisher, and also create the subscription
    <literal>s1</literal> that subscribes to the publication <literal>p1</literal>.
<programlisting>
test_sub=# CREATE TABLE t1(a int, b int, c text, PRIMARY KEY(a,c));
test_sub=# CREATE SUBSCRIPTION s1
test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=s1'
test_sub-# PUBLICATION p1;
</programlisting></para>

   <para>
    Insert some rows. Only the rows satisfying the <literal>t1 WHERE</literal>
    clause of publication <literal>p1</literal> are replicated.
<programlisting>
test_pub=# INSERT INTO t1 VALUES (2, 102, 'NSW');
test_pub=# INSERT INTO t1 VALUES (3, 103, 'QLD');
test_pub=# INSERT INTO t1 VALUES (4, 104, 'VIC');
test_pub=# INSERT INTO t1 VALUES (5, 105, 'ACT');
test_pub=# INSERT INTO t1 VALUES (6, 106, 'NSW');
test_pub=# INSERT INTO t1 VALUES (7, 107, 'NT');
test_pub=# INSERT INTO t1 VALUES (8, 108, 'QLD');
test_pub=# INSERT INTO t1 VALUES (9, 109, 'NSW');

test_pub=# SELECT * FROM t1;

test_sub=# SELECT * FROM t1;
</programlisting></para>
   <para>
    Update some data, where the old and new row values both
    satisfy the <literal>t1 WHERE</literal> clause of publication
    <literal>p1</literal>. The <command>UPDATE</command> replicates
    the change as normal.
<programlisting>
test_pub=# UPDATE t1 SET b = 999 WHERE a = 6;

test_pub=# SELECT * FROM t1;

test_sub=# SELECT * FROM t1;
</programlisting></para>

   <para>
    Update some data, where the old row values did not satisfy
    the <literal>t1 WHERE</literal> clause of publication <literal>p1</literal>,
    but the new row values do satisfy it. The <command>UPDATE</command> is
    transformed into an <command>INSERT</command> and the change is replicated.
    See the new row on the subscriber.
<programlisting>
test_pub=# UPDATE t1 SET a = 555 WHERE a = 2;

test_pub=# SELECT * FROM t1;

test_sub=# SELECT * FROM t1;
</programlisting></para>

   <para>
    Update some data, where the old row values satisfied
    the <literal>t1 WHERE</literal> clause of publication <literal>p1</literal>,
    but the new row values do not satisfy it. The <command>UPDATE</command> is
    transformed into a <command>DELETE</command> and the change is replicated.
    See that the row is removed from the subscriber.
<programlisting>
test_pub=# UPDATE t1 SET c = 'VIC' WHERE a = 9;

test_pub=# SELECT * FROM t1;

test_sub=# SELECT * FROM t1;
</programlisting></para>
   <para>
    The following examples show how the publication parameter
    <link linkend="sql-createpublication-params-with-publish-via-partition-root"><literal>publish_via_partition_root</literal></link>
    determines whether the row filter of the parent or child table will be used
    in the case of partitioned tables.
   </para>

   <para>
    Create a partitioned table on the publisher.
<programlisting>
test_pub=# CREATE TABLE parent(a int PRIMARY KEY) PARTITION BY RANGE(a);
test_pub=# CREATE TABLE child PARTITION OF parent DEFAULT;
</programlisting>
    Create the same tables on the subscriber.
<programlisting>
test_sub=# CREATE TABLE parent(a int PRIMARY KEY) PARTITION BY RANGE(a);
test_sub=# CREATE TABLE child PARTITION OF parent DEFAULT;
</programlisting></para>

   <para>
    Create a publication <literal>p4</literal>, and then subscribe to it. The
    publication parameter <literal>publish_via_partition_root</literal> is set
    as true. There are row filters defined on both the partitioned table
    (<literal>parent</literal>), and on the partition (<literal>child</literal>).
<programlisting>
test_pub=# CREATE PUBLICATION p4 FOR TABLE parent WHERE (a &lt; 5), child WHERE (a >= 5)
test_pub-# WITH (publish_via_partition_root=true);

test_sub=# CREATE SUBSCRIPTION s4
test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=s4'
test_sub-# PUBLICATION p4;
</programlisting></para>
   <para>
    Insert some values directly into the <literal>parent</literal> and
    <literal>child</literal> tables. They replicate using the row filter of
    <literal>parent</literal> (because <literal>publish_via_partition_root</literal>
    is true).
<programlisting>
test_pub=# INSERT INTO parent VALUES (2), (4), (6);
test_pub=# INSERT INTO child VALUES (3), (5), (7);

test_pub=# SELECT * FROM parent ORDER BY a;

test_sub=# SELECT * FROM parent ORDER BY a;
</programlisting></para>

   <para>
    Repeat the same test, but with a different value for <literal>publish_via_partition_root</literal>.
    The publication parameter <literal>publish_via_partition_root</literal> is
    set as false. A row filter is defined on the partition (<literal>child</literal>).
<programlisting>
test_pub=# DROP PUBLICATION p4;
test_pub=# CREATE PUBLICATION p4 FOR TABLE parent, child WHERE (a >= 5)
test_pub-# WITH (publish_via_partition_root=false);

test_sub=# ALTER SUBSCRIPTION s4 REFRESH PUBLICATION;
</programlisting></para>
   <para>
    Do the inserts on the publisher the same as before. They replicate using the
    row filter of <literal>child</literal> (because
    <literal>publish_via_partition_root</literal> is false).
<programlisting>
test_pub=# TRUNCATE parent;
test_pub=# INSERT INTO parent VALUES (2), (4), (6);
test_pub=# INSERT INTO child VALUES (3), (5), (7);

test_pub=# SELECT * FROM parent ORDER BY a;

test_sub=# SELECT * FROM child ORDER BY a;
</programlisting></para>
 <sect1 id="logical-replication-col-lists">
  <title>Column Lists</title>

  <para>
   Each publication can optionally specify which columns of each table are
   replicated to subscribers. The table on the subscriber side must have at
   least all the columns that are published. If no column list is specified,
   then all columns on the publisher are replicated.
   See <xref linkend="sql-createpublication"/> for details on the syntax.
  </para>

  <para>
   The choice of columns can be based on behavioral or performance reasons.
   However, do not rely on this feature for security: a malicious subscriber
   is able to obtain data from columns that are not specifically
   published. If security is a consideration, protections can be applied
   at the publisher side.
  </para>

  <para>
   If no column list is specified, any columns added to the table later are
   automatically replicated. This means that having a column list which names
   all columns is not the same as having no column list at all.
  </para>

  <para>
   A column list can contain only simple column references. The order
   of columns in the list is not preserved.
  </para>
  <para>
   Specifying a column list when the publication also publishes
   <link linkend="sql-createpublication-params-for-tables-in-schema"><literal>FOR TABLES IN SCHEMA</literal></link>
   is not supported.
  </para>

  <para>
   For partitioned tables, the publication parameter
   <link linkend="sql-createpublication-params-with-publish-via-partition-root"><literal>publish_via_partition_root</literal></link>
   determines which column list is used. If <literal>publish_via_partition_root</literal>
   is <literal>true</literal>, the root partitioned table's column list is
   used. Otherwise, if <literal>publish_via_partition_root</literal> is
   <literal>false</literal> (the default), each partition's column list is used.
  </para>
  <para>
   If a publication publishes <command>UPDATE</command> or
   <command>DELETE</command> operations, any column list must include the
   table's replica identity columns (see
   <xref linkend="sql-altertable-replica-identity"/>).
   If a publication publishes only <command>INSERT</command> operations, then
   the column list may omit replica identity columns.
  </para>

  <para>
   Column lists have no effect for the <literal>TRUNCATE</literal> command.
  </para>

  <para>
   During initial data synchronization, only the published columns are
   copied. However, if the subscriber is from a release prior to 15, then
   all the columns in the table are copied during initial data synchronization,
   ignoring any column lists. If the subscriber is from a release prior to 18,
   then initial table synchronization won't copy generated columns even if they
   are defined in the publisher.
  </para>
  <warning id="logical-replication-col-list-combining">
   <title>Warning: Combining Column Lists from Multiple Publications</title>

   <para>
    There's currently no support for subscriptions comprising several
    publications where the same table has been published with different
    column lists. <xref linkend="sql-createsubscription"/> disallows
    creating such subscriptions, but it is still possible to get into
    that situation by adding or altering column lists on the publication
    side after a subscription has been created.
   </para>

   <para>
    This means changing the column lists of tables on publications that are
    already subscribed could lead to errors being thrown on the subscriber
    side.
   </para>

   <para>
    If a subscription is affected by this problem, the only way to resume
    replication is to adjust one of the column lists on the publication
    side so that they all match; and then either recreate the subscription,
    or use <link linkend="sql-altersubscription-params-setadddrop-publication">
    <literal>ALTER SUBSCRIPTION ... DROP PUBLICATION</literal></link> to
    remove one of the offending publications and add it again.
   </para>
  </warning>
  <sect2 id="logical-replication-col-list-examples">
   <title>Examples</title>

   <para>
    Create a table <literal>t1</literal> to be used in the following example.
<programlisting>
test_pub=# CREATE TABLE t1(id int, a text, b text, c text, d text, e text, PRIMARY KEY(id));
</programlisting></para>

   <para>
    Create a publication <literal>p1</literal>. A column list is defined for
    table <literal>t1</literal> to reduce the number of columns that will be
    replicated. Notice that the order of column names in the column list does
    not matter.
<programlisting>
test_pub=# CREATE PUBLICATION p1 FOR TABLE t1 (id, b, a, d);
</programlisting></para>
   <para>
    <literal>psql</literal> can be used to show the column lists (if defined)
    for each publication.
<programlisting>
test_pub=# \dRp+
                              Publication p1
  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Via root
----------+------------+---------+---------+---------+-----------+----------
 postgres | f          | t       | t       | t       | t         | f
Tables:
    "public.t1" (id, a, b, d)
</programlisting></para>

   <para>
    <literal>psql</literal> can be used to show the column lists (if defined)
    for each table.
<programlisting>
test_pub=# \d t1
                 Table "public.t1"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 id     | integer |           | not null |
 a      | text    |           |          |
 b      | text    |           |          |
 c      | text    |           |          |
 d      | text    |           |          |
 e      | text    |           |          |
Indexes:
    "t1_pkey" PRIMARY KEY, btree (id)
Publications:
    "p1" (id, a, b, d)
</programlisting></para>
   <para>
    On the subscriber node, create a table <literal>t1</literal> which now
    only needs a subset of the columns that were on the publisher table
    <literal>t1</literal>, and also create the subscription
    <literal>s1</literal> that subscribes to the publication
    <literal>p1</literal>.
<programlisting>
test_sub=# CREATE TABLE t1(id int, b text, a text, d text, PRIMARY KEY(id));
test_sub=# CREATE SUBSCRIPTION s1
test_sub-# CONNECTION 'host=localhost dbname=test_pub application_name=s1'
test_sub-# PUBLICATION p1;
</programlisting></para>
   <para>
    On the publisher node, insert some rows to table <literal>t1</literal>.
<programlisting>
test_pub=# INSERT INTO t1 VALUES(1, 'a-1', 'b-1', 'c-1', 'd-1', 'e-1');
test_pub=# INSERT INTO t1 VALUES(2, 'a-2', 'b-2', 'c-2', 'd-2', 'e-2');
test_pub=# INSERT INTO t1 VALUES(3, 'a-3', 'b-3', 'c-3', 'd-3', 'e-3');

test_pub=# SELECT * FROM t1 ORDER BY id;
 id |  a  |  b  |  c  |  d  |  e
----+-----+-----+-----+-----+-----
  1 | a-1 | b-1 | c-1 | d-1 | e-1
  2 | a-2 | b-2 | c-2 | d-2 | e-2
  3 | a-3 | b-3 | c-3 | d-3 | e-3
(3 rows)
</programlisting></para>
   <para>
    Only data from the column list of publication <literal>p1</literal> is
    replicated.
<programlisting>
test_sub=# SELECT * FROM t1 ORDER BY id;
 id |  b  |  a  |  d
----+-----+-----+-----
  1 | b-1 | a-1 | d-1
  2 | b-2 | a-2 | d-2
  3 | b-3 | a-3 | d-3
(3 rows)
</programlisting></para>
 <sect1 id="logical-replication-conflicts">
  <title>Conflicts</title>

  <para>
   Logical replication behaves similarly to normal DML operations in that
   the data will be updated even if it was changed locally on the subscriber
   node. If incoming data violates any constraints the replication will
   stop. This is referred to as a <firstterm>conflict</firstterm>. When
   replicating <command>UPDATE</command> or <command>DELETE</command>
   operations, missing data is also considered as a
   <firstterm>conflict</firstterm>, but does not result in an error and such
   operations will simply be skipped.
  </para>

  <para>
   Additional logging is triggered, and the conflict statistics are collected (displayed in the
   <link linkend="monitoring-pg-stat-subscription-stats"><structname>pg_stat_subscription_stats</structname></link> view)
   in the following <firstterm>conflict</firstterm> cases:
   <variablelist>
    <varlistentry id="conflict-insert-exists" xreflabel="insert_exists">
     <term><literal>insert_exists</literal></term>
     <listitem>
      <para>
       Inserting a row that violates a <literal>NOT DEFERRABLE</literal>
       unique constraint. Note that to log the origin and commit
       timestamp details of the conflicting key,
       <link linkend="guc-track-commit-timestamp"><varname>track_commit_timestamp</varname></link>
       should be enabled on the subscriber. In this case, an error will be
       raised until the conflict is resolved manually.
      </para>
     </listitem>
    </varlistentry>

    <varlistentry id="conflict-update-origin-differs" xreflabel="update_origin_differs">
     <term><literal>update_origin_differs</literal></term>
     <listitem>
      <para>
       Updating a row that was previously modified by another origin.
       Note that this conflict can only be detected when
       <link linkend="guc-track-commit-timestamp"><varname>track_commit_timestamp</varname></link>
       is enabled on the subscriber. Currently, the update is always applied
       regardless of the origin of the local row.
      </para>
     </listitem>
    </varlistentry>
    <varlistentry id="conflict-update-exists" xreflabel="update_exists">
     <term><literal>update_exists</literal></term>
     <listitem>
      <para>
       The updated value of a row violates a <literal>NOT DEFERRABLE</literal>
       unique constraint. Note that to log the origin and commit
       timestamp details of the conflicting key,
       <link linkend="guc-track-commit-timestamp"><varname>track_commit_timestamp</varname></link>
       should be enabled on the subscriber. In this case, an error will be
       raised until the conflict is resolved manually. Note that when updating a
       partitioned table, if the updated row value satisfies another partition
       constraint resulting in the row being inserted into a new partition, the
       <literal>insert_exists</literal> conflict may arise if the new row
       violates a <literal>NOT DEFERRABLE</literal> unique constraint.
      </para>
     </listitem>
    </varlistentry>

    <varlistentry id="conflict-update-missing" xreflabel="update_missing">
     <term><literal>update_missing</literal></term>
     <listitem>
      <para>
       The tuple to be updated was not found. The update will simply be
       skipped in this scenario.
      </para>
     </listitem>
    </varlistentry>
    <varlistentry id="conflict-delete-origin-differs" xreflabel="delete_origin_differs">
     <term><literal>delete_origin_differs</literal></term>
     <listitem>
      <para>
       Deleting a row that was previously modified by another origin. Note that
       this conflict can only be detected when
       <link linkend="guc-track-commit-timestamp"><varname>track_commit_timestamp</varname></link>
       is enabled on the subscriber. Currently, the delete is always applied
       regardless of the origin of the local row.
      </para>
     </listitem>
    </varlistentry>

    <varlistentry id="conflict-delete-missing" xreflabel="delete_missing">
     <term><literal>delete_missing</literal></term>
     <listitem>
      <para>
       The tuple to be deleted was not found. The delete will simply be
       skipped in this scenario.
      </para>
     </listitem>
    </varlistentry>
   </variablelist>
  </para>

  <para>
   Note that there are other conflict scenarios, such as exclusion constraint
   violations. Currently, we do not provide additional details for them in the
   log.
  </para>
  <para>
   The log format for logical replication conflicts is as follows:
<synopsis>
LOG:  conflict detected on relation "<replaceable>schemaname</replaceable>.<replaceable>tablename</replaceable>": conflict=<replaceable>conflict_type</replaceable>
DETAIL:  <replaceable class="parameter">detailed_explanation</replaceable>.
         {<replaceable class="parameter">detail_values</replaceable> [; ... ]}.

<phrase>where <replaceable class="parameter">detail_values</replaceable> is one of:</phrase>

    <literal>Key</literal> (<replaceable>column_name</replaceable> <optional>, ...</optional>)=(<replaceable>column_value</replaceable> <optional>, ...</optional>)
    <literal>existing local tuple</literal> <optional>(<replaceable>column_name</replaceable> <optional>, ...</optional>)=</optional>(<replaceable>column_value</replaceable> <optional>, ...</optional>)
    <literal>remote tuple</literal> <optional>(<replaceable>column_name</replaceable> <optional>, ...</optional>)=</optional>(<replaceable>column_value</replaceable> <optional>, ...</optional>)
    <literal>replica identity</literal> {(<replaceable>column_name</replaceable> <optional>, ...</optional>)=(<replaceable>column_value</replaceable> <optional>, ...</optional>) | full <optional>(<replaceable>column_name</replaceable> <optional>, ...</optional>)=</optional>(<replaceable>column_value</replaceable> <optional>, ...</optional>)}
</synopsis>
  </para>
  <para>
   The log provides the following information:
   <variablelist>
    <varlistentry>
     <term><literal>LOG</literal></term>
     <listitem>
      <itemizedlist>
       <listitem>
        <para>
         <replaceable>schemaname</replaceable>.<replaceable>tablename</replaceable>
         identifies the local relation involved in the conflict.
        </para>
       </listitem>
       <listitem>
        <para>
         <replaceable>conflict_type</replaceable> is the type of conflict that occurred
         (e.g., <literal>insert_exists</literal>, <literal>update_exists</literal>).
        </para>
       </listitem>
      </itemizedlist>
     </listitem>
    </varlistentry>

    <varlistentry>
     <term><literal>DETAIL</literal></term>
     <listitem>
      <itemizedlist>
       <listitem>
        <para>
         <replaceable class="parameter">detailed_explanation</replaceable> includes
         the origin, transaction ID, and commit timestamp of the transaction that
         modified the existing local tuple, if available.
        </para>
       </listitem>
       <listitem>
        <para>
         The <literal>Key</literal> section includes the key values of the local
         tuple that violated a unique constraint for
         <literal>insert_exists</literal> or <literal>update_exists</literal>
         conflicts.
        </para>
       </listitem>
       <listitem>
        <para>
         The <literal>existing local tuple</literal> section includes the local
         tuple if its origin differs from the remote tuple for
         <literal>update_origin_differs</literal> or <literal>delete_origin_differs</literal>
         conflicts, or if the key value conflicts with the remote tuple for
         <literal>insert_exists</literal> or <literal>update_exists</literal>
         conflicts.
        </para>
       </listitem>
       <listitem>
        <para>
         The <literal>remote tuple</literal> section includes the new tuple from
         the remote insert or update operation that caused the conflict. Note that
         for an update operation, the column value of the new tuple will be null
         if the value is unchanged and toasted.
        </para>
       </listitem>
       <listitem>
        <para>
         The <literal>replica identity</literal> section includes the replica
         identity key values that were used to search for the existing local
         tuple to be updated or deleted. This may include the full tuple value
         if the local relation is marked with
         <link linkend="sql-altertable-replica-identity-full"><literal>REPLICA IDENTITY FULL</literal></link>.
        </para>
       </listitem>
       <listitem>
        <para>
         <replaceable class="parameter">column_name</replaceable> is the column name.
         For <literal>existing local tuple</literal>, <literal>remote tuple</literal>,
         and <literal>replica identity full</literal> cases, column names are
         logged only if the user lacks the privilege to access all columns of
         the table. If column names are present, they appear in the same order
         as the corresponding column values.
        </para>
       </listitem>
       <listitem>
        <para>
         <replaceable class="parameter">column_value</replaceable> is the column value.
         The large column values are truncated to 64 bytes.
        </para>
       </listitem>
      </itemizedlist>
     </listitem>
    </varlistentry>
   </variablelist>
  </para>
  <para>
   Logical replication operations are performed with the privileges of the role
   which owns the subscription. Permissions failures on target tables will
   cause replication conflicts, as will enabled
   <link linkend="ddl-rowsecurity">row-level security</link> on target tables
   that the subscription owner is subject to, without regard to whether any
   policy would ordinarily reject the <command>INSERT</command>,
   <command>UPDATE</command>, <command>DELETE</command> or
   <command>TRUNCATE</command> which is being replicated. This restriction on
   row-level security may be lifted in a future version of
   <productname>PostgreSQL</productname>.
  </para>

  <para>
   A conflict that produces an error will stop the replication; it must be
   resolved manually by the user. Details about the conflict can be found in
   the subscriber's server log.
  </para>
  <para>
   The resolution can be done either by changing data or permissions on the subscriber so
   that it does not conflict with the incoming change or by skipping the
   transaction that conflicts with the existing data. When a conflict produces
   an error, the replication won't proceed, and the logical replication worker will
   emit the following kind of message to the subscriber's server log:
<programlisting>
ERROR:  conflict detected on relation "public.test": conflict=insert_exists
DETAIL:  Key already exists in unique index "t_pkey", which was modified locally in transaction 740 at 2024-06-26 10:47:04.727375+08.
         Key (c)=(1); existing local tuple (1, 'local'); remote tuple (1, 'remote').
CONTEXT:  processing remote data for replication origin "pg_16395" during "INSERT" for replication target relation "public.test" in transaction 725 finished at 0/14C0378
</programlisting>
  </para>
The LSN of the transaction that contains the change violating the constraint and
the replication origin name can be found from the server log (LSN 0/14C0378 and
replication origin <literal>pg_16395</literal> in the above case). The
transaction that produced the conflict can be skipped by using
<link linkend="sql-altersubscription-params-skip"><command>ALTER SUBSCRIPTION ... SKIP</command></link>
with the finish LSN (i.e., LSN 0/14C0378). The finish LSN could be an LSN at
which the transaction is committed or prepared on the publisher. Alternatively,
the transaction can also be skipped by calling the
<link linkend="pg-replication-origin-advance"><function>pg_replication_origin_advance()</function></link>
function. Before using this function, the subscription needs to be disabled
temporarily either by
<link linkend="sql-altersubscription-params-disable"><command>ALTER SUBSCRIPTION ... DISABLE</command></link>
or, the subscription can be used with the
<link linkend="sql-createsubscription-params-with-disable-on-error"><literal>disable_on_error</literal></link>
option. Then, you can use the <function>pg_replication_origin_advance()</function>
function with the <parameter>node_name</parameter> (i.e., <literal>pg_16395</literal>)
and the next LSN of the finish LSN (i.e., 0/14C0379). The current position of
origins can be seen in the
<link linkend="view-pg-replication-origin-status"><structname>pg_replication_origin_status</structname></link>
system view. Please note that skipping the whole transaction includes skipping
changes that might not violate any constraint. This can easily make the
subscriber inconsistent.
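For example, using the LSN and replication origin reported in the log message
above, the conflicting transaction could be skipped in either of the following
ways (the subscription name <literal>mysub</literal> is only an illustrative
placeholder):

<programlisting>
-- Skip the remote transaction whose finish LSN is 0/14C0378
ALTER SUBSCRIPTION mysub SKIP (lsn = '0/14C0378');

-- Or, after disabling the subscription, advance the origin past the
-- conflicting transaction (the next LSN after the finish LSN)
ALTER SUBSCRIPTION mysub DISABLE;
SELECT pg_replication_origin_advance('pg_16395', '0/14C0379');
ALTER SUBSCRIPTION mysub ENABLE;
</programlisting>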
The additional details regarding conflicting rows, such as their origin and
commit timestamp, can be seen in the <literal>DETAIL</literal> line of the
log. But note that this information is only available when
<link linkend="guc-track-commit-timestamp"><varname>track_commit_timestamp</varname></link>
is enabled on the subscriber. Users can use this information to decide
whether to retain the local change or adopt the remote alteration. For
instance, the <literal>DETAIL</literal> line in the above log indicates that
the existing row was modified locally; users can then manually apply whichever
change they decide to keep on the subscriber.
When the
<link linkend="sql-createsubscription-params-with-streaming"><literal>streaming</literal></link>
mode is <literal>parallel</literal>, the finish LSN of failed transactions
may not be logged. In that case, it may be necessary to change the streaming
mode to <literal>on</literal> or <literal>off</literal> and cause the same
conflicts again so the finish LSN of the failed transaction will be written
to the server log. For the usage of finish LSN, please refer to
<link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... SKIP</command></link>.
<sect1 id="logical-replication-restrictions">
<title>Restrictions</title>

Logical replication currently has the following restrictions or missing
functionality. These might be addressed in future releases.
The database schema and DDL commands are not replicated. The initial
schema can be copied by hand using <literal>pg_dump
--schema-only</literal>. Subsequent schema changes would need to be kept
in sync manually. (Note, however, that there is no need for the schemas
to be absolutely the same on both sides.) Logical replication is robust
when schema definitions change in a live database: When the schema is
changed on the publisher and replicated data starts arriving at the
subscriber but does not fit into the table schema, replication will error
until the schema is updated. In many cases, intermittent errors can be
avoided by applying additive schema changes to the subscriber first.
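For example, the initial schema could be copied from the publisher to the
subscriber along the following lines (the host names and the database name
<literal>mydb</literal> are placeholders for your own setup):

<programlisting>
pg_dump --schema-only -h publisher-host mydb | psql -h subscriber-host mydb
</programlisting>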
Sequence data is not replicated. The data in serial or identity columns
backed by sequences will of course be replicated as part of the table,
but the sequence itself would still show the start value on the
subscriber. If the subscriber is used as a read-only database, then this
should typically not be a problem. If, however, some kind of switchover
or failover to the subscriber database is intended, then the sequences
would need to be updated to the latest values, either by copying the
current data from the publisher (perhaps
using <command>pg_dump</command>) or by determining a sufficiently high
value from the tables themselves.
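As an illustration, a sequence backing a serial or identity column could be
advanced on the subscriber after a switchover with something along these
lines (the table <literal>orders</literal>, its column <literal>id</literal>,
and the sequence name are placeholders):

<programlisting>
-- Set the sequence to the highest value currently present in the table
SELECT setval('orders_id_seq', (SELECT COALESCE(max(id), 1) FROM orders));
</programlisting>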
Replication of <command>TRUNCATE</command> commands is supported, but
some care must be taken when truncating groups of tables connected by
foreign keys. When replicating a truncate action, the subscriber will
truncate the same group of tables that was truncated on the publisher,
either explicitly specified or implicitly collected via
<literal>CASCADE</literal>, minus tables that are not part of the
subscription. This will work correctly if all affected tables are part
of the same subscription. But if some tables to be truncated on the
subscriber have foreign-key links to tables that are not part of the same
(or any) subscription, then the application of the truncate action on the
subscriber will fail.
Large objects (see <xref linkend="largeobjects"/>) are not replicated.
There is no workaround for that, other than storing data in normal tables.
Replication is only supported by tables, including partitioned tables.
Attempts to replicate other types of relations, such as views, materialized
views, or foreign tables, will result in an error.
When replicating between partitioned tables, the actual replication
originates, by default, from the leaf partitions on the publisher, so
partitions on the publisher must also exist on the subscriber as valid
target tables. (They could either be leaf partitions themselves, or they
could be further subpartitioned, or they could even be independent
tables.) Publications can also specify that changes are to be replicated
using the identity and schema of the partitioned root table instead of
that of the individual leaf partitions in which the changes actually
originate (see the
<link linkend="sql-createpublication-params-with-publish-via-partition-root"><literal>publish_via_partition_root</literal></link>
parameter of <command>CREATE PUBLICATION</command>).
When using
<link linkend="sql-altertable-replica-identity-full"><literal>REPLICA IDENTITY FULL</literal></link>
on published tables, it is important to note that the <literal>UPDATE</literal>
and <literal>DELETE</literal> operations cannot be applied to subscribers
if the tables include attributes with datatypes (such as point or box)
that do not have a default operator class for B-tree or Hash. However,
this limitation can be overcome by ensuring that the table has a primary
key or replica identity defined for it.
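For instance, the limitation could be avoided by giving such a table a
primary key or an explicit replica identity; the table
<literal>geo_points</literal>, its column <literal>id</literal>, and the
index name below are purely illustrative:

<programlisting>
ALTER TABLE geo_points ADD PRIMARY KEY (id);
-- or, alternatively, use an existing unique index on NOT NULL columns
ALTER TABLE geo_points REPLICA IDENTITY USING INDEX geo_points_id_idx;
</programlisting>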
<sect1 id="logical-replication-architecture">
<title>Architecture</title>

Logical replication is built with an architecture similar to physical
streaming replication (see <xref linkend="streaming-replication"/>). It is
implemented by <literal>walsender</literal> and <literal>apply</literal>
processes. The walsender process starts logical decoding (described
in <xref linkend="logicaldecoding"/>) of the WAL and loads the standard
logical decoding output plugin (<literal>pgoutput</literal>). The plugin
transforms the changes read from WAL to the logical replication protocol
(see <xref linkend="protocol-logical-replication"/>) and filters the data
according to the publication specification. The data is then continuously
transferred using the streaming replication protocol to the apply worker,
which maps the data to local tables and applies the individual changes as
they are received, in correct transactional order.
The apply process on the subscriber database always runs with
<link linkend="guc-session-replication-role"><varname>session_replication_role</varname></link>
set to <literal>replica</literal>. This means that, by default,
triggers and rules will not fire on a subscriber. Users can optionally choose to
enable triggers and rules on a table using the
<link linkend="sql-altertable"><command>ALTER TABLE</command></link> command
and the <literal>ENABLE TRIGGER</literal> and <literal>ENABLE RULE</literal>
clauses.
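For example, a trigger that should also fire for changes applied by the
logical replication worker could be enabled as follows (the table and
trigger names are placeholders):

<programlisting>
ALTER TABLE mytable ENABLE ALWAYS TRIGGER my_audit_trigger;
</programlisting>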
The logical replication apply process currently only fires row triggers,
not statement triggers. The initial table synchronization, however, is
implemented like a <command>COPY</command> command and thus fires both row
and statement triggers for <command>INSERT</command>.
<sect2 id="logical-replication-snapshot">
<title>Initial Snapshot</title>

The initial data in existing subscribed tables are snapshotted and
copied in parallel instances of a special kind of apply process.
These special apply processes are dedicated table synchronization
workers, spawned for each table to be synchronized. Each table
synchronization process will create its own replication slot and
copy the existing data. As soon as the copy is finished the table
contents will become visible to other backends. Once existing data
is copied, the worker enters synchronization mode, which ensures
that the table is brought up to a synchronized state with the main
apply process by streaming any changes that happened during the
initial data copy using standard logical replication. During this
synchronization phase, the changes are applied and committed in the same
order as they happened on the publisher. Once synchronization is done,
control of the replication of the table is given back to the main apply
process where replication continues as normal.
The
<link linkend="sql-createpublication-params-with-publish"><literal>publish</literal></link>
parameter only affects what DML operations will be replicated. The
initial data synchronization does not take this parameter into account
when copying the existing table data.
If a table synchronization worker fails during copy, the apply worker
detects the failure and respawns the table synchronization worker to
continue the synchronization process. This behaviour ensures that
transient errors do not permanently disrupt the replication setup. See
also <link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
<sect1 id="logical-replication-monitoring">
<title>Monitoring</title>

Because logical replication is based on a similar architecture as
<link linkend="streaming-replication">physical streaming replication</link>,
the monitoring on a publication node is similar to monitoring of a
physical replication primary
(see <xref linkend="streaming-replication-monitoring"/>).
The monitoring information about subscription is visible in
<link linkend="monitoring-pg-stat-subscription"><structname>pg_stat_subscription</structname></link>.
This view contains one row for every subscription worker. A subscription
can have zero or more active subscription workers depending on its state.

Normally, there is a single apply process running for an enabled
subscription. A disabled subscription or a crashed subscription will have
zero rows in this view. If the initial data synchronization of any
table is in progress, there will be additional workers for the tables
being synchronized. Moreover, if the
<link linkend="sql-createsubscription-params-with-streaming"><literal>streaming</literal></link>
transaction is applied in parallel, there may be additional parallel apply
workers.
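For instance, the workers belonging to all subscriptions on a node can be
inspected with a query along these lines (the exact set of columns varies
between PostgreSQL versions):

<programlisting>
SELECT subname, worker_type, pid, received_lsn, latest_end_lsn
FROM pg_stat_subscription;
</programlisting>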
<sect1 id="logical-replication-security">
<title>Security</title>

The role used for the replication connection must have
the <literal>REPLICATION</literal> attribute (or be a superuser). If the
role lacks <literal>SUPERUSER</literal> and <literal>BYPASSRLS</literal>,
publisher row security policies can execute. If the role does not trust
all table owners, include <literal>options=-crow_security=off</literal> in
the connection string; if a table owner then adds a row security policy,
that setting will cause replication to halt rather than execute the policy.
Access for the role must be configured in <filename>pg_hba.conf</filename>
and it must have the <literal>LOGIN</literal> attribute.
In order to be able to copy the initial table data, the role used for the
replication connection must have the <literal>SELECT</literal> privilege on
a published table (or be a superuser).

To create a publication, the user must have the <literal>CREATE</literal>
privilege in the database.

To add tables to a publication, the user must have ownership rights on the
table. To add all tables in schema to a publication, the user must be a
superuser. To create a publication that publishes all tables or all tables in
schema automatically, the user must be a superuser.
There are currently no privileges on publications. Any subscription (that
is able to connect) can access any publication. Thus, if you intend to
hide some information from particular subscribers, such as by using row
filters or column lists, or by not adding the whole table to the
publication, be aware that other publications in the same database could
expose the same information. Publication privileges might be added to
<productname>PostgreSQL</productname> in the future to allow for
finer-grained access control.
To create a subscription, the user must have the privileges of
the <literal>pg_create_subscription</literal> role, as well as
<literal>CREATE</literal> privileges on the database.
The subscription apply process will, at a session level, run with the
privileges of the subscription owner. However, when performing an insert,
update, delete, or truncate operation on a particular table, it will switch
roles to the table owner and perform the operation with the table owner's
privileges. This means that the subscription owner needs to be able to
<literal>SET ROLE</literal> to each role that owns a replicated table.
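In practice, this typically means granting membership in each table-owning
role to the subscription owner, for example (both role names here are
placeholders):

<programlisting>
GRANT table_owner_role TO subscription_owner_role;
</programlisting>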
If the subscription has been configured with
<literal>run_as_owner = true</literal>, then no user switching will
occur. Instead, all operations will be performed with the permissions
of the subscription owner. In this case, the subscription owner only
needs privileges to <literal>SELECT</literal>, <literal>INSERT</literal>,
<literal>UPDATE</literal>, and <literal>DELETE</literal> from the
target table, and does not need privileges to <literal>SET ROLE</literal>
to the table owner. However, this also means that any user who owns
a table into which replication is happening can execute arbitrary code with
the privileges of the subscription owner. For example, they could do this
by simply attaching a trigger to one of the tables which they own.
Because it is usually undesirable to allow one role to freely assume
the privileges of another, this option should be avoided unless user
security within the database is of no concern.
On the publisher, privileges are only checked once at the start of a
replication connection and are not re-checked as each change record is read.

On the subscriber, the subscription owner's privileges are re-checked for
each transaction when applied. If a worker is in the process of applying a
transaction when the ownership of the subscription is changed by a
concurrent transaction, the application of the current transaction will
continue under the old owner's privileges.
<sect1 id="logical-replication-config">
<title>Configuration Settings</title>

Logical replication requires several configuration options to be set. Most
options are relevant only on one side of the replication. However,
<varname>max_replication_slots</varname> is used on both the publisher and
the subscriber, but it has a different meaning for each.
<sect2 id="logical-replication-config-publisher">
<title>Publishers</title>

<link linkend="guc-wal-level"><varname>wal_level</varname></link> must be
set to <literal>logical</literal>.

<link linkend="guc-max-replication-slots"><varname>max_replication_slots</varname></link>
must be set to at least the number of subscriptions expected to connect,
plus some reserve for table synchronization.

<link linkend="guc-max-wal-senders"><varname>max_wal_senders</varname></link>
should be set to at least the same as
<varname>max_replication_slots</varname>, plus the number of physical
replicas that are connected at the same time.

Logical replication walsender is also affected by
<link linkend="guc-wal-sender-timeout"><varname>wal_sender_timeout</varname></link>.
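Putting these together, a publisher's <filename>postgresql.conf</filename>
for a small setup might contain settings roughly like the following; the
numbers are only illustrative and should be sized for the actual number of
subscriptions and physical standbys:

<programlisting>
wal_level = logical
max_replication_slots = 10
max_wal_senders = 12
</programlisting>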
<sect2 id="logical-replication-config-subscriber">
<title>Subscribers</title>

<link linkend="guc-max-replication-slots-subscriber"><varname>max_replication_slots</varname></link>
must be set to at least the number of subscriptions that will be added to
the subscriber, plus some reserve for table synchronization.

<link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
must be set to at least the number of subscriptions (for leader apply
workers), plus some reserve for the table synchronization workers and
parallel apply workers.

<link linkend="guc-max-worker-processes"><varname>max_worker_processes</varname></link>
may need to be adjusted to accommodate replication workers, at least
(<link linkend="guc-max-logical-replication-workers"><varname>max_logical_replication_workers</varname></link>
+ <literal>1</literal>). Note that some extensions and parallel queries also
take worker slots from <varname>max_worker_processes</varname>.

<link linkend="guc-max-sync-workers-per-subscription"><varname>max_sync_workers_per_subscription</varname></link>
controls the amount of parallelism of the initial data copy during the
subscription initialization or when new tables are added.

<link linkend="guc-max-parallel-apply-workers-per-subscription"><varname>max_parallel_apply_workers_per_subscription</varname></link>
controls the amount of parallelism for streaming of in-progress
transactions with subscription parameter
<literal>streaming = parallel</literal>.

Logical replication workers are also affected by
<link linkend="guc-wal-receiver-timeout"><varname>wal_receiver_timeout</varname></link>,
<link linkend="guc-wal-receiver-status-interval"><varname>wal_receiver_status_interval</varname></link> and
<link linkend="guc-wal-retrieve-retry-interval"><varname>wal_retrieve_retry_interval</varname></link>.
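A subscriber's <filename>postgresql.conf</filename> might correspondingly
contain settings along these lines; again, the values are only illustrative:

<programlisting>
max_replication_slots = 10
max_logical_replication_workers = 8
max_worker_processes = 16
max_sync_workers_per_subscription = 2
</programlisting>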
<sect1 id="logical-replication-upgrade">
<title>Upgrade</title>

Migration of
<glossterm linkend="glossary-logical-replication-cluster">logical replication clusters</glossterm>
is possible only when all the members of the old logical replication
clusters are version 17.0 or later.

<sect2 id="prepare-publisher-upgrades">
<title>Prepare for publisher upgrades</title>

<application>pg_upgrade</application> attempts to migrate logical
slots. This helps avoid the need for manually defining the same
logical slots on the new publisher. Migration of logical slots is
only supported when the old cluster is version 17.0 or later.
Logical slots on clusters before version 17.0 will silently be ignored.
Before you start upgrading the publisher cluster, ensure that the
subscription is temporarily disabled, by executing
<link linkend="sql-altersubscription"><command>ALTER SUBSCRIPTION ... DISABLE</command></link>.
Re-enable the subscription after the upgrade.
There are some prerequisites for <application>pg_upgrade</application> to
be able to upgrade the logical slots. If these are not met an error will be
reported.

The new cluster must have
<link linkend="guc-wal-level"><varname>wal_level</varname></link> as
<literal>logical</literal>.

The new cluster must have
<link linkend="guc-max-replication-slots"><varname>max_replication_slots</varname></link>
configured to a value greater than or equal to the number of slots
present in the old cluster.

The output plugins referenced by the slots on the old cluster must be
installed in the new PostgreSQL executable directory.

The old cluster must have replicated all the transactions and logical decoding
messages to subscribers.

All slots on the old cluster must be usable, i.e., there are no slots whose
<link linkend="view-pg-replication-slots">pg_replication_slots</link>.<structfield>conflicting</structfield>
is <literal>true</literal>.

The new cluster must not have permanent logical slots, i.e.,
there must be no slots where
<link linkend="view-pg-replication-slots">pg_replication_slots</link>.<structfield>temporary</structfield>
is <literal>false</literal>.
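For example, the slot-related prerequisites could be checked with queries
along these lines (the first on the old cluster, the second on the new
cluster; both should return no rows):

<programlisting>
-- Old cluster: logical slots that are not usable
SELECT slot_name FROM pg_replication_slots
WHERE slot_type = 'logical' AND conflicting;

-- New cluster: permanent logical slots that must not exist yet
SELECT slot_name FROM pg_replication_slots
WHERE slot_type = 'logical' AND NOT temporary;
</programlisting>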
<sect2 id="prepare-subscriber-upgrades">
<title>Prepare for subscriber upgrades</title>

Set up the <link linkend="logical-replication-config-subscriber">
subscriber configurations</link> in the new subscriber.
<application>pg_upgrade</application> attempts to migrate subscription
dependencies, which include the subscription's table information present in
the <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
system catalog and also the subscription's replication origin. This allows
logical replication on the new subscriber to continue from where the
old subscriber was up to. Migration of subscription dependencies is only
supported when the old cluster is version 17.0 or later. Subscription
dependencies on clusters before version 17.0 will silently be ignored.
There are some prerequisites for <application>pg_upgrade</application> to
be able to upgrade the subscriptions. If these are not met an error will be
reported.

All the subscription tables in the old subscriber should be in state
<literal>i</literal> (initialize) or <literal>r</literal> (ready). This
can be verified by checking
<link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>.<structfield>srsubstate</structfield>.

The replication origin entry corresponding to each of the subscriptions
should exist in the old cluster. This can be found by checking
<link linkend="catalog-pg-subscription">pg_subscription</link> and
<link linkend="catalog-pg-replication-origin">pg_replication_origin</link>
system catalogs.

The new cluster must have
<link linkend="guc-max-replication-slots"><varname>max_replication_slots</varname></link>
configured to a value greater than or equal to the number of
subscriptions present in the old cluster.
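For instance, subscription tables that are not yet in the initialize or
ready state on the old subscriber could be listed with a query along these
lines (an empty result means this prerequisite is met):

<programlisting>
SELECT s.subname, sr.srrelid::regclass AS relation, sr.srsubstate
FROM pg_subscription_rel sr
JOIN pg_subscription s ON s.oid = sr.srsubid
WHERE sr.srsubstate NOT IN ('i', 'r');
</programlisting>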
<sect2 id="upgrading-logical-replication-clusters">
<title>Upgrading logical replication clusters</title>

While upgrading a subscriber, write operations can be performed in the
publisher. These changes will be replicated to the subscriber once the
subscriber upgrade is completed.

The logical replication restrictions apply to logical replication cluster
upgrades also. See <xref linkend="logical-replication-restrictions"/> for
details.

The prerequisites of publisher upgrade apply to logical replication
cluster upgrades also. See <xref linkend="prepare-publisher-upgrades"/>
for details.

The prerequisites of subscriber upgrade apply to logical replication
cluster upgrades also. See <xref linkend="prepare-subscriber-upgrades"/>
for details.

Upgrading a logical replication cluster requires multiple steps to be
performed on various nodes. Because not all operations are
transactional, the user is advised to take backups as described in
<xref linkend="backup-base-backup"/>.

The steps to upgrade the following logical replication clusters are
detailed below:

Follow the steps specified in
<xref linkend="steps-two-node-logical-replication-cluster"/> to upgrade
a two-node logical replication cluster.

Follow the steps specified in
<xref linkend="steps-cascaded-logical-replication-cluster"/> to upgrade
a cascaded logical replication cluster.

Follow the steps specified in
<xref linkend="steps-two-node-circular-logical-replication-cluster"/>
to upgrade a two-node circular logical replication cluster.
<sect3 id="steps-two-node-logical-replication-cluster">
<title>Steps to upgrade a two-node logical replication cluster</title>

Let's say publisher is in <literal>node1</literal> and subscriber is
in <literal>node2</literal>. The subscriber <literal>node2</literal> has
a subscription <literal>sub1_node1_node2</literal> which is subscribing
the changes from <literal>node1</literal>.

<step id="two-node-cluster-disable-subscriptions-node2">

Disable all the subscriptions on <literal>node2</literal> that are
subscribing the changes from <literal>node1</literal> by using
<link linkend="sql-altersubscription-params-disable"><command>ALTER SUBSCRIPTION ... DISABLE</command></link>,
e.g.:

node2=# ALTER SUBSCRIPTION sub1_node1_node2 DISABLE;
Stop the publisher server in <literal>node1</literal>, e.g.:

pg_ctl -D /opt/PostgreSQL/data1 stop

Initialize the <literal>data1_upgraded</literal> instance by using the
required newer version.

Upgrade the publisher <literal>node1</literal>'s server to the
required newer version, e.g.:

pg_upgrade
        --old-datadir "/opt/PostgreSQL/postgres/17/data1"
        --new-datadir "/opt/PostgreSQL/postgres/18/data1_upgraded"
        --old-bindir "/opt/PostgreSQL/postgres/17/bin"
        --new-bindir "/opt/PostgreSQL/postgres/18/bin"

Start the upgraded publisher server in <literal>node1</literal>, e.g.:

pg_ctl -D /opt/PostgreSQL/data1_upgraded start -l logfile
Stop the subscriber server in <literal>node2</literal>, e.g.:

pg_ctl -D /opt/PostgreSQL/data2 stop

Initialize the <literal>data2_upgraded</literal> instance by using the
required newer version.

Upgrade the subscriber <literal>node2</literal>'s server to
the required newer version, e.g.:

pg_upgrade
        --old-datadir "/opt/PostgreSQL/postgres/17/data2"
        --new-datadir "/opt/PostgreSQL/postgres/18/data2_upgraded"
        --old-bindir "/opt/PostgreSQL/postgres/17/bin"
        --new-bindir "/opt/PostgreSQL/postgres/18/bin"

Start the upgraded subscriber server in <literal>node2</literal>, e.g.:

pg_ctl -D /opt/PostgreSQL/data2_upgraded start -l logfile
On <literal>node2</literal>, create any tables that were created in
the upgraded publisher <literal>node1</literal> server between
<xref linkend="two-node-cluster-disable-subscriptions-node2"/>
and now, e.g.:

node2=# CREATE TABLE distributors (did integer PRIMARY KEY, name varchar(40));

Enable all the subscriptions on <literal>node2</literal> that are
subscribing the changes from <literal>node1</literal> by using
<link linkend="sql-altersubscription-params-enable"><command>ALTER SUBSCRIPTION ... ENABLE</command></link>,
e.g.:

node2=# ALTER SUBSCRIPTION sub1_node1_node2 ENABLE;

Refresh the <literal>node2</literal> subscription's publications using
<link linkend="sql-altersubscription-params-refresh-publication"><command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>,
e.g.:

node2=# ALTER SUBSCRIPTION sub1_node1_node2 REFRESH PUBLICATION;

In the steps described above, the publisher is upgraded first, followed
by the subscriber. Alternatively, the user can use similar steps to
upgrade the subscriber first, followed by the publisher.
<sect3 id="steps-cascaded-logical-replication-cluster">
<title>Steps to upgrade a cascaded logical replication cluster</title>

Let's say we have a cascaded logical replication setup
<literal>node1</literal> -> <literal>node2</literal> -> <literal>node3</literal>.
Here <literal>node2</literal> is subscribing the changes from
<literal>node1</literal> and <literal>node3</literal> is subscribing
the changes from <literal>node2</literal>. The <literal>node2</literal>
has a subscription <literal>sub1_node1_node2</literal> which is
subscribing the changes from <literal>node1</literal>. The
<literal>node3</literal> has a subscription
<literal>sub1_node2_node3</literal> which is subscribing the changes from
<literal>node2</literal>.

<step id="cascaded-cluster-disable-sub-node1-node2">

Disable all the subscriptions on <literal>node2</literal> that are
subscribing the changes from <literal>node1</literal> by using
<link linkend="sql-altersubscription-params-disable"><command>ALTER SUBSCRIPTION ... DISABLE</command></link>,
e.g.:

node2=# ALTER SUBSCRIPTION sub1_node1_node2 DISABLE;
Stop the server in <literal>node1</literal>, e.g.:

pg_ctl -D /opt/PostgreSQL/data1 stop

Initialize the <literal>data1_upgraded</literal> instance by using the
required newer version.

Upgrade the <literal>node1</literal>'s server to the required newer
version, e.g.:

pg_upgrade
        --old-datadir "/opt/PostgreSQL/postgres/17/data1"
        --new-datadir "/opt/PostgreSQL/postgres/18/data1_upgraded"
        --old-bindir "/opt/PostgreSQL/postgres/17/bin"
        --new-bindir "/opt/PostgreSQL/postgres/18/bin"

Start the upgraded server in <literal>node1</literal>, e.g.:

pg_ctl -D /opt/PostgreSQL/data1_upgraded start -l logfile
<step id="cascaded-cluster-disable-sub-node2-node3">

Disable all the subscriptions on <literal>node3</literal> that are
subscribing the changes from <literal>node2</literal> by using
<link linkend="sql-altersubscription-params-disable"><command>ALTER SUBSCRIPTION ... DISABLE</command></link>,
e.g.:

node3=# ALTER SUBSCRIPTION sub1_node2_node3 DISABLE;
Stop the server in <literal>node2</literal>, e.g.:

pg_ctl -D /opt/PostgreSQL/data2 stop

Initialize the <literal>data2_upgraded</literal> instance by using the
required newer version.

Upgrade the <literal>node2</literal>'s server to the required newer
version, e.g.:

pg_upgrade
        --old-datadir "/opt/PostgreSQL/postgres/17/data2"
        --new-datadir "/opt/PostgreSQL/postgres/18/data2_upgraded"
        --old-bindir "/opt/PostgreSQL/postgres/17/bin"
        --new-bindir "/opt/PostgreSQL/postgres/18/bin"

Start the upgraded server in <literal>node2</literal>, e.g.:

pg_ctl -D /opt/PostgreSQL/data2_upgraded start -l logfile
On <literal>node2</literal>, create any tables that were created in
the upgraded publisher <literal>node1</literal> server between
<xref linkend="cascaded-cluster-disable-sub-node1-node2"/>
and now, e.g.:

node2=# CREATE TABLE distributors (did integer PRIMARY KEY, name varchar(40));

Enable all the subscriptions on <literal>node2</literal> that are
subscribing the changes from <literal>node1</literal> by using
<link linkend="sql-altersubscription-params-enable"><command>ALTER SUBSCRIPTION ... ENABLE</command></link>,
e.g.:

node2=# ALTER SUBSCRIPTION sub1_node1_node2 ENABLE;

Refresh the <literal>node2</literal> subscription's publications using
<link linkend="sql-altersubscription-params-refresh-publication"><command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>,
e.g.:

node2=# ALTER SUBSCRIPTION sub1_node1_node2 REFRESH PUBLICATION;
Stop the server in <literal>node3</literal>, e.g.:

pg_ctl -D /opt/PostgreSQL/data3 stop

Initialize the <literal>data3_upgraded</literal> instance by using the
required newer version.

Upgrade the <literal>node3</literal>'s server to the required newer
version, e.g.:

pg_upgrade
        --old-datadir "/opt/PostgreSQL/postgres/17/data3"
        --new-datadir "/opt/PostgreSQL/postgres/18/data3_upgraded"
        --old-bindir "/opt/PostgreSQL/postgres/17/bin"
        --new-bindir "/opt/PostgreSQL/postgres/18/bin"

Start the upgraded server in <literal>node3</literal>, e.g.:

pg_ctl -D /opt/PostgreSQL/data3_upgraded start -l logfile
On <literal>node3</literal>, create any tables that were created in
the upgraded <literal>node2</literal> between
<xref linkend="cascaded-cluster-disable-sub-node2-node3"/> and now,
e.g.:

node3=# CREATE TABLE distributors (did integer PRIMARY KEY, name varchar(40));

Enable all the subscriptions on <literal>node3</literal> that are
subscribing the changes from <literal>node2</literal> by using
<link linkend="sql-altersubscription-params-enable"><command>ALTER SUBSCRIPTION ... ENABLE</command></link>,
e.g.:

node3=# ALTER SUBSCRIPTION sub1_node2_node3 ENABLE;

Refresh the <literal>node3</literal> subscription's publications using
<link linkend="sql-altersubscription-params-refresh-publication"><command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>,
e.g.:

node3=# ALTER SUBSCRIPTION sub1_node2_node3 REFRESH PUBLICATION;
<sect3 id="steps-two-node-circular-logical-replication-cluster">
<title>Steps to upgrade a two-node circular logical replication cluster</title>

Let's say we have a circular logical replication setup
<literal>node1</literal> -> <literal>node2</literal> and
<literal>node2</literal> -> <literal>node1</literal>. Here
<literal>node2</literal> is subscribing the changes from
<literal>node1</literal> and <literal>node1</literal> is subscribing
the changes from <literal>node2</literal>. The <literal>node1</literal>
has a subscription <literal>sub1_node2_node1</literal> which is
subscribing the changes from <literal>node2</literal>. The
<literal>node2</literal> has a subscription
<literal>sub1_node1_node2</literal> which is subscribing the changes from
<literal>node1</literal>.

<step id="circular-cluster-disable-sub-node2">

Disable all the subscriptions on <literal>node2</literal> that are
subscribing the changes from <literal>node1</literal> by using
<link linkend="sql-altersubscription-params-disable"><command>ALTER SUBSCRIPTION ... DISABLE</command></link>,
e.g.:

node2=# ALTER SUBSCRIPTION sub1_node1_node2 DISABLE;
Stop the server in <literal>node1</literal>, e.g.:

pg_ctl -D /opt/PostgreSQL/data1 stop

Initialize the <literal>data1_upgraded</literal> instance by using the
required newer version.

Upgrade the <literal>node1</literal>'s server to the required
newer version, e.g.:

pg_upgrade
        --old-datadir "/opt/PostgreSQL/postgres/17/data1"
        --new-datadir "/opt/PostgreSQL/postgres/18/data1_upgraded"
        --old-bindir "/opt/PostgreSQL/postgres/17/bin"
        --new-bindir "/opt/PostgreSQL/postgres/18/bin"

Start the upgraded server in <literal>node1</literal>, e.g.:

pg_ctl -D /opt/PostgreSQL/data1_upgraded start -l logfile
Enable all the subscriptions on <literal>node2</literal> that are
subscribing the changes from <literal>node1</literal> by using
<link linkend="sql-altersubscription-params-enable"><command>ALTER SUBSCRIPTION ... ENABLE</command></link>,
e.g.:

node2=# ALTER SUBSCRIPTION sub1_node1_node2 ENABLE;

On <literal>node1</literal>, create any tables that were created in
<literal>node2</literal> between <xref linkend="circular-cluster-disable-sub-node2"/>
and now, e.g.:

node1=# CREATE TABLE distributors (did integer PRIMARY KEY, name varchar(40));

Refresh the <literal>node1</literal> subscription's publications to
copy initial table data from <literal>node2</literal> using
<link linkend="sql-altersubscription-params-refresh-publication"><command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>,
e.g.:

node1=# ALTER SUBSCRIPTION sub1_node2_node1 REFRESH PUBLICATION;
<step id="circular-cluster-disable-sub-node1">

Disable all the subscriptions on <literal>node1</literal> that are
subscribing the changes from <literal>node2</literal> by using
<link linkend="sql-altersubscription-params-disable"><command>ALTER SUBSCRIPTION ... DISABLE</command></link>,
e.g.:

node1=# ALTER SUBSCRIPTION sub1_node2_node1 DISABLE;
Stop the server in <literal>node2</literal>, e.g.:

pg_ctl -D /opt/PostgreSQL/data2 stop

Initialize the <literal>data2_upgraded</literal> instance by using the
required newer version.

Upgrade the <literal>node2</literal>'s server to the required
newer version, e.g.:

pg_upgrade
        --old-datadir "/opt/PostgreSQL/postgres/17/data2"
        --new-datadir "/opt/PostgreSQL/postgres/18/data2_upgraded"
        --old-bindir "/opt/PostgreSQL/postgres/17/bin"
        --new-bindir "/opt/PostgreSQL/postgres/18/bin"

Start the upgraded server in <literal>node2</literal>, e.g.:

pg_ctl -D /opt/PostgreSQL/data2_upgraded start -l logfile
Enable all the subscriptions on <literal>node1</literal> that are
subscribing the changes from <literal>node2</literal> by using
<link linkend="sql-altersubscription-params-enable"><command>ALTER SUBSCRIPTION ... ENABLE</command></link>,
e.g.:

node1=# ALTER SUBSCRIPTION sub1_node2_node1 ENABLE;

On <literal>node2</literal>, create any tables that were created in
the upgraded <literal>node1</literal> between
<xref linkend="circular-cluster-disable-sub-node1"/> and now, e.g.:

node2=# CREATE TABLE distributors (did integer PRIMARY KEY, name varchar(40));

Refresh the <literal>node2</literal> subscription's publications to
copy initial table data from <literal>node1</literal> using
<link linkend="sql-altersubscription-params-refresh-publication"><command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>,
e.g.:

node2=# ALTER SUBSCRIPTION sub1_node1_node2 REFRESH PUBLICATION;
<sect1 id="logical-replication-quick-setup">
<title>Quick Setup</title>

First set the configuration options in <filename>postgresql.conf</filename>:

wal_level = logical

The other required settings have default values that are sufficient for a
basic setup.

<filename>pg_hba.conf</filename> needs to be adjusted to allow replication
(the values here depend on your actual network configuration and the user
you want to use for connecting):

host     all     repuser     0.0.0.0/0     md5

Then on the publisher database:

CREATE PUBLICATION mypub FOR TABLE users, departments;

And on the subscriber database:

CREATE SUBSCRIPTION mysub CONNECTION 'dbname=foo host=bar user=repuser' PUBLICATION mypub;

The above will start the replication process, which synchronizes the
initial table contents of the tables <literal>users</literal> and
<literal>departments</literal> and then starts replicating
incremental changes to those tables.