1 <!-- doc/src/sgml/backup.sgml -->
3 <chapter id="backup">
4 <title>Backup and Restore</title>
6 <indexterm zone="backup"><primary>backup</primary></indexterm>
8 <para>
9 As with everything that contains valuable data, <productname>PostgreSQL</productname>
10 databases should be backed up regularly. While the procedure is
11 essentially simple, it is important to have a clear understanding of
12 the underlying techniques and assumptions.
13 </para>
15 <para>
16 There are three fundamentally different approaches to backing up
17 <productname>PostgreSQL</productname> data:
18 <itemizedlist>
19 <listitem><para><acronym>SQL</acronym> dump</para></listitem>
20 <listitem><para>File system level backup</para></listitem>
21 <listitem><para>Continuous archiving</para></listitem>
22 </itemizedlist>
23 Each has its own strengths and weaknesses; each is discussed in turn
24 in the following sections.
25 </para>
27 <sect1 id="backup-dump">
28 <title><acronym>SQL</acronym> Dump</title>
30 <para>
31 The idea behind this dump method is to generate a file with SQL
32 commands that, when fed back to the server, will recreate the
33 database in the same state as it was at the time of the dump.
34 <productname>PostgreSQL</productname> provides the utility program
35 <xref linkend="app-pgdump"/> for this purpose. The basic usage of this
36 command is:
37 <synopsis>
38 pg_dump <replaceable class="parameter">dbname</replaceable> &gt; <replaceable class="parameter">dumpfile</replaceable>
39 </synopsis>
40 As you see, <application>pg_dump</application> writes its result to the
41 standard output. We will see below how this can be useful.
42 While the above command creates a text file, <application>pg_dump</application>
43 can create files in other formats that allow for parallelism and more
44 fine-grained control of object restoration.
45 </para>
47 <para>
48 <application>pg_dump</application> is a regular <productname>PostgreSQL</productname>
49 client application (albeit a particularly clever one). This means
50 that you can perform this backup procedure from any remote host that has
51 access to the database. But remember that <application>pg_dump</application>
52 does not operate with special permissions. In particular, it must
53 have read access to all tables that you want to back up, so in order
54 to back up the entire database you almost always have to run it as a
55 database superuser. (If you do not have sufficient privileges to back up
56 the entire database, you can still back up portions of the database to which
57 you do have access using options such as
58 <option>-n <replaceable>schema</replaceable></option>
59 or <option>-t <replaceable>table</replaceable></option>.)
60 </para>
62 <para>
63 To specify which database server <application>pg_dump</application> should
64 contact, use the command line options <option>-h
65 <replaceable>host</replaceable></option> and <option>-p <replaceable>port</replaceable></option>. The
66 default host is the local host or whatever your
67 <envar>PGHOST</envar> environment variable specifies. Similarly,
68 the default port is indicated by the <envar>PGPORT</envar>
69 environment variable or, failing that, by the compiled-in default.
70 (Conveniently, the server will normally have the same compiled-in
71 default.)
72 </para>
74 <para>
75 Like any other <productname>PostgreSQL</productname> client application,
76 <application>pg_dump</application> will by default connect with the database
77 user name that is equal to the current operating system user name. To override
78 this, either specify the <option>-U</option> option or set the
79 environment variable <envar>PGUSER</envar>. Remember that
80 <application>pg_dump</application> connections are subject to the normal
81 client authentication mechanisms (which are described in <xref
82 linkend="client-authentication"/>).
83 </para>
85 <para>
86 An important advantage of <application>pg_dump</application> over the other backup
87 methods described later is that <application>pg_dump</application>'s output can
88 generally be re-loaded into newer versions of <productname>PostgreSQL</productname>,
89 whereas file-level backups and continuous archiving are both extremely
90 server-version-specific. <application>pg_dump</application> is also the only method
91 that will work when transferring a database to a different machine
92 architecture, such as going from a 32-bit to a 64-bit server.
93 </para>
95 <para>
96 Dumps created by <application>pg_dump</application> are internally consistent,
97 meaning, the dump represents a snapshot of the database at the time
98 <application>pg_dump</application> began running. <application>pg_dump</application> does not
99 block other operations on the database while it is working.
100 (Exceptions are those operations that need to operate with an
101 exclusive lock, such as most forms of <command>ALTER TABLE</command>.)
102 </para>
104 <sect2 id="backup-dump-restore">
105 <title>Restoring the Dump</title>
107 <para>
108 Text files created by <application>pg_dump</application> are intended to
109 be read by the <application>psql</application> program using its default
110 settings. The general command form to restore a text dump is
111 <synopsis>
112 psql -X <replaceable class="parameter">dbname</replaceable> &lt; <replaceable class="parameter">dumpfile</replaceable>
113 </synopsis>
114 where <replaceable class="parameter">dumpfile</replaceable> is the
115 file output by the <application>pg_dump</application> command. The database <replaceable
116 class="parameter">dbname</replaceable> will not be created by this
117 command, so you must create it yourself from <literal>template0</literal>
118 before executing <application>psql</application> (e.g., with
119 <literal>createdb -T template0 <replaceable
120 class="parameter">dbname</replaceable></literal>).
121 To ensure <application>psql</application> runs with its default settings,
122 use the <option>-X</option> (<option>--no-psqlrc</option>) option.
123 <application>psql</application>
124 supports options similar to <application>pg_dump</application> for specifying
125 the database server to connect to and the user name to use. See
126 the <xref linkend="app-psql"/> reference page for more information.
127 </para>
129 <para>
130 Non-text file dumps should be restored using the <xref
131 linkend="app-pgrestore"/> utility.
132 </para>
134 <para>
135 Before restoring an SQL dump, all the users who own objects or were
136 granted permissions on objects in the dumped database must already
137 exist. If they do not, the restore will fail to recreate the
138 objects with the original ownership and/or permissions.
139 (Sometimes this is what you want, but usually it is not.)
140 </para>
142 <para>
143 By default, the <application>psql</application> script will continue to
144 execute after an SQL error is encountered. You might wish to run
145 <application>psql</application> with
146 the <literal>ON_ERROR_STOP</literal> variable set to alter that
147 behavior and have <application>psql</application> exit with an
148 exit status of 3 if an SQL error occurs:
149 <programlisting>
150 psql -X --set ON_ERROR_STOP=on <replaceable>dbname</replaceable> &lt; <replaceable>dumpfile</replaceable>
151 </programlisting>
152 Either way, you will only have a partially restored database.
153 Alternatively, you can specify that the whole dump should be
154 restored as a single transaction, so the restore is either fully
155 completed or fully rolled back. This mode can be specified by
156 passing the <option>-1</option> or <option>--single-transaction</option>
157 command-line options to <application>psql</application>. When using this
mode, be aware that even a minor error can roll back a
159 restore that has already run for many hours. However, that might
160 still be preferable to manually cleaning up a complex database
161 after a partially restored dump.
162 </para>
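<para>
For example, a restore that stops at the first error and is wrapped in a
single transaction could be run like this (the database and file names are
only placeholders):
<programlisting>
psql -X -1 --set ON_ERROR_STOP=on <replaceable>dbname</replaceable> &lt; <replaceable>dumpfile</replaceable>
</programlisting>
With <option>-1</option>, nothing is committed unless the entire script
succeeds.
</para>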
164 <para>
165 The ability of <application>pg_dump</application> and <application>psql</application> to
166 write to or read from pipes makes it possible to dump a database
167 directly from one server to another, for example:
168 <programlisting>
169 pg_dump -h <replaceable>host1</replaceable> <replaceable>dbname</replaceable> | psql -X -h <replaceable>host2</replaceable> <replaceable>dbname</replaceable>
170 </programlisting>
171 </para>
173 <important>
174 <para>
175 The dumps produced by <application>pg_dump</application> are relative to
176 <literal>template0</literal>. This means that any languages, procedures,
177 etc. added via <literal>template1</literal> will also be dumped by
178 <application>pg_dump</application>. As a result, when restoring, if you are
179 using a customized <literal>template1</literal>, you must create the
180 empty database from <literal>template0</literal>, as in the example
181 above.
182 </para>
183 </important>
185 <para>
186 After restoring a backup, it is wise to run <link
187 linkend="sql-analyze"><command>ANALYZE</command></link> on each
188 database so the query optimizer has useful statistics;
189 see <xref linkend="vacuum-for-statistics"/>
190 and <xref linkend="autovacuum"/> for more information.
191 For more advice on how to load large amounts of data
192 into <productname>PostgreSQL</productname> efficiently, refer to <xref
193 linkend="populate"/>.
194 </para>
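<para>
One convenient way to do this for every database in the cluster is
<application>vacuumdb</application>'s staged analyze mode, for example:
<programlisting>
vacuumdb --all --analyze-in-stages
</programlisting>
This produces rough statistics quickly and then progressively refines them,
so queries become usable sooner after a large restore.
</para>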
195 </sect2>
197 <sect2 id="backup-dump-all">
198 <title>Using <application>pg_dumpall</application></title>
200 <para>
201 <application>pg_dump</application> dumps only a single database at a time,
202 and it does not dump information about roles or tablespaces
203 (because those are cluster-wide rather than per-database).
204 To support convenient dumping of the entire contents of a database
205 cluster, the <xref linkend="app-pg-dumpall"/> program is provided.
206 <application>pg_dumpall</application> backs up each database in a given
207 cluster, and also preserves cluster-wide data such as role and
208 tablespace definitions. The basic usage of this command is:
209 <synopsis>
210 pg_dumpall &gt; <replaceable>dumpfile</replaceable>
211 </synopsis>
212 The resulting dump can be restored with <application>psql</application>:
213 <synopsis>
214 psql -X -f <replaceable class="parameter">dumpfile</replaceable> postgres
215 </synopsis>
216 (Actually, you can specify any existing database name to start from,
217 but if you are loading into an empty cluster then <literal>postgres</literal>
218 should usually be used.) It is always necessary to have
219 database superuser access when restoring a <application>pg_dumpall</application>
220 dump, as that is required to restore the role and tablespace information.
221 If you use tablespaces, make sure that the tablespace paths in the
222 dump are appropriate for the new installation.
223 </para>
225 <para>
226 <application>pg_dumpall</application> works by emitting commands to re-create
227 roles, tablespaces, and empty databases, then invoking
228 <application>pg_dump</application> for each database. This means that while
229 each database will be internally consistent, the snapshots of
230 different databases are not synchronized.
231 </para>
233 <para>
234 Cluster-wide data can be dumped alone using the
235 <application>pg_dumpall</application> <option>--globals-only</option> option.
This is necessary to fully back up the cluster if you are running
<application>pg_dump</application> on individual databases.
238 </para>
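<para>
For example, a per-database backup strategy might dump the cluster-wide
definitions once and then each database separately (the file names here are
only illustrative):
<programlisting>
pg_dumpall --globals-only &gt; globals.sql
pg_dump -Fc <replaceable class="parameter">dbname</replaceable> &gt; <replaceable class="parameter">dbname</replaceable>.dump
</programlisting>
When restoring, load <filename>globals.sql</filename> with
<application>psql</application> before restoring the individual database
dumps with <application>pg_restore</application>.
</para>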
239 </sect2>
241 <sect2 id="backup-dump-large">
242 <title>Handling Large Databases</title>
244 <para>
245 Some operating systems have maximum file size limits that cause
246 problems when creating large <application>pg_dump</application> output files.
247 Fortunately, <application>pg_dump</application> can write to the standard
248 output, so you can use standard Unix tools to work around this
249 potential problem. There are several possible methods:
250 </para>
252 <formalpara>
253 <title>Use compressed dumps.</title>
254 <para>
255 You can use your favorite compression program, for example
256 <application>gzip</application>:
258 <programlisting>
259 pg_dump <replaceable class="parameter">dbname</replaceable> | gzip &gt; <replaceable class="parameter">filename</replaceable>.gz
260 </programlisting>
262 Reload with:
264 <programlisting>
265 gunzip -c <replaceable class="parameter">filename</replaceable>.gz | psql <replaceable class="parameter">dbname</replaceable>
266 </programlisting>
or:
<programlisting>
271 cat <replaceable class="parameter">filename</replaceable>.gz | gunzip | psql <replaceable class="parameter">dbname</replaceable>
272 </programlisting>
273 </para>
274 </formalpara>
276 <formalpara>
277 <title>Use <command>split</command>.</title>
278 <para>
279 The <command>split</command> command
280 allows you to split the output into smaller files that are
281 acceptable in size to the underlying file system. For example, to
282 make 2 gigabyte chunks:
284 <programlisting>
285 pg_dump <replaceable class="parameter">dbname</replaceable> | split -b 2G - <replaceable class="parameter">filename</replaceable>
286 </programlisting>
288 Reload with:
290 <programlisting>
291 cat <replaceable class="parameter">filename</replaceable>* | psql <replaceable class="parameter">dbname</replaceable>
292 </programlisting>
294 If using GNU <application>split</application>, it is possible to
295 use it and <application>gzip</application> together:
297 <programlisting>
pg_dump <replaceable class="parameter">dbname</replaceable> | split -b 2G --filter='gzip > $FILE.gz' - <replaceable class="parameter">filename</replaceable>
299 </programlisting>
301 It can be restored using <command>zcat</command>.
302 </para>
303 </formalpara>
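<para>
For instance, the compressed chunks produced by the
<command>split</command>/<application>gzip</application> pipeline above
could be restored with a command along these lines (the actual file names
depend on the prefix you chose):
<programlisting>
zcat <replaceable class="parameter">filename</replaceable>*.gz | psql <replaceable class="parameter">dbname</replaceable>
</programlisting>
</para>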
305 <formalpara>
306 <title>Use <application>pg_dump</application>'s custom dump format.</title>
307 <para>
308 If <productname>PostgreSQL</productname> was built on a system with the
309 <application>zlib</application> compression library installed, the custom dump
310 format will compress data as it writes it to the output file. This will
311 produce dump file sizes similar to using <command>gzip</command>, but it
312 has the added advantage that tables can be restored selectively. The
313 following command dumps a database using the custom dump format:
315 <programlisting>
316 pg_dump -Fc <replaceable class="parameter">dbname</replaceable> &gt; <replaceable class="parameter">filename</replaceable>
317 </programlisting>
319 A custom-format dump is not a script for <application>psql</application>, but
320 instead must be restored with <application>pg_restore</application>, for example:
322 <programlisting>
323 pg_restore -d <replaceable class="parameter">dbname</replaceable> <replaceable class="parameter">filename</replaceable>
324 </programlisting>
326 See the <xref linkend="app-pgdump"/> and <xref
327 linkend="app-pgrestore"/> reference pages for details.
328 </para>
329 </formalpara>
331 <para>
332 For very large databases, you might need to combine <command>split</command>
333 with one of the other two approaches.
334 </para>
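<para>
One possible approach is to pipe a custom-format dump through
<command>split</command> in the same way as a plain text dump (chunk size
and file names are only illustrative):
<programlisting>
pg_dump -Fc <replaceable class="parameter">dbname</replaceable> | split -b 2G - <replaceable class="parameter">filename</replaceable>
cat <replaceable class="parameter">filename</replaceable>* | pg_restore -d <replaceable class="parameter">dbname</replaceable>
</programlisting>
</para>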
336 <formalpara>
337 <title>Use <application>pg_dump</application>'s parallel dump feature.</title>
338 <para>
339 To speed up the dump of a large database, you can use
340 <application>pg_dump</application>'s parallel mode. This will dump
341 multiple tables at the same time. You can control the degree of
342 parallelism with the <command>-j</command> parameter. Parallel dumps
343 are only supported for the "directory" archive format.
345 <programlisting>
346 pg_dump -j <replaceable class="parameter">num</replaceable> -F d -f <replaceable class="parameter">out.dir</replaceable> <replaceable class="parameter">dbname</replaceable>
347 </programlisting>
349 You can use <command>pg_restore -j</command> to restore a dump in parallel.
350 This will work for any archive of either the "custom" or the "directory"
351 archive mode, whether or not it has been created with <command>pg_dump -j</command>.
352 </para>
353 </formalpara>
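<para>
For example, a directory-format dump created as above could be restored
with four parallel jobs into an existing (empty) database:
<programlisting>
pg_restore -j 4 -d <replaceable class="parameter">dbname</replaceable> <replaceable class="parameter">out.dir</replaceable>
</programlisting>
</para>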
354 </sect2>
355 </sect1>
357 <sect1 id="backup-file">
358 <title>File System Level Backup</title>
360 <para>
361 An alternative backup strategy is to directly copy the files that
362 <productname>PostgreSQL</productname> uses to store the data in the database;
363 <xref linkend="creating-cluster"/> explains where these files
364 are located. You can use whatever method you prefer
365 for doing file system backups; for example:
367 <programlisting>
368 tar -cf backup.tar /usr/local/pgsql/data
369 </programlisting>
370 </para>
372 <para>
373 There are two restrictions, however, which make this method
374 impractical, or at least inferior to the <application>pg_dump</application>
375 method:
377 <orderedlist>
378 <listitem>
379 <para>
380 The database server <emphasis>must</emphasis> be shut down in order to
381 get a usable backup. Half-way measures such as disallowing all
382 connections will <emphasis>not</emphasis> work
383 (in part because <command>tar</command> and similar tools do not take
384 an atomic snapshot of the state of the file system,
385 but also because of internal buffering within the server).
386 Information about stopping the server can be found in
387 <xref linkend="server-shutdown"/>. Needless to say, you
388 also need to shut down the server before restoring the data.
389 </para>
390 </listitem>
392 <listitem>
393 <para>
394 If you have dug into the details of the file system layout of the
395 database, you might be tempted to try to back up or restore only certain
396 individual tables or databases from their respective files or
397 directories. This will <emphasis>not</emphasis> work because the
398 information contained in these files is not usable without
399 the commit log files,
400 <filename>pg_xact/*</filename>, which contain the commit status of
401 all transactions. A table file is only usable with this
402 information. Of course it is also impossible to restore only a
403 table and the associated <filename>pg_xact</filename> data
404 because that would render all other tables in the database
405 cluster useless. So file system backups only work for complete
406 backup and restoration of an entire database cluster.
407 </para>
408 </listitem>
409 </orderedlist>
410 </para>
412 <para>
413 An alternative file-system backup approach is to make a
414 <quote>consistent snapshot</quote> of the data directory, if the
415 file system supports that functionality (and you are willing to
416 trust that it is implemented correctly). The typical procedure is
417 to make a <quote>frozen snapshot</quote> of the volume containing the
418 database, then copy the whole data directory (not just parts, see
419 above) from the snapshot to a backup device, then release the frozen
420 snapshot. This will work even while the database server is running.
421 However, a backup created in this way saves
422 the database files in a state as if the database server was not
423 properly shut down; therefore, when you start the database server
424 on the backed-up data, it will think the previous server instance
425 crashed and will replay the WAL log. This is not a problem; just
426 be aware of it (and be sure to include the WAL files in your backup).
427 You can perform a <command>CHECKPOINT</command> before taking the
428 snapshot to reduce recovery time.
429 </para>
431 <para>
432 If your database is spread across multiple file systems, there might not
433 be any way to obtain exactly-simultaneous frozen snapshots of all
434 the volumes. For example, if your data files and WAL log are on different
435 disks, or if tablespaces are on different file systems, it might
436 not be possible to use snapshot backup because the snapshots
437 <emphasis>must</emphasis> be simultaneous.
438 Read your file system documentation very carefully before trusting
439 the consistent-snapshot technique in such situations.
440 </para>
442 <para>
443 If simultaneous snapshots are not possible, one option is to shut down
444 the database server long enough to establish all the frozen snapshots.
445 Another option is to perform a continuous archiving base backup (<xref
446 linkend="backup-base-backup"/>) because such backups are immune to file
447 system changes during the backup. This requires enabling continuous
448 archiving just during the backup process; restore is done using
449 continuous archive recovery (<xref linkend="backup-pitr-recovery"/>).
450 </para>
452 <para>
453 Another option is to use <application>rsync</application> to perform a file
454 system backup. This is done by first running <application>rsync</application>
455 while the database server is running, then shutting down the database
456 server long enough to do an <command>rsync --checksum</command>.
457 (<option>--checksum</option> is necessary because <command>rsync</command> only
458 has file modification-time granularity of one second.) The
459 second <application>rsync</application> will be quicker than the first,
460 because it has relatively little data to transfer, and the end result
461 will be consistent because the server was down. This method
462 allows a file system backup to be performed with minimal downtime.
463 </para>
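<para>
A sketch of this procedure, assuming the data directory shown earlier and a
backup location of your choosing (adjust both paths, and note that
<application>pg_ctl</application> here relies on <envar>PGDATA</envar>
being set), is:
<programlisting>
rsync -a /usr/local/pgsql/data/ /backup/pgsql/data/             # server still running
pg_ctl stop                                                     # brief downtime begins
rsync -a --checksum /usr/local/pgsql/data/ /backup/pgsql/data/  # small, consistent pass
pg_ctl start
</programlisting>
</para>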
465 <para>
466 Note that a file system backup will typically be larger
467 than an SQL dump. (<application>pg_dump</application> does not need to dump
468 the contents of indexes for example, just the commands to recreate
469 them.) However, taking a file system backup might be faster.
470 </para>
471 </sect1>
473 <sect1 id="continuous-archiving">
474 <title>Continuous Archiving and Point-in-Time Recovery (PITR)</title>
476 <indexterm zone="backup">
477 <primary>continuous archiving</primary>
478 </indexterm>
480 <indexterm zone="backup">
481 <primary>point-in-time recovery</primary>
482 </indexterm>
484 <indexterm zone="backup">
485 <primary>PITR</primary>
486 </indexterm>
488 <para>
489 At all times, <productname>PostgreSQL</productname> maintains a
490 <firstterm>write ahead log</firstterm> (WAL) in the <filename>pg_wal/</filename>
491 subdirectory of the cluster's data directory. The log records
492 every change made to the database's data files. This log exists
493 primarily for crash-safety purposes: if the system crashes, the
494 database can be restored to consistency by <quote>replaying</quote> the
495 log entries made since the last checkpoint. However, the existence
496 of the log makes it possible to use a third strategy for backing up
497 databases: we can combine a file-system-level backup with backup of
498 the WAL files. If recovery is needed, we restore the file system backup and
499 then replay from the backed-up WAL files to bring the system to a
500 current state. This approach is more complex to administer than
501 either of the previous approaches, but it has some significant
502 benefits:
503 <itemizedlist>
504 <listitem>
505 <para>
506 We do not need a perfectly consistent file system backup as the starting point.
507 Any internal inconsistency in the backup will be corrected by log
508 replay (this is not significantly different from what happens during
509 crash recovery). So we do not need a file system snapshot capability,
510 just <application>tar</application> or a similar archiving tool.
511 </para>
512 </listitem>
513 <listitem>
514 <para>
515 Since we can combine an indefinitely long sequence of WAL files
516 for replay, continuous backup can be achieved simply by continuing to archive
517 the WAL files. This is particularly valuable for large databases, where
518 it might not be convenient to take a full backup frequently.
519 </para>
520 </listitem>
521 <listitem>
522 <para>
523 It is not necessary to replay the WAL entries all the
524 way to the end. We could stop the replay at any point and have a
525 consistent snapshot of the database as it was at that time. Thus,
526 this technique supports <firstterm>point-in-time recovery</firstterm>: it is
527 possible to restore the database to its state at any time since your base
528 backup was taken.
529 </para>
530 </listitem>
531 <listitem>
532 <para>
533 If we continuously feed the series of WAL files to another
534 machine that has been loaded with the same base backup file, we
535 have a <firstterm>warm standby</firstterm> system: at any point we can bring up
536 the second machine and it will have a nearly-current copy of the
537 database.
538 </para>
539 </listitem>
540 </itemizedlist>
541 </para>
543 <note>
544 <para>
545 <application>pg_dump</application> and
546 <application>pg_dumpall</application> do not produce file-system-level
547 backups and cannot be used as part of a continuous-archiving solution.
548 Such dumps are <emphasis>logical</emphasis> and do not contain enough
549 information to be used by WAL replay.
550 </para>
551 </note>
553 <para>
554 As with the plain file-system-backup technique, this method can only
555 support restoration of an entire database cluster, not a subset.
556 Also, it requires a lot of archival storage: the base backup might be bulky,
557 and a busy system will generate many megabytes of WAL traffic that
558 have to be archived. Still, it is the preferred backup technique in
559 many situations where high reliability is needed.
560 </para>
562 <para>
563 To recover successfully using continuous archiving (also called
564 <quote>online backup</quote> by many database vendors), you need a continuous
565 sequence of archived WAL files that extends back at least as far as the
566 start time of your backup. So to get started, you should set up and test
567 your procedure for archiving WAL files <emphasis>before</emphasis> you take your
568 first base backup. Accordingly, we first discuss the mechanics of
569 archiving WAL files.
570 </para>
572 <sect2 id="backup-archiving-wal">
573 <title>Setting Up WAL Archiving</title>
575 <para>
576 In an abstract sense, a running <productname>PostgreSQL</productname> system
577 produces an indefinitely long sequence of WAL records. The system
578 physically divides this sequence into WAL <firstterm>segment
579 files</firstterm>, which are normally 16MB apiece (although the segment size
580 can be altered during <application>initdb</application>). The segment
581 files are given numeric names that reflect their position in the
582 abstract WAL sequence. When not using WAL archiving, the system
583 normally creates just a few segment files and then
584 <quote>recycles</quote> them by renaming no-longer-needed segment files
585 to higher segment numbers. It's assumed that segment files whose
586 contents precede the last checkpoint are no longer of
587 interest and can be recycled.
588 </para>
590 <para>
591 When archiving WAL data, we need to capture the contents of each segment
592 file once it is filled, and save that data somewhere before the segment
593 file is recycled for reuse. Depending on the application and the
594 available hardware, there could be many different ways of <quote>saving
595 the data somewhere</quote>: we could copy the segment files to an NFS-mounted
596 directory on another machine, write them onto a tape drive (ensuring that
597 you have a way of identifying the original name of each file), or batch
598 them together and burn them onto CDs, or something else entirely. To
599 provide the database administrator with flexibility,
600 <productname>PostgreSQL</productname> tries not to make any assumptions about how
601 the archiving will be done. Instead, <productname>PostgreSQL</productname> lets
602 the administrator specify a shell command or an archive library to be executed to copy a
603 completed segment file to wherever it needs to go. This could be as simple
604 as a shell command that uses <literal>cp</literal>, or it could invoke a
605 complex C function &mdash; it's all up to you.
606 </para>
608 <para>
609 To enable WAL archiving, set the <xref linkend="guc-wal-level"/>
610 configuration parameter to <literal>replica</literal> or higher,
611 <xref linkend="guc-archive-mode"/> to <literal>on</literal>,
612 specify the shell command to use in the <xref
613 linkend="guc-archive-command"/> configuration parameter
614 or specify the library to use in the <xref
615 linkend="guc-archive-library"/> configuration parameter. In practice
616 these settings will always be placed in the
617 <filename>postgresql.conf</filename> file.
618 </para>
620 <para>
621 In <varname>archive_command</varname>,
622 <literal>%p</literal> is replaced by the path name of the file to
623 archive, while <literal>%f</literal> is replaced by only the file name.
624 (The path name is relative to the current working directory,
625 i.e., the cluster's data directory.)
626 Use <literal>%%</literal> if you need to embed an actual <literal>%</literal>
627 character in the command. The simplest useful command is something
628 like:
629 <programlisting>
630 archive_command = 'test ! -f /mnt/server/archivedir/%f &amp;&amp; cp %p /mnt/server/archivedir/%f' # Unix
631 archive_command = 'copy "%p" "C:\\server\\archivedir\\%f"' # Windows
632 </programlisting>
633 which will copy archivable WAL segments to the directory
634 <filename>/mnt/server/archivedir</filename>. (This is an example, not a
635 recommendation, and might not work on all platforms.) After the
636 <literal>%p</literal> and <literal>%f</literal> parameters have been replaced,
637 the actual command executed might look like this:
638 <programlisting>
639 test ! -f /mnt/server/archivedir/00000001000000A900000065 &amp;&amp; cp pg_wal/00000001000000A900000065 /mnt/server/archivedir/00000001000000A900000065
640 </programlisting>
641 A similar command will be generated for each new file to be archived.
642 </para>
644 <para>
645 The archive command will be executed under the ownership of the same
646 user that the <productname>PostgreSQL</productname> server is running as. Since
647 the series of WAL files being archived contains effectively everything
648 in your database, you will want to be sure that the archived data is
649 protected from prying eyes; for example, archive into a directory that
650 does not have group or world read access.
651 </para>
653 <para>
654 It is important that the archive command return zero exit status if and
655 only if it succeeds. Upon getting a zero result,
656 <productname>PostgreSQL</productname> will assume that the file has been
657 successfully archived, and will remove or recycle it. However, a nonzero
658 status tells <productname>PostgreSQL</productname> that the file was not archived;
659 it will try again periodically until it succeeds.
660 </para>
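<para>
As an illustration only (not a recommendation), an archive script that
obeys this rule might look like the following; the script name and archive
path are placeholders, and it would be configured as
<literal>archive_command = '/usr/local/bin/archive_wal.sh %p %f'</literal>:
<programlisting>
#!/bin/sh
# Illustrative sketch: copy one completed WAL segment into the archive,
# refusing to overwrite an existing file, and report failure via exit status.
set -e
archivedir=/mnt/server/archivedir
test ! -f "$archivedir/$2"    # nonzero exit if the file already exists
cp "$1" "$archivedir/$2"
</programlisting>
A production-quality command would additionally need to make sure the
copied file has actually reached durable storage before reporting success.
</para>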
662 <para>
663 Another way to archive is to use a custom archive module as the
664 <varname>archive_library</varname>. Since such modules are written in
665 <literal>C</literal>, creating your own may require considerably more effort
666 than writing a shell command. However, archive modules can be more
667 performant than archiving via shell, and they will have access to many
668 useful server resources. For more information about archive modules, see
669 <xref linkend="archive-modules"/>.
670 </para>
672 <para>
673 When the archive command is terminated by a signal (other than
674 <systemitem>SIGTERM</systemitem> that is used as part of a server
675 shutdown) or an error by the shell with an exit status greater than
676 125 (such as command not found), or if the archive function emits an
677 <literal>ERROR</literal> or <literal>FATAL</literal>, the archiver process
678 aborts and gets restarted by the postmaster. In such cases, the failure is
679 not reported in <xref linkend="pg-stat-archiver-view"/>.
680 </para>
682 <para>
683 Archive commands and libraries should generally be designed to refuse to overwrite
684 any pre-existing archive file. This is an important safety feature to
685 preserve the integrity of your archive in case of administrator error
686 (such as sending the output of two different servers to the same archive
687 directory). It is advisable to test your proposed archive library to ensure
688 that it does not overwrite an existing file.
689 </para>
691 <para>
692 In rare cases, <productname>PostgreSQL</productname> may attempt to
693 re-archive a WAL file that was previously archived. For example, if the
694 system crashes before the server makes a durable record of archival
695 success, the server will attempt to archive the file again after
696 restarting (provided archiving is still enabled). When an archive command or library
697 encounters a pre-existing file, it should return a zero status or <literal>true</literal>, respectively,
698 if the WAL file has identical contents to the pre-existing archive and the
699 pre-existing archive is fully persisted to storage. If a pre-existing
700 file contains different contents than the WAL file being archived, the
701 archive command or library <emphasis>must</emphasis> return a nonzero status or
702 <literal>false</literal>, respectively.
703 </para>
705 <para>
706 The example command above for Unix avoids overwriting a pre-existing archive
707 by including a separate
708 <command>test</command> step. On some Unix platforms, <command>cp</command> has
709 switches such as <option>-i</option> that can be used to do the same thing
710 less verbosely, but you should not rely on these without verifying that
711 the right exit status is returned. (In particular, GNU <command>cp</command>
712 will return status zero when <option>-i</option> is used and the target file
713 already exists, which is <emphasis>not</emphasis> the desired behavior.)
714 </para>
716 <para>
717 While designing your archiving setup, consider what will happen if
718 the archive command or library fails repeatedly because some aspect requires
719 operator intervention or the archive runs out of space. For example, this
720 could occur if you write to tape without an autochanger; when the tape
721 fills, nothing further can be archived until the tape is swapped.
722 You should ensure that any error condition or request to a human operator
723 is reported appropriately so that the situation can be
724 resolved reasonably quickly. The <filename>pg_wal/</filename> directory will
725 continue to fill with WAL segment files until the situation is resolved.
726 (If the file system containing <filename>pg_wal/</filename> fills up,
727 <productname>PostgreSQL</productname> will do a PANIC shutdown. No committed
728 transactions will be lost, but the database will remain offline until
729 you free some space.)
730 </para>
732 <para>
733 The speed of the archive command or library is unimportant as long as it can keep up
734 with the average rate at which your server generates WAL data. Normal
735 operation continues even if the archiving process falls a little behind.
736 If archiving falls significantly behind, this will increase the amount of
737 data that would be lost in the event of a disaster. It will also mean that
738 the <filename>pg_wal/</filename> directory will contain large numbers of
739 not-yet-archived segment files, which could eventually exceed available
740 disk space. You are advised to monitor the archiving process to ensure that
741 it is working as you intend.
742 </para>
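<para>
One simple way to monitor this is to check the archiver statistics view
periodically, for example:
<programlisting>
SELECT archived_count, last_archived_wal, failed_count, last_failed_wal
  FROM pg_stat_archiver;
</programlisting>
A growing <literal>failed_count</literal>, or a
<literal>last_archived_wal</literal> that lags far behind the current WAL
position, indicates that archiving needs attention.
</para>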
744 <para>
745 In writing your archive command or library, you should assume that the file names to
746 be archived can be up to 64 characters long and can contain any
747 combination of ASCII letters, digits, and dots. It is not necessary to
748 preserve the original relative path (<literal>%p</literal>) but it is necessary to
749 preserve the file name (<literal>%f</literal>).
750 </para>
752 <para>
753 Note that although WAL archiving will allow you to restore any
754 modifications made to the data in your <productname>PostgreSQL</productname> database,
755 it will not restore changes made to configuration files (that is,
756 <filename>postgresql.conf</filename>, <filename>pg_hba.conf</filename> and
757 <filename>pg_ident.conf</filename>), since those are edited manually rather
758 than through SQL operations.
759 You might wish to keep the configuration files in a location that will
760 be backed up by your regular file system backup procedures. See
761 <xref linkend="runtime-config-file-locations"/> for how to relocate the
762 configuration files.
763 </para>
765 <para>
766 The archive command or function is only invoked on completed WAL segments. Hence,
if your server generates little WAL traffic (or has slack periods
768 where it does so), there could be a long delay between the completion
769 of a transaction and its safe recording in archive storage. To put
770 a limit on how old unarchived data can be, you can set
771 <xref linkend="guc-archive-timeout"/> to force the server to switch
772 to a new WAL segment file at least that often. Note that archived
files that are closed early due to a forced switch are still the same
774 length as completely full files. It is therefore unwise to set a very
775 short <varname>archive_timeout</varname> &mdash; it will bloat your archive
776 storage. <varname>archive_timeout</varname> settings of a minute or so are
777 usually reasonable.
778 </para>
780 <para>
781 Also, you can force a segment switch manually with
782 <function>pg_switch_wal</function> if you want to ensure that a
783 just-finished transaction is archived as soon as possible. Other utility
784 functions related to WAL management are listed in <xref
785 linkend="functions-admin-backup-table"/>.
786 </para>
788 <para>
789 When <varname>wal_level</varname> is <literal>minimal</literal> some SQL commands
790 are optimized to avoid WAL logging, as described in <xref
791 linkend="populate-pitr"/>. If archiving or streaming replication were
792 turned on during execution of one of these statements, WAL would not
793 contain enough information for archive recovery. (Crash recovery is
794 unaffected.) For this reason, <varname>wal_level</varname> can only be changed at
795 server start. However, <varname>archive_command</varname> and <varname>archive_library</varname> can be changed with a
796 configuration file reload. If you are archiving via shell and wish to
797 temporarily stop archiving,
798 one way to do it is to set <varname>archive_command</varname> to the empty
799 string (<literal>''</literal>).
800 This will cause WAL files to accumulate in <filename>pg_wal/</filename> until a
801 working <varname>archive_command</varname> is re-established.
802 </para>
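<para>
For example, assuming the setting is not overridden elsewhere, this can be
done without a server restart:
<programlisting>
ALTER SYSTEM SET archive_command = '';
SELECT pg_reload_conf();
</programlisting>
Restoring the previous command (or running <command>ALTER SYSTEM RESET
archive_command</command>) and reloading the configuration again resumes
archiving.
</para>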
803 </sect2>
805 <sect2 id="backup-base-backup">
806 <title>Making a Base Backup</title>
808 <para>
809 The easiest way to perform a base backup is to use the
810 <xref linkend="app-pgbasebackup"/> tool. It can create
811 a base backup either as regular files or as a tar archive. If more
812 flexibility than <xref linkend="app-pgbasebackup"/> can provide is
813 required, you can also make a base backup using the low level API
814 (see <xref linkend="backup-lowlevel-base-backup"/>).
815 </para>
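<para>
For example, a compressed tar-format base backup that also streams the
required WAL could be taken with something like the following (the target
directory is only a placeholder):
<programlisting>
pg_basebackup -D /backup/base -Ft -z -X stream -P
</programlisting>
</para>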
817 <para>
818 It is not necessary to be concerned about the amount of time it takes
819 to make a base backup. However, if you normally run the
820 server with <varname>full_page_writes</varname> disabled, you might notice a drop
821 in performance while the backup runs since <varname>full_page_writes</varname> is
822 effectively forced on during backup mode.
823 </para>
825 <para>
826 To make use of the backup, you will need to keep all the WAL
827 segment files generated during and after the file system backup.
828 To aid you in doing this, the base backup process
829 creates a <firstterm>backup history file</firstterm> that is immediately
830 stored into the WAL archive area. This file is named after the first
831 WAL segment file that you need for the file system backup.
832 For example, if the starting WAL file is
833 <literal>0000000100001234000055CD</literal> the backup history file will be
834 named something like
835 <literal>0000000100001234000055CD.007C9330.backup</literal>. (The second
836 part of the file name stands for an exact position within the WAL
837 file, and can ordinarily be ignored.) Once you have safely archived
838 the file system backup and the WAL segment files used during the
839 backup (as specified in the backup history file), all archived WAL
840 segments with names numerically less are no longer needed to recover
841 the file system backup and can be deleted. However, you should
842 consider keeping several backup sets to be absolutely certain that
843 you can recover your data.
844 </para>
846 <para>
847 The backup history file is just a small text file. It contains the
848 label string you gave to <xref linkend="app-pgbasebackup"/>, as well as
849 the starting and ending times and WAL segments of the backup.
850 If you used the label to identify the associated dump file,
851 then the archived history file is enough to tell you which dump file to
852 restore.
853 </para>
855 <para>
856 Since you have to keep around all the archived WAL files back to your
857 last base backup, the interval between base backups should usually be
858 chosen based on how much storage you want to expend on archived WAL
859 files. You should also consider how long you are prepared to spend
860 recovering, if recovery should be necessary &mdash; the system will have to
replay all those WAL segments, and that could take a while if it has
862 been a long time since the last base backup.
863 </para>
864 </sect2>
866 <sect2 id="backup-incremental-backup">
867 <title>Making an Incremental Backup</title>
869 <para>
870 You can use <xref linkend="app-pgbasebackup"/> to take an incremental
871 backup by specifying the <literal>--incremental</literal> option. You must
872 supply, as an argument to <literal>--incremental</literal>, the backup
873 manifest to an earlier backup from the same server. In the resulting
874 backup, non-relation files will be included in their entirety, but some
875 relation files may be replaced by smaller incremental files which contain
876 only the blocks which have been changed since the earlier backup and enough
877 metadata to reconstruct the current version of the file.
878 </para>
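<para>
For instance, if a previous full backup was taken in plain format into
<filename>/backup/full</filename> (paths here are purely illustrative), an
incremental backup based on it might be taken with:
<programlisting>
pg_basebackup -D /backup/incr1 --incremental=/backup/full/backup_manifest
</programlisting>
</para>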
880 <para>
881 To figure out which blocks need to be backed up, the server uses WAL
882 summaries, which are stored in the data directory, inside the directory
883 <literal>pg_wal/summaries</literal>. If the required summary files are not
884 present, an attempt to take an incremental backup will fail. The summaries
885 present in this directory must cover all LSNs from the start LSN of the
886 prior backup to the start LSN of the current backup. Since the server looks
887 for WAL summaries just after establishing the start LSN of the current
888 backup, the necessary summary files probably won't be instantly present
889 on disk, but the server will wait for any missing files to show up.
890 This also helps if the WAL summarization process has fallen behind.
891 However, if the necessary files have already been removed, or if the WAL
892 summarizer doesn't catch up quickly enough, the incremental backup will
893 fail.
894 </para>
896 <para>
897 When restoring an incremental backup, it will be necessary to have not
898 only the incremental backup itself but also all earlier backups that
899 are required to supply the blocks omitted from the incremental backup.
900 See <xref linkend="app-pgcombinebackup"/> for further information about
901 this requirement. Note that there are restrictions on the use of
902 <literal>pg_combinebackup</literal> when the checksum status of the
903 cluster has been changed; see
904 <link linkend="app-pgcombinebackup-limitations">pg_combinebackup
905 limitations</link>.
906 </para>
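<para>
Continuing the illustrative directory layout above, a synthetic full backup
could be reconstructed from the chain of backups by listing them from
oldest (the full backup) to newest and writing the result to a new
location:
<programlisting>
pg_combinebackup -o /restore/data /backup/full /backup/incr1
</programlisting>
</para>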
908 <para>
909 Note that all of the requirements for making use of a full backup also
910 apply to an incremental backup. For instance, you still need all of the
911 WAL segment files generated during and after the file system backup, and
912 any relevant WAL history files. And you still need to create a
913 <literal>recovery.signal</literal> (or <literal>standby.signal</literal>)
914 and perform recovery, as described in
915 <xref linkend="backup-pitr-recovery" />. The requirement to have earlier
916 backups available at restore time and to use
917 <literal>pg_combinebackup</literal> is an additional requirement on top of
918 everything else. Keep in mind that <application>PostgreSQL</application>
919 has no built-in mechanism to figure out which backups are still needed as
920 a basis for restoring later incremental backups. You must keep track of
921 the relationships between your full and incremental backups on your own,
922 and be certain not to remove earlier backups if they might be needed when
923 restoring later incremental backups.
924 </para>
926 <para>
927 Incremental backups typically only make sense for relatively large
928 databases where a significant portion of the data does not change, or only
929 changes slowly. For a small database, it's simpler to ignore the existence
930 of incremental backups and simply take full backups, which are simpler
to manage. For a large database, all of which is heavily modified,
932 incremental backups won't be much smaller than full backups.
933 </para>
935 <para>
936 An incremental backup is only possible if replay would begin from a later
937 checkpoint than for the previous backup upon which it depends. If you
938 take the incremental backup on the primary, this condition is always
939 satisfied, because each backup triggers a new checkpoint. On a standby,
940 replay begins from the most recent restartpoint. Therefore, an
941 incremental backup of a standby server can fail if there has been very
942 little activity since the previous backup, since no new restartpoint might
943 have been created.
944 </para>
945 </sect2>
947 <sect2 id="backup-lowlevel-base-backup">
948 <title>Making a Base Backup Using the Low Level API</title>
949 <para>
950 Instead of taking a full or incremental base backup using
951 <xref linkend="app-pgbasebackup"/>, you can take a base backup using the
952 low-level API. This procedure contains a few more steps than
953 the <application>pg_basebackup</application> method, but is relatively
954 simple. It is very important that these steps are executed in
955 sequence, and that the success of a step is verified before
956 proceeding to the next step.
957 </para>
958 <para>
Multiple backups can be run concurrently (both those
960 started using this backup API and those started using
961 <xref linkend="app-pgbasebackup"/>).
962 </para>
963 <para>
964 <orderedlist>
965 <listitem>
966 <para>
967 Ensure that WAL archiving is enabled and working.
968 </para>
969 </listitem>
970 <listitem>
971 <para>
972 Connect to the server (it does not matter which database) as a user with
973 rights to run <function>pg_backup_start</function> (superuser,
974 or a user who has been granted <literal>EXECUTE</literal> on the
975 function) and issue the command:
976 <programlisting>
977 SELECT pg_backup_start(label => 'label', fast => false);
978 </programlisting>
979 where <literal>label</literal> is any string you want to use to uniquely
980 identify this backup operation. The connection
981 calling <function>pg_backup_start</function> must be maintained until the end of
982 the backup, or the backup will be automatically aborted.
983 </para>
985 <para>
986 Online backups are always started at the beginning of a checkpoint.
987 By default, <function>pg_backup_start</function> will wait for the next
988 regularly scheduled checkpoint to complete, which may take a long time (see the
989 configuration parameters <xref linkend="guc-checkpoint-timeout"/> and
990 <xref linkend="guc-checkpoint-completion-target"/>). This is
991 usually preferable as it minimizes the impact on the running system. If you
992 want to start the backup as soon as possible, pass <literal>true</literal> as
993 the second parameter to <function>pg_backup_start</function> and it will
994 request an immediate checkpoint, which will finish as fast as possible using
995 as much I/O as possible.
996 </para>
998 </listitem>
999 <listitem>
1000 <para>
1001 Perform the backup, using any convenient file-system-backup tool
1002 such as <application>tar</application> or <application>cpio</application> (not
1003 <application>pg_dump</application> or
1004 <application>pg_dumpall</application>). It is neither
1005 necessary nor desirable to stop normal operation of the database
1006 while you do this. See
1007 <xref linkend="backup-lowlevel-base-backup-data"/> for things to
1008 consider during this backup.
1009 </para>
1010 </listitem>
1011 <listitem>
1012 <para>
1013 In the same connection as before, issue the command:
1014 <programlisting>
1015 SELECT * FROM pg_backup_stop(wait_for_archive => true);
1016 </programlisting>
1017 This terminates backup mode. On a primary, it also performs an automatic
1018 switch to the next WAL segment. On a standby, it is not possible to
1019 automatically switch WAL segments, so you may wish to run
1020 <function>pg_switch_wal</function> on the primary to perform a manual
1021 switch. The reason for the switch is to arrange for
1022 the last WAL segment file written during the backup interval to be
1023 ready to archive.
1024 </para>
1025 <para>
1026 <function>pg_backup_stop</function> will return one row with three
1027 values. The second of these fields should be written to a file named
1028 <filename>backup_label</filename> in the root directory of the backup. The
1029 third field should be written to a file named
1030 <filename>tablespace_map</filename> unless the field is empty. These files are
1031 vital to the backup working and must be written byte for byte without
1032 modification, which may require opening the file in binary mode.
1033 </para>
1034 </listitem>
1035 <listitem>
1036 <para>
1037 Once the WAL segment files active during the backup are archived, you are
1038 done. The file identified by <function>pg_backup_stop</function>'s first return
1039 value is the last segment that is required to form a complete set of
1040 backup files. On a primary, if <varname>archive_mode</varname> is enabled and the
1041 <literal>wait_for_archive</literal> parameter is <literal>true</literal>,
1042 <function>pg_backup_stop</function> does not return until the last segment has
1043 been archived.
1044 On a standby, <varname>archive_mode</varname> must be <literal>always</literal> in order
1045 for <function>pg_backup_stop</function> to wait.
1046 Archiving of these files happens automatically since you have
1047 already configured <varname>archive_command</varname> or <varname>archive_library</varname>.
1048 In most cases this happens quickly, but you are advised to monitor your
1049 archive system to ensure there are no delays.
1050 If the archive process has fallen behind because of failures of the
1051 archive command or library, it will keep retrying
1052 until the archive succeeds and the backup is complete.
1053 If you wish to place a time limit on the execution of
1054 <function>pg_backup_stop</function>, set an appropriate
1055 <varname>statement_timeout</varname> value, but make note that if
1056 <function>pg_backup_stop</function> terminates because of this your backup
1057 may not be valid.
1058 </para>
1059 <para>
1060 If the backup process monitors and ensures that all WAL segment files
1061 required for the backup are successfully archived then the
1062 <literal>wait_for_archive</literal> parameter (which defaults to true) can be set
1063 to false to have
1064 <function>pg_backup_stop</function> return as soon as the stop backup record is
1065 written to the WAL. By default, <function>pg_backup_stop</function> will wait
1066 until all WAL has been archived, which can take some time. This option
1067 must be used with caution: if WAL archiving is not monitored correctly
1068 then the backup might not include all of the WAL files and will
1069 therefore be incomplete and not able to be restored.
1070 </para>
1071 </listitem>
1072 </orderedlist>
1073 </para>
1074 <sect3 id="backup-lowlevel-base-backup-data">
1075 <title>Backing Up the Data Directory</title>
1076 <para>
1077 Some file system backup tools emit warnings or errors
1078 if the files they are trying to copy change while the copy proceeds.
1079 When taking a base backup of an active database, this situation is normal
1080 and not an error. However, you need to ensure that you can distinguish
1081 complaints of this sort from real errors. For example, some versions
1082 of <application>rsync</application> return a separate exit code for
1083 <quote>vanished source files</quote>, and you can write a driver script to
1084 accept this exit code as a non-error case. Also, some versions of
1085 GNU <application>tar</application> return an error code indistinguishable from
1086 a fatal error if a file was truncated while <application>tar</application> was
1087 copying it. Fortunately, GNU <application>tar</application> versions 1.16 and
1088 later exit with 1 if a file was changed during the backup,
1089 and 2 for other errors. With GNU <application>tar</application> version 1.23 and
1090 later, you can use the warning options <literal>--warning=no-file-changed
1091 --warning=no-file-removed</literal> to hide the related warning messages.
1092 </para>
1094 <para>
1095 Be certain that your backup includes all of the files under
1096 the database cluster directory (e.g., <filename>/usr/local/pgsql/data</filename>).
1097 If you are using tablespaces that do not reside underneath this directory,
1098 be careful to include them as well (and be sure that your backup
1099 archives symbolic links as links, otherwise the restore will corrupt
1100 your tablespaces).
1101 </para>
1103 <para>
1104 You should, however, omit from the backup the files within the
1105 cluster's <filename>pg_wal/</filename> subdirectory. This
1106 slight adjustment is worthwhile because it reduces the risk
1107 of mistakes when restoring. This is easy to arrange if
1108 <filename>pg_wal/</filename> is a symbolic link pointing to someplace outside
1109 the cluster directory, which is a common setup anyway for performance
1110 reasons. You might also want to exclude <filename>postmaster.pid</filename>
1111 and <filename>postmaster.opts</filename>, which record information
1112 about the running <application>postmaster</application>, not about the
1113 <application>postmaster</application> which will eventually use this backup.
1114 (These files can confuse <application>pg_ctl</application>.)
1115 </para>
1117 <para>
1118 It is often a good idea to also omit from the backup the files
1119 within the cluster's <filename>pg_replslot/</filename> directory, so that
1120 replication slots that exist on the primary do not become part of the
1121 backup. Otherwise, the subsequent use of the backup to create a standby
1122 may result in indefinite retention of WAL files on the standby, and
1123 possibly bloat on the primary if hot standby feedback is enabled, because
1124 the clients that are using those replication slots will still be connecting
1125 to and updating the slots on the primary, not the standby. Even if the
1126 backup is only intended for use in creating a new primary, copying the
1127 replication slots isn't expected to be particularly useful, since the
1128 contents of those slots will likely be badly out of date by the time
1129 the new primary comes on line.
1130 </para>
1132 <para>
1133 The contents of the directories <filename>pg_dynshmem/</filename>,
1134 <filename>pg_notify/</filename>, <filename>pg_serial/</filename>,
1135 <filename>pg_snapshots/</filename>, <filename>pg_stat_tmp/</filename>,
1136 and <filename>pg_subtrans/</filename> (but not the directories themselves) can be
1137 omitted from the backup as they will be initialized on postmaster startup.
1138 </para>
1140 <para>
1141 Any file or directory beginning with <filename>pgsql_tmp</filename> can be
1142 omitted from the backup. These files are removed on postmaster start and
1143 the directories will be recreated as needed.
1144 </para>
1146 <para>
1147 <filename>pg_internal.init</filename> files can be omitted from the
1148 backup whenever a file of that name is found. These files contain
1149 relation cache data that is always rebuilt when recovering.
1150 </para>
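<para>
As a rough illustration, with GNU <application>tar</application> a backup
that keeps these directories but omits their contents (and skips the
<application>postmaster</application> files) might look like this; the
exact exclusion syntax depends on your <application>tar</application>
implementation, so verify the resulting archive before relying on it:
<programlisting>
tar -cf backup.tar \
    --exclude='pg_wal/*' \
    --exclude='pg_replslot/*' \
    --exclude='pgsql_tmp*' \
    --exclude=postmaster.pid \
    --exclude=postmaster.opts \
    /usr/local/pgsql/data
</programlisting>
</para>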
1152 <para>
1153 The backup label
1154 file includes the label string you gave to <function>pg_backup_start</function>,
1155 as well as the time at which <function>pg_backup_start</function> was run, and
1156 the name of the starting WAL file. In case of confusion it is therefore
1157 possible to look inside a backup file and determine exactly which
1158 backup session the dump file came from. The tablespace map file includes
1159 the symbolic link names as they exist in the directory
1160 <filename>pg_tblspc/</filename> and the full path of each symbolic link.
1161 These files are not merely for your information; their presence and
1162 contents are critical to the proper operation of the system's recovery
1163 process.
1164 </para>
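 <para>
  For example, after extracting a file system backup you might check which
  backup session it came from simply by displaying the backup label file.
  The exact field names and values shown here are illustrative only and can
  vary between <productname>PostgreSQL</productname> versions:
<programlisting>
$ cat backup_label
START WAL LOCATION: 0/9000028 (file 000000010000000000000009)
...
START TIME: 2024-05-14 16:49:01 CEST
LABEL: weekly full backup
</programlisting>
 </para>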
1166 <para>
1167 It is also possible to make a backup while the server is
1168 stopped. In this case, you obviously cannot use
1169 <function>pg_backup_start</function> or <function>pg_backup_stop</function>, and
1170 you will therefore be left to your own devices to keep track of which
1171 backup is which and how far back the associated WAL files go.
1172 It is generally better to follow the continuous archiving procedure above.
1173 </para>
1174 </sect3>
1175 </sect2>
1177 <sect2 id="backup-pitr-recovery">
1178 <title>Recovering Using a Continuous Archive Backup</title>
1180 <para>
1181 Okay, the worst has happened and you need to recover from your backup.
1182 Here is the procedure (a consolidated command-line sketch follows the list):
1183 <orderedlist>
1184 <listitem>
1185 <para>
1186 Stop the server, if it's running.
1187 </para>
1188 </listitem>
1189 <listitem>
1190 <para>
1191 If you have the space to do so,
1192 copy the whole cluster data directory and any tablespaces to a temporary
1193 location in case you need them later. Note that this precaution will
1194 require that you have enough free space on your system to hold two
1195 copies of your existing database. If you do not have enough space,
1196 you should at least save the contents of the cluster's <filename>pg_wal</filename>
1197 subdirectory, as it might contain WAL files which
1198 were not archived before the system went down.
1199 </para>
1200 </listitem>
1201 <listitem>
1202 <para>
1203 Remove all existing files and subdirectories under the cluster data
1204 directory and under the root directories of any tablespaces you are using.
1205 </para>
1206 </listitem>
1207 <listitem>
1208 <para>
1209 If you're restoring a full backup, you can restore the database files
1210 directly into the target directories. Be sure that they
1211 are restored with the right ownership (the database system user, not
1212 <literal>root</literal>!) and with the right permissions. If you are using
1213 tablespaces,
1214 you should verify that the symbolic links in <filename>pg_tblspc/</filename>
1215 were correctly restored.
1216 </para>
1217 </listitem>
1218 <listitem>
1219 <para>
1220 If you're restoring an incremental backup, you'll need to restore the
1221 incremental backup and all earlier backups upon which it directly or
1222 indirectly depends to the machine where you are performing the restore.
1223 These backups will need to be placed in separate directories, not the
1224 target directories where you want the running server to end up.
1225 Once this is done, use <xref linkend="app-pgcombinebackup"/> to pull
1226 data from the full backup and all of the subsequent incremental backups
1227 and write out a synthetic full backup to the target directories. As above,
1228 verify that permissions and tablespace links are correct.
1229 </para>
1230 </listitem>
1231 <listitem>
1232 <para>
1233 Remove any files present in <filename>pg_wal/</filename>; these came from the
1234 file system backup and are therefore probably obsolete rather than current.
1235 If you didn't archive <filename>pg_wal/</filename> at all, then recreate
1236 it with proper permissions,
1237 being careful to ensure that you re-establish it as a symbolic link
1238 if you had it set up that way before.
1239 </para>
1240 </listitem>
1241 <listitem>
1242 <para>
1243 If you have unarchived WAL segment files that you saved in step 2,
1244 copy them into <filename>pg_wal/</filename>. (It is best to copy them,
1245 not move them, so you still have the unmodified files if a
1246 problem occurs and you have to start over.)
1247 </para>
1248 </listitem>
1249 <listitem>
1250 <para>
1251 Set recovery configuration settings in
1252 <filename>postgresql.conf</filename> (see <xref
1253 linkend="runtime-config-wal-archive-recovery"/>) and create a file
1254 <filename>recovery.signal</filename> in the cluster
1255 data directory. You might
1256 also want to temporarily modify <filename>pg_hba.conf</filename> to prevent
1257 ordinary users from connecting until you are sure the recovery was successful.
1258 </para>
1259 </listitem>
1260 <listitem>
1261 <para>
1262 Start the server. The server will go into recovery mode and
1263 proceed to read through the archived WAL files it needs. Should the
1264 recovery be terminated because of an external error, the server can
1265 simply be restarted and it will continue recovery. Upon completion
1266 of the recovery process, the server will remove
1267 <filename>recovery.signal</filename> (to prevent
1268 accidentally re-entering recovery mode later) and then
1269 commence normal database operations.
1270 </para>
1271 </listitem>
1272 <listitem>
1273 <para>
1274 Inspect the contents of the database to ensure you have recovered to
1275 the desired state. If not, return to step 1. If all is well,
1276 allow your users to connect by restoring <filename>pg_hba.conf</filename> to normal.
1277 </para>
1278 </listitem>
1279 </orderedlist>
1280 </para>
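 <para>
  As a concrete, heavily simplified sketch of the procedure for a full,
  non-incremental backup, the steps above might translate into commands like
  the following. The paths, the tar archive, and the use of
  <application>pg_ctl</application> are placeholders that must be adapted to
  your installation, and everything should be run as the database system user:
<programlisting>
# Step 1: stop the server, if it's running.
pg_ctl stop -D /usr/local/pgsql/data

# Steps 2 and 3: keep the old cluster around (in particular its pg_wal)
# and clear the way for the restored files.
mv /usr/local/pgsql/data /usr/local/pgsql/data.old

# Step 4: restore the base backup with the right ownership and permissions.
# (For an incremental backup, step 5 would use pg_combinebackup instead.)
mkdir -m 700 /usr/local/pgsql/data
tar -xzf /mnt/server/backups/pgdata.tar.gz -C /usr/local/pgsql/data --strip-components=1

# Step 6: make sure pg_wal/ exists and is empty.
mkdir -p -m 700 /usr/local/pgsql/data/pg_wal
rm -rf /usr/local/pgsql/data/pg_wal/*

# Step 7: copy back any unarchived WAL segment files saved in step 2.
cp /usr/local/pgsql/data.old/pg_wal/0* /usr/local/pgsql/data/pg_wal/

# Step 8: configure recovery (or edit postgresql.conf by hand)
# and create recovery.signal.
echo "restore_command = 'cp /mnt/server/archivedir/%f %p'" &gt;&gt; /usr/local/pgsql/data/postgresql.conf
touch /usr/local/pgsql/data/recovery.signal

# Step 9: start the server; it will perform archive recovery.
pg_ctl start -D /usr/local/pgsql/data
</programlisting>
  Step 10, inspecting the result before letting users back in, is of course
  a manual task.
 </para>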
1282 <para>
1283 The key part of all this is to set up a recovery configuration that
1284 describes how you want to recover and how far the recovery should
1285 run. The one thing that you absolutely must specify is the <varname>restore_command</varname>,
1286 which tells <productname>PostgreSQL</productname> how to retrieve archived
1287 WAL file segments. Like the <varname>archive_command</varname>, this is
1288 a shell command string. It can contain <literal>%f</literal>, which is
1289 replaced by the name of the desired WAL file, and <literal>%p</literal>,
1290 which is replaced by the path name to copy the WAL file to.
1291 (The path name is relative to the current working directory,
1292 i.e., the cluster's data directory.)
1293 Write <literal>%%</literal> if you need to embed an actual <literal>%</literal>
1294 character in the command. The simplest useful command is
1295 something like:
1296 <programlisting>
1297 restore_command = 'cp /mnt/server/archivedir/%f %p'
1298 </programlisting>
1299 which will copy previously archived WAL segments from the directory
1300 <filename>/mnt/server/archivedir</filename>. Of course, you can use something
1301 much more complicated, perhaps even a shell script that requests the
1302 operator to mount an appropriate tape.
1303 </para>
1305 <para>
1306 It is important that the command return nonzero exit status on failure.
1307 The command <emphasis>will</emphasis> be called requesting files that are not
1308 present in the archive; it must return nonzero when so asked. This is not
1309 an error condition. An exception is that if the command was terminated by
1310 a signal (other than <systemitem>SIGTERM</systemitem>, which is used as
1311 part of a database server shutdown) or an error by the shell (such as
1312 command not found), then recovery will abort and the server will not start up.
1314 </para>
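 <para>
  Because <quote>file not present</quote> is an expected condition, a wrapper
  script only needs to make sure it exits nonzero in that case, which a plain
  <command>cp</command> already does. A purely illustrative script, named here
  <filename>restore_wal.sh</filename>, might look like this:
<programlisting>
#!/bin/sh
# restore_wal.sh - called as: restore_wal.sh &lt;WAL file name&gt; &lt;target path&gt;
archivedir=/mnt/server/archivedir

if [ -f "$archivedir/$1" ]; then
    exec cp "$archivedir/$1" "$2"
else
    # Not an error: the requested file simply is not (yet) in the archive.
    exit 1
fi
</programlisting>
  with <varname>restore_command</varname> set to something like
  <literal>'restore_wal.sh "%f" "%p"'</literal>.
 </para>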
1316 <para>
1317 Not all of the requested files will be WAL segment
1318 files; you should also expect requests for files with a suffix of
1319 <literal>.history</literal>. Also be aware that
1320 the base name of the <literal>%p</literal> path will be different from
1321 <literal>%f</literal>; do not expect them to be interchangeable.
1322 </para>
1324 <para>
1325 WAL segments that cannot be found in the archive will be sought in
1326 <filename>pg_wal/</filename>; this allows use of recent un-archived segments.
1327 However, segments that are available from the archive will be used in
1328 preference to files in <filename>pg_wal/</filename>.
1329 </para>
1331 <para>
1332 Normally, recovery will proceed through all available WAL segments,
1333 thereby restoring the database to the current point in time (or as
1334 close as possible given the available WAL segments). Therefore, a normal
1335 recovery will end with a <quote>file not found</quote> message, the exact text
1336 of the error message depending upon your choice of
1337 <varname>restore_command</varname>. You may also see an error message
1338 at the start of recovery for a file named something like
1339 <filename>00000001.history</filename>. This is also normal and does not
1340 indicate a problem in simple recovery situations; see
1341 <xref linkend="backup-timelines"/> for discussion.
1342 </para>
1344 <para>
1345 If you want to recover to some previous point in time (say, right before
1346 the junior DBA dropped your main transaction table), just specify the
1347 required <link linkend="runtime-config-wal-recovery-target">stopping point</link>. You can specify
1348 the stop point, known as the <quote>recovery target</quote>, either by
1349 date/time, by named restore point, or by completion of a specific transaction
1350 ID. As of this writing only the date/time and named restore point options
1351 are very usable, since there are no tools to help you identify with any
1352 accuracy which transaction ID to use.
1353 </para>
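 <para>
  For instance, to stop replay just before that fateful <command>DROP TABLE</command>,
  the recovery configuration might contain (the timestamp and archive location
  are of course only placeholders):
<programlisting>
restore_command = 'cp /mnt/server/archivedir/%f %p'
recovery_target_time = '2024-05-14 17:14:00'
</programlisting>
  or, if you had created a named restore point with
  <function>pg_create_restore_point</function> beforehand:
<programlisting>
recovery_target_name = 'before_schema_change'
</programlisting>
 </para>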
1355 <note>
1356 <para>
1357 The stop point must be after the ending time of the base backup, i.e.,
1358 the end time of <function>pg_backup_stop</function>. You cannot use a base backup
1359 to recover to a time when that backup was in progress. (To
1360 recover to such a time, you must go back to your previous base backup
1361 and roll forward from there.)
1362 </para>
1363 </note>
1365 <para>
1366 If recovery finds corrupted WAL data, recovery will
1367 halt at that point and the server will not start. In such a case the
1368 recovery process could be re-run from the beginning, specifying a
1369 <quote>recovery target</quote> before the point of corruption so that recovery
1370 can complete normally.
1371 If recovery fails for an external reason, such as a system crash or
1372 if the WAL archive has become inaccessible, then the recovery can simply
1373 be restarted and it will restart almost from where it failed.
1374 Recovery restart works much like checkpointing in normal operation:
1375 the server periodically forces all its state to disk, and then updates
1376 the <filename>pg_control</filename> file to indicate that the already-processed
1377 WAL data need not be scanned again.
1378 </para>
1380 </sect2>
1382 <sect2 id="backup-timelines">
1383 <title>Timelines</title>
1385 <indexterm zone="backup">
1386 <primary>timelines</primary>
1387 </indexterm>
1389 <para>
1390 The ability to restore the database to a previous point in time creates
1391 some complexities that are akin to science-fiction stories about time
1392 travel and parallel universes. For example, in the original history of the database,
1393 suppose you dropped a critical table at 5:15PM on Tuesday evening, but
1394 didn't realize your mistake until Wednesday noon.
1395 Unfazed, you get out your backup, restore to the point-in-time 5:14PM
1396 Tuesday evening, and are up and running. In <emphasis>this</emphasis> history of
1397 the database universe, you never dropped the table. But suppose
1398 you later realize this wasn't such a great idea, and would like
1399 to return to sometime Wednesday morning in the original history.
1400 You won't be able
1401 to if, while your database was up-and-running, it overwrote some of the
1402 WAL segment files that led up to the time you now wish you
1403 could get back to. Thus, to avoid this, you need to distinguish the series of
1404 WAL records generated after you've done a point-in-time recovery from
1405 those that were generated in the original database history.
1406 </para>
1408 <para>
1409 To deal with this problem, <productname>PostgreSQL</productname> has a notion
1410 of <firstterm>timelines</firstterm>. Whenever an archive recovery completes,
1411 a new timeline is created to identify the series of WAL records
1412 generated after that recovery. The timeline
1413 ID number is part of WAL segment file names so a new timeline does
1414 not overwrite the WAL data generated by previous timelines.
1415 For example, in the WAL file name
1416 <filename>0000000100001234000055CD</filename>, the leading
1417 <literal>00000001</literal> is the timeline ID in hexadecimal. (Note that
1418 in other contexts, such as server log messages, timeline IDs are
1419 usually printed in decimal.)
1420 </para>
1422 <para>
1423 It is
1424 in fact possible to archive many different timelines. While that might
1425 seem like a useless feature, it's often a lifesaver. Consider the
1426 situation where you aren't quite sure what point-in-time to recover to,
1427 and so have to do several point-in-time recoveries by trial and error
1428 until you find the best place to branch off from the old history. Without
1429 timelines this process would soon generate an unmanageable mess. With
1430 timelines, you can recover to <emphasis>any</emphasis> prior state, including
1431 states in timeline branches that you abandoned earlier.
1432 </para>
1434 <para>
1435 Every time a new timeline is created, <productname>PostgreSQL</productname> creates
1436 a <quote>timeline history</quote> file that shows which timeline it branched
1437 off from and when. These history files are necessary to allow the system
1438 to pick the right WAL segment files when recovering from an archive that
1439 contains multiple timelines. Therefore, they are archived into the WAL
1440 archive area just like WAL segment files. The history files are just
1441 small text files, so it's cheap and appropriate to keep them around
1442 indefinitely (unlike the segment files which are large). You can, if
1443 you like, add comments to a history file to record your own notes about
1444 how and why this particular timeline was created. Such comments will be
1445 especially valuable when you have a thicket of different timelines as
1446 a result of experimentation.
1447 </para>
1449 <para>
1450 The default behavior of recovery is to recover to the latest timeline found
1451 in the archive. If you wish to recover to the timeline that was current
1452 when the base backup was taken or into a specific child timeline (that
1453 is, you want to return to some state that was itself generated after a
1454 recovery attempt), you need to specify <literal>current</literal> or the
1455 target timeline ID in <xref linkend="guc-recovery-target-timeline"/>. You
1456 cannot recover into timelines that branched off earlier than the base backup.
1457 </para>
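 <para>
  For example (the numeric timeline ID shown is arbitrary), the setting could
  be one of:
<programlisting>
recovery_target_timeline = 'latest'    # the default: follow the newest timeline
recovery_target_timeline = 'current'   # stay on the timeline current at backup time
recovery_target_timeline = '3'         # recover into a specific timeline
</programlisting>
  Only one of these would be set at a time, of course.
 </para>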
1458 </sect2>
1460 <sect2 id="backup-tips">
1461 <title>Tips and Examples</title>
1463 <para>
1464 Some tips for configuring continuous archiving are given here.
1465 </para>
1467 <sect3 id="backup-standalone">
1468 <title>Standalone Hot Backups</title>
1470 <para>
1471 It is possible to use <productname>PostgreSQL</productname>'s backup facilities to
1472 produce standalone hot backups. These are backups that cannot be used
1473 for point-in-time recovery, yet are typically much faster to back up and
1474 restore than <application>pg_dump</application> dumps. (They are also much larger
1475 than <application>pg_dump</application> dumps, so in some cases the speed advantage
1476 might be negated.)
1477 </para>
1479 <para>
1480 As with base backups, the easiest way to produce a standalone
1481 hot backup is to use the <xref linkend="app-pgbasebackup"/>
1482 tool. If you include the <literal>-X</literal> parameter when calling
1483 it, all the write-ahead log required to use the backup will be
1484 included in the backup automatically, and no special action is
1485 required to restore the backup.
1486 </para>
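 <para>
  For example, a compressed, self-contained backup in tar format might be
  taken with (the target directory is just a placeholder):
<programlisting>
pg_basebackup -D /mnt/server/backups/standalone -X stream -Ft -z
</programlisting>
  Here <option>-X stream</option> includes the required write-ahead log in the
  backup, and <option>-Ft -z</option> selects gzip-compressed tar output.
 </para>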
1487 </sect3>
1489 <sect3 id="compressed-archive-logs">
1490 <title>Compressed Archive Logs</title>
1492 <para>
1493 If archive storage size is a concern, you can use
1494 <application>gzip</application> to compress the archive files:
1495 <programlisting>
1496 archive_command = 'gzip &lt; %p &gt; /mnt/server/archivedir/%f.gz'
1497 </programlisting>
1498 You will then need to use <application>gunzip</application> during recovery:
1499 <programlisting>
1500 restore_command = 'gunzip &lt; /mnt/server/archivedir/%f.gz &gt; %p'
1501 </programlisting>
1502 </para>
1503 </sect3>
1505 <sect3 id="backup-scripts">
1506 <title><varname>archive_command</varname> Scripts</title>
1508 <para>
1509 Many people choose to use scripts to define their
1510 <varname>archive_command</varname>, so that their
1511 <filename>postgresql.conf</filename> entry looks very simple:
1512 <programlisting>
1513 archive_command = 'local_backup_script.sh "%p" "%f"'
1514 </programlisting>
1515 Using a separate script file is advisable any time you want to use
1516 more than a single command in the archiving process.
1517 This allows all complexity to be managed within the script, which
1518 can be written in a popular scripting language such as
1519 <application>bash</application> or <application>perl</application>. (A minimal example of such a script is sketched after the list below.)
1520 </para>
1522 <para>
1523 Examples of requirements that might be solved within a script include:
1524 <itemizedlist>
1525 <listitem>
1526 <para>
1527 Copying data to secure off-site data storage
1528 </para>
1529 </listitem>
1530 <listitem>
1531 <para>
1532 Batching WAL files so that they are transferred every three hours,
1533 rather than one at a time
1534 </para>
1535 </listitem>
1536 <listitem>
1537 <para>
1538 Interfacing with other backup and recovery software
1539 </para>
1540 </listitem>
1541 <listitem>
1542 <para>
1543 Interfacing with monitoring software to report errors
1544 </para>
1545 </listitem>
1546 </itemizedlist>
1547 </para>
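 <para>
  A minimal sketch of such a script, in which the name, archive location, and
  policy are all placeholders, might be no more than:
<programlisting>
#!/bin/bash
# local_backup_script.sh - called as: local_backup_script.sh &lt;full path&gt; &lt;file name&gt;
set -o errexit -o nounset

walpath="$1"                      # %p: path of the WAL file to archive
walfile="$2"                      # %f: its file name only
archivedir=/mnt/server/archivedir

# Never overwrite an existing archive file.
[ ! -f "$archivedir/$walfile" ] || { echo "already archived: $walfile" 1&gt;&amp;2; exit 1; }

cp "$walpath" "$archivedir/$walfile"
# Additional steps (off-site copy, batching, monitoring hooks) would go here.
</programlisting>
  Remember that <varname>archive_command</varname> must return a zero exit
  status only after the file has been safely archived.
 </para>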
1549 <tip>
1550 <para>
1551 When using an <varname>archive_command</varname> script, it's desirable
1552 to enable <xref linkend="guc-logging-collector"/>.
1553 Any messages written to <systemitem>stderr</systemitem> from the script will then
1554 appear in the database server log, allowing complex configurations to
1555 be diagnosed easily if they fail.
1556 </para>
1557 </tip>
1558 </sect3>
1559 </sect2>
1561 <sect2 id="continuous-archiving-caveats">
1562 <title>Caveats</title>
1564 <para>
1565 At this writing, there are several limitations of the continuous archiving
1566 technique. These will probably be fixed in future releases:
1568 <itemizedlist>
1569 <listitem>
1570 <para>
1571 If a <link linkend="sql-createdatabase"><command>CREATE DATABASE</command></link>
1572 command is executed while a base backup is being taken, and then
1573 the template database that the <command>CREATE DATABASE</command> copied
1574 is modified while the base backup is still in progress, it is
1575 possible that recovery will cause those modifications to be
1576 propagated into the created database as well. This is of course
1577 undesirable. To avoid this risk, it is best not to modify any
1578 template databases while taking a base backup.
1579 </para>
1580 </listitem>
1582 <listitem>
1583 <para>
1584 <link linkend="sql-createtablespace"><command>CREATE TABLESPACE</command></link>
1585 commands are WAL-logged with the literal absolute path, and will
1586 therefore be replayed as tablespace creations with the same
1587 absolute path. This might be undesirable if the WAL is being
1588 replayed on a different machine. It can be dangerous even if the
1589 WAL is being replayed on the same machine, but into a new data
1590 directory: the replay will still overwrite the contents of the
1591 original tablespace. To avoid potential gotchas of this sort,
1592 the best practice is to take a new base backup after creating or
1593 dropping tablespaces.
1594 </para>
1595 </listitem>
1596 </itemizedlist>
1597 </para>
1599 <para>
1600 It should also be noted that the default <acronym>WAL</acronym>
1601 format is fairly bulky since it includes many disk page snapshots.
1602 These page snapshots are designed to support crash recovery, since
1603 we might need to fix partially-written disk pages. Depending on
1604 your system hardware and software, the risk of partial writes might
1605 be small enough to ignore, in which case you can significantly
1606 reduce the total volume of archived WAL files by turning off page
1607 snapshots using the <xref linkend="guc-full-page-writes"/>
1608 parameter. (Read the notes and warnings in <xref linkend="wal"/>
1609 before you do so.) Turning off page snapshots does not prevent
1610 use of the WAL for PITR operations. An area for future
1611 development is to compress archived WAL data by removing
1612 unnecessary page copies even when <varname>full_page_writes</varname> is
1613 on. In the meantime, administrators might wish to reduce the number
1614 of page snapshots included in WAL by increasing the checkpoint
1615 interval parameters as much as feasible.
1616 </para>
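 <para>
  For example, the number of full-page images could be reduced with settings
  along these lines (the values are illustrative only and must be weighed
  against recovery time and disk space):
<programlisting>
checkpoint_timeout = 30min       # default is 5min
max_wal_size = 8GB               # default is 1GB
</programlisting>
 </para>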
1617 </sect2>
1618 </sect1>
1620 </chapter>