2 Copyright (C) 2017 Red Hat Inc.
4 This work is licensed under the terms of the GNU GPL, version 2 or
5 later. See the COPYING file in the top-level directory.
7 ============================
8 Live Block Device Operations
9 ============================
The QEMU block layer currently (as of QEMU 2.9) supports four major kinds of
12 live block device jobs -- stream, commit, mirror, and backup. These can
13 be used to manipulate disk image chains to accomplish certain tasks,
14 namely: live copy data from backing files into overlays; shorten long
15 disk image chains by merging data from overlays into backing files; live
16 synchronize data from a disk image chain (including current active disk)
17 to another target image; and point-in-time (and incremental) backups of
a block device.  Below is a description of the said block (QMP)
primitives, and a non-exhaustive list of examples to illustrate their
use.
23 The file ``qapi/block-core.json`` in the QEMU source tree has the
24 canonical QEMU API (QAPI) schema documentation for the QMP
25 primitives discussed here.
.. todo (kashyapc):: Remove the ".. contents::" directive when Sphinx is
   integrated.
32 Disk image backing chain notation
33 ---------------------------------
35 A simple disk image chain. (This can be created live using QMP
36 ``blockdev-snapshot-sync``, or offline via ``qemu-img``)::
    [A] <-- [B]
    (backing file)  (overlay)
47 The arrow can be read as: Image [A] is the backing file of disk image
[B].  And live QEMU is currently writing to image [B]; consequently, it
is also referred to as the "active layer".
51 There are two kinds of terminology that are common when referring to
52 files in a disk image backing chain:
54 (1) Directional: 'base' and 'top'. Given the simple disk image chain
55 above, image [A] can be referred to as 'base', and image [B] as
'top'.  (This terminology can be seen in the QAPI schema file,
    ``block-core.json``.)
59 (2) Relational: 'backing file' and 'overlay'. Again, taking the same
60 simple disk image chain from the above, disk image [A] is referred
to as the backing file, and image [B] as the overlay.
63 Throughout this document, we will use the relational terminology.
66 The overlay files can generally be any format that supports a
67 backing file, although QCOW2 is the preferred format and the one
68 used in this document.
71 Brief overview of live block QMP primitives
72 -------------------------------------------
74 The following are the four different kinds of live block operations that
75 QEMU block layer supports.
(1) ``block-stream``: Live copy of data from backing files into overlay
    files.
.. note:: Once the 'stream' operation has finished, three things to
          note here:
83 (a) QEMU rewrites the backing chain to remove
reference to the now-streamed and redundant backing file;
87 (b) the streamed file *itself* won't be removed by QEMU,
88 and must be explicitly discarded by the user;
90 (c) the streamed file remains valid -- i.e. further
overlays can be created based on it.  Refer to the
``block-stream`` section further below for more details.
95 (2) ``block-commit``: Live merge of data from overlay files into backing
96 files (with the optional goal of removing the overlay file from the
97 chain). Since QEMU 2.0, this includes "active ``block-commit``"
98 (i.e. merge the current active layer into the base image).
100 .. note:: Once the 'commit' operation has finished, there are three
101 things to note here as well:
103 (a) QEMU rewrites the backing chain to remove reference
104 to now-redundant overlay images that have been
105 committed into a backing file;
107 (b) the committed file *itself* won't be removed by QEMU
108 -- it ought to be manually removed;
110 (c) however, unlike in the case of ``block-stream``, the
111 intermediate images will be rendered invalid -- i.e.
no further overlays can be created based on
them.  Refer to the ``block-commit`` section further
114 below for more details.
116 (3) ``drive-mirror`` (and ``blockdev-mirror``): Synchronize a running
117 disk to another image.
119 (4) ``drive-backup`` (and ``blockdev-backup``): Point-in-time (live) copy
120 of a block device to a destination.
123 .. _`Interacting with a QEMU instance`:
125 Interacting with a QEMU instance
126 --------------------------------
To show some example invocations of the commands discussed here, we will
use the following invocation of QEMU, with a QMP server running over a
UNIX socket::
132 $ ./x86_64-softmmu/qemu-system-x86_64 -display none -nodefconfig \
133 -M q35 -nodefaults -m 512 \
134 -blockdev node-name=node-A,driver=qcow2,file.driver=file,file.node-name=file,file.filename=./a.qcow2 \
135 -device virtio-blk,drive=node-A,id=virtio0 \
136 -monitor stdio -qmp unix:/tmp/qmp-sock,server,nowait
138 The ``-blockdev`` command-line option, used above, is available from
139 QEMU 2.9 onwards. In the above invocation, notice the ``node-name``
140 parameter that is used to refer to the disk image a.qcow2 ('node-A') --
141 this is a cleaner way to refer to a disk image (as opposed to referring
142 to it by spelling out file paths). So, we will continue to designate a
143 ``node-name`` to each further disk image created (either via
144 ``blockdev-snapshot-sync``, or ``blockdev-add``) as part of the disk
145 image chain, and continue to refer to the disks using their
``node-name`` (where possible, because ``block-commit`` does not yet, as
of QEMU 2.9, accept a ``node-name`` parameter) when performing various
block operations.
150 To interact with the QEMU instance launched above, we will use the
151 ``qmp-shell`` utility (located at: ``qemu/scripts/qmp``, as part of the
152 QEMU source directory), which takes key-value pairs for QMP commands.
153 Invoke it as below (which will also print out the complete raw JSON
154 syntax for reference -- examples in the following sections)::
156 $ ./qmp-shell -v -p /tmp/qmp-sock
160 In the event we have to repeat a certain QMP command, we will: for
161 the first occurrence of it, show the ``qmp-shell`` invocation, *and*
162 the corresponding raw JSON QMP syntax; but for subsequent
163 invocations, present just the ``qmp-shell`` syntax, and omit the
164 equivalent JSON output.
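For example (an illustrative sketch -- the exact response fields may
differ across QEMU versions), running the simple ``query-status``
command in ``qmp-shell`` with ``-v`` prints the raw JSON that is sent,
followed by the response::

    (QEMU) query-status
    {
        "execute": "query-status",
        "arguments": {}
    }
    {
        "return": {
            "running": true,
            "singlestep": false,
            "status": "running"
        }
    }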
167 Example disk image chain
168 ------------------------
170 We will use the below disk image chain (and occasionally spelling it
171 out where appropriate) when discussing various primitives::
173 [A] <-- [B] <-- [C] <-- [D]
175 Where [A] is the original base image; [B] and [C] are intermediate
176 overlay images; image [D] is the active layer -- i.e. live QEMU is
177 writing to it. (The rule of thumb is: live QEMU will always be pointing
178 to the rightmost image in a disk image chain.)
The above image chain can be created by invoking
``blockdev-snapshot-sync`` commands as follows (the example shows the
creation of overlay image [B]) using ``qmp-shell`` (our invocation also
prints the raw JSON invocation of it)::
185 (QEMU) blockdev-snapshot-sync node-name=node-A snapshot-file=b.qcow2 snapshot-node-name=node-B format=qcow2
    {
        "execute": "blockdev-snapshot-sync",
        "arguments": {
            "node-name": "node-A",
            "snapshot-file": "b.qcow2",
            "format": "qcow2",
            "snapshot-node-name": "node-B"
        }
    }
196 Here, "node-A" is the name QEMU internally uses to refer to the base
image [A] -- it is the backing file, based on which the overlay image,
[B], is created.
200 To create the rest of the overlay images, [C], and [D] (omitting the raw
201 JSON output for brevity)::
203 (QEMU) blockdev-snapshot-sync node-name=node-B snapshot-file=c.qcow2 snapshot-node-name=node-C format=qcow2
204 (QEMU) blockdev-snapshot-sync node-name=node-C snapshot-file=d.qcow2 snapshot-node-name=node-D format=qcow2
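As an optional sanity check (done from the host, outside of QMP; the
file names are the ones used above), ``qemu-img`` can walk and display
the entire backing chain::

    # Prints image information for d.qcow2, then for each image in its
    # backing chain (c.qcow2, b.qcow2, a.qcow2)
    $ qemu-img info --backing-chain d.qcow2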
207 A note on points-in-time vs file names
208 --------------------------------------
210 In our disk image chain::
212 [A] <-- [B] <-- [C] <-- [D]
214 We have *three* points in time and an active layer:
216 - Point 1: Guest state when [B] was created is contained in file [A]
217 - Point 2: Guest state when [C] was created is contained in [A] + [B]
- Point 3: Guest state when [D] was created is contained in
  [A] + [B] + [C]
- Active layer: Current guest state is contained in [A] + [B] + [C] +
  [D]
Therefore, be careful with naming choices:
225 - Naming a file after the time it is created is misleading -- the
226 guest data for that point in time is *not* contained in that file
227 (as explained earlier)
228 - Rather, think of files as a *delta* from the backing file
231 Live block streaming --- ``block-stream``
232 -----------------------------------------
The ``block-stream`` command allows you to do a live copy of data from
backing files into overlay images.
237 Given our original example disk image chain from earlier::
239 [A] <-- [B] <-- [C] <-- [D]
241 The disk image chain can be shortened in one of the following different
ways (not an exhaustive list).

.. _`Case-1`:
246 (1) Merge everything into the active layer: I.e. copy all contents from
247 the base image, [A], and overlay images, [B] and [C], into [D],
248 *while* the guest is running. The resulting chain will be a
249 standalone image, [D] -- with contents from [A], [B] and [C] merged
into it (where live QEMU writes go to)::

        [D]

.. _`Case-2`:
256 (2) Taking the same example disk image chain mentioned earlier, merge
257 only images [B] and [C] into [D], the active layer. The result will
258 be contents of images [B] and [C] will be copied into [D], and the
259 backing file pointer of image [D] will be adjusted to point to image
[A].  The resulting chain will be::

        [A] <-- [D]

.. _`Case-3`:
266 (3) Intermediate streaming (available since QEMU 2.8): Starting afresh
267 with the original example disk image chain, with a total of four
268 images, it is possible to copy contents from image [B] into image
269 [C]. Once the copy is finished, image [B] can now be (optionally)
270 discarded; and the backing file pointer of image [C] will be
271 adjusted to point to [A]. I.e. after performing "intermediate
272 streaming" of [B] into [C], the resulting image chain will be (where
live QEMU is writing to [D])::

        [A] <-- [C] <-- [D]
278 QMP invocation for ``block-stream``
279 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
281 For `Case-1`_, to merge contents of all the backing files into the
282 active layer, where 'node-D' is the current active image (by default
283 ``block-stream`` will flatten the entire chain); ``qmp-shell`` (and its
284 corresponding JSON output)::
286 (QEMU) block-stream device=node-D job-id=job0
    {
        "execute": "block-stream",
        "arguments": {
            "device": "node-D",
            "job-id": "job0"
        }
    }
295 For `Case-2`_, merge contents of the images [B] and [C] into [D], where
296 image [D] ends up referring to image [A] as its backing file::
298 (QEMU) block-stream device=node-D base-node=node-A job-id=job0
And for `Case-3`_, of "intermediate streaming", merge contents of
image [B] into [C], where [C] ends up referring to [A] as its backing
file::
304 (QEMU) block-stream device=node-C base-node=node-A job-id=job0
Progress of a ``block-stream`` operation can be monitored via the QMP
command::
309 (QEMU) query-block-jobs
    {
        "execute": "query-block-jobs",
        "arguments": {}
    }
316 Once the ``block-stream`` operation has completed, QEMU will emit an
317 event, ``BLOCK_JOB_COMPLETED``. The intermediate overlays remain valid,
318 and can now be (optionally) discarded, or retained to create further
319 overlays based on them. Finally, the ``block-stream`` jobs can be
restarted at any time.
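The completion event delivered on the QMP socket looks roughly like the
following (an illustrative sketch; the timestamp and the ``len`` /
``offset`` byte counts below are made up)::

    {
        "timestamp": {"seconds": 1498111999, "microseconds": 524635},
        "event": "BLOCK_JOB_COMPLETED",
        "data": {
            "device": "job0",
            "type": "stream",
            "len": 41126400,
            "offset": 41126400,
            "speed": 0
        }
    }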
323 Live block commit --- ``block-commit``
324 --------------------------------------
326 The ``block-commit`` command lets you merge live data from overlay
327 images into backing file(s). Since QEMU 2.0, this includes "live active
328 commit" (i.e. it is possible to merge the "active layer", the right-most
329 image in a disk image chain where live QEMU will be writing to, into the
base image).  This is analogous to ``block-stream``, but in the opposite
direction.
333 Again, starting afresh with our example disk image chain, where live
334 QEMU is writing to the right-most image in the chain, [D]::
336 [A] <-- [B] <-- [C] <-- [D]
338 The disk image chain can be shortened in one of the following ways:
340 .. _`block-commit_Case-1`:
342 (1) Commit content from only image [B] into image [A]. The resulting
343 chain is the following, where image [C] is adjusted to point at [A]
as its new backing file::

        [A] <-- [C] <-- [D]
348 (2) Commit content from images [B] and [C] into image [A]. The
349 resulting chain, where image [D] is adjusted to point to image [A]
as its new backing file::

        [A] <-- [D]
354 .. _`block-commit_Case-3`:
356 (3) Commit content from images [B], [C], and the active layer [D] into
image [A].  The resulting chain (in this case, a consolidated single
    image) will be::

        [A]
(4) Commit content from only image [C] into image [B].  The resulting
    chain::

        [A] <-- [B] <-- [D]
367 (5) Commit content from image [C] and the active layer [D] into image
[B].  The resulting chain::

        [A] <-- [B]
373 QMP invocation for ``block-commit``
374 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
376 For :ref:`Case-1 <block-commit_Case-1>`, to merge contents only from
377 image [B] into image [A], the invocation is as follows::
379 (QEMU) block-commit device=node-D base=a.qcow2 top=b.qcow2 job-id=job0
    {
        "execute": "block-commit",
        "arguments": {
            "device": "node-D",
            "job-id": "job0",
            "top": "b.qcow2",
            "base": "a.qcow2"
        }
    }
390 Once the above ``block-commit`` operation has completed, a
391 ``BLOCK_JOB_COMPLETED`` event will be issued, and no further action is
392 required. As the end result, the backing file of image [C] is adjusted
to point to image [A], and the original 4-image chain will end up being
transformed into::

    [A] <-- [C] <-- [D]
The intermediate image [B] is invalid (as in: no further
overlays based on it can be created).
402 Reasoning: An intermediate image after a 'stream' operation still
403 represents that old point-in-time, and may be valid in that context.
404 However, an intermediate image after a 'commit' operation no longer
405 represents any point-in-time, and is invalid in any context.
408 However, :ref:`Case-3 <block-commit_Case-3>` (also called: "active
409 ``block-commit``") is a *two-phase* operation: In the first phase, the
410 content from the active overlay, along with the intermediate overlays,
411 is copied into the backing file (also called the base image). In the
second phase, the said backing file is made the current active image --
this is done by issuing the command ``block-job-complete``.  Optionally,
414 the ``block-commit`` operation can be cancelled by issuing the command
415 ``block-job-cancel``, but be careful when doing this.
417 Once the ``block-commit`` operation has completed, the event
418 ``BLOCK_JOB_READY`` will be emitted, signalling that the synchronization
419 has finished. Now the job can be gracefully completed by issuing the
420 command ``block-job-complete`` -- until such a command is issued, the
421 'commit' operation remains active.
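The ready event looks roughly like this (again an illustrative sketch;
the timestamp and byte counts are made up)::

    {
        "timestamp": {"seconds": 1498112008, "microseconds": 565108},
        "event": "BLOCK_JOB_READY",
        "data": {
            "device": "job0",
            "type": "commit",
            "len": 41126400,
            "offset": 41126400,
            "speed": 0
        }
    }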
423 The following is the flow for :ref:`Case-3 <block-commit_Case-3>` to
424 convert a disk image chain such as this::
426 [A] <-- [B] <-- [C] <-- [D]
Into::

    [A]

(Where content from all the subsequent overlays, [B], and [C], including
the active layer, [D], is committed back to [A] -- which is where live
QEMU will be performing all its current writes.)
436 Start the "active ``block-commit``" operation::
438 (QEMU) block-commit device=node-D base=a.qcow2 top=d.qcow2 job-id=job0
    {
        "execute": "block-commit",
        "arguments": {
            "device": "node-D",
            "job-id": "job0",
            "top": "d.qcow2",
            "base": "a.qcow2"
        }
    }
Once the synchronization has completed, the event ``BLOCK_JOB_READY``
will be emitted.
453 Then, optionally query for the status of the active block operations.
454 We can see the 'commit' job is now ready to be completed, as indicated
455 by the line *"ready": true*::
457 (QEMU) query-block-jobs
    {
        "execute": "query-block-jobs",
        "arguments": {}
    }
478 Gracefully complete the 'commit' block device job::
480 (QEMU) block-job-complete device=job0
    {
        "execute": "block-job-complete",
        "arguments": {
            "device": "job0"
        }
    }
491 Finally, once the above job is completed, an event
492 ``BLOCK_JOB_COMPLETED`` will be emitted.
The invocation for the rest of the cases (2, 4, and 5), discussed in
the previous section, is omitted for brevity -- they follow the same
pattern, varying only the ``base`` and ``top`` arguments.
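For instance, committing images [B] and [C] into [A] (Case-2) would
follow that pattern; a plausible invocation (a sketch, reusing the node
and file names from above) is::

    (QEMU) block-commit device=node-D base=a.qcow2 top=c.qcow2 job-id=job0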
499 Live disk synchronization --- ``drive-mirror`` and ``blockdev-mirror``
500 ----------------------------------------------------------------------
Synchronize a running disk image chain (all or part of it) to a target
image.
505 Again, given our familiar disk image chain::
507 [A] <-- [B] <-- [C] <-- [D]
The ``drive-mirror`` command (and its newer equivalent
``blockdev-mirror``) allows
510 you to copy data from the entire chain into a single target image (which
511 can be located on a different host).
513 Once a 'mirror' job has started, there are two possible actions while a
514 ``drive-mirror`` job is active:
516 (1) Issuing the command ``block-job-cancel`` after it emits the event
517 ``BLOCK_JOB_CANCELLED``: will (after completing synchronization of
518 the content from the disk image chain to the target image, [E])
create a point-in-time (which is at the time of *triggering* the
cancel command) copy, contained in image [E], of the entire disk
image chain (or only the top-most image, depending on the ``sync``
mode).
524 (2) Issuing the command ``block-job-complete`` after it emits the event
525 ``BLOCK_JOB_COMPLETED``: will, after completing synchronization of
526 the content, adjust the guest device (i.e. live QEMU) to point to
the target image, causing all the new writes from this point on
to happen there.  One use case for this is live storage migration.
530 About synchronization modes: The synchronization mode determines
531 *which* part of the disk image chain will be copied to the target.
532 Currently, there are four different kinds:
(1) ``full`` -- Synchronize the content of the entire disk image chain
    to the target
537 (2) ``top`` -- Synchronize only the contents of the top-most disk image
538 in the chain to the target
540 (3) ``none`` -- Synchronize only the new writes from this point on.
542 .. note:: In the case of ``drive-backup`` (or ``blockdev-backup``),
543 the behavior of ``none`` synchronization mode is different.
544 Normally, a ``backup`` job consists of two parts: Anything
545 that is overwritten by the guest is first copied out to
546 the backup, and in the background the whole image is
copied from start to end.  With ``sync=none``, it's only the first
part.
(4) ``incremental`` -- Synchronize content that is described by the
    dirty bitmap
554 Refer to the :doc:`bitmaps` document in the QEMU source
555 tree to learn about the detailed workings of the ``incremental``
556 synchronization mode.
559 QMP invocation for ``drive-mirror``
560 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
562 To copy the contents of the entire disk image chain, from [A] all the
563 way to [D], to a new target (``drive-mirror`` will create the destination
564 file, if it doesn't already exist), call it [E]::
566 (QEMU) drive-mirror device=node-D target=e.qcow2 sync=full job-id=job0
    {
        "execute": "drive-mirror",
        "arguments": {
            "device": "node-D",
            "job-id": "job0",
            "target": "e.qcow2",
            "sync": "full"
        }
    }
The ``"sync": "full"``, from the above, means: copy the *entire* chain
to the destination.
580 Following the above, querying for active block jobs will show that a
581 'mirror' job is "ready" to be completed (and QEMU will also emit an
582 event, ``BLOCK_JOB_READY``)::
584 (QEMU) query-block-jobs
    {
        "execute": "query-block-jobs",
        "arguments": {}
    }
And, as noted in the previous section, there are two possible actions
at this point:
608 (a) Create a point-in-time snapshot by ending the synchronization. The
609 point-in-time is at the time of *ending* the sync. (The result of
610 the following being: the target image, [E], will be populated with
611 content from the entire chain, [A] to [D])::
613 (QEMU) block-job-cancel device=job0
    {
        "execute": "block-job-cancel",
        "arguments": {
            "device": "job0"
        }
    }
(b) Or, complete the operation and pivot the live QEMU to the target
    copy::
624 (QEMU) block-job-complete device=job0
In either of the above cases, if you once again run the
``query-block-jobs`` command, there should not be any active block
operation.
Comparing 'commit' and 'mirror': In both the cases, the overlay images
631 can be discarded. However, with 'commit', the *existing* base image
632 will be modified (by updating it with contents from overlays); while in
633 the case of 'mirror', a *new* target image is populated with the data
634 from the disk image chain.
637 QMP invocation for live storage migration with ``drive-mirror`` + NBD
638 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
640 Live storage migration (without shared storage setup) is one of the most
641 common use-cases that takes advantage of the ``drive-mirror`` primitive
642 and QEMU's built-in Network Block Device (NBD) server. Here's a quick
643 walk-through of this setup.
645 Given the disk image chain::
647 [A] <-- [B] <-- [C] <-- [D]
649 Instead of copying content from the entire chain, synchronize *only* the
650 contents of the *top*-most disk image (i.e. the active layer), [D], to a
651 target, say, [TargetDisk].
654 The destination host must already have the contents of the backing
655 chain, involving images [A], [B], and [C], visible via other means
-- whether by ``cp``, ``rsync``, or by some storage array-specific
command.
659 Sometimes, this is also referred to as "shallow copy" -- because only
the "active layer", and not the rest of the image chain, is copied to
the destination.
664 In this example, for the sake of simplicity, we'll be using the same
665 ``localhost`` as both source and destination.
667 As noted earlier, on the destination host the contents of the backing
668 chain -- from images [A] to [C] -- are already expected to exist in some
form (e.g. in a file called ``Contents-of-A-B-C.qcow2``).  Now, on the
670 destination host, let's create a target overlay image (with the image
671 ``Contents-of-A-B-C.qcow2`` as its backing file), to which the contents
of image [D] (from the source QEMU) will be mirrored::
674 $ qemu-img create -f qcow2 -b ./Contents-of-A-B-C.qcow2 \
675 -F qcow2 ./target-disk.qcow2
And start the destination QEMU instance (we already have the source
QEMU running -- discussed in the section: `Interacting with a QEMU
instance`_) with the following invocation.  (As noted earlier, for
680 simplicity's sake, the destination QEMU is started on the same host, but
681 it could be located elsewhere)::
683 $ ./x86_64-softmmu/qemu-system-x86_64 -display none -nodefconfig \
684 -M q35 -nodefaults -m 512 \
685 -blockdev node-name=node-TargetDisk,driver=qcow2,file.driver=file,file.node-name=file,file.filename=./target-disk.qcow2 \
686 -device virtio-blk,drive=node-TargetDisk,id=virtio0 \
687 -S -monitor stdio -qmp unix:./qmp-sock2,server,nowait \
688 -incoming tcp:localhost:6666
690 Given the disk image chain on source QEMU::
692 [A] <-- [B] <-- [C] <-- [D]
694 On the destination host, it is expected that the contents of the chain
695 ``[A] <-- [B] <-- [C]`` are *already* present, and therefore copy *only*
696 the content of image [D].
698 (1) [On *destination* QEMU] As part of the first step, start the
built-in NBD server on a given host (local host, represented by
    ``::``)::
702 (QEMU) nbd-server-start addr={"type":"inet","data":{"host":"::","port":"49153"}}
    {
        "execute": "nbd-server-start",
        "arguments": {
            "addr": {
                "data": {
                    "host": "::",
                    "port": "49153"
                },
                "type": "inet"
            }
        }
    }
716 (2) [On *destination* QEMU] And export the destination disk image using
717 QEMU's built-in NBD server::
719 (QEMU) nbd-server-add device=node-TargetDisk writable=true
    {
        "execute": "nbd-server-add",
        "arguments": {
            "device": "node-TargetDisk",
            "writable": true
        }
    }
(3) [On *source* QEMU] Then, invoke ``drive-mirror`` (NB: we're
    running ``drive-mirror`` with ``mode=existing``, meaning:
    synchronize to a pre-created -- therefore 'existing' -- file on the
    target host), with the synchronization mode as 'top' (``"sync":
    "top"``)::
733 (QEMU) drive-mirror device=node-D target=nbd:localhost:49153:exportname=node-TargetDisk sync=top mode=existing job-id=job0
    {
        "execute": "drive-mirror",
        "arguments": {
            "device": "node-D",
            "job-id": "job0",
            "mode": "existing",
            "sync": "top",
            "target": "nbd:localhost:49153:exportname=node-TargetDisk"
        }
    }
745 (4) [On *source* QEMU] Once ``drive-mirror`` copies the entire data, and the
746 event ``BLOCK_JOB_READY`` is emitted, issue ``block-job-cancel`` to
747 gracefully end the synchronization, from source QEMU::
749 (QEMU) block-job-cancel device=job0
    {
        "execute": "block-job-cancel",
        "arguments": {
            "device": "job0"
        }
    }
757 (5) [On *destination* QEMU] Then, stop the NBD server::
759 (QEMU) nbd-server-stop
    {
        "execute": "nbd-server-stop",
        "arguments": {}
    }
(6) [On *destination* QEMU] Finally, resume the guest vCPUs by issuing
    the QMP command ``cont``::

        (QEMU) cont
        {
            "execute": "cont",
            "arguments": {}
        }
775 Higher-level libraries (e.g. libvirt) automate the entire above
776 process (although note that libvirt does not allow same-host
777 migrations to localhost for other reasons).
780 Notes on ``blockdev-mirror``
781 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
783 The ``blockdev-mirror`` command is equivalent in core functionality to
784 ``drive-mirror``, except that it operates at node-level in a BDS graph.
Also: for ``blockdev-mirror``, the 'target' image needs to be explicitly
created (using ``qemu-img``) and attached to live QEMU via
``blockdev-add``, which assigns a name to the to-be-created target node.
790 E.g. the sequence of actions to create a point-in-time backup of an
791 entire disk image chain, to a target, using ``blockdev-mirror`` would be:
(0) Create the QCOW2 overlays, to arrive at a backing chain of desired
    depth
796 (1) Create the target image (using ``qemu-img``), say, ``e.qcow2``
(2) Attach the above created file (``e.qcow2``), at run time, to QEMU
    via ``blockdev-add``
801 (3) Perform ``blockdev-mirror`` (use ``"sync": "full"`` to copy the
entire chain to the target).  And notice the event
    ``BLOCK_JOB_READY``
(4) Optionally, query for active block jobs; there should be a 'mirror'
806 job ready to be completed
(5) Gracefully complete the 'mirror' block device job, and notice the
    event ``BLOCK_JOB_COMPLETED``
(6) Shutdown the guest by issuing the QMP ``quit`` command so that
    caches are flushed
(7) Then, finally, compare the contents of the disk image chain, and
    the target copy, with ``qemu-img compare`` (a sketch of this is
    shown just below the list).  You should notice: "Images are
    identical"
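A possible form of that final comparison in step (7), assuming the
entire example chain (topped by ``d.qcow2``) was mirrored to
``e.qcow2``, is::

    # Reads through the backing chain of d.qcow2 and compares the
    # guest-visible content with the standalone target image
    $ qemu-img compare d.qcow2 e.qcow2
    Images are identical.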
819 QMP invocation for ``blockdev-mirror``
820 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
822 Given the disk image chain::
824 [A] <-- [B] <-- [C] <-- [D]
826 To copy the contents of the entire disk image chain, from [A] all the
827 way to [D], to a new target, call it [E]. The following is the flow.
829 Create the overlay images, [B], [C], and [D]::
831 (QEMU) blockdev-snapshot-sync node-name=node-A snapshot-file=b.qcow2 snapshot-node-name=node-B format=qcow2
832 (QEMU) blockdev-snapshot-sync node-name=node-B snapshot-file=c.qcow2 snapshot-node-name=node-C format=qcow2
833 (QEMU) blockdev-snapshot-sync node-name=node-C snapshot-file=d.qcow2 snapshot-node-name=node-D format=qcow2
835 Create the target image, [E]::
837 $ qemu-img create -f qcow2 e.qcow2 39M
839 Add the above created target image to QEMU, via ``blockdev-add``::
841 (QEMU) blockdev-add driver=qcow2 node-name=node-E file={"driver":"file","filename":"e.qcow2"}
    {
        "execute": "blockdev-add",
        "arguments": {
            "node-name": "node-E",
            "driver": "qcow2",
            "file": {
                "driver": "file",
                "filename": "e.qcow2"
            }
        }
    }
854 Perform ``blockdev-mirror``, and notice the event ``BLOCK_JOB_READY``::
856 (QEMU) blockdev-mirror device=node-B target=node-E sync=full job-id=job0
    {
        "execute": "blockdev-mirror",
        "arguments": {
            "device": "node-B",
            "job-id": "job0",
            "target": "node-E",
            "sync": "full"
        }
    }
Query for active block jobs; there should be a 'mirror' job ready::
869 (QEMU) query-block-jobs
    {
        "execute": "query-block-jobs",
        "arguments": {}
    }
890 Gracefully complete the block device job operation, and notice the
891 event ``BLOCK_JOB_COMPLETED``::
893 (QEMU) block-job-complete device=job0
    {
        "execute": "block-job-complete",
        "arguments": {
            "device": "job0"
        }
    }
Shutdown the guest, by issuing the ``quit`` QMP command::

    (QEMU) quit
    {
        "execute": "quit",
        "arguments": {}
    }
913 Live disk backup --- ``drive-backup`` and ``blockdev-backup``
914 -------------------------------------------------------------
The ``drive-backup`` command (and its newer equivalent
``blockdev-backup``) allows you to create a point-in-time snapshot.
919 In this case, the point-in-time is when you *start* the ``drive-backup``
920 (or its newer equivalent ``blockdev-backup``) command.
923 QMP invocation for ``drive-backup``
924 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
926 Yet again, starting afresh with our example disk image chain::
928 [A] <-- [B] <-- [C] <-- [D]
930 To create a target image [E], with content populated from image [A] to
931 [D], from the above chain, the following is the syntax. (If the target
932 image does not exist, ``drive-backup`` will create it)::
934 (QEMU) drive-backup device=node-D sync=full target=e.qcow2 job-id=job0
    {
        "execute": "drive-backup",
        "arguments": {
            "device": "node-D",
            "job-id": "job0",
            "sync": "full",
            "target": "e.qcow2"
        }
    }
945 Once the above ``drive-backup`` has completed, a ``BLOCK_JOB_COMPLETED`` event
946 will be issued, indicating the live block device job operation has
947 completed, and no further action is required.
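If a running backup job needs to be abandoned instead, it can be
cancelled with the generic ``block-job-cancel`` command (a sketch,
reusing the ``job-id`` from the invocation above)::

    (QEMU) block-job-cancel device=job0
    {
        "execute": "block-job-cancel",
        "arguments": {
            "device": "job0"
        }
    }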
950 Notes on ``blockdev-backup``
951 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
953 The ``blockdev-backup`` command is equivalent in functionality to
``drive-backup``, except that it operates at node-level in a Block Driver
State (BDS) graph.
957 E.g. the sequence of actions to create a point-in-time backup
of an entire disk image chain, to a target, using ``blockdev-backup``
would be:
(0) Create the QCOW2 overlays, to arrive at a backing chain of desired
    depth
964 (1) Create the target image (using ``qemu-img``), say, ``e.qcow2``
(2) Attach the above created file (``e.qcow2``), at run time, to QEMU
    via ``blockdev-add``
969 (3) Perform ``blockdev-backup`` (use ``"sync": "full"`` to copy the
970 entire chain to the target). And notice the event
971 ``BLOCK_JOB_COMPLETED``
(4) Shutdown the guest, by issuing the QMP ``quit`` command, so that
    caches are flushed
976 (5) Then, finally, compare the contents of the disk image chain, and
977 the target copy with ``qemu-img compare``. You should notice:
978 "Images are identical"
The following section shows an example QMP invocation for
``blockdev-backup``.
983 QMP invocation for ``blockdev-backup``
984 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
986 Given a disk image chain of depth 1 where image [B] is the active
overlay (live QEMU is writing to it)::

    [A] <-- [B]
991 The following is the procedure to copy the content from the entire chain
to a target image (say, [E]), which has the full content from [A] and
[B].
995 Create the overlay [B]::
997 (QEMU) blockdev-snapshot-sync node-name=node-A snapshot-file=b.qcow2 snapshot-node-name=node-B format=qcow2
    {
        "execute": "blockdev-snapshot-sync",
        "arguments": {
            "node-name": "node-A",
            "snapshot-file": "b.qcow2",
            "format": "qcow2",
            "snapshot-node-name": "node-B"
        }
    }
1009 Create a target image that will contain the copy::
1011 $ qemu-img create -f qcow2 e.qcow2 39M
1013 Then add it to QEMU via ``blockdev-add``::
1015 (QEMU) blockdev-add driver=qcow2 node-name=node-E file={"driver":"file","filename":"e.qcow2"}
    {
        "execute": "blockdev-add",
        "arguments": {
            "node-name": "node-E",
            "driver": "qcow2",
            "file": {
                "driver": "file",
                "filename": "e.qcow2"
            }
        }
    }
1028 Then invoke ``blockdev-backup`` to copy the contents from the entire
image chain, consisting of images [A] and [B], to the target image,
[E]::
1032 (QEMU) blockdev-backup device=node-B target=node-E sync=full job-id=job0
    {
        "execute": "blockdev-backup",
        "arguments": {
            "device": "node-B",
            "job-id": "job0",
            "sync": "full",
            "target": "node-E"
        }
    }
Once the above 'backup' operation has completed, the event
``BLOCK_JOB_COMPLETED`` will be emitted, signalling successful
completion.
1047 Next, query for any active block device jobs (there should be none)::
1049 (QEMU) query-block-jobs
    {
        "execute": "query-block-jobs",
        "arguments": {}
    }
    {
        "return": []
    }
Shutdown the guest::

    (QEMU) quit
    {
        "execute": "quit",
        "arguments": {}
    }
1066 The above step is really important; if forgotten, an error, "Failed
1067 to get shared "write" lock on e.qcow2", will be thrown when you do
1068 ``qemu-img compare`` to verify the integrity of the disk image
1069 with the backup content.
1072 The end result will be the image 'e.qcow2' containing a
1073 point-in-time backup of the disk image chain -- i.e. contents from
images [A] and [B] at the time the ``blockdev-backup`` command was
issued.
One way to confirm the backup disk image contains the identical content
with the disk image chain is to compare the backup and the contents of
the chain -- you should see "Images are identical".  (NB: this is
assuming QEMU was launched with the ``-S`` option, which will not start
the CPUs at guest boot up)::
1083 $ qemu-img compare b.qcow2 e.qcow2
1084 Warning: Image size mismatch!
1085 Images are identical.
NOTE: The "Warning: Image size mismatch!" is expected, as we created the
target image (e.qcow2) with a size of 39M.