1 .\" $NetBSD: bzip2.1,v 1.3 2008/03/18 14:47:07 christos Exp $
.SH NAME
bzip2, bunzip2 \- a block-sorting file compressor, v1.0.5
.br
bzcat \- decompresses files to stdout
.br
bzip2recover \- recovers data from damaged bzip2 files
.SH SYNOPSIS
.B bzip2
.RB [ " \-cdfkqstvzVL123456789 " ]
[
.I "filenames \&..."
]
.SH DESCRIPTION
.I bzip2
compresses files using the Burrows-Wheeler block sorting
39 text compression algorithm, and Huffman coding. Compression is
40 generally considerably better than that achieved by more conventional
41 LZ77/LZ78-based compressors, and approaches the performance of the PPM
42 family of statistical compressors.
.PP
The command-line options are deliberately very similar to
those of
.I "GNU gzip",
but they are not identical.
.PP
.I bzip2
expects a list of file names to accompany the
51 command-line flags. Each file is replaced by a compressed version of
52 itself, with the name "original_name.bz2".
Each compressed file
has the same modification date, permissions, and, when possible,
55 ownership as the corresponding original, so that these properties can
56 be correctly restored at decompression time. File name handling is
57 naive in the sense that there is no mechanism for preserving original
58 file names, permissions, ownerships or dates in filesystems which lack
these concepts, or have serious file name length restrictions, such as
MS-DOS.
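.PP
For example (the file name here is purely illustrative):
.PP
   bzip2 fred.txt       # replaces fred.txt with fred.txt.bz2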
.PP
.I bzip2
and
.I bunzip2
will by default not overwrite existing
66 files. If you want this to happen, specify the \-f flag.
68 If no file names are specified,
.I bzip2
compresses from standard
71 input to standard output. In this case,
.I bzip2
will decline to
write compressed output to a terminal, as this would be entirely
75 incomprehensible and therefore pointless.
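.PP
A common way to exploit this (names illustrative) is to compress a tar
stream on the fly:
.PP
   tar cf \- somedir | bzip2 > somedir.tar.bz2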
.PP
.I bunzip2
(or
.I bzip2 \-d)
decompresses all
specified files. Files which were not created by
.I bzip2
will be detected and ignored, and a warning issued.
.I bzip2
attempts to guess the filename for the decompressed file
86 from that of the compressed file as follows:
   filename.bz2    becomes   filename
   filename.bz     becomes   filename
   filename.tbz2   becomes   filename.tar
   filename.tbz    becomes   filename.tar
   anyothername    becomes   anyothername.out
.PP
If the file does not end in one of the recognised endings,
.I .bz2,
.I .bz,
.I .tbz2
or
.I .tbz,
.I bzip2
complains that it cannot
guess the name of the original file, and uses the original name
with
.I .out
appended.
.PP
107 As with compression, supplying no
108 filenames causes decompression from
109 standard input to standard output.
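.PP
For example (names again illustrative):
.PP
   bunzip2 fred.txt.bz2          # recreates fred.txt
   bunzip2 < data.bz2 > data     # standard input to standard output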
.PP
.I bunzip2
will correctly decompress a file which is the
113 concatenation of two or more compressed files. The result is the
114 concatenation of the corresponding uncompressed files. Integrity
testing (\-t)
of concatenated
compressed files is also supported.
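.PP
For example, two independently compressed files (names illustrative) can
be joined and decompressed in one step:
.PP
   cat part1.bz2 part2.bz2 > whole.bz2
   bunzip2 whole.bz2        # whole contains part1 followed by part2
.PP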
119 You can also compress or decompress files to the standard output by
120 giving the \-c flag. Multiple files may be compressed and
121 decompressed like this. The resulting outputs are fed sequentially to
122 stdout. Compression of multiple files
123 in this manner generates a stream
124 containing multiple compressed file representations. Such a stream
125 can be decompressed correctly only by
.I bzip2
version 0.9.0 or
later. Earlier versions of
.I bzip2
will stop after decompressing
131 the first file in the stream.
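.PP
For example (names illustrative):
.PP
   bzip2 \-c file1 file2 > files.bz2   # two compressed streams, one file
   bunzip2 \-c files.bz2               # requires bzip2 0.9.0 or later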
.PP
.I bzcat
(or
.I bzip2 \-dc)
decompresses all specified files to
the standard output.
.PP
.I bzip2
will read arguments from the environment variables
.I BZIP2
and
.I BZIP,
in that order, and will process them
145 before any arguments read from the command line. This gives a
146 convenient way to supply default arguments.
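.PP
For instance, to make \-s the default for every invocation, a
Bourne-style shell user might set:
.PP
   BZIP2=\-s
   export BZIP2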
.PP
Compression is always performed, even if the compressed
file is slightly
larger than the original. Files of less than about one hundred bytes
151 tend to get larger, since the compression mechanism has a constant
152 overhead in the region of 50 bytes. Random data (including the output
153 of most file compressors) is coded at about 8.05 bits per byte, giving
154 an expansion of around 0.5%.
.PP
As a self-check for your protection,
.I bzip2
uses 32-bit CRCs to
160 make sure that the decompressed version of a file is identical to the
161 original. This guards against corruption of the compressed data, and
162 against undetected bugs in
.I bzip2
(hopefully very unlikely). The
chance of data corruption going undetected is microscopic, about one
in four billion for each file processed. Be aware, though, that
167 the check occurs upon decompression, so it can only tell you that
168 something is wrong. It can't help you
recover the original uncompressed
data. You can use
.I bzip2recover
to try to recover data from
damaged files.
.PP
Return values: 0 for a normal exit, 1 for environmental problems (file
not found, invalid flags, I/O errors, &c), 2 to indicate a corrupt
compressed file, 3 for an internal consistency error (e.g., bug) which
caused
.I bzip2
to panic.
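.PP
These codes make
.I bzip2
easy to use from scripts; for example (name illustrative):
.PP
   bzip2 \-t archive.bz2 || echo "archive.bz2 is damaged"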
.SH OPTIONS
.TP
.B \-c \-\-stdout
Compress or decompress to standard output.
.TP
.B \-d \-\-decompress
Force decompression.
.I bzip2,
.I bunzip2
and
.I bzcat
are
really the same program, and the decision about what actions to take is
done on the basis of which name is used. This flag overrides that
mechanism, and forces
.I bzip2
to decompress.
.TP
.B \-z \-\-compress
The complement to \-d: forces compression, regardless of the
invocation name.
.TP
.B \-t \-\-test
Check integrity of the specified file(s), but don't decompress them.
206 This really performs a trial decompression and throws away the result.
.TP
.B \-f \-\-force
Force overwrite of output files. Normally,
.I bzip2
will not overwrite
existing output files. Also forces
.I bzip2
to break hard links
to files, which it otherwise wouldn't do.
217 bzip2 normally declines to decompress files which don't have the
correct magic header bytes. If forced (\-f), however, it will pass
219 such files through unmodified. This is how GNU gzip behaves.
.TP
.B \-k \-\-keep
Keep (don't delete) input files during compression
or decompression.
.TP
.B \-s \-\-small
Reduce memory usage, for compression, decompression and testing. Files
227 are decompressed and tested using a modified algorithm which only
228 requires 2.5 bytes per block byte. This means any file can be
229 decompressed in 2300k of memory, albeit at about half the normal speed.
231 During compression, \-s selects a block size of 200k, which limits
232 memory use to around the same figure, at the expense of your compression
233 ratio. In short, if your machine is low on memory (8 megabytes or
234 less), use \-s for everything. See MEMORY MANAGEMENT below.
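For example (name illustrative):
.sp
   bzip2 \-s big.tar          # compress with a 200k block size
   bunzip2 \-s big.tar.bz2    # decompress with the reduced-memory algorithm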
.TP
.B \-q \-\-quiet
Suppress non-essential warning messages. Messages pertaining to
238 I/O errors and other critical events will not be suppressed.
.TP
.B \-v \-\-verbose
Verbose mode -- show the compression ratio for each file processed.
242 Further \-v's increase the verbosity level, spewing out lots of
243 information which is primarily of interest for diagnostic purposes.
.TP
.B \-L \-\-license \-V \-\-version
246 Display the software version, license terms and conditions.
.TP
.B \-1 (or \-\-fast) to \-9 (or \-\-best)
249 Set the block size to 100 k, 200 k .. 900 k when compressing. Has no
250 effect when decompressing. See MEMORY MANAGEMENT below.
251 The \-\-fast and \-\-best aliases are primarily for GNU gzip
compatibility. In particular, \-\-fast doesn't make things
significantly faster, and \-\-best merely selects the default behaviour.
.TP
.B \-\-
Treats all subsequent arguments as file names, even if they start
258 with a dash. This is so you can handle files with names beginning
259 with a dash, for example: bzip2 \-- \-myfilename.
.TP
.B \-\-repetitive-fast \-\-repetitive-best
262 These flags are redundant in versions 0.9.5 and above. They provided
263 some coarse control over the behaviour of the sorting algorithm in
264 earlier versions, which was sometimes useful. 0.9.5 and above have an
265 improved algorithm which renders these flags irrelevant.
267 .SH MEMORY MANAGEMENT
.I bzip2
compresses large files in blocks. The block size affects
270 both the compression ratio achieved, and the amount of memory needed for
271 compression and decompression. The flags \-1 through \-9
272 specify the block size to be 100,000 bytes through 900,000 bytes (the
273 default) respectively. At decompression time, the block size used for
274 compression is read from the header of the compressed file, and
.I bunzip2
then allocates itself just enough memory to decompress
277 the file. Since block sizes are stored in compressed files, it follows
that the flags \-1 to \-9 are irrelevant to, and so ignored
during, decompression.
281 Compression and decompression requirements,
282 in bytes, can be estimated as:
   Compression:   400k + ( 8 x block size )

   Decompression: 100k + ( 4 x block size ), or
                  100k + ( 2.5 x block size )
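.PP
For example, at the default 900k block size (\-9) these formulas give
400k + ( 8 x 900k ) = 7600k for compression, and 100k + ( 4 x 900k ) =
3700k, or 100k + ( 2.5 x 900k ) = 2350k with \-s, for decompression;
these figures match the table below.
.PP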
289 Larger block sizes give rapidly diminishing marginal returns. Most of
290 the compression comes from the first two or three hundred k of block
size, a fact worth bearing in mind when using
.I bzip2
on small machines.
294 It is also important to appreciate that the decompression memory
295 requirement is set at compression time by the choice of block size.
297 For files compressed with the default 900k block size,
.I bunzip2
will require about 3700 kbytes to decompress. To support decompression
300 of any file on a 4 megabyte machine,
.I bunzip2
has an option to
decompress using approximately half this amount of memory, about 2300
304 kbytes. Decompression speed is also halved, so you should use this
option only where necessary. The relevant flag is \-s.
In general, try to use the largest block size memory constraints allow,
308 since that maximises the compression achieved. Compression and
309 decompression speed are virtually unaffected by block size.
311 Another significant point applies to files which fit in a single block
312 -- that means most files you'd encounter using a large block size. The
313 amount of real memory touched is proportional to the size of the file,
314 since the file is smaller than a block. For example, compressing a file
20,000 bytes long with the flag \-9 will cause the compressor to
316 allocate around 7600k of memory, but only touch 400k + 20000 * 8 = 560
317 kbytes of it. Similarly, the decompressor will allocate 3700k but only
318 touch 100k + 20000 * 4 = 180 kbytes.
320 Here is a table which summarises the maximum memory usage for different
321 block sizes. Also recorded is the total compressed size for 14 files of
322 the Calgary Text Compression Corpus totalling 3,141,622 bytes. This
323 column gives some feel for how compression varies with block size.
324 These figures tend to understate the advantage of larger block sizes for
325 larger files, since the Corpus is dominated by smaller files.
           Compress   Decompress   Decompress   Corpus
    Flag     usage      usage       \-s usage    Size

     \-1      1200k       500k         350k      914704
     \-2      2000k       900k         600k      877703
     \-3      2800k      1300k         850k      860338
     \-4      3600k      1700k        1100k      846899
     \-5      4400k      2100k        1350k      845160
     \-6      5200k      2500k        1600k      838626
     \-7      6100k      2900k        1850k      834096
     \-8      6800k      3300k        2100k      828642
     \-9      7600k      3700k        2350k      828642
340 .SH RECOVERING DATA FROM DAMAGED FILES
.I bzip2
compresses files in blocks, usually 900kbytes long. Each
343 block is handled independently. If a media or transmission error causes
a multi-block .bz2
file to become damaged, it may be possible to
346 recover data from the undamaged blocks in the file.
348 The compressed representation of each block is delimited by a 48-bit
349 pattern, which makes it possible to find the block boundaries with
350 reasonable certainty. Each block also carries its own 32-bit CRC, so
351 damaged blocks can be distinguished from undamaged ones.
.PP
.I bzip2recover
is a simple program whose purpose is to search for
355 blocks in .bz2 files, and write each block out into its own .bz2
356 file. You can then use
.I "bzip2 \-t"
to test the
integrity of the resulting files, and decompress those which are
undamaged.
.PP
.I bzip2recover
takes a single argument, the name of the damaged file,
365 and writes a number of files "rec00001file.bz2",
366 "rec00002file.bz2", etc, containing the extracted blocks.
367 The output filenames are designed so that the use of
368 wildcards in subsequent processing -- for example,
369 "bzip2 -dc rec*file.bz2 > recovered_data" -- processes the files in
.PP
.I bzip2recover
should be of most use dealing with large .bz2
374 files, as these will contain many blocks. It is clearly
375 futile to use it on damaged single-block files, since a
376 damaged block cannot be recovered. If you wish to minimise
377 any potential data loss through media or transmission errors,
you might consider compressing with a smaller
block size.
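.PP
A typical recovery session (file name illustrative) might look like:
.PP
   bzip2recover damaged.bz2     # writes rec00001damaged.bz2, ...
   bzip2 \-t rec*damaged.bz2    # find out which blocks are intact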
381 .SH PERFORMANCE NOTES
382 The sorting phase of compression gathers together similar strings in the
383 file. Because of this, files containing very long runs of repeated
384 symbols, like "aabaabaabaab ..." (repeated several hundred times) may
385 compress more slowly than normal. Versions 0.9.5 and above fare much
386 better than previous versions in this respect. The ratio between
387 worst-case and average-case compression time is in the region of 10:1.
388 For previous versions, this figure was more like 100:1. You can use the
389 \-vvvv option to monitor progress in great detail, if you want.
391 Decompression speed is unaffected by these phenomena.
.PP
.I bzip2
usually allocates several megabytes of memory to operate
395 in, and then charges all over it in a fairly random fashion. This means
396 that performance, both for compressing and decompressing, is largely
397 determined by the speed at which your machine can service cache misses.
398 Because of this, small changes to the code to reduce the miss rate have
399 been observed to give disproportionately large performance improvements.
I imagine
.I bzip2
will perform best on machines with very large caches.
.SH CAVEATS
I/O error messages are not as helpful as they could be.
.I bzip2
tries hard to detect I/O errors and exit cleanly, but the details of
what the problem is sometimes seem rather misleading.
.PP
This manual page pertains to version 1.0.5 of
.I bzip2.
412 Compressed data created by this version is entirely forwards and
413 backwards compatible with the previous public releases, versions
414 0.1pl2, 0.9.0, 0.9.5, 1.0.0, 1.0.1, 1.0.2 and 1.0.3, but with the following
415 exception: 0.9.0 and above can correctly decompress multiple
416 concatenated compressed files. 0.1pl2 cannot do this; it will stop
417 after decompressing just the first file in the stream.
.PP
.I bzip2recover
versions prior to 1.0.2 used 32-bit integers to represent
421 bit positions in compressed files, so they could not handle compressed
422 files more than 512 megabytes long. Versions 1.0.2 and above use
423 64-bit ints on some platforms which support them (GNU supported
424 targets, and Windows). To establish whether or not bzip2recover was
425 built with such a limitation, run it without arguments. In any event
426 you can build yourself an unlimited version if you can recompile it
427 with MaybeUInt64 set to be an unsigned 64-bit integer.
.SH AUTHOR
Julian Seward, jseward@bzip.org.
.PP
http://www.bzip.org
.SH ACKNOWLEDGEMENTS
The ideas embodied in
.I bzip2
are due to (at least) the following
439 people: Michael Burrows and David Wheeler (for the block sorting
440 transformation), David Wheeler (again, for the Huffman coder), Peter
441 Fenwick (for the structured coding model in the original
.I bzip,
and many refinements), and Alistair Moffat, Radford Neal and Ian Witten
444 (for the arithmetic coder in the original
.I bzip).
I am much
indebted for their help, support and advice. See the manual in the
448 source distribution for pointers to sources of documentation. Christian
449 von Roques encouraged me to look for faster sorting algorithms, so as to
450 speed up compression. Bela Lubkin encouraged me to improve the
451 worst-case compression performance.
452 Donna Robinson XMLised the documentation.
453 The bz* scripts are derived from those of GNU gzip.
454 Many people sent patches, helped
with portability problems, lent machines, gave advice and were generally
helpful.