Lossless data compression

Lossless data compression is a class of data compression algorithms that allows the original data to be reconstructed exactly from the compressed data. One of the most widely used algorithms is Huffman coding. Lossless data compression is used in software compression tools such as the highly popular zip format, used by PKZIP and WinZip, and the Unix programs gzip and compress. Lossless compression is required when every byte of the data matters, as with executable programs and source code. Some image file formats, notably PNG, use only lossless compression, while others like TIFF and MNG may use either lossless or lossy methods. GIF uses a technically lossless compression method, but most GIF implementations are incapable of representing full color, so they quantize the image (often with dithering) to 256 or fewer colors before encoding it as GIF. Color quantization is a lossy process, but reconstructing the color image and then re-quantizing it produces no additional loss. (Some rare GIF implementations make multiple passes over an image, adding 255 new colors on each pass.)
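The defining property above can be demonstrated with a short round trip through Python's standard zlib module, which implements the DEFLATE algorithm used by the zip and gzip formats mentioned here (the specific input string is only an illustration):

```python
import zlib

# A repetitive input, chosen so DEFLATE can actually shrink it.
original = b"lossless compression must reproduce every byte exactly " * 20

compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

# Lossless means the decompressed output is byte-for-byte identical.
assert restored == original
# This highly repetitive input also happens to compress well.
assert len(compressed) < len(original)
```

The first assertion is the lossless guarantee itself; the second holds only for compressible inputs, which is exactly the limitation the next section proves.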

Lossless data compression does not always work

Lossless data compression algorithms cannot guarantee to compress (that is, make smaller) all input data sets. In other words, for any lossless data compression algorithm there will be an input data set that does not get smaller when processed by the algorithm. This is easily proven with elementary mathematics using a counting argument, as follows:

  • Assume that files can have lengths that are arbitrary numbers of bits.
  • Consider the set of all binary files of length at most N bits, which has 1 + 2 + 4 + ... + 2^N = 2^(N+1) − 1 members if we include the zero-length file. Assume, for the sake of argument, that a given compression function maps every one of these to a distinct shorter file of at most N−1 bits. (If the output files are not all distinct, the compression cannot be reversed without losing some data.)
  • Now consider the set of all files of length at most N−1 bits. How many members are there in this set?
  • There are 1 + 2 + 4 + ... + 2^(N−1) = 2^N − 1 such files, if we include the zero-length file in the set. But this is smaller than 2^(N+1) − 1. So we cannot map all the members of the larger set uniquely onto members of the smaller set.
  • This contradiction implies that our original hypothesis (that the compression function makes every file smaller) must be untrue.

Notice that the difference in size is so marked that it makes no difference if we simply consider files of length exactly N as the input set: it is still larger (2^N members) than the desired output set.

If we make all the files a multiple of 8 bits long (as in standard computer files) there are even fewer files in the smaller subset, and this argument still holds.
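The counts in the argument above can be checked directly for a small N; the snippet below just evaluates the two sums and confirms the pigeonhole gap:

```python
# Pigeonhole counting for the argument above, with N = 8.
N = 8

# Files of length 0..N bits: 1 + 2 + 4 + ... + 2^N = 2^(N+1) - 1
inputs = sum(2**k for k in range(N + 1))
# Files of length 0..N-1 bits: 1 + 2 + ... + 2^(N-1) = 2^N - 1
outputs = sum(2**k for k in range(N))

assert inputs == 2**(N + 1) - 1 == 511
assert outputs == 2**N - 1 == 255

# More possible inputs than shorter outputs: at least two inputs would
# have to share a compressed form, so the mapping could not be reversed.
assert inputs > outputs
```

Because 511 inputs cannot map one-to-one into 255 shorter outputs, some input must fail to shrink, exactly as the proof concludes.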


See also: Lossy data compression, David A. Huffman