The quality of music recordings on compact discs, or CDs, is excellent. In the age of vinyl records, irritating clicks resulting from surface scratches were almost impossible to avoid. Modern recording media are largely free from this shortcoming. This is curious, because there are many ways in which CD music can be contaminated: dirt on the disc surface, flaws in the plastic substrate, errors in burning the recording, scratches, fingerprints, and so on [TM077; or search for “thatsmaths” at irishtimes.com].
Music is encoded on a CD in digital form as a stream of binary digits or bits. There are more than four million bits per second, so if one bit in ten thousand is in error (a 0.01% error rate) there will still be hundreds of errors every second. How then can we explain the high fidelity of the recordings? The answer lies in error-correcting codes.
The incoming audio signal is sampled 44,100 times per second. By the sampling theorem, this captures frequencies up to half the sampling rate, just over 22,000 cycles per second, comfortably covering the limit of human hearing at about 20,000 cycles per second. Each sample is expressed in digital form as a string of 16 bits or 2 bytes (one byte is eight bits). The signal is broken into segments of 24 bytes. Then check bytes are added to make each segment 32 bytes. These check bytes are cleverly arranged so that it is possible not only to detect errors but to correct them.
The error-correction system used for CDs is called the Cross-interleaved Reed-Solomon Code, or CIRC for short. The information rate is 3/4, that is, 75% of the bits contain information and 25% of them enable error detection and correction. But this overhead is well worthwhile, as it makes all the difference between wonderful quality and intolerable contamination of the recording.
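The actual CIRC code uses Reed–Solomon codes over 8-bit symbols, which is well beyond a short sketch. But the basic idea of check symbols can be illustrated with a toy example (not the real CD code): append a single parity byte, the XOR of all the data bytes, to a 24-byte frame. One parity byte can only detect corruption, not correct it, whereas the eight check bytes of CIRC also locate and repair errors.

```python
# Toy illustration of check symbols (NOT the actual CIRC code): append one
# parity byte, the XOR of all data bytes, to a 24-byte frame. Any single
# corrupted byte makes the check fail, so the error is detected.

def add_parity(block: bytes) -> bytes:
    """Append one check byte: the XOR of all data bytes."""
    parity = 0
    for b in block:
        parity ^= b
    return block + bytes([parity])

def is_valid(coded: bytes) -> bool:
    """A block is consistent if all bytes (data plus parity) XOR to zero."""
    parity = 0
    for b in coded:
        parity ^= b
    return parity == 0

data = bytes(range(24))          # a 24-byte frame, as on a CD
coded = add_parity(data)         # 25 bytes: detection only, no correction

corrupted = bytearray(coded)
corrupted[5] ^= 0x10             # flip one bit in byte 5

assert is_valid(coded)
assert not is_valid(bytes(corrupted))
```

Detection alone is already useful (a damaged audio frame can be concealed by interpolating its neighbours), but correction, as in CIRC, recovers the original bytes exactly.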
Millions of Errors
A typical CD may have as many as a million errors. Error correction is applied in two stages, extending each 24-byte string first to 28 bytes and then, using a complementary code, to 32 bytes. The resulting ‘product code’ is very effective. Errors tend to occur in local bursts; for example, a scratch may damage several adjacent tracks of the recording.
To counteract this, the bit strings are fragmented and distributed to different areas of the disc. Before we hear the recording, this interleaving is reversed, the errors are corrected and the digital stream is converted to an analogue audio signal. Then, thanks to a combination of technology and mathematics, we can relax and enjoy music free from any distortion or distracting surface noise.
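The effect of interleaving can be sketched in a few lines (with hypothetical grid dimensions, not the actual CIRC interleaver): bytes are written into a grid row by row and read out column by column, so consecutive bytes of one frame end up far apart on the disc. A burst error then lands on bytes from many different frames, leaving at most one damaged byte per frame, which a correcting code can easily repair.

```python
# A toy block interleaver (hypothetical depth and frame size, not the
# actual CIRC interleaver). A burst of adjacent errors on the "disc" is
# scattered across frames after de-interleaving.

def interleave(data: bytes, rows: int, cols: int) -> bytes:
    """Write data row by row into a rows x cols grid; read it out by columns."""
    assert len(data) == rows * cols
    return bytes(data[r * cols + c] for c in range(cols) for r in range(rows))

def deinterleave(data: bytes, rows: int, cols: int) -> bytes:
    """Invert interleave(): restore the original row-by-row byte order."""
    assert len(data) == rows * cols
    return bytes(data[c * rows + r] for r in range(rows) for c in range(cols))

rows, cols = 8, 24                       # 8 frames of 24 bytes each
original = bytes((i * 7) % 256 for i in range(rows * cols))
on_disc = bytearray(interleave(original, rows, cols))

for i in range(40, 48):                  # a "scratch": 8 consecutive bytes lost
    on_disc[i] = 0xFF

recovered = deinterleave(bytes(on_disc), rows, cols)
error_positions = [i for i, (a, b) in enumerate(zip(original, recovered)) if a != b]
# The burst of 8 adjacent errors is now scattered: exactly one damaged
# byte in each of the 8 frames, within reach of the correcting code.
```

Running this shows the eight corrupted bytes spaced one full frame apart after de-interleaving, so no single frame ever carries more errors than the code can fix.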
Error-correcting codes have been around for more than fifty years. They were introduced by Richard Hamming, who worked in Bell Labs [see last week’s post]. He was so disturbed by the high level of errors in the old electro-mechanical computing machinery that he devised a method of adding redundant information so that the exact position of any bit that was in error could be located and thereby corrected.
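Hamming's original scheme can be shown in miniature. In the classic Hamming (7,4) code, three parity bits protect four data bits, and the pattern of failed parity checks (the “syndrome”) spells out, in binary, the position of a single flipped bit, so it can simply be flipped back:

```python
# A minimal Hamming (7,4) encoder/decoder. The three parity checks each
# fail or pass independently; read together as a binary number, they give
# the exact position of a single-bit error.

def hamming_encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4            # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4            # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming_decode(c):
    """Correct at most one flipped bit, then return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3      # binary position of the error
    if syndrome:
        c[syndrome - 1] ^= 1             # flip the offending bit back
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
word = hamming_encode(data)
word[4] ^= 1                             # simulate a single-bit error
assert hamming_decode(word) == data      # the error has been corrected
```

Whichever single bit is flipped, data or parity, the syndrome names its position, which is exactly the redundancy-based location of errors that Hamming devised.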
Coding theory has blossomed in the digital age and is an active field of mathematical research today. We depend on reliable communication channels that transmit large volumes of data. This data must be compressed before sending and accurately expanded on arrival. If it is sensitive, it must be encrypted, and inevitable errors in noisy transmission channels must be detected and corrected. Richard Hamming’s wrestling match with punched-card equipment has led to a worldwide industry.