Not really, says Christopher "Monty" Montgomery, a digital audio engineer who heads the non-profit Xiph.org Foundation that's responsible for the Opus, Ogg Vorbis, and FLAC digital audio codecs. There may be problems with digital audio, Montgomery contends, but high-resolution 24/192 audio doesn't solve any of them.
Instead, Montgomery says, when you buy into high-resolution audio, all you end up with is a healthy dose of pseudoscience and bigger hard drive requirements for storing files that can be up to six times larger than what you get on a CD.
The problem with high-resolution audio? To put it bluntly: you can't hear the difference between high-resolution audio and a CD.
Sample rates, bit depth, and bit rates
The components of digital audio can be broken down into three basic categories — sample rates, bit depth, and bit rates. Let's start with sample rates, which are measured in kilohertz (kHz). High-resolution audio typically uses sample rates of 96 kHz or 192 kHz (or even higher), whereas CDs are sampled at 44.1 kHz.
Imagine a sound wave continuously fluctuating through space. To turn that wave into a digital file, you have to grab parts, or samples, of that original wave and store them in digital form.
To capture the human audible sound wave — that is, the sound you and I can actually hear — all you have to do is make sure the sample rate is a little more than double the highest frequency in the original performance. This will accurately capture the entire audible sound wave in digital form, according to the Nyquist-Shannon sampling theorem, the basic principle governing how digital audio recording works.
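The Nyquist criterion described above can be sketched with some simple arithmetic (the helper function here is illustrative, not from any audio library):

```python
# Minimal sketch of the Nyquist-Shannon criterion: the sample rate must be
# more than twice the highest frequency you want to capture.

def min_sample_rate(max_frequency_hz: float) -> float:
    """Return the minimum sample rate needed to capture max_frequency_hz."""
    return 2 * max_frequency_hz

# Human hearing tops out around 20 kHz, so:
required = min_sample_rate(20_000)
print(required)            # 40000 samples per second
print(44_100 > required)   # True: CD's 44.1 kHz clears the bar
print(192_000 > required)  # True as well, with enormous headroom
```

This is why 44.1 kHz was chosen for CDs in the first place: it sits just above twice the limit of human hearing.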
The next consideration for digital music is bit depth, or the number of bits used to store each sample of your audio. The more bits, the greater the dynamic range — the span from soft to loud sounds — your audio file can represent. There are basically two bit depths in use today: 16-bit and 24-bit. CDs are traditionally produced at 16 bits, while 24-bit sound files are typically used by audio engineers during recording and production.
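The relationship between bit depth and dynamic range follows a simple rule of thumb: each bit of linear PCM adds roughly 6 dB. A quick sketch (this simplification ignores dither and other production details):

```python
import math

# Approximate dynamic range of linear PCM audio: 20 * log10(2^bits),
# i.e. roughly 6.02 dB per bit. A rule of thumb, not an exact model.

def dynamic_range_db(bits: int) -> float:
    """Return the approximate dynamic range, in decibels, of a PCM signal."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # 96.3 dB for 16-bit CD audio
print(round(dynamic_range_db(24), 1))  # 144.5 dB for 24-bit studio masters
```

The extra headroom of 24-bit files is genuinely useful during recording and mixing, where many tracks are layered and processed; it matters far less for playback.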
The final piece of the puzzle — bit rate — is the most widely quoted figure when talking about compressed audio files such as MP3, AAC, and Ogg Vorbis. Apple, for example, touts its iTunes Match service for upgrading your music files to a bit rate of 256 kbps. All this means is that to store one second of audio, a file uses 256 kilobits, or 256,000 bits, of data. The bigger the bit rate, the bigger the file — and, presumably, the better the sound quality.
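For uncompressed audio, the bit rate falls straight out of the other two numbers: sample rate times bit depth times the number of channels. The arithmetic below (illustrative, assuming stereo) also shows where the "up to six times larger" file sizes mentioned earlier come from:

```python
# Uncompressed PCM bit rate = sample rate * bit depth * channels.

def pcm_bit_rate(sample_rate_hz: int, bits: int, channels: int = 2) -> int:
    """Return the bit rate, in bits per second, of raw stereo PCM audio."""
    return sample_rate_hz * bits * channels

cd = pcm_bit_rate(44_100, 16)      # 1,411,200 bits/s (~1.4 Mbps)
hires = pcm_bit_rate(192_000, 24)  # 9,216,000 bits/s (~9.2 Mbps)

print(round(hires / cd, 1))  # 6.5 -- roughly six and a half times the data
```

So a 24/192 file carries about six and a half times the data of the same recording on CD, before any compression is applied.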
But there's one little snag in all of this talk of juiced-up audio: the human ear. The maximum frequency the human ear can perceive is widely accepted to be about 20 kHz. Based on what we know about sample rates, capturing everything up to 20 kHz requires a sample rate only slightly greater than 40 kHz — which is exactly what the CD's 44.1 kHz already delivers.