Why do we have different file formats?
What does lossy compression do to your audio?
Learn the differences between mp3 and wav files and find out why they matter.
When it comes to audio file formats, you’ll see a few file types pop up more than others. Mp3 and wav are two of the most common formats, but they work in very different ways and can sound quite different depending on the circumstances.
You might already know that mp3 is a lossy format whereas wav files are lossless. But why do we need lossy formats in the first place? Shouldn’t everything be the highest possible quality at all times? We’ll answer these questions and dive right into the differences between mp3 and wav. Let’s go!
A Matter Of Convenience
Mp3 files consist of compressed audio data. We’re not talking about dynamic range compression here, but rather digital data compression. This means mp3s are much smaller in size than uncompressed audio formats like wav. With this in mind, mp3 is a more convenient format for a variety of reasons.
Not only does this mean mp3 files take up less space on storage mediums like hard drives or cloud storage, but it also means they use less bandwidth when being uploaded, downloaded, and streamed. For this reason, many internet audio streams use mp3 compression so the servers can handle the volume of data required for all users to hear uninterrupted audio.
Mp3 files also handle metadata better. This is because they have proper standardization for song information like song titles, album names, artist names, and more. This isn’t impossible with wav files, but there’s no real standard that has been agreed upon here, so don’t expect the same level of compatibility.
So, this makes things seem pretty rosy for mp3 files, right? Well, while they are undoubtedly great, they are not without their drawbacks. After all, it would be impossible to reduce the file size without some loss in information, right? In this case, that loss comes in the form of frequency information, and this is where the term lossy comes in.
What Is Lossy Compression?
Lossy compression is any type of data compression that results in an unrecoverable loss of information.
Compression of digital data is done because we need to save space and bandwidth. Really all we are doing is making the file smaller with some algorithm that encodes the data, and the data is reconstructed later with a decoder.
We can encode the file losslessly so that when it is reconstructed it is a perfect representation of its original state before compression. This seems like the obvious way to go all the time, right? We get a reduction in file size and the audio quality remains unaffected, so surely this is a no-brainer…
However, lossless compression isn’t always that efficient – you won’t save nearly as much space as you will with lossy compression. The catch is that lossy compression affects the quality of the audio, and this effect can be quite significant at low bitrate settings.
Thus, we notice a trade-off between audio quality and file size. So if we want to avoid audio quality loss with mp3 compression, we need to use higher bitrates which results in bigger files. But what is actually happening when we adjust the bitrate to ensure the sound quality is higher? To answer this, we need to look at how and why mp3 files work.
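This trade-off is easy to quantify. As a rough sketch (ignoring headers and metadata, and using a hypothetical 3-minute song), an mp3’s size is simply its bitrate multiplied by its duration:

```python
# Rough mp3 file size for a 3-minute (180 s) song at common bitrates.
# Size in bytes = bitrate (bits/s) * duration (s) / 8 bits per byte.
def mp3_size_mb(bitrate_kbps: int, duration_s: int) -> float:
    bits = bitrate_kbps * 1000 * duration_s
    return bits / 8 / 1_000_000  # decimal megabytes

for kbps in (128, 192, 320):
    print(f"{kbps} kbps -> {mp3_size_mb(kbps, 180):.1f} MB")
# 128 kbps -> 2.9 MB
# 192 kbps -> 4.3 MB
# 320 kbps -> 7.2 MB
```

Double the bitrate and you double the file size – which is exactly why streaming services care so much about which setting they pick.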
What Are MP3 Files?
Ok, so we know that an mp3 file is a compressed audio format that saves space without compromising too heavily on audio quality. And we know that we have control over that compromise with bitrates. But what is actually going on inside an mp3 file, and how are they even made in the first place?
Mp3 files store frames of spectral audio data. These frames are encoded in a particular way to keep the file size as low as possible. This means they can’t be edited without first being decoded, and this process consumes CPU resources.
Part of the encoding process involves removing frequencies that we can’t hear. This doesn’t just mean frequencies that are out of our range of hearing, but also frequencies that are “masked” by other, stronger frequencies.
Masking is a phenomenon observed in psychoacoustics – or the study of how humans perceive sound. The general idea is that our hearing is quite selective, and will “ignore” certain frequencies when other frequencies are more strongly present. Mp3 files take advantage of this and save space by removing this information on the assumption that it is not going to be heard anyway.
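The real mp3 psychoacoustic model is far more sophisticated than this, but a toy sketch can illustrate the idea. The `mask` function and its `ratio` and `bandwidth` thresholds below are made up purely for illustration – this is not how mp3 actually decides what to discard:

```python
# Toy illustration of frequency masking (NOT the real mp3 algorithm):
# drop any spectral component that sits close in frequency to a much
# louder one, assuming a listener won't hear it anyway.
def mask(components, ratio=10.0, bandwidth=100.0):
    """components: list of (frequency_hz, amplitude) pairs."""
    kept = []
    for freq, amp in components:
        masked = any(
            other_amp >= amp * ratio and abs(other_freq - freq) <= bandwidth
            for other_freq, other_amp in components
        )
        if not masked:
            kept.append((freq, amp))
    return kept

# A loud 1000 Hz tone masks a quiet tone at 1050 Hz, but not one at 5000 Hz.
spectrum = [(1000.0, 1.0), (1050.0, 0.05), (5000.0, 0.05)]
print(mask(spectrum))  # [(1000.0, 1.0), (5000.0, 0.05)]
```

The quiet tone near the loud one is thrown away; the quiet tone far from it survives. Scale that idea up with a proper model of human hearing and you have the core space-saving trick behind mp3.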
Of course, we know that we can hear the lossy compression, particularly with lower bitrates. If you’re not sure what to listen out for, pay attention to “washy” or blurred high frequencies and a general lack of textural richness and dynamics. So this algorithm isn’t perfect and its assumptions about our hearing aren’t either – but they’re close enough to be practical, and this is why mp3 is such a useful format.
What Are Wav Files?
Wav files are lossless and offer perfect digital audio quality. Rather than representing the file with a series of encoded frames of spectral audio data, wav (WAVE or .WAV) files consist of raw audio data containing a literal representation of the waveform as a series of numbers.
Imagine a digital waveform that oscillates within a range, let’s say -1 and 1. Now imagine a list of all the samples in that waveform from start to finish. This is called pulse-code modulation or PCM, and is by far the most common way of storing audio data in a wav file (but not the only way).
So that’s more or less what a wav file is – a list of every sample value per channel from start to finish, with a few extra bits of information tacked on to let programs know how many channels there are, what sample rate the file is at, etc.
Because one second of audio typically contains around 44 100 samples of audio data per channel, you’re looking at a very long list of numbers when it comes to a 3-minute song. Thus, wav files tend to be quite large and are not convenient for music libraries.
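You can work out just how large for yourself. Assuming CD-quality stereo (16-bit samples, two channels) and a hypothetical 3-minute song:

```python
# Uncompressed size of CD-quality stereo audio:
# samples/s * channels * bytes per sample * seconds.
sample_rate = 44_100       # samples per second per channel
channels = 2               # stereo
bytes_per_sample = 2       # 16-bit audio
duration_s = 180           # 3 minutes

size_bytes = sample_rate * channels * bytes_per_sample * duration_s
print(f"{size_bytes / 1_000_000:.1f} MB")  # 31.8 MB
```

Compare that to the 7.2 MB of a 320 kbps mp3 of the same song and the storage argument makes itself.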
Wav files, along with other lossless formats like FLAC and AIFF / AIF, offer the highest possible digital sound quality.
Wav files are also extremely easy to read from and write to. There’s no real decoding needed – a program just reads the raw bytes according to the bit depth, sample rate, and channel count. You don’t need to worry about this though; just know that audio software developers are very grateful for wav files for this reason.
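Python’s standard-library `wave` module makes this easy to see. Here’s a minimal sketch that writes one second of a 440 Hz sine tone as 16-bit mono PCM, then reads the header straight back:

```python
# Writing and reading raw PCM with Python's standard-library wave module.
import math
import struct
import wave

sample_rate = 44_100

# Write one second of a 440 Hz sine wave as 16-bit mono PCM.
frames = b"".join(
    struct.pack("<h", int(32767 * 0.5 * math.sin(2 * math.pi * 440 * n / sample_rate)))
    for n in range(sample_rate)
)
with wave.open("tone.wav", "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)          # 16-bit = 2 bytes per sample
    wf.setframerate(sample_rate)
    wf.writeframes(frames)

# Reading it back is just a matter of inspecting the header.
with wave.open("tone.wav", "rb") as wf:
    print(wf.getnchannels(), wf.getframerate(), wf.getnframes())
# prints: 1 44100 44100
```

No psychoacoustic model, no decoder – just a header and a long list of sample values, exactly as described above.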
You should use wav files for any audio recording and production you are doing (or at least an equivalent lossless format like AIFF). This means every audio track in a song should be recorded in lossless quality where possible. You should also use wav files for final masters and any stems you send out for remixing.
Basically, keep a wav copy of any songs you make, or any audio file that needs to be high quality. You will regret it if you convert your old masters to mp3 files without keeping the original wav files!
What Are Bitrates?
This is a term you will see pop up when discussing mp3 files in particular. The bitrate is quite simply a measure of how much data is being processed per second in any given file – usually video or audio.
Bitrate is measured in kbps or kilobits per second. Note that this is bits and not bytes, meaning the number is much larger than you might expect as there are 8 bits in every byte.
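The conversion is a quick one, assuming 1 kilobit = 1000 bits:

```python
# Kilobits per second -> kilobytes per second: divide by 8,
# since there are 8 bits in every byte.
def kbps_to_kbytes_per_sec(kbps: float) -> float:
    return kbps / 8

print(kbps_to_kbytes_per_sec(128))  # 16.0 – a 128 kbps stream moves 16 kB every second
```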
The higher the bitrate is, the more data is being processed per second, and the better the quality of the file is. With mp3 files, a bitrate of 128 kbps is considered to be acceptable – it’s the bare minimum before sound quality seriously suffers. But even at 128 kbps it is quite easy to hear the lossy compression, particularly in the higher frequencies.
When we reduce the bitrate of an mp3 file, we’re pushing the compression algorithm harder and harder to ignore “low priority” spectral information, causing further sound quality loss.
You’ll hear this on SoundCloud, which streams at 128 kbps for standard-quality uploads. If SoundCloud streamed in higher quality, it would put a much greater strain on their servers. Huge companies like Apple and Spotify can afford to do this, but for a smaller company like SoundCloud, it’s not so easy.
Do Wav Files Have Bitrates?
Yes and no. While any time-based digital media can be measured with a bitrate, it makes less sense to do this with wav files for a few reasons.
Firstly, the number is usually in the thousands and hence harder to remember. 2116 kbps (the bitrate of a 24-bit, 44.1 kHz stereo wav) doesn’t quite roll off the tongue as easily as 128 or 320 kbps, does it?
With mp3 files, it makes sense to measure with bitrates as the algorithm is set up to have multiple quality options in the first place, and these are measured with bitrates by design. With wav files, we don’t get this choice.
There are no “variable quality” wav files; they can only be lossless. So they are essentially fixed to the highest bitrate already.
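Because a wav file’s effective bitrate follows directly from its format settings, you can compute it yourself: sample rate × bit depth × channel count. A quick sketch (the 24-bit, 44.1 kHz stereo case is where a figure like 2116 kbps comes from):

```python
# Effective bitrate of uncompressed PCM audio, in kilobits per second:
# sample_rate (Hz) * bit_depth (bits) * channels / 1000.
def wav_bitrate_kbps(sample_rate: int, bit_depth: int, channels: int) -> float:
    return sample_rate * bit_depth * channels / 1000

print(wav_bitrate_kbps(44_100, 16, 2))  # 1411.2 – CD-quality stereo
print(wav_bitrate_kbps(44_100, 24, 2))  # 2116.8 – 24-bit stereo
```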
This doesn’t mean we don’t have some options for setting up the format of a wav file, however…
Wav File Formats
So, up until this point, it seems as though mp3 files are quite flexible whereas wav files are more or less “fixed” to the same settings. This isn’t quite true, though wav files have far fewer options in this area.
Wav files can have different bit depths, sample rates, and channel counts.
For CD-quality audio, you’re looking at a two channel wav file with a sample rate of 44 100 samples per second. Each sample has a bit depth of 16 bits – meaning there are 65 536 unique possible values for each sample.
With audio files in our DAWs, we have the luxury of going higher. For any recordings you do, I suggest a bit depth of at least 24 bits. This gives you much more accuracy with each sample value. 24 bit audio has over 16 million unique values per sample, which is a huge improvement over 16 bit audio.
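The jump is bigger than it might sound: the number of unique values per sample doubles with every extra bit, i.e. 2 to the power of the bit depth in total:

```python
# Unique sample values at a given bit depth: 2 ** bits.
for bits in (16, 24):
    print(f"{bits}-bit: {2 ** bits:,} unique values per sample")
# 16-bit: 65,536 unique values per sample
# 24-bit: 16,777,216 unique values per sample
```

Those extra 8 bits buy 256 times finer resolution for every single sample.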
It’s important to aim for a higher bit depth here because there is still so much processing to do with the audio before rendering the final song. A modern mixdown has multiple equalizers, compressors, reverbs, and tons of other processing in a single song. The more accurate your original recording is, the better it will fare after multiple layers of processing are applied.
In terms of sample rates for your wav files I would recommend working at 48 kHz, or 48 000 samples per second. While 44.1 kHz has dominated for a long time, it’s essentially an arbitrary holdover from the CD era – and in case you haven’t checked a calendar in a while, that era began roughly four decades ago.
With the emergence of YouTube as a media juggernaut, more and more music is being uploaded at 48 kHz, as this is the standard sample rate for most video formats.
What About AIFF Files?
You’ve probably heard about AIFF, which stands for Audio Interchange File Format. It was developed by Apple in the late 80s and still gets used today.
There are no major differences between wav and aiff files at least in terms of sound quality. Both files are structured differently in terms of how the data is organized, but the idea behind them is essentially the same. Both files contain raw PCM data that describes a waveform (think back to the “list of numbers” example from before).
AIFF files can actually support compression, meaning they might not always be as big as wav files. However, it’s rare for compression to be used here; it’s much easier to just convert to mp3 for the sake of compatibility alone. Most of the time, an AIFF file will contain lossless audio data and will end up being about the same size as a wav file.
So really AIFF files can be described as “Apple’s version of wav files”. But you will not gain any sound quality advantages by rendering to AIFF instead of wav.
What About FLAC Files?
Once again, there is no difference between FLAC and wav when it comes to audio quality. While FLAC does apply encoding to reduce file size, it’s a lossless format meaning that when the file is reconstructed, it’s a perfect representation of the original, uncompressed file.
This is different from mp3 files, which permanently delete information that cannot be properly recovered.
FLAC is a great option for those who want to keep their audio files lossless while saving on storage space. Most of the time, a FLAC file will be about half the size of a wav file. This can really add up if you have a huge audio library!
There are a couple of disadvantages to be aware of before you turn all your wav files into FLACs. Firstly, because it’s an encoded file, you must first decode the file before listening. Sure, there are plenty of programs that will do this for you, but it requires extra processing power. This may be a deal-breaker when it comes to live audio if your computer isn’t up to scratch.
The other thing to consider is that FLAC files are less compatible with software compared to mp3 and wav. Ok, so most DAWs will read a FLAC file these days, but you can’t count on it. Make sure you scope out how well your DAW and other programs handle FLAC files before making any major decisions.
Which Format Should You Use?
So, to make things clear, mp3 files compress your audio and compromise on the sound quality. Wav (and AIFF and FLAC) files are lossless, so the sound is as good as it gets.
Mp3 files are great for portability and compatibility. They take up far less space than wav files so are easier to store, upload, and download. Mp3 files can have different bitrates, and at the highest setting (320 kbps) can sound just as good as a wav file despite being a fraction of the size.
You should store your master recordings and stems as wav files, or at least use a similar lossless format such as FLAC and AIFF. It’s perfectly fine to have mp3 copies of these files, but you’ll seriously regret it if you delete the original lossless files.
This isn’t a “one or the other” situation. It’s not about a choice between wav or mp3, it’s about knowing which format to use and when to use it. Hopefully at this point you’ve got a pretty good idea about the differences and know exactly when to compress and when to stay lossless!