AI vs Human

It’s alarming how many artists and bands are cutting corners, reaching for quick and easy shortcuts in the fundamental parts of the music-making process.

This is especially evident with the rise of artificial intelligence-based mastering.

While AI may appear to make some changes to the music, whether those changes are the right ones is debatable.

Let’s first explore what AI does in mastering, why it does it, and then compare it to a human engineer.

We know perceived loudness isn’t ‘real’ loudness; it takes advantage of the resonant frequency response of human hearing.

The human ear (largely through ear-canal resonance) is most responsive between roughly 1,000 and 4,000 Hz, peaking towards the upper end of that range.

This means the ear is most sensitive to sound waves within that band.

As a result, those frequencies seem louder than they actually measure. A human engineer can exploit this deliberately: lift the 1-4 kHz region, and the track competes in the loudness war (for better or worse) while retaining the punch of the low end.
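This sensitivity curve is exactly what standard weighting filters approximate. As a rough illustration (using the published A-weighting formula from IEC 61672, not anything specific to mastering software), a few lines of Python show how much quieter a 50 Hz tone reads to the ear than a 3 kHz tone at the same physical level:

```python
import math

def a_weighting_db(f: float) -> float:
    """A-weighting gain in dB at frequency f (Hz), per IEC 61672."""
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.0  # +2.0 dB puts the curve at 0 dB at 1 kHz

for freq in (50, 100, 1000, 3000):
    print(f"{freq:>5} Hz: {a_weighting_db(freq):+6.1f} dB")
# A 50 Hz tone reads roughly 30 dB quieter to the ear than a 3 kHz tone
# at the same physical sound pressure level.
```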

Now, let’s consider AI, which promises to create super loud masters (because apparently that’s all musicians care about).

AI can indeed create super loud audio, but it doesn’t tell you what it’s sacrificing elsewhere in your track to get there.

The loudest part of any track is the low end and sub-bass: the frequencies you feel, the ones that make people go ‘ooo’ without fail when a house record hits the drop.

AI knows these frequencies carry the most energy and eat up the most headroom in the overall gain structure.

So, what does it do? It simply pushes them down to create more room to boost the higher frequencies and achieve unbelievable perceived loudness.
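To make the trade-off concrete, here’s a toy numpy sketch. The two test tones are hypothetical stand-ins for a real mix, and this models the general headroom trade, not the behaviour of any particular AI mastering service: cut the sub-bass by 6 dB, re-normalise to the same peak, and the midrange (where our ears are most sensitive) comes out measurably hotter.

```python
import numpy as np

SR = 48_000
t = np.arange(SR) / SR  # one second of audio

# Hypothetical "track": a 50 Hz sub-bass plus a quieter 3 kHz midrange tone.
sub = 1.0 * np.sin(2 * np.pi * 50 * t)
mid = 0.4 * np.sin(2 * np.pi * 3000 * t)
track = sub + mid
track /= np.abs(track).max()  # peak-normalise to 0 dBFS

# The "AI" move: push the sub down 6 dB, then re-normalise to the same peak.
processed = sub * 10 ** (-6 / 20) + mid
processed /= np.abs(processed).max()

def mid_level_db(x):
    """Level of the 3 kHz component, read from a single DFT bin."""
    bin3k = np.fft.rfft(x)[3000]  # 1 s of audio -> 1 Hz bin spacing
    return 20 * np.log10(2 * np.abs(bin3k) / len(x))

print(f"3 kHz before: {mid_level_db(track):+.1f} dBFS")
print(f"3 kHz after:  {mid_level_db(processed):+.1f} dBFS")
# The midrange gains roughly 4 dB at the same peak level: the track
# "sounds" louder, but only because the sub-bass was sacrificed.
```

Note that the peak meter reads the same before and after, which is exactly why the loss is easy to miss until you play the master back on a full-range system.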
