I was driving back from the orchestra rehearsal on Thursday night and started thinking about a comment Ken Walker made about my latest musical composition. He didn’t like the fact that I had used “Voice Ahs” as the lead sound. The thing is though, I wasn’t using that sound. I was actually using something quite a bit more subtle/interesting. Unfortunately, all the subtleties were lost in the overall sound of the mix.
One of the things that producers do to fix this kind of problem is to use a parametric EQ to carve out a space in the frequency spectrum for particular sounds, thereby making them more prominent. The problem with this is that it is a manual process.
Suddenly, it hit me: Build an “inverted vocoder”.
Vocoders work by splitting the frequency spectrum of the signal to be processed into many small bands (say 128 or 256) and then setting the level of each band based on the amount of energy present in that part of the frequency spectrum of a separate modulation signal. When you feed something rich in harmonics, like a ramp wave, in as the signal to be processed and use the sound of your voice as the modulation signal, you get that traditional “singing robots” vocoder sound.
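The band-splitting idea above can be sketched in a few lines of Python/NumPy. This is only a rough illustration, not a production vocoder: it uses non-overlapping FFT frames and a crude energy normalization (real implementations use overlapping windows, filter banks, and envelope followers), and all names and parameter values here are my own choices.

```python
import numpy as np

def vocode(carrier, modulator, n_bands=128, frame=1024):
    """Minimal channel-vocoder sketch: per FFT frame, scale each
    frequency band of the carrier by the modulator's energy in
    that band. Non-overlapping frames; illustrative only."""
    out = np.zeros_like(carrier, dtype=float)
    for start in range(0, len(carrier) - frame + 1, frame):
        C = np.fft.rfft(carrier[start:start + frame])
        M = np.fft.rfft(modulator[start:start + frame])
        for idx in np.array_split(np.arange(len(C)), n_bands):
            # RMS energy of the modulator in this band, roughly
            # normalized for frame length (an arbitrary choice)
            energy = np.sqrt(np.mean(np.abs(M[idx]) ** 2)) / (frame / 2)
            C[idx] = C[idx] * energy
        out[start:start + frame] = np.fft.irfft(C, n=frame)
    return out
```

Feed a ramp wave in as `carrier` and speech as `modulator` and the carrier only sounds in the bands where the voice has energy — the classic talking-synth effect.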
But what would happen if you inverted the relationship, so that by default all of the frequency bands were “full on” (i.e. not cutting out the sound) and, as the energy went up in the modulation signal, the corresponding band was automatically lowered in the signal to be processed? (You would want to be able to control the overall amount of reduction that occurs. Using too much would probably cause the result to be “unnatural”.)
Now imagine feeding the backing track for your mix in as the signal to be processed and the “lead” line (or whatever it is that you want to make more prominent) as the modulation signal. The result would be to automatically reduce, in the backing track, the frequencies that contain the most energy in the lead line. Effectively, you would be automatically EQ’ing the track to make the lead line more prominent. (Of course, you still have to mix the original lead signal and the processed backing track to get the final result.)
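The whole inverted-vocoder idea can be sketched by flipping the gain in the previous logic: bands start at full gain and get attenuated in proportion to the lead's energy there. Again, this is a hypothetical sketch under my own assumptions (non-overlapping frames, a rough energy normalization for signals scaled to roughly ±1, and a `depth` parameter for the overall amount of reduction mentioned above).

```python
import numpy as np

def duck(backing, lead, n_bands=128, frame=1024, depth=0.8):
    """Inverted-vocoder sketch: attenuate each frequency band of the
    backing track in proportion to the lead's energy in that band.
    depth sets the maximum attenuation (0 = no ducking at all)."""
    out = np.zeros_like(backing, dtype=float)
    for start in range(0, len(backing) - frame + 1, frame):
        B = np.fft.rfft(backing[start:start + frame])
        L = np.fft.rfft(lead[start:start + frame])
        for idx in np.array_split(np.arange(len(B)), n_bands):
            # crude per-band energy estimate, normalized for frame
            # length; assumes input samples are roughly in [-1, 1]
            energy = np.sqrt(np.mean(np.abs(L[idx]) ** 2)) / (frame / 2)
            # bands where the lead is loud get cut; where the lead
            # is silent, gain stays at 1 and the backing passes through
            gain = 1.0 - depth * min(1.0, energy)
            B[idx] = B[idx] * gain
        out[start:start + frame] = np.fft.irfft(B, n=frame)
    return out
```

The final mix would then be `lead + duck(backing, lead)`, so the lead sits in the spectral hole that was carved for it.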
This is similar to another process used by producers called “ducking”, which lowers the overall level of one signal when another signal is present, but with this mechanism you only remove certain frequencies.
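For contrast, classic broadband ducking can be sketched the same way, and the difference is obvious: one gain per frame applied to every frequency at once, rather than a separate gain per band. Same caveats as before — non-overlapping frames and made-up parameter names.

```python
import numpy as np

def broadband_duck(backing, lead, frame=1024, depth=0.8):
    """Classic ducking for comparison: one overall gain per frame,
    driven by the lead's total level, applied across all frequencies."""
    out = backing.astype(float).copy()
    for start in range(0, len(backing) - frame + 1, frame):
        # overall RMS level of the lead in this frame
        level = np.sqrt(np.mean(lead[start:start + frame] ** 2))
        out[start:start + frame] *= 1.0 - depth * min(1.0, level)
    return out
```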
What’s really weird about all this is that I have never heard of anyone doing it before. If anyone else knows of software (or hardware) that implements this algorithm, please let me know; I’d like to find out if it works.
As unlikely as it is, if it does happen to be a novel idea, remember that I thought of it first. 😉 I’m probably not going to have time to do anything about this myself, but if you decide to try it, I’d appreciate it if you let me know. (And if it turns out to be the “next big thing” in studio technology and you start making millions, it would be cool if you sent me one. LOL.)