“He wears his faith but as the fashion of his hat; it ever changes with the next block.”
― William Shakespeare, Much Ado About Nothing
We are conditioned to expect vocals to be upfront in the mix in modern genres of music. It is an industry standard, and listeners are now conditioned to hear music that way and to consider a song “good” when it sounds like that. But by doing this, we sound engineers are training the ears of future generations to lose a range of acoustic ability. When the metal album “Synemotion” was released, some members of my fanbase were disappointed that there were no vocals. The person who helps me promote online said my listeners were expecting a prog album with vocals, not an instrumental metal album. Being progressive was already difficult enough to market; building a niche audience was hard enough without me changing styles. “Why not give your fans what they want? They want to hear you sing,” she said.
I told her I hadn’t changed style; it’s just music. Focusing on genre is something the music industry does to organise and contain a fanbase in order to sell music. People don’t generally listen to just one genre of music, so what’s the big deal, I thought.
“Synemotion” being a metal album, and an instrumental one, was what the music wanted at the time. I hesitate to fake my feelings even in personal relationships or with friends; I certainly couldn’t do that in my relationship with my music either, right?
Vocals vs. Instrumentals
The truth is, I did try to put vocals on the music, and it ruined the flow of the tracks. Those particular pieces were expressing a musical journey from dark despair to euphoria, with some subtle emotions that could be expressed more fluently through the harmonies and choruses – which are “vocals”, just not what people are conditioned to consider vocals. The point is that in those particular songs there was no place for an industry standard that prefers to “show off” a lead singer. It didn’t make sense to force something onto the music for the sake of convention.
Loudness War
On the first album, “Sleeping World”, even though I initially mixed and mastered the album myself, I was given an opportunity to take it to a professional studio in Athens and listen to it one final time through their speakers. The sound engineer suggested bringing the vocals higher up in the mix. But it sounded more pristine and smooth when the vocals sat in the mix as an instrument would. It sounded more natural with balanced dynamics than heavily compressed just to make it a “louder” competitor in what people call the “loudness war”.
I was happy with the result then and still am. Of course, when it was played on the radio for the first time, you could hear how much quieter it was than the previous songs in the DJ’s playlist. I remember one of the first DJs to play “Sleeping World”, Nick Katona, kindly raised the volume a bit and expressed that he was a supporter of balanced dynamics.
Fortunately, YouTube and Spotify have recently tried to end this war, with Spotify using “replay volume normalisation” and YouTube “playback loudness normalisation”. Still, training listeners to expect vocals to sit upfront in the mix, and to feel that this is the correct and only way to produce, is an issue. Some songs may need that, but some may not. Vocals can sit back in the mix just fine. But what happens now is that if there are vocals in a song, listeners expect them to be loud. Using this type of preset over and over again takes away from a song’s identity and color. It would be like using the same Instagram filter on every photo. It may look beautiful, but after a while everything appears the same.
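For readers curious how playback normalisation defuses the loudness war in principle: the streaming services measure each track’s integrated loudness (per ITU-R BS.1770, in LUFS) and apply a playback gain so every track lands near a common target. The sketch below is a simplified illustration, not the services’ actual algorithm – it uses a plain RMS level in dBFS instead of true LUFS, and the -14 dB target and function names are my own assumptions for the example.

```python
import math

# Assumed normalisation target for this sketch (roughly in line with
# common streaming defaults; real services measure LUFS, not RMS).
TARGET_DB = -14.0

def rms_dbfs(samples):
    """RMS level of float samples in [-1, 1], expressed in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

def playback_gain_db(samples, target_db=TARGET_DB):
    """Gain (in dB) a player would apply to reach the target level."""
    return target_db - rms_dbfs(samples)

# A heavily compressed "loud" master vs. a quieter, dynamic one:
loud = [0.9, -0.9, 0.85, -0.85] * 1000
quiet = [0.2, -0.2, 0.15, -0.15] * 1000

# The loud track gets turned DOWN (negative gain) and the quiet one
# turned UP (positive gain), so crushing the dynamics no longer buys
# any loudness advantage at playback time.
print(playback_gain_db(loud), playback_gain_db(quiet))
```

The practical point for mixing is exactly the one above: since the player levels everything anyway, a master with balanced dynamics loses nothing against a brickwalled one.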