Originally Posted By: edshaw
Dave:
I'd like to hear a summary of your mixing process.

Hi Ed.

I'm not very good with short responses, so hopefully something here will be useful. wink

I'll start with BiaB tracks and vocals. These will all go into Reaper. I'll normalize all the tracks. I'll often have "alternate" tracks, so I'll mute those.

The first thing I'll usually do is normalize the vocals so they sit at a constant level throughout the song. I'll slice the vocal up at the word or syllable level, normalize each slice, and adjust it down if necessary.

You can do that with a compressor (or compressors in series), and Melodyne has a feature for it, but I prefer the level of control I get doing it manually.
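If it helps to see the idea spelled out, here's a rough sketch of that per-slice normalization in Python/numpy. The function name, the target level, and the idea of handing it slices as sample ranges are all made up for illustration; I'm doing this by hand on Reaper items, not with a script:

import numpy as np

def normalize_slices(vocal, slices, target_peak=0.9):
    """Peak-normalize each word/syllable slice of a mono vocal independently.

    vocal:  1-D numpy array of samples in the range [-1.0, 1.0]
    slices: list of (start, end) sample index pairs, one per word or syllable
    """
    out = vocal.copy()
    for start, end in slices:
        segment = out[start:end]
        peak = np.max(np.abs(segment))
        if peak > 0:
            # Scale the slice so its loudest sample hits the target peak.
            out[start:end] = segment * (target_peak / peak)
    return out

The "adjust it down if necessary" part is the by-ear step afterwards: peak level and perceived loudness aren't the same thing, so some slices still need pulling back.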

I'll also pay special attention to the plosives, balancing them out if they are too loud or too quiet. If there are any that are particularly bad, I'll see if I can steal a good one from somewhere else.

I do the same with fricatives such as /s/, which I find easier and more precise than using a de-esser.
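The manual version of both of those fixes is really just trimming a marked region by a few dB. A minimal sketch of that, again with a made-up function name and sample-range arguments:

import numpy as np

def trim_region(track, start, end, change_db=-6.0):
    """Nudge a marked region (a plosive or an /s/) up or down by a fixed
    number of dB. Negative values cut, positive values boost."""
    out = track.copy()
    out[start:end] *= 10.0 ** (change_db / 20.0)
    return out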

Once that's done, I'll get a quick balance of the instruments. I'll pay the most attention to the vocals, bass and drums, since these are the instruments that tend to stay at a pretty constant level in my mixes.

I'll often throw a mastering tool on the Master buss at this point, since it gives a better idea of what the final sound will be like. Ozone Elements is good for getting a loud mix, but I also like the Lurssen Mastering Console a lot.
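Those are full mastering plugins, of course. If you just want a feel for why a loudness-pushing stage on the Master buss changes the picture while you balance, a very crude stand-in (nothing like what Ozone or Lurssen actually do internally) is to add gain and soft-clip the peaks:

import numpy as np

def rough_loudness_push(mix, drive_db=6.0):
    """Crude "loud mix" stand-in: add gain, then soft-clip with tanh so
    peaks never exceed full scale. Purely illustrative; real mastering
    chains use EQ, compression and true limiting instead."""
    gain = 10.0 ** (drive_db / 20.0)
    return np.tanh(mix * gain)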

I'll then have a listen to decide what's missing. Perhaps there's a section that could change to a simpler guitar strum, or some strings. I'll go back to BiaB and see if I can find something.

I'll also listen for where the arrangement needs more variation. The simplest way to get that is to drop some instruments out for a section.

If there's something in the same space as a lead, such as a strummed acoustic with a vocal, I'll often throw TrackSpacer on to make sure that instrument stays out of the way. If there are clarity issues, such as the drums masking plosives, I may go back to the vocal and boost those masked consonants.
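TrackSpacer works per frequency band, but the core idea (duck the bed track while the lead is active) can be sketched broadband in a few lines of numpy. The names and the 4 dB figure here are illustrative assumptions, not how the plugin actually works:

import numpy as np

def duck_against_lead(bed, lead, rate, max_cut_db=4.0, window_ms=50.0):
    """Broadband ducking: attenuate 'bed' (e.g. a strummed acoustic) in
    proportion to the smoothed level of 'lead' (e.g. the vocal).
    Both are assumed to be mono arrays of the same length."""
    window = max(1, int(rate * window_ms / 1000.0))
    kernel = np.ones(window) / window
    # Smoothed amplitude envelope of the lead, scaled to 0..1.
    envelope = np.convolve(np.abs(lead), kernel, mode="same")
    envelope /= np.max(envelope) + 1e-12
    # No cut when the lead is silent, up to max_cut_db at its loudest.
    gain = 10.0 ** ((-max_cut_db * envelope) / 20.0)
    return bed * gain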

Having the tracks sound "flat" is always a problem, so I try to figure out how to place them in space. I've got a number of tools such as Sunset Sound Studio Reverb and Fame Studio Reverb that place sounds into "real" rooms.

Recently I've been using Panagement. Although I have the paid version, the free version has everything I need. It can be used to place a sound into a virtual space, and I really like the way it makes things sound.

I don't like to fiddle with FX forever, so ReaEQ (Reaper's EQ) and EZMix get added to the signal chains frequently. I like the Greg Wells VoiceCentric plugin on vocals. My "go-to" reverbs are Raum and LX480 Essentials. If I'm lazy, I'll just add them to the end of the chain, but I'll usually have a dedicated FX buss for vocals.

I'll go through each track and manually adjust the gain envelopes. Each instrument will have an ebb and flow as it comes in, plays its part, and exits. I'll add peaks where I want to highlight something like a drum or guitar fill, but otherwise keep them in the background.
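In script form, that kind of hand-drawn gain ride is just a piecewise-linear envelope in dB multiplied into the track. Another hedged sketch; the function name and the example automation points are invented:

import numpy as np

def apply_gain_envelope(track, rate, points):
    """Apply hand-drawn volume automation to a mono track.

    points: list of (time_seconds, gain_db) pairs, e.g.
            [(0.0, -8.0), (12.0, -8.0), (12.5, 0.0), (16.0, 0.0), (16.5, -8.0)]
            to push a fill up for a few bars and then tuck it back down.
    """
    times = np.array([t for t, _ in points], dtype=float)
    gains_db = np.array([g for _, g in points], dtype=float)
    t = np.arange(len(track)) / rate
    # Linear interpolation between automation points, then dB to linear gain.
    gain_db = np.interp(t, times, gains_db)
    return track * (10.0 ** (gain_db / 20.0))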

Once that's working, I render, listen, lather, rinse and repeat. I've got room simulation software for my headphones called Realphones, and some decent monitors. Burning the mix to CD and listening in the car is always a depressing experience. The mix always seems to sound brighter and punchier in the DAW than in the final render.

Each song is a learning experience - especially when I come back with fresh ears! I'll occasionally come back and re-mix some of the songs, especially if it's got a synthetic vocal that I wasn't happy with.

Did that answer the question?


-- David Cuny
My virtual singer development blog

Vocal control, you say. Never heard of it. Is that some kind of ProTools thing?