Originally Posted by Brno
But how about the placement of sound sources in the depth direction of the sound picture? I learned from one of the greatest music recording experts in Finland that delaying the backing instruments a bit (under 10 ms) brings out the soloists, because then you hear them first. So it is in live performances, where the soloists are at the front of the stage.
I'm not sold that a 10 ms delay is going to highlight one performer over another.

Sure, the psychoacoustics work out - that's the precedence effect - but it would be better to avoid the conflict in the first place.

After all, a good mix should work even if it's rendered in mono.
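Here's a quick sketch of why that kind of delay can bite you in mono. This is my own illustration, not anything from the original post: when a mix is folded down to mono, a signal summed with a 10 ms delayed copy of itself becomes a comb filter, with dead notches starting at 50 Hz and repeating every 100 Hz.

```python
# Sketch: what a 10 ms delay does to a mix once it's summed to mono.
# Adding a signal to a delayed copy of itself gives the comb-filter
# response |H(f)| = |1 + exp(-j*2*pi*f*d)|, with nulls at f = (2k+1)/(2d).
import numpy as np

delay = 0.010  # the 10 ms figure from the quote above
freqs = np.array([50.0, 100.0, 150.0, 200.0])  # Hz

# Magnitude of dry signal plus delayed copy at each test frequency.
magnitude = np.abs(1 + np.exp(-2j * np.pi * freqs * delay))

# First null lands at 1/(2 * 0.010) = 50 Hz, then every 100 Hz after:
# 50 Hz and 150 Hz cancel completely; 100 Hz and 200 Hz double in level.
for f, m in zip(freqs, magnitude):
    print(f"{f:6.0f} Hz: {m:.3f}")
```

So the "depth" trick that sounds fine in stereo can carve audible holes out of the backing instruments the moment the mix hits a mono playback system.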

With live performances, this happens by the players listening to each other. They know where the current focus is, and if it's not on them, they move into a supporting role and stay out of the way. If it's a bass solo, that may mean everyone but the drums sits out. Same thing if another soloist is coming up - they'll drop out before their entrance, so it has more impact when they come in.

If someone's not playing lead, they'll make sure what they play stays out of the way - something simple, and rhythmic rather than melodic.

It's like a painting, where the artist adds all sorts of subtle details so that the eye is led to a particular point on the canvas. Only with music it's dynamic, so the ear follows the interplay as it changes.

To me, a spacious sound stage is good, but not central to getting clarity in a mix.

Using a delay is also a way to place an instrument in the stereo field. That is, some panning plugins place a sound by adding a short delay between the two channels instead of changing their relative amplitude. Pretty cool, but I wouldn't do that manually. And in your case the delay is being used to add depth, not width.
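That kind of delay panning can be sketched in a few lines. This is a minimal illustration of the idea, not the internals of any particular plugin; the function name and the 0.5 ms figure are my own choices (natural interaural time differences top out around 0.6-0.7 ms):

```python
# Sketch of delay-based ("Haas") panning: instead of making one channel
# louder, one channel is shifted later by a fraction of a millisecond,
# and the ear localizes the sound toward the channel that arrives first.
import numpy as np

def delay_pan(mono, sample_rate, itd_ms):
    """Pan a mono signal using an inter-channel delay.

    Positive itd_ms delays the right channel, pulling the image left.
    Returns a (2, N) stereo array; both channels are zero-padded to
    the same length.
    """
    n = int(round(sample_rate * itd_ms / 1000.0))  # delay in samples
    left = np.concatenate([mono, np.zeros(n)])
    right = np.concatenate([np.zeros(n), mono])
    return np.stack([left, right])

sample_rate = 44100
click = np.zeros(100)
click[0] = 1.0  # a single-sample click for an easy-to-see test signal

stereo = delay_pan(click, sample_rate, itd_ms=0.5)
# Left channel leads right by round(44100 * 0.0005) = 22 samples.
```

Note that this has the same mono-compatibility catch as the earlier example: fold those two channels together and you've built the exact comb filter you were trying to avoid, just with a much shorter delay.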


-- David Cuny
My virtual singer development blog

Vocal control, you say. Never heard of it. Is that some kind of ProTools thing?