"Cleanup in aisle 9, stat!"

Lots of terminology is getting thrown around here. One of the items likely being misinterpreted is the term jazzmammal introduced as 'automatic limiting' when saving the 2-track mix to a stereo .wav.

In most DAW software, when you save the final output bus to a 2-track file, there is an option for 'normalize' or 'prevent clipping' or something along those lines.

What jazzmammal is describing is a normalizing process. Normalizing looks for the sample with the absolute maximum level in the mixdown, computes the scale factor that would put that one sample at 100% of the maximum digital value the .wav format allows, and then multiplies every other sample in the mixdown by that same factor.
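For anyone who thinks better in code, here's a minimal sketch of that idea (plain Python/NumPy, assuming float samples in the -1.0 to +1.0 range; the function name is just mine, not anything out of a particular DAW):

```python
import numpy as np

def peak_normalize(samples: np.ndarray, target_peak: float = 1.0) -> np.ndarray:
    """Scale EVERY sample by one factor so the loudest sample lands at target_peak."""
    peak = np.max(np.abs(samples))
    if peak == 0.0:
        return samples                # silent file: nothing to scale
    factor = target_peak / peak       # one factor for the whole mixdown
    return samples * factor           # every sample gets the same multiplication
```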

That factor can be greater than 1 (a quiet mix gets boosted) or less than 1 (a hot mix gets pulled down, which is what's being discussed in this thread).

This is entirely different from automatic 'limiting', which looks for any part of the mixdown that would clip and reduces only those parts, leaving everything else unchanged.
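A crude sketch of that, for contrast (a real limiter adds attack/release smoothing around the peaks, but the essential point is that only over-threshold samples are touched; again, the function name is just mine):

```python
def hard_limit(samples: np.ndarray, threshold: float = 1.0) -> np.ndarray:
    """Reduce only the samples that exceed the threshold; the rest pass through unchanged."""
    return np.clip(samples, -threshold, threshold)
```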

Does everyone understand the difference? Normalizing scales every single sample in the .wav file by the same amount, so that the loudest sample in the mixdown lands exactly at the maximum the .wav format allows. Limiting only changes the samples that exceed some threshold value, leaving the other samples unchanged.

Are we clear? If not, please post as such and I'll try to explain in a different way.