Yes, so maybe we should stick with analogue recording.

At least there we won't run into quantization errors.
Quantization is simply a process we have to live with when we work in the digital domain. Any recording starts with artifacts and errors to begin with, since we always work at a fixed bit depth, creating steps instead of a 100% smooth waveform. We can only get as close to smooth as the technology lets us. That being said, by your theory I assume you would also refuse to record at 24 bit and convert it later to 16 bit for a CD? The same thing happens there: plenty of quantization errors that need to be dealt with. Ah! Of course, that is why they invented dither!
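To make the dither point concrete, here is a minimal sketch (Python with NumPy; the function name and test signal are my own, not from any particular DAW or the thread). A tone quieter than half a quantization step simply vanishes when you round it to 16 bit, but adding TPDF dither before rounding keeps the tone alive, buried in a little noise:

```python
import numpy as np

SCALE = 2 ** 15  # 16-bit signed full scale: one step (LSB) is 1/SCALE

def quantize_16bit(x, dither=False):
    """Round a float signal in [-1, 1) to 16-bit steps, back to float."""
    if dither:
        # TPDF dither: two uniform noises summed, spanning +/- 1 LSB
        x = x + (np.random.uniform(-0.5, 0.5, x.shape)
                 + np.random.uniform(-0.5, 0.5, x.shape)) / SCALE
    return np.round(x * SCALE) / SCALE

fs = 44100
t = np.arange(fs) / fs
tone = (0.4 / SCALE) * np.sin(2 * np.pi * 1000 * t)  # peak is only 0.4 LSB

plain = quantize_16bit(tone)
dithered = quantize_16bit(tone, dither=True)

print("undithered output is all zeros:", not plain.any())  # True: tone is gone
print("dithered output still correlates with the tone:",
      round(float(np.corrcoef(dithered, tone)[0, 1]), 2))  # roughly 0.5
```

The dithered version is noisier on paper, but the error is no longer correlated with the signal, and that is exactly the trade dither makes.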

Just face it: any (destructive) editing process has to deal with quantization errors, so we can only rely on the quality of our equipment and software. The recordings should therefore be as good as possible, and higher levels (without clipping, of course) give quantization errors less chance to matter. A recording that is too soft starts out with more error relative to the signal for the same reason. No matter how you look at it, a good mix stands or falls with a good start: the right mic choices, well-set input levels, proper gain staging, and good converters. If all that is right, you minimize quantization errors, but they can never be prevented entirely with the formats we record in today.
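As a back-of-the-envelope sketch of why level matters (my own numbers here, using the usual ~6 dB-per-bit rule of thumb): every 6 dB of headroom you leave unused is roughly one bit of resolution thrown away, so a 16-bit recording peaking at -12 dBFS behaves more like a 14-bit one:

```python
def effective_bits(bit_depth, peak_dbfs):
    """Approximate bits actually used when peaks sit at peak_dbfs (<= 0)."""
    return bit_depth + peak_dbfs / 6.02  # ~6.02 dB of dynamic range per bit

for peak in (0, -6, -12, -24):
    print(f"16-bit recording peaking at {peak:>4} dBFS ~ "
          f"{effective_bits(16, peak):.1f} effective bits")
```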
I did not want to add all this to a thread that asked about normalization, simply because it amounts to starting a new topic. As the original question has already been answered, without all the digital dilemmas that come with digital recording and mixing, I will make this my last post, to keep myself out of an endless discussion about how to make the right recordings without artifacts and to stay on track with the context of the original post! (And if this discussion were to start somewhere else, I would likely say: record analogue, and deal with tape distortion as a new point of discussion!)
