Pay good attention to John's advice up there.

And I'll add that the real answer to your question may just be "BOTH" -- the mixfit is certainly very important here, but the Mastering should not be ignored either. Done properly, the Mastering adds to the overall sound as well; anything done to the sound at ANY stage of the game should be expected to affect it.

Kemmrich's posted sample file is not bad at all, IMO.

I wouldn't sweat the mentioned lack of dynamics on this one, give or take a few levels within the mix, such as that lead guitar, which is sticking out a bit to my ears given the chosen genre. But I would recommend that you remix the file, and on the second try, don't attempt to make it sound like a "finished" or Mastered file in that pass. Instead, go for a good (no, great) mixfit. That often happens at volume levels below what we want to hear in the finished product, something that can be taken care of later on during the Mastering pass. Digital audio seems to respond better to that sort of treatment, and it takes a bit of experimentation, PRACTICE, and education about the various plugins available to us that can help make that mixfit happen.

Compression, used properly on tracks separately, is one of the key tools, as is EQ, which is often needed to "sculpt" a particular track in order to make room for other tracks that may be playing in the same frequency range at the same time, or perhaps to emphasize or deemphasize certain things about a track. For instance, the Graphic EQ plugin on a Guitar track, combined with knowing what areas of the frequency range define certain parts of the guitar sound, is a very useful tool. Example: the "pick marks" of a Guitar usually sit somewhere around the 5KHz mark or so, and boosting or cutting there can change the entire base sound of that guitar and how it fits into a mix. Etc.
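For the tinkerers, here's a little Python/numpy sketch of what that kind of EQ move looks like under the hood -- a standard "peaking" EQ band, using the well-known RBJ Audio EQ Cookbook biquad formula, centered on that 5KHz pick-mark region. The function names and the +4dB figure are just my own illustration, not any particular plugin's settings:

```python
import numpy as np

def peaking_eq_coeffs(fs, f0, gain_db, q=1.0):
    """Biquad peaking-EQ coefficients (RBJ Audio EQ Cookbook form)."""
    a_lin = 10.0 ** (gain_db / 40.0)            # square root of linear gain
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1.0 + alpha * a_lin, -2.0 * np.cos(w0), 1.0 - alpha * a_lin])
    a = np.array([1.0 + alpha / a_lin, -2.0 * np.cos(w0), 1.0 - alpha / a_lin])
    return b / a[0], a / a[0]

def gain_db_at(b, a, fs, f):
    """Magnitude response of the biquad at frequency f, in dB."""
    zinv = np.exp(-2j * np.pi * f / fs)
    num = b[0] + b[1] * zinv + b[2] * zinv ** 2
    den = a[0] + a[1] * zinv + a[2] * zinv ** 2
    return 20.0 * np.log10(abs(num / den))

# boost the ~5KHz "pick mark" region of a guitar track by 4 dB
b, a = peaking_eq_coeffs(fs=44100, f0=5000.0, gain_db=4.0)
```

Swing gain_db positive to bring the pick attack forward, negative to tuck the guitar back behind the other tracks -- same tool, opposite "sculpting" moves.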

Speaking of the Audio Compressor, it is also a good idea to Compress the vocal tracks as separate entities in the mix. I greatly prefer an Optical Compressor plugin emulator for vocals, due to its speed characteristics. This is how we can showcase a singing performance where the singer does not obviously have to "belt" in order to be heard over top of the mix. Still some of the very best examples of that are the old late-60s/early-70s recordings made by The Carpenters. An Optical Compressor can place a "small" voice out in front of a rather huge band, if you follow what I'm trying to say.

Use of Compression is an Art in itself, one that only comes together as the aspiring engineer works to study and UNDERSTAND what the Audio Compressor does that is good -- AND what it can do that is not so good. It is essential to learn to listen for "pumping" and the like, which basically means you have set the darn thing to work TOO hard.
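To put some numbers on what the Compressor is actually doing, here's a bare-bones Python/numpy sketch of the two halves of the job: the static gain law (threshold and ratio) and a simple attack/release envelope follower. The names and settings here are mine, purely for illustration -- but note the comment in the follower: crank the release way down and the gain starts chasing the waveform itself, which is exactly the "pumping" you learn to hear.

```python
import numpy as np

def compressor_gain_db(level_db, threshold_db=-18.0, ratio=4.0):
    """Static gain law: above threshold, output rises only 1 dB
    for every `ratio` dB of input.  Returns gain reduction (<= 0 dB)."""
    over = level_db - threshold_db
    if over <= 0.0:
        return 0.0
    return over / ratio - over

def envelope(x, fs, attack_ms=5.0, release_ms=100.0):
    """One-pole peak follower that would drive the gain computer.
    With release_ms set far too short, the detected level rides
    individual waveform cycles -- audible as "pumping"."""
    atk = np.exp(-1.0 / (fs * attack_ms * 1e-3))
    rel = np.exp(-1.0 / (fs * release_ms * 1e-3))
    env, e = np.empty(len(x)), 0.0
    for n, v in enumerate(np.abs(x)):
        c = atk if v > e else rel   # rise with attack, fall with release
        e = c * e + (1.0 - c) * v
        env[n] = e
    return env
```

So with a -18dB threshold and a 4:1 ratio, a vocal peak hitting -6dB (12 dB over) gets knocked down by 9 dB -- which is how that "small" voice stays planted in front of the band.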

As far as EQ goes, study of the "Fletcher-Munson Curve" and the reason it exists is absolutely essential, along with an awareness of those critical midrange frequencies that our ears hear far more readily while the low and high extremes are suppressed.

Mastering Pass? "6dB per octave" is perhaps the first bit of knowledge to explore, right after understanding what dB is really all about: a dB figure MUST be cited as referenced to something, because a standalone number is meaningless. From there, learn why a 6dB-per-octave rolloff is "musical" sounding in and of itself, and why that is the case.
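Both points fit in a few lines of Python/numpy, if a sketch helps make them concrete (the function names are just mine): dB is always a ratio against a stated reference, and a simple first-order rolloff is where that 6dB-per-octave figure comes from.

```python
import numpy as np

def db(value, reference):
    """dB is ALWAYS a ratio against a stated reference
    (dBFS, dBu, dB SPL...) -- never a bare, standalone number."""
    return 20.0 * np.log10(value / reference)

def first_order_lowpass_mag(f, fc):
    """Magnitude of an ideal one-pole lowpass with cutoff fc.
    Well above fc, doubling the frequency halves the amplitude:
    the classic 6 dB per octave rolloff."""
    return 1.0 / np.sqrt(1.0 + (f / fc) ** 2)

# half the amplitude is roughly -6 dB, whatever the reference:
half = db(0.5, 1.0)   # about -6.02
```

Run that lowpass at one frequency and then an octave up, take the dB between them, and you get right about -6 -- one gentle, even step per octave, which is a big part of why it reads as "musical" to the ear.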

That's a really good recording you've got there, nothing to be ashamed of: good song values, solid tracking, and a strong performance.

Couple what you already have going on with a bit more study and effort to understand these engineering terms and the like. Practice using them -- that is the only way to gain the kind of experience necessary to have your product compete with the ones you are comparing it to -- and you will have it ALL.

One exercise you can start on immediately is to direct A/B compare that song's playback against a reference recording of the same style and genre that does sound like what you are after, and start trying to define the differences you hear in engineering terms rather than descriptors like "warm" (really? what temperature? *g*) or "dark" (what's the matter, can't you SEE it?), and to work out how those kinds of descriptors may translate into actual values about the audio. For example, most people (not all), when they say "dark" to describe an audio event, are likely trying to say that there is a lack of higher EQ frequencies in the file -- or perhaps the highs are there, but there is just way too much LOW end on a track or two, and using the EQ to shave the low end back rather than boosting the HIGHs may be the better answer. To find out, try it both ways and listen to the differences.
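If you want to put actual numbers on that A/B exercise, here's a rough Python/numpy sketch that measures how much of a file's energy sits in a given frequency band, relative to the whole. The function name and the band edges are just mine for illustration -- run it on your mix and on the reference and compare:

```python
import numpy as np

def band_energy_db(x, fs, lo_hz, hi_hz):
    """Energy in the [lo_hz, hi_hz) band relative to the whole
    signal, in dB.  x is a mono array of samples at rate fs."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    band = spectrum[(freqs >= lo_hz) & (freqs < hi_hz)].sum()
    return 10.0 * np.log10(band / spectrum.sum())

# e.g. check the 5-15KHz region of your mix vs. the reference:
# if your mix reads several dB lower up there, THAT is the "dark"
# you were hearing -- now expressed as a number you can act on
```

Then try the fix both ways, as above: shave the LOW band back, or boost the HIGHs, and re-measure as well as re-listen.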

Over the years I've noticed that a lot of good songwriters and performers just don't seem to want to deal with the engineering aspects of this craft; many search for one-button automated answers rather than spend the couple of months or so it would take to actually learn the ins and outs, the terminology, the care and feeding and such that being a recording engineer demands. Maybe we are built that way, but in my experience there are no shortcuts here, and each aspect that you work on learning, coupled with deep desire, will indeed be what is needed.


--Mac