The all-important Mastering

Compare notes on how to get the most from Scope devices, etc.

Moderators: valis, garyb

User avatar
at0m
Posts: 4743
Joined: Sat Jun 30, 2001 4:00 pm
Location: Bubble Metropolis
Contact:

Post by at0m »

You're a freak :grin:

And look how nasty it all started :lol:

Great job, Nestor. I hope it's permitted to quote parts like this; I guess in Belgium it would be, given the educational purpose of the website :wink:

at0m.
User avatar
Nestor
Posts: 6676
Joined: Tue Mar 27, 2001 4:00 pm
Location: Fourth Dimension Paradise, Cloud Nine!

Post by Nestor »

(This can only be good for the author, because I'm posting just some of it, of course. For the rest you have to buy the book.)

SIGNAL RESOLUTION
Some resolution is lost every time the level of a signal is manipulated in the digital domain. For example, if EQ is added the signal level will probably also increase, so the overall level is then scaled down so that it doesn't take up any more space than the original number of bits. The practical outcome of this is that low-level detail may suffer if a signal goes through several stages of digital processing, which could cause reverb tails to become less smooth or the stereo image to become blurred.

This problem may be overcome by initially working with more bits than necessary. This is why 24-bit mastering is sometimes used in CD production, even though CDs are only 16 bit. In 24-bit mastering, when all of the processing has been completed there should still be more than ample resolution, and by applying dither when the signal is finally reduced to 16 bits (to produce the CD master) low-level details will be preserved much more accurately. At the time of writing, there are few 24-bit DAT recorders available, although it's possible to master onto two tracks of a 20-bit ADAT or something similar and then transfer that data to a computer over a suitable interface.

Dither effectively adds a very low level of noise, so the signal-to-noise ratio suffers slightly while low-level distortion is reduced. A more sophisticated way of implementing dither is the practice of noise shaping, which is mathematically designed so that the components of the additional dither noise appear at the high end of the audio spectrum, where the human ear is relatively insensitive. Many software editing packages include a dither function as standard.
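To make the word-length reduction concrete, here is a minimal sketch (assuming numpy; the function name and parameter values are my own, not from the book) of TPDF dither applied before truncating a float signal to 16 bits:

```python
import numpy as np

def dither_to_16bit(x, rng=np.random.default_rng(0)):
    """Reduce a float signal in [-1, 1) to 16-bit samples with TPDF dither.

    Adding triangular noise of about +/-1 LSB before rounding decorrelates
    the quantisation error from the signal: the noise floor rises slightly
    (the signal-to-noise trade-off described above) but low-level
    distortion is removed.
    """
    lsb = 1.0 / 32768.0                                  # one 16-bit step
    tpdf = (rng.random(x.shape) - rng.random(x.shape)) * lsb
    y = np.round((x + tpdf) * 32767.0)
    return np.clip(y, -32768, 32767).astype(np.int16)

# A -90 dBFS sine sits at roughly one 16-bit step; plain truncation would
# mangle it badly, but with dither it is still represented on average.
t = np.arange(48000) / 48000.0
quiet = (10 ** (-90 / 20)) * np.sin(2 * np.pi * 440 * t)
q = dither_to_16bit(quiet)
```

Noise shaping would go one step further and filter the quantisation error so it piles up above ~15 kHz, but the plain TPDF version above is the baseline most editors apply.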

NOISE
Good recordings shouldn't have too much noise in the first place, but recordings made in home studios will often have some audible noise on them during pauses or quiet passages because of the use of budget effects, noisy synths and problems with ground-loop hums. This noise can be reduced greatly by making sure that each track is completely silent until the music starts, and this is something that can be easily accomplished with a hard-disk editor. If you're splicing analogue two-track tape, this simply means that you have to make sure that the splice occurs just before the song starts. The best way to identify this point is by manually rocking the tape over the heads and marking the back of the tape with a wax pencil to show you where to cut.

Eliminating noise before the start of a song is a great deal more straightforward on a hard-disk editor, because it's possible to see the waveform of the first sound in the song, and it's a simple matter to select the appropriate area and then use the Silence command to replace it with digital silence. At the end of the song, it's also possible to perform a digital fade-out at the tail end of the natural decay of the last sound, so that the song fades into true silence rather than a low-level hiss. This topping and tailing was covered in chapter two.
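The topping-and-tailing operation is easy to sketch in code (a minimal numpy illustration; `start` and `fade_len` stand in for the points you would pick off the waveform display, and the function name is mine):

```python
import numpy as np

def top_and_tail(audio, start, fade_len):
    """Silence everything before `start`, then fade the tail to true silence.

    `start` is the sample where the first real sound begins; `fade_len`
    is the length of the linear fade-out applied over the natural decay
    of the last note.
    """
    out = audio.copy()
    out[:start] = 0.0                                    # the Silence command
    out[-fade_len:] *= np.linspace(1.0, 0.0, fade_len)   # fade into digital silence
    return out

x = 0.01 * np.ones(1000)
x[:100] = 0.001                       # low-level hiss before the song starts
y = top_and_tail(x, start=100, fade_len=200)
```

The result starts in exact digital silence and decays to an exact zero at the end, rather than dropping abruptly into hiss.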

However, the noise is more difficult to eliminate if it occurs during the track, and it's usually necessary to install specialised digital-noise-removing software to tackle a prominent noise problem effectively. Less severe noise can be dealt with in the analogue domain with a single-ended noise-reduction processor, although I wouldn't recommend using one of these for serious mastering because they often produce audible side-effects. These units work by monitoring the level and frequency content of the input signal. When a low-level signal with little or no high-frequency content is detected, a variable-frequency low-pass filter moves down the audio band to filter out the noise. The filter obviously has some effect on the wanted signal, and so the trick is to configure the unit so that it only has an effect on very low-level signals. The bypass switch can be toggled in and out so that you can hear if the sound quality is suffering, and you can then adjust the threshold level accordingly.
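The single-ended principle — an envelope follower steering a low-pass filter — can be sketched digitally in a few lines. This is a crude illustration, not a model of any real unit; the threshold and cutoff values are arbitrary assumptions:

```python
import numpy as np

def single_ended_nr(x, sr=44100, threshold=0.01, lo=2000.0, hi=20000.0):
    """Crude single-ended noise reduction: a one-pole low-pass filter whose
    cutoff drops towards `lo` Hz whenever the signal envelope falls below
    `threshold`, dulling the hiss in quiet passages while leaving loud
    passages (cutoff at `hi` Hz) essentially untouched.
    """
    y = np.zeros_like(x)
    env = 0.0
    state = 0.0
    for n, s in enumerate(x):
        env = max(abs(s), env * 0.999)          # simple peak envelope follower
        cutoff = hi if env > threshold else lo  # open the filter on loud signal
        a = 1.0 - np.exp(-2.0 * np.pi * cutoff / sr)
        state += a * (s - state)                # one-pole low-pass
        y[n] = state
    return y
```

Toggling the processing in and out (the "bypass switch" test above) and adjusting `threshold` is exactly how you would tune a hardware unit of this type.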

The operation of most software noise-removal systems relies on a bank of filters, each of which covers a very narrow part of the frequency spectrum. When the signal in a particular frequency band falls below the noise threshold, an expander kicks in to mute the signal. This happens over many independent frequency bands, and so it's quite possible to eliminate the noise with the expander in one part of the spectrum, leaving the other parts unaffected.

The most basic noise-reduction systems analyse and "learn" a section of recording which consists entirely of noise immediately before the start of a song. The system then refers to this noise spectrum to set the correct threshold for each of the multiple expander bands (although the user can define these parameters by hand if necessary). In practice, such systems are good for between 5dB and 8dB of noise reduction before any serious side-effects become evident, and although this doesn't look like much on paper it can make a huge difference to the subjective sound. Note, however, that overprocessing can cause the noise to take on a ringing or chirping character as frequency bands in different parts of the audio spectrum turn on and off.
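The "learn the noise, then expand per band" idea can be sketched as a frame-based spectral gate (a simplified numpy illustration, assuming rectangular non-overlapping FFT frames and a hard gain floor; real products use overlapping windows and smoother expansion curves):

```python
import numpy as np

def spectral_gate(x, noise, frame=512, reduce_db=12.0):
    """Multi-band expander sketch: "learn" a per-band threshold from a
    noise-only excerpt, then duck any frame whose magnitude in a band
    falls below that threshold, leaving louder bands untouched.
    """
    # Learn the noise: average magnitude spectrum of the noise excerpt
    nframes = noise[:len(noise) // frame * frame].reshape(-1, frame)
    thresh = 2.0 * np.abs(np.fft.rfft(nframes, axis=1)).mean(axis=0)
    gain_floor = 10 ** (-reduce_db / 20.0)     # how far quiet bands are ducked
    out = np.zeros(len(x) // frame * frame)
    for i in range(0, len(out), frame):
        spec = np.fft.rfft(x[i:i + frame])
        gate = np.where(np.abs(spec) < thresh, gain_floor, 1.0)
        out[i:i + frame] = np.fft.irfft(spec * gate, n=frame)
    return out
```

Pushing `reduce_db` much past the 5-8 dB range mentioned above is what produces the ringing or chirping artefacts, as individual bands switch on and off independently.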

The more advanced noise-removal programs continuously evaluate the noise in the presence of signal. This is better in situations where the level and character of the noise changes during a mix, which often happens if faders are being adjusted and tracks muted. These programs generally allow a greater improvement in the signal-to-noise ratio before side-effects become evident.
*MUSIC* The most Powerful Language in the world! *INDEED*
User avatar
Nestor
Posts: 6676
Joined: Tue Mar 27, 2001 4:00 pm
Location: Fourth Dimension Paradise, Cloud Nine!

Post by Nestor »

PRACTICAL MASTERING TECHNIQUES
It's important to understand that there's a huge difference between what a professional engineer can achieve in a top commercial mastering suite and what the average project studio owner can do for themselves. Even so, as more computer-based mastering tools become available, it's quite possible for home studio users to achieve some very impressive results with relatively inexpensive equipment, as long as they have reasonably accurate monitoring equipment and a discerning ear.

Some people think that mastering simply means compressing everything to make it sound as loud as possible, but although it's true that compression can play an important role in mastering it's only one piece of the puzzle. The most important tool by far is the EAR of the engineer, because to master successfully each and every project must be approached differently. There is no standard blanket treatment that can be applied to all material in order to make it sound more produced.
*MUSIC* The most Powerful Language in the world! *INDEED*
User avatar
Nestor
Posts: 6676
Joined: Tue Mar 27, 2001 4:00 pm
Location: Fourth Dimension Paradise, Cloud Nine!

Post by Nestor »

TRICKY EDITS
When editing classical music, or some other style of music with no obvious rhythmic edit points that act as landmarks in the waveform display, the best way to work is to mark up the regions on the fly, place the regions in order within the software's playlist, and then loop around each edit point and nudge the end of one region or the start of the next until the timing is right. Only then should you worry about trying to disguise the edit.

As with the previous example involving two slightly different vocal performances, you may need to nudge the whole edit backwards or forwards in time until you find a point that produces a smooth join, although you may need to perform a short crossfade to smooth things over properly. When moving edit points like this, it's a good idea to record a few more seconds of audio than you need at either end of each section.

If the edit doesn't coincide with a strong beat, you may find that there's an audible glitch at the edit point, and there is a myth that making the edit at waveform zero crossing points will guarantee no glitching - it won't. You'll only avoid a glitch if the waveform on one side of the edit flows smoothly into the waveform at the other side, and Figure 4.2 shows that, even if the waveforms on either side of the edit are identical, there are two possible scenarios, one which will cause a glitch and one which won't. In the first example, the waveforms on either side of the edit are in phase, so the transition will be smooth; in the second, however, they are out of phase, resulting in a discontinuity at the edit point, causing a click.
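The zero-crossing myth is easy to demonstrate numerically (a minimal numpy sketch of my own, standing in for Figure 4.2): both splices below land exactly on a zero crossing, yet one still has a kink at the join because the waveforms meet out of phase.

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
a = np.sin(2 * np.pi * 100 * t)

# One cycle of 100 Hz is 441 samples, so sample 441 is an exact zero crossing.
cut = 441
b_in  = np.concatenate([a[:cut],  a[cut:]])   # in phase: both sides rising
b_out = np.concatenate([a[:cut], -a[cut:]])   # out of phase: slope reverses

# Both joins sit at zero amplitude, but the out-of-phase one still clicks:
# the waveform's slope flips direction, a discontinuity in the derivative.
kink_in  = np.abs(np.diff(b_in, 2))[cut - 1]
kink_out = np.abs(np.diff(b_out, 2))[cut - 1]
```

`kink_in` is essentially zero while `kink_out` is large, even though both edits are at zero crossings - what matters is that the waveform flows smoothly across the join.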
*MUSIC* The most Powerful Language in the world! *INDEED*
User avatar
Nestor
Posts: 6676
Joined: Tue Mar 27, 2001 4:00 pm
Location: Fourth Dimension Paradise, Cloud Nine!

Post by Nestor »

CROSSFADE EDITS
The usual way of solving an awkward edit is to use a crossfade between the two regions, but again these aren't foolproof. A crossfade involves fading one region out after the edit point while at the same time fading in the second region before the edit point (which is itself another reason for recording a few seconds more material at both ends of each section), but the problem with crossfades is that they are just fades that occur between two sounds, and so both sounds are audible in changing proportions for the duration of the crossfade, with the balance being equal in the middle of the crossfade. Unless the sounds are absolutely identical, and in phase, this may cause a double-tracking or chorus-like effect, which is one reason to keep crossfades as short as possible. Furthermore, a crossfade is less likely to cause level changes if you can arrange things so that your edit points occur at zero crossing points, and the waveforms on either side of the edit are in phase.

Try to avoid long crossfades over percussive beats, which can produce a FLAMMING effect if the timing of the two beats isn't exactly right. As a rule, a crossfade of 20ms or so is long enough to prevent clicks, although a longer one may be required to smooth out an awkward transition.

In those situations where the material on either side of the crossfade is well matched (for example, if the regions are from two takes of the same song and mixed similarly), it's important to keep the fades as short as possible while still making the edit smooth. If the material is completely different on both sides of an edit (two different pieces of music, for example, or a decaying last note followed by a burst of spontaneous applause), you can use any length of crossfade necessary; because the waveforms aren't correlated in any way there won't be any phase cancellation.
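A basic crossfade splice can be sketched as follows (my own minimal numpy illustration; the 20ms default is the rule-of-thumb minimum from the text above):

```python
import numpy as np

def crossfade(a, b, sr=44100, ms=20.0):
    """Splice region `a` into region `b` with a linear crossfade of `ms`
    milliseconds: the tail of `a` fades out while the head of `b` fades
    in over the same span, so both sounds are audible in changing
    proportions for the duration of the fade.
    """
    n = int(sr * ms / 1000.0)
    fade = np.linspace(0.0, 1.0, n)
    overlap = a[-n:] * (1.0 - fade) + b[:n] * fade
    return np.concatenate([a[:-n], overlap, b[n:]])

a = np.ones(2000)
b = np.zeros(2000)
y = crossfade(a, b)
```

The linear (equal-gain) fades used here sum correctly for the well-matched, correlated case; for completely uncorrelated material an equal-power curve is often preferred instead, since uncorrelated signals add by power rather than amplitude.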

_________________
Music is the most powerful language in the world! :smile:

[ This Message was edited by: Nestor on 2002-04-11 11:41 ]