Mixing with 2-bus processing

Mixing with 2-bus processing – for

As you already mentioned, mixing with bus processing allows you to work towards your “final sound” in one go. This is particularly useful when 2-bus processing is used for more than mastering (preparing a mix for a distribution format/medium). The best-known example is pumping compression. It’s not so easy to put together a mix where pumping 2-bus compression is a feature element if you’re not actually hearing it when you’re mixing! 

On the more extreme end, other processing tools are appropriate for inserting on the 2-bus while mixing. Personally, I’ve used things like buffer stutter effects, stereo width manipulation, filters, and even distortion. Mind you, these (even the stereo width manipulation) were not for mastering – they were “effects”. They were usually automated, and only enabled for specific sections of a song. 

On the more subtle end, I’ve heard of mix engineers “mixing into” a 2-bus compressor. Not necessarily to get an obvious “pumping” effect, but to gel the mix together. Apparently doing this allows the mixer to get away with using less channel compression. 
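
If you’re curious what that “pumping” actually does to the gain, here’s a minimal Python sketch of a feed-forward bus compressor. This is my own toy illustration, not any particular plugin – the threshold, ratio and timing numbers are made up. The fast attack and slow release are what produce the pump:

    import numpy as np

    def compress(signal, sr, threshold_db=-12.0, ratio=4.0,
                 attack_ms=5.0, release_ms=200.0):
        # Toy peak compressor; expects a float numpy array.
        atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
        rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
        env = 0.0
        out = np.empty_like(signal)
        for i, x in enumerate(signal):
            # Envelope follower: loud hits pull the envelope up quickly,
            # and the slow recovery is the audible "pump".
            coeff = atk if abs(x) > env else rel
            env = coeff * env + (1.0 - coeff) * abs(x)
            level_db = 20.0 * np.log10(max(env, 1e-9))
            over = max(0.0, level_db - threshold_db)
            out[i] = x * 10.0 ** (-over * (1.0 - 1.0 / ratio) / 20.0)
        return out

Run a whole drum bus through something like this with a low threshold and you’ll hear everything duck on each kick and swell back in between – exactly the effect you want to be hearing while you balance the mix, not discovering afterwards.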


Mixing with 2-bus processing – against

One of the important reasons not to mix into a 2-bus compressor is that it can be easy to start chasing your tail. It can make mixing difficult because adjusting one instrument can radically change the behaviour of the other instruments. In the simplest case, turning an instrument up will cause the other instruments to drop in level whenever that instrument is playing. If you’re not watching for it, you might later try to turn those other instruments up and wonder why the rest of the mix is dramatically rearranging itself with your every move. It can also make it difficult to compensate for (or preserve) complex relationships between instruments.

Another trap is using 2-bus processing to compensate for mix problems, when a more appropriate tool would be processing on individual channels. The obvious example is EQ. If there’s not enough bass (for example) in your mix, you might adjust the EQ on the 2-bus as a shortcut to adjusting the kick and bassline individually. By taking the shortcut, you’re adjusting the frequency spectrum of the kick and bassline in the same way, by the same amount (when it might be better to make more tailored adjustments). You’d also be boosting the bass of every other instrument in the mix. Unintended consequences may appear later, and you’ll be left scratching your head.

Yet another trap is that 2-bus processing can be confusing. Again with the EQ example – if you’ve boosted the bass on the 2-bus, you might be working on a background part and wonder why its frequency balance is skewed when there’s no EQ (or completely different EQ) on the channel itself. With 2-bus compression, instruments will sound different solo’d than they do in the mix – sometimes radically so.

The other issue is the confusion between 2-bus processing and mastering. As I’ve mentioned before, 2-bus processing is what happens when you insert plugins on the stereo pair that goes out to your speakers. Mastering is what happens when you prepare a stereo mixdown for a distribution format or medium. If you’re inexperienced, it can be easy to try to do both at once, when they really are completely different tasks (that happen to use similar tools). In short – mixing is the process of making the individual tracks work well together, and mastering is the process of making the overall sound work well in context (often next to other songs). By using mastering tools like EQ or compression on the 2-bus while mixing, it can be easy to fall into the trap of making mastering adjustments before the mix is finished. Of course, these mastering adjustments are undermined when you go back and change a mix element, which means you have to go back and change the mastering adjustments, and so on and so on. 


Mixing without 2-bus processing – for

This is the “traditional” approach. Without 2-bus processing, the mix is much more controllable, and allows much more precision in making mix decisions. You hear the individual tracks as they are. The EQ and compression settings on each track actually reflect what you hear. Adjusting the levels or EQ of a single instrument doesn’t magically adjust the levels or EQ of other instruments. You don’t have to worry about mastering taking your focus away from getting a good mix. You can master when the mix is done, without worrying about later mix decisions messing up your mastering adjustments. Soloing instruments is a good way to “zoom in” on a sound, while still having the confidence that the sound won’t change when the other instruments are brought in.


Mixing without 2-bus processing – against

Of course, the downside to mixing without 2-bus processing is that you can’t hear the effect of any processing you might be planning to use. As I wrote earlier, this is particularly important where 2-bus processing (such as pumping compression) is a significant part of the character of the overall sound. Unless you’re quite experienced, it can be difficult to balance the instruments to hit the compressor in just the right way. It’s even more difficult if you can’t even hear the compression because it’s not plugged in! Regarding EQ, sometimes drastic EQ adjustments in mastering can reveal unintended sound elements. Mixing without such EQ, it can be difficult to predict what mastering might bring out (or suppress). Arguably though, this can be mitigated somewhat by getting your mix close to your target frequency spectrum in the first place. Close enough only – don’t get surgical – leave that for mastering!


What I do

As a general rule, I mix without any EQ or compression on the 2-bus. Pumping 2-bus compression is not something I’m particularly fond of (for my own work). I prefer dense multilayered productions, which end up with very subtle and precarious balances between instruments. Mixing into a compressor would make this almost impossible for me. Similarly, with EQ I try to mix as close to the final spectral balance as I can in the first place without resorting to global EQ. EQ fine tuning is done during mastering, and I very rarely need to make an adjustment greater than +/-6dB. 

On the other hand, I don’t shy away from using 2-bus processing for “special effects”. I’ve used filters, stereo width manipulation, buffer stutter effects, even distortion and bitcrushing. These are “special” though, and usually only engaged for specific sections of a song. They’re usually automated too, for extra fun.

-Kim.

EQing reverb

Many reverbs have some onboard EQ. Depending on your host, you may also be able to insert a separate EQ plugin before or after the reverb. 

Generally, you can think about EQing reverb as three bands (there’s a rough code sketch at the end of this post):

Lows: Reduce or cut the lows to cut down on mud and boom. Getting surgical here can sometimes really help clean up a mix. Increase the lows for special effects – running a kick drum through a low-heavy reverb will give you a tasty huge BOOOOOM!!! 

Mids: Reducing (dipping) the mids of a reverb signal can thin it out, sometimes helping it fit in the mix. It can also help give you a very “hi-fi” sound. Boosting the mids (relatively) can increase thickness and body. 

Highs: Reducing the highs can go a long way to cleaning up annoying sibilance in vocals (“s” and “t”) if the reverb is catching too much of it (also think about using a de-esser on the vocal channel itself). Reducing or cutting the highs can also make the reverb less noticeable overall, which may sometimes help it sit in the mix better. Boosting the highs can work well when you want to emphasize the reverb (make it noticeable) without muddying the mix.

You’ll notice that I’ve used terms like “sit better in the mix”. This is an artistic judgment you have to make (if you cut everything to make the reverb sit “best” in the mix, you won’t have anything left!), and you’ll have to make it in the context of the mix. 

You’ll also notice that I’ve given advice for reducing AND boosting different frequency areas. There’s no simple advice like “Doing X will always improve your mix” (even the famous 500Hz dip!). Techniques will have certain audible results, but you have to decide if those results are appropriate for your mix, for your music. 
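
If you want to experiment with this outside your DAW, here’s a rough Python sketch of the lows and highs (assuming scipy is available – the 250Hz and 6kHz corners are just made-up starting points, and a mid dip would need a peaking filter on top of this):

    from scipy import signal

    def eq_reverb_return(wet, sr, low_cut_hz=250.0, high_cut_hz=6000.0):
        # Lows: gentle high-pass to clear out mud and boom.
        sos_hp = signal.butter(2, low_cut_hz, btype="highpass", fs=sr, output="sos")
        # Highs: gentle low-pass so the tail draws less attention.
        sos_lp = signal.butter(2, high_cut_hz, btype="lowpass", fs=sr, output="sos")
        return signal.sosfilt(sos_lp, signal.sosfilt(sos_hp, wet))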

-Kim.

Mixing Synths

Without getting too deep into ergonomics and workflow, it’s often easy to design sounds that blend well together when the parts come from the same synth (and sometimes from different synths by the same company/designer). This is not only because they might share the same oscillators or filters, but also because the user interface encourages a similar approach to sound design.

On the flip side, synths with different user interfaces may encourage a different approach to sound design. Of course it’s not just the interface – other factors include different oscillator and filter algorithms, as well as different envelope curves, keyboard/velocity scaling, portamento curves, etc.

If you don’t have a precise idea of the sound you want to design before you design it, you’ll find yourself more influenced by the affordances of the instrument. In other words, if you think “I think a snappy bass might work here”, you’ll go with what the instrument guides you to – the type of sound that the instrument makes easy to design, and the type of sound that sounds good quickly on that instrument. On the other hand, if you’re very clear about the exact sound you want (ie, you can hear it in your head) AND you know your instruments well enough to know how to get it, then you’ll “fight harder” to get what you want, but the end result will work better in a diverse mix. 

If you’re working on a project with several different instruments and you’re finding a part isn’t quite blending with the rest, try this: 

1) Pull the part’s volume right down to silence. Don’t use the mute button – actually pull down the channel fader. 

2) Listen to your mix without the part, and IMAGINE the part. This is sound design, so don’t just imagine the notes or the type of sound (composition stuff – I’m assuming here you’ve already got that sorted). Really imagine how it sounds in the mix – frequency spectrum balance, dynamic range, depth, height (seriously!), interaction with other instruments, etc. This isn’t easy, and you’ll need to practise in order to get good at it. 

3) SLOWLY raise the channel fader of the offending part. Stop as soon as it sounds wrong (or rather, as soon as you can hear the wrongness). Mentally compare the sound you’re hearing with the sound you’re expecting. Try to pinpoint exactly what is wrong with the sound, and what changes need to be made. Sometimes it’s just one aspect of the sound, often it’s a combination (which is why it’s difficult to get the sound to sit in the mix if you don’t know exactly what you’re aiming for).

4) Fix the sound. This is where it really pays to know your tools. Sometimes it’s adjusting the synth parameters. Sometimes it’s different EQ, compression or other effects. If the sound is very wrong and you’ve used a lot of channel effects (such as EQ and compression), remove them. Clear the channel and start again.

If you still can’t get it to work, you might need to go back to the composition. What are you trying to achieve with that part? Perhaps the rhythm isn’t working well against the other parts. Perhaps you need to transpose the part up or down by an octave (or less than an octave!). Maybe your imagination has failed you and the music actually needs a different type of sound, a different instrument. 

Sometimes the music is simply better off without that part. Don’t try to shoe-horn in a sound just because you think it’s cool – every part in the music has to support the music. Ask yourself – what is the music trying to do here? How is this part supporting it? These are difficult questions to ask, and even more difficult to answer. With practice you’ll get better at it, and your music will thank you for it.

-Kim.

How to push sounds to the background

To push sounds further into the background, you don’t need any magic plugins, just an understanding of psychoacoustics: 

1) Less bass. Much less bass. Natural sounds that are far away will have very little bass and low mids (unless they’re truly huge sounds in movies), because lower frequencies require much more power to travel. The reverse of this is the proximity effect – where sounds very close to your ear (whispering I hope!) or very close to the microphone tend to have much stronger lower frequencies. To roll off the bass, try a low-strength (1-pole or similar) high pass filter. Start low and shift it up until you no longer “feel” the sound. 

2) Less treble. Less sparkle, less definition. Natural sounds that are far away will have reduced higher frequencies due to absorption by air and other materials. Distant sounds also have much less definition and clarity. Often a low-strength (1-pole or similar) low pass filter (with no resonance!) will work well.

3) Reverb, modulation. As above, distant sounds tend to have much less definition and clarity. You should do whatever’s appropriate in the mix to “unfocus” the sound. Sometimes more reverb will do it. Often a very short reverb will work best. It doesn’t have to be a realistic room – just something to diffuse the sound. Sometimes chorus or even subtle phaser will work better. It depends on the mix – you’re trying to reduce the clarity of the sound.

4) Collapse to mono. Distant sounds do not wrap around the listener’s head. They’re often not “wide” (unless they’re truly huge sounds in movies). Sometimes a full mono collapse isn’t appropriate though – it depends on the sound. You might want to retain a little width in atmospheric sounds (like pads). Sometimes leaving a little width will improve the diffusion in the sound (when a full mono collapse might make it more focussed). There’s a small code sketch of points 1, 2 and 4 at the end of this post.

5) Pan centre. This works for two reasons. Firstly, sounds that are panned to the side tend to “creep up” closer to the listener. Imagine the soundstage in front of you as a semicircle – the sounds on the side can (all things being equal) actually get closer to the “front” than the sounds in the centre. Also, panning centre will hide the background sounds behind other foreground elements that are typically also panned centre (such as lead vocal and snare, depending on your genre). This will make it more difficult for the listener to focus on the background.

6) Compose it in the background. To support the above, you should actually compose the parts as background parts. Again, this means understanding the application of psychoacoustics to composition. As listeners, we tend to focus on sounds that are: 
– louder 
– higher pitched 
– moving quickly 
– not repeating in short cycles (EDM – I’m looking at you!) 
– phrased (ie. not constant) 

Likewise, background parts will be the opposite: 
– quieter 
– lower pitched 
– moving slowly 
– repeating patterns 
– unphrased 

Remember, background is only ever a relative measure. If your background part isn’t getting far enough into the background, it could be that you don’t have anything far enough in the foreground. Just like everything else in music – if everything is background, nothing is background.

Of course, this is all fundamental composition technique. Believe it or not, we can all learn from the classics!
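
For the signal side of all this, here’s a minimal Python sketch of points 1, 2 and 4 – my own illustration, with one-pole filters for the gentle roll-offs and a mid/side trick for the width (the cutoffs depend entirely on your mix):

    import numpy as np

    def one_pole_lowpass(x, sr, cutoff_hz):
        # Gentle (6dB/oct) treble roll-off: less sparkle, less definition.
        a = np.exp(-2.0 * np.pi * cutoff_hz / sr)
        y = np.zeros_like(x)
        y[0] = (1.0 - a) * x[0]
        for i in range(1, len(x)):
            y[i] = (1.0 - a) * x[i] + a * y[i - 1]
        return y

    def one_pole_highpass(x, sr, cutoff_hz):
        # Gentle bass roll-off: high-pass = input minus its low-passed copy.
        return x - one_pole_lowpass(x, sr, cutoff_hz)

    def narrow(stereo, width=0.2):
        # Collapse toward mono: width=0 is full mono, width=1 leaves it as-is.
        mid = 0.5 * (stereo[:, 0] + stereo[:, 1])
        side = 0.5 * (stereo[:, 0] - stereo[:, 1])
        return np.stack([mid + width * side, mid - width * side], axis=1)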

-Kim.

Pre-fader versus post-fader

Without going too deep into mixer topologies, the channel fader sets the gain (you might also think of it as the level or volume, though it’s not quite the same thing) of the sound going into the mix bus (also called the 2-bus or the master channel). Placing effects before the fader (pre-fader) means that those effects will “hear” the same level, no matter what the fader is set to. Placing effects after the fader (post-fader) means that those effects will “hear” a level that depends on what the fader is set to.

This is particularly noticeable with effects such as compression, which respond differently depending on the level of the sound. If you set up your compressor pre-fader, it will behave the same no matter what the fader is set to. On the other hand, if you set up your compressor post-fader, higher fader gain will result in more compression and lower fader gain will result in less compression. In effect, you will use the fader to simultaneously set the audible volume of the sound in the mix AND “drive” the compression. Normally this is not such a good idea because it makes it more difficult to fine-tune the mix (changing the volume changes the compression too).
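
In code terms, the difference is just where the fader multiply happens relative to the processor. A sketch, reusing the toy compress() from the 2-bus post earlier:

    def channel_pre_fader(x, sr, fader_gain):
        # The compressor "hears" the raw level; the fader only scales the result.
        return compress(x, sr) * fader_gain

    def channel_post_fader(x, sr, fader_gain):
        # The compressor "hears" the faded signal: pushing the fader up
        # drives the compression harder as well as making the channel louder.
        return compress(x * fader_gain, sr)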

Post-fader effects are not often used, except for sends (also called “aux sends” or “FX sends”). The “send” effectively duplicates the sound and sends one copy to the send channel (the other copy is sent through the original channel as normal). If a fully wet reverb is applied to the send channel, you’ll have two channels making sound – the original “dry” (no reverb) channel, and the “wet” (reverb) send channel. If the send is post-fader, the sound level that is sent to the reverb depends on the fader setting. This way, if you adjust the fader (to fine-tune the mix, or perhaps automate a fade in or out), the RELATIVE level of the reverb stays the same. On the other hand, if the send is pre-fader, the absolute level of the reverb stays the same (so if you turn the fader all the way down, you’ll still hear some reverb, and if you turn the fader all the way up, you’ll hear less reverb relative to the original sound).
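
And here’s roughly what a send computes, with a toy convolution reverb standing in for the real thing (all the names here are my own illustration):

    import numpy as np

    def reverb(x, sr, seconds=2.0):
        # Toy stand-in: convolve with an exponentially decaying noise tail.
        t = np.arange(int(sr * seconds)) / sr
        tail = np.random.randn(len(t)) * np.exp(-3.0 * t)
        return np.convolve(x, tail)[: len(x)]

    def mix_channel(x, sr, fader_gain, send_level, post_fader=True):
        dry = x * fader_gain
        # Post-fader: the send follows the fader, so the wet/dry balance
        # stays constant. Pre-fader: the send ignores the fader, so the
        # reverb keeps sounding even with the fader all the way down.
        send = (dry if post_fader else x) * send_level
        return dry + reverb(send, sr)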

-Kim.

Ordering of EQ and Compression

The order in which you use EQ and compression will depend on the sound, what you want to do with the sound, and the rest of the mix. By using EQ first, followed by compression (sound->EQ->Compression), you are adjusting the frequency spectrum of the sound, then applying compression to the adjusted sound. You might think of this as “compressing the EQ”. This is useful because the compressor will respond in a natural and predictable way – it is operating on what you hear. You can use the EQ to remove problems and shape the sound for the mix, and the compressor will respond to the “fixed” sound instead of the “raw” sound. The downside is that sometimes a compressor will adjust the perceived frequency response of a sound (usually by reducing the low end or the high end), and it’s not as easy to compensate for that with pre-EQ.

Conversely, by using compression first, followed by EQ (sound->Compression->EQ), you are adjusting the dynamics of the sound, and then adjusting the frequency spectrum. You might think of this as “EQing the compression”. This can be useful if your compressor is reducing the perceived high end or low end of the sound – the EQ can compensate for any “funk” the compressor is adding. The downside is that the compressor is not responding to the sound you hear, which means it might not sound as natural or predictable. As an extreme example, your sound might have some significant low end rumble or resonance – if you compress before EQing, the compression will respond to the rumble or resonance even if you greatly reduce it with post-EQ.

A hybrid approach might look like EQ->compression->EQ, where the first EQ (before compression) is used to address any problems in the sound and shape it for the mix, and the second EQ (after compression) is used to add any final touches or compensate for compression “funk”.
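
As a sketch – reusing the toy compress() from the 2-bus post, with a simple high-pass standing in for the “fix” EQ, since the rumble example above is exactly where the difference shows up:

    from scipy import signal

    def remove_rumble(x, sr, cutoff_hz=80.0):
        # The "fix" EQ: a high-pass that removes low-end rumble.
        sos = signal.butter(2, cutoff_hz, btype="highpass", fs=sr, output="sos")
        return signal.sosfilt(sos, x)

    def eq_then_comp(x, sr):
        # The compressor only ever sees the fixed sound - natural, predictable.
        return compress(remove_rumble(x, sr), sr)

    def comp_then_eq(x, sr):
        # The compressor reacts to the rumble even though we remove it afterwards.
        return remove_rumble(compress(x, sr), sr)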

Which approach you use entirely depends on your sound and your mix. It’s important to understand how it works though, so you can make an informed artistic judgement. 

-Kim.

Compression and reverb

It might be helpful to look at this as a choice between “reverbing the compression” (sound->compressor->reverb) or “compressing the reverb” (sound->reverb->compressor). Reverb is usually an additive process – it adds a component (the reverberation) to the existing sound. If you add the reverb last (after compression), you’ll be able to produce a conventional, natural sound. That’s because the signal being sent to the reverb has the same (or similar) dynamic response as the final sound we’ll hear in the mix. Also, the added reverb itself isn’t being significantly processed, which means it will sound close to what the reverb designer intended.

By doing it the other way – “compressing the reverb” – you are directly altering the dynamic response of the reverb itself. This is not a common process, but may be useful for achieving special effects or unnatural ambiences. For example, smooth deep compression on a long reverb tail may lengthen the tail or make it sound “deeper”. More aggressive compression can create a very unnatural pumping effect that emphasises the reverb without washing out the original sound.
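
In the same toy-code terms as the earlier posts (compress() and reverb() being the stand-ins sketched before):

    def reverb_the_compression(x, sr):
        # sound -> compressor -> reverb: the tail itself stays untouched.
        return reverb(compress(x, sr), sr)

    def compress_the_reverb(x, sr):
        # sound -> reverb -> compressor: the tail's dynamics get reshaped,
        # which is where the lengthening and pumping effects come from.
        return compress(reverb(x, sr), sr)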

-Kim.