Posts Tagged ‘Layering’

When to separate sounds and when not to

When working on a song with multiple instruments, often you have to think about which parts will be played by each instrument. With larger productions, you’ll tend to have more instruments than parts. In these cases, you’ll need to think about when two different instrument sounds should be combined, and when they should be kept separate. When the sounds of two instruments are combined, the listener hears a single composite sound rather than two distinct parts.

How to separate

Separating instruments can be achieved in a variety of ways:

  • Different register (pitch range)
  • Different rhythms
  • Different tonality (notes in the scale)
  • Different volume
  • Different sound character
  • Different space (panning, depth)

For best effect, you should separate instruments using several of the above techniques.

How to combine

Quite simply, the techniques for combining instruments are the exact opposite to those for separating them:

  • Similar register (pitch range)
  • Similar rhythms
  • Similar tonality (notes in the scale)
  • Similar volume
  • Similar sound character
  • Similar space (panning, depth)

And again, for best effect, you should use several of the above techniques when combining instruments.

Of course, simply knowing how to separate or combine sounds is just part of the story – you must also know when to do it. When should you make two instruments more separate from each other? When should you try to combine them?

When to separate

You’ll get the best results in separating instruments when the parts played by those instruments are already quite different. The best way to determine this is to think about the function each part has – what it’s contributing to the song or the mix.

For example, you might have two melodic or harmonic parts in your song. You would be better off separating them if one is playing long slow notes and is only heard during the climaxes of the song, and the other part is playing small repeated arpeggios throughout most of the song. Even if both parts are played with similar instruments, in the same pitch range and with the same tonality, the two parts are definitely performing different functions in the song. To further separate them, consider changing the instrument or sonic character, the pitch range, or perhaps the depth of one or both parts. This will bring more clarity to the overall song. In other words – you hear them as different parts, so make them more different.

When to combine

The opposite is true when deciding when to combine instruments. If two parts are performing a similar function in the song, they’re probably good candidates for being combined.

For example, you might have two parts that are playing short staccato rhythms (think of an arpeggio with gaps between the notes). The two parts might be played by different instruments, with different rhythms and different pan positions. Even still, they are both performing the same function in the song. To combine them, consider making the pitch ranges, pan position, or sonic character more similar. This will make your song more cohesive and focussed. In other words – you hear them as similar parts, so make them more similar.

In applying these ideas, you’ll bring more focus and clarity to your music.




Masking

Masking is a little-understood concept that is important to composers and mix engineers. Essentially, masking is what happens when one sound makes it difficult to hear another sound. An obvious example is two instruments playing the same note, with one instrument sounding much louder than the other.

This can happen with notes or chords, where the voicing of one instrument covers up another, softer instrument. It can also happen with frequencies, where an element of one sound covers up an element of another sound. As with the example above, this happens when two instruments are playing the same note or frequency range and one is much louder than the other.

It can also happen when the notes or frequencies are not exactly the same, but nearby. The effect is particularly strong when both instruments are playing the same or similar parts and the sounds blend very well. A common example is distorted guitars and distorted bass. On its own, the distorted bass might have a heavy growl caused by a lot of energy in the lower mids and a crunchy fuzz on top. Once the guitars are brought in, however, the bass is reduced to a low-frequency rumble beneath the guitars. Even though the main energy of the guitars might be in the upper mids, they mask the upper harmonics of the distorted bass.

Another example is vocal harmonies. A song might have a section where the main melody is sung in parallel harmony – perhaps a third or fourth apart. If both voices are similar (sung by the same singer, in the same style, with similar processing), our ear will hear the upper harmony as being much more prominent than the lower harmony. The effect is sometimes quite striking – the lower harmony simply blends into the upper harmony.

These are both cases of the higher sound masking the lower sound.

Sometimes masking is useful, as it allows a sound to be thickened or deepened by adding other sounds to it. Other times it is undesirable as it makes it difficult for the listener to distinguish between the different sounds.

In the bass/guitar example, greater separation could be achieved by filtering or EQ so that each instrument contributes a unique sonic component to the mix. Alternatively, each instrument could be given a different depth. For example, the bass could be up front and the guitar further back in the mix.

In the vocal example, greater separation could be achieved by instructing the singer to perform each part differently – such as whispering one part, or perhaps singing one part forcefully. Better yet, have a different singer perform one of the parts.


Alternatives to reverb

Reverb adds two properties to sounds – diffusion and depth. While there are many ways of changing the balance between diffusion and depth, there are times when a more extreme approach is required. Reverb may not be the best solution if a sound needs a lot of diffusion but very little depth, or a lot of depth but very little diffusion.

More diffusion, less depth

Diffusion is a way of blurring a sound, reducing its sharpness or distinction. A sound may need to be diffused if it needs to be pushed to the background or to fit into a mix that is generally quite diffuse. This might need to be done in a way that doesn’t add depth if the background of the mix requires a lot of clarity or if the mix is meant to be very shallow.

In these situations, processes such as chorus, microshifting, slap delay or even true doubletracking can be appropriate.

  • Chorus diffuses the sound by adding a copy with constantly-changing pitch and timing. This can be appropriate if the sound will benefit from the added movement and the constantly-changing pitch is not distracting.
  • For situations when the movement or pitch modulation are not appropriate, microshifting might be a better solution. This is commonly implemented as a pitch shift of a few cents down on one side of the stereo space and a pitch shift of a few cents up on the other side of the stereo space. This can give a very big sound that stretches across the stereo space, but doesn’t have the modulated sound that chorus adds, and doesn’t have the added depth or tail that reverb adds.
  • Slap delay is shorthand for any quick delay with a delay time roughly between 30ms and 150ms. The delay time should be determined by the nature of the sound – the delay time and level should be set so that the delayed sound blends smoothly with the original sound. Slap delay can be useful when a sound needs less diffusion and more depth than chorus or microshifting, but not as much depth as a reverb might add.
  • True doubletracking is the process of playing two different takes of the same part simultaneously. The natural, human variations between the takes make them slightly different – different enough to create a new sound when both takes are combined. This is a popular technique for guitars and vocals because it can create a very big sound while still sounding much more natural than applying chorus or microshifting.
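Two of the techniques above can be sketched as simple DSP routines. This is an illustrative sketch only (pure Python, mono input as a list of floats, 44.1kHz assumed); the delay time, level, and cent values are placeholder starting points to be tuned by ear, as described above.

```python
def slap_delay(signal, sample_rate=44100, delay_ms=90, level=0.4):
    # Mix one delayed copy (roughly 30-150 ms) under the original.
    # delay_ms and level are placeholders; set them so the delayed
    # sound blends smoothly with the original.
    delay = int(sample_rate * delay_ms / 1000)
    out = list(signal) + [0.0] * delay
    for i, x in enumerate(signal):
        out[i + delay] += level * x
    return out

def microshift(signal, cents=6):
    # Crude stereo microshift: resample one copy a few cents down for
    # the left channel and a few cents up for the right. Linear
    # interpolation keeps the sketch short; a real implementation would
    # use a proper pitch shifter so the duration stays unchanged.
    def resample(ratio):
        n, out = len(signal), []
        for i in range(n):
            pos = i * ratio
            j = min(int(pos), n - 1)
            k = min(j + 1, n - 1)
            frac = pos - int(pos)
            out.append(signal[j] + frac * (signal[k] - signal[j]))
        return out
    down = 2.0 ** (-cents / 1200.0)
    up = 2.0 ** (cents / 1200.0)
    return list(zip(resample(down), resample(up)))
```

In practice you’d run these over audio loaded from a file; the point is just the structural difference between the techniques – one discrete delayed copy, versus two slightly detuned copies split across the stereo field.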

Depth, no diffusion

Depth is a sense of distance – particularly the distance between the foreground and background of the mix. A shallow mix will have very little distance between the foreground and background; a deep mix will have a lot. Usually sounds are pushed to the background by adding both depth and diffusion, but in some cases it is useful to add depth without diffusion. A mix might need to be very deep, but also very sharp and clear (which would require diffusion to be minimised). In other cases, a mix might already be quite diffuse, and depth has to be created by more obvious means (because regular reverb would be lost in the general diffusion of the mix).

In these situations, delay is often the most appropriate tool. Longer delays (>150ms) should work best. When tuning a delay for depth rather than rhythmic complexity, it’s often worthwhile tuning it by ear instead of snapping to the song’s tempo. The sense of depth will come from hearing the echoes between the notes. This may be difficult if a tempo-synced delay is causing the echoes to be perfectly timed to sound underneath foreground elements (so that the background echoes are masked by the foreground elements). Making the delay more audible by tuning it in between tempo times will also allow the delay to sit at a lower volume. This will enhance the sense of depth in the mix.
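To make the idea of tuning in between tempo times concrete, the arithmetic looks like this (a sketch only – the 12% offset is an arbitrary illustration, not a rule, since the point above is to tune by ear):

```python
def eighth_note_ms(bpm):
    # Tempo-synced eighth-note delay time in milliseconds.
    return 60000.0 / bpm / 2.0

def off_grid_delay_ms(bpm, offset=0.12):
    # Nudge the delay off the rhythmic grid so the echoes land
    # between notes instead of hiding underneath foreground elements.
    return eighth_note_ms(bpm) * (1.0 + offset)
```

At 120 BPM the eighth note is 250ms; nudging it by 12% gives 280ms – comfortably over the 150ms threshold, and off the grid so the echoes stay audible between notes.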


Mixing with multiple reverbs

One way to construct a subtle and complex ambience in a mix is to combine two different approaches to reverb. Going about this in an informed, deliberate way will result in a much more refined and appropriate sound than simply stacking two different reverb algorithms (either in parallel or – heaven forbid – serial).

One way to approach it is to think about foreground and background. Often using a single reverb results in an ambience that sits primarily in the foreground (resulting in a shallower mix) or in the background (resulting in a relatively dry foreground). Using two reverbs can give a mix the benefit of both the foreground ambience (for softness and blurriness) and background ambience (for depth and spaciousness). One way to do this is to use a plate for the foreground ambience and a hall for the background ambience. This will be most coherent if foreground sounds are mainly (if not exclusively) sent to the plate, and background sounds are mainly (if not exclusively) sent to the hall. This approach is useful if the mix calls for a lush ambience with a three-dimensional quality to it.

Another approach is to combine short and long reverbs. This can be appropriate if the song calls for a long deep ambience, but for some sounds there’s no middle ground between too dry and too lush. This way, some textural background sounds and feature sounds would use the long reverb and other sounds (particularly more percussive/articulative sounds) would use the short reverb. A hall or plate would be suitable for the long reverb, and a room or shorter plate might be suited to the short reverb. For a more unnatural sound, use a thick modulated hall for the long reverb and a non-linear reverb for the short reverb. This approach is useful for complex mixes that don’t need to have a particularly realistic acoustic sound, such as electronic music and ‘studio’ music.


Processing Bass: Layering

Strictly speaking, layering is not really a method for processing, but it’s a common approach to take when designing a bass sound. Layering is an additive approach to designing a sound, because you’re building it by adding different elements together. By contrast, a subtractive approach (such as subtractive synthesis) works by starting with a big sound and taking away the parts you don’t need (for example, by filtering). In the real world, you’ll probably find yourself combining the two approaches.

Recall the discussion about character and body. Sometimes you might find that your main bass sound has a satisfying body (energy below 100Hz), but not much character (above 100Hz). Other times, you might find that you like a bass sound that has a lot of character, but not much body. Using layering, you can build a composite bass sound that has the right body and character for the song you’re working on.

The trick to making this work is to stay focussed (in your mind) about what you’re trying to achieve. Otherwise it’s too easy to create an indistinct mess of sound.

For example, you might have a deep filtered synth bass that sits perfectly at the bottom of your mix, but loses power when the rest of the mix gets busy and doesn’t “cut through”. You might try to make the bass brighter by raising the lowpass filter or using saturation, but then find that the sonic signature of the bass changes too much and you lose the characteristics that you like about it. Rather than trying to make the bass sound brighter, think about layering a second element so that the original sound stays at the bottom of the mix but the added layer adds some more character in the lower mids. You wouldn’t need much – the added layer can be effective even if it’s quieter than the original layer.

Alternatively, you might have a bass with a lot of character in the lower mids but find that it’s not adequately covering the bottom of the mix. You might also find that the level of the bass below 100Hz varies quite a lot (especially if the bassline covers a wide range of notes). Boosting the bass might make the level even more uneven, and reducing the note range of the bass would probably compromise your bassline. Rather than trying to add more of what isn’t working down there (or destroying your sound with a multiband compressor), consider adding a new layer to cover the bottom of the mix. That way you can focus the original layer on the range where it’s strongest (the lower mids, or wherever the character is). If the bassline has too wide a range, you might even simplify it for the lower layer, so it is more focussed and sits better under the mix.
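Both scenarios above amount to splitting responsibility at a crossover around 100Hz: one layer owns the body, the other owns the character. A rough sketch of that idea (pure Python; the one-pole filters stand in for a proper crossover, and the 100Hz point and layer gain are assumptions to be tuned per song):

```python
import math

def one_pole_lowpass(signal, cutoff_hz, sample_rate=44100):
    # Simple one-pole smoother standing in for a proper crossover filter.
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for x in signal:
        y += a * (x - y)
        out.append(y)
    return out

def layer_bass(sub_layer, character_layer,
               crossover_hz=100, sample_rate=44100, character_gain=0.7):
    # Keep the sub layer below the crossover (body) and the character
    # layer above it (lower mids and up), then sum. Both inputs are
    # assumed to be equal-length mono signals; character_gain reflects
    # that the added layer can work even when quieter than the original.
    sub = one_pole_lowpass(sub_layer, crossover_hz, sample_rate)
    lp = one_pole_lowpass(character_layer, crossover_hz, sample_rate)
    character = [x - l for x, l in zip(character_layer, lp)]  # highpass = full - lowpass
    return [s + character_gain * c for s, c in zip(sub, character)]
```

The design point is the same as in the text: rather than EQing one sound into doing two jobs, each layer is filtered down to the range where it’s strongest before the two are summed.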