Bouncing to audio

‘Bouncing’ to audio is the process of rendering realtime-generated audio to audio files. Typically, ‘realtime-generated audio’ comes from software synthesisers, samplers, hardware sound generators, or even audio files being processed by plugins or hardware effects processors. After bouncing, these audio sources become audio files on your hard drive. The audio files are a snapshot of how those sources sound – the same way a tape recording is a snapshot of a performance.

There are a number of different terms for this. Often you’ll see it referred to as ‘rendering’ or ‘exporting’, or even ‘loopback recording’. The term ‘bouncing’ harks back to multitrack tape recording systems, when the process involved re-recording audio from some tape tracks onto one or more other tracks. The audio was ‘bounced’ from track to track on a tape system.

Doing this can be a good idea for a number of reasons.

  • It can help conserve resources. In a DAW environment, it can allow you to conserve CPU (by rendering a track that uses CPU-hungry plugins, then deactivating those plugins). In a hardware environment, it can allow you to use a specific piece of equipment – either an instrument or an effects processor – on many tracks at once.
  • It can make a project more portable. By rendering tracks, you can bring the project files to another studio – even if that studio doesn’t have the same plugins or hardware that you do. It can even allow a project to be shared between different DAW platforms, or between studios based on hardware, software or mixtures of both.
  • It can help you make decisions. Rendering tracks locks you in to a particular sound and performance. While realtime-generated audio allows you to continually adjust the track (and – for MIDI – the performance), rendering those tracks to audio files creates a snapshot that cannot easily be changed. This can be made part of a project workflow to mark the end of one stage and the beginning of the next.
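As a toy illustration of what a DAW does when it bounces, here’s a minimal Python sketch (standard library only) that ‘renders’ a realtime-style sound source – a simple sine-wave synth standing in for a plugin instrument – into a WAV file on disk. The synth function and the filename are made up for the example; a real DAW does the same thing at far greater fidelity.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100

def sine_synth(freq, seconds):
    """A stand-in for a realtime sound source: generates samples on demand."""
    n = int(SAMPLE_RATE * seconds)
    return [math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n)]

def bounce(samples, path):
    """'Render' the generated audio to a 16-bit mono WAV file - the snapshot."""
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)   # mono
        wav.setsampwidth(2)   # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        frames = b"".join(struct.pack("<h", int(s * 32767)) for s in samples)
        wav.writeframes(frames)

# Bounce one second of A440 to disk. The file now plays back identically
# no matter what happens to the 'synth' afterwards - that's the snapshot.
bounce(sine_synth(440.0, 1.0), "bounced_track.wav")
```

Once the file exists, the generator (and whatever CPU or hardware it needed) is no longer required to hear that sound.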

Obviously, there are a couple of downsides. One is track space. In a DAW environment, rendered audio files take up additional hard drive space. This is usually not an issue, because hard drives are cheap and high-capacity. It’s more of an issue with hardware recording systems, because many have strict limits on how many tracks are available simultaneously.

The other downside is that it prevents further editing of the track – both the effects processing settings and (for MIDI) the performance. This is usually mitigated by keeping a deactivated copy of the original realtime generated track.

Personally, I use track rendering at two points in my workflow:

  1. When the artist brings their demo to my studio. My artists work on a variety of platforms, so I ask them to render each track so the project can be brought into my studio for further work.
  2. When using hardware instruments, hardware effects processors, or CPU-heavy plugins. Obviously, this is to allow these tools to be used many times in a project. It also allows projects to be recalled at later sessions (I use some hardware devices that are very complex and have no presets). I also use a CPU-heavy amp simulator, which I routinely render to audio as it’s being recorded – because I prefer not to have restrictions on how many guitar parts I use (and it’s no different to recording an audio file of a physical amp).

Whether and when you render tracks to audio depends on your project workflow, your studio resources and your preferred style of working. Obviously, there are no generic rules – just what works for you.


    • Joachim
    • August 13th, 2010

    I must say that your blog is just absolutely amazing. I have promised myself to some day read through all of your posts.

    Big up!

  1. Thanks Joachim! Glad you find my writing useful!


  2. Yeah, really excellent, no-nonsense advice. Thank you very much for your writing; sometimes I’m tempted to simply translate one of your articles into German and publish it on our site … naughty! :)

    Felix, author at

  3. @Felix
    I don’t read German, but your site looks pretty good! I’d be happy for you to translate one of my posts and publish it on your site! Just make sure you include a link back to this blog so interested readers can explore the other posts here.


  4. @Kim Lajoie
    Wonderful! I’ll ask my boss if he’s okay with this kind of partnership (perhaps even for a series of three or so articles?).

    … On topic: I have found that a bounced version of a not-too-complex loop can be turned into something wicked and exciting with the help of a capable VSTi sampler like Vember Audio Shortcircuit and its effects … and then reused as a rhythmic element that would be hard to achieve otherwise.

  5. @Felix
    Yes, reading back I realised I’d forgotten to mention the creative possibilities that open up when rendering audio files. There are some things that can be done with audio files that are difficult or impossible to do with MIDI or live processing. Chopping up the material for rhythmic processing is one possibility. Reverse reverb is a classic example too. Another thing I do sometimes is record multiple takes of an unstable or random process, and create a composite (comp) of the best parts.
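    The reverse reverb trick is easy to describe: reverse the rendered audio, run it through a reverb, then reverse the result so the reverb tail swells up into the sound instead of trailing after it. A rough Python sketch of the idea, with a deliberately crude single-tap feedback delay standing in for a real reverb plugin (the numbers are arbitrary):

```python
def crude_reverb(samples, delay=2205, feedback=0.5):
    """A toy 'reverb': one feedback delay line (a real reverb is far denser)."""
    out = list(samples) + [0.0] * delay * 8   # extra room for the decaying tail
    for i in range(len(out) - delay):
        out[i + delay] += out[i] * feedback   # each echo feeds the next one
    return out

def reverse_reverb(samples):
    """Classic trick: reverse, reverb, reverse - the tail precedes the sound."""
    backwards = samples[::-1]
    wet = crude_reverb(backwards)
    return wet[::-1]

dry = [0.0] * 1000 + [1.0] + [0.0] * 1000   # a single click standing in for audio
wet = reverse_reverb(dry)
# In 'wet', the echoes now land *before* the click: energy builds towards the hit.
```

    The point is that this only works on a rendered audio file – there’s no way to play a reverb tail before the note that causes it in realtime.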


    • Gianluigi Bombatomica
    • August 20th, 2010

    Joachim :
    I must say that your blog is just absolutely amazing. I have promised myself to some day read through all of your posts.
    Big up!

    Agree, awesome blog!

    Have you ever thought of making all of your posts – and why not the comments too – available as a PDF? I could try collecting them into a single document. Do you think that would be possible, Kim?


  6. @Gianluigi Bombatomica
    I’m happy to hear you find the content useful Gianluigi!

    I do have some PDFs available – but they’re not just repackaged blog posts. They’re based on some of the blog posts, but woven together to show how each idea integrates with other ideas, and how they all work together.

    The PDFs are available to members of the Kitchen…

