Pro Audio

Mixing music: shining a light on this pivotal process

Like mastering, mixing can seem like a ‘dark art’. So read on as we dispel some of the myths of the mix and discover its transformative power on music.

Mixing music is simple, yet oh so complicated. Mixing should be about getting the best possible version of the recorded material wherever it’s played back. Is it more than levels, EQ and panning? Is it complicated routing, automation and thousands of plugins? Or is it adding someone’s musical taste, additional production and huge analog consoles with tonnes of channels?

Let us take you through some history of how mixing came to be, explain the basics and show you how mixing can be all of the above or none of the above.

Andrew Scheps in the studio

Back in the day

To understand mixing music, we need to understand music technology and how it changed throughout history. The first recordings — and their mixes — were mono. Meaning, all the instruments and vocals had to blend together to be heard from one speaker. In fact, early recordings were so basic that there was often just a single microphone, so when a musician needed to be louder in the ‘mix’ they had to step closer to it!

Mono recordings were used all the way into the 1960s as the ‘new’ stereo technology — developed in the late ’50s — was too expensive for most consumers. Musicians, producers and recording engineers adapted to both mono and stereo mixes or stuck with mono for as long as they could.

When stereo sound became the new norm in headphones, car sound systems and home sound systems, it opened up a new world of audio for both music lovers and musicians. A great example of early stereo panning experimentation is Taxman by The Beatles: you can hear drums, bass and guitar on the left side, vocals in the centre, and tambourine, guitars, backing vocals and a guitar solo on the right side.

Behind the curtain, as it were, vast changes were happening in the recording world. From the first recorded mono sound via a phonautograph (1857) to microphones to magnetic tape, starting with one track and working its way up to 24, recording mediums were growing and changing. Enter multitrack recording and mixing. An early recording and mixing console, the EMI REDD.51, had a whopping eight microphone inputs and four outputs: a huge development at the time.

Over the decades we saw companies like Neve and SSL create huge mixing consoles with as many as 96 channels as the demand for more instruments, more layers and more mixing techniques using multiple channels grew.

With the computer age came the DAW (digital audio workstation). Pro Tools was released in 1991 at $6,000, and one could record all of four tracks at once. An early classic made in Pro Tools was Björk’s Homogenic. Released in 1997, this iconic electronic album was only possible with the ability to edit digital audio, and it’s only become easier and easier to manipulate audio since then.

These days most DAWs have near unlimited capabilities (depending on your computer power) with software instruments and plugins replicating the greats from history, automation of anything from volume to bypassing, endless effects (including magnetic tape), unlimited tracks in software like Ableton Live, and the ability to twist and turn music into whatever you want.

So what does mixing mean?

Mixing is balancing every element in the recording, using whatever means possible to achieve this. You might like to start by defining your end goal using commercial releases from your favourite musicians: picking three songs that embody the same tone, tempo and instrumentation as your music can be a great way to give yourself and/or others perspective. For some, the thought of mixing their own music can drive them mad, and they can lose sight of the fact that there are technical aspects to mixing and making the best version of the song.

Getting clear on the style of music and the nature of the act is very important at the mixing stage. An artist like Lana Del Rey has her vocal mixed very forward and intimate: you can hear every lyric and vocal flourish on her song Born to Die. Whereas with a band like Red Hot Chili Peppers on Dani California, you can easily hear the drum fills of Chad Smith, the guitar picking of John Frusciante and the bass movements of Flea alongside Anthony Kiedis’ vocals. Both of these examples were mixed by the same engineer — the legendary Andrew Scheps.

When it comes to mixing vocals, Andrew Scheps has said he uses de-essers, harmonic distortion, parallel compression, slap delay and reverb.

Back to basics

The basics of a good mix depend on the intended outcome; however, there are staple things that should happen in every mix. Balancing and panning all the sounds is very important, and this includes deciding where the drums will be ‘visualised’. Will it be audience perspective or drummer’s perspective (looking at the drums, or sitting at the drums)? Even with electronic drums, these decisions are important and may later affect where a synth or a guitar sits in the stereo spectrum.

A general rule is that the lead vocal, kick and snare drum, and bass will sit in the centre, but again, panning these elements elsewhere can be a stylistic choice. Where the other instruments fall on the spectrum is entirely up to taste, and some mixing engineers use automation to keep the panning tight in the quieter ‘verse’ moments and then wide in the choruses and bigger moments.
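For the technically curious, the ‘equal-power’ pan law that most DAWs use for that left-right placement can be sketched in a few lines of Python. This is an illustrative toy, not taken from any particular DAW; the function name and the sin/cos law are our own choices:

```python
import math

def equal_power_pan(pan: float) -> tuple[float, float]:
    """Map a pan position (-1.0 = hard left, 0.0 = centre, +1.0 = hard right)
    to left/right channel gains using an equal-power (sin/cos) pan law, so
    perceived loudness stays roughly constant as a sound moves across the image."""
    angle = (pan + 1.0) * math.pi / 4.0  # sweep from 0 (hard left) to pi/2 (hard right)
    return math.cos(angle), math.sin(angle)

# Centred: both channels sit at about 0.707 (-3 dB) rather than 0.5,
# which is what keeps the loudness even across the stereo field.
left, right = equal_power_pan(0.0)
```

Hard left (`pan=-1.0`) gives gains of (1.0, 0.0), hard right the reverse, and everything in between trades level smoothly between the two channels.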

You might hear the term ‘low end’ a lot: it refers to the bass region of the mix, and keeping it clear often means using subtractive equalisation to remove any unwanted sounds that get in the way of an instrument like a kick drum or bass. The standard Logic Pro equaliser comes with a frequency analyser so you can see where the selected instrument sits in the frequency range. For example, if a vocal’s lowest note is around 100Hz, any energy well below that is suspect: a singer bumping the microphone stand can cause a thump that sounds like a kick drum, and filtering below the vocal’s range cleans it up.
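The clean-up half of that job, removing rumble below an instrument’s useful range, is essentially a high-pass filter. A minimal one-pole version can be sketched in Python (an illustrative toy, far simpler than Logic’s Channel EQ; the function name and structure are our own assumptions):

```python
import math

def high_pass(samples, cutoff_hz, sample_rate=44100):
    """One-pole high-pass filter: attenuates content below cutoff_hz
    (e.g. mic-stand thumps under a vocal whose lowest note is ~100Hz)
    while leaving the material above it largely untouched."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out, prev_in, prev_out = [], 0.0, 0.0
    for x in samples:
        y = alpha * (prev_out + x - prev_in)  # standard RC high-pass recurrence
        out.append(y)
        prev_in, prev_out = x, y
    return out
```

Run a 30Hz rumble through `high_pass(signal, 100)` and its level drops to roughly a third, while a 1kHz tone passes almost unchanged — the same job a low-cut band does on a channel EQ.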

A good mixing engineer will understand that the music needs to translate across multiple devices and systems. Consumption of music has morphed so much over the years, from mono systems to stereo, to extended bass systems both at home and in cars, to people listening to music more and more with headphones. People are now listening on laptops, phones and portable Bluetooth speakers so the need for clarity in low-frequency instruments on these smaller devices is increasingly important. A mixing engineer in conjunction with a mastering engineer will make sure these things will not get lost and the balance remains the same across all the devices where the music will be listened to.

Sometimes a mixing engineer will sculpt sounds so they work in harmony together. A mixer might carve out a frequency in an instrument like a synth or a guitar so that it doesn’t mask the vocal, or use sidechaining to compress an instrument like a bass when the kick drum plays. Sidechaining is the process of sending the signal of one instrument to a compressor loaded on another instrument, so that the compressor is triggered by that signal. Using this technique, for example, a bass will duck down in volume whenever the kick drum plays, bringing the kick drum to the front of the mix.
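That ducking logic is simple enough to sketch in code. The following Python toy is our own illustration, not any plugin’s actual algorithm; the function name, envelope follower and gain curve are assumptions. It follows the kick’s level and turns the bass down while the kick exceeds a threshold:

```python
def sidechain_duck(bass, kick, threshold=0.1, ratio=4.0, release=0.999):
    """Toy sidechain compressor: the kick signal drives an envelope follower,
    and whenever that envelope exceeds the threshold the bass's gain is
    reduced, so the kick punches through the low end."""
    out, env = [], 0.0
    for b, k in zip(bass, kick):
        env = max(abs(k), env * release)  # instant attack, exponential release
        if env > threshold:
            # classic downward compression: gain = (level/threshold)^(1/ratio - 1)
            gain = (env / threshold) ** (1.0 / ratio - 1.0)
        else:
            gain = 1.0
        out.append(b * gain)
    return out
```

With a steady bass line and a kick that enters halfway through, the bass passes at full level until the kick hits, then ducks to a fraction of its volume and recovers as the envelope releases.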

How much does a mixer do?

A mixing engineer essentially should be mixing the music: not writing, producing, or changing the song’s structure, just mixing the elements. A good producer and/or musicians will edit the music to their best ability in preparation for the mixer. However, when trust is involved these lines can blur.

Grammy-, Emmy- and ARIA-winning writer, producer and mixer Eric J Dubowsky has become known as a go-to mixer who sometimes adds production and songwriting flourishes to a mix he’s working on. The artists Eric J works with (Flume, Angus and Julia Stone, Dua Lipa, ODESZA, The Rubens, Demi Lovato, Chet Faker, Meg Mac) trust his ear and expertise. Eric J uses what’s called a hybrid system, utilising the best of analog and digital gear.

He uses Pro Tools to send elements of his mixes out to loads of high-end outboard gear. For example, he might send a vocal out of Pro Tools into a piece of gear, whether it be a compressor, equaliser or saturator, process the vocal through it, then send it back into Pro Tools.

When asked about his approach to the craft, this is what Eric J had to say:

“As a mixer, I try to maximise all of the moments of a song and keep the listener engaged throughout. I’m able to hear things with a fresh perspective which can be valuable when the producer or artist may have been working on the songs for months or even years. I’m also trying to make sure that the emotional arc of the song is realised in the purest way possible. This can be achieved by directing the listener to the most important parts of the song and by adding depth and space with effects.

“Personally, as a mixer, I do whatever is needed to make the song the best it can be. Not every mixer does this, but I will add parts (drums, synths, backing vocals) or even change the arrangement if necessary. This music is forever so we might as well do everything we can.”

Along with his hybrid setup, Eric J is an endorsed Universal Audio user. He talks about his process and the use of some great recreations of vintage plugins the company has released here.

Long-time collaborator of Tkay Maidza, Dan Farber co-writes, produces and mixes all of the music he works on with Tkay. Technology has made the fusing of all these roles more and more common: as Farber writes and produces, he mixes the music as he goes. He writes, produces and mixes everything in Logic Pro, with some choice pieces of gear to record into whilst writing and recording. We had a chat with Dan about his process here.

Sending your music to such an experienced mixer can bring benefits far greater than simply receiving a better version of your music: you’ll get experienced ears across your work that will no doubt improve your sound and scope, and point your music career in the right direction.

When and how should you choose a mixer?

The idea of choosing a mixer, and the costs involved, can be a daunting one. Obviously, you will have to weigh up factors like budget, deadlines, genre and how you want your music to be perceived.

If chosen correctly, one positive is that you will get a fresh perspective on your music and a clarity you never thought existed. To choose a mixer, look at their previous work, check whether they have worked in your genre before, and talk to them about what expectations you both have. The relationship will most likely become quite personal, and your feedback on each mix revision might feel like an attack on their work, so a strong connection and clear communication are paramount to getting to the end result.

A mix revision is an edit made after the initial mix is done. Revision requests might be “bring up the vocal in the bridge”, “bring up the bass a little bit” or “can you make the drums sound a little more like Ringo?” Conversely, an experienced mixer might request that the vocals be re-recorded, or ask to re-record a part themselves. This is all in the name of making the best version of your music.

Mixing is not such a ‘dark art’ — it is a technical task married with the emotion of the song. The best mixing engineers will care for your music and feel a duty to make it the best version of itself. A great mixing engineer will carry the intent with which the song was written, whilst using their technical knowledge to get these results.