Study Hall

Perspective: Compare & Contrast

Differences in the approaches to live and studio engineering.

Audio engineering is a broad field encompassing recording, radio, theatre, film, sports, television, live music and public speaking, and while all of these disciplines require a similar core skill set, they differ in how those skills are applied.

The common ground is that they all involve the capture, manipulation and transmission of sound, which leads us to believe that it’s relatively easy to switch from one role to another. But all of these roles are better defined by the way in which they differ from each other – not just in the application of skills but also in the environment in which they’re discharged and the time scale in which they unfold. These key differences explain why certain people are better suited to certain roles than others.

The two areas most of us are drawn to directly involve music, i.e., the recording studio and live shows. Studio recording engineer and front of house live sound engineer are probably the two most popular and sought-after roles. Many engineers dabble in both during their working lives, treating them interchangeably, but while there’s a fair amount of overlap, they’re fundamentally different roles.

Eye-Opening Experience

I started my engineering career in recording studios, following the traditional route of “tea boy” to tape op to engineer. At some point a musician friend suggested I mix his band’s gig because I was a sound engineer, I knew their music, and they didn’t fully trust the house engineer. So I went along to the gig confident that I could mix a live show based on my experience in the studio.

However, I soon realized that the only thing that my experience had prepared me for was how to operate a mixing desk. I certainly wasn’t aware of how little time I had to build the mix, and then I realized that there was a lot of sound coming from the stage before I even raised a fader. As if that wasn’t enough to take in, the room was highly reverberant, making it hard to judge where the ambient sound stopped and my mix began.

Fortunately I was also a musician who’d played in a few bands so I knew the basics of how a gig should work and what was required of the engineer – but that didn’t prepare me for being the one in control. It’s easy to identify the problems in retrospect, but at the time I was just staring at the desk hoping to make it through the next half-hour without too many blasts of feedback. The fact that I managed to pull together a half-decent mix probably had more to do with the system being set up well and the input of the house engineer (who probably would have done a much better job than I).

Despite this “baptism by fire,” I would return to live sound time and time again, relishing the challenge. Once I got the hang of it I started to enjoy it greatly and eventually shifted my attention away from the studio and toward live sound, where I’ve happily been working for many years now.

Learning Lessons

One of the biggest differences between live and studio sound is the time frame. A gig has a very finite and linear time frame; everything must happen between the time you can get to the venue and the time you need to get out, without fail.

This puts significant pressure on all aspects of the production. There are time constraints in the studio too, typically dictated by the budget, but the ultimate aim of producing high-quality, meaningful recordings is more likely to determine how much time is available – if you need more time and can justify it to whoever is paying the bill, then you’ll get more time. That isn’t to say the studio is a pressure-free environment: the need to capture the lightning of a brilliant performance puts pressure on the artists, which filters down to everyone involved.

Another key difference is in the channel processing we apply.

The differences can be subtle but important. Starting out in small venues, I soon realized that using large amounts of additive EQ led to feedback issues, especially on vocals – if you boost certain frequencies to get the sound you want, then those frequencies are much more likely to feed back.

So I quickly adopted subtractive EQ practices and soon found that I could invariably get the sound I wanted by taking away the parts of the spectrum that weren’t needed while avoiding the increased risk of feedback.
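To put a rough number on that feedback argument, here’s a minimal sketch in Python using the well-known RBJ Audio EQ Cookbook peaking filter. The 3 kHz center frequency, Q of 2 and plus/minus 6 dB gains are arbitrary illustrative values, not settings from any particular console or show:

    # Minimal sketch: compare a +6 dB boost with a -6 dB cut at the same
    # frequency and see what each does to the gain of the system at that point.
    # (Illustrative values only; the RBJ cookbook formulas are standard.)
    import numpy as np
    from scipy.signal import freqz

    def peaking_biquad(f0, gain_db, q, fs):
        """RBJ Audio EQ Cookbook peaking filter coefficients (b, a), normalized."""
        A = 10.0 ** (gain_db / 40.0)
        w0 = 2.0 * np.pi * f0 / fs
        alpha = np.sin(w0) / (2.0 * q)
        b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
        a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
        return b / a[0], a / a[0]

    fs = 48_000   # sample rate, Hz
    f0 = 3_000    # hypothetical frequency we want to shape, Hz

    for gain_db in (+6.0, -6.0):
        b, a = peaking_biquad(f0, gain_db, q=2.0, fs=fs)
        _, h = freqz(b, a, worN=[f0], fs=fs)   # response at f0 only
        print(f"{gain_db:+.0f} dB peaking EQ -> gain at {f0} Hz: "
              f"{20 * np.log10(abs(h[0])):+.1f} dB")

The +6 dB version adds 6 dB of gain at 3 kHz on top of whatever acoustic gain the mic-to-speaker path already has there, pushing the system that much closer to the point where it rings; the -6 dB cut reshapes the tone without adding anything to the loop.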
