Study Hall

Make It A Double: Inside Approaches To Parallel Signal Processing

Parallel processing, by definition, splits a signal and processes the copies separately, but there’s a sinister concern called "latency."

One of the earliest studio mixing tricks that I learned was parallel processing – running the same signal through several mixer channels and processing them independently. There are a few variants of this technique, the most common being parallel compression.

For example, the first application I saw of this was when a studio engineer double-patched a dynamic vocalist and compressed one of the channels. It provided the control of a compressed vocal while preserving the dynamics and “life” of the uncompressed signal.

Nerd Note: Electronically speaking, this is sort of the “opposite” of a standard compressor. A common compressor is downward-acting, meaning it reduces the signal’s dynamic range by lowering the louder moments. This parallel technique flips that around: At lower levels, the compressor isn’t in gain reduction, so the two coherent signals sum most strongly.

As the signal level increases, the compressor kicks in, dropping the level of one of the signals and decreasing the summation. The end result is that low-level signals get the effective gain boost, and higher-level signals don’t. This is called “upward-acting” compression: bringing up the low-level signal rather than bringing down the high-level signal.
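If you want to see that behavior in numbers, here’s a minimal Python sketch – a deliberately crude static compressor (no attack or release, with made-up threshold and ratio values, not any particular unit) summed with its dry input. The quiet signal gets nearly the full +6 dB of coherent summation, while the loud one gets much less:

```python
import numpy as np

def compress(x, threshold_db=-30.0, ratio=4.0):
    """Static downward compressor (no attack/release; illustration only)."""
    level_db = 20 * np.log10(np.maximum(np.abs(x), 1e-12))
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio)   # gain reduction above threshold
    return x * 10.0 ** (gain_db / 20.0)

# Two test tones: one quiet (-40 dBFS peak), one loud (-6 dBFS peak)
for peak_db in (-40.0, -6.0):
    x = 10 ** (peak_db / 20) * np.sin(2 * np.pi * 440 * np.arange(4800) / 48000)
    parallel = x + compress(x)                 # dry + compressed, summed
    boost_db = 20 * np.log10(np.max(np.abs(parallel)) / np.max(np.abs(x)))
    print(f"{peak_db:6.1f} dBFS input -> +{boost_db:.1f} dB from the parallel sum")
```

The exact numbers depend on the threshold and ratio you pick, but the shape of the result is the point: the quiet material comes up, and the loud material mostly doesn’t.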

A Matter Of Time

A more common variant is to route the drum inputs to the main mix, but also to a bus, where they’re compressed, and that compressed signal is mixed back into the main mix. These parallel processing tricks are extremely common in studio work and are becoming more common in live sound reinforcement as well due to modern consoles’ higher channel and bus capacities.

Parallel processing, by definition, splits a signal and processes the copies separately. I’ve already explored the EQ-related phase ramifications of this (“Don’t Phase Me Bro,” June 2019 LSI and on ProSoundWeb), but there’s a far more sinister concern: latency.

Digital mixing is just math, and digital processing takes time. Virtually all modern DAW (digital audio workstation) software used in studios is latency-compensated – that is, the software makes sure that all the signals arrive in sync at the main mix bus, regardless of how many plugins are applied to each signal. This is something that modern digital consoles are getting better at, but we’re not quite there yet.
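To make that concrete, here’s a minimal sketch (in Python, with hypothetical sample-count latencies – not any particular DAW’s API) of what delay compensation amounts to: every path gets padded out to match the slowest one before the sum.

```python
import numpy as np

def align_and_sum(paths, latencies):
    """Plugin-delay-compensation sketch: pad each path with enough extra
    samples of delay that all paths match the slowest one, then sum."""
    worst = max(latencies)
    aligned = [np.concatenate([np.zeros(worst - lat), x])[:len(x)]
               for x, lat in zip(paths, latencies)]
    return np.sum(aligned, axis=0)

# A clean path (0 samples of latency) and an insert path 35 samples late:
fs = 48000
x = np.sin(2 * np.pi * 100 * np.arange(fs) / fs)
insert_out = np.concatenate([np.zeros(35), x])[:len(x)]
mix = align_and_sum([x, insert_out], [0, 35])   # the clean path gets the padding
```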

Here’s one way to get into trouble: a double-patched channel (a snare or vocal for example), with an FX rack compressor inserted on one path. The extra trip to the (internal) effects rack and back adds to the channel’s latency, and it arrives at the main mix bus a bit later than its companion. Figure 1 shows the resulting comb filter.

Figure 1

How much later? Notice the comb filter’s first dip at 688 Hz (middle pane). That means an offset of 180 degrees at that frequency, which works out to about 0.73 milliseconds (ms). As confirmation, we see a second arrival on the impulse response (top pane) 0.73 ms after the initial arrival.
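The arithmetic is worth a moment: the first dip lands where the late copy is exactly half a cycle (180 degrees) behind, so the offset is half the period of the notch frequency. A quick Python check, assuming two equal-level coherent copies:

```python
import numpy as np

f_notch = 688.0                          # first dip from the analyzer (Hz)
tau = 1.0 / (2.0 * f_notch)              # half a period = 180 degrees late
print(f"offset = {tau * 1e3:.2f} ms")    # -> 0.73 ms

# Dry + delayed copy sums to |1 + e^(-j*2*pi*f*tau)| = 2*|cos(pi*f*tau)|
for f in (344.0, 688.0, 1376.0):         # below, at, and above the first dip
    mag = 2.0 * abs(np.cos(np.pi * f * tau))
    print(f"{f:6.0f} Hz: {20 * np.log10(max(mag, 1e-12)):+6.1f} dB")
```

That last loop shows the comb’s character: +3 dB below the dip, a deep null at the dip, and the full +6 dB coherent sum an octave above it.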

In theory, this can be fixed by manually adding delay to the signal path without the effects insert, but this attempt was thwarted by the console having limited input delay resolution. The closest I was able to get was 0.7 ms, which pushed the comb filter mostly out of the audible range, resulting in a rolloff that reached -3 dB at 11 kHz (Figure 2).

Figure 2
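Out of curiosity, we can estimate what the leftover misalignment does up top. A rough sketch using the rounded numbers above – the true offset was measured, not exactly 0.73 ms, so this lands in the ballpark of the measured 11 kHz rather than right on it:

```python
import numpy as np

tau_res = 1.0 / (2.0 * 688.0) - 0.7e-3   # leftover misalignment, ~0.027 ms

# Parallel sum relative to a perfectly aligned (+6 dB) sum: 20*log10(|cos(pi*f*tau)|)
for f in (1e3, 5e3, 10e3, 15e3):
    print(f"{f / 1e3:5.1f} kHz: {20 * np.log10(abs(np.cos(np.pi * f * tau_res))):+5.1f} dB")

# Frequency where the residual comb reads -3 dB: cos(pi*f*tau) = 1/sqrt(2) -> f = 1/(4*tau)
print(f"-3 dB point: ~{1.0 / (4.0 * tau_res) / 1e3:.1f} kHz")
```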

This is a good example of how something that appears to be a frequency-domain issue can actually be a time-domain problem. We’re dealing with a cancellation in this case, so EQ is unlikely to be the way forward. Using an external hardware processor as an insert usually means even more latency, since the signal may have to go through additional DA/AD conversions or sample rate conversions along the way. This is one of the weird instances in which running an all-digital signal chain at a lower sampling rate can actually decrease system latency, because it eliminates sample rate conversions between devices.

Don’t Be Late

How about the more common approach of sending drums to the main mix, and also through a subgroup or mix bus to be compressed and mixed back in? The three modern digital consoles I tested handled this without a problem, which means an internal delay is added in the DSP on all channels assigned directly to the main mix, so the signals taking the longer input -> mix bus -> main mix bus trip don’t show up late. (You go ahead, I’ll catch up.)

This might sound a little strange, but you’re already used to it: it’s why console latency doesn’t jump up when you switch in a channel EQ or dynamics module. Modern consoles, with very few exceptions, can be counted on to behave themselves in the time domain with the “stock” channel processing, so it’s when we start using insert points or layering mix buses that we need to be concerned.

For example, when I tried the “parallel drum bus” configuration on an older-generation digital desk, it produced a spectacular comb filter. We can avoid this by using two drum buses – one “dry,” one “compressed” – so both copies take the same bus trip and pick up the same latency: blend to taste, serve immediately (or within 0.73 ms). So it’s not an automatic problem, but it’s certainly something that warrants a check if you’re jumping into parallel processing.

Figure 3

In many situations, of course, we’re not layering parallel-processed signals, and so the “buffer” latency might be unwanted, especially with latency-critical applications such as mixing for in-ear monitors. Figure 3 shows the delay compensation settings on a Midas Pro Series console. These allow the user to compensate for specific instances of latency and disable the others, based on the application.

If there are hardware/software inserts or parallel processing happening in the mix, we can compensate for it. Otherwise, switch it off to reduce system latency. I chose this example because it’s externally visible, but I expect that consoles in coming years will increasingly manage this sort of thing dynamically and automatically, with little to no guidance from the user.

In short, when considering using a parallel processing technique in your mix, don’t automatically assume you’ll have a problem – but don’t assume you won’t, either. A quick measurement or listening test will tell you what you need to know.
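If the console won’t tell you, a transfer-function measurement will – or, if you can capture the two paths, even a few lines of code. Here’s a minimal sketch (Python, not tied to any particular analyzer) that estimates the offset between two captures of the same source from the cross-correlation peak:

```python
import numpy as np

def path_offset_ms(a, b, fs=48000):
    """Estimate the delay of path b relative to path a via the
    cross-correlation peak. Feed both paths the same test signal."""
    corr = np.correlate(b, a, mode="full")
    lag = np.argmax(np.abs(corr)) - (len(a) - 1)
    return 1e3 * lag / fs

# Example: noise through a "dry" path and a copy delayed by 35 samples
rng = np.random.default_rng(0)
x = rng.standard_normal(48000)
late = np.concatenate([np.zeros(35), x])[:len(x)]    # stand-in for an insert's latency
print(f"offset: {path_offset_ms(x, late):.2f} ms")   # ~0.73 ms at 48 kHz
```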

