Study Hall


The “New” Optimization: Reconciling A Variety Of Opposing Forces

Exploring both the upsides and downsides of leveraging this technology...

The term optimization derives from the Latin word optimus, meaning “the best.”

Historically, the sound reinforcement industry has employed the term in the context of sound system tuning or alignment, but only recently has the term optimization taken on a more literal and formalized meaning.

It’s important to be familiar with the concept of numerical optimization (most simply, the determination of input values that yield a function’s maximum or minimum value) and its applications in live sound reinforcement. The technique itself is not new, and has been employed for years in other industries (aerospace design, for example).
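In code, that definition is compact: pick the input that gives the smallest (or largest) output of a function. A minimal sketch, using a hypothetical toy objective and a simple grid search rather than any production solver:

```python
# A minimal sketch of numerical optimization: find the input value that
# minimizes a function by evaluating it over a grid of candidate inputs.
# The objective function here is hypothetical, purely for illustration.

def f(x):
    # Toy objective: a parabola whose minimum is at x = 3, f(3) = 1.
    return (x - 3.0) ** 2 + 1.0

def grid_minimize(func, lo, hi, steps=10000):
    """Return the (x, func(x)) pair with the smallest value on the grid."""
    best_x, best_val = lo, func(lo)
    for i in range(1, steps + 1):
        x = lo + (hi - lo) * i / steps
        val = func(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

x_min, f_min = grid_minimize(f, -10.0, 10.0)
```

Real solvers use far smarter strategies than exhaustive grids, but the contract is the same: hand over a function, get back the input that optimizes it.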

But as computer processing power increased, so too did the accessibility of these tools to users working in the field with laptops instead of mainframes. Just as acoustical measurement systems that once required racks of equipment can now be transported in a backpack, the benefits of advancement in computer technology have allowed numerical optimization to be accessible outside of the laboratory.

Only recently, however, has optimization found its way into the live sound reinforcement market.

Let’s explore the implications of leveraging this technology, both the advantages and the pitfalls. Though it has the potential to provide greatly improved performance (with much less user effort) from both existing and future systems, it will also require some degree of compromise and acceptance by the user.

What Are We Optimizing?

The objective in almost all cases is to balance a number of performance factors (or target variables, in our optimization problem). Generally, these might include:

1) Consistency (or variation) of sound pressure level through a defined audience region
2) Absolute SPL in a defined audience region
3) Tonal response consistency through a defined audience region
4) Absolute SPL outside of a defined audience region
5) Tonal response outside of a defined audience region

Examining this list, it is clear that these factors are not complementary. In fact, several of these are potentially in direct opposition, such as:

• SPL consistency versus absolute SPL in audience area
• SPL consistency in audience versus non-audience areas
• Tonal consistency versus absolute SPL in both audience and non-audience areas
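One common way to reconcile opposing targets like these is to collapse them into a single weighted cost that an optimizer minimizes. The sketch below is a hedged illustration of that idea, not any vendor’s actual method; all function names, weights, and threshold values are hypothetical.

```python
# Hypothetical single-number cost combining opposing performance factors.
# Lower is better; each term penalizes one factor from the list above.

def system_cost(spl_variation_db, mean_spl_db, spill_spl_db,
                w_consistency=1.0, w_level=0.5, w_spill=0.8,
                target_spl_db=100.0):
    consistency_penalty = w_consistency * spl_variation_db           # factor 1
    level_penalty = w_level * abs(target_spl_db - mean_spl_db)       # factor 2
    spill_penalty = w_spill * max(0.0, spill_spl_db - 70.0)          # factor 4
    return consistency_penalty + level_penalty + spill_penalty

# A design that trades a little audience level for much less spill
# can still score better overall:
a = system_cost(spl_variation_db=3.0, mean_spl_db=100.0, spill_spl_db=85.0)
b = system_cost(spl_variation_db=3.0, mean_spl_db=98.0, spill_spl_db=72.0)
```

The weights are where the user expresses priorities: raising `w_spill` relative to `w_level` tells the optimizer that quiet neighbors matter more than an extra decibel in the seats.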

An original illustration conveying the typical input and output parameters of an ‘optimized’ loudspeaker system.

How do we manually reconcile these opposing forces through the available variables, such as number of elements, splay angles, amplifier channel level or equalization, with any reasonable degree of success? It is hopeless to expect that a human will find the best answer unassisted; the number of variables is too great and the resources (time, mainly) too few.

To illustrate, let’s look at an example: we have an array of 10 elements (assume this quantity for the purposes of demonstration), each with 10 possible splay angles. If the top box remains at 0 degrees, that leaves us with 9 (boxes) x 10 (possible angles per box) = 90 settings to evaluate, and that is only adjusting one angle at a time; the number of distinct full combinations is 10^9. One might argue that experience and intuition can narrow this down, and let’s say that cuts the number of iterations by as much as 75 percent. That’s still ~22 iterations for the splay angles alone, prior to any experimentation with gain/equalization shading or trim height selection.

At a rate of one iteration every two minutes, the user has spent the better part of an hour already just figuring out the splay angles. Repeat this for different array lengths (if the exact quantity of enclosures has not been fixed for us) and heights, and it quickly becomes impossible to look at every combination and find the best result. In most cases, the answer is to settle for a result that is less-than-optimal in the interest of time.

Fortunately, computers present another option that can yield better results in less time. This is because computers are very good at tackling problems that require a large number of different solutions to be rapidly attempted and the results compared. Because the computer is doing the legwork, for perhaps the first time the user has the convenience of defining the desired result or performance, instead of the mechanism required to achieve it.
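That shift, stating the desired result and letting the machine search for the mechanism, can be sketched in a few lines. Everything below is a deliberately tiny stand-in: a 3-box array, a crude made-up SPL model, and brute-force enumeration, none of which reflects a real acoustic prediction engine.

```python
# Sketch: the user defines the desired result (minimum SPL variation
# across the audience); the computer exhaustively tries every splay-angle
# combination for a tiny 3-box array and keeps the best one.
# toy_spl() is a hypothetical stand-in, not a real acoustic model.

import itertools

ANGLES = [0, 2, 4, 6, 8]        # candidate splay angles per box (degrees)
SEATS = range(10, 60, 10)       # audience distances in meters (toy values)

def toy_spl(splays, distance):
    # Crude model: more total splay shifts energy toward distant seats.
    aim = sum(splays)
    return 100.0 - 0.2 * distance + 0.05 * aim * (1.0 - 30.0 / distance)

def spl_variation(splays):
    levels = [toy_spl(splays, d) for d in SEATS]
    return max(levels) - min(levels)

# Brute force: 5^3 = 125 combinations, evaluated in milliseconds.
best = min(itertools.product(ANGLES, repeat=3), key=spl_variation)
```

Even this toy search covers 125 combinations instantly, where a human working at two minutes per trial would need over four hours; the same pattern scales to real arrays by swapping in a genuine prediction model and a smarter search strategy.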
