Keys to solid vocal performance

Posted: 27th August 2012 by Mezzanine Floor Studios in Uncategorized

There are a lot of “secret sauce” methods out there for improving vocal performance. Breathing exercises, vocal exercises, certain things to eat/drink and others not to eat/drink, etc.

Honestly, the single biggest thing I can recommend is to treat singing as an aerobic activity. If you want to be a better vocalist but aren’t in good cardiovascular shape, get off your ass and work out.

Regular exercise can help improve your cardiovascular fitness overall. Vigorous exercise the day before a performance, or moderate exercise the day of the performance, can be a great way to open up your lungs so you’ll be better prepared to hold those long notes longer and hit your high notes more strongly.

Mixing (phase 3) – focusing on how things feel, part 2

Last time we asked an unusual question: is this song more about “color”, “texture”, and “contrast”, or more about “movement” and “energy”?

Our first example was a look at a song that was all about “color”, “texture”, and “contrast.”

Let’s take a look now at a song that requires focus on “movement” and “energy.”

Movement and Energy are key whenever you add more complicated instrumentation or percussion to a musical arrangement. This spans a multitude of genres. Modern Rock. Pop. Dance. R&B. Hip-Hop. Funk. Soul. Gospel. Hopefully at this point you’re getting the picture; Movement and Energy are probably the two most important concepts to understand if you want to mix these genres effectively.

The interesting thing about the concepts of Movement and Energy is that they are more about how things feel than about how they sound. This is strange to think about at first, but makes a lot of sense. How something sounds is experiential. It is momentary. It tends to lead to a workflow that alternates between observation and action.

For example, “the snare is too loud”->turn down the snare. “Now the vocal is too loud”->turn down the vocal. “Now the guitars are too loud”->turn down the guitars. Trust me when I say that this way lies madness.

I find it best to insert a second step into my behavior. Rather than the flow above, this would look something like: “the snare is too loud”->”what does the energy or movement of the snare interact with?”->turn down the snare->listen for the things I know the energy or movement of the snare interacts with, and adjust them.

The major difference between these two workflows is that the first is purely reactive. I take one step, evaluate, take another step, evaluate, etc. This kind of workflow makes it very hard to ensure I am consistently moving towards an end goal. The second workflow is proactive, and involves a lot more “big picture” thinking, which makes moving toward an end goal much, much easier.

So now I turn down the snare knowing that I will want to check the vocal. The kick. Anything in the center of the mix, and sometimes even things like high hat and overheads.

Another good example of this is music with layered guitars. Sometimes more guitars make the track sound huge- other times less is more. If the movement of one instrument is stepping on another, the two should be made distinct using panning, EQ, reverb, or by simply muting the one that doesn’t fit.
High hat and tambourine are another good example. Panned left and right they might complement each other if they are close enough in cadence, but panned together they would just be noise.

This “heads up” method of mixing is helpful, because it keeps me from chasing myself in circles down a series of rabbit trails and disliking the results. Instead, I go into each mixing tweak thinking about the relationships between the instruments- their color, texture, motion, energy, and the degree of contrast between these. This makes it much easier to direct myself towards an end goal.

Mixing (phase 3) – focusing on how things feel, part 1

This is where mixing gets really fun. In the end, mixing is all about getting everything to sound “right.” Everything should have a “pocket” to fit into. Everything should sound pretty good on its own when you listen critically to it. The whole thing should “gel” together and feel like a cohesive whole.
How you get there will depend largely on the kind of music you’re mixing. When you boil it all down, it comes down to a simple question and a not-so-obvious practice.

The Question: Is this song more about “color”, “texture”, and “contrast”, or more about “movement” and “energy”?

The Practice: Forget about the way things sound, and focus on the way things feel.

First, let’s look at a singer/songwriter tune that is just vocals, guitar/piano, and minimal additional instrumentation or percussion.

This is a good example of a song that is more about “color”, “texture”, and “contrast.” Of course, the most important thing is to get the vocal to sound amazing.

After that, it’s a question of how the vocal and everything else fit together.

•    Does the vocal stick out like karaoke? Find a way to make it sound like it fits with the music (hint: when it’s wrong, the solution is often fixing reverb, levels, and EQ, generally in that order.)

•    Are the music and the vocal “stepping on” each other? Find a way to stop this. It could be that the musical arrangement is too busy and is actually conflicting with the vocal. It could be that the tones of the vocal and music are too similar and you need to bring out more contrast.

•    Does the song feel like it’s the wrong “color” for the lyric? Some of this just comes down to the structure of the chords, melodies, and harmonies, which isn’t your job to fix (outside of exercising mute buttons to help clear up the arrangement.) Some of it may be fixed by looking at the “texture” elements that are being used. Anything that is being used to supplement the vocals and main instrument (guitar/piano) is about texture. This is especially true of sustained elements like strings, pads, etc. If the song doesn’t feel right and you have the option, try a different pad sound (this works better with software synthesizers than with actual recorded synths, of course.)

The concept of contrast is a very important one when thinking about fitting elements of a mix together. Sometimes “louder” means “drier”, and “quieter” means “wetter.” If I think of something as being “too loud”, my reflex is to turn it down. If I think of it as having too much energy compared to the other instruments, then I can ask a follow-up question: is it too loud, or too dry? Reverb can certainly be overdone, causing everything to lose definition. But it can also give instruments space. And if you’re having trouble with something popping out of the mix, being too loud, or just not sounding like it fits in, go back to asking yourself about contrast.
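To make the “too loud, or too dry?” question a little more concrete, here is a minimal Python sketch (assuming numpy and scipy are available; the synthetic impulse response, decay time, and dry/wet gains are arbitrary illustration values, not a recipe): instead of only pulling the fader down, you trim the dry signal slightly and blend in some reverb so the part sits further back without disappearing.

```python
import numpy as np
from scipy.signal import fftconvolve

def simple_reverb_ir(sample_rate, decay_seconds=1.2, seed=0):
    """A crude synthetic room: exponentially decaying noise as an impulse response."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(sample_rate * decay_seconds)) / sample_rate
    return rng.standard_normal(t.size) * np.exp(-4.0 * t / decay_seconds) * 0.05

def push_back(track, sample_rate, dry_gain=0.8, wet_gain=0.4):
    """"Quieter and wetter": trim the dry signal a little and blend in reverb."""
    wet = fftconvolve(track, simple_reverb_ir(sample_rate))[: track.size]
    return dry_gain * track + wet_gain * wet

# Usage with a placeholder tone standing in for an exported mono stem.
fs = 44100
tone = 0.5 * np.sin(2 * np.pi * 440.0 * np.arange(fs) / fs)
further_back = push_back(tone, fs)
```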

Stay tuned for part 2 on energy and movement.

 

Mixing (phase 2) – focusing on how things sound

Posted: 3rd August 2012 by Mezzanine Floor Studios in Mixing Techniques

Once I’ve removed unwanted sounds and noise from tracks, I turn my attention to the way things sound. This involves listening to the tracks together, soloing each track, and asking myself what I want each one to sound like. For the most part this varies by style, so I may reference similar styles of music and compile a picture of what I want things to sound like.

At this point I tend to take three steps-

  1. I ask myself- how warm/clear do I want this instrument to sound? I will then use EQ and plugins modeled after analog/tube gear to move the recorded sound more in one direction or the other
  2. I then use Compression and tweak panning to help things fit together more nicely
  3. Next I focus on space. Do things need to sit further back? I use reverb. Do things need to be “thickened up” a little? Short delays can really do this nicely. Do I need a “big” sound that doesn’t take up the center? I pan it center and use a stereo delay with one side dry and the other side wet, with a delay of 0 to 30 ms. This makes it sound wide and panned toward the dry side without taking up the center (a rough sketch of this trick follows below)
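Here is a minimal Python sketch of that last trick (assuming numpy; the 15 ms delay is just one value inside the 0-30 ms range mentioned above, and the test tone is a stand-in for a real mono source): one channel stays dry, the other gets the short delay, and the result reads as wide while leaning toward the dry side.

```python
import numpy as np

def widen_with_short_delay(mono, sample_rate, delay_ms=15.0, dry_side="left"):
    """One channel dry, the other delayed by 0-30 ms: wide, but off-center."""
    delay_samples = int(sample_rate * delay_ms / 1000.0)
    pad = np.zeros(delay_samples)
    dry = np.concatenate([mono, pad])
    wet = np.concatenate([pad, mono])  # same signal, just late
    left, right = (dry, wet) if dry_side == "left" else (wet, dry)
    return np.column_stack([left, right])  # (samples, 2) stereo buffer

# Usage with a placeholder mono signal.
fs = 44100
mono = 0.5 * np.sin(2 * np.pi * 220.0 * np.arange(fs) / fs)
stereo = widen_with_short_delay(mono, fs, delay_ms=15.0)
```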

Next up, phase 3- focusing on how things feel, part 1.

Mixing (phase 1) – Getting rid of unwanted sounds

Posted: 3rd August 2012 by Mezzanine Floor Studios in Mixing Techniques

I work on a fair amount of live recordings- where getting a quality product at the end is just as much about removing sound you don’t want as it is about sweetening the sound you do want. This has gotten me into the habit of splitting my mixing workflow into 3 phases. This works for studio albums as well, and assumes that edits, fades, etc. have been done first.

Phase 1 – Getting rid of unwanted sounds

The first thing I do when I get a recording and am preparing to mix is to make a quick rough mix using just levels and panning, and listen for unwanted sounds. Is there noise on the acoustic guitar? Too much rumble in the vocal mics? Feedback? Frequencies in the piano that are accentuated too much by the microphone that was used?

For this task I use two tools all the time.

1.    High Pass/Low Pass Filters- Getting rid of anything that doesn’t need to be there. This often means removing low frequencies from everything that doesn’t need them. Obviously Kick, Bass Guitar, Djembe, etc. need the low end, but most other instruments and vocals don’t (see the rough filter sketch after this list)

2.    iZotope RX2- RX2 is very useful for removing noise, feedback, and any unwanted sound from tracks
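For anyone who wants to experiment with the high-pass idea outside of a DAW, here is a minimal Python sketch (assuming numpy and scipy; the 90 Hz cutoff and 4th-order filter are arbitrary illustration values, not a rule):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def remove_rumble(track, sample_rate, cutoff_hz=90.0, order=4):
    """High-pass a mono track to strip low-frequency rumble it doesn't need."""
    sos = butter(order, cutoff_hz, btype="highpass", fs=sample_rate, output="sos")
    return sosfilt(sos, track)

# Usage with a placeholder signal: a vocal-range tone plus low-frequency rumble.
fs = 44100
t = np.arange(fs) / fs
vocal_ish = 0.5 * np.sin(2 * np.pi * 440.0 * t)
rumble = 0.3 * np.sin(2 * np.pi * 35.0 * t)
cleaned = remove_rumble(vocal_ish + rumble, fs)  # the 35 Hz component is heavily attenuated
```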

Removing noise is particularly important when trying to really produce a live recording and make it sound closer to a modern studio album. This usually involves quite a bit of compression, which almost always means boosting the output gain- and raising the noise floor right along with it.
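A quick back-of-the-envelope sketch of why that happens (the threshold, ratio, and makeup gain below are arbitrary illustration numbers, not settings I’m recommending): gain reduction only touches material above the threshold, but the makeup gain lifts everything, including the noise floor.

```python
def compressed_level_db(level_db, threshold_db=-20.0, ratio=4.0, makeup_db=9.0):
    """Static curve of a downward compressor followed by makeup gain."""
    if level_db > threshold_db:
        level_db = threshold_db + (level_db - threshold_db) / ratio
    return level_db + makeup_db

for name, level in [("snare peak", 0.0), ("vocal phrase", -12.0), ("noise floor", -60.0)]:
    print(f"{name:12s} {level:6.1f} dBFS -> {compressed_level_db(level):6.1f} dBFS")
# Peaks come down (0 -> -6, -12 -> -9), but the -60 dBFS noise floor rises by the full 9 dB of makeup.
```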

Stay tuned for phase 2 (focusing on how things sound)

 

Waves NLS plugins- separating myth from reality

Posted: 25th May 2012 by Mezzanine Floor Studios in Uncategorized

I read recently about a product Waves was proud to announce: a new series of plugins called NLS (Non-Linear Summing.)

At first I was excited, as this seemed another step towards being able to mix “in the box” with plugins. Then I got to thinking- can these plugins really make analog summing boxes irrelevant if all we care about is the sound?

The short answer is no. While the NLS series might add some analog “character” to the sound, the whole exercise misses the point, as clearly spelled out in the naming of the plugins. In the end, these plugins have nothing to do with summing. Because of the core architecture of DAW software, summing is done OUTSIDE of the plugin architecture, so the name of these plugins is a complete misnomer.

That said, Waves has implemented a cool VCA group architecture and done a lot of work to model individual channels and busses on 3 major consoles. They’ve also done a good job of enabling the addition of analog warmth, along with a separate analog-modeled fader that could potentially replace the faders in your DAW if you want better modeling there.

Bottom line- if you’re looking to replace a great sounding analog mixer or Dangerous summing box with these plugins, I’d tell you to pass. If, however, you’re looking to add some analog warmth to your tracks, they are definitely worth a look. If you happen to have any UAD-2 cards in your system and haven’t done so, for about the same amount of money I would buy the Studer A800 tape plugin first. This plugin will absolutely change the way you mix.

http://www.musicradar.com/news/tech/namm-2012-waves-nls-non-linear-summing-plugin-525494

Getting a great acoustic guitar sound

Posted: 26th April 2012 by Mezzanine Floor Studios in Mixing Techniques

There are a lot of tricks I use to get a DI-recorded acoustic guitar to sound OK- this usually involves a pretty brutal EQ curve to start with. What’s interesting is that there are 3 things I almost always process an acoustic guitar through- whether it was recorded well in a studio or recorded with a crappy DI on stage.

What are these magnificent 3?

  • UAD Studer A800
  • UAD Pultec Pro
  • UAD Fairchild 670

I know I sound like a shill for Universal Audio, but it’s really true. Aside from the fact that their plugins sound fantastic, I love them because I don’t need presets. With most of their plugins I can tweak a few knobs and buttons and dial it in. That makes it great when working with other folks who have UAD but don’t have my presets.

Of course, I’ve also utilized the FX chains in Sonar X1D Expanded, so my default settings for this are available at the touch of a button.
Here are my favorite settings for acoustic guitar.

UAD Studer A800

Choose the tape you like best and set the Calibration to match. Open up the lid and turn off the noise.

Set the IPS to 30 for a lot of high end sparkle, 15 IPS if you want a more balanced sound.

If you really want more sparkle, you can open up the Studer and turn up the HF knob, just don’t overdo it.

If running 15 IPS, set the EQ to CCIR if you have a boomy guitar, NAB if you want more low end. Set the input so you get just the right amount of saturation without overdoing it. Set the output so you’re getting the right levels out in your DAW.

UAD Pultec Pro

Insert the Pultec Pro. Close the Pultec Pro. Yes, really. It adds a nice sheen without tweaking a knob. If needed, the presets for Acoustic Guitar can actually be a very good start, but I usually don’t touch them either.

UAD Fairchild 670

From the default, set the input volume so it’s not being blown away, set the threshold so it hits lightly, set the output so the output volume sounds balanced with your other instruments.

Now for the KILLER tweak- turn the DC Bias all the way down. This tends to warm the sound and mellow the highs. Once you get used to the difference between default and all the way down, find a place in the middle that feels good. (Sometimes having the bias all the way down will cause a little graininess or distortion, so listen closely before deciding to leave it all the way down, even if that otherwise sounds best.)

 

 

Cakewalk Sonar X1: Using the Pro Channel EQ

Posted: 7th April 2012 by Mezzanine Floor Studios in Hot Gear, Mixing Techniques

One of the reasons I love working in Sonar X1 is the Pro Channel functionality. It’s great having any combination of Tube Saturation, Compression, and EQ on every channel by default. It makes a lot of work faster when trying to brute-force a decent rough mix. It’s also great that this is expandable, making it possible to add other effects like gates, limiters, reverbs, etc. on each channel by default.

Here are the reasons I love the Pro Channel functionality:

  1. Default track and buss settings even in a new blank session (Right click and Set Modules as Default for Tracks)
  2. Inclusion in track templates
  3. Easy “CTRL+drag” EQ cloning between channels in the Console view
  4. Quick Group functionality (select the tracks you want to tweak simultaneously, then hold down CTRL and change the setting you want to change for all of them; this works for normal volume, pan, and aux sends as well as Pro Channel settings. This grouping is temporary- it persists until a different track or group of tracks is selected)
  5. Permanent group functionality (right click on a control in the Pro Channel and click Group, then assign a group A through X. Do the same on a different control on a different track.)

Quick Groups

The Quick Group functionality (#4 above) is great for doing simple things quickly. I find it especially nice if I’ve started a live recording from scratch. Generally I will filter out the low end below 80-90 Hz on everything for live recordings, except for the few instruments that need it (Kick, Bass, bottom Djembe, etc.) To do this quickly in Sonar X1, simply follow these steps:

  1. Quick Group select all tracks (click and drag across the track numbers on the bottom of the Console)
  2. Hold down CTRL and click on the Power button for the Pro Channel EQ
  3. View the Pro Channel in the Inspector (hit I to show if hidden)
  4. Hold down CTRL and click to enable the low filter, continue to hold CTRL and raise the filter to the desired frequency

Now you can simply go to the Kick and the Bass and turn off this filter or set it appropriately (many folks like cutting below 30 or even 50 Hz.)


Manual Groups

With the Pro Channel EQ, permanent grouping (#5 above) is especially cool, as the gain of an EQ node can be grouped but inverted, allowing for “complementary” EQ changes on the fly. Complementary EQ is the act of cutting a frequency on one instrument to make room for another instrument at that frequency. Often this means boosting one channel (adding 80 Hz to the Kick) and cutting another channel (subtracting 80 Hz from the Bass Guitar.) Complementary EQ is a powerful tool, but generally has to be done one channel at a time by listening and guesswork.

To do this, go to the Pro Channel on one track and right click on the gain knob, select Group, then select a group A through X. Do the same on the second track you want to affect. Now, right click on one of the gain knobs and choose Group Manager and select Custom. Then select one of the knobs and choose to Swap values, then click Ok.

Now when you raise the gain on that grouped node for one track’s EQ, the gain will lower on the other track’s EQ for that node. This means you can do complementary EQ on the fly, affecting both channels at the same time and using your ears to listen for the combination of settings that is correct, rather than working one channel at a time. This is especially powerful when combined with Quick Groups, so you can change the Frequency and Q controls as well.
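For anyone who wants to hear the same complementary move outside of Sonar, here is a minimal Python sketch (assuming numpy and scipy; the 80 Hz / 3 dB values mirror the example above, the stems are synthetic stand-ins, and the peaking filter follows the widely published “Audio EQ Cookbook” formula): the kick gets the boost and the bass gets the mirror-image cut.

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, f0_hz, gain_db, q=1.0):
    """Peaking (bell) EQ per the RBJ Audio EQ Cookbook: boost or cut gain_db at f0_hz."""
    a = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0_hz / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
    d = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
    return lfilter(b / d[0], d / d[0], x)

# Placeholder stems standing in for real kick and bass tracks.
fs = 44100
t = np.arange(fs) / fs
kick = 0.6 * np.sin(2 * np.pi * 60.0 * t)
bass = 0.6 * np.sin(2 * np.pi * 82.0 * t)

move_db = 3.0  # one linked move: +3 dB on the kick, -3 dB on the bass, both at 80 Hz
kick_out = peaking_eq(kick, fs, 80.0, +move_db)
bass_out = peaking_eq(bass, fs, 80.0, -move_db)
```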

Conclusion

I would not go crazy trying to add every effect known to man into this new proprietary plugin format; we already have a great cross-platform medium for this with VST plugins. But this Pro Channel functionality is awesome in large part because Cakewalk has kept it simple, focused on bread-and-butter mixing tools, and integrated it well with other features in Sonar.

I believe the Pro Channel as implemented by Cakewalk in Sonar X1 is a significant enhancement to DAW architecture, and I feel that it helps set Sonar apart from other DAW software in a powerful way. Propellerhead’s Record software, which is now built into the Mixer in Reason, is one of the few others to do something similar, but I prefer the way that Cakewalk has implemented this idea in Sonar.

Mastering gain structure

Posted: 27th February 2012 by Mezzanine Floor Studios in Live sound, Mixing Techniques

Gain structure is at once one of the easiest concepts for people to grasp and one of the hardest for folks to master.

At its core, it is simple. Proper gain structure is about two inter-related things:

  1. Avoiding distortion, crackling, and other bad sound caused by overloading a component in the mixer/DAW
  2. Giving the engineer “headroom” on each fader/control, so they can make minor volume boosts and full range volume cuts with each fader

There are two things that make practical applications of gain structure complicated-

  1. Routing (both submixes and auxiliary channels) – where the signal that comes into each channel goes
  2. Signal processing – what each signal is processed with

On a mixing board proper gain structure includes many components: the actual signal coming in, the trim level on the mixer channel, the EQ and any FX that are inserted, the individual channel faders, sub-mix faders, aux returns, the master fader, and the amplifiers that run the PA system. The components are very similar in most modern DAW software as well.

To put each component in context and make sense of proper gain structure, let’s start with an analogy. Think of a mixer like a series of streams formed from snow melt in the Rocky Mountains. Each stream would eventually flow into a river, which would eventually flow into the Mississippi river, which finally empties into the Gulf of Mexico. In the analogy, each channel on a mixer (and the signal that it controls) would be like one of those streams. The Mississippi would be like the master output on the mixer, which feeds sound out to the audience. In between the streams and the main river there would be confluences- places where multiple streams come together to form one river, and where multiple rivers come together to form the main river that flows into the Gulf. There would also be forks in some of the rivers, where the water would end up following two different paths to eventually meet again and flow into the Mississippi. The same snow would also melt on the West side of the Rockies, go through a similar process, and eventually empty into the Pacific Ocean (in our case, think of the Auxiliary sends that are often used for monitor mixes. The Pacific and the Gulf both have a common source of fresh water feeding into them, just like the monitors and the main outputs on a mixer are both fed by the channels.)

The trim knob on a mixer is a bit like sunshine causing the snow to melt faster, which makes each of the initial streams flow with more water. In many cases this is good, allowing for crops to grow, water for drinking, cleaning, etc. However, if too much snow melts too quickly the streams fed by snow melt will flood and overflow their banks. This flooding will cause silt and dirt to enter into the stream, making the water dirty and muddy. This is similar to what happens to a channel if you turn the trim up too high- you get distortion or “dirt” polluting the sound that was clean and pleasing just moments before. While there are a number of ways in nature that such dirt or pollution might be filtered out before the water reaches the ocean, mixing boards are not so forgiving. Distortion that enters a channel on a mixer will reach the output of the mixer until that channel is muted or the cause of the distortion is identified and removed.

With a mixing board each channel also has a fader. It may be useful to think of this like a dam in our analogy, except that the fader can actually increase the volume of sound above and beyond the signal that it received. A dam can’t do that. Needless to say, though, the relationship between the trim and the fader fits pretty well into the analogy. The goal is to have the right level of sound feeding to the audience, similar to having the right amount of water flow out of a dam. If there is too much sound reaching the audience, the engineer could choose to turn down the fader, which makes a lot of sense. If the engineer later turns up the trim, however, they could cause “flooding” on that channel.

When multiple streams come together into one, the amount of water they collectively send into the Mississippi could be controlled by another dam on this smaller river. This is analogous to a sub-mix fader (although, again, a fader can output more signal than it is given, unlike a dam.) Think closely about the implications of having a dam on this river. It is fed by a lot of individual streams, and if enough of them flood individually they could cause flooding on this river as well.

Lastly, think about what it would be like to have a dam on the Mississippi river. If this existed it would be very similar to the Master fader on a mixer.

With this picture clearly in our minds, let’s start to examine the implications.

The best practice for setting levels for a single channel with a trim and a fader is as follows:

  1. Turn down the trim on each channel all the way
  2. Set the fader(s) at unity
  3. Bring the trim up until the volume coming out of each channel is at a balanced level, with each individual channel providing a strong level with plenty of headroom

This ensures that the signal hitting the fader isn’t already too hot (and causing distortion) because the trim is set too high.
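To see why that headroom matters further down the line, here is a minimal numeric sketch (assuming numpy; the channel count and the -15 dBFS per-channel level are arbitrary illustration values): channels that each look sensible on their own still stack up several dB hotter at the submix, which is exactly where the trim-then-fader discipline pays off.

```python
import numpy as np

def peak_dbfs(x):
    """Peak level of a buffer in dBFS, where 1.0 is full scale."""
    return 20 * np.log10(np.max(np.abs(x)))

fs = 48000
t = np.arange(fs) / fs
rng = np.random.default_rng(1)

# Eight channels, each trimmed to peak around -15 dBFS at unity fader.
per_channel = 10 ** (-15 / 20)
channels = [per_channel * np.sin(2 * np.pi * rng.uniform(80, 5000) * t + rng.uniform(0, 2 * np.pi))
            for _ in range(8)]
bus = np.sum(channels, axis=0)  # all faders at unity into one submix

print(f"single channel peak: {peak_dbfs(channels[0]):5.1f} dBFS")
print(f"submix bus peak:     {peak_dbfs(bus):5.1f} dBFS")  # several dB hotter than any one channel
```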

A few key principles:

  1. The trim should never be significantly louder than the fader
  2. Channel faders should not be louder than submix faders
  3. Submix faders should not be louder than the master fader
  4. If you need to turn anything down a lot, use the earliest control in the signal flow: trim, then fader, then submix fader, then master fader
  5. If you need to turn anything up a lot, think twice. If the fader is below unity, turn it up. If the fader is at unity and you have a lot of room on the trim, turn up the trim. If you don’t have headroom on the trim or fader, turn other channels down to balance the mix instead of turning up what you want to hear
  6. Whenever possible, be sure to utilize buss or sub-mix faders on your mixer, as this makes it easy to make the vocals louder than the band by turning down your drum submix and band submix, rather than turning up the vocals if you don’t have enough headroom.

At the heart of it, that’s proper gain structure. Once you’ve mastered an understanding of routing and gain structure together, you’re well on your way to being a solid front-of-house engineer.

Taking it to eleven: Keys to killer live sound for churches

Posted: 27th February 2012 by Mezzanine Floor Studios in Live sound, Mixing Techniques

Anyone that has run live sound in church knows the inherent challenges all too well. Small churches with little budget for sound equipment often struggle to do an adequate job of supporting the contemporary sound of rock and gospel inspired worship music. Doing a great job is often beyond the expectations of most church staff, but it is possible with a little budget and a lot of know-how.

At my church we recently had a new A/V director come on board and we’ve worked to improve on the great foundation the previous A/V director had laid, improving the sound every step of the way. Here are some of the key things we’ve learned.

The biggest thing we’ve kept in mind: Intelligibility is the key.

Anything that draws people’s attention away from their purpose for being there, away from the presence of God, away from the words they are singing and what they mean is detrimental. So if people can’t understand the words being spoken or sung, we’re missing the point as sound engineers. This is obvious when it comes to the pastor’s microphone, but it is essential during the worship music as well.

Here are the things we’ve found that help intelligibility.

1. Roll off the low end on everything that doesn’t NEED it. Kick, Bass Guitar, and Floor Tom are a few of the instruments that do need it. Everything else just muddies it up. Use the high-pass or low-cut filter on your mixer, and if there is also a low frequency EQ, lower the level of that as well for channels that don’t need the low end.

2. Take out the low mids on channels that don’t need it. This is especially important for churches that meet in high school gyms, as the low mids tend to be very muddy and this adversely affects intelligibility.

3. Take the lows and low mids out of the monitors as much as possible. Anything with lots of low-end energy that needs to be heard can already be heard by the performers naturally (drums), through an amplifier (bass guitar), or through the PA, since the low end of the sound spectrum is not directional.

4. If you have the chance to use subwoofers, do it. I’m not saying you should make your worship band sound like they are being played by your neighborhood high school kid with 10 subwoofers in his car. Rather, subwoofers enable you to get adequate low end without killing your main PA speakers to do it. Low frequencies at high volumes actually move the speaker cones enough to cause phase shifts in the mid range frequencies reproduced by the same speaker cone. This can make the mids sound hollow, adversely affecting intelligibility. Using subwoofers for your low end results in longer lasting, better sounding main PA speakers.

5. Use a crossover/delay processing unit before going to the amplifiers and speakers. This allows for a lot of control over the sound and enables awesome things like using an aux send to control the level of each channel that goes to the subwoofer(s) independently. Turning up the aux send feeding the subwoofer for the Kick, Floor Tom, and Bass in the choruses of a song, for instance, can enhance the dynamic range of the sound in a very musical way.

6. Compressing vocals is vital. Drum compression is often popular, but in most small church settings it’s not necessary. Since the drums are already the loudest thing happening, sound engineers in small churches really don’t need to augment the live drum sound much (if at all) in the PA, outside of maybe sending the Kick drum through the mains and subwoofer(s). Compressing vocals enables the engineer to spend less time trying to make sure a vocalist’s words can be understood, and more time enhancing the worship experience by building on the worship that is already happening on stage. Many vocalists talk much louder than they sing- compression means not having to worry so much when they go back and forth.

7. Use an aux send for reverb and return it to an open channel instead of an aux return (just remember not to turn up the aux send for reverb on that channel AT ALL!) By routing reverb this way you get a handy fader and mute button for the reverb signal, so you can easily mute the reverb when vocalists are speaking for a portion of the service, then bring it back in when they start to sing.

8. Make sure your vocalists understand basic mic technique and the way their microphone picks up sound. Nothing annoys the sound engineer and congregation more than a hapless vocalist that holds the mic at their navel and sings in a whisper. Vocalists who cover part of the microphone’s capsule can cause the cardioid pattern in some mics to become omnidirectional, because covering the capsule defeats the techniques used to make the microphone directional. Unless they are beat-boxing they should stay away from doing this. And, of course, they should know which end of the microphone to point at the monitor, and which end to NOT point at the monitor. It sounds elementary to sound engineers, but that is the level of knowledge some vocalists have about microphones, especially at church.

9. Gating/expansion is helpful for vocal mics too! Gating is popular with live sound engineers for drum mics, but its cousin, expansion, can be even more useful for vocal mics in many situations. If people use proper mic technique, an expander on each vocal channel can help reduce the potential for feedback. Leaving mic channels “open” can be problematic, since increasing the number of “open” mic channels automatically decreases the gain the sound engineer has available before inducing feedback. Muting all mic channels that aren’t used for a particular moment counters that problem, but makes it possible for the embarrassing “my mic is muted” moments to occur. Putting an expander on each mic channel and setting it properly can give the engineer the best of both worlds.

10. Use a spectrum analyzer while running sound. This can be helpful for identifying feedback, fine-tuning channel EQ, or finding an annoying frequency. If you have a computer and a 2-channel interface for recording you can do this without an expensive hardware unit. Use the first recording channel to record what you want people to hear later (i.e. the sermon.) Use a mono out, tape out, matrix out, etc. from the board to do this. Use a headphone splitter on the headphone output of your mixing console and send one split to your headphones/monitors (duh) and the other to the computer’s second recording input. Drop a spectrum analyzer plugin on channel 2 in your recording software (BlueCat Audio and Voxengo have free ones.) Now, whenever you solo an instrument you can see its signal in the spectrum analyzer. When nothing is soloed you get a spectral picture of the whole mix from what the board sees. If you want to see a spectrum of the live sound and have a free channel on the board, turn down the fader and aux sends on that channel so it doesn’t get sent anywhere, then hook a room mic up to it. Most solo buttons on mixers are pre-fader, so you should still get sound from this room mic going to your board’s headphone output, and therefore to your spectrum analyzer. (A rough do-it-yourself analyzer sketch follows this list.)

11. Be aware of your available power. Many churches weren’t wired for amplified sound. Oftentimes the people that know the actual routing of wires in the building are long gone, but it pays to find out. This will enable you to avoid having electrically noisy lights/dimmers plugged into the same circuits as your audio equipment. Also, if there are multiple sources of power onstage, make sure it’s obvious to the musicians which ones they should use for their instruments. Otherwise your perfect sound could be ruined by a ground loop hum just because the bass player got sloppy and plugged into the wrong circuit. Nothing kills intelligibility like a loud hum or buzz that’s not supposed to be there.
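As promised in point 10, here is a minimal do-it-yourself alternative to an analyzer plugin, sketched in Python (assuming the sounddevice, numpy, and matplotlib packages, a 2-input interface, and that the board’s headphone split lands on the interface’s second input): it grabs a short block from that input and plots a magnitude spectrum.

```python
import numpy as np
import sounddevice as sd
import matplotlib.pyplot as plt

SAMPLE_RATE = 48000
SECONDS = 1.0

# Capture both interface inputs; channel 1 is the record feed, channel 2 the headphone split.
block = sd.rec(int(SECONDS * SAMPLE_RATE), samplerate=SAMPLE_RATE, channels=2)
sd.wait()
monitor_feed = block[:, 1]

window = np.hanning(monitor_feed.size)
magnitude = np.abs(np.fft.rfft(monitor_feed * window))
freqs = np.fft.rfftfreq(monitor_feed.size, d=1.0 / SAMPLE_RATE)

plt.semilogx(freqs[1:], 20 * np.log10(magnitude[1:] + 1e-12))
plt.xlabel("Frequency (Hz)")
plt.ylabel("Magnitude (dB)")
plt.title("Spectrum snapshot of the board's headphone feed")
plt.show()
```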

Peace,

Joshua