And the best reverb is…

Posted: 29th January 2012 by Mezzanine Floor Studios in Mixing Techniques

… often the one you build yourself.

There are a lot of excellent reverb plugins out there- anything from re-creations of vintage hardware to manifestations of pure digital bliss, but in many cases I find that the best reverb for the sound I’m looking for doesn’t exist in any one reverb plugin.

The first step towards being happy with reverb is recognizing when to keep it simple, using a single reverb and settings that you’re used to. For me simplicity is usually found on acoustic guitars, snares, and strings. I know what I usually like and try not to go crazy. The drum buss can be this way, too.

Reverb can be added on individual tracks or on busses with individual instruments sent to the desired reverb buss as needed (usually on the track OR a buss and not both.) When choosing whether to add reverb to the individual track or on a buss, I ask myself two questions:

1. Am I running out of CPU power? Reverb plugins are often heavyweights in terms of their demand on CPU, so adding a few reverb plugins to busses can help save CPU compared to running a reverb plugin individually on multiple tracks.

2. Am I intending this instrument to be placed hard left, hard right, or centered in the mix? If placing the instruments hard left or hard right I will add reverb directly on each track individually (if needed at all.) If centered, I ask a second question: do I want this instrument to be in the center only, or do I want it to be “big”? Vocals and snare are two good examples. Using reverb directly on them on a mono track holds them tightly to the center, making room for other instruments on the left and right of the mix. Using reverb directly on a stereo output track or sending to a stereo buss leads to a wider, bigger sound.

When setting up reverb busses, I tend to set up 3 busses, each with a reverb:

  • A short, bright reverb for use with clean vocals when that is desired, acoustic guitars, etc.
  • A medium reverb for electric guitars, drums- anything that needs to fit in a space with other instruments, and isn’t meant to be dry, out front, or distant and orchestral
  • A long reverb for strings, pads, etc. that are meant to sound distant, orchestral, etc

With spacious vocals, piano, and solo acoustic instruments I tend to like a little more movement and variety. This is where getting creative can be helpful. For this I will tend to add two more busses:

  • One buss with a delay for adding pre-delay to an existing reverb plugin when desired. The output of this buss is then sent to the input of the buss where the desired reverb has been added. In this way I could send guitars and vocals to the same reverb, but have the guitars sent directly with a shorter pre-delay set on the reverb plugin (say 20-40 ms) and send the vocals to the pre-delay buss first, for a combined pre-delay between 80 and 200 ms depending on the desired effect (see the sketch after this list)
  • One buss with a stereo delay that has different settings on the left and right, with the output of this buss sent to the longer reverb. Sending a source to this delay would then be similar to sending it directly to the long reverb, but with a little more “dancing” in the stereo field as the sound decays
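
If it helps to see that routing in one place, here is a minimal sketch of the pre-delay buss idea in Python/numpy. It is not any particular DAW or plugin- the “reverb” here is just convolution with a decaying burst of noise, and the names, levels, and 44.1 kHz sample rate are my own assumptions- but it shows how a plain delay in front of a shared reverb gives two sources two different pre-delays.

```python
import numpy as np

SR = 44100  # assumed sample rate

def predelay(signal, ms):
    """The 'pre-delay buss': a plain delay that makes the signal arrive `ms` later."""
    return np.concatenate([np.zeros(int(SR * ms / 1000.0)), signal])

def toy_reverb(signal, decay_s=1.2, predelay_ms=30):
    """Stand-in for a reverb plugin: its own short pre-delay plus convolution
    with exponentially decaying noise (a crude impulse response)."""
    t = np.arange(int(SR * decay_s)) / SR
    ir = np.random.randn(t.size) * np.exp(-6.0 * t / decay_s)
    wet = np.convolve(predelay(signal, predelay_ms), ir)
    return wet / np.max(np.abs(wet))

# Placeholder half-second test signals standing in for real tracks.
guitar_dry = np.random.randn(SR // 2)
vocal_dry = np.random.randn(SR // 2)

# Guitars: sent straight to the reverb buss (short 20-40 ms pre-delay on the plugin).
guitar_wet = toy_reverb(guitar_dry, predelay_ms=30)

# Vocals: routed through the pre-delay buss first (+100 ms), then into the same
# reverb, for a combined pre-delay in the 80-200 ms range described above.
vocal_wet = toy_reverb(predelay(vocal_dry, 100), predelay_ms=30)
```

In a real session this is simply a delay plugin on one buss whose output is routed into the reverb buss.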

One last word on pre-delay. This setting is very important when you want to preserve clarity from a source, yet add a significant amount of reverb. Reverb that arrives within about 20 ms of the direct sound blends with it and tends to “muddy” the sound. Try setting pre-delay between 20 ms and 50 ms for clarity, 50 ms to 100 ms for a subtle effect, and more than 100 ms if you like old school reverb sounds.

Using multiple figure 8 pattern microphones to record in stereo

Posted: 17th January 2012 by Mezzanine Floor Studios in Recording Techniques

Polar patterns indicate the “directionality” of a microphone. They are relatively straightforward but are often misunderstood. This is especially true with the figure-8 pattern that is natural to ribbons and available on some switchable large diaphragm condensers.

Think in 3D

When first studying the polar pattern chart in a microphone’s documentation, people tend to think in only two dimensions. This is sufficient for understanding how Omni or Cardioid mics work, but an engineer with only a two-dimensional understanding of polar patterns cannot fully understand and utilize microphones with a figure-8 pickup pattern (or, for that matter, hyper-cardioid or super-cardioid microphones.)

The figure-8 pattern is commonly described as one where sound is “heard” by the microphone from the front and the back, but not from the sides. This is true but incomplete. A mic with a true figure-8 pattern also does not “hear” sound from the top or bottom.

Practically speaking, this means that the engineer will want to point the front and/or back of the figure-8 microphone at what they want the mic to “hear”, and the null points at the sides, top, and/or bottom of the microphone at what they don’t want the mic to “hear.”
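
If the 3D picture is hard to visualize, it can help to remember that an ideal figure-8 responds with the cosine of the angle between its front axis and the direction the sound comes from- real mics only approximate this, and the little helper below is purely my own illustration. It shows that the sides, the top, and the bottom are all 90 degrees off-axis, and therefore all nulls:

```python
import numpy as np

def fig8_gain(mic_axis, source_dir):
    """Ideal figure-8 response: cosine of the angle between the mic's front
    axis and the direction to the source (negative = rear lobe, picked up
    with inverted polarity)."""
    a = np.asarray(mic_axis, float)
    s = np.asarray(source_dir, float)
    return np.dot(a, s) / (np.linalg.norm(a) * np.linalg.norm(s))

front = [1, 0, 0]                      # mic pointing along +x
print(fig8_gain(front, [1, 0, 0]))     #  1.0  front
print(fig8_gain(front, [-1, 0, 0]))    # -1.0  back (inverted polarity)
print(fig8_gain(front, [0, 1, 0]))     #  0.0  side   -> null
print(fig8_gain(front, [0, 0, 1]))     #  0.0  top    -> null
print(fig8_gain(front, [0, 0, -1]))    #  0.0  bottom -> null
```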

Recording separate sources with 2 figure-8 mics

When using two figure-8 microphones to record two different sources, such as a voice and an acoustic guitar at the same time, it can be very helpful to keep the top/bottom null points in mind. With some mic stands it is easier to point the top of the guitar mic (a null) at the vocalist and the top of the vocal mic at the guitar than it is to aim the sides of each microphone at what it shouldn’t hear.

Recording in stereo

Generally speaking, one of two techniques will be used when recording in stereo with figure-8 microphones- Blumlein or M/S (Mid/Side). If you look only at the microphones themselves, the two techniques appear identical. In both cases the microphones are set up so the front of one mic is rotated 90 degrees from the front of the other. In this way, the “null” point where one mic doesn’t hear anything is covered almost completely by the other microphone, and vice versa. The similarities between the techniques end there, however.

Blumlein

Using the Blumlein technique the front of each microphone would be pointed at a 45 degree angle to the sound source being recorded. In this way, one microphone clearly captures sound from the left side of the source and the other clearly captures sound from the right side of the source.

  • This could be left and right overhead microphones for drums, recording an acoustic guitar in stereo to capture both the sound-hole and the 12th fret, stereo room microphones, etc.
  • The back of the figure-8 pattern also picks up sound. Thus, one mic really picks up the left side of the sound source and the right side of reflections from the room behind the microphones, and the other microphone picks up the right side of the sound source and the left side of reflections from the room behind the microphone. This means that generally the backside of a Blumlein pair should not point at something you don’t want to hear. If using a Blumlein pair for room mics, for instance, it is best not to put the pair right up against the back wall of the studio.

The result of Blumlein is a “normal” stereo pair of tracks where one would be panned hard left, the other hard right.

Mid-Side

There are two differences between the Blumlein technique and the Mid-Side technique.

First, the microphone pair will be effectively rotated together 45 degrees. The front of one microphone* will be pointed directly at the source of sound that is being recorded. The other mic would have its side (null point) pointed at the sound source being recorded, and would capture only sound that comes in from the left and right sides of the room. In general this means the front-facing microphone will pick up mostly direct sound from the source, and the side microphone will capture sound that is reflected off the side walls in the room.

Second, the signals from these two microphones must then be decoded (matrixed) properly to obtain the proper results (see the Wikipedia article on Mid/Side recording for more detail, and the small sketch after the footnote below.) The end result of the Mid/Side technique is a recording where the engineer can control the center of the mix and the left/right level of the mix separately, and where the left and right signals collapse into mono perfectly, making it an excellent technique for recording in stereo when some listeners will only be able to hear in mono (which makes it very popular with radio engineers.)

* The front facing “mid” mic in a mid/side pair can actually be a microphone with any directional (non-Omni) pattern, although Cardioid and Figure-8 are the most common.
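
For the curious, the decode itself is just sum and difference: left is mid plus side, right is mid minus side. Here is a minimal sketch of that matrix in Python/numpy- the function names and the width control are my own, and real M/S decoders layer gain trims and other conveniences on top of this:

```python
import numpy as np

def ms_decode(mid, side, side_gain=1.0):
    """Standard Mid/Side sum-and-difference decode.
    side_gain controls stereo width: 0 = pure mono, 1 = nominal width."""
    left = mid + side_gain * side
    right = mid - side_gain * side
    return left, right

# Placeholder one-second signals standing in for the two mic tracks.
SR = 44100
mid = np.random.randn(SR)    # front-facing "mid" mic
side = np.random.randn(SR)   # sideways figure-8 "side" mic

left, right = ms_decode(mid, side)

# Mono fold-down: the side signal cancels, leaving only the mid mic.
mono = 0.5 * (left + right)
print(np.allclose(mono, mid))   # True
```

Fold the decoded left and right back together and the side signal cancels, leaving only the mid mic- which is exactly why M/S collapses to mono so cleanly.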

Capturing tone

Some figure-8 patterned microphones, including many ribbon microphone designs, sound different from the front than they do from the back. Generally the back of these microphones will have a darker tone. Figure-8 mics with different tones front and back are best used for close micing, are passable as room mics when used solo or in a Blumlein configuration if the tonal differences don’t adversely affect the desired sound, and are generally avoided when choosing a side mic for a Mid/Side configuration.

Other figure-8 patterned microphones, including some ribbons and most switchable large diaphragm condensers, sound virtually identical from the front and the back. These don’t give as many tonal options, but they are more consistent and open sounding and tend to work well for close micing, solo room micing, or in pairs in Blumlein or Mid/Side configurations.

Think 3D, part 2

Remember when we talked about thinking in three dimensions with the figure-8 pattern? When recording in Blumlein or Mid-Side remember that the “top” and “bottom” are null points for both microphones. Usually Blumlein or Mid-Side mic positions are defined by what you want each mic to hear, but there are a few cases where you might also point the top or bottom of the configuration at what you don’t want to hear so that sound is reduced or cancelled entirely.

  • Think about using the Blumlein pair to record drum overheads in stereo in your basement. You just might have an AC vent in the ceiling you don’t want the microphones to hear. Rotating the Blumlein pattern so the top of each mic points at the vent may accomplish this very well as long as it doesn’t significantly change the microphone’s relationships to what you want them to hear
  • Think about using a Blumlein pair to record acoustic guitar. One mic could be pointed at the sound-hole and the other at the 12th fret. If you do this with the top of the mics pointed at the ceiling, though, you might miss an important opportunity. If your figure-8 mics have strong null points at the top and bottom, try raising the configuration a little so it looks down on the guitar slightly and rotating it forward so the top of each mic points at the guitar player’s face. Provided the player doesn’t move much while playing, this may help cancel out their breaths.

 

Mixing Drums for Modern Rock Music

Posted: 16th May 2011 by Mezzanine Floor Studios in Uncategorized

Often the biggest casualty when recording a rock record on your own is the sound of the drums. A lot of attention is paid to the vocals and guitars to make sure they’re just right, and the bass is usually tweaked until it sounds “good enough”, but the drums are often done on a “do what you can with what you’ve got” basis. The result is often drums that lack “punch”, “boom”, “clarity”, “tone”, “snap”, or “warmth”, but instead can be described with words like “flat”, “boxy”, “papery”, even “boring.”

One of the things I’ve found in producing drums is that a lack of punch or clarity or boom often has much more to do with the way all the drums sound together, and much less to do with the way each drum mic sounds when listened to by itself. Sometimes the sound of everything together is weak because of poor mic choice or placement, poor drum tone to begin with, poor drummer technique, or even recording in a room that is totally inappropriate to the style. All of these things can be hallmarks of Do-It-Yourself drum production.

Sometimes, however, the combined sound of all the drum mics is bad because of a few simple things that can easily be fixed in the editing or mixing phase.

1. Drum mics often pick up sounds you don’t want them to “hear.” Tom mics are notorious for this- they pick up almost as much snare and cymbals (or kick for a floor tom bottom mic) as they do the intended tom. The solution: At the least, manually go through the song and cut out any material in the tom tracks where the toms aren’t playing. This will instantly tighten up the sound of the snare and cymbals. Sometimes this creates a new problem: the tom hits coming in change the tone and clarity of the snare and cymbals, because the sound recorded when the tom was hit includes those sounds, too. If this effect is too noticeable, replace the actual recorded tom hits with tom hit samples. Drumagog and Sample Replacer are among the many tools that enable one to do this with ease, or you can do it manually if needed. It’s best if you can get the drummer to record standalone tom hits during the tracking session- this enables you to replace the tom sounds with ones that sound the same, making your close mic replacements and the tom sounds in the overheads match rather than conflict. If this isn’t possible there are a number of good sample libraries out there.

2. Close mics are minutely out of time with the overhead mics. For many styles of music this is not a big deal. Classic rock is notorious for having room mics on the other side of a big room, giving the listener a BIG sounding drum kit that can lack clarity. In modern rock, however, a tighter, punchier sound is often desired. The solution: It can often be helpful to remove this effect by nudging the close mics so they are in time with the overheads (see Figures 1a and 1b below for the original and nudged sound, and the sketch after the note below.) Find a loud hit in the kick and zoom in on the kick and overhead tracks, then move the kick track(s) so the start of the kick in the kick track(s) matches the start of the kick in the overhead tracks. Do the same with the snare and toms (if using samples render them to a clip, then nudge the sample hits to match up, too.)

NOTE: be sure to check the polarity of your drum tracks after doing this, as it sometimes becomes necessary to flip polarity afterwards. The dead giveaway is that the waveforms line up perfectly at a zero-crossing and then start the intense transient of the drum hit, but the waveform goes up in one track and down in the other. This tells you which tracks to flip polarity* on.
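
For those who like to see the mechanics, here is a rough sketch of the nudging step outside of any DAW- just numpy, with a cross-correlation around a loud hit standing in for zooming in and eyeballing the waveforms. The track names and the 60-sample offset are made up purely for illustration:

```python
import numpy as np

def find_offset(close, overhead, window=4096):
    """Estimate how many samples the overhead lags the close mic,
    using a short window around a loud hit at the start of both tracks."""
    corr = np.correlate(overhead[:window], close[:window], mode="full")
    return int(np.argmax(np.abs(corr)) - (window - 1))

def nudge(track, offset):
    """Shift a track later (positive offset) or earlier (negative offset)."""
    if offset >= 0:
        return np.concatenate([np.zeros(offset), track])[: track.size]
    return np.concatenate([track[-offset:], np.zeros(-offset)])

# Placeholder tracks: the "overhead" is just the kick close mic arriving 60 samples late.
SR = 44100
kick_close = np.random.randn(SR)
overheads = nudge(kick_close, 60)

lag = find_offset(kick_close, overheads)
print(lag)                              # ~60 samples
kick_aligned = nudge(kick_close, lag)   # move the close mic later so it lines up
```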

Figure 1a (original):

 

Figure 1b (nudged):

 

 

* A few notes on polarity and phase:

“Flipping polarity” and “flipping phase” are often used interchangeably by engineers. Flipping polarity is the more correct term, since it means exactly what it says. The part of the sound above the middle (zero) of a waveform in software gets converted into electric current that is positive, which results in a speaker cone “pushing” outward. The part of the sound below the middle (zero) of a waveform in software gets converted into electrical current that is negative, which results in a speaker cone “sucking” inward. Flipping polarity in software takes the recorded upward part of the waveform and processes it so it creates negative voltage that sucks the speaker cone inwards and vice versa.

Flipping polarity is generally done to correct one of two issues:

1) When two mics are placed on opposite sides of a drum, like a top snare mic and bottom snare mic, one mic hears a “push” when the snare is hit and the drum head moves toward it, while the other mic hears a “pull” or “suck” as the snare is hit and the head moves away from it. This results in an unnatural sound when the two mics are mixed together, since one mic’s sound essentially subtracts from the other’s sound, leaving you with a “weak”, “powerless”, “hollow” sound in the mid range, a lack of “clarity” in the high end, and a significant decrease in the volume in the low end [in this case a bad thing.] By flipping polarity on one of the mics, the engineer makes both mics hear a “push” when the drum is hit.

2) When multiple mics are used to record something there is a time relationship between them, and adding the recorded sounds together can yield strange results. The timing relationships between tracks are related directly to the difference in distance from each mic to the source of a sound. When the drummer hits the kick, the kick mic is the first to “hear” the sound, followed by any bottom snare or bottom floor tom mics, then by the top snare and tom mics, then by the high hat mic, then by the overheads, then by the room mics, etc. When you look at the waveforms in software, you can see clearly whether the upward movements in the snare mic correspond with upward or downward movements in the overhead mics, all of which started when the drummer hit the snare. When the snare and overheads move roughly in sync up and down, the snare will sound punchier than when the snare and overhead mics move up and down opposite of each other. Traditionally the method of fixing this problematic phase (time) relationship was to flip polarity on the overhead tracks so the ups and downs in each waveform moved together, resulting in a tighter, fuller sound. The problem with this technique is that the timing relationships vary according to the frequency makeup of the sound, and depending on the distance, the waveforms may not really move together at all. By nudging tracks so they line up exactly, the movements are brought more in line with each other. It is then easy to see when the snare sound moves up in the snare mic and down in the overheads at the same time, making the decision to flip polarity in the overheads an educated, more accurate choice, rather than an approximation.
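
Here is the second half of that hybrid approach as a sketch, picking up where the nudging example above left off. Once two tracks are time-aligned, the sign of their correlation tells you whether they move up and down together; if it is negative, flipping polarity (which is just negating every sample) makes them add instead of subtract. The signal names below are placeholders, not a claim about how any particular DAW or plugin does this:

```python
import numpy as np

def polarity_check(a, b):
    """After time-aligning two tracks, see whether they move up and down
    together: a negative correlation suggests flipping one of them."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def flip_polarity(track):
    """'Flipping polarity' is just negating every sample."""
    return -track

# Placeholder: the overheads captured the snare inverted relative to the
# (already nudged) close mic, plus a little unrelated cymbal wash.
SR = 44100
snare_close = np.random.randn(SR)
overheads = -0.5 * snare_close + 0.1 * np.random.randn(SR)

if polarity_check(snare_close, overheads) < 0:
    overheads = flip_polarity(overheads)

print(polarity_check(snare_close, overheads) > 0)   # True: now they add, not cancel
```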

A few oversimplified** examples if you can forgive some poor drawings…

In figure 2 the blue sound would largely but imperfectly subtract from the green sound, resulting in a weak and hollow tone.

Figure 2.

By moving the blue sound to line up with the green sound (like the nudging example above, which is a true phase or timing adjustment) we would get Figure 3, where the blue sound is exactly opposite the green in timing, intensity, shape, etc. If played together these two sounds would cancel each other out and nothing would be heard.

Figure 3.

 

By “flipping polarity” on the blue sound we would get figure 4, where the blue sound would add to the power of the green sound more than it subtracts from it. This would result in better tone and greater low end power than the original sound combination in Figure 2, but would still lack clarity since the blue sound lags behind the green sound.

Figure 4.

By moving the blue sound to line up with the green in timing or phase, then flipping polarity if they don’t already match, we would get the best results in both tone and clarity, since the timing (phase) and polarity of the sound match most closely and blue sound would add to the green almost completely (Figure 5.)

Figure 5.

 

** The reason I say these are oversimplified is that real sound isn’t periodic or uniform like this. Even if hitting the snare alone there would be differences in the intensity/volume of the snare sound in the snare mic and overhead mics, as well as differences in frequency content due to the microphone design, the nature of sound in air, room acoustics, etc. When adding in the sound of cymbals, kick drum, toms, high hat, ride, etc. the combined content that reaches each mic will have a unique makeup of frequency, intensity, etc. The point of this article is that it is possible to get closer to ideal results using a hybrid nudging/polarity flipping technique than one can by simply flipping polarity.

Recording to Tape In-the-box

Posted: 6th April 2011 by Mezzanine Floor Studios in Audiophile, Hot Gear

Anybody that has been around the recording industry knows the merits and headaches of recording to tape. The “magic”, “glue”, warmth, and saturation on the one hand, and the hiss, laborious editing, and low availability of good quality tape on the other.

Artists like Eminem refuse to record to anything but tape, while many others will track to tape originally, then capture the recordings into Pro Tools. Other engineers will record to Pro Tools first, edit, then record to tape before moving on to mixing the album.

For those who work only in-the-box, Universal Audio has created a new way of doing this with their UAD-based plugin modeled on the Studer A800.

UAD Website

True audiophiles will argue that this isn’t the same as recording to tape only, or to tape and then capturing into Pro Tools. They are correct in that it isn’t exactly the same thing. But the results using the UAD version of the Studer A800 are nonetheless stellar.

With the UAD plugin I can take the sound of tape with me on my Macbook Pro- a darn sight lighter than the 900 lbs the real Studer weighs. There are other bonuses, too: Real tape costs money above and beyond the cost of the tape machine. Using UAD I can record to hard disk, which is far cheaper.

The other thing about using the UAD plugin is that it is far more flexible than the real Studer machine. First and foremost they added a button to toggle the hiss on or off. This alone is worth the price of admission for those who love the tone and saturation of tape but don’t want the hiss.

Next in line, the folks at UAD included models of 4 different kinds of tape, the three standard speeds- 7.5, 15, and 30 IPS- and the ability to use either CCIR or NAB EQ settings as appropriate. Unlike the real Studer A800, however, the UAD plugin lets you use different settings on different tracks at the same time, which opens up a whole new world of sonic possibilities.

For instance, to me the smoothness of 30 IPS is almost always preferable for vocals and for “sharper” sounds like cymbals, strings, or acoustic guitar. The low end head bump at 15 IPS makes it ideal for recording transient drums like the kick, snare, and toms, and also for bass. Distorted electric guitars often benefit from the “gritty” feel of 7.5 IPS. I can even choose NOT to use the Studer on a track if I prefer the dry digital sound.

What UAD has done is enable a new generation of engineers to enjoy the wonders of tape without the headaches. Myself, I’m totally sold.

 

 

Advanced tone shaping for drums with iZotope RX

Posted: 4th April 2011 by Mezzanine Floor Studios in Hot Gear, Tech Ninja

I recently worked on a project where I recorded drums and I felt like we got just about everything right, but there were a few minor niggles.

1. The snare mic was ringing just a bit too much.

2. We had added a low tom mic, and while I loved the tone this added from the resonant head when the floor tom was hit, I hated the low rumble it picked up when the resonant head on the floor tom vibrated every time the drummer hit the kick drum.

I tried a number of standard techniques (EQ, compression, gating, transient shaping) to deal with these issues in post production and just wasn’t happy with the results.

I then tried an experiment and found myself falling in love with iZotope RX all over again.

What I realized was that I had a situation that was fundamentally the same as having unwanted noise on my signal, making noise reduction an ideal process for dealing with it. My reasoning:

1. Noise reduction is more accurate than simply using EQ, because EQ can’t accurately discern between wanted and unwanted sounds at the same frequency. It’s a bit all or nothing.

2. Noise reduction is more accurate than using a simple gate, because it is sensitive to the level at many frequencies at once (essentially it’s just a serious multiband gate.)

3. Noise reduction would sound more natural than a gate, especially when dealing with the tail of a drum hit where the hit gets quieter and returns to its background levels.

My hunch was reinforced when I opened iZotope RX and gave it a shot. I used a section where the floor tom was rumbling after a kick hit as a sample of the “noise” I didn’t want, dialed in the settings for the desired amount of removal, and found my boomy floor tom transformed into a tight, full-bodied sound. I then did the same with the snare, but this time I sampled a full snare hit and used an envelope to focus only on reducing the ringing frequency I didn’t want. RX was brilliant through and through.
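
To make point 2 above concrete, here is a very rough sketch of the “serious multiband gate” idea: learn how loud each frequency band is in a clip of the sound you don’t want, then duck any band in the rest of the track that doesn’t rise well above that profile. To be clear, this is not what RX actually does under the hood- every name, threshold, and parameter here is my own assumption- it is just the concept in a few lines of Python:

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_gate(audio, noise_clip, sr=44100, nperseg=2048, over_db=6.0, floor=0.1):
    """Very rough 'multiband gate': learn a per-frequency-bin threshold from a
    clip of the unwanted sound, then duck any bin that doesn't rise well above it."""
    _, _, noise_spec = stft(noise_clip, fs=sr, nperseg=nperseg)
    threshold = np.abs(noise_spec).mean(axis=1, keepdims=True) * 10 ** (over_db / 20)

    _, _, spec = stft(audio, fs=sr, nperseg=nperseg)
    mask = np.where(np.abs(spec) > threshold, 1.0, floor)   # keep loud bins, duck the rest
    _, cleaned = istft(spec * mask, fs=sr, nperseg=nperseg)
    return cleaned[: audio.size]

# Placeholder usage: the "noise" sample is a stretch where only the unwanted
# floor-tom rumble is audible, as described above.
sr = 44100
floor_tom_track = np.random.randn(sr * 4)
rumble_only = floor_tom_track[: sr // 2]
tightened = spectral_gate(floor_tom_track, rumble_only, sr=sr)
```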

Here is an audio sample- it starts with “before” and fades into “after”: Drum_RX_demo

Below are screenshots of the settings I used- Snare on the left and Floor Tom on the right.  Please note this was with the original RX. I’ve since upgraded and my other examples of RX on this blog are now with RX2.

For those in search of drum tone tightening magic- have fun!!!

 - Joshua

Drum recording techniques

Posted: 4th April 2011 by Mezzanine Floor Studios in Recording Techniques

If you have ever recorded multitrack drums, you know it can be a challenge to get everything right. There are a lot of excellent articles on the subject (Bruce Miller’s website is a great resource for this and so many other details about recording and music production: Bruce Miller’s free audio course).

For myself, I have a pretty simple checklist that has served me well.

1. Make sure the drummer tunes his drums

2. Work with the drummer to find the right amount of resonance for snare, toms, etc. Too much ringing can be distracting- too little can kill the tone of the drums, making them sound paper-like. Moon gels work great for this: RTOM Corp website

3. Never mount SM57 mics on the drums. The baffle of the 57 rattles slightly when the mic is shaken and can ruin otherwise great drum sounds.

4. Always check what the drums sound like in the room. If the room is supposed to be “live”, listen to the way it sounds. If it is supposed to be “dead”, walk around and clap and listen for “flutter echo”- the sound of really quickly repeated echoes that happens when sound bounces off of two parallel hard surfaces (like most walls.) If you hear some, try hanging blankets, comforters, acoustic panels, pillows, whatever you can to get rid of the flutter echo.

5. When micing cymbals, ride, or high hat, don’t point the mic flatly at the cymbal. Angle it slightly so the front of the mic is not perpendicular to the cymbal. If close micing cymbals with the mic perpendicular you can get some weird “swimming” sounds from the cymbals, especially the ride. As the ride moves significantly closer and further from the mic it will get louder and quieter. I tend to aim at where the drummer will hit the cymbal to get the most attack.

6. Be sure to flip polarity on your preamp or in your recording software when needed. This is sometimes called “flipping phase.” Basically you just push the polarity flip button back and forth and listen. The setting that has the most low end is the correct one, as phase cancelling is most obvious at low frequencies. Situations where this might be necessary are numerous- room mics, overhead mics, kick mics, bottom snare mics all may need to have the polarity flip applied. There are really two reasons for doing this:

a. When multi-micing a drum like a snare it can sound weird to have one mic pointed at the top and capturing the first strike of the stick as a “sucking” away from the mic and having a mic on the bottom that “sees” this first strike of the stick as a “pushing” towards it. For this reason I always check polarity on bottom snare or bottom tom mics and 99% of the time I flip the bottom mic. This essentially makes both microphones see the stick striking the drum head as moving in the same direction.

b. Different mics will capture the same sound at slightly different times simply because of the differing distances between each mic and the source of the sound. The overheads are a great example: the sound they capture will always be a bit behind the sound of a close mic on any given drum. Anytime there is a significant distance between microphones it is good to check the polarity. As with over/under micing, you want all the microphones that capture a sound to capture it pushing and pulling together as closely as possible. Obviously with a drumset this isn’t fully possible, but flipping polarity can help in most situations.

7. Tube guitar amps tend to sound best when played very loud. There are some cool gadgets for enabling a guitarist to saturate their amp and have their cabinet put out less sound, enabling them to play more quietly and still have killer tone. There are no such tools for drums. The tone of the drums will vary largely depending on where and how hard they are hit. Snare and toms especially will tend to have the best tone when played VERY loud. Of course, volume may have to take a back seat to practical considerations (neighbors, hearing your headphones, etc.) and to musical style, but in general the louder the better.

8. Always listen to each drum hit individually. Sometimes little rattles can hide in the wall of sound when the whole kit is played, but they will become audible during quieter sections.

9. Always record at 24-bit. Some programs default to 16-bit, which greatly reduces the available dynamic range and makes recording drums well extremely difficult. Set the record levels so that peaks hit about -12 dBFS when rehearsing- the drummer will likely play louder when performing. This setting enables you to capture the most dynamic range with little fear of going over (see the sketch after this list).

10. Choosing overhead placement is critical. I tend to like having my kick, snare, and mid tom sound dead center in my mixes, so that is where I want them in my overheads. I do this by drawing an imaginary line from the center of the snare through the center of the kick. This is where I want the center of my mix, so the overheads should capture sound from either side of this line. I then place my overheads so that they are the same distance from the center of the snare and the kick, and point them at the snare. (I use a belt, a mic cable, whatever’s free to measure and adjust until the distances are correct.) In the diagram below the kick, snare, and center tom are in orange. The microphones are red dots, with the imaginary centerline in bold and the distance measurements in dotted lines. These distances should all be equal.
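
Here are a couple of quick back-of-the-envelope checks for points 9 and 10, assuming floating-point audio where full scale is 1.0 and a speed of sound of roughly 343 m/s (the helper names are my own, not anything from a particular DAW):

```python
import numpy as np

def peak_dbfs(track):
    """Peak level of a float track (full scale = 1.0) in dBFS."""
    return 20 * np.log10(np.max(np.abs(track)) + 1e-12)

# Point 9: verify rehearsal peaks are sitting around -12 dBFS.
SR = 44100
rehearsal_take = 0.25 * np.sin(2 * np.pi * 110 * np.arange(SR) / SR)  # placeholder audio
print(round(peak_dbfs(rehearsal_take), 1))   # about -12.0 dBFS

# Roughly 6.02 dB of dynamic range per bit: 16-bit vs 24-bit.
print(6.02 * 16, 6.02 * 24)                  # ~96 dB vs ~144 dB

# Point 10: why equal overhead distances matter. Any mismatch in the path from
# the snare to each overhead becomes an inter-channel time offset.
SPEED_OF_SOUND = 343.0  # m/s at room temperature

def arrival_offset_ms(dist_left_m, dist_right_m):
    return (dist_right_m - dist_left_m) / SPEED_OF_SOUND * 1000.0

print(arrival_offset_ms(1.00, 1.05))         # a 5 cm mismatch is ~0.15 ms of skew
```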

I hope this information is useful in helping you get better (and more consistent) drum sounds.

- Joshua

Every once in a while a company comes along with a product that changes the way things are done. iZotope RX is one of these products. They have some excellent samples of its capabilities on their website:

iZotope RX website

The coolest thing about RX for me is that it brings advanced visual editing to the masses, enabling people to better see what they want in the audio, and what they don’t. Here is a good example to go along with the Car Beep example they have at the bottom of their page. For those who aren’t familiar with this type of display, it is a plot of frequency (height), time (which progresses horizontally, as in any other audio software), and volume/intensity (the brighter the orange, the louder the sound.)
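
If you have never generated one of these displays yourself, the same kind of picture is easy to make with matplotlib’s specgram. This is obviously not RX- it is just a synthetic 2 kHz beep over some noise, with numbers made up for the illustration, so you can see what the description above is talking about:

```python
import numpy as np
import matplotlib.pyplot as plt

sr = 44100
t = np.arange(0, 4.0, 1 / sr)

# Background "program material" plus a repeating 2 kHz beep, roughly like
# the car-beep example described above.
background = 0.05 * np.random.randn(t.size)
beep = 0.2 * np.sin(2 * np.pi * 2000 * t) * (np.mod(t, 1.0) < 0.1)
audio = background + beep

# Same kind of display RX uses: time runs left to right, frequency is the
# vertical axis, and brightness shows intensity.
plt.specgram(audio, NFFT=2048, Fs=sr, noverlap=1024, cmap="inferno")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Repeating 2 kHz beep over program material")
plt.show()
```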

You can clearly see the repeating beep at around 2 kHz and with RX you are able to use multiple selection tools to select the beep (and any harmonics when using the Magic Wand selector.) Holding down Shift enables you to use different tools and keep adding to your selection until you have what you want.

Once the problem areas are selected, there are many options for dealing with the issue, the most flexible of which is Spectral Repair. This enables you to bring down or entirely remove a problem area in the spectrum and even resynthesize what you expect to hear. It is actually possible to replace entirely missing audio using the surrounding context, which is ridiculously cool, although finding an application where it works great can be a challenge. I’ve had excellent results with Attenuate (the most hammer-like tool in Spectral Repair) and Partials+Noise (perhaps the most surgical of the Spectral Repair tools.)

Standard restoration processes are available in RX, enabling you to remove noise, hum, crackles, and even clipping from audio. Using noise reduction has always been an art of compromise- trying to remove the most noise while leaving the most desired sound intact. The better the tools get, the less compromise you need to make and the further you can push them and still get results. RX is a big step forward in many ways and really enables some amazing editing. Below is the car beep audio spectrum display for “after RX.” Please note, this display is not a capture of the “after” sample from their website; it shows the actual results after my own processing using RX and the Partials+Noise function in Spectral Repair.

- Joshua

Cool Tips and Tricks- volume “automation” with Melodyne

Posted: 4th April 2011 by Mezzanine Floor Studios in Hot Gear, Tech Ninja

Those that have paid attention to music technology in the past 11 years are probably familiar with an amazing coup.

Antares Autotune had been the pitch correction tool of choice since the beginning of the Pro Tools era. Then, in the year 2000, a German company called Celemony came along and offered Melodyne, a product with a fresh perspective on detecting and editing pitch, timing, and volume.

Their natural, visual way of seeing and manipulating a performance was a revolution, and has largely relegated AutoTune to studios that like using tools they are already familiar with and folks who produce vocals a la Black Eyed Peas and T-Pain.

Celemony then blew everyone’s minds a few years ago by releasing Melodyne Editor- software that could accurately detect all the pitches in polyphonic material (where more than one note is played at once.) This made it useful for fixing the one note that a pianist played wrong on a performance, or even for dropping samples/loops that were created in a different key or with a different chord structure into a project and making them work.

One of my favorite simple uses of Melodyne is in fixing volume inconsistencies that can occur with some instruments at different octaves. The bass is a good example. Often when a bassist starts playing at a higher octave the notes are quieter than notes at lower octaves (like below.)

Melodyne makes it very easy to deal with this by following a few easy steps:

1. Select the Amplitude tool
2. Select one of the notes you want to change
3. In the menu, choose Edit, then Select Special, then Select Same Notes in All Octaves
4. Grab any of the notes that are now selected and drag upwards to increase the amplitude (volume)

 

The same technique can, of course, be used for editing anything else where the volume is inconsistent. Falsetto sections of vocals sung amidst full chest voice (think The Fray) are perfect for this as well.

- Joshua

Why is lack of musical talent something to be proud of?

Posted: 29th March 2011 by Mezzanine Floor Studios in Talent and Technology

At the risk of sounding bitter and jaded, this article really saddened me. How many kids will grow up thinking their iPad can make them into Justin Bieber or the Beatles?

Music talent not required

At the end of the day, it isn’t the technology that is off-putting to me. It’s the cavalier attitude the author had towards making music.

There will be those who will use this technology as a tool as they try to grow their own natural skills with training and practice and jamming with other musicians. And there will be those, like the author, I think, who will play with this technology for amusement and really enjoy it- never dreaming that it will make them a star. For both of these groups I say “right on.”

But there are also folks who will read the author’s words and will be convinced that this technology makes them talented. They won’t put any work into actually learning about music or honing their skills, but will have the attitude of a prima donna all the same. It is to these folks that I say “grow up and put some work into becoming a better vocalist, musician, composer, etc.”

The next time someone with no talent and no drive to work at becoming better comes to me and wants to play me the “epic” songs they recorded using this technology I’m going to write a short little note to the author of this article thanking him for giving a generation of wannabes the vision and courage to be proudly mediocre.

 - Joshua

Analog sound in a digital world

Posted: 21st March 2011 by Mezzanine Floor Studios in Audiophile

I love digital audio recording. We’re at the point in technological history where a tiny flash recorder with built in mics can capture sound with portability and accuracy that folks only dreamed about back in the “golden age.”

Just about any person with a decent laptop, software, and good ears can make a decent recording, breaking the chains that once held many artists back from having their music produced and distributed to the world. The gap in quality between what an indie artist can do in a buddy’s garage and what the music industry pumps out at a harried pace has been growing narrower and narrower, and I think that is awesome.

Yet I can’t help feeling that this revolution has come at a cost. We heard it when digital first broke into the mainstream. The harsh highs and brittle tones of the first ADAT-recorded albums were a stark contrast to the warmth and depth of analog tape. We hear it to this day in many home recording projects. Music that should bloom with all the vibrance and warmth of spring feels somehow sterile, cold, and stark- a bit more like winter than we first expected. I found myself years ago realizing that my idea of a successful digital recording was one where I had sucked the least amount of tone, soul, and heart out of the performance by recording it.

This revelation birthed in me the desire to reach beyond this limitation and find whatever combination of hardware, plugins, and knowledge I needed to get it right. Now when I approach a project I work hard to find the right balance of digital clarity with analog warmth and tone- to get the “season” right, if you will. An acoustic guitar isn’t meant to sound thin and brittle at its core: it’s meant to sound full and warm, vibrant and alive! Each instrument has an element that it brings to the mix, each song a feeling it brings to an album, and each album a feeling of hope or desperation, life or death, beauty or pain. To me it is important to get it right each step of the way when crafting an album, bringing each instrument, song, and album to the place where it feels like it was meant to be.

This starts with the performance, the instrument, the sound of the room. The soul of this performance must then be captured with a microphone and preamp that make me forget I’m listening to a recording. Once captured into digital, I then spend a lot of time focused on what each element needs to be- what brings it to life most clearly and completely within the context of the mix. This may be as simple as adding an EQ without touching a preset or button, just because the character of that EQ makes the instrument sound more alive. It may be deciding whether or not to add tape emulation or tube warmth to a track- deciding whether the inherent cleanliness of digital helps me hear the soul of the sound more clearly, or whether adding warmth, edge, or even crunchiness to the sound will make it dance. Or it may be compressing the living daylights out of the drum kit because that makes the drums dance in a way that drives the song forward.

I remember hearing once that being an excellent front of house engineer meant being invisible to the audience- not having any mistakes for people to notice. While I still find some truth in this, I can’t help feeling that there is cowardice and insecurity in this way of thinking: my job as an engineer or producer is not to play it safe and simply capture the artistry around me. Rather, I must recognize the artistry inherent in the work I do, in the hopes that the artistry I capture and the artistry I bring might result in something greater than the sum of the parts.

At the end of a mix I don’t want anything left on the table- not a trace of heart or soul or life. Every ounce of these should reside deep in the elements of the song, bringing the color and emotion and perspective of the song to the listener with nothing held back. That is the philosophy behind the work that Jonas and I do at Mezzanine Floor Studios. It’s about recognizing the heart and soul and season of an instrument or song or album and bringing that to light.

- Joshua