Monday, October 31, 2005

Recording Studio Techniques

I have been recording in my own studio environment for over 25 years, and in that time I have read a number of books, magazine articles, and websites on the subject, and taken videotape courses as well. Along the way I have both collected and invented a number of studio techniques that I thought might be useful to post here - just on the outside chance that someone might come upon this in their travels....

Some of these I picked up in a course or an article (like the first one here on Stereo Miking), and others I discovered through need or sometimes even by mistake. And sometimes I earned the lesson through hundreds of hours of tweaking and experimentation.

For this first post on the subject, I will cover three techniques to get started, with more to add in future posts. Here is the first installment:

Stereo Miking:
Microphone placement is a big part of the art of recording. Things sound different depending upon where you hear them from. To capture a realistic sound image of an instrument the way you hear it in person, you need a stereo image of it, because you have two ears and that is how you hear things normally. To get a stereo image (a left and a right track) of an acoustic instrument like an acoustic guitar, you usually need two mics (not always - for my Ovation acoustic, I mike one side and plug in for the other and get a blend of the two distinct sounds). The question becomes: how do you position the mics for a stereo effect that is similar to what a real person hears sitting there in front of the musician?

Part of the problem is something called 'phasing'. When the two microphones sit at different distances from the instrument, one mic picks up the sound at one point in the sine wave, and the other mic picks it up a little later, at another point in the same wave. If the two are exact opposites - one at the peak of the sine wave while the other is at the lowest point of the valley - then theoretically one cancels out the other and there is no sound at all. In reality it is never that precisely opposite, so what really happens is a very thin, weak overall sound when the two are mixed together, because part of the sound from one side is being cancelled out by the other side.
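If you like to see the numbers behind that, here is a minimal sketch in Python (using numpy; the 440 Hz tone and the exact half-wavelength spacing are made-up examples, not a real session) of how two mics catching opposite points of the same wave cancel each other out:

```python
# Minimal phasing sketch: two "mics" pick up the same 440 Hz tone,
# but the far mic sits half a wavelength farther from the source.
import numpy as np

sample_rate = 44100                       # samples per second
freq = 440.0                              # example tone, in Hz
t = np.arange(sample_rate) / sample_rate  # one second of time stamps

near_mic = np.sin(2 * np.pi * freq * t)

# Half a wavelength of extra distance delays the wave by half a cycle,
# so the far mic is in the "valley" while the near mic is at the "peak".
half_cycle = 1.0 / (2 * freq)             # seconds of delay
far_mic = np.sin(2 * np.pi * freq * (t - half_cycle))

mix = (near_mic + far_mic) / 2
print("one mic alone, peak level:", round(np.abs(near_mic).max(), 3))  # ~1.0
print("both mics mixed, peak level:", round(np.abs(mix).max(), 3))     # ~0.0
```

In a real room the offset is never this perfect, which is exactly why you get the thin, weak sound instead of total silence.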
To fix this, you would need two identical mics in EXACTLY the same position in front of the instrument. The problem then becomes that they pick up the EXACT same sound, so there is no difference, therefore no separation, therefore no stereo, therefore no point. There is a simple technique that solves this nicely, called "X-Y mic placement". You take two identical microphones (some people even try to buy sequential serial numbers to get them as closely matched as possible), then place them one on top of the other, pointing directly at the acoustic guitar. They should be separated by as little air as possible without touching, and the diaphragms inside the mics should line up exactly on top of each other. Now swivel each mic 45 degrees in opposite directions, so one points toward the bottom of the guitar and the other points up the neck toward the headstock. The mics need to be at a 90-degree angle from each other. This gives the effect of hearing sounds from each side of the instrument, the room, and the airspace around it AS THEY ARRIVE at the same spot - like your head does. Because the mic diaphragms are so close to each other, phasing is decreased as much as possible, and this allows a full, deep, rich sound as well as stereo imaging.
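To put a number on why the coincident placement helps, here is another small sketch (the 30 cm path difference and the speed of sound are just assumed round figures) showing how the arrival-time gap between spaced mics notches whole bands out of the sound, while an X-Y pair with near-zero delay leaves the spectrum intact:

```python
# Comb-filter sketch: mixing a signal with a delayed copy of itself
# cancels the frequencies where the delay equals half a cycle.
import numpy as np

speed_of_sound = 343.0        # m/s in room-temperature air
spacing = 0.30                # assumed 30 cm path difference between spaced mics
delay = spacing / speed_of_sound   # ~0.87 milliseconds

# Gain of (signal + delayed copy) / 2 at a few example frequencies:
freqs = np.array([100.0, 571.0, 1000.0, 1714.0])   # Hz
gain = np.abs(1 + np.exp(-2j * np.pi * freqs * delay)) / 2

for f, g in zip(freqs, gain):
    print(f"{f:7.1f} Hz -> gain {g:.2f}")
# Around 571 Hz and 1714 Hz the gain collapses toward zero: those parts
# of the guitar simply thin out. With X-Y mics the delay is nearly zero,
# so the gain stays close to 1.0 across the whole spectrum.
```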

The Sound Landscape:
When you go to a concert, you actually hear the instruments coming from different parts of the stage. That is what gives it that added sense of reality and depth of space. To create that same expansive feel, I try to do the same thing when I do my final mixdown from 32 tracks to a 2-track stereo image. You can do this using the stereo pan controls on each track for side-to-side placement, and using reverb and/or delay for front-to-back depth-of-field placement. If you have 3 main vocals, you can get different effects by placing the singers all together in one spot, or by separating them across the sound panorama. Keep in mind that placing them together will usually give a tighter sound, but you will sacrifice clarity. Spreading things out geographically will give a spacious feel and a clearer sound, but some things that you might want to sound tight and together (like some background vocals, or a horn section) might sound too loose for your intended sound picture.
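A common way to do the side-to-side placement is a constant-power pan law, so tracks don't get louder or softer as they move across the stage. Here is a minimal sketch in Python (the band layout is just an invented example):

```python
# Constant-power panning: pan runs from -1.0 (hard left) to +1.0 (hard
# right), and the cosine/sine gains keep the overall loudness steady.
import numpy as np

def pan_gains(pan):
    """Return (left_gain, right_gain) for a pan position in [-1, 1]."""
    angle = (pan + 1) * np.pi / 4     # map [-1, 1] onto [0, pi/2]
    return np.cos(angle), np.sin(angle)

# An invented stage picture: lead vocal center, two backing vocals
# grouped a little left of center, guitar off to the right.
stage = {"lead vocal": 0.0, "backing 1": -0.35, "backing 2": -0.45, "guitar": 0.6}
for name, pan in stage.items():
    left, right = pan_gains(pan)
    print(f"{name:11s} pan={pan:+.2f}  L={left:.2f}  R={right:.2f}")
```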
Here's how I think of it: think of painting a picture of the band. Do you paint all the players in one spot, all in the middle of the stage, or do you spread them out? Which ones do you group together in one area, and how close are they standing? When you have a mental picture of where people are standing, then you will know where their sounds have to come from. Pan them to their spot on stage in your mental painting. Use the reverb and delay to place them forward or back on the stage, and then use volume adjustments to zoom in on someone for special solo parts, etc. Also consider that people don't always stand in one place through the whole song. How about allowing people to move once in a while? Take a guitar solo and gradually pan it from center off to the left, then increase the reverb as the player drifts toward the back. Or imagine the opposite, where an instrument or a singer comes in from the side and the back, out to center stage for a solo. This is what the automation in recording systems like Pro Tools is excellent for. Every song has its own settings for everything, and the settings can change dynamically as the song moves along.
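As a sketch of that kind of automated movement (everything here - the plain tone standing in for the solo, the 8-second length - is made up for illustration), you can glide the pan position sample by sample the way a DAW automation lane would:

```python
# Automation sketch: a "solo" glides from center stage off to the left
# over 8 seconds, the way a pan automation lane would move it.
import numpy as np

sample_rate = 44100
n = sample_rate * 8                          # 8-second solo
t = np.arange(n) / sample_rate

solo = 0.5 * np.sin(2 * np.pi * 330.0 * t)   # stand-in for the guitar solo

pan = np.linspace(0.0, -1.0, n)              # automation curve: center -> hard left
angle = (pan + 1) * np.pi / 4                # same constant-power law as above
left, right = solo * np.cos(angle), solo * np.sin(angle)

stereo = np.stack([left, right], axis=1)     # (samples, 2), ready for a wav writer
print("start L/R gains:", round(np.cos(angle[0]), 2), round(np.sin(angle[0]), 2))
print("end   L/R gains:", round(np.cos(angle[-1]), 2), round(np.sin(angle[-1]), 2))
```

Feed a rising reverb send off the same curve and the player walks toward the back of the stage as he pans left.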

Jigsaw Puzzle EQ Strategy:
One thing often overlooked by musicians (and less-experienced recording engineers) is that an instrument that sounds great on its own sounds different once it is in the mix with the other instruments. Some frequencies cut through, and others are trampled by similar frequencies from other instruments. In fact, in the general mix of instruments there is often a general 'mud' somewhere in the middle, where the individual instruments lose their distinction and everything blends together in a messy, muddy way. This is usually not good. One way to avoid it is to use the EQ (equalization controls, which set the amount of boost or cut for each frequency range) on each instrument track to fine-tune the sound so that the different instruments fit together better. That way they all contribute to the sound of the 'band', but each instrument can still be distinguished.
To achieve this, I think of each instrument as a puzzle piece. I can carve away a part of one to make room for a part of the next instrument that needs to occupy that frequency space in the sound. For example, you might cut the guitar sound around 200 Hz so that the bass can be heard without turning it up so loud that it dominates the overall mix. Sometimes you can make a part stand out, almost like a solo, by using EQ inventively instead of always reaching for the volume. One rule of thumb for EQing is that you should generally not boost or cut any frequency on any track by more than about 3 dB. Any more than that, and you have a different problem. EQ can be used to fix certain problems, but only to a point. Too much adjustment and it's a mistake - the sound of the instrument loses its integrity and authenticity.
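As a concrete sketch of that 200 Hz carve (using the widely published "audio EQ cookbook" peaking filter; the noise standing in for the guitar track is obviously just a placeholder), here is how a 3 dB cut looks in Python with scipy:

```python
# "Jigsaw" EQ sketch: cut the guitar 3 dB around 200 Hz to leave room
# for the bass, using the standard RBJ cookbook peaking-EQ biquad.
import numpy as np
from scipy.signal import lfilter, freqz

def peaking_eq(fs, f0, gain_db, q=1.0):
    """Return (b, a) biquad coefficients for a peaking EQ."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 44100
guitar = np.random.randn(fs)                    # placeholder "guitar" track

b, a = peaking_eq(fs, f0=200.0, gain_db=-3.0)   # the 3 dB rule of thumb
carved = lfilter(b, a, guitar)

# Confirm the dip lands where we aimed it:
w, h = freqz(b, a, worN=[200.0], fs=fs)
print("gain at 200 Hz:", round(20 * np.log10(abs(h[0])), 1), "dB")  # ~ -3.0
```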
