By David Ciccarelli
May 14, 2007
Explore the tools of the trade and how they interact with each other.
Discover new recording studio techniques, written by Voices.com CEO, David Ciccarelli, a trained and knowledgeable recording engineer and honors graduate from OIART (Ontario Institute of Audio Recording Technology).
This is one of my favorite topics, as a graduate of OIART, a prestigious audio recording school here in London, Ontario, Canada. With technology being such a popular subject of late, particularly audio recording studios, I thought it would be appropriate to publish an article I wrote on VOX Daily about advanced recording techniques.
For some of you, this may be a review, however, for quite a few people it will be brand new.
Hard Disk / Computer-Based Recording
One of the biggest trends in recent audio production has been to merge digital audio with computer technology to create a sample-based approach to sound recording. The encoding of audio data into digital memory or onto a storage medium provides us with a means for storing and manipulating defined blocks of digital data. This data can be stored as a sound file in a format such as WAV, AIFF, or SDII.
Perhaps the most important difference between a tape-based system (digital or analogue) and a sample-based recording system is random access. Random-access production refers to the fact that digital audio can be stored in random access memory (RAM) or on a disk-based medium in such a way that the data can be accessed, processed, or reproduced virtually instantaneously, in any order, at any point in time.
As sample-editor software matured, developers found that, with additional processing hardware, digitized audio could be recorded directly to a computer's hard disk. These systems, known as digital audio workstations (DAWs), are computer-based hardware and software packages designed specifically for recording, manipulating, and reproducing digital audio that resides on hard disk.
Commonly, such devices are designed around and controlled by a standard personal computer, with the addition of a sound card that provides the audio input and output.
There are multiple advantages to using digital audio workstations in an audio production environment.
• The capability to handle longer sound files. Hard disk recording is limited only by the size of the hard disk itself (commonly, one minute of 16-bit stereo recording at 44.1 kHz occupies about 10.5 MB of disk space, or roughly 5 MB per track-minute).
• Random Access editing. As audio is recorded on the hard disk, any point within the program can be accessed at any time, regardless of the order in which it was recorded.
• Nondestructive editing allows audio segments (often called regions) to be placed in any order and manipulated in any fashion without changing the originally recorded sound file in any way.
• DSP. Digital signal processing can be performed on a segment or entire sound file in either real time or non-real time in a nondestructive fashion.
In addition to these advantages, computer-based digital audio devices integrate many of the tasks related to both digital audio and MIDI production. Many DAWs can import, process, and export sound files in formats such as MP3 or RealAudio G2.
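The storage figures mentioned above can be checked with a little arithmetic, assuming 16-bit samples (which the 10.5 MB-per-stereo-minute figure implies); the helper name here is illustrative:

```python
def pcm_bytes_per_minute(sample_rate_hz, bit_depth, channels):
    """Bytes needed to store one minute of uncompressed PCM audio."""
    bytes_per_sample = bit_depth // 8
    return sample_rate_hz * bytes_per_sample * channels * 60

stereo = pcm_bytes_per_minute(44_100, 16, 2)  # CD-quality stereo
mono = pcm_bytes_per_minute(44_100, 16, 1)    # a single track
print(f"{stereo / 1e6:.1f} MB per stereo minute")  # ~10.6 MB
print(f"{mono / 1e6:.1f} MB per track-minute")     # ~5.3 MB
```

The result lines up with the figures quoted in the list above.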
Filters
Also known as equalization or EQ, filters are used to increase or decrease the level in a specific range of audio frequencies. The most common filters are the simple bass and treble controls found on inexpensive stereo systems, which act on a broad range of frequencies. But other filters are designed to surgically boost or cut very narrow bands of the audio spectrum.
As the simplest form of filter, a shelving EQ boosts or cuts all frequencies above or below a set point. A low shelving filter boosts or cuts everything below its corner frequency; likewise, a high shelving filter boosts or cuts everything above its corner frequency. (These are not the same as low-pass and high-pass filters, which only remove content above or below the cutoff rather than offering boost or cut.) A single control typically adjusts the amount of boost or cut.
These filters are useful for making broad changes like reducing boomy bass and wind noise. But encoders can easily be overloaded by too much bass or treble, so it's often wisest to use these filters to cut high and low frequencies to prevent artifacts.
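As a sketch of that low-cut advice, here is a minimal first-order high-pass filter in pure Python; the function name and cutoff value are illustrative, and a real project would typically reach for a DSP library instead:

```python
import math

def low_cut(samples, cutoff_hz, sample_rate_hz):
    """First-order high-pass filter: attenuates content below cutoff_hz."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)  # RC time constant for the cutoff
    dt = 1.0 / sample_rate_hz
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for n in range(1, len(samples)):
        # y[n] = alpha * (y[n-1] + x[n] - x[n-1])
        out.append(alpha * (out[-1] + samples[n] - samples[n - 1]))
    return out

# A DC offset (0 Hz) is far below an 80 Hz cutoff, so it decays toward zero.
dc = [1.0] * 1000
filtered = low_cut(dc, 80.0, 44_100)
print(abs(filtered[-1]))  # very small: the low-frequency content is gone
```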
Bandpass filters can be used to boost or cut audio on both sides of a center frequency. They are commonly used as midrange filters, because they have little effect on either high or low frequencies. The familiar graphic equalizer is just a set of bandpass filters tuned to different center frequencies.
More sophisticated versions, called sweepable bandpass filters, add a control for changing the center frequency. Bandpass filters are useful for increasing the intelligibility of a speaker without increasing hiss or background noise. A variation on the bandpass filter is the notch filter, which cuts a narrow band of frequencies around its center while leaving the rest of the spectrum untouched.
A parametric filter is a bandpass filter with an additional control to adjust the width of the frequency band being affected (fig. 3). These are the surgical tools of audio editing: they can eliminate just the noise from an air conditioner while having minimal effect on the rest of the audio.
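As an illustration, a parametric (peaking) filter can be sketched as a biquad using the widely published Audio EQ Cookbook formulas; the function names and the Q value chosen here are illustrative, and the `q` parameter is the "width" knob described above:

```python
import math

def peaking_eq_coeffs(fs, f0, gain_db, q):
    """Biquad coefficients for a peaking (parametric) EQ,
    per the Audio EQ Cookbook. q controls the bandwidth."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a
    a0, a1, a2 = 1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a
    # Normalize so the leading denominator coefficient is 1.
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0]

def biquad(samples, b, a):
    """Direct-form-I biquad filter."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x1, x2, y1, y2 = x, x1, y, y1
        out.append(y)
    return out

# A narrow 6 dB cut centered at 1 kHz; DC is far outside the band,
# so a constant signal passes through essentially unchanged.
b, a = peaking_eq_coeffs(44_100, 1000, -6.0, 1.4)
out = biquad([1.0] * 500, b, a)
```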
With all filters, it's important to follow the audio engineer's first rule of EQ: cut rather than boost wherever possible. Cutting undesired sounds is always less obtrusive, while boosting too much can make a track too loud and lead to distortion and artifacts when encoding.
Compressors
A compressor's basic function is to reduce the dynamic range of an audio recording: the difference between the loudest and softest sounds that pass through the recording chain. Simply put, a compressor is a processor whose output level increases more slowly as its input level increases.
By reducing the volume of the loudest sounds, a compressor lets you raise the level of the entire audio track, making it all sound louder than it actually is. Compression can be a big help in achieving intelligible audio tracks with a more uniform volume that will survive the encoding process.
A compressor consists of a level detector that measures the incoming signal, and an amplifier whose gain is controlled by the level detector.
A Threshold control sets the level at which compression begins. Below the threshold, the compressor acts like a straight piece of wire. But when the input level reaches the Threshold, the compressor begins reducing its output level by an amount determined by the Ratio control.
The Ratio control establishes the proportion of change between the input and output levels. With a 2:1 Ratio, for every 2 dB the input rises above the Threshold, the output rises by only 1 dB.
If you set the Ratio to its maximum (10:1 or more), the compressor becomes a "limiter" that locks the maximum level at the Threshold.
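The Threshold and Ratio behavior described above can be sketched as a static gain curve (ignoring attack and release timing); the function name and dB values are illustrative:

```python
def compressor_output_db(input_db, threshold_db, ratio):
    """Static compressor curve: below the Threshold the signal passes
    unchanged; above it, every `ratio` dB of input yields 1 dB of output."""
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio

print(compressor_output_db(-30, -20, 2.0))  # below threshold: unchanged, -30
print(compressor_output_db(-10, -20, 2.0))  # 10 dB over -> 5 dB over: -15.0
print(compressor_output_db(0, -20, 10.0))   # 10:1 is near-limiting: -18.0
```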
While a compressor can level out a recording, high levels of compression can also introduce artifacts, including "pumping", an audible up-and-down change in the volume of a track, and "breathing", which sounds like someone inhaling and exhaling as the background noise level rises and falls with the gain changes.
Expanders
An expander is the opposite of a compressor: it increases dynamic range by making level differences larger. An expander can be used to reduce noise in a process called downward expansion. In this case, you set the Threshold just above the level of the background noise. The expander then leaves everything above the Threshold unchanged, but pushes everything below the Threshold down even further, thereby lowering the perceived background noise.
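Downward expansion can be sketched with the same kind of static curve as the compressor; the function name and dB values here are illustrative:

```python
def downward_expand_db(input_db, threshold_db, ratio):
    """Downward expansion: levels above the Threshold pass unchanged;
    levels below it are pushed down further, reducing audible noise."""
    if input_db >= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) * ratio

print(downward_expand_db(-10, -50, 2.0))  # speech above threshold: unchanged
print(downward_expand_db(-60, -50, 2.0))  # noise 10 dB under -> 20 dB under
```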
Normalizing
Normalizing increases the gain of an audio file until its loudest point (or sample) sits at maximum level. The overall signal level is now higher, which makes for clearer audio and gives the encoder more of the available resolution to work with, reducing encoding artifacts. The only downside of normalizing is that it raises the noise along with the audio signal, so it should be used carefully. It should be your last step before encoding, and you may not need it at all.
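A minimal peak-normalization sketch, assuming floating-point samples in the -1.0 to 1.0 range (the function name is illustrative):

```python
def normalize(samples, peak=1.0):
    """Scale a signal so its loudest sample hits `peak`.
    Note the gain is applied to everything, noise included."""
    loudest = max(abs(s) for s in samples)
    if loudest == 0:
        return list(samples)  # silence: nothing to scale
    gain = peak / loudest
    return [s * gain for s in samples]

quiet = [0.1, -0.25, 0.05]
print(normalize(quiet))  # loudest sample (-0.25) is scaled up to -1.0
```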
Has this article been helpful? If you have anything to add, leave a comment.
Looking forward to hearing from you,
©iStockphoto.com/Aleksandar Kolundzija
Related Topics: booth, recording studios, techniques