October 30, 2012 / Randy Coppinger

AES 2012 SF

I’m collecting articles about the 133rd AES Convention in San Francisco. Please let me know if you’ve seen anything worth reading about the show so I can include it.

Overview and Big Hit of the Show, Hardware, and Software
Bobby Owsinski evaluates what he saw on the Exhibit Floor.

Was I Hearing The Earth?
A session on the history of analog recording on magnetic tape turns cosmic.

AES 2012 recap and more!
The Home Recording Show with Matt McGlynn, Bjorgvin Benediktsson, and yours truly.

New Audio-Technica AT5040 Microphone
A distinctive looking mic introduced at the show (RecordingHacks).

Project Studio Expo Connects Experience With The Desire To Learn
Craig Anderton shares his experiences at the first Project Studio Expo.

Articles in ProSoundNews:
Reinvented AES Reaches Success
Al Schmitt & Friends Share Info, Laughs

AES Press Release:
Project Studio Expo

October 29, 2012 / Randy Coppinger

Next Level VO

Sarah Elmaleh asked,

Is there anything that actors do to take perfectly usable takes to the next level?

Most top tier voice actors have mic technique: pulling back a bit for loud lines, moving in on quiet ones, turning slightly off axis for plosives, and otherwise using the physics of how a mic functions to enrich the performance. This skill seems especially prevalent among voice actors with a radio announcing background. It’s more than simply moving around. Veterans know when it is appropriate to make changes and how far to move, often very little. Working the mic, addressing it, is typically subtle when done well, but it adds real value to the recording.

Listen to all of the questions and answers… Dialog Editing for Game Audio.

October 27, 2012 / Randy Coppinger

Analogue Earth

Yesterday at the AES Convention in San Francisco I went to a presentation about the history and physics of analog recording on magnetic tape. Seeing oxide at a microscopic level was fascinating, but the question and answer period revealed a real gem. A question about erasing tape prompted presenter Jay McKnight to explain the “wub-wub-wub” sound of analog tape in fast transport. I remember that sound from my first studio job: the tape could be otherwise completely blank, but during high speed transport with the faders up you could hear this low frequency oscillation.

Jay said he can’t prove this (yet), but it is his opinion that erasing the tape (via erase head or bulk eraser) effectively biases it to record the magnetic field of the Earth. In fast transport the playback frequency rises high enough to let us hear the magnetic flux of our planet. Whether he can prove it or not, I get goose bumps at the idea.

I’m rounding up posts about the convention.

October 24, 2012 / Randy Coppinger

How A Unidirectional Ribbon Mic Works

The technical challenges of changing the polar pattern of a ribbon microphone are significant. If it were easy, I suspect most inexpensive ribbons would be manufactured with a cardioid pattern.

Standard ribbons are known as velocity microphones. They pick up sound at the front and back equally, rejecting sound from above, below, and the sides. The sound received at the back of the mic has the opposite polarity of the sound received at the front. In the transition areas between the on axis front and rear and the off axis top, bottom, and sides, the sound volume decreases. This bidirectional behavior is pure velocity.

An omni pattern is pure pressure. There is no axis, no transition between higher or lower volume; the angle of entry (direction) does not affect volume.

A cardioid pattern is a 50/50 mix of velocity and pressure. And there are patterns in between: velocity is dominant for hypercardioid, pressure is dominant for subcardioid.
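If it helps to see the mix as math: every first-order pattern is a weighted sum of the pressure and velocity responses. Here’s a minimal sketch (my own illustration, not from any manufacturer’s spec) of how the standard patterns fall out of one formula:

```python
import numpy as np

def polar_response(theta_deg, pressure_fraction):
    """Idealized first-order polar pattern: p + (1 - p) * cos(theta).

    pressure_fraction = 1.0  -> omni (pure pressure)
    pressure_fraction = 0.5  -> cardioid (50/50 mix)
    pressure_fraction = 0.0  -> figure-8 (pure velocity)
    Hypercardioid and subcardioid fall in between (~0.25 and ~0.7).
    """
    theta = np.radians(theta_deg)
    return pressure_fraction + (1.0 - pressure_fraction) * np.cos(theta)

angles = np.array([0, 90, 180])  # front, side, rear
print(polar_response(angles, 1.0))  # omni:     [ 1.   1.   1. ]
print(polar_response(angles, 0.5))  # cardioid: [ 1.   0.5  0. ]  null at the rear
print(polar_response(angles, 0.0))  # figure-8: [ 1.   0.  -1. ]  rear in opposite polarity
```

Note the figure-8 row: the rear response of -1 is the opposite polarity pickup described above, and the cardioid’s rear null is exactly the velocity and pressure components canceling each other.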

So to change the pattern of a pure velocity ribbon mic, a pressure component needs to be added. In The Microphone Book, John Eargle explains that the earliest directional microphones combined two separate elements, one pure velocity and one pure pressure, using electronic summing. But it became more common to use a single element. In the case of ribbons, part of the single ribbon element needs to be obscured to produce the pressure component. Some designs simply cover part of the ribbon with a plate. Other unidirectional ribbons use absorptive materials such as felt. Then there are elaborate designs with baffled chambers that prevent sound from reaching part of the ribbon element from the back side.

Ideally the scheme used to prevent part of the ribbon from receiving sound should be perfectly absorptive. But nothing is perfect, so some sound passes through that barrier. Or sound gets reflected back into the ribbon. Maybe both. Reflections can sound especially unnatural, causing comb filter effects. Have I mentioned this is difficult?

LISTEN

Earlier this month I collaborated with Recordinghacks, Igloo Music, master voice actor Corey Burton and ribbon microphone manufacturers to record some unidirectional models. Have a listen to the Unidirectional Ribbon Mic Shootout to hear for yourself what all of the fuss is about.


October 24, 2012 / Randy Coppinger

Greatest Hope

Morla Gorrondona asked,
What is your greatest hope for VO in games?

With editing, my greatest hope is that no one notices it. When we’ve done our jobs well, people enjoy the story and the gameplay, and are oblivious to the audio editing.

Listen to all of the questions and answers… Dialog Editing for Game Audio.

October 23, 2012 / Randy Coppinger

High Definition Game Audio?

There were some great follow-up questions and comments to J. S. Gilbert’s question about archiving from Production Milestones.

J. S. Gilbert asked,
Have you worked at 32 bit?

Yes, last fall I conducted a few experiments with 32 bit after getting ProTools 10. On one hand, I believe in working at a higher bandwidth. But if everyone else on the project isn’t using 32 bit, my use of it may offer little to no sonic benefit.

I believe a production meeting about tools and formats can help sort out these kinds of issues. If everyone agrees that 32 bit, 96 kHz, or other high bandwidth considerations are important, this can be determined early. Feasibility for high bandwidth can also be discussed. For example, do all collaborators have tools to support 32 bit, including integration? We need both the willingness and the tools to push up the bandwidth.

J. S. Gilbert followed-up with,
A lot of the plug-ins work in 32 bit. Does that make a difference?

In some cases it may. I suspect Avid recently adopted 32 bit floating point because newer computers use 64 bit architecture. Several companies have been committed to 32 bit floating point for years, and I’ve heard devotees rave about the improvement in quality. If everyone on a project is using 32 bit, I’m on board. If not, it may be better to stay at 24 bit.
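One concrete benefit of 32 bit floating point, sketched below with toy numbers of my own (this wasn’t part of the discussion): intermediate gain stages can push a signal over full scale without destroying it, which fixed point can’t do.

```python
import numpy as np

# A float32 signal boosted 6 dB over full scale, then pulled back down.
signal = np.array([0.5, -0.9, 0.7], dtype=np.float32)
boosted = signal * 2.0                  # peak is now 1.8, above 0 dBFS

# Fixed point (e.g. 24 bit) has to clamp at full scale, so the
# waveform is permanently flattened...
fixed = np.clip(boosted, -1.0, 1.0)
print(fixed * 0.5)      # [ 0.5 -0.5  0.5] -- the peaks are gone for good

# ...while 32 bit float simply carries the overs and recovers cleanly.
print(boosted * 0.5)    # [ 0.5 -0.9  0.7] -- original values restored
```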

Michael Csurics added,
I work at 24 bit, 96 kHz, mono. Just take a file and try to pitch it down. The higher your sample rate and bit depth, the more manipulative you can get with your file. In video games we tend to do a lot of sound design work off of dialog files and post processing in middleware like Wwise and so on. The more data you have to work with, the more manipulative you can get without stretching it to an unnatural point.

Excellent point. Representing audio with more data points preserves more information. Better resolution allows crazier DSP before the results become unconvincing.
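A quick back-of-envelope version of Michael’s point, using nothing more than the Nyquist limit (the sample rates are just examples):

```python
def top_frequency_after_pitch_down(sample_rate_hz, semitones_down):
    """Highest frequency left in the result when a file is pitched down
    by slowing playback (varispeed-style), in Hz."""
    nyquist = sample_rate_hz / 2.0
    shift = 2 ** (-semitones_down / 12.0)   # e.g. 0.5 for one octave down
    return nyquist * shift

# Pitch dialog down one octave (12 semitones):
print(top_frequency_after_pitch_down(48_000, 12))   # 12000.0 Hz -- dull
print(top_frequency_after_pitch_down(96_000, 12))   # 24000.0 Hz -- full band
```

A 96 kHz recording carries an extra octave of (inaudible) top end that slides down into the audible band when the file is pitched down, which is exactly the headroom for manipulation Michael describes.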

Then Michael Csurics asked,
Do you tend to work in ProTools or more traditional two track editors like Soundforge?

There are a lot of great software options. Because I am comfortable in ProTools, that’s what I use most. But if people prefer other applications, great. Use whatever helps bring out your best sound and efficiency.

Listen to all of the questions and answers… Dialog Editing for Game Audio.

October 22, 2012 / Randy Coppinger

Mistakes, Changes and Re-Use

J. S. Gilbert asked,
What about archiving? Do you archive iterations, such as raw dialog in 24 bit format, and then perhaps normalized, then perhaps with effects applied? What is a good way to handle this?

As stated previously, batch normalizing is pure evil. Don’t do it.

The larger a project is, the more likely these things will happen:
> Mistakes will be made,
> People will change their minds, and
> Others will re-use the assets

In short, game audio tends to be re-visited. Let’s take an example: a set of files needs to be mastered differently. We’d prefer not to start again from raw recordings, which would also need head/tail edits, internal cleanup edits, and file naming. This is why I like to save full file sets at major milestones along the way. Then I can choose how far back I need to go to fix a problem or make a change. And when it’s well organized, collaborators also benefit from quick access to the parts they need.

Some file sets worth building and organizing as you go:
(1) Raw, unedited recordings
(2) Edited, named, but unmastered
(3) Mastered (processed) final audio

It can be pretty annoying to create intermediate parts after the fact. But if you build this into your workflow it doesn’t cost a lot of extra time. Time spent may break even after just one fix or change.
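As a sketch of what building it into your workflow might look like (the folder names here are hypothetical, not a standard):

```python
import shutil
from datetime import date
from pathlib import Path

def snapshot(stage_dir, archive_root="archive"):
    """Copy a working file set (e.g. '2_edited_named') into a dated
    milestone folder so any stage can be recovered later."""
    src = Path(stage_dir)
    dest = Path(archive_root) / f"{date.today():%Y%m%d}_{src.name}"
    shutil.copytree(src, dest)   # fails loudly if the snapshot already exists
    return dest

# e.g. snapshot("2_edited_named") -> archive/20121022_2_edited_named
```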

Listen to all of the questions and answers… Dialog Editing for Game Audio.

October 18, 2012 / Randy Coppinger

Editing vs Mastering

Chip Beaman asked,
Is mastering the files a part of the editor’s process?

It tends to be, yes. I edit and master dialog at the same time by choice. However, editing demands a person’s attention, like driving a car. If you ignore editing you won’t get the job done. But so long as nothing obviously wrong jumps out, mastering is insidiously easy to neglect. Good mastering requires that editing stop for a listen to mastering issues only. Giving each task its own brain space can feel a bit scattered, but deliberately setting one task aside to focus on the other, then switching back and forth, is essential.

Editing and mastering could be performed separately under the right conditions. Maybe in a large team with lots of specialization one person would edit and a different person would master. But because there is some overlap between these tasks, I believe one person doing both simultaneously is more time efficient.

(Mastering in game audio is more like pre-dubbing for film, or comping in music production. In this context, mastering means level setting and processing to get the dialog ready to implement into the game. But audio elements can be further manipulated by the game’s audio engine, unlike mastering for music production or print mastering for film, which result in a final product.)

October 17, 2012 / Randy Coppinger

Audible Whisper

Alexander Brandon asked,
Is there a compression process that you’ve found works best when normalizing whispered lines to be as audible as shouted lines?

Let me get something off my chest: I don’t batch normalize anything. In my experience, human ears are better at deciding volume. Batch normalization tends to make all of the quiet stuff louder while leaving the loud stuff where it was, flattening the contrast between them. It seems to cause more problems than it solves. So I work according to how things sound to my ears rather than batch processing.
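To make the complaint concrete, here’s a minimal sketch of what a batch peak normalizer actually computes (toy numbers, my own illustration):

```python
import numpy as np

def peak_normalize(samples):
    """What a batch normalizer does: rescale so the loudest single sample
    hits full scale. It never asks how loud the line actually sounds."""
    return samples / np.max(np.abs(samples))

whisper = np.array([0.02, -0.03, 0.025])   # peaks around -30 dBFS
shout = np.array([0.70, -0.90, 0.80])      # peaks around -1 dBFS

for name, take in [("whisper", whisper), ("shout", shout)]:
    gain_db = 20 * np.log10(1.0 / np.max(np.abs(take)))
    print(f"{name}: +{gain_db:.1f} dB")    # whisper: +30.5 dB, shout: +0.9 dB
# Both files now peak at 0 dBFS, so the deliberate contrast between a
# whispered line and a shouted one is flattened away.
```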

It’s also worth mentioning that quiet lines should be audible but still quiet. Variations in a performer’s intensity can be important for creating a believable context and may even help advance the plot. We cheat real world volume in media so we can hear the story, but I always prefer to preserve some volume perspective. Weird volumes can be distracting.

Actors who perform in front of a live audience learn a skill: how to be heard from a whisper to a scream. The folks up in the cheap seats need to hear actors on stage. When those stage actors come into the recording studio, they know how to articulate and project a whisper. I know of nothing that works better than an actor who can perform a legitimate stage whisper. Seriously. I know that’s not what Alexander asked but a believable, projected whisper is worth mentioning because it is incredibly effective.

When stuck with a soft, mumbly whisper, I tend to reach for EQ. If the actor worked close to the mic, thinning the low end will usually improve the sound, especially with whispers. The old Academy curve rolled off the extreme lows and highs, then punched up the presence frequencies. That’s a good starting point, but it depends on what else is going on during that dialog. After hearing the mastered dialog in game, it may beg for re-mastering adjustments to optimize that whisper in context. Sorry, I don’t have simple answers for this question.
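For what it’s worth, here’s a rough sketch of that starting point in code; the corner frequencies and the amount of presence lift are my guesses, not a recipe:

```python
from scipy.signal import butter, sosfilt

def whisper_eq(samples, fs=48_000):
    """Thin the low end, then nudge the presence region -- in the spirit
    of the Academy curve starting point described above."""
    # Roll off the extreme lows (also tames proximity effect from
    # close mic work); 150 Hz is an assumed corner, tune by ear.
    sos_hp = butter(2, 150, btype="highpass", fs=fs, output="sos")
    thinned = sosfilt(sos_hp, samples)
    # Crude presence lift: blend in a band-passed copy around 2-5 kHz,
    # roughly +3.5 dB in that band.
    sos_bp = butter(2, [2_000, 5_000], btype="bandpass", fs=fs, output="sos")
    return thinned + 0.5 * sosfilt(sos_bp, thinned)

# usage: processed = whisper_eq(raw_samples, fs=48_000)
```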

Listen to all of the questions and answers… Dialog Editing for Game Audio.

October 16, 2012 / Randy Coppinger

WAV vs MP3

Debora Duckett asked,
If files are sent, do you prefer .wav, or since this will be compressed, is .mp3 fine?

I always prefer high quality, high bandwidth sound files whenever possible. If you keep as much detail as possible until the very end, the final, memory optimized (data reduced) audio will sound better.

Most media professionals – music, film, broadcast, photography, etc. – would prefer their source material to have more bandwidth than the delivery system.

Listen to all of the questions and answers… Dialog Editing for Game Audio.