February 7, 2015 / Randy Coppinger

Profile: Doppler Studios, Atlanta

Entry to Doppler Studios in Atlanta, Georgia

Whenever I’ve needed to record out of town, information about studios in that area has been very much appreciated. If you ever need to record in Atlanta, here’s some info about Doppler Studios. This seven-room facility is located about 15 miles north of the airport in Piedmont Heights. The Lindbergh Center Station is only 1.1 miles from the studio if you want to take the MARTA subway from the airport.

Shawn Coleman of Doppler Studios in Atlanta, Georgia

I’ve worked over the phone with Shawn Coleman for at least a decade. He’s our go-to guy for voice recording. He typically works in Studio G, a well-equipped room for voice actors, music production, and film/TV audio post.

We finally had the opportunity to show up in person for a session and were treated to working in Studio A, with a larger control room and a larger studio proper, including a baby grand piano.

Client View in Studio A at Doppler Studios in Atlanta, Georgia

Shawn knows we like the sound of their Neumann u87. He also hung a Neumann TLM 103 at a lower gain as a safety microphone, recorded on a separate track in case the actor suddenly shouted. The 40-input SSL 4000E was overkill for the two microphones we used, but it sure was a pleasure to see and hear. Always the professional, Shawn kept the session running smoothly and everything sounded great. What I didn’t know until this visit: chief engineer Joe Neil makes his own custom microphone preamps, which we used for our recording session. They sounded very nice feeding into the Manley compressor. We recorded at 32-bit, 96 kHz because it sounds better if we need any iZotope De-click to help clean up mouth noise.

After our session Shawn showed me their big music tracking room: Studio E. With a 56-input SSL, three large isolation booths, and a 30′ x 40′ studio proper, this would be ideal for recording large groups of musicians at the same time.
56 Input SSL Console in Studio E at Doppler Studios in Atlanta, Georgia

Below we see Shawn in the big room with two of the three iso booths behind him.
Studio E Proper at Doppler Studios in Atlanta, Georgia

So now you know a little more about this fine studio. I hope you find it helpful if you need to record in Atlanta. Now, here are some more pretty pictures.

Neumann u87 Microphone at Doppler Studios in Atlanta, Georgia

Outboard Gear Rack in Studio A at Doppler Studios in Atlanta, Georgia

View from Control Room into Studio A Proper at Doppler Studios in Atlanta, Georgia

Custom Microphone Preamplifiers at Doppler Studios in Atlanta, Georgia

View from Iso Booth into Studio E Proper at Doppler Studios in Atlanta, Georgia

February 4, 2015 / Randy Coppinger

How To Modify Your Shure SM57

Someone suggested that older Shure microphones sound better because the quality of the transformers was better. Then my friend Dave recommended that I rip out the transformer altogether! Since the Shure SM57 is inexpensive and the changes are relatively simple, I decided to try my first microphone modifications in the summer of 2007.

REPLACE THE STOCK TRANSFORMER

Shure SM57 microphone modification: desoldered element

The good folks at Mercenary Audio sold me some TAB Funkenwerk T58 transformers, designed specifically to replace the ones that come in stock 57s and 58s. First I unscrewed the back portion of the mic and de-soldered the contacts from the stock transformer to the microphone element. Then I released the XLR connector at the base and de-soldered it.

The next part was tricky… I had to remove the old transformer from the mic body, which was held in place by some sort of rubbery glue. The folks at Mercenary gave me a hot tip: put the lower body of the mic (not the element!) in a toaster oven for 3-5 minutes. Despite warnings I still managed to burn myself. And some of the glue spilled onto the metal tray. Bummer. That stuff would not come off, rendering the tray unsuitable for cooking food. But it did break the stock transformer free.

Then I needed to insert the new transformer and attach it so that it wouldn’t bounce around inside. A dot of double-stick foam seemed to work pretty well. Now I have to admit this doesn’t form the same acoustical cavity that the gooey glue did, so the new transformer may not be the only significant change. But since I didn’t have a supply of industrial glue or the manufacturing setup to apply it without trashing other parts of the mic in the process, I decided the sticky foam dot would suffice.

Shure SM57 modification: reattach XLR connector

Shure SM57 modification: reattach mic element

I used a solder sucker to clean off the residue from the previous wiring. I tinned with some good Kester solder because cheap stuff can sound bad. I soldered the transformer leads to the XLR connector, reattached the connector to the base, and soldered the other end of the transformer wires to the mic element.

Shure SM57 modification: enamel-painted “T” for Transformer upgrade

I threaded the body back onto the mic. Except for a little glue residue on the outside, this modified version didn’t look any different from a stock 57. So I added a personal touch: a red enamel “T” for Transformer to help visually differentiate this mic. I call it my Hotrod 57.

NO TRANSFORMER

I nicknamed the next modification the Sawed-Off 57. There were two separate goals here: (1) see if I could improve sonics by removing the transformer, and (2) create a low-profile mic that is easier to place in tight spots. This mod turned out to be more difficult than I expected.

Shure SM57 modification: hole drilled in chassis for drain wire post

Removing the back portion of the mic was easy enough, but when I went to attach the chassis of the mic to the drain wire I discovered the chassis wouldn’t take solder. So I left the project and came back to it the next day. I decided to try another approach: physically attaching to the mic body using a terminal. I took the chassis out into the garage to find out how difficult it would be to drill a hole in it. Piece of cake! The metal was soft enough to accept a standard wood bit in the drill. So far so good.

I soldered the terminal to the end of the wire. I pushed a machine screw through the hole in the chassis, intending to thread the terminal inside the mic body. But the nut and bolt took up too much room inside, so I moved the whole assembly outside the mic body.

Shure SM57 modification: wires soldered to mic element and chassis

I purposely cut the drain wire short so that it would bear any stress instead of the +/- connections. Having cleaned off the old solder and tinned the terminals on the mic element, I finished the soldering by connecting the positive and negative wires. Then I dropped the Sawed-Off into a SABRA-SOM SSM-1 shockmount. I would have preferred something smaller, but that’s a different project!

UNDER TEST

Shure SM57 modification on snare mic

My drummer friend Austin brought his kit into the studio as a favor to me. The Hotrod 57 sounded noticeably better than the stock 57 as a snare mic. And even in the somewhat bulky shockmount, the Sawed-Off 57 fit under the hi-hat better than the other versions. The Sawed-Off had 15 dB less output than the other two, which was no problem up close on a loud snare. And it sounded really great… significantly better than the stock 57 and the Hotrod 57. That was a pleasant surprise. But I also noticed a lot more sound getting into the Sawed-Off from behind. Perhaps the normal frequency limitation of a stock 57 is also a key design feature for the cardioid pattern. It also seemed very likely that removing the rear chassis of the mic exposed the back side of the element to sound that wouldn’t otherwise get in.
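For perspective, the arithmetic on that output drop is quick (voltage ratio = 10^(dB/20)):

    # 15 dB down in voltage terms
    loss = 10 ** (-15 / 20)
    print(round(loss, 3))  # ~0.178, i.e. less than a fifth of the stock output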

Shure SM57 modifications: comparison with other microphones

Another friend of mine let me put up a bunch of mikes while he sang and played acoustic guitar. This is when the limitation of the Sawed-Off 57 became obvious: on these quieter sources, the 15 dB loss without the transformer was an absolute deal breaker. I like the No Transformer 57 on loud sources like snare and electric guitar cabs, but the Hotrod 57 for sources with moderate volume.

CONCLUSIONS

The sound leakage into the back of the microphone was the biggest problem with the Sawed-Off version. So I decided to put the rear body back on the Transformerless version and paint a yellow “N” on it for No Transformer. Someday I’d like to bend both microphones 90 degrees like the Granelli Audio Labs G5790 so they fit better in tight spots.

January 2, 2015 / Randy Coppinger

More Machine Voices

In Part 1 of our discussion about Machine Voices we looked at sonic treatments to make a voice recording sound more like an automaton: re-recording, frequency, time, vocoding, speech synthesis, intentional misuse of tools, and layering.

ACTING
Before we get into more treatments, it is worth noting that sonic effects are not the only factor in making human speech sound like an android voice. When a synthetic intelligence is the goal, script writing and voice acting can help give us robotic clues. For example, HAL 9000 from Kubrick’s 2001: A Space Odyssey speaks without emotion. It’s creepy that the computer has no feelings, evidenced by the lack of word stress or pitch variation that humans naturally use. The classic Robot B-9 from Lost in Space used a monotone delivery to let us know its “Non-Theorizing” status. Many androids have been portrayed by voice actors using a monotone delivery, though pitch variations have also been removed from naturally intoned speech using processing (for example, the Cylon Centurions from the late 1970s).

Pace and pause can be used to sound intentionally procedural. C3PO has quirky pauses and a steady pattern to his dialog that is part of the android speech presentation. Concatenated speech, which we hear when calling for the local time or via Moviefone, demonstrates how it sounds when pre-recorded voice is presented piecemeal, organized for the listener by a computer based on current conditions. Interactive Voice Response is used by airlines, banks, and tech support so callers can get what they need with little or no “live human” time on the line. We can intentionally emulate these unnatural patterns before any treatment is applied.

Editing may also play a role. For example, the mechanical personality Max Headroom stuttered, like a series of bad edits, to let us know the voice was from a machine.

OVERDRIVE
Oscilloscope readout of an amplifier output: a 1 kHz sine wave clipping into 5 ohms, 10 V/division.

One of the sonic giveaways that we are listening to a machine is some kind of failure. Subtle to extreme distortion can help convince listeners that the electronics, transducers, and power supplies found in a machine are reproducing the voice we are treating. Sometimes the best part of re-amping is pushing the system into distortion, a little or a lot. Of course this can be simulated with software, or by passing signal through a piece of gear like a guitar effects pedal. Sometimes massive distortion, mixed back in subtly with the untreated signal, helps prove the idea while maintaining intelligibility.
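If you want to prototype this in software first, here’s a minimal sketch of that parallel-distortion idea in Python (numpy assumed; the sine wave is just a stand-in for your dialog recording, and the drive and mix amounts are arbitrary starting points):

    import numpy as np

    def overdrive(x, drive=10.0):
        """Soft-clip distortion: tanh saturates peaks harder as drive goes up."""
        return np.tanh(drive * x) / np.tanh(drive)

    def parallel_futz(dry, amount=0.2, drive=20.0):
        """Mix a heavily distorted copy quietly under the clean voice."""
        return (1.0 - amount) * dry + amount * overdrive(dry, drive)

    sr = 48000
    t = np.arange(sr) / sr
    voice = 0.5 * np.sin(2 * np.pi * 220 * t)  # stand-in for real dialog
    treated = parallel_futz(voice, amount=0.25, drive=30.0)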

GLITCH
Equipment failures don’t have to be limited to electronic or mechanical clip distortion. There is a wealth of opportunities to glitch a voice recording and make it sound less than human. Low-resolution digitization, such as 8-bit, 8 kHz, can give you some downright awful sounds. Hint: a lot of talking toys operate down in this range. You can downres using all kinds of different software, not just plugins. Or try hardware such as a guitar stomp box featuring bit crushing effects.
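Here’s a minimal bit-crush and downres sketch along those lines, assuming numpy; it deliberately skips any anti-alias filtering because the aliasing is the fun part:

    import numpy as np

    def bitcrush(x, bits=8):
        """Quantize to the given bit depth (talking-toy territory at 8 bits)."""
        half_levels = 2 ** bits / 2
        return np.round(x * half_levels) / half_levels

    def downres(x, sr, target_sr=8000):
        """Crude sample-rate reduction by sample-and-hold, aliasing on purpose."""
        step = int(sr / target_sr)
        held = np.repeat(x[::step], step)
        return held[: len(x)]

    sr = 48000
    t = np.arange(sr) / sr
    voice = 0.5 * np.sin(2 * np.pi * 220 * t)  # stand-in for real dialog
    glitched = bitcrush(downres(voice, sr, 8000), bits=8)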

@stonevoiceovers suggested using the Pro Tools AIR Frequency Shifter. Early pitch processors such as the Eventide H910 Harmonizer could sound garbled and glitchy, especially when pushed to extremes. Most pitch processing still sounds pretty synthetic at excessive settings. One idea is to simply pitch something way up or down and then process again to return to normal pitch: the filters and shuffling will add some great artifacts. Of course you can keep the pitch changes, even variable pitch change, to make something both simulated and glitchy at the same time.
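The pitch round trip is easy to try in code. This sketch assumes the librosa library is installed; the sine wave is only a stand-in for real dialog:

    import numpy as np
    import librosa  # assumed installed: pip install librosa

    sr = 22050
    t = np.arange(sr) / sr
    voice = 0.5 * np.sin(2 * np.pi * 220 * t)  # stand-in for real dialog

    # Shift two octaves up, then back down: the pitch lands where it started,
    # but the phase-vocoder round trip leaves glitchy artifacts behind.
    up = librosa.effects.pitch_shift(voice, sr=sr, n_steps=24)
    artifacted = librosa.effects.pitch_shift(up, sr=sr, n_steps=-24)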

You don’t have to pitch process to get chopped, stuttered sounds. There are plugins and hardware effects processors dedicated to these kinds of glitch effects. We can even get these kinds of sounds by abusing a simple tremolo or vibrato processor. And if you’re not afraid to experiment, try circuit bending an inexpensive consumer product that records and plays sound. Cheap toys can be especially fun to hack.
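As a taste of tremolo abuse, here’s a minimal square-wave chopper in Python (numpy assumed); push the rate up and the duty cycle down for harder stutters:

    import numpy as np

    def chop(x, sr, rate_hz=8.0, duty=0.5):
        """Tremolo pushed past taste: a square-wave LFO gates the voice on/off."""
        t = np.arange(len(x)) / sr
        lfo = (np.mod(t * rate_hz, 1.0) < duty).astype(x.dtype)
        return x * lfo

    sr = 48000
    t = np.arange(sr) / sr
    voice = 0.5 * np.sin(2 * np.pi * 220 * t)  # stand-in for real dialog
    stuttered = chop(voice, sr, rate_hz=12.0, duty=0.35)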

PROCESSING TOOLS
Some of my favorite plugins make processing easier by having several techniques readily available in a single display. For mediated voice futzes it’s hard to beat Speakerphone by Audio Ease. It has tons of great re-recording techniques simulated using convolution; both speaker and microphone emulations are included. Then you can pile on frequency manipulations with EQ, overdrive, room reflections, telecom codec simulations, and much more.

Whereas Speakerphone is a quick, accurate path for emulating the real world, FutzBox offers a palette of sound-shaping parameters to play with and create your own flavor. Start with a speaker emulation, then select options to downres, filter, and overdrive, or even add a noise layer. The interface makes it fun to experiment with different combinations.

I’ve worked in several studios that had a rackmount Eventide H3000 Harmonizer. If you ever have the opportunity to play with this kind of quirky box full of crazy treatments, indulge your ears with synthetic weirdness. A significant number of robot voices have been created using Harmonizers over the years. Eventide makes plugins these days, which is a more convenient way for everyone to access multi-effect sounds.

GO DEEPER
Ben Burtt is a living legend whose creative use of sound tools inspires us. From classic ARP synthesizers to the Kyma, he blurred the lines between human voice and machine with iconic robots from R2-D2 to WALL-E. We don’t have to use someone else’s plugin; we can devise our own treatment paths too. Techniques like vocoding and speech synthesis demonstrate a confluence of artistic and technical thinking that asks us to create new processes. But powerful tools with no specific voice processing agenda require patience to wield well, and a steep learning curve may not mesh with an inspired moment. These are deep waters. Come mentally prepared.

Native Instruments makes powerful music-oriented software like Kontakt for sampling and Reaktor for synthesis. To build your own processing, consider Pure Data and Max/MSP. More tool recommendations: 10 Great Tools For Sound Designers, What’s The Deal With Procedural Game Audio, and a Google search for new ideas in sonic tools, including audio-related discussion groups.

TIPS AND TRICKS
(1) Levels. Metallic resonances and other synthetic treatments can generate crazy level spikes. Watch your input and output levels so you don’t produce unwanted clipping.
(2) Dynamics. Just because your dialog was compressed and/or limited before you treated it doesn’t mean you can ignore dynamics after. Consider another pass through dynamics processing so your low-level sounds don’t get lost, and the newly created signal peaks don’t prevent you from setting this dialog’s loudness on par with everything else.
(3) Diction. When you mangle the 2–5 kHz range, the intelligibility and presence of the dialog may be diminished. Our ear/brain system uses the 6–8 kHz range to distinguish S sounds from F sounds, with potential confusion for other sounds like TH, SH, or CH. Sometimes you just need EQ to enhance these areas. Other times you may need to blend in some unprocessed or less processed voice in these frequency ranges to recover the diction that was lost from treatment (see the sketch after this list). If you lower resolution, remember that Nyquist showed we need a sampling rate of at least twice the highest frequency we want to represent, meaning a downres to 8 kHz constrains the audio bandwidth to only 4 kHz!
(4) Distortion. If you choose overdrive as part of your processing chain, keep in mind that distortion is a form of dynamic range compression. If you’re unsure whether to use it, remember that distortion can substitute for or complement any dynamics processing needed after applying other techniques. Even a little intentional clipping could help improve the signal chain, from a simple futz to a full-on sentient machine.
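Here’s that diction-rescue idea from tip (3) as a minimal Python sketch, assuming numpy and scipy; the filter band and blend amount are arbitrary starting points, and the two test signals are stand-ins for your real dialog:

    import numpy as np
    from scipy import signal

    def diction_blend(processed, dry, sr, lo=2000.0, hi=5000.0, amount=0.3):
        """Band-pass the untreated voice around the intelligibility range
        and tuck it under the processed version."""
        sos = signal.butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
        presence = signal.sosfiltfilt(sos, dry)
        return processed + amount * presence

    sr = 48000
    t = np.arange(sr) / sr
    dry = 0.5 * np.sin(2 * np.pi * 220 * t)       # stand-in for clean dialog
    processed = np.tanh(20 * dry) / np.tanh(20)   # stand-in for a heavy treatment
    rescued = diction_blend(processed, dry, sr)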

If you’ve got any tips, tricks, or other suggestions please share. Have fun making Machine Voices!

December 31, 2014 / Randy Coppinger

Machine Voices

A friend of mine was trying to make a voice sound like it was coming from a toy. He referenced the Extreme EQ article I wrote a while back. That conversation, combined with some recent projects I’ve been doing, inspired me to assemble more ideas about treating voice recordings for machine-like effect.

I first heard the term futz from some film mixing colleagues; it refers to changing a recording so that it sounds like it’s coming over a phone, an intercom, a megaphone, or some other mediated delivery of a voice. We can extend this idea to any kind of talking machine, whether it transmits a human voice or represents a sentient machine like HAL 9000, C3PO, or Optimus Prime.

RE-RECORDING
Some of the earliest practical voice treatments were made by placing telephone speakers, megaphones, etc. inside a sound isolation box with a microphone. The interior of the box was often lined with sound-absorptive material to reduce audible reflections. Sound was fed into the emitter of choice and then recorded with the microphone. You don’t have to build the box unless you plan to do this kind of re-recording on a regular basis, but having an isolated rig can be handy if you do it often enough.

RadioShack Mini Audio Amplifier

I love re-amping, especially when I’m going for realism. You know, playing a sound file of someone speaking through my mobile phone sounds very convincingly like that person talking on my phone! Sometimes old-school re-recording is the better option: a quick and convincing method to get a machine-like voice treatment. And you don’t have to be a purist about it; you can add other forms of manipulation before and after.

(See also: Re-Amping Mix Tips One, Two, and Three)

FREQUENCY
Transducers found in machines often have a specific frequency response that we hear as machine-like. Extreme rolloffs and obnoxious, narrow boosts can help simulate an android, toy, or other talking machine. Listen to these kinds of sounds in the cinema, on TV, and in real-life talking devices to help you decide when your EQ settings create a convincing treatment. For some specific ideas see: Extreme EQ.
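To make that concrete, here’s a minimal Python sketch of an extreme EQ futz, assuming numpy and scipy: a steep band-pass plus a narrow resonant boost added in parallel. All the frequencies and gains are arbitrary starting points:

    import numpy as np
    from scipy import signal

    def telephone_futz(x, sr, boost_hz=1800.0, boost_q=8.0, boost_gain=2.0):
        """Extreme EQ: steep rolloffs outside ~300 Hz to 3 kHz, plus an
        obnoxious narrow resonance added in parallel."""
        sos = signal.butter(6, [300.0, 3000.0], btype="bandpass", fs=sr,
                            output="sos")
        band = signal.sosfilt(sos, x)
        b, a = signal.iirpeak(boost_hz, Q=boost_q, fs=sr)
        return band + boost_gain * signal.lfilter(b, a, band)

    sr = 48000
    t = np.arange(sr) / sr
    voice = 0.5 * np.sin(2 * np.pi * 220 * t)  # stand-in for real dialog
    futzed = telephone_futz(voice, sr)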

TIME
Some of the earliest robot voices included the sound of the speaker inside the chassis of the machine. A really tight delay can simulate that kind of reflection. And you can create some great metallic resonances by cycling the result back into the delay again and again in a time-smeared feedback loop. Delays under 30 ms or so can create comb filters, also known as “phasing.” If you slowly increase the delay time and then let it recover, you create flanging: comb filtering with variable notches and peaks. A closely related effect called chorusing also varies delay times back and forth for moving comb filters that can sound synthetic and hollow. @r0barr likes to use a ring modulator, which could be considered a frequency effect plus a time effect, because it smears specific frequency ranges over time by driving a filter into oscillation.
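Here’s a minimal feedback comb in Python (numpy assumed) if you want to hear the metallic ringing for yourself; the per-sample loop is slow but keeps the recursion obvious:

    import numpy as np

    def comb(x, sr, delay_ms=8.0, feedback=0.7, mix=0.5):
        """Short delay with feedback: comb filtering that rings metallically
        as the delayed signal cycles back into the delay line."""
        d = max(1, int(sr * delay_ms / 1000.0))
        y = np.copy(x)
        for n in range(d, len(x)):
            y[n] = x[n] + feedback * y[n - d]  # keep feedback below 1.0
        return (1.0 - mix) * x + mix * y

    sr = 48000
    t = np.arange(sr) / sr
    voice = 0.5 * np.sin(2 * np.pi * 220 * t)  # stand-in for real dialog
    metallic = comb(voice, sr, delay_ms=6.0, feedback=0.8)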

All of these forms of delay can have a synthetic, manufactured kind of sound. If you’re going for a vintage machine, or something subtle, a simple time based effect may be all you need. Or combine with other manipulations to mashup old and new sonic characteristics.

Some of my favorite plugins for these kinds of legacy effects are made by Soundtoys: EchoBoy, Phase Mistress, and Crystallizer.

(See also: Phase)

CODE
Both @daviddas and @recordingreview mentioned the TAL Vocoder by name. Vocoding breaks speech into smaller components; it was originally created to reduce the bandwidth needed to transmit a voice, and it was also used to encrypt voice communications, including for the military. By the 1960s, artists and technicians had collaborated on several different models of a “talking synthesizer,” or put another way, a singing machine. Because the artistic use of vocoders grew out of music, using one may benefit from musical knowledge. If you’re not a musician yourself, consider collaborating with your composer and musician friends.
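If you’d like to peek inside the idea, here’s a toy channel vocoder in Python, assuming numpy and scipy. It’s nowhere near a real product like TAL Vocoder, just the core band-split/envelope/carrier structure:

    import numpy as np
    from scipy import signal

    def vocode(modulator, carrier, sr, bands=16, lo=100.0, hi=7000.0):
        """Toy channel vocoder: split both signals into matching bands,
        measure the voice's energy per band, impose it on the carrier."""
        edges = np.geomspace(lo, hi, bands + 1)
        out = np.zeros(len(carrier))
        env_sos = signal.butter(2, 50.0, btype="lowpass", fs=sr, output="sos")
        for i in range(bands):
            sos = signal.butter(2, [edges[i], edges[i + 1]], btype="bandpass",
                                fs=sr, output="sos")
            voice_band = signal.sosfilt(sos, modulator)
            carrier_band = signal.sosfilt(sos, carrier)
            env = signal.sosfilt(env_sos, np.abs(voice_band))  # envelope follower
            out += carrier_band * np.maximum(env, 0.0)
        return out / (np.max(np.abs(out)) + 1e-12)  # normalize

    sr = 22050
    t = np.arange(sr) / sr
    voice = 0.5 * np.sin(2 * np.pi * 220 * t) * np.sin(2 * np.pi * 3 * t)
    saw = signal.sawtooth(2 * np.pi * 110 * t)  # buzzy carrier
    robot = vocode(voice, saw, sr)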

There’s a fascinating history about vocoders on this wiki page if you’d like to read more.

SPEECH SYNTHESIS
@MikeHillier made a really obvious suggestion that I had completely overlooked: “get Appletalk to say it.” @chewedrock recommended “Atari speech synthesis – software automatic mouth.” Speech synthesis is a great option for a machine voice.
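On a modern Mac, the spiritual descendant of that suggestion is the built-in say command. A minimal sketch driving it from Python (macOS only; the output filename and phrase are arbitrary, and running "say -v ?" in a terminal lists the installed voices):

    import subprocess

    # Render text-to-speech to an AIFF file using the macOS "say" command.
    subprocess.run(
        ["say", "-o", "machine_voice.aiff", "Initiating docking sequence."],
        check=True,
    )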

YOU’RE DOING IT WRONG
Intentional misuse and reinvention can be incredibly fun. Some of our beloved music processors, such as pitch correction, can be applied to spoken dialog instead of music for some very tasty synthetic voices. @grhufnagl said, “I really love using Melodyne to control pitch & time, alongside its formant control.”

Convolution and noise reduction may have been intended to emulate the real world and to clean noise out of recordings, respectively. But we can apply them in creative ways to generate interesting artifacts. Freeware and low-cost software tends to cut corners, making it more prone to audible errors that sound unnatural and weird. Almost any audio tool can be used in ways it wasn’t intended, to produce ear-catching flaws.

LAYERS
Good sound design often features the prominent sound that we notice, with layers of quieter elements adding color and flavor. This can work for machine voices too. We can use the voice signal as a key to open a gate on other sounds — static, digitization artifacts, droning guitars, and many, so many more sonic clues that the voice is being mediated. Add samples, such as servos to move a mechanical mouth, or the hum of a power supply. These finishing touches are like highlights and shadows in visual art that add believable, three dimensional characteristics.
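Here’s a minimal keyed-gate sketch of that layering idea in Python, assuming numpy and scipy; the threshold, envelope speed, and layer level are arbitrary starting points:

    import numpy as np
    from scipy import signal

    def keyed_layer(voice, layer, sr, threshold=0.02, env_hz=20.0):
        """Use the voice's envelope as a key: the layer (static, hum, servo
        bed, etc.) is only heard while the voice is above the threshold."""
        sos = signal.butter(2, env_hz, btype="lowpass", fs=sr, output="sos")
        env = signal.sosfilt(sos, np.abs(voice))
        gate = (env > threshold).astype(voice.dtype)
        return voice + 0.15 * layer * gate  # layer mixed quietly underneath

    sr = 48000
    t = np.arange(sr) / sr
    voice = 0.5 * np.sin(2 * np.pi * 220 * t) * (t < 0.5)  # talks, then stops
    static = 0.5 * np.random.randn(len(voice))             # noise layer
    mediated = keyed_layer(voice, static, sr)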

Next time: more treatments, more plugins, plus voice acting ideas and tips/tricks in Part 2

October 27, 2014 / Randy Coppinger

Cool Gear from the 137th AES 2014 Los Angeles

Here are some of the interesting things I saw on the exhibit floor.

(1) Triad Orbit was showing a very clever clamp with 5/8” threads for putting microphones in less traditional locations. The foam inside the clamp makes it safer to crank down on pretty fixtures, plus it adds gripping power to keep the clamp from sliding. It’s called the IO-C Mounting Clamp and I need several!
New Triad Orbit IO-C Mounting Clamp

(2) I like to stop by the Latch Lake booth in case they are giving away their fabulous Jam Nuts, which they were; I used both of them on a recording gig immediately following the convention. Latch Lake also introduced a burly new tripod mic stand with the same boom clutch found on their weighted-base models. Want.
Latch Lake introduces the Mic King 1100 stand at AES 2014 Los Angeles

(3) I saw and heard the new Cliff Mics ribbon. It was impressive on a number of levels. The magnets were so massive and strong, I thought they were going to pull the hair off my face. Interestingly, the cover was made of mesh cloth rather than metal.
The new ribbon microphone by Cliff Mics unveiled at AES 2014 Los Angeles

(4) On a recommendation I took some time to check out Miktek. Apparently the late, great Oliver Archut of TAB Funkenwerk designed most of their microphones. I was especially interested to hear the figure-8 of their multi-pattern mikes, with insanely good off-axis rejection and an even transition from on- to off-axis. Impressive.
Miktek C7e Large Diaphragm Multi-Pattern FET Condenser with highly accurate bi-directional pattern

See also New Microphones at AES 2014 from RecordingHacks
Bobby Owsinski’s AES Show New Gear Wrap Up: Part 1, Part 2, Part 3

Did you see something at AES that belongs on this list? Let me know, won’t you?

October 22, 2014 / Randy Coppinger

Dynamic Mixing for Games

My notes from Game Audio 7: Dynamic Mixing for Games, at the 137th AES Convention 2014, Los Angeles
Presented Oct 10 by Simon Ashby

Dynamic Mixing defined: A system that dynamically changes the audio mix based on currently playing sounds and game situations.

Middleware such as Wwise creates channels between the game engine and the audio engine. In Wwise these are called Sends.

Simon Ashby of Audiokinetic discusses  Dynamic Mixing for Game Audio

Dynamic mixing can help keep things interesting by modifying sounds on the fly so you don’t hear the exact same thing repeatedly. But it can also provide feedback to the user, such as volume going down to indicate greater distance from the player/camera.
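As a toy illustration of distance-driven volume (not how any particular middleware implements it), a simple inverse-distance gain curve in Python:

    def distance_gain(distance, min_dist=1.0, rolloff=1.0):
        """Inverse-distance attenuation: unity gain inside min_dist,
        then falling off as the source moves away from the listener."""
        return min(1.0, (min_dist / max(distance, min_dist)) ** rolloff)

    # Twice as far away is half the amplitude (about -6 dB) at rolloff 1.0
    for d in (1.0, 2.0, 4.0, 8.0):
        print(d, round(distance_gain(d), 3))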

In the same way live sound mixers can use snapshots to quickly go from cue to cue, middleware mix snapshots can be attached to triggers / mechanics in the game.

Side-chain is not just for ducking. It can drive other parameters such as EQ, pitch, sends, etc. In other words, you can drive a parameter setting based on the audio level of a different channel.
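As a toy illustration (not Wwise’s implementation), here’s a Python sketch where one channel’s level drives a low-pass cutoff on another, assuming numpy and scipy:

    import numpy as np
    from scipy import signal

    def sidechain_cutoff(key, sr, quiet_hz=8000.0, loud_hz=1000.0, env_hz=10.0):
        """Follow the key channel's level and map it to a cutoff frequency:
        the louder the key, the darker the destination channel gets."""
        sos = signal.butter(2, env_hz, btype="lowpass", fs=sr, output="sos")
        env = signal.sosfilt(sos, np.abs(key))
        env = np.clip(env / max(np.max(env), 1e-9), 0.0, 1.0)  # 0..1
        return quiet_hz + (loud_hz - quiet_hz) * env  # per-sample cutoff in Hz

    def timevarying_lowpass(x, cutoff_hz, sr):
        """One-pole low-pass whose cutoff changes every sample."""
        y = np.zeros_like(x)
        state = 0.0
        for n in range(len(x)):
            a = np.exp(-2.0 * np.pi * cutoff_hz[n] / sr)
            state = a * state + (1.0 - a) * x[n]
            y[n] = state
        return y

    sr = 48000
    t = np.arange(sr) / sr
    dialog = 0.5 * np.sin(2 * np.pi * 220 * t) * (t < 0.5)  # key channel
    music = 0.3 * np.sin(2 * np.pi * 440 * t)               # channel to darken
    ducked = timevarying_lowpass(music, sidechain_cutoff(dialog, sr), sr)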

HDR: High Dynamic Range, the audio version of HDR photos. Inputs with high dynamic range feed the HDR buss, and the delivery system ducks lower-volume inputs to allow the louder ones to be heard. It is more complex than buss compression or ducking: the result actually has lower dynamic range, yet it seems to have more range than compressed or unmastered audio.
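Here’s a toy sketch of the windowing idea in Python (numpy assumed); Wwise’s actual HDR is far more refined, and the hard per-block gain switches here will click, but it shows how the loudest source can gate out quieter ones:

    import numpy as np

    def hdr_mix(sources, window_db=12.0, block=1024):
        """Toy HDR buss: in each block the loudest source defines the top of
        the window; sources more than window_db below it are ducked out."""
        n = min(len(s) for s in sources)
        out = np.zeros(n)
        for start in range(0, n, block):
            end = min(start + block, n)
            rms_db = [20 * np.log10(np.sqrt(np.mean(s[start:end] ** 2)) + 1e-12)
                      for s in sources]
            top = max(rms_db)
            for s, level in zip(sources, rms_db):
                if level >= top - window_db:  # inside the window: heard
                    out[start:end] += s[start:end]
        return out

    sr = 48000
    t = np.arange(sr) / sr
    explosion = np.where(t < 0.3, 0.9, 0.0) * np.random.randn(len(t))
    footsteps = 0.05 * np.random.randn(len(t))
    mixed = hdr_mix([explosion, footsteps])  # footsteps vanish under the blast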

Audiokinetic has a YouTube Channel that includes some information about HDR.

Adaptive loudness and compression, as heard in Rob Bridgett’s mix for Zorbit’s Math Adventure: mix snapshots are triggered by the output state of the device (headphones, speaker, AirPlay). This can help protect the user’s hearing and otherwise optimize for the listening scenario. Compression was also applied based on the volume measured at the mic input of the device, designed to help the listener hear better when playing in a loud environment. This was only applied to headphones, because the speaker and AirPlay outputs would form a feedback loop back into the microphone.

October 20, 2014 / Randy Coppinger

Game Audio Middleware

My notes from Game Audio 5: Audio Middleware for the Next Generation, at the 137th AES Convention 2014, Los Angeles
Presented Oct 10 by Steve Horowitz and Scott Looney

The key thing that separates linear media production from interactive is: indeterminacy. Middleware helps us manage this difference.

Justification of middleware:
(1) Puts more audio control in the hands of audio people, and
(2) Simplifies work for coders.

Steve Horowitz of Game Audio Institute presents at AES 2014 Los Angeles

Middleware for multiple development platforms: FMOD, Wwise
Unity specific middleware: Fabric, Master Audio

FMOD Studio is now sample based, not frame based.

It was suggested during this discussion that Master Audio seems ideally suited for 2D and casual games. It supports all systems to which Unity can publish, including web. It has better documentation than Fabric.
UPDATE Jan 5, 2015:
I originally reported Master Audio as Open Source, but it is not. When you buy, you get access to all of the source code — true. But game makers do not submit new code to Dark Tonic to update the product, rather Dark Tonic takes responsibility to write and publish Master Audio and allow game makers access in case they want to add code for their game. Brian Hunsaker of Dark Tonic clarified that Master Audio is used by AAA game studios, not merely 2D or indie developers. It is intended for any Unity based product that does not require realtime audio parameter changes.

Middleware Resources: Game Audio Institute, iasig.org, IGDA, Game Sound Con
