Machine Voices
A friend of mine was trying to make a voice sound like it was coming from a toy, and he referenced the Extreme EQ article I wrote a while back. His question, combined with some recent projects I’ve been doing, inspired me to assemble more ideas about treating voice recordings for a machine-like effect.
I first heard the term futz from some film mixing colleagues. It refers to processing a recording so that it sounds like it’s coming over a phone, an intercom, a megaphone, or some other mediated delivery of a voice. We can extend this idea to any kind of talking machine, whether it transmits a human voice or represents a sentient machine like HAL 9000, C-3PO, or Optimus Prime.
RE-RECORDING
Some of the earliest practical voice treatments were made by placing telephone speakers, megaphones, and the like inside a sound isolation box along with a microphone. The interior of the box was often lined with absorptive material to reduce audible reflections. Sound was fed into the emitter of choice and then recorded through the microphone. You don’t have to build the box unless you plan to do this kind of re-recording on a regular basis, in which case keeping everything isolated and ready to go can be worth the effort.
I love re-amping, especially when I’m going for realism: playing a sound file of someone speaking through my mobile phone’s speaker sounds very convincingly like that person talking on my phone! Sometimes old-school re-recording is simply the better option, a quick and convincing way to get a machine-like voice treatment. And you don’t have to be a purist about it; you can add other forms of manipulation before and after.
(See also: Re-Amping Mix Tips One, Two, and Three)
FREQUENCY
Transducers found in machines often have a specific frequency response that we hear as machine-like. Extreme roll-offs and obnoxious, narrow boosts can help simulate an android, toy, or other talking machine. Listen to these kinds of sounds in the cinema, on TV, and from real-life talking devices to help you decide when your EQ settings create a convincing treatment. For some specific ideas see: Extreme EQ.
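Here’s a minimal sketch of that kind of extreme EQ in Python with SciPy, assuming a hypothetical mono file called voice.wav: a steep telephone-style band-pass plus one obnoxious resonant peak. The cutoffs, Q, and blend are illustrative starting points, not a recipe.

```python
import numpy as np
import soundfile as sf
from scipy import signal

audio, fs = sf.read("voice.wav")               # hypothetical mono voice recording
if audio.ndim > 1:
    audio = audio.mean(axis=1)                 # fold to mono for simplicity

# Steep roll-offs: keep roughly the 300 Hz - 3.4 kHz band of a telephone
# or cheap toy speaker, and let everything else fall away sharply.
sos = signal.butter(8, [300, 3400], btype="bandpass", fs=fs, output="sos")
band_limited = signal.sosfilt(sos, audio)

# Obnoxious narrow boost: a high-Q resonator around 2 kHz, blended back in
# parallel so it acts like a peaking EQ boost.
b, a = signal.iirpeak(2000, Q=8, fs=fs)
resonance = signal.lfilter(b, a, band_limited)
futzed = band_limited + 0.6 * resonance

sf.write("voice_futzed.wav", futzed / np.max(np.abs(futzed)), fs)
```

Move the peak frequency around while listening; where it lands changes the character from toy speaker to intercom.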
TIME
Some of the earliest robot voices included the sound of the speaker inside the chassis of the machine. A really tight delay can simulate that kind of reflection, and you can create some great metallic resonances by cycling the result back into the delay again and again in a time-smeared feedback loop. Delays under 30 ms or so create comb filters, also known as “phasing.” If you slowly sweep the delay time up and then let it fall back, you get flanging: comb filtering with moving notches and peaks. A closely related effect called chorusing also varies delay times back and forth for moving comb filters that can sound synthetic and hollow. @r0barr likes to use a ring modulator, which multiplies the voice against a carrier oscillator, turning every frequency in the voice into sum and difference tones for a metallic, inharmonic sound.
All of these forms of delay can have a synthetic, manufactured kind of sound. If you’re going for a vintage machine, or something subtle, a simple time-based effect may be all you need. Or combine it with other manipulations to mash up old and new sonic characteristics.
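If you want to hear the tight-delay idea without reaching for a plugin, here’s a minimal feedback comb filter sketch in Python. The 4 ms delay, feedback amount, and filenames are assumptions to play with; push the feedback higher for more metallic ring.

```python
import numpy as np
import soundfile as sf

audio, fs = sf.read("voice.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)

delay_samples = int(0.004 * fs)   # ~4 ms: well under 30 ms, so we hear comb filtering
feedback = 0.7                    # how much of the delayed signal cycles back in
mix = 0.5                         # dry/wet balance

out = np.zeros(len(audio))
buf = np.zeros(delay_samples)     # circular delay line
idx = 0
for n in range(len(audio)):
    delayed = buf[idx]
    buf[idx] = audio[n] + feedback * delayed   # feed the result back into the delay
    out[n] = (1 - mix) * audio[n] + mix * delayed
    idx = (idx + 1) % delay_samples

sf.write("voice_comb.wav", out / np.max(np.abs(out)), fs)
```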
Some of my favorite plugins for these kinds of legacy effects are made by Soundtoys: EchoBoy, PhaseMistress, and Crystallizer.
(See also: Phase)
CODE
Both @daviddas and @recordingreview mentioned the TAL Vocoder by name. Vocoding breaks speech into smaller components, and it was originally created to reduce the bandwidth needed to transmit a voice. It was also used to encrypt voice communications, including military applications. By the 1960s, artists and technicians were collaborating on several different models of a “talking synthesizer”, or, put another way, a singing machine. Because the artistic use of vocoders is rooted in music, using one benefits from some musical knowledge. If you’re not a musician yourself, consider collaborating with your composer and musician friends.
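For the curious, here’s a toy channel vocoder sketch in Python, in the spirit of those early talking synthesizers: split the voice and a sawtooth carrier into matching bands, then impose the voice’s per-band envelope on the carrier. The 110 Hz drone, band layout, and filenames are illustrative assumptions, and this is where the musical knowledge comes in, since a melodic or chordal carrier is far more interesting than a drone.

```python
import numpy as np
import soundfile as sf
from scipy import signal

voice, fs = sf.read("voice.wav")
if voice.ndim > 1:
    voice = voice.mean(axis=1)

# Carrier: a buzzy sawtooth drone at a fixed pitch (110 Hz = A2).
t = np.arange(len(voice)) / fs
carrier = signal.sawtooth(2 * np.pi * 110 * t)

# Log-spaced analysis/synthesis bands, capped safely below Nyquist.
edges = np.geomspace(100, min(8000, 0.45 * fs), 17)

# Envelope smoother: rectified band output, low-passed at 50 Hz.
env_sos = signal.butter(2, 50, btype="lowpass", fs=fs, output="sos")

output = np.zeros(len(voice))
for lo, hi in zip(edges[:-1], edges[1:]):
    sos = signal.butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    voice_band = signal.sosfilt(sos, voice)
    carrier_band = signal.sosfilt(sos, carrier)
    envelope = signal.sosfilt(env_sos, np.abs(voice_band))
    output += carrier_band * np.clip(envelope, 0, None)

sf.write("voice_vocoded.wav", output / np.max(np.abs(output)), fs)
```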
There’s a fascinating history about vocoders on this wiki page if you’d like to read more.
SPEECH SYNTHESIS
@MikeHillier made a really obvious suggestion that I had completely overlooked: “get Appletalk to say it.” @chewedrock recommended “Atari speech synthesis – software automatic mouth.” Speech synthesis is a great option for a machine voice.
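On a Mac, the modern descendant of that idea is the built-in say command, which you can drive from a script. This is a minimal sketch assuming macOS; the voice name and output path are assumptions, and `say -v ?` lists whatever voices are installed on your machine.

```python
import subprocess

line = "All systems are functioning within normal parameters."
subprocess.run(
    ["say", "-v", "Fred",          # a classic, overtly synthetic-sounding voice
     "-o", "robot_line.aiff",      # render to an audio file instead of the speakers
     line],
    check=True,
)
```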
YOU’RE DOING IT WRONG
Intentional misuse and reinvention can be incredibly fun. Some of our beloved music processors, such as pitch correction, can be applied to spoken dialog instead of music for some very tasty synthetic voices. @grhufnagl said, “I really love using Melodyne to control pitch & time, alongside its formant control.”
Convolution and noise reduction were intended to emulate real acoustic spaces and to clean noise out of recordings, respectively. But we can apply them in creative ways to generate interesting artifacts. Freeware and low-cost software tends to cut corners, making it more prone to audible errors that sound unnatural and weird. Almost any audio tool can be used in ways it wasn’t intended, to produce ear-catching flaws.
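As one concrete example of this kind of misuse (my own, not one of the tools named above): run the voice through an STFT and simply throw the phase away before resynthesis. Phase vocoders were built for careful time and pitch manipulation; abusing one like this produces a buzzy, monotone, obviously artificial voice. The filenames and frame size below are illustrative assumptions.

```python
import numpy as np
import soundfile as sf
from scipy import signal

audio, fs = sf.read("voice.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)

nperseg = 1024
_, _, Zxx = signal.stft(audio, fs=fs, nperseg=nperseg)
Zxx = np.abs(Zxx)                     # keep the magnitudes, zero every phase
_, robot = signal.istft(Zxx, fs=fs, nperseg=nperseg)

sf.write("voice_robot.wav", robot / np.max(np.abs(robot)), fs)
```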
LAYERS
Good sound design often features the prominent sound that we notice, with layers of quieter elements adding color and flavor. This works for machine voices too. We can use the voice signal as a key to open a gate on other sounds: static, digitization artifacts, droning guitars, and many more sonic clues that the voice is being mediated. Add samples, such as servos to move a mechanical mouth, or the hum of a power supply. These finishing touches are like highlights and shadows in visual art, adding believable, three-dimensional characteristics.
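Here’s a minimal sketch of that keying idea in Python: follow the voice’s envelope and use it to open a crude gate on a static layer, so the texture only appears while the voice is speaking. The noise stand-in, threshold, and filenames are assumptions; any drone, hum, or servo loop could sit in its place.

```python
import numpy as np
import soundfile as sf
from scipy import signal

voice, fs = sf.read("voice.wav")
if voice.ndim > 1:
    voice = voice.mean(axis=1)

static = 0.3 * np.random.randn(len(voice))    # stand-in for a recorded static layer

# Envelope follower: rectify the voice, then smooth with a gentle low-pass.
env_sos = signal.butter(2, 10, btype="lowpass", fs=fs, output="sos")
envelope = signal.sosfilt(env_sos, np.abs(voice))

gate = (envelope > 0.02).astype(float)        # crude open/closed decision
gate = signal.sosfilt(env_sos, gate)          # smooth the gate to avoid clicks

mix = voice + 0.25 * gate * static            # tuck the keyed static under the voice
sf.write("voice_layered.wav", mix / np.max(np.abs(mix)), fs)
```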
Next time: more treatments, more plugins, plus voice acting ideas and tips/tricks in Part 2