February 6, 2017 / Randy Coppinger

AES Convention Memory

The current President of the Audio Engineering Society, Alex U. Case, has invited members to share their memories of conventions past. Here’s one of my favorites. 

Graham Blyth performed pipe organ in one of the big cathedrals in San Francisco, as he has for so many years as part of AES Conventions. Ron Stretcher held a class before the concert on techniques for recording that pipe organ, including a Decca Tree. He asked those of us attending if anyone knew how to assemble his AEA Decca Tree. I had just purchased one and assembled it a few times, so I raised my hand. And I was the only person who raised a hand.

So there I was assembling Ron’s microphone positioner in front of the entire class, and though I might have been nervous about it, I instead had fun putting it together. After everything was set up and ready, our class was invited to sit in the choir seats right underneath the front organ registers. As if that listening perspective of the glorious music wasn’t enough, I was just a few seats away from audio luminaries Wes Dooley and Rupert Neve. I thought to myself: THIS is why I am a member of the AES.

Here’s a pic of my Decca Tree for a different recording situation.
Decca Tree, Before the Show

September 26, 2016 / Randy Coppinger

AES LA 2016 Meetups

135th Audio Engineering Convention in New York City entrance banner

Are you going to be in Los Angeles for the 141st Convention of the Audio Engineering Society?
Let’s meet in person!

Thu, Sep 29

12:30pm UPDATED
Bring your lunch, or just hang

West Hall Groundwork eating area, near West Exhibit Hall entrance 

3:30pm
We might talk about microphones. A little bit.
South Hall Exhibit Area
Booth 406 – Roswell Audio & Microphone Parts

Fri, Sep 30

10:30am UPDATED 
Seems like a good time for caffeinated beverages

West Hall Groundwork eating area, near West Exhibit Hall entrance
12:30pm
Buy your lunch, or just hang

El Cholo – 1037 S. Flower Street (walking distance from Convention Center)

See you there, friends.

May 5, 2016 / Randy Coppinger

Erin Fitzgerald

On the occasion of Erin publishing her e-book about how to get started in voice acting, I took the opportunity to talk shop and glean some wisdom from my very talented friend.

I asked Erin:

(1) How did you decide to create an e-book?
“There was a license plate in front of me that said, ‘write.’”
“I thought it would be a pamphlet… with some links. That’s how it started.”
“This one afternoon I sat down to start putting this pamphlet together – it poured out of me.”
“It’s written the way I talk, so it’s very conversational.”
“Even if you don’t like my opinion, or my stories, or any of the advice I give, just the links alone will be worth their weight in gold.”

(2) When you work with others — producers, directors, engineers, other actors — what can they do that really helps you act? What things make it more difficult?
“I mention in the book how important a good engineer and director are…”
“The best engineer: you don’t even notice they’re doing their job.”
“When I’ve become a character, there’s a flow…”
“If I’m blessed enough to be with other people in the room, which always makes it more honest for me, my acting always goes up to a much more real place when other actors are in the room. It’s a huge blessing.”
“A really good engineer is so intuitive that they’re three steps ahead of me, and I don’t even know where I’m going!”
“The director says, ‘Did you catch that?’ And the engineer’s like, ‘Of course.’”
“The directors, the engineers, the other actors – everybody adds. I can’t speak for other people, but my acting gets better when there’s a beautiful team of people who are working together and each one of them have the permission to use all of their intuition, to use all of their talents and their strengths. And then we never have to pay attention to the weaknesses. We don’t have to because everybody complements each other.”

(3) Tell us about recording ensemble.
“My favorite is prelay when we’re all in the room together. There’s no question. Or if I’m doing a videogame and I’m not going to get to have other people in the room, a video game where I’m not being pushed to deliver a hundred lines per hour, a videogame where the director, and the writer, and the producer are in the room and we’re playing, and we’re really discovering a performance.”
“So to get the performance that you would get with another person in the room we have to do something more creative in order to really create the pacing, the flow, and that rhythm. So, that’s a real gift when that happens.”
“Things come up that would never come up one-on-one.”

(4) Who inspires you?
“I live in Los Angeles surrounded by the best talent in the world.”
“I love artists.”
“I love reading comic books…”
“I have to use my imagination. My eyes are closed most of the time. My imagery all comes from the inside because very rarely is the product already finished that I can look at it.”
“I like to go to art shows and just soak that in. And I feel like that helps build my inner universe.”
“I was a big fan of Emily Carr when I was younger.”
“There’s a great artist: Brian Ball… his artwork is so stunning.”

(5) What’s your favorite thing about being an actor?

(6) When you’re asked to create a character, what kinds of things do you want to know?

(7) What’s your favorite trick, tip, or secret weapon?

(8) If you could travel through time to have dinner with any historical figure, who would it be? And which of your roles would most likely delight that historical figure?

February 24, 2016 / Randy Coppinger

Film Soundtrack Story: Bird

The Feb 23rd meeting of the AES-LA Section featured Jason LaRocca and Bobby Fernandez, discussing their work as film scoring mixers. Bobby told a story that I found especially engaging.

Clint Eastwood got some recordings of Charlie Parker playing his sax. But the recordings were only his solos, and they had been recorded using a reel-to-reel tape machine, just hanging the mic from its cord, draped on top of the stage mic. Clint wanted to isolate Parker and re-record the rest of the band for his film Bird, released in 1988. That meant Bobby had to figure out how to minimize bleed from the original performance before digital workstations were common, and before robust noise reduction was really viable.

GML 8200 equalizer
Bobby said he put six analog GML equalizers on his left, and six more on his right, with the signal feeding through all of them. The left EQs were used to cut the backing players… drums, piano, etc. The right EQs were used to bring out Parker’s alto sax. From his description, it sounded like he would work on a passage, or phrase, or even a note at a time. That EQed segment would be recorded to an adjacent track, then he would adjust the EQs for the next segment and punch it, then adjust EQs, then punch, until the whole solo was isolated. Cleaning up that track would be a difficult task even with today’s technology, but I’m blown away by the patience and technique Bobby used to accomplish that isolation with the tools of the day. Inspiring!
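For anyone curious what that workflow might look like with modern tools, here is a minimal Python sketch of the complementary EQ idea: one move cuts where the backing players sit while the other favors the sax, applied one punched segment at a time. The band edges and blend amounts are illustrative guesses on my part, not Bobby’s actual settings.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def sax_bandpass(lo_hz, hi_hz, sr, order=4):
    """Band-pass filter coefficients favoring an assumed alto sax range."""
    return butter(order, [lo_hz, hi_hz], btype="bandpass", fs=sr, output="sos")

def isolate_segment(x, sr):
    # "Cut the backing players" and "bring out the sax" collapsed into one
    # complementary move: keep the assumed sax band, then blend a little of
    # the original back in so it doesn't sound hollow.
    sax = sosfilt(sax_bandpass(200.0, 6000.0, sr), x)
    return 0.85 * sax + 0.15 * x

# Work one "punch" at a time, like the tape workflow described above.
sr = 48000
segment = np.random.randn(sr)          # stand-in for one solo phrase
cleaned = isolate_segment(segment, sr)
```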

December 1, 2015 / Randy Coppinger

Voice Acting Lessons from Daws Butler

Voice actor, writer, and legendary acting instructor Daws Butler has been celebrated in print and live presentation. Here are a few acting tips that I especially enjoyed…

Corey Burton: “They’re not voices, they’re characters.” Let another character inhabit your voice and you can create distinct personalities. These can be as memorable and real as actual people you know.

Nancy Cartwright: “I want to show how to get the words off the page.” Not just acting, but relating.

Joe Bevilacqua: Keep in mind the physicality of the character, the facial structure. Portray the character you are creating with your body. Think about how tall the character is, what age the character is, and where the voice emanates from the body.

Tony Pope: Try the same story in different dialects, different characters. Try to bring something new to a thing you’ve already done. “Never be afraid to be lousy. Always take chances.”

Joe Bevilacqua: Many of his characters had been a voice long before they were animated into a specific character. He collected personalities that he thought were interesting.

June Foray: “Daws would always have a little piece of paper…” on which he kept a list of ideas for voices.

Earl Kress: A comedy principle of Daws was to play opposite–Fast against slow, or slow against fast. Loud against soft. Contrasts.

Listen to the entire Tribute to Daws Butler.

Daws Butler on microphone (RCA KU3a)

The book mentioned in the live tribute–Scenes for Actors and Voices–includes some additional tips…

In the foreword, Corey Burton writes: “Performing is not simply reading aloud, but delivering the lines as if those words just naturally occurred to the character; as an expression of that character’s own thoughts and feelings at that particular moment in their imaginary lives.”

The words of Daws Butler himself: “We do not read lines–we ‘express thoughts’…in many instances, one ‘thought’ will wipe out another–it will take precedence, asserting its more valid importance to the continuity–this I would call ‘decaying.’ The end of the line seems to ‘fall off’ or atrophy–and the energy of the following lines snaps into position. Its vitality is a refreshment–a transfusion–and it excites the listener, because it seems so natural and spontaneous. Because it is representative of what happens in ‘real life.’ Remember–the actor’s stock-in-trade is being ‘real.’ All else is pretension.”

Daws also encouraged: “I want you to understand the words. I want you to taste the words. I want you to love the words. Because the words are important. But they’re only words. You have to leave them on the paper and you take the thoughts and you put them in your mind and then you as an actor recreate them, as if the thoughts had suddenly occurred to you.”

[bold emphasis added by yours truly]

November 30, 2015 / Randy Coppinger

Tribute to Daws Butler

Voice actor, writer, and legendary acting instructor Daws Butler was honored in this recording from July of 2003, presented by his students and colleagues: Joe Bevilacqua, Corey Burton, June Foray, and Nancy Cartwright. Writers, producers, recording engineers, voice directors, and voice actors will enjoy and be nourished by the insights of Mr. Butler shared in this full-length presentation.

To read a few of my takeaways from this presentation and the book they mention, see Voice Acting Lessons from Daws Butler.


November 24, 2015 / Randy Coppinger

My Upgrade to ProTools 12

ProTools 12 screenshot

When anticipating an audio software upgrade, my two major concerns are familiarity and stability. Having done a few small projects since upgrading from ProTools 10 to 12, I don’t feel like a lot has changed, so I’m feeling good about how familiar it seems. Most of my focus has been on how well the software works or fails.

FADE OUT BUG
I started with version 12.2 and encountered a strange bug with fade outs. When a fade out was scrolled off screen then back on, sometimes it changed from the familiar outlined box with fade line and waveform to an opaque box. The audio played fine, but it prevented the fade from being edited. I found that closing and reopening the session sometimes restored these fade outs. Other times a new fade out of the same length was created AFTER the original fade, lengthening the edit. At first I was amazed at how strange this bug seemed, but after losing a lot of time to it I became desperate for a solution.

Luckily version 12.3 was available. I didn’t find any mention of this bug in documentation for either version, so I just had to hope using 12.3 would be better. I am happy to report that I haven’t had any weird fades since moving up.

DELAY COMPENSATION BUG
Click latency demonstrates the delay compensation bug

In 12.2 and 12.3 I’ve experienced a problem with delay compensation. I often bus through two aux inputs, one of which has a dynamics plug-in, for parallel compression. Every time I launched ProTools and then opened a session for the first time, there was a 1024 sample delay on the plugin channel (at a 48 kHz sample rate). If I simply closed the session and reopened it, the delay compensation worked fine again. It’s a minor annoyance.
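For perspective, here is the quick arithmetic on what that uncompensated delay amounts to:

```python
# 1024 samples at a 48 kHz sample rate
samples = 1024
sample_rate = 48000  # Hz
delay_ms = samples / sample_rate * 1000
print(f"{delay_ms:.1f} ms")  # ~21.3 ms -- easily enough to smear parallel compression
```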

McDSP FilterBank F2 for RTAS with sliding faders

RTAS
One of my session templates included the McDSP F2 filter plug-in. I hung onto the old version for as long as I could because sliders seemed faster than rotating knobs. My F2 was RTAS, but its AAX replacement was the F202. According to the helpful folks at McDSP, ProTools users need to have v6 of FilterBank installed in order to automatically convert old F2 plug-ins and transfer the setting values into F202.

MAGMA CHASSIS FAN NOISE
My ProTools 10 rig used a PCI card to connect to the HD Omni. The cost effective path to interface with a “trashcan” Mac Pro seemed to be the Magma Express Box 3T. Setup was pretty easy and it worked like a charm. But the fan noise was bugging me. The product literature explained that the fan speed was adjustable, and I even found a How To video on YouTube. Trouble was, that video was for a previous version. The new, improved model was easy to set to a slower, quieter fan speed without getting significantly hot. While the information about how to adjust the jumper was well documented in the product literature, locating the jumper was a bit more challenging. So here’s a sequence of photos to help Express Box 3T owners access the jumper.

Rear Thumbscrew for the Magma Express 3T

Loosen the thumbscrew at the rear of the chassis

Remove top cover of Magma Express 3T

Removing the top cover

Pull fan from the Magma Express 3T

Pull up the fan near the front of the chassis

View of jumper board on Magma Express 3T

Front fan view shows the circuit board on the fan where the jumpers live

How to move fan speed jumper on Magma Express 3T

Use needle nose pliers to lift and move jumper

 

February 7, 2015 / Randy Coppinger

Profile: Doppler Studios, Atlanta

Entry to Doppler Studios in Atlanta, Georgia

Whenever I’ve needed to record out of town, information about studios in that area has been very much appreciated. If you ever need to record in Atlanta, here’s some info about Doppler Studios. This seven room facility is located about 15 miles north of the airport in Piedmont Heights. The Lindbergh Center Station is only 1.1 miles from the studio if you want to take the MARTA subway from the airport.

Shawn Coleman of Doppler Studios in Atlanta, Georgia

I’ve worked over the phone with Shawn Coleman for at least a decade. He’s our go-to guy for voice recording. He typically works in Studio G, a well equipped room for voice actors, music production, and film/TV audio post.

We finally had the opportunity to show up in person for a session and were treated to working in Studio A, with a larger control room and a larger studio proper, including a baby grand piano.

Client View in Studio A at Doppler Studios in Atlanta, Georgia

Shawn knows we like the sound of their Neumann u87. He also hung a Neumann TLM 103 running at a lower volume as a safety microphone in case the actor suddenly shouted (recorded on a separate track). The 40 input SSL 4000E was overkill for the two microphones we used, but it sure was a pleasure to see and hear. Always the professional, Shawn kept the session running smoothly and everything sounded great. What I didn’t know until this visit: chief engineer Joe Neil makes his own custom microphone preamps, which we used for our recording session. They sounded very nice feeding into the Manley compressor. We recorded at 32-bit, 96 kHz because it sounds better if we need any iZotope Declick to help clean up mouth noise.

After our session Shawn showed me their big music tracking room: Studio E. With a 56 input SSL, 3 large isolation booths, and a 30′ x 40′ studio proper, this would be ideal for recording large groups of musicians at the same time.
56 Input SSL Console in Studio E at Doppler Studios in Atlanta, Georgia

Below we see Shawn in the big room with two of the three iso booths behind him.
Studio E Proper at Doppler Studios in Atlanta, Georgia

So now you know a little more about this fine studio. I hope you find it helpful if you need to record in Atlanta. Now, here are some more pretty pictures.

Neumann u87 Microphone at Doppler Studios in Atlanta, Georgia

Outboard Gear Rack in Studio A at Doppler Studios in Atlanta, Georgia

View from Control Room into Studio A Proper at Doppler Studios in Atlanta, Georgia

View from Control Room into Studio A Proper

Custom Microphone Preamplifiers at Doppler Studios in Atlanta, Georgia

Custom Microphone Preamplifiers

View from Iso Booth into Studio E Proper at Doppler Studios in Atlanta, Georgia

View from Iso Booth into Studio E Proper

February 4, 2015 / Randy Coppinger

How To Modify Your Shure SM57

Someone suggested that older Shure microphones sound better because the quality of the transformers was better. Then my friend Dave recommended that I rip out the transformer altogether! Since the Shure SM57 is inexpensive, and the changes are relatively simple, I decided to try my first microphone modifications in the summer of 2007.

REPLACE THE STOCK TRANSFORMER

Shure SM57 microphone modification: desoldered element

The good folks at Mercenary Audio sold me some TAB Funkenwerk T58 transformers, designed specifically to replace the ones that come in stock 57s and 58s. First I unscrewed the back portion of the mic and de-soldered the contacts from the stock transformer to the microphone element. Then I released the XLR connector at the base and de-soldered it.

The next part was tricky… I had to remove the old transformer from the mic body, which was held in place by some sort of rubbery glue. The folks at Mercenary gave me a hot tip: put the lower body of the mic (not the element!) in a toaster oven for 3-5 minutes. Despite warnings I still managed to burn myself. And some of the glue spilled onto the metal tray. Bummer. That stuff would not come off, rendering it unsuitable for cooking food. But it did break the stock transformer free.

Then I needed to insert the new transformer and attach it so that it wouldn’t bounce around inside. A dot of double stick foam seemed to work pretty well. Now I have to admit this doesn’t form the same acoustical cavity formed by that gooey glue. So the new transformer may not be the only significant change. But since I didn’t seem to have a large supply of industrial glue or the manufacturing setup to apply it without trashing other parts of the mic in the process, I decided the sticky foam dot would suffice.

Shure SM 57 modification: reattach XLR connector

Shure SM 57 modification: reattach mic element

I used a solder sucker to clean off the residue from the previous wiring. I tinned with some good Kester solder because cheap stuff can sound bad. I attached the XLR to the transformer leads. Then I attached the XLR connector to the base and soldered the other end of the transformer wires to the mic element.

Shure SM 57 modification: enamel painted “T” for Transformer upgrade

I threaded the body back onto the mic. Except for a little glue residue on the outside of the mic, this modified version didn’t look any different from a stock 57. So I added a personal touch, a red enamel “T” for Transformer to help visually differentiate this mic. I call this mic my Hotrod 57.

NO TRANSFORMER

I nicknamed the next modification Sawed-Off 57. There were two separate goals here: (1) see if I could improve the sonics by removing the transformer, and (2) create a low profile mic that is easier to place in tight spots. This mod turned out to be more difficult than I expected.

Shure SM 57 modification: hole drilled in chassis for drain wire post

Removing the back portion of the mic was easy enough, but when I went to attach the chassis of the mic to the drain wire I discovered the chassis wouldn’t take solder. So I left the project and came back to it the next day. I decided to try another approach — physically attach to the mic body using a terminal. I took the chassis out into the garage to find out how difficult it would be to drill a hole in it. Piece of cake! The metal was soft enough to accept a standard wood bit in the drill. So far so good.

I soldered the terminal to the end of the wire. I pushed a machine screw through the hole in the chassis, intending to thread the terminal inside the mic body. But the nut and bolt took up too much room inside, so I moved the whole assembly outside the mic body.

Shure SM 57 modification: wires soldered to mic element and chassis

I purposely cut the drain wire short so that it would bear any stress instead of the +/- connections. Having cleaned off the old solder and tinned the terminals to the mic element, I finished the soldering by connecting the positive and negative wires. Then I dropped the Sawed-Off into a SABRA-SOM SSM-1 shockmount. I would have preferred something smaller, but that’s a different project!

UNDER TEST

Shure SM 57 modification on snare mic

My drummer friend Austin brought his kit into the studio as a favor to me. The Hotrod 57 sounded noticeably better than the stock 57 as a snare mic. And even in the somewhat bulky shockmount, the Sawed-Off 57 fit under the high hat better than the other versions. The Sawed-Off had 15dB less output than the other two — no problem up close on a loud snare. And it sounded really great… significantly better than the stock 57 and the Hotrod 57. That was a pleasant surprise. But I also noticed a lot more sound getting in the Sawed-Off from behind. Perhaps the normal frequency limitation of a stock 57 is also a key design feature for the cardioid pattern. It also seemed very likely that removing the rear chassis of the mic exposed the back side of the element to sound that wouldn’t otherwise get in.

Shure SM 57 modifications: comparison with other microphones

Another friend of mine let me put up a bunch of mikes while he sang and played acoustic guitar. This is when the limitation of the Sawed-Off 57 became obvious. That 15dB loss without the transformer was an absolute deal breaker. I like the No Transformer 57 on loud sources like snare and electric guitar cabs, and the Hotrod 57 for sources with moderate volume.

CONCLUSIONS

The sound leakage into the back of the microphone was the biggest problem with the Sawed-Off version. So I decided to return the mic body to the Transformerless version and paint a yellow “N” on it for No Transformer. Someday I’d like to bend both microphones 90 degrees with the Granelli Audio Labs G5790 so they fit better in tight spots.

January 2, 2015 / Randy Coppinger

More Machine Voices

In Part 1 of our discussion about Machine Voices we looked at sonic treatments that make a voice recording sound more like an automaton: re-recording, frequency, time, vocoding, speech synthesis, intentional misuse of tools, and layering.

ACTING
Before we get into more treatments it is worth noting that sonic effects alone are not the only factor in making human speech sound like an android voice. When a synthetic intelligence is the goal, script writing and voice acting can help give us robotic clues. For example, HAL 9000 from Kubrick’s 2001: A Space Odyssey speaks without emotion. It’s creepy that the computer has no feelings, evidenced by the lack of word stress or pitch variations that humans naturally use. The classic Robot B-9 from Lost in Space used a monotone delivery to let us know its “Non-Theorizing” status. Many androids have been portrayed by voice actors using a monotone delivery, though pitch variations have also been removed from normally intoned speech using processing (example: the Cylon Centurions from the late 1970s).

Pace and pause can be used to sound intentionally procedural. C3PO has quirky pauses and a steady pattern to his dialog that is part of the android speech presentation. Concatenated speech, like what we hear when calling for the local time or via MovieFone, demonstrates how it sounds when pre-recorded voice is presented piecemeal, organized by a computer based on current conditions for the listener. Interactive Voice Response is used by airlines, banks, and tech support so callers can get what they need with little or no “live human” time on the line. We can intentionally emulate these unnatural patterns before any treatment is applied.
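If you want to prototype that concatenated, phone-tree cadence before committing to a treatment, here is a minimal Python sketch; the file names are hypothetical placeholders for pre-recorded fragments.

```python
import numpy as np
import soundfile as sf

# Pre-recorded fragments stitched together with rigid gaps -- a big part of
# why IVR voices sound mechanical. File names are placeholders.
fragments = ["the_time_is.wav", "three.wav", "forty.wav", "five.wav", "pm.wav"]
gap_seconds = 0.25

pieces = []
sample_rate = 48000
for name in fragments:
    audio, sample_rate = sf.read(name)
    if audio.ndim > 1:
        audio = audio.mean(axis=1)  # fold to mono for simplicity
    pieces.append(audio)
    pieces.append(np.zeros(int(gap_seconds * sample_rate)))  # unnatural fixed pause

sf.write("concatenated_prompt.wav", np.concatenate(pieces), sample_rate)
```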

Editing may also play a role. For example, the mechanical personality Max Headroom stuttered, like a series of bad edits, to let us know the voice was from a machine.

OVERDRIVE
Oscilloscope readout of an amplifier output: a 1 kHz sine wave clipping into 5 ohms, 10V/division.

One of the sonic giveaways that we are listening to a machine is some kind of failure. Subtle to extreme distortion can help convince listeners that the electronics, transducers, and power supplies found in a machine are being used for the reproduction of the voice we are treating. Sometimes the best part of re-amping is pushing the system into distortion — a little or a lot. Of course this can be simulated with software, or by passing signal through a piece of gear like a guitar effects pedal. Sometimes massive distortion mixed back in subtly with the untreated signal helps prove the idea while maintaining intelligibility.
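Here is a minimal Python sketch of that last idea: drive a copy of the voice hard into a soft clipper, then blend just a little of it under the dry signal. The drive and mix values are starting points, not a recipe.

```python
import numpy as np

def parallel_overdrive(dry, drive=20.0, wet_mix=0.15):
    """Heavily distort a copy of the signal and blend it back in subtly."""
    wet = np.tanh(drive * dry)                   # soft clipping
    wet /= max(np.max(np.abs(wet)), 1e-9)        # keep the wet path from spiking
    return (1.0 - wet_mix) * dry + wet_mix * wet

sr = 48000
t = np.arange(sr) / sr
voice = 0.3 * np.sin(2 * np.pi * 220 * t)        # stand-in for a dialog clip
futzed = parallel_overdrive(voice, drive=30.0, wet_mix=0.2)
```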

GLITCH
Equipment failures don’t have to be limited to electronic or mechanical clip distortion. There is a wealth of opportunities to glitch a voice recording and make it sound less than human. Low resolution digitization — such as 8-bit, 8 kHz — can give you some downright awful sounds. Hint: a lot of talking toys operate down in this range. You can downres using all kinds of different software, not just plugins. Or try hardware such as a guitar stomp box featuring bit crushing effects.
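As a sketch of what that “8-bit, 8 kHz” style degradation does, here is a simple Python bit crusher with a sample-and-hold downres. The numbers are illustrative; real talking toys vary.

```python
import numpy as np

def bitcrush(x, bits=8, downsample_factor=6):
    """Quantize to 2**bits levels, then hold each Nth sample (crude downres)."""
    levels = 2 ** bits
    crushed = np.round((x * 0.5 + 0.5) * (levels - 1)) / (levels - 1) * 2.0 - 1.0
    held = np.repeat(crushed[::downsample_factor], downsample_factor)
    return held[: len(x)]

sr = 48000
t = np.arange(sr) / sr
voice = 0.5 * np.sin(2 * np.pi * 300 * t)                 # stand-in for dialog
toy_voice = bitcrush(voice, bits=8, downsample_factor=6)  # ~8 kHz effective rate
```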

@stonevoiceovers suggested using the ProTools Air Frequency Shifter. Early pitch processing such as the Eventide H910 Harmonizer could sound garbled and glitchy, especially when pushed to extremes. Most pitch processing still sounds pretty synthetic with excessive settings. One idea is to simply pitch something way up or down and then process again to return to normal pitch. The filters and shuffling will add some great artifacts. Of course you can keep the pitch changes, even variable pitch change, to make something both simulated and glitchy at the same time.
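Here is a rough sketch of the up-and-back-down trick using librosa’s pitch shifter; any pitch processor will do, and the octave interval is just an example. The file name is a hypothetical placeholder.

```python
import librosa

# Round-trip pitch shifting: the two processing passes leave behind
# filtering and shuffling artifacts even though the net pitch is unchanged.
voice, sr = librosa.load("dialog_take.wav", sr=None, mono=True)   # placeholder file
up = librosa.effects.pitch_shift(voice, sr=sr, n_steps=12)        # up an octave
glitched = librosa.effects.pitch_shift(up, sr=sr, n_steps=-12)    # back down
```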

You don’t have to pitch process to get chopped, stuttered sounds. There are plugins and hardware effects processors dedicated to these kinds of glitch effects. We can even get these kinds of sounds by abusing a simple tremolo or vibrato processor. And if you’re not afraid to experiment, try circuit bending an inexpensive consumer product that records and plays sound. Cheap toys can be especially fun to hack.
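A stutter edit is also easy to fake in code. This sketch grabs short slices and randomly repeats some of them; the slice length and odds are arbitrary starting points.

```python
import numpy as np

rng = np.random.default_rng(0)

def stutter(x, sr, slice_ms=90, repeat_odds=0.25, max_repeats=4):
    """Repeat random short slices, like a run of bad edits."""
    n = int(sr * slice_ms / 1000)
    out = []
    for start in range(0, len(x) - n, n):
        chunk = x[start:start + n]
        repeats = rng.integers(2, max_repeats + 1) if rng.random() < repeat_odds else 1
        out.append(np.tile(chunk, repeats))
    return np.concatenate(out)

sr = 48000
voice = 0.4 * np.sin(2 * np.pi * 180 * np.arange(sr) / sr)  # stand-in clip
robotic = stutter(voice, sr)
```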

PROCESSING TOOLS
Some of my favorite plugins make processing easier by having several techniques readily available on a single display. For mediated voice futzes it’s hard to beat Speakerphone by Audio Ease. It has tons of great re-recording techniques simulated using convolution — both speaker and microphone emulations are included. Then you can pile on frequency manipulations with EQ, overdrive, room reflections, telecom codec simulations, and much more.
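The core of that convolution approach is simple enough to sketch yourself if you have an impulse response of a small speaker or phone earpiece. This is not Speakerphone’s implementation, just the basic idea; the file names are hypothetical placeholders.

```python
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read("dialog_take.wav")        # placeholder dialog file
ir, ir_sr = sf.read("tiny_speaker_ir.wav")  # placeholder impulse response
if dry.ndim > 1:
    dry = dry.mean(axis=1)
if ir.ndim > 1:
    ir = ir.mean(axis=1)
assert sr == ir_sr, "resample the IR to match the dialog first"

wet = fftconvolve(dry, ir, mode="full")[: len(dry)]
wet *= np.max(np.abs(dry)) / max(np.max(np.abs(wet)), 1e-9)  # rough level match
sf.write("dialog_futzed.wav", wet, sr)
```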

Whereas Speakerphone is a quick, accurate path for emulating the real world, FutzBox offers a palette of sound-mangling parameters to play with and create your own flavor. Start with a speaker emulation, then select options to downres, filter, overdrive, even add a noise layer. The interface makes it fun to experiment with different combinations.

I’ve worked in several studios that had a rackmount Eventide H3000 Harmonizer. If you ever have the opportunity to play with this kind of quirky box full of crazy treatments, indulge your ears with synthetic weirdness. A significant number of robot voices have been created using Harmonizers over the years. Eventide makes plugins these days, which is a more convenient way for everyone to access multi-effect sounds.

GO DEEPER
Ben Burtt is a living legend. His creative use of sound tools inspires us. From classic ARP synthesizers to the Kyma, he blurred the lines between human voice and machine with iconic robots from R2-D2 to Wall-E. We don’t have to use someone else’s plugin; we can devise our own treatment paths too. Techniques like vocoding and speech synthesis demonstrate a confluence of artistic and technical thinking that ask us to create new processes. But powerful tools with no specific voice processing agenda require patience to wield well, and a steep learning curve may not coalesce with an inspired moment. These are deep waters. Come mentally prepared.

Native Instruments makes powerful music oriented software like Kontakt for sampling, and Reaktor for synthesis. To write your own code consider Pure Data and Max/MSP. More tool recommendations: 10 Great Tools For Sound Designers, What’s The Deal With Procedural Game Audio, and Google search for new ideas in sonic tools, including audio related discussion groups.

TIPS AND TRICKS
(1) Levels. Metallic resonances and other synthetic treatments can generate crazy level spikes. Watch your input and output levels so you don’t produce unwanted clipping (a minimal level-check sketch follows this list).
(2) Dynamics. Just because your dialog was compressed and/or limited before you treated it, doesn’t mean you can ignore dynamics after. Consider another pass through dynamics processing so your low level sounds don’t get lost, and the newly created signal peaks don’t prevent you from setting this dialog loudness on par with everything else.
(3) Diction. When you mangle in the 2–5 kHz range the intelligibility and presence of the dialog may become diminished. Our ear/brain system uses the 6–8 kHz range to distinguish S sounds from F sounds, with potential confusion for other sounds like TH, SH, or CH. Sometimes you just need EQ to enhance these areas. Other times you may need to blend in some unprocessed or less processed voice in these frequency ranges to recover the diction that was lost from treatment. If you lower resolution, remember that Nyquist showed the sampling rate must be at least twice the highest frequency you want to keep, meaning a downres to 8 kHz constrains the audio bandwidth to only 4 kHz!
(4) Distortion. If you choose overdrive as part of your processing chain, keep in mind that distortion is a form of dynamic range compression. If you’re unsure whether to use it or not, remember that distortion can be a substitute or complement for any dynamics processing needed after applying other techniques. Even a little intentional clipping could help improve the signal chain — from a simple futz, to a full-on sentient machine.
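Here is a minimal level-check sketch for tips 1 and 2: measure the peaks the treatment created and pull the clip back under a safe ceiling before any further dynamics work. The ceiling value is just an example.

```python
import numpy as np

def peak_dbfs(x):
    """Peak level in dB relative to full scale."""
    return 20 * np.log10(max(np.max(np.abs(x)), 1e-12))

def trim_to_ceiling(x, ceiling_dbfs=-1.0):
    """Apply negative gain if the peak exceeds the ceiling; otherwise leave it alone."""
    gain_db = ceiling_dbfs - peak_dbfs(x)
    return x * (10 ** (gain_db / 20)) if gain_db < 0 else x

sr = 48000
treated = 1.8 * np.sin(2 * np.pi * 500 * np.arange(sr) / sr)  # spiky, over full scale
print(f"peak before: {peak_dbfs(treated):+.1f} dBFS")
safe = trim_to_ceiling(treated)
print(f"peak after:  {peak_dbfs(safe):+.1f} dBFS")
```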

If you’ve got any tips, tricks, or other suggestions please share. Have fun making Machine Voices!