My notes from Game Audio 7: Dynamic Mixing for Games, at the 137th AES Convention (2014), Los Angeles
Presented Oct 10 by Simon Ashby
Dynamic Mixing defined: A system that dynamically changes the audio mix based on currently playing sounds and game situations.
Middleware such as Wwise creates channels between the game engine and the audio engine. In Wwise these are called Sends.
Dynamic mixing can help keep things interesting by modifying sounds on the fly, so the listener never hears exactly the same thing twice. It can also provide feedback to the player, e.g. volume dropping to indicate greater distance from the player / camera.
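A minimal sketch of those two ideas, with all names and curve constants invented for illustration (this is not the Wwise API; Wwise handles both via randomizers and attenuation curves):

```python
import random

def play_variation(base_pitch=1.0, base_volume=1.0):
    """Return slightly randomized playback parameters for one sound instance,
    so repeated triggers of the same sound don't sound identical."""
    pitch = base_pitch * random.uniform(0.95, 1.05)   # +/- ~5% pitch
    volume = base_volume * random.uniform(0.9, 1.0)   # up to -10% volume
    return pitch, volume

def distance_gain(distance, min_dist=1.0, max_dist=50.0):
    """Linear rolloff as player feedback: full volume inside min_dist,
    silent beyond max_dist."""
    if distance <= min_dist:
        return 1.0
    if distance >= max_dist:
        return 0.0
    return 1.0 - (distance - min_dist) / (max_dist - min_dist)
```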
In the same way live sound mixers can use snapshots to quickly go from cue to cue, middleware mix snapshots can be attached to triggers / mechanics in the game.
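A toy version of trigger-driven snapshot recall. The snapshot names and bus levels are made up for illustration; middleware like Wwise exposes this through States and mixer snapshots rather than a plain dict:

```python
# Each snapshot is a full set of bus levels, recalled as one unit,
# like a live-sound console cue.
SNAPSHOTS = {
    "exploration": {"music": 0.8, "sfx": 1.0, "dialog": 1.0},
    "combat":      {"music": 1.0, "sfx": 1.0, "dialog": 0.7},
    "pause_menu":  {"music": 0.5, "sfx": 0.2, "dialog": 0.0},
}

class Mixer:
    def __init__(self):
        self.levels = dict(SNAPSHOTS["exploration"])

    def on_trigger(self, event):
        """A game mechanic fires an event; recall the matching snapshot."""
        snapshot = SNAPSHOTS.get(event)
        if snapshot:
            self.levels.update(snapshot)
```

A real implementation would interpolate between snapshots over a transition time instead of switching instantly.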
Side-chain is not just for ducking. It can drive other parameters such as EQ, pitch, sends, etc. In other words, you can drive a parameter setting based on the audio level of a different channel.
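A sketch of side-chaining beyond ducking: measure the level of one channel and use it to drive a parameter on another (here, a low-pass cutoff). The mapping constants are arbitrary assumptions for illustration:

```python
def envelope(samples):
    """Crude RMS level of a block of samples in the 0..1 range,
    standing in for a side-chain level detector."""
    if not samples:
        return 0.0
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def sidechain_to_cutoff(level, base_hz=8000.0, min_hz=500.0):
    """Louder side-chain input -> lower filter cutoff on the target
    channel. The same level could just as easily drive pitch or a send."""
    level = max(0.0, min(1.0, level))
    return base_hz - level * (base_hz - min_hz)
```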
HDR: High Dynamic Range, the audio analog of HDR photography. Inputs with a wide dynamic range feed the HDR buss; the delivery system ducks the quieter inputs so the louder ones can be heard. It is more complex than buss compression or simple ducking. The result actually has a narrower dynamic range, but it is perceived as having more range than compressed or unmastered audio.
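A rough sketch of the windowing idea behind HDR mixing. The window width and level values are illustrative assumptions, not Audiokinetic's actual algorithm:

```python
def hdr_mix(input_levels_db, window_db=24.0):
    """Map each input's authored loudness (dB) to an output gain offset.

    The loudest input sits at the top of a sliding window; quieter
    inputs are ducked relative to it, and anything more than window_db
    below the top is effectively culled (returned as None).
    """
    top = max(input_levels_db.values())
    floor = top - window_db
    out = {}
    for name, level in input_levels_db.items():
        # Gain offset relative to the window top, or None if culled.
        out[name] = level - top if level >= floor else None
    return out
```

For example, an explosion at 110 dB pushes the window up so a 100 dB gunshot is ducked 10 dB and 60 dB footsteps drop out entirely, until the explosion ends and the window falls again.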
Audiokinetic has a YouTube Channel that includes some information about HDR.
Adaptive Loudness and Compression, as heard in Rob Bridgett's mix for Zorbit's Math Adventure. Mix snapshots are triggered by the device's output state: headphones, speaker, or AirPlay. This can help protect the user's hearing and otherwise optimize for the listening scenario. Compression was also applied based on the volume measured at the device's mic input, to help the listener hear better when playing in a loud environment. This was applied only with headphones, because speaker or AirPlay output would form a feedback loop into the microphone.
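The device-state and mic-driven logic described above might be sketched like this. All snapshot levels, thresholds, and ratios are invented for illustration; they are not the actual Zorbit's Math Adventure values:

```python
# One snapshot per output state of the device.
DEVICE_SNAPSHOTS = {
    "headphones": {"master_db": -12.0},  # quieter, to protect hearing
    "speaker":    {"master_db": -6.0},
    "airplay":    {"master_db": -3.0},
}

def adaptive_gain(device, ambient_db):
    """Return (master_db, makeup_db) for the current listening scenario.

    ambient_db is the level measured at the device's mic input.
    """
    master = DEVICE_SNAPSHOTS[device]["master_db"]
    makeup = 0.0
    # Mic-driven compression/makeup only on headphones: speaker or
    # AirPlay output would feed back into the microphone.
    if device == "headphones" and ambient_db > 60.0:
        makeup = min(12.0, (ambient_db - 60.0) * 0.5)  # capped boost in loud rooms
    return master, makeup
```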