Under normal gameplay conditions, the listener or “virtual microphone” is
typically positioned at or near the location of the camera, and the sound
sources are modeled where they really are in the environment. Distance-based
attenuation, direct and indirect sound path determination, voice limiting—all
are determined using these realistic positions.
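As a minimal sketch of the realistic case, the listener can simply sit at the camera's position and each sound's gain can fall off with distance from it. The names (`DistanceGain`, `minDist`) and the inverse-distance falloff model are illustrative assumptions, not a prescribed implementation:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float Distance(const Vec3& a, const Vec3& b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Inverse-distance ("1/r") attenuation, clamped so that sources
// closer than minDist play at full gain. The listener position is
// simply the camera position under normal gameplay conditions.
float DistanceGain(const Vec3& listenerPos, const Vec3& sourcePos,
                   float minDist)
{
    float d = Distance(listenerPos, sourcePos);
    return (d <= minDist) ? 1.0f : minDist / d;
}
```

Real engines typically offer several falloff curves (linear, logarithmic, custom splines), but they all share this shape: gain is a function of listener-to-source distance.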
However, during an in-game cinematic—a portion of the game in which
player control is suspended so that a story moment can take place—the camera often pulls far away from the player character. This tends to
wreak havoc with a 3D audio system. We could simply keep the listener/mic
locked to the location of the camera, but this is not always appropriate. For
example, if there’s a long shot of two characters speaking, we probably still
want to mix so that the characters’ voices can be heard, even though, physically speaking, they are too far away for their voices to carry. In this case, we might want
to detach the listener from the camera and artificially position it nearer to the
characters.
Mixing in-game cinematics is a lot closer to mixing a film. As such, a sound
engine needs to be capable of “breaking the rules” and doing things that aren’t
necessarily physically realistic.
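One way to support this kind of rule-breaking is a listener that tracks the camera by default but can be pinned elsewhere for the duration of a cinematic. This is only a sketch; the class and method names are assumptions for illustration:

```cpp
struct Vec3 { float x, y, z; };

// A listener that normally follows the camera, but whose position can
// be overridden during an in-game cinematic (e.g., placed near two
// characters in a long shot so their dialog remains audible).
struct Listener
{
    Vec3 m_pos{};
    bool m_overridden = false;

    // Normal gameplay: called once per frame to track the camera.
    void UpdateFromCamera(const Vec3& cameraPos)
    {
        if (!m_overridden)
            m_pos = cameraPos;
    }

    // Cinematic: pin the listener at an artistically chosen position,
    // ignoring the (possibly distant) camera.
    void OverridePosition(const Vec3& pos)
    {
        m_overridden = true;
        m_pos = pos;
    }

    // Return control to the camera when the cinematic ends.
    void ClearOverride() { m_overridden = false; }
};
```

The cinematic system would call `OverridePosition()` when the shot begins and `ClearOverride()` when gameplay resumes; all distance attenuation and sound-path logic then operates on the overridden position without any special cases.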
13.5.9 Audio Engine Survey
It should be evident by now that creating a 3D audio engine is a massive
undertaking. Luckily for us, lots of people have already put a great deal of
effort into this task, and the result is a wide range of audio software that we
can use pretty much out of the box. This ranges from low-level sound libraries
all the way to fully featured 3D audio rendering engines.
In the following sections, we’ll survey a few of the most common audio
libraries and engines. Some of these are specific to a particular target platform,
while others are cross-platform.
13.5.9.1 Windows: The Universal Audio Architecture
In the early days of PC gaming, the feature set and architecture of PC sound
cards varied a great deal from platform to platform and vendor to vendor.
Microsoft attempted to encapsulate all of this diversity within its DirectSound
API, supported by the Windows Driver Model (WDM) and the Kernel Audio
Mixer (KMixer) driver. However, because vendors could not agree on a common feature set or set of standard interfaces, the same functionality would
often be realized in very different ways on different sound cards. This required the operating system to manage a very large number of incompatible
driver interfaces.
For Windows Vista and beyond, Microsoft introduced a new standard
called the Universal Audio Architecture (UAA). Only a limited set of hardware features is supported by the standard UAA driver API—all remaining features are implemented in software (although hardware manufacturers
are still free to offer additional “hardware acceleration” features, as long
as they supply custom drivers to expose them). Although the introduction
of UAA limited the competitive advantage of prominent sound card vendors
like Creative Labs, it did have the desired effect of creating a solid, feature-rich
standard, which could be used by games and PC applications in a convenient
way.