A number of powerful, ready-to-use cross-platform 3D audio engines are available. We’ll outline the most well-known of these below.
• OpenAL is a cross-platform 3D audio rendering API that was deliberately designed to mimic the design of the OpenGL graphics library. Early versions of the library were open source, but it is now licensed software. A number of vendors provide implementations of the OpenAL API specification, including OpenAL Soft (http://kcat.strangesoft.net/openal.html) and AeonWave-OpenAL (http://www.adalin.com).
• AeonWave 4D is a low-cost audio library for Windows and Linux by
Adalin B.V.
• FMOD Studio is an audio authoring tool that features a “pro audio” look
and feel (http://www.fmod.org). A full-featured runtime 3D audio API
allows assets created in FMOD Studio to be rendered in real time on the
Windows, Mac, iOS and Android platforms.
• Miles Sound System is a popular audio middleware solution by RAD Game Tools (http://www.radgametools.com/miles.htm). It provides a powerful audio processing graph and is available on virtually every gaming platform imaginable.
• Wwise is a 3D audio rendering engine by Audiokinetic (https://www.
audiokinetic.com). It is notably not based around the concepts and features of a multi-channel mixing console, but rather presents the sound
designer and programmer with a unique interface based on game objects and events.
• Unreal Engine of course provides its own 3D audio engine and powerful integrated tool chain (http://www.unrealengine.com). For an in-depth look at Unreal’s audio feature set and tools, see [40].
13.6 Game-Specific Audio Features
On top of the 3D audio rendering pipeline, games typically implement all
sorts of game-specific features and systems. Some examples include:
• Split-screen support. Multiplayer games that support split-screen play
must provide some mechanism that allows multiple listeners in the 3D
game world to share a single set of speakers in the living room.
• Physics-driven audio. Games that support dynamic, physically simulated
objects like debris, destructible objects and rag dolls require a means of
playing appropriate audio in response to impacts, sliding, rolling and
breaking.
• Dynamic music system. Many story-driven games require the music to
adapt in real time to the mood and tension of events in the game.
• Character dialog system. AI-driven characters seem a great deal more realistic when they speak to one another and to the player’s character.
• Sound synthesis. Some engines continue to provide the ability to synthesize sounds “from scratch” by combining various kinds of waveforms
(sinusoid, square, sawtooth, etc.) at various volumes and frequencies.
Advanced synthesis techniques are also becoming practical for use in
real-time games. For example:
◦ Musical instrument synthesizers reproduce the natural sound of an
analog musical instrument without the use of pre-recorded audio.
◦ Physically based sound synthesis encompasses a broad range of techniques that attempt to accurately reproduce the sound that would
be made by an object as it physically interacts with a virtual environment. Such systems make use of the contact, momentum,
force, torque and deformation information available from a modern physics simulation engine, in concert with the properties of the
material from which the object is made and its geometric shape,
in order to synthesize suitable sounds for impacts, sliding, rolling,
bending and so on. Here are just a few links to research on this fascinating topic: http://gamma.cs.unc.edu/research/sound, http://gamma.cs.unc.edu/AUDIO_MATERIAL, http://www.cs.cornell.edu/projects/sound, and https://ccrma.stanford.edu/~bilbao/booktop/node14.html.
◦ Vehicle engine synthesizers aim to reproduce the sounds made by
a vehicle, given inputs such as the acceleration, RPM and load
placed on a virtual engine, and the mechanical movements of the
vehicle. (The vehicle chase sequences in Naughty Dog’s three Uncharted games all used various forms of dynamic engine modeling,
although technically these systems were not synthesizers, because
they produced their output by cross-fading between various prerecorded sounds.)
◦ Articulatory speech synthesizers produce human speech “from scratch”
via a 3D model of the human vocal tract. VocalTractLab (http://
www.vocaltractlab.de) is a free tool that allows students to learn
about and experiment with vocal synthesis.
• Crowd modeling. Games that feature crowds of people (audiences, city
dwellers, etc.) require some means of rendering the sound of that crowd.
This is not as simple as playing lots and lots of human voices over top
of one another. Instead, it is usually necessary to model the crowd as
multiple layers of sounds, including a background ambience plus individual vocalizations.
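To make the physics-driven audio idea above concrete, the sketch below maps a collision impulse reported by the physics engine to a playback gain and a sound-bank name. The logarithmic gain curve, the impulse thresholds and the bank names are all invented for illustration; a real engine would tune these per material.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <string>

// Map a collision impulse (in kg*m/s) to a playback gain in [0, 1].
// Impulses at or below 'minImpulse' are inaudible; gain grows
// logarithmically and saturates at 'maxImpulse'.
inline float ImpactGain(float impulse,
                        float minImpulse = 0.5f,
                        float maxImpulse = 50.0f)
{
    if (impulse <= minImpulse) return 0.0f;
    const float t = std::log(impulse / minImpulse)
                  / std::log(maxImpulse / minImpulse);
    return std::clamp(t, 0.0f, 1.0f);
}

// Pick a (hypothetical) sound bank from the impulse magnitude, so that
// light taps and heavy crashes trigger different recordings.
inline std::string ImpactBank(float impulse)
{
    if (impulse < 5.0f)  return "impact_soft";
    if (impulse < 25.0f) return "impact_medium";
    return "impact_hard";
}
```

In practice the material pair involved in the contact would select among several such banks, and the gain would feed directly into the 3D voice that plays the chosen asset.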
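The basic waveform synthesis described above (sinusoid, square, sawtooth) amounts to evaluating a periodic function per sample. The following minimal, naive oscillator shows the idea; a production synthesizer would band-limit the square and sawtooth shapes to avoid aliasing.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

enum class Wave { Sine, Square, Sawtooth };

// Generate 'count' samples of a basic waveform at the given frequency
// and amplitude. 'sampleRate' is in Hz; output samples lie in [-amp, amp].
std::vector<float> Synthesize(Wave shape, float freqHz, float amp,
                              int count, float sampleRate = 44100.0f)
{
    std::vector<float> out(static_cast<size_t>(count));
    for (int i = 0; i < count; ++i) {
        // Normalized phase in [0, 1) for this sample.
        const float phase = std::fmod(freqHz * i / sampleRate, 1.0f);
        float s = 0.0f;
        switch (shape) {
        case Wave::Sine:     s = std::sin(2.0f * 3.14159265f * phase); break;
        case Wave::Square:   s = (phase < 0.5f) ? 1.0f : -1.0f;        break;
        case Wave::Sawtooth: s = 2.0f * phase - 1.0f;                  break;
        }
        out[static_cast<size_t>(i)] = amp * s;
    }
    return out;
}
```

Summing several such oscillators at different frequencies and volumes is the essence of additive synthesis.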
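The cross-fading technique mentioned for the Uncharted vehicle engines can be sketched as a blend between two loops recorded at known RPMs. The function below (hypothetical, not Naughty Dog's actual code) computes equal-power gains for the low-RPM and high-RPM recordings so that perceived loudness stays roughly constant across the blend.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <utility>

// Given the current engine RPM and two loop recordings captured at
// lowRpm and highRpm, return (lowGain, highGain) cross-fade weights.
// Equal-power fading keeps lowGain^2 + highGain^2 == 1 over the blend.
std::pair<float, float> EngineCrossfade(float rpm, float lowRpm, float highRpm)
{
    const float t =
        std::clamp((rpm - lowRpm) / (highRpm - lowRpm), 0.0f, 1.0f);
    const float high = std::sqrt(t);
    const float low  = std::sqrt(1.0f - t);
    return { low, high };
}
```

A full implementation would also pitch-shift each loop toward the current RPM and blend across more than two recordings, but the gain calculation generalizes directly.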
We can’t possibly cover everything from the above list in one chapter. But let’s spend a few more pages covering some of the most common game-specific features.