VR and 3D Audio - Ask The Experts


by Pieter Schillebeeckx
Issue 113 - May 2016

What's the difference between 2D & 3D audio?
There are two parts to this question when it comes to audio for VR. The key difference is that 2D is a single horizontal slice, so 5.1 or traditional surround sound in a cinema would be considered 2D, whereas 3D adds height information both above and below you. The second part relates to static versus dynamic audio, and this goes for both 2D and 3D. Until now, we've been used to consuming audio in a static manner to match a static image. With VR the image is no longer static as it tracks head movement, making for a dynamic experience. For virtual reality it's the dynamic nature of the audio that's extremely important and that completes the immersive experience.


Is 3D audio the same as object-based audio?
3D audio can be part of object-based audio, but they're not one and the same. 3D audio, often referred to as immersive audio, aims to transport a listener to an environment, immersing them in the sound, whether at a concert or a basketball game.
Object-based audio is a radical departure from traditional audio formats, such as stereo or 5.1, in two important ways: it supports many audio playback formats natively from one single audio deliverable, and it offers personalisation. To achieve this, an object-based audio stream is not a pre-baked stereo or 5.1 mix but rather a selection of audio stems that are used by the consumer's device, such as a set-top box or a mobile device, to create the desired playback format for the listener's set-up, whether that's headphones or a home theatre.
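To make that idea concrete, here is a minimal Python sketch of how a device-side renderer might turn one object-based deliverable into different playback formats. The stem names, azimuths and pan law are all assumptions for illustration, not any broadcaster's actual scheme.

```python
import numpy as np

# One object-based deliverable: audio stems plus positional metadata.
# Stem names and azimuths (degrees, 0 = front, +90 = left) are invented
# for this example; a real deliverable carries much richer metadata.
def make_stems(n_samples, seed=0):
    rng = np.random.default_rng(seed)
    return {
        "ambience":   {"audio": 0.1 * rng.standard_normal(n_samples), "azimuth": 0.0},
        "commentary": {"audio": 0.3 * rng.standard_normal(n_samples), "azimuth": 0.0},
        "effects":    {"audio": 0.2 * rng.standard_normal(n_samples), "azimuth": 45.0},
    }

# Speaker azimuths for two of the formats a device might render to.
LAYOUTS = {
    "stereo": [30.0, -30.0],                      # L, R
    "5.1":    [30.0, -30.0, 0.0, 110.0, -110.0],  # L, R, C, Ls, Rs (LFE omitted)
}

def render(stems, layout="stereo", gains=None):
    """Render the same stems to any speaker layout (naive pan law)."""
    speakers = np.radians(LAYOUTS[layout])
    n = len(next(iter(stems.values()))["audio"])
    out = np.zeros((len(speakers), n))
    for name, stem in stems.items():
        g = 1.0 if gains is None else gains.get(name, 1.0)
        az = np.radians(stem["azimuth"])
        w = np.clip(np.cos(0.5 * (speakers - az)), 0.0, None) ** 2
        w /= np.sqrt(np.sum(w ** 2)) + 1e-12     # constant-power normalisation
        out += g * np.outer(w, stem["audio"])    # (speakers, samples)
    return out
```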


If we're looking at all the different playback formats, including virtual reality, it's clear that we can't keep on creating more and more mixes, so object-based is definitely the future for audio, whether delivered to a VR headset or the ultimate home theatre.

I think a very important part of object-based audio is the personalisation element. For example, by sending multiple commentator stems for a football game you could say: "Well, I don't want that neutral commentator; I'm a Liverpool fan, so I want the Liverpool-biased commentary." To take it one step further, you can also set the balance between the background ambience, the feeling of being there, and the commentator of your choice.
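Continuing the sketch above, personalisation then amounts to the listener's device choosing which stems to render and at what level, for example:

```python
# Same deliverable, two personalised renders: the device picks the
# balance, not the broadcaster. Stem names are the hypothetical ones
# from the sketch above; a biased commentary feed would simply be
# another stem in the deliverable.
stems = make_stems(48000)
crowd_heavy = render(stems, "stereo", gains={"ambience": 1.0, "commentary": 0.4})
voice_heavy = render(stems, "5.1",    gains={"ambience": 0.5, "commentary": 1.0})
```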


So is 3D audio the same as object-based audio?

No, it's not. 3D audio can be part of an object-based deliverable, and 3D audio as an ambience bed works extremely well in an object-based environment because you can augment it with mono or stereo stems such as sound effects or narration.

I understand 3D audio is not a new technology. When was it developed, and for what reason originally?


3D audio has been around for quite a long time. If you look at surround sound as a whole, Disney's Fantasia introduced surround sound in the 1940s.
SoundField developed the very first ambisonic B-format microphone in the late 70s, with the first commercial product coming out in 1978, and it was fully 3D-audio capable even back then, so this is nothing new. The challenge then was what to do with the 3D audio and how to play it back outside of a laboratory. The early use of 3D audio was not about being immersive but about the flexibility it gave you: you may only want a mono or stereo output from this 3D audio capture, but you can steer around and reposition the microphone in post-production.
This is a very important point, because this is where SoundField and virtual reality really start to gel together. The way virtual reality is captured from a 3D video point of view means we can smoothly move around the space; SoundField B-format captures audio in exactly the same way, allowing us to use exactly the same head-tracking or positional data used to position the video to move the audio perfectly in sync.
All we have to do is use the four SoundField B-format audio channels together with the video. The head-tracking information, which is used to move the video around, can then be used to steer the audio.
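As a rough sketch of that steering step, the function below counter-rotates a first-order B-format block against head yaw. This is only the horizontal part (real head tracking adds pitch and roll rotations of the same form), and channel sign conventions vary between B-format flavours, so treat the signs here as an assumption.

```python
import numpy as np

def rotate_bformat_yaw(w, x, y, z, head_yaw_rad):
    """Counter-rotate a first-order B-format block against listener head yaw.

    w, x, y, z are equal-length sample arrays (traditional B-format:
    X = front/back, Y = left/right, Z = up/down). Sign conventions differ
    between B-format flavours, so the signs below are one assumption.
    """
    # Rotate the sound field opposite to the head so that sources stay
    # fixed in the virtual world while the listener looks around.
    theta = -head_yaw_rad
    c, s = np.cos(theta), np.sin(theta)
    x_rot = x * c - y * s
    y_rot = x * s + y * c
    return w, x_rot, y_rot, z   # W and Z are untouched by a pure yaw rotation
```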

If I'm wearing VR goggles I need to hear sounds behind me, and when I turn to see what they are they should then be in front, so what are the challenges in doing that and maintaining this audio interaction?


There are a lot of different ways you could do audio for VR, and as you progress through them the experience becomes more realistic for the consumer, which is the end goal.
First of all, you could just have fixed stereo which doesn't move with the video: you lay down a stereo track as you always would for a standard video shoot. A lot of the VR content out there is exactly that: you move your head and the audio stays completely static. Clearly this is not satisfactory and we are really missing out here; in the end, it is the audio that will make you truly believe you're in a virtual reality.


The second thing you could do is to use head tracking to play back stereo audio that is in line with what you're seeing.
As you move your head, the audio will pan around in sync with the video. At this point you won't really hear a discrete source behind you over headphones, because it's just a stereo image facing forward, or if you do hear the sound you will not localise it behind you. Again, this is an improvement, and there is more and more virtual reality material produced in this way, but clearly it's not the holy grail.
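A crude sketch of this second approach, assuming the stereo pair can be treated as two virtual sources at roughly plus and minus 30 degrees; note that, exactly as described above, nothing here can ever be localised behind the listener.

```python
import numpy as np

def head_tracked_stereo(left, right, head_yaw_rad):
    """Re-pan a fixed stereo pair against head yaw.

    Treats L and R as virtual sources at +/-30 degrees (an assumption) and
    pans them with a constant-power law. The image is forward-facing only:
    clipping the relative azimuth to +/-90 degrees means no source can ever
    sit behind the listener, which is exactly the limitation described above.
    """
    out = np.zeros((2, len(left)))
    for sig, src_az in ((left, np.radians(30.0)), (right, np.radians(-30.0))):
        az = np.clip(src_az - head_yaw_rad, -np.pi / 2, np.pi / 2)
        p = 0.5 * (1.0 - np.sin(az))            # 0 = hard left, 1 = hard right
        out[0] += np.cos(p * np.pi / 2) * sig   # left channel gain
        out[1] += np.sin(p * np.pi / 2) * sig   # right channel gain
    return out
```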
The true holy grail is being able to recreate complete 3D audio over headphones. Binauralisation aims to do exactly this: it mimics the spatial cues generated by your head and your ears to trick your brain into hearing real 3D audio. This technology has also been around for a long time, but it has been fraught with challenges because every person's head and ears are different. When you measure a given person's head the results are extremely convincing; however, coming up with a set of measurements that works for a wide range of people has been challenging, though a lot of progress has been made in recent years.
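One common way to get there, sketched below under the assumption that a set of measured or modelled HRIR pairs is available (the hrirs argument is hypothetical), is to decode the B-format to a ring of virtual loudspeakers and convolve each feed with that direction's head-related impulse responses.

```python
import numpy as np
from scipy.signal import fftconvolve

def binauralise_bformat(w, x, y, z, hrirs):
    """Binauralise first-order B-format via virtual loudspeakers.

    hrirs: list of (azimuth_rad, hrir_left, hrir_right) tuples, one per
    virtual speaker, all HRIRs the same length -- assumed available from a
    measured or modelled set. A basic horizontal first-order decode is used,
    so Z is ignored here (a full 3D decode would add elevated speakers).
    """
    ears = None
    for az, h_l, h_r in hrirs:
        # Classic first-order decode gain for a speaker at this azimuth.
        feed = 0.5 * (np.sqrt(2.0) * w + x * np.cos(az) + y * np.sin(az))
        contrib = np.stack([fftconvolve(feed, h_l), fftconvolve(feed, h_r)])
        ears = contrib if ears is None else ears + contrib
    return ears   # shape: (2, samples + hrir_length - 1)
```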


So this is where it starts to get very exciting. We can capture 3D audio using a SoundField B-format microphone, we can use object-based audio to augment it with other mono or stereo sources, and we can play back 3D audio over headphones, using the video head-tracking data to move them all in sync. Now we are really starting to be immersed in a virtual reality, both from a video and an audio perspective!
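Tying the pieces together, the per-block playback loop is short. This sketch reuses the rotation and binauralisation functions from the examples above; the block source, tracker read and output callables are hypothetical stand-ins for a real VR player's APIs.

```python
def vr_audio_loop(next_block, read_head_yaw, play, hrirs):
    """Per-block playback: the pose data that steers the video steers the audio.

    next_block yields (w, x, y, z) B-format blocks; read_head_yaw returns the
    current head yaw in radians; play outputs a binaural block. All three are
    stand-ins for a real player's APIs. rotate_bformat_yaw and
    binauralise_bformat are the sketches above.
    """
    for w, x, y, z in next_block():
        yaw = read_head_yaw()                   # sampled once per audio block
        w, x, y, z = rotate_bformat_yaw(w, x, y, z, yaw)
        play(binauralise_bformat(w, x, y, z, hrirs))
```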
So from where I am standing it looks like the technology is available to go out and create truly immersive virtual reality experiences. All we need now is lots of creativity to make amazing content.


Tags: iss113 | 2D Audio | 3D Audio | Object-Based Audio | VR | Virtual Reality | Pieter Schillebeeckx

Article Copyright tv-bay limited. All trademarks recognised.
Reproduction of the content strictly prohibited without written consent.


