The world is all around us. We have a lot of cues that make us aware of how we fit into a 3D space. Many of these cues, senses and reactions are survival instincts: the ability to hear someone walk up behind us, to see movement in our peripheral vision, a sense of how we are oriented when we move our bodies or swivel our heads. All of this creates an overwhelming amount of data that we seamlessly interpret as we go about our day getting our morning coffee, driving, walking through crowds, diving into pools, and everything else that we as people do in our lives.
As movies and television have evolved, we've learnt a new language, a new way to interpret the world around us based on a window into another world: the screen. What we've now become accustomed to via that screen is jumps in location, perspective, time and space. The screen is sometimes supplemented with surround sound, but usually we're just left to our imagination to weave the data we get into human stories.
Close-ups, wide shots, panning, zooming, and many other devices are all an established part of the storyteller's toolkit. Then along comes Virtual Reality (VR) and 360-degree video, and suddenly we have a tendency to fall over reaching out for things that aren't there. We lose perspective, and even though this world, like ours, is not flat, we are lost!
There are differences between VR and 360-degree images, but what they both share is an immersive environment and the ability to represent, as reality does, what is going on all around you. Yet despite both being closer to reality, we find ourselves struggling to know how to use them.
There are several disconnects for us as viewers. A major one is that we do not physically navigate our way through the world in the viewer. We can spin around and look up or down, but we don't have all of the normal cues leading up to this. We didn't walk into a town square or into the theatre, so we immediately look around to get our bearings as if we had just been teleported in. Nor can we really move around in 3D space; we are restricted to rotating to see what is next to us. In VR environments you can navigate through space, but generally not in the normal way, like walking over to something you see. So basically, we're a little lost, and as a result it takes us time to orient ourselves in what is a static scene with action someplace that we have to find.
Throughout the whole VR/360-degree experience, the director can't guide you through the story in the same way as a 2D film. They no longer have the fast cuts, the zooming, and the pacing they are accustomed to using to move you through the story. As a result, there is a focus on the novelty as opposed to an improvement in how we can tell the story.
There is a lot of experimentation going on today, and a lot of it is on the capture side: how do we grab better immersive images and stitch them together? Primestream is dedicated to managing those files. Images are captured, metadata is then added and managed around those files, and ultimately the workflow is organised around them. This puts us right in the middle of the efforts to better understand how to tell stories with these new assets, pulling us into discussions around the creative process. As creative tools like Premiere evolve to meet the needs of editors using them, we too are evolving how we manage the data and metadata that add value to those captured images. We're no longer engaged in tracking content segments with time-based markers alone, but rather content that needs to be marked up in time and space with directional data: not only when, but also where the interesting content is.
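To make the idea of marking content in time and space concrete, here is a minimal sketch of what such a marker might look like. The names, fields, and the field-of-view test are illustrative assumptions for this article, not Primestream's actual schema or API: a traditional marker carries only a time range, while a 360-degree marker adds a direction (yaw and pitch) so tools can tell the viewer where to look as well as when.

```python
from dataclasses import dataclass

@dataclass
class SpatialMarker:
    """Hypothetical marker locating interest in both time and 360-degree space."""
    start_tc: float   # start, in seconds from clip start
    end_tc: float     # end, in seconds from clip start
    yaw_deg: float    # horizontal direction of interest, 0-360 degrees
    pitch_deg: float  # vertical direction, -90 (down) to +90 (up)
    label: str        # editorial description of the content

def markers_in_view(markers, t, view_yaw, fov_deg=90.0):
    """Return markers active at time t whose direction falls inside the
    viewer's horizontal field of view, centred on view_yaw."""
    half = fov_deg / 2.0
    hits = []
    for m in markers:
        if not (m.start_tc <= t <= m.end_tc):
            continue  # not active at this moment
        # smallest angular difference between marker yaw and view yaw
        diff = abs((m.yaw_deg - view_yaw + 180.0) % 360.0 - 180.0)
        if diff <= half:
            hits.append(m)
    return hits

markers = [
    SpatialMarker(0.0, 10.0, 350.0, 0.0, "door opens"),
    SpatialMarker(0.0, 10.0, 180.0, 0.0, "stage action"),
    SpatialMarker(20.0, 30.0, 10.0, 0.0, "exit reveal"),
]
# A viewer facing yaw 0 at t=5 sees only the "door opens" marker.
print([m.label for m in markers_in_view(markers, 5.0, 0.0)])
```

The design choice worth noting is that the time range and the direction are independent axes of the same record, which is what makes spatial markers a superset of the familiar time-based ones.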
We fully expect the requirements for VR/360 asset management to continue to evolve as people perfect their techniques over time, and we know that all the creative tools will need to move quickly to keep up with that innovative re-imagining of how we tell stories. We are not there yet, but you can already see the early signs of an exciting new narrative language taking shape.