The state of 3D
Rewind a couple of years and 3D was a hot topic, but there was very little production and few companies able to offer suitable support for efficient 3D post production. Many people thought it would fade, as it had before. A few even hoped it would go away! Now many are supporting stereoscopic 3D. The most activity is in the DI (digital post for film) area – simply because many of the new digital cinemas can support 3D, and it already has a proven track record of pulling in the audiences – and the money. The 3D screenings of Disney’s “Meet the Robinsons” generated almost three times the money per screen compared with 2D, and Paramount’s “Beowulf” grossed twice as much.
Television lags but, in the UK, the BBC and, notably, Sky are taking a serious interest. Viewing still almost always requires darkish glasses – not the red-and-green ‘anaglyph’ ones – as well as a special 3D television screen. The glasses are pretty much accepted in cinemas but some feel they are not suitable for home viewing – after all, you will look really odd when you pop into the kitchen to make a cuppa during the ads! As I write, scientists are beavering away to make viable autostereoscopic displays, which allow people to see the 3D effect without special glasses. There are screens around – look out for them at IBC (try Auto Creative). So far, though, there is still room for improvement: the resolution of the viewed image is significantly reduced, and the 3D effect can only be appreciated within restricted viewing zones.
3D is here to stay this time around because the accuracy of digital video (over film) and a host of highly sophisticated tools in the scene-to-screen workflow have now taken the headaches out of viewing, allowing audiences to enjoy feature-length shows in comfort. Hollywood is making about 10 3D features a year, and the worldwide production rate is over 25 – and rising.
However, even with good 3D there is still a fundamental viewing problem: convergence and the eyes’ focus work out of sync. This is because the plane of the screen is always the audience’s point of focus, while convergence changes the distance at which objects appear relative to the screen plane. This never happens in real life.
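A toy model makes the mismatch concrete. With eye separation e and viewing distance D, a screen parallax p makes the eyes converge at distance D·e/(e − p), while focus stays locked on the screen at D. The figures below are illustrative assumptions, not measured values:

```python
# Toy model of the convergence/focus mismatch in stereoscopic viewing.
# Assumed values: 63.5 mm eye separation, 10 m viewing distance.
E = 63.5      # interocular (eye) separation, mm
D = 10_000.0  # distance from viewer to screen plane, mm

def perceived_distance(parallax_mm):
    """Distance at which the eyes converge for a given screen parallax.

    Positive parallax pushes the object behind the screen; negative
    parallax pulls it in front. The eyes' focus, however, always stays
    on the screen plane at D.
    """
    return D * E / (E - parallax_mm)

for p in (-30.0, 0.0, 30.0):
    print(f"parallax {p:+.0f} mm -> converge at "
          f"{perceived_distance(p) / 1000:.1f} m, focus at {D / 1000:.1f} m")
```

Whatever the parallax, focus never leaves the screen plane – which is exactly the conflict the eyes never experience in real life.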
The challenge for post
The successful handling of stereo 3D in post requires everything needed for 2D – and a lot more. Assimilate’s Scratch has been active in the area for around three years – a long time in digital 3D – and was famously involved with the post for ‘U2 3D’. One post activity involves trying to make the left and right ‘eye’ cameras track each other correctly, matching parameters such as convergence, relative image size/rotation, relative image colour, keystoning and, to a degree, focus. Jeff Edson, CEO of Assimilate, commented, “In post, an enormous amount of time is spent fixing the anomalies of the camera. The digital camera world has the potential to make it simple, and at some point, the whole thing will become easy, and then post becomes purely a creative medium.” Some 3D camera rigs have now gained intelligence through real-time image analysis and can self-correct, so beginning to reduce that area of post. However, many rigs are not yet that advanced.
There is also the question of whether to shoot parallel or converged – which has a direct impact on the requirements of post’s tools.
A more obvious area of complexity is the fact that, with two cameras running, there is twice as much footage shot. Edson adds, “A major Scratch feature has always been its data management. 3D is twice as hard as 2D but that’s already handled in Scratch’s ‘Construct’ area.”
That helps with the management, but storage and bandwidth are further considerations. There is a widely held view that being able to see the 3D result as you work is a great asset – which means using twice the storage, outputting twice the bandwidth and running two video streams in sync. This panned out well for Quantel, which was also quick to adapt its Pablo and iQ video editing platforms to handle 3D. Marketing Director Steve Owen revealed that the kit had more power under the hood that could be accessed: “We’d always had a second port designed into Pablo (and iQ) so we use that to give us live uncompressed real-time 3D and have added 3D tools.”
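The doubling is easy to quantify with some back-of-envelope arithmetic. The frame size, bit depth and rate below are illustrative assumptions only, not Quantel’s specification:

```python
# Back-of-envelope data rate for uncompressed stereo 3D.
# Assumed format: 2K (2048x1080), 10-bit RGB, 24 fps -- illustrative only.
width, height = 2048, 1080
bits_per_pixel = 3 * 10          # RGB, 10 bits per channel
fps = 24

one_eye_bps = width * height * bits_per_pixel * fps
stereo_bps = 2 * one_eye_bps     # left and right eyes running in sync

print(f"one eye: {one_eye_bps / 8 / 1e6:.0f} MB/s")
print(f"stereo:  {stereo_bps / 8 / 1e6:.0f} MB/s")
# An hour of stereo rushes at this rate:
print(f"per hour: {stereo_bps / 8 * 3600 / 1e12:.1f} TB")
```

At these assumed numbers a single eye already runs to roughly 200 MB/s, so sustaining two uncompressed streams in real time is a genuine engineering demand, not a marketing flourish.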
FotoKem’s 3D Pablos helped it to complete the post for the 3D shoot of “Hannah Montana & Miley Cyrus: Best of Both Worlds Concert Tour” in 11 weeks. The 3D concert packed theatres and took $31 million in the first weekend – an average of $45,000 per screen. It has since passed $70 million – more evidence that 3D can pay. Just in case you think that 11 weeks is not that tight a schedule, you might like to know that this was the tightest 3D post completion ever. Before the advanced camera rigs and all-digital workflows, just correcting the camera misalignments was a monster task, with an appropriately sized price tag. This new era of 3D post can deliver far better quality, much quicker and, hopefully, at a better price.
The Quantel 3D package includes real-time operation with uncompressed real-time left and right ‘eyes’ in view. The 3D toolset includes real-time convergence tools that can fix or set the relative location of an object in space while looking at the live result. It goes without saying that 3D adds another dimension, and with it the possibility of further tools for 3D manipulation – another space in which to be creative. While these degrees of freedom are new, over-exuberant 3D editors may well get carried away and overuse them to spectacular effect – as happened with the early DVEs. That was on television; the impact on the big screen would be even more painful! Even so, there are many early 3D movies that include at least one moment when an object, usually unpleasant or aggressive-looking, shoots out of the screen – seemingly within reach of the shocked audience. Owen brings us a bit closer to reality when he points out that, “Not everything has changed. Post is still about narrative tools. The 3D is just another part of post – which is still about making the images look beautiful.”
One area where 3D post requires some serious power is colour correction. It’s very unusual for two cameras and lenses to match in colourimetry, and when using mirror rigs this mismatch can become even more acute. Grading both eyes to match perfectly is not a simple task, but without it the end result is never appealing to watch. In addition, the various technologies used in theatres for the audience to view 3D films introduce different changes to the left/right eye colour, which also has to be taken into account during post. Then the colour grader himself may have to work wearing ‘sunglasses’ – a strange experience that not all colourists warm to. If using linear polarized glasses, looking at the operating system’s GUI can be a problem as LCD screens are naturally polarized. So 3D post can be a real headache.
The Foundry has stepped into 3D with its Nuke Ocula tools for stereo. They can operate on just one channel, with left and right eyes treated the same, or on two, with each treated separately. Intelligent operation comes from creating a disparity map between the left and right images to calculate their differences. An operation on one eye is then also effected on the other in the right way, so that, say, rotoscoping or painting on the left eye is automatically mapped onto the right. Among other very useful tools are camera correction adjustments for horizontal and vertical position and rotation, including keystone effects.
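The disparity-map idea can be sketched in miniature. For each pixel in the left image, search along the same row of the right image for the best match; the horizontal offset of that match is the disparity. The toy version below works on a single row of grey values – production tools like Ocula match 2-D patches with sub-pixel accuracy and far more robust logic:

```python
# Toy 1-D disparity estimation by block matching along a scanline.
# Sketch of the principle only -- real stereo tools are far more robust.

def disparity_row(left, right, window=1, max_disp=4):
    """For each position in `left`, find the horizontal shift into
    `right` with the lowest sum-of-absolute-differences cost."""
    n = len(left)
    disp = []
    for x in range(n):
        best_d, best_cost = 0, float("inf")
        for d in range(0, max_disp + 1):
            if x - d < 0:
                continue  # shift would fall off the left edge
            cost = sum(
                abs(left[min(max(x + k, 0), n - 1)] -
                    right[min(max(x - d + k, 0), n - 1)])
                for k in range(-window, window + 1)
            )
            if cost < best_cost:
                best_d, best_cost = d, cost
        disp.append(best_d)
    return disp

# The right eye sees the scene shifted 2 pixels left, so the textured
# region should come back with disparity 2.
left = [0, 0, 10, 80, 90, 80, 10, 0, 0, 0]
right = left[2:] + [0, 0]
print(disparity_row(left, right))
```

Once such a map exists, a paint stroke made at position x in the left eye can be replayed at x minus the local disparity in the right eye – which is the essence of the automatic left-to-right mapping described above.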
In fact there are plenty of ways of getting it wrong and ending up with something that looks absurd or, even worse, slightly incorrect. Avoiding them is a matter of obeying 3D editing grammar, which is really about perception: the way our brains are programmed to interpret what our eyes see, along with the information on where they are pointing. Stereoscopic 3D is not real 3D – you cannot walk around a 3D cinema and see an object from all sides; there is only one view. In fact it’s best to keep your eyes in relatively the same place (or rather following the main action rather than roaming the screen) all the time, as any large movement reveals that objects on the screen don’t move as they should. In truth it’s all trickery – an illusion that tricks our perception into thinking we are seeing 3D, when in fact it’s just something that can look like 3D, if everyone obeys the rules... rather like stereo sound.
There is a whole new layer of editing ‘dos and don’ts’ for 3D: the grammar of piecing together 3D footage. This is really important because watching 3D is a far more immersive experience than 2D, so it is much easier for an error to cause upset while we think we are in a 3D world. One of the easiest rules to understand concerns cuts: the outgoing and incoming shots should have similar convergence – meaning that they are perceived to be at a similar distance from the audience. Why? Because that’s how we see the real world. The distance of our focus and convergence changes as we look around, or as things move around, but not suddenly while we are looking in the same direction. Obeying this to the letter would lead to editorial nightmares, so occasionally breaking the rule is usually acceptable. Fast 3D cuts are to be avoided, even if they obey the distance rule, as they can be too much for our perception to keep up with and can cause headaches.
Editors of 3D need to understand a few more things about human perception. Understandably, it assumes that the distance between our two eyeballs – well, actually, their lenses (the interocular distance, reckoned to average 63.5 mm) – does not change after we’ve finished growing. It also knows a bit about the focal length of the eyes’ lenses. It’s easy to break these assumptions when shooting, which has the effect of making objects look strangely the wrong size. This is down to the choice of the interaxial distance between the cameras relative to the focal length of the camera lenses. Shooting with a relatively short interaxial camera distance can make objects appear much bigger – an effect known as giantism. With the cameras too wide apart the opposite happens, resulting in miniaturization, also known as lilliputism. Using these apparent size changes for the right reasons can be very effective, or very stupid if done accidentally.
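A common rule of thumb captures the effect: to a first approximation, the perceived scale of the scene goes as the ratio of the eyes’ interocular distance to the cameras’ interaxial distance. The sketch below uses only that simplification – a real rig must also account for focal length, screen size and viewing distance:

```python
# Rule-of-thumb model of giantism vs miniaturization.
# Simplification: perceived scale ~ interocular / interaxial.
INTEROCULAR_MM = 63.5   # average human eye separation

def apparent_scale(interaxial_mm):
    """Rough perceived scale factor for a given camera separation."""
    return INTEROCULAR_MM / interaxial_mm

for b in (20.0, 63.5, 150.0):
    s = apparent_scale(b)
    label = "giantism" if s > 1 else ("neutral" if s == 1 else "miniaturization")
    print(f"interaxial {b:5.1f} mm -> scale {s:.2f}x ({label})")
```

Narrow the cameras below the interocular distance and the ratio exceeds 1 (giantism); spread them wider and it drops below 1 (lilliputism) – matching the effects described above.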
Although 3D film has been around for many years, going digital has now changed everything all the way through the scene-to-screen chain. Digital 3D post probably only started in earnest less than three years ago, so the learning curve is still pointing upwards, but not nearly as steeply as it was! Three years ago there were a lot of disbelievers. I recall an IBC tram conversation with an industry figure who was convinced it was all just a flash in the pan as 3D gave him headaches. Apparently it is true that stereo 3D does not work well for a small percentage of people, but there are plenty who do enjoy it. And it’s thriving as never before.
Despite the youth of digital 3D post, current equipment is already very capable. In time there may well be further refinements and a trend to new operating methods – that’s what happens with more experience and product development – but there is no great missing piece of the jigsaw in post. Note, though, that 3D represents another version that will need to be made for distribution. Good camera rigs have become more easily available and are accurate enough for live 3D broadcasts, but still need some correctional work for the more exacting requirements of movie projects. Film audiences seem to accept the use of 3D glasses – though I’m sure everyone would prefer not to need them. For wide development of the home 3D experience, the market needs good autostereoscopic screens, which are, as yet, somewhere over the horizon.
For more information see: www.lightillusion.com/stereoscopic3d.htm