There is an interesting seminar called Size Matters at the KitPlus Show – organised by the publishers of this fine magazine – at MediaCityUK in Salford on 6 November. It’s a talk by cinematographer Alistair Chapman on the way that camera technology is changing, and in particular on how the size of the electronic device that creates the image is growing.
From the days of tubes, through CCDs and now increasingly CMOS imagers, the size of the imager in a broadcast camera has remained the same. They tend to be called 2/3 inch sensors, a name inherited from the outside diameter of the old camera tubes (about 16mm for us modern folk); the active image diagonal itself is around 11mm. The manufacturers of broadcast cameras and lenses standardised on the B4 lens mount and on the distance from the back of the lens to the start of the camera’s optical chain (usually a group of prisms), so any lens will work on any camera.
One of the advantages of this combination is that you get predictable depth of field – the bit of the picture that is in focus. And, given suitable light levels, that depth of field will be deep, meaning that in the heat of live production, you can be slightly out with the focus and still get a good picture. Which is a good thing.
The movie industry has always worked on a different principle: the bigger the film gauge, the better the picture. So while 16mm film was practical – with a frame about the same size as a broadcast camera’s sensor – for the movies you needed to shoot on at least 35mm.
Early movie stocks were not particularly sensitive: the chemists struggled to get enough silver halide crystals into the coating. So shooting movies required a lot of light. It also required a large aperture on the lens to get as much of that light through as possible.
You will recall from your school days that a large aperture equals a small depth of field at the point of focus. Directors took this shallow depth of field forced upon them and made it a central part of the language of movies. They used focus to direct our attention, and this remains a central part of what we call the “film look”.
Indeed, audiences rebel when directors use modern technology to break away. The entirely admirable Ang Lee in his most recent movie Billy Lynn’s Long Halftime Walk used electronic cameras that gave an almost infinite depth of field. Along with the 120 fps frame rate the result, according to most critics, was just too real. It lacked the detachment which makes a movie a story.
Today there are a very large number of people making professional video cameras. Off the top of my head I can think of: AJA, Arri, Blackmagic, Canon, Datavideo, GoPro, Grass Valley, Hitachi, Ikegami, JVC, Kinefinity, Nikon, Panasonic, Red and Sony. Apologies to all the others I have missed. While some continue to make 2/3 inch broadcast cameras, others are using sensors of different sizes, to create distinctive looks for directors.
So does size matter? Well, yes. First, there is the depth of field issue. We now have a whole range of different lens mounts, although at the top end of the market there is a gravitation towards the (originally Arri) PL mount. However the lens is attached to the body, the laws of physics cannot be broken, so with a large format sensor – and typically they are the size of a Super 35mm film frame – you get a shallower depth of field.
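The physics can be sketched with the standard thin-lens depth-of-field approximations. The focal lengths and circle-of-confusion values below are conventional illustrative figures (not from the article), chosen so that both sensors frame roughly the same shot:

```python
# Rough depth-of-field comparison: a 2/3 inch broadcast sensor versus a
# Super 35mm sensor framing the same subject. Illustrative numbers only.

def depth_of_field(focal_mm, f_number, subject_m, coc_mm):
    """Approximate total depth of field (metres) via the thin-lens formulas."""
    s = subject_m * 1000.0  # work in millimetres
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = s * (hyperfocal - focal_mm) / (hyperfocal + s - 2 * focal_mm)
    far = (s * (hyperfocal - focal_mm) / (hyperfocal - s)
           if s < hyperfocal else float("inf"))
    return (far - near) / 1000.0  # back to metres

# Same angle of view on a subject at 3m, both at f/2.8: the Super 35
# frame needs roughly 2.5x the focal length of the 2/3 inch chip, and
# uses a larger circle of confusion (both values are conventional).
dof_23  = depth_of_field(focal_mm=10, f_number=2.8, subject_m=3, coc_mm=0.011)
dof_s35 = depth_of_field(focal_mm=25, f_number=2.8, subject_m=3, coc_mm=0.025)
print(f"2/3 inch: {dof_23:.1f} m in focus")
print(f"Super 35: {dof_s35:.1f} m in focus")
```

With these figures the 2/3 inch camera keeps tens of metres in acceptable focus, while the Super 35 frame holds barely a couple – the shallow look in a nutshell.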
This was the original selling point, when operators first started using Canon stills cameras for video acquisition. It had that film-like ability to push the unimportant bits of the frame out of focus, to make it clear you were watching a movie not seeing real life.
I said earlier in this article that this was not suitable for live broadcast, because there is no time to rush around with tape measures getting the focus precisely right. As if to prove me wrong, we are now seeing large format cameras used in broadcast applications.
The Amira camera from Arri, which uses the same family of large-format sensors as the market-leading Alexa, is now available in a multi-camera format, for live production. A popular late-night chat show on ZDF in Germany is shot using Arri Amiras, a particularly challenging application. It gives the shallow depth of field look that the production company wanted, but for the engineering team there is the reassurance of controlling the cameras through standard Sony camera control units.
A perhaps more obvious application comes from French facilities company PhotoCineRent, which has cornered the market in the live coverage of fashion shows. It uses as many as nine Amiras around the catwalk.
The other gain with large format sensors is much improved sensitivity, because each photosite is bigger so gets to count more photons. Which would be nice, but for a couple of things.
Broadcast cameras tend to have three imager chips (Hitachi uses four). An assembly of prisms sorts the incoming light into red, green and blue components which each go to their own full resolution imager. Large format cameras are single chip: the individual photosites are tuned to red, green and blue in what is now the familiar Bayer pattern.
You need to read the specifications carefully, to check if the manufacturer is claiming a number of pixels – discrete bits of digital information giving resolution to your pictures – or photosites, the photon counters. In a Bayer pattern chip, half the photosites are green and a quarter each are red and blue. A nifty bit of arithmetic – the debayer, or demosaic, process – converts those photosite values into the camera’s full-colour output. But if there are 8,294,400 photosites, there will be some mathematical interpolation to get to 4k video resolution.
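The arithmetic behind that figure is worth spelling out. 8,294,400 is exactly 3840 x 2160 – the Ultra HD raster – so a sensor with that many Bayer photosites has only one colour measurement per output pixel, and the other two must be interpolated:

```python
# The photosite arithmetic behind a "4k" Bayer sensor: in every 2x2 tile
# of the Bayer pattern, two photosites are green, one red and one blue.
width, height = 3840, 2160
photosites = width * height
print(photosites)            # 8,294,400 - the figure quoted above

green = photosites // 2      # half the sites measure green
red = blue = photosites // 4 # a quarter each measure red and blue
print(green, red, blue)

# Full-colour output needs red, green and blue at every pixel, so the
# debayer step must estimate the two missing values at each photosite.
missing_values = photosites * 3 - (green + red + blue)
print(missing_values)        # two of every three values are interpolated
```

So a “4k photosite” chip and a “4k pixel” image are not the same thing, which is exactly why the spec sheet needs the careful read.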
And therein lies the other issue. There is huge interest in shooting 4k Ultra HD. Content that is going to end up as a movie rather than television has a different definition of 4k (4096 pixels across the frame, against Ultra HD’s 3840). Specialist content like displays and live production need different resolutions for different shaped screens.
So developers are cramming ever more photosites into that Super 35mm sized frame. This is the result of remarkable chip fabrication. It is also possible because we can now convert resolution on the fly, so it matters much less that the camera is generating more pixels than we actually need. Indeed, it may well have a benefit in the final image.
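That final benefit is easy to demonstrate. Averaging an oversampled capture down to its target resolution combines several photosite values per output pixel, which cancels out pixel-to-pixel variation. A toy example, using a flat grey patch with a simple +/-10 checkerboard standing in for sensor noise:

```python
# 2x downsample by block averaging - a crude stand-in for the kind of
# resolution conversion that can now be done on the fly.

size = 8
# Flat grey patch (value 100) with a +/-10 checkerboard as toy "noise".
src = [[100 + (10 if (x + y) % 2 == 0 else -10) for x in range(size)]
       for y in range(size)]

# Average each 2x2 block down to one output pixel.
dst = [[(src[2*y][2*x] + src[2*y][2*x+1] +
         src[2*y+1][2*x] + src[2*y+1][2*x+1]) / 4
        for x in range(size // 2)] for y in range(size // 2)]

def spread(img):
    """Standard deviation of pixel values - a crude noise measure."""
    vals = [v for row in img for v in row]
    mean = sum(vals) / len(vals)
    return (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5

print(f"variation before: {spread(src):.1f}, after: {spread(dst):.1f}")
# prints: variation before: 10.0, after: 0.0
```

Real sensor noise is random rather than a neat checkerboard, so the gain is smaller in practice, but the principle is the same: pixels you do not keep still contribute to the ones you do.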
The latest flagship camera in the Red family, for example, offers a choice of 8k sensors (in different physical sizes) and a 5k sensor. That certainly gives you plenty of options when you are planning a shoot.
The benefit is that you can make best-of-breed choices at each stage of production. Even if you are heading for HD, you can shoot in 4k if that is going to give you the look you want. Using clever software plug-ins like Comprimato’s UltraPix, you can use JPEG2000 to edit in 4k, inside your preferred editing package, even on a laptop.
The answer to my question about whether size matters, then, is “it depends”. There are many considerations, and bigger is not always better. That is why Alistair Chapman’s presentation will tackle the question “Large sensors are certainly fashionable but are they right for you?”. If you have anything to do with content origination and production, and are in the area, it will be worth a listen.