Those pesky scientists

Author: Dick Hobbs

Published 1st July 2015 | Issue 102 - June 2015

This month's picture is of something I was given on a visit to the Grass Valley camera factory in Breda, maybe 10 years ago. I think it is a thing of beauty, which is why it sits in pride of place on top of the wine cabinet.
It is the sharp end of an HD system camera: the optical block and its CCD imagers. In the factory we saw the incredibly painstaking work that goes into precisely aligning the prisms to split the incoming light into red, green and blue beams, then equally precisely attaching the three sensors so that the channels line up.
The sensor in a broadcast system camera is around 16mm diagonal. This has been the case since tube cameras, and it means that lens manufacturers, operators and directors know what a picture will look like with a given lens.
Now I promise I will not go into any complex maths, but we do need to get our calculators out and do some simple arithmetic. If the diagonal of this sensor is around 16mm, then - given that it is 16:9 to match the picture - it is very approximately 14mm wide by around 7.9mm high.
That means each photosite on the sensor is remarkably tiny. Allowing for a boundary around each photosite, a single pixel on the imager chip is around 7µm square for HD. Already we can see a manufacturing challenge.
But now we have people saying we need to move to 4k. Or even, for goodness' sake, 8k. So our photosites are getting really small. For broadcast 4k, around 3.5µm. I will only mention 8k in this one paragraph, but if we went to that, then the photosite would be not much more than 1.6µm square.
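If you want to check my arithmetic, a few lines of Python will do it (the numbers are my approximations from above, not anything off a manufacturer's spec sheet):

    import math

    DIAGONAL_MM = 16.0          # broadcast sensor diagonal, as above
    ASPECT_W, ASPECT_H = 16, 9  # the 16:9 picture

    # Pythagoras gives width and height from the diagonal and aspect ratio.
    unit = DIAGONAL_MM / math.hypot(ASPECT_W, ASPECT_H)
    width_mm, height_mm = ASPECT_W * unit, ASPECT_H * unit
    print(f"sensor: {width_mm:.1f}mm x {height_mm:.1f}mm")    # ~13.9mm x ~7.8mm

    # Photosite pitch is simply sensor width over horizontal pixel count.
    for name, pixels_across in [("HD", 1920), ("4k", 3840), ("8k", 7680)]:
        pitch_um = width_mm * 1000 / pixels_across
        print(f"{name}: pitch about {pitch_um:.1f} microns")  # ~7.3, ~3.6, ~1.8

The raw pitch comes out a shade bigger than my figures because the boundary around each photosite eats into the usable square.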
Going back to basics, the way that a digital camera works, in the very simplest of terms, is that each photosite counts the number of photons that fall on it in a given time period: say a 25th of a second. The more photons that fall on it, the brighter the pixel.
For HD, we are counting the number of photons falling on an area 7µm by 7µm, which is about one twenty-billionth of a square metre. Now we see just how clever our camera manufacturers are.
But if we want to get 4k out of the same sized sensor, then each photosite is one quarter of the area. Or looking at it another way, we are counting a quarter of the number of photons that we did in HD. So our accuracy is considerably less, which will equate to noisier, less smooth pictures.
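The reason fewer photons mean noisier pictures is shot noise: photon arrivals follow Poisson statistics, so the signal-to-noise ratio of a photosite only grows with the square root of the count. A quick sketch, using an illustrative photon count rather than any real full-well figure:

    import math

    hd_photons = 40_000            # illustrative count, not a real camera spec
    uhd_photons = hd_photons // 4  # a quarter of the area catches a quarter of the light

    for name, n in [("HD", hd_photons), ("4k", uhd_photons)]:
        snr = n / math.sqrt(n)     # Poisson noise: mean n, standard deviation sqrt(n)
        print(f"{name}: {n} photons, shot-noise SNR about {snr:.0f}:1")

Quartering the light halves the signal-to-noise ratio: 200:1 becomes 100:1.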
The individual photosites are also getting so small that we are nudging the limits of optical physics, with problems like fringing, where the borders of one photosite affect its neighbours. You are even getting towards the stage where radio interference to visible light begins to be significant, so that really useful Wi-Fi link from the camera has to go.

I have painted a bleak future for 4k and up. But is there an answer?
First, you could use a single chip, which would at least save the light lost in going through that prism assembly. But then you are stuck with a Bayer pattern sensor to get three colours out of a single chip. The Bayer pattern alternates lines of pixels: one line goes blue, green, blue, green, blue, green; the next goes green, red, green, red, green, red.
The problem is that you end up with lower true resolution, because you only have half the number of pixels you need in green and a quarter in red and blue. Purists argue about what the true resolution actually is, but it is definitely not what it says on the box. You also have to apply another set of mathematical processes to de-Bayer the data, and people even argue about the best way to do this.
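To make the mosaic concrete, here is a toy version of the pattern I just described (the blue-green/green-red line pairing; real sensors may start the pattern on a different pixel):

    def bayer_colour(row: int, col: int) -> str:
        """Colour of the photosite at (row, col) in the mosaic described above."""
        if row % 2 == 0:                        # blue, green, blue, green...
            return "B" if col % 2 == 0 else "G"
        return "G" if col % 2 == 0 else "R"     # green, red, green, red...

    # Tally the colours over a small patch of sensor.
    counts = {"R": 0, "G": 0, "B": 0}
    for r in range(8):
        for c in range(8):
            counts[bayer_colour(r, c)] += 1
    print(counts)  # {'R': 16, 'G': 32, 'B': 16}: half green, a quarter each of red and blue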
But, you are shouting at your magazine, surely the simple solution is to make the sensors bigger for higher resolutions. And of course you are correct. The Red Dragon camera, for instance, has a sensor with, roughly speaking, twice the diagonal of a broadcast camera, so the photosites are about four times the area.
Which is all very good, except we have to go back to the optics. Broadcast lenses match broadcast cameras. Bigger imager frame sizes mean film-style lenses. The bigger area to focus the image on means that the capture side has a much reduced depth of field - that is what we were taught in physics when I was 15 (and, frankly, never thought would come in useful). Shallow depth of field is great when you want to focus the audience's attention on something, but not so good when you want to see everything, or keep up with fast-moving action.
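You can see the focus collapse in the standard thin-lens depth-of-field approximation. A sketch, where the focal lengths and circles of confusion are illustrative values picked to keep the framing the same, not measured figures:

    def dof_metres(focal_mm, f_number, subject_m, coc_mm):
        """Near-to-far depth of field via the usual hyperfocal formulas."""
        s = subject_m * 1000                                 # work in millimetres
        h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm   # hyperfocal distance
        near = s * (h - focal_mm) / (h + s - 2 * focal_mm)
        far = s * (h - focal_mm) / (h - s)                   # valid while s < h
        return (far - near) / 1000

    # Same framing of a subject 4m away at f/2.8: the bigger sensor needs a
    # longer lens for that framing, and that is what thins the focus.
    print(f"broadcast-size sensor:  {dof_metres(14, 2.8, 4, 0.008):.1f}m deep")  # ~4.6m
    print(f"double-diagonal sensor: {dof_metres(28, 2.8, 4, 0.016):.1f}m deep")  # ~1.9m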
So by all means shoot in 4k, but accept that the route to good pictures comes with film-style depth of field. Which is fine, because 4k detail would be great for Wolf Hall but unnecessary on the Rugby World Cup. And so we avert the creativity/science clash for a little while longer.
