Another great contribution from our very own Paul Turner!
One thing we know to be constant in our industry is that there will always be change. We’ve moved from black and white to color to digital to HD, in a constant drive to bring ever higher fidelity images into the home. Now we’re hearing about 4K/UHDTV (not quite the same thing, but we’ve had that discussion) and beyond: the Japanese broadcasters, and specifically NHK, have announced that they intend to broadcast the 2020 Olympics in 8K resolution.
Do these images look amazing? On the very large screens they’re shown on, yes. Is it practical for home use? I’m not sure; several other forces have to align for home delivery to be practical and financially viable. (Remember 3D? That didn’t do so well, did it? You could spend a lot of money on a 3D-capable TV set, but there were almost no programs to watch on it, and as for those glasses – really?) Perhaps we should consider going beyond 8K to get even better-looking images? That’s what this discussion is all about. It’s my opinion that we are heading in the wrong direction if all we do is add more pixels to the image (aka “spatial resolution”).
This is a pretty deep discussion, so it will fill a few blog entries (I’m limited to about 500 words per entry). I’ll try to make it as straightforward as I can. There’s just a lot to consider!
Before we begin, let’s remind ourselves that TV isn’t real – it’s a representation of what you might see if you were actually at the camera’s location. As media professionals, we want to deliver the highest-fidelity image we can, which translates to the greatest amount of picture information. But there’s no point in preserving data that you can’t see. Remember: the scene you’re looking at contains infrared and ultraviolet light, but you can’t see it, so we don’t bother to transmit it. So let’s begin by getting a deeper understanding of the mechanics and limitations of the human visual system.
Hindsight is 20/20
I always wondered what that meant (not the term, but the “20/20” measurement) – you hear about this all the time, but few of us know where it came from. It’s pretty important to understand, though, as visual perception is highly variable, and in reality pretty much everything we do in our industry is targeted at the image quality perceived by the “average” human with 20/20 vision.
The term was coined by a Dutch ophthalmologist in the mid 1800s. In a book on vision, Hermann Snellen introduced the idea of the optician’s eye chart, in which letters of a certain size can be read at a specific distance by a person with normal vision. By this definition, such a person can resolve features that subtend an angle of 1 arc minute (1/60th of a degree) at the eye. The standard size of the letters was calculated so that a person with normal vision could read them at a distance of 20 feet. So, a person with 20/20 vision is someone who can read at 20 feet (the first “20”) what a person with normal vision can also read at 20 feet (the second “20”).
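If you want to see how small those chart features actually are, the geometry is just one line of trigonometry: the linear size of a feature is the viewing distance times the tangent of the angle it subtends. Here’s a quick sketch in Python (the 5-arc-minute letter height is the standard Snellen convention; 1 arc minute is the stroke width):

```python
import math

ARC_MINUTE_RAD = math.radians(1 / 60)  # 1 arc minute in radians

def feature_size_mm(distance_mm: float, arc_minutes: float = 1.0) -> float:
    """Linear size of a feature subtending the given angle at the given distance."""
    return distance_mm * math.tan(arc_minutes * ARC_MINUTE_RAD)

TWENTY_FEET_MM = 20 * 12 * 25.4  # 6096 mm

stroke = feature_size_mm(TWENTY_FEET_MM)       # 1 arc-minute stroke: ~1.8 mm
letter = feature_size_mm(TWENTY_FEET_MM, 5.0)  # 5 arc-minute letter: ~8.9 mm
```

So the 20/20 line on the chart is made of letters roughly 9 mm tall, with strokes under 2 mm wide – and a person with normal vision can just make them out from 20 feet away.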
Now there’s an interesting piece of trivia for you to take away from this blog. The important part, though, is that maximum resolution of 1 arc minute. We’ll be coming back to that later!
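To give a hint of why that 1-arc-minute limit matters for the pixel-count question, the same trigonometry tells you the smallest pixel pitch a 20/20 viewer can resolve at a given distance. The screen size and couch distance below are my own illustrative numbers, not anything from a standard:

```python
import math

ARC_MIN_RAD = math.radians(1 / 60)

def finest_resolvable_pitch_mm(viewing_distance_mm: float) -> float:
    """Smallest pixel pitch a 20/20 viewer can just resolve at this distance."""
    return viewing_distance_mm * math.tan(ARC_MIN_RAD)

# Assumed example: a 65-inch 16:9 screen viewed from 10 feet.
DIAG_MM = 65 * 25.4
WIDTH_MM = DIAG_MM * 16 / math.hypot(16, 9)  # screen width, ~1439 mm
DISTANCE_MM = 10 * 12 * 25.4                 # 3048 mm

limit = finest_resolvable_pitch_mm(DISTANCE_MM)  # ~0.89 mm
pitch_1080p = WIDTH_MM / 1920                    # ~0.75 mm
pitch_4k = WIDTH_MM / 3840                       # ~0.37 mm
```

Under these assumptions, even 1080p pixels on a 65-inch set are already smaller than what a 20/20 eye can resolve from 10 feet – which is exactly why simply piling on more pixels has diminishing returns.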