How We See
© 2007 KenRockwell.com


June 2007

INTRODUCTION

The Human Visual System (HVS) is tricky.

There are two components: the eye, which is the easy part to understand, and our perception of our eyes' signals as processed by our brains, which is the hard part.

Our eyes are relatively easy to understand, and I'll explain a little here as it relates to photography.

We only "see" after our brains interpret what's sent to them from our eyes. I'll cover this a little bit, but this subject is far more abstract and beyond the scope of this paper.

First I'll cover the eyes (no, not cover them literally, boneheads), and then a little about the brain which does the actual seeing.

Our Eyes

ISO (Sensitivity): Our eyes have auto ISO, like Nikon DSLRs (honest!). In the electronic imaging and video world, we call this variable gain.

In bright light our eyes' ISO drops and we see fine detail and bright color.

In dim light our eyes' ISO climbs (astronomers call this "dark adaptation") and we can see in the dark. That climbing gain is why we see fuzzy grain while fumbling around in the dark. We also see only in black-and-white in the dark: only our rods work at those light levels, and rods don't see color. Our eyes can see in light far darker than most urban and suburban dwellers ever experience.

Our eyes adapt very quickly to bright light (although it may hurt a little as our irises contract when someone turns on the lights after a slide show), but take much longer to become more sensitive to darkness. That's why we can't see anything when we first walk into the dark, and only over several minutes begin to see more.
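
In video terms, this is an automatic gain control with a fast attack and a very slow release. Here is a minimal Python sketch of the idea; the time constants are my own illustrative guesses, not measured physiology.

```python
# A toy model of asymmetric light/dark adaptation: gain drops quickly
# when the scene gets brighter, but climbs only slowly in the dark.
# The time constants below are illustrative assumptions, not physiology.

def adapt_gain(gain, scene_luminance, dt,
               tau_brighten=0.5,    # seconds: adapting to bright light is fast
               tau_darken=300.0):   # seconds: dark adaptation takes minutes
    target = 1.0 / max(scene_luminance, 1e-6)  # brighter scene -> lower gain
    tau = tau_brighten if target < gain else tau_darken
    return gain + (target - gain) * (dt / tau)

# Walk from a bright room (luminance 1.0) into the dark (0.001):
gain = 1.0
for second in range(1201):
    if second % 300 == 0:
        print(f"{second:4d} s: gain = {gain:7.1f}")
    gain = adapt_gain(gain, 0.001, dt=1.0)
```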

The central spot of our eyes is insensitive in the dark; it's packed with the color-sensing cones, and the rods that see in the dark surround it. To see a faint object in very dark conditions, look slightly to the side!

Region-Adaptable ISO: No camera does this, which is why photographers have always had to modify light or burn-and-dodge. Our retinas vary their sensitivity by region: they drop it for a bright sky and raise it in a dark foreground. Look out a window, then close your eyes. See the fuzzy negative image? That's the map of where your eye has varied its sensitivity (an unsharp mask) to let you see both out the bright window and into your dimmer house at the same time, with full contrast, which no camera can do.

HDR (High Dynamic Range): Our eyes do this with region-adaptable ISO. Sadly, no HDR scheme today successfully mimics our own visual system's processing, which is why all the automated HDR images I've seen so far suck. Until someone writes a routine to take the abstract HDR (32-bit linear) data and artfully remap it into visual (8-bit log) space, HDR remains a manual artistic process. (Let me know if you get this right; automated HDR-to-visual mapping is still unsolved in photography.)
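
For what it's worth, here's a minimal Python sketch of the region-adaptable-ISO idea from the previous paragraph applied to tone mapping: divide out a blurred brightness map (the "unsharp mask") so the big swings get compressed while local detail survives. The Gaussian blur is my crude stand-in; serious tone mappers use edge-aware filters to avoid halos, and none of this solves the artistic problem.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tone_map(hdr, sigma=25.0, strength=0.5):
    """Crude local tone mapping: compress the blurred 'base' brightness
    (where the retina would vary its ISO) and keep the local detail.
    `hdr` is a 2D array of linear scene luminance (the 32-bit data)."""
    log_lum = np.log2(np.maximum(hdr, 1e-6))
    base = gaussian_filter(log_lum, sigma)   # smooth regional brightness map
    detail = log_lum - base                  # local contrast to preserve
    compressed = strength * base + detail    # squeeze only the big swings
    out = 2.0 ** compressed
    out /= out.max()
    return (255.0 * out ** (1.0 / 2.2)).astype(np.uint8)  # 8-bit, gamma-encoded
```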

White Balance (WB): We also have auto WB, with extreme intelligence via our brains. I cover the brain later; our brains create the image and have a lot to do with our perception of color (see Josef Albers). The spectral sensitivity of our eyes also varies with light level. At bright levels (photopic vision), our eyes are less sensitive to blue than they are in the dark (scotopic vision). This automatically keeps cave fires and candles from looking as hideously red as they do in photographs, but it's also why, in dim indoor light, we need to dial in values like 2,500 K to get natural-looking photographs.
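
The dumbest camera version of this intelligence is the "gray world" guess: assume the scene averages out to neutral and rescale the color channels until it does. A minimal sketch, nothing like what our brains actually do:

```python
import numpy as np

def gray_world_wb(img):
    """Auto white balance under the gray-world assumption.
    `img` is a float array of shape (H, W, 3), linear RGB, 0..1."""
    means = img.reshape(-1, 3).mean(axis=0)  # average R, G, B of the scene
    gains = means.mean() / means             # boost the channels running low
    return np.clip(img * gains, 0.0, 1.0)
```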

Focal Length: Our eyes focus by squeezing or stretching their lenses, which changes the focal length to fit the fixed size of our eyeballs. Cameras used to focus by moving the whole lens in and out, but today most camera lenses also focus by changing focal length, moving optics internally.
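
A back-of-the-envelope check with the thin-lens equation shows why: the lens-to-retina distance is fixed, so focusing closer demands a shorter focal length. The 17mm figure below is the textbook "reduced eye" approximation, so treat the numbers as rough.

```python
RETINA_MM = 17.0  # fixed lens-to-retina distance (reduced-eye approximation)

def eye_focal_length_mm(subject_distance_mm):
    # Thin-lens equation: 1/f = 1/do + 1/di, with di fixed at the retina.
    return 1.0 / (1.0 / subject_distance_mm + 1.0 / RETINA_MM)

for d in (1e9, 1000.0, 250.0):  # infinity, 1 m, 25 cm reading distance
    print(f"subject at {d:>13,.0f} mm -> f = {eye_focal_length_mm(d):.2f} mm")
```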

Aperture: Depth of field depends on the clear aperture of a lens or eye. Clear aperture is the effective size of the hole (in millimeters or inches) when you look into the front of a lens. To calculate clear aperture, divide the real focal length by the f/number. For example, a 100mm lens at f/4 has a clear aperture of 25mm (100 / 4 = 25mm, or one inch). A 6.3mm pocket digital camera lens at f/6.3 has a clear aperture of only a millimeter (6.3 / 6.3 = 1mm, or 1/25 of an inch), which is why almost everything is always in focus with a pocket camera.

Our eyes' clear aperture is the pupil, the black spot in the center of each iris. It opens to a maximum of about 9mm at night, sits at a few millimeters most of the time, and closes to as little as 1mm in direct sunlight. These small apertures are why we see greater depth of field than most SLRs.
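
Running the same arithmetic the other way for the eye (using the rough 17mm reduced-eye focal length, my assumption, plus the pupil sizes just mentioned):

```python
def clear_aperture_mm(focal_length_mm, f_number):
    # Clear aperture = real focal length / f-number, as described above.
    return focal_length_mm / f_number

print(clear_aperture_mm(100, 4))    # 25.0 mm: the 100mm f/4 example
print(clear_aperture_mm(6.3, 6.3))  # 1.0 mm: the pocket-camera example

# Flip it around for the eye (f-number = focal length / clear aperture):
EYE_F_MM = 17.0  # reduced-eye approximation
for pupil_mm in (9.0, 3.0, 1.0):
    print(f"{pupil_mm:.0f} mm pupil ~ f/{EYE_F_MM / pupil_mm:.1f}")
```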

Shutter Speed: About 1/30 second. Wave your finger back and forth; how long is the blur? Now shoot it with a camera: the blur matches what the camera sees at around 1/30 of a second. Movie cameras usually use a 180-degree shutter, meaning at 24 FPS the exposure time is 1/48 second. Our eyes see with an exponential decay, not the hard open-and-close of a shutter, so there isn't an exact correlation. Our eyes have no shutter except our eyelids; vision is a continuous process.
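
The 180-degree shutter math, for the curious, is standard rotary-shutter arithmetic:

```python
def exposure_time_s(fps, shutter_angle_deg=180.0):
    # Rotary film shutter: exposure = (angle / 360) / frame rate.
    return (shutter_angle_deg / 360.0) / fps

t = exposure_time_s(24)    # 24 FPS, 180-degree shutter
print(f"1/{1 / t:.0f} s")  # prints "1/48 s", the movie example above
```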

Bugs and hummingbirds have much faster visual systems, which is why they can outmaneuver us. What's a blur to us is perfectly clear to them.

Resolution: About one minute of arc (one sixtieth of a degree).
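
That one-arcminute figure is where the old rule comes from that prints need about 300 DPI at reading distance. A quick check:

```python
import math

def resolvable_mm(distance_mm, arcminutes=1.0):
    # Smallest detail a one-arcminute eye resolves at a given distance.
    return distance_mm * math.tan(math.radians(arcminutes / 60.0))

d = resolvable_mm(250.0)                 # 250 mm = 10" reading distance
print(f"{d:.3f} mm")                     # about 0.073 mm
print(f"{25.4 / d:.0f} dots per inch")   # about 350 DPI
```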

Angle of View: Our eyes only see detail from a tiny central spot. We perceive detail by our brains stitching together images as our eyes look around.

Our peripheral vision is highly sensitive to motion, but not at all to detail. To see this, keep your eyes locked on one object. The sides of the image have no detail, just light, dark and shape. For more fun, look straight ahead and sense the limits of your angle of view, usually about 180 degrees. Now wiggle your fingers and bring your hand forward from behind you. You can see your wiggling fingers beyond that 180-degree field of view; you can see motion a little bit behind you!

Our Perception

Our eyes see nothing

Our eyes don't send images to our brains. Images are constructed in our brains based on very simple signals sent from our eyes.

The nerve signals from our eyes are still the subject of much study, and mostly represent edges, shapes and motion. They do not send images.

The mental processing required to perceive images is so great that it represents a large share of the body's at-rest caloric consumption; I recall a figure of about 40%, but I forget the citation, so let me know if you have it exactly. This is why it's so restful to close our eyes for a moment.

"Seeing" is a very complex higher-order brain function, and a huge percentage of our brains (the largest, in fact, of any brain function) is required for doing nothing other than recognize what's in front of us.

Pattern Recognition

Our brains form images through pattern recognition. We don't see images; our eyes see lines and motion, our brains interpret those to recognize what sort of thing the lines and motion might represent, and then our brains seamlessly cause us to perceive whatever that object is.

Pattern recognition is learned as we grow from babies. At first nothing makes sense; as we learn about the world around us, more and more does, until we grow into kids, after which most of us forget what it was like while our visual systems were training.

Adults rarely hit moments where our brains can't recognize something, moments which drive home that our eyes see nothing by themselves. I remember when I was very young and my visual system was still developing. (I've been curious about all this since the day I was born.) I'd see lines and shapes, and it would take a moment until recognition kicked in and I'd suddenly "get" what the object in front of me was.

Pattern recognition is why motorcyclists and bicyclists get run over every day by people who were looking right at them.

Most drivers are looking for cars. If they're not looking for cyclists, they often won't perceive one, even one stopped right in front of them at a red light. The driver runs right over the cyclist and never sees him, even while looking straight at him. If the driver isn't paying attention, his brain never assembles the lines and shapes coming from his eyes into "motorcycle."

Notice how motorists will spot a police officer on a motorcycle a mile away. It's not just the white helmet; it's because the visual system works hardest at managing all the inputs it's receiving and prioritizing what it recognizes. The brain can only recognize so much, so it looks for what concerns it. Sadly, it tends to miss things other than cars and trucks.

George Carlin alluded to this, talking about what fun it is to look at a chain-link fence when our two eyes lock onto the wrong links. The stereo 3D effect created in our brains is messed up, backwards and inside out! We stare at it, and something's kind of weird, kind of fun, while our left and right eyes are locked a few links apart. Then our brains finally get it, and as I recall Carlin saying, the "fun suddenly goes away" as the image reverts to what it's supposed to be.

Our eyes can't see a fixed, non-moving image. Our eyes are always scanning and moving. If you lock an eye on one spot (this research usually requires rather painful apparatus to fix the eyeball), the image fades away. Even when we think we're staring, our visual systems are constantly moving our eyes slightly to keep the signals coming and the image refreshed.

Sources

I've been studying this for a long time. I've learned it from articles in Scientific American decades ago when it was good, from talking with human visual system researchers, and from reading every reference book I could find on the subject, plus decades of casual research. I provided links to Josef Albers' books on color, but not the rest, since my library is still packed up from my last move. I also hear that Ernst Gombrich's Art and Illusion is an interesting read.

June 2008 Human Visual System Discovery: Fair and balanced Fox News has an article on new research that uncovers how we see in the present (even though our brains take time to process the images) and explains why many optical illusions happen.

PLUG

If you find this as helpful as a book you might have had to buy or a workshop you may have had to take, feel free to help me continue helping everyone.

Thanks for reading!

Ken

 
