I thought some of you might find this interesting, especially the second half:
Interesting article. Thank you.
I don't think the contention that creationists maintain the human eye is perfect, and therefore evidence of a creator, is entirely accurate. A creationist can see evidence of design in the eye but would likely maintain that the eye was "perfect" in the original creation, not necessarily as it is now. The creationist would maintain that flaws, if they exist, are the result of accumulated genetic defects rather than of the original design.
The design vs evolution debate is very complex and won't be resolved in this forum, or anywhere else, in the near future. But here's an interesting article, by a scientist who is not a creationist (as I am), addressing what is commonly reported as a design flaw in the human eye. http://www.arn.org/docs/odesign/od19...dretina192.htm
But getting back to photography, I am amazed at how digital photography captures different quantities and frequencies of light and uses an RGB filter array to compute a vast spectrum of color as well as shape... just like the human eye. What we see as colors are merely interpretations of combinations of light frequencies. Black and white is nice, but color is a gift. A lens, like an eye, is of no use without the filter array and processor to interpret light into shape and color. A camera is far less complex than the human eye and brain, and it required a designer.
Chuck
If I may add something... a couple of days ago, while taking this photo, I was once again impressed by how much more dynamic range my eye can perceive when compared to what my DSLR sensors (Nikon D80 in this case, also have D90) can capture.
I stood there for half an hour, taking photo after photo, knowing that in order to portray what my eye could see I would have a lot of shadow pushing and highlight protection to do when I got home. Even the HDR version left me wanting.
You'll probably find that it's not so much a limitation of the sensor (which at its native ISO is usually good for around 12 stops) - it's probably more to do with the fact that we can't display more than around 5 or 6 stops on our monitors, no matter how much manipulation we do.
As you scan a scene your eyes constantly adjust their aperture to let in more or less light, depending on the part you're looking at. A fairer comparison would be to 'paint' the scene with the camera, rather than take it in one shot, and allow the camera to continuously adjust its aperture (in 'spot metering' mode, more or less) as it goes, say between f/1.4 and f/22. Requiring a camera to take in an entire scene in detail with a single image, at a single aperture, exposure time and amplification, is really asking a lot of it. We don't expect our eyes to do anything like that - we just don't normally think about 'blown highlights' or featureless shadows in the parts of our vision we're not concentrating on. In a direct comparison the camera would do very well.

The camera also has the advantage of a massively wider range of integration (exposure) times, from a hundred microseconds or so through to minutes, whereas the eye's range is much more limited (in the tens of milliseconds). This means that even an ordinary camera can capture information that the eye simply cannot, from freezing motion through to imaging very low light sources. An analogous case is when the human brain is favourably compared to computers, forgetting that a computer is not intended to be a brain, and that even the fastest brains are glacially slow at computation.
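To put rough numbers on that exposure-time point (my own back-of-envelope arithmetic, not from the post above): the difference between two integration times, expressed in stops, is just the base-2 log of their ratio. A quick sketch, using the figures quoted above:

```python
import math

def stops_between(t_short, t_long):
    """Number of photographic stops between two exposure (integration) times."""
    return math.log2(t_long / t_short)

# Camera: roughly 100 microseconds up to a 60-second exposure
print(round(stops_between(100e-6, 60), 1))     # ~19.2 stops of exposure-time range

# Eye: very roughly 10 ms to 100 ms integration ("tens of milliseconds")
print(round(stops_between(10e-3, 100e-3), 1))  # ~3.3 stops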
Regards,
Will
The thing to remember is that prints only have a dynamic range of around 4 stops - monitors around 5 or 6 - and sensors around 11 or 12; the major limitation isn't in capturing the dynamic range of the scene - it's in printing or displaying what we've captured (in essence, unless we compress it, we can't).
Perhaps a better way to think of it is that a camera and an eye have similar dynamic ranges, but the eye is perfectly matched to our brains (and so can feed all of the info to us in a manner we can see), whereas the camera captures the info but comes up short when we try to display it on paper or on a screen.
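Just to illustrate the squeeze (a toy sketch only - real raw converters use far fancier tone curves, and the 12-in/6-out figures are the rough ones quoted above, not exact):

```python
import numpy as np

# A "scene" spanning ~12 stops of linear luminance, squeezed into the
# ~6 stops a typical monitor can reproduce.
scene = np.logspace(0, 12, num=13, base=2.0)   # relative luminances, 2^0 .. 2^12

# Simple global gamma compression: halve the range in stops.
display = scene ** (6.0 / 12.0)
display /= display.max()                       # normalise to display white

print(np.log2(scene.max() / scene.min()))      # 12.0 stops in
print(np.log2(display.max() / display.min()))  # 6.0 stops out
```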
Easy enough to do a little test -- just photograph something (in RAW) that you think is at the limits of what the eye can handle (being careful not to over-expose the shot), and then in post-processing push the shadows right up into the midtones and see if the detail is actually there. If it is, then the sensor has captured it.
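If you'd rather script that test than do it in a raw converter, here's a minimal sketch assuming the rawpy and numpy libraries are installed; the file name and the +4 stop lift are just placeholders for illustration:

```python
import numpy as np
import rawpy

# Hypothetical file name; any raw file exposed to protect the highlights will do.
with rawpy.imread("contrasty_scene.NEF") as raw:
    # Linear 16-bit output with no auto-brightening, so we see what the sensor recorded.
    rgb = raw.postprocess(gamma=(1, 1), no_auto_bright=True, output_bps=16)

linear = rgb.astype(np.float32) / 65535.0

# Push the shadows hard up towards the midtones (roughly a +4 stop lift plus a
# display gamma) and see whether real detail emerges or just noise.
lifted = np.clip((linear * 16.0) ** (1 / 2.2), 0, 1)
# Save or display 'lifted' with your viewer of choice to inspect the shadow detail.
```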
From what I can find out, the instantaneous dynamic range of the eye in normal light conditions is about 14.5 stops (rounding to 0.5), whereas the instantaneous range of a typical recent DSLR is about 11, so you're quite right about that. However, when scanning a scene with the aperture adjusting, the eye goes up to 17.5, whereas a camera taking multiple exposures with a typical lens adjustment range could get up to 19 or so. If you allowed the camera to adjust its other exposure variables as well it should be able to get more.
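As a quick sanity check on that "19 or so" figure (my arithmetic, using the f/1.4 to f/22 sweep mentioned earlier): light gathered varies with the inverse square of the f-number, so the sweep itself is worth about 8 stops on top of the sensor's ~11.

```python
import math

def stops_from_aperture_sweep(f_wide, f_narrow):
    """Stops gained by sweeping the aperture; light varies as 1/f-number squared."""
    return 2 * math.log2(f_narrow / f_wide)

sweep = stops_from_aperture_sweep(1.4, 22)   # ~8 stops for f/1.4 -> f/22
print(round(sweep, 1))
print(round(11 + sweep, 1))                  # sensor's ~11 stops plus the sweep ~ 19
```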
The field of view over which a human being can perceive detail (say enough to read) is only about 2.5 degrees, so in practice people must always scan, usually amounting to only a small fraction of a scene, and fairly slowly. This is quite different to what we do with a camera, which is to capture an entire scene in detail all at once so that we can peruse it at leisure later.
One could build a scene-scanning auto-exposure camera, which as well as having huge DR would have phenomenal resolution across the frame, or else huge pixel sites and even more sensitivity and DR. Alternatively, and more easily, one could build an 'auto-HDR' camera that just took a wide range of exposures and combined them itself. The fact that such devices are not mainstream is a reflection of what most people want to do and pay for, not of what is possible. Anyway it is clear enough that the technology can still be developed a fair way beyond its current state, if anyone has a mind to.
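The 'auto-HDR' idea is already easy to prototype in software, for what it's worth. A minimal sketch using OpenCV's Mertens exposure fusion (one particular fusion technique, not necessarily what any in-camera HDR mode does; the file names are hypothetical bracketed exposures):

```python
import cv2
import numpy as np

# Hypothetical bracketed exposures of the same scene (e.g. -2 EV, 0 EV, +2 EV).
files = ["bracket_under.jpg", "bracket_normal.jpg", "bracket_over.jpg"]
images = [cv2.imread(f) for f in files]

# Mertens exposure fusion blends the bracket straight into a displayable image,
# without needing the exposure times -- essentially the 'auto-HDR' idea above.
fused = cv2.createMergeMertens().process(images)

cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))
```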
So one cannot really compare the eye and a normal camera in a simple way. The eye has greater instantaneous DR, but with hardly any detail over most of the field of view. The camera has detail over the whole field but lower instantaneous DR. This reflects their different functions as much as anything. Scanning is how the eye gets more detail; HDR imaging is how the camera can get more DR; both are at the cost of time. One should remember, however, that for most purposes 11 stops is more than enough.
(Night-adjusted vision is different from day vision. It is more sensitive with longer integration times and greater range, although without colour. But it takes a long time for the eye to adjust to that state, so it really has to be considered as a separate case.)
Will
Interesting food for thought here, guys. Thanks for helping me think about this in new ways.
From memory (at their native ISO) I think many are over 12 and some over 13 (the Nikon D3x springs to mind). www.dxomark.com allows you to check just about any common model released over the past 5 or so years, but I can't get my browser to work properly with the site just at the moment.
Already done:

"One could build a scene-scanning auto-exposure camera, which as well as having huge DR would have phenomenal resolution across the frame, or else huge pixel sites and even more sensitivity and DR."
UPDATE: DxOMark site working for me now - Nikon D3x has a dynamic range of 13.7.
I think (this is only the result of my own thinking and needs to be confirmed) our eyes have a wider dynamic range because the retina is constantly moving: by doing that we scan different areas of the picture with different brightness, quickly adjust to those changes, and form a complete picture in our brain from those pieces. It is like taking smaller images of different areas and patching them together to make a single image with a wider dynamic range.
What do you guys think about my theory? Cool, huh? And I am not even an ophthalmologist!
13.7 - I didn't realise that they were that good now. Astronomical CCDs get to 16-17, I believe, but they're still expensive. Mind you, something with the capabilities of a current top-shelf DSLR would have cost a fortune not very long ago.
We certainly do scan and adjust to brightness as we do so. However I don't think that we ever actually form a complete picture inside our heads. We don't function like a still camera at all. A better analogy is probably a camcorder, one which is sharp right in the centre and soft everywhere else.

We tend to operate under the illusion that our senses give us a continuous picture of the world, whereas in fact it is highly fragmented. That is because the missing data is not processed at all - it is not even marked as 'missing'. For example, each eye has a six-degree wide blind spot (which is a lot) that we are normally completely unaware of unless we specifically test for it. We don't perceive it as dark - we just don't process it.

(If you haven't tried this and you want to do so: draw a roughly 1 cm wide dot in the middle of a piece of paper, keep your gaze fixed on something directly in front of you, cover your left eye and move the dot in slowly from the right at about arm's length, horizontally at eye height. The dot should disappear for a bit and then reappear. You can test the other eye too with left/right reversed.)
A still camera gives us something that we don't have naturally - a wide field of detail all taken simultaneously. That might be one reason why still pictures of dynamic scenes can seem so dramatic.
I thought it (sorry, the eye) also had dynamic range enhancement for static views.
e.g. that effect where, if you stare at something with a simple, distinct dark/light pattern for say 20-30 secs and then move the eye slightly, you see the inverse of the pattern overlaid on what you are now looking at.
I'm sure I was told this was a photo-chemical reaction to do with extending the range of brightness the eye can see.
Or I could be completely wrong; I'm no ophthalmologist either!
Cheers,
I agree. What I meant is that when we look at a scene we are not really looking at the whole scene - it just feels like that. In fact we are scanning the picture very rapidly, starting from the more "eye-catching" areas and moving to less important areas, so what we see is a patchwork of multiple pictures rather than one image, and that assembly is done in the brain. This way you remember the whole picture and perceive it as a whole, but what you physically do is not really taking one picture, the way we do with a camera.
I think a figure like 13.7 stops in a DSLR utterly surprises most people. I'm convinced the reason it comes as a surprise is that they never see that kind of range, because it can't be displayed or printed. What makes it apparent to me is that I often need it when shooting into the light while still needing to preserve shadow detail... it's often quite amazing what's revealed in the shadows when I use the fill light control (often reminiscent of portions of the image "returning from the dead").
Whilst on the subject, just a little "something" for people to keep in mind... this extra dynamic range can also be used to reveal shadow detail that shouldn't be revealed, so one needs to be careful when one releases images (especially electronically, although this applies to a lesser degree to prints also) that (how do I put this?) "may degrade the modesty of subjects/models". A quick check/adjustment is always a good thing before publishing: just chuck on a temporary levels layer and wind the mid-tones waaaaaaay up to check for too much "revelation".
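For anyone who'd rather script that pre-publish check, here's a rough equivalent of the "temporary levels layer with the midtones wound way up", assuming Pillow and numpy are available; the file name and the strength of the boost are just illustrative choices:

```python
import numpy as np
from PIL import Image

# Aggressive gamma lift that exaggerates anything hiding in the shadows -
# for checking only, never for the published file.
img = np.asarray(Image.open("to_publish.jpg")).astype(np.float32) / 255.0

check = img ** (1 / 4.0)   # strong midtone/shadow boost

Image.fromarray((check * 255).astype(np.uint8)).show()
```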
Hi Alis,
Oh yes, I wasn't disagreeing for a moment, I took what you said and was adding to it.
But I'm still not certain I'm correct, hence the last sentence.
Hi Colin,
Now you've got us all wondering who has revealed too much
You've obviously seen something recently that made you raise it.
Cheers,