We need white balance because of the imbalance of the image sensor output. Why is that?
Have you read the tutorial?
https://www.cambridgeincolour.com/tu...te-balance.htm
Ok, I will read it now, thanks!
UDDITFK,
First, your eye's dynamic range is much wider than a digital sensor's. Moreover, your brain is constantly correcting what should be perceived as white. A white sheet of paper is not actually white under different lighting conditions: it will be yellow under a tungsten bulb and blue under cool light. Having corrected for that, your brain will lie to you, telling you that both sheets of paper are white. Unfortunately, you cannot lie to a digital camera sensor. What is yellow will be yellow; what is blue will be blue. That is why white balance correction is required.
People often say that, but I have my doubts.
If we're comparing apples with apples, then if the eye can restrict the light through the contraction of the pupil and have that counted in its "dynamic range", then varying the camera's aperture should count towards its dynamic range too. So if a typical lens like the 70-200 f/2.8, with a 7-stop aperture range, gets added to a typical sensor dynamic range (say, 12 stops), we're up to 19 stops. And if we're REALLY keen, add to that the camera's range of shutter speeds (another 19 stops) and we get to around 38 stops. It would be interesting to see if the human eye could match that.
Just thinking.
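To make that arithmetic concrete, here's a rough Python sketch of how stop counts follow from light ratios. The lens, shutter, and sensor figures are just the ones quoted above (the 30 s to 1/8000 s shutter range is my assumption), so the total lands near, not exactly at, 38 stops:

```python
import math

def stops(ratio):
    """Number of photographic stops corresponding to a light ratio."""
    return math.log2(ratio)

# Illustrative figures only: a 70-200 f/2.8 stopped down to f/32 spans
# about 7 stops, a 30 s to 1/8000 s shutter range about 18 stops, and
# the sensor is assumed to deliver 12 stops of dynamic range.
aperture_stops = stops((32 / 2.8) ** 2)   # light scales with the square of the f-number
shutter_stops = stops(30 / (1 / 8000))
sensor_stops = 12

total = aperture_stops + shutter_stops + sensor_stops
print(f"aperture ~{aperture_stops:.1f}, shutter ~{shutter_stops:.1f}, total ~{total:.0f} stops")
```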
http://en.wikipedia.org/wiki/Human_eye claims a 'passive' dynamic range of about 6.5 stops. Dynamic range over time is much larger, up to 20 stops (but that's between bright sunlight and a starlit night, not in one view), but the eye will need up to 20-30 min to adapt... That would be the value to compare to Colin's 38 stops.
In reality, those numbers aren't all that interesting, as your eye already adapts while scanning a scene. That means the 'perceived' dynamic range will be larger than the passive 6.5 stops (as if you could use different f-stop values for different parts of a scene). And I've often noticed that in a scene where I can see detail in both lit and shadow zones, the camera has some trouble doing the same (in one image).
All that has nothing (or not much) to do with white balance. If it was just a matter of an imbalance in the sensor output, that could be set once and for all (per camera or sensor at worst), and then forgotten. We need the ability to correct the white balance because the camera sensor does not adapt to various light conditions.
A good, easily visible example: snow on a sunny day:
you can get the sunlit snow nice and white, but then snow in the shadow will appear blue. Correct for that (through white balance correction) and your sunlit snow appears orange/yellow.
The sunlit snow is lit by direct sunlight (surprisingly), which has (by definition) a neutral colour. The shadows are of course not lit by direct sunlight, but mainly by light scattered in the atmosphere and by other reflective surfaces nearby. That light will have a blue colour (blue sky...), so anything lit by it will appear blue(ish).
Something similar happens indoors: the light from incandescent lamps has much more red and less blue than sunlight, so anything lit by it will appear reddish in colour. By changing the white balance (i.e. the ratio between red and blue) we can correct for those differences.
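As a rough illustration of that red/blue ratio idea (a minimal sketch, not how any particular camera or raw converter actually does it), here's a manual white balance correction in Python, assuming an RGB image with values in [0, 1] and made-up numbers for paper photographed under tungsten light:

```python
import numpy as np

def apply_white_balance(img, r_gain, b_gain):
    """Scale the red and blue channels relative to green so that a neutral
    reference (e.g. a white sheet of paper) ends up with R ~= G ~= B.
    img: float RGB array of shape (H, W, 3) with values in [0, 1]."""
    out = img.copy()
    out[..., 0] *= r_gain   # red
    out[..., 2] *= b_gain   # blue
    return np.clip(out, 0.0, 1.0)

# Hypothetical values for "white" paper shot under tungsten light:
# too much red, too little blue.
patch = np.array([[[0.90, 0.70, 0.45]]])
r_gain = 0.70 / 0.90   # pull red down to the green level
b_gain = 0.70 / 0.45   # push blue up to the green level
print(apply_white_balance(patch, r_gain, b_gain))   # -> roughly [0.7, 0.7, 0.7]
```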
The human brain does those corrections automatically, and can even apply different corrections to different parts of a scene. The camera isn't that smart, and can at best figure out one correction for the whole image: automatic white balance. This works most of the time, depending on your scene and on how critical you are, but it can't get it right in a scene lit with mixed light sources (a snow landscape; an indoor shot with incandescent lamps and fill flash, etc.).
[This should be what some sites call a "sticky" (always at the top of the list of threads, where it can easily be found).]
For centuries, graphic artists have been adjusting the hues in shadow areas to make their work look more real, so the variation must be there and is visible.
I personally think that striving for the perfect WB can be an exercise in frustration, and often a futile one. Further to this, WB can (and often should) be adjusted to enhance the mood of an image. For example, the WB of an image of someone's face lit by an open flame should not be adjusted to make the WB "real", or the reality of the image will be destroyed.
There are instances where WB is important, but I feel that WB is often taken too seriously. Just my individual opinion; others may vary.
Glenn
For the OP's benefit, this is basically true if you're shooting RAW. JPEGs do not respond as well to white balance adjustments. I shoot RAW, but I'm also a bit of a white balance nerd. To me, getting WB right in the field is kind of fun, whereas dragging a slider in Photoshop is drudgery.
With respect to the OP's question, we need WB correction because what the camera's software calls white may not be what the human brain calls white. So the "imbalance" isn't in the sensor, it's between the camera software and our brains. They both use similar methods (looking at the brightest part of a scene and calling it white), but they frequently don't arrive at the same conclusion. Normally, the camera's auto white balance is more accurate with brighter scenes.
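As a hedged sketch of that "call the brightest part white" idea (a simplified white-patch estimate, not the actual algorithm any camera maker uses), something like this in Python:

```python
import numpy as np

def white_patch_awb(img, percentile=99):
    """Rough 'white patch' auto white balance: assume the brightest region
    of the scene is neutral, and scale each channel so that region comes
    out with equal R, G and B.
    img: float RGB array of shape (H, W, 3) with values in [0, 1]."""
    # Use a high percentile instead of the absolute maximum so a single
    # hot pixel doesn't dominate the estimate.
    ref = np.percentile(img.reshape(-1, 3), percentile, axis=0)
    gains = ref.max() / np.maximum(ref, 1e-6)
    return np.clip(img * gains, 0.0, 1.0)
```

If the brightest region isn't actually neutral (a sunset, a candle flame), an estimate like this goes wrong, which is one reason auto white balance misjudges some scenes.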
Well, that's one of those things I've got some trouble with. Let me try to explain:
When we say that the camera can record 12 stops of dynamic range, we agree that this means the ratio between the darkest real signal (i.e. minimum shadow detail) and the brightest real signal (no burned-out bright parts) is about 2^12.
Now for the difficult (for me) part:
But those levels are then recorded in a certain range of digital values. That range is then scaled, but the extreme values still represent the extremes of the original scene. On reproduction, we cannot get the same ratio in output, so we will have a lower dynamic range (2^6 for screen, 2^4 for paper, or thereabouts). So, do we lose detail because the different values get too close together to be distinguished? It can't simply be that we lost range in terms of numerical values because of the output medium.
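One way to see where detail could get lost is to count how many distinct output codes each stop of the scene would get if a 12-stop linear range were scaled straight into 8-bit output. This is a deliberately simplified model (real pipelines apply a gamma/tone curve that spreads the codes far more evenly), but it shows the shadows collapsing first:

```python
# A scene spanning 12 stops has linear values from 1 to 2**12.
# Scale that linearly into 8-bit output (0-255) and count how many
# distinct output codes each stop of the scene ends up with.
scene_max = 2 ** 12
for stop in range(12):
    lo, hi = 2 ** stop, 2 ** (stop + 1)
    codes = round(hi / scene_max * 255) - round(lo / scene_max * 255)
    print(f"stop {stop:2d}: ~{codes} output codes")
# The deepest shadow stops get zero or one code, while the brightest
# stop alone gets over a hundred: with a naive linear mapping, shadow
# detail is the first thing to become indistinguishable.
```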
And in practice, I've no trouble seeing detailed clouds in a nice blue/cloudy sky, then looking at the shadow side of a building and seeing detail there as well. The camera tends to show lots of blinkies in such conditions (though I do notice it takes half a second or so to see those shadow details, as the eye does adapt).
My take is that you should know why you want to be able to adjust the white balance, and why some scenes cause trouble. From there, there are indeed situations where you don't want the scientifically correct WB setting, but at least you know what is happening.
And yes, artists adjust the shadow colours, and the colour variations are real and visible. But they (just as experienced photographers) are perhaps better trained to see what's there (and not what 'should be there') than the average public.
I know that I'm much more conscious of such colour variations since I started photography. And there are regularly questions about correcting mixed-lighting scenes. For me that indicates that the colour differences aren't consciously seen on the spot (fluorescent lighting is a bit of a special case, as there are differences in colour response between the eye and the camera).
That's why I said that the brain does the corrections.
It isn't a valid comparison, but it does mean that the impression a high-contrast scene gives (when looking at it) isn't easy to capture in one shot. Isn't that impression what we try to achieve in a lot of cases (or deliberately move away from for artistic purposes)? And isn't that aim one of the reasons for HDR ('proper' HDR, not the over-the-top tonemapped kind)?
Also, there's such a lot going on in human vision that saying it is superior to 'camera vision' is a bit like saying pears are superior to apples. I've never seen someone pull out an image taken with the Mark I eyeball, so there the camera is clearly superior.