Originally Posted by Daniel Hon
Great tutorial for understanding gamma correction, but there's one thing that I'm finding hard to get my head around: the claim that a gamma-encoded 8-bit image has the same tonal continuity as an 11-bit linearly encoded image. This is because our eyes do not perceive the extra data in the highlights, but require more information in the shadows.
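For a sense of where that 11-bit figure comes from, here's a rough sketch of my own (assuming a pure 2.2 power curve rather than the exact sRGB transfer function): compare the linear-light spacing between adjacent 8-bit gamma codes with the fixed spacing of an 11-bit linear encoding.

```python
import numpy as np

# My own back-of-envelope numbers, assuming a pure 2.2 power curve.
codes8 = np.arange(256) / 255.0
decoded = codes8 ** 2.2                 # linear light represented by each 8-bit gamma code
gamma_steps = np.diff(decoded)          # spacing between adjacent codes, in linear light
linear11_step = 1 / (2**11 - 1)         # constant spacing of an 11-bit linear encoding

print("finest 8-bit gamma step  :", gamma_steps.min())   # ~5e-6, far finer than 11-bit linear
print("coarsest 8-bit gamma step:", gamma_steps.max())    # ~0.009 near white, much coarser
print("11-bit linear step       :", linear11_step)        # ~4.9e-4 everywhere

# Below this linear level the gamma codes are packed more tightly than 11-bit
# linear steps; above it they are coarser, where the eye's roughly logarithmic
# response tolerates bigger absolute steps.
print("crossover near linear level:", decoded[np.argmax(gamma_steps > linear11_step)])
```

The deep shadows are where 8-bit gamma out-resolves 11-bit linear; everywhere else it trades absolute precision for steps that are more evenly spaced to the eye.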
Our eyes capture light logarithmically, but a camera sensor responds to light linearly and the camera's ADC quantizes that signal linearly too. So before you apply gamma correction, the data for shadows, highlights and mid-tones has already been captured linearly by the camera. Applying digital gamma correction is therefore not going to introduce more information in the shadows, because the total amount of information has already been captured; it just reshuffles that information to make the image look more realistic and pleasing, i.e. we're mimicking logarithmic capture. So the 12-bit digitally gamma-corrected image still only has the same tonal continuity as the original 12-bit linearly encoded image, not a 15-bit linearly encoded image.
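To convince myself of that, here's a quick sketch (my own, assuming an ideal 12-bit linear ADC and a pure 1/2.2 power curve): push every code the ADC can produce through the gamma curve, re-quantize to 12 bits, and look at the deepest shadows.

```python
import numpy as np

# Every value an ideal 12-bit linear ADC can output, normalised to 0..1.
linear = np.arange(2**12) / (2**12 - 1)

# Digital gamma correction (pure 1/2.2 power curve), re-quantised to 12 bits.
gamma12 = np.round((linear ** (1 / 2.2)) * (2**12 - 1)).astype(int)

shadows = linear < 1 / 32                     # deepest shadows: bottom 1/32 of the linear range
print("linear codes in the shadows:", shadows.sum())                     # 128
print("distinct gamma codes there :", np.unique(gamma12[shadows]).size)  # still 128, no more
print("output range they occupy   :", gamma12[shadows].min(), "to", gamma12[shadows].max())
```

The 128 shadow codes get spread across roughly 850 output slots, but the slots in between stay empty: the curve reshuffles the existing information, it doesn't create any.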
So that statement is only correct if the 8-bit gamma-encoded image came from the eye or from a linearly encoding camera with 11 or more bits. In other words, digital gamma correction helps when you want to lower the bit depth but still retain as much data as possible.
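And the flip side, under the same assumptions: store an ideal 12-bit linear capture in 8 bits either linearly or through a 2.2 gamma, decode back to linear, and count the distinct shadow levels that survive.

```python
import numpy as np

linear12 = np.arange(2**12) / (2**12 - 1)             # ideal 12-bit linear capture

# Route A: quantise straight to 8-bit linear.
lin8 = np.round(linear12 * 255) / 255

# Route B: gamma-encode to 8 bits, then decode back to linear light.
gam8_linear = (np.round((linear12 ** (1 / 2.2)) * 255) / 255) ** 2.2

shadows = linear12 < 1 / 32
print("distinct shadow levels, 8-bit linear:", np.unique(lin8[shadows]).size)         # ~9
print("distinct shadow levels, 8-bit gamma :", np.unique(gam8_linear[shadows]).size)  # ~47
```

Same 8-bit container, but the gamma route keeps several times more shadow gradations, which is exactly the "retain as much data as possible when lowering the bit depth" point.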
Is there a camera with some sort of analog gamma correction, so that by the time the ADC outputs its bits the gamma curve has already been applied, and we're effectively capturing more shadow detail than the camera's linear bit depth would otherwise allow?