Several authors have pointed out that the eye's sensitivity to light is not linear. One well-respected writer has stated: "Our eyes perceive differences in lightness logarithmically, and so when light intensity quadruples we only perceive this as a doubling in the amount of light. A digital camera, on the other hand, records differences in lightness linearly: twice the light intensity produces twice the response in the camera sensor." I don't know if the stated ratio between intensity and human perception is exact, but there seems to be general agreement on the principle that the eye's ability to distinguish differences in light intensity is not a linear function of the level of light intensity (Weber-Fechner law, Stevens' power law, etc.).
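Perhaps a toy calculation makes the quoted claim concrete. The square-root response below is purely an assumption on my part (it is the simplest compressive curve that turns a quadrupling of intensity into a doubling of response); the only point is that any curve of this general shape makes a fixed absolute step in intensity look larger in the shadows than in the highlights:

```python
import math

def perceived(intensity):
    """Hypothetical compressive response: proportional to sqrt(intensity)."""
    return math.sqrt(intensity)

# Quadrupling the intensity roughly doubles the modelled response.
print(perceived(4.0) / perceived(1.0))        # 2.0

# The same absolute step in intensity produces a much larger change
# in the modelled response at low light than at high light.
print(perceived(0.02) - perceived(0.01))      # ~0.041
print(perceived(0.91) - perceived(0.90))      # ~0.005
```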
From this I conclude that the eye can distinguish smaller absolute differences in intensity at low light levels, while at higher light levels larger absolute differences are required to produce a perceivable change. If this is indeed so, should it not be a significant consideration in deciding how one converts and stores the linear data captured by the sensor? This is at least a logical extension of the theory, but I am unsure how well it holds up in practice. However, one also finds statements such as: "remember, the eye can distinguish fewer levels in shadows". This seems to contradict the above hypothesis IF the "levels in shadows" referred to are of the same relative energy magnitude as the levels in the highlights. None of the articles I have read addresses this question. Another head-scratcher?
Several authorities have written that the sensor chips are linear devices. (To me, that simply means the voltage generated and measured at each pixel is directly proportional to the amount of light that pixel received.) At this point each of the pixels in the sensor array has generated an analog voltage which must be converted to a digital value for storage. This is carried out by the analog-to-digital (A/D) converter using a process called quantization. The voltage measured at each pixel will fall within one of a contiguous series of pre-defined voltage intervals or classes. Each interval is represented by a digital value, and this value is assigned to those pixels whose analog voltage falls within that interval. Thus the analog voltage measured by the voltage detector is "replaced" by an integer (digital) value which codes for a pre-defined voltage range. I assume that one would wish to maintain that linearity in the A/D conversion, which requires that all of the pre-defined voltage intervals be equal in width.
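As a sanity check on my own understanding, here is a toy sketch of that equal-width (linear) quantization. The function and the numbers are mine and purely illustrative, not a description of any actual converter:

```python
def quantize_linear(voltage, v_min, v_max, bits):
    """Map an analog voltage to an integer code using equal-width intervals."""
    n_levels = 2 ** bits
    step = (v_max - v_min) / n_levels          # width of every class interval
    code = int((voltage - v_min) / step)       # which interval the voltage falls in
    return min(code, n_levels - 1)             # clamp the top edge

# Twice the voltage gives (approximately) twice the stored code,
# so the linearity of the sensor is preserved through the conversion.
print(quantize_linear(0.10, 0.0, 1.0, 8))   # 25
print(quantize_linear(0.20, 0.0, 1.0, 8))   # 51
```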
One can also conclude that the narrower the analog intervals are made, the more of them may be defined within the range of interest. This will increase the precision with which the digital values express the original analog values; it will also require a greater bit depth for storage.
And it is worth noting at this point that increasing the bit depth does not affect the dynamic range of the sensor nor does decreasing the bit depth limit the dynamic range (contrary to what I have seen suggested by more than a few). To belabor the point, the bit depth simply determines the precision with which the dynamic range may be recorded.
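To make that concrete: changing the bit depth changes only the size of the steps, not the span of voltages (and hence the range of intensities) that can be represented. The noise floor and full-scale voltage below are invented numbers for illustration:

```python
# The same toy voltage range quantized at two different bit depths.
# The extremes that can be represented are identical; only the
# step size (precision) differs.
v_min, v_max = 0.001, 1.0        # invented noise floor and full-scale voltage

for bits in (8, 12):
    n_levels = 2 ** bits
    step = (v_max - v_min) / n_levels
    print(f"{bits:2d} bits: {n_levels:5d} levels, step = {step:.6f} V, "
          f"range still {v_min} V to {v_max} V")
```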
Assume we have a voltage detector that can distinguish 512 discrete voltage levels from Vmin (noise) to Vmax. However, for the purposes of this exercise, assume we have a bit depth of only 8 for data storage; i.e. we can define only 256 intervals to which we must assign the voltage values in the process of analog-to-digital conversion. If the class intervals have equal width, this means that both the 512th and the 511th voltage levels are assigned to one class and thus receive the same digital value when stored. The same pooling will be true for the allocation of the remaining detected voltages. In the process we are losing precision in the absolute amount of 1/512 of Vmax; and note that the same absolute loss applies to all of the classes. One is prompted to ask: is this the best way to allocate the information? Why bother to employ a voltage detector whose sensitivity exceeds your need or desire to retain the information it can provide? Does the loss of precision at the low end of the voltage scale (brightness range) have the same impact on the final quality of the image as an identical loss at the high end? For no reason other than my gut feeling, I believe it should not. Consider that the same absolute loss of precision in the lowest class results in pooling two levels which differ relatively by a factor of 2, while in the highest class the factor is only 512/511.
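Here is the same thought experiment in code form; the 512-level detector and 8-bit storage are of course just the invented numbers from the paragraph above:

```python
# 512 detector levels pooled pairwise into 256 equal-width storage classes:
# levels (1,2) share a code, (3,4) share a code, ... (511,512) share a code.
# The absolute loss is one detector step everywhere, but the relative
# difference hidden inside a class shrinks as the signal grows.
for low_level in (1, 3, 15, 255, 511):
    high_level = low_level + 1                    # the level pooled with it
    spread = high_level / low_level               # relative difference lost
    print(f"levels {low_level:3d} and {high_level:3d} share one code: "
          f"relative spread = {spread:.4f}")
```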
This discrepancy is a direct result of the decision to assign equal width to all intervals. However, there are several examples in science and statistics where assigning analog data to classes of unequal width conveys more usable information (non-linear assignment, e.g. a geometric progression of class widths). In addition to other possible advantages, these alternatives are certainly capable of keeping the relative loss of precision the same across all intervals. But to accomplish this, the non-linear conversion must be carried out at the A/D conversion stage or prior to writing the data to a file. Keeping in mind that all of this digital data will eventually be converted back to voltages (or currents) which, in turn, will be used to create an image, one assumes that decisions concerning class boundaries and class widths must be made with that final image in mind, and that the designers have made them in a manner that produces the best quality in the final image.
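For what it's worth, here is a minimal sketch of what a geometric progression of class widths would look like, again with invented numbers. Real raw converters and file formats use related but more elaborate tone curves, so take this only as an illustration of the principle that such a scheme equalizes the relative loss across all classes:

```python
import numpy as np

# Class boundaries grow by a constant ratio from Vmin to Vmax, so every
# class spans the same *relative* range of voltages.
v_min, v_max = 0.001, 1.0     # invented noise floor and full-scale voltage
n_classes = 256

ratio = (v_max / v_min) ** (1.0 / n_classes)   # constant ratio between boundaries
boundaries = v_min * ratio ** np.arange(n_classes + 1)

# Every class now hides the same relative spread, top to bottom.
spreads = boundaries[1:] / boundaries[:-1]
print(f"class-boundary ratio: {ratio:.5f}")
print(f"relative spread, lowest class : {spreads[0]:.5f}")
print(f"relative spread, highest class: {spreads[-1]:.5f}")
```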