While waiting for the elusive Gamma tutorial I have been doing some reading on the subject. There are two statements that crop up periodically that I find misleading for a properly color-managed post-processing workflow and that I would like to better understand with your help (be forewarned, long and technical).
1. ‘Linear processed raw captures look very dark’. This quote, found in different forms in textbooks and specialist sites alike, comes from a six-year-old Adobe white paper ( http://www.adobe.com/digitalimag/pdfs/linear_gamma.pdf ). I took it for granted for several years, and it may even appear true in a non-color-managed workflow, but it is misleading: raw captures processed in a linear-gamma color space look just as good (probably better, see point 2 below) when properly displayed by a color-managed system. Of course, if your system is unaware that the image is linear, it will pass it as-is to your monitor which, like the vast majority of monitors on sale today, will distort it with its electronics by roughly a 2.2 power function (gamma), displaying it darker than your data intended. If, on the other hand, your system is properly color managed, it will recognize that you are working in a linear color space and, knowing that your monitor will darken your image, apply a compensating inverse power function of 1/2.2 to your image data just before passing it to the monitor. When the monitor then DOES distort it, the end result is a properly displayed picture, proportional to the underlying image data: the exponents combine as (1/2.2) x 2.2 = 1. So the linearly processed data of raw captures is not, and does not look, ‘very dark’. It looks just like the captured scene, if properly displayed. It’s the monitor that makes it dark.
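To make the exponent arithmetic concrete, here is a minimal Python sketch of the two display paths described above. The 2.2 figure and the 18% grey midtone are illustrative assumptions, not a claim about any particular monitor or profile:

```python
# Sketch: a gamma-2.2 monitor darkens linear data, and a color-managed
# system pre-compensates so the two distortions cancel out.
# All values are normalized to [0, 1]; numbers are illustrative.

MONITOR_GAMMA = 2.2  # typical monitor transfer function (assumption)

def monitor_response(v):
    """The monitor darkens its input: output = input ** 2.2."""
    return v ** MONITOR_GAMMA

def cms_compensation(v):
    """Color management pre-brightens linear data: v ** (1/2.2)."""
    return v ** (1.0 / MONITOR_GAMMA)

linear_value = 0.18  # a midtone in linear light (~18% grey)

# Non-color-managed path: linear data sent straight to the monitor.
uncorrected = monitor_response(linear_value)  # far too dark

# Color-managed path: compensate first, then let the monitor distort.
corrected = monitor_response(cms_compensation(linear_value))  # back to 0.18

print(f"sent as-is:  {uncorrected:.3f}")
print(f"compensated: {corrected:.3f}")
```

The compensated value comes back as the original 0.18, since (1/2.2) x 2.2 = 1, while the uncompensated midtone collapses to roughly 0.023.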
2. ‘You need to work in a gamma-corrected color space (i.e. not linear) because it minimizes posterization/banding’. I found this here (http://www.dpreview.com/learn/?/Glos...nearity_01.htm ), but there are countless examples of similar statements everywhere; this too is, IMHO, misleading. I believe this statement would be true if you were working with a camera sensor with a non-linear response in its analog stage, one that produced the equivalent of gamma-corrected raw data out of the box in the raw file. That way you would actually CAPTURE more detail in the shadows. But if you start with a set of linear(ized) raw data, as is the case with virtually all commercial DSLRs today, you do not have that benefit: you do not GENERATE more detail in the shadows by applying a gamma correction; all you do is (unnecessarily and prematurely?) shift your EXISTING linear data around, expanding it in some regions and compressing it in others. Any time you do this in post-processing it may result in banding, gamma or not. So how is posterization minimized by using a gamma-corrected color space on existing linear raw data? Of course, you do not have a choice if you are instead working on a JPEG image (sRGB, 8 bits).
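To see the ‘expanding in some regions, compressing in others’ effect in numbers, here is a small Python sketch counting how many of the 256 codes of an 8-bit encoding land in the deep shadows under a linear versus a gamma-2.2 encoding. The thresholds and gamma are illustrative assumptions for the sake of the example, not a statement about any particular camera or working space:

```python
# Sketch: how the choice of encoding gamma redistributes a fixed
# budget of 8-bit output codes across the tonal range.
# Numbers are illustrative only.

LEVELS = 256  # 8-bit output codes

def codes_below(linear_threshold, gamma):
    """Number of 8-bit codes at or below a given linear light level,
    for the encoding output = input ** (1/gamma)."""
    encoded = linear_threshold ** (1.0 / gamma)
    return round(encoded * (LEVELS - 1))

# Linear values below 0.25, i.e. the bottom two stops of the range.
for gamma, label in [(1.0, "linear (gamma 1.0)"), (2.2, "gamma 2.2")]:
    n = codes_below(0.25, gamma)
    print(f"{label}: {n} codes for linear values < 0.25")
```

With these assumptions, the linear encoding assigns 64 of the 256 codes to that shadow region while the gamma-2.2 encoding assigns 136; the same existing data is simply spread over more codes in the shadows and fewer in the highlights.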
So if these premises are correct, in these days of 16-bit color and terabyte hard drives, why exactly do we post-process our raw captures in working color spaces with a gamma other than 1.0?