This is an interesting (though old) discussion, and sorry to be joining after so much time, but it gets at some fundamental questions I have about the chain from analog to digital to Raw to ACR. The last links (digital-Raw-ACR) are clear to me.
It is often said that the Raw file is simply "the output of the sensor" plus metadata, but this is only true with the qualification that the "output of the sensor" means the output after digital sampling. So, is ISO simply a gain applied to the analog voltages prior to sampling, or is something more involved? If so, I assume there is an analog noise filter applied prior to sampling?
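A toy simulation may make the analog-vs-digital distinction concrete. Every number here is invented for illustration (a hypothetical 12-bit ADC, made-up read-noise and gain values); the point is only that a gain applied before quantization does not amplify the quantization error, while the same gain applied after quantization does:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 12-bit ADC and signal levels -- illustrative values only.
full_scale = 4096
read_noise = 0.3          # analog noise, in ADC-step units, before the converter
signal = 40.0             # a dim analog signal level
gain = 8.0                # the "ISO" gain factor under discussion

n = 100_000
analog = rng.normal(signal, read_noise, n)

# Analog ISO: amplify the voltage BEFORE quantization.
pre_gain = np.clip(np.round(analog * gain), 0, full_scale - 1)

# Digital "ISO": quantize first, then multiply in software.
post_gain = np.clip(np.round(analog), 0, full_scale - 1) * gain

# Error relative to the ideal amplified signal: the digital route
# multiplies the quantization error along with the signal.
err_pre = np.std(pre_gain - signal * gain)
err_post = np.std(post_gain - signal * gain)
print(err_pre, err_post)
```

In this sketch the post-quantization gain shows a visibly larger error spread; in a real camera the picture is muddier because read noise is injected both before and after the amplifier, which is exactly why the "where is ISO applied" question matters.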
I am most interested in these questions in the context of underwater video, which can be viewed as a frequency-dependent attenuation problem. In this case, one can't fix the attenuation in post because of poor S/N in the red part of the spectrum. It is better in Raw, but not by much. More to the point, not many cameras shoot video in Raw (nor do many videographers shoot this way, for obvious reasons!), so it is even more problematic for video. So, a specific question:
Some cameras (Olympus in particular) have an Underwater white balance setting. Can this be viewed as a frequency-dependent ISO adjustment, i.e., is it applied in the analog domain, which would improve the situation? Or is it just a tweak after sampling, with the accompanying S/N limitations? Manufacturers are not responsive to such questions. I do observe that it boosts the reds, but it does not come close to what can be achieved with a red filter, which is a true analog spectrum tilt. ISO comes into it because the red filter costs about 3 stops of exposure, which forces the ISO up (the aperture is usually wide open anyway due to low light levels, and there is little shutter-speed flexibility when shooting video).
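If the Underwater setting is purely a post-sampling tweak, the limitation can be sketched with photon shot noise alone. The attenuation figures below are made up, but the conclusion is generic: per-channel digital gains equalize the channel means while leaving each channel's S/N exactly where the light loss put it, which is why a boosted red channel still looks noisy:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-channel water transmission (R, G, B) -- red is hit hardest.
attenuation = np.array([0.1, 0.7, 0.9])
scene = np.array([1000.0, 1000.0, 1000.0])   # photons per channel before the water

# Photon arrival is Poisson in the number of detected photons.
detected = rng.poisson(scene * attenuation, size=(100_000, 3))

# A post-sampling "underwater WB" just multiplies each channel.
wb_gain = 1.0 / attenuation
balanced = detected * wb_gain

# Means are equalized, but a multiplicative gain scales signal and noise
# together, so each channel's S/N is unchanged by the correction.
snr = balanced.mean(axis=0) / balanced.std(axis=0)
print(snr)
```

For Poisson light the S/N per channel is roughly the square root of the detected photon count, so red ends up near 10 here while blue sits near 30 regardless of any gain applied afterwards. A red filter (or an analog per-channel gain, if such a thing were exposed) changes the problem before the noise statistics are frozen in.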
Finally, this pathological example (as some would view the underwater problem -- the normal answer is to just get light on the subject -- good luck) should open up the concept of white balance. In correcting for the light source only (tungsten, daylight, etc.), we ignore the transmission effect; or, perhaps more accurately, we lump the source, the transmission effects, and the reflectance into a couple of sliders called WB. Then we try to solve the specific issues by other means: masks, split toning, dehaze, contrast pop, etc. Then we try to mitigate the downsides of these powerful tools with another string of noise filters and the like. It seems best to me to get the digital signals correct (as best we can) in the A/D process itself.
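One reason lumping transmission into WB falls short underwater: a Beer-Lambert attenuation depends on the light's path length, so a single pair of WB sliders can only be exact for one subject distance. A sketch with invented attenuation coefficients:

```python
import numpy as np

# Hypothetical per-channel attenuation coefficients for water (per metre);
# red is absorbed far faster than green or blue. Illustrative values only.
k = np.array([0.35, 0.05, 0.02])   # R, G, B

def transmission(path_m):
    """Fraction of light surviving a given path length through water."""
    return np.exp(-k * path_m)

# A WB correction computed for one subject distance...
d_calibrated = 3.0
wb = 1.0 / transmission(d_calibrated)

# ...over- or under-corrects subjects at every other distance.
for d in (1.0, 3.0, 5.0):
    residual = wb * transmission(d)
    print(d, residual)   # equals [1, 1, 1] only at d = 3.0
```

A global WB, however it is implemented, is a single diagonal correction; a scene with subjects at mixed distances needs the masks and local tools mentioned above precisely because the underlying physics is per-path-length, not per-frame.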
I will appreciate any comments and/or references.