Results 61 to 80 of 87

Thread: Gamma revisited

  1. #61

    Join Date
    May 2012
    Location
    Southern California
    Posts
    79
    Real Name
    Mike

    Re: Gamma revisited

    Quote Originally Posted by dje View Post
    Ted you may have a different interpretation of the word quantisation to me but for me, in this context, it is simply the process of dividing up an analogue voltage range into a series of small ranges and assigning a digital value to each range. The number of steps is determined by the bit depth of the A/D converter.

    Mike it all happens as part of the operation of the A/D converter. Modern CMOS sensors have the A/D converter built in to the same chip as the sensels. In fact, the latest sensor chips have one A/D converter for each column of sensels to facilitate faster read-out. The outputs from the multiple A/D converters are then multiplexed together into one data stream.

    But we don't need to know the intimate details of how A/D converters work! (And I couldn't help with that anyway.)

    Dave
    Dave,

    Thanks for your comments. The first paragraph states exactly what I was trying to convey, but much more succinctly. I think you recognize that knowing where the A/D converter is situated, or how it works, is not essential to my reasoning. However thanks for the information; it is interesting.

    Some have commented that my posts are rather verbose and add unnecessary detail. I apologize, however I need that much detail to construct, for myself, a logical process. I do not imagine that most readers require it for their understanding; indeed I assume that most readers are several steps ahead of me. I can only ask for your indulgence. What I write is sort of a stream of thought as I attempt to understand the process. Also I include these details in order that readers can more easily point out the steps where my logic has gone astray.

  2. #62

    Join Date
    May 2012
    Location
    Southern California
    Posts
    79
    Real Name
    Mike

    Re: Gamma revisited

    Quote Originally Posted by Simon Garrett View Post
    As John says, I think the first two parts of Mike's tutorial go into a lot of depth which might be beyond what is strictly essential to an understanding of gamma. I certainly wouldn't discourage Mike from a more wide-ranging view, which could be extremely interesting and valuable.

    When Mike gets to the parts more directly about gamma curves, my suggestion would be to make sure the different uses of gamma curves in photography are clearly distinguished, which include:
    1. To provide perceptually-uniform coding (to minimise the number of bits used for a given noise level)
    2. To compensate for the non-linear response of monitors
    3. To apply a small contrast boost in terms of a "viewing gamma" to compensate for:
      • The lower contrast of the monitor or print, compared to the original scene
      • The different viewing conditions of the monitor or print, compared to the original scene


    Thanks to John for the link to the sRGB paper (http://www.w3.org/Graphics/Color/sRGB.html) which describes the "viewing gamma" very well.

    With modern systems (especially colour-managed systems), the tone curve applied for encoding efficiency, the tone curve applied to compensate for non-linear output device, and any tone curve for a contrast boost are treated as separate processes. Historically, electronics were analogue and each processing step was a major cost, and could introduce additional noise. Even in the early days of digital, extra processing steps were avoided where possible. As a result, with careful choice of gamma, the TV systems (for which gamma curves were probably first used) could deal with all three purposes by the judicious choice of one tone curve. But they are quite separate purposes.
    Simon,

    Thanks for your comments. I doubt that I am competent to indulge in a truly "wide-ranging view" of digital imaging and color management. I do have in mind some comments about points 1 & 2, a bit later. As for the rest, I hope that you and others will provide additional elucidation.

  3. #63

    Re: Gamma revisited

    Quote Originally Posted by mikesan View Post
    Some have commented that my posts are rather verbose and add unnecessary detail. I apologize, however I need that much detail to construct, for myself, a logical process. I do not imagine that most readers require it for their understanding; indeed I assume that most readers are several steps ahead of me. I can only ask for your indulgence. What I write is sort of a stream of thought as I attempt to understand the process. Also I include these details in order that readers can more easily point out the steps where my logic has gone astray.
    Mike, I for one am not saying what you've written is too long. It may be longer than absolutely necessary (the uses of gamma curves are in essence fairly straightforward), but there's nothing wrong with adding more background.

    However, if it's fairly long I think it makes it even more important to structure it well, and keep the key points clear. I'm sure you'll find some way of doing that. One way that I might use in such circumstances could be an overview at the beginning and/or a summary at the end (or key points highlighted along the way) - but whatever works for you. Just make it easy for people to latch on to the different purposes for which gamma curves are used in digital image processing.

  4. #64

    Join Date
    May 2012
    Location
    Southern California
    Posts
    79
    Real Name
    Mike

    Re: Gamma revisited

    Quote Originally Posted by Simon Garrett View Post
    Mike, I for one am not saying what you've written is too long. It may be longer than absolutely necessary (the uses of gamma curves are in essence fairly straightforward), but there's nothing wrong with adding more background.

    However, if it's fairly long I think it makes it even more important to structure it well, and keep the key points clear. I'm sure you'll find some way of doing that. One way that I might use in such circumstances could be an overview at the beginning and/or a summary at the end (or key points highlighted along the way) - but whatever works for you. Just make it easy for people to latch on to the different purposes for which gamma curves are used in digital image processing.
    Excellent suggestions Simon. Please note however, it was never my intention to give a detailed description of the many purposes for which gamma (or other tone) curves are used. I am just trying to get my head around how they are used to facilitate the digital storage of images.

  5. #65

    Join Date
    May 2012
    Location
    Southern California
    Posts
    79
    Real Name
    Mike

    Re: Gamma revisited

    Part #3

    Before I get back to gamma allow me a bit of a digression to supply some background, and a short story about how I arrived at where I am.

    I became interested in photography at the age of 12, so I have now been a (very amateur but enthusiastic) photographer for 71 years. I bought my first digital camera in 2007, well after the digital era was underway. I quickly became fascinated with the speed with which one could go from snapping the shutter to a viewable image, and interested in the technology which provided that convenience. Having been a scientist (molecular biology, medicine) during all my professional life, I was naturally curious to understand a bit more about digital imaging.

    My first recourse was the internet. I read much, some of which was enlightening, some confusing, and some downright illogical. I kept notes and I shall next quote some excerpts from these notes, all of which date from 7 years ago (when I was even more naive than I am today).

    I am aware that the following notes are somewhat disconnected, and not directly relevant to the subject of this thread; perhaps even inappropriate. They might even be described as the ramblings of an old man. Well there's no denying it; I am an old man. I am prepared to edit this post and delete these notes if that is the consensus of the readers.

    Several authors have pointed out that the eye's sensitivity to light is not linear. One well respected writer has stated that: "Our eyes perceive differences in lightness logarithmically, and so when light intensity quadruples we only perceive this as a doubling in the amount of light. A digital camera, on the other hand, records differences in lightness linearly-- twice the light intensity produces twice the response in the camera sensor." I don't know if the stated ratio between intensity and human perception is exact, but there seems to be general agreement on the principle that the eye's ability to distinguish differences in light intensity is not a linear function of the level of light intensity (Weber-Fechner Law, Stevens Power Law, etc.).

    From this I conclude that the eye can distinguish smaller absolute differences in intensity at low light levels, while at higher levels of light intensity larger absolute differences are required to produce a perceivable change. If this is indeed so, should it not be a significant consideration in deciding how one converts and stores the linear data captured by the sensor? This is at least a logical extension of the theory, but I am unsure how well it holds up in practice. However one also finds statements such as: "remember, the eye can distinguish fewer levels in shadows". This seems to contradict the above hypothesis IF the "levels in shadows" referred to are of the same relative energy magnitude as the levels in the highlights. None of the articles I have read addresses this question. Another head-scratcher?

    Several authorities have written that the sensor chips are linear devices. (To me, that simply means the voltage generated and measured at each pixel is directly proportional to the amount of light that pixel received.) At this point each of the pixels in the sensor array has generated an analog voltage which must be converted to a digital value for storage. This is carried out by the analog to digital (A/D) converter using a process called quantization. The voltage measured at each pixel will fall within one of a contiguous series of pre-defined voltage intervals or classes. Each interval is assigned a digital value, and this value is given to those pixels whose analog voltage falls within that interval. Thus the analog voltage measured by the voltage detector is "replaced" by an integer (digital) value which codes for a pre-defined voltage range. I assume that one would wish to maintain that linearity in the A/D conversion. This requires that all pre-defined voltage intervals be equal in width.

    One can also conclude that, as the analog intervals are made narrower, more such intervals may be defined within the range of interest. This will increase the precision with which the digital values express the original analog values. It will also require an increase in the bit depth used for storage.
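    The linear quantization just described can be sketched in a few lines (my own illustration, not anything from a real converter; the function name and the normalized-voltage input are assumptions):

```python
def quantize_linear(v, bits=8):
    """Map a normalized analog voltage v in [0, 1] to an integer code.

    Every interval has equal width 1 / 2**bits, so the mapping
    preserves the sensor's linearity.
    """
    levels = 2 ** bits
    code = int(v * levels)          # floor to the interval v falls in
    return min(code, levels - 1)    # clamp v == 1.0 into the top interval

# Two voltages in the same interval receive the same stored code:
print(quantize_linear(0.5000))           # 128
print(quantize_linear(0.5020))           # 128
# Narrower intervals (more bits) separate them again:
print(quantize_linear(0.5000, bits=12))  # 2048
print(quantize_linear(0.5020, bits=12))  # 2056
```

    As the last two lines show, increasing the bit depth only narrows the intervals; it does not change the voltage range being covered.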

    And it is worth noting at this point that increasing the bit depth does not affect the dynamic range of the sensor nor does decreasing the bit depth limit the dynamic range (contrary to what I have seen suggested by more than a few). To belabor the point, the bit depth simply determines the precision with which the dynamic range may be recorded.

    Assume we have a voltage detector that can distinguish 512 discrete voltage levels from Vmin (noise) to Vmax. However (for the purposes of this exercise), assume we have a bit depth of only 8 for data storage; i.e. we can define only 256 intervals to which we must assign the voltage values in the process of analog to digital conversion. If the class intervals have equal width, this means that both the 512th and the 511th voltages are assigned to one class and thus assigned the same digital value when stored. The same pooling will be true for the allocation of the remaining detected voltages. In the process we are losing precision in the absolute amount of 1/512 of Vmax; and note that the same absolute loss applies to all of the classes. One is prompted to ask: is this the best way to allocate the information? Why bother to employ a voltage detector whose sensitivity exceeds your need or desire to retain the information it can provide? Does the loss of precision at the low end of the voltage scale (brightness range) have the same impact on the final quality of the image as an identical loss at the high end? For no reason other than my gut feeling, I believe this should not be the case. Consider that the same absolute loss of precision in the lowest class results in pooling two intervals which differ relatively by a factor of 2, while in the highest class the factor is 512/511.
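    The arithmetic in this example can be checked directly (a sketch under the same assumptions: 512 detector levels folded into 256 equal-width storage bins):

```python
# 512 detector levels into 256 equal-width bins: every pair of
# adjacent detector levels shares one stored code.
detector_levels = 512
stored_bins = 256
levels_per_bin = detector_levels // stored_bins   # 2, everywhere in the range

# The absolute loss is the same in every bin: 1/512 of Vmax.
absolute_loss = 1 / detector_levels

# But the relative spread within a bin differs enormously between ends:
low_bin_ratio = 2 / 1            # levels 1 and 2 pooled: a factor of 2
high_bin_ratio = 512 / 511       # levels 511 and 512 pooled

print(levels_per_bin, absolute_loss)            # 2 0.001953125
print(low_bin_ratio, round(high_bin_ratio, 5))  # 2.0 1.00196
```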

    This discrepancy is a direct result of the decision to assign equal width to all intervals. However there are several examples in science and statistics where assigning analog data into classes having unequal width conveys more "useable" information. (Non-linear assignment, e.g. geometric progression of class widths.) In addition to other possible advantages these alternatives are certainly capable of keeping the relative loss of precision the same across all intervals. But to accomplish this requires that the non-linear conversion be carried out at the A/D conversion stage or prior to writing the data to a file. Keeping in mind that all this digital data will eventually be converted back to voltages (or current) which, in turn, will be used to create an image, one assumes that such decisions concerning class boundaries and class width must be made with this goal in mind. One assumes that the designers have made such decisions in a manner that produces the best quality in the final image.

    Note that the last paragraph of these notes was written at a time when the notion of "gamma" had not yet crossed my horizon. I am now well aware that non-linear conversion of the data is not employed prior to digital storage.

    Now back to the subject at hand. As a very preliminary experiment I have added columns H, I & J to sheet #2 of the linked spreadsheet.

    https://www.dropbox.com/s/vf6a948dsn...%231.xlsx?dl=0

    In column H I list the gamma (2.2) adjusted values from the normalized voltage range given in column B. In column J you find the absolute difference in normalized voltage between that interval (bin) and the immediately preceding interval.

    As I pointed out in Part #2, a linear quantization of the data (see column B, bin #2) pools all voltages between 0.0039 Vmax and 0.0078 Vmax and stores them all with the digital value of 2. Assuming some (likely all) of these different voltages represent distinguishable luminances, we are discarding usable data.

    Compare with column H, where the quantization is gamma adjusted. bin #2 pools all voltages between 0.00001 Vmax and 0.00002 Vmax. Clearly a much finer granularity and much more information (assuming that voltages of this magnitude represent luminances with a distinguishable difference in the final image).

    Look a bit further down in column H. One sees that voltages between 0.0039 Vmax and 0.0078 Vmax (which in the linear quantization are all pooled in a single bin) are now divided into 9 bins (each stored with a unique digital value). With this encoding it is clear that pixel A with a voltage of 0.004 Vmax (encoded as digital 21) would be clearly distinguishable from pixel B with a voltage of 0.006 Vmax (encoded as digital 24).
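    The effect in these columns can be reproduced approximately with a short script (my own sketch; `encode` applies a 1/2.2 power curve before quantizing, as in the hypothetical case described here, and the exact codes it produces can differ by a step or so from the spreadsheet depending on rounding conventions):

```python
def encode_gamma_first(v, bits=8, gamma=2.2):
    """Quantize v in [0, 1] after applying a 1/gamma power curve
    (the hypothetical 'gamma before the A/D' case)."""
    levels = 2 ** bits
    return min(int(v ** (1 / gamma) * levels), levels - 1)

def encode_linear(v, bits=8):
    """Plain linear quantization for comparison."""
    return min(int(v * 2 ** bits), 2 ** bits - 1)

# Linear quantization pools these two shadow voltages into one code...
print(encode_linear(0.004), encode_linear(0.006))          # 1 1
# ...while gamma-first quantization keeps them distinguishable:
print(encode_gamma_first(0.004), encode_gamma_first(0.006))  # 20 25
```

    The absolute codes (20 and 25 here, versus 21 and 24 in the spreadsheet) depend on where the bin boundaries are drawn, but the point stands either way: the two voltages land in distinct bins.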

    I am aware that this example presupposes that the gamma encoding of the analog data occurs before conversion to digital values. I know that is not what happens in the real world. Nevertheless this was, for me, an interesting exercise in showing how non-linear encoding can be used to advantage.

    My next task is to understand how non-linear (gamma) manipulation of the data is employed to advantage after the linear data is stored.

    Next episode: Linear quantization with more bits.

  6. #66

    Re: Gamma revisited

    Is anyone else aware that gamma has been around for 75 years or more, in the first television and radar?

  7. #67

    Join Date
    May 2014
    Location
    amsterdam, netherlands
    Posts
    3,182
    Real Name
    George

    Re: Gamma revisited

    Quote Originally Posted by Richard Lundberg View Post
    Is anyone else aware that gamma has been around for 75 years or more, in the first television and radar?
    I wondered too. I always understood it's a correction for the output device.

    And as far as bit depth is concerned, 8 bits is enough to present a visible image but not enough for editing. More bits result in a smoother change back to 8 bits. That's where the value of a higher bit depth lies.
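    George's point about editing headroom can be sketched numerically (my own illustration, assuming a 4x brightening edit and a 14-bit working depth; the numbers are made up for clarity):

```python
# Brighten a deep-shadow gradient by 4x (a typical heavy edit).

# Working in 8 bits: the 32 input codes map to only 32 of the 128
# output codes, leaving gaps of 4 between them -- visible banding.
edited_8 = sorted({c * 4 for c in range(32)})
print(len(edited_8), edited_8[:4])    # 32 [0, 4, 8, 12]

# Working in 14 bits (the same tonal range is codes 0..2047), then
# reducing to 8 bits afterwards: all 128 output codes are populated.
edited_14 = sorted({(c * 4) // 64 for c in range(2048)})
print(len(edited_14), edited_14[:4])  # 128 [0, 1, 2, 3]
```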

    George

  8. #68
    dje's Avatar
    Join Date
    May 2011
    Location
    Brisbane Australia
    Posts
    4,636
    Real Name
    Dave Ellis

    Re: Gamma revisited

    Quote Originally Posted by Richard Lundberg View Post
    Is anyone else aware that gamma has been around for 75 years or more, in the first television and radar?
    Yes indeed.

  9. #69
    dje's Avatar
    Join Date
    May 2011
    Location
    Brisbane Australia
    Posts
    4,636
    Real Name
    Dave Ellis

    Re: Gamma revisited

    Quote Originally Posted by mikesan View Post

    I am aware that this example presupposes that the gamma encoding of the analog data occurs before conversion to digital values. I know that is not what happens in the real world. Nevertheless this was, for me, an interesting exercise in showing how non-linear encoding can be used to advantage.
    Mike, in my opinion, this is the key point in the issue of re-distribution of samples towards the lower end of the range (and the subsequent argument about better coding efficiency). With your indulgence, I'll include a few notes of my own.

    The re-distribution of digital samples towards the lower end of the input signal range caused by the application of a gamma curve can only occur if this gamma curve is applied to the analogue signal input to the A/D converter. If the gamma curve is applied digitally after the A/D converter (which it is in a digital camera) the sampling points will be linearly dispersed. The key to understanding this is that the quantisation done by the A/D converter is based on the input signal to the A/D converter. The diagrams and figure below seek to explain this further.

    [Attached diagram: Gamma revisited]

    In case A, the input signal to the A/D converter is non-linear. Sample or quantisation points occur every time the input signal to the A/D converter increases into the next integer value range, and these occur as shown by the blue points in the curve below, e.g. the first sample point occurs at an A/D input level of 1, which corresponds to a value much less than one in the original linear signal. This gives a set of digital samples that are biased towards the lower end of the original signal input range. However, this case does not apply to a modern digital camera, which is configured as in case B. Case A may well have occurred in the early days of digital television, though, where the original signal from the camera was still analogue and was converted to digital for transmission purposes.

    In case B, the input signal to the A/D converter is linear and hence the sample points are linearly distributed across the original signal input range. When the gamma curve is applied after digitization, all that happens is that the values at the sample points are increased. The red points in the figure below illustrate this. You can see that the shapes of the red and blue curves are the same but the sample points of the original input signal are distributed differently.
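    The two cases can also be checked numerically (my own sketch, using the 5-bit depth of the diagrams and a plain 1/2.2 power curve for the gamma):

```python
GAMMA = 2.2
LEVELS = 32  # 5-bit, as in the diagrams

def case_a_threshold(code):
    """Case A: gamma applied to the analogue signal before the A/D.
    Returns the linear input level at which the converter first outputs
    `code` -- the thresholds crowd towards the low end."""
    return (code / LEVELS) ** GAMMA

def case_b_threshold(code):
    """Case B: linear A/D, gamma applied digitally afterwards.
    Thresholds stay evenly spaced in the linear input."""
    return code / LEVELS

# A few quantisation thresholds, as a fraction of full scale:
for code in (1, 2, 16):
    print(code, round(case_a_threshold(code), 5), round(case_b_threshold(code), 5))
# 1 0.00049 0.03125
# 2 0.00224 0.0625
# 16 0.21764 0.5
```

    The first case-A threshold (0.00049 of full scale) sits far below the first case-B threshold (0.03125), which is exactly the re-distribution of samples towards the low end described above; in case B the thresholds stay evenly spaced.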

    [Attached figure: Gamma revisited]

    Hope this is of some interest.

    If you click on the diagrams, you'll get a larger view.

    Dave

    PS 5 bit data used for clarity
    Last edited by dje; 22nd January 2015 at 09:59 AM.

  10. #70

    Re: Gamma revisited

    Dave, interesting analysis. There's another step, I think. What happens in cameras (when creating jpegs) is as in your case B: A/D conversion and then application of a tone curve. However the captured digital data is in 12-14 bits (for typical current sensors). The tone curve is applied, and then the data is reduced to 8 bits.

    This is the extra step: reduction from 14 bits to 8 after the tone curve is applied. As a result, the tone curve gives a more perceptually-even distribution of steps. In terms of improving perceptual quality (reducing quantisation noise at the black end), there wouldn't be much point in applying the tone curve if conversion to 8 bits were done before applying the curve.
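    This ordering can be sketched as follows (my own illustration, assuming a 14-bit sensor and a plain 1/2.2 power curve rather than any camera's actual tone curve):

```python
def to8bit_curve_first(x14):
    """Apply a 1/2.2 tone curve to a 14-bit linear value, then reduce to 8 bits."""
    v = x14 / 16383                     # normalise the 14-bit value
    return min(int(v ** (1 / 2.2) * 256), 255)

def to8bit_reduce_first(x14):
    """Reduce to 8 bits first, then apply the same curve to the 8-bit value."""
    v8 = x14 // 64                      # 16384 / 256 = 64 raw levels per 8-bit step
    return min(int((v8 / 256) ** (1 / 2.2) * 256), 255)

# Deep-shadow raw values that the curve-first path keeps distinct...
print(to8bit_curve_first(10), to8bit_curve_first(40))    # 8 16
# ...are crushed to a single code if the 8-bit reduction happens first:
print(to8bit_reduce_first(10), to8bit_reduce_first(40))  # 0 0
```

    In the second path both shadow values fall into the lowest 8-bit bin before the curve is ever applied, so the curve has nothing left to redistribute, which is the point above.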

  11. #71
    ajohnw's Avatar
    Join Date
    Aug 2012
    Location
    S, B'ham UK
    Posts
    3,337
    Real Name
    John

    Re: Gamma revisited

    I think it might be worth mentioning that A/D converters needn't be linear, and there are several types available that use different techniques. The same applies to D/A converters.

    Where gamma is applied is an interesting aspect. I generally assume that it's a function of the monitor - a number is mapped to a luminosity level. If I look at the system software settings for my monitor I see 3 gamma sliders, one for each of red, green and blue, plus one that I assume does all. The default setting is 1, so I assume that as far as the system software is concerned it's linear and the monitor applies it. It's then manipulated further by monitor profiles to try and get it as close as possible to the desired value. In my case that includes a LUT that is loaded into the video card. Of late there seem to be variations on how that can be done, 3D LUTs for instance, but on the other hand I currently use an XYZ LUT which implies a 3D table.

    If I stray into the area of software rendering, I have seen comments such as "gamma has to be removed in order to make colour calculations work correctly". This might be because a user may have applied it in the PC. I'm partly inclined to disregard the comment. Confusion is possible due to pages like the one below and its links, which makes me wonder why images didn't change when I moved from a CRT to a panel display. I did that very early on. Maybe the comment relates to early LCD colour laptops, which had awful displays. They even ghosted when things moved fairly slowly. TFT was much better.

    http://www.anyhere.com/gward/vrml/gamma.html

    I should add that I assume the LUT is in the graphics card and isn't in the monitor, but then wonder why they all seem to be 8-bit and greater depths are obtained with hardware-calibrated monitors. There are communication facilities between a PC and its monitor. On Linux they seem to come to the fore, and then... no idea.

    http://en.wikipedia.org/wiki/Display_Data_Channel

    John
    -

  12. #72
    ajohnw's Avatar
    Join Date
    Aug 2012
    Location
    S, B'ham UK
    Posts
    3,337
    Real Name
    John

    Re: Gamma revisited

    Quote Originally Posted by Simon Garrett View Post
    Dave, interesting analysis. There's another step, I think. What happens in cameras (when creating jpegs) is as in your case B: A/D conversion and then application of a tone curve. However the the captured digital data is in 12-14 bits (for typical current sensors). The tone curve is applied, and then the data is reduced to 8 bits.

    This is the extra step: reduction from 14 bits to 8 after the tone curve is applied. As a result, the tone curve results in more accurate perceptually-equal distribution of steps. In terms of improving perceptual quality (reducing quantisation noise at the black end) there wouldn't be much point in applying the tone curve if converting to 8 bits were done before applying the curve.
    No Simon, the tone curve maps the whole bit range into the 8 bits. As I mentioned earlier, the higher bit depth then becomes fractions of the jpg or screen bit depth. These are of course lost when this results in a jpg, but are still there underneath in a raw converter.

    If you google dpreview plus some DSLR model you will find they show the tone curves used. They have changed over time. The highlight and lowlight regions are generally compressed in terms of tonal variations. The net result is a certain reduction in contrast in these areas. Canon did have a tendency to chop them off. All used to compress the low-light end over several EVs on jpgs, which meant that a fair degree of manipulation of this area in jpgs was possible. Increasingly now, if these EVs are needed in the final image, they have to be extracted from raw. They are probably doing this to ease jpg noise-reduction problems as pixels get smaller and noise goes up. Despite improvements, that trend is still there.

    If you look at these curves, the point to note is that the linear regions give perceptually good results. Also, where a lot of compression has been used on lowlights, a lot less is used on highlights, so that they look more lifelike.

    On the gamma correction out of the sensor, something similar may actually go on. Using the old bucket analogy, water is poured in to represent light over some period of time. One problem is then usually included via a hole to represent a leak: the flow rate through it gets greater as the depth of the water increases. Then there is noise, another source of water being poured into the bucket. These sorts of things can combine to produce a non-linear response. They might just account for this in part within the A/D. As a for-instance, all sensors I am aware of have pixels that are never exposed. These could be used as a baseline for an A/D to work from; they are. The A/D could be tampered with in other ways.

    Out of interest, the early cameras were based on EEPROMs and even produced by amateur astronomers. The info is long since gone, but there is some mention of a commercial product here:

    http://jalopnik.com/the-first-digita...ack-1215543300

    John
    -

  13. #73

    Re: Gamma revisited

    Quote Originally Posted by ajohnw View Post
    Where gamma is applied is an interesting aspect. I generally assume that it's a function of the monitor - a number is mapped to a luminosity level.
    I think it's applied in a number of places. For example, encoding gamma is applied:
    • In the camera - if the image is saved as jpeg (or tiff)
    • In the raw converter, if the image is saved in the camera as raw

    In either case, this is the gamma curve that's applied to match digitisation steps to the perceptual ability to discern those steps (or rather, to avoid those steps being discernible). This is a power-law curve, often loosely described as logarithmic.

    Display gamma is applied:
    • By use of LUTs in the display driver, for monitors without hardware LUTs
    • In the monitor, where they have hardware LUTs (or other gamma controls)

    As I understand it, the gamma curve implemented in monitors is in the opposite sense - an exponential curve. Effectively, the response curve of a monitor largely cancels the gamma curve applied when encoding jpegs (and usually tiffs).

    The issue of removing gamma before display: this is my understanding (which of course might be wrong). Although our eyes have a non-linear response, the same non-linear eyes see the original scene and the scene on the monitor. The monitor needs to recreate the original scene without any tone curve applied, or it won't look like the original scene. So to a first approximation the end-to-end gamma ("system gamma", referred to as "viewing gamma" in the sRGB reference at http://www.w3.org/Graphics/Color/sRGB.html) must be 1.

    That is, any gamma applied must be removed. As the non-linear response of the monitor is exponential and the opposite to the logarithmic gamma applied when encoding sRGB jpegs, then the monitor at least partly removes any gamma, if nothing else is done. With colour-managed systems, tone curves are applied, altered and removed as independent steps, but the effect is the same.

    However, as the contrast of monitors is usually lower than contrast of the original scene captured by the camera, the image on the monitor may look a bit flat with a system gamma of 1. Viewing conditions may accentuate it. As the sRGB reference says, the sRGB standard assumes that the gamma curve is not entirely removed, leaving a residual gamma on the visual image displayed on the monitor (in the case of the sRGB standard) of 1.125, to give a slight contrast boost to the monitor image.
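    The numbers quoted from the sRGB reference compose as simple exponents (a sketch using the reference's nominal values of 0.45 for the encoding gamma and 2.5 for the reference CRT):

```python
# End-to-end ("viewing") gamma is the product of the exponents applied
# along the chain, since power laws compose by multiplying exponents.
encoding_gamma = 0.45    # ~1/2.2, applied when the image is encoded
crt_gamma = 2.5          # assumed response of the reference CRT

viewing_gamma = encoding_gamma * crt_gamma
print(viewing_gamma)     # 1.125 -- the residual contrast boost

# Following one scene luminance through the chain gives the same result
# as applying the viewing gamma directly:
scene = 0.2
encoded = scene ** encoding_gamma
displayed = encoded ** crt_gamma
print(round(displayed, 4), round(scene ** viewing_gamma, 4))  # 0.1636 0.1636
```

    A system gamma of exactly 1 would require the display exponent to be exactly the reciprocal of the encoding exponent; the sRGB standard deliberately leaves this slight mismatch.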

    Just to emphasise: this is as I understand it, and I'm open for others to explain it better.

    Quote Originally Posted by ajohnw View Post
    No Simon the tone curve maps the whole bit range into the 8 bits. As I mentioned earlier the higher bit depth then becomes fractions of the jpg or screen bit depth. These are of course lost when this results in a jpg but are still there underneath in a raw converter.
    Forgive me, I don't quite understand what you mean. The A/D conversion is (as I understand it) normally a linear conversion. Some cameras chop off a little bit at the black end, but I thought the initial conversion was linear. If the image is saved as raw, this is the data that is saved.

    If the image is to be saved (in the camera) as jpeg, then first a gamma curve is applied to the digital data (and maybe other tone functions such as Nikon picture controls, and whatever the equivalent is for Canon), then the data is reduced to 8 bits.

    I'm not quite sure how that relates to "the tone curve maps the whole bit range into the 8 bits". Do you mean that the tone mapping is done after reducing from 12 or 14 bits to 8 bits? That would seem to me a bit pointless, but I'm probably misunderstanding your point.

    You mention other curves being applied in cameras: this (I assume) is for jpeg encoding, where all sorts of black magic goes on to improve the look of jpegs. I don't think any curves are applied to raw data. Some of the curves applied to jpegs are to "improve" the image, some are gamma curves applied as part of the jpeg encoding. This latter is not to make the image look better (except by reducing the encoding noise), and is removed (by applying the opposite function) before display.

    I wonder if we're talking at cross purposes?

  14. #74
    ajohnw's Avatar
    Join Date
    Aug 2012
    Location
    S, B'ham UK
    Posts
    3,337
    Real Name
    John

    Re: Gamma revisited

    The only comment I have seen on image file format encoding refers to a gamma of about 0.45, decoded with one of around 2.2. There seem to be two ways the power law is usually expressed: as a reciprocal and as a decimal fraction; 1/0.45 happens to equal 2.2222. When I see things like that I want to see more info than the wiki gives, as I don't fully understand it either. Then adding a quote from the wiki:

    Output to CRT-based television receivers and monitors does not usually require further gamma correction, since the standard video signals that are transmitted or stored in image files incorporate gamma compression that provides a pleasant image after the gamma expansion of the CRT (it is not the exact inverse). For television signals, the actual gamma values are defined by the video standards (NTSC, PAL or SECAM), and are always fixed and well known values.
    And also go back to the point about what happened when I moved on from a CRT to a TFT screen. The screen must have had the same characteristics as the CRT monitor. At the time I was also using a CRT monitor on a daily basis at work. Same with our early panel TV, which we bought when there were still plenty of CRT ones about. At that time colour rendition was very similar to PAL. The next set was what I would call "chocolate-boxy", my own word, probably relating to vibrance and contrast.

    There is also another quote from the wiki that is partially of interest

    Gamma encoding of floating-point images is not required (and may be counterproductive), because the floating-point format already provides a piecewise linear approximation of a logarithmic curve.[3]
    What I was trying to point out is that en route to the screen several things go on apart from gamma manipulations. Some people refer to some aspects of these processes as gamma manipulation. The usual tone curve establishes the mid tones, and changing that is sometimes referred to as changing gamma.

    The wiki makes another interesting comment

    Although gamma encoding was developed originally to compensate for the input–output characteristic of cathode ray tube (CRT) displays, that is not its main purpose or advantage in modern systems
    Like hell it was, in relation to TVs. It was also used to give pleasing images. Analogue systems do not have the bit count problem.

    The only sensible comments I have seen on viewing conditions are that in one case the surroundings must be darker than the screen, and in another they should match its visual mid-tone grey. There are then various standards for image processing of one sort or another. This area is trying to ensure sensible eye adaptation.

    The tone mapping maps all or some of the raw bit depth into the 8-bit space. A difficult thing to tie down. The googling I suggested will show the number of EVs in jpegs. 9 EV, maybe with a lot of compression of the lowlights, isn't uncommon. What can't be tied down is just how EVs relate to bit counts in the raw file. There is a tendency for people to assume that a 14-bit raw file has a dynamic range of 14 EV or stops. Given the basic nature of an A/D converter that isn't possible. Just what the A/D count is at an exposure level of 1 EV above some black level is questionable. All that can be said is that to be of any use it must have some gradation - a count of more than 1, and the higher the better. Nikon probably went 14-bit to give D-Lighting something to work on. It is a partial cure for the problem with linear sensors: they intensify shadows in evenly lit situations - say a typical landscape where we only see weak shadows that are hardly noticeable.
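    To put rough numbers on that (my own back-of-envelope sketch, not from any camera data sheet): in an ideal linear raw file, each stop below clipping gets half the code values of the stop above it, so the deep stops have very little gradation to work with.

```python
# Codes available per stop in a linear raw file (toy model, assumes an
# ideal A/D converter). Stop n below clipping spans the code range
# [2^(b-n-1), 2^(b-n)), i.e. 2^(b-n-1) distinct codes for b bits.
def codes_per_stop(bit_depth, stops_below_clipping):
    return 2 ** (bit_depth - stops_below_clipping - 1)

for n in range(14):
    print(f"{n} stops below clipping: {codes_per_stop(14, n)} codes")
# The brightest stop gets 8192 codes, the 14th gets just 1 - which is
# why a 14-bit file can't really deliver 14 usable stops.
```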

    Given the internal circuitry of sensors that I have actually seen, the values due to noise and leakage obtained from the unexposed pixels are subtracted from the exposed pixel values. These may be in raw files too - I don't know. The manufacturers might keep them just for their jpegs, or maybe not use them at all. I'd be surprised if they didn't make any use of them. My camera's EXIF data shows comments like "hi gain up". CMOS sensors can generally have amplifiers attached to a pixel's output, or maybe it's pre-A/D.

    A/D converters can have a number of characteristics. If pixels are non-linear this could be used to compensate. Nothing is ever perfect and pixels won't be any different. I was just pointing out that they might do something like that to solve a problem. One day I might find a full data sheet on a DSLR sensor, but somehow I doubt it.

    Curves are applied in PP packages as well - more or less the same as the ones the camera applies to jpegs. Adobe, for instance, provides mimics of the in-camera curves as well as Adobe Standard. If you want to see what happens without them, find dcraw and look at its options. It would have to be run from a console (DOS prompt, terminal, etc.).

    John
    -

  15. #75
    dje's Avatar
    Join Date
    May 2011
    Location
    Brisbane Australia
    Posts
    4,636
    Real Name
    Dave Ellis

    Re: Gamma revisited

    Quote Originally Posted by Simon Garrett View Post

    This is the extra step: reduction from 14 bits to 8 after the tone curve is applied. As a result, the tone curve results in more accurate perceptually-equal distribution of steps.
    Simon, this is the area I'm still a bit fuzzy about. Part of the problem is that I'm not sure how the reduction in bit depth is done. Another factor in this is the concept of dithering, which can reduce the effects of tonal banding when bit depth reduction is performed; however, I think this is quite a different concept from the re-distribution of sample points.
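    For what it's worth, the dithering idea can be sketched like this (a toy scheme I've assumed, not what any real converter does): adding a little random noise before rounding breaks up the hard band edges while preserving the average value.

```python
import random

random.seed(42)  # reproducible dither for the demo

def reduce_depth(value14, dither=False):
    """Map a 14-bit code (0..16383) to 8 bits (0..255)."""
    x = value14 / 16383 * 255
    if dither:
        x += random.uniform(-0.5, 0.5)  # simplified rectangular dither
    return max(0, min(255, round(x)))

# Without dither, a slow 14-bit ramp collapses into flat 8-bit bands;
# with dither, the band edges are scattered and tonal banding is masked.
plain = [reduce_depth(v) for v in range(1000, 1100)]
dithered = [reduce_depth(v, dither=True) for v in range(1000, 1100)]
```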

    Dave

  16. #76
    ajohnw's Avatar
    Join Date
    Aug 2012
    Location
    S, B'ham UK
    Posts
    3,337
    Real Name
    John

    Re: Gamma revisited

    Quote Originally Posted by dje View Post
    Simon, this is the area I'm still a bit fuzzy about. Part of the problem is that I'm not sure how the reduction in bit depth is done. Another factor in this is the concept of dithering, which can reduce the effects of tonal banding when bit depth reduction is performed; however, I think this is quite a different concept from the re-distribution of sample points.

    Dave
    The tone curve drives the conversion to the 8-bit jpeg and gives the initial view in a PP package. If you read my earlier post you should have noted that it's difficult to relate this to bit counts in the raw file.

    Here is one of the sets of jpeg mappings in a particular camera, presented in a different fashion from the options usually found in a PP package: EVs / stops horizontal, bit counts vertical.

    [attached image: jpeg tone curves - EV / stops horizontal, bit counts vertical]

    What has to be noticed about these curves, just like the ones in PP packages that can be manually adjusted, is that the slope / steepness of the curve in a region indicates the degree of contrast in that region.

    Another setting, same camera - one of the older types that usually include landscape as well, and are offered in many raw-conversion PP packages.

    [attached image: tone curves for another setting, same camera]

    The EV scale has a zero because it can also be used to measure light levels in an odd way. Sometimes you might see in a camera specification that it can meter or focus at so many negative EV.

    John
    -

  17. #77

    Re: Gamma revisited

    Wow! A lot of ground to cover here.

    Quote Originally Posted by ajohnw View Post
    The only comment I have seen on image file format encoding refers to a gamma of about 0.45 for encoding, decoded with one of around 2.2. There seem to be two ways the power law is usually expressed: as a reciprocal and as a decimal fraction, and 1/0.45 happens to equal 2.2222. When I see things like that I want to see more info than the wiki gives, as I don't fully understand it either.
    That's consistent with my understanding. The sRGB standard doesn't use an exact gamma function, but approximates to a gamma of 2.2 - but the encoding function is f(x) = x ^ (1/2.2) (not x ^ 2.2).

    CRTs have an output function which approximates to f(x) = x ^ 2.2

    As a result, the two roughly cancel out, leaving an overall linear relationship.
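    That cancellation is easy to check numerically (a sketch using the pure 2.2 power law; the real sRGB function adds a small linear toe near black):

```python
def encode(x):      # file/camera side: gamma compression
    return x ** (1 / 2.2)

def decode(x):      # CRT (or CRT-emulating monitor) side: gamma expansion
    return x ** 2.2

# The round trip is the identity, so the end-to-end system stays linear.
for x in (0.0, 0.18, 0.5, 1.0):
    assert abs(decode(encode(x)) - x) < 1e-12
```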

    Quote Originally Posted by ajohnw View Post
    Then adding a quote from the wiki: "Output to CRT-based television receivers and monitors does not usually require further gamma correction, since the standard video signals that are transmitted or stored in image files incorporate gamma compression that provides a pleasant image after the gamma expansion of the CRT (it is not the exact inverse). For television signals, the actual gamma values are defined by the video standards (NTSC, PAL or SECAM), and are always fixed and well known values."

    And also go back to the point about what happened when I moved on from a CRT to a TFT screen. The screen must have had the same characteristics as the CRT monitor; at the time I was also using a CRT monitor on a daily basis at work. Same with our early flat-panel TV, which we bought when there were still plenty of CRT ones about. At that time colour rendition was very similar to PAL. The next set was what I would call "chocolateboxy", my own word, probably relating to vibrance and contrast.
    When TFT screens were introduced, they needed to be compatible with CRT displays, and so emulate a 2.2 gamma. This doesn't matter if one is using colour management, but of course most people don't.

    Quote Originally Posted by ajohnw View Post
    There is also another quote from the wiki that is partially of interest: "Gamma encoding of floating-point images is not required (and may be counterproductive), because the floating-point format already provides a piecewise linear approximation of a logarithmic curve.[3] "
    Right, because a floating point representation has the same relative precision at any order of magnitude.
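    A quick illustration of that (`math.ulp` gives the gap from a value to the next representable double):

```python
import math

# The gap between adjacent floating point values scales with the value
# itself, so the *relative* step stays roughly constant across orders of
# magnitude - a built-in log-like spacing, which is why gamma encoding
# buys floating point formats little.
for x in (1e-3, 1.0, 1e3):
    print(f"x = {x:g}: relative step ~ {math.ulp(x) / x:.3g}")
```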

    Quote Originally Posted by ajohnw View Post
    What I was trying to point out is that en route to the screen several things go on apart from gamma manipulations. Some people refer to some aspects of these processes as gamma manipulation. The usual tone curve establishes the mid tones, and changing that is sometimes referred to as changing gamma.

    The wiki makes another interesting comment: "Although gamma encoding was developed originally to compensate for the input–output characteristic of cathode ray tube (CRT) displays, that is not its main purpose or advantage in modern systems"

    Like hell it was, in relation to TVs. It was also used to give pleasing images. Analogue systems do not have the bit count problem.
    My understanding is that there were two reasons for gamma encoding with CRT displays: mainly to compensate for the non-linear characteristics of CRTs, but it had the useful side effect of improving the S/N ratio. In modern systems it has (to my understanding) similar purposes. Modern monitors still usually emulate the non-linear characteristics of CRTs, and the side effect of improving the S/N ratio can be expressed as non-linear encoding to reduce quantisation noise.
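    The quantisation-noise side of this can be shown with a toy comparison (my own sketch, using the pure 2.2 power law): quantise a deep shadow tone to 8 bits directly, versus gamma-encoding first and decoding after.

```python
def quantize(x, levels=256):
    """Round x in [0, 1] to the nearest of `levels` evenly spaced codes."""
    return round(x * (levels - 1)) / (levels - 1)

def gamma_encode(x):
    return x ** (1 / 2.2)

def gamma_decode(x):
    return x ** 2.2

x = 0.002  # a deep shadow tone on a 0..1 linear scale
linear_err = abs(quantize(x) - x)
gamma_err = abs(gamma_decode(quantize(gamma_encode(x))) - x)
# gamma_err comes out far smaller: the encode step spreads the shadow
# tones over many more of the 256 available codes before rounding.
```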

    Quote Originally Posted by ajohnw View Post

    The only sensible comments I have seen on viewing conditions are that in one case the surroundings must be darker than the screen, and in another they should match its visual mid-tone grey. There are then various standards for image processing of one sort or another. This area is trying to ensure sensible eye adaptation.

    The tone mapping maps all or some of the raw bit depth into the 8 bit space.
    I'm not quite sure what this means. Tone mapping is not necessarily anything to do with bit depth, or the reduction of bit depth from 12 or 14 bit raw data to 8 bit data. I guess they could be done at the same time.

    Quote Originally Posted by ajohnw View Post
    A difficult thing to tie down. The googling I suggested will show the number of EVs in jpegs. 9 EV, maybe with a lot of compression of the lowlights, isn't uncommon. What can't be tied down is just how EVs relate to bit counts in the raw file. There is a tendency for people to assume that a 14-bit raw file has a dynamic range of 14 EV or stops. Given the basic nature of an A/D converter that isn't possible. Just what the A/D count is at an exposure level of 1 EV above some black level is questionable. All that can be said is that to be of any use it must have some gradation - a count of more than 1, and the higher the better. Nikon probably went 14-bit to give D-Lighting something to work on. It is a partial cure for the problem with linear sensors: they intensify shadows in evenly lit situations - say a typical landscape where we only see weak shadows that are hardly noticeable.

    Given the internal circuitry of sensors that I have actually seen, the values due to noise and leakage obtained from the unexposed pixels are subtracted from the exposed pixel values. These may be in raw files too - I don't know. The manufacturers might keep them just for their jpegs, or maybe not use them at all. I'd be surprised if they didn't make any use of them. My camera's EXIF data shows comments like "hi gain up". CMOS sensors can generally have amplifiers attached to a pixel's output, or maybe it's pre-A/D.

    A/D converters can have a number of characteristics. If pixels are non-linear this could be used to compensate. Nothing is ever perfect and pixels won't be any different. I was just pointing out that they might do something like that to solve a problem. One day I might find a full data sheet on a DSLR sensor, but somehow I doubt it.

    Curves are applied in PP packages as well - more or less the same as the ones the camera applies to jpegs. Adobe, for instance, provides mimics of the in-camera curves as well as Adobe Standard. If you want to see what happens without them, find dcraw and look at its options. It would have to be run from a console (DOS prompt, terminal, etc.).

    John
    -
    I follow that, but IMHO we're confusing several things. Tone curves are applied for many reasons, and gamma curves have many possible effects. In my opinion, gamma curves are not really about improving the appearance of the image, but I guess it's a matter of definition, and we can describe the same things in different ways.

  18. #78
    ajohnw's Avatar
    Join Date
    Aug 2012
    Location
    S, B'ham UK
    Posts
    3,337
    Real Name
    John

    Re: Gamma revisited

    Tone mapping has a lot to do with it all, Simon - it means mapping one tone to another. Unfortunately it's also used to describe certain PP operations, one of which is fairly popular, though the same degree of dynamic range can be compressed into an image without the usual HDR look.

    You also need to note the comment about things not being an exact inverse, so gamma application also involves further tone mapping that isn't immediately apparent.

    On 10-, 12-, 14- or even 16-bit to 8-bit conversion I can't help any more than I have with the curves I posted; that is how it's done. In one raw converter I use, the curve is a simple straight line prior to adjustment, but that is unusual. Actually it's two of them, but that is a side issue.

    John
    -

  19. #79

    Re: Gamma revisited

    Quote Originally Posted by ajohnw View Post
    Tone mapping has a lot to do with it all, Simon - it means mapping one tone to another. Unfortunately it's also used to describe certain PP operations, one of which is fairly popular, though the same degree of dynamic range can be compressed into an image without the usual HDR look.

    You also need to note the comment about things not being an exact inverse, so gamma application also involves further tone mapping that isn't immediately apparent.

    On 10-, 12-, 14- or even 16-bit to 8-bit conversion I can't help any more than I have with the curves I posted; that is how it's done. In one raw converter I use, the curve is a simple straight line prior to adjustment, but that is unusual. Actually it's two of them, but that is a side issue.

    John
    -
    If you mean that tone mapping is to do with bit depth, then to my understanding not directly, but maybe we're talking at cross purposes. I think we've covered all the ground, so I'll leave it at that.

  20. #80

    Join Date
    May 2012
    Location
    Southern California
    Posts
    79
    Real Name
    Mike

    Re: Gamma revisited

    Quote Originally Posted by dje View Post
    Mike, in my opinion, this is the key point in the issue of re-distribution of samples towards the lower end of the range (and the subsequent argument about better coding efficiency). With your indulgence, I'll include a few notes of my own.
    ....snip..
    Dave,
    Many thanks for your note (post #69). It is very informative, but I think it goes a bit beyond what I have so far digested.
    I was particularly interested in your case B. I am guessing that this pathway describes what occurs when writing a TIFF or a JPEG file (or what occurs in the output of a raw converter). My understanding is that the linear digital data (pretty much just as it comes off the A/D converter) is written directly to the raw file without any non-linear manipulation. If this is not the case then my logic is in big trouble. I have concentrated my attention on what happens to the raw data up to (but not including) the point where it is used to create a viewable image. May I ask how you generated the data depicted in the graph?

    Simon & John,
    Very much appreciate your input to the thread. My grasp of the subject is still insufficient to make any useful comments or even formulate some intelligent questions. That will come, I hope.

