I'm just showing you and the others here on the forum why and where you're wrong.
Why get personal when you run out of arguments? The name of the thread deals with wide gamut editing in a narrow gamut screen, not the other way around. And pay special attention to the word "editing". That's the subject. That statement shows us that, sadly, you didn't get it.
To be exact: an image doesn't clip, individual pixels do. And if you take 1 as the minimum, the maximum is 255, not 254. And to be more precise, a maximum value can be a true value rather than a clipped one.
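To make that distinction concrete, here is a minimal NumPy sketch (random data stands in for a real 8-bit photo) that counts limit-level samples instead of declaring a whole image clipped:

Code:
import numpy as np

# Random data standing in for a real 8-bit RGB image.
img = np.random.randint(0, 256, size=(400, 600, 3), dtype=np.uint8)

for lo, hi in [(0, 255), (1, 254)]:          # true limits vs. warning levels
    shadows    = int((img <= lo).sum())      # samples at or below the low level
    highlights = int((img >= hi).sum())      # samples at or above the high level
    pct = 100 * (shadows + highlights) / img.size
    print(f"levels {lo}/{hi}: {shadows} shadow + {highlights} highlight "
          f"samples ({pct:.2f}% of all samples)")

# Note: a sample sitting exactly at 255 may be a true value rather than a
# clipped one, so counting limit-level samples gives a warning, not proof.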
However, please do try that and post your results. Would you like the raw file?
Er, what is a "nearly/partly clipping image"?
In my world, images are either clipped or they are not. There is no "partly", sorry.
Ted, I'm just trying to show you that your example isn't what you want it to be. Please don't blame me.
George
OK
Quote: "Why get personal when you run out of arguments? The name of the thread deals with wide gamut editing in a narrow gamut screen, not the other way around. And pay special attention to the word 'editing'. That's the subject."

I'm getting no more personal than your good self. You have told me several times in the past that I "don't understand", which is the same as "you don't get it". In future I will say "you don't understand" so as to be on your level.
Quote: "To be exact: an image doesn't clip, individual pixels do. And if you take 1 as the minimum, the maximum is 255, not 254. And to be more precise, a maximum value can be a true value rather than a clipped one."

I am well aware of the significance of 1 and 254 as warning levels. That is my choice for normal editing. I will be happy to set 0 and 255 if that would please you. I could also turn the warnings off if you would like.
Quote: "Ted, I'm just trying to show you that your example isn't what you want it to be. Please don't blame me."

Fine, George. Not 'blaming' you at all. And now for proof: you need to offer proof that my example is wrong. As we say over here, "sayin' it don't make it so", a favorite Americanism of mine, which is why I post references, numbers, formulae and illustrations. You can't just tell someone "your example isn't what you want it to be" without offering some proof, and please don't refer me to your previous posts!
Can you explain to me exactly how my example should have been carried out and presented? (Step by step, not vague, general suggestions.)
If you don't respond satisfactorily to the above question, your assertion that we are all wrong fails, you lose credibility in this forum, our discussion will be over, simple as that.
I have the raw file, the converter which has simple adjustments, a choice of working space and a choice of output spaces and file types. My true 8-bit monitor is 98% sRGB. What more do we need to prove how right you are and how wrong I am?
I await your instructions . .
I am on your side, by the way. In my converter, I edit in sRGB working space, not ProPhoto and not Adobe RGB (1998). With a proper response to this post, we might get this issue resolved, one way or the other . .
Quote: "With a proper response to this post, we might get this issue resolved, one way or the other . ."

Ted, we did resolve the main question: whether editing in a wider color space is superior. Manfred posted a clear explanation, and Dave posted empirical evidence confirming it.
Seems to me that it's time to move on.
Dan
Ted - I suspect that you are looking for a quantitative answer rather than a qualitative one. I suspect there is one, but given the proprietary nature of much of the software and hardware in use, someone would have to reverse-engineer these processes to come up with that type of information. Even then, I would suggest that some of the assumptions that would have to be made could invalidate much of the analysis. I have definitely not found anything quantitative when researching this subject, and that does not surprise me. Most photographers are not technical people and will go with what they see rather than what the math tells them, so everything I have found on this subject is qualitative in nature, much like Dave's posting. Obviously the people who designed the Serif Affinity software or Adobe Lightroom do have this information, but they are not sharing it.
Let me try one more approach: data accuracy. In general, when we work we try to use as much data as we can and carry that accuracy as far as we can before rounding off or simplifying. This means that 16-bit data should give us finer resolution than 8-bit data, just as a wider colour space like ProPhoto RGB, which covers some 90% of the CIE Lab 1931 colour space, should give us better colours than sRGB, which covers about one third of CIE Lab 1931's colours. From that standpoint, working in ProPhoto should introduce fewer inaccuracies in the calculations than working in sRGB, and as long as the conversions from one colour space to the other are accurate and "clean", the time to round off would seem to be at the end of the editing process, if it is required at all.
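A toy numerical illustration of that rounding-at-the-end point, with invented adjustment curves standing in for real edits:

Code:
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(100_000)                      # stand-in for normalized pixel data

def q8(v):                                   # quantize to 8-bit and back to [0, 1]
    return np.round(np.clip(v, 0, 1) * 255) / 255

ops = [lambda v: v * 0.7 + 0.1,              # a mild exposure/offset tweak
       lambda v: (v - 0.5) * 1.4 + 0.5,      # a contrast boost
       lambda v: np.clip(v, 0, 1) ** 0.9]    # a gamma-style adjustment

ref = x
for op in ops:
    ref = op(ref)
ref = q8(ref)                                # round once, at the very end

early = x
for op in ops:
    early = q8(op(early))                    # round after every step instead

print("max error from early rounding:", float(np.max(np.abs(early - ref))))

The exact figures are meaningless; the point is that the early-rounding path always ends up some distance from the full-precision one.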
Let me throw another example at you: if you are creating a B&W image from the colour data your camera has captured, when should you "throw away" the colour data? In my view this is a parallel situation. I will apply a B&W conversion early in the process, using a B&W conversion layer, but keep the colour data so that I can manipulate the individual colour channels to get the look that I want. I will only do the "hard" B&W conversion once I am done editing and am ready to lock in my final image.
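As a sketch of that workflow in code (the mixing weights are invented; the point is that the colour array survives untouched until you commit):

Code:
import numpy as np

def bw_mix(rgb, weights=(0.5, 0.3, 0.2)):
    """Channel-mixer B&W conversion: mix colour down to grey for display
    while the underlying colour data stays available for re-mixing."""
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()                          # normalize so mid-grey stays put
    return np.clip(rgb @ w, 0.0, 1.0)

rgb = np.array([[0.8, 0.2, 0.1],             # a red flower
                [0.2, 0.6, 0.2]])            # green foliage

print(bw_mix(rgb))                                 # default rendering
print(bw_mix(rgb, weights=(0.9, 0.05, 0.05)))      # a "red filter" look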
Post #55. I don't know what to add.
It looks like quite an impressive example, but is it? Is this RawTherapee? I'm just trying to figure out what's happening.
You start with a picture in sRGB that, based on the histogram, contains some clipping. The pixels have values defined relative to that color space.
Then you change to ProPhoto. The pixels get different values; to show the same image on your screen, the values are mostly lower in that wider color space, and you can see the histogram move to the left. But the warnings are still 1 and 254, so no clipping.
It would be better to start with an image in ProPhoto in which some pixels are at or near clipping, and then convert to sRGB.
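For anyone who wants to see why the histogram slides left, the linear algebra can be sketched from the published primaries alone. Note that this toy version deliberately skips the D65-to-D50 chromatic adaptation a real converter would apply:

Code:
import numpy as np

def rgb_to_xyz_matrix(rx, ry, gx, gy, bx, by, wx, wy):
    """Build a linear RGB -> XYZ matrix from primary and white chromaticities."""
    def col(x, y):                            # xyY (with Y = 1) -> XYZ column
        return np.array([x / y, 1.0, (1 - x - y) / y])
    P = np.column_stack([col(rx, ry), col(gx, gy), col(bx, by)])
    S = np.linalg.solve(P, col(wx, wy))       # scale primaries so R=G=B=1 is white
    return P * S

M_srgb = rgb_to_xyz_matrix(0.64, 0.33, 0.30, 0.60, 0.15, 0.06,
                           0.3127, 0.3290)                       # D65 white
M_pp   = rgb_to_xyz_matrix(0.7347, 0.2653, 0.1596, 0.8404, 0.0366, 0.0001,
                           0.3457, 0.3585)                       # D50 white

srgb_red = np.array([1.0, 0.0, 0.0])          # saturated sRGB red, linear light
pp_red = np.linalg.solve(M_pp, M_srgb @ srgb_red)
print(pp_red)   # roughly [0.5, 0.1, 0.02]: the same colour sits much lower
                # on ProPhoto's wider axes, so the histogram shifts left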
Quote: "If you don't respond satisfactorily to the above question, your assertion that we are all wrong fails, you lose credibility in this forum, our discussion will be over, simple as that."

What sort of nonsense is this?
Quote: "I have the raw file, the converter which has simple adjustments, a choice of working space and a choice of output spaces and file types. My true 8-bit monitor is 98% sRGB. What more do we need to prove how right you are and how wrong I am? I await your instructions . . I am on your side, by the way. In my converter, I edit in sRGB working space, not ProPhoto and not Adobe RGB (1998). With a proper response to this post, we might get this issue resolved, one way or the other . ."

You can't be on my side; I don't have a preference. I just want to know. I don't know what Capture NX2 is using; I'll have to look into it.
I do believe that what Manfred is trying to tell us is true, and I see what he means. We just have two different approaches to the subject. So we continue learning.
George
Quote: "I suspect there is one, but given the proprietary nature of much of the software and hardware in use, someone would have to reverse-engineer these processes to come up with that type of information. [...]"

Which is why I said in post #43, "Wouldn't it be nice if the bone of contention could be quantified and the differences in accuracy measured, like in terms of CIE delta-E?".

As to the rest, there's too much to respond to effectively, so I'll just fold and cede the Last Word to your good self.
I cannot recall where I read this, but the essence of the source material was that a color space within a much larger color space may not remap (if that's the right word) correctly to the smaller color space in an output image. So the suggestion was to use ProPhoto for editing Adobe RGB, Adobe RGB for editing sRGB, and no RGB mode at all when editing CMYK. Some of this seems to fly in the face of mathematics... I assume there is a rational algorithm used in most common post-production software. I suspect this is mere speculation in print, and it seems to be. Lightroom, if I am correct, uses a 32-bit color space for sRGB and certainly does a good job of processing colors.
Lightroom does not edit in sRGB; it edits in a variant of ProPhoto. This is not a user option.
I think the bottom line is that editing in a larger space is less likely to create unwanted artifacts than editing in a smaller one. I can't see any benefit whatever to using Adobe RGB rather than ProPhoto RGB for editing images that will eventually be mapped into the sRGB space; it would leave more opportunities for the problems that Manfred described. This assumes that the software has a reasonable mapping to sRGB, which the software I use (with one exception) certainly does. The fact that LR only works in ProPhoto is consistent with this.
The sole exception in my set of software is my stacking software, Zerene, which does not remap images when rendering to the screen. It's not a problem, as none of the editing it can do alters colors. When I finish an image in Zerene and reimport it as a 16-bit ProPhoto TIFF into LR or Photoshop, it looks just as it should.
There is a lot of nonsense posted on the internet and I suspect you have stumbled across some of it. Even the self-proclaimed experts sometimes get it wrong, and one of the problems when researching on the internet is separating the good information from the suspect information.
Paul is correct in his statement in #57. So long as the narrower colour space is completely encapsulated in the wider one, conversion from one to the other is feasible.
In fact, as long as we are dealing with colour spaces represented by three variables, such as the three we have written about a lot in this thread (sRGB, Adobe RGB and ProPhoto RGB), the conversion is easy. Even going to the Lab colour space, which does not use a red, green and blue model but rather a lightness channel, an a channel and a b channel, the mapping remains easy. CMYK, on the other hand, presents more challenges, as I point out in #58.
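A sketch of the forward Lab transform (the standard CIE formulas, with the D50 reference white) shows why these three-channel mappings are easy: the transform is closed-form and exactly invertible, with no inks or dot gain involved:

Code:
import numpy as np

def xyz_to_lab(xyz, white=(0.96422, 1.0, 0.82521)):   # D50 reference white
    """CIE XYZ -> CIE Lab, per the standard formulas."""
    t = np.asarray(xyz, dtype=np.float64) / np.asarray(white)
    d = 6 / 29
    f = np.where(t > d ** 3, np.cbrt(t), t / (3 * d * d) + 4 / 29)
    return 116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])

# 18% grey under the reference white: L* is about 49.5 and a*, b* are 0.
print(xyz_to_lab((0.18 * 0.96422, 0.18, 0.18 * 0.82521)))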
If you look at the following diagram and examine the SWOP curve, you will see the problem that Paul alludes to. SWOP stands for Specifications for Web Offset Publications, which is a commonly used CMYK colour space; of the colour spaces in the diagram, only ProPhoto (and of course CIE Lab 1931) have been discussed here.
As for your comments on 32-bit: so far as I know, while some software supports 32-bit operations, there is no practical need for this level of accuracy in colour photography, where the best cameras produce 14-bit output and even the medium format ones that claim 16-bit are really 14-bit with a pair of leading zeros. Working in 16-bit is really where we want to work.
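The 14-bit-with-leading-zeros point is easy to see in code:

Code:
adc_max = (1 << 14) - 1                      # largest 14-bit ADC code: 16383
print(f"{adc_max:016b}")                     # 0011111111111111 in a 16-bit word
print(f"{(1 << 16) - 1:016b}")               # 1111111111111111, a true 16-bit max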
I guess that answers my earlier question: "Ed, do you have a link to where you read that?"
Perhaps you read something like this?
https://www.dpreview.com/forums/post/15843921
Andrew Rodney (@digidog) wrote that gamut clipping (colorimetric intent) or gamut compression (perceptual intent) is inevitable, provided that editing has taken image content outside the target gamut, of course. So it does make sense that going from ProPhoto to sRGB is "worse" than the smaller step from Adobe RGB (1998) to sRGB.
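To make clipping versus compression tangible, here is a deliberately crude sketch. Real rendering intents are far more sophisticated, and the grey pivot and test colour are invented:

Code:
import numpy as np

def clip_to_gamut(rgb):
    """Colorimetric-style handling: out-of-range channels are simply cut off."""
    return np.clip(rgb, 0.0, 1.0)

def compress_to_gamut(rgb, grey=0.5):
    """Toy 'perceptual' handling: scale towards mid-grey just enough that
    every channel fits back into [0, 1], keeping the colour's direction
    from grey."""
    rgb = np.asarray(rgb, dtype=np.float64)
    span = np.max(np.abs(rgb - grey))        # furthest channel from the pivot
    k = 1.0 if span <= 0.5 else 0.5 / span   # shrink factor (1.0 = no change)
    return grey + (rgb - grey) * k

hot_red = np.array([1.3, 0.2, 0.1])          # outside the destination gamut
print(clip_to_gamut(hot_red))                # [1.0  0.2  0.1]: ratios distorted
print(compress_to_gamut(hot_red))            # [1.0  0.3125  0.25]: direction kept

The bigger the gamut mismatch, the more colours need one of these treatments, which is the sense in which ProPhoto to sRGB is "worse" than Adobe RGB to sRGB.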
We could, in fact, use the property of color purity to quantify "worse" but, apparently, quantification is frowned on in this thread, eh, Manfred?
For those few interested, see under the section headed "Plotting CIE values" here:
http://photonicswiki.org/index.php?t...d_Chromaticity
In fact, moving any color, e.g. from outside Adobe RGB (1998) to inside or onto the edge of it (or of any smaller gamut), produces a false color with less chromaticity. Please remember that "color" is not just hue, even though hue is part of color. For example, the HSB color model also includes saturation and brightness.
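A quick way to see that purity loss, using nothing but the standard-library colorsys module and an invented out-of-gamut triple:

Code:
import colorsys

out_of_gamut = (1.2, 0.3, 0.1)               # a red beyond the target space
clipped      = (1.0, 0.3, 0.1)               # the same colour after clipping

for label, rgb in [("original", out_of_gamut), ("clipped ", clipped)]:
    h, s, v = colorsys.rgb_to_hsv(*rgb)      # HSB is HSV in colorsys terms
    print(f"{label}: hue={h:.3f}  sat={s:.3f}  bright={v:.3f}")

# The hue barely moves, but saturation and brightness both drop: the
# clipped colour is a paler stand-in for the original.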
When all is said and done, the resulting color inaccuracy is usually acceptable - provided that it looks good on your Galaxy portable or your print or your screen.
HTH
Hi Manfred, is there a reference for the bolded bit, making it clear that those 'medium format ones that claim 16-bit' do not have a 16-bit ADC inside?
Only talking about the ADC; not how the raw data is written to the card (16-bit in my case).
"we" being "most people", I imagine . .Working in 16-bit is really where we want to work.
I am utterly content with my post-processor's native 32-bit floating-point working. I can do a horrendous amount of parametric editing, should I choose so to do, without fear of "dropped bits" (quantization) affecting the work.
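The dropped-bits concern is easy to demonstrate: compare fifty exposure up/down round trips in 8-bit integers against the same operations carried in float:

Code:
import numpy as np

orig = np.arange(256, dtype=np.float64)      # a clean 0..255 gradient
x8   = orig.astype(np.uint8)
x32  = (orig / 255.0).astype(np.float32)     # the same data in 32-bit float

for _ in range(50):                          # fifty up/down exposure pairs
    x8 = (x8 * 1.1).clip(0, 255).round().astype(np.uint8)
    x8 = (x8 / 1.1).clip(0, 255).round().astype(np.uint8)
    x32 = (x32 * np.float32(1.1)) / np.float32(1.1)   # mathematically a no-op

print("distinct 8-bit levels left:", np.unique(x8).size)   # banding sets in
print("max 8-bit error:", float(np.max(np.abs(x8 - orig))))
print("max float error (in 8-bit units):",
      float(np.max(np.abs(x32 * 255 - orig))))              # tiny by comparison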
Quote: "As for your comments on 32-bit: so far as I know, while some software supports 32-bit operations, there is no practical need for this level of accuracy in colour photography, where the best cameras produce 14-bit output [...]"

In practical terms, I would guess this is true, but in theoretical terms it needn't be. One often wants greater precision in processing than in input or output. For example, most statistical software operates in double precision, even though the input values (and therefore the output values) are far less precise than that, in order to avoid compounding the effects of rounding. It's a constant battle in teaching. For no particular reason, many software packages by default round their computational results to five digits to the right of the decimal (worse, regardless of the number of digits to the left), even when the input data aren't precise enough to warrant more than two significant digits. Students happily dump this stuff into tables, even though most of the digits are meaningless and the combined effect is to make the table extremely hard to read.
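Dan's compounding point in miniature: a long strictly sequential sum drifts in single precision, but not when extra precision is carried through the calculation:

Code:
import numpy as np

steps = np.full(10_000_000, 0.1, dtype=np.float32)   # ten million small additions

running32 = np.cumsum(steps)[-1]             # sequential float32 accumulation
carried64 = steps.astype(np.float64).sum()   # same data, extra precision carried

print(float(running32))                      # noticeably off from 1,000,000
print(float(carried64))                      # 1,000,000 to within float64 rounding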
From what I have read, this is indeed the case for the previous generation of Phase One / Leaf and Hasselblad. The others (Leica, Pentax and FujiFilm), I believe, use the same components as in their other cameras, which are 14-bit. Again, there are no tear-down reports on these higher-end cameras, so I cannot pretend that what I have read is up to date.
Manfred, please pardon further nit-picking:
CIELAB is 1976, not 1931, for what that's worth.
https://en.wikipedia.org/wiki/Lab_color_space
The diagram is interesting per se because it does not show ProPhoto's D50 white point - which could affect discussion to date.
So going from ProPhoto to another color space isn't just about primaries and the straight lines connecting them. Hopefully most of us can see why that is.
For example, drawing a line from the pure color at 520nm to D65 and then again from 520nm to D50 produces gamut border crossings at different colors on all the gamuts shown, even ProPhoto to a small extent.
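That claim can be checked numerically: the sketch below intersects each locus-to-white-point line with the sRGB triangle, using the published chromaticities and plain 2-D segment intersection:

Code:
import numpy as np

LOCUS_520 = np.array([0.0743, 0.8338])       # CIE 1931 xy of monochromatic 520 nm
D65 = np.array([0.3127, 0.3290])
D50 = np.array([0.3457, 0.3585])
SRGB = [np.array(p) for p in [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]]

def first_border_crossing(start, end, triangle):
    """Walk from start towards end and return the first point where the
    segment crosses one of the triangle's edges (None if it never does)."""
    d = end - start
    hits = []
    for i in range(3):
        a, b = triangle[i], triangle[(i + 1) % 3]
        e = b - a
        denom = d[0] * e[1] - d[1] * e[0]                 # 2-D cross product
        if abs(denom) < 1e-12:
            continue                                      # parallel edge
        t = ((a[0] - start[0]) * e[1] - (a[1] - start[1]) * e[0]) / denom
        u = ((a[0] - start[0]) * d[1] - (a[1] - start[1]) * d[0]) / denom
        if 0 <= t <= 1 and 0 <= u <= 1:
            hits.append(t)
    return None if not hits else start + min(hits) * d

for name, wp in [("to D65:", D65), ("to D50:", D50)]:
    print(name, first_border_crossing(LOCUS_520, wp, SRGB))
# The two lines cross the sRGB border at different xy points, so the
# "nearest in-gamut colour" depends on which white point you aim at.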
Ted - I find your nit-picking to be amusing. I've been accused of the same at times.
I should have said CIE 1931 colour space rather than CIE Lab.
That being said, the white point has always been an issue of discussion. D50 is commonly used in the printing (offset press) industry, partially because those images are unlikely to be looked at under D65 "daylight" lighting conditions. I could go a step further and suggest that everyone should be running their sRGB screens at an output level of 80 candela per square metre, as that is what the sRGB standard is based on. I run mine at 100 candela per square metre, so I'm probably closer than most.
https://www.w3.org/Graphics/Color/srgb
Dan - Agreed. Rounding issues aside, the accuracy of any calculation is determined by the element that has the lowest precision, i.e. the fewest significant digits. With software generally running in 8-bit, 16-bit, 32-bit and 64-bit modes, it is easy for software designers to build in extra precision "just in case" it might be required.
If I have a measurement that is accurate to 0.001", another to 0.01" and a third one to 0.1" that are all used in a calculation, the highest degree of accuracy I can get in my final result is 0.1" as that is the least precise of all my measurements. For example, giving a final answer of 6.3" is meaningful. Giving an answer of 6.296" is not and is in fact misleading as it suggests a higher degree of accuracy than there really is.
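In code, that amounts to computing at full precision and only rounding the reported figure to the coarsest measurement's resolution (the measurements here are invented):

Code:
measurements = [2.375, 3.91, 0.1]    # known to 0.001", 0.01" and 0.1" respectively

raw_total = sum(measurements)        # 6.385: the computer happily keeps it all
coarsest  = 0.1                      # resolution of the least precise input

reported = round(round(raw_total / coarsest) * coarsest, 1)
print(raw_total, "->", reported)     # report 6.4, not 6.385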
When it comes to rounding, in general this evens out by itself, as the number of calculations that get rounded up works out to be about the same as the number that get rounded down, so I tend not to worry about that aspect too much.
That being said, when using a computer to do the calculations, we don't control the precision of the calculations, which will frequently exceed what is required.