yeah, the first part is usually the one a converter engine like LibRaw/DCraw should take care of; otherwise I would end up designing my own raw converter if I jumped too deep into the details here, except of course for adjusting a few parameters such as gamma, WB, output image format, etc.
in the workflow I am trying to establish I don't think I need to go beyond that in part 1.
part 2 is actually my main focus: what I am trying to achieve is color rendition as accurate to reality as possible, and yeah, I would make the effort for that last 5% jump even if it took a lot of time. Unfortunately, with my current workflow using two separate pieces of software (LibRaw/DCraw + LittleCMS), I am not even close to that.
Now, coming to your question about how much difference I see between the "color corrected" images my workflow produces and reality: so far it is really too much, just judging by the naked eye (I also follow the DeltaE metric, by the way).
Why am I doing that?
well, as a CS engineer, I need it for some software development actually.
------------------------------------------------------------------------------------------------
following up on the issue:
so basically I see the problem of correcting an already-corrected image. However, what I still don't understand is: how am I supposed to deal with LibRaw/DCraw so that I can demosaic a raw image and then assign my custom ICC profile to it later on, given that it doesn't seem to support passing a profile argument internally other than the hard-coded ones?
is the best practice to output the conversion into a wide-gamut color space (something like ProPhoto maybe) and then re-map that gamut to my custom ICC profile's gamut?
I am sure there must be a way because, to my understanding and speculation (since LibRaw/DCraw is part of RawTherapee), the same two-step processing I am trying to achieve in my workflow must also happen inside RawTherapee: there must be a step where RawTherapee uses DCraw first to decode the raw data, and then a second step where it assigns an ICC profile to the decoded data using... I am not quite sure what.
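just to make the two-step idea concrete, this is roughly what I imagine in code (an untested sketch only, assuming the standard LibRaw and LittleCMS 2 APIs; all file and profile names are placeholders):

    #include <libraw/libraw.h>
    #include <lcms2.h>

    int main()
    {
        // Step 1: decode/demosaic the raw file with LibRaw into a 16-bit RGB buffer
        LibRaw rp;
        rp.imgdata.params.output_bps   = 16;   // 16-bit output
        rp.imgdata.params.output_color = 4;    // ProPhoto primaries as a wide working space
        rp.imgdata.params.gamm[0] = 1.0;       // linear gamma
        rp.imgdata.params.gamm[1] = 1.0;
        rp.open_file("photo.RW2");             // placeholder file name
        rp.unpack();
        rp.dcraw_process();
        int err = 0;
        libraw_processed_image_t *img = rp.dcraw_make_mem_image(&err);

        // Step 2: hand the decoded buffer to LittleCMS and convert it from the wide
        // working space into the custom profile. The input profile must describe
        // whatever encoding LibRaw actually produced above (placeholder here).
        cmsHPROFILE hIn  = cmsOpenProfileFromFile("ProPhotoLinear.icc", "r");  // placeholder
        cmsHPROFILE hOut = cmsOpenProfileFromFile("my_custom.icc", "r");       // placeholder
        cmsHTRANSFORM xf = cmsCreateTransform(hIn,  TYPE_RGB_16,
                                              hOut, TYPE_RGB_16,
                                              INTENT_RELATIVE_COLORIMETRIC, 0);
        cmsDoTransform(xf, img->data, img->data,
                       (cmsUInt32Number)(img->width * img->height));

        // ... write img->data out as a TIFF here ...

        cmsDeleteTransform(xf);
        cmsCloseProfile(hIn);
        cmsCloseProfile(hOut);
        LibRaw::dcraw_clear_mem(img);
        return 0;
    }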
Thanks a lot for your efforts, though I didn't quite understand the difference between the photos you've posted. I mean, what were the conversion parameters in each? Could you maybe tell?
well, satisfactory... hmm, this is the point: I am not really trying to just make pleasant, good-looking photos, but rather I am more into color accuracy and authenticity, so I would appreciate any advice/ideas in that direction, specifically what to follow, or just to hear what the common practice among professionals is.
oh, by the way, regarding relying on the default parameters of LibRaw/DCraw: in my opinion they are not optimal in terms of accuracy; rather, they are usually tuned for pleasing results and speed (i.e. more saturated color, more compression, less time consumption).
It is impossible to answer the question because the parameters RawDigger uses for LibRaw are buried in the code. I assume that the titles of each image were understood.
I determine color accuracy based on the L*a*b* model. I shoot the Xrite/Macbeth 24-patch color card and compare the captured patch values with the reference list provided with the card. For my purposes, I compare the chromaticity vector per cent (derived from a*b*) and the DeltaE of a*b* which excludes L*, i.e. it is two-dimensional.
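In formula terms, per patch that works out to roughly:

    C* = sqrt(a*^2 + b*^2)    (the chroma vector, compared per cent against the reference patch's C*)
    DeltaE(a*b*) = sqrt((a*_captured - a*_reference)^2 + (b*_captured - b*_reference)^2)    (two-dimensional, L* excluded)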
Professionals may use a patch color card with many more patches shot under better defined lighting than mine.
You are entitled to your opinion, of course.
How are you measuring color accuracy?
I have no idea as to the end-user target of your software, but unless you are targeting the high end commercial photographers who are working in the commercial advertising market or absolutely picky and knowledgeable amateur photographers who are looking at high end print output, there is no need for this level of colour accuracy.
The weakest link in the process is the human visual system. The second weakest link is the computer screen people are viewing on. The third weakest link is the working environment for the first two. My question to you is: does the additional effort involved in developing camera-body-specific profiles actually pay off?
Just as an aside, the folks that require this level of accuracy will never judge the final product on a screen, but rather on printed proofs, viewed in a proper viewing booth with calibrated light sources and controllable illumination. Usually it is more of an art than a science, although corporate colours do have to match their Pantone reference values. In general prints are viewed with light levels at 150 lux on the print with a 5000K source.
Without knowing the spec you are developing to, I can only look at this from a photographer's point of view. I haven't coded in around 30 years. Both Ted and I worked as engineers before we retired.
Tarek, are you using the normal ICC profiles which use an intermediate device-independent color space such as XYZ?
In your quest for absolute color accuracy and if you are accounting for printer output, you might want to consider instead ICC "device link profiles":
"Combining an input profile with an output profile produces a mathematical look-up-table (LUT) that translates colors from an input device into the best available matching colors on the output device.
When using profiles in a program such as Photoshop, this combination is held in RAM for only as long as the color transformation takes place. A Device Link profile captures this device-space to device-space transformation and saves it to disk.
The main advantage of a device link saved in this manner is that some tools can then modify the profile to perform conversions differently than the default method. The most popular is Black Preservation where K-only colors sent through the profile remain K-only after conversion (the default conversion will typically yield a 4-color gray with CMY inks as well)."
From http://www.colorwiki.com/wiki/Device_Link_Profile
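For what it's worth, LittleCMS 2 can build and save such a device link itself. A minimal sketch, assuming the lcms2 API (the profile file names are just placeholders):

    #include <lcms2.h>

    int main()
    {
        // Input (e.g. a camera or working-space profile) and output (e.g. printer) profiles
        cmsHPROFILE hIn  = cmsOpenProfileFromFile("input_rgb.icc", "r");     // placeholder
        cmsHPROFILE hOut = cmsOpenProfileFromFile("printer_cmyk.icc", "r");  // placeholder

        // Build the combined device-to-device transform...
        cmsHTRANSFORM xf = cmsCreateTransform(hIn,  TYPE_RGB_16,
                                              hOut, TYPE_CMYK_16,
                                              INTENT_RELATIVE_COLORIMETRIC, 0);

        // ...then freeze it to disk as a device link profile (ICC version 4.3 here)
        cmsHPROFILE hLink = cmsTransform2DeviceLink(xf, 4.3, 0);
        cmsSaveProfileToFile(hLink, "rgb_to_cmyk.devicelink.icc");           // placeholder

        cmsCloseProfile(hLink);
        cmsDeleteTransform(xf);
        cmsCloseProfile(hIn);
        cmsCloseProfile(hOut);
        return 0;
    }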
Food for thought.
Hi Tarek
I'm not familiar with LibRaw or LittleCMS but I have used DCRaw. There is actually an option in it to use an external custom icc profile.
-p profilename.icm
You can see further details here.
I have tried it with an icc camera profile (matrix based) and it does work.
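Something along these lines, where the file names are just examples: dcraw -4 -T -p my_camera.icm photo.raw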
Dave
Edit: And this from Libraw documentation. About halfway down the page
char* camera_profile;
dcraw keys: -p file
Path to input (camera) profile ICC file (or 'embed' for embedded profile). Used only if LCMS support compiled in.
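If LibRaw has been built with LCMS support, using that field might look roughly like this (an untested sketch; file names are placeholders):

    #include <libraw/libraw.h>

    int main()
    {
        LibRaw rp;
        rp.imgdata.params.camera_profile = (char *)"my_camera.icc"; // placeholder path, or "embed"
        rp.imgdata.params.output_bps  = 16;                         // 16-bit output
        rp.imgdata.params.output_tiff = 1;                          // write TIFF rather than PPM
        rp.open_file("photo.RW2");                                  // placeholder raw file
        rp.unpack();
        rp.dcraw_process();        // the profile is applied in here, only if LCMS was compiled in
        rp.dcraw_ppm_tiff_writer("photo.tiff");
        rp.recycle();
        return 0;
    }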
Seems like Tarek is busy, Dave, i.e. not answering the question.
The answer is:
A) Get a half-size non-demosaiced 16-bit TIFF image using: dcraw -h -4 -T <raw file name>
With a Panasonic RW2 file, that gets me an image with a high-ish black level and greenish cast but with no embedded profile. Always a good start ...
Example: https://www.dpreview.com/forums/post/65000409
B) Tarek develops the perfect ICC profile(s) for outputs from dcraw with those switches and applies it to the dcraw output file.
C) Tarek then edits everything but the sacred color accuracy to his taste.
Piece of cake ...
Thanks Dave, the link seems to be helpful, even though the version of the LibRaw/DCraw emulator they offer has no such parameter, unfortunately; I have checked.
[update]
oh, it seems that this argument is supported only if the library (LibRaw) was compiled with LCMS support... interesting, I will have to look into that in more detail and try it, thanks again.
huhh, thanks Ted. Yeah, as a matter of fact, I was really super busy last week.
yeah, usually my parameters for DCraw are very similar, though I may also add the output color space argument (-o) and pay a bit of attention to the demosaicing algorithm.
the resulting 16-bit TIFF image is now the confusing part: to which color space should it belong (raw? ProPhoto?), and does the created ICC profile perform gamut mapping when it is applied to the TIFF image (e.g. ProPhoto's gamut --> the camera ICC profile's gamut)?
and is judging the ICC profile's accuracy by DeltaE00, say mean < 2.0, the way to go?
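for that check I am thinking of something along these lines (a rough sketch using LittleCMS's CIEDE2000 helper; the patch values shown are placeholders and would come from the card's reference data and my measurements):

    #include <lcms2.h>
    #include <cstdio>

    int main()
    {
        // Reference and measured L*a*b* values for the 24 ColorChecker patches
        // (the numbers below are placeholders; the real ones come from the card's
        // data sheet and from reading the rendered patches).
        cmsCIELab reference[24] = { {37.99, 13.56, 14.06}, /* ... remaining patches ... */ };
        cmsCIELab measured[24]  = { {38.50, 12.90, 14.80}, /* ... remaining patches ... */ };

        double sum = 0.0;
        for (int i = 0; i < 24; ++i)
            sum += cmsCIE2000DeltaE(&reference[i], &measured[i], 1.0, 1.0, 1.0); // Kl = Kc = Kh = 1

        double mean = sum / 24.0;
        std::printf("mean DeltaE00 = %.2f %s\n", mean, mean < 2.0 ? "(pass)" : "(fail)");
        return 0;
    }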
I'd like to make sure of one point though...
when characterizing camera hardware, creating a corresponding ICC profile for it using a ColorChecker, and then applying that ICC profile to other photos taken under the same conditions...
take a high-end camera like PhaseOne as an example:
1- isn't the ICC profile's color gamut usually quite big (bigger than the sRGB gamut at least), right?
>>>>> so then
2- by applying the ICC profile, technically the image's colors now cannot be represented faithfully on any monitor, because the camera gamut, and hence its ICC gamut, is wider than any common monitor color space (e.g. sRGB or even AdobeRGB)
3- when the image is opened and viewed on a computer with a decent monitor, the OS takes care of converting/re-mapping colors from the image's ICC gamut to the monitor's gamut, hence too much cut-off and missing information.
is all of that right, is that how it is happening?
Welcome back, Tarek.
As I understand it, an ICC profile does not have a specific color gamut as such. By which I mean that it cannot be said to be ProPhoto, for example. I don't think you are, but are you referring to the Profile Connection Space?
Your statement 2 is too sweeping and a bit vague.
The ICC profile converts the image color data firstly to XYZ or L*a*b* - usually by a 3x3 matrix.
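Roughly, for a matrix-type profile the forward conversion is just:

    [X]   [ Xr  Xg  Xb ]   [R]
    [Y] = [ Yr  Yg  Yb ] * [G]
    [Z]   [ Zr  Zg  Zb ]   [B]

where the matrix columns are the profile's red, green and blue colorant (XYZ) tags and R, G, B are the linearised image channel values.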
DCraw can render the TIFF with raw data or XYZ data, -o 0 or -o 5. You might prefer raw because, for XYZ, DCraw is applying white balancing which you might prefer to decide yourself as a separate issue.
The question then is: "how to create an ICC profile with raw data as an input?" Can LCMS (for example) do that? Remember that raw data has no gamut and some values can be well outside of the CIE xyY triangle. Consider for instance the extreme example of a camera converted to full-spectrum with no UV-IR blocking filter.
Extreme because infrared can not be rendered by any monitor that I know of.
On the other hand, it would be simpler to output from DCraw as -o 4 ProPhoto. At this point, you have no choice but to accept DCraw's conversion into that space but at least any gamut-clipping is done well outside of sRGB. At this point, any considerations about the path to the output device are irrelevant. In other words, the image data should be considered and used to create an ICC profile for that image and how it looks on your screen during that process does not matter.
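As to whether LCMS can actually write such a profile: it can, once you have derived the primaries/matrix and tone curves yourself (e.g. by fitting against the 24 patches). A bare-bones sketch, where the numbers are placeholders (ProPhoto-ish primaries, D50 white, linear curves) just to show the calls:

    #include <lcms2.h>

    int main()
    {
        // White point and primaries are placeholders; in practice they would be
        // derived from the camera characterisation, not typed in like this.
        cmsCIExyY whitePoint = { 0.3457, 0.3585, 1.0 };           // D50
        cmsCIExyYTRIPLE primaries = {
            { 0.7347, 0.2653, 1.0 },   // red   (placeholder)
            { 0.1596, 0.8404, 1.0 },   // green (placeholder)
            { 0.0366, 0.0001, 1.0 }    // blue  (placeholder)
        };

        // Linear tone curves (gamma 1.0), matching a linear-light TIFF from dcraw -4
        cmsToneCurve *linear = cmsBuildGamma(NULL, 1.0);
        cmsToneCurve *curves[3] = { linear, linear, linear };

        // Build the matrix/TRC RGB profile and save it
        cmsHPROFILE hProfile = cmsCreateRGBProfile(&whitePoint, &primaries, curves);
        cmsSaveProfileToFile(hProfile, "my_camera_matrix.icc");   // placeholder name

        cmsCloseProfile(hProfile);
        cmsFreeToneCurve(linear);
        return 0;
    }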
Of course the mapping is from the ICC PCS (say XYZ) to the monitor's color space (say sRGB), but that in itself does not cause "too much cut-off and missing information". Therefore your statement 3 is incorrect.
Please try to avoid bringing "gamut" into the discussion - it is not relevant to the matter of basic, i.e. in-gamut, color accuracy. So please do stick to the 24-patch card for now **. As soon as we get into out-of-gamut and rendering preferences such as "perceptual" there are too many variables to account for.
** P.S. on that Xrite card, patch 18 (cyan) is slightly out-gamut in sRGB ...
Not all right, only some, sorry. Good luck with your quest for acceptable color rendition on a good monitor.
Tarek,
Another thought concerning what you see on your monitor.
I hope that you have a screen color picker which shows you what the screen driver sent to your screen as opposed to the image data in the opened file. If so, you can download this which has perfect Xrite/Macbeth card data and the standard sRGB ICC profile embedded therein.
http://kronometric.org/phot/color/Co...Bruce_sRGB.tif
Image courtesy of Bruce Lindbloom.
There will be some variation between what your Viewer's color-picker says and what the screen color-picker says ... hopefully well within your goal of <2dE. Apologies if you already know the above ...
According to my investigations, the range of colors that can be captured by a typical digital camera is not as wide as some people think. Cyans in the ProPhoto space aren't handled well. The colors that can be captured depend on the color mosaic filter characteristics as well as the matrix (or LUT) in the camera capture profile. This matrix is sometimes referred to as a compromise matrix and is designed to optimise color accuracy in the vicinity of the test chart patches.
It's not a serious issue though as it's only the colors that are outside the gamut of Adobe RGB or sRGB (depending on your monitor) that are not represented accurately. Real world reflective colors (Pointer's gamut) don't extend that much past the Adobe RGB gamut anyway (mainly cyans).
Dave
Sad to say, there is a paper which shows the exact opposite, Dave!
http://kronometric.org/phot/gamut/Ca...ysisGamuts.pdf
Have a look especially at figures 59 and 60 (for a Nikon D70 and a Canon 20D) where "colors" well outside the 1931 triangle are captured - well outside ProPhoto for that matter ...
Hi Ted
I’m familiar with that paper, in fact it’s been a valued reference of mine for some time. I’ve done similar plots of spectral colours myself and have seen similar stuff from Jim Kasson.
Some observations:
The plots show that results for different camera profiles can differ quite a bit.
Some spectral colours are being presented outside the CIE xy triangle, i.e. as imaginary colours. In my view, this is such a gross inaccuracy that the results in these areas are meaningless and should be ignored.
The colours in the cyan area are of most interest to me. In this area, the colour representations are fairly restricted.
So I’m happy with my previous remarks.
Dave
I guess they would.
Can't argue because I never studied their method sufficiently to agree or disagree. Therefore, so as not to muddy the waters, I withdraw the reference as a rebuttal to your prior post.
OK, no problem Dave.