This tends to be my view too. If a scene looks more interesting or moody because of high contrast or dark areas I will aim to capture it that way and indeed often post-process it to darken those areas. In my image the sun was rising from my left and had only started to illuminate the tops of the hills in the left distance so the foreground was all still in shade.
Ultimately everyone has to choose what works best for each image in their view, even if that means it might not accurately reflect what we saw at the time. When I head off out with my camera it is not to "record" a scene, it is to create an "image".
What I found interesting about Manfred's first edit was that it brought more "depth" to the picture, which in my view was an improvement. But I would not want to brighten the foreground any more than that.
When people think about post-processing it is often in the context of "major surgery": removing unwanted elements or adding new material. While this is done, most post-processing is far more subtle; it slightly modifies what the camera has collected to either accentuate it or tone it down, i.e. burning and dodging. Here we are not doing anything more than making relatively minor changes to the data the camera has collected. There really is nothing mysterious going on here. We are simply correcting the default values that our PP software has used.
This is not a new technique that tools like Lightroom and Photoshop have brought out, it's something that photographers in the wet darkroom started using many decades ago. Ansel Adams, the great landscape photographer, was well known for spending a long time doing this with his prints.
The main purpose that the experienced photographer has for dodging and burning is to calm the image down; toning down the highlights and lifting up the shadow detail so those parts of the image are more cohesive. Adding a few areas that are lighter or darker can be quite effective in changing how the viewer sees the image and how his or her eyes travel through it.
In general, when I post-process an image, I will spend a minute or two making global adjustments and might do the same for large regions of the image. I will spend 90% - 95% of my time on the dodging and burning phase, making adjustments that affect only a small percentage of the image's pixels at a time.
All that is simply done to create a stronger image, in the photographer's view. His or her view is neither right nor wrong, but simply an opinion of how the image should look. Every photographer eventually develops what we refer to as a "style". While that is often a combination of the technical and compositional choices that they make, it is also how they finish their images in post.
All makes sense Manfred, and I appreciate your thoughts and time.
I agree that some minor processing to correct image flaws is important, and learning to walk that fine line between just enough and "holy crap" (I've seen these types, not here but...) takes practice and knowledge about a multitude of things. However, some people love the "holy crap" edits, so who am I to say? I never want to go to the latter state of it, but I also want to get to a better place where I get near perfect out of camera, only requiring a bit of touching up. By aiming for that I do believe I will become a better photographer. Dodging and burning are something I'd like to learn more of, I just need more hours in the day.
Sharon,
I don't think this is a helpful way to think about it.
Certainly, one can sometimes use postprocessing to fix flaws. And certainly, it's good to try to make the image as good as possible in camera, in part because there are limits to how much one can fix in post. It's essential to pay attention to backgrounds, to frame reasonably well (although not perfectly), to get the appropriate depth of field, and to expose reasonably well.
However, postprocessing is not just a way to correct for mistakes in doing these things. It is an essential part of the creative process. Ansel Adams didn't spend hours upon hours in the darkroom working on a single image because he made a lot of mistakes. He did it because a well-done capture didn't produce the image he wanted.
To take one simple example: if you are shooting under very even lighting, you will often get an image that has limited tonal range and contrast. There is nothing "right" or "wrong" about this. If you want a low-contrast image, fine. If you don't, you have to use postprocessing to increase contrast.
When you shoot JPEGs with a camera, you are making these postprocessing decisions by selecting a picture style. Each of those styles is just a list of postprocessing adjustments--color balance, saturation, contrast, etc.
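The "style as a list of adjustments" idea can be sketched in a few lines of code. Everything here is invented for illustration — the style names, the adjustment amounts, and the simple mid-grey contrast formula are not any camera maker's actual presets:

```python
# Toy "picture styles": each style is just an ordered recipe of
# adjustments applied to channel values in the range 0..1.

def adjust_contrast(v, amount):
    # Push values away from mid-grey (0.5); positive amount increases
    # contrast, negative amount flattens it. Clamp to the valid range.
    return min(1.0, max(0.0, 0.5 + (v - 0.5) * (1 + amount)))

def adjust_brightness(v, amount):
    return min(1.0, max(0.0, v + amount))

OPS = {"contrast": adjust_contrast, "brightness": adjust_brightness}

# Two hypothetical styles, expressed as (adjustment, amount) lists.
VIVID = [("contrast", 0.4), ("brightness", 0.05)]
NEUTRAL = [("contrast", -0.2), ("brightness", 0.0)]

def apply_style(pixel, style):
    for name, amount in style:
        pixel = [OPS[name](v, amount) for v in pixel]
    return pixel

shade = [0.30, 0.35, 0.45]  # a flat, low-contrast foreground tone
print(apply_style(shade, VIVID))
print(apply_style(shade, NEUTRAL))
```

The "vivid" recipe widens the spread between the darkest and lightest channels; the "neutral" one narrows it — the same in-camera decision you would otherwise make (or undo) in post.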
Manfred wrote:
"We are simply correcting the default values that our PP software has used."
Note that he didn't write "correcting the mistakes we made in camera." Even if you did exactly what you wanted with the camera, the context may not give you what you want (my point above), and the default postprocessing settings in your software may not give you what you want (Manfred's point).
So I think the best way for newbies to think about postprocessing is not to think about fixing mistakes, but rather to look at an image and ask: "what would I like to be different?" Then you learn the tools that will let you make those changes, to complete the creative process.
Dan
If you use the Relative Colorimetric rendering intent, the out of gamut (OOG) colours are brought to the "hull" of the colour space, so this is often what one sees. The Perceptual rendering intent distributes the OOG colours evenly, and while there is an overall colour shift, the number of pixels sitting on the "hull" of the colour space is vastly reduced, so the amount of channel clipping is not as noticeable.
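The difference between the two intents can be shown with a deliberately simplified one-dimensional sketch. The numbers and the 0.8 "gamut boundary" are made up for illustration; real intents work on three-dimensional colour gamuts, not single channels:

```python
# Toy 1-D model of the two rendering intents: channel values run
# 0..1 in the source space, and the destination "gamut" tops out
# at 0.8 (a stand-in for a narrower colour space).

GAMUT_MAX = 0.8

def relative_colorimetric(v):
    # In-gamut values pass through unchanged; out-of-gamut values
    # are clipped onto the "hull" (the gamut boundary).
    return min(v, GAMUT_MAX)

def perceptual(v, source_max=1.0):
    # The whole range is compressed proportionally: every colour
    # shifts a little, but far fewer pixels pile up on the hull.
    return round(v * (GAMUT_MAX / source_max), 3)

pixels = [0.2, 0.5, 0.79, 0.85, 0.95, 1.0]

rel = [relative_colorimetric(v) for v in pixels]
per = [perceptual(v) for v in pixels]

# Relative colorimetric: every OOG pixel lands exactly on the hull.
print(sum(1 for v in rel if v == GAMUT_MAX))  # 3
# Perceptual: only the single most-saturated value reaches the hull.
print(sum(1 for v in per if v == GAMUT_MAX))  # 1
```

This is exactly the trade-off described above: clipping preserves in-gamut colours but stacks OOG pixels on the boundary, while compression avoids the stacking at the cost of a global shift.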
The trade-off is really up to the photographer and to a large extent depends on the way the image looks in the end.
Not forgetting that the original was saved here with a V2 matrix-based sRGB profile of type Device:Monitor.
Caveat: That means a color-managed viewer would ignore the Rendering Intent tag IF it had gotten set to Perceptual and would use Relative Colorimetric instead (because a matrix profile has no CLUTs).
Not at all, Peter. Photoshop tagged your image as Relative Colorimetric.
I was responding to Manfred's and Dan's mentions of Perceptual which is quite OK for typical printing profiles which have Color Look Up Tables, but not OK for most monitor profiles.
Things may have changed with ICC's V4 profiles re: monitors - but basically, if the profile is about 5Kb in size, it won't do Perceptual - so gamut clipping remains possible, as you have found. Yours was about 3Kb, according to ColorThink.
Hope this helps ...
Let me carefully disagree a bit with your wording. Instead of suggesting that the retouch should be "minor", I prefer to think in terms of the word "subtle". I'm also a bit troubled by the use of the word "flaws". What is a "flaw" to one photographer can be "intent" to another.
One issue when one first starts working with post-processing tools is that it is all too easy to "punch up" an image by cranking up the saturation and contrast. Two very fine photographic techniques, high dynamic range imaging (HDRI) and long exposure of water, have been "ruined" in the eyes of many photographers through their misuse. It's all too easy to throw technology at an image and get lots of "Likes" from uninformed users on social media... I agree with you that these "holy crap" examples of overprocessing are far too common, but it is also a mistake to think that an image will be "good enough" straight out of camera. The only time I come close is in studio photography, where I can control the light so well that little to no retouching is required. In natural light work, I would suggest it is nearly impossible, but that depends on your definition of "nearly perfect" and "good enough".
When it comes to dodging and burning, those are skills I learned in my teens when I was working in the "wet" darkroom. They have always been part of my workflow, and as my digital editing skills have improved, they are now what I spend most of my time on with an image. To quote Ansel Adams: "Dodging and burning are steps to take care of mistakes God made in establishing tonal relationships."
Ted - this has nothing to do with computer screen profiles, but with choosing the appropriate rendering intent when changing colour spaces. A computer screen uses Relative Colorimetric to handle OOG colours that need to be displayed. So far as I know, neither MS nor Apple has implemented any other rendering intent properly, even though these options exist for screen rendering intents.
If one is converting from a wide gamut colour space (ProPhoto RGB, Adobe RGB, etc.) to a narrower one such as sRGB, the rendering intent applied during the conversion determines how the OOG colours are mapped into sRGB; once the conversion is done, a 100% sRGB compliant screen will show the resulting colours correctly, whichever rendering intent was chosen.
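A small numeric sketch shows why a rendering intent is needed at all when going from a wide space to a narrow one. The matrices below are the widely published D50 linear ProPhoto-RGB-to-XYZ and XYZ-to-linear-sRGB values (as tabulated by Bruce Lindbloom); gamma encoding is ignored to keep the sketch short, so this is illustrative rather than a complete colour conversion:

```python
# Linear ProPhoto RGB -> XYZ (D50), published reference values.
PROPHOTO_TO_XYZ = [
    [0.7976749, 0.1351917, 0.0313534],
    [0.2880402, 0.7118741, 0.0000857],
    [0.0000000, 0.0000000, 0.8252100],
]
# XYZ (D50-adapted) -> linear sRGB, published reference values.
XYZ_TO_SRGB = [
    [ 3.1338561, -1.6168667, -0.4906146],
    [-0.9787684,  1.9161415,  0.0334540],
    [ 0.0719453, -0.2289914,  1.4052427],
]

def mat_vec(m, v):
    # 3x3 matrix times 3-vector.
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

# A fully saturated ProPhoto green...
prophoto_green = [0.0, 1.0, 0.0]
srgb_linear = mat_vec(XYZ_TO_SRGB, mat_vec(PROPHOTO_TO_XYZ, prophoto_green))
print(srgb_linear)  # values outside 0..1 mean the colour is OOG for sRGB

# ...which a relative-colorimetric style conversion simply clips
# onto the hull of sRGB:
clipped = [min(1.0, max(0.0, v)) for v in srgb_linear]
print(clipped)  # [0.0, 1.0, 0.0]
```

The saturated ProPhoto green comes out with a negative red channel, a green channel above 1, and a negative blue channel: no sRGB triplet can represent it, which is precisely the OOG situation the rendering intent has to resolve.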
Last edited by Manfred M; 22nd September 2019 at 07:15 PM.
Ted - It wasn't so much a rebuttal as a comment to separate what the screen driver and associated rendering intents do versus what happens when changing the colour space in a software editing tool. Reading what you had written, I'm not sure that someone with less familiarity with the two distinct processes will be able to separate these two things.
1. There is a hardware limitation as to which colour space(s) a computer screen can completely reproduce. Better quality screens can display the entire sRGB colour space. Some high end screens can display (virtually) all of the Adobe RGB colour space. There is no computer screen on the market that I am aware of that can reproduce the ProPhoto RGB or ColorMatch RGB colour space.
Whenever a person is working in a colour space outside the native gamut of the display device, some colours cannot be reproduced (out-of-gamut, OOG); the rendering intent takes those OOG colours and moves them onto the "hull" of the colour space so that they can be displayed by the device. Computer operating systems only support the Relative Colorimetric rendering intent.
2. Photo editing software, assuming it has the ability to do so (many "mainstream" commercial products like Photoshop, Lightroom, Capture One, etc. do), allows the user to change from one colour space to another. In photography we tend to use one of two rendering intents: Relative Colorimetric or Perceptual. The software will do this regardless of the computer operating system's capabilities, and as long as the resulting colours are in-gamut for the display device, it will display them properly. If they are not, the operating system will use Relative Colorimetric to handle the OOG colours.
3. Some commercial products like Photoshop and Lightroom have an emulation mode (Adobe calls this "Softproofing"), and one can emulate what the rendering intents do with both the colour spaces and various printer inkset / paper combinations. Modern photo printers are able to exceed the Adobe RGB colour space when printing saturated colours, especially the blues, greens and purples. So it makes sense to use a wide RGB colour space like ProPhoto if the original scene contains these colours, even though your computer screen cannot reproduce them.
I use this functionality to see which rendering intent I will use in my prints.
Manfred, something's up with CiC:
I toned my comment down to "I wasn't talking about computer monitor profiles, such as those that come with the monitor driver. I was only talking about embedded ICC color profiles of type monitor. <> " at 3:36 PM.
But you responded to my full, disgruntled, comment at 4:46 PM.
So I'll just back off and drop the subject. Thanks for the clarification anyway. Hope it helps Peter ...