Our modern digital cameras are, in theory, capable of capturing all the colours humans can see. Our output devices, whether computer screens or printers, are far more limited in their capabilities. We refer to the range of colours an output device can reproduce as its gamut. If our image contains colour data that cannot be reproduced by a specific output device, that colour is said to be “out of gamut”, often abbreviated as OOG.
Our digital cameras capture data, which must be turned into an image file before it can be viewed. Part of that conversion process requires that the data be assigned a colour space, which tells our software how to interpret the colour data our cameras have captured. Even the most basic cameras can turn out sRGB JPEG images. More advanced cameras can also produce JPEG images that use the AdobeRGB colour space. Anyone who uses Adobe Lightroom will be working with a variant of the ProPhoto RGB colour space.
The standard that we use to compare colour spaces is the CIE 1931 colour space, which is a good representation of all the colours that humans can see. The sRGB colour space covers about 1/3 of the CIE 1931 colour space, AdobeRGB covers around ½, and ProPhoto RGB covers about 90% of the colours in the CIE 1931 colour space. The wider the colour space, the more vibrant the colours it can represent compared with narrower colour spaces.
We can convert our images from one colour space to another using software tools, but the conversion is effectively one-directional. A wider colour space can be converted to a narrower one, but the out-of-gamut data is thrown away in the process and cannot be recovered. While we can take an image in a narrower colour space, say sRGB, and convert it to a wider one, say AdobeRGB, all of the resulting colours will simply be the original sRGB colours mapped to AdobeRGB values; no new colour information is created.
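A minimal numeric sketch of why the conversion is one-directional: take an AdobeRGB primary green, convert it to sRGB (where it falls out of gamut and gets clipped), then convert it back. This works on linear (not gamma-encoded) RGB values and uses the published D65 RGB-to-XYZ matrices for the two spaces; real colour-management software works through ICC profiles, so treat this only as an illustration of the lost data.

```python
import numpy as np

# Published linear RGB -> XYZ matrices (D65 white point).
SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])
ADOBE_TO_XYZ = np.array([[0.5767, 0.1856, 0.1882],
                         [0.2974, 0.6273, 0.0753],
                         [0.0270, 0.0707, 0.9911]])

def adobe_to_srgb(rgb):
    return np.linalg.inv(SRGB_TO_XYZ) @ (ADOBE_TO_XYZ @ rgb)

def srgb_to_adobe(rgb):
    return np.linalg.inv(ADOBE_TO_XYZ) @ (SRGB_TO_XYZ @ rgb)

pure_green = np.array([0.0, 1.0, 0.0])   # AdobeRGB primary green

in_srgb = adobe_to_srgb(pure_green)      # some channels fall outside [0, 1]
clipped = np.clip(in_srgb, 0.0, 1.0)     # narrow conversion discards them
round_trip = srgb_to_adobe(clipped)      # converting back does not restore

print(in_srgb)      # negative components: out of gamut in sRGB
print(round_trip)   # no longer (0, 1, 0): the original colour is gone
```

The round trip lands on a different, less saturated green: the channels that were clipped on the way into sRGB are simply gone.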
When we display an image in a colour space that is wider than our output device can handle, our operating system or printer driver uses the assigned rendering intent to handle out-of-gamut colours. Operating systems use the Relative Colorimetric rendering intent, which leaves colours that are in gamut unchanged but clips out-of-gamut colours to the edge (sometimes referred to as the hull) of the destination colour space.
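A simplified sketch of that behaviour: real colour management works through ICC profiles and also adapts the white point, but the “leave in-gamut colours alone, clip the rest to the hull” part of Relative Colorimetric can be modelled as a per-channel clamp on linear RGB values.

```python
import numpy as np

def relative_colorimetric_clip(rgb):
    """Clamp each channel to the [0, 1] hull of the destination space."""
    return np.clip(rgb, 0.0, 1.0)

in_gamut  = np.array([0.20, 0.85, 0.40])   # already displayable: untouched
out_gamut = np.array([1.30, 0.85, -0.10])  # red too saturated, blue negative

print(relative_colorimetric_clip(in_gamut))   # unchanged
print(relative_colorimetric_clip(out_gamut))  # clipped to (1.0, 0.85, 0.0)
```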
We do something similar when we convert images to the same colour space as our computer screen, and if we do no further editing after the conversion, both paths give us virtually identical results when we look at the screen.
If we edit these images, using a wide-gamut colour space gives us the advantage of working with all of the colour data in the original image. Colours that were OOG in the original image can come back into gamut when edited. When using a wide-gamut colour space, the original colours are preserved and the correct colours appear on the screen. If the edits are made to data in a smaller colour space, the editing program works with data where the rendering intent has already changed the colours, and less accurate colours come out of the editing process. A colour channel that has been clipped or blocked cannot be recovered.
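A toy illustration of the difference: assume a single linear channel where values above 1.0 are out of gamut for the screen, and the “edit” is a simple exposure reduction. In the wide-gamut workflow the OOG value comes back into gamut accurately; in the narrow workflow the clipped channel gives the wrong answer.

```python
import numpy as np

original = np.array([1.30, 0.90, 0.50])   # 1.30 is OOG for the display

# Wide-gamut workflow: edit the full data, clip only at display time.
edited_wide = np.clip(original * 0.7, 0.0, 1.0)

# Narrow workflow: conversion already clipped 1.30 to 1.0, then we edit.
edited_narrow = np.clip(np.clip(original, 0.0, 1.0) * 0.7, 0.0, 1.0)

print(edited_wide)    # (0.91, 0.63, 0.35) -- OOG value came back accurately
print(edited_narrow)  # (0.70, 0.63, 0.35) -- clipped channel is now wrong
```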
Let us look at some examples that explain this visually:
Assume that this example uses “pure” values of Red, Green and Blue. The outer rectangle shows wide-gamut colours that cannot be displayed on a normal sRGB screen, while the inner square has colours that are just barely in gamut and can be displayed on a standard sRGB screen. This is essentially what we would see if we displayed this data on an AdobeRGB-compliant screen.
Now, if we decide to display this data on an sRGB-compliant screen, we can follow one of two approaches. We can convert the data to the narrower sRGB colour space, or we can leave the data in the wider-gamut colour space and let the operating system’s rendering intent take care of the out-of-gamut colours. The two approaches give exactly the same output on the sRGB screen:
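The equivalence can be sketched numerically. Assuming the up-front conversion uses the same Relative Colorimetric clipping that the operating system applies at display time (modelled here as a clamp on linear values), the pixels reaching the screen are identical either way:

```python
import numpy as np

wide_data = np.array([[1.25, 0.40, 0.10],    # OOG red
                      [0.30, 0.80, 0.55],    # fully in gamut
                      [0.00, 1.10, -0.05]])  # OOG green and blue

# Path A: convert the file to sRGB up front; clipping happens now,
# and the screen shows the stored file as-is.
converted_file = np.clip(wide_data, 0.0, 1.0)
shown_a = converted_file

# Path B: keep the wide-gamut file; the OS clips at display time.
shown_b = np.clip(wide_data, 0.0, 1.0)

print(np.array_equal(shown_a, shown_b))   # True: identical on screen
```

The point of the demo is that, without edits, both paths reduce to the same clipping operation, so the viewer cannot tell them apart.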
The upshot of this is that if one is not planning to edit the data, either workflow will give identical results from an end-user standpoint. On the other hand, if one edits the image and some of the data comes into gamut through the edit, this information will be accurate and will be displayed if one is working in the wide-gamut colour space. The narrow-gamut colour space will not have this data and the colours will not be accurate. In this example, the large vertical rectangle shows the output on the sRGB screen for both of these scenarios.
The first image shows an emulation of what happens if we work in a wide colour space and some of the colours come into gamut due to the edit. As these colours were edited in the wide colour space, the out of gamut colours are still handled by the operating system and the rendering intent, but the colours that are in gamut after the edit are displayed accurately.
In the second diagram, the edit is made to data in the narrow colour space, where the original out-of-gamut data has already been discarded. Because the edit is applied to the clipped data, some of the subtlety and colour accuracy of the original is lost.
Conclusion: Working in a wide colour space like ProPhoto RGB has advantages, even in cases where this colour space is larger than the computer screen can display. Converting to a narrower colour space, for instance when uploading an image to the Internet, is a necessary step in this workflow. Failure to do so can result in the colours not being displayed properly for other people viewing the image.