Improve Your Knowledge Of Color Space And Depth


If you’ve fiddled around enough in Photoshop and Lightroom, you’ve probably come across color space and color depth. It is important to understand what those terms mean and what they do.

In other words, what are the pros and cons of each, and what do you gain by using one over the other?

What Is Color Space Then?

A color space, by definition, is an abstract mathematical model that describes a range of colors as tuples of numbers, typically 3 or 4 values or color components.

In an 8-bit RGB color space, for example, each of the three channels (red, green, and blue) holds a value from 0 to 255. Those rules tell computers, printers, and so on how to interpret digital files and display them properly.
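
To make that concrete, here is a tiny Python sketch (purely for illustration) of how an 8-bit RGB triple describes a color and how it maps onto the familiar hex notation:

```python
# A minimal sketch of how an 8-bit RGB triple describes a color.
# Each channel runs from 0 to 255; a device reads these three numbers
# and mixes the corresponding amounts of red, green, and blue light.

red   = (255, 0, 0)      # pure red: full red, no green, no blue
white = (255, 255, 255)  # all channels at maximum
gray  = (128, 128, 128)  # equal channels -> a neutral mid gray

def to_hex(rgb):
    """Pack an (R, G, B) triple into the familiar #RRGGBB notation."""
    r, g, b = rgb
    return f"#{r:02X}{g:02X}{b:02X}"

print(to_hex(red))   # #FF0000
print(to_hex(gray))  # #808080
```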

Make no mistake, not every color space is created equal, so let us dive down into the most known (and used) ones.

sRGB vs AdobeRGB

Basically, both use the red-green-blue model, but sRGB is the standard. Adobe RGB has a wider color gamut, yet it only pays off in very specific situations with specific equipment. In other words, if you don’t know what Adobe RGB is, stick to sRGB.

I’m not going to say that sRGB is inferior, because in practice the differences are rarely that evident, and sRGB makes sure your picture is interpreted properly on every device and printer. That is why it is the most broadly used.
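
If you are curious what that “interpretation” actually involves, here is a rough Python sketch of how a saturated Adobe RGB color ends up clipped when shown as sRGB. The matrices are the commonly published D65 ones, and the flat 2.2 gamma is an approximation (sRGB really uses a piecewise curve), so treat the output as illustrative rather than exact:

```python
import numpy as np

# Adobe RGB (1998) -> CIE XYZ and CIE XYZ -> sRGB, standard D65 matrices.
ADOBE_TO_XYZ = np.array([[0.5767309, 0.1855540, 0.1881852],
                         [0.2973769, 0.6273491, 0.0752741],
                         [0.0270343, 0.0706872, 0.9911085]])

XYZ_TO_SRGB = np.array([[ 3.2404542, -1.5371385, -0.4985314],
                        [-0.9692660,  1.8760108,  0.0415560],
                        [ 0.0556434, -0.2040259,  1.0572252]])

def adobe_rgb_to_srgb(rgb8):
    linear = (np.asarray(rgb8) / 255.0) ** 2.2        # decode Adobe RGB (gamma ~2.2)
    srgb_linear = XYZ_TO_SRGB @ (ADOBE_TO_XYZ @ linear)
    out_of_gamut = bool((srgb_linear < 0).any() or (srgb_linear > 1).any())
    clipped = np.clip(srgb_linear, 0.0, 1.0)          # values outside 0..1 can't be shown
    encoded = np.round(255 * clipped ** (1 / 2.2)).astype(int)
    return encoded, out_of_gamut

# The most saturated Adobe RGB green would need "negative red" in sRGB terms,
# so it falls outside the sRGB gamut and gets clipped (i.e. desaturated).
print(adobe_rgb_to_srgb([0, 255, 0]))   # the True flag means the color had to be clipped
```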

LCD displays and photo labs (for print) use sRGB, since it is the simplest and most consistent option across the whole range of devices. This means that if your image looks good on your LCD display (provided it is calibrated properly and not affected by filters), it will look just as good in print.

Since photo labs project the picture onto the photo paper using much the same method, this ensures there aren’t any color deviations beyond those imposed by the photo paper itself.

“CIE chart with sRGB gamut” by Spigget

If you are using Lightroom, you’ll see R, G, and B values shown as percentages under the Navigator panel when your mouse pointer hovers over the picture. If you read them properly, you can use those values to get a correct white balance even on an uncalibrated display.

If, for example, you hover over an area that you know is gray, the three values should be equal, which confirms a proper white balance; if the reading is shifted towards red, blue, or green, adjust the white balance until the values match. That is how the white balance eyedropper works as well.
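
Here is a minimal Python sketch of that gray-card logic, assuming a floating-point RGB image array; the image and patch coordinates below are made up for the example:

```python
import numpy as np

# If a patch you know is neutral reads unequal R, G, B values, scale the red
# and blue channels so the patch averages out equal (green is left as the reference).

def gray_point_balance(image, patch):
    """image: float array (H, W, 3) in 0..1; patch: (row_slice, col_slice) over a gray area."""
    r, g, b = image[patch].reshape(-1, 3).mean(axis=0)
    gains = np.array([g / r, 1.0, g / b])   # multipliers that neutralize the patch
    return np.clip(image * gains, 0.0, 1.0)

# Usage: a tiny synthetic image with a warm (reddish) cast on a gray patch.
img = np.full((4, 4, 3), [0.55, 0.50, 0.45])          # should have been neutral gray
balanced = gray_point_balance(img, (slice(0, 4), slice(0, 4)))
print(balanced[0, 0])                                 # roughly equal R, G, B now
```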


CMYK – How It's Different From RGB & sRGB

CMYK stands for Cyan, Magenta, Yellow, and Key (black), and it is mostly used for inkjet printing. Since it works differently from RGB (a different set of base colors is used to reproduce color), your images need to be converted to CMYK before printing on a CMYK printer.

However, the conversion from sRGB to CMYK will introduce color changes, and you should be prepared for that. That is why, when you plan to do inkjet printing (or plate printing), you should convert the image to CMYK and fine-tune it there in order to avoid color loss or shifts in hue.
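
For the curious, this is the standard naive RGB-to-CMYK formula in Python; real print workflows go through ICC profiles rather than this direct arithmetic, so it is only meant to show why the two models describe color differently:

```python
# Naive RGB -> CMYK conversion: derive the black (key) component first,
# then express the remaining color as cyan, magenta, and yellow ink amounts.

def rgb_to_cmyk(r, g, b):
    """r, g, b in 0..255 -> c, m, y, k in 0..1."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    k = 1.0 - max(r, g, b)                 # the "key" (black) component
    if k == 1.0:                           # pure black: avoid dividing by zero
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k

print(rgb_to_cmyk(255, 0, 0))    # (0.0, 1.0, 1.0, 0.0) -- pure red
print(rgb_to_cmyk(30, 60, 90))   # a muted blue with a large black component
```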

“CIE1931xy gamut comparison” by BenRG and cmglee

When working in Photoshop in the CMYK color space (especially when converting from RGB), make sure you have Proof Colors (Ctrl+Y) and Gamut Warning (Ctrl+Shift+Y) turned on.

This way you’ll see which colors fall outside the printing gamut, and you can preview how the image will look in CMYK while still working in RGB mode.
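
Outside Photoshop you can approximate the same soft-proofing idea with Pillow’s ImageCms module, assuming you have a CMYK ICC profile on disk; the file names below are placeholders, not real paths:

```python
import numpy as np
from PIL import Image, ImageCms

# Round-trip the image through a CMYK profile and compare it with the
# original: pixels that come back noticeably different are the ones a
# gamut warning would flag.  Profile and image paths are placeholders.

srgb = ImageCms.createProfile("sRGB")
cmyk = ImageCms.getOpenProfile("USWebCoatedSWOP.icc")      # placeholder CMYK profile

img   = Image.open("photo.jpg").convert("RGB")             # placeholder image
proof = ImageCms.profileToProfile(img, srgb, cmyk, outputMode="CMYK")
back  = ImageCms.profileToProfile(proof, cmyk, srgb, outputMode="RGB")

diff = np.abs(np.asarray(back, dtype=np.int16) - np.asarray(img, dtype=np.int16)).max(axis=-1)
print((diff > 8).mean(), "of pixels shift noticeably after the CMYK round trip")
```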

Looking At Color Depth

In Image Files

As previously noted, each pixel in a digital image consists of 3 color channels:

  1. Red,
  2. Green,
  3. Blue.

To create a given shade, varying amounts of each base color are mixed together. Color depth defines how many gradations each channel of the color space can hold.

This is often called bit depth as well, or you may come across it as bits per channel or bits per pixel. In standard 8-bit-per-channel sRGB, each of the primary colors can take 256 distinct levels (0–255).
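
The arithmetic is simple enough to check yourself; a quick sketch:

```python
# An n-bit channel can hold 2**n distinct levels, and a pixel with three
# such channels can describe (2**n)**3 possible colors.

for bits in (8, 10, 14, 16):
    levels = 2 ** bits
    colors = levels ** 3
    print(f"{bits:>2} bits/channel: {levels:>6} levels, {colors:,} possible colors")

#  8 bits/channel ->    256 levels, 16,777,216 colors (the familiar "16.7 million")
# 16 bits/channel -> 65,536 levels, 281,474,976,710,656 colors
```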

That is usually enough, but in certain situations, typically clean gradients that change only slightly from one end to the other, posterization becomes clearly visible.

“4 bit” by Thegreenj

“8 bit” by Thegreenj

Just to clarify: posterization is the effect where colors aren’t blended smoothly due to a lack of color depth, so you can see visible edges between the different shades.

This effect can be reduced by adding noise to an 8-bit image, while in a 16-bit image it won’t be nearly as visible; in fact, it shouldn’t be visible at all.
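
A quick Python sketch makes the banding (and the noise trick) easy to see: quantize a smooth gradient to a handful of levels, then quantize it again with a little random noise added first:

```python
import numpy as np

# Quantizing a smooth ramp to only 8 levels (3 bits) produces hard steps.
# Adding noise of roughly one quantization step before rounding spreads the
# error around instead of piling it into bands -- the same reason grain
# hides posterization in an 8-bit gradient.

gradient = np.linspace(0.0, 1.0, 1024)          # a perfectly smooth ramp

def quantize(values, bits):
    levels = 2 ** bits - 1
    return np.round(values * levels) / levels

step = 1.0 / (2 ** 3 - 1)                       # size of one 3-bit step
posterized = quantize(gradient, 3)
dithered = quantize(gradient + np.random.uniform(-0.5, 0.5, gradient.shape) * step, 3)

print(len(np.unique(posterized)))               # 8 distinct values -> visible bands
print(np.abs(posterized - gradient).max())      # worst-case error shows up as an edge
print(np.abs(dithered - gradient).mean())       # similar error, but spread as noise
```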

Color depth can go much deeper, currently topping out at 48 bits per pixel (16 bits per channel), which gives roughly 281 trillion possible colors. But that is clearly excessive, since the human eye can only distinguish around 10 million different shades, which makes 24 bits per pixel (8 per channel) enough for viewing purposes.

However, even 24 bits per pixel is a tad more than needed for viewing, since the difference shows only in gradients and in side-by-side comparisons. For editing, 16 bits per channel is in practice the most you’ll really need.

Keep in mind that most monitors are 8-bit, so when you edit and view 16-bit images they will be displayed at 8 bits, but the file itself stays intact and the picture will still carry its 16 bits when printed.

If you want a better view, you can get yourself a 10-bit monitor, which will show a wider gamut and produce less visible posterization. However, such monitors are more expensive and require a graphics card capable of outputting that color depth, such as the NVIDIA Quadro series.

Additionally, standard .jpeg files are limited to 8 bits per channel, while .tiff files top out at 16 bits per channel.

In Cameras

DSLRs usually capture 14-bit RAW images, meaning they record 42 bits of color information per pixel in total. A 16-bit image, by comparison, carries 48 bits per pixel (16 per channel).
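
The arithmetic, plus the way 14-bit values are typically padded into a 16-bit container, looks roughly like this (the raw value is made up for illustration):

```python
# 14 bits per channel across three channels, stored in a 16-bit working file.

raw_bits = 14
file_bits = 16

print("capture:", 3 * raw_bits, "bits of color per pixel")    # 42
print("file:   ", 3 * file_bits, "bits of color per pixel")   # 48

raw_value = 9000                                  # a 14-bit sample, range 0..16383
stored = raw_value << (file_bits - raw_bits)      # shifted up towards the 0..65535 range
print("stored roughly as", stored, "in a 16-bit channel")
```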

However, the RAW file is written as 14 bits per channel with future-proofing in mind, because so far, according to DxOMark, there isn’t a single camera capable of actually filling all those bits with color information.

Instead, most modern cameras gravitate around 25 bits in total of measured color information. Other sources claim that most modern DSLRs use around 10–12 bits per channel of color, so clearly the testing methods don’t agree.

From what I can see, a 16-bit image in Photoshop looks better than its 8-bit counterpart when I open a RAW file from my 7D Mark II, especially in the shadows and smooth gradients.

Anyhow, if you aren’t shooting RAW, the camera produces 8-bit .jpeg files, and you lose a large amount of data that way. That is why you should start using RAW immediately.

Photo by Nick Harris

You should also bear in mind that the amount of color information that reaches the sensor is not equal to the amount that ends up in the file.

The sensor’s data has to be processed, with several algorithms applied, before it is dumped into the RAW file. In general, a good deal of data is lost in that process in order to deliver a “better looking” image to the user.

This usually involves filtering noise, removing color artifacts from the Bayer filter, and so on. That’s why the exact same sensor technology often performs quite differently in cameras from two different manufacturers (yes, I’m pointing towards Nikon and Sony): the sensor itself doesn’t do all the work, the algorithms do much of it too.

Additionally, the lens, the quality of its glass, and its coatings account for some of the color loss and fringing too. Remember this if you’ve ever wondered why certain lenses make colors look washed out with little detail, or why other lenses show a slight cast towards a certain color.

That, paired with the limited amount of color captured by the sensor and the software driving it, can result in less color depth (fidelity) being stored in the RAW file.

Summary

  • Using the proper color depth can noticeably improve your images. Don’t bother scaling it down when you don’t have to.
  • All RAW files are opened as 16-bit images by Adobe software by default, and most other RAW processors behave similarly. There is no need to tone down the color depth unless the image is for web use and file size really matters. 16-bit color also gives you more freedom in editing, since it generates less banding (posterization) and fewer color artifacts.
  • For archiving, if you prefer to store processed files instead of RAW (though I can’t see why you wouldn’t just keep the RAW file), as many photographers do nowadays, it’s wiser to keep a 16-bit .tiff rather than an 8-bit .jpeg, since it gives you more room for further editing if that becomes necessary.



