
UNDERSTANDING BIT/COLOR DEPTH

Bit depth, or color depth, is an important measure of video quality. Read on to learn what bit depth is and why it's an important consideration in film and video workflows.

Bit/Color Depth:

If a color space specifies the range of available color intensities (the gamut), then bit depth defines how accurately we can work within that range. Put another way, bit depth determines how precisely one can specify the difference between one color and another. Thankfully it’s one of those topics that makes a lot more sense when paired with images, so let’s dive right in.

What's the difference between a low bit depth and a high bit depth?

As seen above, a low bit depth limits the possible values that can be used when representing an image. In the case of a 3-bit greyscale image there are only 8 possible values, because 3 bits give 2 x 2 x 2 = 8 combinations. On the other hand, if we are allowed to use 8 bits then we have up to 256 values we can use to reproduce the same image (2⁸ = 256). This results in a much smoother-looking image because we can be more accurate in how we define subtle differences between shades.
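
If it helps to see that arithmetic in action, here’s a tiny Python sketch (my own illustration, not from any particular tool) that prints how many shades each bit depth can represent per channel:

for bits in (1, 3, 8, 10, 12, 16):
    print(f"{bits:>2}-bit -> {2 ** bits:,} possible values per channel")

# Prints:
#  1-bit -> 2 possible values per channel
#  3-bit -> 8 possible values per channel
#  8-bit -> 256 possible values per channel
# 10-bit -> 1,024 possible values per channel
# 12-bit -> 4,096 possible values per channel
# 16-bit -> 65,536 possible values per channel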

RGB Imagery

Now that we have the idea with greyscale images, let’s apply this knowledge to color images. Color images are created by mixing three channels of color, namely Red, Green and Blue – RGB.

Because we have 3 channels, we now have 3 times the amount of data.

When looking at an 8-bit per channel RGB signal, we have 256 possible values for each of the channels/colors. Combined, this means there are 16.7 million possible color combinations (256 x 256 x 256). That may sound like a huge amount, but it is roughly what almost every screen we watch daily is capable of reproducing, and there are many use cases that require even higher bit depths to avoid noticeable problems with image quality.
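
As a quick sanity check on those numbers, here’s a short Python sketch (illustrative only) that multiplies the per-channel values out into total RGB combinations:

def rgb_combinations(bits_per_channel):
    levels = 2 ** bits_per_channel     # shades available per channel
    return levels ** 3                 # Red x Green x Blue

print(f"{rgb_combinations(8):,}")      # 16,777,216  (the familiar "16.7 million colors")
print(f"{rgb_combinations(10):,}")     # 1,073,741,824  (roughly 1.07 billion)
print(f"{rgb_combinations(12):,}")     # 68,719,476,736  (roughly 68.7 billion)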

RGB + Alpha

In addition to having three color channels, imagery can sometimes also have a fourth channel called an alpha channel. An alpha channel contains transparency information and is typically at the same bit depth as the other color channels.


This means that for a color image on a computer system there will be 8 bits assigned per color channel, and sometimes another 8 bits for an alpha channel, giving us a total of 32 bits per pixel (4 x 8 bits).

Bit Depth per pixel vs. Bit Depth per channel

One area of confusion for me early on was the seemingly inconsistent ways I saw bit depth referenced. For example, if you’ve ever used Adobe After Effects, you may know that you can switch the application to work in either 8-bit, 16-bit or 32-bit color. 8-bit is the default, but as we just saw in the previous example, doesn’t a standard image use 32 bits per pixel? Why would 8-bit even be an option?


The confusion lies in the fact that After Effects is referencing how bits are assigned per color channel, while the 32-bit measurement above references how many bits are required when all the channels are combined. This distinction is acknowledged technically by the abbreviation “bpc” (bits per channel), but in casual conversation people will more often assume you know which is being referenced based on the context.


For example, if someone said that a render was done in 12-bit color, it’s very likely they mean it was exported at 12 bits per channel. The alternative would mean they had only 4 bits per channel (12 bits divided across the Red, Green and Blue channels), which is well below professional standards.


For reference, here’s a breakdown of how bits per channel plays out when calculated to bits per pixel.

The table above calculates bit depth per channel out to the number of bits required per pixel.
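
Since the breakdown is just multiplication, here’s a small Python sketch (my own illustration) that works out bits per pixel from bits per channel for both RGB and RGB + Alpha images:

for bpc in (8, 10, 12, 16, 32):
    print(f"{bpc:>2} bpc -> {bpc * 3:>3} bpp (RGB) or {bpc * 4:>3} bpp (RGB + Alpha)")

# Prints:
#  8 bpc ->  24 bpp (RGB) or  32 bpp (RGB + Alpha)
# 10 bpc ->  30 bpp (RGB) or  40 bpp (RGB + Alpha)
# 12 bpc ->  36 bpp (RGB) or  48 bpp (RGB + Alpha)
# 16 bpc ->  48 bpp (RGB) or  64 bpp (RGB + Alpha)
# 32 bpc ->  96 bpp (RGB) or 128 bpp (RGB + Alpha)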

Why use high bit-depths?

There are multiple reasons and use cases that necessitate moving to higher bit depths. Among these are avoiding color banding and posterization, supporting high dynamic range imagery, and maintaining quality while images are manipulated throughout post-production.

Banding

Banding occurs when a bit depth is too low and the eye can see where changes in color are occurring rather than seeing a smooth, graduated change between shades. Often these differences look like bands that run through an image - hence the name. Here’s an example of a photograph reproduced at a bit depth too low to accurately render the gradient in the shades of blue in the sky:

The image above exhibits banding in the sky due to an insufficient bit depth being used for rendering.

Banding is an issue that can appear regularly in images with 8 bits per channel or less, and for that reason high-end cameras capture at higher bit depths and master files are likewise produced at higher bit depths. For example, digital cinema uses 12 bpc (bits per channel) projection, which allows it to show up to 68.7 billion different color combinations and accurately display even the most subtle differences in colors and shades.
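
To see how banding comes about, here’s a small NumPy-based sketch (illustrative only, and it assumes NumPy is installed) that quantizes a smooth gradient to different bit depths - the 3-bit version collapses to just 8 shades, which is exactly what reads as visible bands:

import numpy as np

def quantize(values, bits):
    # Snap values in the 0.0-1.0 range to the levels available at this bit depth.
    levels = 2 ** bits
    return np.round(values * (levels - 1)) / (levels - 1)

sky = np.linspace(0.0, 1.0, 1920)     # a smooth horizontal ramp, like a clear sky
banded = quantize(sky, 3)             # 3-bit: only 8 distinct shades survive
smooth = quantize(sky, 10)            # 10-bit: 1,024 shades, steps too small to notice

print(len(np.unique(banded)), len(np.unique(smooth)))   # 8 1024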

Posterization

Posterization is essentially the same problem as banding, but it has a different name because some images don't have large gradients that break up into neat "bands". Instead, some imagery will break down into more abstract patterns. The problem is still the same though - instead of having nice blended shades of color, the eye can spot where one color stops and the next begins.

Example of posterization, where a breakdown in colors can be seen in the background foliage.

In the above image, posterization can clearly be seen in the background region where focus drops off. One interesting phenomenon with posterization is that areas of high detail and contrast (high spatial frequency) don't appear to be affected as much perceptually. Take, for example, the in-focus tree in the right third of the frame - without the background, it would be hard to tell that a low bit depth was negatively affecting this image. This is because in areas of high spatial frequency the limited colors that are used alternate and break up much more rapidly, which means our eye isn't able to spot areas that look like they should have smoother gradients in place.

On the other hand, areas that have low spatial frequency (such as out-of-focus regions or low-contrast scenes) are much more likely to exhibit posterization artifacts. This is because it's much more likely that large patches will be reduced to single colors, which in turn makes them much easier for our eyes to spot.

HDR (High Dynamic Range) + Wide Color Gamuts

Another area that has necessitated the move to higher bit depths is the advent of High Dynamic Range and wider color gamuts entering the consumer market. Both HDR and wide color gamuts have the potential to exacerbate banding and posterization problems, since they require a given bit depth to represent more colors while also covering a much brighter range of shades. In essence, the bit depth is being stretched further than it would have been otherwise and will start to create banding in areas where it may not have been visible in Standard Dynamic Range content.

Accordingly, ultra-high-definition TVs that support HDR must be able to reproduce at least 10 bits per channel to avoid introducing banding into content. Dolby Vision (Dolby’s standard for HDR) goes one step further and requires Dolby Vision content to be encoded at 12 bits per channel in an effort to future-proof the format and mitigate any potential problems.
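
To get a rough feel for why HDR pushes bit depth requirements up, here’s a deliberately simplified Python sketch. It spreads brightness steps linearly, which real systems don’t do (they use gamma or PQ transfer curves), so treat the numbers as intuition rather than engineering values:

def average_step_nits(peak_nits, bits):
    # Simplification: spread the brightness range evenly across the available code values.
    return peak_nits / (2 ** bits - 1)

print(round(average_step_nits(100, 8), 3))     # SDR at 8-bit:  ~0.392 nits per step
print(round(average_step_nits(1000, 8), 3))    # HDR at 8-bit:  ~3.922 nits per step - far coarser, so banding appears
print(round(average_step_nits(1000, 10), 3))   # HDR at 10-bit: ~0.978 nits per step - back to a comfortable step size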

Post Production Processes

So why on earth would we ever need more than 12 bits per channel if that’s capable of handling High Dynamic Range, Wide Color Gamut content without issues? While it’s true this level of color depth comes close to exceeding human vision for monitoring purposes, higher bit depths are very often required in post-production. This is because post-production often involves heavily pushing and pulling colors around, which is the equivalent of stretching out the bit depth encoded in an image. When you start to see banding or posterization, you’ve reached the limit of the color depth information, which in turn limits the amount of creative freedom color graders, compositors and VFX specialists have.

One other use for high bit depths is encoding additional exposure information that can be retrieved later. For example, Visual Effects (VFX) work is often rendered at 32 bits per channel using Linear Light values. This enables super-bright whites (white values beyond the normal encoding point) to be stored and then manipulated further down the line. For example, a compositor putting together explosions rendered at 32-bit could adjust exposure to match them into their shot and retrieve details out of whites that previously appeared to be clipped and lost.
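
As a rough sketch of that idea (the values below are made up for illustration, and NumPy is assumed to be available), here’s how 8-bit storage clips anything brighter than reference white, while 32-bit float linear values keep that detail around for a later exposure adjustment:

import numpy as np

explosion = np.array([0.2, 1.0, 4.0, 16.0], dtype=np.float32)     # linear light; anything above 1.0 is "super bright"

clipped_8bit = np.clip(np.round(explosion * 255), 0, 255) / 255   # 8-bit storage clips everything above 1.0
two_stops_down = explosion * 0.25                                 # pulling exposure down two stops in float

print(clipped_8bit)      # roughly [0.2, 1.0, 1.0, 1.0] - the highlight detail is gone for good
print(two_stops_down)    # roughly [0.05, 0.25, 1.0, 4.0] - detail above 1.0 is still there to retrieve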

Wrapping it up:

In this article we’ve explored what bit/color depth is – namely, a measure of how accurately one can specify differences in shades of color. The higher the bit depth, the more possible color shades. We’ve looked at how bit depths are calculated and the difference between the number of bits required per pixel and the number of bits required per channel in an image. Lastly, we took a look at what banding and posterization are, why they occur and why extremely high bit depths may be used in some areas of post-production.

Need more help?

Unravel creates and masters content that's viewed on a variety of devices. We love mastering content for cinema, TV and the web. Working with various bit depths is something we do on a daily basis, so if you need help or advice on your next project, don’t hesitate to get in touch.