Help with Color grading formula

I would like to use the classic color grading formula:
output = (gain * (x + lift * (1 - x)))^(1 / gamma)
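
In C terms (my own sketch, with x and the output normalized to 0..1), I read that as:

    #include <math.h>

    // Classic lift/gamma/gain: x and the return value are in 0..1.
    double grade(double x, double lift, double gain, double gamma)
    {
        return pow(gain * (x + lift * (1.0 - x)), 1.0 / gamma);
    }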

Do I need to remove the gamma before using the formula and then add it back afterwards?

Is this the correct formula for the gamma?
BT.709 gamma = 0.2126*R + 0.7152*G + 0.0722*B

Brian’s filter is a bit unusual. The best resource is to look at the source code:

No, that is the formula to get luma (the Y channel) in BT.709, for converting RGB to YCbCr/YUV. Brian’s filter uses a hard-coded gamma of 2.2.
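
In code form, that luma formula is just a weighted sum (values normalized to 0..1):

    // BT.709 luma (the Y channel) from R'G'B' - a weighting, not a gamma curve
    double luma709(double r, double g, double b)
    {
        return 0.2126 * r + 0.7152 * g + 0.0722 * b;
    }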

Thank you. Exactly the info I was needing. :slight_smile:

I am struggling with this; some help, please.
My aim is to make a Windows app that will compute the color grading parameters from the image data.
I have the framework done, but the image data is confusing.

I have a procedure which compares every pixel in 2 images and outputs stats for differences in RGB as 3 separate numbers.
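
The comparison is essentially this sketch (my own helper, assuming two same-size interleaved 8-bit RGB buffers):

    #include <stdlib.h>

    // Mean absolute per-channel difference between two same-size
    // interleaved 8-bit RGB buffers, as a percent of full scale (255).
    void compare_rgb(const unsigned char *a, const unsigned char *b,
                     size_t npixels, double pct[3])
    {
        long long sum[3] = { 0, 0, 0 };
        for (size_t i = 0; i < npixels; i++)
            for (int c = 0; c < 3; c++)
                sum[c] += llabs((long long)a[3 * i + c] - (long long)b[3 * i + c]);
        for (int c = 0; c < 3; c++)
            pct[c] = 100.0 * (double)sum[c] / (255.0 * (double)npixels);
    }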

I applied a Shotcut Color Grading filter and set just one parameter (Shadows B +20%).
When I compare the images I get all 3 colors changing.
R = 1%
G = 0.8%
B = 11.8%

I understand that the Shadows parameter will have a small effect on most of the blue data, so my 11.8% will not correlate with the 20% adjustment. This is fine.

But why do the Red and Green bytes change?

PS: I used “export frame” in BMP format to save the images for comparison.

I think that the problem is that I have interpolation enabled and my source image is bigger than my render image…
So changing the blue on the source may change green and red in the interpolated image data.

Interpolation could definitely be a culprit.

In general, I would not be concerned about anything under 1.5%. Shotcut processes video in YUV (unless told otherwise), but the color grading filter is RGB. That means YUV -> RGB -> YUV conversion is happening, which is lossy +/- 2 RGB values. There isn’t anything that can be done about that, except move to higher bit-depth video so the loss is less perceptible. And even that doesn’t help our case here because Shotcut output is currently capped at 8-bit.
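
If you want to see the loss directly, here is a sketch of that round trip (BT.709 matrix, limited-range 8-bit quantization, 4:4:4 sampling; chroma subsampling would lose even more):

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    // Measure worst-case loss of an 8-bit RGB -> YCbCr -> RGB round trip.
    int main(void)
    {
        int worst = 0;
        for (int r = 0; r < 256; r++)
        for (int g = 0; g < 256; g++)
        for (int b = 0; b < 256; b++) {
            double R = r / 255.0, G = g / 255.0, B = b / 255.0;
            double Y  = 0.2126 * R + 0.7152 * G + 0.0722 * B;
            double Cb = (B - Y) / 1.8556;
            double Cr = (R - Y) / 1.5748;
            // Quantizing to 8-bit codes is where the precision is lost
            int y  = (int)lrint( 16.0 + 219.0 * Y);
            int cb = (int)lrint(128.0 + 224.0 * Cb);
            int cr = (int)lrint(128.0 + 224.0 * Cr);
            double Y2  = (y  -  16) / 219.0;
            double Cb2 = (cb - 128) / 224.0;
            double Cr2 = (cr - 128) / 224.0;
            double R2 = Y2 + 1.5748 * Cr2;
            double B2 = Y2 + 1.8556 * Cb2;
            double G2 = (Y2 - 0.2126 * R2 - 0.0722 * B2) / 0.7152;
            int e;
            e = abs((int)lrint(R2 * 255.0) - r); if (e > worst) worst = e;
            e = abs((int)lrint(G2 * 255.0) - g); if (e > worst) worst = e;
            e = abs((int)lrint(B2 * 255.0) - b); if (e > worst) worst = e;
        }
        printf("worst round-trip error: +/- %d code values\n", worst);
        return 0;
    }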

Thanks Austin,

I did find a small bug in the filter code.
The output suffers from truncation errors even where no change is required.
So x = 1.0 comes out of the math as 0.99999…, and (int)(0.99999… * 255.0) truncates to 254 instead of 255.

Perhaps this code

		// Update LUT
		self->rlut[ i ] = (int)(r * 255.0);
		self->glut[ i ] = (int)(g * 255.0);
		self->blut[ i ] = (int)(b * 255.0);

could be changed as follows:

		// Update LUT
		self->rlut[ i ] = (int)(r * 255.0 + 0.5);
		self->glut[ i ] = (int)(g * 255.0 + 0.5);
		self->blut[ i ] = (int)(b * 255.0 + 0.5);

For some reason setting all values to zero does NOT result in these errors… ?
Perhaps I am missing something.

I’d love to hear what comes of this project

I will send you a download link once I am done.

I also could not recreate the rounding bug, even when printing the entire LUT and inspecting each value. But I agree that rounding should be used instead of truncation, so I have submitted a fix here using lrint():
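
The change amounts to this (lrint() comes from <math.h> and rounds to nearest):

    // Update LUT, rounding to nearest instead of truncating
    self->rlut[ i ] = (int)lrint( r * 255.0 );
    self->glut[ i ] = (int)lrint( g * 255.0 );
    self->blut[ i ] = (int)lrint( b * 255.0 );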

Thank you Brian.

I am stuck with some basic math. :frowning:

Anybody help me?
I need to reverse the Color Grading filter calculations and solve for the input Lift, Gain, and Gamma.
For some odd reason the results indicate that Shotcut is not using the code that Austin posted above.
Perhaps an older version.

This code gives the same LUT as Shotcut (just showing the code for one color):

		// Convert to gamma 2.2
		double r = pow( (double)i / 255.0, 1.0 / 2.2 );

		// Apply lift
		r += rlift * ( 1.0 - r );

		// Apply gamma
		r = pow( r, 2.2 / rgamma );

		// Apply gain
		r *= pow( rgain, 2.2 / rgamma );   <<<< not as expected

		// Update LUT
		self->rlut[ i ] = (int)(r * 255.0);

I need 3 procedures:

ComputeLift(X, Y, Gain, Gamma)

ComputeGain(X, Y, Lift, Gamma)

ComputeGamma(X, Y, Lift, Gain)

I check that the results do not cause overflow / underflow so the values can always be assumed to be within range.
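
If my algebra is right, the LUT code above collapses (since pow(a, p) * pow(gain, p) == pow(gain * a, p)) to y = pow(gain * (x + lift * (1 - x)), 2.2 / gamma), with x = pow(i / 255.0, 1.0 / 2.2) and y the normalized output. Inverting that for each parameter gives sketches like these (my own derivation, not Shotcut code):

    #include <math.h>

    // y = pow(gain * (x + lift * (1 - x)), 2.2 / gamma), x and y in 0..1.
    // Each solver inverts that relationship for one unknown, assuming the
    // caller has already range-checked the inputs as described above.

    double ComputeLift(double x, double y, double gain, double gamma)
    {
        return (pow(y, gamma / 2.2) / gain - x) / (1.0 - x);
    }

    double ComputeGain(double x, double y, double lift, double gamma)
    {
        return pow(y, gamma / 2.2) / (x + lift * (1.0 - x));
    }

    double ComputeGamma(double x, double y, double lift, double gain)
    {
        return 2.2 * log(gain * (x + lift * (1.0 - x))) / log(y);
    }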

Just to verify, is this your workflow?

  1. Start with a normal PNG image (Source.png)
  2. Bring it into Shotcut, use one Color Grading filter only, export frame as Graded.png
  3. Compare Graded.png to Source.png to calculate a differential
  4. Apply the differential to Source.png and expect it to equal Graded.png

Step 2 is the one I’m concerned about. What’s the source of Graded.png? If the comparison is being made to an image that was graded with some other tool like DaVinci Resolve, then the math will never work. It will only approximate at best, and probably won’t be good enough, especially on skin tones. Here are the problems:

  • A frame grab from a finished video will have processing besides color grading applied, such as contrast and saturation. The Shotcut color grading filter alone cannot replicate these filters. Likewise, extracting color grading values would only work if the video clip in Shotcut had all other post-processing filters correctly applied first, which would be guesswork at best.

  • Resolve’s color grading wheels have additional controls to manipulate the gamma curve that Shotcut cannot replicate. Same for Premiere.

  • Resolve and Premiere (and others) have a tool called hue-vs-hue which allows for tweaks to specific ranges of hue, saturation, and lightness. This tool is heavily used for skin tone correction in feature films. For instance, if a color grade was used to force an overall scene to be more blue (to simulate night for instance), it is common to use hue-vs-hue to bring the now-blue skin tones back closer to skin ranges. Without this tweak, the scene simply looks like it has a blue cast rather than looking like night. This is why searching for a generic color grade in professionally produced video will likely mess up skin tones. What works for the scene will not work for skin, and vice versa. An extracted correction value will need more granularity than three global color wheels.

To extract complex grades from professional video, it would be necessary to generate a LUT across the entire color space like this tool does:
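
As a rough sketch of the concept (hypothetical lattice size, and not necessarily how that tool works internally), the idea is to grade a full identity lattice and measure what comes back:

    #include <stdio.h>

    // Write an identity 3D LUT lattice as a .cube file. Run this lattice
    // through the grade, then read the graded values back to recover the
    // full color-space mapping. Red varies fastest, per the .cube format.
    int main(void)
    {
        const int N = 33;               // a common, hypothetical lattice size
        FILE *f = fopen("identity.cube", "w");
        if (!f) return 1;
        fprintf(f, "LUT_3D_SIZE %d\n", N);
        for (int b = 0; b < N; b++)
            for (int g = 0; g < N; g++)
                for (int r = 0; r < N; r++)
                    fprintf(f, "%.6f %.6f %.6f\n",
                            r / (double)(N - 1),
                            g / (double)(N - 1),
                            b / (double)(N - 1));
        fclose(f);
        return 0;
    }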

At a minimum, you may get better results if a Gaussian blur and then maybe a quarter-size linear downscale (like FFmpeg zscale) were applied to Source.png and Graded.png before comparison, so that specific tweaks like skin correction get averaged into the scene, and the resulting Color Grading filter differential values are gentler (more averaged) than harsh per-pixel values.

Is there a reason you’re using 2.2 instead of 1.0?

My workflow is as above.
I exported the frames as .BMP - both the source and the graded.
I chose a 50% gray filled image RGB(127,127,127) as my source.

I applied filters as follows:
Lift - Red + 10
Lift - Green + 20
Lift - Blue + 60
Midtones - Red + 10
Midtones - Green + 20
Midtones - Blue + 60
Highlights - Red + 10
Highlights - Green + 20
Highlights - Blue + 60

I then used my app to generate the LUT for each of these single-color images and compared them to the unmodified exported image.

I found that they all worked except the Highlights… I tweaked the numbers a bit, and once I changed the LUT generator to use r *= pow( rgain, 2.2 / rgamma ), all the tests were 100% identical.

If there is an easier way to solve my problem then please let me know.

I fly a microlight and have 3 GoPro cameras: a Hero 6, a Session 5, and a Hero 3+.
The video colors are way off between the 3 cameras.
So I record the Hero 6 in GoPro Color and the other 2 in “flat”.

I now need to adjust the 2 “flat” sources to match with the Hero 6 video.

I cannot do this manually because it just makes my eyes water and the result is not good.

Hmm, manual is usually the recommended way.

Part of the problem is that flat profiles are much lower in saturation by design. The color grading filter alone can’t change saturation to match the GoPro Color profile. Grading flat footage requires two separate filters at a minimum… a saturation boost, and a contrast adjustment (curved or linear). If your app isn’t accounting for saturation differences, I’m not sure how it’s going to generate equal image output.

Fortunately, both of those properties have scopes (graphs) to help match clips. In Shotcut, there is a video waveform scope (under the View menu) that takes the guesswork out of getting contrast and brightness matched between two videos. Just get the graphs to look similar across two clips using the brightness sliders in the Color Grading filter. There’s also a vectorscope that takes the guesswork out of saturation. Match the distance-from-center amount across clips. If needed, the RGB Parade can be used to determine the amount of white balance difference between clips (it will be visible as a different amount of separation between the R, G, B plateaus in a gray area).

If using the scopes to their full potential (and copy-pasting the Saturation + Color Grading pair across clips or trackheads), then matching footage manually is actually pretty fast. (This assumes a color-targeting LUT hasn’t skewed color uniformity, and a match is possible with only global modifications.) Using an external program to match footage would work too, but correction values would have to be recomputed and manually entered at all major exposure changes unless the GoPros are set to fixed ISO. Auto ISO would obviously skew all the values every time the camera ramps ISO.

EDIT: Regarding saturation filters, there are two. “Saturation” calculates in RGB whereas “Hue/Lightness/Saturation” calculates in YUV. I find it easier to get the look I want with H/L/S.

Are you trying to match the values in the Shotcut UI? Or are you trying to match the values that get saved in the .mlt file for the filter?

The Shotcut UI applies a scaling to the filter parameters to make the UI parameters more friendly:
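
Purely as a hypothetical illustration (the scale and offset here are made up; the linked source has the real mapping), the idea is a conversion layer like:

    // HYPOTHETICAL: the UI percentage is rescaled before being stored as
    // the .mlt property, so project-file values won't match the UI numbers.
    double ui_percent_to_mlt(double ui_percent, double scale, double offset)
    {
        return offset + scale * (ui_percent / 100.0);
    }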

Perfect, yes that is exactly the problem. Thanks Brian

Thank you Austin, I will give these a try and see if the color grading is even required.

Here is a tool I use to evaluate camera performance. You can use it in conjunction with Shotcut’s scopes:

https://www.bhphotovideo.com/c/product/813250-REG/Kodak_1277144_Gray_Card_Plus_9x12.html?sts=pi&pim=Y

On the back are the 8-bit R, G and B values that the color patches are supposed to be.

On a waveform monitor, the 18% gray patch should be at 43.4% (formerly known as IRE units), or 111 in 8-bit digital space, for a gamma of 1 / 0.45 (2.2). I find a so-called “eyedropper” tool or color picker very useful: it reads out the R, G, and B values under the mouse pointer. I have some PureBasic code which also reads out the Y (luma) value according to the BT.709 spec.
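
For reference, in 8-bit limited range (16 = 0%, 235 = 100%) the waveform percentage and the code value relate like this, which is where 43.4% -> 111 comes from:

    // Map a waveform percentage to an 8-bit limited-range code value:
    // 43.4% -> 16 + 219 * 0.434 = 111 (rounded).
    int waveform_pct_to_code(double pct)
    {
        return (int)(16.0 + 219.0 * pct / 100.0 + 0.5);
    }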

Use the white patch of the above card to set gain, the black patch to set “lift”, and the 18% gray to set gamma. Make sure the lighting is even across the chart. White should be 235 (8-bit digital) and black should be 16.

Hope this helps match your 3 GoPro cameras.

A colleague of mine always recommends DSC charts but I find them way too expensive. I think he owns stock in the DSC company :slight_smile: