I am struggling with this, some help please.
My aim is to make a Windows app that will compute the color grading parameters from the image data.
I have the framework done but the image data is confusing.
I have a procedure which compares every pixel in two images and outputs the differences in R, G and B as three separate statistics.
I created a Shotcut color grading filter and changed just one setting (Shadows B +20%)
When I compare the images I get all 3 colors changing.
R = 1%
G = 0.8%
B = 11.8%
I understand that the Shadows parameter will have a small effect on most of the Blue data, so my 11.8% will not correlate with the 20% adjustment. This is fine.
But why do the Red and Green bytes change?
PS: I used the export frame as BMP format to save the images for comparison.
I think that the problem is that I have interpolation and my source image is bigger than my render image…
So changing the Blue on the source may change Green and Red on interpolated image data.
In general, I would not be concerned about anything under 1.5%. Shotcut processes video in YUV (unless told otherwise), but the color grading filter is RGB. That means YUV -> RGB -> YUV conversion is happening, which is lossy +/- 2 RGB values. There isn’t anything that can be done about that, except move to higher bit-depth video so the loss is less perceptible. And even that doesn’t help our case here because Shotcut output is currently capped at 8-bit.
I also could not reproduce a rounding bug, even when printing the entire LUT and inspecting each value. But I agree that rounding should be used instead of truncation, so I have submitted a fix here using lrint():
Can anybody help me?
I need to reverse the Color Grading filter calcs and solve for input Lift, Gain and Gamma
For some odd reason the results indicate that the Shotcut is not using the code that Austin posted above.
Perhaps an older version.
This code gives the same LUT as Shotcut (just showing the code for one color):
// Convert to gamma 2.2
double r = pow( (double)i / 255.0, 1.0 / 2.2 );
// Apply lift
r += rlift * ( 1.0 - r );
// Apply gamma
r = pow( r, 2.2 / rgamma );
// Apply gain
r *= pow( rgain, 2.2 / rgamma ); <<<< not as expected
// Update LUT
self->rlut[ i ] = (int)(r * 255.0);
I need 3 procedures:
ComputeLift(X, Y, Gain, Gamma)
ComputeGain(X, Y, Lift, Gamma)
ComputeGamma(X, Y, Lift, Gain)
I check that the results do not cause overflow / underflow so the values can always be assumed to be within range.
Bring it into Shotcut, use one Color Grading filter only, export frame as Graded.png
Compare Graded.png to Source.png to calculate differential
Apply differential to Source.png and expect to be equal to Graded.png
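Step 2 can be done with a straightforward per-channel statistic. Here is a sketch operating on raw interleaved RGB buffers (decoding the PNG/BMP into such a buffer is assumed to happen elsewhere); it reports the mean absolute difference of each channel as a percentage of full scale, like the R/G/B figures quoted earlier:

```c
#include <stddef.h>

/* Mean absolute per-channel difference between two same-size RGB images,
 * expressed as a percentage of full scale (255).
 * Pixels are interleaved R,G,B bytes; npixels is the pixel count. */
void rgb_diff_stats(const unsigned char *a, const unsigned char *b,
                    size_t npixels, double pct[3])
{
    unsigned long long sum[3] = { 0, 0, 0 };
    for (size_t i = 0; i < npixels; i++)
        for (int c = 0; c < 3; c++) {
            int d = (int)a[3 * i + c] - (int)b[3 * i + c];
            sum[c] += (unsigned long long)(d < 0 ? -d : d);
        }
    for (int c = 0; c < 3; c++)
        pct[c] = 100.0 * (double)sum[c] / ((double)npixels * 255.0);
}
```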
Step 2 is the one I’m concerned about. What’s the source of Graded.png? If the comparison is being made to an image that was graded with some other tool like DaVinci Resolve, then the math will never work. It will only approximate at best, and probably won’t be good enough, especially on skin tones. Here are the problems:
A frame grab from a finished video will have processing besides color grading applied, such as contrast and saturation. The Shotcut color grading filter alone cannot replicate these filters. Likewise, extracting color grading values would only work if the video clip in Shotcut had all other post-processing filters correctly applied first, which would be guesswork at best.
Resolveâs color grading wheels have additional controls to manipulate the gamma curve that Shotcut cannot replicate. Same for Premiere.
Resolve and Premiere (and others) have a tool called hue-vs-hue which allows for tweaks to specific ranges of hue, saturation, and lightness. This tool is heavily used for skin tone correction in feature films. For instance, if a color grade was used to force an overall scene to be more blue (to simulate night for instance), it is common to use hue-vs-hue to bring the now-blue skin tones back closer to skin ranges. Without this tweak, the scene simply looks like it has a blue cast rather than looking like night. This is why searching for a generic color grade in professionally produced video will likely mess up skin tones. What works for the scene will not work for skin, and vice versa. An extracted correction value will need more granularity than three global color wheels.
To extract complex grades from professional video, it would be necessary to generate a LUT across the entire color space like this tool does:
At a minimum, you may get better results if a Gaussian blur then maybe a quarter-size linear downscale (like FFmpeg zscale) was applied to Source.png and Graded.png before comparison, so that specific tweaks like skin correction would get averaged into the scene, and the resulting Color Grading filter differential values would be gentler (more averaged) than harsh per-pixel values.
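A crude stand-in for that blur + downscale step, assuming raw interleaved RGB and dimensions divisible by 4 (real scalers like FFmpeg’s do this with much better filtering):

```c
/* Downscale an interleaved RGB image to quarter size by averaging each
 * 4x4 block of pixels, which also smooths out localized grading tweaks.
 * Assumes w and h are multiples of 4; dst must hold (w/4)*(h/4)*3 bytes. */
void quarter_downscale(const unsigned char *src, int w, int h,
                       unsigned char *dst)
{
    int ow = w / 4;
    for (int oy = 0; oy < h / 4; oy++)
        for (int ox = 0; ox < ow; ox++)
            for (int c = 0; c < 3; c++) {
                unsigned sum = 0;
                for (int dy = 0; dy < 4; dy++)
                    for (int dx = 0; dx < 4; dx++)
                        sum += src[((oy * 4 + dy) * w + ox * 4 + dx) * 3 + c];
                /* +8 rounds the 16-sample average to nearest */
                dst[(oy * ow + ox) * 3 + c] = (unsigned char)((sum + 8) / 16);
            }
}
```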
My workflow is as above.
I exported the frames as .BMP - both the source and the graded.
I chose a 50% gray filled image RGB(127,127,127) as my source.
I applied filters as follows:
Lift - Red + 10
Lift - Green + 20
Lift - Blue + 60
Midtones - Red + 10
Midtones - Green + 20
Midtones - Blue + 60
Highlights - Red + 10
Highlights - Green + 20
Highlights - Blue + 60
I then used my app to generate the LUT for each of these single color images and compared them to the unmodified exported image.
I found that they all worked except the Highlights… I tweaked the numbers a bit, and once I changed the LUT generator to use r *= pow( rgain, 2.2 / rgamma ), all the tests were 100% identical.
If there is an easier way to solve my problem then please let me know.
I fly a microlight and have 3 GoPro cameras: a Hero 6, a Session 5 and a Hero 3+.
The video colors are way off between the 3 cameras.
So I record the Hero 6 in GoPro Color and the other 2 in “flat”.
I now need to adjust the 2 “flat” sources to match the Hero 6 video.
I cannot do this manually because it just makes my eyes water and the result is not good.
Part of the problem is that flat profiles are much lower in saturation by design. The color grading filter alone can’t change saturation to match the GoPro Color profile. Grading flat footage requires two separate filters at a minimum… a saturation boost, and a contrast adjustment (curved or linear). If your app isn’t accounting for saturation differences, I’m not sure how it’s going to generate equal image output.
Fortunately, both of those properties have scopes (graphs) to help match clips. In Shotcut, there is a video waveform scope (under the View menu) that takes the guesswork out of getting contrast and brightness matched between two videos. Just get the graphs to look similar across two clips using the brightness sliders in the Color Grading filter. Thereâs also a vectorscope that takes the guesswork out of saturation. Match the distance-from-center amount across clips. If needed, the RGB Parade can be used to determine the amount of white balance difference between clips (it will be visible as a different amount of separation between the R, G, B plateaus in a gray area).
If using the scopes to their full potential (and copy-pasting the Saturation + Color Grading pair across clips or trackheads), then matching footage manually is actually pretty fast. (This assumes a color-targeting LUT hasn’t skewed color uniformity, and a match is possible with only global modifications.) Using an external program to match footage would work too, but correction values would have to be recomputed and manually entered at all major exposure changes unless the GoPros are set to fixed ISO. Auto ISO would obviously skew all the values every time the camera ramps ISO.
EDIT: Regarding saturation filters, there are two. “Saturation” calculates in RGB whereas “Hue/Lightness/Saturation” calculates in YUV. I find it easier to get the look I want with H/L/S.
On the back are the 8-bit R, G and B values that the color patches are supposed to be.
On a waveform monitor, the 18% gray patch should be at 43.4% (formerly known as IRE units), or 111 in 8-bit digital space, for a gamma of 1 / 0.45 (2.2). I find a so-called “eyedropper” tool or color picker very useful. It reads out the R, G and B values under the mouse pointer. I have some PureBasic code which also reads out the Y (lum) value according to BT.709 specs.
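For reference, the BT.709 luma such an eyedropper reads can be computed directly (full-range values assumed here, no studio-swing offset):

```c
/* BT.709 luma from full-range 8-bit R,G,B; the coefficients sum to 1.0,
 * so a neutral gray maps to itself. */
double luma709(unsigned char R, unsigned char G, unsigned char B)
{
    return 0.2126 * R + 0.7152 * G + 0.0722 * B;
}
```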
Use the white patch of the above card to set gain, the black patch to set “lift”, and the 18% gray to set gamma. Make sure the lighting is even across the chart. White should be 235 (8-bit digital) and black should be 16.
Hope this helps match your 3 GoPro cameras.
A colleague of mine always recommends DSC charts, but I find them way too expensive. I think he owns stock in the DSC company.