If it’s “normal” YUV or RGB, you’re losing ~40% of the data by clipping that much. There is no way it would look “normal” if ffmpeg were correctly clipping R,G,B to [44,199] - unless something else is going on, or it’s not working as advertised. So your “black” level would be RGB 44,44,44 before the YUV conversion if it’s working correctly? (And “white” would be RGB 199,199,199.) Just think about that for a second…
In 8-bit RGB, “black” is typically 0,0,0 and white is 255,255,255. Sure, some strict standards make compromises, but try making a black-to-white gradient from 44,44,44 to 199,199,199 in Photoshop or another image editor. Does it look right to you?
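To put a number on that ~40% figure: clipping full-range 8-bit RGB to [44,199] discards a large fraction of the available code values. A quick back-of-the-envelope check (plain Python, just arithmetic):

```python
lo, hi = 44, 199            # the clip range in question
kept = hi - lo + 1          # 156 of the 256 8-bit codes survive
lost = 1 - kept / 256
print(f"codes kept: {kept}, range discarded: {lost:.0%}")   # ~39%
```

That’s roughly 40% of the tonal range thrown away before you even encode.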
The 2nd is the same file, but clipped to [44,199] in RGB with lutrgb in ffmpeg, then encoded to 4:2:2 MPEG2 as per your command line. Notice the lack of detail in the highlights and shadows - it’s all one “shade”: low contrast and no separation of detail. The “ocean waves” look like a single grey block. Hair highlights are gone, as are the specular highlights on the necklace and ball.
This would get rejected, R103 or any other broadcast standard, because of improper levels. There would be a note from the QCer saying “objectionable clipping” or “excessive black crushing and highlight compression.”
There is one U.S. TV network which explicitly states in its specs that RGB values will be 0 - 700 mV, a much more reasonable requirement. The output of a consumer camcorder would thus have to be processed to meet this requirement.
That partially depends on how you convert it back to RGB to measure. Recall the actual file is 4:2:2 YUV. What kernel is used to resize the chroma planes can affect your values, as can how the chroma locations are interpreted. On certain types of content, e.g. broadcast graphics and overlays, there can be major deviations depending on what is used. So what the QCer uses for the RGB check and what you use are not always the same thing. But YUV checking is YUV - no additional sources of error.
I used AVS Bilinear to convert to RGB for the ffmpeg lutrgb YUV file
RGB values

AVS Point          min 30,23,27   max 219,220,216
AVS Bilinear       min 33,23,26   max 224,220,216
AVS Bicubic        min 33,23,26   max 223,221,216
AVS Lanczos        min 33,22,27   max 223,222,216
VPY/zimg Point     min 11,25,17   max 251,214,221
VPY/zimg Bilinear  min 30,35,25   max 227,208,216
VPY/zimg Bicubic   min 32,35,26   max 226,210,216
VPY/zimg Lanczos   min 29,35,25   max 226,221,216
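The kernel dependence is easy to reproduce. Here’s a toy sketch (plain Python, hypothetical sample values, BT.709 limited-range coefficients) of a hard chroma edge in 4:2:2 - the graphics/overlay case - where point vs. linear chroma upsampling gives very different R values for the same pixel:

```python
# One 4:2:2 row: 4 luma samples, 2 Cr samples (horizontally subsampled 2:1)
Y  = [200, 200, 200, 200]
Cr = [90, 160]                      # hard chroma edge, e.g. a graphics overlay

# Point (nearest) upsampling just repeats each chroma sample
cr_point  = [90, 90, 160, 160]
# Linear upsampling interpolates between the two sample sites
cr_linear = [90, (90 + 160) / 2, 160, 160]

def red(y, cr):
    # BT.709 limited-range YCbCr -> full-range R' (unclipped)
    return 1.164 * (y - 16) + 1.793 * (cr - 128)

# Same pixel (index 1), same file - different R depending on the kernel
r_point  = red(Y[1], cr_point[1])    # ~146
r_linear = red(Y[1], cr_linear[1])   # ~209
print(round(r_point), round(r_linear))
```

On smooth content the spread is small, but on sharp synthetic edges like this the “measured” RGB extremes can swing wildly between kernels - which is exactly why the AVS and zimg numbers above disagree.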
You use a broadcast legalizer plugin or program, along with manual color correction. Some have EBU 103 vs. EBU 103 “strict” settings. Some offer different options for handling the corrections: soft vs. hard clipping, knee handling.
There are various 3rd-party services that can do things like this for you as well
If in doubt, check the spec sheet or ask the client directly. In North America, 9/10 times for HD delivery of commercials or main programme it will be something like: 1) luminance range 0-100 IRE (some specify a “wiggle room” range like -1% to +103%, some are explicitly strict with no deviation), 2) <75% saturation, and 3) no illegal broadcast colors. But note that some combinations of 0-100 IRE and <75% saturation on the vectorscope can still produce broadcast-illegal colors. Conditions 1 and 2 are relatively easy to fulfill; it’s the latter problem that causes the most headaches for people. Also, hard clipping can be flagged, because it often does not look nice. That’s why people spend money on expensive broadcast legalizers and filters, or use 3rd-party services.
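A rough sketch of why condition 3 is the hard one, in plain Python (BT.709 limited-range matrix; the function names and the sample pixel are mine for illustration, not from any spec): a pixel whose Y, Cb and Cr are each individually in legal range can still convert to out-of-gamut R'G'B'.

```python
def ycbcr_to_rgb709(y, cb, cr):
    """BT.709 limited-range 8-bit YCbCr -> full-range R'G'B' (unclipped)."""
    r = 1.164 * (y - 16) + 1.793 * (cr - 128)
    g = 1.164 * (y - 16) - 0.213 * (cb - 128) - 0.533 * (cr - 128)
    b = 1.164 * (y - 16) + 2.112 * (cb - 128)
    return r, g, b

def channels_in_range(y, cb, cr):
    # Per-component check: Y in [16,235], Cb/Cr in [16,240]
    return 16 <= y <= 235 and 16 <= cb <= 240 and 16 <= cr <= 240

def gamut_legal(y, cb, cr):
    # Condition 3: does the pixel map into the 8-bit RGB cube?
    return all(0 <= c <= 255 for c in ycbcr_to_rgb709(y, cb, cr))

# Every component individually legal, but B lands well above 255:
y, cb, cr = 200, 200, 60
print(channels_in_range(y, cb, cr))   # True
print(gamut_legal(y, cb, cr))         # False
```

Simple per-channel clipping passes this pixel; only a gamut check (or a legalizer) catches it.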
Those solutions are suitable if you’re a broadcaster and have the bucks.
Good point.
That’s the problem with these specs. They are written for RGB, which is not actually transmitted. A YUV signal is what’s actually transmitted, so make BT.709 the delivery spec, with 16-235/240.
It’s way too difficult otherwise, especially for EBU strict.
Recall the negative RGB values generated from some (individually legal) Y,Cb,Cr values - those are the tricky out-of-gamut errors that even some lesser plugins don’t catch. “Negative” means below zero: a critical fail. No 1% leeway there.
If you do everything in RGB but then submit YCbCr 4:2:2, that subsampling step can once again generate spurious values, and how it’s calculated - what algorithm/kernel is used - can easily vary the values +/-10
Except on modern equipment, digital 16 = 0 IRE, or 0% as I prefer to state it. The 7.5 IRE setup is obsolete, a relic of analog NTSC. This is the same as the 0-700 mV spec, except they are monitoring Y (luminance) and not RGB.
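In other words, with no setup the mapping is linear from code 16 to code 235. A one-liner sketch of that convention (plain Python; the function name is mine):

```python
def y_to_ire(y):
    # 8-bit limited-range Y -> IRE (%), modern digital convention: no 7.5 setup
    return (y - 16) / 219 * 100

print(y_to_ire(16), y_to_ire(235))   # 0.0 100.0
```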
In the broadcast-legal discussion, unfortunately there are still many YUV values that are “legal” when assessed individually, but still produce “illegal” out-of-gamut broadcast colors that won’t be picked up by clipping
So black = 16 and white = 235. 246 (R103) is the “fudge factor” of ~4.68% for artifacts and overshoots. Being a fudge factor, it is not the target value.
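The 4.68% figure is just the headroom from 235 up to 246, expressed relative to the 235 white point:

```python
headroom = (246 - 235) / 235 * 100
print(f"{headroom:.2f}%")   # 4.68%
```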
Same, but using the ffmpeg MPEG2 settings above. The YUV analyzer reports Ymin = 15, Ymax = 235. More quantization and banding than expected. I didn’t post the screenshot of the image, but you can tell by the lumpy waveform. Probably need to tweak the encoding settings. You can see the 15 on the waveform as well.
Output of 2 fed into ffmbc to encode xdcamhd422. YUV analyzer reports Ymin = 16, Ymax=235. Smoother, less banding.
That’s why you check your work after the last processing step and adjust the ffmpeg parameters as needed to catch any potentially illegal values before it goes out the door…
Again, you don’t always have your pick of video codecs; depends what the client wants/expects. If he wants MPEG-2 then MPEG-2 it is.
Are you sure that’s right? You have it clipping at 235 but Ymax is 255.
But there is a problem with those ffmpeg MPEG2 settings used. XDCAM HD422 is MPEG2 too, and its results were better, with less banding - look at GOP length, b-frames, and rate control, for example. If you look in the ffmbc code, you might be able to emulate some of its settings. Or just use ffmbc directly.
Not possible for ffmpeg, except for Y
MPEG2 is YUV - you’re not going to get RGB min or max.
And if you want R,G,B, you have to specify the method of RGB conversion used. You can easily get +/- sometimes large deviations because of chroma up/down sampling, as you can see in the earlier example above
Most importantly strict “broadcast legal” is nearly impossible to do with ffmpeg, because of condition (3) above
It needs to be reiterated - even if you clip Y,U,V to some range, you’re going to miss many “out of gamut” errors (broadcast-illegal colors). Potentially millions of combinations are still “illegal” when you clip to Y [16,235], CbCr [16,240]. There are MANY values in the middle that are still illegal - you can’t just clip everything to 128 grey. It’s the combination of Y,Cb,Cr that makes it difficult, and that won’t be picked up by individual channel checks.
e.g. Y=40, Cb=100, Cr=100 results in negative R and B values - broadcast illegal
There are many 8bit YCbCr values that do not “map” to the standard 8bit RGB color cube. Those are the “out of gamut” values
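Working that Y=40, Cb=100, Cr=100 example through the BT.709 limited-range matrix (a sketch; coefficients rounded to three decimals):

```python
y, cb, cr = 40, 100, 100   # each component individually within legal range

r = 1.164 * (y - 16) + 1.793 * (cr - 128)
g = 1.164 * (y - 16) - 0.213 * (cb - 128) - 0.533 * (cr - 128)
b = 1.164 * (y - 16) + 2.112 * (cb - 128)

print(round(r, 1), round(g, 1), round(b, 1))   # -22.3 48.8 -31.2
```

R and B come out negative: the pixel has no home in the 8-bit RGB cube, so it fails a gamut check even though every individual channel passed the range clip.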
Also, detection is one thing, but what to do about it is another. Do you reduce sat, change hue, reduce luminance ? You can get severe artifacts depending on the distribution of “illegal” pixels .
If the client is not so strict, you might get by, but if they have a QCer who strictly enforces standards, it’s very common to get illegal broadcast colors if you only clip the ends.