Lossless Export Formats

It’s possible that ffmpeg is giving me 16 - 235 by default; this is something that deserves further scrutiny as I’m not fully up to speed on the subject. Maybe I need to address it in my ffmpeg script?
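One way to check what the source actually declares is ffprobe; something along these lines should print the tagged range and matrix for the same test file, if the file carries that metadata at all:

ffprobe -v error -select_streams v:0 -show_entries stream=color_range,color_space,color_transfer,color_primaries -of default=noprint_wrappers=1 "D:\Videos\test.avi"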

Here is the ffmpeg code with BT.709 color information:

ffmpeg -y  -i "D:\Videos\test.avi"  -c:v mpeg2video  -pix_fmt yuv422p  -vb 50M  -minrate 50M  -maxrate 50M  -vf lutrgb=r='clip(val,44,199)',lutrgb=g='clip(val,44,199)',lutrgb=b='clip(val,44,199)',scale=out_color_matrix=bt709:out_range=limited  -color_primaries bt709  -color_trc bt709  -colorspace bt709 -c:a pcm_s16be  -f vob  clipped.mpg

There seem to be some problems with the latest beta’s waveform scope; it behaves differently than older versions. This version clamps the display to limited range, but the actual YUV values are unclamped when exporting (still there), so there is a disconnect between the scope and the actual values. I’ll make a separate post about it.

@chris319 - a potential issue with clipping in RGB and then converting to YUV422P8 is that the conversion (especially the subsampling step) will generate new illegal values.

Illegal in what sense? I examine the values in the final exported yuv .mpg file and don’t see illegal values. This is using a tool external to Shotcut.
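For anyone who wants to double-check without a separate tool, ffmpeg’s own signalstats filter can report per-frame Y/U/V minima and maxima; a rough sketch, run against the clipped output from above:

ffmpeg -i clipped.mpg -vf signalstats,metadata=mode=print -an -f null -

Each frame then gets lavfi.signalstats.YMIN/YMAX/UMIN/UMAX/VMIN/VMAX lines printed to stderr.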

I would love to port my PureBasic scope program to procedural C (not C++) but have no experience with Windows graphics in C.

Potentially new illegal values - i.e. out-of-gamut errors, out-of-range values, however you want to define them - since new values are generated by the subsampling, even though you started 100% within limits.

It’s usually not a big deal for most types of content - and you’re usually allowed ~1% leeway. Check with whoever you’re submitting to.

Was that a typo? R,G,B clipped to the range [44,199] seems drastic. That is going to reduce the contrast and look washed out. Black will not be “black” anymore, white will not be “white” anymore, colors will be off, saturation will be low. That’s probably why you’re not seeing illegal values.

Is there a reason why you are targeting EBU r103?

Clients require it. We had a big discussion about broadcast standards a few weeks ago.

No you’re not.

I’ve checked all of this out using the tool YOU helped me with and it passes muster. Do you have evidence to the contrary? I need proof, not supposition.

ffmpeg is funny but those are the values needed to get RGB 5 - 246.

Look at EBU r103

“Certain operations and signal processing may produce relatively benign gamut overshoot errors in the picture therefore, the EBU further recommends that measuring equipment should indicate an “Out-of-Gamut” occurrence only after the error exceeds 1% of an integrated area of the active image. Signals outside the active picture area shall be excluded from measurement.”

This is very common in submission guidelines for other clients and other, non-EBU broadcasters as well. EBU is not common for USA/North American targets. But for EBU, that 1% of the active image area refers to values between 1 and 4, or 247 and 254, being allowed in either Y or the converted R, G, B channels.

Professional legalizers typically filter the channels before measuring as well, and they will give you a percentage and/or a hotspot visualization.

Yes, I totally believe using those clip values will “pass” mathematically. That’s my point - it only “passes” because the contrast is so low.

I suspect it will fail on other submission criteria if you use those clip values. Test it on a colorbars video. Does it “look” right to you? Does “black” look black to you? Do the colors look right to you? Do skin tones look right?

Yes.
Yes.
Yes.
Yes.

I’ll let you write that code.

Find me some U.S. or Canadian standards.

If it’s “normal” YUV or RGB, you’re losing ~40% of the data by clipping that much. There is no way it would look “normal” if ffmpeg is correctly clipping R,G,B to [44,199], unless something else is going on or it’s not working as advertised. So your “black” level would be RGB 44,44,44 before the YUV conversion if it’s working correctly? (And “white” would be RGB 199,199,199.) Just think about that for a second…

In 8-bit RGB, “black” is typically 0,0,0 and white is 255,255,255. Sure, for some strict standards some compromises are made, but try making a black-to-white gradient from 44,44,44 to 199,199,199 in Photoshop or an image editor. Does it look right to you?
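If you don’t have Photoshop handy, ffmpeg itself can fake that comparison; a rough sketch (output name arbitrary) that builds a 0-255 grey ramp and then clips it with the same lutrgb expression:

ffmpeg -y -f lavfi -i color=black:s=256x64 -vf geq=r='X':g='X':b='X',lutrgb=r='clip(val,44,199)':g='clip(val,44,199)':b='clip(val,44,199)' -frames:v 1 ramp_clipped.png

Drop the lutrgb part to get the unclipped ramp for a side-by-side look.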

I’m not going to argue about it any more. Run the code and see for yourself.

I’m using this from post 13

just no audio

ffmpeg -y  -i test_420.mp4 -c:v mpeg2video  -pix_fmt yuv422p  -vb 50M  -minrate 50M  -maxrate 50M  -vf lutrgb=r='clip(val,44,199)',lutrgb=g='clip(val,44,199)',lutrgb=b='clip(val,44,199)',scale=out_color_matrix=bt709:out_range=limited  -color_primaries bt709  -color_trc bt709  -colorspace bt709 output_clipped.mpg

The waveforms were generated by ffmpeg using -vf waveform=g=green
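For reference, the full one-liner to dump a single frame of that scope to an image would be something like this (output name arbitrary):

ffmpeg -i output_clipped.mpg -vf waveform=g=green -frames:v 1 waveform.png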

The first is the input file. It was normal range, AVC, 4:2:0. It looks roughly “normal”, or what you would expect: ~Y 16-235, or 0-100 IRE.

The second is the same file, but clipped to 44,199 in RGB with lutrgb in ffmpeg, and exported to 4:2:2 MPEG-2 as per your command line. Notice the lack of detail in the highlights and shadows. It’s all one “shade”: low contrast and no separation of details. The “ocean waves” look like a single grey block. Hair highlights are gone, as are the specular highlights on the necklace and ball.

This would get rejected, r103 or otherwise - any broadcast standard - because of improper levels. There would be a note from the QCer saying “objectionable clipping” or “excessive black crushing and highlight compression”

What are the maximum and minimum R,G,B values in the corrected image? Not scope traces, actual sample values.

What is your solution for making this or any image r103 compliant? You know the standard and you have the tools, so have at it.

There is one U.S. TV network which explicitly states in its specs that RGB values will be 0 - 700 mV, a much more reasonable requirement. The output of a consumer camcorder would thus have to be processed to meet this requirement.

That partially depends on how you convert it back to RGB to measure. Recall the actual file is 4:2:2 YUV. Which kernel is used to resize the chroma planes can affect your values, as can how the chroma locations are interpreted. On certain types of content, e.g. broadcast graphics and overlays, there can be major deviations depending on what is used. So what the QCer uses for the RGB check and what you use are not always the same thing. But YUV checking is YUV - no additional sources of error.

I used AVS Bilinear to convert the ffmpeg lutrgb YUV file to RGB.

RGB min/max values, by chroma upsampling kernel:

AVS Point
min 30,23,27
max 219,220,216

AVS Bilinear
min 33,23,26
max 224,220,216

AVS Bicubic
min 33,23,26
max 223,221,216

AVS Lanczos
min 33,22,27
max 223,222,216

VPY/zimg Point
min 11,25,17
max 251,214,221

VPY/zimg Bilinear
min 30,35,25
max 227,208,216

VPY/zimg Bicubic
min 32,35,26
max 226,210,216

VPY/zimg Lanczos
min 29,35,25
max 226,221,216

You use a broadcast legalizer plugin or program, along with manual color correction. Some have EBU R103 vs. EBU R103 “strict” settings. Some have different options for handling the corrections: soft vs. hard clipping, knee handling.

There are various 3rd party services that can do things like this for you as well.

If in doubt, check the spec sheet or ask the client directly. In North America, 9 times out of 10 the HD delivery spec for commercials or main programme will be something like:

1) luminance range 0-100 IRE (some allow a “wiggle room” range like -1% to +103%; some are explicitly strict, no deviation)
2) <75% saturation
3) no illegal broadcast colors

But note that some combinations of 0-100 IRE and <75% saturation on the vectorscope can still produce broadcast-illegal colors. Conditions 1 and 2 are relatively easy to fulfill; it’s the latter problem that causes the most headaches for people. Also, hard clipping can be flagged, because it often does not look nice. That’s why people spend money on expensive broadcast legalizers and filters, or use 3rd party services.
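For a free eyeball check of conditions 1 and 2, ffmpeg/ffplay’s built-in scopes are usable; a rough sketch (file name is a placeholder):

ffplay -i delivery_master.mov -vf vectorscope=g=green

That gives a vectorscope with a graticule so you can see roughly where the saturation sits, and the waveform filter mentioned earlier covers the luminance range. Neither will catch condition 3, the out-of-gamut combinations, which is exactly where the headaches come from.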

Those solutions are suitable if you’re a broadcaster and have the bucks.

Good point.

That’s the problem with these specs. They are written for RGB, which is not actually transmitted. A YUV signal is what’s actually transmitted, so make BT.709 the delivery spec, with Y 16-235 and U/V 16-240.

It’s way too difficult otherwise, especially for EBU strict.

Recall the negative RGB values generated from some (individually legal) Y,Cb,Cr combinations - those are the tricky out-of-gamut errors that even some lesser plugins don’t catch. “Negative” means below zero, and that’s a critical fail. No 1% leeway there.
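A rough worked example, using the usual rounded 8-bit BT.709 limited-range decode (R = 1.164*(Y-16) + 1.793*(Cr-128), and so on): Y=16, Cb=240, Cr=16 are each individually within their legal ranges, yet they decode to roughly R = -201, G = +36, B = +237. That’s an extreme corner case, but milder versions of the same thing happen on real content, and a negative channel is a hard fail no matter how small the area.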

If you do everything in RGB but then submit YCbCr 4:2:2, that subsampling step can once again generate spurious values, and how it’s calculated - what algorithm/kernel is used - can easily vary the values by +/-10.

This is not an easy topic to do properly

Here is another broadcaster’s actual spec:

Except on modern equipment, digital 16 = 0 IRE or 0% as I prefer to state it. The 7.5 IRE setup is obsolete, a relic of analog NTSC. This is the same as the 0 - 700 mV spec except they are monitoring Y (luminance) and not RGB.

Some ffmpeg code which clips Y to 16 - 235 and U/V to 16 - 240:

ffmpeg -y  -i "test.avi"  -c:v mpeg2video  -pix_fmt yuv422p  -vb 50M  -minrate 50M  -maxrate 50M  -vf lutyuv=y='clip(val,22,228)',lutyuv=u='clip(val,16,240)',lutyuv=v='clip(val,16,240)',scale=out_color_matrix=bt709:out_range=limited  -color_primaries bt709  -color_trc bt709  -colorspace bt709 -c:a pcm_s16be  -f vob  clippedyuv.mpg

Shouldn’t it be clip(val,16,235) for Y 16-235?

In the broadcast-legal discussion, unfortunately there are still many YUV values that are “legal” when assessed individually but produce “illegal” out-of-gamut broadcast colors, and those won’t be picked up by clipping.