Video Scope Calibration

The solution is simple: the scope should read in digital units (0 - 255) rather than IRE units. This circumvents the whole tap dance of limited and full range, flags, etc. and everybody’s happy.

The scope should not alter the video it’s measuring.

Austin, can you cite documentation for the existence of IRE units in “full range” video? I’ve never associated IRE units with full range.

Shotcut’s waveform is an IRE scope, so that’s the only definition we have if there’s going to be comparison with FFmpeg. I’m assuming that’s why this thread exists, but I feel like I have no idea what’s going on anymore. :rofl:

The main thing I’ve deduced for myself is that Shotcut’s scopes are accurate to their definitions, and they are extremely useful just the way they are. We could say that IRE and digital Y-value waveforms are specialized tools for solving very different problems.

Suppose I wanted to know “have I reached 100% white yet” during a color grade. If IRE says 100%, then I have reached white regardless of color space or range because those are factored into the scale. But if a digital Y-value waveform shows me Y 235, I don’t know if I’ve hit white unless I also know whether the video is full or limited range. In FFmpeg’s case, it doesn’t move the graticules to reflect the black and white points in conjunction with the input range, so there is no cue to tell me where white is. Guessing the range by how low the Y values go is not acceptable because low-contrast log footage from Panasonic cameras is in full range and may never drop below Y 16. It could easily look like legal range to the eye, but it isn’t. Basically, keeping track of these technical weeds slows down the efficiency and artistic mindset of color work, so IRE is sometimes a better fit for creative processes.

A digital Y-value scope is definitely a better tool for diagnostic work, no argument there.

IRE by design does not show us actual YUV data. That isn’t its goal. It’s not a hex editor.

Exactly. IRE units by nature are a percentage, a relative measure of voltage compared to the maximum voltage possible. Scaling to the units of 0-100% regardless of the input format is implied in the very definition of IRE. Because 0% and 100% are tied to the black and white points (for BT.709 and sRGB at least), scaling will be tied to the range because range determines black and white Y values.
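To make that concrete, here is the back-of-the-envelope mapping I have in mind for 8-bit limited range (my own illustration, not a quote from any spec):

IRE ≈ 100 × (Y − 16) / (235 − 16)

so Y 16 reads 0 IRE, Y 235 reads 100 IRE, and a mid gray around Y 126 reads roughly 50 IRE. For full-range material the same idea applies with black and white at Y 0 and Y 255 instead.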

Exactly. This is the job description of an IRE waveform. It is not a digital value waveform or a YUV hex dump in any way, shape, or form. There are other scopes for that. I’m pretty sure that crossing the job descriptions of different scopes is responsible for 92.7% of the confusion in this thread, which is why I tried to describe IRE from the beginning with my War and Peace second edition. :slight_smile:

100% agree. But that’s not how IRE works or what it does. IRE is not the tool for this kind of job.

As you noted earlier, Shotcut measures timeline output rather than input files, so there could be a layer of separation there too.

I grasp everything you’re saying, and agree 100% if we’re just talking about how digital Y-value scopes work. What I’m trying to head off is an output comparison of Shotcut’s IRE waveform to absolute digital Y-value scopes like FFmpeg (not in IRE mode), and then expecting them to show the same thing. They won’t; they’ll be different by design.

If FFmpeg weren’t bugged, it would always match Shotcut in IRE mode and we could all go home haha.

We’re making progress millimeter by millimeter. Does anyone disagree that the unambiguous digital Y (0 - 255) should be the way to go in Shotcut?

Austin: you need to check your thinking about “white” and IRE units. I’m not going to address the issue here and now unless you need me to in a PM.

Digital units are only “unambiguous” in the context of ffmpeg reading a file directly, because -vf waveform is looking at the original video data: 8-bit Y code values will always be in the range 0-255. So IRE 0-100 will always correspond to Y 16-235 in that case.

Otherwise, not necessarily; in the Shotcut context it can add confusion because you’re measuring the output, not the input. Are you working in YUV or RGB? What if you add an RGB overlay? Are the units Y, or converted Y?

Almost all do, by convention: all computer video players and all web playback use computer RGB. That is, you’re using the other set of equations, not the full-range equations: Y 16-235 gets “mapped” to RGB 0-255.
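Roughly speaking, for gray values and ignoring the chroma terms (my shorthand, not a quote from any spec): RGB ≈ (Y − 16) × 255 / 219, so Y 16 maps to RGB 0, Y 235 maps to RGB 255, and anything outside 16-235 gets clipped.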

Computer monitors are calibrated RGB 0-255 black to white. This is why it’s called “computer RGB”.
Studio RGB reference monitors are calibrated RGB 64-940 black to white (they usually don’t come in an 8-bit variety, but you get the point; it would be RGB 16-235 in 8 bits). This is why it’s referred to as studio range RGB.

Studio range RGB (also called limited range RGB) has pros/cons too: 16-235 black to white (or 64-940) is your dynamic range. A typical computer sRGB monitor is calibrated 0-255 black to white (or 0-1023). There will be gaps in a full gradient with the former. Not all video is derived from some old legacy broadcast format. Full range video exists: many cameras record it natively, and so does a lot of consumer HDR, 10-bit video, etc.

Not really that simple; see above. It causes other problems

Also, you still need to either scale a full range video or scale a limited range video. You can’t have both. By definition the black and white levels are different. You have to interpret them. e.g. if you have reference black at Y=0, you’re going to want to move that to Y=16, or vice versa if you are using and exporting full range video; either automatically or manually.
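In ffmpeg terms, that interpretation step is just a scale pass, something like this (an untested sketch, filenames are only placeholders):

ffmpeg -i input_full.mp4 -vf scale=in_range=full:out_range=limited -c:v libx264 -color_range tv output_limited.mp4

or swap in_range/out_range to go the other direction. The in_range/out_range options tell the scaler how to interpret the source levels and what levels to produce.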

For measuring an input file, like ffplay directly, definitely

But in the context of a video editor and timeline, you need to. You have mixed assets and they have to be interpreted. You’re measuring the output of the timeline, after all the filters.

pdr:

This discussion would be better off without your red herrings and canards. You have an inimitable way of jumbling the simplest concepts.

The levels that are measured should be the levels that will be exported, after all of the filters etc. have been applied. Again, simple enough.

Austin - I agree with what you’re saying

I know you’re semi-joking, but you can match Shotcut if you want to: just chain a scale filter, just like Shotcut is scaling when it sees a full range flag. The ffmpeg waveform scope using IRE is just measuring at the input prior to scaling. It’s really the same thing.
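Something along these lines should make it line up with Shotcut on a full-range file (off the top of my head, untested, and the filename is just a placeholder):

ffplay -vf "scale=in_range=full:out_range=limited,waveform=scale=ire:display=overlay:graticule=green:flags=numbers" FullRangeClip.mp4

The scale step applies the same full-to-limited interpretation Shotcut does when it sees the full-range flag, and the IRE waveform then reads the scaled result.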

What red herrings? Computer range is the very reason why you made the mistake in the 1st video.

And yet you make the same mistakes over and over, and have great difficulty understanding these simplest of concepts.

How many times have I helped you with this very same concept and how to make proper test videos?

Do I need to point out the 30-page threads on other forums?

Sheesh, a little gratitude would be nice.

That IS what it’s measuring.

That’s why you get scaling and a change in the waveform when a full range flag is present. It’s measuring after the scaling, not the input file before it.

Currently it works ok.

Austin: I’ll check this data again later using both “limited” and “full” files and report my findings. Have you been doing any actual testing or are you simply inferring this? If you have been doing testing, what is your methodology? BTW, in the ffplay scope you can specify “full” or “limited” range.

Can you answer my question about IRE units in full-range video?

There is a color transformation pipeline in BT.709 for receiving color values, converting them to YCbCr if necessary, compressing them to 16-235 for broadcast, and modulating an electrical carrier with those values.

Many workflows jump out of the pipeline long before electrical transmission. The concept of sync does not exist throughout the entire pipeline. Sync does not become a concept until after the 16-235 compression.

Here’s a prime example… Edit a video in Shotcut, export it in full range (doing the rgb24 hack), and examine the YUV values. They go down to zero and up to 255. Shotcut has no problem playing it back. No sync issues. That’s because nobody in the first half of the pipeline cares about sync (or overshoot/undershoot for that matter). That is strictly a broadcast concept with broadcast equipment. No broadcast, no worries. The file formats holding YUV values have no special meaning for 0 or 255, so it’s okay here. File-based sync is managed with presentation timestamps (PTS), not with 0/255 sync values. Likewise, the equations for converting YCbCr and RGB have no special meaning for 0 or 255, which is why Y 255 can be used for white in full-range video without breaking anything. A major reason (along with overshoot buffer) that we compress to 16-235 in the first place is to create room for the broadcast side of the pipeline to assign its own special meanings to 0 and 255 without overwriting a color value.

This is why the table 3.5 equations give back Y 255 when running RGB 255 through them. That’s a full-range value and is illegal on the broadcast side of the pipeline. Is there a problem with table 3.5? No, it’s a totally fine value if not going to broadcast. That’s the whole point of full-range video. It preserves all the values when you have the option, unlike broadcast which loses colors from the compression.
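For anyone following along, the arithmetic is simple (this is my paraphrase of the full-range form using the BT.709 luma coefficients, not a quote from the table). For RGB (255, 255, 255):

Y = 0.2126 × R + 0.7152 × G + 0.0722 × B = (0.2126 + 0.7152 + 0.0722) × 255 = 255

No offset, no 219/255 squeeze, so RGB 255 white comes out as Y 255.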

(Granted, DVD and Blu-ray are also limited range, but that’s for copycat reasons. Nothing stops them at a technical level from playing back full-range video successfully.)

So yes, absolutely, BT.709 can be encoded with full-range or limited-range values, and the specification explains how to do both.

It shows Y 235 in full range as IRE 100. Incorrect.
It shows Y 255 in full range as IRE 108. Incorrect.

If these values are compared to Shotcut, then the man with two watches doesn’t know what time it is because they’re different. Shotcut is correct. FFmpeg is not when it comes to full-range video (unless a scale pre-filter downgrades the range).

I think you posted this before my last reply to pdr. Everybody would not be happy. Ideally, both scopes would exist. Refer to that post about “creative processes” to see where IRE can be a better fit than digital.

We could go to the VGA specification if absolutely necessary, but the definition of IRE should automatically cover it. Full vs limited range is irrelevant. IRE only cares about voltage. Voltage is specified for both BT.709 and sRGB… it’s 700 mV for reference white. How that 700 mV is achieved is completely irrelevant to an IRE scope. The Wikipedia reference can verify that much of the IRE definition. IRE is a percentage concept that applies to any analog electrical signal.
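Put another way (my wording, not the spec’s): IRE ≈ 100 × V / 700 mV for the active picture, so 700 mV reads 100 IRE and 350 mV reads 50 IRE, no matter which digital code value produced that voltage.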

I like it, but I don’t want to lose IRE in the process. They’re both useful for different tasks.

I never correlated white and IRE units directly. I correlated 100 IRE to 700 mV, which is the maximum protocol voltage for both BT.709 and sRGB, and also just happens to be the reference white voltage in both specifications. This would not necessarily be true with other color spaces. But it is here, and it’s convenient. It’s completely documented in my sources.

But digital units are ambiguous for color work. If I ask “what color is Y 235”, what should someone reply? They don’t know if it’s white or gray unless they also know if the range is limited or full. Granted, digital units are unambiguous in the sense that they reveal exactly what’s in the data file. But again, what’s good for diagnostic purposes isn’t always efficient for creative purposes like color grading.

Yeah, that works, but it requires me to downgrade all my sources to limited range and I have to remember to add the scale. Sometimes I like to sleep and forget things, and hope my tools look out for me. :slight_smile:

The bulleted list is actual tested results.

I created a 0-16-235-255 .BMP like you did and turned it into a lossless MP4. I created three versions:

  • Full values with full range flag
  • Full values with limited range flag
  • Limited values with limited range flag

Then I ran them through Shotcut and FFmpeg waveform in IRE mode and picked off the relevant squares using Shotcut video zoom and waveform. The FFmpeg values were determined by just reading the output PNG.

I can type up the exact commands later this evening, but this post is already longer than anyone will read.

The table 3.5 equations have unity gain, as they should. If you run 235 through them, you’ll get 235 out. Out = in.

When you say “full range”, does that apply to the flagging of the file under test or the range you are setting for the scope in your ffplay script? There are four tests involved:

  1. file = full; ffplay = full
  2. file = full; ffplay = limited
  3. file = limited; ffplay = full
  4. file = limited; ffplay = limited

I will perform the four tests later.

Precisely the problem. The equation takes in RGB. If I put in RGB 255, which is white, I get out Y 255, which is an illegal value for broadcast. What becomes of Y 255? It can go straight to a full-range file as-is, or it can be compressed to limited range for broadcast. Both options are valid. Nothing says that RGB inputs need to be constrained to 16-235 first, nor should they be, in order to preserve the most color tonality.
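If the broadcast route is taken, the compression itself is just a linear squeeze, roughly Y_limited ≈ 16 + Y_full × 219 / 255 (my shorthand), so Y 255 lands on 235 and Y 0 lands on 16.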

Yeah, sorry, there’s so much to specify lol.

  • “Full values” means 0-255
  • “Limited values” means 16-235
  • “Full range flag” means the MP4 container was flagged as full range
  • “Limited range flag” means the MP4 container was flagged as limited range

I set Shotcut’s color range override as necessary for a given test.

For instance, I can load the 0-255 version into Shotcut and set the override to Full range to get the IRE readings for Y 235 and Y 255 in full mode. Then, still using the 0-255 version, I can set the override to Limited range and look again at what Y 235 and Y 255 read in IRE under limited range.

At no point could I use a 16-235 file to figure out what Y 255 looks like in either range, because that file doesn’t have a Y 255 in it. But the 0-255 file has both 235 and 255 in it, meaning tests can be a little consolidated.

I created the 0-255 values with a limited flag for FFmpeg’s sake. I didn’t actually use ffplay… I used ffmpeg -vf waveform. It doesn’t have a parameter for limited/full interpretation, so I had to actually make files with 0-255 values in limited and full ranges to test all cases. You wouldn’t need that one if using ffplay because you can toggle the range.

Really, in your list, “file = limited” isn’t even necessary because you’ll have a 235 value in your Full file too. Toggling the ffplay range will reveal both the limited and full IRE values of it.

Right, so you put 235 in and get 235 out. So what’s the problem? If you start with 255 and get 255 out, you can apply the conversion to 235 either before or after applying the equation. Just bring down the gain. This is what that XDCam script I sent you does. Did you ever try that script?

You have a flag in the file and a flag in the scope script. That’s 2. The flags can be full or limited. That’s 2. 2 x 2 = 4 tests to perform to cover all possible combinations. That’s how I will test it.

…to prepare for broadcast, correct. Or it can be left as 255 and saved directly to a full-range video file. This is how BT.709 specifies both full range and limited range. The world does not revolve around limited range or broadcast requirements. Limited range is a hack for the specific work environment of the broadcast industry. Other industries, including many camera codecs, use full range to reclaim those extra 35 color values for higher color quality.

Yes, I’ve been meaning to follow up with that.

It’s not just broadcast, so don’t trivialize it.

If you want to give yourself a headache, read the tech specs for Netflix. They are very persnickety.

My bad. On the ffmpeg/ffplay scope it is not possible to choose between “full” and “limited” range.

Here is what I get with the ffplay scope:

File: flagged full range
Y = 16 = 0 IRE
Y = 235 = 100 IRE

File: flagged limited range
Y = 0 = 0 IRE
Y = 255 = 100 IRE

Indeed, the ffplay scope is messed up. It is backwards in its handling of full/limited range files.

Now let’s check Shotcut (preview pane)

File: flagged full range
Y = 0 = 0 IRE (OK)
Y = 255 = 100 IRE (OK)

File: flagged limited range
Y = 0 = 0 IRE (Error - should be Y = 16 = 0 IRE)
Y = 255 = 100 IRE (Error - should be Y = 235 = 100 IRE)

Limited range is the same as full range in Shotcut.

Maybe something is different in the way we’re generating test files. Here is my methodology for the questionable values you found above:

Create image with 0-16-235-255 patches:

Turn image into video with full range values but container flagged as limited:

ffmpeg -loop 1 -i 0-16-235-255.png -t 4 -filter:v scale=in_range=full:out_color_matrix=bt709:out_range=full -pix_fmt yuv420p -color_range tv -colorspace bt709 -codec:v libx264 -qp 0 -g 1 -bf 0 -preset ultrafast -movflags +faststart -an FullValuesLimitedFlag.mp4

Drop into Shotcut preview. I get Y 0 = -7 IRE and Y 255 = 108 IRE. These are the expected values.

I know you don’t like PNG files, but I can’t send the BMP version because it’s over 4 MB. The PNG can be converted to BMP and eyedropped before use if you like. But I’ve checked the results both ways and there is no conversion error using the PNG directly in ffmpeg.


Thinking about it more, I could make a conditional concession on the IRE scope.

Since the scopes in Shotcut look at the timeline output,
and input files are conformed to the timeline format before giving data to the scopes,
and the timeline preview is always limited range,
then Y 235 implicitly becomes a stable white point, as stable as 100 IRE.
Due to these mechanics, if the IRE scope was replaced with a digital waveform, I would not lose sleep regarding color work such as brightness matching. I could get the same clarity with a digital waveform.

However, if Shotcut mechanics ever changed such that the scopes looked at individual input files, then IRE becomes useful again.

What does MediaInfo say about the color range of your test files? I assume you have one which is limited and one which is full. In mine it says “full” and “limited”. I had to use yuvj420p pixel format for full-range.

ffmpeg -y  -loop 1 -t 10  -i WindowSignal.bmp  -pix_fmt yuvj420p  -c:v libx264  -vf scale=out_range=full  -color_primaries bt709  -color_trc bt709  -colorspace bt709  -r 59.94  -an  Window_full.mp4

MediaInfo says this is a full-range file. Note the use of yuvj420p pixels.

ffmpeg -y  -loop 1 -t 10  -i WindowSignal.bmp  -pix_fmt yuv420p  -c:v libx264  -vf scale=out_range=limited  -color_primaries bt709  -color_trc bt709  -colorspace bt709  -r 59.94  -an  Window_limited.mp4

MediaInfo says this is a limited-range file. Pixel format is yuv420p. You must use the appropriate pixel format for ffmpeg to correctly flag your file. yuvj420p for full, yuv420p for limited.

Measuring off the Shotcut preview pane:
Full range: all of the Y values are as expected, +/- 1
Limited range: all of the Y values are as expected, +/- 1

Videozoom returns the correct values off the preview pane.

It’s the IRE calibration of the scope that’s off for limited-range files.

I’m assuming that the levels that appear in the preview pane are the same as will be exported.

Do we all agree that for a limited-range file, Y = 235 = 100 IRE and Y = 16 = 0 IRE?

The failing test was Y 255 in a limited range file.

We need to verify that Y 255 is actually present in that limited range test file.

The scale filter says out_range=limited. This will take 0-255 values from the bitmap and compress them to 16-235. This means the file doesn’t actually have any Y 255 values in it. It tops out at Y 235 due to compression, which is why Shotcut says 100 IRE and is correct in this scenario. Running ffplay with a digital waveform will confirm the absence of Y 255.

To get Y 255 into a limited range file, the scale filter has to explicitly say out_range=full to avoid default compression to 16-235. This preserves the 0-255 values. Then there should be a color_range flag to explicitly mark the container as limited range. If color_range is not specified, it defaults to limited range for YUV pixel formats, which is how your file ended up being flagged as limited range.

This is the proper way to get Y 255 into a limited range file, which can be confirmed with a digital waveform:
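Roughly like this, reusing the FullValuesLimitedFlag.mp4 recipe from earlier (out_range=full in the scale filter plus an explicit -color_range tv), then checking the result with a digital waveform (a sketch, not my exact command):

ffmpeg -i FullValuesLimitedFlag.mp4 -frames:v 1 -vf waveform=scale=digital:flags=numbers+dots:display=overlay:graticule=green FullValuesLimitedFlag_scope.png

If the output PNG shows a step at Y 255, the overshoot made it into the limited-flagged file.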

If we had shared our complete “steps to reproduce” from the very beginning, this entire thread could have been shrunk to three posts. I’ve learned a lesson here about making too many assumptions. :rofl:

If there is Y 255 in a limited range file, then the video zoom scope should show Y 255, not Y 235. The file is limited and the timeline is limited, so no conversion is taking place, meaning what goes in is what goes out. If Y 255 went in, then Y 255 should be coming out, because it is indeed an overshoot in limited range. Y 235 would not be an overshoot and would represent clipping of the input file. Perhaps you meant that by “as expected”, but just verifying.

Yup!

[quote=“Austin, post:58, topic:19672”]
We need to verify that Y 255 is actually present in that limited range test file.[/quote]

How do you propose we do that? I have viewed it in Shotcut with video zoom, and in VLC, and used video zoom to verify the levels, along with my own eyedropper program, which is in agreement with video zoom. I was getting correct levels all around. How about Windows Media Player? I haven’t tried that yet. The files are compressed, so you would have to decompress them to read the underlying RGB values.

Also, I want to make sure we are in agreement about the problems with ffplay scope before filing a bug report.

A caveat about VLC: I was using the current version and was not getting the correct levels on one of the files (the full or the limited one, I forget which). So I had to revert to an earlier version, and now I get the correct levels for both limited and full range.

Another caveat: I am using libav codecs, which have a setup option to leave video levels untouched, so I don’t suspect the codec is forcing 255 to 235 or vice versa.

ffmpeg -i Window_limited.mp4 -frames:v 1 -vf waveform=scale=digital:flags=numbers+dots:display=overlay:g=green:o=0.25 Window_limited_scope.png

Check that the pure white window goes to 255 on the digital waveform. There should be stair steps on 0, 16, 235, 255. If it tops out at 235, then it’s actually legal range data in the file, not full Y 255.

If those scopes showed Y 235, then the file does not have full-range Y values. Had the file contained Y 255, the scopes would have shown it, and they did not. These tools did their job and proved that the test file has limited-range data as well as a limited flag.

The problems only apply to IRE calculations on full-range video. Everything else is fine.

To verify the contents of our test files for current purposes, we’re using the digital scope, not IRE. Digital is always fine by virtue of reporting raw YUV values.

It is. The setup options and compiler settings don’t come into play because the command line explicitly has an out_range=limited override. So it’s definitely forcing 255 to 235.

This is a classic case where an eyedropper won’t work. Y 255 in limited range is an overshoot, an illegal value, a superwhite. Converted to RGB, it would be 278. This will get clipped to RGB 255 for display because that’s as bright as a display can go. The eyedropper will see RGB 255, think the stream underneath is limited Y 235, and assume everything is cool when the file’s data by design is actually an overshoot. An eyedropper cannot detect overshoots due to clipping, and cannot be used for color critical work.