For the hundredth time, I checked the levels and described how I did it, and got 0,16,235,255. I also checked that same file using the ffplay scope. Why is it so hard for you to understand this, and why do I have to explain it over and over again? Whatever he used to check the levels was clearly compressing 0-255 to 16-235.
So far in this discussion nobody has accounted for the discrepancy between the ffplay scope and Shotcut’s scope on the SAME VIDEO.
The first video you posted, window_signal.mp4 (notice the underscore), has actual values of Y=16,30,218,235. I don’t know what you’re looking at; the FFmpeg/FFplay waveform definitely does not show 0,16,235,255. (I checked with other programs too.)
I’ve posted the screenshots above; are you saying you get something different?
When you use an RGB color picker, you’re in for a world of hurt if you don’t understand how the YUV<=>RGB conversions are being done in the background. A YUV picker, or a true Y’ waveform, is simpler because it measures the levels directly, and you don’t get rounding errors or make as many wrong assumptions.
On a web browser interface, computer RGB is used, because it’s a computer. Y 16-235 => RGB 0,0,0-255,255,255.
The patches show RGB 0, 16, 235, 255 because the Y values are 16,30,218,235
I’ll explain it another way to you, again, as I have many times before in other threads and forums.
You’ve said before that you’re ok with ffplay.
Since you like using RGB color pickers, recall that with ffplay, studio range RGB would use
ffplay -i window_signal.mp4 -vf scale=in_range=full
The color picker shows RGB 16,30,218,235
Computer range RGB would use
ffplay -i window_signal.mp4
The color picker shows RGB 0,16,235,255, just like what is shown in the browser; i.e. the browser is using computer RGB.
Recall that studio range RGB is your “unity” that we discussed many times before. The Y level <=> RGB level, 0-255 <=> 0-255. In ffmpeg you use the full range equations in/out.
So if you’re ok with ffplay using studio range RGB, a result of RGB 16,30,218,235 would give Y values of 16,30,218,235.
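The numbers in these two picker readings are consistent with the standard limited-to-full luma expansion. Here is a minimal sketch of that mapping (luma only; the function name is mine, and the exact rounding/clipping behavior is an assumption, not a claim about any particular player):

```python
def limited_y_to_computer_rgb(y: int) -> int:
    """Map a limited-range luma code (black 16, white 235) to full-range
    0-255, the way a computer-RGB conversion would."""
    value = round((y - 16) * 255 / 219)
    return max(0, min(255, value))  # clip undershoot/overshoot

# The four patch values from window_signal.mp4
for y in (16, 30, 218, 235):
    print(y, "->", limited_y_to_computer_rgb(y))
# 16 -> 0, 30 -> 16, 218 -> 235, 235 -> 255
```

This reproduces the discrepancy exactly: actual Y values of 16,30,218,235 show up as RGB 0,16,235,255 in any computer-RGB picker.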
@Austin Thank you so much for this incredibly informative explanation.
If you have the time/interest, it would be incredible to adapt some parts of that write up into the documentation sections:
This is a critical distinction that applies to Shotcut. When Shotcut operates on YUV, it operates in limited-range. When Shotcut operates on RGB, it operates on full-range. I think this meets most people’s expectations. But it may not meet the expectations of people who come from a deep broadcast TV background.
Can you elaborate on this specific point? When I look at BT.709, Table 4.6 shows the quantization levels:
It doesn’t seem to leave any provision for full range.
I think it would be more accurate to say that BT.709 describes limited-range signals and sRGB (IEC 61966-2-1:1999) describes full-range signals. Maybe you have a better understanding of where full and limited range values are defined.
Thanks again for your thoughtful response on this topic.
I can demonstrate from source code that the ffmpeg/ffplay waveform filter is broken for full-range video. It is fine for limited-range video. See the GitHub link below for details.
Regarding other discrepancies, there’s the issue of the Shotcut scopes not showing the actual data that’s inside the input files. Rather, the scopes are showing data of the timeline’s preview output, which reports YUV in limited range regardless of the input file. The scopes must be interpreted in that context.
For sake of clarity, here are all combinations of scope output. Do these match your findings, and do you feel any of the scopes are reporting incorrect values? To be honest, I’m very hazy about which combinations are actually under scrutiny.
Y 235 with limited flag
    Shotcut: 100 IRE, Y 235
    FFmpeg: 100 IRE
Y 235 with full flag
    Shotcut: 91 IRE, Y 217
    FFmpeg: 100 IRE << wrong: Y 235 in full range has not reached the white point, so it is not 100 IRE
Y 255 with limited flag
    Shotcut: 108 IRE, Y 255
    FFmpeg: 108 IRE
Y 255 with full flag
    Shotcut: 100 IRE, Y 235
    FFmpeg: 108 IRE << wrong: Y 255 in full range is equal to the white point, not greater
Shotcut appears to be correct regarding IRE in all cases. Shotcut’s Y-values are also correct if allowance is made for them always being stated in limited range. FFmpeg shows incorrect IRE values in full range due to hard-coded values. Y 235 does not automatically equal 100 IRE. 100 IRE is defined as the maximum voltage, which is the white point for BT.709. In full range, Y 255 is the white point, not Y 235. So FFmpeg gets IRE wrong by hard-coding equality between Y 235 and 100 IRE.
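The range-aware IRE calculation being argued for here can be sketched as a simple linear percentage between the black and white points. This is my own illustration, not any scope’s source code; note that the scopes above report 108 for limited-range overshoot, while this naive formula gives about 109 because exact overshoot figures depend on rounding conventions:

```python
def ire(y: int, full_range: bool) -> float:
    """IRE as position between the black and white points, in percent.
    Limited range: black 16, white 235. Full range: black 0, white 255."""
    black, white = (0, 255) if full_range else (16, 235)
    return (y - black) / (white - black) * 100

print(ire(235, full_range=False))  # 100.0: white point in limited range
print(ire(235, full_range=True))   # ~92: below the white point in full range
print(ire(255, full_range=True))   # 100.0: white point in full range
print(ire(255, full_range=False))  # ~109: overshoot above limited white
```

The point is that 100 IRE lands on whatever Y value is the white point for the declared range, rather than being hard-coded to Y 235.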
I have been unable to make Y 255 flagged as limited range appear as 100 IRE on the Shotcut waveform. The waveform has to be expanded very, very tall on the screen in order to see the overshoot, but it’s way up there as 108 IRE where it should be.
I need to use better wording. Thank you for the catch. I will update my post to say that BT.709 specifies full and limited range equations rather than signals. Only limited range is defined as a signal for transmission. But the YCbCr equations operate in full range up until the point they are compressed to 16-235/240 for transmission.
This distinction caused a lot of confusion earlier in this thread when the equations in table 3.5 resulted in “out-of-bounds Y 255” rather than Y 235 for white. It’s because this part of the chain is in full range. It is completely acceptable to write these full-range YCbCr values to a video file and call it BT.709 encoding provided it is flagged as full range. But it’s not legal to use full-range values for over-the-air transmission. That’s a different part of the chain.
My goal was to demonstrate that YCbCr can exist with either full or limited values depending on where in the equation chain the math is being done. Therefore, range must always be specified to properly decode YCbCr or communicate values to someone else.
I’m game. I don’t know what adaptations would be involved, so you’re welcome to slice and dice as you see fit. Or recommend changes and I’ll try to rewrite it. Whichever.
I covered it, although with less detail since the post was getting long. See:
Finally, here is the source code showing hard-coded values in the FFmpeg waveform filter. The graticule lines are defined in four columns which represent four components of a video signal: Y, Cb, Cr, Alpha. Line 2523 shows the YCbCr channels have 100% graticules hard-coded for 16-235/240 limited range. (The 255 is for Alpha.) The code never adapts to 0-255 based on input range. This is not the right way to calculate IRE, therefore it is wrong for full-range inputs. Y 255 in full range is the exact same color as Y 235 in limited range (they’re both the white point), so both represent the same voltage (the reference white level) and are therefore both 100 IRE.
Perhaps for an IRE definition; but this is not “wrong” when you use digital units 0-255 in Y, such as the default setting when using -vf waveform
Another way to think of it is that the waveform is examining the raw Y data directly. The flag does not affect the actual raw data; it’s just metadata
But scaling based on a flag implies some transformation is applied afterwards, such as a conversion to RGB, or an adjustment of Y levels. That’s not necessarily the actual original Y values in the bitstream or the original video, e.g. if you were to decode to an elementary .yuv videostream and look at the hex data. And that’s ok if that’s how the data is changed in a video application such as a video editor; you’re measuring the transformed output data, not the original data
Measuring digital units of the input file is “pure”, exact. It does not rely on what context something is being used in, or how some program decides to interpret something. There are no rounding errors or other errors introduced. Also, videos can be unflagged, or flagged improperly. But the raw data always reflects what the actual code values are
Austin’s statement gave me pause but I didn’t dispute it right away. Brian is right: BT.709 does not make provision for “full range” 0-255. I gave a detailed explanation for this, but to refresh memories, the values 0 and 255 are reserved for sync and are off limits for video information. If video information extends to 0 or 255, it’s going to freak out parts of the signal chain that depend on a source of sync at 0 or 255.
How does this flaw manifest itself?
Digital units are unambiguous. ffplay offers the option of either digital units or IRE units. Shotcut offers IRE units and that’s it — no digital units. If the Shotcut scope offered digital units, this would be a definite improvement.
There are many video players that alter the video levels. For example, a 235 pixel is made 255 or 255 made 235, so you don’t get a true visual representation of the data in the image. This may be acceptable to the YouTube crowd but not for critical users; not just broadcast but theatrical and other high-end applications. For example, if you have an area of, say, 235 next to an area of 255 and the player messes with it, making 235 into 255 or 255 into 235, the two areas will become indistinguishable and you lose detail.
Shotcut’s waveform is an IRE scope, so that’s the only definition we have if there’s going to be comparison with FFmpeg. I’m assuming that’s why this thread exists, but I feel like I have no idea what’s going on anymore.
The main thing I’ve deduced for myself is that Shotcut’s scopes are accurate to their definitions, and they are extremely useful just the way they are. We could say that IRE and digital Y-value waveforms are specialized tools for solving very different problems.
Suppose I wanted to know “have I reached 100% white yet” during a color grade. If IRE says 100%, then I have reached white regardless of color space or range because those are factored into the scale. But if a digital Y-value waveform shows me Y 235, I don’t know if I’ve hit white unless I also know whether the video is full or limited range. In FFmpeg’s case, it doesn’t move the graticules to reflect the black and white points in conjunction with the input range, so there is no cue to tell me where white is. Guessing the range by how low the Y values go is not acceptable because low-contrast log footage from Panasonic cameras is in full range and may never drop below Y 16. It could easily look like legal range to the eye, but it isn’t. Basically, keeping track of these technical weeds slows down the efficiency and artistic mindset of color work, so IRE is sometimes a better fit for creative processes.
A digital Y-value scope is definitely a better tool for diagnostic work, no argument there.
IRE by design does not show us actual YUV data. That isn’t its goal. It’s not a hex editor.
Exactly. IRE units by nature are a percentage, a relative measure of voltage compared to the maximum voltage possible. Scaling to the units of 0-100% regardless of the input format is implied in the very definition of IRE. Because 0% and 100% are tied to the black and white points (for BT.709 and sRGB at least), scaling will be tied to the range because range determines black and white Y values.
Exactly. This is the job description of an IRE waveform. It is not a digital value waveform or a YUV hex dump in any way, shape, or form. There are other scopes for that. I’m pretty sure that crossing the job descriptions of different scopes is responsible for 92.7% of the confusion in this thread, which is why I tried to describe IRE from the beginning with my War and Peace second edition.
100% agree. But that’s not how IRE works or what it does. IRE is not the tool for this kind of job.
As you noted earlier, Shotcut measures timeline output rather than input files, so there could be a layer of separation there too.
I grasp everything you’re saying, and agree 100% if we’re just talking about how digital Y-value scopes work. What I’m trying to head off is an output comparison of Shotcut’s IRE waveform to absolute digital Y-value scopes like FFmpeg (not in IRE mode), and then expecting them to show the same thing. They won’t; they’ll be different by design.
If FFmpeg weren’t bugged, it would always match Shotcut in IRE mode and we could all go home haha.
Digital units are only “ambiguous” in the context of ffmpeg reading a file directly, because -vf waveform is looking at the original video data; 8-bit Y code values will always be in the range 0-255. So IRE 0-100 will always correspond to Y 16-235 in that case
Otherwise, not necessarily in the Shotcut context; it can add confusion because you’re measuring the output, not the input. Are you working in YUV or RGB? What if you add an RGB overlay? Are the units Y, or converted Y?
Almost all do, by convention; all computer video players and all web use computer RGB. i.e. you’re using the other set of equations, not the full range equations
ie. Y 16-235 gets “mapped” to RGB 0-255
Computer monitors are calibrated RGB 0-255 black to white . This is why it’s called “computer RGB”
Studio RGB reference monitors are calibrated RGB 64-940 black to white (they usually don’t come in an 8-bit variety, but you get the point; it would be RGB 16-235 in 8 bits). This is why it’s referred to as studio range RGB.
Studio range RGB (also called limited range RGB) has pros/cons too: 16-235 black to white (or 64-940) is your dynamic range. A typical computer sRGB monitor is calibrated 0-255 black to white (or 0-1023). There will be gaps in a full gradient with the former. Not all video is derived from some old legacy broadcast format. Full range video exists; many cameras record it natively, and there’s also lots of consumer HDR, 10-bit video, etc.
Not really that simple; see above. It causes other problems
Also, you still need to either scale a full range video, or scale a limited range video. You can’t have both. By definition the black and white levels are different. You have to interpret them. e.g. if you have reference black at Y=0, you’re going to want to move that to Y=16, or vice versa if you are using and exporting full range video; either automatically or manually
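The black/white level move being described works out to a simple pair of linear rescalings. A sketch of both directions (luma only; helper names are mine, and clipping behavior is an assumption):

```python
def full_to_limited(y: int) -> int:
    """Compress full-range luma (black 0, white 255) into limited range
    (black 16, white 235)."""
    return round(16 + y * 219 / 255)

def limited_to_full(y: int) -> int:
    """Expand limited-range luma (black 16, white 235) into full range,
    clipping any undershoot/overshoot."""
    return max(0, min(255, round((y - 16) * 255 / 219)))

print(full_to_limited(0), full_to_limited(255))    # 16 235
print(limited_to_full(16), limited_to_full(235))   # 0 255
```

Either direction changes the code values themselves, which is exactly why a scope reading the rescaled output won’t match a scope reading the original file.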
For measuring an input file, like ffplay directly, definitely
But in the context of a video editor and timeline, you need to. You have mixed assets and they have to be interpreted. You’re measuring the output of the timeline and all the filters
I know you’re semi-joking, and you can match Shotcut if you want to: just chain a scale filter, just like Shotcut is scaling when it sees a full range flag. The ffmpeg waveform scope using IRE is just measuring at the input prior to scaling. It’s really the same thing
Austin: I’ll check this data again later using both “limited” and “full” files and report my findings. Have you been doing any actual testing or are you simply inferring this? If you have been doing testing, what is your methodology? BTW, in the ffplay scope you can specify “full” or “limited” range.
Can you answer my question about IRE units in full-range video?
There is a color transformation pipeline in BT.709 for receiving color values, converting them to YCbCr if necessary, compressing them to 16-235 for broadcast, and modulating an electrical carrier with those values.
Many workflows jump out of the pipeline long before electrical transmission. The concept of sync does not exist throughout the entire pipeline. Sync does not become a concept until after the 16-235 compression.
Here’s a prime example… Edit a video in Shotcut, export it in full range (doing the rgb24 hack), and examine the YUV values. They go down to zero and up to 255. Shotcut has no problem playing it back. No sync issues. That’s because nobody in the first half of the pipeline cares about sync (or overshoot/undershoot for that matter). That is strictly a broadcast concept with broadcast equipment. No broadcast, no worries. The file formats holding YUV values have no special meaning for 0 or 255, so it’s okay here. File-based sync is managed with presentation timestamps (PTS), not with 0/255 sync values. Likewise, the equations for converting YCbCr and RGB have no special meaning for 0 or 255, which is why Y 255 can be used for white in full-range video without breaking anything. A major reason (along with overshoot buffer) that we compress to 16-235 in the first place is to create room for the broadcast side of the pipeline to assign its own special meanings to 0 and 255 without overwriting a color value.
This is why the table 3.5 equations give back Y 255 when running RGB 255 through them. That’s a full-range value and is illegal on the broadcast side of the pipeline. Is there a problem with table 3.5? No, it’s a totally fine value if not going to broadcast. That’s the whole point of full-range video. It preserves all the values when you have the option, unlike broadcast which loses colors from the compression.
(Granted, DVD and Blu-ray are also limited range, but that’s for copycat reasons. Nothing stops them at a technical level from playing back full-range video successfully.)
So yes, absolutely, BT.709 can be encoded with full-range or limited-range values, and the specification explains how to do both.
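The “table 3.5 gives back Y 255” point can be made concrete with a sketch. The coefficients are BT.709’s published luma weights; the function names and rounding details are mine, not the specification’s wording:

```python
# BT.709 luma weights (Kr, Kg, Kb); they sum to 1.0
KR, KG, KB = 0.2126, 0.7152, 0.0722

def luma_full(r: int, g: int, b: int) -> int:
    """Full-range Y' from full-range R'G'B' using the BT.709 weights.
    The equation itself is full range: white RGB gives Y 255."""
    return round(KR * r + KG * g + KB * b)

def quantize_limited(y_full: int) -> int:
    """The later quantization step that compresses full-range Y' into
    the 16-235 broadcast range."""
    return round(16 + y_full * 219 / 255)

y = luma_full(255, 255, 255)
print(y)                    # 255: fine for a full-range file, illegal for broadcast
print(quantize_limited(y))  # 235: legal broadcast white
```

In other words, Y 255 is the natural output of the equations; only the final compression stage produces the 16-235 signal that the transmission side requires.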
It shows Y 235 in full range as IRE 100. Incorrect.
It shows Y 255 in full range as IRE 108. Incorrect.
If these values are compared to Shotcut, then the man with two watches doesn’t know what time it is because they’re different. Shotcut is correct. FFmpeg is not when it comes to full-range video (unless a scale pre-filter downgrades the range).
I think you posted this before my last reply to pdr. Everybody would not be happy. Ideally, both scopes would exist. Refer to that post about “creative processes” to see where IRE can be a better fit than digital.
We could go to the VGA specification if absolutely necessary, but the definition of IRE should automatically cover it. Full vs limited range is irrelevant. IRE only cares about voltage. Voltage is specified for both BT.709 and sRGB… it’s 700 mV for reference white. How that 700 mV is achieved is completely irrelevant to an IRE scope. The Wikipedia reference can verify that much of the IRE definition. IRE is a percentage concept that applies to any analog electrical signal.
I like it, but I don’t want to lose IRE in the process. They’re both useful for different tasks.
I never correlated white and IRE units directly. I correlated 100 IRE to 700 mV, which is the maximum protocol voltage for both BT.709 and sRGB, and also just happens to be the reference white voltage in both specifications. This would not necessarily be true with other color spaces. But it is here, and it’s convenient. It’s completely documented in my sources.
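Since IRE is a percentage of voltage, the correlation above is just a linear map. A trivial sketch (function name is mine; the 700 mV reference-white figure is the one cited in this post):

```python
def ire_to_millivolts(ire: float) -> float:
    """Video signal level in mV, taking 100 IRE == 700 mV reference white."""
    return 700.0 * ire / 100.0

print(ire_to_millivolts(100))  # 700.0 mV: reference white
print(ire_to_millivolts(0))    # 0.0 mV: black level
```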
But digital units are ambiguous for color work. If I ask “what color is Y 235”, what should someone reply? They don’t know if it’s white or gray unless they also know if the range is limited or full. Granted, digital units are unambiguous in the sense that they reveal exactly what’s in the data file. But again, what’s good for diagnostic purposes isn’t always efficient for creative purposes like color grading.
Yeah, that works, but it requires me to downgrade all my sources to limited range and I have to remember to add the scale. Sometimes I like to sleep and forget things, and hope my tools look out for me.
The bulleted list is actual tested results.
I created a 0-16-235-255 .BMP like you did and turned it into a lossless MP4. I created three versions:
Full values with full range flag
Full values with limited range flag
Limited values with limited range flag
Then I ran them through Shotcut and FFmpeg waveform in IRE mode and picked off the relevant squares using Shotcut video zoom and waveform. FFmpeg was determined by just reading the output PNG.
I can type up the exact commands later this evening, but this post is already longer than anyone will read.
Precisely the problem. The equation takes in RGB. If I put in RGB 255 which is white, I get out Y 255, which is an illegal value for broadcast. What becomes of Y 255? It can go straight to a full-range file as-is, or it can be compressed to limited range for broadcast. Both options are valid. Nothing says that RGB inputs need to be constrained to 16-235 first, nor should they be to preserve the most color tonality.
Yeah, sorry, there’s so much to specify lol.
“Full values” means 0-255
“Limited values” means 16-235
“Full range flag” means the MP4 container was flagged as full range
“Limited range flag” means the MP4 container was flagged as limited range
I set Shotcut’s color range override as necessary for a given test.
For instance, I can load the 0-255 version into Shotcut and set the override to Full range to get the 235 and 255 IREs in full mode. Then, still using the 0-255 version, I can set the override to Limited range and look again at what 235 and 255 IREs are in limited range.
At no point could I use a 16-235 file to figure out what Y 255 looks like in either range, because that file doesn’t have a Y 255 in it. But the 0-255 file has both 235 and 255 in it, meaning tests can be a little consolidated.
I created the 0-255 values with a limited flag for FFmpeg’s sake. I didn’t actually use ffplay… I used ffmpeg -vf waveform. It doesn’t have a parameter for limited/full interpretation, so I had to actually make files with 0-255 values in limited and full ranges to test all cases. You wouldn’t need that one if using ffplay because you can toggle the range.
Really, in your list, “file = limited” isn’t even necessary because you’ll have a 235 value in your Full file too. Toggling the ffplay range will reveal both the limited and full IRE values of it.