Video Scope Calibration

I see how that can be a problem; clamping = clipping from that perspective. I will no longer use “clamping”. “Range expansion/contraction” is better phrasing and less ambiguous.

Agreed… for scopes that measure levels in a file.

IRE does not visualize levels in a file.

IRE is an analog unit that measures voltage on an electrical wire. Since limited Y 235 and full Y 255 produce the same voltage on an electrical wire (700 mV reference white point), both those values should be 100 IRE. The values of the digital data are irrelevant. All that matters is the final voltage hitting a wire.

Agreed… for a scope that measures levels in a file.

IRE does not visualize levels in a file.

It measures voltage. Since the sRGB and BT.709 specs happen to put black at 0 mV and white at 700 mV, the 0-100 IRE range is implicitly tied to the black and white reference points. In order to know whether a Y value represents black/gray/white, it has to be interpreted in the context of the range flag. Therefore, the scope must adapt itself based on the range to calculate what final voltage will hit an electrical wire.
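That adaptation can be sketched in a few lines (function names are mine, purely illustrative):

```python
def y_to_millivolts(y, full_range):
    """Map an 8-bit Y' code to the voltage it would produce on an analog
    wire where reference black = 0 mV and reference white = 700 mV (the
    BT.709/sRGB convention). The range flag decides which codes are the
    black and white reference points."""
    if full_range:
        black, white = 0, 255
    else:
        black, white = 16, 235
    return (y - black) / (white - black) * 700.0

def millivolts_to_ire(mv):
    """IRE as a percentage of the 700 mV reference white."""
    return mv / 700.0 * 100.0

# Limited Y 235 and full Y 255 land on the same 700 mV, i.e. 100 IRE:
assert y_to_millivolts(235, full_range=False) == 700.0
assert y_to_millivolts(255, full_range=True) == 700.0
assert millivolts_to_ire(350.0) == 50.0
```

The range flag only changes which codes anchor 0 mV and 700 mV; once the voltage is known, the IRE value follows without any further reference to the file.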

IRE is completely different from a digital waveform.

Only in limited range. Full range would be 0 and 255 because full Y 255 is the same white as limited Y 235, therefore it should get the same IRE value.

With IRE, the graticules are usually 0% and 100% because IRE is a percentage, not an absolute value. The only way it would make sense to move a percentage graticule is to represent the black and white points for a format that puts black somewhere other than 0% (like composite video where black is 7.5%). However, even if we were to do that, nothing changes because limited Y 235 and full Y 255 are the same white and land on the same voltage and the same IRE/percent. Since the IRE is the same for both ranges, there’s no other sensible place to move the graticules. They already represent the black and white points for both BT.709 and sRGB without any movement at all. Limited and full are two different ways of specifying the exact same color, therefore they will have exactly the same placement on an IRE scope.

IRE measures only the final result, not the path taken to get there. This makes it intrinsically different from a digital waveform.

For a digital waveform, yes, it can make sense to move the graticules to represent the black and white points depending on the range.

That’s a nice explanation, Austin

What would you suggest in the -vf waveform IRE case?

a) Range compression based on flag detection (similar to what Shotcut is doing)

b) Limited Y 16-235 = 0-100 IRE. Full Y 0-255 = 0-100 IRE. Based on flag detection.

c) Limited Y 16-235 = 0-100 IRE. Full Y 0-255 = 0-100 IRE. Two user switches: IRE limited vs. IRE full.

(And it would be up to the user to ensure prerequisites are met in all cases)
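For what it’s worth, options (b) and (c) could be combined behind one small decision step. A hypothetical sketch (none of these names are real ffmpeg options):

```python
def scope_range(flagged_range, user_override=None):
    """Pick the range the IRE scope should assume: an explicit user
    override ('limited'/'full') wins, otherwise trust the range flag."""
    return user_override or flagged_range

def y_to_ire(y, video_range):
    """Options (b)/(c): each range's nominal black..white spans 0-100 IRE."""
    black, white = (16, 235) if video_range == "limited" else (0, 255)
    return (y - black) / (white - black) * 100.0

# Flag detection alone (option b):
assert y_to_ire(235, scope_range("limited")) == 100.0
# User switch overriding the flag (option c):
assert y_to_ire(255, scope_range("limited", user_override="full")) == 100.0
```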

Wait a minute, Austin. You’re talking out of both sides of your mouth. After posting that, you posted this:

So which is it, percentage or voltage?

Can you produce any documentation that says IRE units are used in 0 - 255 sRGB, i.e. anything other than BT.601 or BT.709?

For ITU-R BT.709 there is no full range video. ITU-R BT.709 table 4.6 only defines a narrow range signal. It does allow overshoots in the range 1-253, but black is defined as 16 and white as 235.
https://www.itu.int/dms_pubrec/itu-r/rec/bt/R-REC-BT.709-6-201506-I!!PDF-E.pdf

If you are going to use black at 0 and white at 254, it is not standards compliant and it will require slightly different colour matrices.

Full-range has multiple meanings in different organisations. The EBU in Europe have recently published this discussion on the topic: https://tech.ebu.ch/docs/r/r103.pdf

This has already been covered extensively. Table 4.6 only applies to the broadcast half of the color transformation pipeline. But the YCbCr/RGB color conversion formulas in table 3.5 operate in full range, and these values can be written directly to a computer file and played back successfully in any competent media player. It is not “standards compliant” in terms of broadcast, but it is 100% standards compliant in terms of BT.709 as a color space. BT.709 is both a color space definition and a broadcast standard. This is why ffmpeg has a -colorspace bt709 flag… because BT.709 is a stand-alone color space for color processing. The color space portion can be used without legalizing values for broadcast. Otherwise, there would be no such thing as a full-range YCbCr file that all media players can play. And yet, they can. Everyone is aware of cameras that write full-range YCbCr files and they are correctly interpreted by all major video editors. It works because BT.709 is also a color space that can operate completely independent of broadcast requirements.
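To make the “full range as a color space” claim concrete, here is a sketch of a full-range BT.709 decode using only the luma coefficients, with no limited-range offsets or scaling. The exact chroma scaling convention varies between implementations (some scale by 254 or clip), so treat this as illustrative:

```python
def full_range_ycbcr_to_rgb(y, cb, cr):
    """Full-range 8-bit BT.709 Y'CbCr -> normalized R'G'B' in [0, 1].
    Uses the BT.709 luma coefficients (0.2126, 0.7152, 0.0722) with no
    16-235 / 16-240 offsets and no 219/224 scaling."""
    ey = y / 255.0
    r = ey + 1.5748 * (cr - 128) / 255.0
    b = ey + 1.8556 * (cb - 128) / 255.0
    g = (ey - 0.2126 * r - 0.0722 * b) / 0.7152
    return r, g, b

# Full-range white (255, 128, 128) decodes to RGB (1, 1, 1):
r, g, b = full_range_ycbcr_to_rgb(255, 128, 128)
assert all(abs(c - 1.0) < 1e-9 for c in (r, g, b))
```

Same primaries, same white point, same coefficients as limited range, only the quantization mapping differs.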

Full range is the “real” BT.709 and limited range is just a hack to get full range through the broadcast pipeline (and to provide room for sync signaling of analog televisions). If BT.709 delivery was entirely digital (no need to add buffer for frequency separation or to sync a CRT), then limited range math would have no need to exist and the world would be a less confusing place, just like sRGB.

IRE is a percentage of maximum protocol voltage. This was covered extensively in my first post. For BT.709, the protocol maximum is 700 mV. So if somebody puts a voltmeter on an electrical wire carrying a BT.709 signal and reads 350 mV, then that is 50% of the maximum possible and is called 50 IRE.

IRE is an electrical concept, not a television concept. It can be applied to any electrical signal. It is not specific to BT.709 or any other wire protocol. In that sense, documentation is not necessary because IRE is implicitly a unit of measure for any electrical signal.

However, to your point, IRE is not traditionally used to scope sRGB signals because it is a more consumer-oriented format that doesn’t have the strict boundaries of broadcast. When people want to calibrate sRGB color, they calibrate the screen itself with a colorimeter and don’t worry about the values going over the DVI/HDMI/DisplayPort cable because those values are digital, not analog. Thus, a digital waveform makes more sense for sRGB. So, to compare sRGB and BT.709 on the same IRE scope, we have to pretend sRGB is going over a VGA cable because VGA actually is an analog specification that was purposefully designed to match the 700 mV white point of BT.709. This concept was documented in the sources of my first post.

That is the million dollar question for all of us. :slight_smile: Paul B. Mahol has already gone clear off the board for Option D, “stay the same as hardware scopes even if hardware shows wrong results because it wasn’t designed for full-range video”. Given the operating environment of ffmpeg, I can totally respect this direction. It would be nice if this disclaimer was included in ffmpeg-all documentation so the unaware user knows what to expect from the IRE values.

This would be my personal preference. When a user pumps a video through the pipeline, the scope results will automatically be correct without any user intervention or knowledge. For full diagnostics ability, it could be useful to have full/limited override flags to the auto detection, as you suggested in Option C. That would be a productive blend.

Since ffmpeg has access to source videos (whereas the Shotcut scope only has access to timeline output), it would be nice to take advantage of the exact input file format for diagnostic purposes, meaning Option A (range compress everything to limited) would make me hesitant. However, it would still be “correct enough” and I wouldn’t lose sleep over it.

For me, I could be 100% content with a simple addendum to the ffmpeg documentation that said “the IRE scope always assumes the max possible voltage is based on limited-range Y values, and will show incorrect IRE for full-range video”.

What kind of double talk is that? BT.709 is a digital standard. Analog televisions have nothing to do with it.

No wonder prosumer digital video is all screwed up. You’re making me cringe, buddy, with some of your inaccurate statements and tortured metaphors. Sorry if this sounds harsh.

I’m out of this discussion of Shotcut’s video scope. You guys rationalize it whatever way you please.

I’m still waiting on proof/input about what a hardware scope or ScopeBox does in such a case.

I have started to map some of the information from this post to these topics:



Feedback or contributions are welcome.

Some people are uncomfortable with the term “IRE unit”, feeling it is obsolete. Call it “percent” if you are more comfortable with that. Either way it measures the same thing: the percentage of the range between digital 16 and 235 (or 0 and +0.714 volts analog). What it is NOT is a unit of voltage.

Simon said something that prompted me to thoroughly review my notes, and I discovered that I made a mistake. I ran many combinations of ranges and sample values through the different equations in the BT.709 specification, to the point that my notes got messy and I accidentally associated full-range values with the table 3.5 equations. As it turns out, the equations in table 3.5 operate on limited-range RGB, as in studio 0-219 with a +16 lift. This is what Simon meant by “0 to 254 will require slightly different colour matrices”, and he was right. So I apologize for giving bad information regarding table 3.5. However, this doesn’t have any impact on any other claims or conclusions.

Since many people have expressed concern over the existence of full-range BT.709 as a standard, I realize that nothing short of documentation will break a stalemate. So, I present a memorandum from the Joint Video Team of the ISO/IEC and ITU. I chose this particular document because it treats full-range BT.709 as a first-class citizen, and then goes on to demonstrate that it has a 1.2 dB PSNR advantage over its limited-range counterpart. This is why many camera companies prefer to use full range video as the acquisition format. It provides a little extra stretch latitude during color grading before banding appears. After all, 8-bit video needs all the help it can get. There is logic to the madness.

This link to the memorandum goes directly to a Microsoft Word document:
https://www.itu.int/wftp3/av-arch/jvt-site/2003_09_SanDiego/JVT-I017-L.doc

The key is to remember that BT.709 specifies both a color space and a broadcast standard. As a YCbCr color space, full range is the default just like any other YCbCr color space. Using the full bit depth to represent color slices from total white to total black is implicit in all core YCbCr color space definitions I’ve seen. The idea of using a limited subset of the full bit space available is an additional constraint because there is no other intuitive reason to throw away valuable bit space. This is why the very name “limited range” sounds like it is less than something bigger, less than something better, less than something “fuller”. The specification designers didn’t create a range called “limited” without also defining something “not limited” to go with it. They tried to make the naming convention easy on us.

So yes, “full” is the standard for the BT.709 color space which is then “limited” for the sake of the BT.709 broadcast standard. But any signal not going to broadcast is welcome to stay in full range, and it will be official and fully supported. The disjointed industries of media players, camera manufacturers, and video editors didn’t just happen to implement the same ad hoc full-range idea together… they all play nice together because full range is an actual standard defined by BT.709 color space. Full range does require a different set of equations than what is found in the BT.709 document, but that’s fine. The full equations still operate on the same color primaries, white point, transfer characteristics, etc. The limited equations in table 3.5 are simply optimized versions of the full equations to factor limited range directly into the math for convenience. That’s why the Cb and Cr equations have “multiply by 224/219” in them, because it’s a scale factor from 0-219 RGB to 16-240 CbCr. These are clearly derived from the full equations, and I should have realized that earlier while reviewing my test results.
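Here is that derivation written out as a sketch: the 219 and 224 factors from table 3.5 applied to normalized R'G'B' (the divisors 1.8556 and 1.5748 are 2·(1−0.0722) and 2·(1−0.2126); helper name is mine):

```python
def bt709_limited_rgb_to_ycbcr(r, g, b):
    """BT.709 table 3.5 style encode: normalized R'G'B' in [0, 1] to
    8-bit limited-range Y'CbCr. The 219 and 224 factors scale the
    signal into 16-235 (Y') and 16-240 (Cb/Cr)."""
    ey = 0.2126 * r + 0.7152 * g + 0.0722 * b
    y  = round(16 + 219 * ey)
    cb = round(128 + 224 * (b - ey) / 1.8556)
    cr = round(128 + 224 * (r - ey) / 1.5748)
    return y, cb, cr

# Reference white and black land exactly on the limited-range anchors:
assert bt709_limited_rgb_to_ycbcr(1.0, 1.0, 1.0) == (235, 128, 128)
assert bt709_limited_rgb_to_ycbcr(0.0, 0.0, 0.0) == (16, 128, 128)
```

Drop the +16/+128 offsets and replace 219/224 with full-scale factors and the same coefficients produce the full-range encode.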

Hopefully with the “full range BT.709” hurdle out of the way, the rest of this thread can make more sense just the way it reads.

Earnest question… does anybody know why we still use limited range for digital television? I remember it being relevant for analog television channels, but as Chris mentioned, those concerns don’t exist anymore. So why do we continue to compress the color range? What value does that provide? Legacy reasons? Were older televisions designed for 0-219 RGB instead of 0-255?

You’re reading too much into the specification.

The “BT” in BT.709 stands for “Broadcast Television”. The spec defines 16 as black and 235 as white. As I explained before, the equations have unity gain. You’re supposed to put 16 - 235 in and get 16 - 235 out. 0 - 255 isn’t part of the spec, anywhere.

If you say 0 - 255 is sRGB, fine. I’m not going to hunt down the sRGB spec.

Limiting to 16 - 235 is not and never has been relevant to analog TV. Again, you’re mixing up digital and analog. As I have explained, in analog, white is +0.714 volts, not 700 mV as in digital.

To answer your question, reread my post about “footroom” and “headroom”. A broadcaster could broadcast full-range video, BUT if the video encroaches on sync and causes the viewer’s TV set to freak out, they’re going to get reception complaints and hear from cable companies that carry their signal, and they could be looking at a citation and fine from the FCC. So that’s why broadcasters stick to spec.

BT is only half the name. It’s also called Rec.709 because it’s a Recommendation used outside broadcast television. Hence, the color space definition. Ask any colorist.

For broadcast, no. For a color space, yes, full range comes by default. If there was only one range, why would that one range be called “limited”? Because there’s another one that isn’t.

I included a link to a document that describes the use of 0-255 using BT.709 color primaries. The document was written by Dr. Gary Sullivan who is the current chair of MPEG SC29 and is rather qualified to speak on the topic.

I’m genuinely trying to learn. How could this happen with a digital signal? Wouldn’t it be super easy to digitally clamp around the two sync values on the way down the pipeline? Why not use 1 - 254 since we have digital precision? If it’s necessary to have buffer that far away from sync even in digital, then we’re back to limited range being a hack to protect sync.

Again, not to be harsh, but you’re seriously confused.

You’d have to ask the designers of BT.601, but I think you’re unnecessarily obsessing over a very small amount of the 8-bit video range (0 - 15 and 236 - 255). That’s why people are moving toward 10-bit video.

The overhead is not there only to protect the sync pulses.

Be aware that when we say black is at 16 and white is at 235, we do not mean there is no signal outside that range; it is permissible to allow transient incursions into the over-range and under-range signal areas.

This exists for a number of reasons:

  1. Video contains aliasing artefacts and video filters introduce ringing. This is because the filters are not perfect. A severe case is shown in this Gibbs Phenomenon plot where a short filter distorts a square wave. https://en.wikipedia.org/wiki/Ringing_artifacts#/media/File:Gibbs_phenomenon_10.svg
  2. Even under perfect conditions, changing ambient lighting will cause a transient excursion as it increases, because the iris control will lag behind.
  3. SDI and other cable formats use some of the video levels at the extremes to send sync pulses.

So if you’ve got filter rings and transient excursions and you don’t have headroom and footroom, or you clip at 16 - 235, you cause these rings and transient excursions to look almost like a square wave. Square waves are bad as they:
a) cause further filter ringing later in the processing and
b) if converted to the frequency domain cause a lot of harmonics (see: https://upload.wikimedia.org/wikipedia/commons/thumb/b/b5/Spectrum_square_oscillation.jpg/525px-Spectrum_square_oscillation.jpg).

Most Codecs convert to the frequency domain so the more high frequency data you have, the higher the bitrate needed.
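This can be demonstrated numerically. In this illustrative sketch (a toy signal of my own, not broadcast math), hard-clipping a smooth transient at nominal white adds exactly the kind of high-frequency energy a transform codec then has to spend bits on:

```python
import numpy as np

n = 1024
t = np.arange(n)
# A smooth sine burst that overshoots reference white (code 235):
overshoot = 235 + 30 * np.sin(2 * np.pi * t / 64) * np.exp(-((t - 512) / 100.0) ** 2)
clipped = np.clip(overshoot, 16, 235)  # hard clip at nominal black/white

def high_freq_energy(x, cutoff_bin=64):
    """Sum of spectral magnitude above a cutoff bin (DC removed)."""
    spec = np.abs(np.fft.rfft(x - x.mean()))
    return spec[cutoff_bin:].sum()

# Clipping flattens the peaks into near-square corners, which show up
# as extra high-frequency content in the transform domain:
assert high_freq_energy(clipped) > high_freq_energy(overshoot)
```

The unclipped burst keeps its energy near the fundamental; the clipped version spreads harmonics well up the spectrum, which is Simon’s point about square waves and bitrate.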

If you’ve got fully controlled lighting and have a colourist ensuring that there is no hard clipping, or you’re working with computer generated, perfect images, full range makes sense. For video, it doesn’t.

For a longer discussion, the EBU R.103 document goes into detail.


EBU R103 takes some of this into account. The range is 5 - 246, wider than 16 - 235.

Realised that I forgot one:
The Y’CbCr and R’G’B’ spaces are not coincident. Not all of the valid range of Y’CbCr maps to valid R’G’B’ values.
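A quick sketch of that mismatch: the sample pair below is inside the valid 8-bit Y’CbCr coding range, yet decodes to a negative green channel (matrix constants as in BT.709; helper name is mine):

```python
def limited_ycbcr_to_rgb(y, cb, cr):
    """Limited-range BT.709 decode to normalized R'G'B' (no clipping)."""
    ey = (y - 16) / 219.0
    r = ey + 1.5748 * (cr - 128) / 224.0
    b = ey + 1.8556 * (cb - 128) / 224.0
    g = (ey - 0.2126 * r - 0.0722 * b) / 0.7152
    return r, g, b

# (16, 240, 240) is a legal Y'CbCr triple, but the decoded green
# channel falls below 0, i.e. outside valid R'G'B':
r, g, b = limited_ycbcr_to_rgb(16, 240, 240)
assert g < 0.0
```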

I have searched hard and talked to people and never been able to get a good answer to this question in the digital domain. Finally, an explanation that makes sense. Thank you Simon, this was extremely helpful.

I was under the impression that a QC check would still fail if too much data was in headroom or footroom. Is limited range there to protect against square waves during acquisition, and then the colorist pulls in the transients using more graceful techniques than hard clipping to pass QC?

The headroom is traditionally for transients. However, if you look at R.103, you’ll see it mentions a further use case - single workflow HDR and SDR productions.

If you’re concerned about passing QC, have a look at EBU R103. The latest version came out in May of this year.