Lossless Export Formats

Kind of a basic question: Is there any difference between the way pixels are exported in Rawvideo format vs any other “lossless” format such as UT?

At the physical storage level, naturally yes. Rawvideo is (usually) a headerless stream of uncompressed YUV values. The decoder has to be manually told the resolution and subsampling (from AVI metadata if available) so that it knows how many bytes to read per scan line of video. Meanwhile, lossless codecs have “visibility” of the entire frame and can find compression opportunities anywhere similar colors and patterns show up.
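As a concrete illustration, here is a sketch of playing a headerless rawvideo file with ffplay (the file name, resolution, frame rate, and pixel format are placeholders for whatever the file actually contains):

ffplay -f rawvideo -pixel_format yuv420p -video_size 1920x1080 -framerate 30 clip.yuv

Get any of those parameters wrong and the picture turns to garbage, which is exactly the point: the format carries no self-description.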

At the logical interpretation level, no, there isn’t a difference. Raw YUV values are pushed into either codec by the Shotcut export process. Identical YUV values come out of the codec when played back. The only difference is how those values were physically stored on disk. There isn’t a need to put quotation marks around “lossless” formats… they are truly lossless, so there is no reason to be concerned about the integrity of your data. :slight_smile: Here is a simple transcoding test procedure for anyone interested in proving lossless codecs for themselves:

Chain A: Source video -> rawvideo -> rawvideo
Chain B: Source video -> Ut Video -> rawvideo

The two rawvideo files at the end of the chains will be bit-for-bit identical, which can be confirmed with fc /b at a Command Prompt. It’s important to compare files from the same generation in order to eliminate the possibility that processing in the transcoding engine or poorly-chosen transcode settings altered the color values. Files of the same generation would have been altered by the same amount and therefore still be identical to each other. (Spoiler alert: every fourth byte will be different by -1 if you compare generation 2 to generation 3 from the Shotcut rawvideo exports. I don’t know why, although I suspect bicubic chroma upsampling when a yuv420p source is converted to yuv422p for internal processing.)
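Here is a sketch of the two chains as FFmpeg commands for anyone who wants to run the test outside Shotcut (file names, resolution, and frame rate are placeholders; I’m writing headerless .yuv files instead of AVI so container headers can’t cause spurious differences in the comparison):

Chain A:

ffmpeg -i source.mp4 -an -c:v rawvideo -pix_fmt yuv420p -f rawvideo a_gen2.yuv
ffmpeg -f rawvideo -pixel_format yuv420p -video_size 1920x1080 -framerate 30 -i a_gen2.yuv -c:v rawvideo -f rawvideo a_gen3.yuv

Chain B:

ffmpeg -i source.mp4 -an -c:v utvideo -pix_fmt yuv420p b_gen2.avi
ffmpeg -i b_gen2.avi -an -c:v rawvideo -f rawvideo b_gen3.yuv

Then compare the final generations:

fc /b a_gen3.yuv b_gen3.yuv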

Historically, rawvideo was popular for lower-resolution video editing because it required no decompression, which allowed more CPU to be devoted to effects processing. But that advantage largely evaporates at 4K and above (possibly at 1080p depending on track count) because the sheer amount of data in an uncompressed format strains the storage system’s ability to keep up with the demands for speed and space. A very expensive storage system would likely be required to feed 4K uncompressed multi-track video to an editor for any amount of runtime, especially if backups and archiving are part of the package too. Here are some relative file size differences from a test I just ran:

rawvideo (8-bit 4:2:0): 100%
ProRes HQ (10-bit 4:2:2): 61%
Ut Video (8-bit 4:2:0): 36%
DNxHR HQ (8-bit 4:2:2): 29%

Given that rawvideo and Ut Video are both lossless, compression allows the storage system specifications to be reduced to a third on all counts (cost, capacity, transfer speed, time for copying, etc.) in exchange for a small amount of CPU decompression overhead. The intermediate codecs compress even further when you consider that they are hauling upscaled 4:2:2 data (and 10-bit data in the case of ProRes HQ), yet are still smaller than rawvideo in 8-bit 4:2:0. This is what makes the intermediate and lossless codecs so attractive compared to rawvideo these days.
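If anyone wants to reproduce a similar comparison, here is a sketch using FFmpeg (source.mp4 is a placeholder, and -an keeps the files video-only so the sizes compare fairly):

ffmpeg -i source.mp4 -an -c:v rawvideo -pix_fmt yuv420p raw.avi
ffmpeg -i source.mp4 -an -c:v prores_ks -profile:v hq -pix_fmt yuv422p10le prores.mov
ffmpeg -i source.mp4 -an -c:v utvideo -pix_fmt yuv420p ut.avi
ffmpeg -i source.mp4 -an -c:v dnxhd -profile:v dnxhr_hq -pix_fmt yuv422p dnxhr.mov

Exact percentages will vary with content; noisy footage compresses much worse than clean footage.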

Rawvideo was also popular as being a lowest-common-denominator format between editors on different OSes. But this advantage too has largely evaporated since the standardization of intermediate and lossless codecs.

Where rawvideo is still very useful is end-to-end color variation testing of a signal chain, because the format can be easily read and compared at the byte level by scripting tools that would never otherwise be able to decompress video.

I realize you’re already aware of that stuff… I went comprehensive for anybody else following along because rawvideo hasn’t been talked about much in the forums.

However… if this post is related to your other post about a yuv420p export target in Shotcut for yuv420p rawvideo… I added some additional thoughts to that thread as well. The basic issue was that Shotcut doesn’t process internally in 4:2:0, and asking it to do so will result in scrambled eggs as you noted. The trick is to leave Shotcut in 4:2:2 and change the export mode to 4:2:0 using nearest neighbor for lossless chroma scaling. In other words, rawvideo and lossless codecs go through an identical export pipeline. The lossless codecs do not have any pipeline advantage over rawvideo, nor vice versa. The one possible fly in the ointment may be whether Shotcut does a bicubic upscale from 4:2:0 to get video into its internal 4:2:2 format. If it does, then the output values will be -1 about 25% of the time… not even remotely visible to the human eye. However, if you want absolute perfection, then command-line FFmpeg will get you there.
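For reference, a sketch of that command-line step (file names are placeholders; I’m assuming a lossless 4:2:2 file exported from Shotcut as the input):

ffmpeg -i shotcut_422_export.avi -sws_flags neighbor -pix_fmt yuv420p -c:v utvideo final_420.avi

The -sws_flags neighbor option forces the automatic 4:2:2-to-4:2:0 conversion to use nearest neighbor, so chroma rows are dropped rather than interpolated.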

I expect the new “Zoom Scope” in the 19.12 beta will be a great convenience for this. However, the scope will not know what pixel format you are going to choose on export.

Unfortunately, I think Shotcut currently uses interpolation when performing chroma subsampling conversions. This might be an opportunity for future improvement. I think the logic would need to be to check if video resolution and colorspace are changing. If they are not, then use nearest neighbor chroma conversion. Otherwise, use interpolated conversions. I’ll keep this in the back of my mind as something to look into one day. Maybe Dan will look into it sooner.

I was thinking along the same lines. And if the resolution or subsampling changes, then possibly bicubic for downscaling and Lanczos for upscaling. Gets around the possibility of halos/ringing when Lanczos does a downscale. Thanks for chiming in!

Hi Austin -

Thanks for the exhaustive explanation. I know precious little about the inner workings of compression.

All I want to do is get my video out of Shotcut, after adding titles and effects, with as little alteration to the image as possible. FWIW I note that using Shotcut’s presets, MediaInfo reports FFV1 and UT as lossless but not HuffYUV. I also set the audio to pcm_s16be.

I’m concerned about it being lossless because there is yet another step involving ffmpeg to make the file compliant with EBU r103, which I may have mentioned in our thread about broadcast standards. r103 requires the R, G and B values to be confined to the range 5 - 246, which is a wider range than BT.709’s 16 - 235 for YUV luminance. I’m assuming that MPEG-2 is the target deliverable and the end of the line as far as processing is concerned; I have not yet gotten ffmpeg to export to XDCAM. Here is the ffmpeg code that accomplishes the r103 clipping:

ffmpeg -y  -i "rawtest.avi"  -c:v mpeg2video  -pix_fmt yuv422p  -vb 50M  -minrate 50M  -maxrate 50M  -vf lutrgb=r='clip(val,44,200)',lutrgb=g='clip(val,44,200)',lutrgb=b='clip(val,44,200)' -c:a pcm_s16be  -f vob  clipped.mpg

I also need to address 1080i if such a deliverable is needed.

I also wrote a program which reports the maximum and minimum R, G and B values. You can have the source code, but it requires PureBasic, which is a handy program to have anyway. It also displays an RGB parade.

Also I have found that sharpening the video messes up the r103 compliance with all of the ringing artifacts it introduces so I skip the sharpening now.

HuffYUV, MagicYUV, FFVHuff, and Ut Video are all based on Huffman encoding, which is lossless by nature. Since they all use the same general encoding scheme, they all produce output file sizes that are essentially the same. (FFV1 is also lossless, but it uses range coding rather than Huffman, which typically buys it a somewhat smaller file.)

Rawvideo is also lossless, so that means you could use any of the formats listed so far and achieve the least alteration possible from a direct export out of Shotcut. The only alteration would be that chroma upscale if the source was 4:2:0, and I would not sweat over that in the least. After all, the whole reason that the concept of chroma subsampling exists is because the human eye is not sensitive to small color variations. The luminance will remain unchanged, and that’s what counts.

Since the decoded data is identical across all of them, the deciding factor is how to get the smallest file size for storage yet have speedy playback. FFV1 is out for being too slow on playback. HuffYUV is out because your source is 4:2:0 but HuffYUV does not have a 4:2:0 mode, meaning you would have to unnecessarily burn disk space upscaling your source to 4:2:2 to encode as HuffYUV.
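(For anyone who wants to verify that, FFmpeg will list the pixel formats each encoder accepts:

ffmpeg -h encoder=huffyuv
ffmpeg -h encoder=utvideo

The “Supported pixel formats” line should show no 4:2:0 option for huffyuv, while utvideo has one.)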

That leaves MagicYUV, FFVHuff, and Ut Video. If Shotcut is your only tool, you could use any of them and be equally happy. If you want to carry the exported file to Premiere or Resolve, then Ut Video would get my vote because it has mature, open-source, cross-platform native video drivers that allow other editors to read those files. FFVHuff does not have strong support outside of FFmpeg, and MagicYUV is a closed implementation that is still working through a few corner-case issues last I checked.

EDIT: In theory, it should be possible to use a soft clip LUT to clamp any values outside your R 103 range. I did a quick Google search and other people have attempted the same thing, although I haven’t yet found a downloadable LUT file that accomplishes it. If you have Resolve handy, you could make one.

You’re forgetting that the Shotcut export is not the final step: it will be re-encoded using ffmpeg, where the RGB levels are adjusted, the bitrate is confined to 50 Mbps, and it is saved as MPEG-2 for delivery because that’s what the client wants. The interpositive (to borrow a film term), or Shotcut export to UT or whatever, is kind of irrelevant and could probably be deleted if disk space is an issue.

The file exported by Shotcut is not intended for playback or delivery.

How would any other codec be an improvement over the ones listed above for what you’re trying to do? I’m not understanding what was forgotten.

You could export to FFV1 and get a smaller file. However, FFV1 decompression is so CPU-intensive that smooth playback is nearly impossible using VLC even with good hardware. The point is this… how can you verify that your Shotcut export is correct and ready for the next phase of processing if you can’t preview it in a media player without it stuttering? Thus, FFV1 is out. Speedy playback matters to verify an export for correctness.

File size matters too. Rawvideo is not a good candidate because of this. The file size is three times larger than Ut Video for the exact same level of quality. What if Shotcut is unable to export an hour-long project because there isn’t enough free disk space for rawvideo? Using Ut Video gives you 3x more headroom, or 3x lower storage costs, however you wish to look at it. There is everything to gain and nothing to lose by using Ut Video (or any lossless codec) over rawvideo as an intermediate format.

It perfectly fits your situation, unless you have additional project requirements that haven’t been mentioned yet.

The file exported from Shotcut is not intended for playback or distribution. It will be made r103 compliant and distributed or played back in MPEG-2 or XDCam. I agree that UT would be the best format for this interpositive file output by Shotcut.

To the best of my knowledge, Shotcut cannot produce an r103-compliant file. For this, ffmpeg is needed.

Ut Video is not intended for distribution. It’s an intermediate format. It checks all the boxes you requested in post #5. I’m scratching my head trying to figure out what else I can do for you since something is apparently still forgotten.

So I did some more investigative work into the LUT approach of legalizing levels and found a company that sells an R 103 LUT as part of a $79 pack:

https://legalluts.com/

(There may be freebie options out there, but I didn’t find one right off.)

Basically, if you put a LUT 3D filter last on the Shotcut master track and use the R 103 LUT file provided in their pack, then the LUT will soft clip values to the R 103 range and Shotcut can export native R 103 direct to your deliverable format, no intermediate file needed. And this would probably look better than the hard clip that FFmpeg would do. But I think there are a few catches.

First, Shotcut does YUV internal processing in MPEG range (16-235). @brian, do you know of an Export > Other > mlt_* specifier that enables Shotcut to process YUV in full-range internally? If I simply do color_range=jpeg and bring exported video back into Shotcut, the waveform shows the video data floating between 16-235 with empty space above and below the graph instead of 0-255 data.

The question of Shotcut’s export range sets up the next problem… will the FFmpeg post-processing step take 0-255 video and shrink it to 5-246, or will it take 16-235 video and expand it to 5-246? If it requires 0-255 as input, then you’ll need full-range YUV (which we’re trying to figure out how to export from Shotcut) or you’ll need RGB. If you use RGB, then rawvideo will not work as an intermediate format because it’s YUV only. Also, rawvideo cannot be passed into the lutrgb FFmpeg filter because that filter requires an RGB input to work properly. You would need the lutyuv filter for a rawvideo source.
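To make that last point concrete, here is a lutyuv sketch for a headerless rawvideo source (every parameter is a placeholder, and the clip values shown are ordinary studio-range limits rather than R 103 numbers):

ffmpeg -f rawvideo -pixel_format yuv422p -video_size 1920x1080 -framerate 30 -i input.yuv -vf lutyuv=y='clip(val,16,235)':u='clip(val,16,240)':v='clip(val,16,240)' -c:v rawvideo -f rawvideo clamped.yuv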

Third, how will the FFmpeg command signal color range in the final deliverable file? If the input is full-range YUV or RGB but the deliverable needs to be MPEG range, then doing scale=in_range=full:out_range=mpeg will cause compression to 16-235 that will ruin your values. If the input was full-range YUV, then I suppose you could leave the expanded values and just tag the file as MPEG range to avoid range compression, but we don’t have a full-range YUV intermediate yet.
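The distinction as FFmpeg commands would look something like this (a sketch; file names are placeholders). The first command actually compresses the values; the second only re-tags the range flag and leaves the values untouched:

ffmpeg -i full_range.avi -vf scale=in_range=full:out_range=limited -c:v utvideo squeezed.avi
ffmpeg -i full_range.avi -vf setrange=limited -c:v utvideo tagged.avi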

I’m no expert on R 103, but I’m pretty familiar with FFmpeg range processing and it very much likes 0-255 or 16-235/240. Getting a different range will be… interesting. :slight_smile: I’d love to know how you get around it in your final solution.
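P.S. If the LUT route wins out but you’d rather stay in FFmpeg for the post-processing, the lut3d filter can apply the same .cube file there (a sketch; r103.cube is a hypothetical name standing in for the file from their pack):

ffmpeg -i shotcut_export.avi -vf lut3d=r103.cube -c:v mpeg2video -pix_fmt yuv422p -b:v 50M clipped.mpg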

Hi Austin -

I really appreciate your interest and effort but I think you’re going to way too much trouble for something that probably applies to only a small subset of Shotcut users. I’m happy with the free ffmpeg solution which doesn’t require an external paid LUT product and requires no modification to Shotcut.

My ffmpeg solution uses LUTRGB so even if you give it a YUV file, the output is r103 compliant. Plus, I don’t fully understand what ffmpeg is doing but it definitely isn’t giving a hard clip as you might imagine. You have to fuss with the numbers and do a lot of trial and error but it does work.

As Shotcut uses ffmpeg, you could incorporate my ffmpeg commands into Shotcut, I imagine without much effort, but that call is up to Dan.

I tend to be hard to please but I am happy with the results I’m getting with ffmpeg and the fact that the user doesn’t have to pay for an external solution is a bonus.

If all you want is to adjust the levels to r103 compliance, this ffmpeg code will do it:

ffmpeg -y -i "test.avi" -vf lutrgb=r='clip(val,44,199)',lutrgb=g='clip(val,44,199)',lutrgb=b='clip(val,44,199)' output.mpg

If you want true interlace, add tinterlace=4 to the end of the -vf filter chain.

If this code were incorporated into Shotcut, the user would supply such settings as codec, bandwidth, pixel and audio formats as part of the normal Shotcut export settings.

Now back to my struggle in dealing with Windows 10 update errors.

It’s possible that ffmpeg is giving me 16 - 235 by default; this is something that deserves further scrutiny as I’m not fully up to speed on the subject. Maybe I need to address it in my ffmpeg script?
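One way to scrutinize it (a sketch, assuming I understand the signalstats filter correctly) is to have ffmpeg print the actual per-frame extremes of the file it produced:

ffmpeg -i clipped.mpg -vf signalstats,metadata=print -f null -

That should show directly whether the luma is sitting at 16 - 235 or somewhere else.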

Here is the ffmpeg code with BT.709 color information:

ffmpeg -y  -i "D:\Videos\test.avi"  -c:v mpeg2video  -pix_fmt yuv422p  -vb 50M  -minrate 50M  -maxrate 50M  -vf lutrgb=r='clip(val,44,199)',lutrgb=g='clip(val,44,199)',lutrgb=b='clip(val,44,199)',scale=out_color_matrix=bt709:out_range=limited  -color_primaries bt709  -color_trc bt709  -colorspace bt709 -c:a pcm_s16be  -f vob  clipped.mpg

There seem to be some problems with the latest beta waveform; it behaves differently than older versions. This version clamps the display to limited range, but the actual YUV values are unclamped when exporting (still there), so there is a disconnect between the scope and the actual values. I’ll make a separate post about it.

@chris319 - a potential issue with clipping in RGB and then converting to YUV422P8 is that the conversion (especially the subsampling step) will generate new illegal values.

Illegal in what sense? I examine the values in the final exported yuv .mpg file and don’t see illegal values. This is using a tool external to Shotcut.

I would love to port my PureBasic scope program to procedural C (not C++) but have no experience with Windows graphics in C.

Potentially new illegal values - i.e. out-of-gamut errors, out of range, however you want to define it - since new values are generated by the subsampling, even though you started 100% within limits.

It’s usually not a big deal for most types of content - and you’re usually allowed ~1% leeway. Check with whoever you’re submitting to.

Was that a typo? R, G, B clipped to the range [44,199] seems drastic. That is going to reduce the contrast and look washed out. Black will not be “black” anymore, white will not be “white” anymore, colors will be off, saturation low. That’s probably why you’re not seeing illegal values.

Is there a reason why you are targeting EBU r103?

Clients require it. We had a big discussion about broadcast standards a few weeks ago.

No you’re not.

I’ve checked all of this out using the tool YOU helped me with and it passes muster. Do you have evidence to the contrary? I need proof, not supposition.

ffmpeg is funny but those are the values needed to get RGB 5 - 246.

Look at EBU r103

“Certain operations and signal processing may produce relatively benign gamut overshoot errors in the picture; therefore, the EBU further recommends that measuring equipment should indicate an “Out-of-Gamut” occurrence only after the error exceeds 1% of an integrated area of the active image. Signals outside the active picture area shall be excluded from measurement.”

This is very common in submission guidelines for other clients and for non-EBU broadcasters as well. EBU is not common for USA/North American targets. But for EBU, that 1% of the active image area means that values between 1 and 4, or between 247 and 254, in either Y or the converted R, G, B channels are allowed.

Professional legalizers typically filter out the channels before measuring as well, and they will give you a % and/or hotspot visualization.

Yes, I totally believe using those clip values will “pass” mathematically. That’s my point - because the contrast is too low.

I suspect it will fail on other submission criteria if you use those clip values. Test it on a colorbars video. Does it “look” right to you ? Does “black” look black to you ? Do the colors look right to you ? Do skintones look right?

Yes.
Yes.
Yes.
Yes.

I’ll let you write that code.

Find me some U.S. or Canadian standards.