UtVideo and White Noise

White noise is the hardest content for a compression algorithm to deal with, because random data has no redundancy for the encoder to exploit.

I have exported a test file consisting of audio and video white noise using UtVideo.

UtVideo handles white noise satisfactorily and has no color errors to speak of. HuffYUV also handles white noise satisfactorily.

FFV1 does not handle white noise quite as well.

FYI.

Aww, I was starting to like FFV1 as an export codec. Where specifically did it let you down? CPU effort to encode/decode? File size exploding on random data? Color accuracy issues?

My benchmark for testing codecs is video and audio white noise, i.e. random data. The video normally looks like a swarm of crawling ants. When I encoded white noise with FFV1, the swarm of ants appeared motionless, like a freeze frame. I tried playing back in both VLC and SMPlayer, with the same result in each. Now that the color reproduction has been straightened out in UtVideo, my format for interpositives will be UtVideo. So for YouTube it would be:

Camcorder H.264 -> Shotcut export to UtVideo lossless and upload -> YouTube re-encodes the UtVideo to H.264. This eliminates one generation of lossy compression.
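For anyone who wants to reproduce that middle step outside Shotcut, the same lossless interpositive can be made with ffmpeg. A minimal sketch, assuming ffmpeg is installed; the file names are placeholders:

```python
# Sketch: build an ffmpeg command that re-encodes a camcorder H.264 file
# to a lossless UtVideo interpositive with uncompressed PCM audio.
# Assumes ffmpeg is on PATH; "camcorder.mp4" / "master.mkv" are placeholders.
import subprocess

def utvideo_command(src, dst):
    return [
        "ffmpeg", "-i", src,
        "-c:v", "utvideo",    # mathematically lossless video
        "-c:a", "pcm_s16le",  # uncompressed 16-bit audio
        dst,
    ]

cmd = utvideo_command("camcorder.mp4", "master.mkv")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually run the encode
```

The resulting MKV is what you would upload in place of the Shotcut export.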

Correct me if I’m wrong, but codecs such as the trendy ProRes and H.264 at CRF 1 are not truly lossless the way UtVideo is; they are only quasi-lossless.

Caveat, though: the UtVideo files will be large and will take a long time to upload, so you must factor this into your workflow. If upload time is a real consideration, then you might settle on a codec that gives a more compact and faster-uploading file, bearing in mind that YouTube will re-encode whatever you upload at a low-ish bit rate to conserve their transmission bandwidth.


My hardware can’t decode FFV1 fast enough to play it back in real-time, but I can load the file into Shotcut and click through the timeline to verify that the frames are actually in there. To your point, not being able to get a real-time playback means it would be difficult to verify that the export looks right before shipping it off.

I’m genuinely curious about upload formats now, as in effort versus results. You seem to be very familiar with test procedures, so maybe you can give some insight on the best method here.

Let’s say I have a Shotcut project which I export to both Ut Video and DNxHR. (I could have chosen ProRes, but DNxHR is easier to go cross-platform and also has an 8-bit option which should require less disk space than ProRes which is always at least 10-bit.)
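(For reference, an 8-bit DNxHR file can be produced with ffmpeg as well. A sketch under the same assumptions as above — ffmpeg available, placeholder file names; dnxhr_hq is one of the 8-bit profiles:)

```python
# Sketch: build an ffmpeg command for an 8-bit DNxHR encode.
# Assumes ffmpeg; file names are placeholders. dnxhr_lb, dnxhr_sq and
# dnxhr_hq are the 8-bit profiles; dnxhr_hqx and dnxhr_444 are 10-bit+.
def dnxhr_command(src, dst, profile="dnxhr_hq"):
    return [
        "ffmpeg", "-i", src,
        "-c:v", "dnxhd", "-profile:v", profile,  # DNxHR rides on the dnxhd encoder
        "-pix_fmt", "yuv422p",                   # 8-bit 4:2:2
        "-c:a", "pcm_s16le",
        dst,
    ]

print(" ".join(dnxhr_command("project.mp4", "delivery.mov")))
```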

Now let’s say I upload both videos to YouTube, then download them at highest quality using www.ClipConverter.cc or some other ripping service. Once I have the YouTube transcoded versions, what is the best procedure to determine whether the lossless upload retained higher quality (edge sharpness, color accuracy, fewer artefacts) than the visually lossless upload?

If we use SSIM, what percentage would be necessary to say “I’m willing to accept that loss to get the radically smaller upload and storage size”? Same question for PSNR, VMAF, or anything else. For my purposes, if my eye can’t tell a difference, that’s good enough for me. But I’m wondering how to put some science on this in case my eyes aren’t as good as everyone else’s.
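One practical way to attach numbers to that question: ffmpeg ships ssim and psnr filters (and a libvmaf filter, if your build includes it) that compare two videos frame by frame. A sketch, assuming ffmpeg and placeholder file names, with the original export as the reference and the ripped YouTube version as the distorted copy; both inputs must share resolution and frame rate:

```python
# Sketch: build an ffmpeg command that scores a YouTube-transcoded rip
# against the original export using the ssim (or psnr) filter.
# Assumes ffmpeg; "master.mkv" / "youtube_rip.mp4" are placeholder names.
def metric_command(reference, distorted, metric="ssim"):
    return [
        "ffmpeg",
        "-i", distorted,                  # input 0: the ripped YouTube version
        "-i", reference,                  # input 1: the original upload
        "-lavfi", f"[0:v][1:v]{metric}",  # logs per-frame and average scores
        "-f", "null", "-",                # decode and score only, write nothing
    ]

print(" ".join(metric_command("master.mkv", "youtube_rip.mp4")))
print(" ".join(metric_command("master.mkv", "youtube_rip.mp4", metric="psnr")))
```

Run the same command against both rips (UtVideo-sourced and DNxHR-sourced) and compare the averages; ffmpeg prints the scores in its log output.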

I would also assume that the content of the video would play a part in testing too. I would assume that a static shot of a test card would be the worst possible video to upload. VP9 and H.264/265 have frame buffers and are smart enough to realize that no change is happening, and can therefore devote more bitrate to getting a really good first I-frame, knowing that the following P/B “frames” will have very little difference to encode. This will make the video look better than it really is, as the quality would quickly fall apart once motion starts happening and not as much bitrate can be devoted to a single frame anymore. So what would the appropriate amount of motion be to test the lossless upload versus the visually lossless upload?

If I could archive DNxHR rather than Ut Video or FFV1 and have an imperceptible quality difference, that would be a good day in my book. Now how to test it…

You’re right; you need to put some science into it.

Start with a calibrated test signal. Here is a direct link to my test pattern mp4 file which is free of YouTube’s downsampling:

Next you will need an eyedropper program to check the colors.

http://instant-eyedropper.com/

You can either view the test pattern in a browser or, preferably, download it and view it with a player such as SMPlayer which I use, or VLC or yes, Windows Media Player. CAVEAT: Chrome and Microsoft Edge are the only browsers I’m familiar with that are color-accurate. Firefox is a big disappointment in this regard.

I have found that the primary and secondary color patches will be within +/- 3 of their nominal values. The nominal component values are 16 and 180; 16 should be familiar to you, as it is “black” in BT.709. So yellow would be R-G-B 180-180-16.

3 / 255 is a little over 1% if you want a percentage.
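For completeness, the arithmetic behind that figure (nothing assumed beyond 8-bit full scale being 255):

```python
# The +/-3 patch tolerance expressed as a percentage of the 8-bit range.
tolerance = 3
full_scale = 255
percent = tolerance / full_scale * 100
print(f"{percent:.2f}%")  # about 1.18%, i.e. a little over 1%
```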
