Cool test, I learned something. That’s why I ask questions.
In that case, the feature you want may already exist. If you want Shotcut to operate in rgb24 instead of YUV, I think you can add these lines to Export > Other:
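Something along these lines, roughly (the exact property names here are my guess from MLT's consumer properties, so please double-check before relying on them):

```
mlt_image_format=rgb24
pix_fmt=rgb24
```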
Then in theory (I haven’t tested yet), exporting this to HuffYUV would get you a completely RGB pipeline. Is that what you were wanting?
ffmpeg -h encoder=huffyuv doesn’t list yuv444p as a supported pixel format, so I think that prevents Shotcut from exporting 4:4:4 HuffYUV. This is one of many reasons I switched to Ut Video, to get the extra pixel format support.
Same with FFV1… ffmpeg -h encoder=ffv1 doesn’t list rgb24 as a pixel format. However, it has rgb48le, rgba64le, and a slew of bgrpXle bit depths, or the simpler bgr0 format, to work with. Specifying any of those would get you the RGB lossless you seek, albeit with massively more headroom than you may ever need. These archival guys don’t mess around haha.
I have the impression that all transitions and track compositions occur in YUV also. So regardless of filters, there will be a conversion to YUV if you use the timeline. But maybe I am mistaken in my understanding.
No, the track blending is using frei0r.cairoblend, which is RGB, and there is an optimization to not blend if not needed. You are correct about all transitions at this time. I made a list of the YUV filters in this post:
Here is a snippet where the image format is declared in the header:
To find out image format you need to get true bit count: it is stored in
bpp_override if biSize > sizeof(BITMAPINFOHEADER), else (or if bpp_override
is 0) it's stored in biBitCount.
Then you can find out image format with this table
| bit | image |
|-----|-------|
| 16  | YUY2  |
| 24  | RGB   |
| 32  | RGBA  |
In the rest of this document YUV will indicate image of type YUY2, and RGB
will be used for both RGB and RGBA.
Scanning through the rest of the document, and the way that the data itself is stored, the format by nature cannot store higher than 4:2:2. It only does YUY2 encoding. The table above is consistent with the pixel formats indicated by ffmpeg -h encoder=huffyuv. If 4:4:4 is an absolute requirement, then HuffYUV cannot serve your needs.
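That 4:2:2 ceiling falls straight out of the byte layout. Here is a small sketch of the packing (my own illustration, not code from the spec):

```python
# YUY2 packs each pair of horizontally adjacent pixels into 4 bytes:
# [Y0, U, Y1, V] — every pixel keeps its own luma sample, but the
# pair shares one U and one V, so chroma resolution is halved
# horizontally. That is 4:2:2 by construction; 4:4:4 simply cannot
# be represented in this layout.

def yuy2_frame_bytes(width, height):
    assert width % 2 == 0, "YUY2 requires an even width"
    return width * height * 2       # 4 bytes per 2-pixel pair

def yuv444_frame_bytes(width, height):
    return width * height * 3       # one Y, one U, one V byte per pixel

print(yuy2_frame_bytes(1920, 1080))    # 4147200
print(yuv444_frame_bytes(1920, 1080))  # 6220800
```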
The source files are (generally?) compressed, I assume to save space and processing power on the source device (camera, phone, GoPro, etc.). When I create an intermediate file, am I effectively uncompressing the file to work with in the editing process? Does it open up editing options and make a better starting point when it comes to compressing the final edit into a delivery format/file?
I’m not sure what to do with this. The gray box quotes one of my previous posts, and the follow-up text quotes the OP which started this entire topic. I don’t see a new question in this post. The answers to the OP’s questions were detailed in a four-point bullet list at the end of post #9: Intermediate Files for Editing
If I use my eyedropper program on Shotcut’s color picker tool, I see 16-180-16 in the color patch in the color picker window (immediately to the left of the fields for entering R, G and B values); however, I see 3-154-9 in Shotcut’s preview window.
If I load my .mp4 file which has 16-180-16 over the entire raster, I see 4-189-4 in Shotcut’s preview window.
If I load this same mp4 file into my other color checking program, which reads YUV pixels directly out of the file rather than using an “eyedropper” to read pixel values under the mouse on the screen, I see 16-179-16. An mp4 file encoded with Shotcut’s lossless H.264 shows 0-84-0. I get the same colors from H.264 whether I export as 420, 422 or 444.
Using Ut video I also see 0-84-0. If I view the Ut video file on SMPlayer and use the eyedropper I see 18-221-9, and 17-221-9 if I play it back using VLC.
I get 4-189-3 using FFV1 and HuffYUV with SMPlayer.
I haven’t found a path to accurate colors in the latest version of Shotcut.
Shotcut/MLT uses libswscale and not zscale, which can explain differences. Also, libswscale has various interpolation and rounding accuracy options that will make MLT differ slightly from some ffmpeg command line executions. Here are the options that MLT uses for libswscale for pix_fmt conversion:
Still, we’re not getting accurate colors even when ffmpeg is not involved, such as when using Shotcut’s internal color generator. See post #70 in this thread. Note that there is a difference between the preview window and the color patch displayed by Shotcut’s color picker.
There is still a bug in the player/preview: if you load a file with BT.601 color into a BT.709 project (a color generator or still image defaults the project to 709), then the colors are not transformed and you get the problem you describe. If you change Settings > Video Mode to SD NTSC, then you get something much closer to expectation:
The bmp shown in Paint was generated using ffmpeg 4.0.2 from the MP4 exported from Shotcut using its color generator and lossless H.264. The eyedropper reading was taken from the bmp loaded into Paint. Not great, but not as bad as before, and not as bad as what you are reporting.
I do not know about your MP4 as I do not have it (generated using that ffmpeg command above?), but maybe some mismatch between the source and target colorspace coefficients is causing a problem somewhere that requires further investigation. I do not know much about zscale, but why is “range=full” set while the format is not yuvj420p? That looks like a problem to me: it would be using the full 8-bit range without signaling it. When I run that command with range=full and range=limited, both give color_range=tv in the ffprobe output.
Here is the result of loading range=limited into Shotcut, and using the eyedropper on its player:
I found that I got inaccurate colors if the range was not explicitly set to “full”. I can try this with yuvj420p pixels, with and without explicitly setting it to “full”, and see what the results are. Not only do I have to check whether the colors are right, but I also need to see if the video will play back at all in VLC, SMPlayer and Windows Media Player. I will report my findings here.
I have a couple of programs which I wrote and use to check video color. Some work with RGB and some with YUV. My code has been scrutinized by guys on another forum so I’m pretty confident of it.
All of my work is in the BT.709 color space. IMO BT.601 is passé and has no business in computer video in 2018. BT.601 was only ever intended to be a bridge between analog and digital video.
Only recently has the Chrome browser started decoding video in the BT.709 color space. Microsoft Edge uses BT.709, but Firefox is stuck in BT.601. I and others have been “squeaky wheels”, and it is an open Firefox bug, but there has been no action for quite some time. The standard for computer video, and by extension web video, is sRGB, which uses the same coefficients as BT.709, so you decode the colors as if they were 709. It is logical that Edge uses 709 because Microsoft developed sRGB in conjunction with Hewlett-Packard. Standards are wonderful things, provided you stick to them.
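To put numbers on why the coefficients matter: encode a color with one matrix and decode it with the other, and it lands nowhere near where it started. A back-of-the-envelope sketch with my own limited-range YCbCr math (not Shotcut/MLT code); the mismatched result comes out in the same neighborhood as the 3-154-9 I reported earlier:

```python
# Limited/MPEG-range 8-bit YCbCr math for BT.601 vs BT.709.
KR = {"bt601": 0.299, "bt709": 0.2126}
KB = {"bt601": 0.114, "bt709": 0.0722}

def rgb_to_ycbcr(r, g, b, matrix):
    kr, kb = KR[matrix], KB[matrix]
    rp, gp, bp = r / 255, g / 255, b / 255
    y = kr * rp + (1 - kr - kb) * gp + kb * bp
    cb = (bp - y) / (2 * (1 - kb))
    cr = (rp - y) / (2 * (1 - kr))
    # quantize to limited-range 8-bit (Y: 16-235, C: 16-240)
    return round(16 + 219 * y), round(128 + 224 * cb), round(128 + 224 * cr)

def ycbcr_to_rgb(y, cb, cr, matrix):
    kr, kb = KR[matrix], KB[matrix]
    yn, cbn, crn = (y - 16) / 219, (cb - 128) / 224, (cr - 128) / 224
    rp = yn + 2 * (1 - kr) * crn
    bp = yn + 2 * (1 - kb) * cbn
    gp = (yn - kr * rp - kb * bp) / (1 - kr - kb)
    clip = lambda v: min(255, max(0, round(v * 255)))
    return clip(rp), clip(gp), clip(bp)

ycc = rgb_to_ycbcr(16, 180, 16, "bt601")  # encode as BT.601
print(ycbcr_to_rgb(*ycc, "bt601"))  # (16, 179, 15): right matrix, ~original
print(ycbcr_to_rgb(*ycc, "bt709"))  # (4, 154, 10): wrong matrix, green crushed
```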
Just to add. I wish I’d found this thread earlier. And as a sidenote, from all the research I did this thread is of the highest quality I have found on the subject.
Recently, after testing ProRes vs. FFV1 with a view to using one as a mezzanine codec on my limited hardware, this is what I found. ProRes at standard quality is approx. 100 Mb/s for 1080p (is it different for 4K?), and FFV1’s files are approx. 20% larger. I could see a visible difference in quality, I guess because FFV1 is lossless, while ProRes at standard quality is supposedly perceptibly lossless (according to some, at this profile). Just mentioning…
However, it quickly became apparent to me that FFV1 unfortunately cannot be used as an everyday editing/mezzanine/intermediate codec. Decoding speed is too slow. Encoding speed was slower still, and it could not keep my cores at peak consistently.
So we’re to infer that FFV1 looked better to you than ProRes? If FFV1 looked better to you than ProRes then ProRes isn’t exactly “perceptually lossless”, now is it?
Here’s what I do if delivering to YouTube or something similar: edit in the native format, probably mp4. Since the file you upload to YouTube is effectively an intermediate, export to a lossless format such as FFV1 or HuffYUV and upload that. It’s going to be a heckuva long upload, so you might as well start the upload and go to bed. YouTube is going to resample your video no matter what, so you save a generation of lossy encoding by making your “upload” file a lossless format. To borrow a film term, this would be an interpositive.
I have a 2 TB hard drive and disk space has not become an issue yet, but if you need to recover the disk space just burn your interpositive to DVD.
I tested FFV1 and HuffYUV with Shotcut’s white-noise generator. White noise is random and will give a codec a hard time encoding and decoding. In my casual experimenting, players seemed to have an easier time with HuffYUV. With FFV1 I had audio “hash” but the video was frozen; it didn’t look like the swarm of crawling ants that you would expect. HuffYUV showed me a swarm of crawling ants, so that is my preference for interpositives. I’m under the impression that HuffYUV and FFV1 are truly lossless and the rest, including ProRes and lossless H.264, are “quasi-lossless”. Is this about right? One of the best features of Shotcut is the wide array of available export formats. I have looked at commercial NLE programs and none had the lossless export formats of Shotcut. As of version 18.11.18 the lossless formats are exporting without color error, so many thanks to Dan for that.
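The white-noise stress test makes sense from first principles: noise has no redundancy for a lossless coder to exploit, so the codec must effectively pass every bit through. A quick illustration with a general-purpose compressor (zlib here, which is not FFV1’s range coder, but the principle is the same):

```python
import os
import zlib

# White noise is incompressible; a uniform frame collapses to almost
# nothing. This is why noise is a worst case for lossless codecs and
# for the players that have to decode them in real time.
noise = os.urandom(1_000_000)   # stand-in for a white-noise frame
flat = bytes(1_000_000)         # stand-in for a solid-color frame

noise_ratio = len(zlib.compress(noise)) / len(noise)
flat_ratio = len(zlib.compress(flat)) / len(flat)

print(f"noise: {noise_ratio:.3f}")  # ~1.0: essentially no compression
print(f"flat:  {flat_ratio:.4f}")   # well under 0.01: near-total collapse
```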
Here is a short video shot in H.264/mp4, the title added in Shotcut, exported to HuffYUV and the HuffYUV mkv file uploaded to YouTube:
Which was the point I was trying to make, I guess not very well or obviously.
That FFV1 doesn’t work well with white noise does seem interesting, alright. My tests with ProRes and FFV1 were with my own files for an editing job and, more importantly, were quick and dirty, just enough to convince me to re-encode my intermediate files with ProRes, albeit with a slight loss in quality.