This is impossible if you use various filters, track blending, or transitions. Internally, it currently uses 8-bit YUV 4:2:2 and RGB(A). If you simply want to edit a single clip, playlist, or single track with no filters, or only select filters you know can operate in YUV, you can use YUV 4:2:2.
This always depends on the input and how perfectionistic the output must be. Technically, a yuv420p input converted to yuv444p output is only lossless if chroma scaling is done with nearest neighbor. Similarly, if the input is RGB, then the output must also be RGB to avoid YUV conversion loss. There are nuances like this to every input/output pair. It’s more than just Shotcut settings to consider, and it’s never set-once-and-forget-it if total perfection is the goal.
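To illustrate the nearest-neighbor point: repeating each chroma sample to upscale, then taking every other sample to downscale, round-trips exactly, whereas any interpolating scaler blends neighboring values and cannot be undone. A minimal sketch with a made-up chroma row:

```python
# Hypothetical half-resolution chroma row, as in 4:2:0 (values are illustrative)
chroma = [16, 128, 240, 64]

# Nearest-neighbor upscale toward 4:4:4: repeat each sample
up = [c for c in chroma for _ in (0, 1)]

# Downscale back by taking every other sample: the original is recovered exactly
down = up[::2]
assert down == chroma

# An interpolating scaler (averaging neighbors, for example) would produce
# in-between values here and the round trip would no longer be lossless.
```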
Yes. Can the human eye detect the loss? Usually no.
That will probably always be true, but have you tried the latest FFmpeg Git master? It has faster Cineform decoding as well as a new Cineform encoder. Your wish was granted. Maybe it will be fast enough now. See:
What is the goal? “No conversion” sounds like the goal is to retain highest quality, but “fastest decode” sounds like another goal is to edit directly on this high-quality file and performance needs to be better. Would you be better served by using the original files (no transcode loss, highest quality, high compression, slow decode) as the source and then do editing on proxies? This workflow is built into Shotcut.
Yes. If there are no filters involved, then command-line ffmpeg makes more sense and will provide greater control.
Essentially. It will be 8-bit values that have been expanded (multiplied) up to 16-bit space. Mathematically, there would be gaps between values that would cause banding if processed further.
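As a rough sketch of those gaps: one common way to expand 8-bit into 16-bit space is multiplying by 257, so 255 maps to 65535 (the exact expansion MLT uses is an assumption here). Only 256 of the 65536 possible codes end up in use:

```python
# Expand every 8-bit code into 16-bit space (255 * 257 == 65535)
levels16 = [v * 257 for v in range(256)]

# Adjacent codes are 257 apart; any further processing that lands between
# them gets quantized back to these sparse values, which shows up as banding.
gaps = {b - a for a, b in zip(levels16, levels16[1:])}
print(gaps)  # {257}
```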
Caveat: GPU filters are almost never used. Even if they are, only the processing happens in 16-bit space. The timeline output is still an 8-bit choke point. Exporting as yuv422p10le is still 8-bit expanded to 10-bit, not a native 10-bit pipeline all the way back to the source.
But is GPU output delivered to ffmpeg for encoding at 16-bit? I’ve actually not had clarification on that part. If so, then yuv422p12le could be a viable output format and meet cinema standards after conversion to XYZ. This is an unsung selling point. If not, then frame delivery to the encoder (which is what I meant by timeline output) is choking the pipeline to 8-bit.
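To make the choke point concrete: even if frames are tagged 10-bit on export, values expanded from 8-bit still occupy only 256 of the 1024 available levels. A sketch using bit replication for the expansion (the actual method used is an assumption):

```python
# Expand 8-bit codes into the 10-bit range by bit replication (255 -> 1023)
ten_bit = sorted({(v << 2) | (v >> 6) for v in range(256)})

print(len(ten_bit))   # 256 distinct levels, not a native 10-bit pipeline's 1024
print(ten_bit[-1])    # 1023
```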
I have a hard time understanding this too, Shotcut leader.
So it is processed at 16-bit float, then rendered at 8-bit on the timeline, or at export “interpolated” to fit whatever bit depth the export format uses?
Any plan on making the program/MLT 16-bit native? It would be future-proof and make everything easier, with a perfect transition all at 16-bit float across GPU/timeline/rendering.
Is it on your mind, or would it need too many resources or too much team work?
I really wish you the best, guys, and you have a winner of a product.
No, but technically that reduction occurs outside of the transitions and track blending.
Yes, but first Brian and I have made plans to run some experiments to decide on a direction for greater-than-8-bit and linear color processing that also addresses performance (hopefully better than today’s 8-bit but not worse). We expect to start these experiments by the end of the year.