This always depends on the input and how perfect the output must be. Technically, a yuv420p input converted to yuv444p output is only lossless if chroma scaling is done with nearest neighbor. Similarly, if the input is RGB, then the output must also be RGB to avoid YUV conversion loss. There are nuances like this for every input/output pair. It's more than just Shotcut settings to consider, and it's never set-it-and-forget-it if total perfection is the goal.
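To see why nearest neighbor is the only lossless choice here, consider a toy sketch (plain Python, not Shotcut or FFmpeg code; the function names are made up for illustration). Duplicating each chroma sample into a 2x2 block and then decimating back recovers the original exactly, which would not hold for an interpolating scaler like bilinear:

```python
def upsample_nn(chroma):
    # Nearest-neighbor 2x upsample: each sample is duplicated into a 2x2 block,
    # as when converting 4:2:0 chroma up to 4:4:4.
    out = []
    for row in chroma:
        wide = [v for v in row for _ in (0, 1)]  # duplicate horizontally
        out.append(wide)
        out.append(list(wide))                   # duplicate vertically
    return out

def downsample_tl(chroma):
    # Take the top-left sample of each 2x2 block (nearest-neighbor decimation).
    return [row[::2] for row in chroma[::2]]

plane = [[16, 200], [90, 42]]          # tiny stand-in for a chroma plane
assert downsample_tl(upsample_nn(plane)) == plane  # lossless round trip
```

With bilinear or bicubic scaling, the upsampled values would be blends of neighbors, so the round trip back to 4:2:0 would no longer reproduce the source bit-for-bit.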
Yes. Can the human eye detect the loss? Usually no.
That will probably always be true, but have you tried the latest FFmpeg Git master? It has faster Cineform decoding as well as a new Cineform encoder. Your wish was granted. Maybe it will be fast enough now. See:
What is the goal? “No conversion” sounds like the goal is to retain the highest quality, but “fastest decode” sounds like another goal is to edit directly on this high-quality file with better performance. Would you be better served by using the original files (no transcode loss, highest quality, high compression, slow decode) as the source and then editing on proxies? This workflow is built into Shotcut.
Yes. If there are no filters involved, then command-line ffmpeg makes more sense and will provide greater control.
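As a sketch of what that command line might look like, here is a Python snippet that assembles an ffmpeg invocation with no filters. The file names are placeholders, and FFV1 is just one example of a lossless codec; the point is that with no filter graph, every option is explicit and under your control:

```python
# Sketch: a filter-free, lossless ffmpeg transcode assembled as an argv list.
# "in.mov"/"out.mkv" are hypothetical file names for illustration.
def lossless_cmd(src, dst):
    return [
        "ffmpeg",
        "-i", src,        # input file; no -vf/-af, so no filtering
        "-c:v", "ffv1",   # FFV1: a lossless video codec
        "-c:a", "copy",   # pass the audio stream through untouched
        dst,
    ]

cmd = lossless_cmd("in.mov", "out.mkv")
# run it with: subprocess.run(cmd, check=True)
```

Keeping it as a list (rather than a shell string) also avoids quoting problems with file names containing spaces.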
Essentially. It will be 8-bit values that have been expanded (multiplied) up to 16-bit space. Mathematically, there would be gaps between values that would cause banding if processed further.
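Those gaps are easy to demonstrate. One common expansion multiplies each 8-bit value by 257 (since 65535 / 255 = 257), which maps full-range 8-bit onto full-range 16-bit but leaves every intermediate code value unreachable:

```python
# All 16-bit values an 8-bit source can produce under the v * 257 expansion.
expanded = [v * 257 for v in range(256)]
assert expanded[0] == 0 and expanded[-1] == 65535  # endpoints map correctly

# Every neighboring pair is 257 apart: 256 in-between 16-bit codes can never occur.
gaps = {b - a for a, b in zip(expanded, expanded[1:])}
assert gaps == {257}
```

A filter that later shifts or stretches these values lands in those empty slots unevenly, which is exactly what shows up as banding.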
Caveat: GPU filters are almost never used. Even when they are, only the filter processing itself happens in 16-bit space. The timeline output is still an 8-bit choke point. Exporting as yuv422p10le is still 8-bit expanded to 10-bit, not a native 10-bit pipeline all the way back to the source.
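The same gap argument applies at 10 bits. A simple expansion shifts each 8-bit value left by two bits (v * 4), so only every fourth 10-bit code is reachable (other pipelines scale by 1023/255 instead, with a similar result):

```python
# 8-bit values expanded into 10-bit space via v << 2 (i.e. v * 4).
reachable = {v * 4 for v in range(256)}
assert len(reachable) == 256            # only 256 of the 1024 ten-bit codes occur
assert max(reachable) == 1020           # note: true 10-bit full range tops out at 1023
assert all(v % 4 == 0 for v in reachable)  # 768 codes in between are never used
```

So a yuv422p10le export of an 8-bit timeline has 10-bit containers but still only 8 bits of real precision.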