The TL;DR … Yes, RGB mode is drop-dead simple compared to YUV. If you’re using an RGB pipeline and you are content with how it looks, then you can safely put any of the above-mentioned lossless codecs into RGB mode like you did for Lagarith and hit the ground running. Your questions are about properties of YUV video that don’t apply to RGB sources.
The longer story…
RGB doesn’t have confusing format variations like YUV because there is only one specification (“sRGB”) and it makes no compromises when it comes to retaining full color data. But there are trade-offs with it:
The file size is huge. It is not suitable as a delivery format.
It is not backwards compatible with analog television receivers or the multitude of modern video formats based on broadcast standards.
Some post-processing functions will be more complex (slower) because the data is not split by nature into luma (grayscale) and chroma (color) components.
YUV can technically represent more colors than sRGB can. sRGB is nowhere close to covering all the colors that the human eye can see. However, because BT.709 is the largest color space that Shotcut supports, and BT.709 and sRGB are basically the same, we will ignore this point for now.
YUV has a strong history in analog television of course. The luma plane (the “Y” part) was the grayscale signal sent to the very first television sets. Then a color layer (the “UV” part) was added as a subcarrier so that the new color signal remained backward compatible with existing B&W televisions.
But television history isn’t why we continue to use YUV in online video delivery and computer graphics today. We use it because it enables significant space and bandwidth reductions without impacting perceived video quality. This is a really big deal.
For the first space reduction, some early science people did some science stuff and discovered that the human eye is much more sensitive to brightness than it is to color variations. Brightness was already represented by the “Y” component in B&W television at the time this research was going on. When deciding how to add color to the signal, they discovered that they could reduce the color information by a factor of four and the human eye would still think the image looked pretty good. That is how insensitive the human eye is to fine color differentials. And this is where we get the YUV concept of “subsampling”:
4:2:0 subsampling means there is one quarter as much color information as there is B&W information. This retains just enough color to make the image look right, but not really enough information to manipulate it with further editing. This is why 4:2:0 is so frequently chosen as a delivery format for DVDs, Blu-rays, YouTube delivery, Netflix delivery… everything. Why would the online streaming companies pay two to four times as much money for storage and network capacity to transfer video data that is not visibly better? Why would customers want to pay for higher Internet speeds to transfer larger videos when the perceived quality doesn’t go up with it? Final delivery is why YUV 4:2:0 is and will continue to be a big part of the video encoding world. RGB can’t be made smaller the way YUV can because “brightness” is distributed across all three pixel components, meaning all three must be fully retained to keep that all-important brightness information. This cross-dependency doesn’t exist with YUV. (Lagarith stores this kind of data as “YV12” in your screenshot drop-down box. Ut Video stores it as yuv420p.)
4:2:2 subsampling means there is half as much color information as there is B&W information. This is what a lot of production studios use internally for editing. Having the extra color information is especially useful for getting cleaner edges when dropping out a green screen. (Lagarith stores this kind of data as “YUY2” in your screenshot drop-down box. Ut Video stores it as yuv422p.)
4:4:4 subsampling means all color information is retained. This format has the same fidelity as RGB, but is stored in YUV form. I’m not aware of it being used very often because it’s easier to use linear RGB or RAW, but it does exist if needed. One notable exception: ProRes has 4444 and 4444 XQ variants that implement 4:4:4 with an alpha channel for VFX integration-style work. Stock media companies often provide titles, logos, and clip art overlay graphics in this format. (Lagarith is unable to store this kind of data. Ut Video stores it as yuv444p.)
Obviously, RGB doesn’t have to specify what subsampling it is using because it is always “4:4:4” in the sense that all color information is always there.
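To put rough numbers on those subsampling ratios, here is a small sketch of uncompressed frame sizes at 1080p with 8 bits per sample (the pixel format names match the FFmpeg-style names mentioned above; the bytes-per-pixel figures follow directly from the subsampling ratios):

```python
# Uncompressed bytes per 1920x1080 frame at 8 bits per sample.
W, H = 1920, 1080

bytes_per_pixel = {
    "rgb24":   3.0,  # R, G, B all fully sampled
    "yuv444p": 3.0,  # no subsampling: same size as RGB
    "yuv422p": 2.0,  # chroma halved horizontally
    "yuv420p": 1.5,  # chroma halved in both directions
}

for fmt, bpp in bytes_per_pixel.items():
    size_mb = W * H * bpp / 1e6  # decimal megabytes
    print(f"{fmt}: {size_mb:.1f} MB per frame")
# → rgb24: 6.2 MB per frame
# → yuv444p: 6.2 MB per frame
# → yuv422p: 4.1 MB per frame
# → yuv420p: 3.1 MB per frame
```

That factor of two between rgb24 and yuv420p, before any codec even runs, is the first half of the space savings described above.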
For 4:2:0 and 4:2:2, YUV has to answer the additional question of “where will the color layer be anchored?” This is called chroma siting. For example, with 4:2:0, one color value has to cover four pixels. But which four? What if the color would be more accurate if it bled over pixel boundaries? There’s a whole science to this, but RGB doesn’t have to care about it.
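As a tiny illustration of the “which four?” question, this hypothetical helper maps a 4:2:0 chroma sample to the luma pixels it covers, assuming the simplest possible siting (the chroma sample anchored to the top-left of its 2x2 block); real standards differ on exactly where inside the block the sample is centered:

```python
# For 4:2:0, each chroma sample at (cx, cy) covers a 2x2 block of luma
# pixels -- assuming the simplest alignment, with the chroma sample
# anchored at the top-left of its block. Standards differ on whether
# the sample is sited at a corner, an edge, or the center of the block.
def luma_pixels_for_chroma(cx, cy):
    return [(2 * cx + dx, 2 * cy + dy) for dy in (0, 1) for dx in (0, 1)]

print(luma_pixels_for_chroma(0, 0))  # → [(0, 0), (1, 0), (0, 1), (1, 1)]
```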
For the second space reduction, YUV by nature compresses better than RGB because of the separation between B&W and color. There isn’t as much variation inside those layers as there is with RGB. The RGB values can mathematically appear to randomly swing a lot trying to represent brightness and color at the same time, and big swings are more difficult for compression algorithms to pack down.
Now for the properties of YUV that aren’t beneficial over RGB…
There is a concept called “color space” which is a specification that says what “red”, “green”, and “blue” actually mean in terms of nanometer wavelengths of light and phosphor emissions. RGB wouldn’t look the same between devices unless the numbers in a video file resulted in the same colors of light coming off the screen. The sRGB standard defines these “color primaries” for RGB data. But for YUV which has a long history of evolution, there are a number of standards each with its own color space. BT.601 is standard definition television, BT.709 is 1080p high definition television, and BT.2020 is 4K UHD television. The primary colors are different nanometer wavelengths for each standard, so interpreting the color space correctly is important for the colors to look right.
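A concrete way to see the damage: alongside the primaries, the Y/U/V encoding matrix itself differs between standards. The Kr/Kb luma weights below are the published BT.601 and BT.709 values; the full-range math is a simplification, but it shows how decoding the same sample against the wrong standard lands on a visibly different color:

```python
# Decode one full-range YUV sample with the luma weights from two
# different standards. Kr/Kb: BT.601 = (0.299, 0.114), BT.709 = (0.2126, 0.0722).
def yuv_to_rgb(y, u, v, kr, kb):
    kg = 1.0 - kr - kb
    r = y + 2 * (1 - kr) * (v - 128)
    b = y + 2 * (1 - kb) * (u - 128)
    g = (y - kr * r - kb * b) / kg
    clamp = lambda x: max(0, min(255, round(x)))
    return (clamp(r), clamp(g), clamp(b))

sample = (120, 90, 200)  # an arbitrary YUV triple
print(yuv_to_rgb(*sample, kr=0.299, kb=0.114))    # as BT.601 → (221, 82, 53)
print(yuv_to_rgb(*sample, kr=0.2126, kb=0.0722))  # as BT.709 → (233, 93, 49)
```

Same bits, two different colors. That is exactly the “shifted colors” failure mode when metadata is missing.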
Lastly, there is a concept called “color range” which specifies whether the YUV values go from 16-235 or 0-255. In a technical sense, YUV should always be limited range, also known as MPEG or TV range. The buffer area is to allow for overshoots in an analog broadcast signal, and is sometimes used for time code or synchronization signaling between different pieces of studio hardware. However, in a fully digital workflow, it is technically possible to get the full 0-255 range to capture a little extra color information. However, this is not industry standard and has to be properly indicated in the video metadata in order to be interpreted correctly. RGB of course does not have to deal with this distinction because it was never used for broadcast or heavily used by live analog studio gear. Had it been, its history would be just as colorful.
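For the luma plane, the math to stretch limited range out to full range looks like this (a sketch only: chroma uses a slightly different 16-240 scale, and real broadcast-legal handling is more involved):

```python
# Expand limited-range (16-235) luma to full range (0-255).
# Values below 16 or above 235 are the analog "overshoot" headroom.
def limited_to_full(y):
    return max(0, min(255, round((y - 16) * 255 / 219)))

print(limited_to_full(16))   # black: 16 → 0
print(limited_to_full(235))  # white: 235 → 255
print(limited_to_full(128))  # mid gray: 128 → 130
```

If a player applies this expansion to footage that was already full range (or skips it on footage that is limited), you get the classic washed-out or crushed-blacks look.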
I went into detail on those last two parts to explain that when people import YUV footage and complain of shifted colors, the reason is because the metadata in the video file has failed to specify the color space or color range necessary to interpret the YUV data. Shifted colors mean metadata is missing or incorrect, causing the colors to be interpreted in the wrong way (such as the wrong nanometers for “red”, or the wrong range of values). This isn’t an issue with RGB because there is only one specification, so it’s really hard to get it wrong.
The final thing we need to consider is that taking a YUV source video (a cell phone camera for instance) and converting it to RGB will cause a color accuracy loss due to that conversion. The values will shift by +/- 2 at every conversion between RGB and YUV (both directions). This adds up over time, like generational loss with VHS tapes, so it’s important for an output format to be the same as the input format to get a truly lossless transcode.
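Here is a quick sketch of that round-trip loss, using full-range BT.601 math over a coarse grid of RGB values. The exact error depends on the matrix and rounding in use, so treat the number as illustrative:

```python
# Round-trip a grid of RGB values through 8-bit full-range YUV
# (BT.601 weights) and measure the worst per-channel error.
def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = (b - y) / 1.772 + 128
    v = (r - y) / 1.402 + 128
    return tuple(max(0, min(255, round(c))) for c in (y, u, v))

def yuv_to_rgb(y, u, v):
    r = y + 1.402 * (v - 128)
    b = y + 1.772 * (u - 128)
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return tuple(max(0, min(255, round(c))) for c in (r, g, b))

worst = 0
for r in range(0, 256, 15):
    for g in range(0, 256, 15):
        for b in range(0, 256, 15):
            rr, gg, bb = yuv_to_rgb(*rgb_to_yuv(r, g, b))
            worst = max(worst, abs(rr - r), abs(gg - g), abs(bb - b))
print("worst round-trip error:", worst)  # typically 1 or 2 for 8-bit math
```

That small wobble is harmless once, but every extra RGB/YUV hop stacks another one on top, which is the generational loss being described.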
Now that you know the history and differences between RGB and YUV, let’s get to why your questions were difficult to answer directly…
I wrote those caveats before I knew your workflow was RGB. If your sources are RGB and you save as RGB, then these caveats don’t apply to your situation.
But to clarify the example… if the source video were YUV 4:4:4 with alpha, then neither Lagarith nor Ut Video could give us true perfect edge transparency, for two reasons: Lagarith does not support YUV 4:4:4, and neither codec supports YUV transparency. To get a truly lossless transcode of a 4:4:4 input with alpha, the only options are MagicYUV, FFVHuff, or FFV1. These codecs keep the data in YUV, meaning no loss is introduced by converting the YUV data to RGB. If you are okay with the +/- 2 conversion loss, then you can convert 4:4:4 with alpha to RGBA and keep doing what you’re doing.
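To make that decision concrete, here is a toy lookup that encodes the support matrix exactly as described in this thread (8-bit formats only; the sets are this post’s claims, not an authoritative codec capability list):

```python
# Codec capabilities as stated in this post (8-bit pixel formats only).
SUPPORT = {
    "lagarith": {"rgb", "rgba", "yuv420p", "yuv422p"},
    "utvideo":  {"rgb", "rgba", "yuv420p", "yuv422p", "yuv444p"},
    "magicyuv": {"rgb", "rgba", "yuv420p", "yuv422p", "yuv444p", "yuva444p"},
    "ffvhuff":  {"rgb", "rgba", "yuv420p", "yuv422p", "yuv444p", "yuva444p"},
    "ffv1":     {"rgb", "rgba", "yuv420p", "yuv422p", "yuv444p", "yuva444p"},
}

def lossless_candidates(pix_fmt):
    """Codecs that can store pix_fmt with no format conversion at all."""
    return sorted(c for c, fmts in SUPPORT.items() if pix_fmt in fmts)

print(lossless_candidates("yuva444p"))  # 4:4:4 + alpha → ['ffv1', 'ffvhuff', 'magicyuv']
```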
In reality, 4:4:4 with alpha is a pretty rare format, which is why I personally would not base my selection of preferred codec on being able to support it. However, if you wanted a “one and done” codec for every source you’ll ever meet, nobody would give you grief for choosing FFVHuff instead of Ut Video.
FFVHuff supports everything all the way up to 16-bit video. MagicYUV also has complete pixel format support, but only at 8-bit; it can sometimes produce smaller files than FFVHuff without sacrificing performance. FFV1 is good for archiving but too slow for editing.
Ut Video would only subsample to 4:2:0 if you ask it to. Unfortunately in this example, Ut Video does not support YUV transparency at all, and neither does Lagarith, so they are out of the game. In your case, you would ask it to write RGBA or switch to a different codec to stay in YUV.
It totally would if you had an RGB source and saved it as YV12 (which is YUV 4:2:0). This is an example of the confusion that’s happening regarding the structure of video formats and the services of codecs. Codecs, as the name implies, do nothing but compress and decompress. The mechanics of downsampling, upsampling, RGB/YUV conversion, and so on are a function of video formats and have nothing to do with the codecs. The codecs merely compress whatever bitstream is eventually handed to them without even caring what the bits mean. Most codecs don’t even know whether the data they’re holding is full range or limited range, nor do they care. That is a job for the metadata manager, not the codecs.
As I and others here have said several times, there is absolutely nothing special about Lagarith as a codec when it comes to lossless color accuracy. The principles of RGB and YUV video are universal. If they weren’t, lossless transcoding between codecs would not even be possible. The questions you’ve asked can’t be directly answered because you’ve mixed video format concepts with codec concepts, and my tragically-long reply is an attempt to untangle this confusion so you can see just how little of the feature set you value is due to the codec itself. What you value is the stability and simplicity of RGB as a video format, and every lossless codec supports RGB. Lagarith is not special.
The one and only thing special about Lagarith is its support for null frames, meaning it won’t write data for a frame if it is identical to the previous frame. This is a useful space saving feature when dealing with screen-captured video of a PowerPoint presentation where nothing changes for minutes at a time.
It is, so long as you’re okay with its shortcomings. sRGB only covers about 30% of the colors that the human eye can see; conversion errors are introduced when coming from ubiquitous YUV sources; and the files it produces are huge. Not everyone is happy with that setup.
Guessing the color space based on video resolution only has to be done on YUV sources that are missing metadata, and generally speaking, it’s a good guess at that. Meanwhile, RGB does not require any guesswork because there are no other formats the data could possibly be in. This guesswork is nothing for you to worry about because it doesn’t apply to your RGB workflow.
Because interpreting YUV data as full range would be incorrect 99% of the time. Limited range is the standard for YUV. And as we mentioned earlier, RGB sources are always interpreted as full range, so this has no effect on your workflow.
Sorry for the long-winded response, but as you can see, your questions were based on nuances of YUV formats that have nothing to do with your RGB workflow. There was no direct answer. We first had to sort out the differences between RGB and YUV for anything to make sense because you were concerned about video properties that don’t apply to RGB sources. Once you decide on a YUV or RGB pipeline, all of the lossless codecs are able to save your data in either format. There is a reason they support both formats, because both formats have their merits depending on the situation. You can convert between formats if needed, but you should be aware of the conversion loss that goes with it.
Hopefully the long trip around the mulberry bush has answered more questions than it created.