Able to edit Lagarith encoded video but how to encode in Lagarith codec?

The TL;DR … Yes, RGB mode is drop-dead simple compared to YUV. If you’re using an RGB pipeline and you are content with how it looks, then you can safely put any of the above-mentioned lossless codecs into RGB mode like you did for Lagarith and hit the ground running. Your questions are about properties of YUV video that don’t apply to RGB sources.

The longer story…

RGB doesn’t have confusing format variations like YUV because there is only one specification (“sRGB”) and it makes no compromises when it comes to retaining full color data. But there are trade-offs with it:

  • The file size is huge. It is not suitable as a delivery format.

  • It is not backwards compatible with analog television receivers or the multitude of modern video formats based on broadcast standards.

  • Some post-processing functions will be more complex (slower) because the data is not split by nature into luma (grayscale) and chroma (color) components.

  • YUV can technically represent more colors than sRGB can. sRGB is nowhere close to covering all the colors that the human eye can see. However, because BT.709 is the largest color space that Shotcut supports, and BT.709 and sRGB share the same primaries, we will ignore this point for now.

YUV has a strong history in analog television of course. The luma plane (the “Y” part) was the grayscale signal sent to the very first television sets. Then a color layer (the “UV” part) was added as a subcarrier so that the new color signal remained backward compatible with existing B&W televisions.

But television history isn’t why we continue to use YUV in online video delivery and computer graphics today. We use it because it enables significant space and bandwidth reductions without impacting perceived video quality. This is a really big deal.

For the first space reduction, some early science people did some science stuff and discovered that the human eye is much more sensitive to brightness than it is to color variations. Brightness was already represented by the “Y” component in B&W television at the time this research was going on. When deciding how to add color to the signal, they discovered that they could reduce the color information by a factor of four and the human eye would still think the image looked pretty good. That is how insensitive the human eye is to fine color differences. And this is where we get the YUV concept of “subsampling”:

  • 4:2:0 subsampling means there is 1/4th the color information as there is B&W information. This retains just enough color to make the image look right, but not really enough information to manipulate it with further editing. This is why 4:2:0 is so frequently chosen as a delivery format for DVDs, Blu-rays, YouTube delivery, Netflix delivery… everything. Why would the online streaming companies pay two to four times as much money for storage and network capacity to transfer video data that is not visibly better? Why would customers want to pay for higher Internet speeds to transfer larger videos when the perceived quality doesn’t go up with it? Final delivery is why YUV 4:2:0 is and will continue to be a big part of the video encoding world. RGB can’t be made smaller the way YUV can because “brightness” is distributed across all three pixel components, meaning all three must be fully retained to keep that all-important brightness information. This cross-dependency doesn’t exist with YUV. (Lagarith stores this kind of data as “YV12” in your screenshot drop-down box. Ut Video stores it as yuv420p.)

  • 4:2:2 subsampling means there is 1/2 the color information as there is B&W information. This is what a lot of production studios use internally for editing. Having the extra color information is especially useful for getting cleaner edges when dropping out a green screen. (Lagarith stores this kind of data as “YUY2” in your screenshot drop-down box. Ut Video stores it as yuv422p.)

  • 4:4:4 subsampling means all color information is retained. This format has the same fidelity as RGB, but is stored in YUV format. I’m not aware of it being used all that often, because it’s usually easier to work in linear RGB or camera RAW instead, but it does exist when needed. ProRes 4444 and 4444 XQ are notable 4:4:4 implementations, with an alpha channel for VFX-integration-style work. Stock media companies often provide titles, logos, and clip art overlay graphics in this format. (Lagarith is unable to store this kind of data. Ut Video stores it as yuv444p. See the ffmpeg sketch after this list for how to request each of these formats explicitly.)

  • Obviously, RGB doesn’t have to specify what subsampling it is using because it is always “4:4:4” in the sense that all color information is always there.

  • For 4:2:0 and 4:2:2, YUV has to answer the additional question of “where will the color layer be anchored”. This is called chroma siting. For example, with 4:2:0, one color value has to cover four pixels. But which four? What if the color would be more accurate with its sample point sitting between pixels rather than on top of one? There’s a whole science to this, but RGB doesn’t have to care about it.
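
To make those options concrete, here is a rough command-line sketch of how you could request each subsampling level explicitly with ffmpeg (the file names are made up; Shotcut’s Other tab accepts the same pix_fmt= values):

# 4:2:0, delivery-style, 1/4 of the color information
ffmpeg -i input.mov -c:v utvideo -pix_fmt yuv420p -c:a copy out_420.mkv
# 4:2:2, editing/production, 1/2 of the color information
ffmpeg -i input.mov -c:v utvideo -pix_fmt yuv422p -c:a copy out_422.mkv
# 4:4:4, all color information retained
ffmpeg -i input.mov -c:v utvideo -pix_fmt yuv444p -c:a copy out_444.mkv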

For the second space reduction, YUV by nature compresses better than RGB because of the separation between B&W and color. There isn’t as much variation inside those layers as there is with RGB. RGB values can swing around a lot because every channel is carrying brightness and color at the same time, and big swings are more difficult for compression algorithms to pack down.

Now for the properties of YUV that aren’t beneficial over RGB…

There is a concept called “color space” which is a specification that says what “red”, “green”, and “blue” actually mean in terms of nanometer wavelengths of light and phosphor emissions. RGB wouldn’t look the same between devices unless the numbers in a video file resulted in the same colors of light coming off the screen. The sRGB standard defines these “color primaries” for RGB data. But for YUV which has a long history of evolution, there are a number of standards each with its own color space. BT.601 is standard definition television, BT.709 is 1080p high definition television, and BT.2020 is 4K UHD television. The primary colors are different nanometer wavelengths for each standard, so interpreting the color space correctly is important for the colors to look right.

Lastly, there is a concept called “color range” which specifies whether the YUV values go from 16-235 or 0-255. In a technical sense, YUV should always be limited range, also known as MPEG or TV range. The buffer area is to allow for overshoots in an analog broadcast signal, and is sometimes used for time code or synchronization signaling between different pieces of studio hardware. However, in a fully digital workflow, it is technically possible to get the full 0-255 range to capture a little extra color information. However, this is not industry standard and has to be properly indicated in the video metadata in order to be interpreted correctly. RGB of course does not have to deal with this distinction because it was never used for broadcast or heavily used by live analog studio gear. Had it been, its history would be just as colorful.

I went into detail on those last two parts to explain that when people import YUV footage and complain of shifted colors, the reason is because the metadata in the video file has failed to specify the color space or color range necessary to interpret the YUV data. Shifted colors mean metadata is missing or incorrect, causing the colors to be interpreted in the wrong way (such as the wrong nanometers for “red”, or the wrong range of values). This isn’t an issue with RGB because there is only one specification, so it’s really hard to get it wrong. :slight_smile:
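
If you ever need to fix such a file by hand, the repair is usually just declaring the right interpretation rather than touching the pixels. As a hedged sketch (made-up file names, and this form does involve a re-encode), ffmpeg lets you stamp the color metadata onto the output stream like this:

# label the stream as BT.709 limited range so players interpret it correctly
ffmpeg -i untagged_clip.mp4 -c:v libx264 -crf 18 \
  -colorspace bt709 -color_primaries bt709 -color_trc bt709 -color_range tv \
  -c:a copy tagged_clip.mp4

These flags only label how the YUV numbers should be read; they don’t convert anything, so getting the label wrong just moves the problem around.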

The final thing we need to consider is that taking a YUV source video (a cell phone camera for instance) and converting it to RGB will cause a color accuracy loss due to that conversion. The values will shift by +/- 2 at every conversion between RGB and YUV (both directions). This adds up over time, like generational loss with VHS tapes, so it’s important for an output format to be the same as the input format to get a truly lossless transcode.
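
If you want to see that drift for yourself, here is a rough sketch using ffmpeg’s framemd5 muxer (all file names are hypothetical). Round-trip a YUV clip through RGB and back, then compare per-frame checksums of the decoded pixels; after the round trip they will no longer match the original:

# YUV source -> lossless RGB intermediate -> back to YUV
ffmpeg -i camera_clip.mp4 -c:v ffvhuff -pix_fmt gbrp -an rgb_pass.mkv
ffmpeg -i rgb_pass.mkv -c:v ffvhuff -pix_fmt yuv420p -an back_to_yuv.mkv
# per-frame checksums of the decoded frames
ffmpeg -i camera_clip.mp4 -an -f framemd5 original.md5
ffmpeg -i back_to_yuv.mkv -an -f framemd5 roundtrip.md5

Comparing original.md5 against roundtrip.md5 shows which frames changed.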

Now that you know the history and differences between RGB and YUV, let’s get to why your questions were difficult to answer directly…

I wrote those caveats before I knew your workflow was RGB. If your sources are RGB and you save as RGB, then these caveats don’t apply to your situation.

But to clarify the example… if the source video was YUV 4:4:4 with alpha, then neither Lagarith nor Ut Video can give us true perfect edge transparency, for two reasons: Lagarith does not support YUV 4:4:4, and neither codec supports YUV transparency. To get a truly lossless transcode of a 4:4:4 input with alpha, the only options are MagicYUV, FFVHuff, or FFV1. These codecs keep the data in YUV, so no loss happens converting the YUV data to RGB. If you are okay with the +/- 2 conversion loss, then you can convert 4:4:4 with alpha to RGBA and keep doing what you’re doing.
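
As a sketch of the “stay in YUV” route, assuming a hypothetical 4:4:4-with-alpha source file, an FFVHuff transcode would look something like this (yuva444p is the 4:4:4-plus-alpha pixel format in ffmpeg terms):

ffmpeg -i graphics_4444_alpha.mov -c:v ffvhuff -pix_fmt yuva444p -c:a copy graphics_lossless.mkv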

In reality, 4:4:4 with alpha is a pretty rare format, which is why I personally would not base my selection of preferred codec on being able to support it. However, if you wanted a “one and done” codec for every source you’ll ever meet, nobody would give you grief for choosing FFVHuff instead of Ut Video.
FFVHuff has support for everything, all the way up to 16-bit video. MagicYUV also has complete pixel format support, but only at 8-bit, and it can sometimes produce smaller files than FFVHuff without sacrificing performance. FFV1 is good for archiving but too slow for editing.

Ut Video would only subsample to 4:2:0 if you ask it to. Unfortunately in this example, Ut Video does not support YUV transparency at all, and neither does Lagarith, so they are out of the game. In your case, you would ask it to write RGBA or switch to a different codec to stay in YUV.
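
If you do take the RGBA route with Ut Video, the request looks roughly like this (made-up file names; gbrap is ffmpeg’s planar RGB-with-alpha format, which as far as I know is what its Ut Video encoder uses for RGBA output):

ffmpeg -i alpha_source.mov -c:v utvideo -pix_fmt gbrap -c:a copy alpha_as_rgba.mkv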

It totally would if you had an RGB source and saved it as YV12 (which is YUV 4:2:0). This is an example of the confusion that’s happening regarding the structure of video formats and the services of codecs. Codecs, as the name implies, do nothing but compress and decompress. The mechanics of downsampling, upsampling, RGB/YUV conversion, and so on are a function of the video format and have nothing to do with the codec. The codecs merely compress whatever bitstream is eventually handed to them without even caring what the bits mean. Most codecs don’t even know if the data they’re holding is full range or limited range, nor do they care. That is a job for the metadata manager, not the codecs.

As I and others here have said several times, there is absolutely nothing special about Lagarith as a codec when it comes to lossless color accuracy. The principles of RGB and YUV video are universal. If they weren’t, lossless transcoding between codecs would not even be possible. The questions you’ve asked can’t be directly answered because you’ve mixed video format concepts with codec concepts, and my tragically-long reply is an attempt to untangle this confusion so you can see just how little of the feature set you value is due to the codec itself. What you value is the stability and simplicity of RGB as a video format, and every lossless codec supports RGB. Lagarith is not special.

The one and only thing special about Lagarith is its support for null frames, meaning it won’t write data for a frame if it is identical to the previous frame. This is a useful space saving feature when dealing with screen-captured video of a PowerPoint presentation where nothing changes for minutes at a time.

It is, so long as you’re okay with its shortcomings. sRGB only covers 30% of the colors that the human eye can see; there are conversion errors introduced when coming from ubiquitous YUV sources; and the file size is huge in the process. Not everyone is happy with that setup.

Guessing the color space based on video resolution only has to be done on YUV sources that are missing metadata, and generally speaking, it’s a good guess at that. Meanwhile, RGB does not require any guesswork because there are no other formats the data could possibly be in. This guesswork is nothing for you to worry about because it doesn’t apply to your RGB workflow.

Because interpreting YUV data as full range would be incorrect 99% of the time. Limited range is the standard for YUV. And as we mentioned earlier, RGB sources are always interpreted as full range, so this has no effect on your workflow.


Sorry for the long-winded response, but as you can see, your questions were based on nuances of YUV formats that have nothing to do with your RGB workflow. There was no direct answer. We first had to sort out the differences between RGB and YUV for anything to make sense because you were concerned about video properties that don’t apply to RGB sources. Once you decide on a YUV or RGB pipeline, all of the lossless codecs are able to save your data in either format. There is a reason they support both formats, because both formats have their merits depending on the situation. You can convert between formats if needed, but you should be aware of the conversion loss that goes with it.

Hopefully the long trip around the mulberry bush has answered more questions than it created. :slight_smile:


I forgot to mention FFVHuff. It’s almost as fast as Ut Video and look at the crazy amount of pixel format support it has, all the way up to 16-bit:

ffmpeg -h encoder=ffvhuff
yuv420p yuv422p yuv444p yuv411p yuv410p yuv440p gbrp gbrp9le gbrp10le gbrp12le gbrp14le gray gray16le yuva420p yuva422p yuva444p gbrap ya8 yuv420p9le yuv420p10le yuv420p12le yuv420p14le yuv420p16le yuv422p9le yuv422p10le yuv422p12le yuv422p14le yuv422p16le yuv444p9le yuv444p10le yuv444p12le yuv444p14le yuv444p16le yuva420p9le yuva420p10le yuva420p16le yuva422p9le yuva422p10le yuva422p16le yuva444p9le yuva444p10le yuva444p16le rgb24 bgra


Well that just made my day, I’m not doing anything 10bit yet but that looks fantastic

How big are the files compared to Ut and HuffYUV?

Same size. FFVHuff with context=0 specified will create a file that is bit-for-bit identical to a HuffYUV file except for the FourCC embedded in it. FFVHuff with context=1 specified will create a file that is essentially the same size as Ut Video, although not bit-for-bit the same.
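
If you want to flip between the two behaviors yourself, the ffvhuff encoder exposes it as the context option (file names are hypothetical):

# context=0: one Huffman table for the whole file, HuffYUV-style
ffmpeg -i input.mov -c:v ffvhuff -context 0 out_fixed_table.mkv
# context=1: adaptive per-frame tables, Ut Video-style sizes
ffmpeg -i input.mov -c:v ffvhuff -context 1 out_per_frame.mkv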

The main reason I didn’t originally suggest FFVHuff as a Lagarith replacement is because the OP might have interest in sharing files between After Effects and Shotcut. Since the other lossless codecs have native options like VfW, the same video would be usable by both editors. But FFVHuff is fairly specific to ffmpeg. The only way I can imagine to get FFVHuff into After Effects is to use the ffdshow-tryout codec pack, but that hasn’t been maintained since 2014. So I’m not sure how portable FFVHuff will be with other editors. For people whose workflow is entirely inside an ffmpeg framework, it’s a fantastic format. If sharing outside of ffmpeg, then MagicYUV may be the closest to a one-and-done codec due to its support for YUV transparency. If a one-time conversion loss from YUV to RGB is acceptable, then using any lossless codec in RGB mode would be a viable replacement.

Nerd alert: The context parameter determines whether the Huffman tree will be altered to improve compression for each frame, or if the same tree will be used for the whole video. HuffYUV uses the same tree, meaning there is no overhead setting up a new tree for each frame, making it process faster at the expense of compression efficiency. Ut Video alters the tree each frame for better compression, but has gotten very fast at it over the years to make up for the overhead. FFVHuff is able to simulate either HuffYUV or Ut Video by setting the context parameter, where “context” means a new tree in the context of a new frame. FFV1 can do this too.

I have some video of a moving train.

When encoded with FFV1 or HuffYUV, there is jerky playback on VLC. When encoded with UT Video, the motion is nice and smooth, so I use UT Video with PCM audio for lossless. The subsampling has to be 4:2:2, so it’s not truly lossless being subsampled. I can’t get 4:4:4 to work with UT Video. Maybe I’m doing something wrong?

If it’s headed for YouTube I use H.265 with AAC audio for a small file and a quick(er) upload.

What quality % do you use for the video, and what bitrate for the audio, when preparing for YouTube?

Thanks. So MagicYUV is smaller than FFVHuff, HuffYUV and Ut but just as fast?

You’re probably doing everything correctly.

If I choose the Lossless > Ut Video export preset and change the line on the Other tab to pix_fmt=yuv444p, I get a valid 4:4:4 file that plays back in Shotcut and MPC-BE just fine. But VLC will only play the 4:2:2 version. I’ve noticed this with a few other formats too. VLC is the weak link in the chain. That’s probably the source of jerky playback for the HuffYUV files too. I don’t experience it with MPC-BE.
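
For anyone doing the same thing outside Shotcut, the equivalent command line would be roughly this (hypothetical file names):

ffmpeg -i timeline_export.mov -c:v utvideo -pix_fmt yuv444p -c:a pcm_s16le out_444.mkv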

All of those codecs are based on Huffman trees, so they end up within 2% of each other when it comes to file size. The codec that’s smallest is very dependent on the content of the scene. But Huffman by its design is a near-optimal encoding strategy, so they’re all going to be so close that I don’t let file size be a deciding factor. I look more at speed, pixel format support, and compatibility outside of Shotcut. For instance, MagicYUV is not playable in either VLC or MPC-BE (unless it was added very very recently). Issues like that make me stick with Ut Video overall and only branch out to FFVHuff or MagicYUV when I really need it, such as transcoding a ProRes 4444 file that has transparency.

A number of people dislike VLC for being too buggy/quirky.

This presents an interesting dilemma. Do I want to optimize my encoding for VLC, in which case I avoid Huff and FFV and use UT, or do I not care about VLC and use Huff or FFV1? I’m probably not losing anything by sticking with UT.

What’s a better player than VLC? Media Player Classic? At the TV station where I work we have VLC all over the place.

I’m sure you’ll hear other suggestions, but potplayer has recently been a favorite of the production company I work with on occasion. A bit tricky to learn, but super customizable.

I’ve had very good luck with MPC-BE. It’s my main player. I don’t use VLC at all anymore because some of my work is audio-critical and VLC has an alarmingly difficult time playing some files without dipping or raising the pitch. I can’t mix and verify results like that.

There are probably many other competent players (and I’ve heard good things about potplayer that @D_S mentioned), but I’m not too familiar with the landscape because I quit looking after MPC-BE worked so well.

I briefly mentioned Potplayer before on here in passing to @chris319 in an old thread. It’s pretty much the best player out there, and since finding it I have rarely gone back to VLC or MPC. However, it’s only for Windows. No Mac or Linux. I do have to say though, I am a bit suspicious of it. It’s owned by a big company in South Korea, but they just decided to give the program out for free and not make it open source? What do they get out of it? There’s got to be some kind of string attached.

I use SMPlayer.
