Able to edit Lagarith-encoded video, but how to encode in the Lagarith codec?

@Austin you beat me to the more technical breakdown (and in greater verbosity than I would have managed; if you're up for it, I could use a few guest posts for my website about video editing XD)

@Lagarith Austin explained most of what I pointed you to those threads to read about. Out of curiosity, what are you using this video for, and where are you getting it from? Perhaps we can help you optimize things better still.

If the codec library outputs something that requires me to integrate a muxer (container format), then I am not going to spend the effort. Let someone else integrate it into FFmpeg, where there is a much higher level of contribution. If a codec is not integrated into FFmpeg, then there is likely a reason regarding relevance, quality, or license. I do not know why Lagarith encoding is not implemented or integrated into FFmpeg.

Cineform is in a similar state within FFmpeg and Shotcut even though the code is on GitHub under Apache license.

The other most popular codec requested in this area is HAP. That is available through an existing library integration with FFmpeg. I just haven’t added it to the build process yet.

@Austin your mention of 10-bit color sent me digging some. Are there actually any 10-bit lossless codecs that aren't proprietary right now? I did a little looking but couldn't really find any. Also, MagicYUV is decode-only under ffmpeg unless Shotcut implemented support for it specifically (I haven't checked).

The amazing simplicity of Lagarith: it just works. Want normal lossless? Just select RGB. Want alpha? Just select RGBA. Notice it didn't ask you nonsense like "Do you want your video in 4:2:2 or 4:2:0?"
This is because Lagarith understands basic common sense. When it says it is a lossless codec, it won't do a 180 by saying "Yeah… the whole lossless thing? You want me to maybe lose it down to 4:2:0? Or do you want me to keep it as…" Notice it doesn't do bullcrap like this, because it is a lossless codec that doesn't ask stupid questions.
(screenshot: Lagarith configuration dialog with its RGB / RGBA / YUY2 / YV12 drop-down)

How do I get Shotcut to interpret and output in RGB and/or RGBA? I have never dealt with, and don't ever want to deal with, YUY2, YV12, yuv422p, yuv420p, yuv444p.
It seems to me when reading the forum that all this "interpretation" has been a great source of grief for everyone because of color shift issues.

All I know is, in my 20 years of editing in After Effects, Lagarith has always worked, the colors have always been right, and I have never once had to contend with, or been asked by After Effects, whether my Lagarith video is to be interpreted through RGB, RGBA, YUY2, YV12, yuv422p, yuv420p, or yuv444p.

In After Effects, it's all just RGB or RGBA. Life is so simple and the color is as it is. Lagarith in After Effects never asks me if it should encode in 4:2:0 or 4:2:2. If I want an alpha channel, I just select RGBA; if I don't want an alpha channel, I just select RGB. All this talk about having to remember which codec to use for 4:2:2 and which codec to use for 4:2:0, plus the added warning that the codec that handles 4:2:2 will upscale a 4:2:0 video so one has to be mindful of which to select, is making me want to crawl back to my After Effects Lagarith workflow where everything just works.

I am now starting to have a panic attack just thinking about having to deal with all this nonsense in the Shotcut video editor, and this is coming from someone who has done a lot of VFX work for about 8 years now.

The color issues, as noted in one of the threads that @D_S linked for you, were fixed in Shotcut a while back. So as far as I know, there are no color issues now. Dan has been very good about that.

For whatever it's worth, I did a quick search and found this post on the ffmpeg Zeranoe forum, where someone asked back in 2012 about ffmpeg encoding Lagarith, and this was the response:

it can’t. it can decode it, however. I was contacted the lagarith author and he said that he (specifically) had no interest in porting his code to ffmpeg so…until somebody else steps up to do it, you’ll need to use windows’ system to encode to lagarith.

Link: https://ffmpeg.zeranoe.com/forum/viewtopic.php?t=649

Mind you, that was in 2012. I don’t know if anything is different now. Maybe you could ask at the ffmpeg forums if anyone would consider porting it. Here is the link to the ffmpeg forums. Also, here is the webpage to the author, Ben Greenwood. According to that page, he last updated Lagarith back in 2011.

Oh, ALL the color interpretation nonsense has been fixed?
THAT’S WONDERFUL!!!

That was a VERY worrisome "feature" and I am glad that's settled now.
Except that the way the Shotcut video editor imports your video might still mess up your colors big time IF YOU DO NOT MANUALLY SET IT FOR EACH TRACK, because of this totally uncalled-for, redundant, unnecessarily added nonsense:

The part where I talk about the "Broadcast Limited (MPEG)" nonsense.
It's like Shotcut is saying: "Hey, since you did not set whether your video is 'Broadcast Limited (MPEG)' or 'Full (JPEG)' and I couldn't infer it from the video, let me just assume it is 1995 and select 'Broadcast Limited (MPEG)' and interpret your video's color in a limited fashion, because as we ALL KNOW it is GREAT to assume the worst quality when unknown…"

Meanwhile, After Effects would never do something as ridiculous as this; it would never have been a thing. After Effects just allows the full RGB range and doesn't "re-interpret within a limited palette when unknown". This is just so ridiculous.

You almost have to put in extra effort to make sure Shotcut doesn't mess up the color interpretation by default, even though you haven't done anything on your end yet to mess it up.

Is AfterEffects available on Linux?
It doesn’t support Linux.

I haven't researched open-source 10-bit lossless codecs a lot, but FFV1 supports 10-bit directly in ffmpeg. The Ut Video format also supports 10-bit, and the native codecs (like the VfW one) support it; it was ffmpeg's choice to only implement the 8-bit portions so far. I'm not sure what's stopping them from implementing 10-bit, as the standards are already written. My guess would be that if a studio cares enough to have a 10-bit lossless workflow, they're probably going to use CinemaDNG, BRAW, or some other direct RAW format rather than mess with a middle layer. There's so much data and so much processing that they can't waste much CPU on decompression at that level of post-production.
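
For what it's worth, a 10-bit FFV1 transcode can be sketched directly in ffmpeg (hypothetical file names, and assuming your build includes the FFV1 encoder; -level 3 selects FFV1 version 3):

ffmpeg -i input.mov -c:v ffv1 -level 3 -pix_fmt yuv422p10le -c:a copy output.mkv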

That was true for older versions of ffmpeg. In the 4.2 branch and later, MagicYUV is both encode and decode, and no longer marked as experimental.

TL;DR… There is nothing to worry about. Shotcut and the other lossless codecs have you totally covered.

Now for the details…

I wish I knew this about you from the start. :slight_smile: It provides context to everything.

In a VFX pipeline, it is very common to work entirely in RGB/A. For motion pictures and other large productions, it’s even more common for that RGB space to be linear and be exchanged as EXR or DPX image sequences rather than video files. As you’re probably aware, this is exceedingly specific to the VFX workflow. But before and after the VFX integration, it’s a completely different world when it comes to encoding video, and out there, the only common formats are YUV or some variant of RGB RAW (like CinemaDNG or BRAW). RGB – as in sRGB to distinguish it from linear or RAW – will rarely exist outside of 3D animation or motion graphics renderings, or screen recordings of video games captured by OBS Studio. Basically, RGB is for stuff that’s generated by a computer as opposed to video that’s captured with a camera.

Before answering your other questions, we should first double-check your source video. The pixel format was gbrp, meaning it’s an RGB video. But how did it get to be RGB? Was it generated by a computer like a rendering or screen capture? Or was it converted to RGB from video captured with a camera? The reason it’s significant is because you wanted absolutely zero loss due to compression. However, if the source video was YUV from a camera and then converted to RGB for Lagarith, then loss happened right there in the YUV-to-RGB conversion. Granted, the loss won’t be terribly noticeable and the loss is probably minimal since the camera already converted an RGB Bayer filter to YUV in the first place, meaning the camera chose YUV values that will map the closest back to the original RGB. However, if the RGB conversion was done later with software that isn’t perfectly color correct, then there will most definitely be loss in every YUV-to-RGB conversion and back. There is also an issue that the YUV color gamut is larger than RGB, meaning RGB will lose color data if transcoded from YUV, which will turn into noticeable banding and possible generation loss if you apply heavy color grading.
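
As a quick sanity check, ffprobe will report the pixel format of any file (just a sketch, with a hypothetical source.avi standing in for your clip):

ffprobe -v error -select_streams v:0 -show_entries stream=codec_name,pix_fmt -of default=noprint_wrappers=1 source.avi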

This is why all the lossless codecs mentioned in this thread support both YUV and RGB variants. The output format must match the input format in order to avoid a conversion loss. You can do everything as RGB in Shotcut just as you’re doing in After Effects, but there is potential for loss even with RGB in both workflows and it’s up to you to decide if you’re okay with that. You haven’t noticed a problem with After Effects apparently, so in theory you should be fine doing the same workflow in Shotcut.

All that aside, let’s get down to editing…

Since your input is RGB, your life will indeed be much more simple than the average person using Shotcut who has to deal with YUV. Lucky you!

You’ll be relieved to know that when you drop an RGB video onto the timeline, the color range drop-down box for Limited vs Full has no effect. Those options are only relevant for YUV video. To prove this for yourself since it rightfully concerns you, change between the two options and notice that the preview window doesn’t look any different either way. Then drop a YUV video onto the timeline (a video from a cell phone will suffice) and change between the two options. The colors in the preview window will have a radical shift. Limited vs Full is ignored for RGB video because it’s a meaningless concept to RGB. @shotcut, to @Lagarith’s point, it would be nice if Shotcut automatically set the color range drop-down box to Full if the source video is RGB. Not sure if that’s an option or even a good idea, but it sounds logical in theory.

At this point, editing can be done care-free just like you would normally expect. No need to double-check Shotcut’s interpretation of your files because RGB is RGB is RGB. Simplicity is a feature of RGB and has nothing to do with Lagarith.

Now we get to exporting. The easiest way to have Shotcut export as RGBA is to choose the Alpha > Ut Video option from the list of stock export presets. If you choose that preset then go to the Advanced > Other tab, you’ll see these lines in it:

mlt_image_format=rgb24a
pix_fmt=gbrap

The first line tells Shotcut to use an internal RGBA processing pipeline. The second line says the output video format should also be RGBA (as opposed to converting to YUV). If you want RGB without an alpha channel, change those two lines to the following and optionally save it as a custom export preset if you’ll use it often:

mlt_image_format=rgb24
pix_fmt=gbrp

That’s all there is to exporting as RGB. The same concept works for all the lossless codecs, including Lagarith. The simplicity of Lagarith was due to being in RGB, not due to any magic of the codec itself. Lagarith also has YUV options (they were called YUY2 and YV12 in your screenshot) that are identical and every bit as complicated as the other codecs. They’re all the same. Lagarith has no more common sense than any of the other codecs. It’s just RGB that is arguably more common sense than YUV for everything except color gamut and disk space usage. :slight_smile: I demonstrated Ut Video as a possible replacement to Lagarith simply because it has an export preset already defined in a default installation of Shotcut. You could just as easily use HuffYUV or MagicYUV. Although when you’re in RGB, there’s no particular reason to favor one over the other except that Ut Video is the most actively maintained and continually optimized in the ffmpeg code base. And you’ll be happy to know that since RGB doesn’t require color metadata due to its simplicity, you can use an AVI container without any worries or caveats at all. You don’t have to add Matroska MKV files to your life with RGB.
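
If you ever want the same kind of file outside of Shotcut, a rough command-line equivalent would be something like this (only a sketch, with hypothetical file names; swap gbrap for gbrp to drop the alpha channel):

ffmpeg -i rgb_source.avi -c:v utvideo -pix_fmt gbrap -c:a pcm_s16le rgba_out.avi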

The world of video encoding is dizzying, I get it. The glory of Shotcut is that it gracefully allows the importing of videos in any format, any bit depth, RGB or YUV, interlaced or not, any frame rate, drop frame or not, any color space, and any color range all on the same timeline within the same project, and gives you the power to tweak any settings that were not guessed correctly when provided with files that are malformed or incomplete. These formats that are so understandably ridiculous and full of compromises in the eyes of VFX guys are unfortunately the bread and butter of the broadcast industry and the distribution markets (like DVD and Blu-ray). There are numerous people in this forum who work in broadcast studios and use Shotcut for its incredibly forgiving and versatile treatment of any format thrown at it.

For a documentary filmmaker receiving video files sent in from all over the world, Shotcut is the coolest tool out there for being able to drop all videos immediately onto a single timeline. By comparison, if you wanted to edit video in Blender or Lightworks, all the source videos would have to be transcoded to a common frame rate and color space before they would play nice on the timeline. Meanwhile, Shotcut can get to work directly on the original files and interpolate transparently. It's a thing of beauty. It's actually a little unfortunate that the documentation and the "marketing arm" of Shotcut do not emphasize these features more prominently. The abundance of options (including the nonsensical ones) is actually raw unlimited power in the hands of people who have need for it.

You’ll also be glad to know that if YUV sources ever enter your workflow, Shotcut actually makes very intelligent guesses when metadata is missing. Defaulting to MPEG color range for YUV files is industry standard even today. Most containers like Apple’s MOV don’t even have a flag to signal full range because that’s just not a thing in a professional YUV workflow. It’s really just consumer devices like cell phones and DSLR/mirrorless cameras that create full-range YUV in MP4 containers, and MP4 has specialized metadata to signal it that Shotcut knows how to read.
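
If you're ever curious what a given YUV file actually declares, ffprobe will show the same metadata Shotcut reads (a sketch, with a hypothetical phone_clip.mp4):

ffprobe -v error -select_streams v:0 -show_entries stream=pix_fmt,color_range,color_space,color_primaries -of default=noprint_wrappers=1 phone_clip.mp4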

Hopefully those details are able to set your mind at ease about the way Shotcut interprets your videos. If an RGB pipeline was good enough for you with After Effects, there are enough details here to create an RGB pipeline with Shotcut too and get identical results, albeit with a codec other than Lagarith. You should be able to hit the ground running at this point. If not, reply with the next hurdle and we’ll see what we can do for you.


Another option just came to mind… H.264 in Lossless RGB mode. This should be very compatible with any editor out there. The easiest way to export in this format is to choose the H.264 High Profile export preset, change the Codec tab to use libx264rgb instead of libx264, and set the Quality to 100%. There is a chance it will offer better compression than the other codecs at the expense of playback speed (requires more CPU to decompress it on-the-fly).
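
A rough command-line equivalent would be the following (hypothetical file names; -qp 0 is x264's true lossless mode, which is roughly what Quality 100% maps to):

ffmpeg -i rgb_source.avi -c:v libx264rgb -qp 0 -preset medium -c:a copy lossless_out.mkv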

Hopefully Shotcut gets support for CinemaDNG before I actually need it! Honestly, though, the studio I work with on occasion doesn't even shoot that yet.
Good call on the 4.2 branch; I was checking Wikipedia's codec list, not directly on the ffmpeg site.

How is this relevant to this thread?

Hey @Austin, I just wanted to take this time to thank you for EVERYTHING.
You spent a considerable amount of time educating me about MPEG Limited and JPEG Full with all that typing.
I just wanted to say thank you.

Now more questions:

Wait, so now one shouldn't use Ut Video for TRUE PERFECT EDGE transparency?
Because Ut Video subsamples to 4:2:0 while maintaining transparency, while MagicYUV keeps 4:4:4 while maintaining transparency?
You know Lagarith will never say "Hi, you use me because you want lossless, but let's just downsample your stuff while maintaining transparency so you have the 'illusion' of lossless while in actual fact I F-up your video big time."
WHY WHY WHY the confusion? RGB is SO NO-NONSENSE and straightforward. It wouldn't degrade your video, period! That pixel color will STAY that color, end of story!
I really hate this!

This is the WORST thing from a VFX standpoint, because VFX projects have elements of different resolutions and sizes; to think that it will infer color interpretation from resolution is a FREAKING NIGHTMARE.
Why can't it just interpret everything as Full (JPEG)?! Life would be simple, everything would look right, end of story.

The TL;DR … Yes, RGB mode is drop-dead simple compared to YUV. If you’re using an RGB pipeline and you are content with how it looks, then you can safely put any of the above-mentioned lossless codecs into RGB mode like you did for Lagarith and hit the ground running. Your questions are about properties of YUV video that don’t apply to RGB sources.

The longer story…

RGB doesn’t have confusing format variations like YUV because there is only one specification (“sRGB”) and it makes no compromises when it comes to retaining full color data. But there are trade-offs with it:

  • The file size is huge. It is not suitable as a delivery format.

  • It is not backwards compatible with analog television receivers or the multitude of modern video formats based on broadcast standards.

  • Some post-processing functions will be more complex (slower) because the data is not split by nature into luma (grayscale) and chroma (color) components.

  • YUV can technically represent more colors than sRGB can. sRGB is nowhere close to covering all the colors that the human eye can see. However, because BT.709 is the largest color space that Shotcut supports, and BT.709 and sRGB are basically the same, we will ignore this point for now.

YUV has a strong history in analog television of course. The luma plane (the “Y” part) was the grayscale signal sent to the very first television sets. Then a color layer (the “UV” part) was added as a subcarrier so that the new color signal remained backward compatible with existing B&W televisions.

But television history isn’t why we continue to use YUV in online video delivery and computer graphics today. We use it because it enables significant space and bandwidth reductions without impacting perceived video quality. This is a really big deal.

For the first space reduction, some early science people did some science stuff and discovered that the human eye is much more sensitive to brightness than it is to color variations. Brightness was already represented by the "Y" component in B&W television at the time this research was going on. When deciding how to add color to the signal, they discovered that they could reduce the color information by a factor of four and the human eye would still think the image looked pretty good. That is how insensitive the human eye is to fine color differentials. And this is where we get the YUV concept of "subsampling" (a rough size comparison is sketched after this list):

  • 4:2:0 subsampling means there is 1/4th as much color information as there is B&W information. This retains just enough color to make the image look right, but not really enough information to manipulate it with further editing. This is why 4:2:0 is so frequently chosen as a delivery format for DVDs, Blu-rays, YouTube delivery, Netflix delivery… everything. Why would the online streaming companies pay two to four times as much money for storage and network capacity to transfer video data that is not visibly better? Why would customers want to pay for higher Internet speeds to transfer larger videos when the perceived quality doesn't go up with it? Final delivery is why YUV 4:2:0 is and will continue to be a big part of the video encoding world. RGB can't be made smaller the way YUV can because "brightness" is distributed across all three pixel components, meaning all three must be fully retained to keep that all-important brightness information. This cross-dependency doesn't exist with YUV. (Lagarith stores this kind of data as "YV12" in your screenshot drop-down box. Ut Video stores it as yuv420p.)

  • 4:2:2 subsampling means there is 1/2 as much color information as there is B&W information. This is what a lot of production studios use internally for editing. Having the extra color information is especially useful for getting cleaner edges when dropping out a green screen. (Lagarith stores this kind of data as "YUY2" in your screenshot drop-down box. Ut Video stores it as yuv422p.)

  • 4:4:4 subsampling means all color information is retained. This format has the same fidelity as RGB, but is stored in YUV format. I'm not aware of this format being used too often because it's easier to use linear RGB or RAW. But it does exist if needed. Actually, thinking about it again, ProRes 4444 and 4444 XQ are implementations of 4:4:4 with an alpha channel for VFX integration-style work. Stock media companies often provide titles, logos, and clip art overlay graphics in this format. (Lagarith is unable to store this kind of data. Ut Video stores it as yuv444p.)

  • Obviously, RGB doesn’t have to specify what subsampling it is using because it is always “4:4:4” in the sense that all color information is always there.

  • For 4:2:0 and 4:2:2, YUV has to answer the additional question of “where will the color layer be anchored”. This is called chroma siting. For example, with 4:2:0, one color value has to cover four pixels. But which four? What if the color was more accurate by bleeding over pixel boundaries? There’s a whole science to this, but RGB doesn’t have to care about it.
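
To make the size difference concrete, here is a tiny back-of-the-envelope sketch (plain 8-bit math for an uncompressed 1080p frame, no particular codec assumed):

# Raw (uncompressed) bytes per 8-bit 1080p frame for common subsampling schemes.
width, height = 1920, 1080
luma = width * height                      # one Y (brightness) sample per pixel

schemes = {
    "4:4:4 / RGB": luma + 2 * luma,        # U and V (or G and B) at full resolution
    "4:2:2":       luma + 2 * (luma // 2), # U and V halved horizontally
    "4:2:0":       luma + 2 * (luma // 4), # U and V halved in both directions
}

for name, size in schemes.items():
    print(f"{name}: {size / 1_000_000:.1f} MB per frame")
# prints roughly 6.2, 4.1, and 3.1 MB respectively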

For the second space reduction, YUV by nature compresses better than RGB because of the separation between B&W and color. There isn’t as much variation inside those layers as there is with RGB. The RGB values can mathematically appear to randomly swing a lot trying to represent brightness and color at the same time, and big swings are more difficult for compression algorithms to pack down.

Now for the properties of YUV that aren’t beneficial over RGB…

There is a concept called “color space” which is a specification that says what “red”, “green”, and “blue” actually mean in terms of nanometer wavelengths of light and phosphor emissions. RGB wouldn’t look the same between devices unless the numbers in a video file resulted in the same colors of light coming off the screen. The sRGB standard defines these “color primaries” for RGB data. But for YUV which has a long history of evolution, there are a number of standards each with its own color space. BT.601 is standard definition television, BT.709 is 1080p high definition television, and BT.2020 is 4K UHD television. The primary colors are different nanometer wavelengths for each standard, so interpreting the color space correctly is important for the colors to look right.

Lastly, there is a concept called "color range" which specifies whether the YUV values go from 16-235 or 0-255. In a technical sense, YUV should always be limited range, also known as MPEG or TV range. The buffer area is to allow for overshoots in an analog broadcast signal, and is sometimes used for time code or synchronization signaling between different pieces of studio hardware. However, in a fully digital workflow, it is technically possible to get the full 0-255 range to capture a little extra color information. That said, this is not industry standard and has to be properly indicated in the video metadata in order to be interpreted correctly. RGB of course does not have to deal with this distinction because it was never used for broadcast or heavily used by live analog studio gear. Had it been, its history would be just as colorful.
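
If you're curious what that range remapping actually does, here is a minimal sketch of the standard 8-bit luma scaling (nothing Shotcut-specific about it):

# Expand limited-range (16-235) luma to full range (0-255), 8-bit.
# Chroma uses 16-240 (a 224-value range), but the idea is identical.
def limited_to_full(y_limited: int) -> int:
    y_full = round((y_limited - 16) * 255 / 219)
    return max(0, min(255, y_full))        # clamp overshoot/undershoot values

print(limited_to_full(16), limited_to_full(235))   # -> 0 255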

I went into detail on those last two parts to explain that when people import YUV footage and complain of shifted colors, the reason is because the metadata in the video file has failed to specify the color space or color range necessary to interpret the YUV data. Shifted colors mean metadata is missing or incorrect, causing the colors to be interpreted in the wrong way (such as the wrong nanometers for “red”, or the wrong range of values). This isn’t an issue with RGB because there is only one specification, so it’s really hard to get it wrong. :slight_smile:

The final thing we need to consider is that taking a YUV source video (a cell phone camera for instance) and converting it to RGB will cause a color accuracy loss due to that conversion. The values will shift by +/- 2 at every conversion between RGB and YUV (both directions). This adds up over time, like generational loss with VHS tapes, so it’s important for an output format to be the same as the input format to get a truly lossless transcode.
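
A quick way to see that +/- 2 wobble for yourself is to round-trip a single pixel (a sketch using the BT.709 full-range coefficients and plain 8-bit rounding; real converters differ in the details):

# Round-trip one pixel through 8-bit YUV (BT.709, full range) and back.
def rgb_to_yuv(r, g, b):
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    u = (b - y) / 1.8556 + 128
    v = (r - y) / 1.5748 + 128
    return round(y), round(u), round(v)    # 8-bit storage forces rounding here

def yuv_to_rgb(y, u, v):
    r = y + 1.5748 * (v - 128)
    b = y + 1.8556 * (u - 128)
    g = (y - 0.2126 * r - 0.0722 * b) / 0.7152
    return round(r), round(g), round(b)

original = (200, 57, 19)
print(original, yuv_to_rgb(*rgb_to_yuv(*original)))   # channels can drift by a value or two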

Now that you know the history and differences between RGB and YUV, let’s get to why your questions were difficult to answer directly…

I wrote those caveats before I knew your workflow was RGB. If your sources are RGB and you save as RGB, then these caveats don’t apply to your situation.

But to clarify the example… if the source video was YUV 4:4:4 with alpha, then we are unable to use either Lagarith or Ut Video for true perfect edge transparency, for two reasons: Lagarith does not support YUV 4:4:4, and neither codec supports YUV transparency. To get a truly lossless transcode of a 4:4:4 input with alpha, the only options are MagicYUV or FFVHuff or FFV1. These codecs will keep the data in YUV, meaning no loss happens converting the YUV data to RGB. If you are okay with the +/- 2 conversion loss, then you can convert 4:4:4 with alpha to RGBA and keep doing what you're doing.
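
For reference, a truly lossless transcode of such a source could be sketched like this (hypothetical file names; FFVHuff shown, but MagicYUV or FFV1 would work the same way):

ffmpeg -i yuva444_source.mov -c:v ffvhuff -pix_fmt yuva444p -c:a copy lossless_out.mkv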

In reality, 4:4:4 with alpha is a pretty rare format, which is why I personally would not base my selection of preferred codec on being able to support it. However, if you wanted a “one and done” codec for every source you’ll ever meet, nobody would give you grief for choosing FFVHuff instead of Ut Video.
FFVHuff has support for everything all the way up to 16-bit video. MagicYUV is also complete in pixel format support, but only for 8-bit and can sometimes get file sizes smaller than FFVHuff without sacrificing performance. FFV1 is good for archiving but too slow for editing.

Ut Video would only subsample to 4:2:0 if you ask it to. Unfortunately in this example, Ut Video does not support YUV transparency at all, and neither does Lagarith, so they are out of the game. In your case, you would ask it to write RGBA or switch to a different codec to stay in YUV.

It totally would if you had an RGB source and saved it as YV12 (which is YUV 4:2:0). This is an example of the confusion that’s happening regarding the structure of video formats and the services of codecs. Codecs, as the name implies, do nothing but compress and decompress. The mechanics of downsampling, upsampling, RGB/YUV conversion, etc etc are a function of video formats and have nothing to do with the codecs. The codecs merely compress whatever bitstream is eventually handed to them without even caring what the bits mean. Most codecs don’t even know if the data they’re holding is full range or limited range, nor do they care. That is a job for the metadata manager, not the codecs.

As I and others here have said several times, there is absolutely nothing special about Lagarith as a codec when it comes to lossless color accuracy. The principles of RGB and YUV video are universal. If they weren’t, lossless transcoding between codecs would not even be possible. The questions you’ve asked can’t be directly answered because you’ve mixed video format concepts with codec concepts, and my tragically-long reply is an attempt to untangle this confusion so you can see just how little of the feature set you value is due to the codec itself. What you value is the stability and simplicity of RGB as a video format, and every lossless codec supports RGB. Lagarith is not special.

The one and only thing special about Lagarith is its support for null frames, meaning it won’t write data for a frame if it is identical to the previous frame. This is a useful space saving feature when dealing with screen-captured video of a PowerPoint presentation where nothing changes for minutes at a time.

It is, so long as you’re okay with its shortcomings. sRGB only covers 30% of the colors that the human eye can see; there are conversion errors introduced when coming from ubiquitous YUV sources; and the file size is huge in the process. Not everyone is happy with that setup.

Guessing the color space based on video resolution only has to be done on YUV sources that are missing metadata, and generally speaking, it’s a good guess at that. Meanwhile, RGB does not require any guesswork because there are no other formats the data could possibly be in. This guesswork is nothing for you to worry about because it doesn’t apply to your RGB workflow.

Because interpreting YUV data as full range would be incorrect 99% of the time. Limited range is the standard for YUV. And as we mentioned earlier, RGB sources are always interpreted as full range, so this has no effect on your workflow.


Sorry for the long-winded response, but as you can see, your questions were based on nuances of YUV formats that have nothing to do with your RGB workflow. There was no direct answer. We first had to sort out the differences between RGB and YUV for anything to make sense because you were concerned about video properties that don’t apply to RGB sources. Once you decide on a YUV or RGB pipeline, all of the lossless codecs are able to save your data in either format. There is a reason they support both formats, because both formats have their merits depending on the situation. You can convert between formats if needed, but you should be aware of the conversion loss that goes with it.

Hopefully the long trip around the mulberry bush has answered more questions than it created. :slight_smile:


I forgot to mention FFVHuff. It’s almost as fast as Ut Video and look at the crazy amount of pixel format support it has, all the way up to 16-bit:

ffmpeg -h encoder=ffvhuff
yuv420p yuv422p yuv444p yuv411p yuv410p yuv440p gbrp gbrp9le gbrp10le gbrp12le gbrp14le gray gray16le yuva420p yuva422p yuva444p gbrap ya8 yuv420p9le yuv420p10le yuv420p12le yuv420p14le yuv420p16le yuv422p9le yuv422p10le yuv422p12le yuv422p14le yuv422p16le yuv444p9le yuv444p10le yuv444p12le yuv444p14le yuv444p16le yuva420p9le yuva420p10le yuva420p16le yuva422p9le yuva422p10le yuva422p16le yuva444p9le yuva444p10le yuva444p16le rgb24 bgra


Well, that just made my day. I'm not doing anything 10-bit yet, but that looks fantastic.

How big are the files compared to Ut and HuffYUV?

Same size. FFVHuff with context=0 specified will create a file that is bit-for-bit identical to a HuffYUV file except for the FourCC embedded in it. FFVHuff with context=1 specified will create a file that is essentially the same size as Ut Video, although not bit-for-bit the same.
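
In command-line terms, the two behaviors look like this (hypothetical file names; context is the ffvhuff encoder option described above):

ffmpeg -i input.avi -c:v ffvhuff -context 0 -c:a copy huffyuv_like.avi
ffmpeg -i input.avi -c:v ffvhuff -context 1 -c:a copy utvideo_like.avi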

The main reason I didn’t originally suggest FFVHuff as a Lagarith replacement is because the OP might have interest in sharing files between After Effects and Shotcut. Since the other lossless codecs have native options like VfW, the same video would be usable by both editors. But FFVHuff is fairly specific to ffmpeg. The only way I can imagine to get FFVHuff into After Effects is to use the ffdshow-tryout codec pack, but that hasn’t been maintained since 2014. So I’m not sure how portable FFVHuff will be with other editors. For people whose workflow is entirely inside an ffmpeg framework, it’s a fantastic format. If sharing outside of ffmpeg, then MagicYUV may be the closest to a one-and-done codec due to its support for YUV transparency. If a one-time conversion loss from YUV to RGB is acceptable, then using any lossless codec in RGB mode would be a viable replacement.

Nerd alert: The context parameter determines whether the Huffman tree will be altered to improve compression for each frame, or if the same tree will be used for the whole video. HuffYUV uses the same tree, meaning there is no overhead setting up a new tree for each frame, making it process faster at the expense of compression efficiency. Ut Video alters the tree each frame for better compression, but has gotten very fast at it over the years to make up for the overhead. FFVHuff is able to simulate either HuffYUV or Ut Video by setting the context parameter, where “context” means a new tree in the context of a new frame. FFV1 can do this too.

I have some video of a moving train.

When encoded with FFV1 or HuffYUV, there is jerky playback on VLC. When encoded with UT Video, the motion is nice and smooth, so I use UT Video with PCM audio for lossless. The subsampling has to be 4:2:2, so it’s not truly lossless being subsampled. I can’t get 4:4:4 to work with UT Video. Maybe I’m doing something wrong?

If it’s headed for YouTube I use H.265 with AAC audio for a small file and a quick(er) upload.

What quality % do you use for the video, and what bitrate for the audio, when preparing for YouTube?