Able to edit Lagarith encoded video but how to encode in Lagarith codec?

When inside the Advanced export menu, out of ALL the codecs, I couldn’t find the Lagarith codec even though I have it installed.

I use this codec all the time for lossless editing. How do I get Shotcut to show the option to encode in the Lagarith codec?

Thank you!

In case someone is interested in the codec, it can be found here:

But I care more about whether Shotcut can export using this codec; it is kind of make-or-break for me on this one. Thanks.

FFmpeg is capable of decoding Lagarith but not encoding it. Since Shotcut uses FFmpeg, it is bound to the same limitation. If you’re after a lossless codec that’s supported in both directions, HuffYUV (which Lagarith is based on) and Ut Video both get offered as lossless options here fairly regularly.

First of all, thank you for explaining this to me.

Now I am wondering if the Shotcut developers could integrate Lagarith into Shotcut, since it is free.
I have zero experience using other lossless codecs. I want zero funny tricks and no chance of any information being lost. Compression is fine, but nothing must be lost, at all.

What do you suggest for that ?

HuffYUV or Ut Video in an MKV container is going to be your best choice. Both have had some fine-tuning in the forum here to improve their color accuracy and performance; the two linked threads are both worth a read.


I will be reading the two posts in about 20 minutes from now.
Can I have it in an AVI container instead of this *.mkv thingy?

I’d rather have it in a container that is familiar to other video programs.
I understand I can just change the container in the Shotcut video editor, but in your experience, is it “ok” for either of the two codecs you mentioned to be in the *.avi format?

I can’t speak for the Shotcut developers, but I would be surprised if they had interest in adding custom codec support beyond what ffmpeg provides. That would create a lot of internal code paths and chances for bugs when deciding whether to use an internal or ffmpeg codec. Then an internal codec might become redundant if ffmpeg adds support in the future. So the general rule is “Shotcut does whatever ffmpeg does and nothing more”, with the possible exception of some RAW/CinemaDNG formats if the roadmap stays on course and ffmpeg doesn’t add them before then. :slight_smile:

HuffYUV, Ut Video, and MagicYUV are all lossless codecs in the same class as Lagarith. They are all mathematically-perfect bit-for-bit encodings of the original. I think you would end up liking any of them better than Lagarith for reasons of performance, code stability (fewer bugs), and compatibility with other programs. These three have native codecs including VfW that can talk to commercial editing programs.

However, whether the result is completely lossless will depend on your source material. For example, if your source is YUV 4:4:4 with an alpha channel (like it came from ProRes 4444 HQ or similar), then MagicYUV is the only codec that supports the yuva444p pixel format. Likewise, all of these codecs are 8-bit only, meaning any 10-bit sources you have will not be lossless once brought down to 8-bit. These are the same caveats as Lagarith, so it shouldn’t be anything new.

You will need to pick the codec that matches your source if you’re doing anything unusual, as they each support slightly different formats. If you’re working in normal 4:2:0, then Ut Video is your friend, because HuffYUV doesn’t have a 4:2:0 option and will upsample to 4:2:2, which is still lossless but unnecessarily wastes 30–60% more disk space. If you are using standard YUV 4:2:2, then you can use any of them without a care in the world.

Here are the supported pixel formats for each codec:

ffmpeg -h encoder=huffyuv
yuv422p rgb24 bgra

ffmpeg -h encoder=utvideo
gbrp gbrap yuv422p yuv420p yuv444p

ffmpeg -h encoder=magicyuv
gbrp gbrap yuv422p yuv420p yuv444p yuva444p gray

ffmpeg 4.2.1 or later is recommended because MagicYUV is no longer in experimental state with the 4.2 branch.
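To make this concrete, here is a hedged sketch of what a lossless encode looks like on the command line (the input and output file names are hypothetical; any ffmpeg 4.2+ build should accept these options):

```shell
# Keep a 4:2:0 source in 4:2:0 with Ut Video instead of upsampling:
ffmpeg -i input.mp4 -c:v utvideo -pix_fmt yuv420p -c:a copy output.mkv

# HuffYUV has no 4:2:0 mode, so its closest lossless option is 4:2:2:
ffmpeg -i input.mp4 -c:v huffyuv -pix_fmt yuv422p -c:a copy output.avi
```

Matching `-pix_fmt` to the source is the whole trick; if you omit it, ffmpeg may silently pick a different format than your source uses.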

This one comes with many caveats. The AVI format does not have flags to store color space or color range metadata. The lossless codecs (Lagarith included) do not store this data either, although we can infer the color space of Ut Video due to it having dedicated FourCCs for each color space. Without this color metadata, applications are left to guess what the colors should look like, and wrong guesses are the source for shifted colors that can drive people absolutely crazy trying to fix. So here’s the breakdown with Shotcut…

If color space metadata is not supplied in an AVI file and we can’t infer it from a Ut Video FourCC, then Shotcut guesses from the resolution: if Width × Height < 750,000 pixels, the video is considered “Standard Definition” and gets the BT.601 color space; otherwise it is considered “High Definition” and gets the BT.709 color space. Basically, the video needs to be 720p or greater to count as HD, although I emphasize that Shotcut is guessing and not guaranteed to be correct. (If the format is Ut Video, then BT.601 or BT.709 will be indicated by the FourCC, so that’s at least one thing accurate.) However, in all cases of these lossless codecs in an AVI, the color range will be interpreted as legal/limited/MPEG rather than full, regardless of whether the actual video data inside is limited or full.
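If it helps to see that guess written out, here is a minimal shell sketch of the heuristic described above (the function name is made up for illustration; the 750,000-pixel threshold is the one from the description):

```shell
# Shotcut-style guess when no color metadata is present:
# under 750,000 total pixels -> BT.601 (SD), otherwise -> BT.709 (HD).
guess_colorspace() {
  width=$1
  height=$2
  if [ $(( width * height )) -lt 750000 ]; then
    echo "BT.601"
  else
    echo "BT.709"
  fi
}

guess_colorspace 720 576    # PAL SD: 414,720 pixels -> BT.601
guess_colorspace 1280 720   # 720p HD: 921,600 pixels -> BT.709
```

Notice how close 720p sits to the threshold; an unusual frame size like 960×720 (691,200 pixels) would land on the SD side, which is exactly why the guess is not guaranteed to be correct.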

The only fool-proof way to get around color shift and color metadata problems is to put the lossless codecs (Lagarith included) into a Matroska MKV container. It has flags for these color properties. ffmpeg 4.2.1 or later is needed to get the highest performance out of Matroska. It is included in Shotcut 19.10 which is currently in beta.
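For the curious, here is a sketch of how those color flags can be stamped into an MKV when encoding with ffmpeg directly (file names are hypothetical; the `-color*` options set the stream properties that Matroska then records):

```shell
# Encode to lossless Ut Video in MKV and explicitly tag BT.709 limited range,
# so downstream players no longer have to guess:
ffmpeg -i input.avi -c:v utvideo -pix_fmt yuv422p \
  -colorspace bt709 -color_primaries bt709 -color_trc bt709 -color_range tv \
  output.mkv
```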

The three codecs listed above are drop-in replacements for Lagarith. I think you’ll find a transition to be way less problematic than you fear, especially if using MKV as the container and the 19.10 version of Shotcut when it comes out in a few days. (Older versions work too, but not as fast in some extreme cases.)

Actually, there is one more option in the same lossless category as the others. FFV1 is an archive format that supports every pixel format under the sun, including 10-bit and higher. It’s a good option if you need one format to rule them all for archiving anything that’s thrown at you.

However, by virtue of being an archive format, it aims for really high compression which means prohibitively slow playback. You won’t be able to edit in Shotcut directly on an FFV1 file due to the slow playback. If you’re editing, then the other three formats are much better choices. If you are archiving, or creating an intermediate that will be used for the final render while you do your editing on a proxy, then FFV1 is a good option as an intermediate. The other three are good proxy options.
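If anyone wants to try the archival route, a typical FFV1 invocation looks something like this (file names hypothetical; `-level 3` selects the current FFV1 version, and `-slicecrc 1` adds per-slice checksums, which is a common choice for archives):

```shell
# Archive-grade lossless: FFV1 version 3 with slice CRCs for integrity checking.
ffmpeg -i input.mov -c:v ffv1 -level 3 -slicecrc 1 -c:a copy archive.mkv
```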

@Austin you beat me to the more technical breakdown (and in greater verbosity than I would have managed). If you’re up for it, I could use a few guest posts for my website about video editing XD

@Lagarith Austin explained most of what I pointed you to those threads to read. Out of curiosity, what are you using this video for, and where are you getting it from? Perhaps we can help you optimize things better still.

If the codec library outputs something that requires that I must also integrate a muxer (container format), then I am not going to spend the effort. Let someone else integrate it into FFmpeg, where there is a much higher level of contribution. If a codec is not integrated into FFmpeg, then there is likely a reason regarding relevance, quality, or licensing. I do not know why Lagarith encoding is not implemented or integrated into FFmpeg.

Cineform is in a similar state within FFmpeg and Shotcut even though the code is on GitHub under Apache license.

The other most popular codec requested in this area is HAP. That is available through an existing library integration with FFmpeg. I just haven’t added it to the build process yet.

@Austin your mention of 10-bit color sent me digging some. Are there actually any 10-bit lossless codecs that aren’t proprietary right now? I did a little looking but couldn’t find any. Also, MagicYUV is decode-only under ffmpeg unless Shotcut implemented support for it specifically (I haven’t checked).

The amazing simplicity of Lagarith: it just works. Want normal lossless? Just select RGB. Want alpha? Just select RGBA. Notice it didn’t ask you nonsense like “Do you want your video in 4:2:2 or 4:2:0?”
This is because Lagarith understands basic common sense. When it says it is a lossless codec, it wouldn’t do a 180 by saying “Yeah… the whole lossless thing? You want me to maybe lose it down to 4:2:0? Or do you want me to keep it as…” Notice it doesn’t do bullcrap like this, because it is a lossless codec that doesn’t ask stupid questions.

How do I get Shotcut to interpret and output in RGB and/or RGBA? I have never dealt with, and don’t ever want to deal with, YUY2, YV12, yuv422p, yuv420p, or yuv444p.
It seems to me, from reading the forum, that all this “interpretation” has been a great source of grief for everyone because of color-shift issues.

All I know is, in my 20 years of editing in After Effects, Lagarith has always worked, the colors have always been right, and I have never once had to contend with, or been asked by After Effects, whether my Lagarith video is to be interpreted through rgb, rgba, YUY2, YV12, yuv422p, yuv420p, or yuv444p.

In After Effects, it’s all just RGB or RGBA. Life is so simple and the color is as it is. Lagarith in After Effects never asks me if it should encode in 4:2:0 or 4:2:2; if I want an alpha channel, I just select RGBA, and if I don’t, I just select RGB. All this talk about having to remember which codec to use for 4:2:2 and which for 4:2:0, with the added warning that the codec that handles 4:2:2 will upsample a 4:2:0 video so one has to be mindful of which to select, is making me want to crawl back to my After Effects Lagarith workflow where everything just works.

I am now starting to have a panic attack just thinking about having to deal with all this nonsense in the Shotcut video editor, and this is coming from someone who has done a lot of VFX work for about 8 years now.

The color issues, as noted in one of the threads that @D_S linked for you, were fixed in Shotcut a while back. So as far as I know, there are no color issues now. Dan has been very good about that.

For whatever it’s worth, I did a quick search and found this post on the ffmpeg zeranoe forum where someone asked back in 2012 about ffmpeg encoding Lagarith and this was the response:

it can’t. It can decode it, however. I contacted the Lagarith author and he said that he (specifically) had no interest in porting his code to ffmpeg, so… until somebody else steps up to do it, you’ll need to use Windows’ system to encode to Lagarith.


Mind you, that was in 2012. I don’t know if anything is different now. Maybe you could ask at the ffmpeg forums if anyone would consider porting it. Here is the link to the ffmpeg forums. Also, here is the webpage to the author, Ben Greenwood. According to that page, he last updated Lagarith back in 2011.

Oh, ALL the color interpretation nonsense has been fixed?

That was a VERY worrisome “feature” and I am glad now that’s settled.
Except that the way the Shotcut video editor imports your video might still mess up your colors big time IF YOU DO NOT MANUALLY SET IT FOR EACH TRACK, because of this totally uncalled-for, redundant, unnecessarily added nonsense:

The part where I talk about the “Broadcast Limited (MPEG)” nonsense.
It’s like Shotcut is saying: “Hey, since you did not set whether your video is Broadcast Limited (MPEG) or Full (JPEG), and I couldn’t infer it from the video, let me just assume it is 1995, select Broadcast Limited (MPEG), and interpret your video’s colors in a limited fashion, because as we ALL KNOW it is GREAT to assume the worst quality when unknown…”

Meanwhile, After Effects would never do something as ridiculous as this; it would never have been a thing. After Effects just allows the full RGB range and doesn’t “re-interpret within a limited palette when unknown”.

You almost have to put in extra effort to make sure Shotcut doesn’t mess up the color interpretation by default, before you have even done anything on your end to mess it up.

Is After Effects available on Linux?
It doesn’t support Linux.

I haven’t researched open source 10-bit lossless codecs a lot, but FFV1 supports 10-bit directly in ffmpeg. The Ut Video format also supports 10-bit and the native codecs (like VfW) support 10-bit. It was the choice of ffmpeg to only implement the 8-bit portions so far. I’m not sure what’s stopping them from implementing 10-bit as the standards are already written. My guess would be that if a studio cares enough to have a 10-bit lossless workflow, they’re probably going to use CinemaDNG, BRAW, or some other direct RAW format rather than mess with a middle layer. There’s so much data and so much processing that they can’t waste much CPU on decompression at that level of post-production.

That was true for older versions of ffmpeg. In the 4.2 branch and later, MagicYUV is both encode and decode, and no longer marked as experimental.

TL;DR… There is nothing to worry about. Shotcut and the other lossless codecs have you totally covered.

Now for the details…

I wish I knew this about you from the start. :slight_smile: It provides context to everything.

In a VFX pipeline, it is very common to work entirely in RGB/A. For motion pictures and other large productions, it’s even more common for that RGB space to be linear and be exchanged as EXR or DPX image sequences rather than video files. As you’re probably aware, this is exceedingly specific to the VFX workflow. But before and after the VFX integration, it’s a completely different world when it comes to encoding video, and out there, the only common formats are YUV or some variant of RGB RAW (like CinemaDNG or BRAW). RGB – as in sRGB to distinguish it from linear or RAW – will rarely exist outside of 3D animation or motion graphics renderings, or screen recordings of video games captured by OBS Studio. Basically, RGB is for stuff that’s generated by a computer as opposed to video that’s captured with a camera.

Before answering your other questions, we should first double-check your source video. The pixel format was gbrp, meaning it’s an RGB video. But how did it get to be RGB? Was it generated by a computer like a rendering or screen capture? Or was it converted to RGB from video captured with a camera? The reason it’s significant is because you wanted absolutely zero loss due to compression. However, if the source video was YUV from a camera and then converted to RGB for Lagarith, then loss happened right there in the YUV-to-RGB conversion. Granted, the loss won’t be terribly noticeable and the loss is probably minimal since the camera already converted an RGB Bayer filter to YUV in the first place, meaning the camera chose YUV values that will map the closest back to the original RGB. However, if the RGB conversion was done later with software that isn’t perfectly color correct, then there will most definitely be loss in every YUV-to-RGB conversion and back. There is also an issue that the YUV color gamut is larger than RGB, meaning RGB will lose color data if transcoded from YUV, which will turn into noticeable banding and possible generation loss if you apply heavy color grading.

This is why all the lossless codecs mentioned in this thread support both YUV and RGB variants. The output format must match the input format in order to avoid a conversion loss. You can do everything as RGB in Shotcut just as you’re doing in After Effects, but there is potential for loss even with RGB in both workflows and it’s up to you to decide if you’re okay with that. You haven’t noticed a problem with After Effects apparently, so in theory you should be fine doing the same workflow in Shotcut.

All that aside, let’s get down to editing…

Since your input is RGB, your life will indeed be much more simple than the average person using Shotcut who has to deal with YUV. Lucky you!

You’ll be relieved to know that when you drop an RGB video onto the timeline, the color range drop-down box for Limited vs Full has no effect. Those options are only relevant for YUV video. To prove this for yourself since it rightfully concerns you, change between the two options and notice that the preview window doesn’t look any different either way. Then drop a YUV video onto the timeline (a video from a cell phone will suffice) and change between the two options. The colors in the preview window will have a radical shift. Limited vs Full is ignored for RGB video because it’s a meaningless concept to RGB. @shotcut, to @Lagarith’s point, it would be nice if Shotcut automatically set the color range drop-down box to Full if the source video is RGB. Not sure if that’s an option or even a good idea, but it sounds logical in theory.

At this point, editing can be done care-free just like you would normally expect. No need to double-check Shotcut’s interpretation of your files because RGB is RGB is RGB. Simplicity is a feature of RGB and has nothing to do with Lagarith.

Now we get to exporting. The easiest way to have Shotcut export as RGBA is to choose the Alpha > Ut Video option from the list of stock export presets. If you choose that preset then go to the Advanced > Other tab, you’ll see these lines in it:


The first line tells Shotcut to use an internal RGBA processing pipeline. The second line says the output video format should also be RGBA (as opposed to converting to YUV). If you want RGB without an alpha channel, change those two lines to the following and optionally save it as a custom export preset if you’ll use it often:


That’s all there is to exporting as RGB. The same concept works for all the lossless codecs, including Lagarith. The simplicity of Lagarith was due to being in RGB, not due to any magic of the codec itself. Lagarith also has YUV options (they were called YUY2 and YV12 in your screenshot) that are identical and every bit as complicated as the other codecs. They’re all the same. Lagarith has no more common sense than any of the other codecs; it’s just that RGB is arguably more common sense than YUV for everything except color gamut and disk space usage. :slight_smile:

I demonstrated Ut Video as a possible replacement for Lagarith simply because it has an export preset already defined in a default installation of Shotcut. You could just as easily use HuffYUV or MagicYUV, although when you’re in RGB, there’s no particular reason to favor one over another except that Ut Video is the most actively maintained and continually optimized in the ffmpeg code base.

And you’ll be happy to know that since RGB doesn’t require color metadata, thanks to its simplicity, you can use an AVI container without any worries or caveats at all. You don’t have to add Matroska MKV files to your life with RGB.
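If it helps to see the same RGB/RGBA idea outside of Shotcut, here is a rough ffmpeg command-line sketch (file names are hypothetical; `gbrp` and `gbrap` are the planar RGB and RGBA pixel formats from the encoder listings earlier in the thread):

```shell
# Lossless RGBA with Ut Video (alpha channel preserved):
ffmpeg -i input.mov -c:v utvideo -pix_fmt gbrap output_rgba.avi

# Same thing without the alpha channel:
ffmpeg -i input.mov -c:v utvideo -pix_fmt gbrp output_rgb.avi
```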

The world of video encoding is dizzying, I get it. The glory of Shotcut is that it gracefully allows the importing of videos in any format, any bit depth, RGB or YUV, interlaced or not, any frame rate, drop frame or not, any color space, and any color range all on the same timeline within the same project, and gives you the power to tweak any settings that were not guessed correctly when provided with files that are malformed or incomplete. These formats that are so understandably ridiculous and full of compromises in the eyes of VFX guys are unfortunately the bread and butter of the broadcast industry and the distribution markets (like DVD and Blu-ray). There are numerous people in this forum who work in broadcast studios and use Shotcut for its incredibly forgiving and versatile treatment of any format thrown at it.

For a documentary filmmaker receiving video files sent in from all over the world, Shotcut is the coolest tool out there for being able to drop all videos immediately onto a single timeline. By comparison, if you wanted to edit video in Blender or Lightworks, all the source videos would have to be transcoded to a common frame rate and color space before they would play nice on the timeline. Meanwhile, Shotcut can get to work directly on the original files and interpolate transparently. It’s a thing of beauty. It’s actually a little unfortunate that the documentation and the “marketing arm” of Shotcut does not emphasize these features more prominently. The abundance of options (including the nonsensical ones) are actually raw unlimited power in the hands of people that have need for it.

You’ll also be glad to know that if YUV sources ever enter your workflow, Shotcut actually makes very intelligent guesses when metadata is missing. Defaulting to MPEG color range for YUV files is industry standard even today. Most containers like Apple’s MOV don’t even have a flag to signal full range because that’s just not a thing in a professional YUV workflow. It’s really just consumer devices like cell phones and DSLR/mirrorless cameras that create full-range YUV in MP4 containers, and MP4 has specialized metadata to signal it that Shotcut knows how to read.

Hopefully those details are able to set your mind at ease about the way Shotcut interprets your videos. If an RGB pipeline was good enough for you with After Effects, there are enough details here to create an RGB pipeline with Shotcut too and get identical results, albeit with a codec other than Lagarith. You should be able to hit the ground running at this point. If not, reply with the next hurdle and we’ll see what we can do for you.


Another option just came to mind… H.264 in Lossless RGB mode. This should be very compatible with any editor out there. The easiest way to export in this format is to choose the H.264 High Profile export preset, change the Codec tab to use libx264rgb instead of libx264, and set the Quality to 100%. There is a chance it will offer better compression than the other codecs at the expense of playback speed (requires more CPU to decompress it on-the-fly).
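A hedged sketch of the equivalent command line, for anyone doing this with ffmpeg directly (file names hypothetical; note that lossless H.264 uses the High 4:4:4 Predictive profile, which not every player or hardware decoder supports):

```shell
# Lossless H.264 in RGB: -qp 0 puts x264 into true lossless mode.
ffmpeg -i input.avi -c:v libx264rgb -qp 0 -preset medium output.mp4
```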

Hopefully Shotcut gets support for CinemaDNG before I actually need it! Honestly, though, the studio I work with on occasion doesn’t even shoot that yet.
Good call on the 4.2 branch; I was checking Wikipedia’s codec list, not the ffmpeg site directly.

How is this relevant to this thread?

Hey @Austin, I just wanted to take this time to thank you for EVERYTHING.
You spent a considerable amount of time educating me about MPEG limited and JPEG full with all that typing.
I just wanted to say thank you.

Now more questions:

Wait, so now one shouldn’t use Ut Video for TRUE PERFECT EDGE transparency?
Because Ut Video sub-samples to 4:2:0 while maintaining transparency, while MagicYUV keeps 4:4:4 while maintaining transparency?
You know Lagarith will never say, “Hi, you use me because you want lossless, but let’s just downsample your stuff while maintaining transparency so you have the ‘illusion’ of lossless while in actual fact I F-up your video big time.”
WHY WHY WHY the confusion? RGB is SO no-nonsense and straightforward. It wouldn’t degrade your video, period! That pixel color will STAY that color, end of story!
I really hate this!

This is the WORST thing from a VFX standpoint, because VFX elements come in different resolutions and sizes; to think that it will infer color interpretation from resolution is a FREAKING NIGHTMARE.
Why can’t it just interpret everything as full JPEG?! Life is simple, everything will look right, end of story.