Workflow suggestions for major feature-filmmaker considering Shotcut

I am a major feature-filmmaker (remaining anonymous) interested in making the switch to open-source software (Shotcut) for my next project.

We will be shooting Sony S-Log3 XAVC footage which, apparently, Shotcut (ffmpeg) does not natively support. The solution is, presumably, to transcode.

My question is this: What workflow does the community suggest in order to retain the maximum quality from S-Log3 footage, and then to edit this footage in Shotcut?

Would we have to do a primary-grade of the S-Log3 footage in non-open-source software before transcoding, or can Shotcut plausibly grade an S-Log3 gamma-curve when transcoded? This, assuming that we can first transcode our footage into something Shotcut can ingest.

You may well ask why we are quitting DaVinci Resolve / Apple FCPX etc. in a professional post-production environment. The reason is that it has become clear that the manufacturing ethics and politics of these two companies are no longer aligned with our studio’s values.

We want to put our energies into supporting an open-source solution. It may, initially, be difficult for our studio to work in Shotcut, but we’d like to make the switch.

Any guidance greatly appreciated.


I do not have experience working with camera log video. However, I can tell you that Shotcut will accept most common video color spaces such as Rec. 601, Rec. 709, and BT.2020. You will need to transcode the footage, and you can apply a LUT while doing so. I suggest using ffmpeg (which Shotcut includes) and its lut3d filter for that. As for the command line, you can make some experimental transcodes in Shotcut, which uses ffmpeg in the background, by choosing Properties > Convert on any video. Then, use View > Application Log and scroll to the end to see the command line it generated. From there, you can add the ffmpeg filter to that command line between the input and output filenames, like -vf lut3d="name_of_lut_file". If you are interested in 10-bit, you will need to research how to expand that command line (codec and pixel format). Our 10-bit encode presets provide some hints about that, even though they are intended to be used with melt, the command line interface to the Shotcut processing engine, which is a layer atop FFmpeg’s libraries (plus others).
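To illustrate the splicing step in script form (file names here are hypothetical placeholders, not from the log), the idea is simply to insert the filter arguments before the output filename at the end of the captured command line:

```python
# Sketch: splice an FFmpeg lut3d filter into a command line captured from
# Shotcut's application log. All file names below are hypothetical.

def add_lut_filter(cmd, lut_file):
    """Insert -vf lut3d=... just before the output filename (the last argument)."""
    return cmd[:-1] + ["-vf", f"lut3d={lut_file}"] + cmd[-1:]

# A simplified stand-in for a command copied from View > Application Log:
captured = ["ffmpeg", "-i", "clip_slog3.mp4", "-c:v", "dnxhd", "out.mov"]
with_lut = add_lut_filter(captured, "conversion_lut.cube")
print(" ".join(with_lut))
```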

Sounds like an interesting project! In order to provide useful information, we need to know your intended output format. Are you targeting BT.709 SDR, or DCI-P3, or BT.2100 wide color plus HDR, or some other format?

Thank you for your help. Based on this, we are considering using ffmpeg to transcode the Sony S-Log3 footage to DNxHD / HR. It looks like Shotcut is happy editing this. We’ll do some tests.

Interesting question! Moving to an open-source post-production workflow is not our only brave move in this project; there’s more, which may explain why our intended output format is, as yet, undecided:

We are also planning to deliver this movie (and future ones) theatrically (in a single, new public cinema which we are building specifically for this purpose), and also via an open-source ‘video store’ application (which will run on top of the BitTorrent network).

In my opinion, the production, post-production, and distribution wings of the legacy movie industry all need to be completely side-stepped. We’re attempting that.

An entirely parallel ‘Hollywood’ needs to be constructed outside the realm of Los Angeles / Netflix / Disney etc. To quote Buckminster Fuller:

“You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete.”

So, we are not limited to delivering in any current television ‘broadcast’ format and have not yet decided on an output type. This is a discussion we’re currently having: If our core audience will either be watching in our cinema (one we can personally calibrate), or on a computer, what is, technically, the most beautiful way we can present our movies to them?

Because the movie will never be streamed in real-time (only downloaded), we don’t necessarily have to observe broadcast conventions or compression-ratios here. Quality is our goal.

In summary: At this stage, we’re curious about a recommended format to transcode ungraded Sony XAVC footage into before we bring it into Shotcut. We’re looking for an ffmpeg-friendly format that will give us a wide range of output options later, and preserve as much of the original image quality as possible (within reasonable file sizes).

Hope that makes sense. Thank you.

Is that you Dinesh!

Not Dinesh, no :slight_smile:
But I’m aware of several filmmakers who are considering leaving the entire ‘Hollywood’ / Netflix system — following the ethos of many musicians who are no longer seeking to ‘get signed’, and are instead choosing self-distribution.

:slight_smile: I might not have been wrong.
I saw a news article where Johnny Depp told Disney to take a hike, and another titled “Christian Actor Mark Wahlberg to Create ‘Hollywood 2.0’, Announces Location of New Studio”.

This is the natural progression of the marketplace. The lack of quality products creates opportunities for alternatives.

Best of luck to you.


I’ve begun to feel that every movie director who is serious about changing the trajectory of the industry should be looking into buying a movie theater at this point. Self-distributing, and reaching smaller audiences with better-quality storytelling is preferable to wrestling with what’s left of the ‘Hollywood’ system.

Nice move. I wish you success. Did you hear about Jean-Pierre Mocky?
His biography in English doesn’t mention it, but he bought his own theatre to be free to make the films he wanted, without suffering ukases from the film industry and television.

All in this thread is very interesting. I’ll wait for those new independent films. Best of luck :smiley:

Following this post…

S-Log3 can be stored in DNxHD/HR, but you need to convert it from S-Log3, which is a proprietary log curve optimised for Sony camera sensors, to a delivery format (such as BT.2100, BT.709 or DCI-XYZ). The capabilities of the cinema projector will tell you which to use. (S-Log3 is not a traditional gamma curve.)

The conversion from S-Log3 to a delivery format would be handled in DaVinci either using ACES or its built-in colour science. It’s a mathematical transform, so it can be implemented in a LUT; however, S-Log3 places IRE 0% at 10-bit code 95 (not 64), so you need to ensure your tool of choice can deal with that.

Sorry for the delay. I have Panasonic cameras, not Sony, so it took me a little time to dig up some sample footage and relevant information for testing.

In this post, I will cover:

  • An example project that uses an official Sony Rec.709 conversion LUT to pre-grade footage for use in Shotcut
  • An example project that attempts to grade S-Log3 footage directly in Shotcut
  • Choosing a color space
  • Choosing an encoding format

It’s worth noting right away that Shotcut’s largest color gamut is Rec.709. Currently, Shotcut will not master in DCI-P3, Rec.2020, ACES, or other large gamut spaces. All of the following example projects are built under the assumption that Rec.709 output will be sufficient. (More on this at the end.)

Using a LUT to pre-grade footage

First, we need sample footage. I got it from this YouTube video:

In the video description is a link to a Google Drive folder with MP4 files. I chose to run tests on a file called A1_Courtney&Spencer_082121__233.MP4 because the scene has a wide dynamic range, the footage is properly exposed, and it features some skin tones.

The format of the footage is H.264 High 4:2:2 Level 5.2 (XAVC) 3840x2160 59.94fps 10-bit S-Log3 signaled as full range. FFmpeg does not detect any values specified for matrix, primaries, or OETF. Also, neither the footage nor the video owner stated whether the gamut is S-Gamut3 or S-Gamut3.Cine, but I am guessing from the scopes that it is S-Gamut3.Cine. (The Cine gamut is also generally recommended over full S-Gamut3 unless there is a really good reason and a really good colorist involved.) The source camera was a Sony A1.

The next step is to get an official S-Log3 to Rec.709 conversion LUT from Sony. This link provides LUTs for both S-Gamut3 and S-Gamut3.Cine:

These LUTs are for s709 output, which is Sony’s attempt at giving you a lower contrast image that preserves more highlights for grading than a straight 709 Look Profile would do. There are also LUTs included to do an S-Log3 to DCI-P3/D65 conversion should you decide to go that direction. Note that these LUTs are intended for the A1 and A7S III. For the Venice and FX/FS-series cinema cameras, or if you want the 709 Look Profiles or Cine+ profiles, use the LUTs mentioned by @st599 at this link:

And for completeness, the Panasonic crowd can get a V-Log to Rec.709 LUT from here:

The next step is getting FFmpeg binaries. I typically get the nightly auto-build from here:

The next step is crafting an FFmpeg command that applies the S-Gamut3.Cine version of the LUT and saves the result as a DNxHR HQX file (the HQ variant is only 8-bit). Put the source MP4 video and the LUT .cube file in the same folder and run this command:

ffmpeg \
-i "A1_Courtney&Spencer_082121__233.MP4" \
-filter:v lut3d=file='SL3SG3Ctos709.cube' \
-vsync cfr \
-pix_fmt yuv422p10le \
-colorspace bt709 \
-color_primaries bt709 \
-color_trc bt709 \
-color_range tv \
-c:v dnxhd \
-profile:v dnxhr_hqx \
-c:a copy \
-f mov \
-movflags +write_colr+write_gama+faststart \
-y ""

This can obviously be scripted as part of your ingest workflow. Since FFmpeg can natively read XAVC (which is just H.264 at Level 5.2), no other tools are required.
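As one sketch of such an ingest script, the command above can be wrapped in Python and run over every MP4 in a folder (this assumes the same LUT file name and folder layout as above, and Python 3.9+; adjust paths to your own setup):

```python
# Sketch of a batch ingest script that applies the S-Log3 -> Rec.709 LUT
# and transcodes to DNxHR HQX, mirroring the FFmpeg command above.
import subprocess
from pathlib import Path

LUT = "SL3SG3Ctos709.cube"  # Sony S-Gamut3.Cine LUT, in the same folder

def build_cmd(src: Path) -> list[str]:
    """Build the FFmpeg argument list for one source clip."""
    dst = src.with_suffix(".mov")
    return [
        "ffmpeg", "-i", str(src),
        "-filter:v", f"lut3d=file={LUT}",
        "-vsync", "cfr",
        "-pix_fmt", "yuv422p10le",
        "-colorspace", "bt709", "-color_primaries", "bt709",
        "-color_trc", "bt709", "-color_range", "tv",
        "-c:v", "dnxhd", "-profile:v", "dnxhr_hqx",
        "-c:a", "copy",
        "-f", "mov", "-movflags", "+write_colr+write_gama+faststart",
        "-y", str(dst),
    ]

def ingest(folder: Path) -> None:
    """Transcode every camera MP4 in the folder."""
    for src in sorted(folder.glob("*.MP4")):
        subprocess.run(build_cmd(src), check=True)
```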

Sony S-Log3 footage is an “extended range” format just like Panasonic V-Log. Code values do not go all the way down to zero, but they do go all the way up to 1023. This imbalance defies the usual definitions of full range versus limited range video, so it is called “extended range” instead. To retain the values up to 1023, the input file from the camera is signaled as full range even though that isn’t exactly what the code values are doing. This is important to know because the official Sony LUTs are designed to create an output image in Rec.709 limited range. As in, the compression to limited range is baked into the LUT itself. It isn’t an additional step you need to do yourself. The LUT itself also expects and compensates for S-Log3 having reflectance IRE 0% at 95 rather than 64 or zero.
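Those numbers can be checked against Sony's published S-Log3 curve. This sketch of the OETF (constants taken from Sony's S-Log3 technical summary, so treat it as illustrative rather than authoritative) shows that 0% reflectance lands at 10-bit code 95 rather than 0, while 18% grey lands at code 420:

```python
import math

def slog3_oetf(x: float) -> float:
    """Sony S-Log3 OETF: linear scene reflectance -> normalized code value (0..1).
    Constants are from Sony's published S-Log3 technical summary."""
    if x >= 0.01125000:
        return (420.0 + math.log10((x + 0.01) / (0.18 + 0.01)) * 261.5) / 1023.0
    return (x * (171.2102946929 - 95.0) / 0.01125000 + 95.0) / 1023.0

# 0% reflectance maps to 10-bit code 95, not 0 -- the "extended range" floor
print(round(slog3_oetf(0.0) * 1023))   # 95
# 18% grey maps to code 420
print(round(slog3_oetf(0.18) * 1023))  # 420
```

The top of the curve, by contrast, does reach all the way to code 1023 (at roughly 38x reflectance), which is the imbalance described above.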

At this point, the transcoded video can be brought into Shotcut and edited as usual. Here is a frame exported from Shotcut where the center column has the LUT pre-applied (no additional filters in Shotcut), and the outer edges are the original S-Log3 footage:

That’s a really good starting point. Since this Rec.709 footage is still 10-bit, it retains a great deal of grading flexibility, provided the GPU filters in Shotcut are used. Using a CPU filter will cause a temporary conversion to 8-bit for processing, which could degrade the image quality.

Grading S-Log3 footage directly in Shotcut

Here is my meager 2-minute attempt at using Shotcut color filters to directly grade the S-Log3 file:

The center column is my grade, and the outer edges are the Sony S-Gamut3.Cine LUT. The only tools I used were Color Grading (the lift/gamma/gain “brightness” controls), Saturation, and White Balance. The hay field looks pretty similar to the LUT, but the sky varies by a lot. I can switch those around by pushing White Balance the other direction, but I can’t get both to match the LUT at the same time. Maybe I could if I adjusted the color of the highlights. Anyhow, perhaps I guessed incorrectly that the gamut of the source video was S-Gamut3.Cine. Or, more likely, it’s a side effect of the red primaries being off-axis with each other between Rec.709 and S-Gamut3.Cine.

The conclusion though is that yes, Shotcut can be used to edit 10-bit S-Log3 footage. Whether it offers enough color manipulation tools to meet your grading requirements is a question you would have to explore for yourself from here. In particular, a Curves filter with hue-vs-hue and luma-vs-saturation is currently absent in Shotcut.

Choosing a color space

I’m sure your studio has already debated the output format topic to death, but I’m going to rehash the main points here for everyone else that’s following along.

Shotcut supports Rec.709 as a working space and output format. It does not support DCI-P3, Rec.2020, ACES, or any other large gamuts.

The good thing about Rec.709 is that practically every display device supports it. It can also be losslessly and automatically remapped inside the DCI-P3 color primaries, which allows a single color grading pass in Rec.709 to be viewable both online and in a theater. Granted, the theater viewers would not see the extra colors available in P3.

This is the dilemma regarding output format…

  • If you choose Rec.709 for delivery, then Shotcut is a viable option and you only spend time for a single color grading pass. But you don’t get the larger color gamut of other color spaces.
  • If you choose DCI-P3 for delivery, then Shotcut cannot give you a native P3 working space, which may disqualify it as a video editor. You will also have to grade your material twice: once for a P3 theatrical release, and again to remap the P3 master into the smaller Rec.709 gamut for the streaming release. The big question is whether you have the time and money to pay a colorist to do the same job twice, plus workflow organization and storage space to handle the duplicated assets.
  • If you choose full-gamut Rec.2020 for delivery… maybe reconsider? Only a handful of display technologies can achieve full BT.2020 green (like the $150,000 laser beam coming out of a Christie Griffyn cinema projector). If full-gamut BT.2020 is used, only a small part of your audience would see your work in its full glory, and then a second grading pass would be necessary to be viewable on all other devices (like streaming).
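To put rough numbers on how much bigger the BT.2020 container is, here is a quick area comparison of the three gamut triangles in CIE 1931 xy space (a sketch only; xy area is a crude proxy for perceptual gamut size, but it illustrates the gap):

```python
# Compare gamut triangle areas in CIE 1931 xy chromaticity space,
# using the published primary coordinates from each specification.

REC709 = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]
DCI_P3 = [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)]
BT2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]

def area(tri):
    """Shoelace formula for the area of a triangle given as three (x, y) points."""
    (ax, ay), (bx, by), (cx, cy) = tri
    return abs(ax * (by - cy) + bx * (cy - ay) + cx * (ay - by)) / 2.0

for name, tri in [("Rec.709", REC709), ("DCI-P3", DCI_P3)]:
    print(f"{name} covers {area(tri) / area(BT2020):.0%} of the BT.2020 xy area")
```

In other words, even a P3 master leaves a sizable fraction of the BT.2020 container unused, which is consistent with treating BT.2020 as an interchange space rather than a mastering target.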

Consider these quotes from Charles Poynton and David LeHoty about the design goals of Rec.2020, found at

The BT.2020 developers appreciated that color processing would be necessary in all consumer devices; their goal was to standardize interchange or container primaries, not native device primaries. Nonetheless, some factions today assert that BT.2020 defines a colorspace suitable for program material—in other words, they argue that program material should be allowed to be mastered to the entire BT.2020 gamut. We disagree.

… skipping cool but long technical reasons …

We believe that it is a mistake to create programming that addresses the entire gamut of BT.2020 colorspace. To do so risks compromising color image quality for a wide diversity of display devices, particularly in the cinema, where direct-view LED displays are emergent. We argue that BT.2020 colorspace should be considered an interchange or container space, as its developers intended. We believe that DCI P3 primaries are optimum for production and presentation of programming and for consumer imaging, and we believe that professional (BT.709/BT.1886) and consumer (sRGB) imagery will migrate to P3 primaries.

709, P3, 2020/2100… those are the basic final delivery options. I know you don’t feel bound by conventions, but they do give you the widest audience possible. The limiting factor is the small list of formats supported by consumer display devices. The same limitations will apply to the encoding format, discussed later.

What it probably comes down to is the cost versus quality trade-off. If quality is worth that much to you and you’re willing to pay a colorist to do the job twice, then a P3 master for the theater will look the best. For streaming providers that support BT.2020, the P3 master can be remapped into BT.2020 and look great on those devices too. (Quick caveat: DCI-P3 cannot be directly uploaded to YouTube… it will error and ask you to resubmit the video as a remapped BT.2020 file.) But if time is critical (can’t do two passes) or money for staff and storage is an issue, then Rec.709 may look “good enough” and will immediately work everywhere.

I say all of this with the assumption that you don’t work at Sphere Studios, who built a 580,000 square-foot screen that bubbles over a 17,500-seat theater in Las Vegas to display footage that was shot on a custom camera using a 316-megapixel large format sensor. The insanity is described here:

These guys are the epitome of doing their own thing, and they custom-built every piece to do it. If this actually is your working environment… do you need any consultants? :slight_smile:

Choosing an encoding format

The last piece of the puzzle is encoding formats. As I’m sure you’re aware, encoding video in formats patented by MPEG LA or other patent pools could incur licensing fees if distributed at any scale (including your private streaming service or your theater). As an example of the fees to distribute video encoded in H.264, see page 8 of the MPEG LA patent portfolio briefing:

Since you’re probably aware of this and have already discussed it with your team, I won’t delve too deep into encoding formats unless you request further information. I will simply leave some food for thought in the event that DCI-P3 or HDR or Rec.2020 will be one of your requirements. If so, this limits the number of formats that can signal Rec.2020 and HDR.

The main contenders supporting HDR/WCG would be H.265/HEVC, H.266/VVC, AV1, and MPEG-5 EVC. The last one was merged into the FFmpeg codebase three days ago and will be released with FFmpeg 6.1 as the libxeve encoder. EVC Baseline and AV1 are royalty-free while the other formats are not.

The dilemma here is that H.264 is considered the universal fallback format in streaming encoding ladders, for clients that don’t support H.265 or AV1. But your desire for a fully open-source, royalty-free workflow may rule H.264 out. MPEG-5 EVC Baseline would be the ideal fallback, except it doesn’t have widespread support yet. For now, MPEG-4 Part 2 ASP (better known as DivX/Xvid) may work as a royalty-free universal fallback because its patents have expired.

The situation is similar for audio. FLAC and Opus are great formats, but may not be supported on all devices. AAC is patented and has license fees, but fortunately the patents on MP3 have expired. MP3 has the advantage of being universally supported while also sounding really good if given enough bitrate. For formats that allow it (like DCI DCP), using WAV audio will of course give the best results.

Speaking of DCI DCP for your cinema projector, the format is naturally free because it is JPEG 2000 in MXF using WAV audio. If shipped on a physical hard drive, the partition is ext2. So, nothing proprietary is involved there.

ProRes and DNxHR are technically patented as well, but I’m not sure if that is of any legal consequence to you since you aren’t distributing those files to customers or making revenue on them (I am not a lawyer). If you want to be on the safe side, CineForm is a totally open intermediate format with excellent quality and similar file sizes when using -c:v cfhd -quality film1 during encoding. The format is slower to decode than DNxHR, but this won’t affect you if editing is done on proxies. CineForm is also a suitable format for long-term storage or archiving of your final master.


It would appear that most or all of the workflow you would need is available. If intricate compositing is needed, Shotcut can also be supplemented with Natron and Blender, similar to using After Effects with Premiere. The two big questions are whether Rec.709 as an output format will be sufficient for you, and whether the color tools provide enough precision. Only you can answer that.

Any other questions, I’m happy to help out best I can. Good luck!


This topic was automatically closed after 90 days. New replies are no longer allowed.