Sorry for the delay. I have Panasonic cameras, not Sony, so it took me a little time to dig up some sample footage and relevant information for testing.
In this post, I will cover:
- An example project that uses an official Sony Rec.709 conversion LUT to pre-grade footage for use in Shotcut
- An example project that attempts to grade S-Log3 footage directly in Shotcut
- Choosing a color space
- Choosing an encoding format
It's worth noting right away that Shotcut's largest color gamut is Rec.709. Currently, Shotcut will not master in DCI-P3, Rec.2020, ACES, or other large gamut spaces. All of the following example projects are built under the assumption that Rec.709 output will be sufficient. (More on this at the end.)
Using a LUT to pre-grade footage
First, we need sample footage. I got it from this YouTube video:
https://www.youtube.com/watch?v=nMzeSLf4gHo
In the video description is a link to a Google Drive folder with MP4 files. I chose to run tests on a file called A1_Courtney&Spencer_082121__233.MP4
because the scene has a wide dynamic range, the footage is properly exposed, and it features some skin tones.
The format of the footage is H.264 High 4:2:2 Level 5.2 (XAVC) 3840x2160 59.94fps 10-bit S-Log3 signaled as full range. FFmpeg does not detect any values specified for matrix, primaries, or OETF. Also, neither the footage nor the video owner stated whether the gamut is S-Gamut3 or S-Gamut3.Cine, but I am guessing from the scopes that it is S-Gamut3.Cine. (The Cine gamut is also generally recommended over full S-Gamut3 unless there is a really good reason and a really good colorist involved.) The source camera was a Sony A1.
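If you want to run the same check on your own files, ffprobe (bundled with FFmpeg) can dump the relevant stream fields. A small helper, assuming ffprobe is on your PATH:

```shell
#!/bin/sh
# Print the color metadata FFmpeg detects for the first video stream.
# Fields come back as "unknown" when the file does not signal them.
probe_color() {
    ffprobe -v error -select_streams v:0 \
        -show_entries stream=pix_fmt,color_range,color_space,color_primaries,color_transfer \
        -of default=noprint_wrappers=1 "$1"
}

# Usage (with a real file in the current folder):
# probe_color "A1_Courtney&Spencer_082121__233.MP4"
```

On this clip, only pix_fmt and color_range come back with values, which is how I know nothing was signaled for matrix, primaries, or transfer.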
The next step is to get an official S-Log3 to Rec.709 conversion LUT from Sony. This link provides LUTs for both S-Gamut3 and S-Gamut3.Cine:
https://support.d-imaging.sony.co.jp/support/ilc/movie/en/grading/02.html
These LUTs are for s709 output, which is Sony's attempt at giving you a lower contrast image that preserves more highlights for grading than a straight 709 Look Profile would. There are also LUTs included to do an S-Log3 to DCI-P3/D65 conversion should you decide to go that direction. Note that these LUTs are intended for the A1 and A7S III. For the Venice and FX/FS-series cinema cameras, or if you want the 709 Look Profiles or Cine+ profiles, use the LUTs mentioned by @st599 at this link:
https://pro.sony/en_GB/support-resources/software/00263050
And for completeness, the Panasonic crowd can get a V-Log to Rec.709 LUT from here:
https://na.panasonic.com/us/resource-center/v-log-v-709-3d-lut
The next step is getting FFmpeg binaries. I typically get the nightly auto-build from here:
https://github.com/BtbN/FFmpeg-Builds/releases
The next step is crafting an FFmpeg command that applies the S-Gamut3.Cine version of the LUT and saves the result as a DNxHR HQX file (the HQ variant is only 8-bit). Put the source MP4 video and the LUT .cube file in the same folder and run this command:
ffmpeg \
-i "A1_Courtney&Spencer_082121__233.MP4" \
-filter:v lut3d=file='SL3SG3Ctos709.cube' \
-vsync cfr \
-pix_fmt yuv422p10le \
-colorspace bt709 \
-color_primaries bt709 \
-color_trc bt709 \
-color_range tv \
-c:v dnxhd \
-profile:v dnxhr_hqx \
-c:a copy \
-f mov \
-movflags +write_colr+write_gama+faststart \
-y "TranscodedWithCineLUTto709.mov"
This can obviously be scripted as part of your ingest workflow. Since FFmpeg can natively read XAVC (which is just H.264 at Level 5.2), no other tools are required.
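Here is one way that ingest script might look; the LUT file name and output folder are just placeholders for whatever your project uses:

```shell
#!/bin/sh
# Pre-grade every S-Log3 MP4 in the current folder with the Sony LUT.
# Sketch only: adjust the LUT name and output folder to your project.
LUT="SL3SG3Ctos709.cube"
OUTDIR="pregraded"
mkdir -p "$OUTDIR"

for f in *.MP4; do
    [ -e "$f" ] || continue     # glob matched nothing; skip
    out="$OUTDIR/${f%.MP4}_709.mov"
    [ -e "$out" ] && continue   # already transcoded; skip
    ffmpeg -i "$f" \
        -filter:v lut3d=file="$LUT" \
        -vsync cfr \
        -pix_fmt yuv422p10le \
        -colorspace bt709 -color_primaries bt709 -color_trc bt709 \
        -color_range tv \
        -c:v dnxhd -profile:v dnxhr_hqx \
        -c:a copy \
        -f mov -movflags +write_colr+write_gama+faststart \
        -y "$out"
done
```

The skip-if-exists check makes the script safe to re-run as new cards come in.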
Sony S-Log3 footage is an "extended range" format just like Panasonic V-Log. Code values do not go all the way down to zero, but they do go all the way up to 1023. This imbalance defies the usual definitions of full range versus limited range video, so it is called "extended range" instead. To retain the values up to 1023, the input file from the camera is signaled as full range even though that isn't exactly what the code values are doing. This is important to know because the official Sony LUTs are designed to create an output image in Rec.709 limited range. As in, the compression to limited range is baked into the LUT itself; it isn't an additional step you need to do yourself. The LUT also expects and compensates for S-Log3 placing reflectance IRE 0% at code value 95 rather than 64 or zero.
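To put rough numbers on that: in a 10-bit limited-range file, legal video spans codes 64 to 940, while this S-Log3 file uses roughly 95 up to 1023. A quick awk check shows where those codes land as a percentage of full scale:

```shell
#!/bin/sh
# Where key 10-bit code values sit as a fraction of full range (0-1023).
pct() { awk -v c="$1" 'BEGIN { printf "%.1f", c / 1023 * 100 }'; }

echo "limited-range black (64):  $(pct 64)%"    # ~6.3%
echo "limited-range white (940): $(pct 940)%"   # ~91.9%
echo "S-Log3 IRE 0% (95):        $(pct 95)%"    # ~9.3%
```

So S-Log3 black sits well above limited-range black, while white goes all the way to the top, which is exactly the imbalance that makes it neither properly full range nor limited range.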
At this point, the transcoded video can be brought into Shotcut and edited as usual. Here is a frame exported from Shotcut where the center column has the LUT pre-applied (no additional filters in Shotcut), and the outer edges are the original S-Log3 footage:
That's a really good starting point. Since this Rec.709 footage is still 10-bit, it retains a great deal of grading flexibility, provided the GPU filters in Shotcut are used. Using a CPU filter will cause a temporary conversion to 8-bit for processing, which could degrade the image quality.
Grading S-Log3 footage directly in Shotcut
Here is my meager 2-minute attempt at using Shotcut color filters to directly grade the S-Log3 file:
The center column is my grade, and the outer edges are the Sony S-Gamut3.Cine LUT. The only tools I used were Color Grading (the lift/gamma/gain "brightness" controls), Saturation, and White Balance. The hay field looks pretty similar to the LUT, but the sky varies by a lot. I can switch those around by pushing White Balance the other direction, but I can't get both to match the LUT at the same time. Maybe I could if I adjusted the color of the highlights. Anyhow, perhaps I guessed incorrectly that the gamut of the source video was S-Gamut3.Cine. Or, more likely, it's a side effect of the red primaries being off-axis with each other between Rec.709 and S-Gamut3.Cine.
The conclusion though is that yes, Shotcut can be used to edit 10-bit S-Log3 footage. Whether it offers enough color manipulation tools to meet your grading requirements is a question you would have to explore for yourself from here. In particular, a Curves filter with hue-vs-hue and luma-vs-saturation is currently absent in Shotcut.
Choosing a color space
I'm sure your studio has already debated the output format topic to death, but I'm going to rehash the main points here for everyone else who's following along.
Shotcut supports Rec.709 as a working space and output format. It does not support DCI-P3, Rec.2020, ACES, or any other large gamuts.
The good thing about Rec.709 is that practically every display device supports it. It can also be losslessly and automatically remapped inside the DCI-P3 color primaries, which allows a single color grading pass in Rec.709 to be viewable both online and in a theater. Granted, the theater viewers would not see the extra colors available in P3.
This is the dilemma regarding output format…
- If you choose Rec.709 for delivery, then Shotcut is a viable option and you only spend time on a single color grading pass. But you don't get the larger color gamut of other color spaces.
- If you choose DCI-P3 for delivery, then Shotcut cannot give you a native P3 working space, which may disqualify it as a video editor. You will also have to grade your material twice: once for a P3 theatrical release, and again to remap the P3 master into the smaller Rec.709 gamut for the streaming release. The big question is whether you have the time and money to pay a colorist to do the same job twice, plus workflow organization and storage space to handle the duplicated assets.
- If you choose full-gamut Rec.2020 for delivery… maybe reconsider? Only a handful of display technologies can achieve full BT.2020 green (like the $150,000 laser beam coming out of a Christie Griffyn cinema projector). If full-gamut BT.2020 is used, only a small part of your audience would see your work in its full glory, and then a second grading pass would be necessary to be viewable on all other devices (like streaming).
Consider these quotes from Charles Poynton and David LeHoty about the design goals of Rec.2020, found at https://sid.onlinelibrary.wiley.com/doi/full/10.1002/msid.1146:
The BT.2020 developers appreciated that color processing would be necessary in all consumer devices; their goal was to standardize interchange or container primaries, not native device primaries. Nonetheless, some factions today assert that BT.2020 defines a colorspace suitable for program material; in other words, they argue that program material should be allowed to be mastered to the entire BT.2020 gamut. We disagree.
⌠skipping cool but long technical reasons âŚ
We believe that it is a mistake to create programming that addresses the entire gamut of BT.2020 colorspace. To do so risks compromising color image quality for a wide diversity of display devices, particularly in the cinema, where direct-view LED displays are emergent. We argue that BT.2020 colorspace should be considered an interchange or container space, as its developers intended. We believe that DCI P3 primaries are optimum for production and presentation of programming and for consumer imaging, and we believe that professional (BT.709/BT.1886) and consumer (sRGB) imagery will migrate to P3 primaries.
709, P3, 2020/2100… those are the basic final delivery options. I know you don't feel bound by conventions, but they do give you the widest audience possible. The limiting factor is the small list of formats supported by consumer display devices. The same limitations will apply to the encoding format, discussed later.
What it probably comes down to is the cost versus quality trade-off. If quality is worth that much to you and you're willing to pay a colorist to do the job twice, then a P3 master for the theater will look the best. For streaming providers that support BT.2020, the P3 master can be remapped into BT.2020 and look great on those devices too. (Quick caveat: DCI-P3 cannot be directly uploaded to YouTube… it will error and ask you to resubmit the video as a remapped BT.2020 file.) But if time is critical (can't do two passes) or money for staff and storage is an issue, then Rec.709 may look "good enough" and will immediately work everywhere.
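If you do go the P3-master route, the BT.2020 remap can be sketched with FFmpeg's zscale filter (requires a build with libzimg). File names, the SDR gamma assumption, and the HEVC settings below are placeholders, not a vetted recipe:

```shell
#!/bin/sh
# Hypothetical sketch: remap an SDR P3-D65 master into a BT.2020 container.
# zscale converts the primaries; the -color_* flags tag the output to match.
remap_p3_to_2020() {
    ffmpeg -i "$1" \
        -vf "zscale=matrixin=709:primariesin=smpte432:transferin=709:matrix=2020_ncl:primaries=2020:transfer=709" \
        -pix_fmt yuv420p10le \
        -c:v libx265 -crf 16 \
        -colorspace bt2020nc -color_primaries bt2020 -color_trc bt709 \
        -c:a copy \
        -y "$2"
}

# remap_p3_to_2020 "master_p3d65.mov" "master_bt2020.mp4"
```

An HDR master would instead use a PQ or HLG transfer; the point here is only that the container remap itself is scriptable.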
I say all of this with the assumption that you don't work at Sphere Studios, who built a 580,000 square-foot screen that bubbles over a 17,500-seat theater in Las Vegas to display footage that was shot on a custom camera using a 316-megapixel large format sensor. The insanity is described at https://petapixel.com/2023/06/12/sphere-studios-big-sky-cinema-camera-features-an-insane-18k-sensor/. These guys are the epitome of doing their own thing, and they custom-built every piece to do it. If this actually is your working environment… do you need any consultants?
Choosing an encoding format
The last piece of the puzzle is encoding formats. As I'm sure you're aware, encoding video in formats patented by MPEG LA or other patent pools could incur licensing fees if distributed at any scale (including your private streaming service or your theater). As an example of the fees to distribute video encoded in H.264, see page 8 of the MPEG LA patent portfolio briefing:
https://www.mpegla.com/wp-content/uploads/avcweb.pdf
Since you're probably aware of this and have already discussed it with your team, I won't delve too deep into encoding formats unless you request further information. I will simply leave some food for thought in the event that DCI-P3 or HDR or Rec.2020 will be one of your requirements. If so, this narrows the field to formats that can signal Rec.2020 and HDR.
The main contenders supporting HDR/WCG would be H.265/HEVC, H.266/VVC, AV1, and MPEG-5 EVC. The last one was merged into the FFmpeg codebase three days ago and will be released with FFmpeg 6.1 as the libxeve encoder. EVC Baseline and AV1 are royalty-free, while the other formats are not.
The dilemma here is that H.264 is considered the universal fallback format in streaming encoding ladders for clients that don't support H.265 or AV1. But your desire for a fully open-source, royalty-free workflow may rule H.264 out. MPEG-5 EVC Baseline would be the ideal fallback, except it doesn't have widespread support yet. For now, MPEG-4 Part 2 ASP (the DivX/Xvid format, a close relative of H.263) may work as a royalty-free universal fallback because its patents have expired.
The situation is similar for audio. FLAC and Opus are great formats, but may not be supported on all devices. AAC is patented and has license fees, but fortunately the patents on MP3 have expired. MP3 has the advantage of being universally supported while also sounding really good if given enough bitrate. For formats that allow it (like DCI DCP), using WAV audio will of course give the best results.
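For instance, encoding the final mix to both fallbacks is a one-liner each (file names are placeholders; libmp3lame and libopus are the usual FFmpeg encoders for these formats):

```shell
#!/bin/sh
# Sketch: encode a master WAV to MP3 (universal fallback) and Opus
# (better quality per bit where supported). Names are placeholders.
encode_audio_fallbacks() {
    ffmpeg -i "$1" -c:a libmp3lame -b:a 320k -y "${1%.wav}.mp3"
    ffmpeg -i "$1" -c:a libopus -b:a 160k -y "${1%.wav}.opus"
}

# encode_audio_fallbacks "final_mix.wav"
```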
Speaking of DCI DCP for your cinema projector, the format is naturally free because it is JPEG 2000 in MXF using WAV audio. If shipped on a physical hard drive, the partition is ext2. So, nothing proprietary is involved there.
ProRes and DNxHR are technically patented as well, but I'm not sure if that is of any legal consequence to you since you aren't distributing those files to customers or making revenue on them (I am not a lawyer). If you want to be on the safe side, CineForm is a totally open intermediate format with excellent quality and similar file sizes when using -c:v cfhd -quality film1 during encoding. The format is slower to decode than DNxHR, but this won't affect you if editing is done on proxies. CineForm is also a suitable format for long-term storage or archiving of your final master.
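A full command around those flags might look like this (input and output names are placeholders; audio is kept as 24-bit PCM since CineForm files are usually mezzanine or archive copies):

```shell
#!/bin/sh
# Sketch: wrap a graded master in CineForm for archive, per the flags above.
to_cineform() {
    ffmpeg -i "$1" \
        -c:v cfhd -quality film1 \
        -c:a pcm_s24le \
        -y "$2"
}

# to_cineform "TranscodedWithCineLUTto709.mov" "archive_cfhd.mov"
```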
Conclusion
It would appear that most or all of the workflow you would need is available. If intricate compositing is needed, Shotcut can also be supplemented with Natron and Blender, similar to using After Effects with Premiere. The two big questions are whether Rec.709 will be sufficient as an output format, and whether the color tools provide enough precision for your grade. Only you can answer that.
Any other questions, I'm happy to help out as best I can. Good luck!