Please test BETA version 25.12

It should be quicker to keep pressing Ctrl+Z (or Command+Z) to remove the tracks.

Once you’ve disabled Automatically Add Tracks in the settings, it stays disabled. You don’t need to do it again and again.


Yes. Settings > Timeline > Automatically Add Tracks (untick) did the trick. Thank you.

I tested some 10-bit Panasonic V-Log footage brought into Rec.709 with the official Panasonic LUT using 10-bit linear CPU mode. It worked great! The 10-bit linear mode is easily my favorite new feature this year, as it unlocks the entire log workflow.

The only “problem” I faced during brief testing was that the Color Grading filter seems to have a tone slider range that’s optimal for 8-bit. Since 10-bit values go 4x higher, I have to stack 2 to 4 Color Grading filters to get log-white boosted up to 709-white.
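For what it’s worth, here is the rough arithmetic behind the 4x figure (a back-of-the-envelope sketch assuming full-range code values, not anything taken from Shotcut’s code):

```python
# Back-of-the-envelope: full-scale code values at each bit depth (full range assumed)
max_8bit = 2**8 - 1    # 255
max_10bit = 2**10 - 1  # 1023

ratio = max_10bit / max_8bit
print(f"10-bit full scale is about {ratio:.2f}x the 8-bit full scale")  # ~4.01x

# So a tone slider whose useful range was tuned around 0..255 covers only about
# a quarter of the 10-bit range, which is roughly why I ended up stacking filters.
```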


It is not so much about bit depth as about the color transfer. Some time ago we had to adjust this filter because it was too sensitive to be usable. Log footage is so shallow that Color Grading is not ideal for converting log to gamma. Why not use a LUT for that and Color Grading to tweak? Maybe in a future version we can add a range toggle.

True, Color Grading is not the ideal tool for the job. This was a quick test to see what could be done if I didn’t have a LUT to get me in the ballpark. And to see how much banding might happen, which was very little thankfully.

For the sake of documentation, how is the 10-bit processing represented internally? Floating point or integer? I’m wondering about the ramifications for filter order and such when trying to avoid banding or crushed shadows after too much stretching. Apologies if this was already mentioned somewhere. I didn’t find it.

16 bits per color component: floating point on GPU and integer on CPU (rgba64).
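Roughly speaking, a 10-bit code value maps into those representations something like this (an illustrative sketch only, not the actual Shotcut/MLT code; the exact scaling used internally may differ):

```python
def to_rgba64_component(v10: int) -> int:
    """Widen a 10-bit code value (0..1023) to a 16-bit integer component (0..65535).

    Bit replication (shift left by 6, fill the low bits from the top of the value)
    is a common way to do this so that full-scale stays full-scale; whether the
    internal conversion does exactly this is an assumption.
    """
    return (v10 << 6) | (v10 >> 4)

def to_half_float_component(v10: int) -> float:
    """Normalize a 10-bit code value to 0.0..1.0 for a 16-bit float GPU pipeline."""
    return v10 / 1023.0

print(to_rgba64_component(1023))      # 65535
print(to_half_float_component(1023))  # 1.0
```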


Thanks again for your effort @shotcut. A couple of questions:

  1. As you concentrated on 10-bit support, would it be possible to allow a 10-bit mode in the video scopes?
  2. Currently SC rendering (10-bit CPU/GPU mode) utilizes ~40% of the CPU and ~15% of the GPU. Resolve, for instance, grabs about the same amount of CPU cycles, or even a bit less, but its GPU works at nearly 100% load, which results in 3x faster rendering. Would it be possible to achieve better GPU utilization in SC?

Not in this version or the next. Hopefully sometime in 2026.

> Resolve, for instance, grabs about the same amount of CPU cycles, or even a bit less, but its GPU works at nearly 100% load

Shotcut will never be as good as Resolve, and Resolve will never be open source.

A week or two after the release of 25.12 I will make a beta of 26.01 that adds hardware decoding for preview scaling only, not export, for reasons explained in the FAQ as well as a technical issue I ran into. Coupling it with preview scaling (or sources <= HD) constrains the memory transfers, and it benefits both CPU and GPU processing modes. Previous ideas (failed attempts) to integrate hardware decoding were exclusive to GPU mode. IMO this speed improvement matters more for preview, to make linear 10-bit more usable.
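Roughly, the gating is something like this (simplified pseudo-logic to illustrate the idea, not the actual code):

```python
def use_hardware_decoding(preview_scaling_enabled: bool,
                          source_height: int,
                          exporting: bool) -> bool:
    """Simplified illustration of the planned 26.01 behavior, not the real code.

    Hardware decoding is used only for preview, and only when preview scaling is
    on or the source is HD or smaller, which keeps the GPU<->CPU memory transfers
    small enough that both CPU and GPU processing modes benefit.
    """
    if exporting:
        return False
    return preview_scaling_enabled or source_height <= 1080
```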


I uploaded a new beta version 25.12.19 of the AppImage. If you use Linux with Intel or AMD graphics, I would appreciate a brief test of the hardware encoder. It should now be more compatible with a variety of distros and versions, but you might need to install something libva-related from your distro if you have not already.

Also, HEVC is known to be problematic; it seems that broke with the FFmpeg 8 upgrade in version 25.10 (no video or garbage video). You can test H.264 (not 10-bit; hardware encoders do not generally support 10-bit H.264) and AV1 if your hardware supports it (one of my systems does). This test is more about broader compatibility in general. It now works for me on Ubuntu 24.04-based distros, Arch, and Fedora, whereas previously it did not.
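If the export fails, a quick way to sanity-check that VAAPI itself works outside Shotcut (the render node path below is a typical default and may differ on your system):

```sh
# List the libva driver and the encode profiles/entrypoints it exposes (libva-utils)
vainfo

# Standalone FFmpeg VAAPI H.264 encode test; adjust the render node path if needed
ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 \
       -vf 'format=nv12,hwupload' -c:v h264_vaapi -b:v 5M test_vaapi.mp4
```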

I’m very interested in this new feature. My laptop has integrated Intel graphics with hardware decoding support. But I didn’t notice any difference between the current release version and the AppImage version from your link. Could you please tell me if I need to enable this setting separately in Shotcut itself? Or does Shotcut enable hardware decoding automatically? And how can I tell if it’s working? In the “System Monitor,” I see the same CPU load in both the new beta version and the old release version.

Update: perhaps you were referring to the hardware encoder used during video export? No problems there either; exporting was successful using h264_vaapi and hevc_vaapi.


Thank you, glad to have a test with Intel graphics so I do not need to swap gfx cards. What is your distro and version?

Linux Mint 22.2 Cinnamon.

Intel Core i9-13900H, Intel Iris Xe graphics

Hello, I tested the new beta version. Is the GPU active? When I play video in the timeline, only the CPU works, at about 95% load, but the GPU stays around 4%.

Manjaro Linux with an AMD CPU and an RX 7600 XT GPU.

There is a problem with the Gradient Map filter in v25.12.2.
The filter works correctly if the clip it is applied to is on track V1.

But:
  1. Put a clip on V1 (no filter).
  2. Put a clip on V2 and add a Gradient Map filter.
  3. The clip on V2 disappears.

On v25.10.31 the filter works correctly.

Is this specific to the situation in your post (Intel or AMD hardware encoders on Linux) or a broader problem? I use NVENC on Windows and haven’t had any problems with HEVC in 25.10.31; does that mean my configuration shouldn’t run into this problem in 25.12 and beyond?

Specific to this
