What is the best GPU for Shotcut acceleration?

I’ve been using Shotcut with GPU acceleration on my 1080 Ti, and it makes a huge difference. My understanding is that this uses the NVENC unit in the GPU and not something like CUDA. Is that correct?

If so, is there any advantage to using a Quadro GPU instead of the 1080 Ti, since the Quadro GPUs have unlimited NVENC streams? Will Shotcut be able to take advantage of that?

And more generally, what is the best GPU to accelerate Shotcut?

For Export > hardware encoder, yes. If you manually enable the hidden GPU processing configuration setting (unstable and not recommended at this time), then it uses OpenGL.
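To illustrate the distinction (this is a sketch of what the export does conceptually, not Shotcut's exact internal pipeline; the file names are placeholders), Shotcut's hardware encoder option corresponds to choosing an NVENC encoder instead of the default software encoder, much like these FFmpeg invocations:

```shell
# Default software encode: libx264 runs entirely on the CPU.
ffmpeg -i input.mp4 -c:v libx264 -preset fast -crf 20 -c:a copy out_x264.mp4

# Hardware encode: h264_nvenc runs on the GPU's dedicated NVENC block.
# -cq sets a constant-quality target, roughly analogous to CRF.
ffmpeg -i input.mp4 -c:v h264_nvenc -cq 20 -c:a copy out_nvenc.mp4
```

Neither command uses CUDA compute cores for the actual encoding; NVENC is a separate fixed-function unit on the die.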

is there any advantage in using a Quadro GPU instead of the 1080TI

Not in Shotcut.

what is the best GPU to accelerate shotcut?

Each generation of GPU improves quality substantially, and I think NVIDIA and Intel are considered to give better quality than AMD. Only the latest generation is considered to be approaching x264 quality at its fast preset. Does anyone have a link to a good, recent comparison?

Since HEVC is so slow to encode in software, some people only consider using a hardware encoder for that.
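The same software-vs-hardware trade-off applies to HEVC. As a rough sketch (file names are placeholders, and availability of `hevc_nvenc` depends on your GPU and FFmpeg build), the two routes look like:

```shell
# Software HEVC: libx265 gives the best quality per bitrate but is very slow.
ffmpeg -i input.mp4 -c:v libx265 -preset medium -crf 24 -c:a copy out_x265.mp4

# Hardware HEVC: hevc_nvenc is many times faster at some cost in quality.
ffmpeg -i input.mp4 -c:v hevc_nvenc -cq 24 -c:a copy out_nvenc_hevc.mp4
```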

Here is a promising article I have not had time to read yet:

Unfortunately there is no GPU-supported export acceleration, only GPU-supported viewing. (That helps to play back complicated filter/transition/track edits without lagging.) Sorry to say this will not be in the works for a while, if ever. Despite that, we hope you’ll still give Shotcut a try.

Uh… I’ve been using Shotcut with GPU/NVENC-accelerated export for the last several months. What are you talking about?

I only use Shotcut to cut videos, so I’m not sure whether this is relevant to you. I have parallel processing disabled and I use HEVC. I just compared my GTX 1080 (4790 CPU) and my RTX 2080 (6700K CPU), encoding the same 32 MB, 5-minute MP4 video file. The RTX 2080 (31 seconds) was 6 seconds faster than the GTX 1080 (37 seconds). Not much difference. Hope this helps you out.

Not only are you incorrect, but you write like you represent the Shotcut project, which you do not.

The RTX cards are the new Turing generation and will give better quality. Hardware encoding needs these quality improvements to compete more with the software encoders.

I’ve also heard in the past that Intel’s QuickSync is comparatively very fast at encoding, but the quality isn’t nearly as good.

As a beginner at video editing, it seems like a strange situation that quality differs on different hardware. I guess that means they are using totally different encoding implementations?

Only the latest generation are considered to be approaching x264 quality on its fast preset.

So you can’t encode x264 with video cards at all, I take it? I would have thought that the target encoding format would be the important thing, and yes, video cards should crunch the data more quickly (more parallelism) if the software was built for GPU processing.
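One point of confusion worth unpacking: x264 is not a format but a specific software encoder for the H.264 format, and it runs only on the CPU. The GPU's NVENC block is a different encoder that also produces standard H.264 bitstreams, just with its own (less sophisticated) internal algorithms, which is why quality differs by hardware generation. You can see that they are distinct encoders for the same format by listing what your FFmpeg build supports (the exact list depends on your build):

```shell
# List all H.264 encoders known to this FFmpeg build.
# Typical entries include libx264 (CPU software) and h264_nvenc (NVIDIA hardware).
ffmpeg -hide_banner -encoders | grep -i 264
```

Any H.264 decoder can play output from either encoder; only the speed and the quality per bitrate differ.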

This topic was automatically closed after 90 days. New replies are no longer allowed.