I tried out the GPU effects and when I export the file, Shotcut only uses about 30% of my CPU resources and the export takes much longer. When rendering without the GPU effects, all 8 cores are close to 100%. I understand the GPU is not used during export, and accordingly Task Manager only shows around 8% GPU utilization by Shotcut during export.
Now, I don’t really need the GPU effects currently, so I will disable them for future projects; I’m just curious what causes this and whether it’s working as intended.
EDIT: I’m using the latest version, 18.10.08, on Windows 10, and the codec is libx264 in both cases.
You haven’t provided much real information. GPU effects are available, but not really supported here since the feature is experimental. Also, any project .mlt file created with GPU effects will stay tied to them and will not open without GPU effects turned on.
It may help the devs and other users if you could provide the full specifications of your computer and your GPU, details of your source files, and even upload the .mlt file here.
Know that not many people on this forum use GPU effects, so you may not get many responses or much feedback.
Hm, I understand. I guess the best plan is to recreate the project with CPU effects. I can upload the .mlt file if anyone is interested; let me know.
The whole thing actually got weirder: the exported file was audio-only with a size of only 91 MB (after 4.5 hours of exporting at low CPU utilization), but in the same folder Shotcut created a filename.mp4_2pass.log.mbtree file that is 1.23 GB. I had dual pass turned on, which I had not used before, so maybe the problem is related to that.
I’ll have to test with a shorter clip and just CPU effects to see if there is a problem with dual pass on my machine.
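For context on what that .mbtree file is: dual-pass x264 encoding writes a stats log (including the .mbtree file) during the first pass and reads it back during the second. A minimal sketch of the equivalent ffmpeg invocation — the filenames and the 4 Mb/s target are just placeholder assumptions of mine, not taken from the project — looks like:

```shell
# First pass: analysis only, no audio, output discarded;
# writes ffmpeg2pass-0.log and ffmpeg2pass-0.log.mbtree
ffmpeg -y -i input.mp4 -c:v libx264 -b:v 4000k -pass 1 -an -f null /dev/null

# Second pass: reads the stats log and produces the actual file
ffmpeg -i input.mp4 -c:v libx264 -b:v 4000k -pass 2 -c:a aac output.mp4
```

The .mbtree stats file grows with the number of frames, so a large one is normal for long clips; an audio-only result, though, suggests the second (video) pass never completed.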
Specs of my computer: AMD FX-8350, 16 GB DDR3-RAM, GTX 960 with 4 GB VRAM
Source files: H.264-encoded MP4 files I converted from variable-frame-rate game footage with HandBrake (so the files are HandBrake output).
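For anyone wanting to reproduce that conversion step without HandBrake, a rough ffmpeg equivalent — the 60 fps target, CRF value, and filenames are my own assumptions, not details from the original footage — might be:

```shell
# Convert variable-frame-rate game capture to constant 60 fps H.264;
# -vsync cfr duplicates or drops frames to hold the fixed rate
ffmpeg -i capture_vfr.mp4 -vsync cfr -r 60 -c:v libx264 -crf 18 -c:a copy capture_cfr.mp4
```

(Newer ffmpeg builds prefer `-fps_mode cfr` over `-vsync cfr`, but both work.)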
This is not a bug; that’s just the way it is. Lack of absolute optimization is not a bug. To explain the situation: GPU Effects is for video/image processing, not for decoding or encoding. Sending video to the GPU and reading it back out is much slower than the CPU-RAM bus speed. It also depends on what CPU effects you combined on top of the GPU effects: there is much less CPU multi-threading possible in the engine for image processing when GPU effects are enabled. In any case, GPU Effects is clearly marked experimental, accompanied by cautionary dialogs, and unsupported.
The next version will make this much more obvious, first of all by hiding the option and requiring you to manually edit the config file/registry to turn it on. And once it is on, you must acknowledge a warning dialog every time the app starts.
Thank you for the explanation!
I quite enjoy working with your program btw.
Now that I’ve discovered how to use proxy files, I can even edit without having to listen to my own distorted slo-mo voice.
This topic was automatically closed 182 days after the last reply. New replies are no longer allowed.