Hi! I’m new to Shotcut, but already love what it can do, and will likely be relying on it for a lot of my future projects.
The first video I put together was sourced from a bunch of FLV files, cut together and exported as an MP4. Now, sure, I’m exporting at 1920x1080 29.97fps progressive, but it’s taking almost the full length of the video itself to render out to disk.
I’m running it right now on my MacBook Pro (i7 2.3GHz with an SSD) and it’s still just as slow as the exporting on my home machine (an i5-equivalent AMD). According to Activity Monitor, the qmelt process is somehow at 200% CPU, despite my machine sitting at 70% idle:
Are you using more than one video track with ‘C’ enabled on those additional video tracks? If so, that enables compositing and blending between that track and V1, which slows things down a lot. Filters will also slow things down. In the next version, due August 2, there is a major performance boost when the video on the upper tracks is opaque and fills the frame. In the meantime, if you are not using the compositing/blending features, click ‘C’ to disable compositing. If none of that applies to you, then that is just the way it is. There are these things called “bottlenecks,” and lack of optimization is not a bug.
I have also seen render times take as long as the video itself. This is quite normal for me. As Dan suggested, there are certain operations that are not multithreaded. Alpha compositing is one of those. So if you have a lot of blending and filters, then the rendering will take longer - and it won’t necessarily be able to use 100% of your CPU while rendering.
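To see why blending is so expensive, it helps to spell out what alpha compositing does. Below is a minimal sketch of the standard Porter-Duff “over” operation that a compositor performs for every pixel of every blended frame; the names are illustrative, not Shotcut/MLT internals.

```python
def alpha_over(src, dst, alpha):
    """Blend one source pixel over a destination pixel.

    src, dst: (r, g, b) tuples with channels in 0..255
    alpha: source opacity in 0.0..1.0
    """
    # Standard "over" blend: weight the source by alpha and the
    # destination by whatever opacity remains.
    return tuple(round(s * alpha + d * (1.0 - alpha))
                 for s, d in zip(src, dst))

# Blending 50% red over blue gives purple.
print(alpha_over((255, 0, 0), (0, 0, 255), 0.5))  # (128, 0, 128)

# At 1920x1080 this blend runs about 2 million times per frame,
# roughly 62 million times per second at 29.97 fps.
frame_pixels = 1920 * 1080
print(frame_pixels)  # 2073600
```

That per-pixel cost, multiplied across every frame, is why a timeline heavy on blending and filters can take as long to export as the video runs.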
I’m combining one muted video track with one audio track. No compositing, effects, filters or overlays:
The output format for the video is exactly the same as the input format. In 3 minutes it did about 7%, which works out to roughly 43 minutes to export a 51-minute video.
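The back-of-envelope estimate above (3 minutes for about 7% done) can be written out as a tiny helper. This is a hypothetical convenience function, not anything Shotcut provides:

```python
def estimated_total_minutes(elapsed_min, progress_fraction):
    """Project total export time from partial progress.

    elapsed_min: minutes spent so far
    progress_fraction: completed fraction in (0, 1]
    """
    if not 0 < progress_fraction <= 1:
        raise ValueError("progress_fraction must be in (0, 1]")
    return elapsed_min / progress_fraction

total = estimated_total_minutes(3, 0.07)
print(f"~{total:.0f} minutes total")  # ~43 minutes total
```

Since the reported 7% is itself approximate, the projection is only good to within a few minutes either way.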
@shotcut I understand that there are bottlenecks, and that lack of optimization is not a bug, I just pointed out that this issue exists when dealing with high-res output. If there are other settings or config options I can test on my build to help identify where the bottlenecks are, I’d be happy to help.
@brian It’s good to know that alpha compositing will take a while. I imagined that would be the case anyway, since every frame of the video source would need to be modified.
Yes, there is already a lot of multithreading: video decoding, image processing, and video encoding (depending on the codec). Even the alpha compositing supports frame-threaded parallel processing: see the “Parallel processing” checkbox in Export > Video.
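Conceptually, frame-threaded parallel processing hands independent frames to a pool of workers and collects the results back in display order. The sketch below illustrates that idea in plain Python; it mimics the scheme behind that checkbox, not MLT’s actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def process_frame(frame):
    # Stand-in for per-frame work (filters, compositing, scaling).
    return [value * 2 for value in frame]

# Toy "frames": each is just a small list of sample values.
frames = [[i, i + 1, i + 2] for i in range(8)]

with ThreadPoolExecutor(max_workers=4) as pool:
    # map() preserves frame order even if frames finish out of order,
    # which is essential for video output.
    processed = list(pool.map(process_frame, frames))

print(processed[0])  # [0, 2, 4]
```

The key property is that parallelism helps only when the per-frame work is independent; stages with frame-to-frame dependencies stay serial, which is one reason the export can't always saturate every core.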
I have a MacBook with an i7 and SSD running OS X, and I made a composition similar to yours. I used all of the defaults in Export (same as the YouTube preset). I am still waiting to see how long this 30-minute video will take, but you can see my Activity Monitor shows much higher CPU usage. Even so, based on progress so far, I expect it will take a while - somewhere in the same range as yours. I do not know why CPU usage is low in your particular situation. Besides things like compositing and filters, there are other factors: whether the source is interlaced and the output progressive, image scaling when the project resolution differs from the export resolution, and the video codec and its thread settings.