Does anybody here have experience with the difference in export time (say, to MP4) between a 64-core CPU (Threadripper, which seems to be the highest-performance option for us) and the highest-performance graphics card available?
Assume the best settings you can get in Shotcut for each case.
In other words, we're weighing a 16-64 core CPU paradise against a graphics card (working alongside the CPU cores anyway) with the thousands of cores that are usually proudly advertised, insofar as Shotcut can actually use them (with full respect to the Shotcut developers).
Just your experience, in time (seconds), and anything else you can share, please.
Depends on the output codec.
For the sake of round numbers, let’s say that GPU encoding is real-time.
Let’s also say that libx264 on Slow preset can achieve real-time encoding using 64 cores. In this scenario, to me at least, it would make sense to use libx264 rather than GPU because the encoding quality will be higher for the same amount of encoding time.
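To make that tradeoff concrete, here is a minimal sketch of the throughput estimate. All numbers (per-core fps, scaling efficiency) are illustrative assumptions, not measured benchmarks, and `encode_fps` is just a hypothetical helper:

```python
# Rough sketch: can a software encoder keep up with real-time?
# Per-core fps and scaling efficiency below are assumptions, not benchmarks.

def encode_fps(per_core_fps: float, cores: int, efficiency: float = 0.7) -> float:
    """Estimate encoder throughput assuming imperfect multi-core scaling."""
    return per_core_fps * cores * efficiency

source_fps = 30.0             # timeline frame rate
x264_slow_per_core = 0.7      # assumed fps per core for libx264 on Slow preset

fps_64 = encode_fps(x264_slow_per_core, 64)
verdict = "real-time" if fps_64 >= source_fps else "slower than real-time"
print(f"libx264 estimate on 64 cores: {fps_64:.1f} fps ({verdict})")
```

Plugging in your own measured per-core throughput is the only way to know where your hardware lands relative to real-time.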
However, libx265 is 6-10 times slower than libx264 to encode, so it wouldn’t be real-time even with 64 cores in this example. If real-time HEVC is the target, then GPU encoding might still make sense. But then a lot of money would have been wasted on unused CPU if the GPU does the encoding work.
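The HEVC arithmetic works out like this (taking the 6-10x slowdown from above and assuming libx264 was exactly real-time at 30 fps):

```python
# If libx264 is just barely real-time on 64 cores, libx265 at 6-10x the
# encoding cost lands well below real-time. Illustrative arithmetic only.

x264_fps = 30.0  # assume libx264 Slow hits exactly real-time
for slowdown in (6, 10):
    x265_fps = x264_fps / slowdown
    print(f"libx265 at {slowdown}x slower: ~{x265_fps:.0f} fps "
          f"(export takes ~{slowdown}x the clip duration)")
```

So even the optimistic end of the range means a one-hour timeline takes roughly six hours to export in software HEVC.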
Note that although 64 cores are available, many filters cannot be parallelized beyond 8 threads due to pipeline dependencies in the math. A few threads will manage UI and preview and decoding, but overall… around 32 to 48 cores will probably sit around not doing much unless a software encoder like libx264 takes advantage of them. If GPU is used for encoding, then many dollars would be wasted on those extra 32 cores. Generally speaking, it’s much better to have 8 to 24 superfast cores (~4.2GHz) rather than 24 to 64 half-speed cores (~2.8GHz). Those serial pipeline math dependencies will finish much faster on the faster cores.
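The fast-cores-beat-many-cores point can be illustrated with an Amdahl's-law-style model, assuming a filter chain whose parallel portion caps at 8 threads. The base time, parallel fraction, and core counts below are hypothetical, chosen only to show the shape of the effect:

```python
# Amdahl-style sketch: per-frame filter time with a serial fraction and a
# parallel fraction capped at 8 threads. All numbers are hypothetical.

def frame_time(base_ms: float, clock_ghz: float, cores: int,
               parallel_fraction: float = 0.8, thread_cap: int = 8) -> float:
    """Per-frame time scales inversely with clock speed; the parallel part
    splits across at most `thread_cap` threads regardless of core count."""
    t = base_ms / clock_ghz                      # faster clock shortens everything
    threads = min(cores, thread_cap)             # filters can't use more than this
    return t * (1 - parallel_fraction) + t * parallel_fraction / threads

t_fast = frame_time(base_ms=100.0, clock_ghz=4.2, cores=16)
t_many = frame_time(base_ms=100.0, clock_ghz=2.8, cores=64)
print(f"16 cores @ 4.2 GHz: {t_fast:.1f} ms/frame")
print(f"64 cores @ 2.8 GHz: {t_many:.1f} ms/frame")
```

Under these assumptions the 16 fast cores finish each frame sooner than the 64 slow ones, because past the 8-thread cap the extra cores contribute nothing while the lower clock slows the serial dependencies.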
Also, having the highest performance graphics card is not necessary. All that’s being used on it is the encoding engine. That means an RTX 2060 for under $500 will look just as good and be just as fast as a higher end 20xx, because those thousands of CUDA cores are doing nothing the entire time when it comes to Shotcut. The encoding engine is dedicated circuitry, totally separate from CUDA cores, and Shotcut filters do not currently use CUDA either.