12th gen+ Intel issues

Shotcut uses only the baby cores on new Intel CPUs to export video, resulting in horrible performance.

Any way to fix this, other than disabling the baby cores in the BIOS every time you want to edit a video?

By “baby cores”, I assume you are referring to e-cores (efficiency cores) rather than p-cores (performance cores).

It seems there is some relevant information here:

I suggest opening your Task Manager, right-clicking the melt process during the export, and trying different priorities and affinities.

Also, in your Shotcut settings, make sure your job priority is not set to “Low”.
[screenshot: Settings > Job Priority]
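
If you want to script that instead of re-doing it in Task Manager for each export, something like this could work. This is a minimal sketch, assuming the Python psutil package on Windows; treating logical CPUs 0-15 as the P-core threads is an assumption, so check your own CPU's layout first.

```python
# Sketch: raise the priority of Shotcut's export process (melt) and pin
# it to the P-cores. Requires psutil (pip install psutil).
import psutil

# Assumption: the P-core hyperthreads are logical CPUs 0-15.
# The actual numbering varies by CPU model -- verify in Task Manager.
P_CORE_CPUS = list(range(16))

for proc in psutil.process_iter(["name"]):
    name = proc.info["name"] or ""
    if name.lower().startswith("melt"):
        proc.nice(psutil.HIGH_PRIORITY_CLASS)  # Windows priority class
        proc.cpu_affinity(P_CORE_CPUS)         # keep it off the E-cores
        print(f"pinned PID {proc.pid} to CPUs {P_CORE_CPUS}")
```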

Hi Brian,
Thanks for the quick reply. I am aware of Task Manager. Even better, you can disable the e-cores (baby cores) per process from there as well; the issue is that this resets on every export.
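
A workaround might be a small script that re-applies the affinity whenever a new melt process appears, so it survives each export. A rough sketch, again assuming psutil and that CPUs 0-15 are the P-core threads:

```python
# Sketch: poll for new melt processes and pin each one to the P-cores,
# since the affinity set in Task Manager dies with the process.
import time
import psutil

P_CORE_CPUS = list(range(16))  # assumption -- adjust for your CPU
seen = set()

while True:
    for proc in psutil.process_iter(["name"]):
        name = proc.info["name"] or ""
        if name.lower().startswith("melt") and proc.pid not in seen:
            try:
                proc.cpu_affinity(P_CORE_CPUS)  # keep it off the E-cores
                seen.add(proc.pid)
            except psutil.NoSuchProcess:
                pass  # export finished before we got to it
    time.sleep(1)  # poll once a second while you edit
```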

Did you change the Shotcut Settings > Job Priority as shown?

Thanks, that’s the option I was looking for. There is still a major performance hit from sharing the load with the baby cores.
I did a small file as a test, normal priority vs. disabling the baby cores in the BIOS: 35 s to process vs. 15 s.
Is there any in-app option to use only the big boy cores, or to spread the processing more efficiently?

We use x264, so you can research what it does internally. Beyond that, neither the UI code (the Shotcut part) nor the editing and processing engine pays attention to which processors are used for different things. I do not know how to make it use only certain types of cores.

I have a laptop with a modern Intel Core i9-13900H processor (14 cores, 20 threads).

I conducted a series of tests and found one pattern:
if you use the libx264 software encoder with all available cores, it runs fastest. If I leave only the performance cores enabled, I lose speed. But if I enable the hardware QSV encoder, I lose performance with all cores enabled and gain an advantage when only the performance cores are running.
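
Something like the following could automate that comparison. It is only a sketch: the melt command line is a placeholder for your own export, psutil is assumed, and the P-core thread numbering (0-11 on my i9-13900H) may differ on other chips.

```python
# Sketch: time the same export under different CPU affinities.
import subprocess
import time
import psutil

# Hypothetical export command -- substitute your own project and settings.
CMD = ["melt", "project.mlt", "-consumer", "avformat:out.mp4",
       "vcodec=libx264"]

def timed_export(cpus):
    start = time.monotonic()
    proc = subprocess.Popen(CMD)
    psutil.Process(proc.pid).cpu_affinity(cpus)  # pin right after launch
    proc.wait()
    return time.monotonic() - start

all_cpus = list(range(psutil.cpu_count()))
p_cores = list(range(12))  # assumption: P-core threads are CPUs 0-11

print(f"all cores: {timed_export(all_cpus):.1f} s")
print(f"P-cores  : {timed_export(p_cores):.1f} s")
```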

Changing the “Job Priority” setting has little to no impact on performance, but I still set it to “Normal”.

I think it would be very good if somewhere in the Shotcut settings you could choose between the performance cores, the efficiency cores, or all of them at once.

Why should every application need to do this with different settings and a different API on each OS? That is silly nonsense that the CPU and OS makers should be working out.

How did you do your tests? Is there a way for you to change which cores are used?

I cannot find an API for this. If you know of one, let me know. And you certainly do not want Shotcut to try to modify your BIOS and reboot for you.

I changed the available cores directly through Task Manager (as in the video in my post above). When I have the opportunity, I will conduct another, more comprehensive series of tests.

I completely agree that this should be a headache for the developers of these processors and the operating system, but I also understand that they will optimize primarily for popular software. Perhaps the problem can be solved by some simple method that I have not yet thought of.

Hello, just wanted to give an update for future generations. I started to get unreliable results with further testing; it turns out my CPU was going bad. I replaced it and everything is working as expected now.