Massive Memory Leak When Rendering?

What is your operating system?
Windows 10

What is your Shotcut version (see Help > About Shotcut)? Is it 32-bit?
21.05.18

Can you repeat the problem? If so, what are the steps?
(Please be specific and use the names as seen in Shotcut, preferably English. Include a screenshot or screen recording if you can. Also, you can attach logs from either View > Application Log or right-click a job and choose View Log.)

Render a project with the following properties:

  • 8 Video layers
  • 1440p60

After about 5 hours of rendering, the memory use of melt.exe had grown enormously, eventually causing my system to reboot. Checking the event log shows the following error occurring about 100 times over the next 10 hours or so, until the system finally crashed:

[screenshot of the Event Viewer resource-exhaustion warnings]
As can be seen, melt.exe is using over 57 GB of RAM (all 16 GB of my available memory, as well as the entirety of my page file). To capture the output of the render, I invoked melt.exe directly on the command line with the following PowerShell command:

C:\"Program Files"\Shotcut\melt.exe -progress final-job.xml 2>&1 | tee render-output.txt

Here is the output of this command (the NativeCommandError block at the top is just PowerShell wrapping melt's stderr output in an error record because of the 2>&1 redirect, not an actual failure):

C:\Program Files\Shotcut\melt.exe : Current Frame:          4, percentage:          0
At line:1 char:1
+ C:\"Program Files"\Shotcut\melt.exe -progress final-job.xml 2>&1 | te ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (Current Frame: ...age:          0:String) [], RemoteException
    + FullyQualifiedErrorId : NativeCommandError
 
Current Frame:          6, percentage:          0
Current Frame:         18, percentage:          0
Current Frame:         22, percentage:          0
Current Frame:         23, percentage:          0
Current Frame:         25, percentage:          0
[mp4 @ 000001f1f41b78c0] Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
[mp4 @ 000001f1f41b78c0] Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
Current Frame:         26, percentage:          0
Current Frame:         27, percentage:          0
Current Frame:         28, percentage:          0
Current Frame:         29, percentage:          0
Current Frame:         30, percentage:          0

[I trimmed about 100 of these out]

Current Frame:        135, percentage:          0
Current Frame:        136, percentage:          0
Current Frame:        137, percentage:          0
Current Frame:        138, percentage:          0
Current Frame:        140, percentage:          0
QObject::startTimer: Timers can only be used with threads started with QThread
Current Frame:        141, percentage:          0
Current Frame:        142, percentage:          0
Current Frame:        143, percentage:          0
QObject::startTimer: Timers can only be used with threads started with QThread
QObject::startTimer: Timers can only be used with threads started with QThread
QObject::startTimer: Timers can only be used with threads started with QThread
Current Frame:        144, percentage:          0
[swscaler @ 000001f220c8a3c0] Warning: data is not aligned! This can lead to a speed loss
Current Frame:        145, percentage:          0
Current Frame:        146, percentage:          0
Current Frame:        147, percentage:          0
Current Frame:        148, percentage:          0
Current Frame:        149, percentage:          0

[I trimmed about 6000 of these out]

Current Frame:       6273, percentage:          9
Current Frame:       6274, percentage:          9
Current Frame:       6275, percentage:          9
Current Frame:       6276, percentage:          9
Current Frame:       6277, percentage:          9
Current Frame:       6278, percentage:          9
Current Frame:       6279, percentage:          9
Current Frame:       6280, percentage:          9

This output doesn’t give much indication of what the issue is, aside from the “data is not aligned” warning. I’m not sure what that means, or whether it is the cause of the problem.

I’m currently uploading the project to my Google Drive, but it’s literally 20 GB of content, so testing it yourself might be a challenge. Here’s the link: Final Video Trim - Google Drive

I attempted this on my own, using video tracks from the Google Drive link in a project at 1440p 60, but I didn’t have any issues with RAM usage or crashing; in fact, the render only took my computer an hour and thirty minutes.

I checked it out as well on my Windows system with 16 logical CPUs and 64 GB RAM. This project and its export require a fair amount of memory due to the resolution, 9 tracks, and an export with parallel processing and a software encoder. See the measurements below.

Virtual memory usage is not real memory usage, and it does not need that much. In my test, I left the export interpolation at bilinear to make it go faster; hyper/lanczos is slower but does not use more memory. Memory usage as the export progressed:

%      shotcut.exe   melt.exe
2%     2.9 GB        5.0 GB
5%     2.8 GB        5.1 GB
10%    2.8 GB        5.1 GB
(clicked each track and clip to view filters…)
21%    3.2 GB        5.15 GB
30%    3.2 GB        5.3 GB

It may be something to do with the clips I’m using, but I honestly don’t know; I don’t think it’s normal for it to use almost triple the memory it would take to load all the clips into memory at once. It could also be something to do with the export settings I’m using (see the .XML file, which can be given as a parameter to melt.exe on the command line).

Shortly after my last update above, I experienced problems like a Shotcut frontend crash, a Firefox crash, etc. I left melt.exe running overnight, but it was still running 12 hours later and the file was not playable. Checking the Event Viewer System log, I see the same events as you, preceded by an “Application popup: Windows - Out of Virtual Memory: Your system is low on virtual memory.” melt.exe was using 55 GB, and my paging file size is intentionally small.
Safe to say I reproduced it. I will run another test without parallel processing and with a hardware encoder to see what happens:
33% done, 3.4 GB used by melt.exe, and so far no problems.

Update: it too eventually failed.


The problem is not new with this version; v21.03.21 has it as well.

Update: I think I did not reboot after having a problem and before running the test on v21.03.21. Yesterday, I rebooted and tried the export again with parallel processing turned off, and this time it succeeded with a peak working set of 4.5 GB. There were no resource-exhaustion events in the Event Viewer system log, but there was one “Application popup: Windows - Virtual Memory Minimum Too Low” and a little odd activity shortly afterward. Nevertheless, I could play the video, and it passed an integrity check (in the Shotcut Properties menu).
It is possible that on one of my tests on v21.05.18 I did not reboot.


Thanks for the investigation! It’s good to know that the error is reproducible. Just to clarify: it works without issues if parallel processing is disabled and hardware acceleration is off? I’m trying to get this project rendered as soon as I can, since I’ve been having issues with it for months.

I attempted to render using a single core with no hardware acceleration. It got considerably further, reaching about 30% before my system crashed due to low resources, with melt.exe using about 44 GB of RAM.

Tried again using the command line.

This time it reached 98% before crashing.



Log:

C:\Program Files\Shotcut\melt.exe : Current Frame:          1, percentage:          0
At line:1 char:1
+ C:\"Program Files"\Shotcut\melt.exe -progress final-job-2.xml 2>&1 |  ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (Current Frame: ...age:          0:String) [], RemoteException
    + FullyQualifiedErrorId : NativeCommandError
 
[mp4 @ 00000198258470c0] Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
[mp4 @ 00000198258470c0] Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
Current Frame:          2, percentage:          0
Current Frame:          3, percentage:          0
Current Frame:          4, percentage:          0

...

Current Frame:        117, percentage:          0
Current Frame:        118, percentage:          0
Current Frame:        119, percentage:          0
QObject::startTimer: Timers can only be used with threads started with QThread
Current Frame:        120, percentage:          0
Current Frame:        121, percentage:          0
Current Frame:        122, percentage:          0
Current Frame:        123, percentage:          0
[swscaler @ 000001989bd4a700] Warning: data is not aligned! This can lead to a speed loss
Current Frame:        124, percentage:          0
Current Frame:        125, percentage:          0

...

Current Frame:      63451, percentage:         98
Current Frame:      63452, percentage:         98
QImage: out of memory, returning null image
Current Frame:      63453, percentage:         98
Current Frame:      63454, percentage:         98


...

Current Frame:      63534, percentage:         98
Current Frame:      63535, percentage:         98
[consumer avformat] error writing video frame: -12
Current Frame:      63536, percentage:         98

Was the memory also exhausted during this test? (The “error writing video frame: -12” suggests so; -12 is AVERROR(ENOMEM), FFmpeg’s out-of-memory code.) Or is the failure unrelated to memory leaking?

Memory was definitely exhausted. There were about 100 resource-exhaustion detections in the event log, and a few processes behaved strangely, which could possibly explain the buffer overruns, although I can’t claim to know enough about buffers to say for sure. I imagine it got further this time because Shotcut wasn’t open using an extra GB of RAM, so melt.exe could leak for a bit longer before crashing.

Gonna try again with hardware acceleration, although that was causing some other (seemingly unrelated?) issues earlier.

Thanks for your time, and hopefully we can get to the bottom of this soon! If there’s anything I can do to help find the issue, please let me know, although I doubt my C++ is good enough to get me anywhere.

I was also able to reproduce this using your provided source files. I started the process on a day when I was able to closely monitor the progress. I watched for hours as the melt.exe memory stubbornly hovered around 4 GB in Task Manager, never exceeding 4.2 GB. By 83% complete, I was feeling smug that it was not leaking and was surely going to succeed. I looked away, and moments later my computer was locked up. After a reboot, I found similar virtual memory errors in the event log.

So it would seem that the memory utilization spontaneously spiked within a short period of time. Or maybe virtual memory usage can accumulate without being reported in Task Manager somehow?


If you can afford the time, here is a time-consuming task that I would find helpful:
it would be good if we could reproduce the problem with a simpler project file. I wonder if it is only the length of the project causing the problem. Or maybe the source files expose a bug. Or maybe there are too many tracks.

If you could try subtracting features from the project one by one to see whether it still fails for you, that would help narrow down where the problem could be. For example, if the problem can be reproduced without using the timeline, that would eliminate a large swath of areas for investigation.


To see the virtual memory used in the Task Manager Details view, show the “Commit size” column. You can see this go up, but I do not yet know how it trends over time. Usually this takes so long that I have to let it run overnight, or leave the house for an extended period.
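
If you want to capture this without watching Task Manager, a PowerShell loop along these lines should do it. This is only a rough sketch: the one-minute interval and the melt-memory.log file name are arbitrary choices, and it samples just the first melt.exe process it finds.

# Sample melt.exe memory once a minute until the process exits.
while ($true) {
    $p = Get-Process melt -ErrorAction SilentlyContinue | Select-Object -First 1
    if (-not $p) { break }   # melt.exe has exited
    # WorkingSet64 roughly matches Task Manager's "Working set" column;
    # PrivateMemorySize64 roughly matches "Commit size".
    "{0}  working set: {1:N0} MB  commit: {2:N0} MB" -f (Get-Date -Format 'yyyy/MM/dd HH:mm:ss'), ($p.WorkingSet64 / 1MB), ($p.PrivateMemorySize64 / 1MB) |
        Add-Content melt-memory.log
    Start-Sleep -Seconds 60
}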

Update:
I have been running some tests outside of the project and have not yet reproduced a virtual memory consumption problem. I am also doing some memory profiling with the project but have not found anything. All this stuff takes a long time. Then I started giving some attention to the frequent “moov atom not found” error messages. The copy of render particles mask.mp4 is not valid and does not open in Shotcut, ffmpeg, or anything else. I downloaded it again from the Google Drive, and it is the same. I suspect this could be a factor in the problem.


That’s very interesting; I’ll have a closer look at that one after my classes today. I did some manual editing of the .mlt file a few months back, when I accidentally broke the project and replaced the actual clips with the proxy clips, but the preview seems to work fine.

I’ll give this a go when I have time. Hopefully I’ll start trying a few combinations of the base clips tonight.

I’ll try a trimmed version of the project as well then. Is there a command line option to get melt to print out timestamps every time it updates the progress? That way I could pair it with some form of memory monitoring to figure out at what frame the issues start.

melt prints out the frame number for every frame that it processes.

Yeah, I’ve got that working; I’m more looking for something that will print lines like [@t=2021/06/07 11:56:42] Current frame: 2534, percentage: 18.

I’ll have a look for a command line tool for it.
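
In the meantime, a PowerShell pipeline along these lines might do the trick; this is an untested sketch that just prepends a timestamp to each line of the command I used earlier:

C:\"Program Files"\Shotcut\melt.exe -progress final-job.xml 2>&1 |
    ForEach-Object { "[@t={0}] {1}" -f (Get-Date -Format 'yyyy/MM/dd HH:mm:ss'), $_ } |
    Tee-Object render-output.txt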

The problem is related to the Size, Position & Rotate filters. If I remove them, the problem goes away; moving them from the track to the clips does not help. I did intensive memory profiling of the first couple of seconds with a tool called Valgrind, but it did not reveal anything. melt runs many times slower under this tool, such that letting it progress to over 70% might take days, and even then it may not reveal anything about a pure consumption issue (it is designed more for hard leaks).

I am running another test outside of the project with a 2-hour 4K video with the filter applied to zoom in to 140%, and so far, at 37% done, the virtual memory (commit size) usage in Task Manager is not showing anything odd (flat at about 500 MB more than the active private working set, which is OK).
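
For reference, the standalone test amounts to something like the command below. This is only an approximation: the clip name and encoder settings are placeholders, and it assumes Shotcut’s Size, Position & Rotate filter corresponds to MLT’s qtblend filter, with the rect scaled to 140% of a 3840x2160 frame and re-centered:

C:\"Program Files"\Shotcut\melt.exe -progress long-4k-clip.mp4 -filter qtblend rect="-768 -432 5376 3024" -consumer avformat:zoom-test.mp4 vcodec=libx264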

1 Like