I was looking at the video tutorial for proxies, and then I noticed Kdenlive (also built on the MLT framework) has this as a simple checkbox in its project settings. Would it be easy to follow their example and add proxies to Shotcut?
Hi D_S, I have an alternative you might like in the meantime if you don’t mind some scripting in your workflow.
Dan, if you’re reading, Shotcut is an incredible tool and I can’t thank you enough for creating it. After two years of using it, I feel that full proxy support would be one of its killer features compared to other editors, especially for heavy filter use over 4K video, and the good news is that it’s already 80% of the way there. I am willing to donate money to achieve the remaining 20%.
I’ll quickly walk through the working 80% then explain the missing 20%.
When we finish a video shoot, we stick all the files in a Media subfolder. Then we use ffmpeg to manually create proxy versions of every video. This is the DOS/Windows method:
For %f In (*) Do ffmpeg.exe -i "%f" -vf scale=-1:270 -c:v libx264 -profile:v high -crf 12 -intra -tune film -preset veryfast -c:a aac -b:a 384k "..\Proxy\%f.mp4"
If the input file was “\Media\VIDEO.MOV”, the output filename becomes “\Proxy\VIDEO.MOV.MP4”. Adding .MP4 to the end is necessary to prevent ffmpeg from complaining about a container mismatch when using libx264. So after the transcode, we run one more command in the Proxy subfolder to remove the .MP4 extension and make filenames identical to the originals:
For %f In (*) Do @ren "%f" "%~nf"
Linux shell scripts can of course accomplish the same thing. This is the gist of it, but our full script also syncs the proxy timestamps to the originals to detect when changes have occurred (like audio tracks replaced with de-noised versions). Then we can regenerate and resync proxies only for the files that changed, to save transcode time.
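For anyone on Linux, here is a minimal sketch of that workflow. The ffmpeg flags are copied from the Windows one-liner above; the timestamp check is a simplified stand-in for our full sync script, not its exact code:

```shell
# Run from inside the Media folder. Proxies land in ../Proxy with the
# same filenames as the originals.
mkdir -p ../Proxy
for f in *; do
  # transcode only when the proxy is missing or the source is newer
  if [ ! -e "../Proxy/$f" ] || [ "$f" -nt "../Proxy/$f" ]; then
    ffmpeg -y -i "$f" -vf scale=-1:270 -c:v libx264 -profile:v high \
      -crf 12 -intra -tune film -preset veryfast \
      -c:a aac -b:a 384k "../Proxy/$f.mp4" || continue
    mv "../Proxy/$f.mp4" "../Proxy/$f"   # drop the extra .mp4 extension
    touch -r "$f" "../Proxy/$f"          # sync timestamp for change detection
  fi
done
```

The `touch -r` at the end is what makes the `-nt` (newer-than) test skip unchanged files on the next run.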
So now, we have two mirrored folder structures that have identical videos and filenames but at different resolutions. The MP4 proxies at 480x270 are generally 10% the file size of 4K H.264 100Mbps source videos. We use a quality-oriented CRF of 12 (lower CRF means higher quality) when creating the proxies to keep their color as accurate as possible while maintaining reasonable file sizes. This way, a first-attempt color grade can be applied directly on the proxy and hold up well when switching back to 4K for final adjustments.
The beautiful thing about Shotcut (or MLT or ffmpeg depending on the source) is that it uses header inspection to determine the video format rather than the filename extension. For instance, the Media folder may have an MPEG-2 MTS file, while the Proxy folder has an H.264/AAC/MP4 version of the same video with an MTS extension to match the original filename. Shotcut will correctly read the MP4 proxy even though it has an MTS extension. This feature is the magic sauce that makes the whole pre-compiled proxy process work.
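You can see what header inspection sees with a quick sketch like this (`Proxy/VIDEO.MTS` is just the example name from this thread): an MP4/MOV container carries an "ftyp" box type at byte offset 4 of the file, and that header, not the extension, is what format detection goes by.

```shell
# Peek at the magic bytes: MP4/MOV files have "ftyp" at byte offset 4,
# whatever the filename extension claims.
if dd if="Proxy/VIDEO.MTS" bs=1 skip=4 count=4 2>/dev/null | grep -q ftyp; then
  echo "MP4/MOV container despite the .MTS extension"
fi
```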
Our folder structure now looks like this:
\Project.mlt <-- The MLT file references videos in the Media subfolder using relative paths
\Media\VIDEO.MOV <-- Imagine this is 4K ProRes video from an Atomos external recorder
\Proxy\VIDEO.MOV <-- Despite the .MOV extension, this is a 480x270 All-I H.264/AAC/MP4 transcode
Since our Project.mlt file is hunting for videos through the Media folder, all we have to do is swap (by renaming) the Media and Proxy folders to switch between proxy mode and 4K mode. (Of course, Shotcut should be closed while doing this.) Since the filenames are the same inside each folder, videos load fine and Shotcut acts like nothing happened. So we edit on the proxies, adding media to the timeline as fast as we like, with no waiting for the editor (Shotcut) to generate proxies for us because they’re already generated. Once the edit is done, we swap the folders again, re-load the project in 4K mode, selectively scrub through the timeline to adjust color grading or crop as needed, then finally do an export using the original 4K videos.
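The swap itself is three renames through a temporary name. Here is a tiny self-contained demonstration (hypothetical scratch folder and marker files; in real use you would run the three `mv` commands in the folder holding Project.mlt, with Shotcut closed):

```shell
cd "$(mktemp -d)"                    # scratch folder just for this demo
mkdir Media Proxy
echo "4K original" > Media/VIDEO.MOV
echo "270p proxy"  > Proxy/VIDEO.MOV
mv Media Media.tmp                   # step 1: park the originals
mv Proxy Media                       # step 2: proxies take their place
mv Media.tmp Proxy                   # step 3: originals become "Proxy"
cat Media/VIDEO.MOV                  # prints: 270p proxy
```

After the swap, Project.mlt's relative paths resolve to the proxy files without the project knowing anything changed.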
It works. It’s beautiful. It’s 80% of the way there.
The remaining 20% we need for full proxy support can be summed up as “relative coordinate systems for all filters”.
For instance, the Mask filter uses percentage units for both position and size of its bounding box. This is EXACTLY what all filters need for full proxy support to work. As in, percentage units will land in the same place regardless of the clip or timeline resolution underneath.
Here’s a case where absolute coordinates don’t work: the Crop filter. The crop is specified in pixels. If you add a crop filter on a proxy that’s only 480x270, it may take only 50 pixels of crop to achieve your desired effect. But when you swap in a 4K video, cropping 50 pixels is completely unnoticeable and does not look the same as the 480x270 crop. If the crop were specified as a percentage, the same amount of video would be masked regardless of the underlying resolution. The coordinate system needs to be relative rather than tied to the clip resolution. Or rather, its representation when stored in the MLT file needs to be relative; the UI could continue to show pixel units calculated from the relative coordinates to make manual entry more convenient for users.
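To put numbers on that example: a 50-pixel crop on the 270-pixel-tall proxy covers 50/270 of the frame height. Stored as a relative value, that same fraction maps back to 400 pixels on a 2160-pixel-tall UHD frame:

```shell
# 50 px of 270 px is the fraction to preserve; scale it to UHD height.
proxy_crop=50; proxy_h=270; uhd_h=2160
uhd_crop=$(( proxy_crop * uhd_h / proxy_h ))
echo "$uhd_crop px"    # prints: 400 px
```

A fixed 50 px, by contrast, would cover only about 2% of the UHD frame instead of the intended 18.5%.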
Similarly, the Size and Position filter is in pixels rather than percent. The only difference is that its coordinate system is tied to the project/timeline resolution rather than the clip resolution. But the problem is exactly the same. If the timeline resolution changes, all pixel-based filter coordinates get destroyed.
But why would we change timeline resolution? One reason is because we authored an old project in 720i for a DVD release but we want to re-author it for 4K since the sources were acquired in 4K. But, and way more importantly, the reason to change timeline resolution is that when you’re stacking multiple tracks of video on the timeline, there is a huge preview performance difference between a 3840x2160 timeline and a 480x270 timeline. Being able to edit on a fast 480x270 timeline then change it to UHD right before export is extremely useful for getting the most service life out of old hardware, which is also the key to giving 4K editing capabilities to third-world countries that can’t afford high-end hardware.
Crazy as that sounds, this is my motivation for requesting full proxy support. In the last country I visited, a Dell laptop in an elementary school was the fastest computer within a two hour radius of us. Their economy depends on tourism to thrive, but they can’t showcase what they offer through video because they have no hardware to edit a video. They have computer knowledge but no hardware. They have cameras and cell phones that get good source video, but no nVidia GPUs and DaVinci Resolve to edit it together at a professional level. Something as simple as proxies could be their ticket to opening a much more attractive tourism campaign through high-quality video using the hardware they already have. As an example, I edit video using Shotcut on a Surface Pro 2 all the time with this proxy workflow, and it works. I can produce 4K video on this laptop when other programs like Resolve won’t even start up. I would love for third-world countries to have this same capability but with less hassle (filters that work in or out of proxy mode).
So, to D_S, this is our current proxy process and it works well so long as we edit first, then swap to 4K to apply any pixel-coordinate filters at the end. I went verbose for the sake of anyone else interested, as I get the impression you could have figured this out with only two paragraphs. I’ve also learned to like this process better than Blender’s extremely picky proxy workflow, and better than Kdenlive’s proxies. Last time I tried Kdenlive, its proxies were MPEG-2 and there was a glaring color shift. Our MP4 proxies at CRF 12 don’t do that. Also, built-in proxy solutions in other editors have been less-than-transparent about where the proxies are located, meaning it can be difficult to transport proxies with the project when moving projects between computers or archiving the project. Lastly, not being able to generate proxies in advance is a major drag in Blender and Kdenlive to me. Pre-generated proxies let me edit at the speed of thought, rather than having no proxy until I drag the original onto the timeline and request a proxy then wait for it to transcode. To clarify, I like your checkbox idea for proxies. I’m just hoping that the Shotcut mechanics can be handled more transparently than Blender and Kdenlive have done so far. Pre-gen is awesome.
To Dan, I know you have a lifetime list of feature requests already and I sympathize with you, so I am willing to donate money to get proxy support up to 100%, which simply means making all filters use a relative coordinate system like the Mask filter does. A checkbox like D_S requested may be nice for other users in the future, but the coordinate system would still have to be fixed first to be usable. And we could at least run with the process we have today until the checkbox is ready.
Thank you for listening. Shotcut is amazing.
I have historically disliked proxy editing because it adds steps to the workflow, adds code that has bugs to work through, and most of all adds a degree of separation by masking problems that export might have with the source material. I would rather put the effort into improving the GPU processing.
The coordinate system needs to be relative
I do not disagree, and it used to be that way for many filters before keyframes. Round-tripping relative values through the keyframes is not working yet. I worked on it once, but I did not get it working reliably. Maybe I will again soon.
But why would we change timeline resolution? One reason is because we authored an old project in 720i for a DVD release but we want to re-author it for 4K since the sources were acquired in 4K.
I understand the reasons. I just can’t make Shotcut the most awesome tool people want in a very short amount of time.
I will reconsider my opinion about proxy editing. Maybe if someone is willing to convert media to edit-friendly (aka “optimized”, where needed) at the same time as proxy creation, it will alleviate the concern about masking problems. First, I need to add some project management that will establish a project folder into which things are managed.
Great post, @Austin!
You make a great case for all filters using percentages rather than pixels, and you clearly know your way around editing programs. Do you happen to know how to program? From what I understand, the Shotcut development team right now is only a two-man operation, and Dan is also the lead developer of MLT. He’s got a lot on his plate. When Shotcut introduced keyframes back in May, the feature was very buggy and didn’t get stable until just now with v18.10.08. I wonder if changing the filters that don’t use percentages over to percentages would introduce a lot of bugs, cause regressions, and take some time to stabilize again. That would be a lot of time and work if so.
If you don’t know how to program yourself, do you happen to know anyone who does and could at least volunteer temporarily, just to get that remaining 20% you talked about? I figure an extra hand in the mix would be the fastest way to get that remaining 20% done. I don’t know anything about programming myself; if I did, I would help out.