Suggestions for large projects and description of current workflow

Hello, and thanks to Dan for your software. I really like your approach to making software. Brian et al. too, thanks.

  • I don’t use many filters. In fact, the Linux distro I use (Arch Linux) doesn’t even have a working HTML or WebVfx framework. Doesn’t bother me; I’m doing fairly formal stuff, so basic cuts and the odd blended piece of footage work well.
  • I’ve usually got maybe 3 hours of footage that has been converted into ProRes intermediate files, divided up into a dozen or so tracks in the timeline, with more in the playlist. I still haven’t worked out an efficient way to organise my footage. The last shoot had a B-cam, and that added more complexity to this workflow. Basically, I love using the playlist; however, I think I need another level of organisation: proper isolation of separate groups of clips. Bins, basically, or a playlist within a playlist. Just thinking out loud here.
  • Stability was great this past month: maybe 150 hours of editing and only one or two crashes that I couldn’t attribute to a bug (the audio waveform scope crashing the program; latest git built from source this past month). :+1:
  • Love the shortcut keys for popping the various tools on and off the editor window. This is so useful.
  • My workflow is to render out to another intermediate format, with all of the audio tracks rendered separately (stems), and then I process the audio in Ardour 5.12 with xjadeo and finally combine the finished audio and video with ffmpeg. This workflow allows for a lot of flexibility and lets me set levels professionally according to established practices (LUFS). For this project, which I hope to finish today, I will try adding compression and getting the levels where I want them using LADSPA tools. So :crossed_fingers:
  • I relied heavily on the video histogram as well as the video waveform tools in this project, and they behaved quite bizarrely: sometimes just freezing, then continuing, then freezing again. However, they worked well enough that I got the job done. Maybe it was the 12 tracks in the timeline? :confused:
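For the final mux step described in the workflow above, a minimal ffmpeg invocation might look like this (filenames are placeholders, and stream copy assumes the video intermediate and the mixed stem are both already in delivery-ready form):

```shell
# Mux the rendered intermediate video with the finished stereo mix
# from Ardour; -c copy keeps both streams as-is (no re-encoding).
# video.mov and mix.wav are placeholder names.
ffmpeg -i video.mov -i mix.wav \
       -map 0:v:0 -map 1:a:0 \
       -c copy \
       final.mov
```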

This is just feedback, and if it’s of no use, feel free to delete (after you’ve read it). Summary: love the stability, please keep up the amazing work in that area. Fancy filters mean nothing to me, but I know they’re part of your interest in making this software accessible and useful to a large audience (it means nothing to me… got that song in my head now).

EDIT: One thing I just remembered I would like some feedback on, Dan, is that I used a few short clips that were supplied by someone else and I didn’t bother transcoding them. They worked perfectly, and I was able to do frame-accurate cuts on them without a problem. They weren’t All-I (intra-frame) codecs either, and I was surprised by this. That you can effortlessly combine multiple formats in the same timeline is, I am certain, a big advantage of your software over other established products. I can imagine there is a huge amount of work in the background to make this happen and look as easy as it does.

My question is this: if I am able to get frame-accurate cuts from bog-standard H.264 footage, what is the advantage to transcoding? Thanks again in advance for any replies.


Consider not using Playlist and simply organizing your clips using sub-folders of your project folder. File managers are pretty good at providing and browsing organization. Then, you can drag-n-drop whenever you need to add another clip. Yes, you must go through the Source player before it gets into the Timeline that way, but practice 3-point editing and you may learn to like it and be more efficient. After all, you can also use this approach to audition the clips and find the shot you are looking for, because you may not be able to determine that from just a filename and thumbnail. We still have a lot of user interface, media management, and workflow features to add, but the priority is still on debugging and engine-oriented improvements.

There is better accuracy when seeking into audio that is not compressed using audio codecs with delay (which is most of them, including AAC and MP3, but not PCM, ALAC, or AC-3). For video, it mostly comes down to faster seeking, scrubbing, and reverse playback. The way some files are compressed without all I-frames can also introduce seeking problems - it depends on some deeply technical factors, but open-GOP is certainly problematic. On a different but related note, variable frame rate video is challenging and not recommended, though sometimes it works well enough. I think preparing optimized or edit-friendly media is really about reducing risk and improving the user experience of the interactivity (latency/lag).


Indeed, it’s so simple that I did not think about it. I just finished a project made up of more than 50 clips and I confess that I struggled with the sorting of these clips.
I will do as you suggest for my next project.

However, please add to your list of improvements on this topic the ability to mark clips that have already been used in the timeline, for example with a dot (or some other visual indicator).

Doing so may give the impression that these objects are linked, which they are not. People may expect that if they double-click the playlist item to open it in Source and make a change (trim or filter), then it will affect all instances of this clip object in the timeline, but it will not. By design, nothing is linked between Playlist, Source, and Timeline - copies are made at each transfer, and things might have changed in each copy. We could, however, add some sort of flag-with-color feature - under user control - and then perhaps add some automatic flagging feature.

That’s exactly what I thought.

My workflow is almost identical to yours, although I’m using filesystem folders to organize videos like the other commenters are recommending.

The other difference with my workflow is the audio piece. Like you, I export a speech stem and edit it to appropriate levels in Cubase. However, I don’t render video to an intermediate file and then combine it with audio using ffmpeg. Instead, I go back to the Shotcut project, add a new audio track, import the finalized Cubase speech stem into it, mute all the other speech tracks, add Gain filters on music tracks to sound good against the speech stem, then do a single export to my final delivery format. It saves the time and disk space of rendering an intermediate if you don’t have any other uses for the intermediate.

I am able to get away with not exporting and editing a music stem because our background music is quiet enough to not push the final integrated level more than one LU. So it’s easier and faster and more flexible to change Gain directly in Shotcut. Using YouTube as an example, I target speech at -14 to -15 LUFS whereas YouTube’s reference level is -13 LUFS. That gives me 1-2 LU for music and SFX to play around in, which sums nicely to -13 LUFS when integrated.
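If anyone wants to check their integrated loudness outside of a DAW, ffmpeg’s EBU R128 filter will report it (mix.wav is a placeholder name):

```shell
# Analyze loudness without writing any output file; the summary
# printed at the end includes I (integrated LUFS), LRA (loudness
# range), and true peak.
ffmpeg -i mix.wav -af ebur128=peak=true -f null -
```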



That said, I will not open another thread, but I want to congratulate you for your work. This is the first version (19-06-15) that allowed me to complete a project of 50 clips (40 minutes of video in the end) over 5 working days, with multiple openings, closings, modifications, filters… without any crash or freeze.
Stability is coming…


Thoughts on transcoding…

We use Panasonic cameras that pump out well-formed closed-GOP constant frame rate H.264 files. In terms of seek accuracy, these are just as good as All-Intra. What they lack compared to All-Intra is decoding speed, and they would make the editing experience very laggy if we tried to edit them directly. A transcode could gain back editing speed at 1080p, but our files are 4K, which means they’re going to lag no matter what. So we use a proxy workflow, which is basically a transcode at a smaller resolution. All that to say… if you have CFR H.264 files and your computer can edit them directly without lag, there is probably zero benefit to transcoding intermediates.

Variable frame rate H.264 is a different story. So are open-GOP TS/MTS/M2TS files and some old AVCHD files. The occasional 60fps AVCHD file has been a problem for me too. With all of these, transcoding (if done right) gets seek accuracy back by forcing frames to a constant frame rate, meaning seek operations will return the same frame every time.
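For reference, a sketch of such a VFR-to-CFR transcode with ffmpeg might look like this (the 30 fps target and filenames are placeholders; match the rate to your project):

```shell
# Force a constant frame rate while transcoding to ProRes 422 as an
# edit-friendly intermediate; -vsync cfr duplicates or drops frames
# as needed to hit the target rate exactly.
ffmpeg -i vfr_input.mp4 -r 30 -vsync cfr \
       -c:v prores_ks -profile:v 2 \
       -c:a pcm_s16le cfr_output.mov
```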

Since you are familiar with ffmpeg, have you tried DNxHR HQ as an intermediate format rather than ProRes? It is 4:2:2 8-bit, meaning it takes less disk space (on some sources) than 10-bit ProRes HQ, and it encodes and decodes about 3x faster than prores_ks on my computer. Even if you have truly 10-bit sources, you may not gain much color-wise using the 10-bit codec because Shotcut’s filters are still 8-bit.

For the available parameters to DNxHR, do ffmpeg -h encoder=dnxhd

To convert to DNxHR, do something like ffmpeg -i input.mp4 -c:v dnxhd -profile:v dnxhr_hq -c:a pcm_s16le output.mov

If you aren’t familiar with it, this is Avid’s long-overdue response to ProRes dominance in the edit suite. It is resolution-independent and performs very well. For some reason, ffmpeg chose to implement it as a profile under the older DNxHD codec. Shotcut does not currently have a DNxHR preset, but you can still export to it if you supply the profile option on the Other tab under Export > Advanced.

For all intents and purposes, transcoding is something I’d rather not do if I can help it. It adds complexity to the workflow and, I guess, room for error on my part. I did some tests against FFV1 and was happy with ProRes’s quality for its size.

I’ve been thinking along these lines as well. I don’t use a traditional file manager, but a folder-based system for organisation makes absolute sense.

Going to experiment with lossless trimming of files as well, using Avidemux, which cuts on I-frames. Less clutter, and individual scenes can be reordered for organisation.
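ffmpeg can do a similar keyframe-snapped lossless trim with stream copy, if that helps reduce tool clutter (times and filenames are placeholders):

```shell
# Stream-copy a 90-second scene starting near the 1-minute mark.
# With -ss before -i and -c copy, the cut snaps to the keyframe at or
# before the seek point, so it is lossless but not frame accurate.
ffmpeg -ss 00:01:00 -i input.mp4 -t 90 -c copy scene01.mp4
```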