Command line renderer

Hi there,

I’m trying to use melt to render the files exported from Shotcut, but I’m running into various issues.
This is the melt command I’m using, with all the options I’ve got:
melt -progress shotcut-iIOuUF.mlt -consumer avformat:output.mp4 vpre=veryfast preset=veryfast movflags=+faststart f=mp4 acodec=aac channels=2 ar=48000 ab=384k vcodec=h264_nvenc vglobal_quality=18 vq=18 g=15 bf=2 width=1920 height=1080 top_field_first=2 deinterlace_method=yadif rescale=bicubic threads=15

The options are taken from the temporary .mlt file, and the .mlt file being rendered is the project exported from Shotcut.

  1. The options above may not be the best ones; any suggestions on how to improve them?

  2. Rendering is much slower from the command line than from Shotcut. Any idea why that could be?
    For example, a 9:13 video renders in Shotcut in 5:15, but from the command line it takes 17:40.

  3. Getting some errors in the command line rendering:
    [h264_nvenc @ 0x7fa138201d80] [Eval @ 0x7fa140c58f30] Undefined constant or missing '(' in 'veryfast'
    [h264_nvenc @ 0x7fa138201d80] Unable to parse option value "veryfast"
    [h264_nvenc @ 0x7fa138201d80] Interlaced encoding is not supported. Supported level: 0
    [h264_nvenc @ 0x7fa138201d80] No capable devices found
    [mp4 @ 0x7fa138200f40] Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
    [mp3 @ 0x7fa10fe5a080] Estimating duration from bitrate, this may be inaccurate

Hoping that these can be sorted out by somebody more knowledgeable than I am.

I have a build with a GeForce RTX 2060 GPU and an i9-9900 CPU.

I would consider removing threads since the libraries make good guesses on their own, and there’s no reason to overly bind commands to hardware in case the hardware changes.

Two possibilities:

  • Is parallel processing turned on when rendering from Shotcut?

  • The “No capable devices found” error from h264_nvenc makes me wonder if the unsupported settings are causing hardware encoding to fail so that it reverts to libx264 software encoding. Are there any references to libx264 in the command line log file? (See the quick check below.)
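
One way to check, assuming you capture melt’s stderr to a file (render.log is just an example name):

melt -progress shotcut-iIOuUF.mlt -consumer avformat:output.mp4 [your options] 2> render.log
grep -i libx264 render.log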

veryfast is not a supported quality preset for hardware encoding. The closest is fast, which is also called p2 with NVENC. See:

https://docs.nvidia.com/video-technologies/video-codec-sdk/ffmpeg-with-nvidia-gpu/index.html#command-line-for-latency-tolerant-high-quality-transcoding
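
You can also list the presets your own ffmpeg build’s h264_nvenc encoder actually accepts:

ffmpeg -h encoder=h264_nvenc

Recent FFmpeg builds map the legacy names (fast, medium, slow) onto the p1–p7 scale.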

Remove top_field_first and deinterlace_method. Possibly add progressive=1.

Thank you, Austin, for your suggestions.
Will test them and come back with the results.

Updated the command line to:
melt -progress shotcut-iIOuUF.mlt -consumer avformat:output.mp4 vpre=fast preset=fast movflags=+faststart f=mp4 acodec=aac channels=2 ar=48000 ab=384k vcodec=h264_nvenc qp=18 vq=18 g=15 bf=2 width=1920 height=1080 rescale=bicubic progressive=1

[h264_nvenc @ 0x7f99ec201d80] Using global_quality with nvenc is deprecated. Use qp instead.
Because of that warning, I also changed global_quality to qp.

Now I don’t get the “No capable devices found” error from h264_nvenc anymore; I think changing the preset and vpre to “fast” fixed that.

Of course, parallel processing was turned on when I exported the project in Shotcut.

It is working but still slow; rendering the project from the command line still takes 17+ minutes.

Parallel processing is probably the difference. Unfortunately, I don’t know the melt flag that enables it.

This is how the .mlt XML consumer part looks:
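
(The element below is a reconstruction from the export options above, so the exact attributes and values may differ.)

<consumer ab="384k" acodec="aac" ar="48000" bf="2" channels="2" f="mp4" g="15" height="1080" mlt_service="avformat" movflags="+faststart" preset="fast" qp="18" real_time="-4" rescale="bicubic" target="output.mp4" vcodec="h264_nvenc" width="1920"/>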

Yup, I can confirm that rendering from the command line is not using parallel processing.
It is taking almost the same time …

https://www.mltframework.org/faq/#does-mlt-take-advantage-of-multiple-cores-or-how-do-i-enable-parallel-processing
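
Per that FAQ, parallel processing is enabled by setting the consumer’s real_time property to a negative number of threads. For example, adding real_time=-4 to the earlier command would look like this:

melt -progress shotcut-iIOuUF.mlt -consumer avformat:output.mp4 real_time=-4 vpre=fast preset=fast movflags=+faststart f=mp4 acodec=aac channels=2 ar=48000 ab=384k vcodec=h264_nvenc qp=18 vq=18 g=15 bf=2 width=1920 height=1080 rescale=bicubic progressive=1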

Added real_time=-4.

