Possibly. There are two alternatives if you want to avoid a new tool:
Your original workflow was 95% perfect. It’s still possible to edit audio in Reaper, export as WAV/PCM instead of AAC, import audio into Shotcut, edit video in Shotcut, then do the final export from Shotcut. This method doesn’t involve any new tools at all.
If you prefer to edit audio in Reaper by matching it to the video exported from Shotcut, then it’s possible to use ffmpeg instead of avidemux. ffmpeg comes bundled with Shotcut, so there’s nothing extra to install. A command that would work for your scenario is:
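Based on the description that follows, the command would look something like this (the input and output filenames are placeholders for your own files):

```shell
ffmpeg -i EditedMovie.mp4 -i EditedAudio.wav \
  -map 0:v -map 1:a \
  -c:v copy \
  -c:a ac3 -b:a 640k \
  RemuxedMovie.mp4
```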
This command takes only the video from stream #0 (the first -i file), only the audio from stream #1 (the second -i file), preserves video with no re-encoding (-c:v copy), compresses audio with AC-3 640kbps (-c:a ac3 -b:a 640k), and puts the result in RemuxedMovie.mp4.
ffmpeg.exe can be found in the Shotcut installation folder.
Thanks for getting back to me. In fact I wanted to avoid a new tool.
This could be the solution:
It is not important for me to edit audio in Reaper by matching the exported video from Shotcut; the original unedited video is all that’s needed for this.
Just to be sure, workflow should be now:
Edit audio (with unedited video as guide) in Reaper, export the audio file only (as WAV, sample rate 44100)
Edit video in Shotcut, then import the WAV file on a separate audio track (audio track from video → muted)
Final export from Shotcut
audio settings:
sample rate: 44100
codec: ac3
rate control: average bitrate
bitrate: 448 kb/s
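If you want to double-check the result after exporting, ffprobe (bundled in the Shotcut installation folder alongside ffmpeg) can confirm the audio codec and sample rate. A quick sketch, with the filename as a placeholder:

```shell
ffprobe -v error -select_streams a:0 \
  -show_entries stream=codec_name,sample_rate,bit_rate \
  -of default=noprint_wrappers=1 FinalExport.mp4
```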
Correct?
One more question:
When a video (post editing) has been exported from Shotcut once, and will be loaded again in Shotcut just for matching with a new audio track (with no further video editing), will the video quality be affected when rendered a second time?
Assuming the source files are 44100, then yes. The important thing is to have the same sample rate from source to export.
If there is any chance the MP4 file will be saved to a USB stick and plugged into a TV or Blu-ray player for viewing, then 448 is a good choice. If the video is going to YouTube only, then 640 offers a little more quality buffer for surviving another generation of transcoding by YouTube.
Depends on the format of the first export. If it was lossy like H.264 or H.265 or VP9 or AV1, then yes, quality would be significantly affected. If the first export was an intermediate format like ProRes 422 HQ or DNxHR HQ, or a lossless format like Huffyuv or Ut Video, then no, quality would not be affected.
However, why would the exported video be brought back in? Why not reopen the original project file and add new audio tracks to it? Zero generational loss and minimal export time.
Glad you finally got the results you were looking for.
Honestly, because I had to apply a workaround for a small problem when using two specific Shotcut filters in a row. Probably a beginner’s mistake, and actually a bit off topic. I would have to post pictures to illustrate. I don’t want to break the forum rules, can I post it here in this thread?
I doubt anyone would complain about rules. However, a separate topic might be good just in case an interesting solution pops up that warrants its own discussion and solution.
Austin, thank you for the excellent information on audio codecs! I knew some of this in bits and pieces, but I feel like my understanding has taken a giant leap forward by reading your responses in this thread.
That would be my strategy, although it often requires a Matroska container. Opus can’t be put into MOV or MP4, which can be a limiting factor for some workflows and programs. Technically, ffmpeg can cram Opus into MP4, but a lot of media players will be like “Wuuuuhhh” and nothing will play.
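As a sketch of what that looks like in practice (filenames are placeholders): copying the video stream untouched while encoding the audio to Opus, with Matroska as the container so players don’t choke:

```shell
ffmpeg -i input.mp4 -c:v copy -c:a libopus -b:a 192k output.mkv
```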
I wish I understood all their decisions, but I don’t. There’s probably a money trail that traces back to the patent pools on those formats in there somewhere. The format designers have to recover R&D costs somehow, and make a modest 1,000,000% return on investment.
Either the format designers, or the powers-that-be who coordinate with big commercial codec designers to inseparably entwine DRM into the codecs (Digital Rights Management, not our friend here.)
Then there is power.
Imagine a world where all of the news is videos which have been edited by the same products and powers that produce CGI for commercial movies. Imagine such a world where the only video editing software was that which comes from big commercial houses, with a license that requires an eavesdropping internet connection back to “those who protect your wellbeing”.
Within the past year I was involved in a project using Shotcut to produce videos for four coordinated local law enforcement agencies (who were coordinating with the state Attorney General but not with the Governor), to produce videos about choreographed violence at political events. The project was shut down by…
Power. Control of codec licenses, control of video editing software licenses to enforce the licensing of codecs, control of video editing, control of information, control of “the truth”.
Power.
So what’s the difference between libopus and opus in the Codec list in Export? I imagine that libopus is FFmpeg’s recreation of opus so what’s opus on the list for?
February 2011: The Opus bitstream format neared the end of development and was tentatively frozen.
July 2011: Mozilla wanted to get a jump on having code to implement Opus, so the “opus” encoder began.
… a year goes by …
September 2012: The official Opus working group that defined the format released the “libopus” reference encoder.
“libopus” has had a few revisions over the years, but the underlying bitstream format for Opus hasn’t changed since 2012. FFmpeg bundles the “libopus” reference encoder. They didn’t have to write it themselves.
“libopus” is now the default Opus encoder in ffmpeg, and it is excellent. “opus” is still included, but it is considered experimental and requires a strict flag to access it. It was (is still?) also generally considered to be not as good at low bitrates.
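For illustration, here is how the two encoders are selected on the ffmpeg command line (filenames are placeholders); the experimental native encoder needs the strict flag:

```shell
# Reference encoder -- the default, recommended choice
ffmpeg -i input.wav -c:a libopus -b:a 160k output.opus

# FFmpeg's own native encoder -- experimental, gated behind -strict
ffmpeg -i input.wav -c:a opus -strict -2 -b:a 160k output_native.opus
```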
Yes, for personal use. It supports up to 7.1. But no, if the goal is playback on hardware devices or general widespread compatibility. The format is technically great for surround sound, but not many hardware players and commercial tools support it. So it all depends on whether your tools and workflow can handle it.
If encoding beyond stereo, the channel layout must be specified in the call to ffmpeg:
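Something along these lines should work (filenames are placeholders; the aformat filter forces a standard 5.1 layout, which is the usual workaround when libopus rejects a source’s default channel layout):

```shell
ffmpeg -i input.mov -c:v copy \
  -c:a libopus -b:a 512k \
  -af aformat=channel_layouts=5.1 \
  output.mkv
```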
It will be 5.1, but I haven’t tested whether the channels will be mapped to the proper places (as in, left and right won’t get swapped). Opus is based on “Vorbis order of channels” which is different from other formats, and is why a layout has to be specified to get the proper mapping. Since Shotcut is probably specifying 5.1 layout during the export, it will probably work fine. But I haven’t tested it to be sure.