The command line given only does frame dropping or duplicating. Converting frame rate with more sophistication than that requires motion estimation combined with an interpolation technique such as optical flow, so that new in-between frames are synthesized rather than copied. Here is an example:
ffmpeg -i Input.mp4 -filter:v minterpolate=fps=120:mi_mode=mci:mc_mode=aobmc:me_mode=bidir:me=epzs:vsbmc=1 Output.mp4
But there are lots of problems with these algorithms. Motion vectors are generated by finding common edges between video frames, then shifting those edges to create in-between frames. If the frames are filled with motion blur from fast-moving objects, however, there are no edges to positively correlate, so the algorithm often guesses wrong and creates interpolated frames that look like smeary mush. Frame duplication almost always looks better and cleaner if the fps change is small. The second problem goes without saying… these algorithms are slooooooow at a level that words cannot express.
For the very minor conversion of 25fps to 30fps, frame duplication is totally fine. This is what every media player does anyway when showing a 24fps movie on a 60 Hz computer screen. If that doesn’t bother you when watching movies, then dropping a 25fps video directly onto a 30fps Shotcut timeline won’t bother you either.
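For comparison, a plain duplicate/drop conversion can be done with the fps filter alone. This is only a minimal sketch with placeholder filenames and no encoder settings:

ffmpeg -i Input.mp4 -filter:v fps=30 Output.mp4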
For the sake of clarity to the OP, the ffmpeg command lines you provided were probably meant as templates rather than complete commands. For instance, setting -r 30 by itself will not necessarily remove variable frame rate from cell phone footage. It has to be combined with -vsync cfr and -filter:a aresample=async=1:min_comp=0.001:min_hard_comp=0.1:first_pts=0 to force a constant frame rate and keep the audio synchronized with any timing changes. Otherwise, -r 30 by itself does exactly the same thing that Shotcut does when mixed frame rates are dropped onto a common timeline.
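Put together, a VFR-to-CFR pre-processing command would look something like the line below. The filenames and the 30fps target are placeholders, and encoder settings would still need to be added as discussed next:

ffmpeg -i Input.mp4 -r 30 -vsync cfr -filter:a aresample=async=1:min_comp=0.001:min_hard_comp=0.1:first_pts=0 Output.mp4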
Secondly, there were no specific encoder settings provided, which again probably means the commands were a template for the OP to fill in. So for clarity, in the absence of explicit encoder settings, FFmpeg falls back to defaults that are far from lossless (libx264 at CRF 23 for MP4 output), meaning a generation of quality loss has already occurred before these videos get dropped into Shotcut and encoded again for the final export. For pre-processing to be worthwhile, much higher quality settings would need to be used for these intermediate files.
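As an illustration only (the exact CRF, preset, and audio bitrate here are my own suggestion, not anything from the original post), a near-lossless intermediate could look something like this:

ffmpeg -i Input.mp4 -r 30 -vsync cfr -filter:a aresample=async=1:min_comp=0.001:min_hard_comp=0.1:first_pts=0 -c:v libx264 -preset veryfast -crf 12 -c:a aac -b:a 256k Output.mp4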
All trade-offs considered, the easiest and highest-quality (least generational loss) method is to simply drop the videos onto the timeline and let Shotcut take care of the rest. It is designed for this. Pre-processing would make total sense if you wanted a smooth 4x slow-motion sequence, of course, but that’s a different scenario.
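For the record, that slow-motion case is where minterpolate earns its keep. A rough sketch, assuming a 30fps source and with audio dropped since it would need separate handling, might look like this: interpolate up to 120fps, then stretch the timestamps 4x so it plays back at 30fps over four times the duration.

ffmpeg -i Input.mp4 -filter:v "minterpolate=fps=120:mi_mode=mci:mc_mode=aobmc:me_mode=bidir:me=epzs:vsbmc=1,setpts=4*PTS" -an Output.mp4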