This is incredibly context-sensitive:
- What is the source material? Line art such as a cartoon is a very poor candidate that leads to many noticeable artefacts.
- What interpolator is being used? A simple blend mode as I demonstrated above, or a GPU-accelerated optical flow interpolator like the one in Resolve? Neither is necessarily better; it depends on the source material, and each creates its own types of artefacts (see the sketch after this list).
- How much time is available? FFmpeg's motion-estimation interpolator can take 400x the source material's duration in computation time, so re-interpolating a 2-hour 4K wedding video could take weeks.
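
To make the second point concrete, here is a rough sketch of both approaches using FFmpeg's `minterpolate` filter. Filenames and parameter values are placeholders, not recommendations, and note that `minterpolate` runs on the CPU, so it is not equivalent to Resolve's GPU optical flow:

```
# Blend mode: each new frame is a cross-fade of its neighbours.
# Fast, but fast-moving objects ghost/double-expose.
ffmpeg -i input.mp4 -vf "minterpolate=fps=60:mi_mode=blend" blend.mp4

# Motion-compensated interpolation: estimates motion vectors and warps
# blocks along them. Far slower; artefacts show up as warping and tearing
# around occlusions and edges instead of ghosting.
ffmpeg -i input.mp4 -vf "minterpolate=fps=60:mi_mode=mci:mc_mode=aobmc:me_mode=bidir:vsbmc=1" mci.mp4
```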
I made a demo video last year of the artefacts created by interpolation, along with computation timings for each setting:
Note that using ffmpeg with just the `-r 25` or `-r 60` option does not create motion-interpolated output. With those settings, ffmpeg drops or duplicates frames just like Shotcut does natively. Motion-compensated interpolation is implemented with filters that have to be explicitly added to the filter graph.
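
For example, with a hypothetical 30fps `input.mp4` (output names are placeholders):

```
# Drops/duplicates frames to hit 60fps; no new motion is synthesised.
ffmpeg -i input.mp4 -r 60 duplicated.mp4

# Synthesises new in-between frames via the filter graph (much slower).
ffmpeg -i input.mp4 -vf "minterpolate=fps=60:mi_mode=mci" interpolated.mp4
```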
Also, here is a breakdown I did earlier of why 24fps movies interpolated up to 60fps don't look as good as people hoped, mostly because interpolators struggle with motion blur and rotating objects: