I’m curious about your experience with this issue. I thought I was being smart and switched to 50 fps (PAL) mid-holiday to reduce flickering in the evening lights, but I switched back the next day when I noticed my phone sadly can’t do PAL, so I now have a mix of everything (including a 25 fps timelapse, but that will be sped up, so I don’t care). I’d say most of the footage is 60 fps now; only about 20% is 50 fps, and under 10% is 30 or 25 fps.
So now I’m stuck with some choices: do I choose a 60 fps timeline and let Shotcut duplicate frames to fill in the missing ones? Do I go with 50 so it only has to drop frames for the 60→50 conversion (is dropping smoother than duplicating for 50→60)? Or would 30 fps be better, so both 50 and 60 have more room to spread out the dropped frames and look smoother?
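For what it’s worth, the drop/duplicate counts for each candidate timeline rate can be worked out with a quick script. This is a naive nearest-frame model, not necessarily what Shotcut actually does, but it gives a feel for how much material each choice touches per second:

```python
from fractions import Fraction

def frame_mapping(src_fps, dst_fps):
    """For each output frame in one second, the nearest source frame index
    (a naive nearest-frame model of drop/duplicate conversion)."""
    return [round(i * Fraction(src_fps, dst_fps)) for i in range(dst_fps)]

for src, dst in [(50, 60), (60, 50), (50, 30), (60, 30)]:
    m = frame_mapping(src, dst)
    dups = len(m) - len(set(m))   # source frames shown twice
    drops = src - len(set(m))     # source frames never shown
    print(f"{src}->{dst}: {dups} duplicated, {drops} dropped per second")
```

So a 60 fps timeline duplicates 10 frames per second of 50 fps footage, a 50 fps timeline drops 10 frames per second of 60 fps footage, and a 30 fps timeline drops 20–30 frames per second from everything.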
Of course, this is just a personal edit and I could get away with any settings, but I’m hoping someone else went through the same thing and tested this more thoroughly. I tried googling this, but virtually all results point to simple “what is fps/pal/ntsc”, “which fps is better for slow mo” and “just don’t mix framerates” pages, nothing about actual best practice when mixing them.
NEVER mix 25/50 with 30/60 fps footage on the same timeline!
Convert them before importing!
Of course not all of them, just the ones that differ from the timeline’s fps.
For example, if you have 70% of the footage recorded at 25 fps and the rest at 30 fps, then it’s more convenient to make a 25 fps timeline, convert the 30 fps files to 25 fps, and then start editing.
As Shotcut comes with ffmpeg in the package, you can use it for this conversion. But it works from the command line. It’s very easy once you get to know it.
The very easiest command line is (you can make it much more “pro” if you study the ffmpeg documentation thoroughly): ffmpeg -i %SourceFileNameWithExtension% -r %OutputFrameRate% %OutputFileNameWithExtension%
For example: ffmpeg -i footage1.mp4 -r 25 footage1_25fps.mp4
Of course, if the ffmpeg executable is not on the PATH, you should give its full path, for example: "C:\Program Files\Shotcut\ffmpeg" ... ... ...
on Windows. If the path has spaces in it, like the one above, always put the whole thing in quote marks (")!
It is very similar on other operating systems, for example: /apps/Shotcut/ffmpeg ... ... ... (or something similar)
Also, of course, you can create a batch file that accepts parameters to make the whole thing even easier.
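As a cross-platform alternative to a batch file, a small script can loop over a folder and run the same ffmpeg command on each clip. This is only a sketch: the `FFMPEG` location and the folder/file names are placeholders you’d adjust for your system.

```python
import subprocess
from pathlib import Path

# Placeholder: point this at Shotcut's bundled ffmpeg if it is not on your PATH,
# e.g. r"C:\Program Files\Shotcut\ffmpeg" on Windows.
FFMPEG = "ffmpeg"

def convert_command(src: Path, fps: int) -> list[str]:
    """Build the ffmpeg command to re-time one file to the target fps."""
    dst = src.with_name(f"{src.stem}_{fps}fps{src.suffix}")
    return [FFMPEG, "-i", str(src), "-r", str(fps), str(dst)]

def convert_all(folder: str, fps: int) -> None:
    """Convert every .mp4 in `folder` to the target fps."""
    for src in Path(folder).glob("*.mp4"):
        subprocess.run(convert_command(src, fps), check=True)
```

Calling `convert_all("holiday_clips", 25)` would then produce a `*_25fps.mp4` copy next to each original.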
If you want to stick to your phone as “the camera”… then yes, there is usually no way to change the fps from 30 or 60 (NTSC) to 25 or 50 (PAL).
BUT try some pro camera apps, for example “Filmic Pro” or “Moment Pro Camera”! They may offer this option if your phone’s hardware is capable of it.
And one more thing to finish… Phones usually do not produce exact-fps files!
I mean, the set (and usually unchangeable) fps is 30, but the resulting recordings are not exactly 30.000 fps, but something random between ~29.5 and ~30.5 fps.
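A rough illustration of why this matters (the 29.8 fps figure is an invented example, not a measured value): a clip whose real rate differs from the rate the timeline assumes drifts out of sync the longer it runs.

```python
def av_drift_seconds(true_fps: float, assumed_fps: float, minutes: float) -> float:
    """Seconds of audio/video drift after `minutes` of playback when a clip's
    real frame rate differs from the rate the timeline assumes.
    Negative means the video ends early relative to real time."""
    frames = true_fps * minutes * 60   # frames actually recorded
    shown = frames / assumed_fps       # seconds the timeline takes to play them
    return shown - minutes * 60

# e.g. a "30 fps" phone clip that really recorded at 29.8 fps:
print(round(av_drift_seconds(29.8, 30.0, 10), 2))  # -4.0 -> 4 seconds off after 10 minutes
```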
So I recommend converting ALL footage to the desired fps if it was recorded with a phone.
Why? Shotcut converts them in real time in a manner very similar to ffmpeg, except if you use ffmpeg’s frame interpolation filters, which you do not show. Moreover, the ffmpeg command lines you show are very basic and under-specify things. I recommend using a tool like HandBrake or Shotcut’s Properties > Convert unless you are an advanced ffmpeg command line user.
[quote=“shotcut, post:4, topic:31432, full:true”]
… … … the ffmpeg command lines you show are very basic and under-specify things. … [/quote]
Yes, that is what I said too: “The very easiest command line is (you can make it much more ‘pro’ if you study the ffmpeg documentation thoroughly)”.
Well sure, but it’s already done now.
I’m not sure adding ffmpeg into the mix would be worth it, as it adds one extra (lossy) conversion that I assume Shotcut will do in the end anyway.
I was leaning towards 60 fps too, thanks.
Do you know how the app deals with the extra frames? Does it duplicate 10 of them, spread evenly across each one-second interval, or something else? I did a test encode and the result was actually not noticeable (I didn’t have any smooth pans in this test, though).
Indeed, I’ve mixed different resolutions and framerates and it seems to handle them with no problems during editing.
Notice that the sonar line is ghosted because the conversion used blending of nearby frames for motion interpolation.
For line-art videos such as cartoons and animation, this ghosting is probably not desirable. However, for live-action footage that contains motion blur, this ghosting will simply blend into the motion blur and perhaps look smoother than dropping or duplicating entire frames.
And this is the wrong solution, as it can produce very distracting stutter: after every 5 moving frames, the last frame is duplicated, i.e. frozen. So every 6th frame is a freeze/stall.
The good solution is to regenerate (re-interpolate) the whole thing from 25/50 to 30/60 fps or vice versa.
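The “every 6th frame is a freeze” pattern is easy to verify with a little arithmetic, using a naive nearest-frame model (not necessarily Shotcut’s exact algorithm):

```python
# For 50 -> 60 fps, each output frame i shows roughly source frame i * 50/60,
# rounded to the nearest integer (ties rounded up for a regular pattern).
mapping = [int(i * 50 / 60 + 0.5) for i in range(12)]
print(mapping)  # [0, 1, 2, 3, 3, 4, 5, 6, 7, 8, 8, 9]

# Positions where the picture does not advance (the duplicated/frozen frames):
freezes = [i for i in range(1, 12) if mapping[i] == mapping[i - 1]]
print(freezes)  # [4, 10] -> one freeze every 6 output frames
```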
What is the source material? Line-art such as a cartoon is a very bad candidate that leads to many noticeable artefacts.
What interpolator is being used? Simple blend mode as I demonstrated above? Or a GPU-accelerated optical flow interpolator like Resolve has? One is not necessarily better than the other depending on the source material, and both create their own types of artefacts.
How much time is available? Using the motion estimation interpolator in FFmpeg can take 400x the source material’s duration for computation time. Re-interpolating a 2-hour wedding video in 4K could take weeks.
I made a demo video last year of the artefacts created by interpolation, along with computation timings for each setting:
Note that using ffmpeg with just the -r 25 or -r 60 option does not create motion-interpolated output. With those settings, ffmpeg will drop or duplicate frames just like Shotcut does natively. Motion-compensated interpolation is implemented with filters (such as minterpolate) that have to be specifically added to the filter graph.
Also, here is a breakdown I did earlier of why 24fps movies interpolated up to 60fps don’t look as good as people wished, mostly because interpolators struggle with motion blur and objects that rotate: