OK, then what is the use of exporting the video at 60 FPS?
There are a few use cases:
If the source is 60fps, then exporting at 60fps is natural.
If the source is 24fps, but the target is 30fps (60 interlaced) for broadcast on an NTSC television station, then changing the frame rate is a way to play the source video on a broadcast playout server. The station would reject 24fps material because it isn’t immediately ready for over-the-air transmission. This is a case where people specifically want to keep the “non-smooth” look of 24fps even though playback is at a higher frame rate, because they want the movie to still look like a movie.
If you’re making a documentary movie and gathering sources from all over the world in all kinds of frame rates, then it becomes necessary to convert all footage into a common frame rate for the final movie, preferably without changing the look (smoothness) of each clip. Shotcut can do this on-the-fly without pre-processing, which is one of its great features.
Some video editors like Lightworks and Blender require that all source footage be in the same frame rate before editing can even begin. Failure to do this will cause video to lag or run ahead when added to the timeline. So, converting 24fps to 30/60fps might be necessary just to appease the editor. The goal isn’t to change the look… the goal is to preserve the look at a different frame rate that is common to all clips going on the timeline. Shotcut can do this pre-processing step for other editors (although command-line FFmpeg would more likely be used).
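As a concrete illustration of the 24fps-to-broadcast case above, here is a minimal Python sketch (not from the thread, and not any tool's actual code) of the classic 2:3 pulldown cadence, where every four film frames become five interlaced video frames:

```python
def pulldown_23(frames):
    """Expand 24fps film frames into 30fps interlaced video using the
    2:3 cadence: each film frame is held for 2 or 3 fields in turn,
    so 4 film frames -> 10 fields -> 5 video frames."""
    cadence = [2, 3]
    fields = []
    for i, frame in enumerate(frames):
        fields.extend([frame] * cadence[i % 2])
    # pair consecutive fields into interlaced video frames
    return [(fields[i], fields[i + 1]) for i in range(0, len(fields) - 1, 2)]

film = ["A", "B", "C", "D"]    # four consecutive 24fps film frames
video = pulldown_23(film)      # five 30fps frames: AA, BB, BC, CD, DD
```

Because every output frame reuses whole source frames, the 24fps motion cadence is preserved, which is exactly why the movie still looks like a movie after conversion.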
Posted by suite_suit on Reddit - 1 year ago
If you have 24 fps footage with each frame a unique slice in time, then when you “convert” that footage to 60 fps there are only two options. The first option is to just play back the footage at 60 fps, which means 10 seconds on the clock will play back in 4 seconds, so the motion will be sped up. 240 frames divided by 60 = 4.
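The arithmetic in that first option can be written out directly (a trivial sketch using the numbers from the post):

```python
# Option 1: play 24fps frames straight through at 60fps -> motion speeds up.
source_fps, target_fps = 24, 60
clip_seconds = 10
total_frames = source_fps * clip_seconds       # 240 frames captured
playback_seconds = total_frames / target_fps   # 240 / 60 = 4.0 seconds
speedup = clip_seconds / playback_seconds      # motion runs 2.5x too fast
```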
The second option is to keep the playback time of your 10-second clip the same by showing some of the frames more than once. You can do that by adding a pulldown scheme, by time remapping, or by using a plug-in like Twixtor that takes the pixels in two frames and calculates their new position based on their projected movement. For example, let’s say frame 1 has a rotating pointer that is vertical and pointed straight up, and in frame 2 the pointer is horizontal and pointed to the right. If you were converting to 48fps instead of 60, then you would need a new frame between 1 and 2 with the pointer pointed up and to the right at 45º. This is very difficult for software to do with 100% accuracy, especially if there is a lot going on in the scene.
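The rotating-pointer example boils down to simple linear interpolation; real optical-flow tools have to estimate this per pixel, which is where the difficulty lies. A minimal sketch (the function name is illustrative, not from any tool):

```python
def interpolate_angle(angle_a, angle_b, t):
    """Linearly interpolate the pointer's angle between two frames.
    t=0.0 gives frame 1, t=1.0 gives frame 2, t=0.5 the new in-between frame."""
    return angle_a + (angle_b - angle_a) * t

frame1 = 90.0   # pointer straight up
frame2 = 0.0    # pointer to the right
in_between = interpolate_angle(frame1, frame2, 0.5)   # 45.0 degrees
```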
Hey, thanks for the reply. You are saying that the video should be captured at 60 FPS. What is the use of changing the FPS to 60 when it has no effect? And I need to ask another question:
Have a look at the following videos:
Well, I hope you can see the difference in the above videos. The original video (I mean the original trailer from Marvel) is in 24 FPS, but the above videos were converted from 24 to 60 FPS. You told me that changing the frame rate does not have any impact, so how do these videos look so much smoother?
Also look at the above video; it looks so realistic and smooth. The video title says Image Interpolation. What do you mean by that? Is it different from what I did first?
Before I switched to Linux, I was using Windows 7 as my primary OS. I had used many video editors, including Filmora, Vegas Pro, Shotcut, and Lightworks. A few months ago I needed to convert a video (30 FPS) to 60 FPS, so I asked a question under the above video.
Here is a screenshot of that comment:
He used Premiere Pro for that video. In the above picture, he mentioned Optical Flow. What do you mean by that? Is it available in Shotcut?
As I am new to editing and still learning, I don’t know anything about that.
Well, I don’t want to make videos like that; I need it for my new project. Many YouTubers nowadays upload 60 FPS videos. I want to know how that is possible, and can I do it in Shotcut?
Yes, it is very difficult when there are many scenes in the video. So does converting a video (24 FPS) to 60 FPS reduce the total duration of the video?
So that means:
Converting a video to 60 FPS is harder than creating your own 60 FPS project and exporting it. Is that correct?
If we want the video to be in 60 FPS, it is best if the recorded video is in 60 FPS.
from the FAQ re: Optical Flow
How do I change the speed of the video?
This is implemented as of version 16.01. With a clip open in the source player or selected in the timeline, choose Properties and look for the Speed field. Shotcut only provides simple frame dropping or duplicating. However, if the frame rate of your source footage is higher than the Video Mode (under Settings menu), then you can achieve a fairly smooth slow motion. If you are looking for more sophisticated results using more advanced optical flow techniques, we recommend you try the free, open source, cross-platform tool slowMoVideo.
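The “simple frame dropping or duplicating” the FAQ mentions amounts to mapping each output frame to the nearest already-existing source frame. A minimal sketch of that idea (my own illustration, not Shotcut’s actual code):

```python
def duplicate_map(n_out, src_fps, dst_fps):
    """Index of the source frame shown by each output frame when
    converting by duplication/dropping; no new frames are synthesized."""
    return [int(i * src_fps / dst_fps) for i in range(n_out)]

# One second of 24fps video exported at 60fps: 60 output frames,
# but still only 24 distinct images, so the 24fps look is preserved.
mapping = duplicate_map(60, 24, 60)
```

The same function covers frame dropping: converting 60fps to 24fps with `duplicate_map(24, 60, 24)` simply skips source frames instead of repeating them.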
So after converting the video to 60 FPS, do we need to change the speed of the video?
Can you …
Just now I used Blender and checked it.
1. Before changing the frame rate, I imported the video into Blender.
2. After changing the FPS to 60.
The audio gets longer and the duration of the video gets much shorter. Can you tell me why? I don’t want to change the speed at all; I need a smoother result like the ones in my previous post.
My second post provided numerous examples of why a no-impact rate change is desirable.
Regarding the Marvel trailer, this is an example of optical flow interpolation. Adobe Premiere and DaVinci Resolve can do this, among others. Shotcut cannot.
The idea behind interpolation is to take two frames, find edges in common between the two frames, and then mathematically bend the edges to generate in-between frames. The success of this method depends on edges in the two frames being properly discovered and correlated.
So let’s break down the Marvel trailer. The first thing we notice is how selectively the clips for the trailer were chosen. The majority of clips have very little movement, which makes it fall into the video game recording exception I made at the end of my first post. We see a spaceship slowly floating in space; we see people slowly marching across the floor; we see a soap opera stare; they want us to believe a quinjet is flying fast across the sky, but the camera lens is so wide-angle that the overall movement is actually next to nothing. By choosing clips with slow or zero movement, there is no motion blur to retain the 24fps look, and there are no blurry edges to confuse the edge-detection algorithm in the interpolator. The algorithm will have a very high chance of positively correlating edges between frames.
Now let’s look at where it fails. At 1:05, Falcon turns into a dust cloud. Edges don’t exist in a ball of smoke. The interpolation algorithm has a difficult time figuring out the movement of the cloud from frame to frame because there are no hard edges to easily track motion. The result is that the interior of the cloud looks pretty mushy compared to other parts of the frame. It still does pretty well overall because the cloud is moving pretty slow, which means general mesh distortion in the absence of a clean edge will still look decent. But a fast-moving cloud? That would look awful.
Now let’s watch optical flow struggle with motion blur. Step through 1:50 frame at a time and watch Black Widow’s hands as she swaps magazines. Because her hands are moving fast and they were captured at 24fps, there is a lot of motion blur. Interpolation can’t find an edge in the blur, so artefacts like this happen:
These artefacts happen pretty much anytime something moves fast on the screen or has motion blur around it. Now you see why the person who made the trailer was extremely selective about picking clips that don’t move fast. It looks awful otherwise.
Another example, from the same YouTube channel, “Rambo: Last Blood” trailer. At 0:11, there is a windmill spinning. The edges of the blades are blurry from movement, and the interpolation algorithm is very confused about which edges match between frames because interpolation algorithms generally don’t understand rotation. They only track straight-line movement. Therefore, blades of the windmill randomly appear and disappear depending on whether the algorithm matched an edge or not. In this picture, the blades are gone in the upper-left. It looks awful to watch in real-time.
And at 1:14, this soldier has two faces due to unmatched edges in motion blur:
So let’s summarize. Shotcut changes frame rate by duplicating or dropping frames as necessary to match the target frame rate. Interpolation is a totally separate technique where the computer generates artificial frames by matching edges between frames. Shotcut does not provide interpolation tools. For editors that do provide interpolation, it’s worth noting that it is extremely CPU intensive, and it would not surprise me if it took an entire day to render one of those two-minute trailers. Even with all that processing time, the end result will still have artefacts anytime edges can’t be matched or anytime edges are lost in motion blur. This is why it’s best to capture in 60fps if you want the 60fps look. There are no artefacts and it eliminates interpolation processing time. For people who know how to analyze what they’re seeing, watching interpolated video is not a great viewing experience, and definitely not an emotional one when artefacts break the illusion of reality.
Speaking of which, a question if I may ask without sounding critical because that’s not the goal… Comparing the look of the Marvel movie at 24fps vs 60fps, is there really a strong preference for the 60fps look in cinema? To me, it looks like the humans were taken over by robots and the emotional impact feels totally different… mechanical, almost… maybe tense, too. I realize this is a totally subjective thing. I’m just wondering if other people feel the same way or prefer the 60fps look. I could totally see it working for 3D CGI movies or video game playback.
Think of it this way… If you’re in Blender and you want to make a 4-second intro at 60fps, that’s a total of 240 frames needed for the intro. If four seconds of 24fps footage is dropped onto the timeline, that video only provides 4s*24fps=96 frames, which is less than half of what’s needed to last four seconds at 60fps. Blender absorbs the video frame-for-frame without stretching, duplicating, or interpolating it to the length you intend. This is why Blender has the prerequisite that all sources must first be converted into the FPS of the timeline before editing can begin, and is a big reason I don’t use Blender for editing. So, this means you’ll need to convert that 24fps video to 60fps using an external process. If you use Shotcut, you’re limited to frame duplication which will retain the look of the 24fps source. If you want the 60fps look, you can try optical flow with Premiere or Resolve, but your mileage will vary… a lot.
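The frame budget in that Blender example works out like this (same numbers as above):

```python
# Frame budget for a 4-second 60fps Blender timeline fed 24fps footage.
intro_seconds, timeline_fps, source_fps = 4, 60, 24
frames_needed = intro_seconds * timeline_fps    # 240 frames for the intro
frames_supplied = intro_seconds * source_fps    # only 96 frames in the clip
# Blender absorbs the clip frame-for-frame, so it runs out early:
clip_runs_for = frames_supplied / timeline_fps  # just 1.6 seconds on screen
```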
Thanks for this thorough explanation. I think I understood it.
OK, now I understand the difference between my video and those YouTube videos. I don’t want to do interpolation at all. I will continue my project by recording the videos in 60 FPS instead of 24.
Now I have another doubt.
I just played the videos (the exported one and the original source video) on my TV. I compared both videos and they were the same; the 60 FPS export did not have any visible impact.
But when I speed up the video with the TV remote, I can see a difference. The original video (24 FPS) changes frame by frame when I speed it up, but when I speed up the other video (60 FPS), it changes continuously (not frame by frame) and looks like interpolation. Is that a reason for broadcasting stations not using 24 FPS?
I have another question. How will the video look if I use transitions and effects in the middle of it? What if I merge in another clip created inside Shotcut that shows text on a black background with motion? Will the overall exported video look the same, or will these have an impact on the export?
I mean, the source video would still look like 24 FPS when exported at 60 FPS, but what about the text and transitions? Converting a video from 24 FPS to 60 FPS is different from creating a video inside a 1080p, 60 FPS video mode. So tell me how the video will look.
Thanks. Did you see this post: Exported to 60 FPS but still the video is in 24 FPS?
Just to verify, are you recording brand new material with a camera set to 60fps? Or are you playing a 24fps movie and screen capturing or otherwise ripping it at 60fps? The first option will give you the smooth look. The second will not, as it is basically the same as Shotcut’s method of duplicating frames, which will preserve the 24fps look.
Caveat on recording with a camera at 60fps… the smooth look happens because the shutter speed is closer to 1/120th of a second. If the camera is in automatic mode or is a cell phone, then the camera is likely using a slower shutter speed in order to collect enough light to make the image bright. 60fps is only half of the formula to the smooth-style video you want. Shutter speed is the other half. If you have a manual camera set to the recommended 1/120th or 1/125th shutter, that’s over twice the speed of a 24fps shutter, which means your set lighting will have to be over twice as strong to make an equally bright image in the shorter time interval.
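The shutter-speed numbers above follow the common 180-degree shutter rule (shutter ≈ 1 / (2 × frame rate)); a quick calculation, assuming that rule:

```python
def shutter_180(fps):
    """Recommended shutter speed under the 180-degree rule, in seconds."""
    return 1.0 / (2 * fps)

shutter_24 = shutter_180(24)           # 1/48 s, the classic film look
shutter_60 = shutter_180(60)           # 1/120 s, as recommended above
light_ratio = shutter_24 / shutter_60  # each frame gets 2.5x less exposure at 60fps
```

That 2.5x exposure gap is why the set lighting has to be over twice as strong to keep the 60fps image equally bright.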
They should be. If a video was captured at 60fps and exported at 60fps, it is literally the same video, frame for frame. No motion processing has taken place. The improvement in smoothness compared to 24fps should have occurred during the capture phase, which means playback of the original video (before export) should already look smooth. If it doesn’t, one of the problems listed above has probably happened.
Caveat 2: If this 60fps video was captured with a camera, bear in mind that 60fps isn’t a total cure to smoothness. In cinema movies, actors intentionally move slowly and smoothly, and the camera gear is mechanically stabilized. The final video looks smooth because the camera is literally capturing a smooth production. Maybe not 60fps hyper-smooth, but still smooth compared to home video. There is no micro-shake from handheld video, and there are no quick, jerky, bouncy movements like somebody vlogging while walking down the street. 60fps isn’t going to fix the capture of a bad production. So, I don’t know what your video content is, but if it involves live action, part of the burden will be on you to deliver your performance as smoothly as possible. You were correct that 60fps looks more realistic in a sense, because it is capturing more frames of the action. Well, if the action was jerky, then 60fps realism is going to preserve that jerkiness rather than eliminate it. 60fps is like an amplifier. Smooth things look smoother, and jerky things look jerkier.
Apologies, I am not able to visualize what this means. Is there another way to describe it? If a video isn’t changing frame by frame, I’m not understanding how it can change continuously.
It will be fine. Items generated by Shotcut (like text) will be generated at the frame rate of the timeline. These items do not have motion blur, so the end result will be the 60fps look you wanted.
Thanks. Now I feel like the problem is solved. I don’t need to know why speeding up the video looks continuous on my TV. I am recording the videos with a camera set to 60 FPS; I am not doing the screen-capturing thing.
So, here are my conclusions; tell me whether they are correct or not.
To get the 60 FPS look in the final video, I should record the videos in 60 FPS.
Optical flow interpolation is different from exporting a 24 FPS video at 60 FPS. It cannot be done in Shotcut; it consumes a lot of CPU power and rendering/exporting takes a long time. It can be done in other software, but it is not 100% accurate and may produce artefacts wherever there is motion blur.
On the timeline, the 24 FPS video will look the same as the original even when exported at 60 FPS, but things created inside Shotcut depend on the project settings.
It is better to record all the videos for the project in 60 FPS, so that they all look the same way and frame rates are not mixed.
That’s all. And I need to know another thing: how many transitions are there in Shotcut? Can you share them?
Great summary! I might possibly extend #1 to include setting the camera’s shutter speed to 1/125th (which may require bringing in more lighting) if you really want that hyper-smooth look.
I defer to @jonray since he has created tons of transitions for Shotcut along with tutorials. He or @Hudson555x can also walk you through the stock transitions like wipes. My web browser can’t run Shotcut, so creating screenshots from another computer is painstaking for me.
How did you get the idea that she made film recordings with a camera? I am still in the context that she is recording with OBS-Studio.
Author’s own words. There is a lot of text in here, so it’s easily missed.