Taking these three quotes together, it is becoming more and more apparent that the nature of time modification is fundamentally different from the nature of all other filters which use Keyframes.
The more the Keyframe structure is adjusted to accommodate the unique problems of time modification, the more negative impact it will have on the use of Keyframes with other filters.
The need for ripple, as so ably illustrated by @Austin above, is but one example of this unique nature; Keyframe Ripple would be a disaster with any other filter.
This difference in nature gives the suggestion of @MusicalBox above great merit.
This is why it’s so confusing. The developers have conditioned us to expect a certain behavior based on the consistency of how the other keyframes behave, but this one throws it out of whack because it is the one exception that behaves counterintuitively to what we’re used to. This is what makes it hard to grasp and, more importantly, hard to teach.
Looking at all of the discussion and history of this, as an engineer, it appears to contain a mistake that I have made many times in my engineering career: the attempt to conflate fundamentally different processes into one “simple, easy, familiar” interface.
I believe the problems will continue until it is accepted that, because it is so fundamentally different, it must be handled through a separate UI within the Shotcut UI framework.
Just as adding Keyframes required adding new buttons and a new window, time modification requires the same.
In an engineering analysis that I did this morning, I identified that adding time modification requires not two but three separate Keyframe interfaces (although two of them could share a UI, separated internally via radio buttons):
Keyframed legacy filters referenced to original clip frames
Keyframed time modifications
Keyframed legacy filters referenced to time-modified frames.
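To illustrate the distinction between the last two, here is a toy sketch in Python (made-up helper names and numbers, not Shotcut’s actual architecture): a filter referenced to original clip frames evaluates its keyframes at the source position that the time modification points to, while a filter referenced to time-modified frames evaluates them at the output position directly.

```python
# Hypothetical sketch (not Shotcut code): the same keyframed parameter
# evaluated in two different time references.

def lerp_keyframes(keyframes, t):
    """Linear interpolation of {time: value} keyframes at time t (seconds)."""
    times = sorted(keyframes)
    if t <= times[0]:
        return keyframes[times[0]]
    if t >= times[-1]:
        return keyframes[times[-1]]
    for t0, t1 in zip(times, times[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)
            return keyframes[t0] + f * (keyframes[t1] - keyframes[t0])

def remap(output_t):
    """Toy time modification: 2x slow motion (output time -> source time)."""
    return output_t / 2.0

blur = {0.0: 0.0, 4.0: 10.0}           # keyframed legacy filter, 0..10 over 4 s

output_t = 4.0
source_t = remap(output_t)             # = 2.0 s into the original clip

# Referenced to time-modified frames: evaluate at the output position.
print(lerp_keyframes(blur, output_t))  # 10.0
# Referenced to original clip frames: evaluate at the remapped source position.
print(lerp_keyframes(blur, source_t))  # 5.0
```

Which reference a given filter should use is exactly the ambiguity that separate interfaces would make explicit.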
Is this a way of saying that if time is stretched via the Time Remap filter, then keyframes on any later filters in the stack should also be stretched/rippled to stay in sync with the time shift?
Using the skateboard video as an example again: if Time Remap is added first and the curve is still the default 1:1 real time, and then I add the RGB Shift filter and keyframe it for when the wheels land to really drive home that 1980s vibe, what happens if I now go back and add slow motion in Time Remap? If I stretch time, the RGB Shift keyframes would need to stretch also to remain synchronized with the wheels landing.
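A rough sketch of what that stretching would involve, assuming a piecewise-linear remap curve and made-up numbers (none of this is Shotcut’s internal code): the RGB Shift keyframe is pinned to a source moment, so after the curve changes it has to be moved to the output time where the curve reaches that same source moment.

```python
# Hypothetical sketch: repositioning a keyframe of a later filter after the
# Time Remap curve changes, so it stays pinned to the same source moment.

def source_to_output(remap_points, source_t):
    """Invert a piecewise-linear remap curve given as (output_t, source_t) pairs.
    Assumes source time is monotonically increasing (no freeze or reverse)."""
    for (o0, s0), (o1, s1) in zip(remap_points, remap_points[1:]):
        if s0 <= source_t <= s1:
            f = (source_t - s0) / (s1 - s0)
            return o0 + f * (o1 - o0)
    raise ValueError("source time outside the remap curve")

wheels_land_src = 3.0                    # wheels touch down 3 s into the source

# Original curve: 1:1 real time, so the RGB Shift keyframe sits at 3.0 s.
realtime = [(0.0, 0.0), (6.0, 6.0)]
print(source_to_output(realtime, wheels_land_src))   # 3.0

# New curve: 2x slow motion, so the same source moment now plays at 6.0 s,
# which is where the RGB Shift keyframe would need to move to stay in sync.
slowmo = [(0.0, 0.0), (12.0, 6.0)]
print(source_to_output(slowmo, wheels_land_src))     # 6.0
```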
The million dollar question is defining the line between what the user has to adjust and what Shotcut will try to adjust automatically when the flow of time is changed.
Realistically, this filter will probably have some very specific workflow around it for greatest efficiency. As in, Time Remap should be applied first and finalized before adding any additional filters and keyframing them. Attempting to shift keyframes of other filters automatically when time is stretched will probably break as much as it fixes if the other filters are using Smooth keyframes. Results may be predictable with Linear, but the look (timings) will change due to Smooth being curved. The user would probably have to fine-tune any auto-adjustments anyway.
This filter is super powerful and cool. It just has a more rigid workflow than others at this point, which is fine so long as everyone is aware of it.
This filter reminds me of a Linux application called slowmoVideo. Documentation: slowmoVideo.granjow.net
When I saw the graphical representation of time in Time Remap (with the horizontal line for freeze and the downward ramp for reverse), I remembered the interface of this application.
Reading this documentation, I understood certain concepts a little better (just a little).
The explanation about the interpretation of the two axes helped me a lot to understand why a flat (horizontal) line means freezing the video.
If, as @Brian says, more filters based on this dynamic are coming, maybe it would be convenient to enable a module (or even a layout) for this group of filters. Just a thought.
That’s exactly what it was inspired by. It was mentioned several times over the years on this forum. Here’s an example:
There are still things that I think need to be worked out and reconsidered in Time Remap. When the test versions of Time Remap were being posted, I didn’t have as much time as before to do the testing I wanted, so I hoped others would weigh in. I have to admit it was disappointing to see so little participation after the alpha thread. I had hoped more people would be involved in the beta and final release threads and post feedback there, rather than waiting until after its release to start trying to figure it out. The perfect time to have done that was in the beta and final release threads, which could also have helped its development.
Still, I hope some issues can still be worked out.
That is how much time is usually scheduled for a change this significant.
No less than six months in beta, AFTER all bugs that can be found in alpha testing have been found, corrected, and tested, and the version released as beta appears to be completely stable.
Sure, but that isn’t how it was scheduled for this. The point I am making is that I see a lot of activity here in this thread about figuring out Time Remap, when the best time for that was really in the several threads that were made about it before its official release. I’m not berating anyone here; I’m just saying that this feedback would’ve been perfect in those previous threads, as it would’ve helped direct its development.
It’s hard to work with a new tool, test it and give feedback when you have no clue how to use that tool, or even what exactly it’s supposed to do…
A few people (including me) mentioned in those alpha and beta threads that they didn’t understand how to make Time Remap work. Some comprehensive instructions might have helped involve more people in the testing phase. I say comprehensive because the information given in the first post of the alpha release was like trying to read Chinese to me and was of very little help. I finally figured out how to kind of make Time Remap work, but I feel that I am still missing a lot of info to be able to use it to its full potential.
My thoughts on this particular execution were definitely brought up during the alpha stage threads. That’s why I have no qualms about mentioning it again after the feature was released.
Now that I have some time for testing, I can comment on something and spend more time reading, translating, and experimenting.
I don’t even know if this filter will be useful for the kind of projects I currently do, but I hope my input will be of some help.
A while ago, I took the explanation about the axes in slowmoVideo and made some graphics based on my interpretation.
I lengthened the track to place the elements I wanted to visualize.
I don’t know if it will be understood a little better this way.
…imagine that you are walking on the canvas from left to right. On the left, i.e. behind you when walking, there is a wall showing all frames of the video you recorded, one after another, starting at the bottom. Each step you do is a frame, and as you walk, you take a look back at the wall after each step.
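To put the walking analogy into concrete numbers, here is a toy sketch with made-up values (not slowmoVideo’s actual data or code): each step along the x axis is an output frame, the curve’s height is the source frame you look back at, a flat stretch repeats the same frame (freeze), and a descending stretch walks backwards through the footage (reverse).

```python
# Toy sketch of the "walking along the curve" idea: for each output frame,
# the curve's height tells you which source frame you look back at.

curve = {                       # output frame -> source frame (made-up values)
    0: 0, 1: 2, 2: 4,           # slope 2: fast forward
    3: 4, 4: 4, 5: 4,           # flat: freeze on source frame 4
    6: 3, 7: 2, 8: 1,           # slope -1: reverse
}

for out_frame, src_frame in curve.items():
    print(f"output frame {out_frame} shows source frame {src_frame}")
```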
I quickly read the documentation you shared above, and it took me a while to assimilate what they meant. Adding the character walking on the line, and the video frames on the left, makes it a lot easier to understand.
Sometimes technical language can be simplified somewhat to reach more people, and the example explanation from slowmoVideo seemed very simple to understand, so I thought this would help others at a quick glance.
You found a tricky bug and you found a good workaround to fix it. The “Set Speed” buttons can circumvent the maximum and minimum value constraints. I will make a note of this, but I do not know if I will make it a priority to fix for the next build.
That may be something to consider. The current UI design is a direct response to many historical requests for “Where is the speed filter?”. So I expect many people expect time related features to be among the rest of the filters in the UI.
I have not used a lot of other video editors. But I have watched tutorials of time remap features in other editors. I have observed the following things about time remap features in other editors:
The workflow is often iterative - going back and making changes as the output unfolds
They almost always start by extending the clip way longer than they need and then they end by reducing the clip length back to where they want it to finally end
They almost always work left to right - which stands to reason because that is how our minds perceive the timeline
So I am not so sure if the Shotcut implementation is very different from other implementations.
This makes me curious to understand:
Are the difficulties with time remapping caused because Shotcut has implemented it so differently than other tools?
OR
Are the difficulties with time remapping because these users are generally unfamiliar with time remapping, and they happen to be learning it for the first time in Shotcut?
I know that the Shotcut implementation doesn’t have all the bells and whistles of other editors (yet). But I was kind of expecting someone with some experience to make a tutorial video showing a productive workflow. Maybe someone will get around to that.
Maybe we can map some of those concepts into the Documentation. It looks like Dan has been making some improvements to it already:
I understand that people are volunteers, and busy, and at different places in their learning journey. So I am happy to accept comments/suggestions any time. Just because the feature has been released does not mean we cannot improve upon it. But we are somewhat limited by the capacity of the volunteer developers.
For me it’s both. I actually think that this first release didn’t need to be as ambitious. Providing just the ability to change speed so that people can speed ramp would have been a game changer for people, without needing to add an additional axis to the keyframes.
As the developers, you have a huge influence on user behavior. By creating consistency, you can condition them to expect things with respect to how everything else interacts with the software.
Keyframes for zoom:
Horizontal line is default
Up is bigger
Down is smaller
Adding a keyframe to the center line brings it back to default
Keyframes for gain/volume:
Horizontal line is default
Up is louder
Down is quieter
Adding a keyframe to the center line brings it back to default
Keyframes for time remap:
Starts out with a slope
Horizontal line means freeze, not the default line
Up is faster, if you squint and move the keyframe to where you think it’s above the sloping line
You don’t need to add an additional keyframe at the peak of the speed, because without seeing an actual peak you need to visualize that the line is still speeding up, based on a guesstimate of where it is with respect to the imaginary sloping line
Down is slower (probably, if it’s a Tuesday); again, figure out where the original sloping line should’ve continued.
If you manage to actually speed or slow down the clip and you go back to the timeline, the clip still takes up the same space and will just play on repeat when the original sped up part finishes.
Not confusing at all.
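For what it’s worth, the arithmetic behind the graph is simpler than the UI makes it feel: each keyframe’s value is a point in source time, and the playback speed between two keyframes is just the slope between them. A rough sketch with made-up keyframes (not Shotcut internals):

```python
# Rough sketch: the speed between Time Remap keyframes is the slope of
# source time over output time. Made-up keyframes, not Shotcut data.

keyframes = [(0.0, 0.0), (2.0, 4.0), (5.0, 4.0), (7.0, 2.0)]  # (output_t, source_t)

for (o0, s0), (o1, s1) in zip(keyframes, keyframes[1:]):
    speed = (s1 - s0) / (o1 - o0)
    label = "freeze" if speed == 0 else ("reverse" if speed < 0 else f"{speed:g}x")
    print(f"{o0}s to {o1}s: {label}")
# 0-2 s: 2x fast, 2-5 s: freeze (horizontal line), 5-7 s: reverse
```

A horizontal segment gives a slope of zero, which is why it reads as a freeze rather than as “default speed”.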
On top of that, people need a frame of reference, because yes, the other video editing programs have also conditioned them to expect certain behaviors when they keyframe speed. Why is every single keyframe behavior the same except for this one? Are we trending toward changing all other keyframes to start with a slope instead of a horizontal line? I would love to make a definitive tutorial on this, as I am a huge cheerleader for this software, but I myself am unable to make heads or tails of the behavior of the keyframes, let alone the rationale for why it was executed this way when there was already a basis within the software for how keyframes should behave.
Don’t take any of this as negativity. Take it as passion for Shotcut and wanting it to be as good as it can potentially be. So take my personal opinion with a grain of salt.
Here’s what I thought the time remap keyframes would look like based on what I had been conditioned to expect. It would be pretty easy to explain…
I hate to compare Shotcut with other editors, but this representation is similar to how speed changes are done in Resolve. It looks a lot simpler and easier to understand.
@bentacular, I was reading this whilst drinking coffee, and it made me laugh so much I sprayed coffee all over my keyboard. I’m sending you the bill