360 Transform and Maybe Motion Tracker Combo

I'm having to manually re-center all my 360 videos so that they follow a consistent subject instead of wandering and rotating randomly. The only built-in way I see to do this is the 360 Transform filter. Since my 360 camera is on a selfie stick, and I might rotate it around at random while using it, this amounts to making keyframes as often as once per second of video, or even more frequently. When I'm dealing with videos that are 30 minutes long, this is extremely tedious and time consuming.

I had the idea to use the Motion Tracker filter to lock onto my face. I figured I might be able to translate the X data from that filter into usable offsets for the 360 Transform filter. I'm 99% sure this would work because I'm tech savvy (former web developer), and I could write a script to parse the Motion Tracker data in the MLT source and offset the values by a constant (every keyframe would have the same value added to the Motion Tracker's X to get the Yaw for the 360 Transform).
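Concretely, the mapping I have in mind is something like this (just a sketch of the idea; the constants and function name are placeholders, not anything from Shotcut):

```javascript
// Map a tracked X position (in pixels) on an equirectangular frame to
// a 360 Transform yaw (in degrees). One full horizontal wrap of the
// frame is 360 degrees, so the mapping is linear.
const VIDEO_WIDTH = 5760; // my 5.7K frame width (assumption)
const YAW_OFFSET = -180;  // the constant added to every keyframe

function xToYaw(xPixels) {
  return (xPixels / VIDEO_WIDTH) * 360 + YAW_OFFSET;
}

console.log(xToYaw(2880)); // frame center -> 0 with a -180 offset
```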

The problem is this: I can't get the Motion Tracker filter to produce any results. I'm fairly certain the sun will burn out before it gets anywhere close to 1% complete; I'd have to tell my heirs to keep my PC running for eons just to see it roll over to 1%. It's insanely slow, i.e. unusable. I've never had the patience to wait long enough to see it gain a single percentage point.

Is it this slow for everyone? If so, it's an unusable feature. I don't think it would finish if I left it sitting for a week! I'm used to waiting already… my 5.7K footage takes 3 or 4 hours or more to render a final movie file. But at that rate, I can at least see progress: within a couple of minutes, it rolls to 1% or 2%. The Motion Tracker filter, by contrast, never budges. Extrapolating, it would take anywhere from half a day to weeks (or possibly centuries, since I've never seen it reach even 1%).

A web search told me that the MOSSE match method is the fastest, but even that never showed any progress when analyzing a motion track.

Any chance it’s a nonlinear progress bar, like it’ll suddenly jump to 50% or 99% instead of 1%?

As a related issue, if possible, an improvement to the 360 Transform filter would be to allow arbitrarily large positive and negative numbers in the Yaw (essentially it would wrap each time it gets to the edge: 360, 720, 1080, 1440, etc.), or at least set a max of, say, 3600 to allow 10 full rotations. It's very common for 360 footage to rotate well beyond 360 degrees, and it's very difficult to create keyframes that continue a smooth animation when the max Yaw rotation is 360 degrees. I'm aware the value can be held or transitioned back to zero from 360, but in practice this doesn't work well.
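To illustrate the wrapping I mean, here's a sketch of how yaw keyframes could be "unwrapped" so the animation keeps going past 360 instead of snapping back (the array of per-keyframe yaw values is hypothetical):

```javascript
// Unwrap a sequence of yaw keyframe values in [0, 360) so a continuous
// rotation keeps increasing (360, 720, ...) instead of jumping back.
function unwrapYaw(yaws) {
  let turns = 0;
  const out = [yaws[0]];
  for (let i = 1; i < yaws.length; i++) {
    const delta = yaws[i] - yaws[i - 1];
    if (delta > 180) turns -= 1;       // wrapped backwards past 0
    else if (delta < -180) turns += 1; // wrapped forwards past 360
    out.push(yaws[i] + turns * 360);
  }
  return out;
}

console.log(unwrapYaw([350, 10, 30])); // -> [350, 370, 390]
```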

Unrelated, but also: there are problems with the smooth keyframe types in Shotcut. They appear to use node handles extended about 33% out from the nodes, but when one side of a node has a long gap and the other a short gap, the 33% of the long side pulls the keyframe curve wildly out on the short side. It's hard to explain, but the keyframe smoothing algorithm should recognize that the gaps between keyframes are unequal and scale the curve handles accordingly. As it is now, the curve can collapse in on itself and cause odd animation scenarios.

I understand that allowing more than 360 degrees would complicate the interface visualization, since the keyframe curve line would go beyond the "top" or "bottom" of where it shows on the timeline. I'm not sure what to suggest there. In my opinion, the vertical visualization of the Yaw line isn't critical anyway, since it's generally necessary to use the slider or type numbers rather than drag the keyframes vertically to achieve a result.

[Image: Shotcut's buggy keyframe nodes]

Above is an example of the buggy keyframe nodes. Since I have a lot of experience with vector editing, I know exactly what's happening. It's a smooth node, but the longer side dictates the position of the Bézier handle for the short side, so the curve goes well beyond the horizontal position of the previous node. In the illustration, there's a 20-minute gap until the next keyframe after the red-highlighted one, but the previous keyframe is only about 2 seconds prior, so the long gap destroys the smoothness of the line, making it go backwards behind all the other keyframes. Each node handle should pull the curve roughly 1/3 of the horizontal space toward the previous or next keyframe, while still being smooth in alignment (i.e. not a cusp node, but smooth, and NOT symmetrical).

Here's a graphical example of a proper smooth node/keyframe implementation. The node handles should pull with approximately 33% weight, and their length should depend on the gap between the nodes or keyframes, which obviously varies. As Shotcut currently implements it, wider gaps force their length onto the node handles, so the "sine wave" can fold in over itself, which shouldn't be possible on a timeline.

In a future advancement of the software, these nodes could be editable, so the weight of the smoothing could be controlled by the user… if it is desirable. But for now, the node handles should pull unequally on either side of the node to match 1/3 of the space toward the next or previous node.

The important detail is that each Bézier handle has a different length; see the bottom center node as an example, where one handle is longer than the other. I also purposely gave the top right node handles of differing length as a second example.
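In code, the handle rule I'm describing is roughly this (a sketch only; the {time, value} keyframe objects are hypothetical):

```javascript
// Each side of a smooth node gets its own handle length, about 1/3 of
// the gap to the neighbor on that side, so a huge gap on one side
// can't drag the curve backwards on the other. Both handles still
// share one tangent direction, keeping the node smooth (not a cusp).
function handleLengths(prev, node, next) {
  return {
    left: prev ? (node.time - prev.time) / 3 : 0,
    right: next ? (next.time - node.time) / 3 : 0,
  };
}

// 2-second gap before, 20-minute gap after: wildly different lengths.
console.log(handleLengths(
  { time: 0, value: 10 },
  { time: 2, value: 50 },
  { time: 1202, value: 50 }
)); // -> { left: 0.666..., right: 400 }
```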

There is a lot going on in your post. It is usually best to keep each post focused on one topic. I don’t have any advice about your Motion Tracker and Yaw questions.

Regarding the smooth keyframes… you do not state which version of Shotcut you are using. But in the recent release (23.11) I changed the smooth keyframe interpolation to avoid overshoots and cusps. I also changed the way it renders in the keyframe panel. Additionally, I added more easing types. Have a look and see if it works better for you. (BTW, none of the keyframe types use Bézier curves. The smooth type uses Catmull-Rom.)
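For reference, a uniform Catmull-Rom segment is evaluated like this (the textbook formula, not the actual Shotcut source):

```javascript
// p0..p3 are four consecutive keyframe values; t runs 0..1 across the
// segment between p1 and p2. The curve passes through p1 and p2.
function catmullRom(p0, p1, p2, p3, t) {
  const t2 = t * t, t3 = t2 * t;
  return 0.5 * (
    2 * p1 +
    (-p0 + p2) * t +
    (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2 +
    (-p0 + 3 * p1 - 3 * p2 + p3) * t3
  );
}

console.log(catmullRom(0, 10, 20, 30, 0.5)); // -> 15 on a straight run
```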


Oh nice. I didn't realize there was a newer version available. I had recently checked, so it must have been released not long ago. I'll definitely upgrade, and I'm sure that l'il issue will go away. If it gets rid of the overlapping and kinking keyframes, fewer keyframes should be required, which will save a little time in the long run. More easing types will be great! Thanks.

Regarding the infinite time to get results from the Motion Tracker, I wonder if it has anything to do with the size of the videos I'm working with… 5.7K. It's probably a rare use case. Most people probably edit in 1080p or 4K max. My videos are 5760x2880, which is a lot of pixels to process. I'd think it would only analyze a smaller area around the green Motion Tracker box, but still.

My CPU and GPU also aren't brand new, but they aren't unusably slow either. I have never used Adobe Premiere or After Effects to compare their motion tracking tools, but I can't imagine them taking this long, as it would render the tools unusable.

I might try a couple of different things. First, I'll see if I can get DaVinci Resolve to do a motion track on the footage and output data in a form I can use to feed x-axis data into the MLT XML for the 360 Transform Yaw. Regarding DaVinci Resolve, the only reason I don't use it as my overall editor is that the free version supports a maximum of 4K resolution. That's why I've been using Shotcut. Plus I generally like fiddling with open source stuff. I still use DaVinci Resolve to create color correction LUTs, since its color tools are far more powerful than anything else out there.

The other idea I have is to downscale the video massively before running the Motion Tracker and see if the filter runs faster that way. If it does, I can just multiply the x-axis keyframe offsets in the XML by the downscale factor, and it would still probably be accurate enough for my use.
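The scale-back step would be trivial (illustration only; the names and numbers are made up):

```javascript
// If tracking ran on a proxy downscaled by `factor`, multiply the
// tracked X positions back up before writing them into the XML.
const FULL_WIDTH = 5760;
const PROXY_WIDTH = 1080;
const factor = FULL_WIDTH / PROXY_WIDTH;

const proxyX = 540;            // tracked X on the proxy
const fullX = proxyX * factor; // equivalent X at full resolution
console.log(fullX);            // -> 2880

// Note: if yaw is computed as (x / width) * 360, the factor cancels
// out, so using the proxy X with the proxy width gives the same yaw.
```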

I don’t mind if a Motion Tracker effect takes hours. I’d rather have it do the work for me than painstakingly add manual keyframes from start to finish. But if I wait for hours and don’t even see it reach 1%… that’s obviously a no-go.

Have you tried using proxies with the Motion Tracker? I just did a quick test and it seems to work for me.


I don’t usually use proxies because they take a very long time to generate… but I might give it a shot also for curiosity and comparison. I need a repeatable method I can use going forward since editing these 5.7K videos is a regular occurrence. The fastest method “wins,” whatever software I can find to do it.

I made proxies for a project that has about 46 minutes of 5.7K 360 footage. It took a little less than 2 hours to generate the proxies. I'm currently testing the Motion Tracker on a 14-minute segment with a generated proxy. The default match method, CSRT, took a couple of minutes to get to 1%, which is infinitely faster than without proxies, but that would still be about 3 hours to reach 100% on the 14-minute clip. I switched to the MOSSE method, and I estimate it is about 3 times faster than the default CSRT. I'll decide later if MOSSE is accurate enough; I don't need it to be perfect, so hopefully it's good enough. I'll probably end up deleting a lot of the keyframes so there is only one keyframe every 1 or even 5 seconds, as that's usually tight enough tracking for my use. I don't want it locked on too tightly, because that would be too jittery when I'm moving the camera around a lot.
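The thinning step could be something simple like this (a sketch; the {frame, x} keyframe objects and fps value are hypothetical):

```javascript
// Keep roughly one keyframe per N seconds from a dense tracker result.
function thinKeyframes(keyframes, fps, everySeconds = 1) {
  const step = Math.round(fps * everySeconds);
  let nextFrame = 0;
  const out = [];
  for (const kf of keyframes) {
    if (kf.frame >= nextFrame) {
      out.push(kf);
      nextFrame = kf.frame + step;
    }
  }
  return out;
}
```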

If I'm able to parse the XML Motion Tracker data and convert it to XML for the 360 Transform Yaw, I'll report back. I feel pretty certain it will work, as long as I parse the data correctly. I might write some PHP or JavaScript to do the parsing if I'm in the mood. I haven't done much coding in years, so we'll see how that goes. If it works, I'll make a basic web page with an input box to convert Motion Tracker data to 360 Transform Yaw data.

I imagine that if the Motion Tracker could already feed keyframes into the 360 Transform Yaw & Pitch, someone would have said so. But since I haven't yet succeeded in completing a Motion Track, I wasn't sure. If that's a thing, I'll just do that; otherwise I'll write the script to convert it. Adding this functionality to Shotcut would be cool, since it would be an easy parsing conversion: both represent x and y coordinates, just one in degrees and the other in pixels. The math is super easy, since 360 degrees of Yaw equals the width of the video, and since the frame is 2:1, the height should correspond to 180 degrees of Pitch.
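Here's the end-to-end conversion I'd try (a sketch under a big assumption: that the tracker data in the MLT XML looks like `frame=x y w h` pairs separated by semicolons — check your own project file, since the real format may differ):

```javascript
// Convert a Motion Tracker results string into a yaw keyframe string
// for the 360 Transform filter, tracking the box center horizontally.
const WIDTH = 5760;  // equirectangular frame width (assumption)
const OFFSET = 0;    // constant yaw offset added to every keyframe

function trackerToYaw(results) {
  return results
    .split(";")
    .filter(Boolean)
    .map((entry) => {
      const [frame, rect] = entry.split("=");
      const [x, , w] = rect.trim().split(/\s+/).map(Number);
      const centerX = x + w / 2;                   // box center in pixels
      const yaw = (centerX / WIDTH) * 360 + OFFSET;
      return `${frame}=${yaw.toFixed(2)}`;
    })
    .join(";");
}

console.log(trackerToYaw("0=2780 1200 200 200;30=2900 1210 200 200"));
// -> "0=180.00;30=187.50"
```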

Hmm, well that's unfortunate. I put a green box around my bust (head and shoulders) to Motion Track. It tracked me for a few frames, but then locked onto the moving trees behind me instead. So as I walked around, the box stayed on the trees instead of me. This was using the MOSSE method, so I'll try one of the others.

The MIL method did better, but also failed. It tracked me for maybe 20 or 30 seconds, but then got distracted by the background trees like MOSSE did. I cut a 1-minute segment from my 14-minute clip to test with, since the full clip takes way too long to Motion Track, even with the 540p proxy generated.

Yay. The default CSRT method kept me in the green box for the 1-minute sample clip, so maybe I'll commit to having it do the whole 14-minute clip (and 46 minutes for all the clips combined). It's gonna take a LONG time though (maybe 16+ hours!), so hopefully it doesn't drift onto a tree some minutes in! Ugh. I wish there was a way to speed this stuff up. The 16 hours is with a proxy, so I'd guess it would be numerous days without the proxy, at full 5.7K resolution.

Actually, I dunno. 22 minutes of Motion Tracker analysis per 1 minute of proxied video is a bit steep. I dunno if I'm patient enough for that; it'll really stretch out the time between my video edits. I might just do manual 360 Transform Yaw keyframing until I figure out a better method. I did look at the XML data the Motion Tracker analysis spits out into the MLT file, and I'm pretty sure it can be converted to degrees for the 360 Transform pretty easily… but again, it just takes too long to run the Motion Tracker, so nah.

Follow-up feedback on the new tweening options: looks good. The visual keyframe kinks and overlapping appear to be fixed in the update, as expected.

