While using the Motion Tracker filter, I noticed that the last algorithm (CSRT) works quite well in most cases, while the others almost always fail. However, there are a few cases where even CSRT fails and the user cannot proceed. I would suggest adding a new algorithm based on user-provided key points, around which an interpolation procedure would be performed to produce the desired result. For example, let's say we want to track a football player. The user would pause the video at regular intervals (e.g. every 1 second) and specify the desired position of the tracking rectangle at that point in the video. Given the list of manually provided points, the algorithm would smoothly fill in the intermediate keyframes so the process cannot fail (i.e. CSRT could then calculate the intermediate points for each consecutive pair of manual points).
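To illustrate the idea, here is a minimal sketch (the function name and box format are hypothetical, not anything in Shotcut): the user marks a bounding box every N frames, and the gaps are filled by simple linear interpolation. A real implementation could instead run CSRT between each consecutive pair of manual keyframes, using the manual boxes as re-initialization points.

```python
def interpolate_keyframes(manual_points):
    """manual_points: dict mapping frame number -> (x, y, w, h) box.
    Returns a dict with every intermediate frame filled in linearly."""
    frames = sorted(manual_points)
    filled = {}
    for f0, f1 in zip(frames, frames[1:]):
        box0, box1 = manual_points[f0], manual_points[f1]
        span = f1 - f0
        for f in range(f0, f1):
            t = (f - f0) / span  # 0.0 at f0, approaching 1.0 at f1
            filled[f] = tuple(a + t * (b - a) for a, b in zip(box0, box1))
    # Keep the last manual keyframe itself
    filled[frames[-1]] = manual_points[frames[-1]]
    return filled

# Example: boxes marked at frames 0 and 30 (once per second at 30 fps)
track = interpolate_keyframes({0: (100, 50, 40, 40), 30: (160, 80, 40, 40)})
print(track[15])  # halfway box: (130.0, 65.0, 40.0, 40.0)
```

Even this naive linear fill would be enough for slow, predictable motion; substituting a tracker run per segment would handle the harder cases.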
I understand what you are suggesting. You can do this today manually (i.e. not conveniently) by using multiple Motion Tracker filters and trimming them:
Thank you for your response. That is a workaround, but it would be more elegant and compact to have this as a single feature.
Cool suggestion. Guided motion tracking. I definitely struggle with motion tracking not being accurate since I have tried it on hiking videos where I’m walking around… and all the algorithms eventually think I’m one of the trees in the background.
But in general, my main struggle with the filter is that it is painfully slow to analyze long segments of footage… which I assume is not really the fault of Shotcut, but of the OpenCV Tracker being used as a library. I’m amazed it’s even possible at all, so I’m not complaining. But in my case, it’s simply too slow to be of practical use on my high-resolution footage. I imagine it’s probably slow in any software, unless someone has discovered a magical algorithm that is better at guessing patterns from arbitrary pixel relationships.