Developer Shoutout and new feature suggestions

Hi all,

In my career I was a Software Developer, and for the last few months I’ve moved over to Media and Film Production, which I think I like more. We use Adobe at the college, but I prefer Shotcut to Adobe Premiere, so I’ve been using Shotcut instead.

I have some observations on the positive side and suggestions for improvements.

1. Adding transitions as visual “joiners” between video segments. For example:

[image: mock-up of a transition “joiner” between two video segments]

Therefore, add another button on the Toolbar for “Transition” to make it really easy.

The current implementation is half-way there already. When you drag one segment over another you get a rectangle with a cross. That’s nice. I think a “Transition” graphic would be nicer and not that difficult to implement programmatically.

That might also allow transitions to become modular, which I think is needed at some point as well.

2. Simple face/cartoon animation using Glaxnimate featuring lipsync

I think doing talking heads or faces might be pretty common. Implementing basic Lipsync should not be too difficult.

3. Point Tracking via OpenCV

Other programs have point tracking using OpenCV’s SIFT mechanism.

This could be done as a Filter, scanning a video segment and then outputting points to a file that Glaxnimate could use.

Thank you and yes, I might be able to help with coding.


Welcome to the forum and thanks for your comments. We would be happy to have your help.


Thanks Brian.

I might just move forward on this myself to do a Proof-of-concept of the Position Tracking.

Where would I find documentation and code for the “Filters” section?

I have Qt programming experience, but I think I will need to find out how Shotcut implements Filters. Links would be enormously helpful.

Q: Are the Filters dynamically loaded, or static (i.e. compiled in)?

My approach will be to process a video then output some glaxnimate keyframes that would work in that program.

Just some pointing in the right direction would be extremely helpful.

A good first step is to focus on MLT - the underlying framework.

There is an OpenCV motion tracker filter that can serve as an example:

Use the command line tool to investigate the framework and test new code before adding all the overhead complexity of the Shotcut UI:
https://mltframework.org/docs/melt/

Filters are compiled into the MLT framework.

Hmmm. That doesn’t really sound like a filter. A filter receives each frame, performs some transformation on it, and then passes the frame along. It sounds like you do not plan to perform any transformation. In that case, maybe you just need to make a tool that performs the processing and creates a Glaxnimate file.


Cool

Perfect. I will start from there.

That’s what I was thinking. From what I understand from using Shotcut so far, only the tweening system is available in Glaxnimate and that’s the mechanism really needed to make the smooth animations work.

Thanks for the information, that’s exactly what I need to get started.

Update: I have a basic idea figured out.

In my experimental code, given an object to track, I can now determine whether that object exists within the base image and, if so, where it is.

I’m using the image from Shotcut Tutorial: How To Motion Track! - YouTube under ‘Fair use’ rights because it is from a tutorial on object tracking in Shotcut, nothing more. It was the test data that I had at hand. I’ll find something else next time.

So now, I can check each image in a feed and determine where the ‘object’ is.
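For anyone curious, the matching step can be sketched in a few lines. This is a pure-NumPy sum-of-squared-differences search, not my actual code; in practice `cv2.matchTemplate` with `cv2.TM_SQDIFF` does the same thing far faster:

```python
import numpy as np

def find_object(frame, template):
    """Locate `template` inside `frame` by sum-of-squared-differences.

    Returns (best_score, (row, col)) for the top-left corner of the
    best match; a score of 0 means an exact match. This brute-force
    NumPy version just illustrates the idea that cv2.matchTemplate
    implements efficiently.
    """
    fh, fw = frame.shape
    th, tw = template.shape
    best_score, best_pos = float("inf"), None
    for r in range(fh - th + 1):
        for c in range(fw - tw + 1):
            window = frame[r:r + th, c:c + tw].astype(float)
            diff = window - template.astype(float)
            score = float((diff * diff).sum())
            if score < best_score:
                best_score, best_pos = score, (r, c)
    return best_score, best_pos
```

Running that on each frame of a feed gives you the per-frame object position.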

I’m having trouble installing the Glaxnimate Python module. So the plan to write tracking data via Glaxnimate isn’t working.

Instead I think I can write position data directly somewhere else.

I am now thinking of writing 24/30/60 frame entries per second as keyframes directly into Shotcut’s project XML somewhere. Suggestions on the best place to do this?
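As a sketch of what I mean, here is a hypothetical helper that serializes tracked positions into an MLT-style animated-property string. The `frame=x y w h opacity` form is my reading of what a saved .mlt project shows for the ‘Size, Position & Rotate’ filter’s rect property; check a real project file before relying on it:

```python
def rect_keyframes(track, width, height, opacity=1):
    """Serialize tracked positions into an MLT-style animated rect string.

    `track` maps frame number -> (x, y). The "frame=x y w h opacity"
    entries, joined with semicolons, mirror what a saved .mlt project
    appears to use for an animated rect property -- an assumption to
    verify against a real project file, not a documented API.
    """
    parts = []
    for frame in sorted(track):
        x, y = track[frame]
        parts.append(f"{frame}={x} {y} {width} {height} {opacity}")
    return ";".join(parts)
```

For example, `rect_keyframes({0: (10, 20), 25: (15, 22)}, 100, 50)` yields `"0=10 20 100 50 1;25=15 22 100 50 1"`.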

If I figure out how to use the ‘Size, Position & Rotate’ filter, I think I could reuse the mechanism that’s working there.

Forgive my personal level of experimentation. In a few days I might have something that works as an experimental Proof of concept investigating the viability of such a Filter but no promises.


A motion tracking tool in Shotcut would be wonderful!

I think 25 and 50 images per second would also be appreciated by users in Europe, where I believe these frame rates are more common.

For now it isn’t.

The final transformation will be to replace some feature in the video with a new feature.

The “idea” that I’m working toward is best presented here → Shotcut Tutorial: How To Motion Track! - YouTube, except the position data is generated automatically rather than manually. Typically it is a case where there is some sort of object whose position needs to be identified and then replaced with an inserted object graphic.

Other examples would be those 3D-legends and object identifiers.

It also exists in the Adobe product suite and in Blender, where you can replace features on walls in a scene that perhaps aren’t exactly right. I’m not really trying to copy that, just noting that it’s there.

I’m just doing this for fun and happy to work with the guidance of existing developers. I’m still some way off having the PoC ready.

The engine already has an object motion tracking filter.

Is it the one related to OpenCV that is listed in the Road Map?

Yes, which is what Brian linked to above. It is not yet included in the Shotcut build of MLT, however.

I saw that code and it was very helpful. I’ll refer back to that later.

At the moment, I’m trying to get the basics working in Python for what I need. Then I plan to come back and rewrite it in C++.

I’m thinking that the following UI elements will also be needed:

  1. UI to select a feature pixmap/area. Most likely an Area select. That will need storage somewhere.
  2. UI to show size-coordinates in a graph plot (changing x/y/size/rotation values) (not trying to overcomplicate for now).
  3. Keyframe timeline

Some of that will exist, some not.

For now, I will just be trying to write a file or hold a list/array of frames and the size-coordinates and then worry about working that into the MLT-framework/glaxnimate later.

I’m not promising that the first code-cuts of this will be pleasant on the eye, but I’m hoping they have a high chance of working.

Does this filter work but just needs a UI? Or is it a work-in-progress that still needs major code changes?

It works. Kdenlive uses it:
https://docs.kdenlive.org/en/effects_and_compositions/effect_groups/alpha_manipulation/motion_tracker.html?highlight=motion%20tracker

Shotcut would need a UI. But also, we do not compile OpenCV into our build yet.


Oh that’s pretty cool, it follows a face really well in Kdenlive.

Yes - both.

No and yes.

The UI needed is minimal and on par with any of the other filters that exist for Shotcut. The main requirement is to be able to make a selection using a rubber-band-selection tool via the appropriate API in Qt. A similar piece of code is the eye-dropper in the Chroma Key filter, which selects a single pixel. What we need for this filter is to be able to make a rectangular selection instead of a pixel and save that image into storage. As far as I know, this is not overly complex.

A selection to load the replacement image from file is also part of the UI. This graphic replaces or overlays the tracked point with some level of opacity, edge feathering, and perhaps colour processing. Perhaps some of these things can be stored in nested MLT (called pre-compositions in other software).

In a future advanced version, based on what I have seen in other programs, a matrix transformation in keyframes is required. This is to flip, spin, rotate, and warp the inserted/overlaid graphic.

OpenCV already has the required Matrix transformations built in. It’s just a matter of understanding those.

One well known program has a GUI for it and I have seen it in use also in Blender. I don’t know if I like their approach yet. It’s not needed in the first instance, but I’m showing it as an example of what Matrix-Transformation systems give you the power to do with extreme coding simplicity.

Here is an OpenCV documentation page explaining what you get in their matrix transformation system: OpenCV: Geometric Transformations of Images. It’s well worth a developer understanding it at even the most simple level.

The idea being that if you store the Matrix Transformation in the keyframes, you can have OpenCV or a different graphics subsystem, do the heavy lifting of doing the graphic transformations for you.
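To make that concrete, here is the 2x3 rotation/scale matrix that `cv2.getRotationMatrix2D` builds (the formula is straight from the OpenCV docs), reproduced in plain NumPy so you can see what a stored per-keyframe matrix would contain:

```python
import math
import numpy as np

def rotation_matrix(center, angle_deg, scale=1.0):
    """Build the 2x3 affine matrix that cv2.getRotationMatrix2D returns:
    rotate by angle_deg around `center`, scaled by `scale`.

    alpha = scale * cos(angle), beta = scale * sin(angle), per the
    OpenCV documentation's formula.
    """
    cx, cy = center
    a = scale * math.cos(math.radians(angle_deg))
    b = scale * math.sin(math.radians(angle_deg))
    return np.array([
        [a,  b, (1 - a) * cx - b * cy],
        [-b, a, b * cx + (1 - a) * cy],
    ])

def apply_affine(m, point):
    """Apply a 2x3 affine matrix to an (x, y) point."""
    x, y = point
    return m @ np.array([x, y, 1.0])
```

With the matrix stored per keyframe, `cv2.warpAffine` (or any graphics subsystem that accepts an affine matrix) can then transform the whole overlay graphic for you.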

So it’s very little work for a ton of benefits.


Well, they have implemented it and definitely have the tracking part working.

I guess they will be working on the effects next to bring them up to par with what can be done with other programs. For example, they have opaque image replacement working but nothing with a pixmap just yet.

Thanks for sharing.

Proof of life:

Link to python code, only for Developers: Shotcut Videos


Very cool, @david.lyon. :+1:

I’m curious about how you plan on implementing it in Shotcut. I personally don’t think it was well implemented in Kdenlive, and the whole process is actually kind of cumbersome. For example, trying to finish a track that loses focus on the object in the middle of the path it’s tracking is not so straightforward or smooth. Also, they list several algorithms to choose from, which can be confusing to most. What kind of approach do you have in mind?

Using Glaxnimate and its file format.

{
    "animation": {
        "__type__": "MainComposition",
        "animation": {
            "__type__": "AnimationContainer",
            "first_frame": 0,
            "last_frame": 180
        },
        "fps": 60,
..
                                        "time": 0,
                                        "value": {
                                            "x": 305.874363327674,
                                            "y": 341.07979626485564
                                        }
                                    },
                                    {
                                        "after": {
                                            "x": 1,
                                            "y": 1
                                        },
                                        "before": {
                                            "x": 0,
                                            "y": 0
                                        },
                                        "time": 110,
                                        "value": {
                                            "x": 331.30050933786083,
                                            "y": 150.38370118845495
                                        }
                                    }
                                ]
                            },
                            "rotation": {
                                "value": 0
                            },
                            "scale": {
                                "value": {
                                    "x": 0.5415953993797302,
                                    "y": 0.3696938455104828
                                }
                            }
                        },

The saved Glaxnimate file looks like a JSON file, and there appear to be things in there that I can read and write.

I’ll be playing around with them over the next week.
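As a sketch of what that writing might look like, here is a hypothetical helper that turns tracked positions into the keyframe list seen in the sample above. The field names are copied from the saved file, not from any documented schema, so treat them as an assumption:

```python
def glaxnimate_position_keyframes(track):
    """Build a Glaxnimate-style position keyframe list.

    `track` maps frame number -> (x, y). The "time"/"value" fields and
    the "before"/"after" easing handles (0,0 -> 1,1 appears to mean
    linear) are mimicked from a saved Glaxnimate file, not from a
    documented schema.
    """
    keyframes = []
    for frame in sorted(track):
        x, y = track[frame]
        keyframes.append({
            "time": frame,
            "value": {"x": x, "y": y},
            "before": {"x": 0, "y": 0},
            "after": {"x": 1, "y": 1},
        })
    return keyframes
```

The resulting list could then be spliced into the full document with `json.load`/`json.dump` once the surrounding structure is understood.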

Haven’t got to that yet, but I agree. Other software displays the movement as an animated path. I think that’s very simple and handy. It would allow you to make changes to the path in cases where the video has motion jitter and you can’t get a clear frame to track.

I’m hopeful that can be resolved.

Choose the best one in testing, and if users want something different, let them change it in the configuration file or, even better, environment variables :slight_smile: