I have a music performance made with one high quality microphone and three video cameras. So I have one audio track in Shotcut and three video tracks. They are all properly synchronized.
All I want to do now is pan between the three different videos during the performance, so that the final mix shows different views of the performance (not just the top video track).
What is the easiest way to do this in Shotcut? I’m guessing the answer is something to do with keyframes but keyframes seem so very complicated. All I need to do is mark timed regions of my video to apply fade-in fade-out effects.
[Sorry to post such a dumb question but after much searching I can’t find a simple answer for this. It is something I’d expect many musicians to want. I’m sure the fault is my own]
I doubt this involves keyframes. Instead, I think the easiest approach is to have a separate track for each video angle, perhaps with the main/default one at the bottom. You can then use “Blend Mode” filters on clips to decide how they blend with the tracks below, and you can also use the fade in/fade out filters (with “adjust opacity instead of fade to black” checked), with or without blend mode filters. So basically I would use one track per angle, with any combination of blend mode and fade in/fade out filters on clips.
Another option, more clunky and harder to edit, is a single track with clips next to each other with transitions between them, which you can achieve by dragging a clip onto an adjacent one to create a transition. You can customize the transition by selecting the new purple box that appears in-between the two clips on the timeline and going into the Properties tab.
First of all thank you for your reply. Using “blend mode” filters will be exactly the right solution for me.
However, most surprisingly, I’ve now found that the three video sources I recorded at the same time as the audio are not correctly synced. Even if I convert to a constant frame rate, the metronomic tick of the rendered video varies - sometimes it is faster, sometimes slower than the pure audio. Do you happen to know how I can convert to a constant frame rate while prioritizing/preserving the “metronome tick” of the audio?
I hadn’t encountered this problem before, but maybe Microsoft “improved” its ‘Camera’ application to default to variable frame rate (the other, secondary views are from tablets and phones).
Once I’ve completed this, I’ll try to write it up as a “step by step process” for the benefit of other musicians like myself who want to make YouTube videos but have only minimal experience of Shotcut.
[Further information] My video sources all began with a variable frame rate (VFR). I’ve tried converting them to a constant frame rate (CFR) both within Shotcut and also using HandBrake, but strangely this is nowhere near a perfect fix - the “tempo” of the music in the videos still varies compared with the .wav audio. I could cope with it being out by a constant factor - it’s just a matter then of applying a time multiplier to the video track. But variation is impossible to deal with.
I’m quite bemused. If I didn’t know FOR CERTAIN that the audio and video were created in the same recording session, I’d assume I had picked incompatible files. But they have identical file dates.
When converting from VFR to CFR, is there a parameter which tells it to strictly preserve timing?
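For what it’s worth, one thing worth trying outside Shotcut and HandBrake is converting with ffmpeg directly, forcing a constant frame rate while leaving the audio stream untouched so its timing cannot shift. This is only a sketch - the file names and the 30 fps target are placeholders, and older ffmpeg builds use `-vsync cfr` instead of `-fps_mode cfr`:

```shell
# Re-encode the video stream at a constant 30 fps, copying the audio unchanged.
# input.mp4 / output.mp4 are placeholder names - substitute your own files.
ffmpeg -i input.mp4 -fps_mode cfr -r 30 -c:v libx264 -crf 18 -c:a copy output.mp4

# Verify the result: the reported frame rates should both be the fixed target.
ffprobe -v error -select_streams v:0 \
        -show_entries stream=avg_frame_rate,r_frame_rate \
        -of default=noprint_wrappers=1 output.mp4
```

Copying the audio (`-c:a copy`) means the audio clock is untouched, so any remaining drift must come from the video timestamps themselves.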
RE:… the video should be reasonably synced across tracks (albeit not perfectly).
My intention was only to use the high-quality audio as the audio track and put the various videos on top. Because I recorded three videos and one audio track simultaneously, I had expected automatic synchronization (i.e. once I had adjusted the start point - and possibly added a speed multiplier to compensate for absolute differences in speed). All four recording devices were quite close together, so the speed of sound is not a factor.
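To get a ballpark figure for that speed multiplier, one simple approach is to measure the time between two clear landmarks (e.g. two loud chords) in both the .wav and the video, and take the ratio. A minimal sketch, with made-up numbers:

```python
def speed_multiplier(audio_span_s: float, video_span_s: float) -> float:
    """Factor to apply to the video clip's speed so that the same two
    landmarks span the same wall-clock time as the reference audio.
    A result > 1 means the video runs slow and must be sped up."""
    return video_span_s / audio_span_s

# Hypothetical measurements: landmarks 180.0 s apart in the .wav,
# but 180.9 s apart in the video -> the video runs slow.
factor = speed_multiplier(180.0, 180.9)
print(round(factor, 4))  # → 1.005
```

This only compensates for a constant clock-rate difference; it cannot fix drift that varies over time, which matches what you describe.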
This picture is a simple example of what I see in Shotcut (and hear if I unmute both audio tracks). The start point and speed are adjusted so that initially the first few chords are well synced. But then a more active passage follows - and because I’m playing this on the guitar there is more movement in the video - and what actually happens is that the video lags behind or races ahead, reaching the next obvious peak well ahead of or behind the audio. The audio is of course a “.wav” and as such is a very accurate representation of the truth.
It’s almost as though the conversion from VFR to CFR is not respecting the timestamps on the variable frames. I’m simply bemused - or have missed a critical flag in the conversion process, either in HandBrake or Shotcut.
Because I’m under pressure to compose more music, I’ve had to abandon putting video on these recordings and have simply published audio-only YouTube videos. But I have another work I need to record in a couple of months’ time - then I will attempt this process again. Next time I will make sure all the video sources are using CFR.
Thanks again for replying.
If I do manage to make it work next time, I will create a guide for this process. For many student or amateur musicians it’s a very common problem. Like me (a composer, not a performer), they can create YouTube videos with occasional mistakes or passing motorbikes. They can eliminate those mistakes in the audio track using Audacity; then, if they have multiple video tracks available, they can cross-fade etc. and thus fudge over “leaps” in the video. [Those in the know will have seen that many YouTube renditions switch camera views at technically difficult passages, or resort to a pure mimed video shot over a pre-recorded audio.] But obviously if the basic synchronization is out, then none of that is possible. Just in case you are wondering: for this particular recording I don’t think I made an error I had to correct in Audacity for around 3 minutes - so this sample is exactly as recorded.
When I edit a scene from several video sources filmed by different devices, including phones, I choose the soundtrack that seems best to me and place it on the audio track.
Above, on the video tracks, I place my different videos, one per track.
When I notice a discrepancy, I correct the video concerned, either by deleting a frame here or there if the video is behind, or by adding one if it’s ahead.
You can also cut videos into several pieces and modify the speed of each piece. This amounts to the same thing, since Shotcut changes the speed by adding or removing frames.
EDIT: Of course, I mute the audio of the video tracks.
Are you using the alignment tool? That might help a little. But there is only so much it can do if your source is varying over time.
Your assumption that VFR is the problem might be right. It is best to work with CFR files in Shotcut.
Another source of the problem might just be inaccurate clocks in your recording devices. All source device clocks vary over time to one degree or another. Some more than others. To work around that, you could slice up the clips (once every minute, for example) and then let the alignment tool place the slices as close to the reference track as possible.
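To give a sense of scale for that clock inaccuracy (the numbers below are purely illustrative), even a small error measured in parts per million accumulates to an audible offset over a few minutes, which is why slicing and re-aligning once a minute helps:

```python
def drift_ms(clock_error_ppm: float, elapsed_s: float) -> float:
    """Cumulative audio/video offset in milliseconds after elapsed_s seconds,
    for a clock running fast or slow by clock_error_ppm parts per million.
    1 ppm of error contributes 1 microsecond of drift per second."""
    return clock_error_ppm * elapsed_s / 1000.0

# A hypothetical 100 ppm clock error over a 3-minute (180 s) piece:
print(drift_ms(100, 180))  # → 18.0  (milliseconds)
```

Lip-sync errors of a few tens of milliseconds are already noticeable, so two devices drifting in opposite directions can visibly desynchronize within one piece even when both started perfectly aligned.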