V20.02 BETA is now available to test

Rules of Participation (Important)

  • We are only interested in major regressions over recent versions (v19.10 and newer).
  • We are also interested in major bugs in new features and changes specific to this release.
  • Do not report existing bugs that this version has not indicated as fixed.
  • Not everything reported will be fixed before release; only what the developers consider critical.
    We are not trying to fix everything possible during the beta but only the most critical, blocker bugs.
  • Provide all feedback as a reply within this thread or as a GitHub Issue.
    If using a GitHub issue, make sure you indicate the version.
  • The beta period will end by February 16.
  • There will only be another beta released if there is a critical problem that needs confirmation from testers after the fix.
  • The actual release may add some low risk fixes and additions (e.g. new preset) over the beta.

Download

Get the beta release from this GitHub page that also includes the list of fixes, changes, and additions.

Thank You For Your Help!


Hi. When it was announced that I could move video freely, I thought I could just drag and drop media into the timeline and it would automatically add a new track. I don't know if this is a bug, but I still need to create a new track (audio or video) first and then drag my video into the video track or my audio into the audio track.

Hopefully the developers can make Shotcut automatically create the right kind of track when media files are dragged into an empty area of the timeline.

Sorry, I'm not a native speaker…

Your post is invalid in this thread. And, no, it is not required to add a new track first.

Thanks for the Vectorscope, a welcome addition, however the graticule markings are almost invisible, in particular the red, magenta and blue 75% marks


Added Filters > Video > Rectangle.
This is an experimental video filter that uses the Qt Quick QML side of WebVfx.

Thanks for this much-anticipated new feature.
I hope circle and ellipse will be added in future versions.

Preview Scaling:

If I use 360p, the preview starts to stutter where the fade-out begins and remains stuttering on 'Heavy-Parts-01' until the next part.

If I use 720p or even None, it is smooth…

The new vectorscope is awesome! I agree that the blue markers are difficult to see. Just wondering, would RGB values in the tooltips be useful to anyone else? I'm wondering about comparing colors quickly between GIMP and Shotcut without having to export a frame.

Preview scaling is phenomenal. I donā€™t have any bugs to report, but I have results from two tests that illustrate the performance gains:

Test 1: Fading

Setup:

  • Video clip on V2 overlaps video on V1 by four seconds and fades in the whole four seconds. Hardware is 16 threads @ 2.4 GHz.

Results:

  • 4K timeline, 4K sources, no scaling: cannot play real-time
  • 4K timeline, 4K sources, 360p scaling: single-clip playback is easily real-time, but fade section stutters badly
  • 4K timeline, 360p proxies, 360p scaling: the entire sequence never tops 5% CPU! Incredible!

Test 2: Track Stacking

Setup:

  • Put any video clip on V1. Same video clip goes on V2 overlapping the clip on V1, but has audio detached and deleted, a color grading filter added, then an opacity filter at 20% added. This filtered clip on V2 is then copied-and-pasted to V3, V4, V5 as high as possible until the audio from the clip on V1 starts to stutter.

Results:

  • 4K timeline, 4K source, no scaling: stutters with V1 alone; can't stack
  • 4K timeline, 360p proxy, 360p scaling: stacks to 21 tracks!

If I replace the color grading filter with a Text: Simple filter, it stacks to 15 tracks.

The opacity filter is critical to the test because I believe Shotcut still has an optimization that says "if V5 is fully opaque, then don't bother compositing V1-V4 since they can't be seen anyway". The opacity filter forces compositing and activation of all filters across all tracks for a more accurate measure of performance.
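
For anyone curious, that skip logic can be illustrated with the standard "over" compositing operator. This is a generic sketch of the math, not Shotcut's actual implementation:

```python
# Porter-Duff "over" compositing, simplified to a uniform opacity value.
# Illustrates why a fully opaque top layer lets a compositor skip
# rendering everything underneath it.

def over(top, bottom, alpha):
    """Composite a top pixel over a bottom pixel.
    top, bottom: (r, g, b) tuples in 0.0-1.0; alpha: top layer opacity."""
    return tuple(alpha * t + (1.0 - alpha) * b for t, b in zip(top, bottom))

red, green = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)

# With alpha = 1.0 the bottom pixel contributes nothing, so the
# lower tracks never need to be rendered at all.
assert over(red, green, 1.0) == red

# With alpha = 0.2 (the 20% opacity filter) the lower tracks must
# actually be rendered and blended, defeating the optimization.
print(over(red, green, 0.2))
```

So adding even a slight opacity filter forces every track below it through the full render path, which is exactly what the test needs.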

For previous projects, I had set the video mode to 360p using 360p proxies to get this kind of editing performance, and I see improvements in the new Preview Scaling feature that beat the old method. I was surprised to see Preview Scaling maxing out 8 of the 16 cores, whereas the old method would usually use 4, maybe 6 on a good day, but none at 100%. Also, the old method was only able to stack 18 tracks whereas Preview Scaling can do 21 (this is using the same hardware for both tests), so this is win-win all the way around. I'm very excited, as you can tell.

Specific note to @Earlybite (I hope I'm not breaking the thread rules by commenting on this since the stuttering thread was closed): You asked about performance of a 4-thread processor vs. a 12-thread processor. These test results suggest that a 4-thread processor with proxy sources could possibly be sufficient in 360p scaling mode. However, with 12 threads, proxy generation and export encoding will go significantly faster, and the preview while editing at 360p scaling should also benefit significantly since the preview appears able to utilize 8 cores efficiently. Let's open a new topic if you want to discuss further to avoid cluttering this beta conversation.


At the risk of sounding greedy, could an option be added for 480p preview scaling? It divides evenly into 1440p monitors like the Apple Cinema LEDs and strikes an extremely good balance between picture quality and playback performance. I've noticed that 480p proxies with a 480p video mode scale up more cleanly to a full-screen 1440p external monitor than 360p or 540p do (using bilinear scaling for speed).

One issue to report.
Add a video, image or color clip to the source viewer or timeline.
Add size and position filter, select distort mode, resize the rectangle. The image will not fill the rectangle.

Demo.

This is fixed for the next version.


Can you elaborate on this? They are subtle on purpose so that they don't obscure the vector data. I can lighten them or make them larger, but then they may hide the vector data. Additional thoughts would be appreciated.

I think this is not possible. The vectorscope only displays the chroma values (hue and saturation). It does not show the luminance values - which would be needed to calculate an RGB value. Or maybe I am misunderstanding the suggestion?
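
To illustrate with generic BT.709 math (not Shotcut's internal code): a point on the vectorscope fixes only the chroma pair (Cb, Cr), and converting back to R'G'B' also requires luma (Y'), which the scope does not plot. A minimal sketch:

```python
# Full-range BT.709 Y'CbCr -> R'G'B' conversion (generic sketch).
# Coefficients derived from Kr = 0.2126, Kb = 0.0722.

def ycbcr709_to_rgb(y, cb, cr):
    """y in 0..1, cb/cr in -0.5..0.5."""
    r = y + 1.5748 * cr
    g = y - 0.1873 * cb - 0.4681 * cr
    b = y + 1.8556 * cb
    return (r, g, b)

# The same vectorscope position (same Cb/Cr) with two different lumas
# yields two different RGB colors, so no unique RGB exists for a point:
print(ycbcr709_to_rgb(0.3, 0.1, 0.2))
print(ycbcr709_to_rgb(0.8, 0.1, 0.2))
```

Every value of Y' shifts all three of R', G', B' by the same amount, so a single scope position corresponds to a whole range of RGB colors.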

This is what the zoom scope is for. Please give it a try and let me know if it meets your needs.

I just noticed that the vectorscope tooltip has an error when it is outside of the scope. I will fix this for the release.

Yeah, I didn't think that one through. Ignore me. :slight_smile: I tried the Video Zoom earlier today and it does exactly what I was looking for, but I was wondering before really thinking if the RGB functionality could be combined with the vectorscope to reduce the number of scopes on the screen for real estate reasons.

EDIT: I recall thinking that the FFmpeg avfilter.vectorscope filter allowed the color of the data in the vectorscope to represent the average or peak luminance at that UV value. I'm not requesting this as a feature, but it gets me thinking again about how much data can be packed into a vectorscope.

By the way, I love the multitude of scopes available now. Much appreciated.

In a room that has any amount of ambient light, it is difficult for my eyes at a reasonable distance from the screen to reliably detect the difference between the black background and the 75% blue mark in particular. I've got a calibrated Apple Cinema LED monitor, meaning it isn't eye-blazing extra bright like a lot of monitors. Blue looks very dim on it. The RGB values of the Alexis Van Hurkman scope were RGB(0,0,255), but it's showing up as RGB(0,0,85) on my computer. It's really dark and really thin. Any chance HiDPI mode would make the lines extra-thin for some people?

I keep my ambient light down and my monitor is as near calibrated as I can get it, so it is not set exceptionally bright either. I sit about 1 1/2 metres away from it and, even if I set the vectorscope to maximum size, which is not usually practical, the red, magenta and blue 75% marks are almost undetectable. For me they (all) need to be brighter, bigger would help too. I find the vectorscope trace obscures the 75% marks when put over them.

Are the graticules not the defined acceptable limits for SMPTE 75% bars? Making them bigger makes them useless.

The use case of comparing Rec.709 RGB values to GIMPā€™s sRGB is not valid.

When I push saturation to make the trace extend past the 75% and 100% marks, the trace data covers (overlays) the marks and completely hides them. I guess I'm not seeing how these marks can obscure vector data if they are the lower layer.

I agree that marks canā€™t be made very thick because that makes finding the 75% point less precise. But if the marks were less wide but lots brighter, wouldnā€™t they be both visible and less obscuring even if they were the upper layer of compositing?

Does this statement effectively suggest that the Video Zoom scope is also not valid?

Added support for using a video clip in Transition Properties > Video.
This is handy to use with @jonray's matte transitions.

Can someone elaborate on this feature?

Hey, thanks. I should have explained better. I am aware of the transitions; I was just wondering how Shotcut supports those transitions now. I went to Properties > Video… but what then…

If you have the same code value in sRGB and Rec.709 RGB, you don't have the same hue, brightness and saturation. It's a small difference, but it is different.
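
One source of that difference can be sketched by comparing just the two encoding curves: sRGB and Rec.709 use different transfer functions, so the same code value decodes to different linear light levels. (Display gamma/BT.1886 adds further differences; this generic sketch compares only the standardized curves.)

```python
# Decode the same code value through the sRGB EOTF and through the
# inverse of the BT.709 OETF; the resulting linear light levels differ.

def srgb_to_linear(v):
    """sRGB EOTF (IEC 61966-2-1), v in 0..1."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def rec709_decode(v):
    """Inverse of the BT.709 OETF, v in 0..1."""
    return v / 4.5 if v < 0.081 else ((v + 0.099) / 1.099) ** (1 / 0.45)

code = 0.5  # identical code value in both systems
print(srgb_to_linear(code))   # ~0.214 linear light
print(rec709_decode(code))    # ~0.260 linear light
```

So a pixel read as RGB(128,128,128) in GIMP and the "same" value in a Rec.709 video do not represent the same light on screen.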