Is it DEFINITE? Is there no way to change PORTRAIT to LANDSCAPE?

Try using DeepL, which does not have the 3,900-character limit that Google Translate does.

I am in the process of putting together a tutorial on the various techniques for converting a Vertical (or Portrait) video into a Horizontal (or Landscape) one. This image shows the various options.


@shotcut @brian
Would it be possible to add a note to the SPR filter’s documentation about any risk of resolution loss? SPR seems to work differently now than I remember, and I can’t figure out exactly what it’s doing.

Sample image (diagonal yellow and black lines with sharp jagged edges):
hazard.zip (2.6 KB)

Image resolution: 2160x3840 (vertical UHD)
Project resolution: 1920x1080 (horizontal FHD)

Filter stack #1 (result is sharp, as expected):

  1. Crop: Source with “Center” checked, no bias

Filter stack #2 (result is medium sharp):

  1. SPR with Zoom set to 316%

Filter stack #3 (has the softest edges of the three):

  1. SPR #1 with size 608x1080 (Zoom 100%)
  2. SPR #2 with Zoom set to 316%

All three of these stacks basically accomplish the same thing, but get different results in terms of edge sharpness. I don’t know what’s causing that difference.
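For reference, here is where those numbers come from. This is just back-of-the-envelope arithmetic in plain Python, not anything Shotcut runs internally:

```python
# Geometry behind the three stacks (plain Python, illustration only).
src_w, src_h = 2160, 3840      # vertical UHD source
proj_w, proj_h = 1920, 1080    # horizontal FHD project

# Fitting the source inside the project while keeping aspect ratio is limited by height.
fit_scale = proj_h / src_h                       # 0.28125
fit_w, fit_h = round(src_w * fit_scale), proj_h
print(fit_w, fit_h)                              # 608 1080 -> the Size of SPR #1 in stack #3

# Zoom needed for the fitted image to fill the project width again:
zoom = proj_w / fit_w
print(f"{zoom:.0%}")                             # ~316% -> the Zoom used in stacks #2 and #3
```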

The part that makes me go “hmm” is stack #3. If the hazard image were truly scaled down to 608x1080 and then scaled up by 316%, I would expect a horribly fuzzy rendition of those diagonal lines. Doing this sequence in GIMP produces the expected softness. But the diagonal lines are still well-defined in Shotcut, which I can only imagine happening if all the SPR filters have their zoom values combined into a single, final zoom amount that is applied to the original image once, as opposed to carrying out every SPR operation in sequence on the resized-to-fit-the-timeline image. The lines are so sharp that I’m led to believe SPR maintains access to the original high-resolution image for its processing, which would be awesome.
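The expected softness is easy to reproduce outside of Shotcut. Here is a rough sketch of the GIMP-style sequence using Pillow; the filenames are placeholders, and this is not what Shotcut does internally:

```python
# Sketch of the downscale-then-upscale experiment (Pillow; filenames are placeholders).
from PIL import Image

src = Image.open("hazard.png")                              # the 2160x3840 test image

# Route A: shrink to the fitted 608x1080, then zoom back up -- the diagonals go soft.
small = src.resize((608, 1080), Image.Resampling.BILINEAR)
route_a = small.resize(src.size, Image.Resampling.BILINEAR)

# Route B: leave the original untouched (the data SPR appears to be sampling from).
route_b = src.copy()

route_a.save("route_a_soft.png")
route_b.save("route_b_sharp.png")
```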

As an example, here is a 200x200 crop of filter stack #3 exported from Shotcut:

Sample-2xSPR

And here is a 200x200 crop doing the filter stack #3 sequence in GIMP:

Sample-GIMP

Huge difference, which has me assuming that SPR is somehow maintaining access to the original image data. But does it always?

Regardless of the actual mechanics (which I’m hoping can be added to the documentation), this is a pleasant surprise: Shotcut produces far better quality than expected when source clips are zoomed in with SPR. But I don’t understand how it works, nor do I know when and how the auto-resizer may change things if there is no Crop: Source or SPR filter, so I don’t know when I’m at risk of resolution loss from upscaling, rotating, or otherwise filtering an image. Could a documentation note be added stating when filters see the auto-resized version of a frame as opposed to receiving the full-size frame from the original file? Thanks for considering.


There is an empty slot available on your demo board… what about a “custom background” option? Kinda like the blurred background option, except they can put their own custom color clip or image/video clip on V1 with the vertical video on V2. Maybe a background of slow rippling water or other B-roll video could fit the theme of whatever’s happening in the vertical video.

Thanks for the suggestion. I do actually go into that option in the presentation, but I’ll add an image to the “demo board”.

Or why not display the percentages directly? Then we wouldn’t have to struggle with the calculations.

I do not think I can document the magic. There are many variables. I suggest people not overthink it and instead try things to find what works for them. I am not really following this thread since I found it too confusing. Are you really surprised that different implementations (Crop: Source, affine, GIMP) and combinations give different results? Or are you not surprised, but want a full explainer? There are too many variables involved. I will add a few hints here, but I am not sure about adding them to the documentation yet. Maybe they can be refined to be included:

  • The Crop: Source filter removes rows and columns from the edges of an image before anything else, implied or not.
  • There is implicit scaling (upstream of all user-added filters) from the source resolution to the project resolution that maintains the source aspect ratio by padding with black. But on rare occasions a downstream filter can bypass this or govern its target resolution.
  • Size, Position & Rotate (aka affine transform) has different behaviors depending upon the following (see the sketch after this list):
    • If Preview Scaling is on:
      → tell upstream to scale to Size with the preview scaling factor applied.
    • If there is more than one of this filter on the same object, or
    • if Size mode is Fill, or
    • if Size mode is Distort, or
    • if the source image is larger than Size (down-scaling):
      → tell upstream to scale to Size
    • Otherwise (one transform filter on this clip), Size mode is Fit, and source is smaller than Size:
      → bypass the upstream scaler.
  • Size, Position & Rotate uses interpolation when mapping source pixels to each destination pixel.
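For anyone who finds the branching easier to read as code, here is a hypothetical paraphrase of the rules above in Python. The function and parameter names are made up; this is not the actual engine code:

```python
# Hypothetical paraphrase of the SPR (affine) scaling rules listed above.
# Returns the resolution SPR asks the upstream scaler for, or None to bypass it.
def spr_upstream_request(preview_scaling_on, spr_count_on_clip, size_mode,
                         source_w, source_h, size_w, size_h,
                         preview_factor=1.0):
    if preview_scaling_on:
        # Tell upstream to scale to Size with the preview scaling factor applied.
        return round(size_w * preview_factor), round(size_h * preview_factor)
    if (spr_count_on_clip > 1
            or size_mode in ("Fill", "Distort")
            or source_w > size_w or source_h > size_h):   # down-scaling
        # Tell upstream to scale to Size.
        return size_w, size_h
    # One SPR on this clip, Size mode is Fit, and the source is smaller than Size:
    # bypass the upstream scaler and work on the full-size frame.
    return None
```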

In stack #3, you would expect the image to be initially scaled to 608x1080; however, the two filters compound such that the first filter thinks there is preview scaling at 3413p (316% zoom = 1919x3413). Thus, the first filter behaves under the preview scaling rule above because the second filter asked it for 1919x3413.

The way preview scaling works at the engine level is different from the user presentation within Shotcut. In Shotcut there is a global setting, but not so in the engine. A filter in the engine receives an image request at a resolution (typically the video mode with preview scaling applied), compares it with the current video mode resolution, and scales its parameters accordingly. Since the second transform filter is requesting 1919x3413 and the video mode is 1920x1080, it scales its size parameter by ~3.16x. Thus, the first filter ends up also requesting 1919x3413 from the upstream scaler. I have no plans to change or address any of that. Basically, I do not know how, and I lack the confidence to do so without breaking a use case or backwards compatibility.
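To make the compounding concrete, here is the arithmetic from the paragraph above written out in plain Python (illustration only, not engine code):

```python
# Compounding in stack #3, following the explanation above (illustration only).
profile_w, profile_h = 1920, 1080          # video mode

# The second SPR (Zoom 316%) requests roughly 1919x3413 from its producer.
requested_w, requested_h = 1919, 3413

# The first SPR compares that request with the video mode and treats the ratio
# as a preview scaling factor, scaling its own parameters accordingly.
factor = requested_h / profile_h
print(f"{factor:.2f}x")                    # ~3.16x

# With its Size of 608x1080 scaled up by that factor, the first SPR forwards
# essentially the same ~1919x3413 request to the upstream scaler, so the
# original high-resolution frame is what ends up being sampled.
```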

Before version 20.09 (IIRC), the Size, Position & Rotate filter would always bypass the upstream scaler and do its own scaling, but that gave very poor quality for down-scaling and was changed.


This actually makes sense, and thank you so much for taking time to describe the many logic paths. This is very useful and relevant.

…as opposed to SPR receiving a timeline-shrunk version from upstream and doing the scaling itself on the shrunk version.

This is the clarification I think many people – including myself – were seeking.

In other forum threads as well as this one, several people have posted their concern that the implicit scaler might shrink a large image down to timeline resolution first, and then SPR would use that shrunken version for any zoom operations. The fear was that resolution would be lost due to zooming into the shrunken version rather than zooming into the original image data.

The test I did above proved that resolution loss was not happening when zooming in with SPR, but I couldn’t verify whether that would remain true in all circumstances. Your clarification helped us know when to be concerned and when not to be.

I think a simple statement saying “don’t worry about resolution loss during a zoom with SPR because the original source frame is used” is all people need to see in the documentation to stop worrying. Without this clarification, some people (including myself earlier in this thread) have advised using “Crop: Source” rather than SPR as a preferred method of zooming to avoid resolution loss, but it turns out that the loss problem doesn’t even exist. I feel like I noticed a bigger resolution difference in earlier versions of Shotcut, but maybe that was just different interpolation methods I used at the time.

Thanks again. I’ll update my previous inaccurate posts.

Cool, the tutorial will be welcome; I believe it will get a lot of views. In the example posted, there is significant video loss.

Man, I’m sure your text teaches something, but I understood almost nothing. What is SPR?

Size, Position & Rotate filter.

I have created a new thread with a copy of the presentation slideshow I put together showing the various ways of converting a vertical video to a horizontal one. It can be seen here:

Good tip, thanks.

What is UI?

HAAAAAAAAAAAAA, thank you.

It’s true, the subject is complicated.

Thanks

UI = User Interface - the layout of the screen, buttons, menus, etc. Or to say it another way, UI = the way you communicate with the program.

Similar acronyms:

GUI = Graphical User Interface - in today’s world most people interact with computers primarily or exclusively via GUIs, so UI and GUI may be used somewhat interchangeably, depending on the context.

UIX = User Interface XML - referring primarily to the UI that is presented by a web-based application, created with XML (or HTML).

Thanks

Regarding the original post, Shotcut version 21.12.21 added Properties > Video > Rotation to change orientation (rotation only in multiples of 90 degrees), which transposes the frame without resampling.
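Since a 90-degree rotation is a pure transpose of the pixel grid, no interpolation is involved. As a rough illustration outside of Shotcut (Pillow, placeholder filename; not what Shotcut runs internally):

```python
# 90-degree rotation as a lossless transpose (Pillow; illustration only).
from PIL import Image

frame = Image.open("portrait_frame.png")               # placeholder filename
rotated = frame.transpose(Image.Transpose.ROTATE_90)   # pixels rearranged, never resampled
rotated.save("landscape_frame.png")
```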