Failure in deinterlacing?

All my 50 Hz interlaced sources are degraded from 50i to 25p, which means the smooth movements become jerky. How can I deinterlace from 50i to 50p? I also tried the manual settings “deinterlace” and 50 Hz output; the result is then in 50p format, but the images change at only 25 Hz, i.e. they contain pairs of identical images. Does Shotcut mean de-interlacing only in the sense of skipping every second field?
Any good ideas?

Converting 50i (50 fields per second = 25 interlaced frames per second) to 50p (50 progressive frames per second) is called “frame doubling deinterlace”. Shotcut does not support frame doubling deinterlace. If you want to perform frame doubling deinterlace, you will have to do it with an external program before using the file in Shotcut.
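
For anyone wanting to try this, here is a minimal sketch of such an external conversion, assuming FFmpeg with the bwdif filter is available (the filenames are placeholders, not anything Shotcut-specific):

```python
# Hedged sketch: frame-doubling deinterlace (50i -> 50p) with FFmpeg's bwdif
# filter before bringing the file into Shotcut. Filenames are placeholders.
import subprocess

def build_deinterlace_cmd(src, dst):
    """ffmpeg command for 50i -> 50p: bwdif in send_field mode emits one
    frame per field, doubling the frame rate; audio is copied untouched."""
    return [
        "ffmpeg", "-i", src,
        "-vf", "bwdif=mode=send_field",
        "-c:a", "copy",
        dst,
    ]

cmd = build_deinterlace_cmd("camera_50i.mts", "camera_50p.mp4")
# subprocess.run(cmd, check=True)   # run it once the input file exists
print(" ".join(cmd))
```

The same line works from any FFmpeg front-end that accepts a custom filter string.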

Thank you! I got it. Luckily I can export it as “interlaced”, meaning without loss…
But there seem to be a few little bugs:

  1. This interlaced export is not accessible as soon as I activate the checkbox in Settings > Player > Progressive.
  2. As soon as I have used Filter > Stabilize, images change at only 25 Hz instead of 50 Hz, meaning the 50 Hz fields are lost.
  3. During export as interlaced, the de-interlacing options are available but should be disabled, because they are irrelevant there.

    Could we report this to the programmers, including the wish for frequency-doubling deinterlace? Since the de-interlace algorithm is already present, it should be a small thing to read the sources at their real field rate.

This requires an entire frame, not just a field. A stabilizer would struggle on native interlace because it needs to track the movement of object edges. But with interlace, every other line would be a hole (no image data) where tracking can’t happen with any level of precision. The amount of guesswork forced on the stabilizer would defeat the purpose. The video needs to be deinterlaced in advance with a double rate algorithm like bob-weaver.

They’re still relevant. Any scale operation that changes the vertical resolution of the video (such as the SPR filter) will cause the video to be deinterlaced, scaled, then re-interlaced. Most scalers do not work directly on interlaced sources.

1 Like

@Austin correctly points out some examples where Shotcut will automatically convert to progressive as needed. The reality is that many filters require whole progressive frames and it is common for interlaced sources to be deinterlaced when filters and transitions are applied. Shotcut has some limitations when it comes to processing interlaced fields.

Frame doubling deinterlace would result in the frame rate changing in the processing pipeline. This is not currently supported by the underlying video framework that we use (MLT). There is some work in progress to eventually support this, but it will be a while before that work is done.

It would be great if frequency-doubling deinterlace could be realized. As a general standard before processing, of course, because it would solve many secondary problems and workarounds which now seem to be necessary. (It would better be called ‘frame-frequency-preserving deinterlace’…)
Example: If we change the vertical resolution for export, as Austin says, I guess that at present it cannot render a healthy interlaced result, because it has to de-interlace first, and therefore cannot generate real interlace with 50i unless it preserves the field rate.
So I repeat: I would be happy, and I’m looking forward to it (having a lot of interlaced footage, like some people, I guess).
And thanks a lot to you developers! I like this genius software a lot! You did a great job.

P.S.: Any reasonable video player does it automatically, for example Windows Media Player. The present release of VLC player doesn’t; they seem to have the same problem. The de-interlacing algorithm could be identical, I guess; it only has to work on each field instead of each frame, and swap the upper line…

If you can do it in an external FFmpeg GUI first, then the BBC released their W3FDIF into FFmpeg a few years ago. It’s a true multi-field temporal filter that maintains temporal resolution.
(As it’s about 30 years old, it was designed to run on the hardware available then, so it isn’t hugely complex and runs relatively fast.)

Would be great if it were possible to use a decent de-interlacer, even if this means running the entire project at the higher frame rate and frame-doubling any 25p content.

Frame-doubling any 25p content should not even be necessary(?), since Shotcut knows the format (interlaced or not) beforehand and could automatically activate the ‘decent deinterlacer’ for any interlaced source.
By the way: the “interlace” topic seems to be often neglected due to a confusion caused by an unlucky term. Instead of saying e.g. “1080i, 25 Hz”, the more appropriate term would be “1080, 50i”, and otherwise 50p. (The number of lines is more related to resolution and aspect ratio, whereas the frequency term should always carry the information whether progressive (meaning frames) or interlaced (meaning fields).)

If 50i is deinterlaced into 50p, then the timeline will need to be 50p. If a 25p video is dropped onto that timeline, then it must be frame-doubled up to 50p to match.

The challenge with quality deinterlacing is that it is slow unless GPUs are involved. The good algorithms these days (the successors of w3fdif) use edge awareness, cubic interpolation, motion compensation, and even neural processing to make motion look smooth. The frame rates for CPU-only deinterlacing are often as low as 2fps. Getting real-time preview and compositing while editing video is a lot to ask unless GPU support is present. This is why it’s recommended to deinterlace first using some other tool, then bring the progressive footage into Shotcut for editing. It pairs the highest quality deinterlace (slow) with a fast and straight-forward progressive editing experience.

1 Like

Seems to be a misunderstanding. The presently used de-interlacer brings perfectly good results, looking at the still images. Thus, the only step needed is to make such an algorithm (or the same one) run at double rate, i.e. twice per frame. In other words, to repeat the identical job for the next field instead of skipping every second field. The quality would then be more than enough, since the only problem at present is to avoid the loss from the original 50 Hz down to 25 Hz. The other features you mentioned (edge awareness, motion compensation etc.) are needed for real frequency doubling, e.g. from 25p into 50p. This was not the request; that is certainly a very heavy process and would be better done with external software, as Austin says, before or even better after Shotcut.

The current YADIF deinterlacer struggles in situations that perhaps your footage hasn’t challenged it with yet. YADIF is okay but not awesome. w3fdif also struggles, for example with static scenes and top-field-first encoding. bwdif combines the strengths of these two algorithms plus cubic interpolation to generally get better results. But that’s the tip of the iceberg.

When deinterlacing 50i into 50p, literally half of the visual data is missing at each 1/50th of a second. This means a deinterlacer has to fabricate half of what we see out of thin air. This is why going from a single-rate to a double-rate deinterlacer is not a trivial thing. This is why motion compensation and neural networks get involved. A simple averaging algorithm is not good enough because motion is not linear like the results of an average would be. FFmpeg supports a number of deinterlacers like nnedi, mcdeint, bwdif, and others that try to use these advanced techniques to fill in the gaps more effectively than an averaging algorithm. But quality comes with a speed tradeoff, unless you have broadcast-station dollars to purchase dedicated GPU hardware.

1 Like

I can’t agree to this (“going from a single-rate to a double-rate deinterlacer is not a trivial thing”). The presently used deinterlacer produces images which are good enough. It replaces the missing lines 2, 4, 6 etc. with sufficient quality. There is no reason why the same process can’t be done with the lines 1, 3, 5… a 1/50 second later for the next field. The job of replacing missing lines is identical, since usual interlaced footage contains all its fields in equal steps of 1/50 second. That would solve all problems. The disturbing thing is the frequency loss from 50 Hz to 25 Hz. You know that this means a big loss in smoothness.

In VLC player V2.2.8 (which still supports de-interlacing), I tested all available de-interlace modes. Most of them (linear, mean value, yadif, yadif 2x) render the field frequency (50 Hz) and have relatively small quality differences among each other, visible only if you stop the video and look at single images, at moving edge structures. The same with the present Windows Media Player. (The present VLC player version can’t do it.) I doubt that this is what is referred to as a “double-rate deinterlacer”.

I correct myself in one point: it is NOT necessary to run deinterlacing prior to the processes, NOR to run the whole project at a higher rate. As Brian pointed out, Shotcut will automatically convert to progressive as needed (which is: for any geometry effect, speed change or video format change). This is perfect, except that this existing progressive conversion should render at the original field rate, not the frame rate, even though the prior process is based on frames.
That would solve all problems.

(Without having an idea about the implementation: might it be possible to call the routine separately for the top field and the bottom field? Or to use a doubled value when transferring a rate variable, or to try installing the module in another thread…?)

It doesn’t fill in the missing lines with all-new material. It merges lines 1,3,5 with lines 2,4,6 from the next field and does comb filtering removal to fix offsets. This merge is why the rate drops from 50i to 25p. At this point, there are no more source lines left over to get from 25p to 50p. A deinterlacer has to invent new material to create more frames. All “real” lines were already used up by merging two fields into a single frame. This is why it isn’t trivial to get to double rate. This is why there is a big quality difference between the fast field-doubling deinterlacers and the slow interpolation deinterlacers.

Before you say the next frame should simply merge 2,4,6 with 3,5,7… please remember that the previous frame and this new frame would both have 2,4,6 in them. Perceptually, half the frame has not changed, which means the perceived frame rate will not actually double. If averaging is added to the process, there is a little improvement. But it’s not the same as full temporal restoration by having an interpolator invent totally new and reasonable lines to fill in the gaps.
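
The difference can be sketched with toy labelled fields (a simplified illustration, not Shotcut code); each line records the field it came from, so the repeats are countable:

```python
# Toy model of field weaving: a 6-line frame, lines labelled (field, line).
# An illustration only; real deinterlacers work on pixels, not labels.

def make_fields(n_fields, lines_per_frame=6):
    """Field i carries the odd lines if i is even (top), else the even lines."""
    return [{line: (i, line)
             for line in range(1 + i % 2, lines_per_frame + 1, 2)}
            for i in range(n_fields)]

def weave(f_a, f_b):
    """Merge two complementary fields into one full frame."""
    frame = dict(f_a)
    frame.update(f_b)
    return frame

fields = make_fields(4)

# Single-rate weave: fields consumed in pairs, so 4 fields -> 2 frames (rate halves).
single = [weave(fields[i], fields[i + 1]) for i in range(0, 4, 2)]

# Sliding weave ("merge 2,4,6 with 3,5,7"): 4 fields -> 3 frames, but every
# consecutive pair of frames shares all lines of the field they have in common,
# so half of each frame is unchanged and the perceived rate does not double.
sliding = [weave(fields[i], fields[i + 1]) for i in range(3)]
shared = [line for line in sliding[0] if sliding[0][line] == sliding[1][line]]
print(len(single), len(sliding), sorted(shared))  # -> 2 3 [2, 4, 6]
```

Half of the six lines survive unchanged between consecutive sliding-weave frames, which is exactly the perceptual problem described above.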

I wasn’t sure exactly how to interpret this, so for the sake of clarity… An interlaced field is half of a frame, which happens at each 1/50th of a second. That means all fields of a frame are not present until 1/25th of a second intervals. This is why a single-rate deinterlacer drops the frame rate from 50 to 25. It is merging odd+even fields into a single frame. The loss of temporal resolution and the addition of comb artefacts is why the merged footage looks less smooth.

If you are happy with the look of real-time VLC playback, that’s great. That level of quality could theoretically be worked into Shotcut and probably maintain real-time preview provided MLT supported frame rate changes. (I am not a developer and do not speak for them; I’m just sharing what I know.) However, this is not the best that deinterlacing can look. Some people set the bar much higher, and it is possible to get there with extra processing time.

To your point, if MLT supported frame rate modification (converting a source to double-rate), then quickly fabricating frames to get from 50i to 50p could be done by algorithms like w3fdif and bwdif which analyze three source fields to help invent the missing lines. It might be good enough for most people in most situations.

I don’t understand this one. If the conversion renders in the original field rate, that means the output is 50p. To use that output, the timeline would need to be 50p, which is what we’ve referred to here as a “higher project frame rate”. How could the conversion running at 50p field rate be dropped onto a 25p timeline without removing half the frames? Doing so would produce the same not-smooth movement we are seeing today.

Oooooh, as I type this, I think I see where the confusion is now. The missing detail is that for all practical purposes, all processing and output of Shotcut is progressive. When export options are set for interlace, that doesn’t mean processing happens at 50 fields per second despite the sound of it. It means processing happens at 25p, hence 50i converted to 25p… then at export time, the progressive frames are encoded and signaled as interlaced in the metadata. However, there is no 50-field temporal resolution present. It is only 25-frame resolution even though the output format is flagged as interlaced. If Shotcut output was analyzed field at a time, and if deinterlacing had been triggered during processing, then there would be no time advancement from fields 1,3,5 to fields 2,4,6 (assuming TFF) as there would be with true interlaced material. So yeah, this little detail will probably disappoint you as it changes the math of everything.

I think I finally understand what you’re saying now. Given the following 50i video, with fields numbered frame.field (1.1, 1.2, 2.1, 2.2, 3.1, …):

2.2 <-- Export process is at this point

Let’s say we’re exporting and there is a Size, Position, Rotate filter on the clip, which needs a complete progressive frame to feed to the scaler. Bear in mind that Shotcut is internally storing video as frames at 25p, not fields at 50i. So the first change needed would be a way to flag the timeline as interlaced rather than progressive so that the export process steps through time in increments of 1/50th rather than 1/25th. Then, I’m assuming you want to construct a complete frame at position 2.2 by double-rate deinterlacing. This complete frame would be fed to the scaler, then only the even lines of the scaled image would be fed back into the export stream to overwrite the existing even lines. The odd lines from the scaler would be discarded. The end result, whether exported as true 50i or as 25p with interlace metadata, would show time movement with each new field.
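
A rough sketch of that export step, with frames as plain lists of rows; all names here are made up for illustration, and `scaler()` is a trivial stand-in for Size, Position, Rotate:

```python
# Sketch of the proposed export step: deinterlace to a full frame, scale it,
# then write back only the lines belonging to the current field's parity.

def scaler(frame):
    """Pretend scaler: a real one would mix neighbouring rows together."""
    return [row.upper() for row in frame]

def render_field(full_frame, parity):
    """Scale a deinterlaced full frame, keep only one field's lines.

    parity 0 keeps rows 0, 2, 4... (top field); the other scaled rows are
    discarded, as described in the text above.
    """
    scaled = scaler(full_frame)
    return [scaled[i] for i in range(parity, len(scaled), 2)]

frame_2_2 = ["a0", "b1", "a2", "b3"]    # deinterlaced frame at position 2.2
field_out = render_field(frame_2_2, 1)  # bottom-field lines fed back to export
print(field_out)                        # -> ['B1', 'B3']
```
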

This works in theory (mostly). The all-important detail is how good the deinterlace at position 2.2 happens to be. If the deinterlaced frame is a merge with 2.1 and/or 3.1, then the scaler is going to see “time fragments” of past and/or future. The passage of time will not stay segregated between odd and even lines after a scale, because up- or down-sizing the image will merge lines together (and therefore the points in time they represent). If 2.2 doesn’t look like a totally independent reconstruction of the event happening at 2.2 (which is what motion compensation and neural networks try to fabricate), then we’re going to get fragments of 2.1 and/or 3.1 mixed into 2.2, which means those fragments won’t look new when we view 3.1 next. If 3.1 doesn’t look totally new and different from 2.2 (where motion is concerned), then we aren’t going to perceive an increase in frame rate or smoothness.

The other complication is mixing progressive and interlaced videos on the same timeline. In the above example, if a 25p video is dropped on the 50i timeline and the export process steps through time in 1/50th increments, it means double processing for the 25p videos. In theory, Shotcut could know the clip was progressive, calculate filters for all N.1 positions, cache those images, and reuse them for N.2 positions. That would prevent export times from doubling, and prevent temporal shifts that would happen when filters are applied twice per frame but on alternating lines. It gets even more interesting if a 30p video or even an 8p surveillance video is dropped onto a 50i timeline, which Shotcut allows. The same concepts still work, but the code gets complex.

I’m not sure how else to maintain the 50i feel yet keep the timeline at 25p. Maybe you have a better way and I over-complicated it. Unfortunately, interlace is complicated regardless of the method used, which is why I deinterlace externally before editing in progressive, and call it a day. :rofl:

Happy that you got the basic idea!

If you take lines 1, 3, 5 from each top field and create lines 2, 4, 6 as a combination from neighbouring bottom fields (or vice versa), you get a frequency-halving deinterlace (e.g. 25p from 50i), just as is implemented now. Identical quality you would get when taking lines 2, 4, 6 from each bottom field and inserting lines 1, 3, 5 from neighbouring fields. There is no quality difference between both versions, and each produces 25p. Only when performing both versions in alternation, using all top and bottom fields, will you do a real deinterlacing job into real 50p, because all lines from all fields come into use, at their proper moment. Keep in mind that the temporal difference is 1/50 second (or 1/60) for all neighbouring fields, in any healthy interlaced footage.
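
The alternation described above can be sketched as a toy bob/weave (an illustration, not Shotcut code): one output frame per input field, with the missing lines borrowed from the neighbouring field as a stand-in for real interpolation:

```python
# Toy double-rate deinterlace: one output frame per input field, so the field
# rate is preserved (4 fields -> 4 frames). Labels (field, line) mark origin.

def make_fields(n, lines=6):
    """Field i carries lines 1,3,5 if i is even (top), else lines 2,4,6."""
    return [{l: (i, l) for l in range(1 + i % 2, lines + 1, 2)}
            for i in range(n)]

def bob_weave(fields):
    frames = []
    for i, field in enumerate(fields):
        neighbour = fields[i + 1] if i + 1 < len(fields) else fields[i - 1]
        frame = dict(neighbour)   # lines of the other parity, borrowed
        frame.update(field)       # this field's own lines, at their own moment
        frames.append(frame)
    return frames

frames = bob_weave(make_fields(4))
print(len(frames))  # -> 4: every field yields a frame, so 50i gives 50p
```

Every field anchors its own output frame, so no two consecutive frames share their anchored lines; this is the frequency-preserving behaviour requested.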

Maybe you want to implement it for a try, to see the immense difference in quality. Use a challenging movie with much motion and edges. You will find that deinterlaced 50p is not only much smoother than 25p, but in addition, the quality loss of the substituted lines is reduced, because those lines are then alternating.

Where to implement? I believe I understand the problems you mentioned.
My first suggestion was to generally deinterlace all 50i clips at the beginning, and from then on treat them internally just as 50p clips (as happens when you do it with external software). Everything would work well, but it would have 3 disadvantages, which is why my second suggestion was different. Here are the disadvantages: 1) Converting prior to the processing is basically outside the concept used now. 2) Resources are wasted unnecessarily in cases where it is not needed (export into 25p or 50i with no geometrical filters or speed-changing effects). 3) The feature “export interlaced” (if anybody uses it at all) would suffer in quality, because top and bottom fields could be exchanged at any moment in the 50 Hz timeline, and the re-interlacing would then catch the wrong lines, causing unnecessary motion blur. But there are no other problems; this shortcoming exists only in this case of re-interlacing.

Here I repeat and explain my second suggestion:
Use the existing deinterlacer at double rate, and use it the same way as now. Shotcut already activates it at the necessary steps and places, as Brian has explained.

– For conversion into 25p, deinterlacing can be processed at 25 Hz or 50 Hz, and the timeline can be 25 Hz or 50 Hz. (From 50 Hz-deinterlaced material it does not matter whether odd or even frames are used in the 25 Hz export, or if the sequence gets swapped.)
– For conversion into 30p, 50p or 60p, deinterlace at 50 Hz before converting into the new frame rate (important).
– For speed-changing effects, deinterlace at 50 Hz (important).
– For geometrical filters, deinterlace before the filter.

  • If converted into 50i with no geometrical filter and no speed change: no need to deinterlace (as is also done now), but it cannot harm either, if the following point is kept in mind.
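
The rules above could be condensed into a small decision function; the names and the rule set are the poster's proposal for 50i sources, not Shotcut's actual behaviour:

```python
# Sketch of the proposed rules as a decision function (hypothetical names).

def deinterlace_rate(target, geometry_filter=False, speed_change=False):
    """Return the deinterlace rate in Hz a 50i source would need, or None."""
    if target == "50i" and not geometry_filter and not speed_change:
        return None    # re-interlace path: no deinterlacing needed
    if target == "25p" and not geometry_filter and not speed_change:
        return 25      # 25 Hz suffices here; 50 Hz would not hurt either
    return 50          # rate conversions, speed effects, geometry: 50 Hz

print(deinterlace_rate("50p"), deinterlace_rate("50i"))  # -> 50 None
```
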

Timeline: Only to avoid “problem 3” (export in interlaced format) could it be useful to keep 25 Hz steps in the timeline. How to keep 25 Hz steps in the timeline while transferring 50 Hz content: in the present version, the interlaced format implements this automatically. The timeline increments in 25 Hz steps, but each frame carries 2 fields at a temporal distance of 1/50 s, therefore the process transfers (hidden) 50 Hz, as can be seen in the healthy 50i output. The same can be achieved using a 50 Hz timeline but marking each frame which was a top field; these markings will come in 25 Hz steps and could guide the cutting points. But this should be ignored during speed-changing effects, because they require good temporal resolution. After speed effects, or after inserted clips of any other frequency, the raster markings could be resumed as soon as deinterlaced material comes again. But all this is only for the option of export into an identical interlaced format; I see no other constellation making this necessary.

I’m adding a feature for the next release to add a “Deinterlace” option to the “Convert to Edit Friendly” conversion. This option adds a bwdif filter to the FFmpeg conversion command, which will result in a doubled frame rate.

1 Like

This seems to be a good solution!
Looking forward to it…
Thank you.

This is now available in the beta:

Testing would be appreciated.

It’s fundamentally working. Congratulations!
Needs refinements and easier operation. All the problems I faced and report below seem to be due only to the implementation as a pre-conversion. That’s why, let me first give my suggestion:


Instead of improving the present implementation, you could remove it for the coming release and target the same thing (it works very well) as an improvement of the deinterlacing which was already implemented. If I understand correctly, the former deinterlacing does not create extra files, so it needs no extra encoding/compressing. And it happens automatically whenever needed, which is a) for export and b) before any geometry-affecting filters. In these cases, double rate should not disturb, only improve. Implementing it there will simplify the use a lot, compared to now.

(It will need careful testing by the programmer, and observing the implications mentioned in this thread, but I guess it is possible. With e.g. 25p footage, the timeline could be set to 50 Hz and the export process would step through time in 1/50ths; it has to know that the material was interlaced, which it already does because this triggers the deinterlacer, and it can do so even before a geometry filter.)

No idea if the deinterlacer used for geometry filters is also defined by the dropdown choice in Export > Advanced > Deinterlacer. Maybe you can add a “double rate” choice to this de-interlace dropdown, or provide a new setting in the general menu with the options to either run deinterlacing as before, or run everything at double rate. But maybe this option is not even necessary and all deinterlacing can be done at double frequency. All scenarios seem to match well with the double rate. Only for export into an identical interlaced format should the deinterlacer not be used, but if I understand correctly, it is already done this way.

Then the user only needs to choose the project frequency; everything else happens automatically.

Since the first opened file automatically presets the project standard, interlaced files should then set the project to their field rate instead of their frame rate. This is the only point for which a setting should be created (to make it an option), so that the new version can behave identically to the old one. For the quality improvement of the new deinterlace (namely: avoiding identical images in neighbouring frames), a disable option seems unnecessary.

Problems in the present beta version (relevant only if my suggestion can’t be used):

  1. The file size is 10× larger than the original footage (straight from the camera), even at the smallest quality setting. --> Why not define the quality in relation to the data rate of the original? As a default it should be double, due to the doubled frame rate. With the middle quality, labelled “good”, the result is not usable: the file size is 50× larger than the original, Windows Media Player cannot play it, and VLC player shows a complete mess (motion blur). Shotcut’s own player can play it, but when pulling it into the timeline, Shotcut sometimes crashed (no response, “inactive” with 0% processor according to the Task Manager in Windows 8.1).

  2. Make it easier by removing the “advanced” button in “Convert” and always displaying everything. Otherwise it is dangerous and tricky, because the user is completely unclear whether the (hidden) settings are kept or will disappear, and on which occasion this happens, so we have to keep it open anyway.

  3. Finding a setting where the export delivers 50 fps seems very complicated. My first try had 50 fps, but with images changing at 25 Hz. I got a correct result only when de-interlacing is deactivated during export, which seems not always possible, because it is fixed on YADIF and, in the global settings, the video mode is set to 50p or 60p. But I did not understand the meaning of the checkbox in Properties > Convert > Advanced > “Override frame rate”. Maybe it means: match the frame rate of the converted result to the setting in Main Menu > “Video Mode”? In that case, the labels could be named better. And it is unclear what purpose the numeric input for the frame rate has, and when to use it.

  4. Using it in the present way is not easy (like using an external de-interlacer): we have to create separate files for each clip and spend more conversion time on the additional encoding/compression step.