I think I finally understand what you’re saying now. Given the following 50i video…
FRAME.FIELD
1.1
1.2
2.1
2.2 <-- Export process is at this point
3.1
3.2
4.1
4.2
Let’s say we’re exporting and there is a Size, Position, Rotate filter on the clip, which needs a complete progressive frame to feed to the scaler. Bear in mind that Shotcut internally stores video as frames at 25p, not fields at 50i. So the first change needed would be a way to flag the timeline as interlaced rather than progressive, so that the export process steps through time in increments of 1/50th of a second rather than 1/25th. Then, I’m assuming you want to construct a complete frame at position 2.2 by double-rate deinterlacing. This complete frame would be fed to the scaler, then only the even lines of the scaled image would be fed back into the export stream to overwrite the existing even lines. The odd lines from the scaler would be discarded. The end result, whether exported as true 50i or as 25p with interlace metadata, would show time movement with each new field.
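To make the field write-back step concrete, here is a minimal Python sketch (not Shotcut code; the frame representation and function name are hypothetical). A frame is a list of scanlines; after the deinterlaced frame has been scaled, only its even lines are copied back over the export stream:

```python
def write_back_field(export_frame, scaled_frame, keep_even=True):
    """Overwrite only the even (or odd) scanlines of export_frame with the
    matching lines from scaled_frame; the other lines of the scaled result
    are discarded, as described above."""
    start = 0 if keep_even else 1
    out = [row[:] for row in export_frame]   # copy, don't mutate the input
    for y in range(start, len(out), 2):
        out[y] = scaled_frame[y][:]
    return out

# Tiny 4-line "frames": the numbers stand in for rows of pixels.
existing = [[1, 1], [2, 2], [3, 3], [4, 4]]
scaled   = [[9, 9], [8, 8], [7, 7], [6, 6]]
print(write_back_field(existing, scaled))
# → [[9, 9], [2, 2], [7, 7], [4, 4]]  (even lines replaced, odd lines kept)
```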
This works in theory (mostly). The all-important detail is how good the deinterlace at position 2.2 happens to be. If the deinterlaced frame is a merge of 2.1 and/or 3.1, then the scaler is going to see “time fragments” of the past and/or future. The passage of time will not stay segregated between odd and even lines after a scale, because up- or down-sizing the image merges lines together (and therefore the points in time they represent). If 2.2 doesn’t look like a totally independent reconstruction of the event happening at 2.2 (which is what motion compensation and neural networks try to fabricate), then we’re going to get fragments of 2.1 and/or 3.1 mixed into 2.2, which means those fragments won’t look new when we view 3.1 next. If 3.1 doesn’t look totally new and different from 2.2 (where motion is concerned), then we aren’t going to perceive an increase in frame rate or smoothness.
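To see why a scale mixes the time fragments, consider a naive 2:1 vertical downscale that averages adjacent line pairs (a hypothetical sketch, not Shotcut’s actual scaler). Every output line blends an even line with an odd line, so if those lines came from different moments, the blend contains both:

```python
def downscale_vertical_2to1(frame):
    """Naive 2:1 vertical downscale: average each pair of adjacent lines."""
    return [[(a + b) / 2 for a, b in zip(frame[y], frame[y + 1])]
            for y in range(0, len(frame), 2)]

# Pretend even lines hold pixel values from time t (value 0) and odd lines
# hold values from t + 1/50 (value 10).
frame = [[0, 0], [10, 10], [0, 0], [10, 10]]
print(downscale_vertical_2to1(frame))
# → [[5.0, 5.0], [5.0, 5.0]]  -- every output line is a mix of both moments
```

After a resize like this, no single line represents only one instant anymore, which is exactly the segregation problem described above.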
The other complication is mixing progressive and interlaced videos on the same timeline. In the above example, if a 25p video is dropped on the 50i timeline and the export process steps through time in 1/50th increments, it means double processing for the 25p videos. In theory, Shotcut could know the clip was progressive, calculate filters for all N.1 positions, cache those images, and reuse them for N.2 positions. That would prevent export times from doubling, and prevent temporal shifts that would happen when filters are applied twice per frame but on alternating lines. It gets even more interesting if a 30p video or even an 8p surveillance video is dropped onto a 50i timeline, which Shotcut allows. The same concepts still work, but the code gets complex.
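The caching idea for progressive clips could look something like this (a hypothetical sketch; the names and structure are assumptions, not Shotcut internals). The filter chain runs once at the N.1 position, and the cached image is reused at N.2:

```python
def render_field(frame_index, field, cache, apply_filters):
    """For a progressive clip on an interlaced timeline, run the filter
    chain only once per source frame: compute at the .1 position and
    reuse the cached result at the .2 position."""
    if frame_index not in cache:
        cache[frame_index] = apply_filters(frame_index)  # the expensive step
    return cache[frame_index]

calls = []
def filters(i):
    calls.append(i)          # count how often the filter chain actually runs
    return f"filtered frame {i}"

cache = {}
render_field(2, 1, cache, filters)   # position 2.1: filters run
render_field(2, 2, cache, filters)   # position 2.2: cache hit, no rerun
print(calls)
# → [2]  -- the filter chain ran once for frame 2, not twice
```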
I’m not sure how else to maintain the 50i feel yet keep the timeline at 25p. Maybe you have a better way and I over-complicated it. Unfortunately, interlace is complicated regardless of the method used, which is why I deinterlace externally before editing in progressive, and call it a day.