Clarification on what "GPU acceleration" means

You already have most areas covered by your numbered list. However, there is one more area, and some reorganization:

  1. Decoding could be GPU-accelerated but is not currently in Shotcut. It is difficult when dealing with multiple video tracks, but I want to eventually bring it to the Source player and the bottom video track. Another difficulty is converting the decoded output to an OpenGL texture for subsequent processing without making the trip through RAM, and then doing that for the variety of APIs from different vendors on each OS (see the decoding sketch after this list). Decoding is actually not a huge bottleneck in the overall pipeline thanks to heavy optimization within FFmpeg and CPU SIMD.

  2. Image Processing. This covers filters, transitions, and compositing/blending. This is the biggest bottleneck because most of the CPU code lacks much optimization. As a result, instead of optimizing each filter individually, we have been throwing multi-threading at it via multiple slices of an image and multiple frames at a time (the so-called Parallel processing option in Export; see the slice-threading sketch after this list). GPU acceleration can currently be done in a few ways: OpenGL via Movit, WebGL via WebVfx, and possibly something in FFmpeg's libavfilter with OpenCL, although that is not currently in Shotcut. When we write "GPU Effects" in the context of the hidden Shotcut Settings menu entry, we mean Movit. Movit uses 16-bit floating point per color component, which is roughly equivalent to 10-bit integer. It is also able to chain filters together into a combined shader instead of rendering each effect to a texture, let alone moving the video data between CPU and GPU RAM. Thus, it is the best of these approaches. However, it is not user-extensible like WebGL, and it is currently impossible to bring the same benefits of Movit to WebGL. One could define a filtering framework within a single WebGL filter, but that would not interact with transitions and compositing.

  3. Preview. Shotcut uses custom OpenGL code to display video, both because OpenGL is a cross-platform API and because it offloads the last-step colorspace conversion and scaling to the display viewport (including zoom) onto the GPU (see the shader sketch after this list).

  4. Encoding. You got it.

  5. The user interface, specifically the Timeline, Keyframes, and Filters panels. These use Qt's QML (Qt Quick) API, which is an OpenGL scene graph API. Basically, each UI object renders to an OpenGL texture, and those textures are then arranged and composited. This is where Settings > Drawing Method comes into play on Windows. The Chrome team found WebGL so unreliable on many OpenGL implementations that they created middleware (ANGLE) to convert OpenGL to Direct3D, and Qt can use the same approach (see the Qt sketch after this list). macOS avoids this problem because it has relied heavily on OpenGL itself for many years. Linux desktops can often have this problem too, but there is nothing like Direct3D to fall back to except Mesa-based software rendering (also available in Shotcut for Windows as of v19.04). Shotcut also uses this technology to overlay a UI on the preview video, something we call VUI; think of the rectangle control for Size and Position, although it can do other things.
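
To make item 1 concrete, here is a minimal sketch of hardware decoding with FFmpeg's hwaccel API. It is not Shotcut code; it assumes VA-API on Linux, and every other vendor/OS combination (D3D11VA, VideoToolbox, CUDA, etc.) needs its own variant, which is part of the burden described above. Note the explicit copy back to system RAM at the end, which is exactly the trip that zero-copy OpenGL texture interop would avoid:

```cpp
// Minimal sketch of FFmpeg hardware decoding, assuming VA-API on Linux.
// Not Shotcut code; error handling reduced to the essentials.
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/hwcontext.h>
}

// Attach a hardware device to the codec context before avcodec_open2().
// A get_format callback that selects AV_PIX_FMT_VAAPI is also needed;
// it is omitted here for brevity.
static int attach_hw_device(AVCodecContext *ctx)
{
    AVBufferRef *device = nullptr;
    int ret = av_hwdevice_ctx_create(&device, AV_HWDEVICE_TYPE_VAAPI,
                                     nullptr, nullptr, 0);
    if (ret < 0)
        return ret;
    ctx->hw_device_ctx = av_buffer_ref(device);
    av_buffer_unref(&device);
    return 0;
}

// Decode one packet. The decoded surface lives in GPU memory; the
// av_hwframe_transfer_data() call is the round trip through RAM that
// zero-copy texture sharing would eliminate.
static int decode_to_ram(AVCodecContext *ctx, AVPacket *pkt, AVFrame *out)
{
    int ret = avcodec_send_packet(ctx, pkt);
    if (ret < 0)
        return ret;
    AVFrame *hw = av_frame_alloc();
    ret = avcodec_receive_frame(ctx, hw);
    if (ret >= 0 && hw->format == AV_PIX_FMT_VAAPI) {
        out->format = AV_PIX_FMT_NV12; // typical software format for VA-API
        ret = av_hwframe_transfer_data(out, hw, 0); // GPU -> system RAM
    }
    av_frame_free(&hw);
    return ret;
}
```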
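
For the slice threading mentioned in item 2, here is an illustrative sketch of the idea (not MLT's actual implementation): a trivial brightness filter whose rows are split into one horizontal slice per hardware thread. The frames-at-a-time "Parallel processing" in Export stacks another layer of the same idea on top of this.

```cpp
// Illustrative sketch of slice-based threading for a CPU filter.
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

// Apply a brightness gain to rows [y0, y1) of an 8-bit RGBA image.
static void brightness_slice(uint8_t *image, int width,
                             int y0, int y1, float gain)
{
    for (int y = y0; y < y1; ++y) {
        uint8_t *p = image + static_cast<size_t>(y) * width * 4;
        for (int x = 0; x < width * 4; x += 4)
            for (int c = 0; c < 3; ++c) // leave alpha untouched
                p[x + c] = static_cast<uint8_t>(
                    std::min(255.0f, p[x + c] * gain));
    }
}

// Split the image into one horizontal slice per hardware thread.
static void brightness_threaded(uint8_t *image, int width, int height,
                                float gain)
{
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    int rows = (height + static_cast<int>(n) - 1) / static_cast<int>(n);
    for (unsigned i = 0; i < n; ++i) {
        int y0 = static_cast<int>(i) * rows;
        int y1 = std::min(height, y0 + rows);
        if (y0 >= y1)
            break;
        workers.emplace_back(brightness_slice, image, width, y0, y1, gain);
    }
    for (auto &t : workers)
        t.join();
}
```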
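
For item 3, this is the flavor of fragment shader a preview widget can use to push the final colorspace conversion onto the GPU. It is a sketch assuming full-range BT.709 planar YUV input (the yTex/uTex/vTex names are made up for the example), not Shotcut's actual shader. Scaling to the viewport, including zoom, then comes essentially for free from texture sampling:

```cpp
// Sketch of a GLSL (OpenGL 2 / ES 2 style) fragment shader that
// converts planar YUV to RGB on the GPU during display.
static const char *kYuvToRgbFragmentShader = R"(
uniform sampler2D yTex;
uniform sampler2D uTex;
uniform sampler2D vTex;
varying vec2 texCoord;
void main() {
    float y = texture2D(yTex, texCoord).r;
    float u = texture2D(uTex, texCoord).r - 0.5;
    float v = texture2D(vTex, texCoord).r - 0.5;
    // Full-range BT.709 coefficients
    gl_FragColor = vec4(y + 1.5748 * v,
                        y - 0.1873 * u - 0.4681 * v,
                        y + 1.8556 * u,
                        1.0);
}
)";
```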
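
Finally, for item 5, these are the standard Qt application attributes that a Drawing Method style setting maps onto. The sketch only shows where they must be set, before the QApplication is constructed; it does not show how Shotcut wires its own setting through:

```cpp
#include <QApplication>

int main(int argc, char **argv)
{
    // Pick exactly one; must happen before QApplication is constructed.
    // QCoreApplication::setAttribute(Qt::AA_UseDesktopOpenGL); // native OpenGL
    QCoreApplication::setAttribute(Qt::AA_UseOpenGLES);         // ANGLE -> Direct3D on Windows
    // QCoreApplication::setAttribute(Qt::AA_UseSoftwareOpenGL); // Mesa software rendering

    QApplication app(argc, argv);
    // ... create main window, QML engine, etc.
    return app.exec();
}
```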

It remains to be seen where all this dependence upon OpenGL will go as Vulkan and Metal take over.
