Granted. That’s why they charge big bucks for legalizers.
The only way to come close with ffmpeg is by clipping the RGB, which you did with the beach photo. It’s ugly and it’s suboptimal, but it might pass muster depending on the YUV → RGB algorithm used, and then you haggle with the client. MPEG-2 is going to mess with levels, so check the levels AFTER encoding and adjust your ffmpeg parameters accordingly.
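For the check-after-encoding step, one approach is ffmpeg’s signalstats filter, which reports per-frame min/max values you can scan for out-of-range excursions. A sketch (the file name is a placeholder):

```shell
# Decode the finished MPEG-2 file and print per-frame luma statistics.
# YMIN below 16 or YMAX above 235 flags out-of-range luma after encoding.
ffmpeg -i encoded.mpg -vf "signalstats,metadata=print" -f null - 2>&1 | grep -E "YMIN|YMAX" | head
```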
I know a colorist here in Hollywood who corrects all of his gamut errors by hand, which is fine if you’re willing to pay him $50 per hour.
Since Matroska has become the container of choice for Ut Video (thanks of course to you) this is going to be an issue when trying to bring a Ut Video file into Premiere Pro or Resolve. The last version of Premiere Pro I checked (2017) did not support Matroska, and I can confirm that Resolve doesn’t support Matroska either. Strangely enough, Resolve doesn’t support AVI at all.
I don’t know if Premiere Pro has changed that since the 2017 version, though.
In those cases, export to Ut Video in MPEG range using AVI container. Premiere can read it using a native VfW codec. Matroska is only necessary if the video needs to be tagged as full range.
EDIT: If need be, VfW codecs can usually be wrapped in MOV containers too instead of AVI.
There are a lot of requests in the Blackmagic Design forums to incorporate Ut Video and MagicYUV in a future release, and it’s caught the attention of the project management team, so maybe we will see it soon. I think VfW is already supported in Fusion 9 and above, so in theory they have the means to port it to Resolve.
One thing: note that I am performing lutrgb operations on YUV pixels. Maybe that’s why I must use crazy values as the arguments to lutrgb? I don’t know whether ffmpeg makes any kind of conversion when doing this. I think at some point I must explicitly convert the input pixels to RGB.
The goal is conformity to EBU R 103, which means R, G, and B between digital 5 and 246. This is extremely tricky to do in YUV space. I tried lutrgb as an experiment, but the results aren’t really satisfactory.
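If the frames are explicitly converted to RGB first, the lutrgb arguments can stay in ordinary 0–255 terms instead of the crazy values. A sketch of the clip (file names are placeholders, and the format bounce costs one YUV↔RGB conversion each way):

```shell
# Force RGB, clip each channel to 5..246 per EBU R 103,
# then return to 4:2:2 YUV in a lossless intermediate.
ffmpeg -i in.mov -vf "format=rgb24,lutrgb=r='clip(val,5,246)':g='clip(val,5,246)':b='clip(val,5,246)',format=yuv422p" -c:v utvideo out.avi
```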
If I’m exporting from shotcut, I could try bringing the R, G and B gains down by the same amount using the color grading tool. Everything would depend on the levels of the original video.
Doing it that way would save a generation because there would be no final pass with ffmpeg.
I thought the earlier ffmpeg command was transcoding to YUV 50 Mbps MPEG-2 per client specifications. How would Shotcut export as RGB and encode as MPEG-2 at the same time? I was thinking: straight export from Shotcut as an RGB intermediate, then do the lutrgb clip and MPEG-2 encoding with an ffmpeg command line. If mlt_image_format=rgb24 were added to Export > Other, then there would be only one round trip from YUV source footage through RGB processing/export and back to YUV final delivery. Not perfect, but probably not visibly noticeable either.
Or do one export from Shotcut directly to MPEG-2 or XDCAM, bringing the levels for each RGB channel down slightly. You’d also need to fix the bit rate at 50 Mbps, convert to 4:2:2, and convert the audio to PCM.
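For the command-line half of that, something along these lines would cover the 50 Mbps / 4:2:2 / PCM requirements. This is only a rough sketch with placeholder file names; the exact XDCAM-style flags (GOP structure, buffer size, interlacing) would have to come from the client spec:

```shell
# 50 Mbps CBR MPEG-2 at 4:2:2, PCM audio, MXF container.
ffmpeg -i graded.avi \
  -c:v mpeg2video -pix_fmt yuv422p \
  -b:v 50M -minrate 50M -maxrate 50M -bufsize 8M \
  -c:a pcm_s16le \
  out.mxf
```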
To do it really right you should bring up the gamma a little to put gray where it belongs (digital 111 for 18% gray).