My application is designed to give a quick overview of what an image would look like when a selection of LUTs is applied to it. It is a reasonable representation, but it does not strictly conform to the .cube “standard”: for example, it uses bi-linear rather than bi-cubic interpolation (though the difference is mostly indistinguishable).
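To make the interpolation point concrete, here is a minimal sketch (not my app's actual code) of linear interpolation through a 3D LUT. The function name, the assumption that the LUT is already loaded as an N x N x N x 3 array indexed [r][g][b], and the 0..1 input range are all my own illustration:

```python
import numpy as np

def apply_lut_linear(rgb, lut):
    """Interpolate one RGB triple (components in 0..1) through a 3D LUT (hypothetical sketch)."""
    n = lut.shape[0]
    # Scale the 0..1 components onto the LUT grid (0 .. n-1).
    pos = np.clip(np.asarray(rgb, dtype=float), 0.0, 1.0) * (n - 1)
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    f = pos - lo                      # fractional position inside the grid cell

    out = np.zeros(3)
    # Blend the 8 surrounding lattice points (linear interpolation along each axis).
    for dr, wr in ((0, 1 - f[0]), (1, f[0])):
        for dg, wg in ((0, 1 - f[1]), (1, f[1])):
            for db, wb in ((0, 1 - f[2]), (1, f[2])):
                idx = (hi[0] if dr else lo[0],
                       hi[1] if dg else lo[1],
                       hi[2] if db else lo[2])
                out += wr * wg * wb * lut[idx]
    return out
```

A cubic variant would look at more than the 8 nearest lattice points, which is why the two can differ slightly in smooth gradients.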
It works in RGB-space with each component ranging from 0 to 1023. Note that .cube files can specify the range over which the LUT applies via the DOMAIN_MIN and DOMAIN_MAX directives; my app currently ignores these, since they are rarely used, though I do plan to remedy this.
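For what it's worth, honouring those directives would amount to something like the sketch below. The defaults of 0.0 and 1.0 are what the .cube format assumes when the directives are absent; the function name and the 0..1023 scaling are my own illustration, not code from my app:

```python
def normalise_component(value, domain_min=0.0, domain_max=1.0, max_code=1023):
    """Map a 0..1023 component onto the LUT's input domain as a 0..1 fraction (hypothetical sketch)."""
    v = value / max_code                          # 0..1023 -> 0..1
    # Rescale so that DOMAIN_MIN maps to 0 and DOMAIN_MAX maps to 1.
    v = (v - domain_min) / (domain_max - domain_min)
    return min(max(v, 0.0), 1.0)                  # clamp values outside the domain
```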
If you are talking about applying the LUTs in Shotcut (and not my viewer app), then my understanding, from a post @brian wrote, is:
Most (all?) NLE color grading workflows operate in the RGB space.
Conversion from YUV to RGB is not mathematically lossless because not all YUV values can be mapped to the RGB color space. But video images that come from a camera originated in RGB because camera sensors are RGB. So unless your source material originated from a non-RGB source and used YUV values that don’t map to the RGB colorspace, you shouldn’t have any pixel values that get grossly modified. Any loss in the conversion should be limited to rounding errors of a single bit.
For Shotcut, we aspire to limit the pixel errors to one bit.
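To illustrate the rounding-error point (this is not Shotcut's actual conversion code, and the full-range BT.601 coefficients are just an example), a round trip through integer YUV and back typically lands within one code value of the original:

```python
import numpy as np

# Full-range BT.601 RGB -> YCbCr matrix, used purely as an example.
RGB_TO_YUV = np.array([[ 0.299,     0.587,     0.114   ],
                       [-0.168736, -0.331264,  0.5     ],
                       [ 0.5,      -0.418688, -0.081312]])
YUV_TO_RGB = np.linalg.inv(RGB_TO_YUV)

rng = np.random.default_rng(0)
rgb = rng.integers(0, 256, size=(10000, 3))       # random 8-bit pixels

yuv  = np.rint(rgb @ RGB_TO_YUV.T)                # quantise YUV to integers
back = np.rint(yuv @ YUV_TO_RGB.T)                # convert back and quantise

print(np.abs(back - rgb).max())                   # at most 1 code value off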
There is a relevant thread here: