I've been puzzled by Shotcut's normalizing filter, too.
Because I've worked with Audacity on other, audio-only projects, I'm comfortable with its normalizing "effect" (their term). Using Shotcut, I render only the audio, massage it with Audacity, and bring that file (.wav) back into Shotcut. The Audacity effect says "Normalize maximum amplitude to -1 dB". Whatever that means, all I care about is that it gives me consistent maximum levels when using multiple audio tracks.
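As I understand it (and I could be wrong), that Audacity effect is simple peak normalization: find the loudest sample and scale everything so that peak lands at -1 dBFS. A rough sketch of the idea, with a made-up `normalize_peak` helper just for illustration:

```python
def normalize_peak(samples, target_db=-1.0):
    """Scale samples so the loudest one sits at target_db dBFS (peak normalization).

    Assumes samples are floats on a -1.0..1.0 scale, as in a float WAV.
    """
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)  # pure silence: nothing to scale
    target = 10 ** (target_db / 20)  # -1 dBFS is about 0.891 in linear amplitude
    gain = target / peak
    return [s * gain for s in samples]
```

If that's what Audacity is doing, then every clip ends up with the same maximum level, which would explain why my tracks come out consistent.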
How does that translate into the Shotcut normalize filter?