First Bass & Treble and then Normalise: Two Pass, or vice versa, plus some strange buzzing


Two questions related to audio tracks.

Is there any recommendation concerning the order of audio filters? Should I apply Normalise: Two Pass first and the other filters afterwards (e.g. Bass & Treble with a contour effect, meaning Bass +5.9, Middle 0, Treble +5.9), or the other way round: the other filters first and Normalise: Two Pass at the end?

Moreover, I have a rather strange problem. I have two filters: Bass & Treble (with the values given above) and Normalise: Two Pass at -15 LUFS. When I play the audio track directly from Shotcut, the audio is fine. However, when I export it to mp4 (H.264 High Profile; audio: stereo, sample rate 44100, codec aac, average bitrate 256 kb/s), then in the final video I sometimes get a strange buzzing when certain letters are pronounced, e.g. words starting with M, or the word “Hi”. I’m wondering whether it’s a problem with the recording quality, insufficient audio codec bitrate, or something else? Any suggestions?

Thanks for your help! :slight_smile:

Do all major volume modifications first. Whether that’s a Gain filter or some form of Normalize filter, do it first. EQ adjustments made at a quiet volume level will sound different if the volume is significantly raised (and vice versa), meaning EQ would have to be redone at the new volume level. This is due to the Fletcher-Munson Curve. EQ can be applied after the tracks are relatively close to their final volume levels.

That’s plenty of bitrate for speech. Something else is going on.

Are the sources 44,100? If they are 48 kHz, then something could be happening converting 48 kHz to 44.1. But it shouldn’t be as prominent as what you’re hearing.

Do any other codecs make the noise? Like AC-3, Opus, pcm_s16le? If they don’t buzz, then you know AAC is causing it.

Just occurred to me that those settings might be a bit extreme for the Bass & Treble filter. Maybe it’s combing or something during processing. What about switching to the parametric EQ with similar settings, but a very low-valued Q setting, like somewhere between zero and one? Or use the shelving bands? Maybe that wide bandwidth will be gentler on the sound when making big gain changes.


Thanks a lot for all the answers. One reason might be that I applied Bass & Treble first and Normalize: Two Pass second. Then, thinking it over, I started to have doubts, and your post only confirms them. I will reverse the order.

The issue is that for some projects I have e.g. 50 different small audio clips with filters. Is there any easy way to reverse the order in the XML? Any ideas?

Source files are 44.1 kHz; that’s why I didn’t want to convert them to 48. The buzzing is audible in the voice, but I also have background music, so when the two are added together, that might also contribute — perhaps the bandwidth could be better here.

I haven’t tried AC-3 or any other codecs yet.

As changing one filter to another is quite time-consuming (not sure it can be done easily in the XML), I might simply reduce the values to e.g. Bass +3 and Treble +3. For that, I know the trick for doing it in the XML.

I believe this buzzing might be the result of a combination of a not-quite-good recording and some processing problems.

In the XML, there should be a <producer> for each clip with two <filter> elements inside, one for Bass & Treble and one for Normalize. In theory, you should be able to cut-and-paste the second <filter> block to come in front of the first <filter> block. They don’t have to be renamed. The order of appearance in the XML file is what matters.


But if there are e.g. 50 clips, even this copy-and-paste might be a challenge…
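If editing 50 clips by hand is too tedious, a small script can do the swap automatically. Here is a rough sketch, assuming each clip’s `<producer>` contains exactly the two `<filter>` blocks you want to reorder (the function name is my own, and this is not an official Shotcut tool — back up the .mlt file before trying anything like this):

```python
# Rough sketch: reverse the order of the two <filter> blocks inside every
# <producer> of a Shotcut/MLT project file. Assumes each clip has exactly
# the two filters you want swapped -- back up your project file first.
import xml.etree.ElementTree as ET

def swap_clip_filters(path_in, path_out):
    tree = ET.parse(path_in)
    for producer in tree.getroot().iter("producer"):
        filters = producer.findall("filter")
        if len(filters) == 2:
            for f in filters:            # detach both filters...
                producer.remove(f)
            for f in reversed(filters):  # ...and re-attach in reverse order
                producer.append(f)
    tree.write(path_out)
```

Note that re-appending moves the filters to the end of the producer’s children; in Shotcut project files the filters normally are the last children of each producer, so this should preserve the overall layout, but do check the result in Shotcut before overwriting the original.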

One more thing. What do you mean by parametric EQ? At first I thought you were talking about a filter, but I can’t find it in the filters list (or anything similar)…

Oops, sorry. Parametric EQ is being added to the next version of Shotcut, which is still in beta. It should be released in final form soon.

Also, the Bass & Treble filter is being replaced with new code in the next version of Shotcut. Maybe it will work better than the current one, if it is the source of the buzzing noise. I think the name changed to “Equalizer: 3-Band” instead of Bass & Treble, too.


Ok, now that’s clear. However, I have the impression (at least to my ears) that after changing the order of the filters, Bass & Treble (when it comes after Normalize) works much more weakly… Its effect is hardly noticeable, apart from the volume increase…

I did some experiments and I have the impression that this buzzing is not related to the codecs (perhaps it’s slightly stronger at a low bitrate) but to the “Normalise: Two Pass” filter. If I replace it with “Gain/Volume” I get much better results. Of course, if I set very high Volume values I also get a bit of this buzzing, but less. So probably the imperfect recording combined with “Normalise: Two Pass” is causing this. Do you have any idea why? How does this filter work? Does it set a single gain level for the whole clip, or does it adjust it locally? It looks like the volume gain is fine for 99% of the clip, but in the remaining 1% the audio peak meter shows “red” values close to 0.

Okay, let’s back up. Your audio shouldn’t be anywhere near 0. The buzzing noise is probably the result of clipping.

In the “Normalize: Two Pass” filter, what value are you using for Target Loudness?

When the “audio peak meter” shows values close to 0, is this the actual Peak Meter scope, or is this 0 LUFS shown by one of the meters in the Audio Loudness scope?


@Austin, thanks a lot for help. :slight_smile:

For “Normalize: Two Pass” I set “Target Loudness” to -15. Based on what I’ve found, it’s the expected value for many streaming platforms (or something very close): Mastering for Soundcloud, Spotify, iTunes and Youtube. – Mastering The Mix

Moreover, I compared my speech audio level (by listening, of course) with other YouTube videos from many well-known channels, and to my ears this -15 gives a very similar result.

It’s the Audio Peak Meter which shows red values close to zero. Of course it’s only momentary, at the moments when I can hear this buzzing…

On another track I have music from the YT library with the Gain/Volume level set to -17 dB to make it background music. But I don’t normalize the music in any way.

And here are the values from one of the moments with buzzing.


That’s a very reasonable value. (EDIT: After seeing your screenshots… maybe not. Skip to the very last paragraph for the short version.)

There are two possibilities. One is that the original voice recording has a very wide dynamic range. This can happen when speaking very close to a sensitive microphone. The analog fix is to get a less sensitive microphone, or speak from a further distance away. The digital fix is to add a Compressor filter before the Normalize filter. The Compressor will reduce the dynamic range so that the loud spiky parts of the waveform will not go above 0 dB when everything is raised by the Normalize filter.

To answer your other question about how the Normalize filter works… yes, it’s very simple. It analyzes the audio, determines the average loudness, and adds Gain to the entire track. As in, if the average loudness is -20 LUFS and your target is -15, then it adds +5 dB of Gain to the entire track. This amount of gain could be pushing spikes through the 0 dB ceiling, causing clipping and the buzzing noise. The Compressor will shrink the spikes before Normalize gain is applied. Adding the Compressor after Normalize is too late… the spikes have already gone over 0 dB and that information is lost.
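In other words, the math behind the second pass is just a single subtraction applied once per clip. A toy illustration (not Shotcut’s actual code):

```python
# Toy illustration of two-pass loudness normalization: pass 1 measures the
# average loudness, pass 2 applies one constant gain to the whole clip.
def normalize_gain_db(measured_lufs, target_lufs):
    """Gain that moves the measured average loudness to the target."""
    return target_lufs - measured_lufs

# A clip measured at -20 LUFS with a -15 LUFS target gets +5 dB everywhere,
# including on the loudest peaks -- which is how clipping can sneak in.
```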

The other possibility is that the sound levels are hitting 0 dB because the sum of your speech track plus music track together results in a spike that’s too loud. Maybe lowering the music volume or putting an EQ notch in it will prevent the sum from being so loud.

The sure way to know is to mute the music track and see if the audio still spikes. If it does, then you know the speech track on its own is too loud and it needs a Compressor. But if speech is fine by itself, then you know the music is too loud. If you’re unable to add music to speech at a decent level without clipping, then the speech needs a Compressor added to free up some headroom for the music to fill.

Just saw your screenshots.

764.0 dBTP?? I’ve never seen it go that high. I think you set a record!

So, this screenshot is telling me some things. First, the “I” meter is the Integrated (average) volume over time, and it’s showing -10 instead of -15. So I assume I’m looking at speech plus music here, and that music has raised the volume level by 5 LUFS. That’s huge, and definitely a problem.

So we first need to figure out why the “I” meter is at -10 instead of -15. Has something changed on the speech tracks and they need the Normalize: Two Pass > Analyze button hit again to update their internal gain setting? Was the Reset button hit in the Loudness Meter section to zero the meters before playing back a section, so we’re seeing a true measurement instead of a hodge-podge average of random playback points? The LRA (loudness range) has me concerned too… it’s pretty small, similar to commercial music tracks, which has me wondering if the background music is simply too loud.

Regardless, here’s our basic problem… despite Normalize: Two Pass being set to a reasonable value of -15 LUFS, the final volume level is actually -10 LUFS, which will have a very high probability of spiking to 0 dBFS. We need to find some combination of filter gain values or clip re-analysis that will get the “I” meter back down to -15 LUFS.

It is common for background music to add 2-3 dB of volume to a speech track. This means that if you want your final volume to be -15 LUFS, then set the Normalize: Two Pass filter on the speech track to be -18 LUFS instead of -15 LUFS. Speech at -18 LUFS plus music that raises the volume by 3 LUFS will put the final volume at -15 LUFS, which should be enough headroom to avoid clipping.


@Austin Thanks a lot for your help. You put a lot of energy into this. Big thank you. :slight_smile:

Unfortunately I’m not an expert here, so it’s a bit challenging to answer some of the questions. If I run a fresh copy of Shotcut and reset “Audio Loudness”, I get much lower dBTP values. It may be that it was related to jumping between different places on the track.

And true, I have quite a good microphone for an amateur, a Fifine K678, and I like to record from a distance of about 10-15 cm, because if it’s farther away I have the impression that the voice sounds very weak. I also use a pop filter.

Sometimes I was even wondering whether to skip normalization, as then these peaks are not so visible. The music is quite quiet - it’s “Itty Bitty 8 Bit - Kevin MacLeod”, but with “Gain/Volume” set to -17. So it’s not so strong.

I tried adding this Compressor filter with default values but… the voice then sounds so poor. It lacks some richness… Not sure how to explain it properly.

I also noticed that setting “Normalise: Two Pass” to -20 (without the compressor) gives much better results, in the sense that the buzzing is much weaker (but of course at the cost of a much quieter voice). I was wondering whether to use this -20 value for YouTube and then set Gain/Volume to -22 dB for the music (without using any Compressor). What do you think of this idea? I’m just not sure what YT will do when it detects a much lower sound level - will it apply its own normalization after upload?

If you still think I should use the Compressor, what values should I use? The defaults seem to give unsatisfactory results…

I was also wondering about using “Normalize: Two Pass” on the music track as well. What do you think?

And one more thing - the speech audio is not a single clip but many short sub-clips (the main clip is divided into smaller pieces). Usually the buzzing happens somewhere near the beginning of each sub-clip. Might that have any influence? Is there any delay before the normalization filter kicks in at the beginning of a sub-clip?

If the audio track is louder than -14 LUFS, then YouTube will turn it down to -14 LUFS to match all its other videos. If the audio track is quieter than -14 LUFS, then YouTube assumes you did that for artistic reasons and will not turn it up to -14 LUFS.
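That one-way behaviour can be written down as a tiny rule. This is just a sketch of the policy as described above, not YouTube’s actual code, and the function name is mine:

```python
def youtube_playback_gain_db(track_lufs, reference_lufs=-14.0):
    """Playback gain under a 'turn down, never up' loudness policy:
    tracks louder than the reference are reduced to it, quieter
    tracks are left untouched."""
    return min(0.0, reference_lufs - track_lufs)

# A -10 LUFS upload is turned down by 4 dB; a -20 LUFS upload is left alone.
```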

Compressors are entirely dependent on the levels of the audio coming in. What I can say is that if the compressed audio sounds weak, it means the Makeup Gain can be cranked higher. Compression, by nature of squashing the peaks, will reduce the volume of the sound. The Makeup Gain is there to restore the volume loss, which can now be done without the risk of clipping so soon now that the peaks are squashed.
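To see why makeup gain restores the volume, here is a toy static compression curve in the dB domain (hard knee, no attack/release smoothing; the threshold, ratio, and makeup values are made-up examples, not Shotcut’s defaults):

```python
def compress_db(level_db, threshold_db=-20.0, ratio=4.0, makeup_db=6.0):
    """Static hard-knee compressor curve: levels above the threshold are
    squashed by the ratio, then makeup gain lifts everything back up."""
    if level_db > threshold_db:
        level_db = threshold_db + (level_db - threshold_db) / ratio
    return level_db + makeup_db

# A -10 dB peak becomes -20 + 10/4 + 6 = -11.5 dB (squashed),
# while a quiet -30 dB passage becomes -24 dB (lifted).
```

The net effect is exactly the dynamic-range reduction described above: peaks come down, quiet parts come up, and the makeup gain decides where the whole result sits.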

In sound design, there is a concept called the “anchor”. In your case, the anchor is the speech track. The anchor is the most important element that needs to be clearly heard amongst all the other noise happening on the other tracks.

When it comes to mixdown, “Normalize: Two Pass” is useful to set the anchor (speech in this case). By normalizing speech to -18ish LUFS, you establish it as the dominant track. From this point onward, all other tracks are mixed by ear in relation to the speech track.

This means “Normalize: Two Pass” on the music track is not efficient, because we have no intuitive way to know what Loudness Target to give it that will mix properly with the speech track. If we give it a target that’s too high, the music will bury the speech. If we give it a target that’s too low, the music will sound like a cheap afterthought.

Instead, once the speech track is anchored, simply use a Gain filter on the music track and adjust it until it combines nicely with the speech… enough to be present, but not enough to disrupt it. There is no math formula to tell you what that Gain number will be. Your ears have to figure this out.

Once you have a good-sounding combination, reset the Audio Loudness panel and let the preview play for a bit. If the “I” meter hovers around -15 LUFS (+/-1 is fine), then everything worked and you’re done. If the “I” meter is too high or low, then change the volume of speech and music together to maintain their volume relationship. The easy way to do this is to put a Gain filter on the Output track which raises or lowers everything by that “just a little bit” you need to nail the -15 LUFS target. Then you don’t have to adjust every individual speech and music filter by hand.

Actually, it’s also a good idea to put a Limiter filter on the Output track as well. It should be the last audio filter in the chain. This helps prevent clipping for the few spikes that are still getting through. A value of -1.0 to -1.5 dB is reasonable. I should have mentioned this earlier.
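Conceptually, the limiter is just a hard ceiling. Again a toy dB-domain sketch; real limiters smooth the gain reduction with attack/release so they don’t distort:

```python
def limit_db(level_db, ceiling_db=-1.0):
    """Brick-wall limiter: nothing is allowed above the ceiling."""
    return min(level_db, ceiling_db)

# A stray +0.5 dB spike is clamped to -1.0 dB;
# anything already below the ceiling passes through unchanged.
```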

If it happened precisely at the beginning of each clip, I would be suspicious. If it’s random offsets of time from the beginning of each clip, then I might suspect it has more to do with the patterns of speech or the recording itself rather than a Shotcut technical issue.

That makes sense. That particular metric was more of a novelty in this case. The main thing now is to get that “I” meter back down to -15 LUFS.

You’re welcome!


@Austin Thank you a lot. :slight_smile:

I have to make some experiments before I return. :stuck_out_tongue:

In most tutorials I’ve seen, people suggest using either the Compressor or the Limiter filter, not both. But if I understand correctly, you suggest using both.

Btw, until now I was setting the filters on the whole speech clip and then splitting it into smaller parts. As a result I have e.g. 30 small clips with independent filters. That was probably not the best idea; it would be better to set those filters on the track, not on single clips. Sure, it’s easy to set them, but do you know any way to clear all audio filters for e.g. 30 small clips with one click? After a quick look at Shotcut, I can’t see such functionality… There isn’t even a “clear all filters” button. :frowning:


I did some experiments. For the speech track I have 4 filters: 1) Bass & Treble, with both bass and treble set to 5.9 dB, 2) Compressor - parameters as in the screenshot, 3) Normalize at -15 LUFS, 4) Limiter - parameters as in the screenshot.


What do you think about these parameters? I set them partly based on experiments and partly on some tutorials from the net.

The only thing I wasn’t able to find a good explanation of is what RMS means in the case of the Compressor. What is its role, and would you recommend modifying it? Also, is it safe to set makeup gain to 8 dB?

For the Limiter I’m wondering about the Release parameter. I kept the default value, but I’m wondering if e.g. 0.1 sec would be better?

As a result I get the audio loudness parameters shown in the screenshot. In particular, I is about -15 LUFS, which is OK AFAIU.

I’m only wondering why TP is so high (despite resetting…)?

AFAIK, there is only the hard way to do it: you have to remove them on each clip.
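That said, if clicking through 30 clips is too painful, the project XML can be scripted. A rough sketch, assuming the clip filters are `<filter>` children of each `<producer>` (this is a hypothetical helper, not a Shotcut feature — back up the .mlt file first):

```python
# Rough sketch: delete every <filter> block from every <producer> in a
# Shotcut/MLT project file. This removes ALL clip filters, not only the
# audio ones -- back up the project before trying it.
import xml.etree.ElementTree as ET

def strip_clip_filters(path_in, path_out):
    tree = ET.parse(path_in)
    for producer in tree.getroot().iter("producer"):
        for f in producer.findall("filter"):
            producer.remove(f)
    tree.write(path_out)
```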

Here is an example of a good workflow that will save you a lot of work:

Always do all the cutting first, then apply common filters to the track (gain, compressor, etc.).
If you need different filters on different clips, add more tracks and put the clips on multiple tracks.
Add overlays like text on separate tracks.

In audio production, you normally use a limiter on the master bus (the Output track in Shotcut) to control the loudness and prevent clipping; you also use compressors and EQ on each audio track.

One of the use cases of a compressor is to suppress the loud parts of the audio and raise the quiet parts: you reduce the dynamic range of the audio, which is fine for speech, where you want a constant loudness.

There is some detailed info about it here, if you want to understand what they do and how to use them:

@TimLau Thanks a lot. That’s probably a good article for someone who wants to become an audio expert and spend many hours on improving audio. However, for a guy who just wants a nice speech track, it’s a bit too complicated… I read it to better understand the meaning of the parameters, but what I miss most are examples… Sure, I know it partly depends on the recording, but at least it would be good to have something like: “For my speech audio track I use the following filters with the following parameters. If you think that your audio is…, then consider changing this or that parameter.” Just to have a good starting point…

For the Compressor filter, this video is probably a much better explanation (even though it doesn’t cover Shotcut, you can easily map most of the things over, unfortunately not all): BEST COMPRESSOR SETTINGS FOR VOICE | A Full FCPX Audio Compressor Tutorial - YouTube

You need some basic understanding of what a compressor does to the sound if you want to use one and get good results; there is no easy way to just get good sound, it is hard work.
The easiest way to reduce the work is to make good recordings: the better the recording, the less work needs to be spent in post-production.

For speech and voiceovers, check this guy’s channel; there is a lot of good info there.
He uses Audacity, which is a good choice and makes it easier to see what is going on with the audio.
So exporting the audio as .wav, cleaning it up in Audacity, and importing it into a new track in Shotcut is a good idea if the audio needs more than simple tweaks.