Reduce Noise - replace smartblur with hqdn3d or nlmeans

Any chance of the developers adding hqdn3d or nlmeans video denoise filters? I’m attempting a video restoration and am underwhelmed with the Reduce Noise filter. The output looks like Vaseline-smeared lens softness or ‘70s porn. Smartblur removes too much detail, and if I back it off enough to keep the detail, I don’t get enough noise reduction.

Shotcut Reduce Noise Filter

I’ve had good luck removing noise with Handbrake (nlmeans, but hqdn3d also available) and Avidemux (hqdn3d only – doesn’t keep as much detail as nlmeans). I moved my current video project over to Shotcut because I can’t do a decent white balance or color grading in Avidemux.

The Sharpen filter has kind of the opposite issue – it amplifies both detail and noise. In my experience, Msharpen in Avidemux, and Unsharp and Lapsharp in Handbrake, recover better detail without amplifying the noise.

The OP of that thread implies the Reduce Noise filter is using smartblur from ffmpeg. Further down the thread, another poster gives an ffmpeg CLI example using both hqdn3d and nlmeans. If all three filters are available in ffmpeg, would that make implementing nlmeans (or hqdn3d) easier?
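For reference, the commands in that thread were along these lines (my paraphrase rather than the exact example, and the strength values are only illustrative):

```
# hqdn3d: fast spatial/temporal denoise (defaults written out explicitly)
ffmpeg -i input.mpg -vf "hqdn3d=4:3:6:4.5" -c:v libx264 -crf 18 -c:a copy out_hqdn3d.mkv

# nlmeans: much slower, but tends to keep more detail
ffmpeg -i input.mpg -vf "nlmeans=s=3:p=7:r=15" -c:v libx264 -crf 18 -c:a copy out_nlmeans.mkv
```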

Have you seen this tutorial?

Here are a few other discussions on Video Restoration that you might have interest in:

Technical Discussion: Interlacing Revisited
More Film Restoration Tools
Technical Discussion: Frame Rates

Would you have any interest in using ffmpeg to first create lossless intermediate files of your old videos that have nlmeans baked into them? From what I see, nlmeans can be very computationally heavy (read: slow). If you’re like me and make several small test exports to make sure the project looks good before rendering the whole thing, then each test will have to re-compute nlmeans every time. That’s a slow preview cycle. If nlmeans was already done on your sources (intermediates), your previews and final export could be faster. I don’t know if you would be a fan of the extra work, but it could get you a fast nlmeans today rather than wait and hope for an update.
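If you wanted to try it, the intermediate step would be something like this (a hedged sketch; the nlmeans strength and the file names are placeholders to tune to taste):

```
# Denoise once and store lossless, so later previews and exports don't pay the nlmeans cost again
ffmpeg -i old_tape_capture.mpg -vf "nlmeans=s=3" -c:v ffv1 -level 3 -c:a flac intermediate.mkv
```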

Speaking of which, have you happened to use nlmeans on GoPro footage? Did you find any robust general settings? I haven’t tested this yet, but it sure sounds useful.

Yes I’ve seen the Film Restoration tutorial. Some neat tricks, but not helpful for reducing noise and preserving detail.

No, I did not know about Video Cleaner or Film9. Both look interesting. Thanks.

Yes and no. I don’t like the idea of encoding twice. This is what happens: Multiplicity

Besides the problem of making a copy of a copy, I’d be doing white balance and color grading AFTER denoising, which would amplify noise. I’d prefer to denoise last.

GoPro? Ha? I haven’t worked with HD.

This source was probably a Sony Mavica compact camera from the early 2000s. It’s an .mpg file (MPEG-1, not MPEG-2 or MPEG-4). Handbrake will encode it to x265 with the NLMeans Medium preset and the UnSharp Light preset at >100 fps on a 5-year-old 4-core computer. Definitely not state of the art.

I’ve run some DVD sources (720x480) through Handbrake with the same codec and filters and get >20 fps. It can queue jobs like Shotcut, so I let it run overnight.

I’ve looked at the ffmpeg CLI documentation and see a lot of options for nlmeans. Handbrake just gives presets and tunes, and I haven’t played with any of the tunes to see what they do. The Light or Ultralight preset might be OK for a “good” DVD source, and the Medium preset for something at VGA resolution taken with a low-quality sensor. Once or twice I’ve used the Strong preset and hit it with the Medium UnSharp preset.
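For comparison, these are the main nlmeans knobs on the ffmpeg side. The values below are illustrative guesses, not a translation of any HandBrake preset:

```
# s = strength, p/pc = luma/chroma patch size (odd), r/rc = luma/chroma research window (odd, bigger = slower)
ffmpeg -i dvd_title.vob -vf "nlmeans=s=4:p=7:pc=5:r=15:rc=11" -c:v libx265 -crf 20 -c:a copy out.mkv
```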

Yes, when Handbrake added nlmeans it was slower than hqdn3d, but in a couple of test runs I did to compare, nlmeans gave a smaller file size AND looked sharper than hqdn3d. I’m willing to take a speed hit to keep detail.

It would not be hard to add those. I evaluated those and many others before I chose the smartblur filter. The main problem with the other filters is high resource utilization. The ffmpeg filters don’t support multi-threading. And many systems would not be able to achieve near-real-time processing with those. I think people have an expectation that if they apply a filter they will be able to see a live preview in Shotcut before exporting. In my opinion, smartblur gives the best balance of results and live preview.

Have you tried applying the smartblur filter multiple times to the same clip? I had some clips for which it helped to apply the filter twice.

Yeah, since you are grading and doing other corrections, running the denoise last definitely makes sense. Well, if you need something immediately, there’s still the option of exporting your Shotcut project to a lossless file and using ffmpeg or Handbrake to denoise and create your final H.265 from the lossless file. FFV1 might give you a smaller file faster than H.265 Lossless can at that resolution.
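If you go that route, the second pass would look roughly like this (just a sketch; the denoise strength, CRF, and file names are placeholders):

```
# Shotcut exports the lossless master; ffmpeg then denoises it and makes the delivery file
ffmpeg -i shotcut_lossless_export.mkv -vf "nlmeans=s=3" -c:v libx265 -crf 20 -c:a aac final.mp4
```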

I’m glad you’re getting the frame rates you are with your footage! I will simply say that enthusiasm diminishes when running the same filter chain over 4K GoPro sources. :wink:

GoPros and cell phones get noisy in low light. That’s where my interest in nlmeans comes from, as an improvement over ffdshow. I didn’t know nlmeans existed until you mentioned it, so I have you to thank for the next rabbit hole I go down haha.

It sounds like a programmer has nlmeans running across multiple threads. There’s a pretty good technical discussion of what he did, along with test metrics.

Option to choose number of threads used by NLMeans filter #835

Handbrake does not use the FFmpeg filter. It has its own implementation.

Tried this twice and was happy with the final Handbrake output.

Slow (2 encodes), and the FFV1/FLAC files are uuuuge, but they’re temporary, and I concede that having a lossless intermediate file is a good trade-off.

Glad to hear you found something that works for you!

Since you got me down the rabbit hole of denoisers, I figured I would share my research results with anyone who’s interested.

ffmpeg has six built-in denoisers that I was able to find, which I’ve listed below along with their transcoding speeds on a 1080p source video using a four-core laptop. I wrote scripts that ran a variety of settings through each denoiser to make sure I was seeing the best each one had to offer (a sketch of the kind of test loop I used appears after the list).

atadenoise (20 fps) - averages pixels across frames to reduce the contrast of noisy areas and make them less obvious, rather than using a specialized algorithm to smooth the noise away; this reduces overall image contrast, and the filter also darkens the overall output

dctdnoiz (1.6 fps) - creates beautiful detail on a still image, but randomizes the noise across frames so much that it actually makes the noise look worse during playback, plus it darkens the output

nlmeans (0.6 fps) - darkens the output, but sometimes has redeeming qualities (more on this later)

hqdn3d (21 fps) - color neutral, which is good, but the output looks smeary to me; it loses a lot of fine detail in hair strands and wood grain

owdenoise (0.3 fps) - color neutral wavelet denoiser with stunningly good results on high-res sources

vaguedenoiser (7.6 fps) - another color-neutral wavelet denoiser whose output looks identical to owdenoise but runs about 25x faster; I tried every combination of threshold and nsteps, and found the default settings of 2/6 to consistently produce the closest-to-real-life results
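For anyone who wants to repeat the experiment, my scripts boiled down to a loop like this (a simplified sketch; the filter settings and clip name are placeholders, not the exact values I tested):

```
#!/bin/bash
# Decode, filter, and discard the output; ffmpeg's progress/benchmark lines report the speed for each filter.
SRC=clip_1080p.mp4
for VF in "atadenoise" "dctdnoiz=4.5" "nlmeans=s=3" "hqdn3d" "owdenoise" "vaguedenoiser=threshold=2:nsteps=6"; do
  echo "== $VF =="
  ffmpeg -hide_banner -benchmark -i "$SRC" -vf "$VF" -f null - 2>&1 | tail -n 2
done
```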

I tested the denoisers on videos I took with my own mirrorless camera, meaning I remember what the scene looked like in real life. In one video, there happened to be a guy in a black business dress shirt made of silk or satin or something with a sheen to it, but the sheen wasn’t coming through due to the noise of the original footage. The wavelet-based denoisers were the only ones to remove and smooth the noise such that the fabric regained the smooth sheen you would expect from silk. To my eye, it bumped up the realism of the video an entire notch to see fabric actually look like fabric. The rest of the frame also dropped to zero dancing noise. It turned the video into a still photograph when nothing was moving. I didn’t realize until this experiment that even a tiny amount of dancing noise can seriously detract from the realism of a video, and that a sense of immersion can be restored by getting rid of it. Obviously, vaguedenoiser is my new weapon of choice.

So, about nlmeans… I found a radical difference between the ffmpeg version and the HandBrake version. I think HandBrake wins on every metric. nlmeans in ffmpeg actually makes video look worse (blockier) if the resolution is 1080p or above, or if the video comes from an excellent camera that has little noise to begin with. nlmeans in ffmpeg also can’t be used as a finishing step because it darkens the output, which destroys any color grading that happened before it. But I found two places where nlmeans in ffmpeg outshined the other ffmpeg denoisers: low-resolution video, and very-high-noise video. nlmeans does great at restoring a VHS capture, which I sense from the author’s web site was one of the original design goals. Secondly, in my tests, nlmeans did better than the other ffmpeg denoisers on high-resolution high-noise videos, which in my case meant a smartphone video in low light using digital zoom. Given these two specialized cases where nlmeans performed well, I could see a workflow where I used nlmeans to create denoised intermediates, then color graded the intermediates to fix the darkened output. Running nlmeans on a noisy source then adding it to the timeline and running vaguedenoiser on the total project did not cause any harm in my tests. But for best results, I think HandBrake is still the way to go where nlmeans is involved.

For my purposes, I think I will stick to vaguedenoiser because it’s beautiful on 1080p and 4K, and it is easily added to my existing ffmpeg filter chain when I do my finishing steps. I don’t have to create an intermediate to pass off to HandBrake this way. However, if I came across a particularly noisy source video, I would probably run it through HandBrake before adding it to my Shotcut project to get the same benefits Andrew noticed.
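In case it helps anyone, my finishing chain ends up looking roughly like this (a sketch; the unsharp values and CRF are placeholders for whatever finishing settings you already use):

```
# vaguedenoiser at its 2/6 defaults, followed by a light sharpen, then the delivery encode
ffmpeg -i graded_master.mkv -vf "vaguedenoiser=threshold=2:nsteps=6,unsharp=5:5:0.5" \
  -c:v libx265 -crf 18 -c:a copy final.mkv
```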

Good luck to everyone, whatever you use.


I added Reduce Noise: HQDN3D for the next version, 19.06. I kept Reduce Noise but renamed it to Reduce Noise: Smart Blur. The HQDN3D I added is from the frei0r plugins instead of FFmpeg libavfilter, because frei0r supports keyframes whereas avfilter does not. I may add more from avfilter over time even if they are not keyframable, but I am not yet sure. (Many users want everything to be keyframable.) This one was contributed and supports keyframes, so I added it.

Please do, as having a filter, even without keyframes, is better than not having it at all.
Perhaps add, say, an “*” next to the filter names that support keyframes?
That would let the user know the capability of each specific filter.

Can you show proof for your claims about the atadenoise filter? I cannot reproduce any of your “findings”.
Also, you forgot that there are the bm3d and fftdnoiz filters too.

Greetings, Paul! To provide context to everyone else here, Paul B. Mahol is the developer who added atadenoise to FFmpeg back in 2015. Let’s make him feel very welcome so he will stick around. :smile:

Well, the bad news is that I wrote up those findings almost a year ago and I no longer have the footage they were based on. The sources were tossed after the final edit was done.

However, because I had a great experience with James Almer (another FFmpeg developer) when I needed a Matroska bug fixed last month, I will go the extra mile for you in return.

The original footage was a piano concert in a dark room with red carpet, red wallpaper, red seats… shades of red everywhere. This is a key point.

I tried testing atadenoise on random footage a moment ago to recreate some proof of my claims, and I started to get worried when the output looked just fine. So I tried to recreate all the reds of that piano venue using some red hard drive cases on my desk. Aha! Success.

It appears that my claim of "atadenoise darkens the output" only applies to the red channel during motion or during very high noise, which happened to be my entire original footage.

For my recreated footage, I put my 4K camera at ISO 6400 to generate some noise and made three clips… one where the camera didn’t move, and two where the camera did a simple pan from left to right with the intention of stressing the averaging algorithm.

Here is a OneDrive link with PNG frame grabs of the results:
https://1drv.ms/u/s!Akbhn-hg6nZPkzdJqoxCCzFpWjWz?e=wnicKA

I’m curious if your observations are similar to mine:

  • When the camera is not moving (static), the original and the atadenoise versions have equal brightness. No problems here.

  • When the camera is moving, it looks like red is reduced across the entire frame, not just the obvious darkening of the red hard drive covers. This is evident in a video waveform scope. In the Moving1-* images, the Original image looks yellowish in the upper-right corner, but that corner turns greenish (cooler) in the atadenoise version where some red is subtracted. I found it easiest to see this while leaning back in my chair, not squinting my eyes up close, while doing an A/B comparison. There is an overall level change. It’s kinda subtle, but it would be enough to disturb a professional colorist if denoising was a finishing step after color grading. What I find odd is that all the other colors did not shift. The blue light on the USB hub looks identical in both versions. It’s just the reds that got darker.

  • VagueDenoiser did not exhibit any color shift in any of the examples. And wow, to me, that filter is pure magic and these examples show why. (Thanks for adding that filter too, by the way.) In my original post, I talked about the realistic sheen of a black silk shirt being restored by vaguedenoiser. We can somewhat see that same effect happening here in the black shiny plastic of the USB hub compared to the original image.

  • When I did my original testing, I was still using FFmpeg 4.0 because I don’t upgrade in the middle of a project. FFmpeg 4.1 was released on November 6 (around 30 days prior to my tests), and that was the first version to feature bm3d and fftdnoiz. That is why I did not include them at the time… I wasn’t on 4.1 yet. Out of curiosity, I tried them just now using default settings, and ouch, there is a glaringly obvious drop in brightness. Comparing them to the original on a video waveform scope, the entire graph dropped for both bm3d and fftdnoiz, showing a clear darkening of the output. On this 4K test video, the default settings had virtually no denoising effect either. I didn’t bother trying additional settings because I wouldn’t expect any of them to restore brightness levels, and that’s a deal-breaker for me. (The commands I used for this quick check are sketched after this list.)

  • As for atadenoise reducing contrast, I found this to be a function of how many standard deviations from average the noise was. The more “contrasty” the noise was due to having a wider swing from “center”, the flatter the video looked when all those deviations got averaged together. I didn’t take time to recreate a test video for this claim yet, but I can try if you want me to. I figured the mathematical principle would be enough to clarify it for now.
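Here is roughly how I generated and inspected those comparisons, in case anyone wants to reproduce them (a sketch; the file names and seek point are placeholders):

```
# Grab the same frame with and without each filter at default settings
ffmpeg -ss 2 -i test4k.mp4 -frames:v 1 original.png
ffmpeg -ss 2 -i test4k.mp4 -vf "bm3d" -frames:v 1 bm3d_default.png
ffmpeg -ss 2 -i test4k.mp4 -vf "fftdnoiz" -frames:v 1 fftdnoiz_default.png

# Overlay a waveform scope on a grab to check the levels
ffplay -vf "waveform=display=overlay" bm3d_default.png
```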

Is there anything else I can provide for you? Thank you for all you’ve done to make FFmpeg what it is today. I and many others here use it on a daily basis.

I think the subtle average-brightness darkening in atadenoise has been fixed in master. Feel free to test my change. I also wrote x86-64 SIMD for it so it can be more than 2x faster; this will be applied to master soon.


Awesome as always. I haven’t compiled from source to test changes so far because I’ve always figured the Zeranoe nightly builds would be more representative of the final release than my own efforts. Does this mean I can test your brightness change after Zeranoe’s next nightly build goes up?

EDIT: Never mind, I just saw the commit to master in the git shortlog, so that answers my question. Thanks for your willingness to look into this. I wish I had the original footage of all the varieties of red shades in that room, because the darkening effect was much more noticeable there than the simple and flat red colors of my hard drive cases. I just wanted you to know the effort would be worth it on scenes that emphasize a variety of reds, much more so than what we saw in this hacked-up example.

bm3d and fftdnoiz are also fixed.