Which settings should be "best" considering my Machine/Input Video

Your wife is very good on camera.

It would be nice if you could shake that white brick background.

I haven’t tried HEVC in a couple of years, so I did some fresh testing to see if my previous reasons were still valid.

I found H.264 CRF 17-18 Medium to correspond to HEVC CRF 20 Slow (Shotcut quality 60%). But to my eye, these CRFs are “good enough for delivery”, not “visually lossless for archive”. Below are the general-purpose settings I consider to be visually lossless enough for archival purposes and also able to survive a generation of transcoding (note this is totally subjective to my own video material):

  • H.264 CRF 16 (Shotcut quality 68%), preset Medium
  • H.265 CRF 18 (Shotcut quality 64%), preset Slow
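For anyone who prefers to set these explicitly rather than through the quality slider, the matching lines in Export > Advanced > Other would be roughly as follows. This is a sketch: I’m assuming Shotcut forwards these key=value pairs to the encoder unchanged, so treat the exact keys as unverified and confirm with a short test export. For H.264:

```
crf=16
preset=medium
```

And for H.265 (with the codec already selected on the Codec tab):

```
crf=18
preset=slow
```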

Yup, the Slow preset was required for H.265. I tried Medium first and the fine details were always smeared. Medium never looked quite right even at very low (very high-quality) CRF values. The Slow preset is a substantial jump in quality.

For extra credit, I would consider using QP instead of CRF for any intermediate files to avoid the bitrate shortcuts that CRF uses during fast motion sequences. QP would eliminate practically all chances of macroblocking.
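As a sketch of what QP mode could look like in the Other tab (the qp key name is my assumption of what gets passed through to the encoder, and the value 16 is just an example matching the CRF above):

```
qp=16
preset=medium
```

CRF and QP values share the same numeric scale but don’t produce identical results, so a quick test export is worth doing before committing a whole project to it.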

Now for the trade-off. Using the settings above, H.264 makes a file that is 2x the size of the HEVC file. However, HEVC takes 6x as long as H.264 to encode.

So the question is processing time versus disk space. Since we have slow hardware, processing time is our biggest concern. If we have an export that takes two hours with H.264, it would take 12 hours with HEVC. That is enough time to either delay a YouTube video release by a day, or to cut a day out of the post-production schedule in order to leave time for the export (and no time for a do-over if there was a mistake). That’s brutal on the production timeline. Plus, we can’t edit the next video during the additional ten hours that HEVC requires, so that’s a double penalty.

If we had faster hardware or a GPU that could achieve similar quality in similar time, then maybe we could justify HEVC. But for now, time is more important than disk space when the space difference is only 2x versus the time penalty.

After my tests, I found a well-researched article that reached the same conclusions I did in terms of quality settings:

So I guess these settings should be pretty reliable for anyone else that wants to use them.


Nice find!

This made her day. Thanks!

I’m not sure what you’re hoping to see because my dialect is probably different than yours… “Shake” as in shake shingles on roofs, or “shake” as in random jittery movement, or “shake” as in replace it with something else?

So in the Other tab in Export for HEVC I change this:

[screenshot: Other tab, original settings]

to this:

[screenshot: Other tab, modified settings]

And how much slower is the export than medium?

Is there any noticeable jump in quality when changing the H.264 preset to Slow also?

What’s “QP”? Is that Constant Bitrate? What Bitrate and Buffer size should I aim for with Constant Bitrate?
But this suggestion for “QP” is for intermediate files and not necessarily for archival purposes, right?

Right. I have a good GPU but Shotcut doesn’t recognize it for export for whatever reason. I know others have had this problem and some have solved it but I haven’t figured it out.

If a 1060 6GB can comfortably play 8K60 YouTube videos, surely an RTX 2070 can.

I don’t think the GPU would be a problem. Also, consider that YouTube will always compress videos once they are uploaded, so a super high bitrate isn’t necessary.

From Merriam Webster:

b : to get away from : get rid of

can you shake your friend? I want to talk to you alone— Elmer Davis


HEVC Slow is 2.4x the time of HEVC Medium.

HEVC Medium is 2.5x the time of H.264. But the quality is noticeably worse than H.264.
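Multiplying the two measured ratios together shows where the earlier 6x figure (HEVC versus H.264 encode time) comes from:

```python
h264_time = 1.0                  # baseline: H.264 Medium export time
hevc_medium = 2.5 * h264_time    # measured: HEVC Medium vs H.264
hevc_slow = 2.4 * hevc_medium    # measured: HEVC Slow vs HEVC Medium
print(hevc_slow)                 # → 6.0x the H.264 export time
```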

I chose that H.264 setting to be visually lossless. Going to Slow could theoretically retain more information at a mathematically measurable level, but the difference is not visually perceptible to me. With H.264, going from Medium to Slow is a minor improvement compared to the radical improvement it brings in HEVC. There are several SSIM/PSNR charts around the Internet that demonstrate the quality-vs-preset curves, and these are well-known characteristics of these codecs.

This is a great article that answers everything:


Correct. The issue here is that CRF mode will adaptively change the amount of compression based on the amount of movement in the scene. It will put more compression on areas with lots of movement because it knows the eye can’t track details that move fast. However, that extra compression, while saving bitrate, can create blocking artefacts.

If that blocky video is brought onto a Shotcut timeline and then compressed again as part of the final export, the motion sequences could degrade so much from generational loss that the final video looks bad, especially if a heavy color grade brings out the macroblocking (like raising the shadows and revealing compression artefacts in dark areas). By using QP mode, the encoder does not throw away extra data during fast motion sequences. It keeps the quality high all the time and the files will be larger as a result. But in essence, this prevents a generation of transcoding loss.

I don’t use QP for the final export because the target is the human eye. If it looks good enough to the eye, then I’m done at this point. I would prefer the smaller file now because this is the video that will be distributed. I do not plan to use my final videos as source material for future videos. I would go back to the original sources if I wanted to do that.

For archival purposes, I doubt it will be good enough though. I am not aware of any consumer GPU that can encode at the quality levels we’re targeting here. Archival-grade masters need the Medium or Slow presets to do their work, and even the RTX 20xx series struggles to match the Fast preset. GPUs can create files that look “decent enough”, but nowhere near “visually lossless”. It all depends on the level of quality you are targeting.

Ha ha, I feel your pain. Color grading that white background was impossible before the waveform scope was added. :slight_smile: The white brick is actually a foam sheet less than an inch thick that overlays the wall. The wall underneath is the same green that you see below the white wood chair rail. Although the white brick is intensely bright, the green wall underneath looks even worse. However, it could maybe open up some green screen options in post-production… :smile:

How about an old-fashioned can of paint?

True. But the whole house is that green color. We aren’t ready to have one wall look different from the rest, and definitely not ready to paint the entire house to match. So we are stuck scalding the eyes of our viewers with Amazon wall paneling haha. We’re about due for a set decoration refresh, so maybe we’ll find something a little darker next time.

Latest video is up (https://youtu.be/zn9rkt-glPk), but I do find myself thinking that some of the shots are still looking a little soft, and wondering if going at 80 rather than 68 would help that, or am I barking up the wrong tree?

In terms of the set decoration discussion, I found (once upon a time when I did stuff where other people showed up) that having a nice mountain print attached to a rolling panel that could be moved behind them (and moved away after) worked pretty well.

Cool video! You’re making 360 grow on me.

How does the local file exported from Shotcut look when you play it back? If that file is nice and sharp, then the softness is probably due to YouTube’s compression methods and there may not be a lot you can do about it.

There are a few things left to try…

  • Render that same video at 80% quality like you suggested, upload it as private, and see if it looks any better. A 30-second sample segment is good enough to test.
  • Add this line to Export > Advanced > Other: “pix_fmt=yuv422p”. That line will create an output file with twice as much color information as the one you already made. Since 360 video undergoes major stretching, it could be of benefit to have twice as much color detail as usual. Create 68% and 80% quality versions and see if they look better.
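The “twice as much color information” claim can be sanity-checked with a little arithmetic (a sketch using a 1920x1080 frame as the example):

```python
w, h = 1920, 1080
luma = w * h

# yuv420p: chroma subsampled 2x horizontally and 2x vertically
chroma_420 = 2 * (w // 2) * (h // 2)   # Cb + Cr samples

# yuv422p: chroma subsampled 2x horizontally only
chroma_422 = 2 * (w // 2) * h

print(chroma_422 / chroma_420)         # → 2.0
```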

That should get you much closer to a definitive answer. I’m not a 360 guru, but are any 360 videos perfectly sharp? Between the heavy compression of such huge files and all the stretching and warping and distortion that gets put on them, is it even possible to have sharp 360 these days on YouTube? I don’t know, haven’t researched it. Would love to see an example.

I like the idea of a mountain print. If we can find a way to do that without looking like a Sears family portrait studio, we may give it a go!

Local copy of the file is sharper, but again that could be YT, and it could be just that things stream a bit “hazier” - IOW while I have a fast connection I’m sure there are ups and downs and it could be lowering quality to keep up the frame rate. Will try your suggestions and see how that works.

It would be easier to get sharper if I could upgrade from my current camera (Insta360 One X - $400 USD) to their Pro 2 ($4K USD) or their Titan ($15K USD). The One X does 5.7K video, the Pro 2 does 8K video, and the Titan does 8K 3D video. Unfortunately the associated price tags leave them out of reach unless my videos start to go viral :slight_smile:


Reading this thread and taking notes for my try with 4K and Youtube after just building a new computer! Thanks.

That 360 video is really cool :slight_smile: Maybe I should try it on my backpacking trips, haha. Probably not. Day hiking a short trail… possibly. I do take 360 photospheres and have uploaded a bunch of those to Google Maps. I know that Google has done some Street View style photos on some popular trails.

I think we should call them spheres rather than 360. It is spherical video.

Just paint the foam sheet :wink:

I’ll tell you what doesn’t work: a bed sheet. You’ll be chasing ugly wrinkles. Been there, done that. You want something made of muslin.