Which settings should be "best" considering my Machine/Input Video

It stays at 2880p. YouTube calls it 5K. Here’s a video the OP has done with a similar setup where you can check the settings:

Actually, that one was done before I figured out I had to set the resolution in the Aspect Ratio as well. If you check out one of my later ones, you’ll see it goes to the 4320p option.

This is for 360 video, so the stretching would be a bit of an issue. Will try the 100M bitrate and drop the second pass. Trying to figure out which option is “correct” for hardware acceleration - it gives several suggestions. Any idea which one to pick if I care about quality more than speed (though a little more speed would still be nice)?

OK, just pulled in my latest video I’m working on, and made the suggested changes. By doing my advanced settings and THEN clicking hardware acceleration, it seemed to pick the best ones (I hope) for me. So it’s now running and appears to be quite a bit faster (estimate is 55 minutes compared to 1 hr 30 mins for the first pass and 45 mins for the second pass). Also, I am seeing the workload pretty evenly split between CPU and GPU - before now I would see the CPU at 90+% usage, often getting into the low 90s °C for temps even with liquid cooling. So if nothing else this should help the machine last longer as well.

I post new stuff every Sunday, and will make sure to add a post to this one in the “Made with Shotcut” area so y’all can see for yourselves if technically I’ve done better. This is the first one I’ve used this particular LUT on, and other than intro/outro am using all ambient sound rather than a mix of sound and music.

Thanks again!

Well, that turned out not so well. Using Hardware Acceleration with any settings on the codec is producing sound but no video. Looking at the log, I’m seeing an issue with any resolution option over 4096 - as well as a couple of messages about the arguments being passed in a deprecated way. So it looks like I’ll have to stick to CPU only. Am trying now with a Quality Variable Rate of 99% and will see how that does. FWIW…

Oh, that’s a good point. Your resolution may be higher than hardware acceleration supports.

As for the VBR quality setting of 99%, that will probably be overkill. The YouTube preset uses libx264, which has its own “quality slider” called CRF that ranges from 0 to 51, where lower numbers are higher quality. A value of 18 is considered visually lossless. I use a value of 16 for my “masters”, knowing that YouTube will immediately transcode a new copy off of it.

In your case, using a Shotcut quality of 99% translates to H.264 CRF 1, which is super slow and makes super big files without delivering any extra visually-noticeable quality. Shotcut quality 68% corresponds to H.264 CRF 16 if you would like to try that as a starting point. You could raise or lower it from there according to your taste.
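If it helps, the mapping is close to linear. Here’s a minimal sketch in Python (assuming the usual linear quality-to-CRF conversion, which matches the 99% → CRF 1 and 68% → CRF 16 pairs above; the function name is mine):

```python
def shotcut_quality_to_crf(quality_percent: int) -> int:
    """Map Shotcut's quality slider (0-100%) to an H.264 CRF value (0-51).

    Higher quality percent -> lower CRF -> higher quality, bigger file.
    """
    return round(51 * (100 - quality_percent) / 100)

print(shotcut_quality_to_crf(68))  # 16 (my suggested starting point)
print(shotcut_quality_to_crf(99))  # 1  (overkill: huge files, very slow)
```

You can plug in any percentage from the slider to see roughly which CRF Shotcut will hand to libx264.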

Here is a chart I made that lists the correlations between the Shotcut quality percent and the H.264 CRF value:

Can you really tell the visual quality difference between the very high resolution and a more conventional resolution such as 1080p? Does it justify the added upload time?

I wonder at what point you reach the point of diminishing returns and the placebo effect takes over.

Great question, and it probably depends on everyone’s individual preferences and eyeglass prescription and television size and viewing distance. For me, that point of diminishing returns happens at 4K. Instead of more resolution beyond 4K, I would rather have an HDR workflow that actually worked and a bigger color space.

However, the OP’s situation is a little unique because it is 360 video. That means his final exported video may be 2880p, but that covers the vertical area from his shoes all the way to the sky. If the “playback camera angle” is looking straight ahead, it may be getting only a 1080p slice of the 2,880 vertical pixels. So in a sense, he’s only providing 1080p to the “active viewing angle” at any given moment, and the other pixels are representing stuff that’s happening outside the viewing area.
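To put rough numbers on that, here’s a sketch (assuming an equirectangular frame whose height spans the full 180° from nadir to zenith; the 67.5° viewing window is just a hypothetical playback FOV):

```python
def effective_vertical_pixels(frame_height: int, vertical_fov_deg: float) -> int:
    """Pixels of an equirectangular 360 frame visible in a vertical FOV.

    Assumes the frame's height covers the full 180 degrees of vertical view.
    """
    return round(frame_height * vertical_fov_deg / 180)

# A 2880p 360 frame seen through a ~67.5-degree vertical window
# delivers only about a 1080p slice to the active viewing angle:
print(effective_vertical_pixels(2880, 67.5))  # 1080
```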

Oh, here’s a thought you’ll appreciate on diminishing returns… Most 1080p video is delivered with 4:2:0 subsampling. That means the chroma plane is only 960x540. I think you’d agree that a 540p image on a large TV is going to have visible artefacts like jagged edges and smeary colors that are noticeable even in the chroma plane if you’re close enough. But a 4K 4:2:0 video has a 1920x1080 chroma plane on the same size TV. Now we’ve got at least 1080p for both luminance and chrominance for a true high-def experience. This is why a 4K video can appear to have so much crisper colors than the same video in 1080p even though they’re both in the same BT.709 color space. (To be fair, a 1080p 4:4:4 video could achieve the same thing, but nobody delivers that.) And now, to your point, I don’t see higher resolutions providing any significant returns after the chroma plane hits 1080p. (This assumes we’re not talking about 360 video or IMAX presentations of course.)
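The chroma-plane arithmetic is easy to sketch (the subsampling factors are the standard ones; the helper function itself is just for illustration):

```python
def chroma_plane_size(width: int, height: int, subsampling: str = "4:2:0") -> tuple:
    """Return the chroma plane dimensions for common subsampling schemes."""
    factors = {
        "4:4:4": (1, 1),  # full chroma resolution
        "4:2:2": (2, 1),  # chroma halved horizontally
        "4:2:0": (2, 2),  # chroma halved both horizontally and vertically
    }
    fx, fy = factors[subsampling]
    return (width // fx, height // fy)

print(chroma_plane_size(1920, 1080))  # (960, 540)  -- 1080p 4:2:0
print(chroma_plane_size(3840, 2160))  # (1920, 1080) -- 4K 4:2:0
```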

EDIT: Since you asked about justifying the upload time… for my wife’s cooking videos, it’s totally worth it to upload in 4K rather than 1080p because YouTube will transcode the master with a higher bitrate as a reward for authoring in 4K. The transcode difference between a 1080p master and a 4K master is night and day due to that higher bitrate they give 4K videos.

If the target is YouTube then I think it’s fair to assume a typical desktop setup with a 22" to 24" monitor. YMMV.

I’m having trouble with YouTube as it seems they’ve recently begun screwing with the colors in uploaded videos.

The same videos uploaded to Vimeo don’t have color problems.

Can you post a link to one of your wife’s cooking videos?

In broadcast, we are tightly constrained to either 720p or 1080i.

Each station gets 6 MHz of RF spectrum and no more. In fact, some stations have sold off some of their bandwidth and some have divided their 6 MHz channels into digital subchannels. Very often these subchannels broadcast SD programming such as vintage 4:3 programming and movies.

ATSC 3.0 is ambitious but I have no idea where they’ll find the bandwidth.

Yeah, that probably accounts for the majority of viewers. My wife and I have a small computer in our living room hooked to a 42" television so both of us can enjoy the absurdities of the Internet from the comfort of our couch. And also so she can preview her YouTube videos before making them public.

Regardless of screen size, when it comes to YouTube, the real issue is bitrate more than resolution. A 1080p video on a 24" monitor can still look bad especially during high motion sequences if the bitrate is too low. And YouTube’s default bitrate is kinda low.

You’re already aware that a 4K video uploaded to YouTube gets transcoded to 2160p, 1440p, 1080p, 720p, 480p, 360p, 240p, and 144p. The great thing about uploading in 4K is that all eight transcoded versions get higher bitrates, not just the 4K version. So, to get a better looking 1080p on YouTube, it pretty much has to be uploaded as 4K to trick YouTube into thinking this video is extra special and worth the higher bitrate.

I used to have that problem. After reading a lot of other people’s experiences and experimenting on my own, I concluded that YouTube isn’t so much messing up the colors as it is making horrible guesses about what the colors are when the smallest bit of color metadata is missing from the uploaded file. My problems went away when I fully specified color space and color range in an MPEG-4 container, which is the container most likely to be interpreted correctly due to sheer popularity. I think I recall that you preferred to upload HuffYUV files. That codec would obviously require Matroska in order to carry both color space and color range, given that AVI doesn’t have those flags. If Vimeo works and YouTube doesn’t, it probably comes down to Vimeo making smarter guesses about the missing (or ignored) color metadata.
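For anyone wanting to try the same fix, here’s a sketch of the kind of command I mean, written as a Python list of ffmpeg arguments (the color flags are real ffmpeg options; the filenames and CRF value are placeholders for your own):

```python
# Fully tagging a BT.709, limited-range upload on the way into an MP4.
ffmpeg_args = [
    "ffmpeg", "-i", "master.mkv",
    "-c:v", "libx264", "-crf", "16",
    "-colorspace", "bt709",       # matrix coefficients
    "-color_primaries", "bt709",  # color primaries
    "-color_trc", "bt709",        # transfer characteristics
    "-color_range", "tv",         # limited/MPEG range
    "-c:a", "copy",
    "upload.mp4",                 # MPEG-4 container
]
print(" ".join(ffmpeg_args))
```

With all four flags present, YouTube has nothing left to guess about.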

Sure, here’s a video where my wife made a 3-foot model of Downton Abbey out of gingerbread. The big reveal is at 9:15 if you want to skip to the finished build:

This video was uploaded as a 4K MPEG-4 with H.264 at CRF 16 (Shotcut quality 68%) and was 11.2 GB in size. For kicks, I made a HuffYUV version, and it came out to 185.6 GB with zero discernible difference in video quality after doing A/B tests between the two on a Shotcut timeline. I love the idea of lossless codecs for masters, but my pocketbook cares more about the cost of hard drives for archiving. :slight_smile: H.264 CRF 16 has become my sweet spot between the two concerns.

It looks like I get the best quality in the least time using VBR at 68% as you suggest. That actually results in a slightly larger file (for my 2-minute test file, the VBR is 1,931,790 vs 1,874,018 for the CBR), but the time is significantly shorter (10:40 vs 20:10). The one other thing I’m going to try on my next work is to bring all the files off my RAID-0 USB-C array and onto the local SSD. I’m thinking that part of the time issue might be the USB channel…

Thanks so much for all the help. Makes me feel better about eschewing the other $35/month for the full Adobe CC to get that “other” product :slight_smile:

I’m so glad to hear you found some settings that work for you! I was just about to ask how things were going because we started getting a little off-topic during your absence. :slight_smile:

I’m relieved that the VBR file size isn’t significantly bigger than CBR, because your style of video is like a worst-case-scenario torture test for VBR. You have constant motion from walking around, so the advantages of VBR don’t get to shine as brightly. But at least you don’t have to guess what bitrate will provide you with the highest quality.

Since you have Windows 10, you can start up Resource Monitor to see if your disk drives are a bottleneck. There is a “Disk” tab at the top of the screen, and there is a disk queue length graph on the right-hand edge. If the queue length is constantly above 3-ish during an export, then the hard drives are unable to provide data at the rate the data is being requested. Faster storage would likely be of benefit. On the flip side, if your CPU is maxed at 100%, then it probably won’t export any faster because there’s no processor left to deal with the data even if the hard drives could bring it in faster. Obviously, you’d have to test to know for sure, but these graphs will at least give you a benchmark for comparing your test scenarios.

Why not use HEVC instead of H.264 to save even more space? According to this YouTube page, they accept HEVC.

CRF 17-18 is said to be visually lossless for H.264. What’s it for HEVC? 19-20?

By the way, your wife’s videos look fantastic! :+1:

I chased the color problem for a long time and finally concluded the color errors were creeping in at the browser. Firefox, Chrome and Opera reproduced 601 files just fine, but Edge reproduced 709 perfectly.

Then something changed with Chrome and suddenly it was giving perfect colors as well as Edge.

Your wife is very good on camera.

It would be nice if you could shake that white brick background.

I haven’t tried HEVC in a couple of years, so I did some fresh testing to see if my previous reasons were still valid.

I found H.264 CRF 17-18 Medium to correspond to HEVC CRF 20 Slow (Shotcut quality 60%). But to my eye, these CRFs are “good enough for delivery” and not “visually lossless for archive”. Below are the general purpose settings I consider to be visually lossless enough for archival purposes and also able to survive a generation of transcoding (note this is totally subjective to my own video material):

  • H.264 CRF 16 (Shotcut quality 68%), preset Medium
  • H.265 CRF 18 (Shotcut quality 64%), preset Slow

Yup, the Slow preset was required for H.265. I tried Medium first and the fine details were always smeared. Medium never looked quite right even at very high-quality CRFs. The Slow preset is a substantial jump in quality.

For extra credit, I would consider using QP instead of CRF for any intermediate files to avoid the bitrate shortcuts that CRF uses during fast motion sequences. QP would eliminate practically all chances of macroblocking.

Now for the trade-off. Using the settings above, H.264 makes a file that is 2x the size of HEVC. However, HEVC takes 6x as long as H.264 to encode it.

So the question is processing time versus disk space. Since we have slow hardware, processing time is our biggest concern. If we have an export that takes two hours with H.264, it would take 12 hours with HEVC. That is enough time to either delay a YouTube video release by a day, or to cut a day out of the post-production schedule in order to leave time for the export (and no time for a do-over if there was a mistake). That’s brutal on the production timeline. Plus, we can’t edit the next video during the additional ten hours that HEVC requires, so that’s a double penalty.

If we had faster hardware or a GPU that could achieve similar quality in similar time, then maybe we could justify HEVC. But for now, time is more important than disk space when the space difference is only 2x versus the time penalty.
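To make the trade-off concrete with the ratios from my tests (the function is just illustrative arithmetic; the 2-hour/11.2 GB baseline is from the examples above):

```python
def compare_codecs(h264_hours: float, h264_gb: float,
                   time_ratio: float = 6.0, size_ratio: float = 2.0):
    """Estimate HEVC export time and file size from an H.264 baseline,
    using the ~6x time and ~2x size ratios observed in my tests."""
    hevc_hours = h264_hours * time_ratio
    hevc_gb = h264_gb / size_ratio
    return hevc_hours, hevc_gb

hours, gb = compare_codecs(2.0, 11.2)
print(hours)  # 12.0 -- a 2-hour H.264 export becomes a 12-hour HEVC export
print(gb)     # 5.6  -- but the file is half the size
```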

After my tests, I found a well-researched article that reached the same conclusions I did in terms of quality settings:

So I guess these settings should be pretty reliable for anyone else who wants to use them.

Nice find!

This made her day. Thanks!

I’m not sure what you’re hoping to see because my dialect is probably different than yours… “Shake” as in shake shingles on roofs, or “shake” as in random jittery movement, or “shake” as in replace it with something else?

So in the Other tab in Export for HEVC I change this:

[screenshot]

to this:

[screenshot]

And how much slower is the export than Medium?

Is there any noticeable jump in quality when changing the H.264 preset to Slow also?

What’s “QP”? Is that Constant Bitrate? What Bitrate and Buffer size should I aim for with Constant Bitrate?
But this suggestion for “QP” is for intermediate files and not necessarily for archival purposes, right?

Right. I have a good GPU but Shotcut doesn’t recognize it for export for whatever reason. I know others have had this problem and some have solved it but I haven’t figured it out.

If a 1060 6GB can comfortably play 8K60 YouTube videos, surely an RTX 2070 can.

I don’t think the GPU would be a problem. Also, consider that YouTube will always compress videos once they are uploaded, so a super high bitrate isn’t necessary.