Which settings should be "best" considering my Machine/Input Video

My inputs are usually 7–10 5760x2880 MP4 files with a data rate of 104,694 kbps. Total input video is usually around an hour in length, and after editing it ends up between 15 and 20 minutes long. The machine is a Core i9-9900K with 16 GB of RAM, a 512 GB SSD, and an RTX 2070 card running Windows 10. Most edits are 2-second transitions (from the default list), plus a replacement audio track and a LUT for my camera at the “Master” filter level.

For export I select “YouTube”, then click Advanced and use 5760x2880 for the resolution and aspect ratio, then go to the Codec tab and use MP4 with a constant bitrate of 200M, 5000 for the buffer, and both Parallel processing and Two Pass enabled. I have not configured it to use hardware acceleration, as I wasn’t clear whether that really works well with what I am doing.

It is taking about 2 hours for both passes, and I’m wondering if I am selecting settings that are making it too long, or if I’m maybe specifying something that isn’t really contributing to my end quality anyway.

Open to suggestions :slight_smile:


Jeez, does YouTube even support parameters that high for playback?
Better to drop these settings down to 1080p.

Yes, but your computer might not.

Here is an example: (Not my video)

The good news is that there are lots of things to try!

Is there a particular reason you prefer constant bitrate over average bitrate or VBR? YouTube will accept your video regardless of which method you use, so the choice boils down to the quality-vs-size tradeoff you’re willing to make for your local file. VBR will often retain the highest quality while being the smallest file size.

If you keep constant bitrate, a 200M bitrate could be overkill considering your inputs are only 100M. I get that you want to make sure no data is thrown away due to bitrate limitations, but double bitrate is a huge buffer.

If you really don’t want to use VBR, would you be open to average bitrate instead? Unless your goal is direct streaming (like from an FTP server) or physical media like Blu-ray that needs the buffer, constant bitrate will probably cause more problems than it solves.

Have you done a sample render with both one-pass and two-pass to see if the analysis pass is really gaining you anything? I do all of my exports with one-pass VBR at 68% quality and am happy with the results. Keep in mind that the purpose of the analysis pass is to figure out where the best places are to throw away data in the event that the output bitrate isn’t high enough to hold everything. In your case, your output bitrate is double the input bitrate, meaning data never needs to be thrown away, meaning the results of the analysis pass will never be used, and the time spent on analysis was wasted. (“Never” is a bit absolutist, but you get the idea.)

On the YouTube preset > Advanced > Other tab, there is a line that says preset=fast which you could change to preset=veryfast if you want a speed boost. Your output resolution is so high that I doubt anybody could visually tell a difference between the two presets. Your video will be downscaled to fit even on 4K monitors, so the nuances between fast and veryfast will probably not be measurable at all. Unless, of course, this is 360 video that gets stretched, but even then, I doubt anyone can tell the difference.

Given that you have an RTX 2070 card, hardware acceleration could be a legitimate option for you and would be worth experimentation.

Curious to hear about your results.

What is the actual resolution after YouTube resamples it?

It stays at 2880p. YouTube calls it 5K. Here’s a video the OP has done with a similar setup where you can check the settings:

Actually that one was done before I figured out I had to use the resolution in the Aspect Ratio as well. If you check out one of my later ones, you’ll see it goes to the 4320p option.

This is for 360 Video, so the stretching would be a bit of an issue. Will try the 100M bitrate and drop the second pass. Trying to figure out which option is “correct” for the hardware acceleration - it gives several suggestions. Any idea if I want quality more than speed (but would still like a little better speed)?

OK, just pulled in my latest video I’m working on, and made the suggested changes. By doing my advanced settings and THEN clicking hardware acceleration, it seemed to pick the best ones (I hope) for me. So it’s now running and appears to be quite a bit faster (estimate is 55 minutes compared to 1 hr 30 mins for first pass and 45 mins for second pass). Also, I am seeing the workload pretty evenly split between CPU and GPU - before now I would see the CPU at 90+% usage, and often getting into the low 90’s (C) for temps even with liquid cooling. So if nothing else this should help the machine last longer as well.

I post new stuff every Sunday, and will make sure to add a post to this one in the “Made with Shotcut” area so y’all can see for yourselves if technically I’ve done better. This is the first one I’ve used this particular LUT on, and other than intro/outro am using all ambient sound rather than a mix of sound and music.

Thanks again!

Well, that turned out not so well. Using hardware acceleration with any settings on the codec is producing sound and no video. Looking at the log, I’m seeing an issue with any resolution option over 4096, as well as a couple of messages saying some of the arguments being passed are deprecated. So it looks like I’ll have to stick to CPU only. Am trying now with a quality-based VBR of 99%, and will see how that does. FWIW…

Oh, that’s a good point. Your resolution may be higher than hardware acceleration supports.

As for the VBR quality setting of 99%, that will probably be overkill. The YouTube preset uses libx264, which has its own “quality slider” called CRF that ranges from 0 to 51, where lower numbers are higher quality. A value of 18 is considered visually lossless. I use a value of 16 for my “masters”, knowing that YouTube will immediately transcode a new copy off of it.

In your case, using a Shotcut quality of 99% translates to H.264 CRF 1, which is super slow and makes super big files without delivering any extra visually-noticeable quality. Shotcut quality 68% corresponds to H.264 CRF 16 if you would like to try that as a starting point. You could raise or lower it from there according to your taste.
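In case it helps, the relationship looks like a simple linear mapping. This is just a sketch inferred from those two data points (68% → 16, 99% → 1); I haven’t checked it against Shotcut’s source:

```python
# Rough sketch of the quality-to-CRF relationship, inferred from the two data
# points above (68% -> CRF 16, 99% -> CRF 1); not verified against Shotcut's code.
def shotcut_quality_to_crf(quality_percent: float) -> int:
    """Map Shotcut's export quality % to an approximate libx264 CRF value."""
    return round(51 * (100 - quality_percent) / 100)

print(shotcut_quality_to_crf(68))   # 16 -- visually transparent "master" setting
print(shotcut_quality_to_crf(99))   # 1  -- enormous files, no visible benefit
print(shotcut_quality_to_crf(100))  # 0  -- lossless in x264 terms
```

In other words, roughly one CRF step for every 2% of quality.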

Here is a chart I made that lists the correlations between the Shotcut quality percent and the H.264 CRF value:

Can you really tell the visual quality difference between the very high resolution and a more conventional resolution such as 1080p? Does it justify the added upload time?

I wonder at what point you reach diminishing returns and the placebo effect takes over.

Great question, and it probably depends on everyone’s individual preferences and eyeglass prescription and television size and viewing distance. For me, that point of diminishing returns happens at 4K. Instead of more resolution beyond 4K, I would rather have an HDR workflow that actually worked and a bigger color space.

However, the OP’s situation is a little unique because it is 360 video. That means his final exported video may be 2880p, but that covers the vertical area from his shoes all the way to the sky. If the “playback camera angle” is looking straight ahead, it may be getting only a 1080p slice of the 2,880 vertical pixels. So in a sense, he’s only providing 1080p to the “active viewing angle” at any given moment, and the other pixels are representing stuff that’s happening outside the viewing area.
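To put rough numbers on that (the vertical viewing angle here is an assumption on my part; real 360 players vary):

```python
# Back-of-the-envelope sketch: how many of the 2880 vertical pixels are on screen
# at once. The ~67.5 degree vertical field of view is an assumed value.
frame_height_px = 2880    # equirectangular frame spans 180 degrees, shoes to sky
view_angle_deg = 67.5     # assumed vertical field of view of the playback window
visible_rows = frame_height_px * view_angle_deg / 180
print(visible_rows)       # 1080.0 -> roughly a 1080p slice at any given moment
```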

Oh, here’s a thought you’ll appreciate on diminishing returns… Most 1080p video is delivered with 4:2:0 subsampling. That means the chroma plane is only 960x540. I think you’d agree that a 540p image on a large TV is going to have visible artefacts like jagged edges and smeary colors, noticeable even in the chroma plane, if you’re close enough. But a 4K 4:2:0 video has a 1920x1080 chroma plane on the same size TV. Now we’ve got at least 1080p for both luminance and chrominance for a true high-def experience. This is why a 4K video can appear to have so much crisper colors than the same video in 1080p even though they’re both in the same BT.709 color space. (To be fair, a 1080p 4:4:4 video could achieve the same thing, but nobody delivers that.) And now, to your point, I don’t see higher resolutions providing any significant returns after the chroma plane hits 1080p. (This assumes we’re not talking about 360 video or IMAX presentations, of course.)
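For anyone who wants the arithmetic, here is a quick sketch of how 4:2:0 shrinks the chroma plane at the resolutions mentioned above:

```python
# 4:2:0 subsampling halves the chroma plane in both dimensions
def chroma_plane_420(width: int, height: int) -> tuple:
    return (width // 2, height // 2)

print(chroma_plane_420(1920, 1080))   # (960, 540)   -> sub-HD color detail
print(chroma_plane_420(3840, 2160))   # (1920, 1080) -> full-HD color detail
print(chroma_plane_420(5760, 2880))   # (2880, 1440) -> the OP's 360 footage
```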

EDIT: Since you asked about justifying the upload time… for my wife’s cooking videos, it’s totally worth it to upload in 4K rather than 1080p because YouTube will transcode the master with a higher bitrate as a reward for authoring in 4K. The transcode difference between a 1080p master and a 4K master is night and day due to that higher bitrate they give 4K videos.

If the target is YouTube then I think it’s fair to assume a typical desktop setup with a 22" to 24" monitor. YMMV.

I’m having trouble with YouTube as it seems they’ve recently begun screwing with the colors in uploaded videos.

The same videos uploaded to Vimeo don’t have color problems.

Can you post a link to one of your wife’s cooking videos?

In broadcast, we are tightly constrained to either 720p or 1080i.

Each station gets 6 MHz of RF spectrum and no more. In fact, some stations have sold off some of their bandwidth and some have divided their 6 MHz channels into digital subchannels. Very often these subchannels broadcast SD programming such as vintage 4:3 programming and movies.

ATSC 3.0 is ambitious but I have no idea where they’ll find the bandwidth.

Yeah, that probably accounts for the majority of viewers. My wife and I have a small computer in our living room hooked to a 42" television so both of us can enjoy the absurdities of the Internet from the comfort of our couch. And also so she can preview her YouTube videos before making them public.

Regardless of screen size, when it comes to YouTube, the real issue is bitrate more than resolution. A 1080p video on a 24" monitor can still look bad especially during high motion sequences if the bitrate is too low. And YouTube’s default bitrate is kinda low.

You’re already aware that a 4K video uploaded to YouTube gets transcoded to 2160p, 1440p, 1080p, 720p, 480p, 360p, 240p, and 144p. The great thing about uploading in 4K is that all eight transcoded versions get higher bitrates, not just the 4K version. So, to get a better looking 1080p on YouTube, it pretty much has to be uploaded as 4K to trick YouTube into thinking this video is extra special and worth the higher bitrate.

I used to have that problem. After reading a lot of other people’s experiences and experimenting on my own, I concluded that YouTube isn’t so much messing up the colors as it is making horrible guesses about what the colors are if the smallest bit of color metadata is missing in the uploaded file. My problems went away when I fully specified color space and color range in an MPEG-4 container, which is the container most likely to be interpreted correctly due to sheer popularity. I think I recall that you preferred to upload HuffYUV files. That codec would obviously require Matroska in order to capture both color space and color range given that AVI doesn’t have those flags. If Vimeo works and YouTube doesn’t, it probably comes down to Vimeo making smarter guesses about the missing (or ignored) color metadata.
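If you want to verify that an export actually carries that metadata before uploading, something along these lines works. It is only a sketch: it assumes ffprobe (part of FFmpeg) is on your PATH, and “my_upload.mp4” is a placeholder filename:

```python
import json
import subprocess

# Ask ffprobe for the stream metadata of the file you plan to upload.
out = subprocess.run(
    ["ffprobe", "-v", "quiet", "-print_format", "json",
     "-show_streams", "my_upload.mp4"],
    capture_output=True, text=True, check=True,
)

for stream in json.loads(out.stdout)["streams"]:
    if stream.get("codec_type") == "video":
        # Fields that come back missing leave YouTube (or the browser) guessing.
        print("color_space:    ", stream.get("color_space", "not set"))
        print("color_range:    ", stream.get("color_range", "not set"))
        print("color_primaries:", stream.get("color_primaries", "not set"))
        print("color_transfer: ", stream.get("color_transfer", "not set"))
```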

Sure, here’s a video where my wife made a 3-foot model of Downton Abbey out of gingerbread. The big reveal is at 9:15 if you want to skip to the finished build:

This video was uploaded as a 4K MPEG-4 with H.264 at CRF 16 (Shotcut quality 68%) and was 11.2 GB in size. For kicks, I made a HuffYUV version, and it came out to 185.6 GB with zero discernible difference in video quality after doing A/B tests between the two on a Shotcut timeline. I love the idea of lossless codecs for masters, but my pocketbook cares more about the cost of hard drives for archiving. :slight_smile: H.264 CRF 16 has become my sweet spot between the two concerns.


It looks like I get the best quality in the least time using the VBR at 68 as you suggest. That actually results in a slightly larger file (for my 2 minute test file the VBR is 1,931,790 vs 1,874,018 for the CBR), but the time is significantly shorter (10:40 vs 20:10). The one other thing I’m going to try on my next work is to bring all the files off my RAID-0 USB-C array and onto the local SSD. I’m thinking that part of the time issue might be the USB channel…

Thanks so much for all the help. Makes me feel better about eschewing the other $35/month for the full Adobe CC to get that “other” product :slight_smile:

I’m so glad to hear you found some settings that work for you! I was just about to ask how things were going because we started getting a little off-topic during your absence. :slight_smile:

I’m relieved that the VBR file size isn’t significantly bigger than CBR because your style of video is like a worst-case-scenario torture test on VBR. You have constant motion from walking around, so the advantages of VBR don’t get to shine as bright. But at least you don’t have to guess what bitrate will provide you with the highest quality.

Since you have Windows 10, you can start up Resource Monitor to see if your disk drives are a bottleneck. There is a “Disk” tab at the top of the screen, and there is a disk queue length graph on the right-hand edge. If the queue length is constantly above 3-ish during an export, then the hard drives are unable to provide data at the rate the data is being requested. Faster storage would likely be of benefit. On the flip side, if your CPU is maxed at 100%, then it probably won’t export any faster because there’s no processor left to deal with the data even if the hard drives could bring it in faster. Obviously, you’d have to test to know for sure, but these graphs will at least give you a benchmark for comparing your test scenarios.

Why not use HEVC instead of H.264 to save even more space? According to this YouTube page, they accept HEVC.

CRF 17-18 is said to be visually lossless for H.264. What’s it for HEVC? 19-20?

By the way, your wife’s videos look fantastic! :+1:

I chased the color problem for a long time and finally concluded the color errors were creeping in at the browser. Firefox, Chrome, and Opera reproduced 601 files just fine, but Edge reproduced 709 perfectly.

Then something changed with Chrome and suddenly it was giving perfect colors as well as Edge.