Which settings should be "best" considering my Machine/Input Video

Yeah, that probably accounts for the majority of viewers. My wife and I have a small computer in our living room hooked to a 42" television so both of us can enjoy the absurdities of the Internet from the comfort of our couch. And also so she can preview her YouTube videos before making them public.

Regardless of screen size, when it comes to YouTube, the real issue is bitrate more than resolution. A 1080p video on a 24" monitor can still look bad, especially during high-motion sequences, if the bitrate is too low. And YouTube’s default bitrate is kinda low.

You’re already aware that a 4K video uploaded to YouTube gets transcoded to 2160p, 1440p, 1080p, 720p, 480p, 360p, 240p, and 144p. The great thing about uploading in 4K is that all eight transcoded versions get higher bitrates, not just the 4K version. So, to get a better looking 1080p on YouTube, it pretty much has to be uploaded as 4K to trick YouTube into thinking this video is extra special and worth the higher bitrate.

I used to have that problem. After reading a lot of other people’s experiences and experimenting on my own, I concluded that YouTube isn’t so much messing up the colors as it is making horrible guesses about what the colors are if the smallest bit of color metadata is missing in the uploaded file. My problems went away when I fully specified color space and color range in an MPEG-4 container, which is the container most likely to be interpreted correctly due to sheer popularity. I think I recall that you preferred to upload HuffYUV files. That codec would obviously require Matroska in order to capture both color space and color range given that AVI doesn’t have those flags. If Vimeo works and YouTube doesn’t, it probably comes down to Vimeo making smarter guesses about the missing (or ignored) color metadata.
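For what it's worth, here is a sketch of what "fully specifying color space and color range" looks like outside Shotcut, using a plain ffmpeg command (this assumes a BT.709 limited-range source; the filenames are just placeholders):

```shell
# Re-encode into an MPEG-4 container while explicitly tagging BT.709
# color space, primaries, transfer characteristics, and limited (TV) range,
# so YouTube never has to guess at the color metadata.
ffmpeg -i input.mkv \
  -c:v libx264 -crf 16 -preset medium \
  -colorspace bt709 -color_primaries bt709 -color_trc bt709 \
  -color_range tv \
  -movflags +faststart \
  output.mp4
```

If your source is actually BT.601 (common for SD material), swap in the matching values (smpte170m etc.) rather than mislabeling it as 709.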

Sure, here’s a video where my wife made a 3-foot model of Downton Abbey out of gingerbread. The big reveal is at 9:15 if you want to skip to the finished build:

This video was uploaded as a 4K MPEG-4 with H.264 at CRF 16 (Shotcut quality 68%) and was 11.2 GB in size. For kicks, I made a HuffYUV version, and it came out to 185.6 GB with zero discernible difference in video quality after doing A/B tests between the two on a Shotcut timeline. I love the idea of lossless codecs for masters, but my pocketbook cares more about the cost of hard drives for archiving. :slight_smile: H.264 CRF 16 has become my sweet spot between the two concerns.

1 Like

It looks like I get the best quality in the least time using the VBR at 68 as you suggest. That actually results in a slightly larger file (for my 2 minute test file the VBR is 1,931,790 vs 1,874,018 for the CBR), but the time is significantly shorter (10:40 vs 20:10). The one other thing I’m going to try on my next work is to bring all the files off my RAID-0 USB-C array and onto the local SSD. I’m thinking that part of the time issue might be the USB channel…

Thanks so much for all the help. Makes me feel better about eschewing the other $35/month for the full Adobe CC to get that “other” product :slight_smile:

I’m so glad to hear you found some settings that work for you! I was just about to ask how things were going because we started getting a little off-topic during your absence. :slight_smile:

I’m relieved that the VBR file size isn’t significantly bigger than CBR because your style of video is like a worst-case-scenario torture test on VBR. You have constant motion from walking around, so the advantages of VBR don’t get to shine as bright. But at least you don’t have to guess what bitrate will provide you with the highest quality.

Since you have Windows 10, you can start up Resource Monitor to see if your disk drives are a bottleneck. There is a “Disk” tab at the top of the screen, and there is a disk queue length graph on the right-hand edge. If the queue length is constantly above 3-ish during an export, then the hard drives are unable to provide data at the rate the data is being requested. Faster storage would likely be of benefit. On the flip side, if your CPU is maxed at 100%, then it probably won’t export any faster because there’s no processor left to deal with the data even if the hard drives could bring it in faster. Obviously, you’d have to test to know for sure, but these graphs will at least give you a benchmark for comparing your test scenarios.

Why not use HEVC instead of H.264 to save even more space? According to this youtube page, they accept HEVC.

CRF 17-18 is said to be visually lossless for H.264. What’s it for HEVC? 19-20?

By the way, your wife’s videos look fantastic! :+1:

I chased the color problem for a long time and finally concluded the color errors were creeping in at the browser. Firefox, Chrome, and Opera reproduced 601 files just fine, but Edge reproduced 709 perfectly.

Then something changed with Chrome and suddenly it was giving perfect colors as well as Edge.

Your wife is very good on camera.

It would be nice if you could shake that white brick background.

I haven’t tried HEVC in a couple of years, so I did some fresh testing to see if my previous reasons were still valid.

I found H.264 CRF 17-18 Medium to correspond to HEVC CRF 20 Slow (Shotcut quality 60%). But to my eye, these CRFs are “good enough for delivery” and not “visually lossless for archive”. Below are the general purpose settings I consider to be visually lossless enough for archival purposes and also able to survive a generation of transcoding (note this is totally subjective to my own video material):

  • H.264 CRF 16 (Shotcut quality 68%), preset Medium
  • H.265 CRF 18 (Shotcut quality 64%), preset Slow
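For anyone doing this outside Shotcut, those two archival settings map roughly onto ffmpeg like this (a sketch, not gospel; filenames are placeholders and audio is just passed through):

```shell
# H.264 archival master: CRF 16, preset Medium
ffmpeg -i source.mp4 -c:v libx264 -crf 16 -preset medium -c:a copy h264_master.mp4

# H.265/HEVC archival master: CRF 18, preset Slow
# (the Slow preset is what keeps fine detail from smearing)
ffmpeg -i source.mp4 -c:v libx265 -crf 18 -preset slow -c:a copy hevc_master.mp4
```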

Yup, the Slow preset was required for H.265. I tried Medium first and the fine details were always smeared. Medium never looked quite right even up to very high-quality CRFs. The slow preset is a substantial jump in quality.

For extra credit, I would consider using QP instead of CRF for any intermediate files to avoid the bitrate shortcuts that CRF uses during fast motion sequences. QP would eliminate practically all chances of macroblocking.
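As a sketch, constant-QP mode with ffmpeg's libx264 looks like the following (QP 16 here is just an illustrative value matching my CRF sweet spot, not something I've benchmarked):

```shell
# Constant quantizer: every frame gets the same quantization level,
# so fast-motion scenes are NOT compressed harder than static ones.
# Expect noticeably larger files than CRF at a similar quality number.
ffmpeg -i source.mp4 -c:v libx264 -qp 16 -preset medium -c:a copy intermediate.mp4
```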

Now for the trade-off. Using the settings above, H.264 makes a file that is 2x the size of the HEVC one. However, HEVC takes 6x as long as H.264 to encode.

So the question is processing time versus disk space. Since we have slow hardware, processing time is our biggest concern. If we have an export that takes two hours with H.264, it would take 12 hours with HEVC. That is enough time to either delay a YouTube video release by a day, or to cut a day out of the post-production schedule in order to leave time for the export (and no time for a do-over if there was a mistake). That’s brutal on the production timeline. Plus, we can’t edit the next video during the additional ten hours that HEVC requires, so that’s a double penalty.

If we had faster hardware or a GPU that could achieve similar quality in similar time, then maybe we could justify HEVC. But for now, time is more important than disk space when the space difference is only 2x versus the time penalty.

After my tests, I found a well-researched article that reached the same conclusions I did in terms of quality settings:

So I guess these settings should be pretty reliable for anyone else that wants to use them.

1 Like

Nice find!

This made her day. Thanks!

I’m not sure what you’re hoping to see because my dialect is probably different than yours… “Shake” as in shake shingles on roofs, or “shake” as in random jittery movement, or “shake” as in replace it with something else?

So in the Other tab in Export for HEVC I change this:

preset=medium
movflags=+faststart

to this:

preset=slow
movflags=+faststart

?

And how much slower is the export than medium?

Is there any noticeable jump in quality when changing the H.264 preset to Slow also?

What’s “QP”? Is that Constant Bitrate? What Bitrate and Buffer size should I aim for with Constant Bitrate?
But this suggestion for “QP” is for intermediate files and not necessarily for archival purposes, right?

Right. I have a good GPU but Shotcut doesn’t recognize it for export for whatever reason. I know others have had this problem and some have solved it but I haven’t figured it out.

If a 1060 6GB can comfortably play 8K60 YouTube videos, surely an RTX 2070 can.


I don’t think the GPU would be a problem. Also, consider that YouTube will always compress videos once they are uploaded, so a super high bitrate isn’t necessary.

From Merriam Webster:

b : to get away from : get rid of

can you shake your friend? I want to talk to you alone— Elmer Davis

Correct.

HEVC Slow is 2.4x the time of HEVC Medium.

HEVC Medium is 2.5x the time of H.264. But the quality is noticeably worse than H.264.

I chose that H.264 setting to be visually lossless. Going to Slow could theoretically retain more color information at a mathematically measurable level, but it is not visually perceptible to me. With H.264, going from Medium to Slow is a minor improvement compared to the radical improvement it makes in HEVC. There are several SSIM/PSNR charts around the Internet that demonstrate the quality-vs-preset curves, and these are well-known characteristics of these codecs.

This is a great article that answers everything:

https://slhck.info/video/2017/02/24/crf-guide.html

Correct. The issue here is that CRF mode will adaptively change the amount of compression based on the amount of movement in the scene. It will put more compression on areas with lots of movement because it knows the eye can’t track details that move fast. However, that extra compression, while saving bitrate, can create blocking artefacts. If that blocky video is brought onto a Shotcut timeline and then compressed again as part of the final export, the motion sequences could degrade so much from generational loss that the final video looks bad, especially if a heavy color grade brings out the macroblocking (like raising the shadows and revealing compression artefacts in dark areas). By using QP mode, the encoder does not throw away extra data during fast motion sequences. It keeps the quality high all the time and the files will be larger as a result. But in essence, this prevents a generation of transcoding loss.

I don’t use QP for the final export because the target is the human eye. If it looks good enough to the eye, then I’m done at this point. I would prefer the smaller file now because this is the video that will be distributed. I do not plan to use my final videos as source material for future videos. I would go back to the original sources if I wanted to do that.

For archival purposes, I doubt it will be good enough though. I am not aware of any consumer GPU that can encode at the quality levels we’re targeting here. Archival-grade masters need either Medium or Slow to do their work. Even the RTX 20xx series struggles to match the Fast preset. GPUs can create files that look “decent enough”, but nowhere near “visually lossless”. It all depends on the level of quality you are targeting.

Ha ha, I feel your pain. Color grading that white background was impossible before the waveform scope was added. :slight_smile: The white brick is actually a foam sheet less than an inch thick that overlays the wall. The wall underneath is the same green that you see below the white wood chair rail. Although the white brick is intensely bright, the green wall underneath looks even worse. However, it could maybe open up some green screen options in post-production… :smile:

How about an old-fashioned can of paint?

True. But the whole house is that green color. We aren’t ready to have one wall look different from the rest, and definitely not ready to paint the entire house to match. So we are stuck scalding the eyes of our viewers with Amazon wall paneling haha. We’re about due for a set decoration refresh, so maybe we’ll find something a little darker next time.

Latest video is up (https://youtu.be/zn9rkt-glPk), but I do find myself thinking that some of the shots are still looking a little soft, and wondering if going at 80 rather than 68 would help that, or am I barking up the wrong tree?

In terms of the set decoration discussion, I found (once upon a time when I did stuff where other people showed up) that having a nice mountain print attached to a rolling panel that could be moved behind them (and moved away after) worked pretty well.

Cool video! You’re making 360 grow on me.

How does the local file exported from Shotcut look when you play it back? If that file is nice and sharp, then the softness is probably due to YouTube’s compression methods and there may not be a lot you can do about it.

There are a few things left to try…

  • Render that same video at 80% quality like you suggested, upload it as private, and see if it looks any better. A 30-second sample segment is good enough to test.
  • Add this line to Export > Advanced > Other: pix_fmt=yuv422p. That line will create an output file with twice as much color information as the one you already made. Since 360 video undergoes major stretching, it could be of benefit to have twice as much color detail as usual. Create 68% and 80% quality versions and see if they look better.
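In ffmpeg terms, that pix_fmt change looks like this (a sketch; the filename is a placeholder, and CRF 16 stands in for the 68% quality setting discussed earlier):

```shell
# yuv420p (the usual default) samples chroma at quarter resolution (4:2:0).
# yuv422p keeps chroma at half resolution (4:2:2), i.e. twice the color detail,
# which can help when 360 footage gets stretched and reprojected.
ffmpeg -i export_source.mp4 -c:v libx264 -crf 16 -pix_fmt yuv422p test_422.mp4
```

Note that 4:2:2 H.264 may not play everywhere, so keep it for test uploads and intermediates rather than general distribution.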

That should get you much closer to a definitive answer. I’m not a 360 guru, but are any 360 videos perfectly sharp? Between the heavy compression of such huge files and all the stretching and warping and distortion that gets put on them, is it even possible to have sharp 360 these days on YouTube? I don’t know, haven’t researched it. Would love to see an example.

I like the idea of a mountain print. If we can find a way to do that without looking like a Sears family portrait studio, we may give it a go!

Local copy of the file is sharper, but again that could be YT, and it could be just that things stream a bit “hazier” - IOW while I have a fast connection I’m sure there are ups and downs and it could be lowering quality to keep up the frame rate. Will try your suggestions and see how that works.

It would be easier to get sharper if I could upgrade from my current camera (Insta360 One X - $400 USD) to their Pro 2 ($4K USD) or their Titan ($15K USD). The One X does 5.7K video, the Pro 2 does 8K video, and the Titan does 8K 3D video. Unfortunately, the associated price tags leave them out of reach unless my videos start to go viral :slight_smile:

Thanks!

Reading this thread and taking notes for my try with 4K and Youtube after just building a new computer! Thanks.

That 360 video is really cool :slight_smile: Maybe I should try it on my backpacking trips, haha. Probably not. Day hiking a short trail… possibly. I do take 360 photospheres and have uploaded a bunch of those to Google Maps. I know that Google has done some Street View style photos on some popular trails.

I think we should call them spheres rather than 360. It is spherical video.

Just paint the foam sheet :wink:

I’ll tell you what doesn’t work: a bed sheet. You’ll be chasing ugly wrinkles. Been there, done that. You want something made of muslin.

https://www.google.com/search?tbm=shop&sxsrf=ACYBGNS5Zm3y9w5LpX63yOXHyQaUFmVljQ:1570399767560&q=photography+backdrop&spell=1&sa=X&ved=0ahUKEwim0rOC04jlAhUEOH0KHR4UCIAQBQjyAigA&biw=896&bih=473

https://www.fovitec.com/blogs/blog-fovitec-com/57703299-how-to-get-your-muslin-fabric-backdrop-wrinkle-free