Any advantage in recording in H.265?

I normally use H.264 for my 4K drone footage and then encode using libx264 (or h264_qsv), but my drone also gives me the option of recording in H.265 as an alternative. As I understand it, this can preserve more information at a lower bitrate. It seems it can take longer to render, but you also end up with smaller file sizes.

YouTube also appears to support VP9 natively, so encoding H.265 footage using the libvpx-vp9 codec is an option and does produce smaller file sizes, although it seems more challenging for my laptop to actually play!

However, theoretically, would I get better quality from YouTube with the latter approach? How does the codec affect the viewer’s experience - does the browser cope with these codecs easily enough or might it be more difficult for some people to view?

It would be nice to know as this is a decision I need to make before recording and not in post-production as there doesn’t seem to be much advantage in recording in H.264 and then trying to encode in libx265 or whatever. (Or maybe I am wrong?)

Many thanks,
Keith


The optimal recording format for the drone depends on how H.265 is being used:

  • “Use the same bitrate as H.264 to capture twice the quality” - This would obviously improve video quality and probably be the best choice for this reason alone.

  • “Stay at the same quality as H.264 but use half the bitrate” - This would save disk space at the expense of being slower to edit and process. But if disk space is not an issue and your computer can edit H.264 directly, then sticking with H.264 may be of benefit for workflow efficiency (no need to generate proxies from H.265 before editing can begin).

As for YouTube, it makes no difference which file format is uploaded to it. YouTube will always re-encode the file anyway according to their internal standard. So the codec chosen in Shotcut has zero impact on the viewer’s browser experience because the viewer never sees your original uploaded file. Choose a codec based on your personal disk space and quality preferences.

Personally, I still encode my final output in libx264 simply because it’s faster. Quality is the same as libx265 with the caveat of a 30% bigger file size which doesn’t bother me, and is a worthwhile trade-off for the encoding speed.
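If anyone wants to check that trade-off on their own footage, a quick test outside Shotcut could look something like this. This is a rough sketch assuming ffmpeg is available on the command line; clip.mp4 and the CRF value are placeholders, and the x264/x265 CRF scales are not strictly comparable:

    # encode the same clip with both codecs, then compare wall-clock time and file size
    ffmpeg -i clip.mp4 -c:v libx264 -crf 18 -preset slow -an test_x264.mp4
    ffmpeg -i clip.mp4 -c:v libx265 -crf 18 -preset slow -an test_x265.mp4
    # -an drops audio so only the video encoders are being compared
    ls -lh test_x264.mp4 test_x265.mp4

(On Windows, just check the file sizes in Explorer instead of using ls.)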


Thanks Austin. Very informative as usual!

I think there is meant to be an improvement in video quality in the way DJI employs H.265 over H.264. So that would sway me in that direction, especially as it gives smaller file sizes. Exporting my videos in libvpx-vp9 gives a file about 45% smaller than h264_qsv! (Although libx265 only gives about 20% smaller.) However, it does take about twice as long to export.

As I said though, I have some playback issues on my PC with more compressed codecs - probably because it has to do more work. libx265 just about plays in fullscreen but is very jumpy in the Windows player, and loses a lot of information when I try VLC player. On the other hand, libvpx-vp9 does not play in the native Windows player at all but does work well in VLC player.

I’ve previously had problems playing videos on my 4K TV when directly connecting a USB drive, and settled on libx264 with specific bitrates for best results, so compatibility of viewing the file is also a concern. This is not a huge consideration though, as I normally just end up watching them over YouTube anyway.

H.265 does seem to be the upcoming standard and I would also like to future-proof my recordings as well as get the best quality out of them. I guess I will just have to experiment with it a bit more. Any further comment on the source/target YouTube re-encoding query would be appreciated though.

Cheers,
Keith

Hmm, that’s a little unsettling. If file size goes down, then quality can’t go up by much without eating into the space savings. Is it possible to visually compare H.264 and H.265 recordings of the same flight path? That would answer whether H.265 improves quality or just reduces file size. Tests would need to be done on both moving and still footage.
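If flipping between two video players makes the comparison awkward, one low-tech option (a sketch assuming ffmpeg is installed; the filenames and the 10-second offset are placeholders) is to grab a still from roughly the same moment in each recording and A/B the images in a viewer:

    # pull one frame from the same point in each recording
    ffmpeg -ss 10 -i flight_h264.mp4 -frames:v 1 frame_h264.png
    ffmpeg -ss 10 -i flight_h265.mp4 -frames:v 1 frame_h265.png

Stills only cover the static half of the test, so short matched clips would still be needed to judge the moving footage.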

Sorry, I must be missing the question. What was the query? With YouTube, generally speaking, so long as a file is uploaded with sufficient quality to gracefully survive one generation of re-encoding, then that’s the best that can be done. The actual format is immaterial, since nearly all modern formats can achieve equal quality if given sufficient bitrates.

Unfortunately, H.265 is dead in the water because its patent situation is so absurdly complex that few companies risked touching it. It’s been around for seven years, and in some notable places has an adoption rate of only 12% compared to H.264 at 84%, and H.264 dates back to 2003! Details are at the top of page 3 in the following PDF by the IBC. This PDF also describes the origin story of the new EVC format, and is a very worthwhile read to anyone interested in the future of video encoding:

https://www.ibc.org/download?ac=10463

To fix the legal disaster of H.265/HEVC, three new codecs were released this year by MPEG:

  • VVC: Compresses better than HEVC (approaches AV1), but costs money to distribute videos encoded with it
  • EVC: Has similar compression to HEVC while also encoding much faster, and comes in two flavors: the Baseline profile is free to use even commercially, whereas the Main profile compresses a little better but costs money to distribute videos with
  • LCEVC (technically not a new codec, but still a new standard): Most likely to be used by streaming providers to reduce bandwidth, but not likely to be used for stand-alone video editor output. LCEVC is a process that combines two streams (a base layer and a detail enhancement layer), which is useful for adjusting detail (resolution) while streaming, but overkill for a standalone video file.

In my personal opinion (which counts for nothing haha), MPEG got it right this round. The legal and patent structure is efficient and reasonable, whereas HEVC's was completely broken. And with EVC Baseline being a totally free option, I can see it becoming very popular among independent producers. If these formats catch on, there will be little incentive for any software or hardware vendors to add support for H.265 to anything that didn't already have it. H.265 growth would likely halt in favor of the new codecs that replace it, which have equal/better/faster compression and less royalty complexity.

Back to the original issue of future-proofing video… I’m trying to suggest that the industry isn’t coherent enough for true future-proofing to be possible. MPEG’s new standards are extremely ambitious and aggressive, and completely wipe out the advantages of older formats (assuming they become widely adopted). However, we can’t encode in these new formats yet because FFmpeg doesn’t have support for them (the codecs were only finalized two months ago), which in turn means Shotcut doesn’t support them until FFmpeg does. So we’re in a limbo state right now.

The number of relevant delivery formats is currently dizzying, but also understandable if given the background:

  • H.264: still the ultimate fallback option due to ubiquitous support and hardware acceleration
  • H.265: on track to get completely replaced by VVC and EVC
  • VP9: a decent option for now since it’s free and often hardware accelerated, but will likely be completely replaced by the also-free EVC Baseline codec (which also encodes faster)
  • VVC: intended to be the go-to format for anyone distributing video and willing to pay for the best
  • EVC Main: not quite VVC quality and size, but encodes much faster (and costs money to distribute)
  • EVC Baseline: likely to be the go-to royalty-free format that’s ideal for home users and independent low-budget producers
  • LCEVC: likely to be used by streaming platforms
  • AV1: was supposed to be the format to end all formats, but appears to be crumbling since encoding is still incredibly slow and a patent pool has started around it, negating its “open” primary appeal (search “Sisvel AV1 patent pool”)

Conclusion:

Basically, “future-proof” is dicey because the future is already chaos. All we can do is pick a format that's popular enough to be around for a while and also encodes to a quality level that meets our requirements. Many codecs fit that bill, and H.264 is potentially the most future-proof of all right now because of its sheer popularity and widespread support that cannot be undone overnight.

Since H.265 is already baked into a lot of hardware for decoding, existing support is unlikely to be dropped. So encoding to it would be fine. But if you ever wanted to sell a video you made, you would have to pay for distributing H.264 or H.265. That’s where VP9 and EVC Baseline become more attractive options. If you aren’t selling and merely encoding home videos or uploading to YouTube, then YouTube pays the royalties, so it legally doesn’t matter what you use.

Personally, I will be targeting EVC Baseline when it becomes available. It has excellent compression and encoding speed while also being royalty-free, meaning I can do anything I want with my videos and not have to contact any lawyers or royalty pools. That’s the kind of future I want.

Sorry for the long response that gave way more detail than requested, but this was a chance to introduce VVC and EVC to the forum since nobody’s talked about them yet and these formats should be a big deal pretty soon.


Wow! I asked a short question and got a full personalised tutorial! Thanks for such a comprehensive reply! I’m running out of asterisks! :slight_smile:

That is all really useful information and tells me what I need. Just to clarify the one query above, seeing as you asked: I was basically asking whether it makes more sense to record in H.265 if you plan to encode in VP9, rather than going from H.264 to VP9, as you would think they would be more compatible. Or the converse: if I am going to export in libx264, do you not get more compatibility (and a less lossy result?) by shooting in H.264 rather than H.265 (even if the H.265 has a bit more detail encoded)?

It seems you are saying no above, but maybe not. If the answer were yes, and YouTube tries to use VP9 when it can, then maybe that would be a reason to use H.265. But again, I think you are indicating that is not the case.


Oh, I see what you’re saying now.

The answer is dependent on this variable… Is the source video visually lossless or not? (Not bit-for-bit mathematically exact lossless, but simply indistinguishable from a perfect original to the human eye. No obvious compression artefacts.)

If the source is visually lossless (or close enough), then it can be encoded to any other format without any reservation or quality concern. The VP9 re-encoding created by YouTube would look basically the same regardless of whether the source file was H.264, H.265, or VP9, because all three of those formats would be producing the same visually lossless image as a source for re-encoding. The more bitrate codecs are given, the more equal they become until they all hit lossless mode and truly are equal.

However, if the source has noticeable compression artefacts in it (like macroblocks, color smearing, etc), then there can be a small benefit to using the same codec for both input and output (where “input” means the file exported from Shotcut and “output” means the re-encoding of it created by YouTube). This is because the codec of the input video has already stripped color and detail out of the video to get high compression (hence the artefacts), and re-encoding the video with the same codec is likely to say “everything I would normally strip out to get the size down has already been stripped out, so I’ll leave the remaining stuff the way it is”. However, if the input was libx264 and the output was VP9, then VP9 will take an entirely different approach to stripping data out of the image, and we will see cumulative loss from both the libx264 input and VP9 output codecs.
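If anyone wants to put a rough number on that cumulative loss, here is a sketch using plain ffmpeg. The filenames and CRF values are placeholders, YouTube's actual encoder settings are not public, and the ssim filter needs both inputs at the same resolution and frame rate:

    # master.mp4 stands in for a near-lossless export of the timeline
    ffmpeg -i master.mp4 -c:v libx264 -crf 23 -preset medium -an upload.mp4
    # crude stand-in for YouTube's VP9 re-encode of that upload
    ffmpeg -i upload.mp4 -c:v libvpx-vp9 -crf 33 -b:v 0 -an viewer.webm
    # measure how far the twice-encoded result has drifted from the master
    ffmpeg -i viewer.webm -i master.mp4 -lavfi ssim -f null -

Repeating the test with the upload at CRF 16 instead of CRF 23 should show the difference a near-lossless intermediate makes.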

Summary: For best quality, videos should be sent to YouTube at the visually lossless level (such as CRF 16 / quality 68% using libx264) so that re-encoding doesn’t amplify any existing artefacts. However, if videos are exported from Shotcut at less-than-lossless quality, then some guesswork has to be done to match the re-encoding format of YouTube:

  • Videos with lower resolution than 4K are re-encoded by YouTube with H.264 unless the channel is considered popular by their standards, and then it goes to VP9 or is given a higher H.264 bitrate. I’m not sure how they decide which format to use.

  • Videos at 4K or higher go straight to VP9 whether a channel is popular or not.

I’m not aware of YouTube ever sending H.265 to a browser, so I would avoid it as a less-than-lossless Shotcut export format because it’s a guaranteed codec mismatch to YouTube. H.265 and VP9 have some similarities, but ultimately still take different approaches to stripping detail from an image to get the size down and will create cumulative loss. However, H.265 would be okay if exported at a visually lossless level, such as CRF 18 / quality 64% with preset Slow.
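For reference, a rough command-line equivalent of those two export settings would be something like this (the input/output names are placeholders, and the audio settings are just an example):

    # libx264 at roughly visually lossless (Shotcut quality 68% ~ CRF 16)
    ffmpeg -i timeline.mov -c:v libx264 -crf 16 -preset slow -c:a aac -b:a 256k upload_x264.mp4
    # libx265 equivalent (Shotcut quality 64% ~ CRF 18), if a smaller upload file is wanted
    ffmpeg -i timeline.mov -c:v libx265 -crf 18 -preset slow -c:a aac -b:a 256k upload_x265.mp4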

As for the drone’s capture format, use whichever codec looks better. Shotcut needs access to the best looking source it can get regardless of format.


Very good and interesting conversation! I am not so deep into the codec specifications, but I am interested in visual quality. @Austin: when you say “visually lossless”, this of course depends on the person. To my eyes I can hardly distinguish the video quality of a Shotcut export at “YouTube 55% quality H.264” from one at “YouTube 64% quality H.264”. There might be a slight difference (I assume) but I can hardly tell for sure. The file size is about 30-40% less in the 55% quality version.
So what export settings would you consider “visually lossless” in Shotcut, to give me an anchor point for my settings? :slightly_smiling_face:

Thank you

Like most things, the answer is “it depends”. :smile:

“Visually lossless” is an industry term that has a generally-agreed quality measurement attached to it. The term is not open to personal interpretation. Consider the following failure points that would prevent two people from agreeing on whether a file is visually lossless if the definition was open-ended:

  1. The age of the viewer becomes a factor. An image that looks good to 60-year-old eyes can look mediocre to 18-year-old eyes.

  2. The size of the display device becomes a factor. Imagine a professionally-produced video that looks great on a 100-inch video wall. Transcoding it to 50% export quality in Shotcut will look fine if the video is shrunk down to a four-inch phone screen, but will not look fine if played on the same 100-inch video wall as the original. The loss of detail and color information becomes much more apparent on the larger screen.

  3. The quality of the display device becomes a factor. A calibrated professional video monitor will show problems that a 10-year-old consumer TV would not be accurate enough to reproduce.

  4. The visual complexity and quality level of the source video becomes a factor. If the source is professionally produced with brilliant color, sharp fine detail, and some fast movement scenes that don’t smear, then a Shotcut export at 50% quality will not be able to keep up with that level of detail. There will be clear visible differences between the source and the export. But if the source video was low-contrast mush from a GoPro Hero3 not using ProTune, then yes, Shotcut at 50% quality may be sufficient to retain full quality because the source itself is no better.

Video encoded at a “visually lossless” level is supposed to excel at the worst case scenario for all of these variables.

The following paper describes one path to the development of the “visually lossless” criteria:

However, that paper must be purchased to read it. The important results from it can be found as a citation in this freely-downloadable paper:

The findings were that “visually lossless” could be achieved (even on complex images with high contrast and intentional noise patterns) when the following metrics were met:

  • The luma (grayscale) component had MSSIM values greater than 98.5%
  • The luma (grayscale) component had PSNR-HVS-M values greater than 40 dB

The export settings I recommended are consistently in that ballpark. The beauty of those settings is having practically 100% confidence that the codec will do no visible harm to your video even under the toughest of viewing circumstances.
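If anyone wants to sanity-check an export against that threshold, ffmpeg's ssim filter prints a per-channel summary; the Y figure is the luma SSIM to compare against 0.985. A sketch with placeholder filenames: both files must have the same resolution and frame rate, ffmpeg's average SSIM is only a ballpark stand-in for the MSSIM variant in the paper, and PSNR-HVS-M is not in stock ffmpeg, so that second metric needs a separate tool:

    # first input is the export under test, second is a lossless or near-lossless reference of the same timeline
    ffmpeg -i export.mp4 -i reference.mp4 -lavfi ssim -f null -
    # look for the final "SSIM Y:..." summary line; Y >= 0.985 is the target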

The quality targets for the color channels are more complex to describe and also have a wider interpretation (in the higher quality direction). Some visually lossless codecs like ProRes and DNxHR purposefully retain more information than necessary to make color grading possible during post-production without artefacts. But for final delivery, this level of retention is not necessary. Final delivery can fall back on the chroma subsampling principles of “the human eye is way more sensitive to intensity than it is to color”. Color reduction is where the major space savings happen.

Speaking of which, the export targets described in this thread so far do not count as “final delivery” videos. They are being handed to a service like YouTube which transcodes them to another format, and that other format becomes the final delivery. Thus, there is benefit to exporting a video at one notch above visually lossless in order to gracefully handle a generation of transcode loss, and after that transcode, still be able to conquer the viewer variables listed at the top of this message.

Long story short, the ideal export settings always depend on the use case. For personal videos, quality at 55% might absolutely be good enough, and personal interpretation is king in this scenario. Do whatever makes you happy. But if the goal is to appear your best in front of a professionally-discerning audience, or an audience of 18-year-olds watching YouTube on 100-inch video walls, then it’s best to encode with the visually lossless settings as described.


You are an expert! Thanks so much for the information. I think I got the point :slight_smile:
I used Magix Video de Luxe in a version from around 2008(?) and the exported video quality in H.264 with “good quality” settings was terrible and obviously visible, especially in fast movements (a kind of heavy noise).
That was the main reason for me to look for something else, and I am perfectly happy with Shotcut so far.
I did a quick test of quality export settings and their relation to file size and render time. It seems that, for my eyes and monitor (a ThinkPad P50 laptop for now), 50% is OK for me :slight_smile:
Results are here: Export Basics


If the settings look good to you, then you’ve arrived!

The ThinkPad laptop screen might give me a slight pause… does it have a matte screen or gloss? A matte screen makes it difficult to tell how sharp a video really is. If that 50% quality video was moved to a different computer or TV with a higher quality screen, the difference might become much more noticeable.

Since it doesn’t sound like you’re targeting the high end of quality, there’s a change you might be willing to do to greatly improve exporting speed. On the Export > Other tab, remove the vpre line and change preset=medium into preset=veryfast. If the quality takes a hit (which it probably won’t), just bump up the percentage a little. But the export speed should noticeably increase. I wouldn’t recommend this for professional distribution, but it may be fine for your use case.
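For context, a rough command-line equivalent of that tweak is below. Going by the quality/CRF pairs I mentioned earlier in the thread (CRF 16 ≈ 68%, CRF 18 ≈ 64%), 50% quality appears to land somewhere around CRF 25-26, but that mapping, the filenames, and the audio settings here are all just placeholders for illustration:

    # roughly what a 50% quality export with the faster preset boils down to
    ffmpeg -i timeline.mp4 -c:v libx264 -preset veryfast -crf 26 -c:a aac output.mp4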


Thanks a lot, Austin!
Honestly, I am always after quality, but it's limited by my eyes, equipment and feeling :slight_smile:
I also have a much bigger and better monitor, a big Dell U2713HM, but it also has a matte display like my laptop screen. I didn't know there was a difference in how much detail they show. I always thought a glossy screen was just difficult to look at if you have light from behind or from the sides. Professional editors have those shades around their monitors to get rid of side light.
I will try the settings you recommended; they will probably suit me. But I am already quite happy with the render times - it's a big improvement compared to my old Magix Video (de Luxe :-)).
What exactly does the vpre line mean or do (just to understand its influence on encoding a little bit)?

Edit: just looked it up. It seems I have no “vpre” line? I just have two lines in the “Other” tab; the last says “preset=fast”. I will try changing it to veryfast and report if I see any difference in the video.

Edit 2: Yes, it works! With “veryfast” I can reduce render time from 58s to 37s at 50% quality! At the same time the file size is reduced from 85.8k to 78k. I think I can see slight rendering artefacts (noise, color flickering) just a tiny bit more than with the “fast” preset. The improvement in render time is quite enormous, but in most cases it won't bother me.
Am I right in guessing that I can achieve the same quality with the “veryfast” preset if I increase the quality to about 55 or 60%? And at 100%, do both settings give me lossless quality? There must still be a difference, probably…

I really find it difficult to judge the quality difference as my eyes are 56 years old and I am a bit color blind (typical red-green blindness - quite common in men). It runs in the family; my uncle had it but not my parents. So I am not really predestined for image or video quality judgement :slight_smile:

So is libx265 the same thing as h265?

H.265 is a specification. Anybody is welcome to write whatever code they want to generate video that meets the specification (I’m glossing over the legal nuances there). libx265 is one such code library for compressing video. Other people have written their own code. All of their results should be playable in any H.265 player because their output complies with the specification. So, H.265 is a specification, and libx265 is a code implementation of that specification.

In FFmpeg and Shotcut, hevc_nvenc/amf/qsv refer to hardware encoders on GPUs, which compress video in a different way than libx265, which does everything on the CPU. With the exception of very recent GPUs, hardware encoders will produce larger files with lower quality because they make mathematical compromises for speed. And many older GPUs don’t support Lossless mode.
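To see what a given ffmpeg build actually offers, and to compare the two approaches on the same clip, something like this works (placeholder filenames and bitrate; the GPU line assumes an NVIDIA card and an ffmpeg built with NVENC support; use findstr instead of grep on Windows):

    # list the H.265 encoders this ffmpeg build was compiled with
    ffmpeg -hide_banner -encoders | grep 265
    # CPU encode with libx265 versus NVIDIA hardware encode of the same source
    ffmpeg -i clip.mp4 -c:v libx265 -crf 22 -preset medium -an cpu_test.mp4
    ffmpeg -i clip.mp4 -c:v hevc_nvenc -b:v 20M -an gpu_test.mp4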

I keep hardware encoding unchecked in Shotcut, so does that prevent the problem you just mentioned?

Generally yes, if not getting too creative with export tweaking. On the Codec export tab, it’s still possible to override the codec drop-down box to use h264_nvenc (as one of many examples) and cause hardware to be used even with that checkbox unchecked.

I used to use Default for Format but it used to crash my computer. What codec is chosen with that option?

libx264

That’s weird, because when it didn’t crash my computer, Films and TV was able to play it without any issues. But when I chose libx264 specifically, Films and TV wasn’t able to play it properly. So how does that happen?

Default is 55% quality, which the Windows player can handle. But the player can’t handle 100%.

I’m opting for libx265 because there is less quality loss and the Films and TV app can still play it.