Quality of the finished video

Two videos may be so similar that our brains cannot detect the difference. They look the same to us even though they are not identical. In those cases we need computer programs to measure the difference between the videos.

Sorry, but I still can’t wrap my head around the fact that I can’t see any difference between two exported files, one 267 MB and one 6.6 GB. :slightly_frowning_face:

If I have 267 MB, 2.6 GB and 6.6 GB files and I can’t see a difference between them on my monitor, why would I upload the latter two, much heavier files to YouTube?

And when I say “I can’t see the difference”, I mean I compared text and similar detailed parts of the videos.

What makes a video file 20 times heavier if you can’t see the difference?

I apologize if I wasn’t clear in my first post.

Hi @afan
I’m not an expert on the subject, but I think you also need to take into account the quality of your original GoPro video.

Exporting it with the H.264 High Profile preset or one of the lossless settings won’t improve the original quality. That must be one reason why you don’t see a difference.

1 Like

You are correct, there is no reason to upload the larger files to YouTube.

The larger files have a “closer to perfect” representation of the original GoPro video. These large and “more perfect” files are typically used during editing, when a clip is exported and then brought back into Shotcut to be part of the final video.

An example of this is stabilizing the motion of a video just one time, exporting it as a stabilized version in lossless format, then bringing the stabilized video back into Shotcut to be part of the final production. It is much faster for Shotcut to play a pre-stabilized video than it is for Shotcut to stabilize video on-the-fly. The lossless formats prevent generational (accumulated) loss from encoding a video multiple times, similar to the way cassette and VHS tapes look worse after the third copy.
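
For illustration, here is a rough sketch of what such a lossless intermediate export could look like if you did it with ffmpeg on the command line instead of through Shotcut’s export presets (ffmpeg is assumed to be installed; the file names are made up):

```python
# Rough sketch: produce a lossless intermediate clip with ffmpeg, so that
# repeated edit/export cycles do not accumulate generational loss.
# Assumes ffmpeg is on the PATH; file names are made up for illustration.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "stabilized_clip.mp4",     # hypothetical clip exported from Shotcut
        "-c:v", "ffv1", "-level", "3",   # FFV1 version 3: a truly lossless video codec
        "-c:a", "flac",                  # lossless audio
        "lossless_intermediate.mkv",
    ],
    check=True,
)
```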

But if all you need is good-enough quality for the final export, then the smaller file sizes are sufficient for upload to YouTube.

The human eye is actually not very sensitive to color detail, and is also poor at tracking detail that is quickly moving (the detail gets lost in motion blur). This is where a lot of space reduction comes from, plus inter-frame compression. The non-lossless codecs are smart enough to remove detail in places you won’t notice it, which reduces file size. But those tricks also cause generational loss if encoding is repeated multiple times on a clip.
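
A quick back-of-the-envelope calculation shows how much of that saving comes from colour detail alone (8-bit 1080p, before any actual compression is applied):

```python
# Compare an 8-bit 1080p frame with full colour (4:4:4) against the same
# frame with subsampled colour (4:2:0), before any compression at all.
pixels = 1920 * 1080

full_colour = pixels * 3        # Y, U and V stored for every pixel
subsampled = pixels * 3 // 2    # full Y, but U and V at quarter resolution (4:2:0)

print(f"4:4:4 frame: {full_colour / 1e6:.1f} MB")  # ~6.2 MB
print(f"4:2:0 frame: {subsampled / 1e6:.1f} MB")   # ~3.1 MB, half the size already
```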

1 Like

About quality settings and size:
If you have ever worked with Photoshop on digital photos, you have come across saving them as JPEG. JPEG is a compressed file format in which pixels may be changed between the original and the saved file to save space, for example by grouping blocks of similar pixels. You can choose from quality 1 (bad, heavily compressed) to quality 12 (superior, big file size). Depending on the photo and the amount of detail in it, you can end up anywhere from, say, 200 kB to 12 MB for a 12-megapixel photo. That is a factor of 60 in size! You will most probably see a difference if you look at the photos side by side at 100% zoom.
With video you have 30 pictures per second! Imagine how big the difference in compression can be. The compression algorithm is much more complex for video, because you group not only pixels but also frames together (a GOP made of I-, P- and B-frames). H.264 is a complex compression format, and the resulting video size can differ a lot. But your eye won’t catch most differences when it sees 30 frames per second on an average monitor! If you turn the quality down far enough, you will reach a point where the first artefacts appear in the rendering. Compression algorithms and how they work are a science of their own!
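
If you want to see this for yourself on a still photo, a small sketch like the following (assuming Pillow is installed; “photo.jpg” is just a placeholder name) saves the same image at several JPEG quality levels and prints the resulting file sizes:

```python
# Save the same photo at several JPEG quality settings and compare file sizes.
# Assumes Pillow is installed; "photo.jpg" is a placeholder for any photo.
import os
from PIL import Image

img = Image.open("photo.jpg").convert("RGB")

for quality in (95, 75, 50, 25):
    out = f"photo_q{quality}.jpg"
    img.save(out, format="JPEG", quality=quality)
    print(f"quality {quality:>2}: {os.path.getsize(out) / 1024:.0f} kB")
```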

1 Like

I’m sorry I didn’t respond earlier…
I forgot to mention in my first post: the original GoPro file is 1.8 GB. How, then, can the edited (lossless) video be 6.6 GB with no difference in quality?


Screenshot of the original file


Screenshot of the “lossless” file

Please, somebody explain to me what I am doing wrong. Or, probably more correctly, what I don’t understand? :smiley:

Thanks.

Your GoPro video is compressed using a scheme (probably HEVC) called “lossy” because information is discarded. It is OK to discard that information because it is too difficult for most human eyes to see. It is NOT RAW, as some people call it. If it were uncompressed, it would be much larger than 6.6 GB. Each frame of 1080p video is about 0.003 GB. At 30 fps, that means one second is about 0.087 GB. If the video is 5 minutes long, it becomes over 26 GB! The camera could not write to its SD card fast enough, and it would run out of storage far too quickly, if the video were uncompressed or even “lossless.”
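
For anyone who wants to check the arithmetic, here it is spelled out (8-bit 4:2:0, sizes in binary gigabytes):

```python
# The arithmetic from the post, spelled out (8-bit 4:2:0, binary gigabytes).
GIB = 1024 ** 3

frame_bytes = 1920 * 1080 * 1.5            # ~3.1 MB per uncompressed 1080p frame
per_second = frame_bytes * 30              # 30 frames per second
five_minutes = per_second * 5 * 60

print(f"per frame : {frame_bytes / GIB:.3f} GB")   # ~0.003
print(f"per second: {per_second / GIB:.3f} GB")    # ~0.087
print(f"5 minutes : {five_minutes / GIB:.1f} GB")  # ~26
```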

When a video file is decoded, transcoded, or edited, it is converted to uncompressed temporarily in memory and then recompressed using whatever method you choose (see Export > Presets in Shotcut). It might not seem possible for Shotcut to do this when your computer only has, for example, 32 GB of RAM that it must share. That is because it does not decompress the entire file into memory, only several frames at a time. Finally, when you choose to export using a lossless method, the video is still compressed, but in a manner that does not discard information. Rather, lossless compression is somewhat like a zip file’s compression; a zip file cannot discard information, right? That is how it ends up much smaller than uncompressed but still much bigger than lossy compression. Lossy compression technologies are amazing modern marvels of efficiency and have helped facilitate our mobile and wireless revolutions.
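
The zip analogy can be shown in a couple of lines of Python with zlib (the data here is made up and deliberately repetitive):

```python
# "A zip file cannot discard information": a tiny lossless-compression
# demonstration with zlib. The input data is made up and very repetitive.
import zlib

original = b"the same pixel row repeated " * 1000
compressed = zlib.compress(original, 9)

assert zlib.decompress(compressed) == original     # bit-for-bit identical
print(f"{len(original)} bytes -> {len(compressed)} bytes")
```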

I hope that helps, and if you want to learn more you should do a web search on “video compression 101” as there are other explanations that were better planned and prepared than ad hoc discussion replies.

1 Like

@afan, hi, I’m just taking a wild shot here, because it’s difficult to judge from the conversation exactly what the problem is (if there is a problem at all). Maybe you’re not doing anything wrong. Whether you see a difference or not will very much depend on the size and resolution of the screen on which you are trying to see it. Have you tried looking for the difference on, say, your TV screen (assuming you have one of those huge Ultra HD TVs)?

Also bear in mind that if you’re uploading your video to YouTube etc., they will re-encode it for their streaming service (YouTube will probably use AVC or VP9). The YouTube preset in Shotcut does a good job of matching the quality you can expect from that re-encoding, so all other things being equal, just export with the YouTube preset.

Hi afan,

I already tried to explain what happens when you export lossless: it means there are no visible losses, and that means the algorithm cannot achieve as much compression as it could if tiny losses were allowed. You won’t see the diminished quality even if you set quality to 55 or 60%, unless you compare pixel by pixel at 100% zoom!
The original is compressed, as already mentioned above, so you won’t gain any quality by setting the export to any lossless codec! It’s a misunderstanding of how compression algorithms work and what file size to expect. Do some experiments with the quality settings and export small videos - I bet you won’t see any differences until you go below 50% quality, with a much smaller file size!
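
If you want to run that experiment outside Shotcut, a sketch like this (ffmpeg assumed on the PATH, “clip.mp4” is a placeholder; Shotcut’s quality percentage is not the same scale as x264’s CRF, but the idea is the same) encodes the same clip at several quality levels and prints the sizes:

```python
# Encode the same short clip at several quality levels and print the sizes.
# Assumes ffmpeg is on the PATH; "clip.mp4" is a placeholder file name.
import os
import subprocess

for crf in (18, 23, 28, 35):                     # lower CRF = higher quality
    out = f"clip_crf{crf}.mp4"
    subprocess.run(
        ["ffmpeg", "-y", "-i", "clip.mp4", "-c:v", "libx264", "-crf", str(crf), out],
        check=True,
    )
    print(f"CRF {crf}: {os.path.getsize(out) / 1e6:.1f} MB")
```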

Yes, it did. It looks like I have to dig (just a little bit) deeper for my future videos.

Thanks.

I don’t have an Ultra HD TV, but I doubt a TV would show a difference I can’t see on my really good monitors or on my laptop screens.

I got it. It does make sense. Thanks for the suggestion.

I think I wasn’t clear: I understand that the GoPro already compressed the video. And I understand that the video will be compressed even more when exporting from Shotcut, or any other video editor. And, when compressed, it loses quality. That much is clear to me (I hope so :smile:)
So my idea was, by selecting the “lossless” option, to NOT lose more than what was already lost by the GoPro, i.e. to keep the same quality as much as possible. I’m not trying to “… gain any quality by setting the export to any lossless codec…”, just to keep what I already have.

But what you are saying, “… You won’t see the diminished quality even if you set quality to 55 or 60%, unless you compare pixel by pixel at 100% zoom …” - that’s something I didn’t know, and it’s very helpful. Thanks. :slightly_smiling_face: :+1:

OK - I understand what you mean. When using lossy compression for encoding, you will always get slightly more losses with every encode you do, because each picture and GOP is compressed anew. If you only want to do some very simple editing, e.g. cutting pieces out, you can use a tool that works on the original streams in the same codec, without introducing any new losses. It’s called “LosslessCut” - and it works like the name suggests :slight_smile: You have to get used to it; I think there is only a DOS-style version that runs in the command window.
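
As far as I know, LosslessCut drives ffmpeg’s stream copy under the hood; the same kind of cut without re-encoding can be sketched directly with ffmpeg (file name and timestamps below are made up):

```python
# The same idea as LosslessCut, done with ffmpeg's stream copy: cut a piece
# out of a file without re-encoding it, so no new losses are introduced.
# Assumes ffmpeg is on the PATH; file name and timestamps are made up.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "gopro_original.mp4",
        "-ss", "00:01:00",        # start of the piece to keep
        "-to", "00:02:30",        # end of the piece to keep
        "-c", "copy",             # copy the existing streams; cut lands on keyframes
        "cut_no_reencode.mp4",
    ],
    check=True,
)
```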

Anyway, you can use lossless codecs for intermediate steps: if you want to save an intermediate result and check the effect of some filters, this is the way to go without any additional losses. But you will get huge file sizes, as the compression will be minimal. When you have finalized your editing, you export it with a good compression codec, with some slight losses but a medium file size.

I guess you could do up to 10 exports with a lossy codec at a quality of 55-60% without seeing much difference. If you go below that quality setting, you will start to see artefacts. So it’s all a compromise between file size, practicality (e.g. decoding speed on a standard computer) and quality.
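
A rough way to test that claim yourself (ffmpeg assumed installed, file names made up) is to chain re-encodes so each generation starts from the previous output, then compare the generations by eye:

```python
# Chain ten lossy re-encodes so each generation starts from the previous
# output, then compare the generations by eye for artefacts.
# Assumes ffmpeg is on the PATH; file names are made up.
import subprocess

source = "clip.mp4"
for generation in range(1, 11):
    out = f"generation_{generation:02d}.mp4"
    subprocess.run(
        ["ffmpeg", "-y", "-i", source, "-c:v", "libx264", "-crf", "23", out],
        check=True,
    )
    source = out                  # the next generation re-encodes this output
```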

The lossless codecs are “visually lossless”, not lossless - you should not see a difference after one encode, but they are not mathematically reversible back to the video you fed into the encoder.

Because they are not truly lossless, multiple encodes will cause degradation - so try to encode as little as possible.

The main effect you need to worry about is not the output from Shotcut, but the effect the encodes you do have on the final lossy encode performed by YouTube, Vimeo or similar.

That’s not correct, and it’s confusing. ffv1, huffyuv and utvideo are truly lossless, assuming no additional chroma subsampling is applied.

:+1: Thanks.

Apologies - I meant codecs often described as lossless, such as ProRes, DNxHD, etc.
But even with YUV you will end up with losses caused by quantisation, unless floating point is used throughout the conversion.

This topic was automatically closed after 90 days. New replies are no longer allowed.