My encoding speed experiment

I tested encoding speed in a few different ways. The video is two minutes long, and I timed each export with a stopwatch. Here are the methods and results (min:sec):

  1. Hardware encoding (default) 4:20
  2. Parallel Processing 2:37
  3. Hardware + Parallel 2:16
  4. None 4:34

In my experiment, combining hardware encoding and parallel processing gave the best result, but the speedup from hardware encoding alone was not impressive. As you can see, the results for nos. 1 and 4 are not very different.
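
In case anyone wants to repeat a rough version of this comparison from the command line, here is a minimal sketch in Python. It assumes ffmpeg is installed with libx264 and NVENC support; `input.mp4` and the encoder settings are just placeholders, not the exact export settings used above.

```python
# Rough timing comparison: software (libx264) vs. hardware (h264_nvenc) H.264 encoding.
# Assumes ffmpeg is on PATH with NVENC support; "input.mp4" is a placeholder clip.
import subprocess
import time

ENCODERS = {
    "software (libx264)": ["-c:v", "libx264", "-preset", "medium"],
    "hardware (h264_nvenc)": ["-c:v", "h264_nvenc"],
}

for name, codec_args in ENCODERS.items():
    out_file = f"out_{name.split()[0]}.mp4"
    start = time.perf_counter()
    subprocess.run(
        ["ffmpeg", "-y", "-i", "input.mp4", *codec_args, out_file],
        check=True,
        capture_output=True,  # keep ffmpeg's console output out of the way
    )
    elapsed = time.perf_counter() - start
    print(f"{name}: {elapsed:.1f} s -> {out_file}")
```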
Hope it helps.

The quality produced by hardware encoding (especially on older GPUs) is rarely as good as that of software encoding.

It would be useful if you published your hardware specs - CPU and GPU.


I also did a test but got slightly different results: my hardware (GPU) encodes were faster than the software (CPU) ones. But this was a simple single-video, no-filters test at 1080p30.
Mixed results for me. GPU + parallel seems to be the fastest in general, but the file size is quite big compared to CPU.

Oh, and something to keep in mind - file sizes: CPU encodes were 210 MB, hardware (GPU) encodes were 391 MB. That is a huge difference if you want to save space.

Here is another example with some more filters (I expected a bigger difference but I guess the filters I used weren’t that impactful).

120 MB CPU, 196 MB GPU

It depends a huge amount on the input files and the types of filters: some filters are very CPU intensive, some parallelise badly, some get a huge boost from parallel processing, and some are just always inefficient.


The most interesting outcome for me - besides the encoding time - is that the output can be quite different depending on the encoding technique (SW/HW, single/parallel). In my opinion (obviously wrong) the results should always be the same when the same codec is used. Where is my thinking mistake?

I guess I would go for better video quality and a smaller file size and not care too much about the encoding time - unless that is crucial.

That has been my experience in almost every case I have tried. GPU processing is generally more about speed - being able to stream data ASAP - rather than taking the time to encode efficiently for quality and a smaller file size.


H.264 (as an example) only defines the bitstream format of the file. The bitstream spec defines what a decoder needs to do to play back that bitstream. However, there is an unlimited number of ways to create a valid bitstream. The specification does not define what an encoder has to do. This is what allows one encoder to specialize in speed while another encoder specializes in small file size, yet both outputs are playable by the same decoder in a media player.
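
As a concrete illustration - a minimal sketch, assuming ffmpeg and ffprobe are on PATH with both libx264 and NVENC available, and a placeholder `input.mp4` - the two encodes below produce files that are not byte-identical, yet ffprobe reports both as plain h264 that any compliant decoder can play.

```python
# Encode the same clip with two different H.264 encoders, then check that both
# outputs are H.264 while the bitstreams themselves differ byte-wise.
# Assumes ffmpeg/ffprobe on PATH with NVENC support; "input.mp4" is a placeholder.
import hashlib
import subprocess
from pathlib import Path

def encode(output, codec):
    subprocess.run(
        ["ffmpeg", "-y", "-i", "input.mp4", "-c:v", codec, output],
        check=True, capture_output=True,
    )

def codec_name(path):
    # Ask ffprobe which codec the video stream actually uses.
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=codec_name",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        check=True, capture_output=True, text=True,
    )
    return result.stdout.strip()

encode("sw.mp4", "libx264")     # software encoder
encode("hw.mp4", "h264_nvenc")  # hardware (NVENC) encoder

for path in ("sw.mp4", "hw.mp4"):
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()[:16]
    print(path, codec_name(path), digest)
# Both report "h264", but the hashes (and file sizes) differ:
# the spec constrains what a decoder must accept, not how the encoder produces it.
```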


Thanks for clarifying this, Austin - I didn't know that.
I thought the how-to was arbitrary but that the result would always be the same bit-wise.
I guess there are other codecs that are more definite about the output result, no matter how the encoder achieves it?

There may be, but I can’t think of any right off. Such a tightly defined format would prevent the encoder writers from making future performance improvements that are backwards compatible with released decoders. This is why most specs define how to decode but not how to encode.


Processor: Intel(R) Core™ i7-6700HQ CPU @ 2.60GHz
Installed RAM: 40.0 GB (39.4 GB usable)
GPU: GTX 960m
OS: Windows 10 64-bit
