Playback on Linux swamps 4 cores?

During playback, Shotcut 17.11.07 under Ubuntu 17.10 effectively maxes out my Linux machine: an AMD Athlon II X4 635 (4 cores), 16 GB of RAM, and a GTX 1050 Ti. This is straight playback without any effects. Using htop, I can see all four cores loaded anywhere between 75 and 95% each. At the same time, memory use is hovering around 1.5 GB and swap space is unused. There are at least six processes running /bin/shotcut, possibly more.

I launch Shotcut with the shell script included in the distribution - the distribution was unpacked into its own dedicated folder, not split "script here - folder there". It's all very plain vanilla.

The end effect is that playback lurches; video and audio are generally unusable. A Win10 laptop with an Intel® Core™ i7-3610QM CPU @ 2.30 GHz (4 cores, 8 logical processors), 24 GB RAM, and an on-board NVIDIA GeForce GTX 675M with 2 GB handles the same material without a problem.

What’s going on here?

ADDED: I figured out how to count all of the processes. There are 11 processes with the command /bin/shotcut. Six of these appear to be active while the rest appear to be sleeping. Each of the active processes appears to be using 45-55% of CPU resources. How six processes each take up roughly half of the CPU resources is a challenge to my understanding of fractions and percentages. [/grin]
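(For anyone counting along, here's a minimal Python sketch of the same check using psutil - not what I actually ran, just an illustration. Two hedges that may explain the funny fractions: htop lists threads as separate entries by default, and per-process CPU% is measured per core, so on 4 cores the numbers can legitimately sum to 400%.)

[code]
# Sketch: count processes whose name contains "shotcut" and sample
# their CPU usage with psutil (pip install psutil).
import psutil

procs = [p for p in psutil.process_iter(["name"])
         if "shotcut" in (p.info["name"] or "").lower()]
print(len(procs), "shotcut processes")

for p in procs:
    try:
        # cpu_percent is per core: on a 4-core box the sum can reach 400%.
        print(p.pid, p.status(), f"{p.cpu_percent(interval=0.5):.1f}%")
    except psutil.NoSuchProcess:
        pass  # the process may have exited between listing and sampling
[/code]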

It’s ---- supposed to max out the cpu!

RBE, I have a "little" CPU compared to yours, so I created a test scenario to see how my DIY system would do. I use an AMD APU, an A8-7600, which contains 4 CPU cores and 6 GPU cores, and I have 8 GB of RAM, running Mint 17.3. I say it's a little guy because it has a TDP of only 65 watts. It's not considered a speed demon. http://www.cpu-world.com/CPUs/Bulldozer/AMD-A10-Series%20A8-7600.html

I have about 70 minutes of HDV video from a cruise we took in 2010. It was captured as a 62-minute piece and a 7-minute piece. So I started a new project, added the 2 clips to the Playlist and then the Timeline, and did a simple cross-fade between them. Then I exported the two .m2t files as an MP4. The machine has no problems with excessive heat.

Linux Mint has a visual System Monitor app and it shows all 4 CPU cores maxed out at 95-100 percent. So then, in addition to the web browser I am writing in, I opened a second instance of Shotcut and started another project using six 1 GB DV clips. I can browse the web with no discernible slowdown of my system. In Shotcut I always get slowdowns when zooming the Timeline or doing almost anything else because - except for exporting - Shotcut only uses one CPU core; it's not natively multithreaded. I am currently watching this thing grind away using 5.3 GB of 6.7 GB available RAM, with 0.2 percent of swap used.

In conclusion, I would say that if your CPU cores are NOT maxed out, then your system is not using its CPU power effectively. Even maxed out, it should still let you share CPU cycles with other processes smoothly. HTH -=Ken=-

What started this business of looking at the state of the CPU is the fact that the playback is effectively useless. I get a few (couple?) frames with motion and audio, then a pause that's longer than the motion, repeat, repeat, repeat… The same material played back with xine has no problems. VLC does have problems, but then it does the same on my laptop; my faith in VLC for video playback isn't all that great.

That a problem exists is easy to spot. Why the problem exists is way past my skill level.

ADDED: I went back to htop to see what processes are running and whether something's stealing cycles. With Shotcut idle (open but nothing in the timeline), the biggest load was htop itself. Other than that, Linux was just muttering to itself with the usual processes coming and going. Which is what I expect.

It sure is easy to misunderstand others! I reread your original post and found you were describing “playing” files in Shotcut’s Player. Duh.

BTW, I use a Linux app, "System Monitor" version 3.6.0. I probably downloaded it from the Mint repository. It shows a running graph of CPU History, Memory and Swap History, and Network History, in addition to Processes and File Systems. Recommended.

So, I played a DV file in Shotcut and all 4 CPUs were "idling" at around 25 percent, rising to 35 percent when playing. Playing an .m2t file, the 4 CPUs rose to 50 percent. All other activities utilize only one CPU and peg it at 100 percent; redrawing the audio waveform and zooming the Timeline do this and make Shotcut unresponsive until the task completes. It is interesting to see the CPU allocation cycling through the 4 available cores. Whether it's driven by heat I don't know, but some scheduling algorithm keeps moving the single-core task from one core to the next.
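(If anyone wants to watch that core-hopping for themselves, a short Python loop over psutil's per-core counters shows it - just a sketch, not what I used.)

[code]
# Sketch: print per-core CPU load once a second; a single-threaded task
# shows up as one hot column that hops around as the scheduler migrates it.
import psutil

for _ in range(30):
    loads = psutil.cpu_percent(interval=1.0, percpu=True)
    print(" ".join(f"{x:5.1f}" for x in loads))
[/code]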

I just added all 5 of the Old Film filters to the .m2t clip and the CPU utilization went to 60 percent. It still plays pretty smoothly.

Now I’m really confused. Some stuff plays. Some stuff chokes.

The top MediaInfo report is from a source file. The bottom is the same file passed through the editing process and rendered for later use. The source file plays; the rendered file lurches.

And I’m paying the price for editing on two different machines.

PS: System Monitor, which should work, tries to launch and doesn't. The little "I'm working on it" spinning icon spins for 5-10 seconds and, poof! All I've got is htop.

Just a cursory glance here - but the lower image from MediaInfo shows an Overall bit rate TEN times higher. It would be good to investigate why that’s happening.
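(If it helps, that comparison can be scripted instead of eyeballed. A sketch using pymediainfo, a Python wrapper around the same MediaInfo library; the file names are placeholders:)

[code]
# Sketch: pull the Overall bit rate of two files via pymediainfo
# (pip install pymediainfo; requires the MediaInfo library installed).
from pymediainfo import MediaInfo

def overall_bit_rate(path):
    for track in MediaInfo.parse(path).tracks:
        if track.track_type == "General":
            return track.overall_bit_rate  # bits per second
    return None

print("source:  ", overall_bit_rate("source.m2t"))      # placeholder name
print("rendered:", overall_bit_rate("rendered.mov"))    # placeholder name
[/code]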

MP4 High Profile is appropriate for HD video; Baseline and Main aren't good enough. I would choose Constant Bitrate for the codec and Hyper/Lanczos (Best) for Interpolation.

http://blog.mediacoderhq.com/h264-profiles-and-levels/
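If it helps to see those settings spelled out, Shotcut's export runs through the FFmpeg libraries, so a rough command-line equivalent looks something like the sketch below. The file names and the 12M bit rate are placeholders, not a recommendation:

[code]
# Rough FFmpeg equivalent of "High profile + constant bit rate + Lanczos".
import subprocess

subprocess.run([
    "ffmpeg", "-i", "input.m2t",
    "-c:v", "libx264", "-profile:v", "high",
    # Pinning bitrate, maxrate, and bufsize together approximates CBR:
    "-b:v", "12M", "-maxrate", "12M", "-bufsize", "24M",
    "-sws_flags", "lanczos",          # Lanczos interpolation when scaling
    "-c:a", "aac", "-b:a", "256k",
    "output.mp4",
], check=True)
[/code]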

-=Ken=-

The rendered video was done with lossless/H.264 and the default settings, except I used Hyper/Lanczos too.

Obviously this isn’t a winner.

The clip at the bottom was heavily edited to ramp the playback speed up to 3x. I stepped through 1.25, 1.5, 1.75, 2.0, 2.25, 2.50, and 3.0 to avoid a sudden lurch in speed. At the end of the 3x section, I stepped back down the other way to get to 1x. And there are two freeze frames. Like I said, heavily edited.

Unfortunately, there's no way to re-position a set of clips as a group. If I need to move the project's clips, they have to be moved one at a time. So… I saved the project as an MLT, rendered the entire timeline, and then put the result in a fresh timeline. The entire clip can then be re-positioned as needed. And that's where the wheels fell off.
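(Side note: that render step can also be scripted. MLT, the engine under Shotcut, ships a command-line renderer called melt that takes the saved .mlt project directly. A sketch with placeholder names, mirroring the lossless H.264 export mentioned above:)

[code]
# Sketch: render a saved Shotcut project (.mlt) from the command line
# using MLT's renderer, melt. Names and settings are placeholders.
import subprocess

subprocess.run([
    "melt", "project.mlt",
    "-consumer", "avformat:intermediate.mp4",
    "vcodec=libx264", "crf=0",        # crf=0 means lossless x264
    "preset=veryfast",
    "acodec=aac", "ab=256k",
], check=True)
[/code]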

My reason for rendering and using that as one big clip is to avoid any loss. Whatever quality the original clip(s) had must remain, which, to my mind, means heading to whatever profile does that. Obviously lossless/H.264 doesn't work, because the playback is unusable. What's lossless and usable?

ProRes or DNxHD for editing.

But if your machine doesn’t have the grunt, playback “in the editor” will be laggy.

OK, I’ll give them a try.

I think you are attempting something with diminishing returns. I don't want to annoy you by raising the possibility of OCD, but I have found that such obsessions are usually not appreciated by the audience. You will obviously incur some loss of video information, but strangely, I doubt most people will notice! I haven't tried the method Steve suggests, but have a go and please share your process and results. Best. -=Ken=-


OCD probably, but not without reality as a foundation. Going with the knowledge that "entropy will not be denied" (the laws of thermodynamics in five words), anything I can do to minimize entropy is to the good, particularly when using multiple generations of clips to get things right. The little nibbles of lost data become bigger with each lossy iteration.

Most of my projects wind up on YouTube, where a video's entropy rises to high levels, so anything that minimizes what's fed into YT is a step in the right direction. For that matter, it's why I transcode to 4K: YT's compression isn't as harsh at 4K. 1080p playback looks close to the original, and even 4K playback is pretty good. (I get that transcoding adds nothing to the video's actual resolution - it's only a work-around for YT.)
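(The transcode itself is nothing exotic. As an illustration - a sketch, not my exact settings, with placeholder names and bit rate - upscaling 1080p to 2160p with Lanczos before upload looks something like this:)

[code]
# Sketch: upscale 1080p to 4K before uploading, so YouTube applies its
# better 4K-tier compression. Bit rate and file names are placeholders.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "edit-1080p.mp4",
    "-vf", "scale=3840:2160:flags=lanczos",   # Lanczos upscale to 2160p
    "-c:v", "libx264", "-profile:v", "high", "-b:v", "50M",
    "-c:a", "copy",
    "upload-2160p.mp4",
], check=True)
[/code]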

Maybe the audience doesn't notice the difference, at least consciously. Nonetheless, fuzzy, pixelated video is probably a subtle influence when picking one of many videos covering the same topic.

Consider this clip, created only to demonstrate to the vendor two issues with the camera's new beta firmware: the color saturation is much too high, and the image stabilization (supposedly improved from earlier revisions) has taken a step backward. Unfortunately, Smugmug wasn't kind to this clip. At least the two problems are visible, but cleaner video would tell the story more effectively.

https://photos.smugmug.com/General/n-DMz9Qq/New-folder/i-5rDj43G/0/016c75d7/1920/V1-19%20demo-1920.mp4

I can go on past being tiresome, as at present, about the subject. The bottom line is I don't like turning out "meh - close enough" work. Period.

To get really OCD about this subject, I just remembered I have an A/B comparison between my work and someone else’s work on the same stretch of road.

This is the first of the long "check out the road, no wind noise, just music" videos I found.

Skip ahead to 0:50 and this is my version of the same ride. The start is almost the same frame as the original. The lighting is different (lucky me!), but the difference in resolution is, IMHO, there. Alexander's video has places where pixelation shows up. To be fair, IIRC he said he shot at 30 fps.

A heck of a lot of people don't have anywhere near the problems with YouTube results that you appear to believe you have.
I know I don't.
Also, consider this: no matter how high the IQ of the video you upload and how good you perceive the results to be, not everyone will be able to view the intended full resolution. If their internet speed is insufficient, YouTube will down-res the footage to suit their speed, often to 480p.
I feel you are missing out on the joy by over-thinking, over-tinkering, and being overly critical.
Remember, the closer you get to perfection, the easier it is to see how far from it you are. Life is too short. :slight_smile:

I get that some people figure 720 is SHD. But I just don’t want to put that out. OCD, over-thinking, whatever - it’s …um… how I roll.

I bounce back and forth between Resolve and Shotcut. Emotionally, I prefer Shotcut, but I keep banging into what are, IMNSHO, significant problems. Being able to move a group of clips simultaneously is what started this discussion. The only way in Shotcut to move a large collection of clips that are settled (no changes remaining) is to render an intermediate version before continuing to work. Except, as this thread is (mostly) all about, there's apparently no smooth workflow to do this. If I knew I could do edits, render, edits, render, edits, then I could get on with my real job: producing a video.

Right now I've got a timeline with about 15 edits in it (and they're work-arounds for other missing features, including keyframes). Editing a point in the middle of the timeline means moving either the 7-8 clips at the front or the 5 clips at the back, to say nothing of re-aligning the freeze-frame clips on V2. If there's another way to do the work, I'm listening.

BTW, DNxHD 1080p 59.94 works better for me than ProRes. DNxHD has a very slight stutter in playback, whereas ProRes goes back to what's reported in my initial post.

[code]General
Complete name : I:\YouTube\Jaufenpass\Render\Jaufen 01 DNxHD 1080p 5994.mov
Format : MPEG-4
Format profile : QuickTime
Codec ID : qt 0000.02 (qt )
File size : 5.46 GiB
Duration : 1 min 46 s
Overall bit rate mode : Constant
Overall bit rate : 442 Mb/s
Writing application : Lavf57.56.101

Video
ID : 1
Format : VC-3
Format version : Version 1
Format profile : HD@HQ
Codec ID : AVdn
Codec ID/Info : Avid DNxHD
Duration : 1 min 46 s
Bit rate mode : Constant
Bit rate : 440 Mb/s
Width : 1 920 pixels
Height : 1 080 pixels
Display aspect ratio : 16:9
Frame rate mode : Constant
Frame rate : 59.940 (60000/1001) FPS
Color space : YUV
Chroma subsampling : 4:2:2
Bit depth : 8 bits
Scan type : Progressive
Bits/(Pixel*Frame) : 3.540
Stream size : 5.44 GiB (100%)
Language : English

Audio
ID : 2
Format : PCM
Format settings : Little / Signed
Codec ID : sowt
Duration : 1 min 46 s
Bit rate mode : Constant
Bit rate : 1 536 kb/s
Channel(s) : 2 channels
Channel positions : Front: L R
Sampling rate : 48.0 kHz
Bit depth : 16 bits
Stream size : 19.4 MiB (0%)
Language : English
Default : Yes
Alternate group : 1
[/code]

[code]General
Complete name : I:\YouTube\Jaufenpass\Render\Jaufen 01 ProRes.mov
Format : MPEG-4
Format profile : QuickTime
Codec ID : qt 0000.02 (qt )
File size : 6.06 GiB
Duration : 1 min 46 s
Overall bit rate mode : Variable
Overall bit rate : 490 Mb/s
Writing application : Lavf57.56.101

Video
ID : 1
Format : ProRes
Format version : Version 0
Format profile : 422
Codec ID : apcn
Duration : 1 min 46 s
Bit rate mode : Variable
Bit rate : 489 Mb/s
Width : 1 920 pixels
Height : 1 080 pixels
Display aspect ratio : 16:9
Frame rate mode : Constant
Frame rate : 59.940 (60000/1001) FPS
Color space : YUV
Chroma subsampling : 4:2:2
Scan type : Progressive
Bits/(Pixel*Frame) : 3.931
Stream size : 6.04 GiB (100%)
Writing library : fmpg
Language : English
Matrix coefficients : BT.601

Audio
ID : 2
Format : PCM
Format settings : Little / Signed
Codec ID : sowt
Duration : 1 min 46 s
Bit rate mode : Constant
Bit rate : 1 536 kb/s
Channel(s) : 2 channels
Channel positions : Front: L R
Sampling rate : 48.0 kHz
Bit depth : 16 bits
Stream size : 19.4 MiB (0%)
Language : English
Default : Yes
Alternate group : 1
[/code]

The ~50 Mb/s difference makes a big difference in playback.
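The sizes in the two reports check out against the bit rates, and they put a number on the sustained throughput the machine has to decode (just arithmetic on the figures above):

[code]
# Sanity check: bit rate x duration reproduces the file sizes reported
# above, and bit rate / 8 is the sustained decode throughput required.
duration_s = 106  # 1 min 46 s
for name, mbps in (("DNxHD", 442), ("ProRes", 490)):
    size_gib = mbps * 1e6 * duration_s / 8 / 2**30
    print(f"{name}: {size_gib:.2f} GiB, {mbps / 8:.1f} MB/s to decode")
# DNxHD: 5.45 GiB, 55.2 MB/s; ProRes: 6.05 GiB, 61.2 MB/s
[/code]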

Actually, 720p (1280 x 720) is usually known as HD or "HD Ready" resolution.
But that wasn't my point. I'm saying that your best efforts may not be served down to all viewers, depending on their internet connections. I'll also add that even 1080p is wasted on most mobile devices.

Depends on your PC's grunt, as already mentioned.
Neither stutters for me on playback in Shotcut. ProRes will stutter in some media players, but it's an editing format, not a playback format.

Quote from Wiki:
“ProRes is a line of intermediate codecs, which means they are intended for use during video editing, and not for practical end-user viewing. The benefit of an intermediate codec is that it retains higher quality than end-user codecs while still requiring much less expensive disk systems compared to uncompressed video. It is comparable to Avid’s DNxHD codec or CineForm which offer similar bitrates which are also intended to be used as intermediate codecs.”

Got it that neither is intended to be used for distribution.

At this point I’ve literally shut down the Linux machine. I’m looking into putting in a new “mobo” and CPU. Until then, it’s not worth using for editing, which is its main reason for existing.

The answer to my original question is “the video’s bit rate was high enough to bring the machine to its knees.”

Now I know. [/smile]

RBE, you should build a new video editing machine. Use an AMD Ryzen Threadripper CPU with 16 cores and 32 threads!!! That and a suitable motherboard would be the heart of a real dream machine. Then, according to Videoguy.com, put an OS on it that is completely stripped down and never gets connected to the Internet - all business! It would be like the difference between a '57 Chevy and a '57 Chevy funny car - zoom!

As I said earlier, I have never looked into intermediate codecs, but the idea of codecs designed just for editing is kind of novel. I thought they belonged exclusively to Avid studio editing suites.

There are forums on the Internet (videohelp.com, doom9.org) that deal with "archival" restoration of film and early video. They can do some great work preserving and even enhancing old video, but it is painstaking and usually encodes overnight, even on a fast rig. -=Ken=-

Oooohhh, and did I hear you say you’d pay for this nitro-burning tire melter? [/snicker]

When I transcode about 30-35 minutes of HD to 4K, it takes around 6-7 hours. Life in the fast lane…

Looking for a donation, are you?

Well, this has been fun. I am planning to build a new machine for myself, and it will likely be a newer generation of the AMD APU again, since the functionality seems to be so well integrated. I am now using an SSD, which I'm sure helps with jockeying video files around, and next time I'll pay a little extra for an M.2 SSD, which is supposed to give multi-gigabit transfer speeds.

-=Ken=-

Donation? Heck, I’m looking for 100% underwriting and no less. LOL

I use SSDs in other settings, but I'll cope with big HDDs here. If I were making a living, or at least income, from video, I'd take another approach. Since the Linux box is mostly a one-activity machine, I'm not looking to put together the ultimate video-gaming-nuclear-physics machine. I just want my edits without a lot of aggro.


I can't resist… talking about the issue of "doing it right" versus "just good enough": I recommend Robert Pirsig's Zen and the Art of Motorcycle Maintenance. Which, as he says in the intro, is neither about Zen nor motorcycle maintenance. The subtitle explains more: "An Inquiry into Values". More than once, on a job, I've had a "what would Pirsig do" moment. Or, worse, a "Pirsig would never accept this" realization.

Even Pirsig had those moments. He was not pleased(!) with his first, handwritten draft; he burned it and started all over. Not exactly an "eh - close enough" guy.

The book can be tough sledding in places, but the time spent isn’t, IMO, wasted.

For anyone with ZAMM OCD, follow up with Zen and Now by Mark Richardson. It explains a lot about ZAMM. It’s not a happy story.