Why are broadcast submission standards so strict?

Thanks for the reply. I figured the stations were directly targeting their playout servers, but what I don’t understand is the economics.

If I send in an XDCAM file according to their specification, aren’t they still going to take time to QC it and possibly reject it? Unless that process is fully automated, wouldn’t they spend less time accepting ProRes HQ or XQ files and transcoding them to exactly what they wanted on their own?

There are a number of intermediate formats that can handle a few generations of transcoding before any significant artefacts show up. I do like the OCD-level control that a creator has over the playback quality by targeting the playout server directly, but it would be nice to have the option to submit ProRes HQ or XQ as well. This would be especially useful for sending one program (like a commercial) to multiple stations or networks. I guess I don’t understand why they don’t allow that yet.


I can’t speak for the North American stations and have little knowledge of their requirements/guidelines.

In Europe (and other parts of the world), the spec ensures things like correct video levels,
scan, audio levels, colorimetry and so on.
This then ensures consistent quality throughout.

However, exceptions can be made when no better footage exists (e.g. very old footage) or it’s in the public interest, for example a natural disaster where only inferior
footage is available.

Even though many different codecs can be accepted, they will still need some “treatment”
to make them playable on the servers.

In our case, this involves, at a minimum, interlacing cellphone footage and re-exporting as XDCAM.
Where time allows, we also do a bit of color correction, tweak brightness and contrast,
and try to get the audio (if any) to R128.
Many times, especially with cellphone clips, people insist on shooting in portrait mode.
Black borders look bad, so, time permitting, we place a blurred background to fill in the black areas.
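For anyone curious, that blurred-fill treatment (plus a nudge toward R128 loudness) can be sketched in ffmpeg roughly like this. Filenames and parameter values here are my own illustrative picks, not our house spec, and the first command just synthesizes a stand-in portrait clip:

```shell
# Stand-in for a portrait cellphone clip: 3 s of synthetic video plus a test tone.
ffmpeg -y -f lavfi -i testsrc2=size=540x960:rate=25 -f lavfi -i sine=frequency=440:sample_rate=48000 -t 3 -pix_fmt yuv420p -c:v libx264 -c:a aac portrait.mp4

# Stretch-and-blur a copy of the clip to fill the 16:9 frame, overlay the
# original (scaled to full height) centred on top, and pull the audio toward
# the EBU R128 target (-23 LUFS) with loudnorm.
ffmpeg -y -i portrait.mp4 -filter_complex "[0:v]scale=1920:1080,boxblur=luma_radius=20:luma_power=2[bg];[0:v]scale=-2:1080[fg];[bg][fg]overlay=(W-w)/2:0" -af loudnorm=I=-23:TP=-1:LRA=7 -c:v libx264 -c:a aac blurred_fill.mp4
```

The blur radius, loudness targets and codecs are all tweakable; the point is just the two-branch filtergraph.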

In our workflow, this is exactly where we use Shotcut.
It’s quick, easy and does the job perfectly.

If I send in an XDCAM file according to their specification, aren’t they still going to take time to QC it and possibly reject it? Unless that process is fully automated

In some places it is fully automated (UK DPP, US NABA) - the onus of QC is on the creator, who has to run it through QC software and send the certificate along with the asset.

The reason the XDCAM requirement is so tight is that many playout servers use hardware XDCAM decoders, so you have to match the spec exactly for it to play out.
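For a rough idea of what "matching the spec" involves, here is an XDCAM HD422-flavoured export sketched in ffmpeg: 50 Mb/s constant-bitrate MPEG-2 4:2:2, interlaced coding flags, top field first, 48 kHz 24-bit PCM audio, MXF wrapper. This is loosely based on recipes that circulate online, not any station’s actual delivery document, and it is not guaranteed to pass any particular QC; filenames are placeholders and the first command just builds a stand-in source.

```shell
# Stand-in source clip (in practice this would be your edited master).
ffmpeg -y -f lavfi -i testsrc2=size=1280x720:rate=25 -f lavfi -i sine=frequency=440:sample_rate=48000 -t 2 -pix_fmt yuv420p -c:v libx264 -c:a aac input.mov

# Illustrative XDCAM HD422-style export: 50 Mb/s CBR MPEG-2 4:2:2,
# interlaced DCT/ME flags, top field first, 48 kHz 24-bit PCM, MXF wrapper.
ffmpeg -y -i input.mov -vf scale=1920:1080 -r 25 -c:v mpeg2video -pix_fmt yuv422p -b:v 50M -minrate 50M -maxrate 50M -bufsize 17825792 -g 12 -bf 2 -flags +ildct+ilme -top 1 -c:a pcm_s24le -ar 48000 -f mxf output.mxf
```

A real delivery spec will also pin down things like GOP structure, audio channel layout and timecode, which are omitted here.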

There are a number of intermediate formats that can handle a few generations of transcoding before any significant artefacts show up.

There is no lossless codec that doesn’t add artefacts. They may be imperceptible when you view them, but that’s not the same thing. The broadcasters care about the effect on the bitrate required in the DTH encoder. Allowing an extra transcode at the playout stage can lead to a significant increase in the data rate required to send to the home.

Yep, what we do is set flags for certain parameters which are on the edge and an in-house QC operator will then review the footage and pass final judgement.
This obviously incurs a cost and repeat offenders are either sent an invoice or booted from further
submissions until they get it right.
You don’t want to pi&$ the broadcaster off. :rofl:

We use the Omneon playout servers and boy are they picky.

Well, some come pretty darn close, but we don’t accept codecs like HuffYUV, ProRes 4444 and other
huge formats.
It’s complete overkill and just clogs up the system.

Yep, bottom line.

Oops, meant lossy codecs. I’m 8 timezones from home.

This is all very helpful, thanks. I’ve heard horror stories that QC was more of a manual operation at some studios, including running the footage through a Tektronix and complaining about millivolt ringing of white text on a black background even though software QC said it was fine. (The ringing was introduced by the codec in that example and required making the text less than pure white to pass QC.) I thought there would be a higher cost in all that effort than the studio doing their own transcode to their own spec, but maybe not. Especially if QC is automated now. I totally sympathize with targeting the playout server. I’m just wrapping my head around how the file eventually lands in that format and why it has to be that way.

One last question, and I’ll let it rest…

If using Shotcut to export for broadcast, I assume the playout servers require interlaced 1080. But another post said that Shotcut does not interlace from progressive sources (which would include cell phone video). It can signal as interlace, but it’s not a true interlace. Does Omneon only care about signaling and not the actual data or the final look? I’m wondering how you export interlaced from Shotcut using progressive 25/30fps sources and it not be a problem. What is your timeline/project frame rate when doing such an export?


This is where it gets interesting.
SC/ffmpeg will not properly interlace footage but will only signal it as such, and I have found that for
XDCAM, it only gets the signaling right if exported to an MXF container.
It does not get it right with a QT MOV.

Do the Omneon servers accept it? Yes, they think it’s PsF and will happily play it out.
(The great advantage of PsF is that it maintains the original frame rate.)
It is a bit of a compromise, but one we decided to accept, as the time saved converting and tweaking short
clips in SC versus PP or DaVinci far outweighs the negatives.

Keep in mind that there are often only minutes to spare from the time we get some newsworthy cellphone
footage to the time it needs to be played out as breaking news.

I must add that we only do this with short clips that need a very fast turnaround time.
No way will this ever be an option for a whole episode or even a longish (more than a few minutes) insert.

Raw news footage is usually treated differently from pre-produced material such as programs and commercials. This is largely due to time.

Generally you want to get news footage on the air as expediently as possible so there isn’t time to fuss over QC or to dub copies of footage. Some facilities have converters to get footage to a usable/editable/playable state. Bright Eyes is a converter we use which I don’t deal with myself. There are also video “legalizers” which take care of some of this.

A lot has to do with the news value of the footage. If a member of the public happens to capture, say, a plane crash, on his cell phone in portrait rather than landscape mode, due to the news value we’re not going to reject that footage and we will find a way to get it on air.

Programming and commercials are a whole different story. There is generally a post production phase and no rush to get it on air. Broadcasters then shift the burden (in terms of time and expense) for spec-compliance to the producer/distributor/syndicator. Besides compatibility with playback servers, of paramount importance is that material be able to be viewed on the home receiver. That’s why digital TV is stuck on MPEG-2 in the US. Broadcasters are unwilling to change for fear that home receivers will be unable to decode the picture and sound, resulting in a loss of viewers and ratings, and ultimately a loss of jobs for engineering managers who decide to make the change. For the foreseeable future US broadcasters consider 16:9 MPEG-2 to be “good enough”.

From the producer’s standpoint, you don’t want to spend a pile of money on producing your show only to have it be rejected at the QC stage.

Does that answer your questions or raise any new ones?

Does that mean if somebody wanted to produce a long form program for broadcast TV, they couldn’t create a legal export directly from Shotcut?

Spot on.

There still needs to be basic QC such as watching audio and video levels.
But in general yes, QC for things like this is much more relaxed.

Yep, Bright Eyes, Kramer, FOR-A and many others make real-time interlacers and legalizers with built-in
frame buffers for synchronizing.
However, these are only really useful if we are getting live feeds as opposed to a video file
sent to us, as a file would still need to be played out into one of these units, re-ingested and then sent on to final TX.

BTW, I’m a great fan of Bright Eye products and quite disappointed to hear that the company went belly-up.

How brave and lucky do you feel?
I certainly would not do it.

This is a great question which I have been preoccupied with for over a year now.

I’m only familiar with US standards, sorry, where we have a choice between 720p (ABC and Fox) and 1080i (CBS, NBC, CNN and PBS).

First I’ll correct a previous misconception. Yes, ffmpeg is fully capable of converting progressive footage to interlaced, both in terms of setting the flags and converting the actual picture content. I have posted code for this on this board and I can repost it but I will have to hunt for it. Unfortunately, Shotcut doesn’t do this so you’ll have to run an additional conversion pass with ffmpeg. If it isn’t urgent news footage then you can take the time to do this.

I have found that you can actually see the interlacing in the picture content if you play it back in VirtualDub2.

Generally you want to deliver PCM audio at 48 kHz.
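If the source audio is something else (say 44.1 kHz AAC), a resample-and-rewrap pass might look like the following. Filenames are placeholders, the first command just synthesizes a stand-in source, and MOV is used as the output wrapper because MP4 does not carry PCM well:

```shell
# Stand-in source with 44.1 kHz AAC audio.
ffmpeg -y -f lavfi -i testsrc2=size=640x360:rate=25 -f lavfi -i sine=frequency=440:sample_rate=44100 -t 2 -pix_fmt yuv420p -c:v libx264 -c:a aac source.mp4

# Copy the video untouched; convert the audio to 48 kHz 24-bit PCM.
ffmpeg -y -i source.mp4 -c:v copy -c:a pcm_s24le -ar 48000 delivery.mov
```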

OK, I found the magic ffmpeg interlace code in another post.

Here is a simple ffmpeg script which adds interlacing and scales to 1920 x 1080. Some of the filters are necessary to prevent color shifts. ilme and ildct set the flags but do not interlace the picture content. tinterlace takes care of interlacing the picture content. This script copies over any existing audio.

If there is something you don’t understand, ask a question.

ffmpeg -y  -i "input.mp4"   -vf scale=out_color_matrix=bt709:width=1920:height=1080:out_range=limited,tinterlace=4:vlpf  -flags +ilme+ildct  -r 29.97  -color_primaries bt709  -color_trc bt709  -colorspace bt709    -acodec copy  output.mp4

This table may come in handy:


Are you sure your script is correct?
This is what I get:

It works for me. Let it run to completion then examine the output file.

Does the script stall partway through? Be sure to copy the long line of code:

ffmpeg -y  -i "input.mp4"   -vf scale=out_color_matrix=bt709:width=1920:height=1080:out_range=limited,tinterlace=4:vlpf  -flags +ilme+ildct  -r 29.97  -color_primaries bt709  -color_trc bt709  -colorspace bt709    -acodec copy  output.mp4

That is all it gives; a file of zero bytes is created.
Tried with source footage of ProRes 422 (progressive) with PCM audio in a MOV,
and also H.264 with AAC audio in an MP4.
Same story both times.

It does not even start; it errors out right from the start.
Yes, I copied the whole long line.

Do you have ffmpeg.exe in the same folder as the script? Sounds like the script isn’t finding ffmpeg.exe.

ffmpeg is found and runs; that is where the error messages are coming from.
It’s ffmpeg that is not happy with many of the options passed to it.