This case study shows how we found the cause of, and provided a solution for, adverts falling off air on a live broadcast service. One of our customers contacted us because programmes were randomly freezing on air, causing adverts to miss transmission. This was becoming a serious problem for the broadcaster: they had to reschedule the adverts, losing revenue, and, more importantly, their advertisers were starting to lose confidence in them. We started by taking recordings of their stream grabs and homing in on the times when these events were taking place. Their frequency appeared to be completely random, with no obvious pattern.
Developed analysis software
The broadcaster's own engineers had already performed extensive, in-depth analysis to find the source of the problem. The service data was intact, the PID counts were correct, and the PATs and PMTs all matched, with no obvious loss or corruption of data. We extended our analysis software to look for patterns in the data streams at the points where the transmission problems were occurring. After going through the usual tests of checking data rates, PID continuity counters, PID number allocations and table mapping, all to no avail, we decided to look further at the timing components within the elementary streams. For this particular broadcaster, all of the elementary stream PCRs were referenced to a single PCR PID. We analysed the PCR values, checked for jitter and frequency error using our software analysers, and found them to be within spec. Turning to the elementary stream PIDs carrying the PTS data, we noticed that at the point of the transmission freezing there was a large discrepancy between the PCR values and the PTS values in two of the services.
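The core of that check can be sketched as follows. This is a minimal illustration, assuming the PCR and PTS fields have already been extracted from the stream; the function names and the example threshold are ours, not the production analyser's.

```python
# Sketch of the PCR/PTS discrepancy check. PTS is a 33-bit count at
# 90 kHz; the PCR base is in the same units, so the comparison is done
# in 90 kHz ticks. The 700 ms threshold below is illustrative only.
PTS_WRAP = 1 << 33  # 33-bit counter wrap

def pts_pcr_margin_ms(pts, pcr_base):
    """Lead time of a PTS over the current PCR, in milliseconds."""
    diff = (pts - pcr_base) % PTS_WRAP  # handle 33-bit wraparound
    return diff / 90.0                  # 90 kHz ticks -> ms

def flag_discrepancies(samples, max_ms=700.0):
    """samples: list of (pts, pcr_base) pairs taken at packet arrival.
    Returns the indices where the decode margin is implausibly large,
    which is the kind of discrepancy seen at the freeze points."""
    return [i for i, (pts, pcr) in enumerate(samples)
            if pts_pcr_margin_ms(pts, pcr) > max_ms]
```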
Set top box simulators
To analyse further we built a set-top box buffer simulator to test the impact of this timing discrepancy. We soon discovered that the PTS packets were being pushed outside the playout buffer of the set-top box: in effect, the set-top box buffer was underflowing. Taking further stream grabs pre- and post-transmission mux, we found the problem existed post-mux but not pre-mux. Investigation with the manufacturer of the mux showed it had the latest software and was working correctly. We knew something was influencing the scheduling of PIDs within the multiplexer, reducing the priority of the video and audio within the affected services. Removing services from the multiplexer was not an option, as it was on air 24/7.
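The essence of the buffer simulation can be sketched like this, under the simplifying assumption that each access unit enters the buffer at its arrival time and leaves at its PTS. The real T-STD buffer model is more involved; this just captures the under/overflow check that mattered here.

```python
# Minimal set-top box buffer simulation sketch. Units are assumed to
# arrive in order; times are in arbitrary common ticks.
def simulate_buffer(units, capacity):
    """units: list of (arrival, pts, size_bytes), sorted by arrival.
    Returns 'underflow', 'overflow' or 'ok'."""
    buffered = []       # (pts, size) of units awaiting decode
    occupancy = 0
    for arrival, pts, size in units:
        # drain every unit whose decode time has already passed
        while buffered and buffered[0][0] <= arrival:
            occupancy -= buffered.pop(0)[1]
        if pts < arrival:
            return "underflow"   # decode time reached before data arrived
        buffered.append((pts, size))
        occupancy += size
        if occupancy > capacity:
            return "overflow"    # data pushed in faster than it drains
    return "ok"
```

In the freeze case, the video units delayed by the mux turned up after their decode time had passed, which is the first branch above.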
Temporal shift in pids
By building more analysis software we were able to compare the timing components pre- and post-mux, and we discovered that some of the video PIDs had a significant and unexpected temporal shift post-mux. In their place, PIDs from a low-bandwidth data service were evident. Although the low-bandwidth PIDs were set at 38,400 baud, the equipment generating them was sending bursty data. We demonstrated that the short-term data rate of this service was in excess of 10 Mbit/s on a 34 Mbit/s transport stream. The net effect was that the mux was prioritising the data service within the transport stream and holding back the video and audio PIDs, moving them later in the transport stream. Although the mux had taken the temporal shift into account and correctly re-stamped the timestamps, the shift was causing the set-top box to under- or overflow, as its buffers tried to hold on to too many PIDs while waiting for the next PTS. Although all of the PIDs were correct and without data loss or corruption, the temporal shift had caused a timing issue that was not identifiable on the broadcaster's analysers. The data service was bursty because it was generated by a PC, which, with its operating-system overheads, had no control over how the data packets were presented, further complicated by the underlying Ethernet network. Removing or replacing the data carousel was not viable, as it was situated on their client's premises.
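The burst measurement comes down to computing a peak short-term rate over a sliding window rather than a long-term average. A sketch, assuming the packet arrival timestamps for the data PID have already been extracted (the 10 ms window is illustrative):

```python
# Peak short-term bit rate of one PID over a sliding window.
# A nominal 38,400 baud service can still show multi-Mbit/s peaks
# over a small window if the source is bursty. TS packets are 188 bytes.
def peak_bitrate(arrival_times, window_s=0.01, pkt_bits=188 * 8):
    """arrival_times: sorted packet arrival times in seconds.
    Returns the highest bit rate seen in any `window_s` window."""
    peak = 0.0
    j = 0
    for i, t in enumerate(arrival_times):
        while arrival_times[j] < t - window_s:
            j += 1                       # slide window start forward
        bits = (i - j + 1) * pkt_bits    # packets inside the window
        peak = max(peak, bits / window_s)
    return peak
```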
More bandwidth available
We built a data flow-control product, essentially a series of parallel FIFOs, to pre-buffer the data packets from the data carousel and distribute them evenly before presentation to the mux. This completely fixed the problem, and the data flow-control units were rolled out across other services to prevent the same type of problem. We were able to supply reports and graphs showing the pre- and post-fix distribution of PIDs within the transport stream, so the broadcaster could prove to their advertisers that the problem was fixed, further enhancing their client relations. An interesting side effect of this product was that it also reduced costs for the broadcaster. Because there was no longer any burstiness in the incoming data service, the upper limit configured for it in the mux could be reduced, releasing bandwidth in the transport stream. Once the reconfiguration was complete, enough transport stream bandwidth had been freed to place a whole new pay service into the mux.
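The smoothing idea behind the product can be sketched in a few lines: queue bursty input packets in a FIFO and release them at an even cadence matched to the configured service rate, so the mux never sees a burst. This is a behavioural sketch, not the hardware design.

```python
# FIFO-based pacing sketch: bursty arrivals in, evenly spaced
# departures out, at a fixed packets-per-second rate.
from collections import deque

def smooth(arrivals, rate_pps):
    """arrivals: input packet times in seconds, sorted.
    Returns output times, each no earlier than its arrival and at
    least 1/rate_pps after the previous departure."""
    fifo = deque(arrivals)
    interval = 1.0 / rate_pps
    out, next_slot = [], 0.0
    while fifo:
        t = fifo.popleft()
        send = max(t, next_slot)   # wait in the FIFO until our slot
        out.append(send)
        next_slot = send + interval
    return out
```

A burst of three packets arriving together at a 10 pkt/s service rate leaves the smoother at 100 ms intervals, which is exactly the even distribution the mux needed to see.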
This case study shows how we designed and wrote the automation for splicing a bumper onto a programme in the transport stream, including recalculation and re-stamping of the stream timing information. A customer contacted us because they needed an automated way of splicing a bumper onto the beginning of a film in the transport stream, without decoding the video and audio to baseband, for a VOD application. The application was complicated by two factors. First, the programme could not be decoded to baseband and then re-encoded. Second, the PCR of the bumper had to be determined and then computed to sync with, and insert into, the film.
Different 27MHz clock references
The bumper and film were presented as two different files, each with its own unique filename. Both were MPEG-2 compressed and presented as file-based transport streams. Although the VOD application was for a European customer, and hence a 50-fields-per-second frame rate, we could not assume the two frame rates were exactly synchronous from the point of view of the PCR and PTS: the files had been compressed by different systems and hence had different 27 MHz clock references. Splicing the two transport streams together was a relatively simple task, although the PATs and PMTs had to be determined and reinserted into the output transport stream. The PATs from each file would definitely duplicate their PID numbers, but the PMTs might not. To avoid any confusion we remapped the PID numbers in the film to match those of the bumper and removed its PAT, replacing it with the PAT from the bumper. This also removed any problems of PAT/PMT table sequencing and duplication.
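The remap step itself is simple byte manipulation on each 188-byte TS packet. A sketch, assuming packet-aligned input (function name and map are illustrative):

```python
# Rewrite the 13-bit PID of one 188-byte TS packet. The PID occupies
# the low 5 bits of byte 1 and all of byte 2, after the 0x47 sync byte.
def remap_pid(packet, pid_map):
    """Return a copy of `packet` with its PID remapped via pid_map."""
    pid = ((packet[1] & 0x1F) << 8) | packet[2]
    new = pid_map.get(pid, pid)          # leave unmapped PIDs alone
    out = bytearray(packet)
    out[1] = (packet[1] & 0xE0) | (new >> 8)  # keep flags, set PID high bits
    out[2] = new & 0xFF
    return bytes(out)
```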
PCR jitter in spec
The frame sequence should always have been I-frame only; however, to avoid any unexpected results we had to check the GOP to confirm it conformed to I-frame only. This involved decoding the first picture in each GOP sequence to make sure the structure was correct. If it wasn't, the process was aborted, an error file created and an error XML sent back to the automation system. To synchronise the two files we had to build a software flywheel which locked to the PCR in the first file, the bumper. When the start of the film was reached we would sample each PTS and add or subtract the difference from the current value; this was an extremely complex task due to the structure of the PCR. We would then use the current PCR to re-stamp the existing PCR in the film at that time. This proved highly effective, kept PCR jitter within spec and provided a seamless frame cut between the bumper and the film. The spec allowed the audio to be muted twelve frames before the edit point and twelve frames after it, creating a silent transition between the bumper and the film.
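The re-stamping arithmetic reduces to computing one modular offset at the splice point and applying it to every subsequent timestamp. A simplified sketch in 90 kHz ticks (the function names and the 25 fps frame period are illustrative; the real job also handled the 27 MHz PCR extension field, which is omitted here):

```python
# Timestamp offset arithmetic for the splice, with modular handling
# of the 33-bit 90 kHz PTS/DTS fields.
WRAP_33 = 1 << 33

def restamp(ts, offset):
    """Shift a 33-bit timestamp by `offset` ticks, modulo the wrap."""
    return (ts + offset) % WRAP_33

def splice_offset(bumper_end_pts, film_start_pts, frame_ticks=3600):
    """Offset placing the film's first frame one frame period after the
    bumper's last (3600 ticks = 40 ms at 25 fps)."""
    target = (bumper_end_pts + frame_ticks) % WRAP_33
    return (target - film_start_pts) % WRAP_33
```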
Once the process was complete, a report was created and an XML sent back to the automation system to show the splicing had been successful. The software was designed to run under Windows on a stand-alone server. However, recent changes to the broadcaster's infrastructure design have resulted in the application being moved to the cloud, where file splicing now takes place.
This case study shows how we designed and delivered a video scaler with audio insertion into the SDI stream. One of our customers contacted us because they needed a bespoke video scaler solution for a radio transmitter on the back of a broadcast camera operated from a Steadicam. The system used 50i HD video and needed the audio from the on-camera mic to be digitised and inserted into the SDI video stream. This in turn would be connected to the RF back and transmitted to the studio. Our customer needed the down-converted video to provide an SD feed for a local monitor on the Steadicam rig, so the camera operator could see the output of their camera without trying to use the on-camera viewfinder. They had tried to find off-the-shelf solutions, but none was available. Our customer's own development team were swamped with other work and did not have time to work on this design. They were also losing new business, as the number of operators who could use their existing system was limited; to open their product to a wider audience, and hence increase revenues, they had to deliver the video card.
FPGA decimation filters
The application was for live sports and had to be extremely reliable, with a boot-up time of less than five seconds. The space limitation was significant: approximately 100 mm wide, 50 mm high and 10 mm deep. No off-the-shelf solutions were available, and due to the functionality, boot time and size constraints the design had to be a hardware solution. We used an FPGA to take the LVDS RGB from the camera back plane, down-convert one output to SD for the camera operator's viewfinder screen, and serialise the other output to HD-SDI, inserting the audio from the mic and feeding this to the RF transmitter over a short length of coax. We designed the decimation filters in the HD-to-SD down-converter ourselves, using a thirteen-tap recursive filter. Simulation in VHDL is relatively straightforward; however, the amount of video required to adequately simulate the conversion in a test bench was significant, taking half an hour for each pass. We used a 74.25 MHz voltage-controlled oscillator to lock to the video bit clock from the camera and create the 1.485 GHz SDI bit clock.
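To illustrate what horizontal decimation with a thirteen-tap filter does, here is a behavioural sketch in software. The flat averaging coefficients and the convolution structure below are purely illustrative; they are not the shipped filter's coefficients or its recursive hardware structure.

```python
# Behavioural sketch of filter-then-decimate on one line of samples:
# low-pass filter to suppress aliasing, then keep every Nth sample.
def decimate(line, taps, factor):
    """Apply a symmetric FIR `taps` to `line` (edges padded by
    repetition), then keep every `factor`-th output sample."""
    half = len(taps) // 2
    padded = [line[0]] * half + list(line) + [line[-1]] * half
    filtered = [sum(t * padded[i + k] for k, t in enumerate(taps))
                for i in range(len(line))]
    return filtered[::factor]
```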
We designed the circuits and circuit boards, using an assembly contractor to build the boards. Several of the integrated circuits were ball grid arrays and needed specialist companies to mount them for us. Once the initial prototypes were delivered we debugged them, programmed the FPGAs and delivered complete working cards to our customer so they could fully test them in the field. After the first successful field tests we were commissioned to supply a further batch of thirty cards as an OEM supplier. Once our customer had sufficient resource in their development department, we passed the entire design on to them so they could make their own alterations and build more cards.