Ask the experts: Content readiness in time-deferred workflows

Author: Sudeep Bose

Published 1st August 2013

Issue 79 - July 2013

What are the fundamental considerations that must be addressed to ensure the quality of non-real-time content, whether played out from a broadcaster's media server or time-shifted for VOD and other OTT services?
When focusing on the actual content itself, rather than the service, network or other components, the key is the readiness of the content to be used. For instance, a piece of media must be syntactically correct to ensure that it causes no downstream issues in the workflow and that it will play and decode properly. At the same time, content must be free of artifacts, meeting visual quality standards as well as those dictating the correct resolution and bit rate. Compliance with applicable regulations, such as EBU loudness standards or requirements for closed captions/subtitles, is also critical. Finally, the media files themselves must meet the requirements of the production and/or distribution workflow; for example, content must be transcoded correctly so that it can be fragmented properly for an adaptive bit rate (ABR) workflow. As different as the operations of a studio, post house, broadcaster, VOD service provider, or ad insertion facility or department may be, all of these organizations need to ensure that incoming and outgoing content is evaluated and validated against these considerations.
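As a rough illustration of what automated validation of this kind can look like, the sketch below uses FFmpeg's ffprobe to confirm that a file parses, that it carries the expected video codec and resolution, and that an audio stream is present. It assumes ffprobe is installed on the system path; the H.264/1080p targets and the file name are placeholders rather than any real delivery profile, and a production QC pass would check far more (loudness, captions, visual artifacts and so on).

```python
# A minimal readiness check (illustrative only). Assumes FFmpeg's ffprobe is
# installed; the codec/resolution targets are placeholder values, not a real
# delivery profile.
import json
import subprocess

def probe(path):
    """Return ffprobe's JSON description of a media file."""
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True)
    return json.loads(result.stdout)

def check_readiness(path, want_codec="h264", want_size=(1920, 1080)):
    """Collect basic structural issues; an empty list means the file passed."""
    info = probe(path)  # raises CalledProcessError if the file will not parse
    video = [s for s in info.get("streams", []) if s["codec_type"] == "video"]
    audio = [s for s in info.get("streams", []) if s["codec_type"] == "audio"]
    issues = []
    if not video:
        issues.append("no video stream")
    else:
        if video[0]["codec_name"] != want_codec:
            issues.append("unexpected video codec " + video[0]["codec_name"])
        if (video[0]["width"], video[0]["height"]) != want_size:
            issues.append("unexpected resolution %dx%d"
                          % (video[0]["width"], video[0]["height"]))
    if not audio:
        issues.append("no audio stream")
    return issues

# Example (hypothetical file name): check_readiness("delivery_master.ts")
```
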
Why is effective evaluation of non-real-time content so important right now?

Now that file-based workflows have been broadly embraced, facilities have begun working with files in massive volumes, in some cases tens of thousands of pieces of content each month. The numbers keep growing with the explosion of multiscreen services in use and, within those services, the delivery of content to multiple platforms. The emergence of ABR-based service delivery further multiplies the quantity of files required, with six or seven versions needed for every piece of content. Given this nearly exponential growth in the quantity of files being handled, it is nearly impossible for media organizations without automated test and measurement (T&M) and quality control solutions to maintain not only the quality of content throughout the delivery chain, but also the quality of experience expected by consumers today.

What processing is typical of today's time-deferred workflows?

Typically, the media facility in question is dealing either with natively digital content or with content that has been converted to digital. At the beginning of the content ecosystem, content is encoded at a very high resolution, such as 4K. Content creators and owners such as studios then create a mezzanine or house version, bringing content down to 200, 300, or 400 megabits per second for delivery to the next link in the distribution chain. Often, this is an internal department or external post facility that transcodes content to the profiles suitable for service providers such as Netflix, Comcast, AT&T U-verse, DISH Network, and DirecTV. Transcoded from formats such as ProRes or QuickTime, or MXF-wrapped Sony XDCAM or JPEG 2000, content is prepared as MPEG-2 or H.264 in a transport stream container that can be carried across the service provider's network to the consumer. As this shift in format and bit rate is performed, additional steps may be taken to ensure that the audio mix is correct and that captions are present. If the delivery mode is ABR, then multiple versions at multiple bit rates will be created. Throughout this workflow, on both sides of every handoff, content quality or readiness should be validated.
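To make the transcode step concrete, the sketch below shows one way a mezzanine file might be turned into a small set of H.264 transport-stream renditions with FFmpeg, assuming FFmpeg is installed; the three-rung bit-rate ladder and the file names are illustrative assumptions, not any particular service provider's profile.

```python
# A sketch of the mezzanine-to-delivery transcode (illustrative only).
# Assumes FFmpeg is installed; the bit-rate ladder below is an assumption,
# not a real provider profile.
import subprocess

# (width, height, video bit rate) tuples standing in for an ABR ladder.
LADDER = [(1920, 1080, "6000k"), (1280, 720, "3000k"), (640, 360, "800k")]

def transcode_mezzanine(src):
    """Create H.264/AAC transport-stream renditions from a mezzanine source."""
    for width, height, bitrate in LADDER:
        out = "rendition_%dp.ts" % height
        subprocess.run(
            ["ffmpeg", "-y", "-i", src,
             "-vf", "scale=%d:%d" % (width, height),
             "-c:v", "libx264", "-b:v", bitrate,
             "-c:a", "aac", "-b:a", "192k",
             "-f", "mpegts", out],
            check=True)

# Example (hypothetical file name): transcode_mezzanine("mezzanine_prores.mov")
```
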
In terms of maintaining content quality and integrity, what specific challenges does such a workflow present?

One of the biggest pitfalls lies in the transformation of content from 2K or 4K to a mezzanine format, a process by which content is compressed. Encoding via any lossy compression format introduces the possibility that, as a result of data being discarded, images will be marred by blockiness, a loss of sharpness, or unwanted effects caused by mis-ordered reference frames. Such issues may be the result of an encoder/transcoder that is not calibrated properly, or of a sequence in the content that simply proved too much for the system. In other instances, artistic elements or the rapid motion of live-recorded content may present information that throws off the encoder.
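One way to catch this kind of degradation automatically is a full-reference comparison of the encoded output against its source. The sketch below, which assumes FFmpeg is available and that the two files share resolution and frame rate, parses the overall score reported by FFmpeg's ssim filter; the 0.95 review threshold in the usage note is an illustrative assumption rather than an industry figure.

```python
# Flag possible encoder degradation with a full-reference SSIM comparison
# (illustrative only). Assumes FFmpeg is installed and that the encoded file
# and its reference share resolution and frame rate.
import subprocess

def ssim_score(encoded, reference):
    """Run FFmpeg's ssim filter and return the overall 'All:' score."""
    result = subprocess.run(
        ["ffmpeg", "-i", encoded, "-i", reference,
         "-lavfi", "[0:v][1:v]ssim", "-f", "null", "-"],
        capture_output=True, text=True)
    # The filter logs a line such as "SSIM Y:... U:... V:... All:0.9876 (...)".
    for line in result.stderr.splitlines():
        if "All:" in line:
            return float(line.split("All:")[1].split()[0])
    raise RuntimeError("ssim score not found in FFmpeg output")

# Example threshold (assumed, not a standard):
# if ssim_score("rendition_1080p.ts", "mezzanine_prores.mov") < 0.95: flag for review
```
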
Audio presents a second significant challenge. Frankly, one of the things that our experience and our discussions with customers reveal is that technology has come far enough that handling video is becoming easy. Audio, on the other hand, is the source of many problems. The different ways in which audio must be carried, and where the stereo pair or 5.1 channels in multiple languages must be placed, are themselves an issue. More and more audio content is being delivered along with the source video, and the post facility must figure out, based on the order form dictating delivery, how to create the appropriate mix from all that content. The manifest may specify the kind of video and the target bit rate, resolution, and format, along with an English stereo pair and 5.1 channels, and perhaps the same in Spanish. As all of this work is done, one critical task is ensuring that all of the content is there and in the right sequence. If audio elements are not in the right sequence, then the facility may have issues downstream. Other factors, such as loudness, low audio, muting, and clipping, also dictate that content be tested throughout its transition to a ready state.
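Loudness, at least, lends itself well to automated measurement. The sketch below, assuming FFmpeg is installed, runs its ebur128 filter and keeps the last integrated-loudness figure it reports, which can then be compared against the EBU R128 target of -23 LUFS; the one-LU tolerance shown in the usage note is an illustrative assumption.

```python
# Measure integrated loudness with FFmpeg's ebur128 filter (illustrative
# only). Assumes FFmpeg is installed on the system path.
import subprocess

def integrated_loudness(path):
    """Return the programme's integrated loudness in LUFS."""
    result = subprocess.run(
        ["ffmpeg", "-i", path, "-filter_complex", "ebur128", "-f", "null", "-"],
        capture_output=True, text=True)
    # The filter logs running values and a final summary; keep the last "I:" figure.
    loudness = None
    for line in result.stderr.splitlines():
        if "I:" in line and "LUFS" in line:
            loudness = float(line.split("I:")[1].split("LUFS")[0].strip())
    if loudness is None:
        raise RuntimeError("no loudness measurement found in FFmpeg output")
    return loudness

# Example check against the EBU R128 target of -23 LUFS (tolerance assumed):
# if abs(integrated_loudness("programme.ts") + 23.0) > 1.0: flag for correction
```
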
Like audio, ancillary data presents plenty of opportunity for error. Whether it supports customer-facing elements such as captions/teletext/subtitling or provides signaling information such as time codes, this data must be present and correct if it is to enable the key functions it supports.
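Presence checks for these elements can also be automated. The sketch below, assuming ffprobe is available, simply lists any subtitle tracks the container exposes; captions embedded in the video elementary stream (CEA-608/708, for example) would need a deeper inspection, and the file name is a placeholder.

```python
# List subtitle/teletext tracks visible in the container (illustrative only).
# Assumes FFmpeg's ffprobe is installed; captions carried inside the video
# elementary stream need a deeper check than this.
import json
import subprocess

def subtitle_streams(path):
    """Return the index and codec of each subtitle stream ffprobe can see."""
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "s",
         "-show_entries", "stream=index,codec_name",
         "-print_format", "json", path],
        capture_output=True, text=True, check=True)
    return json.loads(result.stdout).get("streams", [])

# Example (hypothetical file name):
# if not subtitle_streams("delivery_master.ts"): flag "no caption/subtitle track"
```
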
With respect to T&M and quality control for time-deferred workflows, what trends are you seeing today?
File-based operations offer many benefits, including greater automation, less user intervention, simpler content delivery, fewer errors, and a less expensive approach to media storage, processing, and distribution. The migration to file-based operations continues to be a major industry trend. The VOD sector and other OTT services, however, have relied on file-based distribution for quite some time. What's happening in those operations is a jump in the volume of content being handled.
Cable operators that previously boasted a couple of thousand hours of content in their VOD libraries are today offering upwards of 10,000 hours of content, and in some cases hundreds of thousands of hours. What that means is that this colossal content store must be made ready for use in the provider's file-based environment and maintained to ensure ongoing readiness.
Companies such as Amazon and even Intel, which were never before in the business of providing media, have seen the viability of streaming services and have gotten into the game. So have traditional broadcast networks, such as the BBC, which offers the very popular BBC iPlayer TV service. These new and successful business ventures have proved that OTT is not only viable, but also convenient. It doesn't require over-the-air service or a dedicated physical infrastructure; rather, it can ride over another company's broadband delivery infrastructure and service. These factors are causing the growth of OTT to accelerate. At the same time, streaming technologies have emerged and been refined to address the challenge of providing a quality user experience despite bandwidth limitations and network fluctuations. ABR started with Microsoft's Smooth Streaming, Apple followed with its HLS technology, and then others introduced their own formats. Now MPEG-DASH is coming into play, looking like the next revolution in this industry. The potential of DASH is a popular topic today, and this was evident at the Entertainment Technology in the Internet Age (ETIA) conference, hosted by SMPTE and the Stanford Center for Image Systems Engineering (SCIEN) at Stanford University, which featured a dynamic and well-attended panel on Internet media delivery formats.
As companies get better and better at distributing higher-resolution content, whether from a broadcast facility at a network operations center or via OTT services over broadband, one truth remains constant: content is king. At the end of the day, all the technology enabling these delivery models is exciting, not only for the experience it makes possible but also because it opens the door to new providers and players; even so, it is the content underpinning all these services that drives them forward. People want quality content, and they expect to experience it at a high quality. Automated evaluation and validation of content thus plays an essential role in the time-deferred workflows supporting today's broadcast and OTT service models.

© KitPlus (tv-bay limited). All trademarks recognised. Reproduction of this content is strictly prohibited without written consent.