Since the advent of digital technologies, the way audio and video are controlled and transported around broadcast facilities has been constantly evolving in the search for greater efficiencies and improved workflows. Today, broadcasters are demanding more and more versatility and integration from their equipment, and in turn, the capabilities now available are leading broadcasters to reassess how they design studio complexes.
Sources have long been shared between control rooms by using splitters, distribution amplifiers, tie-lines and miles of cabling. As production requirements have become more demanding, such methods become increasingly impractical. Traditional routers provide a practical way to deal with larger numbers of sources and destinations, allowing users to change signal flow on the fly, and en masse, without having to hunt around on physical patch-bays, but they do not address the cabling issue and have finite capabilities.
The ever increasing requirements of modern broadcasting demand scalable solutions. Traditional routers, distribution amplifiers, format convertors, tie-lines and even physical patch-bays themselves are now being replaced by modern, networked router systems, capable of providing plug-and-play convenience to scale up when required. I/O modules can be located remotely from the routers, passing large quantities of signals over a fiber or Cat5 cable, reducing overall cabling costs and installation and setup times. Networked I/O and routers break the traditional link between control room and studio, allowing for much more flexibility and ease in the planning of studio resource management, and providing a solution capable of meeting whatever the future may require. To use recent UK examples, Salford's Media City, BBC's West 1, and Sky's Harlequin are all designed around a scalable networked router, I/O and mixing console system.
Transporting audio, video or any data over a network requires that the data is sent in packets. Rather than sending a constant stream of audio or video data from point A to point B, the stream is chopped up into manageable sized segments and each packaged up with the destination and source address, allowing multiple streams of data with differing destinations to be sent out over a single cable. In order for this to work seamlessly, with high quality audio, without buffering or interruptions, it needs to be very quick and network traffic needs to be predictable or managed – this can be a problem across shared use networks.
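The chop-and-reassemble process described above can be sketched in a few lines of Python. This is purely illustrative: the header fields and sizes are assumptions for demonstration, and do not correspond to any real broadcast transport protocol.

```python
# Illustrative sketch: chopping a continuous stream into addressed packets,
# then rebuilding it at the far end. Header fields are assumptions only.

def packetize(stream: bytes, payload_size: int, src: int, dst: int):
    """Split a byte stream into (header, payload) packets."""
    packets = []
    for seq, offset in enumerate(range(0, len(stream), payload_size)):
        payload = stream[offset:offset + payload_size]
        # Each packet carries source/destination addresses plus a sequence
        # number so streams can share one cable and still be reassembled.
        header = {"src": src, "dst": dst, "seq": seq, "len": len(payload)}
        packets.append((header, payload))
    return packets

def reassemble(packets):
    """Rebuild the original stream, tolerating out-of-order arrival."""
    ordered = sorted(packets, key=lambda p: p[0]["seq"])
    return b"".join(payload for _, payload in ordered)

audio = bytes(range(256)) * 4            # 1 KiB stand-in for an audio stream
pkts = packetize(audio, 256, src=1, dst=7)
assert reassemble(pkts) == audio         # lossless round trip
```

The need for buffering arises exactly here: if packets arrive late or out of order, the receiver must hold enough of them back to reassemble the stream in time.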
Using an Ethernet infrastructure, communications protocols can be split into three main groups: those that operate on the physical layer, those that operate on the data link layer (i.e. within the Ethernet frame), and those that operate on the network layer.
Layer 1 protocols use Ethernet wiring and signalling components but not the Ethernet frame structure. They are very cost effective and reliable because of this, but commercial Ethernet components such as switches, hubs or media converters cannot be used, so topologies can be limited.
Layer 2 protocols encapsulate audio data in standard Ethernet frames, and most can make use of standard Ethernet hubs and a variety of topologies, e.g. stars, rings and daisy chains. Calrec's Hydra 1 is an example of this.
Layer 3 protocols encapsulate audio data in standard IP packets rather than MAC frames. This can be less efficient as the segmentation and reassembly is more processor intensive, which may mean fewer channels and higher latency or more expensive hardware.
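The efficiency difference between the layers comes down to per-packet overhead. A back-of-envelope comparison (with an assumed 1024-byte audio payload; real systems add preamble, inter-frame gap, VLAN tags, RTP headers and so on) makes the point:

```python
# Rough per-packet efficiency comparison between layer 2 and layer 3
# encapsulation. Payload size is an assumption for illustration.

PAYLOAD = 1024               # bytes of audio per packet (assumed)
ETH_OVERHEAD = 14 + 4        # Ethernet header + frame check sequence
IP_UDP_OVERHEAD = 20 + 8     # IPv4 header + UDP header

layer2_total = PAYLOAD + ETH_OVERHEAD                     # 1042 bytes
layer3_total = PAYLOAD + ETH_OVERHEAD + IP_UDP_OVERHEAD   # 1070 bytes

print(f"Layer 2 efficiency: {PAYLOAD / layer2_total:.1%}")  # 98.3%
print(f"Layer 3 efficiency: {PAYLOAD / layer3_total:.1%}")  # 95.7%
```

The wire overhead itself is modest; the bigger layer 3 costs in practice are the extra processing for segmentation and reassembly, as noted above.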
Hydra2 was introduced in 2009 with the launch of the Apollo platform of consoles and is an 8192 x 8192 router which is integral to the console. With Hydra2, you can take a single mic input and send it out to more than 8000 outputs if you want. It’s a TDM-type router, capable of true ‘one to many’ routing, and although it uses the physical layer of Gigabit Ethernet technology (a tried and tested technology with an affordable chip set), the Hydra2 protocol itself is a considerably more efficient system, which is how 512 bidirectional connections can be packed down each link.
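A quick sanity check shows why 512 bidirectional channels per gigabit link is plausible. Assuming 48 kHz sampling and 32-bit words on the wire (assumptions for illustration; the actual Hydra2 framing is proprietary):

```python
# Back-of-envelope check that 512 audio channels fit on a gigabit link.
# 48 kHz and 32-bit words are assumptions; Hydra2's framing is proprietary.

channels = 512
sample_rate = 48_000         # samples per second (assumed)
bits_per_sample = 32         # word length on the wire (assumed)

payload_bps = channels * sample_rate * bits_per_sample   # 786,432,000
print(f"{payload_bps / 1e6:.0f} Mbit/s payload on a 1000 Mbit/s link")
```

That leaves headroom for framing and control data in each direction, which is consistent with using the Gigabit Ethernet physical layer while replacing the frame structure with something leaner.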
Networked infrastructures like Hydra2 are inexpensive, easy to install and very simple to understand, in that sharing inputs and outputs across any number of mixing consoles is an easy and natural process. Networking has shifted the focus from individual end points to the network itself, where the only limitation is the imagination of the network designer.
Whilst manufacturers may be protective of their proprietary network transportation protocols which can pass ever increasing quantities and quality of video and audio data with negligible latency, many are responding to user demands by incorporating cross-platform control protocols, allowing their systems to be controlled in depth by other manufacturers’ equipment. This allows for example, a high quality dedicated audio processor to be controlled by the user of a video switcher, via an interface that is familiar to them.
At NAB this year Calrec demonstrated this, showing Hydra2's potential to work with third-party clients through several different protocols. The SW-P-08 protocol was put into practice with a variety of third-party router panels, including Evertz, Nvision, Snell, and L-S-B's Virtual Studio Manager (VSM), to demonstrate remote control over input source to output destination cross-point routing, and control over mixing console DSP I/O routing. The EMBER protocol was demonstrated via VSM, enabling memory loads, loading and removing alias files, viewing and editing Hydra2 I/O box and port labels, SMPTE 2020 metadata insertion, and selective muting of SDI outputs.
The most prolific of these protocols is SW-P-08, or the “General Remote” protocol.
It was first developed by Pro-Bel in 1988 (by a team of engineers including Roger Henderson, now Managing Director of Calrec), and it has since had a wide uptake by router and controller designers, allowing their equipment to control, or be controlled by, other manufacturers' equipment. Input sources and output destinations that are to be controlled in this way are assigned unique SW-P-08 IDs within the router, which are mapped and labelled accordingly in the controller.
SW-P-08 controllers can route any source to any destination across the Hydra2 network that they have been given access to. As well as physical Hydra2 input and output ports, the H2O GUI and SW-P-08 controllers can also route to and from Hydra Patchbays, giving access to console DSP outputs as sources, and the ability to change sources feeding control surface faders.
Calrec's Hydra2 routers allow 1-to-n cross-point matrix routing of sources to destinations, without using up DSP or control surface space. Using SW-P-08, control over cross-point routing can be carried out either from a console, from a standalone PC running the Calrec H2O network administrator GUI, or via third-party controllers supporting the SW-P-08 protocol.
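The defining property of 1-to-n routing is that each destination carries exactly one source, while a source may feed any number of destinations. A minimal model of such a matrix might look like this (a hypothetical sketch; this is not Calrec's implementation, nor the SW-P-08 wire format):

```python
# Minimal model of a one-to-many cross-point matrix: each destination
# listens to exactly one source, but a source may feed many destinations.
# Hypothetical sketch only; not Calrec's implementation or SW-P-08 itself.

class CrosspointMatrix:
    def __init__(self, num_sources: int, num_dests: int):
        self.num_sources = num_sources
        self.num_dests = num_dests
        self.route = {}                      # dest id -> source id

    def connect(self, source: int, dest: int):
        if not (0 <= source < self.num_sources and 0 <= dest < self.num_dests):
            raise ValueError("id out of range")
        self.route[dest] = source            # replaces any previous source

    def destinations_of(self, source: int):
        return sorted(d for d, s in self.route.items() if s == source)

matrix = CrosspointMatrix(8192, 8192)
for dest in (10, 11, 500):                   # one mic feeding three outputs
    matrix.connect(source=3, dest=dest)
assert matrix.destinations_of(3) == [10, 11, 500]
```

Because the matrix state lives in the router, a console, the H2O GUI and a third-party SW-P-08 controller can all manipulate the same cross-points without consuming console DSP.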
Although there is a very wide uptake of SW-P-08, it is still not an official standard, and there may be variations in different manufacturers' interpretations.
L-S-B’s Virtual Studio Manager (VSM) supports both SW-P-08 and EMBER. The EMBER protocol is a sophisticated data exchange mechanism that has potential for controlling many functions across varied equipment types.
A relative newcomer, EMBER has exciting potential for interfacing a wide range of equipment types and their control parameters with third-party GUIs and hardware panels. Using EMBER, the third-party controller can change the active user memory on any control surface on Calrec's Hydra2 network, load pre-defined I/O sets for use by each console, insert SMPTE 2020 metadata into SDI output streams, mute audio channels within SDI output streams, and edit I/O port labelling.
The Calrec Serial Control Protocol (CSCP) allows for remote control over mixing console operational functions by third-party systems such as video switchers and production automation systems.
Several broadcast equipment manufacturers provide serial control protocols that are compatible with CSCP. Currently controlling Calrec audio mixers in live on-air applications are Ross switchers, Sony's ELC, Snell's Kahuna, Mosart and Grass Valley's Ignite.
CSCP allows third-party controllers access to 192 paths on each control surface, giving them the ability to display the path types assigned to faders, control and display path information, control and display the status of path Cut / On, control and display the status of path Pre-Fader Listen, and more.
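The kind of per-path state a CSCP-style controller mirrors can be modelled roughly as follows. The field names and representations here are assumptions for illustration; the actual CSCP message format is documented by Calrec.

```python
# Rough model of the per-path state a CSCP-style controller might mirror:
# fader level, cut/on, and pre-fader listen. Field names and the dB
# representation are assumptions; this is not the real CSCP format.

from dataclasses import dataclass

@dataclass
class PathState:
    fader_db: float = -100.0   # fader level in dB (assumed representation)
    cut: bool = True           # True = path cut (muted)
    pfl: bool = False          # pre-fader listen

class ConsoleMirror:
    MAX_PATHS = 192            # paths accessible per control surface

    def __init__(self):
        self.paths = [PathState() for _ in range(self.MAX_PATHS)]

    def set_cut(self, path: int, cut: bool):
        self.paths[path].cut = cut

    def set_fader(self, path: int, level_db: float):
        self.paths[path].fader_db = level_db

mirror = ConsoleMirror()
mirror.set_cut(0, False)       # open the path
mirror.set_fader(0, 0.0)       # fader to unity
assert mirror.paths[0].cut is False and mirror.paths[0].fader_db == 0.0
```

This is what lets, say, a production automation system open a fader on the audio console the moment it cuts to the matching camera.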
Control, integration and commonality. Using these protocols, broadcasters can achieve far more with much less hardware, and design flexibility into their audio systems for years to come. The playing field has definitely shifted and technology is now meeting the demands placed on network infrastructures. Infrastructure designers should be giving greater consideration to their networks as a whole, and how the flexibility of those networks can allow greater efficiencies for operational workflows.