
camps huddled around separate racks of gear, one muttering ‘it’s a broadcast problem’ while the other insists ‘it’s an IT problem.’ This is more likely to happen when the operator tries to build a monitoring system from a collection of broadcast-specific tools and IT-specific tools. Even if a motley collection of tools is gathered under the umbrella of a central NMS collating all the alarms, whenever a problem occurs engineers have to dive deep into the specific tool-silo where it has surfaced in order to get a more detailed picture, and that detail cannot easily be related to any other part of the delivery chain. The overview is lost, and with it the ability to relate symptoms to root causes (which may lie elsewhere, in a different technology).

The main challenge for manufacturers in the test and monitoring sector is to provide toolsets that offer homogeneity, allowing users to navigate across the boundaries of the different technologies without having to learn new skills as they do so. To draw another analogy, we are now used to driving across borders with hardly any inconvenience: we may not speak the local language, but we don’t have to learn a whole new set of road signs or exchange our steering wheel for a joystick. Our satnav guides us to our destination, even if it lies in Athens or Ingolstadt rather than our home town. We are travelling in different territory, but we still have enough familiar tools to arrive safely. A monitoring system should do something similar, allowing staff from both broadcast and IT backgrounds to cross borders with a degree of confidence, knowing that they can follow the same road and interpret the landscape using familiar conventions.

It is therefore extremely important to provide a highly visual monitoring interface that helps users grasp the overview, identify trends, and understand easily what the different chunks of data are and what they represent.
Instead of an abstract window on the data – screens full of numbers and one-line errors – the interface should give the user a clear and easy sense of how the delivery chain behaves, almost helping the user develop a feel for how it works, without being overwhelming or baffling. Even when it is necessary to delve deep into detail, that detail is much easier to interpret when it is presented in a consistent, familiar and graphical form – and always correlated with the overview, so that symptoms and causes can be quickly linked.

Where should we monitor our OTT operation? This question has an apparently simple answer: monitoring data should be gathered from every point in the chain where the condition of the stream could potentially be affected. The answer is only apparently simple, because not all monitoring solutions are able to do this. To provide complete assurance of service quality, a monitoring system should be able to give you data from your content ingest point, from your origin servers, before entry to the CDN and after the CDN, and from the end-user devices. An advanced, highly integrated monitoring solution tells the operator what is happening at the origin servers, how well each CDN is performing, what the 3G or 4G performance is like, and correlates all the data into a coherent whole, so that QoS and QoE are both part of the picture.

TV-BAY MAGAZINE: ISSUE 84 DECEMBER 2013 | 65
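The correlation idea described above – probes at every point in the chain, with downstream symptoms traced back to the first upstream point where the stream degrades – can be sketched minimally. All names here (`CHAIN`, `Probe`, `locate_fault`) are hypothetical illustrations, not the API of any real monitoring product:

```python
from dataclasses import dataclass

# Hypothetical monitoring points, in delivery-chain order, matching the
# text: ingest, origin, CDN entry, CDN exit, end-user device.
CHAIN = ["ingest", "origin", "cdn_in", "cdn_out", "device"]

@dataclass
class Probe:
    point: str       # where in the chain this probe sits
    stream_ok: bool  # simplified QoS verdict at this point

def locate_fault(probes):
    """Return the first point in the chain where the stream degrades.

    Correlating per-point QoS like this lets a symptom seen downstream
    (e.g. stalls on a device) be traced to an upstream root cause.
    """
    status = {p.point: p.stream_ok for p in probes}
    for point in CHAIN:
        if not status.get(point, True):
            return point
    return None  # chain healthy end to end

# Example: the device reports errors, but the fault first appears
# at the CDN ingress, so that is where to start investigating.
probes = [
    Probe("ingest", True),
    Probe("origin", True),
    Probe("cdn_in", False),
    Probe("cdn_out", False),
    Probe("device", False),
]
print(locate_fault(probes))  # -> cdn_in
```

A real system would of course correlate richer metrics (bitrate, error rates, rebuffering events) and timestamps, but the principle is the same: a single ordered view of the chain, rather than one silo per technology.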