Live TV viewing habits around the world are changing. This is especially true of audiences under 30, who prefer to consume Live TV on their mobile devices rather than on a large-screen TV. One of the major reasons for this change is the lack of options for personalisation of content on Linear TV.
Linear broadcast can capture a signal and get it into your living room in just 3 to 10 seconds, but it delivers only a one-dimensional viewing experience. While multiple cameras capture the live action on the field, in most cases only one camera angle is broadcast at any time. Viewers can neither personalise their viewing experience nor watch an event from their preferred perspective. The proliferation of OTT streaming services has solved this issue to a certain extent: OTT services can now deliver not just the main broadcast feed but also feeds from multiple cameras at the event, allowing viewers to select the camera angles, soundtracks or personalised information of their choice. However, when viewers watch the main feed on their TV and an additional camera on their mobile device, the user experience is still poor because the streams are not in sync. One of the main reasons is latency: OTT streaming latency ranges from 10 seconds to a minute, or longer.
Content that isn’t properly synced is annoying for audiences, which is why many broadcasters have stopped delivering multiple camera angles to the second screen. From a broadcaster’s perspective, this means not only failing to deliver the experience that the customer wants; a large amount of content that has already been captured is simply wasted. Additionally, broadcasters still need to run two independent workflows to cater to both linear and OTT viewers, adding to their production costs.
There are several solutions to reduce streaming latency. Older streaming protocols like RTMP offer relatively low latency but have other limitations when it comes to deploying scalable infrastructure or supporting easy Adaptive Bitrate Streaming. Additionally, RTMP has limited to no support for newer codecs such as H.265. HTTP protocols like HLS were introduced to solve these issues but were not built with low latency in mind. In recent years this limitation in HTTP streaming has been overcome by introducing standards like CMAF, which utilises existing HTTP streaming protocols and specifies all parameters needed across the pipeline for low latency streaming. Solutions not based on HTTP streaming (such as WebRTC) deliver low latency but face issues with scalability and ease of deployment, as conventional CDNs cannot be used and broadcasters might need to adopt proprietary technology. Depending on the format used, one also has to consider player-side capabilities.
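The latency gain from chunked CMAF can be made concrete with a rough back-of-the-envelope calculation. The sketch below uses illustrative figures only (a 6-second segment, a 0.5-second chunk, a typical three-part start-up buffer), not measured values from any specific deployment:

```python
# Back-of-the-envelope live-edge latency comparison:
# classic segment-based HLS vs chunked CMAF delivery.
# All durations are illustrative assumptions, in seconds.

def classic_hls_latency(segment_duration_s, buffered_segments=3):
    """Players typically buffer a few full segments before starting,
    so the live edge trails by roughly segments * duration."""
    return segment_duration_s * buffered_segments

def chunked_cmaf_latency(chunk_duration_s, buffered_chunks=3):
    """With chunked transfer encoding the player can start on partial
    segments, so the buffer is measured in small chunks instead."""
    return chunk_duration_s * buffered_chunks

print(classic_hls_latency(6.0))   # 18.0 -> tens of seconds behind live
print(chunked_cmaf_latency(0.5))  # 1.5  -> near-live playback
```

The point of the comparison is that the delivery pipeline is the same HTTP/CDN infrastructure in both cases; only the granularity of what the player is allowed to fetch and buffer changes.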
Using low latency solutions solves some problems related to good Multiview experiences, but it does not solve all sync-related issues, because it overlooks a key variable: distribution latency. The distribution methods used by linear TV - terrestrial, satellite, cable and so on - all have different levels of latency, and it is impossible to know in advance which distribution method each viewer is using. Additionally, different viewing hardware can add its own latency, even within the same household and on the same distribution network.
The most commonly available solutions deliver no sync at all, rely on manual time alignment to achieve some sort of sync, or require modification of content that disrupts workflows. They oblige broadcasters to adopt proprietary CDNs and media players, all of which limits scalability or increases the effort and cost involved in managing the system once a certain number of users has been reached. This is one of the primary reasons for the limited success of these solutions.
At NativeWaves, we believe that using audio sync measurements to automatically detect the position of the broadcast signal on the main device is the ideal way to sync multiple video, audio and data streams across multiple devices. We also believe that the process should run automatically, should be easy to use for the end consumer, and should not require the broadcaster to integrate anything extra into the main playback path. As this solution uses audio from the main device as a reference, it accounts for all latencies at every stage and delivers a truly synced experience.
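The core idea of audio-based position detection can be illustrated with a simplified sketch. The naming and parameters below are our own for illustration and do not reflect NativeWaves' actual implementation: a short capture from the second device's microphone is slid along a reference audio track, and the lag with the highest cross-correlation score marks where the main device currently is in the broadcast:

```python
import random

# Illustrative model only (hypothetical names, not a production API):
# locate a short microphone capture inside a reference audio track via
# brute-force cross-correlation, then convert the best-matching sample
# index to a time offset in seconds.

def estimate_offset(reference, capture, sample_rate):
    """Return the offset (seconds) at which `capture` best matches `reference`."""
    best_score, best_index = float("-inf"), 0
    for lag in range(len(reference) - len(capture) + 1):
        window = reference[lag:lag + len(capture)]
        # Correlation score: large when the two waveforms line up.
        score = sum(r * c for r, c in zip(window, capture))
        if score > best_score:
            best_score, best_index = score, lag
    return best_index / sample_rate

# Toy example: a 0.5 s capture taken 1.2 s into a 3 s noise "broadcast".
sr = 1000  # unrealistically low sample rate, kept small for the demo
random.seed(0)
reference = [random.gauss(0.0, 1.0) for _ in range(3 * sr)]
start = int(1.2 * sr)
capture = reference[start:start + sr // 2]
print(estimate_offset(reference, capture, sr))  # 1.2
```

A real system would use robust audio fingerprints rather than raw samples, and an FFT-based correlation rather than this O(n·m) loop, but the measured offset plays the same role: once the second device knows the main device's playback position, it can delay or advance its own streams to match, which is why every source of latency upstream is automatically accounted for.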
Latency and sync are major issues that the industry has been facing for years. It is still looking for a solution that is easy to implement, non-intrusive and uses automated synchronisation to deliver a Multiview experience accurate to milliseconds. The solution also shouldn’t require marking or modifying the content or changing existing workflows, and it should be CDN agnostic, work across all mobile platforms and use the native players on the devices to deliver a synced multi-screen experience.
The Multiview platform - the broadcast solution proposed by NativeWaves - meets all the requirements that the industry has been asking for, and is sufficiently flexible and agile that it can be tailored to fit into a broadcaster’s existing workflow without any major changes.
This solution enables broadcasters to engage their audiences by delivering all the extra content that is being captured, so that nothing goes to waste. It also allows customers to enjoy a truly personalised viewing experience. Multiview delivers savings by driving synergy between broadcast and streaming production workflows, and it allows broadcasters to explore new revenue streams with targeted advertising on additional screens. But most importantly, it delivers complete viewer satisfaction, which drives viewership and revenue.
As issues relating to latency and sync take centre stage, the perfect solution is now available for broadcasters to use.