Common Questions on the Topic of Distribution and Delivery


As the broadcast and post industries continue to adopt file-based workflows, the need for digital media distribution technology keeps growing. Being tapeless also means knowing how to send and receive files over IP networks, and many common questions arise. Here, in volume one, is a sampling of the most frequently asked questions on digital distribution and delivery:
Isn’t FTP adequate?
The answer, as in all things, is that it depends. You can't really complain about the cost, since FTP is "free". But it isn't really free, and here's why. First, what are the real annual costs of operating, say, 25 FTP servers for your clients? That is not a fanciful number of servers; I've talked with many companies that operate close to, or beyond, it. Their problem with FTP? "Help me consolidate my FTP server sprawl. I deal with too many companies, and I don't want to stand up a new server for each one." After that, of course, come the gory details. Among them:
• Without built-in checkpointing, a failed transfer means retransmitting the entire file.
• There is no way to change the bandwidth allocated to a transfer while it is in flight.
• The sending protocol cannot fill the pipe.
• There is no built-in way to prove that a file arrived at its destination.
• Opening one port on one router to transfer files to one company is easy; try opening 100 ports on 100 routers to talk to 100 companies. Firewall perforation becomes a real issue as you negotiate with the IT staffs of your business partners.
As your business-to-business needs grow, FTP is ill-equipped to provide the core functionality needed to address an ever-increasing number of partners.
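The checkpointing point above is worth making concrete. The sketch below is a minimal, hypothetical illustration of the idea (it is not Signiant's implementation, and the function name is invented): the checkpoint is simply how many bytes the destination already holds, so a restarted transfer picks up where it left off instead of starting from byte zero.

```python
import os

def resume_copy(src_path: str, dst_path: str, chunk_size: int = 64 * 1024) -> int:
    """Copy src to dst, resuming from however many bytes dst already has.

    Returns the number of bytes moved by this call. A plain FTP put with
    no checkpoint support would start over from byte 0 instead.
    """
    # The checkpoint is just the size of the partially written destination.
    offset = os.path.getsize(dst_path) if os.path.exists(dst_path) else 0
    moved = 0
    with open(src_path, "rb") as src, open(dst_path, "ab") as dst:
        src.seek(offset)  # skip what already arrived before the failure
        while chunk := src.read(chunk_size):
            dst.write(chunk)
            moved += len(chunk)
    return moved
```

For what it's worth, the FTP protocol does define a REST command for exactly this (exposed in Python's `ftplib` as the `rest=` argument to `retrbinary`/`storbinary`), but many real-world FTP deployments and clients never use it.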
What’s the difference between hardware-based and software-based file transfer acceleration systems?
The most noticeable difference, of course, is scalability. Hardware-based acceleration typically requires a device at either end of the network, much as fax machines did. That model is difficult to scale, and there is no way to centrally monitor all the traffic flowing between all of the appliances in the network: changing one parameter means querying each send/receive pair in turn, whereas in a centralised, software-based model you change the parameter once and it propagates automatically to every software mover. Logging into, say, 20 locations and changing the same parameter at each is clearly more time-consuming than using a centrally managed, all-software system. At scale, it is critical to be able to manage ALL network traffic across ALL points under a common GUI; otherwise scalability and manageability (i.e. the amount of staff needed to run the system) will remain problems.
Can I use the open, public Internet to send files?
Yes, absolutely. One large media conglomerate used the public Internet in 2008 to send over 25 million files and over 200 TB of data. Relatively inexpensive connections to the open Internet carry demonstrable cost savings compared with, say, satellite time. With the appropriate security measures in place, using the open Internet is entirely appropriate. The most common issue is that many companies already have adequate network resources but use inefficient transport protocols, and therefore cannot make the most of those resources.
Another very typical finding is companies with over-provisioned, under-utilised networks. After a full examination, it is often possible to move to lower-capacity connections, and this easily translates into cost savings. After analysing one company's needs, it was determined that the monthly rental of an OC-3 connection was not warranted and that everything could be accomplished over a DS-3 connection with the appropriate transport protocols and management system. The company had been paying $10,000/month for the OC-3 and ended up paying $4,000/month for the DS-3.
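The arithmetic behind that decision is easy to sketch. Assuming the nominal line rates for these circuits (OC-3 at 155.52 Mbit/s, DS-3 at 44.736 Mbit/s) and the monthly prices quoted above, a short calculation shows how much data the cheaper link can still move per day at a given sustained utilisation:

```python
def daily_capacity_gb(line_rate_mbps: float, utilisation: float = 0.95) -> float:
    """Gigabytes per day a link can move at a given sustained utilisation."""
    bytes_per_sec = line_rate_mbps * 1e6 / 8 * utilisation
    return bytes_per_sec * 86_400 / 1e9  # seconds/day -> GB

OC3_MBPS, DS3_MBPS = 155.52, 44.736        # nominal line rates
oc3_monthly, ds3_monthly = 10_000, 4_000   # USD figures from the article

print(f"DS-3 still moves ~{daily_capacity_gb(DS3_MBPS):.0f} GB/day")
print(f"Monthly saving: ${oc3_monthly - ds3_monthly:,}")
```

If the daily transfer volume fits comfortably under the DS-3's ceiling (roughly 450 GB/day at 95% utilisation), the $6,000/month saving follows directly.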
How Fast Can Data Be Sent?
Simple question, complicated answer. In general terms, WAN acceleration technologies can achieve somewhere on the order of 95% of the provisioned line rate. But it is also important to consider every component of the distribution system. For example, if you have a 1 Gigabit link and high latency, you will benefit from WAN acceleration technology; at the same time, you must examine the disk subsystems and the sustained read/write speeds at either end of the transfer, since they must support high-speed reads and writes of the data being transmitted. Conversely, WAN acceleration may be of little help when both bandwidth and latency are low. Even on a T1 connection, however, it is possible to see transfers run 60% faster when latency is considerable, with clear gains over a comparable FTP upload or download.
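Why does latency matter so much on a fat pipe? A single TCP stream can never move faster than its window size divided by the round-trip time (the bandwidth-delay product limit), which is the main bottleneck that acceleration products work around. A minimal sketch of that ceiling, using illustrative figures:

```python
def tcp_ceiling_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on single-stream TCP throughput: window / round-trip time."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1e6

# A classic 64 KB TCP window over a 100 ms long-haul path:
print(tcp_ceiling_mbps(65_535, 100))  # roughly 5 Mbit/s, even on a 1 Gbit/s link
```

In other words, with an untuned 64 KB window and 100 ms of latency, a 1 Gbit/s link delivers only a few megabits per second per stream, which is why high-latency links benefit so visibly from acceleration.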
If the Pipe is Small, Does WAN Acceleration Still Help?
This is an excellent question because, in theory, when you have a small network pipe and no latency, WAN acceleration typically has no effect. If you are trying to send, say, a 10 GB file over a network with only 1 megabit (Mbit)/sec of capacity and no latency, there is practically nothing WAN acceleration can do to make that file arrive faster.
But… when you don’t have a lot of network capacity (again, 1 Mbit/sec) and you add network latency, WAN acceleration can have a dramatic effect. That brings me to an example indicative of what you are very likely to experience with software-based WAN acceleration. Enticingly, it is titled “the tale of the hotel room”, and it is based on a true story.
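The distinction in the two paragraphs above can be sketched in a few lines. Roughly speaking, effective single-stream throughput is the lesser of the line rate and the window/RTT ceiling; with no latency the pipe itself is the limit (nothing to accelerate), while with latency the protocol becomes the limit (plenty to accelerate). The window size below is a hypothetical figure for illustration:

```python
def effective_mbps(link_mbps: float, window_bytes: int, rtt_ms: float) -> float:
    """Rough single-stream throughput: the lesser of line rate and window/RTT."""
    if rtt_ms == 0:
        return link_mbps  # no latency: the pipe itself is the limit
    ceiling = window_bytes * 8 / (rtt_ms / 1000) / 1e6
    return min(link_mbps, ceiling)

# 1 Mbit/s pipe, small 8 KB window (hypothetical figures):
print(effective_mbps(1.0, 8_192, 0))    # 1.0 -> pipe-limited; acceleration can't help
print(effective_mbps(1.0, 8_192, 150))  # ~0.44 -> latency-limited; acceleration can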
The Tale of the Hotel Room
A person has just finished capturing an event and goes into a hotel lobby in Oslo, Norway. He tries to upload the footage to a server in the United Kingdom using both FTP and Signiant’s Media Exchange application, which includes software-based WAN acceleration technology. The hotel Wi-Fi connection supports 1 (one) megabit/second upload and download speeds, with over 150 ms of latency from Oslo to London.
Using FTP, the file uploads at just 60 kbit/second, despite the 1 megabit/second capacity of the network. The file is then re-sent using Media Exchange, with WAN acceleration, and uploads at 900 kbit/second: 15 times faster. This true tale shows that even slow networks (e.g. 1 Mbit/sec) can be put to good use when latency is a factor. Theoretical speeds did not matter here; taking advantage of even limited bandwidth (in this case 1 Mbit/second) with WAN acceleration is indeed possible and achievable.

Tags: signiant | distribution and delivery | ftp | latency | wan acceleration | iss029

Read this article in the tv-bay digital magazine
Article Copyright tv-bay limited. All trademarks recognised.
Reproduction of the content strictly prohibited without written consent.
