
Powerful new levels of discovery
by Drew Lanham, Nexidia
TV-BAY MAGAZINE: ISSUE 90, JUNE 2014

For media creators, owners, and distributors, the amount of digital media in their libraries never stops growing as they add new content every day. Those who don’t organize and manage their data well are not getting the maximum value out of their archive. That’s why they need easy, accurate, and cost-effective tools to discover, repurpose, and monetize their media.

MAM and production asset management systems hold a lot of file-based metadata with attributes such as the date the footage was shot, but there is usually very little descriptive metadata about the content, making it difficult to find an asset quickly and accurately. To make matters worse, manual logging and transcription are not only time-consuming but prohibitively expensive, and they yield limited detail. Transcripts are yet another asset to manage, and they are not directly tied to the media they describe. There are software applications that can semi-automatically tag assets, but they can be expensive and may not provide enough detail to be useful. Image recognition might reveal who is in the media, but not what they are talking about. Speech-to-text lacks the performance and accuracy to be useful even on the clearest speech.

Fortunately there is a better way: automated phonetic-based dialogue search. Dialogue is the most abundant source of metadata in media. It is present in almost all program types and often provides a more detailed, precise description of the content than any other metadata. As a result, phonetic-based dialogue search delivers the richest, most relevant results at a fraction of the cost and time the other methods take to provide the same granularity of metadata. It is akin to Google for audio: users enter keywords and phrases that locate the specific moments in the media, allowing them to take further action.
How it works

In the phonetic-based search method, an application scans all of the audio in a media library and creates an index of sounds called phonemes, hundreds of times faster than real time. Once the assets are indexed, they are instantly searchable based not on the file properties or the information that has been typed into the metadata fields, but on what is actually spoken on the audio tracks. That means traditional metadata, descriptive or otherwise, is optional once the index is created. (Even so, the system can still leverage any existing metadata, in combination with the dialogue, to further refine and improve the search.) As a result, the phonetic-based search method dramatically reduces logging and transcription costs, speeds production, and, within hours of assets being added to the management system, uncovers valuable material that traditional metadata could never expose.

Phonetic search solutions integrate directly with MAMs, file systems, and editing applications, and no training is required. Users simply type any combination of words or phrases into the search interface, and it quickly finds any media clip in the system where those words or phrases are spoken. Users can then audition the hits within each clip in an integrated media player, without having to scroll through numerous clips to find a specific asset. When the correct results are found, they are exported as timecoded markers to MAMs and video-editing applications.

Users can also deploy phonetic search technology from within their own production toolset via APIs. For example, Adobe Premiere Pro users can access phonetic-based search functionality directly from the Premiere Pro interface without having to leave the application.
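The index-then-search flow described above can be sketched with a toy example. Everything here is illustrative, not a real product API: the phoneme symbols, the LEXICON lookup table, and the hand-written timestamped index are simplified stand-ins for what an actual acoustic indexer would derive from the audio itself.

```python
# Toy sketch of phonetic dialogue search. A real system produces the
# phoneme stream from audio with acoustic models; here it is hand-written.

# Hypothetical pronunciation lexicon used to turn query text into phonemes.
LEXICON = {
    "media": ["M", "IY", "D", "IY", "AH"],
    "archive": ["AA", "R", "K", "AY", "V"],
    "search": ["S", "ER", "CH"],
}

def query_to_phonemes(query):
    """Convert a text query into one flat phoneme sequence."""
    phonemes = []
    for word in query.lower().split():
        phonemes.extend(LEXICON[word])
    return phonemes

def search(index, query):
    """Return (clip_id, start_seconds) for every phonetic match."""
    target = query_to_phonemes(query)
    hits = []
    for clip_id, track in index.items():
        phones = [p for p, _ in track]
        times = [t for _, t in track]
        # Slide the query's phoneme sequence across the clip's stream.
        for i in range(len(phones) - len(target) + 1):
            if phones[i:i + len(target)] == target:
                hits.append((clip_id, times[i]))
    return hits

# The index maps each clip to a timestamped phoneme stream, as the
# offline indexing pass would produce from the audio track.
index = {
    "clip_001": [("DH", 0.0), ("AH", 0.1), ("M", 0.2), ("IY", 0.3),
                 ("D", 0.4), ("IY", 0.5), ("AH", 0.6), ("S", 0.8),
                 ("ER", 0.9), ("CH", 1.0)],
}

hits = search(index, "media search")  # -> [('clip_001', 0.2)]
```

Because matching happens on sounds rather than typed metadata, the query finds the spoken phrase and returns a timecode, which is exactly the kind of result that can be exported as a timecoded marker to a MAM or editing application.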
By simply adding an extension to their workspace, users can search, preview the results in a video player, and then drag and drop a clip into their project, with the search terms automatically displayed as markers on the clips in their timelines.

Intended applications

Versatile, affordable, accurate, and lightning-fast, phonetic-based search tools can change the way media operations discover and use their assets. They apply broadly to any market that creates, owns, or distributes content, including film and entertainment, sports, news, education, corporate, government, financial, houses of worship, and nonprofits. As media-driven organizations build ever larger stores of video content, or mine their archives, the ability to locate specific pieces quickly and efficiently will play a huge role in an organization’s competitive advantage and bottom line. After all, if you can’t find it, you can’t monetize it. Creating a searchable index that does not rely on traditional logging or transcription metadata, such as the one produced by phonetic-based indexing, can be the key to unlocking a media archive’s potential.