Television engineers first worked out how to send pictures around the planet in the mid-60s, at a time when it involved rooms full of equipment and an orbiting satellite. For decades afterwards, the technology changed only in the details: resolution went up, costs came down, and equipment became more portable. Only with the capability explosion of the public internet did the fundamentals begin to change.
That process began perhaps a decade ago. Identifying a technological turning point is always difficult, but the wealth of new ideas at IBC2023 makes now a good time for an update on the ways we move video from camera to facility, and from facility to viewers.
Backlight is particularly well-placed to take a view on the state of the industry, with divisions dealing with both the creative and infrastructure sides of production. Co-CEO Christian Livadiotti describes a world which perpetually demands a wider variety of media in a wider variety of situations. “What’s happened in the media industry at large is a lot of fragmentation of the eyeballs across platforms. I think that we are seeing some smart evolution. Years ago, it took a long time to iterate. You were betting on one growth opportunity, so you invest billions, and you have to make ten years out of it.”
The company’s Zype and Wildmoka systems have never been more relevant, as Livadiotti goes on. “With the solutions that we’re building, the front-end investment to address a growth opportunity is way lower. You can iterate more and create more growth as you’re doing it, pushing out linear channels that are automatically created based on a collection of VOD assets that are sitting idle. You push it out to the audience, and now it costs very little money. So you test it – is it working? You grow it… live sports, live news, whether that’s repurposing content or actually pushing it out into as many platforms as you want.”
With viewers hungry for content, the challenge is in repackaging material to suit, and doing that at scale has been an early success story for artificial intelligence. Meghna Krishna is Chief Revenue Officer at Magnifi, and describes the company’s new Digital Highlights Pro application as doing exactly that. “We work with some of the largest clients in India, the US, the UK and now we’re slowly expanding to Latin America, South East Asia and Africa. You’ve got some very large audiences in that part of the world. You might think that it’s only required by large broadcasters, but, for example, we work with a client in South Africa that does youth leagues and these leagues were not broadcast, they were not anywhere online.”
The result, Krishna goes on, was a collaboration between broadcaster and league that could never have happened without the company’s technology. “Now these kids can become celebrities, they can become influencers, because we can publish highlights for them. They can get scouted to play in the future. So there are multiple use cases. In terms of the future we are developing similar technologies for entertainment and for news. There’s huge potential once you’ve developed the code.”
Making that sort of concept into a business reality, though, relies not only on a system such as Magnifi’s to prepare the material; it requires distribution technologies that could not exist as part of a conventional broadcast chain. Pebble CEO Peter Mayhead has watched those changes occur. “What the likes of Netflix have done is taken advantage of the fact that there’s no longer a requirement for proprietary distribution capability, what we used to call broadcasting. If you wanted to have a channel, you had to have a licence, and you had to have an infrastructure and a facility to be able to broadcast to the audience. As streaming came along, they woke up one day and said, ‘we can get to a viewer audience without needing to invest into satellites, into cable.’”
“It was a very large disruption, over a very long period, but then all of a sudden,” Mayhead recalls. “So our challenge is [that] we’ve got to be able to get that content at the time it’s needed to more and more outlets. And that’s just getting more and more complex – different signal types, HD, SD, etc., transcoding, get that available… and then create additional ways of consuming it, especially when you’re in the holy grail of sports and live.”
Those changes have created opportunities not only in content creation and repurposing, but also at the most fundamental levels of distribution. As Paul Calleja, CTO at GlobalM, puts it, the rush to take advantage of those opportunities has not always been considered carefully enough to make the best use of the technologies involved. “When you walk around the show floor here, you’ll see the word cloud and virtualization used a lot. What a lot of the manufacturers have done is just take an existing product, spin up an instance and run it on that instead.”
A deeper dive into GlobalM’s world of distribution, contribution and transcoding reveals much more consideration. “Our pricing model is based on time. We’ve studied the very, very complicated rate card of AWS, and what we’ve done is we’ve turned that into a pricing model that simulates a satellite pricing model – but less. If you consider that you need to calculate the number of hours that you need for a particular project, you can do it with our rate card. We’re not calculating bandwidth or bitrate or anything like that. Transcoding is included in that price.”
Allowing clients to operate on much the same model as they did traditionally is one way of making new approaches attractive. At the same time, video is a heavy network payload, and moving it efficiently has benefits no matter the underlying hardware. Zixi’s software-defined video platform is designed to do exactly that, as Harjinder Sandhu, Director of Marketing, explains. “When we talk about efficiency and sustainability and reducing cost to a customer, we’re saying that using Zixi as your streaming platform, you’re going to save up to fifty per cent of the total cost. The way we achieve that is two main things. One is that we support AWS Graviton, and second, we have access to DPDK, which means we’re bypassing all the interface and our throughput becomes really high.”
Graviton represents Amazon’s option to use ARM-based CPUs, which, as every cell phone owner knows, generally do the same work for less power and less air conditioning. DPDK, the Data Plane Development Kit, facilitates faster, more efficient handling of networked data with less overhead than other approaches. Both of those things come at a cost to Zixi itself, Sandhu goes on, and the company’s approach relies on creating a more attractive offering. “For a vendor, it requires investment to put that effort in, to then offer to customers. We’re not making any more money with it. It’s just customers spending less.”
Exactly the same concerns apply when material is sent to the consumer, and while individual bandwidths are lower, the sheer scale of the task makes small numbers crucial. Silvia Candido, VP of Marketing Communications at Ateme, explains the company’s work in the area. “If you think about encoding, encoders encode stuff without having any idea of how it’s actually being used by consumers. They don’t have that information the content distribution network has. If you can connect the CDN with the encoders, you can reallocate encoding resources in a more efficient way so that you consume less compute.”
Ateme’s innovation is to spend more CPU time packing maximum picture quality into minimum bandwidth for popular streams, while less popular streams use more bandwidth but save on compute resources, for a lower overall cost. “If you allocate more encoding resources to the more popular channel so you compress it better, it will have much more impact because then it’s being seen by so many people. That’s what we call audience aware streaming,” Candido says, “and that’s something fairly new.”
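The principle can be sketched as a budgeted allocation problem: under a fixed compute budget, hand the most-watched channels the slowest, most thorough encoder presets, since every bit saved there is multiplied across millions of viewers. The function, preset costs and channel figures below are hypothetical illustrations, not Ateme's actual algorithm.

```python
# Sketch of audience-aware encoder allocation (hypothetical, not Ateme's
# actual algorithm): channels with more viewers get slower, more thorough
# encoder presets, which cost more CPU but cut the bitrate every viewer
# receives.

def allocate_presets(channels, cpu_budget):
    """channels: list of (name, viewers) tuples; returns {name: preset}."""
    # Presets from cheapest to most expensive, with rough CPU cost units.
    presets = [("ultrafast", 1), ("medium", 4), ("veryslow", 16)]
    # Rank channels by audience, most-watched first.
    ranked = sorted(channels, key=lambda c: c[1], reverse=True)
    allocation = {}
    remaining = cpu_budget
    for name, viewers in ranked:
        # Pick the most expensive preset that still fits the budget,
        # reserving at least the cheapest preset for every channel left.
        reserve = (len(ranked) - len(allocation) - 1) * presets[0][1]
        for preset, cost in reversed(presets):
            if cost <= remaining - reserve:
                allocation[name] = preset
                remaining -= cost
                break
    return allocation

channels = [("news", 900_000), ("sports", 2_000_000), ("archive", 5_000)]
print(allocate_presets(channels, cpu_budget=21))
# The most-watched channel gets the most compute; the least-watched
# falls back to a cheap preset.
```

A real deployment would feed live CDN audience figures back into this loop and re-run the allocation as viewing patterns shift, which is the connection between CDN and encoder that Candido describes.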
Minimising the amount of data used to transport pictures has been actively researched for decades, hence the existence of video compression codecs such as H.264 and its later follow-up, named, with minimum creativity, H.265. As Ateme’s innovations demonstrate, though, there’s more to codecs than fancy mathematics; the amount of computer horsepower required to compress or decompress video is just as crucial.
LCEVC is the Low-Complexity Enhancement Video Codec, a development intended to improve the performance of codecs such as H.264 by adding a new way to handle high frequency detail. Fabio Murra, Senior Vice President of Product and Marketing, points out that the standard encoding techniques used by popular existing codecs don’t deal well with the fine details created by things like film grain – things that cinematographers often adore. “LCEVC has two features that help with that. One is designed to efficiently encode high frequencies, whether that is details or noise. And also, LCEVC has an adaptive film-like dither function that kind of restores some of that film grain.”
The mathematics underlying the new way of encoding fine detail are designed to be straightforward – since the detail contains little contrast, it can be encoded using simple techniques. The company has already shown integration into devices from chip manufacturer Realtek, which provides hardware to a huge range of consumer devices. “TV3.0 in Brazil for now,” Murra reports, “and we’re working with ATSC and TVB as well. But I think more stream providers are looking at efficiencies and reach, and using the new technologies like LCEVC to improve their streaming.”
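The layered structure behind this kind of enhancement codec can be illustrated in miniature: a base layer carries a lower-resolution picture, and an enhancement layer carries the low-contrast residual detail needed to rebuild the full frame. The NumPy sketch below is a toy illustration of that base-plus-residual idea, not the LCEVC bitstream or any vendor's implementation.

```python
import numpy as np

# Toy two-layer scheme in the spirit of LCEVC (NOT the actual LCEVC
# bitstream): the base layer is a downsampled frame, and the enhancement
# layer is the residual high-frequency detail needed to reconstruct the
# full-resolution frame. The residual is low-contrast, which is why
# simple coding techniques suffice for it.

def split_layers(frame):
    """frame: 2-D float array with even dimensions."""
    h, w = frame.shape
    # "Base encode": 2x2 block averaging stands in for a base codec
    # working at quarter resolution.
    base = frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    # Cheap nearest-neighbour upsample back to full resolution.
    upsampled = np.repeat(np.repeat(base, 2, axis=0), 2, axis=1)
    # Enhancement layer: what the base layer missed.
    residual = frame - upsampled
    return base, residual

def reconstruct(base, residual):
    upsampled = np.repeat(np.repeat(base, 2, axis=0), 2, axis=1)
    return upsampled + residual

frame = np.arange(16, dtype=float).reshape(4, 4)
base, residual = split_layers(frame)
assert np.allclose(reconstruct(base, residual), frame)
```

In a real enhancement codec, the base layer would be an actual H.264 or similar encode and the residual would itself be compactly coded; the point of the sketch is simply that the detail lives in a separate, cheap-to-process layer.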
At one time, that sort of distribution might have been concerned mainly with satellite engineering, not codecs. Even at the sharp end of studio and live outside broadcast production, though, the gradual metamorphosis of hardware into software is advanced. Ross Video has been manufacturing hardware for that field for decades. The company’s Executive Vice President & Chief Marketing Officer, Jeff Moore, tells us that Ross uses the term hyper-convergence to describe the general move toward less gear that does more.
“The concept of hyper-convergence is to pack as much into one package as we possibly can,” Moore explains, referring to Ross’s Ultrix system. “The more we can do inside the routing system, the easier it gets. There’s less outboard gear – racks of synchronizers, racks of multi-viewers. We build that all into one box and a lot more. So in, for example, the largest system that we make has got 288 ins and outs, they’re all modular, so they can be either IP, SDI, fibre, whatever. And then you can have processing built in, so you can have frame syncs on every input, it’s UHD capable, as many multi-viewers as you want. It’s got audio embedding, dis-embedding.”
The operational advantage, beyond space and cost, is flexibility. “It takes a lot of what used to be systemisation and turns it into configuration,” Moore points out. The result is an intrinsic ability to serve multiple markets. “Stadiums are very different than broadcasters, and they’re very different than houses of worship, and they’re very different than e-sports. So, you have all of these different use cases, different needs, different visions that these customers have.”
Huge capability risks bringing huge complexity, a problem familiar to anyone with experience in the management of a whole broadcast operation from acquisition, through edit, to broadcast. TMT Insights’ Polaris platform is designed to let people monitor exactly those processes, particularly where re-versioning of episodic television means tracking a lot of different things all at once. TMT’s Brian Kenworthy describes a mission to “make everything as efficient and easy as possible with a multitude of systems. Most companies have more than one – maybe ten – and we try to simplify that with a unified interface to see their entire operation.”
It is, as Kenworthy goes on, an area ripe for the application of AI. “Where I think it’s going is that once we continue to combine all these systems into one view, maybe we take a spoke of that system and introduce some artificial intelligence. Maybe it’s localization and we do some R&D on speech to text or translations… but we’re trying to anticipate, okay, are we going to look at streamlining workflows, are we trying to move them away from manual processes as much as possible with AI or with anything else that we can implement.”
The final delivery of content, with subtitles and translations in place, might seem to be the end of the road for broadcast technology, but not quite: Friend MTS is a company dedicated to protecting broadcast material against copyright infringement, detecting leaks and investigating their source. It’s a world that’s become much more complicated as network distribution has multiplied the ways in which material might be illicitly redistributed. Nick Foreman is Vice President of Marketing, and describes distribution as “a chain, and there are a number of elements in that chain. They all require protection. We call it glass to glass. Lots of people use those terms, but if you don’t do all of it, then everything else is limited.”
“You need additional layers,” Foreman goes on, “and those are the services we provide. Being able to do that rests on your understanding of the landscape and how pirates think, what they do, what their technologies, what the latest technologies are.” The innovation in Foreman’s approach, meanwhile, is to find ways in which anti-piracy measures might be something more than a cost centre. “What we’re looking at is actually turning that around and seeing how people can use that to generate new revenue. How do you engage with those diverted audiences? Well, one, you have to know who they are. You imagine you could put a slate up that says ‘you’re watching illicit content’. Phone this number, get 20% off legitimate content. I don’t know if anybody is actually doing that, but it’s having that business intelligence.”
The single benefit which probably most characterises the integration of modern networking into broadcast infrastructure is exactly that ability: to analyse downstream activity in a way that conventional broadcast simply could not. Friend MTS’s identification of pirated streams, Ateme’s audience-aware encoding, and the way Magnifi repackages material to exactly suit the target user are all examples of things which are only in demand because the network facilitates certain uses of content, and only possible because that same network allows them to be created. That sort of synergy is rare, and it seems likely to keep bearing fruit for as long as people keep having good ideas.