
Broadcasters’ attitudes to the cloud have changed markedly over the last few years.
Where once there was a lot of mistrust about the cloud, now it’s seen as a proven
and reliable technology. Not so long ago, moving workflows to the cloud felt like a
leap into the unknown, but this is no longer the case. Now I think broadcasters are
far less wary and instead see it as a dependable means of achieving flexibility,
scalability and agility that on-premise systems can’t match.
Take playout for example: when running in the cloud, broadcasters can launch
temporary channels or try out new markets without having to install new hardware.
That kind of flexibility is transformative because it lets broadcasters experiment and
respond to audiences’ changing needs quickly and easily.
Another thing that’s changed is the attitude towards cloud-ready and virtualised
solutions. Finally, it’s sunk in that simply moving existing solutions into the cloud,
whether by making them cloud-ready or through virtualisation, does not deliver the
same benefits as going truly cloud-native. Adapting existing solutions to make them
cloud-ready doesn’t provide the scalability, built-in redundancy, flexibility, or agility
that cloud-native applications do.
To be effective, cloud solutions must be built in the cloud, from the ground up, using
containerised, microservices-based modular architectures. This cloud-native
approach to solution design is the foundation of systems that are flexible, resilient,
and scalable. This is in part because microservices enable solutions to be broken
down into smaller, independent components, each of which can be updated, scaled,
or replaced without affecting the rest of the chain.
Containers ensure these services run consistently across environments, whether on
public cloud, private cloud, or hybrid setups, providing the flexibility broadcasters
need for complex workflows. This modular, containerised approach also allows
broadcasters to innovate more easily, adopting new technologies and adapting
workflows without overhauling entire systems.
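To make the idea concrete, here is a minimal, illustrative Python sketch of one such independent service. The service name, port and endpoints are hypothetical and not drawn from any specific product; the point is simply that each microservice is a small, self-contained process that can be packaged into a container, health-checked by an orchestrator, and scaled or replaced without touching the rest of the chain.

```python
# Sketch of one hypothetical microservice in a modular playout chain.
# The names (PORT, /health, CaptionServiceHandler) are illustrative only.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

PORT = 8080  # each service listens on its own port inside its own container


class CaptionServiceHandler(BaseHTTPRequestHandler):
    """A single, independently deployable service: it can be updated,
    scaled or replaced without affecting neighbouring services."""

    def do_GET(self):
        if self.path == "/health":
            # A health endpoint lets an orchestrator restart or reschedule
            # this container on its own, leaving the rest of the chain alone.
            self._respond(200, {"status": "ok"})
        else:
            self._respond(404, {"error": "unknown path"})

    def _respond(self, code, payload):
        body = json.dumps(payload).encode()
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # Inside a container this one process is the whole service; the
    # orchestrator decides how many copies of it to run at any time.
    HTTPServer(("", PORT), CaptionServiceHandler).serve_forever()
```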
Additionally, there’s less push or expectation for broadcasters to go full-cloud, at
least not straight away. This shift was noticeable at the last IBC show, where it was
clear that many broadcasters have settled on a hybrid environment in which some
workloads run in the cloud while others remain on-premise. It’s accepted now that
there’s no need to jump into the cloud with both feet, moving all workflows over,
decommissioning local data centres and getting rid of hardware. It’s better instead to
start with baby steps, gradually moving workflows if and when it makes most sense.
Some functions, such as processing, editing, deep storage, and playout, work
particularly well in the cloud, and broadcasters are often concentrating on these
areas first because they deliver a lot of benefit for relatively little transition effort
and complexity. Once broadcast engineers have tested their newly established
cloud infrastructure and are confident that it works well for the migrated workloads,
more workflows can be moved over when and where appropriate.
Unsurprisingly, security and reliability remain top of mind, and broadcasters do worry
about putting all their eggs in one basket. Here, a multi-cloud approach may be
preferable, because locking into a single vendor can leave a broadcaster exposed to
outages or service disruptions. Running services across multiple cloud providers is
one approach that may help broadcasters mitigate that risk while maintaining
flexibility.
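As a rough illustration of how that can look in practice, the sketch below wraps each provider behind a common interface and fails over to the next one when an upload is refused. The StorageProvider protocol and the idea of a configured provider list are assumptions made for the example, not any particular vendor’s SDK.

```python
# Hypothetical sketch of multi-cloud failover for a single operation.
# StorageProvider is a placeholder interface, not a real vendor API.
from typing import Protocol


class StorageProvider(Protocol):
    name: str

    def upload(self, key: str, data: bytes) -> None: ...


def upload_with_failover(providers: list[StorageProvider],
                         key: str, data: bytes) -> str:
    """Try each configured cloud in turn; return the name of the one that
    accepted the object, or raise if every provider is unavailable."""
    errors = []
    for provider in providers:
        try:
            provider.upload(key, data)
            return provider.name
        except Exception as exc:  # an outage at one vendor is not fatal
            errors.append(f"{provider.name}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))
```

The same pattern applies to playout, processing or delivery services: keeping a provider-neutral layer in front of vendor-specific APIs is what makes it practical to shift workloads between clouds when one of them has a bad day.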
While attitudes to the cloud are evolving, what hasn’t changed is that cloud-native
architectures give broadcasters the flexibility to innovate in ways that were difficult or
impossible with traditional infrastructure. With a thoughtful and measured approach
to cloud adoption, broadcasters can build confidence and learn and adapt without
overcommitting. This is the path to unlocking innovation while delivering tangible
benefits and real value.