It was a pleasure connecting with so many friends and colleagues at IBC 2022. Let’s hope by now everyone has made it home safely and no one is still trapped in the lines at Amsterdam Airport Schiphol!
The recent post-Covid reconvening of IBC in Amsterdam offered a sweeping view of cloud-driven transformations in services and user experience that are taking the media and entertainment industry far beyond where things stood at the last in-person show in 2019.
The overarching show theme was clear from the cloud-labeled branding emblazoned on virtually every stand across the vast expanse of the RAI convention center. This once cloud-reluctant industry no longer questions the need to embrace software-based solutions running on commodity hardware as the way forward.
With the migration to the cloud, it’s now obvious to everyone that the future lies with IP-based extensions of high-performance capabilities that, from a user perspective, will render any remaining distinctions between managed-network and OTT services meaningless. IBC made clear that a tidal wave of solutions long pitched by vendors is finally going into play worldwide, enabling the money-saving operations, monetization flexibility, and service enhancements essential to service providers’ survival in what has become a crowded, low-margin marketplace.
As a result, with ample signals coming out of IBC to confirm this, it’s now widely understood that the industry will need recourse to real-time interactive streaming (RTIS) infrastructures to fully capitalize on these developments. In this sense, IBC was a landmark event when it comes to industry recognition that it’s time to put WebRTC-centric RTIS platforms like Red5 Pro’s Experience Delivery Network (XDN) technology to work at massive scales.
Following are the significant developments that leaped out at IBC as signs of how the M&E evolution will unfold. There’s not one that isn’t either totally dependent on or certain to be enhanced through the use of RTIS.
1 – FAST Everywhere
The market has responded with astounding speed and success to consumers’ demand for subscription-free services with an outpouring of thousands of free ad-supported television (FAST) channels. IBC made clear that the phenomenon, which took off over the past two years in the U.S., is now taking hold across Europe and elsewhere.
Vendors have responded to this unanticipated explosion with refinements to their cloud platforms that provide seamless tie-ins to dynamic advertising workflows and servers while cutting the time it takes to launch new channels from a matter of weeks or months to days or even hours. Some vendors, including Brightcove, QuickPlay, and Simplestream, said they’re now supporting instant aggregation of VOD assets into FAST channels, and one, QuickPlay, said it is going a step further by enabling the spin-up of user-personalized FAST channels.
Of course, one-way FAST streaming per se does not require RTIS support, although using that technology to achieve end-to-end sub-500ms latency more than satisfies the low-latency requirements of FAST sports and esports channels. But the other developments enumerated among these IBC highlights make RTIS as essential to FAST as it is to every other OTT service mode.
2 – Dynamic Ad Insertion & Flexible Monetization
Advertising now reigns as the primary monetization force in OTT services, prompting industry analyst Colin Dixon to declare in the aftermath of IBC, “The love affair with pure SVOD models is officially over.” And the cloud is enabling new approaches to subscription monetization as well.
In response to advertisers’ willingness to pay CPMs (costs per thousand impressions) well above traditional averages for a more targeted approach to delivering their messages, in-stream dynamic ad insertion (DAI) has become the primary mode of placement in FAST channels. And now, DAI is taking hold across hybrid ad-subscription (AVOD) services as well, with degrees of addressability ranging from contextual and geographic placements to more granular demographic categorizations all the way to individual user personalization.
In addition, some cloud solutions suppliers are providing support for greater flexibility in subscription pricing. For example, EaselTV makes it possible to set prices for different user categories, such as low-volume users or those behind in payments, within the same service group. MediaKind, attempting to keep restive, long-form-averse Gen Z viewers engaged, supports pricing for special entitlements to post-game viewing of highlights or the performance of fantasy team players.
Cloud-based support for DAI is now common among providers of OTT video streaming playout and distribution platforms, including Imagine Communications, Harmonic, MediaKind, Brightcove, ATEME, Amagi, and others. But there’s another shoe to drop if DAI is to live up to its potential.
Currently, DAI is designed to work with HTTP-based streaming, which, as discussed in this white paper, involves costly performance complexities that can be avoided with the approach to DAI employed with the XDN platform. With XDN-caliber DAI, there’s no need for the manifest manipulations and redirections of client calls to ad servers that result in unacceptable ad error rates. Ads are simply streamed as signaled by ad markers in real-time synchronization with primary streams, enabling frame-accurate insertion using DAI intelligence at the XDN Edge Nodes.
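To make the marker-driven approach concrete, here is a minimal, illustrative Python sketch of frame-accurate splicing logic. All names are hypothetical (this is not Red5 Pro’s actual API); it assumes ad markers arrive as SCTE-35-style cue events carrying a splice time and break duration, and shows how an edge node could decide, per frame, whether to forward the primary stream or the ad stream:

```python
from dataclasses import dataclass

@dataclass
class CueEvent:
    """SCTE-35-style ad marker: when to splice and for how long."""
    splice_pts: float   # presentation time (seconds) of the splice point
    duration: float     # ad break length in seconds

def frames_for_break(cue: CueEvent, fps: float = 30.0) -> range:
    """Frame indices (at a given fps) covered by the ad break."""
    start = round(cue.splice_pts * fps)
    end = round((cue.splice_pts + cue.duration) * fps)
    return range(start, end)

def select_source(frame_idx: int, cues: list[CueEvent], fps: float = 30.0) -> str:
    """Pick the stream to forward for this frame: ad break or primary content."""
    for cue in cues:
        if frame_idx in frames_for_break(cue, fps):
            return "ad"
    return "primary"

# A single 2-second break signaled at the 10-second mark of a 30fps stream.
cues = [CueEvent(splice_pts=10.0, duration=2.0)]
assert select_source(299, cues) == "primary"  # frame just before the splice
assert select_source(300, cues) == "ad"       # splice lands exactly on frame 300
assert select_source(360, cues) == "primary"  # break over after 60 frames
```

Because the decision is made per frame against the marker’s timestamp, the switch is frame-accurate by construction, with no manifest rewriting and no client redirect to an ad server.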
3 – Gamification
Leveraging IP technology to foster interactive approaches to driving user engagement, commonly tagged at IBC as gamification, was top of mind in this churn-infested competitive maelstrom. An outpouring of demonstrations and announcements revealed an unprecedented range of approaches to interactivity with live-streamed sports and other content, including multi-screen viewing, personalized aggregations of graphics and statistics, watch parties, and micro-betting. Vendor solutions that enable on-the-fly insertions of personalized overlays and even in-frame object manipulations for branding, ecommerce applications, and gamification prompts are going into operation at scales that are destined to transform what users expect from their viewing experiences, whether they’re using smartphones, computers, or internet-connected TVs (CTVs).
The ability to create deeper fields of metadata aided by artificial intelligence in both the categorization and discovery processes is a major facet of the gamification process. AI-fueled metadata creation in production makes it possible to extract information on the fly in the live content production process, which can be used in the delivery of video and text overlays highlighting players and stats of interest to any given user. Time-coded objects extracted and categorized by metadata algorithms from movies and TV shows can be used to support on-demand user access to sequences involving specific characters or dramatic moments presented as in-stream mosaics without pausing the primary linear content flow.
These are compelling applications, many of them doable in the one-way conventional streaming environment, but the move to full gamification, enabling people to interact with each other in synchronized video communications, requires RTIS architecture that significantly outperforms legacy video conferencing platforms. Full realization of the potential with unlimited user engagement at high levels of quality and application design flexibility opens a new realm of social possibilities with watch parties, game playing, micro-betting, e-commerce, sharing of user-generated content, and more.
4 – Location-Independent Production
Content producers’ migration to IP and the cloud is revolutionizing production and normalizing remote collaborations in the creation of everything from live sports to episodic TV series and movies. Along with the support this trend is getting from leading suppliers of production and postproduction solutions, two major initiatives reflect the scale of industry adoption of location-independent operations.
One is the standardization process undertaken by the Reliable Internet Stream Transport (RIST) Forum, which has gained wide industry backing as a non-proprietary approach to ultra-low latency transmission of content over the internet in the production, postproduction, and contribution/playout stages. RIST has come into play amid widescale usage of other RTIS modes in remote production, including Secure Reliable Transport (SRT), WebRTC, and Zixi’s Software Defined Video Platform (SDVP).
Like SRT and WebRTC, RIST is based on the open-source Real-time Transport Protocol (RTP), whereas SDVP uses proprietary technology. RIST includes modes of protection and authentication along with transport in ready-to-use vendor-agnostic formulations, which, when it was conceived, were deemed to be more precisely tuned to broadcast content producers’ needs.
The other development pointing to remote production as the new norm and, with it, the growing need for RTIS across the M&E creative space, is the set of principles laid out for motion picture production in the 2030 Vision statement issued by the major studios’ joint technology arm MovieLabs. The 2030 Vision principles, first adopted in 2019 and now followed by MovieLabs members and their affiliates, require that all newly created assets be ingested to the cloud and retained there for access by authorized creators, wherever they are, through to completion. Fast and ready access to rich archives must be included in workflows that are “designed around real-time iteration and feedback.”
We should add to these observations that while RIST is definitely a validation of the need for RTIS in this new production environment, it’s likely that over time, as RTIS is normalized at global scales on sophisticated WebRTC platforms, content producers will discover they don’t need to resort to separate infrastructure to support production and playout. That’s especially true with the use of XDN architecture, which enables the encapsulation of SRT streams for delivery over the underlying RTP foundation. SDVP, too, lends itself to integration with a well-designed WebRTC platform.
Going forward, the need for RIST will be further diminished by the availability of digital rights management (DRM) support with the use of WebRTC. As noted, one of the issues underlying the formulation of RIST was broadcasters’ desire to rely on conventional protection mechanisms in the movement of content through production to playout. These new developments will address that issue.
While the encryption system known as the Secure Real-time Transport Protocol (SRTP) is baked in as a mandatory component of the WebRTC stack, the growing popularity of WebRTC has brought with it intensifying demand for access to the DRM platforms commonly used in video streaming. At IBC, the security software firm CastLabs was promoting its support for WebRTC workflows with built-in DRM protection.
We’re also aware of another vendor, a major supplier of DRM solutions to the streaming industry that we can’t name, that will bring a WebRTC DRM solution to market in the near future. Others are sure to follow. As a result, it won’t be long before WebRTC users will have ready access to DRM support for any use case, including distribution as well as production/playout.
5 – Extended Reality & the Metaverse
Hovering virtually everywhere in the background at IBC and sometimes in the forefront in demos, conference sessions, and private discussions was the question of what comes next in the evolution and adoption of extended reality (XR) technologies. The big difference in how the subject was treated this year compared to three years ago was the greater significance attached to XR advances now that they’re seen as linchpins to what’s become widely recognized as the emerging Metaverse era in internet experience. There were at least seven conference sessions devoted to exploring developments in this vein.
Mercifully, the hype of earlier years gave way to acknowledgment that full realization of the XR potential awaits further technological development, not to mention a slackening in the headwinds that have slowed investments in technologies associated with the Metaverse vision. But there seemed to be widespread agreement that the transition to mass adoption of XR technology will be made in a few years, bringing with it a major M&E shift to the delivery of Metaverse experiences.
In a sign that monetization strategies are beginning to take shape around Metaverse expectations in the consumer products, services, and advertising arenas, major companies have begun announcing the appointments of chief Metaverse officers or similarly titled senior executives. As reported by Bloomberg’s Canadian outlet, firms setting up these roles include P&G, Publicis Groupe, Moët Hennessy Louis Vuitton SE, Crate & Barrel, Creative Artists Agency, and Walt Disney Co.
Demonstrations of XR applications were plentiful at this year’s IBC. Some of the more noteworthy included:
- A display of soccer content delivered by Sky for both 2D and 3D immersive 360-degree viewing supported by encoding technology from Ericsson and tiled VR streaming from Tiledmedia. Along with soccer, which was delivered in 4K, the demo included immersive engagement with Formula 3 racing delivered at 8K resolution.
- A full-room rendering of immersive experiences by Nokia without the use of headsets, utilizing AI, automation, and wall-size projector video technology. One such experience put couch-sitters in the cockpit of a Formula One car racing around London.
- Browser-based VR production involving Zero Density’s Reality 5 real-time virtual studio and AR/VR platform built on Unreal Engine 5. The demo showed how broadcasters with access to a green-screen “cyclorama” (closed volumetric space) and an LED virtual production stage can create professional-caliber immersive user experiences on par with the things Fox Sports, BBC, and others are doing with the Zero Density technology in major production studios.
Unrelated to but coincident with IBC, one of the more interesting technological developments in the XR arena came to light with the VR Industry Forum’s report on progress toward completion of a new MPEG standard known as Video-based Dynamic Mesh Coding (V-DMC). This is the latest in the series of VRIF-promoted Visual Volumetric Video-based Coding (V3C) standards, which include several that have already won ISO/IEC MPEG endorsement: Video-based Point Cloud Coding (V-PCC, standardized as ISO/IEC 23090-5), MPEG Immersive Video (MIV, 23090-12), and the carriage and delivery mechanisms for V3C coded data (23090-10).
After evaluations of various approaches to mesh coding, the ISO/IEC’s MPEG team has chosen a framework for future development that helps reduce VR networking bandwidth by using so-called mesh coding solutions to reduce the bitrates of the different element layers and related texture data streams in parallel. These are reconstructed to full-scale rendering at the client with the aid of “displacement maps,” which are also part of the streamed payload. VRIF expects the standard to be completed by 2024.
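The client-side reconstruction step described above can be illustrated in miniature. The following is a hedged Python/NumPy sketch of the general idea only, not the actual V-DMC codec: a coarse base mesh is transmitted cheaply, and a streamed displacement map offsets each vertex along its normal to recover surface detail at the renderer. All data here is toy data invented for illustration:

```python
import numpy as np

# Toy "base mesh": four vertices of a unit quad in the z=0 plane,
# each with an upward-facing unit normal (stand-ins for decoded data).
base_vertices = np.array([[0.0, 0.0, 0.0],
                          [1.0, 0.0, 0.0],
                          [1.0, 1.0, 0.0],
                          [0.0, 1.0, 0.0]])
normals = np.tile(np.array([0.0, 0.0, 1.0]), (4, 1))

# Toy "displacement map": one scalar per vertex, streamed alongside the mesh.
displacements = np.array([0.0, 0.1, 0.25, 0.1])

def reconstruct(base: np.ndarray, n: np.ndarray, d: np.ndarray) -> np.ndarray:
    """Recover the detailed surface by offsetting each vertex along its normal."""
    return base + n * d[:, None]

detailed = reconstruct(base_vertices, normals, displacements)
# The third vertex is lifted 0.25 along z; the first is unchanged.
assert np.allclose(detailed[2], [1.0, 1.0, 0.25])
assert np.allclose(detailed[0], [0.0, 0.0, 0.0])
```

The bandwidth saving comes from the fact that the base mesh and the per-vertex scalars compress far better than a full-resolution dynamic mesh, with the heavy lifting of reconstruction pushed to the client.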
In addition, there’s now a browser designed for VR known as Wolvic. This is a project undertaken by the open-source consultancy Igalia utilizing the advances encompassed in Mozilla’s Firefox Reality technology. Igalia says Wolvic is now active on more than 20,000 Oculus Quest 2 head-mounted displays (HMDs) and other headsets, and will soon be introduced in the Oculus app store.
As a full-featured browser created exclusively for standalone AR and VR headsets, the free Wolvic browser allows users, without leaving their immersive environments, to find and access in AR or VR display mode any related content on the Web. This should aid creators in the development process and could go a long way toward fostering greater XR engagement by providing users easier access to all the content now available on the internet.
6 – WebRTC Becomes the Acknowledged Linchpin to Real-Time Interactive Streaming
As noted in previous blogs and white papers (see, for example, the blog entitled “Entering the Metaverse at the Speed of Thought”), RTIS is one of the fundamental ingredients to the immersive, fully interactive use cases that will define the Metaverse. It was clear that in this realm and others requiring RTIS, WebRTC was top of mind as the best recourse to attaining the scalability, flexibility, and video-rich interactivity people need to bring all this to fruition.
Yes, there was some promotion around the new low-latency initiative known as High Efficiency Streaming Protocol, which was introduced in 2020 by the newly formed HESP Alliance. But, two years on, it was hard to see the need for a one-way ultra-low latency platform that requires radical revisions to HTTP streaming and the use of a player in lieu of browser support when WebRTC has gone into wide usage as the massively scalable RTIS solution of choice with plug-in-free support from all the major browsers.
Moreover, while HESP backers are claiming the one-way streams can be delivered at sub-500ms end-to-end latency, the latest announcement tied to HESP suggests that’s not a persistently reliable metric. CDN operator G-Core Labs, in announcing it had integrated its network with the HESP protocol, said this would enable distribution of live content “to millions of users all over the world, with delays not exceeding 2 seconds.”
It’s also worth noting that Synamedia, one of the founders of the HESP Alliance, was demonstrating WebRTC applications at its stand as executives expressed strong interest in finding the best approaches to putting WebRTC to work on the company’s platform. Several other streaming platform providers that we’re not at liberty to name also indicated they were moving in this direction. And, as noted earlier, demand from WebRTC infrastructure users who want to benefit from DRM protection has reached the point where leading vendors in that space are providing solutions.
Providers of quality-assurance platforms constitute another ecosystem sector that’s getting the call to address WebRTC needs. Specifically, there was much discussion about the need for monitoring capabilities that can give cloud providers and their customers confidence that cameras streaming real-time content to WebRTC platforms are capturing targeted scenarios as required. This is especially important in first-responder and military surveillance operations where WebRTC infrastructure, as described in this white paper, is playing an ever bigger role.
One final upbeat development we should note in conjunction with the massive cloud migration highlighted at IBC has to do with the gathering momentum around reductions in public cloud usage costs stemming from intensifying competition among platform providers. We see developments pointing in this direction as competitors to the Big 3 (AWS, Microsoft Azure, and Google Cloud) introduce new pricing models that lower or eliminate charges for transferring data from the cloud.
Most notably, Oracle’s OCI (Oracle Cloud Infrastructure), now the fifth largest cloud platform after the Big 3 and Alibaba, has been putting heavy downward pressure on competitors’ pricing, first by offering 10 terabytes of free data egress per month compared to 1 gigabyte free for AWS, and then by joining the Bandwidth Alliance, whose members agree not to charge egress fees provided customers subscribe to an alliance member’s security service. Shortly after Oracle made the latter move last year, AWS announced it was raising its free egress quota to 100 GB monthly. We suspect we’ll be seeing more dramatic movement in this direction among top-tier players.