Entering the Metaverse at the Speed of Thought

Amid much recent commentary about what’s meant by Metaverse as a description of the next phase in cyber evolution, one thing is clear:

Whatever your preferred definition might be, there will be no transition to life in the Metaverse without a network infrastructure optimized for video-infused interactivity at the speed of thought over any distance, at any scale.

This might sound like a stipulation that puts the Metaverse well off in the future. But the role that Metaverse-enabling networking is already playing in a vast number of use cases is solid confirmation that we’ve entered the Metaverse era, with a major transformation underway in how we use connected technology.

At the speed of thought? You bet.

As we know from neuroscience, there’s a time gap between the instant something engages our sensory apparatus and the moment we become consciously aware of the input. The lag varies from one person to the next, but the speed of thought is generally measured in the range of 50-200ms, with 150ms commonly cited as the norm. That’s roughly the average time it takes runners in a race to respond to the starting gun.

These are the latency targets for the interactive audio and visual exchanges that are bringing the Metaverse to life. The primary requirement is that all participants in an interactive engagement experience the same thing synchronously, in what each person perceives as real time. As long as the lag between a point of origin and a recipient stays below 400ms, it falls under most people’s threshold of perceptible delay.
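
To put numbers like these to work, a web client can check its own network delay against the threshold. Here is a minimal TypeScript sketch using the standard WebRTC statistics API; it assumes pc is an already-negotiated connection:

```typescript
// A minimal sketch: poll a negotiated RTCPeerConnection for round-trip time
// and compare a rough one-way estimate against the ~400ms perceptibility
// threshold cited above. Uses only the standard WebRTC statistics API.
const PERCEPTIBLE_DELAY_MS = 400;

async function checkLatency(pc: RTCPeerConnection): Promise<void> {
  const stats = await pc.getStats();
  stats.forEach((report) => {
    // The active ICE candidate pair carries the measured round-trip time.
    if (report.type === "candidate-pair" && report.state === "succeeded" &&
        report.currentRoundTripTime !== undefined) {
      const oneWayMs = (report.currentRoundTripTime * 1000) / 2;
      console.log(
        `~${oneWayMs.toFixed(0)}ms one-way network delay`,
        oneWayMs < PERCEPTIBLE_DELAY_MS ? "(imperceptible)" : "(perceptible)"
      );
    }
  });
}
```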

How close are we, then, to life in the Metaverse? In one of the more compelling discussions about where we are, Matthew Ball, managing partner of the venture fund EpyllionCo, lists eight areas of development he is tracking as Metaverse signposts: hardware; networking; compute; virtual platforms; interchange tools and standards; payments; Metaverse content, services, and assets; and consumer behavior.

As Ball notes, “sustained, interconnected, and cross-affecting improvements” in all these areas are well underway. Where networking is concerned, this progress encompasses “the provisioning of persistent, real-time connections, high bandwidth, and decentralized data transmission by backbone providers, the networks, exchange centers, and services that route amongst them, as well as those managing ‘last mile’ data to consumers.”

It’s Not Just About VR

Notably, he cites virtual platforms, i.e., “the development and operation of immersive digital and often three-dimensional simulations, environments and worlds,” as just one of those key trend categories. This is an important departure from the common way of thinking about the Metaverse as “a shared virtual 3D world, or worlds, that are interactive, immersive, and collaborative,” in the words of NVIDIA chief blogger Brian Caulfield.

That’s the definition Facebook CEO Mark Zuckerberg implicitly referenced during a recent quarterly call with investment analysts when he said Facebook’s goal is to become a “Metaverse company.” By CNN’s count, Metaverse was mentioned more than a dozen times during the hour-long session.

Facebook isn’t alone. As CNN reported, “The idea has transformed into a moonshot goal for Silicon Valley and become a favorite talking point among startups, venture capitalists and tech giants.”

Caulfield describes NVIDIA’s new Omniverse software stack as “a platform for connecting 3D worlds into a shared virtual universe.” Microsoft says it is building the “enterprise Metaverse.” Epic Games has committed $1 billion to funding its Metaverse ambitions. The government of South Korea has gone so far as to launch an alliance with 17 of the country’s leading companies aimed at building a virtual Metaverse space.

These goals have many permutations. As CNN noted, “the idea is still amorphous.”

With billions invested in virtual reality (VR) hardware and, to a lesser extent, content and VR applications, Zuckerberg’s declaration putting VR at the center of the Metaverse concept wasn’t a big surprise. But the notion distorts what is really happening in the world’s transition to life in the Metaverse, say Ball and others who are invested in that transition.

“[T]he Metaverse doesn’t mean a game or virtual space where you can hang out,” Ball writes.

“Instead, we need to think of the Metaverse as a sort of successor state to the mobile internet. And while consumers will have core devices and platforms through which they interact with the Metaverse, the Metaverse depends on so much more.”

The Future Is Here

So, if it’s not just the interconnection of a lot of VR spaces, what is the Metaverse, and when will we know we’re in it? A definition that fits well with Ball’s analysis of what it will take to get there was recently provided in a blog co-authored by two executives at product design consultancy Argodesign: founder and chief creative officer Mark Rolston and chief creative technologist Jared Ficklin.

“The meta in metaverse refers to a universe built out of pure data,” they write, which means it “won’t be realized in a closed-garden VR space. Instead, it will emerge as our digital lifestyles begin to join us in the physical world.”

That will take shape across “different business models, content types, and classes of experiences,” they add. “Individuals will orchestrate the interfaces for these into workflows that bring productivity, entertainment, or socialization in the manner they want. The common thread is that all of these applications will amplify the capabilities of the individual users.”

It’s a “kind of hyper-personalization” where the “tools are becoming us. In the data is a meta me that is becoming every bit as real as us.”

All of which suggests that, with real-time connectivity, the Metaverse is already here, if we want it to be. Or, as Ben Thompson, creator of the widely read Stratechery blog, puts it, “The future is here…it’s just not very evenly distributed.”

All the areas of development cited by Ball are ripe for exploitation in an environment where hybrid life engaging dispersed people in shared activity has become the norm on the job, at school and during leisure hours. The opportunity for anyone who wants to bring the Metaverse to life in any given scenario is to envelop people in computing and data “with a kind of human-amplifying ubiquity,” in the words of Rolston and Ficklin.

Building Private Metaverses

In the enterprise realm, that means, as Thompson put it, “there is likely to be good business in building metaverses for private companies.” VR may not be essential, but, as Thompson notes, it will play an ever-bigger role in such efforts.

Noting he has doubts about the potential of VR in consumer use cases, Thompson says Covid’s role in virtualizing workspaces has made VR “far more compelling” in the business realm. Of course, VR was playing a growing role in business, healthcare and other fields before Covid hit, as described in this Red5 Pro blog.

As we noted then, there can be no remote live interpersonal engagement through VR without network connections that can facilitate the exchange of immense amounts of data conveying each participant’s multidimensional perspective on what everyone else is doing at each instant in real time. And while there’s less data to transmit at each instant if VR isn’t involved, the fact that at least 2D video presence is essential to experience in the Metaverse imposes the same real-time interactive networking requirement.
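
Some back-of-envelope arithmetic shows the scale of the difference. The numbers below are illustrative assumptions (per-eye resolution, refresh rate, and compression ratios vary widely by headset and codec), not measured figures:

```typescript
// Back-of-envelope arithmetic with illustrative assumptions: raw and
// compressed bitrates for stereoscopic VR video.
const width = 1920, height = 1920; // assumed per-eye resolution
const fps = 90;                    // typical VR refresh rate
const bitsPerPixel = 24;           // 8-bit RGB, uncompressed
const eyes = 2;                    // stereoscopic rendering

const rawBps = width * height * bitsPerPixel * fps * eyes; // ≈ 15.9 Gbps raw
const compressedBps = rawBps / 200; // assuming roughly 200:1 video compression
console.log(`raw: ${(rawBps / 1e9).toFixed(1)} Gbps, ` +
            `compressed: ${(compressedBps / 1e6).toFixed(0)} Mbps`);
// Even compressed, that's ~80 Mbps per participant, versus a few Mbps for 2D video.
```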

Moreover, as Rolston and Ficklin make clear, compute power must be instantaneously available to execute processing of any amount of data in synchronicity across all endpoints. As Ball describes it, this entails support for “such diverse and demanding functions as physics calculation, rendering, data reconciliation and synchronization, artificial intelligence, projection, motion capture and translation.”

From Incremental Beginnings to an All-Encompassing Metaverse

While we can now move forward with building operational manifestations of the Metaverse incrementally, there’s a lot of heavy lifting ahead across all eight categories of development on Ball’s list before the Metaverse can explode into reality on a global scale. Tech futurist Cathy Hackl, in an article written for Forbes, provides an excellent summation of what must happen in areas related to what Ball calls “interchange tools and standards.” And she adds some categories of her own, such as universal digitization, environmental sustainability, and accessibility and inclusive design that works for all people regardless of physical impairments or tech know-how. She even calls for updating copyright laws to accommodate “copyright at the speed of innovation” for the digital age.

When it comes to what Hackl refers to as “Next Wave Digitization,” it goes without saying that, at the incremental level of Metaverse buildout, everything within the targeted realm must be digitized. But Hackl’s point is that a full transition to the Metaverse era requires comprehensive digitization at all layers of networking, content, applications, workflows, and services, including undigitized “elements from the past.” “This urgency is driven by the need to bring content, products, and services to customers on-demand and wherever they can connect with it through the internet with lower cost and sustainable hardware,” she says.

Equally fundamental, and much harder to achieve, are some of the requirements on her list that will require cooperation globally, including aspects stemming from open standards and common practices such as “Interoperability and Portability,” “Migration, Emulation and Re-Presentation,” and “Global Commons of the Metaverse.”

Core technology standards are essential to interoperability. But it also requires consensus that overcomes proprietary barriers to the fungibility of “asset classes” such as “avatars, 3D models, mixed-reality and spatial environments.”

Adherence to well-established principles ensuring continued use of data and source assets is essential to efficient Metaverse evolution. By emulation, Hackl means the imitation of experience and presentation of content in its source context, even though “it may undergo technical changes in the background.”

The ability to migrate assets and data, even with “significant changes in the conditions or experience,” will make it possible to pursue outcomes “resulting in meaning and interpretation substantively different from the source.” Ongoing attention to the evolution of data and assets enables “re-presentation” in “newer media, formats, and platforms.”

One of the more far-reaching concepts articulated by Hackl concerns the need for “Global Commons of the Metaverse,” which refers to the creation of shared repositories of resources “from which all may contribute to and draw upon.” As a place with open access “where we come to collaborate, share, and work together,” a “functional global commons in the metaverse will require dialog, discernment, generosity, and trust to be successful,” she says.

Hackl points to the Creative Commons Attribution license and the CC0 Public Domain Dedication as legal tools that are well positioned to support open access. And she cites Crucible Networks, Outlier Ventures and the Open Metaverse Interoperability Group as examples of companies and organizations that are seeking to create an open Metaverse.

“The metaverse is an opportunity for new economies, ecosystems, and a chance to put an emphasis on ethics and privacy,” Hackl says, but it won’t be easy. “Companies and governments will have to rethink the way they see the world (physically and virtually). But the hope is that through collaboration, transparency, openness, and accessibility, we can create a thriving metaverse that’s built to last and that’s built for everyone.”

The Metaverse Grid in Operation

Meanwhile, there’s no better way to get there than through incremental instantiations of Metaverse environments. In that regard, networking and compute power are arguably the most fundamental areas of development on Ball’s Metaverse watchlist. Their role in bringing the Metaverse to life worldwide has been well documented in blogs describing the multitude of use cases supported by Red5’s Experience Delivery Network (XDN) platform.

As explained in this white paper, XDN infrastructure can be instantiated on any combination of one or more leading cloud platforms to support delivery of video in any direction to and from any number of end users. Video-rich payloads can be streamed at any distance with end-to-end latencies no greater than 200-400ms and as low as 50ms or less over intra-regional distances.

Instantaneous access to cloud compute power is intrinsic to operations on XDN infrastructure, starting with the functions embodied in the XDN architecture itself: cloud computing orchestrates all processing across the cloud resources supporting real-time interactive streaming.

The XDN orchestration system manages hierarchies of Origin, Relay, and Edge Nodes in clusters in one or more clouds to create the fastest route for every stream from ingestion to reception. The ability of the XDN Stream Manager to automatically spin up resource capacity as needed supports scalability essential to serving any number of senders and receivers. This overcomes the scaling limitations common to other systems that rely on WebRTC, which is the primary real-time streaming mode employed by the XDN platform.
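
To picture how a client meets that orchestration layer, here is a hypothetical TypeScript sketch of a subscriber asking a stream manager for an Edge Node assignment before it connects. The host, path, and response shape are assumptions for illustration, not Red5’s documented API:

```typescript
// A hypothetical sketch of a subscriber asking an XDN-style stream manager
// for an Edge Node assignment before connecting. The host, path, and
// response shape are illustrative assumptions, not a documented API.
interface EdgeAssignment {
  serverAddress: string; // Edge Node selected for this subscriber
  scope: string;         // application scope on that node
}

async function requestEdge(streamName: string): Promise<EdgeAssignment> {
  const res = await fetch(
    `https://streammanager.example.com/api/streams/${streamName}/subscribe` // hypothetical URL
  );
  if (!res.ok) throw new Error(`stream manager returned ${res.status}`);
  return (await res.json()) as EdgeAssignment;
}
```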

Computational intelligence also allows the XDN to ingest content delivered over the other leading transport protocols used with live streaming, including Real-Time Streaming Protocol (RTSP), Real-Time Messaging Protocol (RTMP), Secure Reliable Transport (SRT), and MPEG Transport Stream (MPEG-TS). Every stream is matched to the capabilities of the receiving device.

Most devices receive content via WebRTC by virtue of its support in the five leading browsers: Chrome, Edge, Firefox, Safari, and Opera. This eliminates the need for plug-ins or purpose-built hardware.
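
Here is a minimal sketch of what that plug-in-free playback looks like, using only the standard browser WebRTC API (the signaling exchange with the server is elided, since it varies by platform):

```typescript
// A minimal sketch of plug-in-free playback with the standard browser
// WebRTC API. The signaling step (exchanging SDP offers/answers with the
// server) is elided because it varies by platform.
function attachRemoteStream(videoEl: HTMLVideoElement): RTCPeerConnection {
  const pc = new RTCPeerConnection();
  pc.addTransceiver("video", { direction: "recvonly" });
  pc.addTransceiver("audio", { direction: "recvonly" });
  pc.ontrack = (event) => {
    // The browser delivers decoded media as a MediaStream; no plug-in needed.
    videoEl.srcObject = event.streams[0];
  };
  // ...offer/answer exchange with the streaming server would follow here.
  return pc;
}
```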

RTSP, like WebRTC, is based on the Real-Time Transport Protocol (RTP). It can be used with mobile devices when providers prefer to tailor streams to exploit iOS and Android client-server architectures rather than relying on browser support for WebRTC.

When the XDN ingests content delivered over the other supported protocols, it repackages the streams on the RTP foundation to reach clients that can’t be served via WebRTC or RTSP. In the rare instances where devices can’t be engaged via any of the XDN-supported protocols, the platform directs streams targeting those devices to HTTP-based streaming via Apple’s HTTP Live Streaming (HLS).
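
On the client side, that fallback cascade can be sketched in a few lines; the checks and their order here are illustrative, not the platform’s actual selection logic:

```typescript
// An illustrative sketch of a client-side fallback cascade: prefer WebRTC,
// fall back to HLS where the device supports it natively.
type Transport = "webrtc" | "hls" | "unsupported";

function pickTransport(videoEl: HTMLVideoElement): Transport {
  if (typeof RTCPeerConnection === "function") return "webrtc";
  // Safari and some embedded browsers play HLS natively in <video>.
  if (videoEl.canPlayType("application/vnd.apple.mpegurl") !== "") return "hls";
  return "unsupported";
}
```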

In addition, while XDN architecture does away with the one-way and latency limitations of HTTP-based streaming, it does preserve the benefits of adaptive bitrate (ABR) streaming. In cases where XDN-ingested content is encoded in multiple bitrate profiles emulating an ABR ladder, Edge Node intelligence ensures the content is streamed to each device at the bitrate best suited to device and bandwidth parameters.
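
In simplified form, that selection amounts to picking the highest ladder rung the measured bandwidth can sustain. The ladder values and safety margin in this sketch are illustrative assumptions:

```typescript
// A simplified sketch of rung selection against an ABR-style ladder: choose
// the highest bitrate that fits measured bandwidth, with headroom for jitter.
interface Rung { label: string; kbps: number; }

const ladder: Rung[] = [
  { label: "1080p", kbps: 4500 },
  { label: "720p",  kbps: 2500 },
  { label: "480p",  kbps: 1200 },
  { label: "360p",  kbps: 600 },
];

function selectRung(measuredKbps: number, margin = 0.8): Rung {
  const budget = measuredKbps * margin; // leave headroom for bandwidth swings
  // Ladder is sorted high to low, so the first fit is the best fit.
  return ladder.find((r) => r.kbps <= budget) ?? ladder[ladder.length - 1];
}
```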

Computational intelligence also underlies the automated mechanisms used to ensure continuous streaming, load balancing, and fail-safe redundancy on the XDN platform.

Because RTP runs over the connectionless User Datagram Protocol (UDP), continuous streaming requires retransmission of important dropped packets without impacting real-time latency. To do this, XDN architecture employs a well-designed implementation of Negative Acknowledgement (NACK) messaging, which operates in conjunction with advanced iterations of Forward Error Correction (FEC) and other mechanisms to replace any dropped packets that might impact user experience.
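
Stripped to its essentials, the NACK decision looks something like the following sketch, which is a simplification of what any production implementation does:

```typescript
// A simplified sketch of the NACK decision: find gaps in received RTP
// sequence numbers and request retransmission only when a resent packet
// could still arrive before playout. Real implementations also handle
// sequence-number wraparound and duplicate suppression.
function nackCandidates(
  received: number[],      // sequence numbers received so far
  rttMs: number,           // current round-trip time to the sender
  playoutBudgetMs: number  // time remaining before the gap must play out
): number[] {
  // A retransmission takes at least one round trip; if that can't beat
  // the playout deadline, requesting it would be wasted effort.
  if (rttMs >= playoutBudgetMs) return [];
  const missing: number[] = [];
  const sorted = [...received].sort((a, b) => a - b);
  for (let i = 1; i < sorted.length; i++) {
    for (let seq = sorted[i - 1] + 1; seq < sorted[i]; seq++) {
      missing.push(seq); // a gap in sequence numbers means a lost packet
    }
  }
  return missing;
}
```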

Load balancing and fail-safe redundancy are maintained across the XDN infrastructure through persistent monitoring and autoscaling mechanisms that can either redirect traffic or activate additional resources at impacted Nodes. In the case of malfunctions, platform controllers designed to work with each cloud provider’s APIs can instantaneously shift processing from a malfunctioning component within a node to another appliance in that node. In the event all of a Node’s appliances go offline, the processing shifts to another Node with minimal disruption to the flow.
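
The monitoring half of that pattern can be sketched as a simple health probe across candidate nodes; the URLs and the /health endpoint here are illustrative assumptions:

```typescript
// A sketch of the monitor-and-redirect pattern: probe candidate nodes and
// route new subscribers to the first one that passes a health check.
async function pickHealthyNode(nodeUrls: string[]): Promise<string> {
  for (const node of nodeUrls) {
    try {
      const res = await fetch(`${node}/health`, { signal: AbortSignal.timeout(1000) });
      if (res.ok) return node; // first healthy node wins
    } catch {
      // Probe failed or timed out: treat the node as offline and move on.
    }
  }
  throw new Error("no healthy nodes available");
}
```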

Use Cases Marrying XDN and Compute Power

Providers operating on XDN infrastructure are bringing cloud computing, often with AI support, into play in many ways. For example, the platform is used to facilitate real-time aggregation of multiple surveillance camera feeds to provide a comprehensive, synchronized view of monitored activities. Surveillance operators are able to analyze the compiled feeds—with the aid of AI—to find specific objects, such as the license plate of a vehicle speeding away from a crime scene.

In one dramatic use case involving immense amounts of on-the-fly processing, OnAir Systems is supporting a unique user experience at air shows staged by the Air Force’s Thunderbirds and the Navy’s Blue Angels squadrons. By ingesting multiple camera feeds from the wings and cockpits of each plane, the XDN platform allows attendees’ smartphones to seamlessly shift from one view to another while retrieving information about the pilots, air speed, each plane’s position in the formation, and other aspects of what they’re seeing.

All of this must happen fast enough to prevent discernible disparities between what viewers see on their screens and what’s happening in the air. “This is a killer piece of technology,” says OnAir president Jonathan Hoggard. “Upwards of 60 to 80 thousand people, sometimes as many as 500,000, are able to watch video streaming real time from the cockpits as planes fly overhead.”

Cloud computing is playing a big role in other types of event productions such as sports, esports, and concerts that employ XDN infrastructure to support real-time streaming and interactive video engagement across vast audiences at great distances. Producers can synchronize delivery of multiple viewing options for seamless navigation by each user. Or they can synchronize data and graphics feeds with individual content streams to enable consistent personalized transfer of data such as chat messages, live bets, auction bids, virtual chalk boards, and GPS data.
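
One common way to achieve that kind of synchronization is to timestamp each data message against the media timeline and hold it until playback catches up, as in this illustrative sketch:

```typescript
// An illustrative sketch of timestamp-based synchronization for side-channel
// data (chat, bets, GPS): hold each message until playback reaches its media
// timestamp so every viewer sees the overlay at the same point in the stream.
// Assumes the queue is sorted by media time.
interface TimedMessage { mediaTimeSec: number; payload: unknown; }

function startSync(
  videoEl: HTMLVideoElement,
  queue: TimedMessage[],
  render: (payload: unknown) => void
): number {
  return window.setInterval(() => {
    while (queue.length > 0 && queue[0].mediaTimeSec <= videoEl.currentTime) {
      render(queue.shift()!.payload); // release messages in media-time order
    }
  }, 50); // a 50ms tick stays well under the perceptibility threshold
}
```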

XDN operators are also making use of cloud compute intelligence to manage remote collaborations in private enterprise scenarios and in productions of trade shows and live entertainment. For example, real-time collaboration in live sports productions is saving money by reducing the number of people and amount of equipment that must be positioned on site. Editing of raw camera feeds can be done remotely when content is synchronously aggregated and processed in real time.

These use cases are just the beginning of the immersive experiences XDN can deliver. In the near future, we will be sharing, directly in VR, some collaborative work with organizations including Red Pill and Sixense that we can’t wait to unveil.

———————————–

With the essential networking and compute frameworks readily at hand, the Metaverse is quietly taking shape all around us.

Matthew Ball notes that the emergence of a new era in our use of technology has always been an evolutionary process. But, at some point, we can look back and choose a particular development as the event that marks the transition from incremental steps to full-blown birth of an epoch.

For example, Ball says, the electricity revolution began with a first wave instigated by Thomas Edison’s breakthrough demonstrations of electrical power. This was followed by 30 years of incremental steps that finally led to the beginning of the second wave, which brought electrical power to the masses.

That’s when electrical grids emerged to carry electricity beyond local generators.

And that’s where we are now in the evolution to the next era in cyberspace. We have a Metaverse grid that can deliver all the capabilities embodied in the progress that’s underway across all the other areas of development enumerated by Ball and Cathy Hackl.

To learn more about how to put the Metaverse grid to use, contact info@red5.net or schedule a call.