How Edge Computing Will Revolutionize Mobile Gaming

Edge computing has been around since the 1990s, but with the rise of the internet of things (IoT), the technology is finally taking hold, and it’s not a moment too soon for the mobile gaming industry.

In this article, we’ll take a look at the state of mobile gaming and explore how edge computing will contribute to a new era of IoT gaming through virtual reality (VR) and augmented reality (AR).

Has Mobile Gaming Stagnated?

During the past few years, mobile game downloads and in-app purchases have started to plateau.[1] While mobile games in 2018 are valued at more than $50 billion,[2] nearly half of global gaming revenue, continued growth will require innovation.

VR headsets released in the early 2010s promised[3] to change gaming forever but haven’t, due in large part to myriad challenges. Users have come to expect seamless, sophisticated gameplay experiences, which require large amounts of storage (both locally and in the cloud) as well as processing power. Given network, storage, and processing limitations, delivering this level of sophistication on a mobile or IoT device such as VR gear has been difficult.

Edge Computing Reduces Latency for Better Gaming

The rapidly evolving technology of edge computing will help game companies take a step toward making VR viable again, because it moves processing power closer to the source (your smartphone or VR gear), reducing network-dependent latency. Gameplay becomes less dependent on a cloud connection to the game server.

What is edge computing? Essentially, it’s taking some of the power of cloud computing and bringing it closer to our devices, which sit near the end, or edge, of the network.

For typical mobile games, a latency of 100 milliseconds[4] or less can create a positive gaming experience, but immersive VR (and AR) experiences require much lower latencies: less than 20 milliseconds.[5] Any perceivable lag can be nausea-inducing for gamers, and eliminating lag is a challenge since VR experiences consume much more processing power.[6]
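To make these budgets concrete, here is a minimal sketch (not from the article; the function name and the timing figures are hypothetical examples) of how total perceived latency stacks up against the 100-millisecond mobile target and the 20-millisecond VR target:

```python
# Illustrative latency-budget check. A frame's perceived latency must cover
# the network round trip, server-side rendering, and on-device composition.
# All numbers below are hypothetical examples, not measurements.

VR_BUDGET_MS = 20      # immersive VR/AR target cited above
MOBILE_BUDGET_MS = 100 # typical mobile-game target cited above

def within_budget(network_rtt_ms, render_ms, compose_ms, budget_ms):
    """Return the total latency and whether it fits the given budget."""
    total = network_rtt_ms + render_ms + compose_ms
    return total, total <= budget_ms

# Distant cloud data center: a 60 ms round trip alone blows the VR budget.
total, ok = within_budget(network_rtt_ms=60, render_ms=8, compose_ms=4,
                          budget_ms=VR_BUDGET_MS)
print(total, ok)  # 72 False

# Nearby edge server: the same rendering work fits comfortably.
total, ok = within_budget(network_rtt_ms=6, render_ms=8, compose_ms=4,
                          budget_ms=VR_BUDGET_MS)
print(total, ok)  # 18 True
```

The point of the arithmetic is that rendering and composition costs are roughly fixed, so the network round trip is the lever edge computing pulls: the same workload that fails against a distant cloud passes against a nearby edge server.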


Building an Extended Reality with Edge Data Centers

While edge computing can live on board a device, some game makers hope to leverage localized edge data centers to reduce latency between the user’s device and the game server. Startups are stepping in to fill this need.

“Today’s standalone mobile devices are severely limited in terms of compute power and battery capacity,” says Dijam Panigrahi, co-founder and COO of mobile VR and AR startup GridRaster. Panigrahi and his team aim to solve this problem: GridRaster is building an edge-based platform to power mobile VR and AR that shifts computing demands away from mobile devices.

“We are reimagining the compute and network stack at multiple layers: device, network and cloud,” Panigrahi explains.

By leveraging edge computing, GridRaster can provide the low latency (under 20 milliseconds) that AR and VR games need for a user-friendly experience. The technology also reduces the processing demanded of the core network by handling data locally, helping game providers bring down overall costs through lower data storage requirements and limited transmission costs.

Without widespread 5G and edge servers, Panigrahi says, the current cloud and network infrastructure is “ill-equipped” to handle the demanding nature of truly immersive extended reality (XR) experiences. GridRaster hopes to encourage an industry shift toward edge computing as a solution.

Games on the Edge Showing Early Signs of Success

To see what’s possible with an edge-powered mobile VR game running on 5G, consider Project Split Render, a space shooter game created by Envrmnt, an AR/VR brand owned by Verizon.

One of Envrmnt’s key technologies is its split rendering solution, which divides AR/VR rendering tasks between a mobile device and a low-latency compute server on the edge of a 5G network, rather than sending information back to the cloud. Project Split Render, demoed for the first time earlier this year,[9] showcases this technology and acts as a benchmark for what’s possible on the edge.
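The split-rendering idea described above can be sketched in a few lines. This is a hypothetical illustration of the division of labor, not Envrmnt’s actual API: the function names, the per-light “passes,” and the light counts are all invented for the example.

```python
# Hypothetical sketch of split rendering: the edge server does the heavy
# per-light shading work, while the device only composites the streamed
# result. Names and numbers are illustrative only.

def edge_render(scene_lights):
    """Edge server: shade the frame under every light source (heavy work)."""
    # Stand-in for GPU work: one rendering 'pass' per light source.
    return [f"pass:{light}" for light in scene_lights]

def device_compose(rendered_passes):
    """Device: cheaply combine pre-rendered passes and display the frame."""
    return "+".join(rendered_passes)

# Far beyond the two-light limit of rendering entirely on a phone.
lights = [f"light{i}" for i in range(80)]
frame = device_compose(edge_render(lights))
print(frame.count("pass:"))  # 80 passes composed on-device
```

The design point is that the expensive loop over light sources runs on the edge server, so the device’s workload stays constant no matter how many lights the scene contains; only the composition step scales with what is streamed back.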

“It graphically highlights the differences of working with and without an edge-rendering solution,” says Raheel Khalid, CTO and chief engineer at Envrmnt by Verizon. Rendering locally on a traditional mobile headset, gamers can include just two light sources. Project Split Render, on the other hand, provides over 40 times that, processing them all at once while maintaining a standard frame rate.

“Without this edge solution, providing compute, vision, or rendering would take between 80 and 200 milliseconds to communicate with a central server,” Khalid adds. “With edge solutions and 5G, we can provide much lower roundtrip latency time.”

Unlike most of today’s VR and AR experiences, Project Split Render isn’t limited by the device it’s played on. “With our solution, the end device is only a few hops away to an edge appliance that can handle the computationally intensive calculations,” says Khalid. “Once completed, data is quickly streamed back to the device, which now only needs to compose the visuals and display them in real-time.”

Project Split Render’s demonstration of what will be possible when edge and 5G become mainstream earned Envrmnt awards at a recent conference in Berlin.[10]

As our data and processing infrastructure matures, it’s only a matter of time before mobile XR games look and perform on par with, or better than, their console counterparts, heralding a new era in mobile gaming that isn’t limited by the device in your hand.

This content is produced by WIRED Brand Lab in collaboration with Western Digital Corporation.

