


Streaming and Cloud Computing Endanger Modding and Game Preservation

Services like Google's Stadia seem convenient, but they could completely change the past and future of video games.

Rich Whitehouse is a video game preservationist and a veteran programmer in the video game industry. This story was originally published on Whitehouse's website.

Yesterday, Google announced Stadia, a video game streaming and development platform that leverages the tech giant's data centers around the world to let users play high-end video games on practically any screen, with no need for a console or PC. We still don't know what Stadia will cost users, what games it will offer, and if it will even work as advertised, but in light of Google's announcement, it's worth taking a hard look at what this really means for the future—and more importantly, the history—of video games.


It should be understood up front that a streaming service for video games is every major video game publisher's dream come true. It promises an absolute solution to piracy (users can't copy games hosted on a Google server), and it offers publishers absolute control over video game content.

This sounds absolutely lovely if you're in the business of publishing and/or marketing video games. You can now dictate what users are allowed to do with your game's content, and whether any sort of modding will be allowed at all. If modding is facilitated, likely through your own backend, mods can be censored and curated at your whim. You don't need to worry about client support issues, which are a nightmare on PC and, to a slightly lesser degree, mobile. You decide how long your game remains available to users, keeping it on the cloud only for as long as it pays for its presence there, which gives you even more predictable post-launch support costs. And your game is playable on every platform that can support your streaming solution, with no need to create ports or to wrestle with the hardware constraints of each of those ports.

Of course, for users and historians alike, this has a number of dreadful implications. We've been lucky that, up to this point, streaming has not been an especially viable way to play video games. Stadia takes a fairly naive approach to streaming (and, most assuredly, there are far more sophisticated solutions on the horizon which do not merely rely on audio/video encoding and decoding), paired with a controller that interfaces directly with the server to help eliminate some of the input latency that client software and hardware would otherwise contribute. It is, however, still going to be far from ideal for games and genres that demand minimal input latency and fast response times.


There are still other concerns about the viability of streaming video games, particularly around data usage and connection speeds. With time, however, these issues will affect a small enough share of users that any cost-benefit analysis will work out overwhelmingly in favor of streaming.

So let's sidetrack a bit to explore the technical options we have ahead of us in the realm of streaming and cloud computing. This will probably be enough to force you to reconsider, if you've remained stubbornly planted in the "streaming is never going to replace dedicated gaming hardware" camp. The gradual shift to "games that live entirely on the cloud" can, and will, roll out over many iterations and in many forms.

As we've seen, we're starting out by naively streaming video games in the same way that we stream static audio and video content. For the games, genres, and demographics where this still isn't quite viable, however, we might start out by simply offloading important but deferrable computations for the game to the cloud. This could conceivably apply to something like path-finding, or perhaps some load-time computation which might be especially useful for games which leverage randomly-generated content.
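To make that concrete, here is a minimal sketch of what offloading a deferrable computation like path-finding to a cloud service could look like. Everything here is an assumption for illustration: the endpoint URL, the request and response format, and the `NPC` structure are all hypothetical, not anything Google or any publisher has described.

```python
# Hedged sketch: a game client handing path-finding off to a hypothetical cloud
# service while the simulation keeps running. The URL and payload are invented.
import json
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass, field

PATHFIND_URL = "https://cloud.example.com/pathfind"  # hypothetical endpoint

@dataclass
class NPC:
    id: int
    cell: tuple
    target_cell: tuple
    needs_path: bool = True
    path: list = field(default_factory=list)

def request_path(start, goal):
    """Blocking request for a path between two grid cells; runs on a worker thread."""
    payload = json.dumps({"start": start, "goal": goal}).encode("utf-8")
    req = urllib.request.Request(PATHFIND_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=2.0) as resp:
        return json.loads(resp.read())["path"]     # e.g. [[0, 0], [1, 0], ...]

executor = ThreadPoolExecutor(max_workers=4)
pending = {}  # npc.id -> Future

def update_npc(npc):
    """One frame of game logic: kick off or consume a deferred path-finding result."""
    if npc.needs_path and npc.id not in pending:
        # Fire the request and carry on; the result is deferrable by design.
        pending[npc.id] = executor.submit(request_path, npc.cell, npc.target_cell)
        npc.needs_path = False
    future = pending.get(npc.id)
    if future is not None and future.done():
        npc.path = future.result()                 # arrives a few frames later
        del pending[npc.id]
```

The point of the sketch is the shape of the change, not the specifics: the path-finding logic and data now live somewhere the player can never inspect.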

As the practice of offloading computations to the cloud becomes more commonplace, it lays the groundwork for games in general to be always-connected, and it helps define the kind of bandwidth requirements one can expect to need in order to play the latest and greatest games. For games that absolutely cannot make the sacrifice in general responsiveness and input latency that traditional streaming requires, I expect this is the immediate path forward. As core behavior and functionality become tied to cloud computations, the associated data gradually stops existing on the client, and modding or analyzing anything relating to that data becomes impossible.



Next, we progress to more sophisticated streaming solutions. These solutions require custom client software, but it is software that will, eventually, have relatively light processing requirements. I'm probably influenced by my past as a Quake-engine developer here, but it seems obvious to me that you might end up with the same sort of client/server separation we saw in Quake 3. The cloud effectively runs the game, and the client becomes something akin to a more generalized Quake 3 client: a glorified geometry processor with just enough code for processing local input to minimize the perception of latency. This will certainly come with some difficulties, but as reliance on cloud computing becomes more commonplace, it seems a likely direction for developers, no doubt at the behest of their publishers, to begin to take.
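As a rough illustration of that split, here is what such a thin client's frame loop might look like in sketch form: the cloud runs the actual game and sends down already-processed scene snapshots, while the client applies only local input prediction to the camera so the view feels responsive. Every name here (`connection`, `renderer`, `try_receive_snapshot`, and so on) is a hypothetical interface, not a real protocol.

```python
# Hedged sketch of a "glorified geometry processor" client, loosely modeled on
# the Quake 3-style client/server split described above. All interfaces are
# hypothetical stand-ins.
import time

def run_thin_client(connection, renderer, input_device, tick_rate=60):
    predicted_view = {"yaw": 0.0, "pitch": 0.0}
    last_snapshot = None
    frame_time = 1.0 / tick_rate

    while True:
        start = time.monotonic()

        # 1. Sample local input and send it upstream; the cloud runs the real game.
        user_input = input_device.poll()
        connection.send_input(user_input)

        # 2. Apply the same input to a local copy of the camera immediately, so the
        #    view turns this frame rather than a network round-trip later.
        predicted_view["yaw"] += user_input.mouse_dx * 0.1
        predicted_view["pitch"] += user_input.mouse_dy * 0.1

        # 3. Adopt a new snapshot (already-culled, possibly simplified geometry)
        #    if one has arrived; otherwise keep drawing the previous one.
        snapshot = connection.try_receive_snapshot()
        if snapshot is not None:
            last_snapshot = snapshot

        # 4. Render the latest server geometry under the locally predicted camera.
        if last_snapshot is not None:
            renderer.draw(last_snapshot.geometry, predicted_view)

        # Crude frame pacing.
        time.sleep(max(0.0, frame_time - (time.monotonic() - start)))
```

Note what the client never holds in this arrangement: no game logic, no full asset set, just whatever geometry the server chose to send for the current frame.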

From here, we could get even more sophisticated: extrapolating from the last rendered image received from the server by re-projecting it, expending only minimal client computation on additional fragment processing, leveraging only the already-projected (and possibly simplified) geometry from the last frame received from the cloud, and so on. All of this ultimately removes more data and processing burden from the client, until we eventually arrive at a solution that offers all of the perceived responsiveness we expect from video games today with only minimal hardware requirements.
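The extrapolation described above boils down to reusing the last frame you have: keep its depth along with the camera that produced it, and when a fresher frame hasn't arrived, re-project those pixels under the current camera. Here is a minimal CPU-side sketch with numpy; a real client would do this on the GPU, fill the resulting holes, and lean on whatever simplified proxy geometry the server supplied.

```python
# Hedged sketch: re-project the last frame received from the cloud under the
# client's current camera. Real clients would do this on the GPU, fill holes,
# and use whatever simplified proxy geometry the server sent down.
import numpy as np

def reproject(depth, old_view_proj, new_view_proj):
    """For each pixel of the old frame, return its (x, y) location in the new view.

    `depth` is the old frame's depth buffer in normalized device coordinates;
    the two matrices are the 4x4 view-projection transforms of the old and new
    cameras.
    """
    height, width = depth.shape
    xs, ys = np.meshgrid(np.linspace(-1, 1, width), np.linspace(-1, 1, height))
    ndc = np.stack([xs, ys, depth, np.ones_like(depth)], axis=-1)   # (H, W, 4)

    # Unproject with the old camera, then project again with the new one.
    world = ndc @ np.linalg.inv(old_view_proj).T
    world = world / world[..., 3:4]                                 # perspective divide
    clip = world @ new_view_proj.T
    clip = clip / clip[..., 3:4]

    # Map NDC back to pixel coordinates: where each old pixel lands this frame.
    px = (clip[..., 0] * 0.5 + 0.5) * (width - 1)
    py = (clip[..., 1] * 0.5 + 0.5) * (height - 1)
    return px, py
```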


So we've established that there are a lot of ways this can happen, and that there are a lot of gradients in between "running locally" and "running on the cloud" where we don't necessarily see a change in user experience, but we do see a loss of data that exists on the client. In other words, we see a gradual loss of data that's accessible to the user.


Counter-Strike. Image: Valve

Now let's talk about why this is terrible! For most of today's games, modding isn't an especially friendly process. There are some exceptions, but for the most part, people like me are digging into these games and reverse engineering data formats in order to create tools which allow users to mod the games. Once that data starts only existing on a server somewhere, we can no longer see it, and we can no longer change it. I expect some publishers/developers to respond to this by explicitly supporting modifications in their games, but ultimately, this will come with limitations and, most likely, censorship. As such, this represents the end of an era in which we're free to dig into these games and make whatever we want out of them. As someone who got their start in game development through modding, I think this sucks. It is also arguably not a healthy direction for the video game industry to head in. Dota 2, Counter-Strike, and other massively popular games that generate millions of dollars annually all got their start as user modifications of existing video games from big publishers. Will we still get the new Counter-Strike if users can't mod their games?
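For a concrete sense of what that reverse-engineering work looks like while game data still lives on the player's own disk, here is a toy example: guess at the layout of a hypothetical archive format (a magic tag, an entry count, then fixed-size name/offset/size records) and pull its table of contents apart with `struct`. The format is invented for illustration, but this is the basic loop behind most community mod tools, and none of it is possible against data that only exists on someone else's server.

```python
# Hedged sketch of digging into a *hypothetical* game archive format with a
# magic tag, an entry count, and fixed-size directory records. The layout is
# invented for illustration; real formats are worked out by trial and error.
import struct

def read_archive_directory(path):
    entries = []
    with open(path, "rb") as f:
        magic, count = struct.unpack("<4sI", f.read(8))
        if magic != b"PAK1":                      # invented magic value
            raise ValueError("not the archive format we guessed at")
        for _ in range(count):
            # 56-byte name field, then 32-bit offset and size (assumed layout).
            name_raw, offset, size = struct.unpack("<56sII", f.read(64))
            name = name_raw.rstrip(b"\x00").decode("ascii", errors="replace")
            entries.append((name, offset, size))
    return entries

# Once the directory is understood, extracting or replacing an asset is just a
# matter of reading and writing the right byte ranges -- the basis of most mod tools.
```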


However, people will ultimately channel that creativity elsewhere, and making games with pre-made engines and template content is easier than it's ever been. So while it does undeniably suck in a lot of different ways, it's hard to argue that it's really a "sky is falling" scenario.

The bigger problem here, as I see it, is analysis and preservation. There is so much more history to a video game than the playable end result conveys. When the data and code driving a game exists only on a remote server, we can't look at it, and we can't learn from it. Reverse engineering a game gives us tons of insight into its development, from lost and hidden features to actual development decisions. Indeed, even with optimizing compilers and well-defined dependency trees which help to cull unused data out of retail builds, many of the popular major releases of today have plenty waiting to be discovered and documented. In reverse engineering Final Fantasy XV, for example, I came across countless interesting little tidbits, a couple of which I took care to document.
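As a small illustration of how that discovery work tends to start, here is a hedged sketch of one common first pass: scan a shipped executable for printable strings that look like asset paths, then compare them against the files actually present in the build. The extensions and path pattern are guesses; the interesting output is whatever is referenced but missing, which often points at renamed files, cut features, or leftovers from development.

```python
# Hedged sketch of a common first pass when hunting for cut content: scan a
# shipped binary for strings that look like asset paths, then compare them
# against the files actually present. Extensions and path pattern are guesses.
import re
from pathlib import Path

ASSET_PATTERN = re.compile(rb"[A-Za-z0-9_/\\.-]{6,}\.(?:dds|fbx|wav|lua)")

def referenced_assets(binary_path):
    data = Path(binary_path).read_bytes()
    return {m.group(0).decode("ascii") for m in ASSET_PATTERN.finditer(data)}

def missing_assets(binary_path, data_dir):
    shipped = {p.as_posix().lower() for p in Path(data_dir).rglob("*") if p.is_file()}
    leads = []
    for ref in sorted(referenced_assets(binary_path)):
        norm = ref.lower().replace("\\", "/")
        # Anything referenced by the code but absent from the shipped data is a
        # lead: a renamed file, a cut feature, or a leftover from development.
        if not any(path.endswith(norm) for path in shipped):
            leads.append(ref)
    return leads
```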

We're already living in a world where the story of a game's development remains largely hidden from the public, and the bits that trickle out through presentations and conferences are well-filtered, and often omit important information merely because it might not be well-received, might make the developer look bad, etc. This ultimately offers up a deeply flawed, relatively sparse historical record.


While reverse engineering doesn't completely solve this problem, it gives us, both present and future, the ability to dig deeper and discover more data, more history, more things that help paint a better picture of how a game was made and how it functions. This might support or contradict the higher level accounts of the game's development, and I have a feeling that the ability to dig deeper into these games will be far more valued by future generations than it is by today's.

So, you might now be thinking along the lines of "whatever, publishers will take care to archive this stuff for future generations, users don't need access to it for that crap." Well, let me tell you a little bit about that!


Before I start outright slamming publisher archival practices, I will note that things are a lot better today than they were at the turn of the century, when I got into the industry myself. However, they are still not great. I personally saw the source code to a popular title lost because a publisher did not implement any sort of hardware redundancy when it was initially backed up. I saw the source history of another project, involving one of the best-known IPs in the industry, lost relatively recently thanks to an IT blunder. I can't name names or projects here due to the personal liability (and hard feelings, most likely) it would create, but suffice it to say, publishers and developers really, really suck at keeping their stuff safe. I cannot really express how much it pains me to know that there are countless prototypes, and even titles that got canned in production, which have been completely lost to the ages thanks to haphazard or non-existent archival practices.


So let's think about what that means for games that only ever live on cloud servers. Ignoring the really unpleasant fact that no member of the general public will ever be able to play a game again when/if it's removed from an active service (to say nothing of what happens when/if the service perishes), we're relying completely on the publisher to archive this game properly for, hopefully, someone in the future to be able to come back and somehow access it.

Well, suppose that the game becomes a liability. You may or may not be aware, but potential legal liability is a primary reason publishers don't currently like to release source code for older titles. They're quite paranoid that something might be lingering in there that could cause a legal issue, or perhaps just some kind of community backlash. They don't especially want to put resources into having the code vetted by professionals just to make a release that is still something of a gamble, for no real tangible (monetary) return.

What if we can now apply this same approach to an entire game? The game's data was never in public hands, and there's no need to risk something untoward being seen by letting the public get hold of it. What if there's a texture from another game in there? What if the binary links a library which the publisher didn't have a proper license for? Well, just delete the data. No more problem! After all, no one else will ever be the wiser, and who cares about historical preservation anyway?

So, yeah, this is a huge concern for data that only exists on a cloud. There's a very real possibility that it might be forever lost to the ages. People deserve to be able to look back and learn from these games, and the people who made them deserve to have their work persist for longer than a publisher's bottom line.

What's the solution? I don't see a clear one, or one that could work without a lot of cooperation from publishers and developers. It seems inevitable that the industry will move in this direction, and we can only hope that by the time we arrive, we've set forth some good standards and open-data policies that everyone can adhere to. Unfortunately, though, this is one of those issues that's really hard to rally the masses around, and the industry has shown over decades that it is not great at assigning lasting value to its output. So I hope that video game historians aren't one day reading this while lamenting how badly we screwed up.

Follow Rich Whitehouse on Twitter.