
The AI generates gaming “fan fiction” to expand a game’s virtual world.
I admit, since middle school, I’ve spent most of my downtime immersed in video games. There are the quintessential epics: Resident Evil, Final Fantasy, World of Warcraft, and Fortnite. And then there are some indies close to my heart—a game that simulates a wildfire watcher in a forest, a road trip adventure, or one that uses portals to connect improbable physical spaces.
I’m not the only one sucked into games. The multi-billion-dollar video game industry is now bigger than Hollywood. And designers are constantly scrambling to expand their digital worlds to meet endless expectations for new content.
Now, they may have a nifty helper.
This week, Microsoft Research released Muse, an AI that generates diverse new scenarios within a game. Like ChatGPT and Gemini, Muse is a generative AI model. Trained on roughly 500,000 human gameplay sessions from Bleeding Edge, a multiplayer brawler by Microsoft-owned Ninja Theory, Muse can dream up facsimiles of gameplay in which characters obey the game’s internal physical rules and respond to the associated controller actions.
The team is quick to add that Muse isn’t intended to replace human game designers. Rather, true to its name, the AI can offer inspiration for teams to adopt as they choose.
“In our research, we focus on exploring the capabilities that models like Muse need to effectively support human creatives,” wrote study author Katja Hofmann in a blog post.
Muse was trained on only one game and can only produce scenarios based on Bleeding Edge. But because the AI learned from human gameplay data alone, with no built-in model of the game’s physics, the same approach could work for other games, as long as there’s enough data for training.
“We believe generative AI can boost this creativity and open up new possibilities,” wrote Fatima Kardar, corporate vice president of gaming AI at Microsoft, in a separate blog post.
Whole New Worlds
Generative AI has already swept our existing digital universe. Now, game developers are asking if AI can help build wholly new worlds too.
Using AI to produce coherent video footage of gameplay isn’t new. In 2024, Google introduced GameNGen, which, according to the company, is the first game engine powered entirely by neural networks. The AI recreated the classic video game Doom without peeking at the game’s original code. Instead, it repeatedly played the game and gradually learned how hundreds of millions of small decisions changed the game’s outcome. The result is an AI-based copy that can be played for up to 20 seconds with all its original functionality intact.
Modern video games are a lot harder for an AI to tackle.
Most games are now in 3D, and each has its own alluring world with a set of physical rules. A game’s maps, non-player characters, and other designs can change with version updates. But how a character moves inside that virtual world—that is, how a player knows when to jump, slide, shoot, or tuck behind a barrier—stays the same.
To be fair, glitches are fun to exploit, but only if they’re few and far between. If a game’s physics, however improbable in real life, constantly breaks, the player quickly loses their sense of immersion.
Consistency is just part of the gaming experience a designer needs to think about. To better understand how AI could potentially help, the team first interviewed 27 video game designers from indie studios and industry behemoths across multiple continents.
Several themes emerged. One was about the need to create new and different scenarios that still maintain the framework of the game. For example, new ideas need to fit not only with the game’s physics—objects shouldn’t pass through walls—but also its style and vibe so they mesh with the general narrative of the game.
“Generative AI still has kind of a limited amount of context,” one designer said. “This means it’s difficult for an AI to consider the entire experience…and following specific rules and mechanics [inside the game].”
Others emphasized the need for iteration, revisiting a design until it feels right. An assistant AI should therefore be flexible enough to incorporate designer-proposed changes over and over. Divergent paths were also a top priority: if a player chooses different actions, each should lead to distinct and meaningful consequences.
WHAM
Based on this feedback, the team created their World and Human Action Model (WHAM)—nicknamed Muse. Each part of the AI was carefully crafted to accommodate the game designers’ needs. Its backbone algorithm is similar to the one powering ChatGPT and has previously been used to model gaming worlds.
The team then fed Muse human gameplay data gathered from Bleeding Edge, a four-versus-four team brawler set in a 3D world. From battle videos and the matching controller inputs, amounting to the equivalent of seven years of continuous play, the AI learned how to navigate the game.
When given a prompt, Muse could generate new scenarios in the game and their associated controller inputs. The characters and objects obeyed the game’s physical laws and branched out in new explorations that matched the game’s atmosphere. Newly added objects or players stayed consistent through multiple scenes.
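In broad strokes, models like this treat gameplay as one long sequence of interleaved game frames and controller actions, and a single model predicts the next item at each step. The toy Python sketch below illustrates that autoregressive pattern only; it is not Microsoft’s implementation, and the "model" here is a fixed stand-in rule rather than a trained network.

```python
# Toy illustration of an autoregressive world-and-action model:
# a sequence of interleaved frame tokens and controller-action tokens
# is extended one token at a time.

def toy_model(sequence):
    """Stand-in for a learned model: given the token history, return the
    next token. A real system would sample from a neural network."""
    if not sequence or sequence[-1].startswith("action"):
        return f"frame_{len(sequence) // 2}"   # next: a predicted game frame
    return f"action_{len(sequence) // 2}"      # next: a predicted controller input

def generate(prompt, steps):
    """Autoregressively extend a prompt of interleaved frame/action tokens."""
    sequence = list(prompt)
    for _ in range(steps):
        sequence.append(toy_model(sequence))
    return sequence

# Start from one frame and one action, then roll the "game" forward.
rollout = generate(["frame_0", "action_0"], steps=4)
```

Because frames and actions live in one sequence, the same prompt can be rolled forward many times to produce divergent continuations, which is how a designer could explore branching scenarios.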
“What’s groundbreaking about Muse is its detailed understanding of the 3D game world, including game physics and how the game reacts to players’ controller actions,” wrote Kardar.
Not everyone is convinced the AI could help with gaming design. Muse requires tons of training data, which most smaller studios don’t have.
“Microsoft spent seven years collecting data and training these models to demonstrate that you can actually do it,” Georgios Yannakakis at the University of Malta told New Scientist. “But would an actual game studio afford [to do] this?”
Skepticism aside, the team is already pursuing new uses for the technology. One is to “clone” classic games that can no longer be played on current hardware. According to Kardar, the team wants to one day revive such nostalgic games.
“Today, countless classic games tied to aging hardware are no longer playable by most people. Thanks to this breakthrough, we are exploring the potential for Muse to take older back catalog games from our studios and optimize them for any device,” she wrote.
Meanwhile, the technology could also be adapted for use in the physical world. For example, because Muse “sees” environments, it could potentially help designers reconfigure a kitchen or play with building layouts by exploring different scenarios.
“From the perspective of computer science research, it’s pretty amazing, and the future applications of this are likely to be transformative for creators,” wrote Peter Lee, president of Microsoft Research.
The post This Microsoft AI Studied 7 Years of Video-Game Play. Now It Dreams Up Whole New Game Scenarios. appeared first on SingularityHub.