Artificial intelligence is all the rage these days, and it's making its way into gaming in a big way. Sony and Microsoft have only recently started building AI and machine learning into their consoles, with features like the PS5 Pro's PSSR, but PC hardware maker Nvidia has been at it for much longer. The company triggered a paradigm shift in PC gaming with its machine learning-based DLSS, and on a recent tour, it showed me its AI ambitions for the future of video games.
At a press tour in Bangalore, Nvidia gave me a look at various generative AI-powered demos for gaming and content creation. Most of them had already been shown at industry conferences like Computex, but this was my first time going hands-on with them. The demos, running on what Nvidia calls RTX AI PCs, included experiences built on existing and new large language models (LLMs) that run locally on Nvidia's RTX graphics cards. The whole suite of features makes up Nvidia ACE, a framework for creating and deploying digital humans in interactive media.
These demos all use slightly different technologies to aid content creation and gaming in various ways. In one, I talked to an in-game NPC that contextually guided me through the game's progression system; another application lets you search for documents and files on your system far faster than vanilla Windows allows. One demo uses generative AI to help you create images in a digital environment, while another wraps ChatGPT in the guise of an AI NPC.
It's a lot to take in, so I chose to focus on what matters most to me: gaming.
AI NPCs are the hot new thing Nvidia wants to focus on, and we're starting to see their first real-world implementations in existing and upcoming games. At the event, I talked to an AI NPC in Amazing Seasun Games' multiplayer title Mecha BREAK. Put simply, you can talk to Martel, the in-game mechanic in charge of your mech suits, using natural language. Ask Martel about the objective for the next mission, and she will answer promptly. Ask her what the fastest mech armor in your inventory is, and she will bring it up, ready for further customization that you can direct with voice commands. Sure, you'll have to stick to lore-accurate terms when engaging with her, but it's a pretty cool demo.
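Nvidia hasn't detailed how ACE wires a transcribed request to an in-game action, but the general pattern is familiar from LLM "tool calling": the model picks from a small set of actions the game exposes. Here's a minimal sketch of that idea; every name in it is invented, and the model call is faked so it runs offline.

```python
# Hypothetical sketch of a voice-driven NPC routing a player's words to a
# game action. This is NOT Nvidia ACE's actual interface, which isn't public.

# Actions the game exposes to the dialogue model (all names invented).
GAME_TOOLS = {
    "describe_mission": lambda state: f"Next objective: {state['objective']}",
    "open_mech_customization": lambda state: (
        f"Bringing up {state['fastest_mech']} for customization."
    ),
}

def fake_llm_route(transcript: str) -> dict:
    """Stand-in for the real model call.

    A production system would send the transcript plus a tool schema to an
    LLM and let it choose; we keyword-match so the sketch runs offline.
    """
    if "mission" in transcript or "objective" in transcript:
        return {"tool": "describe_mission"}
    if "fastest" in transcript:
        return {"tool": "open_mech_customization"}
    return {"tool": None}

def handle_player_voice(transcript: str, state: dict) -> str:
    call = fake_llm_route(transcript.lower())
    tool = GAME_TOOLS.get(call["tool"])
    if tool is None:
        return "Say again, pilot?"  # fallback dialogue line
    return tool(state)

state = {"objective": "escort the convoy", "fastest_mech": "the Falcon"}
print(handle_player_voice("What's the fastest mech armor I own?", state))
```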
Is that a feature I see myself using any time soon? No; the novelty wears off after the first couple of prompts. These demos look cool, but gamers care more about practicality. I'd much rather customize my in-game character and experience with input mechanics I'm familiar with, like a mouse and keyboard, than talk my way through an NPC. And if I need assistance, I'm more than happy to head to the internet, where plenty of helpful guides and forums offer answers that haven't been fine-tuned by the game's creators to suit their own needs.
Perfect World Games' Legends tech demo was a similar experience. In it, you speak to Yun Ni, a member of a jungle tribe. You can talk to her naturally, and she can see and recognize real-world objects through your computer's webcam. While there is no game surrounding the demo itself, it is impressive how the developers have sandboxed GPT-4 with a set of instructions governing the world Yun Ni inhabits. Ask her about the best cars from Tesla and she'll reply, "What's a car?" How would she know? She lives in a jungle. Hold up an object she might recognize, like a knife, and she'll get excited to talk about its applications in the jungle.
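Perfect World hasn't published its guardrails, but the behavior matches what a strict system prompt produces. Assuming a vanilla OpenAI API setup, with rules text entirely of my own invention, it might look roughly like this:

```python
from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY set

client = OpenAI()

# Invented rules; Perfect World's actual instructions aren't public.
YUN_NI_RULES = """You are Yun Ni, a member of a jungle tribe.
You know only what exists in your jungle: plants, animals, simple tools,
weather, and tribal customs. If asked about anything outside that world
(cars, phones, brands), react with genuine confusion and curiosity.
Never break character or mention being an AI."""

def ask_yun_ni(player_line: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": YUN_NI_RULES},
            {"role": "user", "content": player_line},
        ],
    )
    return response.choices[0].message.content

print(ask_yun_ni("What are the best cars from Tesla?"))
# Expect something in the flavor of: "Cars? What is a... car?"
```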
Well, her replies read as excited, but her voice doesn't sell it. She sounds like one of those auto-narrated text-to-speech TikTok videos, so you can guess how lifeless her delivery is. The same goes for Mecha BREAK's Martel, and I keep asking myself why I should care about any of this.
There is a short delay between your command or inquiry and Yun Ni's answer as the demo sends data to GPT-4's servers. You're only waiting a couple of seconds, but it's enough to remind you that all of this is exactly what it appears to be: a gimmick. Microsoft also showed off a demo of its Copilot AI assisting players inside Minecraft earlier this year, though we've yet to see the feature ship in the current build of the game.
There's an argument to be made that this new wave of AI NPCs, which you interact with primarily through voice, could be a boon for accessibility. We're not yet at the point where games can be navigated using natural-language voice input alone, but Nvidia ACE is paving the way for that future. One could also argue that even if hardcore gamers never adopt these features, they may still reach the casual masses on lower-end hardware.
Speaking of hardware, Nvidia's technical product marketing manager, John Gillooly, told us that all of these models run locally. That checks out: every demo except Legends ran on systems disconnected from the internet, using consumer hardware like the RTX 4080. The only cost? VRAM. We weren't given specific numbers, but don't count on holding long, meaningful conversations with these NPCs on an RTX 4050.
One demo that was curiously absent from the event was Project G-Assist. Nvidia demoed the AI assistant in its prototype phase at Computex earlier this year, but I didn't see it in action. For those unaware, G-Assist is an AI-powered chatbot that is fed information about your system's specs and the game you're playing. It works much like Microsoft's Copilot for gaming: you ask questions about the game and get contextual answers. G-Assist is also meant to help gamers optimize their systems in minimal steps. The closest thing I saw at the event was ChatRTX.
ChatRTX is a local tool that lets you pick from publicly available models, including Llama 2, Mistral 7B, and CLIP, and personalize them for your system. Essentially, you build a personal chatbot with access to your machine, then ask it to find documents and photos or retrieve critical system information. It's a local take on what Apple is advertising with its big Apple Intelligence makeover on the iPhone, except you can customize it to your heart's content.
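Under the hood, this kind of document search is retrieval-augmented generation: index your files, rank them against the question, and hand the winners to a local model as context. The toy sketch below illustrates only the ranking step, with word-overlap scoring standing in for real embeddings; it is not ChatRTX's actual pipeline.

```python
# Toy retrieval step: rank local .txt files against a question.
# Real systems use embedding models instead of bag-of-words counts.
from collections import Counter
from pathlib import Path
import math

def vectorize(text: str) -> Counter:
    """Bag-of-words stand-in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def find_relevant_files(question: str, folder: str, top_k: int = 3):
    """Rank .txt files under `folder` by similarity to the question."""
    q = vectorize(question)
    scored = [
        (cosine(q, vectorize(p.read_text(errors="ignore"))), p)
        for p in Path(folder).expanduser().glob("**/*.txt")
    ]
    return sorted(scored, key=lambda s: s[0], reverse=True)[:top_k]

# A real pipeline would now feed the top-ranked chunks into the prompt of
# a locally hosted model (Llama 2, Mistral 7B) to generate the answer.
for score, path in find_relevant_files("graphics driver settings", "~/Documents"):
    print(f"{score:.2f}  {path}")
```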
While they're all impressive on their own, what's missing is a sense of coherence. With Microsoft also promising several context-aware AI features in its upcoming PCs and OS releases, why should I care about Nvidia's? And if most of these features only run on Nvidia GPUs, will developers bother integrating them deeply into their projects if it means leaving AMD and Intel users in the dust?
There are a lot of questions surrounding the future of AI in gaming and general computing, and every company has a slightly different answer. But even if the hardware and software makers align on their goals and present a unified vision to the end gamer, will consumers even care? At the end of the day, I just want my PC to do one thing: run my games well. I play games to avoid talking to people in the real world, and I'm not about to start talking to video game characters instead. At least not until they feel a lot more human, and that's probably going to take a while.