This Monday at its GTC conference, NVIDIA presented DLSS 5, which it promotes as a new form of “Neural Rendering”. Reactions on social media were immediate and critical, with users calling it a real-time AI slop filter for games.
“Social Media Realism”
The backlash against DLSS 5 is perhaps best illustrated through meme production. At first, users on X began posting examples of the existing meme “RTX ON / RTX OFF”, which had spread after the 2018 announcement of the GeForce RTX line and its real-time ray tracing technology.
Only hours later, the meme itself was inverted into “DLSS OFF / DLSS ON”, where, as Know Your Meme documented, the comparison image for DLSS OFF shows “a normal, clean, or desirable image” while “DLSS ON shows a distorted, AI-generated, or ‘AI slop’ version.”
On social media, users were quick to point out that the results of DLSS 5 look less like photorealism and more like a filter, with one user on YouTube calling it “social media realism”. One commenter noted that the characters look as if they were standing in front of a ring light, while others drew a direct line to smartphone camera processing: especially in dimly lit scenes, the frames show the telltale “HDR” look now common in smartphone photography.
The “realism” offered by generative AI is, after all, as Roland Meyer has argued, “a second-order generic realism, fueled and mediated by the automated appropriation of generic visual content”.
In the context of DLSS 5, this became most apparent in the way it transforms the appearance of characters. In particular, changes to the character Grace Ashcroft from Resident Evil Requiem provoked a strong and critical reaction. With DLSS 5 on, her face appears significantly transformed, with fuller lips and sharper cheekbones that match popular beauty filters on social media, which many interpreted as surfacing the model’s bias towards particular beauty standards. The discussion quickly shifted as some users came to NVIDIA’s defense, arguing that these changes in appearance could be explained entirely by changes in lighting. Yet regardless of whether “neural rendering” only changes the lighting or also alters or filters the underlying asset, the aesthetic preferences for the kind of “realism” offered by generative AI are clearly apparent.
In a first response to the criticism, NVIDIA stressed that developers would be offered granular control in the form of “intensity sliders” and would be able to distinguish between characters, objects and environmental elements, as well as grading effects. Still, while developers can adjust the intensity of the neural shading, and potentially even fine-tune the model, they cannot fully alter the model’s learned conception of what constitutes photorealistic skin, hair, or material, or what kind of lighting it will infer.
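To make concrete what such per-category controls could look like (and what they cannot do), here is a minimal, purely hypothetical sketch in Python; the names are invented for illustration and do not correspond to NVIDIA’s actual SDK.

```python
# Hypothetical sketch of slider-style "granular control" for a developer.
# None of these names come from NVIDIA's Streamline SDK; they only illustrate
# the limit of such controls: intensity is adjustable, the learned prior is not.

from dataclasses import dataclass

@dataclass
class NeuralShadingConfig:
    characters: float = 0.3    # 0.0 = engine shading only, 1.0 = full neural re-shade
    objects: float = 0.6
    environment: float = 0.8
    grading: float = 0.5       # overall tone-mapping / "look" intensity

# A developer can dial characters down to zero entirely...
config = NeuralShadingConfig(characters=0.0)
# ...but wherever the model is applied, the sliders only scale its output;
# they do not change what the model has learned to consider photorealistic.
```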
The controls for DLSS 5, once released, will be made available through NVIDIA’s poetically named “Streamline” SDK. It is, after all, precisely the fear of “streamlining” that has motivated much of the criticism. Critics argue that neural rendering represents a further transfer of aesthetic authority from developers to generative AI. This criticism is reinforced by the technical reality of DLSS 5 as a single unified model, without any specific training on the context of the game in question.
Between Physical Simulation and Probabilistic Computing
Of course, DLSS is not a new technology, and NVIDIA has been using AI to improve rendering for years. The previous version, DLSS 4.5, was largely used to generate additional frames for smoother motion and to increase resolution through image reconstruction, where the goal was still to faithfully reproduce what the engine would have rendered at full resolution.
With DLSS 5, NVIDIA introduced a “neural shading” layer, which no longer just reconstructs but actually re-shades the frame using AI-inferred lighting and materials, going beyond the aesthetic affordances of the game engine alone. On stage, NVIDIA CEO Jensen Huang introduced Neural Rendering as the combination of controllable 3D graphics – structured data he calls the “ground truth” of virtual worlds – with generative AI as a form of “probabilistic computing”. In this sense, DLSS 5 uses the deterministic 3D graphics of the game engine as geometric grounding for generative AI.
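To make the architectural shift more tangible, the following is a deliberately toy sketch of the two pipelines. It assumes nothing about NVIDIA’s actual implementation: every function name is hypothetical and the learned models are replaced by placeholders.

```python
# Toy contrast between reconstruction (DLSS up to 4.x, conceptually) and
# neural re-shading (DLSS 5 as described on stage). All names are hypothetical;
# the "models" are placeholders standing in for learned networks.

import numpy as np

def engine_render(resolution):
    """Stand-in for the deterministic engine pass: a rasterized frame plus
    auxiliary buffers (depth, normals) that the engine computes exactly."""
    h, w = resolution
    frame = np.zeros((h, w, 3))
    g_buffer = {"depth": np.ones((h, w)), "normals": np.zeros((h, w, 3))}
    return frame, g_buffer

def reconstruct(frame, target_resolution):
    """Upscaling / frame generation: the training objective is fidelity to
    what the engine itself would have rendered at full resolution."""
    h, w = target_resolution
    return np.resize(frame, (h, w, 3))   # placeholder for the learned upscaler

def neural_shade(frame, g_buffer, intensity=1.0):
    """Neural shading: geometry and materials condition a generative model,
    but lighting and surface appearance are inferred from training data."""
    inferred = np.clip(frame + 0.1, 0.0, 1.0)  # placeholder for the learned re-shading
    return (1 - intensity) * frame + intensity * inferred

low_res, g_buffer = engine_render((720, 1280))
upscaled = reconstruct(low_res, (2160, 3840))                # reconstruction only
re_shaded = neural_shade(upscaled, g_buffer, intensity=0.5)  # re-shading on top
```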
Short snippet from Huang’s introduction of DLSS 5, originally posted on X by user @NikTek
Where physically based rendering, as a deterministic simulation, aims to approximate how light actually interacts with materials and is in principle falsifiable, neural rendering operates through statistical inference, generating plausible representations of lighting from patterns learned in the training data. While DLSS 5, thanks to its “grounding” in 3D graphics, does not appear to produce entirely new or unexpected geometries, even small surface deviations can have an unsettling effect. The resulting frames are no longer only the result of a deterministic physical simulation, but also an expression of the training data, which necessarily encodes its own cultural assumptions about what “realistic” appearance looks like.
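The epistemic difference can be illustrated with a schematic comparison, again purely hypothetical: a Lambertian diffuse term, whose output follows deterministically from scene parameters, versus a stand-in “learned” shader whose output additionally depends on weights derived from a training set.

```python
# Schematic contrast: deterministic shading vs. statistical inference.
# Real physically based rendering and neural shading are far more complex;
# this only illustrates where the training data enters the picture.

import numpy as np

def lambert_shade(normal, light_dir, albedo, light_color):
    """Deterministic: the result follows from scene parameters alone and can
    be checked against how light actually behaves on a diffuse surface."""
    n_dot_l = max(float(np.dot(normal, light_dir)), 0.0)
    return albedo * light_color * n_dot_l

def learned_shade(features, weights):
    """Statistical: the result depends not only on the scene but on weights
    learned from images, i.e. on what the dataset deemed plausible."""
    return 1.0 / (1.0 + np.exp(-weights @ features))  # a single toy linear layer

normal = np.array([0.0, 0.0, 1.0])
light = np.array([0.0, 0.0, 1.0])
print(lambert_shade(normal, light,
                    albedo=np.array([0.8, 0.6, 0.5]),
                    light_color=np.array([1.0, 1.0, 1.0])))

rng = np.random.default_rng(0)
weights = rng.normal(size=(3, 6))        # stands in for what training produced
features = np.concatenate([normal, light])
print(learned_shade(features, weights))  # plausible, but shaped by the data
```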
At a later press Q&A, Huang was confronted with the criticism and reiterated how DLSS 5 “fuses controllability of the geometry and textures and everything about the game with generative AI”, only to then add: “This is very different than generative AI; it’s content-control generative AI. That’s why we call it neural rendering.”
While NVIDIA thus tries to distance itself somewhat from “generative AI”, DLSS 5 also reads like a striking admission from a company that spent the last decade attempting to simulate real-world physics ever more faithfully. Here, NVIDIA’s own press release is quite telling: “Real-time rendering cannot bridge the gap to photorealism through brute force alone.”
NVIDIA, however, is keen to underline the complementary use of physical simulation and generative AI, or, as Huang calls it, of structured data and probabilistic computing, a pairing he argues will “repeat itself in one industry after another”.
The question remains whether, in the long run, the coupling of 3D geometry and generative AI will stay necessary at all. Already in September 2023, NVIDIA’s vice president of Applied Deep Learning Research, Bryan Catanzaro, proposed that “DLSS 10 (in the far far future) is going to be a completely neural rendering system.”
Posted by user @mrdoob on X.
After all, NVIDIA, alongside others such as DeepMind and RunwayML, is also working on an alternative to physical simulation in the form of so-called “generative world models”. World models, as NVIDIA’s vice president of Omniverse, Rev Lebaredian, argued in early 2025, “attempt to understand the physics of the world using the same technology behind large language models (LLMs). But instead of learning linguistic rules, they learn the fundamental laws of physics. This allows us to generate capabilities similar to those of language models, but for physical environments.”
For now, it seems that the coupling of deterministic, “structured” 3D data with the probabilistic inference of generative AI only functions to essentialize both approaches further. 3D simulation, as Gillian Rose reminds us, is itself a specific form of organizing the world: “neither photo-realism nor three-dimensionality see or spatialise the world in ways that innocently replicate its ‘laws’.”
© 2026 Lars Pinkwart