
On Neural Rendering

Realism and "Ground Truth" between Physical Simulation and Generative AI 






This Monday at its GTC Conference, NVIDIA presented DLSS 5, which it promotes as a new form of “Neural Rendering”. Reactions on social media were immediate and critical, calling it a real-time AI slop filter for games. For NVIDIA, however, DLSS 5 represents something more significant in its pursuit of simulating the physical world: according to NVIDIA CEO Jensen Huang, we are supposedly at “a GPT moment for graphics”. DLSS 5 combines the structured 3D data of the game engine with generative AI in order to obtain controllable, adjustable geometries overlaid with AI-inferred “realism”. In this sense, “Neural Rendering” marks the beginning of a slow concession: away from the physical simulation approach NVIDIA has pursued for the past decade, towards a very different form of virtual world-making in which generative AI is increasingly front and center.

“Social Media Realism”


The backlash against DLSS 5 is perhaps best illustrated through meme production. At first, users on X began posting examples of the existing meme “RTX ON / RTX OFF”, which spread after the 2018 announcement of the GeForce RTX line and its Real-Time Raytracing technology. 


Example of the meme RTX OFF / RTX ON


Only hours later, the meme itself was inverted to “DLSS OFF / DLSS ON”, where, as Know Your Meme documented, the comparison image for DLSS OFF shows “a normal, clean, or desirable image” while “DLSS ON shows a distorted, AI-generated, or ‘AI slop’ version.” It is indicative of how tired people have become of the specific aesthetic of “photorealism” brought forth by AI models, and of how “slop” has become a shared conceptual vocabulary for identifying and rejecting AI-generated aesthetics. Darren Allen at TechRadar fittingly observed: “The internet has a new game: invent your own new acronym for DLSS featuring the word ‘slop’”.


Comment on YouTube under a DLSS 5 demo video.


On social media, users were quick to point out that the results of DLSS 5 look less like photorealism and more like a filter, with one user on YouTube calling it “social media realism”. One commenter noted that the characters look as if they are standing in front of a ring light, while others drew a direct line to the changes in smartphone camera processing: specifically in low-lit environments, one can observe the classic effects of the “HDR” look now common in smartphone cameras. What DLSS 5 and the reactions online show is just how much “realism” is itself a shifting cultural and social construction: while some users insist images with DLSS 5 turned on look more “real”, oftentimes claiming this to be “objectively” so, others distinctly disagree, fueling intense online debates.


Example of DLSS 5 showing the HDR look


The “realism” offered by generative AI is, after all, as Roland Meyer has argued, “a second-order generic realism, fueled and mediated by the automated appropriation of generic visual content”. In this way, the supposed “realism” injected by DLSS 5 through the integration of generative AI is a recursive aesthetic effect, in which notions of “photorealism” long fostered in game engines are reinforced by visual conventions learned from internet content, stock photography and gaming itself.


A timely subtweet surfaced how other such A/B comparisons of supposed aesthetic “improvement” have gone horribly wrong. Posted by user @LeahLundqvist on X.


In the context of DLSS 5, this became most apparent in the way it transforms the appearance of characters. In particular, changes to the character Grace Ashcroft from the game Resident Evil Requiem provoked a strong and critical reaction. With DLSS 5 on, her face appears significantly transformed, with fuller lips and sharper cheekbones matching popular beauty filters on social media, which was interpreted as surfacing the model’s bias towards certain beauty standards. The discussion quickly shifted as some users came to NVIDIA’s defense, arguing that these changes in appearance could be fully explained by changes in lighting. Yet regardless of whether “neural rendering” only changes lighting or also alters or filters the underlying asset, the aesthetic preferences behind the kind of “realism” offered by generative AI are clearly apparent.


Example of DLSS 5 showing the character Grace Ashcroft in the game Resident Evil Requiem


In a first response to the criticism, NVIDIA stressed that developers would be offered granular control in the form of “intensity sliders” and would be able to distinguish between characters, objects and environmental elements, as well as grading effects. Still, while developers can adjust the intensity of the neural shading, and potentially even fine-tune the model, they cannot fully alter the model’s learned conception of what constitutes photorealistic skin, hair or materials, or of what kind of lighting it will infer.
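Since the “Streamline” SDK has not been released, any concrete interface is speculation. But the logic of a per-category “intensity slider” can be sketched as a simple blend between the engine’s output and the neural re-shading; all names and values below are hypothetical, not NVIDIA’s API:

```python
# Hypothetical brightness values per content category, for a single frame.
# ENGINE_FRAME is what deterministic rendering produces; NEURAL_FRAME is the
# AI re-shaded version (note the lifted shadows on the environment).
ENGINE_FRAME = {"character": 0.40, "object": 0.55, "environment": 0.70}
NEURAL_FRAME = {"character": 0.52, "object": 0.58, "environment": 0.64}

# A developer-tuned slider per category, in [0, 1]: keep characters close
# to the engine's look, let environments take the full neural re-shade.
INTENSITY = {"character": 0.2, "object": 0.8, "environment": 1.0}

def blend_frame(engine, neural, intensity):
    """Linearly blend engine output with neural re-shading, per category."""
    return {
        cat: (1 - intensity[cat]) * engine[cat] + intensity[cat] * neural[cat]
        for cat in engine
    }

out = blend_frame(ENGINE_FRAME, NEURAL_FRAME, INTENSITY)
```

The sketch also makes the limit of such controls visible: the slider only scales how much of the model’s output reaches the frame; it cannot change what the model has learned “realism” to look like.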

The controls for DLSS 5, once released, will be made available through NVIDIA’s poetically named “Streamline” SDK. It is, after all, precisely the fear of “streamlining” that has motivated a large part of the criticism. Critics argue that neural rendering represents a further transfer of aesthetic authority from developers to generative AI. This criticism is reinforced by the technical reality of DLSS 5 as a single unified model, without any specific training on the context of the game in question. DLSS 5, in this view, extends the platformization and homogenization already driven by game engines, introducing a new rendering layer that pushes visually distinct art directions towards a statistically derived aesthetic mean.

Between Physical Simulation and Probabilistic Computing


Of course, DLSS is not a new technology, and NVIDIA has been using AI to improve rendering for years. However, the previous version, DLSS 4.5, was largely used to generate additional frames for smoother motion and to increase resolution through image reconstruction, where the goal was still to faithfully reproduce what the engine would have rendered at full resolution.

With DLSS 5, NVIDIA introduced a “neural shading” layer, which no longer just reconstructs but actually re-shades the frame using AI-inferred lighting and materials, thus going beyond the aesthetic affordances of the game engine alone. On stage, Huang introduced Neural Rendering as the combination of controllable 3D graphics – structured data he calls the “ground truth” of virtual worlds – with generative AI as a form of “probabilistic computing”. In this sense, DLSS 5 uses the deterministic 3D graphics of the game engine as a form of geometric grounding for generative AI.
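What “geometric grounding” means can be illustrated with a toy pipeline: a deterministic engine pass produces per-pixel structured data (a G-buffer), and a stand-in for the generative model re-shades that same data with a learned “look”, without being able to change which surfaces exist. This is a minimal sketch under my own assumptions, not NVIDIA’s actual pipeline:

```python
def engine_pass(scene):
    """Deterministic rasterization: per-pixel structured data (a 'G-buffer')."""
    return [
        {"albedo": a, "facing": f}  # facing: how directly the pixel faces the light
        for a, f in scene
    ]

def engine_shade(gbuffer):
    # Physically motivated diffuse term: brightness falls off with facing angle.
    return [px["albedo"] * px["facing"] for px in gbuffer]

def neural_shade(gbuffer):
    # Stand-in for a generative model: it reads the same structured data but
    # applies a learned tonal prior -- here faked as a gamma-like lift that
    # flattens shadows, mimicking the "HDR" look critics describe. The
    # geometry (which pixels are lit at all, and in what order) is inherited
    # from the G-buffer, i.e. "grounded"; only the tonal character changes.
    return [(px["albedo"] * px["facing"]) ** 0.6 for px in gbuffer]

# Three pixels of the same material, turning away from the light.
scene = [(0.8, 1.0), (0.8, 0.5), (0.8, 0.1)]
g = engine_pass(scene)
print(engine_shade(g))  # darkens steeply towards the shadowed pixel
print(neural_shade(g))  # same ordering, but the shadows are lifted
```

Even in this toy version, the grounding constrains shape but not mood: the neural pass cannot invent a pixel the engine did not rasterize, yet it freely re-decides how dark a shadow is allowed to be.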




Short snippet from Huang’s introduction of DLSS 5, originally posted on X by user @NikTek


Where physically based rendering, as a deterministic simulation, aims to approximate how light actually interacts with materials – and is falsifiable on those terms – neural rendering operates through statistical inference, generating plausible representations of lighting based on patterns learned from training data. While DLSS 5, due to its “grounding” in 3D graphics, does not appear to produce completely new or unexpected geometries, even small surface deviations can have an unsettling effect. The resulting frames are no longer only the product of a deterministic physical simulation, but also an expression of the training data, which necessarily encodes its own cultural assumptions about what “realistic” appearance looks like.
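The contrast can be stated schematically (the notation is mine, not NVIDIA’s): physically based rendering evaluates the rendering equation, in which every term names a measurable physical quantity, while neural re-shading estimates a conditional distribution whose parameters are fitted to a training set.

```latex
% Physically based rendering: deterministic, term-by-term physical meaning
% (the rendering equation, Kajiya 1986)
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,
    (\omega_i \cdot n)\, d\omega_i

% Neural re-shading, schematically: a frame sampled from a distribution
% conditioned on the engine's structured data G (geometry, materials, lights),
% with parameters \theta fitted to a training set D of (data, image) pairs
\text{frame} \sim p_{\theta}(\,\cdot \mid G\,), \qquad
\theta = \arg\min_{\theta'} \; \mathbb{E}_{(G, I) \in D}\,
  \mathcal{L}\big(p_{\theta'}(\cdot \mid G),\, I\big)
```

The first expression is accountable to measurement; the second is accountable only to the statistics of D – which is precisely where the cultural assumptions described above enter the frame.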

At a later press Q&A, Huang was confronted with the criticism and reiterated how DLSS 5 “fuses controllability of the geometry and textures and everything about the game with generative AI”, only to then add: “This is very different than generative AI; it’s content-control generative AI. That’s why we call it neural rendering”. It is noteworthy how generative AI has become a term NVIDIA both wants to promote, given its own investments, and at the same time needs to distance itself from. The argument Huang is making is that a generative model conditioned on the given geometry and textures of a game is different from your average prompt-based image or video generation. Whether the actual model is significantly different, however – for example in regard to what it has (or has not) been trained on – remains unclear.

While NVIDIA thus tries to distance itself somewhat from “generative AI”, DLSS 5 also reads as a particularly striking admission from a company that spent the last decade attempting to simulate real-world physics. Here, NVIDIA’s own press release is quite telling: “Real-time rendering cannot bridge the gap to photorealism through brute force alone.” After all, its gaming technologies and graphics cards have long promised photorealism through the simulation of light physics, with technologies such as real-time ray tracing and path tracing, while its Omniverse platform aims at physically accurate digital twins. In a 2021 interview, Huang argued that Omniverse “has to obey the laws of physics. It has to obey the laws of particle physics, of gravity, of electromagnetism, of electromagnetic waves, such as light, radio waves. It has to obey the laws of pressure and sound. All of those things have to be obeyed.” Broadly, the assumption seemed to be that if you simulate physics accurately enough, on enough NVIDIA hardware, you would eventually end up with a photorealistic simulation of the physical world. It seems NVIDIA realised some time ago that this idea might be too ambitious, and, inspired by other approaches and its own investments, has shifted to implementing generative AI across more and more of its products.

However, it is keen on underlining the complementary use of physical simulation and generative AI – or, as Huang calls it, of structured data and probabilistic computing – which he argues will “repeat itself in one industry after another”. In turn, structured data such as deterministic 3D geometries will increasingly become an in-between, “operative” layer in the rendering process, and it can be expected that in the future production of games and other virtual worlds, detail will increasingly move from these geometries to the generative capacities of “neural rendering”.


A post by the principal animation programmer at Epic Games, Kiaran Ritchie, speculating that in the future, DLSS 5 will require only a basic map of a scene.


The question remains whether, in the long run, the coupling of 3D geometries and generative AI remains necessary. Already in September 2023, NVIDIA’s vice president of Applied Deep Learning Research, Bryan Catanzaro, proposed that “DLSS 10 (in the far far future) is going to be a completely neural rendering system.” In this understanding, DLSS 5 represents only one step in a larger project of progressively injecting generative AI across NVIDIA’s products, replacing more and more aspects previously reserved for physical modelling. Is DLSS 5 thus simply another instance of Rich Sutton’s “bitter lesson” – the observation that general methods leveraging computation ultimately outperform methods leveraging human knowledge and domain-specific engineering?


One particular example also getting the “DLSS 5 OFF / DLSS 5 ON” treatment is a “selfie” shared by Mark Zuckerberg in August 2022 from the Meta project “Horizon Worlds”, which was mocked online for its terrible graphics and what users called “dead eyes”. Today, the planned shutdown of the project was announced, after nearly $80 billion of investment and no official product.
Posted by user @mrdoob on X.


After all, NVIDIA, alongside others such as DeepMind and RunwayML, is also working on an alternative to physical simulation in the form of so-called “generative world models”. World models, as Rev Lebaredian, vice president of Omniverse, argued in early 2025, “attempt to understand the physics of the world using the same technology behind large language models (LLMs). But instead of learning linguistic rules, they learn the fundamental laws of physics. This allows us to generate capabilities similar to those of language models, but for physical environments.” Lebaredian believes that “classical simulations” require too much computing power and can still “only provide estimated results”, whereas generative world models, he argues, are trained to “learn physical laws using real-world data”. However, Lebaredian is also quick to point out that NVIDIA’s efforts in physical simulation were not wasted, but are a necessary resource for the training of generative world models.

For now, it seems that the coupling of deterministic, “structured” 3D data with the probabilistic inference of generative AI only serves to essentialize both approaches further. 3D simulation, as Gillian Rose reminds us, is itself a specific form of organizing the world: “neither photo-realism nor three-dimensionality see or spatialise the world in ways that innocently replicate its ‘laws’.” When contrasted with the probabilistic approach of generative AI models, physical simulation is all too easily elevated to the status of “ground truth”, as Huang’s introduction of the technology shows. In turn, by framing generative AI as an enrichment – an injection of “realism” – its capacities for dealing with unstructured data are overstated. In this understanding, generative AI provides the infusion of “realism” as a contingent texture, while the underlying geometry of “structured data” allows for a layer of control. Huang has promoted this idea elsewhere, for example in announcing NVIDIA’s collaboration with Dassault Systèmes on real-time virtual twins, where he argued that industries would move from structured representations towards “a generative computing model”, integrating generative AI into their pipelines.

Ultimately, both the framing of physical simulation as the necessary “ground truth” of virtual worlds and the desire to integrate generative AI for its supposed “realism” are important to NVIDIA’s broader retrospective and future rationalisation of its own project. However, if the backlash against DLSS 5 is any indication, even the supposedly small changes and contingencies added by generative AI to the underlying “structured data” can have unintended consequences.


© 2026 Lars Pinkwart