I don't normally talk about stuff my employer announces (I work for Nvidia, my opinions are my own, etc etc), but something interesting struck me about the DLSS4 frame generation announcement.

Jensen gave the example of computationally rendering 2 million pixels and then inferring 31 million more from that baseline, and noted that this is more efficient than just computing all 33 million. This is interesting because it's kind of backwards from how we usually think about AI.
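
To make that arithmetic concrete, here's a rough back-of-envelope sketch in Python. Only the 2 million and 33 million figures come from the keynote; the 1080p-base, 4K-output, three-generated-frames breakdown is my own guess at how those numbers decompose.

```python
# Back-of-envelope pixel math behind the "compute 2M, infer 31M" claim.
# Only the 2M and 33M totals come from the keynote; the 1080p base render,
# 4K output, and 3 generated frames per rendered frame are my assumptions.

base_render  = 1920 * 1080   # ~2.07M pixels actually shaded
output_frame = 3840 * 2160   # ~8.29M pixels per displayed 4K frame
frames_out   = 4             # 1 upscaled frame + 3 AI-generated frames

pixels_delivered = output_frame * frames_out       # ~33.2M
pixels_inferred  = pixels_delivered - base_render  # ~31.1M

print(f"computed:  {base_render / 1e6:.1f}M pixels")
print(f"inferred:  {pixels_inferred / 1e6:.1f}M pixels "
      f"({pixels_inferred / pixels_delivered:.0%} of the output)")
print(f"delivered: {pixels_delivered / 1e6:.1f}M pixels")
```

Under those assumptions, roughly 94% of the pixels that reach the screen were never shaded directly.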

In particular, AI and inefficiency are often discussed in the same breath, especially with respect to energy consumption. Clearly, training is incredibly expensive, and the transformer which powers DLSS probably gobbled a huge amount of energy coming into existence. But now that we *have* it, its inferred approximation of the shader computation function seems to be about an order of magnitude *less* energy-hungry than the traditional computation it approximates. That's really interesting.

Framed another way: even though training DLSS probably had a massive carbon footprint, the inference will be applied across every PC gamer who turns it on, and stretched over the lifetime of those GPUs, that almost certainly results in a very significant net carbon *savings* on the video game rendering people were going to do anyway.
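
To show the shape of that amortization argument, here's a toy break-even model. Every number in it is a placeholder I made up for illustration; none of them are published figures.

```python
# Toy amortization model: how long until a one-time training cost is repaid
# by per-frame inference savings? Every number here is a hypothetical
# placeholder, not a published figure.

training_energy_kwh   = 5_000_000   # hypothetical one-time training cost
savings_per_frame_kwh = 1e-7        # hypothetical energy saved vs. fully shading a frame
fps                   = 60
hours_per_week        = 5           # hypothetical playtime per gamer
gamers                = 10_000_000  # hypothetical number of users with it enabled

frames_per_week     = fps * 3600 * hours_per_week
weekly_savings_kwh  = gamers * frames_per_week * savings_per_frame_kwh
weeks_to_break_even = training_energy_kwh / weekly_savings_kwh

print(f"weekly savings:   {weekly_savings_kwh:,.0f} kWh")
print(f"break-even after ~{weeks_to_break_even:.1f} weeks")
```

The specific answer doesn't matter; the point is that a one-time training cost, divided by a per-frame savings multiplied across tens of millions of users, gets repaid quickly, and everything after that is net savings.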

So in other words, this is an example of AI significantly reducing net energy usage.


@djspiewak interesting point. The potential rebound effects (more demand because it consumes less power) are probably limited by the fact that Nvidia is a monopoly (so they're not passing those savings on to the consumer), and by other factors limiting GPU load (like not many 8K displays yet, or it not being worth it to offer ultra-high-quality graphics if only the latest cards can use it).

@spoltier I was about to say that I agree about the monetary savings, but there's kind of an asymmetry there. Sure, the consumer pays more for the GPU than they might for an equivalent AMD, but then they're the ones who get the lower power bill every month relative to the same performance.

Where it gets tricky, I think, is with your second point. Games can have more insane graphics, less efficient shaders, and higher resolutions *because* of the more efficient inferred rendering.

@spoltier So that inducement couples in an interesting way with your point about how the net gains are less dramatic than they could be, because most people aren't buying 5090s or hooking them up to 8K (or often even 4K) displays. The calculus is clearly pretty complicated.

@spoltier What this actually makes me think about, though, is how many more problems are like this. Clearly there's nothing particularly special about real-time graphics rendering: it just happens to be a classical approach which is being significantly augmented by a learned one. Maybe weather forecasting is another good near-term exemplar?

@djspiewak yeah, with weather forecasting you'd definitely want more accurate or longer-range models. (Worse, trading firms, militaries, and others probably want forecasts that they then keep secret, spurring demand further as they each potentially compute the same forecast unbeknownst to one another.)
