
There's been talk that UE5's Nanite isn't actually all that efficient (sometimes slower than a traditional LOD pipeline), and that kind of got me thinking.

You give developers very high-end machines so that they can move quickly, but that doesn't always translate to lower-end machines. When benchmarking, how would you even target lower-end hardware in a simple way? Like, for me: I have two GPUs in my system, but one is passed through to a Windows VM. I'd love to test on that GPU, but it's just not feasible.
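
One hedged way to approximate a slower card on the box you already have is to pin the GPU to lower clocks while profiling (the scriptable equivalent of nvidia-smi -lgc). A minimal sketch using NVML is below; the 1200 MHz cap is a made-up placeholder, and it assumes an NVIDIA card, the NVML headers/library, and admin rights:

```cpp
// Sketch: lock the GPU to a lower clock so a fast card can roughly stand in
// for a slower one while profiling. Assumes an NVIDIA GPU with NVML available
// and sufficient privileges. The 1200 MHz value is a placeholder.
#include <cstdio>
#include <nvml.h>

int main() {
    if (nvmlInit_v2() != NVML_SUCCESS) {
        fprintf(stderr, "NVML init failed\n");
        return 1;
    }

    nvmlDevice_t dev;
    if (nvmlDeviceGetHandleByIndex_v2(0, &dev) != NVML_SUCCESS) {
        fprintf(stderr, "no device 0\n");
        nvmlShutdown();
        return 1;
    }

    // Pin graphics clocks (min == max locks them); placeholder value.
    nvmlReturn_t r = nvmlDeviceSetGpuLockedClocks(dev, 1200, 1200);
    printf("lock clocks: %s\n", nvmlErrorString(r));

    // ... run the benchmark scene here ...

    // Restore default clock behaviour when done.
    nvmlDeviceResetGpuLockedClocks(dev);
    nvmlShutdown();
    return 0;
}
```

The obvious caveat: locking clocks only scales compute down. It doesn't shrink the L2 or change memory latency, so it's a rough stand-in for an actually older card, not a faithful one.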

All the great test results I (and others) have been seeing might just be a result of the newest cards being insanely fast relative to their caches. Is visibility-buffer rendering really faster on a card that's a few generations old? I don't know! Nvidia MASSIVELY beefed up the L2 cache on the 4000 series. Does that play a role? Maybe even a big one...
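
To get a feel for whether cache size is doing the heavy lifting, a classic pointer-chase microbenchmark shows where the latency cliff sits on a given card: once the working set spills past the L2 (6 MB on a 3090 vs. 72 MB on a 4090), every dependent load eats full DRAM latency. A rough CUDA sketch, with all sizes and step counts being arbitrary choices of mine:

```cuda
// Pointer-chase latency micro-benchmark sketch: one thread follows a random
// cycle through buffers of growing size, so the average load latency jumps
// once the working set no longer fits in L2.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void chase(const unsigned *next, unsigned steps, unsigned *sink) {
    unsigned idx = 0;
    for (unsigned i = 0; i < steps; ++i)
        idx = next[idx];              // each load depends on the previous one
    *sink = idx;                      // keep the compiler from dropping the loop
}

int main() {
    const unsigned steps = 1u << 18;  // dependent loads per measurement
    unsigned *d_sink;
    cudaMalloc(&d_sink, sizeof(unsigned));

    for (size_t mib = 1; mib <= 256; mib *= 2) {     // sweep 1..256 MiB
        size_t n = mib * 1024 * 1024 / sizeof(unsigned);
        unsigned *h = (unsigned *)malloc(n * sizeof(unsigned));

        // Sattolo shuffle: builds a single random cycle over all n slots.
        for (size_t i = 0; i < n; ++i) h[i] = (unsigned)i;
        srand(42);
        for (size_t i = n - 1; i > 0; --i) {
            size_t j = (size_t)rand() % i;
            unsigned t = h[i]; h[i] = h[j]; h[j] = t;
        }

        unsigned *d_next;
        cudaMalloc(&d_next, n * sizeof(unsigned));
        cudaMemcpy(d_next, h, n * sizeof(unsigned), cudaMemcpyHostToDevice);

        cudaEvent_t t0, t1;
        cudaEventCreate(&t0); cudaEventCreate(&t1);
        chase<<<1, 1>>>(d_next, steps, d_sink);      // warm-up run
        cudaEventRecord(t0);
        chase<<<1, 1>>>(d_next, steps, d_sink);      // timed run
        cudaEventRecord(t1);
        cudaEventSynchronize(t1);

        float ms = 0.f;
        cudaEventElapsedTime(&ms, t0, t1);
        printf("%4zu MiB working set: %7.1f ns per load\n",
               mib, ms * 1e6f / steps);

        cudaFree(d_next); free(h);
        cudaEventDestroy(t0); cudaEventDestroy(t1);
    }
    cudaFree(d_sink);
    return 0;
}
```

If the cliff on a 4000-series card sits far further out than on an older one, it at least becomes plausible that the great visibility-buffer numbers are partly a cache story rather than a pure architecture win.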

Comments
  • 4
    Oh, right, Nanite. Remember when that shit came out? Every other gamedev was tripping balls over it for a while, mass hype. Then it kinda faded away; I stopped hearing about it all the time, at least. Did something happen, or do I just live under a rock?

    Anyway, at that time, I got into a bit of a debate about whether it would make a bunch of optimizations redundant. Like, why use normal maps and simplified geometry for a wall if you can just push fifty gazillion triangles per square meter, right? The argument was somewhere along those lines.

    Obviously, I said that was a stupid idea. Using fewer resources is always better. So they called me crazy and a grumpy old cat, both absolutely true, and I think that's where I gave up trying to talk sense into the heads of the ten-gigs-for-hello-world type of people.

    What's the point? I don't know; I'm mentally derailing.
  • 1
    @Liebranca "What's the point? I don't know; I'm mentally derailing."

    arguing with retards who think a 500 MB install for a basic CRUD app is acceptable will tend to do that to a person.

    be gentle with yourself. the world is drowning in an ocean of go-with-the-flow tards. like an open sewer.