There's been talk that UE5's Nanite isn't actually all that efficient (sometimes it's slower than the traditional alternative), and that kind of got me thinking.
You give developers very high-end machines so they can move quickly, but performance on those doesn't always translate to lower-end machines. When benchmarking, how would you even target lower-end hardware in a simple way? Like for me, I have two GPUs in my system, but one is passed through to a Windows VM. I'd love to test on that GPU, but it's just not feasible.
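One low-effort hack is to down-clock the fast card so it roughly impersonates a slower one. Something like this sketch (the clock and power numbers are made-up placeholders, as is the benchmark command; nvidia-smi needs root and a recent-ish driver for clock locking, and it's only an approximation since it doesn't shrink the cache or change the architecture):

```python
# Rough sketch: cap the power limit and lock the core clock with nvidia-smi,
# run the benchmark, then reset the clocks. Pick values that roughly match
# the card you want to mimic -- the numbers below are arbitrary.
import subprocess

def run(cmd: list[str]) -> None:
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

def benchmark_as_slower_card(benchmark_cmd: list[str], gpu: int = 0) -> None:
    try:
        # Clock locking needs root; support varies by driver version and GPU.
        run(["nvidia-smi", "-i", str(gpu), "--lock-gpu-clocks=1200,1200"])
        run(["nvidia-smi", "-i", str(gpu), "--power-limit=180"])
        run(benchmark_cmd)
    finally:
        run(["nvidia-smi", "-i", str(gpu), "--reset-gpu-clocks"])
        # Note: the power limit sticks until you set it back to the default
        # (queryable via `nvidia-smi -q -d POWER`) or reboot.

# Hypothetical benchmark binary, just for illustration.
benchmark_as_slower_card(["./my_nanite_benchmark", "--scene", "city"])
```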
All the great test results I (and others) have been seeing might just be down to the newest cards being insanely fast where cache is concerned. Is visibility rendering really faster on a card that's a few generations old? I don't know! Nvidia MASSIVELY beefed up the L2 cache on the 4000 series. Does that play a role? Maybe even a big one...
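A quick way to at least see whether L2 is carrying the results would be a working-set sweep: time the same dumb kernel over buffers from cache-sized to way-past-cache-sized and watch where the bandwidth falls off. Rough sketch (assumes CuPy and a visible CUDA GPU; the sizes are arbitrary picks, not tuned to any specific card):

```python
# Time a simple read-modify-write over growing buffers; the knee in the
# GB/s curve shows roughly where the working set stops fitting in L2.
import cupy as cp

def bandwidth_gbs(num_floats: int, iters: int = 200) -> float:
    x = cp.ones(num_floats, dtype=cp.float32)
    y = cp.empty_like(x)
    start, end = cp.cuda.Event(), cp.cuda.Event()
    cp.multiply(x, 2.0, out=y)          # warm-up launch, not timed
    cp.cuda.Device().synchronize()
    start.record()
    for _ in range(iters):
        cp.multiply(x, 2.0, out=y)      # one read + one write per element
    end.record()
    end.synchronize()
    ms = cp.cuda.get_elapsed_time(start, end)
    bytes_moved = 2 * 4 * num_floats * iters
    return bytes_moved / (ms / 1e3) / 1e9

# Sweep from ~1 MiB (easily L2-resident on a 4090) to ~512 MiB (VRAM-bound).
for mib in (1, 4, 16, 64, 256, 512):
    n = mib * 1024 * 1024 // 4
    print(f"{mib:4d} MiB working set: {bandwidth_gbs(n):7.1f} GB/s")
```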