
Why TF does Unity use mesh renderers for generating navmeshes? In what possible situation would that be useful?
Why would it choose to bug out on the complex visual geometry instead of using the finely crafted low-poly clipping layer? In what situation is that a good idea? And why would the AI need to collide with different things than the player? (IMHO NavMeshAgent should depend on CharacterController or Rigidbody.)
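
For the record, the runtime NavMeshBuilder API can at least be pointed at physics colliders instead of renderers. A rough, untested sketch follows; the "Clipping" layer name, the bounds and the agent settings ID are made up:

```csharp
// Rough, untested sketch: bake a navmesh at runtime from physics colliders
// instead of render meshes via UnityEngine.AI.NavMeshBuilder.
// The "Clipping" layer, bounds and agent settings ID are placeholders.
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.AI;

public class ColliderNavMeshBaker : MonoBehaviour
{
    void Start()
    {
        var sources = new List<NavMeshBuildSource>();
        var bounds = new Bounds(Vector3.zero, new Vector3(500f, 100f, 500f)); // world region to bake

        // Collect geometry from colliders only, e.g. the low-poly clipping layer.
        NavMeshBuilder.CollectSources(
            bounds,
            LayerMask.GetMask("Clipping"),            // hypothetical layer holding the low-poly collision geometry
            NavMeshCollectGeometry.PhysicsColliders,  // the switch away from render meshes
            0,                                        // default area (Walkable)
            new List<NavMeshBuildMarkup>(),
            sources);

        // Bake with the default (Humanoid) agent settings and register the result.
        var settings = NavMesh.GetSettingsByID(0);
        var data = NavMeshBuilder.BuildNavMeshData(
            settings, sources, bounds, Vector3.zero, Quaternion.identity);
        NavMesh.AddNavMeshData(data);
    }
}
```

Afaik the newer NavMeshSurface component (from the NavMeshComponents / AI Navigation package, depending on version) exposes the same switch through its Use Geometry setting.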

I feel like so many features in Unity are potentially very nice but don't work well together, or have WTF design elements like this one. Like custom shaders not being able to alter the result after the lights have been added together, and the undocumented finalgbuffer:ColorFunction modifier. Or a million other tiny things that make me wish I were smart enough to build my own engine.

/rant

Comments
  •
    Building your own engine is not that hard tbh. Go for it if you're not doing game dev to earn money or don't need to get a game out asap.

    AI colliding with different stuff -> not all AI is computer players. NPCs, projectiles, crowd sim, etc. all have specific requirements, hence it's probably generalized (rough sketch of what I mean at the end of this comment).

    Dunno about the mesh bit, not really a Unity person, but at a guess you can't alter output after lighting because that's how Unity's render pipeline is built. Most graphics engines use a specific render pipeline architecture and each stage depends on some assumptions about previous stages (also great for optimisation). Plus afaik Unity lights use deferred rendering, and that's a multipass method; nothing should change between light passes.
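
    A rough, untested sketch of what I mean: a NavMeshAgent carries its own radius, speed and area mask, independent of whatever collider or CharacterController the player has. "Water" below is a placeholder area name you'd have to define in the Navigation window.

    ```csharp
    // Rough, untested sketch: per-agent navigation settings, independent of the
    // player's physics setup. "Water" is a placeholder NavMesh area name.
    using UnityEngine;
    using UnityEngine.AI;

    public class NpcAgentSetup : MonoBehaviour
    {
        void Start()
        {
            var agent = GetComponent<NavMeshAgent>();

            // Pathfinding footprint and speed, not a physics collider.
            agent.radius = 0.4f;
            agent.speed = 3.5f;

            // This NPC refuses to path through "Water"; a crowd-sim agent on the
            // same navmesh could keep NavMesh.AllAreas and behave differently.
            agent.areaMask = NavMesh.AllAreas & ~(1 << NavMesh.GetAreaFromName("Water"));
        }
    }
    ```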
  •
    @rememberMe I've tried building my own engine with some success, but there are just too many features that something like Unity offers. Skinned mesh rendering is quite difficult to get working yourself, let alone static lightmapping or navmesh generation. I wish I could mix and match the useful Unity features with my own replacements for the WTF areas.