
~Of open worlds and post-apocalypses~
"Like dude. What if we made an open world game with a map the size of the united states?"

"But really, what if we put the bong down for a minute, and like actually did it?"

"With spiky armor and factions and cats?"

"Not cats."

"Why not though?"

"Cuz dude, people ate them all."

https://yintercept.substack.com/p/...

Comments
  • 6
    Good luck generating something that large. eep.

    Then again, there’s always procedural. Then again, it’s procedural.
  • 2
    @Root On my current computer it'd take maybe twenty minutes for a million tiles. An hour for 2-3 million.

    Procedural would absolutely be the go-to.

    If every tile represents one square mile, it'd look something like this: the U.S. is 2,680 miles wide, with a vertical distance of 1,582 miles, or roughly 4.2 million tiles.

    Keeping it mostly proportional, a map of, say, 2048x1200 might be doable.

    That's 2.5 million tiles, a little more than half the original, so it's within the realm of possibility.

    The NEO Scavenger map works out to 1 in 400 tiles being a generalized location (lootable) that a text description has to be written for.

    And half of those being unique.

    For our map, that would be 6,144 searchables (probably requiring random generation of most text)

    And 3,072 unique locations (the bulk of which would probably be eaten up by named places in the U.S.)

    Even at just 50 words a location, that works out to 460.8k words, or about 1,843 pages of text to write (at 250 words a page).
  • 3
    Math's a little off. It works out to 6,144 named locations and general locations *all together* (the unique ones are a subset, not an addition), 307.2k words, or roughly 1,228 pages of content. And that's assuming only a measly paragraph for the bulk of them, and no other additional content either.
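
    As a quick sanity check (assuming 250 words per page, which neither of us spelled out):

    ```python
    # Back-of-the-envelope content estimate for a 2048x1200 hex map.
    WIDTH, HEIGHT = 2048, 1200
    tiles = WIDTH * HEIGHT                 # 2,457,600 tiles

    lootable = tiles // 400                # NEO Scavenger's 1-in-400 ratio -> 6,144
    unique = lootable // 2                 # half of those are unique -> 3,072 (a subset, not extra)

    WORDS_PER_LOCATION = 50                # "a measly paragraph"
    WORDS_PER_PAGE = 250                   # assumed; standard manuscript page

    words = lootable * WORDS_PER_LOCATION  # 307,200 words
    pages = words / WORDS_PER_PAGE         # 1,228.8 pages

    print(f"{tiles:,} tiles, {lootable:,} locations ({unique:,} unique)")
    print(f"{words / 1000:.1f}k words, about {pages:,.0f} pages")
    ```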

    I think what I would do is write a text generator with samples, generate in bulk, and then hand-select the ones that are good and filter out the ones that aren't.

    From there the hand selected ones would be spruced up lightly, and then saved for use.

    Picking out what's good is often faster than hand-crafting it.

    It's the difference between *curation* and *design*.
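
    A minimal sketch of that curate-from-bulk loop (the template grammar, slot lists, and file name are all hypothetical):

    ```python
    import random

    # Hypothetical slot-filling grammar over hand-written fragments.
    TEMPLATES = [
        "A {adj} {place}. {detail}",
        "The remains of a {place}, {adj} and picked over. {detail}",
    ]
    SLOTS = {
        "adj":    ["rust-streaked", "half-collapsed", "eerily intact"],
        "place":  ["gas station", "strip mall", "grain silo"],
        "detail": ["Tracks in the dust suggest recent visitors.",
                   "Nothing moves except the wind."],
    }

    def generate() -> str:
        """Fill one random template with random slot values."""
        template = random.choice(TEMPLATES)
        return template.format(**{k: random.choice(v) for k, v in SLOTS.items()})

    # Generate in bulk; a human then keeps the good ones and tosses the rest.
    if __name__ == "__main__":
        with open("candidates.txt", "w") as f:
            f.write("\n".join(generate() for _ in range(1000)))
    ```

    Curation scales because reading a thousand candidates and keeping a hundred is much faster than writing a hundred from scratch.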
  • 3
    Why procedural? There are APIs to get height maps and even soil composition for almost everywhere in the USA.
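
    For instance, a point-elevation lookup against the public Open-Elevation service (one of several such APIs; the endpoint is rate-limited, and for bulk work you'd self-host it or pull USGS rasters instead):

    ```python
    import requests  # third-party: pip install requests

    OPEN_ELEVATION = "https://api.open-elevation.com/api/v1/lookup"

    def elevation_m(lat: float, lon: float) -> float:
        """Fetch the elevation in meters for a single lat/long point."""
        resp = requests.get(OPEN_ELEVATION,
                            params={"locations": f"{lat},{lon}"},
                            timeout=10)
        resp.raise_for_status()
        return resp.json()["results"][0]["elevation"]

    print(elevation_m(39.7392, -104.9903))  # Denver, CO: roughly 1,600 m
    ```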
  • 2
    @Wisecrack Of course you can generate all of that without too much effort — that’s exactly what procedural is good at.

    What I meant is that procedural world generation is often boring because there are limits on how interesting the generated content is, and this quickly becomes obvious while playing.
  • 2
    @Root having seen enough talks, and read enough retrospectives and design papers over the years, I think the mistakes made with procedural are exactly what you wrote:

    Developers thought it was a magic wand. It's not.

    Procedural works best when it's merged with manual work to enhance it or refactor it.

    It's a labor saver, not a labor replacer.
  • 1
    @demortes

    Not a bad idea, all things considered. It's a lot of data to sort through, but if it's available I'll look into it. The big things are: 1. what data best represents a given area, and 2. how to combine it.

    So for example, mountainous terrain is one thing, but if that terrain is in Utah it's going to be pines.

    If it were in Chile (bad example, I know), it's going to be jungle.

    So there are some things to consider.

    I think eco-regions would be useful data.

    What I see now is: separating all the key landscape features into distinct layers, finding the correct maps or data for each, and then layering them, allowing the relevant layers to override each other.
  • 1
    So for example I'd process data for height.

    Some cutoff leads to mountains (with a density setting to randomize it a bit, so the region isn't entirely impassable).

    And then another map that only shows forest cover.

    And then rivers.

    Rivers override forest data.

    Roads and bridges in turn override the previous layers and so on.

    This works because there are a lot of maps that cleanly separate data like this. Meanwhile, popular map offerings like OpenStreetMap bake in place names.
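
    As a sketch, that override chain is just painting layers in priority order; the cutoff, density, and layer inputs below are all hypothetical:

    ```python
    import random

    MOUNTAIN_CUTOFF_M = 2000   # hypothetical height threshold, meters
    MOUNTAIN_DENSITY = 0.8     # chance a tile over the cutoff is impassable

    def composite_tile(height_m: float, forested: bool,
                       river: bool, road: bool) -> str:
        """Later layers override earlier ones: height -> forest -> river -> road."""
        tile = "plains"
        if height_m > MOUNTAIN_CUTOFF_M and random.random() < MOUNTAIN_DENSITY:
            tile = "mountain"  # density keeps the range from being a solid wall
        if forested and tile != "mountain":
            tile = "forest"
        if river:
            tile = "river"     # rivers override forest cover
        if road:
            tile = "bridge" if river else "road"  # roads/bridges override all
        return tile
    ```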

    When you're faced with pulling in a ton of XML, converting the lat and long to the nearest hex (mechanical enough; see the sketch at the end of this comment), and looking at the features there, there are open questions.

    Is this data gonna be missing something important? (Tree cover in some small towns, but you're only told there's a town there.)

    Satellite data and cleanly separated map data give you what you might otherwise miss.

    Is it faster to pull and integrate this data than simply scraping it from a crude set of images of the region?
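
    As for snapping lat/long to the nearest hex, a minimal sketch assuming an equirectangular projection onto pointy-top one-mile hexes (the origin, hex size, and miles-per-degree constant are all assumptions):

    ```python
    from math import sqrt, cos, radians

    ORIGIN_LAT, ORIGIN_LON = 49.0, -125.0  # assumed NW corner of the map
    HEX_SIZE = 0.5                         # center-to-corner, miles
    MILES_PER_DEG_LAT = 69.0               # rough constant

    def latlon_to_hex(lat: float, lon: float) -> tuple[int, int]:
        """Equirectangular lat/long -> axial (q, r) on a pointy-top hex grid."""
        x = (lon - ORIGIN_LON) * MILES_PER_DEG_LAT * cos(radians(ORIGIN_LAT))
        y = (ORIGIN_LAT - lat) * MILES_PER_DEG_LAT  # y grows southward
        q = (sqrt(3) / 3 * x - y / 3) / HEX_SIZE    # fractional axial coords
        r = (2 * y / 3) / HEX_SIZE
        return _axial_round(q, r)

    def _axial_round(q: float, r: float) -> tuple[int, int]:
        """Round fractional axial coords to the nearest hex via cube rounding."""
        s = -q - r
        rq, rr, rs = round(q), round(r), round(s)
        dq, dr, ds = abs(rq - q), abs(rr - r), abs(rs - s)
        if dq > dr and dq > ds:
            rq = -rr - rs
        elif dr > ds:
            rr = -rq - rs
        return rq, rr
    ```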