4
kiki
1d

here are things that prove god doesn't exist:
- trie (non-binary tree)
- nybble (half a byte)
- .DS_Store
- .Trash-1000
- tsconfig.json
- Electron
for if he existed, he would have never allowed those hubris monuments to be erected in his kingdom.

Comments
  • 1
    Don't forget the story of the tower of babel or Sodom and Gomorrah ;P

    But that's old testament, now he's all about that forgiveness ;P
  • 3
Fucking .DS_Store somehow always ends up in all my repos the moment some crapple fanboy touches them... Agreed.
  • 1
    @CoreFusionX if they don’t know what a global gitignore is, you shouldn’t let them touch your repos. If they know but don’t care, you _definitely_ shouldn’t let them touch your repos.
  • 1
    God?
    He?
    lol
    BTW Religious people are immune to arguments against their religion
  • 1
A trie is a generic data structure for maps keyed by sequences of elements from the same finite (small) set. It's typically taught with ASCII characters as key elements, but the most popular variant actually uses booleans as key elements, and is commonly known as a binary tree.
  • 1
    This is an unpopular opinion but I think that memory addressing should have been bit-aligned.
  • 1
    @lorentz can you elaborate on the bit alignment part?
  • 1
    @kiki Spanky, the true god condemns the things on your list. They are all cardinal sins.
  • 2
@Lensflare I think a lot of really efficient techniques are grossly underutilized because bitfields are hard and slow. If bit-aligned reads and writes had first-class support, a lot more specialization would be possible in code, as we could rely on booleans and enums that occupy just a handful of bits to be neatly compacted. A lot of effort is expended in both language and library design to optimize enums and pack them together better. A related but different problem is that i64 is huge and nearly impossible to fill, so a lot of pointer bits go to waste; assigning just 3 of the ~24 unused bits in a pointer to sub-byte indices would solve this. Instead, ARM uses the extra bits of i64 to mitigate out-of-bounds access vulnerabilities, a problem that is decidedly better solved in language design.
  • 1
@lorentz sure, but isn't it practically impossible to have addressing down to single bits? The size of a single address would be enormous. I think it would in fact be the size of the whole memory that you are using the addresses in.
You would need a 1GB large address to point to a single bit in 1GB of RAM.
  • 1
@lorentz ok maybe I'm wrong. The size of the address would be log2(size of memory).
  • 2
    @kiki

    You don't really have a choice about repo access when they belong to your company.

    @lorentz

    The reason addressing did not go down to the bit level is because it would have taken a prohibitive amount of circuitry to pull off.
  • 2
    @Lensflare it would take 3 additional bits to turn byte addressing into bit addressing. 64-bit pointers have a lot more than 3 unused bits between the sign bit (used to separate kernel memory on Linux) and the usable range. Matter of fact, 64-bit address space is so absurdly big that most CPUs require the top few bits to be the same for loads and stores, and the exact number of these matching bits is barely considered a compatibility concern.
  • 1
    @CoreFusionX It would take a bit of additional circuitry (pun not intended), but the number of neighboring bits that would ideally be processed together is a manufacturing concern and I would be very surprised if 8 is that number for today's popular RAM designs. I'll grant that I don't know much about modern hardware design, but I think it would be way more interesting and not that much more complicated than everything already is for the CPU and RAM to negotiate this stuff.
  • 2
    This is not a very serious take either way since, as I said, I don't know much about hardware design in the nanometer and microsecond scale. I'm just not convinced that the answer is so obvious. Generally, Assembly is so disconnected from the hardware it runs on that manufacturers pretty much do whatever and then implement the actual specs on top.
  • 2
    Any interop standard designed today assumes that the implementation and the performance landscape that advises it will soon be unrecognizable. They try to betray as little of the internals as possible. I'm just wondering whether the 8-bit byte is really as universal as we think, or just the worst case of tech debt.
  • 1
    @lorentz that’s an interesting topic for sure and I also know very little about it. And it’s never a bad idea to question if something could be different and maybe better.
  • 0
You forgot about Jews
  • 3
    @lorentz

    I know about hardware design but I'm not an expert either.

    The circuitry aspect was definitely a thing before. Nowadays it's, as you said, probably feasible but you have two main limitations

    - budget: it's much cheaper to reuse tried and tested hardware units than redesign from scratch.

- thermals: in modern processes, we've gotten dangerously close to the thermal limit (which is why Moore's law is no longer applicable), where higher component density will cause them to melt, short-circuit or otherwise malfunction.

There's also a lesser-known issue which, while solvable, exponentially increases complexity, which in this case is memory timing: even with your 3-extra-bit solution, you'd need another timing control line alongside CAS and RAS on current RAM (BAS, for bit, I guess).