5
adante
295d

Has anybody else gotten to the point where people who need to mansplain how language models aren't truly sentient/conscious/intelligent are now more annoying than people who think language models are sentient/conscious/intelligent?*

It has been a tight race, but I think I have just about hit the inflection point.

The amount of time I've wasted because of someone condescendingly barging into a conversation with an iamverysmart 'actually, you see, they are just automata trying to predict the next text token', when in actuality everybody in the discussion is already aware of that and it is not the point.

And to further exacerbate it, with a good number of them it is really difficult to get this through their thick little skulls. They just keep parroting the same thing over and over. Ironically, in their single-minded, ego-driven desire to be the Daniel Dennett of the chat, they actually come across as less sentient/conscious/intelligent than a language model.

(*this should not be taken as endorsement for or against that idea - it is actually mostly orthogonal to this rant)

Comments
  • 1
    It's why I keep saying -

    Fuck LLMs, Fuck AI, Fuck Sora, Fuck OpenAI

    and fuck any influencer/person who thinks they know better
  • 5
Until an LLM itself manages to convince me it is in fact sentient, I will keep sorting them into the same non-sentient category as phone sellers, politicians and rocks :P
  • 1
People can't even be sure the person next to them is sentient, and they're arguing about whether AI is.

    I barely know if I am FFS.
  • 2
The point actually is that LLMs are parrots on steroids. They are hallucinating 100% of the time, not only when it becomes too obvious. This means you can't trust anything these models put out.

Any discussion that conveniently ignores that fact, even if not out of ignorance, and jumps straight to applications is simply baseless and needs to be informed - and yes, repeatedly so.

Given the choice between a stair safety rail that can and will break off at any moment and no rail at all, I prefer the latter, because it at least makes the risk clearly visible.
  • 1
Sentience isn't well defined, so arguing about whether something has it or not is pretty much useless. It's similar to "free will".

Imho it makes more sense to argue about the intelligence of LLMs. Unless you are arguing with people who confuse sentience with intelligence, of whom there are, disturbingly, quite a few.
  • 0
@Voxera Matrix multiplication chains, nothing more. Or a Markov chain. (A toy sketch of that framing is below.)
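(An aside on that last comment: a minimal sketch of the "just a Markov chain" caricature it invokes, assuming a made-up toy corpus and an order-1 word-level chain. The corpus and names here are purely illustrative; an actual LLM is not a count-based Markov chain, since it conditions on long contexts through learned weights.)

    # Toy "predict the next token" model: an order-1 word-level Markov chain.
    # This is the caricature, not how a transformer LLM actually works.
    import random
    from collections import defaultdict, Counter

    corpus = "the cat sat on the mat . the cat ate the fish .".split()

    # Count how often each token follows each token in the toy corpus.
    transitions = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        transitions[prev][nxt] += 1

    def next_token(prev):
        # Sample the next token in proportion to observed counts.
        counts = transitions[prev]
        tokens, weights = zip(*counts.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    # Generate a short continuation starting from "the".
    token = "the"
    out = [token]
    for _ in range(8):
        token = next_token(token)
        out.append(token)
    print(" ".join(out))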