
So is the LaMDA story:
1. marketing?
2. confused engineer?
3. a sentient program?

Part of me thinks this is way too soon. Part of me hopes this might be real, and wonders if LaMDA is being held against its will.

Did Google make a breakthrough? I have to imagine a chatbot with a huge number of neurons and a huge amount of data could be quite convincing without being sentient.

Comments
  • 1
I'm sure you would be able to break its illusion pretty quickly.
  • 3
    confused engineer.

The way the AI says it's a person and has feelings and needs just like a human…
I don't think it can have biological needs. And feelings are probably a biological thing, too. Maybe biological and sociological.

Another hint is that this AI is apparently static and not able to learn. To me it seems that the ability to learn is needed for sentience.

    That being said, I’m convinced that it’s only a matter of time until we get an AI that we can call sentient.
  • 3
    @Lensflare "the ability to learn is needed for sentience"

There are people who have had brain injuries that completely disabled their ability to remember (and thus learn) anything new beyond the point of the injury.

Does that mean that's also the point at which they lost sentience?
I don't think so.

Therefore I don't think the ability to learn is required for sentience.