• MTK@lemmy.world · 9 hours ago

    I’ve seen people dumber than ChatGPT. It definitely isn’t sentient, but I can see why someone who talks to a computer they perceive as intelligent would assume sentience.

    • AdrianTheFrog@lemmy.world · 5 hours ago

      We have AI models that “think” in the background now. I still agree that they’re not sentient, but where’s the line? How is sentience even defined?
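
      To illustrate what that “background thinking” amounts to mechanically, here’s a hypothetical Python sketch: the model generates reasoning tokens that are stripped out before the user ever sees the reply. The generate function here is a canned stand-in for a real model call, not any vendor’s actual API.

      ```python
      # Hypothetical sketch of "background thinking" in a reasoning model.
      # `generate` is a canned stand-in for a real model call -- it only
      # illustrates the flow and is not any actual API.
      def generate(prompt: str) -> str:
          # A real model would produce this text token by token.
          return ("<think>2 + 2 = 4, and double of 4 is 8.</think>"
                  "The answer is 8.")

      def answer(question: str) -> str:
          raw = generate(question)
          # The reasoning inside the <think> tags is generated but never
          # shown to the user -- that is the "thinking in the background".
          return raw.split("</think>")[-1].strip()

      print(answer("What is double of 2 + 2?"))  # -> The answer is 8.
      ```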

      • MTK@lemmy.world · 2 hours ago

        Sentience, in a nutshell, is the ability to feel, to be aware, and to experience subjective reality.

        Can an LLM be sad, happy or aware of itself and the world? No, not by a long shot. Will it tell you that it can if you nudge it? Yes.

        Actual AI might be possible in the future, but right now all we have are really complex networks that perform essentially basic tasks, which only look impressive to us because they inherently use our own communication format.

        If we talk about sentience, LLMs are the equivalent of a petri dish of neurons connected to a computer (metaphorically), and only by forming a complex 3D structure like a brain could they really reach sentience.

        • AdrianTheFrog@lemmy.world · 2 hours ago

          > Can an LLM be sad, happy or aware of itself and the world? No, not by a long shot.

          Can you really prove any of that though?

          • MTK@lemmy.world · 51 minutes ago

            Yes, you can debug an LLM to a degree, and there are papers that show it. Anyone who understands the technology can tell you that it absolutely lacks any faculty for experience.
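
            As a simplified example of that kind of debugging, here is a minimal sketch (assuming the Hugging Face transformers library and GPT-2; the layer choice is arbitrary) that reads out a layer’s internal activations with a PyTorch forward hook:

            ```python
            # Minimal sketch: inspecting an LLM's internal activations with a
            # PyTorch forward hook. Requires `pip install torch transformers`.
            # GPT-2 and layer 6 are illustrative choices, nothing more.
            import torch
            from transformers import GPT2Tokenizer, GPT2Model

            tok = GPT2Tokenizer.from_pretrained("gpt2")
            model = GPT2Model.from_pretrained("gpt2")
            model.eval()

            activations = {}

            def save_hidden(module, inputs, output):
                # Each GPT-2 block returns a tuple; output[0] is the
                # hidden-state tensor for that layer.
                activations["block6"] = output[0].detach()

            model.h[6].register_forward_hook(save_hidden)

            with torch.no_grad():
                model(**tok("The cat sat on the mat", return_tensors="pt"))

            # Shape: (batch, sequence_length, hidden_size), e.g. (1, 6, 768).
            print(activations["block6"].shape)
            ```

            Every intermediate state is readable this way, which is what the interpretability papers build on.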

    • Patch@feddit.uk · 8 hours ago

      Turing made a strategic blunder when formulating the Turing Test by assuming that everyone was as smart as he was.