• Credibly_Human@lemmy.world
    17 hours ago

    I feel like in this comment you misunderstand why they “think” like that, in human words. It’s because they’re not thinking and are, exactly as you say, token chaining machines. This type of phrasing probably gets the best results for keeping it on track when talking to itself over and over.

    • kazerniel@lemmy.world
      2 hours ago

      Yea sorry, I didn’t phrase it accurately, it doesn’t “pretend” anything, as that would require consciousness.

      This whole bizarre charade of it explaining its own “thinking” reminds me of an article where iirc researchers asked an LLM to explain how it calculated a certain number. It gave a response describing how a human would have calculated it, but with this model they somehow managed to watch it working under the hood, and it was actually arriving at the number by a completely different method than the one it described. It doesn’t know its own workings; even these meta questions are just further exercises in guessing what would be a plausible answer to the researchers’ question.