• Bell@lemmy.world
    6 months ago

    Take all you want, it will only take a few hallucinations before no one trusts LLMs to write code or give advice

    • sramder@lemmy.world
      6 months ago

      […]will only take a few hallucinations before no one trusts LLMs to write code or give advice

      Because none of us have ever blindly pasted some code we got off google and crossed our fingers ;-)

      • Avid Amoeba@lemmy.ca
        6 months ago

It’s way easier to figure that out than to check ChatGPT hallucinations. There’s usually someone on SO saying why a response is wrong, either in another answer or a comment. You can filter most of the garbage right at that point, without having to put it in your codebase and discover it the hard way. You get none of that information with ChatGPT. The data it spits out is not equivalent.

        • deweydecibel@lemmy.world
          6 months ago

That’s an important point, and it ties into the way ChatGPT and other LLMs take advantage of a flaw in the human brain:

Because it impersonates a human, people are inherently more willing to trust it and to think it’s “smart”. It’s dangerous how people who don’t know any better (and many people who do know better) will defer to it, consciously or unconsciously, as an authority and never second-guess it.

And because it’s a one-on-one conversation, with no comment section and no one else looking at the responses to call them out as bullshit, the user just won’t second-guess it.

          • KeenFlame@feddit.nu
            6 months ago

Your thinking is extremely black and white. Many people, probably most actually, second-guess chatbot responses.

      • Seasm0ke@lemmy.world
        6 months ago

Split a segment of data without PII off to a staging database, test the pasted script, completely rewrite the script over the next three hours.
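
        The first two steps of that workflow can be sketched roughly like this (a hypothetical example using SQLite; the `users` table, its columns, and the notion that `email` is the PII field are all invented for illustration):

        ```python
        import sqlite3

        # Hypothetical "production" table containing a PII column (email).
        prod = sqlite3.connect(":memory:")
        prod.execute("CREATE TABLE users (id INTEGER, email TEXT, plan TEXT)")
        prod.executemany(
            "INSERT INTO users VALUES (?, ?, ?)",
            [(1, "a@example.com", "free"), (2, "b@example.com", "pro")],
        )

        def stage_without_pii(src, dst_path=":memory:"):
            """Copy a slice of data into a staging DB, dropping the PII column."""
            dst = sqlite3.connect(dst_path)
            dst.execute("CREATE TABLE users (id INTEGER, plan TEXT)")
            rows = src.execute("SELECT id, plan FROM users").fetchall()
            dst.executemany("INSERT INTO users VALUES (?, ?)", rows)
            dst.commit()
            return dst

        staging = stage_without_pii(prod)
        # Run the pasted/untrusted script against `staging`, never `prod`.
        print(staging.execute("SELECT COUNT(*) FROM users").fetchall())  # -> [(2,)]
        ```

        The point being: the untrusted script only ever sees the staging copy, so a hallucinated `DELETE` or a leaked column costs you a rebuild of staging, not production data.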