Would you like a spicy spaghetti dish? Just use some gasoline.

  • TommySoda@lemmy.world · 6 months ago

    It’s almost like LLMs aren’t the solution to literally everything, like companies keep trying to tell us they are. Weird.

    I honestly can’t wait for this to blow up in a company’s face in a very catastrophic way.

    • youngalfred@lemm.ee · 6 months ago

      Already has: Air Canada was held liable for its AI chatbot giving wrong information that a guy relied on to buy bereavement tickets. They tried to claim they weren’t responsible for what it said, but the judge found otherwise. They had to pay damages.

      • barsquid@lemmy.world · 6 months ago

        That’s not catastrophic yet. It only cost them the money that would otherwise have been margin on a low-priced ticket.

    • Max-P@lemmy.max-p.me · 6 months ago

      AI is basically like early access games but the entirety of big tech is rushing to roll it out first to as many people as possible.

      • sp3tr4l@lemmy.zip · 6 months ago

        Hah, remember when games and software used to be tested to ensure they would function correctly before release?

        At least with Early Access games you know it’s in development.

        What has it been, nearly a decade now that we just expect nearly everything to be broken on launch?

  • brsrklf@jlai.lu · 5 months ago

    The most baffling part of it is how it looks like zero attempt was made to attribute credibility to sources.

    Using Reddit as a source was bad enough (of course, they paid for it, so now they must feel like they need to use this crap). But one of the examples in the article is just parroting stuff from The Onion.

    Edit: I’ve since learned that the Onion article was probably seen as “trustworthy” by the AI because it was linked on a fracking company’s website (as an obvious joke, in a blog post).

    If all it takes for a source to be validated is one link with no regard for context, I think the point stands.

  • Gsus4@mander.xyz · 6 months ago

    This is such a disinfo nightmare. Imagine if it were trained (prompting would be easier, actually) to spread high-quality data with strategically planted lies, maximizing harmful, confident incorrectness.