Would you like a spicy spaghetti dish? Just use some gasoline.
It’s almost like LLMs aren’t the solution to literally everything like companies keep trying to tell us they are. Weird.
I honestly can’t wait for this to blow up in a company’s face in a very catastrophic way.
Already has - Air Canada was held liable for their AI chatbot giving wrong information that a guy relied on to buy bereavement tickets. They tried to claim they weren’t responsible for what it said, but the judge found otherwise. They had to pay damages.
That’s not catastrophic yet. It only cost them money that would otherwise have been margin on top of a low-priced ticket.
AI is basically like early access games but the entirety of big tech is rushing to roll it out first to as many people as possible.
Hah, remember when games and software used to be tested to ensure they would function correctly before release?
At least with Early Access games you know it’s in development.
What has it been, nearly a decade now that we just expect nearly everything to be broken on launch?
The most baffling part of it is that it looks like zero attempt was made to assess the credibility of sources.
Using Reddit as a source was bad enough (of course, they paid for it, so now they must feel like they need to use this crap). But one of the examples in the article is just parroting stuff from The Onion.
Edit: I’ve since learned that the Onion article was probably seen as “trustworthy” by the AI because it was linked on a fracking company’s website (as an obvious joke, in a blog article).
If all it takes for a source to be validated is one link with no regard for context, I think the point stands.
This is such a disinfo nightmare. Imagine if it were trained (prompting would actually be easier) to spread high-quality data with strategically planted lies, maximizing harmful confident incorrectness.
I hope Google gets sued once this inevitably backfires.