insaan@leftopia.org to Technology@lemmy.world • We have to stop ignoring AI's hallucination problem
> The argument that they can't learn doesn't make sense, because models have definitely become better.
They have to either be trained on new data or have their internal structure improved. Both are offline processes, meaning they don't learn through the chat sessions we have with them (open a new session and it will have forgotten everything you told it in the previous one), and they can't learn through any kind of self-directed research the way a human can.
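To make the statelessness concrete, here's a minimal sketch using the OpenAI Python SDK (the model name and client setup are my own assumptions, not anything from the article): the weights never change between calls, so the only "memory" a chat has is the message list the client itself resends. Start a new list and everything from the previous session is gone.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Session 1: the only "memory" is this message list, which we maintain ourselves.
session_1 = [{"role": "user", "content": "My name is Alice."}]
reply_1 = client.chat.completions.create(model="gpt-4o-mini", messages=session_1)
session_1.append({"role": "assistant", "content": reply_1.choices[0].message.content})

# Session 2: a brand-new message list. Nothing from session 1 is sent along,
# and the model's weights were not updated, so it cannot know the name.
session_2 = [{"role": "user", "content": "What is my name?"}]
reply_2 = client.chat.completions.create(model="gpt-4o-mini", messages=session_2)
print(reply_2.choices[0].message.content)  # it can only guess
```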
> All of the shortcomings you've listed, humans are guilty of too.
LLMs are sophisticated word generators. They don’t think or understand in any way, full stop. This is really important to understand about them.
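"Word generator" in the most literal sense: here is a sketch of the generation loop using Hugging Face transformers, with GPT-2 purely as a stand-in model (any small causal LM would do, this is my choice, not something from the thread). Everything the model does at inference time is repeatedly score every possible next token and append one of them.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 is only a convenient stand-in; the loop is the same for any causal LM.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

# "Generation" is just: score every token in the vocabulary, pick one, append, repeat.
for _ in range(5):
    logits = model(input_ids).logits   # shape: (1, sequence_length, vocab_size)
    next_id = logits[0, -1].argmax()   # greedily take the single most likely token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```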
If you clicked the article link and then used a process called "reading", you would see:
Edit: I misunderstood and assumed he hadn’t read the article, which is entirely too common these days.