Of course they can't. Any product or feature is only as good as the data underneath it. Training data comes from the internet, and the internet is full of humans. Humans make and write weird shit, so the data the LLM ingests is weird, and that creates hallucinations.
Because it's all a corporation, and a huge part of the corporate capitalist system is infinite growth. They want returns, BIG ones. When? Right the fuck now. How do you do that? Well, AI could turn the world upside down like the dot-com boom. So they dump tons of money into AI. So..... is the AI done? Oh no no no, we're at machine learning; AI is pretty far down the road actually. What, so we're firing the AI department heads and shipping this machine learning software as 100% all-the-way-done AI?
It's all the same reason Section 8 housing and low-cost housing don't work under corporate capitalism. It's profitable to take government money; it's profitable to have low-rent apartments. That's not the problem. The problem is THEY NEED THE GROWTH NOW NOW NOW!!!! If you can own a condo building with high-wage renters and add another $100 to the rent every year, you get more profit faster. No one wants to invest in a 10% increase over 5 years if they can invest in 12% over 4 years. So no one ever invests in low-rent or Section 8 housing.
Everything these AIs output is a hallucination. Imagine if you were locked in a sensory deprivation tank, completely cut off from the outside world, and only had your brain fed the text of all books and internet sites. You would hallucinate everything about them too. You would have no idea what was real and what wasn’t because you’d lack any epistemic tools for confirming your knowledge.
That’s the biggest reason why AIs will always be bullshitters as long as they’re disembodied software programs running on a server. At best they can be a brain in a vat, which is a pure hallucination machine.
Yeah, I try to make this point as often as I can. The notion that AI hallucinates only wrong answers really misleads people about how these programs actually work. It couches it in terms of human failings rather than really getting at the underlying flaw in the whole concept.
LLMs are a really interesting area of research, but they never should have made it out of the lab. The fact that they did is purely because all science operates in the service of profit now. Imagine if OpenAI were able to rely on government funding instead of having to find a product to sell.
First of all I agree with your point that it is all hallucination.
However, I think a brain in a vat could confirm information about the world with direct sensors like cameras and access to real-time data, as well as the ability to talk to people and determine things like who is trustworthy. In reality, we are all brains in vats; we just have a fairly common interface that makes consensus reality possible.
The thing that really stops LLMs from being able to make judgements about what is true and what is not is that they cannot make any judgements whatsoever. Judging what is true is a deeply contextual and meaning-rich question. LLMs cannot understand context.
I think the moment an AI can understand context is the moment it begins to gain true sentience, because a capacity for understanding context is definitionally unbounded. Context means searching beyond the current information for further information. I think this context barrier is fundamental, and we won't get truth-judging machines until we get actually-thinking machines.
I'm 100% sure he can't. Or at least, not from LLMs specifically. I'm not an expert so feel free to ignore my opinion but from what I've read, "hallucinations" are a feature of the way LLMs work.
Because everything they output is a hallucination. Just because sometimes those hallucinations are true to life doesn't mean jack shit. Even a broken clock is right twice a day.
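That "broken clock" point can be shown with a toy sketch. This is not any real model's code — the prompt, tokens, and probabilities are all made up for illustration — but it captures the mechanism: generation is sampling tokens by likelihood, and nothing in that step checks truth.

```python
import random

# Hypothetical next-token distribution (invented numbers, not from a real LLM):
# after the prompt "The capital of Australia is", a model might weight
# plausible-sounding continuations like this.
next_token_probs = {
    "Canberra": 0.55,   # true
    "Sydney": 0.35,     # false, but statistically plausible
    "Melbourne": 0.10,  # false, but statistically plausible
}

def sample_token(probs):
    """Pick one token by probability. The ONLY criterion is likelihood --
    there is no step here (or in a real LLM's decoder) that asks whether
    the resulting sentence is true."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

random.seed(0)
samples = [sample_token(next_token_probs) for _ in range(1000)]
wrong = sum(t != "Canberra" for t in samples) / len(samples)
print(f"fraction of factually wrong completions: {wrong:.0%}")
```

The true answer comes out more often only because it happened to be more likely in the training data, which is exactly the broken-clock situation: sometimes right, for reasons that have nothing to do with knowing anything.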
"Only feed it accurate information."
Even that doesn't work, because it just mixes and matches elements of its input to generate new, novel outputs, which will inevitably sometimes be wrong.
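Here's a toy demonstration of that mix-and-match problem, using a tiny bigram model instead of an LLM (a deliberate simplification, with a made-up two-sentence corpus). Every training sentence is factually accurate, yet recombining their pieces still produces a novel false statement.

```python
from collections import defaultdict

# Two factually accurate training sentences (a tiny, invented corpus):
corpus = [
    "paris is the capital of france".split(),
    "berlin is the capital of germany".split(),
]

# Bigram table: each word -> the words that followed it in training.
bigrams = defaultdict(list)
for sentence in corpus:
    for a, b in zip(sentence, sentence[1:]):
        bigrams[a].append(b)

def completions(prefix):
    """Enumerate every sentence the bigram model can emit from a prefix."""
    last = prefix[-1]
    if last not in bigrams:
        return [prefix]
    results = []
    for nxt in set(bigrams[last]):
        results.extend(completions(prefix + [nxt]))
    return results

outputs = {" ".join(s) for s in completions(["paris"])}
print(sorted(outputs))
# Both the true sentence and a novel, false recombination are valid outputs:
# 'paris is the capital of france' AND 'paris is the capital of germany'
```

The model was only ever fed accurate information, yet "paris is the capital of germany" is a perfectly legal output, because statistical recombination preserves plausibility, not truth.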
Well yeah, it's using the same dataset as MS Copilot.
Spitting out inaccurate answers (I wish the media would stop feeding into calling it something that sounds less bad, like "hallucinations") is not something that will go away until the LLM gains the ability to discern context.
It’s insane how many people already take AI as more capable/accurate than other mediums. I’m not against AI, but I’m definitely against the bubble of worship some people have put it in.
Stupid headline, it's like Tim Cook saying he's not 100% sure Apple can stop batteries in their devices from exploding. You do as much as you can to prevent it but it might happen anyway because that's just how it is.
If you want good AI, you need to spend money and send your AI to college: have real humans interact with it, correct its logic, and make sure it understands sarcasm and logical fallacies.
Or, you can go the cheap route: train it on 10 years of Reddit sh*tposts and hope for the best.