Cancel 'hallucinate'; please replace


Working from a vast, many-dimensional map of words built by processing enormous amounts of text, LLMs predict which words, given meaning and current context, would most likely follow the words used in a prompt. They are inherently more concerned with sounding truthy than with delivering a factually correct response, the researchers conclude.
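To make the mechanism concrete, here is a minimal sketch using a toy vocabulary and made-up scores (not any real model's numbers): the model assigns a score to each candidate next word, the scores are converted to probabilities, and the most likely word can then be picked.

```python
import numpy as np

# Toy illustration: hypothetical scores a model might assign to
# candidate next words after the prompt "The sky is".
candidates = ["blue", "falling", "green", "a"]
logits = np.array([4.0, 1.5, 0.5, 0.2])  # made-up raw scores

# Softmax turns the raw scores into a probability distribution.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

for word, p in zip(candidates, probs):
    print(f"{word!r}: {p:.2f}")

# Greedy decoding simply takes the single most likely word.
print("greedy pick:", candidates[int(np.argmax(probs))])
```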

“ChatGPT and other LLMs produce bullshit, in the sense that they don’t care about the truth of their outputs,” Hicks said in an email to Fast Company. “Thinking of LLMs this way provides a more accurate way of thinking about what they are doing, and thereby allows consumers and regulators to better understand why they often get things wrong.”

Importantly, the LLM doesn’t always choose the word that is statistically most likely to follow, the researchers point out. Letting the model choose among a set of more and less likely candidates for the next word gives the output an unexpected, creative, or even human quality. This quality can be modulated with a control that AI developers call “temperature.” But dialing up the model’s temperature increases the chances it will generate falsehoods.
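A minimal sketch of how temperature works, again with a toy vocabulary and made-up scores: the raw scores are divided by the temperature before being turned into probabilities, so a low temperature concentrates the distribution on the top word while a high temperature flattens it and lets unlikely words through more often.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_next(logits, temperature=1.0):
    """Sample a candidate index after scaling logits by temperature.

    Lower temperature sharpens the distribution toward the top choice;
    higher temperature flattens it, making unlikely words more probable.
    """
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

candidates = ["blue", "falling", "green", "a"]
logits = [4.0, 1.5, 0.5, 0.2]  # hypothetical scores, as above

for t in (0.2, 1.0, 2.0):
    picks = [candidates[sample_next(logits, t)] for _ in range(10)]
    print(f"temperature={t}: {picks}")
```

At temperature 0.2 nearly every sample is "blue"; at 2.0 the less likely candidates show up regularly, which is the trade-off the researchers describe.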

When LLMs “hallucinate,” it’s normally because they’ve encountered a dearth of information about the particular subject or context within their training data. They rely on their existing probability settings and, in a sense, wing it.
