> For example, if you ask a model something it doesn’t know, the drive to be helpful can sometimes overtake the drive to be honest. And faced with a hard task, LLMs sometimes cheat. “Maybe the model really wants to please, and it puts down an answer that sounds good,” says Barak. “It’s hard to find the exact balance between a model that never says anything and a model that does not make mistakes.”
"Wants to"
I find I have a weird take on this kind of anthropomorphism... It doesn't bother me when people say a rock "wants to" roll downhill, but it does when someone says an LLM "wants to"... Equally, I'm not nearly as bothered by something like the LLM "tries to"... It's a strange rule set for correct communication...
Anyways, please forgive that preemptive tangent. My primary point is:
[ citation needed ]
I remember reading a paper praising GPT for being able to explain its decision-making process. That paper provided no evidence, no arguments, and no citations for this exceptionally wild claim. How is this not just a worse version of that claim? I ask that as a real question: so many people willingly believe and state that an LLM is able to (correctly) explain its decision-making process. How? Why isn't it better to assume that's just another hallucination? Especially given that it would be non-falsifiable?
https://archive.is/69DwW