You're both completely missing the point. It's important that an LLM be able to perform exact arithmetic reliably without a tool call. Of course the underlying hardware does so extremely rapidly; that's not the point.
Damn, who knew there would be an arms race to gobble up those domains and be the sole judge to decide if we reached arbitrary milestones? I should have thought of that sooner :P
That’s a coward’s take, and even if you are taking the middle-ground route there are sufficiently legal ways. You just won’t find much enthusiasm about it among people here because the demographic of this platform is living comfortably.
What's strange is I just saw a TikTok video in my Waymo earlier.
"a perfect explanation of quantum tunneling"
It was a baseball game. A pitcher had thrown a pitch, and there was some optical illusion of the ball appearing to go through the batter's swing. It looked like the ball went through the bat. Apparently this is quantum tunneling. The atoms aligned perfectly and the ball passed through.
How does a model “trigger” self-harm? Surely it doesn’t catalyze the dissatisfaction with the human condition that leads to it. There’s no reliable data that can drive meaningful improvement there, and so it is merely an appeasement op.
Same thing with “psychosis”, which is a manufactured moral panic crisis.
If the AI companies really wanted to reduce actual self harm and psychosis, maybe they’d stop prioritizing features that lead to mass unemployment for certain professions. One of the guys in the NYT article for AI psychosis had a successful career before the economy went to shit. The LLM didn’t create those conditions, bad policies did.
By telling paranoid schizophrenics that their mother is secretly plotting against them and telling suicidal teenagers that they shouldn’t discuss their plans with their parents. That behavior from a human being would likely result in jail time.
Agents, tool-integrated reasoning, and even chain of thought (to a limited extent, for some math) can address this.
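To make the tool-integrated approach concrete, here's a minimal sketch (all names hypothetical, not any vendor's actual API): an agent loop that spots a calculator-style tool call in the model's output and substitutes an exact answer computed by ordinary code, so correctness never depends on next-token sampling.

```python
from fractions import Fraction
import re

# Hypothetical sketch of a "calculator tool" an agent loop could dispatch to.
def calc_tool(expr: str) -> str:
    # Allow only digits, whitespace, and basic arithmetic operators.
    if not re.fullmatch(r"[\d\s+\-*/().]+", expr):
        raise ValueError(f"unsupported expression: {expr!r}")
    # Wrap integer literals in Fraction so division stays exact (no float rounding).
    tokens = re.sub(r"(\d+)", r"Fraction(\1)", expr)
    result = eval(tokens, {"Fraction": Fraction})
    return str(result)

def run_turn(model_output: str) -> str:
    # The loop scans the model's text for a tool call like CALC[...]
    # and replaces it with the tool's exact result.
    m = re.search(r"CALC\[(.+?)\]", model_output)
    if m:
        return model_output.replace(m.group(0), calc_tool(m.group(1)))
    return model_output

print(run_turn("The product is CALC[123456789 * 987654321]."))
# → The product is 121932631112635269.
```

The point of the sketch: the model only has to emit a well-formed tool call, a much easier target than producing 18 correct digits token by token.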