Hacker News | Nuzzerino's comments

> We need LLMs to be able to tap that, not add the same functionality a layer above and MUCH less efficiently.

Agents, tool-integrated reasoning, even chain of thought (limited, for some math) can address this.
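To make "tool-integrated reasoning" concrete, here is a minimal sketch of the pattern: instead of predicting digits token by token, the model emits a small arithmetic expression as a tool-call payload and the runtime evaluates it exactly. Everything here is a generic illustration, not any particular vendor's API.

```python
# Minimal sketch of tool-integrated arithmetic (hypothetical runtime, no
# specific LLM framework): the model's tool call carries an expression
# string, and the host evaluates it exactly instead of the model guessing.
import ast
import operator

# Only plain arithmetic operators are allowed, so this is safe to run on
# model-generated input (unlike a bare eval()).
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def calc(expr: str):
    """Exactly evaluate a simple arithmetic expression from a tool call."""
    def ev(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

# What the runtime would do with a model-emitted payload:
print(calc("123456789 * 987654321"))
```

The point of the pattern is that correctness comes from the host evaluator, not from the model's weights; the model only has to produce the right expression.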


You're both completely missing the point. It's important that an LLM be able to perform exact arithmetic reliably without a tool call. Of course the underlying hardware does so extremely rapidly; that's not the point.

The computer ALREADY does do math reliably. You are missing the point.

Could you explain why that is?

A tool call is like 100,000,000x slower, isn't it?
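The ~100,000,000x figure is roughly what a back-of-the-envelope comparison gives, assuming ~1 ns for a native hardware add and ~100 ms for a full LLM tool-call round trip. Both latencies are assumptions for illustration, not measurements:

```python
# Rough latency comparison; both figures are assumed, not measured.
hw_add_s = 1e-9     # ~1 ns for a native integer add on modern hardware
tool_call_s = 1e-1  # ~100 ms for an LLM tool-call round trip (network + inference)

print(f"slowdown: {tool_call_s / hw_add_s:.0e}x")  # → slowdown: 1e+08x
```

Under those assumptions the eight-orders-of-magnitude claim checks out, though the round-trip estimate dominates and varies widely in practice.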

No idea really, but if it is speed-related, I would have thought OP would have argued from speed rather than importance to make their point.

It's both. Being directly part of the model makes it integrated into its intelligence for training and operation.

Damn, who knew there would be an arms race to gobble up those domains and be the sole judge to decide if we reached arbitrary milestones? I should have thought of that sooner :P


That’s a coward’s take, and even if you are taking the middle-ground route there are sufficiently legal ways. You just won’t find much enthusiasm about it among people here because the demographic of this platform is living comfortably.


It’s okay to look at things as art. Not everything needs to be explained to have value.


What's strange is I just saw a TikTok video in my Waymo earlier.

"a perfect explanation of quantum tunneling"

It was a baseball game. A pitcher had thrown a pitch, and there was some optical illusion of the ball appearing to go through the batter's swing. It looked like the ball went through the bat. Apparently this is quantum tunneling. The atoms aligned perfectly and the ball passed through.


How does a model “trigger” self-harm? Surely it doesn’t catalyze the dissatisfaction with the human condition, leading to it. There’s no reliable data that can drive meaningful improvement there, and so it is merely an appeasement op.

Same thing with “psychosis”, which is a manufactured moral panic.

If the AI companies really wanted to reduce actual self harm and psychosis, maybe they’d stop prioritizing features that lead to mass unemployment for certain professions. One of the guys in the NYT article for AI psychosis had a successful career before the economy went to shit. The LLM didn’t create those conditions, bad policies did.

It’s time to stop parroting slurs like that.


‘How does a model “trigger” self-harm?’

By telling paranoid schizophrenics that their mother is secretly plotting against them and telling suicidal teenagers that they shouldn’t discuss their plans with their parents. That behavior from a human being would likely result in jail time.


At least they didn’t claim to invent AGI this time from prompts only… lol


There are other languages like Linear A that could use attention as well!


Ever tried to get a remote job lately?


How can you do this in the spirit of what the author is talking about if you have some kind of chronic pain?


That's not necessarily a bad thing.

