That makes this more ok, IMO. I'm otherwise against "AI-edited" being part of the rules — it's very hard to draw the line (does asking an AI for synonyms of a word count?). AI-editing is especially a valuable tool for non-native-English speakers or similar.
Despite commenting on this literally five seconds ago in the sibling comment, I hadn't made the connection that if "vav" is V, then using "vav vav" is like "VV" which is like "W". I wonder if this is a real thing.
In any case, I'm pretty sure it's just a coincidence; I don't think it's a stylistic thing, unless I'm missing something.
It's pronounced the same as in English. Wiz, Waze, Wix. It's written with "double vav" in Hebrew, not just a single vav which would make it read as Viz.
Though note that, as GP said, people famously do much better on the Wason selection task when it's framed in a social context. That at least partially undermines your theory that it's a lack of familiarity with the terminology of formal logic.
Maybe the social version just creates a context where "if x then y" obviously does not include "if not x then not y". Everyone knows people over the drinking age can drink both alcoholic and non-alcoholic drinks, so you obviously don't have to check the person drinking the soft drink to make sure they aren't an adult.
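The logic of the social version can be sketched in a few lines (the patrons and the age threshold here are made up for illustration; the point is which cases can even violate the rule):

```python
# Social framing of the Wason task: rule "if drinking alcohol, then over 18".
# Each patron shows either their drink or their age; we decide who to check.
patrons = [("drink", "beer"), ("drink", "cola"), ("age", 25), ("age", 16)]

def must_check(kind, value):
    # Only a visible alcoholic drink (the antecedent) or a visibly
    # under-age person (the negated consequent) can violate the rule.
    # A soft drink or a 25-year-old can't, no matter what's hidden.
    if kind == "drink":
        return value == "beer"   # alcoholic drink: need to verify the age
    return value < 18            # under-age: need to verify the drink

print([p for p in patrons if must_check(*p)])
# → [('drink', 'beer'), ('age', 16)]
```

Note that we never check the cola drinker: "if alcohol then over 18" says nothing about what people over 18 may drink, which is exactly the inference the abstract card version trips people up on.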
Some people can really benefit from using LLMs to help them write. E.g. non-native speakers.
LLM-assisted writing doesn't have to be low effort; it can help people express themselves better in many cases. I'd argue that someone who spent their time doing multiple passes with an LLM to get their phrasing just right has obviously taken more care than the majority of people on HN take before commenting.
And if you don't like the way something is written? Just downvote it. That's true whether or not it's partially/wholly written by an LLM.
Aren't downvotes on this forum restricted to 500+ karma? And how would those compare to flagging? I'd hate for people under 500 karma to think they need to flag a post in order to get any attention from moderation. And, given your point that LLMs help folks write, wouldn't that make the community worse for them?
I should clarify — I disagree with disallowing any comments that used LLMs in the writing. I think comments should be judged on their quality, not on how they were written.
I might agree (don't know) with the idea of limiting new accounts more heavily.
> I disagree with disallowing any comments that used LLMs in the writing.
I think the point here is that the community doesn't want to read AI slop, not that using an LLM to clean up your writing contains some inherent evil that prevents quality.
I don't want to accuse you of strawmanning the argument, but honestly, where did you ever see someone advocating the latter?
> LLM-assisted-writing doesn't have to be low effort, it can help people express themselves better in many cases.
Hard disagree. I have been learning another language and wouldn't pass off posts an LLM rewrote as my own writing, because that is literally lower effort than learning the language properly.
Like definitionally, you are using a machine to offload effort. I don’t know how you could claim that is not “low effort” when that’s the point of the tool.
I wasn't talking about someone learning the language and using this instead of learning it.
There are a lot of people who understand English fairly well, but are not actively learning the language, are not native speakers, and can use LLMs to catch grammar mistakes that they otherwise wouldn't notice. Or catch small nuances in what they are saying, small implications that could otherwise go unnoticed.
In general, I push back on people saying "I can't find a good/legitimate use for this technology, therefore there are no good/legitimate uses for it".
> In general, I push back on people saying "I can't find a good/legitimate use for this technology, therefore there are no good/legitimate uses for it".
Is that genuinely what you think most of the complaints on HN are saying?
IMNSHO that's an absurd statement to make about the other side of the argument. I'm still giving the benefit of the doubt here but jeeze, this really smells like a strawman.
There are dozens of whole classes of criticism of these tools that I see made on HN, and none of them fall into the category you described.
Ex: Saying "juniors who rely on Copilot/Claude/etc become lazy and produce low-quality code without learning how to do better" is night and day different from what you're saying. And that's a criticism that must be addressed, or the entire global software industry will destroy itself in two generations.
Surely the difference between that and "we don't want anybody to use Grammarly in their subs that show up here" is completely obvious, yes?
1. We're very bad at measuring developer productivity. We've been trying to do it for a long time and, from my POV, have very little to show for it.
2. That said, almost all the people who "want to see a study" don't make sense to me. I don't remember anyone insisting on seeing a study showing that writing Python is more productive than C; people just used it and largely agreed that it was. How many studies show that git (or DVCSs generally) is better than the tools that preceded it? I don't know if any exist. I do know that nobody was looking for studies before switching to git.
I don't ever remember seeing any new technology in software development for which people demanded studies before adopting it. They just assumed that if the professional developers they trusted to build their software said something was better, then it was — a correct assumption IMO.
Now, we're seeing a technology which most professional developers — that have used it seriously, at least — insist is orders of magnitude better than anything else that's come before it. And suddenly developers can't be trusted? Suddenly, when the claimed effect is orders of magnitude bigger than almost any other new technology, developers are biased and incapable of making this kind of determination?
I really don't think that's a serious position to hold.
>Now, we're seeing a technology which most professional developers — that have used it seriously, at least — insist is orders of magnitude better than anything else that's come before it.
You can't just assert this. I could equally-baselessly say most professional developers have used LLMs and find them, overall, more trouble than they're worth. Except it's not totally baseless because I think that was actually a result of a study, IIRC.
But we didn't have the switch from C to Python shoved down our throats by management, or social media telling us we'd be left behind if we didn't use Python, did we?
In the C vs. Python case, we know the technical trade-offs and when to use which, but in AI productivity narratives we keep pretending that the technical and cognitive debt created by AI doesn't exist.
Sure, person A can be 20% "faster" and suggest that this tool increases productivity by an order of magnitude, but if it costs person B 50% more time to review A's slop or clean up A's mess, the team's productivity doesn't really increase.
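As a toy calculation (the numbers are the illustrative ones from the comment above, and the mapping of "50% more review time" to lost capacity is my assumption, not a real measurement):

```python
# Two-person team, each with baseline output 1.0.
a_output = 1.20                   # A is 20% "faster" with the tool
b_review_overhead = 0.50          # B loses 50% of their time to reviewing A
b_output = 1.0 - b_review_overhead  # B's remaining productive capacity

team_before = 1.0 + 1.0           # baseline team output
team_after = a_output + b_output  # output with the tool in the loop

print(team_after / team_before)   # → 0.85: the team is net ~15% slower
```

Under these (made-up) numbers, the individual speedup is real but the team throughput drops, which is the point being made.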
Turing gave a pretty rigorous definition of the Turing Test IMO. Well, as rigorous as something that is inherently "anecdotal" can be, which is part of the philosophical point of the Turing Test.
I'm a purely amateur mathematician and not a physicist at all, but I completely agree with you that maths is missing this "midway between pop-math and real-math-textbook" kind of book.
One book that I can't recommend highly enough is William Dunham's "Journey Through Genius". He picks ten or so of the greatest proofs in math over the centuries, gives all the historical background about why they were created and who created them, and then presents the full details of the proofs themselves, including the places where a proof is considered flawed by modern standards.
It's my favorite "semi-pop-math" book, I highly recommend it.
> I never heard that. It didn't seem like 3D printing ever showed signs of displacing existing ways of manufacturing at scale, did it?
It absolutely was the "promise" the media spun.
I had the relatively unusual experience of moving from being an outsider to this field to being an insider. While I was an outsider, my impression, formed by the media, was exactly that: 3D printing would be the next big revolution, in a few years there'd be a printer in every home, etc.
I then joined a company that allocated a lot of resources to 3D printing. It only took me a month or two to realize that the big media claims were absolutely ridiculous and didn't make any sense as stated. They misunderstood the state of the technology, and they misunderstood basic economics and how regular manufacturing works.
That's not to say there's no value in 3d printing or the maker movement. There's a ton of value that's been uncovered. But the specific media dream of "people will be printing their plates at home instead of buying them in the store" was never real.
(Btw, IMO "vibe coding" is absolutely real and revolutionary, likely the biggest revolution in the software industry since, idk, the invention of the computer itself. And AI more generally is, even beyond vibe coding aspect, a revolutionary technology that will change the world in many ways.)
I'm exactly the opposite. It'd been on my todo list for years to one day learn the difference between the different dashes. I kept putting it off.
Then came LLMs, and there was so much talk of them using em dashes. A few weeks ago, I finally decided it's time and learned the difference. (Which took all of 2 minutes, btw.) Now I love em dashes and am putting them everywhere I can! Even though most people now assume I'm using AI to write for me.