Hacker News | ryanisnan's comments

Are you serious? When you lower the cost of killing, nobody wins.


It usually boils down to who controls the technology, not the absolute cost.

Centralized AI killbots with no safety controls are almost certainly bad.

Individually owned and controlled militias of defensive (and decentralized) AI killbots? Unclear.


The film 'Slaughterbots' presents a scenario which could be either of those, but is implied to be the latter.


Yes, but 'Murderbot' makes it clear which alternative is preferable, and also introduces the excellent Sanctuary Moon.

You don't lower the cost of killing by improved targeting, you lower it by thugs shooting people in broad daylight with no consequences.

I understand the argument that moving the decision-making power to a black box would clear the operator's conscience, yadda yadda yadda, but newsflash: the price of human life is falling so quickly that I think we're far beyond the point where it matters.


What you're describing, though less severe than killing, is essentially the "broken windows" theory. https://en.wikipedia.org/wiki/Broken_windows_theory


No, I'm not kidding. Some people need to be killed. Look at all the "collateral damage" when America kills people that need to be killed. Could AI help let us kill the people who need a killing, without killing the people who shouldn't be killed?


For reals. I love the general premise behind the article, but to me how you publish it, and how others access it, is the sauce. Creating static sites is hardly the problem.


I'm not totally clear, but it probably involves the ol' stranger.


The amount of negativity in the original post was astounding.

People were making all sorts of statements like:

- "I cloned it and there were loads of compiler warnings"
- "the commit build success rate was a joke"
- "it used 3rd party libs"
- "it is AI slop"

What they all seem to be just glossing over is how the project unfolded: without human intervention, using computers, in an exceptionally accelerated time frame, working 24hr/day.

If you are hung up on commit build quality, or code quality, you are completely missing the point, and I fear for your job prospects. These things will get better; they will get safer as the workflows get tuned; they will scale well beyond any of us.

Don’t look at where the tech is. Look where it’s going.


As mentioned elsewhere (I'm the author of this blog post), I'm a heavy LLM user myself; I use it every day as a tool and get lots of benefits from it. It's not a "hit post" on using LLM tools for development; it's a post about Cursor making grand claims without being able to back them up.

No one is hung up on the quality, but there is a ground truth of whether something compiles or doesn't. No one is gonna claim a software project was successful if the end artifact doesn't compile.


I think, for the point of the article, it appeared to, at some point, render homepages for select well-known sites. I certainly did not expect this to be a serious browser, with any reliability or legs. I don't think that is dishonest.


> I certainly did not expect this to be a serious browser, with any reliability or legs.

Me neither, and I note so twice in the submission article. But I also didn't expect a project that, for the last 100+ commits, couldn't reliably be built, and therefore couldn't be tested or tried out.


My apologies - my point(s) were more about the original submission for the Cursor blog post, not your post itself.

I did read your post, and agree with what you're saying. It would be great if they pushed the agents to favour reliability or reproducibility, instead of just marching forwards.


> What they all seem to be just glossing over is how the project unfolded: without human intervention, using computers, in an exceptionally accelerated time frame, working 24hr/day.

Correct, but Gas Town [1] already happened and, what's more, _actually worked_, so this experiment is both useless (because it doesn't demonstrate working software) _and_ derivative (because we've already seen that, with spend similar to that of a single developer, you can set up a project that churns out more code than any human could read in a week).

[1]: https://github.com/steveyegge/gastown


> What they all seem to be just glossing over is how the project unfolded: without human intervention, using computers, in an exceptionally accelerated time frame, working 24hr/day.

The reason I have yet to publish a book is not because I can't write words. I got to 120k words or so, but they never felt like the right words.

Nobody's giving me (nor should they give me) a participation trophy for writing 120k words that don't form a satisfying novel.

Same's true here. We all know that LLMs can write a huge quantity of code. Thing is, so does:

  yes 'printf("Hello World!");'
The hard part, the entire reason to be either afraid for our careers or thrilled that we can switch to something more productive than being code monkeys for yet-another-CRUD-app (depending on how we feel), is writing the right code, and that's the specific test this experiment failed.


Spending 24h/day to build nothing isn't impressive - it's really, really bad. That's worse than spending 8h/day to build nothing.

If the piece of shit can't even compile, it's equivalent to 0 lines of code.

> Don’t look at where the tech is. Look where it’s going.

Given that the people making the tech seem incapable of not lying, that doesn't give me hope for where it's going!

Look, I think AI and LLMs in particular are important. But the people actively developing them do not give me any confidence. And, neither do comments like these. If I wanted to believe that all of this is in vain, I would just talk to people like you.


>If you are hung up on commit build quality

I'm sorry but what? Are you really trying to argue that it doesn't matter that nothing works, that all it produced is garbage and that what is really important is that it made that garbage really quickly without human oversight?

That's.....that's not success.


Quality absolutely matters, but it's hyper context dependent.

Not everything needs, or should have, the same quality standards applied to it. For the purposes of the Cursor post, it doesn't bother me that most of the commits produced failed builds. I assume from their post that, at some points, it was capable of building and rendering the pages shown in the video. That alone is the thing I find interesting.

Would I use this browser? Absolutely not. Do I trust the code? Not a chance in hell. Is that the point? No.


"Quality" here isn't if A is better than B. It's "Does this thing actually work at all?"

Sure, I don't care too much if the restaurant serves me food with silverware that is 18/10 vs 18/0 stainless steel, but I absolutely do care if I order a pizza and they just dump a load of gravel onto my plate and tell me it's good enough, and after all, quality isn't the point.


Software that won’t compile and doesn’t do anything is not software, it’s just a collection of text files. A computer that won’t boot isn’t a computer anymore, it’s a paperweight. A car that won’t start isn’t a car anymore, it’s scrap metal.

I can bang on a keyboard for a week and produce tons of text files - but if they don’t do anything useful, would you consider me a programmer?


> Quality absolutely matters, but it's hyper context dependent.

There are very few software development contexts where the quality metric of “does the project build and run at all” doesn’t matter quite a lot.


It is hard to look at where it is going when there are so many lies about where the tech is today. There are extraordinary claims made on Twitter all the time about the technology, but when you look into things, it’s all just smoke and mirrors, the claims misrepresent the reality.


What a silly take. Where the tech is is extremely relevant. The reality is that this blog post shows the tech clearly isn't going anywhere better either, whatever they imply. 24 hours of useless code is still useless code.

This idea that quality doesn't matter is silly. Quality is critical for things to work, scale, and be extensible. By either LLMs or humans.


People that spend time poking holes in random vendor claims remind me of folks you see video of standing on the beach during a tsunami warning. Their eyes fixed on the horizon looking for a hundred foot wave, oblivious to the shore in front of them rapidly being gobbled up by the sea.


> oblivious to the shore in front of them rapidly being gobbled up by the sea

Am I misunderstanding this metaphor? Tsunamis pull the sea back before making landfall.


This looks awesome - my son loves gears, and my wife and I have been talking about buying him a 3D printer soon. Thank you!


You are a great writer - thanks for putting this together!


I think the average human would do a far worse job at predicting what the HN homepage will look like in 10 years.


That one got me as well - some pretty wild stuff about prompting the compiler, starship on the moon, and then there's SQLite 4.0


You can criticize it for many things but it seems to have comedic timing nailed.


Needs a dang archetype, who merges similar posts.


that is a great idea.


thanks! love the app, it's really fun, and surprisingly engaging, despite knowing that it's all AI nonsense


What an awesome piece of technology. I've been wanting to create something similar, just on the technical merits. We have some pretty amazingly capable technology these days, but so much of it relies on IP infrastructure, which is fine when things work and you are either aligned with your government or live in a society with strong checks and balances on government overreach.


Exactly. With Chat Control being revived again in the EU, various VPN bans being proposed in US states, and ID verification rolling out seemingly everywhere, this kind of tech may end up being more useful than people expect. If it works in the extremely adversarial environment of a warzone, it should work fine here.


How is this a solution to Chat Control and EU law? If this is used, governments will simply demand Apple and Google get the app declared forbidden, which both have done to apps for many reasons.

Worse: they might demand a list of people who have it installed (and this violates the Chat Control law of course).

Even worse: this app turns out to be written by a security agency or scammers and starts exploiting people.


If they are demanding a list of people who have apps installed, you have two options: lie down like a dog or get in the streets and fight. If you think it’s going to get to that point, you need tools like this even more.


Why is chat control controversial? It seems like the people afraid of this are the same people outraged when others then use private chat to do bad things.


The thing I really like about the approach taken by OP is that, AFAIK, it is broadcast-only, up to a certain radius. The hard part in mesh networking is routing, and broadcast sidesteps that.
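To make the idea concrete, here is a toy Python sketch (my own illustration, not OP's code or protocol) of broadcast flooding with duplicate suppression and a hop limit: no routing tables anywhere, and the TTL is what bounds the "certain radius":

```python
import uuid

class Node:
    """Toy broadcast-only mesh node: flood to neighbors, suppress
    duplicates by message ID, and stop when the hop limit runs out."""

    def __init__(self, name):
        self.name = name
        self.neighbors = []   # nodes within radio range
        self.seen = set()     # message IDs already handled
        self.inbox = []       # payloads delivered to this node

    def link(self, other):
        # Symmetric radio link between two nodes.
        self.neighbors.append(other)
        other.neighbors.append(self)

    def broadcast(self, payload, ttl=3):
        # Originate a message with a fresh ID and a hop budget.
        self.receive(uuid.uuid4().hex, payload, ttl)

    def receive(self, msg_id, payload, ttl):
        if msg_id in self.seen:
            return            # dedup: each message is processed once per node
        self.seen.add(msg_id)
        self.inbox.append(payload)
        if ttl > 0:           # hop limit bounds the broadcast radius
            for n in self.neighbors:
                n.receive(msg_id, payload, ttl - 1)

# Chain a - b - c - d: with ttl=2 the message reaches c but not d.
a, b, c, d = Node("a"), Node("b"), Node("c"), Node("d")
a.link(b); b.link(c); c.link(d)
a.broadcast("hello", ttl=2)
```

The dedup set is doing the real work here: without it, two linked nodes would bounce the same message back and forth until the TTL expired on every path.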

