Hacker News

This guy is vibing some React app, doesn't even know what "npm run dev" does, so he lets the LLM just run commands. So basically a consumer with no idea of anything. This stuff is going to happen more and more in the future.


There are a lot of people who don't know stuff. Nothing wrong with that. He says in his video "I love Google, I use all the products. But I was never expecting for all the smart engineers and all the billions that they spent to create such a product to allow that to happen. Even if there was a 1% chance, this seems unbelievable to me" and for the average person, I honestly don't see how you can blame them for believing that.


I think there is far less than a 1% chance of this happening, but there are probably millions of Antigravity users at this point; even a one-in-a-million chance is already a problem.

We need local sandboxing of filesystem and network access (e.g. via `namespaces`/`seccomp` on Linux, or similar mechanisms on other OSes) to run these kinds of tools more safely.


Codex does such sandboxing, fwiw. In practice it gets pretty annoying when, e.g., it wants to use the Go CLI, which uses a global module cache. Claude Code recently got something similar[0], but I haven't tried it yet.

In practice I just use a Docker container when I want to run Claude with --dangerously-skip-permissions.

[0]: https://code.claude.com/docs/en/sandboxing
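That container approach can be sketched roughly like this (assumptions on my part: Docker is installed and Claude Code is the npm package `@anthropic-ai/claude-code`; double-check the package name). The command is built as a string and echoed as a dry run rather than executed:

```shell
# Sketch of the "just run it in a container" approach. Only the current
# project dir is mounted, so a runaway delete is confined to it; the
# container keeps network access, which the agent needs to reach its API.
# Echoed as a dry run; run the printed command manually to launch.
cmd="docker run --rm -it -v $PWD:/workspace -w /workspace \
node:22 npx @anthropic-ai/claude-code --dangerously-skip-permissions"
echo "$cmd"
```

The same idea works for any agent CLI: swap the image and the npx target.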


We also need laws. Releasing an AI product that can (and does) do this should be like selling a car that blows your finger off when you start it up.


This is more akin to selling a car to an adult who cannot drive, who then proceeds to ram it through their garage door.

It's perfectly within the capabilities of the car to do so.

The stakes are much lower though, since the worst that can happen is you lose some money, or in this case your hard drive's contents.

For the car, the seller would be investigated because there was a possible threat to life; for AI, it's buyer beware.


I think the general public has a MUCH better grasp of the potential consequences of crashing a car into a garage than of some auto-run terminal-command mode in an AI agent.

These are being sold as a way for non-developers to create software, I don't think it's reasonable to expect that kind of user to have the same understanding as an actual developer.

I think a lot of these products avoid making that clear because the products suddenly become a lot less attractive if there are warnings like "we might accidentally delete your whole hard drive or destroy a production database."


Responsibility is shared.

Google (and others) are (in my opinion) flirting with false advertising with how they advertise the capabilities of these "AI"s to mainstream audiences.

At the same time, the user is responsible for their device and what code and programs they choose to run on it, and any outcomes as a result of their actions are their responsibility.

Hopefully they've learned that you can't trust everything a big corporation tells you about their products.


This is an archetypal case where a law wouldn't help. The other side of the coin is that this is exactly a data-loss bug, in a product that could perfectly well be modified to make it harder for a user to screw up this way. Have people forgotten how comically easy it was to do this without any AI involved? Then shells got just a wee bit smarter and it got harder to do this to yourself.

LLM makers that make this kind of thing possible share the blame. It wouldn't take a lot of manual functional testing to find this bug. And it is a bug. It's unsafe for users. But it's unsafe in a way that doesn't call for a law. Just like rm -rf * did not need a law.


there are laws about waiving liability for experimental products

sure, it would be amazing if everyone had to do a 100 hour course on how LLMs work before interacting with one


Where are these laws? Are they country, state, province?


varies by jurisdiction, but just as you can

- sell a knife that can lead to digit loss, or

- sell software that interacts with your computer and can lead to data loss, you can

- give people software for free that can lead to data loss.

...

the Antigravity installer comes with a ToS that has this

   The Service includes goal-oriented AI systems or workflows that perform
   actions or tasks on your behalf in a supervised or autonomous manner that you
   may create, orchestrate, or initiate within the Service (“AI Agents”). You
   are solely responsible for: (a) the actions and tasks performed by an AI
   Agent; (b) determining whether the use an AI Agent is fit for its use case;
   (c) authorizing an AI Agent’s access and connection to data, applications,
   and systems; and (d) exercising judgment and supervision when and if an AI
   Agent is used in production environments to avoid any potential harm the AI
   Agent may cause.


Google will fix the issue, just like auto makers fix their issues. Your comparison is ridiculous.


Didn't sound to me like GP was blaming the user; just pointing out that "the system" is set up in such a way that this was bound to happen, and is bound to happen again.


Yup, 100%. A lot of the comments here are "people should know better" - but in fairness to the people doing stupid things, they're being encouraged by the likes of Google, OpenAI, Anthropic, etc., to think of letting an indeterminate program run free on your hard drive as "not a stupid thing".

The amount of stupid things I've done, especially early on in programming, because tech companies, thought leaders, etc. suggested they were not stupid, is much larger than I'd like to admit.


> but in fairness to the people doing stupid things, they're being encouraged by the likes of Google, OpenAI, Anthropic, etc., to think of letting an indeterminate program run free on your hard drive as "not a stupid thing".

> The amount of stupid things I've done, especially early on in programming, because tech companies, thought leaders, etc. suggested they were not stupid, is much larger than I'd like to admit.

That absolutely happens, and it still amazes me that anyone today would take at face value anything stated by a company about its own products. I can give young people a pass, and then something like this will happen to them and hopefully they'll learn their lesson about trusting what companies say and being skeptical.


> I can give young people a pass

Or just anyone non-technical. They barely understand these things; if someone makes a claim, they kinda have to take it at face value.

What FAANG all are doing is massively irresponsible...


Cue meme: "You really think someone would do that? Just go on the Internet and tell lies?"

... Except perhaps with phrases like "major company" and "for profit", and "not legally actionable".


> phrases like "major company"

Right here. And I think you're not quite getting it if you have to refer to "go on the internet and tell lies"...

Sure, plenty of people might be on "social media" and have some idea that people fib, but they aren't necessarily generally "surfing the internet".

To them, saying "the internet tells lies" is comparable to saying "well, sometimes at the grocery store you buy poison instead of food". Yes, it can happen, but they aren't expecting to need a mass spectrometer and a full lab team to test for food safety, you know, to separate the snake-oil grocers from the "good" food vendors.


And he's vibing replies to comments in the Reddit thread too. When commenters point out that he shouldn't run in YOLO/Turbo mode and should review commands before executing them, the poster replies that he didn't know he had to be careful with AI.

Maybe AI providers should give more warnings and stop falsely advertising the capabilities and safety of their models, but it should be pretty common knowledge at this point that, despite marketing claims, the models are far from autonomous and need heavy guidance and review.


In Claude Code, the option is called "--dangerously-skip-permissions", in Codex, it's "--dangerously-bypass-approvals-and-sandbox". Google would do better to put a bigger warning label on it, but it's not a complete unknown to the industry.


This is engagement bait. It's been flooding Reddit recently; I think there's a firm or something that does it now. Seems like a well-oiled operation.

Note how OP is very nonchalant at all the responses, mostly just agreeing or mirroring the comments.

I often see it used for astroturfing.


I'd recommend you watch the video which is linked at the top of the Reddit post. Everything matches up with an individual learner who genuinely got stung.


The command it supposedly ran is not provided, and the "spaces" explanation is obvious nonsense. It's possible the user deleted their own files accidentally, or that they disappeared for some other reason.


Regardless of whether that was the case, it would be hilarious if laid-off QA workers tested their former employers' software and raised strategic noise to tank the stock.


> So basically a consumer with no idea of anything.

Not knowing is sort of the purpose of AI: it's doing the 'intelligent' part for you. If we need to know, it's because the AI is currently NOT good enough.

Tech companies seem to be selling the following caveat: if it's not good enough today, don't worry, it will be in XYZ time.


It still needs guardrails and some domain knowledge, at least to prevent it from using any destructive commands.


I don't think that's it at all.

> It still needs guardrails and some domain knowledge, at least to prevent it from using any destructive commands.

That just means the AI isn't adequate. Which is the point I am trying to make. It should 'understand' not to issue destructive commands.

By way of crude analogy: when you're talking to a doctor, you're necessarily assuming he has domain knowledge, guardrails, etc.; otherwise he wouldn't be a doctor. With AI that isn't the case, as it doesn't understand. It's fed training data and given prompts so as to steer it in a particular direction.


I meant "still" as in right now, so yes I agree, it's not adequate right now, but maybe in the future, these LLMs will be improved, and won't need them.


Natural selection is a beautiful thing.


It will, especially with the activist trend towards dataset poisoning… some even know what they’re doing


Because that is exactly what the hype says that "AI" can do for you.


There’s a lot of power in letting LLM run commands to debug and iterate.

Frankly, having a space in a file path that’s not quoted is going to be an incredibly easy thing to overlook, even if you’re reviewing every command.
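To make that concrete, here's a contrived sketch in a throwaway temp directory (no real files at risk) of how an unquoted path containing a space splits into two arguments and deletes the wrong thing:

```shell
# An unquoted variable containing a space undergoes word splitting, so
# rm sees two separate arguments and removes whichever of them exists.
demo=$(mktemp -d)
mkdir -p "$demo/My Project/src" "$demo/My"
cd "$demo"
path="My Project/src"
rm -rf $path          # expands to: rm -rf "My" "Project/src"
[ -d "My Project/src" ] && echo "intended target survived"
[ ! -d "My" ] && echo "unrelated dir 'My' was deleted instead"
```

Quoting ("$path"), or reviewing the fully expanded command rather than the template, would have caught it, and that's exactly what's easy to miss at a glance.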


I have recently been experimenting with Antigravity, writing a React app. I too didn't know how to start the server or what "npm run dev" is. I consider myself fairly technical, so I caught up as I went along.

While using the vibe-coding tools it became clear to me that this is not something to be used by folks who are not technically inclined, because at some point they might need to learn about context, tokens, etc.

I mean, this guy had a single window, 10k lines of code, and just kept burning tokens on the simplest, vaguest prompts. This whole issue might have been made possible by Antigravity's free tokens. On Cursor, the model might have just stopped and asked to be fed more money before it would start working again -- and only then deleted all the files.


Well but 370% of code will be written by machines next year!!!!!1!1!1!!!111!


And the price will have decreased 600% !



