Hacker News | dr_dshiv's comments

“Don’t code yet” is a longstanding part of the repertoire

It’s like an imperfect vaccine: broad immune protection. Evolution has probably come up with a bunch of these.

My timbers are shivering.

These days, the command line offers way better usability and accessibility (because Claude Code can do it). Whenever I have to use a GUI I’m like uuughgh…

Am I the only one who thinks like this?


I easily go through two $200/mo Max accounts and yesterday got a third Pro account when I ran out.

It’s worth it, but I know they aren’t making money on me. But of course, I’m marketing them constantly, so…


“The impotence of naive idealism in the face of economic incentives.”

“The changing goalposts of AGI and timelines. Notably, it’s common to now talk about ASI instead, implying we may have already achieved AGI, almost without noticing.”

Amen


AI;DR

DigiD in the Netherlands is amazing: it works super well and is central to many services. Then it was bought by an American company. Oops.

Why is having a digital ID that captures your biometric information and is central to accessing many services so amazing? Please elaborate. I guess if it was allowed to be purchased by a foreign actor, it's not so amazing after all...

I agree, it should not be purchaseable.

But it beats going to physical places to show physical IDs for the same service.


I understand the convenience factor, but the privacy / potential ethical concerns with digital IDs connected to private sector services like banking transactions, internet access, sim card availability, etc... outweigh the convenience factor in my mind. Too easy for governments to do bad things.

One hour of Claude Code? Well, I’d guess it would be comparable to an hour of driving an electric car. How to know?

OP says one query uses 0.3 Wh. Driving an electric car for 10 miles = 3,000 Wh, which at highway speed is roughly 10,000 Wh per hour.

I'm not sure how many queries an hour of Claude Code use is equivalent to, but maybe one every 5 seconds, which means an hour of continuous use = 720 × 0.3 Wh = 216 Wh, or ~50x less than an electric car.
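That comparison can be sanity-checked as arithmetic. All the inputs here are the assumptions from the comments above (0.3 Wh per query, one query every 5 seconds, ~300 Wh/mile for the EV), not measurements:

```python
# Back-of-envelope: an hour of Claude Code use vs. an hour of EV driving.
wh_per_query = 0.3                    # OP's per-query estimate
queries_per_hour = 3600 / 5           # assume one query every 5 seconds
claude_wh = wh_per_query * queries_per_hour   # 216 Wh per hour

ev_wh_per_mile = 300                  # 10 miles = 3,000 Wh
ev_mph = 33.3                         # speed implied by ~10,000 Wh/hour
ev_wh = ev_wh_per_mile * ev_mph       # ~10,000 Wh per hour

print(f"Claude: {claude_wh:.0f} Wh/h, EV: {ev_wh:.0f} Wh/h, "
      f"ratio ~{ev_wh / claude_wh:.0f}x")   # ratio ~46x
```

The ratio comes out around 46x, which matches the "~50x less" figure.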

OP has a longer article about LLM energy usage: https://hannahritchie.substack.com/p/ai-footprint-august-202...


Beside the point, but 10,000 Wh per hour is kind of an insane unit. It's 10,000 watts. Or 10 kW if you're really into the whole brevity thing.

My point is that Claude Code might easily be about 50x more energy intensive than normal ChatGPT prompting.

A coding agent runs near-constantly, so of course it'd require a lot more compute than running even, say, a multi-minute query with a thinking model every hour. How much exactly is pretty hard to calculate because it requires some guesswork, but...

For a long input of n tokens to a model with N active parameters, the cost should scale as O(N n^2) (this is due to computing attention; for non-massive n, the O(N n) term is bigger, which is why API costs per token are fixed until a certain point and then start to rise). From the estimates in [1], it's around 40 Wh for n = 100k, N = 100B. I multiply by 2.5 to account for Opus probably being ~2.5x larger than gpt-4o, and also multiply by 2 to pessimistically assume we're always close to Opus's soft context limit of 200k (it's possible to get a bigger context for extra cost, but I suspect people compact aggressively to avoid it). That gets me 7.2 J/t, which at a rough throughput estimate of 20 t/s gives a power draw of 144 W. Like a powerful CPU or a mediocre GPU, and still orders of magnitude lower than a car.
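The estimate above spelled out step by step. The 40 Wh / 100k-token baseline comes from [1]; the 2.5x (Opus vs. gpt-4o size) and 2x (always near the 200k context limit) multipliers are the commenter's pessimistic guesses, not known figures:

```python
# Per-token energy estimate for a large coding model, per the comment's assumptions.
baseline_wh = 40.0                      # for n = 100k tokens, N = 100B params
n_tokens = 100_000
j_per_token = baseline_wh * 3600 / n_tokens   # 1.44 J/token baseline
j_per_token *= 2.5                      # Opus assumed ~2.5x larger than gpt-4o
j_per_token *= 2.0                      # assume contexts near the 200k soft limit
# j_per_token is now 7.2 J/token

throughput_tps = 20                     # rough tokens/second estimate
power_w = j_per_token * throughput_tps  # 144 W
print(f"{j_per_token:.1f} J/token at {throughput_tps} t/s -> {power_w:.0f} W")
```

The 144 W figure is what gets compared against the ~10 kW of an EV at speed.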

[1] https://epoch.ai/gradient-updates/how-much-energy-does-chatg...


It is not only about raw power consumption. Comparing driving an electric car with using AI purely in kWh hides a major point: hyperscale datacenters are massively centralised, which brings its own problems; a lot of energy is used for cooling, and water consumption is enormous. Charging electric cars at home is distributed and does not suffer from the same problems as the centralised hyperscalers do. Also, running AI models at home is not much different from a gaming session :)

This is an incredible sequence of assertions, every single one of which is very incorrect.

"A lot of energy used for cooling": hyperscale data centers use the least cooling per unit of compute capacity, 2-3x less than small data centers and 10-100x less than a home computer.

"Water consumption is enormous": America withdraws roughly 300 billion gallons of fresh water daily, while IT loads are expected to grow to 35-50 billion gallons annually by 2028. Data center water demands are less than a rounding error.
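Putting those two figures on the same time basis makes the "rounding error" claim concrete (45B gal/year is just the midpoint of the 35-50B projection quoted above):

```python
# Compare projected IT water use against total US freshwater withdrawals,
# using the figures from the comment above.
us_daily_withdrawals_gal = 300e9      # US freshwater withdrawals per day
it_annual_gal = 45e9                  # midpoint of the 35-50B gal/year projection
it_daily_gal = it_annual_gal / 365    # ~123 million gallons/day
share = it_daily_gal / us_daily_withdrawals_gal
print(f"IT share of daily US withdrawals: {share:.3%}")   # ~0.041%
```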

"distributed and does not suffer from the same problems": technically correct I guess but distributed consumption has its own problems that are arguably more severe than centralized power consumption.


If you want to try quantum vibecoding, I threw up a site at https://www.haiqu.org where you can use MCP with the quantum computer at TU Delft. Free, after you make an account.
