Hacker News | new | past | comments | ask | show | jobs | submit | karmasimida's comments

HN is in denial, which is understandable

AI is already better at understanding code than 99.99% of humans; the more I use it, the more I believe this is true. It can draw connections between dots far more quickly and accurately than a human ever could.

At the very least, AI is going to be a must-have, even if only as a co-supervisor on your project.

What's in doubt right now is whether AI can manage a codebase fully autonomously without bringing it down, which I doubt it can at the moment. Be it 4.6 or 5.4, they almost always add code instead of removing it; the sheer complexity will explode at some point.

But that is my assessment of models TODAY; who knows where they will end up in 6 months. AI is entering the recursive self-improvement phase, that roadmap is lying in front of our eyes, and what it can and will unlock is truly, truly unpredictable.

I am both intrigued and scared.


The RAG models are very competent at programming. I am worried about my job as an SWE in the near future, but didn't the MIT paper from about a week ago pretty much confirm that width-scaling the model is about to stop (or has already stopped) giving any measurable increase in quality, because the training data no longer overfills the model?

Any authentic pre-LLM training data is assumed to have been used in training already, and synthetic or generated data yields worse-performing models, so increasing the amount of training data seems to be a dead end as well?

What is the next vector of training? Maybe data curation? Remove the low-quality entries and accept a smaller but more accurate data set?
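The diminishing-returns intuition above can be sketched with a Chinchilla-style scaling law. The coefficients below are the published Hoffmann et al. (2022) fits, used purely for illustration (this is not the MIT paper's own model): with the token budget held fixed, adding parameters only shrinks one term of the loss, and each 10x in model size buys less than the last.

```python
# Illustrative Chinchilla-style scaling law:
#   L(N, D) = E + A / N^alpha + B / D^beta
# where N = parameters, D = training tokens.
# Constants are the Hoffmann et al. 2022 fitted values, used only as a sketch.
E, A, B = 1.69, 406.4, 410.7
alpha, beta = 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss under the fitted scaling law."""
    return E + A / n_params**alpha + B / n_tokens**beta

D = 1.4e12  # fixed token budget: the data-bound regime
for N in (7e9, 70e9, 700e9):
    # With D fixed, loss floors at E + B / D**beta no matter how large N gets.
    print(f"N = {N:.0e} params -> predicted loss {loss(N, D):.3f}")
```

Each 10x jump in parameters at a fixed data budget narrows the gap to the floor `E + B / D**beta` by less than the previous jump, which is the width-scaling wall in one line of arithmetic.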

I think the AI companies are starting to sweat a little, considering the promises they have made, their inability to deliver on them or turn a profit in their current state, and the slowing improvements.

Interesting times! We are either all out of jobs or a massive market crash is imminent, awesome...


This is true.

When I am using codex, compaction isn’t something I fear, it feels like you save your gaming progress and move on.

For Claude Code, compaction feels disastrous, and it takes much longer too.
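For readers unfamiliar with the term: compaction replaces older conversation turns with a summary so the session fits back into the context window. A minimal hypothetical sketch (the function names and message shape here are invented, not either tool's actual implementation):

```python
def compact(messages, keep_recent=4, summarize=lambda ms: f"{len(ms)} earlier turns elided"):
    """Hypothetical context compaction: collapse all but the most recent
    turns into a single summary message, keeping recent turns verbatim."""
    if len(messages) <= keep_recent:
        return messages  # nothing to compact
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    summary = {"role": "system", "content": "Summary: " + summarize(old)}
    return [summary] + recent
```

How "disastrous" this feels in practice comes down to how much task-relevant detail the summarizer preserves; the structure itself is the same save-and-continue idea either way.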


Well I really don't like my handwriting, would rather avoid it

I mean, you don’t need your first job to be at the top of the top companies. Your first job is to get you into the industry; then you can flourish.

How many juniors are OpenAI or GDM going to hire in a year? Probably double digits at most, so the chances are super slim, and they are by nature allowed to be as picky as they want to be.

That being said, I do agree this industry is turning into finance/law, but that won’t last long either. I genuinely can’t foresee what happens if and when AGI/ASI is really here: it should start coming up with its own ideas to better itself, and there will be no incentive to pay any human a large sum anymore; maybe a single-digit number of individuals on earth, perhaps.


The problem is the lack of experience compounds.

Because AI accelerates the rate of knowledge gain, this gets even faster.


This is definitely the Claude killer OpenAI is cooking.

And so far it has succeeded


You should respect the government’s choice. It is elected after all

The executive doesn’t pass laws. Congress created the Department of Defense. Only Congress can rename it. The executive being elected is irrelevant to this point. The Constitution actually matters.

As programmers become increasingly irrelevant in the whole picture, you will see more posts like this

"This account belongs to a lazy person" true

It is a bigger model, confirmed

And he is back to Pete Hegseth now? Lollll

Raw parameter scale is POWER; you can't get the performance of a much larger model out of a small one.

