Hacker News | mrinterweb's comments

This is what I've been working on. I've written a project wrapper CLI that wraps a bunch of tools behind a consistent interface, plus a skill that states when and how to call it. AI agents are frequently inconsistent in how they call things, and there are some operations I want executed in a consistent, controlled way.

It is also easier to write and debug CLI tooling, and other human devs get to benefit from the CLI tools. MCP includes agent instructions for how to use it, but the same can be done for a CLI with skills or AGENTS.md (CLAUDE.md).
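A minimal sketch of the shape of such a wrapper (the `proj` name and its subcommands are hypothetical, and the echoes stand in for delegation to the real tools):

```shell
#!/usr/bin/env bash
# proj: hypothetical wrapper giving agents and devs one consistent entry point.
# Real subcommands would delegate to the underlying tools; echoes stand in here.
set -uo pipefail

proj() {
  case "${1:-help}" in
    test) shift; echo "running tests: $*" ;;   # e.g. delegate to the test runner
    lint) shift; echo "running linter: $*" ;;  # e.g. delegate to the linter
    *)    echo "usage: proj {test|lint} [args...]" >&2; return 1 ;;
  esac
}

proj test spec/models
```

The skill or AGENTS.md then only has to document `proj <verb>`, rather than every underlying tool's flags.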


Waiting for some autonomous OpenClaw agent to see that XMR donation address, and empty out the wallet of the person who initiated OpenClaw :)


I thought this was going to talk about a nerfed Opus 4.6 experience. I believe I experienced one of those yesterday. I usually have multiple active Claude Code sessions running on Opus 4.6. The other sessions were great, but one session really felt off, much more dumbed down than what I was used to. I accidentally gave that session a "good" feedback rating, and my inner conspiracy theorist immediately jumped to the conclusion that I had just helped validate a hamstrung model in some A/B test.


This article has some cowboy coding themes I don't agree with. If the takeaway is that frameworks are bad in the age of AI, I would disagree. Standardization, and a team of developers all using the same framework, has huge benefits. The same is true with agents. Agents have finite context; when an agent knows it is using Rails, it can automatically assume a lot about how things work. LLM training data has framework usage patterns deeply instilled, so agents using frameworks that LLMs have extensive training on produce high-quality, consistent results without needing a bunch of custom context for bespoke foundational code. Multiple devs and agents all using a well-known framework automatically benefit from a shared mental model.

When multiple devs and agents all interact with the same code base, consistency and standards are essential for maintainability. Each time a dev fires up their agent on a framework-based project, the context doesn't need to be saturated with bespoke foundational information; LLMs and devs alike can lean on their extensive prior exposure to the framework.

I didn't even touch on all the other benefits mature frameworks bring beyond a shared mental model: security hardening, teams providing security patches, performance tuning, dependability, documentation, third-party ecosystems, etc.


VRAM is the new moat, and controlling pricing and access to VRAM is part of it. There will be very few hobbyists who can run models of this size. I appreciate the spirit of making the weights open, but realistically, it is impractical for >99.999% of users to run locally.


This is so passive aggressive. I kinda love it and hate it if that makes sense.


I've often thought about this. There are times I would rather have CI run locally, and use my PGP signature to add a git note to the commit. Something like:

```
echo "CI passed" | gpg2 --clearsign --output=- | git notes add -F -
```

Then the CI server could check the git note, verify the dev's signature, and skip the workflow/pipeline if it is correctly signed. With more local CI, the incentive may shift toward buying devs fancier machines instead of spending that money on cloud CI. I bet most devs have extra cores to spare and would not mind a beefier dev machine.
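The server-side check could look something like this sketch (the notes ref, the `gpg2` verification, and the assumption that the dev's public key is already in CI's keyring are all hypothetical details, not a tested setup):

```shell
#!/usr/bin/env bash
# Sketch: skip the pipeline when the current commit carries a clearsigned
# "CI passed" git note from a key trusted by the CI server.
set -uo pipefail

ci_should_skip() {
  # Notes aren't fetched by default; ignore failure if the ref doesn't exist yet.
  git fetch origin refs/notes/commits:refs/notes/commits 2>/dev/null || true

  # gpg2 --verify reads the clearsigned note from stdin; the dev's public key
  # must already be imported (and ideally pinned to specific trusted key IDs).
  if git notes show HEAD 2>/dev/null | gpg2 --verify 2>/dev/null; then
    echo "Signed 'CI passed' note found; skipping pipeline."
  else
    echo "No valid signed note; running full CI."
    return 1
  fi
}
```

In practice you'd also want to pin which key IDs count as trusted, since `--verify` alone accepts any key in the keyring.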


I think this is a sound approach, but I do see one legitimate reason to keep using a third-party CI service: reducing the chance of a software supply chain attack by building in a hardened environment that has (presumably) had attention from security people. I'd say the importance of this is increasing.


"Works on my machine!"


The propaganda trying to brand inhumane cruelty as fun, funny, cool, justified, excusable (I'm guessing at words, because none of those words are the way I see this) is so messed up. Trying to make other people's suffering a meme says a lot about the people doing it.


The shoddy propaganda and victory-lap celebration of criminality should be exploited in satire, to underscore and drive home how horrible this ideology is: one that is distinct from Nazism, but nonetheless disgusting.


"this ideology is, one that is distinct and different from Nazism" A couple of things about that: first, they are sometimes using Nazi phrases and slogans; second, many of the actual foot soldiers are in fact full-tilt Nazis™, with some foreign "guests" of the US government being openly tattooed, card-carrying Nazis; and the distinction is definitely less every day, with many side projects to rehabilitate Nazi-type nationalism everywhere.

™ or some off-brand of murderous, racist supremacist with different fashion choices


It's splitting bikeshed hairs over whether it's a true Scotsman or not; it's bad.

The problem is that fascistic ideological camps, once latched onto power, have a historical tendency to cause world wars and require much blood and treasure to remove.


Most consumers aren't running LLMs locally. Most people's on-device AI is whatever Windows 11 is doing, and Windows 11's AI functionality is going over like a lead balloon. The only open-weight models that come close to the major frontier models require hundreds of gigabytes of high-bandwidth RAM/VRAM. Still, your average PC buyer isn't interested in running their own local LLM. The AMD AI Max and Apple M chips are good for the audience that is. Consumer dedicated GPUs just don't have enough VRAM to load most modern open-weight LLMs.

I remember when LLMs were taking off and open-weight models were nipping at the heels of frontier models; people would say there was no moat. The new moat is high-bandwidth RAM, as we can see from the recent RAM pricing madness.


> your average PC buyer isn't interested in running their own local LLM.

This does not fit my observation. It's rather that running one's own local LLM is currently far too complicated for the average PC user.


Your average PC buyer doesn’t know what an LLM is, let alone why they should run one locally.

They just want a good PC that runs Word and Excel and likely find the fact that Copilot keeps popping up in Word every time they open a new document to be annoying rather than helpful.


The Apple M series chips are solid for inference.


Correct me if I'm wrong, but I thought everyone was still doing inference on the GPU for Apple silicon.


The Apple M series is an SoC; the CPU, GPU, NPU, and RAM are all part of the chip.


The RAM is not part of the SoC. It's a bunch of separate commodity RAM dies packaged alongside the SoC.

