Hacker News | Havoc's comments

You can already run some models on the NPUs in the Rockchip RK3588 SBCs which are pretty abundant.

A Claude 4.6 they are most certainly not, but if you can get past the janky-AF software ecosystem, they can run small LLMs reasonably well with basically zero CPU/GPU usage.


They're definitely inferior to proper tests, but even weak CC tests on top of CC code are an improvement over no tests. If CC makes a change that shifts something dramatically, even a weak test may flag enough for CC to investigate.

Even better, though: external test suites. I recently made an S3 server, which the LLM made quick work of for an MVP. Then I found a Ceph S3 test suite that I could run against it, and oh boy. It ended up working really well as TDD, though.
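The workflow described above — stand up the server you're building, then run an independent conformance check against it over the wire — can be sketched with nothing but the standard library. (In the real case this would be Ceph's s3-tests suite pointed at the S3 server; the toy handler and the XML response here are purely illustrative.)

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class ToyS3Handler(BaseHTTPRequestHandler):
    """Stand-in for the server under test (hypothetical, for illustration)."""
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "application/xml")
        self.end_headers()
        self.wfile.write(b"<ListAllMyBucketsResult/>")

    def log_message(self, *args):  # keep the check's output quiet
        pass

# Start the server under test on an ephemeral port.
server = HTTPServer(("127.0.0.1", 0), ToyS3Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# "External" conformance check: it knows nothing about the implementation,
# only the protocol the server is supposed to speak.
url = f"http://127.0.0.1:{server.server_port}/"
with urllib.request.urlopen(url) as resp:
    body = resp.read()
    assert resp.status == 200
    assert b"ListAllMyBucketsResult" in body

server.shutdown()
print("conformance check passed")
```

The value for TDD is exactly that separation: the test suite is written against the spec by someone else, so the LLM can't quietly adjust the tests to match its own bugs.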


yeah, i have been hearing a lot more about this concept of “digital twins” - where you have high-fidelity versions of external services to run tests against. You can take the API docs of these external services and give them to Claude. Wonder if that is where we are heading.

Isn’t this just an API sandbox? Many services have a test/sandbox mode. I do wish they were more common outside of fintech.

Not controversial per se, but it’ll go the same way as Netflix - once it’s got adoption they’ll crank the enshittification up to 11.

Crazy writeup.

The author is right about the Base64 part. It does seem weird that it can decode and understand it at the same time. And I guess what makes it weird is that we just sort of accept that this works for, say, English and German - i.e. normal use - but when framed as Base64 it suddenly stops feeling intuitive.


why tho? it's just an alternate alphabet/set of symbols.

Because it's generally expected that models only work 'in distribution', i.e. they work on stuff they have previously seen.

They almost certainly have never seen regular conversations in Base64 in their training set, so it's weird that it 'just works'.

Does that make sense?
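For concreteness, the "out of distribution" input being discussed is just an ordinary chat message re-encoded - a minimal stdlib sketch (the prompt is a made-up example):

```python
import base64

# A perfectly ordinary conversational sentence...
prompt = "What is the capital of France?"

# ...becomes an opaque-looking character stream once Base64-encoded,
# yet models reportedly answer it as if it were plain English.
encoded = base64.b64encode(prompt.encode("utf-8")).decode("ascii")
print(encoded)  # V2hhdCBpcyB0aGUgY2FwaXRhbCBvZiBGcmFuY2U/

# Decoding recovers the original text exactly - the mapping is
# deterministic, but it scrambles every surface pattern of English.
assert base64.b64decode(encoded).decode("utf-8") == prompt
```

Note that Base64 works on 3-byte groups, so a one-character change early in the text shifts every symbol after it - which is why "it's just an alternate alphabet" understates how different the surface statistics are.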


If you do not properly MIME-decode email, you end up with at least some base64-encoded conversations.
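A quick stdlib sketch of that failure mode - a mail client that Base64 transfer-encodes a body, and a scraper that stores the raw message without MIME-decoding it (the message itself is hypothetical, purely for illustration):

```python
import base64
from email import message_from_string
from email.message import EmailMessage

# Build an email whose body is Base64 transfer-encoded, as many
# real mail clients do.
msg = EmailMessage()
msg["Subject"] = "lunch"
msg.set_content("Hey, are we still on for lunch tomorrow?", cte="base64")
raw = msg.as_string()

# A crawler that keeps the raw RFC 822 text without MIME-decoding it
# stores the conversation as an opaque Base64 blob:
print(raw)  # body reads "SGV5LCBhcmUgd2Ugc3RpbGwgb24gZm9yIGx1bmNoIHRvbW9ycm93Pwo="

# Proper MIME decoding recovers the plain-text conversation.
parsed = message_from_string(raw)
body = parsed.get_payload(decode=True).decode("utf-8")
print(body.strip())  # Hey, are we still on for lunch tomorrow?
```

Any such un-decoded mail that ends up in a scraped corpus is, in effect, conversational text in Base64.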

For all we know, AI tech companies could theoretically have converted all of the "acquired" (ahem!) training-set material into Base64 and used it for training as well, just as you would encode, say, Japanese romaji or Hebrew written in the English alphabet.

Unlikely that every company would have bothered to do this.

'Yes, I know we already trained on all that data, but now I want you to convert to base64 and train it again! at enormous cost!'

On the contrary, it could be a deliberate attempt to augment or diversify the dataset.

> They almost certainly have never seen regular conversations in Base64 in their training set, so its weird that it 'just works'.

People use Base64 to store payloads of many arbitrary things, including web pages or screenshots, both deliberately and erroneously, so they have almost certainly seen regular conversations in Base64 in their 10 TB+ text training sets scraped from billions of web pages, files, mangled emails, etc.


Yes, that's true.

But that points again to the main idea: The model has learnt to transform Base64 into a form it can already use in the 'regular' thinking structures.

The alternative is that there is an entire parallel structure just for Base64, which, based on my 'chats' with LLMs in that format, seems implausible; it acts like the regular model.

If there is a 'translation' organ in the model, why not math or emotion-processing organs? That's what I set out to find, and it's illustrated in the heatmaps.

Also, any writing tips from the Master blogger himself? Huge fan (squeal!)


> Plus who knows what open routed providers do in terms of quantization

The quantisation is shown in the provider section.


Well, on the plus side, at least we'll see how this story ends. Seems MS is going all-in with Win 12: subscription and AI everything.

What’s that North Korean Linux flavor called again?

Red Star. I'd sooner use Berry, Kylin, or SUSE if I wanted to avoid the Noid- I mean, avoid U.S.-based distros.

Red Star OS.

That’s some serious out of the box thinking

I'd think a thin copper sheet on something soft would also work. That indentation will probably outlast any sort of ink.

Sounds a lot like Zuckerberg getting caught on a hot mic at the Trump dinner talking about how many billions Meta is investing.

All made up bullshit numbers

