
Also, flexbuffers.

I'm a (non-practicing) Dvaitin Hindu. AFAICT, no mainstream school of Hindu philosophy (there are three) espouses that view, although Advaitins come very close to it with their four mahavakyas.

IMO, Integrated Information Theory (IIT) of consciousness is exactly that: everything is conscious, and the difference is only in the degree to which things are conscious.


Oh, thank you very much for enlightening me! All this time I misunderstood! I guess then IIT it is for me :-)

> Cool to see Claude doing decently though!

The scales do seem to be tipped in its favor (cf: my other comment in this thread).


Interesting benchmark.

I can't help but notice that they're benchmarking Opus 4.6 (Anthropic's latest and greatest model) against GPT-5.2 (which is three generations behind OpenAI's latest coding models: GPT-5.2-Codex, GPT-5.3-Codex and the latest GPT-5.4).


As far as I know, OpenAI did not release 5.3 Codex in their API. You can only use it with the Codex CLI or app.

It's there; you just need to use it via the Responses API. Set the model field to 'gpt-5.3-codex'.
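For the curious, a minimal sketch with the official OpenAI Python SDK might look like the following. The model string comes from the comment above; the endpoint shape is the SDK's standard Responses interface, and a valid API key is assumed, so treat this as illustrative rather than verified:

```python
# Sketch only: requires the `openai` package and an OPENAI_API_KEY in the
# environment; makes a live network call, so it won't run offline.
from openai import OpenAI

client = OpenAI()
resp = client.responses.create(
    model="gpt-5.3-codex",        # model name from the comment above
    input="Write a hello-world program in C.",
)
print(resp.output_text)
```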

5.2 and 5.2 Codex are arguably the same gen.

Sure, but one is fine-tuned for what they are testing and one is not.

The first half of this is already happening to a certain extent. I first noticed this in a submission[1] on Dimitris Papailiopoulos' Adderboard[2], which is a code-golf competition for training the smallest transformer that can add two 10-digit numbers. Most submissions on it are fully AI generated.

The report in the linked repo is Claude Code generated.

[1]: https://github.com/rezabyt/digit-addition-491p

[2]: https://github.com/anadim/AdderBoard


Please don't post links with tracking parameters (t=jQb...).

https://xcancel.com/cperciva/status/2029645027358495156


Haha. This was the second time in like a year that I’ve posted a Twitter link, and the second time someone complained. Okay, I’ll try to remove those before posting, and I’ll edit this one out.

Feels like a losing battle, but hey, the audience is usually right.


I'm sorry, but it's my pet peeve. If you're on iOS/macOS, I built a 100% free and privacy-friendly app that removes tracking parameters from links for hundreds of different websites, not just X/Twitter.

https://apps.apple.com/us/app/clean-links-qr-code-reader/id6...


This is great! I have been meaning to implement this sort of thing in my existing Shortcuts flow but I see you already support it in Shortcuts! Thank you for this!

Anywhere I can toss a Tip for this free app?


I'm glad you like it. :)

It works on iOS? That’s cool. I’ll give it a go.

So what is your motivation for doing this, incidentally? Can you be explicit about it? I am genuinely curious.

Especially when it’s to the point of, you know, nagging/policing people to do it the way you’d prefer, when you could just redirect your router requests from x.com to xcancel.com


It's not particularly about x.com; hundreds of sites like X, YouTube, Facebook, LinkedIn, TikTok, etc. surreptitiously add tracking parameters to their links. The iOS Messages app even hides these tracking parameters. I don't like being surreptitiously tracked online, and judging by the success of my free app, there are millions of people like me.
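For illustration, stripping tracking parameters is conceptually a small URL rewrite. A minimal sketch in Python stdlib (the parameter list here is my own illustrative subset, not the actual rules the app ships with):

```python
# Toy link cleaner: drop known tracking parameters from a URL's query
# string while keeping everything else intact. The TRACKING_PARAMS set
# below is illustrative only.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

TRACKING_PARAMS = {"t", "s", "si", "fbclid", "gclid", "igshid",
                   "utm_source", "utm_medium", "utm_campaign",
                   "utm_term", "utm_content"}

def clean_url(url: str) -> str:
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k not in TRACKING_PARAMS]
    return urlunsplit(parts._replace(query=urlencode(kept)))

print(clean_url("https://x.com/cperciva/status/123?t=jQbXyz&s=19"))
# -> https://x.com/cperciva/status/123
```

Real cleaners are per-site (some sites use parameters like `si` legitimately elsewhere), which is why apps like this maintain rules for hundreds of domains rather than one global blocklist.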

So, since these companies have to comply with removing PII, is the worst thing that could happen to me that I get ads that are more likely to be interesting to me?

I'm not being facetious; it's an honest question, especially considering ads are the only thing paying these people these days.


Who has to comply with removing PII? Your profile, yours, mapped to a special snowflake ID, is packaged and sold across a network of 2500 - 4000 buyers, including in particular those that clean, tie (a surprisingly small footprint turns into its own "natural primary key"), qualify, and sell on to agencies. No step in this is illegal.

https://www.theverge.com/2024/10/23/24277679/atlas-privacy-b...


My first and last name are already a "natural primary key" (every single Google result for Peter Marreck is me), so I've already had to give that up a long time ago. Nothing new is lost, I guess?

The worst thing that could happen is that you get caught in some government dragnet based on your historical viewing data and get disappeared because (as is the nature of dragnet searches) no matter how innocent you are you still look guilty.

The more data they have on you, the more valuable that data is to a third party. So they sell your data to someone else, who then phones you based on your known deep interest in <whatever it was that tracked you>. Or spams you. Or messages you. Or whatever method they think will most get your attention.

If you don't give them that information, they can't sell it, and the buyers won't annoy you.

It's not that the ads you get are more interesting, it's that you get more ads because they think they know more about you.


IMO the tracking, advertising, and attention market might just be society's biggest problem.

Certainly it employs a lot of people, as do cartels.


This is a helpful kind of nagging, for me. Most here would agree that tracking parameters are not a positive aspect of the modern digital experience, and calling them out gently, without hostility, is not a bad thing. It may not quite be self-policing, but a bit of that, done with good reason, is healthy for a community IMO.

> Running deepseek 6B on the Private LLM app on the iPhone 13 basically set my phone on fire

Hey, I’m the author of Private LLM. I hope you’re joking about the phone catching fire. Btw, there’s no DeepSeek 6B model; you’re likely thinking of the DeepSeek Distill 7B model.


Yes, thank you for the correction.

The last model I tried to run was Dolphin 3B, but even Zephyr 1.6B kills the phone pretty fast, even with a full battery.

Even Dolphin completely killed my phone, so yeah, I can’t really use the Private LLM app for anything heavier than the default 1.6B.


Yeah, I agree. It's really hard to fit anything larger into the 4GB of RAM on the iPhone 13, of which only about 2.1-2.5GB (depending on which iOS version you're on) is usable by apps.

> Are they doubling down on local LLMs then?

Neural Accelerators (aka NAX) accelerate matmuls with tile sizes >= 32. From a very high-level perspective, LLM inference has two phases: (chunked) prefill and decode. The former is matrix-matrix mults (GEMM) and the latter is matrix-vector mults (GEMV). Neural Accelerators make the former (prefill) faster and have no impact on the latter.
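As a toy illustration of why a matmul accelerator helps one phase and not the other (hypothetical tiny sizes; pure Python for clarity): prefill pushes the whole prompt through the weights at once, so the left operand is a matrix, while decode processes one new token per step, so the left operand is a single row.

```python
# Toy sketch: prefill is a matrix-matrix product (GEMM), decode is a
# matrix-vector product (GEMV). Sizes here are illustrative, far below
# the tile sizes a real accelerator targets.

def matmul(a, b):
    """a: m x k, b: k x n -> m x n (naive triple loop via comprehensions)."""
    m, k, n = len(a), len(b), len(b[0])
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(n)]
            for i in range(m)]

d_model, seq_len = 4, 3                         # hypothetical sizes
W = [[1.0] * d_model for _ in range(d_model)]   # a stand-in weight matrix

# Prefill: all prompt tokens at once -> (seq_len x d_model) @ (d_model x d_model)
prompt_acts = [[0.5] * d_model for _ in range(seq_len)]
prefill_out = matmul(prompt_acts, W)            # GEMM: big tiles, accelerator-friendly

# Decode: one token per step -> (1 x d_model) @ (d_model x d_model)
token_act = [[0.5] * d_model]
decode_out = matmul(token_act, W)               # GEMV: one row, no large tiles to exploit

print(len(prefill_out), len(decode_out))        # 3 1
```

The decode step's left operand is a single row no matter how large the model is, which is why it stays memory-bandwidth-bound rather than compute-bound.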


The macOS Claude app is absolutely an Electron app, which is what the GitHub issue in this post is about.

If you'd like to verify for yourself: on your Mac, right-click on the Claude app icon, click "Show Package Contents", and then navigate to Contents > Frameworks > Electron Framework.framework.


Chorba is also a soup in Eastern European languages like Bulgarian and Romanian.

https://en.wikipedia.org/wiki/Chorba


And also in Turkey. It (the word, if not the stew itself) arrived in the Balkans by way of the Ottomans. (And having just now clicked through to the link, it seems to have arrived in Turkey by way of the Persians.)

Apparently that's kinda where the name comes from. It's named after a Serbian musician who was known as Bora Čorba, who played for a band called Riblja Čorba (fish stew).
