I’m sure you have good intentions, but that view works against the very thing you’re arguing for. Anthropic is leaving a ton of capital on the table, maybe for moral reasons, probably for PR reasons. Whichever it is, they’ll have no incentive to scale back the unethical parts if doing so earns them no goodwill, because the response is just “oh look, they stole books, they’re just as bad as the AI companies building tech to bomb civilians.”
The world is a dark place, better to reward any sliver of good than write it off for being tainted.
and you can't really claim that they 'stole' books. they bought millions of physical book copies to scan and ingest rather than just scraping pirated pdfs
The few people I’ve asked who’ve been hypnotized said it was real, had no reason to lie or trick me, and it seemed true. But if your lens is “we already figured out all biology and physics, so we can dismiss the possibility of actual hypnosis (putting someone in a trance state),” then it’s hard to see things there’s actually immense evidence for (e.g. the telepathy tapes).
I imagine most people are secretly turned off by the victim card (for the same reason you’d avoid someone who always has a “woe is me” story), but who knows how that plays out in the job market when virtue signaling is so powerful.
It seems pretty clear the moat is built at the application layer, how enjoyable/easy the actual application is to use, but these applications seem to be getting worse over time even as the models get better. Is it really that hard to do both? Isn’t the point of agentic coding to do more better (not just more)?