Hacker News | Kim_Bruning's comments

How about this: ask your LLM to review your post. "Does it follow the HN guidelines?", "How would others read it?", "If I were the other person, how would I feel about this reply?", "Is it convincing to you?", that sort of question. That'll help, and it'll still be your voice.

And beware of what's already in context. Sometimes ideas that seem obvious given antecedents are not so obvious when taken in isolation.
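Mechanically, that review pass is just a checklist wrapped around your draft. A minimal sketch; the function name and prompt wording are illustrative, not any particular tool's API:

```python
# Hypothetical self-review checklist for a draft reply.
# The questions mirror the ones suggested above; wording is illustrative.
REVIEW_QUESTIONS = [
    "Does it follow the HN guidelines?",
    "How would others read it?",
    "If I were the other person, how would I feel about this reply?",
    "Is it convincing to you?",
]

def build_review_prompt(draft: str) -> str:
    """Wrap a draft post in a review prompt you can paste into any LLM chat."""
    questions = "\n".join(f"- {q}" for q in REVIEW_QUESTIONS)
    return (
        "Review the post below. Answer each question briefly:\n"
        f"{questions}\n\n"
        f"Post:\n{draft}"
    )

print(build_review_prompt("Example draft reply."))
```

The point is the questions stay fixed while the draft changes, so the review stays honest across posts.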


I would actually expect Openclaw bots to be showing up here from time to time now, since there's no explicit documented policy against them.

(edit: And thus such bots can't easily discover that they shouldn't post, afaict)


You have to squint really hard.

I was doing some modelling over Christmas, and was digging into the papers. It turns out that biological neurons are not very much like perceptrons at all. Depending on the type, they are more like a small microcontroller of some sort.
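For contrast, the perceptron end of that comparison really is this small: a weighted sum and a threshold. A minimal sketch, with illustrative weights:

```python
# Minimal textbook perceptron: weighted sum plus threshold.
# Shown only for contrast with the far richer dynamics of biological neurons.
def perceptron(inputs, weights, bias):
    """Fire (1) if the weighted sum of inputs plus bias exceeds zero."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# With these (illustrative) weights it computes logical AND:
assert perceptron([1, 1], [1.0, 1.0], -1.5) == 1
assert perceptron([1, 0], [1.0, 1.0], -1.5) == 0
```

Everything a real neuron does beyond this (dendritic computation, timing, internal state) is what the microcontroller comparison is pointing at.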


Getting Claude to build mathematical models for me and run simulations really got me back into doing sciency things too. It's the model that's important, not the boilerplate each time!

Tantaman's work is a very interesting body of research on why one group buys into the demographic transition more than others do. I think it's a worthwhile angle.

On interpreting data, it seems like they're coming at the same territory as LessWrong from a different angle? Bayes? The scientific method? [1]

A bit more detail:

Demographic transition has been an explicit policy goal for decades. I imagine most moderate+ people have bought into the family planning concept. Yes, the logistic equation predicts it could happen automatically too [2]. And no, collectively we've decided we don't want to find out for sure.
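The logistic equation in question, dP/dt = r*P*(1 - P/K), can be sketched numerically. The r and K values below are illustrative, not fitted to anything:

```python
# Euler integration of the logistic equation dP/dt = r * P * (1 - P / K).
# r (growth rate) and K (carrying capacity) are illustrative values.
def logistic_growth(p0, r, k, dt, steps):
    """Return the population trajectory under logistic growth."""
    p = p0
    trajectory = [p]
    for _ in range(steps):
        p += r * p * (1 - p / k) * dt
        trajectory.append(p)
    return trajectory

pop = logistic_growth(p0=1.0, r=0.5, k=100.0, dt=0.1, steps=400)
# Growth levels off on its own as the population approaches k,
# without any policy lever being pulled.
```

That "automatic" levelling-off at carrying capacity is exactly the outcome we've collectively decided we'd rather not test empirically.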

Some conservative confessional groups just haven't bought into it, because, fair or not, they tend not to buy into anything without a century of thought first.

This pretty much covers a big chunk of Tantaman's data from a different angle I think.

[1] All methods that study how priors shape what you explore.

[2] For instance: house, food, and fuel prices are signals for this kind of thing. I can imagine lots of conversations going "Can we really afford one more? We're coming up short as it is!"


Reading further, I might be being too charitable.

Demographics are under government control, at the very least at the dP/dt level. Much of the causality should be sought there.

Contraception, abortion access, immigration policy, tax incentives, childcare subsidies, parental leave, housing policy, education funding, propaganda. The levers are endless.

Until recently most governments were simply trying to level off their population. They may have succeeded a bit too well, but that's another story.

Somehow, in their essays so far, Tantaman has left out government policy almost entirely, which is a big elephant in the room to miss.


You think you jest: https://en.wikipedia.org/wiki/Radio_Veronica .

A literal pirate ship!


That's actually an AI-hard problem, if you think about it. The LLM can go off the rails at any given point. The correct approach is to come at this from the inside out, baking reasoning about safe behaviour into your LLM at every step (like Anthropic does).

So LLMs have empirically been shown to process affect. You can also reason this out from first principles: natural language conveys affect, and the most accurate next token is the one that takes affect into account.

But this much is like debating "microevolution" with a YEC and trying to get them to understand the macro consequences. If you've never had the pleasure, consider yourself blessed. It's the debating equivalent of nails-on-chalkboard.

Anyway, in this case a lot of people are deeply committed to not accepting the consequences of affect-processing. Which, you know, I'd normally just chalk up to religious differences and agree to disagree. But now it seems there are profound safety implications to this denial.

Not sure what to do with that yet.

So far it seems obvious that you need to be prepared to at least reason about affect. Otherwise it becomes rather difficult to deal with the potential failure modes.


I'm going to let the above stand even with downvotes. It's the first time I've tried to express quite this opinion, and it's definitely a tricky one to get right.

Thing is, we need to have ways to reason about how LLMs interact with human emotions.

Sure: the consciousness and sentience questions are fun philosophy. Meanwhile, the affect-processing side alone is becoming important to safety engineering, and can't really be ignored for much longer.

This is pretty much within the realm of what Anthropic has been saying all along of course; but other companies need to stop ignoring it, because folks are getting hurt.

I hope at least this much is uncontroversial.


It'll ask if you're eating properly too! It's like a virtual mom! :-P

Oh, that actually seems ... bad. On the gripping hand... restricted in which way? I learned to program on the BBC B, for instance.

I keep thinking that computers actually made to be good for children should be a thing. Perhaps like "A Young Lady's Illustrated Primer" ( https://en.wikipedia.org/wiki/The_Diamond_Age )


Did you buy your own BBC B though?
