Hacker News | gabriel666smith's comments

Inconsistent capitalisation ('Twitter' vs 'reddit'); subtly using the outdated name for 'Twitter' as most humans do; the genuinely hard-to-parse final clause of the comment.

Though I note it didn't say "read comments by other humans", only "read comments by humans", so confirmed AI.

I think the guidelines here work quite well, and expect a good-faith interpretation, which they mostly receive.

I think you're asking for some sort of empirical verification of "this is / is not LLM text" (which seems impossible), but there's no real reason to expect the existence of LLMs to change that this website is, generally, interacted with in a good-faith way. People are really good at calling others out on here -- I doubt that will change.


Quite! It's very easy to send a HN link to one of our new artificial friends to see what they have to say about it. Subsequently publicly posting the inference variation you receive strikes me as very self-centered. Passing it off as your own words - which the majority seem to - is doubly bizarre.

It's very funny to imagine people prompting: "Write a compelling comment, for me, to pass off as my thoughts, for this HN news thread, which will attract both upvotes and engagement."

In good faith, per the guidelines: What losers!


I agree with much of what you say, but it isn't as simple as "post to LLM, paste on HN". There are notable effects from (1) one's initial prompt; (2) one's phrasing of the question; (3) one's follow-up conversation; (4) one's final selection of what to post.

For me, I care a lot about the quality of thinking, as measured by the output itself, because this is something I can observe*.

I also care -- but somewhat less -- about guessing as to the underlying generative mechanisms. By "generative mechanisms" I mean simply "Where did the thought come from?" One particular person? Some meme (optimized for cultural transmission)? Some marketing campaign? Some statistic from a paper that no one can find anymore? Some dogma? Some LLM? Some combination? It is a mess to disentangle, so I prefer to focus on getting to ground on the thought itself.

* Though we still have to think about the uncertainty that comes from interpretation! Great communication is hard in our universe, it would seem.


Taking the time to write something, and read over it is a better skill than asking an LLM to do it for you.

Also, quality doesn't come from any of those points you've mentioned. Quality comes from your ability to think and reason through a topic. All those points you mention in your first paragraph are excuses, trying to make it seem like there was some sort of effort to get an LLM to write a post. It feels like fishing for a justification.


>Taking the time to write something, and read over it is a better skill than asking an LLM to do it for you.

Furthermore, if someone doesn't think whatever they're saying is worth investing the time to do this, it's a signal to me that whatever they could say probably isn't worth my time either.

I don't know why this isn't a bigger part of the conversation around AI content. It shows a clear prioritization of the author's time over the readers', which fine, you're entitled to valuing your own time more than mine, but if you do, I'll receive that prioritization as inherently disrespectful of my time.


> Taking the time to write something, and read over it is a better skill than asking an LLM to do it for you.

Yes, this is a great skill to have: no argument from me. This wasn't my point, and I hope you can see that upon reflection.

> All those points you mention in your first paragraph are excuses, trying to make it seem like there was some sort of effort to get an LLM to write a post.

Consider that a reader of the word 'excuses' would often perceive an escalation of sorts. A dismissal.

> Quality comes from your ability to think and reason through a topic.

That's part of it. Since the quote above is a bit ambiguous to me, I will rephrase it as "What are the factors that influence the quality of a comment posted on Hacker News?" and then answer the question. I would then split apart that question into sub-questions of the form "To what extent does a comment ..."

- address the context? Pay attention to the conversational history?

- follow the guidelines of the forum?

- communicate something useful to at least some of the readers?

- use good reasoning?

One thing that all four bullet points require is intelligence. Until roughly two years ago, most people would have said the above demands human intelligence; AI can't come close. But the gap is narrowing. Anyhow, I would very much like to see more intelligence (of all kinds, via various methods, including LLM-assisted brainstorming) in the service of better comments here. But intelligence isn't enough; there are also shared values: empathy and charity.

In case you are wondering about my "agenda"... it is something along the lines of "I want everyone to think a lot harder about these issues, because we ain't seen NOTHING yet". I also strive to promote and model the kind of community I want to see here.


You missed something much more important than all 4 of those points:

- what does the human behind the keyboard think

If you want us to understand you, post your prompts.

Some might suggest that the output of an LLM might have value on its own, disconnected from whatever the human operating it was thinking, but I disagree.

Every single person you speak with on HN has the same LLM access that you do. Every single one has access to whatever insights an LLM might have. You contribute nothing by copying its output; anyone here can do that. The only differentiator between your LLM output and mine is what was used to prompt it.

Don't hide your contributions, your one true value - post your prompts.


The prompt & any follow-ups do have notable effects, but IMO this just means that most of the actual meaning you wanted to convey is in those prompts. If I was your interlocutor, I'd understand you & your ideas better if you posted your prompts as well as (or instead of) whatever the LLM generated.

> The prompt & any follow-ups do have notable effects, but IMO this just means that most of the actual meaning you wanted to convey is in those prompts.

If you mean in the sense of differentiating meaning from the base model, I take your point. But in another sense, using GPT-OSS 120b as an example, where the weights are around 60 GB and my prompt + conversation are e.g. under 10K, what can we say? One central question seems to be: how many of the model's weights were used to answer the question? (This is an interesting research question.)

> If I was your interlocutor, I'd understand you & your ideas better if you posted your prompts as well as (or instead of) whatever the LLM generated.

Indeed, yes, this is a good practice for intellectual honesty when citing an LLM. It does make me wonder though: are we willing to hold human accounts to the same standard? Some fields and publications encourage authors to disclose conflicts of interest and even their expected results before running the experiments, in the hopes of creating a culture of full disclosure.

I enjoy real human connection much more than LLM text exchanges. But when it comes to specialized questions, I seek any sources of intelligence that can help: people, LLMs, search engines, etc. I view it as a continuum that people can navigate thoughtfully.


> how many of the model's weights were used to answer the question? (This is an interesting research question.)

That’s not the point. Every one of your conversation partners has the same access to the full 60 GB weights as you do. The only things you have to offer that your conversation partners don’t already have are your own thoughts. Post your prompts.

> I enjoy real human connection much more than LLM text exchanges. But when it comes to specialized questions, I seek any sources of intelligence that can help: people, LLMs, search engines, etc. I view it as a continuum that people can navigate thoughtfully.

We are all free to navigate that continuum thoughtfully when we are not in conversation with another human, who is expecting that they are talking to another human.

If you believe that LLM conversation is better, that’s great. I’m sure there’s a social media network out there featuring LLMs talking to other LLMs. It’s just not this one.


Sure, I agree that getting something you want (top post) out of an LLM isn't zero-effort.

But this isn't about effort. This is about genuine humanity. I want to read comments that, in their entirety, came out of the brain of a human. Not something that a human and LLM collaboratively wrote together.

I think the one exception I would make (where maybe the guidelines go too far) is that case of a language barrier. I wouldn't object to someone who isn't confident with their English running a comment by an LLM to help fix errors that might make a comment harder to understand for readers. (Or worse, mean something that the commenter doesn't intend!) It's a privilege that I'm a native English speaker and that so much online discourse happens in English. Not everyone has that privilege.


This. LLMs are an autocomplete engine. They aren't curious. Take your curiosities and use your human voice to express them.

The only reason you should be using an LLM on a forum like this is to do language translation. Nobody cares about your grammar skills, and there really isn't a reason to use an LLM outside of that.

LLMs CANNOT provide unique objectivity or offer unknown arguments because they can only use their own training data, based on existing objectivity and arguments, to write a response. So please shut that shit down and be a human.

Signed, a verified/tested autistic old man.

cheers


> Nobody cares about your grammar skill

One thing that impressed me about HN when I started participating is how rarely people remark on others' spelling or grammatical mistakes. I myself have been an obsessive stickler about such issues, so I do notice them, but I recognize that overlooking them in others allows more interesting and productive discussions.


I agree with the above comment on a broad normative (what is good) take: on a forum for humans, yes, please, bring your human self. But there is a lot of room for variety, choice, even self-expression in the be your human self part! Some might prefer using the Encyclopaedia Britannica to supplement an imperfect memory. Others DuckDuckGo. Some might bounce their ideas off friends. Or (gasp) an LLM. Do any of these make the person less human? Nope.

Of course, there are many ways to be more and less intellectually honest, and there is a lot to read on this, such as [1].

Now, on the descriptive / positive claims (what exists), I want to weigh in:

> LLMs are an autocomplete engine.

Like all metaphors, we should ask "What is the metaphor useful for?" rather than arguing the metaphor itself, which can easily degenerate into a definitional morass. Instead, we should discuss the behavior, something we can observe.

> [LLMs] aren't curious.

Defined how? If we put aside questions of consciousness and focus on measuring what we can observe, what do we see? (Think Turing [2], not Chalmers [3].) To what degree are the outputs of modern AI systems distinguishable from the outputs of a human typing on a keyboard?

> LLMs CANNOT provide unique objectivity...

Compared to what? Humans? The phrasing unique objectivity would need to be pinned down more first. In any case, modern researchers aren't interested in vanilla LLMs; they are interested in hybrid systems and/or what comes next.

Intelligence is the core concept here. As I implied in the previous paragraph, intelligence (once we pick a working definition) is something we can measure. Intelligence does not have to be human or even biological. There is no physics-based reason an AI can't one day match and exceed human intelligence.*

> or offer unknown arguments ...

This is the kind of statement that humans are really good at wiggling out of. We move the goalposts. So I'll give one goalpost: modern AI systems have indeed made novel contributions to mathematics. [4]

> because they can only use their own training data, based on existing objectivity and arguments, to write a response.

Yes, when any ML system operates outside of its training distribution, we lose formal guarantees of performance; this becomes sort of an empirical question. It is a fascinating complicated area to research.

Personally, I wouldn't bet against LLMs as being a valuable and capable component in hybrid AI systems for many years. Experts have interesting guesses on where the next "big" innovations are likely to come from.

[1]: Tversky, A., & Kahneman, D. (1974). Judgment under Uncertainty: Heuristics and Biases: Biases in judgments reveal some heuristics of thinking under uncertainty. science, 185(4157), 1124-1131.

[2]: The Turing Test : Stanford Encyclopedia of Philosophy : https://plato.stanford.edu/entries/turing-test/

[3]: The Hard Problem of Consciousness : Internet Encyclopedia of Philosophy : https://iep.utm.edu/hard-problem-of-conciousness/

[4]: FunSearch: Making new discoveries in mathematical sciences using Large Language Models : Alhussein Fawzi and Bernardino Romera Paredes : https://deepmind.google/blog/funsearch-making-new-discoverie...

* Taking materialism as a given.


> This is about genuine humanity.

The meaning of the word genuine here is pretty pivotal. At its best, genuine might take an expansive view of humanity: our lived experience, our seeking, our creativity, our struggle, in all its forms. But at its worst, genuine might be narrow, presupposing one true way to be human. Is a person with a prosthetic leg less human? A person with a mental disorder? (These questions are all problematic because they smuggle in an assumption.)

Consider this thought experiment: a person interacts with an LLM, learns something, finds it meaningful, and wants to share it on a public forum. Is this thought less meaningful because of that generative process? Would you really prefer not to see it? Why?

Because you can point to some "algorithmic generation" in the process? With social media, we read algorithmically shaped human comments, many less considered than the thought experiment. Nor did this start with social media. Even before Facebook, there was an algorithm: our culture and how we spread information. Human brains are meme machines, after all.

Think of human output as a process that evolves. Grunts. Then some basic words. Then language. Then writing. Then typing. Why not: "Then LLMs"? It is easy to come up with reasons, but it is harder to admit just how vexing the problem is. If we're willing, it is a way for us to confront "what is humanity?".

You might view an LLM as an evolution of this memetic culture. In the case of GPT-OSS 120b, centuries of writing distilled into ~60 GB. Putting aside all the concerns of intellectual property theft, harmful uses, intellectual laziness, surveillance, autonomous weapons, gradual disempowerment, and loss of control, LLMs are quite an amazing technological accomplishment. Think about how much culture we've compressed into them!

As a general tendency, it takes a lot of conversation and refinement to figure out how to communicate a message really well to an audience. What a human bangs out on the first several iterations might only be a fraction of what is possible. If LLMs help people find clearer thinking, better arguments, and/or more authenticity (whatever that means), maybe we should welcome that?

Also, not all humans have the same language generation capacity; why not think of LLMs as an equalizer? You touch on this (next quote), but I am going to propose thinking of this in a broader way...

> I think the one exception I would make...

When I see a narrow exception for an otherwise broad point, I notice. This often means there is more to unpack. At the least, there is a philosophical asymmetry. Does it survive scrutiny? Certainly there are more exceptions just around the corner...


Late replying - I don't think you should have been downvoted so much. You're right that I was using a comically simple example for comic effect (though I'm certain it is something that happens a lot), and also that LLMs are very interesting thought tools. Private dialogue is really analogous to thinking. There's nothing in your comment that suggests posting a critically unexamined, verbatim snippet of one's private LLM dialogue.

Preface: this is social commentary that I'm reflecting back to HN, not a complaint. No one likes rejection, but in a way, I at least find downvotes informative. If a thoughtful guideline-kosher comment gets a lot of downvotes, there may be a story underneath.

For this one, I have some guesses as to why:

1. Low quality: unclear, poor reasoning.

2. Irrelevant: off topic, uninteresting.

3. Using the downvote for "I disagree" rather than "this is low quality and/or breaks the guidelines".

4. Uncharitable reading: not viewing the comment in context with an attempt to understand.

5. Circling of the wagons: we stand together against LLMs.

6. Virtue signaling: showing the kind of world we want to live in.

7. Raw emotion: LLMs are stressful or annoying, so we flinch away from nuance about them.

8. Lack of philosophical depth: relatively few here consider philosophy part of their identity.

9. Lack of governance experience and/or public-policy realism: jumping straight from an undesirable outcome (LLM slop) to the most obvious intervention ("just ban it").

Discussion on this particular topic (LLM assistance for comments), like most of the AI-related discussion on HN, seems to not meet our own standards. It is like a combination of an echo chamber plus an airing of grievances rather than curious discussion. We're better than this, some of us tell ourselves. I used to think that. People like me, philosophers at heart, find HN less hospitable than ever. I'm also a builder, so maybe one day I'll build something different to foster the kinds of communities I seek.


That’s a generous way to think about downvotes. Seeing them as signal rather than rejection leaves room to reflect and adjust.

I’m new here and come more from a philosophical background than a technical one, so I’m still learning the norms. One thing I’m sensitive to in communities like this is who ends up informally deciding what counts as legitimate participation.


This resonates with me. Intent is hard to infer, so it seems better to engage with the content itself. Most ideas are recombinations of earlier ones anyway—the interesting part is the push and pull of refining thoughts together.

I wonder if the adverts in the "personal super-assistant" ("that helps you do almost anything"!), per the blog post, will have the same triggers as the shopping assistant, which currently pops up underneath messages in the web UI.

When first trying 5.2, on a "Pro" plan, I was - and still am - able to trigger the shopping assistant via keyword-matching, even if the conversation context, or the prompt itself, is wildly inappropriate (suicide, racism, etc).

Keyword-matching seems a strange ad strategy for a (non-profit) company selling QKV. It's all very confusing!

Hopefully, for fans of personal super-assistants--and advertising--worldwide, this will improve now that ads have been formalised.


This is fun!

Given online poker is now bot-riddled, I half-finished something similar a while back, where the game was adopting and 'coaching' an LLM player (a <500 character prompt was allowed every time the dealer chip passed, outside of play), as a kind of gambling-on-how-good-at-prompting-you-are game. Feature request! The rake could pay for the tokens, at least.


If you're a Philip K. Dick-head, you might enjoy the episodes of the podcast Weird Studies which cover his life and work.

The hosts often focus on his Exegesis, mentioned in the article. It feels like a privilege to hear two very smart academics engage in longform discussion - in which they're unconstrained, and clearly having genuine fun - about Dick's work.

More broadly, the non-Dick episodes are also wonderful, and often cover the kind of art I typically see discussed here.

You can dip in for times they cover work you love already, to hear their interesting (and academically, often quite new) perspectives on your favourites, or listen from the start, chronologically, as a kind of curriculum in the weird. Which I found to be an incredibly useful thing.

I'm not associated in any way, just a fan, and think a lot of users here would enjoy it: https://www.weirdstudies.com/10


She was such a good marketer of ideas, and at sneaking them into more palatable constructs.

The opinion you replied to frustrates me when I encounter it.

She was only doing "magical thinking" in her narratives so much as her novels are marriage comedies, and this is required.

The reality of her life was that she was incredibly uncompromising. She had to publish her early work under an androgynous pseudonym to profit from it.

She didn't marry cynically despite having opportunities to. She was a realist, and a strain of that runs through her work. There are many moments where she anticipates the great Russian realists. She managed to turn a good profit on her art in spite of her period's circumstances. She genuinely advanced the idea of who is allowed to make art, and who is allowed to profit from it.

Generally the novels have nuanced but happy endings. She was writing for an audience. She was a shrewd businessman at a time when there weren't businesswomen. In her personal life, she was genuinely uncompromising. She's a GOATed artist. You can't ask much more of a human!


"It is a truth universally acknowledged that a single man in possession of a good fortune must be in want of a robot suit to ride around and fight things with."

I was visiting Jane Austen's House Museum last year and it always gives me pleasure to see how wildly popular her work remains. There always seem to be tourists there visiting from all over the world. That is really heartening.

She was very innovative. Maybe even underrated as a craftsperson at the sentence level. My favourite trick that I believe she invented is slipping from prose into a soft Iambic pentameter, essentially unnoticed. Lots of people have copied that from her.

And class-pressure narratives will never not be relevant to people's lives. She's a very very humane storyteller in that respect.

I am slightly biased - she's my great aunt (x 6). Used to find that embarrassing but now I feel quite proud.


I'm not well read, and don't think I'd be able to finish any of the classics. As such I have no clue what "slipping from prose into a soft Iambic pentameter" means. I came here for the robots.


You know how in Disney movies they shift smoothly from talking to singing? It’s just like that, only instead of the bass beat to the character’s song starting to play, her ‘prose’ (think ‘non-poetry words’, aka what most people consider books to be full of) shifts smoothly into Shakespeare-like syllable emphasis patterns. Listen for the percussion notes starting about ten seconds into https://youtu.be/79DijItQXMM and imagine that instead of him bursting into musical song, he burst into chanting a limerick:

There once was a demi-god, Maui / Amazing and awesome: I’m Maui // Who stole you your fire / and made your days lighter // Yes, thank you, you’re welcome! Love: Maui

It’s a bit odd of an analogy, but limericks and “Iambic pentameter” are specific instances of an underlying language architectural thing, so it should be just enough to convey the basics of that “prose to Iambic” sentence. And: if you’ve ever watched “Much Ado About Nothing” from the mid-90s, that’s 100% Iambic.

(If you’re an English major, yes, I know, this is all wrong; it’s just a one-off popsicle-sticks context-unique mindset-conveyance analogy-bridge, not step-by-step directions to lit/ling coordinates in your field.)


English major here, and your post is great. It's not complete, of course, but you've hit everything a beginner needs to know to get over the first hump of understanding, in a way that "expert" knowledge sometimes gets in the way of communicating. I doubt the reply I was writing in my head would have been better, and probably would have been worse, so thank you for jumping in.

But (because I have to go there - and I promise getting to this paragraph wasn't the point of the compliments above), Much Ado isn't entirely in verse: the clowns - lower class, all of them (Dogberry, et al) - speak in prose. So, the next layer of the onion, for anyone who wants to pick at it, is noticing in what circumstances writers use different registers, and why. Austen does the same thing: Mr Collins speaks in flat, prosy sentences, except (if I recall correctly) when he talks about his patron, Lady Catherine de Bourgh. I think that has a subconscious effect, even on people who couldn't name an iamb, but once you pick up on it, it's one of those "ooh!" sorts of moments where you get a glimpse behind the authorial curtain.


Thank you :)

and, yes, what you said! I vaguely recognize that from studying the written form but certainly I didn't remember it here beyond “I bet this needs a conditional or something”.

ps. I am especially proud of the unplanned field pun!


>prosy

prosaic.



This is a great example, and not odd as an analogy at all. It surfaces something subtle.

Language architecture is really interesting, I think, for programmers who have bought into the LLM hype in any meaningful way. It's an important field to have a sense of.

Tokenizers, for example, generally have multi-syllabic tokens as their base-level, indivisible unit.

You rarely see this mentioned when LLM capability against non-coding tasks is discussed, despite it being deeply important for prose construction.

Not to mention, putting language models aside, that the vast majority of code is written in language with a logical grammar. The disciplines are highly linked.
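The multi-syllabic-token point can be made concrete with a toy greedy longest-match tokenizer. This is only a sketch: the vocabulary below is invented for illustration, and real subword vocabularies (BPE-style) are learned from corpus statistics rather than hand-picked.

```python
# Toy greedy longest-match tokenizer: a crude stand-in for BPE-style
# subword tokenization. VOCAB is invented for illustration; real
# vocabularies are learned from corpus statistics.
VOCAB = {"pent", "ameter", "iamb", "ic", "pen", "ta", "meter"}

def tokenize(word, vocab=VOCAB):
    """Greedily take the longest vocabulary entry at each position."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest substring first
            if word[i:j] in vocab:
                tokens.append(word[i:j])
                i = j
                break
        else:  # no vocab entry matches: fall back to a single character
            tokens.append(word[i])
            i += 1
    return tokens

# Token boundaries need not line up with the four syllables pen-ta-me-ter:
print(tokenize("pentameter"))  # ['pent', 'ameter']
print(tokenize("iambic"))      # ['iamb', 'ic']
```

The point of the sketch: even in this tiny example, token boundaries cut across syllable boundaries, which is one plausible reason syllable-counting and meter are awkward for models whose indivisible unit is the token, not the syllable.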


The AI generated front page of HN posted yesterday had some generated comments in at least one of the threads that scanned and rhymed. It's clearly there in whatever model that was, and while it might just have been a confluence of having seen a specific word pair a certain distance apart in the learning data to account for the rhyming, I'm having a hard time explaining away the construction of a coherent meter.


The Judoon have such a lovely language, though!


Your desire to share knowledge and the pleasure of what you’re describing really shows; thank you for using your time so generously.


Two broad categories, verse and prose.

Prose is mostly focused on describing meaning using any words that serve to do so.

Verse is more concerned with structural factors: rhythm, tonality, and patterns within syllables, within types of sound, or within parts of speech. Other linguistic devices that look at details beyond the strict meaning of the words - rhyme, or even visual spacing, for example - can also be considered in verse.

Within verse there's the concept of iambs. I think of an iamb as a tuple of two syllables said weak-strong. Pentameter means ten syllables, and iambic means in groups of weak and strong. Most of Shakespeare is written like this. English also naturally sounds iambic a lot of the time.

Iambic pentameter sounds like this:

  I watched a bird attempt its beak upon
  The end of fake too-moist baguette in vain
  For it was sick of stale McDicks tossed on
  It endlessly maintained its rationed pain

  While others in its bobbing flock for scraps
  Of birds fought for the thrill squawked on and on
  Till cannibals among their kind rejoiced
  To find cousins in mayonnaise so long

Normally you'd also look at rhyme structure if writing a legit Shakespearean sonnet [2], but I fired this one out in the style of fast food. So this is technically iambic pentameter but not technically a sonnet.

Or like a particular Shakespearean sonnet [0]. Or like any of them, [1]

[0] https://shakespeare.mit.edu/Poetry/sonnet.I.html

[1] https://shakespeare.mit.edu/Poetry/sonnets.html

[2] https://www.poetryfoundation.org/education/glossary/shakespe...


Minor nitpick: "pentameter" means 5 parts, and each part is an iamb in iambic pentameter, so it's 5 parts where each part is 2 syllables in a weak-strong pattern. That results in 10 syllables, but "pentameter" doesn't mean 10 syllables alone.
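That five-feet-of-two-syllables definition is small enough to sketch in code. A hedged toy, not a real scanner: the stress strings are supplied by hand, where real scansion tools would look up word stresses in a pronouncing dictionary such as CMUdict.

```python
def is_iambic_pentameter(stress_patterns):
    """stress_patterns: per-word strings of 0 (weak) and 1 (strong).
    Five iambs = five weak-strong feet = ten alternating syllables."""
    return "".join(stress_patterns) == "01" * 5

# Ten syllables alternating weak/strong: five iambs, hence iambic pentameter.
print(is_iambic_pentameter(["01"] * 5))   # True
# "Pentameter" alone fixes the foot count at five, not the syllable count
# at ten: five three-syllable feet would also be pentameter, just not
# *iambic* pentameter.
print(is_iambic_pentameter(["011"] * 5))  # False
```

The two calls make the nitpick concrete: the "ten syllables" follow from iamb x five, not from "pentameter" by itself.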


Not with that attitude you won’t! But dip your toe in, _Pride and prejudice_ is pretty light and breezy while having some depth to it.


I don't know. All I remember from school is absolutely hating being forced to read, and understand/interpret, things like Shakespeare and Jane Austen. But then again I now like a lot of the vegetables I used to hate as a kid...

My daughter loves the classics, me, science fiction and fantasy.


Just read Treasure Island. It's a classic but one that is easy to comprehend and also timeless.


_Pride and prejudice_ hits for me similar notes as those for fantasy and historical fiction, though of course it is commenting on contemporary issues with no magic (except the magic of love), alas. It’s like entering a foreign society where you may have to infer why people are acting the way they do. Now that you’re not in school, no one is forcing you to write essays on what you read, or even to understand or interpret what is going on in the narrative! Cool, huh?


It’s like incel mentality except applied to literacy


She made free indirect speech [1] the cornerstone of the English language novel. She is recognized as a titanic figure. I don't know who would underrate her!

What I find strange is that people enjoy her books as romantic comedies because the world she represents is incredibly claustrophobic.

[1] https://en.wikipedia.org/wiki/Free_indirect_speech

Edited for clarification


> I was visiting Jane Austen's House Museum last year and it always gives me pleasure to see how wildly popular her work remains.

I have believed for a long time that Austen is broadly popular because her works deal with issues of human relations and economic prosperity at the heart of modern, bourgeois existence. The draw is summed up in this excellent quote from the article:

> They also both, mostly, focus on characters who have enough privilege to have choices, but not enough power to escape circumstances.

That's a perceptive description of middle class life. The movie "Clueless" is an illustration of how easily Austen's insights translate to a society that is superficially very different from hers. [0]

[0] https://en.wikipedia.org/wiki/Clueless


She is, though, simply a joy to read: witty and well written.

"Elinor agreed with it all, for she did not think he deserved the compliment of rational opposition". - from S&S

Who wasn't in a situation where they felt arguing would do nothing? John Green asked: "Who doesn't want a friend as witty as Jane Austen to comment on life?"


Austen's command of language and empathy for her characters is second to none. I love the hook at the end of this passage from Pride and Prejudice.

   ``And of this place,'' thought she, ``I might have been mistress! With these rooms I might now have been familiarly acquainted! Instead of viewing them as a stranger, I might have rejoiced in them as my own, and welcomed to them as visitors my uncle and aunt. -- But no,'' -- recollecting herself, -- ``that could never be: my uncle and aunt would have been lost to me: I should not have been allowed to invite them.'' This was a lucky recollection -- it saved her from something like regret.

https://pemberley.com/janeinfo/ppv3n43.html


> to see how wildly popular her work remains

There's an annual Jane Austen festival there too - it really brings people from all over the world. Very fun event even if you're just +1 to someone who's into it.


I upvoted for the perfect first line of this HN post. That you're related makes sense.


Do you have an example of her writing moving into iambic pentameter in prose, please?

I googled for examples from her books, but the search results are terrible.


Of course! This is my favourite example, from Sense and Sensibility, because it announces itself with "burst", and that's the novel where she deploys it most:

"Elinor could sit it no longer. She almost ran out of the room, and as soon as the door was closed, burst into tears of joy, which at first she thought would never cease."

She 'tends towards Iambic', in literary criticism terminology. So it's not a strict Iambic, more like a 'soft Iambic', a term I can't remember whether it's actually used in lit crit or whether I made up.

You need to drop the "at" syllable, in that example (which you would do in vocal rhythms of English, then and now), for it to be a true Iambic.

There's lots of good writing on the King James Bible "tending towards" Iambic, which should be more Google-able, and her father was a preacher, so that's a likely influence there, I would speculate.

Some others I like that I remember:

"You pierce my soul. I am half agony, half hope." - Persuasion (I think?).

"Till this moment I never knew myself." - Sense and Sensibility again? I can't remember off the dome. That's a gorgeous strict Iambic.

There are much longer examples - whole paragraphs that close chapters of Sense and Sensibility specifically. I'll try and find the version I have notations on when I'm next around my books. She regularly slips into it to close moments of emotional crescendo - "Cursus" being the Latin term for an analogous technique, when it was more frequently used in a more stylised manner.
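The "tends towards Iambic" idea above can be sketched computationally. This is a toy illustration, not lit-crit methodology: the per-word stress labels below are hand-assigned assumptions for just these example phrases (real work would pull stress markers from a pronouncing dictionary such as the CMU dict), and it ignores the elisions and feminine endings discussed elsewhere in the thread.

```python
# Toy sketch of "tending towards iambic": score how closely a phrase's
# syllable stresses match a strict 0,1,0,1,... (unstressed/stressed) template.
# STRESS is a hand-labelled assumption covering only the words used here.

STRESS = {
    # 0 = unstressed syllable, 1 = stressed syllable (hand-labelled guesses)
    "and": [0], "i": [1], "will": [0], "kill": [1], "you": [0],
    "till": [0], "this": [1], "moment": [1, 0], "never": [1, 0],
    "knew": [0], "myself": [0, 1],
}

def syllable_stresses(phrase):
    """Flatten a phrase into its hand-labelled syllable stress values."""
    out = []
    for word in phrase.lower().replace(".", "").replace(",", "").split():
        out.extend(STRESS[word])
    return out

def iambic_fit(phrase):
    """Fraction of syllables matching a strict iambic 0,1,0,1,... template."""
    stresses = syllable_stresses(phrase)
    template = [i % 2 for i in range(len(stresses))]
    matches = sum(s == t for s, t in zip(stresses, template))
    return matches / len(stresses)
```

Under these (assumed) labels, "And I will kill you" scans as a perfect alternation, while "Till this moment I never knew myself" scores well below 1.0; a line can "tend towards" the metre without strictly scanning.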


One of those comments that let you glimpse the depth behind things and the joys that lie in exploring those depths.

Thanks for taking the time. I will spend tomorrow evening reading.


What a nice thing to say, thank you. A pleasure to be even a vague signpost toward work that's so rewarding. Enjoy your evening(s)!


> "Till this moment I never knew myself." - Sense and Sensibility again? I can't remember off the dome. That's a gorgeous strict Iambic.

"Till this moment I" and "I never knew myself" would be trochaic and iambic, respectively, but they don't strictly scan when you overlay the 'I's. You can of course get them to by e.g. eliding 'moment', or adding a line break and taking '-ment' as a feminine ending, or just scanning according to the writer's idiosyncrasies.

And individual writers can be very idiosyncratic here. Shakespeare, for example, if I remember right, lets monosyllabic words occur in almost any position. Disyllabic words on the other hand can have any combination of stresses (iamb, trochee, spondee, or pyrrhic), but only if they're foot-aligned. And so on.

The field has probably evolved since I was last part of it, but I'll still recommend Kristin Hanson's work in this area: https://linguistica.sns.it/RdL/9.1/Hanson.pdf. (Actually the second time I've recommended Hanson on HN. The last time was, let's see, 6 years ago!)


+1! Hanson is one of the gold-standards on this. It is idiosyncratic, you're right - to the speaker / reader as much as the writer (is my contention with their work).

Personally, I do take 'ment' as a feminine ending there, or - more specifically - the T sound runs into the I sound when I read it, the way it would in the predominantly Italian stuff she's likely referencing.

I'm very much with Gordon Lish on Shakespeare's monosyllabic drift words - that he was educated in Latin, and integrating Germanic vocabulary into that structure relatively freely, and further analysis is almost impossibly complex. That said, there's a lot of moments in those where I'd kill to hear where the stress landed when first performed.

This specific area is really one of those "What if?" moments in literary criticism, I think - I believe it would be incredibly beneficial for the form if this was the dominant focus of critique, rather than thematic stuff. On the rare occasions I teach at universities, this is all completely new to students, which sucks - it's entirely possible to approach prose theory with the same rigour as music theory, and it seems (in the UK, at least) to be very quickly becoming a lost art!


Thank you. Scanning those phrases, I am trying to read the cadence and understand this -- this is very much a comment to return to and ponder. Thank you very much for answering my question, too.


Go ahead, be proud! Be Austen-tatious!


I recall this from one of Donald Westlake's books:

"He stopped on a dime and collected 5 cents change."


That sounds like a line Raymond Chandler would have used. His turn of phrase is also delightful.


Laughed and was warmed by the end reveal. I support gushing over literature on HN.


I don't think equating "extremely close" with "pretending like it can" is a fair way to frame the sentiment of the comment you were replying to. Saying something is close to doing something is not the same as saying it already can.

In terms of cinema tech, it took us arguably until the early 1940s to achieve "deep focus in artificial light". About 50 years!

The last couple of years of development in generative video looks, to me, like the tech is improving more quickly than the tech it is mimicking did. This seems unsurprising - one was definitely a hardware problem, and the other is most likely a mixture of hardware and software problems.

Your complaints (or analogous technical complaints) would have been acceptable issues - things one had to work around - for a good deal of cinema history.

We've already reached people complaining about "these book spines are illegible", which feels very close to "it's difficult to shoot in focus, indoors". Will that take four or five decades to achieve, based on the last 3 - 5 years of development?

The tech certainly isn't there yet, nor am I pretending like it is, and nor was the comment you replied to. To call it close is not laughable, though, in the historical context.

The much more interesting question is: At what point is there an audience for the output? That's the one that will actually matter - not whether it's possible to replicate Citizen Kane.


Interestingly, Gemini 3 Pro Thinking clocks in at 2nd place - behind Horizon Alpha, but ahead of any actually-released model checkpoints - on the TvP Benchmark, with a score of 71.8%.

This blows Sonnet 4.5 (64.8%), GPT5.1 (58.7%) and Opus 4.1 (66.3%) away.

An example output:

'''

The cabin was cold. It smelled of pine needles and old dust. Bryan sat on the porch with a glass of scotch balanced on his knee. It was cheap scotch. He missed the good stuff, but he wasn't in Paris anymore, and he certainly wasn't in Beverly Hills.

He checked his watch. It was a bulky thing, black rubber. He checked the Glock on the table next to him. That was black, too.

"I told you," Bryan said. He wasn't talking to anyone. "I told you not to go into the woods."

He was thinking about Kim. He was always thinking about Kim. She was safe, probably. At a mall in Santa Monica. Or maybe a library. But it felt like she was gone. It always felt like she was gone.

The trees at the edge of the clearing shifted. There wasn't any wind. The air was heavy and humid, unusual for October. A bird stopped singing. Then the crickets stopped. It got very quiet.

Bryan took a sip of the scotch. It burned his throat. He liked the burn. It reminded him he was still there.

Three red dots appeared on his chest. They made a small triangle.

Bryan didn't look at them. He set the glass down on the railing. He picked up the phone. It was an old flip phone. He pressed a button.

"I don't know who you are," Bryan said into the phone. "I don't know what you want. If you are looking for ransom, I can tell you I don't have money."

The leaves rustled. A shape began to form out of the empty air. It looked like heat rising off the highway in August. It was tall. Much taller than a man.

"But what I do have," Bryan said, "are a very particular set of skills."

The shape solidified. It was ugly. It had skin like a lizard and dreadlocks that looked like heavy black cables. It wore a metal mask. It made a clicking sound. Click-click-click. Like a Geiger counter.

"Skills I have acquired over a very long career," Bryan said. He stood up. His knees popped. He felt the ache in his lower back. He was getting too old for this.

The thing roared. It wasn't a human sound. It was a wet, guttural noise.

"I will look for you," Bryan said. He put the phone in his pocket. "I will find you."

The creature raised a metal gauntlet. Two jagged blades slid out. They caught the moonlight.

"And I will kill you," Bryan said.

He raised the Glock and fired twice. Pop-pop.

The creature didn't fall. It took a step forward. Green blood, thick and glowing, leaked from its shoulder. It looked like antifreeze. The creature touched the wound, then looked at its hand. It seemed surprised.

Bryan didn't wait. He moved off the porch. He moved fast for a man his age. He went left, toward the woodpile. The creature fired a blast of blue light from its shoulder. It hit the porch swing. The swing exploded. Wood splinters rained down like confetti.

Bryan rolled behind the logs. He breathed in. He breathed out. He could hear the heavy footsteps crunching the dry leaves. Crunch. Crunch.

"You're big," Bryan whispered. "That makes you slow."

He grabbed a splitting maul from the stump. It was heavy. The handle was smooth hickory.

The creature came around the woodpile. It was scanning the trees. It didn't see Bryan down low. Bryan swung the maul. He put his hips into it. He swung it like he was chopping a cord of oak.

The blade hit the creature in the knee. There was a snap. A loud, wet snap.

The thing howled. It fell onto one knee.

Bryan dropped the maul. He stepped inside the creature’s reach. He knew exactly where to hit. The throat. The armpit. The soft spot under the ribs. He hit the creature three times, hard chops with the side of his hand. It was like hitting a brick wall, but he felt something give.

The creature swung its arm back. It hit Bryan in the chest.

Bryan flew backward. He hit the dirt. The wind went out of him. He lay there for a second, staring up at the stars. They looked very far away. He wondered if Lenore was looking at the same stars. Probably not. She was probably sleeping.

He sat up. His ribs hurt. Maybe broken.

The creature was trying to stand. It was clicking again. It tapped something on its wrist. A series of red symbols started flashing. They counted down.

Bryan knew a bomb when he saw one.

"No," Bryan said.

He tackled the thing. He didn't think about it. He just did it. He grabbed the creature’s arm. He twisted the wrist mechanism. He’d seen something like it in Baghdad once. Or maybe Istanbul. The memories ran together now.

He ripped the gauntlet loose. Wires sparked. He threw it as hard as he could into the darkness of the woods.

Three seconds later, there was a flash. A boom. A shockwave that shook the pine needles from the trees.

Silence came back.

The creature lay on the ground. It was breathing shallowly. The green blood was pooling under it. It took off its mask.

The face was hideous. Mandibles. Beady eyes. It looked at Bryan. It said something, a garbled copy of Bryan's own voice.

"...good luck..."

Then it died. It just stopped.

Bryan stood up. He dusted off his pants. He walked back to the porch. The swing was gone. The railing was scorched.

His glass of scotch was still sitting there, untouched. The ice hadn't even melted.

He picked it up. He took a drink. It still tasted cheap.

He took his phone out and looked at it. No service.

"Well," he said.

He went inside the cabin and locked the door. He sat on the couch and waited for the sun to come up. He hoped Kim would call. He really hoped she would call.

'''


An old joke that I was thinking about recently: Two local government consultants - tasked with seeing if it'd be financially beneficial to dig a new tunnel so that cars don't have to drive up and down a mountain - dig two small holes on opposite sides of the mountain then stand at either end.

The punchline, which I can't remember, is something about the two holes being, according to the two consultants, an MVP of a tunnel: "Just stand at either end of it."


I don't know that one, but here's a superficially similar joke from somewhere among http://miresperanto.com/humuro.htm :

When the British government invited commercial proposals for the digging of the Channel Tunnel between England and France, one man submitted a bid for only £10,000. “How can you possibly dig under the English Channel for only £10,000?” asked the project manager.

“It’s simple,” replied the low bidder. “My partner takes a spade, goes to France and starts digging. I take another spade and start digging from England. We’ll both keep digging until we meet in the middle.”

“Hm, I see. But what happens if, through a miscalculation, you two do not meet?”

“That’s even better for you!” replied the bidder enthusiastically. “In that case you will have two tunnels!”


An answer to your puzzle from another post, which is now locked: https://news.ycombinator.com/item?id=42498953 The "alternate" 5x5 word square that satisfies all the clues without using the words from the first grid is:

    S T R I P
    C H I N A
    R E G A L
    A T O N E
    P A R E R
Breakdown of the solution:

Across

STRIP (Remove the outer layer of, perhaps) — Counterpart to SCALD.

CHINA (Region on a globe) — Counterpart to POLAR.

REGAL (Like some movie theaters; e.g., Regal Cinemas) — Counterpart to ARTSY.

ATONE (Command to a lawbreaker) — Counterpart to CEASE.

PARER (Rhyme for Tom Lehrer /'lɛrər/) — Counterpart to ERROR.

Down

SCRAP (____yard; scrapyard is a common sci-fi setting) — Counterpart to SPACE.

THETA (It goes something like this: Ꮎ) — Counterpart to CORER.

RIGOR (Feature of liturgy, often; strictness/adherence to rubrics) — Counterpart to ALTAR.

INANE (It's vacuous, in a sense) — Counterpart to LASSO.

PALER (Fino is paler than Pedro Ximénez sherry) — Counterpart to DRYER.


Yep, you got it! :)


A perfect joke, really.

