Weird, I've been using Ubuntu for quite a few years now, both on my work laptop and my personal desktop, and have never even heard the terms "Ubuntu Pro" or "Expanded Security Maintenance for Applications."
Algorithms Illuminated by Tim Roughgarden really helped me understand algorithm design and analysis. Used it to prep for a master's in computer science, with no previous degree in the area.
Keeping the state on the server allows us to run arbitrary Python code and libraries in our event handlers that update the state. Currently only the UI is compiled to React; the logic stays in Python.
We're working to offload more logic to the client for purely UI operations like you mention, and in the future we want to leverage WASM once it's more mature.
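The server-side pattern described above can be sketched roughly like this. This is a toy illustration of the idea, not the framework's actual API; the class and function names here are made up:

```python
# Toy sketch: state lives on the server, the client only sends event
# names, and arbitrary Python runs in the handler to compute new state.
import json


class CounterState:
    """Server-held state; event handlers are plain Python methods."""

    def __init__(self):
        self.count = 0

    def increment(self):
        # Any Python code or library can run here, server-side.
        self.count += 1


def handle_event(state, event_name):
    """Dispatch a client event to its handler, return the state delta."""
    getattr(state, event_name)()  # run the handler on the server
    return json.dumps({"count": state.count})  # delta sent to the React UI


state = CounterState()
print(handle_event(state, "increment"))  # {"count": 1}
```

The point is that the round trip is only state deltas; the Python logic never leaves the server, which is why purely cosmetic UI operations benefit from being moved client-side.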
It depends on what you mean by function. I did an internship at Coca-Cola in Brazil a decade ago. They had ~500 employees country-wide, mainly marketing and operations strategy folks. The production of the actual beverages and other products was in great part delegated to partner bottlers. My boss there used to joke that if all 500 employees were fired at once, the amount of Coca-Cola cans and bottles sold wouldn't drop at all for months. And I do believe he was right. In the long run, of course, things would be different - that's where marketing and strategy pay off.
Which reminds me of https://fivebooks.com/, where people from a particular field are asked their top five book recommendations for a given theme. The interview format is great, and I've picked up a few recommended books along the way.
It seems you are mistaking the actual book (a story, an exposition of a subject, etc.) for the printed set of cover and pages, which is only a physical instance of the book itself. Writing your notes in a separate medium is just as much engaging with the book as writing in the margins of a physical copy. It's a matter of taste really, but definitely not an insult to the author!
> This is overstated and easily disproved. ChatGPT produces accurate facts as a matter of course. test it right now
"the idea that this stopped clock is broken is overstated and easily disproved. the clock produces accurate time as a matter of course. go ahead and ask what time is it, just make sure it is 3:45am or 3:45pm"
what? the argument here is that ChatGPT giving factual answers is a mere coincidence, not at all what the model was trained to do. It's a broken clock: it can tell you the correct time in very specific contexts, but you shouldn't rely on it as your source of factual information. If you feed it enough data saying the Statue of Liberty is 1 cm tall, it will happily answer a query with that "fact".
Any analogy is incorrect if you stretch it enough, otherwise it wouldn't be an analogy...
My clock analogy works up to this: ChatGPT's success in factually answering a query is merely a happy coincidence, so it does not work well as a primary source of facts. Exactly like... a broken clock. It correctly tells the time twice a day, but it does not work well as a primary source of timekeeping.
Please don't read more deeply into the analogy than that :)
Nope, not random behavior. ChatGPT works by predicting the continuation of a sentence. It has been trained on enough data to emulate some pretty awesome and deep statistical structure in human language. Some studies even argue it has built world models in some contexts, but I'd say that needs more careful analysis. Nonetheless, in no way, shape or form has it developed a sense of right vs wrong, real vs fiction, in a way you can depend on for precise, factual information. It's a language model. If enough data says bananas are larger than the Empire State Building, it would repeat that, even if it's absurd.
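To make the "it just continues text" point concrete, here's a toy bigram model, a deliberately tiny stand-in for a real LM, that completes a prompt with whatever its training corpus repeats most, true or not:

```python
# A minimal bigram language model: its only "knowledge" is word-pair
# frequency in the training corpus. It has no notion of true vs false.
from collections import Counter, defaultdict


def train_bigrams(corpus):
    """Count how often each word follows each other word."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts


def complete(counts, prompt, n=3):
    """Greedily extend the prompt with the most frequent continuation."""
    words = prompt.split()
    for _ in range(n):
        followers = counts.get(words[-1])
        if not followers:
            break
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)


# If the corpus repeats a falsehood often enough, the model repeats it too.
corpus = ["bananas are larger than skyscrapers"] * 10
model = train_bigrams(corpus)
print(complete(model, "bananas are"))  # bananas are larger than skyscrapers
```

A real transformer is vastly more sophisticated, but the training objective is the same kind of thing: predict the likely continuation, not the factual one.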
I didn’t say it was random behavior. You did when you said it was a happy coincidence.
I know it is just a language model. I know that if you took the same model and trained it on some other corpus that it would produce different results.
But it wasn't, so it doesn't have enough data to say that bananas are larger than the Empire State Building; not that it would really matter anyway.
One important part of this story that you're missing is that even if there were no texts about bananas and skyscrapers, the model could infer a relationship between them based on the massive amounts of other size comparisons. It is comparing everything to everything else.
See the Norvig-Chomsky debate for a concrete example of how a language model can create sentences that have never existed.
> the model could infer a relationship between those based on the massive amounts of other size comparisons
That is true! But would it be factually correct? That's the whole point of my argument.
The knowledge and connections that it acquires come from its training data, and it is trained for completing well-structured sentences, not correct ones. Its training data is the freaking internet. ChatGPT stating facts is a happy coincidence because (1) the internet is filled with incorrect information, (2) its training is wired for mimicking human language's rich statistical structure, not generating factual sentences, and (3) its own powerful and awesome inference capabilities can make it hallucinate completely false but convincingly-structured sentences.
Sure, it can regurgitate simple facts accurately, especially those that are repeated enough in its training corpus. But it fails for more challenging queries.
For a personal anecdote, I tried asking it for some references for a particular topic I needed to review in my master's dissertation. It gave me a few papers, complete with title, author, year, and a short summary. I got really excited. Turns out all the papers it referenced were completely hallucinated :)
Clock correctness is relative. If the antique windup clock in your living room is off by 5 minutes, it's still basically right. But if the clock in your smartphone is 5 minutes off, something has clearly gone wrong.
Nor is it only incorrect one billionth of the time, as you seem to be indicating through your hypotheticals. Depending on what I've asked it about, it can be incorrect at an extremely high rate.
Once I stayed in an Airbnb owned by Carl Friedrich Gauss' distant relatives in Brazil.
It was a very cozy cabin in the mountains around Rio and I was celebrating a two-year anniversary with my girlfriend. There were a few books arranged in a short rack, mostly teen stuff, but one aged book stood out. It was an English version of Gauss' Theory of the Motion of the Heavenly Bodies, apparently borrowed from a university library in the 1970s but never returned. Inside, I found two documents from 1969, a voter registration and an exam card. They belonged to a woman with a Brazilian first name and Gauss' surname. Later, I had to transfer money to the Airbnb host, and she also had Gauss as a surname.
I was pretty thrilled with the whole thing. My girlfriend was more entertained by the cabin's cat.
I believe they tackle this exact bias. From the Sampling Effects section:
> Second, the choice of CDS n-grams could lead to a "recency bias" in our results, explaining their rise in prevalence in recent decades. We control for this effect with a null model that samples random n-grams more frequently from recent books, due to rapidly increasing publication volume since 1895, thereby inducing a bias toward more recent language. We observe increases of CDS n-gram prevalence well above levels predicted by this null model
> thereby inducing a bias toward more recent language. We observe increases of CDS n-gram prevalence well above levels predicted by this null model
I don't get why this would work. I get the null model predicting a bias X, and I guess they have a greater bias X + Y, but I don't see how that handles their choice of cognitive distortion signifiers being biased. I mean, isn't it likely their choice of signifiers matches Y?
They are claiming that their basket of distortion n-grams grew in prevalence faster than other randomly sampled n-grams from recent works.
This seems like an interesting approach to controlling for the bias, but I'd expect the random sampling to bias lower than a specific sample, since a random sampling of n-grams would pick up a lot of English grammar, which hasn't changed in many years.
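The null model being discussed can be sketched as a toy simulation. This is my reading of the idea, not the paper's actual code: publication volume grows over time, so n-grams sampled uniformly from books land disproportionately in recent years, and any basket sampled that way inherits a built-in upward trend that a genuine effect must exceed.

```python
# Toy sketch of a recency-weighted null model for n-gram prevalence.
# Assumption (not from the paper): volume doubles each decade.
import random

random.seed(0)

years = list(range(1900, 2000, 10))
volume = [2 ** i for i in range(len(years))]  # rapidly growing volume


def sample_years(k):
    """Sample publication years of k random n-grams, volume-weighted."""
    return random.choices(years, weights=volume, k=k)


null_draw = sample_years(10_000)
share_recent = sum(y >= 1970 for y in null_draw) / len(null_draw)

# Most random draws land in the last few decades purely because of
# volume growth; this is the baseline the CDS basket must rise above.
print(f"share of null-model n-grams from 1970 on: {share_recent:.2f}")
```

The expected share from 1970 onward here is (128 + 256 + 512) / 1023, roughly 0.88, with no change in language at all; the paper's claim is that the CDS basket rises well above that kind of baseline.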