stephenboyd's comments | Hacker News

The iPhone SE was only $400.

“hundreds of micro-services for a web application that barely has 1k concurrent users.”

That would not be appropriate in any of the mainstream cloud-native architecture styles.


Unless someone convinced someone that they should be prepared in case they become BIG and get millions of concurrent users.

And even then, nothing is guaranteed. During the COVID-19 pandemic, Pokémon Go's user count exploded thanks to an adaptation of its game mechanics. Despite being built for world scale, it struggled for a couple of weeks to meet the workload. World-scale capacity naturally degrades to normal-scale capacity if you don't stress it regularly.


> That would not be appropriate in any of the mainstream cloud-native architecture styles.

I mean, it ends up being like that. A significant number of medium-sized sites bust out into micro-services at some point. Or they start off serverless and realise that it's actually not as easy to scale as was claimed.


Did they say it was an LLM? I didn’t see that in the reporting.


It isn't just the projections that distort our perception. North being up and south being down is so ubiquitous that it seems like Earth (and the Solar System) has a top side and a bottom side. But that's just a convention.

https://www.bbc.com/future/article/20160614-maps-have-north-...


Well, if you define up and down as the axis perpendicular to the ecliptic, there is an up and down in the solar system.


Kinda? It's still arbitrary which one we think of as "up" and which we think of as "down," though, right?
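Even "perpendicular to the ecliptic" only picks out a line; choosing which end counts as "up" still takes a handedness convention. A toy sketch in Python (illustrative vectors, not real ephemeris data):

    import numpy as np

    # "Ecliptic north" is conventionally the direction of Earth's orbital
    # angular momentum: n = (r x v) / |r x v|.
    r = np.array([1.0, 0.0, 0.0])  # Earth's position relative to the Sun
    v = np.array([0.0, 1.0, 0.0])  # direction of Earth's orbital velocity

    north = np.cross(r, v)
    north /= np.linalg.norm(north)
    print(north)  # [0. 0. 1.] -- only because the cross product bakes in the right-hand rule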


If there are no breaking changes, is it really a disadvantage for a language to add new capabilities frequently?


That is also the advice given in the article.

Unfortunately, it follows that with "occasionally connect the TV to the internet for a minute to see if it needs any firmware updates," which is pointless if the TV is already working properly.


It looks like the recent popularity of hybrid poodle mixes explains the decline of purebred retrievers.


Possibly. Judging only by my own (heavily Asian) neighborhood, little white dogs gave way to Goldens, Labs, and their -doodle mixes. I can't even remember all the yellow Labs' names anymore.


The training data is so large that it incidentally includes basically anything that Google would index, plus the contents of however many thousands of copyrighted works they could get their hands on. So it would definitely include some test prep books.


They seem to be taking this into account: "We did no specific training for these exams. A minority of the problems in the exams were seen by the model during training; for each exam we run a variant with these questions removed and report the lower score of the two. We believe the results to be representative." (This is from the technical report itself: https://cdn.openai.com/papers/gpt-4.pdf, not the article.)
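For reference, the decontamination check the report describes boils down to something roughly like this (a toy Python sketch, not OpenAI's actual pipeline; the function name and normalization are my own, though the report does describe matching on 50-character substrings):

    def contaminated(question: str, corpus: str, n: int = 50) -> bool:
        # Flag a test question if any n-character substring of it
        # appears verbatim in the training corpus.
        q = "".join(question.lower().split())  # normalize case and whitespace
        c = "".join(corpus.lower().split())
        return any(q[i:i + n] in c for i in range(len(q) - n + 1))

They then report the lower of the score on all questions and the score with the flagged ones removed.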


By the same token, though, whatever test questions and answers it might have seen represent a tiny fraction of the overall training data. It would be very surprising if it selectively "remembered" exact answers to all those questions unless it was specifically trained on them repeatedly.


I wonder if the sequel will also have a mortuary-based economic system.


If the future of AI is LLMs like ChatGPT, which are trained on literature and other things that people create, you're going to need humanities scholars as much as you need computer scientists to understand the AI. Microsoft gave its chatbot, which probably has almost every published work of science fiction in its training set, a human name, and then was surprised when it imitated the poorly behaved fictional named AIs it had been exposed to in training.

