Hacker News | mangamadaiyan's comments

Hm. I thought LLMs weren't free. Am I missing something?


1. You can run decent local AI now - see /r/LocalLlama. You pay the electricity cost and hardware capex (which isn't that expensive for smaller models).

2. Chinese APIs like Moonshot and DeepSeek have extremely cheap pricing, with optional subscriptions that will grant you a fixed number of requests of any context size for under $10 a month. Claude Code is the bourgeois option, GLM-4.7 does quite well on vibe coding and is extremely cheap.


Something to think about: perhaps the problem is with the duration of the appointment, and the difficulty of getting one in the first place? Elsewhere in the world, doctors can and do spend more than 12 minutes figuring out what's wrong with their patients. It's the healthcare system that's broken, and it _can_ be fixed without resorting to ChatGPT. That it won't be is the reality, though.


Can't really compete with LLMs on duration of attention - SOTA LLMs can ingest years of research on the spot, and spend however long you need on your case. No place on Earth has that many specialists available to people (much less affordable ones); you'd have to have 50% of the population become MDs, and that would still cover just one sub-specialty of one specialization.


GP sessions being around 20 minutes is pretty standard in North American and European countries. You can't have standard hour-long GP sessions, as it'd become impossible to make a timely appointment, no matter which system.


Can confirm, having experienced both the US and Dutch systems now. In both countries my visit is only about 20 minutes, plus another 15-30 sitting in the lobby because the doctor is always running behind schedule.

In theory, the Dutch system will take care of you more quickly for "real" emergencies, as their "urgent care" (spoedpost) is heavily gatekept and you can only walk into a hospital if you're in the middle of a crisis. I tried to walk into the ER once because I needed an inhaler and they told me to call the urgent care hotline... this was a couple of months after I moved.

That said, I much prefer paying €1800/year in premiums with a €450 deductible compared to the absolute shitshow that is healthcare in the USA. Now that I've figured out how to operate within the system, it's not so bad. But when you're in the middle of a health crisis, it can be very disorienting to try and figure out how it all works.


Ever wonder why famous people and celebrities always seem so healthy? They have unfettered access to well-paid doctors. People with lots of money can spend literal days with GPs, constantly trying and testing things in feedback loops with the same doctor.

When people are forced to have a consultation, diagnosis, and treatment in 20 minutes, things are rushed and missed. Amazing things happen when trained doctors can spend unlimited time with a patient.


You make a good point, but the key here is that there are far fewer people with that kind of money. The lower volume of patients is what makes that possible. There are a lot more people in the middle class, so sessions have to be limited to ensure everyone has fair, equal, and timely access to a doctor.

And of course, GPs typically diagnose more common problems, and refer patients to specialists when needed. Specialists have a lower volume of patients, and are able to take more time with each person individually.


Ever wonder why famous people and celebrities seem so unhealthy with mental health and substance abuse conditions? I'm all for improving affordable access to healthcare but most people wouldn't benefit from spending more time with doctors. It's a waste of scarce resources catering to the "worried well".

While some people are impacted by rare or complex medical conditions, that isn't the norm. The health and wellness issues that most consumers have aren't even best handled by physicians in the first place. Instead, they could get better results at lower cost from nutritionists, personal trainers, therapists, and social workers.


Having worked in rare disease diagnostics in a non-US country with good public healthcare, I can say most patients had to fight their way to the correct speciality to get their diagnosis. Without the persistence of family or specific doctors, it's not possible.

AI might provide the most scalable way to give this level of access/quality to a much wider range of people. If we integrate it well and provide easy ways for doctors to interface with this type of system, it should scale much better, as verification should be faster.


> Elsewhere in the world, doctors can and do spend more than 12 minutes figuring out what's wrong with their patients.

Where? According to "International variations in primary care physician consultation time: a systematic review of 67 countries" Sweden is the only country on the planet with an average consultation length longer than the US.

"We found that 18 countries representing about 50% of the global population spend 5 min or less with their primary care physicians."


I was referring to a couple of countries in Asia.


The American Medical Association has long lobbied to reduce the number of medical schools, reduce the number of positions for new doctors, and limit what tasks nurse practitioners can do [1].

[1] https://petrieflom.law.harvard.edu/2022/03/15/ama-scope-of-p...


Tell us you've never worked in a faang without telling us.


It doesn't work in faang either, which is why they are wildly slow to produce software. They can just print money even when running at 10% efficiency.


... and bear more load as well.


Wow, the Mandelbrot set example really put things into perspective.

Unoptimized code would easily take tens of minutes to render the Mandelbrot set at 640x480x256 on a 486. FractInt (a group effort to which Ken Shirriff contributed) was fast, but would still take tens of seconds, if not longer -- my memory is a little hazy on this count.
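For anyone who hasn't seen why those renders were so slow, here's a minimal Python sketch of the escape-time loop (not FractInt's actual code, which used heavily optimized integer math): every pixel of a 640x480 image runs up to 256 iterations of z = z*z + c, and on a 486 that arithmetic adds up fast. The viewport bounds and grid size below are illustrative choices, not anything from FractInt.

```python
def mandelbrot_iterations(c: complex, max_iter: int = 256) -> int:
    """Count iterations of z = z*z + c before |z| exceeds 2.

    The count would pick the palette colour; hitting max_iter
    means the point is assumed to be inside the set.
    """
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2.0:  # escaped: point is outside the set
            return n
        z = z * z + c
    return max_iter


def render(width: int = 64, height: int = 48, max_iter: int = 256):
    # Map the pixel grid onto the classic viewport [-2.5, 1] x [-1, 1].
    rows = []
    for y in range(height):
        im = -1.0 + 2.0 * y / (height - 1)
        row = [
            mandelbrot_iterations(complex(-2.5 + 3.5 * x / (width - 1), im), max_iter)
            for x in range(width)
        ]
        rows.append(row)
    return rows
```

At 640x480 with a 256-iteration cap, that inner loop can run tens of millions of times per frame, which is why FractInt's fixed-point tricks and symmetry shortcuts mattered so much.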


Around that time I worked in a shop that had an Amstrad 2386 as one of our demo machines - the flagship of what was really quite a budget computer range, with a 386DX20 and a whopping 8MB of RAM (ordered with an upgrade from the base spec 4MB, but we didn't spring for the full 16MB because that would just be ridiculous).

Fractint ran blindingly fast on that compared to pretty much everything else we had at the time, and again it could show it on a 640x480x256 colour screen. We kept it round the back and only showed it to our most serious customers, and our Fractint-loving mates who came round after hours to play with it.

It still took all night to render a Lyapunov set.


Transputers were a 1980s CPU innovation that didn't live up to their original hype, and have little to no connection with TransMeta.


Indeed. It is the enigma of success in an industry with no franchise value.


Maybe actually making the interviews less of a hazing ritual would help.

Hell, maybe making today's tech workplace more about getting work done instead of the series of ritualistic performances that the average tech workday has degenerated to might help too.

Ergo, your conclusion doesn't follow from your initial statements, because interviews and workplaces are both far more broken than most people, even people in the tech industry, would think.


Well, it looks like if companies and startups did their job and hired for proper distributed-systems skills rather than hazing for the wrong ones, we wouldn't be in this outage mess.

Many companies on Vercel don't think to have a strategy to be resilient to these outages.

I rarely see Google, Ably, and others that are serious about distributed systems go down.


There was a huuuge GCP outage just a few months back: https://news.ycombinator.com/item?id=44260810


> Many companies on Vercel don't think to have a strategy to be resilient to these outages.

But that's the job of Vercel and it looks like they did a pretty good job. They rerouted away from the broken region.


After reading this post, I fear that LLMs and their ilk will make humans terrible at reading and comprehension, and impair their ability to think, much as how the advent of a car-first society resulted in many humans following a sedentary lifestyle to the detriment of their health.


Great, but why? For some things I want to think, but for some things I want the information with subjectivity taken out of it. I think it depends on intention. For newspapers and other sources with known biases, I think there's value. As with many things, information rarely exists in a vacuum. In this case if we don't think with intention about the framing of such an article, then we've already outsourced part of our thinking to the authors who intend to shape it.


You're concerned about the author's bias contaminating your thinking, so the solution is to outsource your thinking to the LLM, because it's impossible for one of them to have any sort of bias at all.

This, instead of actually thinking yourself, and examining your own biases - or those of the people who wrote what you read.

Right. Good luck!


If you look for problems, you'll find them.

Stripping something down to the objective parts isn't that hard for an LLM, as it's all about language. Sure, they can and do have biases, though in this case it's a relative matter, and undoubtedly the Guardian is well known as left wing (in case somehow it isn't obvious just from looking at this article). So I'd say it's more steps forward than backwards. It's not either/or. Removing subjective fluff from such language is a function of thinking for oneself. Using an LLM to remove bias doesn't mean you need to then say "ok, now it's 100% objective". I recommend Chomsky on the subject, who for instance purposely speaks in monotone so as not to infuse emotion into what he's saying.

Enjoy thinking what somebody else decided for you.


I think you just proved my point - but hey, to each their own.

Good luck, again!


I still don't know what your point is. Don't use LLMs to remove bias because they also have bias? Is it just nihilism for the sake of nihilism? If so, I can kind of get that, but then it leads nowhere, right? If something doesn't work perfectly, I'd still use it if it's better than the alternatives. I see it as a matter of relativism.


Your claim is that LLMs are better at "x" than the alternatives, for some value of x.

My point is that your claim is unsubstantiated.


The article, you mean?

