Hacker News | oc1's comments

It's not impossible. Websites will ask for an iris scan to verify that you are human, as a means of auth. The scanners will be provided by Apple/Google, governed by local law, and integrated into your phone. There will be a global database of all human irises to fight AI abuse, since AI can't fake the creation of a baby. Passkeys and email/passwords will soon be a thing of the past.


Why can't the model just present the iris scan of the user? Assuming this is an assistant AI acting on behalf of the user with their consent.


A whole new generation will discover the term net-negative programmer again ;)


But do you use it locally? It seems to be more of a server-side product


Personally, I absolutely use it locally. I'm always trying different editors and tech, and it saves me from entering a multitude of API keys into each different piece of software, in addition to the other reasons already given: being able to specify your own limits and avoid surprise charges if you want.

When I want to try a new editor, VS Code plugin or other software, I only have to point it at my litellm proxy and immediately have access to all of the providers and models I've configured, with no extra setup. It's like a locally hosted openrouter that doesn't charge you for routing. I can select a different provider as easily as choosing the model in the software; switching from "openai/gpt-4o" to "groq/moonshotai/kimi-k2-instruct", for example.

You can use the litellm or OpenAI protocols, which makes it compatible with most software. Add the ollama proxy on top and you can also proxy ollama requests from software that doesn't support specifying OpenAI's base address but does support ollama (a not-uncommon situation). That combo covers most software.

So yes, to me it is absolutely worth running locally, and it's as easy as editing a config file and starting a Docker container (or a shell script that opens a venv and starts litellm, if you prefer).

The only drawback I've found so far is that not all providers accurately respond with their model information, so you sometimes have to configure models/pricing/limits manually in the config (about 5 lines of text that can be copy/pasted and edited). All the SOTA models are pre-configured and kept relatively up to date, but one can expect updates to lag behind real pricing changes.

The UI is only necessary if you want to set up API key/billing restrictions, which requires a database, but that is rather trivial with Docker as well.
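For concreteness, here's a minimal sketch of the kind of config I mean, using LiteLLM's documented model_list format; the model aliases and environment variable names are just illustrative:

```yaml
# config.yaml for the LiteLLM proxy (illustrative; adjust to your providers)
model_list:
  - model_name: gpt-4o                     # alias your tools will see
    litellm_params:
      model: openai/gpt-4o                 # provider/model this alias routes to
      api_key: os.environ/OPENAI_API_KEY   # read the key from the environment
  - model_name: kimi-k2
    litellm_params:
      model: groq/moonshotai/kimi-k2-instruct
      api_key: os.environ/GROQ_API_KEY
```

Start it with `litellm --config config.yaml` (or the equivalent Docker image) and point any OpenAI-compatible client at the proxy's base URL; switching providers is then just a matter of picking a different model_name.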


It's a server-side proxy, so instead of e.g. the OpenAI URL you point your AI tool at the URL of the LiteLLM proxy, and use its virtual keys with budget limits, LLM model restrictions, etc. - the features LLM providers will not give you, because it might save you money ;)


> Cost: $50K of API tokens

What? It costs exactly $200


This will be the golden age for hackers (whether for lulz or money), security researchers and script kiddies (fka idea guys).


God, I'm living in a Dilbert comic.

I never would have thought I'd one day envy licensed professionals like lawyers, who have a barrier to entry into their profession.


what's wrong with that?


Open the network tab (F12). Or, for giggles, try accessing that UI from mobile.


365 requests | 77.2 MB transferred | 79.8 MB resources | Finish: 5.71 s

I'm impressed it took only 6 seconds. Would hire.


I mean, that's the kind of website that would have been made in Flash, and that I would have played around on for ages. Now it's just stupid scroll jacking and poor HTML.


What a bunch of crap. Everyone who lived through that era remembers the horrors of optimizing for a bunch of different browsers. Ajax was barely understood. PHP was all over the place. No frameworks. No Stack Overflow. No vibe coding. Nah. Nowadays I just have to prompt Claude "Hey, write me a html site for html hobbyists and upload it somewhere on the internet". Voila!


It wasn't so much "horror" as "this designer wants rounded corners". AJAX was well understood but not yet standardized or widely supported, which comes down to slow standards bodies and browser developers like Microsoft (and nowadays Google) doing their own thing.

Anyway, show us your HTML site and document how you built it - that's the kind of thing this page is advocating for.


That era predates optimizing for different browsers.


I like how you say this last part out loud, even with some sense of pride, it seems.


> What a bunch of crap.

> Nowadays i just have to prompt Claude

The second behavior leads to the first statement.


Tea is too big to fail. That's why Apple doesn't pull the plug: otherwise they would anger a good portion of their female user base.


And that angry user base will do what, exactly? Switch to Android? One can dream.


Crazy. If true, this answers the question of why humans need sleep, and could be a great direction for resolving further questions about sleep disorders.

