Hacker News | dahcryn's comments

When you get to partner level, you also get profit sharing on top of your salary.

Partners get 300-400k and senior partners get closer to 600-800k.


Is this the same at QuantumBlack? They at least give the impression their assets on Brix are somewhat up to date and usable.

QuantumBlack is synonymous with it -- it's where all of McKinsey's AI expertise got reorganized. Anyone working on this tool was likely doing it on a rotation between client engagements under "QuantumBlack, AI by McKinsey".

QB is no more; leadership left, technical experts left. Only the brand stayed behind.

I would like to push back on your statement that each token adds a distraction.

In our experiments, we see a surprising benefit from rewriting blocks to use more tokens, especially for long lists.

E.g. compare these two options:

"The following conditions are excluded from your contract
- condition A
- condition B
...
- condition Z"

The next one works better for us:

"The following conditions are excluded from your contract
- condition A is excluded
- condition B is excluded
...
- condition Z is excluded"

And we now have scripts to rewrite long documents like this, explicitly adding more tokens. Would you have any opinion on this?
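For what it's worth, a minimal sketch of that kind of rewriting script (the function name and the hard-coded "is excluded" suffix are illustrative assumptions, not the actual pipeline described above):

```python
import re

def densify_exclusion_list(text: str) -> str:
    """Rewrite bare '- item' bullets so each bullet restates the
    relationship, e.g. '- condition A' -> '- condition A is excluded'.
    Non-bullet lines pass through unchanged."""
    out = []
    for line in text.splitlines():
        m = re.match(r"^(\s*-\s*)(.+)$", line)
        if m and not m.group(2).rstrip().endswith("is excluded"):
            out.append(m.group(1) + m.group(2).rstrip() + " is excluded")
        else:
            out.append(line)
    return "\n".join(out)
```

In a real pipeline you would parameterize the suffix per section ("is excluded", "is covered", etc.) rather than hard-coding it.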


This observation makes sense, because most current models probably use some kind of sparse attention architecture.

So the closer the two related pieces of information are to each other in the input context, the larger the chance their relationship will be preserved.


I saw a demo of Parloa (or maybe it was a different provider), and no joke, they insert the sound of typing on a keyboard or stuff like that during an LLM tool call. It's weird but surprisingly effective lol

The benefit of MCP is that it exists and kinda works, and a lot of tools are available on it. I guess it's all about adoption. But inherently, yeah, it's a discovery service thingy. Google will never embrace MCP since it was invented by Anthropic.

I consider it a good first attempt, but indeed hope for a sort of MCP 2.0.


Right, but surely Swagger/OpenAPI has been providing robust API discovery for years? I just don't get what LLMs don't like about it (apart from it possibly using slightly more tokens than MCP).

MCP is like "this is what the API is about, figure it out". You can also change the server side pretty liberally and the agent will figure it out.

Swagger/OpenAPI is "this is EXACTLY what the API does, if you don't do it like this, you will fail". If you change something, things will start falling apart.
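To make the contrast concrete, here's a rough sketch of the two shapes of metadata as Python dicts (the tool and endpoint names are hypothetical; the first dict loosely mirrors an MCP tools/list entry, the second a pinned-down OpenAPI operation):

```python
# MCP-style: a name, a free-text description, and a loose schema.
# The agent reads the description and figures out how to call it,
# so the server side can evolve without breaking clients.
mcp_tool = {
    "name": "search_orders",
    "description": "Search customer orders. Accepts a free-text query.",
    "inputSchema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
    },
}

# OpenAPI-style: the exact path, method, parameter names and types
# are all pinned down. Great for generated clients, brittle if
# anything on the server changes.
openapi_operation = {
    "path": "/orders/search",
    "method": "get",
    "parameters": [
        {"name": "query", "in": "query", "required": True,
         "schema": {"type": "string"}},
    ],
    "responses": {"200": {"description": "A list of matching orders"}},
}
```

The looseness is the point: the MCP description is prose the model interprets, while the OpenAPI contract is a spec a generated client enforces.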


I've actively started using Outlook and Teams through Chrome to free up some of my RAM; it easily saves 3-4 GB. It's gotten ridiculous how much RAM basic tools use, leaving nothing for actual work.


People get on me all the time about not installing programs on my computer. I run everything in the browser, if I can. Partly so I can kill it properly without it misbehaving, and partly because I don't trust their software at all. Zoom, Slack, Gmail, etc-- if I can run it in the browser, then that's the only way I'll run it.


Same for me on mobile. I don't install the Amazon app; I just use the browser, where I can limit tracking and only log in when actually buying something.


Every app ships with its own isolated web browser now. That idea needs to die.

Back to native apps without bloated toolkits!


Or at least improving the shared browser ui / chromeless experience for "app" installs. I think that Tauri is pretty reasonable as well, weak link being Linux currently.


No fuck the browser. It's just layers of shit on shit on shit.

Mail.app is sitting here using 137 MB of RAM. Outlook, 1270 MB.


And the likes of Zed save so much ram over VS Code... oh, wait...


I use vim for everything so I have no idea.

My main machine has 16 GB of RAM and I don't think I've ever seen it go over 4 GB, and that was when I had a 200 GB mmap'ed sparse array.


On my personal desktop, I have 96 GB... I've never gone over 70 or so, but that was with a lot of services running, a fairly complex system with data loaded locally. I generally don't give a f*ck about the RAM I'm using day to day. I'll run various updates and reboot between once a month and once a quarter.


I've found the web versions use a similar amount of memory and have fewer features.

My issue is that my company won't issue laptops with more than 16 GB of RAM.

Guess I'm not virtualizing anything...


Not necessarily, if OpenAI manages to monetize free users. It could be through advertising, or integrations with marketplaces on commission (e.g. order your next Hello Fresh through ChatGPT? Get recommended a hotel?).

They could succeed where Alexa failed. A free user can even bring in more than a paid user if you look at platforms like Spotify, where apparently a large chunk of free users generate more income through ads than they would if they paid.


We are so far away from ordering stuff through an LLM.


Not really!

I was researching CAVA (due to the crazy earnings announcement yesterday) and it was displaying some nice links to the website, all suffixed with ?utm=chatgpt

So, it has begun!


Not true at all; onboarding is complex too. E.g. you can't just connect Claude to your Outlook, or have it automate stuff in your CRM. As an office drone, you don't have the admin permissions to set up those connections at all.

And that's the point here: value is handicapped by the web interface, and we are stuck there for the foreseeable future until the tech teams get their priorities straight and build decent data integration layers and workflow management platforms.


Gemini on Fast also tells me to walk...

On Thinking it tells me I should drive if I want to wash it, or walk if it's because I work there or if I want to buy something at the car wash shop.

On Pro it's like a sarcastic teenager: Cars are notoriously difficult to wash by dragging a bucket back and forth.

Technically correct, but it did catch me off guard lol.


It's not surprising that some models will answer this correctly and it's not surprising that smaller, faster models are not necessarily any worse than bigger "reasoning" models.

Current LLMs simply don't do reasoning by any reasonable definition of reasoning.

It's possible that this particular question is too short to trigger the "reasoning" machinery in some of the "reasoning" models. But if and when it is triggered, they just do some more pattern matching in a loop. There's never any actual reasoning.


You gotta love the "humor" of Gemini. On Fast it told me:

> Drive. Unless you plan on pushing the car there


We already require all relevant and referenced documents to be uploaded into a contract lifecycle management system.

Yes, we have hundreds of identical Microsoft and AWS policies, but it's the only way. Checksum the full zip and sign it as part of the contract; that's literally how we do it.
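The checksum step is just a digest over the zip; a minimal sketch (sha256_of_file is a hypothetical helper name, not any particular CLM system's API):

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of the attachment zip, read in
    chunks so arbitrarily large bundles fit in memory. The hex digest
    is what gets written into the signed contract."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()
```

Any change to any file in the bundle changes the digest, so the signature binds the contract to the exact policy versions referenced.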

