abiraja's comments

M5 has been out since last year, no?

I just bought an M5 MacBook Pro 2 weeks ago. Thinking of returning it and getting an M5 Pro with the same configuration for only $200 more. How should I compare the M5 vs the M5 Pro?

You'll get slightly more performance and ever so slightly less battery life. I'd do it.

I don’t see how 30% more CPU, 50% more GPU and 2x the memory bandwidth is slightly more performance.

Thanks for the advice! Gonna do it.

You might also get more monitor support:

M5: supports up to two external displays over any combination of Thunderbolt and HDMI ports:

Two displays up to a native resolution of 6K at 60Hz or 4K at 144Hz or

One display up to a native resolution of 8K at 60Hz or 5K at 120Hz or 4K at 240Hz

M5 Pro: supports up to three external displays over any combination of Thunderbolt and HDMI ports:

Three displays up to a native resolution of 6K at 60Hz or 4K at 144Hz or

One display up to a native resolution of 8K at 60Hz or 5K at 120Hz or 4K at 240Hz plus a second display up to a native resolution of 5K at 120Hz or 4K at 200Hz


I've been using it lately with OpenCode and it's working pretty well (except for API reliability issues).


GPT4o and 4.1 are definitely not the best models to use here. Use Claude 3.5/3.7, Gemini Pro 2.5 or o3. All of them work really well for small files.


What are people using to interface with Gemini Pro 2.5? I'm using Claude Code with Claude Sonnet 3.7, and Codex with OpenAI, but Codex with Gemini didn't seem to work very well last week; it kept telling me to go make this or that change in the code rather than doing it itself.


I use Gemini Pro 2.5 from Zed sometimes. But whilst it is good at higher-level architecture over a lot of context, it is quite bad at 1) generating correct diffs that Zed can apply and 2) continuing. It just doesn’t seem to get “tool usage”.


Working on a web app builder that generates code via AI, much like hundreds of other tools out there. The differentiator is that the tool automatically sets up a Postgres DB (using Neon) for you. So it's a lot easier to get started, and it can handle large, complex apps that require auth and a database, but it can also build simple websites. The stack is Next.js, and the code is easy to export and view.

Primarily uses Claude Sonnet 3.7 and Gemini Pro 2.5. But you can choose other models too.

You can try it for free while I'm beta testing it here: https://lumosbuilder.com?ref=hn


This is awesome. My name is David and I work at Neon btw. Email me through david@ if you want to chat!


Batching likely means the response is not real-time. You set up a batch job and they send you the results later.
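For anyone curious what that looks like in practice, here is a minimal sketch of preparing the input for a batch job, using the OpenAI Batch API's JSONL request format (the model name, prompts, and filename here are all hypothetical):

```python
import json

def build_batch_file(prompts, path, model="gpt-4o-mini"):
    """Write one JSONL line per prompt in the OpenAI Batch API request format."""
    with open(path, "w") as f:
        for i, prompt in enumerate(prompts):
            request = {
                "custom_id": f"request-{i}",  # used to match results back to inputs
                "method": "POST",
                "url": "/v1/chat/completions",
                "body": {
                    "model": model,
                    "messages": [{"role": "user", "content": prompt}],
                },
            }
            f.write(json.dumps(request) + "\n")

build_batch_file(["Summarize doc A", "Summarize doc B"], "batch_input.jsonl")
```

You would then upload the file, create a batch with a completion window (e.g. 24h), and poll until the output file is ready; results arrive later rather than in the request/response cycle, which is exactly the non-real-time tradeoff described above.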


If only the business people I work with would understand that even transferring 100GB over the network isn't going to return results immediately ;)


That makes sense. Idle time is nearly free after all.


This is really cool! Would love an open source version of it.


I have a rudimentary pipeline that takes in a ton of data, converts it to JSON & Markdown, and then I used Claude and o1 pro to generate the dashboard. That is to say, there are manual hops. How would you want it packaged / what would be useful?
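The conversion step of a pipeline like that can be sketched in a few lines; this is just an illustration with hypothetical field names, not the author's actual code:

```python
import json

def to_json_and_markdown(records):
    """Convert a list of dicts into a JSON string plus a Markdown table,
    the two formats an LLM can then turn into a dashboard."""
    as_json = json.dumps(records, indent=2)
    headers = list(records[0].keys())
    lines = [
        "| " + " | ".join(headers) + " |",
        "| " + " | ".join("---" for _ in headers) + " |",
    ]
    for rec in records:
        lines.append("| " + " | ".join(str(rec[h]) for h in headers) + " |")
    return as_json, "\n".join(lines)

rows = [{"metric": "latency_ms", "value": 120}, {"metric": "errors", "value": 3}]
as_json, as_md = to_json_and_markdown(rows)
```

The JSON keeps the data machine-readable while the Markdown table is compact enough to paste into a model prompt; the "manual hops" would be feeding that output to Claude or o1 pro by hand.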


And also cures for the hundreds of diseases that ail us.


You need to both cure diseases and make the cures available. Millions die of curable diseases every year, simply because they are not deemed useful enough. What happens when your labor becomes worthless?


One of the biggest risk factors for death right now is poverty. Also, what is being chased right now is "human level on most economically viable tasks", because automated research for solving physics etc. still seems far-fetched even now.


Why do you think you’ll be able to afford healthcare? The new medicine is for the AI owners


It doesn’t matter. Statists would rather be poor, sick, and dead than risk trillionaires.


You should read about workers' rights in the Gilded Age and see how good laissez-faire capitalism was. What do you think will happen when the only thing you can trade with the trillionaires, your labor, becomes worthless?


If you're looking for an open source version of the same, check out https://github.com/abi/screenshot-to-code


Have you tried Mistral?


Mistral is genuinely groundbreaking for a fast, locally hosted model without content filtering at the base layer. You can try it online here: https://labs.perplexity.ai/ (switch to Mistral)


It's very fast, but it doesn't seem very good. It doesn't take instruction well (it acknowledges and then spits back the same wrong stuff), and it either doesn't have much of a corpus or is dropping most of it on the floor, because it answers zero of my three basic smoke-test questions.


>doesn't seem to have much of a corpus

What do you mean by 'corpus'? It is only 13GB, so questions that require recalling specific facts aren't going to work well with so little room for 'compression', but asking Mistral to write emails or perform style revisions works quite well for me.


Are you running mistral-7B or mistral-7B-instruct?


Wow, I was not expecting this. It's really something else in terms of speed, and the results are not bad! Will test it more.


Are teams other than the original creators working to get this up to Copilot/ChatGPT standards?


Thanks for the link! Do you know of any other similar services that support fine-tuning?

