Hacker News | new | past | comments | ask | show | jobs | submit | hrmtst93837's comments

Most terminals already trust clipboard access and window titles in ways that can be abused, no scripting engine required. Embedding a web engine would just make the threat model explicit instead of the current half-baked mix of text UI plus unsanitized metadata channels. If your workflow includes pasting into a terminal or clicking strange links, you've already lost unless your threat tolerance is set near zero. It's a decent reminder that the stuff we treat as just text keeps accumulating side channels faster than most users can keep up.

Neat until you need to sync configs or keep multiple machines in harmony, at which point dotfile headaches stack up with Hammerspoon and Lua. Adding complex logic like window rules, app-specific behavior, or handling monitor changes strips away some of that hotkey simplicity and leads to endless tweaking. Still, for avoiding the mouse, it's one of the few flexible options left on macOS that doesn't feel ancient. Tradeoffs everywhere but nowhere else really compares in control.

Syncing configs is a pretty solved problem with dotfile repos. I even made a starter repo anyone can fork & use: https://github.com/dbalatero/dotfiles-starter

Your hardware can run a good range of local models, but keep an eye on quantization since 4-bit models trade off some accuracy, especially with longer context or tougher tasks. Thermal throttling is also an issue, since even Apple silicon can slow down when all cores are pushed for a while, so sustained performance might not match benchmark numbers.
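The quantization tradeoff is easy to see with back-of-envelope weight math. Rough sketch (the 10% overhead factor is my own assumption, and this ignores the KV cache, which grows with context length):

```python
def weight_memory_gb(n_params_billions: float, bits_per_weight: float,
                     overhead: float = 1.1) -> float:
    """Weight-only memory estimate; KV cache and activations come on top."""
    bytes_total = n_params_billions * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 1e9

fp16 = weight_memory_gb(7, 16)  # a 7B model at fp16: ~15 GB
q4 = weight_memory_gb(7, 4)     # same model at 4-bit: ~4 GB
```

That 4x shrink is why 4-bit is the default for local use, but the bits you drop are exactly where the accuracy loss on harder tasks comes from.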

Helium for party balloons is low grade and not pure enough for chip fab use, so stacking up birthday tanks won't keep TSMC running. Industrial grade helium has a restricted and oddly international supply chain thanks to regulation and a few weirdly-placed depots. The US 'helium stockpile' isn't really a menu you can just order from when a factory across the planet runs dry, especially if offtakes and logistics are tied up by decade-old government contracts. If you want to see supply chain fragility, try pricing MRI-grade helium after a shutdown and watch everyone in medical procurement panic quietly.

Isn't helium one of the easiest elements to purify? Just cool it below 14 kelvin, which will make everything else freeze out. Collect the remaining gas, which should be pure helium.

14 kelvin is not easy to achieve at scale, and after that you need to keep it pure.

Apparently 14 K cooling is not used even up to 5N or 6N purity, commercial large-scale sources use various other tricks to remove the other gases. They do cool the input gas down to liquid nitrogen temperatures as one of the first steps.

My point is that there's "maximally efficient / profitable" versus "can be made available as an emergency alternative".

Cooling to 14 K isn't the cheapest option, but it has very low complexity. You can "simply" pressurise the source gas, cool it to room temperature through an ordinary heat exchanger, then allow it to expand. The only issue is that if you do this naively, the expansion nozzle will get clogged with ice.

Obviously, this wastes a lot of Helium, but we have lots of it. If what's needed is high purity Helium, then throwing away even 90% to get 10% that's 6N pure should be no problem for an industrial nation.


You can't just spin up such a facility in a few days or weeks though, surely? Even if the core of a process is relatively simple physically, you still need all the supporting infrastructure to make it happen.

after what kind of shutdown?

If the helium gets warm, you have to vent it outside before it goes kaboom from the pressure.
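The "kaboom" is easy to quantify with the ideal gas law. Back of the envelope (the density and molar volume here are approximate textbook values, not vendor specs):

```python
# How much gas one liter of liquid helium becomes at room temperature.
LIQ_DENSITY_G_PER_L = 125.0   # liquid helium at ~4.2 K (approximate)
MOLAR_MASS_G = 4.0            # helium
MOLAR_VOL_L = 24.0            # ideal gas at ~20 C, 1 atm (approximate)

moles_per_liter = LIQ_DENSITY_G_PER_L / MOLAR_MASS_G   # ~31 mol per liter
expansion_ratio = moles_per_liter * MOLAR_VOL_L        # ~750x by volume
```

Roughly 750 liters of gas per liter of liquid, times hundreds or thousands of liters in a magnet, is why quench pipes vent to the roof rather than into the room.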

https://radiology.ucsf.edu/patient-care/patient-safety/mri-s...


Damn, that's intense:

> If the scan room door is closed when a quench occurs and helium escapes into the scan room, the depletion of oxygen causes a critical increase in pressure in the room compared with the control area. This produces high pressure in the scan room, which may prevent opening of the door. If this should happen, the glass partition between the scan and control rooms should be broken to release the pressure. The scan room door can then be opened as usual and the patient evacuated. In such a case the patient should be immediately evacuated and evaluated for asphyxia, hypothermia and ruptured eardrums.


Most MRIs vent their helium in an emergency shutdown. https://medprotech.de/en/what-is-an-mri-quench/

If you need a UI over SSH or inside tmux, skipping browsers and CSS isn't just a novelty, it's essential since HTML can't touch that territory. TS-based layout in terminals can be ugly but it also dodges a pile of accessibility, latency and bloat issues you get by default with anything running in Chrome.
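The "layout without CSS" part is just fixed-width string math. A minimal sketch of a two-pane split, the core trick most TUI libraries build on (the function and its parameters are my own illustration, not any particular library's API):

```python
def two_pane(left: list[str], right: list[str],
             width: int = 40, gutter: str = " | ") -> list[str]:
    """Render two text panes side by side, the way a TUI does without CSS."""
    col = (width - len(gutter)) // 2
    rows = max(len(left), len(right))
    out = []
    for i in range(rows):
        # Pad short lines, clip long ones, to keep columns aligned.
        l = (left[i] if i < len(left) else "").ljust(col)[:col]
        r = (right[i] if i < len(right) else "").ljust(col)[:col]
        out.append(l + gutter + r)
    return out
```

No reflow engine, no style cascade: every cell is exactly where you put it, which is also why it works identically over SSH and inside tmux.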

If you push tool execution into the model itself, you inherit all the I/O unpredictability and error handling baggage, but now inside a GPU context that's allergic to latency. Inference throughput tanks if external calls start blocking, and A100s make expensive waiters. Batching is fantasy unless you know up front exactly what gets executed, which is the opposite of dynamic tools. If you want "faster" here, the trade is reliable deterministic compute versus the usual Wild West of system calls and side effects.
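The latency arithmetic is the whole argument: serialized tool calls stall the batch for the sum of their latencies, overlapped ones only for the max. A toy sketch with simulated tool latency (the function names are illustrative, not any real serving stack's API):

```python
import asyncio
import time

async def fake_tool(call_id: int, latency_s: float) -> int:
    # Stand-in for an external tool call (HTTP, DB, shell).
    await asyncio.sleep(latency_s)
    return call_id

async def run_batch(latencies: list[float]) -> float:
    # Overlapping the calls means the batch waits for the slowest call,
    # not the sum of all of them.
    start = time.perf_counter()
    await asyncio.gather(*(fake_tool(i, t) for i, t in enumerate(latencies)))
    return time.perf_counter() - start

elapsed = asyncio.run(run_batch([0.05, 0.05, 0.05]))  # ~0.05s, not ~0.15s
```

And even overlapped, the GPU still idles for that max; the only real fix is keeping tool execution out of the inference loop entirely.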

If you block name reuse globally, you introduce a new attack surface: permanent denial by squatting on retired names. Companies mess up names all the time from typos, failed rollouts, or legal issues. A one-shot policy locks everyone into their worst error or creates a regulatory mess over who can undo registrations.

Namespaces are annoying but at least let you reorganize or fix mistakes. If you want to prevent squatting, rate limiting creation and deletion or using a quarantine window is more practical. No recovery path just rewards trolls and messes with anyone whose processes aren't perfect.


Embedding owner metadata and file origin helps, but relying on it as a cure-all is risky. Attackers aiming to poison your RAG are just as happy to phish an employee or exploit public-facing sources with legitimate owner signatures. Corporate directory info and source attribution can still be faked or compromised, so provenance is not the same as integrity. If you treat any document with a valid owner field as authoritative, you are still one social engineering email away from junk in your knowledge base.
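The provenance-vs-integrity distinction shows up even in the strongest version of the scheme. Sketch of an ingestion pipeline signing owner metadata into the document (the key and function names are hypothetical):

```python
import hashlib
import hmac

SECRET = b"ingest-signing-key"  # hypothetical key held by the ingestion pipeline

def sign_document(owner: str, body: str) -> str:
    # Bind the owner field to the body, so a forged owner alone won't verify.
    msg = owner.encode() + b"\x00" + body.encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify(owner: str, body: str, tag: str) -> bool:
    return hmac.compare_digest(sign_document(owner, body), tag)
```

Even this only proves the pipeline signed that pairing; if the body was poisoned before ingestion, the signature happily attests to junk.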

If you want IPv6 without dynamic allocation, you end up rewriting half the stack anyway, which is probably not what most embedded engineers are itching to spend budget on. The weird part is that a lot of edge gear will be stuck in legacy-v4 limbo just because nobody wants to own that porting slog, which means "ubiquitous IPv6" will keep being a conference slide more than a reality.
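For what it's worth, the static-allocation side of this is trivial at the addressing layer; the slog is in the stack, not the math. Sketch with the stdlib (the prefix is the RFC 3849 documentation range and the interface ID is an arbitrary hand-assigned value):

```python
import ipaddress

# A static /64 plus a hand-assigned interface ID sidesteps SLAAC/DHCPv6 entirely.
PREFIX = ipaddress.IPv6Network("2001:db8:0:1::/64")
IFACE_ID = 0x42  # e.g., derived from a device serial number

addr = PREFIX.network_address + IFACE_ID
assert addr in PREFIX
```

The hard part is everything around it: neighbor discovery, dual-stack sockets, and all the v4-assuming code paths in the firmware.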

Rust works well for toolchains where speed counts and you can control deps, but it's a much bigger ask for server-side app logic where teams lean on JS and its libraries. Switching an established stack to Rust hits hiring and maintenance friction fast, especially with async and lifetime bugs. For Vite's community, requiring plugin authors to redo everything in Rust would probably destroy most of the value users care about.

It has worked perfectly fine with compiled languages until someone had the idea to use V8 outside of the browser.

In fact it still does, I only use node when forced to do so by project delivery where "backend" implies something like Next.js full stack, or React apps running on iframes from SaaS products.


> ... it's a much bigger ask for server-side app logic where teams lean on JS and its libraries.

Well that's where they went wrong.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact
