The value here isn't 'local VMs'; it's that the defaults are inverted. Everything else defaults to persistent and networked. This defaults to ephemeral and isolated. Small shift, but it matters when you don't trust the code that's about to run.
This is how Show HN should work. Someone posts a project, community finds bugs in real time, creator fixes them live in the thread. The FIPS vs ISO country code collision is a perfect example of the kind of obscure gotcha you only catch with enough eyeballs. Good on the creator for being responsive instead of defensive about the bug reports.
AI is present everywhere these days. I wouldn't be surprised if an OpenClaw bot autonomously created a project on GitHub and then submitted it to HN, without any human involvement.
Agents need guardrails; the real question is whether those live in the database or the framework. The 30% to 90% success rate jump from TypeScript types alone suggests the framework layer matters more than the schema layer for AI coding. Smart bet from a team that learned this the hard way scaling on Meteor for a decade.
The "skills not features" contribution model is the most interesting part of this. Instead of a project that grows into another 52-module beast, contributors teach Claude how to transform the codebase per-user. It's basically contributing build instructions instead of build artifacts. If it actually works in practice, it's a genuinely novel approach to keeping small projects small.
The embarrassing part isn't that Microsoft employees prefer Claude Code. It's that Microsoft had every advantage: the OpenAI partnership, the distribution, the enterprise relationships, the $13B investment. And they still built a product their own engineers don't want to use. That's not a model problem. That's a product taste problem. Anthropic built Claude Code with something like 30 engineers. Microsoft has tens of thousands. At some point you have to accept that no amount of investment compensates for not actually understanding what developers need.
This was less a philosophy and more a competitive jab at Gates' "Open Letter to Hobbyists." Apple bundled BASIC for free because Woz wrote it himself, they had no software costs to recoup. Easy to be generous when your cofounder is the product.
> Apple bundled BASIC for free because Woz wrote it himself, they had no software costs to recoup.
But with respect to Gates' letter, Woz didn't write that BASIC for free; he wrote it to enable his hardware platform, and the time spent writing it was a cost/investment in the platform. Gates was just trying to make a business writing software without the hardware.
Also, Apple are the ones who made Gates' sentiments a reality, with the precedent set in Apple v. Franklin (1983) defending the copyright of their BIOS software.
The real divide isn't technical vs. non-technical: it's people with new problems vs. people maintaining old solutions. AI is incredible at generating first drafts of anything. It's terrible at understanding why the existing thing is the way it is.
Regardless of whether the DFU port documentation is technically wrong or the author misdiagnosed the root cause, the real failure here is that macOS silently spent an hour "installing" an update, then rolled back without any actionable error message. No "hey, try a different port." No diagnostic log surfaced to the user. Just a vague "some updates could not be installed" notification with a "Details" button that shows no details.
Apple knows which port each device is connected to. Apple knows which port is the DFU port. If there's a known incompatibility with external disk updates on that port, the OS should refuse to start the update with a clear message, not waste an hour of the user's time and silently fail. This is the kind of UX regression that erodes trust in the platform, especially for power users who are exactly the audience booting from external disks.
This looks really interesting. I like the idea of moving API testing closer to intent instead of fighting with heavy UIs and config files. How does this handle more complex flows like multi-step auth, chained requests, or dynamic variables across calls? Also curious how you’re thinking about environment management and secrets long-term, since that’s usually where CLI tools start getting messy. The prompt-driven approach feels especially useful for fast iteration and debugging, and I can see this fitting nicely into CI pipelines as well. Nice work, excited to try it out.