This seems like a natural evolution of Raycast Extensions (themselves an evolution of Script Commands), given the current landscape (generative everything). I would be surprised if there’s no “Raycast inside” within and around the new toolchain.
I’m torn about what this likely means for iOS; while I do want to do Raycast-y things on my phone, I’m not sure there are enough of us to make a business out of it.
As a Raycast paying user, I was a little bent that they apparently haven’t been focusing on the core product.
However, having just vibe-coded an actually useful Raycast extension, I can see wanting to bring this capability to a wider audience - and how this could scale their core product adoption beyond “nerds who think Spotlight stinks”.
A lot of good (if negative) comments ITT, though; it’s going to be tough for them to bring this to market safely.
There’s some sort of underground GenAI SEO movement happening. I’m not sure how it works, but I’ve been examining source quality from Perplexity and ChatGPT and keep finding the same sources over and over. I’ve found quite a few “gamed-looking” sources recently; basically the same kind of trash that has dominated Google SERPs for the last few years.
Edit: this is a ridiculous question, I know. Trying to eat my dogfood so to speak
Does Tailscale maintain a Q&A agent, MCP server, or llms.txt that anyone is aware of?
I’m trying to use Tailscale across my personal networks - without investing a lot of time - and so I’m throwing agents at it. It’s not going well, primarily because their tools/interfaces have been changing a lot, so tool calls fail (e.g. ‘tailscale serve --xyz’ is now ‘tailscale funnel ABC’ and needs manual approval, and that’s not in the training set).
For one, qmd uses SQLite (FTS5 and sqlite-vec, at least at some point) and then builds reranked hybrid search on top of that. It uses some cool techniques like resilient chunking and embedding, all packaged up into a TypeScript CLI. I’d say it sits at a layer above Wax.
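For anyone unfamiliar with the pattern, here’s a minimal sketch of “reranked hybrid search”: keyword recall via SQLite’s built-in FTS5, then a vector rerank over the candidates. The embed() function is a toy hashing stand-in, not qmd’s actual embedder, and the schema is made up for illustration:

```python
import sqlite3, math

def embed(text, dim=16):
    # Toy hashing embedder (stand-in for a real embedding model)
    v = [0.0] * dim
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE chunks USING fts5(body)")
docs = [
    "tailscale funnel exposes a local service publicly",
    "sqlite fts5 gives bm25-ranked keyword search",
    "hybrid search combines keyword recall with vector reranking",
]
db.executemany("INSERT INTO chunks(body) VALUES (?)", [(d,) for d in docs])

query = "hybrid vector search"
# Stage 1: broad keyword recall (FTS5's rank column is BM25, best first)
fts_query = " OR ".join(query.split())
candidates = db.execute(
    "SELECT body FROM chunks WHERE chunks MATCH ? ORDER BY rank LIMIT 10",
    (fts_query,),
).fetchall()

# Stage 2: rerank the candidate set by vector similarity to the query
qv = embed(query)
reranked = sorted(candidates, key=lambda r: -cosine(qv, embed(r[0])))
print(reranked[0][0])
```

The two-stage shape is the point: FTS5 keeps recall cheap, and the (normally much more expensive) vector scoring only runs over the shortlist.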
Tell us more. I had Codex port this to Python so I could wrap my head around it, and it’s quite interesting. Why would I use this WAL-checkpointing thingamajig when I have access to sqlite-vec, Qdrant, and other embedded friends?
WAL/checkpointing is about control over durability and crash behavior, not “better vectors.” sqlite-vec and Qdrant are storage engines first; their durability is mostly “under the hood.” If your goal is a clean local RAG system, owning that layer can be better when you want:
1. deterministic ingest semantics (append-only event log of chunks, then materialize state),
2. fast recovery from partial writes (replay only the WAL since the last checkpoint),
3. precise checkpoint boundaries tuned to your app (e.g., after every batch/conversation/session ingest),
4. a single-file, dependency-light artifact you can own end-to-end.
That’s why it can be better than sqlite-vec/Qdrant in this specific case: not for raw ANN quality, but for operational predictability and composability of ingestion, retrieval, and memory lifecycle in one library.
If you don’t care about that control and are fine with a managed server/extension model, the built-ins are usually the simpler and smarter choice.
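A minimal sketch of that ingest model, assuming a JSONL WAL and a JSON snapshot; file names and record shapes here are illustrative, not any library’s actual format:

```python
import json, os, tempfile

class ChunkStore:
    # Append-only WAL + checkpoint snapshot: recovery replays only
    # the log entries written since the last checkpoint.
    def __init__(self, dirpath):
        self.wal = os.path.join(dirpath, "wal.jsonl")
        self.snap = os.path.join(dirpath, "checkpoint.json")
        self.state = {}
        if os.path.exists(self.snap):          # load last materialized state
            with open(self.snap) as f:
                self.state = json.load(f)
        if os.path.exists(self.wal):           # replay WAL since last checkpoint
            with open(self.wal) as f:
                for line in f:
                    rec = json.loads(line)
                    self.state[rec["id"]] = rec["text"]

    def append(self, chunk_id, text):
        with open(self.wal, "a") as f:         # durable append-only event log
            f.write(json.dumps({"id": chunk_id, "text": text}) + "\n")
            f.flush()
            os.fsync(f.fileno())
        self.state[chunk_id] = text

    def checkpoint(self):
        # e.g. called after each batch/session ingest
        tmp = self.snap + ".tmp"
        with open(tmp, "w") as f:
            json.dump(self.state, f)
        os.replace(tmp, self.snap)             # atomic snapshot swap
        open(self.wal, "w").close()            # truncate the replayed log

d = tempfile.mkdtemp()
s = ChunkStore(d)
s.append("c1", "first chunk")
s.checkpoint()
s.append("c2", "second chunk")      # only this entry is in the WAL now
recovered = ChunkStore(d)           # simulated restart: snapshot + WAL replay
print(sorted(recovered.state))      # → ['c1', 'c2']
```

The checkpoint call is where you get the app-tuned boundaries: you decide when state is materialized and the log truncated, instead of the storage engine deciding for you.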
Claude could access anything on your device, including system or third-party commands for network or signal processing - it may even have their manuals/sites/man pages in the training set. It’s remarkably good at figuring things out, and you can watch the reasoning output. There are MCP tools for reverse engineering that can give it even higher-level abilities (Ghidra is a popular one).
Yesterday I watched it try to work around some filesystem permission restrictions. It tried a lot of things I would never have thought of, and it was eventually successful. I was kinda goading it, though.
We’re missing some building blocks, IMO. We need a good abstraction for defining the invariants in a project’s structure and communicating them to an agent. Even if we had this, if a project doesn’t already apply those patterns consistently, the agent can get confused or misapply them (or maybe it’s mad about “do as I say, not as I do”).
I expend a lot of effort preparing instructions to steer agents this way; it’s annoying, actually. Think DeepWiki-style enumeration of how things work, like C4 diagrams for agents.
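For what it’s worth, a sketch of what such an invariants file might look like (the format and rules are made up - nothing like this is standardized yet):

```
# Project invariants (for agents)
- All HTTP handlers live in src/api/ and return typed Response objects.
- Nothing outside src/store/ writes to the database; go through its interface.
- New components get a C4-style note in docs/architecture/ before code lands.
- Never introduce a dependency without adding it to the manifest in the same change.
```

The hard part isn’t writing this once; it’s that the agent has no way to verify the codebase actually honors it.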