Hacker News | CubsFan1060's comments

A 41-minute-old account. An "I built" post. Everything about it sounds LLM-written. I'd be surprised if there was a real person behind this at all.

This feels like we're still on the march to the dead internet.

What percentage of your interactions do you want (or believe) to be with actual real people, and not just agents talking to other agents?


I’d be okay with it generating the posts and the financial reports and such, but you need some human interaction in there.

Generate the posts with AI so it can free up your time to interact with people replying to the post.

Or write the bigger, longer-form posts yourself, maybe with some AI assistance here and there, then use AI to create smaller posts from your larger ones, still keeping the human interaction with those who reply.


100% agree. Content generation is where agents shine — it's repetitive and time-consuming. But genuine engagement is where trust gets built, and that needs to be real.

My engagement scripts do auto-reply to comments on my own posts, but they're rate-limited and context-aware (max 2 rounds). For anything meaningful — client conversations, community discussions like this one — it's always me.
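A rate-limited, round-capped auto-reply policy like the one described above could be sketched roughly like this. The original scripts aren't shown anywhere, so every name and threshold here is a hypothetical illustration of the "rate-limited, max 2 rounds" idea, not the actual implementation:

```python
import time

MAX_ROUNDS = 2             # per-thread cap on automated back-and-forth, as described
MIN_SECONDS_BETWEEN = 300  # simple global rate limit between outgoing replies

_last_sent = 0.0
_rounds = {}  # thread_id -> number of automated replies already sent

def should_auto_reply(thread_id, now=None):
    """Return True only if under both the per-thread round cap
    and the global rate limit; otherwise leave the thread to a human."""
    global _last_sent
    now = time.time() if now is None else now
    if _rounds.get(thread_id, 0) >= MAX_ROUNDS:
        return False  # round cap hit: hand the thread over to a human
    if now - _last_sent < MIN_SECONDS_BETWEEN:
        return False  # too soon since the last automated reply
    _rounds[thread_id] = _rounds.get(thread_id, 0) + 1
    _last_sent = now
    return True
```

The point of the cap is exactly the trade-off discussed in this thread: automation handles the repetitive first exchange, then deliberately stops so a person has to take over.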


> For anything meaningful — client conversations, community discussions like this one — it's always me.

In a six-minute time period, you posted 10 different comments here, totaling nearly 800 words. I don't believe you are being truthful.


"Fair catch on both points. The batch of replies: I had a list of expected questions and drafted answers beforehand. When I finally had time to respond, I posted them all at once. Not real-time typing — that's why the timing looks suspicious. Should've spaced them out. On MRR: I dodged it. Honest answer — client project revenue is irregular (project-based, not subscription), so I don't track it as MRR. MindThread subscription revenue is early and small, I'm not comfortable putting a number on it publicly yet. What I can say: it covers my infra costs and Claude subscription with room left over. Not life-changing, but real

You may be right about taste, but I think it takes on a different dimension in the future.

"Dear Claude, please make me a clone of <fancy new saas> but make <these changes specific to my tastes>".

For many things, it's probably not going to be "select the one of 100 that fits my taste"; it's probably going to be making your own personal version that fits your taste in the first place. And, probably, never sharing it anywhere.


This has to be a bot account, right? 2 days old.

Yesterday "I don't know about you, but I benefit so much from using Claude at work that I would gladly pay $1,500-$2,000 per month to keep using it."



Agreed, those comments are all over the map, and 22 comments in 2 days!


Bots don't write like me


The interesting part is that both of those things require some amount of time to get started.

If I launch a new product and competitors pop up 4 hours later, there's not enough time for network effects or lock-in to develop.

I'm guessing what's really going to be needed is something that can't simply be copied: non-public data, business contracts, something outside of software.


I can't tell if this is genius or terrifying given what their software does. Probably a bit of both.

I wonder what the security teams at companies that use StrongDM will think about this.


I doubt this would be allowed in regulated industries like healthcare


It is: https://azure.status.microsoft/en-us/status

"Impact statement: As early as 19:46 UTC on 2 February 2026, we are aware of an ongoing issue causing customers to receive error notifications when performing service management operations - such as create, delete, update, scaling, start, stop - for Virtual Machines (VMs) across multiple regions. These issues are also causing impact to services with dependencies on these service management operations - including Azure Arc Enabled Servers, Azure Batch, Azure DevOps, Azure Load Testing, and GitHub. For details on the latter, please see https://www.githubstatus.com."


I've been keeping my eye on this one; it's very interesting.

Feel free to ignore this, but what's your long-term plan here? I see you have Enterprise plans (especially ones that allow different licenses). From what I can tell you're the only contributor, but I assume that if you accepted contributions there'd be a CLA?


Thank you. I haven't accepted any contributions so far, primarily for this reason, but things might change in the future. As mentioned in the README and docs, Octelium is designed specifically for self-hosting, so the commercial side of the project is simply confined to commercial AGPLv3-alternative licensing, support, and other very enterprise-y/customized features such as SCIM, SIEM integrations with specific providers, etc.


Please don’t put SCIM behind enterprise licenses; it makes IT administrators’ lives hell.


Do you foresee this changing anytime soon? I would love to contribute, but I also think community adoption and contribution would go a long way toward businesses being less worried about single points of failure.

It’s a hard balance to strike, for sure. And it’s getting weirder by the day with agents.


I might be missing something, but if you’ve never given them out to anyone at all, then what’s the point?


I'm not familiar with this at all. But at first blush, it seems like the Readme is far more interested in being angry with Anthropic than actually telling me what this is or why I care.

I see "Multi Agent Orchestration", but, scrolling through this I still have no idea what I'm looking at.


The readme (and probably most of the project) is likely generated by an LLM - chances are we'll learn more reading the prompts than the readme.

I actually tried this a few days back, before the Claude Code EULA enforcement, and went through the same thing.

1. I honestly had a hard time parsing, from the readme, what this is supposed to do or provide over a standard opencode setup. It is rather long-winded and makes a lot of bombastic claims, but doesn't really explain what it does.

2. Regardless, the claims are pretty enticing. Because I was in experiment mode and already had a VM running to try out some other stuff, I gave it a try.

3. From what I can tell, it's basically a set of configs and plugins to make opencode behave a certain way. Kinda like what lazyvim/astronvim are to neovim.

4. But for all its claims, it had a lot of issues. The setups are rather brittle and hard to get working out of the box (and this is from someone who is pretty comfortable tinkering with vim configs). When I managed to get it working (at least I think it was working), it was kinda meh? It uses up way more tokens than the default opencode, for worse (or at least less consistent) results.

5. FWIW, I don't find the multi/sub-agent workflow to be all that useful for most tasks; at the very least it's still very early IMO, kinda like the function-calling phase of ChatGPT before it was really useful.

6. I was actually able to grok most of Steve Yegge's Gas Town post from the other day. He made up a lot of terms that I think made things even more confusing, but I was able to recognize many of the concepts as things I had also thought of in an "it would be cool if we could do X/Y/Z" manner. Not so with this project.

TBH, at this point I'm not sure if I'm using it wrong, or missing something, or if this is just how people market their projects in the age of LLMs.

edit: what I tried the other day was code-yeongyu/oh-my-opencode, not this (fork?) project


Re point 5, the simplest argument in favor of sub-agent workflows is that it allows the main agent's context to remain free of a large amount of task-specific working context. This lets the main context survive longer before you need compaction. Compaction in CC is a major loss of context, IME; it's generally the point where I reset the conversation, since the compacted conversation is practically as bad as a new one but already has a bunch of wasted space.
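The argument above can be shown with a toy model (not Claude Code's actual implementation; all names here are made up): the orchestrator hands a bulky task to a sub-agent that accumulates working context privately, and only a short summary flows back into the main history.

```python
def run_subagent(task):
    """Toy sub-agent: does the bulky work in its own throwaway context
    and hands back only a short summary."""
    sub_context = [f"task: {task}"]
    # ...imagine dozens of tool calls and file reads appended here...
    sub_context += [f"working step {i}" for i in range(50)]
    return f"summary of {task!r} ({len(sub_context)} working messages discarded)"

# The orchestrator's history grows by one summary line, not by the
# sub-agent's 51 working messages, so it survives longer before compaction.
main_context = ["user: refactor the auth module"]
main_context.append(run_subagent("inventory every call site of login()"))
```

The trade-off, per point 5 of the parent comment, is token cost: the sub-agent's discarded context was still paid for.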


How I wish we could just see and patch up the raw context before it goes out. If I could hand-edit a compaction, it would result in better execution going forward and a better mental model for me. It’s such a small feature, but Anthropic would never give it to us.
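Outside of Claude Code, if you drive the model API loop yourself, that wish is only a few lines. This is a hypothetical sketch of the idea (not an Anthropic feature): dump the outgoing message list to a file, open your editor on it, and send whatever you saved.

```python
import json
import os
import subprocess
import tempfile

def edit_context_before_send(messages):
    """Dump the outgoing message list to a temp JSON file, open $EDITOR
    on it, then read the (possibly hand-edited) context back."""
    with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
        json.dump(messages, f, indent=2)
        path = f.name
    # Blocks until the editor exits; EDITOR=true makes this a no-op.
    subprocess.run([os.environ.get("EDITOR", "vi"), path], check=False)
    with open(path) as f:
        edited = json.load(f)
    os.unlink(path)
    return edited
```

Inserting this just before each API call, or right after a compaction step, would give exactly the hand-editing pass described above.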


Thanks for pointing me to the Gas Town blog post. That was...a lot. I'm going to need a lot of time to digest everything that was in there.

