Hacker News | linsomniac's comments

Meshcore and Meshtastic are, depending on your area, pretty good for offline comms. They provide a pretty compelling combination of encryption and no license, and there's a fairly lively set of users in my area.

MeshCore is currently still budding out, unless you are in one of the few places where it is already entrenched. In just the six weeks I've been on it, I've gone from receiving basically zero messages to having to disconnect my phone from my client node last night because it was going crazy.

Much of the communication is like HAM radio: talking about the mesh.


One component I haven't seen discussed here: you also likely can't use them for anything but entirely off-grid setups. In our locality, and I think this is pretty common around the US, the city won't issue a permit, and the utility won't allow grid tie, for anything at all sketchy, including UL-listed panels you buy used.

My wife and I just finished our morning Tiled Words and Bracket City. It's become part of our morning routine. Thanks for it, it's a lot of fun!

That’s awesome, thanks!

Bracket City is great! Definitely one of my favorites


>Because all of my services share the same IP address, my password manager has trouble distinguishing which login to use for each one.

In Bitwarden they allow you to configure the matching algorithm, and switching from the default to "starts with" is what I do when I find that it is matching the wrong entries. So for this case just make sure that the URL for the service includes the port number and switch all items that are matching to "starts with". Though it does pop up a big scary "you probably didn't mean to do this" warning when you switch to "starts with"; would be nice to be able to turn that off.


Just giving them hostnames is easier.

In homelab space you can also make wildcard DNS pretty easily in dnsmasq, assuming you also "own" your router. If not, hosts file works well enough.

There's also the option of using mDNS for the same purpose, but it takes more setup.
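For reference, the dnsmasq wildcard really is a one-liner (the `.lab` suffix and IP address here are made-up examples; any suffix you pick works the same way):

```
# /etc/dnsmasq.conf -- resolve every *.lab name to the homelab box
address=/lab/192.168.1.10
```

A hosts file can't do wildcards, so there you'd have to list each name individually.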


> Just giving them hostnames is easier

Bitwarden annoyingly ignores subdomains by default. Enabling per-subdomain credential matching is a global toggle, which breaks autofill on other online services that let you log in across multiple subdomains.


You can override the matching method on an individual basis though, just using the setting button next to the URL entry field.

Tell me about it... that infinite Ctrl+Shift+L sequence cycling through all credentials from all subdomains. Then your brain betrays you, making you skip the right credential... ugh, now you'll cycle through the entire set again. Annoying.

You can set that globally but override at the individual entry.

Seriously? That sounds incredibly awful - my keepass setup has dozens of domain customizations, there's no way in hell you could apply any rule across the entire internet.

How do I edit the hosts file of an iPhone?

You don't have to if you use mDNS. Or configure the iPhone to use your own self-hosted DNS server which can just be your router/gateway pointed to 9.9.9.9 / 1.1.1.1 / 8.8.8.8 with a few custom entries. You would need to jailbreak your iPhone to edit the hosts file.

I have a real domain name for my house. I have a few publicly available services and those are listed in public DNS. For local services, I add them to my local DNS server. For ephemeral and low importance stuff (e.g. printers) mDNS works great.

For things like Home Assistant I use the following subdomain structure, so that my password manager does the right thing:

  service.myhouse.tld
  local.service.myhouse.tld

Exactly, you don't. My qualm was with the "hosts file works well enough" claim of the person I responded to.

This is what I do.

"Because all of my services share the same IP address"

DNS. SNI. RLY?


That's a bit weird to read for me as well. DNS and local DNS were the first services I started self-hosting, back in 2005.

On Debian/Ubuntu, hosting a local DNS service is as easy as `apt-get install dnsmasq` and putting a few lines into `/etc/dnsmasq.conf`.
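The "few lines" can be as simple as this (hostnames and addresses are invented for illustration):

```
# /etc/dnsmasq.conf -- static records for the LAN
domain=home.lan
address=/nas.home.lan/192.168.1.20
address=/printer.home.lan/192.168.1.21
# forward everything else to an upstream resolver
server=1.1.1.1
```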


These modern-day homelabbers will do anything to avoid DNS, looks like to them it's some kind of black magic where things will inevitably go wrong and all hell will break loose.

Not to diminish having names for everything but that just shifts the Bitwarden problem to "All of my services share the same base domain."

One cool trick is having (public) subdomains pointing to the tailscale IP.

This is what I do. Works great! And my caddy setup uses the DNS mode to provision TLS certs (using my domain provider's caddy plugin).

For my homelab, I setup a Raspberry Pi running PiHole. PiHole includes the ability to set local DNS records if you use it as your DNS resolver.

Then, I use Tailscale to connect everything together. Tailscale lets you use a custom DNS, which gets pointed to the PiHole. My phone blocks ads even when I'm away from the house, and I can even hit any services or projects without exposing them to the general internet.

Then I set up an NGINX reverse proxy, but honestly that might not be necessary.


Could also use Cloudflare tunnels. That way:

1. Your 1Password gets a different entry each time, for <service>.<yourdomain>.<tld>.

2. You get HTTPS for free.

3. Remote access without Tailscale.

4. Put Cloudflare Access in front of the tunnel, and now you have proper auth via Google or GitHub.


You can also use cloudflare to create a dns record for each local service (pointed to the local IP) and just mark it as not proxied, then use Wireguard or Tailscale on your router to get VPN access to your whole network. If you set up a reverse proxy like nginx proxy manager, you can easily issue a wildcard cert using DNS validation from your NAS using ACME (LetsEncrypt). This is what I do, and I set my phone to use Wireguard with automatic VPN activation when off my home WiFi network. Then you’re not limited by CF Tunnel’s rules like the upload limits or not being able to use Plex.

This is exactly what I do. I have a few operators set up in k8s that handle all of this with just a couple of annotations on the Ingress resource (yeah, I know I need to migrate to Gateway). For services I want to be publicly-facing, I can set up a Cloudflare tunnel using cloudflare-operator.

Yup doing this with Caddy and Nebula, works great!

This is the way

Tunnels go through Cloudflare infrastructure, so they're subject to its limits (100 MB max upload). Streaming Plex over a tunnel is against their ToS.

Pangolin is a good solution to this because you can optionally self-host it which means you aren't limited by Cloudflare's TOS / limits.

Also achievable with Tailscale. All my internal services are on machines with Tailscale. I have an external VPS with Tailscale & Caddy. Caddy is functioning as a reverse proxy to the Tailscale hosts.

No open ports on my internal network, Tailscale handles routing the traffic as needed. Confirmed that traffic is going direct between hosts, no middleman needed.


Another vote for Pangolin! Been using it for a month or so to replace my Cloudflare tunnels and it's been perfect.

Yeesh, the last thing I want is remote access to my homelab.

I wonder why each service doesn’t have a different subdomain.

That's what I do, but you still have to change the default Bitwarden behavior to match on host rather than base domain.

Matching on base domain as the default was surprising to me when I started using Bitwarden... treating subdomains as the same seems dangerous.


It's probably a convenience feature. Tons of sites out there that start on www then bounce you to secure2.bank.com then to auth. and now you're on www2.bank.com and for some inexplicable reason need to type your login again.

Actually it's mostly financial institutions that I've seen this happen with. Have to wonder if they all share the same web auth library that runs on the Z mainframe, or there's some arcane page of the SOC2 guide that mandates a minimum of 3 redirects to confuse the man in the middle.


This is the way. You can even do it with mDNS.

Set up AdGuard Home for both blocking ads and internal/split DNS, plus Caddy or another reverse proxy, and buy (or recycle/reuse) a domain name so you can get SSL certificates through Let's Encrypt.

You don't need any real/public DNS records on that domain; just own the domain so Let's Encrypt can verify it and issue certificate(s).

You set up local DNS rewrites in AdGuard and point all the services/subdomains to your home server's IP; Caddy (or similar) on that server routes each to the correct port/container.

With Tailscale or similar, you can also configure all Tailscale clients to use your AdGuard instance as DNS, so this works even outside your home.

That's how I have e.g. https://portainer.myhome.top https://jellyfin.myhome.top ...etc...
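As an illustration of the Caddy side, a sketch (not my exact config): the `dns cloudflare` directive needs a Caddy build with the Cloudflare DNS plugin, and the upstream ports below are invented.

```
# Caddyfile -- one wildcard cert for *.myhome.top via DNS-01 validation
*.myhome.top {
    tls {
        dns cloudflare {env.CF_API_TOKEN}
    }

    @portainer host portainer.myhome.top
    handle @portainer {
        reverse_proxy 127.0.0.1:9000
    }

    @jellyfin host jellyfin.myhome.top
    handle @jellyfin {
        reverse_proxy 127.0.0.1:8096
    }
}
```

With DNS validation, none of the names need to resolve publicly; Let's Encrypt only checks a TXT record on the domain.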


This has always annoyed me with 1Password. Before, I just always added subdomains, but now I'm usually hosting everything behind Tailscale, which makes this problem even worse since the only differentiator is the port.

You can use tailscale services to do this now:

https://tailscale.com/docs/features/tailscale-services

Then you can access stuff on your tailnet by going to http://service instead of http://ip:port

It works well! Only thing missing now is TLS


This would be perfect with TLS. The docs don't make this clear...

> tailscale serve --service=svc:web-server --https=443 127.0.0.1:8080

> http://web-server.<tailnet-name>.ts.net:443/

> |-- proxy http://127.0.0.1:8080

> When you use the tailscale serve command with the HTTPS protocol, Tailscale automatically provisions a TLS certificate for your unique tailnet DNS name.

So is the certificate not valid? The 'Limitations' section doesn't mention anything about TLS either:

https://tailscale.com/docs/features/tailscale-services#limit...


I think maybe TLS would work if you were to go to https://service.yourts.net domain, but I've not tried that.

It works, I’m using tailscale services with https

Thanks for clarifying :) I'll try it out this weekend.

In the 1Password entry, go to the "website" item. To the right there's an "autofill behavior" button. Change it to "Only fill on this exact host" and it will no longer show up unless the full host matches exactly.

Is this a per-item behaviour or can this be set as a global default?

I'm guessing this is 1Password 8 only, as I can't see this option in 1Password 7.


I've looked in the settings on 1p8, and didn't find a setting for a global default.

Not entirely true. It can't seem to distinguish between ports.

because ports don't indicate a different host.

Omg thank you, I had no idea they added this feature!

Pangolin handles this nicely. You can define alias addresses for internal resources and keep them fully private and off the public internet. It's also based on WireGuard, like Tailscale.

You can still have subdomains with Tailscale. Point them at the tailscale IP address and run a reverse proxy in front of your services

Good point, but for simplicity I'd still like 1Password to use the full hostname + port as the primary key, not just the hostname.

tailscale serve --bg 4000

Problem solved ;)


or just use the same password for everything. ;)

If it's something like 12 characters, non-dictionary, and a password you use only in your homelab, that seems perfectly fine.

If you expose something by mistake, you should still be fine.

The big problem with password reuse is using the same one across very different systems run by different operators, whom you can't trust not to keep your password in plaintext or to get hacked.


Ah nice! Didn’t know that. I’ll try that out next time.

Not really a solution (as others have pointed out already), but it also tells me you are missing a central identity provider (think Microsoft account login). You can try deploying Kanidm for a really simple and lightweight one :)

I'm a very long time user of vi/vim, and I've gotten tired of maintaining vim configs. I've gotta have my LSPs and treesitters. I decided I wanted to move away from self maintenance and use something opinionated.

But I found Helix a little too opinionated. In particular, when you exit and go back into a file, it won't take you back to where you were. I decided I'd start using Helix on my "work journal" file, which is over 20K lines; I edit somewhere toward, but not at, the end (done items are above the cursor; to-dos and notes are below). Also, I NEED hard line wrapping for that.

Helix doesn't seem interested in incorporating either of those, which were "must haves" for me.

So I set the LLMs on it and they were able to make those changes, which was pretty cool. But I ended up deciding that I really didn't want to maintain my own helix fork for this, not really a plus over maintaining my vim config.


>What you bring to the table might be fine, but how long do you think you'll find employers willing to still pay for this?

I'm assuming that the software factory of the future is going to need Millwrights https://en.wikipedia.org/wiki/Millwright

But builders are builders. These tools turn ideas into things: a builder's dream.


You're missing something.

I've been in ops for 30 years, Claude Code has changed how I work. Ops-related scripting seems to be a real sweet spot for the LLMs, especially as they tend to be smaller tools working together. It can convert a few sentences into working code in 15-30 minutes while you do something else. I've given it access to my apache logs Elastic cluster, and it does a great job at analyzing them ("We suspect this user has been compromised, can you find evidence of that?"). It's quite startling, actually, what it's able to do.


Yeah, it's useful for scripting, but it's still only marginally faster. It certainly hasn't been the "groundbreaking productivity" it's being sold as.

The problem with analyzing logs is determinism. If I ask Claude to look for evidence of compromise, I can't trust the output without also going and verifying myself. It's now an extra step, for what? I still have to go into Elastic and run the actual queries to verify what Claude said. A saved Kibana search is faster, and more importantly, deterministic. I'm not going to leave something like finding evidence of compromise up to an LLM that can, and does, hallucinate especially when you fill the context up with a ton of logs.

An auditor isn't going to buy "But Claude said everything was fine."

Is AI actually finding things your SIEM rules were missing? Because otherwise I just don't see the value in having a natural-language interface for queries I already know how to run; it's less intuitive for me and non-deterministic.

It's certainly a useful tool; there's no arguing that. I wouldn't want to go back to working without it. But I don't buy that it's already this huge labor-market-transforming force that's magically 100x'd everyone's productivity. That part is 100% pure hype, not reality.


The tolerance for indeterminacy is I think a generational marker; people ~20 years younger than me just kind of think of all software as indeterminate to begin with (because it's always been ridiculously complicated and event-driven for them), and it makes talking about this difficult.

I shudder to think of how many layers of dependency we will one day sit upon. But when you think about it, aren’t biological systems kind of like this too? Fallible, indeterminable, massive, labyrinthine, and capable of immensely complex and awe inspiring things at the same time…

People younger than me are not even adults. I grew up during the dial up era and then the transition to broadband. I don't think software is indeterminate.

>still only marginally faster.

Is it? A couple days ago I had it build tooling for a one-off task I need to run. It wrote ~800 lines of Python to accomplish this, in <30m. I found it was too slow, so I got it to convert it to run multiple tasks in parallel in another prompt. It would have taken a couple days for me to build by hand, given the number of interruptions I have in the average day. And this isn't a one-off; it's happening all the time.


Did that need to be 800 lines of Python, though, is the question

NEED to be? No.

But, to be robust you want a signal handler with clean shutdown, a circuit breaker, argument processing (100 lines right there), logging, reporting progress to our dashboard (it's going to run 10-15 days), checking errors and exceptions, retrying on temp fail, documentation... It adds up.

So it could be shorter, but it's not like there is anything superfluous in it.
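Not the parent's actual script, but a minimal sketch (all names invented) of what two of those pieces, clean shutdown on a signal and retry on temporary failure, tend to look like:

```python
import signal
import time

# Flag flipped by the signal handler; the main loop checks it between work items.
shutdown_requested = False

def request_shutdown(signum, frame):
    """Note the request so the main loop can finish the current item and exit."""
    global shutdown_requested
    shutdown_requested = True

signal.signal(signal.SIGTERM, request_shutdown)
signal.signal(signal.SIGINT, request_shutdown)

def retry_on_temp_fail(task, attempts=5, base_delay=1.0):
    """Run task(), retrying transient errors with exponential backoff."""
    for attempt in range(attempts):
        try:
            return task()
        except (OSError, TimeoutError):
            if attempt == attempts - 1:
                raise  # out of attempts: let the failure propagate
            time.sleep(base_delay * 2 ** attempt)
```

Each of these is only a dozen lines, but by the time you add argument parsing, logging, and progress reporting on top, 800 lines arrives quickly.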


You probably want to sign up for SNDS: https://sendersupport.olc.protection.outlook.com/snds/faq

That should allow you to be more proactive about users reporting your messages as spam, either intentionally or unintentionally.

FWIW: We've been sending Microsoft properties e-mails for over a decade, fairly small scale (maybe 5-20K unique recipients at MS properties in a month), and every 2-4 years we have to submit our IP to their "whitelist me" site and then we're golden again.

This time was different, when we submitted our IP to the whitelist site it said "Nothing is blocking your ability to send to us". They did end up responding to our whitelist request a week later asking if we were good or still needed help, which is a first.


A couple weeks ago I was working remote and didn't bring a power adapter, and I realized a couple hours in that my battery was getting kind of low. I clicked on the battery icon and got a list of what was using a lot of power: 1 was an hour long video chat using Google Meet, the other was Claude desktop (which I hadn't used at all that morning).

What in the world is an idle Claude Desktop doing that uses so much power?


They run a resource-heavy VM for the Claude Cowork feature.

Electron?

In my view, these agent teams have really only become mainstream in the last ~3 weeks since Claude Code released them. Before that they were out there but were much more niche, like in Factory or Ralphie Wiggum.

There is a component of this that keeps a lot of the software being built with these tools underground: there are a lot of very vocal people who are quick with downvotes and criticism of things built with AI tooling, criticism that wouldn't have been applied to the same (or even a poorer) result if a human had produced it.

This is largely why I haven't released one of the tools I've built for internal use: an easy status dashboard for operations people.

Things I've done with agent teams: added a first-class ZFS backend to Ganeti; rebuilt our "icebreaker" app that we use internally (largely to add special effects and make it more fun); built a "filesystem swiss army knife" for Ansible; converted a Lambda function that does image manipulation and watermarking from Pillow to pyvips, and also had it build versions in Go, Rust, and Zig for comparison's sake; built tooling for regenerating our cache of watermarked images with new branding; had it connect to a pair of MS SQL test servers and identify why log shipping was broken between them; built an Ansible playbook to deploy a new AWS account; made a simple video poker web app (a demo for the local users group, where someone was asking how to get started with AI); and had it brainstorm and build three versions of a crossword-themed daily puzzle (just to see what it'd come up with; my wife and I are enjoying TiledWords and I wanted to see what AI would produce).

Those are the most memorable things I've used the agent teams to build in the last 3 weeks. Many of those things are internal tools or just toys, as another reply said. Some of those are publicly released or in progress for release. Most of these are in addition to my normal work, rather than as a part of it.


Further, my POV is that coding agents crossed a chasm only last December with the Opus 4.5 release. Only since then have these kinds of agent-team setups actually worked. It's early days for agent orchestration.

can you tell us about this "ansible filesystem swiss army knife"?

I'd be happy to! I find in my playbooks that it is fairly cumbersome to set up files and related resources, because of the module split between copying files, rendering templates, creating directories... There's a lot of boilerplate that has to be repeated.

For 3-4 years I've been toying with this in various forms. The idea is an "fsbuilder" module that makes a task logically group filesystem setup (as opposed to grouping by operation, as the ansible.builtin modules do).

In the main part of the task you set up the defaults (mode, owner/group, etc.), then in your "loop" you list the fs components and any necessary overrides. The simplest example could be:

    - name: Set up app config
      linsomniac.fsbuilder.fsbuilder:
        dest: /etc/myapp.conf
Which defaults to a template with the source of "myapp.conf.j2". But you can also do more complex things like:

    - name: Deploy myapp - comprehensive example with loop
      linsomniac.fsbuilder.fsbuilder:
        owner: root
        group: myapp
        mode: a=rX,u+w
      loop:
        - dest: /etc/myapp/conf.d
          state: directory
        - dest: /etc/myapp/config.ini
          validate: "myapp --check-config %s"
          backup: true
          notify: Restart myapp
        - dest: /etc/myapp/version.txt
          content: "version={{ app_version }}"
        - dest: "/etc/myapp/passwd"
          group: secrets
I am using this extensively in our infrastructure, with ~20 runs a day, so it's fairly well tested.

More information at: https://galaxy.ansible.com/ui/repo/published/linsomniac/fsbu...

