Meshcore and Meshtastic are, depending on your area, pretty good for offline comms. They provide a pretty compelling combination of encryption and no license, and there's a fairly lively set of users in my area.
MeshCore is currently budding out, unless you are in one of the few places where it is really entrenched. In just the 6 weeks I've been using it, I've gone from receiving basically 0 messages to having to disconnect my phone from my client node last night because it was going crazy.
Much of the communication is like ham radio: talking about the mesh.
One thing I haven't seen discussed here is that you also likely can't use them for anything but entirely off-grid setups. I've heard that in our locality, and I think this is pretty common around the US, they won't issue you a permit, and the utility won't allow grid tie of anything at all sketchy, including UL-listed panels you buy used.
>Because all of my services share the same IP address, my password manager has trouble distinguishing which login to use for each one.
Bitwarden lets you configure the matching algorithm, and switching from the default to "starts with" is what I do when it matches the wrong entries. So for this case, just make sure the URL for the service includes the port number and switch all items that are mismatching to "starts with". It does pop up a big scary "you probably didn't mean to do this" warning when you switch to "starts with"; it would be nice to be able to turn that off.
Bitwarden annoyingly ignores subdomains by default. Enabling per-subdomain credential matching is a global toggle, which breaks autocomplete on other online services that let you log in across multiple subdomains.
Tell me about it... that infinite Ctrl+Shift+L sequence cycling through all credentials from all subdomains. Then your brain betrays you and you skip the right credential... ugh, now you'll cycle through the entire set again. Annoying.
Seriously? That sounds incredibly awful - my keepass setup has dozens of domain customizations, there's no way in hell you could apply any rule across the entire internet.
You don't have to if you use mDNS. Or configure the iPhone to use your own self-hosted DNS server which can just be your router/gateway pointed to 9.9.9.9 / 1.1.1.1 / 8.8.8.8 with a few custom entries. You would need to jailbreak your iPhone to edit the hosts file.
I have a real domain name for my house. I have a few publicly available services and those are listed in public DNS. For local services, I add them to my local DNS server. For ephemeral and low importance stuff (e.g. printers) mDNS works great.
For things like Home Assistant I use the following subdomain structure, so that my password manager does the right thing:
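Something along these lines, for example (hypothetical names; the general idea is one subdomain per service, so host-based matching in the password manager is unambiguous):

```
ha.home.example.com       -> Home Assistant
grafana.home.example.com  -> Grafana
nas.home.example.com      -> NAS web UI
```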
These modern-day homelabbers will do anything to avoid DNS; it seems like to them it's some kind of black magic where things will inevitably go wrong and all hell will break loose.
For my homelab, I set up a Raspberry Pi running Pi-hole. Pi-hole includes the ability to set local DNS records if you use it as your DNS resolver.
Then I use Tailscale to connect everything together. Tailscale lets you use a custom DNS, which gets pointed at the Pi-hole. My phone blocks ads even when I'm away from the house, and I can hit any services or projects without exposing them to the general internet.
Then I set up an NGINX reverse proxy, but honestly that might not be necessary.
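For reference, Pi-hole's local DNS records are just hosts-format entries (on older Pi-hole versions this lives in /etc/pihole/custom.list; newer versions manage the same records through the web UI). Hostnames and addresses below are made up:

```
# /etc/pihole/custom.list -- plain hosts format, one record per line
192.168.1.20  nas.home.lan
192.168.1.21  grafana.home.lan
192.168.1.22  jellyfin.home.lan
```

After editing the file, restarting the Pi-hole DNS service makes the records resolvable by any client using it as resolver.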
You can also use cloudflare to create a dns record for each local service (pointed to the local IP) and just mark it as not proxied, then use Wireguard or Tailscale on your router to get VPN access to your whole network. If you set up a reverse proxy like nginx proxy manager, you can easily issue a wildcard cert using DNS validation from your NAS using ACME (LetsEncrypt). This is what I do, and I set my phone to use Wireguard with automatic VPN activation when off my home WiFi network. Then you’re not limited by CF Tunnel’s rules like the upload limits or not being able to use Plex.
This is exactly what I do. I have a few operators set up in k8s that handle all of this with just a couple of annotations on the Ingress resource (yeah, I know I need to migrate to Gateway). For services I want to be publicly-facing, I can set up a Cloudflare tunnel using cloudflare-operator.
Also achievable with Tailscale. All my internal services are on machines with Tailscale. I have an external VPS with Tailscale & Caddy. Caddy is functioning as a reverse proxy to the Tailscale hosts.
No open ports on my internal network, Tailscale handles routing the traffic as needed. Confirmed that traffic is going direct between hosts, no middleman needed.
It's probably a convenience feature. There are tons of sites out there that start on www, then bounce you to secure2.bank.com, then to auth.bank.com, and now you're on www2.bank.com and for some inexplicable reason need to type your login again.
Actually it's mostly financial institutions that I've seen this happen with. Have to wonder if they all share the same web auth library that runs on the Z mainframe, or there's some arcane page of the SOC2 guide that mandates a minimum of 3 redirects to confuse the man in the middle.
Set up AdGuard Home for both blocking ads and internal/split DNS, plus Caddy or another reverse proxy, and buy (or recycle/reuse) a domain name so you can get SSL certificates through LetsEncrypt.
You don't need any real/public DNS records on that domain; just own the domain so LetsEncrypt can verify and issue your SSL certificate(s).
You set up local DNS rewrites in AdGuard and point all the services/subdomains to your home server's IP; Caddy (or similar) on that server routes each to the correct port/container.
With Tailscale or similar, you can also configure all Tailscale clients to use your AdGuard as DNS, so this works even outside your home.
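Concretely, the reverse-proxy half of that setup might look like this (hostnames and ports are made up; assumes an AdGuard Home DNS rewrite pointing *.home.example.com at the machine running Caddy, and a Caddy build that includes a DNS-provider module so the ACME DNS challenge can issue certs for hosts that aren't publicly reachable):

```
# Caddyfile sketch: each subdomain terminates TLS and proxies to a local port
ha.home.example.com {
    reverse_proxy localhost:8123
}
grafana.home.example.com {
    reverse_proxy localhost:3000
}
```

With one hostname per service, the password manager problem from upthread also goes away, since every login gets a distinct host to match on.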
This always annoys me with 1Password. Before, I just added subdomains, but now I'm usually hosting everything behind Tailscale, which makes this problem even worse since the only differentiation is the port.
> When you use the tailscale serve command with the HTTPS protocol, Tailscale automatically provisions a TLS certificate for your unique tailnet DNS name.
So is the certificate not valid? The 'Limitations' section doesn't mention anything about TLS either:
In the 1Password entry, go to the "website" item. To the right there's an "autofill behavior" button. Change it to "Only fill on this exact host" and it will no longer show up unless the full host matches exactly.
Pangolin handles this nicely. You can define alias addresses for internal resources and keep them fully private and off the public internet. It's also based on WireGuard, like Tailscale.
If it's something like 12 non-dictionary characters and a password you use only in your homelab, that seems perfectly fine.
If you expose something by mistake, you should still be fine.
The big problem with password reuse is using the same password for very different systems with different operators, whom you can't trust not to keep your password in plaintext or to avoid getting hacked.
Not really a solution (as others have pointed out already), but it also tells me you're missing a central identity provider (think Microsoft account login). You can try deploying Kanidm for a really simple and lightweight one :)
I'm a very long time user of vi/vim, and I've gotten tired of maintaining vim configs. I've gotta have my LSPs and treesitters. I decided I wanted to move away from self maintenance and use something opinionated.
But, I found helix a little too opinionated. In particular, when you exit and go back into the file it won't take you back to where you were. I decided I'd start using helix on my "work journal" file which is over 20K lines and I edit somewhere towards but not at the end (done is above the cursor, to do and notes are below). Also, I NEED hard line wrapping for that.
Helix doesn't seem interested in incorporating either of those, which were "must haves" for me.
So I set the LLMs on it and they were able to make those changes, which was pretty cool. But I ended up deciding that I really didn't want to maintain my own helix fork for this, not really a plus over maintaining my vim config.
I've been in ops for 30 years, Claude Code has changed how I work. Ops-related scripting seems to be a real sweet spot for the LLMs, especially as they tend to be smaller tools working together. It can convert a few sentences into working code in 15-30 minutes while you do something else. I've given it access to my apache logs Elastic cluster, and it does a great job at analyzing them ("We suspect this user has been compromised, can you find evidence of that?"). It's quite startling, actually, what it's able to do.
Yeah, it's useful for scripting, but it's still only marginally faster. It certainly hasn't been the "groundbreaking productivity" it's being sold as.
The problem with analyzing logs is determinism. If I ask Claude to look for evidence of compromise, I can't trust the output without also going and verifying myself. It's now an extra step, for what? I still have to go into Elastic and run the actual queries to verify what Claude said. A saved Kibana search is faster, and more importantly, deterministic. I'm not going to leave something like finding evidence of compromise up to an LLM that can, and does, hallucinate especially when you fill the context up with a ton of logs.
An auditor isn't going to buy "But Claude said everything was fine."
Is AI actually finding things your SIEM rules were missing? Because otherwise, I just don't see the value in having a natural language interface for queries I already know how to run, it's less intuitive for me and non deterministic.
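For comparison, the deterministic version of such a check is just a saved query that returns the same answer every run. A sketch in Elasticsearch query DSL (the index pattern and field names here are assumptions, not anyone's actual schema; adjust to your mapping):

```
GET /apache-logs-*/_search
{
  "query": {
    "bool": {
      "filter": [
        { "term":  { "user.name": "suspect_user" } },
        { "range": { "@timestamp": { "gte": "now-7d" } } }
      ]
    }
  },
  "aggs": {
    "by_source_ip": { "terms": { "field": "source.ip", "size": 20 } }
  },
  "size": 0
}
```

Saved as a Kibana search or alert rule, this runs identically every time, which is exactly the property an auditor wants.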
It's certainly a useful tool, there's no arguing that. I wouldn't want to go back to working without it. But I don't buy that it's already this huge labor-market-transforming force that's magically 100x'ing everyone's productivity. That part is 100% pure hype, not reality.
The tolerance for indeterminacy is I think a generational marker; people ~20 years younger than me just kind of think of all software as indeterminate to begin with (because it's always been ridiculously complicated and event-driven for them), and it makes talking about this difficult.
I shudder to think of how many layers of dependency we will one day sit upon. But when you think about it, aren’t biological systems kind of like this too? Fallible, indeterminable, massive, labyrinthine, and capable of immensely complex and awe inspiring things at the same time…
People younger than me are not even adults. I grew up during the dial up era and then the transition to broadband. I don't think software is indeterminate.
Is it? A couple days ago I had it build tooling for a one-off task I need to run; it wrote ~800 lines of Python to accomplish this in <30m. I found it was too slow, so in another prompt I got it to convert the tool to run multiple tasks in parallel. It would have taken me a couple days to build by hand, given the number of interruptions I have in an average day. This isn't a one-off; it's happening all the time.
But, to be robust you want a signal handler with clean shutdown, a circuit breaker, argument processing (100 lines right there), logging, reporting progress to our dashboard (it's going to run 10-15 days), checking errors and exceptions, retrying on temp fail, documentation... It adds up.
So it could be shorter, but it's not like there is anything superfluous in it.
That should allow you to be more proactive about users reporting your messages as spam, either intentionally or unintentionally.
FWIW: We've been sending Microsoft properties e-mails for over a decade, fairly small scale (maybe 5-20K unique recipients at MS properties in a month), and every 2-4 years we have to submit our IP to their "whitelist me" site and then we're golden again.
This time was different, when we submitted our IP to the whitelist site it said "Nothing is blocking your ability to send to us". They did end up responding to our whitelist request a week later asking if we were good or still needed help, which is a first.
A couple weeks ago I was working remote and didn't bring a power adapter, and I realized a couple hours in that my battery was getting kind of low. I clicked on the battery icon and got a list of what was using a lot of power: 1 was an hour long video chat using Google Meet, the other was Claude desktop (which I hadn't used at all that morning).
What in the world is an idle Claude Desktop doing that uses so much power?
In my view, these agent teams have really only become mainstream in the last ~3 weeks since Claude Code released them. Before that they were out there but were much more niche, like in Factory or Ralphie Wiggum.
There's a component to this that keeps a lot of the software being built with these tools underground: there are a lot of very vocal people who are quick with downvotes and criticism of things built with AI tooling, criticism that wouldn't be applied to the same result (or even a poorer one) if it were generated by a human.
This is largely why I haven't released one of the tools I've built for internal use: an easy status dashboard for operations people.
Things I've done with agent teams:
- Added a first-class ZFS backend to ganeti
- Rebuilt our "icebreaker" app that we use internally (largely to add special effects and make it more fun)
- Built a "filesystem swiss army knife" for Ansible
- Converted a Lambda function that does image manipulation and watermarking from Pillow to pyvips, and had it build versions in go, rust, and zig for comparison's sake
- Built tooling for regenerating our cache of watermarked images using new branding
- Had it connect to a pair of MS SQL test servers and identify why logshipping was broken between them
- Built an Ansible playbook to deploy a new AWS account
- Made a web app that does simple video poker (a demo for the local users group; someone there was asking how to get started with AI)
- Had it brainstorm and build 3 versions of a crossword-themed daily puzzle (just to see what it'd come up with; my wife and I are enjoying TiledWords and I wanted to see what AI would come up with)
Those are the most memorable things I've used the agent teams to build in the last 3 weeks. Many of those things are internal tools or just toys, as another reply said. Some of those are publicly released or in progress for release. Most of these are in addition to my normal work, rather than as a part of it.
Further, my POV is that coding agents only crossed a chasm last December with the Opus 4.5 release. Only since then have these kinds of agent-team setups actually worked. It's early days for agent orchestration.
I'd be happy to! I find in my playbooks that it's fairly cumbersome to set up files and related resources because of the module distinction between copying files, rendering templates, creating directories... There's a lot of boilerplate that has to be repeated.
For 3-4 years I've been toying with this in various forms. The idea is an "fsbuilder" module that makes a task logically group filesystem setup (as opposed to grouping by operation, as the ansible.builtin modules do).
You set up in the main part of the task the defaults (mode, owner/group, etc), then in your "loop" you list the fs components and any necessary overrides for the defaults. The simplest could for example be:
- name: Set up app config
  linsomniac.fsbuilder.fsbuilder:
    dest: /etc/myapp.conf
Which defaults to a template with the source of "myapp.conf.j2". But you can also do more complex things, overriding the defaults per item.
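A hypothetical sketch of what a richer task might look like, based on the description above (these keys are my guesses at the module's interface, not its documented options):

```yaml
- name: Set up app filesystem
  linsomniac.fsbuilder.fsbuilder:
    # defaults applied to every item below unless overridden
    owner: appuser
    group: appgroup
    mode: "0644"
    paths:
      - dest: /etc/myapp.conf            # template, from myapp.conf.j2
      - dest: /etc/myapp.d/              # directory, with a mode override
        mode: "0755"
      - dest: /etc/myapp.d/secrets.conf  # stricter permissions for secrets
        mode: "0600"
```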