Thanks for this very important point. It often gets lost in the discussion.
The big idea with Linux/BSD/fully-open-source is that you can fix whatever you don't like.
That was the breaking point for me with Tahoe. I never loved MacOS before that, but it never got in the way. Then with Tahoe, it got in the way, so I went to fix it, and found out that fixing it is actually impossible! That was the breakup moment.
Sophisticated LLMs make it even easier to fix or tweak any Linux/BSD/fully-open-source software to our liking.
> The big idea with Linux/BSD/fully-open-source is that you can fix whatever you don't like.
That's a great theory, and sometimes it's actually true, but in reality, for most users most of the time, Linux is as "fixable" as Windows or macOS, because most people, even the technically savvy ones, aren't driver developers. Heck, most software developers probably aren't even C programmers anymore. And even if someone had the competency in the language and in low-level systems programming, do they have the time and the inclination to rewrite the audio stack so that it finally works correctly? Or to fix the fact that even in 2026, sleep and hibernate are hit and miss? And then to maintain their patch against future system updates, or go through the process of getting it upstreamed?
Most Linux users, and especially most Linux users switching from something like macOS or Windows, would be waiting and hoping that someone else decided to fix the thing for them, because they lack the skills, time, or inclination to do it themselves. And we know this is true because, if it weren't, all the various "wars" over the years, like systemd and PulseAudio and Wayland, wouldn't have been wars at all: everyone who didn't like the change would have easily patched it out and moved on. But a modern full-fledged OS experience is a mess of intertwined and complex dependencies. So when a distro decides to switch out a big chunk of the underlying stack like that, most people either have to go along with it, or hope that enough people feel strongly enough about it to fork everything and make their own distro, and then they have to hope the forkers have the passion and drive to maintain that fork for them.
Yes, you "can" fix whatever you don't like in Linux. Just like you "can" find all the information you need to diagnose and treat whatever medical condition you might have online and at your local libraries. But most people are still going to pay a doctor, because most people don't have the time or skills to actually do it.
> but in reality for most users most of the time, Linux is as "fixable" as Windows or macOS,
I disagree with this. For most users, most of the time, Linux is significantly more fixable than Windows or MacOS.
In nearly 20 years, I've never had to write a line of C or touch the Linux kernel to fix issues I've had on Linux.
For example, one of the big peeves I've had lately on both PopOS and MacOS is the looooong animation to switch desktops.
On PopOS, I had two paths to fix this: tweak the COSMIC desktop to fix the behavior, or simply install GNOME (or KDE or any other DE of choice).
On MacOS, I'm SOL. There's no way to fix that on my Macbook (short of installing Asahi Linux, of course).
> Just like you "can" find all the information you need to diagnose and treat whatever medical condition you might have online and at your local libraries. But most people are still going to pay a doctor, because most people don't have the time or skills to actually do it.
This isn't a great analogy, but it's worth noting: Many conditions are expected to be self-diagnosed and self-treated. I don't go to the doctor for scrapes, bruises, colds, dry eyes, a stubbed toe, etc. By this analogy, Linux users are buying their own aspirin and applying their own band-aids, while MacOS users are waiting in line, dependent on someone else to fix these things.
I say this as someone who uses both MacOS and Linux daily.
> On PopOS, I had two paths to fix this: tweak the COSMIC desktop to fix the behavior, or simply install GNOME (or KDE or any other DE of choice).
So what did you do? Did you fix the DE? Again, this is effectively outside the skill set of the sort of people who would be "switching" to Linux over the issues with macOS or Windows.
And while installing a new DE is certainly easier than re-programming one, it's still dependent on someone else having written a DE that not only solves your problem, but doesn't introduce entirely new ones and isn't so fundamentally different to the user that they might as well have switched OSes in the first place. And if the user's primary issue was being forced into a major interface re-design like liquid glass, having to switch to a completely new DE is more of a lateral move than actually fixing the problem.
And to be clear, the fact that it's POSSIBLE for someone to fix a problem for you even if you can't, and it doesn't have to be the primary OS vendor is a benefit of using an open source OS. So I'm not saying it's not possible to benefit from this. I'm just saying that for most users, most of the time, the ability to "fix it themselves" is effectively as out of reach for them as it is using macOS or Windows because having access to the source code is only the tiniest part of actually fixing a problem for themselves.
Since my doctor analogy fell flat, let me try again with a traditional car analogy. A kit car is infinitely more open, customizable, and user-controllable than any car bought from an auto manufacturer. And yet, for the vast majority of drivers, buying a kit car, even if it were turnkey and pre-built, would do absolutely nothing to make it more likely that they will do their own repairs or modifications to the car. They will continue taking it to the same mechanics they always took their traditional cars to, and they will continue to buy off-the-shelf parts if possible and do without if not.
Nope, I swapped to GNOME. Forking the DE was something I considered doing just to contribute back. It's not something I'd recommend someone do. (That said, it's Rust and not C, so the barrier to entry is much lower.)
If someone can install Linux, they can install a new DE. It's easy peasy.
> if the user's primary issue was being forced into a major interface re-design like liquid glass, having to switch to a completely new DE is more of a lateral move than actually fixing the problem.
No, switching DEs fixes the problem. If MacOS were open source, then you'd have a community-run fork from before Liquid Glass. (If MacOS were open source, you'd also probably have an LTS branch anyways, and no dark patterns forcing you to update.)
Ubuntu users dismayed by Unity were able to stay on GNOME by installing GNOME. Ubuntu users dismayed when Unity went away were able to stay on Unity because someone forked it. GNOME users dismayed by GNOME 3 are able to stay on forks of GNOME 2.
And it's worth stressing that _none_ of these were so bad as Liquid Glass.
> for most users, most of the time, the ability to "fix it themselves" is effectively as out of reach for them
This is the thing I take issue with. It seems hard to square with the experience of someone using Linux. Is this an assertion you're making as someone who doesn't use it?
I think the most common experience on Linux is that people are able to fix the things that annoy them. It's a tangible and normal thing, not a hypothetical.
> No, switching DEs fixes the problem. If MacOS were open source, then you'd have a community-run fork from before Liquid Glass. (If MacOS were open source, you'd also probably have an LTS branch anyways, and no dark patterns forcing you to update.)

> Ubuntu users dismayed by Unity were able to stay on GNOME by installing GNOME. Ubuntu users dismayed when Unity went away were able to stay on Unity because someone forked it. GNOME users dismayed by GNOME 3 are able to stay on forks of GNOME 2.
And again, all of these solutions involve the user being dependent on someone else doing the work they want for them, and are very much not "fixing it themselves", any more than installing Asahi Linux on their MacBook would be "fixing it themselves".
> Is this an assertion you're making as someone who doesn't use it?
No, it's an assertion I'm making knowing that the vast majority of computer users barely understand what their computer is doing at any given time, or why. And of the subset of users that do have an understanding, an even smaller subset have the necessary skills, time, and inclination to fix something wrong with the system. I worked computer retail for years. The vast majority of people I interacted with had no interest in knowing what their computer was doing under the hood or how they could solve their own problems. For every one customer I had the chance to show how to do something for themselves, I had 10-15 other customers tell me they didn't want to know; they just wanted it fixed.
I have plenty of experience using Linux. I spent 7 years at a job where I was thankfully allowed to use a Linux box as my primary development machine. My home network runs stacks of Debian boxes, my 3D printers run Klipper, and my home media systems run Ubuntu or Debian. I built an arcade system that runs off of a Debian box. I've built remote scanning and printing workstations out of some Raspberry Pis for a company I worked for, and built custom touch-screen inventory workstations, prototyping them out on "Puppy Linux" installations (some weirdness around needing to work forward from a very old X11 config that didn't work with modern Ubuntu at the time). I've been installing and using Linux in some form or another since I first spent 3 days, twice in a row, downloading the set of 600MB install CDs for "mkLinux" over a 33.6 dialup connection (twice because the first time I pulled the files down in "text mode", which broke the images).
But it's also these experiences that inform my opinion that Linux presents plenty of its own pain points, and that plenty of those pain points are simply unfixable by the vast majority of its users. Every other year or so, some update to Ubuntu would inevitably break multi-display handling or the network or something else on my dev machine at work. I would easily lose a day or two to hunting down esoteric configuration options and workarounds and digging into things that most computer users will never want to touch.

My arcade system worked fine for months until an update to something in the Debian/Ubuntu audio stack broke audio on boot. It's been over a year now and it's still broken. You have to manually go into alsamixer, swap which audio "card" the system thinks it's talking to (the onboard audio presents as two different cards, one for the normal audio jacks and one for HDMI out), and then toggle the muting on the various outputs until you find the one that was enumerated as your current output on this boot.

As near as I can figure out, it has something to do with a change in the order that the audio system is brought up on boot. It's now loaded much earlier in the boot process, and apparently this particular chip and board combination doesn't initialize the second card until some later step in the boot process pokes it. So when the audio system first comes up, it only sees the one card, can't apply the saved configuration, and drops into a default. I've built some workaround scripts that try to re-apply the audio settings again later in the process, but so far they're only about 60% effective. In the meantime, it's just broken for me and for plenty of other people with the same AMD onboard audio setup. And I'm someone comfortable digging into debugging hardware boot-up issues and the rat's nest that is the Linux audio stack.
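(For the curious, those workaround scripts are roughly this shape. This is an illustrative sketch rather than the real thing: the card-detection check and the restore command are stand-ins for my specific hardware.)

```python
import subprocess
import time

def retry_until(check, action, attempts=10, delay=2.0):
    """Poll `check` until it returns True, then run `action`.

    Returns True if the action ran, False if we gave up."""
    for _ in range(attempts):
        if check():
            action()
            return True
        time.sleep(delay)
    return False

def hdmi_card_present():
    # Stand-in: the real script checks `aplay -l` for the second
    # (HDMI) card that enumerates late on this particular board.
    out = subprocess.run(["aplay", "-l"], capture_output=True, text=True).stdout
    return "HDMI" in out

def restore_alsa_state():
    # Re-apply the saved mixer state once the card finally exists.
    subprocess.run(["alsactl", "restore"], check=False)

# Run late in boot, e.g. from a systemd oneshot unit:
# retry_until(hdmi_card_present, restore_alsa_state)
```

The fragile part, and probably why it's only about 60% effective, is that "the card has enumerated" and "the card is ready to accept the saved state" aren't necessarily the same moment.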
But this same box also saw me need to switch from XFCE to KDE, because of some bug with the "notifications" system in XFCE that hard-hangs all user input for 5 minutes or so if a notification pops up before the first time a user logs into the DE. That mattered because the arcade doesn't have a mouse plugged in, but you can hit a hotkey combo to switch to a keyboard mouse-control scheme, and I wanted a notification to display when you switched control schemes.
I have a Raspberry Pi running Home Assistant that refused to boot if one of the Z-Wave radio devices was plugged into USB on boot. No idea why, and it's been working fine ever since I switched to a different Z-Wave radio, but it was certainly a pain if the power ever flickered.
And let's not get into the nightmare that was getting each individual Linux system to play nicely with DHCPv6. Apparently every Linux distro does IPv6 DHCP just a little differently, and even across versions of the same distro it can vary wildly.
Are all of these things fixable by AN end user? Yes, probably they are. Are all of them fixable by me? Probably, with enough free time and a little luck. Are they fixable by most people who use a computer day to day, and especially the sort of people who aren't already interested in Linux? No, almost certainly not. Those users would rely on people like me (or, more likely, the people I'm relying on) to figure it out and drop a solution upstream, or to provide some package you can install to replace the broken component. And again, I'm not denying that this possibility is a benefit. It's just not the same as "fixing it yourself".
Does it matter? Generally, Linux desktop distributions are made for the people who use them, who tend to be the people who will fix things. You mention distros, but there are obviously a lot of passionate distro makers, because right now it seems like there are more distros than ever.
There are often comments on threads like this that go along the lines of "If only the people making the Linux desktop did X, then they'd get more people". But there isn't really anyone making Linux on the desktop. It's not a product. Even the products within it are built on the work of people with very disparate interests. It's kind of amazing that we get a cobbled-together working experience at all.
Apple and Microsoft can focus on particular things, like getting more users, or supporting hardware they want to sell, or trying to get you to sign up to Office 365. No Linux desktop environment can have that kind of focus. So when you say it's not fixable to most users I think: well it's not supposed to be. It's not supposed to be anything, it just kind of is. Like coming across a mountain instead of a theme park - it's not a curated experience, it's not going to be for everyone, you might get hurt, but it's far far more beautiful.
It does matter if you're selling someone on the idea of switching away from the Mac or Windows machine they're complaining about by highlighting that with Linux they could "fix it themselves". It misses the point that most people don't want to "fix it themselves", and even if they had the inclination, for many problems they don't have the time or the skills. If someone is upset that Apple forced a move to Liquid Glass with Tahoe, with all the bad UX that comes along with it, it's possible that they also have the skills to fix their OS when they're equally upset that their chosen Linux distro switches to Wayland. But more likely than not, they don't have those skills, and so for that user, Linux is theoretically an OS they can fix, and practically just as likely to force them to accept the march of technology as any other OS they use.
I personally wouldn't try to sell Linux to anyone and get them to switch. It is a futile game and I see no real reason for it. People will move if they have reason to (in any direction) and the best one can do is show and tell. I will tell people what I like using if they ask. I'm more likely to tell folks not to switch because I don't want to be technical support for anyone outside my household.
I don't think anyone will switch from MacOS to Linux because of rounded corners. If they're really into theming it would make sense.
Being able to fix things is also a bit of a vague statement. You can fix things in many different ways, and you can fix some things in every OS. Fixing might be writing your own code, or switching a theme, or an application, or a distro, or the whole OS. The level of lockdown then matters. MacOS has the greatest lockdown because you can't just get a new Macbook and fix it by installing something other than MacOS.
Your comments really sound like you don't have experience with Linux. It sounds like you're repeating things you've heard from others.
> it's more likely than not that they don't have those skills
No, they absolutely do.
Even at the most basic level of interacting with the OS, Linux desktops usually offer more options in their Settings applications than you'd get with MacOS.
If something annoys you on Linux, it probably annoyed someone else, and there's probably a toggle or switch for it.
If not, the barrier to fixing it is usually "sudo apt install cool_thing". Higher than "open the settings app", but it doesn't require compiling or coding. It only requires literacy (and, granted, not everyone is literate).
> Linux is ... practically just as likely to force them to accept the march of technology
For starters, let's not characterize Liquid Glass as "the march of technology". It's a symptom of dysfunction within Apple.
Second, no, this is simply wrong. Many Linux distros offer LTS versions. Ubuntu 16.04 was released in 2016, and its extended security maintenance is ending this year, a decade after release. Very importantly, these also don't have dark patterns to trick you into updating like Apple did with Tahoe.
> Your comments really sound like you don't have experience with Linux. It sounds like you're repeating things you've heard from others.
It's really disappointing to me that so many people assume that, just because you're not convinced Linux is the right solution for every computer user, you must not have experience with the system. As I mentioned in my other reply to you, I have plenty of experience with Linux, and those experiences are why I say that Linux is just as "unfixable" to your average computer user as MacOS or Windows is.
> > The big idea with Linux/BSD/fully-open-source is that you can fix whatever you don't like.
> That's a great theory, and sometimes it's actually true, but in reality for most users most of the time, Linux is as "fixable" as Windows or macOS, because most people, even the technically savvy ones aren't driver developers.
But there are a whole lot more people who are happy to pay Claude $200/month now than there used to be. Claude isn’t a driver developer, but it’s taken a bunch of different open projects and modified them for me in such a way that it’s made my life meaningfully better.
Things I couldn’t do for years, that I’d wanted for years, got accomplished in 2 evenings: one to implement and deploy, one to optimise, because the original deployment was a good POC but not good enough to keep running (e.g. CPU or RAM usage had doubled or tripled compared to before the modification).
Sure, you could argue I’m paying a doctor, but there isn’t a doctor for the Apple ecosystem. There’s just “suck it up, sunshine.”
(Written from my iPad, where I continue to suck it up)
While I understand that, I can't help but compare this to Mac hardware rather than software. There was a years-long stretch when it seemed like they'd really seriously lost the plot: the butterfly keyboard, the Touch Bar, the "trashcan" Mac, heat issues across the line. There was a real case to be made for abandoning Macs based on hardware issues alone (and I'm sure some folks did, and hopefully they're happy for it).
Then came Apple Silicon. And at least in my eyes, Apple hardware is the best it's been in a really long time.
There are some definite trainwrecks in the current state of Liquid Glass (especially on the Mac), and there have been other dubious choices and mounting bugs over the last few years. But I've used both Windows 11 and a recent Linux distribution (Fedora, via Asahi Linux, running KDE Plasma), and while I like the latter, it's just not enough to make me give up what I like on the Mac in terms of Mac-only applications and the little life-bettering affordances I've internalized over the years I've been here. Yes, if the trajectory they're on now in software continues, I'll have to re-evaluate that -- but their hardware took a real turn for the better after Jony Ive and some of his deputies left. Alan Dye and some of his deputies left earlier this year, and I'm not going to count the new team out before giving them a chance to prove themselves.
It's a good point. I hated that butterfly keyboard, and the Touch Bar was an utterly useless gimmick for me. And they realised that and rolled it back (and added ports again!).
They do eventually listen to their customers. Let's hope it doesn't take as long for these changes to get rolled back.
I'm kinda stuck with Mac at work. I don't mind it, but I run Linux on all my personal computers and find that is way better.
I wonder how much connection there is between those in charge of hardware and those in charge of software. It would be one thing if the software had a design direction, we all hated it, but it was implemented to its logical conclusion and pure stupid bugs weren't left to linger for years. That would be a matter of difference in taste and vision.
But I wonder if they have the ability to execute... anything, anymore. It's starting to look a little like Windows, which in a totally shameless and burlesque fashion has 3 or 4 design paradigms at the same time, jumbled together in a big stew.
It does feel like the decision making is internal-politics-driven rather than customer-satisfaction-driven, for both Mac and Windows now. Senseless changes that have little in common with other changes.
We've had this for decades with Windows, and internal leaks confirming that it's all to do with turf wars between departmental heads.
As you say, it's an indication that Apple are going down the same road, and are unable to actually execute a vision anymore.
I was 100% Apple: Mac Mini on the desktop, Macbook Air laptop, iPhone, and two iPads.
Then came Tahoe.
I hated it so badly and it wouldn't let me change the things I hated.
I noticed a subtle sneer as I worked, having to use this stupid computer that wouldn't let me adjust it to my liking anymore.
Then I noticed I wasn't working as much as I used to because I just viscerally hated having to work in that Tahoe environment.
At first I did the thing of erasing the entire computer and doing a USB install just to go back to the previous version.
But then like you said: “I don't feel like we own them.” I didn't trust Apple to not keep making it worse.
So I switched. Got a Linux desktop, and a Framework laptop. Sooooo nice!! Snappy-fast Linux just the way I want it.
While I was at it, I got my first Android phone and installed GrapheneOS on a Google Pixel. Sooooo nice! So quiet, doing only what I want.
Even got my first Android tablet to replace the iPad. (OnePlus Pad 3.) It's great too. I'm loving the whole Android ecosystem, when made nerdy like Linux.
So yeah I'm 100% off Apple now and will never go back.
I am currently all-in on the Apple ecosystem and have been for almost 13 years now. But quality of life in Apple land has steadily been getting worse, so I have been considering making the sort of change you describe, but not quite ready to make that leap yet. What would you say was the hardest part of the transition for you?
I’m firmly anti-Tahoe and haven’t updated, and I’ve started to find Apple despicable. But…
Anyone saying they had no trouble migrating away is either lying or delusional.
Just off the top of my head: external accessory issues when Linux wakes from sleep; trackpad quality; battery life; full-disk encryption that is still spotty if you, for example, want to use ZFS; boot-level security that can be a nightmare to set up (although Evil Maids aren’t a concern for me); systemd things that don’t work and don’t report that they’re not working; inconsistent shortcuts for basic things.
Man, I like the ideals behind BSD and Linux, but we gotta stop pretending that basic UX stuff is done and fixed, when we know perfectly well that it’s been broken for a decade (running Linux servers or desktop-on-the-side since 2015).
I don't remember ever having external accessory issues, I don't use a trackpad, battery life is fine (although I will admit Macs seem to do better), and the ZFS one is just odd. It's literally the first time I've heard this, because while it sure would be nice... everyone has been using btrfs or XFS or even ext4 with FDE without any problems for the last... 15 years? Just one FS apparently has some issues?
Maybe your definition of broken is just as subjective as mine (aka I don't remember ever seeing window management as bad as OSX's since fvwm).
ZFS got me into a weird race condition with systemd trying to unlock different filesystems that it couldn’t find because the root system itself wasn’t ready. ZFS-on-root is itself not really recommended, but you work around that.
Might as well give a recommendation then: I've been using hashcards [0] for a few weeks now and have enjoyed its simplicity and the fact that it all stays forever in raw markdown files and versioned git. A simple justfile has also been helpful.
Wonderfully underrated. Robust as anything and SO FAST. It was my sole desktop OS for years, and while I’m dabbling with Debian right now, I miss Void the most. So lean and snappy.
Coming from OpenBSD and FreeBSD, Void Linux feels almost the same. Same rc init scripts and such.
What made you leave Void? I tried Debian, but I just couldn't do it; too dated and too many workarounds for the dated bugs. I tried Testing and Sid as well, but the only taste that I was left with was that these, somewhat obviously, are not meant to be production distributions and, while they get newer stuff, they're simply too buggy for daily use.
In case you were not aware, there is a large overlap between people who work/worked on NetBSD and OpenBSD that also work on Void Linux, which is why Void feels like that. Juan Pardines being an example of one individual.
Thanks! And yeah the other inspiration is that in the last 27 years of making web apps, I've gone from PHP to Ruby to JavaScript back to Ruby, considered switching to Go or Elixir, but my PostgreSQL database was at the center of it all, throughout.
So it makes sense to notice what's constant, what's ever-changing, and organize accordingly.
OOC, how did you land on PostgreSQL 27 years ago? That’s before I was doing this sort of thing, but 7 years later MySQL definitely seemed like the “no one got fired for choosing...” option.
PS been a big fan of your writing over the years and it’s a little intimidating to just respond to one of your posts asking a silly question haha
As someone who was kicking around back then: PostgreSQL was seen as more of a proper database, but MySQL was both faster and came with batteries-included replication. Those two things really motivated its mass adoption in LAMP-style stacks vs PostgreSQL.
Some years down the road, however, this was changing. PostgreSQL began to catch up in terms of raw performance, but MySQL also stumbled in transitioning to a multicore world, while PostgreSQL scaled better because some of the hard architectural work had already been done. Additionally, PostgreSQL gained an included replication option, and all the main PaaS vendors began providing automation around it. So MySQL's previous advantages became less compelling.
And today, hardware is so incredibly capable that just scaling vertically on a single server is totally viable for a ton of apps. For this swath of the market, just running PostgreSQL has become a bit of a no-brainer in the way that MySQL was during the peak of LAMP.
my vague memories of ~2000 were that postgres was acknowledged as the way to go if you wanted to do it right, but mysql would let you get set up quickly and easily so you could get on with your actual application, and postgres had the reputation of being harder to set up and administer
Same memories here. In the late 90s I chose MySQL over Postgres, at the time for its speed and replication. And at least partly because I got to talk with Monty Widenius at an Open Source Conference (or perhaps even a Perl Conference) in the late 90s about replication, and asked how hard it'd be to make replication use SSL - and he sent me a beta MySQL version with that implemented a few days later. So I had a quite serious "feel good" reason for using MySQL. In the subsequent 5-10 years or so I regretted not choosing Postgres, over its stored procedure handling, but we had way too much deeply embedded MySQL tech and skill by then, which made switching always end up on the too-hard list.
I've been making PostgreSQL-centered web apps for 9 years that way, by having PostgreSQL just return JSON. Then the "controller" (Ruby or whatever) parses the Mustache or ERB template with the JSON and returns HTML to the browser.
What I'm doing differently now is having PostgreSQL parse Mustache templates directly!
So now the controller just has to pass in the HTTP params, give it to the PostgreSQL function, and it returns HTML ready to return in the HTTP response.
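To make the original pattern concrete, the controller step is roughly this (a Python sketch rather than the actual Ruby; the template, field names, and query are made up for illustration):

```python
import json
import re

def render(template, data):
    """Tiny Mustache-style substitution: replace {{ key }} with data[key]."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(data.get(m.group(1), "")),
        template,
    )

# Pretend this JSON came back from PostgreSQL, e.g. from something like
#   SELECT json_build_object('name', u.name, 'count', u.count) FROM users u ...
row_json = '{"name": "Alice", "count": 3}'

html = render("<p>{{ name }} has {{ count }} items</p>", json.loads(row_json))
# html == "<p>Alice has 3 items</p>"
```

The new approach just moves that `render` step into a PostgreSQL function, so the controller only forwards the HTTP params and relays the HTML that comes back.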
It's not new; my first job, circa 2007, was working on a Delphi 7 desktop application, and all the "business logic" was stored procedures in an Oracle DB. It was early in my career, but I believe this was fairly popular in the early 00s. I was too young to have an opinion, but for sure others will remember and be able to add more colour to it.
I haven't had the "pleasure" of working with stored procedures, etc., but from conversations the main takeaways seem to be:
1: cooperation, nowadays database instances are cheaper and with Docker we can spin them up, but having them shared doesn't feel like a fun thing when developing (triggers more than stored procedures here)
2: version control, kinda ties to the above but being able to keep track of changes (and related then to code being out of sync, even if that would matter less in an application-less world)
3: debugging in general?
4: debugging "spooky effects at a distance" when triggers, etc. run.
> 2: version control, kinda ties to the above but being able to keep track of changes.
We don't use stored procedures at work, but all other database changes, like tables, triggers, etc., are committed to git and deployed using GitHub Actions. There's no need to run the SQL manually.
Oracle has a nice way to bundle stored procedures in packages, which makes large amounts of stored procedures manageable. So it's still ahead of Postgres, but Postgres is definitely good enough.
Asking from ignorance - are schemas not enough to replicate this most of the way? What are the extra nice to haves that would bring PG on par with Oracle here?
In my first proper job, I worked with an accounting system in 2014/2015 that was a .NET GUI client that directly called SQL Server stored procedures. There was a bundled WMS that did the same thing. IIRC the requests were sent directly to the database and were authenticated with the client user's details.
I was a data analyst and had full access to the database for reporting and data import/export purposes. I had a lot of fun browsing through the stored procedures, which were not locked down or encrypted in any way, and figuring out how it all worked.
I even fixed a bug with a custom module that was causing huge stock valuation errors (I can't believe I even did this now) and also created my own automated order import procedure by monitoring the procedures used by the client's order import screen. Possibly invalidating warranties and support contracts etc. but no problems came of it. They even tried to rehire me a few years later.
There's nothing wrong with it. Stored procedures, Java, Delphi, Ruby, Python, or whatever can be considered as a business logic layer separated from the data storage. Similarly, you consider your Python controller business layer separate from the web UI frontend.
And if you complain that stored procedure language is not so versatile for the business logic, remember that people have been using far worse languages for that, like COBOL, MUMPS, ColdFusion, ...
Despite reading the README and the article, I am still unclear about how these templated values are populated. So presumably we store our HTML on the SQL server along with some templating syntax; then how do we plug in that value, so to speak?
Secondly, what do we do about things like HTML fragments à la HTMX / Datastar hypermedia approach? Do we just hit the DB for 10 lines of HTML to populate the next card in a multi step form?
Thanks for your response. I didn’t explain myself properly.
Suppose I have an HTML template that contains the dynamic value {{ foo }}, and that template is in my SQL DB. How do I populate {{ foo }} whilst querying the template table?
For someone who wants to tread this path, using PostgreSQL stored procedures in a team setting, what would be a good dev workflow? Update them in git and use CI to update the DB, etc.? Are there any tips you can share on that?