Going back as far as I remember, option 2 is exactly what has happened for distributions with long support.
E.g. Debian maintainers have taken a particular release of upstream software, and if upstream didn't provide bugfixes for older releases, Debian maintainers would backport them, and Debian users would enjoy exactly what they want: stable versions with security fixes.
Ubuntu LTS is exactly that as well, with a quicker release cadence (at least when it was introduced; was it 2006 with Dapper Drake? :).
However, business entities (like Canonical) are trying to move away from supporting users to such an extent because it's very costly (thus snaps and reduced "main").
Even though everyone using a dependency now has to pay the cost at unexpected times (which is ultimately more costly than distros doing it), the market was obviously not willing to reward developers like Canonical sufficiently for them to keep at it.
Red Hat is still doing it for their enterprise editions, but desktop users are being left out.
> "It is remarkable, love," he said, looking at Nell for the first time, "how much money you can make shovelling back the tide." [ "Diamond Age" by Neal Stephenson ]
What the customer wants here is impossible, but some customers (larger enterprises in particular) will pay handsomely for you to try. Why is it impossible? Change is the one constant in our universe. Long after there is no consciousness left to perceive it, change will continue.
Red Hat are doing the same thing as Microsoft. You have a support programme, and then a more expensive extended support programme, and then sometimes yet a further even more expensive programme. Charging people for their over-confidence is a successful strategy, because every customer believes they are going to get off the old version in plenty of time, and many of them are wrong, but you don't care which ones.
If Enterprise desktop users want to pay $$$ for longer term support they could make it worth Red Hat's money to do this work, shovelling back the tide. But I'd guess that Linux has far fewer cases of "desktop" systems that are actually servers. I know at work there were at least two "desktop" machines sat in empty offices over the pandemic, switched on with big "DO NOT SWITCH OFF" notices because the Windows "desktop" software running on them was vital to normal operations, even though that's clearly very stupid and the software ought to live on a server somewhere.
> What the customer wants here is impossible, but some customers (larger enterprises in particular) will pay handsomely for you to try. Why is it impossible? Change is the one constant in our universe. Long after there is no consciousness left to perceive it, change will continue.
Companies (and their budgets) like predictability.
Sure, it's unavoidable that you'll have to upgrade or abandon certain software, but it's an altogether different ballgame when you know exactly when that's coming.
If I know I've got guaranteed security fixes in whatever is installed on my system for 5 or 10 years, it means I only have to do the work once every 5 or 10 years.
That's a service being provided, and I'd like to test that hypothesis sometime by starting a company that supports your dependencies for a monthly fee.
The existence of enterprise Linux offerings like RHEL and SUSE confirms the market is there; the biggest question is whether those willing to subscribe to such a model overlap with those who want this type of predictability. You'd basically be paying for distribution-style development focused on the packages you need.
Canonical has a commercial ESM (Extended Security Maintenance) offering which gives 5 more years of security-focused backports for key parts of the distro if you want to pay for that.
Yes, I remember when it was introduced in 2017 when precise (12.04) was going EOL.
However, the uptake was so good that it was soon bundled in the regular Ubuntu support contracts, and I see it's available for free for a limited number of machines today (3 iirc, like Ubuntu Livepatch).
Maybe it is the driver for enterprises to get Ubuntu paid support today, but it doesn't seem like it from the outside.
> the people using them generally want the platonic ideal version of semantic versioning minor releases, namely updates that only fix bugs and improve things and never introduce backward compatibility problems or undesired changes
Oh, so very much this! I would pay kilodollars per year for a real LTS version of firefox that will never ever change its UI or randomly break how plugins work, but take security fixes.
> I would pay kilodollars per year for a real LTS version of firefox that will never ever change its UI or randomly break how plugins work, but take security fixes.
Is this what Firefox ESR is supposed to be? I don't use it, does it change UI and plugin stuff with minor updates?
I guess, ESR misses the “ever” part in the parent comment. Once you update from Firefox 78 ESR to, say, Firefox 91 ESR, you get all the newest UI changes and all.
There is a certain demand for “the boring UI”. Using as much of the native controls as possible in the browser UI, not moving things, not inventing new behaviour, and so on.
It would be basic common sense, but given that I have seen Firefox versions ship frankly unbelievable UI, sheer lunacies (such as psychedelic progress bars in FF for Android), good sense is exactly what's lacking.
Virtualisation is cheap. With Docker on Debian stable (using the official Docker apt repo), you can strike a compromise: a stable (if out-of-date) base system, while still running fast-moving projects that require newer build tools than those available in the distribution.
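A minimal sketch of what that compromise can look like, assuming a hypothetical project that needs a newer Meson than the distro ships (the version pin and image tag here are illustrative assumptions, not a recommendation): the host stays on Debian stable, and only the build environment lives in the container.

```dockerfile
# Hypothetical sketch: Debian stable host, newer toolchain in a container.
FROM debian:bookworm-slim

# pip provides a newer meson than the distro packages, without touching the host;
# --break-system-packages is needed on bookworm to install into the system env.
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential ninja-build python3-pip git \
    && pip3 install --break-system-packages 'meson>=1.3' \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /src
# Build inside the container with the source checkout bind-mounted, e.g.:
#   docker build -t builder .
#   docker run --rm -v "$PWD":/src builder sh -c 'meson setup build && ninja -C build'
```

The host's Debian packages never have to know about Meson at all; the fast-moving toolchain is confined to the image and can be rebuilt or discarded freely.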
Docker to me is a proof that distro package management has failed. People would rather bundle a snapshot of an entire operating system with every application they use, instead of trying to install a compatible set of packages on the host OS.
Alpine Linux containers can be as small as 8 MB. Virtualisation has a cost, and that cost decreases every year. I disagree: many people want an efficient system where the package manager handles it, e.g. Arch Linux.
In my use case, I started running Arch Linux and installing the packages I wanted; however, aarch64 with proprietary Mesa drivers made for a bad time. Too few AUR packages ship aarch64 binaries, so I would end up compiling nearly everything. Eventually I snapped while trying to build my own PKGBUILD for printer drivers (one that worked on my Arch setup and hardware) instead of just using the provided install script, which had Ubuntu aarch64 support.
While a sleek, minimalist system, ticking smoothly, is really nice, getting to that "happy place" on many bits of hardware can be tricky. Constantly getting stuck, with side effects spreading across different programs on my system, led me to take the same approach I take when deploying to production: install the stock OS, get Docker, then run docker compose pointing at your persisted config/data.
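That production-style setup can be sketched as a small Compose file (the service name, image, and paths below are purely illustrative assumptions): all state lives in bind-mounted host directories, so the host itself stays a disposable stock OS.

```yaml
# Hypothetical sketch: the app and its dependencies live in the image;
# config and data persist on the host via bind mounts.
services:
  printerd:                               # illustrative service name
    image: vendor/printer-driver:stable   # assumed image, not a real one
    restart: unless-stopped
    volumes:
      - ./config:/etc/printerd            # persisted config
      - ./data:/var/lib/printerd          # persisted data
```

Rebuilding the machine then reduces to: install the stock OS, install Docker, restore the config/data directories, and run `docker compose up -d`.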
Forcing everybody to use the latest versions is as anti-user as forcing them to use versions from 5 years ago.
What people want is to develop and run their applications with the least amount of friction possible.
That is the purpose of the operating system: make it easier to write and run applications. Anything that gets in the way of that is bad.
What containers and other forms of virtualization provide is the ability to deploy and run applications more cheaply than without them.
The Linux kernel and modern networking are incredibly sophisticated, and the classic Unix design doesn't let people take full advantage of them in a reasonable manner. This is why they switched to containers and virtual machines.
Because, frankly, running one Linux operating system per application actually makes more sense than what people were doing before.
Package management systems, no matter how sophisticated, can't solve these problems.
I'm not saying Docker doesn't work well. But whether you're using a slim OS in Docker or not, you're still choosing not to use your host OS, because your host OS fails to provide an equally stable, reliable and reproducible environment for your applications.
the irony is, if you stick to software of the time, you live in GNOME LTS land and it's pretty frictionless and stable. my experience is the bad times start rolling in as soon as you need to compile. sway, for example: you need a newer version of meson than is on ubuntu LTS, and there's a tight wayland dependency too. it's the shiny new software (often the ones posted on hn...) that ends up needing the most up-to-date build dependencies.
i agree, it's a failing of the pkg management and devs. but also the hardware developers: if you rely on prop blobs, docker is a way to liberate yourself somewhat from this. you don't need to have a well maintained upstream.
It could just be that as a technology community we collectively are still currently getting our heads around what containerization is really essentially about. Maybe as most of us in the technology herd grok it at a deeper level, it will later become more obvious how tools prior to containerization can achieve the qualities we think we can only get from containers.
As a brain collective, it may be that before, we weren't conceptualizing our systems at the atomic level that containers introduced. Containers in service to our isolation mental models may be the true benefit of the trend. Even if the underlying technology dies or falls out of favour, the ideas will surely survive.
For my career I have learned to appreciate GNU/unix more by using containers.
yes well. docker sits atop the lofty shoulders of cgroups. i don't think containerisation is the only way; the coercion with flatpak and snaps is frustrating. it's a means to an end. a means that has mass adoption and is proven, even if it's not ideal.