Sounds like a layering issue to me. We have enough problems with semantic differences between filesystems on different platforms - permission models, allowed characters in names etc. What is a strong reason why the multiple streams shouldn't just be implemented on top of a single-stream filesystem file abstraction? I know a number of formats that do such a thing, like ELF, PDF, Sqlite3, various media containers, various archive formats.. Probably some people here can come up with dozens without looking anything up.
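For what it's worth, the layering is almost trivial to sketch. Here's a hypothetical Python container (the index format is made up purely for illustration) that stores named streams inside one ordinary single-stream file's bytes:

```python
import json
import struct

def pack_streams(streams: dict) -> bytes:
    """Pack named streams into a single byte string:
    a 4-byte header length, a JSON directory (name -> [offset, length]),
    then the concatenated stream data."""
    index = {}
    body = bytearray()
    for name, data in streams.items():
        index[name] = (len(body), len(data))
        body += data
    header = json.dumps(index).encode()
    return struct.pack(">I", len(header)) + header + bytes(body)

def read_stream(blob: bytes, name: str) -> bytes:
    """Look up one named stream inside the single-stream blob."""
    hlen, = struct.unpack_from(">I", blob, 0)
    index = json.loads(blob[4:4 + hlen])
    offset, length = index[name]
    start = 4 + hlen + offset
    return blob[start:start + length]

# A "file" with a data fork and a resource fork, living in one byte stream.
blob = pack_streams({"data": b"hello", "resources": b"\x00icon\x00"})
assert read_stream(blob, "resources") == b"\x00icon\x00"
```

The formats you list (ELF, PDF, SQLite, archives) all do some more sophisticated version of exactly this: an index plus payload regions inside one stream.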
If you do these resource streams, how do you copy them to various other media, like physical drives, tapes, pipes, sockets? All of these consist of only a single stream at a low level, because why make it harder than that? That means that instead of "cat", your standard way to read such a multi-stream file would be a command that serializes it to the "sit" format you mentioned, making that format almost the canonical representation. So what was the point of implementing these resource forks in the filesystem again?
Criticizing Unix often comes back to the "reinventing poorly" phrase; the simplicity of not having some features at every layer is a virtue.
Actually Apple had to find solutions to some of the problems you mention when they transitioned to OS X:
* multiple resource forks can be presented as files in a directory, the directory having the name of the multi-stream file and the files in it named after the stream, e.g. "resources"
* for transferring data to other media/computers you can use container formats such as zip or tar, with the unpacker having the ability to unpack them properly back into a multi-stream file
* on modern systems such as Windows you can seamlessly file-browse into container formats such as zip, so this is even less of a problem
* actually, OS X to this day uses something like this for "Apps": they are a special container/directory that you can browse into once you know the magic incantation
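To make the transfer point concrete, here's a small Python sketch (the name "Document.bundle" and the stream names are purely illustrative) of a bundle-style directory round-tripped through a plain zip archive, which is essentially what the second bullet describes:

```python
import os
import tempfile
import zipfile

# Build a bundle-style "multi-stream file": a directory whose entries
# are the individual streams.
root = tempfile.mkdtemp()
bundle = os.path.join(root, "Document.bundle")
os.makedirs(bundle)
for stream, data in {"data": b"main contents", "resources": b"icons etc"}.items():
    with open(os.path.join(bundle, stream), "wb") as f:
        f.write(data)

# Serialize to a single stream for transfer (tape, pipe, socket...).
archive = os.path.join(root, "Document.zip")
with zipfile.ZipFile(archive, "w") as z:
    for name in os.listdir(bundle):
        z.write(os.path.join(bundle, name), arcname="Document.bundle/" + name)

# The unpacker can restore the multi-stream structure losslessly.
with zipfile.ZipFile(archive) as z:
    assert sorted(z.namelist()) == ["Document.bundle/data",
                                    "Document.bundle/resources"]
```

Nothing below the filesystem ever has to know the file had more than one stream; only the packer and unpacker agree on the convention.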
Not only apps, but many other types as well. And the organization technique is nothing magical, they’re just directories with an extension in the name, and usually some standardized structure. They’re officially referred to as “bundles.”
Apps are bundles, various plugins are bundles (audio units, pref panes for System Preferences), and if Finder didn’t treat them specially, you’d never think they were. In a shell, they’re just like any other directory.
The problem with the Unix lowest-common-denominator model is that it pushes complexity out of the stack and into view: complexity that other designs _thought_ about and worked to integrate.
It is very important never to forget the technological context of UNIX: a text-only OS for a tiny, desperately resource-constrained, standalone minicomputer. It was written for a machine that was already obsolete, and it shows.
No graphics. No networking. No sound. Dumb text terminals, hence the obsession with text files being piped to other text files and filtered through things that only handle text files.
Meanwhile, as UNIX evolved, other, bigger OSes for bigger minicomputers were being designed and built to directly integrate things like networking, clustering, notations for accessing other machines over the network, filesystems mounted remotely over the network, file versioning and so on.
People brought up on Unix look at that and see needless complexity, but it isn't.
VMS' complex pathnames are the visible sign of an OS which natively understands that it's one node on a network, that currently-mounted disks can be mounted on more than one network node even if those nodes are running different OS versions on different CPU architectures. It's an OS that understands that a node name is a flexible concept that can apply to one machine, or to a cluster of them, and every command from (the equivalent of) `ping` to (the equivalent of) `ssh` can be addressed to a cluster and the nearest available machine will respond and the other end need never know it's not talking to one particular box.
Fifty years later, Unix still can't do stuff like that. It needs tons of extra work with load balancers and multi-homed network adapters and SANs to simulate what VMS did out of the box in the 1970s in 1 megabyte of RAM.
Unix only looks simple because the implementors didn't do the hard stuff. They ripped it out in order to fit the OS into 32 kB of RAM or something.
The whole point of Unix was to be minimal, small, and simple.
Only it isn't any more, because now we need clustering and network filesystems and virtual machines and all this baroque stuff piled on top.
The result is that an OS which was hand-coded in assembler and was tiny and fast and efficient on non-networked text-only minicomputers now contains tens of millions of lines of unsafe code in unsafe languages and no human actually comprehends how the whole thing works.
Which is why we've built a multi-billion-dollar industry constantly trying to patch all the holes and stop the magic haunted sand leaking out and the whole sandcastle collapsing.
It's not a wonderful inspiring achievement. It's a vast, epic, global-scale waste of human intelligence and effort.
Because we built a planetary network out of the software equivalent of wet sand.
The point is that it was 1885 and the design was able to support buildings 10× as big without fundamental change.
The Chicago Home Insurance building wasn't very impressive, but its design was. Its design scaled.
When I look at classic OSes of the past, like the one in this post, I see miracles of design that did big, complex, hard tasks, were built by tiny teams of a few people, and still work today.
When I look at massive FOSS OSes, mostly, I see ant-hills. They're impressive, but it's so much work to build anything big with sand that the impressive part is that it works at all... and that to build something so big, you need millions of workers and constant maintenance.
If we stopped using sand, and abandoned our current plans, and started over afresh, we could build software skyscrapers instead of ant hills.
But everyone is so focused on keeping our sand software working on our sand-hill OSes that they're too busy to learn something else and start over.
Working on large VAXclusters early in my career totally spoiled me as to what was possible. Even now, I look at what is laughingly called 'clustering' and I sigh.
I can relate to your point. I know both Windows and Linux quite well, and both have their strengths and weaknesses. Just to put it in perspective: you didn't say why we need containers of multiple streams implemented at the filesystem level. Also, these alternative designs that you describe as working so wonderfully are often seen through rose-tinted glasses. You've probably seen a few videos and been impressed by what was possible such a long time ago. But you probably haven't actually used these things, so you haven't experienced their limitations.
I hate many things about Linux, but there is a lot of development work where Linux is much stronger. A lot about the "minimal design" approach is still valid today, and I don't mean Dbus or Docker or Kubernetes or whatever, which I likely would hate if I actually knew them.
In my view, the main problem about (Desktop) Linux is fragmentation and lack of standardization. A strong suit of Windows development is the APIs, at least the older ones that don't get deprecated after a year. There are useful APIs for everything related to a Desktop experience, and you can count on their existence as a developer.
The lack of standardization is what makes it feel like sand. Apart from the simpler stuff (POSIX), there isn't a trustworthy authority that maintains stable APIs for a solid user experience, at least not APIs that I feel like using personally.
> VMS' complex pathnames are the visible sign of an OS which natively understands that it's one node on a network, that currently-mounted disks can be mounted on more than one network node even if those nodes are running different OS versions on different CPU architectures. It's an OS that understands that a node name is a flexible concept that can apply to one machine, or to a cluster of them, and every command from (the equivalent of) `ping` to (the equivalent of) `ssh` can be addressed to a cluster and the nearest available machine will respond and the other end need never know it's not talking to one particular box.
Are you sure you understand how the Unix filesystem (VFS) works? On Unix, a filepath is exactly what you say: a name that can identify a resource. There are distributed filesystem protocols that are of course portable, not dependent on CPU architecture or anything.
I don't get your point about these drive-letter paths; they often create annoying complexity. I believe even NTFS has grown extensions to get rid of them. So, not that I think filepaths are beautifully easy to use on Unix, but they're much better than on Windows in my experience.
I mean, yes, I agree with you, Plan 9 is the true successor to Unix.
But Inferno is the true successor to Plan 9.
And yet, both are obscure and relatively rarely used anywhere, whereas VMS Software Inc. just shipped OpenVMS version 9.2, the first production-ready release of native x86-64 OpenVMS.
VMS has now run on 4 different CPU architectures, migrated 3 times, and it is still out there, still in production, still being used by enough organisations to pay for another port and a new native version in 2022.
1. VAX → Alpha
2. Alpha → IA64 (Itanium)
3. IA64 → X86-64
It's doing well for something "obsolete".
Plan 9, sadly, never made enough headway against Unix even to slow the vast uptake of an original-1970s-style monolithic FOSS version: Linux. I'm typing on it right now.
9front took Plan 9 further: a truckload of new drivers and software. They even have video and audio players, some game ports, system emulators (hardware virtualization!), and so on. Get 9front and try it.
OFC it's not even close to Linux or BSD hardware support, but everything works like magic. No ssh, no enforced VTs, no POSIX (coding in C is pure love here), no crapware.
On Linux, meh. I prefer OpenBSD, my main OS. Meanwhile I'm using Alpine for, well, that ecosystem bound to the penguin, with its Linux-only software. But not for long...
I found the Inferno desktop a lot more navigable and comprehensible than 8½, Rio, Acme etc.
I still wonder if it might be possible to merge Plan 9 (and derivatives) and Inferno. Give the choice of C compiled to a native binary, or Limbo compiled to Dis.
Resource Forks were necessary on early Macintosh so that the OS itself could partially load programs into RAM when you only had 512K or so, loading resources as needed.
You could argue that your resources instead should be multiple files in a folder so we don't have to treat a fork specially, and you'd be right, and you'd also have invented the NeXT/OSX .app bundle.
I don't see why the streams couldn't be implemented _on top_ of single-stream filesystem implementations. The use case you mention is probably solved with virtual memory and ELF today?
Yeah, this design is pre-MMU and pre-virtual-memory, and NeXT/Apple solved it with bundles: store the streams as separate files in a specially named directory (ending in .app, .bundle, .framework, etc.) that the OS presents as a single file.
I... kinda want to see how this could be done with ELF now. It seems totally possible.
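It's basically what ELF's section header table and section name string table (shstrtab) already do. Here's a toy Python imitation of the idea: not real ELF (the field sizes and layout here are made up), just the same offset-table-plus-string-table scheme that lets you address named "sections" inside one flat byte stream:

```python
import struct

def pack_sections(sections: dict) -> bytes:
    """Toy ELF-style layout: [count][headers][shstrtab][section data].
    Each header is (name_offset, data_offset, size), loosely like the
    sh_name/sh_offset/sh_size fields of an ELF section header."""
    shstrtab = bytearray()
    name_offsets = []
    for name in sections:
        name_offsets.append(len(shstrtab))
        shstrtab += name.encode() + b"\x00"   # NUL-terminated, as in ELF
    n = len(sections)
    data_start = 4 + n * 12 + len(shstrtab)
    out = bytearray(struct.pack(">I", n))
    offset = data_start
    for name_off, data in zip(name_offsets, sections.values()):
        out += struct.pack(">III", name_off, offset, len(data))
        offset += len(data)
    out += shstrtab
    for data in sections.values():
        out += data
    return bytes(out)

def find_section(blob: bytes, wanted: str) -> bytes:
    """Walk the header table, resolve names via the string table."""
    n, = struct.unpack_from(">I", blob, 0)
    strtab_start = 4 + n * 12
    for i in range(n):
        name_off, data_off, size = struct.unpack_from(">III", blob, 4 + i * 12)
        end = blob.index(b"\x00", strtab_start + name_off)
        if blob[strtab_start + name_off:end].decode() == wanted:
            return blob[data_off:data_off + size]
    raise KeyError(wanted)

blob = pack_sections({".text": b"code", ".rsrc": b"icons"})
assert find_section(blob, ".rsrc") == b"icons"
```

A real resources-in-ELF scheme would presumably just add custom named sections (the way `.note.*` or debug sections already work) and read them back with the section header table, exactly as above.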