"It's interesting how every build system, frontend framework, programming language implements its own promise pipeline/delayed execution/observables/event propagation."
This rings so true to me.
I've recently realized how every single non-trivial part of my app is in fact a workflow problem: it could ideally be written as a pipe of asynchronous steps, glued together. This is true for both the frontend and the backend.
I believe that's the point of reactive frameworks, but somehow those frameworks are usually designed around continuous streams of incoming events, which isn't the most widespread case I've noticed. One-shot instantiation of pre-designed workflows would be really ideal.
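To make the "pipe of asynchronous steps" idea concrete, here's a minimal sketch in TypeScript. The step names (`validate`, `persist`) are made up for illustration; the point is that a one-shot workflow is just a composition of async functions, instantiated on demand and run once.

```typescript
// A step is an async function from one value to the next.
type Step<A, B> = (input: A) => Promise<B>;

// Compose two steps into one, left to right.
const pipe = <A, B, C>(f: Step<A, B>, g: Step<B, C>): Step<A, C> =>
  async (input) => g(await f(input));

// Illustrative steps (names are hypothetical).
const validate: Step<string, string> = async (raw) => {
  if (raw.trim() === "") throw new Error("empty input");
  return raw.trim();
};

const persist: Step<string, { id: number; value: string }> = async (value) =>
  ({ id: 1, value }); // stand-in for a database write

// One-shot instantiation: build the pipeline, run it once, done.
const saveInput = pipe(validate, persist);

saveInput("  hello  ").then((record) => {
  console.log(record.value); // "hello"
});
```

Unlike an observable stream, nothing here stays subscribed: the workflow exists only for the duration of one run.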
I think this is true for a subset of software. Although I think a more general truth is that how you organize data is pretty much what defines your software. It's the root of virtually all of your problems and successes in designing software.
It's also why it's so unfortunate that data modelling is often ad hoc, defaulting to some bucket-of-JSON model with no regard for the needs of the application.
I also think that, surprisingly often, given ideal data modelling, synchronous processing is both faster and easier, because you no longer end up with data far away in weird formats.
I’m a big fan of Temporal and XState (XState for UIs in particular) lately, but they both present a major issue in my mind. They provide excellent foundations for very reliable software, but the tools and conventions they offer are hard to learn and to work with.
That is such a huge setback because virtually no one I work with is willing to learn these tools in order to write better software. Even people I think are very intelligent (certainly more so than I am) think they can write these kinds of tools themselves ad hoc, as needed. It simply isn’t true; it’s a bad idea almost all of the time.
If we could find some way to make these tools more intuitive and attractive to depend on, I think it could be literally transformative. I know similar tools are popular in more engineering-centred software, so it isn’t necessarily impossible. On the web and mobile software side at least, getting people to define their application states and flows with any rigour seems to be like asking someone to file their taxes and write an essay about it afterwards.
But I also get it. These tools are a lot to absorb. Sometimes it feels like they’re in the way. Though I’d argue that when they feel like they’re in the way, it’s often because you didn’t anticipate a workflow stage or application state, and its absence from your mental model makes the tool hard to use, because it simply won’t accommodate broken workflow or state models. That’s a good thing, and something we should want from our tools. It’s something many people love about Rust, for example. Then again, many feel that Rust gets in the way as well.
I think this tech is harder to get started with than not using it, and that's the problem you've noticed.
I played with Temporal on my workstation and thought it was really interesting, but it's more things to deploy and maintain in exchange for reliability and robustness.
I had the choice between using Rust or C recently, but because the domain was new to me I chose C to get it done faster. Rust definitely has a learning curve.
I often wonder about some sort of unified distributed system, encompassing frontend & backend into a single whole. One where user input is just something the system can ask and wait for, no matter which frontend or client it comes from.
But I’ve only ever been able to catch glimpses of it. More of a nebulous feeling and intuition than a real understanding of how such a thing would work. Something that feels obviously right, that will make perfect sense once understood, but that I still can’t begin to grasp.
It’s frustrating.
I’m also pretty sure I’m not the first to think of this, and that it either already exists or involves complexity, maintainability, or evolution issues I’m unaware of.
There was a Haskell project that seamlessly transferred state between frontend and backend but I don't think it was a distributed system. I don't remember what it was called.
Writing APIs to glue together data fetching and actions and GUI state is all very siloed. If you could talk about the system as a whole including GUI interactions at the same time as system interactions that could be truly powerful.
Imagine omnichannel event streams that map to users' notifications, email inboxes, chat interfaces, posts, deliveries, accounting, customer data, synchronisations, integrations, microservices, and business CRM and ERP systems. Everything is linked together by powerful workflows. An interaction with a customer is just an extension of the system. It's a distributed system of human tasks as well as digital tasks and interactions between the customer and the company.
The first obvious risk that comes to mind is that nothing can be built that correctly accounts for everything it might someday be required to perform, so it could easily be done wrong and would probably need to be very flexible / loosely coupled.
And while the distributed aspect adds complexity, I feel that involving different devices, not all of which will be on, or online, all the time, makes it a necessity. Not accounting for that would doom it.
But yes, that's a big part of it.
For the interaction / capabilities discovery part, I've lately been drawn to some sort of declarative interface, akin to Apple's "App Intents", dynamically exposed depending on the system's state. It also reminds me of how REST APIs were supposed to be discoverable.
But I'm not sure, and it could be a dead-end.
Another thing that comes to mind is Bell Labs' Plan 9, and how someone who used to work there said that when he came home, he'd just open his computer and everything would be there, just as he had left it at work. It's not enough, and the goal wouldn't be to have the exact same interface replicated everywhere, but this single distributed state, with each device just being a window into it, feels like a start.
Not that "it" would be an operating system. That too would be a doomed effort (many people can't change their OS nowadays, and most people who could wouldn't do it just to use a product or service). It would have to a paradigm, and perhaps a framework, or a language.
You've got me thinking again. I'm not going to be able to get anything done for days now.
Oh! Thanks! It's interesting. I'm not familiar enough with Clojure, and I can't tell how well it handles distributed state, but I'll try to play around with it. Thank you!
Thanks for the compliment. Since you seem to like it, I’ll expand a little on how I came to realize that:
It’s very common for workflows to be suited to data manipulation: write the data to disk; when that’s complete, send it over the network; once that’s completed, and so on. I/O is known to be asynchronous, so we’re already equipped for that.
What I noticed is that the screen of your app is also a source of asynchrony, and as such, every time we interact with it (animations, transitions, waiting for user interactions), we’re actually dealing with problems of exactly the same nature.
> the screen of your app is also a source of asynchrony
Exactly!
That’s why wrapping a “dialog” or “form” or “screen” into a Promise is such a powerful technique. When the user closes the dialog, the promise resolves with a result (e.g. whether the user clicked OK or Cancel), which then you can use for whatever else needs to be done, including invoking another dialog/form/screen!
This makes UI composable, and with async/await “hiding” the promise continuations, the syntax for doing that is essentially the same as when composing ordinary functions.
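A minimal sketch of the dialog-as-Promise idea, using a headless stand-in for the UI layer (no real DOM; the names `openDialog` and `confirmThenSave` are invented for the example):

```typescript
type DialogResult = "ok" | "cancel";

// Opening a dialog returns a promise plus a handle the UI layer
// calls when the user clicks a button.
function openDialog(): {
  result: Promise<DialogResult>;
  close: (r: DialogResult) => void;
} {
  let close!: (r: DialogResult) => void;
  const result = new Promise<DialogResult>((resolve) => { close = resolve; });
  return { result, close };
}

// Awaiting a dialog reads like awaiting any other async step,
// so UI flows compose just like ordinary async functions.
async function confirmThenSave(
  dialog: { result: Promise<DialogResult> },
  save: () => Promise<void>
): Promise<boolean> {
  const answer = await dialog.result;
  if (answer === "cancel") return false;
  await save();
  return true;
}

// Simulated session: the "user" clicks OK, then save runs.
const dialog = openDialog();
let saved = false;
const done = confirmThenSave(dialog, async () => { saved = true; });
dialog.close("ok"); // the UI layer would call this from the button handler
done.then((ok) => console.log(ok, saved)); // true true
```

The design choice worth noting is that the dialog's lifetime becomes the promise's lifetime: whoever awaits it doesn't know or care how the UI layer eventually resolves it.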
This workflow engine of yours seems to expect a very predictable and linear sequence of actions. What happens if the user clicks "log out" after user.tutorial()?
It's basically making an already easy case easier.
Even linear workflows can handle conditional execution, you just lift the condition into a value, like into a Maybe/Option type, and later stages only execute if there's a value.
Or if you want to be more literal, declare a specific type:
type Authenticated(User) = No | Yes(User)
And each stage of your workflow matches on Authenticated.Yes, and so only executes if the user is authenticated. That's basically what any system does internally, this just makes that implicit behaviour explicit.
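One way to make that type concrete is a TypeScript discriminated union (the `greet` stage and the `User` shape are hypothetical, just to show a stage that only fires on the Yes case):

```typescript
type User = { name: string };

// The Authenticated type from above, as a tagged union.
type Authenticated =
  | { tag: "No" }
  | { tag: "Yes"; user: User };

// A stage does work only when it matches Yes; a logged-out
// user simply flows through and the stage produces nothing.
const greet = (auth: Authenticated): string | null =>
  auth.tag === "Yes" ? `hello, ${auth.user.name}` : null;

console.log(greet({ tag: "Yes", user: { name: "ada" } })); // "hello, ada"
console.log(greet({ tag: "No" })); // null
```

The compiler enforces the match: inside the `Yes` branch `auth.user` is available, and in the `No` branch it isn't, which is exactly the implicit "only if authenticated" behaviour made explicit.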
I assume you would have conditions for each workflow, which are then pattern-matched against user behaviour to see whether that workflow is relevant.
You still have a globally defined happy path of coordination, but your overarching application logic isn't spread everywhere; it's contained in one place.
The preconditions for the logout would trigger a different workflow.
I am the author of additive-gui, which is based around the idea that you provide all the rules of the GUI and the computer works it out - what applies when. Additive GUIs loosely models dataflow between components and layout.