I spent 11+ years at Goldman, working on SecDb Core and various SecDb-related infrastructure.
Goldman's Slang language includes both a lazily evaluated, backward-propagating dataflow graph ("The SecDb Graph") and a strictly evaluated, forward-propagating one ("TSecDb"). They both have their use cases. The lazily evaluated graph is much more efficient when DAG nodes close to the output are only conditionally dependent on large sub-graphs, especially when some inputs might be skipped entirely, so the graph structure needed next isn't known at invalidation time.
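The lazy, pull-based style can be sketched in a few lines (a toy model, not Slang; all names are invented for illustration). The key point is that the expensive subgraph is only pulled if the cheap condition near the output says so:

```python
class Node:
    """Lazily evaluated, memoized dataflow node (pull-based)."""
    def __init__(self, fn):
        self.fn = fn              # fn pulls other nodes' values only as needed
        self._cached = False
        self._value = None

    def get(self):
        if not self._cached:
            self._value = self.fn()
            self._cached = True
        return self._value

    def invalidate(self):
        self._cached = False      # a real system would also notify dependents

evals = []                        # records which nodes actually ran

flag = Node(lambda: False)
expensive = Node(lambda: evals.append("expensive") or sum(range(1000)))

# The output is only *conditionally* dependent on the expensive subgraph:
out = Node(lambda: expensive.get() if flag.get() else 0)

print(out.get())   # 0 -- the expensive subgraph is never evaluated
print(evals)       # []
```

A forward-propagating strict graph would have had to recompute `expensive` on every upstream tick whether or not `out` ever looked at it.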
Ideally, you'd have some compile-time/load-time static strictness analysis to determine which nodes are always needed (similar to what GHC does to avoid a lot of needless thunk creation), along with some dynamic, GC-like strictness analysis that works backward from output nodes to figure out which of the potentially-lazy nodes should be strictly evaluated. In the general case, the graph dependencies may depend on the dynamic values of particular graph nodes. (The nodes whose values affected the graph structure used to be called "purple children" in SecDb, but that led to Physics/Statistics PhDs coming to the core team confused by exceptions like "Purple children should not exist in subgraph being compiled to serializable lambda".)
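A toy sketch of why value-dependent structure defeats static analysis (nothing here is real SecDb; names like `mode` are invented): each evaluation records which parents a node actually pulled, so a structure-affecting node only reveals the live subgraph at evaluation time:

```python
class Graph:
    """Dataflow graph that discovers dependencies dynamically, per evaluation."""
    def __init__(self):
        self.fns = {}     # node name -> compute function
        self.deps = {}    # node name -> parents pulled on its last evaluation

    def node(self, name, fn):
        self.fns[name] = fn

    def get(self, name):
        pulled = []
        def pull(parent):
            pulled.append(parent)       # record the dependency as it happens
            return self.get(parent)
        value = self.fns[name](pull)
        self.deps[name] = pulled
        return value

g = Graph()
g.node("mode",   lambda pull: "cheap")  # structure-affecting ("purple") node
g.node("cheap",  lambda pull: 1)
g.node("costly", lambda pull: 2)
g.node("out",    lambda pull: pull("cheap") if pull("mode") == "cheap"
                              else pull("costly"))

print(g.get("out"))      # 1
print(g.deps["out"])     # ['mode', 'cheap'] -- 'costly' was never a dependency
```

No static pass could have told you `out` depends on `cheap` rather than `costly`: that edge only exists for the current value of `mode`.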
TSecDb already contains a similar analysis to prune dead code nodes from the dataflow DAG after the DAG structure is dynamically updated. (For instance, when a new stock order comes in, a big chunk of TSecDb subgraph is created to handle that one order, and the TSecDb garbage collector immediately runs and removes all of the graph nodes that can't possibly affect trading decisions for that order. This also means that developers new to TSecDb often get their logging code automatically pruned from the graph because they've forgotten to mark it as a GC root (TsDevNull(X))... and it's pretty bad logging code if it affects the trading decisions.)
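That pruning pass is essentially mark-and-sweep over the dependency DAG: mark everything reachable from the GC roots, sweep the rest. A minimal sketch (the node names are made up; `log_line` stands in for logging code that was never rooted):

```python
def prune(deps, roots):
    """Keep only nodes reachable from `roots` via dependency edges.

    deps: node -> list of nodes it reads from
    roots: output nodes that must be kept (the GC roots)
    """
    live = set()
    stack = list(roots)
    while stack:                       # mark phase: walk backward from roots
        n = stack.pop()
        if n not in live:
            live.add(n)
            stack.extend(deps.get(n, []))
    # sweep phase: drop every node the roots can't reach
    return {n: ps for n, ps in deps.items() if n in live}

deps = {
    "decision": ["price", "order"],
    "price":    ["feed"],
    "order":    [],
    "feed":     [],
    "log_line": ["price"],   # reads from the graph, but nothing roots it...
}
kept = prune(deps, roots=["decision"])
print(sorted(kept))          # ['decision', 'feed', 'order', 'price']
```

`log_line` reads `price` but nothing that matters reads `log_line`, so it is swept, exactly the surprise new TSecDb developers hit until they root their logging.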
Risk exposure calculations (basically calculating the partial derivatives of the value of everything on the books with respect to most of the inputs) are done mostly on the lazy graph, and real-time trading decisions are done mostly on the strict graph.
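At its simplest, that exposure calculation is bump-and-reprice: finite-difference partial derivatives of a value function with respect to each input. A toy illustration (the linear "book" is invented; real books are nonlinear and the lazy graph is what makes thousands of bumps affordable, since only nodes downstream of the bumped input recompute):

```python
def portfolio_value(inputs):
    # Toy book: a fixed linear position in each input (illustrative only).
    return 100 * inputs["spot"] - 5 * inputs["rate"]

def risk(value_fn, inputs, h=1e-6):
    """Finite-difference partials of value_fn with respect to each input."""
    base = value_fn(inputs)
    exposures = {}
    for k in inputs:
        bumped = dict(inputs, **{k: inputs[k] + h})   # bump one input
        exposures[k] = (value_fn(bumped) - base) / h  # ...and reprice
    return exposures

r = risk(portfolio_value, {"spot": 50.0, "rate": 0.03})
print(r)   # spot exposure ~ 100, rate exposure ~ -5
```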
Sounds cool and quite intricate. Distributed reactive computations might indeed change the basic logic I described. Most reactive systems I've worked with and thought about are local.