[Sorry, I missed this one originally.]
[...]
If you, um, look at the code, you see that "cells.a = 42" triggers
cells.__setattr__, which fires a's callback; the callback then
reaches inside and sets the value of b _without_ going through
__setattr__, hence without triggering b's callback.
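The same trap, sketched in CLOS terms (a hypothetical class and
accessors, just to mirror the Python; not the code under discussion):
a callback hung off the writer fires on an ordinary setf, but writing
the slot directly bypasses it.

(defclass cells ()
  ((a :accessor a-val :initform 0)
   (b :accessor b-val :initform 0)))

;; a's callback, the analogue of the __setattr__ work: it reaches
;; inside and sets b's slot directly, so b's callback never fires.
(defmethod (setf a-val) :after (new-value (self cells))
  (setf (slot-value self 'b) (* 2 new-value)))

;; b's callback: never runs when b is set via SLOT-VALUE.
(defmethod (setf b-val) :after (new-value (self cells))
  (format t "b changed to ~a~%" new-value))

;; (setf (a-val (make-instance 'cells)) 42) updates b silently.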
In Cells you can't have A depend on B and also B depend on A?
That seems like an unfortunate restriction - I'd want to be
able to have Celsius and Fahrenheit, so that setting either
one sets the other.
Set Kelvin, and make Celsius and Fahrenheit functions of that. I.e., there
is only one datapoint, the temperature. No conflict unless one creates one.
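In plain Lisp terms (a sketch, not Cells syntax), there is one stored
quantity and two derived ones, so the scales can never disagree:

;; One datapoint: the temperature in Kelvin.
(defun celsius (kelvin)
  (- kelvin 273.15))

(defun fahrenheit (kelvin)
  (+ (* 9/5 (celsius kelvin)) 32))

;; "Setting" either scale just means computing the one datapoint back:
(defun kelvin-from-celsius (c) (+ c 273.15))
(defun kelvin-from-fahrenheit (f) (+ (* 5/9 (- f 32)) 273.15))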
Realized later that I hadn't thought this through.
I'd been assuming that of course we should be allowed to
have A and B depend on each other. Hence if a change in
A propagates to a change in B, that change in B has to
be a non-propagating change - I thought I was just so
clever seeing a way to do that.
I think it could be arranged, if one were willing to tolerate a little
fuzziness: no, there would be no strictly correct snapshot at which
everyone had their "right value". Instead, A changes so B recomputes,
B changes so A recomputes... our model has now come to life; we just
have to poll for OS events or socket data, and A and B never get
to a point where they are self-consistent, because one or the other
always needs to be recalculated.
I sometimes wonder if the physical universe is like that, explaining why
gravity slows time: it is not the gravity, it is the mass and we are
seeing system degradation as the matrix gets bogged down recomputing all
that matter.
[Cue Xah]
But duh, if that's how things are then we can't have
transitive dependencies working out right; surely we
want to be able to have B depend on A and then C
depend on B...
(And also if A and B are allowed to depend on each
other then the programmer has to ensure that the
two rules are inverses of each other, which seems
like a bad constraint in general, something non-trivial
that the programmer has to get right.)
Right, when I considered multi-way dependencies I realized I would have
to figure out some new syntax to declare in one place the rules for two
slots, and that would be weird because in Cells it is the instance that
gets a rule at make-instance time, so I would really have to have some
new make-instance-pair capability. Talk about a slippery slope. IMO, the
big constraints research program kicked off by Steele's thesis withered
into a niche technology because they sniffed at the "trivial"
spreadsheet model of linear dataflow and tried to do partial and
multi-way dependencies. I call it "a bridge too far", and in my
experience of Cells (ten years of pretty intense use), guess what? All
we need as developers is one-way, linear, fully-specified dependencies.
So fine, no loops. If anything, if we know that
there are no loops in the dependencies that simplifies
the rest of the programming, no need for the sort of
finagling described in the first paragraph above.
Actually, I do allow an on-change callback ("observer" in Cells
parlance) to kick off a toplevel, imperative state change to the model.
Two cells that do that to each other will run until one decides not to do
so. I solve some GUI situations (the classic being a scrollbar thumb and
the text offset, which each at different times controls the other) by
having them simply set the other in an observer. On the second
iteration, B is setting A to the value A has already, so propagation
stops (a longstanding Cells feature).
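Hand-rolled outside Cells (made-up names, not the Cells API), the
convergence looks like this: each setter declines a no-op, so the
mutual observers run once around and stop.

(defvar *thumb* 0)                 ; scrollbar thumb position
(defvar *offset* 0)                ; text offset, 10 units per thumb step

(defun set-thumb (new)
  (unless (eql new *thumb*)        ; propagation stops on no-change sets
    (setf *thumb* new)
    (set-offset (* new 10))))      ; "observer": push the change across

(defun set-offset (new)
  (unless (eql new *offset*)
    (setf *offset* new)
    (set-thumb (floor new 10))))   ; "observer": push the change back

;; (set-thumb 3) sets *offset* to 30; the echo back finds the thumb
;; already at 3, so the round trip stops on the second iteration.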
These feel like GOTOs, by the way, and are definitely to be avoided
because they break the declarative paradigm of Cells in which I can
always look at one (anonymous!) rule and see without question where
any value it might hold comes from. (And observers define where values
take effect outside the model, but those I have to track down by slot
name using OO browsing tools.)
But this raises a question:
Q: How do we ensure there are no loops in the dependencies?
Elsewhere I suggested the code was:
(let ((*dependent* this-cell))
  (funcall (rule this-cell) (object this-cell)))
It is actually:
(let ((*dependents* (list* this-cell *dependents*)))
  (funcall (rule this-cell) (object this-cell)))
So /before/ that I can say:
(assert (not (find this-cell *dependents*)))
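Filled out into a self-contained sketch (COMPUTE-CELL is a made-up
name; RULE and OBJECT are the accessors from the snippet above):

(defvar *dependents* nil
  "Cells whose rules are currently running, innermost first.")

(defun compute-cell (this-cell)
  ;; A cell already on the stack needs its own result as an
  ;; input - an actual circularity - so fail loudly.
  (assert (not (find this-cell *dependents*)) ()
          "Circular dependency involving ~a" this-cell)
  (let ((*dependents* (list* this-cell *dependents*)))
    (funcall (rule this-cell) (object this-cell))))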
Do we actually run the whole graph through some algorithm
to verify there are no loops?
The simplest solution seems like adding the cells one
at a time, and only allowing a cell to depend on
previously added cells. It's clear that that would
prevent loops, but it's not clear to me whether or
not that disallows some non-looping graphs.
As you can see, the looping is detected only when there is an actual
circularity, defined as a computation requiring its own computation as
an input.
btw, a rule /does/ have access to the prior value it computed, if any,
so the cell can be value-reflective even though the rules cannot be
reentrant.
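In that sketch, giving the rule its prior value would look like this
(CACHE is another stand-in accessor, not the Cells internals):

(defun compute-cell (this-cell)
  (assert (not (find this-cell *dependents*)))
  (let ((*dependents* (list* this-cell *dependents*)))
    ;; The rule sees the value it computed last time (or NIL on the
    ;; first run): value-reflective, but still not reentrant.
    (setf (cache this-cell)
          (funcall (rule this-cell)
                   (object this-cell)
                   (cache this-cell)))))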
A math question the answer to which is not immediately clear to me
(possibly trivial, the question just occurred to me this second):
Say G is a (finite) directed graph with no loops. Is it always
possible to order the vertices in such a way that
every edge goes from a vertex to a _previous_ vertex?
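[The answer turns out to be yes: that ordering is exactly a
topological order, which every finite acyclic digraph admits, so the
add-one-at-a-time scheme above rules out nothing but the loops. A
sketch of the standard depth-first construction, where EDGES maps a
vertex to the vertices it points at:]

(defun topological-order (vertices edges)
  "Return VERTICES ordered so every edge points to an earlier vertex.
EDGES is a hash table from vertex to list of successors; assumes no loops."
  (let ((order nil)
        (seen (make-hash-table :test #'eql)))
    (labels ((visit (v)
               (unless (gethash v seen)
                 (setf (gethash v seen) t)
                 (mapc #'visit (gethash v edges)) ; successors first
                 (push v order))))
      (mapc #'visit vertices))
    (nreverse order)))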
I am just a simple application programmer, so I just wait till Cells
breaks and then I fix that.
kenny
--
Cells:
http://common-lisp.net/project/cells/
"Have you ever been in a relationship?"
Attorney for Mary Winkler, confessed killer of her
minister husband, when asked if the couple had
marital problems.