[F'up to clf]
Joachim Durchholz said:
I usually have no problems grasping extremely abstract concepts.
Actually monads are the first concept that eludes me. Which may be
because monads are too complicated (in which case they shouldn't be
used), or because they are misrepresented.
They are not too complicated. The only thing that is difficult
is understanding how they fit into category theory, which is hard if
you've never seen category theory before. But to use them, you don't
have to understand that.
I suspect it's similar to a situation like this:
Imagine you never heard of recursion, and somebody explains it to you as
"a way to find the least fixpoint of a set of values described by a
recursive equation".
This definition is technically correct (modulo a few minor points that I
may have overlooked), but it's rather useless for a programmer: he'd
have to learn fixed-point theory to even understand the technical terms
and apply them to entities in the program, and it fails to even mention
what recursion is good for in programming.
Yes. So do you want an explanation of how to do recursion for the
working programmer, or do you want an explanation of the theoretical
background? The former is not so difficult, and for the latter, you
first have to learn some things about ordered sets.
Well, creating one's own monads seems to be one of the things that are
never explained, not even in the dumbed-down texts.
And "understanding what's going on" is impossible with the available texts.
Here's my view of "what's going on" without invoking category theory,
and with a bit of simplification. Maybe it helps you, maybe it doesn't.
You certainly know how to program with strings: A string is just a
sequence of characters. From a more theoretical point of view, one can
describe strings as a monoid, that is, a structure with a neutral
element (the empty string) and a binary operation (concatenation).
The laws are, in case you have never seen them before:
n * x = x
x * n = x
x * (y * z) = (x * y) * z
where n is the neutral element, * is the operation, and x, y, z are
arbitrary elements of the monoid.
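To make that concrete in Haskell (just my own illustration, the names
are invented): the empty string plays the role of n, and (++) plays the
role of *.

lawLeft, lawRight, lawAssoc :: Bool
lawLeft  = ("" ++ "abc") == "abc"                            -- n * x = x
lawRight = ("abc" ++ "") == "abc"                            -- x * n = x
lawAssoc = (("ab" ++ "cd") ++ "ef") == ("ab" ++ ("cd" ++ "ef"))  -- associativity

All three evaluate to True.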
A monad works very similarly; the main difference is that the
composition is done in such a way that values can be passed to the
thing on the right-hand side. So if m denotes the "type" (the set of
elements), the signature (in Haskell notation) of the composition is not
(*) :: m -> m -> m
as for the monoid, but (as a first step) rather
(*) :: m -> (a -> m) -> m
Furthermore, the "type" of the monad gets annotated with this
"value-passing" type, so we get
(*) :: m a -> (a -> m b) -> m b
This operation is called (>>=) in Haskell. For the neutral element,
we see now that we shouldn't use a single element, but instead we
can take an element of type a and "inject" it into the monad m.
So this injection has signature
return :: a -> m a
That's it. The three monoid laws from above now correspond to the
three monad laws (see for yourself if you can discover the similarity).
They are a bit more complicated because of the extra "value-passing"
argument, but that's all.
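(If you want to check: in Haskell notation the monad laws are usually
written like this, with the corresponding monoid law on the right.)

return a >>= f   =  f a                        -- n * x = x
m >>= return     =  m                          -- x * n = x
(m >>= f) >>= g  =  m >>= (\x -> f x >>= g)    -- x * (y * z) = (x * y) * z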
And like a monoid, a monad can describe a "sequence". Why is that
important? For example, I/O is difficult with lazy evaluation, because
it's hard for the programmer to follow the order of evaluation.
But I/O operations have side effects, and therefore should only
happen one after another, in the correct sequence. So if we package
up each I/O operation as an element of a monad, we can enforce the
correct sequence, and the "value-passing" argument enables us to
actually use the result of an I/O operation in the next computing step.
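A tiny example (the name greet is just mine; getLine and putStrLn are
from the Prelude):

-- (>>=) guarantees the read happens before the write, and passes
-- the line that was read on to the next step.
greet :: IO ()
greet = getLine >>= \name -> putStrLn ("Hello, " ++ name)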
What about other monads? Are there other situations where a sequence
of operations is also important? Well, what about error handling -- as
soon as an error is raised, computation should stop and the whole
thing should return with an error value immediately. How can we simulate
this? We can use the "Maybe" type, with "Nothing" corresponding to
an error, and "Just x" corresponding to the value x. So what will (>>=)
look like? Just by case analysis: we fail as soon as we see Nothing
on the left-hand side
Nothing >>= _ = Nothing
and just pass on the value to the right hand side if there is one
(Just x) >>= f = f x
What's "return"? Well, we just "inject" the value:
return x = Just x
And on the type side, the type constructor "Maybe", which has kind
Maybe :: * -> * (i.e., it maps a type to a type), corresponds to
the "m" used above, which has the same kind. That's how you make a
monad.
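Maybe is already declared as a monad in the standard libraries (with
exactly the definitions above), so you can use it right away. A small
sketch (safeDiv is just an invented name):

safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing              -- division by zero is the "error"
safeDiv x y = Just (x `div` y)

example :: Maybe Int
example = safeDiv 10 2 >>= \a ->   -- Just 5, so we continue with a = 5
          safeDiv a 0  >>= \b ->   -- Nothing, so the rest is skipped
          return (a + b)           -- example == Nothing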
Was that difficult?
Now, you wanted to roll your own monad. Exercise: Write an extended
error monad that can store an extra value together with the fact that
an error occurred. If you want to use predefined data types, have a look
at Either.
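If you want a skeleton to start from (the names are mine, and it's only
one possible shape), it could look like this:

-- e is the type of the extra information stored with the error.
data Error e a = Failed e | OK a

-- To make this a monad, define (by analogy with Maybe above):
--   return :: a -> Error e a
--   (>>=)  :: Error e a -> (a -> Error e b) -> Error e b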
Well, OK, but what then *are* the "computation combination strategies
possible with monads"?
Everything you can think of. Just as a monoid can have every
operation you can think of. For example, one could use the Cartesian
product ("cross product") of sets. This will give you an "all possible
combinations" monoid. In a similar way, you can build an "all possible
combinations" monad with lists.
Another point: there are numerous ways to combine computations, and they
have some (quite abstract) properties in common that people call a
"monad". So what? Reals and strings also have some properties in common,
and that's what people call a "monoid", but nobody talks about "monoidic
operations" in programming - in fact the analogies between reals and
strings, while interesting, don't play that much of a role in programming.
Well, reals support more structure than a monoid. And those structures
play a big role in programming: You use numeric operations all the
time. You just don't think about the fact that they can be seen as a
monoid. And you don't have to understand ring theory, either. In the
same way, you don't have to think about the fact that the I/O monad
is a monad if you just want to use it.
The bewildering array of massively different monads in functional
programming indicates that a similar misunderstanding may be in place
for monads. The monad concept is so general that it doesn't give me much
of an interesting distinction anymore, so why bother with the concept?
Well, in the same way you "bother with the concept" when thinking
about rings, say. You don't have to know that polynomial rings and
finite fields have much in common, but sometimes it helps.
If you know how to use the standard monads (IO, Error, Nondeterminism,
State), then that's fine. Unless you want to, you don't have to learn
category theory to "really understand it", just as you don't have to
learn ring theory to "really understand" numbers.
The text that came nearest to what I need is on
http://www.nomaware.com/monads/html/analogy.html . It's essentially the
description of a pipe (with the variation that the items may be of
different types between "stations").
Unfortunately, the physical analogy also applies to function composition.
Yes. That's why you can generalize monads to arrows, which are just a
slightly more complicated way to do function composition. It is made
more complicated so that you can "hide the internal plumbing" from
the user of this design pattern.
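A tiny illustration (ordinary functions are the simplest arrows; (>>>)
and first come from Control.Arrow, the rest of the names are mine):

import Control.Arrow ((>>>), first)

-- Plain function composition, written left to right:
step :: Int -> Int
step = (+ 1) >>> (* 2)            -- step 3 == 8

-- 'first' threads an extra value past the computation; this is the
-- kind of "internal plumbing" that arrows let you hide:
tagged :: (Int, String) -> (Int, String)
tagged = first step               -- tagged (3, "x") == (8, "x")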
- Dirk