Jon's many-language ray tracer

Vesa Karvonen

In comp.lang.functional Joachim Durchholz said:
> Chris F Clark wrote: [...]
>> (Personally, I am most interested in monads, to see if they are
>> really a boon.)
> The topic is vastly overrated. They are just a specific Design Pattern.
> Since this particular pattern is extremely abstract, many people waste
> far too much time trying to understand the concept. Unfortunately,
> nobody with enough knowledge and talent (and interest) has written a
> good generally understandable explanation.

I think that the article ``What the hell are Monads'' by Noel Winstanley
should be fairly easy to understand (but wasn't my first introduction to
Monads):

http://www.cs.pdx.edu/~antoy/Courses/TPFLP/lectures/MONADS/Noel/research/monads.html

For understanding Monads, I would also particularly recommend reading the
(first half of the) article ``Generalising Monads to Arrows'' by John
Hughes:

http://www.haskell.org/arrows/biblio.html#Hug00

I also found ``How to declare an imperative'' by Philip Wadler quite
understandable:

http://homepages.inf.ed.ac.uk/wadler/topics/monads.html
> (The best approximation that I have come up with is "a monad is what's a
> pipe in Unix, only type-safe", but (a) I'm not sure that this is fully
> correct and (b) I'm pretty sure that this still isn't the full picture.)

I'd say that the approximation falls rather short.

-Vesa Karvonen
 
Andreas Rossberg

Joachim said:
> Not in Java - they are there to provide a substitute for Java's
> nonexistent support of multiple inheritance.

That's my point: Java interfaces provide nothing over abstract classes,
they were only introduced to have a simple rule identifying
unproblematic cases of multiple inheritance. I think I mentioned that
before.
 
Joachim Durchholz

Tomasz said:
> I think the ones on http://haskell.org/bookshelf/#monads are quite good.
> The problem is that the concept seems to be really difficult to grasp.
> This depends on the person - I bet there are people who (would) have no
> problem with it (I am not one of them).

I usually have no problems grasping extremely abstract concepts.
Actually monads are the first concept that eludes me. Which may be
because monads are too complicated (in which case they shouldn't be
used), or because they are misrepresented. I think it's the latter case:
the monad laws themselves aren't very complex, and monads seem to come
quite early in category theory so the background theory can't be very
complex either.

My problem is that the explanations generally fall into two categories:
either they are "dumbed down" and concentrate on how to use a monad -
which is OK but doesn't help with understanding the concept itself. Or
they require a working knowledge of category theory (which I don't have,
and I have yet to see what this category stuff is all about - I read
some of the texts, and they started out by presenting lots of fiddly
trivial stuff that seemed to lead nowhere; I'm pretty sure things get
more interesting later, but I never quite got that far).

(There's a third category, which concentrates entirely on using monads
for doing IO. Which is just as useful as explaining files as "a way to
write output to a terminal" - correct enough but ignoring 90% of useful
applications.)

I suspect it's similar to a situation like this:
Imagine you never heard of recursion, and somebody explains it to you as
"a way to find the least fixpoint of a set of values described by a
recursive equation".
This definition is technically correct (modulo a few minor points that I
may have overlooked), but it's rather useless for a programmer: he'd
have to learn fixed-point theory to even understand the technical terms
and apply them to entities in the program, and it fails to even mention
what recursion is good for in programming.
> The important thing is that monads are really easy to use (use
> existing ones, create your own ones) once you understand what's going
> on,

Well, creating one's own monads seems to be one of the things that are
never explained, not even in the dumbed-down texts.
And "understanding what's going on" is impossible with the available texts.
> and they are not only a solution for IO in Haskell - they are really
> useful as a programming tool.

That's what some of the texts have said.
> If pipes is all you want, using lazy lists and function composition will
> be sufficient (and convenient).

Lazy lists don't quite cut it: a monad allows that each item in the
chain has a different type; lists (lazy or not) don't. (I'm pretty sure
that's not the kind of mapping of the monad concept to program entities
that you mean, but the available texts haven't allowed me a better
understanding.)
> I think you are missing a big part of the picture. Simple function
> composition is only one of "computation combination strategies" possible
> with monads.

Well, OK, but what then *are* the "computation combination strategies
possible with monads"?

The text that came nearest to what I need is on
http://www.nomaware.com/monads/html/analogy.html . It's essentially the
description of a pipe (with the variation that the items may be of
different types between "stations").

Unfortunately, the physical analogy also applies to function composition.

Another point: there are numerous ways to combine computations, and they
have some (quite abstract) properties in common that people call a
"monad". So what? Reals and strings also have some properties in common,
and that's what people call a "monoid", but nobody talks about "monoidic
operations" in programming - in fact the analogies between reals and
strings, while interesting, don't play that much of a role in programming.
The bewildering array of massively different monads in functional
programming indicates that a similar misunderstanding may be in place
for monads. The monad concept is so general that it doesn't give me much
of an interesting distinction anymore, so why bother with the concept?
(This is just one of several possible explanations why people are so
confused about monads.)

Regards,
Jo
 
Vesa Karvonen

> Well, creating one's own monads seems to be one of the things that are
> never explained, not even in the dumbed-down texts.

See:

http://www.cs.pdx.edu/~antoy/Courses/TPFLP/lectures/MONADS/Noel/research/monads.html

The above tutorial implements two well-known monads: Maybe and State. I
would also recommend implementing the IO monad in an ML-style language.
(Wadler's paper I mentioned earlier does it, but I recommend that you
attempt it by yourself before looking at the solution.)

-Vesa Karvonen
 
Tomasz Zielonka

Joachim said:
> Another point: there are numerous ways to combine computations, and they
> have some (quite abstract) properties in common that people call a
> "monad". So what?

In Haskell you get: a uniform framework (the same syntax, libraries of
generic functions), the ability to change "the way to combine computations"
by touching only 1% of the code, (optional) encapsulation of internals,
monad transformers (David Espinosa calls this "Semantic Lego"), and much
more.
> but nobody talks about "monoidic operations" in programming

Haskell programmers do.
> The bewildering array of massively different monads in functional
> programming indicates that a similar misunderstanding may be in place
> for monads. The monad concept is so general that it doesn't give me much
> of an interesting distinction anymore, so why bother with the concept?

Let's try another (incomplete) explanation:
many semantic models - one syntax.

Important: to do anything interesting with monads, you need something
more than the common operators (>>= and return). Every interesting
monad introduces some additional operations, e.g. State has get/put, Cont
has callCC, Writer has tell and IO has getChar/putChar/... Without them
monads would be nothing more than a verbose way to write functional
code - >>= would be equivalent to function application.
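Tomasz's point can be made concrete with a small sketch (my own
illustration, not from the thread; real code would use Control.Monad.State
from the mtl library): it is get and put, not >>= and return, that make
State worth having.

```haskell
-- A minimal State monad, defined from scratch for illustration.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State f <*> State g =
    State $ \s -> let (h, s')  = f s
                      (a, s'') = g s'
                  in (h a, s'')

instance Monad (State s) where
  return = pure
  State g >>= f = State $ \s -> let (a, s') = g s
                                in runState (f a) s'

-- The "additional operations" that give State its meaning:
get :: State s s
get = State $ \s -> (s, s)

put :: s -> State s ()
put s = State $ \_ -> ((), s)

-- Number each element of a list, threading a counter invisibly:
label :: a -> State Int (Int, a)
label x = do
  n <- get
  put (n + 1)
  return (n, x)

main :: IO ()
main = print (runState (mapM label "abc") 0)
-- prints ([(0,'a'),(1,'b'),(2,'c')],3)
```

Replace get/put with other primitives (tell, callCC, getChar) and the
same >>= plumbing carries a completely different semantics - which is
Tomasz's "many semantic models - one syntax".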

Best regards
Tomasz
 
Joachim Durchholz

(Followup-To set to comp.lang.functional.)

Vesa said:
> http://www.cs.pdx.edu/~antoy/Courses/TPFLP/lectures/MONADS/Noel/research/monads.html

That tutorial leaves me confused and puzzled.

I can perfectly understand everything up to and including this:

"It's obvious that the style of programming shown above - using
combinators to manage parameter passing or computational flow - is a
powerful technique for structuring code and results in clearer programs.

In fact, the same ideas can be used for many other computational idioms:

* Data Structures : lists, trees, sets.
* Computational Flow : Maybe, Error Reporting, non-determinism
* Value Passing : StateT, environment variables, output generation
* Interaction with external state: IO, GUI programming, foreign
language interfaces
* More Exotic stuff : parsing combinators, concurrency, mutable
data structures.

It would be nice to be able to use the same combinators for each of
these idioms - as well as a uniform notation,"

That's all nice and well.

"this would allow a library of more general functions, that would
operate over any of these idioms, to be built up."

But this one I find hard to follow. Building a "library of more general
functions" is abstraction - but what aspects exactly are abstracted
away? What remains concrete (and hence useful)?

Besides, this approach is sort of cheating. Those "in the know" seem to
have given up on explaining what a monad *is*, and explain how they came
to be introduced into functional programming. While the latter is
definitely interesting, it doesn't help me with the former - it prevents
me from using monads in novel ways. I can't explore the concept, I'm
limited to those uses that others have explored before me. It's an
extremely frustrating experience, even more since nobody can explain to
me what a monad *is*.

What got nearest is the listing of the Monad Laws in the Haskell School
of Expression book. Unfortunately, the "pseudo-associative law" is
worded in such a strange way that I don't really understand it. With
that, I mean that I don't "see" what this law is intended for: what
kinds of computation structures it includes as monads and what kinds it
excludes as non-monads.
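For reference, here are the three laws Joachim mentions, in Haskell
notation, together with a mechanical spot-check in the Maybe monad (my
own example, not from the thread). The "pseudo-associative" law becomes
plain associativity if you read (m >>= f) as composing effectful steps:

```haskell
-- The monad laws:
--   return a >>= f    =  f a                        (left identity)
--   m >>= return      =  m                          (right identity)
--   (m >>= f) >>= g   =  m >>= (\x -> f x >>= g)    (associativity)
-- Written with Kleisli composition (>=>) they look exactly like
-- the monoid laws, with return as the neutral element.

f, g :: Int -> Maybe Int
f x = if x >= 0 then Just (x * 2) else Nothing
g x = Just (x + 1)

main :: IO ()
main = print
  [ (return 3 >>= f) == f 3                                    -- left identity
  , (Just 3 >>= return) == Just 3                              -- right identity
  , ((Just 3 >>= f) >>= g) == (Just 3 >>= (\x -> f x >>= g))   -- associativity
  ]
-- prints [True,True,True]
```

The associativity law is what lets you regroup a chain of computations
without changing its meaning - it includes exactly those structures where
"run these steps in sequence" is well-defined regardless of bracketing.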

Regards,
Jo
 
Dirk Thierbach

[F'up to clf]

Joachim Durchholz said:
> I usually have no problems grasping extremely abstract concepts.
> Actually monads are the first concept that eludes me. Which may be
> because monads are too complicated (in which case they shouldn't be
> used), or because they are misrepresented.

They are not too complicated :) The only thing that is difficult
is understanding how they fit in category theory, and you've never
seen category theory before. But to use them, you don't have to
understand that.
> I suspect it's similar to a situation like this:
> Imagine you never heard of recursion, and somebody explains it to you as
> "a way to find the least fixpoint of a set of values described by a
> recursive equation".
> This definition is technically correct (modulo a few minor points that I
> may have overlooked), but it's rather useless for a programmer: he'd
> have to learn fixed-point theory to even understand the technical terms
> and apply them to entities in the program, and it fails to even mention
> what recursion is good for in programming.

Yes. So do you want an explanation of how to do recursion for the
working programmer, or do you want an explanation of the theoretical
background? The former is not so difficult, and for the latter, you
first have to learn some things about ordered sets.
> Well, creating one's own monads seems to be one of the things that are
> never explained, not even in the dumbed-down texts.
> And "understanding what's going on" is impossible with the available texts.

Here's my view of "what's going on" without invoking category theory,
and with a bit of simplification. Maybe it can help you, maybe it doesn't. :)

You certainly know how to program with strings: A string is just a
sequence of characters. From a more theoretical point of view, one can
describe a string as a monoid, that is, a structure with a neutral
element (the empty string), and a binary operation (concatenation).
The laws are, in case you have never seen them before:

n * x = x
x * n = x
x * (y * z) = (x * y) * z

where n is the neutral element, * is the operation, and x, y, z are
arbitrary elements of the monoid.

A monad works very similarly, the main difference is that composition
is made in such a way that values can be passed to the thing on the
right hand side. So if m denotes the "type" (the set of elements), the
signature (in Haskell notation) of the composition is not

(*) :: m -> m -> m

as for the monoid, but (as a first step) rather

(*) :: m -> (a -> m) -> m

Additionally, the "type" of the monad gets annotated with this additional
"value-passing" type, so we get

(*) :: m a -> (a -> m b) -> m b

This operation is called (>>=) in Haskell. For the neutral element,
we see now that we shouldn't use a single element, but instead we
can take an element of type a and "inject" it into the monad m.
So this injection has signature

return :: a -> m a

That's it. The three monoid laws from above now correspond to the
three monad laws (see for yourself if you can discover the similarity).
They are a bit more complicated because of the extra "value-passing"
argument, but that's all.

And like a monoid, a monad can describe a "sequence". Why is that
important? For example, I/O is difficult with lazy evaluation, because
it's hard for the programmer to follow the order of evaluation.
But I/O operations have side effects, and therefore should only
happen one after another, in the correct sequence. So if we package
up each I/O operation as an element of a monad, we can enforce the
correct sequence, and the "value-passing" argument enables us to
actually use the result of an I/O operation in the next computing step.

What about other monads? Are there other situations where a sequence
of operations is also important? Well, what about error handling -- as
soon as an error is raised, computation should stop and the whole
thing should return with an error value immediately. How can we simulate
this? We can use the "Maybe" type, with "Nothing" corresponding to
an error, and "Just x" corresponding to the value x. So what will (>>=)
look like? Just by case analysis, we fail as soon as we see Nothing
on the left hand side

Nothing >>= _ = Nothing

and just pass on the value to the right hand side if there is one

(Just x) >>= f = f x

What's "return"? Well, we just "inject" the value:

return x = Just x

And on the type side, type constructor "Maybe" which has kind
Maybe :: * -> * (i.e., it maps a type to a type) corresponds to
the "m" used above, which has the same kind. That's how you make a
monad.

Was that difficult? :)
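Dirk's pieces assemble into a runnable sketch (my own assembly; Haskell's
Prelude already defines Maybe as a monad, so this uses a fresh type name
to avoid clashing with the built-in instance):

```haskell
-- Dirk's error monad, under a fresh name to avoid the Prelude's Maybe.
data Opt a = None | Some a deriving (Show, Eq)

instance Functor Opt where
  fmap _ None     = None
  fmap f (Some x) = Some (f x)

instance Applicative Opt where
  pure = Some
  None   <*> _ = None
  Some f <*> x = fmap f x

instance Monad Opt where
  return = pure
  None   >>= _ = None    -- fail as soon as we see an error
  Some x >>= f = f x     -- otherwise pass the value to the right-hand side

-- A computation that can fail:
safeDiv :: Int -> Int -> Opt Int
safeDiv _ 0 = None
safeDiv x y = Some (x `div` y)

main :: IO ()
main = do
  print (Some 10 >>= safeDiv 100 >>= safeDiv 50)  -- Some 5
  print (Some 0  >>= safeDiv 100 >>= safeDiv 50)  -- None
```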

Now, you wanted to roll your own monad. Exercise: Write an extended
error monad that can store an extra value together with the fact that
an error occurred. If you want to use predefined data types, have a look
at Either.
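One possible answer to the exercise (a spoiler, so try it yourself
first; this mirrors what GHC's Prelude provides for Either, written out
under a fresh name for illustration):

```haskell
-- An error monad that stores an extra value (here a String message)
-- together with the fact that an error occurred.
data Result e a = Err e | Ok a deriving (Show, Eq)

instance Functor (Result e) where
  fmap _ (Err e) = Err e
  fmap f (Ok a)  = Ok (f a)

instance Applicative (Result e) where
  pure = Ok
  Err e <*> _ = Err e
  Ok f  <*> x = fmap f x

instance Monad (Result e) where
  return = pure
  Err e >>= _ = Err e    -- abort, keeping the stored error value
  Ok a  >>= f = f a

checkedSqrt :: Double -> Result String Double
checkedSqrt x
  | x < 0     = Err ("negative argument: " ++ show x)
  | otherwise = Ok (sqrt x)

main :: IO ()
main = do
  print (Ok 16 >>= checkedSqrt >>= checkedSqrt)    -- Ok 2.0
  print (Ok (-4) >>= checkedSqrt >>= checkedSqrt)  -- Err "negative argument: -4.0"
```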
> Well, OK, but what then *are* the "computation combination strategies
> possible with monads"?

Everything you can think of. :) Just as a monoid can have every
operation you can think of. For example, one could use the cartesian
product ("crossproduct") of sets. This will give you an "all possible
combinations" monoid. In a similar way, you can do an "all possible
combinations" monad with lists.
> Another point: there are numerous ways to combine computations, and they
> have some (quite abstract) properties in common that people call a
> "monad". So what? Reals and strings also have some properties in common,
> and that's what people call a "monoid", but nobody talks about "monoidic
> operations" in programming - in fact the analogies between reals and
> strings, while interesting, don't play that much of a role in programming.

Well, reals support more structure than a monoid. And those structures
play a big role in programming: You use numeric operations all the
time. You just don't think about the fact that they can be seen as a
monoid. And you don't have to understand ring theory, either. In the
same way, you don't have to think about that fact that the I/O monad
is a monad if you just want to use it.
> The bewildering array of massively different monads in functional
> programming indicates that a similar misunderstanding may be in place
> for monads. The monad concept is so general that it doesn't give me much
> of an interesting distinction anymore, so why bother with the concept?

Well, in the same way you "bother with the concept" when thinking
about rings, say. You don't have to know that polynomial rings and
finite fields have much in common, but sometimes it helps :)

If you know how to use the standard monads (IO, Error, Nondeterminism,
State) then that's fine. Unless you want to, you don't have to learn
category theory to "really understand it". As you don't have to learn
ring theory to "really understand" numbers.
> The text that came nearest to what I need is on
> http://www.nomaware.com/monads/html/analogy.html . It's essentially the
> description of a pipe (with the variation that the items may be of
> different types between "stations").
> Unfortunately, the physical analogy also applies to function composition.

Yes. That's why you can generalize monads to arrows, which are a
slightly more complicated way to do function composition. They are made
more complicated so that you can "hide the internal plumbing" from the
user of this design pattern.

- Dirk
 
Stefan Ram

Dirk Thierbach said:
> They are not too complicated

What about defining a Java interface for monads, then?

interface M<A,B>
{ M<A> return_( A a );
  M<B> bind_( M<A> ma, Map<A,B> map ); }

interface Map<A,B> { M<B> map( A a ); }

Then a class to test the axioms:

class Test<A,B>
{ boolean hasPassed( M<A,B> monad, Map<A,B> map ){ ... }; }

and some example classes implementing M<A,B>.

class M0 implements M<A0,B0> { ... }

The problem: I do not really have a clue about monads, so the
above interfaces are just a first wild guess as a starting
point to be refined by someone with more knowledge.
 
Greg Buchholz

Joachim said:
> I usually have no problems grasping extremely abstract concepts.
> Actually monads are the first concept that eludes me. Which may be
> because monads are too complicated (in which case they shouldn't be
> used), or because they are misrepresented. I think it's the latter case:
> the monad laws themselves aren't very complex, and monads seem to come
> quite early in category theory so the background theory can't be very
> complex either.

I had the same problem, so I had to reformulate the examples in a
domain I was more familiar with. I chose to implement the basics of
monads in perl, and I wrote up my findings at...

http://sleepingsquirrel.org/monads/monads

...I also tackled a more advanced example and tried a simple parser
combinator library using monads (also in perl)...

http://sleepingsquirrel.org/monads/parser/monad_parser.txt
http://sleepingsquirrel.org/monads/parser/Do_notation.pm
> The text that came nearest to what I need is on
> http://www.nomaware.com/monads/html/analogy.html . It's essentially the
> description of a pipe (with the variation that the items may be of
> different types between "stations").
>
> Unfortunately, the physical analogy also applies to function composition.

Well, monads aren't that much more general than function composition.
When you compose the functions "f.g.h", each function can only act upon
the output from the immediately preceding function. That is, the only
input that "f" gets comes from the output of "g". Monads allow the
functions to act on the outputs from any of the preceding functions,
not just their immediate predecessor. So "f" could act upon the output
of "h" as well as that of "g".

Greg Buchholz
 
Stefan Ram

What about defining a Java interface for monads, then?

/* general interfaces for a Kleisli triple (monad) in Java 1.5 */
/* Stefan Ram, 2005 */

interface M<T> {}

interface Function<A,B>
{ B value( A o ); }

interface KleisliTriple<C,A,B> /* "Monad" */
{ M<A> T( A a ); /* type constructor */
M<A> return_( A a ); /* unit */
M<B> bind( M<A> t, Function<A,M<B>> f ); /* bind */ }

/* a specific example implementation of a Kleisli triple */

interface List<T> extends java.util.List<T>, M<T>, Iterable<T> {}

class ArrayList<T> extends java.util.ArrayList<T> implements M<T>, List<T>
{ private static final long serialVersionUID = 0; }

class ListTriple implements KleisliTriple<List,String,Integer>
{ public M<String> T( final String s )
{ List<String> list = new ArrayList<String>();
list.add( s );
return list; }
public M<String> return_( final String o )
{ return T( o ); }
public M<Integer> bind( final M<String> list,
final Function<String,M<Integer>> f )
{ List<Integer> list1 = new ArrayList<Integer>();
for( String l :( List<String> )list )
{ list1.add((( List< Integer> )( f.value( l ))).get( 0 )); }
return list1; }}

class HalveFunction implements Function<String,M<Integer>>
{ public M<Integer> value( String o )
{ List<Integer> list = new ArrayList<Integer>();
list.add( Integer.valueOf( Integer.valueOf( o ).intValue() / 2 ) );
return list; }}

public class Main
{ public static void main( final String[] _ )
{ ListTriple l = new ListTriple();
M<String> L = l.T( "7" );
M<Integer> L1 = l.bind( L, new HalveFunction() );
System.out.print((( List<Integer> )L1).get( 0 )); }}

/* 3 */
 
Joachim Durchholz

(Followup-To set to comp.lang.functional)

Greg said:
> Well, monads aren't that much more general than function composition.

Actually neither is more general than the other. There are monads that
aren't variants of function composition, such as List. And I'm pretty
sure that there are ways to do function composition that aren't monads :)

Regards,
Jo
 
Stefan Ram

> /* general interfaces for a Kleisli triple (monad) in Java 1.5 */

The next example uses the same interface, but this time it
tries to implement a calculation with a "state" (a program
counter).

I still have no idea whether this is correct.

I wonder why the function usually is taken to be "a -> m b"
instead of simply "a -> b", because in both examples the type
"b" is packed into "m b" only to be unpacked to "b" again
within "bind".

/* general interfaces for a Kleisli triple (monad) in Java 1.5 */
/* Stefan Ram, 2005 */

interface M<T> {}

interface Function<A,B>
{ B value( A o ); }

interface KleisliTriple<C,A,B> /* "Monad" */
{ M<A> T( A a ); /* type constructor */
M<A> return_( A a ); /* unit */
M<B> bind( M<A> t, Function<A,M<B>> f ); /* bind */ }

/* another example implementation of a Kleisli triple */

class State { int counter = 0; /* program counter */
State( int counter ){ this.counter = counter; }}

class WithState<Value> implements M<Value>
{ State state;
Value value;
WithState( Value value, State state )
{ this.value = value; this.state = state; }}

class StateTriple implements KleisliTriple<State,String,Integer>
{ public M<String> T( final String s )
{ return new WithState<String>( s, new State( 0 )); }
public M<String> return_( final String o )
{ return T( o ); }
public M<Integer> bind( final M<String> stringState,
final Function<String,M<Integer>> f )
{ return new WithState<Integer>
( /* calculate the function value */
(( WithState<Integer> )f.value(
(( WithState<String> )stringState ).value )).value,
/* increment the program counter */
new State( 1 +((( WithState<String> )stringState ).state ).counter )); }}

class HalveFunction implements Function<String,M<Integer>>
{ public M<Integer> value( String o )
{ return new WithState<Integer>
( Integer.valueOf( Integer.valueOf( o ).intValue() / 2 ),
new State( 0 )); }}

public class Main
{ public static void main( final String[] _ )
{ StateTriple processor = new StateTriple();
M<String> seven = processor.T( "7" ); /* add a unit state */
M<Integer> result = processor.bind( seven, new HalveFunction() );
System.out.println((( WithState<Integer> )result ).value );
System.out.println((( WithState<Integer> )result ).state.counter ); }}

/*
3
1
*/
 
Stefan Ram

> I wonder why the function usually is taken to be "a -> m b"
> instead of simple "a -> b", because in both examples the type
> "b" is packed into "m b" only to be unpacked to "b" again
> within "bind".

I try to make some more sense of it, by a modification
to the previous program:

The HalveFunction does two things:

- It returns as its proper result half of the value of the string
argument, as an integer object.

- It specifies as its "side effect" the value 12, taken to be the
program counter change.

The bind function then returns the value augmented with the new
program counter, which bind obtains by adding HalveFunction's
program-counter change to the program counter that accompanies the
argument.

Is the class "StateTriple" a monad (Kleisli triple) with
the intended use?

/* general interfaces for a Kleisli triple (monad) in Java 1.5 */
/* Stefan Ram, 2005 */

interface M<T> {}

interface Function<A,B>
{ B value( A o ); }

interface KleisliTriple<C,A,B> /* "Monad" */
{ M<A> T( A a ); /* type constructor */
M<A> return_( A a ); /* unit */
M<B> bind( M<A> t, Function<A,M<B>> f ); /* bind */ }

/* another example implementation of a Kleisli triple */

class State { int counter = 0; /* program counter */
State( int counter ){ this.counter = counter; }}

class WithState<Value> implements M<Value>
{ State state;
Value value;
WithState( Value value, State state )
{ this.value = value; this.state = state; }}

class StateTriple implements KleisliTriple<State,String,Integer>
{ public M<String> T( final String s )
{ return new WithState<String>( s, new State( 0 )); }
public M<String> return_( final String o )
{ return T( o ); }
public M<Integer> bind( final M<String> stringState,
final Function<String,M<Integer>> f )
{ return new WithState<Integer>
( /* calculate the function value */
(( WithState<Integer> )f.value(
(( WithState<String> )stringState ).value )).value,
/* apply the program counter change to the state */
new State(((( WithState<Integer> )
f.value((( WithState<String> )stringState ).value ) ).state )
.counter +
((( WithState<String> )stringState ).state ).counter )); }}

class HalveFunction implements Function<String,M<Integer>>
{ public M<Integer> value( String o )
{ return new WithState<Integer>
( Integer.valueOf( Integer.valueOf( o ).intValue() / 2 ),
new State( 12 )); }}

public class Main
{ public static void main( final String[] _ )
{ StateTriple processor = new StateTriple();
M<String> seven = processor.T( "7" ); /* add a unit state */
M<Integer> result = processor.bind( seven, new HalveFunction() );
System.out.println((( WithState<Integer> )result ).value );
System.out.println((( WithState<Integer> )result ).state.counter ); }}

/*
3
12
*/
 
Tomasz Zielonka

Tomasz said:
>> (The best approximation that I have come up with is "a monad is what's a
>> pipe in Unix, only type-safe",
>> but (a) I'm not sure that this is fully correct and
>
> Not even correct for any concrete monad. Unix pipes are severely
> constrained as a programming construct. They are only good for simple
> processing of a single stream of data. You cannot name and duplicate
> intermediate values (well, you could try to save files or fork streams,
> but that wouldn't be what you mean by "unix pipes", and still wouldn't
> suffice). [...]

Hmmm, some other people find unix-pipes similar to the IO monad:

http://okmij.org/ftp/Computation/monadic-shell.html

Best regards
Tomasz
 
