The duck's backside

Eleanor McHugh

Man, I'm really not giving these replies the attention they deserve, but I did want to mention one thing:

This has nothing to do with "static typing." "Static typing" refers to type checking being performed at compile-time instead of runtime. It has nothing to do with interrogating an object's class during the execution of a program. If you think about it, the class of an object is _continually_ being interrogated at runtime, in order to dispatch a given message to the right method.


That's why I said it was a static typing _mindset_ - because it's impossible to do static typing in Ruby. However, putting this kind of type-checking boilerplate into code is attempting to do the same thing: allow only objects of a very limited type to be used in a given context. This isn't pushing against the Ruby Way because we're putting philosophy ahead of good design, but because the very shape of the language makes it ugly and cumbersome to do so.

I'm often reminded of the following extract from Lewis Carroll when discussing this topic:

<quote>
'When I use a word,' Humpty Dumpty said, in a rather scornful tone, 'it means just what I choose it to mean, neither more nor less.'

'The question is,' said Alice, 'whether you can make words mean so many different things.'

'The question is,' said Humpty Dumpty, 'which is to be master - that's all.'

Alice was too much puzzled to say anything; so after a minute Humpty Dumpty began again. 'They've a temper, some of them - particularly verbs: they're the proudest - adjectives you can do anything with, but not verbs - however, I can manage the whole lot of them! Impenetrability! That's what I say!'

'Would you tell me, please,' said Alice, 'what that means?'

'Now you talk like a reasonable child,' said Humpty Dumpty, looking very much pleased. 'I meant by "impenetrability" that we've had enough of that subject, and it would be just as well if you'd mention what you mean to do next, as I suppose you don't mean to stop here all the rest of your life.'

'That's a great deal to make one word mean,' Alice said in a thoughtful tone.

'When I make a word do a lot of work like that,' said Humpty Dumpty, 'I always pay it extra.'
</quote>



In Ruby methods have a terrible temper and will soon tell you if you're misapplying them, so instead of trying to restrict the allowable types they can work on it makes much more sense to handle their temper tantrums when they occasionally throw them. The only justification I see for not doing that in the original poster's code is that the method was talking to a remote web service, which could add an appreciable delay to finding out whether or not an error had occurred. But to be honest I'm not sure that in practice I'd be that concerned about it: instead I'd focus my energies on finding the places in the application where the context could become muddled, and redesigning so that they didn't occur in the first place.
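
In code, that's the difference between vetting the caller up front and rescuing the tantrum - something like this sketch (the names widgetize and to_widget are purely illustrative):

  def widgetize(thing)
    thing.to_widget            # just send the message and trust the duck
  rescue NoMethodError, TypeError => e
    # handle the tantrum here instead of checking thing.class up front
    warn "#{thing.inspect} can't be widgetized: #{e.message}"
    nil
  end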



Ellie

Eleanor McHugh
Games With Brains
http://slides.games-with-brains.net
 
Vidar Hokstad

However, an object's class and ancestors _determine_ what messages an  
object responds to. That's what they're there for - to hold message  
maps

But in Ruby knowing an object's class and its ancestors is not
sufficient, as the object can have methods added to its eigenclass as
well. More importantly, it can have methods _removed_ by using undef
in its eigenclass, which means that if you want to call #foo,
checking an object's class is not sufficient - the object might be the
only one of its class not to answer to #foo.

Furthermore, the object may answer to a message by implementing
#method_missing.

The only guaranteed way of knowing whether or not an object answers to
a message in Ruby is to try to send it.
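
To illustrate both points (a contrived example, but the mechanisms are real):

  class Duck
    def quack; "quack!"; end
  end

  d = Duck.new
  class << d
    undef quack                    # removed from this one duck's eigenclass
  end

  Duck.new.quack                   # => "quack!" - the class is untouched
  d.quack rescue puts "mute duck"  # NoMethodError: the only Duck that can't quack

  o = Object.new
  def o.method_missing(name, *args)
    name == :quack ? "quack, eventually" : super
  end
  o.quack                          # => "quack, eventually", with no #quack defined anywhere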




Vidar
 
Eleanor McHugh

No, you cannot possibly measure with certainty what an object does
when you send it a message. The only thing you can measure is
whether it will throw a runtime error because it doesn't implement a
method for the message.

There are many other kinds of runtime error than that, and it will be
a highly unusual (and I suspect carefully designed) method which
doesn't raise any of them when provided with garbage inputs. Hence
Duck Typing is the preferred approach in Ruby.
However, an object's class and ancestors _determine_ what messages
an object responds to. That's what they're there for - to hold
message maps.

To some extent. But as an individual object's message maps are
alterable at runtime, knowing its class and ancestors is a very
incomplete picture of what that object is.

Unless the definition of type is "a mutable correlation of behaviour
and state, integrated over time", it's clear that objects can and do
change their type at runtime.


Ellie

Eleanor McHugh
Games With Brains
http://slides.games-with-brains.net
 
Bill Kelly

From: "Mark Wilden said:
Hmmm. Is it that objects change their type, or is it that variables do?

I don't see variables in ruby as having any type at all. They merely
hold a reference to some object (any object.)
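
A trivial irb session makes the point:

  >> x = "a string"     # x currently references a String
  >> x = 42             # now a Fixnum
  >> x = [1, 2, 3]      # now an Array - at no point did x itself have a type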

If you think about it, the class of an object
is _continually_ being interrogated at runtime, in order to dispatch a
given message to the right method.

Ruby is more dynamic than that:

  >> x = "ordinary"
  => "ordinary"
  >> y = "fancy"
  => "fancy"
  >> def y.to_widget
  >>   puts %{i'm turning #{inspect} into a widget!}
  >>   "ordinary"
  >> end
  => nil
  >> y.to_widget
  i'm turning "fancy" into a widget!
  => "ordinary"


In my view, neither the variable 'x' nor 'y' changed in the
above example. The object referenced by 'y' definitely
changed. If an object's type is defined by what methods
it responds to, then the type of the object referenced by
'y' changed. But its class hierarchy did NOT change:
  >> y.class
  => String
  >> x.class
  => String
  >> y.class.ancestors
  => [String, Enumerable, Comparable, Object, Kernel]
  >> x.class.ancestors
  => [String, Enumerable, Comparable, Object, Kernel]


The only way to tell that the object referenced by 'y'
responded to :to_widget was to ask it. However, in ruby,
even asking doesn't always prove anything:
i'm turning "fancier" into a widget!
=> #<Widget:0x2c95b18>


The use of method_missing is a perfectly valid programming
approach in ruby, used often to forward method calls on to
some delegate, or also used to manufacture previously
nonexisting methods on-the-fly.

An example of the latter is the Og (Object Graph) ORM
library.[1] One can invoke methods on a class representing
a database table, without said methods previously existing:

  class IPHost
    property :ip, String, :unique => true
    property :hostname, String
  end

rec = IPHost.find_or_create_by_ip_and_hostname("1.2.3.4", "example.com")

Og is smart enough, using method_missing, to notice that
the method name begins with find_or_create_by_... and as
I recall it does a fairly simple split(/_and_/) on the
remainder of the method name to determine the column names
being referenced.

For optimization purposes, Og may also then define that
method dynamically, so that it will now exist to speed up
further invocations. (But, semantically, of course, there's
no need for the method to literally be defined, as it could
continue to handle it via method_missing.)
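
The pattern looks something like this - a simplified sketch of the technique rather than Og's actual source, with find_with and create standing in for whatever persistence helpers Og really uses:

  class IPHost
    def self.method_missing(name, *args)
      if name.to_s =~ /\Afind_or_create_by_(.+)\z/
        columns = $1.split(/_and_/)
        # define the method for real, so later calls bypass method_missing
        (class << self; self; end).send(:define_method, name) do |*values|
          conditions = Hash[*columns.zip(values).flatten]
          find_with(conditions) || create(conditions)  # hypothetical helpers
        end
        send(name, *args)
      else
        super
      end
    end
  end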


One further small point of interest. Notice that IPHost
doesn't inherit from anything. (Except Object, by default.)

When the Og library is initialized, it reflects through
ObjectSpace looking for classes which have used its :property
method to define database columns. And it includes an
appropriate module on such classes to imbue them with the
smarts they need to function.
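
The startup scan would be shaped something like this (the @og_properties marker and the EntitySupport module are my inventions, purely to show the idea):

  ObjectSpace.each_object(Class) do |klass|
    if klass.instance_variable_get(:@og_properties)  # set as a side effect of :property
      klass.send(:include, EntitySupport)            # imbue it with the ORM smarts
    end
  end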


All of the above being reasons why in Ruby, duck typing is
preferentially approached as "tell, don't ask."

There is no infallible way to ask objects whether they
respond to a particular method, so just Trust The Programmer
and call the method you expect to be there. In the rarer
cases (like the OP's in this thread) where one finds a
need to ask, ask the particular object whether it responds
to the method in question: querying its class inheritance
hierarchy, as we have seen above, is about the least
reliable and most overly restrictive approach.
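
In code, the three options, from least to most idiomatic (Widgetable is a made-up module):

  # least reliable: interrogates the birth certificate, not the object
  x.to_widget if x.is_a?(Widgetable)

  # better, when you really must ask: ask the object itself
  x.to_widget if x.respond_to?(:to_widget)

  # most idiomatic: just tell it what you want and handle refusal
  begin
    x.to_widget
  rescue NoMethodError
    # fall back, report, or re-raise
  end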


[1] Og: http://oxywtf.de/tutorial/1.html


Regards,

Bill
 
David A. Black

Hi --

No, you cannot possibly measure with certainty what an object does when you
send it a message. The only thing you can measure is whether it will throw a
runtime error because it doesn't implement a method for the message.

I mean you can measure what it's done after it's done it. I don't mean
you can measure what it *will* do; that's precisely my point (you
can't).
However, an object's class and ancestors _determine_ what messages an object
responds to. That's what they're there for - to hold message maps.

No; they determine what messages a freshly-minted object responds to.
Objects can change, and classes are objects and can also change.
Hmmm. Is it that objects change their type, or is it that variables do?

Objects. Variables are untyped.

I always end up feeling in these discussions like it sounds like I'm
advocating some kind of wild, chaotic, pseudo-non-deterministic
programming style for Ruby. Far from it. I only want to point out that
Ruby imposes certain conditions on us (the changeability of objects),
and that I'd rather embrace that and try to find ways to leverage it
than interpret it as a language design flaw or a test of my resistance
to temptation.


David
 
David Masover

If classes are not categories, what are they?

Classes are categories. But not all categories are classes.
Exactly. It has absolutely nothing to do with Ruby's "mindset," or
"the way we do things in Ruby." We choose the latter because it's
better, not because it's the Ruby way. It's the other way around: it's
the Ruby way because it's better; not that it's better because it's
the Ruby way. The former attitude is pragmatic, the latter is religious.

There are fundamentally different ways to do it -- I'm guessing functional
languages would do a recursive function. Lisp would iterate, but with...
whatever the opposite of a callback is. (Long weekend, I'm slipping...)

We do it because we believe it's better. We call it "the way we do things"
when a majority of us believe it's better. But obviously, not everyone agrees
on what's better -- insisting that one particular way is better than every
other way is also religious.
There _is_ something wrong with it. It's harder to read and more prone
to error.

No, there's nothing wrong with it. Just because there's a better way doesn't
mean the old way is wrong.
The obvious case is an object that implements to_i in a noncongruent
way. For example, let's say the number passed in is to be used as a
count. An IP address may respond to to_i, but it can't possibly be
correct in this context.

I think that's the disagreement. I prefer being optimistic about type
checking -- assume everything's alright until something breaks a test. Others
prefer being pessimistic -- assume a Numeric is a Numeric until you need to
do something else.

Also: It depends on the context. It might well be that an IP address is fine.
We're not talking about multiple categories - just one. In all my
years of using MI in C++, I've very rarely, if ever, seen a class that
didn't have a "dominant" base class. Most of the time MI is used for
mixing in. You might have a class, for example, that descends both
from Numeric and Observable, but obviously, its main category is
Numeric.

I'm not convinced that there's always a "dominant" base class. Simple example:
IP addresses are both numeric (in a sense) and byte strings (and by extension
arrays/enumerables), and their own thing. There might not be a class that
supports all of these things, but the thing itself does.
 
David Masover

Hi --



I mean you can measure what it's done after it's done it. I don't mean
you can measure what it *will* do; that's precisely my point (you
can't).

As long as we're nitpicking, you can't necessarily measure what's happened
after the fact, either. The object may well swallow everything with
method_missing and do nothing. It may be possible to play tricks with
respond_to?, __send__, and so on, to achieve the same effect.
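
For instance, this toy object "succeeds" at absolutely everything while doing nothing at all:

  class BlackHole
    def method_missing(name, *args)
      self                          # swallow every message, allow chaining
    end
    def respond_to?(name, include_private = false)
      true                          # and claim to answer everything, too
    end
  end

  hole = BlackHole.new
  hole.save!.log("done").commit     # no exception, and no effect whatsoever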

After all, your only way to check what goes on inside the class is to ask it,
by sending a message.

Of course, at this point, it doesn't really matter. Your definition of success
is probably based on what your program actually does -- what it actually
inputs and outputs -- and not on the internal state of some object.
No; they determine what messages a freshly-minted object responds to.

Unless I override #new, or #method_missing, or a number of other tricks.
 
David A. Black

Hi --

As long as we're nitpicking, you can't necessarily measure what's happened
after the fact, either. The object may well swallow everything with
method_missing and do nothing. It may be possible to play tricks with
respond_to?, __send__, and so on, to achieve the same effect.

I'm thinking of the effect, though. When you send a message to an
object, something comes back, or an exception is raised. My point is
just that none of that can be known with certainty (I can't absolutely
know that a.b will return c, or an object of class D, or whatever)
before it happens.
After all, your only way to check what goes on inside the class is to ask it,
by sending a message.

Of course, at this point, it doesn't really matter. Your definition of success
is probably based on what your program actually does -- what it actually
inputs and outputs -- and not on the internal state of some object.


Unless I override #new, or #method_missing, or a number of other tricks.

It depends how you define "responds to". But mainly my point is that,
at most, the class tells you about the state of things at the birth of
the object -- the nature part, that is, but not the nurture part.


David
 
Mark Wilden


No, there's nothing wrong with it. Just because there's a better way
doesn't mean the old way is wrong.


Sorry I haven't replied to a lot of very interesting posts in this thread
lately. I changed my host and things have been up in the air.

So I'll just say that we're not going to agree on a lot if you don't think it's
wrong to do something in the old C style when an easier, more readable, and
less bug-prone way is available.

///ark
 
David Masover

So I'll just say that we're not going to agree on a lot if you don't think it's
wrong to do something in the old C style when an easier, more readable, and
less bug-prone way is available.

Depends very much on context, and on the definition of "wrong". It's a
semantic argument -- we both agree that #each is better.
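
For anyone joining late, the contrast in question is roughly this (items standing in for any collection):

  # the old C style
  i = 0
  while i < items.length
    puts items[i]
    i += 1
  end

  # the Ruby way
  items.each { |item| puts item }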
 
Eleanor McHugh

I'm thinking of the effect, though. When you send a message to an
object, something comes back, or an exception is raised. My point is
just that none of that can be known with certainty (I can't absolutely
know that a.b will return c, or an object of class D, or whatever)
before it happens.


Exactly.

This is the same argument that split physics a century ago. The
classical view relied on the precision of mathematics to provide a
clockwork understanding of the universe whilst the modern view used
real-world experiments to show that below a certain level of
granularity such certainty was an illusion. At the time many critics
of the new physics claimed that it couldn't possibly be right for very
similar reasons to those given by advocates of immutable and static
typing: that runtime uncertainty makes a nonsense of provability/
causality and hence must be something other than it appears. That
argument still rages in some corners of physics (cf. Bohm's implicate
order) but for all intents and purposes uncertainty is the dominant
view and the bedrock of our digital electronic technology.

How does this apply to Ruby? Because of method_missing and the open
nature of classes and objects the only way to know anything about an
individual object is to make an observation, sending it a message that
queries its internal state. The very act of observation may or may not
change that internal state, and the observer will never be entirely
certain that the latter is not the case. That's just the nature of the
language, much as in C programs there is no way to know in advance
what type of memory structure a void pointer will actually reference
or the damage that operating on it may cause to a program's integrity
- but there are still cases where a void pointer is an appropriate
solution.

If certainty is important you can apply all kinds of design
conventions to support it. Unit testing performs a battery of
experiments to ensure requirements are met. Behaviour driven
development encourages a minimal implementation closely mirroring
expressed requirements. Tight coding standards might forbid use of
certain 'dangerous' language features such as method_missing or
dynamic module inclusion, or perhaps even mandate specific design
techniques such as exception-driven goal direction, runtime contracts,
design patterns or whatever else happens to work for the developers
concerned.
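
A runtime contract by convention needs nothing more than plain assertions - this sketch is my own, not from any library:

  class Account
    def initialize(balance)
      @balance = balance
    end

    def withdraw(amount)
      raise ArgumentError, "amount must be positive" unless amount > 0   # precondition
      raise "insufficient funds" if amount > @balance                    # precondition
      before = @balance
      @balance -= amount
      raise "postcondition violated" unless @balance == before - amount  # postcondition
      @balance
    end
  end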

The point with all these approaches is that they are focused on
reducing the number of ways in which an object will act at runtime so
that the underlying uncertainty is managed. Effectively they move the
granularity of the system so that it obeys classical expectations.

But the uncertainty is still there under the covers, and when applied
appropriately it can be used to provide elegant solutions to problems
that would otherwise be tedious and/or impossible to tackle with
static approaches. And once embraced for what it is, it opens a range
of new possibilities for writing reliable and robust applications.


Ellie

Eleanor McHugh
Games With Brains
http://slides.games-with-brains.net
 
Robert Klemme

2008/6/2 Eleanor McHugh <[email protected]>:

<snip/>

Absolutely.
The point with all these approaches is that they are focused on reducing the
number of ways in which an object will act at runtime so that the underlying
uncertainty is managed. Effectively they move the granularity of the system
so that it obeys classical expectations.

But the uncertainty is still there under the covers, and when applied
appropriately it can be used to provide elegant solutions to problems that
would otherwise be tedious and/or impossible to tackle with static
approaches. And once embraced for what it is, it opens a range of new
possibilities for writing reliable and robust applications.

You could even say that static typing conveys a false sense of safety
(which could lead you to neglect testing) whereas this effect does not
happen with "uncertain" (aka "dynamic") languages.

I am not sure about DbC languages such as Eiffel. These go much
further in defining semantics and you cannot easily violate assertions
that they provide, i.e. you get more safety than just static types. I
have always wanted to work with Eiffel but unfortunately never found
the time. Also, from what I read it would feel a bit like a
straitjacket - and given the option I much prefer Ruby to get
things done. :)

Cheers

robert
 
Eleanor McHugh

You could even say that static typing conveys a false sense of safety
(which could lead you to neglect testing) whereas this effect does not
happen with "uncertain" (aka "dynamic") languages.

As is often the case on big C++ and Java projects where complexity (in
the form of uncertainty over requirement correctness) is a dominant
factor.
I am not sure about DbC languages such as Eiffel. These go much
further in defining semantics and you cannot easily violate assertions
that they provide, i.e. you get more safety than just static types. I
have always wanted to work with Eiffel but unfortunately never found
the time. Also, from what I read it would feel a bit like a
straitjacket - and given the option I much prefer Ruby to get
things done. :)

I've played with DbC by convention on embedded projects where the
overhead was much less than the reward, but my attempts to get into
Eiffel always fall foul of a low boredom threshold (as is the case
with Ada). I guess like most hackers I'm lazy, and languages like Ruby
allow that laziness to be productive ;)


Ellie

Eleanor McHugh
Games With Brains
http://slides.games-with-brains.net
 
David A. Black

Hi --

As is often the case on big C++ and Java projects where complexity (in the
form of uncertainty over requirement correctness) is a dominant factor.


I've played with DbC by convention on embedded projects where the overhead
was much less than the reward, but my attempts to get into Eiffel always fall
foul of a low boredom threshold (as is the case with Ada). I guess like most
hackers I'm lazy, and languages like Ruby allow that laziness to be
productive ;)

I took an extended look at Eiffel 10 or 12 years ago, and thought it
was very cool, in ways that are almost diametrically opposed to Ruby's
coolness, of course. I've always thought Eiffel would be a good
alternate name for Ruby, though, because apparently the Eiffel tower
is lighter than the cylinder of air that contains it (!) and I think
of Ruby as having that quality of more power than can be accounted for
by what you actually see. (Or something.) But it's also a good name
for Eiffel, for other reasons.


David
 
Robert Dober

Classes are categories. But not all categories are classes.

IMHO this is a gross generalization!

I guess that you might indeed use Ruby classes as categories in your
designs. I do not know what others do, but the simple fact that I use
Ruby classes for other things falsifies your statement. BTW,
performance aside, I can live perfectly well without classes in Ruby.

Cheers
Robert
 
Rick DeNatale


I took an extended look at Eiffel 10 or 12 years ago, and thought it
was very cool, in ways that are almost diametrically opposed to Ruby's
coolness, of course. I've always thought Eiffel would be a good
alternate name for Ruby, though, because apparently the Eiffel tower
is lighter than the cylinder of air that contains it (!) and I think
of Ruby as having that quality of more power than can be accounted for
by what you actually see. (Or something.) But it's also a good name
for Eiffel, for other reasons.


A little less than 18 years ago, I chaired an OOPSLA panel called "OOP in
the Real World" http://portal.acm.org/citation.cfm?id=97946.97981

One of the panelists was Burton Leathers from Cognos. He gave a
characterization of the popular OO languages at that time based on their
"origin." As I remember some of the highlights were:

Objective-C reflects its Yankee origins (it was originally developed by
Brad Cox and Tom Love at an ITT lab in Connecticut): it does just what it
has to do and nothing more.

C++ is like Häagen-Dazs ice cream: you think it's from Scandinavia, but it's
really an industrial product from New Jersey.

As I recall he said he hadn't really used Smalltalk (which comes from
California) but he got the impression that if he did it would be like
surfing in baggie shorts.

And his description of Eiffel, "Quintessentially French!"

I think that there were a few more, but I can't recall them. Neither Java
nor Ruby were on the radar in 1990. Java (actually Oak) wasn't planted until
1991, and Ruby didn't appear until 1993.

I wonder how Burton would have characterized them.
 
Enrico Franchi

David A. Black said:
Dave Thomas, who coined the term "duck typing", has described it as
"a way of thinking about programming in Ruby."

Do you know when he actually started speaking about "duck typing"?
 
Enrico Franchi

David A. Black said:
No, I don't know exactly when it was.

That is because the first time *I* read about duck typing was in a post
by Alex Martelli in early 2000, *and* he wrote as if the term was already
established (which could be meaningless).

I was just curious whether someone could date the first uses from
Thomas. Of course Thomas Duck Typing in Google gives too many results to
check.

Nothing important, just curious. :)
 
