Thomas said:
Well, I was trying to illustrate what could be a disastrous
thing in a weakly typed language lacking capabilities for
coupling the type (data) with the "can do" (functions)...
I assume you understand me, but for the clarity of other
readers (this post is going to several groups) I'll explain...
The above scenario of a submarine trying to fly is a common
metaphor for when things have gone REALLY bad!
These types of bugs were quite common to run into in
earlier days, when you were sending void pointers around
everywhere!
This is also a problem many newbies run into even in more
"secure" languages like C++ etc., though such bugs are more
"difficult" to produce in "better" languages...
I certainly don't dispute that the lack of type safety can be a
problem. I'm all in favor of strong typing, preferably static.
But in some ways, typing and encapsulation are orthogonal. In
C, one classical way to encapsulate is to use a void* as a sort
of a universal handle. You have a function createXXX which
returns a void*, and a number of other functions
doSomethingWithXXX which take a void* as their first parameter.
This is highly encapsulated, and probably meets Alan Kay's
original definition of OO (although he might have also insisted
on polymorphism), but it is anything but typesafe.
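To make that concrete, here is a minimal sketch of the idiom. The
XXX module and its internal stack representation are invented for
illustration; only the shape of the interface matters:

```cpp
#include <vector>

// Hypothetical "XXX" module: callers only ever see an opaque void*.
void* createXXX() {
    return new std::vector<int>();
}

void pushXXX(void* handle, int value) {
    static_cast<std::vector<int>*>(handle)->push_back(value);
}

int popXXX(void* handle) {
    std::vector<int>* v = static_cast<std::vector<int>*>(handle);
    int top = v->back();
    v->pop_back();
    return top;
}

void destroyXXX(void* handle) {
    delete static_cast<std::vector<int>*>(handle);
}
```

The representation is completely hidden (encapsulation), but nothing
stops a caller from passing an unrelated void* to popXXX -- the
compiler can't help at all (no type safety).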
[...]
I can partially agree here, it's a "hyped" word... In the
late 90's they were almost selling fridges that were Object
Oriented! (I wonder how you could inherit from the milk
bottle and override the Dispose function...)
There's an interesting anecdote at
http://c2.com/cgi/wiki?HeInventedTheTerm .
Of course, exact meanings evolve -- Alan Kay has also been
quoted as saying "I made up the term 'object-oriented', and I
can tell you I didn't have C++ in mind". More to the point,
it seems clear from many things that Kay has said that he
considers full dynamic typing essential to OO; another OO guru,
Bertrand Meyer, considers static typing essential to modern OO.
Alan Kay's earliest statements don't seem to consider
polymorphism essential, although most modern pundits (including
Kay, I think) do -- Booch distinguishes object oriented (with
dynamic polymorphism) from object based (all of the
encapsulation, but without polymorphism).
No, I am not formally right whatsoever!
You are!
But to separate two different mechanisms which occurred around
the same time in evolution, both trying to solve the same
problems in the same domain, would be like separating brown
eyes from blue! Sure, it can be done, but what's the point?
Those with brown eyes are neither dumber nor more intelligent
than those with blue eyes... And they both perceive the world
in roughly the same way.
OK. By formally right, I meant simply that whatever concepts
you consider OO do interact in some way with a type system. The
type of an object is determined by the set of possible states
and the operations allowed on it. And OO is certainly concerned
with the set of possible states and operations.
What I see as a minimum in all definitions of OO (or even
Booch's object based) is encapsulation. Hiding the state, and
only exposing a subset (of state and operations -- typically
most of the operations, but little of the state). It's this
encapsulation which makes polymorphism possible. Rather than
requiring an exact type, it allows you to use any type which
supports the desired operations. In that sense, it goes in the
opposite direction of strong typing.
I would argue that strong (static) typing and OO are two
distinct developments, which occurred at about the same time
(Pascal and Smalltalk), but which actually went in radically
different directions, at least at the start. Both brought
important advantages, and much of the subsequent development
(including such languages as C++ and Eiffel) has consisted of
attempts to merge these two developments.
(That said, each of course has tons of qualities which the
other does not have, but their one biggest feature (denial)
is common to both.)
Denial is NOT a characteristic of early OO. At least not of
Smalltalk. I would characterize early OO more as enablement. To get
back to your original example: I don't have to know what type an
object is to fly it. All I need is that it has a method "fly".
And that method can be added to any type I want.
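For what it's worth, C++ templates give a rough compile-time
approximation of that enablement. The types here are invented just
to sketch the idea -- anything with a fly() member works, no common
base class required:

```cpp
#include <string>

// A compile-time analogue of "just send the message fly":
// any type with a fly() member is accepted.
template <typename Flyer>
std::string launch(Flyer& f) {
    return f.fly();
}

struct Airplane {
    std::string fly() { return "airplane flying"; }
};

struct Bird {
    std::string fly() { return "bird flying"; }
};

// A Submarine without fly() would simply fail to compile here,
// rather than failing at run time as in the original metaphor.
```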
Many (most) later "OO" languages take a more restrictive view;
that a class must explicitly declare that it supports a set of
operations, by deriving from a class which declares the
interface. Thus, for example, if the interface contains the
functions "fly", "land" and "takeoff", an object cannot support
flying without also supporting landing and taking off.
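In C++ terms, that restrictive view looks something like the sketch
below (the class names are invented for illustration): to support
fly, a class must opt into the whole interface, stubs and all.

```cpp
#include <string>

// The restrictive view: supporting "fly" means deriving from an
// interface that also declares "land" and "takeoff".
class Flying {
public:
    virtual ~Flying() = default;
    virtual std::string fly()     = 0;
    virtual std::string land()    = 0;
    virtual std::string takeoff() = 0;
};

class Glider : public Flying {
public:
    std::string fly()     override { return "gliding"; }
    std::string land()    override { return "landing"; }
    std::string takeoff() override { return "towed aloft"; }
};
```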
If you see the evolution of C++ you will see that Bjarne
implemented features according to a priority list (at least he
says so himself), and the strong type system was one of
the FIRST features he did, long before e.g. exceptions and
templates...
Yes, but as far as I can see, it was never a goal of C++ to be
"purely object oriented", or anything else but useful.
Encapsulation is a very useful feature. So is strong static
type checking.
Concerning the priorities of C++, it's also important to see
where it was coming from. C has fairly weak typing, and while
C++ has improved it greatly, there are still weak points, as
anyone who has done something like "aString += aDouble" can
attest to.
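A minimal demonstration of that weak point: std::string's
operator+= has a char overload, and double converts implicitly to
char, so the code compiles -- appending one (usually meaningless)
character instead of a formatted number.

```cpp
#include <string>

// The double is silently truncated to char and appended as a
// single character -- almost certainly not what the author meant.
std::string appendValue(std::string s, double d) {
    s += d;   // compiles without error
    return s;
}
```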
[...]
I'm not familiar with them, so I don't know. On the other
hand, Smalltalk and CLOS definitely do not have static typing,
at all.
And of course, different communities use it in different ways.
There are a lot of interfaces in Java which simply deal with
Object, and leave you to guess the actual type, once you get it.
Which results in de facto dynamic type checking instead of
static. And numerous run-time errors instead of the compiler
complaining (but also certain freedoms which you don't have in
C++).
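The same effect is easy to reproduce in C++ with std::any (the
fetch function below is invented for illustration): the static type
tells you nothing, and a wrong guess fails at run time with an
exception rather than at compile time.

```cpp
#include <any>
#include <string>

// An interface that traffics in "anything": the caller must guess
// the real type on the way out, and a wrong guess throws
// std::bad_any_cast at run time instead of failing at compile time.
std::any fetch(bool wantString) {
    if (wantString)
        return std::string("hello");
    return 42;   // sometimes an int instead
}
```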
I know nothing about Smalltalk unfortunately...
I don't know it very well myself. But I know some of the
principles behind it.
But it sounds interesting if they managed to create something
definable as OO today without coupling functions within the
types...
It depends on what you mean by "definable". The basic principle
is simple: everything is an Object. Rather than "call
functions", you "send messages". The receiving object looks up
the message type in a table it contains, which maps message type
to method code, and either invokes the method code or
complains. I'm not too sure how it handles parameters, but if
it's anything like Lisp (on which it is based), the parameters
are just a list of Object, and it is up to the invoked method
to verify the number of parameters and whether each one
supports the needed operations.
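That lookup can be sketched in C++ -- this is only a caricature of
Smalltalk's dispatch, with invented names, but it shows the
table-driven idea: each object carries a map from message name to
method code, and either invokes the method or complains.

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// A caricature of per-object message dispatch. Each object owns a
// table mapping message names to method code.
struct MsgObject {
    std::map<std::string,
             std::function<std::string(const std::vector<std::string>&)>>
        methods;

    // Look the message up; invoke it if found, otherwise "complain".
    std::string send(const std::string& msg,
                     const std::vector<std::string>& args = {}) {
        auto it = methods.find(msg);
        if (it == methods.end())
            return "doesNotUnderstand: " + msg;   // the complaint
        return it->second(args);
    }
};
```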
I'd hate to try and develop robust software in such an
environment, but apparently, people do so, and do so well.
That might be true, but Bjarne had to take C compatibility
into account...
I think C++ wouldn't look like it does if it weren't for C...
Certainly not. But I think that Bjarne rather favors (or at
least favored at one time) strong static type checking. At
least, one aspect of the evolution of C++ has been in this
direction.
( ...C compatibility yet again... )
Not at all. C compatibility certainly didn't require
inheritance and virtual functions. Which is what I was talking
about -- I declare my function to take a Base&, and you can pass
it a Derived1, or a Derived2, or any number of other types.
This is Smalltalk's enablement -- a possibility to take
liberties with the type system. In the case of C++, it's a
constrained enablement; you can't do just anything.
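A quick sketch of that constrained enablement, with Base, Derived1
and Derived2 as in the paragraph above: the function accepts any
derived type, but only types that have explicitly opted in by
deriving from Base.

```cpp
#include <string>

class Base {
public:
    virtual ~Base() = default;
    virtual std::string name() const { return "Base"; }
};

class Derived1 : public Base {
public:
    std::string name() const override { return "Derived1"; }
};

class Derived2 : public Base {
public:
    std::string name() const override { return "Derived2"; }
};

// Declared to take a Base&, but callable with any derived type;
// the virtual call resolves to the dynamic type at run time.
std::string describe(const Base& b) {
    return b.name();
}
```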
Of course, there is also a (static) polymorphism due to the fact
that a double implicitly converts to int (or even to char).
Which allows things like the "aString += aDouble" above. That
IS due to C compatibility. (And I consider it more a bug
than a feature. Part of the price we had to pay for the
language to become widely used.)