Austin said:
# :parameter bar: aString
# :parameter baz: anArray or +nil+
# :returns: true if aString ...
def foo(bar, baz = nil)
I *WANT* type hints. I have believed that they are useful for
things like SOAP from the beginning. I don't, however, want
anything that looks like type declarations in the method
declaration.
Thus, if we allow people to do:
def foo(bar: String, baz: [ NilClass, Array ] = nil)
  :
end
They will. And all of the clean, easy-to-read-and-use Ruby code
that we've come to love will go away.
I don't see how what you propose is all that different,
semantically, from calling out the type in the method header. You
seem to be arguing that: 'I want X like in other languages, but I
don't want it to look like it looks in other languages, because
programmers from other languages can import their bad habits.'
Then you're not paying attention. Sorry if that offends you, but
you're just not paying attention. Type hints are primarily a
documentation feature. Let me repeat that for emphasis: it is a
documentation feature. If an optimizing Ruby compiler is able to
take advantage of it -- great. If not ... too bad. This is intended
as both human and machine documentation for automatic documentation
tools (rdoc), IDE autocompletion, and SOAP method descriptions (and
the like).
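The comment-based form quoted at the top of this message is exactly the shape I mean; a doc tool or IDE can read it, and the method definition itself stays plain Ruby. (The method and tag names below are mine, purely to restate the idea.)

# :parameter name: aString
# :returns: aString, the greeting
def greet(name)
  "Hello, #{name}"
end

Nothing in the language changes; strip the comments and the program behaves identically.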
And to add a feature that doesn't at least resemble the syntax for
that feature in most other languages is to compromise the
(currently much battered) principle of least surprise, at least
for those programmers coming to ruby from another language.
DO *NOT* USE POLS in your argument. It doesn't apply. Matz has
indicated that it's just so much crap. I don't particularly *care*
if programmers coming from other languages are confused by a type
hinting system that doesn't look remotely like their static typing
system -- it isn't the same. It shouldn't be the same. It shouldn't
give them a false sense of understanding as to what it is.
The usual syntax where the type is embedded in the variable
definition has some advantages: it's compact, and it keeps related
things together. It's not for no reason that language after
language that actually implements this has done it (more-or-less)
the same way.
It also carries tons of disadvantages, as I've expounded before, but
will happily repeat again for those not paying attention:
1. It looks like the stuff in other languages, but because Ruby
isn't statically typed (thank GHU for that), it won't have the
same meaning as it does in those languages.
2. If it does have the same meaning as it does in those languages,
we're no longer talking about Ruby.
3. It will encourage people to use it for the wrong reasons in the
wrong way.
4. It becomes more than informational.
All of these are killer problems. Type hinting in Ruby should be
informational. No more.
Specifying an overly restrictive type is an error. One can write a
bad program in any language, as the various obfuscated programming
contests have shown. Just because this is possible is no reason to
limit what good programs can do.
You haven't been paying attention. StringIO doesn't inherit from
String or IO, but it will act like either. If I add a #read method
and a #write method to a random object, it will act a lot like an
IO. If I add #to_str to said object, it acts a lot like a String.
Specifying a restrictive (that is, non-informative) type AT ALL
prevents me from using many of these features that make the language
we're using Ruby.
Emphasis: restrictive type indicators will make the language that
does this Not Ruby.
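To make that concrete, here's a minimal sketch (the class name is mine, invented for illustration) of an object that is neither a String nor an IO but happily acts as both:

class Quacker
  def read(length = nil)    # enough of the IO protocol for many callers
    "pretend file contents"
  end

  def to_str                # enough for anything that wants a String
    "pretend string"
  end
end

q = Quacker.new
puts("prefix: " + q)   # String#+ calls #to_str behind the scenes
puts(q.read)           # code that reads IO-style is satisfied too

A restrictive "must be a String or an IO" check would reject q out of hand, and that is exactly the loss I'm talking about.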
Type and class are not the same thing. Class is one variety of type,
but there are others. For instance, there is the 'method signature'
(or duck type), the set of methods that can be called. You try to
rebut this notion:
Oh, please *don't* try to lecture me on this stuff, Caleb. You'll
just piss me off. The method signature is not related to the "duck
type." If you've used Ruby for any length of time, you recognise
that as well. I'm not Matz -- or even Dave Thomas -- but duck typing
is, ultimately, not *caring* about the type of object you're
provided. It's just using the type of object.
You manage to criticize without letting us know what your actual
objections are. What things are complicated, and how? Java
interfaces are a poor substitute for mixins, but that's because
they're not meant to be mixins. I would say instead: Java
interfaces are a poor substitute for duck types, (since, like
everything else in Java, they require a big pile of declarations
ahead of time) but we can make the Ruby version work the right
way, with a lot fewer declarations. (Only a little pile.)
No, we can't. Interface declarations are simply wasted code time and
space. It's that simple. (And Java interfaces *are* a poor
substitute for mixins; they are the way that Java implements MI, but
they require independent implementation in all cases. Ruby
implements MI through mixins, but like C++'s STL, they have a common
implementation.)
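To illustrate the common-implementation point (class name mine, purely illustrative): with a mixin, implementing one primitive buys you the whole shared implementation, where a Java interface would only hand you a list of method names to fill in yourself.

class Version
  include Comparable      # <, <=, ==, >, >=, between? all come for free
  attr_reader :num

  def initialize(num)
    @num = num
  end

  def <=>(other)
    num <=> other.num
  end
end

Version.new(1) < Version.new(2)   # => true, and only #<=> was written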
A simple rebuttal of your attempt at justification: at what point in a
class definition can you conclusively say that it is -- or is not --
an Enumerable? Given that classes in Ruby are open, you can't say it
until you try to use it like an enumerable.
class Foo
  include Enumerable
end

a = Foo.new
a.inject(0) { |a, b| a + b }
# NoMethodError: undefined method `each' for #<Foo:0x2b5f290>

class Foo
  def each
    yield self
  end
end

a.inject(0) { |a, b| a + b }
Obviously, this class won't work -- there are other parts missing,
but not until I've defined an #each method is Foo an enumerable.
What value would something like a NoEnumerableInterface error have,
as opposed to the simple "undefined method `each'" that we get?
Enumerable's documentation clearly states that it requires the #each
method be implemented for the Enumerable methods to work. Why do we
need an "interface" for that? The interface -- and implementation --
is in Enumerable. We don't need anything more than that.
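For completeness, here's the missing piece filled in (the values yielded are mine, for illustration); once #each yields something summable, Foo is every bit an Enumerable, with no interface declaration in sight:

class Foo
  include Enumerable

  def each
    yield 1
    yield 2
    yield 3
  end
end

Foo.new.inject(0) { |sum, n| sum + n }   # => 6
Foo.new.map { |n| n * 2 }                # => [2, 4, 6]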
People expect too much from hotspot and the like. It works great
for Java, sure, but that's in large part because Java is
statically typed. Expecting the same performance in a language
with no type information on variables whatsoever is unrealistic.
Not at all. Self is an indication that my suggestion will work:
[...] the compiled method is specialized on the type of the
receiver. If the same message is later sent to a receiver of
different type (e.g., a float instead of an integer), a new
compilation takes place. This technique is called customization
[...] the compiled methods are placed into a cache from which they
can be flushed for various reasons; therefore, they might be
recompiled from time to time. Furthermore, the current version of
the compiler will recompile and reoptimize frequently used code,
using information gathered at run-time as to how the code is being
used [...]
http://rubyurl.com/aFVmq
(http://research.sun.com/self/release_4.0/Self-4.0/manuals/Self-4.1-Pgmers-Ref.pdf)
Gee. It looks like I'm right. Self is an untyped language -- and the
HotSpot technology in Java is *based on the research for Self*.
(Indeed, note the point about the compiled cache and recompile and
reoptimize. That describes HotSpot exactly.)
And no, type inferencing _by_itself_ will not solve the problem.
Type inferencing works in ML, but ML, like Java and C# is not a
dynamic language. ML lacks a feature that Ruby has:
method_missing. To understand why this is important, let me
briefly explain type inferencing.
IIRC, Haskell also supports type inferencing, and is considered far
closer to a dynamic language. method_missing is a simple case of
unoptimizable code. At any rate, it's rather moot, since the
support (above) for the original point that I made rather shoots
down what you've said.
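To see why method_missing wrecks ahead-of-time inference, consider this sketch (class name mine): the set of messages the object answers simply can't be known until run time.

class LoggingProxy
  def initialize(target)
    @target = target
  end

  def method_missing(name, *args, &block)
    puts "calling #{name}"
    @target.send(name, *args, &block)
  end

  def respond_to_missing?(name, include_private = false)
    @target.respond_to?(name, include_private)
  end
end

LoggingProxy.new("hello").upcase   # => "HELLO", yet no #upcase is declared

No inference pass can assign LoggingProxy a finite method signature.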
[...]
Speed is only one advantage to a static typing system. In general,
static typing makes it easier (for programs as well as people) to
analyse programs that use it.
Bullshit. This is a lie told by proponents of statically typed
languages that has been shown false again and again and again.
Static typing doesn't increase safety, analysability, or anything
else that doesn't help the compiler of said statically typed
language. There are side benefits from IDEs that have been observed
in the last several years, but statically typed languages have been
around a lot longer than autocompletion. And we *still* get buffer
overruns.
Sorry, but I question even the belief that speed is an advantage of
a static typing system -- remember that C++ was *slower* than C for
years (and in some cases, still is), and they're both statically
typed languages.
(Hint: even statically typed languages need to break the chains more
often than proponents want to admit. In Java it's Object. In C/C++
it's void*. Both require a lot of care and work to make them
happen.)
-austin
--
Austin Ziegler * (e-mail address removed)
* Alternate: (e-mail address removed)