Why Generics?

  • Thread starter David Blickstein

Chris Uppal

David said:
Enums were truly needed. They eliminated a class of (what I consider to
be) type-related errors that in my experience was very common: using the
wrong constant in the wrong context. Like substituting
LAYOUT.LEFT_JUSTIFIED for LAYOUT.WEST in a layout manager.

Well, the calls were for C++-style enums, rather than for syntactic sugar for
the type-safe enum idiom. Fortunately Sun ignored the C++ weenies in this
case and did the Right Thing. (I could wish that they hadn't attempted to
conceal the object-nature of the enum objects, but it's not a big deal since I
can always fall back on the old idiom if needs be.)
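
For reference, a minimal sketch of that old idiom (the names are purely
illustrative):

    // Pre-Tiger type-safe enum idiom: a final class with a private
    // constructor and a fixed set of public instances.
    public final class Direction {
        public static final Direction NORTH = new Direction("NORTH");
        public static final Direction SOUTH = new Direction("SOUTH");
        public static final Direction EAST  = new Direction("EAST");
        public static final Direction WEST  = new Direction("WEST");

        private final String name;
        private Direction(String name) { this.name = name; }

        public String toString() { return name; }
    }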

C# allows you to use the SAME syntax for fields and niladic (zero-argument)
methods. This allows you to abstract the implementation of
the field so that fields can be re-implemented as methods without
breaking existing code.

Can't be done without JVM changes. (And, IMO, would be a very bad idea to do
at all.)
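
A minimal Java sketch of why (the class names are just illustrative): a
caller compiled against a public field embeds a field access, not a method
call, so the field can't later become a method without breaking every
already-compiled client:

    public class Point {
        public int x;   // clients compile "p.x" down to a getfield
    }                   // instruction, not a method invocation

    public class Client {
        int read(Point p) {
            return p.x;   // if x later becomes a method int x(), this
        }                 // compiled code breaks; it would have to be
    }                     // rewritten (and recompiled) to call p.x()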

-- chris
 

Chris Uppal

Thomas said:
Robert C. Martin, mostly a denizen of comp.object, has pointed out several
times that there are some things that are /over/ engineered. I don't know
his position on generics, but the example that he has lately discussed is
that a "Just Create One" is superior to a true singleton. I am seeing
shades of this line of thinking (right or wrong) in this thread.

The cases are certainly comparable (if you grant my position on generics). The
extra superstructure (in some interpretations of Singleton) merely prevents you
from creating extra instances although there is rarely a genuine need to
prevent this, and it may even lead to inflexibility in the code.
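
A minimal Java sketch of the contrast (names illustrative):

    // A "true" Singleton: superstructure that actively forbids a second
    // instance from ever being created.
    public final class Registry {
        private static final Registry INSTANCE = new Registry();
        private Registry() {}
        public static Registry getInstance() { return INSTANCE; }
    }

    // "Just Create One": an ordinary class; the application simply chooses
    // to construct a single instance at startup, and a test is still free
    // to create others.
    public class SimpleRegistry {
        public SimpleRegistry() {}
    }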

-- chris
 

Chris Uppal

Thomas said:
I'm not convinced that the word "strong" belongs with the words "dynamic
typing".

Ha! I see through you -- you are attempting to stir a dynamic/static debate
(again!). Well, I'm not biting!

;-)

-- chris
 

Lasse Reichstein Nielsen

Thomas G. Marshall said:
If that's what you were worried about then enums were not "truly needed" at
all. The examples you gave are examples of poorly designed type-UN-safe
enums.

Consider this type-SAFE pseudo-enum instead:

Indeed. The Java Tiger Enums are merely syntactic sugar for creating
such type-safe enums. One reason for having enums in the language is
to make it easier to make good enums. I don't think it's the most
important reason.

We already made type-safe enums in our code. The syntactic sugar makes
it explicit what the code means. It makes it more readable, as well as
writable.
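
For instance, a minimal Tiger-style declaration (illustrative):

    // One line of sugar; the compiler generates the final class with its
    // fixed set of type-safe instances for you.
    public enum Direction { NORTH, SOUTH, EAST, WEST }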

I consider it a "Good Thing" to make common idioms explicit, instead of
merely relying on naming conventions. I like the concept of C#'s
properties for the same reasons - they make it explicit what is a getter
and a setter instead of encoding it in the method names. (I don't know
whether the implementation is as good as the idea, haven't tried it in
practice :).

It's also a reason for annotations. Not all methods are alike. Some of
them might have something in common, and an annotation can make that
explicit, no matter what name the methods might have.

Hmm, maybe we should have the annotations @Setter("propname") and
@Getter("propname") instead of using name-prefixing. :)
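
A sketch of how those (purely hypothetical) annotations might look under
Tiger's annotation facility:

    import java.lang.annotation.*;

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Getter { String value(); }

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Setter { String value(); }

    class Person {
        private String name;

        // the annotation, not the method name, marks the accessor role
        @Getter("name") public String name() { return name; }
        @Setter("name") public void name(String n) { this.name = n; }
    }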

/L
 

Thomas G. Marshall

Chris Uppal coughed up:
David Blickstein wrote:
....[rip]...
C# allows you to use the SAME syntax for fields and niladic (zero-argument)
methods. This allows you to abstract the implementation
of the field so that fields can be re-implemented as methods without
breaking existing code.

Can't be done without JVM changes. (And, IMO, would be a very bad
idea to do at all.)


Regardless of whether it's a good idea or not, I often wonder what many
here have wondered. To what degree did Sun bite themselves in the ass by
rushing the initial versions out the door in 1995? I'm not convinced one
way or the other, but if you take a close look at many of the criticisms of
late, you see the reasons all sounding like

"they had to do it that way, or else they would
break compatibility"

Which is a common song in all of computer science, the biggest example of
which I'm guessing is probably MS Windows....
 

Thomas G. Marshall

Chris Uppal coughed up:
The cases are certainly comparable (if you grant my position on
generics). The extra superstructure (in some interpretations of
Singleton) merely prevents you from creating extra instances although
there is rarely a genuine need to prevent this, and it may even lead
to inflexibility in the code.


Well, and to the biggest complaint: inflexibility in the /testing/. But we
can let this go for now, since it is another /major/ topic altogether.

I see such worries everywhere in computer science. For example, in my C
days, I had no end of shouting sessions with supposedly senior engineers
screaming at the top of their lungs that /any/ use of goto is evidence of
bad code. Devoid of any consideration as to how goto can actually /improve/
readability and maintainability.

Imagine this particular hell (LOL): Over a decade ago, I was at a customer
site defending my code to a large customer who was attempting a "code
review". I put this into quotes, because in that particular horrible
company there was a distinct culture of posturing, and slamming each other's
code, and it all spread down from the top like a fungus. There was one
section where I appropriately used a goto in a fairly unusual function which
required a "cleaning up" section. The goto only traveled downward. They
chose this as an opportunity to all pose in front of the boss to show how
senior they were. I showed them what the function(s) would have to look
like if it did not have a goto (you can imagine). They tried to argue that
the code was clearer without the goto no matter what, even though the
result was an obfuscated mess.

Much of that shallowness in thinking I attribute directly to the same
mindset that spawned the "structured programming" witch-hunt of the
80's. There are still people today who will insist that a function should
/always/ exit in only one spot (the end) and that you must carry down state
and condition codes, and do whatever necessary to that end. No. matter.
what.

It's as if the rules that have been taught to students (singletons, no-goto,
structured programming) have all moved from being the means to an end, to
the end itself. And we ought to fight against that, at least IMO.

Now, it may seem contradictory that I am a proponent of the handcuffs within
Java. For example, I think that the labeled break is a great example of what
a restrained goto should look like. I also like the lack of MI, and definitely
/love/ the fact that Java didn't even attempt operator overloading. But
it's all where the line is drawn, and I see nothing inconsistent in these
opinions.
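
A minimal sketch of that restraint (illustrative):

    // Labeled break: like a goto, but it can only jump forward, out of
    // the labeled construct -- here, out of both loops at once.
    static boolean contains(int[][] grid, int target) {
        boolean found = false;
        search:
        for (int i = 0; i < grid.length; i++) {
            for (int j = 0; j < grid[i].length; j++) {
                if (grid[i][j] == target) {
                    found = true;
                    break search;
                }
            }
        }
        return found;
    }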
 

Thomas G. Marshall

Chris Uppal coughed up:
Ha! I see through you -- you are attempting to stir a dynamic/static
debate (again!). Well, I'm not biting!


Oh man. Nothing worse than when someone's on to you. :)

Actually, no. I've been trying to /not/ go down that route. In this
particular case though, there is a huge disagreement about what "strong"
typing really means, and if there's a way to discuss that /without/ making a
DTL/STL argument, then I'm for it.
 

Bjorn Borud

["Thomas G. Marshall" <[email protected]>]
|
| Regardless of whether it's a good idea or not, I often wonder
| what many here have wondered. To what degree did Sun bite
| themselves in the ass by rushing the initial versions out the door
| in 1995?

look at it this way: the evolution process has been a lot less painful
than that of C++ or even Python.

-Bjørn
 

Bjorn Borud

[Tor Iver Wilhelmsen <[email protected]>]
|
| Allow me to add another recommendation of that book: Bruce Eckel's
| "Thinking in C++" taught me the language, "Effective C++" taught me
| never to touch it.

Java books are usually about *doing* stuff. C++ books are mostly
about not shooting yourself in the foot. that just about sums up how
you use the two environments too.

-Bjørn
 

Thomas G. Marshall

Bjorn Borud coughed up:
["Thomas G. Marshall"
Regardless of whether it's a good idea or not, I often wonder
what many here have wondered. To what degree did Sun bite
themselves in the ass by rushing the initial versions out the door
in 1995?

look at it this way: the evolution process has been a lot less painful
than that of C++ or even Python.

-Bjørn


I agree with you on C++. I wouldn't say as much about Python. The problems
that Python had towards the beginning, at least IME using the thing, were
the results of flat-out bugs and not design "incrementalism".
 

Chris Uppal

Thomas said:
Regardless of whether it's a good idea or not, I often wonder what many
here have wondered. To what degree did Sun bite themselves in the ass by
rushing the initial versions out the door in 1995?

I suppose that boils down to asking whether they'd have produced something
better if they'd had more time. At one level it's clear that they would
have -- there are some messy details that obviously would have been fixed if
there had been time for another major iteration before release (e.g. some of
the weirdnesses in the JVM bytecode definition). OTOH, I'm not convinced that
they had the combination of talent, experience, and insight to have realised
how damaging some of the design features were, even if they'd had another
couple of years to think about it. (The whole "static" mess being just one
example.)

-- chris
 

Chris Uppal

Thomas said:
It's as if the rules that have been taught to students (singletons,
no-goto, structural programming) have all moved from being the means to
an end, to the end itself. And we ought to fight against that, at least
IMO.

A slightly depressing idea has just occurred to me. It may be that this kind
of thinking (which I also see far too often) comes from an inability to
perceive the quality (or lack of it) in code. E.g. if someone cannot see that
one expression is much more readable than another, then they have no way to
gauge "quality" other than by applying the more-or-less superficial "laws" that
they have learned (presumably by rote) in the past.

I am unable to read French. But -- having been "taught" it at school (or, more
accurately, having been drilled to scrape through the French exams at
school) -- I can sometimes puzzle out the rough sense of simple sentences.
Given those "talents" there is no way at all that I could ever tell the
difference between good written French (expressive, articulate, clear, and
interesting to read) and bad (dull, obscure, ambiguous, or banal). I could
(rote) learn a few simple tests (how many words per sentence,
presence/absence of some fixed set of known clichés, etc), but being
effectively blind to it as a language, I could never develop a sense of the
quality that is supposed to be Just Obvious.

I don't know if that /is/ the explanation for the knee-jerk responses to --
say -- gotos, but I've worked with enough code that was (apparently) written
without any sensitivity to what I would call quality (fluid, flexible,
transparent, and correct), that I suspect that "quality blindness" isn't all
that rare.

-- chris
 

Chris Uppal

Thomas said:
Actually, no. I've been trying to /not/ go down that route. In this
particular case though, there is a huge disagreement about what "strong"
typing really means, and if there's a way to discuss that /without/
making a DTL/STL argument, then I'm for it.

Probably a good idea to avoid terminology that has a built-in bias, then.

Type systems vary in many dimensions, e.g:

What kinds of concepts are represented within the system (you could call it the
system's ontology). E.g. C++'s type system is more powerful than Java's in
that it supports/requires reasoning about mutability. Other type systems exist
where guaranteed termination of computations is part of the type of that
computation. Yet more examples: null-ness could be explicitly
represented in the type system (I think Groovy does this), and array accesses
could be bounds-checked by the type system (I seem to remember that
Pascal did this).

Where the type system gets its information (it's only stretching a little to
call this the epistemology). The principal choices being (as far as I know)
limited to static-declarative (as in Java/C++); static-implicit (as in ML); or
dynamic (as in Smalltalk or LISP).

Completeness and accuracy: does the system guarantee never to produce false
positives or negatives -- does it pass code that should fail, or fail code that
would work perfectly? I don't believe that any static analysis can be both
complete and accurate.
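
A tiny Java illustration of the second kind -- code rejected statically even
though it would run perfectly:

    void example() {
        Object o = "hello";
        // String s = o;       // rejected at compile time, though it
        //                     // would work at runtime
        String s = (String) o; // the cast circumvents the static check
    }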

How the system "reports" errors: does it prevent execution (e.g. by refusing to
compile, or by aborting the runtime); or does it allow continued controlled
execution (e.g. throwing a recoverable exception); or does it allow the
programmer to circumvent the checks, and risk continuing with an apparently
buggy execution.

That list is not intended to be complete (e.g. I think Dylan's type system
differs from Java's in ways that I haven't listed). AFAIK, the term "strong
typing" is used in two senses. One is (IMO) deprecated -- it is mostly still
used only by the ignorant or bigoted -- which identifies "strong" typing with
static typing (whether declarative or implicit). The other makes more sense:
it is used specifically in opposition to "weak" typing -- that characteristic
of a type system that allows programs to execute (or continue executing)
without detecting errors, or to compile without enforcing the type-system's
rules. C and C++ are weak in this sense, Java[*] and LISP are not.

([*] but note that there is a class of detect/able/ errors that Java could, but
does not, detect. E.g. synchronisation errors, or bit-shifting by more than 31
positions. These lie outside the type system's ontology.)
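
A quick Java sketch of the shift case (everything here compiles silently):

    void shifts() {
        int a = 1 << 32;   // the shift distance is masked to its low five
                           // bits, so this is 1 << 0 == 1, not 0
        int b = 1 << 33;   // likewise 1 << 1 == 2
    }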

It wouldn't be unreasonable to say that one type system was "stronger" than
another if it reflected a wider range of aspects of the program. In that sense
C++ has a stronger type system than Java. But I think that usage is
independent of (and rarely confused with) the terms "strongly typed" or "strong
typing".

I am happy to call Java's type system "strong", because it doesn't contain any
escape hatches that permit me to circumvent its checking[*]. If, instead, you
want to identify "strong" with "static" then you'd have to label Java's type
system "weak", since there are classes of error that Java's runtime detects
and reports (e.g. NullPointerException, ArrayIndexOutOfBoundsException), but
which are not subject to its /static/ type checking.

([*] except JNI ;-)
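
For instance, both of these compile without complaint and are caught only by
the runtime's checks:

    void runtimeChecks(int[] a, String s) {
        int n = a[a.length];   // compiles fine; always throws
                               // ArrayIndexOutOfBoundsException at runtime
        int m = s.length();    // compiles fine; throws NullPointerException
                               // whenever s is null
    }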

-- chris
 

Thomas G. Marshall

Chris Uppal coughed up:
I suppose that boils down to asking whether they'd have produced
something better if they'd had more time. At one level it's clear
that they would
have -- there are some messy details that obviously would have been
fixed if there had been time for another major iteration before
release (e.g. some of the weirdnesses in the JVM bytecode
definition). OTOH, I'm not convinced that they had the combination
of talent, experience, and insight to have realised how damaging some
of the design features were, even if they'd had another couple of
years to think about it. (The whole "static" mess being just one
example.)

-- chris


Well, again I have to qualify my remarks with my firm belief that Sun is
often judged devoid of the context in which they first designed Java.
My analogy is the criticism of the special effects of Star Wars, which is,
of course, criticism that doesn't take into account the state of the art
at the time.

Java as a whole, I believe, was an attempt at something new, even if each of
the language's components had long since been understood. Note that I say
"attempt"---the mere attempt at something new (whether or not it actually
/is/ new) is enough to have language designers wander down roads that they
otherwise might not have considered. I don't think that you judge Java
unfairly, but I think that many often do.

So my pondering is less "is it their fault for rushing" and more "to what
degree would /anything/ have been 'better' had they waited".
 

David Blickstein

Thomas G. Marshall said:
If that's what you were worried about then enums were not "truly needed" at
all.

I agree with you, but only in the "possible" sense of "needed", not in the
"desirable" sense. In that sense of the word, your claim is already proven
by the fact that the new enumerations are (in my understanding) just a
little syntactic sugar provided by the javac compiler. So clearly type-safe
enums were always possible.

However, you could say the same thing about the new enhanced "for"
statement. Yes, it's possible for me to write a loop that is
effectively coding "for each element of this list", but (IMO) it is
substantially better to be able to CODE it as "for each element" rather than
"something that translates to for each element".

For example... which is more readable:

int sum = 0;
for (int i = 0; i < a.length; i++) { sum += a[i]; }
print(sum / a.length);

or:
print(average(a));

The problem with the example you gave (IMHO) is in what I call the
"exposition". The advantage of enumerations (and for each and a bunch of
the othernew features) is that they provide a syntax to DIRECTLY express a
basic operation, and thus expose the operation more directly.
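
A minimal sketch of that directness, using Tiger's enhanced for on the same
int array a (keeping the print shorthand from above):

    int sum = 0;
    for (int v : a) {   // reads directly as "for each element v of a"
        sum += v;
    }
    print(sum / a.length);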
 

Bent C Dalager

Thomas G. Marshall said:
So my pondering is less "is it their fault for rushing" and more "to what
degree would /anything/ have been 'better' had they waited".

A related question is "if they had waited, would anyone have ended up
taking note of their product at all?"

If it was rushed, there was probably a reason and it might have been a
good one :)

Cheers
Bent D
 

Thomas G. Marshall

David Blickstein coughed up:
I agree with you, but only in the "possible" sense of "needed", not
in the "desirable" sense. In that sense of the word, your claim is
already proven by the fact that the new enumerations are (in my
understanding) just a little syntactic sugar provided by the javac
compiler.

Actually, it's proven just by understanding what a class is. But sure.
We're done with this part of the discussion. It was hard to tell your level
of understanding simply by your statements.

The problem with the example you gave (IMHO) is in what I call the
"exposition". The advantage of enumerations (and for each and a
bunch of the othernew features) is that they provide a syntax to
DIRECTLY express a basic operation, and thus expose the operation
more directly.

You'll need to explain this part a little better. I'm not sure: you started
with the problem with my enum example, and then followed with the advantage
of enumerations.
 

Thomas G. Marshall

Bent C Dalager coughed up:


A related question is "if they had waited, would anyone have ended up
taking note of their product at all?"

If it was rushed, there was probably a reason and it might have been a
good one :)


Yep. Dang good point.
 
