final methods and classes

  • Thread starter Lionel van den Berg

Rzeźnik

I don't understand.

I meant "omniscient", my mistake.
If you are ominous [?] then this idea is great. I'd bet you are not. In
fact, declaring everything final is one of the worst things you can
do to cripple program development. IMO every time you are
_introducing_ final you have to think deeply.

Every time you omit 'final' in a class declaration you should consider
carefully.  Read the item in /Effective Java/ referenced upthread for a
thorough explanation of why to prefer non-heritability.

If there is such guidance there then the book is useless rubbish - but
I can't say for sure because I actually did not read this.
As an API writer (that is, one who writes a class intended for use) one should
control how the API is used, not predict it.

Agreed. So if we are on the same page - why do you try to predict how
your API will be used and 'contaminate' it with 'final's?
 

Rzeźnik

For APIs you do publish, you should make every class final that you intend not
to be heritable, which should be most of the concrete classes.

Lew, I am sorry to say this, but this does not make any sense. It goes
against the very spirit of OOP. If you write your published APIs in the
way you described then your APIs are useless in the face of design
changes. You can get away with this as long as you write programs for
yourself only, or programs with a very short life cycle. Why do you fear
inheritance so much? I know of the so-called FBC (fragile base class)
problem and I am well aware of what it means, but it is no argument
against inheritance.
 

Alessio Stalla

Sorry about the accidental post.  I had intended to write:

Excellent link, Arne.  However, that very article gives many good reasons for
using the 'final' keyword, contradicting your notion that there aren't good
reasons for it.

From an API writer's perspective, that is, anyone writing classes intended to
be used, 'final' on a class or method indicates that it should not, and
therefore cannot be extended / overridden.  As Mr. Goetz said in the
referenced article, this decision should be documented (in the Javadocs).

There's a difference between something "you should not do" and
something you are prohibited from doing. The creator of a class should
clearly document that s/he didn't design it to be extensible using
inheritance, but s/he should think twice about making it non-
extensible forever. Sometimes the classes we write can be used in a
way we didn't anticipate, and that's not automatically a bad thing.
Josh Bloch in /Effective Java/ suggests that one should prefer composition to
inheritance, and that inheritance is somewhat abused.  ("Design and document
for inheritance or else prohibit it")  He advises to make classes final unless
you explicitly and properly make them heritable.

I would gladly accept such advice if there were something like
composition as a first-class concept in Java; e.g. if you were able to
say

public class Example extends X implements Y uses Z(z) {
  private Z z; //Error if z is not assigned a value by a constructor
  public Example(Z z) {
    this.z = z;
  }
  ...
}

and automatically have methods in the implemented interfaces delegated
to z, unless you override them. That is not the case, and composition,
while being the right thing in certain cases, is way more cumbersome
and "foreign" than inheritance, and thus can't be used as a general
substitute for inheritance.
It's like saying that pure functions are to be preferred over
functions with side effects: that might be true in a language with
heavy support for functional programming, but giving it as advice
for good Java coding would be wrong.
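
To make the "cumbersome" part concrete, here is a rough sketch - with
invented types Y and Z, not taken from any real API - of what that
delegation has to look like in today's Java, where every interface
method must be forwarded by hand:

public interface Y {
  void frob();
  int size();
}

// Z is one existing implementation of Y
class Z implements Y {
  @Override public void frob() { /* ... */ }
  @Override public int size() { return 0; }
}

public class Example implements Y {
  private final Z z;

  public Example(Z z) {
    this.z = z;
  }

  // manual delegation: one forwarding method per interface method
  @Override public void frob() { z.frob(); }
  @Override public int size() { return z.size(); }
}

Every method added to Y means another forwarding method in Example,
which is exactly the boilerplate a "uses" construct would remove.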

Alessio
 

Rzeźnik

There's a difference between something "you should not do" and
something you are prohibited from doing. The creator of a class should
clearly document that s/he didn't design it to be extensible using
inheritance, but s/he should think twice about making it non-
extensible forever. Sometimes the classes we write can be used in a
way we didn't anticipate, and that's not automatically a bad thing.


I would gladly accept such advice if there were something like
composition as a first-class concept in Java; e.g. if you were able to
say

public class Example extends X implements Y uses Z(z) {
  private Z z; //Error if z is not assigned a value by a constructor
  public Example(Z z) {
    this.z = z;
  }
  ...

}

This is a nice, powerful construct. Are you aware of any language that
uses composition in this way?
and automatically have methods in the implemented interfaces delegated
to z, unless you override them. That is not the case, and composition,
while being the right thing in certain cases, is way more cumbersome
and "foreign" than inheritance, and thus can't be used as a general
substitute for inheritance.
It's like saying that pure functions are to be preferred over
functions with side effects: that might be true in a language with
heavy support for functional programming, but giving it as advice
for good Java coding would be wrong.

Alessio

+1
 

Lew

Rzeźnik said:
If there is such guidance there then the book is useless rubbish - but
I can't say for sure because I actually did not read this.

You didn't read it, yet you feel confident in refuting the arguments
you didn't read?

Interesting.

FWIW, far from being "useless rubbish", /Effective Java/ is arguably
the most useful book published to help one write idiomatic and, well,
effective Java code.

Lew:
Agreed. So if we are on the same page - why do you try to predict how
your API will be used and 'contaminate' it with 'final's?

Begging the question. You assume that 'final' is "contamination" in
order to argue that it's a bad thing.

And I'm not talking about prediction, as you can clearly see from the
passage you quoted, where I say one should NOT predict but control how
the API is used. So that adds straw man to the fallacies you're using
to attempt to refute my points.

A rather bald straw man, too. You took the exact opposite of my
argument in simplest possible terms without even a pretense of logic
or evidence. For you to use such a weak argument completely
undermines your points.

Since you said nothing that speaks to my arguments, I cannot but refer
you to what I've already said. If you have points that refute mine,
or even pretend to, we can continue.

And do try to read a book before you judge its content. I referred
you to Mr. Bloch's explanations because they're already far more lucid
and detailed than I can hope to achieve in a Usenet post. It is not
valid to declare his reasoning "rubbish" simply because it confronts
your prejudices.
 

Andreas Leitgeb

Probably just like Rzeźnik, I do have some problems grasping
this statement.

It seems to me that it means two entirely separate things depending
on one's viewpoint:

Lew: rather than predict what users are going to do with my API,
one had better control them, by using "final" to prevent any
possibly "unintended" uses ...

Rzeźnik: Rather than predict what the user really wants to do with
the API, give them a good base and let them override where
they want custom behaviour. Control them by offering them a good
API of public and protected methods so they can't do too nasty
things.

I must admit that Rzeźnik's point of view is nearer to my own, but
then again, I might have misunderstood both.
 

Rzeźnik

You didn't read it, yet you feel confident in refuting the arguments
you didn't read?

Interesting.

I wrote "I can't say for sure" and "If there is...". Sincerely, I do
not believe that the author of /Effective Java/ prefers "non-
heritability".
FWIW, far from being "useless rubbish", /Effective Java/ is arguably
the most useful book published to help one write idiomatic and, well,
effective Java code.

That might be. Still, your words rather describe it as OO heresy.
Lew:



Begging the question.  You assume that 'final' is "contamination" in
order to argue that it's a bad thing.

No, I explained why I believe that final should be used with care.
And I'm not talking about prediction, as you can clearly see from the
passage you quoted, where I say one should NOT predict but control how
the API is used.  So that adds straw man to the fallacies you're using
to attempt to refute my points.

A rather bald straw man, too.  You took the exact opposite of my
argument in simplest possible terms without even a pretense of logic
or evidence.  For you to use such a weak argument completely
undermines your points.

I am not sure whether we understand each other. Let me reiterate: you
said that one should NOT predict, with which I agree. But you clearly
do not want to see that 'final' is one endless bag of predictions.
Every 'final' you put in your code cries: I PREDICT THAT THIS METHOD/
CLASS HERE IS WRITTEN IN STONE. While sometimes predictions like these
may be valid, more often than not they aren't.
Since you said nothing that speaks to my arguments, I cannot but refer
you to what I've already said.  If you have points that refute mine,
or even pretend to, we can continue.

And do try to read a book before you judge its content.  I referred
you to Mr. Bloch's explanations because they're already far more lucid
and detailed than I can hope to achieve in a Usenet post.  It is not
valid to declare his reasoning "rubbish" simply because it confronts
your prejudices.

I am declaring rubbish not his reasoning per se, but his reasoning as
you described it - those may be two different things. Anyway, there is
no VALID argument against inheritance in an OO language. One may argue
that inheritance should be thought out and thoroughly reviewed, but one
cannot state that it should be abandoned, as it is the only way to make
sure that an OO system is open for future modification while being, at
the same time, closed so that it is able to execute. The more final
you use, the more closed your class hierarchy becomes - which is
almost always 'the bad thing'.
 

Rzeźnik

I found it exemplary: <http://java.sun.com/docs/books/effective/>. Here
are several cogent excerpts:

<http://www.ddj.com/java/208403883>
<http://www.ddj.com/java/210602264>


If I may amplify on the point, Item 17 in _Effective_Java_ indicates
that a class designed for inheritance "must document its self-use of
overridable methods." By convention, such documentation typically begins
with the phrase "This implementation..." The item cites this example:

<http://java.sun.com/javase/6/docs/api/java/util/AbstractCollection.html
#remove(java.lang.Object)>

The item goes on to discuss several consequences and alternatives. IIUC,
the point is to make the heritability choice deliberately and document
that choice effectively.

Yes, so now we are substantially closer to the truth. I agree that you
should use inheritance with care and document what you have done and
on what basis, and I agree that documentation is probably the only,
informal, way to deal with unexpected interactions between base and
overridden methods in the Java language. But I do not agree that
severely limiting inheritance does any good - nor does the author
state that it does. Thank you, John.
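
For readers who have not seen the convention John describes, here is a
minimal sketch - a made-up class, not an excerpt from the book or the
JDK - of documenting self-use of an overridable method:

public class Counter {

  private int count;

  /**
   * Increments the counter by one.
   *
   * <p>This implementation simply calls {@link #add(int)} with an
   * argument of 1, so subclasses that override add(int) change the
   * behaviour of this method as well.
   */
  public void increment() {
    add(1);
  }

  /** Adds the given amount; deliberately left overridable. */
  public void add(int amount) {
    count += amount;
  }

  public final int count() {
    return count;
  }
}

The point is that a subclass overriding add(int) can see from the
Javadoc that increment() will be affected too.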
 

Rzeźnik

Rzeźnik:  Rather than predict what the user really wants to do with
   the API, give them a good base and let them override where
   they want custom behaviour. Control them by offering them a good
   API of public and protected methods so they can't do too nasty
   things.

I must admit that Rzeźnik's point of view is nearer to my own, but
then again, I might have misunderstood both.

As for me, you expressed my thoughts correctly. Thank you
 

Arved Sandstrom

Rzeźnik said:
I don't understand.

I meant "omniscient", my mistake.
If you are ominous [?] then this idea is great. I'd bet you are not. In
fact, declaring everything final is one of the worst things you can
do to cripple program development. IMO every time you are
_introducing_ final you have to think deeply.
Every time you omit 'final' in a class declaration you should consider
carefully. Read the item in /Effective Java/ referenced upthread for a
thorough explanation of why to prefer non-heritability.

If there is such guidance there then the book is useless rubbish - but
I can't say for sure because I actually did not read this.
As an API writer (that is, one who writes a class intended for use) one should
control how the API is used, not predict it.

Agreed. So if we are on the same page - why do you try to predict how
your API will be used and 'contaminate' it with 'final's?

Point being, if you mark a class as final, there's no more prediction
involved - you have laid down the law.
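
To be concrete, a trivial sketch (invented names) of what "laying down
the law" means - the constraint is enforced by the compiler, not by
documentation:

public final class Money {
  private final long cents;

  public Money(long cents) {
    this.cents = cents;
  }

  public long cents() {
    return cents;
  }
}

// class DiscountedMoney extends Money { }   // rejected by the compiler:
//                                           // "cannot inherit from final Money"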

You can get mixed messages about using final for classes depending on
who you read. For example, Goetz in 2002
(http://www.ibm.com/developerworks/java/library/j-jtp1029.html) does not
mention composition once, and since the use of final on classes can
often be a deliberate decision to discourage use of inheritance and
promote composition, it's an odd omission. What he does mention about
the use of final to enforce immutability is quite right, but I believe
he's incorrect when he states that the use of final on classes
discourages OO design. I think it's the other way around.

Nobody is saying that inheritance is evil, just that it should be
considered carefully when designing a class. It's not acceptable for
starters to *not* think about it: both the decision to use the final
keyword, and to not use it, should be thought out. It shouldn't be a
default. Goetz in 2002, and from the sounds of it you at present, seem
to fall into the camp of "don't mark a class as final unless you've got
really good reasons to do it", whereas I fall into the camp of "don't
_not_ mark a class as final unless you can explain why you want that
class to be a base class."

Inheritance is certainly not a bad thing, when properly used. I just ran
some stats on a medium-sized project I am helping to maintain, and out
of 2849 classes roughly a quarter of them (657 to be precise) extend
another. The majority of them are good uses of inheritance:

- creating custom exception classes (although I know that in some cases
this was overly enthusiastic);
- extending core classes that are meant to be so extended when doing JSF
customizations;
- concrete jobs that extend an abstract job that implements the Quartz
Job interface;
- all the JPA entities extend a @MappedSuperclass;
- concrete serializer classes that extend a default serializer base
class, the whole family meant to handle different types of inventory
serial numbers;
- concrete realizations of base classes (default or abstract) that
provide commonality for managed beans in this application.

And so forth. Bear in mind too that when Bloch and others warn about
inheritance they are referring to implementation inheritance, and where
the extensible classes are being used across package boundaries (IOW,
probably not by the same people who wrote the base classes), and where
the extensible classes are neither explicitly designed nor commented for
inheritance. If all of those conditions obtain then you could have problems.
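
As an illustration of one item from that list, an abstract base class
around the Quartz Job interface might look roughly like this (the job
classes are invented for the example):

import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

// The base class implements execute() once; concrete jobs fill in doWork().
public abstract class AbstractAuditedJob implements Job {

  @Override
  public final void execute(JobExecutionContext context) throws JobExecutionException {
    long start = System.currentTimeMillis();
    try {
      doWork(context);
    } finally {
      System.out.println(getClass().getSimpleName() + " took "
          + (System.currentTimeMillis() - start) + " ms");
    }
  }

  /** The documented extension point for concrete jobs. */
  protected abstract void doWork(JobExecutionContext context) throws JobExecutionException;
}

// A concrete job - the kind of subclass the statistics above would count.
public class NightlyCleanupJob extends AbstractAuditedJob {
  @Override
  protected void doWork(JobExecutionContext context) {
    // delete stale records, etc.
  }
}

Here the base class is explicitly designed for inheritance: execute()
is final so the auditing cannot be broken, while doWork() is the
documented hook.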

AHS
 

Rzeźnik

That might _sound_ ominous, but in practice I find that the only non-
trivial code that actually _can_ be gainfully inherited is code that has
explicitly been _designed_ to be inherited.

If proper thought has gone into making code re-usable, then the use of
the "final" keyword won't be a problem as it will be where it should be
and not where it shouldn't. If proper thought _hasn't_ gone into making
the code re-usable, the use of the "final" keyword isn't likely to be a
problem since the code most likely won't be practically reusable
anyway.

Right but I do not think that shutting the door on inheritance is
substantially better. Quite the opposite if you ask me. Even the code
which was not designed upfront to be reusable may become, in a limited
way, reusable in the unforeseen future. It may still be just easier to
write a well-behaving ancestor than redo all the work or deal with murky
composition. And even if that is not going to be the case, you are not
hurting anyone by omitting final. So I judge 'final' to be usable when
it is obviously known that a class will never be redefined (which is
quite rare - I can think of dealing with 'equals' through subclasses,
utility classes, classes representing the most concrete redefinitions
in the Strategy pattern). Sometimes it suffices just to hide it within
a package and export only an interface; that way you still possess the
ability to redefine it 'silently'.
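
A quick sketch of that last idea, with invented names: publish the
interface and a small factory, keep the concrete class package-private,
and you remain free to rewrite or subclass it later without touching
callers:

// exported API
public interface Greeter {
  String greet(String name);
}

// exported factory - nothing here is meant to be extended
public final class Greeters {
  private Greeters() {}

  public static Greeter standard() {
    return new StandardGreeter();
  }
}

// package-private: free to be rewritten, split or subclassed 'silently' later
class StandardGreeter implements Greeter {
  @Override
  public String greet(String name) {
    return "Hello, " + name;
  }
}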
 

Rzeźnik

Rzeźnik wrote:


Point being, if you mark a class as final, there's no more prediction
involved - you have laid down the law.

For good or for bad?
You can get mixed messages about using final for classes depending on
who you read. For example, Goetz in 2002
(http://www.ibm.com/developerworks/java/library/j-jtp1029.html) does not
mention composition once, and since the use of final on classes can
often be a deliberate decision to discourage use of inheritance and
promote composition, it's an odd omission. What he does mention about
the use of final to enforce immutability is quite right, but I believe
he's incorrect when he states that the use of final on classes
discourages OO design. I think it's the other way around.

'Final' is a part of OO design since it is tied to inheritance, so I
agree with you here.
Nobody is saying that inheritance is evil, just that it should be
considered carefully when designing a class. It's not acceptable for
starters to *not* think about it: both the decision to use the final
keyword, and to not use it, should be thought out. It shouldn't be a
default. Goetz in 2002, and from the sounds of it you at present, seem
to fall into the camp of "don't mark a class as final unless you've got
really good reasons to do it", whereas I fall into the camp of "don't
_not_ mark a class as final unless you can explain why you want that
class to be a base class."

That is a tricky question which is not answerable in general. You might
not know whether the possibility of inheritance is sound at the point of
making the decision.
Inheritance is certainly not a bad thing, when properly used. I just ran
some stats on a medium-sized project I am helping to maintain, and out
of 2849 classes roughly a quarter of them (657 to be precise) extend
another. The majority of them are good uses of inheritance:

- creating custom exception classes (although I know that in some cases
this was overly enthusiastic);

There are two camps, I believe: one holds that every exceptional
condition should have its own type; Occam's followers argue that the
client is typically not interested in such distinctions. I cannot
really decide for myself.

And so forth. Bear in mind too that when Bloch and others warn about
inheritance they are referring to implementation inheritance, and where
the extensible classes are being used across package boundaries (IOW,
probably not by the same people who wrote the base classes), and where
the extensible classes are neither explicitly designed nor commented for
inheritance. If all of those conditions obtain then you could have problems.

Now I have a clearer picture of what Bloch really wrote. Anyway,
implementation inheritance is just a variant of inheritance; it is
neither better nor worse than other types, just trickier to justify.
 

Tom Anderson

You are ominous :) I meant 'omniscient' :)

I suspected this, but am disappointed. I would love 'ominous' to become a
technical software engineering term.

public final void append(ominous String s) ...

tom
 

Tom Anderson

You didn't read it, yet you feel confident in refuting the arguments
you didn't read?

Interesting.

FWIW, far from being "useless rubbish", /Effective Java/ is arguably
the most useful book published to help one write idiomatic and, well,
effective Java code.

Nonetheless, it does contain at least one piece of pretty questionable
advice, namely this one.

I say that despite the fact that i haven't read the book, and don't intend
to. I have, however, come across problems in my work which we could solve
by subclassing an existing class in a third-party library and overriding
some of its methods, in a way that its designers almost certainly did not
intend. If they'd taken JoBo's advice of finalising everything that wasn't
explicitly intended to be overridable, we wouldn't have been able to do
that.

Now, you could counter that what the Blochfather really meant was that you
*should* design your classes to be subclassable, and use final to protect
the bits that must remain unchanged even when subclassed - the emphasis
being on enabling subclassing, not preventing it. If the designers of the
libraries i've had to mangle in this way had done that, then i would still
have been able to fix things by subclassing, and everybody would be happy.
But i think this requires superhuman effort, bordering on ominousness -
they'd have to have anticipated everything someone might usefully do with
their code and provided for it.

I think it's better to take a more permissive approach - finalise the
things that absolutely must not be overridden (of which there will be
fairly few, i would think - mostly security stuff, or very fundamental
support code), and leave the rest open to change, with a large "caveat
overridor" sign on it.
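
In code, that permissive style might look roughly like this (an
invented class, purely illustrative) - final only where an invariant
genuinely depends on it, everything else left open with a warning:

public class ConnectionPool {

  private final int maxSize;

  public ConnectionPool(int maxSize) {
    this.maxSize = maxSize;
  }

  /** Fundamental invariant - absolutely must not be overridden. */
  public final int maxSize() {
    return maxSize;
  }

  /**
   * Hook called whenever a connection is handed out.  Not designed for
   * extension, but deliberately left open - caveat overridor.
   */
  protected void onCheckout() {
    // default: do nothing
  }
}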

tom
 

Rzeźnik

I suspected this, but am disappointed. I would love 'ominous' to become a
technical software engineering term.

public final void append(ominous String s) ...

tom

Yeah, sounds great - lmao :)))
 

Lew

Rzeźnik said:
I am not sure whether we understand each other. Let me reiterate: you
said that one should NOT predict, with which I agree. But you clearly
do not want to see that 'final' is one endless bag of predictions.
Every 'final' you put in your code cries: I PREDICT THAT THIS METHOD/
CLASS HERE IS WRITTEN IN STONE. While sometimes predictions like these
may be valid, more often than not they aren't.

Your point here is incorrect. Declaring a class 'final' is not a
prediction, it's a constraint. You are not predicting that "this
method / class is written in stone" with 'final'. You are forcing it
to be. No prediction involved, just control. I advocate dictating
the use, not predicting it.
I am declaring rubbish not his reasoning per se, but his reasoning as
you described it - that may be two different things. Anyway, there is

I described it thus:
In other words, "Read the book. See for yourself."

What part do you disagree with, that you should consider carefully, or
with the reasoning expressed in the book to which I referred?
no VALID argument against inheritance in an OO language. One may argue

No one is arguing against inheritance, just its abuse. Read the
book. Decide for yourself.
that inheritance should be thought out and thoroughly reviewed but one

Ummm, yeah-ah.
cannot state that it should be abandoned, as it is the only way to make
sure that an OO system is open for future modification while being, at

Straw man, straw man, straw man. I also am not arguing against
inheritance, or for its abandonment, only against its abuse.
the same time, closed so that it is able to execute. The more final
you use, the more closed your class hierarchy becomes - which is
almost always 'the bad thing'.

I am simply saying that one should design and document for inheritance
(of concrete classes), or else prohibit it, in line with and for the
reasons stated in /Effective Java/.

If you don't design a class to be heritable but allow it to be, then
you have problems. You cannot use the circular argument that a
recommended practice for object-oriented programming violates the
principles of O-O without demonstrating that it does, in fact, do so.

So let's try again - drop the straw man and circular arguments.
 

Lew

Rzeźnik said:
So I judge 'final' to be usable when
it is obviously known that a class will never be redefined

If you declare a class 'final', then it is obviously known that it
will not be subclassed. ("Redefined" does not apply here - we're
talking about inheritance, not redefinition.) You're putting the cart
before the horse.

No prediction needed - it is a dictatorship.

If you do not declare a class 'final', then you had better darn well
make sure that it's conditioned for inheritance. If you do not
declare a class 'final', you are giving permission for it to be
inherited. No prediction needed.

The API writer does not predict, he permits.
 

Lew

Rzeźnik said:
We all know what we are talking about.

Precision good. Sloppiness bad.

Correction provided for the benefit of readers present and future who
very well might not know what you're talking about.
 

Rzeźnik

Your point here is incorrect.  Declaring a class 'final' is not a
prediction, it's a constraint.  You are not predicting that "this
method / class is written in stone" with 'final'.  You are forcing it
to be.  No prediction involved, just control.  I advocate dictating
the use, not predicting it.

Here you present just word juggling. If I were to reply, I'd have
to repeat myself.
I described it thus:


In other words, "Read the book.  See for yourself."

What part do you disagree with, that you should consider carefully, or
with the reasoning expressed in the book to which I referred?

You already know what it is I am disagreeing with: "explanation of why
to prefer non-heritability". This strongly suggests that use of
inheritance should be perceived as an anomaly rather than a day-to-day
tool. Also, from what John Matthews and others presented, the author's
claims that "non-heritability" is somehow preferred are nowhere to be
found.
No one is arguing against inheritance, just its abuse.  Read the
book.  Decide for yourself.

If you want almost anything to be final ("Every time you omit 'final'
in a class declaration you should consider carefully") you are in fact
arguing AGAINST inheritance in my opinion. Or, in other words, you
seem to want to eliminate the abuse of inheritance so much that you'd
rather forbid inheritance in the first place.

Ummm, yeah-ah.


Straw man, straw man, straw man.  I also am not arguing against
inheritance, or for its abandonment, only against its abuse.

OK, let's replace abandonment with serious limitation.
I am simply saying that one should design and document for inheritance
(of concrete classes), or else prohibit it, in line with and for the
reasons stated in /Effective Java/.

If you could only remove the part "or else prohibit it"...
If you don't design a class to be heritable but allow it to be, then
you have problems.

Not necessarily

 You cannot use the circular argument that a
recommended practice for object-oriented programming violates the
principles of O-O without demonstrating that it does, in fact, do so.

If you so strongly believe that what you are advocating is
'recommended practice' then it is not strange that anything I've said
makes no sense to you. The point is that "prohibiting inheritance" is not
a recommended practice. If you don't design a class to be thread-safe
but allow it to be used as if it were, you are in far bigger trouble -
do you recommend putting 'synchronized' everywhere?
 
