Java language and library suggestions

Lew

Tomas said:
As I already said, it's mainly about readability.

Your own personal definition of readability. I have no trouble whatsoever
reading try-catch blocks. Why do you?

Actually, I find "@Safe" no easier to read.

Tomas said:
Why not?? You do this every time you throw an exception or make an
assertion.

One question mark suffices to indicate an interrogative.
if (i < 0)
    throw new IllegalArgumentException("negative argument");

This is an elevation of an invalid argument into a runtime exception.

That's not an elevation. An invalid argument is a runtime exception; throwing
a RuntimeException is an explicit way of reporting that.

Taking a typical error/exception taxonomy:
- RuntimeException - programmer error.
- checked Exception - recoverable environment error.
- Error - error from which a program cannot reasonably be expected to recover.

an invalid argument is a programmer error, exactly the level that
RuntimeException reports. Your suggestion elevates a checked exception to an
error, and one that can be disabled, at that.

As I stated upthread, if @Safe converted a checked exception to a runtime
exception, that would be marginally better. It is still inferior to an explicit
try-catch for the checked exception, though.
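
For concreteness, the explicit handling described above looks roughly like this
(a minimal sketch, assuming a hypothetical readConfig() that declares IOException):

import java.io.IOException;

class ExplicitHandling {
    static void init() {
        try {
            readConfig();                  // hypothetical method that declares IOException
        } catch (IOException e) {
            throw new RuntimeException(e); // the conversion is explicit and visible at the call site
        }
    }

    static void readConfig() throws IOException { /* ... */ }
}

The conversion to an unchecked exception still happens, but it is spelled out at
the call site rather than hidden behind an annotation.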
 
Tomas Mikula

Lew said:
Your own personal definition of readability.  I have no trouble whatsoever
reading try-catch blocks.  Why do you?

Actually, I find "@Safe" no easier to read.



One question mark suffices to indicate an interrogative.



That's not an elevation.  An invalid argument is a runtime exception; throwing
a RuntimeException is an explicit way of reporting that.

Taking a typical error/exception taxonomy:
- RuntimeException - programmer error.
- checked Exception - recoverable environment error.
- Error - error from which a program cannot reasonably be expected to recover.

an invalid argument is a programmer error, exactly the level that
RuntimeException reports.  Your suggestion elevates a checked exception to an
error, and one that can be disabled, at that.

If I think that the code can't throw an exception and it actually can,
it's my error.
OK, according to that taxonomy, be it RuntimeException.
I'm not arguing whether it should be AssertionError or
RuntimeException. I'm arguing about the usefulness of the construct.

Tomas
 
Arne Vajhøj

Tomas said:
From the beginning I'm talking about situations when throwing an error
is the most proper handling of the situation.

Our whole point is that the number of situations where usage
of this construct makes sense is too small to justify a language
change.
As I noted in the example, bubbling of the exception would be a
logical error - getting an IOException from a method that by
definition does not do any I/O.

No.

It calls an API that is declared to throw an IOException.

Maybe it is not possible in the current implementation.

But it may be possible in 5 or 10 years.

Which is why it is good to program to the contract
and not to one's knowledge of the implementation.

Arne
 
Arne Vajhøj

Tomas said:
Use of @Safe is the outcome of that decision. No one is forced to use
it. If you think that it would encourage people to use it
inappropriately, then most probably these people would do worse things
(like empty catch blocks) without it.

That comparison is not really valid.

The existence of exceptions and checked exceptions has huge benefits
for coding style.

And empty catch blocks are very easy to spot, both via manual
inspection and via a style checker.

Using the @Safe annotation will not really have any impact at all
(saving a bit of typing is not important).

And detecting misuse of it would be relatively difficult.
Use of @Safe *is* explicit handling of the exception. It's just a
different syntax.

We don't want two syntaxes for the same thing unless it really provides
some benefit.

Arne
 
Lew

I said:
What is the best practice for situations where you know that a particular
invocation of a function will not throw an exception that is in the
function's exception specification? Obviously an empty catch block isn't
an acceptable option. Should you put an assertion in the catch block? Or
is there some other preferred idiom?

Log the exception and return gracefully.

public void Foo()
{
    try
    {
        invokeMethodThatThrowsIOE();
    }
    catch ( IOException exc )
    {
        final String msg = "IOException: " + exc.getLocalizedMessage();
        logger.error( msg );
        return;
    }
}

You could put an assertion after the catch block but it is hard to think what
one would assert inside the catch block. What would be the invariant there?
 
Lew

Patricia said:
I don't like return here, because the rest of the code may be depending
on Foo having done something that has not been done. It could lead to
bad outcomes, such as the wrong thing being written to a file that
survives beyond the program run. Wouldn't a runtime exception be better?
Or even a System.exit call.

A very valid point. There are times when a return is appropriate, and there
are times when it isn't.

Often having a System.exit() is the wrong thing to do - for example, in a web
application. One has to apply situational wisdom and figure out a way to
gracefully return to a valid program state that will not disturb the user.

The situation is easier when the method returns a reference. Returning a
sentinel value like 'null' or an enum 'ERROR' value can help invoking code
decide where to go next.

I try to avoid simple propagation of exceptions, checked or runtime. It makes
it hard to keep a program running in a useful way. I prefer to trap and log
exceptions, then branch to a valid state, which often is a user-friendly error
screen.
http://java.sun.com/j2se/1.4.2/docs/guide/lang/assert.html#usage-control
suggests

assert false; // Execution should never reach this point!

to mark a place in the code that should never be reached.

That works, but I also like to put a more affirmative assertion after the
catch block.
 
Joshua Cranmer

Patricia said:
I would not use AssertionError for this, for several reasons:

1. It has a defined meaning, "Thrown to indicate that an assertion has
failed.", that this usage does not match.

The standard construct I see in one codebase (admittedly C++) is that
unreachable code is handled with the equivalent of assert false. In a
way, an assertion is failing: the assertion that the code is not reached.
 
Arne Vajhøj

I said:
Return, rather than terminate? If the exception is thrown, that's
evidence of programmer error, in the face of which I'm not sure the
program could sensibly continue; that's why an assert seems appropriate...

Only assuming that the current implementation will never change.

Usually not a good assumption.
...and that's why an assert doesn't seem appropriate; the invariant is
something like "this code should never be executed", but one can't
directly express that with an assert.

As others have already stated, you can use assert false.

Arne
 
Arne Vajhøj

I said:
What is the best practice for situations where you know that a particular
invocation of a function will not throw an exception that is in the
function's exception specification? Obviously an empty catch block isn't
an acceptable option. Should you put an assertion in the catch block? Or
is there some other preferred idiom?

Handle it as an exception that will happen.

Some day, when the code has been refactored and enhanced, the
exception declared in the API will suddenly be there.

Arne
 
Tom Anderson

Arne said:
Yes.

But it is very bad code.

The safe construct is relying on knowledge about the implementation of both
the calling and the called code instead of just relying on the exposed
APIs.

But this is complete nonsense! That example DOES only rely on the exposed
APIs!

WriterEncoder.writeEncoded's API specifies that it throws the same checked
exceptions as the writer's write method. StringWriter.write declares that
it throws no checked exceptions. Therefore, if you use
WriterEncoder.writeEncoded with a StringWriter, you are permitted to
conclude that it won't throw any checked exceptions. That is not bad code,
and it is not relying on knowledge of implementation details.
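
To make the example concrete, here is a rough sketch of the classes being discussed,
with WriterEncoder and writeEncoded reconstructed from the description in this thread
(the encoding logic itself is omitted):

import java.io.IOException;
import java.io.StringWriter;
import java.io.Writer;

// Hypothetical encoder along the lines of the upthread example: it writes only
// to the Writer it was given, so the Writer is the only source of IOException.
class WriterEncoder {
    private final Writer out;
    WriterEncoder(Writer out) { this.out = out; }
    void writeEncoded(String s) throws IOException {
        out.write(s); // encoding details omitted
    }
}

class Example {
    static String encode(String s) {
        StringWriter sw = new StringWriter();      // StringWriter.write never throws IOException
        WriterEncoder enc = new WriterEncoder(sw);
        try {
            enc.writeEncoded(s);
        } catch (IOException e) {
            throw new AssertionError(e);           // unreachable as long as the target is a StringWriter
        }
        return sw.toString();
    }
}

This is the shape of the code that the rest of this sub-thread argues about.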

tom
 
Tom Anderson

Tomas Mikula wrote:
...
...

Patricia said:
I would not use AssertionError for this, for several reasons:

1. It has a defined meaning, "Thrown to indicate that an assertion has
failed.", that this usage does not match.

2. It breaks a key invariant for AssertionError. It is not thrown from
code that is run with assertions disabled.

I don't think that meaning or that invariant are as set in stone as you
think they are. Sun suggests exactly the usage Tomas is using - search for
"throw new AssertionError":

http://java.sun.com/j2se/1.5.0/docs/guide/language/assert.html

This is very much an example of using the assertion mechanism to check
flow control - the AssertionError is only fired if an allegedly impossible
branch happens.
3. Passing e to the AssertionError constructor forces the message to be
e.toString(). Is that the most useful message for explaining what is
going on?

That's a good point. Sadly, AssertionError doesn't have a (String,
Throwable) constructor, but you could do:

catch (IOException e) {
    AssertionError ae = new AssertionError("an exception which was declared to be impossible occurred"); // or better
    ae.initCause(e);
    throw ae;
}

tom
 
Tom Anderson

Lew said:
Amen, brother!

I don't see the saving of six lines of typing to be worth the burial of so
serious a decision.

And tom, while the proposed @Safe (obviously it wouldn't be "@safe") "is" an
exception handler under the hood, it doesn't look like one. I espouse
writing explicit exception handlers for checked exceptions. The purpose of a
checked exception is to require explicit handling. To dodge that to save a
few lines of typing ("Ow! My poor fingers!") defeats that purpose,

This is not about typing, it's about reading.

This business about exception handlers being perfectly readable, and thus
a fine way of dealing with this situation, is nonsense. An exception
handler says to the reader "there is an exception which can occur here,
and this is how we deal with it". Here, we're talking about exceptions
which *cannot* occur. Writing catch blocks for them simply serves to give
the reader more to read, and once they've read it, confuses them as to why
it's there.

We evidently have different feelings about catch blocks, though. You
posted an example a few days ago, about closing streams, which had some
great number of catch blocks spread out down a page - taking a screen to
do what i'd do in a few lines. You think that's good code. I don't. I
thought that was shocking.
and as Arne points out, moves the realm of expected exception to
impossible error.

Have we been reading completely different threads? The whole point of this
is for situations where the exception *is* impossible. Or perhaps you
could explain to me how this line:

Reader r = new InputStreamReader(System.in, "UTF-8");

is capable of throwing an IOException.
A checked exception is supposed to represent a recoverable situation, an
error an unrecoverable one. Why elevate the seriousness of the flaw?

You've missed the point - nothing is being elevated, because that catch
block *can never run*. And because it can never run, we put what is
effectively an assertion in it, so that if some day the impossible happens
(due to a bug in the standard library, or a rather surprising change in
the specification) and it does run, we find out about it.
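
In code, the idiom being described is roughly this (a sketch; UnsupportedEncodingException
is the IOException subclass that constructor actually declares):

import java.io.InputStreamReader;
import java.io.Reader;
import java.io.UnsupportedEncodingException;

class ImpossibleCatch {
    static Reader utf8Stdin() {
        try {
            return new InputStreamReader(System.in, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            // Every Java platform is required to support UTF-8, so this branch
            // should never run; if it somehow does, fail loudly rather than carry on.
            throw new AssertionError(e);
        }
    }
}
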
Instead of calling lazy programming "Java as it is" and a call to
improve coding practices putting one's head in the sand, if you become a
proponent of making Java (not "java") coding rigorous and disciplined,
that's what looks like a reasonable idea. Why promote laziness, poor
design and whining?

Firstly, i think that Tomas's proposal is an outright good one - it would
make code clearer and more concise where used appropriately. That seems
like the most important test of a language feature to me.

Secondly, i think that where it could be misapplied, it would be better
than the misapplication of other features (empty catch blocks) that
currently happens. That also seems like a pretty good test.

I don't see how this could be construed as support for laziness, poor
design and whining.

tom
 
Tom Anderson

Lew said:
Log the exception and return gracefully.

public void Foo()
{
    try
    {
        invokeMethodThatThrowsIOE();
    }
    catch ( IOException exc )
    {
        final String msg = "IOException: " + exc.getLocalizedMessage();
        logger.error( msg );
        return;
    }
}

Seriously? So if a method which you were assuming would never throw an
exception threw an exception, you'd effectively shrug, mention it, and
carry on? You wouldn't hold up your hands and say "whoa, that shouldn't
happen, i'm going to stop right now"?
You could put an assertion after the catch block but it is hard to think
what one would assert inside the catch block. What would be the
invariant there?

That it's unreachable - so "assert false;" would do.

tom
 
Arne Vajhøj

Tom said:
But this is complete nonsense! That example DOES only rely on the
exposed APIs!

No.

It relies on:
1) that it is indeed a StringWriter and not another writer that gets
passed in as argument
2) that writeEncoded only throws IOException if the passed Writer
throws IOException.

Bad code.
WriterEncoder.writeEncoded's API specifies that it throws the same
checked exceptions as the writer's write method.

No it does not.

WriterEncoder.writeEncoded's API specifies that it
throws IOException. It does not say anything about
when it does so.
StringWriter.write
declares that it throws no checked exceptions. Therefore, if you use
WriterEncoder.writeEncoded with a StringWriter,

But the day the code is changed to pass another Writer,
the code still compiles fine, but the code is broken.

As usually happens when coding to the implementation instead of
the interface.
you are permitted to
conclude that it won't throw any checked exceptions. That is not bad
code, and it is not relying on knowledge of implementation details.

It is bad code, because it makes two assumptions about
implementation.

Arne
 
Arne Vajhøj

Tom said:
This is not about typing, it's about reading.

This business about exception handlers being perfectly readable, and
thus a fine way of dealing with this situation, is nonsense. An
exception handler says to the reader "there is an exception which can
occur here, and this is how we deal with it". Here, we're talking about
exceptions which *cannot* occur. Writing catch blocks for them simply
serves to give the reader more to read, and once they've read it,
confuses them as to why it's there.

Have we been reading completely different threads? The whole point of
this is for situations where the exception *is* impossible. Or perhaps
you could explain to me how this line:

Reader r = new InputStreamReader(System.in, "UTF-8");

is capable of throwing an IOException.

Not now.

But just:

Reader r = new InputStreamReader(System.in, strfrompropertiesfile);

is enough to break the guarantee.

It is not good to have code that breaks without compiler warnings
for a simple code refactoring like the one above.
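
To spell out that scenario, a sketch of the refactored version (the Properties lookup
and property name are made up for illustration; the point is that the formerly
"impossible" catch branch is now reachable, and the compiler stays silent):

import java.io.InputStreamReader;
import java.io.Reader;
import java.io.UnsupportedEncodingException;
import java.util.Properties;

class RefactoredReader {
    static Reader openStdin(Properties props) {
        String charset = props.getProperty("stdin.encoding", "UTF-8"); // now configuration-driven
        try {
            return new InputStreamReader(System.in, charset);
        } catch (UnsupportedEncodingException e) {
            throw new AssertionError(e); // reachable at runtime for a bad configuration value
        }
    }
}
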
Firstly, i think that Tomas's proposal is an outright good one - it
would make code clearer and more concise where used appropriately. That
seems like the most important test of a language feature to me.

Secondly, i think that where it could be misapplied, it would be better
than the misapplication of other features (empty catch blocks) that
currently happens. That also seems like a pretty good test.

As stated earlier, the comparison with empty catch blocks
is not a very good one because:
* this feature does not carry the same benefits as exceptions
and checked exceptions
* misuse of this feature is very difficult to find, while an
empty catch block is easy to find both visually and
with style checkers

Arne
 
Lew

Wouldn't throwing an IllegalStateException match such a situation better
.. after all, it's by your definition a state that should never happen/is
illegal ..

Excellent idea.
 
Tomas Mikula

Arne said:
No.

It relies on:
1) that it is indeed a StringWriter and not another writer that gets
    passed in as argument

The very same method that uses @Safe passed StringWriter to the
constructor of WriterEncoder just a couple of lines before. So the
method is only relying on itself.
2) that writeEncoded only throws IOException if the passed Writer
    throws IOException.

Bad code.


No it does not.

WriterEncoder.writeEncoded's API specifies that it
throws IOException. It does not say anything about
when it does so.

The API can't fully express the semantics.
If the semantics of WriterEncoder changes without generating a compiler
warning or error, that is a bad decision by the WriterEncoder
designer.
And you can hardly avoid this risk of changing semantics with
anything. You rely on the documentation all the time. For example, is
it bad practice to rely on java.sql.Statement.close() to close the
associated ResultSet? (I guess you will say yes, so I provide another
example.) Is it bad to rely on String.length() to return a non-negative
value? String.length()'s API specifies that it returns an int. It does
not say anything about the particular value of the returned int.
But the day the code is changed to pass another Writer,
the code still compiles fine, but the code is broken.

As usually happens when coding to the implementation instead of
the interface.


It is bad code, because it makes two assumptions about
implementation.

Summarized:
- it doesn't make the first assumption
- you make the second kind of assumption all the time

Tomas
 
Lew

Joshua said:
The standard construct I see in one codebase (admittedly C++) is that
unreachable code is handled with the equivalent of assert false. In a
way, an assertion is failing: the assertion that the code is not reached.

That can be done. That begs the question of whether it should be done.

In another part of this thread, I asked:
You could put an assertion after the catch block but
it is hard to think what one would assert inside the
catch block. What would be the invariant there?

and Patricia Shanahan also suggested 'assert false;'. Neither of you answered
the question: What is the invariant we mean to enforce, and then to assert
was enforced?

With 'assert false : "Should not throw IOException";' we're saying that the
invariant at this point in the code is 'false'. That's a rather empty
invariant. Really, 'assert false;' is a hack.

Normally we regard as invariants those things that should hold *true* in an
algorithm, for example, that we have a valid URI. An affirmative invariant,
one that says, "We have made this [pre|post]condition hold!" looks like this:

private static final String HARDCODED_URI = "http://valid.uri.com/";
// because hard-coded values are so much better than configured!
final URI uri; // lest someone later "accidentally" change it!
try
{
    uri = new URI( HARDCODED_URI );
}
catch( URISyntaxException exc )
{
    throw new IllegalStateException( "The impossible!", exc );
}
assert uri != null; // exceptions enforce, asserts prove

This is what the real invariant is - not that the impossible happened, but
that the necessary happened.
 
Tomas Mikula

Arne said:
Not now.

But just:

Reader r = new InputStreamReader(System.in, strfrompropertiesfile);

is enough to break the guarantee.

And this is the case where I want the program to crash. It is reading the
configuration during initialization and the configuration contains an
invalid value. I get an error, fix the configuration and run again.
And if you don't want your program to crash in this situation, no one
forces you to use this construct.
It is not good to have code that breaks without compiler warnings
for a simple code refactoring like the one above.




As stated earlier, the comparison with empty catch blocks
is not a very good one because:
* this feature does not carry the same benefits as exceptions
   and checked exceptions

No, not the same benefits. It has its own benefits. But the talk here
is about misuse, not benefits. And misused @Safe would usually do less
damage than misused try-catch.
* misuse of this feature is very difficult to find, while an
   empty catch block is easy to find both visually and
   with style checkers

I can't object, but compare it to the construct that it is meant to
replace, i.e.

try {
    safe_statement;
} catch (EXCEPTION e) {
    throw new AssertionError(e); // or throw new RuntimeException(e);
}

How is misuse of @Safe more difficult to find than the misuse of this?

Tomas
 
Arne Vajhøj

Tomas said:
The very same method that uses @Safe passed StringWriter to the
constructor of WriterEncoder just a couple of lines before. So the
method is only relying on itself.

I don't think "the declaration of the argument is probably just
a few lines above" is a good argument in software development.

It may start there and then be refactored to somewhere else.
The API can't fully express the semantics.
If the semantics of WriterEncoder changes without generating a compiler
warning or error, that is a bad decision by the WriterEncoder
designer.

No no.

He is not changing anything.

He gave you an API where he said that he might throw an
exception.

You decided to code against his implementation instead
of the API he gave you.

He changed the implementation without changing the API.

Your code broke.

You violated the rules of encapsulation.

He was doing good OOP.
And you can hardly avoid this risk of changing semantics with
anything.

Maybe not. But in this case the API actually provides the
information that it may throw an exception.

The fact that some information may be missing is not a good
argument to ignore the information provided.
You rely on the documentation all the time. For example, is
it bad practice to rely on java.sql.Statement.close() to close the
associated ResultSet? (I guess you will say yes, so I provide another
example.)

In this case the JDBC specification says that the implementation
must do that, so it is not up to the implementer
what he does.

It can still be argued that it is good to do it explicitly,
because the implementation may have an error.

It would still be an error in the implementation, but
defensive programming is good.
Is it bad to rely on String.length() to return a non-negative
value? String.length()'s API specifies that it returns an int. It does
not say anything about the particular value of the returned int.

No, but it has to return the length. Length cannot be negative.

So a correct implementation will always return an int >= 0.

Again this is not something that is left to the implementation.
Summarized:
- it doesn't make the first assumption

It does.
- you make the second kind of assumption all the time

1) That does not make it good.
2) And it is hopefully very rare that such blatant violations of
OOP encapsulation are done.

Arne
 
