Chris Uppal

Randolf said:
This typically occurs because the reserved word "goto" isn't
implemented. There are situations where "goto" would be very useful, such
as:

0. An alternative to "break label" since label is currently limited in
where it can be located (such code could be easier to read)

I'm not in the "GOTO is intrinsically evil" camp -- I think it has its uses,
and (if used with discretion, which is the only way that anyone /does/ use goto
these days), can help keep code clean. Mind you, a lot of the reasonable uses
for goto are replaced in Java by try/finally statements; so I'm not sure that
it would actually pay for itself.
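[To make the try/finally point concrete, here is a minimal sketch; the class and method names are illustrative, not from the thread. In C, a common disciplined use of goto is jumping to shared cleanup code at the end of a function; in Java the finally block gives the same guarantee.]

```java
// The finally block runs whether the body completes normally or
// throws -- the role a "cleanup:" label plays in goto-style C.
public class CleanupDemo {
    static final StringBuilder log = new StringBuilder();

    static String process(boolean fail) {
        log.setLength(0);
        try {
            log.append("work;");
            if (fail) {
                throw new IllegalStateException("simulated failure");
            }
            return "ok";
        } finally {
            // In C this would be the cleanup code a goto jumps to.
            log.append("cleanup;");
        }
    }

    public static void main(String[] args) {
        System.out.println(process(false) + " " + log);
        try {
            process(true);
        } catch (IllegalStateException e) {
            System.out.println("failed " + log);
        }
    }
}
```

Either way the program leaves process(), the cleanup has run.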

But the labelled-break thing is, IMO, even worse than goto could possibly be --
it always seems to make the code confusing. Fortunately, it is hardly ever
used. To be honest, I don't know whether the few times I /have/ seen it used
are confusing because the programmer (having labelled break available) didn't
feel the need to find a clearer structure, or because the underlying task logic
itself was inherently confusing, and a labelled break was the best available
way to express it.
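[An editorial illustration of the two alternatives being weighed above; the names are mine. Both methods search a 2-D grid: the first with a labelled break, the second restructured so an extracted method's plain return makes the label unnecessary.]

```java
public class SearchDemo {
    static boolean containsLabelled(int[][] grid, int target) {
        boolean found = false;
        search:
        for (int[] row : grid) {
            for (int cell : row) {
                if (cell == target) {
                    found = true;
                    break search;   // jumps clear of both loops
                }
            }
        }
        return found;
    }

    static boolean containsExtracted(int[][] grid, int target) {
        for (int[] row : grid) {
            for (int cell : row) {
                if (cell == target) {
                    return true;    // extraction replaces the label
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        int[][] grid = {{1, 2}, {3, 4}};
        System.out.println(containsLabelled(grid, 3));
        System.out.println(containsExtracted(grid, 9));
    }
}
```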

1. The ability to share code between methods within a class, which all
end with the same functionality (this could be more efficient than calling
another method; javac would need to generate errors such as attempts to
access variables that belong to different methods, return type mismatches,
etc.)

I think that's technically infeasible given anything like current JVM
implementation technologies.

-- chris
 
Alex Hunsley

John said:
I suspect that Java left in "goto" as a reserved word for the same
reason that Ada supports (with intentionally hideous syntax) "goto" --
considerations of machine-generated code.

But jumping from one method to another? Ye gods! Even Fortran doesn't
allow that, except for a jump-out-to-caller mechanism that is
essentially a poor-man's "throw".

What exactly are you talking about when you refer to jumping from one
method to another? Which language, which construct?
lex
 
Patricia Shanahan

Alex said:
What exactly are you talking about when you refer to jumping from one
method to another? Which language, which construct?
lex

Fortran has a construct called "alternate return" that lets the callee
pick from caller labels supplied as call arguments.

I have only ever seen it in Fortran compiler test programs.

Patricia
 
John W. Kennedy

Alex said:
What exactly are you talking about when you refer to jumping from one
method to another? Which language, which construct?

The hypothetical "improved" (ick! gack!) Java that this thread is
talking about.

But some older languages do have the ability to goto (permanently) out
of a routine to a statement in its caller (or caller's caller, etc.). As
I said, that is essentially a poor-man's "throw".
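[A sketch, with names of my own invention, of how that "goto out of a routine" pattern maps onto Java's throw: a deeply nested helper abandons the whole computation, and control lands in the caller's catch block, which plays the role of the caller-side label.]

```java
public class NonLocalExit {
    static class Abort extends RuntimeException {
        Abort(String why) { super(why); }
    }

    static void innermost(int depth) {
        if (depth == 0) {
            // the non-local jump: unwinds every frame back to caller()
            throw new Abort("bail out from the bottom of the call chain");
        }
        innermost(depth - 1);
    }

    static String caller() {
        try {
            innermost(5);
            return "fell through";          // never reached
        } catch (Abort a) {
            return "landed: " + a.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(caller());
    }
}
```

Unlike the old goto-to-caller, the stack is unwound for you, which is exactly why "throw" is the richer construct.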
 
Luc The Perverse

John W. Kennedy said:
The hypothetical "improved" (ick! gack!) Java that this thread is talking
about.

But some older languages do have the ability to goto (permanently) out of
a routine to a statement in its caller (or caller's caller, etc.). As I
said, that is essentially a poor-man's "throw".

You'd need to clean up the call stack (assuming you were using one at
all... LOL!!!)

I myself, originally a GW-BASIC programmer, have used goto extensively in my
life, and I see absolutely no benefit to using it.

If your program requires truly unusual code flow it can be accomplished with
try catch blocks.

I can't believe this thread is still alive.

I will admit I used goto once in C++ just to be a rebel (certainly not
because I needed one). Then I replaced it with an if statement.
 
John W. Kennedy

Luc said:
You'd need to clean up the call stack (assuming you were using one at
all... LOL!!!)

Any language that can do calls uses some kind of call stack, even if
it's only a chain of statically-stored return addresses. But, yes, you
have to clean it up.
I will admit I used goto once in C++ just to be a rebel (certainly not
because I needed one). Then I replaced it with an if statement.

Goto is good for getting out of a nested loop if you don't have the
equivalent of a Java labeled break. And it can give better performance
in nasty machine-generated decision-table or state-machine code--but I
wouldn't use it in hand-written code.
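[For the state-machine case Kennedy mentions, the usual goto-free Java shape is a switch inside a loop; this toy machine, with names I've made up, accepts strings of the form a+b+ (one or more 'a's followed by one or more 'b's).]

```java
public class StateMachine {
    enum State { START, IN_A, IN_B, REJECT }

    static boolean accepts(String input) {
        State state = State.START;
        for (char c : input.toCharArray()) {
            switch (state) {
                case START:
                    state = (c == 'a') ? State.IN_A : State.REJECT;
                    break;
                case IN_A:
                    if (c == 'b') state = State.IN_B;
                    else if (c != 'a') state = State.REJECT;
                    break;              // 'a' stays in IN_A
                case IN_B:
                    if (c != 'b') state = State.REJECT;
                    break;
                case REJECT:
                    return false;       // dead state: fail fast
            }
        }
        return state == State.IN_B;     // must end having seen a 'b'
    }

    public static void main(String[] args) {
        System.out.println(accepts("aab"));
        System.out.println(accepts("abba"));
    }
}
```

Machine-generated code can jump between states more cheaply with goto, but for hand-written code the enum-plus-switch version stays readable.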
 
Gordon Beaton

I've written far too much C (and Algol 60, Algol 68R, PL/9 and PL/1)
to easily let go of the habit of declaring variables at the start of
a block. If that makes me weird, so be it.

I'm with you on this. I find that readability is improved if
declarations are collected at the start of the method, rather than
interspersed with code statements. While reading code, I think it's
much easier to glance directly to the start of the method (which tends
to stand out anyway) to see how a variable was declared, than to
search upwards from the current position, looking for declarations in
the code.

Methods shouldn't be so long that the declarations end up "far" from
first use anyway, or that the set of needed variables changes
significantly throughout the method.
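[Side by side, the two styles being debated; the method names are mine. Both compute the same average, one with declarations collected at the top, one with each variable declared at first use.]

```java
public class DeclStyles {
    static double averageTop(int[] values) {
        int total;       // all declarations up front, visible at a glance
        int i;
        double result;

        total = 0;
        for (i = 0; i < values.length; i++) {
            total += values[i];
        }
        result = (double) total / values.length;
        return result;
    }

    static double averageInline(int[] values) {
        int total = 0;                 // declared at first use; the
        for (int value : values) {     // loop variable's scope is
            total += value;            // just the loop itself
        }
        return (double) total / values.length;
    }

    public static void main(String[] args) {
        int[] data = {2, 4, 6};
        System.out.println(averageTop(data));
        System.out.println(averageInline(data));
    }
}
```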

/gordon
 
Andrew Thompson

I totally disagree. Assemblers need "goto" but no decent HLL does. That
goes for COBOL too, and before you ask, yes, I've written GOTO-less COBOL.

Huh! I coded COBOL as well, and was not
even aware it *had* a GOTO. (Seems I never
saw the need to RTFM that far - even though
it was my 'instruction manual' for the language.)

Andrew T.
 
Lew

Gordon said:
I'm with you on this. I find that readability is improved if
declarations are collected at the start of the method, rather than
interspersed with code statements.

I would accept your formatting in a code review if you said start of the
block, rather than start of the method. Engineering rules trump formatting
rules, so a "declare at the top" style should still yield to scope limitation.

I distinguish between rules I follow and rules I want others to follow. I
believe we all should limit the scope of variables, and I personally declare
variables inline with use, but I wouldn't ask you to move your declarations in
line.

I happen to believe that readability is improved with variables declared near
their use, diametrically opposite to your viewpoint. I also do not see any
objective evidence that either of us is correct. I do see evidence that
declaring variables in the narrowest scope provides stability and other
engineering benefits.

If I worked in a group that said "declare at the top of the block", I'd follow
their rule except where engineering considerations override.
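[One concrete version of the "stability" benefit Lew alludes to, sketched under my own names: with a single wide-scoped counter, a forgotten reset compiles fine and fails only at run time; with one counter per block, the stale value cannot exist at all.]

```java
public class ScopeLeak {
    static int[] sharedCounter(int[] values) {
        int count = 0;                               // wide scope, reused
        for (int v : values) if (v % 2 == 0) count++;
        int evens = count;
        // The reset "count = 0;" was forgotten -- this still compiles,
        // and the first loop's value leaks into the second.
        for (int v : values) if (v % 2 != 0) count++;
        int odds = count;                            // wrong: evens leaked in
        return new int[] { evens, odds };
    }

    static int[] counterPerLoop(int[] values) {
        int evens = 0;
        for (int v : values) if (v % 2 == 0) evens++;
        int odds = 0;                                // fresh, narrow scope
        for (int v : values) if (v % 2 != 0) odds++;
        return new int[] { evens, odds };
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4, 5};
        int[] buggy = sharedCounter(data);
        int[] good = counterPerLoop(data);
        System.out.println(buggy[0] + " " + buggy[1]);
        System.out.println(good[0] + " " + good[1]);
    }
}
```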

- Lew
 
John W. Kennedy

Andrew said:
Huh! I coded COBOL as well, and was not
even aware it *had* a GOTO. (Seems I never
saw the need to RTFM that far - even though
it was my 'instruction manual' for the language.)

Post-1985, then, I imagine. Until then, it was possible to manhandle a
COBOL program into GOTO-less form, but it tended to be fragile.
 
Martin Gregorie

Lew said:
I would accept your formatting in a code review if you said start of the
block, rather than start of the method. Engineering rules trump
formatting rules, so a "declare at the top" style should still yield to
scope limitation.

I distinguish between rules I follow and rules I want others to follow.
I believe we all should limit the scope of variables, and I personally
declare variables inline with use, but I wouldn't ask you to move your
declarations in line.

I happen to believe that readability is improved with variables declared
near their use, diametrically opposite to your viewpoint. I also do not
see any objective evidence that either of us is correct. I do see
evidence that declaring variables in the narrowest scope provides
stability and other engineering benefits.

If I worked in a group that said "declare at the top of the block", I'd
follow their rule except where engineering considerations override.
I'm happy with either approach and would go with either project standard
without argument. I used to declare variables at the top of a block in
the Algols and do like the scope limitation that provides: I guess that
too much C has stopped me using it. I must try harder.

The layout standard I've seen that I really don't like is the custom of
declaring class-level variables after the methods. Why reverse the
standard used inside blocks and methods other than sheer perversity?
 
Martin Gregorie

John said:
Post-1985, then, I imagine. Until then, it was possible to manhandle a
COBOL program into GOTO-less form, but it tended to be fragile.
Indeed - and I wrote mostly COBOL and assembler before 1985 and mostly
other languages (C, Tal, PL/1) since. I suspect my COBOL habits are
permanently set by that experience.

The problem with goto-less COBOL was that not all conditional clauses
could have ELSE clauses. These implied conditionals were mandatory
parts of some verbs, such as READ.....AT END....

COBOL-85 introduced ELSE clauses to the implied conditionals and in-line
PERFORM clauses, so now it's easy to write:
    PERFORM
        READ A-FILE
            AT END
                MOVE "YES" TO A-FILE-EOF
            ELSE
                NOTE Process the record
        END-IF
    WHILE A-FILE-EOF NOT = "YES".

Prior to COBOL-85 writing it as goto-less code used to look like this:

    PERFORM READ-INPUT-FILE WHILE A-FILE-EOF NOT = "YES".
    ...

    READ-INPUT-FILE SECTION.
    RIF-1.
        READ A-FILE
            AT END
                MOVE "YES" TO A-FILE-EOF.
        IF A-FILE-EOF NOT = "YES"
            NOTE Process the record.
    RIF-EXIT.
        EXIT.

and arguably AT END MOVE 'YES' TO A-FILE-EOF GOTO RIF-EXIT would have
been a more robust solution than using the extra IF statement.

...and that's quite enough COBOL in a Java group. I now return you to
the normal language. I thank you for your time.
 
Chris Uppal

Martin said:
The layout standard I've seen that I really don't like is the custom of
declaring class-level variables after the methods. Why reverse the
standard used inside blocks and methods other than sheer perversity?

I used to do that; I have since reconsidered, so I can speak as a reformed
sinner...

I take it that you wouldn't want to see methods and fields interleaved (at
least, not except in very special cases) ? If so then either the fields go
first or they go last (or possibly both if you split public fields from other
fields). Now it's also fairly normal to want to put public members before
private ones[*]. So, fields, being private, don't go before the public
methods...

So they have to follow /all/ the methods. Simple!

([*] There's a well-regarded recommendation to do it the other way around for
C++, but that is C++-specific -- and I can't remember the justification
anyway.)

-- chris
 
nukleus

Plus, and don't forget about this,
otherwise you'll end up spending lifetimes
fixing your program in case there is an error.

If variables are declared at the top of the methods,
then all the error conditions and variable values
could be seen if you set a breakpoint in the correct place,
because they won't go out of scope.

Especially if you have the exceptions in that method,
as if they are hit, you are COMPLETELY out to lunch,
and it would take you lifetimes to figure out WHY
that exception was hit if you are using declarations
deep inside the method. I even avoid using things like

for (int nInd = 0; nInd < limit; nInd++)

Instead, i do

int nInd;

for (nInd = 0; nInd < limit; nInd++)
{
...
}

So, if i have an error condition inside for loop,
i can see which exact element caused it,
even if i am thrown out of the scope of the loop
by some exception.
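[nukleus's pattern made runnable, under names of my own choosing: because the index lives outside the loop, the catch block can still report which element was being processed when the body threw.]

```java
public class IndexSurvives {
    static int failingIndexOf(int[] values) {
        int i = 0;                       // declared outside the loop
        try {
            for (i = 0; i < values.length; i++) {
                if (values[i] < 0) {
                    throw new IllegalArgumentException("negative value");
                }
            }
            return -1;                   // no failure
        } catch (IllegalArgumentException e) {
            return i;                    // i is still in scope here
        }
    }

    public static void main(String[] args) {
        System.out.println(failingIndexOf(new int[] {5, 7, -1, 9}));
        System.out.println(failingIndexOf(new int[] {1, 2}));
    }
}
```

The usual counter-argument, made elsewhere in the thread, is that a debugger shows the loop frame at the throw point anyway, so the widened scope buys little.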

I absolutely hate "code reviews".
It simply means that there is some smart ass,
sitting on top of a bunch of dummies,
telling them how screwed up they are,
and all it produces at the end
is guilt, leading to fear
of losing one's skin.

Instead of those stupid "code reviews",
you just work within a group and maintain
a constant feedback and exchange the ideas.

I know plenty of those rules. But as far as the end product goes,
those "rules" do not cover the most subtle issues,
especially maintainability and future code modifications.
Long subject.

I know, I know.

:---}

It is called scitsophrenia.
In one situation you think like this,
and in another situation you think like that.

No wonder...

Yep. I like those "shoulds",
the byproducts of a rigid mind.

Now, on what basis should we limit the scope of variables?
What does it improve?
Not that I do not see what you mean,
but you need to be able to put the WHOLE thing
in perspective, and not only in terms of your
ideology, but the debugging process, code extensibility,
maintenance issues, code simplicity and readability
and all sorts of other things.

The rules of:

THOU SHALT LIMIT THE SCOPE OF YOUR VARIABLES
is simply an incomplete statement.
There are no ifs and buts in it.
So, it is not programming.
It is totalitarian dictatorship
of black and white.

Programming is pure logic,
as THAT is where all the intricacies come in.

If you use this black and white model,
your program will be the most oppressive thing there is.
To the user, to the programmer and the rest of them
mortals that come its way.

Uhu.

And what happens if you hit an exception
during debugging?

Ever thought?

Can you look at those variables inside some loop
and see what EXACTLY happened?

How long will it take you to fix bugs with this approach?
How many times would you have to recompile,
and how many source files would you have to go through,
to see where the error is?

You see, looking at the source is not the same thing
as looking at the state of your program and variables
in RUN time.
Cause what you THOUGHT MUST be happening,
for some strange reason, does not happen.
And, usually, it is the simplest things imaginable
that break your royal code.

Which is about the most foolish thing to do.
Just recently, i was lazy enough to declare my
variables in one lengthy operation where a number
of files are opened and, in some cases, a single
statement would cause the entire file load and parsing.
The error would happen after tens of minutes of operation
and, once the exception was hit, i did not have the SLIGHTEST
clue what exactly was the reason for that exception.

Once i moved the declaration to the outermost scope
of the method, i could see that error within seconds.

How's THAT for a lesson in programming?

It all depends.
Sure, you are guaranteed better memory deallocation
and sure, you may avoid some subtle bugs if you initialize
the variables at the outermost scope and, later on,
use them when they were not properly set in the inner levels,
and, instead of compile time errors, you get run time exceptions.
No question about it.
But this is just the BEGINNING of a story,
and not the end.

Well, BOTH are correct
and both are wrong.

Because you need to specify ALL the conditions
and ALL the tradeoffs.

Too general to even consider.
I doubt you have enough evidence for stability,
and i do not even attempt to figure out what those
"engineering benefits" are in specific terms.

"declare at the top of the block" dictates
are no different then "use bracess this way" dictate.
You can hit me with a sledge hammer,
but i will still use the braces on the next line,
and i have a REASON for it, you see,
and the reason is simple enough for a mere mortal to comprehend.

If you use braces at the end of a line,
you can never be certain if you put them in the
right scope, especially in nested "if else" clauses.
I'm happy with either approach and would go with either project standard
without argument. I used to declare variables at the top of a block in
the Algols and do like the scope limitation that provides: I guess that
too much C has stopped me using it. I must try harder.

First of all, if you declare variables at the very beginning
of your method, you can review that code and see if some
variables and, therefore, some operations are simply overkill,
or you are doing some extra work on a method level that would
not have to be done if your code was more structured.

So, by sheer fact that you have some variables,
it indicates that you do some operations that NEED those
variables, and you can see that you might be using several
variable to carry the same exact information except in
a slightly different context, and so you can fix that code.

As far as memory management goes, it is true that the memory
deallocation may improve if you declare those variables
at the deepest scope possible, as in that case
the deallocation is pretty much automatic.

But again, that is just a beginning of the story,
and not the end.
The layout standard I've seen that I really don't like is the custom of
declaring class-level variables after the methods. Why reverse the
standard used inside blocks and methods other than sheer perversity?

Well, i must be a pervert then.
I agree with you that it is better to move it in front,
generally speaking, but some methods have so much stuff
attached to them that it takes you quite some time
before you see the constructor code, which is about
the FIRST thing i want to see.

Then, after constructor code, i like to put the highest,
possibly thread level methods, such as start, stop, run,
Terminator (ever heard of such an animal), and things
like that.

Then, I'd like to see the main method, starting the
whole operation.

Then, after it, i like to see the more fine grained
methods.

And, about the LAST thing i want to see, is the event
handler code. That comes as just about the LAST thing
in that class, and the beauty of it, i know EXACTLY
where i can find it, even if i do not remember the
exact name of some variable or method,
which is an art on its own.

I can remember just about every single method or
variable in a pretty complex program, just because
of naming convention i use.

I can see what the methods do simply by looking
at the variable names those methods use.

And, the LAST thing i want to see is...

Tada!

The VARIABLES, in bulk.
But even there, they are all nicely sequenced
and the first in order come those of the most
outer scope and meaning, the MAJOR variables,
that could correspond to entire complex classes,
frames, etc.

Then come the file-level variables: file names,
handles, etc.

Then come the groups of related variables describing
some logically related operations or constructions.

Then comes the miscellaneous stuff, which i periodically
look at to see what kind of garbage is there,
as there should be none in the first place: just about
all the variables should be declared at a more local
scope, otherwise you have a database consistency issue,
when several different methods access the same global,
and, in some cases, one hand does not know what the other
is doing.

As a general rule, I use as few globals as I can manage.
So far, my stuff runs, compiles, extends, and maintains
like a tank. Try to kill it. Even if you unplug the power
cord, in the middle of a file operation, I'll recover,
as long as operating system can recover the damaged file.

:--}
 
Chris Dollin

nukleus said:
Plus, and don't forget about this,
otherwise you'll end up spending lifetimes
to fix your program in case there is error.

If variables are declared at the top of the methods,
then all the error conditions and variable values
could be seen if you set a breakpoint in correct place
cause they won't go out of scope.

If your methods are /so/ big
that this presents a problem to you
then I respectfully suggest
that your methods are /too/ big;
making them smaller
will simplify your [1] scope issue
and allow you to give useful names
(and a respectable meaning)
to the segments you extract
as well as
making your names properly local.

[1] I've had an issue with very-local-scoping
perhaps once in the past twenty years;
that was in C.

I spotted it
the next time I ran my ad-hoc tests
and fixed the issue in
about twenty seconds.

Of course anecdotes
are not data.
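[Dollin's extract-method advice in miniature, with names I've invented: one would-be long method becomes three small ones, each with a useful name and only the variables it actually needs, so nothing lingers in scope.]

```java
public class Extracted {
    static double reportAverageLength(String[] words) {
        int total = totalLength(words);
        return average(total, words.length);
    }

    static int totalLength(String[] words) {
        int sum = 0;                 // local to this step only
        for (String w : words) {
            sum += w.length();
        }
        return sum;
    }

    static double average(int total, int count) {
        return count == 0 ? 0.0 : (double) total / count;
    }

    public static void main(String[] args) {
        System.out.println(reportAverageLength(new String[] {"ab", "cdef"}));
    }
}
```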
 
Oliver Wong

Martin Gregorie said:
Indeed - and I wrote mostly COBOL and assembler before 1985 and mostly
other languages (C, Tal, PL/1) since. I suspect my COBOL habits are
permanently set by that experience.

The problem with goto-less COBOL was that not all conditional clauses
could have ELSE clauses. These implied conditionals were mandatory
parts of some verbs, such as READ.....AT END....

COBOL-85 introduced ELSE clauses to the implied conditionals and in-line
PERFORM clauses, so now it's easy to write:

    PERFORM
        READ A-FILE
            AT END
                MOVE "YES" TO A-FILE-EOF
            ELSE
                NOTE Process the record
        END-IF
    WHILE A-FILE-EOF NOT = "YES".

Prior to COBOL-85 writing it as goto-less code used to look like this:

    PERFORM READ-INPUT-FILE WHILE A-FILE-EOF NOT = "YES".
    ...

    READ-INPUT-FILE SECTION.
    RIF-1.
        READ A-FILE
            AT END
                MOVE "YES" TO A-FILE-EOF.
        IF A-FILE-EOF NOT = "YES"
            NOTE Process the record.
    RIF-EXIT.
        EXIT.

and arguably AT END MOVE 'YES' TO A-FILE-EOF GOTO RIF-EXIT would have been
a more robust solution than using the extra IF statement.

What about:

    PERFORM
        READ A-FILE
            AT END
                MOVE "YES" TO A-FILE-EOF
            NOT AT END
                NOTE Process the record
        END-READ
    WHILE A-FILE-EOF NOT = "YES".

?

I'm not sure how "standard" this is, but I believe this works on both Liant
RM/COBOL and ILE COBOL400.

- Oliver
 
nukleus

Chris Dollin said:
If your methods are /so/ big
that this presents a problem to you
then I respectfully suggest
that your methods are /too/ big;
making them smaller

Agreed. But, in the case of the code I am dealing with,
it is easier said than done.
I did try to get rid of as much stuff
as I could manage, but there are some more
subtle issues involved.
ANY class is guaranteed to be aware of the
main class and so it can ask main for just
about any information out there. But if you
move some code to their own classes, you simply
add at least one more level of indirection and
a number of get/put methods.

Sure, we can argue about it this way or that,
and if i wrote this code from scratch, i'd
probably use an entirely different approach
and use a state machine and fully asynchronous
operation. Unfortunately, to rewrite the code
to do that is just about as good as scrapping
the whole thing and starting from scratch.
Too much work to be done and too little time to do it.
will simplify your [1] scope issue
and allow you to give useful names
(and a respectable meaning)
to the segments you extract
as well as
making your names properly local.
Agreed.

[1] I've had an issue with very-local-scoping
perhaps once in the past twenty years;
that was in C.

I spotted it
the next time I ran my ad-hoc tests
and fixed the issue in
about twenty seconds.

Of course anecdotes
are not data.

Yep, and I know EXACTLY what I am talking about.
Played with this scope thing in just about
any way imaginable. In the simplest cases i do
use the most local scope possible, usually at the
point of initial prototyping. But then I look at the
code and try to move all declarations as high up
as I can manage, because I know all too well that
when I have some bug and a breakpoint hit or exception
triggered, about the last thing i want to see
is my variables getting out of scope.
 
Lew

I know, I know.

:---}

It is called scitsophrenia.
In one situation you think like this,
and in another situation you think like that.

The correct spelling is "schizophrenia" and it refers to a psychosis whereby
the patient suffers hallucinations, delusions, inappropriate affect or other
symptoms of being completely out of touch with reality. You may be referring
to so-called "split personality", which is not schizophrenia but a completely
different disorder.

<http://en.wikipedia.org/wiki/Schizophrenia>

A stereotypic type of schizophrenia is paranoid schizophrenia, wherein the
sufferer imagines themselves the object of persecution, ridicule, oppression or
other hostile behavior when no such behavior actually exists.

nukleus said:
Uhu.

And what happens if you hit an exception
during debugging?

Ever thought?

Of course.
Can you look at those variables inside some loop
and see what EXACTLY happened?
Yes.

How long will it take you to fix bugs with this approach?

Not too long.
How many times would you have to recompile,
and how many source files would you have to go through,
to see where the error is?

Once, after I fix the bug.
You see, looking at the source is not the same thing
as looking at the state of your program and variables
in RUN time.
Cause what you THOUGHT MUST be happening,
for some strange reason, does not happen.
And, usually, it is the simplest things imaginable
that break your royal code.

That is why I use exception handling, logging and debuggers.

nukleus said:
Yep. I like those "shoulds",
the byproducts of a rigid mind.

Or of one that has read Joshua Bloch on this subject (/Effective Java/) and
others, and has understood the benefits of the approach.

The /ad hominem/ attack does not go far to support the argument.

- Lew
 
Lew

nukleus said:
Yep, and I know EXACTLY what I am talking about.
Played with this scope thing in just about
any way imaginable. In the simplest cases i do
use the most local scope possible, usually at the
point of initial prototyping. But then I look at the
code and try to move all declarations as high up
as I can manage because I know all too well, that
when I have some bug and a breakpoint hit or exception
triggered, about the last thing i want to see
is my variables getting out of scope.

Or lingering past their useful scope and causing trouble thereby.

- Lew
 
