Graduating soon in Comp Sci. Need real world advice.

Rene

Stephen Kellett said:
Code documents what the code does. That isn't much use - you need to
know what the code is meant to do - that is what comments and
documentation are for. Which is why "code is self documenting" is a very
poor software engineering attitude to take - it shows you don't
understand a fundamental insight about writing and maintaining reliable
code.

Sorry, I should have stripped the "self" above, it was meant more or less
as a joke with a little factoid (it documents the writer's abilities)
attached to it. Probably should have added more smilies.

I'm with you on this matter. Interference in multi-threaded programs -
which essentially all java programs are - is quite hard to document, with
or without any help from the language.
People teaching comp sci. are obviously not teaching something simple
but fundamental when we keep seeing this nonsense about code being self
documenting. Sigh.

That may vary greatly from school to school.

CU

René
 
Stephen Kellett

Rene said:
Sorry, I should have stripped the "self" above, it was meant more or less
as a joke with a little factoid (it documents the writer's abilities)
attached to it. Probably should have added more smilies.
Understood.

I'm with you on this matter. Interference in multi-threaded programs -
which essentially all java programs are - is quite hard to document, with
or without any help from the language.

Indeed. My first foray into Java in 1996 was quite interesting. All
this huff and puff about Java being a simpler language. Then as soon as
you want to do anything other than a trivial app, bingo, you are into
multi-threaded land. That's fine, I don't mind, but that's a huge
conceptual step for a beginner or a programmer of more limited ability.

C# has the same problem.
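
To make the interference point concrete, here is a minimal sketch (class
and names invented for illustration, not taken from any real project) of
about the first thing a beginner runs into: two threads bumping a shared
counter with no synchronisation, quietly losing updates.

public class LostUpdates {
    private static int count = 0; // shared and unsynchronised

    public static void main(String[] args) throws InterruptedException {
        Runnable task = new Runnable() {
            public void run() {
                for (int i = 0; i < 100000; i++) {
                    count++; // not atomic: read, add, write back
                }
            }
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start();
        b.start();
        a.join();
        b.join();
        // Usually prints less than 200000 - those lost updates are the
        // interference that is so hard to document after the fact.
        System.out.println(count);
    }
}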

Stephen
 
Robert B.

Stephen Kellett said:
In message <[email protected]>, Rene ....clip...
People teaching comp sci. are obviously not teaching something simple
but fundamental when we keep seeing this nonsense about code being self
documenting. Sigh.

Which probably explains a lot about the state of a lot of commercial
software!
 
Mike

I guess it's a matter of perspective. I've also seen that the differences
are more than a few percent. We did a test one time. The same project -
requirements, interface, database, everything - was given to two experienced
groups. My group worked in Assembler and the other used C. The Assembler
package was 6 times faster than the C package and the C package had been
optimized. Both groups completed their assignments in the same time frame.
The C package was easier to understand for most people who weren't into
minutiae, but problems were easier to find in the Assembler package because
there wasn't much between the source and the executable (no compiler
problems/eccentricities to deal with). Granted, C programmers are a lot
easier to find and the learning curve for Assembler is a LOT longer. As I
said in an earlier post - you need lots of tools in the toolbox so you know
which to use in a situation.
Sure - it comes down to perspective, the type of
problem you are attempting to solve, the time-criticality
and the range of programming languages/tools available.
My first serious assembler work was a fast Fourier
transform for an Apple 2e. Funding at the time meant that
the only alternative language was interpreted BASIC. From
memory, the BASIC routine took 18 minutes for a 512-point
transform and the assembler routine took around 20
seconds - and that mattered, because the results of the
transform were vital in determining the next step in a
time-critical process. At that time, the experiment
couldn't have been done without assembler.

On reflection, the need to write my own multiply
routines, floating-point types, look-up tables etc. (in a
language I was entirely new to) did teach me the value
of planning before coding, so - in hindsight - assembly
programming did supply me with some valuable lessons
that I still use today.
 
Roedy Green

Which probably explains a lot about the state of a lot of commercial
software!

When I taught Fortran and assembler at UBC, I would stress that their
assignments were more to communicate with me, the guy who graded them,
than the computer. I spent huge amounts of time with my typewriter
ADDING proper comments to their assignments as part of marking them.

When I taught kids to program at summer camp, they learned to write
comments about what a method was going to do before they wrote the
method. They never even saw anything else, so they accepted this
practice without resistance, at least the little kids did.
I taught them method calls and decomposition before anything else. I
was amazed at the result, 7 year olds writing 13 page beautifully
structured code. I had the advantage that they were doing computer
animations and games where elements in the pictures naturally compose
themselves into a structure, and there are big benefits to writing
reusable components to be shared, or replicated.

Getting clear BEFOREHAND what a method will and will not do really
helps keep things compartmentalised neatly.
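
By way of illustration, a made-up sketch (the scene, the methods and the
coordinates are all invented here) of the sort of decomposition the kids
ended up writing - the structure of the picture shows up directly in the
structure of the code:

import java.awt.Graphics;

class Scene {
    void drawScene(Graphics g) {
        drawSky(g);
        drawHouse(g, 50, 120);  // two houses, one reusable component
        drawHouse(g, 200, 120);
        drawSun(g);
    }

    void drawSky(Graphics g)                 { g.fillRect(0, 0, 400, 300); }
    void drawHouse(Graphics g, int x, int y) { g.drawRect(x, y, 80, 60); }
    void drawSun(Graphics g)                 { g.fillOval(320, 20, 40, 40); }
}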
 
Stephen Kellett

Roedy Green said:
When I taught kids to program at summer camp, they learned to write
comments about what a method was going to do before they wrote the
method. They never even saw anything else, so they accepted this

Totally! When I'm writing code I typically write the comments as a
sequence of steps in the method, then write the code for each step. If
you can't formalize the steps in written language you sure as hell can't
formalize them as code. It also identifies quite neatly where your
original thinking went wrong - the moment you type a comment and
realise, "oh bugger, that doesn't work".

Sometimes I write the start and end steps and then fill in the gaps.
Kind of macro level to micro level. Then the code.
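
Something like this, say - a made-up method, purely to show the shape of
the habit: the numbered comments get written first, the code under each
of them second.

class Stats {
    static double average(double[] samples) {
        // 1. Refuse an empty array - averaging nothing is a caller bug.
        if (samples.length == 0) {
            throw new IllegalArgumentException("no samples");
        }
        // 2. Sum the samples.
        double total = 0.0;
        for (double s : samples) {
            total += s;
        }
        // 3. Divide by the count and return.
        return total / samples.length;
    }
}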

Stephen
 
Roedy Green

Totally! When I'm writing code I typically write the comments as a
sequence of steps in the method, then write the code for each step. If
you can't formalize the steps in written language you sure as hell can't
formalize them as code. It also identifies quite neatly where your
original thinking went wrong - the moment you type a comment and
realise, "oh bugger, that doesn't work".

The big thing is you write a contract for what the method will and
will not do. It makes it far easier to make decisions about where
details belong.

You have no business adding code to a method that is not implied in the
general comment about what it does.

The English language, for all its fuzziness, imposes strong structure on just what
can go in the method. It makes FINDING where something was done later
so much easier if you stick to your verbal comment contracts.

In a similar way I spend a lot of time thinking about the name of a
method, and often rename it. I want it to evoke, as closely as possible,
exactly what the method does and does not do, particularly in
relation to other similar methods.
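
For instance, a hypothetical repository interface (every name in it is
invented for illustration) where the comment is the contract - it says
what the method will do and, just as importantly, what it will not:

import java.util.List;

public interface OrderRepository {
    /**
     * Returns the IDs of the customer's open orders, oldest first.
     * Does NOT include cancelled or shipped orders, never returns null
     * (an unknown customer simply yields an empty list), and reads
     * straight from the store - no caching happens here.
     */
    List<String> findOpenOrderIds(String customerId);
}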
 
Robert B.

Roedy Green said:
When I taught Fortran and assembler at UBC, I would stress that their
assignments were more to communicate with me, the guy who graded them,
than the computer. I spent huge amounts of time with my typewriter
ADDING proper comments to their assignments as part of marking them.

When I taught kids to program at summer camp, they learned to write
comments about what a method was going to do before they wrote the
method. They never even saw anything else, so they accepted this
practice without resistance, at least the little kids did.
I taught them method calls and decomposition before anything else. I
was amazed at the result, 7 year olds writing 13 page beautifully
structured code. I had the advantage that they were doing computer
animations and games where elements in the pictures naturally compose
themselves into a structure, and there are big benefits to writing
reusable components to be shared, or replicated.

Getting clear BEFOREHAND what a method will and will not do really
helps keep things compartmentalised neatly.

Couldn't agree more. My practice is similar. Other than line comments
(which I think are also mandatory), I code the main program comment block
and then comments for each routine (in this case, method), further refining
the requirements and design documents. I then code from those. If the
comments don't make sense, chances are that the code won't either... I
taught my students the same way and have converted a lot of my colleagues to
the same practice...
 
Elspeth Thorne

Roedy said:
When I taught kids to program at summer camp, they learned to write
comments about what a method was going to do before they wrote the
method. They never even saw anything else, so they accepted this
practice without resistance, at least the little kids did.
I taught them method calls and decomposition before anything else. I
was amazed at the result, 7 year olds writing 13 page beautifully
structured code. I had the advantage that they were doing computer
animations and games where elements in the pictures naturally compose
themselves into a structure, and there are big benefits to writing
reusable components to be shared, or replicated.

Getting clear BEFOREHAND what a method will and will not do really
helps keep things compartmentalised neatly.

I learnt how to decompose a problem on paper with a pencil and an
eraser, before I even hit Logo, let alone anything more sophisticated.

We do something like this with the kids (high schoolers) we take on a
camp every spring (September) holidays. It's rare that they come out
with bad code, because they don't get to see any.

But I shudder when I look at some university-level assignments, because
they've followed the lecturer's example, with the occasional difference
that it compiles, even though it may not quite work as expected.

There's a lot to be said for proper environment and training.


Elspeth.
 
blmblm

I met a chap at a Microsoft security bash the other week - he'd met the
guy that invented APL, apparently also wrote a language called A,

Are you sure it wasn't J? Iverson is the inventor of both.

J is an interesting language -- I keep hearing good things about it
from a colleague who's a bit of a fanatic about it, and he in fact
does amazing things with it -- but there's a bit of a learning curve,
and to those unfamiliar with the language the resulting programs are
apt to look like, well, line noise. A resemblance to APL as described
above is surely not accidental.

Wikipedia has nice articles on both "APL programming language" and
"J programming language". The one on APL includes a quotation --
something about how one can write an APL program to simulate
shuffling and dealing a deck of cards in four characters, none of
them found on a normal keyboard -- that seems to pretty much capture
the feel of the language.

[ snip ]
 
Stephen Kellett

Are you sure it wasn't J? Iverson is the inventor of both.

"J" may be what I was calling "K" - but the "A" language - it is
installed in only one place, a large clearing bank, the name of which I
can't remember. The chap I met was very clear about that.

Stephen
 
Phlip

Dale said:
It's funny that I am reading this after just having put in a 14
hour day and can attest to the fact that the reason why I put in
the 14 hour day is basically because of a lack of unit and
acceptance tests.

Don't tell me. Let me guess. Someone else wrote it (possibly all
night, too), and you had to clean it up.

Such a rare story in our industry!
 
Roedy Green

Don't tell me. Let me guess. Someone else wrote it (possibly all
night, too), and you had to clean it up.

No wonder Dale has been a little on the grumpy side lately!

I used to have this fantasy that I would figure out a way to stop
time. Then I could do it properly ALL myself and produce a lifetime of
work the next morning, a fait accompli, done properly.
 
Dale King

Hello, Matt!
You said:
I agree. A computer science major doesn't focus much on software
engineering. Well, I think software engineering is a required course
in computer science, though. I would prefer to have a degree in software
engineering rather than computer science. Of course, computer science
graduates can go into many different fields rather than just software
development.

I think it is becoming increasingly clear that Computer Science
is not really the right course of study for creating good
software developers. Its focus is on theoretical aspects of
computer science. And I speak from experience since I have a
master's degree in it and had a DB class where the prof tried to
indoctrinate us into a research mindset every day.

It is sort of like how a physics degree does not really help you a
great deal in a job designing electronic circuits. For that you
want an Electrical Engineering degree.

In that same vein, some colleges are offering degrees in Software
Engineering.
 
Dale King

Hello, Ben Pfaff!
You said:
My current favorite comes from the "Nachos" software used to
teach operating system courses at several universities:

I actually used it in an O/S class myself.
#ifdef FILESYS
halt->Execute(gInitialProgram);
#else
#ifdef SIMOS
halt->Execute(gInitialProgram);
#else
halt->Execute(gInitialProgram);
#endif
#endif // FILESYS

The programmer must have been on drugs.

Actually I think one could make a case for that one being
logical. It is highly likely that one might have to modify it for
a particular set of options and having it already split out this
way emphasizes that you need to consider the repercussions on
other cases. Consider if you were doing SIMOS and needed to
change what happened here. If you didn't have the conditional
compilation you might make a change that affected other
configurations as well.

And since it is conditional compilation it really isn't hurting
anything.
 
Dale King

Hello, Phlip!
You said:
else's code
after walkthrus.

Pick only one:

A> 2am panic call to fix a problem in someone else's code after
you've already had a hard 14 hour day, but their code has
copious comments

B> 2am panic call to fix a problem in someone else's code after
you've already had a hard 14 hour day, but their code has
wall-to-wall unit and acceptance tests

BTW, option B has also been shown to dramatically reduce the incidence of 14
hour days...

It's funny that I am reading this after just having put in a 14
hour day and can attest to the fact that the reason why I put in
the 14 hour day is basically because of a lack of unit and
acceptance tests.
 
Dale King

Hello, Phlip!
You said:
Don't tell me. Let me guess. Someone else wrote it (possibly all
night, too), and you had to clean it up.

Such a rare story in our industry!

Unfortunately not. In that case it was my own blunder and my own
fault for not unit testing and the fault of the project for not
having acceptance tests. This was for embedded C++ code, not Java.
 
Dale King

Hello, Roedy Green!
You said:
No wonder Dale has been a little on the grumpy side lately!

I used to have this fantasy that I would figure out a way to stop
time. Then I could do it properly ALL myself and produce a lifetime of
work the next morning, a fait accompli, done properly.

I wasn't aware that I've been grumpy here. I've had little time
to even be here. I have about 10 messages queued up that I want
to reply to but haven't had the time. I can barely keep up with
reading. And yes I will get around to replying to you on XML vs.
binary.

I have been working a lot of overtime trying to track down major
issues like resets and exceptions. Unfortunately this is in
embedded C, not Java - and that 14 hour day was of my own
making. I am definitely committed to test-driven design now.
I always believed in it, but only sometimes practiced it.
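
For what it's worth, a minimal made-up sketch (in Java, with JUnit) of
what test-first looks like: the test for a hypothetical Frame.checksum()
is written before the method exists, and the method is then implemented
until the test passes.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ChecksumTest {
    @Test
    public void checksumIsSumOfBytesModulo256() {
        byte[] payload = {0x01, 0x02, (byte) 0xFF};
        // 1 + 2 + 255 = 258, and 258 mod 256 = 2
        assertEquals(2, Frame.checksum(payload));
    }
}

// Hypothetical production class, written to make the test above pass.
class Frame {
    static int checksum(byte[] bytes) {
        int sum = 0;
        for (byte b : bytes) {
            sum = (sum + (b & 0xFF)) & 0xFF; // keep the running sum in 0..255
        }
        return sum;
    }
}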
 
