C++ fluency


James Kanze

I invented "waterfall" as a strawman argument?? I wish!!

You didn't invent it. It's been a popular strawman for any
number of "new" methodologies. It's never been used or promoted
as an actual methodology, however.
"...waterfall values promoted big up-front speculative
requirements and design steps before programming.
Consistently, success/failure studies show that the waterfall
is strongly associated with the highest failure rates for
software projects"

And where did they find this information?

I don't think that they invented it, either---I have been
hearing criticisms of the "waterfall methodology" since the
mid-1980s, at least. On the other hand, I've never seen or heard
of a methodology actually proposing it---every time it's been
cited, it's been as a strawman, to knock down.
Again, please read up on the relevant literature here,
including the studies that book cites.

I'm apparently more familiar with the relevant literature than
you are. And the article doesn't give any sources for its
"waterfall methodology", so there's nothing I can look up.

Now, it's entirely possible that somewhere, at some time,
someone has recommended a waterfall methodology. It's also
possible (although I would consider it highly unlikely) that
they even called it that. But it's never been an accepted
methodology, at least not since I've been involved in computer
science (from about 1975 on). And the only citations about it I
can find are articles presenting it as a strawman.
 

James Kanze

Philip, please read what he wrote. The "invention" of which
you're accused isn't the waterfall model, but the studies that
showed that collecting requirements up front led to failure.

Not quite. And the "on your part" in my posting is definitely
incorrect on my part---I'm sure that Philip didn't invent the
waterfall methodology, since it's been around (as a strawman)
for an awful long time. He's just repeating what he's heard.
(And I'm sorry if my statement gave the wrong impression. I
don't think he's intentionally lying in his references to the
waterfall methodology. He's just misinformed. Which is very
easy to be, because it's such a popular strawman. It's hard to
find a description of any methodology which doesn't include some
demonstration as to why the waterfall methodology is wrong.)

The strawman is setting up a methodology which insists that all
requirements are fixed in stone before the first design step is
done. If such a methodology has ever existed, it predates my
time. And it certainly isn't something that has been
recommended by any serious methodology anytime recently.
Comparing some new methodology to it is simply raising a
strawman---"inventing" something which will make your ideas look
good by comparison.

[...]
The evidence suggests that nobody has ever really espoused or
attempted to follow the waterfall model. Rather the contrary,
it appears to have been set up as a straw man from day one,
and every mention of it (ever) has been to point out that it
is deficient, impractical, and will lead almost inevitably to
disaster.

Exactly. (And thanks for the reference.)
Though the economics were somewhat different in 1970, the
paper cited above also advocates many (perhaps even most) of
the procedures that now go under the names like XP, Agile
Development, and so on. Way back in 1970, he essentially
advocated pair programming: "Every bit of an analysis and
every bit of code should be subjected to a simple visual scan
by a second party who did not do the original analysis or code

I don't think that's what is generally meant by pair
programming. That sounds more like code review to me. (The
important part is that the code is seen with new eyes, by
someone who *wasn't* involved in writing it.)
but who could spot things like dropped minus signs, missing
factors of two, jumps to the wrong address, etc., which are in
the nature of proofreading the analysis and code." He then
demonstrates the change in economics in the next sentence: "Do
not use the computer to detect this kind of thing -- it is too
expensive."
Another step he considered important was: "Involve the
customer". He said: "For some reason what a software design is
going to do is subject to wide interpretation even after
previous agreement. It is important to involve the customer in
a formal way so that he has committed himself at earlier
points before final delivery."
Unless somebody can show evidence to the contrary, I'm
prepared to believe that James was entirely correct -- the
waterfall model has been a straw man from the very beginning,
and nobody has ever believed or claimed that it does or even
could really work.

I'm curious too. I won't say that it's impossible, but given
the paper you cited, it looks like you'd have to go back well
before 1970 to find it.

The problem is that it is such a universally established
strawman that no one thinks to question it. And that people
honestly use it as a comparison, without considering it a
strawman.
 

James Kanze

[ ... ]
Except that they don't. They define policies, but they
certainly don't allow you to control exactly when the context
swaps occur.
It's certainly possible to do so in Windows, and I'm pretty
sure it is in (at least most) UNIX as well. It's decidedly
non-trivial though. To do it in Windows, you turn your test
harness into a real debugger -- i.e. you set breakpoints in
the client code and use those to give you control over the
relative order of code execution.
On the face of it, that sounds fairly complex, but in reality
it's really much worse. The problem is that the test harness
needs knowledge of the client code to ensure that it doesn't
force a context switch at a time that it couldn't happen under
normal circumstances. For example, it has to track all usage
of critical sections, to be able to force context switches
between threads that don't depend on the same critical
section, but ensure against context switches among threads
that do depend on the same critical section.
In short, before you have a working test harness, you've
duplicated a large part of the functionality of a complete
multitasking operating system. Worse, there's almost no real
scaling factor -- which is to say that the test harness is
tremendously huge and complex even if the code under test is
tiny.

I'm sure that if you're willing to write a complete OS (or just
the kernel, if you have some means of using the rest of an
existing OS with your kernel), it could be done. I'm just
unaware of any existing package or tool which does it. (And as
you say, it's a bit too much to integrate into the debugging of
a single application, if no tool exists.)

It's an interesting thought, however, because if you do have
access to the kernel (because, say, you're Microsoft, or you're
working on the Linux kernel), it shouldn't be that hard to add
some sort of controls which would support it. (I'd guess, here,
the most difficult part would be specifying exactly what the
interface should be.)
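
A user-level approximation of that control is possible if you can
instrument the code under test. (The SwitchPoint class below is a
hypothetical sketch, not an existing tool.) The thread under test parks
at a marked point until the harness has driven the competing thread to
the state it wants, then is released. Unlike a debugger- or
kernel-based approach, it only controls interleaving at points you've
marked by hand:

    #include <condition_variable>
    #include <mutex>

    // Hypothetical test-only synchronization point.  The thread under
    // test blocks in reach() until the harness calls release(); the
    // harness blocks in await() until the thread has arrived.
    class SwitchPoint {
    public:
        void reach() {                 // called by the thread under test
            std::unique_lock<std::mutex> lock(m_);
            reached_ = true;
            cv_.notify_all();
            cv_.wait(lock, [this] { return released_; });
        }
        void await() {                 // harness: wait for the thread to park
            std::unique_lock<std::mutex> lock(m_);
            cv_.wait(lock, [this] { return reached_; });
        }
        void release() {               // harness: let the parked thread go on
            std::lock_guard<std::mutex> lock(m_);
            released_ = true;
            cv_.notify_all();
        }
    private:
        std::mutex m_;
        std::condition_variable cv_;
        bool reached_ = false;
        bool released_ = false;
    };

The harness starts thread A (which calls reach() between the two
statements of interest), calls await(), runs thread B up to or past the
racy operation, calls release(), then joins both threads and asserts on
the shared state.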
 

James Kanze

If you put a high priority real-time thread on a single core
Solaris system in a while(1) loop, you will brick the box.
The last context switch the system will make is the one that
starts your thread.
I know, I did it once!

I know that too. In 32 bit mode on a Sparc, the g++
implementation of std::string does just that (and will brick the
box, if priority inversion occurs).
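
For illustration, the generic pattern behind that failure (this is not
g++'s actual code, just a minimal sketch of a spin-wait under strict
real-time priorities):

    #include <atomic>

    // On a single-core box with strict priority scheduling: if a
    // low-priority thread holds the flag when a high-priority thread
    // calls spin_lock(), the spinner monopolizes the CPU, the holder
    // never runs again to release it, and the machine is bricked.
    std::atomic_flag lock_taken = ATOMIC_FLAG_INIT;

    void spin_lock() {
        while (lock_taken.test_and_set(std::memory_order_acquire)) {
            // busy-wait: never blocks, never yields
        }
    }

    void spin_unlock() {
        lock_taken.clear(std::memory_order_release);
    }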

But how is this relevant to debugging multithreaded code? What
I need is a way to create a situation where two threads are
active, and one interrupts the other immediately after a
specific machine instruction (or rather, a way to loop, with one
thread interrupting the other one instruction later on each pass
through the loop).

Jerry suggested an idea using the debugger (or the same
techniques used by the debugger). I'm not too sure how it would
work in practice---you could definitely loop, stopping one
instruction later in the thread each time, but how would you
force the OS to resume in the other thread (and not in the
thread you stopped)? And of course, this isn't the sort of test
you'd want to run each time you added a line of code, since it's
likely to be very, very slow.
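
One way to approximate that loop without debugger support is to
instrument the code under test with explicit preemption points and sweep
the point at which a switch is encouraged. Everything below is a
hypothetical sketch: it works at the granularity of instrumented points,
not machine instructions, and it only makes the interleaving likely
rather than certain:

    #include <atomic>
    #include <chrono>
    #include <thread>

    std::atomic<int> points_hit{0};
    int forced_switch_at = 0;   // set by the test loop between iterations

    // Called by the code under test between interesting statements.
    // At the chosen point, sleep so the scheduler runs the other thread.
    void preemption_point() {
        if (points_hit.fetch_add(1) == forced_switch_at)
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }

    // Probe each instrumented point in turn, one iteration per point.
    void sweep(int total_points, void (*run_threads_and_check)()) {
        for (forced_switch_at = 0; forced_switch_at < total_points;
             ++forced_switch_at) {
            points_hit = 0;
            run_threads_and_check();   // should assert on the shared state
        }
    }

And as noted above, this is far too slow to run after every edit: the
sleep, multiplied by the number of instrumented points, makes it an
occasional stress test rather than a per-edit check.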
 

Phlip

Jerry said:
Unfortunately, for people like me who did our first real programming in
FORTRAN on mainframes that weighed as much as an average house, age has
caught up to the point that a few extra aches and pains are an everyday
sort of thing...

Luxury! We used to program FORTRAN all night while our dad beat us with IPO
diagrams!
 

Jerry Coffin

On May 9, 6:04 pm, Jerry Coffin <[email protected]> wrote:

[ ... ]
I don't think that's what is generally meant by pair
programming. That sounds more like code review to me. (The
important part is that the code is seen with new eyes, by
someone who *wasn't* involved in writing it.)

Oh, I don't mean to imply that he described pair programming exactly,
but I also think it was closer to pair programming than a normal code
review would be.

Pair programming (at least from what I've seen) normally implies one
person sitting at the keyboard, and another looking over their shoulder.
At that time, however, that wasn't generally even possible -- most
computer programming was still done via Hollerith cards. At least to me,
pair programming is _largely_ a matter of taking the same general idea
and modifying it to deal with the short turnarounds that are now common.

[ ... ]
I'm curious too. I won't say that it's impossible, but given
the paper you cited, it looks like you'd have to go back well
before 1970 to find it.

Right -- well before. Even in that paper, he only barely mentions the
purely forward-moving model, then moves on to a model with iterations
where each step depends not only on its predecessor, but also on its
successor.
The problem is that it is such a universally established
strawman that no one thinks to question it. And that people
honestly use it as a comparison, without considering it a
strawman.

I think there's a much more damaging aspect than that. People use it as
a straw man when presenting their new and improved methodology. Even
though all they talk about is its problems, the message that's often
received is that it's the best available alternative to the methodology
being preached at the moment.

That means if the current methodology fails to deliver what they expect
(and given the nature of hype, it often will) they consider the
alternatives -- and what they've been told is that despite its flaws,
the waterfall methodology is the widely accepted alternative, so they try
to follow it.

The people attempting to "kill" the waterfall methodology are the ones
who are almost entirely responsible for it being put to use at all.
Without its being taught as the prevailing and accepted method, nobody
would accept the utterly insane idea that each step must be completed to
perfection before the next step is started, nor that each step must be
treated as sacrosanct once completed, no matter how serious the flaws
found later in the process.
 

Phlip

Jerry said:
Pair programming (at least from what I've seen) normally implies one
person sitting at the keyboard, and another looking over their shoulder.

Pair programming involves two programmers with dual keyboards & mice. They take
turns typing, and they both know the plan in their heads. Either could take over
typing for the other at any time. If one thinks of a better plan, they type and
describe it, out loud, until the other could take over.
I think there's a much more damaging aspect than that. People use it as
a straw man when presenting their new and improved methodology.

Waterfall most certainly is not a strawman.

For a small project - such as one feature within an otherwise iterative cycle -
"Waterfall" casually refers to exactly what James Kanze advocates - sketching a
complete design before committing it to code. That is "Big Design Up Front", and
it's not really the worst possible system. It's better than Code-n-Fix. Those
who practice TDD have learned to do without it and produce competitive designs.

For a large project, the worst part of Waterfall is the understandable urge to
collect all possible requirements before beginning the project. That is the
worst system possible. For example, I once temped at a company that had
established a successful biz-to-biz website, at the expense of very crufty code.
They had diligently followed Waterfall, collected requirements, designed them,
and then coded them. No tests.

The problem with calling Waterfall a "strawman" is that these senior
software engineers and managers actually thought the software engineering
literature TOLD THEM to follow this incorrect process.

Then the inevitable happened. Real world requirements appeared and conflicted
with the planned requirements. At this point, the design quality went over a
cliff. To rapidly add new features to a deployed application, the programmers
had to start adding hacks and patches. Did I mention the code had no tests?
Even
though all they talk about is its problems, the message that's often
received is that it's the best available alternative to the methodology
being preached at the moment.

Exactly. And I was there when the managers had a little dinner together to
celebrate a successful quarter, and to plan a rewrite. Of course it was going to
be in Java, and coded by an outside team. And I actually got to hear with my own
ears the second-in-command manager saying, "This time, we will spend lots more
time planning the requirements, and getting them all right before we start".

Waterfall most certainly is not a strawman, despite its use as a simple contrast
to Agile in the literature. Waterfall reappears whenever managers see new
requirements afflict old code, and they think this means they must collect
_more_ up front requirements.

Whenever you hear in the news of a billion dollars sunk into some huge abandoned
software project, you can bet money that its managers started with a big
requirements gathering session. The more requirements gathered, the more likely
the failure, simply because each requirement represented a decision made with
the least hard data possible.

In hardware, compare the processes of GM to those of Toyota. 'nuff said!
The people attempting to "kill" the waterfall methodology are the ones
who are almost entirely responsible for it being put to use at all.
Without its being taught as the prevailing and accepted method, nobody
would accept the utterly insane idea that each step must be completed to
perfection before the next step is started, nor that each step must be
treated as sacrosanct once completed, no matter how serious the flaws
found later in the process.

To do Waterfall right, each time you find a mistake, you must go back to the
phase that created the mistake and start from there again.

Given arbitrary time and arbitrary programmers, that works great!
 

James Kanze

Pair programming involves two programmers with dual keyboards
& mice. They take turns typing, and they both know the plan in
their heads. Either could take over typing for the other at
any time. If one thinks of a better plan, they type and
describe it, out loud, until the other could take over.

That's more or less what I had always thought. Not necessarily
with dual keyboards and mice, but at least one looking over the
shoulder, and being totally involved in the development.

In other words, it doesn't fulfill one of the essential
requirements for code review.
Waterfall most certainly is not a strawman.
For a small project - such as one feature within an otherwise
iterative cycle -
"Waterfall" casually refers to exactly what James Kanze
advocates - sketching a complete design before committing it
to code. That is "Big Design Up Front", and it's not really
the worst possible system. It's better than Code-n-Fix. Those
who practice TDD have learned to do without it and produce
competitive designs.

It would help if you'd read what you're responding to. Neither
I, nor anyone else, has made such a suggestion. It's a
strawman, invented for the sole purpose of having something to
look good against.

You can invent anything you like. It doesn't change the basic
fact that "the waterfall methodology" was created with no
reference to existing practice, uniquely to have something to
look good against. It's not recommended by anyone.
The problem with calling Waterfall a "strawman" is these
senior software engineers and managers actually thought the
software engineering literature TOLD THEM to follow this
incorrect process.

Which senior software engineers? I've never heard it
recommended. Anywhere. (And while I've generally tried to
avoid such companies, I have worked, once or twice, in places
that really had no methodology.)
Then the inevitable happened. Real world requirements appeared
and conflicted with the planned requirements. At this point,
the design quality went over a cliff. To rapidly add new
features to a deployed application, the programmers had to
start adding hacks and patches. Did I mention the code had no
tests?
Exactly. And I was there when the managers had a little dinner
together to celebrate a successful quarter, and to plan a
rewrite. Of course it was going to be in Java, and coded by an
outside team. And I actually got to hear with my own ears the
second-in-command manager saying, "This time, we will spend
lots more time planning the requirements, and getting them all
right before we start".

Did you understand what Jerry wrote? The only people speaking
about the waterfall methodology are those condemning it. It
has no supporters, and it doesn't actually exist as an
established methodology.
Waterfall most certainly is not a strawman, despite its use as
a simple contrast to Agile in the literature. Waterfall
reappears whenever managers see new requirements afflict old
code, and they think this means they must collect _more_ up
front requirements.

It would help if you could cite some concrete examples. I've
never heard anyone propose the waterfall methodology, except to
knock it down.
Whenever you hear in the news of a billion dollars sunken in
some huge abandoned software project, you can bet money that
its managers started with a big requirements gathering
session. The more requirements gathered, the more likely the
failure, simply because each requirement represented a
decision made with the least hard data possible.

Actually, the biggest failure I know of was due to a total absence
of specified requirements. Or any other communication between
the teams. But I can think of other ways projects can fail as
well.
In hardware, compare the processes of GM to those of Toyota.
'nuff said!
To do Waterfall right, each time you find a mistake, you must
go back to the phase that created the mistake and start from there
again.

That has nothing to do with waterfall; in fact, it's not
possible with waterfall. It is, in fact, a necessary procedure
if you want to avoid code rot. But in order to work, it
requires small iterations.
 

Ian Collins

James said:
That's more or less what I had always thought. Not necessarily
with dual keyboards and mice, but at least one looking over the
shoulder, and being totally involved in the development.

In other words, it doesn't fulfill one of the essential
requirements for code review.

It does when you rotate pairs.

Please remember an agile process like XP is the sum of its parts. If
any one of them were the universal panacea, we wouldn't need the others.
 

Phlip

Ian said:
James Kanze wrote:

I didn't explicitly contradict Jerry, but...

Nobody is looking over anyone's shoulder. Would you like that?

Both are working the plan...
It does when you rotate pairs.

Please remember an agile process like XP is the sum of its parts. If
any one of them were the universal panacea, we wouldn't need the others.

Ian, if someone asked how std::cout<< worked, we would know better than to
recite the tutorials here. Processes should be the same - don't do the
questioners' homework for them!
 

Noah Roberts

Phlip said:
To do Waterfall right, each time you find a mistake, you must go back to
the phase that created mistake and start from there again.

That's also how it's done in any Agile management practice. Note that
XP is NOT an agile project management methodology but a development
practice methodology. To see Agile management methods you need to look
at Scrum, Lean, and others and these have nothing to do with TDD (with
the last D meaning development OR design). In these processes, when you
find a bug it goes back on the list of things to do, and goes through
the same sets of management processes that happened before: bidding,
scheduling, etc...

You can't just fix bugs willy-nilly as you find them. They need to go
through a process that reviews the bug and decides what effects a fix
will have, how expensive it will be to fix, and when and if to fix it.
The "customer" is in charge in Agile development, not the developer, as
it should be.

Frankly, I'm super-glad we don't do XP. I could never do pair
programming. For one thing I hate people standing over my shoulder.
For another thing, having to tell another person what I'm doing, as I'm
doing it, when I'm trying to solve a problem is incredibly difficult.
For some reason the areas of my brain that do programming don't speak
English too well. Once I'm done, I can tell someone what I've done
without any problem, but to tell them as I go requires constant context
switching that is quite slow and frustrating.

Any development methodology needs to account for and adapt to all
personalities or it simply can't work. You might say that I need to be
fired if I can't do pair programming efficiently but I'd simply retort
that you'd be losing an incredible resource; I'm very good at what I do.

I came in to this discussion to correct what I saw as some
misunderstandings about TDD from those saying it's useless or an invalid
approach but I also agree with what many have said here: TDD isn't some
catch all miracle cure. You seem to be pushing it like it is and making
claims about it that I simply don't see as true. TDD is just one tool
in an arsenal making up the various development and management
methodologies that people call "Agile". You also need to understand and
speak fluently about patterns, refactoring and refactors, smells, and
"design principles" before you'll gain much from TDD...just for starters.
 

Phlip

Andy said:
None of our tests take more than an hour or so, but I still don't want to
run them after every edit!

Can you explain why you _can't_ run a short list of relevant tests..?
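
For what that can look like in practice, here is a minimal, hypothetical
sketch of a test registry that runs only the tests whose names contain a
command-line substring; the macro and test names are illustrative, not
taken from any particular framework:

    #include <cstdio>
    #include <functional>
    #include <map>
    #include <string>

    // Hypothetical minimal test registry: run `./tests parser` to run
    // only the tests whose names contain "parser".
    std::map<std::string, std::function<void()>>& registry() {
        static std::map<std::string, std::function<void()>> tests;
        return tests;
    }

    struct Register {
        Register(const char* name, std::function<void()> fn) {
            registry()[name] = fn;
        }
    };

    #define TEST(name) \
        void name(); \
        static Register reg_##name(#name, name); \
        void name()

    TEST(parser_handles_empty_input) { /* assertions go here */ }
    TEST(network_reconnects_after_drop) { /* assertions go here */ }

    int main(int argc, char** argv) {
        const std::string filter = argc > 1 ? argv[1] : "";
        for (const auto& test : registry())
            if (test.first.find(filter) != std::string::npos) {
                std::printf("running %s\n", test.first.c_str());
                test.second();
            }
    }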

Noah said:
Frankly, I'm super-glad we don't do XP. I could never do pair
programming. For one thing I hate people standing over my shoulder. For
another thing, having to tell another person what I'm doing, as I'm doing
it, when I'm trying to solve a problem is incredibly difficult. For some
reason the areas of my brain that do programming don't speak English too
well. Once I'm done, I can tell someone what I've done without any
problem, but to tell them as I go requires constant context switching that
is quite slow and frustrating.

I'm super-glad I don't ride a bike to work. I would have to put my coffee in
a thermos, I'd have to stop at the kinds of lights I typically zip thru, I
have to think about each item in my backpack, I have to rotate the songs in
my cellphone, and I get all buff.

F--- the environment! I'm driving a gas-guzzler to work!

(Seriously, to learn pairing you must relearn programming as you know it.
Keep your language centers loaded up in your brain. Don't think of the next
keystroke, think of the next high-level direction. The benefits are worth
it - /especially/ if you feel your own magnificent abilities are going to
waste!)
You also need to understand and speak fluently about patterns, refactoring
and refactors, smells, and "design principles" before you'll gain much
from TDD...just for starters.

Maybe you should pair more to see how that stuff falls into place easily.
 

Phlip

Andy said:
However, by disagreeing with that statement you've killed XP - which is
IMHO a good thing.

Yup. Tests can't find every bug. Therefore XP sucks and nobody should use
it.
 

Phlip

Andy said:
Really? I've never heard of any avionics programme run like that. Do
you have a reference?

You mean besides Robert C. Martin teaching TDD at Boeing?

http://www.fastcompany.com/magazine/06/writestuff.html

The equivalent steps are...

- every code change reviewed during the change
(review of static code, post-change is less important)
- logging every change & every bug
- both white and black box tests
- running the build environment & all tests after every edit
- collecting requirements in realtime...

Put another way, if you took that process and then streamlined out the excesses
(pseudocode, multiple teams, multiple redundancies, strict type checking,
automated proofs, etc.), you would have competitive commercial software
development with a low bug rate.
Which studies?

http://www.google.com/search?q=standish+group+software+chaos
 

Ian Collins

Andy said:
None of our tests take more than an hour or so, but I still don't want
to run them after every edit!

It sounds like you either have an enormous number of tests, or they go
beyond unit tests. I'd expect to be able to run several thousand unit
tests per minute.
 

Jorgen Grahn

Bart van Ingen Schenau wrote:
[ ... ]

That's one practice everyone should be following.

I disagree. I'd prefer if people also thought about things like:

- Where did my design (that is, my thinking) go wrong and make this bug happen?
- Should I spend time redesigning so it doesn't happen again?
- In what other places could I have made similar mistakes?
- Could I make this a compile-time error?

In that sense, a bug found is an opportunity to make the code better overall.
If on the other hand your only goal is to make the tests pass, the
code will just get worse and worse over time.
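
The last question in that list deserves a concrete illustration. Here is
a minimal, hypothetical C++ sketch of promoting a runtime bug to a
compile-time error: give two easily confused indices distinct types, so
that swapping them no longer compiles.

    #include <cstddef>
    #include <vector>

    // Distinct wrapper types: a Row can no longer be passed where a Col
    // is expected, so the classic swapped-arguments bug is caught by the
    // compiler instead of by a failing test (or a customer).
    struct Row { std::size_t value; };
    struct Col { std::size_t value; };

    class Grid {
    public:
        Grid(std::size_t rows, std::size_t cols)
            : cols_(cols), cells_(rows * cols) {}
        double& at(Row r, Col c) { return cells_[r.value * cols_ + c.value]; }
    private:
        std::size_t cols_;
        std::vector<double> cells_;
    };

    // Grid g(10, 20);
    // g.at(Row{3}, Col{7});   // fine
    // g.at(Col{7}, Row{3});   // does not compile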

/Jorgen
 

Phlip

Jorgen said:
I disagree. I'd prefer if people also thought about things like:

- Where did my design (that is, my thinking) go wrong and make this bug happen?
- Should I spend time redesigning so it doesn't happen again?
- In what other places could I have made similar mistakes?
- Could I make this a compile-time error?

In that sense, a bug found is an opportunity to make the code better overall.
If on the other hand your only goal is to make the tests pass, the
code will just get worse and worse over time.

All of that process, from handing (external) bugs to the Onsite Customer for
scheduling, to reviewing the process, is in the XP literature...
 

Jerry Coffin

phlip2005 said:
Andy Champ wrote:

[ ...and Philip had previously written: ]

Once again, you don't seem to have really read what was written. Of the
California DMV project, they said:

The project had no monetary payback, was not supported
by executive management, had no user involvement, had
poor planning, poor design specifications and unclear
objectives. It also did not have the support of the
state's information management staff.

The alternative to a "poor design specification" would appear to be a
better design specification -- IOW, they seem to be indicating that MORE
should have been done to collect requirements up front.

Likewise, of the American Airlines CONFIRM system, they said:

This project failed because there were too many cooks and
the soup spoiled. Executive management not only supported
the project, they were active project managers. Of course,
for a project this size to fail, it must have had many
flaws. Other major causes included an incomplete statement
of requirements, lack of user involvement, and constant
changing of requirements and specifications.

Here they give "incomplete statement of requirements" as a reason for
failure. The alternative would obviously be a "[more] complete statement
of requirements" -- which would obviously require at least attempting to
collect the requirements in question.

I didn't just pick a couple of projects that happen to use ambiguous
wording while the bulk of the data indicates what you've claimed. Rather
the contrary, I've commented on 100% of the projects they studied that
were considered failures.

The study you've cited is really quite interesting -- you ought to read
it some day!
 

Jerry Coffin

[ ... ]

You seem to have ignored or misread a great deal of what this says. For
example, at least by your definitions, they seem to fully espouse the
"big design up front" methodology: "...carefully planning the software
in advance, writing no code until the design is complete, ..."

They fail to mention one crucial aspect of writing high-reliability
software. Typical software is delivered by itself, to be used on whatever
hardware the user already possesses, in conjunction with whatever other
software they happen to have/get. High-reliability software is run only on
the specified hardware, with only the specified software installed. The
difference this makes would be hard to overstate!
 
