Timeless Classics of Software Engineering


Phlip

rem642b said:
I agree. That's one of the winning things about interactive languages
such as LISP. When I write LISP code, I unit-test one line at a time as
I write it. When I have finished all the lines of a function, I
unit-test that function for each of the different kinds of situation it
must deal with. I include all my unit-tests as comments just after the
end of the function definition, so later if I need to change that
function I can re-do all those old unit tests and any new unit tests to
make sure the function still works after the modifications.

Manual or automated unit tests?
An alternative is what might be called a "consumer-oriented hacker":
The hacker inquires about the customer's real needs until he knows what
tool the customer most urgently needs. Not a fullblown do-everything
program, just one tool that does one kind of task and actually works.
The hacker can typically make such a tool within a few hours or a
couple days. Then while the consumer is beta-testing it, the hacker
works on the next-most-urgent tool. So day after day the customer has
yet another tool to handle yet another urgent problem.
Are there any consumers interested in hiring me in this way?

If you refactor your code together between each feature, and if your
bug-rate is absurdly low, and if each feature makes you faster, then what's
the problem?
 

Bernd Paysan

Phlip said:
Manual or automated unit tests?

In interactive languages, there's no real difference. When you start
programming a function, you define it on the command prompt, as you do with
the manual unit tests. If it works (and the tests are supplied with the
proper commands), you can cut&paste them from the command history into your
source code, and then you have automated unit tests.
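Python's doctest module captures this workflow almost literally: an interactive session pasted into a docstring becomes an automated test. A minimal sketch (the function `slugify` is invented for illustration):

```python
def slugify(title):
    """Lower-case a title and join its words with hyphens.

    The examples below are a pasted interactive session; doctest
    re-runs them and compares the printed results.

    >>> slugify("Timeless Classics")
    'timeless-classics'
    >>> slugify("  Mythical  Man-Month ")
    'mythical-man-month'
    """
    return "-".join(title.lower().split())

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # replays the pasted session as automated tests
```

The cut&paste from the command history is exactly the content of the docstring; nothing else needs to change to make the manual test automatic.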
 

Rene de Visser

Phlip said:
rem642b wrote: ...
....
I leave my unit tests in, uncommented, so that they execute after each
function is compiled; that way I can be sure that they are applied each
time.

This cannot always be done, though, as some tests can take a long time
to run if you are testing a large number of input combinations.

Rene.
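One common compromise for the slow-test problem (sketched here in Python; the function and the environment-variable name are invented for illustration) is to run a fast spot-check on every compile and gate the exhaustive sweep behind a flag:

```python
import os

def gcd(a, b):
    # Euclid's algorithm: the function under test.
    while b:
        a, b = b, a % b
    return a

def test_gcd(exhaustive=False):
    # Fast spot-checks run every time the file is loaded.
    assert gcd(12, 18) == 6
    assert gcd(7, 13) == 1
    if exhaustive:
        # The combinatorial sweep only runs when explicitly requested.
        for a in range(1, 200):
            for b in range(1, 200):
                g = gcd(a, b)
                assert a % g == 0 and b % g == 0

# Always cheap on load; set EXHAUSTIVE_TESTS=1 for the long run.
test_gcd(exhaustive=os.environ.get("EXHAUSTIVE_TESTS") == "1")
```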
 

Jeffrey Dutky

snipped most of a fabulous post about XP and engineering.

I agree with most all of what you wrote, except for the very last bit:
BTW: The real hacker way to deal with customer requirements is to throw
them into the bit bucket, and analyze the problem yourself (only care
about it if it is interesting, and redefine it until the solution is
trivial). Unfortunately, this often leads to software where user and
developer have to be the same sort of people, and which doesn't solve
the problems of average users (problems hackers don't have at all).

It is not necessarily so that hackers can't think like regular users
and solve regular users' problems. It may currently be the case, in
general, but it is not a necessary state of affairs. The technical and
engineering community needs to take a masonic attitude toward this
problem and start 'making better hackers.'

Many, if not most, hackers started out as regular users and should be
able to recall what it felt like to deal with the recalcitrant
machine: I can certainly recall the feeling of helplessness I
experienced for the first few years I used unix, and I try to channel
that feeling into my programs and documentation.

The disciplines of user-interface design and HCI (human-computer
interaction), at their best, are attempts to focus the attention of
technical people on the problems and perceptions of non-technical
users. If you, as a software developer, can manage to think like a
non-technical user, even if only occasionally, you will have gone a
long way toward making better programs. Of course, the requirement for
user reviews and focus groups is an admission that such focus is
difficult to maintain, and that formal processes may be required as
occasional reminders.

Still, as a hacker, it should not be so difficult to put yourself in
other peoples' shoes. In the commercial environment you may have to
ignore some things in the interest of meeting a deadline or a budget,
but the private hacker has no such limitations. It requires only a
change of attitude, not of constitution. I simply don't believe that
hackers are innately incapable of thinking like non-technical users,
only that they are unaccustomed to it.

To bring this back on topic, there are a few books that I like for
discussing user interface design:

* 'Tog on Interface' and 'Tog on Software Design'
by Bruce Tognazzini

* 'The Elements of Programming Style'
by Kernighan and Plauger
chapter 5 on input and output

* 'The Practice of Programming'
by Kernighan and Pike
chapter 4 on interfaces (last part on user interfaces)

* 'Programming as if People Mattered'
by Nathaniel Borenstein

* 'The Humane Interface'
by Jef Raskin

- Jeff Dutky
 

Ken Hagan

Jeffrey said:
Many, if not most, hackers started out as regular users and should be
able to recall what it felt like to deal with the recalcitrant
machine: I can certainly recall the feeling of helplessness I
experienced for the first few years I used unix, and I try to channel
that feeling into my programs and documentation.

Well, yes.

Maybe it's different in Linux land, but in Windows I struggle daily with
examples of bad UI, rude software and inadequately documented systems. I
can think of no-one better qualified to defend the interests of
end-users.

Of course, the most horrendous errors are when the reality is buried
under a pile of marketing. If you lie to your end-users, you can't
really expect them to understand your system, can you?
 

_

They can't or won't because they develop into egotistical know-it-alls
that will devalue anyone that is lower than themselves. The machine is
just a machine, the language is a language to solve problems in.

--chris
reply: (e-mail address removed)
 

Mabden

_ said:
They can't or won't because they develop into egotistical know-it-alls
that will devalue anyone that is lower than themselves. The machine is
just a machine, the language is a language to solve problems in.

Many of us started out, not as regular users, but as programmers in school
or on our own. So our first programs were written for ourselves. We figured
it out the hard way, because there was no one else who knew any more than we
did. That's where a lot of the "figure it out yourself" mentality comes
from - it's how we learned, and really the only way TO learn anything.

Read about it, try it out, figure out the mistakes, fix it. Repeat.
 

rem642b

Phlip said:
Manual or automated unit tests?

Currently when I'm just writing software for my own use, nobody else
ever looks at my code or runs it, it's all manual tests. I write a new
function one line at a time, manually checking it's correct before
proceeding to write the next line of code. At the top of the function
are SETQs for the parameters to have canned values for testing purposes.
When I finish writing and testing every line of code, I wrap it into a
function definition, and comment out the SETQs at the start, so it uses
the parameters as given instead of my canned test values. I then
copy&paste the function declaration and those test SETQs, several
different sets of tests in some cases, and edit the copy to yield a
test function call, which I then immediately try. Then I comment out
that test-call and leave it sitting permanently as such a comment
immediately after the function.

But back when I was doing A.I. research at Stanford, working on an
English language command-language for a simulated robot, I made a much
more formal test rig, whereby I collected actual input to the parser
and output from the parser, and when I ran a test it told me each place
where the output wasn't the same as before, so I could check whether
those discrepancies were bugfixes or oops. I would expect something
similar when coding for a company in the future. Also by making a test
rig for each function, I could demonstrate day-by-day progress to my
supervisor, hypothetically anyway if said supervisor were at all
interested.
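That Stanford test rig, recording the parser's actual outputs and flagging every place a rerun differs, is what's now called golden-file or snapshot testing. A minimal dictionary-based sketch (`toy_parse` and the data are invented; persisting `recorded` to a file is the obvious extension):

```python
def check_against_golden(parse, inputs, recorded):
    """Run `parse` over canned inputs, diffing against recorded output.

    `recorded` maps each input to the output saved by a previous run.
    Returns a list of (input, old, new) triples for a human to judge
    as bugfix or regression; the baseline is updated either way.
    """
    diffs = []
    for cmd in inputs:
        new = parse(cmd)
        if cmd in recorded and recorded[cmd] != new:
            diffs.append((cmd, recorded[cmd], new))
        recorded[cmd] = new
    return diffs

# Stand-in for the robot command parser: lower-case and tokenize.
def toy_parse(command):
    return command.lower().split()

baseline = {}
assert check_against_golden(toy_parse, ["Pick Up Block"], baseline) == []
# A second, identical run reports no discrepancies:
assert check_against_golden(toy_parse, ["Pick Up Block"], baseline) == []
```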
Phlip said:
If you refactor your code together between each feature, and if your
bug-rate is absurdly low, and if each feature makes you faster, then
what's the problem?

I don't know anybody who has surplus money and who has any desire for
me to write any software for them. No company in this whole SF bay area
is hiring programmers. This recession has been running for a long time
with no serious sign of let-up yet. I have only a few more months to
find a decent source of income, or become homeless.

Oh, back on topic of testing: If more than one programmer works on the
same function, then of course automated testing is essential!
 

Phlip

rem642b said:
Currently when I'm just writing software for my own use, nobody else
ever looks at my code or runs it, it's all manual tests. I write a new
function one line at a time, manually checking it's correct before
proceeding to write the next line of code.

If you automated the check for that line of code, you could leverage a trail
of tests to go faster.
rem642b said:
At the top of the function are SETQs for the parameters to have canned
values for testing purposes. When I finish writing and testing every
line of code, I wrap it into a function definition, and comment out the
SETQs at the start, so it uses the parameters as given instead of my
canned test values.

My cod. You just told me you do half of test-first. But then you comment the
tests out and don't preserve them.

No matter how fast and bug-free your code, and how clean your design, you'd
be faster, free-er and cleaner by preserving and leveraging those tests!
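The point can be shown in a few lines of Python (`median` is an invented example): the "canned values" survive, but as live assertions after the definition rather than comments, so every load of the file re-runs them.

```python
def median(xs):
    # The function under development.
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

# The canned test calls stay, but as executable checks rather than
# commented-out text; loading the file re-verifies the function.
assert median([3, 1, 2]) == 2
assert median([4, 1, 3, 2]) == 2.5
```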
rem642b said:
I don't know anybody who has surplus money and who has any desire for
me to write any software for them. No company in this whole SF bay area
is hiring programmers.

I heard a rumor that everyone with significant "XP" on their resume, in the
Bay Area, was hitched. But I wouldn't know...

But I don't mean "Windows XP" ;-)
 

Richard Riehle

Phlip said:
I heard a rumor that everyone with significant "XP" on their resume, in the
Bay Area, was hitched. But I wouldn't know...
In the SF Bay area, as far north as Napa, as far south as Salinas,
as far east as Merced, one keeps running into pizza chefs, store
clerks, security guards, insurance salespersons, real estate
salespeople, handymen, etc., who claim that they used to be computer
programmers. Throughout Silicon Valley, one meets former
engineers whose skill set was so narrowly focused that they
were laid off in the last round. It is sometimes amazing to me,
and humbling, that so many highly educated, well-trained,
technologists are now engaged in low-paying service industry
jobs, even as some large companies continue to import
specialized engineers from abroad.

Richard Riehle
 

Phlip

Richard said:
In the SF Bay area, as far north as Napa, as far south as Salinas,
as far East as Merced, one keeps running into Pizza chefs, store
clerks, security guards, insurance salespersons, real estate sale
people, handymen, etc., who claim that they used to be computer
programmers.

Maybe they wrote lots of bugs...

(I know I know - many dot-coms were pumped and dumped, based on investors'
abilities to blame programmers, regardless of their proficiency.)
 

rem642b

Bernd Paysan said:
When you start programming a function, you define it on the command
prompt, as you do with the manual unit tests.

There are at least three UI configurations where this is the opposite
of what's actually done:
- When using a CL IDE, such as Macintosh Allegro CommonLisp that I used
on my Mac Plus before it died: The user doesn't type code into the
command window, but rather composes code in the edit window, then uses
command-E to execute whatever s-expression is adjacent to the cursor.
- When using EMACS-LISP, something similar I presume.
- What I'm doing currently: McSink text editor on Macintosh, VT100
emulator connecting me to Unix, CMUCL running on Unix. I compose code
in a McSink window, copy and paste to VT100 window whereupon it is
transmitted to Unix and fed on stdin to CMUCL.
Bernd Paysan said:
If it works (and the tests are supplied with the proper commands),
you can cut&paste them from the command history into your source
code, and then you have automated unit tests.

I don't know about EMACS-LISP, but in both MACL and McSink/VT100/CMUCL,
there's a scrolling transcript of the current Listener or dialup
session respectively. I already have the "command" (the s-expression
fed into READ-EVAL) locally in my edit window. If I want to retain a
copy of the output (from PRINT), that's the only part I need to copy
from the Listener/dialup window and paste into my edit window. But for
manual tests, I just eyeball the result when testing; it's obvious
whether it worked or not. Sometimes the result is just a number, such
as an index into a string, which I can't tell is correct just by
looking at it. Then I'll copy the result into a comment alongside the
"command" expression, after first doing an additional check to make
sure it was correct. For example, to test the result of a SEARCH or
POSITION, I do a SUBSEQ call to see the part of the string starting or
ending with what it had allegedly found, so I know it found the correct
thing. That matters especially in my recent work, where I'm parsing 30k
HTML files which are Yahoo! Mail output and the index where something
was found is typically 15-20k into the long string.

So anyway, it's pretty easy to collect whatever you need, input and/or
output from a test, as you go along, except for very long strings and
large structures where you don't want to include verbatim the whole
thing but instead want to save it to a file and have your test rig read
the file to get input for the function under test. Very flexible what
to actually do from moment to moment as needed...

If I were getting paid, and I'm not the only programmer working on the
code, I'd want to set up something more formal: Each test-data input
file would be formally registered and kept as-is with nobody allowed to
change it without permission. Then a test suite for a batch of code
could confidently perform some sort of read of that file to get the
first item of test data, pass that data through the first function and
compare output with what was supposed to be the output, then pass that
output and possibly more canned test data to the next function, etc.
testing all the various functions one-by-one in sequence from raw input
through processing stages to final output.

Often any single one of those major data-processing-pipeline functions
is composed of calls to several auxiliary functions. It'd be easy to
use that sequence of calls to directly produce a test rig, based on the
master data flow, for each of those auxiliary functions. So the
finished test suite would, after loading the canned-test-data file,
first test all calls to auxiliary functions in sequence within the
dataflow of the single first major pipeline function, then test that
one main dataflow function as a gestalt, then likewise test the inside
and then the whole of the second, etc. down the line. Of course if one of the auxiliary
functions is itself composed of pieces of code that needs to be tested,
the same breakdown could be done another level deeper as needed (test
parts in sequence before testing whole as gestalt). Of course for
functions that take small easily expressed parameters, instead of huge
strings, they could be tested with literal constant data instead of
data from dataflow from canned-test-data file. For testing boundary
conditions, error conditions, etc., this would be useful. What about
functions that take huge inputs but where it'd be nice to test boundary
cases? Well then we just have to contrive a way to generate
boundary-case input from the given valid canned-test-data file, or
create a new canned-test-data file just for these exceptional cases.
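The scheme described above, one canned input flowing through the pipeline with each stage checked against its expected intermediate output before the next stage consumes it, can be sketched briefly. The stage functions here are invented stand-ins for the real HTML-parsing steps:

```python
def strip_tags(html):
    # Drop everything between '<' and '>' (a toy tag stripper).
    out, in_tag = [], False
    for ch in html:
        if ch == "<":
            in_tag = True
        elif ch == ">":
            in_tag = False
        elif not in_tag:
            out.append(ch)
    return "".join(out)

def tokenize(text):
    return text.split()

def count_words(tokens):
    return len(tokens)

CANNED_INPUT = "<p>hello brave world</p>"

# Each pipeline stage paired with its expected intermediate output,
# playing the role of the registered canned-test-data file.
PIPELINE = [
    (strip_tags, "hello brave world"),
    (tokenize, ["hello", "brave", "world"]),
    (count_words, 3),
]

def test_pipeline(data=CANNED_INPUT):
    # Test each function in sequence from raw input to final output.
    for stage, expected in PIPELINE:
        data = stage(data)
        assert data == expected, f"{stage.__name__} produced {data!r}"
    return data
```

Each auxiliary function gets checked one-by-one along the master data flow, and the final return value is the gestalt result of the whole chain.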

I wish somebody would hire me to do programming work for them, so I
could put my ideas into commercial practice...
 

Maynard Handley

Phlip said:
Maybe they wrote lots of bugs...

(I know I know - many dot-coms were pumped and dumped, based on investors'
abilities to blame programmers, regardless of their proficiency.)

To be fair here, are these people who assumed that stringing HTML 3.x
together counted as programming? God knows 6 yrs ago Silicon Valley was
full of those.

Maynard
 

Eray Ozkural exa

Maynard Handley said:
To be fair here, are these people who assumed that stringing HTML 3.x
together counted as programming? God knows 6 yrs ago Silicon Valley was
full of those.

Well now it's XML and Visual Basic/C#, what's the difference?

Cheers,
 

Chris Morgan

Two contributions I haven't seen in this thread :

The Practice of Programming (Kernighan and Pike)

and (maybe, one day)

Joel on Software, Joel Spolsky

Chris
--
Chris Morgan
"Post posting of policy changes by the boss will result in
real rule revisions that are irreversible"

- anonymous correspondent
 

Eric Hamilton

Thanks for a stimulating topic.

I heartily agree that Mythical Man Month is essential reading for
anyone who wants to understand large scale software projects.

The other essential on my book case is Lakos' "Large Scale C++ Software
Design". It's applicable to any language and has enough rationale that's
grounded in real development practices and the problems of large scale
projects that I think it's relevant to the original topic.

A few years ago, I happened to reread Brooks and wrote up a collection of
his insights that resonated with me. I've attached it below in hopes of
whetting the appetite of anyone who hasn't already read it and as a reminder
for those who haven't reread it recently. I encourage everyone to
(re)read the full book.

Eric

==============

Notes from re-reading "The Mythical Man-Month" by Frederick P. Brooks, Jr.

I went looking for a quotation about the value of "system design" and
ended up reading most of the book because it has many insights into the
challenges of producing large-scale software projects.

Some highlights:

In the preface Brooks says that while OS/360 had some "excellencies in design
and execution", it had some noticeable flaws that stem from the design process.

- "Any OS/360 user is quickly aware of how much better it should be."
- OS/360 was late, took more memory than planned, cost several times the
estimate, and did not perform very well until several releases after the
first.

His central argument is:

- "Briefly, I believe that large programming projects suffer management
problems different in kind from small ones, due to division of labor. I
believe the critical need to be the preservation of the conceptual
integrity of the product itself."

Why are industrial teams apparently less productive than garage duos?
- Must look at what is being produced.
- program: complete in itself, written by author for own use
- programming product: more generalized, for use by others
- programming system: collection of interacting programs with
interfaces and system interactions.
- programming system product: both product and system

Interesting exposition of the "joys of the craft" of programming:
- joy of making things
- pleasure of making things that are useful to others
- fascination of making complex puzzle-like objects
- joy of always learning due to non-repeating nature of the task
- delight in working in such a tractable medium

Also "woes of the craft"
- one must perform perfectly
- others set objectives, provide resources, furnish information
- dependence on others' poor programs
- finding nitty bugs is just work
- linear convergence of debugging
- appearance of obsolescence by time you ship when you compare what you
ship to what others imagine

An analysis of sources of programmer optimism:
- creative activity comprises the idea, the implementation, the
interaction with the user
- tractability of medium leads us to believe it should implement easily
- other media place constraints on what can be imagined and limitations of
media mask mistakes in the ideas

Man-month: fallacy of lack of communication or serialization

Naive preference for "small sharp team of first-class people" ignores the
problem of how to build a *large* software system.

Harlan Mills proposed "surgical team" approach. [Not applicable everywhere.]

Conceptual integrity:
- Analogy to architectural unity of Reims cathedral vs. others that were
"improved" inconsistently.

- "I will contend that conceptual integrity is the most important
consideration in system design. It is better to have a system omit
certain anomalous features and improvements, but to reflect one set of
design ideas, than to have one that contains many good but independent
and uncoordinated ideas."

- The purpose of a programming system is to make a computer easy to
use. [We may modify purpose to be to make it easy to do the things that
our customers need done.]
- Ratio of function to conceptual complexity is the ultimate test of
system design.
- For a given level of function, that system is best in which one can
specify things with the most simplicity and straightforwardness.

Careful division of labor between architecture and implementation allows
conceptual integrity in large projects.
- Architecture: complete and detailed specification of the user interface
(for OS/360 the programming manual).
- We may want to consider what is the right specification for <project>
- Architect is "user's agent". Brings "professional and technical
knowledge to bear in the unalloyed interest of the user."
- Architecture tells what happens, implementation tells how.

Argues that designing implementations is equally creative work as
architecture. Cost-performance ratio depends most heavily on implementer;
ease of use most heavily on architect.

External provision of architecture enhances creativity of implementors. They
focus on what they uniquely do. Unconstrained, most thought and debate go
into architectural decisions, with not enough effort on implementation.

Experience shows that integral systems go together faster and take less time
to test.

Need coordination and feedback between architect and builder to bound
architectural enthusiasm.

Second system effect:
- overextend and add too many bells and whistles
- may spend too much optimizing something being superseded by events

Communication & decision making
- Strong belief in written specifications
- architects meetings
- emphasis on creativity in discussions
- detailed change proposals come up for decisions
- chief architect presides & has decision making power
- broad "supreme court" sessions handle backlog of issues, gripes, etc.

Organization:
- Talks of "producer" & "technical director or architect"
- Either can report to the other depending on circumstances & people

"Plan to throw one away; you will, anyhow."

"The most pernicious and subtle bugs are system bugs arising from mismatched
assumptions made by the authors of various components. ... Conceptual
integrity of the product not only makes it easier to use, it also makes it
easier to build and less subject to bugs."
 

Anne & Lynn Wheeler

Eric Hamilton said:
Thanks for a stimulating topic.

I heartily agree that Mythical Man Month is essential reading for
anyone who wants to understand large scale software projects.

The other essential on my book case is Lakos' "Large Scale C++
Software Design". It's applicable to any language and has enough
rationale that's grounded in real development practices and the
problems of large scale projects that I think it's relevant to the
original topic.

A few years ago, I happened to reread Brooks and wrote up a
collection of his insights that resonated with me. I've attached it
below in hopes of whetting the appetite of anyone who hasn't already
read it and as a reminder for those who haven't reread it recently.
I encourage everyone to (re)read the full book.

one of boyd's observations about general US large corporations starting
at least in the 70s was rigid, non-agile, non-adaptable operations.
he traced it back to training a lot of young people received in ww2 in
how to operate large efforts (people who were starting to come into
positions of authority) ... and he contrasted it to guderian and the
blitzkrieg.

guderian had a large body of highly skilled and experienced people
.... for whom he outlined general strategic objectives and left the
tactical decisions to the person on the spot .... he supposedly
proclaimed verbal orders only ... on the theory that the auditors going
around after the fact would not find a paper trail to blame anybody
when battle execution had glitches. the theory was that the trade-off
of letting experienced people on the spot feel free to make decisions
w/o repercussions more than offset any possibility that they might make
mistakes.

boyd contrasted this with the much less experienced american army with
few really experienced people which was structured for heavy top-down
direction (to take advantage of skill scarcity) ... the rigid top-down
direction with little local autonomy would rely on logistics and
managing huge resource advantage (in some cases 10:1).

part of the issue is that rigid, top-down operations are used to manage
large pools of unskilled resources. on the other hand, rigid top-down
operations can negate any advantage of a skilled resource pool (since
skilled people will typically be prevented from exercising their own
judgement).

so in the guderian scenario .... you are able to lay out strategic
objectives and then allow a great deal of autonomy in achieving
tactical objectives (given a sufficient skill pool and clear strategic
direction).

random boyd refs:
http://www.garlic.com/~lynn/subboyd.html#boyd
http://www.garlic.com/~lynn/subboyd.html#boyd2
 

Anne & Lynn Wheeler

Eric Hamilton said:
Harlan Mills proposed "surgical team" approach. [Not applicable everywhere.]

Conceptual integrity:
- Analogy to architectural unity of Reims cathedral vs. others that were
"improved" inconsistently.

- "I will contend that conceptual integrity is the most important
consideration in system design. It is better to have a system omit
certain anomalous features and improvements, but to reflect one set of
design ideas, than to have one that contains many good but independent
and uncoordinated ideas."

- The purpose of a programming system is to make a computer easy to
use. [We may modify purpose to be to make it easy to do the things that
our customers need done.]
- Ratio of function to conceptual complexity is the ultimate test of
system design.
- For a given level of function, that system is best in which one can
specify things with the most simplicity and straightforwardness.

i was at a talk that harlan gave at the 1970 se symposium ... that
year it was held in DC (which was easy for harlan since he was local
in fsd) ... close to the river on the virginia side (marriott? near a
bridge ... I have recollections of playing hooky one day and walking
across the bridge to the smithsonian).

it was all about the super programmer and librarian .... i think the
super programmer was a reaction to the large low-skilled hordes ... and
the librarian was to take some of the administrative load off the super
programmer.

i remember years later somebody explaining that managers tended to
spend 90% of their time with the 10% least productive people ... and
that 90% of the work was frequently done by the 10% most productive
people; it was unlikely that anything that a manager did was going to
significantly improve the 10% least productive members .... however if
they spent 90% of their time helping remove obstacles from the 10%
most productive ... and even if that only improved things by 10%
.... that would be the most beneficial thing that they could do. This
was sort of the librarian analogy from harlan ... that managers
weren't there to tell the highly skilled people what to do ... managers
were to facilitate and remove obstacles from their most productive
people.

this is somewhat more consistent with one of boyd's talks on the
organic design for command and control.
 

Anne & Lynn Wheeler

Anne & Lynn Wheeler said:
i was at a talk that harlan gave at the 1970 se symposium ... that
year it was held in DC (which was easy for harlan since he was local
in fsd) ... close to the river on the virginia side (marriott? near
a bridge ... I have recollections of playing hooky one day and
walking across the bridge to the smithsonian).

this marriott has bugged my memory across some period of posts
http://www.garlic.com/~lynn/2000b.html#20 How many Megaflops and when?
http://www.garlic.com/~lynn/2000b.html#24 How many Megaflops and when?
http://www.garlic.com/~lynn/2000b.html#25 How many Megaflops and when?
http://www.garlic.com/~lynn/2000c.html#64 Does the word "mainframe" still have a meaning?
http://www.garlic.com/~lynn/2001h.html#48 Whom Do Programmers Admire Now???
http://www.garlic.com/~lynn/2002i.html#49 CDC6600 - just how powerful a machine was it?
http://www.garlic.com/~lynn/2002q.html#51 windows office xp
http://www.garlic.com/~lynn/2003g.html#2 Share in DC: was somethin' else
http://www.garlic.com/~lynn/2003k.html#40 Share lunch/dinner?
http://www.garlic.com/~lynn/2004k.html#25 Timeless Classics of Software Engineering

so doing some searching ... this is a picture of approx. what i remember
http://www.hostmarriott.com/ourcompany/timeline_twin.asp?page=timeline

this lists a ieee conference at twin bridge marriott, washington dc in '69
http://www.ecs.umass.edu/temp/GRSS_History/Sect6_1.html

this lists first marriott motor hotel, twin bridges, washington dc
http://www.hrm.uh.edu/?PageID=185

and this has a reference to the site of the former Twin Bridges Marriott
having been razed several years ago
http://www.washingtonpost.com/wp-srv/local/counties/arlngton/longterm/wwlive/crystal.htm
 
