a defense of ad hoc software development


Marc Girod

That's your problem.

And not the company's you mean?
That's a matter of trade-off. If 'the company'
chooses to hire somebody who matches some
non-programmer's pre-conceptions, rather than
somebody who would write better code, it
should be 'its' problem as well.

I don't like personalizing entities such as
'companies'. In practice, the issue is one of
power inside the company: what expertise is
valued, and what is not. The interests which
get optimized are those of individuals.
 Clearly there are employed programmers.

Sure. But are they employed for what they
really do, or for the perception their
managers (or the HR department) have of it?
How good is the match?
Whatever.  I am not a job search agency.
(BTW, these "I challenge you"-type
utterances must be a clear winner in job interviews.)

This remark shows a worrisome asymmetry in
the contractual relationship. Again, such an
asymmetry does not optimize the interest of
-say- the shareholders.
These things are project specific, and are usually decided between you
and your employer/customer.  Eg if you are writing a library for
numerical applications, most employers would prefer that you have a
clear understanding of floating-point arithmetic (eg on the level of
"What Every Scientist Should Know ...").

You mean, of something *they* don't have...
The risk is that of the urban legend...
I believe one meets it concretely with GUIs,
which are felt to be important by people who
don't use them, or with comments in the code,
by people who don't read them, etc.
Floating-point arithmetic also reflects a
viewpoint, even if arguably a more educated
one.
But the more you know, the less your customer will have to explain.

Optimizing communication by avoiding it.
I.e. relying upon the fact that it has
already happened: that the knowledge is
indeed shared, relevant, complete...
That's not cutting-edge stuff... No
competitive advantages to be gained.
Is it really relevant to software?
And the flipside is that if you don't know anything, sometimes
introducing you to the problem domain is prohibitively costly, so
people become their own programmers (which happens in science very
frequently).

Right. That's why Torvalds designed git,
and Behlendorf subversion: because they
felt it was less work to write from
scratch what they perceived they needed,
than to communicate with SCM people.

Marc
 

ccc31807

I'm less familiar with military specifications, but I would guess
these are like Internet specs, where you might code an API module
for each set of specs, coding directly from the specs, then write
the rest of the application in a more experimental style.

http://homepage.mac.com/simon.j.wright/pushface.org/mil_498/index.htm

Read it and weep.
It is sometimes desirable to quickly write non-functional UI which
shows the user how it "looks", and get their approval of that,
before starting the real code under the UI.

Absolutely. However, one hundred percent of my scripts are designed to
be run from the command line, sometimes as cron jobs, sometimes called
by DOS batch files (for users uncomfortable with the CLI). I've
learned to write real specific prompts for the input data, and write
sanity checks throughout (e.g., "You entered the date of August 71,
2009. Do you want to continue? [y|n]")

CC
 

Greg Menke

ccc31807 said:

Nice!

And whatever you write Shall Conform to several (possibly
contradictory) coding standards both locally defined and those visited
upon the world by ISO/CMMI (as mandated by management who doesn't
actually have to use them). And whatever you do end up coding has to be
documented according to appendices of the aforementioned standards
specs, and all changes to it also Shall Conform to the change control
specs. And when you put your (surviving) code through Validation &
Verification, the pencil pushers who apply that process have full
editorial control over your code- from accept to change to reject, and
are unencumbered by any knowledge, responsibility or accountability for
the project.

And while you contend with all that, you have the undocumented changes
(demands) levied upon you by others up the food chain which have to go
in amongst the formal stuff.

It is sometimes desirable to quickly write non-functional UI which
shows the user how it "looks", and get their approval of that,
before starting the real code under the UI.

Absolutely. However, one hundred percent of my scripts are designed to
be run from the command line, sometimes as cron jobs, sometimes called
by DOS batch files (for users uncomfortable with the CLI). I've
learned to write real specific prompts for the input data, and write
sanity checks throughout (e.g., "You entered the date of August 71,
2009. Do you want to continue? [y|n]")

And sometimes the "look" of the operating code requires the real code.
Sometimes the "UI" is a packet exchange protocol, and with debugging
enabled, a series of messages sent to a debugging console.

Gregm
 

Greg Menke

+---------------
| And whatever you write Shall Conform to several (possibly
| contradictory) coding standards both locally defined and those visited
| upon the world by ISO/CMMI (as mandated by management who doesn't
| actually have to use them). And whatever you do end up coding has to be
| documented according to appendices of the aforementioned standards
| specs, and all changes to it also Shall Conform to the change control
| specs. And when you put your (surviving) code through Validation &
| Verification, the pencil pushers who apply that process have full
| editorial control over your code- from accept to change to reject, and
| are unencumbered by any knowledge, responsibility or accountability for
| the project.
+---------------

Charles Stross <http://www.antipope.org/charlie/blog-static/fiction/faq.html>
does a pretty good job of capturing the insanity of this in his
"Laundry" stories ["The Atrocity Archives", "The Concrete Jungle",
"The Jennifer Morgue", and "The Fuller Memorandum" (due July 2010)].
E.g., exactly how *does* a top-secret intelligence agency deal with
ISO-9002 conformance and outsourced COTS application infrastructure
and audits by the BSA(!) and DRM on essential operational software?!?
;-} ;-}

I can't speak for the spooks but the problem exists to some degree for
anyone who has to work with sensitive data. In part you deal with it by
being very nuanced with respect to "conforming".

For instance, perhaps requirements from On High (well meaning but
blissfully abstract) say that 100% of code Shall Conform to ISO blahblah
and CMMI Bend-Over-And-Take-It Standard Rev
259-stroke-V-stroke-XYZPDQ-rev-OMFG!!! and you have to use a proprietary
operating system. So you say OK, well we don't have the $$$ for a code
audit of the OS and there's no way we could get a copy of the source
anyhow, (and even if we did that P.O.S wouldn't stand up to a review by
my 4 yr old anyhow) so we shall view it as a COTS product and not
something we write, and thus mark it "inapplicable" in the code review
packages. And, hope we can defend that choice later on. Probably will
work OK because everybody else does that too, and nobody is going to
look at that stuff closely unless we really blow it, the project tanks
because of us, and it was worth enough for Official Reviews with the
knives out.

You can dodge some of the phone home licensing stupidity by spending
many extra tax-payer $$$ for node-locked and perpetual licenses so as to
avoid the internet traffic back to the vendor. (many such licenses are
discarded at the end of the project because by then they're old or just
forgotten about).

I've not witnessed it myself but I've often wondered if some of the big
enterprise support contracts with the big IT vendors represent payola
for the inevitable licensing snafus which show up in any big
organization- easier to just pay the "protection money" up front- lots
of extra $$$ but at least it saves the audit chaos.

Sometimes work is like a big s&@t sandwich and you're paid for your
appetite... but if you can stomach the yuck there is a lot of
opportunity to work on interesting projects.

Gregm
 

Robert Maas, http://tinyurl.com/uh3t

Pascal J. Bourguignon said:
In theory. But in practice, you still need to bump against other
implementations to learn how to treat the cases that are not covered
by the protocol.

Yeah. I was thinking more of the well established RFCs that have
become de facto technical standards, not poorly written (ambiguous)
early drafts. Indeed RFC means Request For Comments, and with an
early ambiguous draft that should be taken literally, i.e. comments
*should* be offered. (-: "should" in the RFC sense of course. :)

And just a clarification: I said "very little" rather than "no"
experimentation. With well established de-facto-standard RFCs, I
emphasize "very". With not so established RFCs, omit "very", and
with poorly written RFCs your point wins.
There are RFCs that specifically describe the
"best practices", which means that there are a lot of
implementations implementing "worst practices" or "good-enough
practices".

I was thinking of the tech-spec RFCs, which say things like return
code 550 means refused for policy reasons, which IMO really ought
to be respected as best as is possible. There's a little bit of
judgement if more than one return code could possibly apply, but
the basic protocol ought to be very rigid regarding syntax of
client requests and syntax of server responses.
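The rigidity argued for here is cheap to enforce on the receiving side. A hypothetical sketch (the function is illustrative, not taken from any RFC):

```python
def parse_reply(line):
    """Parse an SMTP-style 'CODE text' reply line, rejecting anything
    that violates the rigid three-digit-code syntax."""
    code, _, text = line.partition(" ")
    if len(code) != 3 or not code.isdigit():
        raise ValueError(f"malformed reply line: {line!r}")
    return int(code), text

code, text = parse_reply("550 refused for policy reasons")
print(code)  # 550, the policy-refusal code mentioned above
```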
 

Mark Tarver

On page 229 of Paul Graham's 'ANSI Common Lisp' he writes:

 <quote>
     If programming were an entirely mechanical process -- a matter of
simply translating specifications into code -- it would be reasonable
to do everything in a single step. But programming is never like that.
No matter how precise the specifications, programming always involves
a certain amount of exploration -- usually a lot more than anyone had
anticipated.
     It might seem that if the specifications were good, programming
would simply be a matter of translating them into code. This is a
widespread misconception. Programming necessarily involves
exploration, because specifications are necessarily vague. If they
weren't vague, they wouldn't be specifications.
     In other fields, it may be desirable for specifications to be as
precise as possible. If you're asking for a piece of metal to be cut
into a certain shape, it's probably best to say exactly what you want.
But this rule does not extend to software, because programs and
specifications are made out of the same thing: text. You can't write
specifications that say exactly what you want. If the specification
were that precise, then they would be the program.
</quote>

In a footnote, Graham writes:

<quote>
     The difference between specifications and programs is a
difference in degree, not a difference in kind. Once we realize this,
it seems strange to require that one write specifications for a
program before beginning to implement it. If the program has to be
written in a low-level language, then it would be reasonable to
require that it be described in high-level terms first. But as the
programming language becomes more abstract, the need for
specifications begins to evaporate. Or rather, the implementation and
the specifications can become the same thing.
     If the high-level language is going to be re-implemented in a
lower-level language, it starts to look even more like specifications.
What Section 13.7 is saying, in other words, is that the
specifications for C programs could be written in Lisp.
</quote>

In my SwE classes, we spent a lot of time looking at process and
processes, including MIL STD 498 and IEEE Std 830-1984, etc. My
professors were both ex-military, one who spent his career in the USAF
developing software and writing Ada, and both were firmly in the
'heavy' camp (as opposed to the lightweight/agile camp).

In my own job, which requires writing software for lay users, all I
ever get is abbreviated English language requests, and I have learned
better than to ask users for their requirements because they have no
idea what requirements are. (As a joke, I have sent a couple a copy of
IEEE Std 830-1984 and told them that I needed something like that, but
the joke went over like a lead balloon -- not funny at all.) Of
necessity I have learned to expect to write a number of versions of
the same script before the user accepts it.

I understand that Graham isn't talking about requirements, and to many
people specifications and requirements amount to the same thing. I
also understand the necessity for planning. However, the Graham quote
seems to me a reasonable articulation for ad hoc development. (It's
something I wanted to say to jue in particular but couldn't find the
words.)

Comments?

CC

QUOTE
You can't write specifications that say exactly what you want. If the
specification
were that precise, then they would be the program.
UNQUOTE

That's a very clever and profound observation from Paul Graham. The
formal methods people might disagree though.

I think the obvious counterexample comes from constructive type
theory, where the specification is a type designed to determine the
program. But the specification is nevertheless not itself a program.
The program emerges as a byproduct of an attempt to prove that the
specification can be met (that the type is inhabited). That's not the
end of the argument; it's just pawn to e4, pawn to e5 in this debate.

Mark
 

dan

Mark Tarver said:
QUOTE
You can't write specifications that say exactly what you want. If the
specification
were that precise, then they would be the program.
UNQUOTE

That's a very clever and profound observation from Paul Graham. The
formal methods people might disagree though.

So would I.

I would like a program that calculates the number x for which x*x=52

That specification says exactly what I want, but it's of no help at all
in creating an algorithm for how to get it.
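One algorithm that meets this specification (Newton's iteration, a detail the specification says nothing about) can be sketched in Python:

```python
def newton_sqrt(n, tolerance=1e-9):
    """Find x with x*x approximately equal to n by Newton's iteration.

    The iteration x -> (x + n/x) / 2 is one of many procedures that
    satisfy the specification "x*x = 52"; the spec does not pick it.
    """
    x = n / 2.0
    while abs(x * x - n) > tolerance:
        x = (x + n / x) / 2.0
    return x

print(newton_sqrt(52))  # roughly 7.2111025509
```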


-dan
 

Mark Tarver

So would I.

I would like a program that calculates the number x for which x*x=52

That specification says exactly what I want, but it's of no help at all
in creating an algorithm for how to get it.

-dan

Actually, and coincidentally, that's the very example which is used in
Constable's book of 25 years ago to introduce CTT via Nuprl. And the
specification looks almost exactly like yours. And the program is
derived without hacking any code.
The book is

Implementing Mathematics with the Nuprl Proof Development System.
Prentice-Hall, Engelwood Cliffs, NJ, 1986 (with PRL Group).

ftp://ftp.cs.cornell.edu/pub/nuprl/doc/book.ps.gz

might deliver it up.

Mark
 

Mark Tarver

Actually, and coincidentally, that's the very example which is used in
Constable's book of 25 years ago to introduce CTT via Nuprl.  And the
specification looks almost exactly like yours.  And the program is
derived without hacking any code.
The book is

Implementing Mathematics with the Nuprl Proof Development System.
Prentice-Hall, Engelwood Cliffs, NJ, 1986 (with PRL Group).

ftp://ftp.cs.cornell.edu/pub/nuprl/doc/book.ps.gz

might deliver it up.

Mark

However that said, Graham is 90% right. You just have to reformulate
what he says a little from

You can't write specifications that say exactly what you want. If the
specification were that precise, then they would be the program.

to

You can't write specifications that say exactly what you want. If the
specification were that precise, then they would be the program or as
long as the program.

And that's about right. But there is not a lot of point in
specifications that are as long as the program because then there is
as much chance of making an error in the specification as in the
program. Hence the specification brings no actual security.

Ideally specifications should be

1. Totally determinant of the correctness of the program.
2. Short and dazzlingly clear.

However these two requirements are generally at odds with one
another. If you buy into 'short and dazzlingly clear' then your
specification will likely underspecify. If you buy into 'totally
determinant' then your specification is likely to approach the length
of the program.

In the case of CTT, when you count in the axioms and rules needed to
synthesise a program, the length actually exceeds the length of the
program by a wide margin. For instance in my short intro to CTT in
Qi, I formally specify and synthesise a program that adds two numbers
together - with the aid of 24 pages of math'l logic.

http://www.lambdassociates.org/Lib/Logic/Martin-Lof/martin-lof.pdf

The lesson to be learned is that, though with great effort you can
reduce programming to a math'l activity, it may not gain in clarity
or accuracy from so doing.

Mark
 

Martijn Lievaart

You can't write specifications that say exactly what you want. If the
specification were that precise, then they would be the program or as
long as the program.

There is a difference. The specifications should be readable for domain
experts and the programmers. I've seen plenty of cases where the
specifications were actually a fair bit longer than the program itself.

M4
 

Alessio Stalla

It is also demonstrably false.


The formal methods people are also wrong, but for different reasons.


You don't have to get anywhere near that esoteric.  There are many
precise specifications that are nonetheless very hard or impossible to
render into code, the classic example being F(X,Y)=TRUE if and only if X
is the encoding of a program that halts when run on input Y.  Imprecise
specifications are only one of many potential challenges to producing
code that does what you want.  Inverting SHA1, factoring RSA1024, and
decrypting AES256 are all specifications that can be rendered with
absolute precision.  But that will not help you at all when it comes
time to code them up.

Those are all *requirements* specifications, i.e. *what* the program
should do. Like the OP, I don't think PG is talking about
requirements, but rather about design specifications (e.g. UML
diagrams and the like) - that is, about a high-level description of
*how* the program should work. With that in mind, I agree 100% with
Paul Graham.
That said, high-level descriptions of a program can still have a
useful role for documentation purposes, that is, if they are
synthesized from the program (and not vice-versa as software
engineering adepts believe).

Alessio Stalla
 

Tim X

Alessio Stalla said:
Those are all *requirements* specifications, i.e. *what* the program
should do. Like the OP, I don't think PG is talking about
requirements, but rather about design specifications (e.g. UML
diagrams and the like) - that is, about a high-level description of
*how* the program should work. With that in mind, I agree 100% with
Paul Graham.
That said, high-level descriptions of a program can still have a
useful role for documentation purposes, that is, if they are
synthesized from the program (and not vice-versa as software
engineering adepts believe).
I agree with your interpretation of what PG was referring to. I also
think your other points are correct and concisely expressed.

I have seen specifications that go down to the detail of specifying the
function names, arguments the functions will accept and the return
values - all of which being defined before any code has been written
(remember the Z notation/specification). I believe this is a mistake.
Note that this is different from specifying something like a protocol
and is even different to specifying a high level API interface.
Specifications of this form can be useful and sometimes need to be
there. Such specifications are OK provided the programmer is not limited
to only defining those artifacts. I don't mind a specification that says
I must provide specific function calls with specific signatures provided
I'm also free to create other functions of varying signatures during the
development of the software. These 'other' functions will change and
develop as I explore the problem and will become more refined as I
understand both the problem area and the solution better. In such a
situation, I would not be surprised to find limitations or shortcomings
in the original protocol/api spec, but that's what revisions and updated
standards are for.
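The situation described can be sketched in Python (every name here is hypothetical, purely to illustrate the point): the spec fixes one public signature, while helpers remain free to appear and change during development.

```python
def lookup(key, table):            # signature fixed by the (hypothetical) spec
    """Public entry point whose name and arguments the spec mandates."""
    return table.get(_normalize(key))

def _normalize(key):               # helper invented during exploration;
    return key.strip().lower()     # not in the spec, free to be revised

print(lookup("  Alice ", {"alice": 42}))  # 42
```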

I also think one of the things PG was warning against was the type of
analysis paralysis you often encounter with things like UML and other
formal specification processes. All too often, you end up with a laundry
list of specification requirements, much of which is only of
theoretical use and is never actually used. To a large extent, this is
what much of the agile philosophy attempts to avoid. All too often, you
will see, particularly in OO specs, classes with getters/setters for
absolutely everything, many of which never actually get used. The agile
approach would argue that you don't actually define a method until you
actually need it - you don't define it because you think you may need it
someday.

For me, the key part is that I don't believe you can really understand
all the detailed intricacies of a problem until you actually sit down
and start trying to solve it. It's similar to trying to learn a
programming language. It's not enough to just read about it and look at
examples. You have to actually sit down and start using the language
before you really get to know and understand it. Ultimately, it's the
actual coding that transforms our abstract understanding to more
concrete real understanding. Both processes are important and we should
avoid any process that artificially limits one or the other. Too often,
formal specifications limit our ability to explore and gain increased
insight into the problem, limiting our ability to find the best
solution. Unfortunately, management doesn't like exploration and
experimentation because there are no guarantees of success and you can't
easily estimate completion time. Such things make them nervous because
they have hard limits wrt resources and deadlines.

Tim
 

Ron Garret

Alessio Stalla said:
Those are all *requirements* specifications, i.e. *what* the program
should do. Like the OP, I don't think PG is talking about
requirements, but rather about design specifications (e.g. UML
diagrams and the like) - that is, about a high-level description of
*how* the program should work. With that in mind, I agree 100% with
Paul Graham.

Ah. Yes, going back and looking at the context it seems you're right.
The phrase "what you want" is a bit misleading out of context.
That said, high-level descriptions of a program can still have a
useful role for documentation purposes, that is, if they are
synthesized from the program (and not vice-versa as software
engineering adepts believe).

No, I don't think so. My point was that there's an unbridgeable gulf
between "what" and "how". You can't go back and forth between them
automatically in either direction (except in trivial ways like
extracting function signatures).

rg
 

Peter J. Holzer

So would I.

I would like a program that calculates the number x for which x*x=52

That specification says exactly what I want,

No. Most importantly it doesn't say what the output should look like.
Mathematically, the result is 2*sqrt(13). But most likely you don't want
a symbolic result, but a floating point number with some number of
(binary or decimal) digits.
but it's of no help at all in creating an algorithm for how to get it.

It is if your programming language includes a primitive or library
function for solving quadratic equations.

I would like to change one word in PG's quip:

You can't write specifications that say exactly what you want. If the
specification were that precise, then they would be *a* program.

If the specification doesn't say how to solve a small, well-understood
subproblem, that's not very much different from calling a library
function in a program. As a different example, consider sorting: The
specification may specify the intended sort order but not the sorting
algorithm. In a similar way, many programming languages provide a sort
function. The programmer using this function typically doesn't care
whether that sort function implements a quickersort, heapsort or
mergesort, or some hybrid scheme. He just specifies an order (often in
the form of a comparison function) and leaves the details to the
implementation.
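In Python this division of labor is literal: the caller specifies the order via a key function, and sorted() is free to implement it however it likes (CPython happens to use Timsort, but the caller's "spec" doesn't care):

```python
# The caller specifies the order, not the algorithm.
records = [("carol", 3), ("alice", 1), ("bob", 2)]
by_number = sorted(records, key=lambda r: r[1])
print(by_number)  # [('alice', 1), ('bob', 2), ('carol', 3)]
```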

The more I think about this the more I think PG has stumbled upon a
truth here: A specification really is a program. But the "programming
language" in which it is written is almost always declarative and not
imperative. There usually isn't any interpreter for the language except
the brains of the audience (although parts of the specification/program
may be written in a formal language).

hp
 

Peter J. Holzer

QUOTE
You can't write specifications that say exactly what you want. If the
specification were that precise, then they would be the program.
UNQUOTE

That's a very clever and profound observation from Paul Graham.

It is also demonstrably false. [...]
You don't have to get anywhere near that esoteric. There are many
precise specifications that are nonetheless very hard or impossible to
render into code, the classic example being F(X,Y)=TRUE if and only if X
is the encoding of a program that halts when run on input Y. Imprecise
specifications are only one of many potential challenges to producing
code that does what you want. Inverting SHA1, factoring RSA1024, and
decrypting AES256 are all specifications that can be rendered with
absolute precision.

They can also be coded almost as easily as specified. The program just
won't complete in any reasonable time.
But that will not help you at all when it comes time to code them up.

That's because you don't want just any code which solves the problem,
you want fast code which solves it. And while it is easy to write into
the specification "must finish within X seconds" it may be hard to find
code which satisfies this constraint.
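A sketch of the first point, assuming nothing beyond the standard library: brute-force SHA-1 preimage search is nearly a transcription of the specification, and is hopeless for an arbitrary 160-bit target.

```python
import hashlib
import string
from itertools import count, product

def invert_sha1(target_hex, alphabet=string.ascii_lowercase):
    """Find m with SHA1(m) == target by exhaustive search.

    This matches the specification exactly; it just won't complete
    in any reasonable time for a real target."""
    for length in count(1):
        for chars in product(alphabet, repeat=length):
            m = "".join(chars).encode()
            if hashlib.sha1(m).hexdigest() == target_hex:
                return m

# Feasible only for toy targets, e.g. a known three-letter preimage:
print(invert_sha1(hashlib.sha1(b"abc").hexdigest()))
```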

IMHO that fits well with comment in the other posting: The specification
is *a* program, but it is not *the* program, because even if a suitable
interpreter exists (or translation into an existing programming language
is straightforward) it would often be ridiculously inefficient.

hp
 

Ron Garret

Peter J. Holzer said:
No. Most importantly it doesn't say what the output should look like.
Mathematically, the result is 2*sqrt(13). But most likely you don't want
a symbolic result, but a floating point number with some number of
(binary or decimal) digits.


It is if your programming language includes a primitive or library
function for solving quadratic equations.

I would like to change one word in PG's quip:

You can't write specifications that say exactly what you want. If the
specification were that precise, then they would be *a* program.

If the specification doesn't say how to solve a small, well-understood
subproblem, that's not very much different from calling a library
function in a program. As a different example, consider sorting: The
specification may specify the intended sort order but not the sorting
algorithm. In a similar way, many programming languages provide a sort
function. The programmer using this function typically doesn't care
whether that sort function implements a quickersort, heapsort or
mergesort, or some hybrid scheme. He just specifies an order (often in
the form of a comparison function) and leaves the details to the
implementation.

The more I think about this the more I think PG has stumbled upon a
truth here: A specification really is a program. But the "programming
language" in which it is written is almost always declarative and not
imperative. There usually isn't any interpreter for the language except
the brains of the audience (although parts of the specification/program
may be written in a formal language).

Ironically, it's PG's own lack of precision in specifying what he meant
that is leading to confusion here.

There are two fundamentally different kinds of specifications, those
that specify WHAT the result should be, and those that specify HOW the
result is to be obtained. Both of these can be either vague or
specific. Notwithstanding PG's use of the phrase "what you want" it
appears that he was actually talking about specifying how, not what.

(This distinction is not unique to software. PG's own example of a
piece of metal cut to a particular shape is a HOW specification. A WHAT
specification would be something like: a bracket with a particular
tensile strength...)

The issue PG was addressing (and that this thread seems to be about) is
the perennial debate about how specific a HOW spec should be before you
make your first attempt to render it into a form that is suitable as
input to a compiler. The right answer has a lot to do with how
expensive it is to run your compiler. If compilation is expensive it
makes sense to put more effort into making sure you've thought things
through before you try to compile. If compilation (and more to the
point, recompilation) is cheap then it makes less sense.

Then there is the completely orthogonal issue of how to render WHAT
specs into HOW specs, which is a fundamentally difficult problem. It
has to be. If it weren't you could just take the following WHAT spec:

* A program that takes a WHAT spec and renders it as a HOW spec

and render *that* as a how spec and you'd never again have to write any
code.

In fact, just *writing down* a WHAT spec (let alone finding a
corresponding HOW spec) is a fundamentally difficult problem. You want
a program to compute the square root of 52? What does that actually
mean? What form do you want the answer to take? Symbolic? Floating
point? BCD? A drawing of the diagonal of a square with an area of 52?
How long are you willing to wait for your answer? How many digits of
precision do you want? How much are you willing to pay?
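Those questions have concrete consequences; in Python, different readings of the same WHAT spec yield different artifacts:

```python
import math
from decimal import Decimal, getcontext

print(math.sqrt(52))       # a binary double, about 16 significant digits
getcontext().prec = 40
print(Decimal(52).sqrt())  # the same WHAT spec, answered to 40 decimal digits
# The exact symbolic answer, 2*sqrt(13), is something else again:
# neither program above produces it.
```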

rg
 

Jürgen Exner

Ron Garret said:
There are two fundamentally different kinds of specifications, those
that specify WHAT the result should be,
Yes.

and those that specify HOW the
result is to be obtained.

That would be over-specified.

jue
 

Peter J. Holzer

Peter J. Holzer said:
QUOTE
You can't write specifications that say exactly what you want. If the
specification were that precise, then they would be the program.
UNQUOTE

That's a very clever and profound observation from Paul Graham. The
formal methods people might disagree though.

So would I.

I would like a program that calculates the number x for which x*x=52

That specification says exactly what I want,

No. [...]
but it's of no help at all in creating an algorithm for how to get it.

It is if your programming language includes a primitive or library
function for solving quadratic equations.

I would like to change one word in PG's quip:

You can't write specifications that say exactly what you want. If the
specification were that precise, then they would be *a* program.

If the specification doesn't say how to solve a small, well-understood
subproblem, that's not very much different from calling a library
function in a program. As a different example, consider sorting: The
specification may specify the intended sort order but not the sorting
algorithm. In a similar way, many programming languages provide a sort
function. The programmer using this function typically doesn't care
whether that sort function implements a quickersort, heapsort or
mergesort, or some hybrid scheme. He just specifies an order (often in
the form of a comparison function) and leaves the details to the
implementation.

The more I think about this the more I think PG has stumbled upon a
truth here: A specification really is a program. But the "programming
language" in which it is written is almost always declarative and not
imperative. There usually isn't any interpreter for the language except
the brains of the audience (although parts of the specification/program
may be written in a formal language).
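One way to make the "specification is a declarative program" idea concrete (a sketch with an invented function name): a formal WHAT spec for sorting can itself be executable code, namely a predicate that checks a result without saying how to produce it.

```python
from collections import Counter

def satisfies_sort_spec(inp, out):
    """A declarative WHAT spec for sorting, as a checkable predicate:
    out is in non-decreasing order and is a permutation of inp."""
    ordered = all(a <= b for a, b in zip(out, out[1:]))
    permutation = Counter(inp) == Counter(out)
    return ordered and permutation

print(satisfies_sort_spec([3, 1, 2], [1, 2, 3]))  # True
print(satisfies_sort_spec([3, 1, 2], [1, 2]))     # False
```

Nothing in the predicate says how to sort; it only says what counts as sorted, which is exactly the declarative flavour described above.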

Ironically, it's PG's own lack of precision in specifying what he meant
that is leading to confusion here.

There are two fundamentally different kinds of specifications, those
that specify WHAT the result should be, and those that specify HOW the
result is to be obtained. Both of these can be either vague or
specific. Notwithstanding PG's use of the phrase "what you want" it
appears that he was actually talking about specifying how, not what.

Since I haven't read PG's book I can't comment on that.

The issue PG was addressing (and that this thread seems to be about) is
the perennial debate about how specific a HOW spec should be before you
make your first attempt to render it into a form that is suitable as
input to a compiler. The right answer has a lot to do with how
expensive it is to run your compiler. If compilation is expensive it
makes sense to put more effort into making sure you've thought things
through before you try to compile. If compilation (and more to the
point, recompilation) is cheap then it makes less sense.

Compilation time is only one factor in this. You also need to consider
the cost of testing, the cost of rewriting code, the cost of changing
interfaces, the cost of missing deadlines, etc.

And most importantly it misses the fact that the development of your HOW
spec doesn't stop when "you make your first attempt to render it into a
form that is suitable as input to a compiler", and the development of
your WHAT spec doesn't stop when you start developing your HOW spec.
Users often have only a very vague notion of what they need and they
have poor imagination and abstraction facilities. When you show them
your first prototype they will tell you that they wanted something
completely different.

Then there is the completely orthogonal issue of how to render WHAT
specs into HOW specs, which is a fundamentally difficult problem. It
has to be. If it weren't you could just take the following WHAT spec:

* A program that takes a WHAT spec and renders it as a HOW spec

That's not a WHAT spec. It neither specifies the input nor the output.

and render *that* as a how spec and you'd never again have to write any
code.

I think you missed my point: A formal WHAT spec *is* code. So if you
have that translator you just write code in your declarative WHAT
language instead of your imperative HOW language. And as you write
yourself:

In fact, just *writing down* a WHAT spec (let alone finding a
corresponding HOW spec) is a fundamentally difficult problem.

So you may have solved the simpler part of the problem and be left with
the harder part. (this is not true in all cases - obviously there are
problems which are simple to describe and hard to solve, but in my
experience most of the time the really hard part is the WHAT, not the
HOW).

[rest deleted as it only repeats what I wrote before]

hp
 
Ron Garret

The issue PG was addressing (and that this thread seems to be about) is
the perennial debate about how specific a HOW spec should be before you
make your first attempt to render it into a form that is suitable as
input to a compiler. The right answer has a lot to do with how
expensive it is to run your compiler. If compilation is expensive it
makes sense to put more effort into making sure you've thought things
through before you try to compile. If compilation (and more to the
point, recompilation) is cheap then it makes less sense.

Compilation time is only one factor in this. You also need to consider
the cost of testing, the cost of rewriting code, the cost of changing
interfaces, the cost of missing deadlines, etc.

All true. If any of these are expensive it makes more sense to do more
design up front.
And most importantly it misses the fact that the development of your HOW
spec doesn't stop when "you make your first attempt to render it into a
form that is suitable as input to a compiler", and the development of
your WHAT spec doesn't stop when you start developing your HOW spec.
Users often have only a very vague notion of what they need and they
have poor imagination and abstraction facilities. When you show them
your first prototype they will tell you that they wanted something
completely different.
Yep.


That's not a WHAT spec. It neither specifies the input nor the output.

Sure it does. The input is a WHAT spec. The output is a HOW spec. I
haven't gone into detail about what WHAT specs and HOW specs are and how
they are represented, but I could.
I think you missed my point: A formal WHAT spec *is* code.

I didn't miss that point, I just disagree with it. There are lots of
WHAT specs that are not and cannot be code. The halting problem.
Computing Chaitin's omega. Even inverting SHA1 or factoring large RSA
keys could be examples.
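The SHA1 case can be made concrete (a minimal sketch using Python's `hashlib`; the helper name and the sample input are invented): the WHAT spec is trivial to write down and to check, yet no feasible HOW spec for satisfying it is known.

```python
import hashlib

def is_preimage(candidate: bytes, target_hex: str) -> bool:
    """The entire WHAT spec: candidate hashes to the target digest."""
    return hashlib.sha1(candidate).hexdigest() == target_hex

target = hashlib.sha1(b"some secret input").hexdigest()
print(is_preimage(b"some secret input", target))  # True
print(is_preimage(b"wrong guess", target))        # False
# Checking the spec is one line; *satisfying* it for an arbitrary
# target is believed to require brute-force search.
```

The checker is code, but it gives no route to a program that produces a preimage, which is the asymmetry being pointed at here.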
So if you
have that translator you just write code in your declarative WHAT
language instead of your imperative HOW language. And as you write
yourself:



So you may have solved the simpler part of the problem and be left with
the harder part. (this is not true in all cases - obviously there are
problems which are simple to describe and hard to solve, but in my
experience most of the time the really hard part is the WHAT, not the
HOW).

They can both be hard.

rg
 