Software bugs aren't inevitable

Sybren Stuvel

Terry Hancock enlightened us with:
This is ludicrous sophistry. The technical reason for having ANY high
level languages is "psychological". Computers are happier with binary
code, over ANY language that must be interpreted.

Computers aren't happy. They couldn't care less about the programming
language.
Programming languages are an interface to Human minds, so the
argument that one system of representation is easier to understand
is an argument that that system is *better* in that it is a better
interface.

Well said! I'm going to remember that one :)

Sybren
 
phil hunt

Ah, yes, you got me on that one.

But there is a difference: writing assembly is *hard*, which is why we
prefer not to do it. Are you suggesting that functional programming is
significantly easier to do than declarative?

No. I'm saying that under some circumstances it might be easier to
write code as a recursive routine than as a loop. In those
circumstances, why should I care if the compiler then re-encodes my
recursive routine as a loop, so long as the program gives the
correct answer?

Compilers/interpreters/runtimes are black boxes: we don't (or
shouldn't) care how they do their work as long as they run correctly
and aren't too heavy on system resources like CPU time and memory.
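For example (my own sketch -- CPython does not actually do this
transformation), a tail-call-optimizing compiler could mechanically
rewrite a tail-recursive function as the equivalent loop:

    # What the programmer writes: a tail-recursive running total.
    def total(items, acc=0):
        if not items:
            return acc
        return total(items[1:], acc + items[0])

    # What an optimizing compiler could turn it into behind the scenes.
    def total_loop(items, acc=0):
        while items:
            items, acc = items[1:], acc + items[0]
        return acc

    assert total([1, 2, 3]) == total_loop([1, 2, 3]) == 6

Either way the caller gets the same answer; only the mechanics differ.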
 
Terry Hancock

But there is a difference: writing assembly is *hard*, which is why we
prefer not to do it. Are you suggesting that object-oriented programming is
significantly easier to do than old-style imperative?

The answer in both cases is that it depends very much on what you're
trying to write.
FWIW, IMO once you've learnt functional programming's idioms, it certainly
can be easier and more natural.

In particular domains of problem-solving, yes.
The problem is that the tools that make things like recursion efficient
aren't normally available in mainstream languages, meaning that most
people simply don't get the practice.

There's also an assumption here (which I regard as a fallacy) that different
models of programming are equally intuitive. Some models, like OOP, draw
heavily on experience *outside* the realm of programming (in many cases,
you can successfully apply machine-shop experience to a programming problem
if the program is object-oriented -- in fact, ISTM, this is the *primary*
power of OOP: it allows you to bring experience from the problem domain
to the process of writing software to work with that domain).

Functional programming (and I am no expert here), seems to me to draw
heavily on the problem domain *of* computer science. So naturally,
computer scientists think it's brilliant and intuitive.

But for those of us who primarily regard computers as tools for solving
problems in other domains -- such as image processing, web applications,
business transactions, or simulating supernovae -- FP just doesn't
seem very natural at all. In fact it seems completely backwards, perverse,
and twisted, as if the designer were simply trying to confuse and
obfuscate the problem instead of solving it.

Or at least, it frequently does.
Essentially it's about expressiveness.

Yes, and expressiveness means keeping vocabulary and grammar for
different applications, not throwing one out on the basis that it
is somehow superior. If you *are* making the latter claim, then
the burden is on you to prove that your method is really that much
better.
Think of it this way - we normally write left to right, whereas some
languages read up and down. Neither is inherently better or easier
than the other, but for some things one *may* be more expressive.

I *really* doubt that, actually. ;-)
But I heard what you meant.
If you think about it being about choosing the most clear/expressive way to
describe an algorithm, the argument may become clearer. After all, the
recursive definition of some things is clearer than the non-recursive.

Yes. I think all we're claiming is that the reverse is equally true.

In addition to this kind of elegance argument, there is also a pragmatic
argument that, with existing technology, it is sometimes much more efficient
to write an iterative solution than to insist on recursion. Even if it can
then be shown for particular cases that a recursive solution that is also
efficient exists, and even if it can be proven that, in general, all such
efficient iterative solutions have equivalent recursive solutions, it does
not defeat the simple point that the obvious iterative solution is faster
and more efficient than the obvious recursive solution.

Let's not forget that this started with an example of the Fibonacci series.

Who in their right mind would *actually* be trying to solve the Fibonacci
series for a practical programming application? It's a toy, like the
"hello world" program. I have no doubt that I can handle the extra burden
of understanding the cleverly structured "efficient recursive" algorithm,
versus the alternatives. But I can also see that it *is* more of a burden
to understand it -- it causes confusion.
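To make that concrete (my own sketch, not code from the thread): the
naive recursive definition reads like the mathematics but is
exponentially slow, the obvious loop is fast and still easy to follow,
and the "efficient recursive" version is the one that carries the
extra burden of accumulator arguments:

    def fib_naive(n):
        # Reads like the mathematical definition, but makes
        # exponentially many calls.
        return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

    def fib_loop(n):
        # The obvious iterative solution: O(n) and straightforward.
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    def fib_acc(n, a=0, b=1):
        # The "efficient recursive" version: also O(n), but the extra
        # accumulator arguments are precisely the added burden.
        return a if n == 0 else fib_acc(n - 1, b, a + b)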

Now what happens when the complexity goes up ten-fold, and we add extra
bells and whistles to it that theoretical programs usually skip over, and
real world programs have in abundance?

At that point, I *really* want that core problem to be simple to understand.
Ideally, as simple as it can possibly be.
 
Terry Hancock

Clearly Jerry here believes that arrogance and intelligence must go hand
in hand: just for your education, there is a difference between *being*
intelligent and feeling like you have to *prove* it to everyone you meet.
I learned a long time ago that there are plenty of *unavoidably* complex
problems out there, so there's no need to make simple ones complex
just to prove how clever you are. You're darned right I avoid wasting
time on problems that don't require it.

I *probably* should've used a little more restraint here. ;-)

I *definitely* should've spelled Jerzy's name correctly.
Sorry, no offense intended.
 
Ron Adam

Paul said:
I think you mean imperative. Yes, there is a community that believes
that writing bug-free programs in functional style is easier than
writing them in imperative style. Writing buggy programs might be
easier in imperative style, but in some application areas, that's not
good enough.

Why don't you read the Hughes paper I cited instead of taking cheap
shots at it without reading it, if you want to understand the issues
better.


Just some comments from my own experiences concerning bugs and writing
dependable software. It pretty much depends on a number of things and
isn't just a matter of good design or what language is used.

* The dependability of the hardware
* The dependability of the OS
* The stability of the language used
* The programmer's knowledge of the language used
* Program design
* How many programmers are working on the same project
* How many extensions are used and written by other people
* The stability of those extensions
* The rate of change of the program design and all underlying parts

A small program with no external modules and written by a single person
can be fairly dependable, or as dependable as the underlying language,
OS, and hardware.

But as soon as the software's complexity increases and/or multiple
programmers are involved, either directly or indirectly through third-party
extensions, the probability of bugs increases substantially.

Given *enough* time, effort, and testing, without changing the design,
even a large program can be nearly as dependable as the least dependable
part of the platform it is running on. ("enough" could be a long time
here.)

To increase reliability to nearly 100%, you need to run different
versions of a program on several different platforms simultaneously and
use only the results that have a majority agreement.

Or to put it another way: risk management by ... "keep it simple",
"don't have too many cooks", "get second opinions", and "don't put all
your eggs in one basket".
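As a toy illustration of that majority-agreement idea (the three
"versions" here are hypothetical stand-ins for independently written
implementations):

    from collections import Counter

    def sum_v1(xs):
        return sum(xs)

    def sum_v2(xs):
        total = 0
        for x in xs:
            total += x
        return total

    def sum_v3(xs):
        return sum(reversed(xs))

    def vote(implementations, *args):
        # Run every version and keep only a result that wins a majority.
        results = Counter(impl(*args) for impl in implementations)
        answer, votes = results.most_common(1)[0]
        if votes <= len(implementations) // 2:
            raise RuntimeError("no majority agreement")
        return answer

    print(vote([sum_v1, sum_v2, sum_v3], [1, 2, 3]))  # -> 6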

Cheers,
Ron
 
Ron Adam

Terry said:
You cannot tell whether a function object will act
recursive or not just by looking at its code body. Trivial examples:

I was thinking last night that maybe it would be useful to be able to
define a function explicitly as a recursive object whose frame is
reused on successive calls, but then I realized that in that context
it's nearly identical to a loop, so why not just write it as a loop
to begin with?

Cheers,
Ron
 
Paddy

The article states that their method calls for much more access than is
normal to many more people in the client's organisation, from the CEO to
the receptionist.
The specification stage can take a year without the customer seeing a
line of code.
And they deliberately "write one to throw away".

In short, they *are* aware of the human element.

- Pad.
 
Mike Meyer

Compilers/interpreters/runtimes are black boxes: we don't (or
shouldn't) care how they do their work as long as they run correctly
and aren't too heavy on system resources like CPU time and memory.

Maybe in academia. Not in the real world. Or maybe you just phrased
that last clause poorly. In the real world, programs have performance
constraints. Some of them are seriously inflexible - in which case we
call what we're doing "real-time" or "embedded" or words to that
effect. Others are softer, but in the end they matter *very much*. I
would have phrased that last clause to make reasonableness a
requirement, rather than making "not unreasonable" the requirement.

Because of that, you have to care about how your implementation
works. If you don't know how strings work in Python, you tend to write
O(n^2) algorithms instead of O(n) ones for fundamental
operations. Things like that make a difference.
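The classic Python illustration of that point:

    # O(n**2) in general: each += may copy everything built so far.
    def concat_slow(lines):
        out = ""
        for line in lines:
            out += line
        return out

    # O(n): join sizes everything first and allocates the result once.
    def concat_fast(lines):
        return "".join(lines)

Both produce the same string; knowing how Python strings are
implemented is what steers you to the second.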

<mike
 
phil hunt

Maybe in academia. Not in the real world. Or maybe you just phrased
that last clause poorly.

I think perhaps I did.
In the real world, programs have performance
constraints.

Yes, I know; that's why I said "aren't too heavy on system resources
like CPU time and memory".
Some of them are seriously inflexible - in which case we
call what we're doing "real-time" or "embedded" or words to that
effect.

If a program is too slow to respond, isn't that about "system time"?
Others are softer, but in the end they matter *very much*. I
would have phrased that last clause to make reasonableness a
requirement, rather than making "not unreasonable" the requirement.

Because of that, you have to care about how your implementation
works. If you don't know how strings work in Python, you tend to write
O(n^2) algorithms instead of O(n) ones for fundamental
operations.

What does "CPU time" mean again?
 
Sybren Stuvel

phil hunt enlightened us with:
If a program is too slow to respond, isn't that about "system time"?

Not by definition. Could be anything. If it's slow to respond due to a
slow hard disk, then you're right. If it's slow to respond due to not
putting the I/O and the GUI main loop in different threads, then it's
not about "system time".
What does "CPU time" mean again?

The time the CPU takes to run (part of) a program. And this means the
time the CPU actually spends on running that program, and not some
other piece of software.
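A small illustration of the difference (using the modern
time.process_time(); older Pythons spelled this differently):

    import time

    wall_start = time.time()           # wall-clock time
    cpu_start = time.process_time()    # CPU time charged to this process

    time.sleep(1)                      # waiting: wall clock advances,
                                       # CPU time barely moves
    sum(range(10**7))                  # computing: both advance

    print("wall:", time.time() - wall_start)
    print("cpu: ", time.process_time() - cpu_start)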

Sybren
 
Aahz

That seems amazingly silly. Sort of like refusing to hoist function
definitions because not all function definitions can be hoisted. Or
choose your favorite "sometimes-I-can-sometimes-I-can't" optimization.

Since the BDFL is *not* known for doing even mildly silly things when
it comes to Python's design and implementation, I suspect there's more
to the story than that.

Note that I said "one reason". The primary reason is that tail-call
optimization destroys the call stack, which means that exception
semantics would have to change. If tail-call optimization were more
useful, he might be willing to consider the tradeoff, but since it
isn't...
 
Terry Reedy

Aahz said:
Note that I said "one reason". The primary reason is that tail-call
optimization destroys the call stack, which means that exception
semantics would have to change. If tail-call optimization were more
useful, he might be willing to consider the tradeoff, but since it
isn't...

The prime reason I remember Guido giving, as I reported previously, is that
tail-call optimization would be semantically wrong given the language as now
defined, and would require a semantic change that he does not want to make.
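A small example of the semantic point (my own illustration): because
Python keeps every frame, an error deep in a recursion yields a
traceback showing the whole chain of calls -- frames a tail-call
optimizer would have discarded:

    import traceback

    def countdown(n):
        if n == 0:
            raise ValueError("boom")   # fail at the bottom
        return countdown(n - 1)        # a tail call Python does NOT eliminate

    try:
        countdown(3)
    except ValueError:
        traceback.print_exc()          # lists countdown() four times; with
                                       # tail-call optimization those frames
                                       # would have been reused and lost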

Terry J. Reedy
 
Scott David Daniels

Sybren said:
... Computers aren't happy. They couldn't care less about the
programming language.

This reminds me of a quote I love (and wish I could cite the
originator):

Don't anthropomorphize computers, they don't like that.

--Scott David Daniels
(e-mail address removed)
 
Terry Hancock

This reminds me of a quote I love (and wish I could cite the
originator):

Don't anthropomorphize computers, they don't like that.

Actually, I wrote a long (and very off-topic) reply justifying the
philosophical point that whether or not computers can feel "happy"
is an article of faith, either way, and depends heavily on what
sense you are using for the word "happy" not to mention "feel"
or "be". I thought maybe it would be best to drop it, but since
this bone is still being worried, perhaps it's worth summarizing ...

Fascinating subject, actually. There is a widely held *faith* among
materialist atheists that the criterion for "being able to feel"
is an as-yet undefined, but definable level or form of computing
complexity mythologized as a "true AI". The need for this belief
is an artifact of the materialist atheist belief -- or rather the
disbelief in any form of spiritualism or animism.

I contend that there is no such thing as a "true AI", or a "true
natural intelligence" for that matter in the sense of there being
some sharp line between "sentient" and "non-sentient" matter.

Obviously there is *something* going on, in continuity of memory and
reasoning capacity that has something to do with the objective
capability to react in complex ways, and thus to express one's own
sense of being in a way that is more comprehensible to "others".
Thus far, however, there is no need for any sharp line -- just a
gradually increasing level of intelligent responsiveness.

But, the actual state of sensation? We know nothing about that,
and science cannot illuminate it, because it's not an objective
statement. We by definition cannot know what someone or something
else "feels". We can only know how he/she/it *reacts*.

It seems a simpler and more natural assumption to think that
sensation *pre-exists* the reasoning power to express or remember
that sensation. Indeed, it seems more sensible to regard it as
a fundamental property of matter (or energy or space -- it being
hard to define which bit of reality the soul adheres to, but
"matter" will do for sake of argument).

It's no more unreasonable, I would contend, then, to say that a
computer is "happy" when it acts on data it "understands". If
"thought" is a "mechanism", then a "mechanism" can be "thought".

I do not "anthropomorphize" the machine in that I do not regard
thought as a uniquely Human capacity. That I have some "theory
of mind" for a PC does not mean that I think it's a Human, nor
that I would be stupid enough to credit it with a Human mind's
capabilities.

So I personally find it completely sane and often more natural
to speak of what a computer "knows" or "understands" or in this
case, "is happy with".

But if you really insist on a reductionist, mechanistic explanation,
then my point is simply this: a computer, in order to act on ANY
program, must first be made to act on a prior program (a compiler or
an interpreter -- in addition to the BIOS and operating system, which
must first run in order to initiate said program), which contains
instructions for converting that program into a format which the
computer is able to process directly.

I personally found "the computer is happier with binary" to
be a much more concise and understandable way to say that, but
clearly, some people on the list find it heretical to use any
statement which assigns "agent status" to computers or programs.

But getting back to the point -- the fact that the computer itself
would be "happier" with binary program instructions shows that there
is certainly NO objective, technical sense in which ANY computer
programming language is "inherently" superior.

Programming languages can ONLY be evaluated in "psychological"
terms. The "technical" design of programming languages -- and
indeed of ALL systems for describing and characterizing cognition
of all forms -- is ultimately a "psychological" discipline. It
obviously depends on the function of the mind of the programmer,
and the ability of programmers' minds to process the
information is the *metric of success* of that programming
language.

Given that programmers' minds are neither identical nor
unchanging, it pretty much goes without saying that the choice
of programming language, notation, or technique will be
subjective -- and also changeable.

I said this, because an earlier poster had *dismissed* mere
"psychological" reasons as unimportant, claiming that
functional programming was superior on "technical" grounds.

I hope I have demonstrated that that statement is nonsensical --
ALL statements about programming languages or techniques are
ultimately dependent on "psychological reasons". If functional
programming reduces bugs, then it also does so for
*psychological* reasons.
 
Michael Sparks

Giles said:
I think you can argue (I would) that any behaviour that is in the
specification but "isn't right" is not a software bug, but a
specification error.

To a user there is no difference. If the software doesn't do what they
wanted it to do/asked for, then to *them*, the person who is accepting
the software, it's a bug. It doesn't matter to them what caused it - be it a
buffer overflow, a misunderstanding of a language feature or an incorrectly
written formal spec: it's still a bug to them.

I'm actually a big fan of formal specification (specifically VDM), but
that doesn't stop me realising that it's not a cure-all, and it's also not
an excuse - if code doesn't do what it was asked to do, it's bust.

Regards,


Michael.
 
Jerzy Karczmarczuk

Terry Hancock wrote:

/a few statements which seem to be there - apparently - just for the sake
of quarreling/
The FP camp (apparently) wants to advance the claim that FP will *always*
reduce bugs. I find that very hard to believe.

Good.
Now go, and talk to some FP people before accusing them of being *so*
sectarian. Your supposition that they claim that FP is always better is
unjustified. Were I more aggressive, I would say: 'sheer nonsense'.
I would not say - as you did - a 'ludicrous sophistry', because it is
not ludicrous. Quite sad, in fact...

Your further posting, about the twists and perversion of functional
programming, makes me invite you to learn a bit more of FP. It won't harm
you, and it might raise in your spirit the question of why, in thousands of
educational establishments, this programming style is considered good for
beginners. I might agree that thousands of teachers are more stupid than
you, but that they are all perverts, I believe not.

Anyway. In a further posting you comment on the "psychological" aspect of
language choice in this way:
I said this, because an earlier poster had *dismissed* mere
"psychological" reasons as unimportant, claiming that
functional programming was superior on "technical" grounds.

1. I never said that FP was technically superior.
2. I never dismissed psychological reasons as unimportant.

Read it again, please.
Please stop putting fake arguments in other people's mouths, just to
have something to argue about, OK?


FP appeals to many. Well, *why* do people who jump into Python from other
languages very often like functional constructs, and dislike the fact
that destructive methods return nothing?...



Jerzy Karczmarczuk
 
Steven D'Aprano

This conversation is rapidly approaching flame-war territory. Just a few
comments before I hope we can put this to bed.

Now go, and talk to some FP people before accusing them of being *so*
sectarian. Your supposition that they claim that FP is always better is
unjustified. Were I more aggressive, I would say: 'sheer nonsense'.
I would not say - as you did - a 'ludicrous sophistry', because it is
not ludicrous. Quite sad, in fact...

I work with some people who are absolutely infatuated with functional
programming. I can assure you that in my experience, at least some FP
folks *do* say that it is always better. Need I point out that they are
invariably extremely bright, highly educated, academically minded, and
utterly inexperienced with the commercial and practical realities of
real-world development?

I say that as somebody who is fascinated by the concepts of FP, and would
like to see Python at least keep the existing functional programming
constructs, if not expand them. But my reason for doing so is that there
are no magic bullets - not FP, not generators, not OO, nor any one of a hundred
other programming patterns. The more tools you have, the more likely you
will find one that works for your particular problem.
 
