spinoza1111 wrote:
There already is significant legislation covering many software
applications including brain surgery.
I'm talking about legislation concerning who can write code.
Formal software proofs of the type that you mention rattled around
30+ years ago as a good idea, but were said to have fatal conceptual
flaws. Update your reading; I suggest starting with the papers on
mechanical reliability that were being published about 100 years ago.
What were those fatal flaws?
It was thought that the proof itself might contain an error.
Which showed that the critic didn't understand the point: we never
eliminate errors outright, but each extra, independent step of
checking reduces the probability of error toward six-sigma levels.
Furthermore, nobody understood Dijkstra's point: that the proof could
be developed alongside the program.
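Dijkstra's point can be shown in miniature. The example below is my own, not anything from Dijkstra: the loop invariant is stated as the loop is written, so the proof and the program grow together:

```python
def sum_upto(n):
    """Return 0 + 1 + ... + n, with the proof carried in the comments."""
    # Invariant: before each iteration, total == sum of 0 .. i-1.
    total = 0
    i = 0
    # The invariant holds trivially here: total == 0, the empty sum.
    while i <= n:
        total += i   # now total == sum of 0 .. i,
        i += 1       # which re-establishes the invariant for the new i.
    # On exit i == n + 1, so the invariant gives total == sum of 0 .. n.
    return total
```

The comments are not decoration after the fact; they are the correctness argument, developed line by line with the code.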
Also, I'd add that a properly constructed software walkthrough is a
venue for the presentation of software correctness proofs in ordinary
language. This never happened because of the toxic personal relations
fostered by American "free market" competition: consistently, people
with the purity of heart and verbal abilities to be able to construct
sentences of the form "my code works because" were made targets of
bullying with management's tacit approval, because management needs
office bullies to get people to Obey. As a result, structured
walkthroughs and other engineering reviews degenerated very rapidly
after the election of Ronald Reagan (and its message that verbal
abilities and purity of heart were for fools, and that being glamorous
and coated with Teflon were the way to go).
A direct result was the normalized deviance of the Challenger review
at NASA subcontractor Morton-Thiokol in early 1986, where engineers
with verbal abilities and purity of heart tried to tell NASA
management that the performance of the Space Shuttle's O-rings was
unpredictable at low temperatures such as obtained in Florida that
January. NASA, which was under pressure from the Reagan
administration to show that the USA could put a teacher in space,
bullied these engineers and ruined their careers by pressuring them to
"think like a manager and not an engineer", and in my experience at
Bell Northern Research, this happened in software as well.
As in these disgusting "newsgroups", little constructive work gets
done because nothing like a logical argument, much less a software
correctness proof, is ever presented. Therefore, the possibility of
presenting even an informal correctness proof is dismissed out of
hand, and instead the word of loudmouths is treated as Holy Writ.
For more information on NASA's Challenger disaster, and how
"skepticism" of the possibility of knowledge or partial proof killed
the crew of Challenger, cf. Diane Vaughan, THE CHALLENGER LAUNCH
DECISION, University of Chicago Press, 1996.
Nobody can "prove" that a real system won't fail. The Airbus A320's
"fly by wire" avionics system is software designed to make sure the
A320 doesn't go outside its performance limits: if the pilot pulls up
the "stick" (which is no longer a stick at all) the airplane won't
obey beyond the angle for which it is designed. Nonetheless an Airbus
crashed last June. Likewise, Toyota's accelerators (and now its
brakes, according to a story in today's International Herald Tribune)
are failing even though they are software-controlled for safety.
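The envelope-protection idea described above is, at its core, a clamp. Here is a toy sketch of my own devising, with invented limit values, and nothing like actual Airbus avionics code:

```python
# Hypothetical flight-envelope limits (illustrative numbers only).
MAX_PITCH_DEG = 30.0
MIN_PITCH_DEG = -15.0

def commanded_pitch(stick_input_deg):
    """Honor the pilot's request only within the designed envelope.

    The pilot can pull the stick as far as they like; the command the
    control surfaces actually receive is clamped to the limits.
    """
    return max(MIN_PITCH_DEG, min(MAX_PITCH_DEG, stick_input_deg))
```

Of course, the clamp is only as trustworthy as the software that implements it, which is the whole argument for proving such code correct rather than merely testing it.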
But this means...more software and correctness proofs too, not less.
You can't (according to my source, Langewiesche) go back to the
cockpit of a 707 in the 1960s, with a flight engineer laboring over a
slide rule to recalculate the course when the weather changes. The
level of traffic and the demand for air travel won't permit it.
Unlike Dijkstra, I think that English used well, and uninterrupted by
thugs, is a form of correctness proof: even a comment that clearly
says what a module does, when read alongside the code.
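A small example of what I mean, using a hypothetical function of my own rather than anything from a real codebase: the English sentence states the contract, and the code can be checked against it line by line.

```python
def median_of_sorted(xs):
    # In plain English: given a non-empty list already sorted in
    # ascending order, return the middle element, or the mean of the
    # two middle elements when the length is even. Read alongside the
    # code below, that sentence is the informal correctness argument.
    n = len(xs)
    mid = n // 2
    if n % 2 == 1:
        return xs[mid]
    return (xs[mid - 1] + xs[mid]) / 2
```

A reviewer in a walkthrough need only confirm that each line does what the sentence says; no formal notation is required.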