Ada vs Ruby

Eleanor McHugh

Is there any (serious) language made after, say, 1985 that doesn't
have exception handling? Static typing or dynamic typing, strong
typing or weak typing -- they pretty much all have some kind of
exception handling mechanism.


It's not enough to have the mechanism; you also have to code the
system to use it intelligently, otherwise you won't fail safe.
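A minimal Ruby sketch of that point (the sensor, method names and values are hypothetical): the rescue clause doesn't just swallow the error, it substitutes a known-safe value so the system degrades predictably instead of propagating the fault:

```ruby
# Hypothetical sensor reader: on any read failure we fall back to a
# known-safe default instead of letting the exception propagate.
SAFE_CLIMB_RATE = 0.0  # level flight: the "do no harm" value

def read_climb_rate(sensor)
  sensor.call            # may raise on a hardware fault
rescue StandardError => e
  warn "sensor fault (#{e.message}), failing safe"
  SAFE_CLIMB_RATE
end

healthy = -> { 4.2 }
faulty  = -> { raise IOError, "no signal" }

read_climb_rate(healthy)  # => 4.2
read_climb_rate(faulty)   # => 0.0, after logging the fault
```

The mechanism (rescue) is the same either way; the fail-safe behaviour is entirely in choosing what to return when it fires.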


Ellie

Eleanor McHugh
Games With Brains
http://slides.games-with-brains.net
 
Eleanor McHugh

I wouldn't fly in an aeroplane that relies on the runtime to catch
errors.

I wouldn't fly in an aeroplane where a runtime error couldn't be
caught. That's because there will be runtime errors regardless of how
well designed and analysed the code is.
Take the Space Shuttle as an extreme. Does the language breed
perfection
in the Shuttle's source, or is it the process NASA uses?

That process includes implementing fail safe conditions for runtime
errors. Without those the developers would be legally culpable for any
deaths that occurred as a result of their negligence. Waking up in the
morning knowing that is surprisingly good at focusing the attention on
detail...
I bet you dollars to doughnuts that it is the process, with
more-than-due-diligence in writing and testing the software. That the
requirements are clear cut and well understood is another bonus.

Languages don't matter. Compilers don't matter. Process, however,
does.

Or methodology. TDD has its benefits, as does BDD. Without these, the
Agile way wouldn't work. QA is the key, not the language.

The jury is still out on TDD and BDD. None of my friends in the
avionics industry has much confidence in these techniques, but the
main goal there is systems which don't kill people or destroy millions
of dollars of equipment. The only argument I see in favour of that
particular brand of agile development is that the problems involved
are essentially human rather than technical and the code is just a way
of forcing people to make decisions in a timely fashion.

Also whilst QA techniques transfer fairly well between languages, if
given the choice between two languages with different levels of
verbosity it is always advisable to use the less verbose language:
there's less to test, less to go wrong, and less likelihood of
muddling your (often vague) requirements.


Ellie

Eleanor McHugh
Games With Brains
http://slides.games-with-brains.net
 
Robert Dober

Amen, Sister! And languages which rely on static typing have a
tendency to do much more random things when things go wrong. Languages
like Ruby tend to have a more vigilant runtime.

Reminds me of the old story about Donald Knuth (I do not know if it is
actually true) who was lecturing on formal code proofs and was asked by
a student if the code actually worked. He replied: "I do not have any
idea: I only proved it correct, I never tested it."
Although most of you know this story, I believe it is particularly
of interest in this context.

Cheers
Robert
 
Phillip Gawlowski


Eleanor McHugh wrote:
| On 16 Apr 2008, at 03:40, Phillip Gawlowski wrote:
|> I wouldn't fly in an aeroplane that relies on the runtime to catch
|> errors.
|
| I wouldn't fly in an aeroplane where a runtime error couldn't be caught.
| That's because there will be runtime errors regardless of how well
| designed and analysed the code is.

Of course. But solely relying on trusting that the runtime will do The
Right Thing isn't the way to go. Error catching and handling is a tool
for the user, not a silver bullet.

|> Take the Space Shuttle as an extreme. Does the language breed perfection
|> in the Shuttle's source, or is it the process NASA uses?
|
| That process includes implementing fail safe conditions for runtime
| errors. Without those the developers would be legally culpable for any
| deaths that occurred as a result of their negligence. Waking up in the
| morning knowing that is surprisingly good at focusing the attention on
| detail...

And introducing large amounts of stress that are counterproductive. ;)

I doubt, however, that there is a single undefined state in the Space
Shuttle's software. No uncaught exception, no reliance on language
features to do the right things, but well understood and diligent
implementation of those, together with rigorous QA.

|
| The jury is still out on TDD and BDD. None of my friends in the
| avionics industry has much confidence in these techniques, but the main
| goal there is systems which don't kill people or destroy millions of
| dollars of equipment. The only argument I see in favour of that
| particular brand of agile development is that the problems involved are
| essentially human rather than technical and the code is just a way of
| forcing people to make decisions in a timely fashion.

As I said in another reply in this thread, methodologies are but one
skill set. What works for a billing system doesn't necessarily work for
a cruise missile or the A380. Different problem domains require
different solutions.

And Agile's domain is in the face of changing or evolving requirements.

I suspect that aeronautical problems are well understood, and
requirements (while not easily) determined well before the first line of
code is written.

As far as I understand it, TOPCASED works like this:
http://www.heise-online.co.uk/open/TOPCASED-System-development-using-Open-Source--/features/110028

| Also whilst QA techniques transfer fairly well between languages, if
| given the choice between two languages with different levels of
| verbosity it is always advisable to use the less verbose language:
| there's less to test, less to go wrong, and less likelihood of muddling
| your (often vague) requirements.

No silver bullets. Picking the right tool for the job is key.

But what use is a less verbose language, if only a handful of people
understand it well enough? Sure, often there is time to train, but
sometimes there is not.

Trade offs are everywhere, and none of them are easy.


--
Phillip Gawlowski
Twitter: twitter.com/cynicalryan

~ - You know you've been hacking too long when...
...you dream you have to write device drivers for your refrigerator,
washing machine, and other major household appliances before you can use
them.
 
Sean O'Halpin

Reminds me of the old story about Donald Knuth (I do not know if it is
actually true) who was lecturing on formal code proofs and was asked by
a student if the code actually worked. He replied: "I do not have any
idea: I only proved it correct, I never tested it."
Although most of you know this story, I believe it is particularly
of interest in this context.

Cheers
Robert

FYI: http://www-cs-faculty.stanford.edu/~knuth/faq.html - see last
question on page.

Regards,
Sean
 
Eleanor McHugh

Well, yeah. But Matt made it sound like exception handling was a
rare beast and a major decision criterion for selecting languages.
I can only think of one language left in wide, common use that
doesn't have exception handling: C. (I'm sure others will
immediately jump up and list others, but that's just life.;)

And I've seen a lot of C programmers code their own with longjmp ;)
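For the curious, Ruby's built-in catch/throw gives much the same non-local exit as the setjmp/longjmp idiom those C programmers roll by hand: no exception object, just a jump back to a matching marker. A hypothetical sketch (the parser and its inputs are invented for illustration):

```ruby
# catch(:abort) plays the role of setjmp; throw is the longjmp that
# unwinds straight back to it with a result value.
def parse_all(lines)
  catch(:abort) do
    lines.map do |l|
      throw :abort, :bad_input unless l =~ /\A\d+\z/  # non-local exit
      Integer(l)
    end
  end
end

parse_all(%w[1 2 3])    # => [1, 2, 3]
parse_all(%w[1 oops 3]) # => :bad_input
```

Unlike raise/rescue there's no stack-trace machinery involved, which is why it maps so naturally onto the C technique.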


Ellie

Eleanor McHugh
Games With Brains
http://slides.games-with-brains.net
 
Eleanor McHugh

I doubt, however, that there is a single undefined state in the Space
Shuttle's software. No uncaught exception, no reliance on language
features to do the right things, but well understood and diligent
implementation of those, together with rigorous QA.

It's a lovely idea, but ponder the impact of Gödel's Incompleteness
Theorems or Turing's proof of the Halting Problem. In practice there
are program states which can occur which cannot be identified in
advance because they are dependent on interactions with the
environment, or are artefacts of the underlying problem space.

That's why run-time error handling and fail-safe behaviour are so
important regardless of the rigour of QA processes.
As I said in another reply in this thread, methodologies are but one
skill set. What works for a billing system doesn't necessarily work
for a cruise missile or the A380. Different problem domains require
different solutions.

And Agile's domain is in the face of changing or evolving
requirements.

I suspect that aeronautical problems are well understood, and
requirements (while not easily) determined well before the first
line of code is written.

Never rely upon suspicions when talking with people who actually know
for sure. As I pointed out earlier in this thread I've written and
certified cockpit systems (for both civilian and paramilitary use) and
requirements have tended to be just as amorphous as in any other
industry I've subsequently worked in. The main difference has been one
of management realising in the former case that good systems rely on
good code and that this is something a small percentage of developers
can produce, whereas in the latter there's a belief that any two
coders are interchangeable so long as the process and tools are right.

Personally I'll always bet on a small team of motivated hackers
determined to understand their problem domain over a larger team of
professional developers with the latest tools and methodologies but a
less consuming passion.
No silver bullets. Picking the right tool for the job is key.
But what use is a less verbose language, if only a handful of people
understand it well enough? Sure, often there is time to train, but
sometimes there is not.


If I have a large safety-critical or mission-critical codebase that
needs maintaining I'm more interested in finding developers who
understand the problem domain than who understand the language it's
developed in. Any half-competent developer will pick up a new language
in a matter of weeks, but learning a problem domain can take years.


Ellie

Eleanor McHugh
Games With Brains
http://slides.games-with-brains.net
 
Phillip Gawlowski


Eleanor McHugh wrote:

| It's a lovely idea, but ponder the impact of Gödel's Incompleteness
| Theorems or Turing's proof of the Halting Problem. In practice there are
| program states which can occur which cannot be identified in advance
| because they are dependent on interactions with the environment, or are
| artefacts of the underlying problem space.
|
| That's why run-time error handling and fail-safe behaviour are so
| important regardless of the rigour of QA processes.

Sure. But to know these states, the software should be tested as
thoroughly as possible. I somehow doubt that anybody relying on
something mission-critical in aviation or medicine wants to call the
hotline during the final approach of a plane, or when a surgical robot
starts fancying itself SkyNet. ;)

Anyway, this problem is (AFAIK) countered by using redundant
implementations of the hardware and software (well, as far as
possible) to minimize the effect of unknown states.
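That redundancy idea is often implemented as majority voting across independent channels; a toy Ruby sketch (the readings are invented numbers, not real avionics data):

```ruby
# Majority vote across redundant channels: accept the value most
# channels agree on, or refuse to decide if there is no majority.
def vote(*readings)
  winner, count = readings.tally.max_by { |_, c| c }
  count > readings.size / 2 ? winner : :no_consensus
end

vote(107, 107, 107)  # => 107
vote(107, 107, 999)  # => 107  (one faulty channel is outvoted)
vote(1, 2, 3)        # => :no_consensus
```

Of course, if all channels share the same flawed specification they will agree on the same wrong answer, so voting only masks independent faults.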

I don't think that I ever heard of a pilot encountering an unhandled
exception during normal operation, for example. I guess we mean the
same, after all.

(At least I don't see a contradiction in our arguments?)

|
| Never rely upon suspicions when talking with people who actually know
| for sure. As I pointed out earlier in this thread I've written and
| certified cockpit systems (for both civilian and paramilitary use) and
| requirements have tended to be just as amorphous as in any other
| industry I've subsequently worked in. The main difference has been one
| of management realising in the former case that good systems rely on
| good code and that this is something a small percentage of developers
| can produce, whereas in the latter there's a belief that any two coders
| are interchangeable so long as the process and tools are right.

How does that rebut my assertion that requirements are well understood
nonetheless? Requirements can change for many reasons, and not all are
related to the actual software itself, but possibly to its
implementation?

I mean, we pretty much know the physics that make flight work, for
example. That a different airframe needs different software to work is
obvious (can't trim a fighter the same as a jumbo, for example).

However, the math stays the same, "just" the implementation changes
(which, as I fully recognize, is a challenge in itself). And, sooner
or later, the requirements have to, for want of a better term, gel
into something that doesn't change anymore (or at least not as easily
as in more conventional development situations)?

Mind you, I'm not discounting your expertise in the matter at all.

| Personally I'll always bet on a small team of motivated hackers
| determined to understand their problem domain over a larger team of
| professional developers with the latest tools and methodologies but a
| less consuming passion.

Same here.

|
| If I have a large safety-critical or mission-critical codebase that
| needs maintaining I'm more interested in finding developers who
| understand the problem domain than who understand the language it's
| developed in. Any half-competent developer will pick up a new language
| in a matter of weeks, but learning a problem domain can take years.

Well, I kind of assumed that as a given. ;)

I'd be interested in the kinds of trade-offs that have to be made in
this particular problem domain (since I can't speak from experience,
and never claimed to, either, and didn't mean to imply as much).

I haven't worked on anything more mission critical than CRUD style apps,
and I can only infer from my knowledge what kind of problems development
teams face.

Still, it seems to me that no level of genius can create software such
as is necessary for the Space Shuttle or a more average airplane
without the level of testing NASA or Boeing brings to bear for their
software.

After all, smart people should recognize the difficulties they face when
working on mission critical software?


--
Phillip Gawlowski
Twitter: twitter.com/cynicalryan

Rule of Open-Source Programming #48:

The number of items on a project's to-do list always grows or remains
constant.
 
Lionel Bouton

Phillip said:
[...]
Anyway, this problem is (AFAIK, anyway), countered by using redundant
implementations of the hardware and software (well, as far as possible,
anyway), to minimize the effect of unknown states.

This solves some problems but not all of them. If the software and
hardware are designed based on flawed specifications you get a rocket
explosion (Ariane V first test flight: the redundant systems all
failed one after the other because some flight-parameter constraints
were reused from Ariane IV but weren't applicable to the new rocket...).
I don't think that I ever heard of a pilot encountering an unhandled
exception during normal operation, for example. I guess we mean the
same, after all.

I think it happened at least once on an Airbus where a pilot had to
switch to manual controls or deactivate a safety measure because the
autopilot was going to bring the plane to stall (sensors reporting
incorrect measurements). I couldn't find the reference for this
specific instance, but Google brought up other problems:

# Unforeseen conditions leading to autopilot misbehaving :
http://aviation-safety.net/database/record.php?id=19940630-0
http://shippai.jst.go.jp/en/Detail?fn=0&id=CA1000621

# Software glitches putting aircraft in danger :
http://online.wsj.com/article/SB114...KEYWORDS=flight+check&COLLECTION=wsjie/6month

# several incidents, look for the "Cause" lines to filter the problems
# caused by flight systems.
http://www.airsafety.com/aa587/faa_fltcont.pdf
|
| Never rely upon suspicions when talking with people who actually know
| for sure. As I pointed out earlier in this thread I've written and
| certified cockpit systems (for both civilian and paramilitary use) and
| requirements have tended to be just as amorphous as in any other
| industry I've subsequently worked in. The main difference has been one
| of management realising in the former case that good systems rely on
| good code and that this is something a small percentage of developers
| can produce, whereas in the latter there's a belief that any two coders
| are interchangeable so long as the process and tools are right.

In my experience this latter case is widespread in companies with no
real in-house CS knowledge believing they can manage IT projects
themselves (with pointy-haired bosses :)). It's hard to realize that
you need sharp minds to produce good code when your own view of
software building is limited to playing with Lego systems... This is a
recurrent problem for CS people: making other people aware of the
inherent complexities of software design.
I mean, we pretty much know the physics that make flight work, for
example. That a different airframe needs different software to work is
obvious (can't trim a fighter the same as a jumbo, for example).

If only you could have advised Arianespace :)

Lionel
 
Arved Sandstrom

ThoML said:
IIRC D is capable of doing some type inferencing too. But D is still
on my to-be-learned list.

AFAIK the type inferencing in D is similar to that in C# 3.0. IOW you
can omit the type on a declaration if the compiler can infer it from
the initializer. Sounds trivial, but it does save typing if the
datatype is quite complex. Where it will really save time is in
combination with object initializer syntax, to provide anonymous
types. I don't know enough about it, but I'm guessing that such
anonymous types would then fit in well with the lambda expressions
also showing up in C# now.

AHS
 
Arved Sandstrom

Eleanor McHugh said:
You provide the budget, I'll provide the code ;) Having designed and
implemented avionics systems I see nothing in Ruby or any other
scripting language that would stand in the way of using it to do the
same thing. In fact Lua began its life as a language for device
control. That's not to say that MRI is particularly suited to the
task, but the necessary changes could be made if anyone wanted to
without having to change the language syntax and semantics.
[ SNIP ]

I know nothing of avionics software, but I'd assume
http://en.wikipedia.org/wiki/Avionics_software is reasonably accurate. Half
of the stuff in that article is what you'd like to do on any project if you
didn't have impossible deadlines and shabby processes, and the other half is
simply extra rigour because errors are much less acceptable.

What I don't see is any particular emphasis on specific languages.
Considering that there seems to be no shortage of avionics software written
in C/C++, I don't immediately see why Ruby or Python wouldn't work either,
especially considering the intense process the software goes through.

I tend not to discount any particular language prima facie. I recall over
ten years ago having a colleague demonstrate a responsive, reliable (as near
as I could tell) and feature-rich moving map display program for small boat
navigation, and I asked him what it was written in. He replied, Visual
Basic. He later went on to sell it commercially.

I'm inclined to think that 90%+ of software reliability comes from training,
experience and above all, process. Not the programming language.

AHS
 
Robert Dober

It's a lovely idea, but ponder the impact of Gödel's Incompleteness
Theorems or Turing's proof of the Halting Problem. In practice there
are program states which can occur which cannot be identified in
advance because they are dependent on interactions with the
environment, or are artefacts of the underlying problem space.

I am not sure, but on a first approach I believe that neither Gödel
nor Turing apply, because they are talking about systems describing
themselves. IIRC it is a theorem in TNT(1) making an assumption about
TNT in the first case, and a Turing machine reading the description of
a Turing machine on its tape in the second case.
I do not believe that Aircraft Control Systems have this degree of
self-awareness, but I can stand corrected if I am wrong, because
although I have been taught a lot about TMs and TNT I do not know a
lot about Aircraft Control.

That's why run-time error handling and fail-safe behaviour are so
important regardless of the rigour of QA processes.

That however I agree with!

(1) http://en.wikipedia.org/wiki/Typographical_Number_Theory
Cheers
Robert
--
http://ruby-smalltalk.blogspot.com/
 
Mike Silva

What I don't see is any particular emphasis on specific languages.
Considering that there seems to be no shortage of avionics software written
in C/C++, I don't immediately see why Ruby or Python wouldn't work either,
especially considering the intense process the software goes through.

I tend not to discount any particular language prima facie. I recall over
....
I'm inclined to think that 90%+ of software reliability comes from training,
experience and above all, process. Not the programming language.

But that still leaves the other 10%. For example, as noted here
(http://www.praxis-his.com/sparkada/pdfs/spark_c130j.pdf), in an
analysis of safety-critical code written in three languages (C, Ada
and SPARK), all of which was already certified to DO-178B Level A (the
most stringent level), it was found that the SPARK code had one tenth
the residual error rate of the Ada code, and the Ada code had only one
tenth the residual rate of the C code. That's a 100:1 difference in
residual error rates in code all of which was certified to the highest
aviation standards. Would anybody argue that putting out safety-
critical software with an error rate 100 times greater than the
current art allows is a good thing? In fact, would anybody argue that
it is not grossly negligent?

Oh, and the anecdote about the compiler finding in minutes a bug that
had defied testing for a week should not be lightly dismissed either.
 
Eleanor McHugh

I'd be interested in the kinds of trade-offs that have to be made in
this particular problem domain (since I can't speak from experience,
and never claimed to, either, and didn't mean to imply as much).

I haven't worked on anything more mission critical than CRUD style
apps, and I can only infer from my knowledge what kind of problems
development teams face.

Still, it seems to me that no level of genius can create software such
as is necessary for the Space Shuttle or a more average airplane
without the level of testing NASA or Boeing brings to bear for their
software.

Oh definitely. Testing that code performs correctly is essential to
any embedded development process, as is validating that the code
written solves the correct problem. The latter is by far the more
difficult though.

The guidelines for developing civilian aviation software are
documented in RTCA-DO178B (see http://en.wikipedia.org/wiki/DO-178B)
which is abstract, non-prescriptive and an excellent alternative to
sleeping pills. Numerous concrete processes have emerged to suit how
various teams work, but in general the more critical the software then
the more that testing will result in hand-analysis of both source and
object code. Unit testing will be heavy on white boxing so the
majority of tests are likely to be disposable with unit changes but
there's lots of fun to be had with unglamorous and time-consuming old-
school software engineering (SLOCs, cyclomatic complexity, various
forms of test partitioning) that's independent of implementation
language or life-cycle methodology.
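As a toy illustration of the unglamorous metric end of that old-school engineering, here is a hypothetical SLOC counter; real tools are far subtler about comments and continuations, so treat this purely as a sketch:

```ruby
# Toy SLOC metric: count non-blank lines that aren't pure comments.
def sloc(source)
  source.lines.count do |line|
    stripped = line.strip
    !stripped.empty? && !stripped.start_with?("#")
  end
end

code = <<~RUBY
  # doubles its input
  def double(x)
    x * 2
  end
RUBY

sloc(code)  # => 3
```

Crude as it is, a metric like this is independent of implementation language or life-cycle methodology, which is precisely why it survives in these processes.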

The maintenance of a clear audit trail on requirements and requirement
changes is essential for civil certification, so processes which lack
effective change control mechanisms are inappropriate. However I've
used RAD and Agile approaches (especially evolutionary prototyping)
successfully and had them pass certification so the myth that aviation
development is always monolithic waterfall is definitely unfounded.

In terms of the actual tradeoffs in mission critical systems (aviation
or otherwise) most come down to smoothing interaction with external
stimuli and breaking up costly computations and database queries into
discrete manageable chunks. There's very little genius required, just
careful attention to detail and an ability to analyse problem spaces:
that's probably why so many physicists, chemists and applied
mathematicians end up in this particular discipline.

For a theoretical foundation I recommend "Cybernetics" by Norbert
Wiener although it's a dense read.


Ellie

Eleanor McHugh
Games With Brains
http://slides.games-with-brains.net
 
Eleanor McHugh

I tend not to discount any particular language prima facie. I recall
over ten years ago having a colleague demonstrate a responsive,
reliable (as near as I could tell) and feature-rich moving map display
program for small boat navigation, and I asked him what it was
written in. He replied, Visual Basic. He later went on to sell it
commercially.

Much kudos to your friend. Twelve years ago I did the same thing in VB
for helicopters and whilst it was pushing the hardware at that time,
it was still usable. Of course these days most mobile phones have more
computational grunt and memory than that :)


Ellie

Eleanor McHugh
Games With Brains
http://slides.games-with-brains.net
 
Eleanor McHugh

I am not sure but on a first approach I believe that neither Gödel nor
Turing apply because they are talking about systems describing
themselves. IIRC it is a theorem in TNT(1) making an assumption about
TNT in the first case and a Turing machine reading the description of
a Turing machine on its tape in the second case.
I do not believe that Aircraft Control Systems have this degree of
self-awareness, but I can stand corrected if I am wrong, because
although I have been taught a lot about TMs and TNT I do not know a
lot about Aircraft Control.

Any process that is algorithmic is necessarily implementable as a
Turing machine so I'd argue that the very act of coding the system
with a process defines TNT(1) whilst the target system itself becomes
TNT. Therefore until the system runs in situ and responds to its
environment one cannot make any firm statements regarding when the
system will halt. And if you can't tell when an autopilot will halt,
you have the potential for all kinds of mayhem...

Of course this is a terrible abuse of Church-Turing, but it seems to
fit the real world pretty well.
That however I agree with!

:)


Ellie
Who wonders what the hell this thread will look like in Google searches.

Eleanor McHugh
Games With Brains
http://slides.games-with-brains.net
 
Eleanor McHugh

I'm not sure how the Halting Problem relates to presence or
absence of undefined states. What does it mean to call a state
"undefined" anyway, in this context? Presumably there's always
a finite number of states, which may be extremely large for most
programs that perform useful calculations. If we start with very
small, simple (and not useful) programs, we can enumerate all
the states. Are any of these undefined? As we increase the size
of the program, the number of states increases, presumably at
an alarming rate. At what point do we become unable to identify
program states?

You're making the mistake of viewing program states as a discrete set,
all of which can be logically enumerated. If that were the case then
whilst complexity would make managing the creation of complex software
difficult, it would still be theoretically possible to create
'correct' programs. However Gödel's incompleteness theorems tell us
that for any mathematical system based upon a set of axioms there will
be propositions consistent with that system which cannot be proven or
disproved by application of the system (i.e. they are unprovable,
which is what I meant by the casual short-hand 'undefined').

Both Turing machines and Register machines are axiomatic mathematical
systems and therefore can enter states which in terms of their axioms
are unknowable. A program is essentially a meta-state comprised of
numerous transient states and is thus a set of meta-propositions
leading to state propositions which need to be proved, any of which
may be unknowable. This applies equally to both the runtime behaviour
of the program _and_ to the application of any formal methods used to
create it.

For most day-to-day programming Gödel incompleteness is irrelevant, in
the same way that quantum indeterminacy can be ignored when playing
tennis, but when you build large software systems which need to be
highly reliable, unknowable states do have the potential to wreak
havoc. This is why, beyond a certain level of complexity, it's helpful
to use statistical methods to gain additional insight beyond the
normal boundaries of a development methodology.

Now the Halting Problem is fascinating because it's very simple in
conception: given a program and a series of inputs (which in Gödel's
terms comprises a mathematical system and a set of axioms) determine
whether or not the program will complete. Turing proved that in the
general case this problem is insoluble, and not only does this place
an interesting theoretical limitation of all software systems but it
also applies to sub-programs right the way down to the finest grain of
detail. Basically anytime a program contains a loop condition the
Halting Problem will apply to that loop.

So in essence we're left with a view of software development in which
we can never truly know if a program is correct or even if it will halt.
An example of a simple program that does a barely useful task
is one that reads an input level from a 16 bit A/D, say, and
writes half that value to a 16 bit D/A. Can we be confident we
can write a 100% reliable and correct program to perform this
task? If not, why not? If so, let us increase the complexity
of the task progressively in small increments. At what point
are we forced to admit that we cannot be sure our program does
what it is meant to do?

That's very difficult to say for sure. Conceptually I'd have
confidence in the old BASIC standby of:

10 PRINT "HELLO WORLD"
20 GOTO 10

as this will run infinitely _and_ is intended to. But of course under
the hood the PRINT statement needs to be implemented in machine code
and that implementation could itself be incorrect, so even with a very
trivial program (from a coder's perspective) we see the possibility of
incorrect behaviour.

However balancing this is the fact that a program exists across a
finite period of time and is subject to modification and improvement.
This means that the set of axioms can be adjusted to more closely
match the set of propositions, in principle increasing our confidence
that the program is correct for the problem domain in question.
I'm not trying to prove you wrong; I just want to get a better
handle on the problem.

Douglas Hofstadter has written extensively on Gödel and computability
if you want to delve deeper, but none of this is easy stuff to get
into as it runs counter to our common-sense view of mathematics.


Ellie

Eleanor McHugh
Games With Brains
http://slides.games-with-brains.net
 
Tom Cloyd

Eleanor said:
Any process that is algorithmic is necessarily implementable as a
Turing machine so I'd argue that the very act of coding the system
with a process defines TNT(1) whilst the target system itself becomes
TNT. Therefore until the system runs in situ and responds to its
environment one cannot make any firm statements regarding when the
system will halt. And if you can't tell when an autopilot will halt,
you have the potential for all kinds of mayhem...

Of course this is a terrible abuse of Church-Turing, but it seems to
fit the real world pretty well.
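A toy Ruby sketch of the point (my own illustration, not from the original posts): whether this control loop halts depends entirely on the data stream its environment supplies, which no inspection of the code alone can settle.

```ruby
# Run a control loop until the error signal settles below a threshold.
# Halting depends on the sensor stream, not just on this code.
def steps_until_stable(sensor, threshold = 0.5)
  steps = 0
  loop do
    steps += 1
    return steps if sensor.next.abs < threshold
  end
end

# A decaying error signal: the loop halts once the error drops below 0.5.
decaying = Enumerator.new { |y| e = 8.0; loop { y << e; e /= 2.0 } }
steps_until_stable(decaying)  # halts after 6 readings (8, 4, 2, 1, 0.5, 0.25)

# An oscillating signal never converges, so the same loop never returns:
#   oscillating = Enumerator.new { |y| loop { y << 1.0; y << -1.0 } }
#   steps_until_stable(oscillating)  # would run forever
```

The code is identical in both cases; only the environment differs, so "does it halt?" is an empirical question about the deployed system, not a static property of the source.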


:)


Ellie
Who wonders what the hell this thread will look like in Google searches.

Eleanor McHugh
Games With Brains
http://slides.games-with-brains.net
My thanks to the core contributors to this fascinating thread. It's
stretched me well past my boundaries on several points, but also
clarified some key learning I've picked from my own field (psychology &
psychotherapy).

For me, the take-away here (and this is not at all news to me) is that
valid formal approaches to reliability are efficient (at least at times)
and powerful, and should definitely be used - WHEN WE HAVE THEM. The
problem is that they all stop short of the full spectrum of reality in
which our processes must survive. Thus we must ultimately leave the
comfort of deduction and dive into the dragon realm of inferential
processes. Ultimately, there simply is no substitute for, or adequate
simulation of, reality. Sigh.

Thanks, again. I've much enjoyed plowing through these posts.

t.

--

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Tom Cloyd, MS MA, LMHC
Private practice Psychotherapist
Bellingham, Washington, U.S.A: (360) 920-1226
<< (e-mail address removed) >> (email)
<< TomCloyd.com >> (website & psychotherapy weblog)
<< sleightmind.wordpress.com >> (mental health issues weblog)
<< directpathdesign.com >> (web site design & consultation)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 

Arved Sandstrom

What I don't see is any particular emphasis on specific languages.
Considering that there seems to be no shortage of avionics software
written
in C/C++, I don't immediately see why Ruby or Python wouldn't work either,
especially considering the intense process the software goes through.

I tend not to discount any particular language prima facie. I recall over
....
I'm inclined to think that 90%+ of software reliability comes from
training,
experience and above all, process. Not the programming language.

******************************
But that still leaves the other 10%. For example, as noted here (http://
www.praxis-his.com/sparkada/pdfs/spark_c130j.pdf), in an analysis of
safety-critical code written in three languages (C, Ada and SPARK),
all of it already certified to DO-178B Level A (the most stringent
level), the SPARK code was found to have one tenth the residual error
rate of the Ada code, and the Ada code in turn only one tenth the
residual rate of the C code. That's a 100:1 difference in residual
error rates in code all of which was certified to the highest aviation
standards. Would anybody argue that putting out safety-critical
software with an error rate 100 times greater than the current art
allows is a good thing? In fact, would anybody argue that it is not
grossly negligent?

Oh, and the anecdote about the compiler finding in minutes a bug that
had defied testing for a week should not be lightly dismissed either.
******************************

I won't dispute the fact that some languages have more inherent support for
"correct" programming than others do. SPARK wouldn't be the only one; Eiffel
and various functional languages come to mind also. For others you can get
add-ons, such as JML for Java (see
http://en.wikipedia.org/wiki/Design_by_contract).

Having said that, it seems to me that the better correctness of programs in
SPARK or Ada compared to C/C++, say, would also be due to the qualities of
organizations that tend to use/adopt these languages. Those qualities
include programmer competence/experience/education, organizational
standards, processes in place, and external requirements (as in legal ones
for avionics or medical software). Not to mention, there is a correlation
between the ease of use of a language and the rate of poor coding (I may get
flak for that statement), which is not necessarily a fault of that language.
Note that by ease of use I do not mean masterability, I simply mean how
quickly a programmer can write something that sort of works.

For example, is shabby software written in Java or C or Python or PHP or
JavaScript shabby because one of those languages was chosen, or is it shabby
because the requirements analysis sucks, design is basically absent, there
is no documentation, testing is a myth, and the coders haven't mastered the
language? I've seen more than a few ads in my area advertising Web developer
jobs for $9 or $10 an hour...you could use the best language in the world at
a job like that and you'd still end up with crap. Conversely, get a team of
really experienced and smart coders who are well-versed in process, have
management backing for process, and I don't see the language of choice
mattering _that_ much. IOW, in that MoD analysis you refer to, was
everything else equal? Throw Ruby at a CMM Level 5 team and I wonder whether
the product is going to be an order of magnitude or two worse than if they
had Ada. Myself, I doubt it.

AHS
 
