I assure you, you are quite mistaken. I have a first-hand data point on
that as well:
http://ti.arc.nasa.gov/m/pub/archive/2000-0176.pdf
Interesting paper.
But I don't think it proves your point.
1) Perl isn't mentioned at all, so you cannot draw any conclusions about
Perl from the paper (neither negative nor positive).
2) The verification tool mentioned doesn't work on Lisp input, but uses
a specialized modelling language called PROMELA. The authors state:
The modeling effort, i.e. obtaining a PROMELA model from the LISP
program, took about 12 man weeks during 6 calendar weeks, while the
verification effort took about one week. The modeling effort
consisted conceptually of an abstraction activity combined with a
translation activity. Abstraction was needed to cut down the program
to one with a reasonably small finite state space, making model
checking tractable. Translation, from LISP to PROMELA, was needed to
obtain a PROMELA model that the SPIN model checker could analyze.
[...]
The translation phase was non-trivial and time consuming due to the
relative expressive power of LISP when compared with PROMELA.
Spending 12 man-weeks hand-translating Lisp to PROMELA doesn't
exactly sound like it's easy - in fact, the authors explicitly called
the task "non-trivial".
3) Then the authors built an automated translation tool, but again it
doesn't translate Lisp to PROMELA, it translates (a subset of) Java
to PROMELA. So, again, they had to hand-translate Lisp to Java, which
could then be automatically processed. At least translating Lisp to
Java was very quick (two hours!), but that may be because
    Some of us spotted the potential error situation because it
    resembled the similar error we had found using SPIN in 1997 [...]
and because of the subsequent "focus on the particular code fragment".
The error is a text-book example of a race condition, btw. It is
discussed in almost identical form in any systems programming class,
so they could have spotted it because of that, and not just because
they had encountered it before.
4) The RAX team found the error within 24 hours, the JPF team within
48 hours (but they also had identified the code fragment as
suspicious within 12 hours).
In conclusion, there is nothing in the paper which suggests that Lisp is
particularly easy to model. If anything, the authors seem to suggest that
Java is easy to model, although I suspect that "shrinking down" real-world
Java programs to a manageable complexity is still non-trivial, even with
the help of their abstraction tool.
That two independent teams each found the race condition within less
than two days is impressive. That may be a hint that Lisp makes it
relatively easy to spot that kind of error. But without more knowledge
about the specific case and a direct comparison to other programming
languages it isn't conclusive.
hp