Is there a "Large Scale Python Software Design"?


Andrew Dalke

Alex said:
For a reasonably experienced programmer, with decent tools, and without
hair-raising problems of deployment, optimization, continuous fast
changes to specs, etc, etc, 10k SLOC should be within the threshold of
"can be sensibly developed and maintained by one person"; 100k SLOC
won't be; the threshold is somewhere in-between.

I concur. I've seen several C++ projects developed by one
person which are a bit higher than O(100,000) LOC (not SLOC).
That's about the point where it becomes unwieldy for one person
to maintain.

The only exception I've encountered is for some very long-term
research projects where the software is naturally modularizable
and the developer is the sole user, so there's no external
pressure for modifying the code.

I have seen one-person projects with much larger code bases
than that, but only because they were using 3rd-party libraries,
in effect making it a multi-person team.


Andrew
(e-mail address removed)
 

Alex Martelli

JCM said:
Be civil. There are those who prefer it, in some cases. If you don't
like it, don't do it.

Indeed I don't do it, but do I have to suffer it silently from others to
"be civil"? Why ever?


Alex
 

Dave Brueck

[somebody before me wrote the above, but it was attributed to me - could have
been my own fault while replying, but who knows...]
No, I think you're wrong here. Typos are just as frequent for just
about all classes of coders, lazy or eager, experienced or not -- the
eager experienced ones often use faster typing (nothing to do with
static typing;-).

Hmm.. when I wrote that I was merely guessing at what "classes of errors" he was
referring to, and I honestly didn't even consider typos. Instead I was thinking
of cases where the programmer is reading too much into information returned from
the compiler. Two specific instances I've encountered in the past came to mind:

a) Sloppy casting - the attitude that casts are needed only when the compiler
complains, and warnings/errors due to type mismatches are given a cast without
much thought - the goal is to silence the compiler.

b) Sloppy refactoring - "I added a parameter to the method signature and
recompiled. In each case where the compiler complained, I added the new
parameter to the call until all instances were fixed".

Basically, they're cases where static typing appears to give an advantage in
locating problems, but they are sometimes misleading (and therefore can
_sometimes_ be a disadvantage if relied on inappropriately), and sometimes the
remedy is more about helping the developer progress and improve (as such efforts
yield greater fruits anyway).
If you mean typesystems such as Haskell's or ML's, allowing extended
inference (and, in Haskell's case, the wonder of typeclasses), I think
you're being a bit unfair here. You can refactor your types and
typeclasses just as much as any other part of your code, so the "when
they must be defined" seems a bit of a red herring to me

True enough - the "when" bit applies to the static type systems of e.g. C/C++ -
where up front you have to specify more detail than you may have (what set of
values might it have, how will it be used, does it ever need to behave like
objects of a similar type, how do we represent "nothing", do we need to be able
to differentiate between uninitialized and "not present", etc.), and refactoring
later on can have interesting side-effects (not necessarily due to the typing
system directly; rather, such a type system encourages programmers to
prematurely think about things like saving memory by using a short vs an int, or
making sure a structure wastes the minimum possible number of bytes on
alignment padding - later on, when you better understand how the data is used and
would like to refactor it, it can get messy trying to separate decisions that
were made due to requirements from those made for other reasons).

-Dave
 

Dave Brueck

Dave said:
[somebody before me wrote the above, but it was attributed to me - could
have been my own fault while replying, but who knows...]

<sigh> Actually, I just miscounted the ">"s.

:)
 

Peter Hansen

Jonathan said:
Almost four years ago I started working at a company with about 500
kloc of Java code. Thanks largely to tool support I was able to get in
and start fixing bugs my first day [...snip rest of anecdote]

These are all things that don't really seem to be issues with the
approach to development that I use, so I can't really say anything
except that perhaps it's possible to approach development in such
a way that you won't miss such tools. I've entirely given up on
fancy IDEs, and believe I'm more productive without than I ever
was with.
I haven't jumped into a project of similar size with Python, but the
tool support for this approach to working with a large codebase just
isn't there, and I haven't seen any convincing arguments that
alternative methodologies are enough better to make up for this.

I'm getting the impression you also haven't tried any significant
test-driven development. The benefits of this approach are *easily*
convincing to most people, and it also fits the bill as removing the
need for a very sizable portion of the tool support which you rightly
point out is not there in most tools for dynamically typed languages.
Testing is good; preventing entire classes of errors from ever
happening at all is better, particularly when you get large. Avoiding
connectedness helps, but that's not always possible.

I think I can sum up everything I have to say on this subject with
the following:

Most people who adopt Python (and I'm pretty sure *all* those
who use it for large projects) have worked with at least one
other language, often statically typed languages.

Yet they now use Python.

Most people who adopt test-driven development have worked with
at least one other approach to programming, usually those heavily
dependent on good tool support.

Yet they now use TDD.

Having experience with both approaches, and choosing one over
the other, gives one greater credibility than having experience
with just one approach, yet clinging to it...

-Peter
 

Andrea Griffini

I concur. I've seen several C++ projects developed by one
person which are a bit higher than O(100,000) LOC (not SLOC).
That's about the point where it becomes unwieldy for one person
to maintain.

One of the parts that is going to be replaced by the new
project is currently about 130K lines of C and is indeed
being maintained by a single programmer. When discussing
the reimplementation, and hence re-analyzing the solutions
that made it in over the years, it has become clear
to me that that program is beyond the reasonable limit
(to many questions about why something has been done a
certain way, or how a certain thing is used, the maintainer
replies with "I do not remember" or "no idea, let's see",
and this even about parts he developed himself).
Not surprisingly, bug hunting has lately been
quite costly, and you can smell the fear (terror?) of
changes.

What is also IMO quite evident is that a *big* part of
those lines of code is just uninteresting minutiae that
wouldn't be present if coding in Python.

Andrea
 

Andrea Griffini

Will it be 2D or 3D (3D I
would assume), and what kind of geometry engine?
Probably one of the open-source ones that already have
a Python API, no?

It's 3D+2D and also covers parts that are not strictly
CAD/CAM but that are quite interconnected with the CAD.
However, it's not a mechanical CAD (it's for the shoe
industry) and the problems to be solved are completely
different.

The geometry part in particular is simpler from a
mathematical point of view, but contains complications
that are not present in a mechanical CAD (everything is
parametrized to the size of the shoe, and in somewhat
complex ways). Another big difference is the
intended audience, from which you can expect a much
lower expertise level with computers: the user interface
plays a *central* role.

Andrea
 

Andrew Dalke

Andrea said:
One of the parts that is going to be replaced by the new
project is currently about 130K lines of C and is indeed
being maintained by a single programmer.

I've been on one such project (hence one of my examples
was from experience).
the maintainer replies
with "I do not remember" or "no idea, let's see", and
this even about parts he developed himself).

I realized I had learned to expect maintenance programming
when, a year or so later, I had to go back to some old
code. I didn't know what was going on at a certain
spot, looked up a few lines, and saw the comment I had
written explaining the tricky spot. I thought it was
very nice of the past me to help out the then-present me. :)

Andrew
(e-mail address removed)
 

Alex Martelli

Dave Brueck said:
Hmm.. when I wrote that I was merely guessing at what "classes of errors"
he was referring to, and I honestly didn't even consider typos. Instead I
was thinking

Ah, OK, sorry, it was the first thing that came to mind.
of cases where the programmer is reading too much into information
returned from the compiler. Two specific instances I've encountered in the
past came to mind:

a) Sloppy casting - the attitude that casts are needed only when the
compiler complains, and warnings/errors due to type mismatches are given a
cast without much thought - the goal is to silence the compiler.

"The goal is to silence the compiler" is a common mindset, quite
understandable given the popular and reasonable shop rule "-Wall, and
can't be checked in until it given no warnings". Not exclusive to lazy
or inexperienced coders. Still, you do have a point here.
b) Sloppy refactoring - "I added a parameter to the method signature and
recompiled. In each case where the compiler complained, I added the new
parameter to the call until all instances were fixed".

OK, nolo contendere -- I _have_ seen this happen. Without a decent
refactoring browser in the toolset it will keep happening. Running the
unit tests is a better way to check your refactoring, but it isn't
perfect either -- unit tests typically don't aim at 100% code coverage,
much less 100% branch coverage.

Basically, they're cases where static typing appears to give an advantage
in locating problems, but they are sometimes misleading (and therefore can
_sometimes_ be a disadvantage if relied on inappropriately) and sometimes
the remedy more involves helping the developer progress & improve (as such
efforts yield greater fruits anyway).

OK, you've made your point well, thanks. With all the "sometimes"
there, I can't disagree;-).

True enough - the "when" bit applies to the static type systems of e.g.
C/C++ - where up front you have to specify more detail than you may have
(what set of values might it have, how will it be used, does it ever need
to behave like objects of a similar type, how do we represent "nothing",
do we need to be able to differentiate between uninitialized and "not
present", etc.), and refactoring later on can have interesting
side-effects (not necessarily due to the typing system directly; rather,
such a type system encourages programmers to prematurely think about
things like saving memory by using a short vs an int, or making sure a
structure wastes the minimum possible number of bytes on alignment padding
- later on, when you better understand how the data is used and would like
to refactor it, it can get messy trying to separate decisions that were
made due to requirements from those made for other reasons).

Again a good point, this one about strongly pushing programmers (who are
quite prone to this mistake anyway -- yeah, even the expert ones!-) to
premature optimization. In theory, optional-typing systems like Dylan's
should be perfect here -- you start typeless (in the static sense) and
only add typing later, and optionally, and selectively, as an
optimization. But then, such systems will never provide the "error
safety" which many static-typing enthusiasts claim: in such cases,
typing _is_, rather, seen strictly as an optimization-aid hint to the
compiler. We've seen by the widespread overuse of __slots__ what can
all too easily happen when such "optional hints for optimization only"
are exposed: a substantial minority (at least) of programmers will
overuse them in a way quite different from their design intentions, and
one which they don't even support all that well.
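
For concreteness, here is __slots__ used as designed -- purely a memory
optimization for classes with very many instances -- and why it gives no
real "error safety" (a minimal sketch; the classes are made up):

    class Point(object):
        # suppresses the per-instance __dict__; worthwhile only when
        # you create a huge number of instances
        __slots__ = ('x', 'y')
        def __init__(self, x, y):
            self.x = x
            self.y = y

    p = Point(1.0, 2.0)
    # p.z = 3.0 would raise AttributeError here -- but that's a side
    # effect of the optimization, not a robust safety net:

    class Point3(Point):   # a subclass omitting __slots__...
        pass

    q = Point3(1.0, 2.0)
    q.z = 3.0              # ...silently grows a __dict__ again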

Perhaps we need a broader "hinting system" design -- assert is fine, I
think, but clearly most programmers don't agree. Anyway, such hints,
usable both for error-checking and for easier (for the compiler)
optimization, will generally check errors at runtime (just like, say,
design-by-contract does) -- only occasionally, not systematically, will
the compiler be able to prove that a certain hint's check is bound to
fail. So this isn't really germane to static typing.
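
The sort of runtime "hint" I have in mind, in its simplest assert-based
form (the function and its checks are made up for illustration):

    def mean(values):
        # precondition hints: checked at runtime, stripped under -O,
        # in the design-by-contract spirit
        assert len(values) > 0, "need at least one value"
        result = sum(values) / float(len(values))
        # postcondition hint
        assert min(values) <= result <= max(values)
        return result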

Thanks for making several good points clearly and convincingly.


Alex
 

Andrew Dalke

Alex said:
"The goal is to silence the compiler" is a common mindset, quite
understandable given the popular and reasonable shop rule "-Wall, and
can't be checked in until it given no warnings". Not exclusive to lazy
or inexperienced coders. Still, you do have a point here.

One shop I worked in, in fact my first commercial job, ignored
warnings. Didn't even try casting. The framework used void*
function pointers for the three main event handlers, which could
take different arguments in certain cases.

My first job was to learn how to build the system. The second
week I cleaned up the 1,200 or so warnings (judicious use of
unions; didn't need many casts) and found the half dozen actual
errors embedded in those warnings.

I used the term "code mucking" for that case. As in "stable
mucking" (cleaning out horse manure). Doing a search now and
it appears that phrase is rarely used, and even less frequently
with that meaning. Mostly "mucking" is used in the phrasal
verbs "mucking about", "mucking up", or "mucking with".

Huh, and it comes from the Middle English "muk" meaning "dung"
and not as I conjectured a bowdlerized version of a stronger
rhyming expletive.

To any maintenance programmers out there, feel free to use
it as needed. Perhaps in the retelling of the labors of
Hercules as a programmer he'll use a refactoring browser to
clean the Augean code base.

Andrew
(e-mail address removed)
 

Greg Ewing

Andrew said:
I didn't know what was going on at a certain
spot, looked up a few lines, and saw the comment I had
written explaining the tricky spot. I thought it was
very nice of the past me to help out the then-present me. :)

Indeed. It would be nice to have access to a time machine
so one could go back and ask oneself about things like
this...
 

Andrew Dalke

Greg said:
Indeed. It would be nice to have access to a time machine
so one could go back and ask oneself about things like
this...

As in James P. Hogan's book "Thrice Upon a Time".


Also notable for having no antagonist and for being
the only story I know of that uses quantum time as
the way to resolve time travel paradoxes.

Andrew
(e-mail address removed)
 

Michele Simionato

That is the book I want to write, the one I have always wanted to write;
the Nutshell and the Cookbook (and now their second editions) keep
delaying that plan, but, in a sense, that's good, because I keep
accumulating useful experiences to enrich those notes, and Python keeps
growing (particularly but not exclusively in terms of third-party
extensions and tools) in ways that refine and sometimes indeed redefine
some key aspects. To give a simple technical example: I used to have
substantial caveats in those notes cautioning readers to use multiple
inheritance in an extremely sparing, cautious way, due to traps and
pitfalls that made it fragile. Nowadays, with the advent of 2.3, most
of those traps and pitfalls have gone away (in the new-style object
model), to the point that the whole issue can be reconsidered.

Uhm ... I must say that I was quite keen on Multiple Inheritance,
but having seen the (ab)use of it in Zope I am starting to question
the wisdom of it. The problem I see with MI (even done well) is that
you keep getting methods from parent classes, and each time you have
to think about the MRO and the precedence rules. It is an additional
burden on the programmer's mind. I miss the clean, simple concept of
a superclass; the MRO may be cool, but it is not as simple to learn,
to teach, and especially to remember. Notice, I am not referring to
the algorithm; it is not important to remember it. What is disturbing
to me is to be aware that the resolution of the methods can be
non-trivial, and that I should call .mro() each time to check exactly
what is happening. Also, 'super' is hard to understand and to use :-(
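
To make the burden concrete, a minimal (made-up) diamond:

    class Base(object):
        def save(self):
            print('Base.save')

    class Left(Base):
        def save(self):
            print('Left.save')

    class Right(Base):
        def save(self):
            print('Right.save')

    class App(Left, Right):
        pass

    App().save()        # Left.save -- obvious only if you know the MRO
    print([c.__name__ for c in App.mro()])
    # -> ['App', 'Left', 'Right', 'Base', 'object']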
So I wonder if Matz was right after all, and single inheritance +
mixins à la Ruby are the right way to go. Yes, from a purist point of
view they are inferior to MI; however, from the pragmatist point
of view I don't think you lose very much, and you get a big gain in
a short learning curve and explicitness. Especially, 'super' stays
simple.

However, I lack experience in Ruby with mixins: do you have experience,
or do you know people with experience of that? What do they think?
Are they happy with the approach, or do they wish Ruby had real MI?
Of course in simple systems there is no real issue; I am talking
about large systems. Also, I am not talking about wrong design choices
(for instance, Zope 3 uses MI much less than Zope 2: I interpret this
as a recognition that the design was wrong) but in general: assuming
you have an application where the "right" design is via mixins, is
there a real difference in doing it à la Ruby or with real MI?
It does not look like there is a big difference, in practice.
Yes, you do not have the full power of cooperative methods, but you
also avoid the burden of them, and you can always find workarounds;
I would say there are compensations.

I do not yet have a definite opinion on this point, so I would like
to hear the opinion of others, especially people with real-world
experience in complex systems.

Michele Simionato
 

Jonathan Ellis

Stephen said:
Try "glimpse" (http://webglimpse.net) -- it uses a superset of
grep's arguments and can search large collections of files at
a single bound! Re-indexing takes a few seconds, but doesn't
need to be done unless there are major changes.

glimpse addresses grep's speed problem, but unfortunately has no more
semantic understanding beyond "it's just text." Etags is a little
better but not much, and also suffers from the
have-to-remember-to-reindex-if-you-want-accurate-results "feature."
-Jonathan
 

Jonathan Ellis

What classes of errors are completely avoided by "static typing" as
implemented by C++ (Java)? Just out of curiosity, because this is
usually stated as "true by axiomatic definition" in this kind of
discussion.

As one example: in this codebase (closer to 700 kloc than 500 by this
time, if it matters) the very oldest code used a Borland wrapper over
JDBC. At the time, it allowed doing things JDBC version 1 did not; by
the time I got fed up, JDBC version 3 had caught up and far surpassed
Borland's API. There was also a lot of JDBC code that was suboptimal
-- for the application I worked on, it almost always made sense to use
a PreparedStatement rather than a simple Statement, but because binding
parameters in JDBC is something of a PITA, we often went with the
Statement anyway. Both the Borland-style and the JDBC code also dealt
with calls to stored procedures, most of them not in CallableStatements
(the "right" way to do this).

I volunteered to write a more friendly wrapper over JDBC than Borland's
that would handle caching of [Prepared|Callable]Statement objects and
parameter binding transparently, nothing fancy (in particular my select
methods returned ResultSets, where Borland had their own class for
this) and rewrite these thousands of calls to use the new API. Of
course I wrote scripts to do this; 5 or 6, each handling a different
aspect.
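
Conceptually, each script was along these lines (a much-simplified,
hypothetical sketch -- the real patterns and the new API's names were
of course different):

    import os
    import re

    # rewrite one Borland-style idiom into the new wrapper's API
    OLD_CALL = re.compile(r'dataSet\.executeQuery\("([^"]*)"\)')

    for dirpath, dirnames, filenames in os.walk('src'):
        for name in filenames:
            if not name.endswith('.java'):
                continue
            path = os.path.join(dirpath, name)
            text = open(path).read()
            text, count = OLD_CALL.subn(r'Db.select("\1")', text)
            if count:
                open(path, 'w').write(text)
                print('%s: %d calls rewritten' % (path, count))
    # anything a script mangled, the compiler flagged on the next build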

To write unit tests for this by hand would have been obscene. (As an
aside, writing unit tests for anything that deals with many tables in a
database is a PITA already and usually ends up not really a "unit" test
anymore.) Even generating unit tests with more scripts would have
required a significantly deeper semantic understanding of the code
being filtered, and hence a lot more work.

As it was, with the compiler letting me know when I screwed up so I
could improve my scripts accordingly, out of the thousands of calls, I
ultimately had to do a few dozen by hand (because that was less work
than getting my scripts able to deal with the very worst examples), and
the compiler let me know what those were. After the process was
complete, QA turned up (over several weeks) 4 or 5 places where I'd
broken things despite the static checking, which I considered a very
good success ratio.

-Jonathan
 

Jonathan Ellis

Peter said:
I'm getting the impression you also haven't tried any significant
test-driven development. The benefits of this approach are *easily*
convincing to most people, and it also fits the bill as removing the
need for a very sizable portion of the tool support which you rightly
point out is not there in most tools for dynamically typed languages.

I think I responded to this already --

Oh yes; so I did. :) (See my reply to another subthread for one
example of when static type checking saved me a LOT of work.)

What is the biggest system you have built with Python personally? I'm
happy to be proven wrong, but honestly, the most enthusiastic "testing
solves all my problems" people I have seen haven't worked on anything
"large" -- and my definition of large agrees with Alex's; over 100
kloc, more than a handful of developers.

So people don't get me wrong: I love Python. Most of my programming
friends call me "the python zealot" behind my back. I just don't think
it's the right tool for every problem.

Specifically, in my experience, statically-typed languages make it much
easier to say "okay, I'm fixing a bug in Class.Foo; here's all the
places where it's used." This lets me see how Foo is actually used --
in a perfect world, Foo's documentation is precise and up to date, but
I haven't worked anywhere that this was always the case -- which lets
me make my fix with a reasonable chance of not breaking anything.
Compile-time type checking increases those chances. Unit tests
increase that further, but relying on unit tests as your first and only
line of defense is suboptimal when there are better options.
Having experience with both approaches, and choosing one over
the other, gives one greater credibility than having experience
with just one approach, yet clinging to it...

You are incorrect if you assume I am unfamiliar with Python. I readily
admit I have no experience with truly large Python projects; I would
classify the Python application I work on as "small," but it seems I
am in good company here in that respect... I do claim to have fairly
extensive experience with large projects in a statically typed language
(Java).

-Jonathan
 

Alex Martelli

Jonathan Ellis said:
What is the biggest system you have built with Python personally? I'm
happy to be proven wrong, but honestly, the most enthusiastic "testing
solves all my problems" people I have seen haven't worked on anything
"large" -- and my definition of large agrees with Alex's; over 100
kloc, more than a handful of developers.

I have the experience, both with Python and with C++, and I can confirm
that test-driven development (with more code for tests, particularly
unit- but also system-/integration-/acceptance-, than code to implement
actual functionality) scales up.

The C++ system had about five times the number of developers and ten
times the code size for about the same amount of functionality (as
roughly measured in function points) as the Python system.

Type safety and const-correctness in the C++ system were of very minor
help; not 100% negligible, but clearly they were not pulling their
weight, by a long shot.

In both systems, the trouble spots came invariably where testing had
been skimped on, due to time pressures and insufficient acculturation of
developers to testing; the temptation to shirk is a bit bigger in C++,
where one can work under the delusion that the compiler's typechecks
compensate (they don't).

_Retrofitting_ tests to code developed any old how is not as effective
as growing the tests and code together. It appears to me that the
experience you relate is about code which didn't have a good battery of
unit tests to go with it.

Lastly, I'm still looking for systematic ways to test system integration
that are as effective as unit tests are for each single component or
subsystem; but that's an area where type and const checking are of just
about negligible help.


Alex
 

GerritM

"Alex Martelli" <[email protected]> schreef in bericht
Lastly, I'm still looking for systematic ways to test system integration
that are as effective as unit tests are for each single component or
subsystem; but that's an area where type and const checking are of just
about negligible help.
System integration has a completely different nature than unit testing.
During system integration the "unforeseens" and the "unknowns" pop up. And
of course the not-communicated, implicit human assumptions are uncovered.
And the "non-functional" behavior is a source of problems (response times,
memory footprint, etc). Many system integration problems are semantic
problems. System integration is often difficult due to the heterogeneity of
the problems, technologies, and people involved. In other words, the larger
the system, the more challenging system integration becomes.

All of these problems are not addressed at all by static typing. However,
design clarity and compactness does help tremendously. I would expect for
these reasons that Python is a big plus during system integration of large
systems. Of course design attention is required to cope with the
"non-functional" imapct of Python, such as CPU and memory consumption. on
top of that (run-time) instrumentation is very helpful. Here again the
dynamic nature of Python is a big plus.

kind regards, Gerrit Muller
Gaudi Systems Architecting www.extra.research.philips.com/natlab/sysarch/
 

Alex Martelli

GerritM said:
"Alex Martelli" <[email protected]> schreef in bericht

System integration has a completely different nature than unit testing.
During system integration the "unforeseens" and the "unknowns" pop up. And
of course the not-communicated, implicit human assumptions are uncovered.

Exactly -- which is why I'm still looking (doesn't mean I think I'll
find;-).
All of these problems are not addressed at all by static typing. However,

Essentially not.
design clarity and compactness does help tremendously. I would expect for
these reasons that Python is a big plus during system integration of large

Not as much as one might hope, in my experience. Protocol Adaptation
_would_ help (see PEP 246), but it would need to be widely deployed.
systems. Of course design attention is required to cope with the
"non-functional" imapct of Python, such as CPU and memory consumption. on
top of that (run-time) instrumentation is very helpful. Here again the
dynamic nature of Python is a big plus.

But the extreme difficulty in keeping track of what amount of memory
goes where in what cases is a big minus. I recall similar problems with
Java, in my limited experience with it, but for Java I see now there are
commercial tools specifically to hunt down memory problems. In C++
there were actual _leaks_ which were a terrible problem for us, but
again pricey commercial technology came to the rescue.

With Python, I've found, so far, that tracking where _time_ goes is
quite feasible, with systematic profiling &c (of course profiling is
always a bit invasive, and so on, but no more so in Python than
otherwise), so that in the end CPU consumption is no big deal (it's easy
to find out the tiny hot spot and turn it into an extension iff needed).
But memory is a _big_ problem, in my experience so far, with servers
meant to run a long time and having very large code bases. I'm sure
there IS a commercial niche for a _good_ general purpose Python tool to
keep track of memory consumption, equivalent to those available for C,
C++ and Java...
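
(By "systematic profiling" I mean nothing fancier than the standard
library's own tools; a minimal sketch, where myapp and main() are
hypothetical:)

    import profile
    import pstats

    from myapp import main     # hypothetical application entry point

    profile.run('main()', 'app.prof')   # run under the profiler, save stats
    stats = pstats.Stats('app.prof')
    stats.sort_stats('cumulative').print_stats(10)   # the ten hottest spots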


Alex
 

Peter Hansen

Jonathan said:
I think I responded to this already --


Oh yes; so I did. :) (See my reply to another subthread for one
example of when static type checking saved me a LOT of work.)

And you've reemphasized my point. "Testing" is not test-driven
development. In fact, test-driven development is about *design*,
not just about testing. The two are related, but definitely not
the same thing, and eliminating TDD with a wave of a hand intended
to pooh-pooh mere testing is to miss the point. Once someone has
tried TDD, they are unlikely to lump it in with simple "unit testing"
as it has other properties that aren't obvious on the surface.
What is the biggest system you have built with Python personally? I'm
happy to be proven wrong, but honestly, the most enthusiastic "testing
solves all my problems" people I have seen haven't worked on anything
"large" -- and my definition of large agrees with Alex's; over 100
kloc, more than a handful of developers.

The topic of the thread was large projects with _large teams_,
I thought, so I won't focus on my personal work. The team I
was leading worked on code that, if I recall, was somewhat over
100,000 lines of Python code including tests. I don't recall
whether that number was for the largest piece, or combined several
separate applications which ran together in a distributed
system... I think there were close to 20 man years in the main
bit.

(And remembering that 1 line of Python code corresponds to
some larger number, maybe five or ten, of C code, that should
qualify it as a large project by many definitions.)
So people don't get me wrong: I love Python. Most of my programming
friends call me "the python zealot" behind my back. I just don't think
it's the right tool for every problem.

Neither do I. The above project also involved some C and
some assembly, plus some Javascript and possibly something else
I've forgotten by now. We just made efforts to use Python *as
much as possible* and it paid off.
Specifically, in my experience, statically-typed languages make it much
easier to say "okay, I'm fixing a bug in Class.Foo; here's all the
places where it's used." This lets me see how Foo is actually used --
in a perfect world, Foo's documentation is precise and up to date, but
I haven't worked anywhere that this was always the case -- which lets
me make my fix with a reasonable chance of not breaking anything.
Compile-time type checking increases those chances. Unit tests
increase that further, but relying on unit tests as your first and only
line of defense is suboptimal when there are better options.

But what if you already had tests which allowed you to do exactly
the thing you describe? Is there a need for "better options"
at that point? Are they really better? When I do TDD, I can
*trivially* catch all the cases where Class.Foo is used
because they are all exercised by the tests. Furthermore, I
can catch real bugs, not just typos and simple things involving
using the wrong type. A superset of the bugs your statically
typed language tools are letting you catch. But obviously
I'm rehashing the argument, and one which has been discussed
here many times, so I should let it go.
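
For a flavor of what the tests buy you (a minimal, made-up example):

    import unittest

    class Wallet(object):
        def __init__(self, balance=0):
            self.balance = balance
        def spend(self, amount):
            if amount > self.balance:
                raise ValueError('insufficient funds')
            self.balance -= amount

    class WalletTest(unittest.TestCase):
        # written before spend() existed -- the tests drive the design,
        # and every later change to spend() is exercised by running them
        def test_spend_reduces_balance(self):
            w = Wallet(10)
            w.spend(3)
            self.assertEqual(w.balance, 7)

        def test_overdraft_rejected(self):
            self.assertRaises(ValueError, Wallet(5).spend, 8)

    if __name__ == '__main__':
        unittest.main()
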
You are incorrect if you assume I am unfamiliar with Python.

I assumed no such thing, just that you were unfamiliar with
large projects in Python and yet were advising the OP on its
suitability in that realm. You're bright and experienced, and
your comments have substance, but until you've actually
participated in a large project with Python and seen it fail
gloriously *because it was not statically typed*, I wouldn't
put much weight on your comments in this area if I were the
OP. That's all I was saying...

-Peter
 

Top