What is Expressiveness in a Computer Language


Joachim Durchholz

Sacha said:
Many lists are heterogeneous, even in statically typed languages.
For instance, Lisp code is made of lists, with several kinds of atoms and
sub-lists.

Lisp isn't exactly a statically-typed language :)
A car dealer will sell cars, trucks and equipment.
In a statically typed language you would need to type the list on a common
ancestor...

Where's the problem with that?

BTW, the OO way isn't the only way to set up a list from heterogeneous data.
In statically-typed FPL land, lists require homogeneous data types all
right, but the list elements aren't restricted to data - they can be
functions as well.
Now the other specialty of FPLs is that you can construct functions at
run-time - you take a function, fill in some of its parameters and leave
others open - the result is another function. And since you'll iterate
over the list and do homogeneous processing over it, you construct
each function so that it will do all the processing that you'll later need.

The advantage of the FPL way over the OO way is that you can use ad-hoc
functions. You don't need precognition to know which kinds of data
should be lumped under a common supertype - you simply write and/or
construct functions of a common type that will go into the list.
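Joachim's point about packing heterogeneous data into a homogeneous list of partially-applied functions can be sketched as follows. This is an illustrative sketch in TypeScript (standing in for a statically-typed FPL); the names `Describe`, `describeCar`, and `describeTruck` are invented for the example:

```typescript
// Heterogeneous data packed into a homogeneous list of functions:
// each closure captures its own data, so the list is uniformly typed
// as () => string and type-checks statically.
type Describe = () => string;

function describeCar(model: string, doors: number): Describe {
  // Partial application: fill the parameters now, leave the call open.
  return () => `car ${model} with ${doors} doors`;
}

function describeTruck(model: string, payloadTons: number): Describe {
  return () => `truck ${model}, ${payloadTons}t payload`;
}

const inventory: Describe[] = [
  describeCar("Sedan", 4),
  describeTruck("Hauler", 12),
];

// Homogeneous processing over the list - no common data supertype needed.
const listing = inventory.map(f => f());
```

Each closure captures its own representation, so adding a new kind of item means writing one new constructor function - no common supertype and no change to the list's element type.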
What would then be the point of static typing, as you still need to
type-check each element in order to process that list?

Both OO and FPL construction allow static type checks.
Sure you can do this in a statically-typed language, you just need to make
sure some relevant ancestor exists. In my experience you'll end up with the
base object-class more often than not, and that's what I call dynamic typing.

Not quite - the common supertype is more often than not actually useful.

However, getting the type hierarchy right requires a *lot* of
experimentation and fine-tuning. You can easily spend a year or more
(sometimes *much* more) with that (been there, done that). Even worse,
once the better hierarchy is available, you typically have to adapt all
the client code that uses it (been there, done that, too).

Those are the problems in OO land. FPL land doesn't have these problems -
if the list type is just a function mapping two integers to another
integer, reworking the data types that went into the functions of the
list doesn't require those global changes.
I might want to test some other parts of my program before writing this
function.

That's unrelated to dynamic typing. All that's needed is an environment
that throws an exception once such an undefined function is called,
instead of aborting the compilation.

I'll readily admit that very few static languages offer such an
environment. (Though, actually, C interpreters do exist.)
Or maybe will my program compile that function depending on user input.

Hmm... do I really want this kind of power at the user's hand in the age
of malware?
As long as I get a warning for calling a non-existing function, everything
is fine.

That depends.
For software that's written to run once (or very few times), and where
somebody who's able to correct problems is always nearby, that's a
perfectly viable strategy.
For safety-critical software where problems must be handled within
seconds (or an even shorter period of time), you want to statically
ensure as many properties as you can. You'll take not just static
typing, you also want to ascertain value ranges and dozens of other
properties. (In Spark, an Ada subset, this is indeed done.)

Between those extremes, there's a broad spectrum.
 

Joachim Durchholz

Raffael said:
There is a very large class of software where user inputs are
unpredictable and/or where input data comes from an untrusted source. In
these cases run-time checks are going to be needed anyway so the
advantages of static type checking are greatly reduced - you end up
doing run-time checks anyway, precisely the thing you were trying to
avoid by doing static analysis.

There's still a large class of errors that *can* be excluded via type
checking.
Ideally one wants a language with switchable typing - static where
possible and necessary, dynamic elsewhere.

That has been my position for a long time now.
To a certain extent this is
what Common Lisp does, but it requires programmer declarations. Some
implementations try to move beyond this by doing type inference and
alerting the programmer to potential static guarantees that the
programmer could make that would allow the compiler to do a better job.

I think it's easier to start with a good (!) statically-typed language
and relax the checking, than to start with a dynamically-typed one and
add static checks.
With the right restrictions, a language can make all kinds of strong
guarantees, and it can make it easy to construct software where static
guarantees abound. If the mechanisms are cleverly chosen, they interfere
just minimally with the programming process. (A classical example is
Hindley-Milner type inference. Typical reports from languages
with HM systems say that you can have it verify thousand-line programs
without a single type annotation in the code. That's actually far better
than you'd need - you'd *want* to document the types at least on the
major internal interfaces after all *grin*.)
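As a rough illustration of checking with almost no annotations - TypeScript's inference is local rather than full Hindley-Milner, so this only approximates what an HM system does - consider:

```typescript
// Sketch of annotation-light static checking. Only one parameter type is
// written; everything else below is inferred, and misuse is still
// rejected at compile time.
const double = (n: number) => n * 2;               // result type inferred
const lengths = ["ab", "cdef"].map(s => s.length); // s inferred as string
const total = lengths.map(double).reduce((a, b) => a + b, 0);
// double("oops");  // would be a compile-time error: string is not number
```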
With a dynamically-typed language, programming style tends to evolve in
directions that make it harder to give static guarantees.
It seems to
me that if we set aside that class of software where safety is paramount
- mostly embedded software such as aircraft and medical devices - we are
left mostly with efficiency concerns.

Nope. Efficiency has taken a back seat. Software is getting slower
(barely offset by increasing machine speed), and newer languages don't
even statically typecheck everything (C++, Java). (Note that the
impossibility to statically typecheck everything in OO languages doesn't
mean that it's impossible to do rigorous static checking in general.
FPLs have been quite rigorous about static checks; the only cases when
an FPL needs to dynamically typecheck its data structures is after
unmarshalling one from an untyped data source such as a network stream,
a file, or an IPC interface.)

The prime factor nowadays seems to be maintainability.

And the difference here is this:
With dynamic typing, I have to rely on the discipline of the programmers
to document interfaces.
With static typing, the compiler will infer (and possibly document) at
least part of their semantics (namely the types).
So static typing should be invoked for that small portion of a program
where efficiency is really needed, and dynamic typing should be the
default elsewhere. This is how Common Lisp works - dynamic typing by
default with static guarantees available where one needs them.

Actually static typing seems to become more powerful at finding errors
as the program size increases.
(Yes, that's a maintainability argument. Efficiency isn't *that*
important; since maintenance is usually the most important single
factor, squelching bugs even before testing is definitely helpful.)

Regards,
Jo
 

Joachim Durchholz

Darren said:
Write a function that takes an arbitrary set of arguments and stores
them into a structure allocated on the heap.

If the set of arguments is really arbitrary, then the software can't do
anything with it. In that case, the type is simply "opaque data block",
and storing it in the heap requires nothing more specific than that of
"opaque data block".
There's more to this. If we see a function with a parameter type of
"opaque data block", and there's no function available except copying
that data and comparing it for equality, then from simply looking at the
function's signature, we'll know that it won't inspect the data. More
interestingly, we'll know that funny stuff in the data might trigger
bugs in the code - in the context of a security audit, that's actually a
pretty strong guarantee, since the analysis can stop at the function's
interface and doesn't have to dig into the function's implementation.
See the Tcl "unknown" proc, used for interactive command expansion,
dynamic loading of code on demand, etc.

Not related to dynamic typing, I fear - I can easily envision
alternatives to that in a statically-typed context.

Of course, you can't eliminate *all* run-time type checking. I already
mentioned unmarshalling data from an untyped source; another possibility
is run-time code compilation (highly dubious in a production system but
of value in a development system).

However, those are some very specialized applications, easily catered for
by doing a dynamic type check plus a thrown exception in case the types
don't match. I still don't see a convincing argument for making dynamic
typing the standard policy.
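That boundary between untyped input and typed code can be sketched like this; `Point` and `parsePoint` are invented names, and the hand-rolled shape check stands in for whatever unmarshalling machinery a real system would use:

```typescript
// Unmarshalling from an untyped source (a JSON string): one dynamic
// type check at the boundary, a thrown exception on mismatch, and
// statically-typed data everywhere past this point.
interface Point { x: number; y: number }

function parsePoint(raw: string): Point {
  const v: unknown = JSON.parse(raw);
  if (
    typeof v === "object" && v !== null &&
    typeof (v as Record<string, unknown>).x === "number" &&
    typeof (v as Record<string, unknown>).y === "number"
  ) {
    const o = v as { x: number; y: number };
    return { x: o.x, y: o.y };
  }
  throw new TypeError("unmarshalled value is not a Point");
}
```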

Regards,
Jo
 

Raffael Cavallaro

I think it's easier to start with a good (!) statically-typed language
and relax the checking, than to start with a dynamically-typed one and
add static checks.
With the right restrictions, a language can make all kinds of strong
guarantees, and it can make it easy to construct software where static
guarantees abound. If the mechanisms are cleverly chosen, they
interfere just minimally with the programming process. (A classical
example is Hindley-Milner type inference. Typical reports from
languages with HM systems say that you can have it verify thousand-line
programs without a single type annotation in the code. That's actually
far better than you'd need - you'd *want* to document the types at
least on the major internal interfaces after all *grin*.)
With a dynamically-typed language, programming style tends to evolve in
directions that make it harder to give static guarantees.

This is purely a matter of programming style. For explorative
programming it is easier to start with dynamic typing and add static
guarantees later rather than having to make decisions about
representation and have stubs for everything right from the start. The
Lisp programming style is arguably all about using heterogeneous lists
and forward references in the REPL for everything until you know what
it is that you are doing, then choosing a more appropriate
representation and filling in forward references once the program gels.
Having to choose representation right from the start and needing
working versions (even if only stubs) of *every* function you call may
ensure type correctness, but many programmers find that it also ensures
that you never find the right program to code in the first place. This
is because you don't have the freedom to explore possible solutions
without having to break your concentration to satisfy the nagging of a
static type checker.
 

Joachim Durchholz

Raffael said:
This is purely a matter of programming style. For explorative
programming it is easier to start with dynamic typing and add static
guarantees later rather than having to make decisions about
representation and have stubs for everything right from the start.

Sorry for being ambiguous - I meant to talk about language evolution.

I agree that static checking could (and probably should) be slightly
relaxed: compilers should still do all the diagnostics that current-day
technology allows, but any problems shouldn't abort the compilation.
It's always possible to generate code that will throw an exception as
soon as a problematic piece of code becomes actually relevant; depending
on the kind of run-time support, this might abort the program, abort
just the computation, or open an interactive facility to correct and/or
modify the program on the spot (the latter is the norm in highly dynamic
systems like those for Lisp and Smalltalk, and I consider this actually
useful).

I don't see static checking and explorative programming as opposites.
Of course, in practice, environments that combine these don't seem to
exist (except maybe in experimental or little-known state).

Regards,
Jo
 

Raffael Cavallaro

I don't see static checking and explorative programming as opposites.
Of course, in practice, environments that combine these don't seem to
exist (except maybe in experimental or little-known state).

Right. Unfortunately the philosophical leanings of those who design
these two types of languages tend to show themselves as different
tastes in development style - for example, static type advocates don't
often want a very dynamic development environment that would allow a
program to run for testing even when parts of it aren't defined yet, and
dynamic type advocates don't want a compiler preventing them from doing
so because the program can't yet be proven statically correct. Dynamic
typing advocates don't generally want a compiler error for ambiguous
typing - for example, adding a float and an int - but static typing
advocates generally do. Of course there's little reason one couldn't
have a language that allowed the full range to be switchable so that
programmers could tighten up compiler warnings and errors as the
program becomes more fully formed. Unfortunately we're not quite there
yet. For my tastes something like SBCL*, with its type inference and
very detailed warnings and notes is as good as it gets for now. I can
basically ignore warnings and notes early on, but use them to allow the
compiler to improve the code it generates once the program is doing
what I want correctly.

[*] I don't mean to exclude other Common Lisp implementations that do
type inference here - I just happen to use SBCL.
 

Torben Ægidius Mogensen

Raffael Cavallaro said:
This is purely a matter of programming style. For explorative
programming it is easier to start with dynamic typing and add static
guarantees later rather than having to make decisions about
representation and have stubs for everything right from the start.

I think you are confusing static typing with having to write types
everywhere. With type inference, you only have to write a minimum of
type information (such as datatype declarations), so you can easily do
explorative programming in such languages -- I don't see any advantage
of dynamic typing in this respect.
The
Lisp programming style is arguably all about using heterogeneous lists
and forward references in the REPL for everything until you know what
it is that you are doing, then choosing a more appropriate
representation and filling in forward references once the program
gels. Having to choose representation right from the start and needing
working versions (even if only stubs) of *every* function you call may
ensure type correctness, but many programmers find that it also
ensures that you never find the right program to code in the first
place.

If you don't have definitions (stubs or complete) of the functions you
use in your code, you can only run it up to the point where you call
an undefined function. So you can't really do much exploration until
you have some definitions.

I expect a lot of the exploration you do with incomplete programs
amounts to the feedback you get from type inference.
This is because you don't have the freedom to explore possible
solutions without having to break your concentration to satisfy the
nagging of a static type checker.

I tend to disagree. I have programmed a great deal in Lisp, Scheme,
Prolog (all dynamically typed) and SML and Haskell (both statically
typed). And I don't find that I need more stubs etc. in the latter.
In fact, I do a lot of explorative programming when writing programs
in ML and Haskell. And I find type inference very helpful in this, as
it guides the direction of the exploration, so it is more like a
safari with a local guide than a blindfolded walk in the jungle.

Torben
 

Rob Thorpe

Torben said:
Indeed. So use a language with type inference.

Well, for most purposes that's the same as dynamic typing since the
compiler doesn't require you to label the type of your variables. I
occasionally use CMUCL and SBCL, which do type inference, which is
useful for improving generated code quality. It can also warn the
programmer if they reuse a variable in a context implying that
it's a different type, which is useful.

I see type inference as an optimization of dynamic typing rather than a
generalization of static typing. But I suppose you can see it that way
around.
 

Torben Ægidius Mogensen

Rob Thorpe said:
Well, for most purposes that's the same as dynamic typing since the
compiler doesn't require you to label the type of your variables.

That's not really the difference between static and dynamic typing.
Static typing means that there exists a typing at compile-time that
guarantees against run-time type violations. Dynamic typing means
that such violations are detected at run-time. This is orthogonal to
strong versus weak typing, which is about whether such violations are
detected at all. The archetypal weakly typed language is machine code
-- you can happily load a floating point value from memory, add it to
a string pointer and jump to the resulting value. ML and Scheme are
both strongly typed, but one is statically typed and the other
dynamically typed.

Anyway, type inference for statically typed languages doesn't make them
any more dynamically typed. It just moves the burden of assigning the
types from the programmer to the compiler. And (for HM type systems)
the compiler doesn't "guess" at a type -- it finds the unique most
general type from which all other legal types (within the type system)
can be found by instantiation.
I
occasionally use CMUCL and SBCL, which do type inference, which is
useful for improving generated code quality. It can also warn the
programmer if they reuse a variable in a context implying that
it's a different type, which is useful.

I see type inference as an optimization of dynamic typing rather than a
generalization of static typing. But I suppose you can see it that way
around.

Some compilers for dynamically typed languages will do a type analysis
similar to type inference, but they will happily compile a program
even if they can't guarantee static type safety.

Such "type inference" can be seen as an optimisation of dynamic
typing, as it allows the compiler to omit _some_ of the runtime type
checks. I prefer the term "soft typing" for this, though, so as not
to confuse with static type inference.

Soft typing can give feedback similar to that of type inference in
terms of identifying potential problem spots, so in that respect it is
similar to static type inference, and you might get similar fast code
development. You miss some of the other benefits of static typing,
though, such as a richer type system -- soft typing often lacks
features like polymorphism (it will find a set of monomorphic
instances rather than the most general type) and type classes.

Torben
 

George Neuner

If you don't have definitions (stubs or complete) of the functions you
use in your code, you can only run it up to the point where you call
an undefined function. So you can't really do much exploration until
you have some definitions.

Well in Lisp that just drops you into the debugger where you can
supply the needed return data and continue. I agree that it is not
something often needed.

I expect a lot of the exploration you do with incomplete programs
amount to the feedback you get from type inference.

The ability to write functions and test them immediately without
writing a lot of supporting code is _far_ more useful to me than type
inference.

I'm not going to weigh in on the static v dynamic argument ... I think
both approaches have their place. I am, however, going to ask what
information you think type inference can provide that substitutes for
algorithm or data structure exploration.

George
 

Torben Ægidius Mogensen

George Neuner said:
On 19 Jun 2006 10:19:05 +0200, (e-mail address removed) (Torben Ægidius
Mogensen) wrote:

The ability to write functions and test them immediately without
writing a lot of supporting code is _far_ more useful to me than type
inference.

I can't see what this has to do with static/dynamic typing. You can
test individual functions in isolation in statically typed languages
too.
I'm not going to weigh in on the static v dynamic argument ... I think
both approaches have their place. I am, however, going to ask what
information you think type inference can provide that substitutes for
algorithm or data structure exploration.

None. But it can provide a framework for both and catch some types of
mistakes early.

Torben
 

Chris Smith

Torben Ægidius Mogensen said:
That's not really the difference between static and dynamic typing.
Static typing means that there exists a typing at compile-time that
guarantees against run-time type violations. Dynamic typing means
that such violations are detected at run-time. This is orthogonal to
strong versus weak typing, which is about whether such violations are
detected at all. The archetypal weakly typed language is machine code
-- you can happily load a floating point value from memory, add it to
a string pointer and jump to the resulting value. ML and Scheme are
both strongly typed, but one is statically typed and the other
dynamically typed.

Knowing that it'll cause a lot of strenuous objection, I'll nevertheless
interject my plea not to abuse the word "type" with a phrase like
"dynamically typed". If anyone considers "untyped" to be perjorative,
as some people apparently do, then I'll note that another common term is
"type-free," which is marketing-approved but doesn't carry the
misleading connotations of "dynamically typed." We are quickly losing
any rational meaning whatsoever to the word "type," and that's quite a
shame.

By way of extending the point, let me mention that there is no such
thing as a universal class of things that are called "run-time type
violations". At runtime, there is merely correct code and incorrect
code. To the extent that anything is called a "type" at runtime, this
is a different usage of the word from the usage by which we may define
languages as being statically typed (which means just "typed"). In
typed OO languages, this runtime usage is often called the "class", for
example, to distinguish it from type.

This cleaner terminology eliminates a lot of confusion. For example, it
clarifies that there is no binary division between strongly typed
languages and weakly typed languages, since the division between a "type
error" and any other kind of error is arbitrary, depending only on
whether the type system in a particular language happens to catch that
error. For example, type systems have been defined to try to catch unit
errors in scientific programming, or to catch out-of-bounds array
indices... yet these are not commonly called "type errors" only because
such systems are not in common use.

This also leads us to define something like "language safety" to
encapsulate what we previously would have meant by the phrase "strongly
dynamically typed language". This also is a more general concept than
we had before. Language safety refers to a language having well-defined
behavior for as many operations as feasible, so that it's less likely
that someone will do something spectacularly bad. Language safety may
be achieved either by a type system or by runtime checks. Again, it's
not absolute... I'm aware of no language that is completely determinate,
at least if it supports any kind of concurrency.

This isn't just a matter of preference in terminology. The definitions
above (which are, in my experience, used widely by most non-academic
language design discussions) actually limit our understanding of
language design by pretending that certain delicate trade-offs such as
the extent of the type system, or which language behavior is allowed to
be non-deterministic or undefined, are etched in stone. This is simply
not so. If types DON'T mean a compile-time method for proving the
absence of certain program behaviors, then they don't mean anything at
all. Pretending that there's a distinction at runtime between "type
errors" and "other errors" serves only to confuse things and
artificially limit which problems we are willing to conceive as being
solvable by types.
Anyway, type inference for statically typed languages doesn't make them
any more dynamically typed.

Indeed it does not. Unless it weakens the ability of a compiler to
prove the absence of certain program behaviors (which type inference
does not), it doesn't move anything on the scale toward a type-free
language.

That being said, though, it is considered a feature of some programming
languages that the programmer is asked to repeat type information in a
few places. The compiler may not need the information, but for
precisely the reason that the information is redundant, the compiler is
then able to check the consistency of the programmer in applying the
type. I won't get into precisely how useful this is, but it is
nevertheless present as an advantage to outweigh the wordiness.
 

Rob Thorpe

Chris said:
Knowing that it'll cause a lot of strenuous objection, I'll nevertheless
interject my plea not to abuse the word "type" with a phrase like
"dynamically typed". If anyone considers "untyped" to be perjorative,
as some people apparently do, then I'll note that another common term is
"type-free," which is marketing-approved but doesn't carry the
misleading connotations of "dynamically typed." We are quickly losing
any rational meaning whatsoever to the word "type," and that's quite a
shame.

I don't think dynamic typing is that nebulous. I remember this being
discussed elsewhere some time ago, I'll post the same reply I did then
...


A language is statically typed if a variable has a property - called
its type - attached to it, and given its type it can only represent
values defined by a certain class.

A language is latently typed if a value has a property - called its
type - attached to it, and given its type it can only represent values
defined by a certain class.

Some people use dynamic typing as a word for latent typing, others use
it to mean something slightly different. But for most purposes the
definition above works for dynamic typing also.

Untyped and type-free mean something else: they mean no type checking
is done.
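A toy model of that latent-typing definition, with the type carried on the value as a tag and checked at use time, can be sketched as follows (the names `Tagged` and `addInts` are invented; TypeScript serves only as notation here, since its own static layer is beside the point):

```typescript
// Toy model of latent typing: the *value* carries a tag, and the tag is
// checked when the value is used (at run time), not when code is compiled.
type Tagged =
  | { tag: "int"; value: number }
  | { tag: "str"; value: string };

function addInts(a: Tagged, b: Tagged): number {
  if (a.tag !== "int" || b.tag !== "int") {
    // The latent check: a use-time violation raises an error.
    throw new TypeError("addInts: both operands must carry the 'int' tag");
  }
  return a.value + b.value;
}
```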
 

Pascal Costanza

Chris said:
Knowing that it'll cause a lot of strenuous objection, I'll nevertheless
interject my plea not to abuse the word "type" with a phrase like
"dynamically typed". If anyone considers "untyped" to be perjorative,
as some people apparently do, then I'll note that another common term is
"type-free," which is marketing-approved but doesn't carry the
misleading connotations of "dynamically typed." We are quickly losing
any rational meaning whatsoever to the word "type," and that's quite a
shame.

The words "untyped" or "type-free" only make sense in a purely
statically typed setting. In a dynamically typed setting, they are
meaningless, in the sense that there are _of course_ types that the
runtime system respects.

Types can be represented at runtime via type tags. You could insist on
using the term "dynamically tagged languages", but this wouldn't change
a lot. Exactly _because_ it doesn't make sense in a statically typed
setting, the term "dynamically typed language" is good enough to
communicate what we are talking about - i.e. not (static) typing.
By way of extending the point, let me mention that there is no such
thing as a universal class of things that are called "run-time type
violations". At runtime, there is merely correct code and incorrect
code.

No, there is more: There is safe and unsafe code (i.e., code that throws
exceptions or that potentially just does random things). There are also
runtime systems where you have the chance to fix the reason that caused
the exception and continue to run your program. The latter play very
well with dynamic types / type tags.
To the extent that anything is called a "type" at runtime, this
is a different usage of the word from the usage by which we may define
languages as being statically typed (which means just "typed"). In
typed OO languages, this runtime usage is often called the "class", for
example, to distinguish it from type.

What type of person are you to tell other people what terminology to use? ;)

Ha! Here I used "type" in just another sense of the word. ;)

It is important to know the context in which you are discussing things.
For example, "we" Common Lispers use the term "type" as defined in
http://www.lispworks.com/documentation/HyperSpec/Body/26_glo_t.htm . You
cannot possibly argue that "our" use of the word "type" is incorrect
because in "our" context, when we talk about Common Lisp, the use of the
word "type" better be consistent with that definition. (You can say that
you don't like the definition, that it is unsound, or whatever, but that
doesn't change anything here.)
This cleaner terminology eliminates a lot of confusion. For example, it
clarifies that there is no binary division between strongly typed
languages and weakly typed languages, since the division between a "type
error" and any other kind of error is arbitrary, depending only on
whether the type system in a particular language happens to catch that
error. For example, type systems have been defined to try to catch unit
errors in scientific programming, or to catch out-of-bounds array
indices... yet these are not commonly called "type errors" only because
such systems are not in common use.

What type system catches division by zero? That is, statically? Would
you like to program in such a language?
This isn't just a matter of preference in terminology. The definitions
above (which are, in my experience, used widely by most non-academic
language design discussions) actually limit our understanding of
language design by pretending that certain delicate trade-offs such as
the extent of the type system, or which language behavior is allowed to
be non-deterministic or undefined, are etched in stone. This is simply
not so. If types DON'T mean a compile-time method for proving the
absence of certain program behaviors, then they don't mean anything at
all. Pretending that there's a distinction at runtime between "type
errors" and "other errors" serves only to confuse things and
artificially limit which problems we are willing to concieve as being
solvable by types.

Your problem doesn't exist. Just say "types" when you're amongst your
own folks, and "static types" when you're amongst a broader audience,
and everything's fine. Instead of focusing on terminology, just focus on
the contents.


Pascal
 

Chris Smith

Rob Thorpe said:
A language is latently typed if a value has a property - called its
type - attached to it, and given its type it can only represent values
defined by a certain class.

I'm assuming you mean "class" in the general sense, rather than in the
sense of a specific construct of some subset of OO programming
languages.

Now I define a class of values called "correct" values. I define these
to be those values for which my program will produce acceptable results.
Clearly there is a defined class of such values: (1) they are
immediately defined by the program's specification for those lines of
code that produce output; (2) if they are defined for the values that
result from any expression, then they are defined for the values that
are used by that expression; and (3) for any value for which correctness
is not defined by (1) or (2), we may define its "correct" values as the
class of all possible values. Now, by your definition, any language
which provides checking of that property of correctness for values is
latently typed. Of course, there are no languages that assign this
specific class of values; but ANY kind of correctness checking on values
that a language does (if it's useful at all) is a subset of the perfect
correctness checking system above. Apparently, we should call all such
systems "latent type systems". Can you point out a language that is not
latently typed?

I'm not trying to poke holes in your definition for fun. I am proposing
that there is no fundamental distinction between the kinds of problems
that are "type problems" and those that are not. Types are not a class
of problems; they are a class of solutions. Languages that solve
problems in ways that don't assign types to variables are not typed
languages, even if those same problems may have been originally solved
by type systems.
Untyped and type-free mean something else: they mean no type checking
is done.

Hence, they don't exist, and the definitions being used here are rather
pointless.
 

George Neuner

I can't see what this has to do with static/dynamic typing. You can
test individual functions in isolation in statically typed languages
too.

It has nothing to do with static/dynamic typing and that was the point
.... that support for exploratory programming is orthogonal to the
language's typing scheme.

George
 

Chris Smith

Pascal Costanza said:
Types can be represented at runtime via type tags. You could insist on
using the term "dynamically tagged languages", but this wouldn't change
a lot. Exactly _because_ it doesn't make sense in a statically typed
setting, the term "dynamically typed language" is good enough to
communicate what we are talking about - i.e. not (static) typing.

Okay, fair enough. It's certainly possible to use the same sequence of
letters to mean two different things in different contexts. The problem
arises, then, when Torben writes:

: That's not really the difference between static and dynamic typing.
: Static typing means that there exists a typing at compile-time that
: guarantees against run-time type violations. Dynamic typing means
: that such violations are detected at run-time.

This is clearly not using the word "type" to mean two different things
in different contexts. Rather, it is speaking under the mistaken
impression that "static typing" and "dynamic typing" are varieties of
some general thing called "typing." In fact, the phrase "dynamically
typed" was invented to do precisely that. My argument is not really
with LISP programmers talking about types, by which they would mean
approximately the same thing Java programmers mean by "class." My point
here concerns the confusion that results from the conception that there
is this binary distinction (or continuum, or any other simple
relationship) between a "statically typed" and a "dynamically typed"
language.

Torben's (and I don't mean to single out Torben -- the terminology is
used quite widely) classification of dynamic versus static type systems
depends on the misconception that there is some universal definition to
the term "type error" or "type violation" and that the only question is
how we address these well-defined things. It's that misconception that
I aim to challenge.
No, there is more: There is safe and unsafe code (i.e., code that throws
exceptions or that potentially just does random things). There are also
runtime systems where you have the chance to fix the reason that caused
the exception and continue to run your program. The latter play very
well with dynamic types / type tags.

Yes, I was oversimplifying.
What type system catches division by zero? That is, statically?

I can define such a type system trivially. To do so, I simply define a
type for integers, Z, and a subtype for non-zero integers, Z'. I then
define the language such that division is only possible in an expression
that looks like << z / z' >>, where z has type Z and z' has type Z'.
The language may then contain an expression:

z == 0 ? t1 : t2

in which t1 is evaluated in the parent type environment, but t2 is
evaluated in the type environment augmented by (z -> Z'), the type of
the expression is the intersection type of t1 and t2 evaluated in those
type environments, and the evaluation rules are defined as you probably
expect.
Would you like to program in such a language?

No. Type systems for real programming languages are, of course, a
balance between rigor and usability. This particular set of type rules
doesn't seem to exhibit a good balance. Perhaps there is a way to
achieve it in a way that is more workable, but it's not immediately
obvious.
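The Z / Z' discipline described above can be roughly approximated in an
existing language with a "branded" type rather than true refinement
types. The following TypeScript sketch is illustrative only; the names
NonZero, asNonZero, and safeDiv are made up for this example:

```typescript
// A number statically marked as known to be non-zero (the Z' of the
// discussion above). The brand exists only at the type level.
type NonZero = number & { readonly __brand: "NonZero" };

// The only way to obtain a NonZero is to pass the zero test, which
// plays the role of the branching rule that refines z's type.
function asNonZero(z: number): NonZero | null {
  return z === 0 ? null : (z as NonZero);
}

// Division demands a NonZero divisor, so "division by zero" becomes a
// compile-time type error rather than a runtime exception.
function safeDiv(z: number, zPrime: NonZero): number {
  return z / zPrime;
}

const d = asNonZero(3);
// The compiler forces the zero case to be handled before dividing:
const result = d === null ? 0 : safeDiv(12, d);
console.log(result); // 4
```

Note that this only simulates the rule: the brand is erased at runtime,
and the programmer must route every divisor through the zero test.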

As another example, from Pierce's text "Types and Programming
Languages", Pierce writes: "Static elimination of array-bounds checking
is a long-standing goal for type system designers. In principle, the
necessary mechanisms (based on dependent types) are well understood, but
packaging them in a form that balances expressive power, predictability
and tractability of typechecking, and complexity of program annotations
remains a significant challenge." Again, this could quite validly be
described as a type error, just like division by zero or ANY other
program error... it's just that the type system that solves it doesn't
look appealing, so everyone punts the job to runtime checks (or, in some
cases, to the CPU's memory protection features and/or the user's ability
to fix resulting data corruption).

Why aren't these things commonly considered type errors? There is only
one reason: there exists no widely used language which solves them with
types. (I mean in the programming language type theory sense of "type";
since many languages "tag" arrays with annotations indicating their
dimensions, I guess you could say that we do solve them with types in
the LISP sense).
Your problem doesn't exist. Just say "types" when you're amongst your
own folks, and "static types" when you're amongst a broader audience,
and everything's fine.

I think I've explained why that's not the case. I don't have a
complaint about anyone speaking of types. It's the confusion from
pretending that the two definitions are comparable that I'm pointing
out.
 

Matthias Blume

George Neuner said:
I am, however, going to ask what
information you think type inference can provide that substitutes for
algorithm or data structure exploration.

Nobody wants to do such a substitution, of course. In /my/
experience, however, I find that doing algorithm and data structure
exploration is greatly aided by a language with static types and type
inference. YMMV.
 

Yet Another Dan

Now I define a class of values called "correct" values. I define
these to be those values for which my program will produce acceptable
results. Clearly there is a defined class of such values: (1) they
are immediately defined by the program's specification for those lines
of code that produce output; ...
I'm not trying to poke holes in your definition for fun. I am
proposing that there is no fundamental distinction between the kinds
of problems that are "type problems" and those that are not.

That sounds like a lot to demand of a type system. It almost sounds like
it's supposed to test and debug the whole program. In general, defining the
exact set of values for a given variable that generate acceptable output
from your program will require detailed knowledge of the program and all
its possible inputs. That goes beyond simple typing. It sounds more like
contracts. Requiring an array index to be an integer is considered a typing
problem because it can be checked based on only the variable itself,
whereas checking whether it's in bounds requires knowledge about the array.
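The distinction drawn here can be made concrete with a small sketch
(the helper `at` is hypothetical, not from the thread): the index's
type is checkable from the variable alone, while the bounds condition
depends on the particular array and so is deferred to a runtime check.

```typescript
// "i is a number" is a typing property: at(xs, "two") is rejected by
// the compiler. "i is in bounds" depends on xs itself, so it is
// enforced here as a runtime contract instead.
function at(xs: readonly number[], i: number): number {
  if (!Number.isInteger(i) || i < 0 || i >= xs.length) {
    throw new RangeError(`index ${i} out of bounds for length ${xs.length}`);
  }
  return xs[i];
}

const picked = at([10, 20, 30], 1);
console.log(picked); // 20
```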
 

Matthias Blume

Rob Thorpe said:
I don't think dynamic typing is that nebulous. I remember this being
discussed elsewhere some time ago, I'll post the same reply I did then
..


A language is statically typed if a variable has a property - called
its type - attached to it, and given its type it can only represent
values defined by a certain class.

By this definition, all languages are statically typed (by making that
"certain class" the set of all values). Moreover, this "definition",
when read the way you probably wanted it to be read, requires some
considerable stretch to accommodate existing static type systems such
as F_\omega.

Perhaps better: A language is statically typed if its definition
includes (or even better: is based on) a static type system, i.e., a
static semantics with typing judgments derivable by typing rules.
Usually typing judgments associate program phrases ("expressions") with
types given a typing environment.
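A standard textbook example of such a typing rule (integer addition,
not specific to any language in this thread) reads:

```latex
% Under environment \Gamma, if e_1 and e_2 both have type Int,
% then e_1 + e_2 has type Int.
\frac{\Gamma \vdash e_1 : \mathrm{Int} \qquad \Gamma \vdash e_2 : \mathrm{Int}}
     {\Gamma \vdash e_1 + e_2 : \mathrm{Int}}
```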
A language is latently typed if a value has a property - called it's
type - attached to it, and given it's type it can only represent values
defined by a certain class.

This "definition" makes little sense. Any given value can obviously
only represent one value: itself. "Dynamic types" are nothing more
than sets of values, often given by computable predicates.
Untyped and type-free mean something else: they mean no type checking
is done.

Look up "untyped lambda calculus".
 
