What is Expressiveness in a Computer Language


Rob Thorpe

Andreas said:
Rob said:
Andreas said:
Rob Thorpe wrote:

"A language is latently typed if a value has a property - called its
type - attached to it, and given its type it can only represent values
defined by a certain class."

"it [= a value] [...] can [...] represent values"?

???

I just quoted, in condensed form, what you said above: namely, that a
value represents values - which I find a strange and circular definition.

Yes, but the point is, as the other poster mentioned: values defined by
a class.

I'm sorry, but I still think that the definition makes little sense.
Obviously, a value simply *is* a value, it does not "represent" one, or
several ones, regardless how you qualify that statement.

You've clipped a lot of context. I'll put some back in, I said:-

I think this should make it clear. If I have a "xyz" in lisp I know it
is a string.
If I have "xyz" in an untyped language like assembler it may be
anything: two pointers in binary, an integer, a bitfield. There is no
data at compile time or runtime to tell what it is; the programmer has
to remember.

(I'd point out this isn't true of all assemblers, there are some typed
assemblers)
No, variables are insignificant in this context. You can consider a
language without variables at all (such languages exist, and they can
even be Turing-complete) and still have evaluation, values, and a
non-trivial type system.

Hmm. You're right, ML is nowhere in my definition since it has no
variables.
You mean that the type of the value is not represented at runtime? True,
but that's simply because the type system is static. It's not the same
as saying it has no type.

Well, is it even represented at compile time?
The compiler doesn't know in general what values will exist at runtime;
it knows only what types variables have. Sometimes it has only partial
knowledge, and sometimes the programmer deliberately overrides it. From
that knowledge you could say it knows what types values will have.
Nothing in the C spec precludes an implementation from doing just that.

True, that would be an interesting implementation.
The problem with C rather is that its semantics is totally
underspecified. In any case, C is about the worst example to use when
discussing type systems. For starters, it is totally unsound - which is
what your example exploits.

Yes. Unfortunately it's often necessary to break static type systems.

Regarding C the problem is, what should we discuss instead that would
be understood in all these newsgroups we're discussing this in.
 

George Neuner

Yet Another Dan sez:

... Requiring an array index to be an integer is considered a typing

You mean like
subtype MyArrayIndexType is INTEGER range 7 .. 11;
type MyArrayType is array (MyArrayIndexType) of MyElementType;

If the index computation involves wider types it can still produce
illegal index values. The runtime computation of an illegal index
value is not prevented by narrowing subtypes and cannot be statically
checked.

George
 

Joe Marshall

Chris said:
If we agree about this, then there is no need to continue this
discussion. I'm not sure we do agree, though, because I doubt we'd be
right here in this conversation if we did.

I think we do agree.

The issue of `static vs. dynamic types' comes up about twice a year in
comp.lang.lisp. It generally gets pretty heated, but eventually people
come to understand what the other person is saying (or they get bored
and drop out of the conversation - I'm not sure which). Someone always
points out that the phrase `dynamic types' really has no meaning in the
world of static type analysis. (Conversely, the notion of a `static
type' that is available at runtime has no meaning in the dynamic
world.) Much confusion usually follows.

You'll get much farther in your arguments by explaining what you mean
in detail rather than attempting to force a unification of terminology.
You'll also get farther by remembering that many of the people here
have not had much experience with real static type systems. The static
typing of C++ or Java is so primitive that it is barely an example of
static typing at all, yet these are the most common examples of
statically typed languages people typically encounter.
 

Pascal Costanza

Matthias said:
This statement is false.

The example I have given is more important than this statement.
For every program that can run successfully to completion there exists
a static type system which accepts that program. Moreover, there is
at least one static type system that accepts all such programs.

What you mean is that for static type systems that are restrictive
enough to be useful in practice there always exist programs which
(after type erasure in an untyped setting, i.e., by switching to a
different language) would run to completion, but which are rejected by
the static type system.

No, that's not what I mean.
I am quite comfortable with the thought that this sort of evil would
get rejected by a statically typed language. :)

This sort of feature is clearly not meant for you. ;-P


Pascal
 

Rob Thorpe

Darren said:
int x = (int) (20.5 / 3);

What machine code operations does the "/" there invoke? Integer
division, or floating point division? How did the variables involved in
the expression affect that?

In that case it knew because it could see at compile time. In general
though it doesn't.
If I divide x / y it only knows which to use because of types declared
for x and y.
No it doesn't.
int x = (int) 20.5;
There's no point at which bits from the floating point representation
appear in the variable x.

int * x = (int *) 0;
There's nothing that indicates all the bits of "x" are zero, and indeed
in some hardware configurations they aren't.

I suppose some are conversions and some reinterpretations. What I
should have said is that there are cases where a cast reinterprets.
 

Rob Thorpe

Pascal said:
This sort of feature is clearly not meant for you. ;-P

To be fair though that kind of thing would only really be used while
debugging a program.
It's no different than adding a new member to a class while in the
debugger.

There are other places where you might add a slot to an object at
runtime, but they would be done in tidier ways.
 

Darren New

Rob said:
In that case it knew because it could see at compile time.

Well, yes. That's the point of static typing.
In general though it doesn't.

Sure it does. There are all kinds of formal rules about type promotions
and operator version selection.

Indeed, in *general* every value has a type associated with it (at
compile time). Some values have a type associated with them due to the
declaration of the variable supplying the value, but that's far from the
general case.

Note that in
main() { char x = 'x'; foo(x); }
the value passed to "foo" is not even the same type as the declaration
of "x", so it's far from the general case that variables even determine
the values they provide to the next part of the calculation.
If I divide x / y it only knows which to use because of types declared
for x and y.

Yes? So? All you're saying is that the value of the expression "x" is
based on the declared type for the variable "x" in scope at that point.
That doesn't mean values don't have types. It just means that *some*
values' types are determined by the type of the variable the value is
stored in. As soon as you do anything *else* with that value, such as
passing it to an operator, a function, or a cast, the value potentially
takes on a type different from that of the variable from which it came.
I suppose some are conversions and some reinterpretations. What I
should have said is that there are cases where a cast reinterprets.

There are cases where the cast is defined to return a value that has the
same bit pattern as its argument, to the extent the hardware can support
it. However, this is obviously limited to the values for which C
actually defines the bit patterns of its values, namely the scalar
integers.

Again, you're taking a special case and treating all the rest (which are
a majority) as aberrations. For the most part, casts do *not* return the
same bit pattern as the value. For the most part
union {T1 x; T2 y;};
can be used to do transformations that
T2 y; T1 x = (T1) y;
does not. Indeed, the only bit patterns that don't change are when
casting from signed to unsigned versions of the same underlying scalar
type and back. Everything else must change, except perhaps pointers,
depending on your architecture. (I've worked on machines where char* had
more bits than long*, for example.)

(Funny that comp.lang.c isn't on this thread. ;-)
 

Chris Uppal

Chris said:
I'm actually not sure I agree with this at all. I believe that
reference values in Java may be said to be latently typed. Practically
all class-based OO
languages are subject to similar consideration, as it turns out.

Quite probably true of GC-ed statically typed languages in general, at least up
to a point (and provided you are not using something like a tagless ML
implementation). I think Rob is assuming a rather too specific implementation
of statically typed languages.

I'm unsure whether to consider explicitly stored array lengths, which
are present in most statically typed languages, to be part of a "type"
in this sense or not.

If I understand your position correctly, wouldn't you be pretty much forced to
reject the idea of the length of a Java array being part of its type ? If you
want to keep the word "type" bound to the idea of static analysis, then --
since Java doesn't perform any size-related static analysis -- the size of a
Java array cannot be part of its type.

That's assuming that you would want to keep the "type" connected to the actual
type analysis performed by the language in question. Perhaps you would prefer
to loosen that and consider a different (hypothetical) language (perhaps
producing identical bytecode) which does do compile time size analysis.

But then you get into an area where you cannot talk of the type of a value (or
variable) without relating it to the specific type system under discussion.
Personally, I would be quite happy to go there -- I dislike the idea that a
value has a specific inherent type.

It would be interesting to see what a language designed specifically to support
user-defined, pluggable, and perhaps composable, type systems would look like.
Presumably the syntax and "base" semantics would be very simple, clean, and
unrestricted (like Lisp, Smalltalk, or Forth -- not that I'm convinced that any
of those would be ideal for this), with a defined result for any possible
sequence of operations. The type-system(s) used for a particular run of the
interpreter (or compiler) would effectively reduce the space of possible
sequences. For instance, one could have a type system which /only/ forbade
dereferencing null, or another with the job of ensuring that mutability
restrictions were respected, or a third which implemented access control...

But then, I don't see a semantically critical distinction between such space
reduction being done at compile time vs. runtime. Doing it at compile time
could be seen as an optimisation of sorts (with downsides to do with early
binding etc). That's particularly clear if the static analysis is /almost/
able to prove that <some sequence> is legal (by its own rules) but has to make
certain assumptions in order to construct the proof. In such a case the
compiler might insert a few runtime checks to ensure that its assumptions were
valid, but do most of its checking statically.

There would /be/ a distinction between static and dynamic checks in such a
system, and it would be an important distinction, but not nearly as important
as the distinctions between the different type systems. Indeed I can imagine
categorising type systems by /whether/ (or to what extent) a tractable static
implementation exists.

-- chris

P.S. Apologies Chris, btw, for dropping out of a conversation we were having on
this subject a little while ago -- I've now said everything that I /would/ have
said in reply to your last post if I'd got around to it in time...
 

genea

Joe Marshall wrote:
{...}
The issue of `static vs. dynamic types' comes up about twice a year in
comp.lang.lisp It generally gets pretty heated, but eventually people
come to understand what the other person is saying (or they get bored
and drop out of the conversation - I'm not sure which). {...}

I think that the thing about "Language Expressiveness" is just so
elusive, as it is based on each programmer's way of thinking about
things, and also the general types of problems that that programmer is
dealing with on a daily basis. There are folks out there like the Paul
Grahams of the world, that do wonderfully complex things in Lisp,
eschew totally the facilities of CLOS, the Lisp object system, and
get the job done ... just because they can hack and have a mind picture
of what all the type matches are "in their head". I used to use Forth,
where everything goes on a stack, and it is up to the programmer to
remember what the heck the type of a thing was that was stored there...
maybe an address of a string, another word {read: function}, or an
integer... NO TYPING AT ALL, but can you be expressive in Forth? You
can if you are a good Forth programmer...

NOW that being said, I think that the reason I like Haskell, a very
strongly typed language, is that because of its type system, the
language is able to do things like lazy evaluation, and even though it
is strongly typed, and has wonderful things like type classes, a person
can write wildly expressive code, and NEVER write a thing like:
fromtoby :: forall b a.
(Num a, Enum a) =>
a -> a -> a -> (a -> b) ->
The above was generated by the Haskell compiler for me... and it does
that all the time, without any fuss. I just wrote the function and it
did the rest for me... By the way, the function was a replacement for
the { for / to / by } construct of a language like C, and it was done in
one line... THAT TO ME IS EXPRESSIVE, when I can build whole new
features into my language in just a few lines... usually only one
line..

I think that is why guys who are good to great in dynamic or,
if it floats your boat, type-free languages like Lisp and Scheme find
their languages so expressive: because they have found the macros or
whatever other facility to give them the power... to extend their
language to meet a problem, or fit maybe closer to an entire class of
problems.. giving them the tool box.. Haskellers, folks using Ruby,
Python, ML, OCaml, Unicon.... even C or whatever... by building either
modules or libraries... and understanding that whole attitude of tool
building.. can be equally as expressive "for their own way of doing
things". Heck, I don't use one size of hammer out in my workshop, and
I sure as hell don't on my box...

I find myself picking up Icon and
its derivative Unicon to do a lot of data-washing chores.. as it
allows functions as first-class objects... any type can be stored in a
list... tables... like a hash ... with any type as the data and the
key... you can do things like
i := 0
every write(i +:= 1 || read())
which will append a sequence of line numbers to the lines read in from
stdin.. pretty damn concise and expressive in my book...

Now for other
problems .. Haskell with its single-type lists... which of course can
have tuples, which themselves have tuples in them with a list as one
element of that tuple... etc.. and you can build accessors for all of
that for function application brought to bear on one element of one
tuple, allowing mappings of that function to all of that particular
element of ... well, you get the idea.. It is all about how you see
things and how your particular mindset is... I would agree with
someone who said early on in the discussion the thing about "types
warping your mind", and depending on the individual, it is either a good
warp or a bad warp, but that is why there is Ruby and Python, and
Haskell and even Forth... for a given problem, and a given programmer,
any one of those or even Cobol or Fortran might be the ideal tool... if
nothing else based on that person's familiarity or existing tool and
code base.

enough out of me...
-- gene
 

Darren New

Chris said:
Personally, I would be quite happy to go there -- I dislike the idea that a
value has a specific inherent type.

Interestingly, Ada defines a type as a collection of values. It works
quite well, when one consistently applies the definition. For example,
it makes very clear the difference between a value of (type T) and a
value of (type T or one of its subtypes).
 

Matthias Blume

Rob Thorpe said:
I think we're discussing this at cross-purposes. In a language like C
or another statically typed language there is no information passed
with values indicating their type.

You seem to be confusing "does not have a type" with "no type
information is passed at runtime".
Have a look in a C compiler if you don't believe me.

Believe me, I have.
No it doesn't. Casting reinterprets a value of one type as a value of
another type.
There is a difference. If I cast an unsigned integer 2000000000 to a
signed integer in C on the machine I'm using then the result I will get
will not make any sense.

Which result are you getting? What does it mean to "make sense"?
 

Pascal Costanza

Rob said:
To be fair though that kind of thing would only really be used while
debugging a program.
It's no different than adding a new member to a class while in the
debugger.

There are other places where you might add a slot to an object at
runtime, but they would be done in tidier ways.

Yes, but the question remains how a static type system can deal with
this kind of updates.


Pascal
 

Chris Smith

Chris Uppal said:
If I understand your position correctly, wouldn't you be pretty much forced to
reject the idea of the length of a Java array being part of its type ?

I've since abandoned any attempt to be picky about use of the word
"type". That was a mistake on my part. I still think it's legitimate
to object to statements of the form "statically typed languages X, but
dynamically typed languages Y", in which it is implied that Y is
distinct from X. When I used the word "type" above, I was adopting the
working definition of a type from the dynamic sense. That is, I'm
considering whether statically typed languages may be considered to also
have dynamic types, and it's pretty clear to me that they do.
If you
want to keep the word "type" bound to the idea of static analysis, then --
since Java doesn't perform any size-related static analysis -- the size of a
Java array cannot be part of its type.

Yes, I agree. My terminology has been shifting constantly throughout
this thread as I learn more about how others are using terms. I realize
that probably confuses people, but my hope is that this is still
superior to stubbornly insisting that I'm right. :)
That's assuming that you would want to keep the "type" connected to the actual
type analysis performed by the language in question. Perhaps you would prefer
to loosen that and consider a different (hypothetical) language (perhaps
producing identical bytecode) which does do compile time size analysis.

In the static sense, I think it's absolutely critical that "type" is
defined in terms of the analysis done by the type system. Otherwise,
you miss the definition entirely. In the dynamic sense, I'm unsure; I
don't have any kind of deep understanding of what's meant by "type" in
this sense. Certainly it could be said that there are somewhat common
cross-language definitions of "type" that get used.
But then you get into an area where you cannot talk of the type of a value (or
variable) without relating it to the specific type system under discussion.

Which is entirely what I'd expect in a static type system.
Personally, I would be quite happy to go there -- I dislike the idea that a
value has a specific inherent type.

Good! :)
It would be interesting to see what a language designed specifically to support
user-defined, pluggable, and perhaps composable, type systems would look like.
Presumably the syntax and "base" semantics would be very simple, clean, and
unrestricted (like Lisp, Smalltalk, or Forth -- not that I'm convinced that any
of those would be ideal for this), with a defined result for any possible
sequence of operations. The type-system(s) used for a particular run of the
interpreter (or compiler) would effectively reduce the space of possible
sequences. For instance, one could have a type system which /only/ forbade
dereferencing null, or another with the job of ensuring that mutability
restrictions were respected, or a third which implemented access control...

You mean in terms of a practical programming language? If not, then
lambda calculus is used in precisely this way for the static sense of
types.
But then, I don't see a semantically critical distinction between such space
reduction being done at compile time vs. runtime. Doing it at compile time
could be seen as an optimisation of sorts (with downsides to do with early
binding etc). That's particularly clear if the static analysis is /almost/
able to prove that <some sequence> is legal (by its own rules) but has to make
certain assumptions in order to construct the proof. In such a case the
compiler might insert a few runtime checks to ensure that its assumptions were
valid, but do most of its checking statically.

I think Marshall got this one right. The two are accomplishing
different things. In one case (the dynamic case) I am safeguarding
against negative consequences of the program behaving in certain non-
sensical ways. In the other (the static case) I am proving theorems
about the impossibility of this non-sensical behavior ever happening.
You mention static typing above as an optimization to dynamic typing;
that is certainly one possible application of these theorems.

In some sense, though, it is interesting in its own right to know that
these theorems have been proven. Of course, there are important doubts
to be had whether these are the theorems we wanted to prove in the first
place, and whether the effort of proving them was worth the additional
confidence I have in my software systems.

I acknowledge those questions. I believe they are valid. I don't know
the answers. As an intuitive judgement call, I tend to think that
knowing the correctness of these things is of considerable benefit to
software development, because it means that I don't have as much to
think about at any one point in time. I can validly make more
assumptions about my code and KNOW that they are correct. I don't have
to trace as many things back to their original source in a different
module of code, or hunt down as much documentation. I also, as a
practical matter, get development tools that are more powerful.
(Whether it's possible to create the same for a dynamically typed
language is a potentially interesting discussion; but as a practical
matter, no matter what's possible, I still have better development tools
for Java than for JavaScript when I do my job.)

In the end, though, I'm just not very interested in them at the moment.
For me, as a practical matter, choices of programming language are
generally dictated by more practical considerations. I do basically all
my professional "work" development in Java, because we have a gigantic
existing software system written in Java and no time to rewrite it. On
the other hand, I do like proving theorems, which means I am interested
in type theory; if that type theory relates to programming, then that's
great! That's probably not the thing to say to ensure that my thoughts
are relevant to the software development "industry", but it's
nevertheless the truth.
 

Joachim Durchholz

Torben said:
That's not really the difference between static and dynamic typing.
Static typing means that there exists a typing at compile time that
guarantees against run-time type violations. Dynamic typing means
that such violations are detected at run-time.
Agreed.

This is orthogonal to
strong versus weak typing, which is about whether such violations are
detected at all. The archetypal weakly typed language is machine code
-- you can happily load a floating point value from memory, add it to
a string pointer and jump to the resulting value.

I'd rather call machine code "untyped".
("Strong typing" and "weak typing" don't have a universally accepted
definition, and I'm not sure that the terminology is helpful anyway.)
Anyway, type inference for statically typed languages doesn't make them
any more dynamically typed. It just moves the burden of assigning the
types from the programmer to the compiler. And (for HM type systems)
the compiler doesn't "guess" at a type -- it finds the unique most
general type from which all other legal types (within the type system)
can be found by instantiation.

Hmm... I think this distinction doesn't cover all cases.

Assume a language that
a) defines that a program is "type-correct" iff HM inference establishes
that there are no type errors
b) compiles a type-incorrect program anyway, with an established,
rigorous semantics for such programs (e.g. by throwing exceptions as
appropriate).
The compiler might actually refuse to compile type-incorrect programs,
depending on compiler flags and/or declarations in the code.

Typed ("strongly typed") it is, but is it statically typed or
dynamically typed?
("Softly typed" doesn't capture it well enough - if it's declarations in
the code, then those parts of the code are statically typed.)
You miss some of the other benefits of static typing,
though, such as a richer type system -- soft typing often lacks
features like polymorphism (it will find a set of monomorphic
instances rather than the most general type) and type classes.

That's not a property of soft typing per se, it's a consequence of
tacking on type inference on a dynamically-typed language that wasn't
designed for allowing strong type guarantees.

Regards,
Jo
 

Joachim Durchholz

Matthias said:
Perhaps better: A language is statically typed if its definition
includes (or even better: is based on) a static type system, i.e., a
static semantics with typing judgments derivable by typing rules.
Usually typing judgments associate program phrases ("expressions") with
types given a typing environment.

This is defining a single term ("statically typed") using three
undefined terms ("typing judgements", "typing rules", "typing environment").

Regards,
Jo
 

Joachim Durchholz

Chris said:
In that sense, a static type system is eliminating tags, because the
information is pre-computed and not explicitly stored as a part of the
computation. Now, you may not view the tag as being there, but in my
mind if there exists a way of performing the computation that requires
tags, the tag was there and that tag has been eliminated.

On a semantic level, the tag is always there - it's the type (and
definitely part of an axiomatic definition of the language).
Tag elimination is "just" an optimization.
To put it another way, I consider the tags to be axiomatic. Most
computations involve some decision logic that is driven by distinct
values that have previously been computed. The separation of the
values which drive the computation one way versus another is a tag.
That tag can potentially be eliminated by some apriori computation.

Um... just as precomputing constants, I'd say.
Are the constants that went into a precomputed constant eliminated?
On the implementation level, yes. On the semantic/axiomatic level, no.
Or, well, maybe - since that's just an optimization, the compiler may
have decided not to precompute the constant at all.

(Agreeing with the snipped parts.)

Regards,
Jo
 

Matthias Blume

Joachim Durchholz said:
This is defining a single term ("statically typed") using three
undefined terms ("typing judgements", "typing rules", "typing
environment").

This was not meant to be a rigorous definition. Also, I'm not going
to repeat the textbook definitions for those three standard terms
here. Next thing you are going to ask me to define the meaning of the
word "is"...
 

David Hopwood

Chris said:
I've since abandoned any attempt to be picky about use of the word "type".

I think you should stick to your guns on that point. When people talk about
"types" being associated with values in a "latently typed" or "dynamically typed"
language, they really mean *tag*, not type.

It is remarkable how much of the fuzzy thinking that often occurs in the
discussion of type systems can be dispelled by insistence on this point (although
much of the benefit can be obtained just by using this terminology in your own
mind and translating what other people are saying to it). It's a good example of
the weak Sapir-Whorf hypothesis, I think.
 

David Hopwood

Pascal said:
The words "untyped" or "type-free" only make sense in a purely
statically typed setting. In a dynamically typed setting, they are
meaningless, in the sense that there are _of course_ types that the
runtime system respects.

Types can be represented at runtime via type tags. You could insist on
using the term "dynamically tagged languages", but this wouldn't change
a lot. Exactly _because_ it doesn't make sense in a statically typed
setting, the term "dynamically typed language" is good enough to
communicate what we are talking about - i.e. not (static) typing.

Oh, but it *does* make sense to talk about dynamic tagging in a statically
typed language.

That's part of what makes the term "dynamically typed" harmful: it implies
a dichotomy between "dynamically typed" and "statically typed" languages,
when in fact dynamic tagging and static typing are (mostly) independent
features.
 

Ben Morrow

Quoth David Hopwood said:
Oh, but it *does* make sense to talk about dynamic tagging in a statically
typed language.

Though I'm *seriously* reluctant to encourage this thread...

A prime example of this is Perl, which has both static and dynamic
typing. Variables are statically typed scalar/array/hash, and then
scalars are dynamically typed string/int/unsigned/float/ref.
That's part of what makes the term "dynamically typed" harmful: it implies
a dichotomy between "dynamically typed" and "statically typed" languages,
when in fact dynamic tagging and static typing are (mostly) independent
features.

Nevertheless, I see no problem in calling both of these 'typing'. They
are both means to the same end: causing a bunch of bits to be
interpreted in a meaningful fashion. The only difference is whether the
distinction is made at compile time or run time. The above para had no
ambiguities...

Ben
 
