Python from Wise Guy's Viewpoint

Fergus Henderson

Pascal Bourguignon said:
What are you writing about? Figments of your imagination or real
concrete systems?

He is talking about real concrete systems, e.g. Haskell or Lisp.

Pascal Bourguignon said:
Where do you see "infinite precision by default" [in Lisp]?

We know that implementations of dynamically typed languages such as Lisp
represent small integers efficiently. So do good implementations of
statically typed languages in which arbitrary precision arithmetic is
the default. But in both cases, these implementations pay a price --
compared to statically typed languages using fixed precision arithmetic --
because of the possibility that adding two unknown word-sized values
may generate a result which no longer fits in a single word. The compiler
needs to generate extra code to cater for that possibility. Then in turn,
each subsequent operation needs to cater for the possibility that the input
will not fit in a word. The extra tests slow the code down, and the
extra code size reduces locality.

In a dynamically typed language, this price is normally not considered
to be much of an issue, because it has already been paid for up-front:
the cost is just the same cost that you would normally pay for *every*
data access in a dynamically typed language.
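
To make the cost concrete, here is a rough C sketch (hypothetical
tagging scheme and bignum_add fallback, for illustration only) of what
a compiler might emit for a single source-level addition when small
integers are tagged and arbitrary precision is the default:

    #include <stdint.h>

    /* Tagged value: low bit 0 means fixnum, stored as (n << 1), so the
     * machine-level add can overflow even though the source-level add
     * cannot. */
    typedef intptr_t value;

    value bignum_add(value a, value b);   /* assumed slow path */

    value add(value a, value b)
    {
        value result;
        /* Fast path: both operands are fixnums and the sum fits in a
         * word. */
        if (((a | b) & 1) == 0 && !__builtin_add_overflow(a, b, &result))
            return result;
        /* Slow path: fall back to heap-allocated arbitrary precision. */
        return bignum_add(a, b);
    }

Both the tag test and the overflow test are code that a statically
typed language with fixed-precision arithmetic would simply not emit.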
 
Pascal Costanza

Stephen said:
I've no idea, I don't care that much what the default is since I
prefer to specify what the type/size should be if the compiler fails
to infer the one I wanted :)

:)


Pascal
 
Jacques Garrigue

Matthias Blume said:
Do you mean that you don't have any dynamically typed skeletons in
your closet? My excuse is that I have been attracted by the static
side of the force all along, but for a long time I didn't understand
that this was the case... :)

To be honest, I was for a long time a fan of Prolog. If I am going to
use an untyped language, I prefer it (pseudo-)intelligent! And you can
also do plenty of fun stuff with meta-programming in Prolog.

Maybe the switch came when, as an undergrad, I was assigned a project
to write a lazy Prolog interpreter in ML. It was so easy that I
didn't see the point of using Prolog afterwards...
Not that I claim that good compilers for untyped languages are easy
to write. But at least type inference (even trivial inference) is a bit
harder than writing interpreters, which gives you that warm feeling
that you're doing some real work.
 
Lex Spoon

Pascal Costanza said:
...and it requires you to go to all the places where they are defined.

Yes, I know the answer: "But they should all be in one place." No,
they shouldn't need to be all in one place. For example, I might want
to place test code close to the definitions that it tests, or I might
want to organize tests according to some other criterion.

No, it's not hard to find them all then: I can use grep or my IDE to
find them. But that's still more work than just commenting them out.
If I seldom need to find all test cases, I can trade locality of all
test cases for some other possible advantages.


With a good IDE, "distance" should be the same as "semantic nearness",
if that term makes sense. In a good IDE there is already a way to
browse all the tests, or it is easy to extend the IDE to allow it. So
things are in the "same place" whenever they share a semantic
attribute that the tools can index on. No matter how you lay out
tests, there is sure to be a way for a decent IDE to show you all of
them.


-Lex
 
Lex Spoon

Fergus Henderson said:
So which, if any, implementations of dynamic languages actually perform such
optimizations?


I'm sure every implementation does this "optimization", because it is
simply less code. The only times you get a dynamic type error are:

1. You try to call a method, but the object has no such method.

2. You call a primitive function or method, and the primitive
balks. (This would include trying to write a string into an
array-of-byte.)


I suppose you could add this one, which also applies to statically
typed languages:

3. The code explicitly checks for type information.


If you are simply doing "x := y" then there is no checking required.
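
Here is a toy sketch in C of that claim (a hypothetical runtime, with
selector strings assumed interned so that pointer comparison suffices):
the only dynamic check is the method lookup itself, and error #1 is
just that lookup coming up empty.

    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct object object;
    typedef struct method { const char *name; void (*code)(object *); } method;
    typedef struct class_ { size_t nmethods; const method *methods; } class_;
    struct object { const class_ *cls; };

    void send(object *obj, const char *name)
    {
        const class_ *c = obj->cls;
        for (size_t i = 0; i < c->nmethods; i++) {
            if (c->methods[i].name == name) {   /* interned names */
                c->methods[i].code(obj);
                return;
            }
        }
        /* Error #1: the object has no such method. */
        fprintf(stderr, "message not understood: %s\n", name);
        exit(1);
    }

An assignment never goes through send at all: copying a reference is a
bare move, which is why "x := y" needs no check.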


Regarding your earlier question, though, the great trick in Self was
to remember the result of a check and thus avoid doing it again
whenever possible. If you do "y := x + 1", and you determine that "x"
is a floating point number, then you know that "y" will also be a
floating point number immediately afterwards.
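
In C terms the idea looks something like this (hypothetical tag scheme
and helper names): once the guard on "x" has run, everything dominated
by it can use the unboxed representation with no further checks.

    #include <stdint.h>
    #include <string.h>

    typedef intptr_t value;

    /* Hypothetical tagging: low bit 1 marks a boxed double on the heap. */
    static int is_float(value v) { return (v & 1) != 0; }

    static double untag_float(value v)
    {
        double d;
        memcpy(&d, (const void *)(v - 1), sizeof d);   /* unbox */
        return d;
    }

    double generic_path(value v);   /* assumed fallback for other types */

    double after_check(value x)
    {
        if (is_float(x)) {              /* one dynamic check on x       */
            double xf = untag_float(x);
            double yf = xf + 1.0;       /* y := x + 1: no check needed  */
            return yf * 2.0;            /* later uses of y: no checks   */
        }
        return generic_path(x);
    }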


This points to a general observation. Dealing with the #1 style
dynamic type errors is a subset of dealing with dynamic dispatch in
general. If you try to execute "x + 1" or "(foo data-structure)", you
will need to locate which "+" method or which branch of foo's case
statement to execute. A dynamic type error means that you decide
to use method "typeError" or branch "type error". Furthermore, any
optimizations that get rid of these dynamic lookups will also get
rid of type checks just by their nature. If "x + 1" always uses the
floating-point "+", then clearly it cannot ever use the "typeError"
version of "+".
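
In C terms (hypothetical tagged representation, for illustration):
selecting the right "+" and detecting the type error are one and the
same dispatch, so removing the dispatch removes the check.

    #include <stdio.h>
    #include <stdlib.h>

    typedef enum { TAG_FIXNUM, TAG_FLOAT, TAG_STRING } tag;
    typedef struct { tag t; double num; } value;

    /* Generic "x + 1": one dispatch selects among the numeric "+"
     * methods and the "typeError" branch. */
    value add_one(value x)
    {
        switch (x.t) {
        case TAG_FIXNUM:
        case TAG_FLOAT:
            x.num += 1.0;
            return x;
        default:                 /* the "typeError" method of "+" */
            fprintf(stderr, "type error: + applied to a non-number\n");
            exit(1);
        }
    }

    /* If the optimizer proves x is always a float here, it emits this
     * instead, and the type check vanishes along with the dispatch. */
    double add_one_float(double x) { return x + 1.0; }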


-Lex
 
Pascal Costanza

Lex said:
With a good IDE, "distance" should be the same as "semantic nearness",
if that term makes sense. In a good IDE there is already a way to
browse all the tests, or it is easy to extend the IDE to allow it. So
things are in the "same place" whenever they share a semantic
attribute that the tools can index on. No matter how you lay out
tests, there is sure to be a way for a decent IDE to show you all of
them.

...and if the IDE is already that smart, why should it still require me
to comment out code just to make some other part of the program run?

A programming language environment should make programming as
convenient as possible, not convenient in some areas and arbitrarily
less convenient in others.

(And if you think that static type checking makes programming more
convenient, then yes, why not? Add that as an additional option! But
make it possible to switch it on or off on demand!)


Pascal
 
Fergus Henderson

Lex Spoon said:
Fergus Henderson said:
[someone wrote:]
Why? If the collection happens to contain only elements of a single type
(or this type at most), you only need to check write accesses if they
violate this condition. As long as they don't, you don't need to check
read accesses.

So which, if any, implementations of dynamic languages actually perform such
optimizations?

I'm sure every implementation does this "optimization", because it is
simply less code.

You're wrong. I think you misunderstood what optimization I'm talking about.

Lex Spoon said:
The only times you get a dynamic type error are:

1. You try to call a method, but the object has no such method.

Calling a method is a read access. We were discussing optimizations that
ensure that you *don't* need to do a dynamic check for each read access.

Lex Spoon said:
If you are simply doing "x := y" then there is no checking required.

Yes, we covered that already. But that's not what is happening in
the scenario that I was describing. The scenario that I'm describing is

    Collection c;
    ...
    foreach x in c do
        use(x);

where use(x) might be a method call, a field access, or similar.
For example, perhaps the collection is a set of integers, and you
are computing their sum, so use(x) would be "sum += x".

I think that in these situations, dynamically typed language
implementations will do O(N) dynamic type checks, where N is the number
of elements in "c". In theory it is possible to optimize these away,
but I don't know of any such implementations that do, and I would be
surprised if there are any (except perhaps in limited cases, e.g. when
the collection is an array or is implemented using an array).
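
As a concrete sketch of what I mean (C, with a hypothetical one-bit
fixnum tag), compare the two loops:

    #include <stdint.h>
    #include <stdlib.h>

    typedef intptr_t value;             /* tagged: low bit 0 = fixnum */

    static void type_error(void) { abort(); }   /* stand-in handler */

    /* Dynamically typed version: one tag check per element, O(N)
     * checks in total, even when every one of them succeeds. */
    long sum_dynamic(const value *elems, long n)
    {
        long sum = 0;
        for (long i = 0; i < n; i++) {
            value x = elems[i];
            if ((x & 1) != 0)           /* per-element dynamic check */
                type_error();
            sum += x >> 1;              /* untag, then add           */
        }
        return sum;
    }

    /* Statically typed version: the element type is known at compile
     * time, so the loop body is just the addition. */
    long sum_static(const long *elems, long n)
    {
        long sum = 0;
        for (long i = 0; i < n; i++)
            sum += elems[i];
        return sum;
    }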

Lex Spoon said:
Regarding your earlier question, though, the great trick in Self was
to remember the result of a check and thus avoid doing it again
whenever possible. If you do "y := x + 1", and you determine that "x"
is a floating point number, then you know that "y" will also be a
floating point number immediately afterwards.

Sure. That helps an implementation avoid checking the type of the collection
"c" at every element access. But it doesn't help the implementation avoid
checking the type of the element "x" at each iteration of the loop.

Lex Spoon said:
This points to a general observation. Dealing with the #1 style
dynamic type errors is a subset of dealing with dynamic dispatch in
general. [....] any optimizations that get rid of these dynamic lookups,
will also get rid of type checks just by their nature.

That's true. But the difference between dynamically typed languages and
statically typed languages is that in dynamically typed languages, *every*
data access (other than just copying data around) involves a dynamic dispatch.
Sure, implementations can optimize a lot of them away. But generally you're
still left with lots that your implementation can't optimize away, but which
would not be present in a statically typed language, such as the O(N)
dynamic type checks in the example above.
 
Adam Warner

Hi Fergus Henderson, you wrote:
Yes, we covered that already. But that's not what is happening in
the scenario that I was describing. The scenario that I'm describing is

    Collection c;
    ...
    foreach x in c do
        use(x);

where use(x) might be a method call, a field access, or similar.
For example, perhaps the collection is a set of integers, and you
are computing their sum, so use(x) would be "sum += x".

I think that in these situations, dynamically typed language
implementations will do O(N) dynamic type checks, where N is the number
of elements in "c". In theory it is possible to optimize these away,
but I don't know of any such implementations that do, and I would be
surprised if there are any (except perhaps in limited cases, e.g. when
the collection is an array or is implemented using an array).

I've implemented a collection of integers as a list:

* (disassemble
    (compile nil
      (lambda ()
        (declare (optimize (safety 0)))
        (let ((c '(1 2 3 4 5 6 7 8 9 10)))
          (loop for x of-type (integer 1 10) in c
                sum x of-type fixnum)))))

; Compiling lambda nil:
; Compiling Top-Level Form:

482C1160: .entry "lambda nil"() ; (function nil fixnum)
78: pop dword ptr [ebp-8]
7B: lea esp, [ebp-32]
7E: xor eax, eax ; No-arg-parsing entry point
80: mov ecx, [#x482C1158] ; '(1 2 3 4 ...)
86: xor edx, edx
88: jmp L1
8A: L0: mov eax, [ecx-3]
8D: mov ecx, [ecx+1]
90: add edx, eax
92: L1: cmp ecx, #x2800000B ; nil
98: jne L0
9A: mov ecx, [ebp-8]
9D: mov eax, [ebp-4]
A0: add ecx, 2
A3: mov esp, ebp
A5: mov ebp, eax
A7: jmp ecx

No type checks, and 32-bit arithmetic in the assembly (though the type
bits are still present: most-positive-fixnum is 536,870,911).

Fergus Henderson said:
Sure. That helps an implementation avoid checking the type of the collection
"c" at every element access. But it doesn't help the implementation avoid
checking the type of the element "x" at each iteration of the loop.

Lisp provides standard ways to supply declarations. Some implementations
will trust those declarations in order to optimise the code at compile
time.

Lex Spoon said:
This points to a general observation. Dealing with the #1 style
dynamic type errors is a subset of dealing with dynamic dispatch in
general. [....] any optimizations that get rid of these dynamic lookups,
will also get rid of type checks just by their nature.

Fergus Henderson said:
That's true. But the difference between dynamically typed languages and
statically typed languages is that in dynamically typed languages, *every*
data access (other than just copying data around) involves a dynamic dispatch.
Sure, implementations can optimize a lot of them away. But generally you're
still left with lots that your implementation can't optimize away, but which
would not be present in a statically typed language, such as the O(N)
dynamic type checks in the example above.

Again, Lisp provides standard ways to supply declarations. Some
implementations will trust those declarations in order to optimise the
code at compile time, and will use the declarations for type inference.

Followup-To is set to comp.lang.lisp.

Regards,
Adam
 
prunesquallor

Fergus Henderson said:
But the difference between dynamically typed languages and
statically typed languages is that in dynamically typed languages, *every*
data access (other than just copying data around) involves a dynamic dispatch.
Sure, implementations can optimize a lot of them away. But generally you're
still left lots that your implementation can't optimize away, but which
would not be present in a statically typed language, such as the O(N)
dynamic type checks in the example above.

That's what the type-checking hardware is for.
 
Ed Avis

Pascal Costanza said:
No, it doesn't.

Well, it's rather a meaningless question since all statically-typed
languages define their semantics only for well-typed expressions, and
so something like

f :: Int -> Int
x :: String
x = "hello"

f x

has no semantics, and so cannot be said to 'fail' or 'not fail'. But
it seems pretty clear that if you did bypass the type checking and
compile such code, it would go wrong as soon as it was run.

Pascal Costanza said:
Yes, that's one example. This doesn't mean that this implication
always holds. What part of "doesn't imply" is the one you don't
understand?

I just gave one example - 'Int' and 'String' in the above are just
examples of one possible type mismatch. But the same holds for any
other type mismatch. You cannot call a function defined for type X
and pass a value of type Y, where Y is not an instance of X.

Pascal Costanza said:
I don't care for unreachable code in this specific context. A part of
the program that passes a value of type "don't know" to a variable of
type "don't know" might be unacceptable to a static type system,

Actually, an expressive static type system will allow this:

f :: a -> a

f takes a parameter 'don't know' and returns a result of the same
type. Or you can have the even more general a -> b, any type to any
type. Such a function isn't especially useful however, since if you
know nothing at all about what the type supports (e.g., not even
equality might be defined) then you can't promise much about the
return value.

Pascal Costanza said:
but might still not throw any exceptions at all at runtime.

Exactly.

Some statically-typed languages do support this, for example Haskell's
Dynamic library, but you have to ask for it explicitly. For me, the
common case of a type error is when I've simply made a mistake, and I
would like as much help as possible from the computer to catch the
mistake as early as possible. But one might want to suppress the
checking occasionally.

(However, with a more expressive type system you don't so often feel
the need to suppress it altogether.)
 
Ed Avis

prunesquallor said:
What would you like to call those `syntactic constructs' that do not
currently type check in Haskell, yet *may* belong to a class of
syntactic constructs that *could* conceivably be type checked in a
successor to Haskell that has a better type inferencing engine?

Well of course, if you program in a different language you'd need a
different type checking system.
 
prunesquallor

Ed Avis said:
Well of course, if you program in a different language you'd need a
different type checking system.

Obviously.

But let us suppose that someone improved the type system of Haskell
such that some useful complicated constructs that did not pass the
type checker were now able to be verified as correct. Didn't this
happen when Haskell was extended with second-order polymorphic types?
(when the restriction on forall was lifted?)

You could say that lifting this restriction created a `new' language
and refuse to admit the notion that two are related, or you might take
the viewpoint that some programs that were invalid before are now
valid. The former point becomes rather strained if you have some sort
of switch on the implementation.

For instance, OCaml allows recursive function types if you specify
that you want them (the -rectypes flag). Most people seem to view this as

`OCaml with recursive function types'

instead of

`completely new language unrelated to OCaml, but sharing the exact
same syntax in all places except for declaration of recursive function
types.'

And most people seem to think that my `black hole' `syntactic
construct', which does not type check under OCaml without the flag,
but *does* under OCaml with the flag, can be unambiguously called a
`program'.
 
Ed Avis

prunesquallor said:
But let us suppose that someone improved the type system of Haskell
such that some useful complicated constructs that did not pass the
type checker were now able to be verified as correct.

Wouldn't you need to define the semantics for these constructs too?
And perhaps extend the compiler to generate code for them?

My original point was that the type-checker won't reject programs
which are valid Haskell, so it makes no sense to talk about the
checker being too strict or not allowing enough flexibility. A
type-checker for some other language such as Lisp would obviously have
to avoid flagging errors for any legal Lisp program. (That would probably
mean not checking anything at all, with the programmer having to
explicitly state 'yes I don't want to wait until runtime to catch this
error'.)
 
Joachim Durchholz

prunesquallor said:
And most people seem to think that my `black hole' `syntactic
construct', which does not type check under OCaml without the flag,
but *does* under OCaml with the flag, can be unambiguously called a
`program'.

I'm still hoping for an explanation of the practical relevance of that
black-hole device. I don't insist that the black-hole function as given
does anything useful; but if it was just a condensed example of a more
general usage pattern, I'd like to know what that pattern is. That
would help me find out where in the expressivity spectrum the various
languages lie.

Regards,
Jo
 
prunesquallor

Ed Avis said:
Wouldn't you need to define the semantics for these constructs too?
And perhaps extend the compiler to generate code for them?

My original point was that the type-checker won't reject programs
which are valid Haskell, so it makes no sense to talk about the
checker being too strict or not allowing enough flexibility.

So any program that currently runs on Haskell will run on the very
first version of Haskell? No? But Haskell won't reject programs that
are valid Haskell, so is the later version wrong?
 
Fergus Henderson

prunesquallor said:
That's what the type-checking hardware is for.

Did you forget a smiley?

In case not: type-checking hardware has been tried already, and failed.

(Anyway, type-checking hardware would only solve part of the problem, I think.
Dynamic type checking imposes two costs: one is the time cost of performing
the checks, and the other is the locality cost due to the code size increase.
Type-checking hardware avoids the code size increases, but I don't think it
helps with the time cost of performing the checks.)
 
prunesquallor

Fergus Henderson said:
Did you forget a smiley?

No, I never use smileys.

Fergus Henderson said:
In case not: type-checking hardware has been tried already, and failed.

News to me. I've used type checking hardware and it works like a charm.

Fergus Henderson said:
(Anyway, type-checking hardware would only solve part of the problem, I think.
Dynamic type checking imposes two costs: one is the time cost of performing
the checks, and the other is the locality cost due to the code size increase.
Type-checking hardware avoids the code size increases, but I don't think it
helps with the time cost of performing the checks.)

Actually it works quite well for performing the checks. In general,
type checking is much quicker than computation, and in general it can
be performed in parallel with computation (you simply discard the
bogus result if it fails). You don't need very much hardware, either.
 
Ed Avis

prunesquallor said:
So any program that currently runs on Haskell will run on the very
first version of Haskell?

I should have said 'Haskell 98' and 'a type-checker in a working
Haskell 98 implementation'.

The same applies to any language, of course - Haskell 98 is just an
example. What I mean is don't shoot the messenger if the type checker
tells you your program is invalid. You might however want to change
to a different language (such as a later version of Haskell, or one
with nonstandard extensions) where your program is valid (and of
course the typechecker for that implementation will be happy).
 
Greg Menke

prunesquallor said:
Actually it works quite well for performing the checks. In general,
type checking is much quicker than computation, and in general it can
be performed in parallel with computation (you simply discard the
bogus result if it fails). You don't need very much hardware, either.

As is also amply demonstrated by ECC hardware that operates right
alongside the memory; it's considerably easier than doing it in software.

Gregm
 
