What is Expressiveness in a Computer Language


Marshall

Joachim said:
That's how it's done in Eiffel, yes.


Sure. Though, while DbC gives rules for inheritance (actually subtypes),
these are irrelevant to the current discussion; DbC-minus-subtyping can
still be usefully applied.

Yes, subtyping. Of course I meant to say subtyping.<blush>

I can certainly see how DbC would be useful without subtyping.
But would there still be a reason to separate preconditions
from postconditions? I've never been clear on the point
of differentiating them (beyond the fact that one's covariant
and the other is contravariant.)

Exactly, and that's why you'd either use a restricted assertion
language (and essentially get something that's somewhere between a type
system and traditional assertions); or you'd use some inference system
and try to help it along (not a simple thing either - the components of
such a system exist, but I'm not aware of any system that was designed
for the average programmer).

As to the average programmer, I heard this recently on
comp.databases.theory:

"Don't blame me for the fact that competent programming, as I view it
as an intellectual possibility, will be too difficult for "the
average programmer" -you must not fall into the trap of rejecting
a surgical technique because it is beyond the capabilities of the
barber in his shop around the corner." -- EWD512

I'd agree with that, but I'm not sure how well that would hold up in
practice.

I want to try it and see what it's like.


Marshall
 

Andreas Rossberg

David said:
George said:
All of this presupposes that you have a high level of confidence in
the compiler. I've been in software development for going on 20 years
now and worked 10 years on high performance, high availability
systems. In all that time I have yet to meet a compiler ... or
significant program of any kind ... that is without bugs, noticeable
or not.

[...]

One of the main reasons for this, IMHO, is that many compilers place too
much emphasis on low-level optimizations of dubious merit. For C and
Java, I've taken to compiling all non-performance-critical code without
optimizations, and that has significantly reduced the number of compiler
bugs that I see. It has very little effect on overall execution performance
(and compile times are quicker).

I don't think that over-complex type systems are the cause of more than a
small part of the compiler bug problem. In my estimation, the frequency of
bugs in different compilers *for the same language* can vary by an order of
magnitude.

Also, the use of typed intermediate languages within the compiler might
actually help drastically cut down on the more severe problem of
code transformation bugs, notwithstanding the relative complexity of
suitable internal type systems.

- Andreas
 

Joachim Durchholz

Marshall said:
Yes, subtyping. Of course I meant to say subtyping.<blush>

No need to blush for that, it's perfectly OK in the Eiffel context,
where a subclass is always assumed to be a subtype. (This assumption
isn't always true, which is why Eiffel has some serious holes in the
type system. The DbC ideas are still useful though, saying "subtype"
instead of "subclass" just makes the concept applicable outside of OO
languages.)
I can certainly see how DbC would be useful without subtyping.
But would there still be a reason to separate preconditions
from postconditions? I've never been clear on the point
of differentiating them (beyond the fact that one's covariant
and the other is contravariant.)

There is indeed.
The rules about covariance and contravariance are just consequences of
the notion of having a subtype (albeit important ones when it comes to
designing concrete interfaces).

For example, if a precondition fails, something is wrong about the
things that the subroutine assumes about its environment, so it
shouldn't have been called. This means the error is in the caller, not
in the subroutine that carries the precondition.

The less important consequence is that this should be reflected in the
error messages.

The more important consequences apply when integrating software.
If you have a well-tested library, it makes sense to switch off
postcondition checking for the library routines, but not their
precondition checking.
This applies not just for run-time checking: Ideally, with compile-time
inference, all postconditions can be inferred from the function's
preconditions and their code. The results of that inference can easily
be stored in the precompiled libraries.
Preconditions, on the other hand, can only be verified by checking all
places where the functions are called.
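Joachim's split can be sketched with a hypothetical Python contract decorator (the `contract` helper, `my_sqrt`, and the two flags are inventions for illustration, not any real library): switching off postcondition checks trusts the library's own tests, while precondition checks stay on to catch bugs in callers.

```python
CHECK_PRECONDITIONS = True    # callers are not covered by the library's tests
CHECK_POSTCONDITIONS = False  # the library is well-tested; trust its results

def contract(pre, post):
    """Attach a precondition and a postcondition to a function."""
    def wrap(f):
        def checked(*args):
            if CHECK_PRECONDITIONS:
                assert pre(*args), "precondition violated: the *caller* is wrong"
            result = f(*args)
            if CHECK_POSTCONDITIONS:
                assert post(result, *args), "postcondition violated in " + f.__name__
            return result
        return checked
    return wrap

@contract(pre=lambda x: x >= 0,
          post=lambda r, x: abs(r * r - x) < 1e-9 * max(x, 1.0))
def my_sqrt(x):
    return x ** 0.5
```

Calling `my_sqrt(-1.0)` still trips the precondition check (and correctly blames the caller) even though postcondition checking is off.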
As to the average programmer, I heard this recently on
comp.databases.theory:

"Don't blame me for the fact that competent programming, as I view it
as an intellectual possibility, will be too difficult for "the
average programmer" -you must not fall into the trap of rejecting
a surgical technique because it is beyond the capabilities of the
barber in his shop around the corner." -- EWD512

Given the fact that we have far more need for competently-written
programs than for competent surgery, I don't think that we'll be able to
follow that idea.
I want to try it and see what it's like.

So do I :)

Regards,
Jo
 

Joachim Durchholz

Darren said:
As far as I understand it, Eiffel compilers don't even make use of
postconditions to optimize code or eliminate run-time checks (like null
pointer testing).

That's correct.

I think a large part of the reasons why this isn't done is that Eiffel's
semantics is (a) too complicated (it's an imperative language after
all), and (b) not formalized, which makes it difficult to assess what
optimizations are safe and what aren't.

(Reason (c) is that Eiffel compiler vendors don't have the manpower for
this kind of work, mostly in quantity but also, to some degree, in
quality: people with a solid language semantics background tend to be
repelled by the language inventor's know-it-all deny-the-problems
don't-bother-me-with-formalisms attitude. Still, he has moved the state
of the art ahead - mostly by pointing out a certain class of problems in
OO designs and explaining them lucidly, and proposing solutions that
hold up better than average even if still fundamentally flawed.)

Regards,
Jo
 

Marshall

Joachim said:
That's correct.

I think a large part of the reasons why this isn't done is that Eiffel's
semantics is (a) too complicated (it's an imperative language after
all), and (b) not formalized, which makes it difficult to assess what
optimizations are safe and what aren't.

I can see the lack of a formal model being an issue, but is the
imperative bit really all that much of an obstacle? How hard
is it really to deal with assignment? Or does the issue have
more to do with pointers, aliasing, etc.?


Marshall
 

Marshall

Joachim said:
There is indeed.
The rules about covariance and contravariance are just consequences of
the notion of having a subtype (albeit important ones when it comes to
designing concrete interfaces).

For example, if a precondition fails, something is wrong about the
things that the subroutine assumes about its environment, so it
shouldn't have been called. This means the error is in the caller, not
in the subroutine that carries the precondition.

The less important consequence is that this should be reflected in the
error messages.

The more important consequences apply when integrating software.
If you have a well-tested library, it makes sense to switch off
postcondition checking for the library routines, but not their
precondition checking.
This applies not just for run-time checking: Ideally, with compile-time
inference, all postconditions can be inferred from the function's
preconditions and their code. The results of that inference can easily
be stored in the precompiled libraries.
Preconditions, on the other hand, can only be verified by checking all
places where the functions are called.

Interesting!

So, what exactly separates a precondition from a postcondition
from an invariant? I have always imagined that one writes
assertions on parameters and return values, and those
assertions that only reference parameters were preconditions
and those which also referenced return values were postconditions.
Wouldn't that be easy enough to determine automatically?

And what's an invariant anyway?


Marshall
 

Darren New

Marshall said:
So, what exactly separates a precondition from a postcondition
from an invariant?

A precondition applies to a routine/method/function and states the
conditions under which a function might be called. For example, a
precondition on "stack.pop" might be "not stack.empty", and
"socket.read" might have a precondition of "socket.is_open", and
"socket.open_a_new_connection" might have a precondition of
"socket.is_closed".

A postcondition applies to a routine/method/function and states (at
least part of) what that routine guarantees to be true when it returns,
assuming it is called with true preconditions. So "not stack.empty"
would be a postcondition of "stack.push". If your postconditions are
complete, they tell you what the routine does.

An invariant is something that applies to an entire object instance, any
time after the constructor has completed and no routine within that
instance is running (i.e., you're not in the middle of a call). There
should probably be something in there about destructors, too. So an
invariant might be "stack.is_empty == (stack.size == 0)" or perhaps
"socket.is_open implies (socket.buffer != null)" or some such.
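As a concrete (and purely illustrative) sketch of these three kinds of assertions, here is the stack example written with plain asserts in Python -- the `Stack` class is hypothetical, not any particular library:

```python
class Stack:
    def __init__(self):
        self._items = []
        self._check_invariant()

    def _check_invariant(self):
        # Invariant: must hold between calls, e.g. "is_empty == (size == 0)".
        assert self.is_empty() == (self.size() == 0)

    def is_empty(self):
        return len(self._items) == 0

    def size(self):
        return len(self._items)

    def push(self, x):
        self._items.append(x)
        # Postcondition of push: the stack is no longer empty.
        assert not self.is_empty()
        self._check_invariant()

    def pop(self):
        # Precondition of pop: "not stack.empty" -- blame the caller if it fails.
        assert not self.is_empty(), "precondition violated: pop on empty stack"
        x = self._items.pop()
        self._check_invariant()
        return x
```

Popping an empty stack fails the precondition assert at entry, before the routine's body ever runs, which is exactly the "error is in the caller" reading.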

The first problem with many of these sorts of things is that in
practice, there are lots of postconditions that are difficult to express
in any machine-readable way. The postcondition of "socket.read" is that
the buffer passed as the first argument has valid data in the first "n"
bytes, where "n" is the return value from "socket.read", unless
"socket.read" returned -1. What does "valid" mean there? It has to
match the bytes sent by some other process, possibly on another machine,
disregarding the bytes already read.

You can see how this can be hard to formalize in any particularly useful
way, unless you wind up modelling the whole universe, which is what most
of the really formalized network description techniques do.

Plus, of course, without having preconditions and postconditions for OS
calls, it's difficult to do much formally with that sort of thing.

You can imagine all sorts of I/O that would be difficult to describe,
even for things that are all in memory: what would be the postcondition
for "screen.draw_polygon"? Any set of postconditions on that routine
that don't mention that a polygon is now visible on the screen would be
difficult to take seriously.

There are also problems with the complexity of things. Imagine a
chess-playing game trying to describe the "generate moves" routine.
Precondition: An input board with a valid configuration of chess pieces.
Postcondition: An array of boards with possible next moves for the
selected team. Heck, if you could write those as assertions, you
wouldn't need the code.
 

Joachim Durchholz

Marshall said:
I can see the lack of a formal model being an issue, but is the
imperative bit really all that much of an obstacle? How hard
is it really to deal with assignment? Or does the issue have
more to do with pointers, aliasing, etc.?

Actually aliasing is *the* hard issue.
Just one data point: C compiler designers spend substantial effort on
determining which data structures cannot be aliased, so that they can
apply optimizations.

This can bite even with runtime assertion checking.
For example, ordinarily one would think that if there's only a fixed set
of routines that may modify some data structure A, checking the
invariants of that structure need only be done at the exit of these
routines.
Now assume that A has a reference to structure B, and that the
invariants involve B in some way. (E.g. A.count = A->B.count.)
There's still no problem with that - until you have a routine that
exports a reference to B, which gets accessed from elsewhere, say via C.
Now if I do
C->B.count = 27
I will most likely break A's invariant. If that assignment is in some
code that's far away from the code for A, debugging can become
exceedingly hard: the inconsistency (i.e. A violating its invariant) can
live for the entire runtime of the program.

So the innocent optimization "don't check all the program's invariants
after every assignment" immediately breaks in the presence of aliasing
(unless you do some aliasing analysis).
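The scenario can be reproduced almost literally in Python (the classes `A` and `B` and the alias `c` follow the sketch above and are purely illustrative):

```python
class B:
    def __init__(self):
        self.count = 0

class A:
    def __init__(self):
        self.b = B()
        self.count = 0

    def invariant_holds(self):
        # Invariant: A.count mirrors A->B.count.
        return self.count == self.b.count

    def increment(self):
        self.b.count += 1
        self.count += 1
        assert self.invariant_holds()  # checked at the exit of A's own routines

a = A()
a.increment()   # fine: the invariant is checked at routine exit and holds

c = a.b         # a reference to B escapes...
c.count = 27    # ...and is mutated far away from A's code

# No routine of A ran, so nothing checked the invariant; the breakage
# stays latent until A is used again.
print(a.invariant_holds())   # False
```

The assignment through `c` never triggers any of A's checks, which is exactly why "check invariants only at the exit of A's routines" is unsound under aliasing.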

In an OO language, it's even more difficult to establish that some data
structure cannot be aliased: even if the code for A doesn't hand out a
reference to B, there's no guarantee that some subclass of A doesn't.
I.e. the aliasing analysis has to be repeated whenever there's a new
subclass, which may happen at link time, when you ordinarily don't want
to do any serious semantic analysis anymore.

There's another way to get broken invariants reported at the time the
breakage occurs: lock any data field that goes into an invariant (or a
postcondition). The rationale behind this is that
from the perspective of assertions, an alias walks, smells and quacks
just like concurrent access, so locking would be the appropriate answer.
The problem here is that this means that updates can induce *very*
expensive machinery - imagine an invariant that says "A->balance is the
sum of all the transaction->amount fields in the transaction list of A",
it would cause all these ->amount fields to be locked as soon as a
transaction list is hooked up with the account. OTOH one could argue that
such a locking simply makes cost in terms of debugging time visible as
runtime overhead, which isn't entirely a Bad Thing.


There are all other kinds of things that break in the presence of
aliasing; this is just the one that I happen to know best :)

Without mutation, such breakage cannot occur. Invariants are just the
common postconditions of all the routines that may construct a structure
of a given type.

Regards,
Jo
 

Joachim Durchholz

Marshall said:
So, what exactly separates a precondition from a postcondition
from an invariant? I have always imagined that one writes
assertions on parameters and return values, and those
assertions that only reference parameters were preconditions
and those which also referenced return values were postconditions.
Wouldn't that be easy enough to determine automatically?

Sure.
Actually this can be done in an even simpler fashion; e.g. in Eiffel, it
is (forgive me if I got the details wrong, it's been several years since
I last coded in it):

foo (params): result_type is
    require
        -- list of assertions; these become preconditions
    do
        -- subroutine body
    ensure
        -- list of assertions; these become postconditions
    end
And what's an invariant anyway?

In general, it's a postcondition to all routines that create or modify a
data structure of a given type.

Eiffel does an additional check at routine entry, but that's just a
last-ditch line of defense against invariant destruction via aliases,
not a conceptual thing. Keep aliases under control via modes (i.e.
"unaliasable" marks on local data to prevent aliases from leaving the
scope of the data structure), or via locking, or simply by disallowing
destructive updates, and you don't need the checks at routine entry anymore.

Regards,
Jo
 

Marshall

Joachim said:
Actually aliasing is *the* hard issue.

Okay, sure. Nice explanation.

But one minor point: you describe this as an issue with "imperative"
languages. But aliasing is a problem associated with pointers,
not with assignment. One can have assignment, or other forms
of destructive update, without pointers; they are not part of the
definition of "imperative." (Likewise, one can have pointers without
assignment, although I'm less clear if the aliasing issue is as
severe.)


Marshall
 

Joachim Durchholz

Marshall said:
Okay, sure. Nice explanation.

But one minor point: you describe this as an issue with "imperative"
languages. But aliasing is a problem associated with pointers,
not with assignment.

Aliasing is not a problem if the aliased data is immutable.
One can have assignment, or other forms
of destructive update, without pointers; they are not part of the
definition of "imperative."

Sure.
You can have either destructive updates or pointers without
incurring aliasing problems. As soon as they are combined, there's trouble.

Functional programming languages often drop assignment entirely. (This
is less inefficient than one would think. If everything is immutable,
you can freely share data structures and avoid some copying, and you can
share across abstraction barriers. In programs with mutable values,
programmers are forced to choose the lesser evil of either copying
entire data structures or doing a cross-abstraction analysis of who
updates what elements of what data structure. A concrete example: the
first thing that Windows does when accepting userland data structures
is... to copy them; this would be unnecessary if the structures were immutable.)

Some functional languages restrict assignment so that there can exist at
most a single reference to any mutable data structure. That way, there's
still no aliasing problems, but you can still update in place where it's
really, really necessary.
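Real uniqueness typing is enforced *statically* (as in Clean, or via Mercury's modes); purely as an illustration of the idea, here is a crude runtime simulation in Python, with a hypothetical `Unique` wrapper that allows at most one live handle to a mutable value:

```python
class Unique:
    def __init__(self, value):
        self._value = value
        self._alive = True

    def take(self):
        # Consume this handle: the old reference becomes unusable, so no
        # two live handles can ever alias the same mutable value.
        assert self._alive, "value was already moved out"
        self._alive = False
        return Unique(self._value)

    def modify(self, f):
        assert self._alive, "value was already moved out"
        f(self._value)  # in-place update is safe: no other live alias exists

h1 = Unique([1, 2, 3])
h2 = h1.take()                          # "ownership" moves to h2
h2.modify(lambda xs: xs.append(4))      # in-place update through the live handle

moved = False
try:
    h1.modify(lambda xs: xs.append(99))  # stale handle: rejected
except AssertionError:
    moved = True
```

A real uniqueness-typed language rejects the stale use at compile time rather than at run time; the point here is only that single-reference discipline makes in-place update safe.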

I know of no professional language that doesn't have references of some
kind.

Regards,
Jo
 

Joachim Durchholz

Darren said:
There are also problems with the complexity of things. Imagine a
chess-playing game trying to describe the "generate moves" routine.
Precondition: An input board with a valid configuration of chess pieces.
Postcondition: An array of boards with possible next moves for the
selected team. Heck, if you could write those as assertions, you
wouldn't need the code.

Actually, in a functional programming language (FPL), you write just the
postconditions and let the compiler generate the code for you.

At least that's what happens for those FPL functions that you write down
without much thinking. You can still tweak the function to make it more
efficient. Or you can define an interface using preconditions and
postconditions, and write a function that fulfills these assertions
(i.e. requires no more preconditions than the interface specifies, and
fulfills at least the postcondition that the interface specifies); here
we'd have a postcondition that's separate from the code, too.
I.e. in such cases, the postconditions separate the accidental and
essential properties of a function, so they still have a role to play.

Regards,
Jo
 

Darren New

Joachim said:
Actually, in a functional programming language (FPL), you write just the
postconditions and let the compiler generate the code for you.

Certainly. And my point is that the postcondition describing "all valid
chess boards reachable from this one" is pretty much going to be as big
as an implementation for generating it, yes? The postcondition will
still have to contain all the rules of chess in it, for example. At best
you've replaced loops with some sort of universal quantifier with a
"such that" phrase.

Anyway, I expect you could prove you can't do this in the general case.
Otherwise, you could just write a postcondition that asserts the output
of your function is machine code that when run generates the same
outputs as the input string would. I.e., you'd have a compiler that can
write other compilers, generated automatically from a description of the
semantics of the input stream and the semantics of the machine the code
is to run on. I'm pretty sure we're not there yet, and I'm pretty sure
you start running into the limits of computability if you do that.
 

David Hopwood

Marshall said:
Joachim Durchholz wrote: [...]
Preconditions/postconditions can express anything you want, and they are
an absolutely natural extension of what's commonly called a type
(actually the more powerful type systems have quite a broad overlap with
assertions).
I'd essentially want to have an assertion language, with primitives for
type expressions.

Now, I'm not fully up to speed on DBC. The contract specifications,
these are specified statically, but checked dynamically, is that
right? In other words, we can consider contracts in light of
inheritance, but the actual verification and checking happens
at runtime, yes?

For DBC as implemented in Eiffel, yes.
Wouldn't it be possible to do them at compile time? (Although
this raises decidability issues.)

It is certainly possible to prove statically that some assertions cannot fail.

The ESC/Java 2 (http://secure.ucd.ie/products/opensource/ESCJava2/docs.html)
tool for JML (http://www.cs.iastate.edu/~leavens/JML/) is one system that does
this -- although some limitations and usability problems are described in
<http://secure.ucd.ie/products/opensource/ESCJava2/ESCTools/papers/CASSIS2004.pdf>.
Mightn't it also be possible to
leave it up to the programmer whether a given contract
was compile-time or runtime?

That would be possible, but IMHO a better option would be for an IDE to give
an indication (by highlighting, for example), which contracts are dynamically
checked and which are static.

This property is, after all, not something that the program should depend on.
It is determined by how good the static checker currently is, and we want to be
able to improve checkers (and perhaps even allow them to regress slightly in
order to simplify them) without changing programs.
 

Marshall

Joachim said:
Aliasing is not a problem if the aliased data is immutable.

Okay, sure. But for the problem you describe, both imperativeness
and the presence of pointers is each necessary but not sufficient;
it is the two together that causes the problem. So it strikes
me (again, a very minor point) as inaccurate to describe this as
a problem with imperative languages per se.

Sure.
You can have either destructive updates or pointers without
incurring aliasing problems. As soon as they are combined, there's trouble.

Right. To me the response to this is clear: give up pointers. Imperative
operations are too useful to give up; indeed they are a requirement
for certain problems. Pointers on the other hand add nothing except
efficiency and a lot of confusion. They should be considered an
implementation technique only, hidden behind some pointerless
computational model.

I recognize that this is likely to be a controversial opinion.


Functional programming languages often drop assignment entirely. (This
is less inefficient than one would think. If everything is immutable,
you can freely share data structures and avoid some copying, and you can
share across abstraction barriers. In programs with mutable values,
programmers are forced to choose the lesser evil of either copying
entire data structures or doing a cross-abstraction analysis of who
updates what elements of what data structure. A concrete example: the
first thing that Windows does when accepting userland data structures
is... to copy them; this would be unnecessary if the structures were immutable.)

I heartily support immutability as the default, for these and other
reasons.

Some functional languages restrict assignment so that there can exist at
most a single reference to any mutable data structure. That way, there's
still no aliasing problems, but you can still update in place where it's
really, really necessary.

Are we speaking of uniqueness types now? I haven't read much about
them, but it certainly seems like an intriguing concept.

I know of no professional language that doesn't have references of some
kind.

Ah, well. I suppose I could mention prolog or mercury, but then
you used that troublesome word "professional." So I will instead
mention a language which, if one goes by number of search results
on hotjobs.com for "xxx programmer" for different values of xxx, is
more popular than Java and twice as popular as C++. It lacks
pointers (although I understand they are beginning to creep in
in the latest version of the standard.) It also possesses a quite
sophisticated set of facilities for declarative integrity constraints.
Yet for some reason it is typically ignored by language designers.

http://hotjobs.yahoo.com/jobseeker/jobsearch/search_results.html?keywords_all=sql+programmer


Marshall
 

Marshall

David said:
It is certainly possible to prove statically that some assertions cannot fail.

The ESC/Java 2 (http://secure.ucd.ie/products/opensource/ESCJava2/docs.html)
tool for JML (http://www.cs.iastate.edu/~leavens/JML/) is one system that does
this -- although some limitations and usability problems are described in
<http://secure.ucd.ie/products/opensource/ESCJava2/ESCTools/papers/CASSIS2004.pdf>.

I look forward to reading this. I read a paper on JML a while ago and
found it quite interesting.

That would be possible, but IMHO a better option would be for an IDE to give
an indication (by highlighting, for example), which contracts are dynamically
checked and which are static.

This property is, after all, not something that the program should depend on.
It is determined by how good the static checker currently is, and we want to be
able to improve checkers (and perhaps even allow them to regress slightly in
order to simplify them) without changing programs.

Hmmm. I have heard that argument before and I'm conflicted.

I can think of more reasons than just runtime safety for which I'd
want proofs. Termination for example, in highly critical code;
not something for which a runtime check will suffice. On the
other hand the points you raise are good ones, and affect
the usability of the language.


Marshall
 

Chris Smith

Marshall said:
Hmmm. I have heard that argument before and I'm conflicted.

I can think of more reasons than just runtime safety for which I'd
want proofs. Termination for example, in highly critical code;
not something for which a runtime check will suffice. On the
other hand the points you raise are good ones, and affect
the usability of the language.

There doesn't seem to be a point of disagreement here. Programmers
often need to require certain properties to be checked at compile-time.
Others could go either way. There is no property that a programmer would
rationally desire to *require* be checked at runtime; that would only
occur because the compiler doesn't know how to check it at compile time.
 

Andreas Rossberg

Marshall said:
Okay, sure. But for the problem you describe, both imperativeness
and the presence of pointers is each necessary but not sufficient;
it is the two together that causes the problem. So it strikes
me (again, a very minor point) as inaccurate to describe this as
a problem with imperative languages per se.

[...]

Right. To me the response to this is clear: give up pointers. Imperative
operations are too useful to give up; indeed they are a requirement
for certain problems. Pointers on the other hand add nothing except
efficiency and a lot of confusion. They should be considered an
implementation technique only, hidden behind some pointerless
computational model.

Don't get yourself distracted by the low-level notion of "pointer". The
problem *really* is mutability and the associated notion of identity,
which explicit pointers just exhibit on a very low level.

When you have a language with mutable types (e.g. mutable arrays) then
objects of these types have identity, which is observable through
assignment. This is regardless of whether identity is an explicit
concept (like it becomes with pointers and comparison of pointer values,
i.e. addresses).
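A small Python illustration of this point -- no explicit pointers anywhere, yet mutation makes identity observable:

```python
xs = [1, 2]
ys = xs        # just a second name -- or a second object? Mutation tells us:
xs[0] = 99
print(ys[0])   # 99 -- ys and xs are the *same* object; identity is observable

a = (1, 2)     # immutable tuples
b = (1, 2)
# No tuple operation can reveal whether a and b share storage, so the
# question of identity (and hence aliasing) simply cannot arise here.
```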

Consequently, you cannot possibly get rid of aliasing issues without
getting rid of (unrestricted) mutability. Mutability implies object
identity implies aliasing problems.

On the other hand, pointers are a totally futile concept without
mutability: if everything is immutable, it is useless to distinguish
between an object and a pointer to it.

In other words, pointers are essentially just an *aspect* of mutability
in lower-level languages. On a sufficiently high level of abstraction,
it does not make much sense to differentiate between both concepts -
pointers are just mutable objects holding other mutable objects
(immutable pointer types exist, but are only interesting if you also
have pointer arithmetic - which, however, is largely equivalent to
arrays, i.e. not particularly relevant either).

- Andreas
 

Joachim Durchholz

Marshall said:
Okay, sure. But for the problem you describe, both imperativeness
and the presence of pointers is each necessary but not sufficient;
it is the two together that causes the problem. So it strikes
me (again, a very minor point) as inaccurate to describe this as
a problem with imperative languages per se.

Sure.
It's just that I know that it's viable to give up destructive updates.
Giving up pointers is a far more massive restriction.
Right. To me the response to this is clear: give up pointers. Imperative
operations are too useful to give up; indeed they are a requirement
for certain problems.

I don't know any.
In some cases, you need an additional level of conceptual indirection -
instead of *doing* the updates, you write a function that *describes* them.
Pointers on the other hand add nothing except
efficiency and a lot of confusion. They should be considered an
implementation technique only, hidden behind some pointerless
computational model.

I recognize that this is likely to be a controversial opinion.

Indeed.

In fact "modes" are a way to restrict pointer aliasing.
I heartily support immutability as the default, for these and other
reasons.

OK, then we're in agreement here.
Are we speaking of uniqueness types now? I haven't read much about
them, but it certainly seems like an intriguing concept.

Yup.
It's called "modes" in some other languages (Mercury or Clean IIRC).
Ah, well. I suppose I could mention prolog or mercury, but then
you used that troublesome word "professional." So I will instead
mention a language which, if one goes by number of search results
on hotjobs.com for "xxx programmer" for different values of xxx, is
more popular than Java and twice as popular as C++. It lacks
pointers (although I understand they are beginning to creep in
in the latest version of the standard.) It also possesses a quite
sophisticated set of facilities for declarative integrity constraints.
Yet for some reason it is typically ignored by language designers.

http://hotjobs.yahoo.com/jobseeker/jobsearch/search_results.html?keywords_all=sql+programmer

Oh, right. SQL is an interesting case of getting all the problems of
pointers without having them ;-)

Actually SQL has references - they are called "primary keys", but they
are references nevertheless. (Some SQL dialects also offer synthetic
"ID" fields that are guaranteed to remain stable over the lifetime of a
record. Seems like SQL is imperative enough that programmers want this,
else the SQL vendors wouldn't have added the feature...)
SQL also has updates.
The result: updates with undefined semantics. E.g. if you have a numeric
key field, UPDATE commands that increment the key by 1 will fail or work
depending on the (unspecified) order in which UPDATE touches the
records. You can have even more fun with updatable views.
With a "repeatable read" isolation level, you actually return to a
declarative view of the database: whatever you do with it, you won't see
it until you commit the transaction. (As soon as you commit, the
declarative peace is over and you better watch out that your data
doesn't become inconsistent due to aliasing.)
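The key-increment example can be reproduced with Python's built-in sqlite3 module (a made-up three-row table; other engines may touch the rows in a different order, which is exactly the "unspecified order" problem):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (k INTEGER UNIQUE)")
conn.executemany("INSERT INTO t (k) VALUES (?)", [(1,), (2,), (3,)])

failed = False
try:
    # Row by row, bumping k=1 to 2 collides with the still-present k=2.
    conn.execute("UPDATE t SET k = k + 1")
except sqlite3.IntegrityError:
    failed = True
```

On SQLite this statement fails with a UNIQUE-constraint error because rows happen to be updated in ascending order; an engine that touched the rows in descending key order would succeed -- the semantics depend on an order the declarative UPDATE never specifies.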


Aliasing isn't really related to specific programming practices. If two
accountants chat, and one talks about the hot blonde secretary and the
other about his adorable wife, you can imagine the awkwardness that
ensues as soon as they find out they're talking about the same person!
The only thing that can really be done about it is not adding it
artificially into a program. In those cases where aliasing is part of
the modelled domain, you really have to carefully inspect all
interactions and never, never, never dream about abstracting it away.


Regards,
Jo
 

Joachim Durchholz

Darren said:
Certainly. And my point is that the postcondition describing "all valid
chess boards reachable from this one" is pretty much going to be as big
as an implementation for generating it, yes?

Yes. It's a classical case where the postcondition and the code that
fulfils it are essentially the same.
The postcondition will
still have to contain all the rules of chess in it, for example. At best
you've replaced loops with some sort of universal quantifier with a
"such that" phrase.

Correct.

OTOH, isn't that the grail that many people have been searching for:
programming by simply declaring the results that they want to see?
Anyway, I expect you could prove you can't do this in the general case.
Otherwise, you could just write a postcondition that asserts the output
of your function is machine code that when run generates the same
outputs as the input string would. I.e., you'd have a compiler that can
write other compilers, generated automatically from a description of the
semantics of the input stream and the semantics of the machine the code
is to run on. I'm pretty sure we're not there yet, and I'm pretty sure
you start running into the limits of computability if you do that.

No, FPLs are actually just that: compilable postconditions.
Computability issues aren't more or less a factor than with other kinds
of compilers: they do limit what you can do, but these limits are loose
enough that you can do really useful stuff within them (in particular,
express all algorithms).

Regards,
Jo
 
