Contents of an exception object


Kenny McCormack

Kiki Thompson <[email protected]> showed once again that he isn't happy with
our friend Malcolm. Why he hasn't just killfiled him and moved on remains
a mystery:
....
If you want to keep misusing the word "function", you aren't going
to be able to communicate your actual ideas -- something you could
achieve much more easily by not posting at all.

Boom goes the hammer. I guess Malcolm is banned for life now.

--
"The anti-regulation business ethos is based on the charmingly naive notion
that people will not do unspeakable things for money." - Dana Carpender

Quoted by Paul Ciszek (pciszek at panix dot com). But what I want to know
is why is this diet/low-carb food author making pithy political/economic
statements?

Nevertheless, the above quote is dead-on, because, the thing is - business
in one breath tells us they don't need to be regulated (which is to say:
that they can morally self-regulate), then in the next breath tells us that
corporations are amoral entities which have no obligations to anyone except
their officers and shareholders, then in the next breath they tell us they
don't need to be regulated (that they can morally self-regulate) ...
 

Kenny McCormack

Malcolm McLean said:
even this

void hello(char *str)
{
strcpy(str, "hello world");
}

is not portable. It will break on a Microsoft compiler in default mode.

I have no idea what you're discussing here or what relevance this topic has
to any other topic, but...

For the benefit of those of us in the peanut gallery, I'm going to venture
a guess that what you are getting at in this example is that MS now has
this thing about banning the normal functions (especially string
functions, like strcpy) in favor of their silly new "_s" versions of those
functions. Is that right? Is that why the above code won't compile on MS?

--
But the Bush apologists hope that you won't remember all that. And they
also have a theory, which I've been hearing more and more - namely,
that President Obama, though not yet in office or even elected, caused the
2008 slump. You see, people were worried in advance about his future
policies, and that's what caused the economy to tank. Seriously.

(Paul Krugman - Addicted to Bush)
 

Ian Collins

Malcolm said:
The idea of bit-shuffling functions isn't an anti-subroutine design
paradigm in any way. The subroutines can be removed if necessary,
it's not saying they should be removed.

You might need to remove subroutines for efficiency, or to make the
function self-contained. Leaf functions have certain advantages.
They can be cut and pasted.

Is that an advantage? Functions should be called, not cut and pasted.
The function can be examined manually
for correctness.

So can the subroutines.

Extracting functionality into functions is one of the fundamentals of
good software design.
The subroutines can't be detached. I'm not saying
that those advantages will outweigh the advantages of subroutines
in any particular case, but, if you judge that it is the case that
a function is better as a leaf, you can always remove the
subroutines.

That should be categorised as an anti-pattern.
 

Ian Collins

James said:
One caveat of jumping back: depending on how resources were accounted for,
if one of the intervening modules allocated resources you might need to stop
off at that module on the way back so that the resources could be released.

Which is why exceptions aren't a good fit in C.
 

Malcolm McLean

On 01/06/14 17:16, Malcolm McLean wrote:

Functions can be "leaf" as far as C is concerned, while being far from
"leaf" in implementation. For example:

double foo(double a, double b)
{
return a / b;
}
foo() is undefined for a bit state where b = 0. So if b = 0 there
is a programming error in the caller.
This contains no function calls at the C level. But the implementation
might do all sorts of things, such as call subroutines to handle the
division in software, invoke some sort of error handler on a divide by
zero (including a nasal demon launcher - which I think would be
classified as a "Malcolm-procedure" rather than a "Malcolm-function").
It might also change states in the system - such as putting the floating
point hardware (or software subsystem) into default rounding modes.
Exactly. We've got to tighten up bit shuffling functions so that sort of
thing can't happen.
By "cut and pasted", do you mean "inlined by a compiler" or "cut and
pasted by someone editing the source code" ? Either way, the
requirement for that is that the function's definition is known - there
is no need for it to be a leaf function, or to know the definitions of
all functions called.
Cut and pasted by a person editing the source.
They can see a leaf function, take it, and then call it from anywhere. But
not if it calls subroutines. Then it can only be called from an environment
in which the subroutines are available; it can only be used at all
in an environment in which the subroutines are known.
That's often a practical problem. Maybe it shouldn't be. The reality
is that subroutines often become detached from the main body of a function,
maybe because the subroutines are in a library, maybe because of
clerical errors by someone transcribing the function but not fully
understanding what he was required to do. That's then a headache for
anyone trying to use that function.
That depends on how that value is obtained. If it is a global constant,
for example, then we can say a great deal about it.
The difference seems to me to be IO. If you're grabbing a value from an
external source, you can apply a function to that value, but you can't
define a function in terms of that value. At least as mathematicians
understand the term "function". Correct me if I'm wrong.
 

James Harris

Ian Collins said:
Which is why exceptions aren't a good fit in C.

A longjmp approach has that issue (but can still be written to cope with
it). It is not a problem at all for the exception word proposal.

James
 

BartC

James Harris said:
Yes, B, C and D would be expected to participate. If you wanted to avoid
that what about combining the approach with setjmp/longjmp? Then a longjmp
from E could get straight back to a setjmp test in A.

Yes, I understand that's the main mechanism available in C, apparently it's
not always so straightforward because of the need to end up with registers
in a stable state after the jump.

(I've never used it for myself apart from trying it out; normally I use
inline assembler, to restore the frame and stack pointers, and jump to the
required return point. That's another difference with this, as it is
necessary for A to first set up the data structure necessary for it to
work.)
One caveat of jumping back: depending on how resources were accounted for,
if one of the intervening modules allocated resources you might need to
stop off at that module on the way back so that the resources could be
released.

Then B, C and/or D would need to cooperate, if they're the only places where
those resources can be freed.

Sometimes there are ways for A to control the resources that could be used
by B, C and D (requests for memory or file handles for example), then it can
take care of that too (similar to what happens when A is part of an OS while
B, C etc comprise a separate application which A has to invoke. Here A can't
rely on B etc doing the right thing).
 

Ben Bacarisse

Malcolm McLean said:
I'd need to see the language to see how easy it is to recast in the
system.

It's C.
There's also the question of whether the language is
fundamentally misdesigned, or designed on an equally good but
incompatible principle.
I'm essentially talking in terms of C-like languages.
OK.

"Can be written in pure portable C" doesn't mean "cannot be written in
other languages".

OK, I see what you mean now, but it's not obvious. This meaning
suggests that I can only tell if something is a bit-shuffling function
by determining whether it can be written in pure, portable C.
That's a very odd definition (to my ears).
Same mistake. "Can be written without subroutines" doesn't mean "cannot
be written with subroutines".

Ah, right. Function calls must be eliminable. I don't like that at all
because of what it means for functional data. A change in one corner of
a program can suddenly alter the classification of a seemingly innocent
function. (Of course, since I am not that interested in this
classification, calling this a problem is rhetorical).

For example, part of an expression evaluator:

typedef double operator(double, double);
struct exp { operator *op; struct exp *l, *r; };

double eval(struct exp *exp)
{
return exp ? exp->op(eval(exp->l), eval(exp->r)) : 0;
}

Is or is not a bit-shuffler depending on data definitions and
constructions that might be far away. It may suddenly switch from being
one to not being one when a non bit-shuffling operator gets added at
some point. But why would I care? I can reason about this function and
determine what it guarantees and what it does not guarantee without
caring about its status as bit-shuffler.
we can write max3

int max3(int a, int b, int c)
{
if(a > b && a > c)
return a;
if(b > c)
return b;
return c;
}

Then we can drop that version into any program that calls max3. So if we
can't find the source to max2(), or can't figure out how it works because
no-one has told us about that weird syntax, all is not lost.

Now

void hello(void)
{
printf("hello world");
}

cannot be rewritten without any external subroutines. It doesn't have that
characteristic.

But I can re-write it. I don't see why I should care that I can't avoid
some external routine.

You're right in a very non-trivial sense. IO procedures like fopen()
are widely portable, they can be implemented on lots of physically
very different devices. So ANSI standard C tries to make code portable
by specifying a standard interface for the task of reading and writing
streams for an external datasource.
What I am saying is that this approach doesn't work. Because you can
always get some physical device which won't support your interface.

Why would I want to run a program that generates output on a device that
can't do output? If it can, and it supports C, using C's stdio will
make it work.
Calls to streams which pass over internet connections can't sensibly be
blocking, because internet connections hang, and users want some
sort of feedback. So stdio doesn't work. Everything breaks that relies
upon it.

Yes. C is under-specified in that sense. How does your classification
help?
Now you can easily say "OK, pass fgetc() a timeout callback". But that
too will break when the next technology comes along.

even this

void hello(char *str)
{
strcpy(str, "hello world");
}

is not portable. It will break on a Microsoft compiler in default mode.
So we need to be able to retain the right to ban subroutines.

How does banning subroutines help with a broken implementation? Whether
I can re-write a function is not tied to whether you have banned subroutine
calls or not.
We can't ban malloc() unfortunately. That's the one subroutine we really
can't do without.
I'd like to extend this analysis to parallel programming. But let's
get everything straight for serial programs first.

The details are not important. I want to be able to argue about all C
functions, so I don't care if they are bit-shufflers or not.
No, if a function must "clock out bits on the rising edge", you can't
test that that is working by writing C on a Unix box. At least in any
way that I know.
The function sums n results from calls to the callback. So we black-box
test it with various callbacks that return different numbers, and show
that it's returning correct results for all combinations of n and
functions with different characteristics. I put "prove correct" in
scare quotes because of course that only proves it correct in an
engineering sense. But we know it's been written by someone who
wouldn't introduce any deliberate perversities, it sums the results
of n calls to cb().
Now we ship it, and our customer uses it with his own callback, which
is highly intricate and we couldn't develop ourselves. And we know it
will sum n calls.
But callbacks are special. You've lost a degree of safety.

I think you misread the code. It was designed in response to a specific
statement about what a callback may do whilst still allowing the caller
to be tested independently of the callback.
A loop of n calls to a callback can be written

for(i=0;i<n;i++)
x += cb();
or

while(n--)
x += cb();

They're both correct and interchangeable, unless cb is allowed to modify
i or n. That's what you've got to ban.

No, the type of n is int. The two loops are quite different.
No, I give simple examples, but of course the method is entirely
pointless if applied to trivial functions. What matters is how it
scales up to massive programs which are too complex for any one
person to develop independently.

My objection was not about the examples. You focus on variability of a
subset of functions (bit-shufflers) with restrictions on the parameters
(when they are functions). I want to do it as widely as possible. Your
focus on bit-state means that higher-level ideas just get lost:

function comp(f, g) { return function (x) { return g(f(x)); }; }

(ECMAscript). I don't know if it is a bit-shuffler or not, and I have
no idea if it meets the bit-state rules of input and output for a
function, but I can reason about it, and it is useful (though a trivial
example of a more general genre). The arguments about what it does are
based on higher-level reasoning than bit-state.
I'm not a mathematician. But I understand that

x + 2

is a function of x.

x + a value obtained from somewhere

is not a function of x, we can't analyse it or say anything about it.

It's too huge a topic. You can reason about IO very well (particularly
well in Haskell) but you can't do it in terms of bit-state.
"Malcolm-function" is flattering, but I can hardly claim to deserve to
have the concept of a mathematical function defined by discrete bits
named after me. "Bit-shuffling function" seems to cause as much confusion
as "function". But feel free to suggest a term.

I have no ideas, sorry. It seems to me to be not the property of the
function, but rather of the function+arguments (or maybe
function+possible argument set) so any name based on xxx-function seems
inappropriate.
You don't know that.
Let's say we run both functions on identical machines, with the following code.

if(fork())
printhelloworld();
else
closedowntty();

so one thread is printing out the message, the other shuts down the tty
whilst it is executing. Can we now guarantee identical output for both
versions of printhelloworld()?

No. You can't guarantee any output from either version. I call that
"equivalent" in that they are both as useless as each other (in this
context). I'd also say that they are both as useful as each other in
contexts where one of them is useful.

What's more, if I can reason about IO, I can re-write IO functions
without keeping them identical. They simply must meet the
specification, but that's all.
Because we can take the function which converts an ascii string to
a real, and use it in other contexts. But only if it is a bit-shuffling
function, not if it is tied to reading characters from IO.

That's just good code organisation. I can re-use code that does IO too,
if I separate it out properly. The trick is to pick the right
components and to abstract out the right parts of them. Your
distinction inhibits (or at least does not help) that.
[ IO procedure programming different from bit shuffling ]
That's not an advantage either.
There's no particular reason why someone who is good at bit shuffling
should also be good at IO, or vice versa, other than the general
observation that people who are good at one task are usually at least
competent at a related one.
The fields have a different logic, they require a different mindset,
they require different physical resources. The software assets produced
have a different lifespan and applicability. We'll probably find they are
best handled in different languages.

And is it worth saying that "map" belongs in different places depending
on what function it's passed?

I think you get the same benefits from normal software engineering
principles. These usually operate at a finer level of granularity than
the bit-shuffling/IO distinction.
 

Ian Collins

James said:
A longjmp approach has that issue (but can still be written to cope with
it). It is not a problem at all for the exception word proposal.

Exceptions only become truly useful if callers don't have to explicitly
check for them being thrown. You may as well just replace errno with an
exception-like object and include details of the error there.

The cleanup handler stack approach Kaz Kylheku posted to this thread is
the best option for C. The same technique is also used to handle
cleaning up when threads are cancelled in pthreads, which is a similar
situation to jumping out of a function on an exception.
 

James Harris

Ian Collins said:
Exceptions only become truly useful if callers don't have to explicitly
check for them being thrown.

I disagree about "useful". I found this immensely useful.

Besides, using an exception word the check for an exception is no more work
than the check for a return code.
You may as well just replace errno with an exception-like object and
include details of the error there.

The cleanup handler stack approach Kaz Kylheku posted to this thread is
the best option for C. The same technique is also used to handle cleaning
up when threads are cancelled in pthreads, which is a similar situation to
jumping out of a function on an exception.

It is a good approach. I think you go too far in declaring it "best" though.
It would be best in some circumstances but not all.

James
 

Malcolm McLean

Malcolm McLean <[email protected]> writes:
OK, I see what you mean now, but it's not obvious. This meaning
suggests that I can only tell if something is a bit-shuffling function
by determining whether it can be written in pure, portable C.
That's a very odd definition (to my ears).
It's more of a description than a definition. You can rewrite that
more formally if you want, at the cost of possibly losing some readers.
Ah, right. Function calls must be eliminable. I don't like that at all
because of what it means for functional data. A change in one corner of
a program can suddenly alter the classification of a seemingly innocent
function. (Of course, since I am not that interested in this
classification, calling this a problem is rhetorical).
It means that we can't expect to write

void foo(void)
{
bar();
}

Then update bar(), and have a guarantee that foo() will be updated
in the way that we desire, as some methodologies promise. The reason
is that that promise is hard to fulfil. Not in this trivial case, of
course.
For example, part of an expression evaluator:

typedef double operator(double, double);
struct exp { operator *op; struct exp *l, *r; };

double eval(struct exp *exp)
{
return exp ? exp->op(eval(exp->l), eval(exp->r)) : 0;
}

Is or is not a bit-shuffler depending on data definitions and
constructions that might be far away. It may suddenly switch from being
one to not being one when a non bit-shuffling operator gets added at
some point. But why would I care? I can reason about this function and
determine what it guarantees and what it does not guarantee without
caring about its status as bit-shuffler.
No, because if op() is not a bit shuffler, you lose the strong guarantee
from op. Behaviour depends on the physical state of hardware attached
to the machine. That might be good enough for many purposes, but not
for all.

Callbacks aren't eliminable, of course.
But I can re-write it. I don't see why I should care that I can't avoid
some external routine.
Because printf() might be taken away on some future version of
the hardware. Obviously you'll have to deal with this at some level,
but it's best to isolate all the calls to printf() in sections of
the program labelled "procedures".
Why would I want to run a program that generates output on a device that
can't do output? If it can, and it supports C, using C's stdio will
make it work.
Because you may want to test and develop most of the program in a
Unix box type environment, and only put in the hardware-specific
calls when the hardware is actually built. You may wish to move
the program to new hardware, making small changes in the logic,
and to start software development before the new hardware becomes
available.
Yes. C is under-specified in that sense. How does your classification
help?
Because we've isolated all the calls to fgetc(). So it's easier to
replace them with calls to internet-enabled functions which deal
with slow connections.
How does banning subroutines help with a broken implementation? Whether
I can re-write a function is not tied to whether you have banned subroutine
calls or not.
strcpy() can be taken away. It's not a disaster, if you know you can
rewrite the function without using strcpy().
The details are not important. I want to be able to argue about all C
functions, so I don't care if they are bit-shufflers or not.
Parallel programming is a whole different can of worms. We've plenty
to talk about just on serial programming.
No, the type of n is int. The two loops are quite different.
Oh, you mean machine representation issues. Unfortunately that's
a necessary concession. Functions can't be defined in terms of bits,
but in terms of machine representations.
My objection was not about the examples. You focus on variability of a
subset of functions (bit-shufflers) with restrictions on the parameters
(when they are functions). I want to do it as widely as possible. Your
focus on bit-state means that higher-level ideas just get lost:
I think you're being misled by the fact that I necessarily use
short examples. It's to do with high-level code organisation, not
classifying trivial functions as "bit-shufflers".
function comp(f, g) { return function (x) { return g(f(x)); }; }


(ECMAscript). I don't know if it is a bit-shuffler or not, and I have
no idea if it meets the bit-state rules of input and output for a
function, but I can reason about it, and it is useful (though a trivial
example of a more general genre). The arguments about what it does are
based on higher-level reasoning than bit-state.
It's a different programming paradigm. Callbacks are difficult for
the bit-shuffling / non-bit-shuffling paradigm, because it's difficult
to say if an indirect call should count. So they need to be short
bits of missing functionality. If you build a method on callbacks
you're programming in a different way.
I have no ideas, sorry. It seems to me to be not the property of the
function, but rather of the function+arguments (or maybe
function+possible argument set) so any name based on xxx-function seems
inappropriate.
When an argument is another function there's a question mark: do we regard
the function as tied (it's one function) or not (it's two functions).
No. You can't guarantee any output from either version. I call that
"equivalent" in that they are both as useless as each other (in this
context). I'd also say that they are both as useful as each other in
contexts where one of them is useful.
Any tty can fail at any time. So printf() is useless.
Well yes it is, as a "function". That's the whole point.
What's more, if I can reason about IO, I can re-write IO functions
without keeping them identical. They simply must meet the
specification, but that's all.
printf("H\bHello world\n");

is "Hello world" on the hardware I happen to have attached to my machine.
Is it always "Hello world"?
That's just good code organisation. I can re-use code that does IO too,
if I separate it out properly. The trick is to pick the right
components and to abstract out the right parts of them. Your
distinction inhibits (or at least does not help) that.
No, it helps. It's all about good code organisation, I agree.
And is it worth saying that "map" belongs in different places depending
on what function it's passed?
No. Looks like an IO procedure which just happens to have the IO calls
abstracted away with callbacks, so it's a bit shuffler, which is good,
it can be tested without the IO stuff. That's the "stub function"
method, convert an IO procedure into a function temporarily.
So it would be written by an IO person, probably.

Maybe it's a bit of a marginal case.
I think you get the same benefits from normal software engineering
principles. These usually operate at a finer level of granularity than
the bit-shuffling/IO distinction.
I've been accused of being too low level, now I'm accused of not
operating at a fine enough level of granularity.
 

Ben Bacarisse

Malcolm McLean said:
No, because if op() is not a bit shuffler, you lose the strong guarantee
from op. Behaviour depends on the physical state of hardware attached
to the machine. That might be good enough for many purposes, but not
for all.

By "no", do you mean that I *can't* reason about this code? If so, I
think we'll have to agree to disagree about what that means, and about
its role in program development.
Callbacks aren't eliminable, of course.

I think they are in pure portable C, but it would certainly be only in
principle. Mind you, I think that's what matters here -- that a call be
eliminable in principle.
Because printf() might be taken away on some future version of
the hardware. Obviously you'll have to deal with this at some level,
but it's best to isolate all the calls to printf() in sections of
the program labelled "procedures".

They are all isolated by the fact that they are calls to a function
called printf. I can deal with this bizarre situation by writing my own
printf. If the specification of the program means that the output is
optional (non-essential logging say), this might be done very easily
indeed.

Now, normal code organisation principles will mean that it's almost
certain that all logging will be done through a logging function, so there
will be an alternative method as well.

That IO functions are often segregated is just good design. Other, non
IO functions will be organised and segregated, but both will be
organised at finer grain than bit-shuffling vs. IO and, in my opinion,
the two divisions won't always coincide (i.e. your two are not just
larger aggregations of the divisions that I might choose).
Because you may want to test and develop most of the program in a
Unix box type environment, and only put in the hardware-specific
calls when the hardware is actually built. You may wish to move
the program to new hardware, making small changes in the logic,
and to start software development before the new hardware becomes
available.

Yes, I do that kind of thing all the time. In fact the same pattern of
work is very useful for pure logic functions too.

What I'm missing is why this distinction of yours helps more than the
usual process by which people develop software. It has downsides --
some things I like are "banned", yet there's no sign that it will help
any more than good software engineering practice has been for decades.
Because we've isolated all the calls to fgetc(). So it's easier to
replace them with calls to internet-enabled functions which deal
with slow connections.

The calls to fgetc are isolated already by the fact that they are calls
to fgetc. Maybe all this comes from your using some really badly
designed software? It's normal to isolate functional components from
each other via well-defined interfaces, and that makes porting to new
environments simpler, but I see no advantage of your particular rules
over what is usually done.

C has no proper module system, so it is easy to make a mess of this
separation, but the solution to that is either to write in another
language, or to structure your code better. You can make a mess of the
pure logic bits as well as of the IO bits. Your division gives one cut
through the software where good design needs many. Since I don't see
that your cut is particularly useful, I would not use it. I'd start
with proper modular design.
strcpy() can be taken away. It's not a disaster, if you know you can
rewrite the function without using strcpy().

I don't see that as an answer. My point was in reply to the need "to
retain the right to ban subroutines" and so I asked how that helps. I
can re-write a function to get round broken implementations even if you
have never had the right to ban subroutines. Case in point: you don't
reserve the right to ban subroutines in IO functions, but you happily
talk about re-writing them to work in IO encumbered situations.

Parallel programming is a whole different can of worms.

My point has got lost in snipping. I made it in terms of a concurrent
program, but following your objection I re-made it in more general
terms. And now it's lost (and it was not a detail).
We've plenty to talk about just on serial programming.

Actually, no. I have already said more than I really want to, so I
won't be continuing. Feel free to rebut whatever points remain in this
post, but, unless I feel particularly outraged, I won't be replying.
Oh, you mean machine representation issues. Unfortunately that's
a necessary concession. Functions can't be defined in terms of bits,
but in terms of machine representations.

Eh? I just meant that you were wrong to say "this code is the same as
that code". The two pieces of code behave differently for some values
of the arguments. Without a specification, all I can do is conclude
that for one bit of code to be the same as the other, they should behave
the same in all cases.

But the fact that your re-write did not preserve the function's behaviour
is of no significance to your argument. I should just accept that you
are not a fan of details, where I am detail oriented. Pointing this out
has added nothing to the big picture, but, sadly, I obsess about details.
It's a different programming paradigm. Callbacks are difficult for
the bit-shuffling / non-bit-shuffling paradigm, because it's difficult
to say if an indirect call should count. So they need to be short
bits of missing functionality. If you build a method on callbacks
you're programming in a different way.

There's no callbacks here. That's why I chose this example. I
understand, but disagree with, what you said about callbacks being
difficult. I was making a fresh point about higher-level functions. Do
they fit into your paradigm at all? If not, fine. But if they do, how
exactly? The talk of bit-states is alien in that sort of code.
When an argument is another function there's a question mark: do we regard
the function as tied (it's one function) or not (it's two functions).

More to the point, do you? The use of "we" is unusual here.
Any tty can fail at any time. So printf() is useless.
Well yes it is, as a "function". That's the whole point.

It's my whole point as well. I don't know how exactly the same facts
can lead the two of us to make such different statements, but I think it
reflects a fundamentally different view of what a program is.
Unravelling that, if it is indeed the case, is beyond me right now.

I've said my bit about IO and re-writing IO functions, but since I don't
know what you are saying about it I am stumped. I agree with every
low-level, factual, statement you've made about it, yet we don't seem to
agree about the most basic higher-level remarks.

<snip>
 

Malcolm McLean

I think they are in pure portable C, but it would certainly be only in
principle. Mind you, I think that's what matters here -- that a call be
eliminable in principle.
No, because we've defined a "Malcolm-function" (if you insist) as bit state on
function exit given bit state on function entry. The address of the callback is
part of that bit state. So the callback has to be evaluated; I don't see how it
can't be.
You can change the definition of a function to mean "C functions and bound
callbacks", but that's not going to be helpful.
They are all isolated by the fact that they are calls to a function
called printf. I can deal with this bizarre situation by writing my own
printf. If the specification of the program means that the output is
optional (non-essential logging say), this might be done very easily
indeed.
No, because the hardware attached to the machine might not easily support a
printf() type interface.
Now, normal code organisation principles will mean that it's almost
certain that all logging will be done through a logging function, so there
will be an alternative method as well.
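The kind of seam being described might look like the sketch below: every logging call goes through one function, so a port to hardware without a printf-style device only has to replace (or disable) that one function. The names `log_msg` and `logging_enabled` are invented here for illustration, not taken from anyone's actual code.

```c
#include <stdarg.h>
#include <stdio.h>

/* When set to 0, logging is dropped entirely -- legal if the
   specification says the output is non-essential. */
static int logging_enabled = 1;

/* The single funnel for all logging.  Returns the number of characters
   written, or 0 when logging is disabled. */
int log_msg(const char *fmt, ...)
{
    if (!logging_enabled)
        return 0;
    va_list ap;
    va_start(ap, fmt);
    int n = vfprintf(stderr, fmt, ap);
    va_end(ap);
    return n;
}
```

On a platform with no printf-style device at all, only the body of `log_msg` needs rewriting; the rest of the program never touches stdio directly.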

That IO functions are often segregated is just good design. Other, non-IO
functions will be organised and segregated too, but both will be
organised at a finer grain than bit-shuffling vs. IO and, in my opinion,
the two divisions won't always coincide (i.e. your two are not just
larger aggregations of the divisions that I might choose).
IO and bit-shuffling is certainly not the only principle of organisation.
It's one principle.
What I'm missing is why this distinction of yours helps more than the
usual process by which people develop software. It has downsides --
some things I like are "banned", yet there's no sign that it will help
any more than good software engineering practice has for decades.
It's not the only possible programming paradigm.
The reality is that "use the stdio interface and reimplement it" method doesn't
work very well, in practice.
The calls to fgetc are isolated already by the fact that they are calls
to fgetc. Maybe all this comes from your using some really badly
designed software? It's normal to isolate functional components from
each other via well-defined interfaces, and that makes porting to new
environments simpler, but I see no advantage of your particular rules
over what is usually done.
Because you've got to provide the IO subroutines, you've got to make sure they
behave exactly as specified.
C has no proper module system, so it is easy to make a mess of this
separation, but the solution to that is either to write in another
language, or to structure your code better. You can make a mess of the
pure logic bits as well as of the IO bits. Your division gives one cut
through the software where good design needs many. Since I don't see
that your cut is particularly useful, I would not use it. I'd start
with proper modular design.
You're putting access to /dev/random in the same source file as, say, a
Mersenne twister. So someone wants your Mersenne twister, but the file
breaks because /dev/random isn't available. Because he doesn't know much
about random number generators, he's now got a difficult problem.
I don't see that as an answer. My point was in reply to the need "to
retain the right to ban subroutines" and so I asked how that helps. I
can re-write functions to get round broken implementations even if you
have never had the right to ban subroutines. Case in point: you don't
reserve the right to ban subroutines in IO functions, but you happily
talk about re-writing them to work in IO encumbered situations.
Yes, if we do IO, and things change, then it's problematic. There's no way round
that. We were happily loading in data from a file on disk, now it's on the
internet, and it hangs. Users expect to be told "this is slow" and to take the
decision themselves whether to terminate the operation or persist.
So let's at least isolate that.
Actually, no. I have already said more than I really want to, so I
won't be continuing. Feel free to rebut whatever points remain in this
post, but, unless I feel particularly outraged, I won't be replying.
No-one's forced to reply to me. It's an open newsgroup. I also have a policy
of only replying where I feel I have something useful to say.
Eh? I just meant that you were wrong to say "this code is the same as
that code". The two pieces of code behave differently for some values
of the arguments. Without a specification, all I can do is conclude
that for one bit of code to be the same as the other, they should behave
the same in all cases.
Basically, all bits are equivalent. A set bit is a set bit, regardless of whether
it's in a RAM chip or a ferrite core store. That's not true of other devices
attached to the computer.
There's no callbacks here. That's why I chose this example. I
understand, but disagree with, what you said about callbacks being
difficult. I was making a fresh point about higher-level functions. Do
they fit into your paradigm at all? If not, fine. But if they do, how
exactly? The talk of bit-states is alien in that sort of code.
Basically we define a function in terms of bits on input versus bits on output.
It can then perform any calculation we wish. There's nothing we can't achieve
in logic terms with those rules.
Then if data isn't in the right format, we write a little function to put it into
the right format. That's how we add flexibility, not by trying to pass
functions to other functions.
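A minimal sketch of that "adapt the data, not the function" approach: rather than passing a conversion callback into a summing routine, a small helper converts the data up front, and the routine stays a pure bits-in/bits-out function. Both function names here are invented for illustration.

```c
#include <stddef.h>

/* Pure bits-in/bits-out: no callbacks, fully testable in isolation. */
double sum(const double *x, size_t n)
{
    double t = 0.0;
    for (size_t i = 0; i < n; i++)
        t += x[i];
    return t;
}

/* The "little function": put int data into the format sum() expects. */
void ints_to_doubles(const int *in, double *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = (double)in[i];
}
```

The alternative would be `sum(void *base, size_t n, double (*get)(const void *, size_t))`, which is exactly the callback style the paradigm discourages.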
Except sometimes you have to leave out a little bit of functionality. That's
rare, and it's done reluctantly, because it introduces all the problems you've
been raising.
More to the point, do you? The use of "we" is unusual here.
It's a royal we.
You can see it in two ways. Unless you impose some restriction on what callbacks are
allowed to do, the function can't be tested independently of the callbacks, and it's not
sensible to try to describe it at the "bits on entry versus bits on exit" level. But it
is sensible to describe a bound function in those terms.
It's my whole point as well. I don't know how exactly the same facts
can lead the two of us to make such different statements, but I think it
reflects a fundamentally different view of what a program is.

Unravelling that, if it is indeed the case, is beyond me right now.
A program is a mapping of an input state to an output state, hooked up to
some IO devices so that the user can see the results and, possibly, run several
such mappings in sequence.
I've said my bit about IO and re-writing IO functions, but since I don't
know what you are saying about it I am stumped. I agree with every
low-level, factual, statement you've made about it, yet we don't seem to
agree about the most basic higher-level remarks.
Well yes, people disagree. That's the point of discussing things.
If I was completely sure of my ideas, I'd just publish them in a book, rather than
try to thrash them out with other programmers.
 
I

Ian Collins

James said:
I disagree about "useful". I found this immensely useful.

Besides, using an exception word, the check for an exception is no more work
than the check for a return code.

But you still have to check, and can forget to. To be truly useful,
exceptions have to be raised in such a way that they can't be ignored. Then
you can start to write clearer, uncluttered code like

doThis();

doThat();

doTheOther();

rather than

if( doThis() != OK )
{
// some error handling
}

and so on.
 
J

James Harris

Ian Collins said:
But you still have to check, and can forget to. To be truly useful,
exceptions have to be raised in such a way that they can't be ignored. Then
you can start to write clearer, uncluttered code like

doThis();

doThat();

doTheOther();

rather than

if( doThis() != OK )
{
// some error handling
}

and so on.

Well, on the usefulness: whether or not someone thinks it is *as* useful (and
neat) as possible, it still *is* useful to be able to do what has been
discussed, very much so.

With respect to the brevity, at a minimum the proposal would have

if (exception) return;

after most function calls. It's not perfect, but the addition could be
shorter than normal error-checking code, so it's pretty tidy.
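A minimal sketch of what that might look like, assuming the "exception word" is a shared flag set on failure and tested after each call. The names `exception`, `doThis` and `caller` are illustrative only, not the proposal's actual API.

```c
/* The exception word: set to non-zero when something fails. */
static int exception = 0;

/* A fallible operation: instead of returning an error code, it raises
   by setting the exception word.  'fail' simulates the failure path. */
void doThis(int fail)
{
    if (fail)
        exception = 1;
}

/* Each call site propagates upward with one short line. */
int caller(int fail)
{
    doThis(fail);
    if (exception) return -1;
    /* doThat();     if (exception) return -1;
       doTheOther(); if (exception) return -1;  ... and so on */
    return 0;
}
```

The point of comparison is that the `if (exception) return;` line is uniform at every call site, whereas return-code checks vary per function.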

By contrast, what would you do to make your preference possible? It is neat
as written but at some point you need the mechanism to make it work. I
presume we are still talking about C. Were you thinking of setjmp/longjmp?
Imagine that you wanted to catch exceptions at different levels. How would
you code it? I know what I would do: use standards-compliant setjmp/longjmp
with a stack or list of jump buffers managed by a separate library module -
but even in a library that's not very nice. :-( Wouldn't that "wrapping"
code become verbose in order to achieve the brevity of what you wrote above?

As an example, say you had a call stack which had on it these five
functions:

A B C D E

Imagine that A was the top level and caught most/all exceptions but that you
wanted to catch some exceptions in C and that before calling E function D
allocated resources that it had to free when the stack was unwound. Would
you end up with something along the lines of setjmp code at A, C and D -
that at A and C to catch exceptions and that at D to release the resources
allocated there?
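One hedged sketch of that scenario, using the jump-buffer-stack idea described above: each "catch" site pushes a `jmp_buf`, raising longjmps to the innermost one, and D installs a handler purely to release its resources before re-raising. All names (`handlers`, `EX_TRY`, `ex_raise`, and the A..E functions) are invented for illustration; a real library would also need to cope with threads and overflow.

```c
#include <setjmp.h>

enum { MAX_HANDLERS = 16 };
static jmp_buf handlers[MAX_HANDLERS];
static int top = -1;               /* index of innermost handler */

/* Push a handler and test it: true on the first pass, false after a raise. */
#define EX_TRY   (setjmp(handlers[++top]) == 0)
#define EX_POP() (top--)           /* normal exit: discard the handler */

/* Raise: unwind to the innermost handler. */
void ex_raise(void)
{
    longjmp(handlers[top--], 1);
}

static int d_released = 0;         /* records that D freed its resource */

void E(void) { ex_raise(); }       /* E fails */

void D(void)
{
    /* ... allocate a resource here ... */
    if (EX_TRY) {
        E();
        EX_POP();                  /* not reached in this demo */
    } else {
        d_released = 1;            /* release the resource ... */
        ex_raise();                /* ... then re-raise toward C or A */
    }
}

int C_catches(void)                /* C is the level that wants to catch */
{
    if (EX_TRY) {
        D();
        EX_POP();
        return 0;                  /* no exception */
    }
    return 1;                      /* exception caught here */
}
```

So yes: A, C and D would each carry setjmp scaffolding, which is exactly the verbosity the question anticipates, and locals modified between `setjmp` and `longjmp` must be `volatile` to be reliable afterwards.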

Besides, the proposed exception word can easily work *with* setjmp/longjmp
if required. One doesn't preclude the other because the exception word idea
uses C code that is completely standard and doesn't do anything behind the
scenes that could interfere with setjmp/longjmp. So it's not an either/or
choice. The lightweight exception word can be used in some places and the
more extensive setjmp/longjmp in others.

James
 
I

Ian Collins

James said:
By contrast, what would you do to make your preference possible? It is neat
as written but at some point you need the mechanism to make it work. I
presume we are still talking about C. Were you thinking of setjmp/longjmp?
Imagine that you wanted to catch exceptions at different levels. How would
you code it? I know what I would do: use standards-compliant setjmp/longjmp
with a stack or list of jump buffers managed by a separate library module -
but even in a libarary that's not very nice. :-( Wouldn't that "wrapping"
code become verbose in order to achieve the brevity of what you wrote above?

You really need language support. I have used something similar to
Kaz's library in the past, as I also use that mechanism with pthreads in
C (the platform I use has run-time support for thread cancellation in C++).
 
