Necessity of multi-level error propagation


Annie Testes

Tony said:
That's a silly thing to say or at least a silly assumption (that there are
only 2 alternatives: exceptions and C-techniques).

I wasn't assuming only two alternatives, I was
assuming that someone working on your project would
make comparisons with C.
That's exactly what I said above!

Almost. It's certainly possible to make error-propagation
more reliable than in C, but I don't know whether it can be
made as reliable as exceptions. Mandatory error codes are
a first step, but a programmer may still forget to pass them
to the caller. Which kind of technique would you use to
ensure the error code is transmitted during stack unwinding?
 

Tony

Annie Testes said:
I wasn't assuming only two alternatives, I was
assuming that someone working on your project would
make comparisons with C.

Nah, I wouldn't do that. (I'm the only one working on my project). And what
I'm doing is not C-like by a number of characteristic criteria.
Almost. It's certainly possible to make error-propagation
more reliable than in C, but I don't know whether it can be
made as reliable as exceptions.

I don't see why not.
Mandatory error codes are
a first step, but a programmer may still forget to pass them
to the caller.

Not if you develop a way to ensure that, at least to a high degree. There's
a large design space of possibilities and I'm hardly on my first iteration.
Which kind of technique would you use to
ensure the error code is transmitted during stack unwinding?

There aren't that many alternatives. I think you referred to one of those.
I'm not in a position to offer up details of my implementation.

Tony
 

Alf P. Steinbach

* Annie Testes:
I wasn't assuming only two alternatives, I was
assuming that someone working on your project would
make comparisons with C.


Almost. It's certainly possible to make error-propagation
more reliable than in C, but I don't know whether it can be
made as reliable as exceptions. Mandatory error codes are
a first step, but a programmer may still forget to pass them
to the caller. Which kind of technique would you use to
ensure the error code is transmitted during stack unwinding?

I'm not sure (I often have problems grokking Tony) but I think perhaps he's
thinking of something like Barton & Nackman's Fallible, a return value that is
a struct that can be logically "empty", where accessing the carried value in a
logically empty one terminates or invokes some installed error handler or throws
an exception. For an exception free scheme it would have to terminate or invoke
error handler. Essentially that would be /imposed/ checking of return values,
not especially good with respect to code structure or efficiency, but at least
avoiding the problems of return values not being inspected by caller.

If he's thinking of Fallible then pointing that out to James would be sort of
misplaced.

For James is the one who usually argues for using Fallible (when appropriate)...


Cheers & hth.,

- Alf
 

Tony

Alf P. Steinbach said:
* Annie Testes:

I'm not sure

Then it didn't need your response if you didn't have an answer.
(I often have problems grokking Tony)

It's not me, it's you.
but I think perhaps he's thinking of something like

But you "were answering" "HER" question!
Barton & Nackman's Fallible,

Oh of course a name rather than a concept, and I don't have any idea of
what "fallible" means other than I have seen and rejected snake oil for
decades. I'm not going to read the rest. Apparently many here aspire to:
"The Wizard of Oz".

Is "fallible" "agile", lol!

Tony
 

Alf P. Steinbach

* Tony:
Oh of course a name rather than a concept, and I don't have any idea of
what "fallible" means

Google is your friend. Or, it could be! In this case you'd only have to read on
for a few characters, though, instead of googling, but if you had chosen to
google then plugging those terms into Google you would have found

http://www.google.com/search?q=barton+nackman+fallible

then in the list of google hits e.g.

http://www.astron.nl/aips++/docs/casa/implement/Utilities/Fallible.html

(with complete implementation) as well as enlightening discussions such as


http://www.nabble.com/Interest-in-Barton-Nackman's-fallible<T>--td5250914.html

which additionally points you in the direction of boost::optional.

other than I have seen and rejected snake oil for
decades. I'm not going to read the rest. Apparently many here aspire to:
"The Wizard of Oz".

Is "fallible" "agile", lol!

It's often difficult for me to grok what you mean, Tony.

I guess that goes for some other readers, too.


Cheers, & hth.,

- Alf
 

James Kanze

Sure it does. If you use that pattern, especially if it's the
only one in your toolbox, then your constructors or something
called in those constructors is probably going to have the
possibility of raising an exception.

Apparently, you don't understand RAII. Despite the name, it has
nothing to do with constructors---std::auto_ptr, for example, is
a prime example of RAII, and none of its constructors can fail.

And drop the "only tool in your toolbox" business. You're the
one who's arguing to limit the tools in the toolbox.
"construction" is a subjective term. As I said above, your
need for exceptions to handle errors from constructors depends
on if you define "construction" to be "resource acquisition"
(large potential to raise error) or "data member
initialization" (small/no potential to raise error).

Construction is initializing the object so that it can be used.
There's nothing "subjective" about it. If a constructor returns
without throwing an exception, you have an instance of the
object.
"Throwing an exception from the constructor is the simplest way
to achieve that."
Avoiding the scenario seems much simpler.

Obviously, you've never written any non-trivial code, to make a
statement like that. Avoiding the scenario means expanding the
class invariants, and handling so-called zombie states. It
involves extra code in *every* member function.
So is a bulldozer, but I won't use one of those to shovel snow
off of my sidewalk, even if I did own one and even if it was
parked in the driveway.

And? How is that relevant to anything?
I'm not rejecting them. I'm just developing a better way to
handle errors.

Fine. Show it to us. The more possibilities we have to choose
from, the better it is.
It looks doable at this juncture but I have not encountered
all the scenarios yet. I don't need something as capable or
general as exceptions (yet? Time will tell.).

Now you give the impression that you don't understand
exceptions. Exceptions aren't more "general" or more "capable"
than other methods of reporting errors, per se. They are
different; in some cases, those differences represent an
advantage, and in others, a disadvantage.
 
J

James Kanze

That may be the side effect of architecting with the RAII
pattern. If instead one chooses to define construction as
initialization rather than resource acquisition (at least in
cases where errors can occur), then the above may not be a
justification for exceptions.

You seem to have a hang up about RAII, without even knowing what
it is. Despite the name, RAII has nothing to do with
initialization (nor, in some cases, with resources). It has to
do with clean-up and restoring a coherent state to the program,
regardless of why or how you leave the function (or block). I
was using RAII long before exceptions were added to the
language; it solves a real problem, and all of the other
solutions I've seen are extremely complicated.
I'm hedging that that is not so. It may be one of those "if
all you have is a hammer" things.

You're the one who seems to be wanting to restrict the toolkit.
 

James Kanze

On Mar 7, 5:57 am, "Tony" <[email protected]> wrote:
[...]
"Anytime you can't reasonably handle the error in the
immediately calling function."
I don't see that as a rule of thumb anymore since error
codes can be reliably propagated up the stack pretty easily
without "jumping and introducing alternative unwind
mechanisms".
That was the whole point of James' comment, surely?
Probably not, as he didn't mention anything that recognizes
that returning an error code does not have to be error prone
in C++ (in C, yes it is).

I didn't mention what I considered obvious to any experienced
C++ programmer. (I was using smart error codes some twenty
years ago.) I also didn't mention it because it's irrelevant to
the question. Propagating an error code means additional code
in each function which propagates it. That's extra work, and
best avoided.
It's not so bad and it will cause one to search for ways of
implementing functions that cannot cause error.

Which often isn't possible, or if it is, requires some twisted,
overly complicated logic.
Nothing to introduce: the call stack unwind happens without
any alternative unwind mechanism. What could be simpler than
nothing?

That's the argument for exceptions (when it comes to
propagation).
If you're not doing multi-level propagation, and with error
return codes you don't need that, then there is nothing to
implement over std C.

The fact is that some types of errors require multi-level
propagation.
 

James Kanze

* Annie Testes:

[...]
I'm not sure (I often have problems grokking Tony) but I think
perhaps he's thinking of something like Barton & Nackman's
Fallible, a return value that is a struct that can be
logically "empty", where accessing the carried value in a
logically empty one terminates or invokes some installed error
handler or throws an exception. For an exception free scheme
it would have to terminate or invoke error handler.
Essentially that would be /imposed/ checking of return values,
not especially good with respect to code structure or
efficiency, but at least avoiding the problems of return
values not being inspected by caller.

Not really Fallible, since his idiom doesn't necessarily suppose
a return value other than the return code. It's another very
old idiom (I think I was using it even before I learned
Fallible), although I can't remember ever seeing a name for it
(Smart return codes?). Basically, something like:

template< typename Status >
class ReturnStatus
{
public:
    /* not explicit! */ ReturnStatus( Status status )
        : myStatus( status )
        , myIsRead( false )
    {
    }
    ReturnStatus( ReturnStatus const& other )
        : myStatus( static_cast< Status >( other ) )
        , myIsRead( false )
    {
    }
    ~ReturnStatus()
    {
        assert( myIsRead ) ;
    }
    ReturnStatus& operator=( ReturnStatus const& other )
    {
        ReturnStatus tmp( other ) ;
        swap( tmp ) ;
        return *this ;
    }

    void swap( ReturnStatus& other )
    {
        std::swap( myStatus, other.myStatus ) ;
        std::swap( myIsRead, other.myIsRead ) ;
    }

    operator Status() const
    {
        myIsRead = true ;
        return myStatus ;
    }

private:
    Status myStatus ;
    mutable bool myIsRead ;
} ;

It's fairly easy to imagine scenarios where it fails to protect
(an object destructed without the return code having been
correctly handled), but such cases probably don't occur too
often in real code, and it does offer a fair amount of protection
against forgetting to check the return code.
If he's thinking of Fallible then pointing that out to James
would be sort of misplaced.
For James is the one who usually argues for using Fallible
(when appropriate)...

:)

I argued a lot for the above as well, in the distant past,
before we had exceptions. Today... almost all of the cases
where it would be appropriate are better handled by exceptions,
so it's an idiom (or pattern) that ceased to be "usual" or
relevant before patterns started getting named. (And thus,
before a lot of readers here started using C++.)

(BTW: the implementations I've used didn't use the swap idiom
for assignment, for the simple reason that the idiom hadn't been
invented when this idiom was frequent. And the assignment
operator was often a source of errors.)
 

James Kanze


In other words, you've no experience with C++. (I fear that
Jeff's going to jump again, but I'll stand by my statement: any
competent C++ programmer *should* know about Fallible.)
Google is your friend. Or, it could be! In this case you'd
only have to read on for a few characters, though, instead of
googling, but if you had chosen to google then plugging those
terms into Google you would have found

then in the list of google hits e.g.

(with complete implementation) as well as enlightening
discussions such as

which additionally points you in the direction of
boost::optional.

Yes. The idea has been reinvented several times, I'm sure.
It's often difficult for me to grok what you mean, Tony.
I guess that goes for some other readers, too.

I think it's usually just the case that he doesn't really know
what he's talking about, so he uses words in ways which don't
correspond to what others mean by them. (See his
characterizations of RAII elsethread, for example.) He also
seems to have an enormously large number of prejudices, and no
real experience with non-trivial applications.
 

James Kanze

True. So we're back in the boat of having to clear errno
manually. In general, do you clear and check errno for each
POSIX function, or do you prefer to avoid clearing errno when
possible?

I wouldn't say that I consciously avoid clearing errno, but I
don't bother clearing it if the function provides another means
of determining that there was an error. (I don't clear it
before open or read, for example.)

And just a nit: strtol and errno aren't just Posix---they're
part of the C standard, and by reference the C++ standard. So
the same rules apply under Windows.
I think Jean-Marc and you have changed my mind, but I still
would prefer to check for the traditional -1 (or null) when
possible.

Certainly. The problem is that *you* don't get the choice. How
you determine whether there was an error, and how you determine
what the error was if there was one, is specified by the
function, not by you. I'd never design a function to use errno,
and generally, the policy of Posix is also not to use it in new
functions, but rather to return a status code, and use out
arguments for other "return values" (see anything in
<pthread.h>, for example. I (and everyone I know) consider
errno a historical wart; something we have to live with, like it
or not.
Then the caller can use a try/catch block. That isn't much
more syntax than an if-statement,

Oh yes it is, since it implies creating a scope for the call
(and not just for the error handling). Which in turn may mean
either moving the error handling down further in the function,
or declaring the variable before initializing it.
it allows the compiler to favor the case in which an error
does not happen,

It can do this in any case.
and it keeps the error-handling code separate from the main
logic.

Which isn't necessarily an advantage, for errors which have to
be handled immediately.
I don't see any reason at all to prefer cursing a particular
return value; that seems more like dark magic to me.
Furthermore, if the caller forgets to check the return code
for its error value, a silent bug is likely to ensue, whereas
an exception will make itself heard unless explicitly
squashed.

Fallible aborts if you access the return value when the status
is invalid. Fallible supports some syntactic sugar as well,
e.g.: getSomething().elseDefaultTo( defaultValue ). And it's no
more dark magic than exceptions---less, really.
To me, strtol also seems to be a very low-level function, and
doesn't seem likely to know what to do on failure, anyway. At
the time strtol was written, of course, the situation probably
was different.

Even today, maybe. But I think the most frequent use would be
in the implementation of the operator>> in istream, or other low
level code of this sort. But at this point, we're so low that
you'll almost want to handle the error immediately, in order to
map it to the class specific error reporting mechanism.
To me, that means: "This isn't really an error. It's
something I expect to happen in the course of normal
operation,

That's also true. Although that's not really my argument. But
I don't think you can consider any format error in input
"exceptional". Just the opposite, you're almost certain to see
it from time to time.
and I want to be able to check for it at my leisure."

That's not Fallible; that's the idiom used by std::istream (and
IEEE floating point). It's also useful---in the case of output
(std::ostream) and floating point, it's probably the preferred
idiom. (Typically, I'll output an entire file, and only check
the status and generate an error return after close.) In the
case of input, I almost always want to know about the error
immediately, so I can output an error message with significant
context.
I don't see anything inherently wrong with that, but it really
changes the discussion. (I'd still prefer an exception for
strtol, but I could see the Fallible PoV.)

In the case of strtol, there's another, clinching argument
against exceptions: the function has to be usable from C:).
And since we can't break existing code, it has to use errno:-(.

If we're talking about a new function, with similar use, I think
it would depend very much on the requirements I was faced with.
As I think I already said, if the function were designed for
higher level use, I'd probably do something like istream.
Yes, at least insofar as it lets the client dictate what's an
error. To me, EOF isn't (necessarily) an error, and I don't
want an exception when I hit it; by contrast, failure to read
an integer where one is expected does constitute such an
error.

That's where we disagree. An error in input format is really to
be expected, at least in most cases. What I generally don't
expect (and would consider exceptional, and report by an
exception) might be a hardware read error (except, perhaps, in
very, very low level code, e.g. an implementation of a level 2
or a level 3 protocol, like LAP-D or IP).
 

Alf P. Steinbach

* James Kanze:
That's where we disagree. An error in input format is really to
be expected, at least in most cases. What I generally don't
expect (and would consider exceptional, and report by an
exception) might be a hardware read error (except, perhaps, in
very, very low level code, e.g. an implementation of a level 2
or a level 3 protocol, like LAP-D or IP).

For the purpose of least work and no surprise, the silent error mode of
iostreams is IMHO very ungood. And enabling exceptions is not an option, due to
EOF state setting badbit and then generating exception. So I agree with Jeff
there, at least as default error reporting, but doing that properly means not
just fixing EOF handling.

As the iostreams exemplify, it's possible to support both error reporting
models, passive and active error reporting, even if the iostreams support for
active reporting isn't in-practice usable.

However, as the iostreams also exemplify, trying to force-fit interactive and
file/pipe i/o into the same general framework, while a nice academic
abstraction, just leads to enormous problems. It's a bottleneck. The lower level
library software goes through all kinds of unnatural gymnastics to force the two
to appear the same, funneling everything into a point aperture, so to speak,
only to have the client code at top going through just as unnatural gymnastics
to "decode" that to be able to treat them differently, as they should be.


Cheers,

- Alf
 

TonyMc

Jeff Schwab said:
You know, there's an actual word for that. It's shorter, too.

I mention this only because it's the second use of "ungood" I've seen
here recently.

Thought crime! Room 101 for you, sunshine. Your personal torment will
probably involve COBOL. Or Win32s.

Tony
 

James Kanze

* James Kanze:
For the purpose of least work and no surprise, the silent
error mode of iostreams is IMHO very ungood.

What do you propose instead? The classical idiom for input may
not be pretty, but it works well in practice.
And enabling exceptions is not an option, due to EOF state
setting badbit and then generating exception.

Where did you get that? EOF causes eofbit to be set, and
directly, only eofbit. If the EOF means that we are unable to
read the requested data (e.g. it occurs on reading the first
character, or skipping initial blanks), the failure to read will
cause failbit to be set. But EOF never causes badbit to be set.

[...]
However, as the iostreams also exemplify, trying to force-fit
interactive and file/pipe i/o into the same general framework,
while a nice academic abstraction, just leads to enormous
problems.

The IO streams aren't designed for interactive I/O. Nobody
really said they were.
It's a bottleneck. The lower level library software goes
through all kinds of unnatural gymnastics to force the two to
appear the same, funneling everything into a point aperture,
so to speak, only to have the client code at top going through
just as unnatural gymnastics to "decode" that to be able to
treat them differently, as they should be.

That's the Unix heritage:). Also adopted by Windows, so
regardless of what one might think of it (and a lot of people
like it), we've got to live with it.
 

James Kanze

Not so; the call already has its own scope. If you find
yourself with a try/catch in the middle of a function body,
it's probably time to refactor.

Usually, but not always.
Neither of those is implied, either. Remember, we're
replacing an explicit clear of errno before the function call,
and a check afterward. The non-exception code would have to
look something like this:
long parse(std::string s, long default_, int base) {
    typedef char** end_pointer;
    errno = 0;
    long const value = strtol(s.data(), end_pointer( ), base);
    if (errno) {
        // log the error, or do other special-case handling...
        return default_;
    }
    return value;
}

Yuck. If my::strtol instead throws an exception, we can write:

long parse(std::string s, int base, long default_) try {
    typedef char** end_pointer;
    return my::strtol(s.data(), end_pointer( ), base);
} catch (std::exception const&) {
    // log the error, or do other special-case handling...
    return default_;
}

I'm not sure I like that better. If strtol returned a Fallible,
I could simply write:

long parse( std::string s, int base, long default_ )
{
    return strtol( s, base ).elseDefaultTo( default_ ) ;
}

(except that I probably wouldn't bother).

[...]
It could, if it knew which case to favor.

That's an easy one. It favors which ever path the profiling
data tells it to favor. (Of course, if you generate your
profiling data with a test set which has more error cases than
normal cases...)
GCC has extensions to let you specify which case should be
favored, but I'm not aware of anything in standard C++ that
serves the same purpose.

Most compilers have options to exploit profiling data when
optimizing.
I disagree. If it belongs in the main logic, then by
definition, it's part of that logic, and we're no longer
discussing error-handling. If it's semantically separate from
normal operation, then it ought to be lexically separate, as
well.

OK. So we're really arguing about whether incorrect input
should be called an "error", or whether it should receive some
other name. Handling incorrect input is certainly part of the
"normal operation" of my programs.

[...]
Who said anything about "any format error in input?" Anyway,
as long as we're calling it an error, I still think it ought
to have a corresponding exception.

It sounds to me like there's some circular logic involved there.
You're saying that the reason it should use an exception is
because we call it an error, and the reason we call it an error
is because it should use an exception. If calling it an error
implies using an exception, then you've just changed the
definition of error that I was using, and I'd have to argue that
it isn't an error.
Despite some of the older wisdom, I'm not convinced that
"exceptions" should only be for really exceptional situations.
IMO, they're best reserved for errors, and used consistently.
Really? So what was that elseDefaultTo method you just showed?

A member function of Fallible. But I don't see any relationship
to "check for it at my leisure", since elseDefaultTo checks
immediately. (It provides one type of simple error handling,
appropriate in a few situations.)
By default.
Yes.
"The" preferred idiom. Wow.

Based on the code I've seen.
Once the first error occurs, are subsequent operations
guaranteed not to clear it, or to otherwise do any harm?

Yes. That's an essential part of the idiom. Anytime a stream
is in failed state (failbit or badbit set), all operations on it
(except things like clear, of course) are guaranteed to be
no-ops.

[...]
Fair enough. The disagreement may be more terminological than
technical.

I'm beginning to suspect that too, at least partially. (I do
suspect that you'd choose exceptions in some cases I'd choose
return codes. But those cases would probably be judgement
calls.)
 

Alf P. Steinbach

* James Kanze:
What do you propose instead? The classical idiom for input may
not be pretty, but it works well in practice.

Nah, it does *not* work well.

It's inefficient, and about 0% of newbies understand what's going on, which
means that it's counter-intuitive. It makes it exceedingly easy to write
incorrect code. It makes it exceedingly hard to write correct code. It's
verbose. It's cryptic. It's lame. It's, well, I can't say that 4L word that
starts with s, has an h after the first letter and ends with it, here.

But, assuming you understand what that word would have been, it's 110% that.

Where did you get that? EOF causes eofbit to be set, and
directly, only eofbit. If the EOF means that we are unable to
read the requested data (e.g. it occurs on reading the first
character, or skipping initial blanks), the failure to read will
cause failbit to be set. But EOF never causes badbit to be set.

Maybe. Check the name. Perhaps it's called 'failbit' or something. I really
don't care about the name. Whatever the name, the effect with exception
generation turned on is a nasty, very unwelcome and very impractical exception.

[...]
However, as the iostreams also exemplify, trying to force-fit
interactive and file/pipe i/o into the same general framework,
while a nice academic abstraction, just leads to enormous
problems.

The IO streams aren't designed for interactive I/O. Nobody
really said they were.

Right. But it's all in the "really". Character oriented i/o is potentially the
simplest and easiest, yet the C++ interface to that functionality makes it more
difficult (to do correctly) than a graphical user interface! And that's absurd.

That's the Unix heritage:). Also adopted by Windows, so
regardless of what one might think of it (and a lot of people
like it), we've got to live with it.

Maybe. :)


Cheers,

- Alf
 

Alf P. Steinbach

* Alf P. Steinbach:

Regarding this, the extreme backwardness and sheer wrongheadedness of treating
interactive keyboard input like a buffered stream that could be a file, it's not
just a problem with the C++ iostream design.

It's a problem with our physical machines, where this less than intelligent
choice has been hardwired, and it's been annoying me since 1979 or thereabouts.
Many a time the itch has been so strong that I've started writing an article
about it. But then, there's so little to say, it's not stuff for an article.

Here's the way Things Work on a typical PC or workstation:

PHYSICAL EVENTS -> AUTO-REPEATER -> BUFFER -> DECODING -> PROGRAM

1. Under the keys there's a matrix of some sort. A microcontroller in the
keyboard scans this matrix, detects key down and key up. On key down or
key up a series of bytes denoting the EVENT is sent to the computer.

2. When microcontroller detects that a key is held down for a while it
initiates a kind of REPEAT action, sending the same key down sequence
repeatedly to the computer.

3. In the computer, receiving hardware+software logic BUFFERS it all.

One consequence is that when e.g. the Firefox beast grinds to a near halt, as
it's wont to do now and then, then you can "type ahead" but you don't see what
you're typing, and so you don't see when the buffer is full such that further
keystrokes are ignored (in earlier times it beeped, but no more), nor do you see
the effect of edit and arrow keys, which can really mess up things.

Happily we old dinosaurs know better than pressing backspace repeatedly when
nothing happens, at least in editing (the dreaded "delayed delete everything").

But most users aren't that sophisticated, they have no mental model of the data
flow shown above, and think that what they see on the screen is what goes on.

Another consequence is that in programs that provide "smooth" scrolling, e.g.
again a web browser, the scrolling is far from smooth. For the program can't
easily detect that a key is being held down. What it can easily do is to react
to individual synthesized keystrokes resulting from a key being held down, and
so the effect is jumpy no matter how much the program tries to smooth it out.

One rational way to do things could instead be

PHYSICAL EVENTS -> BUFFER -> DECODING -> AUTO-REPEATER -> PROGRAM

I don't know any system that works that way, however. Though I suspect that
early machines at Xerox PARC did, because those folks were very keen on having
"non-decoded" keyboard events and, in particular, having time-stamped events.

The "problem" with this rational way is that it's not compatible with the
buffered everything-as-file-like-stream i/o concept, which the C and C++
standard i/o facilities are built on. That "problem" is a problem with the i/o
facilities. And I suspect that there is a connection, that the hardware has been
adapted to what was easy to handle within that less than practical i/o model.


Cheers,

- Alf (venting some 30 years of frustration over this)
 

James Kanze

* Alf P. Steinbach:
Regarding this, the extreme backwardness and sheer
wrongheadedness of treating interactive keyboard input like a
buffered stream that could be a file, it's not just a problem
with the C++ iostream design.

My take on this: a long, long time ago, the model of all IO
being a sequential stream was probably valid. At that time,
however, most OS's didn't handle them that way: outputting to
the printer was a different system request than outputting to a
mag tape or a file, for example. After a while (but still
before Unix came along), it was realized that the OS should
support device independent sequential stream output; this
evolution took place at different times, depending on the
system, and even as late as 1979, MS-DOS 1.0 had separate
requests for outputing to a printer or to a disk file.

Regretfully, about the time the OS's finally started adopting
the sequential stream model, it became a bit outdated, as disk
files (with random access) and interactive terminals
(particularly screens---even non graphic screens support a full
screen mode). So we have all of the OS's rushing to be "up to
date" by implementing a model which is no longer current.
It's a problem with our physical machines, where this less
than intelligent choice has been hardwired, and it's been
annoying me since 1979 or thereabouts. Many a time the itch
has been so strong that I've started writing an article about
it. But then, there's so little to say, it's not stuff for an
article.
Here's the way Things Work on a typical PC or workstation:
PHYSICAL EVENTS -> AUTO-REPEATER -> BUFFER -> DECODING -> PROGRAM
1. Under the keys there's a matrix of some sort. A microcontroller in the
keyboard scans this matrix, detects key down and key up. On key down or
key up a series of bytes denoting the EVENT is sent to the computer.
2. When microcontroller detects that a key is held down for a while it
initiates a kind of REPEAT action, sending the same key down sequence
repeatedly to the computer.
3. In the computer, receiving hardware+software logic BUFFERS it all.

The problem isn't hardware. At the lowest level, there are two
possible modes: the OS receives an interrupt for each key
stroke, at which time it can read the scan code of the key and
the state of the "modifier" keys (shift, alt, etc.). Or the OS
receives an interrupt for each key down and key up event
(including those of the modifier keys), and manages everything
itself. In the second case, autorepeat is handled by the OS.

I'm not sure (it's been a very long time since I worked at this
low level), but I think X usually uses the second possibility.
At any rate, it's certainly possible to e.g. map the tab key to
behave like control (which obviously doesn't auto-repeat,
because that doesn't have any meaning for it).

I'm also pretty sure that you've simplified things greatly.
There are several buffers: the one the application sees is after
the auto repeat, but the lower level ones aren't.
One consequence is that when e.g. the Firefox beast grinds to
a near halt, as it's wont to do now and then, you can
"type ahead" but you don't see what you're typing, and so you
don't see when the buffer is full such that further keystrokes
are ignored (in earlier times it beeped, but no more), nor do
you see the effect of edit and arrow keys, which can really
mess up things.

Note that this could easily be handled in Firefox itself. All
it has to do is purge its input each time it finishes a screen
update. Whether this is a good idea, however, is debatable,
because...
Happily we old dinosaurs know better than pressing backspace
repeatedly when nothing happens, at least in editing (the
dreaded "delayed delete everything").

We old dinosaurs understand that just because we haven't seen
the echo doesn't mean that the command hasn't been taken into
account. So we occasionally "type ahead" a sequence of
commands, even if the system doesn't seem to be responding to
them.
But most users aren't that sophisticated, they have no mental
model of the data flow shown above, and think that what they
see on the screen is what goes on.
Another consequence is that in programs that provide "smooth"
scrolling, e.g. again a web browser, the scrolling is far
from smooth. For the program can't easily detect that a key is
being held down. What it can easily do is to react to
individual synthesized keystrokes resulting from a key being
held down, and so the effect is jumpy no matter how much the
program tries to smooth it out.
One rational way to do things could instead be
PHYSICAL EVENTS -> BUFFER -> DECODING -> AUTO-REPEATER -> PROGRAM
I don't know any system that works that way, however. Though I
suspect that early machines at Xerox PARC did, because those
folks were very keen on having "non-decoded" keyboard events
and, in particular, having time-stamped events.

I'm not sure what difference this would make. The reason Xerox
PARC could do things differently was that it intervened at a
lower level.
The "problem" with this rational way is that it's not
compatible with the buffered everything-as-file-like-stream
i/o concept, which the C and C++ standard i/o facilities are
built on. That "problem" is a problem with the i/o facilities.

Except that you don't use the sequential stream interface for
GUI I/O. You use specific functions in the X or MS Windows
libraries.
And I suspect that there is a connection, that the hardware
has been adapted to what was easy to handle within that less
than practical i/o model.

I don't think there's a problem with the hardware. But the OS's
have been designed to use it in a way that isn't necessarily
appropriate in today's world.
 
G

Guest

[normally I'd snip more but I'm scared of removing something that
is relevant even though incomprehensible]


It may be what you meant, but you didn't say it.
Then it didn't need your response if you didn't have an answer.


It's not me, it's you.

no, many people have trouble following you. Me included.
I suspect your first language isn't English or you are from a culture
that is sufficiently different from mine for me not to follow many of
your references. When I don't understand you I assume you are making
a culturally specific joke.
But you "were answering" "HER" question!


Oh of course a name rather than a concept, and I don't have any idea of
what "fallible" means other than I have seen and rejected snake oil for
decades.

If you don't know what it means how do you know it's snake oil?

I'm not going to read the rest. Apparently many here aspire to:
"The Wizzard of Oz".

<assume culture specific joke>
[I know who the Wizard of Oz is but I've no idea why he is relevant
here]

Is "fallible" 'agile", lol!
<assume culture specific joke>
<"lol" tag strongly increases probability of humourous reference>

see? Two in one post
 
