Adding the ability to add functions into structures?

Chuck F.

Keith said:
Chuck F. said:
Jordan Abel wrote: [...]
Speaking of which, I think it would have been better for
them to take the FILE* argument first, to match fprintf.
However, the reasons for this inconsistency lie forgotten in
the depths of history. (It strikes me as especially odd
given that UNIX functions that take a file descriptor number
do take it first. Anyone here who was at Bell Labs around
that time have any comment on this? I forget, does DMR read
this newsgroup?)

If you think of the assembly language historically needed to
implement those functions, all becomes clear (especially if
you have done such implementations). fprintf needs the FILE
first to allow the variadic mechanism to function. For the
others, remember that the usual stack passing mechanism is
last argument pushed first. Thus, for fputc(c, f) the stack
at function entry will look like:

stack-mark f c <top of stack>

This allows the routine to examine c and decide on such things
as 'does this get printed', or 'should I convert into a tab
sequence' before worrying about the actual destination. This
preliminary work can be done without disturbing the stack-mark
or f.

In this case the gain is fairly small. However when the
parameters include such things as fieldsize or precision, the
gains can be significant.

You seem to be assuming that the generated code can't easily
access items lower on the stack before accessing items higher on
the stack. In all the assembly languages I'm familiar with, it's
easy to access any item on the stack (within reason) by using a
stack-pointer-plus-offset addressing mode. All parameters are
available simultaneously and equally easily. (Some CPUs might
limit the offset to some small value, but plenty for a couple of
parameters.)

Historically, I know this is true for the PDP-11. What about
the PDP-7 (isn't that the first system where C was implemented)?

Did some CPUs actually require upper stack items to be popped
before lower stack items could be accessed?

In effect, yes. Take the 8080 as an example. I had routines to
address things offset from TOS, but they were relatively complex
and slow. However manipulating TOS, and possibly NOS, was fairly
easy. Using these methods it was possible to write pure code.

 
jacob navia

Jordan Abel a écrit :
Speaking of which, I think it would have been better for them to take
the FILE* argument first, to match fprintf. However, the reasons for
this inconsistency lie forgotten in the depths of history. (It strikes
me as especially odd given that UNIX functions that take a file
descriptor number do take it first. Anyone here who was at Bell Labs
around that time have any comment on this? I forget, does DMR read this
newsgroup?)

Ahhh living in the past.

Those never ending historical trivia that seem to have so
much success here.

I tried to start a discussion based on the interface concept
but it went completely down the drain.

The 8086, the PDP7, that is MUCH more fun. The only answers
I got pointed to some triviality. The discussion here never
gets to any substantive discussion about modern software construction.
 
Eric Sosman

Keith said:
Eric Sosman said:
Keith said:
[...]
The standard specifically requires FILE to be an object type. I
suppose an implementation could act as if void is an object type, but
IMHO that goes beyond what "conforming" means.

So it does; I think that clears up my confusion. Jack
Klein was right when he said FILE cannot be an opaque type,
but it's not because of "the simple reason that the standard
requires some functions to be implemented as macros." Rather,
it's because 7.19.1/2 requires that FILE be "an object type"
and 6.2.5/1 says that object types "fully describe" objects.
There's no formal definition of "opaque," but it appears
antithetical to "full description."

Thanks to Keith, and to Richard Heathfield for making
the point in a slightly different way.


But 7.19.1 says that FILE is

an object type capable of recording all the information needed to
control a stream, including [a bunch of stuff]

It doesn't necessarily have to record the information *directly*. I
can imagine FILE being a typedef for void* (making FILE* void**);
internally the void* can be converted to a pointer to a structure
containing the actual information.

Yes, of course: Even though <stdio.h> must reveal the
nature of a FILE, it does not follow that FILE is easily
decoded. Someone else suggested that FILE could be an int
that indexes arrays of stuff hidden away in the innards of
the library, out of the program's view, and there is no
shortage of other possibilities more and less obscure.

Nonetheless: Whatever a FILE may be, <stdio.h> must
"fully describe" it. That was the crux of my question,
even though the "full description" is useless in practice.
 
Richard Bos

It's already been done. They discuss it in comp.lang.c++, down the
hall. And in comp.lang.java, and I'm sure also in whatever group it
is on Microsoft's server that is devoted to C#.

I know I'll get flamed for this, but with the exception of inheritance
this is really nothing but syntactical sugar. You can write object
oriented programs in C right now.

You'd get flamed for that if you posted it in comp.lang.c++, I have no
doubt. Here, though... I've been agreeing with that for years. Ninety
percent of object orientation is merely what good programmers were doing
decades ago, long before I myself started programming.

Richard
 
Chris Torek

... 7.19.1 says that FILE is

an object type capable of recording all the information needed to
control a stream, including [a bunch of stuff]

It doesn't necessarily have to record the information *directly*.

And indeed, the Standard permits the (actual, historical)
System V implementation, in which FILE was a typedef-name
(or perhaps #define) for a structure that did in fact not
contain all of the data.

Specifically, there was a separate, private array with a name
something like "bufend" or "bufsize". The stdio FILE structure
included fields like "pointer", "file number", "flags" or "mode",
and so on, but was missing at least one data item: the size or
end-pointer for each stdio buffer associated with each open file.
Some stdio functions then did something essentially like this:

int some_stdio_function(FILE *fp) {
    size_t n = bufend[fp - &__stdio_files[0]] - fp->_base;
    ...
}

That is, the actual value of the pointer "fp" was important:
fp had to point to one of the 20 or so (FOPEN_MAX) array entries,
and a separate, parallel array (bufend, or whatever its name was)
contained additional data.

This meant that code of the form:

void somefunc(FILE *fp) {
    FILE fcopy = *fp;
    ...
    result = fprintf(&fcopy, fmt, arg1, arg2, ...);
    ...
}

would (sometimes) break badly on some System V Unix systems, despite
working on almost every other Standard C system known to man.

The C Standard permits this breakage, by rendering undefined the
behavior of using "fcopy" as in somefunc() above.
 
Keith Thompson

jacob navia said:
Jordan Abel a écrit :

Ahhh living in the past.

Those never ending historical trivia that seem to have so
much success here.

Some of us find historical trivia interesting. Apart from being
interesting for their own sake, they can be helpful to understand why
the C language is the way it is, warts and all.

I tried to start a discussion based on the interface concept
but it went completely down the drain.

Yes, well, that happens sometimes. We're not obligated to discuss
whatever you find interesting, any more than you're obligated to
discuss what we find interesting.

Having said that, your idea for interfaces
<http://groups.google.com/group/comp.lang.c/msg/857ea8e70333c6c5?hl=en&fwc=1>
does look at least moderately interesting. To summarize:

struct ArrayList;

struct ArrayListInterface {
    int (*GetCount)(struct ArrayList *AL);
    int (*IsReadOnly)(struct ArrayList *AL);
    int (*SetReadOnly)(struct ArrayList *AL, int flag);
    int (*Push)(struct ArrayList *AL, void *str);
};

struct ArrayList {
    struct ArrayListInterface *lpVtbl;
    size_t count;
    void **contents;
    size_t capacity;
    unsigned int flags;
    size_t ElementSize;
};

(This isn't identical to the code you posted; I've fixed some syntax
errors and dropped the typedefs in favor of using struct tags
directly.)

One difficulty with this approach, as compared to the equivalent in a
language like C++ that supports OO directly, is that you have to
manually build and initialize the function table yourself; the
compiler won't do it for you. If you forget to do this, Bad Things
Happen; for example:

struct ArrayList obj;
int count = obj.lpVtbl->GetCount(&obj);

invokes undefined behavior. But that's not a fatal flaw; it's easy
enough to declare an object that provides a default initial value:

const struct ArrayListInterface ArrayListInterfaceDefault = {
    ...
};

struct ArrayList ArrayListDefault = {
    &ArrayListInterfaceDefault,
    0,
    ...
};

and then:

struct ArrayList obj = ArrayListDefault;
int count = obj.lpVtbl->GetCount(&obj);

will consistently give you 0.

I think this kind of thing is often overkill if you don't actually
need more than one implementation of, say, the GetCount() function for
your type. Plain old type and function declarations, with no attempt
to encapsulate the functions in the types, is very often perfectly
adequate. But when it isn't, this seems like a reasonable approach.

(Please note that I'm discussing your idea because I find the idea
itself interesting, not because I'm influenced by your complaints
about other things being trivial.)

The 8086, the PDP7, that is MUCH more fun. The only answers
I got pointed to some triviality. The discussion here never
gets to any substantive discussion about modern software construction.

Maybe you should try comp.programming.
 
jacob navia

Keith Thompson a écrit :
One difficulty with this approach, as compared to the equivalent in a
language like C++ that supports OO directly, is that you have to
manually build and initialize the function table yourself; the
compiler won't do it for you. If you forget to do this, Bad Things
Happen; for example:

struct ArrayList obj;
int count = obj.lpVtbl->GetCount(&obj);

invokes undefined behavior. But that's not a fatal flaw; it's easy
enough to declare an object that provides a default initial value:

const struct ArrayListInterface ArrayListInterfaceDefault = {
    ...
};

struct ArrayList ArrayListDefault = {
    &ArrayListInterfaceDefault,
    0,
    ...
};

and then:

struct ArrayList obj = ArrayListDefault;
int count = obj.lpVtbl->GetCount(&obj);

will consistently give you 0.

Normally you would call a constructor function that returns a pointer
to an allocated structure.

The advantage of this is the flexibility it gives you. You can change
the behavior of a function at run time, by assigning the function
pointer to another function. That function can call the old function
pointer and then do something special, or can reimplement the
old function completely. And this is achieved without having to change
a single line in user code.

The array list object is patterned like its name-sake in C#: a flexible
array that can hold items and grows automatically if needed.

Another plus point is that the name-space of the interface is private
to the containing structure, so you can use simple names like "Add"
or "Delete" without fear that it will clash with the user name space.

jacob
 
Dave Thompson

On Sun, 01 Jan 2006 08:26:19 -0500, Eric Sosman wrote:
There's also setjmp().

setjmp() isn't formally required; on some platforms (particularly
calling conventions) there's no good way to make it a true function,
but it could still be a compiler special-case rather than a macro.

va_start and va_arg are required to be macros (and by their nature
cannot be implemented as true functions). va_end and C99 va_copy are
unspecified. In C99 tgmath.h and the new-in-math.h generic
fpclassify/is{finite,inf,normal,nan}/signbit and isgreater etc. are --
even though their functionality doesn't really make sense as macros
and will probably end up being compiler special-cases anyway.

However, that wasn't what I asked. Jack Klein wrote that
the Standard *requires* some functions to be implemented as
macros, and this is true. However, I was unable to think of
any FILE-using functions that are *required* to be macros. <snip>
I'm well aware that the Standard *permits* an implementation
to provide macros for any library functions it chooses, so long
as it provides the actual function as well. I'm also aware that
the Standard makes special provision for getc() and putc() (but
not for getchar(), by the way) that make macro implementations
easier. But does the Standard *require* a macro implementation
for any FILE or FILE* function?

Concur.

... and what this all comes down to, of course, is whether
FILE can be a fully opaque type. I believe it can be, but both
Jack Klein and Richard Heathfield (two not-to-be-ignored people)
take the opposite view. I'm trying to learn why.

Except that it must be an object (=complete) type and thus must have a
definite size. It wasn't really necessary and perhaps wasn't wise to
standardize even that much, but that's water far out to sea now.

- David.Thompson1 at worldnet.att.net
 
