Adding the ability to add functions into structures?


Jack Klein

Jack Klein said:


FLAME ON!!!


Oh. Was that it? Sheesh. Flame off again. (sigh)


This is pretty much how I do it, yes. Just a minor nit - FILE isn't (quite)
a perfect example, in at least two ways:

1) FILE should, in my opinion, be a truly opaque type, with the structure
definition hidden internally. typedef struct _iobuf FILE; is already more
than we need to know. The whole schmeer is far, far more than we need to
know.

I agree that FILE is not a perfectly opaque type, although I
deliberately decided not to complicate the point I was making to the
OP by bringing that up.

FILE cannot truly be an opaque type as far as standard C is concerned for
the simple reason that the standard requires some functions to be
implemented as macros, and so some of the details must be exposed by
<stdio.h>.

Nevertheless, it is opaque enough for the point of the article, in the
sense that none of its internal details or workings are documented.
There is nothing that a strictly conforming program can do with a FILE
* other than through the documented functions.
2) I wish I wish I wish that fclose took FILE ** rather than FILE *, don't
you?

The one I consider much more important than that would be defining
fclose(NULL) as a no-op, just like free(NULL) is. It would greatly
simplify cleaning up resources. Right now you have to write code like
this:

WARNING: pseudo code not compiled, tested, or sanity checked!

return_type some_func( /*...*/ )
{
    FILE *fin = NULL;
    FILE *fout = NULL;
    data_type *data_ptr = NULL;
    return_type something = DEFAULT_VALUE;

    /* ... */

    fin = fopen(...);
    fout = fopen(...);
    data_ptr = malloc(...);

    /* ... */

    if (fin) fclose(fin);
    if (fout) fclose(fout);
    free(data_ptr);   /* free(NULL) is already a safe no-op */

    return something;
}

The additional checking for null FILE pointers before calling fclose()
is not a major effort, but it is just annoying enough to earn a place
on the pet peeve list after writing it INT_MAX times.

BTW, what's wrong with:

void rh_fclose(FILE **fp)
{
    if (*fp)
    {
        fclose(*fp);
        *fp = NULL;
    }
}

Of course, I could just as easily write:

void jk_fclose(FILE *fp)
{
    if (fp) fclose(fp);
}

Maybe time for both of us to make a new year's resolution to stop
carping about the easily fixable stuff?

Nah, that would take all the fun out of it!
 

Jack Klein

Inheritance is crucial.

Says who? The first hit on Google for the phrase "object oriented"
including quotations is
http://java.sun.com/docs/books/tutorial/java/concepts/, which has the
page title "Object-Oriented Programming Concepts". The first question
and answer on this page are:

What Is an Object?

An object is a software bundle of related variables and methods.
Software objects are often used to model real-world objects you find
in everyday life.

Further down the page comes the question "What is Inheritance".

Inheritance is a heavily-used feature that is provided by most object
oriented languages and systems, but it is indeed an extra feature. The
true definition of object orientation is pretty well captured in the
first question and answer. Object orientation means that data is
encapsulated and not manipulated directly, only via specifically
defined functions, often called "methods." And the C FILE type meets
this definition.
An object is any set of data items that are "part of the same thing". C
structures are therefore objects. (The C standard further specifies that an
object must be stored contiguously in memory. This is a language issue and a
fairly obvious thing to do, but not strictly necessary.)

That's quite correct. In actual fact, the definition of "object" in
the C++ language standard is exactly the same as it is in C, and has
nothing at all to do with classes or object oriented programming. In
C++, as in C, an int is an object.
A program becomes "object-oriented" not when it uses objects, but when the
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is an interesting point of view, and one perhaps subscribed to by
many programmers who use object oriented paradigms, perhaps in
languages that have more built-in support for it than C does. Still,
I would be mildly surprised if you could find an authoritative
definition of "object orientation" that states that inheritance is
required.
objects enter into relationships with each other. In C++, like most languages
that support object-orientation, this is achieved via inheritance. However,
there are other ways: for example, Microsoft Windows objects all respond to
the same message system, Java interfaces specify common methods, and text
adventure objects have verb handlers.

Actually, inheritance can be done in C as well, but the result tends
to be rather messy. Most people who attempt it are trying to write
C++ in C and, in my experience, the result is neither good C++ nor
good C.
 

Eric Sosman

Jack said:
[...]
FILE cannot truly be an opaque type as far as standard C is concerned for
the simple reason that the standard requires some functions to be
implemented as macros, and so some of the details must be exposed by
<stdio.h>.

What functions that deal with FILE or FILE* are required
to be macros?
 

Mark McIntyre

Jack said:
[...]
FILE cannot truly be an opaque type as far as standard C is concerned for
the simple reason that the standard requires some functions to be
implemented as macros, and so some of the details must be exposed by
<stdio.h>.

What functions that deal with FILE or FILE* are required
to be macros?

getchar() and getc() may be implemented as macros, as may the put...
equivalents.


Mark McIntyre
 

Keith Thompson

Mark McIntyre said:
Jack said:
[...]
FILE cannot truly be an opaque type as far as standard C is concerned for
the simple reason that the standard requires some functions to be
implemented as macros, and so some of the details must be exposed by
<stdio.h>.

What functions that deal with FILE or FILE* are required
to be macros?

getchar() and getc() may be implemented as macros, as may the put...
equivalents.

Any library function may be implemented as a macro. I think assert()
is the only one that's required to be a macro.

getc() and putc() are equivalent to fgetc() and fputc(), respectively,
except that *if* they're implemented as macros they can evaluate their
stream arguments more than once.

A conforming implementation could make getc() and putc() regular
functions without macro equivalents, and make FILE as opaque as it
likes (it still has to be an object type), but making them macros is
probably important for performance.

Then again, FILE could be just an array of N unsigned chars (or a
struct containing such an array), with the code that uses it
converting FILE* to REAL_FILE* to access the internals.
 

Richard Heathfield

Jack Klein said:
FILE cannot truly be an opaque type as far as standard C is concerned for
the simple reason that the standard requires some functions to be
implemented as macros, and so some of the details must be exposed by
<stdio.h>.

Absolutely true, but a nuisance nonetheless.
The one I consider much more important than that would be defining
fclose(NULL) as a no-op, just like free(NULL) is.

That, too.

The additional checking for null FILE pointers before calling fclose()
is not a major effort, but it is just annoying enough to earn a place
on the pet peeve list after writing it INT_MAX times.

For sufficiently low values of INT_MAX, anyway.
BTW, what's wrong with:

void rh_fclose(FILE **fp)
{
    if (*fp)
    {
        fclose(*fp);
        *fp = NULL;
    }
}

Nothing. That's how most of my opaque type destructors look. But (and this
will perhaps raise a chuckle) whilst I freely and consciously borrowed the
FILE * model for my opaque types - with the above modification - it had not
actually occurred to me to wrap C's I/O streams in this model! Thanks for
the suggestion. :)
 

Chuck F.

.... snip wishes from RH about fclose(FILE**) ...

The additional checking for null FILE pointers before calling
fclose() is not a major effort, but it is just annoying enough
to earn a place on the pet peeve list after writing it INT_MAX
times.

BTW, what's wrong with:

void rh_fclose(FILE **fp)
{
    if (*fp)
    {
        fclose(*fp);
        *fp = NULL;
    }
}

If this is called from a function that can be passed a generic
FILE*, that may actually be stdin or stdout. I don't believe the
standard insists on those being modifiable pointers.

--
"If you want to post a followup via groups.google.com, don't use
the broken "Reply" link at the bottom of the article. Click on
"show options" at the top of the article, then click on the
"Reply" at the bottom of the article headers." - Keith Thompson
More details at: <http://cfaj.freeshell.org/google/>
 

Eric Sosman

Keith said:
Mark McIntyre said:
Jack Klein wrote:

[...]
FILE cannot truly be an opaque type as far as standard C is concerned for
the simple reason that the standard requires some functions to be
implemented as macros, and so some of the details must be exposed by
<stdio.h>.

What functions that deal with FILE or FILE* are required
to be macros?

getchar() and getc() may be implemented as macros, as may the put...
equivalents.


Any library function may be implemented as a macro. I think assert()
is the only one that's required to be a macro.

There's also setjmp().

However, that wasn't what I asked. Jack Klein wrote that
the Standard *requires* some functions to be implemented as
macros, and this is true. However, I was unable to think of
any FILE-using functions that are *required* to be macros. Can
you suggest any such?

I'm well aware that the Standard *permits* an implementation
to provide macros for any library functions it chooses, so long
as it provides the actual function as well. I'm also aware that
the Standard makes special provision for getc() and putc() (but
not for getchar(), by the way) that make macro implementations
easier. But does the Standard *require* a macro implementation
for any FILE or FILE* function?

... and what this all comes down to, of course, is whether
FILE can be a fully opaque type. I believe it can be, but both
Jack Klein and Richard Heathfield (two not-to-be-ignored people)
take the opposite view. I'm trying to learn why.
 

Richard Heathfield

Eric Sosman said:
I'm well aware that the Standard *permits* an implementation
to provide macros for any library functions it chooses, so long
as it provides the actual function as well. I'm also aware that
the Standard makes special provision for getc() and putc() (but
not for getchar(), by the way) that make macro implementations
easier. But does the Standard *require* a macro implementation
for any FILE or FILE* function?

... and what this all comes down to, of course, is whether
FILE can be a fully opaque type. I believe it can be, but both
Jack Klein and Richard Heathfield (two not-to-be-ignored people)
take the opposite view. I'm trying to learn why.

Well, to be perfectly honest with you I was just nodding "wisely" at Jack's
assertion rather than think it through myself. (So much for
not-to-be-ignored!)

But now that I'm actually thinking about it (albeit not terribly brightly,
since dinner is moderately imminent), I don't know of anything that would
stop the following program from being strictly conforming:

#include <stdio.h>

int main(void)
{
    FILE foo;
    return 0;
}

and of course that wouldn't be legal if FILE were truly opaque.

Also, I was half-thinking (er, yes, I think that's what I mean) of the
possibility that, since getc et al *can* be implemented as macros, it is
necessary to make FILE visible in order to allow implementors the freedom
to implement it as a macro if they so choose. This is probably what Jack
meant, although even now I will freely admit that I haven't thought this
through thoroughly, or at least I don't think I have.
 

Emmanuel Delahaye

Albert wrote:
So structures are useful to group variables, so you can refer to a
collection as a single entity. Wouldn't it be useful to also have the
ability to collect variables and functions?

As K&R say, C programs consist of variables to store the input and
functions to manipulate them.

This would make C object-oriented - how cool would that be?

Are there any problems with adding the ability to have functions
encapsulated in structures?

There is no good reason to do that (multiplying function instances?
Why in the world?).

The idea behind OOP is that the code is organized around objects and
methods (functions) to manipulate the objects. The objects are designed
to be instantiable, but not the functions. The idea is to have a language
trick that 'connects' the object and its method:

obj.method()

as in C++, but behind the scenes, the method stays unique. What you can
do in C is to store the address of the method (function) in the
structure using a pointer to a function, but it brings nothing really
interesting.

The C-way is rather to define function names like:

type_function()

and to pass the address of the object as the first parameter of these
functions

type_function(type *self)

It's also common to have explicit constructors and destructors:

type * type_create()
type_delete(type *self)

Note that if you are only using pointers to the objects, the type can be
completely opaque:

typedef struct type type;

A good standard-C-example is the FILE function family.

Constructors / destructors:
fopen() / fclose()

Methods:
fgetc() / fputc() / ferror() etc.
 

jacob navia

Emmanuel Delahaye wrote:

.... snip Emmanuel's article, quoted in full above ...

A good standard-C-example is the FILE function family.

Constructors / destructors:
fopen() / fclose()

Methods:
fgetc() / fputc() / ferror() etc.

I disagree.

I proposed (in another message in this thread) to program using
interfaces, i.e. arrays of function pointers that can be used
(as is done routinely in COM programming) to encapsulate
the functions that act on a given object.

This approach has several advantages:
1) You can use common names like Add, Delete, etc, without
polluting the global namespace. The problem with the approach
you propose is that you *have* to prefix the function with
some prefix since you can't have two functions called "Add"
in the same program.

2) You can change or extend the interface without recompiling
client code. For instance, you can add new methods at the
end of the interface ("inheritance") without any recompilation.

3) You can "subclass" a functionality of the object by saving the stored
pointer, then replacing it by your own one, without any need
for recompilation.

For instance:

The procedure in the interface, "Add", is lacking some functionality.
When you create the object, you save the pointer to the "Add" function,
then you assign a pointer to a new function, "Add1", and store it in the
same place:

Object->InterfaceTable->Add = Add1;

All the code that calls that function pointer will continue to
work unmodified, using your new "Add1" function. Since you saved the
pointer to the old function, you can use it to call the old functionality
in case you do not want to totally replace it but just extend it.

This is much more flexible than C++.

jacob
 

Jordan Abel

For sufficiently low values of INT_MAX, anyway.

INT_MAX is required to be at least 32767, of course. Though if you do
not include <limits.h>, you may define it yourself to some other value.
This, of course, should never be done.
 

Jordan Abel

Keith said:
Mark McIntyre said:
On Sat, 31 Dec 2005 16:29:02 -0500, in comp.lang.c, Eric Sosman wrote:

Jack Klein wrote:

[...]
FILE cannot truly be an opaque type as far as standard C is concerned for
the simple reason that the standard requires some functions to be
implemented as macros, and so some of the details must be exposed by
<stdio.h>.

What functions that deal with FILE or FILE* are required
to be macros?

getchar() and getc() may be implemented as macros, as may the put...
equivalents.


Any library function may be implemented as a macro. I think assert()
is the only one that's required to be a macro.

There's also setjmp().

However, that wasn't what I asked. Jack Klein wrote that
the Standard *requires* some functions to be implemented as
macros, and this is true. However, I was unable to think of
any FILE-using functions that are *required* to be macros. Can
you suggest any such?

I'm well aware that the Standard *permits* an implementation
to provide macros for any library functions it chooses, so long
as it provides the actual function as well. I'm also aware that
the Standard makes special provision for getc() and putc() (but
not for getchar(), by the way)

getchar does not have any arguments, therefore it wouldn't make sense to
give it permission to evaluate its arguments more than once. I believe
it is implicitly permitted to evaluate the expression (stdin) more than
once, since (stdin) doesn't have side effects.

In other words, #define getchar() getc(stdin) is legal. Particularly
since "The getchar function is equivalent to getc with the argument
stdin."
that make macro implementations
easier. But does the Standard *require* a macro implementation
for any FILE or FILE* function?

... and what this all comes down to, of course, is whether
FILE can be a fully opaque type. I believe it can be, but both
Jack Klein and Richard Heathfield (two not-to-be-ignored people)
take the opposite view. I'm trying to learn why.

Even if the standard required a macro, that wouldn't mean such a macro
would be required to expose the innards of the file structure.

#define getc(f) getc(f) is legal, and glibc appears to in fact do this,
for some unknown reason.

What did they say that indicated a view that FILE can't be an opaque
type? It's not even really required to be a structure - an
implementation could make FILE an integer type if it wants - or void.
 

Jordan Abel

Eric Sosman said:


Well, to be perfectly honest with you I was just nodding "wisely" at Jack's
assertion rather than think it through myself. (So much for
not-to-be-ignored!)

But now that I'm actually thinking about it (albeit not terribly brightly,
since dinner is moderately imminent), I don't know of anything that would
stop the following program from being strictly conforming:

#include <stdio.h>

int main(void)
{
    FILE foo;
    return 0;
}

and of course that wouldn't be legal if FILE were truly opaque.

Suppose that FILE is another pointer, though - or an integer. That would
be opaque enough to make no difference.

Or FILE is void and the implementation does not fail to compile on
encountering a field or variable declared void. (There's nothing
forbidding an implementation from emitting a diagnostic on encountering
a declared FILE, or on encountering anything else.)

Besides, there are multiple levels of "opaque". There are types which
can only be declared by the program and passed around by reference.
fpos_t is one.

Whether the definition can be modified while retaining the ability to
execute older binaries is really not the standard's business, since it
doesn't specify compatibility between different implementations.
Also, I was half-thinking (er, yes, I think that's what I mean) of the
possibility that, since getc et al *can* be implemented as macros, it is
necessary to make FILE visible in order to allow implementors the freedom
to implement it as a macro if they so choose. This is probably what Jack
meant, although even now I will freely admit that I haven't thought this
through thoroughly, or at least I don't think I have.

But arguably the implementors can choose not to make it visible if they
do not wish to implement these as macros. Both are under the
implementors' control.
 

Jordan Abel

A good standard-C-example is the FILE function family.

Constructors / destructors:
fopen() / fclose()

Methods:
fgetc() / fputc() / ferror() etc.

Speaking of which, I think it would have been better for them to take
the FILE* argument first, to match fprintf. However, the reasons for
this inconsistency lie forgotten in the depths of history. (It strikes
me as especially odd given that UNIX functions that take a file
descriptor number do take it first. Anyone here who was at Bell Labs
around that time have any comment on this? I forget, does DMR read this
newsgroup?)
 

Chuck F.

Jordan said:
Speaking of which, I think it would have been better for them to
take the FILE* argument first, to match fprintf. However, the
reasons for this inconsistency lie forgotten in the depths of
history. (It strikes me as especially odd given that UNIX
functions that take a file descriptor number do take it first.
Anyone here who was at Bell Labs around that time have any
comment on this? I forget, does DMR read this newsgroup?)

If you think of the assembly language historically needed to
implement those functions, all becomes clear (especially if you
have done such implementations). fprintf needs the FILE first to
allow the variadic mechanisms to function. For the others,
remember that the usual stack passing mechanism is last argument
pushed first. Thus, for fputc(c, f) the stack at function entry
will look like:

stack-mark
f
c
<top of stack>

This allows the routine to examine c and decide on such things as
'does this get printed', or 'should I convert into a tab sequence'
before worrying about the actual destination. This preliminary
work can be done without disturbing the stack-mark or f.

In this case the gain is fairly small. However when the parameters
include such things as fieldsize or precision, the gains can be
significant.

 

Keith Thompson

I think we need to define just what we mean by "opaque".

FILE is IMHO sufficiently opaque as far as the standard is concerned;
it's not impossible to peek into its innards, but any program that
does so is non-portable -- not just in the pedantic "the standard
doesn't guarantee anything" sense, but in the sense that it will break
when ported to a different platform.

(I'm assuming here that the definition of type FILE really does differ
across implementations; I've never actually checked it.)
Suppose that FILE is another pointer, though - or an integer. That would
be opaque enough to make no difference.

Or FILE is void and the implementation does not fail to compile on
encountering a field or variable declared void. (There's nothing
forbidding an implementation from emitting a diagnostic on encountering
a declared FILE, or on encountering anything else.)

The standard specifically requires FILE to be an object type. I
suppose an implementation could act as if void is an object type, but
IMHO that goes beyond what "conforming" means.
 

Keith Thompson

Chuck F. said:
Jordan Abel wrote: [...]
Speaking of which, I think it would have been better for them to
take the FILE* argument first, to match fprintf. However, the
reasons for this inconsistency lie forgotten in the depths of
history. (It strikes me as especially odd given that UNIX
functions that take a file descriptor number do take it first.
Anyone here who was at Bell Labs around that time have any
comment on this? I forget, does DMR read this newsgroup?)

If you think of the assembly language historically needed to implement
those functions, all becomes clear (especially if you have done such
implementations). fprintf needs the FILE first to allow the variadic
mechanisms to function. For the others, remember that the usual
stack passing mechanism is last argument pushed first. Thus, for
fputc(c, f) the stack at function entry will look like:

stack-mark
f
c
<top of stack>

This allows the routine to examine c and decide on such things as
'does this get printed', or 'should I convert into a tab sequence'
before worrying about the actual destination. This preliminary work
can be done without disturbing the stack-mark or f.

In this case the gain is fairly small. However when the parameters
include such things as fieldsize or precision, the gains can be
significant.

You seem to be assuming that the generated code can't easily access
items lower on the stack before accessing items higher on the stack.
In all the assembly languages I'm familiar with, it's easy to access
any item on the stack (within reason) by using a
stack-pointer-plus-offset addressing mode. All parameters are
available simultaneously and equally easily. (Some CPUs might limit
the offset to some small value, but plenty for a couple of
parameters.)

Historically, I know this is true for the PDP-11. What about the
PDP-7 (isn't that the first system where C was implemented)?

Did some CPUs actually require upper stack items to be popped before
lower stack items could be accessed?
 

Eric Sosman

Keith said:
[...]
The standard specifically requires FILE to be an object type. I
suppose an implementation could act as if void is an object type, but
IMHO that goes beyond what "conforming" means.

So it does; I think that clears up my confusion. Jack
Klein was right when he said FILE cannot be an opaque type,
but it's not because of "the simple reason that the standard
requires some functions to be implemented as macros." Rather,
it's because 7.19.1/2 requires that FILE be "an object type"
and 6.2.5/1 says that object types "fully describe" objects.
There's no formal definition of "opaque," but it appears
antithetical to "full description."

Thanks to Keith, and to Richard Heathfield for making
the point in a slightly different way.
 

Keith Thompson

Eric Sosman said:
Keith said:
[...]
The standard specifically requires FILE to be an object type. I
suppose an implementation could act as if void is an object type, but
IMHO that goes beyond what "conforming" means.

So it does; I think that clears up my confusion. Jack
Klein was right when he said FILE cannot be an opaque type,
but it's not because of "the simple reason that the standard
requires some functions to be implemented as macros." Rather,
it's because 7.19.1/2 requires that FILE be "an object type"
and 6.2.5/1 says that object types "fully describe" objects.
There's no formal definition of "opaque," but it appears
antithetical to "full description."

Thanks to Keith, and to Richard Heathfield for making
the point in a slightly different way.

But 7.19.1 says that FILE is

an object type capable of recording all the information needed to
control a stream, including [a bunch of stuff]

It doesn't necessarily have to record the information *directly*. I
can imagine FILE being a typedef for void* (making FILE* void**);
internally the void* can be converted to a pointer to a structure
containing the actual information.

Or if you want the information stored directly in the FILE, make it an
appropriately sized array of unsigned char.

It's probably not worth the effort, though. Even if FILE is a typedef
for a struct containing the required information explicitly, the fact
that the standard says nothing about the content makes it sufficiently
opaque for most purposes. Any scheme to hide the information can be
circumvented by a programmer sufficiently determined to shoot himself
in the foot.
 
