Programming in standard C

jacob navia

Bart said:
I've been using a function like the following:

unsigned int getfilesize(FILE* handle)
{
    unsigned int p,size;
    p=ftell(handle);                /*p=current position*/
    fseek(handle,0,2);              /*get eof position*/
    size=ftell(handle);             /*size in bytes*/
    fseek(handle,p,0);              /*restore file position*/
    return size;
}

What is wrong with this, is it non-standard? (Apart from the likely 4Gb
limit)

I proposed exactly that, and it works in most systems...

But I was flamed without end. I am interested to see what you will get

:)
 
Richard Heathfield

[My first reply appears to have got lost in the ether. This reply is
somewhat shorter, alas.]

Bart C said:
unsigned int getfilesize(FILE* handle)
{
    unsigned int p,size;
    p=ftell(handle);                /*p=current position*/
    fseek(handle,0,2);              /*get eof position*/
    size=ftell(handle);             /*size in bytes*/
    fseek(handle,p,0);              /*restore file position*/
    return size;
}

What is wrong with this, is it non-standard?

Yes. See the Standard's definitions of fseek and ftell, which make it clear
why this is not a portable way of determining a file's size.
 
Flash Gordon

jacob navia wrote, On 26/12/07 23:10:
If there are file systems where there is no way to know that besides
by reading the whole file, then THOSE SYSTEMS would be forced to do that,
not everyone!

For systems where we would have to special-case things, we will need
'one-off' functions to handle it. Do these functions belong in the
standard? BTW, the systems where we have to special-case things are
"almost all of them."[*]

If we define filesize as the number of bytes that would
be returned when reading the file we do not have to special case
anything.

Ah, so you don't want to know if the file will fit on the floppy disk
you are about to write it to...

BTW, I can sometimes fit a file larger than the floppy disk (as far as
ls is concerned) on the floppy disk, I just have to do it the
appropriate way.
But granted, a binary/text mode would be nice.

<snip>

I actually agree that some of what you are saying would be nice, but I
can see the problems. I can also see that none of what you are asking
for is actually required for the problem you were trying to solve.
 
user923005

Who cares?

The C implementation ALREADY abstracts that away from us.

I know that I can open a file for writing under OpenVMS
or whatever and I can write two bytes in it and read them again.

SO FAR the abstraction works. What I am proposing is just a
bit more FUNCTIONALITY.

This functionality cannot be achieved. It is literally impossible on
a multi-user system for obvious reasons.
In the same manner that fopen abstracts all that away from
me.

But fopen(), by abstraction of those details, means that in reality it
won't work for most files on an OpenVMS system. That is why the DEC/
COMPAQ/HP C compilers for OpenVMS have tons of extensions for fopen()
that are specific to OpenVMS. Your technique could give you an
answer, but it would be wrong almost all of the time.
The value returned should be equivalent to the bytes that I would read
in binary mode.

You realize, of course, that with a compressed file, that value has
more than one meaning. In addition, with a multiuser system, the
microsecond after you collect the file, it can be truncated or even
deleted. So the number that it tells you is only a guess at best and
may be a total lie. Do you want an answer that you cannot rely on?
What are you going to do with that answer?
Ah well. I am dreaming then all the time.

I write

dir

and the multi-user file system tells me the size of each file.

And in unix I do

ls

and (WONDER) I get a meaningless result with the file size of
each file.

And if you do ls again, one second later, all of those files might be
gone or different sized. If you relied on the answer that you got,
you would (at best) get the wrong answer on occasion. If you expected
that the size you got would hold all of the file, and if someone added
a record, then your memory allocation to hold it is too small and when
you read the data, the operation will overwrite memory. If you can
come up with a simple work-around for this obvious and fundamental
problem, I would like to hear of it.
For systems where we would have to special-case things, we will need
'one-off' functions to handle it.  Do these functions belong in the
standard?  BTW, the systems where we have to special-case things are
"almost all of them."[*]

If we define filesize as the number of bytes that would
be returned when reading the file we do not have to special case
anything.

I am surprised that you do not understand the ramifications of not
being the only one allowed to access a file.
But granted, a binary/text mode would be nice.




In system 3090 it returns the size of the file. Even in that system,
I can see the size of each file.

But you are surely a bit outmoded, I would say.

According to IBM, you can easily upgrade your system 3090 to a system
9000.

The only thing is that system 9000 was introduced in 1990 (so system 3090
must be a mainframe of 198x!). Even system 9000 doesn't exist anymore,
since IBM retired it in 1998...

Most of the data in the world resides on IBM 3090 hardware. But I
guess that it is not very important.
You search for VERY current examples, don't you?
Exactly.




So what?

If a mapping from the native error to the given error palette is not
possible, the implementation can return that error code!

But we could PORTABLY test for IO errors, "no memory" errors, etc!

I think that you will find the errno.h file contains:
1. The error values mandated by the standard (hint: there is more
than 1)
2. Any other error values that are pertinent for the system on which
the code is compiled.
 
Julienne Walker

If you reduce the requirements of course, it is easy...
But then the usage of your utility is greatly reduced.

Perhaps I missed some of the requirements, but my version was based on
your description of the problem: "we all some day needed to read an
entire file into RAM to process it". Given that, it's trivial to write
a function in standard C (pick a standard).
But the point is that that error mechanism wouldn't be standard.

I do not want to argue that it is impossible to write this program
in C. I am arguing that it is not possible to write it in STANDARD C.

If you throw in a bunch of non-portable requirements, it's pretty much
a given that the resulting program will also be non-portable.
Here's a prototype:
#include <stdio.h>
#include <stdlib.h>
char *readall ( FILE *in, int *n )
{
    char *result = NULL;
    size_t size = 0;
    int curr = 0;
    int ch;

    while ( ( ch = fgetc ( in ) ) != EOF ) {
        if ( curr == size ) {
            char *save = realloc ( result, size + BUFSIZ + 1 );

            if ( save == NULL )
                break;
            result = save;
            size += BUFSIZ;
        }
        result[curr++] = (char)ch;
    }

    *n = curr;

    return result;
}
It's fairly naive for a start, but with more detailed requirements I
still don't see how it can't be done in standard C.

1) You suppose that the file pointer is at the beginning of the file.
To be sure you should do an fseek before reading...

Actually, I made an explicit decision not to rewind the stream because
that limits the usefulness of this function. Just like I expect the
file to be open, I also expect the "file pointer" to be located
wherever the caller wants. This is not a bug.
2) You always allocate in BUFSIZ chunks, and you have an almost 100%
probability of wasting memory.

It's a naive example (I'm reasonably sure I said that already) for
illustrative purposes. I didn't intend it to be flawless production
code. I literally spent two minutes writing it. Anything more
sophisticated would be terribly buggy after two minutes. ;-)
3) If you run out of memory you return a truncated file, giving the user
NO WAY to know that the data is missing!

Once again, naive example for illustrative purposes. Didn't I say that
it's a start and not the finished product?
4) The string is not zero terminated... You write the EOF value at the
end in most cases.

Again, this was an explicit decision, not a bug. Your alternative is
equally valid.
Look, all those bugs can be easily corrected and your approach is maybe
sounder than mine. You will agree however, that

fpos_t filesize(FILE *);

would be useful, wouldn't it?

Taken at face value, yes. But when you throw in all of the variables,
it's not quite as obvious. For example, how would we define the size
of a file? The number of bytes? The actual storage cost? The number of
characters after textual conversions are made? All of those are useful
metrics, yet if we include a function for each, the standard library
starts to become bloated. If we define a function that can handle all
of the options, we're likely to get yet another weird function that's
way more complicated than we want.

There's more to standardizing functions than saying "wouldn't it be
useful?".
You took away the most important use of this utility:
abstracting away the difference between binary and text files
from the user. If we take that away, it would be useful only
for binary files.

You didn't abstract it away either. The caller still has to consider
these things to pass in the proper mode. If you "take it away", as you
say I've done, the function is still useful for both binary and text
files because fgetc does the right thing for the stream without any
special work on my part.

I get the distinct impression that you're basing these complaints on
requirements that I'm not aware of. Can you give me a formal
description of this function so that I have a better idea of what I'm
dealing with?
Thanks for your input.


No no, thank you. I noticed that you've been quick to work on bug
reports with lcc-win32. ;-)
 
user923005

I've been using a function like the following:

unsigned int getfilesize(FILE* handle)
{
    unsigned int p,size;
    p=ftell(handle);                /*p=current position*/
    fseek(handle,0,2);              /*get eof position*/
    size=ftell(handle);             /*size in bytes*/
    fseek(handle,p,0);              /*restore file position*/
    return size;
}

What is wrong with this, is it non-standard? (Apart from the likely 4Gb
limit)


Because anything could happen between getting the size and making use of it?
In that case pretty much everything is impossible.

Exactly. Your solution is portable across every single-user system
that has only one point of access (but Windows does not qualify, even
though single-user, because drivers, folders and files can be shared).
I guess it works well for a toaster IC.
So the file is compressed, so what? If the compression is made transparent
by the OS, I will get the full filesize.

No. You only get an estimate. At least with some file systems that I
know of. The only way to get the real count is to do a table scan,
and that only works if you lock the file. Chances are very good that
even if you are allowed to lock the file, users will be very angry at
you for locking the file for the duration of a table scan. I also
guess that they won't be too happy when you map a 12 gig file into
memory on a machine that has 32 bit virtual memory but only 8 gigs
physical RAM.
If not, I will get the size of a
compressed file. The compression is likely irrelevant, and I can't do
anything with it anyway. And if I can, I will know how to decompress and how
to get the inflated size.

The operating system does it. You have no control over it
whatsoever. You are not even told the algorithm that they are using.
Someone mentioned streams in this thread, but on my computer as an example,
I have so many hundred thousand files 99.99...% of which are just a bunch of
so many bytes. This type of 'File' seems so dominant that surely it should
have been given special treatment apart from streams.

I agree that it would be nice to have a stream classifier.

e.g.:
struct file_kinds fdescribe(FILE *F);
 
jacob navia

user923005 said:
This functionality cannot be achieved. It is literally impossible on
a multi-user system for obvious reasons.

Please stop that. With the same arguments I can tell that

fseek() is bogus since somebody else can erase the
file after you do your fseek.

ftell: same problem.

fread: the same
fwrite: the same. You wrote something but root took it away.

etc etc.

Please: LET'S BE REALISTIC.

filesize returns the size in bytes that reading character
by character would return if the file is unchanged.

It does NOT guarantee that your coffee has sugar, that your shoes
do not hurt, or that the file will still be there tomorrow.
 
user923005

Please stop that. With the same arguments I can tell that

fseek() is bogus since somebody else can erase the
file after you do your fseek.

ftell: same problem.

fread: the same
fwrite: the same. You wrote something but root took it away.

Really? You have the file open and seek to a position and the OS lets
someone else delete it? Marvelous. Show me this system that I may
stand in astonishment. For instance, in Windows:

C:\tmp>cl /D_CRT_SECURE_NO_WARNINGS /W4 /Ox fst.c
Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 14.00.50727.762
for 80x86
Copyright (C) Microsoft Corporation. All rights reserved.

fst.c
Microsoft (R) Incremental Linker Version 8.00.50727.762
Copyright (C) Microsoft Corporation. All rights reserved.

/out:fst.exe
fst.obj

C:\tmp>type fst.c

#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>

char s[32767];

int main(void)
{
    FILE *f;
    f = fopen("fst.c", "r");
    if (f == NULL) {
        puts(strerror(errno));
        exit(EXIT_FAILURE);
    }
    fseek(f, 9, SEEK_SET);
    puts("Waiting for user input...");
    fgets(s, sizeof s, stdin);
    fclose(f);
    return 0;
}

C:\tmp>fst
Waiting for user input...

In another window:
C:\tmp>del fst.c
C:\tmp\FST.C
The process cannot access the file because it is being used by another
process.

C:\tmp>
etc etc.

Please: LET'S BE REALISTIC.

filesize returns the size in bytes that reading character
by character would return if the file is unchanged.

It does NOT guarantee that your coffee has sugar, that your shoes
do not hurt, or that the file will still be there tomorrow.

I would like the number to be actually useful. Unfortunately you have
not proposed any system where it would have utility.
 
user923005

I've been using a function like the following:

unsigned int getfilesize(FILE* handle)
{
    unsigned int p,size;
    p=ftell(handle);                /*p=current position*/
    fseek(handle,0,2);              /*get eof position*/
    size=ftell(handle);             /*size in bytes*/
    fseek(handle,p,0);              /*restore file position*/
    return size;
}

What is wrong with this, is it non-standard? (Apart from the likely 4Gb
limit)


Because anything could happen between getting the size and making use of it?
In that case pretty much everything is impossible.


So the file is compressed, so what? If the compression is made transparent
by the OS, I will get the full filesize. If not, I will get the size of a
compressed file. The compression is likely irrelevant, and I can't do
anything with it anyway. And if I can, I will know how to decompress and how
to get the inflated size.

Someone mentioned streams in this thread, but on my computer as an example,
I have so many hundred thousand files 99.99...% of which are just a bunch of
so many bytes. This type of 'File' seems so dominant that surely it should
have been given special treatment apart from streams.

An aside:

12.25: What's the difference between fgetpos/fsetpos and ftell/fseek?
What are fgetpos() and fsetpos() good for?

A: ftell() and fseek() use type long int to represent offsets
(positions) in a file, and may therefore be limited to offsets
of about 2 billion (2**31-1). The newer fgetpos() and fsetpos()
functions, on the other hand, use a special typedef, fpos_t, to
represent the offsets. The type behind this typedef, if chosen
appropriately, can represent arbitrarily large offsets, so
fgetpos() and fsetpos() can be used with arbitrarily huge files.
fgetpos() and fsetpos() also record the state associated with
multibyte streams. See also question 1.4.

References: K&R2 Sec. B1.6 p. 248; ISO Sec. 7.9.1,
Secs. 7.9.9.1, 7.9.9.3; H&S Sec. 15.5 p. 252.
 
Gordon Burditt

You do understand that this operation has no real meaning on a multi-user system?
Ah well. I am dreaming then all the time.

I write

dir

and the multi-user file system tells me the size of each file.

And in unix I do

ls

and (WONDER) I get a meaningless result with the file size of
each file.

You mean it tells you what the size of the file USED TO BE when the
command ran. It might not be that size now. Files can grow. Or
be chopped to zero length. Or be deleted.

Try running "ls -l > log.txt" and see what it lists for the file
size of log.txt.

If you want to use a filesize() function to provide an INITIAL ESTIMATE
of the file size, it might work, provided you can tolerate the actual
value being higher or lower. And in this case, issues of line endings
might not matter so much. The estimate for "reasonable" files might
be off by less than 10% and not hurt efficiency too much.

Even some unitasking systems might have problems with file sizes
changing, if, for example, it was invoked on the file stdout is
redirected to.
If we define filesize as the number of bytes that would
be returned when reading the file we do not have to special case
anything.

Only if you can make the answer not change between the time you compute
the answer and the time you use it.
 
user923005

You mean it tells you what the size of the file USED TO BE when the
command ran.  It might not be that size now.  Files can grow.  Or
be chopped to zero length.  Or be deleted.

Try running "ls -l > log.txt" and see what it lists for the file
size of log.txt.

If you want to use a filesize() function to provide an INITIAL ESTIMATE
of the file size, it might work, provided you can tolerate the actual
value being higher or lower.  And in this case, issues of line endings
might not matter so much.  The estimate for "reasonable" files might
be off by less than 10% and not hurt efficiency too much.


Even some unitasking systems might have problems with file sizes
changing, if, for example, it was invoked on the file stdout is
redirected to.


Only if you can make the answer not change between the time you compute
the answer and the time you use it.

This also assumes that the answer is correct.
For instance, many operating systems report file sizes in blocks. That
tells you (even if you lock the file) only an approximate size.

You do know that the real file size is <= (reported blocks * bytes per
block) but this number will be totally bogus for compressed files.

It seems like such a handy thing to have this "file size" function.
Is it astounding that we don't have one? Not at all, when you
consider the problems associated with collection of a number like that
in a way that is reliable. In fact, the remarkable thing is that
thinking people would even debate it, because the problems with the
collection of such a number are so patently obvious. The system
library functions are designed to return something better than a
guess. If a given function would only be able to produce a guess, the
implementors decided not to write it. After all, someone might use
that guess and it will be wrong sometimes.
 
Gordon Burditt

This functionality cannot be achieved. It is literally impossible on
a multi-user system for obvious reasons.
Please stop that. With the same arguments I can tell that

fseek() is bogus since somebody else can erase the
file after you do your fseek.

On a POSIX system, fseek() still works EVEN IF someone remove()s
the file between the fseek() and a fread() or fwrite() after it.
ftell: same problem.
On a POSIX system, ftell() still works EVEN IF someone remove()s
the file between a ftell() and a fread(), fwrite(), or fseek() after
it.

fread: the same
fwrite: the same. You wrote something but root took it away.
On a POSIX system, fread() and fwrite() still work EVEN IF someone
remove()s the file. Ok, the file does go away after it gets
fclose()d.
Please: LET'S BE REALISTIC.

Files that are continuously growing are common. This could include
the standard output of the running program itself, or files being
syslogged to, or all sorts of things. If a program creates a file,
chances are the file exists at some point with a size between zero
and its final size.
filesize returns the size in bytes that reading character
by character would return if the file is unchanged.

And if someone changes the file instead, it's acceptable to segfault?
 
Gordon Burditt

This functionality cannot be achieved. It is literally impossible on
a multi-user system for obvious reasons.
Really? You have the file open and seek to a position and the OS lets
someone else delete it? Marvelous.

Yes, and furthermore you can still read from the file, or write to it,
*AFTER* someone else deletes it (until you fclose() it).
Show me this system that I may
stand in astonishment.

UNIX or POSIX.

[Windows example of the delete failing on an open file snipped.]
 
Stephen Montgomery-Smith

Eric said:
jacob said:
Eric said:
jacob navia wrote:
[...]
You can't do *anything* in just standard C.

Then why do you bother with this newsgroup? Why do
you waste your time on a powerless language? Why don't
you go away and become a regular on comp.lang.mumps or
comp.lang.apl or any newsgroup devoted to a language you
consider more useful than C? Since C has zero utility
(in your stated estimation), even comp.lang.cobol would
be a forum of more value. Go! Spend your talent on
something more useful than the torment of us poor old
dinosaurs! Go!

Stop whining and see the sentence in my message:
<quote>
This confirms my arguments about the need to improve the quality
of the standard library!
<end quote>

You wrote: "You can't do *anything* in just standard C."
Do you stand by that statement, or do you retreat from it?
If you stand by it, why are you here?

I think his "anything" was hyperbole, and clearly meant as such! IMHO
hyperbole is a proper form of communication, but does require that the
recipient isn't a pedant!
 
Stephen Montgomery-Smith

user923005 said:
When you act in an inflammatory way, surely you expect an inflammatory
response.
Of course, the game would not be nearly so fun if we all talked to
each other in a civil manner. But that would assume that we actually
wanted to *make* progress.

Yes, but inflammatory exchanges are a major part of what makes this group
so fun to read. So please don't discourage his participation!
 
Stephen Montgomery-Smith

jacob said:
In my "Happy Christmas" message, I proposed a function to read
a file into a RAM buffer and return that buffer or NULL if
the file doesn't exist or some other error is found.

It is interesting to see that the answers to that message prove that
programming exclusively in standard C is completely impossible even
for a small and ridiculously simple program like the one I proposed.

1) I read the file contents in binary mode, which should allow me
to use ftell/fseek to determine the file size.

No objections to this were raised, except of course the obvious
one, if the "file" was some file associated with stdin, for
instance under some unix machine /dev/tty01 or similar...

I did not test for this since it is impossible in standard C:
isatty() is not in the standard.

2) There is NO portable way to determine which characters should be
ignored when transforming a binary file into a text file. One
reader (CB Falconer) proposed to open the file in binary mode
and then in text mode and compare the two buffers to see which
characters were missing... Well, that would be too expensive.

3) I used different values for errno defined by POSIX, but not by
the C standard, that defines only a few. Again, error handling
is not something important to be standardized, according to
the committee. errno is there but its usage is absolutely
not portable at all and goes immediately beyond what standard C
offers.

We hear again and again that this group is about standard C *"ONLY"*.
Could someone here then, tell me how this simple program could be
written in standard C?

This confirms my arguments about the need to improve the quality
of the standard library!

You can't do *anything* in just standard C.


As a newcomer to this group who hasn't even read the FAQ, let me
nevertheless brazenly seek to answer your question.

I think you are correct in that standard C is of somewhat limited value.
But perhaps we should see standard C as perhaps a tool to be embedded
into real C, rather than as an object with value in of itself. By "real
C", I mean any implementation that is used in real life (Visual C, GCC
on Linux, etc).

Now there is a sense in which the kind of function you are asking about
- to put a file into memory - is really the kind of thing a systems
programmer would do. Any portable version of such a function would
typically be much slower than any special function designed around a
particular OS. Most importantly, it would be a pointless thing to add
to the standard, because rather than liberating OS creators it would
hamstring them. Instead of standard C being this tremendously powerful
springboard from which to create useful implementations, it would go the
way PASCAL went, great in theory, but too limited in practice. Even if
your function existed in standard C, I would still use mmap() for my
unix programs, because I know that mmap is designed to work well with my
operating system of choice.



Now this particular newsgroup has chosen to make standard C its only
legitimate discussion point. This is a bit awkward to newcomers to this
group, because for most groups the name is somewhat self-explanatory,
and one would normally expect a group with the name comp.lang.c to be a
general discussion ground of all things related to C. So people just
post their messages without reading the FAQ, and for most
newsgroups this works just fine.

But I can also see why some folks would like a "standard C only"
discussion group. One problem is that "all things related to C" is a
huge subject, especially for the kinds of people likely to be using
newsgroups. Now I can see that discussion of standard C only will
necessarily be rather arcane discussions, but there should be a place to
do this, and why not this place?

They could rename their newsgroup to comp.lang.c-standard or such like,
but then the group would get far less postings. As such it would become
like alt.sci.math.galois_fields (a random example that came to mind),
which is mostly spam with only sporadic postings that are even slightly
on-topic. No. They are much better off with a name like comp.lang.c so
that the off-topic but non-spam postings at least outnumber the spam
postings. Now if only the regulars could learn to be more friendly and
patient in redirecting newcomers to the groups they really need, but I
am not the behavior police! And anyway, it is fun for lurkers like me
to read postings by those who have such an awkward combination of easily
giving offense and easily taking offense.

Stephen
 
backslash null

Stephen Montgomery-Smith said:
I think his "anything" was hyperbole, and clearly meant as such! IMHO
hyperbole is a proper form of communication, but does require that the
recipient isn't a pedant!

So 'stop whining' was from Jacob to Dr. Sosman. Tja.

Without alternatives such as Jacob's lcc, standard C, the syntax to which
*all* of ISO's sexiest syntaxes has reference, would be six thousand words
about const. And a thousand points of perfect, republican light.

'Pedant' is a badge he wears proudly. I love using antecedents improperly
as does Jabba. Arizonans don't seem to have a problem with being
systematically wrong.
--
Connecticut Sucks

Liebermann endorses McCain
[...]
You can't do *anything* in just standard C.

Then why do you bother with this newsgroup? Why do
you waste your time on a powerless language? Why don't
you go away and become a regular on comp.lang.mumps or
comp.lang.apl or any newsgroup devoted to a language you
consider more useful than C? Since C has zero utility
(in your stated estimation), even comp.lang.cobol would
be a forum of more value. Go! Spend your talent on
something more useful than the torment of us poor old
dinosaurs! Go!
 
Stephen Montgomery-Smith

backslash said:
So 'stop whining' was from Jacob to Dr. Sosman. Tja.

Without alternatives such as Jacob's lcc, standard C, the syntax to which
*all* of ISO's sexiest syntaxes has reference, would be six thousand words
about const. And a thousand points of perfect, republican light.

'Pedant' is a badge he wears proudly. I love using antecedents improperly
as does Jabba. Arizonans don't seem to have a problem with being
systematically wrong.

I have this feeling that you said something very intelligent here,
probably rubbing my face in the mud. But I confess I don't get it!
 
Richard Heathfield

[Stephen's reply, whilst long, was well worth reading. I only have comments
to make on a tiny portion of it. Please imagine that, instead of snipping
the rest, I had quoted it all and written <aol>I agree!</aol> underneath.]

Stephen Montgomery-Smith said:
jacob navia wrote:


As a newcomer to this group who hasn't even read the FAQ, let me
nevertheless brazenly seek to answer your question.

I think you are correct in that standard C is of somewhat limited value.

*All* tools are of somewhat limited value. I think many people would be
astounded at just how much can be done with standard C, and just how
widely that functionality can be implemented.
But perhaps we should see standard C as a tool to be embedded
into real C, rather than as an object with value in and of itself.

How do you feel about s/rather than/as well/ - because I think that such a
change reflects reality rather more closely. Certainly for my own part, I
know that my use of what you call "real C" (by which you appear to mean "C
+ non-ISO9899 libraries") is dwarfed by my use of ISO C. Most of the C
programs I write are ISO C programs. Only a very small proportion use
non-ISO9899 libraries.

<snip>
 
Malcolm McLean

jacob navia said:
In my "Happy Christmas" message, I proposed a function to read
a file into a RAM buffer and return that buffer or NULL if
the file doesn't exist or some other error is found.

It is interesting to see that the answers to that message prove that
programming exclusively in standard C is completely impossible even
for a small and ridiculously simple program like the one I proposed.
#include <stdio.h>
#include <stdlib.h>

/*
  function to slurp in an ASCII file
  Params: path - path to file
  Returns: malloced string containing whole file
*/
char *loadfile(char *path)
{
    FILE *fp;
    int ch;
    long i = 0;
    long size = 0;
    char *answer;

    fp = fopen(path, "r");
    if(!fp)
    {
        printf("Can't open %s\n", path);
        return 0;
    }

    fseek(fp, 0, SEEK_END);
    size = ftell(fp);
    fseek(fp, 0, SEEK_SET);

    answer = malloc(size + 100);
    if(!answer)
    {
        printf("Out of memory\n");
        fclose(fp);
        return 0;
    }

    while( (ch = fgetc(fp)) != EOF)
        answer[i++] = ch;

    answer[i++] = 0;

    fclose(fp);

    return answer;
}

This will do it. Add 100 + size/10 for luck if paranoid.
You are right that a perverse implementation can break this, which is a bug
in the standard.
 
