Printing files at every step

Mokkapati

Hi all,

I have data that has to be written out successively: the data is
updated at every step of a loop, and I need to print it to a file at
each step. So I need something like 1.dat, 2.dat, ... and so on, one
file per step.

If there are 100 successive output files, I guess I need 100 file
pointers to be initialized.

FILE *fp_1, *fp_2, and so on...
...
for (i = 0; i < 100; i++)
{
    fp_no = fopen("C:\\loopno.dat", "w");
    fprintf(fp_no, "%d, %lf", x, y);
    fclose(fp_no);
}

I was wondering how to do the above... I have been looking into this
for some time. Can anyone suggest something?

Thanks in advance.

Mokkapati

Eric Sosman

Mokkapati wrote on 05/18/06 17:37:
> Hi all,
>
> I have data that has to be written out successively: the data is
> updated at every step of a loop, and I need to print it to a file at
> each step. So I need something like 1.dat, 2.dat, ... and so on, one
> file per step.
>
> If there are 100 successive output files, I guess I need 100 file
> pointers to be initialized.
>
> FILE *fp_1, *fp_2, and so on...
> ...
> for (i = 0; i < 100; i++)
> {
>     fp_no = fopen("C:\\loopno.dat", "w");
>     fprintf(fp_no, "%d, %lf", x, y);
>     fclose(fp_no);
> }
>
> I was wondering how to do the above... I have been looking into this
> for some time. Can anyone suggest something?


for (i = 0; i < 100; i++) {
    char filename[sizeof "C:\\xxx.dat"];
    FILE *fp;
    sprintf(filename, "C:\\%d.dat", i);
    fp = fopen(filename, "w");
    if (fp == NULL) ... handle error ...
    ...
    if (fclose(fp) != 0) ... handle error ...
}
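
A complete, compilable expansion of that sketch might look like the
following (the error handling and the x, y values here are illustrative
placeholders, not part of the original reply):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int i;

    for (i = 0; i < 100; i++) {
        char filename[sizeof "C:\\xxx.dat"]; /* room for up to 3 digits */
        FILE *fp;
        int x = i;           /* placeholder data; a real program would */
        double y = i * 0.5;  /* compute these at each step             */

        sprintf(filename, "C:\\%d.dat", i);
        fp = fopen(filename, "w");
        if (fp == NULL) {
            fprintf(stderr, "cannot open %s\n", filename);
            return EXIT_FAILURE;
        }
        fprintf(fp, "%d, %lf\n", x, y);
        if (fclose(fp) != 0) {
            fprintf(stderr, "error closing %s\n", filename);
            return EXIT_FAILURE;
        }
    }
    return 0;
}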

Gordon Burditt

> I have data that has to be written out successively: the data is
> updated at every step of a loop, and I need to print it to a file at
> each step. So I need something like 1.dat, 2.dat, ... and so on, one
> file per step.

The argument to fopen() need not be a constant. sprintf() is often
useful for constructing a filename in a character array to pass to
fopen(). You don't have to use it, though; there are many ways of
constructing strings from pieces.
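
As one illustration, here is a minimal sketch that pieces the name
together with strcpy()/strcat() instead of sprintf() (the helper name
make_name, the buffer sizes, and the tiny demo main are hypothetical;
step is assumed non-negative):

#include <stdio.h>
#include <string.h>

/* Build "C:\<step>.dat" without sprintf: copy the prefix, append
   the step number digit by digit, then append the suffix.  The
   caller must supply a buffer big enough for the result. */
static void make_name(char *buf, int step)
{
    char digits[12];
    int n = 0;

    do {                                        /* decimal digits of   */
        digits[n++] = (char)('0' + step % 10);  /* step, least         */
        step /= 10;                             /* significant first   */
    } while (step > 0);

    strcpy(buf, "C:\\");
    while (n > 0)
        strncat(buf, &digits[--n], 1);          /* re-reverse digits */
    strcat(buf, ".dat");
}

int main(void)
{
    char name[32];
    make_name(name, 17);
    puts(name);                                 /* prints C:\17.dat */
    return 0;
}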

From the procedure you describe, you do not need more than one of these
files open simultaneously. One FILE * pointer is enough.

> If there are 100 successive output files, I guess I need 100 file
> pointers to be initialized.

Or you can use the same file pointer variable repeatedly. fopen(), write
something to the file, fclose(), repeat.

> FILE *fp_1, *fp_2, and so on...
> ...
> for (i = 0; i < 100; i++)
> {
>     fp_no = fopen("C:\\loopno.dat", "w");
>     fprintf(fp_no, "%d, %lf", x, y);
>     fclose(fp_no);
> }


Gordon L. Burditt

Mokkapati

Thanks, Mr. Sosman, for the example, and Mr. Burditt for your
insights. I was able to get a sample program running.

Thanks so much.

Mokkapati

Roberto Waltman

Gordon said:
> The argument to fopen() need not be a constant. sprintf() is often
> useful for constructing a filename in a character array to pass to
> fopen(). You don't have to use it, though; there are many ways of
> constructing strings from pieces.
>
> From the procedure you describe, you do not need more than one of
> these files open simultaneously. One FILE * pointer is enough.
>
> Or you can use the same file pointer variable repeatedly. fopen(),
> write something to the file, fclose(), repeat.

<OT>
Reusing the same FILE pointer leads to a simpler and cleaner design,
but there are other reasons to do this: Every operating system has
some limit on the number of files that can be open simultaneously,
both on a per-process and system-wide basis.
You may not be able to open 100 files in a single program, even if
your code is 100% correct from the coding and logic points of view.
And if you can open 100 files, that means 100 fewer are available to
the rest of the system, potentially causing other programs to fail.
</OT>
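
For what it's worth, standard C exposes a related per-implementation
constant: FOPEN_MAX in <stdio.h> is the minimum number of files the
implementation guarantees can be open simultaneously. A small probe
along these lines (illustrative only; it leaks handles on purpose, and
the name probe.tmp is arbitrary) shows where a given system actually
gives up:

#include <stdio.h>

int main(void)
{
    FILE *fp;
    long count = 0;

    printf("FOPEN_MAX = %d\n", (int)FOPEN_MAX);

    /* Keep opening the same file without ever closing it until
       fopen() fails.  The count (plus stdin, stdout, and stderr,
       which are already open) approximates this system's real
       per-process limit.  Deliberately leaky -- demo only. */
    while ((fp = fopen("probe.tmp", "w")) != NULL)
        count++;

    printf("gave up after %ld simultaneous opens\n", count);
    return 0;
}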

Keith Thompson

Roberto Waltman said:
> <OT>
> Reusing the same FILE pointer leads to a simpler and cleaner design,
> but there are other reasons to do this: Every operating system has
> some limit on the number of files that can be open simultaneously,
> both on a per-process and system-wide basis.
> You may not be able to open 100 files in a single program, even if
> your code is 100% correct from the coding and logic points of view.
> And if you can open 100 files, that means 100 fewer are available to
> the rest of the system, potentially causing other programs to fail.

Does *every* system impose this kind of limit? I know that many do,
but it's easy to imagine a system where the number of simultaneously
open files is limited only by the amount of memory necessary to
describe them. Such a system would have no fixed-size tables of file
descriptors, for example.

Kenneth Brody

Keith said:
> [...]
>> You may not be able to open 100 files in a single program, even if
>> your code is 100% correct from the coding and logic points of view.
>> And if you can open 100 files, that means 100 fewer are available to
>> the rest of the system, potentially causing other programs to fail.
>
> Does *every* system impose this kind of limit? I know that many do,
> but it's easy to imagine a system where the number of simultaneously
> open files is limited only by the amount of memory necessary to
> describe them. Such a system would have no fixed-size tables of file
> descriptors, for example.

I've seen a system which dynamically allocated its tables, but it still
"limited" the number of open files for a process to 2000. (Or was it
20,000? It's been quite a few years. In either case, it was much
better than the 20 file limit imposed by the systems I had previously
used.)

The problem with "limited only by the amount of memory" is that a runaway
program could use all of that memory, so arbitrary limits still have their
place in a multitasking environment. Think of a "file leak" as a situation
not unlike a "memory leak" with malloc().

--
+-------------------------+--------------------+-----------------------------+
| Kenneth J. Brody | www.hvcomputer.com | |
| kenbrody/at\spamcop.net | www.fptech.com | #include <std_disclaimer.h> |
+-------------------------+--------------------+-----------------------------+
Don't e-mail me at: <mailto:[email protected]>
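
To make the "file leak" analogy concrete: just as malloc() without
free() leaks memory, an error path that returns without fclose() leaks
a FILE handle, and in a long-running process those handles accumulate
until fopen() starts failing. A hypothetical example of the buggy
pattern:

#include <stdio.h>

/* Hypothetical helper: returns the first integer in the named file,
   or -1 on any failure. */
int first_int(const char *name)
{
    FILE *fp = fopen(name, "r");
    int value;

    if (fp == NULL)
        return -1;
    if (fscanf(fp, "%d", &value) != 1)
        return -1;          /* BUG: fp is never closed -- a "file
                               leak", directly analogous to calling
                               malloc() and never calling free() */
    fclose(fp);
    return value;
}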

Richard Heathfield

Kenneth Brody said:
> I've seen a system which dynamically allocated its tables, but it
> still "limited" the number of open files for a process to 2000.

MS-DOS implementations typically imposed a limit of 20 open files - and that
included stdin, stdout, stderr, and the unfortunately named stdprn and
stdaux - so you were really left with 15. Mind you, that was plenty. On the
one occasion that I needed more, I needed *loads* more - so I simply came
up with a way of keeping track of which ten or so files were busiest, and
kept them open, closing and opening others as and when required.
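
A minimal sketch of that sort of handle cache (entirely illustrative,
not Heathfield's actual code): keep a small table of open streams,
return a cached stream on a hit, and close the least-recently-used
entry when a new file needs a slot.

#include <stdio.h>
#include <string.h>

#define CACHE_SIZE 10   /* streams kept open at any one time */

struct slot {
    char name[64];
    FILE *fp;
    unsigned long last_used;
};

static struct slot cache[CACHE_SIZE];
static unsigned long clock_tick;

/* Return an open stream for 'name' (append mode), reusing a cached
   handle when possible and evicting the least recently used entry
   otherwise.  Returns NULL if fopen() fails.  Callers must not
   fclose() the returned stream themselves. */
static FILE *cached_open(const char *name)
{
    int i, victim = 0;

    for (i = 0; i < CACHE_SIZE; i++) {
        if (cache[i].fp != NULL && strcmp(cache[i].name, name) == 0) {
            cache[i].last_used = ++clock_tick;
            return cache[i].fp;              /* hit: already open */
        }
        if (cache[i].fp == NULL)
            victim = i;                      /* prefer an empty slot */
        else if (cache[victim].fp != NULL &&
                 cache[i].last_used < cache[victim].last_used)
            victim = i;                      /* otherwise track the LRU */
    }

    if (cache[victim].fp != NULL)
        fclose(cache[victim].fp);            /* evict */

    cache[victim].fp = fopen(name, "a");
    if (cache[victim].fp == NULL)
        return NULL;
    strncpy(cache[victim].name, name, sizeof cache[victim].name - 1);
    cache[victim].name[sizeof cache[victim].name - 1] = '\0';
    cache[victim].last_used = ++clock_tick;
    return cache[victim].fp;
}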

Gordon Burditt

> Does *every* system impose this kind of limit? I know that many do,
> but it's easy to imagine a system where the number of simultaneously
> open files is limited only by the amount of memory necessary to
> describe them. Such a system would have no fixed-size tables of file
> descriptors, for example.

It is easy to imagine a system like the above where the system *STILL*
prevents any one process from hogging so many resources that others
cannot run.

Gordon L. Burditt

Clever Monkey

Keith said:
> Does *every* system impose this kind of limit? I know that many do,
> but it's easy to imagine a system where the number of simultaneously
> open files is limited only by the amount of memory necessary to
> describe them. Such a system would have no fixed-size tables of file
> descriptors, for example.

On platforms where nearly everything requires a file of some sort,
each with resources to maintain it, it is pretty easy for a large
server application to hit these soft or hard limits.

CBFalconer

Keith said:
> ... snip ...
>
> Does *every* system impose this kind of limit? I know that many do,
> but it's easy to imagine a system where the number of simultaneously
> open files is limited only by the amount of memory necessary to
> describe them. Such a system would have no fixed-size tables of file
> descriptors, for example.

No. Try CP/M, for example, and its FCBs (file control blocks). The
space is in the user's area, and is limited only by what the user can
get. The disadvantage is that the user has to close files properly.

Roberto Waltman

Keith Thompson said:
> Does *every* system impose this kind of limit? I know that many do,
> but it's easy to imagine a system where the number of simultaneously
> open files is limited only by the amount of memory necessary to
> describe them. Such a system would have no fixed-size tables of file
> descriptors, for example.

Point well taken; I have not yet used *every* OS, so please replace
"every" with "most" in my response. ;)
All those I did use had some fixed limit (in some cases changeable by
the user), but of course an implementer could choose a different
approach.

Roberto Waltman

CBFalconer said:
> No. Try CP/M, for example, and its FCBs (file control blocks). The
> space is in the user's area, and is limited only by what the user can
> get. The disadvantage is that the user has to close files properly.

I forgot about CP/M. Same thing with DEC RSTS-11/RSX-11 and HP
RTE-II/III: file control blocks in user space.

Keith Thompson

CBFalconer said:
> No. Try CP/M, for example, and its FCBs (file control blocks). The
> space is in the user's area, and is limited only by what the user can
> get. The disadvantage is that the user has to close files properly.

Does "the user has to close files properly" imply that if I fail to
close a file, it will continue to consume system resources after my
program terminates?

Note that the standard says that exit() closes all open streams (and
returning from main() is equivalent to calling exit()), so a C
program under a conforming implementation shouldn't have this
problem.

Of course closing files you opened is a good idea anyway.

Roberto Waltman

Keith Thompson said:
> Does "the user has to close files properly" imply that if I fail to
> close a file, it will continue to consume system resources after my
> program terminates?

No. The problem is not resource leakage but potential data
corruption. A system such as CP/M does not keep track of files
accessed via FCBs in userland, so you can lose or otherwise corrupt
data if you do not close open files before terminating a program.

> Note that the standard says that exit() closes all open streams (and
> returning from main() is equivalent to calling exit()), so a C
> program under a conforming implementation shouldn't have this
> problem.

Correct. A conforming C implementation using user-space FCBs would
close any open files as part of the cleanup following main().

Gordon Burditt

> Does "the user has to close files properly" imply that if I fail to
> close a file, it will continue to consume system resources after my
> program terminates?

Likely not, if the resources involved are all in user memory, and
that memory is all recovered on exit.

What might happen (and sometimes did in CP/M) is that the file
contents do not reflect all the writes you did: the file can still
appear zero-length, even though the data was actually written,
because the file length was never updated.

> Note that the standard says that exit() closes all open streams (and
> returning from main() is equivalent to calling exit()), so a C
> program under a conforming implementation shouldn't have this
> problem.

I don't believe this is required to happen if, for example, the
program is terminated by typing an interrupt character rather
than by calling exit().

> Of course closing files you opened is a good idea anyway.

*IF* you get a chance, which may not be the case if the program
gets killed externally.

Gordon L. Burditt

Gordon Burditt

>>> Does "the user has to close files properly" imply that if I fail to
>> Likely not, if the resources involved are all in user memory, and
>> that memory is all recovered on exit.
> [snip]
>
> Thanks.
>
> Gordon, *please* do us all a favor and don't snip attributions when
> you post a followup. We've had this discussion before; see
> <http://groups.google.com/group/comp.lang.c/msg/291038dca20a505e>.

Attribution = misattribution. Even though you and I think that
attributions indicate who wrote what, and we might even agree on
who wrote what, the authors of other articles in the thread don't
agree, and if I don't snip attributions, I get complaints from
several authors each claiming I mis-attributed the *SAME* text to
them. That can draw lawsuits. As far as I know, non-attribution
cannot.

Gordon L. Burditt

Andrew Poelstra

>>>> Does "the user has to close files properly" imply that if I fail to
>>>> close a file, it will continue to consume system resources after my
>>>> program terminates?
>>> Likely not, if the resources involved are all in user memory, and
>>> that memory is all recovered on exit.
>> [snip]
>>
>> Thanks.
>>
>> Gordon, *please* do us all a favor and don't snip attributions when
>> you post a followup. We've had this discussion before; see
>> <http://groups.google.com/group/comp.lang.c/msg/291038dca20a505e>.
>
> Attribution = misattribution. Even though you and I think that
> attributions indicate who wrote what, and we might even agree on
> who wrote what, the authors of other articles in the thread don't
> agree, and if I don't snip attributions, I get complaints from
> several authors each claiming I mis-attributed the *SAME* text to
> them. That can draw lawsuits. As far as I know, non-attribution
> cannot.
>
> Gordon L. Burditt

Quoting others without giving them credit may very well draw lawsuits,
although I can't imagine why anyone would sue over Usenet.

Nobody disagrees over who wrote what. You need to quote attributions.

Ben Pfaff

> Attribution = misattribution. Even though you and I think that
> attributions indicate who wrote what, and we might even agree on
> who wrote what, the authors of other articles in the thread don't
> agree, and if I don't snip attributions, I get complaints from
> several authors each claiming I mis-attributed the *SAME* text to
> them. That can draw lawsuits. As far as I know, non-attribution
> cannot.

I don't understand your objection. I've been posting to Usenet
for many years now and I've never encountered anyone who made
such bizarre allegations. Is this something that really happened
or are you just inventing a hypothetical?