Printing files every step

Discussion in 'C Programming' started by Mokkapati, May 18, 2006.

  1. Mokkapati

    Mokkapati Guest

    Hi all,

    I have data that has to be written successively, that is, this data is
    updated at every step in the loop and I need to print this in a file.
    So, I need something like, for every step, 1.dat, 2.dat....and so on.

    If there are 100 successive output files, for that, I guess I need 100
    file pointers to be initialized.

    FILE *fp_1,*fp_2 and so on..
    ....
    ....
    ....
    for (i = 0; i < 100; i++)
    {
        fp_no = fopen("C:\\loopno.dat", "w");
        fprintf(fp_no, "%d, %lf", x, y);
        fclose(fp_no);
    }

    I was wondering how to do the above... I have been looking into
    this for some time. Can anyone suggest something?

    Thanks in advance.

    Mokkapati
     
    Mokkapati, May 18, 2006
    #1

  2. Eric Sosman

    Eric Sosman Guest

    Mokkapati wrote on 05/18/06 17:37:
    > Hi all,
    >
    > I have data that has to be written successively, that is, this data is
    > updated at every step in the loop and I need to print this in a file.
    > So, I need something like, for every step, 1.dat, 2.dat....and so on.
    >
    > If there are 100 successive output files, for that, I guess I need 100
    > file pointers to be initialized.
    >
    > FILE *fp_1,*fp_2 and so on..
    > ...
    > ...
    > ...
    > for (i=0;i<100;i++)
    > {
    > fp_no = fopen("C:\\loopno.dat","w");
    > fprintf(fp_no,"%d, %lf", x, y);
    > fclose(fp_no);
    > }
    >
    > I was wondering how to do the above... I have been looking into
    > this for some time. Can anyone suggest something?


    for (i = 0; i < 100; i++) {
        char filename[sizeof "C:\\xxx.dat"];
        FILE *fp;
        sprintf(filename, "C:\\%d.dat", i);
        fp = fopen(filename, "w");
        if (fp == NULL) ... handle error ...
        ...
        if (fclose(fp) != 0) ... handle error ...
    }
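
    Fleshed out into a complete program, that might look something like
    this (x and y here are made-up stand-ins for whatever the loop
    actually computes at each step):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int i;
        for (i = 0; i < 100; i++) {
            char filename[sizeof "C:\\xxx.dat"];
            FILE *fp;
            int x = i;               /* stand-in per-step data */
            double y = i * 0.5;      /* stand-in per-step data */

            sprintf(filename, "C:\\%d.dat", i);  /* "C:\0.dat" .. "C:\99.dat" */
            fp = fopen(filename, "w");
            if (fp == NULL) {
                perror(filename);
                return EXIT_FAILURE;
            }
            fprintf(fp, "%d, %f\n", x, y);
            if (fclose(fp) != 0) {
                perror(filename);
                return EXIT_FAILURE;
            }
        }
        return 0;
    }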

    --
     
    Eric Sosman, May 18, 2006
    #2

  3. >I have data that has to be written successively, that is, this data is
    >updated at every step in the loop and I need to print this in a file.
    >So, I need something like, for every step, 1.dat, 2.dat....and so on.


    The argument to fopen() need not be a constant. sprintf() is often
    useful in constructing a filename in a character array to pass to fopen().
    You don't have to use that, though; there are many ways of constructing
    strings from pieces.
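
    For instance, one piecewise way to build the name (a sketch with
    made-up names, equivalent to a single sprintf() call):

    #include <stdio.h>
    #include <string.h>

    /* Build "C:\<n>.dat" into buf, piece by piece.  buf must be large
       enough for the longest possible name. */
    void make_name(char *buf, int n)
    {
        char numbuf[12];              /* enough digits for any 32-bit int */
        sprintf(numbuf, "%d", n);     /* the varying piece    */
        strcpy(buf, "C:\\");          /* fixed prefix         */
        strcat(buf, numbuf);          /* splice in the number */
        strcat(buf, ".dat");          /* fixed suffix         */
    }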

    From the procedure you describe, you do not need more than one of these
    files open simultaneously. One FILE * pointer is enough.

    >If there are 100 successive output files, for that, I guess I need 100
    >file pointers to be initialized.


    Or you can use the same file pointer variable repeatedly. fopen(), write
    something to the file, fclose(), repeat.

    >
    >FILE *fp_1,*fp_2 and so on..
    >...
    >...
    >...
    >for (i=0;i<100;i++)
    >{
    > fp_no = fopen("C:\\loopno.dat","w");
    > fprintf(fp_no,"%d, %lf", x, y);
    > fclose(fp_no);
    >}


    Gordon L. Burditt
     
    Gordon Burditt, May 18, 2006
    #3
  4. Mokkapati

    Mokkapati Guest

    Thanks Mr. Sosman for the example and Mr. Burditt for your insights. I
    could run a sample code.

    Thanks so much.

    Mokkapati
     
    Mokkapati, May 18, 2006
    #4
  5. Gordon Burditt wrote:
    >Mokkapati wrote:
    >>I have data that has to be written successively, that is, this data is
    >>updated at every step in the loop and I need to print this in a file.
    >>So, I need something like, for every step, 1.dat, 2.dat....and so on.

    >
    >The argument to fopen() need not be a constant. sprintf() is often
    >useful in constructing a filename in a character array to pass to fopen().
    >You don't have to use that, though; there are many ways of constructing
    >strings from pieces.
    >
    >From the procedure you describe, you do not need more than one of these
    >files open simultaneously. One FILE * pointer is enough.
    >
    >>If there are 100 successive output files, for that, I guess I need 100
    >>file pointers to be initialized.

    >
    >Or you can use the same file pointer variable repeatedly. fopen(), write
    >something to the file, fclose(), repeat.


    <OT>
    Reusing the same FILE pointer leads to a simpler and cleaner design,
    but there are other reasons to do this: Every operating system has
    some limit on the number of files that can be open simultaneously,
    both on a per-process and system-wide basis.
    You may not be able to open 100 files in a single program, even if
    your code is 100% correct from the coding and logic points of view.
    And if you can open 100 files, that means 100 less are available to
    the system as a whole, potentially causing other programs to fail.
    </OT>
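
    (Still off-topic: on POSIX systems you can inspect the per-process
    limit with getrlimit(); this sketch is POSIX-specific, not standard
    C, and the numbers it prints vary by system.)

    #include <stdio.h>
    #include <sys/resource.h>    /* POSIX, not standard C */

    int main(void)
    {
        struct rlimit rl;
        if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
            printf("open files: soft limit %lu, hard limit %lu\n",
                   (unsigned long)rl.rlim_cur,    /* casts for printf;    */
                   (unsigned long)rl.rlim_max);   /* may be RLIM_INFINITY */
        return 0;
    }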
     
    Roberto Waltman, May 19, 2006
    #5
  6. Roberto Waltman <> writes:
    [...]
    > <OT>
    > Reusing the same FILE pointer leads to a simpler and cleaner design,
    > but there are other reasons to do this: Every operating system has
    > some limit on the number of files that can be open simultaneously,
    > both on a per-process and system-wide basis.
    > You may not be able to open 100 files in a single program, even if
    > your code is 100% correct from the coding and logic points of view.
    > And if you can open 100 files, that means 100 less are available to
    > the system as a whole, potentially causing other programs to fail.


    Does *every* system impose this kind of limit? I know that many do,
    but it's easy to imagine a system where the number of simultaneously
    open files is limited only by the amount of memory necessary to
    describe them. Such a system would have no fixed-size tables of file
    descriptors, for example.

    > </OT>


    --
    Keith Thompson (The_Other_Keith) <http://www.ghoti.net/~kst>
    San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
    We must do something. This is something. Therefore, we must do this.
     
    Keith Thompson, May 19, 2006
    #6
  7. Keith Thompson wrote:
    >
    > Roberto Waltman <> writes:
    > [...]

    [...]
    > > You may not be able to open 100 files in a single program, even if
    > > your code is 100% correct from the coding and logic points of view.
    > > And if you can open 100 files, that means 100 less are available to
    > > the system as a whole, potentially causing other programs to fail.

    >
    > Does *every* system impose this kind of limit? I know that many do,
    > but it's easy to imagine a system where the number of simultaneously
    > open files is limited only by the amount of memory necessary to
    > describe them. Such a system would have no fixed-size tables of file
    > descriptors, for example.


    I've seen a system which dynamically allocated its tables, but it still
    "limited" the number of open files for a process to 2000. (Or was it
    20,000? It's been quite a few years. In either case, it was much
    better than the 20 file limit imposed by the systems I had previously
    used.)

    The problem with "limited only by the amount of memory" is that a runaway
    program could use all of that memory, so arbitrary limits still have their
    place in a multitasking environment. Think of a "file leak" as a situation
    not unlike a "memory leak" with malloc().
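
    A quick way to watch such a "file leak" run into the limit, if you
    care to try it (tmpfile() is standard C; where, and whether, it
    fails is entirely system-dependent):

    #include <stdio.h>

    int main(void)
    {
        unsigned long n = 0;
        while (tmpfile() != NULL)   /* deliberately never fclose()d */
            n++;
        printf("opened %lu temporary files before failure\n", n);
        return 0;
    }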

    --
    +-------------------------+--------------------+-----------------------------+
    | Kenneth J. Brody | www.hvcomputer.com | |
    | kenbrody/at\spamcop.net | www.fptech.com | #include <std_disclaimer.h> |
    +-------------------------+--------------------+-----------------------------+
    Don't e-mail me at: <mailto:>
     
    Kenneth Brody, May 19, 2006
    #7
  8. Kenneth Brody said:

    > Keith Thompson wrote:
    >>
    >> Does *every* system impose this kind of limit? I know that many do,
    >> but it's easy to imagine a system where the number of simultaneously
    >> open files is limited only by the amount of memory necessary to
    >> describe them. Such a system would have no fixed-size tables of file
    >> descriptors, for example.

    >
    > I've seen a system which dynamically allocated its tables, but it still
    > "limited" the number of open files for a process to 2000.


    MS-DOS implementations typically imposed a limit of 20 open files - and that
    included stdin, stdout, stderr, and the unfortunately named stdprn and
    stdaux - so you were really left with 15. Mind you, that was plenty. On the
    one occasion that I needed more, I needed *loads* more - so I simply came
    up with a way of keeping track of which ten or so files were busiest, and
    kept them open, closing and opening others as and when required.
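
    A minimal sketch of that idea (the names and sizes here are made up,
    not my original code): keep a small cache of open streams and evict
    the least-recently-used one on a miss.

    #include <stdio.h>
    #include <string.h>

    #define CACHE_SIZE 10   /* "ten or so" slots kept open at once */

    static struct slot {
        char name[64];          /* pathname; caller keeps it under 64   */
        FILE *fp;               /* open handle, or NULL if slot is free */
        unsigned long stamp;    /* last-use tick, for LRU eviction      */
    } cache[CACHE_SIZE];

    static unsigned long tick;

    /* Return an open stream for name, reopening on a cache miss. */
    FILE *cached_open(const char *name)
    {
        int i, victim = 0;

        for (i = 0; i < CACHE_SIZE; i++) {
            if (cache[i].fp != NULL && strcmp(cache[i].name, name) == 0) {
                cache[i].stamp = ++tick;        /* hit: refresh stamp   */
                return cache[i].fp;
            }
            if (cache[i].fp == NULL || cache[i].stamp < cache[victim].stamp)
                victim = i;                     /* best eviction so far */
        }
        if (cache[victim].fp != NULL)           /* miss: evict LRU slot */
            fclose(cache[victim].fp);
        cache[victim].fp = fopen(name, "a");    /* "a", not "w": an     */
        if (cache[victim].fp != NULL) {         /* evicted file must    */
            strcpy(cache[victim].name, name);   /* not be truncated     */
            cache[victim].stamp = ++tick;       /* when reopened later  */
        }
        return cache[victim].fp;
    }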

    --
    Richard Heathfield
    "Usenet is a strange place" - dmr 29/7/1999
    http://www.cpax.org.uk
    email: rjh at above domain (but drop the www, obviously)
     
    Richard Heathfield, May 19, 2006
    #8
  9. >Does *every* system impose this kind of limit? I know that many do,
    >but it's easy to imagine a system where the number of simultaneously
    >open files is limited only by the amount of memory necessary to
    >describe them. Such a system would have no fixed-size tables of file
    >descriptors, for example.


    It is easy to imagine a system like the above where the system *STILL*
    refuses to allow any one process to hog so many resources that
    others are prevented from running.

    Gordon L. Burditt
     
    Gordon Burditt, May 19, 2006
    #9
  10. Keith Thompson wrote:
    > Roberto Waltman <> writes:
    > [...]
    >> <OT>
    >> Reusing the same FILE pointer leads to a simpler and cleaner design,
    >> but there are other reasons to do this: Every operating system has
    >> some limit on the number of files that can be open simultaneously,
    >> both on a per-process and system-wide basis.
    >> You may not be able to open 100 files in a single program, even if
    >> your code is 100% correct from the coding and logic points of view.
    >> And if you can open 100 files, that means 100 less are available to
    >> the system as a whole, potentially causing other programs to fail.

    >
    > Does *every* system impose this kind of limit? I know that many do,
    > but it's easy to imagine a system where the number of simultaneously
    > open files is limited only by the amount of memory necessary to
    > describe them. Such a system would have no fixed-size tables of file
    > descriptors, for example.
    >

    On those platforms where nearly everything requires a file of some sort,
    and resources to maintain them, it is pretty easy for a large server
    application to hit these soft or hard limits.
     
    Clever Monkey, May 19, 2006
    #10
  11. CBFalconer

    CBFalconer Guest

    Keith Thompson wrote:
    > Roberto Waltman <> writes:
    >

    ... snip ...
    >
    >> You may not be able to open 100 files in a single program, even if
    >> your code is 100% correct from the coding and logic points of view.
    >> And if you can open 100 files, that means 100 less are available to
    >> the system as a whole, potentially causing other programs to fail.

    >
    > Does *every* system impose this kind of limit? I know that many do,
    > but it's easy to imagine a system where the number of simultaneously
    > open files is limited only by the amount of memory necessary to
    > describe them. Such a system would have no fixed-size tables of file
    > descriptors, for example.


    No. Try CP/M for example, and FCBs. The space is in the user's
    area, and limited only by what he can get. The disadvantage is
    that the user has to close files properly.

    --
    "The most amazing achievement of the computer software industry
    is its continuing cancellation of the steady and staggering
    gains made by the computer hardware industry..." - Petroski
     
    CBFalconer, May 19, 2006
    #11
  12. Keith Thompson <> wrote:
    >Roberto Waltman <> writes:
    >[...]
    >> <OT>
    >> Reusing the same FILE pointer leads to a simpler and cleaner design,
    >> but there are other reasons to do this: Every operating system has
    >> some limit on the number of files that can be open simultaneously,
    >> both on a per-process and system-wide basis.
    >> You may not be able to open 100 files in a single program, even if
    >> your code is 100% correct from the coding and logic points of view.
    >> And if you can open 100 files, that means 100 less are available to
    >> the system as a whole, potentially causing other programs to fail.

    >
    >Does *every* system impose this kind of limit? I know that many do,
    >but it's easy to imagine a system where the number of simultaneously
    >open files is limited only by the amount of memory necessary to
    >describe them. Such a system would have no fixed-size tables of file
    >descriptors, for example.
    >
    >> </OT>


    Point well taken. I have not yet used *every* OS; please replace
    "every" with "most" in my response ;)
    All those that I did use had some fixed limit (in some cases
    changeable by the users), but of course an implementer could choose a
    different approach.
     
    Roberto Waltman, May 19, 2006
    #12
  13. CBFalconer <> wrote:
    >Keith Thompson wrote:
    >> Roberto Waltman <> writes:
    >>> You may not be able to open 100 files in a single program, even if
    >>> your code is 100% correct from the coding and logic points of view.
    >>> And if you can open 100 files, that means 100 less are available to
    >>> the system as a whole, potentially causing other programs to fail.

    >>
    >> Does *every* system impose this kind of limit? I know that many do,
    >> but it's easy to imagine a system where the number of simultaneously
    >> open files is limited only by the amount of memory necessary to
    >> describe them. Such a system would have no fixed-size tables of file
    >> descriptors, for example.

    >
    >No. Try CP/M for example, and FCBs. The space is in the user's
    >area, and limited only by what he can get. The disadvantage is
    >that the user has to close files properly.


    I forgot about CP/M - Same thing with DEC RSTS-11/RSX-11 and HP
    RTE-II/III - File control blocks in user space.
     
    Roberto Waltman, May 19, 2006
    #13
  14. CBFalconer <> writes:
    > Keith Thompson wrote:
    >> Roberto Waltman <> writes:
    >>

    > ... snip ...
    >>
    >>> You may not be able to open 100 files in a single program, even if
    >>> your code is 100% correct from the coding and logic points of view.
    >>> And if you can open 100 files, that means 100 less are available to
    >>> the system as a whole, potentially causing other programs to fail.

    >>
    >> Does *every* system impose this kind of limit? I know that many do,
    >> but it's easy to imagine a system where the number of simultaneously
    >> open files is limited only by the amount of memory necessary to
    >> describe them. Such a system would have no fixed-size tables of file
    >> descriptors, for example.

    >
    > No. Try CP/M for example, and FCBs. The space is in the user's
    > area, and limited only by what he can get. The disadvantage is
    > that the user has to close files properly.


    Does "the user has to close files properly" imply that if I fail to
    close a file, it will continue to consume system resources after my
    program terminates?

    Note that the standard says that exit() closes all open streams (and
    returning from main() is equivalent to calling exit()), so a C program
    under a conforming implementation shouldn't have this problem.

    Of course closing files you opened is a good idea anyway.
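
    For instance, on a conforming hosted implementation this leaves the
    text in demo.dat (a made-up name) even though fclose() is never
    called:

    #include <stdio.h>

    int main(void)
    {
        FILE *fp = fopen("demo.dat", "w");
        if (fp != NULL)
            fprintf(fp, "written without an explicit fclose\n");
        return 0;   /* as if exit(0): open streams are flushed and closed */
    }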

    --
    Keith Thompson (The_Other_Keith) <http://www.ghoti.net/~kst>
    San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
    We must do something. This is something. Therefore, we must do this.
     
    Keith Thompson, May 19, 2006
    #14
  15. Keith Thompson <> wrote:
    >CBFalconer <> writes:
    >> No. Try CP/M for example, and FCBs. The space is in the user's
    >> area, and limited only by what he can get. The disadvantage is
    >> that the user has to close files properly.

    >
    >Does "the user has to close files properly" imply that if I fail to
    >close a file, it will continue to consume system resources after my
    >program terminates?


    No. The problem is not resource leakage, but potential data
    corruption. A system such as CP/M does not keep track of files
    accessed via FCBs in userland; therefore you can lose or otherwise
    corrupt data if you do not close open files before terminating a
    program.

    >Note that the standard says that exit() closes all open streams (and
    >returning from main() is equivalent to calling exit()), so a C program
    >under a conforming implementation shouldn't have this problem.


    Correct. A conforming C implementation on a system with user-space
    FCBs would close any open files as part of the cleanup following main().
     
    Roberto Waltman, May 19, 2006
    #15
  16. >Does "the user has to close files properly" imply that if I fail to
    >close a file, it will continue to consume system resources after my
    >program terminates?


    Likely not, if the resources involved are all in user memory, and
    that memory is all recovered on exit.

    What might happen (and sometimes did in CP/M) is that the file
    contents do not reflect all the writes you did (the file still
    appears as zero-length, even if the data was actually written,
    since the file length was not updated).

    >Note that the standard says that exit() closes all open streams (and
    >returning from main() is equivalent to calling exit()), so a C program
    >under a conforming implementation shouldn't have this problem.


    I don't believe this is required to happen if, for example, the
    program is terminated by typing an interrupt character rather
    than by calling exit().

    >Of course closing files you opened is a good idea anyway.


    *IF* you get a chance, which may not be the case if the program
    gets killed externally.

    Gordon L. Burditt
     
    Gordon Burditt, May 19, 2006
    #16
  17. (Gordon Burditt) writes:
    >>Does "the user has to close files properly" imply that if I fail to
    >>close a file, it will continue to consume system resources after my
    >>program terminates?

    >
    > Likely not, if the resources involved are all in user memory, and
    > that memory is all recovered on exit.

    [snip]

    Thanks.

    Gordon, *please* do us all a favor and don't snip attributions when
    you post a followup. We've had this discussion before; see
    <http://groups.google.com/group/comp.lang.c/msg/291038dca20a505e>.

    --
    Keith Thompson (The_Other_Keith) <http://www.ghoti.net/~kst>
    San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
    We must do something. This is something. Therefore, we must do this.
     
    Keith Thompson, May 20, 2006
    #17
  18. >>>Does "the user has to close files properly" imply that if I fail to
    >>>close a file, it will continue to consume system resources after my
    >>>program terminates?

    >>
    >> Likely not, if the resources involved are all in user memory, and
    >> that memory is all recovered on exit.

    >[snip]
    >
    >Thanks.
    >
    >Gordon, *please* do us all a favor and don't snip attributions when
    >you post a followup. We've had this discussion before; see
    ><http://groups.google.com/group/comp.lang.c/msg/291038dca20a505e>.


    Attribution = misattribution. Even though you and I think that
    attributions indicate who wrote what, and we might even agree on
    who wrote what, the authors of other articles in the thread don't
    agree, and if I don't snip attributions, I get complaints from
    several authors each claiming I mis-attributed the *SAME* text to
    them. That can draw lawsuits. As far as I know, non-attribution
    cannot.

    Gordon L. Burditt
     
    Gordon Burditt, May 20, 2006
    #18
  19. On 2006-05-19, Gordon Burditt <> wrote:
    >>>>Does "the user has to close files properly" imply that if I fail to
    >>>>close a file, it will continue to consume system resources after my
    >>>>program terminates?
    >>>
    >>> Likely not, if the resources involved are all in user memory, and
    >>> that memory is all recovered on exit.

    >>[snip]
    >>
    >>Thanks.
    >>
    >>Gordon, *please* do us all a favor and don't snip attributions when
    >>you post a followup. We've had this discussion before; see
    >><http://groups.google.com/group/comp.lang.c/msg/291038dca20a505e>.

    >
    > Attribution = misattribution. Even though you and I think that
    > attributions indicate who wrote what, and we might even agree on
    > who wrote what, the authors of other articles in the thread don't
    > agree, and if I don't snip attributions, I get complaints from
    > several authors each claiming I mis-attributed the *SAME* text to
    > them. That can draw lawsuits. As far as I know, non-attribution
    > cannot.
    >
    > Gordon L. Burditt

    Quoting others without giving them credit may very well draw lawsuits,
    although I can't imagine why anyone would sue over Usenet.

    Nobody disagrees over who wrote what. You need to quote attributions.

    --
    Andrew Poelstra [apoelstra@wp____ware.net] < http://www.wpsoftware.net/blog >
    Get your game faces on, because this is not a game.
     
    Andrew Poelstra, May 20, 2006
    #19
  20. Ben Pfaff

    Ben Pfaff Guest

    (Gordon Burditt) writes:

    > Attribution = misattribution. Even though you and I think that
    > attributions indicate who wrote what, and we might even agree on
    > who wrote what, the authors of other articles in the thread don't
    > agree, and if I don't snip attributions, I get complaints from
    > several authors each claiming I mis-attributed the *SAME* text to
    > them. That can draw lawsuits. As far as I know, non-attribution
    > cannot.


    I don't understand your objection. I've been posting to Usenet
    for many years now and I've never encountered anyone who made
    such bizarre allegations. Is this something that really happened
    or are you just inventing a hypothetical?
    --
    int main(void){char p[]="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz.\
    \n",*q="kl BIcNBFr.NKEzjwCIxNJC";int i=sizeof p/2;char *strchr();int putchar(\
    );while(*q){i+=strchr(p,*q++)-p;if(i>=(int)sizeof p)i-=sizeof p-1;putchar(p[i]\
    );}return 0;}
     
    Ben Pfaff, May 20, 2006
    #20
