Integrating FILE * and int file handles

Denis Remezov

Woodster said:
I would have put it in a Windows or CE group however I am trying to get
my file handling routines platform independent. The dynamic_cast and
STL stuff I have pretty much got sorted out so far; it is just the file
handling that I need to organise. As far as I can tell, the open and
close functions are MS only so I am trying to get a FILE * out of an int
file handle that I can pass to my generic code.

Regards

Woodster

FILE*-based API (the corresponding header file is stdio.h) is a part of
the standard I/O library as defined by ANSI C. C++ has inherited it, so
it is standard C++. It is neither the most efficient nor the most
convenient of the options, but it is very portable.
Beware: not everything that may be found in stdio.h is standard C++ (or C).
One example is the curious int fileno(FILE*).

File I/O based on open, close and so on is not defined in the standard
C++ or C. It is a part of POSIX though (not MS), so it has good portability
too. There is no standard way to "convert" between the two APIs.

Denis
 
Woodster

I currently have some code for an application that is running on Win32.
I have tried to keep anything not directly gui related as separate as
possible for portability reasons, including file access. Now has come
the time to try and implement the program on a Windows CE platform.
Problem is, that the MFC CArchive class uses an integer file pointer and
calls to open, close etc and my file handling class is doing everything
with FILE *, such as fopen, fclose etc...

Am I correct in thinking (as mentioned in the Microsoft Help files)
that file handles returned by open(...) are not ANSI-compatible, meaning
I should stick with my existing code using FILE *? Is there any
way to translate an integer file handle returned by open(...) to a FILE
* that I can use with my existing code?

I am hoping to (at some stage) go to a Palm platform as well, so am
trying to avoid any platform-specific code where possible. Unfortunately
this also means stripping out any STL code and overriding dynamic_cast
for some platforms (i.e. Windows CE).

Any advice would be much appreciated.

Thanks in advance

Woody
 
Pete Becker

Woodster said:
Any advice would be much appreciated.

You should probably address this to one of the CE newsgroups:
microsoft.public.win32.programmer.wince is appropriate. But in brief,
the libraries that come with MS's compilers for CE are missing much of
what's in the C++ standard library. You can work around what's missing,
or you can buy a third-party library (like ours) that provides a
complete C++ standard library.
 
Denis Remezov

Woodster said:
In this case, I may be better off converting my existing code from FILE
* to int file handles. You said that using FILE * was neither efficient
nor convenient. How does that compare with the int file handles and
related functions?

The Standard I/O (FILE-based) is buffered, the POSIX I/O (open() etc.) is
not (not supposed to be). The latter requires you to select buffer sizes
manually. If you do it right for the task at hand, you may notice better
performance than with the standard I/O; if you don't, the performance may
deteriorate quite a lot.
With the standard I/O you don't have to worry about that. It is perfectly
adequate for a great many uses.
There are other, less portable ways that can be both much faster and much
easier to use (memory mapped files), specifically for random access, but
we are drifting off-topic here.
My file handling is rather simple so have not really had any troubles
but I am interested in what you mean by "nor the most convenient"
I had the C++ file-based streams (<fstream>) in mind. They are superior
to the ANSI C I/O library in several aspects of usage.

Denis
 
Woodster

You should probably address this to one of the CE newsgroups:
microsoft.public.win32.programmer.wince is appropriate. But in brief,
the libraries that come with MS's compilers for CE are missing much of
what's in the C++ standard library. You can work around what's missing,
or you can buy a third-party library (like ours) that provides a
complete C++ standard library.

I would have put it in a Windows or CE group however I am trying to get
my file handling routines platform independent. The dynamic_cast and
STL stuff I have pretty much got sorted out so far; it is just the file
handling that I need to organise. As far as I can tell, the open and
close functions are MS only so I am trying to get a FILE * out of an int
file handle that I can pass to my generic code.

Regards

Woodster
 
Woodster

FILE*-based API (the corresponding header file is stdio.h) is a part of
the standard I/O library as defined by ANSI C. C++ has inherited it, so
it is standard C++. It is neither the most efficient nor the most
convenient of the options, but it is very portable.
Beware: not everything that may be found in stdio.h is standard C++ (or C).
One example is the curious int fileno(FILE*).

File I/O based on open, close and so on is not defined in the standard
C++ or C. It is a part of POSIX though (not MS), so it has good portability
too. There is no standard way to "convert" between the two APIs.

Denis

In this case, I may be better off converting my existing code from FILE
* to int file handles. You said that using FILE * was neither efficient
nor convenient. How does that compare with the int file handles and
related functions?

My file handling is rather simple so have not really had any troubles
but I am interested in what you mean by "nor the most convenient"

I will need to do a bit of a search to ensure that the int file handles
are supported by gcc (for PALM development) before going ahead and
converting my code across.

Woodster
 
Woodster

The Standard I/O (FILE-based) is buffered, the POSIX I/O (open() etc.) is
not (not supposed to be). The latter requires you to select buffer sizes
manually. If you do it right for the task at hand, you may notice better
performance than with the standard I/O; if you don't, the performance may
deteriorate quite a lot.
With the standard I/O you don't have to worry about that. It is perfectly
adequate for a great many uses.
There are other, less portable ways that can be both much faster and much
easier to use (memory mapped files), specifically for random access, but
we are drifting off-topic here.

I had the C++ file-based streams (<fstream>) in mind. They are superior
to the ANSI C I/O library in several aspects of usage.

Denis,

Thanks a lot for the information. I have done very little work with
templates so have not done anything with fstream to date. A quick check
however seems to indicate that the use of fstream (along with a lot of
other useful, supposedly ANSI-standard stuff) has been thoughtfully
(sic) omitted by Microsoft from their Embedded Visual C++.

I am really beginning to get used to finding out that Microsoft in their
infinite wisdom has left out yet another standard item from their
implementation of C++. Looks like I will just need to throw a whole
pile of "#if defined"s at my code in order to get it all up and running
under Pocket PC, which I was hoping to avoid in favour of
platform-independent / ANSI-standard code. It is now maybe time to go to
Windows CE forums/groups and find out how developers of other
applications that run on platforms including PocketPC handle this
situation.

Thanks again for your responses to date however,

Woodster
 
Denis Remezov

P.J. Plauger said:
[snip]
With the standard I/O you don't have to worry about that. It is perfectly
adequate for a great many uses.
There are other, less portable ways that can be both much faster and much
easier to use (memory mapped files), specifically for random access, but
we are drifting off-topic here.

And into wild speculation. Chances are good that *any* of the
forms of I/O discussed so far will be good enough, absent any
performance data to the contrary.

Not speculation. Well, perhaps I should have said "could be appreciably
faster in some contorted scenarios". As to the data, I've just written a
simple test program that ran over 50% faster with mmap() than with
fopen()/fread() on Linux. It was reading a 12 MB file from both ends
simultaneously and examining its contents.

From the standpoint of type checking, yes. From the standpoint of
performance, Standard C++ I/O tends to be slightly worse than
Standard C I/O. For some C++ libraries, it is *much* worse.

Yes I suspected that, but thanks for the confirmation.

Denis
 
P.J. Plauger

I have done very little work with
templates so have not done anything with fstream to date. A quick check
however seems to indicate that the use of fstream (along with a lot of
other useful, supposedly ANSI-standard stuff) has been thoughtfully
(sic) omitted by Microsoft from their Embedded Visual C++.

I am really beginning to get used to finding out that Microsoft in their
infinite wisdom has left out yet another standard item from their
implementation of C++. Looks like I will just need to throw a whole
pile of "#if defined"s at my code in order to get it all up and running
under Pocket PC, which I was hoping to avoid in favour of
platform-independent / ANSI-standard code. It is now maybe time to go to
Windows CE forums/groups and find out how developers of other
applications that run on platforms including PocketPC handle this
situation.

If you're willing to pay extra, you can get a complete Standard C/C++
library to supplement the eVC++ environments. That can quickly prove
to be cheaper than doctoring your code to adapt to disparate subsets.
See our web site.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 
P.J. Plauger

Well, actually damn near every OS these days includes some form
of the original open/close/read/write interface pioneered by
Unix and standardized as Posix. It is indeed part of the Microsoft
environment.

And while there is no standard way to convert between file
descriptors (used by open etc.) and FILE objects (used by
fopen), there's almost always *some* way to do this in
every OS.

Watch out! Another Embedded C++ gotcha is that file handles are *not*
integers in this OS.
The Standard I/O (FILE-based) is buffered, the POSIX I/O (open() etc.) is
not (not supposed to be). The latter requires you to select buffer sizes
manually. If you do it right for the task at hand, you may notice better
performance than with the standard I/O; if you don't, the performance may
deteriorate quite a lot.

But probably not, given the smart buffering that a typical modern OS
does for you under the hood.
With the standard I/O you don't have to worry about that. It is perfectly
adequate for a great many uses.
There are other, less portable ways that can be both much faster and much
easier to use (memory mapped files), specifically for random access, but
we are drifting off-topic here.

And into wild speculation. Chances are good that *any* of the
forms of I/O discussed so far will be good enough, absent any
performance data to the contrary.
I had the C++ file-based streams (<fstream>) in mind. They are superior
to the ANSI C I/O library in several aspects of usage.

From the standpoint of type checking, yes. From the standpoint of
performance, Standard C++ I/O tends to be slightly worse than
Standard C I/O. For some C++ libraries, it is *much* worse.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 
Pete Becker

Denis said:
The Standard I/O (FILE-based) is buffered, the POSIX I/O (open() etc.) is
not (not supposed to be). The latter requires you to select buffer sizes
manually. If you do it right for the task at hand, you may notice better
performance than with the standard I/O; if you don't, the performance may
deteriorate quite a lot.

You can also select buffer sizes for standard I/O, with exactly the same
consequences. One benefit of using standard I/O is that you don't have
to write the code to manage the buffering.
 
P.J. Plauger

P.J. Plauger said:
[snip]
With the standard I/O you don't have to worry about that. It is perfectly
adequate for a great many uses.
There are other, less portable ways that can be both much faster and much
easier to use (memory mapped files), specifically for random access, but
we are drifting off-topic here.

And into wild speculation. Chances are good that *any* of the
forms of I/O discussed so far will be good enough, absent any
performance data to the contrary.

Not speculation. Well, perhaps I should have said "could be appreciably
faster in some contorted scenarios". As to the data, I've just written a
simple test program that ran over 50% faster with mmap() than with
fopen()/fread() on Linux. It was reading a 12 MB file from both ends
simultaneously and examining its contents.

You *are* speculating. Until somebody writes a program with practical
uses and demonstrates by measurement that it's not fast enough for
those uses, discussions of the relative merits of different forms of
I/O are pure speculation. I've written code for a living for decades,
and only a tiny fraction of that has ever needed tuning. An even
tinier fraction was worth compromising readability to get adequate
performance.

Benchmarks can serve as reasonable general indicators of relative
performance, but they are too often used as an excuse to perform
premature optimization.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 
Denis Remezov

P.J. Plauger said:
P.J. Plauger said:
[snip]


With the standard I/O you don't have to worry about that. It is perfectly
adequate for a great many uses.
There are other, less portable ways that can be both much faster and much
easier to use (memory mapped files), specifically for random access, but
we are drifting off-topic here.

And into wild speculation. Chances are good that *any* of the
forms of I/O discussed so far will be good enough, absent any
performance data to the contrary.

Not speculation. Well, perhaps I should have said "could be appreciably
faster in some contorted scenarios". As to the data, I've just written a
simple test program that ran over 50% faster with mmap() than with
fopen()/fread() on Linux. It was reading a 12 MB file from both ends
simultaneously and examining its contents.

You *are* speculating.

No, I've limited myself to a discussion of the relative merits of
different forms of I/O per se (up to this point). There was nothing between
the lines.

Until somebody writes a program with practical
uses and demonstrates by measurement that it's not fast enough for
those uses, discussions of the relative merits of different forms of
I/O are pure speculation. I've written code for a living for decades,
and only a tiny fraction of that has ever needed tuning. An even
tinier fraction was worth compromising readability to get adequate
performance.

It once happened to me and the group I was working with. It was about
7 years ago; we used memory mapped files on Win32 for working with
large volumes of geographical data in proprietary formats, and it
certainly helped. It turned out to be a very practical application, too.
Benchmarks can serve as reasonable general indicators of relative
performance, but they are too often used as an excuse to perform
premature optimization.

More information is good. Everyone can decide for himself how to use it.
Luckily, wherever it is imprecise, people like yourself will make the
corrections (and I will appreciate them and learn from them too).

Denis
 
