Is C faster than C++?

Gabriel

zhaoyandong said:
In one of my interviews, someone asked me why C is faster than C++, and told
me to give at least two reasons.

I can't find the answer on the web.

I'd appreciate any suggestions on this.
Thank you.

> It is not faster.

I was once at a workshop held by Sun's compiler development team. They
said that their compiler does some optimizations in C which it does not
do in C++, but I do not exactly recall the reason (I might look it up later).
Is it possible that a compiler can make assumptions in C which it
cannot make in C++, allowing better optimization in C?

Gabriel
 
Karl Heinz Buchegger

Greg said:
[snip]
In this case, asking for clarification would be in order:

"By what measurement have you found C++ to be slower than C?"

If the response is "average dispatch time per function call," you could
then explain how virtual functions differ from directly called
routines. But you could also note that the overhead is minimal and is
rarely an issue.

In fact it is *never* an issue, because it isn't slower. Not if you
compare a virtual function call with the *equivalent* functionality
in C. And that is all that matters: virtual function calls in code
are there for a reason, so you need to satisfy that reason in C or C++.
When you do, you find that C++ virtual functions are the fastest
possible way to satisfy that reason. Plus there is one extra benefit: maintainability.

Other than that, I agree with everything else you said.

I would agree that the difference in dispatch time between a virtual
function call and a direct call should not be an issue in a
well-written C++ program;

This is not what I am talking about.

Example:

#include <cstdio>

class Pet
{
public:
    virtual void MakeNoise() = 0;
};

class Cat : public Pet
{
public:
    virtual void MakeNoise() { printf( "Miau\n" ); }
};

class Dog : public Pet
{
public:
    virtual void MakeNoise() { printf( "Wuff\n" ); }
};

void foo( Pet* p )
{
    p->MakeNoise();
}

int main()
{
    Cat c;
    Dog d;

    foo( &c );
    foo( &d );
}

Now rewrite that in C and you will find that you need an
additional mechanism besides the actual function call in order
to replace the virtual functions, e.g. some sort of type code in
the structure and a switch statement in foo().

When talking about virtual functions: yes, the actual dispatch
time of a virtual function is almost always larger than the time cost
of an ordinary call. But that is comparing apples with oranges. To
make a fair comparison, you need to compare virtual functions with
ordinary functions *plus* the additional mechanism needed to supply the
functionality of a virtual function call. And then things turn around:
virtual functions aren't that slow any more.

It is like comparing addition with multiplication. On most CPUs multiplication
takes longer than addition. But there really is no point in comparing them. If I
need multiplication, then replacing that multiplication with additions just
because addition is faster is not going to save the day. Except, of course, for
the special case of multiplying by 2, which the compiler
knows about. In the same way the compiler knows about special cases of virtual
function calls and replaces them with ordinary function calls.
For instance, one could imagine that adding a virtual method to a class
like std::string would have a measurable negative effect on the
performance of a C++ program with many stack-based strings.

Could be, since the size of an object increases by the size of an additional
pointer (I know, nowhere is it written that a compiler has to use a vtable,
but in fact no implementation technique other than vtables is known for
implementing virtual functions). That increase could disturb the caches. But
all of this is extremely hardware dependent and outside the scope of C++.
C++ is obviously a more complex language than C. Greater complexity
does not necessarily imply less efficiency. But it does require more
care at times to recognize inefficiency when it arises.

I am with you here. But virtual functions really do not qualify as an example
of this. In fact the opposite is true. The C++ solution from above is much simpler
(and more maintainable) than an equivalent C solution. The programmer must take
care of many more things to get it right in C:

#include <stdio.h>

#define CAT 1
#define DOG 2

typedef struct Pet
{
    unsigned char Type;
} Pet;

void MakeNoiseForCat( Pet* p )
{
    printf( "Miau\n" );
}

void MakeNoiseForDog( Pet* p )
{
    printf( "Wuff\n" );
}

void foo( Pet* p )
{
    switch( p->Type ) {
    case CAT:
        MakeNoiseForCat( p );
        break;

    case DOG:
        MakeNoiseForDog( p );
        break;
    }
}

int main()
{
    Pet c = { CAT };
    Pet d = { DOG };

    foo( &c );
    foo( &d );
}

Note that the C++ virtual function call:

p->MakeNoise();

has been replaced by the sequence:

switch( p->Type ) {
case CAT:
    MakeNoiseForCat( p );
    break;

case DOG:
    MakeNoiseForDog( p );
    break;
}

Now we have a fair comparison, and I guess there is no doubt that the
virtual function call is actually faster on most machines than all of
that switch-case-function-call mumbo jumbo.
 
Stuart MacMartin

All very well stated.

Two other side points:

Years ago I was asked a similar question: "Why is C so much faster
than machine code?" When I asked for clarification, they said, "We had
an expert write a function in C, and another write the same function in
assembler. The C was faster by a factor of 2".

My unfortunate reply: "Well, since C is compiled into assembler, I
would assume that the compiler writer understood that assembly language
better than the person who wrote the assembler function in this case.
For that assembly language I've always beaten the C compiler any time I
tried." Wrong answer: the interviewer was the one who'd written the
assembler.

But the other point is more significant:

Who will win a race: the person who has a bicycle, or the person who
has a choice between a bicycle, a car, a motorcycle, and an airplane?
Typically the C programmer will use better algorithms than the
assembly programmer, just because the assembly programmer doesn't have
the time, and the C++ programmer will use better data structures than the C
programmer because he has the tools and the libraries to help him.
Where a C programmer might use an array, a C++ programmer might use a
vector, a set, a splay tree, a map, or something else more appropriate
to the problem space. Not to mention that the C++ programmer can more
easily model the problem domain directly in the language, and so can create
a more maintainable, correct-by-design solution. So given the same
PROBLEM, the C++ programmer ought to get a faster solution than the C
programmer given the same time constraints.

Stuart
 
Mabden

Ganesh said:
My opinion is that C++ may introduce some hidden
costs that may be visible only to an experienced programmer.
The lower the level of programming, (maybe) the lower the overheads.
This is IMHO.

And it is wrong. Read "The Design and Evolution of C++" by Bjarne
Stroustrup.

Pg. 28:
"The explicit aim was to match C in terms of run-time code..."
"To wit: Someone once demonstrated a 3% systematic decrease in overall
run-time efficiency compared with C caused by the use of a spurious
temporary introduced into the function return mechanism by the C with
Classes (FYI: the precursor to C++) preprocessor. This was deemed
unacceptable and the overhead promptly removed."

The basic tenet of C++ is to be a better C, and to not add overhead into
programs that do not use C++ features.

Anything you believe about "hidden costs" is similar to your beliefs
about UFOs and gods, i.e. non-existent.
 
P.J. Plauger

Mabden said:
And it is wrong. Read "The Design and Evolution of C++" by Bjarne
Stroustrup.

Pg. 28:
"The explicit aim was to match C in terms of run-time code..."
"To wit: Someone once demonstrated a 3% systematic decrease in overall
run-time efficiency compared with C caused by the use of a spurious
temporary introduced into the function return mechanism by the C with
Classes (FYI: the precursor to C++) preprocessor. This was deemed
unacceptable and the overhead promptly removed."

The basic tenet of C++ is to be a better C, and to not add overhead into
programs that do not use C++ features.

Anything you believe about "hidden costs" is similar to your beliefs
about UFOs and gods, i.e. non-existent.

Well, no. Anything you believe about "hidden costs" is similar to
whatever other beliefs you can prove or disprove by reproducible
experiment. And experiments repeatedly show varying nonzero costs
intrinsic to C++ that are not present in C.

Exception handling is an obvious case in point -- the implementation
can effectively eliminate performance costs by raising code size,
or conversely, but the cost is there in some form. You can sometimes
argue that the equivalent checking in C has a nonzero space/time
cost, and that is true; but whether the costs are comparable depends
strongly on the particulars of a program. You can also argue (as
above) that the extra costs are being steadily eliminated as they
are discovered, but that is only true up to a point. It's fun to
highlight the successes, less fun to admit the failures (so far)
to eliminate extra overheads.

Virtual function calls raise issues similar to those for exceptions,
but typically with much smaller costs. OTOH, input/output using
the Standard C++ library drags in *way* more code than does the
Standard C library for (loosely) comparable operations. And the
I/O performance gap has closed dramatically over the past decade,
but it's still there.

Then there's the oft-repeated mantra that C++ can be *more*
efficient than C; and occasionally that's doubtless true.
But on average, IME, a C++ program will be 5 to 15 per cent
larger and/or slower than a comparable program written in C.
Again IME, that's a price well worth paying, in practically
all cases, for the improved productivity that you can get
by writing large programs in C++ instead of C. Historically,
C began winning over assembly language when its extra overhead
dropped to about 30 per cent. It's a rare application, even
embedded, where even 50 per cent overheads are worth addressing
by going to lower-level coding techniques. It's way cheaper to
use faster or bigger hardware, even when you're shipping
hundreds of thousands of devices.

I consider it a mark of zealotry to pretend that there are
*no* additional overheads when using C++ instead of C. That
requires a leap of faith akin to believing in UFOs and gods.
But more important, you don't have to be a C++ zealot to
decide on the basis of reasonable evidence that it's well
worth the real and measurable costs that are still there.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 
Dave Rahardja

I consider it a mark of zealotry to pretend that there are
*no* additional overheads when using C++ instead of C. That
requires a leap of faith akin to believing in UFOs and gods.
But more important, you don't have to be a C++ zealot to
decide on the basis of reasonable evidence that it's well
worth the real and measurable costs that are still there.

The additional cost is there because the C++ programmer chooses to use
additional features that are not available in the C version of the code.
Exception handling adds overhead, but you get exception handling in return. No
fair comparing that to a C program that has no way of handling exceptions like
C++ does.

Same goes for virtual functions--a fair comparison can only be made to a
similar run-time dynamic dispatch in C, e.g. a function pointer table. In this
case the C++ code could conceivably be faster because an optimizer has the
opportunity to short-circuit calls to virtual functions that always resolve to
a single function within the link, say, thus removing the indirection. A C
optimizer may never have that opportunity.

-dr
 
P.J. Plauger

The additional cost is there because the C++ programmer chooses to use
additional features that are not available in the C version of the code.
Exception handling adds overhead, but you get exception handling in return.
No fair comparing that to a C program that has no way of handling exceptions
like C++ does.

Sorry, but it *is* fair. Any function you call whose contents
are unknown might throw an exception, so the compiled code must
be prepared to handle exceptions in the most innocent of code.
Perhaps the compiler is smart enough to avoid the worst overheads
in a function that contains no destructible autos, but it is not
likely to eliminate all overheads. IME, you at least get kilobytes
of stack-walking code for *any* C++ program, even a C program
compiled as C++. In some cases you also get slower function
entry/exit code as well.

C++ pays lip service to the doctrine that you don't pay for
what you don't use. Standard C++ fails to achieve that goal
in several important areas. Exception handling and I/O, which
I cited in my earlier post, are two biggies. Both can cause
C programs to swell even when compiled unchanged as C++.
Same goes for virtual functions--a fair comparison can only be made to a
similar run-time dynamic dispatch in C, e.g. a function pointer table. In
this case the C++ code could conceivably be faster because an optimizer has
the opportunity to short-circuit calls to virtual functions that always
resolve to a single function within the link, say, thus removing the
indirection. A C optimizer may never have that opportunity.

Not quite the same case. I did try to indicate that you have to
compare the intrinsic overhead of a C++ feature to the explicit
overhead of doing the same thing by hand in C. Again IME, the
use of virtuals tends to be a wash. It's an article of faith
among C++ zealots that optimization will give C++ the edge,
but I haven't seen a case where that's true (or at least where
it makes a dime's worth of difference).

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 
Jeremy Jurksztowicz

I was once at a workshop held by Sun's compiler development team. They
said that their compiler does some optimizations in C which it does not
do in C++, but I do not exactly recall the reason (I might look it up later).
Is it possible that a compiler can make assumptions in C which it
cannot make in C++, allowing better optimization in C?

Usually it is the other way around. Specifically, type-based alias
analysis, and the resulting optimizations, occur more frequently with
C++, because of its strong typing and compile-time polymorphism
(templates). However, given the complexity of C++, especially
exceptions, I suppose there may be some assumptions which are valid in
a C program but not in the equivalent C++ program. That being said, I
am not a compiler writer, so I am only speculating.

- Jeremy Jurksztowicz
 
Branimir Maksimovic

P.J. Plauger said:
Sorry, but it *is* fair. Any function you call whose contents
are unknown might throw an exception, so the compiled code must
be prepared to handle exceptions in the most innocent of code.
Perhaps the compiler is smart enough to avoid the worst overheads
in a function that contains no destructible autos, but it is not
likely to eliminate all overheads. IME, you at least get kilobytes
of stack-walking code for *any* C++ program, even a C program
compiled as C++. In some cases you also get slower function
entry/exit code as well.

Well, I jumped out of my chair when I saw this, but fortunately
gcc only generates exception handlers for functions that have
automatic objects with destructors.
The handler code is really simple. It just calls destructors in
switch/case style, then adjusts the stack pointer,
then calls _Unwind_Resume.
That's just ten to twenty bytes of code if there are one or two
destructors.
But every C++ object file gets an exception handling stack frame
+ exception handler table (if there are any; every 'case' has a handler),
which is more or less a few kilobytes of data, with or without
destructors.
IMO, it's an acceptable overhead.
So we could say that C++ programs carry at least more data than
C programs.

Greetings, Bane.
 
Branimir Maksimovic

Gabriel said:
I once was at a workshop help by sun's compiler development team. They
said, that their compiler does some optimizations in C which is does not
in C++, but I do not exactly recall the reason (might look it up later).
Is it possible, that a compiler can make assumptions in C which it cannot
make in C++ that allows better optimization in C?

Well, in C99 there is the keyword 'restrict' (a restricted pointer means
the pointer has no alias), which allows optimisations that are common in
Fortran compilers.
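A minimal sketch of what 'restrict' buys (the function name is illustrative): with the qualifier, the compiler may assume the three pointers never alias, so it can keep values in registers and vectorize the loop instead of reloading after every store.

```c
#include <stddef.h>

/* 'restrict' (C99) promises the compiler that, within this function,
   a, b, and out do not overlap. Without it, the compiler must assume
   a store through out[] might modify a[] or b[] and reload them. */
void add_arrays(size_t n, const double *restrict a,
                const double *restrict b, double *restrict out)
{
    for (size_t i = 0; i < n; ++i)
        out[i] = a[i] + b[i];
}
```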

Greetings, Bane.
 
Dave Rahardja

Sorry, but it *is* fair. Any function you call whose contents
are unknown might throw an exception, so the compiled code must
be prepared to handle exceptions in the most innocent of code.
Perhaps the compiler is smart enough to avoid the worst overheads
in a function that contains no destructible autos, but it is not
likely to eliminate all overheads. IME, you at least get kilobytes
of stack-walking code for *any* C++ program, even a C program
compiled as C++. In some cases you also get slower function
entry/exit code as well.

This is still an apples-to-oranges comparison. You pay extra for exception
handling because you are _using_ the exception handling facility. If exception
handling is not required, we can properly declare non-throwing functions with
the throw() specification, or resort to compiler switches. I also imagine that
a C++ compiler can assume that C functions declared extern "C" do not throw.

C++ pays lip service to the doctrine that you don't pay for
what you don't use. Standard C++ fails to achieve that goal
in several important areas. Exception handling and I/O, which
I cited in my earlier post, are two biggies. Both can cause
C programs to swell even when compiled unchanged as C++.

Still addressing the exception overhead you mentioned above, I admit it is
unfortunate that the C++ language assumes that plainly-declared functions can
"throw anything" instead of "throw nothing". The latter assumption could have
eliminated the exception overhead, but would require large bodies of code to
be modified before being reused.

Do you attribute the I/O bloat to the design of the C++ streams specification?
Does it help if the programmer sticks to C-style printf()s?

Not quite the same case. I did try to indicate that you have to
compare the intrinsic overhead of a C++ feature to the explicit
overhead of doing the same thing by hand in C. Again IME, the
use of virtuals tends to be a wash. It's an article of faith
among C++ zealots that optimization will give C++ the edge,
but I haven't seen a case where that's true (or at least where
it makes a dime's worth of difference).

I don't think anyone has made a claim that C++ may be anything but trivially
more efficient than C. However, my experience has shown that the counter-claim
that C is _always_ more efficient than C++ shows a misunderstanding of the
differences between the languages, or at least reveals the deep denial of a C
coder refusing to learn about C++.

The two languages start with different levels of basic functionality enabled
(C++ obviously starts with more features enabled, such as exception handling).
In C you add features explicitly, and in C++ you disable default features (by
adding a throw() specification to your functions, for example). Understanding
and using these fundamental differences goes a long way toward demystifying
the apparent inefficiency of C++.

-dr
 
Branimir Maksimovic

Dave Rahardja said:
This is still an apples-to-oranges comparison. You pay extra for exception
handling because you are _using_ the exception handling facility. If
exception handling is not required, we can properly declare non-throwing
functions with the throw() specification, or resort to compiler switches.
I also imagine that a C++ compiler can assume that C functions declared
extern "C" do not throw.

throw() wouldn't help.
To quote part of 15.5.1:
"An implementation is not permitted to finish stack unwinding prematurely
based on a determination that the unwind process will eventually cause
a call to terminate()."

In the gcc implementation, an exception spec only adds overhead to the
function that is called (the unexpected() machinery et al.).
The calling function does not care at all about exception specifications.
It only generates exception handling code if the function calls destructors
for automatic objects; that is, of course, if there are no try/catch blocks.
The overhead is simply that every object file gets exception handling data.
There is no run-time overhead, but there is executable size overhead.
I guess there are implementations where exactly the opposite is true :)

Greetings, Bane.
 
mlimber

Dave Rahardja wrote:
[snip]
Do you attribute the I/O bloat to the design of the C++ streams specification?
Does it help if the programmer sticks to C-style printf()s?
[snip]

As P.J.P. said above, printf/scanf do help as far as code bloat and I/O
performance go, but using them gives up type safety and a
common coding style for streaming objects in exchange for speed. The
classic example of printf failure cannot happen with iostreams:

char c = 'a';
printf( "%s", c ); // Boom! %s expects a char*; this is undefined behavior

IME, in all but the most thoroughly tested (or trivial) code, not every
branch that prints an error message gets tested, meaning
that such bugs could lie latent in the software, waiting for a user to
find them. Thus, I'm quite willing to accept some bloat and
inefficiency in exchange for type safety.

Cheers! --M
 
P.J. Plauger

This is still an apples-to-oranges comparison. You pay extra for exception
handling because you are _using_ the exception handling facility. If
exception handling is not required, we can properly declare non-throwing
functions with the throw() specification, or resort to compiler switches.
I also imagine that a C++ compiler can assume that C functions declared
extern "C" do not throw.

Yes, if you indulge in sufficient heroics, and you know what they
are, and you have an appropriate compiler, you *can* eliminate
the overhead of exception handling. Otherwise, you pay extra
for exception handling whether or not you are using that facility.
That is the norm in the world of C++.

Still addressing the exception overhead you mentioned above, I admit it is
unfortunate that the C++ language assumes that plainly-declared functions
can "throw anything" instead of "throw nothing". The latter assumption
could have eliminated the exception overhead, but would require large
bodies of code to be modified before being reused.

Do you attribute the I/O bloat to the design of the C++ streams
specification?

Yes.

Does it help if the programmer sticks to C-style printf()s?

Sometimes. But, depending on the implementation, you might still
get a boatload of code just because the program unintentionally
forces the instantiation of the default locale object. (And that's
intimately associated with the bad design of iostreams.)

I don't think anyone has made a claim that C++ may be anything but
trivially more efficient than C.

Uh, no. Some have argued that it can be *materially* more
efficient.
However, my experience has shown that the counter-claim
that C is _always_ more efficient than C++ shows a misunderstanding of the
differences between the languages, or at least reveals the deep denial of
a C coder refusing to learn about C++.

I've certainly never claimed that C is always more efficient
than C++, mostly because I don't believe that's true.

The two languages start with different levels of basic functionality
enabled (C++ obviously starts with more features enabled, such as
exception handling). In C you add features explicitly, and in C++ you
disable default features (by adding a throw() specification to your
functions, for example). Understanding and using these fundamental
differences goes a long way toward demystifying the apparent
inefficiency of C++.

The apparent inefficiency is a *real* inefficiency if you have
to do sophisticated things to avoid the overheads of features
you don't use.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 
David White

mlimber said:
Dave Rahardja wrote:
[snip]
Do you attribute the I/O bloat to the design of the C++ streams
specification? Does it help if the programmer sticks to C-style
printf()s?
[snip]

As P.J.P. said above, printf/scanf do help as far as code bloat and
I/O performance go, but using them gives up type safety and a
common coding style for streaming objects in exchange for speed. The
classic example of printf failure cannot happen with iostreams:

It's not immediately apparent to me why printf/scanf should be faster than
streams, given that streams have a dedicated function for I/O of each type
and can do what's appropriate for that type without delay, whereas
printf/scanf have to decode a format string first, including possibly ASCII
to decimal conversions for precision and field width (which themselves
require an amount of work similar to formatted input).

DW
 
P.J. Plauger

mlimber said:
Dave Rahardja wrote:
[snip]
Do you attribute the I/O bloat to the design of the C++ streams
specification? Does it help if the programmer sticks to C-style
printf()s?
[snip]

As P.J.P. said above, printf/scanf do help as far as code bloat and
I/O performance go, but using them gives up type safety and a
common coding style for streaming objects in exchange for speed. The
classic example of printf failure cannot happen with iostreams:

It's not immediately apparent to me why printf/scanf should be faster than
streams, given that streams have a dedicated function for I/O of each type
and can do what's appropriate for that type without delay, whereas
printf/scanf have to decode a format string first, including possibly
ASCII-to-decimal conversions for precision and field width (which themselves
require an amount of work similar to formatted input).

First, the time spent decoding a format string is trivial compared
to practically any conversion. Second, the Standard C++ library
requires that all conversions be performed by locale facets, each
of which is a class typically with several virtual functions called
through public interfaces. And each conversion call requires the
construction of an ios object plus an istreambuf_iterator object.
The time overhead for all this nonsense is about 30 per cent greater
(at a minimum) than a corresponding printf/scanf call.

The space overhead can be *much* worse. The Dinkumware implementation
avoids instantiating most facets that never get called. That keeps
the space overhead around 50 KB, compared to about 500 KB (sic) for
those that don't perform this optimization. Once you instantiate a
facet, you link in all the virtuals for that facet, whether or not
they are actually used. (The zealots have been insisting for nearly
a decade now that unused virtuals can be optimized away, but
nobody has demonstrated a working implementation that does so.)
Compare this with a typical 10-20 KB overhead for linking in all
of printf/scanf, even including the format decoder and all those
conversions you don't use.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 
Howard Hinnant

"P.J. Plauger said:
mlimber said:
Dave Rahardja wrote:
[snip]
Do you attribute the I/O bloat to the design of the C++ streams
specification? Does it help if the programmer sticks to C-style
printf()s?
[snip]

As P.J.P. said above, printf/scanf do help as far as code bloat and
I/O performance go, but using them gives up type safety and a
common coding style for streaming objects in exchange for speed. The
classic example of printf failure cannot happen with iostreams:

It's not immediately apparent to me why printf/scanf should be faster than
streams, given that streams have a dedicated function for I/O of each type
and can do what's appropriate for that type without delay, whereas
printf/scanf have to decode a format string first, including possibly
ASCII-to-decimal conversions for precision and field width (which themselves
require an amount of work similar to formatted input).

First, the time spent decoding a format string is trivial compared
to practically any conversion. Second, the Standard C++ library
requires that all conversions be performed by locale facets, each
of which is a class typically with several virtual functions called
through public interfaces. And each conversion call requires the
construction of an ios object plus an istreambuf_iterator object.
The time overhead for all this nonsense is about 30 per cent greater
(at a minimum) than a corresponding printf/scanf call.

The space overhead can be *much* worse. The Dinkumware implementation
avoids instantiating most facets that never get called. That keeps
the space overhead around 50 KB, compared to about 500 KB (sic) for
those that don't perform this optimization. Once you instantiate a
facet, you link in all the virtuals for that facet, whether or not
they are actually used. (The zealots have been insisting for nearly
a decade now that unused virtuals can be optimized away, but
nobody has demonstrated a working implementation that does so.)
Compare this with a typical 10-20 KB overhead for linking in all
of printf/scanf, even including the format decoder and all those
conversions you don't use.

What really gets me is that just printing an integer instantiates all of
the code for formatting floating point (even with lazy facet
instantiation). You can't even strip it out at link time, at least not
without whole program analysis. And just for the reason P.J. says:
they're in the same facet. You either get all number formatting or none
of it. There's no way to select at compile time "just int" formatting.

There's also no way to select at compile time: And don't bother me with
all that thousands-separator stuff. It's in your executable just
itching to get executed even if you never mention "locale". And again,
the linker can't strip it out without fairly heroic WPA.

And of course since all the floating point code is in there, you've also
got code sitting around to select a custom decimal point character (at
run time), just in case you might decide to print that double in Germany
after you've compiled your code.

If you're printing to a file (or likely even just the console), then
you've also got some heavy lifting equipment allowing for very flexible
code conversion magic (and the specific encoding is to be selected at
run time).

-- All for the price of printing an int.

Yes, C++ I/O could be much smaller. But as currently specified it
isn't. There are a ton of major run time decisions buried under the
most innocent looking I/O.

We (the industry) have the expertise to do it much better today than we
did a decade ago. But will we? I really don't know.

-Howard
 
P.J. Plauger

Yes, C++ I/O could be much smaller. But as currently specified it
isn't. There are a ton of major run time decisions buried under the
most innocent looking I/O.

We (the industry) have the expertise to do it much better today than we
did a decade ago. But will we? I really don't know.

Thanks, Howard, for the added details. At the risk of stirring up
a slumbering dragon, I have to point out that there is one C++
library that avoids the worst excesses of Standard C++ -- the
one specified for EC++. We offer both in our Unabridged library
package so our customers can choose. Obviously, the better solution
would be to do as Howard suggests and fix the size/performance
problems in the Standard C++ library. But that work isn't on the
horizon.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 
Branimir Maksimovic

Branimir Maksimovic said:
throw() wouldn't help.
To quote part of 15.5.1:
"An implementation is not permitted to finish stack unwinding prematurely
based on a determination that the unwind process will eventually cause
a call to terminate()."

In the gcc implementation, an exception spec only adds overhead to the
function that is called (the unexpected() machinery et al.).
The calling function does not care at all about exception specifications.
It only generates exception handling code if the function calls destructors
for automatic objects; that is, of course, if there are no try/catch blocks.
The overhead is simply that every object file gets exception handling data.
There is no run-time overhead, but there is executable size overhead.
I guess there are implementations where exactly the opposite is true :)

Greetings, Bane.

I've just investigated further. An exception specification adds run-time
overhead to a function only if the function throws, or calls a function
that does not have a throw() specification. That means:

void baz() throw();

void foo() throw()
{
    // foo does not register unexpected() (no runtime overhead)
    // because baz has a throw() specification
    baz();
}

void baz() throw()
{
    // no run-time overhead: baz neither throws nor calls anything
    // that can throw
}

On Windows, with gcc, exceptions have zero cost if not used (neither code
nor data).
That means that C programs compiled as C++ with gcc produce exactly the
same object file (excluding symbol names) on Windows.
I wonder why on Linux C++ object files get an EH frame? Strange.

Greetings, Bane.
 
P.J. Plauger

I've just investigated further. An exception specification adds run-time
overhead to a function only if the function throws, or calls a function
that does not have a throw() specification. That means:

void baz() throw();

void foo() throw()
{
    // foo does not register unexpected() (no runtime overhead)
    // because baz has a throw() specification
    baz();
}

void baz() throw()
{
    // no run-time overhead: baz neither throws nor calls anything
    // that can throw
}

On Windows, with gcc, exceptions have zero cost if not used (neither code
nor data).
That means that C programs compiled as C++ with gcc produce exactly the
same object file (excluding symbol names) on Windows.
I wonder why on Linux C++ object files get an EH frame? Strange.

Uh, I just tried a quick test with mingw (gcc V3.2) under Windows.
Compiled as C++, a smallish program is 100,000 bytes bigger than
when compiled as C.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 
