Which is faster?


Prasoon

Which is faster "cout" or "printf" ?

I have written the following program

#include <iostream>
#include <cstdio>
#include <ctime>

int main()
{
    std::clock_t start1, start2;
    double diff1, diff2;
    start1 = std::clock();

    for ( long int i = 0; i < 1000000; ++i )
        std::cout << "*";

    diff1 = ( std::clock() - start1 ) / (double)CLOCKS_PER_SEC;
    start2 = std::clock();

    for ( long int i = 0; i < 100000; ++i )
        printf( "*" );

    diff2 = ( std::clock() - start2 ) / (double)CLOCKS_PER_SEC;

    std::cout << "\ncout: " << diff1 << '\n' << "printf: " << diff2 << '\n';
    getchar();
}

I got the output:

cout: 12.844
printf: 12.75

printf was slightly faster!

But I think the statement "printf is faster than cout" is nothing but
a dangerous overgeneralization.

Am I correct?

I am using Intel Core 2 duo processor E7400 @ 2.8 GHz and 4GB of RAM

A friend of mine said "printf is always faster than cout" and got the
output of the same program as

cout : 0.14
printf: 0.10

How did he get the output so fast ?

I think for 1000000 iterations my friend's output is impossible! Tell
me whether my output is approximately correct, or my friend's?


Prasoon
 

Prasoon

Correction! The second loop is:

for ( long int i = 0; i < 1000000; ++i ) // 10^6 iterations
    printf( "*" );

instead of

for ( long int i = 0; i < 100000; ++i )
    printf( "*" );

Prasoon
 

Alf P. Steinbach

* Prasoon:
Which is faster "cout" or "printf" ?

[...]

But I think the statement "printf is faster than cout" is nothing but
a dangerous overgeneralization.

Am I correct?

No. It is an overgeneralization, but not a dangerous one. In practice, with
current C++ implementations and any I can imagine in the future (considering
that this state of affairs has persisted for about 10 years or so), printf will
be faster than cout. In theory cout *can* be faster, and I think it was Dietmar
Kuhl (modulo spelling) who once made a really, really fast implementation --
Andrei Alexandrescu tried the same feat with some of the STL, called YASLI (Yet
Another Standard Library Implementation), but it was never completed except, as
I recall, for an implementation of vector, and perhaps string, but I'm not sure.

What you should mainly be concerned about instead is correctness and
maintainability.

Unfortunately, for iostreams these concerns are in direct conflict. There is far
better type safety than with the printf family, although still with UB for some
input operations. On the other hand, for any but the most trivial formatting and
parsing, the iostream code becomes really verbose & messy, downright ugly,
employing such complex functionality that whole tomes have been written about it.

But, for simple test & research & learning programs you can use a simple subset
of iostream functionality, where the type safety outweighs the verbosity. :)

For those kinds of small programs there's no contest, really: at least for the
novice, iostreams are the default choice, the only sane choice.

I am using Intel Core 2 duo processor E7400 @ 2.8 GHz and 4GB of RAM

A friend of mine said "printf is always faster than cout" and got the
output of the same program as

cout : 0.14
printf: 0.10

How did he get the output so fast ?

Perhaps he directed the output to /dev/null (or nul in Windows)?

I think for 1000000 iterations my friend's output is impossible! Tell
me whether my output is approximately correct, or my friend's?

Probably both of you. <g>


Cheers & hth.,

- Alf
 

Fred Zwarts

Prasoon said:
Which is faster "cout" or "printf" ?

I have written the following program

[...]
for ( long int i = 0; i < 100000; ++i )

I assume this should be 1000000.
printf ( "*" );

[...]

I got the output:

cout: 12.844
printf: 12.75

Is this the whole output? Where are the 2000000 asterisks?
The reason I mention this is that this is a very bad comparison of the two.
You do not take into account formatting and actual I/O.
When printing so many characters without explicitly flushing the output buffer,
the difference may be caused by different flushing strategies of cout and printf,
which in normal situations would not apply.
You only measure the time to put a character in a buffer, plus an unspecified flushing of the output buffer.
The timing for printing a floating-point variable on a new line each time
may show very different results, depending on whether you use endl or '\n' with cout.
 

Bo Persson

Paavo said:
The actual console output probably dominates the timings anyway,
especially on Windows, so these numbers do not tell much.

And in this case it also influences the result. If you change the
order of the tests, the result also changes - the first one is
slightly slower.

:)


Bo Persson
 

Juha Nieminen

Prasoon said:
Which is faster "cout" or "printf" ?

When printing to the console? It doesn't matter because printing to
the console is probably hundreds if not thousands of times slower than
any speed difference between std::cout and std::printf. Any such
difference will be almost completely overwhelmed by the slowness of the
console.

Now, if you were writing to a file, that can make a big difference in
many cases.
 

James Kanze

Which is faster "cout" or "printf" ?

Which is tastier, apples or oranges?
I have written the following program
[...]
I got the output:
cout: 12.844
printf: 12.75
printf was slightly faster!

For this particular use, with the particular implementation you
were using.
But I think the statement "printf is faster than cout" is
nothing but a dangerous overgeneralization.
Am I correct?

Yes. In particular, for the precise program you've written,
there's a good chance that actual IO is dominating both cases,
so the speed of the library code doesn't mean anything. In
fact, this will probably be the case for most uses of the
library.
I am using Intel Core 2 duo processor E7400 @ 2.8 GHz and 4GB
of RAM
A friend of mine said "printf is always faster than cout"

Which is ridiculous. Theoretically, cout can be slightly
faster, since it doesn't have to do any "parsing".
Theoretically, printf can be slightly faster, because there's
only one function call for complex formatting, as opposed to
many. Practically, it all depends, and if you find a large
difference, all it means is that one of them hasn't been
implemented very efficiently.
and got the output of the same program as
cout : 0.14
printf: 0.10
How did he get the output so fast ?

What was he outputting to? And what does clock() measure on
your system? (The presence of a getchar() at the end suggests
Windows, in which case, clock() is broken, and actually measures
elapsed time, rather than CPU.)
I think for 1000000 iterations my friend's output is
impossible! Tell me whether my output is approximately correct,
or my friend's?

If clock() works correctly, both of your figures are way too
large. On my Linux box, I get very close to 0 for both. (Your
output requires no formatting, so there is practically no CPU
involved in either case.)
 

James Kanze

* Prasoon:

[...]
No. It is an overgeneralization but not a dangerous one. In
practice, with current C++ implementations and any I can
imagine in the future (considering that this state of affairs
has persisted for about 10 years or so) printf will be faster
than cout. In theory cout *can* be faster, and I think it was
Dietmar Kuhl (modulo spelling) who once made a really really
fast implementation

Dietmar's implementation of iostream beat any implementation of
printf I've seen, in terms of speed, and in at least one version
of g++, outputting to cout was faster than printf for some types
of output. (It's hard to generalize---his output to std::cout
would normally be done with putc, and not printf, using
<stdio.h>, and putc is probably faster than printf.)

In practice, the major vendors haven't bothered because their
implementations of iostream are already "fast enough".
What you should mainly be concerned about instead is
correctness and maintainability.
Unfortunately, for iostreams these concerns are in direct
conflict. There is far better type safety than with the printf
family, although still with UB for some input operations. On
the other hand, for any but the most trivial formatting and
parsing, the iostream code becomes really verbose & messy,
downright ugly, employing such complex functionality that whole
tomes have been written about it.

Less so than printf, if you use it correctly. But advanced
formatting is never simple. (And of course, neither has any
support for formatting when variable width fonts are used. In
this sense, they're both from an earlier time.)
 

James Kanze

When printing to the console? It doesn't matter because
printing to the console is probably hundreds if not thousands
of times slower than any speed difference between std::cout
and std::printf. Any such difference will be almost completely
overwhelmed by the slowness of the console.
Now, if you were writing to a file, that can make a big
difference in many cases.

For small files, which the system can cache in its memory. For
a large enough file, you'll end up using all of the system
buffers, the writes will require an actual write to disk, and
things will slow down considerably. Try writing 100K, then 200K,
up to a couple of MB. You'll find that a graph of the elapsed
execution times is decidedly non-linear. (Of course, if you're
using clock() under Linux, nothing will change, since it's only
under Windows that clock() doesn't work correctly.)
 

tni

James said:
If clock() works correctly, both of your figures are way too
large. On my Linux box, I get very close to 0 for both. (Your
output requires no formatting, so there is practically no CPU
involved in either case.)

Windows console output is extremely slow.
 

Prasoon

If clock() works correctly, both of your figures are way too
large. On my Linux box, I get very close to 0 for both. (Your
output requires no formatting, so there is practically no CPU
involved in either case.)

I think it measured the elapsed time in my case. I redirected the
output of the code to a file and got much smaller values compared to
my previous ones.
 

Thomas Matthews

Prasoon said:
Which is faster "cout" or "printf" ?

[...]
I got the output:

cout: 12.844
printf: 12.75

printf was slightly faster!

[...]

If you are not using dynamic formatting, just writing constant data,
then cout.write() is about as fast as fwrite(). The whole point
is that these block write functions take the data as-is and
send it on its merry way.

My favorite Hello World program:
#include <iostream>
#include <cstdlib>

int main()
{
    static const char hw[] = "Hello World!\n";
    static const unsigned int LENGTH = sizeof(hw) - 1; // exclude the '\0'
    std::cout.write(hw, LENGTH);
    return EXIT_SUCCESS;
}

If you take a look at your results, the timings seem to be negligible.
The difference in timings is not significant, due to the OS priorities
and the speed of the platform's I/O channel(s). In other words,
the time you save here will be wasted waiting for user input,
a hard drive, internet transmission, etc.

--
Thomas Matthews

C++ newsgroup welcome message:
http://www.slack.net/~shiva/welcome.txt
C++ Faq: http://www.parashift.com/c++-faq-lite
C Faq: http://www.eskimo.com/~scs/c-faq/top.html
alt.comp.lang.learn.c-c++ faq:
http://www.comeaucomputing.com/learn/faq/
Other sites:
http://www.josuttis.com -- C++ STL Library book
http://www.sgi.com/tech/stl -- Standard Template Library
 

James Kanze

Windows console output is extremely slow.

And how does that relate to clock()? The standard says that
"The clock function returns the implementation's best
approximation to the processor time used by the program since
the beginning of an implementation-defined era related only to
the program invocation." There are, of course, enough weasel
words in there to make just about anything formally conform, but
the intent is clear that it should be related to the CPU time
used by the program (not the system), insofar as such is
available. Console output under Linux isn't particularly fast
either, but it's system time, not charged to the program, and it
doesn't show up in clock().

(Presumably, the reason Windows does what it does is for
backwards compatibility with MS-DOS, where no better
approximation was available.)
 

James Kanze

I think it measured the elapsed time in my case. I redirected the
output of the code to a file and got much smaller values compared to
my previous ones.

If you're under Windows, it measures elapsed time. Still under
Windows, you can use the function GetProcessTimes to obtain the
CPU time. (The lpUserTime value it returns corresponds roughly
to what clock() should return.)
 

Prasoon

I also use Ubuntu 9.04 frequently. So no problem with that. :)
If you're under Windows, it measures elapsed time. Still under
Windows, you can use the function GetProcessTimes to obtain the
CPU time. (The lpUserTime value it returns corresponds roughly
to what clock() should return.)

Thanks for that.
 

tni

James said:
And how does that relate to clock()? The standard says that
"The clock function returns the implementation's best
approximation to the processor time used by the program since
the beginning of an implementation-defined era related only to
the program invocation." There are, of course, enough weasel
words in there to make just about anything formally conform, but
the intent is clear that it should be related to the CPU time
used by the program (not the system), insofar as such is
available.

My interpretation of the weasel words is that it's very reasonable to
include system time.
Console output under Linux isn't particularly fast
either,

Well, Linux (terminal is KDE Konsole 4.2.2) is faster than Windows by a
factor of 150. I would call that fast.
but it's system time, not charged to the program, and it
doesn't show up in clock().

Nope. System time is certainly included in the clock() value on my Linux
systems.

Windows (per 'GetProcessTimes()'; clock() reports real time):

Real time: 19500 ns/char
User time: 2100 ns/char
System time: 2750 ns/char
User+sys time: 4850 ns/char

Linux (per 'time'; clock() reports user + system time):

Real time: 117 ns/char
User time: 11.3 ns/char
System time: 19.7 ns/char
User+sys time: 31 ns/char

(ns as in nanoseconds; 1 ns is about 3 clock cycles on this CPU)

So any issue with clock() reporting real time on Windows is WAY smaller
than the difference in console output performance vs. Linux.

When redirecting the output to a file, Linux is about 4x faster; the
writes are completely cached, user+sys time is approximately equal to
real time on both.

(The numbers are for VS 2005 on Windows, GCC 4.3 on Linux; MinGW 4.4 on
Windows is about 10% faster than VS 2005 for this test. MinGW is using
the Windows standard libs, so it's not surprising that it's much slower
than GCC on Linux.)
 

James Kanze

My interpretation of the weasel words is that it's very
reasonable to include system time.

It's debatable. Is the system part of the program, or not? My
first interpretation would be that it isn't, but the point can
easily be argued both ways.

What is clear is that it shouldn't return elapsed time unless no
better alternatives exist (e.g under MS-DOS).
Well, Linux (terminal is KDE Konsole 4.2.2) is faster than
Windows by a factor of 150. I would call that fast.

I've not measured the actual difference, but Linux terminal
output is visibly slower than output to /dev/null, or even
output to a remote file. I would call that slow.
Nope. System time is certainly included in the clock() value
on my Linux systems.

I only tried it on one Linux system; the time from clock() was
the same whether the output went to the terminal, or to
/dev/null.

Come to think of it, however, I don't think that the time used
drawing the characters in the terminal window is "system" time,
either; I'm pretty sure that it is charged to the X server,
which is a separate process. (Time management on Unix systems
is very, very primitive.)
Windows (per 'GetProcessTimes()'; clock() reports real time):
Real time: 19500 ns/char
User time: 2100 ns/char
System time: 2750 ns/char
User+sys time: 4850 ns/char
Linux (per 'time'; clock() reports user + system time):
Real time: 117 ns/char
User time: 11.3 ns/char
System time: 19.7 ns/char
User+sys time: 31 ns/char
(ns as in nano seconds; 1 ns = 3 clock cycles on this CPU)
So any issue with clock() reporting real time on Windows is
WAY smaller than the difference in console output performance
vs. Linux.

I'm afraid I don't understand that sentence.
When redirecting the output to a file, Linux is about 4x
faster; the writes are completely cached, user+sys time is
approximately equal to real time on both.

It depends on how much you're writing. There's a distinct point
where the caching stops working, and the elapsed time makes a
jump.

Of course, in most real applications, you'll be synchronizing
the important writes anyway, to avoid the caching. (In my work,
about the only non-synchronized writes are logging output. And
that very quickly becomes large enough that caching stops
working as well.)
 
