C / C++: is it the end of an era?


santosh

Richard said:
jacob navia said:
The main advantages of my own string library are:

1) it caches string lengths, thus allowing fast len, cat and cpy operations;
2) it resizes buffers when necessary, without troubling the user, and keeps
extra space around so that it doesn't need to go to the well every time;
3) it contains several useful string functions not provided by the standard
library;
4) it is written in ISO C and doesn't depend on any particular compiler
features.

The first of these has genuine and significant time complexity reduction
benefits. The second takes a load off the programmer's shoulders. The third
is basically a sack of nice toys. And the fourth is essential if the code
is to remain viable in the long term.

Number two doesn't imply non-reentrancy, does it?
 

jacob navia

Richard Heathfield wrote:

Whilst that *is* an improvement, it's a micro-optimisation, since each
character in turn is being examined anyway.

Surely not.
memchr can be implemented with a hardware memory scan since processors
like the x86 have a hardware byte scan feature. This allows for
scanning for a single byte, but not for a byte OR zero byte.

This can make quite a difference.
[snip]
The main advantages of my own string library are:

1) it caches string lengths, thus allowing fast len, cat and cpy operations;

What a surprise. You have written a counted string
library. And after all this polemic ... you need it too.
Well, lcc-win32's library caches string lengths too. :)
2) it resizes buffers when necessary, without troubling the user, and keeps
extra space around so that it doesn't need to go to the well every time;

The same for lcc-win32's library
3) it contains several useful string functions not provided by the standard
library;

The same: file to string, string to file, find first/last.
Lcc-win32's is roughly patterned after the C++ one.
4) it is written in ISO C and doesn't depend on any particular compiler
features.

Well, the library uses operator overloading, which allows you
to write:
String s = "abc";
s[12] = 'h';

etc. It has the same feeling as the old C library.
The first of these has genuine and significant time complexity reduction
benefits. The second takes a load off the programmer's shoulders. The third
is basically a sack of nice toys.

I can say the same of lcc-win32's library
And the fourth is essential if the code
is to remain viable in the long term.

The purpose of lcc-win32's library is to demonstrate that a single
modification to the compiler allows a general container library to be
built. The same feature has been used to build a resizable
strings/vector package, a list package, a bitstring package, and I am
working on the hash table package.

It is interesting to note that after all this deprecation of
counted strings, heathfield has written one of his own!
 

Keith Thompson

jacob navia said:
Dave Vandervies wrote:
If knowing the length of strings is that important, can you explain
how counted strings would have made this code easier to write, clearer,
or less error-prone?
/* -8<--Code starts here--8<- */ [snip]
ptr=start=strchr(sentence,'$');
-------
In the strchr call above, a counted strings implementation can
safely replace a strchr with memchr, that doesn't check at each
character the terminating zero but just makes a bounded
memory scan
---------

strchr() has to check for '\0' on each iteration of the loop. A
counted strings implementation has to check whether the index exceeds
the length on each iteration of the loop. My guess is that strchr()
is going to be slightly faster -- but that's only a guess, and it's
likely to vary on different implementations.

[...]
I presume you're referring to *your* string library, not really to
"the" string library. But I'm not convinced that a call to
Strfind_firstof is significantly better, in any sense, than the
existing code.

[...]
But again, it needs to check whether it's reached the end of the
string. Whether that check is done by comparing a character (that
we're already looking at anyway) against '\0', or by comparing an
index (that doesn't otherwise need to be computed) to a stored bound.

[...]
As you can see, the main advantage is that it obviates the tests
for the zero byte.

I don't believe you've demonstrated that this is an advantage in this
case.
 

jacob navia

Keith Thompson wrote:
jacob navia said:
Dave Vandervies wrote:
If knowing the length of strings is that important, can you explain
how counted strings would have made this code easier to write, clearer,
or less error-prone?
/* -8<--Code starts here--8<- */
[snip]
ptr=start=strchr(sentence,'$');

-------
In the strchr call above, a counted strings implementation can
safely replace a strchr with memchr, that doesn't check at each
character the terminating zero but just makes a bounded
memory scan
---------


strchr() has to check for '\0' on each iteration of the loop. A
counted strings implementation has to check whether the index exceeds
the length on each iteration of the loop. My guess is that strchr()
is going to be slightly faster -- but that's only a guess, and it's
likely to vary on different implementations.

No. The big difference is that byte scan is hardwired in some
processors, which makes it considerably FASTER than
searching for a zero byte OR the searched character
 

Richard Heathfield

santosh said:
Number two doesn't imply non-reentrancy, does it?

No, not at all. The string information is wrapped up in an aggregate object
to which the user-programmer has a pointer.
 

Richard Heathfield

jacob navia said:
Richard Heathfield wrote:

Surely not.
memchr can be implemented with a hardware memory scan since processors
like the x86 have a hardware byte scan feature. This allows for
scanning for a single byte, but not for a byte OR zero byte.

This can make quite a difference.

It's likely to be a tiny difference.
[snip]
The main advantages of my own string library are:

1) it caches string lengths, thus allowing fast len, cat and cpy
operations;

What a surprise. You have written a counted string
library.

Well, I don't call it a "counted string" library, but yes, I've written one.
And after all this polemic ...

What polemic? I have said all along that there's nothing wrong with using
third-party string libraries - feel free to check the history if you don't
believe me.
you need it too.

I find it convenient, sometimes. And sometimes I don't. It's good to have
the choice.

4) it is written in ISO C and doesn't depend on any particular compiler
features.

Well, the library uses operator overloading, which allows you
to write:
String s = "abc";
s[12] = 'h';

...which is non-standard syntax (unless String is just a typedef for an
array of a given, hard-coded size, which would defeat the point). It's a
fine idea for people who want to use it, but it ain't C, and it should be
discussed in a newsgroup where it's topical, e.g. your own.
etc. It has the same feeling as the old C library.

But if I use your syntax, my program *won't compile*.

The purpose of lcc-win32's library is to demonstrate that a single
modification to the compiler allows a general container
library to be built.

This can be done without modifying the compiler, too.
It is interesting to note that after all this deprecation of
counted strings, heathfield has written one of his own!

I haven't deprecated "counted strings" at all. What makes you think I have
done so?
 

Richard Heathfield

jacob navia said:
Keith Thompson wrote:


No. The big difference is that byte scan is hardwired in some
processors, which makes it considerably FASTER than
searching for a zero byte OR the searched character

What is true for some processors is not true for all, and what is faster on
Machine A may turn out to be slower on Machine B. And even if that turns
out not to be the case, it's still microoptimisation. If this is truly the
program's bottleneck, the programmer is skilled indeed!

In such cases, write the clearest code you can.
 

Dave Vandervies

Dave Vandervies wrote:

(Most of code snipped.)

-------
In the strchr call above, a counted strings implementation can
safely replace a strchr with memchr, that doesn't check at each
character the terminating zero but just makes a bounded
memory scan

The main use of this code will be processing data that has already come
through a serial port at 4800 bits per second. (It's part of a program
I'm playing with to display GPS data in a useful manner.) Can the amount
of time this micro-optimization would save on a modern system be expressed
as a percentage of the time taken to send the string through the serial
link with fewer than five zeros to the right of the decimal point?

In the common case (correct and well-formed input), the '$' will come
through the input port immediately following the end of the last input
sentence and therefore be the first character in the input buffer.
Can the amount of time saved in this case be expressed as a strictly
positive value?

One of the entries on my would-be-nice-to-do-sometime list is to port
this program (once I'm finished with it, which might take a while)
to a handheld device. Can the probability that such a device would
both have support for this micro-optimization and be slow enough that
it would help be expressed as a strictly positive value?



If a simple find-first-of was what I wanted, I'd've used strcspn or
strpbrk, which have the advantage of being supported on any hosted C
implementation. Does your Strfind_firstof also calculate the xor of
the values of the characters it skips over?


That should've been '\0'. Good catch.


Is that actually an improvement over just including '\0' in the set of
characters it searches for? It seems odd that doing the same check for
one more character would be slower than adding a different test.

But I may disappoint you, since in the lcc-win32 implementation the
strings are probably much slower than they ought to be: the main
thrust of the package is to provide more security, not speed.

I will start a faster version soon.

As you can see, the main advantage is that it obviates the tests
for the zero byte.

So basically you're saying that if I were to use your counted string
library I could get the benefits of some microoptimizations that avoided
looking for a terminating '\0', at the expense of using a library that's
much slower than it ought to be because speed wasn't a main design goal?
Somehow that seems kind of incongruous.


dave
 

jacob navia

Dave Vandervies wrote:
The main use of this code will be processing data that has already come
through a serial port at 4800 bits per second. (It's part of a program
I'm playing with to display GPS data in a useful manner.) Can the amount
of time this micro-optimization would save on a modern system be expressed
as a percentage of the time taken to send the string through the serial
link with fewer than five zeros to the right of the decimal point?

If you are connecting a modern PC at 4800 baud...
the main bottleneck is outside string handling anyway.

You change the rules after the fact, excuse me.

You asked:
< quote >
If knowing the length of strings is that important, can you explain
how counted strings would have made this code easier to write, clearer,
or less error-prone?

In the common case (correct and well-formed input), the '$' will come
through the input port immediately following the end of the last input
sentence and therefore be the first character in the input buffer.
Can the amount of time saved in this case be expressed as a strictly
positive value?

Again, you change the rules. You did NOT say that in the first
post!

Now it is obvious that you should do:

if (*sentence != '$') {
    ptr = strchr(sentence, '$');
} else {
    ptr = sentence;
}
One of the entries on my would-be-nice-to-do-sometime list is to port
this program (once I'm finished with it, which might take a while)
to a handheld device. Can the probability that such a device would
both have support for this micro-optimization and be slow enough that
it would help be expressed as a strictly positive value?

If the processor belongs to the x86 family, definitely yes.

That should've been '\0'. Good catch.

This is a common problem with zero-terminated strings :)
I have made this error several times too.
So basically you're saying that if I were to use your counted string
library I could get the benefits of some microoptimizations that avoided
looking for a terminating '\0', at the expense of using a library that's
much slower than it ought to be because speed wasn't a main design goal?
Somehow that seems kind of incongruous.

Well, counted strings are more secure by design. It is not
so much their intrinsic efficiency as the fact that they allow
programs that do not start unbounded memory scans...

No, I haven't optimized it yet. And I am not trying to sell you
something. The main thrust of my work was to demonstrate the
far-reaching implications of a small change to the language itself,
to encourage other people and implementers to do the same.
 

Dave Vandervies

Dave Vandervies wrote:

If you are connecting a modern PC at 4800 baud...
the main bottleneck is outside string handling anyway.

You change the rules after the fact, excuse me.

You asked:
< quote >
If knowing the length of strings is that important, can you explain
how counted strings would have made this code easier to write, clearer,
or less error-prone?

< end quote>

I don't see "faster" in my list. Is code that calls a library routine
that does a fixed-length memory scan easier to write, clearer, or less
error-prone than code that calls a library routine that stops at the
'\0' that terminates a string?

If you're going to complain about changing the rules after the fact,
a better place to start complaining might be with the comment about
replacing a terminating-character check with a size-bounded memory scan
(both wrapped inside a library routine) for, as far as I can tell,
no reason except that some processors might be able to run it a little
bit faster.

Again, you change the rules. You did NOT say that in the first
post!

Now it is obvious that you should do:

if (*sentence != '$') {
    ptr = strchr(sentence, '$');
} else {
    ptr = sentence;
}

Why should I do that? It's neither easier to write, nor clearer,
nor less error-prone than letting strchr check the first character
for me. It's also quite unlikely to save a noticeable (or probably even
measurable) amount of time.

If the processor belongs to the x86 family, definitely yes.

How many handheld devices that I would be able to get my hands on at
some point in the not-too-distant future are likely to be built around
an x86 processor that's slow enough that it might have trouble keeping
up with a 4800bps input stream?
I don't pay a whole lot of attention to what goes in embedded systems,
but it seems to me that the only way it would be short on CPU cycles
for this kind of operation is if it's been optimized for insanely low
cost and power consumption, and I'm not sure the x86 family is a major
player in that market.


Well, counted strings are more secure by design. It is not
so much their intrinsic efficiency as the fact that they allow
programs that do not start unbounded memory scans...

Secure by design, or just with a different set of potential
security-related bugs to watch out for? (Do your counted strings keep
track of the available space and make sure it's not exceeded? Does a
set of bytes with a random length field make a valid counted string?
Can a programmer write safe code without knowledge about how to use them
and a fanatical (or at least not-nonexistent) dedication to correctness?
Why should the answers be any different for code that uses null-terminated
strings?)

I've never written string code that starts unbounded memory scans;
they're just bounded by a condition other than "have I gotten to N
characters past the beginning?". If my sets of bytes don't have a '\0'
at the end, I don't call them strings and don't treat them as strings -
it's that easy.


(It's worth noting that even the end-of-string check bug in the code I
posted wouldn't've resulted in walking off the end of the string; giving
it carefully crafted bad data (rather than putting it downstream of an
input handler that only passed on complete lines) would have caused the
assert farther down to fail (immediately giving a clue that Something
Isn't Right, after which a brief examination of the execution on that
data would have turned up the error), or with NDEBUG defined would have
resulted in a deterministic failure to correctly report some types of
badly-formatted data - not exactly a terrible security flaw.)

No, I haven't optimized it yet. And I am not trying to sell you
something. The main thrust of my work was to demonstrate the
far-reaching implications of a small change to the language itself,
to encourage other people and implementers to do the same.

It seems to me that you've yet to convince anybody that these "far
reaching implications" are all that far-reaching, or for that matter
relevant at all.


dave
 

jacob navia

Dave Vandervies wrote:
Secure by design, or just with a different set of potential
security-related bugs to watch out for? (Do your counted strings keep
track of the available space and make sure it's not exceeded?
Yes


Does a
set of bytes with a random length field make a valid counted string?

No. I test that at the end of the length field there is a ZERO byte,
i.e. the length must point to a zero byte. Maybe I will change that
for a longer signature.
Can a programmer write safe code without knowledge about how to use them
and a fanatical (or at least not-nonexistent) dedication to correctness?

It is designed to allow porting existing string library code without
many modifications. That's the whole point.

Instead of strcat, Strcat, etc. That's why operator overloading comes
in handy, since it allows one to replace

char *a = "A string";

by
String a = "A string";

WITHOUT having to write:
String a = newString("A string"); // or whatever
Why should the answers be any different for code that uses null-terminated
strings?)

They are
 

Richard Heathfield

jacob navia said:

Instead of strcat, Strcat, etc. That's why operator overloading comes
in handy, since it allows one to replace

char *a = "A string";

by
String a = "A string";

Not in C, it doesn't.
 

Keith Thompson

jacob navia said:
Keith Thompson wrote:

No. The big difference is that byte scan is hardwired in some
processors, which makes it considerably FASTER than
searching for a zero byte OR the searched character

Have you measured it? More importantly, have you measured it on
multiple platforms? Do you have any actual performance numbers to
share with us? Or are you just guessing?
 

Richard Tobin

jacob navia said:
No. The big difference is that byte scan is hardwired in some
processors, which makes it considerably FASTER than
searching for a zero byte OR the searched character

I would have expected that on modern machines such things were limited
by memory access (even for data in the cache).

-- Richard
 

jacob navia

Richard Heathfield wrote:
jacob navia said:




Not in C, it doesn't.

BRAVO heathfield!

Bravo!

What an answer. I am really impressed by it.

So many arguments!

I am sure that is the best answer your
mind can give :)
 

jacob navia

Keith Thompson wrote:
jacob navia said:
Keith Thompson wrote:
jacob navia <[email protected]> writes:
[...]
-------
In the strchr call above, a counted strings implementation can
safely replace a strchr with memchr, that doesn't check at each
character the terminating zero but just makes a bounded
memory scan
---------

strchr() has to check for '\0' on each iteration of the loop. A
counted strings implementation has to check whether the index exceeds
the length on each iteration of the loop. My guess is that strchr()
is going to be slightly faster -- but that's only a guess, and it's
likely to vary on different implementations.

No. The big difference is that byte scan is hardwired in some
processors, which makes it considerably FASTER than
searching for a zero byte OR the searched character


Have you measured it? More importantly, have you measured it on
multiple platforms? Do you have any actual performance numbers to
share with us? Or are you just guessing?
#include <stdio.h>
#include <time.h>
#include <string.h>
#define MAXITER 10000000
int main(void)
{
    char s[4096];
    int i;
    time_t t, tStrchr, tMemchr;

    for (i = 0; i < sizeof(s)-1; i++) {
        s[i] = 'a';
    }
    s[sizeof(s)-1] = 0;
    t = time(NULL);
    for (i = 0; i < MAXITER; i++) {
        strchr(s, '1');
    }
    tStrchr = time(NULL) - t;
    printf("Time for strchr=%d\n", tStrchr);
    t = time(NULL);
    for (i = 0; i < MAXITER; i++) {
        memchr(s, '1', sizeof(s));
    }
    tMemchr = time(NULL) - t;
    printf("Time for memchr=%d\n", tMemchr);
}
Time for strchr=84
Time for memchr=41

Machine: AMD64, 2 GHz
Compiler: lcc-win32, no optimizations

With 64-bit MSVC the difference is:
Time for strchr=63
Time for memchr=62

On the same machine! This is because the memchr routine is probably
not optimized at all. When I stop the program in the debugger I see
routines written in C. In the 32-bit versions they were written in
assembler...

Measurements depend on the CPU, of course, but as I said before,
this is for x86 CPUs or similar. They are quite popular,
though.
 

Dave Vandervies

Keith Thompson wrote:
Have you measured it? More importantly, have you measured it on
multiple platforms? Do you have any actual performance numbers to
share with us? Or are you just guessing?
#include <stdio.h>
#include <time.h>
#include <string.h>
#define MAXITER 10000000
int main(void)
{
char s[4096];
int i;
time_t t,tStrchr,tMemchr;

for (i=0; i<sizeof(s)-1;i++) {
s[i] = 'a';
}
s[sizeof(s)-1] = 0;
t = time(NULL);
for (i=0; i<MAXITER;i++) {
strchr(s,'1');
}
tStrchr= time(NULL) - t;
printf("Time for strchr=%d\n",tStrchr);
t = time(NULL);
for (i=0; i<MAXITER;i++) {
memchr(s,'1',sizeof(s));
}
tMemchr=time(NULL)-t;
printf("Time for memchr=%d\n",tMemchr);
}
Time for strchr=84
Time for memchr=41

Machine: AMD64, 2 GHz
Compiler: lcc-win32, no optimizations

With 64-bit MSVC the difference is:
Time for strchr=63
Time for memchr=62


Are you sure of your numbers? Here's what I got:

--------
dave@buttons:~/clc (0) $ cat jn_memchr-strchr-compare.c
#include <stdio.h>
#include <time.h>
#include <string.h>
#define MAXITER 10000000
int main(void)
{
char s[4096];
int i;
time_t t,tStrchr,tMemchr;

for (i=0; i<sizeof(s)-1;i++) {
s[i] = 'a';
}
s[sizeof(s)-1] = 0;
t = time(NULL);
for (i=0; i<MAXITER;i++) {
strchr(s,'1');
}
tStrchr= time(NULL) - t;
printf("Time for strchr=%d\n",tStrchr);
t = time(NULL);
for (i=0; i<MAXITER;i++) {
memchr(s,'1',sizeof(s));
}
tMemchr=time(NULL)-t;
printf("Time for memchr=%d\n",tMemchr);
}

dave@buttons:~/clc (0) $ gcc -W -Wall -ansi -pedantic jn_memchr-strchr-compare.c
jn_memchr-strchr-compare.c: In function `main':
jn_memchr-strchr-compare.c:11: warning: comparison between signed and unsigned
jn_memchr-strchr-compare.c:20: warning: int format, time_t arg (arg 2)
jn_memchr-strchr-compare.c:26: warning: int format, time_t arg (arg 2)
jn_memchr-strchr-compare.c:27: warning: control reaches end of non-void function
dave@buttons:~/clc (0) $ cp jn_memchr-strchr-compare.c jn_memchr-strchr-compare-fixed.c
dave@buttons:~/clc (0) $ vi jn_memchr-strchr-compare-fixed.c
dave@buttons:~/clc (0) $ cat jn_memchr-strchr-compare-fixed.c
#include <stdio.h>
#include <time.h>
#include <string.h>
#define MAXITER 10000000
int main(void)
{
char s[4096];
size_t i;
time_t t,tStrchr,tMemchr;

for (i=0; i<sizeof(s)-1;i++) {
s[i] = 'a';
}
s[sizeof(s)-1] = 0;
t = time(NULL);
for (i=0; i<MAXITER;i++) {
strchr(s,'1');
}
tStrchr= time(NULL);
printf("Time for strchr=%f\n",difftime(tStrchr,t));
t = time(NULL);
for (i=0; i<MAXITER;i++) {
memchr(s,'1',sizeof(s));
}
tMemchr=time(NULL);
printf("Time for memchr=%f\n",difftime(tMemchr,t));

return 0;
}

dave@buttons:~/clc (0) $ gcc -W -Wall -ansi -pedantic jn_memchr-strchr-compare-fixed.c
dave@buttons:~/clc (0) $ ./a.out
Time for strchr=0.000000
Time for memchr=0.000000
dave@buttons:~/clc (0) $ gcc --version
gcc (GCC) 3.3.6
Copyright (C) 2003 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

dave@buttons:~/clc (0) $ uname -a
Linux buttons 2.6.15.3 #12 Sun Nov 26 16:55:30 EST 2006 i686 unknown unknown GNU/Linux
dave@buttons:~/clc (0) $ cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 13
model name : Intel(R) Pentium(R) M processor 1.73GHz
stepping : 8
cpu MHz : 800.219
cache size : 2048 KB
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 2
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat clflush dts acpi mmx fxsr sse sse2 ss tm pbe nx est tm2
bogomips : 1601.62

dave@buttons:~/clc (0) $
--------

So whatever microoptimization you've been doing deep inside memchr,
gcc and libc are still smart enough to do it Rather Faster even without
being asked for optimization.


dave
(completely ignoring the question of what proportion of the running time
a typical program spends inside strchr)
 

Keith Thompson

jacob navia said:
Richard Heathfield wrote:

BRAVO heathfield!

Bravo!

What an answer. I am really impressed by it.

So many arguments!

I am sure that is the best answer your
mind can give :)

C does not support operator overloading. Any features you've
implemented that require operator overloading are therefore off-topic
in this newsgroup, and you should discuss them somewhere else. Any
such features that are implemented only in your own lcc-win32 are
useless (and not particularly interesting) to those of us who need to
use other compilers.

Do you understand that? Or are you going to repeat your standard line
that most of us are opposed to progress?

What better argument is needed?
 

jacob navia

Dave Vandervies wrote:
jacob navia said:
Keith Thompson wrote:
Have you measured it? More importantly, have you measured it on
multiple platforms? Do you have any actual performance numbers to
share with us? Or are you just guessing?

#include <stdio.h>
#include <time.h>
#include <string.h>
#define MAXITER 10000000
int main(void)
{
char s[4096];
int i;
time_t t,tStrchr,tMemchr;

for (i=0; i<sizeof(s)-1;i++) {
s = 'a';
}
s[sizeof(s)-1] = 0;
t = time(NULL);
for (i=0; i<MAXITER;i++) {
strchr(s,'1');
}
tStrchr= time(NULL) - t;
printf("Time for strchr=%d\n",tStrchr);
t = time(NULL);
for (i=0; i<MAXITER;i++) {
memchr(s,'1',sizeof(s));
}
tMemchr=time(NULL)-t;
printf("Time for memchr=%d\n",tMemchr);
}
Time for strchr=84
Time for memchr=41

Machine: AMD64, 2 GHz
Compiler: lcc-win32, no optimizations

With 64-bit MSVC the difference is:
Time for strchr=63
Time for memchr=62



Are you sure of your numbers? Here's what I got:

--------
dave@buttons:~/clc (0) $ cat jn_memchr-strchr-compare.c
#include <stdio.h>
#include <time.h>
#include <string.h>
#define MAXITER 10000000
int main(void)
{
char s[4096];
int i;
time_t t,tStrchr,tMemchr;

for (i=0; i<sizeof(s)-1;i++) {
s[i] = 'a';
}
s[sizeof(s)-1] = 0;
t = time(NULL);
for (i=0; i<MAXITER;i++) {
strchr(s,'1');
}
tStrchr= time(NULL) - t;
printf("Time for strchr=%d\n",tStrchr);
t = time(NULL);
for (i=0; i<MAXITER;i++) {
memchr(s,'1',sizeof(s));
}
tMemchr=time(NULL)-t;
printf("Time for memchr=%d\n",tMemchr);
}

dave@buttons:~/clc (0) $ gcc -W -Wall -ansi -pedantic jn_memchr-strchr-compare.c
jn_memchr-strchr-compare.c: In function `main':
jn_memchr-strchr-compare.c:11: warning: comparison between signed and unsigned
jn_memchr-strchr-compare.c:20: warning: int format, time_t arg (arg 2)
jn_memchr-strchr-compare.c:26: warning: int format, time_t arg (arg 2)
jn_memchr-strchr-compare.c:27: warning: control reaches end of non-void function
dave@buttons:~/clc (0) $ cp jn_memchr-strchr-compare.c jn_memchr-strchr-compare-fixed.c
dave@buttons:~/clc (0) $ vi jn_memchr-strchr-compare-fixed.c
dave@buttons:~/clc (0) $ cat jn_memchr-strchr-compare-fixed.c
#include <stdio.h>
#include <time.h>
#include <string.h>
#define MAXITER 10000000
int main(void)
{
char s[4096];
size_t i;
time_t t,tStrchr,tMemchr;

for (i=0; i<sizeof(s)-1;i++) {
s[i] = 'a';
}
s[sizeof(s)-1] = 0;
t = time(NULL);
for (i=0; i<MAXITER;i++) {
strchr(s,'1');
}
tStrchr= time(NULL);
printf("Time for strchr=%f\n",difftime(tStrchr,t));
t = time(NULL);
for (i=0; i<MAXITER;i++) {
memchr(s,'1',sizeof(s));
}
tMemchr=time(NULL);
printf("Time for memchr=%f\n",difftime(tMemchr,t));

return 0;
}

dave@buttons:~/clc (0) $ gcc -W -Wall -ansi -pedantic jn_memchr-strchr-compare-fixed.c
dave@buttons:~/clc (0) $ ./a.out
Time for strchr=0.000000
Time for memchr=0.000000
dave@buttons:~/clc (0) $ gcc --version
gcc (GCC) 3.3.6
Copyright (C) 2003 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

dave@buttons:~/clc (0) $ uname -a
Linux buttons 2.6.15.3 #12 Sun Nov 26 16:55:30 EST 2006 i686 unknown unknown GNU/Linux
dave@buttons:~/clc (0) $ cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 13
model name : Intel(R) Pentium(R) M processor 1.73GHz
stepping : 8
cpu MHz : 800.219
cache size : 2048 KB
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 2
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat clflush dts acpi mmx fxsr sse sse2 ss tm pbe nx est tm2
bogomips : 1601.62

dave@buttons:~/clc (0) $
--------

So whatever microoptimization you've been doing deep inside memchr,
gcc and libc are still smart enough to do it Rather Faster even without
being asked for optimization.


dave
(completely ignoring the question of what proportion of the running time
a typical program spends inside strchr)


GREAT DAVE!!!
1) You are a genius. You can even correct warnings that you yourself
provoked! Impressive.

2) I explicitly disabled optimizations. Apparently you didn't. Then
you got what you wanted. ASTOUNDING.


1) thompson asks me if I have data for any difference between strchr and
memchr.
2) I take the effort to do it. I provide the data.
3) You start by correcting pedantic warnings. Surely, being a pedant like
your mentors, you stop there.

NICE.

Stay there
 

Richard Heathfield

jacob navia said:

GREAT DAVE!!!
1) You are a genius.

Well, he's pretty clever.

Sarcasm might work for Dan Pop, but it doesn't work for Bozo the Clown, and
it doesn't work for you.
 
