Superbasic efficiency question


Jonathan Burd

Sethalicious said:
I'm thinking that you're getting a silent error in the code. I don't
believe an integer can handle the value 1000000000 in the following:

for (int i = 0; i < 1000000000; i++) { ... }

So what's happening is your integer "i" is overflowing and getting set
to an unpredictable value (possibly negative). You might have better
luck using an "unsigned long" or even "std::size_t".

I'm not sure what the standard says, but I think most compilers
implement 32 bit integers. So an "unsigned" int is 0 - 2^32, and the
"signed" int is -2^16 to 2^16. (Subtract or add a 1 where needed).

Check your compiler's docs to see the sizes of the integral data types.


Otherwise, there should be no real difference in performance of the
code with optimizations on. When you measure the time, remember to
record CPU clock cycles instead of time in seconds.
Hope this helps!

Chris J. (aka Sethalicious)

Duplicate posts? Get Thunderbird and subscribe to this newsgroup
using a decent NG server (news.individual.net is good.)

Secondly, please keep C in c.l.c and C++ in c.l.c++.

A bit can have one of two states (values). A total of 2^1 states.
Range: 0 to 2^1-1.

An octet byte has 8 bits, each bit of which can have one of two
states. A total of 2^8 states. Unsigned Range: 0 to 2^8-1.
Signed range: -2^7 to (2^7-1).

32-bit number:
Unsigned Range: 0 to 2^32-1
Signed range: -2^31 to (2^31-1)
Total numbers: 2^32

% grep "INT_MAX" /mingw/include/limits.h
#define INT_MAX 2147483647
#define INT_MIN (-INT_MAX-1)
#define UINT_MAX 0xffffffff

1000000000 < 2147483647
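
If you would rather not grep compiler headers, the same limits can be
queried portably; a minimal sketch:

#include <climits>
#include <limits>
#include <iostream>

int main() {
    // Macro limits from <climits>
    std::cout << "INT_MAX  = " << INT_MAX << '\n'
              << "INT_MIN  = " << INT_MIN << '\n'
              << "UINT_MAX = " << UINT_MAX << '\n';

    // The same information via std::numeric_limits from <limits>
    std::cout << "int value bits (excluding sign): "
              << std::numeric_limits<int>::digits << '\n';
    return 0;
}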

Regards,
Jonathan.
 

matthias_k

I'm working on a scientific computing application. I need a class
called Element which is no more than a collection of integers, or
"nodes", and has only one method, int getNode(int i).

I would like to implement it in the most efficient way possible. So I
summoned up my programming intellect and asked myself: do I want to
have members such as int a, b, c, d, e, or a single member such as int
a[5]? So I wrote the following snippet and compiled it with the -O3 flag:

int main(int argc, char *argv[]) {

    /*
    int a, b, c, d, e;
    for (int i = 0; i < 1000000000; i++) {
        a = 1;
        b = 2;
        c = 3;
        d = 4;
        e = 5;
    }
    */

    int a[5];
    for (int i = 0; i < 1000000000; i++) {
        a[0] = 1;
        a[1] = 2;
        a[2] = 3;
        a[3] = 4;
        a[4] = 5;
    }

    return 0;
}

The first (commented out) version ran twice as fast. (For doubles
instead of ints, it was a factor of 4). So the simpleton part of me
thinks that that answers my question. The remaining part tells me that
it is never that simple. Finally, the cynical part of me thinks that it
all doesn't matter and other parts of the program are bound to be far
more time consuming.

Please help me resolve my internal struggle.
Many thanks in advance!

Aaron Fude

First of all, your implementation performs 1000000000-1 completely
unnecessary steps. You are assigning the same values to the same
variables a billion times. It's most probable that your compiler will
optimize the 999,999,999 unneeded iterations away... So this example
isn't a very good starting point for checking efficiency.

Second, as far as I can tell, the second version runs slower because you
need to dereference a pointer 5 billion times.

The expression:
a[2] = 3;
is analogous to:
*(a+2) = 3;

That may or may not explain the longer runtime, but it could be a reason.

Regards,
Matthias
 

Peter Koch Larsen

I'm working on a scientific computing application. I need a class
called Element which is no more than a collection of integers, or
"nodes", and has only one method, int getNode(int i).

I would like to implement it in the most efficient way possible. So I
summoned up my programming intellect and asked myself: do I want to
have members such as int a, b, c, d, e, or a single member such as int
a[5]? So I wrote the following snippet and compiled it with the -O3 flag:

int main(int argc, char *argv[]) {

    /*
    int a, b, c, d, e;
    for (int i = 0; i < 1000000000; i++) {
        a = 1;
        b = 2;
        c = 3;
        d = 4;
        e = 5;
    }
    */

    int a[5];
    for (int i = 0; i < 1000000000; i++) {
        a[0] = 1;
        a[1] = 2;
        a[2] = 3;
        a[3] = 4;
        a[4] = 5;
    }

    return 0;
}

The first (commented out) version ran twice as fast. (For doubles
instead of ints, it was a factor of 4). So the simpleton part of me
thinks that that answers my question. The remaining part tells me that
it is never that simple. Finally, the cynical part of me thinks that it
all doesn't matter and other parts of the program are bound to be far
more time consuming.

Please help me resolve my internal struggle.
Many thanks in advance!

Aaron Fude
Do not forget to examine the assembly code, either in the debugger or by
having the compiler produce an assembly listing. This is to make sure that
you are measuring what you believe you are measuring.
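
With gcc/g++, for instance, something along these lines produces a
listing you can inspect (the file names here are just placeholders):

% g++ -O3 -S -o snippet.s snippet.cpp
% less snippet.s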

/Peter
 

chris

I'm working on a scientific computing application. I need a class
called Element which is no more than a collection of integers, or
"nodes", and has only one method, int getNode(int i).

I would like to implement it in the most efficient way possible. So I
summoned up my programming intellect and asked myself: do I want to
have members such as int a, b, c, d, e, or a single member such as int
a[5]? So I wrote the following snippet and compiled it with the -O3 flag:

int main(int argc, char *argv[]) {

    /*
    int a, b, c, d, e;
    for (int i = 0; i < 1000000000; i++) {
        a = 1;
        b = 2;
        c = 3;
        d = 4;
        e = 5;
    }
    */

    int a[5];
    for (int i = 0; i < 1000000000; i++) {
        a[0] = 1;
        a[1] = 2;
        a[2] = 3;
        a[3] = 4;
        a[4] = 5;
    }

    return 0;
}

The first (commented out) version ran twice as fast. (For doubles
instead of ints, it was a factor of 4). So the simpleton part of me
thinks that that answers my question. The remaining part tells me that
it is never that simple. Finally, the cynical part of me thinks that it
all doesn't matter and other parts of the program are bound to be far
more time consuming.

What level of optimisation are you using?

When I use -O2 with the most recent version of gcc, both of these
programs are compiled down to an empty loop (unfortunately gcc doesn't
eliminate empty loops as of yet).

Chris
 

Lawrence Kirby

(e-mail address removed) wrote:
....
int main(int argc, char *argv[]) {

    /*
    int a, b, c, d, e;
    for (int i = 0; i < 1000000000; i++) {
        a = 1;
        b = 2;
        c = 3;
        d = 4;
        e = 5;
    }
    */

    int a[5];
    for (int i = 0; i < 1000000000; i++) {
        a[0] = 1;
        a[1] = 2;
        a[2] = 3;
        a[3] = 4;
        a[4] = 5;
    }

    return 0;
}

The first (commented out) version ran twice as fast. (For doubles
instead of ints, it was a factor of 4). So the simpleton part of me
thinks that that answers my question. The remaining part tells me that
it is never that simple.

Right, there's no point in assigning values to variables if you never use
the value. So even if the compiler generates the sort of code you expect,
you are only measuring half of the problem at best.

Probably. Compilers can also be quite good at "optimising" pointless
operations. However, it is easier for compilers to track the usage of
individual variables than of arrays. Your compiler may simply be optimising
the useless code in the first case better than the useless code in the
second. The situation when the values are actually used could be quite
different.

If you need an indexed lookup for the values, put them in an array; if
your code accesses the values directly and independently, then don't.
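
As a rough illustration of that point, a benchmark along these lines at
least makes the stored values feed into an observable result (the loop
bound, the i-dependent values and the final printf are arbitrary choices
for this sketch, and even this is no substitute for profiling the real
getNode in context):

#include <cstdio>
#include <ctime>

int main() {
    int a[5];
    unsigned long sum = 0;

    std::clock_t start = std::clock();
    for (int i = 0; i < 100000000; i++) {
        a[0] = i;
        a[1] = i + 1;
        a[2] = i + 2;
        a[3] = i + 3;
        a[4] = i + 4;
        sum += a[0] + a[1] + a[2] + a[3] + a[4];   // consume the values
    }
    std::clock_t end = std::clock();

    // Printing the sum keeps the loop observable; std::clock() reports
    // CPU time rather than wall-clock time.
    std::printf("sum = %lu, cpu time = %.2f s\n",
                sum, double(end - start) / CLOCKS_PER_SEC);
    return 0;
}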
matthias_k said:
First of all, your implementation performs 1000000000-1 completely
unnecessary steps. You are assigning the same values to the same
variables a billion times. It's most probable that your compiler will
optimize the 999,999,999 unneeded iterations away... So this example
isn't a very good starting point for checking efficiency.

Although many compiler writers take the view that if you didn't want a
loop you could have written the code without one. So removing a loop
altogether is an optimisation they deliberately choose not to implement.
matthias_k said:
Second, as far as I can tell, the second version runs slower because you
need to dereference a pointer 5 billion times.

The expression:
a[2] = 3;
is analogous to:
*(a+2) = 3;

That appears to be the case at the source level, but if you think about
it, a[2] just represents an int object in memory in much the same way as
the variable name c does. It is entirely possible for the code generated
to access them (i.e. using a constant index like this) to be identical.
Or, to put it another way, consider

int a;

*&a = 1;

which is very similar to what you are doing with

int a[1];

a[0] = 1;

The use of an array name in a context like this creates an address, so
loosely speaking there is an implicit & in a[0] as well as an implicit *.
However you look at it, this is something that a compiler is likely to
generate reasonable code for.
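
To make the two layouts being compared concrete, here is a hypothetical
sketch of the Element class each way (member and class names are mine,
not the OP's actual code):

// Layout 1: individual named members, selected with a switch.
class ElementVars {
    int a, b, c, d, e;
public:
    int getNode(int i) const {
        switch (i) {
        case 0: return a;
        case 1: return b;
        case 2: return c;
        case 3: return d;
        default: return e;
        }
    }
};

// Layout 2: a single array member, selected by index.
class ElementArray {
    int nodes[5];
public:
    int getNode(int i) const { return nodes[i]; }
};

For a constant index such as getNode(2), an optimising compiler can
usually reduce either version to a single load from a fixed offset
within the object, which is the sense in which a[2] and a named member
are the same kind of thing.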


Lawrence
 

CBFalconer

Sethalicious said:
Whoops. Totally forgot about 2's complement. :-o I stand corrected!

Think again. int is only guaranteed -(2^15-1) .. 2^15-1. The above
range (with the -1 at the positive end) is only guaranteed for long int.
 

aaronfude

My feeling is that when you try to implement 'getNode', you will find
that an array-based implementation is faster than a switch statement.
OTOH, you might find that templatizing 'getNode' on i, and then
specializing for values of i is faster yet. Presumably, there is some
more generic part of your application that makes 'getNodeA',
'getNodeB', etc., prohibitive. /david


This is more of an optimization question. Can't the compiler somehow
unwrap and inline the following:

int getNode(int i) {
    switch (i) {
    case 0: return a;
    case 1: return b;
    case 2: return c;
    // ...
    }
}
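
For reference, the "templatize getNode on i" idea quoted above might
look roughly like the following sketch (member names a..e are assumed,
and whether this is actually a win is questioned further down the
thread):

struct Element {
    int a, b, c, d, e;

    // Index supplied at compile time; each use resolves to a direct
    // member access with no run-time switch.
    template <int I> int getNode() const;
};

template <> inline int Element::getNode<0>() const { return a; }
template <> inline int Element::getNode<1>() const { return b; }
template <> inline int Element::getNode<2>() const { return c; }
template <> inline int Element::getNode<3>() const { return d; }
template <> inline int Element::getNode<4>() const { return e; }

// Usage: elem.getNode<2>() rather than elem.getNode(2).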
 

E. Robert Tisdale

This is more of an optimization question.
Can't the compiler somehow unwrap and inline the following:
inline
int getNode(int i) {
    switch (i) {
    case 0: return a;
    case 1: return b;
    case 2: return c;
    // ...
    }
}
 

shez

Gianni said:
That's true. A pointer to member might be an alternative.

Not really. A decent optimising compiler will implement your switch
statement using a static table. Just do whatever looks best for you
(i.e., most readable).
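
For what it's worth, the pointer-to-member alternative mentioned above
could look roughly like this (purely illustrative, with assumed members
a..e; as noted, a plain switch or an array member is usually at least as
good and more readable):

struct Element { int a, b, c, d, e; };

int getNode(const Element& elem, int i) {
    // Table of pointers to the int members of Element.
    static int Element::* const table[5] = {
        &Element::a, &Element::b, &Element::c, &Element::d, &Element::e
    };
    return elem.*table[i];
}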

-shez-
 

davidrubin

shez said:
Not really. A decent optimising compiler will implement your switch
statement using a static table. Just do whatever looks best for you
(i.e., most readable).

-shez-

Meaning that switch and array-based implementations might produce the
same object code? /david
 

Jerry Coffin

(e-mail address removed) wrote:

[ ... ]
You might be able to squeeze out some more performance though with a
variable-based implementation if you templatize 'getNode' on i, and
specialize for values of i.

This might work, but it might easily be considerably slower --
specializing like this will produce a separate piece of code to be
executed for each item that's retrieved. That's not very likely to work
well with the cache, and it won't take very many cache misses for the
net effect on speed to be negative.
 

David White

Well, it would of course be inlined...

The inlining is the potential problem. You have different code for each
specialized call, so there's more code being executed in total and therefore
greater probability of a cache miss.

DW
 

davidrubin

This is interesting. I guess you are claiming that loading an array
address, computing the offset of an element, and dereferencing is
potentially cheaper than loading each individual variable since the
array address is more likely to stay in the cache than any one of the
variable values? Why would it be the case that any array element value
is more likely to be cached than a variable value? But if you are
correct, this suggests that having individual 'getNodeA', 'getNodeB',
etc., functions is not a good choice for at least this reason. /david
 

Jerry Coffin

This is interesting. I guess you are claiming that loading an array
address, computing the offset of an element, and dereferencing is
potentially cheaper than loading each individual variable since the
array address is more likely to stay in the cache than any one of the
variable values?

What I, at least, was suggesting was that one piece of code to compute
the effective address is typically a lot smaller than a million
individual pieces of code, each with a single hard-coded address.

IOW, the concern is with caching the instructions, not the data.
 

David White

This is interesting. I guess you are claiming that loading an array
address, computing the offset of an element, and dereferencing is
potentially cheaper than loading each individual variable since the
array address is more likely to stay in the cache than any one of the
variable values? Why would it be the case that any array element value
is more likely to be cached than a variable value? But if you are
correct, this suggests that having individual 'getNodeA', 'getNodeB',
etc., functions is not a good choice for at least this reason. /david

As JC said, it's the _code_ in the cache that's relevant here. However, the
same argument could apply to inlining in general, not just this case. I
haven't come across a case yet where inlining short, often-called functions
didn't result in a significant speed improvement, but in theory, at least, I
can see that increasing cache misses due to more executed code could result
in slower execution.

DW
 

CBFalconer

Well, it would of course be inlined...

What kind of well, what is it, etc. I believe wells and the
linings applied thereto are off topic in both c.l.c and c.l.c++
 
