Which operation is more expensive? (a <= b vs. a != 0)


zaemin

From a website that teaches optimization for C, I saw the optimization rule below.

int fact1_func(int n)
{
    int i, fact = 1;
    for (i = 1; i <= n; i++)
        fact *= i;
    return (fact);
}

int fact2_func(int n)
{
    int i, fact = 1;
    for (i = n; i != 0; i--)
        fact *= i;
    return (fact);
}

The author says 'fact2_func' is faster than 'fact1_func'.

That means the 'i <= n' test is more expensive than 'i != 0'.

Until now, I thought all conditional tests had the same cost.

So I would like to know the cost of the conditional operators, from fastest to slowest.

Can anyone tell me?
 

Artie Gold

zaemin said:
[original question snipped]

The ISO standard for C defines the language and its behavior; particular
implementations (and their interaction with the underlying platform)
determine the relative speed of different constructs.

So as far as the C language is concerned, the answer is: *No*.

HTH,
--ag
 

Neil Kurzman

zaemin said:
[original question snipped]

It depends on the CPU. The compiler converts source code to the machine code of the target CPU, so it depends on what op-codes the compiler has available to do the job. CPUs usually have a special test-for-zero op-code, so a test against zero may be faster.
 

Christian Bau

zaemin said:
[original question snipped]

No, because it is implementation-dependent.

However, good compilers will often transform the code you wrote into faster code automatically, so if one form were for any reason faster than the other, the compiler writer would usually find a way to use the faster method.

With loops, it often helps if the compiler can determine the number of
iterations that the loop will perform before starting to execute it. If
you can write your loop in a way that makes this easy, it will often
help.

Function 1: There are three cases. If n <= 0, there are 0 iterations. If 0 < n < INT_MAX, there are n iterations. If n == INT_MAX, there is undefined behavior (i reaches the maximum possible value and is then incremented again; not a good idea).

Function 2: There are two cases. If n >= 0, there are n iterations. If n < 0, you get undefined behavior: you start with a negative i, decrement it until it reaches the most extreme negative value, and then decrement it further (not a good idea).

Most people write loops using <, <=, >, >= not == or !=, so compilers
are more likely to recognise these patterns and optimise for them. The
typical style is

for (i = 0; i < n; ++i)

which has 0 iterations if n <= 0 and n iterations otherwise. That style is the most likely to be recognised, and it produces fast loops.
 

Tim Prince

Neil Kurzman said:
[quoted reply snipped]

The second version is ambiguous. For example, it may require an additional test, if (n > 0), in order to split the code into a version for the well-behaved case and one for the ill-behaved case. That forces you into one of the typical C++ STL performance-loss situations. And there's no point in attempting to optimize the loop test for an 8080 when integer multiplication is deathly slow on that style of CPU.
 

Denis Bueno

zaemin said:
[original question snipped]

Read the other replies to this post, and check out the '-S' flag you can give to gcc (assuming gcc is what you're using). It lets you see the assembly code produced by the C compiler, so you can inspect the generated code for each function and see which requires more instructions.

But, as another poster pointed out: /this is not a C language issue/. It
belongs in another forum.

-Denis
 
