Ralf Hildebrandt
Hi all!
First of all: I am a C-newbie.
I have noticed a "strange" behavior with the standard integer
multiplication. The code is:
int main(void)
{
    int a = 0x1234;
    int b = 0xABCD;
    long int result_1 = a * b;
    long int result_2 = (long int)(a * b); /* the same as result_1 */
    return 0;
}
My machine is a 16-bit microcontroller (Texas Instruments MSP430).
Therefore "int" is 16 bits and "long int" is 32 bits. The problem is
independent of the compiler (tested with IAR Embedded Workbench and
MSP430-GCC).
For clarity I use the following syntax (like VHDL):
a(15 downto 0) depicts all bits of variable a
result_1(31 downto 0) depicts all bits of result_1 (long int)
result_1(15 downto 0) depicts only the lower 16 bits (int)
The problem:
As everybody knows, (16 bits) * (16 bits) = (32 bits).
Therefore the correct result would be (a * b)(31 downto 0). But only
result_1(15 downto 0) holds the correct lower bits of the result, and
result_1(31 downto 16) is just the sign extension of result_1(15 downto 0).
In other words: (a * b)(15 downto 0) is kept (and sign-extended) and
(a * b)(31 downto 16) is thrown away.
O.k., any arithmetic operation in C on two "int" variables leads to an
"int" as result, and the type conversion to "long int" is done
afterwards. (So it is clear why result_2 == result_1.)
Let's take a closer look at the hardware before going on:
Every machine that has an n*n hardware multiplier computes the result
of this multiplication with bit width 2n. Therefore the correct result
is already stored in the output register of the hardware multiplier.
The first "solution" of this problem could be this:
int main(void)
{
    long int a = 0x1234;
    long int b = 0xABCD;
    long int result_1 = a * b;
    return 0;
}
It computes the correct result, but has a *major* disadvantage: because
both "a" and "b" are now declared to be 32 bits wide, the compiler has
to map this code to a (32 bits) * (32 bits) multiplication, which uses
the hardware multiplier more than once (and even in software a 32-bit
multiplication has to be done).
Finally - my questions:
Does a code construct exist in C that forces the compiler to keep all
32 bits of the result of a 16*16-bit multiplication?
Why is the result of the arithmetic operation "*" defined to have the
same data type as the inputs instead of doubled bit width?
Additionally: the same problem should occur at any other bit-width
boundary (if a data type exists that has twice the bit width of the
machine).
Thanks in advance.
Ralf