# Convert from unsigned char array to float!

Discussion in 'C Programming' started by Goran, Aug 23, 2003.

1. ### Goran (Guest)

Hi!

I need to convert from an unsigned char array to a float. I don't think
I get the right results in the program below.

unsigned char array1[4] = { 0xde, 0xc2, 0x44, 0x23 }; // I'm not sure in what order the data is stored, so I try both ways.
unsigned char array2[4] = { 0x23, 0x44, 0xc2, 0xde };

float *pfloat1, *pfloat2;

pfloat1 = (float *)array1;
pfloat2 = (float *)array2;

printf("pfloat1 = %f, pfloat2 = %f\n", pfloat1, pfloat2);

The result here is:
pfloat1 = 0.000000, pfloat2 = -6999176012340658176.000000

I know this data is stored as a single-precision floating-point number.
Calculating this by hand with the formula 1.mantissa * 2^(exp-127) gives the
following results:
-6614457784713468934.9861376 when using array1
-79.784837 when using array2.

Anyone know why I get such bad results? I'm running this program under
Linux (Red Hat) on an Intel Pentium machine.

Is there any easy way to do this?

Regards,
Goran

Goran, Aug 23, 2003

2. ### Artie Gold (Guest)

Erm, you've invoked undefined behavior by printing values of type
pointer to float with a `%f' format specifier.

Perhaps you meant:

printf("pfloat1 = %f, pfloat2 = %f\n", *pfloat1, *pfloat2);

(of course, it's all UB anyway -- but this has a _chance_ of working on
a given platform)

HTH,
--ag

Artie Gold, Aug 23, 2003

3. ### Phil Tregoning (Guest)

As Artie pointed out, these need to be floats, not pointers
to floats. Also, pfloat1 and pfloat2 might not be correctly
aligned for a float.
But if you fix the above, then these look OK for IEEE
single precision, although you have a bogus amount of
precision for pfloat2, and not enough for pfloat1. If
you use %g you should see results something like

pfloat1 = 1.06664e-17, pfloat2 = -6.99918e+18

Your hand-calculated values don't match, though. I would
suggest that you have made a mistake in your
calculations. Here is some code that does it for you:

$ type ieee2float.c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <float.h>
#include <assert.h>

double decode_ieee_single(const void *v, int natural_order)
{
    const unsigned char *data = v;
    int s, e;
    unsigned long src;
    long f;
    double value;

    if (natural_order) {
        src = ((unsigned long)data[0] << 24) |
              ((unsigned long)data[1] << 16) |
              ((unsigned long)data[2] << 8) |
              ((unsigned long)data[3]);
    }
    else {
        src = ((unsigned long)data[3] << 24) |
              ((unsigned long)data[2] << 16) |
              ((unsigned long)data[1] << 8) |
              ((unsigned long)data[0]);
    }

    s = (src & 0x80000000UL) >> 31;
    e = (src & 0x7F800000UL) >> 23;
    f = (src & 0x007FFFFFUL);

    if (e == 255 && f != 0) {
        /* NaN (Not a Number) */
        value = DBL_MAX;
    }
    else if (e == 255 && f == 0 && s == 1) {
        /* Negative infinity */
        value = -DBL_MAX;
    }
    else if (e == 255 && f == 0 && s == 0) {
        /* Positive infinity */
        value = DBL_MAX;
    }
    else if (e > 0 && e < 255) {
        /* Normal number */
        f += 0x00800000UL;
        if (s) f = -f;
        value = ldexp(f, e - 150);
    }
    else if (e == 0 && f != 0) {
        /* Denormal number */
        if (s) f = -f;
        value = ldexp(f, -149);
    }
    else if (e == 0 && f == 0 && s == 1) {
        /* Negative zero */
        value = 0;
    }
    else if (e == 0 && f == 0 && s == 0) {
        /* Positive zero */
        value = 0;
    }
    else {
        /* Never happens */
        printf("s = %d, e = %d, f = %ld\n", s, e, f);
        assert(!"Woops, unhandled case in decode_ieee_single()");
    }

    return value;
}

int main(void)
{
    unsigned char f[4] = {0xde, 0xc2, 0x44, 0x23};

    printf("0x%02X%02X%02X%02X as an IEEE float is %f\n",
           f[0], f[1], f[2], f[3], decode_ieee_single(f, 0));

    printf("0x%02X%02X%02X%02X as an IEEE float is %f\n",
           f[3], f[2], f[1], f[0], decode_ieee_single(f, 1));

    return 0;
}

$ cc ieee2float
$ run ieee2float
0xDEC24423 as an IEEE float is 0.000000
0x2344C2DE as an IEEE float is -6999176012340658200.000000

This code has not been exhaustively tested. The above output
was generated on a system not using IEEE floating point, but its
output agrees with your IEEE system.

Phil T

Phil Tregoning, Aug 25, 2003
4. ### tabin

I've dug this thread out because I have a similar issue.

I have an RGB frame in an unsigned char array, which holds the channel values in hex.
The frame is 10 bits per channel, meaning there are two bytes used for each channel. The byte order is therefore not like in a normal RGB array (r g b r g b);
it is: rr gg bb.

My idea was to read out the two bytes of each channel and convert them with the posted code, then combine (add) them, multiplying the upper part by 256.

Any idea if this will work?

Code (Text):
/* requires <stdlib.h> for malloc, plus the decode_ieee_single() posted above */
double *convert_frame_float(unsigned char *frame, int size)
{
    const unsigned char *tmp_frame = frame;
    double *floatframe = malloc((size / 2) * sizeof *floatframe);
    int i, j = 0;

    for (i = 0; i < size; i += 2) {
        unsigned long upper = (unsigned long)tmp_frame[i];
        unsigned long lower = (unsigned long)tmp_frame[i + 1];

        floatframe[j] = 256 * decode_ieee_single(&upper, 1)
                        + decode_ieee_single(&lower, 1);
        j++;
    }
    return floatframe;
}

tabin, Jun 7, 2007