Make this snippet efficient


KK

/* Target - read an integer from a binary file */
unsigned int Byte2Int(char *buff)
{
    unsigned char *byte = reinterpret_cast<unsigned char *>(buff);
    return ((byte[0] << 24) | (byte[1] << 16) | (byte[2] << 8) | (byte[3]));
}

/* part of main function */

ifstream fp("in.bin", ios::binary);
char buff[4];
fp.read(buff, 4);
unsigned int loadSize = Byte2Int(buff);

Thank you.
KK
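
For comparison, one common alternative is to copy the four bytes into the
integer with std::memcpy and swap them only when the host byte order differs
from the file's big-endian layout; most modern compilers collapse this into a
single load. A sketch, assuming 8-bit bytes, a 4-byte unsigned int, and a
GCC/Clang-style byte-swap intrinsic (Byte2IntCopy is just an illustrative
name):

#include <cstring>   // std::memcpy

/* Sketch: decode a big-endian 32-bit value via memcpy.
   Assumes CHAR_BIT == 8 and sizeof(unsigned int) == 4. */
unsigned int Byte2IntCopy(const char *buff)
{
    unsigned int v;
    std::memcpy(&v, buff, 4);       // well-defined, unlike pointer casts
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
    v = __builtin_bswap32(v);       // compiler-specific intrinsic (GCC/Clang)
#endif
    return v;
}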
 

Victor Bazarov

KK said:
/* Target - read an integer from a binary file */
unsigned int Byte2Int(char *buff)
{
    unsigned char *byte = reinterpret_cast<unsigned char *>(buff);
    return ((byte[0] << 24) | (byte[1] << 16) | (byte[2] << 8) | (byte[3]));
}

/* part of main function */

ifstream fp("in.bin", ios::binary);
char buff[4];
fp.read(buff, 4);
unsigned int loadSize = Byte2Int(buff);

What's *INefficient* about it?

V
 

pedagani

Victor said:
KK said:
/* Target - read an integer from a binary file */
unsigned int Byte2Int(char *buff)
{
    unsigned char *byte = reinterpret_cast<unsigned char *>(buff);
    return ((byte[0] << 24) | (byte[1] << 16) | (byte[2] << 8) | (byte[3]));
}

/* part of main function */

ifstream fp("in.bin", ios::binary);
char buff[4];
fp.read(buff, 4);
unsigned int loadSize = Byte2Int(buff);

What's *INefficient* about it?

--
V
Must I use the reinterpret_cast operator? How can I avoid it?
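
One way to drop it entirely, under the same assumptions as the original
(8-bit bytes, 4-byte unsigned int, big-endian file), is to convert each char
by value with static_cast instead of reinterpreting the pointer. A sketch:

/* Sketch: same big-endian decode, no pointer reinterpretation. */
unsigned int Byte2Int(const char *buff)
{
    unsigned int b0 = static_cast<unsigned char>(buff[0]);
    unsigned int b1 = static_cast<unsigned char>(buff[1]);
    unsigned int b2 = static_cast<unsigned char>(buff[2]);
    unsigned int b3 = static_cast<unsigned char>(buff[3]);
    return (b0 << 24) | (b1 << 16) | (b2 << 8) | b3;
}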
 

Frederick Gotham

KK posted:
/* Target - read an integer from a binary file */
unsigned int Byte2Int(char *buff)
{
    unsigned char *byte = reinterpret_cast<unsigned char *>(buff);
    return ((byte[0] << 24) | (byte[1] << 16) | (byte[2] << 8) | (byte[3]));
}

/* part of main function */

ifstream fp("in.bin", ios::binary);
char buff[4];
fp.read(buff, 4);
unsigned int loadSize = Byte2Int(buff);

Thank you.
KK


You don't specify the number of bits in a byte; however, looking at your
code, we can make an educated guess of 8.

You don't specify the number of bytes in an int; however, looking at your
code, we can make an educated guess of 4.

You don't specify the byte order of the integer stored in the file, so we
can only hope that it's the same as the system's.

You don't specify the negative number system used to represent the number
in the file, so we can only hope that it's the same as the system's.

You don't specify whether the integer in the file contains padding bits, or
where they're located, nor do you specify whether the system stores
integers with padding bits, or where they're located.

Working with the scraps we've been given, try this:

unsigned Func( char (&array)[4] )
{
    return reinterpret_cast<int&>( array );
}
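
A usage sketch (the values are hypothetical) showing the main caveat: the
reinterpretation takes the bytes in host order, so it matches the original
shift-based Byte2Int() only on a big-endian machine:

char raw[4] = { 0x12, 0x34, 0x56, 0x78 };
unsigned n = Func(raw);
// Big-endian host:    n == 0x12345678, same as Byte2Int(raw).
// Little-endian host: n == 0x78563412, NOT the same.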
 

Frederick Gotham

Frederick Gotham posted:

unsigned Func( char (&array)[4] )
{
    return reinterpret_cast<int&>( array );
}


Should have cast to unsigned&, rather than int&.
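
That is, the corrected sketch, with the same caveats as before (host byte
order, no padding bits, a 4-byte unsigned):

unsigned Func( char (&array)[4] )
{
    return reinterpret_cast<unsigned&>( array );
}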
 

Markus Svilans

Frederick said:
KK posted:
/* Target - read an integer from a binary file */
unsigned int Byte2Int(char *buff)
{
    unsigned char *byte = reinterpret_cast<unsigned char *>(buff);
    return ((byte[0] << 24) | (byte[1] << 16) | (byte[2] << 8) | (byte[3]));
}

/* part of main function */

ifstream fp("in.bin", ios::binary);
char buff[4];
fp.read(buff, 4);
unsigned int loadSize = Byte2Int(buff);

Thank you.
KK


You don't specify the number of bits in a byte; however, looking at your
code, we can make an educated guess of 8.


When programming in C++, could one realistically expect to encounter a
system that does not have 8 bits in a byte?

Markus.
 

Frederick Gotham

Markus Svilans posted:

When programming in C++, could one realistically expect to encounter a
system that does not have 8 bits in a byte?


You're on a Standard C++ newsgroup, and people here like to be pedantic. It
pays off in the long run: you end up with code that will run perfectly for
eons.

Here are a few things that the Standard allows:

(1) Machines need not use two's complement.
(2) Null pointers need not be all-bits-zero.
(3) Bytes need not be eight bits.
(4) Primitive types may contain padding bits.

Either you take all these things into account, and write FULLY-portable and
Standard-compliant code, or you don't.

If it ever got to a point where an old-fashioned constraint was hindering
efficiency or functionality, the constraint would be lifted. But until
then, you can use the following macro to tell you how many bits you have in
a byte:


#define CHAR_BIT \
(((unsigned char)-1)/(((unsigned char)-1)%0x3fffffffL+1) \
/0x3fffffffL%0x3fffffffL*30+((unsigned char)-1)%0x3fffffffL \
/(((unsigned char)-1)%31+1)/31%31*5 + 4-12/(((unsigned char)\
-1)%31+3))
 

Luke Meyers

Frederick said:
use the following macro to tell you how many bits you have in a
byte:


#define CHAR_BIT \
(((unsigned char)-1)/(((unsigned char)-1)%0x3fffffffL+1) \
/0x3fffffffL%0x3fffffffL*30+((unsigned char)-1)%0x3fffffffL \
/(((unsigned char)-1)%31+1)/31%31*5 + 4-12/(((unsigned char)\
-1)%31+3))

Why provide an implementation (especially one so... urk), rather than
just explain that this macro is available in the standard header
<climits>?
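
For reference, the standard route is just:

#include <climits>   // defines CHAR_BIT
#include <iostream>

int main()
{
    std::cout << "bits per byte: " << CHAR_BIT << '\n';
}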

Luke
 

Jerry Coffin

[ ... ]
When programming in C++, could one realistically expect to encounter a
system that does not have 8 bits in a byte?

Yes. Under Windows CE, the smallest available type is 16 bits. A
number of DSPs don't have any 8-bit types either.
 

Markus Svilans

Frederick said:
Markus Svilans posted:
When programming in C++, could one realistically expect to encounter a
system that does not have 8 bits in a byte?

You're on a Standard C++ newsgroup, and people here like to be pedantic. It
pays off in the long run: you end up with code that will run perfectly for
eons.

I can see your point. But in the last 10-15 years, has there been a new
CPU or microprocessor produced that does not have 8 bits in a byte? Are
there any C++ compilers that compile code for non-8-bit-byte systems?

I'm not arguing about the C++ standard, I'm just surprised that
variable byte sizes are something that people worry about enough to
include in the standard.

On second thought... 16-bit character sets could be considered to be
the harbingers of future non-8-bit bytes, could they not?

Here are a few things that the Standard allows:

(1) Machines need not use two's complement.
(2) Null pointers need not be all-bits-zero.

I'm confused. To set a pointer to null in C++, isn't the standard way
to do that to assign zero to the pointer? If you're on a system where
null pointers are non-zero, what happens to the pointer you thought you
had set to null?
From what you say, would the truly portable way to do that be to
#define NULL depending on what system you're compiling for?
(3) Bytes need not be eight bits.
(4) Primitive types may contain padding bits.

I can see where padding bits would be necessary, for example
representing 32-bit integers on a 7-bit-per-byte system would require
five 7-bit bytes, with 3 padding bits. But are there any cases in
practice where primitive types actually contain padding bits?

Regards,
Markus.
 

Thomas J. Gritzan

Markus said:
I'm confused. To set a pointer to null in C++, isn't the standard way
to do that to assign zero to the pointer? If you're on a system where
null pointers are non-zero, what happens to the pointer you thought you
had set to null?

From what you say, would the truly portable way to do that be to
#define NULL depending on what system you're compiling for?

An integer constant expression with value zero (e.g. 0, 7+1-8, 0x0) is
magically converted to the system's null pointer value when assigned to a
pointer type.
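
A short illustration, assuming nothing about the underlying representation:

int *p = 0;              // the constant 0 converts to the null pointer value
bool isNull = (p == 0);  // true, whatever null's bit pattern happens to be
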
I can see where padding bits would be necessary, for example
representing 32-bit integers on a 7-bit-per-byte system would require
five 7-bit bytes, with 3 padding bits. But are there any cases in
practice where primitive types actually contain padding bits?

Don't know. But it would be a valid C++ system, provided the byte had 8 or
more bits; a 7-bit byte is not allowed.

Thomas
 

Frederick Gotham

Markus Svilans posted:
I can see your point. But in the last 10-15 years, has there been a
new CPU or microprocessor produced that does not have 8 bits in a
byte? Are there any C++ compilers that compile code for non-8-bit-byte
systems?

I'm not arguing about the C++ standard, I'm just surprised that
variable byte sizes are something that people worry about enough to
include in the standard.

On second thought... 16-bit character sets could be considered to be
the harbingers of future non-8-bit bytes, could they not?


Very possible. I think there's one certainty in life: twenty years from
now, the world will have progressed more than we expected, and in
unexpected ways.

Who knows what the computers of tomorrow will bring?

I'm confused. To set a pointer to null in C++, isn't the standard way
to do that to assign zero to the pointer? If you're on a system where
null pointers are non-zero, what happens to the pointer you thought
you had set to null?


A "compile-time constant" is an expression whose value can be evaluated
at compile-time. Here are a few examples:

7

56 * 5 / 2 + 3

8 == 2 ? 1 : 6


If you have a compile-time constant which evaluates to zero, whether it
be:

0
5 - 5
2 * 6 - 12

Then it gets special treatment in C++, and qualifies as a null pointer
constant. A null pointer constant can be used to set a pointer to its
null pointer value, like so:

char *p = 0;

Because 0 qualifies as a null pointer constant, it gets special treatment
in the above statement (note how we'd normally have a type mismatch from
int to char*). Anyway, what the above statement does is set the pointer
to its null pointer value, whether that be:

0000 0000 0000 0000 0000 0000 0000 0000

or:

1111 1111 1111 1111 1111 1111 1111 1111

or:

1000 0000 0000 0000 0000 0000 0000 0000

or:

0000 0000 0000 0000 0000 0000 0000 0001

or:

1010 0101 1010 0101 1010 0101 1010 0101


From what you say, would the truly portable way to do that be to
#define NULL depending on what system you're compiling for?


No, all you do is:

char *p = 0;

And let your compiler deal with the rest.
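
By way of contrast, here is a sketch of what not to rely on: zeroing a
pointer's bytes assumes null is all-bits-zero, which the Standard does not
promise:

#include <cstring>   // std::memset

char *p;
std::memset(&p, 0, sizeof p);   // all-bits-zero: NOT guaranteed to be null
char *q = 0;                    // guaranteed null on every implementation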

I can see where padding bits would be necessary, for example
representing 32-bit integers on a 7-bit-per-byte system would require
five 7-bit bytes, with 3 padding bits.


In actual fact it would make more sense to have a 35-bit integer type,
instead of a 32-bit one with padding.

But are there any cases in practice where primitive types actually
contain padding bits?


Mostly on supercomputers, I think.

Here's a quotation from a recent post on comp.lang.c:

For example, I'm currently logged into a system with the following
characteristics:

CHAR_BIT = 8
sizeof(short) = 8 (64 bits)
sizeof(int) = 8 (64 bits)
sizeof(long) = 8 (64 bits)

SHRT_MAX = 2147483647 (32 padding bits)
USHRT_MAX = 4294967295 (32 padding bits)

INT_MAX = 35184372088831 (18 padding bits)
UINT_MAX = 18446744073709551615 (no padding bits)

LONG_MAX = 9223372036854775807 (no padding bits)
ULONG_MAX = 18446744073709551615 (no padding bits)

(It's a Cray Y/MP EL running Unicos 9.0, basically an obsolete
supercomputer.)
 

Bo Persson

Markus Svilans said:
I can see where padding bits would be necessary, for example
representing 32-bit integers on a 7-bit-per-byte system would require
five 7-bit bytes, with 3 padding bits. But are there any cases in
practice where primitive types actually contain padding bits?

No, but what we do have is machines with 36-bit integers and 9 bits
per byte.

http://www.unisys.com/products/clearpath__servers/index.htm

Should we not allow C++ to be implemented on such a machine?


A much more common case is DSPs having 16- or 32-bit words as the
smallest addressable unit. Then that will be the byte size, making
sizeof(char) == sizeof(short) == sizeof(int). Quite possible!
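
A quick probe of any given implementation; the values in the comments are
just the common desktop case, not a guarantee:

#include <climits>
#include <iostream>

int main()
{
    std::cout << "CHAR_BIT      = " << CHAR_BIT      << '\n'  // often 8
              << "sizeof(short) = " << sizeof(short) << '\n'  // often 2
              << "sizeof(int)   = " << sizeof(int)   << '\n'; // often 4
    // On the DSPs described above, all the sizeof values can be 1,
    // with CHAR_BIT equal to 16 or 32.
}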


Bo Persson
 
