Basic Types

nvangogh

Is it fair (and technically accurate) to say that the only difference
between int, long, long long and short is the number of bits required
to represent the type in the computer? Or are there differences other
than size?
 
Victor Bazarov

Is it fair (and technically accurate) to say that the only difference
between int, long, long long and short is the number of bits required
to represent the type in the computer? Or are there differences other
than size?

As far as other differences go, they are likely to be derivative of the
size, like the alignment requirement. Also, the size can dictate whether
certain operations can be done intrinsically or need to be emulated.
For instance, 'long' may not fit in a CPU register, and operations on
it would then be more complex (in machine code) than on 'int'.
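
A quick way to see such differences on a given implementation is to
print sizeof and alignof. A minimal sketch (the values in the comments
are typical for a 64-bit LP64 build and are not guaranteed):

#include <iostream>

int main() {
    // Both size and alignment are implementation-defined.
    std::cout << sizeof(short)     << ' ' << alignof(short)     << '\n';  // 2 2
    std::cout << sizeof(int)       << ' ' << alignof(int)       << '\n';  // 4 4
    std::cout << sizeof(long)      << ' ' << alignof(long)      << '\n';  // 8 8
    std::cout << sizeof(long long) << ' ' << alignof(long long) << '\n';  // 8 8
}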

V
 
Marcel Müller

On 10.03.2014 18:42, nvangogh wrote:
Is it fair (and technically accurate) to say that the only difference
between int, long, long long and short is the number of bits required
to represent the type in the computer? Or are there differences other
than size?

There are implications for other things like CPU usage, compiled code
size and memory layout. But all of them are platform dependent.

For example, some platforms may not be able to deal natively with all
operand sizes. In general only char and int are safely supported
natively. Other types may or may not be supported. Strictly speaking
even this is not guaranteed, but except for some DSPs it should be
reliable. For example, x64 code does not handle short int (16 bit)
efficiently.

Furthermore, many platforms require that operands are aligned to the
operand size in memory; everything else is either slow or impossible.
So if you think you could save some memory by using small types, you
might only end up with the overhead of the smaller type without any
memory saving, because the compiler adds padding.
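
A minimal sketch of this padding effect (struct layout is
implementation-defined; the sizes in the comments assume int is 4 bytes
aligned to 4 bytes):

#include <iostream>

struct OneChar { char c; };         // typically sizeof == 1
struct CharInt { char c; int i; };  // typically sizeof == 8: 3 padding bytes after 'c'
struct IntInt  { int a; int b; };   // typically sizeof == 8: no padding needed

int main() {
    std::cout << sizeof(OneChar) << ' ' << sizeof(CharInt) << ' '
              << sizeof(IntInt) << '\n';  // e.g. prints "1 8 8"
}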


Marcel
 
Barry Schwarz

For this reason I use int or unsigned when I just mean "some number, and
I don't care how big". A loop from 1 to 1000, or something, will use
unsigned. That tells the compiler to use its own judgement.

If you specify unsigned, the compiler has no option but to use
unsigned int.
 
88888 Dihedral

Is it fair (and technically accurate) to say that the only difference
between int, long, long long and short is the number of bits required
to represent the type in the computer? Or are there differences other
than size?

Yes, in a strongly typed language such as C, C++, Java or Pascal, the
basic operations on different types might produce truncated, unexpected
results.

But in dynamically typed languages of later generations, such as
Python, Pike, Ruby and Erlang, things are different in the design
concepts of the various languages.
 
Seungbeom Kim

Is it fair (and technically accurate) to say that the only difference
between int, long, long long and short is the number of bits required
to represent the type in the computer? Or are there differences other
than size?

Even if the sizes of some of the types are the same, they are still
considered distinct types: their typeids are different, for example.
So obviously there are differences other than the size.
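
A minimal sketch (on many 64-bit platforms long and long long have the
same size, yet they remain distinct types):

#include <iostream>
#include <typeinfo>

int main() {
    std::cout << (sizeof(long) == sizeof(long long)) << '\n';  // often 1 (true)
    std::cout << (typeid(long) == typeid(long long)) << '\n';  // always 0 (false)
}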
 
Seungbeom Kim

For this reason I use int or unsigned when I just mean "some number,
and I don't care how big". A loop from 1 to 1000, or something, will
use unsigned. That tells the compiler to use its own judgement.

For such purposes, I tend to choose the smallest type, no smaller
than int, whose minimum range specified by the standard is enough
for the usage. For example, int is guaranteed to cover -32767..+32767,
so I use that for anything no greater, and the next candidate is long,
modulo signedness. (Booleans and characters are exceptions.)
If I used uint32_t I'd be forcing a Z80 to do long and complex maths,
and a 64-bit machine to do masking.

Yes, the intN_t types seem to be a little overused, especially when
int_fastN_t or int_leastN_t would be a better choice.
Though these days I don't really expect an int to be less than 32 bits.

Right, but choosing long over int when you need to represent 100,000
is a better choice and doesn't hurt either. :)
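
A minimal sketch of the difference between the exact-width and the
fast/least variants from <cstdint> (the sizes in the comments are
typical for x86-64 and not guaranteed):

#include <cstdint>
#include <iostream>

int main() {
    // int16_t is exactly 16 bits (where it exists at all);
    // int_least16_t is the smallest type with at least 16 bits;
    // int_fast16_t is whatever the implementation considers fastest.
    std::cout << sizeof(std::int16_t)       << '\n';  // 2
    std::cout << sizeof(std::int_least16_t) << '\n';  // >= 2, typically 2
    std::cout << sizeof(std::int_fast16_t)  << '\n';  // often 4 or 8 on x86-64
    long count = 100000;  // int only guarantees up to 32767; long up to 2147483647
    std::cout << count << '\n';
}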
 
Öö Tiib

Is it fair (and technically accurate) to say that the only difference
between int, long, long long and short is the number of bits required
to represent the type in the computer? Or are there differences other
than size?

The standard avoids defining the number of bits for those types.
Instead it specifies minimum required ranges for them:

signed char: -127 to 127
unsigned char: 0 to 255
char: -127 to 127 or 0 to 255 (depends on default char signedness)
short: -32767 to 32767
unsigned short: 0 to 65535
int: -32767 to 32767
unsigned int: 0 to 65535
long: -2147483647 to 2147483647
unsigned long: 0 to 4294967295
long long: -9223372036854775807 to 9223372036854775807
unsigned long long: 0 to 18446744073709551615

Note that the minimum ranges for 'short' and 'int' are the same. Since
'int' is required to be quick and no built-in arithmetic operation
returns a 'short', that makes 'short' a somewhat pointless type.

Exact bit widths are defined by the standard for types like int32_t
from <stdint.h>, but for my taste those types are overused in actual
code. There is plenty of information around about the mundane types.

Precise details about the actual ranges and bit counts of an
implementation's integral types can be obtained from
std::numeric_limits<T>.
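
A minimal sketch that queries those properties for the current
implementation:

#include <iostream>
#include <limits>

template <typename T>
void report(const char* name) {
    std::cout << name << ": " << std::numeric_limits<T>::digits
              << " value bits, range " << std::numeric_limits<T>::min()
              << " .. " << std::numeric_limits<T>::max() << '\n';
}

int main() {
    report<short>("short");
    report<int>("int");
    report<long>("long");
    report<long long>("long long");
}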
 
Luca Risolia

nvangogh said:
Is it fair (and technically accurate) to say that the only difference
between int, long, long long and short is the number of bits required
to represent the type in the computer? Or are there differences other
than size?

Given one C++ implementation, you can see all the differences in detail
by comparing the various std::numeric_limits<T> instantiations (where T
is an arithmetic type). For the differences between minimum requirements
and other constraints you should refer to the standard.
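
For instance, a minimal sketch of compile-time checks that separate
what the standard guarantees from what a particular platform happens
to provide:

#include <limits>

// These hold on every conforming implementation.
static_assert(std::numeric_limits<int>::max()  >= 32767, "int minimum range");
static_assert(std::numeric_limits<long>::max() >= 2147483647L, "long minimum range");

// This one is NOT guaranteed by the standard; it merely documents
// an assumption about the current platform.
static_assert(std::numeric_limits<int>::digits >= 31, "assuming at least 32-bit int");

int main() {}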
 
Ian Collins

Seungbeom said:
Right, but choosing long over int when you need to represent 100,000
is a better choice and doesn't hurt either. :)

Maybe the standard got it wrong, and rather than fixed-width types we
should have fixed-bounds types?

up_to_100K_t anyone?

:)
 
Öö Tiib

Maybe the standard got it wrong, and rather than fixed-width types we
should have fixed-bounds types?

up_to_100K_t anyone?

:)

Why are such things needed? Even the exact-width types are heavily
overused for my taste (and it looked like also for Kim's taste).

If there is a need for an int with exact bounds, then a template (most
of the ideas are described at http://accu.org/index.php/journals/313)
is fine enough. It is not rocket science, so it is unclear why neither
Boost nor the standard contains such a thing already.
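
A minimal sketch of such a template (a hypothetical design for
illustration, not the one from the ACCU article), where the underlying
type is deduced from the bounds:

#include <cstdint>
#include <stdexcept>
#include <type_traits>

// Hypothetical bounded integer: holds values in [Lo, Hi], checked at runtime.
template <long long Lo, long long Hi>
class bounded {
    static_assert(Lo <= Hi, "empty range");
    // Pick the smallest signed built-in type that can hold both bounds.
    using rep = std::conditional_t<
        (Lo >= INT8_MIN && Hi <= INT8_MAX), std::int8_t,
        std::conditional_t<
        (Lo >= INT16_MIN && Hi <= INT16_MAX), std::int16_t,
        std::conditional_t<
        (Lo >= INT32_MIN && Hi <= INT32_MAX), std::int32_t,
        std::int64_t>>>;
    rep value_;
public:
    bounded(long long v) {
        if (v < Lo || v > Hi)
            throw std::out_of_range("bounded: value out of range");
        value_ = static_cast<rep>(v);
    }
    operator long long() const { return value_; }
};

using up_to_100K_t = bounded<0, 100000>;  // underlying type: std::int32_t

int main() {
    up_to_100K_t n = 99999;        // fine
    // up_to_100K_t bad = 200000;  // would throw std::out_of_range
    return static_cast<int>(static_cast<long long>(n) % 100);
}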
 
Stuart

On 12.03.14, Öö Tiib wrote:
[snip]
If there is a need for an int with exact bounds, then a template (most
of the ideas are described at http://accu.org/index.php/journals/313)
is fine enough. It is not rocket science, so it is unclear why neither
Boost nor the standard contains such a thing already.

+1.

That would be the first step to catch up with Ada95. Unfortunately,
other features of Ada95 cannot be simulated so easily by a C++
template, like choosing the right integer type depending on the
specified range. Or am I wrong there?

For example, the first declaration of up_to_100K_t below leaves the
underlying type open, whereas up_to_100K_t_int explicitly states that
Integer should be used for this type.
type up_to_100K_t is range 0 .. 100000;
type up_to_100K_t_int is new Integer range 0 .. 100000;

Most often it is not necessary to care about the size of the underlying
integer type (whether a loop index is 16-bit, 32-bit or 64-bit is mostly
irrelevant; the compiler should select the most appropriate type).
However, if the size of a type actually matters, you can still specify
your restrictions:
for up_to_100K_t use 16; // now the compiler must use a 16 bit type.

IMHO, C++ still has a long way to go ...

Regards,
Stuart

PS: Don't get me wrong, C++ is still my favorite language. In my
opinion it is far better than Objective-C, which I am forced to use for
Mac development.
 
Jorgen Grahn

On 10.03.2014 18:42, nvangogh wrote:

There are implications for other things like CPU usage, compiled code
size and memory layout. But all of them are platform dependent.

For example, some platforms may not be able to deal natively with all
operand sizes. In general only char and int are safely supported
natively.

You say "safely", but you seem to mean "efficiently". True, you might
end up on a CPU where e.g. division of 'long' is ten or a hundred
times slower than division of 'int' -- but it still /works/, or it's
not C++.

/Jorgen
 
Stuart

This seems "a bit" tricky (pun intended;)

LOL, you got me there.

gcc actually says:
"size for "up_to_100K_t" too small, minimum allowed is 17"
so I should have tried to compile it. And the size specification must be
for up_to_100K_t'SIZE use 32;

Regards,
Stuart
 
Öö Tiib

On 12.03.14, Öö Tiib wrote:
[snip]
If there is a need for an int with exact bounds, then a template (most
of the ideas are described at http://accu.org/index.php/journals/313)
is fine enough. It is not rocket science, so it is unclear why neither
Boost nor the standard contains such a thing already.

+1.

That would be the first step to catch up with Ada95. Unfortunately,
other features of Ada95 cannot be simulated so easily by a C++
template, like choosing the right integer type depending on the
specified range. Or am I wrong there?

Possibly you are wrong ... for example ...

boost::int_max_value_t<100000>::least

... is the smallest built-in signed integral type that can hold all the
values in the inclusive range 0 to 100000. The parameter should be a
positive number. Things like that can be used to determine the
underlying type.

For example, the first declaration of up_to_100K_t below leaves the
underlying type open, whereas up_to_100K_t_int explicitly states that
Integer should be used for this type.
type up_to_100K_t is range 0 .. 100000;
type up_to_100K_t_int is new Integer range 0 .. 100000;

Most often it is not necessary to care about the size of the underlying
integer type (whether a loop index is 16-bit, 32-bit or 64-bit is mostly
irrelevant; the compiler should select the most appropriate type).
However, if the size of a type actually matters, you can still specify
your restrictions:
for up_to_100K_t use 16; // now the compiler must use a 16 bit type.

There can be other issues (like speed) why one or another underlying
type has to be preferred despite the best matching size.
IMHO, C++ still has a long way to go ...

It has, but I do not feel that determining the underlying type can be
an issue. ;) The only issue seems to be how to make a convenient
interface while keeping flexibility.
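
A minimal sketch using that Boost facility (assuming Boost.Integer is
available; the sizes in the comments are typical, not guaranteed):

#include <boost/integer.hpp>
#include <iostream>

int main() {
    // Smallest built-in signed type that can hold 0..100000 (typically
    // a 32-bit type), and the "fastest" type with at least that range.
    typedef boost::int_max_value_t<100000>::least least_t;
    typedef boost::int_max_value_t<100000>::fast  fast_t;

    least_t n = 100000;
    std::cout << sizeof(least_t) << ' ' << sizeof(fast_t) << ' ' << n << '\n';
}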
 
nvangogh

On 10/03/14 17:42, nvangogh wrote:
I have a related follow-up question on basic types. As I have a limited
understanding of maths, I need to ask this. This is from C++ Primer,
5th edition, p. 41:

"What, if any, are the differences between
int month = 9, day = 7;
int month = 09, day = 07;"

I cannot see any material difference. The first line initializes the
variables with decimal values. The second line uses octal notation, as
the literals are preceded by 0. Do these two values work out exactly
the same, or is octal 09 different from decimal 9 (and octal 07 from
decimal 7)?
 
Victor Bazarov

On 10/03/14 17:42, nvangogh wrote:
I have a related follow-up question on basic types. As I have a limited
understanding of maths, I need to ask this. This is from C++ Primer,
5th edition, p. 41:

"What, if any, are the differences between
int month = 9, day = 7;
int month = 09, day = 07;"

I cannot see any material difference. The first line initializes the
variables with decimal values. The second line uses octal notation, as
the literals are preceded by 0. Do these two values work out exactly
the same, or is octal 09 different from decimal 9 (and octal 07 from
decimal 7)?

There is no "octal 09" since there is no octal digit '9'.

V
 
Öö Tiib

On 10/03/14 17:42, nvangogh wrote:
I have a related follow-up question on basic types. As I have a limited
understanding of maths, I need to ask this. This is from C++ Primer,
5th edition, p. 41:

"What, if any, are the differences between
int month = 9, day = 7;
int month = 09, day = 07;"

I cannot see any material difference. The first line initializes the
variables with decimal values. The second line uses octal notation, as
the literals are preceded by 0. Do these two values work out exactly
the same, or is octal 09 different from decimal 9 (and octal 07 from
decimal 7)?

Octal is a base-8 notation; that means it uses the digits 0 to 7. An
integer literal like 09 is ill-formed, so the second line is a syntax
error.

Usually, when someone uses octal constants in code, it is either a typo
or sly obfuscation; very few people can recognise immediately that 042
means 34. Therefore it is a good idea to avoid them without a very good
reason, even though they are valid C++.
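
A minimal sketch:

#include <iostream>

int main() {
    int day    = 07;   // octal literal: value 7, same as decimal 7
    int answer = 042;  // octal literal: value 34, NOT 42
    // int month = 09; // error: '9' is not a valid octal digit
    std::cout << day << ' ' << answer << '\n';  // prints "7 34"
}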
 
