2 questions on primitive types


Gonçalo Rodrigues

Hi all,

I have a few questions on primitive types that I will divide into two
main questions. I realize that some of these questions -- especially
2. below -- are not directly related to C++ as a language itself, so
if there is a better newsgroup for them, would you be so kind as to
direct me to it?

1. Assume that in your platform (OS + compiler) all pointer types have
the same size. Is there a platform independent way to get at the
"natural signed integer" type for the platform? What I mean by the
natural signed integer type is the primitive signed integer type T
such that

(A) sizeof(T) == sizeof(char*)

In my code, what I have for now is a simple typedef

typedef int integer;

in a header file that is #include-d everywhere else, and then use
integer as the integral type. This is good enough for the platform I
am using to test my code (Windows, btw).

What are some examples of platforms where the two assumptions above
(every pointer type has the same size and int is the natural signed
integral type for the platform) are violated?

A side-question: is there a way to make an assertion like (A) a
*compile* time assertion?

2. The instances of classes in my project are heap-allocated and use a
reference count to keep track of them. They are then managed via a
smart pointer template. For now, there is no possibility of
circularity so reference count is good enough. But it just dawned on
me that the simple operation

(B) this->count++;

may not be atomic, which means that these objects are not thread-safe.
Is there any way to find out if (B) above is atomic? And how
unreasonable is this assumption -- that is, are there many widely
used platforms where this assumption is violated?

TIA, with my best regards,
G. Rodrigues
 

Victor Bazarov

Gonçalo Rodrigues said:
I have a few questions on primitive types that I will divide into two
main questions. I realize that some of these questions -- especially
2. below -- are not directly related to C++ as a language itself, so
if there is a better newsgroup for them, would you be so kind as to
direct me to it?

1. Assume that in your platform (OS + compiler) all pointer types have
the same size.

Pointers to objects _always_ have the same size. Pointers to members
most likely do not have the same size as pointers to objects or
pointers to functions.
> Is there a platform independent way to get at the
"natural signed integer" type for the platform?
'int'.

> What I mean by the
natural signed integer type is the primitive signed integer type T
such that

(A) sizeof(T) == sizeof(char*)

That's not natural.

And in that case the answer is "no". It's not guaranteed that the
platform/implementation even has such a type.
In my code, what I have for now is a simple typedef

typedef int integer;

in a header file that is #include-d everywhere else, and then use
integer as the integral type. This is good enough for the platform I
am using to test my code (Windows, btw).

Whatever floats your boat.
What are some examples of platforms where the two assumptions above
(every pointer type has the same size and int is the natural signed
integral type for the platform) are violated?

Win64. Many other 64-bit platforms.
A side-question: is there a way to make an assertion like (A) a
*compile* time assertion?

Sure. 'sizeof' is a compile-time operator. You can always do

char dummy[sizeof(char*) == sizeof(int)];

since declaration of 0-sized arrays is prohibited in C++, you would get
an error if the sizes are different.
2. The instances of classes in my project are heap-allocated and use a
reference count to keep track of them. They are then managed via a
smart pointer template. For now, there is no possibility of
circularity so reference count is good enough. But it just dawned on
me that the simple operation

(B) this->count++;

may not be atomic, which means that these objects are not thread-safe.
Is there any way to find out if (B) above is atomic?

Thread-related stuff is OT here. Try comp.programming.threads.
> And how
unreasonable is this assumption, that is, are there many and widely
used platforms where this assumption is violated?

++(this->count)

is a bit more atomic, IMHO. But still, ask in the 'threads' newsgroup.

V
 

Alf P. Steinbach

* Gonçalo Rodrigues:
Hi all,

I have a few questions on primitive types that I will divide into two
main questions. I realize that some of these questions -- especially
2. below -- are not directly related to C++ as a language itself, so
if there is a better newsgroup for them, would you be so kind as to
direct me to it?

OK in this newsgroup, as far as I'm concerned.

Generally, just consider whether your posting would be accepted in the clean
but slow version of this group, [comp.lang.c++.moderated].

You may consult the moderation policy of that group, at
<url: http://www.gotw.ca/resources/clcm.htm>, see the section "Accepting or
Rejecting Articles".

1. Assume that in your platform (OS + compiler) all pointer types have
the same size.

All pointer types do not necessarily have the same size.

§3.9.2/4 "A void* shall be able to hold any object pointer".

A function is not an object.

Is there a platform independent way to get at the
"natural signed integer" type for the platform?

Be aware that "natural signed integer" means 'int' in C and C++, but apart
from that, using your definition below, the answer is no.

What I mean by the
natural signed integer type is the primitive signed integer type T
such that

(A) sizeof(T) == sizeof(char*)

There is no requirement that such a type exists! ;-) But in practice it
will exist. However, it need not be a standard C++ type.

In my code, what I have for now is a simple typedef

typedef int integer;

in a header file that is #include-d everywhere else, and then use
integer as the integral type. This is good enough for the platform I
am using to test my code (Windows, btw).

That will not work for 64-bit Windows.

What are some examples of platforms where the two assumptions above
(every pointer type has the same size and int is the natural signed
integral type for the platform) are violated?

One example is Windows using Microsoft's compiler. 'int' is not able to
represent even simple data pointers in 64-bit Windows. I'm not aware of an
example where function pointers are larger than void*, but that is allowed.

A side-question: is there a way to make an assertion like (A) a
*compile* time assertion?

Yes, just use a compile time assertion. ;-) E.g. the one in Boost.

2. The instances of classes in my project are heap-allocated and use a
reference count to keep track of them. They are then managed via a
smart pointer template. For now, there is no possibility of
circularity so reference count is good enough. But it just dawned on
me that the simple operation

(B) this->count++;

may not be atomic, which means that these objects are not thread-safe.
Is there any way to find out if (B) above is atomic?
Platform-dependent.


And how
unreasonable is this assumption, that is, are there many and widely
used platforms where this assumption is violated?

I think Windows is, again, an example. A platform may provide special
atomic increment and decrement operations.
 

Rolf Magnus

Victor said:
Pointers to objects _always_ have the same size.

They do? Where does the C++ standard say that?

It "should" be, but isn't necessarily the natural type. The C++ standard
doesn't require it.
A side-question: is there a way to make an assertion like (A) a
*compile* time assertion?

Sure. 'sizeof' is a compile-time operator. You can always do

char dummy[sizeof(char*) == sizeof(int)];

since declaration of 0-sized arrays is prohibited in C++, you would get
an error if the sizes are different.

However, if the sizes are equal, that would still not guarantee that every
pointer value can be stored in an int.
I actually don't see any reason to store a pointer value in an integer
variable. Just store it in a pointer.
 

Victor Bazarov

Rolf said:
Victor Bazarov wrote:




They do? Where does the C++ standard say that?

It doesn't. That's just a common implementation trait.
It "should" be, but isn't necessarily the natural type. The C++ standard
doesn't require it.

Yes, it does. 3.9.1/2.
A side-question: is there a way to make an assertion like (A) a
*compile* time assertion?

Sure. 'sizeof' is a compile-time operator. You can always do

char dummy[sizeof(char*) == sizeof(int)];

since declaration of 0-sized arrays is prohibited in C++, you would get
an error if the sizes are different.


However, if the sizes are equal, that would still not guarantee that every
pointer value can be stored in an int.

No, but one can always templatise that based on one's type of interest
instead of 'char'.
I actually don't see any reason to store a pointer value in an integer
variable. Just store it in a pointer.

Neither do I. But some APIs have generic callbacks with 'long' or some
such as the "additional user argument", in which folks often pass
a pointer to an object. It's so common, I am not even sure why I have
to mention it...

V
 

Rolf Magnus

Victor said:
It doesn't. That's just a common implementation trait.

Then you shouldn't say "always" and especially not "_always_". This makes it
sound as if that were required for being standard compliant.
Yes, it does. 3.9.1/2.

Hmm, I thought it didn't. Where did I get that from?
Anyway, this basically means that all typical 64bit implementations violate
the standard, because the natural size is 64bit, but int usually is 32bit
on those.
A side-question: is there a way to make an assertion like (A) a
*compile* time assertion?

Sure. 'sizeof' is a compile-time operator. You can always do

char dummy[sizeof(char*) == sizeof(int)];

since declaration of 0-sized arrays is prohibited in C++, you would get
an error if the sizes are different.


However, if the sizes are equal, that would still not guarantee that
every pointer value can be stored in an int.

No, but one can always templatise that based on one's type of interest
instead of 'char'.

That's not what I mean. I'll rephrase:

However, if the sizes are equal, that would still not guarantee that every
char pointer value can be stored in an int.
Neither do I. But some APIs have generic callbacks with 'long' or some
such as the "additional user argument", in which folks often pass
a pointer to an object. It's so common, I am not even sure why I have
to mention it...

I know that there are such APIs, but it's a stupid idea, and the OP doesn't
seem to ask because he wants to use such an already existing API, but
rather because he wants to do something like that himself.
 

Alf P. Steinbach

* Rolf Magnus:
However, if the sizes are equal, that would still not guarantee that every
char pointer value can be stored in an int.

This boils down to what's meant by "sufficient size" in the standard.

I think as a practical matter equal size is "sufficient size".

But as (I read what) you write, it's by no means _formally_ guaranteed,
perhaps for reasons of imaginary efficiency, as with much else. That
no-guarantee is IMO in the same league as the no-guarantee that a 'bool'
isn't 45 MiB in size. You'd need to make the compiler yourself to get that.
 

Victor Bazarov

Rolf said:
Victor Bazarov wrote:




Then you shouldn't say "always" and especially not "_always_". This makes it
sound as if that were required for being standard compliant.

I think there is a certain confusion about what's "standard" and what's
"real". You make it sound that *if* it's required for being standard
compliant, it's absolutely positively implemented in all compilers known
to exist.
Hmm, I thought it didn't. Where did I get that from?
Anyway, this basically means that all typical 64bit implementations violate
the standard, because the natural size is 64bit, but int usually is 32bit
on those.

Why do you say that 64 bits is natural for these? Do you judge from the
size of a common register or from the size of a common arithmetic operand?
A side-question: is there a way to make an assertion like (A) a
*compile* time assertion?

Sure. 'sizeof' is a compile-time operator. You can always do

char dummy[sizeof(char*) == sizeof(int)];

since declaration of 0-sized arrays is prohibited in C++, you would get
an error if the sizes are different.


However, if the sizes are equal, that would still not guarantee that
every pointer value can be stored in an int.

No, but one can always templatise that based on one's type of interest
instead of 'char'.


That's not what I mean. I'll rephrase:

However, if the sizes are equal, that would still not guarantee that every
char pointer value can be stored in an int.

No, of course not. But even if you think inside "the box", you should be
able to extend my proposal to templatise such comparison to make the 'int'
a template argument as well.

template<class T, class U> struct ptr_2_other {
    char fits[sizeof(T*) <= sizeof(U)];
};

... ptr_2_other<char,int>::fits ...

V
 

Rolf Magnus

Victor said:
I think there is a certain confusion about what's "standard" and what's
"real". You make is sound that *if* it's required for being standard
compliant, it's absolutely positively implemented in all compilers known
to exist.

In clc++, I assume we talk about standard C++, not about common C++, unless
otherwise noted. So if someone says "always", I assume that means the same
as "on all standard compliant implementations". When I refer to what is
common, I use words like "typically" or "usually", or just "commonly" to
make clear that I don't talk about something that is required by the
standard.
Why do you say that 64 bits is natural for these?

Because some CPUs need to switch to 32bit mode to execute a 32bit executable.
I don't know many 64bit architectures in detail though. How would you
define the natural size on a specific architecture?
Do you judge from the size of a common register or from the size of a
common arithmetic operand?

I judge from the fact that they are called "64bit architectures", which seems
to suggest that 64bit could be considered their natural size. It's also the
biggest size that the ALU can handle in one piece for all arithmetic and
logic operations. And on some architectures, it would even be the size that
they are most efficient with. So what is the natural size on such an
architecture, if not 64bit, and why?
However, if the sizes are equal, that would still not guarantee that
every char pointer value can be stored in an int.

No, of course not. But even if you think inside "the box", you should be
able to extend my proposal to templatise such comparison to make the 'int'
a template argument as well.

template<class T, class U> struct ptr_2_other {
    char fits[sizeof(T*) <= sizeof(U)];
};

... ptr_2_other<char,int>::fits ...

I don't see what advantage you get from that. Isn't it possible that the
integer has some padding bits and char* has not, so that an equal size
still doesn't mean that it can hold all the bits of a char*? In this case,
comparing sizes gives you no hint whether the type fits or not.
 

Victor Bazarov

Rolf said:
Because some CPUs need to switch to 32bit mode to execute a 32bit executable.

Really? I never thought of that. So, you're saying to multiply two
16-bit numbers my 32-bit CPU switches to "16-bit mode"? I don't believe
that.
I don't know many 64bit architectures in detail though. How would you
define the natural size on a specific architecture?

I don't know. It used to be the size of any all-purpose register or
the alignment of a memory address that can be accessed to read or to write
without any special adjustment.
I judge from the fact that they are called "64bit architectures", which seems
to suggest that 64bit could be considered their natural size. It's also the
biggest size that the ALU can handle in one piece for all arithmetic and
logic operations. And on some architectures, it would even be the size that
they are most efficient with. So what is the natural size on such an
architecture, if not 64bit, and why?

Speaking from what I read on a Microsoft forum, 64-bit usually refers to
the addressable memory space, and not necessarily the base register or the
arithmetic operations. Surprisingly enough, on Win64 a 'long' is only 32
bits wide. If it were unnatural and caused constant "switching to 32-bit
mode", Windows would definitely be the slowest OS out there. (No, I don't
intend to discuss the speed of Windows vs. any other, I am just appealing
to the common sense. I believe MSoft are not all braindead)
However, if the sizes are equal, that would still not guarantee that
every char pointer value can be stored in an int.

No, of course not. But even if you think inside "the box", you should be
able to extend my proposal to templatise such comparison to make the 'int'
a template argument as well.

template<class T, class U> struct ptr_2_other {
    char fits[sizeof(T*) <= sizeof(U)];
};

... ptr_2_other<char,int>::fits ...


I don't see what advantage you get from that. Isn't it possible that the
integer has some padding bits

What would be the advantage of having padding bits in an integer? While
theoretically possible, I don't think it's practical, nor do I think such
systems/architectures exist.
> and char* has not, so that an equal size
still doesn't mean that it can hold all the bits of a char*? In this case,
comparing sizes gives you no hint whether the type fits or not.

I haven't encountered an architecture where an integer would have padding.
Have you? On all architectures I've encountered, a simple comparison of
sizes should suffice.

V
 
