Integer promotions, conversions, coercions and templates :(


Öö Tiib

The expression on the right side of "<<" is a continual source of worry.

When in doubt, practical people write tests.
If you have the luxury of using constexpr and static_assert from C++0x,
then you can put the tests right next to the implementation.
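That suggestion can be sketched as follows (a minimal, hypothetical CheckBit is assumed here, since the thread's real code is not shown; C++0x was published as C++11):

```cpp
#include <cstdint>

// Hypothetical constexpr version of a CheckBit-style bit test.
// Declaring it constexpr lets static_assert run the tests at
// compile time, right next to the implementation.
constexpr bool CheckBit(std::uint32_t mask, std::uint32_t n)
{
    return (mask & (std::uint32_t(1) << (n - 1))) != 0;
}

// The tests live beside the function and fail the build if wrong.
static_assert(CheckBit(0x01u, 1), "bit 1 of 0x01 is set");
static_assert(!CheckBit(0x01u, 2), "bit 2 of 0x01 is clear");
static_assert(CheckBit(0x80000000u, 32), "bit 32 is the MSB");
```

Because the checks run at compile time, a broken change to the function stops the build rather than surfacing on one platform at run time.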
 

MikeP

Parenthesis Corrections

A. return ((mask & ((CBTYPE)1) << (n-((CBTYPE)1))) != 0);

B. const CBTYPE one(1);
return ((mask & (one << (n-one))) != 0);

C. return ((mask & (CBTYPE(1) << (n-1))) != 0);

D. CBTYPE bit = CBTYPE(1) << (n-1);
return ((mask & bit) != 0);

// Helpful Functions for Evaluation and Analysis

//A.
//
#define CBTYPE uint32

inline bool CheckBitA(CBTYPE mask, uint32 n)
{
return ((mask & ((CBTYPE)1) << (n-((CBTYPE)1))) != 0);
}

//B.
//
inline bool CheckBitB(CBTYPE mask, uint32 n)
{
const CBTYPE one(1);
return ((mask & (one << (n-one))) != 0);
}

//C.
//
inline bool CheckBitC(CBTYPE mask, uint32 n)
{
return ((mask & (CBTYPE(1) << (n-1))) != 0);
}

//D.
//
inline bool CheckBitD(CBTYPE mask, uint32 n)
{
CBTYPE bit = CBTYPE(1) << (n-1);
return ((mask & bit) != 0);
}
 

MikeP

Öö Tiib said:
When in doubt then practical people write tests.

But that only gets someone information about a given platform or
compiler. Language-level analysis is the more important first step. I
posted some helpful functions to test with in another post (as well as
corrected parenthesis errors in my prior post).
 

MikeP

Below are the results of compiling 5 "different" versions of a
non-templatized free function "CheckBit" without any compiler
optimizations turned on. Version E shows the desired implementation, but
it cannot be had because CheckBit is, in reality, a template function
(the functions below exist only to analyze and evaluate the effects of
various implementations of CheckBit), and the "impedance mismatch"
between templates and the C++ type system will not allow version E to be
generated by the template machinery (consider that the goal is to
generate CheckBit for all unsigned integer types, including a
platform-specific 64-bit unsigned integer). Note that A, C and E generate
the same assembly and that B and D result in more instructions than A, C
and E (though they look longer than they are because of the embedded
line-number comments). Full optimization had no effect on the code
generated.

(It should be clear that "uint32" is an alias and that the suffixes used
in version E are non-standard.)

11 //A.
12 //
13 #define CBTYPE uint32
14 #define INLINE
15 INLINE bool CheckBitA(CBTYPE mask, uint32 n)
16 {
17 return ((mask & ((CBTYPE)1) << (n-((CBTYPE)1))) != 0);
18 }
19
20 //B.
21 //
22 INLINE bool CheckBitB(CBTYPE mask, uint32 n)
23 {
24 const CBTYPE one(1);
25 return ((mask & (one << (n-one))) != 0);
26 }
27
28 //C.
29 //
30 INLINE bool CheckBitC(CBTYPE mask, uint32 n)
31 {
32 return ((mask & (CBTYPE(1) << (n-1))) != 0);
33 }
34
35 //D.
36 //
37 INLINE bool CheckBitD(CBTYPE mask, uint32 n)
38 {
39 CBTYPE bit = CBTYPE(1) << (n-1);
40 return ((mask & bit) != 0);
41 }
42
43 //E.
44 //
45 INLINE bool CheckBitE(CBTYPE mask, uint32 n)
46 {
47 return ((mask & (1ui32 << (n-1ui32))) != 0);
48 }

Generated Assembly (Release Build, Unoptimized, CBTYPE: uint32, INLINE: )
Compiler: VC++ 2010, Intrinsics: Off, Whole Program Optimization: Off.

// A
; Line 16
push ebp
mov ebp, esp
; Line 17
mov ecx, DWORD PTR _n$[ebp]
sub ecx, 1
mov eax, 1
shl eax, cl
and eax, DWORD PTR _mask$[ebp]
neg eax
sbb eax, eax
neg eax
; Line 18
pop ebp
ret 8

// B
; Line 23
push ebp
mov ebp, esp
push ecx
; Line 24
mov DWORD PTR _one$[ebp], 1
; Line 25
mov ecx, DWORD PTR _n$[ebp]
sub ecx, 1
mov eax, 1
shl eax, cl
and eax, DWORD PTR _mask$[ebp]
neg eax
sbb eax, eax
neg eax
; Line 26
mov esp, ebp
pop ebp
ret 8

// C
; Line 31
push ebp
mov ebp, esp
; Line 32
mov ecx, DWORD PTR _n$[ebp]
sub ecx, 1
mov eax, 1
shl eax, cl
and eax, DWORD PTR _mask$[ebp]
neg eax
sbb eax, eax
neg eax
; Line 33
pop ebp
ret 8

// D
; Line 38
push ebp
mov ebp, esp
push ecx
; Line 39
mov ecx, DWORD PTR _n$[ebp]
sub ecx, 1
mov eax, 1
shl eax, cl
mov DWORD PTR _bit$[ebp], eax
; Line 40
mov eax, DWORD PTR _mask$[ebp]
and eax, DWORD PTR _bit$[ebp]
neg eax
sbb eax, eax
neg eax
; Line 41
mov esp, ebp
pop ebp
ret 8

//E.
; Line 46
push ebp
mov ebp, esp
; Line 47
mov ecx, DWORD PTR _n$[ebp]
sub ecx, 1
mov eax, 1
shl eax, cl
and eax, DWORD PTR _mask$[ebp]
neg eax
sbb eax, eax
neg eax
; Line 48
pop ebp
ret 8
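For what it's worth, the `ui32` suffix in version E is MSVC-specific; a portable sketch of the same function can use the standard `UINT32_C` macro from `<cstdint>` (the function name here is invented for illustration):

```cpp
#include <cstdint>

// Portable counterpart to version E: standard C++ has no 32-bit
// literal suffix, but UINT32_C(1) yields a constant of type
// uint_least32_t, which is at least as wide as the mask.
inline bool CheckBitE_portable(std::uint32_t mask, std::uint32_t n)
{
    return (mask & (UINT32_C(1) << (n - 1))) != 0;
}
```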
 

MikeP

Ian said:
That's OK because there aren't any in my example!

Well, I don't know that, and I categorize that info as esoteric, in a
way. Not that it isn't common knowledge and able to be known, but that
it's a waste of mind power and human life. I'd still like to know the
process: the '1' starts out as an int, n is unsigned... blah, blah...
it's hardly clear what is happening. Of course, my laziness on the matter
is well-founded: why guess when a suffix would help a bit? While one
cannot escape the inelegance of the C++ type system completely, it can be
made incrementally, but significantly, better.
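The promotion chain being waved away with "blah, blah" can be pinned down with decltype; this sketch assumes `uint32` means `std::uint32_t` and that the function name is illustrative:

```cpp
#include <cstdint>
#include <type_traits>

// Tracing "1 << (n - 1)" when n is unsigned: the literal 1 starts as
// int; in "n - 1" the 1 converts to n's unsigned type; the shift
// result keeps the type of its left operand (int); only the '&'
// against the unsigned mask converts the result back to unsigned.
inline bool checkbit_plain(std::uint32_t mask, std::uint32_t n)
{
    static_assert(std::is_same<decltype(1), int>::value,
                  "the literal starts out as int");
    static_assert(std::is_same<decltype(n - 1), std::uint32_t>::value,
                  "1 converts to unsigned in the subtraction");
    static_assert(std::is_same<decltype(1 << (n - 1)), int>::value,
                  "the shift result is still int");
    // n must stay <= 31 here, since the shift is done in int.
    return (mask & (1 << (n - 1))) != 0;
}
```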
 

MikeP

I opted for a preprocessor macro approach (I used the "super macro"
technique) to implement the actual code alluded to by the example. I feel
that aberrating the algorithm with C++-isms to make up for the
deficiencies in the language is an inferior solution to the macro
approach. Remember that the example was not the actual code (the actual
code is a class and the example is indicative of one of the methods of
that class), and that the example suffers even more ills caused by the
inferior/deficient/incorrect C++ type system. Note that the point was to
generate 4 different definitions of the class (a function, in the
example) for use; it was not meant to be a template or macro to be used
directly.

Here's a stab at a macro (non "super macro") for the example function
which won't work because the compiler won't be able to decide on which
overload to use:

#define DecCheckBit(T, S, W) \
inline bool CheckBit(T & mask, uint32 n) \
{ \
Assert((n ge 1) and (n le W), ArgOutOfRangeError, "n") \
return ( (mask & (S << (n-1ui32)) ) != 0); \
}

DecCheckBit(uint8, 1ui8, 8)
DecCheckBit(uint16, 1ui16, 16)
DecCheckBit(uint32, 1ui32, 32)
DecCheckBit(uint64, 1ui64, 64)

Non-overload versions can be had (in place of the above) with
introduction of another macro to generate unique functions instead of
overloaded ones:

#define CheckBit(T) CheckBit_##T

#define DecCheckBit(T, S, W) \
inline bool CheckBit(T)(T & mask, uint32 n) \
{ \
Assert((n ge 1) and (n le W), ArgOutOfRangeError, "n") \
return ( (mask & (S << (n-1ui32)) ) != 0); \
}

The above examples use an assertion, but that must be changed to an
exception, and the suffixes used are not the standard suffixes. Finally,
I didn't test the above.
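The ambiguity MikeP predicts is easy to demonstrate with a by-value sketch (standard fixed-width types are assumed here, unlike the thread's `T &` parameters and nonstandard suffixes):

```cpp
#include <cstdint>

// An overload set covering several unsigned fixed-width types. A call
// whose argument exactly matches one of these types resolves fine,
// but a plain int literal converts equally well to each unsigned
// type, so such a call is ambiguous.
inline bool CheckBit(std::uint8_t  mask, unsigned n) { return (mask & (1u << (n - 1))) != 0; }
inline bool CheckBit(std::uint16_t mask, unsigned n) { return (mask & (1u << (n - 1))) != 0; }
inline bool CheckBit(std::uint32_t mask, unsigned n) { return (mask & (1u << (n - 1))) != 0; }

// CheckBit(5, 1);               // error: ambiguous call
// CheckBit(std::uint8_t(5), 1); // OK: exact match
```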


Conclusions

The C++ template system cannot fully replace preprocessor techniques.
The C++ type system sucks.
The above 2 things make C++ tedious to use and difficult to learn.
 

Ian Collins

I opted for a preprocessor macro approach (I used the "super macro"
technique) to implement the actual code alluded to by the example. I feel
that aberrating the algorithm with C++-isms to make up for the
deficiencies in the language is an inferior solution to the macro
approach. Remember, that the example was not the actual code (the actual
code is a class and the example is indicative of one of the methods of
that class), and that the example suffers even more ills caused by the
inferior/deficient/incorrect C++ type system. Note that the point was to
generate 4 different definitions of the class (function, in the example)
for use and not meant to be a template or macro to be used directly.

What your example really suffered from was your unwillingness to use the
facilities offered by the language.
Here's a stab at a macro (non "super macro") for the example function
which won't work because the compiler won't be able to decide on which
overload to use:

#define DecCheckBit(T, S, W) \
inline bool CheckBit(T& mask, uint32 n) \
{ \
Assert((n ge 1) and (n le W), ArgOutOfRangeError, "n") \
return ( (mask & (S << (n-1ui32)) ) != 0); \
}

DecCheckBit(uint8, 1ui8, 8)
DecCheckBit(uint16, 1ui16, 16)
DecCheckBit(uint32, 1ui32, 32)
DecCheckBit(uint64, 1ui64, 64)

So a macro using non-standard suffixes and requiring the type to be
explicitly specified is superior to a template where the type can be
deduced by the compiler?
Non-overload versions can be had (in place of the above) with
introduction of another macro to generate unique functions instead of
overloaded ones:

#define CheckBit(T) CheckBit_##T

#define DecCheckBit(T, S, W) \
inline bool CheckBit(T)(T& mask, uint32 n) \
{ \
Assert((n ge 1) and (n le W), ArgOutOfRangeError, "n") \
return ( (mask & (S << (n-1ui32)) ) != 0); \
}

The above examples use assertion but that must be changed to an
exception, and the suffixes used are not the standard suffixes. Finally,
I didn't test the above.

There's a glowing recommendation.
Conclusions

The C++ template system cannot fully replace preprocessor techniques.

No one said it does. There are a small number of cases where a macro is
the only solution and few if any where it is a better solution. This
isn't one of them.
The C++ type system sucks.
How?

The above 2 things make C++ tedious to use and difficult to learn.

Not at all.
 

Michael DOUBEZ

I opted for a preprocessor macro approach (I used the "super macro"
technique) to implement the actual code alluded to by the example. I feel
that aberrating the algorithm with C++-isms to make up for the
deficiencies in the language is an inferior solution to the macro
approach.

AFAIS many solutions have been proposed. Clearly stating the desired
result would have helped.
Remember, that the example was not the actual code (the actual
code is a class and the example is indicative of one of the methods of
that class), and that the example suffers even more ills caused by the
inferior/deficient/incorrect C++ type system. Note that the point was to
generate 4 different definitions of the class (function, in the example)
for use and not meant to be a template or macro to be used directly.

You can use - not yet standard - static_assert() and - TR1 -
is_unsigned to limit the range of the template parameter type:

template <typename T, unsigned int W = std::numeric_limits<T>::digits >
bool CheckBit(T& mask, unsigned n)
{
static_assert( std::is_integral<T>::value && std::is_unsigned<T>::value,
               "T must be an unsigned integral type" );
// assert n in range
const T mask_n = T(1) << (n-1);
return (mask & mask_n) != 0;
}

And use your super macro for generating the wrappers:

inline bool CheckBit8(uint8& mask, unsigned n)
{
return CheckBit<uint8,8>(mask,n);
}
...

Better to put as little as possible in the macro.
Here's a stab at a macro (non "super macro") for the example function
which won't work because the compiler won't be able to decide on which
overload to use:

Well, it will fail on signed types since you are using a non-const
reference parameter.
But it shouldn't fail on unsigned types.

[snip - code]
Conclusions

The C++ template system cannot fully replace preprocessor techniques.

Well, it cannot generate a new class/function name, cannot generate a
switch/case, and cannot transform a parameter into its string
equivalent.

AFAIK those are the only cases where macros are really needed.
The C++ type system sucks.

It is a safety measure. Some prefer to drive without a seatbelt.
 

MikeP

Ian said:
What your example really suffered from was your unwillingness to use
the facilities offered by the language.

At first it looked like a good idea because it was brief enough. On
further analysis, though, I realized that compromising the algorithm was
not prudent (for a number of reasons).
So a macro using non-standard suffixes

While I didn't/don't use the standard suffixes, I see no reason why they
couldn't be used.
and requiring the type to be
explicitly specified is superior a template where the type can be
deduced by the compiler?

Avoiding compiler "black magic" is often beneficial. Think "explicit"
rather than "implicit". In this case, I would say that is clearly so.
There's a glowing recommendation.

I just wanted to give anyone interested a starting point for further
investigation if they wanted it. I no longer have the dilemma which
prompted me to start this thread.
No one said it does. There are a small number of cases where a macro
is the only solution and few if any where it is a better solution. This
isn't one of them.

Indeed it is one, to me, that's why I rejected any of the many template
approaches. That there are so many, probably indicates something is
wrong.

Many ways, and this example shows some of them. It's a good question for
you to answer to show that you recognize the C++ shortcomings and are
able to objectively address them.
Not at all.

Relative to a language that is cohesive in its mechanisms, the above is
true. Because the tediousness and difficulty are unnecessary, the above
is true. Of course, anyone who puts C++ on a pedestal is apt to call any
assessment of the language other than a glowing one blasphemy.
 

MikeP

Michael said:
AFAIS many solutions have been proposed.

All rejected, in the end though.
Clearly stating the desired
result would have helped.

Surely it was clear that what was desired was a "template" to generate a
very finite set of bit field classes. Efficiency counts for such. If
elegance of implementation surfaced as another requirement (indeed, that
made the macro approach trump the template approaches offered, in the
end), so be it. I think that this is a good case study.
You can use - not yet standard - static_assert() and - tr1 standard -
is_unsigned to limit the range of template parameter type:

template <typename T, unsigned int W = std::numeric_limits<T>::digits >
bool CheckBit(T& mask, unsigned n)
{
static_assert( std::is_integral<T>::value && std::is_unsigned<T>::value,
               "T must be an unsigned integral type" );
// assert n in range
const T mask_n = T(1) << (n-1);
return (mask & mask_n) != 0;
}

How is that more clear than using zero and the width of the unsigned
integer type in the assertion?

(Aside: I have a problem with succumbing to "T(1)" instead of "1uiXX"
just to get type consistency, that's why I like the macro better).
And use your super macro for generating the wrappers:

inline bool CheckBit8(uint8& mask, unsigned n)
{
return CheckBit<uint8,8>(mask,n);
}
...

Better to put as little as possible in the macro.

I have used the "generate unique names with a macro" technique for
macro-based templates so much, though, that it is a well-tested and
accepted technique.
Well, it will fail on signed type since you are using non-const
parameter by reference.

It should be const for the example.
But, it shouldn't fail on unsigned type.

Oh? You don't think that overloading a function on only the unsigned
integer types will be a problem?
[snip - code]
Conclusions

The C++ template system cannot fully replace preprocessor techniques.

Well, it cannot generate a new class/function name,

That it DOES do, under the covers.
cannot generate a
switch/case and cannot transform a parameter into its string
equivalent.

It looks like an oversight (problem unaddressed) to design a generic type
mechanism and not address literals within that mechanism. I think that
pretty much characterizes the issue in the use case scenario in this
thread.
AFAIK those are the only cases where macros are really needed.


It is a safety measure. Some prefer to drive without a seatbelt.

It's baggage from C.
 

Michael DOUBEZ

All rejected, in the end though.


Surely it was clear that what desired was a "template" to generate a very
finite set of bit field classes. Efficiency counts for such. If elegance
of implementation surfaced as another requirement (indeed, that made the
macro approach trump the template approaches offered, in the end), so be
it. I think that this is a good case study.

From experience, I find the super macro approach less elegant; reading
inside a macro definition is quite a pain. And the functions are not
caught by code analyzers.
How is that more clear than using zero and the width of the unsigned
integer type in the assertion?

The static assert ensures the range of template type parameters is
limited to unsigned integral type.
(Aside: I have a problem with succumbing to "T(1)"  instead of "1uiXX"
just to get type consistency, that's why I like the macro better).

Funny how C++ literal constants were designed to avoid writing the
type, and now they would be required.
I have used the "generate unique names with a macro for macro-based
templates so much though, that it is a well-tested and accepted
technique.

And one I use extensively together with code generation of the super
macro. And even after two years of using it, I find it unclear, hard
to debug and a pain to read.
It should be const for the example.


Oh? You don't think that overloading a function on only the unsigned
integer types will be a problem?

Not if you forbid the overloads on the signed types (by making them
private, for example).
Note that static_assert() makes the compilation fail on signed types.

Do you see any other problem?
[snip - code]
Conclusions
The C++ template system cannot fully replace preprocessor techniques.
Well, it cannot generate a new class/function name,

That it DOES do, under the covers.
cannot generate a
switch/case and cannot transform a parameter into its string
equivalent.

It looks like an oversight (problem unaddressed) to design a generic type
mechanism and not address literals within that mechanism. I think that
pretty much characterizes the issue in the use case scenario in this
thread.

What syntax would you propose?
 

MikeP

Michael said:
From experience, I find the super macro approach less elegant; reading
inside a macro definition is quite a pain. And the functions are not
caught by code analyzers.

It's for precise usage, not general usage in place of templates.
The static assert ensures the range of template type parameters is
limited to unsigned integral type.

Too much machinery for such a simple need. Too obfuscating also.
Funny how C++ literal constants were designed to avoid writing the
type, and now they would be required.

The underlying type system does not play nice with the things sitting on
top of it.
And one I use extensively together with code generation of the super
macro. And even after two years of using it, I find it unclear, hard
to debug and a pain to read.

Don't know what to tell ya. It works great for me. Nary a problem. Super
macros allow stepping into and debugging, whereas a traditional macro
(all on one line with line-continuation chars) is hard to debug. I
probably have a more evolved way of handling the issues.
Not if you forbid the overloads on the signed types (by making them
private, for example).
Note that static_assert() makes the compilation fail on signed types.

Signed types are not an issue in this case, per se. Implicit
conversions/coercions/promotions are the issue in regards to overloaded
functions.
What syntax would you propose ?

I have a short-term solution (macros). While the issue should probably be
addressed, it can't really be fixed, only band-aided over. Perhaps
something that allows templates to have some text-substitution
capabilities is in order. Consider:

template <typename T>
class MyClass
{
T x;

public:
T To##T(){ return x; }
};
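The pasted name in that wish is reachable today only through the preprocessor; a hedged sketch of the workaround (every name below is invented for illustration; `<stdint.h>` is used so the unqualified type names paste cleanly):

```cpp
#include <stdint.h>

// Approximating the "To##T" idea with a macro wrapper: the
// preprocessor pastes the per-type class and member names that a
// template cannot generate.
#define DEFINE_MYCLASS(T)                      \
    class MyClass_##T {                        \
        T x;                                   \
    public:                                    \
        explicit MyClass_##T(T v) : x(v) {}    \
        T To_##T() const { return x; }         \
    };

DEFINE_MYCLASS(uint32_t)  // defines MyClass_uint32_t::To_uint32_t()
DEFINE_MYCLASS(uint16_t)  // defines MyClass_uint16_t::To_uint16_t()
```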
 

Michael DOUBEZ

It's for precise usage, not general usage in place of templates.

For now.
Too much machinery for such a simple need. Too obfuscating also.

True. I would have preferred concepts but AFAIS we won't get them for
another decade now.
The underlying type system does not play nice with the things sitting on
top of it.



Don't know what to tell ya. It works great for me. Nary a problem.

We don't have the same problem space.
Super
macros allow stepping into and debugging whereas a traditional macro (all
on one line with line continuation chars) are hard to debug. I probably
have a more evolved way of handling the issues.

The tools available are a bit dated but I didn't expect a debugger
could step into a macro definition.
Signed types are not an issue in this case, per se. Implicit
conversions/coercions/promotions are the issue in regards to overloaded
functions.

Could you give me a failing case/scenario?
I have a short-term solution (macros). While the issue should probably be
addressed, it can't really be fixed, only band-aided over. Perhaps
something that allows templates to have some text-substitution
capabilities is in order.

Yes. That and iterating over the members, getting a meaningful name
identifier. In brief, compile-time reflection/introspection.
 

MikeP

Michael said:
True. I would have prefered concepts but AFAIS we won't get them for
another decade now.

IOW, never! (Because that is beyond C++'s lifespan potential).

We don't have the same problem space.

I don't see that as an issue, but I'm assuming you mean that proverbial
"large scale project". The issue is appropriate application of the
technique. Sure, if you over-use it instead of just using it when
necessary, problems may ensue (especially with no ancillary management of
the development process that recognizes various techniques instead of The
Big One). No need to continue this assumptive dialog though. Just a few
thoughts that immediately came to mind.
The tools available are a bit dated but I didn't expect a debugger
could step into a macro definition.

That is what "super macros" afford you. It's their biggest selling point.
Perhaps you don't know what I meant with "super macro"? A web search will
probably find it for you. But in short, the technique is such that the
parameterized code is put in a separate include file and the inclusion of
that file is wrapped with desired parameter values. For example:

#define T uint32
#include <myclasstemplate.h>
#undef T // Undefines can be put in the include file at the bottom.

The above will define whatever is in the include file but with the Ts as
uint32s. There are no line-continuation characters in the supermacro in
myclasstemplate.h, and that allows the debugger to step through instances
of objects of the supermacro.
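Condensed into one file, the pattern looks roughly like this (file and function names are hypothetical; in real use the marked body would sit in its own header and be #included once per parameter set):

```cpp
#include <cstdint>

// The includer sets NAME and T, then pulls in the body. Because the
// body contains no line-continuation backslashes, a debugger can
// step through each expansion like ordinary code.

#define NAME CheckBit32
#define T std::uint32_t
// ---- contents of the hypothetical "checkbit.inc" ----
inline bool NAME(T mask, unsigned n)
{
    return (mask & (T(1) << (n - 1))) != 0;
}
// ---- end "checkbit.inc" ----
#undef NAME
#undef T

// A second "inclusion" with different parameters:
#define NAME CheckBit8
#define T std::uint8_t
inline bool NAME(T mask, unsigned n)
{
    return (mask & (T(1) << (n - 1))) != 0;
}
#undef NAME
#undef T
```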

Could you give me a failing case/scenario ?

Try the example and you are bound to run into problems. Overloading on
integer types is perilous with C++.
Yes. That and iterating over the members, getting a meaningful name
identifier. In brief, compile time reflexion/introspection.

Perhaps. I haven't thought much about that kind of thing other than
recognizing, as in this example, that something is missing.
 

Michael DOUBEZ

Until when?

What I mean is once a tool is in use, there is the temptation for
others to (mis)use it.
Once, it becomes common practice, in the team culture, there is no way
to stop it.
 

MikeP

Michael said:
What I mean is once a tool is in use, there is the temptation for
others to (mis)use it.
Once, it becomes common practice, in the team culture, there is no way
to stop it.

Kids should not play with knives. If the "team" lacks discipline, or the
management lacks capability or effectiveness, isn't the project doomed at
the onset? All programming constructs can be abused. Banning constructs
or techniques is not the solution, in general. The key is judicious use
(but that is not to say that banning isn't ever desirable). I try to
program defensively in cognizance of the future, which surely will be one
without C++ in it (for me), so I don't want a "damn constructor in every
freekin' line of code". "Software maintenance and evolution and
lifetime", to me, has cross-language concerns.
 

Michael DOUBEZ

IOW, never! (Because that is beyond C++'s lifespan potential).

I disagree. We will need 2 years to get C++11-compliant compilers, 7
more years to get them mainstream in production, and yet another 5 years
to get people trained in using it and transfer it into the culture.
I don't see that as an issue, but I'm assuming you mean that proverbial
"large scale project".

I said problem space not problem scale.

In fact, IMHO the right solution would have been code generation, but
that raised other problems such as variability specifications.

[snip]
That is what "super macros" afford you. It's their biggest selling point.
Perhaps you don't know what I meant with "super macro"? A web search will
probably find it for you.

Actually, it did not, but I have two or three techniques that may
correspond: a big macro with another macro name as a parameter, or
playing with include files.
But in short, the technique is such that the
parameterized code is put in a separate include file and the inclusion of
that file is wrapped with desired parameter values. For example:

#define T uint32
#include <myclasstemplate.h>
#undef T // Undefines can be put in the include file at the bottom.

In my case, the parameters are the variable part. Different problem
space.
The above will define whatever is in the include file but with the Ts as
uint32s. There are no line-continuation characters in the supermacro in
myclasstemplate.h, and that allows the debugger to step through instances
of objects of the supermacro.

Nice to know. Thanks.
Try the example and you are bound to run into problems. Overloading on
integer types is perilous with C++.

True.

[snip]
 

MikeP

Michael said:
I disagree.

That there will never be concepts in C++ or that C++'s life has less than
10 years left?
We will need 2 years to get c++11 compliant compilers, 7
more years to get them main stream in production. Yet another 5 years
to get people trained using it and transfer it into the culture.


I said problem space not problem scale.

"large scale projects" can be viewed as "a space".
In fact, IMHO the right solution would have been code generation but
that raised other problems such as variabilities specifications.

[snip]
That is what "super macros" afford you. It's their biggest selling
point. Perhaps you don't know what I meant with "super macro"? A web
search will probably find it for you.

Actually, it did not

Here ya go: http://s11n.net/papers/supermacros_cpp.html
 
