Type aliases

dom.k.black

Is it still common practice to use type aliases (INT, PCHAR, etc.)?

It looks ugly and breaks the syntax highlighting; are there any
advantages these days (MSVC++ 6.0 and later)? I could understand it,
maybe, back in the days when some compilers had 16 bit ints.

Also, if your product is heavily dependent on a 3rd party library
which just uses the raw types, does the aliasing help in any way at
all?
 
Victor Bazarov

Is it still common practice to use type aliases (INT, PCHAR, etc.)?

It depends what those aliases are for. INT instead of int makes
no sense, neither does PCHAR instead of char*. However, there are
probably some places where WORD is better than unsigned short
(like in a functional style cast) or DWORD instead of unsigned long
(for similar reasons). [you can see I've been working in Windows]
It looks ugly and breaks the syntax highlighting; are there any
advantages these days (MSVC++ 6.0 and later)? I could understand it,
maybe, back in the days when some compilers had 16 bit ints.

Uh... How do 16-bit ints play into the type aliasing?
Also, if your product is heavily dependent on a 3rd party library
which just uses the raw types, does the aliasing help in any way at
all?

No, but ask the reverse question.

V
 
Rolf Magnus

Victor said:
It depends what those aliases are for. INT instead of int makes
no sense, neither does PCHAR instead of char*.

IMHO, the funniest one is VOID instead of void.
However, there are probably some places where WORD is better than
unsigned short (like in a functional style cast) or DWORD instead of
unsigned long (for similar reasons). [you can see I've been working in
Windows]

But why the ugly ALL-UPPERCASE typedef names? Also, "word" can have
different meanings. On ARM CPUs for example, a word is 32 bits wide, while
on x86, it is only 16 bits.
Nowadays, I prefer to use the typedefs from the C99 stdint.h header, which
hopefully make it into the next C++ standard too. Most C++ compilers have
it anyway.
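
To make that concrete, a minimal sketch, assuming a compiler that ships
stdint.h (Boost's <boost/cstdint.hpp> is the usual fallback if not):

    #include <stdint.h>      /* <cstdint> once it is standard C++ */

    int main()
    {
        uint8_t  flags  = 0x80;        /* exactly 8 bits, unsigned  */
        uint16_t crc    = 0xFFFFu;     /* exactly 16 bits, unsigned */
        int32_t  offset = -1000000;    /* exactly 32 bits, signed   */

        /* The names state the width the code actually relies on,
           unlike WORD/DWORD, whose meaning depends on the vendor. */
        return (flags && crc && offset != 0) ? 0 : 1;
    }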
 
dom.k.black

Is it still common practice to use type aliases (INT, PCHAR, etc.)?

It depends what those aliases are for. INT instead of int makes
no sense, neither does PCHAR instead of char*. However, there are
probably some places where WORD is better than unsigned short
(like in a functional style cast) or DWORD instead of unsigned long
(for similar reasons). [you can see I've been working in Windows]

I thought the use of INT was to specify a minimum size, eg 32 bits, so
you know it is safe to use this type with values in the range
+/-2**31. Code needs to assume a minimum size, but it shouldn't need
to assume a maximum, or exact, size. INT was probably useful years
ago; I am not sure now.

Why this has to be done for chars or (FFS) bools I really don't know.
These things are never going to change size.

The use of WORD to specify exactly 16 bits (as opposed to SHORT to say
*at least* 16 bits) is usually either unnecessary or dodgy, eg people
sometimes create structures using BYTE, WORD and DWORD, then assume they
know the exact layout of the bytes in memory, usually just to allow
lazy coding.

WORD has its uses, eg if you are directly accessing memory mapped
hardware registers, but most times it is abused.
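
To illustrate, a rough sketch of the pattern I'm complaining about; the
typedefs, the struct and the widths in the comments are all assumptions
made up for the example:

    typedef unsigned char  BYTE;    /* assumed to be exactly 8 bits   */
    typedef unsigned short WORD;    /* assumed to be exactly 16 bits  */
    typedef unsigned long  DWORD;   /* assumed 32 bits (not on LP64)  */

    struct PacketHeader             /* hypothetical on-the-wire layout */
    {
        BYTE  version;              /* writer expects offset 0         */
        WORD  length;               /* usually lands at offset 2, not 1,
                                       because of alignment padding    */
        DWORD checksum;             /* offset and total size can vary  */
    };

    /* Pre-C++11 compile-time check: the array size becomes -1, and the
       build fails, on any platform where the layout assumption breaks. */
    typedef char header_is_8_bytes[sizeof(PacketHeader) == 8 ? 1 : -1];

Padding, alignment and byte order are all the compiler's business, which
is why reading such a struct straight out of a buffer is exactly the lazy
coding I mean.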
Uh... How do 16-bit ints play into the type aliasing?

I thought that was the main point.

You write code with INT defined as int, assuming it is a 32 bit int.

If you ever need to compile with a compiler that uses 16 bit ints, you
redefine INT as long for that specific compiler.

Isn't that the basic reason it is done?

But unless you are targeting a machine from the late 70's, or some
really specialised embedded system, this isn't really an issue anymore,
is it?
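
Something along these lines, I mean; the configuration macro and the
alias are made up for illustration, not taken from any real header:

    #if defined(COMPILER_HAS_16_BIT_INT)  /* hypothetical, set per compiler */
    typedef long INT;    /* long is guaranteed to be at least 32 bits       */
    #else
    typedef int  INT;    /* assumes int is already (at least) 32 bits here  */
    #endif

    /* Code written against INT keeps its range on both kinds of
       compiler, at the cost of carrying the alias around forever. */
    INT total = 2000000000;

    int main()
    {
        return total > 0 ? 0 : 1;
    }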
No, but ask the reverse question.

If you mean does the aliasing do any harm, well I'd say it does. It is
ugly and it stops syntax highlighting working, to name two. If it has
absolutely no upside, then any downside is enough to say you shouldn't
use it.
 
Victor Bazarov

If you mean does the aliasing do any harm, well I'd say it does. It is
ugly and it stops syntax highlighting working, to name two. If it has
absolutely no upside, then any downside is enough to say you shouldn't
use it.

No, I meant if your products are heavily dependent on a 3rd party
library that does use aliases (at least in its declarations) and
provides those aliases (of course), would it be beneficial to keep
using those aliases in your own code even if you don't intend to
call their functions? If not, why?

V
 
dom.k.black

No, I meant if your products are heavily dependent on a 3rd party
library that does use aliases (at least in its declarations) and
provides those aliases (of course), would it be beneficial to keep
using those aliases in your own code even if you don't intend to
call their functions? If not, why?

Well, I use a number of third-party libraries, a couple of which each
have their own aliases, and I would absolutely not base my own code on
them. Which one would I choose? And what would happen to my code if the
third party changed the definitions?

They have at least prefixed the aliases with a library-specific prefix
(another reason I wouldn't base my own code on them), but they get
cast right at the lowest possible level. I tend to try to isolate the
interface in a single module as far as is practical.
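
As a minimal sketch of that isolation; the vendor names here (ACME_INT,
acme_read_sensor) are invented stand-ins for whatever the third-party
header actually declares:

    typedef long ACME_INT;                 /* "vendor" alias (made up)     */
    ACME_INT acme_read_sensor(ACME_INT id) /* stand-in for the vendor call */
    {
        return id * 2;
    }

    /* Our own interface module: the rest of the code base sees plain C++
       types, and the vendor aliases stay confined to this one wrapper. */
    namespace sensors {
        int read(int id)
        {
            /* The cast happens here, at the lowest practical level, so a
               change in the vendor's typedefs only touches this function. */
            return static_cast<int>(acme_read_sensor(static_cast<ACME_INT>(id)));
        }
    }

    int main()
    {
        return sensors::read(21) == 42 ? 0 : 1;
    }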

The worst thing is when 2 libraries both use their own versions of the
standard aliases (INT, CHAR etc).
 
Bo Persson

I thought that was the main point.

You write code with INT defined as int, assuming it is a 32 bit int.

If you ever need to compile with a compiler that uses 16 bit ints,
you redefine INT as long for that specific compiler.

Isn't that the basic reason it is done?

Yes, but it really doesn't work anyway. When moving from 32 bit to 16
bit, you really want *some* of the ints to be 16 bit, and others to
stay at 32 bits. How do you know which ones? Do you wanna bet that all
INT uses are correct? No!



Bo Persson
 
dom.k.black

Yes, but it really doesn't work anyway. When moving from 32 bit to 16
bit, you really want *some* of the ints to be 16 bit, and others to
stay at 32 bits. How do you know which ones? Do you wanna bet that all
INT uses are correct? No!

Bo Persson

Not quite. On the 16 bit architecture, you *need* some of the ints to
be 32 bit, and you would *prefer* some to be 16 bit (ie variables
which never hold large values are more efficient as 16 bit). But your
code ought to still work if all ints are 32 bit, just slightly slower
than it might be.

The idea is that you use SHORT in the cases where 16 bit is
acceptable, I guess.
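
Using the C99 names Rolf mentioned rather than SHORT/INT, the split would
look something like this (the variable names are just for illustration,
and I'm assuming the header is available):

    #include <stdint.h>

    int main()
    {
        int_least16_t small_counter = 0;        /* 16 bits is acceptable   */
        int_least32_t big_total     = 2000000;  /* needs more than 16 bits */

        for (small_counter = 0; small_counter < 100; ++small_counter)
            big_total += small_counter;

        return big_total > 0 ? 0 : 1;
    }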
 
Joe Greer

But why the ugly ALL-UPPERCASE typedef names? Also, "word" can have
different meanings. On ARM CPUs for example, a word is 32 bits wide,
while on x86, it is only 16 bits.
Nowadays, I prefer to use the typedefs from the C99 stdint.h header,
which hopefully make it into the next C++ standard too. Most C++
compilers have it anyway.

Consistency. Originally MS started doing that so they could decorate
pointer types with appropriate __far, __near, __declspec etc. (depending on
memory model and other switches). The rest sort of carries over from that. I'm
not claiming they have a place in modern code, but sometimes history has
its way with you regardless.

joe
 
Joe Greer

Is it still common practice to use type aliases (INT, PCHAR, etc.)?

It depends what those aliases are for. INT instead of int makes
no sense, neither does PCHAR instead of char*. However, there are
probably some places where WORD is better than unsigned short
(like in a functional style cast) or DWORD instead of unsigned long
(for similar reasons). [you can see I've been working in Windows]
It looks ugly and breaks the syntax highlighting; are there any
advantages these days (MSVC++ 6.0 and later)? I could understand it,
maybe, back in the days when some compilers had 16 bit ints.

Uh... How do 16-bit ints play into the type aliasing?

The macros were introduced to handle the various memory models that 16-bit
Windows supported. Pointers would quite often have __near, __far,
__syscall, etc. associated with them, and not all of them could appear in a
typedef. Therefore, macros and all uppercase. It has been a long time
since I have even had to think about that mess. Now, I feel all dirty
after having my thoughts dragged through the muck. ;)
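
For anyone who never had the pleasure, it went roughly along these lines;
this is a sketch from memory rather than a quote from the real headers,
and the configuration macro is invented:

    #ifdef BUILDING_FOR_16_BIT        /* hypothetical configuration macro */
    #define FAR  __far                /* pointer may cross segments       */
    #define NEAR __near               /* pointer stays in its segment     */
    #else
    #define FAR                       /* flat memory model: both vanish   */
    #define NEAR
    #endif

    typedef char NEAR *PSTR;          /* "pointer to string", near        */
    typedef char FAR  *LPSTR;         /* "long pointer to string", far    */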


joe
 
James Kanze

Is it still common practice to use type aliases (INT, PCHAR, etc.)?

It never was, as far as I can tell (and my C/C++ experience goes
back to the early 1980's).
It looks ugly and breaks the syntax highlighting; are there
any advantages these days (MSVC++ 6.0 and later)? I could
understand it, maybe, back in the days when some compilers had
16 bit ints.

Most of the C I wrote was for 16 bit machines, and I've never
seen such conventions. They look like pure obfuscation to me.
Also, if your product is heavily dependent on a 3rd party
library which just uses the raw types, does the aliasing help
in any way at all?

Aliasing as in your examples actively hurts.

Aliasing can be useful if the names give additional semantic
information:
    typedef int WidgitCount ;
for example, especially when linked to types which can change
from one implementation to the next, e.g. off_t under Posix.
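
A slightly fuller sketch of the same idea; the names here are invented,
and off_t is the real Posix type:

    #include <vector>

    typedef std::vector<int>::size_type WidgitCount;
    // whatever size type the container actually uses

    WidgitCount countLarge(std::vector<int> const& widgits, int threshold)
    {
        WidgitCount n = 0;
        for (std::vector<int>::size_type i = 0; i != widgits.size(); ++i)
            if (widgits[i] > threshold)
                ++n;
        return n;   // the name says what the value means, not how wide it is
    }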
 
Jeff Schwab

James said:
It never was, as far as I can tell (and my C/C++ experience goes
back to the early 1980's).


Most of the C I wrote was for 16 bit machines, and I've never
seen such conventions. They look like pure obfuscation to me.


Aliasing as in your examples actively hurts.

Aliasing can be useful if the names give additional semantic
information:

In C++, INT and PCHAR don't appear to serve any useful purpose. Those
aliases are very common in assembly (especially with MASM on x86),
because there is no other way to tell that register AX holds a signed
integer value, whereas DI holds the address of a character string. The
aliases were common in C under DOS, probably to make sure that C code
was using the exact same primitive types as libraries (or BIOS calls)
implemented in assembly language. In Win32 and MFC, the aliases are
still in common use, having been carried through the decades for no
apparent reason. (If there's a reason that's just not apparent, I hope
someone here will enlighten me.)
    typedef int WidgitCount ;
for example, especially when linked to types which can change
from one implementation to the next, e.g. off_t under Posix.

Good examples.
 
James Kanze

Jeff Schwab wrote:

[...]
In C++, INT and PCHAR don't appear to serve any useful
purpose. Those aliases are very common in assembly
(especially with MASM on x86), because there is no other way
to tell that register AX holds a signed integer value, whereas
DI holds the address of a character string.

I'm of two minds about this. The Intel assembler allows (or
allowed---I haven't used it for ages) giving symbolic names to
registers, and I have done so in the past. The problem is that
when you need a register for something, you can't immediately
see which ones are actually being used, so you're likely to use
SI, even though it's already being used under the name of
srcPtr.

(Of course, INT and PCHAR are still names without sufficient
additional semantic content.)
The aliases were common in C under DOS, probably to make sure
that C code was using the exact same primitive types as
libraries (or BIOS calls) implemented in assembly language.

I've never seen them, and I've done a lot of assembler
programming on 8086. But admittedly, not that much under DOS.
(Most of my work at the time was on embedded real time
systems---using an OS that I wrote myself.)
 
Jerry Coffin

[ ... ]
In C++, INT and PCHAR don't appear to serve any useful purpose. Those
aliases are very common in assembly (especially with MASM on x86),
because there is no other way to tell that register AX holds a signed
integer value, whereas DI holds the address of a character string.

At least some assemblers (most obviously MASM) allow you to do exactly
that:

assume AX:sword
assume DI:ptr byte
or:
assume DI:ptr sbyte

Depending on whether the string is signed or not.
The aliases were common in C under DOS, probably to make sure that C code
was using the exact same primitive types as libraries (or BIOS calls)
implemented in assembly language. In Win32 and MFC, the aliases are
still in common use, having been carried through the decades for no
apparent reason. (If there's a reason that's just not apparent, I hope
someone here will enlighten me.)

Microsoft's theory (outlined in at least the early versions of their
Win32 documentation) was that they were specifying types that were
independent of the language binding. For example, a DWORD would be the
same logical type, regardless of the way you'd express that concept in a
particular target language.

One obvious problem, of course, is that Microsoft frequently abuses the
system and ignores the logical types they have designed. For example,
they have a BOOL type that's supposed to represent a boolean value --
but at least one function declared to return a BOOL can (under some
circumstances) return a handle to an object instead.
 
