Ways to define C constants



James Harris

BartC said:
....


Disagree. So long as 'ui16' is itself defined in terms of uint16_t, then I
can't see the problem. It's not as though the name is completely obscure
or misleading.

I don't agree with either of you! I cannot see the point of redefining ui16
to be an alias of uint16_t just to alter the name, if that's what you are
suggesting when you say 'defined in terms of'.

I use ui16 because it gives me independence from the compilers I use, one
which includes definitions for stdints and one which does not.

James
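A minimal sketch of the kind of compiler-independent definition being described here. The HAVE_STDINT_H macro is illustrative, not something from the thread; a real build system would set it for compilers that provide <stdint.h>:

```c
/* Illustrative only: one way to define ui16 so the same source builds
   with and without <stdint.h>. HAVE_STDINT_H is a hypothetical macro
   the build system would set for compilers providing <stdint.h>. */
#ifdef HAVE_STDINT_H
#include <stdint.h>
typedef uint16_t ui16;
#else
typedef unsigned short ui16;  /* assumes unsigned short is 16 bits wide */
#endif
```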
 

James Harris

James Kuyper said:
BartC said:
news:[email protected] [...]
Don't use made-up types such as "ui16" - prefer to use the standard C99
types from <stdint.h>, such as uint16_t. They are standardised and
commonly used, and let you write more portable code.

Disagree. So long as 'ui16' is itself defined in terms of uint16_t, then I
can't see the problem. It's not as though the name is completely obscure or
misleading.

What's the advantage of using "ui16" rather than the standard "uint16_t"?

It allows him to use a name he likes better than the one provided by the
standard. That's more important to him than making it easier for
maintenance programmers to understand what's going on. It's essentially
the same reason some people do

typedef unsigned int uint;

I like the way people are going off on to their own assumptions about the
OP's motives!

In fact, it is nothing of the sort. (Have just posted the reasons in two
other posts so won't repeat them here.)

James
 

James Harris

James Kuyper said:
On 06/02/2014 06:50 AM, James Harris wrote:
....


There is a convention that such identifiers should be in all-caps - I
think that following that convention is a good idea.

To check, would you use all caps for names in enums as well as those in
#defines? Would you use all caps for all such names whether they were in a
header file (and thus intended to be used in other modules) or in a .c file
(and intended to be private to the .c file)?

James
 

James Kuyper

On 06/02/2014 01:18 PM, BartC wrote:
....
There would be no need to look it up anyway, as it wouldn't occur to anyone
who's not a C expert that it might be uint_least16_t or uint_fast16_t
(whatever they might mean).

"no need" would follow from that argument, only if C code is never read
by C experts. If it were true that this would never happen, that would
be a rather appalling indictment of the programming industry.

Being able to recognize uint_least16_t and uint_fast16_t strikes me as a
pretty low standard to set for identifying C experts. They've been
standard types for 14 years now, and were among the first of the C99
features to be widely implemented (they were fairly widely implemented
as extensions to C even before the standard was approved).
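For reference, the three C99 families being distinguished here, side by side (the variable names are just for illustration):

```c
#include <stdint.h>

/* C99's three families of "16-bit" unsigned types:
   uint16_t       - exactly 16 bits, no padding (optional, but near-universal)
   uint_least16_t - the smallest type holding at least 16 bits (required)
   uint_fast16_t  - the "fastest" type of at least 16 bits (required, and
                    often wider than 16 bits on 32/64-bit targets) */
uint16_t       exact = 0xFFFF;
uint_least16_t least = 0xFFFF;
uint_fast16_t  fast  = 0xFFFF;
```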
 

Malcolm McLean

I use ui16 because it gives me independence from the compilers I use, one
which includes definitions for stdints and one which does not.
This is the irony.
As soon as something is standardised, you can't use it. Not everything will
support the standard, and you can't provide it without confusing everyone.
 

James Kuyper

To check, would you use all caps for names in enums as well as those in
#defines? Would you use all caps for all such names whether they were in a
header file (and thus intended to be used in other modules) or in a .c file
(and intended to be private to the .c file)?

Yes. If I could have implemented it using a macro expanding to an
integer constant expression, the fact that I chose instead to make it an
enumeration constant doesn't affect my feeling that it should be all-caps.
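The convention being described, sketched with invented names (MAX_RETRIES and BUFFER_SLOTS are not from the thread):

```c
/* The all-caps convention applied uniformly: the constant gets ALL_CAPS
   whether it is a macro or an enumeration constant. */
#define MAX_RETRIES 5              /* macro expanding to an integer constant */
enum { BUFFER_SLOTS = 64 };        /* enumeration constant, same naming */
```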
 

Keith Thompson

James Kuyper said:
I was in a hurry, and went a little overboard there. When I have a group
of closely related integer constants, and either they all fit in the
range 0-65535 or they all fit in the range -32767 to 32767, I will
usually declare them as enumeration constants of the same enumeration
type.
[...]

That will break if INT_MAX < 65535, since enumeration constants are
always of type (signed) int.

If you can assume at least 32-bit int (as POSIX requires, for example),
then there's no real need to limit it to 65535.
 

Keith Thompson

James Harris said:
I don't agree with either of you! I cannot see the point of redefining ui16
to be an alias of uint16_t just to alter the name, if that's what you are
suggesting when you say 'defined in terms of'.

I use ui16 because it gives me independence from the compilers I use, one
which includes definitions for stdints and one which does not.

In that situation, I'd write my own "stdint.h" header that implements as
much of the standard <stdint.h> as possible, using the same names
(perhaps I'd give the header a different name). I'd then use #ifdef,
perhaps with some build-time configuration mechanism, to decide whether
to use my own header or <stdint.h>.

If I define my own typedef named "uint16_t", I see no great benefit in
distinguishing between it and the typedef of the same name provided by
<stdint.h>.

Eventually, with any luck, all the implementations I need to worry about
will properly implement <stdint.h>, and my own header can quietly go
away.

See, for example, http://www.lysator.liu.se/(nobg)/c/q8/index.html
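A sketch of that approach. The header name and the HAVE_STDINT_H configuration macro are hypothetical; the fallback widths shown would have to be verified against the one specific pre-C99 compiler they are written for:

```c
/* my_stdint.h (sketch): provide the standard names whether or not the
   compiler ships <stdint.h>. HAVE_STDINT_H is an illustrative macro set
   by a build-time configuration mechanism. */
#ifndef MY_STDINT_H
#define MY_STDINT_H

#ifdef HAVE_STDINT_H
#include <stdint.h>            /* the real thing: just forward to it */
#else
/* Fallbacks for one specific pre-C99 compiler; each width must be
   checked against that compiler's documentation. */
typedef signed char    int8_t;
typedef unsigned char  uint8_t;
typedef short          int16_t;
typedef unsigned short uint16_t;
typedef int            int32_t;
typedef unsigned int   uint32_t;
#endif

#endif /* MY_STDINT_H */
```

Code that includes this header uses only the standard names, so it is unaffected when the fallback branch eventually goes away.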
 

James Kuyper

I was in a hurry, and went a little overboard there. When I have a group
of closely related integer constants, and either they all fit in the
range 0-65535 or they all fit in the range -32767 to 32767, I will
usually declare them as enumeration constants of the same enumeration
type.
[...]

That will break if INT_MAX < 65535, since enumeration constants are
always of type (signed) int.

I was thinking that the underlying type could be "unsigned int", so
65535 would be acceptable, but you're right, that doesn't work. I doubt
that I've ever defined an enumeration constant with a value greater than
16384, so the issue has never actually come up.
 

James Harris

Malcolm McLean said:
This is the irony.
As soon as something is standardised, you can't use it. Not everything will
support the standard, and you can't provide it without confusing everyone.

That reminds me about a comment by Richard H in C Unleashed.... goes to
check .... these names being prohibited as externals:

strength
memorandum
isotope
tolerance

along with names such as E4, EX, town and isthmus. All reserved because of
their prefixes.

James
 

Keith Thompson

Malcolm McLean said:
This is the irony.
As soon as something is standardised, you can't use it. Not everything will
support the standard, and you can't provide it without confusing everyone.

I disagree, at least in this case: you *can* provide it yourself without
confusing everyone. If `uint16_t` meets the standard's requirements, it
doesn't matter much whether it's provided by <stdint.h> or by
"my_stdint.h".

"my_stdint.h" can even be implemented by a simple "#include <stdint.h>"
when it's available.
 

Keith Thompson

James Kuyper said:
I was in a hurry, and went a little overboard there. When I have a group
of closely related integer constants, and either they all fit in the
range 0-65535 or they all fit in the range -32767 to 32767, I will
usually declare them as enumeration constants of the same enumeration
type.
[...]

That will break if INT_MAX < 65535, since enumeration constants are
always of type (signed) int.

I was thinking that the underlying type could be "unsigned int", so
65535 would be acceptable, but you're right, that doesn't work. I doubt
that I've ever defined an enumeration constant with a value greater than
16384, so the issue has never actually come up.

Yes, the underlying type could be unsigned int, but that doesn't
help. The enumeration type (which, if you're just using it to define
constants, is likely to be anonymous and never used) is compatible
with some implementation-defined integer type, but the enumeration
constants themselves are always of type int. (Yes, that's ugly.)
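The rule being discussed can be demonstrated with C11's _Generic (the names SMALL and BIG are invented; note this describes C99/C11, before C23 relaxed the rule):

```c
/* Enumeration constants always have type int in C99/C11, even when the
   enumeration *type* is made compatible with unsigned int. So a constant
   of 65535 is only portable where INT_MAX >= 65535. */
enum { SMALL = 42, BIG = 65535 };

/* C11 _Generic selects on type; both constants select the int branch. */
int small_is_int = _Generic(SMALL, int: 1, default: 0);
int big_is_int   = _Generic(BIG,   int: 1, default: 0);
```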
 

James Harris

Keith Thompson said:
....


In that situation, I'd write my own "stdint.h" header that implements as
much of the standard <stdint.h> as possible, using the same names
(perhaps I'd give the header a different name). I'd then use #ifdef,
perhaps with some build-time configuration mechanism, to decide whether
to use my own header or <stdint.h>.

We discussed this here some time ago when I was thinking of setting up what
became a types.h that would provide common definitions and work for 16-, 32-
and 64-bit compilers. Some people suggested using the standard names as you
do above. I can see merit in that but my reasons for using my own names are
unchanged, i.e.:

The source is clearly using custom definitions. No chance of maintenance
programmer assuming that code has access to stdint defs which might differ.
Isolation from bugs in one definition or the other. Control, especially as
one compiler has been found to be a little imperfect. And stability of
definitions over compiler upgrades. The silent migration from self-defined
to compiler-defined that you suggest makes me especially uneasy. I would
rather know what was going on even if it looks a little awkward - which in
this case I don't think it does, as it happens. Call it control freakery if
you like but I would rather see what's going on than cover it over. :)

OK, I know that paragraph is a little overstated(!) but it does explain a
bit where I am coming from.

James
 

David Brown

We discussed this here some time ago when I was thinking of setting up what
became a types.h that would provide common definitions and work for 16-, 32-
and 64-bit compilers. Some people suggested using the standard names as you
do above. I can see merit in that but my reasons for using my own names are
unchanged, i.e.:

The source is clearly using custom definitions. No chance of maintenance
programmer assuming that code has access to stdint defs which might differ.
Isolation from bugs in one definition or the other. Control, especially as
one compiler has been found to be a little imperfect. And stability of
definitions over compiler upgrades. The silent migration from self-defined
to compiler-defined that you suggest makes me especially uneasy. I would
rather know what was going on even if it looks a little awkward - which in
this case I don't think it does, as it happens. Call it control freakery if
you like but I would rather see what's going on than cover it over. :)

OK, I know that paragraph is a little overstated(!) but it does explain a
bit where I am coming from.

James

That makes no sense at all. I have used plenty of pre-C99 compilers -
and since some of the embedded software I write occasionally needs
maintenance, I sometimes still need to run them. Most of these pre-C99
compilers had a <stdint.h> defining at least the basic uintXX_t and
intXX_t types (some skipped the "least" or "fast" types). If I have to
write code that is portable across such ancient tools that they don't
have <stdint.h>, then I make my own headers (as Keith and others
suggest) that define these standard types for that particular compiler.

Then the source is clearly using standard definitions (even if I wrote
them myself) and standard types, that everyone can understand. There is
no chance of a maintenance programmer getting it wrong, unlike if I have
my own special types.

It's your choice, of course, but making up your own types to replace
standard ones is seldom a good idea.
 

Richard Bos

Keith Thompson said:
I disagree, at least in this case: you *can* provide it yourself without
confusing everyone. If `uint16_t` meets the standard's requirements, it
doesn't matter much whether it's provided by <stdint.h> or by
"my_stdint.h".

"my_stdint.h" can even be implemented by a simple "#include <stdint.h>"
when it's available.

And note the crucial point here: if you need portability across
platforms which may or may not have <stdint.h>, you _can_ legally and
safely #include "my_stdint.h", with quote marks instead of angle
brackets. In fact, if this hasn't changed in C11[*], you can even
#include "stdint.h", and if you haven't provided a "stdint.h" elsewhere,
it will reliably fall back on the implementation's <stdint.h>.

In other words: use
#include "stdint.h"
in your program.
- If the implementation provides a <stdint.h>, you shouldn't provide
one, and the implementation will use its own.
- If the implementation doesn't provide <stdint.h>, you can provide one
with the necessary #definitions, and the implementation will use that.
- The same code will compile correctly in either case.

Richard

[*] Which I don't have - is there a link anywhere?
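The quoted-include fallback in practice (wrap_demo is an invented name; the point is only the #include line):

```c
/* Relying on the quoted-include search order described above:
   #include "stdint.h" searches the program's own directories first;
   when no local stdint.h exists, the search falls back to where
   <stdint.h> lives, so the implementation's header is used. */
#include "stdint.h"

uint16_t wrap_demo(uint16_t x) {
    return (uint16_t)(x + 1);   /* unsigned arithmetic wraps mod 2^16 */
}
```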
 

Richard Bos

James Harris said:
The source is clearly using custom definitions. No chance of maintenance
programmer assuming that code has access to stdint defs which might differ.

If they're stdint definitions, they may _not_ differ. Any implementation
which supplies <stdint.h> _must_ use the Standard definitions; any
programmer who supplies his own "stdint.h" really, really ought to, and
if he doesn't, smack him.

By contrast, if they are "Harrisesintegertypes.h" definitions, goodness
only knows what they mean.
Call it control freakery if you like

I call it control over-freakery. You're trying to control something
which _is already controlled_, and replacing it with something which
isn't.

Richard
 

Keith Thompson

Keith Thompson said:
I disagree, at least in this case: you *can* provide it yourself without
confusing everyone. If `uint16_t` meets the standard's requirements, it
doesn't matter much whether it's provided by <stdint.h> or by
"my_stdint.h".

"my_stdint.h" can even be implemented by a simple "#include <stdint.h>"
when it's available.

And note the crucial point here: if you need portability across
platforms which may or may not have <stdint.h>, you _can_ legally and
safely #include "my_stdint.h", with quote marks instead of angle
brackets. In fact, if this hasn't changed in C11[*], you can even
#include "stdint.h", and if you haven't provided a "stdint.h" elsewhere,
it will reliably fall back on the implementation's <stdint.h>.

In other words: use
#include "stdint.h"
in your program.
- If the implementation provides a <stdint.h>, you shouldn't provide
one, and the implementation will use its own.
- If the implementation doesn't provide <stdint.h>, you can provide one
with the necessary #definitions, and the implementation will use that.
- The same code will compile correctly in either case.

Richard

[*] Which I don't have - is there a link anywhere?

No, it hasn't changed in C11.

The latest pre-C11 draft (which is very close to the published standard)
is N1570:

http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf
 

Ian Collins

Malcolm said:
The constant is used once. From then on, it's a variable. Since it only appears once,
it's not important whether it appears as a literal or not.

It is if you care about clarity.
The snag with the #define method is that it's easy to write code which relies
on the #define being a certain value. Then when we change the definition, it
breaks. If the number is a variable, that's still possible, but it's much less likely.

Using #define in this context is a bad idea (as it usually is). Use a
const! Macros don't respect scope.
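The scope difference can be shown in a few lines (function names are invented; 0x74 is the value from the example being discussed):

```c
/* A const object obeys block scope; a macro expands anywhere after
   its #define, regardless of braces. */
#define PORT_A 0x74            /* visible from here to end of file */

int port_from_const(void) {
    const int port_a = 0x74;   /* visible only inside this function */
    return port_a;
}

int port_from_macro(void) {
    return PORT_A;             /* the macro still expands here */
}
```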
 

Malcolm McLean

It is if you care about clarity.
We're setting it, once, in main().

setupIOprocedures is passed the address of the port to use for "Port a". It then
sets one or more variables to 0x74, which get passed down to subroutines.

So we can use a #define, a const int, or a hardcoded literal in main(), but
the value 0x74 appears in one place in the binary, the #define/ const int
is used in one place in main(). You can argue that a definition is a bit
clearer than a literal, but it's a very small point.
 

BartC

Malcolm McLean said:
We're setting it, once, in main().

setupIOprocedures is passed the address of the port to use for "Port a".
It then
sets one or more variables to 0x74, which get passed down to subroutines.

What *is* the address or number of port A in your example, or doesn't it
mention it? Because your code is very confusing (the OP was talking about a
constant defining a port number).
 
