Is enum a suitable way to implement a "local define?"


David Brown

Sorry, but anything that involves doing X, Y or Z to make type specs 'easy'
isn't a solution! It shouldn't be necessary to do anything; they should be
self-explanatory. However if you are stuck with writing actual C (I'm not)
then I suppose you might need to use these methods.

Why should it be so easy to write a complex type definition as a single
statement? There is nothing wrong with using typedefs - just as there
is nothing wrong with using multiple lines of code when you /could/ get
the same effect from one line, or using extra temporary variables to
make code clearer.
(But I would probably still never use 'const' anyway.)

Why not? "const" is a very useful keyword in C. It helps catch
mistakes, and lets the compiler generate more efficient code - sometimes
significantly so. If you are not going to change something, why not let
the compiler know that so that it can help prevent accidents and take
advantage of the knowledge in code generation.
For the most part, "static const int sizeMax = 1024;" does the same job as
"enum { sizeMax = 1024 };" or "#define sizeMax 1024", but it does so with
safer typing, better static error checking, more flexibility and - IMHO -
clearer code. The only exception is that you can't use the "static const"
for the size of statically allocated arrays. It's a painful omission from
the standards.
Of course, it is possible to write things in reverse:

static char buffer[1024];
static const int sizeMax = sizeof(buffer) / sizeof(buffer[0]);

Whether or not you think that is better is a matter of opinion, but it's
an alternative that avoids the #define or the somewhat unnatural enum, and
still avoids using the "magic number" twice in the code.

But, sizeMax is still, as far as I can gather, some value with reserved
storage, whose address you can take. And, as you say, you can't declare
another array using that size.

"static const" objects typically don't require storage - the compiler
knows they will never be exported or visible outside the compilation
unit (unless you take their address, of course), so it doesn't need to
make storage space. An object such as "static const int sizeMax =
1024;" is likely to generate exactly the same code in use as a #define.

You are correct that you can't declare another array of the same size -
that's the limitation of static consts.
So it still falls short of what you can do with a proper named constant.

It is closer than you seem to think, but it still falls short.
You're right that it's another thing that would have been very easy to get
right, but forty years on and the language still doesn't have a plain,
straightforward way of declaring a named constant!

C++ lets you use static const in this way (and non-static const too).
So I don't see why it's a problem for C.
(It's a little crazy really; I have now several language projects where I
translate source code that has 'proper' named constants, into actual C. Did
I use #defines, or enums? Both have limitations to do with scope and type.
But I've just checked and the solution I came up with was the following; if
the source uses this (made C-like so as not to frighten anyone):

const sizeMax = 1024 /* 'int' is optional */

"int" is not optional in well-written code, IMHO.
char buffer[sizeMax]
int i=sizeMax

The generated C is just this:

char buffer[1024];
int i=1024;

the problem simply disappears! It's almost a non-issue, unless you have
to write actual C.)
 

David Brown

That's not the only exception. Given the "static const int" declaration,
sizeMax is not a constant expression, and there are a number of contexts
in which it can't be used (in a case label, for example).

I don't know that I'd call it a "painful omission", but I agree that it
would be nice to have a way to define named constants. Copying C++'s
semantics would probably be a decent approach.

"Painful" is subjective, of course. But I prefer to avoid macros where
possible - you often lose type-checking and scoping, it's often easy to
make small mistakes, and it is often difficult to spot the problem
when mistakes are made. (For example, if you accidentally write
"#define maxSize 100;", then you will get cryptic error messages when
maxSize is used, rather than helpful messages about an error in the
definition.) For most purposes, static inline functions are better than
function-like macros, and static const's (or enum constants) are better
than #define'd constants. C++'s semantics for consts would remove more
pre-processor macros from C code.
Of course, it is possible to write things in reverse:

static char buffer[1024];
static const int sizeMax = sizeof(buffer) / sizeof(buffer[0]);

Whether or not you think that is better is a matter of opinion, but it's
an alternative that avoids the #define or the somewhat unnatural enum,
and still avoids using the "magic number" twice in the code.

One problem with that is that it doesn't give a name to the value 1024.
For example, you might want two or more buffers with the same size.

Indeed.
 

glen herrmannsfeldt

(snip)
I can imagine "const sizeMax = 1024" being perfectly reasonable in some
other language, where the type of sizeMax is inferred from the type of
the initializer. C++'s relatively new reuse of the "auto" keyword
allows something similar:
auto sizeMax = 1024;
If I were to suggest a new feature for C, and if I didn't feel like
accounting for C++ compatibility, I might add a new "constant" keyword,
where
constant identifier = constant-expression;

In case anyone is interested, Java does it with "static final".

-- glen
 

David Brown

David Brown said:
On 07/04/14 00:20, BartC wrote: [...]
(It's a little crazy really; I have now several language projects where I
translate source code that has 'proper' named constants, into actual C. Did
I use #defines, or enums? Both have limitations to do with scope and type.
But I've just checked and the solution I came up with was the following; if
the source uses this (made C-like so as not to frighten anyone):

const sizeMax = 1024 /* 'int' is optional */

"int" is not optional in well-written code, IMHO.

"int" is not optional in C, and it hasn't been since 1999. But I think
BartC meant that to be some C-like language, not C itself.

I can imagine "const sizeMax = 1024" being perfectly reasonable in some
other language, where the type of sizeMax is inferred from the type of
the initializer. C++'s relatively new reuse of the "auto" keyword
allows something similar:

auto sizeMax = 1024;

If I were to suggest a new feature for C, and if I didn't feel like
accounting for C++ compatibility, I might add a new "constant" keyword,
where

constant identifier = constant-expression;

would make "identifier" an alias for the given constant expression.
For example:

constant maxSize = 1024;

would be equivalent to

enum { maxSize = 1024 };

If this were a democracy, I'd vote for such a keyword. I guess the type
of the constant would be determined much like with "auto" in C++.

However, I think you could come far by just allowing C to use const
values as compile-time constants (and thus as array sizes, switch
labels, etc.). It works in C++ - there is no reason why it should not
work the same in C. The differences between C++ const and C const
should not affect things here.
and it would also permit expressions of other types.

It would also require programmers to understand the difference between
"const" (which means read-only) and "constant" (which refers to
expressions that are evaluated at compile time). If I didn't care about
backward compatibility, I'd re-spell "const" as "readonly", but that's a
non-starter.

I believe "const" started out as "readonly" in the early days of C++, or
"C with Classes". Keeping it as "readonly" would have avoided the
somewhat awkward keyword "constexpr" to mean /really/ constant.
 

Ian Collins

David said:
I believe "const" started out as "readonly" in the early days of C++, or
"C with Classes". Keeping it as "readonly" would have avoided the
somewhat awkward keyword "constexpr" to mean /really/ constant.

C++'s "constexpr" goes beyond constant values and includes constant
expressions. It also forces expressions to be evaluated at compile
time, which is useful for embedded work where compile time constants can
be stored in non-volatile memory.

Like C++'s const, "constexpr" would be another worthwhile addition to C.
 

BartC

Ian Collins said:
C++'s "constexpr" goes beyond constant values and includes constant
expressions. It also forces expressions to be evaluated at compile time,
which is useful for embedded work where compile time constants can be
stored in non-volatile memory.

I thought it would go without saying that in a feature such as:

constant name = value;

that 'value' would have to be an expression. Otherwise it would have bigger
limitations than any of #define, enum or const.

Obviously the expression can only include terms that are actual constants,
or previously defined named constants.
 

BartC

glen herrmannsfeldt said:
(snip)





In case anyone is interested, Java does it with "static final".

What is it with language designers and the need to beat around the bush with
these things and not call things what they actually are?
 

BartC

David Brown said:
On 07/04/14 00:20, BartC wrote:
Why should it be so easy to write a complex type definition as a single
statement?

Typedefs are fine when you want typedefs. But it's a failing when you have
to use them just to make a fairly ordinary type-spec decipherable; to declare:

'a as const pointer to array of pointer to char' I would need:

char *(* const a)[];

while to declare 'a as pointer to array of const pointer to char' I'd need:

char * const (*a)[];

Maybe /you/ can figure out which const means what, but I can't! (And don't
want to.) Look at the English description of each however, and it's
perfectly clear to which part each const pertains.

So you want the type-spec on one line, and ideally you want it to be
obvious.

But another thing about const: which bit exactly of that last spec could I
write to? I think (looking at the English!) you can modify the top pointer,
but not an array element, but *can* modify what the const array element
points to. All a bit meaningless really! More sensible would be just to have
a const qualifier at the top level.

As it is, if I really wanted this whole thing readonly, I'd have to write:

const char * const (* const a)[];

Is this really making the code better?
"int" is not optional in well-written code, IMHO.

Why not? You don't have 'int' in a #define'd constant, you don't have 'int'
in an enum constant, and you don't need 'int' in a const pseudo-constant!
This would make it just about the only place where it would be mandatory.

But the vast majority of named constants /will/ have int types.
"static const" objects typically don't require storage - the compiler
knows they will never be exported or visible outside the compilation
unit (unless you take their address, of course),

The compiler /might/ know (certainly one I'd write wouldn't!). Why rely on
whether a compiler may or may not treat the pseudo-constant the way you
want; why not just make it explicit:

readonly int abc=100;
constant int xyz=200;

.rodata
abc:
dd 100

xyz equ 200

That way there are no arguments!

(This works fine for scalar values such as ints and floats; but I also use
'constant' for large data such as strings. In this case storage /is/ needed,
and you start to get some overlap between constant and readonly.)
 

David Brown

C++'s "constexpr" goes beyond constant values and includes constant
expressions. It also forces expressions to be evaluated at compile
time, which is useful for embedded work where compile time constants can
be stored in non-volatile memory.

Like C++'s const, "constexpr" would be another worthwhile addition to C.

I agree here - as an embedded developer, the more that can be done at
compile-time and stored in non-volatile memory, the better. constexpr
opens up many new possibilities, such as compile-time generated tables
for approximate maths functions, CRC tables, etc. It is one of the
reasons I think C++11 is much more appealing for embedded development
than C++98 was. (I don't mean to say that C++98 was unsuitable for
embedded development, just that C++11 has many nice new features making
it better.)
 

Alain Ketterlin

glen herrmannsfeldt said:
(snip)





In case anyone is interested, Java does it with "static final".

For primitive types only. For class-like types (i.e., a reference to an
object), "final" only guarantees that the variable will not be rebound,
but the value of the bound object can be modified.

BTW, C++ now also has "constexpr", which goes way beyond "const",
because complete expressions (including function calls) may be evaluated
at compile time.

-- Alain.
 

Ian Collins

BartC said:
I thought it would go without saying that in a feature such as:

constant name = value;

that 'value' would have to be an expression. Otherwise it would have bigger
limitations than any of #define, enum or const.

Obviously the expression can only include terms that are actual constants,
or previously defined named constants.

Too restrictive: the constexpr may also be a function; the only
restriction is that it has to be evaluated at compile time.
 

BartC

Ian Collins said:
BartC wrote:

Too restrictive, the constexpr may also be a function, the only
restriction is it has to be able to be evaluated at compile time.

Too restrictive? Until now, we haven't even been able to define "constant
a=1;"!

Being able to evaluate absolutely anything at compile-time is far too
open-ended a feature, but then that is to be expected if coming from C++.

So if you had:

constexpr b = f(3,7,8);

and f() was some 1000-line pure function with no other inputs, you would
expect the compiler to 'execute' the function to get the result? With f()
perhaps having its own constexpr defines that make use of f's parameters.

This would now go from a suggested feature that is ludicrously simple to
implement, to ludicrously complex!
 

David Brown

David Brown said:
On 07/04/14 00:20, BartC wrote:
Why should it be so easy to write a complex type definition as a single
statement?

Typedefs are fine when you want typedefs. But it's a failing when you have
to use them just to make a fairly ordinary type-spec decipherable; to declare:

'a as const pointer to array of pointer to char' I would need:

char *(* const a)[];

typedef char * pChar;
typedef pChar arrayPChar[];
typedef arrayPChar * pArrayPChar;
typedef const pArrayPChar cpArrayPChar;

Each step is clear and unambiguous, and it's about as close to English
as you get in C. (You might not want to have so many steps here, and it
is likely that your "final" type will have a name reflecting its use in
the program - I am just illustrating a point, not advocating using
these typedefs verbatim.)

I don't dispute your point that C makes writing complex types directly
unnecessarily unclear - I merely offer typedef as a way of making C clearer.
while to declare 'a as pointer to array of const pointer to char' I'd need:

char * const (*a)[];

Maybe /you/ can figure out which const means what, but I can't! (And don't
want to.) Look at the English description of each however, and it's
perfectly clear to which part each const pertains.

So you want the type-spec on one line, and ideally you want it to be
obvious.

I agree on the obvious, but I don't agree that the type-spec has to be
on one line.
But another thing about const: which bit exactly of that last spec could I
write to? I think (looking at the English!) you can modify the top pointer,
but not an array element, but *can* modify what the const array element
points to. All a bit meaningless really! More sensible would be just to have
a const qualifier at the top level.

There are /many/ occasions when you want a non-constant pointer to
constant data, or a constant pointer to non-constant data. This applies
to "real" constant data that will never change - it applies doubly to
"readonly" pointers to read-write data.
As it is, if I really wanted this whole thing readonly, I'd have to write:

const char * const (* const a)[];

That is no better than using typedefs to make things clear.
Is this really making the code better?


Why not? You don't have 'int' in a #define'd constant, you don't have 'int'
in an enum constant, and you don't need 'int' in a const pseudo-constant!
This would make it just about the only place where it would be mandatory.

But the vast majority of named constants /will/ have int types.

Perhaps your programming is different, but I make a lot of use of
constants - and it is far from just being integers.

#define does not provide any type information - it's just textual
substitution. So it works perfectly well (barring the lack of checking
and safety) regardless of the type - people use #define with integral
types of all sorts, floating point constants, strings, etc.

"const" data is used for tables, strings, lists, etc., as well as just
integers. In embedded programming, it is common for resources such as
bitmaps to be encoded as const data arrays.

We have managed to get rid of much of the "default int" legacy of
older C standards, and compiler warnings can often be used to trap other
cases where it is still legal C. Introducing a new concept to C and
making it "default int" would be a solid leap backwards.
The compiler /might/ know (certainly one I'd write wouldn't!). Why rely on
whether a compiler may or may not treat the pseudo-constant the way you
want; why not just make it explicit:

readonly int abc=100;
constant int xyz=200;

.rodata
abc:
dd 100

xyz equ 200

That way there are no arguments!

You are /always/ relying on the compiler to generate good object code
that has the same visible result as the source code you give it - but
the compiler is not a translator. You cannot assume the compiler will
force the "readonly" data into addressable memory without limiting the
compiler's optimiser - conversely, you cannot assume that it will /not/
do so for "constant" data without limiting it. Programming is about
being accurate in describing what you want in a language common to you
and the compiler, and then letting the compiler do its job.

So back to reality of compilers as they exist today - "static const"
objects normally do not require storage unless you take their addresses.
/Big/ static const objects, such as strings or arrays, are typically
given read-only storage space, but they can also be optimised away by
the compiler. "const" data with external linkage is also usually given
storage space, especially if it is used in a compile unit where it is
declared but not defined. But even that is not guaranteed.
 

David Brown

Too restrictive? Until now, we haven't even been able to define
"constant a=1;"!

Being able to evaluate absolutely anything at compile-time is far too
open-ended a feature, but then that is to be expected if coming from C++.

So if you had:

constexpr b = f(3,7,8);

and f() was some 1000-line pure function with no other inputs, you would
expect the compiler to 'execute' the function to get the result? With
f() perhaps having its own constexpr defines that make use of f's
parameters.

Yes, that is /exactly/ what is expected. If f is constexpr (and pure
functions certainly can be constexpr), then it should be evaluated at
compile time.

The norm for most programs is that they are compiled a few times, but
run many times - it makes sense to trade longer compile times for shorter
run times.

And of course you are free not to use the feature.
This would now go from a suggested feature that is ludicrously simple to
implement, to ludicrously complex!

And yet, the people who implement C++ compilers have already implemented
it. In fact, people who implement /C/ compilers, or pre-C++11 C++
compilers, have already implemented much of it - compilers already do a
lot of compile-time evaluation to save run times. gcc goes out of its
way to make sure functions give exactly the same results when calculated
at compile time as they would at run time, regardless of the host/target
combination. C++11 "constexpr" functions are just an extension and
formalisation of the concept.
 

Ian Collins

BartC said:
Too restrictive? Until now, we haven't even been able to define "constant
a=1;"!

Being able to evaluate absolutely anything at compile-time is far too
open-ended a feature, but then that is to be expected if coming from C++.

There is only so much that can be evaluated at compile time and C++
already had rules for this. The advantage of constexpr is you can use
it to force evaluation at compile time (to use read only memory), rather
than relying on optimisation (which probably won't use read only
memory). In the embedded world, that certainty is important.

For example, something like:

constexpr int f( int a, int b, int c ) { return (a+b)/c; }

int main()
{
    constexpr int n = f( 2,3,4 );
}

will guarantee n is a compile time constant.

If you want to do more open ended things at compile time, you have to
become a master of the black art of template meta-programming!
 

Les Cargill

David said:
I agree here - as an embedded developer, the more that can be done at
compile-time and stored in non-volatile memory, the better.

But you can do this now with pragmas and using the linker/locator. Some
embedded toolchains have their own suites of heresies (and even
keywords) for this.
 

BartC

Perhaps your programming is different, but I make a lot of use of
constants - and it is far from just being integers.

Maybe you're thinking about other uses you make of 'const' in your code.

Because apart from replacing numeric literals such as 32767 and 453.592 in
source code, what else would named constants be used for? I can't imagine
that literal pointer values are going to be used that often! (And the most
common, 0, is already taken care of.)

And between integers and floating point, the former dominate in most of my
programs.
We have managed to get rid of much of the "default int" legacy of
older C standards, and compiler warnings can often be used to trap other
cases where it is still legal C. Introducing a new concept to C and
making it "default int" would be a solid leap backwards.

OK, I'll give you that. But perhaps then we should write most integer
constants as:

123i (or 123s)

to get rid of the default int type here, and make it explicit?
 

BartC

David Brown said:
On 07/04/14 12:03, BartC wrote:

Yes, that is /exactly/ what is expected. If f is constexpr (and pure
functions certainly can be constexpr), then it should be evaluated at
compile time.

The norm for most programs is that they are compiled a few times, but
run many times - it makes sense to trade longer compile times for shorter
run times.

I don't think I'm that comfortable with the idea.

If you did need something such as a set of pre-calculated tables, then you
might just use a script language to generate C code in an include file:

- You keep both languages simple

- You don't have to re-run the calculations on every compilation (which can
include running the user's possibly slow, buggy code, with the likelihood of
hanging the compiler if something is not right)

- You use a tool (the script language) which can be more appropriate for the
job, especially if the output will be a string.

- It is easier to generate multiple values (i.e. a table) rather than the
single-value constant definer we've been talking about
And yet, the people who implement C++ compilers have already implemented
it. In fact, people who implement /C/ compilers, or pre-C++11 C++
compilers, have already implemented much of it - compilers already do a
lot of compile-time evaluation to save run times. gcc goes out of its
way to make sure functions give exactly the same results when calculated
at compile time as they would at run time, regardless of the host/target
combination. C++11 "constexpr" functions are just an extension and
formalisation of the concept.

This is the same language where we won't even have official binary constants
until 2017?

And the same one where you already /can/ define 'static const int
sizemax=1024', but it's not possible to use that to dimension an array?

I think maybe it ought to learn to walk before it can run!
 

David Brown

But you can do this now with pragmas and using the linker/locater. Some
embedded toolchains have their own suites of heresies (and even
keywords) for this.

Of course you can get your compiler/linker to store compile-time known
data in read-only memory. "constexpr" lets you get more calculations
done at compile time, so it is easier to get more data into read-only
memory or even handled directly in code. I don't think it actually
changes what can be generated in this way - template magic can be used
to calculate a lot of stuff - but it makes it easier and neater. The
restrictions on constexpr functions impose some limitations and mean you
use a somewhat different style from "normal" C++ coding, but I think it
can help avoid some table-generating scripts that I use at the moment.
 

David Brown

Maybe you're thinking about other uses you make of 'const' in your code.

Because apart from replacing numeric literals such as 32767 and 453.592
in source code, what else would named constants be used for? I can't
imagine that literal pointer values are going to be used that often!
(And the most common, 0, is already taken care of.)

First, note that 453.592 is a floating point constant, not an integer -
thus you need to be able to write "const double sizeMax = 453.592".
Stick a "static" on the front, or use C++ (where "static" is implied for
initialised const declarations unless you explicitly add an "extern"
declaration), and you are already almost at the current C
implementation. The only missing point is for things like array sizes
and switch cases, which can legally use "const int" values in C++ but
not in C.

Second, I regularly use const tables, structs, strings, and combinations
thereof in my code. In embedded programming, you often have a lot of
fixed values and you need the efficiency and compactness of putting the
data in read-only parts of the code rather than reading in external
files or initialising at run-time. So if I have a program with a
hierarchic menu, I am going to have structs of arrays of structs with
lists of pointers to const char* for menu texts, pointers to functions
to call, flags for options, etc. And if the program is multi-lingual
there might be another layer of indirection to handle different
translations of the menu texts. And it is all declared "const" because
it will not change in the program, and should be in read-only flash memory.

And between integers and floating point, the former dominate in most of
my programs.



OK, I'll give you that. But perhaps then we should write most integer
constants as:

123i (or 123s)

to get rid of the default int type here, and make it explicit?

Why? What possible justification do you have for introducing an
inconsistent and arbitrary extra syntax when there is a perfectly good,
clear and simple existing syntax? If - like the original C designers -
your keyboard is so awful that writing an extra "int" is a pain, then I
recommend buying a new keyboard.

There are lots of things that could be done to improve C - many of which
are simple because they could be "stolen" from existing C++
implementations. Weird syntax to avoid writing "int" on constants is
certainly not one of these possible improvements.
 
