Reasoning for a macro

Mark

Quite a lot of people here argue about the use of complex macros, and it
appears that most of those who know the C language very well recommend
against the use of macros except in some trivial cases. I'm trying to
understand the reasoning for this, and what factors I should consider when
deciding whether to use a macro or not.

Consider this example. I'm implementing a serial interface over the GPIO lines
of a microcontroller; the underlying procedure is to put '0' or '1' on pins, and
higher-level functions are built on it. So I considered wrapping the API
dealing with the GPIO registers in macros:

#define SET_BIT0(pin) \
    do { \
        gpio_out(dev, (pin), 0); \
        gpio_outen(dev, (pin), (pin)); \
    while (0)

#define SET_BIT1(pin) \
    do { \
        gpio_out(dev, (pin), (pin)); \
        gpio_outen(dev, (pin), (pin)); \
    while (0)

... and use these macros in the code implementing the guts of the serial
interface (ticking the sync clock, sending commands, acknowledges, and so on).

What would you say -- is this an appropriate use of macros, or would it be
better/safer to put it in inline routines, and if so, why?
Thanks.
 
Ian Collins

Mark said:
Quite a lot of people here argue about the use of complex macros, and it
appears that most of those who know the C language very well recommend
against the use of macros except in some trivial cases. I'm trying to
understand the reasoning for this, and what factors I should consider
when deciding whether to use a macro or not.

Consider this example. I'm implementing a serial interface over the GPIO
lines of a microcontroller; the underlying procedure is to put '0' or '1' on
pins, and higher-level functions are built on it. So I considered wrapping
the API dealing with the GPIO registers in macros:

#define SET_BIT0(pin) \
    do { \
        gpio_out(dev, (pin), 0); \
        gpio_outen(dev, (pin), (pin)); \
    while (0)

#define SET_BIT1(pin) \
    do { \
        gpio_out(dev, (pin), (pin)); \
        gpio_outen(dev, (pin), (pin)); \
    while (0)

Missing '}'
... and use these macros in the code implementing the guts of the serial
interface (ticking the sync clock, sending commands, acknowledges, and so on).

What would you say -- is this an appropriate use of macros, or would it be
better/safer to put it in inline routines, and if so, why?

I'd say it's a gratuitous use of macros. Just use functions and rely
on the compiler to inline them; a function is clearer (no pointless
do { } while (0)) and easier to debug (you can step into it).
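
For instance (untested, and guessing at your gpio_* signatures; dev is
assumed visible at file scope, just as the macros assume):

static inline void set_bit0(unsigned pin)
{
    gpio_out(dev, pin, 0);     /* drive the pin low */
    gpio_outen(dev, pin, pin); /* enable output on the pin */
}

static inline void set_bit1(unsigned pin)
{
    gpio_out(dev, pin, pin);   /* drive the pin high */
    gpio_outen(dev, pin, pin);
}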
 
bart.c

Mark said:
Quite a lot of people here argue about the use of complex macros ...
#define SET_BIT0(pin) \
    do { \
        gpio_out(dev, (pin), 0); \
        gpio_outen(dev, (pin), (pin)); \
    while (0)

#define SET_BIT1(pin) \
    do { \
        gpio_out(dev, (pin), (pin)); \
        gpio_outen(dev, (pin), (pin)); \
    while (0)

... and use these macros in the code implementing the guts of the serial
interface (ticking the sync clock, sending commands, acknowledges, and so on).

What would you say -- is this an appropriate use of macros, or would it be
better/safer to put it in inline routines, and if so, why?

These macros aren't complex, they're a fairly reasonable use of macros.

But since they already contain function calls, there seems little to lose
from using functions for them instead. (And also having a set_bit(pin, value),
instead of only dedicated set_bit0/set_bit1 routines.)

Also you won't have to bother with that do...while(0) stuff, which might
baffle some people.
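
Something like this, say (untested; I'm inferring from your macros that
gpio_out takes the pin's mask as the output value for a 1, and 0 for a 0):

static inline void set_bit(unsigned pin, int value)
{
    gpio_out(dev, pin, value ? pin : 0);  /* 0 or the pin's mask bit */
    gpio_outen(dev, pin, pin);            /* enable output on the pin */
}

Then set_bit(pin, 0) and set_bit(pin, 1) replace both macros.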

(BTW is there a "}" missing from those macros? And a '1' from the second
gpio_out call?)
 
Ben Bacarisse

Mark said:
Quite a lot of people here argue about the use of complex macros, and it
appears that most of those who know the C language very well recommend
against the use of macros except in some trivial cases. I'm trying to
understand the reasoning for this, and what factors I should consider
when deciding whether to use a macro or not.

I am surprised that you formed this opinion without picking up the
reasoning. My recollection is that most people will comment on a macro
and explain why it is best to avoid using it at the same time. Maybe
there's been a spate of blanket "don't use macros" replies.

Consider this example. I'm implementing a serial interface over the GPIO
lines of a microcontroller; the underlying procedure is to put '0' or '1'
on pins, and higher-level functions are built on it. So I considered
wrapping the API dealing with the GPIO registers in macros:

#define SET_BIT0(pin) \
    do { \
        gpio_out(dev, (pin), 0); \
        gpio_outen(dev, (pin), (pin)); \
    while (0)

#define SET_BIT1(pin) \
    do { \
        gpio_out(dev, (pin), (pin)); \
        gpio_outen(dev, (pin), (pin)); \
    while (0)

This missing } is one reason right there. When you use this macro, the
error will show up some way away. In fact it could be dozens of lines
later.

Oddest of all, though: why do you have two of these? That's twice
the chance to make a mistake.

... and use these macros in the code implementing the guts of the serial
interface (ticking the sync clock, sending commands, acknowledges, and so on).

What would you say -- is this an appropriate use of macros, or would it be
better/safer to put it in inline routines, and if so, why?

No, I'd use a function. I would not even bother to see if it was
inlined. The slowness of I/O almost always trumps the speed of a function
call and return, though I admit that your I/O may be different since I
don't know the device you are using. Anyway, you risk confusing error
messages, bugs due to double evaluation of 'pin', more obscure code, and
harder testing and debugging, all for a dubious advantage.

When compilers were less sophisticated and machines were slower, the
pay-off was higher, but I have gradually unlearnt my desire to reach
for a macro. There are reasons to use them, but they usually involve
things you can't do with a function.
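
(For example, anything that needs stringizing or the caller's source
position has to be a macro. A quick sketch, needing <stdio.h>:

#define TRACE(expr) \
    fprintf(stderr, "%s:%d: %s = %ld\n", \
            __FILE__, __LINE__, #expr, (long)(expr))

No function can get at #expr or the caller's __LINE__ like that.)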
 
Eric Sosman

Mark said:
Quite a lot of people here argue about the use of complex macros, and it
appears that most of those who know the C language very well recommend
against the use of macros except in some trivial cases. I'm trying to
understand the reasoning for this, and what factors I should consider
when deciding whether to use a macro or not.

It seems to me you've read much more into the debate than is
actually there. "Macros aren't right for everything" is not the
same as "Don't use macros."

Consider this example. I'm implementing a serial interface over the GPIO
lines of a microcontroller; the underlying procedure is to put '0' or '1' on
pins, and higher-level functions are built on it. So I considered wrapping
the API dealing with the GPIO registers in macros:

#define SET_BIT0(pin) \
    do { \
        gpio_out(dev, (pin), 0); \
        gpio_outen(dev, (pin), (pin)); \
    while (0)

#define SET_BIT1(pin) \
    do { \
        gpio_out(dev, (pin), (pin)); \
        gpio_outen(dev, (pin), (pin)); \
    while (0)

... and use these macros in the code implementing the guts of the serial
interface (ticking the sync clock, sending commands, acknowledges, and so on).

Lotsa luck. Others have pointed out the missing brace; it also
looks like you may have messed up the third argument to gpio_out() in
one or the other macro. The hard-wired identifier `dev' is suspect,
unless it's some kind of well-known program-wide thing on a par with
`stdin' and `errno' (in which case it probably ought to have a more
descriptive name than merely `dev').

What would you say -- is this an appropriate use of macros, or would it be
better/safer to put it in inline routines, and if so, why?

I find it hard to imagine a program in which the macros you've
shown would be helpful. I can't say whether cleaned-up and corrected
versions would be, because you haven't explained enough about the
situation. What advantage are you trying to obtain by macroizing
the thing rather than using a function or using open code? Why do
you write two macros/functions when it appears one might suffice?

Macros -- or functions, or arrays, or loops -- are not appropriate
or inappropriate in isolation, but only with reference to an objective
and the constraints that surround it. We don't know enough.
 
ImpalerCore

Mark said:
Quite a lot of people here argue about the use of complex macros, and it
appears that most of those who know the C language very well recommend
against the use of macros except in some trivial cases. I'm trying to
understand the reasoning for this, and what factors I should consider
when deciding whether to use a macro or not.

I often use macros in the case where I want to codify some expression
but have it work over multiple types. The use of macros becomes a
cost-benefit analysis: the lack of type safety and other inherent
problems of macros, against defining code segments that work over
multiple types. For example, consider my macro implementation for
setting a bit in an arbitrary unsigned integer.

\code snippet
/*!
 * \brief Define the integer type used to create a bitmask of zeros with
 *        the least-significant bit enabled.
 *
 * One of the basic constructs used in the \c bits macro library is to
 * create a mask that is all zero except for a single bit at a given
 * index. This is often implemented using the simple expression
 * <tt>1 << index</tt>. This is used to isolate single bits to be set
 * or cleared within an integer.
 *
 * There is a caveat to consider when using this expression. The issue
 * is that the value of \c 1 is evaluated to be of type int. For
 * environments where the integer type is 16-bit, this expression will
 * fail when trying to set a bit with an index >= 16 in a 32-bit long
 * integer. A similar problem arises for 32-bit environments when
 * trying to use \c 1 to set a bit at index >= 32 in a 64-bit integer.
 * If this is an issue, there will typically be a warning that says
 * that the left shift count is larger than the width of the integer
 * type. If this error is found, \c C_BITS_ONE will need to be defined
 * as a larger integer type.
 *
 * The method to increase the width used by the macro library is to
 * specify the type explicitly. This can be done using a stdint.h
 * constant like <tt>UINT64_C(1)</tt> to enable support for 64-bit
 * integers. Keep in mind that increasing the width beyond the default
 * integer type for the processor may incur a performance penalty for
 * all macros.
 *
 * Ideally this should be configured using Autoconf or a Makefile, but
 * for now its location is here. The \c C_BITS_ONE define can be \c 1
 * for \c int sized masks, <tt>UINT32_C(1)</tt> to enable 32-bit support
 * on 16-bit integer platforms, or <tt>UINT64_C(1)</tt> to enable 64-bit
 * support.
 */
#define C_BITS_ONE (UINT32_C(1))

/*!
 * \brief Set a single bit at \c INDEX in \c WORD.
 * \param WORD The word.
 * \param INDEX The bit index.
 *
 * The \c INDEX range is [0,N-1], where N is the number of bits in
 * the \c WORD integer type.
 */
#define C_SET_BIT(WORD, INDEX) ((WORD) |= (C_BITS_ONE << (INDEX)))
\endcode
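
A quick usage sketch (needs <stdint.h>; with C_BITS_ONE as defined
above, masks up to 32 bits wide work):

uint8_t flags8 = 0;
uint32_t mask32 = 0;
C_SET_BIT(flags8, 7);   /* the same macro works for any unsigned type */
C_SET_BIT(mask32, 31);  /* fine, since C_BITS_ONE is UINT32_C(1) */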

If I were to define a function implementation, I would need to
maintain practically duplicate code for a myriad of different types.

void set_bit8( uint8_t* b, unsigned index );
void set_bit16( uint16_t* w, unsigned index );
void set_bit32( uint32_t* dw, unsigned index );
void set_bit64( uint64_t* qw, unsigned index );

I decided to go with macros to implement my bit hacking primarily
because, IMO, the overhead of implementing these functions given the
lack of templates is too high. There is also a very interesting article
(by Scott Meyers, I think) comparing min/max in C++ with the C macro
version.

I also use macros to wrap type casting for generic containers, as I
think it simplifies the syntax of the interface in a way whose benefit
outweighs the cons of using macros. There may be people who disagree,
but I'm OK with that.
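
For example, with a hypothetical container whose accessor returns
void* (c_list_get is made up here purely for illustration):

#define C_LIST_GET(LIST, INDEX, TYPE) \
    (*(TYPE*)c_list_get((LIST), (INDEX)))

int first = C_LIST_GET(list, 0, int);  /* vs. *(int*)c_list_get(list, 0) */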

Consider this example. I'm implementing a serial interface over the GPIO lines
of a microcontroller; the underlying procedure is to put '0' or '1' on pins, and
higher-level functions are built on it. So I considered wrapping the API
dealing with the GPIO registers in macros:

#define SET_BIT0(pin) \
    do { \
        gpio_out(dev, (pin), 0); \
        gpio_outen(dev, (pin), (pin)); \
    while (0)

#define SET_BIT1(pin) \
    do { \
        gpio_out(dev, (pin), (pin)); \
        gpio_outen(dev, (pin), (pin)); \
    while (0)

... and use these macros in the code implementing the guts of the serial
interface (ticking the sync clock, sending commands, acknowledges, and so on).

If I were to look at this, I would say that it doesn't need the type
independence provided by macros, and hence is better off in a function,
inlined if needed and if that's available with your compiler.

What would you say -- is this an appropriate use of macros, or would it be
better/safer to put it in inline routines, and if so, why?

An inline function is my recommendation.

Best regards,
John D.
 
Seebs

Mark said:
Quite a lot of people here argue about the use of complex macros, and it
appears that most of those who know the C language very well recommend
against the use of macros except in some trivial cases. I'm trying to
understand the reasoning for this, and what factors I should consider
when deciding whether to use a macro or not.
Okay.

Consider this example. I'm implementing a serial interface over the GPIO lines
of a microcontroller; the underlying procedure is to put '0' or '1' on pins, and
higher-level functions are built on it. So I considered wrapping the API
dealing with the GPIO registers in macros:
#define SET_BIT0(pin) \
    do { \
        gpio_out(dev, (pin), 0); \
        gpio_outen(dev, (pin), (pin)); \
    while (0)
Okay.

#define SET_BIT1(pin) \
    do { \
        gpio_out(dev, (pin), (pin)); \
        gpio_outen(dev, (pin), (pin)); \
    while (0)

I'm a bit surprised that 0 changes to (pin) rather than to 1 here, but okay.
What would you say -- is this an appropriate use of macros, or would it be
better/safer to put it in inline routines, and if so, why?
Thanks.

I'd probably write these as functions without even thinking about it. I
could easily imagine someone writing:

    pin = first;
    while (pin < last)
        SET_BIT0(pin++);
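
That would expand (with the intended closing brace) to something like:

    pin = first;
    while (pin < last)
        do {
            gpio_out(dev, (pin++), 0);
            gpio_outen(dev, (pin++), (pin++));
        } while (0);

so pin gets bumped three times per pass, and the two increments in the
gpio_outen call aren't even sequenced with respect to each other.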

Note that the macros as provided have at least one error ({ without }).

-s
 
James Dow Allen

... recommend against the use of macros except in some trivial cases.

I once used a device whose programming took the form

    SPE[0] = command;
    SPE[1] = datum;

for writes, and

    SPE[0] = command;
    datum = SPE[1];

for reads.

I wrote a macro, something like

    #define CTRLR(command) \
        (*(SPE[0] = command, &SPE[1]))

so that writes and reads would have ordinary forms:

    CTRLR(command) = datum;

or

    datum = CTRLR(command);


What do c.l.c'ers think of that?
Bad? Very bad? Horrifying?

James Dow Allen
 
Nick

James Dow Allen said:
... recommend against the use of macros except in some trivial cases.

I once used a device whose programming took the form

    SPE[0] = command;
    SPE[1] = datum;

for writes, and

    SPE[0] = command;
    datum = SPE[1];

for reads.

I wrote a macro, something like

    #define CTRLR(command) \
        (*(SPE[0] = command, &SPE[1]))

so that writes and reads would have ordinary forms:

    CTRLR(command) = datum;

or

    datum = CTRLR(command);


What do c.l.c'ers think of that?
Bad? Very bad? Horrifying?

In that it plays with the language's syntax, it ought to be frowned upon.
But with a comment along the lines of this post attached to the macro
definition, I actually quite like it.
 
Ian Collins

James Dow Allen said:
... recommend against the use of macros except in some trivial cases.

I once used a device whose programming took the form

    SPE[0] = command;
    SPE[1] = datum;

for writes, and

    SPE[0] = command;
    datum = SPE[1];

for reads.

I wrote a macro, something like

    #define CTRLR(command) \
        (*(SPE[0] = command, &SPE[1]))

so that writes and reads would have ordinary forms:

    CTRLR(command) = datum;

or

    datum = CTRLR(command);


What do c.l.c'ers think of that?
Bad? Very bad? Horrifying?

Nice one. It's the sort of trick I often use in C++ (although not with
a macro) but it's the first time I've seen something like it in C.
 
Stargazer

Mark said:
Quite a lot of people here argue about the use of complex macros, and it
appears that most of those who know the C language very well recommend
against the use of macros except in some trivial cases. I'm trying to
understand the reasoning for this, and what factors I should consider
when deciding whether to use a macro or not.

Consider this example. I'm implementing a serial interface over the GPIO lines
of a microcontroller; the underlying procedure is to put '0' or '1' on pins, and
higher-level functions are built on it. So I considered wrapping the API
dealing with the GPIO registers in macros:

#define SET_BIT0(pin) \
    do { \
        gpio_out(dev, (pin), 0); \
        gpio_outen(dev, (pin), (pin)); \
    while (0)

#define SET_BIT1(pin) \
    do { \
        gpio_out(dev, (pin), (pin)); \
        gpio_outen(dev, (pin), (pin)); \
    while (0)

... and use these macros in the code implementing the guts of the serial
interface (ticking the sync clock, sending commands, acknowledges, and so on).

What would you say -- is this an appropriate use of macros, or would it be
better/safer to put it in inline routines, and if so, why?
Thanks.

The reason for macros is to make the code that uses them simpler /
more convenient than it would otherwise be. Your particular example is
probably bad because (besides the obvious missing '}') it
unnecessarily complicates the interface. Having two separate macros for
doing the same thing - setting a GPIO bit to a value - is bad, in my
opinion. If you use macros at all (what is wrong with the original
function gpio_out(), which is called anyway?), then something like
`#define GPIO_SET_BIT(pin, value) gpio_out(dev, (pin), ((value) & 1) <<
(pin))' is better (assuming that "pin" is a bit-mask of the appropriate
GPIO pin, (1 << pin_number)).

Macros are processed by the preprocessor, which does only simple text
substitution; no language syntax analysis takes place. That is, if you
make a syntactic mistake (such as the missing '}'), it will be pasted all
over the code, resulting in as many compiler errors as there are
macro expansions.
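
For example, each use of the posted macro, such as

    SET_BIT0(3);

is textually replaced by

    do { gpio_out(dev, (3), 0); gpio_outen(dev, (3), (3)); while (0);

and every one of those unterminated do-blocks triggers its own compile
error, usually reported some distance past the place where the macro
was used.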

Also, macros should only wrap tested code, as they make functional
debugging harder. Your code is functionally suspicious, too - GPIO
enables and direction should be set correctly before using the port;
also, there is usually no need to change the "enable" assignments after
system init (certainly not with every GPIO output).

Daniel
 
spinoza1111

Seebs said:
I'm a bit surprised that 0 changes to (pin) rather than to 1 here, but okay.

I'd probably write these as functions without even thinking about it. I
could easily imagine someone writing:

        pin = first;
        while (pin < last)
                SET_BIT0(pin++);

Why am I not surprised that you don't like thinking?

Those of us who've studied computer science would know that macro
parameters are evaluated by name and that each time the formal
parameter name is used, the increment may occur depending on your
sequence points mess. Therefore, we pass simple names to macros.

Unlike you, we like to Think. You're the Don't Make Me Think
generation, kid.

Whereas hackers know only specific cases, like the lawyer in Bleak
House who prides himself on reading only cases connected with Jarndyce
v Jarndyce. For this reason, they live in a world where shit just
happens, like the BP disaster.
 
Rob Kendrick

spinoza1111 said:
Those of us who've studied computer science would know that macro
parameters are evaluated by name and that each time the formal
parameter name is used, the increment may occur depending on your
sequence points mess.

Do you have any actual arguments other than this one to throw at
Seebs? Because this one's clearly faulty, given that I've never studied
computer science in any official sense, and I know this. And I also
know people who have studied computer science officially who don't.

B.
 
Seebs

On Sun, 30 May 2010 06:29:08 -0700 (PDT), Rob Kendrick said:
Do you have any actual arguments other than this one to throw at
Seebs? Because this one's clearly faulty, given that I've never studied
computer science in any official sense, and I know this. And I also
know people who have studied computer science officially who don't.

All he's doing is posting ever more outrageous nonsense to try to get the
responses which validate him.

Computer science has a lot to do with general principles of programming,
and very little to do with the details of a given language. As always,
anything that looks like a technical detail in a Nilges post is probably
wrong; in this case, we have the lovely "evaluated by name", which
is probably wrong, but may be insufficiently coherent to be considered
"wrong", on the grounds that it may not be stating a claim which could
be considered right or wrong at all.

It has nothing at all to do with whether or not the formal parameter name is
used, and everything to do with whether the resulting code evaluates an
expression repeatedly, and if so, how often.

-s
 
Mark

Thanks to everyone who contributed to this topic; the concept of judicious
use of macros is now clear to me.
 
