Quite a lot of people here argue about the use of complex macros, and it
appears that most of those who know the C language very well recommend
against using macros, except in some trivial cases. I'm trying to
understand the reasoning behind this, and what factors I should consider
when deciding whether or not to use a macro.
I often use macros when I want to codify some expression but have it
work over multiple types. The use of macros then becomes a cost-benefit
analysis: the lack of type safety and other inherent problems of macros,
weighed against maintaining duplicate code segments for each type. For
example, consider my macro implementation for setting a bit in an
arbitrary unsigned integer.
\code snippet
/*!
 * \brief Define the integer type used to create a bitmask of zeros with
 * the least-significant bit enabled.
 *
 * One of the basic constructs used in the \c bits macro library is to
 * create a mask that is all zero except for a single bit at a given
 * index. This is often implemented using the simple expression
 * <tt>1 << index</tt>. This is used to isolate single bits to be set
 * or cleared within an integer.
 *
 * There is a caveat to consider when using this expression. The issue
 * is that the value of \c 1 is evaluated to be of type \c int. In
 * environments where the integer type is 16-bit, this expression will
 * fail when trying to set a bit with an index >= 16 in a 32-bit long
 * integer. A similar problem arises for 32-bit environments when
 * trying to use \c 1 to set a bit at index >= 32 in a 64-bit integer.
 * If this is an issue, there will typically be a warning that says
 * that the left shift count is larger than the width of the integer
 * type. If this warning appears, \c C_BITS_ONE will need to be defined
 * as a larger integer type.
 *
 * The method to increase the width used by the macro library is to
 * specify the type explicitly. This can be done using a stdint.h
 * constant like <tt>UINT64_C(1)</tt> to enable support for 64-bit
 * integers. Keep in mind that increasing the width beyond the default
 * integer type for the processor may incur a performance penalty for
 * all macros.
 *
 * Ideally this should be configured using Autoconf or a Makefile, but
 * for now its location is here. The \c C_BITS_ONE define can be \c 1
 * for \c int sized masks, <tt>UINT32_C(1)</tt> to enable 32-bit support
 * on 16-bit integer platforms, or <tt>UINT64_C(1)</tt> to enable 64-bit
 * support.
 */
#define C_BITS_ONE (UINT32_C(1))

/*!
 * \brief Set a single bit at \c INDEX in \c WORD.
 * \param WORD The word.
 * \param INDEX The bit index.
 *
 * The \c INDEX range is [0, N-1] where N is the number of bits
 * of the \c WORD integer type.
 */
#define C_SET_BIT(WORD, INDEX) ((WORD) |= (C_BITS_ONE << (INDEX)))
\endcode
If I were to define a function implementation, I would need to
maintain practically duplicate code for a myriad of different types:
void set_bit8( uint8_t* b, unsigned index );
void set_bit16( uint16_t* w, unsigned index );
void set_bit32( uint32_t* dw, unsigned index );
void set_bit64( uint64_t* qw, unsigned index );
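To make the duplication concrete, here is a sketch of what two of those function bodies look like; the remaining widths are near-verbatim copies, which is exactly the maintenance cost being weighed against the macro:

```c
#include <stdint.h>

/* Two widths' worth of the function-based approach; uint16_t and
 * uint64_t versions would repeat the same body with a different type. */
static inline void set_bit8(uint8_t *b, unsigned index)
{
    *b |= (uint8_t)(UINT8_C(1) << index);
}

static inline void set_bit32(uint32_t *dw, unsigned index)
{
    *dw |= UINT32_C(1) << index;
}
```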
I decided to go with macros to implement my bit hacking primarily
because, in my opinion, the overhead of implementing these functions
without templates is too high. There is also a very interesting article
(I think by Scott Meyers) comparing min/max in C++ with the C macro
version.
I also use macros to wrap type casting for generic containers, as I
think the simplification of the interface syntax outweighs the cons of
using macros. There may be people who disagree, but I'm ok with that.
Consider this example. I'm implementing a serial interface over the GPIO
lines of a microcontroller; the underlying procedure is to put '0' or '1'
on the pins, and the higher-level functions are built on top of it. So I
considered wrapping the API that deals with the GPIO registers in macros:
#define SET_BIT0(pin) \
do { \
gpio_out(dev, (pin), 0); \
gpio_outen(dev, (pin), (pin)); \
} while (0)
#define SET_BIT1(pin) \
do { \
gpio_out(dev, (pin), (pin)); \
gpio_outen(dev, (pin), (pin)); \
} while (0)
... and use these macros in the code implementing the guts of the serial
interface (ticking the sync clock, sending commands, acknowledges, and so on).
If I were to look at this, I would say that it doesn't need the type
independence provided by macros, and hence is better off as a function,
inlined if needed and if inlining is available on your compiler.
What would you say: is this an appropriate use of macros, or would it be
better/safer to put it in inline routines, and if so, why?
An inline function is my recommendation.
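As a sketch of that recommendation: the two macros collapse into one type-checked function, and the `dev` handle becomes an explicit parameter instead of a name silently captured from the caller's scope. The `gpio_out`/`gpio_outen` stubs below are assumptions standing in for the real register API, just to make the example compile:

```c
/* Stubs standing in for the real GPIO register API; the actual
 * signatures are assumptions based on the macros above. */
static unsigned out_reg, outen_reg;

static void gpio_out(int dev, unsigned pin, unsigned val)
{
    (void)dev;
    out_reg = (out_reg & ~pin) | val;
}

static void gpio_outen(int dev, unsigned pin, unsigned val)
{
    (void)dev;
    outen_reg = (outen_reg & ~pin) | val;
}

/* Inline replacement for SET_BIT0/SET_BIT1: arguments are
 * type-checked, and 'dev' is passed in rather than picked up
 * implicitly from the surrounding scope. */
static inline void set_pin(int dev, unsigned pin, int high)
{
    gpio_out(dev, pin, high ? pin : 0);
    gpio_outen(dev, pin, pin);
}
```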
Best regards,
John D.