Named parameters

Clever Monkey

Frederick said:
Clever Monkey posted:


I have great disdain for advice such as that.

I prefer to use my own head, make my own decisions. For instance:
No matter how many times I see a master programmer use "i++", I'm
still going to use "++i". No matter how much I see people use macros in
C, I'm always going to use a viable substitute where possible (e.g.
"enum" or "typedef").

In all walks of life, I detest the kind of people who blindly follow
doctrine, rather than "do things their own way", using their own
intelligence.
[Totally, completely and utterly OT]

*shrug* Not quite what I was saying, but I guess you are free to
extrapolate as far as you like. While I actually agree with your
comments, to a point, there is a large continuum between "listening and
evaluating" and "blindly following". I never advocated taking this
advice to any such extreme.

The rebuttal to the "do your own thang" comment you are making here is
that doing your own thing under well thought out but incorrect
assumptions leads only to tears and recriminations. Accepting and
evaluating advice from others is an excellent way to put some checks
and balances against that other kind of knowledge.

Intelligence also includes listening to what other people say and
watching what they do, and then evaluating that using some common sense
and (as one becomes more adept) experience. The trivial case, of
course, is that junior coders don't have, you know, much experience.

Finding someone who knows what they are doing and sticking to them in
the first few years is actually quite good advice for many people. This
does not mean starting a new religion based on their interpretation of
all things, of course.

The context in this case was when I was first starting at a particular
company. The advice covered a lot of things I was able to internalize
easily, since I was able to evaluate it within the context of a junior
programmer. It told me, among other things, that our coding style also
included things like keeping our C code looking like C.

This is important given that few of the new hires we get know anything about
ISO C anymore. One of the most important things to learn early on is
that coding is a community effort at most shops I've worked at, and
learning from one of the old-timers how they like their code styled and
formed is pretty important if you don't want your changes rejected by
reviewers.

There is room for maverick thinking and at least some requirement for
consensus. Truly intelligent people should have no problem with trying
to find that balance.
 
Mark McIntyre

I prefer to use my own head, make my own decisions.

This is good advice.
For instance:
No matter how many times I see a master programmer use "i++", I'm
still going to use "++i". No matter how much I see people use macros in
C, I'm always going to use a viable substitute where possible (e.g.
"enum" or "typedef").

However, this goes entirely against what you seem to be saying.
In all walks of life, I detest the kind of people who blindly follow
doctrine,

What you're doing above is, however, blind following of doctrine. In
C, there's no general reason to prefer typedefs over macros,

It's quite definitely a bad thing to bring what is efficient in one
language and unthinkingly apply it in another. A perfect example
would be the first edition of Numerical Recipes in C.
Anyway, the moral of the
story is that I wouldn't hesitate for a second to do things my own way,

hopefully however, within the constraints of house style and the
results of peer review. Trust me, anyone who ignores those becomes
unemployable quite fast.
In fact, I challenge them to write code as efficient as mine.

Hubris is also likely to affect employment prospects!
--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
 
Keith Thompson

Mark McIntyre said:
This is good advice.

As long as those decisions are based on something reasonable,
*including* the advice of experts.
However, this goes entirely against what you seem to be saying.


What you're doing above is, however, blind following of doctrine. In
C, there's no general reason to prefer typedefs over macros,

Really?

Can you think of any instance where a typedef and a macro could do the
same thing, and a macro would be preferable?

For example, given a choice between
#define BYTE unsigned char
and
typedef unsigned char BYTE;
I can think of no reason to prefer the macro, and a number of reasons
to prefer the typedef.
 
Frederick Gotham

Mark McIntyre posted:
In C, there's no general reason to prefer typedefs over macros


typedef unsigned Digit;   /* file-scope alias */

int SomeFuncFromLibrary(void)
{
    typedef float Digit;  /* block-scope alias; shadows the file-scope one */

    /* More code */
}


Macros may be more despised in C++ because it has namespaces, but
nonetheless, even in C, I'll prefer to use an alternative.
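
(To make the example's point explicit, a sketch of mine; only the name
Digit above is Frederick's. Had Digit been a macro rather than a
typedef, the function-local typedef could not shadow it; it would be
textually mangled by the preprocessor:)

#define Digit unsigned          /* suppose the file-scope alias were a macro */

int SomeFuncFromLibrary(void)
{
    typedef float Digit;        /* preprocesses to "typedef float unsigned;"
                                   -- a syntax error */
    return 0;
}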
 
pete

Keith said:
As long as those decisions are based on something reasonable,
*including* the advice of experts.

In K&R, it's usually ++i, but sometimes i++.
Really?

Can you think of any instance where a typedef and a macro could do the
same thing, and a macro would be preferable?

For example, given a choice between
#define BYTE unsigned char
and
typedef unsigned char BYTE;
I can think of no reason to prefer the macro, and a number of reasons
to prefer the typedef.

Sometimes I use both.
I use the macro, in case I want to stringise the type,
and then I also make a typedef from the macro.

If I don't want to stringise the type,
then I just use a typedef.
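
(A minimal sketch of pete's combination; the names BYTE_TYPE, STR and
XSTR are mine, not his:)

#include <stdio.h>

#define BYTE_TYPE unsigned char     /* macro, so the type can be stringised */
typedef BYTE_TYPE BYTE;             /* typedef derived from the macro */

#define STR(x)  #x                  /* stringises its argument as written */
#define XSTR(x) STR(x)              /* extra level so BYTE_TYPE expands first */

int main(void)
{
    BYTE b = 255;
    printf("BYTE is an alias for: %s\n", XSTR(BYTE_TYPE));
    printf("b = %u\n", (unsigned)b);
    return 0;
}

Without the two-level trick, STR(BYTE_TYPE) would yield the string
"BYTE_TYPE" rather than "unsigned char".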
 
Bill Pursell

Frederick said:
Mark McIntyre posted:



typedef unsigned Digit;


int SomeFuncFromLibrary(void)
{
typedef float Digit;

/* More code */
}


Macros may be more despised in C++ because it has namespaces, but
nonetheless, even in C, I'll prefer to use an alternative.

I'm not understanding your point. The library function is
going to be compiled separately, and if Digit is a macro it
will be defined appropriately at the time SomeFuncFromLibrary
is compiled.
 
Mark McIntyre


Yes.
Can you think of any instance where a typedef and a macro could do the
same thing, and a macro would be preferable?

No, but then if you read what I actually wrote, you will see that I
never said there would be.

(of an example where typedef might make more sense)
I can think of no reason to prefer the macro,

I never said there would be, if you care to read what I actually
wrote.
and a number of reasons to prefer the typedef.

And are those reasons general?


--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
 
Mark McIntyre

Mark McIntyre posted:



typedef unsigned Digit;


int SomeFuncFromLibrary(void)
{
typedef float Digit;

Yack.

If you want to write unmaintainable dreck, you can use typedefs all
you like. I recall at least one major compiler writer does this.
Macros may be more despised in C++ because it has namespaces, but
nonetheless, even in C, I'll prefer to use an alternative.

*shrug*.

Feel free to find a way to define MAX_SIZE with a typedef:
double thing[MAX_SIZE];
--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
 
Keith Thompson

Mark McIntyre said:
Yes.


No, but then if you read what I actually wrote, you will see that I
never said there would be.

(of an example where typedef might make more sense)


I never said there would be, if you care to read what I actually
wrote.


And are those reasons general?

IMHO, yes. In any case where either a macro or a typedef will serve
the purpose (I call that "general"), I find a typedef preferable.
 
Keith Thompson

Mark McIntyre said:
On Thu, 22 Jun 2006 23:24:12 GMT, in comp.lang.c, Frederick Gotham
Macros may be more despised in C++ because it has namespaces, but
nonetheless, even in C, I'll prefer to use an alternative.

*shrug*.

Feel free to find a way to define MAX_SIZE with a typedef:
double thing[MAX_SIZE];

Obviously a typedef can only define an alias for a type, and since
MAX_SIZE isn't an alias for a type, it can't be defined using a
typedef. (You could of course define a typedef for thing[MAX_SIZE],
but that's neither relevant nor a good idea.)

Sometimes there's no good alternative to a macro; in such cases, of
course, a macro is the thing to use. Defining an alias for a type
isn't one of those cases.
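
(Concretely, a sketch with an assumed bound of 100: the array size must
be a constant expression, which the macro supplies directly; for
integer constants, the enum trick is the one non-macro alternative in C:)

#define MAX_SIZE 100          /* the usual macro approach */
double thing[MAX_SIZE];

enum { max_count = 100 };     /* non-macro route to an integer constant
                                 expression */
double other_thing[max_count];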
 
Mark McIntyre

In any case where either a macro or a typedef will serve
the purpose (I call that "general"), I find a typedef preferable.

That's fine. I don't, generally, since it buys me nothing much, but
either works.

I think we need to go back to the original posting to see my point,
which is that the OP made two contradictory statements "I use my own
head" and "I always prefer typedefs and enums". There's no generically
good reason to prefer enum and typedef over macros since they have
different purposes.
--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
 
Keith Thompson

Mark McIntyre said:
That's fine. I don't, generally, since it buys me nothing much, but
either works.

So you find
typedef unsigned int widget_count;
and
#define widget_count unsigned int
equally acceptable? (Or use WIDGET_COUNT for the macro if you
prefer.) I strongly prefer the typedef; my reaction to the macro is
"Why on Earth didn't you use a typedef?". I suspect most good C
programmers would react the same way.

I won't (continue to) try to change your mind on this point; I'm just
establishing that this is the point on which we disagree, and which
might have led to a misunderstanding.
I think we need to go back to the original posting to see my point,
which is that the OP made two contradictory statements "I use my own
head" and "I always prefer typedefs and enums". There's no generically
good reason to prefer enum and typedef over macros since they have
different purposes.

It may have been poorly stated. I didn't have any trouble
interpreting the OP's statement, since I agree with what I *thought*
it meant, namely:

In those cases where either a typedef or a macro could serve the
same purpose, a typedef is preferred. (Likewise for enum
vs. macro.)

The case for using an enum:
enum { FOO = 42 };
vs. a macro:
#define FOO 42
isn't as strong IMHO, but I personally like the enum trick in the
limited cases where it can be used.
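
(One of those limits, illustrated: enumeration constants must have
values representable as int, so non-integral constants still need a
macro, or a const object where a constant expression isn't required:)

enum { FOO = 42 };            /* fine: int-valued, and it obeys scope */

#define SCALE 2.5             /* no enum equivalent for a floating value */
const double scale = 2.5;     /* not a constant expression in C, so it
                                 can't size an array, but fine elsewhere */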

It simply didn't occur to me that the OP was talking about preferring
typedefs to macros in cases where only a macro would make sense.

(I use my own head as well, but I don't make a big deal about it;
other people's heads tend not to be conveniently placed for my use.)
 
ena8t8si

pete said:
In K&R, it's usually ++i, but sometimes i++.

In my copy of K&R (1978, third printing), the postfix form
is definitely used more than the prefix form (in cases
where there is no semantic difference).
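
(For readers following the ++i/i++ thread, a minimal illustration of
when the choice matters; the variable names are invented:)

#include <stdio.h>

int main(void)
{
    int i = 0;
    int j, k;

    ++i;          /* as standalone statements, these two have */
    i++;          /* exactly the same effect */

    j = ++i;      /* j gets the value after the increment: 3 */
    k = i++;      /* k gets the value before the increment: 3 (i becomes 4) */

    printf("i=%d j=%d k=%d\n", i, j, k);    /* prints i=4 j=3 k=3 */
    return 0;
}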
 
Mark McIntyre

So you find
typedef unsigned int widget_count;
and
#define widget_count unsigned int
equally acceptable?

Generally I'd not use either. Why call a spade an earth-inverting
implement? If I want an unsigned int, I'll use one.
I suspect most good C programmers would react the same way.

Thanks for the insult. :)
establishing that this is the point on which we disagree,

I'm not sure we do, I just think you haven't read what I've written,
for reasons which there's no benefit to listing.
It may have been poorly stated. I didn't have any trouble
interpreting the OP's statement, since I agree with what I *thought*
it meant,

Sure, I knew what he probably meant too. However it seemed to me that
he was also saying "I have my own prejudices, and I intend to stick to
them" whilst simultaneously saying "its important not to accept things
blindly". These seem quite contradictory to me.
In those cases where either a typedef or a macro could serve the
same purpose, a typedef is preferred. (Likewise for enum
vs. macro.)

But why?

earlier in your post (in a part I snipped for brevity) you said
I strongly prefer the typedef; and
I personally like the enum trick

but you provided no reasons. My point, poorly stated though it might
have been, is that such decisions should have some justification, not
simply be prejudices.

Personally I try to avoid tricks like this because they can hide
meaning - can one tell the actual type of PHGLOBAL just by looking at
it? Is it compatible with pointer to DWORD? What is the underlying
type of LPCTSTR?
(I use my own head as well, but I don't make a big deal about it;
other people's heads tend not to be conveniently placed for my use.)

Usenet is a wonderful thing.

--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
 
Joe Wright

Mark said:
Mark McIntyre posted:


typedef unsigned Digit;


int SomeFuncFromLibrary(void)
{
typedef float Digit;

Yack.

If you want to write unmaintainable dreck, you can use typedefs all
you like. I recall at least one major compiler writer does this.
Macros may be more despised in C++ because it has namespaces, but
nonetheless, even in C, I'll prefer to use an alternative.

*shrug*.

Feel free to find a way to define MAX_SIZE with a typedef:
double thing[MAX_SIZE];

No, you find it. There is no similarity between #define macros and
typedef declarations. The first is simply text replacement. The second
is type aliasing. You can't compare the two rationally.
 
Keith Thompson

Mark McIntyre said:
Generally I'd not use either. Why call a spade an earth inverting
implement? If I want an unsigned int, I'll use one.

Sure, sometimes it makes sense to use "unsigned int" directly, but
making an alias for it can be useful if there's a possibility you
might want to change the underlying type (to unsigned short, say)
later on. It's also useful for documentation; if a variable is
declared as a widget_count, it's obvious that it's intended for
counting widgets, not for holding bitmasks.

At times, using raw predefined types like unsigned int can be like
using magic numbers. If I want the number 42, I can just use 42 --
but I'd much rather use a declared constant of some sort.
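
(A sketch of that documentation value; the names are invented:)

typedef unsigned int widget_count;

widget_count widgets_in_stock = 0;    /* obviously a count of widgets */
unsigned int status_flags = 0x0007u;  /* same raw type, different intent */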
Thanks for the insult. :)

No insult intended; I said "most".
I'm not sure we do, I just think you haven't read what I've written,
for reasons which there's no benefit to listing.

I've read what you've written, I just don't entirely understand it.
Based on our past discussions, I doubt that I ever will. I can live
with that.
Sure, I knew what he probably meant too. However it seemed to me that
he was also saying "I have my own prejudices, and I intend to stick to
them" whilst simultaneously saying "its important not to accept things
blindly". These seem quite contradictory to me.

I won't try to speak for the OP; I found his "I use my own head"
statement a bit silly, personally. For myself, I do make my own
decisions for my own reasons, usually (I like to think) ones that are
reasonably well thought out. I prefer to avoid macros where they're
not necessary. This has nothing to do with accepting anything
blindly; it has everything to do with doing things for good reasons.

If I choose to use something other than a macro, it's not necessarily
because I've thought through the consequences of using a macro in that
particular case. It's because I've found that avoiding macros except
when they're necessary is a good rule of thumb, and following it saves
me the time and effort of exploring all the possible consequences.
Macros are tricky in ways that typedefs are not.

A number of reasons. The typedef name is more likely to be visible in
a debugger and in compiler diagnostics. Typedefs have scope. If I
wanted an alias for "int*", a macro wouldn't work:

#define INT_PTR int*
INT_PTR x, y;     /* expands to "int* x, y;" -- y is an int, not an int* */

Note that I almost certainly wouldn't define either a typedef or a
macro for a pointer type, but using only typedefs means I don't have
to worry about any of the potential problems introduced by macro
expansion.
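
(For contrast, the typedef version behaves as intended; int_ptr is an
illustrative name:)

typedef int *int_ptr;

int_ptr x, y;     /* both x and y are int * -- no macro-expansion surprise */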
earlier in your post (in a part I snipped for brevity) you said


but you provided no reasons. My point, poorly stated though it might
have been, is that such decisions should have some justification, not
simply be prejudices.

There's plenty of justification; I thought the reasons for preferring
typedefs over macros were sufficiently obvious. In case they're not,
I've stated some of them here.
 
Mark McIntyre

Mark said:
Feel free to find a way to define MAX_SIZE with a typedef:
double thing[MAX_SIZE];

No, you find it. There is no similarity between #define macros and
typedef declarations. The first is simply text replacement. The second
is type aliasing. You can't compare the two rationally.

Indeed. Thanks for making my point.

--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
 
Mark McIntyre

There's plenty of justification; I thought the reasons for preferring
typedefs over macros were sufficiently obvious. In case they're not,
I've stated some of them here.

Thank you. I'll read it and bear it in mind for the future.

--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
 
