Pass-by-reference instead of pass-by-pointer = a bad idea?

D

David White

John said:
Really? When I try this, the code compiles and the program crashes
when run.

Precisely. Isn't a crash good evidence that strlen doesn't permit passing
a null pointer?
Of course, functions can check for null pointers (though
this won't stop compilation) but at least my copy of strlen
apparently doesn't.

Neither does mine. I made sure it crashed before using it as an example.
After all, there's no sensible length it can return for a null pointer.
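A minimal program along those lines (a sketch; the call compiles, but the behavior is undefined and on typical implementations it crashes):

#include <cstring>

int main()
{
    const char* s = 0;      // null pointer
    return std::strlen(s);  // compiles fine, but undefined behavior at run time
}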
void fooptr(int * ptr);
void fooptr(const int * ptr);

vs

void fooref(int & ref);
void fooref(const int & ref);

int main()
{
    int x;
    fooptr(&x);
    fooref(x);
}

What do you know from the fooptr(&x) call that you don't know from the
fooref(x) call or vice versa?

Nothing. My point is that passing by value is very common, so the occasional
non-const reference doesn't stand out at all.
The only time you can know something from a function call is if you
*never* use references.

No, if you never use non-const references.
In that case, when you pass by value you know
that the value won't be changed. If you pass by pointer, you still
don't know either way.

That doesn't matter. You know it _might_ be changed.
The situation in which you never use
references is of course when programming in C. I think this
preference for pointers is just a hangover from C.

That's probably partly true, but I don't think it helps to deny the
existence of thousands of programmers who are more familiar with C than C++,
and for whom pass-by-reference might easily be assumed to be pass-by-value.
The real point is surely that when you call a function you are
supposed to know what it does.

You figure out what you want done and
you call a function that is documented to do it. You don't call
functions because the function call "looks" like it will do what you
want.

I might not have called the function. I might be reading someone else's
code.

As it happens, I'm not as opposed to passing by non-const reference as it
might appear. I still think it's probably a good idea to avoid it, but in
practice it's probably rare that a programmer won't already know what a
function does, or won't be able to determine a function's actions from its
name, so it's unlikely to be a significant cause of misunderstanding. I
mentioned it mainly as something to consider.

DW
 
R

Richard Herring

E. Robert Tisdale said:
E. Robert Tisdale wrote:
[...]
Second, if your function must modify an object,
you should return a reference or pointer to that object:
// Object& func(const Object& object) {
Object& func(Object& object) {
    // modify object
    return object;
}

or

// Object* func(const Object* object) {
Object* func(Object* object) {
    // modify *object
    return object;
}

instead of void

Ugh. I'd be more inclined to say "if your function modifies an object,
it MUST NOT return a reference to that object." It just invites the
reader to misinterpret what's going on. If you write

f(a);
g(a);

it's obvious that the functions are not being called for their return
values, and therefore some side effect is intended.

If you write

g(f(a));

or even

x = f(a);

it's not obvious to the reader that "a" is in effect used *twice*, even
though it only appears once.
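A sketch of the trap, using a hypothetical helper that both modifies its argument and returns a reference to it:

#include <cctype>
#include <iostream>
#include <string>

// Hypothetical: uppercases its argument in place AND returns a reference to it.
std::string& shout(std::string& s)
{
    for (std::string::size_type i = 0; i < s.size(); ++i)
        s[i] = static_cast<char>(std::toupper(static_cast<unsigned char>(s[i])));
    return s;
}

int main()
{
    std::string name = "bob";
    std::string copy = shout(name); // reads like a pure query...
    std::cout << name << '\n';      // ...but prints BOB: name was changed too
}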

A function which does a single thing cleanly is usually preferable to
one which does two things, or which does the single thing obscurely.

Which is only a benefit to authors of obfuscated code.
 
S

Steven T. Hatton

Larry said:
No, it is because the stupid programmer is invoking undefined
behavior by deref'ing a null pointer; it has nothing to do with
func().

We are talking about compile time errors, not runtime. The examples are all
detectable violations of the language rules. In the first two cases you
are attempting to initialize a non-const reference with a temporary, and
in the third case you are trying to initialize a reference to a type with a
pointer to the type. That's a type mismatch. None of these has anything
to do with the value of the argument passed.
 
S

Steven T. Hatton

True, but that does not contradict what I said.

§4.10
"A null pointer constant is an integral constant expression (5.19)
rvalue of integer type that evaluates to zero. A null pointer constant
can be converted to a pointer type; the result is the null pointer
value of that type..."

As I pointed out, the null pointer constant is not the same thing as a
null pointer so I was agreeing with you.

This is, perhaps, pedantic, but 0 and NULL are integer literals. They are
replaced by the compiler with (instructions to generate) temporary integer
values which are used to initialize the lvalues to which they are assigned.
A null pointer constant would be something like `const int null_ptr = 0'.
In that case null_ptr would live throughout the life of the scope in which
it is defined. And that probably means global in this case.

Using null_ptr instead of NULL is also an error, but it is a different kind
of error. In the case of NULL, the error is attempting to initialize a
non-const reference with a temporary. In the case of null_ptr the error is
attempting to initialize a non-const reference with a const object. In the
former, you are trying to establish a persistent means of accessing
something which is transient. In the latter, you are trying to establish a
means of accessing and modifying something that is constant.
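A minimal sketch of both cases (each marked line is rejected at compile time):

void func(int& r);

int main()
{
    func(0);                 // error: non-const reference bound to a temporary
    const int null_ptr = 0;
    func(null_ptr);          // error: non-const reference bound to a const object
}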
Undefined simply means you've left the world of standard C++ and
your program could do anything. As some on Usenet are fond of saying,
blue demons could fly out of your nose.

You enter the land of undefined behavior as soon as you dereference
a null pointer. How and when that manifests in some particular way
in a particular case is... well... undefined.

Exactly. It's the C++ analog to proving 1 = 0 in mathematics, or true =
false in formal logic.
A segfault is one possible consequence of doing something that causes
undefined behavior.

I am aware that the Standard does not dictate that a segfault will happen.
There are times when undefined behavior is identical to what would happen
if the statement resulting in undefined behavior did not appear in the
program. A crash is actually more desirable than the alternative of having
the program continue to run with some subtle undefined behavior lurking in
it. For example, the program may run perfectly for almost all input, but
when it encounters a field beyond a certain length, it may truncate, or
overwrite the end of the field with garbage.
An lvalue is just an expression that refers to an object or function
(§3.10) and an object occupies storage. A local variable of type int
is an lvalue, for example. This says nothing about how a C++
implementation might choose to represent pointers and references
under the hood.

But to say that something occupies storage basically means it has an address
of some kind. I don't care if you want to say an lvalue is a box in a
warehouse. There will still be some way of referring to it by means of
coordinates.

Very well, the result of dereferencing a null pointer is undefined.

This is probably a more important distinction than it may seem. Undefined
behavior may appear anywhere, and at any time in the execution of the
program after the statement producing it has been encountered.
You said "AFAIK, you *can* pass a null to func() in your example",
and the example in question declared func as taking a parameter of
reference type.

I believe that was in response to the example using the integer literal 0 as
the argument to a function taking a non-const reference as a parameter. My
point was that you /can/ dereference a pointer and use it as an argument in
that situation. That code will compile, and the pointer being dereferenced
may be null when the program is executed. I had left it as implied that
this is probably not what the programmer would want.

Technically, the null pointer is passed to the indirection expression
appearing in the argument list, and the (result of evaluating the)
indirection expression is passed to func(). So what I really meant by
'passing a null' is that a null pointer can appear in the indirection
expression passed as an argument to func() without a compiler error being
produced.
At a practical level, I think I understand your general point that
using references instead of pointers does not really protect you
against undefined behavior. Undefined behavior could still occur
if you form a reference by dereferencing a null pointer, return
a reference to a local object, etc.

My point in the particular instance was that the examples presented by the
OP were not addressing the issues they were intended to address.
However, my response would be that it's the responsibility of
the code that initializes the reference (e.g., by dereferencing
a pointer in your example) to make sure the object actually exists.

As a general rule, I don't believe that's possible. Firstly, you don't want
to dereference a null pointer; you want to compare it to 0, or compare the
result of a dynamic_cast on it to 0. That protects against dereferencing a
null pointer. Unfortunately, you cannot depend on the result of a
dynamic_cast to determine that an object is valid. A dynamic_cast will
only tell you if the operand "claims to be" valid. For example, after
calling delete on an object of type Foo, the memory holding Foo may
(probably will) still hold exactly what it held before the delete.
dynamic_casting a pointer to that memory location will result in returning
a pointer to type Foo with the address of the deleted object. The object
is, by definition, invalid but may well behave as if it were valid.
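A sketch of that hazard (hypothetical types; the cast on the dangling pointer is itself undefined behavior):

struct Foo { virtual ~Foo() {} };
struct Bar : public Foo {};

int main()
{
    Foo* f = new Bar;
    delete f;
    // Undefined behavior: f now dangles, but the freed memory may still
    // "look like" a Bar, so this cast may well appear to succeed.
    Bar* b = dynamic_cast<Bar*>(f);
    (void)b;
}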
That's the code that needs to be scrutinized for possible undefined
behavior. A function that takes a reference parameter should be
able to assume that the reference is actually bound to an object;
if it is not then the program is already undefined and the fault
lies elsewhere.

Yes, and I believe part of the OP's argument for using references over
pointers was that it reduces the need for detecting null pointers, and thus
results in more efficient code. I don't believe that reasoning is viable.
 
L

Larry I Smith

Steven T. Hatton wrote:
[snip]
Yes, and I believe part of the OP's argument for using references over
pointers was that it reduces the need for detecting null pointers, and thus
results in more efficient code. I don't believe that reasoning is viable.

Well, many folks believe that it is viable.

If references are evil and one should only use pointers (as you
have repeated over and over and over in this thread), then we might
as well have stayed with C.

This is like a religious discussion - neither point of
view will convince the other.

We'll keep using references, but to each his own...

Regards,
Larry
 
S

Steven T. Hatton

Larry said:
Steven T. Hatton wrote:
[snip]
Yes, and I believe part of the OP's argument for using references over
pointers was that it reduces the need for detecting null pointers, and
thus
results in more efficient code. I don't believe that reasoning is
viable.

Well, many folks believe that it is viable.

The truth of an assertion is not determined by the number of people who
subscribe to the belief that it is true.
If references are evil and one should only use pointers (as you
have repeated over and over and over in this thread),

I have never said references are evil, and I have never said we should only
use pointers. AAMOF, I have rather clearly stated that there /are/
reasonable places where using references as function parameters makes
sense.
then we might as well have stayed with C.

References are only one of many C++ features which are not present in C.
Why would I not want the other features even if references were excluded
from C++?
This is like a religious discussion - neither point of view will convince
the other.

Perhaps, but presenting specious arguments to support your position is
potentially harmful to people who accept them uncritically.
 
E

E. Robert Tisdale

Richard said:
E. Robert Tisdale writes


Ugh. I'd be more inclined to say,
"If your function modifies an object,
it MUST NOT return a reference to that object."
It just invites the reader to misinterpret what's going on.
If you write

f(a);
g(a);

it's obvious that the functions are not being called for their return
values, and therefore some side effect is intended.

But the same is true if you invoke

func(a);

You don't *need* to use the returned reference (pointer).
If you write

g(f(a));

or even

x = f(a);

it's not obvious to the reader that "a" is in effect used *twice*,
even though it only appears once.

Isn't it "obvious" that, in

std::cout << "i = " << 42 << std::endl;

ostream std::cout is used *thrice*?
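Desugared, that single statement threads the stream object through every call, roughly:

std::operator<<(std::cout, "i = ").operator<<(42).operator<<(std::endl);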
A function which does a single thing cleanly is usually preferable to
one which does two things, or which does the single thing obscurely.

Would you then agree that
a function should *not* modify two or more arguments?
Which is only a benefit to authors of obfuscated code.

What's wrong with using functions in expressions?
If you are going to call one void function after another,
you might as well write your program in assembler.
 
B

belvis

Steven said:
I know of two companies who have concluded that passing by reference is, in
general, a bad choice. Trolltech, and SuSE.

I don't know about SuSE, but I wouldn't take Trolltech too seriously
when it comes to API design.

Bob
 
S

Steven T. Hatton

I don't know about SuSE, but I wouldn't take Trolltech too seriously
when it comes to API design.

Bob

They do a few things I don't care for and there are some features I wish Qt
provided. But the success of KDE speaks for itself. They are certainly
one of the leaders in C++ GUI toolkit development.

I remember what the Unix desktop looked like in 1997.
 
B

belvis

Steven said:
They do a few things I don't care for and there are some features I wish Qt
provided. But the success of the KDE speaks for itself. They are certainly
one of the leaders in C++ GUI toolkit development.

Yes, the success of KDE speaks for itself. It does not, however, speak
for the quality of APIs developed at Trolltech or the recommendations
they make. Success in the market usually doesn't mean anything more
than success in the market.
I remember what the Unix desktop looked like in 1997.

So do I. <shudder>

Bob
 
N

niklasb

Steven said:
This is, perhaps, pedantic, but 0 and NULL are integer literals.
[snip]

Read the quote from the standard above. I know what an integer
literal is. It just happens that the standard defines the term
"null pointer constant" to mean any "integral constant expression
.... that evaluates to zero." The integer literal 0 is one example
of a null pointer constant. Here's another example:

void* p = 2 - 2;

The integral constant expression 2 - 2 is a null pointer constant
by the above definition. Therefore p is initialized to the null
pointer value.

Incidentally, this implicit conversion can be useful for
compile-time asserts, e.g.,

// CompileTimeAssertPow2
// value field is 1 if N is a power of two; otherwise,
// instantiating value yields a compile-time error.
template<int N>
struct CompileTimeAssertPow2
{
    static const int value =
        static_cast<void*>(0) == (N & (N-1));
};
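A usage sketch (assuming the definition above, with the missing semicolon added; whether the in-class initializer is accepted as a constant expression can vary with compiler era):

int ok  = CompileTimeAssertPow2<8>::value; // 8 & 7 == 0: a null pointer constant, value is 1
int bad = CompileTimeAssertPow2<6>::value; // 6 & 5 == 3: not a null pointer constant,
                                           // so the comparison with void* fails to compile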

[snip]
But to say that something occupies storage basically means it has an address
of some kind. I don't care if you want to say an lvalue is a box in a
warehouse. There will still be some way of referring to it by means of
coordinates.

Sure, every object has to have an address. However, it doesn't
follow that the internal representation of a pointer or reference
is the address of the pointee. It could be something else -- like
an index into an array of object descriptors maintained by the
run-time system, or an address plus some other bits that encode
type information, etc.
This is probably a more important distinction than it may seem. Undefined
behavior may appear anywhere, and at any time in the execution of the
program after the statement producing it has been encountered.

For convenience, let's use the term 'cause' to denote the
statement or expression that results in undefined behavior, and
'effect' to denote the undefined behavior itself.

So let's consider an example involving references:

int func(int& r)
{
    return r;
}

int main()
{
    int* p = 0;
    return func(*p);
}

Now we both agree that the 'cause' of the undefined behavior
is the expression *p in main. The 'effect' could be anything
at any time, but let's suppose that in this case, it is a
"seg fault" during the execution of func.

You seem to think that the fact that the effect occurred in
func is somehow an indictment of func -- specifically of the
fact that it takes a reference parameter.

However, I would argue that where the effect occurred is
irrelevant, except for debugging purposes. The proper focus
of our attention is the cause of the undefined behavior.
Clearly, main is at fault here. Any code that dereferences
a pointer should "know" that the pointer is valid and not
null.
I believe that was in response to the example using the integer literal 0 as
the argument to a function taking a non-const reference as a parameter. My
point was that you /can/ dereference a pointer and use it as an argument in
that situation. That code will compile, and the pointer being dereferenced
may be null when the program is executed. I had left it as implied that
this is probably not what the programmer would want.

You were responding to the OP statement that using references makes the
code safer and more self-documenting because a reference cannot be
null.
Technically, the null pointer is passed to the indirection expression
appearing in the argument list, and the (result of evaluating the)
indirection expression is passed to func(). So what I really meant by
'passing a null' is that a null pointer can appear in the indirection
expression passed as an argument to func() without a compiler error being
produced.

True, but that doesn't mean a null reference has been passed to func().
It means the caller has caused undefined behavior. The distinction is
significant because it suggests where the fix needs to be made: not in
func itself but in the calling statement that dereferences the pointer.
My point in the particular instance was that the examples presented by the
OP were not addressing the issues they were intended to address.


As a general rule, I don't believe that's possible.
[snip]

True, there is no run-time test that can reliably determine whether
a pointer is valid. By "make sure" I just meant the programmer needs
to be sure -- by clearly specifying the function's contract, enforcing
invariants, etc. Thus if the caller of func() initializes a reference
by dereferencing a pointer which turns out to be null, the fault lies
not with func() but with the caller -- or perhaps with some function
farther up the stack for passing invalid arguments to the caller.
Yes, and I believe part of the OP's argument for using references over
pointers was that it reduces the need for detecting null pointers, and thus
results in more efficient code. I don't believe that reasoning is viable.

If func takes a reference and the caller has a pointer, then the caller
has to dereference the pointer. So in this case we've just moved the
pointer dereference, not eliminated it.

However, perhaps the caller already has something other than a pointer,
e.g., a local variable. By making func take a reference, we eliminate
the need for a pointer entirely. In this way, using reference parameters
instead of pointers can give a net reduction in the total number of
pointers.
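A sketch of that case (hypothetical function):

void increment(int& n) { ++n; }   // reference parameter

int main()
{
    int counter = 0;
    increment(counter);  // the caller has a plain local variable;
                         // no pointer exists anywhere in this program
    return counter;      // returns 1
}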

If one adopts a programming style which results in pointers being
relatively rare, it becomes feasible to scrutinize the remaining
pointers even more closely for possible undefined behavior.
 
S

Stuart MacMartin

Of course that's not a problem with the definition of dynamic_cast.
Except that I don't really have any control over how it's implemented.
So if on one of my primary platforms (Windows) it's always a
performance bottleneck, that means I have to avoid it except where
performance is not an issue.

Stuart
 
E

E. Mark Ping

Without them invoking undefined behaviour first, yes it will. Any sane
programmer will check their pointers for validity before attempting to
dereference them (either explicitly by testing, or implicitly by the
enforcement of preconditions).


Undefined behaviour. The caller has attempted to dereference NULL. Not
func()'s problem.

This has all the benefit of "neener neener neener, it's your fault".
No compilers I know of track pointers and terminate as soon as a null
pointer is dereferenced. So while this is true formally, it's of no
real use practically.

At any rate it's an academic distinction anyway. I said I prefer to
use ref arguments unless a null pointer is a valid argument (in which
case you have to use pointers).
 
S

Steven T. Hatton

Steven said:
This is, perhaps, pedantic, but 0 and NULL are integer literals.
[snip]

Read the quote from the standard above. I know what an integer
literal is. It just happens that the standard defines the term
"null pointer constant" to mean any "integral constant expression
... that evaluates to zero." The integer literal 0 is one example
of a null pointer constant. Here's another example:

Yes, I looked it up, and indeed an integer literal is an integral constant
expression.
void* p = 2 - 2;

The integral constant expression 2 - 2 is a null pointer constant
by the above definition. Therefore p is initialized to the null
pointer value.

Incidentally, this implicit conversion can be useful for
compile-time asserts, e.g.,

// CompileTimeAssertPow2
// value field is 1 if N is a power of two; otherwise,
// instantiating value yields a compile-time error.
template<int N>
struct CompileTimeAssertPow2
{
    static const int value =
        static_cast<void*>(0) == (N & (N-1));
};

I'll have to take your word that this is useful.
For convenience, let's use the term 'cause' to denote the
statement or expression that results in undefined behavior, and
'effect' to denote the undefined behavior itself.

So let's consider an example involving references:

int func(int& r)
{
    return r;
}

int main()
{
    int* p = 0;
    return func(*p);
}

Now we both agree that the 'cause' of the undefined behavior
is the expression *p in main. The 'effect' could be anything
at any time, but let's suppose that in this case, it is a
"seg fault" during the execution of func.

You seem to think that the fact that the effect occurred in
func is somehow an indictment of func -- specifically of the
fact that it takes a reference parameter.

No. It is just a counterexample presented to demonstrate that the compiler
will not prevent you from attempting to pass a dereferenced null pointer to
func().
You were responding to the OP statement that using references makes the
code safer and more self-documenting because a reference cannot be
null.

Actually what the Standard says is that a reference cannot be null in a
well-defined program. I am beginning to understand what the comment about
self-documentation was intending. I was looking at it from the other point
of view. For what I've been doing, it makes no sense to work with
references. Shared data is part of the conceptual design, and using
pointers is therefore necessary. In order to pass a reference, I would
have to dereference a pointer. It would be a deviation from the pattern to
create a function that took a reference rather than a pointer.

I guess if I were working with data forms, or something like that, it might
make more sense to pass references.
True, but that doesn't mean a null reference has been passed to func().

It doesn't mean it hasn't been either.
It means the caller has caused undefined behavior. The distinction is
significant because it suggests where the fix needs to be made: not in
func itself but in the calling statement that dereferences the pointer.

Agreed. I can see how one might argue that there is an advantage to that:
if there is no need for polymorphic behavior, using a reference may well be
preferable. OTOH, the fact remains that information which might be
communicated by passing a pointer is not available at the point where the
function is called.
My point in the particular instance was that the examples presented by
the OP were not addressing the issues they were intended to address.


As a general rule, I don't believe that's possible.
[snip]

True, there is no run-time test that can reliably determine whether
a pointer is valid. By "make sure" I just meant the programmer needs
to be sure -- by clearly specifying the function's contract, enforcing
invariants, etc. Thus if the caller of func() initializes a reference
by dereferencing a pointer which turns out to be null, the fault lies
not with func() but with the caller -- or perhaps with some function
farther up the stack for passing invalid arguments to the caller.

Yes. And if you can constrain the parameters to the point where you are not
using polymorphism, or if you are willing to either take the address of the
reference, or program by exception, then it may make some sense to pass by
reference. I've just not been in a situation where that is applicable.
If func takes a reference and the caller has a pointer, then the caller
has to dereference the pointer. So in this case we've just moved the
pointer dereference, not eliminated it.

However, perhaps the caller already has something other than a pointer,
e.g., a local variable. By making func take a reference, we eliminate
the need for a pointer entirely. In this way, using reference parameters
instead of pointers can give a net reduction in the total number of
pointers.

If one adopts a programming style which results in pointers being
relatively rare, it becomes feasible to scrutinize the remaining
pointers even more closely for possible undefined behavior.

It also has to do with the design domain. I'm also wondering how this
impacts such notions as abstract interfaces.
 
L

Larry I Smith

Steven said:
Larry said:
Steven T. Hatton wrote:
[snip]
Yes, and I believe part of the OP's argument for using references over
pointers was that it reduces the need for detecting null pointers, and thus
results in more efficient code. I don't believe that reasoning is viable.
Well, many folks believe that it is viable.

The truth of an assertion is not determined by the number of people who
subscribe to the belief that it is true.
If references are evil and one should only use pointers (as you
have repeated over and over and over in this thread),

I have never said references are evil, and I have never said we should only
use pointers. AAMOF, I have rather clearly stated that there /are/
reasonable places where using references as function parameters makes
sense.
then we might as well have stayed with C.

References are only one of many C++ features which are not present in C.
Why would I not want the other features even if references were excluded
from C++?
This is like a religious discussion - neither point of view will convince
the other.

Perhaps, but presenting specious arguments to support your position is
potentially harmful to people who accept them uncritically.

Sigh....

Your dozens of posts also seem specious - streams of words asserting
that only pointers should be used in a well designed program; that
references are used only by the unwashed and uneducated, and that
your mission is to convince us of the error of our ways (at least
that's the way your posts read).

References are quite useful to many (most?) of us.

I counted the number of pointers and references used in a Web
Services app just completed by one of our development teams:
37 pointers, 1983 references (in 300K+ lines of code).
This statistic means nothing in particular; I include it
merely to show that many apps work quite well with designs
that use references.

Regards,
Larry
 
N

niklasb

Steven said:
(e-mail address removed) wrote: [snip]
For convenience, let's use the term 'cause' to denote the
statement or expression that results in undefined behavior, and
'effect' to denote the undefined behavior itself.

So let's consider an example involving references:

int func(int& r)
{
    return r;
}

int main()
{
    int* p = 0;
    return func(*p);
}

Now we both agree that the 'cause' of the undefined behavior
is the expression *p in main. The 'effect' could be anything
at any time, but let's suppose that in this case, it is a
"seg fault" during the execution of func.

You seem to think that the fact that the effect occurred in
func is somehow an indictment of func -- specifically of the
fact that it takes a reference parameter.

No. It is just a counterexample presented to demonstrate that the compiler
will not prevent you from attempting to pass a dereferenced null pointer to
func().

Sure, a compiler will not usually prevent you from causing undefined
behavior. However, the use of references did not cause or contribute to
the undefined behavior.

If your point is that reference parameters are not a silver bullet
that make undefined behavior a thing of the past, then I agree.
They can help to some degree insofar as they help one reduce the
use of pointers generally.
Actually what the Standard says is that a reference cannot be null in a
well-defined program.

The purpose of the standard is to specify what a "well-defined program"
is so this isn't much of a caveat. Once you start invoking undefined
behavior your program is no longer a C++ program as defined by the
standard.
I am beginning to understand what the comment about
self-documentation was intending. I was looking at it from the other point
of view. For what I've been doing, it makes no sense to work with
references. Shared data is part of the conceptual design, and using
pointers is therefore necessary. In order to pass a reference, I would
have to dereference a pointer. It would be a deviation from the pattern to
create a function that took a reference rather than a pointer.

I guess if I were working with data forms, or something like that, it might
make more sense to pass references.

I don't deny there are many cases where pointers are the best choice.
However, it's hard to imagine a program that doesn't (also) have, say,
local variables.
It doesn't mean it hasn't been either.

If you want to talk about what happens *after* you invoke undefined
behavior then we're no longer talking about C++. In C++ there is no
such thing as a null reference.
Agreed. I can see how one might argue that there is an advantage to that:
if there is no need for polymorphic behavior, using a reference may well be
preferable.

References don't preclude polymorphism.
OTOH, the fact remains that information is not available at
the point where the function is called which might be communicated by
passing a pointer.

This is a separate objection from the one about undefined behavior.
I think it is a legitimate disadvantage of using references for out
parameters, but I personally still tend to prefer references and
compensate by naming.

[snip]
Yes. And if you can constrain the parameters to the point where you are not
using polymorphism, or if you are willing to either take the address of the
reference, or program by exception, then it may make some sense to pass by
reference. I've just not been in a situation where that is applicable.

References don't preclude polymorphism.

Also, who said anything about "programming by exception"? If there
is a pointer to be dereferenced, then whoever does the dereferencing
is responsible for "knowing" that the pointer is not null -- either
by a runtime check or by understanding the invariants at that point.
The same thing applies whether the pointer is dereferenced in the
called function (i.e., the function takes a pointer) or at the call
site (i.e., the function takes a reference).
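In other words, the check (or the invariant) travels with the dereference, wherever that happens to be. A sketch of the call-site case (hypothetical names):

void func(int& r) { ++r; }   // hypothetical callee taking a reference

void call_site(int* p)
{
    if (p)          // the dereference happens here, so the check lives here too
        func(*p);
}

int main()
{
    int n = 0;
    call_site(&n);  // increments n
    call_site(0);   // safely ignored
    return n;
}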

I don't see how either is inherently safer, except in cases where
use of a reference parameter eliminates the pointer entirely (i.e.,
the caller didn't have a pointer to begin with). Also, a reference
parameter eliminates any possible ambiguity about whether a function
accepts NULL as a valid argument.

[snip]
It also has to do with the design domain. I'm also wondering how this
impacts such notions as abstract interfaces.

Again, references don't preclude polymorphism. I'll accept that the
design domain is a factor in how many pointers one uses.
 
B

belvis

John said:
It is not silly at all. The point is that you want a cheap indicator of what
a function does. Just look at the function call and you know. Well, if that
is the sort of cursory study that is involved, how do you know if the
function is one of "yours" or one from some other library? Plainly you can't
know, so your cheap indicator is as likely to mislead as inform. Sensible
programmers will ignore it.

void Foo(int*);
void Foo(int&);

void A()
{
    int x(0);
    int& xx(x);

    Foo(xx);
}

void B()
{
    int y(0);
    int* yy(&y);

    Foo(yy);
}

Looking at these two functions, I don't see how it's any easier to see
that the call to Foo() in B() is likely to change y than it is to see
that the call to Foo() in A() will change x.

My experience with large bodies of code that try to use pointer
arguments to signal "may change the argument" (e.g., Qt) is that it
simply doesn't work very well. The theory is that seeing the "&" at
the call site tells you it's a pointer, so you should be aware that
the pointee might change. My contrived example above shows that the
call site can easily be missing the "&", in which case you're no
better off than you would have been with a reference -- you have to
read the documentation.

A more realistic example of losing the "&" is when a function passes on
a pointer argument to another function:

void F(int* p)
{
    ...

    G(p);

    ...
}

Bob
 
P

Phlip

Mr said:
I've been thinking about passing parameteras using references instead
of pointers in order to emphasize that the parameter must be an
object.

Always use the weakest language construction with the fewest features. This
implies you should always use references unless you need a pointer's extra
features.

Use a pointer when you need the handle to be a value. The referent - the
invisible thing inside a reference that may secretly be implemented as a
pointer - is not itself a value. You cannot compare it to NULL or re-seat
it. You should not index it, but that's a different concern. (You can index
the address of the referent.)
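A sketch of that difference:

int main()
{
    int a = 1, b = 2;

    int* p = &a;    // the pointer is itself a value:
    p = 0;          // it can be compared to NULL and re-seated
    p = &b;

    int& r = a;     // the reference is not:
    r = b;          // this assigns b's value through r into a; r still refers to a

    return a;       // returns 2
}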
Example:
void func(Objec& object); //object must be an object

The language already enforces that object must be related to Objec.

If func() won't change object, prefer this:

void func(Objec const & object);

The const is after the Objec because that's where it _should_ be. The
notation 'const Objec &' is syntactic sugar with no other meaning than
'Objec const &'. Type declarations should put the most important part first.

Finally, folks might tell you to pass by address if you intend to change the
object. This is a "compilable comment", and the best compilable comments
don't change structure. Use an active verb in the function name to indicate
it will change the object:

void updateMembers(Objec & object);
Any comments on why this could be a bad idea or do you think it's just
a matter of taste?

Lots of stuff is a matter of taste, but not dangerous things like pointers
and references...
 
S

Steven T. Hatton

Phlip said:
Always use the weakest language construction with the fewest features.
This implies you should always use references unless you need a pointer's
extra features.

And hope you never will need them.
The language already enforces object must be related to Objec.

If func() won't change object, prefer this:

void func(Objec const & object);

The const is after the Objec because that's where it _should_ be. The
notation 'const Objec &' is syntactic sugar with no other meaning than
'Objec const &'. Type declarations should put the most important part
first.

http://www.research.att.com/~bs/bs_faq2.html#constplacement
 
