References to temporaries and function-calls

  • Thread starter Erik Wikström

Erik Wikström

struct foo {
    int i;
};

int bar(foo& f) {
    return f.i++;
}

int main() {
    bar(foo());
}

The above code does not compile since you can't bind a reference to a
temporary. You could solve this by using a 'const foo&' parameter
instead, but then you have other problems (like trying to change the
value of a const). This much I understand. What I don't understand is
why this is considered trying to bind a reference to a temporary.
Aren't all parameters supposed to be evaluated before the function is
executed? And shouldn't these evaluations take place in the same scope
as the one in which the function is called, while the function,
including its parameters, executes in its own scope?

The way I see things, the foo() part of bar(foo()); should already
have executed (and thus have created a foo object on the stack) by the
time the parameter f comes into scope*, and should thus, from the
point of view of bar(), be non-temporary.

Or put another way, I can't quite see the difference between the
following two:
bar(foo());
and
foo f;
bar(f);

Can someone please explain?

* Or is declared, or defined or whatever it is called.
 

Gavin Deane

struct foo {
    int i;
};

int bar(foo& f) {
    return f.i++;
}

int main() {
    bar(foo());
}

The above code does not compile since you can't bind a reference to a
temporary. You could solve this by using a 'const foo&' parameter
instead, but then you have other problems (like trying to change the
value of a const). This much I understand.

I'm not sure there is anything else *to* understand.
what I don't understand is
why this is considered trying to bind a reference to a temporary,

Because the definition of "temporary object" includes the foo object
created in

bar(foo());
Aren't all parameters supposed to be evaluated before the function is
executed? And shouldn't these evaluations take place in the same scope
as the one in which the function is called, while the function,
including its parameters, executes in its own scope?

Yes to all three, none of which affects the fact that the foo object
created by the statement bar(foo()); is a temporary object.
The way I see things, the foo() part of bar(foo()); should already
have executed (and thus have created a foo object on the stack)

Yes. An unnamed temporary foo object now exists in the scope of main.
by the time the parameter f comes into scope*, and should thus, from
the point of view of bar(), be non-temporary.

The scope of bar isn't where you should be thinking. Assuming you
rewrite bar to take a const foo& so your code compiles, there is no
way (as far as I know) that you can tell *from inside bar* whether the
object referred to by f is a temporary or not in the scope of the
calling function. But from inside bar isn't what's important. What's
important is from inside the function that calls bar (main in this
case).

Inside main, there is a temporary foo object created, and you can't
bind non-const references to temporaries. If I add a line to your main
function so you have...

int main() {
    bar(foo());
    foo& a_reference = foo();
}

...both statements have the same problem. They both try to bind a
temporary to a non-const reference. The fact that in the first case
the non-const reference happens to be a function parameter doesn't
matter. In both cases the object *referred to* is a temporary, so the
reference has to be const.
Or put another way, I can't quite see the difference between the
following two:
bar(foo());
and
foo f;
bar(f);

Maybe you can now?

HTH
Gavin Deane
 
Erik Wikström

Maybe you can now?

Sorry, but no. In both cases the foo object is temporary, but in one
there's a name and in the other there isn't. I guess I'm kind of
looking for a rationale for this behaviour, and the only thing I can
think of is that if it were allowed you would lose an opportunity to
optimize: namely, the ability to construct the copied parameter in
place in the stack frame of the function, which would not be possible
if a reference were used.

I just can't see any advantage of the current behaviour over the one I
described, nor can I see a reason why it should not be possible to
implement (though, admittedly, I'm no compiler developer). On the
other hand, I can see some uses for the behaviour I described; among
other things, a number of algorithms in the standard library could
become more useful.
 

Sylvester Hesp

int main() {
    bar(foo());
    foo& a_reference = foo();
}

...both statements have the same problem. They both try to bind a
temporary to a non-const reference.

Also keep in mind that for both versions the copy ctor of foo has to
be accessible if the reference were const. And this, in fact, does
work:

int main()
{
    foo& a_reference(foo());
}

Even without an accessible copy ctor. These things strike me as odd.
In what situations is a copy needed to bind the temporary to a (const)
reference? And if [ foo& a = foo(); ] is not allowed, why is
[ foo& a(foo()); ] allowed? And why does the latter _not_ require a
copy ctor?

- Sylvester
 

Victor Bazarov

Sylvester said:
Also keep in mind that for both versions the copy ctor of foo has to
be accessible if the reference were to be const. And this, in fact,
does work
int main()
{
    foo& a_reference(foo());
It's a declaration of a function. Of course it "does work".
}

Even without accessible copy ctor. These things strike me as odd. In
what situations a copy is needed to bind the temporary to a (const)
reference? And if [ foo& a = foo(); ] is not allowed, why is [ foo&
a(foo()); ] allowed? And why does the latter _not_ require a copy
ctor?
- Sylvester

V
 

Sylvester Hesp

Victor Bazarov said:
It's a declaration of a function. Of course it "does work".

d'Oh!
I feel so stupid right now.

Nevertheless, why the need for accessible copy ctors?

- Sylvester
 

Gavin Deane

Sorry but no, in both cases the foo-object is temporary, but in one
there's a name and in the other there isn't.

Depends what you mean by "temporary". In a sense, all variables with
automatic storage duration are "temporary" in that they don't last
forever. I had hoped to find a formal definition of "temporary" in the
standard, but if it's there it eluded me. When I used the word
temporary in my post, I meant "temporary" as used by the wording of
the C++ standard. And by that (less precise than I would have liked)
definition, the foo object in

bar(foo());

is a temporary, while the foo object in

foo f;
bar(f);

is not a temporary.
I guess I'm kind of
looking for a rationale for this behaviour, and the only thing I can
think of is that if it were allowed you would lose an opportunity to
optimize: namely, the ability to construct the copied parameter in
place in the stack frame of the function, which would not be possible
if a reference were used.

I just can't see any advantage of the current behaviour over the one I
described, nor can I see a reason why it should not be possible to
implement (though, admittedly, I'm no compiler developer). On the
other hand, I can see some uses for the behaviour I described; among
other things, a number of algorithms in the standard library could
become more useful.

Rationale is a different matter. I was attempting to explain how the
foo object in bar(foo()); is a temporary and so cannot be bound to a
non-const reference, assuming the definitions of all those terms as
used in the standard. As to *why* things are that way, that's a
different question (and one I'm not sure of; I'm not a compiler
developer either). Maybe someone else here can shed some light. Or
perhaps comp.std.c++?

Gavin Deane
 

Victor Bazarov

Sylvester said:
d'Oh!
I feel so stupid right now.

Nevertheless, why the need for accessible copy ctors?

Because during binding to a reference a copy may need to be made.
It is needed if a conversion happens, as in

void foo(const int&);
foo(3.1415926); // a temporary 'int' is created and bound
                // to the reference

As to why the need to make a copy... I am not sure. Try looking
in the archives. This undoubtedly has been discussed before. Just
search for "accessible copy constructor bind reference" (without
the quotes).

V
 

Marcus Kwok

Erik Wikström said:
Sorry, but no. In both cases the foo object is temporary, but in one
there's a name and in the other there isn't. I guess I'm kind of
looking for a rationale for this behaviour, and the only thing I can
think of is that if it were allowed you would lose an opportunity to
optimize: namely, the ability to construct the copied parameter in
place in the stack frame of the function, which would not be possible
if a reference were used.

From what I understand, the rationale is that if a conversion is
required, then the behavior can be surprising. For example, suppose
that it were possible to bind a temporary to a non-const reference:

void foo(int& i)
{
    ++i;
}

void bar()
{
    foo(int(3));
    foo(double(3.0));
}

OK, so this example is contrived, but stick with me. When we call foo()
with the temporary int, things are as expected and foo increments the
actual int. However, when we call foo() with the double, it must first
create another temporary int, and then it passes this temporary int to
foo(). Then, foo() will increment the temporary int and not the
original double.

This example seems pretty pointless, but imagine instead if int& were
replaced with some class type, and ++i were replaced with a call to some
non-const member function of that class.

Therefore it would appear that foo() works in some cases but not in
others. In order to reconcile this difference, they decided not to
allow binding temporaries to non-const references.
I just can't see any advantage of the current behaviour over the one
I described, nor can I see a reason why it should not be possible to
implement (though, admittedly, I'm no compiler developer). On the
other hand, I can see some uses for the behaviour I described; among
other things, a number of algorithms in the standard library could
become more useful.

AFAIK there is no technical reason why it cannot be done, and in fact
some compilers (e.g., recent versions of Visual Studio) actually allow
you to bind a temporary to a non-const reference, as an extension.
 

Grizlyk

I do not know what the rationale is. Some compilers can compile that
and will produce warnings if a temporary is created. I read something
about it somewhere a while ago, but I have already forgotten why.

Sylvester said:
In what situations a copy is needed to bind the temporary to a (const)
reference? And if [ foo& a = foo(); ] is not allowed, why is [ foo&
a(foo()); ] allowed? And why does the latter _not_ require a copy ctor?

First of all, [ foo& a = foo(); ] does _not_ require a copy ctor either.

--
Maksim A. Polyanin
http://grizlyk1.narod.ru/cpp_new

"In thi world of fairy tales rolls are liked olso"
/Gnume/
 
V

Victor Bazarov

Grizlyk said:
[..]
Sylvester said:
In what situations is a copy needed to bind the temporary to a
(const) reference? And if [ foo& a = foo(); ] is not allowed, why is
[ foo& a(foo()); ] allowed? And why does the latter _not_ require a
copy ctor?

First [ foo& a = foo(); ] does _not_ require a copy ctor also.

foo& a = foo();

does *not* compile with a fully compliant compiler.

V
 
