Operator Overloading: Assume rational use


Tomás Ó hÉilidhe

Operator overloads are just like any other member function: you
can make them do whatever you want. However, of course, we might
expect them to behave in a certain way. The ++ operator should perform
some sort of increment, and the / operator should do something along
the lines of division.

Do you think it would have been worthwhile for the C++ Standard to
"codify" this expected use of operator overloads? I'll be specific:

Let's say you overload the ++ operator, both the pre and the post
form. In our class, they both behave as expected: the pre gives you
the new value, the post gives you the old value. In our particular
implementation, the pre version is much more efficient than the post
version because the post version involves the creation of a temporary
(and let's say our class object is quite expensive to construct).
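For concreteness, a minimal sketch of the usual pair of overloads
(the Counter class is invented for illustration); the copy in the
postfix form is the temporary at issue:

class Counter
{
    int value_;
public:
    Counter(int v) : value_(v) {}

    Counter& operator++()    // prefix: increment in place, no temporary
    {
        ++value_;
        return *this;
    }

    Counter operator++(int)  // postfix: must preserve the old state
    {
        Counter old(*this);  // the expensive temporary in question
        ++value_;
        return old;
    }

    bool operator<(int n) const { return value_ < n; }
};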

Let's say we have code such as the following:

for (OurClass obj = 0; obj < 77; obj++) DoSomething();

When the compiler looks at this, it can see straight away that the
result of the incrementation is discarded. If it had some sort of
"codified expected usage" of operator overloads, it could invoke
"++obj" instead.

Similarly, if you had a function such as:

ClassObject Func(ClassObject const arg)
{
    return arg * 7 - 2;
}

it could treat it as:

ClassObject Func(ClassObject arg)
{
    arg *= 7;
    arg -= 2;
    return arg;
}

thus getting rid of temporary objects.

If there were a "codified expected usage" then I don't think it
would be too far removed from the current situation we have with
constructor elision. With constructor elision, the compiler just
assumes that the creation of a temporary object won't result in
something important happening, like a rocket being sent to the moon,
so it just gets rid of the temporary. For it to have this "way of
thinking" though, the Standard basically had to say "well, constructors
aren't supposed to do something important outside of the object". This
wouldn't be very different at all from saying "well, the pre-increment
should be identical to the post-increment if the result is
discarded".

The net result of this is that code could be written more naturally;
for instance, take the following function:

int Func(int const i)
{
    return i * 7 - 2;
}

If we introduce a class object, it could be left as:

OurClass Func(OurClass const i)
{
    return i * 7 - 2;
}

instead of having to change it to:

OurClass Func(OurClass i)
{
    i *= 7;
    i -= 2;
    return i;
}

But then again, even if the Standard did have some sort of expected
usage of operator overloads, there would probably still be people who
wouldn't trust the compiler to do the right thing.
 

Rolf Magnus

Tomás Ó hÉilidhe said:
Operator overloads are just like any other member function: you
can make them do whatever you want. However, of course, we might
expect them to behave in a certain way. The ++ operator should perform
some sort of increment, and the / operator should do something along
the lines of division.

Yes. However, there are cases where this rule is ignored. Think about the
bit shift operators that are used for stream I/O in the standard library.
Boost has a lot of operator abuse too (look at Spirit).
One could even consider operator+ for strings as misuse of operator
overloading, since a concatenation isn't really the same as an addition.

Let's say we have code such as the following:

for (OurClass obj = 0; obj < 77; obj++) DoSomething();

When the compiler looks at this, it can see straight away that the
result of the incrementation is discarded. If it had some sort of
"codified expected usage" of operator overloads, it could invoke
"++obj" instead.

If the operator ++ can be inlined, the compiler might be able to
optimize the copied object away. But I see your point. I think it
wouldn't be a good idea to let the compiler make assumptions about
what the overloaded operators do.

Similarly, if you had a function such as:

ClassObject Func(ClassObject const arg)
{
    return arg * 7 - 2;
}

it could treat it as:

ClassObject Func(ClassObject arg)
{
    arg *= 7;
    arg -= 2;
    return arg;
}

thus getting rid of temporary objects.

Now consider your ClassObject to be a matrix, and instead of 7, you
multiply it by another matrix. Such a multiplication needs a temporary
anyway, so you might choose to implement operator*= by using operator*
instead of the other way round. So in some cases, such a transformation
might actually _add_ another temporary object.
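For instance, a bare-bones fixed-size matrix (the Matrix2 class is
invented for illustration) would naturally be written this way:

#include <cstddef>

class Matrix2
{
    double m_[2][2];
public:
    Matrix2() : m_() {}   // zero-initialized

    // The product cannot be computed in place, because every output
    // element reads a whole row and column of the inputs.
    friend Matrix2 operator*(Matrix2 const& a, Matrix2 const& b)
    {
        Matrix2 r;
        for (std::size_t i = 0; i != 2; ++i)
            for (std::size_t j = 0; j != 2; ++j)
                for (std::size_t k = 0; k != 2; ++k)
                    r.m_[i][j] += a.m_[i][k] * b.m_[k][j];
        return r;
    }

    Matrix2& operator*=(Matrix2 const& rhs)
    {
        *this = *this * rhs;   // the temporary is unavoidable here
        return *this;
    }
};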

What I could imagine is that there could be a way for the programmer
to explicitly request such transformations to happen. But I don't think
that it's a good idea to do it implicitly.
 

James Kanze

Operator overloads are just like any other member function: you
can make them do whatever you want. However, of course, we might
expect them to behave in a certain way. The ++ operator should perform
some sort of increment, and the / operator should do something along
the lines of division.

And of course, + and * are commutative, whereas - and / aren't.

Do you think it would have been worthwhile for the C++
Standard to "codify" this expected use of operator overloads?

I think this was rejected in the early days of C++. I'm not
sure I agree with this, but it's far too late to change it now.
It's even violated regularly in the standard: think of operator+
on strings, for example, and there are examples in mathematics
where operator* wouldn't be commutative either.

The rejection was complete; I think it's arguable that there are
two different cases: one concerning such "external" rules, and
another concerning internal rules, e.g. the relationship between
+ and +=, or between prefix and postfix ++.

I'll be specific:
Let's say you overload the ++ operator, both the pre and the
post form. In our class, they both behave as expected: The pre
gives you the new value, the post gives you the old value. In
our particular implementation, the pre version is much more
efficient than the post version because the post version
involves the creation of a temporary (and let's say our class
object is quite expensive to construct).

Let's say we have code such as the following:

for (OurClass obj = 0; obj < 77; obj++) DoSomething();

When the compiler looks at this, it can see straight away that
the result of the incrementation is discarded. If it had some
sort of "codified expected usage" of operator overloads, it
could invoke "++obj" instead.

If the functions involved are all inline, it can skip the
construction of the extra object anyway. And if they aren't,
and can't reasonably be made inline, then it is probable that
skipping the copy won't make a measurable difference anyway.
(It's easy to invent perverse cases where it would, but they
don't occur in real code.)

The important difference would be applying the rules of
associativity, commutativity, and, why not, distributivity
(should the compiler also assume that addition is cheaper than
multiplication?). Even more useful, IMHO, would be if the
compiler would automatically generate +=, given + and a copy
constructor, or vice versa. (In practice, today, all you have
to do is derive from an appropriate class template for this, so
it probably isn't worth it. But it certainly would have been
back before we had templates.)
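A hand-rolled miniature of that trick might look like this
(Boost.Operators does the same job more completely; the Money class
is invented):

// Given a member +=, this base supplies the matching + via a friend
// function found through argument-dependent lookup.
template <class Derived>
struct Addable
{
    friend Derived operator+(Derived lhs, Derived const& rhs)
    {
        lhs += rhs;   // lhs was passed by value, so mutate the copy
        return lhs;
    }
};

class Money : public Addable<Money>
{
    long cents_;
public:
    explicit Money(long c) : cents_(c) {}
    Money& operator+=(Money const& rhs)
    {
        cents_ += rhs.cents_;
        return *this;
    }
    long cents() const { return cents_; }
};

// Usage: Money a(100), b(250); Money c = a + b;  // uses generated +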
Similarly, if you had a function such as:

ClassObject Func(ClassObject const arg)
{
    return arg * 7 - 2;
}

it could treat it as:

ClassObject Func(ClassObject arg)
{
    arg *= 7;
    arg -= 2;
    return arg;
}

thus getting rid of temporary objects.

Well, you certainly wouldn't want the compiler to do this as an
"optimizing" measure, if you had written both functions, since
it's not clear which version will be faster. (Of course, if the
compiler can see enough of the functions to know which one will
be faster, it can do this transformation today, under the "as
if" rule.)
If there were a "codified expected usage" then I don't think it
would be too far removed from the current situation we have
with constructor elision.

Agree. Constructor elision is precisely an example of this.
 

Erik Wikström

Tomás Ó hÉilidhe said:
Operator overloads are just like any other member function: you
can make them do whatever you want. However, of course, we might
expect them to behave in a certain way. The ++ operator should perform
some sort of increment, and the / operator should do something along
the lines of division.

[snip]

But then again, even if the Standard did have some sort of expected
usage of operator overloads, there would probably still be people who
wouldn't trust the compiler to do the right thing.

You might want to look into expression templates if the performance
hit of using arg * 7 - 2 is too great (or educate your users).
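Very roughly, the idea is that operator+ returns a lightweight proxy
recording the operation, and the loop only runs on assignment (all
names below are invented):

#include <cstddef>
#include <vector>

template <class L, class R>
struct AddExpr          // proxy: holds references, does no work yet
{
    L const& l;
    R const& r;
    AddExpr(L const& lhs, R const& rhs) : l(lhs), r(rhs) {}
    double operator[](std::size_t i) const { return l[i] + r[i]; }
    std::size_t size() const { return l.size(); }
};

class Vec
{
    std::vector<double> data_;
public:
    explicit Vec(std::size_t n) : data_(n) {}
    double  operator[](std::size_t i) const { return data_[i]; }
    double& operator[](std::size_t i)       { return data_[i]; }
    std::size_t size() const { return data_.size(); }

    template <class E>
    Vec& operator=(E const& e)   // one loop, no intermediate Vec
    {
        for (std::size_t i = 0; i != size(); ++i)
            data_[i] = e[i];
        return *this;
    }
};

// A real library would constrain this so it matches only vector-like
// operands; written this way it is far too greedy.
template <class L, class R>
AddExpr<L, R> operator+(L const& l, R const& r)
{
    return AddExpr<L, R>(l, r);
}

// Usage: Vec a(n), b(n), c(n), d(n);
//        d = a + b + c;   // a single pass, no temporary Vec objects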

I do not think that it would be a good idea to have some kind of
expected behaviour in the standard, since it would change the
semantics of the language, and there might be cases where those
expectations are not true.
 

Kai-Uwe Bux

James said:
And of course, + and * are commutative, whereas - and / aren't.

That would rule out many reasonable uses of *, like matrix
multiplication or multiplication in other groups, as well as the use
of + for string concatenation.


I think this was rejected in the early days of C++. I'm not
sure I agree with this, but it's far too late to change it now.
It's even violated regularly in the standard: think of operator+
on strings, for example, and there are examples in mathematics
where operator* wouldn't be commutative either.

Exactly. I would really deplore a language that allows operator overloading
but does not acknowledge the possibility of non-commutative multiplication.

I guess it all comes down to your attitude toward operator overloading
in general. I see two main possible operator coding styles: (a) have
your operators mimic the built-in versions so that a person with C++
knowledge will understand the code easily, or (b) try to make client
code look similar to formulas from textbooks about the problem domain
so that a person with background knowledge can understand the code
easily. I usually follow (b), and in that case, * is clearly to be used
for matrix multiplication (since we cannot overload whitespace :). But
I do see that such coding guidelines should be local and do not
generalize from one place to another. Therefore, I think the standard
made the right decision not to legislate style.

The rejection was complete; I think it's arguable that there are
two different cases: one concerning such "external" rules, and
another concerning internal rules, e.g. the relationship between
+ and +=, or between prefix and postfix ++.

If the functions involved are all inline, it can skip the
construction of the extra object anyway. And if they aren't,
and can't reasonably be made inline, then it is probable that
skipping the copy won't make a measurable difference anyway.
(It's easy to invent perverse cases where it would, but they
don't occur in real code.)

The important difference would be applying the rules of
associativity, commutativity, and, why not, distributivity
(should the compiler also assume that addition is cheaper than
multiplication?).

Such rules cannot be applied by the compiler even for signed integral
types, as intermediate results could differ, which in the case of
overflow might turn defined behavior into undefined behavior (if you
are on a platform where signed overflow really causes trouble, that
is). Similarly for floating point arithmetic: some path might yield
NaN whereas a mathematically equivalent expression might yield 1.0.

For better or worse, arithmetic on computers simply does not obey the usual
mathematical laws; and pretending it does is a surefire method to get into
trouble.
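A two-line demonstration, assuming IEEE doubles:

#include <cstdio>

int main()
{
    double big = 1e16;
    double a = (big + 1.0) - big;   // the 1.0 is absorbed: 0.0
    double b = (big - big) + 1.0;   // "reassociated": 1.0
    std::printf("%g %g\n", a, b);   // prints "0 1"
}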


[snip]


Best

Kai-Uwe Bux
 
