overloaded operators returning wrong values


andrew browning

I have overloaded all of my arithmetic operators, but all of them are
functioning as multiplication. Below is a sample of the addition
operator:

Rational operator + (const Rational& r1, const Rational& r2){
//Postcondition: the sum of r1 and r2 is returned

int numerator;
int denominator;

denominator = (r1.getDen() * r2.getDen());
numerator = (r1.getNum() * r2.getDen() + r2.getNum() *
r1.getDen());
Rational sum (numerator, denominator);

return sum;
}
 

loufoque

andrew browning wrote :
I have overloaded all of my arithmetic operators, but all of them are
functioning as multiplication. Below is a sample of the addition
operator:

Rational operator + (const Rational& r1, const Rational& r2){
//Postcondition: the sum of r1 and r2 is returned

int numerator;
int denominator;

denominator = (r1.getDen() * r2.getDen());
numerator = (r1.getNum() * r2.getDen() + r2.getNum() *
r1.getDen());
Rational sum (numerator, denominator);

return sum;
}

And your problem is?
This looks like correct addition of rationals to me.
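
For what it's worth, the operator compiles and adds correctly when dropped
into a minimal stand-in class (the (int, int) constructor and the
getNum()/getDen() accessors are assumed from your snippet):

#include <iostream>

// Minimal stand-in for the parts of Rational the snippet relies on
// (an (int, int) constructor plus getNum()/getDen()); the real class
// presumably has more to it.
class Rational {
public:
    Rational(int n = 0, int d = 1) : num(n), den(d) {}
    int getNum() const { return num; }
    int getDen() const { return den; }
private:
    int num;
    int den;
};

// The posted operator+, unchanged except for joining the wrapped line.
Rational operator + (const Rational& r1, const Rational& r2){
    //Postcondition: the sum of r1 and r2 is returned
    int numerator;
    int denominator;

    denominator = (r1.getDen() * r2.getDen());
    numerator = (r1.getNum() * r2.getDen() + r2.getNum() * r1.getDen());
    Rational sum (numerator, denominator);

    return sum;
}

int main() {
    Rational a(1, 2);
    Rational b(1, 3);
    Rational c = a + b;
    std::cout << c.getNum() << '/' << c.getDen() << '\n'; // prints 5/6
}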
 

Axter

The following is the recommended approach to overloading arithmetic and
assignment operators:

T& T::operator+=(const T&){
//.... impl
return *this;
}

T operator+(const T& lhs, const T& rhs){
T temp(lhs);
return temp += rhs;
}
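
Applied to the OP's Rational, the two pieces would look roughly like this
(a sketch only; it assumes operator+= is a member with access to the
numerator/denominator data members, here called num and den):

Rational& Rational::operator+=(const Rational& rhs){
    // Compute the new numerator first, while den still holds its old value.
    num = num * rhs.getDen() + rhs.getNum() * den;
    den *= rhs.getDen();
    return *this;
}

Rational operator+(const Rational& lhs, const Rational& rhs){
    Rational temp(lhs);
    return temp += rhs;
}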

----------------------------------------------------------------------------------------
David Maisonave
http://axter.com

Top ten member of C++ Expert Exchange:
http://www.experts-exchange.com/Cplusplus
----------------------------------------------------------------------------------------
 

Kai-Uwe Bux

Axter said:
The following is the recommended approach to overloading arithmetic and
assignment operators:

T& T::operator+=(const T&){
//.... impl
return *this;
}

T operator+(const T& lhs, const T& rhs){
T temp(lhs);
return temp += rhs;
}

Recommended for which reason?

I would like to see the "recommended" approach carried out for matrix
multiplication or arbitrary precision integer multiplication. In both
cases, an in-place implementation is far from obvious (if possible); and
implementing * in terms of *= for matrices or BigInt is likely to create
more temporaries internally than implementing *= in terms of *.

Assuming there is an efficient swap method (like for std::vector), what is
wrong with:

T operator+ ( T const & a, T const & b ) {
// impl ...
return ( result );
}

T & operator+= ( T const & t ) {
T temp = *this + t;
swap( *this, temp );
return ( *this );
}
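
Spelled out for a concrete case (a sketch; the BigInt below is a made-up
arbitrary precision type storing base-10 digits least significant first,
used only to show the order: + implemented directly, += via + and swap):

#include <cstddef>
#include <vector>

struct BigInt {
    std::vector<unsigned> digits;          // base-10 digits, least significant first

    friend void swap( BigInt & a, BigInt & b ) {
        a.digits.swap( b.digits );         // O(1): just exchanges the buffers
    }
};

// Direct implementation of +: schoolbook addition into a fresh result.
BigInt operator+ ( BigInt const & a, BigInt const & b ) {
    BigInt result;
    unsigned carry = 0;
    std::size_t n = a.digits.size() > b.digits.size() ? a.digits.size() : b.digits.size();
    for ( std::size_t i = 0; i < n || carry; ++i ) {
        unsigned sum = carry;
        if ( i < a.digits.size() ) sum += a.digits[i];
        if ( i < b.digits.size() ) sum += b.digits[i];
        result.digits.push_back( sum % 10 );
        carry = sum / 10;
    }
    return result;
}

// += in terms of + and an O(1) swap: one unavoidable temporary, no extra copy.
BigInt & operator+= ( BigInt & lhs, BigInt const & rhs ) {
    BigInt temp = lhs + rhs;
    swap( lhs, temp );
    return lhs;
}

int main () {
    BigInt a; a.digits.push_back( 7 ); a.digits.push_back( 9 );   // 97
    BigInt b; b.digits.push_back( 5 );                            // 5
    a += b;                                // a.digits is now {2, 0, 1}, i.e. 102
}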

Also note that, if profiling shows the need for using expression templates,
an implementation of *= in terms of * is more natural than the other way
around -- in fact, I do not see how I would write an expression template
for * in terms of *=; but that could be a lack of imagination on my part.


Best

Kai-Uwe Bux
 

Alf P. Steinbach

* Kai-Uwe Bux:
Recommended for which reason?

Axter's example shows two independent guidelines:

1) Implement operator+ in terms of operator+=.

2) Implement operator+ as a non-member function.

(1) is a somewhat contested guideline. E.g., I seem to recall that
Microsoft has published the exact opposite guideline, and here you are
also arguing against it. As I see it, using (1) you can do no worse
than the opposite approach, and will mostly do better (allowing
operator+= to be as efficient as possible with no temporary involved,
where possible).

(2) has a simple rationale: to treat both arguments on an equal footing.
For example, to allow the same implicit conversions.
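
For instance (a sketch; the stripped-down Rational below, with public
members and a non-explicit constructor from int, is only a stand-in to
show the conversion rules):

#include <iostream>

struct Rational {
    int num, den;
    Rational(int n = 0, int d = 1) : num(n), den(d) {} // non-explicit: int converts to Rational

    // (a) member operator+: the left operand is always *this, never a converted temporary
    Rational operator+(const Rational& rhs) const {
        return Rational(num * rhs.den + rhs.num * den, den * rhs.den);
    }
};

// (b) non-member operator-: both operands are ordinary parameters,
// so the implicit conversion from int applies on either side
Rational operator-(const Rational& lhs, const Rational& rhs) {
    return Rational(lhs.num * rhs.den - rhs.num * lhs.den, lhs.den * rhs.den);
}

int main() {
    Rational r(1, 2);
    Rational a = r + 2;    // fine with the member form: 2 -> Rational on the right
    Rational b = 2 - r;    // fine only because operator- is a non-member
    // Rational c = 2 + r; // would not compile: no conversion is applied to *this
    std::cout << a.num << '/' << a.den << ' ' << b.num << '/' << b.den << '\n';
}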

I would like to see the "recommended" approach carried out for matrix
multiplication or arbitrary precision integer multiplication. In both
cases, an in-place implementation is far from obvious (if possible); and
implementing * in terms of *= for matrices or BigInt is likely to create
more temporaries internally than implementing *= in terms of *.

Yes, there are cases where you can't do in-place operations, but how can
you get fewer temporaries by implementing operator+= in terms of
operator+? As I see it, for operator+= you can take advantage of access
to internals, in particular reusing an outer encapsulation or already
allocated internal memory. That seems to me to generally imply fewer
temporaries and such.

Assuming there is an efficient swap method (like for std::vector), what is
wrong with:

T operator+ ( T const & a, T const & b ) {
// impl ...
return ( result );
}

T & operator+= ( T const & t ) {
T temp = *this + t;
swap( *this, temp );
return ( *this );
}

Mostly, what's "wrong" is that this approach /can't/ guarantee an
efficient in-place addition, when type T is such that that could
otherwise be done. If client code has "T x, y; ...; x += y;" then only
the compiler's optimizations can turn that into an in-place addition.
I'm thinking here of client code like

T o;
for( hee; haa; hoo )
{
o += something;
}


Also note that, if profiling shows the need for using expression templates,
an implementation of *= in terms of * is more natural than the other way
around -- in fact, I do not see how I would write an expression template
for * in terms of *=; but that could be a lack of imagination on my part.

I'm not sure; it may be that you have a good point why expression
templates are simply different, but consider

template< typename T >
struct Multiply
{
static inline T apply( T const& a, T const& b )
{
T result( a );
return (result *= b);
}
};

as a kind of generic implementation, and then e.g. for double, if
necessary for efficiency,

template<> struct Multiply<double>
{
static inline double apply( double a, double b ) { return a*b; }
};

and so on (disclaimer: I haven't tried this!).
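
To make that concrete, here is one way the sketch could be exercised, with
a toy fixed-point type that only provides *= (the type is made up purely
for illustration, and Multiply is repeated so the example stands alone):

#include <iostream>

// Multiply as sketched above, repeated here so the example compiles on its own.
template< typename T >
struct Multiply
{
    static inline T apply( T const& a, T const& b )
    {
        T result( a );
        result *= b;
        return result;
    }
};

template<> struct Multiply<double>
{
    static inline double apply( double a, double b ) { return a*b; }
};

// Toy fixed-point number (value scaled by 1000) that only provides *=.
struct Fixed
{
    long value;
    Fixed& operator*=( Fixed const& rhs )
    {
        value = value * rhs.value / 1000;
        return *this;
    }
};

int main()
{
    Fixed a = { 1500 };                                        // 1.5
    Fixed b = { 2000 };                                        // 2.0
    std::cout << Multiply<Fixed>::apply( a, b ).value << '\n'; // 3000, i.e. 3.0
    std::cout << Multiply<double>::apply( 1.5, 2.0 ) << '\n';  // 3
}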
 

Kai-Uwe Bux

Alf said:
* Kai-Uwe Bux:

Axter's example shows two independent guidelines:

1) Implement operator+ in terms of operator+=.

2) Implement operator+ as a non-member function.

(1) is a somewhat contested guideline. E.g., I seem to recall that
Microsoft has published the exact opposite guideline, and here you are
also arguing against it. As I see it, using (1) you can do no worse
than the opposite approach, and will mostly do better (allowing
operator+= to be as efficient as possible with no temporary involved,
where possible).

That works if (and I would contend, only if) you can implement operator*=
in place. I am not arguing against using that idiom where it works. I am
arguing against recommending this as a general rule.

(2) has a simple rationale: to treat both arguments on an equal footing.
For example, to allow the same implicit conversions.

Ok.


Yes, there are cases where you can't do in-place operations, but how can
you get fewer temporaries by implementing operator+= in terms of
operator+? As I see it, for operator+= you can take advantage of access
to internals, in particular reusing an outer encapsulation or already
allocated internal memory. That seems to me to generally imply fewer
temporaries and such.

In matrix multiplication, you cannot overwrite the coefficients because you
still need them. Thus, you will end up allocating a temporary matrix for *=
anyway. If then, on top of this, you implement * in terms of *=, you may
end up with more temporaries.
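
To illustrate with a bare-bones fixed-size matrix (the class below is mine,
just for the argument): even the compound assignment has to build a
complete temporary, so defining *= in terms of * costs nothing extra, while
the canonical form would add a copy on top of that.

#include <cstddef>

std::size_t const N = 3;

struct Matrix {
    double m[N][N];
};

// Direct product: the result needs its own storage, because every entry
// depends on a whole row of lhs and a whole column of rhs.
Matrix operator* ( Matrix const & lhs, Matrix const & rhs ) {
    Matrix result;
    for ( std::size_t i = 0; i < N; ++i ) {
        for ( std::size_t j = 0; j < N; ++j ) {
            result.m[i][j] = 0.0;
            for ( std::size_t k = 0; k < N; ++k ) {
                result.m[i][j] += lhs.m[i][k] * rhs.m[k][j];
            }
        }
    }
    return result;
}

// *= cannot work in place for the same reason, so defining it in terms of *
// costs nothing beyond the temporary that is unavoidable anyway.
Matrix & operator*= ( Matrix & lhs, Matrix const & rhs ) {
    lhs = lhs * rhs;
    return lhs;
}

int main () {
    Matrix a = {}, b = {};                 // zero-initialized
    a.m[0][0] = 2.0; b.m[0][0] = 3.0;
    a *= b;                                // a.m[0][0] is now 6.0
}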

[snip]

I'm not sure; it may be that you have a good point why expression
templates are simply different, but consider

template< typename T >
struct Multiply
{
static inline T apply( T const& a, T const& b )
{
T result( a );
return (result *= b);
}
};

as a kind of generic implementation, and then e.g. for double, if
necessary for efficiency,

template<> struct Multiply<double>
{
static inline double apply( double a, double b ) { return a*b; }
};

and so on (disclaimer: I haven't tried this!).


Probably, I need to be more specific on the expression template business.
Here is a simple expression template implementation for vector addition --
something that I would implement using += as the primitive (since it can be
done in place) and then defining + in terms of +=. However, with expression
templates, it appears that the other way around is more natural:


#include <cstddef>
#include <iostream>

std::size_t const length = 4;
typedef double Number;

class VectorStoragePolicy {

Number data [length];

public:

VectorStoragePolicy ( Number e = 0 )
{
for ( std::size_t i = 0; i < length; ++i ) {
data[i] = e;
}
}

VectorStoragePolicy ( VectorStoragePolicy const & other )
{
for ( std::size_t i = 0; i < length; ++i ) {
data[i] = other[i];
}
}

Number operator[] ( std::size_t i ) const {
return ( data[i] );
}

Number & operator[] ( std::size_t i ) {
return ( data[i] );
}
};


template < typename ExprA, typename ExprB >
class VectorPlusVector {

ExprA a_ref;
ExprB b_ref;

public:

VectorPlusVector ( ExprA const & a, ExprB const & b )
: a_ref ( a )
, b_ref ( b )
{}

Number operator[] ( std::size_t i ) const {
return ( a_ref[i] + b_ref[i] );
}

};

template < typename Expr >
struct VectorTag : public Expr {

VectorTag ( void )
: Expr()
{}

template < typename A >
VectorTag ( A const & a )
: Expr( a )
{}

};

template < typename ExprA, typename ExprB >
VectorTag< VectorPlusVector< ExprA, ExprB > >
operator+ ( VectorTag< ExprA > const & a,
VectorTag< ExprB > const & b ) {
return ( VectorPlusVector< ExprA, ExprB >( a, b ) );
}

struct Vector : public VectorTag< VectorStoragePolicy > {

Vector ( Number a = 0 )
: VectorTag< VectorStoragePolicy >( a )
{}

template < typename Expr >
Vector & operator= ( VectorTag< Expr > const & other )
{
for ( std::size_t i = 0; i < length; ++i ) {
(*this)[i] = other[i];
}
return ( *this );
}

template < typename Expr >
Vector & operator+= ( VectorTag< Expr > const & other ) {
*this = *this + other;
return ( *this );
}

};


template < typename Expr >
std::ostream & operator<< ( std::ostream & o_str,
VectorTag< Expr > const & v ) {
for ( std::size_t i = 0; i < length; ++i ) {
o_str << v[i] << ' ';
}
return ( o_str );
}


int main ( void ) {
Vector a ( 1.0 );
Vector b ( 2.3 );
a += b;
std::cout << a+b << '\n';
}

As you can see, there is the VectorPlusVector template that postpones
evaluation of sums. So if you write

(a + b + c + d)[2];

there is no additional temporary vector but the expression gets translated
into:

a[2] + b[2] + c[2] + d[2]

The challenge is to define a VectorIncrement expression template
(representing +=) and then define + in terms of that. I just don't see how
to do that without losing the advantage of expression templates
(elimination of temporaries).



Best

Kai-Uwe Bux
 

Dave Steffen

Axter said:
The following is the recommended approach to overloading arithmetic and
assignment operators:

T& T::operator+=(const T&){
//.... impl
return *this;
}

T operator+(const T& lhs, const T& rhs){
T temp(lhs);
return temp += rhs;
}

Or, more succinctly,

T operator+( T lhs, const T& rhs){
return lhs += rhs;
}

IIRC recommended by Meyers in Effective C++, 3rd Ed.

----------------------------------------------------------------------
Dave Steffen, Ph.D.          Fools ignore complexity. Pragmatists
Software Engineer IV         suffer it. Some can avoid it. Geniuses
Numerica Corporation         remove it.
ph (970) 419-8343 x27                          -- Alan Perlis
fax (970) 223-6797
(e-mail address removed)
 

Axter

Kai-Uwe Bux said:
Recommended for which reason?

See C++ Coding Standards by Herb Sutter and Andrei Alexandrescu
Item 27 [Prefer the canonical forms of arithmetic and assignment
operators]

----------------------------------------------------------------------------------------
David Maisonave
http://axter.com

Top ten member of C++ Expert Exchange:
http://www.experts-exchange.com/Cplusplus
----------------------------------------------------------------------------------------
 

Kai-Uwe Bux

Axter said:
Kai-Uwe Bux said:
Recommended for which reason?

See C++ Coding Standards by Herb Sutter and Andrei Alexandrescu
Item 27 [Prefer the canonical forms of arithmetic and assignment
operators]

Thanks for the reference. I just read it. Maybe I am missing something, but
that item does *not* at all contain a discussion of

+ in terms of += vs += in terms of +

Rather, the main concern is consistency: "In general, for some binary
operator @ (be it +, -, *, and so on), you should define its assignment
version such that a @= b and a = a @ b have the same meaning (other than
that the first form might be more efficient and only evaluates a once)."
Then the authors set out to propose the "canonical way" @ in terms of @=
observing that this achieves the stated design goal; but they do not
provide a rationale as to why this is superior to the obvious alternative @=
in terms of @.

The next paragraph enters a discussion of the virtues of non-member
implementations for operators. This is undisputed. The section closes with
variations, examples, and finally a notable exception to the canonical
recommendation. This addendum is actually the first point in the item where
the alternative "@= in terms of @" is mentioned.

The way I read it, item 27 (as opposed to Alf's post) provides no technical
reason at all to prefer:

@ in terms of @=

to:

@= in terms of @


Best

Kai-Uwe Bux
 

Axter

Kai-Uwe Bux said:
Axter said:
Kai-Uwe Bux said:
Axter wrote:

The following is the recommended approach to overloading arithmetic and
assignment operators:

T& T::operator+=(const T&){
//.... impl
return *this;
}

T operator+(const T& lhs, const T& rhs){
T temp(lhs);
return temp += rhs;
}

Recommended for which reason?

See C++ Coding Standards by Herb Sutter and Andrei Alexandrescu
Item 27 [Prefer the canonical forms of arithmetic and assignment
operators]

Thanks for the reference. I just read it. Maybe I am missing something, but
that item does *not* at all contain a discussion of

+ in terms of += vs += in terms of +

Rather, the main concern is consistency: "In general, for some binary
operator @ (be it +, -, *, and so on), you should define its assignment
version such that a @= b and a = a @ b have the same meaning (other than
that the first form might be more efficient and only evaluates a once)."
Then the authors set out to propose the "canonical way" @ in terms of @=
observing that this achieves the stated design goal; but they do not
provide a rationale as to why this is superior to the obvious alternative @=
in terms of @.

The next paragraph enters a discussion of the virtues of non-member
implementations for operators. This is undisputed. The section closes with
variations, examples, and finally a notable exception to the canonical
recommendation. This addendum is actually the first point in the item where
the alternative "@= in terms of @" is mentioned.

The way I read it, item 27 (as opposed to Alf's post) provides no technical
reason at all to prefer:

@ in terms of @=

to:

@= in terms of @


Best

Kai-Uwe Bux

I've reread your previous post on this subject, but I don't see where
you're giving a valid reason why Herb Sutter's recommended method should
not be recommended as a general rule.
Your followup post suggested the opposite approach using a swap method.
But not all classes have a swap method, and in fact, most don't. Those
that do don't always have an efficient swap method.

I'm sure there are times when using the opposite approach would be more
efficient, but in general, I would recommend Herb Sutter's method.

Remember that a general rule is for general purposes. It doesn't mean
that the rule is required to be followed for every requirement.
 

Kai-Uwe Bux

Axter wrote:

[snip]
I've reread your previous post on this subject, but I don't see where
you're giving a valid reason why Herb Sutter's recommended method should
not be recommended as a general rule.

Right. I just gave examples where it might be better/easier to go the other
way. Given the closing remark of your current posting, I see now that I was
reading too much into the recommendation. Sorry.

However, in the course of this conversation, I have been thinking about this
issue more carefully; and now I feel prepared to actually provide a reason
why the general recommendation should be to define operator@= in terms of
operator@ and operator=. I think it is less error prone than the
Sutter/Alexandrescu way.

The example of the OP can be used to illustrate this. Let us have a look at
a rational number class:

class rational {

long n; // numerator
long d; // denominator

public:

// ... stuff

};

Ignoring reduction to lowest possible terms, the Sutter/Alexandrescu way to
go about addition is:

rational & operator+= ( rational & lhs, rational const & rhs ) {
lhs.n = lhs.n * rhs.d + lhs.d * rhs.n;
lhs.d *= rhs.d;
return ( lhs );
}

rational operator+ ( rational const & lhs, rational const & rhs ) {
rational result ( lhs );
result += rhs;
return ( result );
}


This is easy to get wrong:

rational & operator+= ( rational & lhs, rational const & rhs ) {
lhs.d *= rhs.d;
lhs.n = lhs.n * rhs.d + lhs.d * rhs.n; // wrong: uses the new lhs.d !
return ( lhs );
}

The archives show posts dealing with re-implementations of std::complex that
suffer from this kind of bug. And I found myself recently falling for it
while working on a matrix class (maybe that is why I am so sensitive to this
issue right now): I was using entries that I had just overwritten.

On the other hand,

rational operator+ ( rational const & lhs, rational const & rhs ) {
rational result ( lhs.n * rhs.d + lhs.d * rhs.n, lhs.d * rhs.d );
return result;
}

rational & operator+= ( rational & lhs, rational const & rhs ) {
lhs = lhs + rhs;
return ( lhs );
}

is much less prone to this kind of bug. (Come to think of it, this might be
the reason that Sutter and Alexandrescu mention the complex number class as
a case where they would recommend the second approach.)

Your followup post suggested the opposite approach using a swap method.

Actually, I wrote that in my first post on this topic. In my follow-up, I
elaborated on expression templates.
But not all classes have a swap method, and in fact, most don't. Those
that do don't always have an efficient swap method.

True, in that case, you would use assignment. I suggested the swap method
because of the examples that I mentioned: matrix multiplication and
arbitrary precision operations. In both cases, a swap method *should* be
there and be more efficient than assignment.

I'm sure there are times when using the opposite approach would be more
efficient, but in general, I would recommend Herb Sutter's method.

As far as efficiency is concerned, you are right. However, I would remark
that this optimizes operator@= only: operator@ will not get any faster than
a direct implementation. Also, I would contend that expression templates
are even more effective at eliminating unwanted temporaries (although the
added complexity to the code is considerable, and for this reason I would
not recommend them as a general approach). Anyway, as a measure to optimize
performance, I would agree with you.

However, as a general guideline I would rather recommend the following
procedure:

1) First, write operator@= in terms of operator@ and operator= because
that is the easiest version to get right.
2) If profiling shows a need for optimization, refactor operator@=. Use
your operator@ code for inspiration.
3) Run tests to check whether the semantics agree. This should catch
bugs like the one above.
4) Finally, rewrite operator@ following the Sutter/Alexandrescu approach.
This eliminates code duplication.

The Sutter/Alexandrescu recommendation, to me, looks a little bit like
premature optimization.

Remember that a general rule is for general purposes. It doesn't mean
that the rule is required to be followed for every requirement.

Thanks for reminding me. I tend to forget about caveats that always apply.


Best regards

Kai-Uwe Bux
 
