Non-strictly Weak Sorting using STL?


Donovan Parks

Hello,

I am interested in performing a non-strictly weak sort of a vector
of objects. Specifically, I wish to sort these objects by a double
field unless they are equal, in which case I wish to sort them based on
another field. As such, the ordering may not be strictly weak. Is it
possible to do this using the STL?


I am using Visual C++ Express 2008 and it complains of a non-strictly
weak ordering when I run my program in debug mode. However, it will
let me do as I please in release mode. As an alternative, is there a
way to inform my compiler that I am OK with a non-strictly weak
ordering?

Thanks.
 

red floyd

Donovan said:
Hello,

I am interested in performing a non-strictly weak sort of a vector
of objects. Specifically, I wish to sort these objects by a double
field unless they are equal, in which case I wish to sort them based on
another field. As such, the ordering may not be strictly weak. Is it
possible to do this using the STL?


I am using Visual C++ Express 2008 and it complains of a non-strictly
weak ordering when I run my program in debug mode. However, it will
let me do as I please in release mode. As an alternative, is there a
way to inform my compiler that I am OK with a non-strictly weak
ordering?

That's compiler specific, and as such, OT here. Please check your
compiler's documentation, or search in a VS specific newsgroup.

That said, I believe that std::sort exhibits undefined behavior if you
don't have a strict weak ordering.
 

Maxim Yegorushkin

I am interested in performing a non-strictly weak sort of a vector
of objects. Specifically, I wish to sort these objects by a double
field unless they are equal, in which case I wish to sort them based on
another field. As such, the ordering may not be strictly weak.

From the description you gave, it still looks like a strict weak
ordering.
Is it possible to do this using STL?

It should be.
I am using Visual C++ Express 2008 and it complains of a non-strictly
weak ordering when I run my program in debug mode. However, it will
let me do as I please in release mode. As an alternative, is there a
way to inform my compiler that I am OK with a non-strictly weak
ordering?

Could you post a complete piece of code that shows us what you
observe?
 

AnonMail2005

Hello,

I am interested in performing a non-strictly weak sort of a vector
of objects. Specifically, I wish to sort these objects by a double
field unless they are equal, in which case I wish to sort them based on
another field. As such, the ordering may not be strictly weak. Is it
possible to do this using the STL?

I am using Visual C++ Express 2008 and it complains of a non-strictly
weak ordering when I run my program in debug mode. However, it will
let me do as I please in release mode. As an alternative, is there a
way to inform my compiler that I am OK with a non-strictly weak
ordering?

Thanks.

Define a less-than operator for your class and you should be fine; it
sounds like you just have a compound key. Something to this effect,
assuming A is your class/struct:

struct A
{
    double first;   // high order of key
    double second;  // low order of key
};

bool operator < (A const & lhs, A const & rhs)
{
    return lhs.first < rhs.first ||
           (lhs.first == rhs.first && lhs.second < rhs.second);
}

Also, be aware of floating point comparisons discussed in a recent
thread.

HTH
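With hindsight (std::tie is C++11, so it postdates this thread), the same compound-key comparison can be written with tuples, which compare lexicographically and avoid the operator-precedence pitfall in the hand-written version:

```cpp
#include <tuple>

struct A
{
    double first;   // high order of key
    double second;  // low order of key
};

bool operator<(A const& lhs, A const& rhs)
{
    // std::tie builds tuples of references; tuple's operator< compares
    // element by element: first by 'first', then by 'second' on a tie.
    return std::tie(lhs.first, lhs.second)
         < std::tie(rhs.first, rhs.second);
}
```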
 

Donovan Parks

Hello,

Thanks for all the replies. Based on the replies it sounds like there
is something about how the comparison function needs to be written
that I don't understand. The relevant code is as follows:

typedef struct sHEURISTIC_SORTER
{
    sHEURISTIC_SORTER(double _heuristicValue, uint _index, uint _degree)
        : heuristicValue(_heuristicValue), index(_index), degree(_degree) {}

    double heuristicValue;
    uint index;
    uint degree;
} HeuristicSorter;

bool HeuristicSorterPredicate(HeuristicSorter elem1, HeuristicSorter elem2)
{
    if(elem1.heuristicValue < elem2.heuristicValue)
        return true;
    else if(elem1.heuristicValue > elem2.heuristicValue)
        return false;
    else
    {
        if(elem1.degree % 2 == 1)
            return true;

        return false;
    }
}

bool SomeFunction(...)
{
    ...
    std::sort(myHeuristicSorterVector.begin(),
              myHeuristicSorterVector.end(), HeuristicSorterPredicate);
    ...
}

Do I need to write HeuristicSorterPredicate() in a particular way?

Much thanks!

Cheers,
Donovan
 

peter koch

Hello,

Thanks for all the replies. Based on the replies it sounds like there
is something about how the comparison function needs to be written
that I don't understand. The relevant code is as follows:

typedef struct sHEURISTIC_SORTER
{
  sHEURISTIC_SORTER(double _heuristicValue, uint _index, uint
_degree): heuristicValue(_heuristicValue), index(_index), degree
(_degree) {}

  double heuristicValue;
  uint index;
  uint degree;

} HeuristicSorter;

bool HeuristicSorterPredicate(HeuristicSorter elem1, HeuristicSorter
elem2)
{
  if(elem1.heuristicValue < elem2.heuristicValue)
    return true;
  else if(elem1.heuristicValue > elem2.heuristicValue)
    return false;
  else
  {
    if(elem1.degree % 2 == 1)
        return true;

    return false;
  }

}

bool SomeFunction(...)
{
  ...
  std::sort(myHeuristicSorterVector.begin(),
myHeuristicSorterVector.end(), HeuristicSorterPredicate);
  ...

}

Do I need to write HeuristicSorterPredicate() in a particular way?

You need to write your comparison function so that (among other things)
if a is less than b, then b will not be less than a. This is not the
case for the code shown.

/Peter
 

peter koch

You want to define what you mean by "equal" in a double.  A double is a
(very close) approximation to a number;  it's possible for 1+1 to not be
exactly 2, but instead be 1.9999999999999999 or 2.00000000000000000001
(with a few more digits - usually)

Sorry, but that is just nonsense - and unrelated to the original post.

/Peter
 

Donovan Parks

Hello,

Andy: I agree one needs to be careful when looking for equality
between doubles. For reasons not interesting enough to list here, I'm
not worried about this issue here.

Peter: What is the rule for when a == b? In a strict weak ordering,
if a == b am I free to return true or false? Is there some way I can
rewrite HeuristicSorterPredicate() to meet the requirements of a
strict weak ordering, or is what I'm doing inherently not a strict
weak ordering (which is what I believe)? Thanks for the help.

Cheers,
Donovan
 

James Kanze

Andy: I agree one needs to be careful when looking for
equality between doubles. For reasons not interesting enough
to list here, I'm not worried about this issue here.
Peter: What is the rule for when a == b? In a strictly weak
ordering if a == b am I free to return true or false?

Only false. The ordering must be such that !(a<b) && !(b<a)
defines an equivalence relation.
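To make that rule concrete, here is a minimal check using the OP's predicate as posted (with `uint` replaced by plain `unsigned` so the snippet is self-contained). With equal heuristicValue and odd degree on both sides, the predicate claims each element is less than the other, which violates the asymmetry a strict weak ordering requires:

```cpp
struct HeuristicSorter
{
    double   heuristicValue;
    unsigned index;
    unsigned degree;
};

// The predicate as originally posted: the tie-break looks only at
// elem1.degree and ignores elem2 entirely -- that is the bug.
bool HeuristicSorterPredicate(HeuristicSorter elem1, HeuristicSorter elem2)
{
    if (elem1.heuristicValue < elem2.heuristicValue)
        return true;
    if (elem1.heuristicValue > elem2.heuristicValue)
        return false;
    return elem1.degree % 2 == 1;
}

// For a = {1.0, 0, 3} and b = {1.0, 1, 5} (both odd degrees),
// HeuristicSorterPredicate(a, b) and HeuristicSorterPredicate(b, a)
// are both true, so !(a<b) && !(b<a) cannot define an equivalence.
```

This is exactly the condition the debug-mode checks in Visual C++ trip over.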
 

Donovan Parks

Hello,

Andy and James: thanks for the help. I can clearly see that I don't
have a strict weak ordering.

Given that I need to sort the list as specified above, is there any
STL (doesn't seem so) or Boost (not as familiar with this library set,
although I use parts of it) sorting algorithm that relaxes this
requirement, or will I need to grab myself a sorting algorithm and
modify it for my needs? Clearly one can properly sort a list without
this requirement (i.e., where equality is treated as a special case).
I'm a big fan of not reinventing the wheel, but perhaps it is
necessary in this case.

Thanks for the help.

Cheers,
Donovan
 

peter koch

Hello,

Andy and James: thanks for the help. I can clearly see that I don't
have a strict weak ordering.

Given that I need to sort the list as specified above, is there any
STL (doesn't seem so) or Boost (not as familiar with this library set,
although I use parts of it) sorting algorithm that relaxes this
requirement, or will I need to grab myself a sorting algorithm and
modify it for my needs? Clearly one can properly sort a list without
this requirement (i.e., where equality is treated as a special case).
I'm a big fan of not reinventing the wheel, but perhaps it is
necessary in this case.

First: don't top-post - I've rearranged the post. Now back to the topic:
there is no reason whatsoever not to make your ordering strictly weak,
so just do that and use a standard sorting routine.

/Peter
 

peter koch

Actually, I do have a relevant point.

No.

He said "Specifically, I wish to sort these objects by a double field
unless they are equal". I wish to point out that "equal" is not a
reliable measure for floating point numbers.

That is wrong.

Only when he has answered that is it worth considering strictly weak
ordering and the secondary key.

And this is also wrong.

/Peter
 

Kai-Uwe Bux

Donovan said:
Hello,

Thanks for all the replies. Based on the replies it sounds like there
is something about how the comparison function needs to be written
that I don't understand. The relevant code is as follows: [snip]
bool HeuristicSorterPredicate(HeuristicSorter elem1, HeuristicSorter elem2)
{
    if(elem1.heuristicValue < elem2.heuristicValue)
        return true;
    else if(elem1.heuristicValue > elem2.heuristicValue)
        return false;
    else
    {
        if(elem1.degree % 2 == 1)
            return true;

        return false;
    }
}
[snip]

The following would be a strict weak order:

bool HeuristicSorterPredicate ( HeuristicSorter elem1,
                                HeuristicSorter elem2 ) {
    if ( elem1.heuristicValue < elem2.heuristicValue ) {
        return true;
    }
    if ( elem1.heuristicValue > elem2.heuristicValue ) {
        return false;
    }
    if ( ( elem1.degree % 2 ) > ( elem2.degree % 2 ) ) {
        return true;
    }
    return false;
}

Is there a reason not to use this one?


Best

Kai-Uwe Bux
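For illustration, here is a quick run of this corrected predicate through std::sort; the sample values are mine, chosen arbitrarily. On a value tie, odd-degree elements come first, and the comparison is consistent in both directions, so the debug-mode checks are satisfied:

```cpp
#include <algorithm>
#include <vector>

struct HeuristicSorter
{
    double   heuristicValue;
    unsigned index;
    unsigned degree;
};

// The corrected predicate: on a value tie it compares the degree
// parity of *both* elements, which makes this a strict weak ordering.
bool HeuristicSorterPredicate(HeuristicSorter elem1, HeuristicSorter elem2)
{
    if (elem1.heuristicValue < elem2.heuristicValue) return true;
    if (elem1.heuristicValue > elem2.heuristicValue) return false;
    return (elem1.degree % 2) > (elem2.degree % 2);
}

std::vector<HeuristicSorter> sortedSample()
{
    std::vector<HeuristicSorter> v = {
        { 2.0, 0, 4 },  // largest value, sorts last
        { 1.0, 1, 4 },  // ties on value, even degree
        { 1.0, 2, 3 },  // ties on value, odd degree -- wins the tie
    };
    std::sort(v.begin(), v.end(), HeuristicSorterPredicate);
    return v;  // resulting order by index: 2, 1, 0
}
```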
 

Kai-Uwe Bux

Andy said:
I don't want to start a flame-fest here Peter, but it's generally known
that a floating point number is an approximation to an actual value.

No. Each floating point number (with the exception of finitely many values
such as NAN) corresponds to one and only one rational number, although not
every rational number can be realized as a float (or double, or long
double). In particular, floating point numbers inherit a strict order from
the real numbers they represent.

You might _use_ floating point numbers to approximate real numbers. But
there is nothing inherently fuzzy about floats.

Perhaps you'd like to explain to us what floating point system is able
to accurately express the precise value of 1/3?

He did not claim that.
And what will the
result be if you double it? And if you subtract 1/3 from 1, how it will
compare to the previous result?

The results of floating point arithmetic are by and large implementation
defined (e.g., there is no guarantee that addition of two floats yields a
best possible approximation to the sum of the represented numbers), but one
could argue that the standard requires addition to yield the actual sum if
the sum is representable. Similarly, comparisons are covered by the
standard. The only tricky issue is that you can only compare objects
(regions of memory), since intermediate values are allowed to have extra
bits [5/10].

Whether the results of comparisons are _useful_ depends on the particular
application, and you are certainly right in directing the OP to thinking
about that question.


Best

Kai-Uwe Bux
 

James Kanze

I don't want to start a flame-fest here Peter, but it's
generally known that a floating point number is an
approximation to an actual value.

I don't know where it is "generally known"; all of the experts I
know seem to concur that a floating point number exactly
represents a specific value. It may not be the value you want,
but that's another problem; if you know what you're doing, in
many applications, you can ensure that it is the value you want.
Perhaps you'd like to explain to us what floating point system
is able to accurately express the precise value of 1/3?

All of the ones I know can: the exact value of 1/3 is 0 (at
least in C++), and all of the floating point representations I
know are capable of exactly representing 0.
And what will the result be if you double it?

0. While floating point arithmetic differs in a lot of aspects
from real arithmetic, 0 does behave the same in both of them.
And if you subtract 1/3 from 1, how it will compare to the
previous result?

With what previous result? 1 - 1/3 is 1.

But perhaps you meant 1.0/3.0. In that case, of course, all of
the answers are implementation defined. But precisely defined.
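The distinction here is just C++'s division rules; a two-line sketch:

```cpp
// 1 and 3 are ints, so 1/3 is integer division: exactly 0, no rounding.
const int    oneThirdInt    = 1 / 3;

// 1.0/3.0 is floating point division: the result is a specific, exactly
// defined double -- a representable value close to 1/3 under the
// implementation's rounding -- it just isn't the mathematical 1/3.
const double oneThirdDouble = 1.0 / 3.0;
```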
 

James Kanze

Andy Champ wrote:

[...]
The results of floating point arithmetic are by and large
implementation defined (e.g., there is no guarantee that
addition of two floats yields a best possible approximation to
the sum of the represented numbers), but one could argue that
the standard requires addition to yield the actual sum if the
sum is representable.

I don't think it does. It's been a long time since I looked at
this, so the situation may have changed (and I may be
remembering things wrongly), but IIRC, the standard makes no
requirements with regards to the accuracy of floating point
arithmetic. From a QoI point of view (and IEEE does make such
requirements), I would expect one specific, well defined result,
but I don't think that the standard really requires that; I
think that an implementation which always returns 0.0 for
floating point arithmetic would be formally conformant (but not
very useful).
 

peter koch

I don't want to start a flame-fest here Peter, but it's generally known
that a floating point number is an approximation to an actual value.

It is not. When converting from one value to another, you might well
get an approximation, but this is an entirely different matter.
Perhaps you'd like to explain to us what floating point system is able
to accurately express the precise value of 1/3?  And what will the
result be if you double it?  And if you subtract 1/3 from 1, how it will
compare to the previous result?

It is only you who is arguing about arithmetic calculations. This thread
is about sorting - no calculations are carried out, only comparisons.
You incorrectly hinted to the OP that he should not use direct
comparisons, and that is just nonsense.

/Peter
 

Kai-Uwe Bux

James said:
Andy Champ wrote:
[...]
The results of floating point arithmetic are by and large
implementation defined (e.g., there is no guarantee that
addition of two floats yields a best possible approximation to
the sum of the represented numbers), but one could argue that
the standard requires addition to yield the actual sum if the
sum is representable.

I don't think it does.

The standard says in [5.7/3]:

The result of the binary + operator is the sum of the operands.
...

So, _if_ the involved floats are rational numbers (i.e. not NAN or something
like that) _and_ their sum is a float, then I don't see where the standard
grants an exception. Surely, [5/5] doesn't apply:

If during the evaluation of an expression, the result is not
mathematically defined or not in the range of representable
values for its type, the behavior is undefined, ...

since the hypotheses are designed to avoid that case.
It's been a long time since I looked at
this, so the situation may have changed (and I may be
remembering things wrongly), but IIRC, the standard makes no
requirements with regards to the accuracy of floating point
arithmetic.

True, but that only applies when the sum is _not_ representable as a float.
In that case, you usually want a best approximation and the standard does
not go there.
From a QoI point of view (and IEEE does make such
requirements), I would expect one specific, well defined result,
but I don't think that the standard really requires that; I
think that an implementation which always returns 0.0 for
floating point arithmetic would be formally conformant (but not
very useful).

As far as compatibility with the standard is concerned, I think that a
floating point format where each value is irregular (i.e., not a real
number but NAN or the like) would be compatible with the standard. However,
in that case the if-part of my statement is never satisfied.


Best

Kai-Uwe Bux
 

Kai-Uwe Bux

Jeff said:
James said:
I don't know where it is "generally known"; all of the experts I
know seem to concure that a floating point number exactly
represents a specific value.

This is a bit of a philosophical conundrum, because it comes down to the
definitions of words like "actual," "specific," "number," and "value."

An IEEE floating point representation has only so many bits. Sometimes,
there would be no additional information in those bits, anyway. In this
case, we can think of the fp number as corresponding to a single point
on the real number line. For example, we may literally specify 0.0, or
0.5, with no loss in precision.

Other times, meaningful digits are lost by the encoding into IEEE fp
representation. This effectively introduces noise. [1] In this case,
the fp number corresponds to a range of values, and a contiguous region
on the real number line. For example, pi isn't really equal to
11.00100100001111110110101010001000100001011010001100001, but is within
some epsilon of that value.

What happens here is not that the number

11.00100100001111110110101010001000100001011010001100001b

is an approximation or carries an error. What happens is that you use it as
an approximation for some other number (in this case, pi). The number

11.00100100001111110110101010001000100001011010001100001b

neither knows nor cares about this usage; in fact it could be used to
approximate a whole lot of numbers and it would be best used to approximate

11.00100100001111110110101010001000100001011010001100001b


The number

11.00100100001111110110101010001000100001011010001100001b

is a particular (rational) value. Which meaning you attach to that value is
up to you.

As an engineering undergraduate, I was taught to represent noise as a
margin of error (MoE), usually represented as a percent of the original
value. Keeping track of the MoE in a result is straight-forward, if you
know the MoE of your inputs. Fortunately, IEEE specifies exactly the
epsilon (maximum amount of noise) introduced by fp encoding, so precise
calculations certainly are possible, in the sense that the MoE is
bounded and calculable. [2] The problem we now have in industry is that
some folks just do not seem willing to keep track of accumulated noise
values, so they just blindly hope the noise is not more than some
apparently large value, e.g. 0.1%.

Any two arbitrary fp numbers a and b, with positive epsilons e and f
respectively, really represent the ranges [a-e,a+e] and [b-f,b+f].

Well, floating point numbers are points on the real line. The interval you
_want_ it to represent is something only the particular application can
define. For instance, the error terms e and f need not be related to the
precision of the float if some form of physical measurement gives rise to
the values a and b. Nothing inherent in a float gives you margins
of error.

If we perform an operation on them, the resulting region is not hard to
calculate; e.g., for multiplication, just distribute the terms:

[a-e,a+e] * [b-f,b+f] = [ab-af-be-ef,ab+af+be+ef]

As you can see, the result is ab +- (af+be+ef). The noise term doesn't
need to be represented exactly for real-world use, just bounded by
rounding outward. You may be able to prove a priori that you don't even
have to calculate the ef term; I'm not sure whether it's always zero, or
whether it might have some denormalized representation.

Maybe, interval arithmetic is more suitable for that kind of application as
it keeps track of the MoE along the way of a computation.


Best

Kai-Uwe Bux
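A minimal sketch of the interval-multiplication step discussed above (the Interval type is illustrative, not any particular library). The general rule, valid for endpoints of any sign, is to take the min and max of the four endpoint products; a strictly rigorous version would additionally round the resulting endpoints outward:

```cpp
#include <algorithm>

struct Interval { double lo, hi; };  // illustrative type, not a library

// Product of two intervals: bounded by the extremes of the four
// endpoint products. (A rigorous version would round lo down and
// hi up to absorb floating point rounding error.)
Interval mul(Interval a, Interval b)
{
    double p1 = a.lo * b.lo;
    double p2 = a.lo * b.hi;
    double p3 = a.hi * b.lo;
    double p4 = a.hi * b.hi;
    return { std::min({ p1, p2, p3, p4 }),
             std::max({ p1, p2, p3, p4 }) };
}

// E.g. mul({1.9, 2.1}, {2.9, 3.1}) contains the true product 2 * 3 = 6.
```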
 
