If I don't use 'unordered', I could define an artificial order
by specializing std::less<T>. The good thing about this approach
is that I don't have to specify the comparator for std::map.
However, this could break other code. For example, suppose I
first compare two doubles, and later change double to
std::complex<double>, which should not be comparable. But since
I specialized std::less<T> for std::complex<T>, the compiler
would still accept the code, which is wrong.
Not if the user code used < for the comparison. The whole point
of specializing std::less (rather than defining operator<) is
that user code won't use it; it will only be used for ordering
in containers.
Alternatively, I could make a function object that gives an
artificial order, but then I have to specify it whenever I use
std::map, which is too tedious.
Really?
Since both of the above approaches have shortcomings, it is then
better to use 'unordered' if I can, right?
Using unordered means that you have to define a hash function.
Defining a good hash function is (usually) a lot more complex
than defining an ordering, and if you provide a bad one, it
almost certainly won't show up in your tests, but your actual
application will run significantly slower. (Actually, it is
possible to more or less test the quality of a hash function:
hash a large number of random instances of your object modulo
something (say 100), count each of the results, and then do
some very simple statistical analysis on the counts. I actually
do this when reasonable, but most people I've seen don't, and
it's not necessarily simple to generate "random" values for many
classes.)