Sorting complexities
If we have O(n log n) sorting algorithms, why do we still
consider O(n²) algorithms?
Because they're usually simpler. If you don't have a library
routine handy that can be used directly, it's usually a lot
less work to implement the simpler algorithm correctly.
Because they're simpler, they usually have smaller constant
factors as well, which means that they're often faster for small
sets of data. But that's rarely a consideration in actual code.
I'll call std::sort even if I know the vector never has more
than five members, even if I could write an insertion sort which
would be faster for such a small data set. (Using a ready-made
library routine is always the simplest solution.) On the other
hand, I'll often use std::find for up to a hundred elements (or
more, depending on how often I need it), simply because it's
simpler than keeping the elements sorted and using
std::lower_bound.
And of course, all such decisions are well encapsulated, so I
can change them easily if the profiler says I have to.