When we have O(n log n), why do we still consider O(n^2) algorithms?


Pallav singh

Sorting complexities ….
Explain briefly: when we have O(n log n) sorting algorithms, why do we
still consider O(n^2) algorithms?

Thanks
Pallav singh
 

Bart van Ingen Schenau

Sorting complexities ….
        Explain briefly: when we have O(n log n) sorting algorithms, why do
we still consider O(n^2) algorithms?

This sounds like a homework question, but I will answer anyway.
The big-O complexity of an algorithm only tells us which one is more
efficient as N gets large. For small N, the constant factors that
big-O notation ignores still make a significant contribution.

For example, for searching we have binary search (O(logN)) and linear
search (O(N)).
Which would be better for searching a short list (N < 10)?

Bart van Ingen Schenau
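A minimal sketch of the two searches Bart compares (function names are my own, not from the thread). For N < 10, the linear loop's cheap per-step cost usually beats the binary search's branchier steps:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Linear search: one compare and one increment per step, and no
// precondition on the data. O(N) steps, but each step is very cheap.
int linear_find(const std::vector<int>& v, int key) {
    for (std::size_t i = 0; i < v.size(); ++i)
        if (v[i] == key) return static_cast<int>(i);
    return -1;
}

// Binary search: O(log N) steps, but each step is more expensive and
// the data must already be sorted.
int binary_find(const std::vector<int>& v, int key) {
    auto it = std::lower_bound(v.begin(), v.end(), key);
    if (it != v.end() && *it == key)
        return static_cast<int>(it - v.begin());
    return -1;
}
```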
 

MiB

Sorting complexities ….
        Explain briefly: when we have O(n log n) sorting algorithms, why do
we still consider O(n^2) algorithms?


In addition to Bart's excellent comment, I'd like to add that some
algorithms that are efficient on a single processor do not scale well
on massively parallel systems; there, seemingly sub-optimal approaches
often shine.

Just my $.02,

MiB.
 

Pascal J. Bourguignon

Bart van Ingen Schenau said:
This sounds like a homework question, but I will answer anyway.
The big-O complexity of an algorithm only tells us which one is more
efficient as N gets large. For small N, the constant factors that
big-O notation ignores still make a significant contribution.

For example, for searching we have binary search (O(logN)) and linear
search (O(N)).
Which would be better for searching a short list (N < 10)?

Another example is sorting a vector that is almost always already
sorted (or has only one element out of order).

Quicksort will still do O(n log n) operations, while insertion sort
needs only O(n) operations to verify that the array is sorted, and it
will also reorder the array more efficiently in the rare case when one
element is out of order.
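Pascal's claim about the already-sorted case can be checked by counting comparisons; the `insertion_sort` below is a standard textbook version (not code from the thread) that reports how many comparisons it made:

```cpp
#include <cstddef>
#include <vector>

// Insertion sort that counts its element comparisons. On an input
// that is already sorted, the inner while-condition fails at the
// first test for every i, so the whole pass costs only n-1
// comparisons: O(n).
std::size_t insertion_sort(std::vector<int>& v) {
    std::size_t comparisons = 0;
    for (std::size_t i = 1; i < v.size(); ++i) {
        int key = v[i];
        std::size_t j = i;
        // Shift larger elements one slot to the right.
        while (j > 0 && (++comparisons, v[j - 1] > key)) {
            v[j] = v[j - 1];
            --j;
        }
        v[j] = key;
    }
    return comparisons;
}
```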
 

Juha Nieminen

Pallav said:
Sorting complexities ….
Explain briefly: when we have O(n log n) sorting algorithms, why do we
still consider O(n^2) algorithms?

Here's an answer for your homework:

Insertion sort (an O(n^2) algorithm) is faster for an array of 10
elements than any O(n log n) sorting algorithm. If you need to, e.g.,
sort a million arrays of 10 elements each, it will be much faster to
use insertion sort on each one than any O(n log n) algorithm.
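One way Juha's observation shows up in practice is a size cutoff; the threshold value and names below are illustrative guesses, not something from the thread (production std::sort implementations typically do a similar switch internally):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

const std::size_t kSmallCutoff = 16;  // illustrative; real cutoffs are tuned

// Plain insertion sort for the small case.
void insertion_sort(std::vector<int>& v) {
    for (std::size_t i = 1; i < v.size(); ++i) {
        int key = v[i];
        std::size_t j = i;
        while (j > 0 && v[j - 1] > key) { v[j] = v[j - 1]; --j; }
        v[j] = key;
    }
}

// Dispatch on size: cheap O(n^2) code for tiny inputs,
// O(n log n) std::sort otherwise.
void hybrid_sort(std::vector<int>& v) {
    if (v.size() <= kSmallCutoff)
        insertion_sort(v);
    else
        std::sort(v.begin(), v.end());
}
```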
 

James Kanze

Sorting complexities ….
Explain briefly: when we have O(n log n) sorting algorithms, why do we
still consider O(n^2) algorithms?

Because they're usually simpler. If you don't have a library
routine handy which can be used directly, it's usually a lot
less work to implement the simpler algorithm.

Because they're simpler, they usually have smaller constant
factors as well, which means that they're often faster for small
sets of data. But that's rarely a consideration in actual code.
I'll call std::sort even if I know the vector never has more
than five members, even if I could write an insertion sort which
would be faster for such a small data set. (Using a ready-made
library routine is always the simplest solution.) On the other
hand, I'll often use std::find for up to a hundred elements (or
more, depending on how often I need it), simply because it's
simpler than keeping the elements sorted and using
std::lower_bound.

And of course, all such decisions are well encapsulated, so I
can change them easily if the profiler says I have to.
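James's encapsulation point might look like this; `NameTable` is a made-up class for illustration, not anything from the thread:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// The search strategy is hidden behind contains(). Today it is a
// plain linear std::find; if a profiler ever flags it, the body can
// switch to a sorted vector plus std::lower_bound without touching
// any caller.
class NameTable {
public:
    void add(const std::string& name) { names_.push_back(name); }
    bool contains(const std::string& name) const {
        return std::find(names_.begin(), names_.end(), name)
               != names_.end();
    }
private:
    std::vector<std::string> names_;
};
```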
 
