Question on quicksort and mergesort calculations


Chad

I was looking at some old posts in comp.lang.c and found the following

http://groups.google.com/group/comp...nk=gst&q=recurrence+relation#b3b5046326994e18

I have some questions regarding the post.

First, and I quote

"Given an algorithm with loops or recursive calls, the way you find
a big-O equivalence class for that algorithm is to write down a
"recurrence relation" for the time taken to execute the code. For
instance, for merge sort, one gets (fixed width font required here;
$x$ denotes a variable x, etc.):


                {
        T(n) =  { $a$,              $n = 1$, $a$ a constant
                { $2T(n/2) + cn$,   $n > 1$, $c$ a constant


You then apply induction-type reasoning to show that, e.g., when
$n$ is a power of two, $T(n) = an + cn \log_2 n$. In big-O notation,
all the constant factors vanish, so this shows that mergesort is
O(n log n). "

First, how do you get this recurrence relation for mergesort? Second,
how did he get O(n log n)?


And how, later on

"(Personally, I always found the worst part of dealing with
recurrence
relations to be going from an open form to a closed one -- you just
have to have memorized all those formulae like "1 + 2 + ... + n =
n(n+1)/2", etc., and recognize them in the recurrences. Once you
can see this summation and recognize the closed form, it instantly
becomes obvious that, e.g., bubble sort is O(n*n).) "

I don't see how O(n*n) can be derived from 1 + 2 + ... + n = n(n+1)/2



"
 

Ben Bacarisse

I have set followup-to in order to try to keep this where it belongs.

Chad said:

<snip>
First, how do you get this recurrence relation for mergesort? Second,
how did he get O(n log n)?

You see that the algorithm has a pattern: when n == 1 it does a fixed
amount of work that does not depend on n (that is the $a$ constant),
and otherwise it calls itself twice, each time with half the data
(that is the 2T(n/2)). The + cn part is there because splitting the
data in two and merging the sorted halves back together takes work
proportional to n.
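
To make that mapping concrete, here is a minimal merge sort sketch in
C (an illustration of the idea, not code from the original posts);
the comments mark which term of the recurrence each part accounts
for:

#include <string.h>

/* Merge the sorted halves a[0..mid) and a[mid..n) through the
 * scratch buffer tmp.  Each element is examined once: at most
 * n-1 comparisons and n moves -- this is the c*n term. */
static void merge(int *a, int *tmp, size_t mid, size_t n)
{
    size_t i = 0, j = mid, k = 0;
    while (i < mid && j < n)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid)
        tmp[k++] = a[i++];
    while (j < n)
        tmp[k++] = a[j++];
    memcpy(a, tmp, n * sizeof *a);
}

void merge_sort(int *a, int *tmp, size_t n)
{
    if (n <= 1)        /* base case: constant work, the $a$ term */
        return;
    merge_sort(a, tmp, n / 2);             /* first  T(n/2) */
    merge_sort(a + n / 2, tmp, n - n / 2); /* second T(n/2) */
    merge(a, tmp, n / 2, n);               /* plus c*n to merge */
}

The caller supplies the scratch buffer: to sort an array of N ints,
pass a second array of N ints as tmp.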

Solving RRs is very similar to solving differential equations, and as
there, it is often simplest just to guess and test. In the case of
algorithms, you can almost always guess after a few goes. The above
turns into O(n log(n)) because O(an + cn log(n)) == O(n log(n)): you
can throw away not only constant factors (as the text says) but also
less significant terms (like the term involving n). If this is
unsatisfying (and I guess it will be), you need to read a book on the
subject.
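
To see the guess-and-test route in action: guess that
T(n) = an + cn log_2(n) for n a power of two, and substitute it into
the recurrence:

    2T(n/2) + cn = 2(a(n/2) + c(n/2) log_2(n/2)) + cn
                 = an + cn(log_2(n) - 1) + cn
                 = an + cn log_2(n)
                 = T(n)

and the base case holds too: a*1 + c*1*log_2(1) = a = T(1). So the
guess satisfies the recurrence exactly.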
<snip>

I don't see how O(n*n) can be derived from 1 + 2 + ... + n =
n(n+1)/2

n(n+1)/2 = n*n/2 + n/2. In Big O, only the n*n matters -- neither the
/2 nor the n/2 has any asymptotic effect.
 

Paul Hsieh

Chad said:

<snip>

First, how do you get this recurrence relation for mergesort?

A merge sort of two or more elements is performed by doing two merge
sorts of half the list (top half, then bottom half) and then merging
the two sorted halves. Merging can be done by examining each element
of each list and doing a single comparison and move. There are at
most (n-1) compares, and n moves. So if we are counting comparisons
or moves we get something like:

X(n) <= 2 * X(n/2) + c1 * n

(2 merges of half the list, then a merge of the two half-sized lists
into an output of the whole starting list.) For one element, X(1) =
c2. So if we take T(n) to be a least upper bound on the number of
operations, T(n) >= X(n), then we get:

T(n) = 2 * T(n/2) + c1 * n, T(1) = c2.
[...] Second, how did he get O(n log n) [...]

Let us suppose that n = 2**(i+1), i.e. i = lg_2(n) - 1 (= lg_2(n/2)).
We can form the following telescoping sum:

          T(n) -        2 * T(n/2)  =     c1 * n       = c1 * n
    2 * T(n/2) -        4 * T(n/4)  = 2 * c1 * (n / 2) = c1 * n
    4 * T(n/4) -        8 * T(n/8)  = 4 * c1 * (n / 4) = c1 * n
    8 * T(n/8) -       16 * T(n/16) = 8 * c1 * (n / 8) = c1 * n
    ...
  (2**i) * T(2) - (2**(i+1)) * T(1) = (2**i) * c1 * (n / 2**i) = c1 * n

Then summing vertically we get:

T(n) - (2**(i+1))*T(1) = (i+1) * c1 * n

or:

T(n) = n * c2 + (lg_2(n)) * c1 * n

We have only derived this for n a power of two, but we can substitute
the result back into the recurrence to verify that it is consistent:

(T(n) - c1 * n) / 2
= (n * c2 + (lg_2(n)) * c1 * n - c1 * n) / 2
= (n * c2 + (lg_2(n)-1) * c1 * n) / 2
= (n/2) * c2 + (lg_2(n) - 1) * c1 * (n / 2)
= (n/2) * c2 + (lg_2(n/2)) * c1 * (n / 2)
= T(n/2)

So we are set. In any event the dominant term with respect to n is
(lg_2(n)) * c1 * n, so we can conclude that T(n) is O(n log n) (the
base of the logarithm is irrelevant inside big-O, since logs of
different bases differ only by a constant factor).
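
If the algebra feels shaky, a few lines of C can check it numerically
(illustrative code, not from the original posts; the constants c1 = 3
and c2 = 5 are arbitrary):

#include <math.h>
#include <stdio.h>

#define C1 3.0  /* arbitrary per-merge constant */
#define C2 5.0  /* arbitrary base-case constant */

/* T(n) computed straight from the recurrence */
static double T(double n)
{
    return (n <= 1.0) ? C2 : 2.0 * T(n / 2.0) + C1 * n;
}

int main(void)
{
    /* closed form: T(n) = n*c2 + lg_2(n)*c1*n, for n a power of two */
    double n;
    for (n = 1.0; n <= 1024.0; n *= 2.0)
        printf("n=%6.0f  recurrence=%10.0f  closed form=%10.0f\n",
               n, T(n), n * C2 + log2(n) * C1 * n);
    return 0;
}

For every power of two both columns print the same number. Compile
with -lm for log2().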
<snip>

Whatever; this is just basic mathematical skill.
I don't see how O(n*n) can be derived from 1 + 2 + ... + n = n(n+1)/2

n(n+1)/2 = n**2/2 + n/2, which is dominated by n**2/2. More rigorously:

lim{n->inf} ( (n*(n+1)/2) / (n**2/2) ) = 1

which just brings us back to the basic definition of O(f(n)).
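
For reference, the definition being appealed to: f(n) is O(g(n)) iff
there exist constants M > 0 and n0 such that

    |f(n)| <= M * |g(n)|    for all n >= n0.

A finite limit of f(n)/g(n) -- here it is 1 -- hands you such an M
directly: any M a bit above the limit works once n is large enough.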
 

Chad

<snip>

Actually, now that I think about it, I have no idea how one gets the
following summation

1 + 2 + 3 + ... + n

for bubble sort.
 

Chad

Chad said:

<snip>

n(n+1)/2 is (n*n + n)/2. To get O(n*n) from this, bear in mind that n grows
insignificantly compared to n*n as n increases, and big-O is a sort of
big-picture measure that loses nitty-gritty detail such as n in the face
of something big and obvious like n*n. So we're down to (n*n)/2. To get
from there to n*n, simply buy a computer that runs at half the speed. (In
other words, constant factors aren't terribly interesting when compared to
the rate of growth of n - they don't change the /shape/ of the algorithmic
complexity.)

It's actually 1 + 2 + 3 + ... + (n-1).

Consider { 6, 5, 4, 3, 2, 1 }

To sort this six-element array using bubble sort, you bubble the biggest
element to the right (five comparisons and swaps), and then you have an
unsorted five-element array and one sorted item. To sort the five-element
array using bubble sort, you bubble the biggest element to the right (four
comparisons and swaps), and then you have an unsorted four-element array
and two sorted items. To sort the four-element array using bubble sort,
you bubble the biggest element to the right (three comparisons and swaps),
and then you have an unsorted three-element array and three sorted items.

And so on: 5 + 4 + 3 + 2 + 1. In general that is (n-1) + (n-2) + ...
+ 1, which sums to n(n-1)/2, and that is O(n*n).
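
If you want to watch those counts fall out of real code, here is a
small C sketch (my own illustration, not from the original posts)
that counts the comparisons:

#include <stdio.h>

/* Bubble sort that reports how many comparisons it made.  Pass i
 * of the outer loop does n-1-i comparisons, so the total is
 * (n-1) + (n-2) + ... + 1 = n(n-1)/2. */
static unsigned long bubble_sort(int *a, size_t n)
{
    unsigned long compares = 0;
    size_t i, j;
    for (i = 0; i + 1 < n; i++) {
        for (j = 0; j + 1 < n - i; j++) {
            compares++;
            if (a[j] > a[j + 1]) {  /* bubble the bigger one right */
                int t = a[j];
                a[j] = a[j + 1];
                a[j + 1] = t;
            }
        }
    }
    return compares;
}

int main(void)
{
    int a[] = { 6, 5, 4, 3, 2, 1 };
    printf("comparisons: %lu\n", bubble_sort(a, 6)); /* 15 = 6*5/2 */
    return 0;
}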

Thank you for that clarification. Now, one last question. When Paul
sums up

          T(n) -        2 * T(n/2)  =     c1 * n       = c1 * n
    2 * T(n/2) -        4 * T(n/4)  = 2 * c1 * (n / 2) = c1 * n
    4 * T(n/4) -        8 * T(n/8)  = 4 * c1 * (n / 4) = c1 * n
    8 * T(n/8) -       16 * T(n/16) = 8 * c1 * (n / 8) = c1 * n
    ...

He gets

T(n) - (2**(i+1))*T(1) = (i+1) * c1 * n


I don't see where he gets T(1) from. Maybe this will give some
insight into my confusion: he has

T(n) = 2 * T(n/2) + c1 * n, T(1) = c2

I also don't see why he would introduce a second constant called c2
for T(1).
 

Chad

<snip>

Okay, I just sat and thought about the merge sort derivation Paul
did, and now I see how his calculation works.
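
In case it helps anyone else who got stuck where I did: the left-hand
column of Paul's sum telescopes. Summing the rows

    (2**k) * T(n/2**k) - (2**(k+1)) * T(n/2**(k+1))

over k = 0 .. i, every term cancels against the next row except the
very first and the very last, leaving

    T(n) - (2**(i+1)) * T(1)

and since n = 2**(i+1), the right-hand sides (each c1 * n) contribute
(i+1) * c1 * n in total. The constant c2 is just a name for T(1), the
cost of handling a one-element list; it gets its own name because
there is no reason it should equal c1, the per-element cost of
merging.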
 
