aminer
Hello,
Look at the following link; it's about parallel partition:
http://www.redgenes.com/Lecture-Sorting.pdf
I have tried to simulate this parallel partition method,
but I don't think it will scale, because we have to do a merging step,
which is essentially an array-copy operation. This array-copy
operation is expensive compared to the integer-compare
operation inside the partition function, and it is still
expensive compared to a string-compare operation inside the
partition function. So, since it doesn't scale, I have abandoned
the idea of implementing this parallel partition method in my parallel
quicksort.
I have also just read the following paper about Parallel Merging:
http://www.economyinformatics.ase.ro/content/EN4/alecu.pdf
And I have implemented this algorithm just to see its performance,
and I have noticed that its serial version is 8 times slower than the
merge function found in the serial mergesort algorithm. 8 times slower
is too slow.
So the only way to get a somewhat better parallel sorting algorithm
is to use the following one:
http://www.drdobbs.com/parallel/parallel-merge/229204454?queryText=parallel+sort
The idea is simple:
Let's assume we want to merge sorted arrays X and Y. Select the
median element X[m] of X. Elements in X[ .. m-1] are less than or
equal to X[m]. Using binary search, find the index k of the first
element in Y greater than X[m]. Thus Y[ .. k-1] are less than or
equal to X[m] as well. Elements in X[m+1 .. ] are greater than or
equal to X[m], and Y[k .. ] are greater. So merge(X, Y) can be
defined as

concat(merge(X[ .. m-1], Y[ .. k-1]), X[m], merge(X[m+1 .. ], Y[k .. ]))

Now we can recursively, in parallel, do merge(X[ .. m-1], Y[ .. k-1])
and merge(X[m+1 .. ], Y[k .. ]), and then concatenate the results.
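As an illustration only, here is a minimal sketch of this divide-and-conquer merge in Python (the function name dac_merge is my own invention, not from the article; the recursion is shown serially for clarity, but the two recursive calls are independent, so a parallel version could run them as separate tasks above some cutoff size):

```python
import bisect

def dac_merge(X, Y):
    # Keep X as the longer array so the median split stays balanced.
    if len(X) < len(Y):
        X, Y = Y, X
    if not X:
        return []
    m = len(X) // 2  # median index of X
    # Index of the first element in Y strictly greater than X[m].
    k = bisect.bisect_right(Y, X[m])
    # Everything left of the split is <= X[m]; everything right is >= X[m].
    # The two sub-merges below are independent and could run in parallel.
    return (dac_merge(X[:m], Y[:k])
            + [X[m]]
            + dac_merge(X[m+1:], Y[k:]))

print(dac_merge([1, 3, 5, 7], [2, 4, 6, 8]))  # [1, 2, 3, 4, 5, 6, 7, 8]
```

In a real parallel implementation the two recursive calls would be submitted to a task pool only while the subarrays are large, falling back to an ordinary serial merge below a cutoff to amortize the task-creation overhead.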
Thank you,
Amine Moulay Ramdane.