Mergesort algorithm for linked lists

Joerg Schoen

Hi folks!

Everyone knows how to sort arrays (e.g. quicksort, heapsort, etc.).
For linked lists, mergesort is the typical choice.

While I was looking for an optimized implementation of mergesort for
linked lists, I couldn't find one. I read something about McIlroy's
"Optimistic Merge Sort" and studied some implementations, but they
were for arrays. Does anybody know whether McIlroy's optimization is
applicable to truly linked lists at all?

Eventually, I came up with my own implementation (see below), which seems
to perform quite well. The algorithm is my own idea - at least I don't know
whether anybody else has come up with it before.

The algorithm is localized: it maintains a stack of sorted runs and their
lengths and merges small pieces as soon as possible. This takes advantage
of CPU caches.

Here are my questions about it:

 - Is my implementation new, or has somebody else come up with it before?

 - How does it compare with other optimized implementations (provided such
   exist)? I would like to hear about better implementations!

 - When optimizing quicksort, one usually uses insertion sort and the like
   for SMALL parts. I tried to do the same for the mergesort below, i.e.
   sort small pieces of length ~10 with insertion sort.

   It actually became slower :( I don't quite understand why.

Thanks for any comments!

Joerg

PS: You can find this implementation at
        http://www.die-Schoens.de/prg/testsort.tgz

--------------------- snip ------------------------------
#include <stddef.h> /*  for NULL  */

typedef struct linkedlist_t linkedlist_t;
struct linkedlist_t {
  linkedlist_t *Next;
  int Key;
};

#define NATURAL /*  recognize natural runs  */

linkedlist_t *mergesort(linkedlist_t *list)
{
  struct {
    linkedlist_t *list;
    unsigned int size;
#ifdef NATURAL
    unsigned int level;
#endif
  } stack[sizeof(int) * 8];
  int nstack;

  for(nstack = -1 ; ; ) {
    linkedlist_t *plist, *qlist, *curr;
    unsigned int psize, qsize;

    /*  Can we or do we have to merge lists from the stack?  */
    if(nstack < 1 || (
#ifdef NATURAL
                       stack[nstack - 1].level != stack[nstack].level
#else
                       stack[nstack - 1].size != stack[nstack].size
#endif
                       && list != NULL)) {
      /* **  Push element(s) on the stack  ** */
      if(list == NULL)
        /*  This is where we break the loop. We have nstack == 0 now  */
        break;

      /* **  Push lists of length 1  ** */
      curr = list;
      psize = 1;

      list = list->Next;

#ifdef NATURAL
      /*  Check for a natural run  */
      for(plist = curr ; list != NULL ;
          plist = list, list = list->Next, ++psize)
        if(list->Key < plist->Key)
          break;

      /*  Properly terminate the run  */
      plist->Next = NULL;
#else
      /*  Properly terminate list  */
      curr->Next = NULL;
#endif

      /* **  Push this run on the stack  ** */
      ++nstack;
      stack[nstack].list = curr;
      stack[nstack].size = psize;
#ifdef NATURAL
      stack[nstack].level = 0;
#endif

      continue;
    }

    /* **  Merge top two entries from stack  ** */
    plist = stack[nstack].list; psize = stack[nstack].size;
    --nstack;
    qlist = stack[nstack].list; qsize = stack[nstack].size;

    /* **  Set up new stack element. This also simplifies the main
           merging loop below  ** */
    if(plist->Key < qlist->Key) {
      curr = plist;
      plist = plist->Next; --psize;
    } else {
      /*  On equal keys take from qlist, the run collected earlier,
          so the sort stays stable  */
      curr = qlist;
      qlist = qlist->Next; --qsize;
    }

    stack[nstack].list = curr;

    /*  Number of elements is just the sum  */
    stack[nstack].size += stack[nstack + 1].size;
#ifdef NATURAL
    stack[nstack].level++;
#endif

    /* **  Here is the main merging loop  ** */
    while(psize > 0 && qsize > 0) {
      if(plist->Key < qlist->Key) {
        curr->Next = plist; curr = plist;
        plist = plist->Next; --psize;
      } else { /*  ties again go to qlist for stability  */
        curr->Next = qlist; curr = qlist;
        qlist = qlist->Next; --qsize;
      }
    }

    /*  Add remainder to result list  */
    if(psize > 0) {
      curr->Next = plist;
    } else {
      curr->Next = qlist;
    }
  }

  /*  Here we treat the case of list == NULL  */
  return nstack == 0 ? stack[nstack].list : NULL;
}
-------------------- snap -----------------------
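For quick testing, a minimal driver might look like this (a sketch of
mine, assuming the code above is in the same file; the nodes live in a
local array just for illustration):

#include <stdio.h>

int main(void)
{
  static const int keys[] = { 5, 3, 8, 1, 9, 2, 7, 4, 6, 0 };
  linkedlist_t nodes[10];
  linkedlist_t *list = NULL;
  int i;

  /*  Link the nodes up front to back  */
  for(i = 9 ; i >= 0 ; i--) {
    nodes[i].Key = keys[i];
    nodes[i].Next = list;
    list = &nodes[i];
  }

  list = mergesort(list);

  /*  Expected output: 0 1 2 3 4 5 6 7 8 9  */
  for( ; list != NULL ; list = list->Next)
    printf("%d ", list->Key);
  putchar('\n');
  return 0;
}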
 
CBFalconer

Joerg said:
Everyone knows how to sort arrays (e.g. quicksort, heapsort, etc.).
For linked lists, mergesort is the typical choice.

While I was looking for an optimized implementation of mergesort for
linked lists, I couldn't find one. I read something about McIlroy's
"Optimistic Merge Sort" and studied some implementations, but they
were for arrays. Does anybody know whether McIlroy's optimization is
applicable to truly linked lists at all?

Eventually, I came up with my own implementation (see below), which
seems to perform quite well. The algorithm is my own idea - at least
I don't know whether anybody else has come up with it before.

The algorithm is localized: it maintains a stack of sorted runs and
their lengths and merges small pieces as soon as possible. This takes
advantage of CPU caches.

.... snip code ...

Too long and complicated for me. There is an example in about 60
lines, including comments, in wdfreq.c, which in turn is an example
application in hashlib.zip. See:

<http://cbfalconer.home.att.net/download/>

--
<http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.txt>

"A man who is right every time is not likely to do very much."
-- Francis Crick, co-discoverer of DNA
"There is nothing more amazing than stupidity in action."
-- Thomas Matthews
 
David T. Ashley

Joerg Schoen said:
- Is my implementation new, or has somebody else come up with it before?

It is unlikely that your implementation is unique. Somebody somewhere has
probably done it before. Searching and sorting are very well explored
topics, both theoretically and practically.

But even if your actual implementation of a small aspect of it is unique,
nobody will care, and your place in history is not assured.

Generally, people will only care if it solves a practical problem worth a
lot of money, or adds insight or a solution to a theoretical problem. If
you have a reimplementation of an O(n log n) sorting algorithm, nobody will
care.

Now, if you develop a general O(log n) sorting algorithm ... there will be
more listeners.

Merging using linked lists doesn't seem fundamentally different from
merging using arrays.

Related question: We discussed the "bubble-sort" in class today. I have
this new sort where I find the last element instead of the first--I think
I'll call it the "rock-sort". Will I get the Nobel prize or something?

Sorting has been well explored. Here is what you're up against:

http://en.wikipedia.org/wiki/Category:Sort_algorithms
 
CBFalconer

David T. Ashley said:
.... snip ...

Now, if you develop a general O(log n) sorting algorithm ... there
will be more listeners.

They exist. See Sedgewick. They need more hardware, though.

David also said:
Merging using linked lists doesn't seem fundamentally different from
merging using arrays.

You use it without disturbing/moving/copying the actual data, and lists
are often the most convenient input mechanism when the volume is unknown.
In addition, mergesort has no worst case and is stable.

 
jo65

David said:
But even if your actual implementation of a small aspect of it is unique,
nobody will care, and your place in history is not assured.

I wasn't looking for a "place in history", just for comments on my
implementation.

I disagree that nobody cares. I actually searched around for useful
practical implementations of mergesort for linked lists. What I found were
only textbook-style implementations - nice but slow. That's why I posted
this here: to ask for comments and maybe find a better implementation.

David also said:
Generally, people will only care if it solves a practical problem worth a
lot of money, or adds insight or a solution to a theoretical problem. If
you have a reimplementation of an O(n log n) sorting algorithm, nobody will
care.

Theoretically, the whole "mergesort" business is well known and boring. But
the question remains how to do it efficiently in practice.

You contradict yourself. First you state that people are only interested in
theoretical insights or in solutions to practical problems. But here I
attempt to solve a practical problem, namely to have a real-world,
efficient implementation of linked-list mergesort, and you state that "no
one cares".

David also said:
Merging using linked lists doesn't seem fundamentally different from
merging using arrays.

Except for the fact that the implementations are different. One of my
original questions was whether the "optimistic merge sort", which is an
efficient implementation of mergesort, also works for linked lists. The
only implementations I found were for arrays.

David also said:
Related question: We discussed the "bubble-sort" in class today. I have
this new sort where I find the last element instead of the first--I think
I'll call it the "rock-sort". Will I get the Nobel prize or something?

It seems you are not having a good day and are trying to be rude all the
time. Please comment on the topic, i.e. my code, and stop these useless
comments.
 
jo65

CBFalconer said:
Too long and complicated for me. There is an example in about 60
lines, including comments, in wdfreq.c, which in turn is an example
application in hashlib.zip. See:

<http://cbfalconer.home.att.net/download/>

Thanks for the comment! Your implementation is basically the one from
Sedgewick's book. It has the disadvantage that you have to run through
the linked list repeatedly to split it, but it is also localized, i.e.
it takes advantage of caches.
 
CBFalconer

jo65 said:
Thanks for the comment! Your implementation is basically the one from
Sedgewick's book. It has the disadvantage that you have to run through
the linked list repeatedly to split it, but it is also localized, i.e.
it takes advantage of caches.

Well, I read Sedgewick long ago :). However, I fail to see how you
can do a merge without first splitting.

 
wade

Joerg said:
- When optimizing quicksort, one usually uses insertion sort and the like
for SMALL parts. I tried to do the same for the mergesort below, i.e.
sort small pieces of length ~10 with insertion sort.

It actually became slower :( I don't quite understand why.

Using Wikipedia's expected numbers of comparisons for randomized quick
sort (1.39 n log2 n), insertion sort (n^2 / 4), and merge sort (n log2 n
- 1.26 n + 1), for ten elements quicksort would do about 46 comparisons,
insertion sort 25 comparisons, and merge sort 22 comparisons.

Also, for a large array, the insertion sort "helper" for quicksort can
be done as a single pass over the entire array at the end (each element
is already close to its final position) meaning the insertion sort
overhead is much less than the quicksort overhead for a bunch of very
small partitions.

For mergesort on a list, your insertion-sort helper has overhead almost
as large as the merge-sort overhead, since you have to do a bunch of
"little" insertion sorts before you can start merging.
 
David T. Ashley

jo65 said:
It seems you are not having a good day and are trying to be rude all the
time. Please comment on the topic, i.e. my code, and stop these useless
comments.

One has to distinguish different types of contributions to the world:

a) Practical "make life easier" contributions: these are contributions
that are convenient, but that could be replicated by many others. A
mergesort implementation that uses linked lists instead of arrays is in
this category.

b) Fundamental contributions: these have a lower probability of being
replicated by others. The original development of quicksort and, to a
lesser degree, mergesort is in this category.

Your contribution is in category (a) rather than (b). It has value, just
not a lot of value. The key issue is whether anyone seeking the solution
would spend more time searching for and porting your solution than
implementing it themselves from scratch. It is a toss-up.

You might investigate www.snippets.org.

My intention was not to be rude.
 
jo65

David said:
One has to distinguish different types of contributions to the world:

a) Practical "make life easier" contributions: these are contributions
that are convenient, but that could be replicated by many others. A
mergesort implementation that uses linked lists instead of arrays is in
this category.

b) Fundamental contributions: these have a lower probability of being
replicated by others. The original development of quicksort and, to a
lesser degree, mergesort is in this category.

Your contribution is in category (a) rather than (b). It has value, just
not a lot of value. The key issue is whether anyone seeking the solution
would spend more time searching for and porting your solution than
implementing it themselves from scratch. It is a toss-up.

Of course I never thought I was in category (b). Being in (a), the
question for me was: how do I do mergesort on a linked list efficiently?
The sample code in Sedgewick and in other sources didn't look too
"efficient" to me, so I was looking for a better one and eventually came
up with my piece of code.

I don't think that my solution requires porting at all. Besides that, it
is more efficient than the "standard" solution. Of course this only means
"faster by some percentage" - it is still O(n * log(n)). I actually want
to use it in a real-world application where I am looking for a reasonable
implementation of mergesort.

David also said:
You might investigate www.snippets.org.

My intention was not to be rude.

Sorry for misunderstanding you. I am not a native speaker - maybe that is
the problem.
 
Ben Pfaff

CBFalconer said:
Too long and complicated for me. There is an example in about 60
lines, including comments, in wdfreq.c, which in turn is an example
application in hashlib.zip. See:

<http://cbfalconer.home.att.net/download/>

As long as we're posting links to linked list merge sorts, my
implementation is at
http://cvs.savannah.gnu.org/viewcvs/pspp/src/libpspp/ll.c?rev=1.3&root=pspp&view=markup
in function ll_sort().

It's optimized for readability.

CBFalconer wrote in a later message:
Well, I read Sedgewick long ago :). However, I fail to see how you
can do a merge without first splitting.

My implementation doesn't do any splitting either. At least, I
don't think it does; you can tell me if it contains something
that you'd interpret to be splitting.
 
Richard Harter

Joerg Schoen said:
Hi folks!

Everyone knows how to sort arrays (e.g. quicksort, heapsort, etc.).
For linked lists, mergesort is the typical choice.

While I was looking for an optimized implementation of mergesort for
linked lists, I couldn't find one. I read something about McIlroy's
"Optimistic Merge Sort" and studied some implementations, but they
were for arrays. Does anybody know whether McIlroy's optimization is
applicable to truly linked lists at all?

Eventually, I came up with my own implementation (see below), which seems
to perform quite well. The algorithm is my own idea - at least I don't know
whether anybody else has come up with it before.

The algorithm is localized: it maintains a stack of sorted runs and their
lengths and merges small pieces as soon as possible. This takes advantage
of CPU caches.

Here are my questions about it:

 - Is my implementation new, or has somebody else come up with it before?

 - How does it compare with other optimized implementations (provided such
   exist)? I would like to hear about better implementations!

 - When optimizing quicksort, one usually uses insertion sort and the like
   for SMALL parts. I tried to do the same for the mergesort below, i.e.
   sort small pieces of length ~10 with insertion sort.

   It actually became slower :( I don't quite understand why.

Thanks for any comments!

Joerg

[snip code]

Here are some suggestions. First of all, the proper newsgroup probably
should be comp.programming. Secondly, I would much prefer a more
extensive description of the actual algorithm. Your description is
quite vague; seemingly, the only way to understand what you are actually
doing is to read the code. Thirdly, your code is hard to read (in my
opinion, of course). As a general rule, code is easier to read if things
are cleanly indented and well lined up.


Here are some of the things that I find make it difficult to read.
First of all, in code meant for presentation it reads better if you get
rid of the ifdefs and replace them with normal tests on flags. As it is,
your preprocessor tests break up the flow of the code.

Secondly, I don't much like interspersing single-line comments with code.
I suggest prefixing blocks of code with comment blocks that describe the
intent of the entire block. If you do want to comment on a single line, I
prefer a sidebar comment. Stream-of-consciousness commenting is helpful
to the author but not to the reader.

Thirdly, my preference is that each variable be declared on a separate
line with a sidebar comment describing it. Then someone else reading the
code doesn't have to puzzle out what each name means.

As a strictly C note, experience says do this

      if(list == NULL) break; /* Single statement if */

or this

      if(list == NULL) { /* If controlling block */
        break;
      }

but not this

      if(list == NULL) /* This is a maintenance trap */
        break;

As a note on mergesorting linked list data, you don't need to scan the
list to get its length. Instead sort the first two elements. You now
have a sorted list of length 2. Sort the next two elements to get a
second sorted list; merge it with the first. You now have a sorted list
of length 4. Sort the next four elements and merge to get a list of
length 8. Repeat as needed. Season to taste.
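
For concreteness, the scheme might be sketched like this (a rough cut of
mine against Joerg's linkedlist_t; merge(), sort2k() and mergesort2() are
names I made up). sort2k() detaches up to 2^k nodes from the input and
returns them sorted:

static linkedlist_t *merge(linkedlist_t *a, linkedlist_t *b)
{
  linkedlist_t head, *tail = &head;

  while(a != NULL && b != NULL) {
    if(b->Key < a->Key) { tail->Next = b; b = b->Next; }
    else                { tail->Next = a; a = a->Next; } /* ties keep a: stable */
    tail = tail->Next;
  }
  tail->Next = (a != NULL) ? a : b;
  return head.Next;
}

static linkedlist_t *sort2k(linkedlist_t **list, int k)
{
  linkedlist_t *a;

  if(*list == NULL)
    return NULL;
  if(k == 0) {                /*  detach a single node  */
    a = *list;
    *list = a->Next;
    a->Next = NULL;
    return a;
  }
  a = sort2k(list, k - 1);              /*  first 2^(k-1) nodes, sorted  */
  return merge(a, sort2k(list, k - 1)); /*  next 2^(k-1) nodes, sorted   */
}

linkedlist_t *mergesort2(linkedlist_t *list)
{
  linkedlist_t *result;
  int k;

  result = sort2k(&list, 1);        /*  a first sorted pair           */
  for(k = 1 ; list != NULL ; k++)   /*  result doubles in each round  */
    result = merge(result, sort2k(&list, k));
  return result;
}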

I'm not sure how this compares with your algorithm because I haven't sat
down to figure out exactly what you're doing; however, your algorithm
looks interesting.
 
Ben Pfaff

Richard Harter said:
As a note on mergesorting linked list data, you don't need to scan the
list to get its length. Instead sort the first two elements. You now
have a sorted list of length 2. Sort the next two elements to get a
second sorted list; merge it with the first. You now have a sorted list
of length 4. Sort the next four elements and merge to get a list of
length 8. Repeat as needed. Season to taste.

Sure, that'll work, but it's somewhat brute force. I think that
a "natural" merge sort makes more sense: take the first two runs
of ascending values and merge them to produce the first output
run, and repeat that process until you've consumed the input.
Then start over from the top, until there's only a single output
run.
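
The run-collecting step might look something like this (my sketch with
Joerg's linkedlist_t; take_run() is a name I invented). One pass of the
sort then merges pairs of such runs until the input is consumed:

/*  Detach the leading ascending run from *list and return it
    NULL-terminated; *list is left pointing at the remainder  */
static linkedlist_t *take_run(linkedlist_t **list)
{
  linkedlist_t *run = *list, *p = run;

  if(run == NULL)
    return NULL;
  while(p->Next != NULL && p->Key <= p->Next->Key)
    p = p->Next;
  *list = p->Next;   /*  remainder of the input  */
  p->Next = NULL;    /*  terminate the run  */
  return run;
}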
 
CBFalconer

Ben said:
.... snip ...
http://cvs.savannah.gnu.org/viewcvs/pspp/src/libpspp/ll.c?rev=1.3&root=pspp&view=markup in function ll_sort().
.... snip ...


My implementation doesn't do any splitting either. At least, I
don't think it does; you can tell me if it contains something
that you'd interpret to be splitting.

To merge something you need two things to merge. You start with
one list. I can imagine a technique where you use the end of a run
to form the end of a split, and then try to merge with the
remainder. I don't think such an algorithm will be O(n log n),
especially in the worst case, and preserving stability will require
care. List splitting has a very taut inner loop.

I haven't looked at your implementation so far.
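
For reference, the splitting loop I have in mind is the usual two-speed
pointer walk, something like this sketch (my wording, against Joerg's node
type; split() is not necessarily the name used in wdfreq.c):

/*  Cut a non-empty list after its middle node and return the second
    half; the first half stays NULL-terminated in "list"  */
static linkedlist_t *split(linkedlist_t *list)
{
  linkedlist_t *slow = list, *fast = list->Next, *rest;

  while(fast != NULL && fast->Next != NULL) {
    slow = slow->Next;            /*  one step   */
    fast = fast->Next->Next;      /*  two steps  */
  }
  rest = slow->Next;   /*  second half  */
  slow->Next = NULL;   /*  terminate the first half  */
  return rest;
}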

 
Richard Harter

Ben Pfaff said:
Sure, that'll work, but it's somewhat brute force. I think that
a "natural" merge sort makes more sense: take the first two runs
of ascending values and merge them to produce the first output
run, and repeat that process until you've consumed the input.
Then start over from the top, until there's only a single output
run.

A natural merge sort might make more sense, but it's not an improvement,
or at least it isn't unless you have something clever up your sleeve.
See the discussion in Knuth on merge sorts. Natural merging gains when
runs are long and few, and loses when runs are short and many. Offhand,
your version is going to be much more expensive (though still O(n log n))
because you have to rescan the runs.
 
Joerg Schoen

That's basically what my code is doing, but it also takes advantage of
natural runs in the input! Richard Harter is right in noticing that I
should have described the algorithm in more detail. Sorry! I'll also
improve my indentation next time. Here now is a detailed explanation:

I break the linked list down into natural runs (only if the preprocessor
symbol "NATURAL" is defined). After collecting the first two runs, I merge
them into a bigger run. Then I collect the next two runs and merge them
into a bigger one. On the stack, I now have two bigger runs, which I merge
again.

Here is how the stack develops (separate letters denote different runs,
adjacent letters mean merged runs, the comment at the end of each line
describes what happens before the next line):
# collect run
a # collect run
a b # merge
ab # collect run
ab c # collect run
ab c d # merge
ab cd # merge
abcd # collect run
abcd e # collect run
abcd e f # merge
abcd ef # collect run
abcd ef g # collect run
abcd ef g h # merge
abcd ef gh # merge
abcd efgh # merge
abcdefgh
....

The stack contains the start of each run, which is a NULL-terminated
linked list, and also its length. So there is enough information on the
stack to know the start and length of each run. Compared to other
implementations of mergesort, mine doesn't run through the list a given
number of steps just to find the start of the next run.

What I like about the implementation is that there is not much need to
treat special cases like list ends when merging etc. That sounds easy,
but try it yourself and you'll see that it's a bit tricky!

Ben Pfaff said:
Sure, that'll work, but it's somewhat brute force. I think that
a "natural" merge sort makes more sense: take the first two runs
of ascending values and merge them to produce the first output
run, and repeat that process until you've consumed the input.
Then start over from the top, until there's only a single output
run.

That is exactly the governing idea in my own implementation: improve
mergesort by recognizing natural runs in the input.

There are two not-so-obvious problems with Ben's description:

One is that you need to call your comparison function very frequently
just to recognize runs you could already know about otherwise. Assume you
have just merged two runs of length 1000 and 2000 into a run of length
3000. When you go over the list from the top, you need another 2999
comparisons just to recognize the run of size 3000.

Note that this is easier in the simpler scheme cited by Richard: the
length of the runs is known to be exactly 2**n in the n-th iteration of
the outer loop.

The second problem with Ben's approach is non-locality: you have to run
repeatedly through the WHOLE input, each time increasing the size of the
runs. This renders your CPU cache rather useless. In the test program on
my webpage (http://www.die-schoens.de/prg/testsort.tgz), I compare the
localized version with a non-localized one. The runtime of the localized
one is better by 35%! That seems relevant to me for real-world
applications!

Of course, as said before, the algorithm is O(n * log(n)); we are just
talking about minimizing the constant factor in front of it.
 
Ben Pfaff

Richard Harter said:
A natural merge sort might make more sense, but it's not an improvement,
or at least it isn't unless you have something clever up your sleeve.
See the discussion in Knuth on merge sorts. Natural merging gains when
runs are long and few, and loses when runs are short and many. Offhand,
your version is going to be much more expensive (though still O(n log n))
because you have to rescan the runs.

I'll keep that in mind when I need the optimization in the future;
thanks for pointing it out.
 
Richard Harter

Joerg Schoen said:
.... snip detailed explanation ...


Well done. Actually, the algorithm I described is closer to your
algorithm than to Ben's. IIANM, Ben's is actually O(n^2) worst case,
unless I am missing a fine point. An implementation of my algorithm has
a little gotcha: you have to be careful about recognizing the end of the
data. I shall have to think about it a bit, but I think both can be
written nicely as recursive routines.
 
Ben Pfaff

Richard Harter said:
Well done. Actually, the algorithm I described is closer to your
algorithm than to Ben's. IIANM, Ben's is actually O(n^2) worst case,
unless I am missing a fine point.

I don't think my code has an O(n^2) worst case, even if it is
slower than an "unnatural" merge sort. What do you think could
trigger an O(n^2) case in my code?
 
CBFalconer

Joerg said:
.... snip ...

Of course, as said before, the algorithm is O(n * log(n)); we are
just talking about minimizing the constant factor in front of it.

I think that you will find that an initially reverse sorted list
will eat up stack and time. You are doing a lot of copying, which
is totally unnecessary. Remember, records can be big.

--
"The mere formulation of a problem is far more often essential
than its solution, which may be merely a matter of mathematical
or experimental skill. To raise new questions, new possibilities,
to regard old problems from a new angle requires creative
imagination and marks real advances in science."
-- Albert Einstein
 
