Thread-safe reference counts.


James Kanze

"James Kanze" <[email protected]> wrote in message
My only point is that, IMVHO of course, GC can be a wonderful
tool for all sorts of lifetime management schemes. This due to
the fact that it can inform the management protocol when an
object is in a quiescent state.

But you still haven't explained what you mean by "an object in a
quiescent state". All garbage collection can possibly tell you
is whether the object can be (might be) accessed. As far as I
can see, this has nothing to do with the state of the object.
IMO, lifetime management "basically implies" that an object
will only be destroyed when it's unreachable;

For what meaning of "destroyed"? I intentionally chose the word
"terminated" in my discussion to avoid any link with C++
destructors or delete; those are the usual ways of terminating
an object in C++, but the concept of termination is a design
concept, applicable to all languages, and delete (and typically
destructors in the absence of garbage collection) also do memory
management, which is really a separate, unrelated issue.
if it never dies, so be it.

In the terms I'm using, if an object has a determinate lifetime,
it must be terminated at a specific point in time, in response
to a specific external event, or something similar. Regardless
of who has pointers to it. If an object has an indeterminate
lifetime, then it doesn't matter. In a very real sense, it is
never terminated (never dies); it just disappears. Its
"lifetime" never ends.
If it can die, well, knowing when it's quiescent is a _very_
important piece of information indeed.

You still haven't defined quiescent, and what it means with
regards to object lifetime.
You don't really want to terminate an object if it is still
able to be accessed by other entities.

You don't really have the choice, if your program is to work.
And it's not really a problem, if the other entities don't
actually access it. On the other hand, if the object doesn't
need termination, then you never terminate it, even when there
are no more pointers to it. It just "disappears"; it's
invisible (since there is no way to "see" it), and presumably,
its memory will somehow be recycled, but that's it.
Therefore, I conclude that GC and/or reference counting can be
important tools for a lifetime management scheme to use in
general.
Does that make any sense?

Not yet. I'm still having problems with vocabulary. My
interpretation of "quiescent", for example, implies that an
object will no longer change its state. But that can't be the
meaning you're using---most of my dynamically allocated objects
which don't have explicit lifetimes are polymorphic agents
without any state.
 

David Schwartz

If you mean the difference between an implementation of strong thread
safety based on an additional persistent mutex and one based on atomic
operations - yes. An implementation based on atomic operations doesn't
give any additional features.

I see. So the whole "problem" you are trying to solve is that an
optimization doesn't work.
One can easily achieve strong thread safety by combining
boost::shared_ptr and mutex:
boost::shared_ptr<application_settings> g_settings;
mutex_t g_mutex;
All operations involving g_settings (update, acquire) must lock
g_mutex. All operations on an already-acquired object (release)
need not lock g_mutex.
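The scheme Dmitriy describes can be sketched in a few lines. This uses std::shared_ptr and std::mutex for concreteness (the post predates C++11; boost::shared_ptr plus a pthread mutex would be the period-accurate spelling), and the function names are mine, not his:

```cpp
#include <cassert>
#include <memory>
#include <mutex>

struct application_settings { int verbosity = 0; };

std::shared_ptr<application_settings> g_settings;
std::mutex g_mutex;

// Acquire: copy the shared pointer under the lock. The copy bumps the
// reference count, so the object stays alive after the lock is dropped.
std::shared_ptr<application_settings> acquire_settings() {
    std::lock_guard<std::mutex> lock(g_mutex);
    return g_settings;
}

// Update: swap in a new object under the lock. Readers that already
// hold a copy keep the old object alive until they drop it, which is
// why release never needs the lock.
void update_settings(std::shared_ptr<application_settings> fresh) {
    std::lock_guard<std::mutex> lock(g_mutex);
    g_settings = std::move(fresh);
}
```

The lock only guards the shared slot; once a thread holds its own copy, the copy's own count protects the object.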

Why would one ever even bother? This whole "strong thread safety"
issue is not an issue. It's a problem you can only run into by
choosing an inappropriate optimization. Just don't do that. Don't
optimize until you have a proven performance issue and then choose the
appropriate (as opposed to inappropriate) optimization.

Strong thread safety (a ridiculous name, IMO) is not something extra
beyond normal thread safety. It's normal thread safety. Whatever is
not strong thread safety (weak thread safety?) is a property of an
optimization that should only be used when it's appropriate.

That is, you have to do something *wrong* to need strong thread
safety.
But with atomic operations one can go further and eliminate all
atomic operations and mutation of shared state on the fast path. This
way threads work only with cached data in read-only mode most of the
time. This makes a huge difference in performance and scalability
under high load: basically the difference between superlinear negative
scaling and linear positive scaling (I emphasize: under high load).

If you can optimize code, great. If it runs better, that's good too.

But it sounds like you are framing this whole issue in completely the
wrong terms. There is no thread safety problem -- use locks
appropriately and there's no reference issue. It's impossible to get
into a situation where the object can become invalid before you can
increment its reference unless you're doing something mind-bogglingly
stupid.

Sane people just don't do such stupid things. That is, this is not
some optimization or extra feature. It's just a stupid bug, like an
off-by-one error, that perhaps programmers need to understand so that
they can (trivially) avoid it.

DS
 

Dmitriy V'jukov

But it sounds like you are framing this whole issue in completely the
wrong terms. There is no thread safety problem -- use locks
appropriately and there's no reference issue. It's impossible to get
into a situation where the object can become invalid before you can
increment its reference unless you're doing something mind-bogglingly
stupid.


Why must I use locks unconditionally by default??? We are not living
in the 60's.

You are considering smart pointer with strong thread safety as
application level primitive. It's not application level primitive,
it's low-level basic primitive. Like mutex for lock-based programming.

Implementation of low-level basic primitives is difficult.
Implementation of mutex (which you are proposing to use) is not
straightforward. And it's not portable at all. It depends on the
compiler, on the OS, on the hardware.

Why are you not saying to the implementor of your threading library
"You are stupid! Don't even try to use those atomic operations! Just
don't do such stupid things!"?

Why are you not saying "Just don't use databases. They are extremely
hard to implement."?

You are mixing up rules for application developers and for
multithreading support library developers.


Dmitriy V'jukov
 

Chris Thomasson

David Schwartz said:
I see. So the whole "problem" you are trying to solve is that an
optimization doesn't work.


Why would one ever even bother? This whole "strong thread safety"
issue is not an issue. It's a problem you can only run into by
choosing an inappropriate optimization. Just don't do that. Don't
optimize until you have a proven performance issue and then choose the
appropriate (as opposed to inappropriate) optimization.

Strong thread safety (a ridiculous name, IMO) is not something extra
beyond normal thread safety. It's normal thread safety. Whatever is
not strong thread safety (weak thread safety?) is a property of an
optimization that should only be used when it's appropriate.

That is, you have to do something *wrong* to need strong thread
safety.

[...]

How can I do a reader pattern without strong thread-safety? It looks like
the Standard Committee is considering adding this feature to shared_ptr
anyway:

http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2007/n2297.html#atomic
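For the record, that proposal's atomic access to shared_ptr did land in C++11 as the free-function overloads std::atomic_load/std::atomic_store declared in &lt;memory&gt; (later superseded in C++20 by std::atomic&lt;std::shared_ptr&lt;T&gt;&gt;). A minimal sketch of the reader pattern Chris is asking about, with my own function names:

```cpp
#include <cassert>
#include <memory>

std::shared_ptr<int> g_value;   // shared location mutated by many threads

// Writer: atomically publish a new object. Readers still holding copies
// of the old pointer keep the old object alive through their own counts.
void publish(int v) {
    std::atomic_store(&g_value, std::make_shared<int>(v));
}

// Reader: atomically copy the pointer and bump the count in one step.
// This combined operation is exactly the "strong" competence the
// thread is arguing about.
int read_current() {
    std::shared_ptr<int> local = std::atomic_load(&g_value);
    return local ? *local : -1;
}
```

Whether the library implements these with a spinlock table or with wide atomics is invisible to the caller, which is Dmitriy's point below about API versus implementation.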
 

David Schwartz

Why must I use locks unconditionally by default??? We are not living
in the 60's.

You don't have to use locks unconditionally. You have to use locks
where you have no other/better way to meet POSIX memory visibility and
concurrency rules. Otherwise, do whatever you want.
You are considering smart pointer with strong thread safety as
application level primitive. It's not application level primitive,
it's low-level basic primitive. Like mutex for lock-based programming.

I can't follow you here. Are you saying smart pointers are not meant
to be used by applications and only meant as internals of the
implementation of threading libraries? When you say "application level
primitive", do you mean implemented at application level or used at
application level?
Implementation of low-level basic primitives is difficult.
Implementation of mutex (which you are proposing to use) is not
straightforward. And it's not portable at all. It depends on the
compiler, on the OS, on the hardware.

Absolutely. How you implement things like mutexes and smart pointers
is *very* platform specific. How you use them is not.
Why are you not saying to the implementor of your threading library
"You are stupid! Don't even try to use those atomic operations! Just
don't do such stupid things!"?

The implementation of a threading library is going to be very
platform-specific. Because I don't have any special interest in any
particular
platform, I talk generally about what they should do *semantically*. I
don't care how they are implemented, I just hope it will be a good
implementation.
You are mixing up rules for application developers and for
multithreading support library developers.

Are you saying smart pointers should not be used by applications?

Sorry, I don't understand you. Maybe we are talking past each other.
Are you saying this strong thread safety issue is purely a threading
library internal issue and application developers don't have to worry
about it?

DS
 

Chris Thomasson

Chris Thomasson said:
David Schwartz said:
I see. So the whole "problem" you are trying to solve is that an
optimization doesn't work.


Why would one ever even bother? This whole "strong thread safety"
issue is not an issue. It's a problem you can only run into by
choosing an inappropriate optimization. Just don't do that. Don't
optimize until you have a proven performance issue and then choose the
appropriate (as opposed to inappropriate) optimization.

Strong thread safety (a ridiculous name, IMO) is not something extra
beyond normal thread safety. It's normal thread safety. Whatever is
not strong thread safety (weak thread safety?) is a property of an
optimization that should only be used when it's appropriate.

That is, you have to do something *wrong* to need strong thread
safety.

[...]

How can I do a reader pattern without strong thread-safety? It looks like
the Standard Committee is considering adding this feature to shared_ptr
anyway:

Global locking tables can help a decrementing thread atomically release
and destroy an object when the count has dropped to zero. This is
important because another thread could sneak in and concurrently attempt
to increment the reference count during this time. This is the classic
race condition in atomic reference-counting pointers; Java references
can solve it, and so can PThreads and C... The important part is that
the mutexes within the global table have static duration and always
outlast the dynamic objects that hash into them...
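A minimal sketch of the global locking-table idea (my own illustration; the names, table size, and hash are assumptions, not the API from Chris's refcount-c.html):

```cpp
#include <cassert>
#include <cstdint>
#include <mutex>

// A table of statically allocated mutexes, indexed by hashing the
// address of the shared pointer location. The mutexes outlive every
// dynamic object, which is the property Chris emphasizes.
static const std::size_t kTableSize = 64;
static std::mutex g_lock_table[kTableSize];

static std::mutex& lock_for(const void* addr) {
    // Drop the low (alignment) bits before hashing so that nearby
    // locations spread across the table.
    std::uintptr_t u = reinterpret_cast<std::uintptr_t>(addr);
    return g_lock_table[(u >> 4) % kTableSize];
}

struct counted { int refs; /* payload ... */ };

// Strong copy: lock the slot guarding the shared location, so this
// increment cannot race with a decrement-to-zero that frees the object.
counted* strong_copy(counted** shared_loc) {
    std::lock_guard<std::mutex> g(lock_for(shared_loc));
    counted* p = *shared_loc;
    if (p) ++p->refs;
    return p;
}
```

A decrement-to-zero path would take the same per-slot lock before destroying, which is what closes the window David and Chris are debating.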
 

Chris Thomasson

James Kanze said:
"James Kanze" <[email protected]> wrote in message
My only point is that, IMVHO of course, GC can be a wonderful
tool for all sorts of lifetime management schemes. This due to
the fact that it can inform the management protocol when an
object is in a quiescent state.
But you still haven't explained what you mean by "an object in a
quiescent state".

[...]

http://dictionary.reference.com/browse/quiescent

An object is at rest and has nothing to do. When an object reaches that
point in its lifetime it can decide to safely destroy itself and/or
safely reuse/cache itself for later resurrection and re-initialization;
whatever.... In RCU speak, an object is in a quiescent state after it's
rendered unreachable and has been successfully deferred through the
callback system. All concurrently accessing threads within the epoch
will go through a quiescent state. The information backing an epoch can
be as fine-grained as an embedded per-object proxy reference count, or
it can be coarse-grained, per-CPU and/or per-thread; whatever. When an
epoch goes quiescent, all objects contained within it have also
quiesced.
 

Dmitriy V'jukov

You don't have to use locks unconditionally. You have to use locks
where you have no other/better way to meet POSIX memory visibility and
concurrency rules. Otherwise, do whatever you want.


Why must I meet POSIX memory visibility and concurrency rules by
default?
A large amount of code is written solely for Windows/x86.

I can't follow you here. Are you saying smart pointers are not meant
to be used by applications and only meant as internals of the
implementation of threading libraries? When you say "application level
primitive", do you mean implemented at application level or used at
application level?


By 'application level primitive' I mean implemented at application
level.
By 'low-level primitive' I mean implemented at 'system' level.
At application level one can *use* low-level primitives w/o the need
to implement them.

Are you saying smart pointers should not be used by applications?


They may be used by application developers, but they must not be
implemented by application developers.

Sorry, I don't understand you. Maybe we are talking past each other.
Are you saying this strong thread safety issue is purely a threading
library internal issue and application developers don't have to worry
about it?


I'm saying that the *implementation* of both strong thread safety and
mutexes is purely a threading library internal issue. The APIs of both
strong thread safety and mutexes can be used by an application
developer without needing to know how they are implemented.

I'm talking from the point of view of a threading library developer.
You are talking from the point of view of an application developer.
You don't even have to care how strong thread safety is implemented
(with atomic operations or with additional mutexes); you just have to
know that you can concurrently mutate a single smart pointer without
any additional synchronization.


Dmitriy V'jukov
 

James Kanze

news:1e8d3d15-c2e1-4968-9dff-e505d4ae77fe@m36g2000hse.googlegroups.com...
[...]
My only point is that, IMVHO of course, GC can be a wonderful
tool for all sorts of lifetime management schemes. This due to
the fact that it can inform the management protocol when an
object is in a quiescent state.
But you still haven't explained what you mean by "an object in a
quiescent state".

An object is at rest and has nothing to do.

Including being terminated? (I.e. termination is a no-op for
the object.) If termination is not a no-op, then it still has
something to do. Otherwise, the question isn't so much whether
it has something to do or not, but whether it can still be used
by other objects or not. If not, then it probably needs some
sort of explicit termination, in order to inform the other
objects that it is no longer usable (and of course, whatever
event made it unusable should trigger termination, and this
notification). If so, then it lives on forever. Conceptually,
at least---when no other object can reach it, its memory can be
recycled for other uses.
When an object reaches that point in its lifetime it can
decide to safely destroy itself and/or safely reuse/cache
itself for later resurrection and re-initialization;
whatever....

Why does the object have to decide?

Or perhaps more to the point: why does the object have nothing
more to do: because it has reached a state from which it can do
nothing more (and so probably requires explicit termination), or
because it normally only has something to do as a result of
requests from another object, and no other object can reach it.
In the latter case, of course, that state is irrelevant to the
object; it's exterior to the object, and the object (normally)
has no way of knowing, nor should it.

In the first case, garbage collection will not reap the object
if there are any remaining pointers to it, even if its lifetime
has ended; this allows some additional error checking. In the
second case, garbage collection can be said to play an enabling
role; without garbage collection, somehow, the fact that the
object has become unreachable must be determined manually, so
that the object can be freed. (In many cases, some form of
smart pointer will do the job adequately. In a few, however, it
is more complicated.)
In RCU speak, an object is in a quiescent state after it's
rendered unreachable and has been successfully deferred
through the callback system. All concurrently accessing
threads within the epoch will go through a quiescent state.
The information backing an epoch can be as fine-grained as an
embedded per-object proxy reference count, or it can be
coarse-grained, per-CPU and/or per-thread; whatever. When an
epoch goes quiescent, all objects contained within it have
also quiesced.

I'm not familiar with this vocabulary, so I'll pass on it.
 

David Schwartz

Global locking tables can help a decrementing thread atomically release and
destroy an object when the count has dropped to zero. This is important
because another thread could sneak in and concurrently attempt to increment
the reference count during this time.

No, another thread can't sneak in and concurrently attempt to increment
the reference count during that time. The decrementing thread is the
only thread that holds a reference to the object, so no other thread
could even find the object. How could it attempt to increment the
reference count to an object it cannot find?
This is the classic race condition in atomic reference-counting
pointers; Java references can solve it, and so can PThreads and C...
The important part is that the mutexes within the global table have
static duration and always outlast the dynamic objects that hash into
them...

The only way a thread can increment the reference count on an object
is if it has a pointer to the object. When using atomic reference
counting pointers, pointers are *always* accompanied by references.
That is the whole point of such pointers. If you have a pointer, you
have a reference. If you don't have a reference, you don't have a
pointer. You cannot obtain a reference unless something that has a
reference gives you one.

DS
 

David Schwartz

I'm saying that *implementation* of both strong thread safety and
mutex is purely a threading library internal issue. API of both strong
thread safety and mutex can be used by application developer w/o the
need to knowing how it's implemented.

I'm talking from the point of view of threading library developer. You
are talking from the point of view of application developer. You can
even not bother how strong thread safety is implemented (with atomic
operations or with additional mutexes), you just have to know that you
can concurrently mutate single smart pointer w/o any additional
synchronization.

I completely agree with everything you said, I just don't see what it
has to do with anything. Can you explain why applications have to deal
with this issue if they just make sure that every pointer is
accompanied by a reference? Isn't that the whole point of atomic
reference-counting pointers?

DS
 

Chris Thomasson

David Schwartz said:
No, another thread can't sneak in and concurrently attempt to increment
the reference count during that time. The decrementing thread is the
only thread that holds a reference to the object, so no other thread
could even find the object. How could it attempt to increment the
reference count to an object it cannot find?

Let's take some standard code into account... How about a PThread
implementation of strongly thread-safe counted pointers, which can be
found here; it compiles fine:

http://appcore.home.comcast.net/misc/refcount-c.html
(refcount_copy/swap functions; returns non-zero on failure)

These functions are passed pointers to a shared location that in turn
contains a pointer to a refcount object; you can use them like this:
_____________________________________________________________________
extern "C" void userobj_dtor(void*);

class userobj {
    friend void userobj_dtor(void*);
    friend struct userobj_thread;  // lets the threads touch m_refs/m_state
    refcount m_refs;
    int m_state;

public:
    userobj(int state, refcount_refs refs = 1)
        : m_state(state) {
        refcount_create(&m_refs, refs, userobj_dtor, this);
    }
};

void userobj_dtor(void* state) {
    delete reinterpret_cast<userobj*>(state);
}


static refcount_shared* g_shared = NULL;


struct userobj_thread {
    void readers() {
        for (;;) {
            refcount_local* local;
            if (! refcount_copy(&g_shared, &local)) {
                userobj* const uobj = (userobj*)refcount_get_state(local);
                printf("(%p/%p/%d)-userobj_thread/userobj/userobj::m_state\n",
                       (void*)this, (void*)uobj, uobj->m_state);
                refcount_release(local);
            }
        }
    }

    void writers() {
        for (int i = 0 ;; ++i) {
            userobj* const obj = new userobj(i);
            refcount_local* local = &obj->m_refs;
            if (! refcount_swap(&g_shared, &local)) {
                refcount_release(local);
            }
        }
    }
};
_____________________________________________________________________


Please check out the 'refcount_copy()/swap()' functions. How would you
implement those APIs differently? The readers are acquiring pointers to
objects that they did not previously own a reference to. IMHO, the
locking table is a good synchronization scheme to use in this scenario.


[...]
 

David Schwartz

static refcount_shared* g_shared = NULL;
if (! refcount_copy(&g_shared, &local)) {
if (! refcount_swap(&g_shared, &local)) {
Please check out the 'refcount_copy()/swap()' functions. How would you
implement those APIs differently? The readers are acquiring pointers to
objects that they did not previously own a reference to.

Of course the readers are acquiring pointers to objects that *THEY* did
not previously have a reference to. But 'g_shared' has a reference to
them, since it has a pointer to them. If a pointer always includes a
reference, then this whole "add ref race with dec ref" simply cannot
ever occur.

The whole point of these smart atomic reference counting pointers is
that you always keep a reference count with every pointer. The races
alleged to be possible simply cannot happen unless you deliberately do
something nonsensical.

You cannot add a reference unless you have a pointer. You cannot have
a pointer unless you have a reference. You can only get a reference by
getting the object from something, and that something must have the
object and thus have a reference.

DS
 

Chris Thomasson

David Schwartz said:
Of course the readers are acquiring pointers to objects that *THEY* did
not previously have a reference to. But 'g_shared' has a reference to
them, since it has a pointer to them. If a pointer always includes a
reference, then this whole "add ref race with dec ref" simply cannot
ever occur.
[...]

The locking table allows me to use raw shared pointers to refcount
objects. There is no need to keep a separate reference count along with
the shared pointer to a counter when I have persistent locks, as the
example code I posted shows. This saves space as well: one word per
shared pointer.
E.g.,
_____________________________________________________
static refcount_shared* g_shared = NULL;

/* instead of something like: */

struct refcount_shared {
    refcount* ptr;
    int refs;
};

static refcount_shared g_shared = { NULL, 0 };
_____________________________________________



Can you modify the implementation of my refcount API here:

http://appcore.home.comcast.net/misc/refcount-c.html


to make it more efficient?
 

Chris Thomasson

[...]
My only point is that, IMVHO of course, GC can be a wonderful
tool for all sorts of lifetime management schemes. This due to
the fact that it can inform the management protocol when an
object is in a quiescent state.
But you still haven't explained what you mean by "an object in a
quiescent state".
[...]
http://dictionary.reference.com/browse/quiescent
An object is at rest and has nothing to do.
Including being terminated?

Calling the object's destructor is fine in this state.



(I.e. termination is a no-op for
the object.) If termination is not a no-op, then it still has
something to do.

When the counter has dropped to zero, the object can be destroyed,
reused, cached, etc.



Otherwise, the question isn't so much whether
it has something to do or not, but whether it can still be used
by other objects or not.

No dynamic object should be able to acquire a reference to an object
whose reference count is zero. A proxy GC will call a quiescent-state
callback function for an object when it determines that said object has
quiesced (e.g., become unreachable). This is analogous to a reference
counting algorithm dropping the count to zero and subsequently
notifying the application via a callback. Imagine if shared_ptr did not
call the dtor, but called a function that allowed the application to
decide what to do. It can call the object's dtor, or cache it, or
immediately reuse it, whatever; the object is quiescent.
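The "imagine if shared_ptr did not call the dtor" idea maps directly onto shared_ptr's custom-deleter hook, so a hedged sketch is easy (the one-slot cache and all names here are mine, purely for illustration):

```cpp
#include <cassert>
#include <memory>

struct object { int state = 0; };

static object* g_cached = nullptr;   // toy one-slot cache (illustration only)

// Fired by shared_ptr when the count hits zero: the object is now
// quiescent, and the application decides its fate. Here we cache it
// for later reuse instead of deleting it.
void object_quiescent(object* obj) {
    g_cached = obj;
}

// Attach the quiescent callback as the custom deleter.
std::shared_ptr<object> make_managed(object* obj) {
    return std::shared_ptr<object>(obj, object_quiescent);
}
```

When the last copy of the shared_ptr goes away, object_quiescent runs instead of delete, which is precisely the "notify the application and let it decide" behavior described above.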



If not, then it probably needs some
sort of explicit termination, in order to inform the other
objects that it is no longer usable (and of course, whatever
event made it unusable should trigger termination, and this
notification). If so, then it lives on forever. Conceptually,
at least---when no other object can reach it, its memory can be
recycled for other uses.

It can be reused, cached, the dtor can be called, etc.





Why does the object have to decide?


The programmer who creates the logic can decide. E.g.:

void object_quiescent(object* const _this) {
    // you can call the dtor: delete _this;
    // you can cache it: object_cache_push(_this);
    // the object is unreachable indeed.
}



Or perhaps more to the point: why does the object have nothing
more to do: because it has reached a state from which it can do
nothing more (and so probably requires explicit termination), or
because it normally only has something to do as a result of
requests from another object, and no other object can reach it.
In the latter case, of course, that state is irrelevant to the
object; it's exterior to the object, and the object (normally)
has no way of knowing, nor should it.

The fact that the GC or reference count can inform the program logic
that an object is no longer reachable is valuable information to any
lifetime management scheme which deals with dynamic objects.



In the first case, garbage collection will not reap the object
if there are any remaining pointers to it, even if its lifetime
has ended; this allows some additional error checking. In the
second case, garbage collection can be said to play an enabling
role; without garbage collection, somehow, the fact that the
object has become unreachable must be determined manually, so
that the object can be freed. (In many cases, some form of
smart pointer will do the job adequately. In a few, however, it
is more complicated.)

A quiescent state is like a GC determining that an object can be
reaped, but informing the application and letting it decide what to do.
It can call the dtor, or reuse the object, etc.




I'm not familiar with this vocabulary, so I'll pass on it.

Check this out:

http://en.wikipedia.org/wiki/Read-copy-update

Does that make any sense?
 

Dmitriy V'jukov

Of course the readers are acquiring pointers to objects that *THEY* did
not previously have a reference to. But 'g_shared' has a reference to
them, since it has a pointer to them. If a pointer always includes a
reference, then this whole "add ref race with dec ref" simply cannot
ever occur.


This is wrong. This whole race cannot occur only if (1) a pointer
always includes a reference ***AND*** (2) the owner of a pointer makes
the copy of the pointer (and the reference increment) BY HIMSELF.

I agree that violating condition (1) is just bad design.
But condition (2) can be violated sometimes, and that's NOT bad design,
it's just the given situation. There can be no owner of a pointer (and
reference) at all. Here I mean an 'active' owner which can make a copy
of the pointer.


Dmitriy V'jukov
 

Dmitriy V'jukov

I completely agree with everything you said, I just don't see what it
has to do with anything. Can you explain why applications have to deal
with this issue if they just make sure that every pointer is
accompanied by a reference? Isn't that the whole point of atomic
reference-counting pointers?


You want an example where the following simple functions can't cope
with the task? Do I understand you correctly?

void acquire(T* x)
{
    atomic_inc(x->rc);
}

void release(T* x)
{
    if (0 == atomic_dec(x->rc))
        delete x;
}
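For what it's worth, here is a single-threaded re-enactment of the losing interleaving these functions cannot handle (my own sketch; the T/acquire/release names mirror the snippet above, and the live counter is added just to observe the outcome): a reader that loads a raw pointer from a shared slot before incrementing the count can lose the race to a writer that drops the slot's last reference.

```cpp
#include <cassert>

// Plain (weakly thread-safe) counting, as in the snippet above:
struct T {
    int rc = 1;
    static int live;          // live-object counter, for the re-enactment
    T()  { ++live; }
    ~T() { --live; }
};
int T::live = 0;

void acquire(T* x) { ++x->rc; }
void release(T* x) { if (--x->rc == 0) delete x; }

// Re-enact the interleaving step by step (thread labels in comments).
bool reader_lost_the_race() {
    T* g_slot = new T;        // the shared slot owns the only reference

    T* seen = g_slot;         // Thread A (reader): raw load, no increment yet

    g_slot = nullptr;         // Thread B (writer): clears the slot...
    release(seen);            // ...and drops the last reference: delete!

    // Thread A's pending acquire(seen) would now increment freed memory.
    return T::live == 0;      // the object is already destroyed
}
```

There is no "active" owner left to make the copy on the reader's behalf, which is exactly Dmitriy's condition (2).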


Dmitriy V'jukov
 

David Schwartz

You want an example where the following simple functions can't cope
with the task? Do I understand you correctly?

void acquire(T* x)
{
    atomic_inc(x->rc);
}

void release(T* x)
{
    if (0 == atomic_dec(x->rc))
        delete x;
}

What is the problem here supposed to be? This code appears race-free
to me. Of course, you can only call 'acquire' if you hold a
reference (to acquire another reference that you may then give to
something else). But you should never even consider doing anything
with a pointer if you don't have a reference.

DS
 

David Schwartz

This is wrong. This whole race cannot occur only if (1) a pointer
always includes a reference ***AND*** (2) the owner of a pointer makes
the copy of the pointer (and the reference increment) BY HIMSELF.
I agree that violating condition (1) is just bad design.
But condition (2) can be violated sometimes, and that's NOT bad design,
it's just the given situation. There can be no owner of a pointer (and
reference) at all. Here I mean an 'active' owner which can make a copy
of the pointer.

I don't understand your requirement 2. It doesn't matter who or what
makes the copy of the pointer, so long as *something* has a reference.

The owner of a reference is notional, not enforced. For example, if a
global variable contains a pointer, it also has a [notional]
reference. Any thread that access that global variable can 'borrow'
the reference.

The only potential problem is if the owner of the 'notional' reference
releases its reference at the same time it is using it. But that race
is in the owner of the reference, not the atomic pointer
implementation or whatever.

A global variable can have a reference. A thread can have a reference.
A collection can have a reference. The owner is just conceptual.

If, for example, I have a hash table that has a bunch of objects in
it, it can also have a reference to each of those objects to ensure
they aren't removed while they're still in the table. But that
reference can be used by any thread that calls into the hash table's
functions.
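The hash-table example can be made concrete with a short sketch (my own naming; any associative container plus a mutex will do):

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <mutex>
#include <string>

// The table owns one reference per stored object (the shared_ptr in
// the map). Any thread may "borrow" that reference by calling find():
// the copy is made under the table's lock, while the table's own
// reference still pins the object, so no add-ref/dec-ref race exists.
class object_table {
    std::mutex m_lock;
    std::map<std::string, std::shared_ptr<int>> m_objects;

public:
    void insert(const std::string& key, std::shared_ptr<int> obj) {
        std::lock_guard<std::mutex> g(m_lock);
        m_objects[key] = std::move(obj);
    }

    std::shared_ptr<int> find(const std::string& key) {
        std::lock_guard<std::mutex> g(m_lock);
        auto it = m_objects.find(key);
        return it == m_objects.end() ? nullptr : it->second;
    }

    void erase(const std::string& key) {
        std::lock_guard<std::mutex> g(m_lock);
        m_objects.erase(key);   // callers' copies keep the object alive
    }
};
```

Erasing an entry only drops the table's reference; a caller that already copied the pointer out of find() keeps the object alive through its own count.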

DS
 

David Schwartz

This is wrong. This whole race cannot occur only if (1) a pointer
always includes a reference ***AND*** (2) the owner of a pointer makes
the copy of the pointer (and the reference increment) BY HIMSELF.

Let me phrase my point more clearly: 2 is always satisfied. Anything
that is using a reference is, for all intents and purposes, the owner
of that reference (while it is using it). Of course, the use of a
reference must be properly synchronized. Even strong thread safety
won't let you use and release a reference at the same time.

DS
 
