fast scan


George Mpouras

# scan a network in 2 seconds using fork. Very simplistic but with potential!




#!/usr/bin/perl
use strict;
use warnings;
use feature qw/say/;
use Net::Ping;
use Net::IP;

my $threads  = 255;
my $duration = 2;
my @ip_team  = ();
$| = 1;

my $ip = Net::IP->new('192.168.0.1 - 192.168.0.254')
    or die "Could not initiate object because " . Net::IP::Error() . "\n";

while ($ip) {
    push @ip_team, $ip++->ip();
    if ( $threads == @ip_team ) { Scan(@ip_team); @ip_team = () }
}

Scan(@ip_team);


sub Scan
{
    my @Pids;

    foreach my $ip (@_)
    {
        my $pid = fork();
        die "Could not fork because $!\n" unless defined $pid;

        if (0 == $pid)
        {
            my $ping = Net::Ping->new('icmp');   # icmp pings need root privileges
            say "host $ip is up" if $ping->ping($ip, $duration);
            $ping->close();
            exit;
        }
        else
        {
            push @Pids, $pid;
        }
    }

    foreach my $pid (@Pids) { waitpid($pid, 0) }
}
 

Rainer Weikusat

George Mpouras said:
# scan a network in 2 seconds using fork. Very simplistic but with potential!
[...]

sub Scan
{
my @Pids;

foreach my $ip (@_)
{
my $pid = fork();
die "Could not fork because $!\n" unless defined $pid;

if (0 == $pid)
{
my $ping = Net::Ping->new('icmp');
say "host $ip is up" if $ping->ping($ip, $duration);
$ping->close();
exit
}
else
{
push @Pids, $pid
}
}

foreach my $pid (@Pids) { waitpid($pid, 0) }
}

If you run code like this on somebody else's network (or network and
computer), you will potentially learn more details about LART than you
ever wanted to. To achieve maximum disaster, run it on a gateway for a
busy network which suffers from 'traditional' BSD network buffer
management, preferably in a loop. If you manage to reach mbuf
exhaustion, you've produced a stable 'congestion collapse' (slight
misuse of the term) situation: the gateway will drop incoming Ethernet
frames until it has got rid of enough pings to reconsider this
decision. At that point, a reply tsunami will hit it (ping replies,
TCP retransmissions, ARP replies and ARP queries) and it will
immediately run out of mbufs again. Repeat until the heat death of the
universe ...

NB: This is not a story I just invented or some kind of theoretical
conjecture. I've had the displeasure of encountering this exact problem
on such a gateway some years ago.
 

Rainer Weikusat

Rainer Weikusat said:
George Mpouras said:
# scan a network in 2 seconds using fork. Very simplistic but with potential!
[...]

sub Scan
{
my @Pids;

foreach my $ip (@_)
{
my $pid = fork();
die "Could not fork because $!\n" unless defined $pid;

if (0 == $pid)
{
my $ping = Net::Ping->new('icmp');
say "host $ip is up" if $ping->ping($ip, $duration);
$ping->close();
exit
}
else
{
push @Pids, $pid
}
}

foreach my $pid (@Pids) { waitpid($pid, 0) }
}

If you run code like this on somebody else's network (or network and
computer), you will potentially learn more details about LART than you
ever wanted to.

For completeness: It is not entirely inconceivable that the fork bomb
manages to slow transmissions down so much that the 'DDoS suicide'
doesn't happen.
 

Charles DeRykus

# scan a network in 2 seconds using fork. Very simplistic but with potential!




#!/usr/bin/perl
use strict;
use warnings;
use feature qw/say/;
use Net::Ping;
use Net::IP;

my $threads = 255;
my $duration = 2;
my @ip_team = ();
$|= 1;


my $ip = Net::IP->new('192.168.0.1 - 192.168.0.254')
    or die "Could not initiate object because " . Net::IP::Error() . "\n";


while ($ip) {
push @ip_team, $ip++ ->ip();
if ( $threads == @ip_team ) { Scan(@ip_team); @ip_team = () }
}

Scan(@ip_team);



sub Scan
{
my @Pids;

foreach my $ip (@_)
{
my $pid = fork();
die "Could not fork because $!\n" unless defined $pid;

if (0 == $pid)
{
my $ping = Net::Ping->new('icmp');
say "host $ip is up" if $ping->ping($ip, $duration);
$ping->close();
exit
}
else
{
push @Pids, $pid
}
}

foreach my $pid (@Pids) { waitpid($pid, 0) }
}


A less resource intensive alternative with POE:

http://poe.perl.org/?POE_Cookbook/Pinging_Multiple_Hosts
 

George Mpouras

A less resource intensive alternative with POE:

http://poe.perl.org/?POE_Cookbook/Pinging_Multiple_Hosts

I really do not know if it is really less resource intensive, and I am
not interested in finding out, but with the simple code you can always
have as many $threads as you want.

What I was wondering is why I really need super-bloated frameworks like
POE or similar if I can do the same much more simply using only basic
functions.
 

Rainer Weikusat

George Mpouras said:
I really do not know if it is really less resource intensive, and I am
not interested in finding out, but with the simple code you can always
have as many $threads as you want.

Except that this is total nonsense here, because all these different
processes end up stuffing IP datagrams into the TX queue of the same
network device, which then sends them one after another. This means
you're basically just adding a lot of overhead, because the scheduler
needs to deal with all the processes and other parts of the kernel
have to serialize them forcibly. Also, sending out ICMP echo requests
to hundreds or thousands of hosts as fast as the kernel can manage is
really bad: that will end up as hundreds or thousands of hosts
hammering your single computer with replies as fast as they can (if
you ask 10,000 people to throw tennis balls at you at the same time,
you'll end up being stoned to death by tennis balls).
What I was wondering is why I really need super-bloated frameworks like
POE or similar if I can do the same much more simply using only basic
functions.

A relatively simple way to do this would be to use two processes: one
which sends pings (rate-limited(!)) and another which blocks in recv
on a raw socket in order to process the replies. A single process
utilizing select and non-blocking sends and receives would be a
somewhat better choice.
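That single-process idea can be sketched in a few dozen lines. This is a hypothetical illustration, not code from the thread: it substitutes a non-blocking TCP connect to port 80 for the raw-socket ICMP probe (raw sockets need root), and the 50 probes/second rate, the port, and the timeout are assumed figures. A connect that completes, even with 'connection refused', proves the host answered.

```perl
#!/usr/bin/perl
# Sketch of a single-process, select()-based, rate-limited scanner.
# Assumption: a TCP connect to port 80 stands in for the ICMP probe.
use strict;
use warnings;
use feature 'say';
use IO::Socket::INET;
use IO::Select;
use Time::HiRes qw/sleep time/;

my $rate    = 50;   # probes per second (the rate limit; assumed value)
my $timeout = 2;    # seconds before a silent host is given up on

my $sel = IO::Select->new;
my %meta;           # stringified socket => [ ip, deadline ]

sub reap {
    my ($wait) = @_;
    for my $s ($sel->can_write($wait)) {   # connect finished (accepted or refused)
        say "host $meta{$s}[0] is up";
        $sel->remove($s); delete $meta{$s}; close $s;
    }
    my $now = time;
    for my $s ($sel->handles) {            # expire probes nobody answered
        next if $meta{$s}[1] > $now;
        $sel->remove($s); delete $meta{$s}; close $s;
    }
}

for my $n (1 .. 254) {
    my $ip = "192.168.0.$n";
    my $s  = IO::Socket::INET->new(PeerAddr => $ip, PeerPort => 80,
                                   Proto => 'tcp', Blocking => 0) or next;
    $sel->add($s);
    $meta{$s} = [ $ip, time + $timeout ];
    reap(0);                               # consume replies while still sending
    sleep 1 / $rate;                       # the rate limit
}
reap(0.25) while $sel->count;              # drain the tail
```

The point of the structure: sends are paced by the sleep, and replies are consumed in the same loop, so there is never more than a couple of seconds' worth of probes outstanding.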
 

Rainer Weikusat

Charles DeRykus said:
On 8/2/2013 5:39 PM, George Mpouras wrote:

[insane networking code]
A less resource intensive alternative with POE:

http://poe.perl.org/?POE_Cookbook/Pinging_Multiple_Hosts

As far as I can tell, that suffers from the same "Hit me as fast as
you can, bazillions, 'cos really tired of his life!"
problem. Something equivalent can probably be implemented without POE
by writing (at worst) a little more text (my gut feeling says 'less',
actually).
 

Charles DeRykus

I really do not know if it is really less resource intensive, and I am
not interested in finding out, but with the simple code you can always
have as many $threads as you want.

What I was wondering is why I really need super-bloated frameworks like
POE or similar if I can do the same much more simply using only basic
functions.

I have to admit I hadn't dabbled with POE before. However, I quickly
installed two POE modules and had the above program running immediately.
So, even with a sub-idiot's grasp of POE's internals, you can leverage a
far more scalable, less resource-hogging approach that can be adapted
to many different contexts.

It's true your simple fork example with no data sharing really doesn't
require POE or even threads. But later you may want something more
complicated which does. And there are many robust POE programs that you
can build upon without the riskiness and complexity of a fork (or an
even trickier thread) model. And, later, in another multi-tasking
scenario, what if you needed to fire off lots of MySQL queries, then
gather, analyze, maybe collate/reformat output? A viral fork model might
crash, not to mention seriously annoy other users... or re-kindle
thoughts about chucking it all for some simple goat herding in the Alps.
 

George Mpouras

# dizzyingly fast with port scanner, CPU is almost 0%
# this is forking the forks!









#!/usr/bin/perl
use strict;
use warnings;
use feature qw/say/;
use Net::Ping;

my $threads  = 80;
my $duration = 1;
my @ip_team  = ();
my $db_dir   = Unique_node_name('/tmp/.fastscan');

mkdir $db_dir or die "Could not create directory \"$db_dir\" because \"$^E\"\n" unless -d $db_dir;
$| = 1;

my @ports = (21, 22, 80, 135, 443);
my ($o1a,$o1b, $o2a,$o2b, $o3a,$o3b, $o4a,$o4b) = Check_and_define_octet('192.168.0.[1-254]');

foreach my $o1 ($o1a .. $o1b) {
foreach my $o2 ($o2a .. $o2b) {
foreach my $o3 ($o3a .. $o3b) {
foreach my $o4 ($o4a .. $o4b) {

    push @ip_team, "$o1.$o2.$o3.$o4";
    if ( $threads == @ip_team ) { Scan(@ip_team); @ip_team = () }

}}}}

Scan(@ip_team);
system("/bin/rm -rf $db_dir");


sub Scan
{
    my @Pids;

    foreach my $ip (@_)
    {
        my $pid = fork();
        die "Could not fork because $!\n" unless defined $pid;

        if (0 == $pid)
        {
            my $ping = Net::Ping->new('icmp');   # icmp pings need root privileges
            $ping->service_check(0);

            if ( $ping->ping($ip, $duration) )
            {
                mkdir "$db_dir/$ip";
                my @SubPids;

                foreach my $port (@ports)
                {
                    my $pid = fork();
                    die "Could not fork because $!\n" unless defined $pid;

                    if (0 == $pid)
                    {
                        my $subping = Net::Ping->new('tcp', 2);
                        $subping->service_check(1);   # count the port open only if the service accepts
                        $subping->port_number($port);
                        mkdir "$db_dir/$ip/$port" if $subping->ping($ip);
                        $subping->close;
                        exit 0;
                    }
                    else
                    {
                        push @SubPids, $pid;
                    }
                }

                foreach my $pid (@SubPids) { waitpid($pid, 0) }
                say "$ip is up";
                chdir "$db_dir/$ip";
                foreach ( glob '*' ) { say "\t port $_ is open" }
            }

            $ping->close();
            exit 0;
        }
        else
        {
            push @Pids, $pid;
        }
    }

    foreach my $pid (@Pids) { waitpid($pid, 0) }
}


sub Unique_node_name
{
    my ($dir, $file) = $_[0] =~ /^(.*?)([^\/]*)$/;
    if ( $dir =~ /^\s*$/ ) { $dir = '.' } else { $dir =~ s/\/*$// }
    $file = 'node' if $file =~ /^\s*$/;
    return "$dir/$file" if ! -e "$dir/$file";
    my $i = 1;
    while ( -e "$dir/$i.$file" ) { $i++ }
    "$dir/$i.$file"
}


# Accepts a host definition like 192.168.[0-3].[1-254]
# and returns, for every octet, its start and stop number.
# For example, for [10-12].1.86.[1-100] it returns
# 10,12, 1,1, 86,86, 1,100
#
sub Check_and_define_octet
{
    my @O;
    ( my $hosts = $_[0] ) =~ s/\s+//g;
    ( $O[0]->[0], $O[1]->[0], $O[2]->[0], $O[3]->[0] ) =
        $hosts =~ /^([^.]+)\.([^.]+)\.([^.]+)\.([^.]+)$/
        or die "The host definition argument is not like 192.168.[0-3].[1-254]\n";
    my $i = 0;

    foreach my $start (1,0,0,1)
    {
        if ( $O[$i]->[0] =~ /^\d+$/ )
        {
            @{$O[$i]}[0,1] = (( $O[$i]->[0] >= $start ) && ( $O[$i]->[0] < 255 ))
                ? @{$O[$i]}[0,0]
                : die "Octet \"$O[$i]->[0]\" should be an integer from $start to 254\n";
        }
        elsif ( $O[$i]->[0] =~ /\[(\d+)-(\d+)\]/ )
        {
            $O[$i]->[0] = (( $1 >= $start ) && ( $1 < 255 )) ? $1 : $start;
            $O[$i]->[1] = (( $2 >= $start ) && ( $2 < 255 )) ? $2 : 254;
            @{$O[$i]}[0,1] = $O[$i]->[0] > $O[$i]->[1] ? @{$O[$i]}[1,0] : @{$O[$i]}[0,1];
        }
        else
        {
            die "Sorry, but octet \"$O[$i]->[0]\" should be something like 12 or [10-254]\n";
        }
        $i++;
    }

    #use Data::Dumper; print Dumper \@O; exit;
    @{$O[0]}, @{$O[1]}, @{$O[2]}, @{$O[3]}
}
 

Rainer Weikusat

George Mpouras said:
# dizzyingly fast with port scanner, CPU is almost 0%
# this is forking the forks!

[more of this]

Something which also deserves to be mentioned here: the sole reason
for this insane fork-orgy is that George has to work around the
library he chose to use for 'network communication', which offers only
a synchronous 'send request and wait for reply' interface. This is
actually not atypical for 'technical solutions' that start with 'Which
pickle jar standing idly around here might be suitable for hammering
this nail into the wall?': it starts with some clueless,
devil-may-care individual selecting the wrong tool for the job at hand
because he reaches for the closest one, and then proceeds with a set
of 'ingenious' workarounds for its deficiencies.
 

George Mpouras

On 5/8/2013 10:45, Rainer Weikusat wrote:
George Mpouras said:
# dizzyingly fast with port scanner, CPU is almost 0%
# this is forking the forks!

[more of this]

Something which also deserves to be mentioned here: the sole reason
for this insane fork-orgy is that George has to work around the
library he chose to use for 'network communication', which offers only
a synchronous 'send request and wait for reply' interface. This is
actually not atypical for 'technical solutions' that start with 'Which
pickle jar standing idly around here might be suitable for hammering
this nail into the wall?': it starts with some clueless,
devil-may-care individual selecting the wrong tool for the job at hand
because he reaches for the closest one, and then proceeds with a set
of 'ingenious' workarounds for its deficiencies.




Playing around for fun I’ve done the same thing using the modules

threads
threads::shared

The code was much smaller, but to my surprise CPU usage jumped from
almost 0 to 60% and the script altogether consumed about 1.5 GB of RAM, wow!

I think this is because the threads module creates real threads,
probably duplicating the whole memory for each. Fork somehow creates
something lighter …
 

Rainer Weikusat

George Mpouras said:
On 5/8/2013 10:45, Rainer Weikusat wrote:
George Mpouras said:
# dizzyingly fast with port scanner, CPU is almost 0%
# this is forking the forks!

[more of this]

Something which also deserves to be mentioned here: the sole reason
for this insane fork-orgy is that George has to work around the
library he chose to use for 'network communication', which offers only
a synchronous 'send request and wait for reply' interface. This is
actually not atypical for 'technical solutions' that start with 'Which
pickle jar standing idly around here might be suitable for hammering
this nail into the wall?': it starts with some clueless,
devil-may-care individual selecting the wrong tool for the job at hand
because he reaches for the closest one, and then proceeds with a set
of 'ingenious' workarounds for its deficiencies.
[...]

Playing around for fun I’ve done the same thing using the modules

threads
threads::shared

The code was much smaller, but to my surprise CPU usage jumped from
almost 0 to 60% and the script altogether consumed about 1.5 GB of RAM, wow!

I think this is because the threads module creates real threads,
probably duplicating the whole memory for each. Fork somehow creates
something lighter …

The perl (bytecode) interpreter doesn't support
multithreading. Because of this, Perl threading support works such
that each thread gets its own copy of the interpreter created by
making a memory-to-memory copy of an already existing one. This
implies that creating threads is slow and multi-threaded Perl programs
need a lot of memory because despite running in a shared address space
they can't really share any of it.
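The 'no real sharing' part is easy to demonstrate with a hypothetical five-line experiment (not from the thread): a plain lexical incremented inside four threads stays at zero in the parent, because each thread operated on its own copy of the interpreter state; only the variable explicitly marked `:shared` is visible across threads.

```perl
#!/usr/bin/perl
# Demonstrates that Perl ithreads copy everything: each thread gets its
# own $plain, so the parent's copy never changes; only the variable
# explicitly declared ':shared' is actually shared.
use strict;
use warnings;
use threads;
use threads::shared;

my $plain = 0;
my $shared :shared = 0;

my @t = map {
    threads->create(sub {
        $plain++;                          # modifies the thread's private copy
        { lock($shared); $shared++ }       # modifies the one shared copy
    });
} 1 .. 4;
$_->join for @t;

print "plain=$plain shared=$shared\n";     # prints "plain=0 shared=4"
```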

In contrast to this, fork nowadays usually works by copying the page
table of the forking process and changing the memory access
permissions to read-only: Initially, both parent and child process
will share all memory. As soon as either of both tries to write to
something, a page fault occurs and the kernel handles that by
allocating a new page of memory, copying the page where the fault
occurred and then giving one to the child and the other to the parent
with access permissions changed back to read-write. This means copying
happens only when needed and only what really has to be copied is.

But for 'network scanning' this is an aside. The sensible way to do
that is to make it work asynchronously, ie, instead of waiting for a
reply after each request, keep sending whatever requests remain to be
sent (again, rate-limited) and process replies as they arrive.
 

George Mpouras

On 5/8/2013 17:24, Rainer Weikusat wrote:
George Mpouras said:
On 5/8/2013 10:45, Rainer Weikusat wrote:
# dizzyingly fast with port scanner, CPU is almost 0%
# this is forking the forks!

[more of this]

Something which also deserves to be mentioned here: the sole reason
for this insane fork-orgy is that George has to work around the
library he chose to use for 'network communication', which offers only
a synchronous 'send request and wait for reply' interface. This is
actually not atypical for 'technical solutions' that start with 'Which
pickle jar standing idly around here might be suitable for hammering
this nail into the wall?': it starts with some clueless,
devil-may-care individual selecting the wrong tool for the job at hand
because he reaches for the closest one, and then proceeds with a set
of 'ingenious' workarounds for its deficiencies.
[...]

Playing around for fun I’ve done the same thing using the modules

threads
threads::shared

The code was much smaller, but to my surprise CPU usage jumped from
almost 0 to 60% and the script altogether consumed about 1.5 GB of RAM, wow!

I think this is because the threads module creates real threads,
probably duplicating the whole memory for each. Fork somehow creates
something lighter …

The perl (bytecode) interpreter doesn't support
multithreading. Because of this, Perl threading support works such
that each thread gets its own copy of the interpreter created by
making a memory-to-memory copy of an already existing one. This
implies that creating threads is slow and multi-threaded Perl programs
need a lot of memory because despite running in a shared address space
they can't really share any of it.

In contrast to this, fork nowadays usually works by copying the page
table of the forking process and changing the memory access
permissions to read-only: Initially, both parent and child process
will share all memory. As soon as either of both tries to write to
something, a page fault occurs and the kernel handles that by
allocating a new page of memory, copying the page where the fault
occurred and then giving one to the child and the other to the parent
with access permissions changed back to read-write. This means copying
happens only when needed and only what really has to be copied is.

But for 'network scanning' this is an aside. The sensible way to do
that is to make it work asynchronously, ie, instead of waiting for a
reply after each request, keep sending whatever requests remain to be
sent (again, rate-limited) and process replies as they arrive.



Very sensible explanation.

Out of curiosity I fired up some similar scans using nmap.
It is very fast while respecting CPU and network utilization.
I will have to look at its code to get a general idea of how it works.
 

Charles DeRykus

Charles DeRykus said:
On 8/2/2013 5:39 PM, George Mpouras wrote:

[insane networking code]
A less resource intensive alternative with POE:

http://poe.perl.org/?POE_Cookbook/Pinging_Multiple_Hosts

As far as I can tell, that suffers from the same "Hit me as fast as
you can, bazillions, 'cos really tired of his life!"
problem. Something equivalent can probably be implemented without POE
by writing (at worst) a little more text (my gut feeling says 'less',
actually).


POE though is using non-blocking I/O and an event loop behind the
scenes, which is certainly a reasonable approach, IIUC. Admittedly
there's a fair amount of overhead with POE's pseudo-kernel set-up but,
at least, there's a single process rather than cloning perl
interpreters or forking a bazillion times.

And what if the fork/thread bomber later finds he needs to save or
post-process the IPs that successfully pinged... He'll inevitably need
to delve into IPC or variable sharing/locking intricacies via
threads::shared, etc. POE's single process makes it trivial. Finally,
there are the extensive, robust POE networking examples already
available as templates...
 

Rainer Weikusat

Charles DeRykus said:
Charles DeRykus said:
On 8/2/2013 5:39 PM, George Mpouras wrote:

[insane networking code]
A less resource intensive alternative with POE:

http://poe.perl.org/?POE_Cookbook/Pinging_Multiple_Hosts

As far as I can tell, that suffers from the same "Hit me as fast as
you can, bazillions, 'cos really tired of his life!"
problem. Something equivalent can probably be implemented without POE
by writing (at worst) a little more text (my gut feeling says 'less',
actually).

POE though is using non-blocking I/O and an event loop behind the
scenes, which is certainly a reasonable approach, IIUC. Admittedly
there's a fair amount of overhead with POE's pseudo-kernel set-up but,
at least, there's a single process rather than cloning perl
interpreters or forking a bazillion times.

This will make the issue I referred to as 'DDoS suicide' in another
posting worse because the probe packets are being sent out faster: The
original posting was about pinging all hosts on the local LAN. And
this local LAN may neither be 'small' (a /24 or less) nor sparsely
populated. Let's assume that a thousand hosts reply: Somewhat
simplified, these have 1000x the bandwidth and other resources
available for sending their messages than the host who sent the probes
can muster for processing them.
 

George Mpouras

POE though is using non-blocking I/O and an event loop behind the scenes
which is certainly a reasonable approach IIUC. Admittedly there's a a
fair amount of overhead with POE's pseudo-kernel set-up, but, at least,
there's a single process rather than cloning perl interpreters or
forking a bazillion times.



I have played with POE but not in depth, because I got tired of its big
documentation and its many modules.
I think I will try once more, but I will monitor the system carefully
to check whether what it claims is true.
I mean, the threads module looks perfect in the documentation, but in
real life ..... forget it, at least for really intensive tasks.
 

Charles DeRykus

This will make the issue I referred to as 'DDoS suicide' in another
posting worse because the probe packets are being sent out faster: The
original posting was about pinging all hosts on the local LAN. And
this local LAN may neither be 'small' (a /24 or less) nor sparsely
populated. Let's assume that a thousand hosts reply: Somewhat
simplified, these have 1000x the bandwidth and other resources
available for sending their messages than the host who sent the probes
can muster for processing them.

Actually, it appears that POE has already dealt with that issue
(although tuning might take guesswork) with its own modulation setting:

http://search.cpan.org/~rcaputo/POE-Component-Client-Ping-1.173/lib/POE/Component/Client/Ping.pm

From above doc:

Parallelism => $limit

Parallelism sets POE::Component::Client::Ping's maximum number of
simultaneous ICMP requests. Higher numbers speed up the processing of
large host lists, up to the point where the operating system or network
becomes oversaturated and begins to drop packets.
 

Rainer Weikusat

Charles DeRykus said:
Actually, it appears that POE has already dealt with that issue
(although tuning might take guesswork) with its own modulation
setting:

http://search.cpan.org/~rcaputo/POE-Component-Client-Ping-1.173/lib/POE/Component/Client/Ping.pm

From above doc:

Parallelism => $limit

The solution is really simply to rate-limit requests being sent, which
can be accomplished by something as hideously arcane as doing a
(subsecond) 'sleep' between two sends when using a dedicated sending
thread/process (presumably, that's not something whose name looks like
the German word for 'trainwreck' would ever consider), or by using
select with a suitable timeout to wait for replies between two
sends. This 'parallelism' idea is inherently broken because replies
aren't guaranteed to arrive.
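The 'subsecond sleep between two sends' variant really is that small. A sketch, in which `send_probe` is a hypothetical stand-in for the real transmit code and 100/second is an assumed figure:

```perl
#!/usr/bin/perl
# Rate-limiting by sleeping between sends, as described above.
use strict;
use warnings;
use Time::HiRes qw/sleep/;

# Call $send->($target) for each target, at most $per_second times a second.
sub paced {
    my ($per_second, $send, @targets) = @_;
    my $interval = 1 / $per_second;
    for my $t (@targets) {
        $send->($t);
        sleep $interval;     # the subsecond pause between two sends
    }
}

# usage: cap the probes at 100 per second (an assumed figure)
# paced(100, \&send_probe, @ip_list);
```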

NB: I completely understand how to 'work around that' by employing a
general-purpose timer implementation (which might even use a sensible
algorithm for priority queues although this would really surprise me)
combined with a general purpose 'coroutine' aka 'cooperative userspace
threading' (Mark 45054) implementation with a combined size of 1E9
LOC (25% were never executed so far, another 25% are executed but
don't do anything useful, a further 25% workarounds for bugs in the
remaining 25%).
 

Rainer Weikusat

Rainer Weikusat said:
[...]
[...]

NB: I completely understand how to 'work around that' by employing a
general-purpose timer implementation


[...]

Something which suggests itself in this context (and a nice read):

A novice programmer was once assigned to code a simple financial package.

The novice worked furiously for many days, but when his master
reviewed his program, he discovered that it contained a screen
editor, a set of generalized graphics routines, an artificial
intelligence interface, but not the slightest mention of
anything financial.

When the master asked about this, the novice became
indignant. ``Don't be so impatient,'' he said, ``I'll put in
the financial stuff eventually.''

http://www.canonical.org/~kragen/tao-of-programming.html
 

Charles DeRykus

...

The solution is really simply to rate-limit requests being sent, which
can be accomplished by something as hideously arcane as doing a
(subsecond) 'sleep' between two sends when using a dedicated sending
thread/process (presumably, that's not something whose name looks like
the German word for 'trainwreck' would ever consider), or by using
select with a suitable timeout to wait for replies between two
sends. This 'parallelism' idea is inherently broken because replies
aren't guaranteed to arrive.
NB: I completely understand how to 'work around that' by employing a
general-purpose timer implementation (which might even use a sensible
algorithm for priority queues although this would really surprise me)
combined with a general purpose 'coroutine' aka 'cooperative userspace
threading' (Mark 45054) implementation with a combined size of 1E9
LOC (25% were never executed so far, another 25% are executed but
don't do anything useful, a further 25% workarounds for bugs in the
remaining 25%).

Maybe I'm confused, but my understanding is that each boxcar on the
"POE-Zug" fires off its ICMP ping, waits for the user-configurable
timeout, and gathers any/all responses. The overall send rate is
throttled by the configurable parallelism setting. If that rate is
properly tuned to avoid saturating the network, where does the train
run off the tracks? (I'd guess even an alternative micro-sleep between
requests might potentially bog down a network without tuning.)
 
