When to check the return value of malloc

Dennis (Icarus)

Phil Carmody said:
An object being 'constructed'? Are you sure you were using C?

I know I was using C++. I doubt C would be any different in this respect.
:)

Dennis
 
Phil Carmody

sandeep said:
Multiple evaluation is very unlikely in this case. This answer looks
spurious to me.

The operative words are "to me". We've already ascertained that
your perspective is desperately naive. Giving us more evidence
does not strengthen your case at all. The macro you suggested
gains you _nothing_, in particular it doesn't give you the things
that you mindlessly asserted that it does give you in anything
apart from the compilers with the poorest QoI.

Sorry to break this to you, but you are *not* cleverer than your
compiler. Stop pretending you are. And please stop doing it in
public, it's painful to watch.

Phil
 
Nick Keighley

Suppose your program is a filter, used in a (unix shell: sorry!) commandline like:

$ find . -name '*.c' -print | sort | uniq | yourprogram | lpr
FAILED $

What could the user do to help you solve *your* problem?
Would he have liked it if "FAILED" had been written to stdout?
Will he ever attempt to use your program again?

I don't understand.
 
Nick Keighley

Nick Keighley wrote:

[not all users are programmers]
) Note yesterday I encountered someone who got a "not enough memory to
) perform operation" error and were surprised because their disk had
) plenty of space.

Quite understandable, given that many OSes use disk space as virtual
memory.

this user wouldn't have known what the term "virtual memory" meant.

Virtual memory doesn't usually eat your entire disk.
 
Ersek, Laszlo

I don't understand.

I don't know what it is you don't understand, so I'll start with the pipeline.
- Create a list of *.c files present in the directory hierarchy rooted at
the current working directory,
- sort them lexicographically,
- eliminate duplicate lines (duplicate lines were unlikely in this case,
but I digress),
- process the remaining list with "yourprogram",
- pass the output of "yourprogram" to the line printer (ie. queue the
output as a print job -- further processing is possible "within" lpr,
ie. automagically determining whether the data is plain text,
PostScript, PDF and so on, and translating it to the configured printer's
language).

(These processes run in parallel.)

If "yourprogram" writes FAILED to stdout instead of stderr, then FAILED
will show up interleaved with (or after) the other data sent to "lpr".
Supposing "yourprogram" *either* produces data *or* it writes FAILED, and
that lpr knows to ignore an empty job, writing normal data and FAILED to
different output streams works correctly. Writing FAILED to stdout would
print a page with "FAILED (snicker snicker)" on it.

Of course, if "yourprogram" writes data to stdout before it emits FAILED
to stderr, you still end up with an incomplete page (or book).

Cheers,
lacos
 
Moi

I don't know what it is you don't understand, so I'll start with the
pipeline.
- Create a list of *.c files present in the directory hierarchy
rooted at the current working directory,
- sort them lexicographically,
- eliminate duplicate lines (duplicate lines were unlikely in this case,
but I digress),
- process the remaining list with "yourprogram",
- pass the output of "yourprogram" to the line printer (ie. queue the
output as a print job -- further processing is possible "within" lpr,
ie. automagically [...])
If "yourprogram" writes FAILED to stdout instead of stderr, then FAILED
will show up interleaved with (or after) the other data sent to "lpr".
Supposing "yourprogram" *either* produces data *or* it writes FAILED,
and that lpr knows to ignore an empty job, writing normal data and
FAILED to different output streams works correctly. Writing FAILED to
stdout would print a page with "FAILED (snicker snicker)" on it.

Of course, if "yourprogram" writes data to stdout before it emits FAILED
to stderr, you still end up with an incomplete page (or book).

Exactly.
My whole point was that, if the program decides to fail
it could _at least_ try to minimize the damage.
Writing nonsense to stdout in some cases only increases the damage.

AvK
 
Willem

Nick Keighley wrote:
)> Nick Keighley wrote:
)
) [not all users are programmers]
)
)> ) Note yesterday I encountered someone who got a "not enough memory to
)> ) perform operation" error and were surprised because their disk had
)> ) plenty of space.
)>
)> Quite understandable, given that many OSes use disk space as virtual
)> memory.
)
) this user wouldn't have known what the term "virtual memory" meant.

However, he could very well have known that you get out-of-memory errors
more often when your disk is full. Which is true on some OSes. Especially
the ones that are used by people who are not computer-savvy.

) Virtual memory doesn't usually eat your entire disk

There are OSes where it does. Especially older versions of the one that
is most used by non-computer-savvy users.


SaSW, Willem
--
Disclaimer: I am in no way responsible for any of the statements
made in the above text. For all I know I might be
drugged or something..
No I'm not paranoid. You all think I'm paranoid, don't you !
#EOT
 
Keith Thompson

Nick Keighley said:
oops! Don't agree. A named constant makes it clear what the constant
is for: it's a good use of abstraction. Then there's multiple use of
the same number (less likely with really big numbers).

sm.state = 9;
write_port (9, 9);
for (equip_num = 0; equip_num < 9; equip_num++)
    reset_equip (equip_num);

If you want to change one of those you have to inspect every 9 in the
program.

But in the safeMalloc case, there's no point in using a #define. The
fact that it's immediately followed by #undef implies that the writer
wants it to exist only in a limited scope. So why not just declare it
that way?

...
static void *emrgcy = 0;
void *x1;
const size_t ESIZE = 0x40000000;
if (!(emrgcy = emrgcy ? emrgcy : malloc(ESIZE)))
...

Of course I wouldn't call it ESIZE, and I wouldn't write that ugly
condition, and I'd use a lot more whitespace, and I wouldn't do it
this way in the first place.
 
Keith Thompson

sandeep said:
Multiple evaluation is very unlikely in this case. This answer looks
spurious to me.

The question you should be asking yourself is why you'd want to
write a macro rather than a function, not the other way around.

Use a function unless you have a good specific reason to use a
macro.
 
Lew Pitcher

This was quite a long rant!! I think I see your point but you have to
imagine your grannie when Word crashes. I think she will not like to see
a message like
"malloc failed at src\lib\unicode\mapper\table.c:762"
I think that will confuse her!

Anyway I have adopted your suggestion, also used a function instead of a
macro, and built in some extra functionality. Now it will keep some memory
back to use later on if allocations start failing for added robustness.

void* safeMalloc(size_t x)
{
    static void* emrgcy=0;
    void* x1;
#define ESIZE 0x40000000uLL
    if(!(emrgcy=emrgcy?emrgcy:malloc(ESIZE)))
#undef ESIZE
        printf("WARNING running in unstable mode, program may crash at any "
                " time....Close open programs, allow more virtual memory or "
                " install extra RAM");
    if(!(x1=malloc(x))) {
        if(emrgcy) {
            free(emrgcy);
            x1=malloc(x);
        } else {
            printf("Severe memory failure, program cannot continue at line "
#define STRGFY(x) #x
                    STRGFY(__LINE__)
#undef STRGFY
                    " stack dump follows");
            abort();
            exit(1);
        }
    }
    return x1;

}

The shop I worked in (before I retired) had a number of "rules of
thumb" wrt good design and implementation. I agreed with most of those
rules, and utilized them quite regularly.

The above code (and the code it was derived from) violates one of
those "rules" by making an application policy decision within a
support/utility function. In other words, I don't like the invocation
of the exit() function here; a memory allocation utility function
should return allocated memory, or NULL, and leave the handling of a
"no memory available" condition to upper layers of logic.

It is acceptable to have this utility function /log/ the error. It is
not acceptable to have the utility function decide that the
application /cannot recover/ from this error.

--
Lew Pitcher
Master Codewright & JOAT-in-training | Registered Linux User #112576
Me: http://pitcher.digitalfreehold.ca/ | Just Linux: http://justlinux.ca/
---------- Slackware - Because I know what I'm doing.
------
 
Seebs


Okay. The obvious answer is that you don't have to think about execution
context or use elaborate hacks with commas and ?:, you can just call it
like a function and it'll work.
Multiple evaluation is very unlikely in this case. This answer looks
spurious to me.

Again, you're trying way too hard. Don't spend a lot of time trying to
figure out whether a given macro is likely to cause trouble if you evaluate
its arguments too often. Just remember that this is a common risk, and
avoid it.

Anyway, it's not impossible to come up with a conceivable allocation where the
argument to the allocation includes a modifier.

extern int lengths[];
static int length_index = 0;

void *
next_block(void) {
    return malloc(lengths[length_index++]);
}

But mostly, the issue here is one of developing good habits. If you become
a programmer, you *will* end up having to write something quickly, on not
enough sleep, or while a bit sick, or something. You will make mistakes.
At that point, whether you succeed or fail will depend, in no small part,
on whether you have good *habits*. If the way you do something when you don't
have time or resources to think things through carefully and test everything
out works anyway, because it's conservative and straight-forward, you'll be
okay. If you default to creating macros unless you've already thought of
how multiple-evaluation issues will hurt you, you're going to get bitten,
badly.

Here's the thing. You say "multiple-evaluation is very unlikely". Things
that are "very unlikely" happen pretty often, in the course of a large
programming project. Which is to say, the chances are pretty good that,
in a large program, you'll get hit by it at least once.

This comes back to the problem with your evaluation that "many" users
won't understand an informative error message. You're picking the case you
think is the most common, and building everything around that, accepting
a very high risk of extremely bad outcomes in any case other than the most
common.

You have to remember the scale on which computers operate. They run extremely
fast, and there are a lot of them. No, a one in a million chance of failure
is NOT safe. My filesystem emulator, during the course of a typical build,
executes hundreds of thousands of fairly complex operations. If you turn on
logging, it can be ten million operations or more. Per run. We typically
build for about sixty targets per spin, which is roughly daily, plus we have
over a hundred developers doing several runs a day.

Something that is "vanishingly unlikely" typically happens 2-3 times a day,
at least.

You have the interest in programming to be good at this; put some time
into picking up a good philosophy of programming, and you'll have a great
time.

-s
 
Seebs

Note yesterday I encountered someone who got a "not enough memory to
perform operation" error and were surprised because their disk had
plenty of space.

Oh, sure, they were confused.

But! They could report the error to someone less confused, and the less
confused person could at least guess at what went wrong.
wow. And you're on decaff?

I have had a lot of very negative experiences involving "an unspecified
error occurred" and similar things.
the other cause for memory error is that some other program has eaten
the memory
True!
what universe do you live in? Are most of the people you know
programmers?

Consider, say, "World of Warcraft". Played by many people who are only
able to play it because someone smarter plugged in a mouse for them.

But out of eleven million users, there are hundreds to thousands of
programmers, at the least, which I consider to be "many" users.
I disagree. Have you seen Airport displays with Windows NT register
dumps?

Occasionally. I think they are not noticeably worse than an airport
display containing an out-of-date schedule followed by the information
"unspecified error".
do you have a limit to this? "Database has deadlocked" "link layer
failure" "too many hash collisions"

My practical rule is usually: If you are targeting non-programmers,
and you don't have the resources to, say, spawn an error-handling application
that'll debug the executable automatically, package up a report, and
send it to the developers automatically, go for a simple textual explanation.
I usually aim for: If one of the non-technical users I know asked me "what
happened", what answers could I give them that I think would leave them
with a feeling that they'd gotten an answer which referred in some way to
an event... And then, if one of the technical users I know asked me "what
happened", what answers could I give them that I think would give them an
accurate (if not necessarily comprehensive) understanding of the failure.

The sets have, thus far, always had an intersection. It's okay to give less
information than someone who's been hired to debug the program would need,
and it's okay to give more information than someone who has only recently
been convinced that this is not actually magic would be able to understand.
The goal is to find something that's not hugely intimidating to reasonably
rational users (you can't do anything about that last five percent), and
not too vague to be useful at all for a more experienced user.

-s
 
Seebs

oops! Don't agree. A named constant makes it clear what the constant
is for: it's a good use of abstraction. Then there's multiple use of
the same number (less likely with really big numbers).
sm.state = 9;
write_port (9, 9);
for (equip_num = 0; equip_num < 9; equip_num++)
    reset_equip (equip_num);
If you want to change one of those you have to inspect every 9 in the
program.

True.

And come to think of it, every time I've used a value "only once", it's ended
up being "twice" within a week or two anyway.

-s
 
Nick Keighley

[reinserting snipped material]

I don't know what is you don't understand so I'll start with the pipeline.

I suppose I was a little terse. I know what a pipeline is. I couldn't
understand what your post had to do with abstraction leakage. Were you
only addressing the "don't write to stdio" bit?

If "yourprogram" writes FAILED to stdout instead of stderr, then FAILED
will show up interleaved with (or after) the other data sent to "lpr".
Supposing "yourprogram" *either* produces data *or* it writes FAILED, and
that lpr knows to ignore an empty job, writing normal data and FAILED to
different output streams works correctly. Writing FAILED to stdout would
print a page with "FAILED (snicker snicker)" on it.

Of course, if "yourprogram" writes data to stdout before it emits FAILED
to stderr, you still end up with an incomplete page (or book).

well I don't write error data to stdout and I write occasional
filters, but this seems a bit nitpicky. If it fails I don't get useful
output, and where the error message goes isn't all that important. I
suppose "FAILED" on the printer is a bit poor! But then so is "MEMORY
ERROR".
 
sandeep

Seebs said:
Let me guess. Someone told you to define symbolic names for constants,
right?

Yes, of course!
This is not how you do it.

1. If you're going to define a constant, define it and leave it
defined.

This is very bad practise. Localizing the #define is good for the same
reason that using local instead of global variables is.
2. Don't use a name starting with a capital E followed by
another capital letter, those are reserved for errno values. 3. If
you're only using it once, don't feel like you have to #define it.

I would say that the #define is a self-documenting form of code, no?
4.
Don't use "uLL" on a constant that's unambiguously within the size range
of an ordinary signed long. You don't need any qualifier at all,
although in theory a system could exist where that value is too big for
size_t, in which case you'd be allocating 0 bytes.

I think this is wrong. With no "qualifier", the number will be
interpreted as an int and undergo default promotions. Because int may be
16 bits this could overflow.
5. Don't get so
clever. Try:

if (!emrgcy) {
    emrgcy = malloc(0x40000000);
    if (!emrgcy) {
        fprintf(stderr, "Uh-oh, failed to allocate spare memory.\n");
    }
}

I think this is the same logic but with longer and more complicated
code...
6. Don't allocate a GIGABYTE of memory like that -- all this does is
massively increase the chance of catastrophic failure, as a likely
response from a system which overcommits is to determine that your
process allocated a TON of memory, doesn't use most of it, and is
probably the best candidate for being killed out of hand. A megabyte or
two, sure, I guess.

I chose a gigabyte because most allocations will be less than 1 GB... if
you only allow a few MB there could easily be a late allocation for more
than the emergency memory can satisfy.
7. Actually, even then, this is just cargo cult
stuff.

I don't know what cargo cult stuff is.
Again, don't do this. With extremely rare exceptions, you should NEVER
be using #undef on something you just defined.

By the same argument, with extremely rare exceptions you should NEVER be
using block-scope variables.
Reading your code, I get the impression you're trying to aim for some
kind of code density, with cool tricks you've seen all thrown in
together to make the code look more impressive.

I like using advanced C features, yes. It makes programming fun. I think
all good programmers will be able to understand my code.
 
Tim Harig

Yes, of course!

I partially agree with removing so-called "magic numbers" from the code in
favor of more descriptive names where it makes sense to do so.
I think this is the same logic but with longer and more complicated
code...

The logic may be the same; however, Seebs' version is much more intuitive
than yours. The extra spacing and indentation provide visual cues that
make it easy to pick up what is going on and are consistent with other
properly indented code. I have seen *many* bugs created when using "?:" to
obfuscate code that could clearly be seen using the normal if/else
syntax.
I chose a gigabyte because most allocations will be less than 1 GB... if
you only allow a few MB there could easily be a late allocation for more
than the emergency memory can satisfy.

I work with many systems that *have* less than a gigabyte of memory. How
can you be sure that nobody will ever try to run your code on such a
system? How large is your memory? What if somebody runs multiple
instances of your program? You have already been told how this can
backfire on operating systems designed to overcommit.

The bottom line is that your "emergency memory" is a very bad idea. There
are much better ways of handling low memory conditions.
I like using advanced C features, yes. It makes programming fun. I think
all good programmers will be able to understand my code.

Good code isn't clever. Good code is clear for whoever has to read and
maintain it. You may think that showing off your 1337 skilz is fun. The
guy who has to clean up the bugs you have written because you obfuscated
your code isn't going to have much fun.

The end effect is that you are creating write-once code. Write once
because it is designed to be thrown away and re-written rather than making
any effort to maintain such poorly written code.
 
Seebs

Yes, of course!

I figured.
This is very bad practise. Localizing the #define is good for the same
reason that using local instead of global variables is.

Wrong.

The entire POINT of a symbolic constant is to have every usage be the same!

With your system, it is quite easy to imagine:
#define SIZE 1024
v = malloc(SIZE);
#undef SIZE

...

#define SIZE 2048
memcpy(v, src, SIZE);
#undef SIZE

Might I suggest that, since you are clearly at the very beginning newbie
level, you not go around telling people that something is "bad practice"
when they warn you that you're doing something very dangerous?
I would say that the #define is a self-documenting form of code, no?

Not as you used it.
I think this is wrong. With no "qualifier", the number will be
interpreted as an int and undergo default promotions. Because int may be
16 bits this could overflow.

Again, please consider the *remote* possibility that, with twenty years of
active experience using C, I might have a TINY bit of information.

Constants do not work that way. If a constant is too big to be an int,
it is AUTOMATICALLY made into a larger type, if needed. The constant
in question cannot overflow.

Furthermore, the rules for constants starting with 0x are different.

Furthermore, even if you needed to modify the type, "L" would be sufficient.
I think this is the same logic but with longer and more complicated
code...

No. It is the same logic (or close to it) with longer and SIMPLER code.

Always write something as simply as you can when first writing something.
If you need to do something fancy, do it after you've got the simple
version working.
I chose a gigabyte because most allocations will be less than 1 GB...

I understood that. However, what you've done is cause many systems to
be unable to allocate that memory at all, and many more to fail
catastrophically because you allocated a gigabyte of memory you didn't
need, when they would have been fine without it.
if
you only allow a few MB there could easily be a late allocation for more
than the emergency memory can satisfy.

And the emergency memory *does not work* on many systems. At all. I have
used many systems on which your emergency memory would fail completely, or
cause the program to get killed preemptively by the OS. I have used many
on which freeing that "emergency" memory would have NO EFFECT AT ALL on
any allocation of under about 128MB, because the implementation treats
large allocations differently from small allocations.

The entire idea is just plain wrong. You have formed some kind of crazy
theory about "how malloc works", and that theory is incorrect, leading you
to do stuff that makes no sense.

This is like driving a car and making a point of manually triggering the
airbags before you even start the car, so that you'll be safe in the event
of a crash.
I don't know what cargo cult stuff is.

During WWII, various forces set up staging areas on islands in the Pacific,
some of which were inhabited. Some primitive cultures on the more isolated
islands were unable to comprehend why suddenly there were planes and food
and stuff. They didn't know how planes worked, or where the food came from.
When they were hungry, they did their best to build things that looked sort
of like airstrips, because that would make more "cargo" come.

What you are doing is like this. You don't understand malloc, you've seen
something sort of like this somewhere, and you're imitating it without
understanding what it was, how it worked, or when it would (or wouldn't)
be useful.

Don't do that.
By the same argument, with extremely rare exceptions you should NEVER be
using block-scope variables.

No, not the same argument at all.

The preprocessor isn't scoped, and isn't supposed to be scoped. If you
are #defining something, it should be because you want to make sure that
any possible reference to it will get the same value.
I like using advanced C features, yes. It makes programming fun. I think
all good programmers will be able to understand my code.

A couple of concerns.

1. Don't assume everyone will be a good programmer. You should write with
the intent that very inexperienced programmers will be able to understand
your code if at all possible. The fact is, someone will have to maintain it.
2. I understand it just fine, and it's bad, because you're trying to be
"clever".

I am a bit sympathetic to this, because I certainly did a bunch of crazy stuff
like this when I was first learning to program... But the best advice I ever
got was: "DON'T".

Having looked at that old code, and in a couple of cases tried to get it to
run on newer compilers, I am fully persuaded. Simple, clear, code is better.

The problem with using advanced features is that you have to know how to
use them well. Very good race drivers sometimes use the hand brake in a car
to control the car's behavior in unusual ways. This allows them to do
things that most of us could never do with a car. However, the solution
is not for me to, every time I pull up to a stop sign, use the hand brake
instead of the regular brakes. That would damage my car very severely,
very quickly.

If your skill at first aid extends about to applying band-aids, don't start
trying to do brain surgery.

-s
 
Keith Thompson

Lew Pitcher said:
It is acceptable to have this utility function /log/ the error. It is
not acceptable to have the utility function decide that the
application /cannot recover/ from this error.

Another way to look at it is that the application decided, by calling
this particular utility function, that it could not recover from
a memory allocation failure. If it could, it should have called
some malloc() or other function that would permit recovery.

If allocating memory and aborting the program on failure is a
common operation, combining the two into a single utility function
makes sense.

(Figuring out how to recover from allocation failures makes even
more sense, but that can be non-trivial.)
 
Eric Sosman

Yes, of course!


This is very bad practise. Localizing the #define is good for the same
reason that using local instead of global variables is.

There are few similarities between variables and macros, even
macros that are "manifest constants," so the criteria for what is
good or bad are dissimilar.

But, okay: Let's take your "localization" dictum as Truth, and
see where it leads us:

#define SIZE 0x40000000uLL
void *ptr = malloc(SIZE);
#undef SIZE
if (ptr == NULL) ...
... forty lines ...
#define SIZE 0x40000000uLL
char *buf = malloc(SIZE);
#undef SIZE
if (buf == NULL) ...
... one hundred lines ...
#define SIZE 0x40000000uLL
void *tmp = realloc(bigbuf, SIZE);
#undef SIZE
if (tmp == NULL) ...; else bigbuf = tmp;
... still more lines ...

Each definition and use of SIZE is now localized to its minimal scope.
But one day you decide that a gigabyte is the wrong amount, and want
to change to forty megabytes instead. Seebs makes a one-line change;
you're faced with three (or perhaps more) and the possibility of having
missed a few. Who's in better shape?
I think this is wrong. With no "qualifier", the number will be
interpreted as an int and undergo default promotions. Because int may be
16 bits this could overflow.

You should re-read your C textbook or other reference, because
you are wrong about the treatment of literal constants in source code.
I don't know what cargo cult stuff is.

By the same argument, with extremely rare exceptions you should NEVER be
using block-scope variables.

Again, the dissimilarities outweigh the similarities, and the
claim that the same argument applies has little weight.
I like using advanced C features, yes. It makes programming fun. I think
all good programmers will be able to understand my code.

In the first place, you're not using "advanced" features, just
unnecessary convolutions of normal features. (I don't think C
even *has* any "advanced" features -- there are seldom-used areas,
particularly in corners of the library -- but "rare" and "advanced"
are not synonyms.)

In the second place, you might do well to consider the words of
Brian Kernighan (the "K" of "K&R," in case you don't recognize the
name): "Debugging is twice as hard as writing the code in the first
place. Therefore, if you write the code as cleverly as possible, you
are, by definition, not smart enough to debug it." Yes, he was
probably being a bit facetious, but I think he has a point worth
pondering -- especially by someone who's already shown that he's
writing beyond the limits of his own cleverness.
 
Keith Thompson

sandeep said:
Yes, of course!


This is very bad practise. Localizing the #define is good for the same
reason that using local instead of global variables is.

So use a local variable.

I don't think I've ever seen C code in which a macro is #define'd, then
used, then immediately #undef'ed. I can see the argument in favor of
doing it, but in practice most macros are global anyway.

But again, there's no good reason to use a macro rather than a
constant object declaration:

const size_t esize = 0x40000000;

If you want it scoped locally, use a feature that lets the compiler
handle it for you.

Even your #define/#undef pair doesn't do the same thing as a local
declaration; it clobbers any previous definition.

[...]
I think this is wrong. With no "qualifier", the number will be
interpreted as an int and undergo default promotions. Because int may be
16 bits this could overflow.

Nope. An unqualified integer constant, unless it exceeds UINTMAX_MAX (I
think that's the right name), is always of some type into which its value
will fit.

[...]
I don't know what cargo cult stuff is.

Google it.

Basically, "cargo cult programming" means programming by rote
without understanding what you're doing. (The roots of the term
are fascinating, but not really relevant.)

[...]
I like using advanced C features, yes. It makes programming fun. I think
all good programmers will be able to understand my code.

A lot of good programmers have been reading your code. Not being able
to understand it isn't the problem.
 
