Zero overhead overflow checking

Michael Foukarakis

2.A: A new pragma
------------------

#pragma STDC OVERFLOW_CHECK on_off_flag

When in the ON state, any overflow of an addition, subtraction,
multiplication, or division provokes a call to the overflow handler.
The compound operators +=, -=, *=, and /= are covered as well.

Assignment can also cause an overflow. You should consider such cases
as well.
Only the types signed int and signed long long are covered.

Signed integers that are converted to unsigned types for expression
evaluation are also affected. Several such cases have led to
exploitable bugs. Pointers that wrap around cause similar
problems. Do not limit your problem space unless necessary.
The initial state of the overflow flag is implementation defined.
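A minimal sketch of how the pragma might be used, assuming ON and OFF as
the spellings of on_off_flag (the proposal does not fix them here):

#pragma STDC OVERFLOW_CHECK ON
int scaled_sum(int a, int b, int factor)
{
    /* if either operation overflows, the handler of
       section 2.B below is invoked */
    return (a + b) * factor;
}
#pragma STDC OVERFLOW_CHECK OFF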

2.B: Setting the handler for overflows
--------------------------------------

All your functions and variables should be named appropriately to
avoid namespace pollution, but I understand this is an early draft.
Just keep it in mind.
The function set_overflow_handler sets the function to be called in
case of overflow to the specified value. If "newvalue" is NULL,
the function sets the handler to the default value (the value
it had at program startup).

What you describe here implies runtime behaviour. How are you going to
determine "newvalue" in a static context? Are you considering partial
code evaluation? Emulation? Please describe your method more
thoroughly so that appropriate feedback can be given.
2.C: The handler function
-------------------------

typedef void (*overflow_handler_t)(unsigned line_number,
                                   char *filename,
                                   char *function_name, ...);
This function will be called when an overflow is detected. The
arguments have the same values as __LINE__, __FILE__, and __func__.

If this function returns, execution continues with an implementation
defined value as the result of the operation that overflowed.
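
A minimal sketch of a handler and its installation under this proposal
(the void return of set_overflow_handler is an assumption, since its
signature is not spelled out above; the logging is purely illustrative):

#include <stdio.h>
#include <stdlib.h>

typedef void (*overflow_handler_t)(unsigned line_number,
                                   char *filename,
                                   char *function_name, ...);

void set_overflow_handler(overflow_handler_t newvalue);  /* proposed API */

static void log_and_abort(unsigned line_number, char *filename,
                          char *function_name, ...)
{
    fprintf(stderr, "%s:%u: overflow in %s()\n",
            filename, line_number, function_name);
    abort();  /* returning instead would yield an
                 implementation-defined result */
}

int main(void)
{
    set_overflow_handler(log_and_abort);
    /* ... arithmetic with OVERFLOW_CHECK ON ... */
    return 0;
}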

I assume this is provided for the user to handle his/her overflows as
they see fit. This is indeed zero-overhead, but it is also zero-work. What
exactly is the novelty you're providing us with? There are well known
methods to detect overflows and many have already been posted in clc
lately - what is the fundamental advantage your compiler will provide
me that will give me reason to stop using those and perform my own
error (overflow) handling in my programs?
-------------------------------------------------------------------

Implementation.

I have implemented this solution, and the overhead is almost zero.

If you do not provide concrete numbers, you cannot make such claims.
The most important point for implementors is to realize that the
normal flow (i.e. when there is no overflow) should not be disturbed.

No overhead implementation:
--------------------------
        1. Perform operation (add, subtract, etc.)
        2. Jump on overflow to an error label
        3. Go on with the rest of the program
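
In today's compilers the same three-step pattern can be written portably
with the checked-arithmetic builtins of GCC 5+ and Clang (a sketch, not
part of the proposal):

#include <stdio.h>
#include <stdlib.h>

int checked_add(int a, int b)
{
    int result;
    /* steps 1 and 2: perform the add, then branch on overflow */
    if (__builtin_add_overflow(a, b, &result)) {
        fprintf(stderr, "overflow\n");  /* the error label */
        abort();
    }
    return result;  /* step 3: go on with the rest of the program */
}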

The overhead of this is below measurement accuracy on a PC system.
It can't be measured.

Wrong. There are well-known, implemented methods to measure even the
number of instructions executed between two observation points. Use them
and provide us with results for specific test setups.

Implementation with small overhead (3-5%):
        1. Perform operation
        2. If no overflow jump to continuation
        3. save registers
        4. Push arguments
        5. Call handler
        6. Pop arguments
        7. Restore registers
    continuation:

The problem with the second implementation is that the flow of control
is disturbed.

Your implementation is also changing the flow of control. Let me say
here that implementation-defined does not relieve you of all
responsibility - you could at the very least specify some desired
behaviour or properties that should be held true. You also need to
consider the case where an expression contains more than one variable
or sub-expression that can overflow. Where does program flow resume then?
You can be pessimistic and return before any of the sub-expressions are
evaluated, but in concurrent environments there is the possibility that
something is messed up even then.

The branch to the continuation code will be mispredicted
since it is a forward branch. This provokes pipeline turbulence.

The first solution provokes no pipeline turbulence since the forward
jump will be predicted as not taken.

This is not always true; consider the gcc likely/unlikely macros.
This will be a good prediction
in the overwhelming majority of situations (no overflow). The only
overhead is just an additional instruction, i.e. almost nothing.
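
For reference, the hint mechanism mentioned above is GCC's
__builtin_expect, conventionally wrapped in likely()/unlikely() macros
(a sketch of the usual definitions):

#include <stdio.h>

#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

void report_if_overflow(int overflowed)
{
    /* tells the compiler to lay out the handler call off the hot path */
    if (unlikely(overflowed))
        fprintf(stderr, "overflow\n");
}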

Overall, very good initiative, but it needs a lot of work.
 
Bart

Bart said:
[...]
Have a look at C#'s basic types:
Byte: 8 bits; Short: 16 bits; Int: 32 bits; Long: 64 bits. Now try
and get the same hard and fast facts about C's types; you can't! It's
like asking basic questions of a politician.

     Basic questions like "How long is a piece of string?"

     Besides, what has this to do with detecting overflow?  Or are you
trying to make some obscure point about overflow detection in C#?

I've partly lost the thread but I think I was making a case for
processor-specific versions of C, one where it's clear whether or not
features such as overflow checking exist and exactly how they will
work.

C# was an example of one area where its specs are transparent
compared to C.

If it's necessary to cover every computer in existence, and every one
not yet in existence, I don't think there will be any agreement (about
the overflow thing).
 
Stephen Sprunk

jacob said:
Dag-Erling Smørgrav wrote:

There is no way gcc can function correctly for small architectures.
Obviously there are many ports of it to many architectures, most of
them full of bugs.

Given that GCC has become the default compiler for virtually every
"small architecture", it obviously can function correctly and isn't all
that buggy, at least compared to the commercial compilers it replaced.

My company uses GCC on several different "small architectures", and we
have found only a handful of bugs over the years, almost all of which
were already fixed in a more recent version than the one we were using
at the time. The remaining few were easy enough to work around.
The code of gcc is around 12-15MB of source code. To rewrite a BIG
part of this for a small microprocessor is like trying to kill a fly
with an atomic bomb.

The back-end part that translates RTL to assembly is quite small, and
that is often all that needs porting for a new architecture. Most "new"
architectures are variations on existing ones, in part due to the
conscious desire to make it easier to port compilers, in which case you
only need to tweak a few things.
I am biased, of course, just as you are biased for gcc. I just want to
restore a sense of proportion. Porting gcc to a new architecture and
debugging the resulting code is a job of several years before all is
fixed.

And no, it can't be done by a single person.

That depends on how different the new system is from the nearest
existing system(s). There are also companies dedicated to doing that
work, if you don't have folks in-house to do it. It doesn't take years;
the vendor effectively can't sell their new chip until GCC works on it,
so they get that part done very, very quickly. In fact, the better ones
even feed back info from the compiler team about how GCC generates code
for their chips so that they can improve future generations.

S
 
Bart

Assignment can also cause an overflow. You should consider such cases
as well.

And casts (implicit and explicit) as well. But I would call this range
checking, as distinct from overflow checking, because with overflow you
don't have a value to check against; you only get an indication of the
overflow.

Range checking is also a possibility, but needs more care because I'm
sure many behaviours depend on truncating an out-of-range value.
 
Keith Thompson

Eric Sosman said:
Keith said:
jacob navia said:
[... concerning integer overflow vs. narrowing conversions ...]
The checking could be done, of course, but I would make it a different
proposal with a different name. Anyway, for the language the two
overflows are NOT the same.

I disagree.

They're certainly different in the eyes of the C Standard. One
produces undefined behavior (3.4.3), while the other produces
implementation-defined behavior (6.3.1.3p3). Although they pose
similar hazards to the program, the language considers them different
conditions with different descriptions and different outcomes.

You're right, the language does treat them differently.

The point on which I disagree is that I think that any proposal for
integer overflow checking in the C standard should apply to both.
I should have said so more clearly.

I'm also (still) curious about *why* the standard makes this
distinction, rather than treating conversion like any other
arithmetic operation. A quick look at the C99 Rationale was
unilluminating.

I suspect it's just a matter of practicality, that real-world
behavior on arithmetic overflow varies enough that it wasn't
practical for the standard to narrow the options, but the range
of behavior on conversions made it possible to say simply that the
result is implementation-defined.
 
Keith Thompson

Michael Foukarakis said:
Assignment can also cause an overflow. You should consider such cases
as well.

Assignment itself cannot *directly* cause an overflow; it just
copies a value into an object. A conversion that's implicit in
an assignment can cause an "overflow", though the standard doesn't
use that term; see C99 6.3.1.3p3:

Otherwise, the new type is signed and the value cannot be
represented in it; either the result is implementation-defined
or an implementation-defined signal is raised.
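
A short illustration of the two conditions (a sketch; the variable names
are mine):

#include <limits.h>

int main(void)
{
    int big = INT_MAX;

    /* arithmetic overflow: undefined behavior (C99 3.4.3) */
    /* int boom = big + 1; */

    /* conversion "overflow": implementation-defined result, or an
       implementation-defined signal (C99 6.3.1.3p3) */
    signed char c = big;

    return c != 0;
}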

For consistency, all forms of conversion should be treated alike.
That includes explicit conversions resulting from a cast, as well as
implicit conversions resulting from assignment, argument passing,
return statements, the "usual arithmetic conversions", and whatever
other cases I've forgotten.

I've argued that conversions should be treated like arithmetic
operators; there's obviously some disagreement on that point.

[big snip]
 
Keith Thompson

jacob navia said:
Francis Glassborow wrote:

For a compiler like lcc it took me at least 8 months to gain
some confidence in the modifications I was doing in the code
generator. It took me much longer to fully understand the
machine description and to be able to write new rules
for it.

lcc's code was 250K or so. It has VERY GOOD documentation.

gcc's code is 15MB or more, with confusing documentation.

With all due respect, I disagree with you.

But we are drifting away from the main subject of this
discussion, which was the overflow checking proposal.

Let's agree, then, that we disagree on this point.

There's no need to just "agree to disagree" on this point, since it
can be answered more or less definitively with a little research.
gcc has been ported to new architectures many times. It shouldn't
be difficult to find out how long it typically takes.
 
Michael Foukarakis

Assignment itself cannot *directly* cause an overflow; it just
copies a value into an object.  A conversion that's implicit in
an assignment can cause an "overflow", though the standard doesn't
use that term; see C99 6.3.1.3p3:

    Otherwise, the new type is signed and the value cannot be
    represented in it; either the result is implementation-defined
    or an implementation-defined signal is raised.

Valid point. I had the = operator in mind, while it's clear that
conversion/truncation and friends are the cause of the actual
overflow.
For consistency, all forms of conversion should be treated alike.
That includes explicit conversions resulting from a cast, as well as
implicit conversions resulting from assignment, argument passing,
return statements, the "usual arithmetic conversions", and whatever
other cases I've forgotten.

I agree; to me it seems pointless to distinguish between those cases.
 
jacob navia

Keith Thompson wrote:
There's no need to just "agree to disagree" on this point, since it
can be answered more or less definitively with a little research.
gcc has been ported to new architectures many times. It shouldn't
be difficult to find out how long it typically takes.

OK, sure.

Here is an example for the ATMEL AVR microcontroller...

http://users.rcn.com/rneswold/avr/index.html

---------------------------------------------------------------
A GNU Development Environment for the AVR Microcontroller
Rich Neswold

(e-mail address removed)

Copyright © 1999, 2000, 2001, 2002 by Richard M. Neswold, Jr.

This document attempts to cover the details of the GNU Tools that are
specific to the AVR family of processors.

Acknowledgements

This document tries to tie together the labors of a large group of
people. Without these individuals' efforts, we wouldn't have a terrific,
free set of tools to develop AVR projects. We all owe thanks to:

* The GCC Team, which produced a very capable set of development
  tools for an amazing number of platforms and processors.

* Denis Chertykov <[email protected]> for making the AVR-specific
  changes to the GNU tools.

* Denis Chertykov and Marek Michalkiewicz <[email protected]> for
  developing the standard libraries and startup code for AVR-GCC.

* Uros Platise for developing the AVR programmer tool, uisp.

* Joerg Wunsch <[email protected]> for adding all the AVR
  development tools to the FreeBSD ports tree and for providing the demo
  project in Chapter 2.

* Brian Dean <[email protected]> for developing avrprog (an alternative
  to uisp) and for contributing Section 1.4.1, which describes how to use it.


---------------------------------------------------------------------

It took them from 1999 to 2002 (see the copyright notice), and they
surely do not mention all the people involved, just the main ones.

OK?

But this has nothing to do with the discussion. Please let's come back
to the overflow discussion.
 
Bart

Bart said:
Bart wrote:
[...]
Have a look at C#'s basic types:
Byte: 8 bits; Short: 16 bits; Int: 32 bits; Long: 64 bits. Now try
and get the same hard and fast facts about C's types; you can't! It's
like asking basic questions of a politician.
     Basic questions like "How long is a piece of string?"
     Besides, what has this to do with detecting overflow?  Or are you
trying to make some obscure point about overflow detection in C#?
I've partly lost the thread but I think I was making a case for
processor-specific versions of C, one where it's clear whether or not
features such as overflow checking exist and exactly how they will
work.

     Ah, yes: Back to the Good Old Days when computer time was scarce
and costly, programmer time cheap and plentiful.

     Computer hardware advances too quickly for processor-specific
software to be worthwhile, except in edge cases.  Are you, even now,
rewriting all your code for the "Nehalem" processor?  Or are you
running the same code on "Nehalem" as you did on "Core" and on
"Yonah" and so on back into the mists of time?

     Yes, there's a certain amount of processor-specific code in all
of these.  Most of *that* is to hide the model-to-model idiosyncrasies,
to give "ordinary" code the illusion that nothing has changed.  A driver
here, a virtualization hook there, knit one, purl two, and behold!  It's
"the same" processor you've always known (or thought you knew).  And
what's the benefit of all this trickery?  It allows the ENORMOUS body
of existing portable code to run without change, that's what.

I've seen how portable C can be: a long int is 32 bits under Windows,
and 64 bits under Linux, under gcc on the *same machine*! And if I have
to download an application, and it's only available as C source code
(as a myriad of files in a bz2 tarball to boot), then I simply don't
bother; life's too short.

Other languages do the portable thing much better than C, so why not
let C concentrate on what it's good at -- implementing systems to
build on top of.

I suspect anyway that many are already using C in exactly the non-
portable ways I'm suggesting (there's no law about it). On a
machine where char=int=64 bits, is it really possible to code a fast,
tight program completely oblivious of this fact?
 
Keith Thompson

Bart said:
I've seen how portable C can be: a long int is 32 bits under Windows,
and 64 bits under Linux, under gcc on the *same machine*!

And yet there's plenty of C source code that works in both
environments. You propose to take away that advantage.
And if I have
to download an application, and it's only available as C source code
(as a myriad of files in a bz2 tarball to boot), then I simply don't
bother; life's too short.

That's your decision, but as long as you have the right environment,
installing such a package is nearly trivial. (I use a wrapper script
that handles the half dozen or so commands required.)
Other languages do the portable thing much better than C, so why not
let C concentrate on what it's good at -- implementing systems to
build on top of.

It already does that.

[...]
 
robertwessel2

Alpha processors are still in production use.

Are there other processors, perhaps even new ones, that use similar
schemes to what the Alpha uses?


MIPS, for example.

Or ones that use other schemes?


S/360..zSeries, for example.
 
jacob navia

Michael Foukarakis wrote:
Valid point. I had the = operator in mind, while it's clear that
conversion/truncation and friends are the cause of the actual
overflow.


I agree, to me it seems pointless to distinguish between those cases.

The disagreement (from my point of view) is in the scope
of this proposal.

I wanted to check overflow for the 4 operations. Truncating overflow
is used so much as a way of discarding irrelevant data (maybe after some
shifts, maybe not) that any overlap of the overflow proposal
with THAT problem would confuse everything.

To access some byte or some subset of the data stored in an integer,
how much code does something like

char c = (integer >> 8);

meaning

char c = (integer >> 8) & 0xff;


I agree with you that it should have been written in the second form
but there is just too much code that is already written in the first
form.

Your proposal should be handled by ANOTHER check macro

#pragma STDC CHECK_ASSIGNMENT_OVERFLOW on-off-flag

That is another discussion.
 
Keith Thompson

jacob navia said:
The disagreement (from my point of view) is in the scope
of this proposal.

I wanted to check overflow for the 4 operations. Truncating overflow
is used so much as a way of discarding irrelevant data (maybe after some
shifts, maybe not) that any overlap of the overflow proposal
with THAT problem would confuse everything.

To access some byte or some subset of the data stored in an integer,
how much code does something like

char c = (integer >> 8);

meaning

char c = (integer >> 8) & 0xff;

I don't know how much code does that kind of thing.
It's certainly not something I'd write, and it already
stores an implementation-defined value in c (or raises an
implementation-defined signal).
I agree with you that it should have been written in the second form
but there is just too much code that is already written in the first
form.

It *should* have been written using unsigned types. Bitwise or shift
operators should be used on signed types only if you're sure that the
result is representable; if it isn't, that's a bug in the program,
just the kind of thing that overflow checking should catch. And yes,
in this case it's the assignment, and the implicit conversion
associated with it, that's the immediate cause of the problem.
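
A small sketch of the unsigned form being suggested here (the variable
names are mine):

#include <stdio.h>

int main(void)
{
    unsigned int reg = 0xCAFEu;

    /* shifts and masks on unsigned types are fully defined, and the
       masked value always fits in an unsigned char */
    unsigned char byte1 = (unsigned char)((reg >> 8) & 0xFFu);

    printf("%u\n", byte1);  /* prints 202, i.e. 0xCA */
    return 0;
}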
Your proposal should be handled by ANOTHER check macro

#pragma STDC CHECK_ASSIGNMENT_OVERFLOW on-off-flag

That is another discussion.

It should be CHECK_CONVERSION_OVERFLOW, not CHECK_ASSIGNMENT_OVERFLOW.
(And it's a pragma, not a macro.)

Ok, if it's going to be controlled by a pragma, I wouldn't object to
using a separate one for conversions. But I would object to adding
one to the language without the other.
 
jacob navia

Keith Thompson wrote:
It should be CHECK_CONVERSION_OVERFLOW, not CHECK_ASSIGNMENT_OVERFLOW.
(And it's a pragma, not a macro.)

Ok, if it's going to be controlled by a pragma, I wouldn't object to
using a separate one for conversions. But I would object to adding
one to the language without the other.

Great.

Plan A:

I go on with the
#pragma STDC OVERFLOW_CHECK

and you start

#pragma STDC CHECK_CONVERSION_OVERFLOW

I will surely support your proposal and you mine.

:)

It would be much more productive for all if you worked
to propose things too.
 
Dik T. Winter

> No overhead implementation:
> --------------------------
> 1. Perform operation (add, subtract, etc.)
> 2. Jump on overflow to an error label
> 3. Go on with the rest of the program

Do you check for overflow after every operation?
 
John Nagle

jacob said:
Abstract:

Overflow checking is not done in C. This article proposes a solution
to close this hole in the language that has almost no impact on the
run-time behavior.

I actually implemented this back in 1982.

The DEC VAX had hardware to trap on integer overflow. The
trap was enabled by a bit in the "entry mask" generated for
each function. The CALL instruction read the entry mask and
set various modes for execution of the code.

I modified the C compiler that came with 4.1BSD to generate
the entry mask with integer overflow checking enabled. Then
I rebuilt all the standard programs with that compiler.
About half of them worked without any fixes. The others
were overflowing silently for one reason or another.

Because that compiler wasn't careful about generating
different instructions for signed and unsigned arithmetic,
it didn't quite generate the right code to allow unsigned
arithmetic to wrap. It looked like a big project to
clean up the compiler, so we never went on to do that.

Back in 1982, integer overflow wasn't considered OK.
Pascal and Ada checked for it. C was considered sloppy for not
doing that. There was more interest in correctness back then.

If C were to have overflow-free semantics, the right way
to do it would be this:

1. Overflow can only occur in user-defined variables.
The compiler must create longer intermediates for expressions
when necessary to prevent overflow within an expression
which would not cause overflow in the result. For example, in

short a,b,c,d;
a = (b * c) / d;

"(b * c)" needs to be a "long". If an intermediate is required
that is too large for the available hardware, a compile error
must be reported.
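
A sketch of that widening written out by hand in today's C, using int
and long long since short operands are promoted to int anyway (the
values are mine):

#include <stdio.h>

int main(void)
{
    int b = 100000, c = 100000, d = 1000000;

    /* the intermediate (b * c) is computed in a wider type, so it
       cannot overflow; only the final result must fit in an int */
    int a = (int)(((long long)b * c) / d);

    printf("%d\n", a);  /* prints 10000 */
    return 0;
}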

2. Unsigned arithmetic should be checked for overflow.
If wrapped arithmetic is desired, it should be indicated by
idioms such as

unsigned short a;
a = (a + 1) % 65536;

or

a = (a + 1) & 0xffff;

The compiler is free to optimize such expressions into unchecked
wrapped arithmetic. For the 1% or less of the time that you
really want wrapped arithmetic, that's how to express it.

This has the nice property that you get the same answer on all platforms.
However, with the final demise of the 36-bit ones'-complement machines
(UNISYS finally ended the ClearPath line, the successor to the UNISYS
B series, the UNIVAC 2200 series, and the UNIVAC 1100 series, in
early 2009), this is no longer a real issue. It was, back in 1982, when
DECsystem 36-bit machines were powering most of academia.

At this point, it's way too late in the history of C to fix this.
However, it could have been fixed.

I once put a lot of effort into overflow theory. See
http://www.animats.com/papers/verifier/verifiermanual.pdf
That's from 1982.

John Nagle
 
jacob navia

Dik T. Winter wrote:
Do you check for overflow after every operation?

No, only after addition, subtraction, and multiplication.

I haven't done division yet. It can overflow only in one
special situation.
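
For the record, the special situation for division is INT_MIN / -1, whose
mathematical result does not fit in an int (a sketch; the function name
is mine):

#include <limits.h>

/* nonzero if a / b is undefined or overflows */
int div_overflows(int a, int b)
{
    return b == 0 || (a == INT_MIN && b == -1);
}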
 
jacob navia

John Nagle wrote:
I actually implemented this back in 1982.

The DEC VAX had hardware to trap on integer overflow. The
trap was enabled by a bit in the "entry mask" generated for
each function. The CALL instruction read the entry mask and
set various modes for execution of the code.

I modified the C compiler that came with 4.1BSD to generate
the entry mask with integer overflow checking enabled. Then
I rebuilt all the standard programs with that compiler.
About half of them worked without any fixes. The others
were overflowing silently for one reason or another.

Imagine that. 50% of the programs at that time were producing
incorrect results in some situations!

I think the number could be the same today. We will never know
until we get this into the language.
Because that compiler wasn't careful about generating
different instructions for signed and unsigned arithmetic,
it didn't quite generate the right code to allow unsigned
arithmetic to wrap. It looked like a big project to
clean up the compiler, so we never went on to do that.

Back in 1982, integer overflow wasn't considered OK.
Pascal and Ada checked for it. C was considered sloppy for not
doing that. There was more interest in correctness back then.

No. Today there is even greater interest in correctness. The problem
is that in the C community an attitude of general sloppiness exists.
That is why people are leaving the language and going to others
like Java or C#.

If C were to have overflow-free semantics, the right way
to do it would be this:

1. Overflow can only occur in user-defined variables.
The compiler must create longer intermediates for expressions
when necessary to prevent overflow within an expression
which would not cause overflow in the result. For example, in

short a,b,c,d;
a = (b * c) / d;

"(b * c)" needs to be a "long". If an intermediate is required
that is too large for the available hardware, a compile error
must be reported.

Most machines report overflows. It is only necessary to TEST the
overflow flag at each operation. This is very cheap!
2. Unsigned arithmetic should be checked for overflow.
If wrapped arithmetic is desired, it should be indicated by
idioms such as

unsigned short a;
a = (a + 1) % 65536;

or

a = (a + 1) & 0xffff;

The compiler is free to optimize such expressions into unchecked
wrapped arithmetic. For the 1% or less of the time that you
really want wrapped arithmetic, that's how to express it.

This has the nice property that you get the same answer on all platforms.
However, with the final demise of the 36-bit ones'-complement machines
(UNISYS finally ended the ClearPath line, the successor to the UNISYS
B series, the UNIVAC 2200 series, and the UNIVAC 1100 series, in
early 2009), this is no longer a real issue. It was, back in 1982, when
DECsystem 36-bit machines were powering most of academia.

Unsigned arithmetic is not undefined behavior as far as overflow is
concerned: it wraps around. My proposal covers only
signed arithmetic.

At this point, it's way too late in the history of C to fix this.
However, it could have been fixed.

Better late than never. I do not see why it should be "too late".

I once put a lot of effort into overflow theory. See
http://www.animats.com/papers/verifier/verifiermanual.pdf
That's from 1982.

John Nagle

Nothing has moved since then. It is a pity. It could have been done in 1982.
 
Nick Keighley

Someone mentioned hundreds of embedded processors for each advanced
processor. I guess these must all be a little different.

I can't parse that. The point is that chip designers *don't* have a
completely free hand. They *do* have to pay some attention to history.

C does it with penalties, such as making some kinds of programming a
minefield because this won't work on processor X, and that is a wrong
assumption for processor Y, even though this application is designed
to run only on processor Z.

this is actually much less hard than you make it sound.
I know a program where a good chunk of the code ran on both a Z80
(an 8-bit microprocessor) and on a 32-bit Sun (I can't remember if the
Sun was a 68000 or a SPARC, but that's rather the point).

I'd always [thought] C was a mainstream language.

I didn't say it wasn't. Just that there are no others that I know of,
which are like C, that are mainstream (but presumably plenty of
private or in-house ones, like one or two of mine).

well, no other language is quite like C, because then it would be C!
Fortran was renowned for its portability. It's true the only examples
of portable programs I have experienced are C programs. I think that
says a lot about C (and perhaps a bit about me!). But many languages
are highly portable.

Consider Chicken Scheme (an implementation of a Lisp-like language);
according to its web page, Chicken is...

"Highly portable and known to run on many platforms, including x86,
x86-64, IA-64, PowerPC, SPARC and UltraSPARC, Alpha, MIPS, ARM and S/390"

though that list looks a little old, they are trying to be portable.
And yes, it can be compiled.

Of course it's implemented in and compiles to C...

But this doesn't apply to hardware? Why can't that abstraction layer
that you mention a bit later be applied to C-86?

you lost me. Hardware varies because it has to deal with the physics
of the real world. Software can provide an insulation layer that hides
that variability. I'm not sure where you're trying to move the
abstraction to.
Have a look at C#'s basic types:

Byte: 8 bits; Short: 16 bits; Int: 32 bits; Long: 64 bits.

which means it won't run efficiently on some hardware.
Now try
and get the same hard and fast facts about C's types; you can't! It's
like asking basic questions of a politician.

you get certain minimum guarantees. In my experience tying down
the size of things to that extent is unhelpful. I've worked
on a project where use of raw types was banned. I hated it.

character   char
octet       unsigned char
16 bits     int
32 bits     long

I hardly ever use short. I've not yet needed 64 bits, and I'm aware C
has problems there.

That's one thing that would be simpler;

I don't agree. You need to think a bit, but once you've thought, it's
easy.
what sort of things would be harder

writing software that runs on many platforms. I gave you the example
of embedded systems.
 

Top