Infinity + Infinity (or NegInfinity - NegInfinity)


nmm1

So in what respect is that incompatible with IEEE 754? C specifies
not only that in (a+b)+c, a+b happens first, but that in a+b+c,
a+b happens first, so it's more prescriptive than Fortran.

You are clearly a revisionist, as well as being unfamiliar with the
ISO/ANSI C standards and their implementations!

There is no such statement in the C standard, and I can tell you
(from personal recollection) that requiring it was NOT WG14's intent
during the standardisation of C90. The BNF was intended to specify
ONLY the precedence of operators and NOT the evaluation order; that
was specified by the side-effect rules.

If you had looked up the section of the standard I pointed you to,
you would have seen both that the word used is "grouping", which
is clearly intended to distinguish it from execution order, and a
clear statement that the order of evaluation is unspecified. And
THAT was the intent of WG14 during the standardisation of C90.

I am fully aware that a lot of people are now claiming that the
BNF has always been meant to define the execution order, but that
flatly contradicts large chunks of other wording (especially the
side-effect rules). Whether it is now what compilers do, I don't
know (and don't much care, either).


Regards,
Nick Maclaren.
 

James Kuyper

On 10/12/2011 10:25 AM, (e-mail address removed) wrote:
....
There is no such statement in the C standard, and I can tell you
(from personal recollection) that requiring it was NOT WG14's intent
during the standardisation of C90. The BNF was intended to specify
ONLY the precedence of operators and NOT the evaluation order; that
was specified by the side-effect rules.

If you had looked up the section of the standard I pointed you to,
you would have seen both that the word used is "grouping", which
is clearly intended to distinguish it from execution order, and a
clear statement that the order of evaluation is unspecified. And
THAT was the intent of WG14 during the standardisation of C90.

I am fully aware that a lot of people are now claiming that the
BNF has always been meant to define the execution order, but that
flatly contradicts large chunks of other wording (especially the
side-effect rules). Whether it is now what compilers do, I don't
know (and don't much care, either).

The freedom of C implementations to rearrange the order of evaluation is
great, but it's not completely unconstrained.
While there are people who have misinterpreted the BNF as fully
specifying the execution order, I don't consider that to be a common
position among those most familiar with the standard. A more common
position, and IMO fully defensible, is that the BNF implies constraints
on the execution order. For instance, in ((a+b) + (c+d)), the a+b can be
executed before or after the (c+d), but both must be executed before the
final addition can be performed, because that addition requires the
results of those executions. The standard does not say anything
explicitly about that fact, because it doesn't need to - it's implicit
in what the standard does explicitly say about the dependency of the
final value on the values of the sub-expressions.
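
(A minimal sketch of that distinction - my illustration, not part of the
original exchange; the trace() helper is purely hypothetical. The two
inner sums may be evaluated in either order, but each must complete
before the outer addition that consumes their results.)

    #include <stdio.h>

    /* Hypothetical helper that records when an operand is evaluated. */
    static int trace(const char *name, int value)
    {
        printf("%s ", name);
        return value;
    }

    int main(void)
    {
        /* The order of the four operand evaluations is unspecified, but
           both inner sums must be evaluated before the outer + can use
           their results; the value is always 10, whatever order gets
           printed. */
        int r = (trace("a", 1) + trace("b", 2))
              + (trace("c", 3) + trace("d", 4));
        printf("= %d\n", r);
        return 0;
    }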

I've seen claims, (possibly from you?), that it's possible for a
conforming implementation to generate code for a+b+c which calculates
the result of a+b after (sic!) the result of that very calculation has
already been added to c. However, when I asked for details of how that
was supposed to work, I learned that the supposedly-conforming code
calculated a+b twice - once for adding to c, and the second time for the
sole purpose (as far as I could tell) of "proving" that it's permissible
to compute it afterward.

I'll concede that the as-if rule allows spurious extra computations to
be inserted at any time, so long as they don't affect the final result.
However, for an implementation that pre-#defines __STDC_IEC_559__, it
seems to me that the "final result" necessarily includes the values of
testable floating point environment flags; at least, if it occurs within
code that actually performs such tests.

You know this, but for the benefit of those who don't: C99 added
<fenv.h>, providing portable C support for such flags.
I remember a long discussion we had (partly off-line) in which you
claimed that C99's provision of such support, in that form, made the
situation worse than it would have been without it. You presented a very
real list of weaknesses in the specifications, primarily consisting of
things that are optional which, if I understood your arguments, could
only be considered useful if mandatory. It seems to me that if a feature
is optional, but I have a portable way of testing whether it's
supported, and a portable way of making use of it if it is supported,
that's unambiguously more useful than having nothing portably specified
about that feature - I never understood your claims to the contrary.
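
(Going back to the <fenv.h> point, a minimal sketch of what that portable
support looks like - my example, assuming an implementation that defines
__STDC_IEC_559__ and honours the FENV_ACCESS pragma:)

    #include <fenv.h>
    #include <stdio.h>

    #pragma STDC FENV_ACCESS ON   /* we intend to test the status flags */

    int main(void)
    {
        volatile double zero = 0.0;

        feclearexcept(FE_ALL_EXCEPT);     /* start with all flags clear */
        volatile double r = 1.0 / zero;   /* should raise FE_DIVBYZERO  */

        if (fetestexcept(FE_DIVBYZERO))
            printf("divide-by-zero flag is set, r = %g\n", (double)r);
        return 0;
    }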
 

nmm1

The freedom of C implementations to rearrange the order of evaluation is
great, but it's not completely unconstrained.
While there are people who have misinterpreted the BNF as fully
specifying the execution order, I don't consider that to be a common
position among those most familiar with the standard.

Nor do I.
A more common
position, and IMO fully defensible, is that the BNF implies constraints
on the execution order. For instance, in ((a+b) + (c+d)), the a+b can be
executed before or after the (c+d), but both must be executed before the
final addition can be performed, because that addition requires the
results of those executions.

Well, yes, but I never said it wasn't! What I said was that there
were lots of OTHER interpretations, which were ALSO defensible.
The standard does not say anything
explicitly about that fact, because it doesn't need to - it's implicit
in what the standard does explicitly say about the dependency of the
final value on the values of the sub-expressions.

Unfortunately, it then contradicts that in all of its statements
about side-effect ordering, and there is nothing in the standard
that distinguishes the evaluation of an assignment operator from
the evaluation of any other operator!

I could respond to the rest of your points in detail, but I am
afraid that I don't have the time, so will select just one.
However, for an implementation that pre-#defines __STDC_IEC_559__, it
seems to me that the "final result" necessarily includes the values of
testable floating point environment flags; at least, if it occurs within
code that actually performs such tests.

I can tell you that most of the proponents of that agree with you;
unfortunately a lot of WG14 didn't, and so it was left up in the air.
You know this, but for the benefit of those who don't: C99 added
<fenv.h>, providing portable C support for such flags.

I know damn well it didn't!

Firstly, any ambiguous specification does NOT enable portability,
because the programmer assumes one thing and the implementor another.
In my experience, that is the cause of something like 85% of the
nasty bugs in C code written by experienced programmers!

Secondly, the problem is NOT solely optionality (as you claimed
that I showed) - it's serious ambiguity and even inconsistency.
Here is one example of the problem.

5.1.2.3 Program execution footnote 11:

... Floating-point operations implicitly set the status flags; ....
Implementations that support such floating-point state are required
to regard changes to it as side effects ....

6.5 Expressions para. 5:

If an exceptional condition occurs during the evaluation of an
expression (that is, if the result is not mathematically defined or
not in the range of representable values for its type), the behavior
is undefined.

7.6 Floating-point environment para. 1:

... A floating-point status flag is a system variable whose value
is set (but never cleared) when a floating-point exception is
raised, which occurs as a side effect of exceptional floating-point
arithmetic to provide auxiliary information. ...

Now, 6.5 states unequivocally that exceptional conditions lead to
undefined behaviour, which is assuredly sometimes the case. But
Annex F states they are not if __STDC_IEC_559__ is defined. Why
do you claim that the latter overrides the former, and not the
former the latter?
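
(To make that concrete, here is the sort of one-liner the two readings
disagree about - my illustration, not from the standard:)

    #include <float.h>
    #include <stdio.h>

    int main(void)
    {
        volatile double big = DBL_MAX;
        /* 6.5p5: the result is not representable, so this is an
           "exceptional condition" and the behaviour is undefined.
           Annex F (with __STDC_IEC_559__ defined): the result is
           +infinity and the overflow flag is raised.  Which wins? */
        volatile double r = big * 2.0;
        printf("%g\n", (double)r);
        return 0;
    }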

That would have been trivial to fix, but there are a zillion other
such inconsistencies, all of which give scope to implementors to
do something that the programmer doesn't expect, and where the
implementation can still claim to be conforming. That's NOT a
recipe for portability!


Regards,
Nick Maclaren.
 

James Kuyper

Well, yes, but I never said it wasn't! What I said was that there
were lots of OTHER interpretations, which were ALSO defensible.

I'm sure that there are alternative interpretations, many of them
defensible. The relevant thing would be a defensible interpretation that
differs from the one I gave in ways that would be problematic. Every
problematic interpretation I could come up with was indefensible for
reasons directly connected to the fact that it was problematic. Could you
give an example?

....
I can tell you that most of the proponents of that agree with you;
unfortunately a lot of WG14 didn't, and so it was left up in the air.

An argument against that interpretation would be more relevant than an
assertion that there are people who believe in the validity of that
argument.
I know damn well it didn't!

It might not have done it as well as you might have liked - but it did
do it. :)

....
Secondly, the problem is NOT solely optionality (as you claimed
that I showed) ...

That's the main thing I remembered you arguing; I didn't mean to imply
that the arguments I remembered were the only ones you actually made -
my memory's not that good.
... - it's serious ambiguity and even inconsistency.
Here is one example of the problem.

5.1.2.3 Program execution footnote 11:

... Floating-point operations implicitly set the status flags; ....
Implementations that support such floating-point state are required
to regard changes to it as side effects ....

6.5 Expressions para. 5:

If an exceptional condition occurs during the evaluation of an
expression (that is, if the result is not mathematically defined or
not in the range of representable values for its type), the behavior
is undefined.

7.6 Floating-point environment para. 1:

... A floating-point status flag is a system variable whose value
is set (but never cleared) when a floating-point exception is
raised, which occurs as a side effect of exceptional floating-point
arithmetic to provide auxiliary information. ...

Now, 6.5 states unequivocally that exceptional conditions lead to
undefined behaviour, which is assuredly sometimes the case. But
Annex F states they are not if __STDC_IEC_559__ is defined. Why
do you claim that the latter overrides the former, and not the
former the latter?

Specifications for particular cases are routinely considered to override
any conflicting specification covering the more general cases; the
standard would require a lot of complicated re-wording if that weren't
true. #ifdef __STDC_IEC_559__ is more specific than #ifndef.

That would have been trivial to fix,

What fix do you think would be needed?
"An implementation that defines __STDC_IEC_559_ _ shall conform to the
specifications in this annex." (F1.1). Should they add the words "even
in circumstances where the rest of the standard says that the behavior
is undefined."? That would seem redundant with the simple word "shall"
to me. What does the "shall" mean in this context if it doesn't already
imply that?

4p1: "‘‘shall’’ is to be interpreted as a requirement on an
implementation or on a program;" This is clearly a requirement imposed
on the implementation.

3.4.3p1: undefined behavior
"behavior, ... for which this International Standard imposes no
requirements."

Annex F is a part of "this International Standard", is labeled as
normative, and imposes requirements for the behavior when an
implementation chooses to pre-#define __STDC_IEC_559__. That doesn't
seem to meet the standard's definition of "undefined behavior".
 

Keith Thompson

James Kuyper said:
On 10/12/2011 10:25 AM, (e-mail address removed) wrote:
...

The freedom of C implementations to rearrange the order of evaluation is
great, but it's not completely unconstrained.
While there are people who have misinterpreted the BNF as fully
specifying the execution order, I don't consider that to be a common
position among those most familiar with the standard. A more common
position, and IMO fully defensible, is that the BNF implies constraints
on the execution order. For instance, in ((a+b) + (c+d)), the a+b can be
executed before or after the (c+d), but both must be executed before the
final addition can be performed, because that addition requires the
results of those executions. The standard does not say anything
explicitly about that fact, because it doesn't need to - it's implicit
in what the standard does explicitly say about the dependency of the
final value on the values of the sub-expressions.

And C201X, or at least the N1570 draft, makes this explicit. 6.5p1:

An _expression_ is a sequence of operators and operands that
specifies computation of a value, or that designates an object
or a function, or that generates side effects, or that performs
a combination thereof. The value computations of the operands
of an operator are sequenced before the value computation of
the result of the operator.

The phrase "sequenced before" is new; it's defined in 5.1.2.3.
If there's a sequence point after x and before y, then x is
"sequenced before" y. But the reverse is not necessarily true; x can
be sequenced before y even if there's no sequence point between them.

In the example "a + b + c", the grammar says that the first "+"
applies to "a" and "b", and the second applies to the result of the
first and to "c"; in other words, "a + b + c" means "(a + b) + c",
*not* "a + (b + c)".

The order of evaluation of the operands of a given operator is
unspecified, and I don't believe anyone here has suggested otherwise.
"a + b" could evaluate "a" and then "b", or "b" and then "a"; it could
even evaluate them in parallel as long as the result is as if they were
evaluated sequentially.

But an operator cannot be applied until after its operands have
been evaluated. This is implicit in C90 and C99, and explicit (see
above) in C201X. The result of "a + b" *must* be computed before
the result of "a + b + c" can be computed. There's no sequence
point between the two additions, but their order is constrained by
the simple fact that you can't perform an operation without first
evaluating the operands. (The lack of a sequence point means that
the "value computation" of "a" is sequenced before the computation of
"a + b", but any side effects of evaluating "a" are not.)

[...]

(Followups to comp.lang.c only.)
 

nmm1

I'm sure that there are alternative interpretations, many of them
defensible. The relevant thing would be a defensible interpretation that
differs from the one I gave in ways that would be problematic. Every
problematic interpretation I could come up with was indefensible for
reasons directly connected to the fact that it was problematic. Could you
give an example?

Dammit, you have JUST seen TWO!!!

One is that the BNF defines the execution order, except where
explicitly varied.

The other is that the grouping defines only the operator precedence
and not how the expression is evaluated. AS I SAID, I was active
in WG14 at the time, and I understood that to be the nearest it
got to a consensus.
It might not have done it as well as you might have liked - but it did
do it. :)

No, it didn't, BECAUSE IT'S NOT PORTABLE. That isn't just theory,
but the experience of being in WG14, interaction with vendors and
actually running tests.
Specifications for particular cases are routinely considered to override
any conflicting specification covering the more general cases; the
standard would require a lot of complicated re-wording if that weren't
true. #ifdef __STDC_IEC_559__ is more specific than #ifndef.

Well, that's NOT how they have been normally interpreted on the WG14
mailing list - I don't have the time to try to remember specific
examples now. It IS the case for implicit undefined behaviour, but
this is EXPLICIT.
What fix do you think would be needed?
"An implementation that defines __STDC_IEC_559_ _ shall conform to the
specifications in this annex." (F1.1). Should they add the words "even
in circumstances where the rest of the standard says that the behavior
is undefined."? That would seem redundant with the simple word "shall"
to me. What does the "shall" mean in this context if it doesn't already
imply that?

Please do think beyond black and white.

Does it apply when pragma FENV_ACCESS is not set and, if so, exactly
how? Yes, there is an 'obvious' best choice there.

What does it mean together with FP_CONTRACT or CX_LIMITED_RANGE on?
I can tell you that the BSI never got an answer out of WG14 on that
one!

Does it apply to the large amount of the C language and library NOT
mentioned in Annex F and, if so, exactly how?

Under what conditions does it apply if any function from an external
library is linked in? And, if you think there is a simple answer
to that, I recommend a year or two in an ISO standards committee.
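
(To make the FP_CONTRACT question above concrete - my construction, using
fma() to stand in for what a contracting compiler might emit - contraction
can change both the value and the flags of the same source expression:)

    #include <fenv.h>
    #include <float.h>
    #include <math.h>
    #include <stdio.h>

    #pragma STDC FENV_ACCESS ON
    #pragma STDC FP_CONTRACT OFF      /* force the "plain" form below */

    int main(void)
    {
        volatile double a = DBL_MAX, b = 2.0, c = -INFINITY;

        feclearexcept(FE_ALL_EXCEPT);
        volatile double plain = a * b + c;    /* a*b overflows to +inf, then
                                                 +inf + -inf is invalid: NaN */
        int pf = fetestexcept(FE_OVERFLOW | FE_INVALID);

        feclearexcept(FE_ALL_EXCEPT);
        volatile double fused = fma(a, b, c); /* exact 2*DBL_MAX plus -inf
                                                 is -inf: no overflow, no
                                                 invalid operation */
        int ff = fetestexcept(FE_OVERFLOW | FE_INVALID);

        printf("uncontracted: %g, flags set: %d\n", (double)plain, pf != 0);
        printf("contracted:   %g, flags set: %d\n", (double)fused, ff != 0);
        return 0;
    }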


Anyway, I don't have time for more.


Regards,
Nick Maclaren.
 

Robert Myers

No, it didn't, BECAUSE IT'S NOT PORTABLE.  


You scare me, Nick.

That's only fair, because, to the extent that you pay attention to
anything I say, I probably scare you.

What you are really saying is that, in a code where mysterious and
unreliable things are happening, you can't blindly transfer the code
from one ad hoc hardware and software standard to another. Listen
very carefully:

THAT'S A GOOD THING.

It may be the last thing standing between us and careerists who don't
have a clue "validating" their results by transferring a meaningless
calculation from one machine to another with no clue that something
totally crazy is going on.

The idea that the universe would be a safer place if only people would
listen to you doesn't even pass the laugh test.

Robert.
 

Andrew Reilly

It may be the last thing standing between us and careerists who don't
have a clue "validating" their results by transferring a meaningless
calculation from one machine to another with no clue that something
totally crazy is going on.

Nick does have a good point though, notwithstanding the last couple of
posts in which the authors asserted that C (at least) now *does* specify
a total order of operations within expressions (even ones that appear
ambiguous because of expectations of associativity). I didn't realize
that, and I'm still not sure that I believe them.

Nick's point (I believe) is that floating point arithmetic is fraught,
because it is fairly fundamentally unlike the mathematics that expression-
based languages appear to be offering. In maths and exact computer
arithmetic (eg integer), addition is associative and a bunch of other
nice rules for "simplification" apply. In floating point they don't,
because every floating point multiply (a * b), for example, is really
something like round(_g_current_FPU_rounding_mode, multiply(a,b)), and
every addition or subtraction is even more complicated, with a bunch of
normalizations thrown in for good measure. Round is, naturally, a lossy
operation. Throw in exceptions that can be raised in a bunch of ways and
you wind up with something that can only really be reasoned about as a
discrete sequence of assembly-language-like operations that both produce
results and mutate and depend on the "system state" in obscure and
platform-dependent ways. I haven't even mentioned an IEEE standard here:
it ought to be possible to define a stateless floating point method,
where rounding modes were explicit in the instructions and exceptions
were either defined away as in-band values or made into precise exceptions,
but I've never heard of it being done.
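
(A sketch of that statefulness, assuming the implementation supports the
optional rounding-direction macros in <fenv.h>: the same source expression
gives a different answer under each dynamic rounding mode.)

    #include <fenv.h>
    #include <stdio.h>

    #pragma STDC FENV_ACCESS ON

    static double third(void)
    {
        volatile double one = 1.0, three = 3.0;
        return one / three;        /* inexact, so the mode matters */
    }

    int main(void)
    {
        fesetround(FE_DOWNWARD);
        printf("rounded down: %.17g\n", third());

        fesetround(FE_UPWARD);
        printf("rounded up:   %.17g\n", third());

        fesetround(FE_TONEAREST);  /* restore the default */
        return 0;
    }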

Clearly you can still have parallelism between independent operations,
but you can't manipulate floating point expressions that look like maths
as though they were maths, at least not without giving up being able to
reason about the answer.

Which I think just emphasises your point, too: people doing floating
point need to have a clue...

Cheers,
 

nmm1

Nick's point (I believe) is that floating point arithmetic is fraught,
because it is fairly fundamentally unlike the mathematics that expression-
based languages appear to be offering. ...

Actually, no. It's tricky, but not THAT tricky. My real point is
that the C99 and other modern 'improvements' make it appear to the
naive as if it is more portable and reliable, but actually introduce
ten times as many portability and reliability problems as they solve.

Almost everyone who has actually tried it has discovered that, but
most people code for one system, run a couple of sets of well-behaved
data, and then claim that their program is portable and robust and
any failures must be the fault of the compiler!
Clearly you can still have parallelism between independent operations,
but you can't manipulate floating point expressions that look like maths
as though they were maths, at least not without giving up being able to
reason about the answer.

Yes, you can. Look at most of the classic numerical analysis books.
You have to do it rather differently, and very, very few modern
'computer scientists' would know how to start. Wilkinson and Reinsch
"The Algebraic Eigenvalue Problem" is one example of such reasoning
that I have used in the past.
Which I think just emphasises your point, too: people doing floating
point need to have a clue...

Oh, THERE, I am in 100% agreement. When I teach it, I tell people
that the key is to think floating-point, not mathematical real,
but that 50+ years of Fortran experience shows that it's not as
hard as all that. Inter alia, we used to write code that was both
reliable and portable across a range of arithmetics that most
people nowadays cannot imagine :)

There ARE some books on how to write robust, portable floating-point
code, but all are very old-fashioned. And there were a large number
of experiments showing both that the exact details of the arithmetic
didn't matter much (though directed versus nearest rounding did),
and that relying on the exact details led to the code being LESS
robust, rather than more.

Of course, that's now all regarded as heresy ....


Regards,
Nick Maclaren.
 

Arivald

On 2011-10-08 22:03, Kaba wrote:

Simply asserting that isn't particularly helpful. Maybe you could step
in and make a better statement, on where the behaviour agrees and where
it differs.


I disagree. Mathematics has one zero; IEEE floating point has two.

Start thinking about an IEEE floating point number as an approximate
number - a number you can't measure exactly; there is always some
unavoidable error. In this case -0 means: a number so small that we can
assume it is zero, but we are certain it is negative. +0 is the same,
but positive.

So you can never say it is zero, because you can never be certain it is.
You can only compare to zero up to some precision.
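
(The two zeros do compare equal, though; the sign becomes visible only
through operations such as division or signbit(). A small illustration:)

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        volatile double pz = 0.0, nz = -0.0;

        printf("pz == nz    : %d\n", pz == nz);         /* 1: they compare equal */
        printf("signbit(nz) : %d\n", signbit(nz) != 0); /* 1: the sign survives  */
        printf("1.0 / pz    : %g\n", 1.0 / pz);         /* +inf                  */
        printf("1.0 / nz    : %g\n", 1.0 / nz);         /* -inf                  */
        return 0;
    }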
 

Al Grant

The freedom of C implementations to rearrange the order of evaluation is
great, but it's not completely unconstrained.
While there are people who have misinterpreted the BNF as fully
specifying the execution order, I don't consider that to be a common
position among those most familiar with the standard. A more common
position, and IMO fully defensible, is that the BNF implies constraints
on the execution order. For instance, in ((a+b) + (c+d)), the a+b can be
executed before or after the (c+d), but both must be executed before the
final addition can be performed

That was what I was saying, with the further provision that
a + b + (c + d) is equivalent to ((a+b) + (c+d)). I.e. in C,
parentheses are a lexical feature and don't act as a further
constraint on the parse. The compiler has no more leeway
to reassociate a+b+c than it does to reassociate (a+b)+c.
 

Al Grant

Nick does have a good point though, notwithstanding the last couple of
posts in which the authors asserted that C (at least) now *does* specify
a total order of operations within expressions (even ones that appear
ambiguous because of expectations of associativity).  I didn't realize
that, and I'm still not sure that I believe them.

The point is that there is no difference in C between a+b+c
and (a+b)+c. Any order implied by the latter is also implied
by the former. The point is that operator precedence in C
uniquely defines the "evaluation tree". I don't believe anyone
has asserted that there is a total order of all computations
within the tree, i.e. between the LHS and RHS of a binary
operator.
Nick's point (I believe) is that floating point arithmetic is fraught

We all know that floating-point arithmetic is not the same as
real arithmetic. Nick appears to be making an abstruse point,
about the relationship between ISO C and IEEE 754, without
being able to explain what that point is.
 

James Kuyper

On 10/13/2011 09:39 AM, Al Grant wrote:
....
That was what I was saying, with the further provision that
a + b + (c + d) is equivalent to ((a+b) + (c+d)). I.e. in C,
parentheses are a lexical feature and don't act as a further
constraint on the parse.

I'm not sure exactly what you mean by the statement that they "don't act
as a further constraint on the parse". To clarify, do you agree that
each of the following expressions is parsed into a different evaluation
tree?

a + b + c + d
a + (b + c) + d
a + b + (c + d)
a + (b + (c + d))

In particular, the conditions that make each of those expressions
produce an overflow may be quite different.
... The compiler has no more leeway
to reassociate a+b+c than it does to reassociate (a+b)+c.

I'll agree with that.
 

Skybuck Flying

"
To be a defined result the inverse operation must generate the original
values and be unique.

so, if :

1.0/0.0 = inf, then inf * 0.0 = 1.0
and

2.0/0.0 = inf, then inf*0.0 = 2.0

That is why division by zero and calculations in the reals with inf are
undefined.
"

Feels as if math has a little shortcoming here.

Here is an idea:

How about:

Infinity * 0.0 = Anything

Bye,
Skybuck.
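
(For the record, IEEE 754 - and C's Annex F - resolves this by making the
result a quiet NaN and raising the invalid-operation flag, rather than
"anything". A quick check, assuming an IEC 60559 implementation:)

    #include <fenv.h>
    #include <math.h>
    #include <stdio.h>

    #pragma STDC FENV_ACCESS ON

    int main(void)
    {
        volatile double inf = INFINITY, zero = 0.0;

        feclearexcept(FE_ALL_EXCEPT);
        volatile double r = inf * zero;   /* invalid operation */

        printf("inf * 0.0 = %g, isnan = %d, FE_INVALID set = %d\n",
               (double)r, isnan(r), fetestexcept(FE_INVALID) != 0);
        return 0;
    }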
 

Al Grant

I'm not sure exactly what you mean by the statement that they "don't act
as a further constraint on the parse". To clarify, do you agree that
each of the following expressions is parsed into a different evaluation
tree?

        a +  b +  c  + d
        a + (b +  c) + d
        a +  b + (c  + d)
        a + (b + (c  + d))

Yes. Also, the third one is the only "balanced" tree where
two of the adds can happen at the same time.
In particular, the conditions that make each of those expressions
produce an overflow may be quite different.

And in floating-point the result may be numerically different...
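
A concrete (and admittedly contrived) illustration of both points, using
DBL_MAX so that each tree overflows, or doesn't, in a different place:

    #include <float.h>
    #include <stdio.h>

    int main(void)
    {
        volatile double a = DBL_MAX, b = DBL_MAX, c = -DBL_MAX, d = -DBL_MAX;

        printf("a +  b +  c  + d   = %g\n", a + b + c + d);      /* +inf: a+b overflows  */
        printf("a + (b +  c) + d   = %g\n", a + (b + c) + d);    /* 0: nothing overflows */
        printf("a +  b + (c  + d)  = %g\n", a + b + (c + d));    /* nan: +inf + -inf     */
        printf("a + (b + (c  + d)) = %g\n", a + (b + (c + d)));  /* -inf: c+d overflows  */
        return 0;
    }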
 

Robert Myers

Nick does have a good point though, notwithstanding the last couple of
posts in which the authors asserted that C (at least) now *does* specify
a total order of operations within expressions (even ones that appear
ambiguous because of expectations of associativity).  I didn't realize
that, and I'm still not sure that I believe them.

Nick's point (I believe) is that floating point arithmetic is fraught,
because it is fairly fundamentally unlike the mathematics that expression-
based languages appear to be offering.

 I haven't even mentioned an IEEE standard here:
it ought to be possible to define a stateless floating point method,
where rounding modes were explicit in the instructions and exceptions
were either defined away as in-band values or made into precise exceptions,
but I've never heard of it being done.

Which I think just emphasises your point, too: people doing floating
point need to have a clue...

You make some good points that are worth remembering.

I'm not sure, though, that a stateless and platform-independent
definition of floating point arithmetic is either necessary or
desirable or even possible within the bounds of practical utility.

If the floating point details that people spend so much time on here
make all that much difference, there is probably something flaky about
either the algorithm or the code or both. Given that at least some of
the people who spend so much time talking about this issue here
probably already understand the claim I just made, I don't understand
why this alchemist's pursuit of turning lead into gold continues.

Given the choice between hammering it into people that floating point
arithmetic is not for the naive or even necessarily replicable from
one situation to another, and inventing an arbitrary standard that
makes it at least make the same mistake every time, I'd prefer the
current chaos to artificial predictability.

I could and would and do make the same objection about the
correspondence to mathematics that you make for floating point
arithmetic to every code that purports to represent mathematics where
a differential operator appears in the calculation. I know how to
make that correspondence precise and completely unarbitrary, but I am
repeatedly told that doing so is simply too expensive.

I conclude, as I believe I have said before, that this continuing
discussion is an instance of Nazrudin's lost key: looking where there
is light. Given that many if not most codes that use floating point
arithmetic rely on absurd mathematics (repeatedly differentiating
derivatives that are only piecewise continuous even in theory), I just
can't understand all the fuss about the last bit (rounding), which
idealized properties floating point arithmetic does or does not
possess, or what arbitrary thing the hardware does when the algorithm
leads to a nonsensical or ambiguous result.

Robert.
 

Andrew Reilly

If the floating point details that people spend so much time on here
make all that much difference, there is probably something flaky about
either the algorithm or the code or both.

Well, yes. I'm afraid that I have that particular rant on speed-dial,
because I often find myself having to hose down the desire to produce bit-
exact test-suites for inherently unstable (or at least somewhat
arbitrary) algorithms that just happen to behave in a particular way in
floating point with a particular version of a compiler, with a particular
set of command-line options. Believe me, it happens a lot.

Cheers,
 

Jasen Betts

It would in fact be much _more_ useful to have a mode where rounding was
arbitrary/random, and could even impact one or more extra low bits.

Intel tried something like that in the 90s :^)
 

Phil Carmody

Terje Mathisen said:
Rather fixed it again:

I did write most of the sw workaround for the FDIV (and FPATAN) bug on
the Pentium.

That's what I was trying to imply. If we're pretending the bug was a
feature, then your fix 'broke' it. (I was assuming FDIV was what was
being referred to.)

Phil
 
