When shorts are longer than longs!

James Dow Allen

Sorry, I must have missed something. I'd definitely use CHAR_BIT, not
8, for two reasons. First, and perhaps *less* important, CHAR_BIT
isn't necessarily 8 on all systems (I say this may be less important
because I've never programmed for a system with CHAR_BIT != 8).
Second, it's clearer to use a symbolic name rather than a magic
number.

Why do you think 8 might be better than CHAR_BIT?

No one has, as yet in this thread, presented a specific system where
CHAR_BIT != 8, nor one where integers have padding bits. BUT an unchallenged
inference from Twirlip's comments suggests that this (hypothetical?)
machine has 9-bit chars and 32-bit integers. *Then* 8*sizeof would
work but CHAR_BIT*sizeof would *not*.

Another reason to suspect this is the correct approach is that
GMP uses 8, not CHAR_BIT. GMP is a very thoroughly tested and
ported piece of code.

Finally, if ints DO have 36 bits but code assumes 32 bits, the
code (if coded with care) will still work, but the converse
obviously doesn't apply.

The *real* solution is for the "standard header(s)" to contain
a BITS_PER_INT define and my question remains: Why don't they?
(Committee oversight is not an adequate answer; if the standard
needs to evolve, let it evolve.)

James Dow Allen
 
Flash Gordon

James said:
No one has, as yet in this thread, presented a specific system where
CHAR_BIT != 8,

Plenty have been presented in the past. It is actually quite common for
certain types of processors, such as DSPs.
nor where integers have padding

That would be more unusual.
BUT an unchallenged
inference from Twirlip's comments suggests that this (hypothetical?)
machine has 9-bit chars and 32-bit integers.

People are allowed to present hypothetical machines to make a point. I
assume that is 32 value bits with 4 padding bits.
*Then* 8*sizeof would
work but CHAR_BIT*sizeof would *not*.

That would be far more unusual than the situation where CHAR_BIT*sizeof
worked but 8*sizeof was wrong.
Another reason to suspect this is the correct approach is that
GMP uses 8, not CHAR_BIT. GMP is a very thoroughly tested and
ported piece of code.

That does *not* mean it is maximally portable. For a start, its main
targets (according to its web page) are Unix-like systems (and POSIX
requires CHAR_BIT==8), with it also having been ported to Windows. So
they do not claim it is portable to machines where CHAR_BIT!=8.
Finally, if ints DO have 36 bits but code assumes 32 bits, the
code (if coded with care) will still work, but the converse
obviously doesn't apply.
True.

The *real* solution is for the "standard header(s)" to contain
a BITS_PER_INT define and my question remains: Why don't they?
(Committee oversight is not an adequate answer; if the standard
needs to evolve, let it evolve.)

Put in a request for it to be added to the standard. You could start by
asking on comp.std.c where people on the standards committee are more
likely to see the question.
 
James Kuyper

James said:
No one has, as yet in this thread, presented a specific system where
CHAR_BIT != 8, nor one where integers have padding bits. BUT an unchallenged
inference from Twirlip's comments suggests that this (hypothetical?)
machine has 9-bit chars and 32-bit integers. *Then* 8*sizeof would
work but CHAR_BIT*sizeof would *not*.

CHAR_BIT will not give you what you want, and 8 will, on any system
which has more than 8 bits per byte but where the extra bits per byte are
all padding bits - such systems are extremely rare; they are possibly
entirely hypothetical. CHAR_BIT will give you what you want, and 8 will
not, on systems with no padding and more than 8 bits. There's a fair
number of systems out there, mostly DSPs, where CHAR_BIT is 16.
Therefore, I think you're better off using CHAR_BIT than 8.
Another reason to suspect this is the correct approach is that
GMP uses 8, not CHAR_BIT. GMP is a very thoroughly tested and
ported piece of code.

One that has probably never been ported to systems of either of the
types mentioned above.
 
lawrence.jones

James Dow Allen said:
The *real* solution is for the "standard header(s)" to contain
a BITS_PER_INT define and my question remains: Why don't they?

Because the contents of the "standard header(s)" were determined long
before the committee seriously considered the issue of padding bits in
integer types, and it was not included in the proposal(s) that addressed
padding bits. As has been noted, they're extremely rare, so the
committee hasn't felt the need to address the issue without a concrete
proposal, which no one has supplied thus far.
 
James Dow Allen

CHAR_BIT will not give you what you want, and 8 will, on
... [systems which] are extremely rare; they are possibly
entirely hypothetical. CHAR_BIT will give you what you want,
on systems with no padding and more than 8 bits. There's a fair
number of systems out there, mostly DSPs, where CHAR_BIT is 16.
Therefore, I think you're better off using CHAR_BIT than 8.

OK, thanks for this. I'll have to plead dementia, rather than
innocent ignorance, since I once briefly encountered a Texas
Inst. DSP with, as you say, CHAR_BIT == 16. I'll blame GMP's
reversion to 8 for the repression of that memory. :)
One that has probably never been ported to systems of either of the
types mentioned above.

The GMP config file was extremely complex and full of erudite
comments about idiosyncratic compilers. I think gullible James
fell for it... :-(

Ben Bacarisse wrote, in another message:
Already posted elsewhere, but it deserves wider circulation:

/* Number of bits in ... any (1<<b)-1 where 0 <= b < 3E+10 */

BTW, does the standard actually require that INT_MAX, etc.
be of the form (1<<b)-1 ? I don't know why they wouldn't have
that form, but I didn't know of padding bits either.
#define IMAX_BITS(m) ((m) /((m)%0x3fffffffL+1) \
/0x3fffffffL %0x3fffffffL *30 \
+ (m)%0x3fffffffL /((m)%31+1)/31%31*5 \
+ 4-12/((m)%31+3))

Hallvard B Furuseth 30 Dec 2003, 18:38
comp.lang.c Message-ID: <[email protected]>

Thanks for this, Ben! It was fun to figure out why it
works. Just for fun, I came up with a slightly simpler
version which works if your largest integer has fewer
than 2040 bits (i.e. goes up *only* to 10**614) :

#define IMAX_BITS(m) \
((m) /((m)%255+1)/255%255*8 + 7-100/((m)%255+14))

I still think the standard should be modified to insist
that headers define BITS_IN_ULONG, etc. but for now, perhaps
I'll switch to use of the IMAX_BITS macro.

BTW, here comes
#define INT_MAX 2147483647
Trivia question: what was this number's "claim to fame"
long before it was encountered in C programming?


James Dow Allen
 
James Dow Allen

James Dow Allen said:

(For just over a century, from the 1770s to the 1880s, it was the
/highest/ known Mersenne prime.)

That's almost what I had in mind: for just *under* a century
it was the highest known prime of any sort.
It's ... one of only a tiny handful
of double Mersenne primes...

Do you have a proof of that fact, too long to fit in
a Usenet post? You'll be famous!

James
 
James Kuyper

James Dow Allen wrote:
....
Ben Bacarisse wrote, in another message:

BTW, does the standard actually require that INT_MAX, etc.
be of the form (1<<b)-1 ? I don't know why they wouldn't have
that form, but I didn't know of padding bits either.

I've heard the claim made that it does. The truth of that claim depends
upon how you interpret 6.2.6.2p2:

| If the sign bit is one, the value shall be
| modified in one of the following ways:
| — the corresponding value with sign bit 0 is negated (sign and
| magnitude);
| — the sign bit has the value -(2**N) (two's complement);
| — the sign bit has the value -(2**N - 1) (ones' complement).
| Which of these applies is implementation-defined, as is whether the
| value with sign bit 1 and all value bits zero (for the first two), or
| with sign bit and all value bits 1 (for ones’ complement), is a trap
| representation or a normal value.

Some people read this as giving permission for a maximum of one trap
representation that depends upon the value bits; all other trap
representations, if any, must be identified as such by looking at value
bits.

I look at that same clause, and see one specific example given where the
fact that a bit pattern is a trap representation depends upon the value
bits, a case that is singled out because, in two of the permitted
interpretations for a sign bit, it is the only redundant bit pattern. I
don't see anything here prohibiting the existence of other trap
representations that depend upon value bits.

For example, I don't see any contradiction with any clause of the
standard if an implementation provides a 20-bit int with INT_MAX ==
999999, INT_MIN == -1000000, defining every bit pattern that would
otherwise represent values greater than 999999 or smaller than -1000000
to be a trap representation. Other people disagree.
 
Ben Bacarisse

James Dow Allen said:
BTW, does the standard actually require that INT_MAX, etc.
be of the form (1<<b)-1 ? I don't know why they wouldn't have
that form, but I didn't know of padding bits either.

Yes. The standard defines value bits and padding bits (along with a
sign bit for signed numbers). A footnote explains that some
combinations of padding bits can correspond to a trap representation,
and the special case of negative zero in some number systems is also
permitted to be a trap representation.

The possibility that some combination of value bits might be taken to
be a trap representation is excluded (in my opinion) by:

"If there are N value bits, each bit shall represent a different
power of 2 between 1 and 2**(N-1), so that objects of that type
shall be capable of representing values from 0 to 2**N - 1 using a
pure binary representation; this shall be known as the value
representation."

in 6.2.6.2.

BTW, here comes
#define INT_MAX 2147483647
Trivia question: what was this number's "claim to fame"
long before it was encountered in C programming?

It's a prime; a Mersenne prime to boot and, for nearly a century, the
largest known prime, but is that fame?
 
Hallvard B Furuseth

Ben said:
Yes. (...)
The possibility that some combination of value bits might be taken to
be a trap representation is excluded (in my opinion) by:

"If there are N value bits, each bit shall represent a different
power of 2 between 1 and 2**(N-1), so that objects of that type
shall be capable of representing values from 0 to 2**N - 1 using a
pure binary representation; this shall be known as the value
representation."

Pity that's just about unsigned integers. The following paragraph about
signed integers says each value bit (i.e. not sign/padding bits) has the
same value as with unsigned integers, but doesn't actually spell out
that all bit combinations of value bits are valid.


Regarding the subject line: 7-86/((m)%255+12) as the last term shaves
off one more digit. :) That variant, also valid up to 2039 bits, was my
first version. The ridiculous-range variant began as a joke, but then
I figured, why bother with a 1-line disclaimer "breaks if someone
implements cryptography-sized integers" when one more line in the macro
dispenses with the disclaimer? So with 2 lines I kept the largest range
that could be portably spelled, just for fun.
 
Ben Bacarisse

Hallvard B Furuseth said:
Pity that's just about unsigned integers. The following paragraph about
signed integers says each value bit (i.e. not sign/padding bits) has the
same value as with unsigned integers, but doesn't actually spell out
that all bit combinations of value bits are valid.

No, but I think the intent is clear. What is a value bit if not to
contribute value[1]? I think the argument that some combination of value
bits stops being a value and becomes a trap representation is pushing
the normal reading too far.

Do you think (or know!) that the intent was to allow combinations of
value bits to be something other than a value? It seems unlikely to
me given that any implementation that wants to take advantage of this
permission must abandon it for the corresponding unsigned type.

[1] Yes, there is the special case of negative zero but the very fact
that this is treated in such a way suggests that this is the only
exception the committee wanted to permit.
 
Spiros Bousbouras

Pity that's just about unsigned integers. The following paragraph about
signed integers says each value bit (i.e. not sign/padding bits) has the
same value as with unsigned integers, but doesn't actually spell out
that all bit combinations of value bits are valid.

If some combination of value bits was a trap representation then
those bits would no longer have the same value as the
corresponding unsigned type. I interpret the phrase

Each bit that is a value bit shall have the same value as
the same bit in the object representation of the corresponding
unsigned type

appearing in paragraph 2 of 6.2.6.2 as meaning that what is
stated in paragraph 1 namely

If there are N value bits, each bit shall represent a different
power of 2 between 1 and 2**(N-1), so that objects of that type
shall be capable of representing values from 0 to 2**N - 1 using
a pure binary representation;

also applies to signed integers. Hence I believe the maximum
value for an integer type, signed or unsigned, is 2**N - 1.
 
James Kuyper

Ben Bacarisse wrote:
....
No, but I think the intent is clear. What is a value bit if not to
contribute value[1]? I think the argument that some combination of value
bits stops being a value and becomes a trap representation is pushing
the normal reading too far.

That argument is too subjective for me to provide any response other
than "I disagree".
[1] Yes, there is the special case of negative zero but the very fact
that this is treated in such a way suggests that this is the only
exception the committee wanted to permit.

The special recognition given to negative zero is adequately explained
by pointing out that it's the only redundantly represented value;
there's no need to assume that there's an unwritten restriction that
this is the only permitted case.
 
James Kuyper

Spiros said:
If some combination of value bits was a trap representation then
those bits would no longer have the same value as the
corresponding unsigned type.

Those bits have the same value as in the unsigned type, for every bit
pattern which is not a trap representation. The existence of even a
single such bit pattern is sufficient to meet that requirement.
... I interpret the phrase

Each bit that is a value bit shall have the same value as
the same bit in the object representation of the corresponding
unsigned type

appearing in paragraph 2 of 6.2.6.2 as meaning that what is
stated in paragraph 1 namely

If there are N value bits, each bit shall represent a different
power of 2 between 1 and 2**(N-1), so that objects of that type
shall be capable of representing values from 0 to 2**N - 1 using
a pure binary representation;

also applies to signed integers. Hence I believe the maximum
value for an integer type, signed or unsigned, is 2**N - 1.

Your argument above implies that the statement describing the
representable range of values is redundant. The committee apparently
thought it was necessary, anyway, which suggests (but admittedly does
not prove) that they did not consider it redundant.

If it were meant to apply to signed integer types, why does that wording
occur in a clause explicitly restricted in its application to unsigned
integer types? Why is there no comparable wording in the clause for
signed integer types - the standard is not shy about repeating itself in
other, comparable situations. Better yet, it could have been written
exactly once (with suitable modifications), in a clause that explicitly
stated that it applied to all integer types, signed and unsigned. Why
wasn't it?
 
Spiros Bousbouras

Pity that's just about unsigned integers. The following paragraph about
signed integers says each value bit (i.e. not sign/padding bits) has the
same value as with unsigned integers, but doesn't actually spell out
that all bit combinations of value bits are valid.

No, but I think the intent is clear. What is a value bit if not to
contribute value[1]? I think the argument that some combination of value
bits stops being a value and becomes a trap representation is pushing
the normal reading too far.
[...]

[1] Yes, there is the special case of negative zero but the very fact
that this is treated in such a way suggests that this is the only
exception the committee wanted to permit.

I agree with the conclusion but not the justification. First a
nitpick: negative zero is by definition a legal value. So
let's say instead negative zero bit pattern (NZBP).

Paragraph 2 says ``If the sign bit is zero, it shall not affect the
resulting value'' and then goes on to describe what happens when the
sign bit is 1, mentioning the possibility that the NZBP may be a trap
representation. Combining this with the passages from the standard
quoted above, I conclude that with the sign bit 0, any specific bit
pattern of the value bits represents the same value whether the
type is signed or unsigned.
 
Ben Bacarisse

James Kuyper said:
Ben Bacarisse wrote:
...
No, but I think the intent is clear. What is a value bit if not to
contribute value[1]? I think the argument that some combination of value
bits stops being a value and becomes a trap representation is pushing
the normal reading too far.

That argument is too subjective for me to provide any response other
than "I disagree".

Yes, I know, and in fact I agree to some extent. There is wriggle
room, but it seems to be the very narrowest of spaces.

Let me try another argument. The key section says:

Each bit that is a value bit shall have the same value as the same
bit in the object representation of the corresponding unsigned type
(if there are M value bits in the signed type and N in the unsigned
type, then M ≤ N). If the sign bit is zero, it shall not affect the
resulting value.

Here, the phrase "the resulting value" suggests that a value always
results. If some set of M bit settings did not represent a value it
would have to say "the value" or better yet "the value (if any)".
What is "the resulting value" if there is no value for some
combinations of value bits?
[1] Yes, there is the special case of negative zero but the very fact
that this is treated in such a way suggests that this is the only
exception the committee wanted to permit.

The special recognition given to negative zero is adequately explained
by pointing out that it's the only redundantly represented value;
there's no need to assume that there's an unwritten restriction that
this is the only permitted case.

I should not have said negative zero. I meant the exceptional case of
the representation that is not a value because it is not negative
zero. Specific permission is given for the pattern that might be
negative zero to be, in fact, a trap representation. That is just a
special case of some set of value bits not being a value. Why discuss
this special case if combinations of value bits can be trap
representations in the usual reading of the text (as you take it)?
 
Ben Bacarisse

Spiros Bousbouras said:
Hallvard B Furuseth said:
Ben Bacarisse writes:
BTW, does the standard actually require that INT_MAX, etc.
be of the form (1<<b)-1 ? I don't know why they wouldn't have
that form, but I didn't know of padding bits either.
Yes. (...)
The possibility that some combination of value bits might be taken to
be a trap representation is excluded (in my opinion) by:
"If there are N value bits, each bit shall represent a different
power of 2 between 1 and 2**(N-1), so that objects of that type
shall be capable of representing values from 0 to 2**N - 1 using a
pure binary representation; this shall be known as the value
representation."
Pity that's just about unsigned integers. The following paragraph about
signed integers says each value bit (i.e. not sign/padding bits) has the
same value as with unsigned integers, but doesn't actually spell out
that all bit combinations of value bits are valid.

No, but I think the intent is clear. What is a value bit if not to
contribute value[1]? I think the argument that some combination of value
bits stops being a value and becomes a trap representation is pushing
the normal reading too far.
[...]

[1] Yes, there is the special case of negative zero but the very fact
that this is treated in such a way suggests that this is the only
exception the committee wanted to permit.

I agree with the conclusion but not the justification. First a
nitpick: negative zero is by definition a legal value. So
let's say instead negative zero bit pattern (NZBP).

Yes, corrected in my reply to James. I meant only the pattern when it
is not a value.
Paragraph 2 says ``If the sign bit is zero, it shall not affect the
resulting value'' and then goes on to describe what happens when the
sign bit is 1, mentioning the possibility that the NZBP may be a trap
representation. Combining this with the passages from the standard
quoted above, I conclude that with the sign bit 0, any specific bit
pattern of the value bits represents the same value whether the
type is signed or unsigned.

That last phrase is too strong since there are clearly bit patterns
that do not represent the same value signed and unsigned. If M < N
this can occur without the sign bit being involved.
 
Spiros Bousbouras

Those bits have the same value as in the unsigned type, for every bit
pattern which is not a trap representation.

The requirement "for every bit pattern which is not a trap
representation" is one that you add yourself and does not appear
in the standard.
The existence of even a
single such bit pattern is sufficient to meet that requirement.

So according to your opinion, if for example all bits 0
represents the value 0 for both an unsigned and signed type then
the requirement "Each bit that is a value bit shall have the
same value as the same bit in the object representation of the
corresponding unsigned type" is met, yes?
Your argument above implies that the statement describing the
representable range of values is redundant. The committee apparently
thought it was necessary, anyway, which suggests (but admittedly does
not prove) that they did not consider it redundant.

By "statement" I assume you mean the one which says

If there are N value bits, each bit shall represent a
different power of 2 between 1 and 2**(N-1), so that
objects of that type shall be capable of representing
values from 0 to 2**N - 1 using a pure binary
representation; this shall be known as the value
representation.

No it's not redundant because even if you know the value of each
individual bit, you still don't know how you are supposed to
combine the individual values together to get the value of the
representation. The above statement tells you essentially that
you add together the values of individual bits to get the value
of the representation. If you're saying that it is redundant to
explicitly mention "2**N - 1" then perhaps it is, but note that
the same "redundancy" exists in footnote 40. And in any case, if
the standard is not shy about repeating itself as you say below,
then I don't think you can draw any conclusions from its choice to
mention something as short as "2**N - 1".
If it were meant to apply to signed integer types, why does that wording
occur in a clause explicitly restricted in its application to unsigned
integer types? Why is there no comparable wording in the clause for
signed integer types - the standard is not shy about repeating itself in
other, comparable situations.

Well, it says "Each bit that is a value bit shall have the same
value as the same bit in the object representation of the
corresponding unsigned type". Looks enough to me. If the
standard is not shy about repeating itself then I wouldn't
expect it to be shy about mentioning for the first time a fairly
important piece of information namely that even in the absence
of padding bits and even if the sign bit is 0 it is still
possible to have trap representations.
Better yet, it could have been written
exactly once (with suitable modifications), in a clause that explicitly
stated that it applied to all integer types, signed and unsigned. Why
wasn't it?

Pretty much every part of the standard can be written in various
ways. You seem to be saying that just because this part of the
standard could be written in some other way which you find
"better" then there must be some hidden meaning. I assume the
standard covers separately signed and unsigned because in the
case of signed it wants to describe how the sign bit interacts
with the values.
 
Spiros Bousbouras

That last phrase is too strong since there are clearly bit patterns
that do not represent the same value signed and unsigned. If M < N
this can occur without the sign bit being involved.

Yes but I said "bit pattern of the value bits" meaning that the bits
involved are value bits in both the signed and unsigned case.

Which makes me ask are there systems where the precision of a
signed type is smaller than the precision of the corresponding
unsigned type? If yes are there legal values for the unsigned
type which are trap representations for the signed type although
the sign bit is 0?
 
Ben Bacarisse

Spiros Bousbouras said:
Yes but I said "bit pattern of the value bits" meaning that the bits
involved are value bits in both the signed and unsigned case.

Right. The phrase was ambiguous but you obviously mean the smaller
set of value bits.

<snip question I can't answer>
 
jameskuyper

Spiros said:
The requirement "for every bit pattern which is not a trap
representation" is one that you add yourself and does not appear
in the standard.

No, that's not a requirement, and I had not intended to imply that it
is one. It is a description of the bit patterns that justify
concluding that the requirement has been met; the description itself
is not part of the requirement.
So according to your opinion, if for example all bits 0
represents the value 0 for both an unsigned and signed type then
the requirement "Each bit that is a value bit shall have the
same value as the same bit in the object representation of the
corresponding unsigned type" is met, yes?

If that were the only applicable requirement, yes. However, it's only
a meaningful requirement when you consider bit patterns where the bits
are actually set, and the standard does require the existence of such
bit patterns. The standard defines the meaning of the *_MAX and *_MIN
macros, and imposes minimum values for those which are positive, and
maximum values for those which are negative. This implies that the
requirement you're referring to must hold true for all of those
values, not just 0, and for all the bits that need to be non-zero in
order to represent one or more of those values. However, it's not
required to apply for values greater than INT_MAX, and the standard is
not as strict in its requirements for the ranges of signed integer
types as it is for the ranges of unsigned types.

By "statement" I assume you mean the one which says

If there are N value bits, each bit shall represent a
different power of 2 between 1 and 2**(N-1), so that
objects of that type shall be capable of representing
values from 0 to 2**N - 1 using a pure binary
representation; this shall be known as the value
representation.

No it's not redundant because even if you know the value of each
individual bit, you still don't know how you are supposed to
combine the individual values together to get the value of the
representation. The above statement tells you essentially that
you add together the values of individual bits to get the value
of the representation. If you're saying that it is redundant to
explicitly mention "2**N - 1"

No, I'm saying that it would be redundant if your interpretation were
correct, which I deny. It needs to be said precisely because it is not
redundant, because it implies that all possible combinations of bit
patterns must actually represent valid values, something which is not
otherwise deducible, and which is therefore not required for signed
integer types.
... then perhaps it is, but note that
the same "redundancy" exists in footnote 40. And in any case, if
the standard is not shy about repeating itself as you say below,
then I don't think you can draw any conclusions from its choice to
mention something as short as "2**N - 1".

I'm drawing conclusions from its failure to contain a similar
statement for signed integers.

....
corresponding unsigned type". Looks enough to me. If the
standard is not shy about repeating itself then I wouldn't
expect it to be shy about mentioning for the first time a fairly
important piece of information namely that even in the absence
of padding bits and even if the sign bit is 0 it is still
possible to have trap representations.

It doesn't need to say that; it's already been said when the concept
of trap representations was introduced. In the absence of a statement
to the contrary, the fact that trap representations are permitted to
exist implies that they can include value, sign, and padding bits.

....
Pretty much every part of the standard can be written in various
ways. You seem to be saying that just because this part of the
standard could be written in some other way which you find
"better" then there must be some hidden meaning.

No, I don't consider that there must be some hidden meaning. It seems
to me a quite open and obvious one. One clause, specifically
restricted to unsigned integer types, imposes a requirement. One
clause, specifically restricted to signed integer types, imposes no
such requirement. I don't see anything hidden about that - the
conclusion that the requirement applies only to unsigned integer types
seems quite natural to me. What would have to count as "hidden" is the
"inheritance" of this requirement by signed integer types.

If you saw a sign in an airport which said "Incoming passengers who
are not US citizens: use gates 1-4. You will have to pass through
Customs before you are allowed to leave the airport. Incoming
passengers who are US citizens: use gates 5-8.", would you conclude
that US citizens would have to go through Customs? That seems to me to
be an implication of the "logic" you're using.
 
