Unsigned types are DANGEROUS??


MikeP

Paul said:
There is a reason Java doesn't bother with a built in unsigned
numeric type.

And that reason is? I know that by not having unsigned types, they
eliminated the possibility of creating a whole class of programs with
Java. They forgot the second part of "Keep it simple stupid", which is
"but not simpler than required".
 

Paul

On 14 Mar., 14:13, Leigh Johnston wrote:
<snip>

By the way, Leigh Johnston didn't quote the full phrase:
"/Because of the conversion rules that apply to +,/ if E1 is an array
and E2 an integer, then E1[E2] refers to the E2-th member of E1".

The C99 standard phrases it a bit differently:
"Because of the conversion rules that apply to the binary + operator,
if E1 is an array object (equivalently, a pointer to the initial
element of an array object) and E2 is an integer, E1[E2] designates
the E2-th element of E1 (counting from zero)."

The "counting from zero" remark seems redundant at first, since arrays
are zero-based anyway; but it is not redundant once you consider that
when referencing a sub-object (a sub-array), you are no longer
zero-based with respect to the underlying object.

Which tends to hint that E1 is simply converted into a pointer, and that
no special treatment is performed just because E1 is of array type. In
that case, I expect normal pointer arithmetic applies and a negative
integral part of the subscript operator is acceptable provided you
stay within the underlying object.
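
For concreteness, a minimal sketch of that reading (my own illustration,
not from the posts): the subscript is plain pointer arithmetic, so a
negative index is fine as long as the result stays inside the underlying
array object.

#include <cassert>

int main()
{
    int a[5] = { 10, 11, 12, 13, 14 };
    int* p = a + 2;       // not a pointer to the initial element

    assert(p[-2] == 10);  // p[-2] is *(p + -2): still within the object a
    assert(p[2] == 14);   // p[2] is a[4], the last element
    // p[-3] or p[3] would step outside the underlying object: UB
}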



Of course it is OK: the standard states the case with "a pointer to the
initial element"; it does not exclude the use of an offset pointer to
reference an array.

Just because Leigh does not see the need for arrays with a base index
other than 0 does not mean the C++ language is to be misinterpreted to
suit his idiotic ideas. Why should the language restrict itself to the
use of 0-based indexing only, when it is quite capable of being more
versatile? At the end of the day I know what an array is, you know what
an array is; what reasonable programmer needs to refer to the C++
standard to find out what an array is?

Just because Leigh gets confused in a stramash of misinterpretations and
mangled phrases doesn't mean that's how it is.


It says *IF* E1 is a pointer to the initial element, not E1 *MUST BE* a
pointer to the initial element. Thus it implies E1 *CAN BE* a pointer to
any place.
 

Paul

Leigh Johnston said:
On 14/03/2011 11:48, Leigh Johnston wrote:
On 14/03/2011 01:41, Paul wrote:

On 14/03/2011 00:09, Paul wrote:

On 13/03/2011 20:43, Paul wrote:

On 13/03/2011 19:29, Alf P. Steinbach /Usenet wrote:
* William Ahern, on 13.03.2011 03:01:

"If you stick to (unsigned int) or wider, then you're fine" is
generally
false. Use hammer for nails, screwdriver for screws. In short,
use the
right tool for the job, or at least don't use a clearly
inappropriate
tool: don't use signed types for bitlevel stuff, and don't use
unsigned
types for numerical stuff.


Bullshit. Using unsigned integral types to represent values that are
never negative is perfectly fine. std::size_t ensures that the C++
horse has already bolted as far as trying to avoid them is concerned.


unsigned types can be safer because everything about the arithmetic is
well defined, including over- and underflows, which occur modulo 2^N;
as opposed to signed, where those scenarios are undefined.

The well-definedness of operations that you're talking about is this:
that the language *guarantees* that range errors for unsigned types
will not be caught.

A guarantee that errors won't be caught does not mean "safer".

That's plain idiocy, sorry.

Plain idiocy is eschewing the unsigned integral types in C++. Perhaps
you would prefer being a Java programmer? Java has fewer types to "play
with", which perhaps would suit you better if you cannot cope with
C++'s richer set of types.

As Java, like C++, supports UDTs, I don't think it's correct to say
that C++ supports a richer set of types.
class anyTypeYouLike{};

This is a troll. It is obvious I am referring to the set of built-in
types. A user-defined type is often composed of one or more built-in
types.


There is a reason Java doesn't bother with a built-in unsigned numeric
type.
I think the people who created Java know more about programming than
you do and it is not a case of Java being inadequate. This is just your
misguided interpretation in an attempt to reinforce your idea that
std::size_t is the only type people should use in many circumstances.
You obviously think std::size_t is the best thing since sliced bread
and this is the way forward in C++ and, as per usual, your opinion is
wrong.

C++'s std::size_t comes from C's size_t. It is the way forward because
it is also the way backward.


In the message you replied to, Alf said to use the correct tool for the
job, which seems like a reasonable opinion. You replied saying this was
bullshit and implied Alf had said something about never using unsigned;
your post looks like a deliberate attempt to start a flame war.
You also make the point of saying that using unsigned for values that
are never negative is fine, but how do you know a value is never going
to be negative? Your idea of never negative is different from others';
you think array indexes cannot be negative, but most other people know
they can be.


If array indexes can be negative then please explain why
std::array::size_type will be std::size_t and not std::ptrdiff_t.


Just because that particular array type cannot have a negative index
doesn't mean this applies to all arrays.
Array indexes *can* be negative; see:
http://www.boost.org/doc/libs/1_46_1/libs/multi_array/doc/user.html
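
A sketch along the lines of the linked Boost.MultiArray documentation
(extent_range is the mechanism those docs describe for non-zero index
bases; treat the details as approximate and check them against the docs):

#include <boost/multi_array.hpp>
#include <cassert>

int main()
{
    typedef boost::multi_array<int, 1> array_type;
    typedef array_type::extent_range range;

    // Index base of -2: valid indices are -2, -1, 0 and 1.
    array_type a(boost::extents[range(-2, 2)]);
    a[-2] = 42;
    assert(a[-2] == 42);
}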


From n3242:

"if E1 is an array and E2 an integer, then E1[E2] refers to the E2-th
member of E1"

E2 cannot be negative if the above rule is not to be broken.

The above says E2 is an integer, not an unsigned integer.

Also:
int arr[5] = {0};
arr[4] = 5;        /* The 5th element, not the 4th */
arr[0] = 1;        /* The 1st element, not the 0th */
int* p_arr = arr;  /* note: &arr would be int(*)[5]; arr decays to int* */
++p_arr;
p_arr[-1] = 1;     /* The 1st element, not the -1st */
p_arr[0] = 2;      /* The 2nd element, not the 0th */


In C++ an array index can't be negative; p_arr above is not an array, it
is a pointer. Obviously "E2-th" is zero-based, not one-based. The fact
that Boost provides an array-like container which accepts negative
indices is mostly irrelevant; my primary concern is C++, not Boost;
again:

From n3242:

"if E1 is an array and E2 an integer, then E1[E2] refers to the E2-th
member of E1"

if E2 is negative then E1[E2] cannot possibly refer to a member of E1.


Also from n1336:

"if E1 is an array object (equivalently, a pointer to the initial
element of an array object) and E2 is an integer, E1[E2] designates the
E2-th element of E1 (counting from zero)."

This more closely matches my thinking on this subject; a sub-array
reference that allows you to give a negative array index on most
implementations is not an array object.

Obviously one can do the following:

#include <iostream>

int main()
{
    int n[2][2] = { { 11, 12 }, { 21, 22 } };
    std::cout << n[1][-1]; // outputs "12"
}

This should work on most implementations (Comeau warns with "subscript
out of range"); however I think it does run contra to the following:

"if E1 is an array and E2 an integer, then E1[E2] refers to the E2-th
member of E1"

as n[1][-1] is not a member of n[1]; it is a member of n[0].

So IMO it is an open question as to whether the above code is UB.

Interestingly g++ v4.1.2 (codepad.org) does not output "12"; this of
course either strengthens my position that using negative array indices
may be UB or suggests that g++ is buggy; either way, using negative
array indices is still poor practice.

/Leigh

Leigh STFU you idiot you don't know what you are talking about.
 

Paul

MikeP said:
And that reason is?

A numeric type is both negative and positive. Unsigned numeric types are
not required unless you have a very rare case where you need to squeeze
out that last extra bit of memory for each variable. If that is the
case, just use purpose-built arithmetic libraries.
I know that by not having unsigned types, they eliminated the possibility
of creating a whole class of programs with Java.
What class of program has this eliminated? Java does have unsigned types.


They forgot the second part of "Keep it simple stupid", which is "but not
simpler than required".
?
 

MikeP

MikeP said:
If you investigate the tcmalloc code (by Google), you will find the
following warning:

// NOTE: unsigned types are DANGEROUS in loops and other arithmetical
// places. Use the signed types unless your variable represents a bit
// pattern (eg a hash value) or you really need the extra bit. Do NOT
// use 'unsigned' to express "this value should always be positive";
// use assertions for this.

Is it just their idiom? What's the problem with using unsigned ints in
loops (it seems natural to do so)? Are C++ unsigned ints "broken"
somehow?

Well, that's a relief. Now that I've read the thread posts and the links
to other threads, I see I have not been doing anything wrong by
preferring unsigned where it seems natural to do so. The biggie points
(for me) are:

1. C++ conversion rules are at the core of the problem.
2. Mixing signed/unsigned inadvertently is not that big of a problem given
that most compilers emit a warning upon such.
3. Using signed where unsigned is a better fit loses semantic value.

I weight point 3 more so than 1 or 2, for else I'd feel subservient to the
shortcomings of the language rather than programming closer to an ideal
(and that is the goal). Of course in C++ one can wrap the built-in types
to achieve the desired semantics, which sounds like a great idea.

I know now too that Google code will be harder to decipher because of the
idiom they chose to follow.
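
For a concrete picture of point 1 and of the tcmalloc warning, here is
the classic loop pitfall (a minimal sketch of my own, not from the
posts): a reverse loop with an unsigned counter never terminates.

#include <cstddef>
#include <cstdio>

int main()
{
    int data[4] = { 1, 2, 3, 4 };

    // BUG (don't run): with an unsigned counter, i >= 0 is always true;
    // when i reaches 0, --i wraps around to SIZE_MAX and data[i] indexes
    // far outside the array.
    // for (std::size_t i = 3; i >= 0; --i) { std::printf("%d\n", data[i]); }

    // A signed counter states the termination condition directly:
    for (int i = 3; i >= 0; --i)
        std::printf("%d\n", data[i]);
}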
 

MikeP

Paul said:
A numeric type is both negative an positive.

That would be a statement of the most general kind. The set of positive
numbers is a valid concept also and is more specific, hence gives more
semantic value.
Unsigned numeric types are not required unless you have a very rare case
where you need to squeeze out that last extra bit of memory for each
variable.

"not required" and "not desired" are way different. Your statement is
akin to "C++ is not required, it can all be done in assembly!". That you
may not desire them is perfectly valid of course. But so is the
alternative preference.
If that is the case, just use purpose-built arithmetic libraries.

What class of program has this eliminated? Java does have unsigned
types.

Only the 16-bit char type, so not a full set of unsigned ints.

That they left out unsigned types, i.e., they "threw the baby out with
the bathwater" by oversimplifying.
 

Paul

MikeP said:
That would be a statement of the most general kind. The set of positive
numbers is a valid concept also and is more specific, hence gives more
semantic value.
Yes, it is a valid concept.
I can't think of many scenarios where it would be beneficial to think
that a number can only be positive. I see that more as a restriction on
the use of the variable and a possible cause of errors.

"not required" and "not desired" are way different. Your statement is
akin to "C++ is not required, it can all be done in assembly!". That you
may not desire them is perfectly valid of course. But so is the
alternative preference.
I don't understand where the "desire" is to use unsigned numeric types.
Only the 16-bit char type, so not a full set of unsigned ints.
Yes, true, but in all honesty it is very rare that you would ever need
anything else. In cases where you did, it would be easy to make a
workaround.
That they left out unsigned types, i.e., they "threw the baby out with
the bathwater" by oversimplifying.
I agree Java is not ideal, but I doubt there would be many cases where
you would need to implement a workaround because of the lack of large
unsigned integers.
 

Noah Roberts

The biggie points (for me) are:

1. C++ conversion rules are at the core of the problem.
2. Mixing signed/unsigned inadvertently is not that big of a problem given
that most compilers emit a warning upon such.

Actually, it can be tough to establish how to turn this warning on in
some compilers. The g++ compiler, for instance, does not turn this
warning on even with -Wall. You actually have to dig into the
documentation for the specific warning (-Wconversion?) and I've had
less than perfect results using it. The newest versions of g++ seem
like they might respond to the flag as you'd expect, but older versions
not so much. That's my experience anyway.

3. Using signed where unsigned is a better fit loses semantic value.

I weight point 3 more so than 1 or 2, for else I'd feel subservient to the
shortcomings of the language rather than programming closer to an ideal
(and that is the goal). Of course in C++ one can wrap the built-in types
to achieve the desired semantics, which sounds like a great idea.

I know now too that Google code will be harder to decipher because of the
idiom they chose to follow.

The Google C++ team doesn't exactly use what I'd call good practices.
Some practices they do use are good, others questionable, and some are
downright ill-advised. You've just found one more example of the latter
here. Unfortunately, since the company is so successful, they're seen as
authorities on good practices by many people. Frankly I wish they'd kept
their broken practices to themselves, where they *might* legitimately
apply. Their publication of their coding standards has led many to
believe that those standards are generally applicable, and this is
simply not true.
 

Alf P. Steinbach /Usenet

* MikeP, on 14.03.2011 18:49:
Well that's a relief. Now that I've read the thread posts and the links
to other threads, I see I have not been doing anything wrong by
preferring unsigned where it seems natural to do so. The biggie points
(for me) are:

1. C++ conversion rules are at the core of the problem.

Yes, right.

Another core issue is monkey-see-monkey-do habits & ideas, that is, habits &
ideas that are not founded in real understanding but are just mindlessly adopted
from some other context where they *are* meaningful.

I.e., what's meaningful in e.g. Pascal (guaranteed range checking, an
advantage) isn't meaningful in C++ (guaranteed lack of range checking, a
disadvantage of adopting Pascal habits & ideas).

2. Mixing signed/unsigned inadvertently is not that big of a problem given
that most compilers emit a warning upon such.

This is, however, wrong.

Particularly so since the Google coder's note speaks of "arithmetical places".

Compilers generally warn about comparisons between signed and unsigned,
but can't warn you of equally-or-more dangerous arithmetic constructs.

3. Using signed where unsigned is a better fit loses semantic value.

Yes, right.

And it's even worse: using signed where unsigned is a better fit (namely
for bit-level operations) can easily yield undefined behavior.

And vice versa, using unsigned where signed is a better fit (namely for
numerical stuff) loses semantic value and can far too easily introduce
bugs.
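
Minimal sketches of both failure directions (my illustrations, not from
the posts; the shift case assumes a 32-bit int):

#include <cstdio>
#include <vector>

int main()
{
    // Signed used for bit-level work: shifting a 1 into the sign bit of
    // a 32-bit int is undefined behavior (at least pre-C++14):
    // int bad = 1 << 31;       // UB with 32-bit int
    unsigned ok = 1u << 31;     // well defined: arithmetic modulo 2^32
    std::printf("%u\n", ok);

    // Unsigned used for numerical work: size() - 1 silently wraps to a
    // huge value when the container is empty, so a loop bound like
    // "i < v.size() - 1" is wrong instead of comparing against -1.
    std::vector<int> v;
    std::printf("%zu\n", v.size() - 1); // huge value, not -1
}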

I weight point 3 more so than 1 or 2, for else I'd feel subservient to the
shortcomings of the language rather than programming closer to an ideal
(and that is the goal). Of course in C++ one can wrap the built-in types
to achieve the desired semantics, which sounds like a great idea.

Well, until you've measured performance.

There is an abstraction cost.

I know now too that Google code will be harder to decipher because of the
idiom they chose to follow.

Huh?

This does not follow from the above.

On the contrary, the Google coder's note is in complete agreement with what
you've written above, namely, using unsigned and signed where each is natural.


Cheers & hth.,

- Alf
 

MikeP

Paul said:
Yes, it is a valid concept.
I can't think of many scenarios where it would be beneficial to think
that a number can only be positive. I see that more as a restriction on
the use of the variable and a possible cause of errors.

Of course it is a "restriction", but more appropriately, it is "more
specific". That is what immediately upon inspection gives more semantic
value. It says, "I'm one of THOSE kinds of things".

Given the discussion in this thread and the links, I don't see using
signed overshadowing the semantic correctness of unsigned when natural to
do so. Whichever you choose though, consistency is key.
I don't understand where the "desire" is to use unsigned numeric
types.

Then you are probably one of those who adhere to the alternative of
using signed even though it seems semantically awkward (to me). As long
as idioms are used consistently, it's fine. Note that the TCMalloc code
did the right thing by indicating with a comment which idiom their code
should follow (whether all the developers adhere to it or not, I don't
know).
Yes, true, but in all honesty it is very rare that you would ever need
anything else.

Well, rarity and importance are orthogonal concepts. The choice not to
have unsigned types in Java, to me, seems like over-simplification and a
"policy over mechanism" mindset: "We shouldn't have knives to slice bread
because using a knife opens the potential for cutting a finger". But of
course kids shouldn't have access to knives. Kinda makes Java a toy for
kids! (Just kidding, I couldn't resist that last part).
In cases where you did, it would be easy to make a workaround.

Not without compromise in the design: you'd have to double the width of
the integer to use signed where unsigned is called for. That's a kludge
to make up for a deficiency in the language. Eliminating unsigned
integers necessarily means that some code will be inelegant, for the
language lacks such expressibility. Now THAT is indeed a restriction. (I
like the way this post is going!)
I agree Java is not ideal, but I doubt there would be many cases where
you would need to implement a workaround because of the lack of large
unsigned integers.

Again, rarity and importance are orthogonal. It's not a quantification
issue.
 

Noah Roberts

I see the same problem with g++ v4.3.4 (ideone.com):

#include <iostream>

int main()
{
    int n[2][2] = { { 11, 12 }, { 21, 22 } };
    std::cout << n[0][1] << " " << n[1][-1]; // doesn't output "12 12"
}

outputs:

12 -1074688900

http://ideone.com/YgTvJ

/Leigh

n[0][3] isn't necessarily 21 either. In C and in C++ you're not
guaranteed that arrays within arrays follow directly after the end of
the previous one; an implementation can pad, etc. The only things you
are guaranteed are that the first element of the first array is at the
beginning and that the whole thing is composed of a contiguous chunk of
memory.

You're not allowed to access elements that are not within your array.
Sub-arrays are no exception to this. Once you start trying to treat
n[2][2] as being the same as n[4] you're entering into undefined
behavior. It may or may not work as you expect it to.
 

MikeP

Alf said:
And vice versa, using unsigned where signed is a better fit (namely
for numerical stuff) loses semantic value and can far too easily
introduce bugs.

Too broad a categorization for my liking, and it fails to recognize
more specific cases (subsets) of numbers.
Well, until you've measured performance.

There is an abstraction cost.

A lot of code isn't performance-critical and you can switch over to
built-in types in release builds. I do this, but not in a big way such as
encapsulating ALL primitive integers, which is what I was pondering
doing. Then, using a primitive type becomes an optimization and using
one where not necessary becomes premature optimization. Safety first.
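
A minimal sketch of the kind of wrapping pondered here (the name
NonNegative and its interface are invented for the illustration): a
signed representation underneath, so a range error trips an assertion
instead of silently wrapping, which is essentially what the tcmalloc
note recommends.

#include <cassert>

// Hypothetical wrapper: a non-negative integer with a checked invariant.
class NonNegative
{
public:
    explicit NonNegative(int v) : value_(v) { assert(value_ >= 0); }

    NonNegative& operator-=(int rhs)
    {
        value_ -= rhs;
        assert(value_ >= 0); // catch the range error; don't wrap
        return *this;
    }

    int get() const { return value_; }

private:
    int value_;
};

int main()
{
    NonNegative n(5);
    n -= 3;       // fine: value is now 2
    // n -= 10;   // would fire the assertion instead of wrapping
    return 0;
}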

It isn't as semantically rich because it does not recognize the subset of
numbers that are positive.
This does not follow from the above.

On the contrary, the Google coder's note is in complete agreement
with what you've written above, namely, using unsigned and signed
where each is natural.

Where that line of "natural" falls is perhaps drawn subtly, but it is
surely subjective.
 

Noah Roberts

I see the same problem with g++ v4.3.4 (ideone.com):

#include <iostream>

int main()
{
    int n[2][2] = { { 11, 12 }, { 21, 22 } };
    std::cout << n[0][1] << " " << n[1][-1]; // doesn't output "12 12"
}

outputs:

12 -1074688900

http://ideone.com/YgTvJ

/Leigh

n[0][3] isn't necessarily 21 either. In C and in C++ you're not
guaranteed that arrays within arrays follow directly after the end of
the previous one; an implementation can pad, etc. The only things you
are guaranteed are that the first element of the first array is at the
beginning and that the whole thing is composed of a contiguous chunk of
memory.

You're not allowed to access elements that are not within your array.
Sub-arrays are no exception to this. Once you start trying to treat
n[2][2] as being the same as n[4] you're entering into undefined
behavior. It may or may not work as you expect it to.

Thanks; that is what I was looking for: confirmation of what I believe,
that using negative array indices is undefined behaviour. I would like a
quote from the standard to back it up though, and the best I can find is:

"if E1 is an array and E2 an integer, then E1[E2] refers to the E2-th
member of E1"

In this case there's probably a note somewhere about it, but
technically speaking such things are not required to be in the standard.
Unless there's a specific bit of text guaranteeing a certain behavior,
or labeling a behavior as one of the many XXXX-defined classifications,
the behavior is undefined. So unless there's text in the standard
specifically stating that the size of an array is the size of its
elements times their count AND that there can be no padding between
sub-arrays, an implementation is free to do whatever.

Looking in 8.3.4 I don't see anything that explicitly guarantees this.
Some of the text in 8.3.4/8 could be seen to imply that I'm in error,
but I don't think so. The text in question is:

"int x[3][5]; ...then x+i is converted to the type of x, which involves
multiplying i by the length of the object to which the pointer points,
namely five integer objects."

The, "...namely five integer objects," is the only part that might be
construed to imply that x[0][5] is a valid expression and that it is the
same as x[1][0]. Experience and previous conversations wrt C make me
believe this is not meant to be a guarantee regarding the current topic.
 

Alf P. Steinbach /Usenet

* Noah Roberts, on 14.03.2011 20:59:
In C and in C++ you're not guaranteed that
arrays within arrays follow directly after the end of the previous; an
implementation can pad, etc...

Happily that's incorrect.

C++98 §5.3.3/2
... the size of an array of n elements is n times the size of an element.

This size requirement does not allow for any padding, other than padding
within the elements themselves (and that padding is then part of the
element size).

For an array of m arrays of n elements, §5.3.3/2 defines the size of
each inner array as n times the size of an element, and then the size of
the outer array as m times the size of an inner array, i.e. m*n times
the element size, which allows no padding.
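
The requirement is easy to check mechanically; a small sketch (using
C++11 static_assert purely for convenience):

#include <cstddef>

int main()
{
    // The sizeof identities below follow from the §5.3.3/2 rule above.
    static_assert(sizeof(int[2]) == 2 * sizeof(int),
                  "inner array: no padding");
    static_assert(sizeof(int[2][2]) == 2 * sizeof(int[2]),
                  "outer array: no padding between sub-arrays");
    static_assert(sizeof(int[2][2]) == 4 * sizeof(int),
                  "m*n elements in total");
}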


Cheers & hth.,

- Alf
 

James Kanze

If you investigate the tcmalloc code (by Google), you will find the
following warning:
// NOTE: unsigned types are DANGEROUS in loops and other arithmetical
// places. Use the signed types unless your variable represents a bit
// pattern (eg a hash value) or you really need the extra bit. Do NOT
// use 'unsigned' to express "this value should always be positive";
// use assertions for this.
Is it just their idiom? What's the problem with using unsigned ints in
loops (it seems natural to do so)? Are C++ unsigned ints "broken"
somehow?

They've exaggerated the problems, greatly IMHO. But in general,
most (but not all) C++ experts only use unsigned types when
they need the modulo arithmetic (e.g. hash codes), or when they
will be using bitwise operators. For various reasons, the
unsigned types in C/C++ are slightly broken, and don't work well
in the usual contexts.
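
A small sketch of the broken interaction meant here (my example, not
from the post): the usual arithmetic conversions silently convert the
signed operand to unsigned.

#include <iostream>

int main()
{
    int i = -1;
    unsigned u = 1;

    // The usual arithmetic conversions turn i into unsigned, so -1
    // becomes UINT_MAX and the comparison is false, not true.
    std::cout << std::boolalpha << (i < u) << '\n'; // prints "false"

    // The same conversion applies in arithmetic: the result is a huge
    // unsigned value, not the -3 that signed math would give.
    std::cout << i + u - 3u << '\n';
}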
 

Paul

MikeP said:
Of course it is a "restriction", but more appropriately, it is "more
specific". That is what immediately upon inspection gives more semantic
value. It says, "I'm one of THOSE kinds of things".

Given the discussion in this thread and the links, I don't see using
signed overshadowing the semantic correctness of unsigned when natural to
do so. Whichever you choose though, consistency is key.
I agree; I like consistency. If your code is consistent it should be
easy to read whatever technique you adopt.
It's the oddball inconsistencies that would be hard to detect.
Then you are probably one of those who adhere to the alternative of
using signed even though it seems semantically awkward (to me). As long
as idioms are used consistently, it's fine. Note that the TCMalloc code
did the right thing by indicating with a comment which idiom their code
should follow (whether all the developers adhere to it or not, I don't
know).
I used to be a big fan of unsigned, and I thought it was best used for
loop counters, array indexes etc. But my opinions have changed and I now
think signed is better. I think I understand the pros and cons of each
technique.
typedef UINT was the first piece of code I used to write in my programs :)
Well, rarity and importance are orthogonal concepts. The choice not to have
unsigned types in Java, to me, seems like over-simplification and a
"policy over mechanism" mindset: "We shouldn't have knives to slice bread
because using a knife opens the potential for cutting a finger". But of
course kids shouldn't have access to knives. Kinda makes Java a toy for
kids! (Just kidding, I couldn't resist that last part).
:)


Not without compromise in the design: you'd have to double the width of
the integer to use signed where unsigned is called for. That's a kludge
to make up for a deficiency in the language. Eliminating unsigned
integers necessarily means that some code will be inelegant, for the
language lacks such expressibility. Now THAT is indeed a restriction. (I
like the way this post is going!)
But I think if a program needs an unsigned integer larger than 2^16,
it's for a very specific use. Normally numbers of that magnitude are
represented with doubles. I think most of these cases would be worthy of
a UDT, which is why I was thinking of using unsigned 16s to construct a
large user-defined integer type. I know it's a massive pain and would be
much simpler if there were an unsigned integer, but do it once and it's
re-usable.
Again, rarity and importance are orthogonal. It's not a quantification
issue.
I agree that Java would be better with unsigned bytes and integers, but
I also respect their reasons for omitting them. It's a much simpler
language than C++ and each has its pros and cons. It's just another
level IMO: asm -> C++ -> Java.
 

James Kanze

On Mar 13, 2:07 am, "Alf P. Steinbach /Usenet" <alf.p.steinbach
(e-mail address removed)> wrote:

[...]
Not in the sense that you're apparently asking about, that is,
there is not anything broken about e.g 'unsigned' itself. But
as part of a willy-nilly broken type hodge-podge inherited
from C, yes, it's broken. That's because implicit conversions
that lose information are all over the place.

I think that's the best explanation I've seen to date. It's not
unsigned per se which is broken, it's the whole way integral
types interact (and implicitly convert) which is broken. (This
is why most of the real experts, starting with Stroustrup, use
int unless really forced to do otherwise.)
 

James Kanze

On 13/03/2011 19:29, Alf P. Steinbach /Usenet wrote:
Bullshit. Using unsigned integral types to represent values
that are never negative is perfectly fine. std::size_t
ensures that the C++ horse has already bolted as far as trying
to avoid them is concerned.
Plain idiocy is eschewing the unsigned integral types in C++.

And one of our resident trolls pipes up...

There are arguments for both sides. Overall, the arguments
against using unsigned for arithmetic values seem to prevail:
at least with most of the C++ experts. But of course, as a
resident troll, Leigh rather likes the idea of categorizing
the opinions of Stroustrup (and Kernighan and Ritchie in C) as
"plain idiocy".
 

James Kanze

Obviously one can do the following:
int main()
{
    int n[2][2] = { { 11, 12 }, { 21, 22 } };
    std::cout << n[1][-1]; // outputs "12"
}
[...]
So IMO it is an open question as to whether the above code is UB.

Actually, it's not an open question. It's undefined behavior.
(At least in C, but I would hope that C++ follows C here.)
 

Noah Roberts

There are arguments for both sides. Overall, the arguments
against using unsigned for arithmetic values seem to prevail:
at least with most of the C++ experts. But of course, as a
resident troll, Leigh rather likes the idea of categorizing
the opinions of Stroustrup (and Kernighan and Ritchie in C) as
"plain idiocy".

Well, luckily for me, neither I nor most of the people I've ever worked
with are "experts", and so we're completely free to disagree with all the
gods, their saints, and wanna-be bishops.
 
