Making Fatal Hidden Assumptions


Stephen Sprunk

Andrew Reilly said:
> Fine. Where can I find that? Can we make a sub-standard that includes
> the common semantics for all "normal-looking" architectures, so that our
> code can rely on them, please?

The problem is that there's really no such thing as a "normal-looking"
architecture. Every implementation differs in at least a few fundamental
things you'd find it useful to nail down, so to provide enough detail to be
meaningful your sub-standard would basically be defining the behavior of a
particular implementation.

Just about the only thing that all modern machines agree on is CHAR_BIT == 8
(and I bet someone will dispute that). Ints, pointers, address space
semantics, etc. are all up for grabs, and that's a good thing -- it allows
systems to evolve in useful ways instead of locking us into something that,
while appearing optimal today, is not likely to be tomorrow.

If you disagree, please list all of the various currently-undefined
behaviors you want to define and what implementations conform to your spec.
Who knows, ISO might adopt it...

S

--
Stephen Sprunk "Stupid people surround themselves with smart
CCIE #3723 people. Smart people surround themselves with
K5SSS smart people who disagree with them." --Aaron Sorkin

 

Jordan Abel

> The problem is that there's really no such thing as a "normal-looking"
> architecture. Every implementation differs in at least a few
> fundamental things you'd find it useful to nail down, so to provide
> enough detail to be meaningful your sub-standard would basically be
> defining the behavior of a particular implementation.
>
> Just about the only thing that all modern machines agree on is
> CHAR_BIT == 8 (and I bet someone will dispute that). Ints, pointers,
> address space semantics, etc. are all up for grabs, and that's a good
> thing -- it allows systems to evolve in useful ways instead of locking
> us into something that, while appearing optimal today, is not likely
> to be tomorrow.
>
> If you disagree, please list all of the various currently-undefined
> behaviors you want to define and what implementations conform to your
> spec. Who knows, ISO might adopt it.

How about defining passing a positive signed int to printf's %x, %u, or %o,
or an unsigned int < INT_MAX to %d? That doesn't seem too unreasonable.
It works fine on every existing platform, as far as I know, and it is
currently required to work for user-created variadic functions.
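For instance, a small sketch of the kind of calls in question (every value
involved is representable as both int and unsigned int):

#include <stdio.h>

int main(void)
{
    int i = 42;           /* positive signed int */
    unsigned int u = 42;  /* unsigned value not exceeding INT_MAX */

    printf("%x %u %o\n", i, i, i);  /* int passed where %x/%u/%o expect unsigned int */
    printf("%d\n", u);              /* unsigned int passed where %d expects int */
    return 0;
}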
 

Keith Thompson

Stephen Sprunk said:
> Just about the only thing that all modern machines agree on is
> CHAR_BIT == 8 (and I bet someone will dispute that). Ints, pointers,
> address space semantics, etc. are all up for grabs, and that's a good
> thing -- it allows systems to evolve in useful ways instead of locking
> us into something that, while appearing optimal today, is not likely
> to be tomorrow.

I don't know of any modern hosted implementations with CHAR_BIT > 8
(though there have certainly been such systems in the past, though I
don't know whether any of them had C compilers), but CHAR_BIT values
of 16 and 32 are common on DSPs (Digital Signal Processors).
 

Dik T. Winter

> This restriction (undefined semantics IS a restriction) makes
> pointer-walking versions of algorithms second-class citizens to otherwise
> equivalent indexed versions of algorithms.
>
> void
> foo(int *p, int n)
> {
>     for (; --n >= 0;)
>         p[n] = n;
> }
>
> is "legal" and "defined" on all architectures, but the equivalent with a
> pointer cursor isn't:
>
> void
> foo(int *p, int n)
> {
>     p += n-1;
>     for (; --n >= 0;)
>         *p-- = n;
> }

I have no idea what you base your assertion on. When the first is valid,
the second is valid, and vice versa. In your first example the first
assignment is to p[n-1] (using the initial value of n), and the same holds
for the second version. But it is worse:
void
foo(int *p, int n)
{
    p += n;
    for (; --n >= 0;)
        *--p = n;
}
is just as valid.
> I suppose that this is the root of my surprise and annoyance on
> discovering what the standard says. These versions *should* be
> equivalent, and equivalently well-defined.

They are.
 

Andrew Reilly

> > This restriction (undefined semantics IS a restriction) makes
> > pointer-walking versions of algorithms second-class citizens to otherwise
> > equivalent indexed versions of algorithms.
> >
> > void
> > foo(int *p, int n)
> > {
> >     for (; --n >= 0;)
> >         p[n] = n;
> > }
> >
> > is "legal" and "defined" on all architectures, but the equivalent with a
> > pointer cursor isn't:
> >
> > void
> > foo(int *p, int n)
> > {
> >     p += n-1;
> >     for (; --n >= 0;)
> >         *p-- = n;
> > }
>
> I have no idea what you base your assertion on. When the first is valid,
> the second is valid, and vice versa. In your first example the first
> assignment is to p[n-1] (using the initial value of n), and the same holds
> for the second version.

But, the second version *finishes* with p pointing to the -1st element of
the array, which (we now know) is undefined, and guaranteed to break an
AS/400. The first version only finishes with the integer n == -1, and the
pointer p is still "valid". This is the discrepancy that irks me.
> But it is worse:
> void
> foo(int *p, int n)
> {
>     p += n;
>     for (; --n >= 0;)
>         *--p = n;
> }
> is just as valid.

Yes, that one is certainly going to fly, even on the AS/400, as p doesn't
ever point to p(initial) - 1. But it is (IMO) less idiomatic than the
other construction. Certainly, different people's experience will differ,
there, and certainly, different processor architectures often have better
or worse support for one form or the other. In my experience,
post-modification is more common (or, rather, more often fast, where both
are available), but quite a few processors have no specific support for
address register increment or decrement addressing modes.
> They are.

Come again? This is the whole point that people (well, me, anyway) have
been arguing about! If they were truly equivalent (and the
non-unit-stride cousins), I'd go home happy.
 

Andrew Reilly

> The problem is that there's really no such thing as a "normal-looking"
> architecture. Every implementation differs in at least a few fundamental
> things you'd find it useful to nail down, so to provide enough detail to be
> meaningful your sub-standard would basically be defining the behavior of a
> particular implementation.

Sure there is. All the world's a VAX (but with IEEE float), with
plausible exceptions for pointers of a different length than int. I'd also
wear alignment restrictions pretty happily, as long as they're reasonable.
Either-endian word significance is fine, too. Show me a "normal-looking"
modern architecture that doesn't fit that description, in a material
sense. Even most of the DSPs developed in the last ten years fit that
mould. Mostly, so that they can run existing C code well. [The few that
have been developed in that time frame, which *don't* fit that mould, are
not intended to be programmed in C, and there's no reason to expect that
they will be.]
> Just about the only thing that all modern machines agree on is CHAR_BIT
> == 8 (and I bet someone will dispute that). Ints, pointers, address
> space semantics, etc. are all up for grabs, and that's a good thing --
> it allows systems to evolve in useful ways instead of locking us into
> something that, while appearing optimal today, is not likely to be
> tomorrow.
>
> If you disagree, please list all of the various currently-undefined
> behaviors you want to define and what implementations conform to your
> spec. Who knows, ISO might adopt it...

I'd like the pointer memory model to be "flat" in the sense that for p, a
pointer to some object, (p += i, p -= i) == p for any int i. (In a fixed
word-length, non-saturating arithmetic, the "flat" geometry is really
circular, or modulo. That's as it should be.)

[I'm not interested in arguing multi-processor or multi-thread consistency
semantics here. That's traditionally been outside the realm of C, and
that's probably an OK thing too, IMO.]

Cheers,
 

Jordan Abel

> Sure there is. All the world's a VAX (but with IEEE float),

Ironic, considering VAXen don't have IEEE float. Why not just say all
the world's a 386? Oh, wait, 386 has that segmented-addressing
silliness.
 

David Holland

>> [segmented architectures and C]
>
> How about:
>
> int a[10];
> foo(a + 1);
>
> where
>
> foo(int *p)
> {
>     p -= 1;
>     /* do something with p[0]..p[9] */
> }
>
> > [snip]
>
> Does p -= 1 still trap, in the first line of foo, given the way that it's
> called in the main routine?
>
> If not, how could foo() be compiled in a separate unit, in the AS/400
> scenario that you described earlier?

I think you're overlooking something about how segmented architectures
actually work. I'm not sure exactly what, so I'll go through this in
excruciating detail and hopefully it'll help. (I don't know the
AS/400, so this is likely wrong in detail, but the general principle
is the same.)

Going back to the code:
> int a[10];

This compiles to "get me a new segment, of size 10*sizeof(int), to
hold a."

At runtime, this results in a new segment identifier, which I'll
suppose has the value 1234. This value isn't an address; it doesn't
"mean" anything except that it's an index into some OS-level or
machine-level table somewhere. That table holds the size of the
segment; supposing sizeof(int) is 4, that size is 40.
> foo(a + 1);

This compiles to "create a pointer to a, add 1 to it, and call foo."

At runtime, the pointer to a has two parts: a segment identifier,
namely 1234, and an offset, namely 0, which I'll write as
1234:0. Adding 1 (4 at the machine level) gives 1234:4. This is within
bounds, so no traps occur.

Meanwhile,
> foo(int *p)
> {
>     p -= 1;

This compiles to "subtract 4 from the pointer p".

At runtime, when called as above, this subtraction converts 1234:4 to
1234:0. This is within bounds, and no traps occur.
> /* do something with p[0]..p[9] */

and this compiles to "add values between 0 and 36 to p and
dereference".

At runtime, these additions yield pointers between 1234:0 and 1234:36;
these are all within bounds, so no traps occur.

You'll note that the compilation of foo does not require knowing what
or how many values p points to.
> If it does trap, why? It's not forming an "illegal" pointer, even for the
> AS/400 world.

It doesn't.
> If it doesn't trap, why should p -= 1 succeed, but p -= 2 fail?

Because p -= 2, when performed on the pointer 1234:4, tries to deduct
8 from the offset field. This underflows and traps.
> What if my algorithm's natural expression is to refer to p[0]..p[-9], and
> expects to be handed a pointer to the last element of a[]?

That's fine too, because 1234:36 - 36 yields 1234:0, which is still
within bounds.

You may have noticed that the compilation I did above is not actually
standards-conforming; because of the one-past-the-end rule, the size
of the segment for the array "a" has to be one larger than the array.
Otherwise, forming the pointer to the one-past-the-end element would
trap.
> The significant difference of C, to other languages (besides the
> assembly language of most architectures) is that you can form, store, and
> use as arguments pointers into the middle of "objects". Given that
> difference, the memory model is obvious,

That doesn't follow. What I've described above allows pointing into
the middle of objects, but it doesn't yield the memory model you're
envisioning.
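If it helps, here is a rough C model of the bounds rule described above. It
is purely illustrative -- the struct, the segment table and the trap are
invented for the sketch, and are not how the AS/400 actually represents
pointers:

#include <stdio.h>
#include <stdlib.h>

/* A "fat" pointer: a segment identifier plus a byte offset. */
struct seg_ptr {
    unsigned seg;     /* index into a hypothetical segment table */
    unsigned offset;  /* byte offset within that segment */
};

/* Hypothetical segment table: segment 1 holds int a[10] with 4-byte ints,
   so its size is 40. */
static const unsigned seg_size[] = { 0, 40 };

/* Pointer arithmetic traps as soon as the offset leaves the segment,
   before any dereference happens.  An offset equal to the size is
   allowed, which models the one-past-the-end pointer. */
static struct seg_ptr seg_add(struct seg_ptr p, long delta_bytes)
{
    long off = (long)p.offset + delta_bytes;
    if (off < 0 || off > (long)seg_size[p.seg]) {
        fprintf(stderr, "trap: offset %ld out of bounds in segment %u\n",
                off, p.seg);
        abort();
    }
    p.offset = (unsigned)off;
    return p;
}

int main(void)
{
    struct seg_ptr p = { 1, 4 };  /* like 1234:4, i.e. a + 1 */

    p = seg_add(p, -4);           /* p -= 1: offset 0, within bounds */
    p = seg_add(p, -4);           /* p -= 1 again: offset would go below 0, traps */
    return 0;
}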
 

Andrew Reilly

> I think you're overlooking something about how segmented architectures
> actually work.

I think you're overlooking my assertions that I don't care how
(some) segmented architectures actually work. They are by *far* the least
significant of the processor architectures in use at the moment. Most of
the segmented architectures (by installed number: x86) are not used as
such, and even those that are do *not* have the particular behaviour with
respect to the range of the "offset" component of the non-flat pointer.

Sure, it's possible to write C code that operates within the restrictions
of such architectures. It's even easy. It is, however, not historically
representative of quite large bodies of C code. That's OK. The
architecture wasn't designed to run C code, and is not primarily coded in
C.
> Because p -= 2, when performed on the pointer 1234:4, tries to deduct
> 8 from the offset field. This underflows and traps.

And this is the behaviour that is at odds with idiomatic C. The standard
has effectively forbidden such a construction (which is well defined and
works perfectly on every other architecture, and is convenient in many
obvious algorithms) just because this one architecture traps before any
attempt is made to access memory. The onus should instead have been on
the C implementation on this particular machine to paper over this machine
defect.
> You may have noticed that the compilation I did above is not actually
> standards-conforming; because of the one-past-the-end rule, the size
> of the segment for the array "a" has to be one larger than the array.
> Otherwise, forming the pointer to the one-past-the-end element would
> trap.

Only because the machine architecture is antipathetic to C. The syntax
and behaviour of C operators offers no suggestion that symmetrical
behaviour, or non-unit steps past the end of the "object" would fail, when
that particular idiom is catered-to (by hacking the underlying object
model: over-allocating the memory segment).
> That doesn't follow. What I've described above allows pointing into the
> middle of objects, but it doesn't yield the memory model you're
> envisioning.

Just because you can use a restricted subset of the behaviour of C
"naturally" on such trap-on-point segment machines is a pretty poor
argument for restricting the semantically defined "correct" behaviour on
all other architectures to that particular subset.

Look: lots of machine architectures have restrictions such that the full
semantics of C require more effort on the part of the compiler. For
example, lots of modern processors have quite restricted range for
"fast" immediate address offsets. Does this mean that the standard
should restrict the size of stack frames and structures so that all
elements can be accessed with "fast" instructions? No, of course not. The
compiler must issue more instructions to load and use large offsets in
those situations so that larger structures can be accessed.
 

Al Balmer

> You can't "catch it and do nothing"? What are you expected to _do_ about
> an invalid or protected address being loaded [not dereferenced], anyway?
> What _can_ you do, having caught the machine check? What responses are
> typical?

Clean up, release resources, and get out.
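On a POSIX-ish system, that last-resort response might look roughly like the
sketch below (the handler and its cleanup are made up for illustration; the
AS/400 machine-check mechanism is its own thing, and only async-signal-safe
calls belong in such a handler):

#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

static void on_fault(int sig)
{
    /* Release external resources (temp files, locks, ...) if that can be
       done safely, report, and get out. */
    static const char msg[] = "fatal address fault; cleaning up\n";
    (void)sig;
    write(STDERR_FILENO, msg, sizeof msg - 1);
    _Exit(EXIT_FAILURE);
}

int main(void)
{
    signal(SIGSEGV, on_fault);  /* sigaction() would be the more robust choice */
    /* ... rest of the program ... */
    return 0;
}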
 

CBFalconer

Andrew said:
> On Fri, 24 Mar 2006 08:20:12 +0000, David Holland wrote:
> .... snip about segmented address error catching ...
> Only because the machine architecture is antipathetic to C. The
> syntax and behaviour of C operators offers no suggestion that
> symmetrical behaviour, or non-unit steps past the end of the
> "object" would fail, when that particular idiom is catered-to
> (by hacking the underlying object model: over-allocating the
> memory segment).

No, because the machine architecture is designed to catch out of
bounds addressing in hardware, without serious effects on
run-time. In fact, the architecture is very usable with C, it just
exposes gross misprogramming.

You can always use assembly code if you want no rules. However the
demonstrated architecture will even catch out-of-range addressing
there.
 

Al Balmer

> Because the ability to do so is implied by the syntax of pointer
> arithmetic.

Heh. The presence of an open balcony on the 15th floor implies the
ability to jump off.
 

Andrew Reilly

> Heh. The presence of an open balcony on the 15th floor implies the
> ability to jump off.

Sure. And it's OK to talk about it, too. No harm, no foul.

Forming a pointer to non-object space is "talking about it". Outlawing
talking about it goes against the grain of C, IMO.
 

Stephen Sprunk

Andrew Reilly said:
> I think you're overlooking my assertions that I don't care how
> (some) segmented architectures actually work. They are by *far* the least
> significant of the processor architectures in use at the moment. Most of
> the segmented architectures (by installed number: x86) are not used as
> such, and even those that are do *not* have the particular behaviour with
> respect to the range of the "offset" component of the non-flat pointer.
>
> Sure, it's possible to write C code that operates within the restrictions
> of such architectures. It's even easy. It is, however, not historically
> representative of quite large bodies of C code. That's OK. The
> architecture wasn't designed to run C code, and is not primarily coded in
> C.

Interestingly, I wasn't even aware the AS/400 did this until I started
reading comp.arch, yet in nearly 10 years of C coding I've never (even
accidentally) written code that would trigger such a trap. I thought the
rule against forming pointers outside an object made sense, and I never saw
any valid reason to do so since the alternative was always cleaner and more
obvious to me. If your pointers are never invalid, you never have to worry
if it's safe to dereference them. Then again, I set pointers to NULL after
every free() too, so you probably consider me paranoid.

I also follow several open-source projects that have AS/400 ports, and the
patches those port maintainers submit are very rarely in this area: they're
usually in the OS-specific API calls (which the Windows folks have to submit
as well), Makefile adjustments, etc.
> And this is the behaviour that is at odds with idiomatic C. The standard
> has effectively forbidden such a construction (which is well defined and
> works perfectly on every other architecture, and is convenient in many
> obvious algorithms) just because this one architecture traps before any
> attempt is made to access memory. The onus should instead have
> been on the C implementation on this particular machine to paper over
> this machine defect.

This "defect", as you so impolitely call it, is considered by the folks that
use such machines to be a feature, not a bug. Specifically, it is a feature
that _catches_ bugs.

Personally, I'd love if x86 had some automatic means to catch invalid
pointers on formation instead of catching them on access. Even the latter
isn't very effective, since it only catches pointers that are outside the
pages of _any_ valid object; it happily misses accesses not only outside the
original object but also outside of any valid object but on a valid page.

You might consider it "correct" to form invalid pointers, which I'll grant
seems to make a tiny bit of sense if you're used to algorithms that do that,
but if being unable to do that is the price one must pay to catch invalid
accesses, that's a tradeoff I'd make.
> Only because the machine architecture is antipathetic to C. The syntax
> and behaviour of C operators offers no suggestion that symmetrical
> behaviour, or non-unit steps past the end of the "object" would fail, when
> that particular idiom is catered-to (by hacking the underlying object
> model: over-allocating the memory segment).

Over-allocating the segment defeats the main purpose of the model: catching
bugs. At best, when your hack does catch a bug, you'll usually be looking
in the wrong place.

S

--
Stephen Sprunk "Stupid people surround themselves with smart
CCIE #3723 people. Smart people surround themselves with
K5SSS smart people who disagree with them." --Aaron Sorkin

 

Paul Keinanen

> Interestingly, I wasn't even aware the AS/400 did this until I started
> reading comp.arch, yet in nearly 10 years of C coding I've never (even
> accidentally) written code that would trigger such a trap. I thought the
> rule against forming pointers outside an object made sense, and I never saw
> any valid reason to do so since the alternative was always cleaner and more
> obvious to me.

After following this thread for quite a while, I still do not
understand what the problem is even with separate data and address
registers and memory access rules.

While very trivial addressing expressions (usually register indirect
or base+offset) can be handled directly by the memory access unit in
most architectures, any more complex addressing expressions (e.g.
multidimensional arrays) need a lot of integer arithmetic processing
until the final result is _moved_ into the address register for the
actual memory access.

With separate data and address registers p=s-1 and p++ could as well
be calculated in integer registers and the final result (==s) would be
transferred to the address registers for memory access.

I do not find this problematic even in segmented access, unless
saturating integer arithmetic is used.

Paul
 

websnarf

pemo said:
> Just in case it hasn't been mentioned [a rather long thread to check!], and
> might be useful, Google has an interesting summary on finding a nul in a
> word by one Scott Douglass - posted to c.l.c back in 1993.
>
> "Here's the summary of the responses I got to my query asking for the
> trick of finding a nul in a long word and other bit tricks."
>
> http://tinyurl.com/m7uw9

The earliest citation I found was from 1987 by Alan Mycroft here:

http://tinyurl.com/qtdt3

As to when some company is going to patent this technique some time in
the future, we'll just have to wait and see.
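For reference, the trick usually given in those summaries tests a 32-bit
word for a zero byte something like this (a sketch assuming 8-bit bytes;
the function name is made up):

#include <stdint.h>

/* Nonzero iff some byte of w is 0x00: subtracting 1 from each byte sets a
   byte's top bit when that byte was zero (or >= 0x80), and "& ~w" keeps
   only bytes whose top bit was clear in w, leaving bits set exactly for
   the zero bytes. */
static int has_zero_byte(uint32_t w)
{
    return ((w - 0x01010101u) & ~w & 0x80808080u) != 0;
}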
 

Andrew Reilly

> Interestingly, I wasn't even aware the AS/400 did this until I started
> reading comp.arch, yet in nearly 10 years of C coding I've never (even
> accidentally) written code that would trigger such a trap.

I do signal processing code. Working backwards through data, or in
non-unit strides happens all the time. I expect that *most* of my code,
for the last twenty years would break according to this rule. None of it
is "incorrect" (on the platforms that it runs on). None of it accesses
data outside allocated blocks of memory. It just happens that the
pointer-walking access code leaves the pointer dangling past the
allocated block, after the last access.

This rule essentially means that *p-- is an invalid access mechanism,
unless peculiar care is taken to exit loops early, while *p++ is valid,
*only* because they made a particular exception for that particular case,
because they figured that C compilers on AS/400 systems could afford to
over-allocate all arrays by one byte, so that that last p++ would not
leave the pointer pointing to an "invalid" location. That's a hack, plain
and simple.
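Spelled out, the asymmetry looks something like this (a sketch; the function
names are made up):

/* Forward walk: p finishes at a + n, the one-past-the-end pointer that
   the standard explicitly allows. */
void fill_forward(int *a, int n)
{
    int *p = a;
    while (n-- > 0)
        *p++ = 0;
}

/* Backward walk: p finishes at a - 1 (and starts there when n == 0), a
   pointer the standard says may not even be formed, although it is never
   dereferenced. */
void fill_backward(int *a, int n)
{
    int *p = a + n - 1;
    while (n-- > 0)
        *p-- = 0;
}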
> I thought the
> rule against forming pointers outside an object made sense, and I never
> saw any valid reason to do so since the alternative was always cleaner
> and more obvious to me.

Explicit index arithmetic, rather than pointer arithmetic, I guess?

See: it's not symmetrical or idempotent after all.
> If your pointers are never invalid, you never
> have to worry if it's safe to dereference them. Then again, I set
> pointers to NULL after every free() too, so you probably consider me
> paranoid.

Hey, I do that too. I just don't consider leaving a pointer dangling one
element short of an allocated array to be any less "invalid" than dangling
one element past the end. Or two elements, or some other stride.
This "defect", as you so impolitely call it, is considered by the folks
that use such machines to be a feature, not a bug. Specifically, it is
a feature that _catches_ bugs.

Sure. They write code for banks. Good for them. That machine feature
probably works beautifully in that situation, with the designed-for COBOL
or PL/1 codebase.

I write code that has to waste as few cycles as possible, and take up as
little code space as possible, and be portable to the widest variety of
chips as possible. I don't need to be unrolling the last loops of my
algorithms, just to make sure that the pointers aren't decremented that
one last time.
> Personally, I'd love if x86 had some automatic means to catch invalid
> pointers on formation instead of catching them on access. Even the
> latter isn't very effective, since it only catches pointers that are
> outside the pages of _any_ valid object; it happily misses accesses not
> only outside the original object but also outside of any valid object
> but on a valid page.

You know, you can have that, if you want it. There are plenty of people
who build C compilers and run-time environments that put all sorts of
run-time checks into your code. Or: you could use a language (like Java)
that gave you good solid guarantees that you'll get an exception if you
even try to read something out of range. And they don't have pointers as
such to worry about. Peachy keen.
> You might consider it "correct" to form invalid pointers, which I'll
> grant seems to make a tiny bit of sense if you're used to algorithms
> that do that, but if being unable to do that is the price one must pay
> to catch invalid accesses, that's a tradeoff I'd make.

It *doesn't* catch invalid accesses. If anything, some other mechanism
catches invalid accesses. Trapping on out-of-range pointer formation just
gets in the way of clean code.
> Over-allocating the segment defeats the main purpose of the model:
> catching bugs. At best, when your hack does catch a bug, you'll usually
> be looking in the wrong place.

You seem to have missed the part of the discussion where over-allocation
was the concession that the AS/400 C implementers gave to the standards
body so that the wildly more common loop-until-just-past-the-end idiom
worked OK. It's *their* hack. Bet their COBOL and PL/1 and Java
compilers don't have to over-allocate like that. They declined to
over-allocate by one element before an array because there's no telling
how large each element might be. That would be too costly. So you end up
with this pallid, asymmetric shadow of the C that might have been (and
once was).
 

Jordan Abel

> It simply doesn't make sense to do things that way since the only
> purpose is to allow violations of the processor's memory protection
> model. Work with the model, not against it.

Because it's a stupid memory protection model.

Why can't the trap be caught and ignored?
 

Stephen Sprunk

Paul Keinanen said:
> After following this thread for quite a while, I still do not
> understand what the problem is even with separate data and address
> registers and memory access rules.
>
> While very trivial addressing expressions (usually register indirect
> or base+offset) can be handled directly by the memory access unit in
> most architectures, any more complex addressing expressions (e.g.
> multidimensional arrays) need a lot of integer arithmetic processing
> until the final result is _moved_ into the address register for the
> actual memory access.

Perhaps. Or perhaps the same operations are allowed on address registers.
I don't know the AS/400 well enough to say, but I'm certain that there are
instructions to increment/decrement address registers. I'd also expect
complex addressing modes to be available on instructions using those
registers, whereas you'd have to use several arithmetic instructions on
integer registers.
> With separate data and address registers p=s-1 and p++ could as well
> be calculated in integer registers and the final result (==s) would be
> transferred to the address registers for memory access.
>
> I do not find this problematic even in segmented access, unless
> saturating integer arithmetic is used.

Considering that s is probably already in an address register, doing the
manipulation your way would require transferring it to an integer register,
doing the decrement, then doing the increment, then transferring it back to
an address register when it's needed for dereferencing. Why do that when
you can adjust the address register directly?

Also consider that pointers on such a system are likely to be longer than
the largest integer register. That means you'd have to store the pointer to
RAM, load part of it into an integer register, manipulate it, store that
part, load the other part, manipulate it, store it, and load the new pointer
back into an address register. That's a lot of work to do.

It simply doesn't make sense to do things that way since the only purpose is
to allow violations of the processor's memory protection model. Work with
the model, not against it.

S

--
Stephen Sprunk "Stupid people surround themselves with smart
CCIE #3723 people. Smart people surround themselves with
K5SSS smart people who disagree with them." --Aaron Sorkin

 

Keith Thompson

Stephen Sprunk said:
> Interestingly, I wasn't even aware the AS/400 did this until I started
> reading comp.arch, yet in nearly 10 years of C coding I've never (even
> accidentally) written code that would trigger such a trap. I thought
> the rule against forming pointers outside an object made sense, and I
> never saw any valid reason to do so since the alternative was always
> cleaner and more obvious to me. If your pointers are never invalid,
> you never have to worry if it's safe to dereference them. Then again,
> I set pointers to NULL after every free() too, so you probably
> consider me paranoid.
>
> I also follow several open-source projects that have AS/400 ports, and
> the patches those port maintainers submit are very rarely in this area:
> they're usually in the OS-specific API calls (which the Windows folks
> have to submit as well), Makefile adjustments, etc.
[...]

I asked in comp.std.c whether the AS/400 actually influenced the C
standard. Here's a reply from P.J. Plauger:

] AS/400 might have been mentioned. Several of us had direct experience
] with the Intel 286/386 however, and its penchant for checking anything
] you loaded into a pointer register. IIRC, that was the major example
] put forth for disallowing the generation, or even the copying, of an
] invalid pointer.
 
