Sizes of pointers

  • Thread starter James Harris (es)

Phil Carmody

Phil Carmody said:
Nobody interested in numeric computing should be using Java anyway.

/Branch Cuts for Complex Elementary Functions, or Much Ado About Nothing's Sign/ by Kahan
/How Java's Floating-Point Hurts Everyone Everywhere/ by Kahan and Darcy

Phil
 

glen herrmannsfeldt

Stephen Sprunk said:
On 06-Aug-13 17:18, James Harris wrote:
(snip)
That's the read()/write() model.
That's the mmap() model.

Before mmap(), IBM had Locate Mode I/O. PL/I has options on
record oriented I/O statements to do it.

In the case of READ, the READ statement returns a pointer to
where the data has been read. For WRITE, you ask the system for
the address of the output buffer, put the data into it,
then have the system write it.
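The shape of locate-mode I/O is easy to mimic in C. The sketch below is purely illustrative: `struct channel` and `read_locate` are invented names, and a real implementation would transfer from a device rather than from a caller-supplied string. The point is that the "read" hands back a pointer into the system's own buffer instead of copying the record out.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical locate-mode interface: the system owns the buffer. */
struct channel {
    char buf[4096];    /* system-owned buffer */
    size_t reclen;
};

/* "Read" a record: fill the system buffer and return its address.
   memcpy() here stands in for the actual device transfer. */
const char *read_locate(struct channel *ch, const char *src, size_t n) {
    memcpy(ch->buf, src, n);
    ch->reclen = n;
    return ch->buf;    /* caller works in place; no second copy */
}
```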

-- glen
 

glen herrmannsfeldt

(snip)
I'm not sure that's accurate, because the x86 FPU has control-word
bits which can be set explicitly to change the width of in-register
precision.

It has those bits, but they don't apply to all operations.
To get the expected rounding, you have to store and reload.
Hopefully to cache, and not through to main memory, but it
is still pretty slow.
This would only cause problems when constantly mixing precisions.
Not great, but also not catastrophic.

Constantly changing also makes it harder.
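The store-and-reload trick can be sketched in C. This is a hedged illustration: whether any extra precision is actually discarded depends on the compiler keeping intermediates in x87 registers (on x86-64 with SSE arithmetic, values are already rounded to double and the store is a no-op precision-wise).

```c
/* Force a value out of any wider x87 register into 64-bit memory
   format.  Assigning through a volatile double compels the compiler
   to emit the store and reload discussed above. */
double round_to_double(double x) {
    volatile double v = x;   /* store ... */
    return v;                /* ... and reload at double precision */
}
```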

-- glen
 

James Harris \(es\)

Stephen Sprunk said:
That's the read()/write() model.


That's the mmap() model.

Yes, at least in principle. AIUI mmap() has a different focus in that it is
specified only for machines with virtual memory and is reactive. It wouldn't
work more generally but where it can be used it does avoid the extra copy to
user space.

James
 

Stephen Sprunk

Yes, at least in principle. AIUI mmap() has a different focus in that
it is specified only for machines with virtual memory and is
reactive. It wouldn't work more generally but where it can be used it
does avoid the extra copy to user space.

mmap() itself probably wouldn't work without virtual memory; I'm not
sure POSIX as a whole can exist without it. But there's probably a
similar mechanism if the OS has the luxury (in space, complexity) of
implementing more than one I/O model. Correct or not, the read()/write()
model is generally implemented first.

What do you mean by "reactive"?

S
 

Stephen Sprunk

so why do CPUs not support *at least* the operations
[+ - * / < > <= >= & | ^ not] on 8-bit unsigned, and on one other type
among 16-, 32-, 64-, or 128-bit unsigned, with its operations on unsigned??

What if the CPU doesn't _have_ 8, 16, 32, 64, and 128-bit
integers?

they not are on the wave... and i think they calculate more complex;
more $ for doing programs for them

I'm not sure what "not on the wave" means, but a properly written C
program works just fine on such machines without modification, so it
won't cost any more to write.

OTOH, forcing all programs to use certain sizes of data types, even when
not actually necessary, does mean that programs will get more expensive
to _run_ on such machines, for no good reason.
No, they don't "have to", and they didn't. And there are good
reasons that they didn't.

what is the reason?

how is it possible they don't know, for a, b in A ⊂ N, what a*b is
in A, knowing a and b?

and in case of overflow, why not follow the same arbitrary
[a*b = (a*b) % maxNumber] result...

the same for all the other mathematical operations such as or, and,
shr, etc.

what is the reason? the reason is they don't know mathematics?

You are assuming that pointers _should_ be represented as integers.
There are good reasons not to do that, and you haven't presented any
reason that they should be.

Also, operations like multiplication and division on pointers are
conceptually meaningless in the first place, so there is no need to
define them. Doing so only makes it more difficult to implement C,
which reduces the overall portability of the language itself.
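For contrast, here is a short sketch of the pointer operations C does define: subtraction of pointers into the same array, and pointer plus integer. Nothing forces pointers to behave like integers under * and /, because those operations simply aren't in the language.

```c
#include <stddef.h>

/* Defined when both pointers point into the same array. */
ptrdiff_t elements_between(const int *p, const int *q) {
    return q - p;
}

/* Defined while the result stays within (or one past) the array. */
const int *advance(const int *p, ptrdiff_t n) {
    return p + n;
}

/* p * q, p / q, p & q: not valid C -- they won't even compile. */
```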

S
 

James Harris

Stephen Sprunk said:
mmap() itself probably wouldn't work without virtual memory; I'm not
sure POSIX as a whole can exist without it. But there's probably a
similar mechanism if the OS has the luxury (in space, complexity) of
implementing more than one I/O model. Correct or not, the read()/write()
model is generally implemented first.

Yes, it is simpler to understand and teach - and traditional - but not a
good model, IMHO, if one wants best performance.
What do you mean by "reactive"?

Simply that mmap (when used for file I/O) is designed to react to access
attempts. Unless someone implements predictive logic along with it, file
reads via mmap are going to suffer from latency. The time taken to read a
disk block is relatively long and, depending on system load, could more
than negate the advantage of avoiding the extra copy to user space.

As such, mmap is a good case in point when discussing the avoidance of
memory-to-memory copies, as we were, but I don't believe it is a good model
for the performance that avoiding such copies is intended to achieve. In
other words: a good example of avoiding copying, but not the best way to
achieve performance - and avoiding memory-to-memory copies is ultimately
about getting the best performance.

James
 

Stephen Sprunk

Simply that mmap (when used for file IO) is designed to react to
access attempts. Unless someone implements predictive logic along
with it file reads via mmap are going to suffer from latency.

Same with read(), of course. Both are translated into requests for a
particular block from the disk, and both benefit from the same disk
caching techniques, especially prefetching.

However, being able to walk through files with a simple pointer rather
than making thousands (or millions) of syscalls and adding a layer of
buffering in user space translates into an enormous gain in programmer
and CPU efficiency.
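A minimal sketch of that pointer-walk style, assuming POSIX mmap() and eliding most error handling: the scan loop makes no syscalls at all.

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Count newlines in a file by walking a mapping with a plain pointer. */
long count_newlines(const char *path) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return -1;
    struct stat st;
    fstat(fd, &st);
    if (st.st_size == 0) { close(fd); return 0; }

    const char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);                     /* mapping survives the close */
    if (p == MAP_FAILED) return -1;

    long n = 0;
    for (off_t i = 0; i < st.st_size; i++)  /* no syscalls in this loop */
        if (p[i] == '\n') n++;
    munmap((void *)p, st.st_size);
    return n;
}
```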

S
 

Tim Rentsch

Bart van Ingen Schenau said:
Bart van Ingen Schenau said:
On Tue, 30 Jul 2013 11:54:03 -0700, Tim Rentsch wrote:

- Any struct pointer can be converted to any other kind
of struct pointer and back

- Any union pointer can be converted to any other kind
of union pointer and back

As a practical matter these conversions are likely to work on
most implementations, but the Standard doesn't guarantee that
they will.

Can you give an example how the DS9000 could make a conversion
between pointers to structs fail, given that the pointers involved
are required to have the same representation and alignment (and
that the intent of that requirement is to allow for
interchangeability)?

The pointers have the same alignment, but what they point to need
not have the same alignment. A simple example:

struct smaller { int x; }; // assume size == alignment == 4
struct larger { double z; }; // assume size == alignment == 8
union {
    struct smaller foo[2];
    struct larger bas;
} it;

(struct larger *) &it.foo[1]; // incorrect alignment

The last line exhibits undefined behavior, explicitly called out as
such under 6.3.2.3 p7. I expect most implementations won't
misbehave in such cases, but the Standard clearly allows them to.

You are right. That runs afoul of 6.3.2.3/7. But this variation:

struct smaller* p = &it.foo[1];
(struct larger**) &p;

must work due to 6.2.5/26.

Yes, that is one corollary to the more general statement I gave
upthread.
Naturally, in the same way that the DS9000 compiler tries its hardest
to put p in a location that is only suitable for a struct smaller*.

I think of the DS9000 line as more capricious, steering
towards undefined behavior only when it is inconvenient
or unexpected, not all the time.
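A compilable sketch of the round-trip that is guaranteed (C11 numbering for 6.3.2.3p7 assumed): using malloc'd storage, which is suitably aligned for any type, sidesteps the alignment trap in the union example above.

```c
#include <stdlib.h>

struct A { int x; };
struct B { double y; };

/* Round-trip a struct pointer through an unrelated struct-pointer
   type.  C11 6.3.2.3p7 guarantees the original value comes back,
   provided the pointer is suitably aligned for the target type --
   which malloc'd storage always is. */
struct A *round_trip(struct A *pa) {
    struct B *pb = (struct B *)pa;   /* legal: alignment satisfied */
    return (struct A *)pb;           /* converts back to the same value */
}
```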
 

Tim Rentsch

Robert Wessel said:
glen herrmannsfeldt said:
On 08/01/2013 11:45 AM, Bart van Ingen Schenau wrote:

(snip)
Can you give an example how the DS9000 could make a conversion
between pointers to structs fail, given that the pointers
involved are required to have the same representation and
alignment (and that the intent of that requirement is to allow
for interchangeability)?

Well, the designers of the DS9000 are notorious for ignoring the
intent of the standard; some have even claimed that they go out
of their way to violate the intent.

Some years ago I was wondering about the possibility of generating
JVM code as output of a C compiler. [snip elaboration]

Doesn't anyone bother to look this stuff up? There are a handful
of existing C-to-JVM compilers or translators, the oldest more than
10 years old. Some support a full ANSI C runtime.

https://en.wikipedia.org/wiki/C_to_Java_Virtual_Machine_compilers

AFAIK, all of those take the approach of faking the C program's
address space, which makes the C programs odd little isolated
islands in a Java system.

I don't see the point of your comment. If the implementation is
conforming that's all that should matter. As far as that goes
the above characterization applies in many cases of emulators or
abstract machines, including real hardware - an underlying paging
and cache memory system "fakes" a C program's address space, and
the generated "instructions" are interpreted by a microengine.
There really isn't that much difference between that and running
in a JVM.
 

Tim Rentsch

Stephen Sprunk said:
You seem to be agreeing with me that if the function's 64-bit code
address and 64-bit data address were combined into a single 128-bit
value, as was reportedly done in the first Itanic implementations, that
could be reasonably called a 128-bit function pointer.

Not quite. If a particular C implementation has a 128-bit object
representation for (some) function pointers, whereever those 128
bits might come from, then it's appropriate to refer to such a
representation as a function pointer, when speaking in the context
of that implementation. But when speaking in a different context,
either a different implementation (with non-128-bit FPs) or outside
of C altogether, the same representation should not be called a
function pointer, because it is not one in that context.
Yes, Itanic today has a layer of indirection so that function pointers
can be 64-bit and therefore converted to/from void*, but that was my
point: the standard allows function pointers to be incompatible with
object pointers, [snip elaboration]

I was responding only to the statement about terminology.
 

Tim Rentsch

Stephen Sprunk said:
Stephen Sprunk said:
Most importantly, though, [x86-64] pointers must be sign-extended,
rather than zero-extended, when stored in a 64-bit register. You
could view the result as unsigned, but that is counter-intuitive
and results in an address space with an enormous hole in the
middle. OTOH, if you view them as signed, the result is a single
block of memory centered on zero, with user space as positive and
kernel space as negative. ... [snip unrelated]

IMO it is more natural to think of kernel memory and user memory
as occupying separate address spaces rather than being part of
one combined positive/negative blob; having a hole between them
helps rather than hurts. If you want to think kernel memory as
"negative" and user memory as "positive", and contiguous so a
pointer being decremented in user space falls into kernel space,
you are certainly welcome to do that, but don't insist that
others have to share your perceptual bias.

It's not _my_ perceptual bias;

Sure it is, just as we might speak of your political views or
your religious beliefs. Just because other people hold the same
views or beliefs you do, even if some held them before you did,
doesn't mean they aren't yours also.
it's how the architects thought of it,
and their view yet survives in various docs, e.g. from GCC's manual:

-mcmodel=kernel
Generate code for the kernel code model. The kernel runs in the
negative 2 GB of the address space. This model has to be used for Linux
kernel code.

This sounds like an appeal to authority. I agree one can
take the point of view that pointers with the high bit
set are "negative". It's still just a point of view.
It's also evidenced by the use of sign extension, which only
makes sense for signed values, rather than zero extension, which
would logically have been used for unsigned values.

Just because a bit value is replicated or "extended" doesn't
mean the bit in question has to be a sign bit.
Finally, the valid values of an x86-64 memory address are:

kernel: 0xffff8000,00000000 to 0xffffffff,ffffffff [-256TB to 0)
user: 0x00000000,00000000 to 0x00007fff,ffffffff [0 to +256TB)

which leads to an obvious reimagining of x86 addresses:

kernel: 0x80000000 to 0xffffffff [-2GB to 0)
user: 0x00000000 to 0x7fffffff [0 to +2GB)

which, when sign-extended, evinces an elegance that simply cannot be
explained away as mere perceptual bias:

kernel: 0xffffffff,80000000 to 0xffffffff,ffffffff [-2GB to 0)
user: 0x00000000,00000000 to 0x00000000,7fffffff [0 to +2GB)

The later invention of "canonical form", exactly equivalent to sign
extension but studiously avoiding use of the heretical term "sign",
smacks of Church officials who insisted on the other planets having
wobbly orbits around Earth in stark defiance of the laws of physics
because their dogma was incompatible with the far more elegant (and
correct) view that the planets actually orbit the Sun.

Compare our positions. My position is that pointers are
intrinsically neither signed nor unsigned, and can be viewed as
either; I have an opinion about which view is more natural but
acknowledge it is my opinion and others may have a different
opinion. Your position is that pointers should be viewed as
signed, and that that view is the only sensible one (as evidenced
by words like "elegant", "correct", etc, in the above passage).

Sounds to me like you're the one being dogmatic.
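For what it's worth, the "canonical form" test can be written either way; the check below treats it purely as bit replication, whatever one chooses to call the replicated bit. (It assumes the usual arithmetic right shift on signed integers, which the C standard leaves implementation-defined but every mainstream compiler provides.)

```c
#include <stdbool.h>
#include <stdint.h>

/* An x86-64 address is "canonical" when bits 63..48 all equal bit 47,
   i.e. the 48-bit address is extended -- sign-extended, if one is
   willing to use the word -- to 64 bits. */
bool is_canonical(uint64_t addr) {
    int64_t sext = (int64_t)(addr << 16) >> 16;  /* replicate bit 47 */
    return (uint64_t)sext == addr;
}
```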
 

Tim Rentsch

glen herrmannsfeldt said:
I agree, but ... being able to store a pointer in 32 bits
when you don't need to address more is reasonable. Having parts
of the OS addressable, for example shared libraries usable by all,
is also convenient.

That has no bearing on my comment. Having separate address
spaces doesn't preclude any variety of representations that
might address memory in either address space.
Sign extending a 32 bit integer allows it to specify the lower 2GB
and top 2GB of a 64 bit address space.

Yes, _if_ you think of the two address spaces as a combined
positive/negative blob. I prefer to think of those address
spaces as separate.
That makes more sense than saying that it is a 32 bit number,
except the high bit has an unusual positive place value.

There's nothing wrong with having that opinion, as long
as you understand that it is nothing more than opinion.
 

Malcolm McLean

I don't see the point of your comment. If the implementation is
conforming that's all that should matter. As far as that goes
the above characterization applies in many cases of emulators or
abstract machines, including real hardware - an underlying paging
and cache memory system "fakes" a C program's address space, and
the generated "instructions" are interpreted by a microengine.

There really isn't that much difference between that and running
in a JVM.
You can emulate any Turing-equivalent language with any other. So you can
write a 6502 (or whatever) processor emulator in Java, take a C to 6502
compiler, and you've got a conforming C system. But it's a horrible solution.
You can't easily pass information from the rest of your system into the
little 6502 emulator. You can't take advantage of a lot of the modern
processor's instructions, and you're going through layer after layer of
indirection, so it'll probably run at about the speed of a 1980s vintage BBC
computer, on hardware with a hundred times faster processor.

Performance matters. A Java Virtual machine is bad enough, a virtual machine
within a virtual machine - well yes, if you're actually writing a Beeb emulator
so you can play "Planetoid". But not as a general solution.
 

glen herrmannsfeldt

Tim Rentsch said:
Robert Wessel <[email protected]> writes:

(snip, someone wrote)
I don't see the point of your comment. If the implementation is
conforming that's all that should matter.

Maybe in theory, but not in practice.
As far as that goes
the above characterization applies in many cases of emulators or
abstract machines, including real hardware - an underlying paging
and cache memory system "fakes" a C program's address space, and
the generated "instructions" are interpreted by a microengine.
There really isn't that much difference between that and running
in a JVM.

Say you build a system that nicely runs C programs, follows the C
standard, but they run 1000 times slower than similar programs
written in another language. Theory is fine, and maybe a few
people will find use for it, but most won't.

At 10 times slower, you are likely to find some takers.
It is common for microprogrammed machines to run about 10
microinstructions per user instruction. That is fast enough,
and the underlying hardware advantages large enough, to actually
use.

In the case of C and Java, it might be nice to be able to
call a C function from Java. It might be that I have to do it
a little differently, and it might run a little slower, but it
should be possible, and not run 1000 times slower.

-- glen
 

glen herrmannsfeldt

Tim Rentsch said:
Most importantly, though, [x86-64] pointers must be sign-extended,
rather than zero-extended, when stored in a 64-bit register.
(snip)
It's not _my_ perceptual bias;
Sure it is, just as we might speak of your political views or
your religious beliefs. Just because other people hold the same
views or beliefs you do, even if some held them before you did,
doesn't mean they aren't yours also.

OK, say you have a machine with user and system address space
such that the origin of system address space was 6747653589
bytes from the origin of user address space, and the high bit
of the address allowed one to address that space. In that case,
there is no advantage to considering it as signed. Special
instructions could be supplied to allow one to address parts
of the larger space using smaller pointers. (Strangely, this is
reminding me of the PDP-10 lowseg/highseg addressing.)
This sounds like an appeal to authority. I agree one can
take the point of view that pointers with the high bit
set are "negative". It's still just a point of view.

The convenience of mapping to the bottom and top of the address
space is that ordinary twos complement sign extension allows one
to address the two regions using a smaller pointer, using ordinary
twos complement instructions.
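That mapping is a one-liner in C. A hedged sketch (converting 0x80000000 to int32_t is implementation-defined before C23, though every two's complement target wraps it to a negative value):

```c
#include <stdint.h>

/* Sign-extending a 32-bit value reaches both the bottom 2 GB and the
   top 2 GB of a 64-bit address space using ordinary two's complement
   machinery -- the convenience described above. */
uint64_t extend32(uint32_t p) {
    return (uint64_t)(int64_t)(int32_t)p;   /* ordinary sign extension */
}
```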

One could provide a new set of instructions that did exactly
the same mapping function. There might even be some real reasons
for doing it. (The addressing system of IBM z/Architecture allowing
mixing of 24, 31, and 64 bit addressing comes to mind.)
Just because a bit value is replicated or "extended" doesn't
mean the bit in question has to be a sign bit.

This is true. And for that matter, one does not have to use
twos complement representation. Or one might use twos
complement on a processor that didn't provide much support
for it. (As I remember, the 8080 doesn't provide a twos
complement 16 bit compare instruction, so programs have to go
to extra work to provide for the operation.)

(snip)

The convenience is that if you consider the pointers as signed,
and allow for the use of ordinary sign extension things work out
nicely.
Compare our positions. My position is that pointers are
intrinsically neither signed nor unsigned, and can be viewed as
either; I have an opinion about which view is more natural but
acknowledge it is my opinion and others may have a different
opinion. Your position is that pointers should be viewed as
signed, and that that view is the only sensible one (as evidenced
by words like "elegant", "correct", etc, in the above passage).

There are other mappings you can make between address bits and
the numerical value of the address. On a 16 bit byte addressed
machine, you could consider all addresses as word addresses, with
the least significant bit being half a word. (You might do that
if the byte had never been invented.)

Even more, you might have a machine addressing 16 bit words,
where the high bit addresses bytes within words. You could even
still call that the sign bit, and consider what that does to
the addresses of consecutive bytes.
Sounds to me like you're the one being dogmatic.

So, yes, you can consider any mapping between numerical values
and addresses that you like. In the case of different sized pointers,
considering the top of the address space negative is convenient,
but not required. Much of the way we look at addressing is
for convenience.

-- glen
 

glen herrmannsfeldt

Stephen Sprunk said:
On 08-Aug-13 01:40, James Harris wrote:

(snip, someone wrote)
Same with read(), of course. Both are translated into requests for a
particular block from the disk, and both benefit from the same disk
caching techniques, especially prefetching.
However, being able to walk through files with a simple pointer rather
than making thousands (or millions) of syscalls and adding a layer of
buffering in user space translates into an enormous gain in programmer
and CPU efficiency.

Much of the reason for this is that processor speeds have increased
much faster than I/O speeds over the years, and also available
memory sizes have increased.

When computer main memory was 16K bytes, and those bytes cost a
few dollars each, there was no thought to do disk caching and
prefetching. Double buffering, so one can do I/O to one while
processing the other, goes pretty far back, though. OS/360
had the BUFNO parameter in the DCB so one can specify more than
two buffers.

Note also that IBM S/360 I/O hardware allows one to write disk
blocks of arbitrary sizes, up to the track length of the device.

That allows one to avoid a copy much more easily than with the
512-byte blocks commonly used today.

-- glen
 

Stephen Sprunk

Sure it is, just as we might speak of your political views or your
religious beliefs. Just because other people hold the same views or
beliefs you do, even if some held them before you did, doesn't mean
they aren't yours also.


This sounds like an appeal to authority. I agree one can take the
point of view that pointers with the high bit set are "negative".
It's still just a point of view.

The presumed invalidity of an appeal to authority comes from the
tendency for the authority to be a false (or at least disputed) one.

OTOH, if I were to invent a new ISA/ABI and declared that my pointers
were unsigned, then it would be completely valid for you to cite me in a
debate over signedness. That is what I'm doing here: the people who
actually designed the ISA/ABI said that pointers are signed, and they
_do_ have the authority to say that. As the undisputed inventors, they
are correct by definition.
Just because a bit value is replicated or "extended" doesn't mean the
bit in question has to be a sign bit.

There are only two relevant ways to extend a value when stored into a
larger register: sign extension and zero extension. In fact, elsewhere
in the documentation you will find dozens of examples saying that when a
32-bit value is stored into a 64-bit register, it is sign-extended
unless a special zero-extension instruction is used. The docs even tell
you not to use the zero-extension instruction for pointers; you are
supposed to use the default, which is sign extension--but it's not
called that, because that would be heresy. "Everybody knows" that
pointers can't be signed, right?
Compare our positions. My position is that pointers are
intrinsically neither signed nor unsigned, and can be viewed as
either; I have an opinion about which view is more natural but
acknowledge it is my opinion and others may have a different opinion.
Your position is that pointers should be viewed as signed, and that
that view is the only sensible one (as evidenced by words like
"elegant", "correct", etc, in the above passage).

Sounds to me like you're the one being dogmatic.

I acknowledged that there are two views. That one of them is simpler,
more elegant and more self-consistent is objectively true. You can
argue that doesn't automatically make it the correct one, but that
ignores Occam's Razor--and a valid appeal to authority.

S
 

Stephen Sprunk

Stephen Sprunk said:
[Both mmap() and read()] are translated into requests for a
particular block from the disk, and both benefit from the same
disk caching techniques, especially prefetching.

However, being able to walk through files with a simple pointer
rather than making thousands (or millions) of syscalls and adding a
layer of buffering in user space translates into an enormous gain
in programmer and CPU efficiency.

Much of the reason for this is that processor speeds have increased
much faster than I/O speeds over the years, and also available memory
sizes have increased.

When computer main memory was 16K bytes, and those bytes cost a few
dollars each, there was no thought to do disk caching and
prefetching.

That makes sense. As CPU speeds have increased, it seems like system
design has evolved into a single-minded exercise of hiding/eliminating
data latency so they don't spend all their time stalled.

Even in mobile chips, where power consumption is often more important
than raw speed, we're now throwing cache at relatively slow RAM and
flash so that the CPU can do its work and power off as quickly as
possible rather than stall in a high-power state.
Double buffering, so one can do I/O to one while processing the
other, goes pretty far back, though. OS/360 had the BUFNO parameter
in the DCB so one can specify more than two buffers.

The read() model really doesn't accommodate that, though, unless you
have one thread per buffer. You have to move to aio_read() for that to
work with a single thread, and AIO is sufficiently painful that most
programmers seem to prefer multiple threads--and the synchronization
problems that introduces.

Was explicit double-buffering on OS/360 related to batch processing and
the need to keep the CPU busy with a single job, whereas more modern
systems have roots in time-sharing systems?

S
 

glen herrmannsfeldt

(snip on the evolution of I/O systems)
That makes sense. As CPU speeds have increased, it seems like system
design has evolved into a single-minded exercise of hiding/eliminating
data latency so they don't spend all their time stalled.

The usual disks for S/360 and S/370 ran at 3600 RPM. Not so many years
ago, that was still common. Now 7200 RPM is reasonably common.
Bytes per track have increased, and with them the transfer rate, but
latency not so much.
Even in mobile chips, where power consumption is often more important
than raw speed, we're now throwing cache at relatively slow RAM and
flash so that the CPU can do its work and power off as quickly as
possible rather than stall in a high-power state.
The read() model really doesn't accommodate that, though, unless you
have one thread per buffer. You have to move to aio_read() for that to
work with a single thread, and AIO is sufficiently painful that most
programmers seem to prefer multiple threads--and the synchronization
problems that introduces.

For the read() model, buffering should be done by the system, with
the assumption of sequential access. When access isn't sequential,
it won't help much.

But I do remember first learning about C's character oriented
(getchar()/putchar()) I/O, and wondering about how efficient
it could be.
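The answer, then and now, is stdio's user-space buffer: most per-character calls just index into a buffer, and the OS sees one read() per refill. A small sketch (the setvbuf size is only a hint to the implementation):

```c
#include <stdio.h>

/* Count characters in a file one at a time.  Despite the per-character
   loop, fgetc() rarely makes a syscall: stdio refills its buffer in
   8 KB chunks and serves characters from user space. */
long count_chars(const char *path) {
    FILE *f = fopen(path, "r");
    if (!f) return -1;
    setvbuf(f, NULL, _IOFBF, 8192);  /* full buffering, 8 KB chunks */
    long n = 0;
    while (fgetc(f) != EOF)          /* no syscall per character */
        n++;
    fclose(f);
    return n;
}
```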
Was explicit double-buffering on OS/360 related to batch processing and
the need to keep the CPU busy with a single job, whereas more modern
systems have roots in time-sharing systems?

I believe double buffering went back to single task systems before
OS/360, such as those on the 7090. So, yes, and to when the processor
and I/O timing made it about right. I believe records were unblocked
at that time. A program might read 80 character cards, or 80 character
records off tape.

OS/360 was designed for multitask batch, but not so many running
at once, as you might expect for timesharing. Smaller OS/360 systems
would run unblocked, but larger ones would block records. Usual in
many cases was a 3520 byte block of 44 records, 80 bytes each.
Programs would process them 80 bytes at a time, but the I/O system
at 3520 bytes. That was half track on a 2314 disk, and reasonably
space efficient on 9 track tape. (9 track tape has an inter-block
gap of 0.6 inch at 800 or 1600 bytes/inch.)

OS/360 was close to the beginning of device independent I/O.
Programs could be written independent of the actual I/O device,
which would be selected in JCL when the program was run.
Unlike the unix character oriented model, it was record oriented,
but those records could be cards, paper tape, magnetic tape, or
magnetic disk.

That goes along with early Fortran having separate I/O statements
for cards, drums, and tape. With Fortran IV, and as standardized
in Fortran 66, programs use unit numbers, the actual device being
hidden from the program.

Similar to unix stdin and stdout, OS/360 (and still with z/OS) programs
use a DDNAME to select a device. The DDNAME is an 8 character name,
with common ones being SYSIN and SYSPRINT, somewhat corresponding
to unix stdin and stdout. (SYSIN often has 80 character records,
and SYSPRINT often 133, including carriage control.)

-- glen
 
