Sizes of pointers

  • Thread starter James Harris (es)

Eric Sosman

A more recent language made a pretty serious attempt to
produce exactly the same result on all machines; "write once,
run anywhere" was its slogan.
good

Its designers had to abandon
that goal almost immediately, or else "anywhere" would have
been limited to "any SPARC." Moral: What you seek is *much*
more difficult than you realize.

I see; the only difficulty is that some computers have a char that is not 8 bits...

I'm not sure whether you're speaking of C or of the "more
recent language." The more recent language uses a 16-bit `char'. C requires
that `char' have at least 8 bits, but it can be (and has been)
implemented on systems where char is wider. Insisting on a `char'
of exactly 8 bits would make C less portable than it is, because
implementations for those machines would be difficult if not
impossible.
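
For illustration, <limits.h> exposes exactly these implementation choices; a
minimal standard-C check (only the minimums are guaranteed by the Standard):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* CHAR_BIT must be at least 8, but may be larger (9, 16, 32, ...). */
    printf("bits per char: %d\n", CHAR_BIT);

    /* Only minimum ranges are guaranteed; the exact widths vary by machine. */
    printf("int  range: %d .. %d\n", INT_MIN, INT_MAX);
    printf("long range: %ld .. %ld\n", LONG_MIN, LONG_MAX);
    return 0;
}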
All the rest can be done in software rather than in hardware.
Would it be slow?

Yes, and probably lacking in function. Emulation is very
seldom perfect: You could write a PDP-11 emulator to run on
your favorite Intel chip and run your "PDP-11-defines-C" code
on it -- but will the device drivers work? Fat chance ...

As to your second question: Yes, it would be slow.
[From operations on 8-bit unsigned values one can define operations on
16-bit, 32-bit, etc. unsigned values.
I don't know about signed, but I think it would be the same.]

It can be difficult to emulate floating-point hardware too,
but fixed-point software could be easier...

Look, Rosario, if you are under the impression that what you
suggest is even remotely practical, allow me to suggest a project.
Take a copy of the C Standard, and search through it for all
instances of implementation-defined, unspecified, or undefined
behavior. For each instance, write down exactly what your ideal C
should do.

If writer's cramp hasn't persuaded you that you're on a wild
goose chase, continue by making a list of a couple hundred C
implementations, and see how many of them conform to your improved
version of the Standard -- that is, see how many of today's C
implementations you want to see purged from the planet. Finally,
tell the maintainers of those implementations that their work has
been a waste and that they should scrap what they've done.
 

Stephen Sprunk

Look, Rosario, if you are under the impression that what you
suggest is even remotely practical, allow me to suggest a project.
Take a copy of the C Standard, and search through it for all
instances of implementation-defined, unspecified, or undefined
behavior. For each instance, write down exactly what your ideal C
should do.

If writer's cramp hasn't persuaded you that you're on a wild
goose chase, continue by making a list of a couple hundred C
implementations, and see how many of them conform to your improved
version of the Standard -- that is, see how many of today's C
implementations you want to see purged from the planet. Finally,
tell the maintainers of those implementations that their work has
been a waste and that they should scrap what they've done.

When you're finished with that, contact the thousands of vendors and
millions of customers, who collectively have billions of dollars
invested in such systems, and tell them that you want to outlaw the
software they depend on for the success of their business. See how well
that goes over.

The C standard is what it is for a reason. Until you understand that
reason--and you clearly don't yet--you are not qualified to suggest
making changes to it.

S
 

glen herrmannsfeldt

Eric Sosman said:
On 8/5/2013 1:31 PM, Rosario1903 wrote:
(snip)
... with two consequences: (1) the language will not be
available on machines where achieving the mandated result is
more trouble than it's worth, and (2) the language will self-
obsolesce as soon as new machines offer capabilities outside
the limits of what the language's original machine offered.

Note that Java requires specific sizes for its integer types,
and specific IEEE standard sizes for its floating point types.

Floating point was too varied when C originated, but the
designers of Java figured that they could require it.

It may or may not be the reason, but IBM added IEEE floating
point to ESA/390, I believe sometime after Java came out.


-- glen
 

Stephen Sprunk

Note that Java requires specific sizes for its integer types,
and specific IEEE standard sizes for its floating point types.

... making Java impractical or impossible to implement on a large number
of systems that C has supported for decades.

S
 

James Kuyper

On 08/05/2013 09:01 PM, Robert Wessel wrote:
....
Errr... Epicycles don't get you to ellipses. Epicycles
(approximately) get you to the apparent motion of the other planets
around the earth. That's not an (approximate) ellipse as viewed
from either Earth or the Sun (a viewpoint that was not actually of much
interest in the pre-Copernican era).

See <http://en.wikipedia.org/wiki/Epicycle#Mathematical_formalism>. I'm
not sure about the quote from Norwood Russell Hanson - that doesn't fit
my mathematical intuition. However, I am sure about the accuracy of the
next sentence, the one that refers to using infinitely many epicycles.
Like a Fourier series, a relatively small number of epicycles is
sufficient to provide a remarkably good fit to any sufficiently smooth
periodic curve.

I see from <http://en.wikipedia.org/wiki/Epicycle#Epicycles> that the
idea that medieval scholars used multiple epicycles to fit the motion of
the planets, while once popular, has recently been challenged. I won't
claim to know any better - the main thing of interest to me is that it
could have been done, even if it might not have been possible with the
mathematical tools available at that time.
 

Rosario1903

Yes, and probably lacking in function. Emulation is very
seldom perfect: You could write a PDP-11 emulator to run on
your favorite Intel chip and run your "PDP-11-defines-C" code
on it -- but will the device drivers work? Fat chance ...

As to your second question: Yes, it would be slow.

Some big-number libraries define multiplication, addition, division, etc. with
base 0xFFFFFFFF... on arrays of unsigned 32-bit values; in their calculations
they do not seem too slow.

So would it be any different with arrays of unsigned char, 8 bits, base 0xFF?
[From operations on 8-bit unsigned values one can define operations on
16-bit, 32-bit, etc. unsigned values.
I don't know about signed, but I think it would be the same.]

It can be difficult to emulate floating-point hardware too,
but fixed-point software could be easier...
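
For what it's worth, a minimal sketch in C of building wider arithmetic out of
8-bit pieces, which is roughly what the bracketed remark describes
(illustrative only; the function name and the little-endian limb order are my
own choices, and real bignum libraries use the widest limbs the machine offers
precisely because 8-bit limbs are slow):

#include <stdint.h>
#include <stddef.h>

/* r = a + b, where a, b, r are n-limb numbers in base 256,
   least significant limb first.  Returns the final carry. */
static unsigned add_u8_limbs(uint8_t *r, const uint8_t *a,
                             const uint8_t *b, size_t n)
{
    unsigned carry = 0;
    for (size_t i = 0; i < n; i++) {
        unsigned sum = (unsigned)a[i] + b[i] + carry;
        r[i] = (uint8_t)(sum & 0xFF);   /* low 8 bits become this limb */
        carry = sum >> 8;               /* the rest carries into the next */
    }
    return carry;
}

A 32-bit addition done this way takes four limb additions plus carry handling,
which is why it is so much slower than a single hardware instruction.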
 

Siri Cruise

James Kuyper <[email protected]> said:
On 08/05/2013 09:01 PM, Robert Wessel wrote:
...

See <http://en.wikipedia.org/wiki/Epicycle#Mathematical_formalism>. I'm
not sure about the quote from Norwood Russell Hanson - that doesn't fit
my mathematical intuition. However, I am sure about the accuracy of the
next sentence, the one that refers to using infinitely many epicycles.
Like a Fourier series, a relatively small number of epicycles is
sufficient to provide a remarkably good fit to any sufficiently smooth
periodic curve.

Epicycles on top of epicycles were accepted, but not ellipses, because the physics
they were using assumed that non-terrestrial objects were made of a fifth
element whose natural motion was circular and could continue forever. Everything on
Earth was made of four elements, whose natural motion was in straight lines and
would eventually halt.

The simpler solution was rejected because it violated the assumptions.

Kepler and then Newton came up with simpler systems by changing the assumptions so
that all objects were made of similar materials and obeyed the same rules.
 

Stephen Sprunk

Stephen Sprunk said:
OTOH, if you view [x86-64 pointers] as signed, the result is a
single block of memory centered on zero, with user space as
positive and kernel space as negative. Sign extension also has
important implications for code that must work in both x86 and
x86-64 modes, e.g. an OS kernel--not coincidentally the only code
that should be working with negative pointers anyway. [snip
unrelated]

IMO it is more natural to think of kernel memory and user memory as
occupying separate address spaces rather than being part of one
combined positive/negative blob; having a hole between them helps
rather than hurts. ...

I think it's reasonable for the application to consider its address
space to be distinct from the kernel's. OTOH, the OS guys really want
the kernel's address space to contain at least one user address
space at any given time.

My understanding is that it's common for a syscall to jump into the
kernel, move some data to/from kernel space, then return and continue
execution in user space. If you change address spaces on every
transition, then that data has to be copied to/from bounce buffers,
which hurts performance. Also, on x86(-64) at least, changing the page
tables is slow and IIRC flushes caches, which also hurts performance.

RedHat did have a special x86 kernel that did 4GB+4GB rather than the
usual 2GB+2GB, which gave a little bit of breathing room for
memory-hungry applications (e.g. databases), but it seems to have fallen
out of favor as soon as x86-64 hit the market. If that was such a great
idea, why wasn't it used for _all_ systems?

Windows on x86 has allowed 3GB+1GB since 2003/XP. Windows Server
2003SP1 on x86-64 allowed 4GB user space for x86 programs, but not on
x86 itself, and AFAICT that feature was dropped from later versions.
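
As a small illustration of the "signed pointer" view quoted above (the
addresses are hypothetical, and the canonical user/kernel split is an x86-64
convention, not anything the C standard says):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Typical x86-64 split: user addresses have bit 47 clear, kernel
       addresses have bits 47..63 set (i.e. they are sign-extended). */
    uint64_t user_addr   = 0x00007fffffffe000u;  /* hypothetical user page   */
    uint64_t kernel_addr = 0xffff800000000000u;  /* hypothetical kernel base */

    /* Viewed as signed, user space is positive and kernel space negative.
       (The conversion to a signed type is itself implementation-defined
       for out-of-range values, fittingly enough.) */
    printf("user:   %lld\n", (long long)(int64_t)user_addr);
    printf("kernel: %lld\n", (long long)(int64_t)kernel_addr);
    return 0;
}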

S
 

Stephen Sprunk

Some big-number libraries define multiplication, addition, division, etc. with
base 0xFFFFFFFF... on arrays of unsigned 32-bit values; in their calculations
they do not seem too slow.

So would it be any different with arrays of unsigned char, 8 bits, base 0xFF?

How well does that work on a system with 9-bit chars and 36-bit ints?
Or a system with 24-bit chars/ints and 48-bit longs? Or a system with
16-bit ints, which has to emulate 32-bit longs in software? Or a system
with ones-complement or signed-magnitude representation? IOW, on pretty
much anything that doesn't look like an x86/RISC chip?

C runs just fine on such systems, provided your code is written well.
Your "improvements" would render C implementations for those systems
either impractical (i.e. ridiculously slow) or impossible.

S
 

Rosario1903

Some big-number libraries define multiplication, addition, division, etc. with
base 0xFFFFFFFF... on arrays of unsigned 32-bit values; in their calculations
they do not seem too slow.

So would it be any different with arrays of unsigned char, 8 bits, base 0xFF?

Perhaps I'm not of your world...

So why don't CPUs support, *at least*, the operations [+ - / * < > <= >= & | ^ not] on
8-bit unsigned, plus one other type among 16-, 32-, 64-, or 128-bit unsigned, with
its unsigned operations??

The real shame would be if those who design these CPUs could not agree on
what the number a*b or a-b etc. is...
They should standardize their math functions to return the same
result for the same input.
[From operations on 8-bit unsigned values one can define operations on
16-bit, 32-bit, etc. unsigned values.
I don't know about signed, but I think it would be the same.]

It can be difficult to emulate floating-point hardware too,
but fixed-point software could be easier...
 

Rosario1903

A more recent language made a pretty serious attempt to
produce exactly the same result on all machines; "write once,
run anywhere" was its slogan. Its designers had to abandon
that goal almost immediately, or else "anywhere" would have
been limited to "any SPARC." Moral: What you seek is *much*
more difficult than you realize.

But isn't that language Java? I see Java has had some success in the
places where I look, more than C [some important programs 'they' chose
to write are in Java].
 

Rosario1903

How well does that work on a system with 9-bit chars and 36-bit ints?
Or a system with 24-bit chars/ints and 48-bit longs?

These numbers sound ridiculous to me.
Or a system with
16-bit ints, which has to emulate 32-bit longs in software?

OK, if there are 8-bit types too.
Or a system
with ones-complement or signed-magnitude representation? IOW, on pretty
much anything that doesn't look like an x86/RISC chip?

They can have whatever representation they want, but the result has to be
the same as two's complement.
 

James Harris

Stephen Sprunk said:
OTOH, if you view [x86-64 pointers] as signed, the result is a
single block of memory centered on zero, with user space as
positive and kernel space as negative. Sign extension also has
important implications for code that must work in both x86 and
x86-64 modes, e.g. an OS kernel--not coincidentally the only code
that should be working with negative pointers anyway. [snip
unrelated]

IMO it is more natural to think of kernel memory and user memory as
occupying separate address spaces rather than being part of one
combined positive/negative blob; having a hole between them helps
rather than hurts. ...

I think it's reasonable for the application to consider its address
space to be distinct from the kernel's. OTOH, the OS guys really want
the kernel's address space to contain at least one user address
space at any given time.

My understanding is that it's common for a syscall to jump into the
kernel, move some data to/from kernel space, then return and continue
execution in user space. If you change address spaces on every
transition, then that data has to be copied to/from bounce buffers,
which hurts performance. Also, on x86(-64) at least, changing the page
tables is slow and IIRC flushes caches, which also hurts performance.

Copying data between kernel and user space is indeed a cause of slowness. In
at least the traditional OS models that still happens more than it should. I
don't know how good modern OSes have become at avoiding that copying, but the
nature of many system calls means there will still be some.

Changing address spaces doesn't prevent there being some parts of memory
mapped in common, so an OS could use common mappings in the transfer and thus
avoid double copying via a bounce buffer.

Changing page tables usually requires flushing the old TLB, which is a cache,
but only a cache of page-table entries. Yes, it can hurt performance and was
an issue also on x86_32. However, to reduce the impact some pages can be
marked as Global. Their entries are not flushed from the TLB with the rest.
These are usually pages containing kernel data and code which are to appear
in all address spaces.

As an aside, and I'm not sure which CPUs implement this, instead of marking
pages as global some processors allow TLB entries to be tagged with the
address space id. Then no flushing is required when address spaces are
changed. The CPU just ignores any entries which don't have the current
address space id (and which are not global) and refills itself as needed.
It's a better scheme because it doesn't force flushing of entries
unnecessarily.
RedHat did have a special x86 kernel that did 4GB+4GB rather than the
usual 2GB+2GB, which gave a little bit of breathing room for
memory-hungry applications (e.g. databases), but it seems to have fallen
out of favor as soon as x86-64 hit the market. If that was such a great
idea, why wasn't it used for _all_ systems?

The x86_32 application address space is strictly 4GB in size. Addresses may
go through segment mapping where you can have 4GB for each of six segments
but they all end up as 32 bits (which then go through paging). So 4GB+4GB
would have to multiplex on to the same 4GB address space (which would then
multiplex on to memory). I don't know what RedHat did but they could have
reserved a tiny bit of the applications' 4GB to help the transition and
switched memory mappings whenever going from user to kernel space. Such a
scheme wouldn't have been used for all systems because it would have been
slower than normal, not just because of the transition but also because,
where the kernel needed access to user space, it would have to carry out a
mapping of its own.

OSes often reserve some addresses so that they run in the same address space
as the apps. That makes communication much easier and a little bit faster,
but IMHO they often reserve far more than they need to, having been designed
in the days when no app would require 4GB.

As an aside, the old x86_16 could genuinely split all an app's addressable
memory into 64k data and 64k code because it had a 20-bit address space.

James
 

James Harris

Stephen Sprunk said:
How well does that work on a system with 9-bit chars and 36-bit ints?
Or a system with 24-bit chars/ints and 48-bit longs? Or a system with
16-bit ints, which has to emulate 32-bit longs in software? Or a system
with ones-complement or signed-magnitude representation? IOW, on pretty
much anything that doesn't look like an x86/RISC chip?

C runs just fine on such systems, provided your code is written well.
Your "improvements" would render C implementations for those systems
either impractical (i.e. ridiculously slow) or impossible.

Those old machines would now appear to be ridiculously slow in any case!
ISTM that 2's complement integers and 8-bit bytes have pretty much won the
argument. C does a great job of adapting to both worlds.

James
 

James Harris

Eric Sosman said:
[...]
What do you mean by a language (as opposed to a program) being
"portable"?

To be really portable, a computer language has to have
all its functions act the same...

... with two consequences: (1) the language will not be
available on machines where achieving the mandated result is
more trouble than it's worth, and (2) the language will self-
obsolesce as soon as new machines offer capabilities outside
the limits of what the language's original machine offered.

There's a balance to be struck. Most machines have - or can easily be made
to appear to have - a common set of features. As long as those features
which appear in the programming language are well chosen, it should be quite
possible to get the compiler to manage the mapping to different hardware,
making small adjustments as needed so that code sees an idealised machine.
Examples of (1): If C's semantics had been defined so as
to require the results given by its original machine, there
could never have been C implementations for three of the four
systems mentioned in the original K&R book (Honeywell 6000:
incompatible `char' range; IBM S/370 and InterData 8/32:
incompatible floating-point). That is, C would have been
available on the DEC PDP-11 and, er, and, um, nothing else
at the time, perhaps VAX later (if anyone had cared).

C did adapt brilliantly to the hardware of the time which was far more
diverse than what programmers have to use today. Maybe that was one thing
that helped C focus on the fundamentals and become the language that it did.
I don't know. I do think an argument could be made that C's model was an
influence on later hardware design.
Examples of (2): If C's semantics blah blah blah, C would
have been abandoned as soon as IEEE floating-point became
available, because nobody would have been content to stick
with PDP-11 floating-point.

It depends on the limits defined for C's original floating point. Completely
standard repeatable results of IEEE-754 have a place but Ada also has a good
approach where the parameters of numbers are specified and the compiler
chooses the representation by matching hardware to requirements.

Personally I'd like to see a software floating point library which focussed
on getting the best possible performance rather than compatibility with the
IEEE standards. I mean a portable library which could produce repeatable
results on any hardware and which used its specifications to avoid the parts
of FP processing which are normally expensive in software. Such a library
could be adapted to any width of floating point number.
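
As a rough idea of why the fixed-point route mentioned earlier is so much
cheaper in software, here is a minimal Q16.16 sketch (the format, the names
and the rounding behaviour are illustrative choices, not a proposal):

#include <stdint.h>

typedef int32_t q16_16;            /* 16 integer bits, 16 fraction bits */
#define Q_ONE ((q16_16)1 << 16)    /* the value 1.0 in Q16.16 */

/* Multiply: widen to 64 bits, multiply, shift the extra 16 fraction bits
   back out.  (Right-shifting a negative value is implementation-defined,
   which is itself one of the portability points being argued here.) */
static q16_16 q_mul(q16_16 a, q16_16 b)
{
    return (q16_16)(((int64_t)a * b) >> 16);
}

/* Conversions to and from plain integers (no overflow checking). */
static q16_16 q_from_int(int x)  { return (q16_16)(x * (int32_t)Q_ONE); }
static int    q_to_int(q16_16 x) { return (int)(x / (int32_t)Q_ONE); }

Everything above is ordinary integer arithmetic, so it runs at full speed even
on a CPU with no floating-point hardware at all; the price is a fixed range
and precision instead of the sliding scale that IEEE floating point gives.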
[...]
If C were portable by my definition above, it would not have undefined behaviour.
UB is the region of the language where functions are not defined by the
standard, i.e. they do not have the same result for some arguments.
This could also be the answer for log(0) [what to return on error].

A more recent language made a pretty serious attempt to
produce exactly the same result on all machines; "write once,
run anywhere" was its slogan. Its designers had to abandon
that goal almost immediately, or else "anywhere" would have
been limited to "any SPARC." Moral: What you seek is *much*
more difficult than you realize.

Why SPARC? Or were you kidding?

James
 

Stephen Sprunk

These numbers sound ridiculous to me.

They exist whether you think they sound "ridiculous" or not.
OK, if there are 8-bit types too.

Bignum libraries using 32-bit longs that are themselves emulated in
software are a formula for horrific performance.
They can have whatever representation they want, but the result has to be
the same as two's complement.

That is simply not possible. Do you not understand how the different
representations work?
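
To make that concrete, the same value is stored with different bits under the
three representations C allows; a small sketch (the commented patterns assume
a 32-bit int with no padding bits):

#include <stdio.h>

int main(void)
{
    int x = -5;
    const unsigned char *p = (const unsigned char *)&x;

    /* For a 32-bit int, the value -5 is stored as:
         two's complement:  0xFFFFFFFB
         ones' complement:  0xFFFFFFFA
         sign-magnitude:    0x80000005
       Code that inspects these bytes, or applies bitwise operators to
       negative values, therefore gets different answers on different
       machines even though the arithmetic value is the same. */
    for (size_t i = 0; i < sizeof x; i++)
        printf("%02X ", p[i]);
    putchar('\n');
    return 0;
}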

S
 

Stephen Sprunk

So why don't CPUs support, *at least*, the operations [+ - / * < > <= >= & | ^ not] on
8-bit unsigned, plus one other type among 16-, 32-, 64-, or 128-bit unsigned,
with its unsigned operations??

What if the CPU doesn't _have_ 8, 16, 32, 64, and 128-bit integers?

What if the CPU doesn't represent pointers as plain integers in the
first place?
The real shame would be if those who design these CPUs could not agree on
what the number a*b or a-b etc. is... They should standardize their
math functions to return the same result for the same input.

No, they don't "have to", and they didn't. And there are good reasons
that they didn't.

Just because x86/RISC do something a certain way doesn't mean that is
the only valid way to do it.

S
 

Stephen Sprunk

A more recent language made a pretty serious attempt to produce
exactly the same result on all machines; "write once, run anywhere"
was its slogan. Its designers had to abandon that goal almost
immediately, or else "anywhere" would have been limited to "any
SPARC." Moral: What you seek is *much* more difficult than you
realize.

But isn't that language Java? I see Java has had some success in the
places where I look, more than C [some important programs 'they' chose to
write are in Java].

C is far more successful than Java. In particular, the JVM itself and
the OS it runs on are likely written in C (or C++, which has pretty much
the same undefined behaviors).

S
 

Stephen Sprunk

Those old machines would now appear to be ridiculously slow in any
case! ISTM that 2's complement integers and 8-bit bytes have pretty
much won the argument. C does a great job of adapting to both
worlds.

I'll grant that 36-bit machines have fallen out of favor, and I'm not
sure how common ones complement and signed magnitude are these days, but
16-bit (and 8-bit) machines are still common, as are 24-bit DSPs.

The industry is less diverse than it used to be, but it's still far more
diverse than Rosario1903 believes it to be. All the world is _not_ an x86.

Heck, even simple details like what happens when you shift a signed
value are not standardized. GCC on x86(-64) happens to insert extra
code to "correct" the native value to what they assume a programmer
expects, but there is no requirement to do so, and other compilers or
even GCC on other CPUs may not do so.
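
A tiny example of that last point (the C standard leaves the negative case to
the implementation, so the comments describe typical behaviour, not a
guarantee):

#include <stdio.h>

int main(void)
{
    int x = -8;

    /* Right-shifting a negative signed value is implementation-defined:
       an arithmetic shift yields -4, a logical shift would yield a large
       positive number.  (Left-shifting a negative value is undefined.) */
    printf("-8 >> 1 = %d\n", x >> 1);

    /* Shifting unsigned values is fully defined everywhere. */
    unsigned u = 0xFFFFFFF8u;
    printf("0x%X >> 1 = 0x%X\n", u, u >> 1);
    return 0;
}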

S
 
