C programming in 2011


Kleuskes & Moos

There's no reason you can't add a signed (possibly shorter) offset
to an unsigned number and get a reasonable unsigned result.  Yes,
it might go out of range and trap, or go out of range and execute
garbage, or wrap around and trap, or wrap around and execute garbage.
The same problem is also likely to happen if you didn't install the
maximum amount of memory that you could install, and run off the
end.
True.

Some machines that have relative branches have only relative branches
with short offsets (if you want to go a longer distance, use an
absolute branch, which likely takes a longer instruction, or perhaps
a longer relative branch, if there is one.  x86 has relative branches
with 8 and 16-bit offsets, for example).  The number of bits in the
offset tends to be a function of how many bits are left over in the
instruction word, without extending it, at instruction-set design
time.  So you might have a relative branch with a 13-bit offset on
a machine with a 32-bit address.
True.

If the *only* branches are relative branches, then you need to be
able to branch from anywhere to anywhere, and traps on "overflow"
(signed or unsigned) are a problem.  It is usually not that much
of a problem to ignore the "overflow" indication.  (Note:  if you
can add *positive* integers, you can ignore trapping overflow by
adding a bit at the high-order end of the adder, and hard-wire the
inputs as 0's, and ignore the output of that bit.)
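
A software analogue of that extra high-order adder bit, as a rough
sketch (the widths and names here are mine, purely illustrative): do
the addition one step wider than the operands and the overflow simply
cannot happen.

#include <stdint.h>

/* Two non-negative 16-bit values added in a 32-bit accumulator sum to
 * at most 0x1FFFE, so no overflow can occur and any overflow/carry
 * indication from a narrower add could simply be ignored. */
static uint32_t add_one_bit_wider(uint16_t a, uint16_t b)
{
        return (uint32_t)a + (uint32_t)b;
}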

You could simply ignore the carry flag, but that's picking a nit.
Relative branches with short offsets tend to have signed offsets
because they are more useful that way than ones that only go forward
or only go backwards.  Instruction set designers are aware of this.
True.

Some machines have instructions that are multiples of and aligned to
something larger than bytes (e.g. 16 bits).  On these machines, the
formula might be:
        PC <-- PC + (2 * offset)
because the low-order bit of the offset is omitted (it's always 0)
and the low-order bit of the PC doesn't really exist (it's hardwired
to 0).  That 13-bit offset might actually get squeezed into 12 bits.

True, but that amounts to pretty much the same thing and still requires
an addition (in addition to a shift) to do a relative jump.
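
A rough sketch of what that update amounts to in C (a hedged
illustration; the 12-bit stored width and the names are mine, not any
particular ISA's):

#include <stdint.h>

/* Apply a 12-bit stored branch offset, counted in 16-bit instruction
 * words, to a 32-bit program counter. */
static uint32_t branch_target(uint32_t pc, uint16_t stored_offset)
{
        int32_t offset = stored_offset & 0x0FFF;    /* 12 stored bits */

        if (offset & 0x0800)                        /* sign bit set?  */
                offset -= 0x1000;                   /* sign-extend    */

        /* the low bit of the real offset is an implicit 0, hence the
         * doubling; unsigned arithmetic makes any wrap-around well
         * defined */
        return pc + (uint32_t)(offset * 2);
}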

I've read your post with some interest, but i must admit your point
has managed to escape me. Could you send in another with a less
Houdini-like character?
 

Kleuskes & Moos

Virtual memory addresses may be represented with more complex
structure (e.g. permission bits, ring numbers, etc.) in which case
it's hard to tell what 'signed' or 'unsigned' even means as applied
to the address.  For real examples of this, see [3456]86 32-bit
protected mode large model (48-bit pointers), although I'm not sure
how much those are actually used.


Quite extensively, since segments can be quite convenient in
compartmentalizing memory and making code relocatable. Also, memory
protection schemes and paging can be based (as they are with Intel
IA32) on segments. On an Intel/AMD processor you have no choice, since
segment-based addressing cannot be switched off. But then again, with
Intel processors things tend to get _very_ complicated _very_ fast,
since they're quite complex beasts.

Still, Intel (after all the segment and paging dust has settled)
reduces the 'logical address' to a 'linear address' with the help of a
segment descriptor table and views the physical address space as
unsigned:

<quote Intel IA32 Manual Vol III, section 3.3 "Physical address space"
http://www.intel.com/Assets/PDF/manual/325384.pdf>
In protected mode, the IA-32 architecture provides a normal physical
address space of 4 GBytes (2^32 bytes). This is the address space that
the processor can address on its address bus. This address space is
flat (unsegmented), with addresses ranging continuously from 0 to
FFFFFFFFH.
</quote>
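
A rough sketch of that logical-to-linear reduction (the descriptor
layout and names here are illustrative, not Intel's actual format, and
limit/ring checks are omitted):

#include <stdint.h>

struct segment_descriptor {
        uint32_t base;          /* segment base address */
        uint32_t limit;         /* segment limit        */
};

static uint32_t linear_address(const struct segment_descriptor *table,
                               uint16_t selector, uint32_t offset)
{
        /* the top 13 bits of the selector index the descriptor table;
         * the low 3 bits (table indicator, RPL) are ignored here */
        const struct segment_descriptor *d = &table[selector >> 3];

        return d->base + offset;    /* the linear address, viewed as unsigned */
}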

It's also worth noting that this manual views 'byte', 'word',
'double-word', 'quad-word' and 'double-quad-word' as fundamental
datatypes and (explicitly) views numerical types as superimposed on
that.  See Vol I, chapter 4
(http://www.intel.com/Assets/PDF/manual/253665.pdf).

It's actually pretty hard to set up a C-standard-compatible test
to see whether addresses are really signed or unsigned.

I'd say it's damn near impossible, since the _actual_ addressing may be
hidden from view by the compiler's implementation.
I propose this as a test:  if you can allocate a struct or array
that straddles the signed boundary (0x7fff.../0x8000... for 2's
complement) and the address of the last element is > the address
of the first element, then addresses are *NOT* signed.  If the
address of the last element is < the address of the first element,
the implementation is broken.

Hmmm... So you're saying either addresses are unsigned _or_ the
implementation is broken?
If you can allocate a struct or array that straddles the unsigned
boundary (0xffff.../0x0000... for 2's complement) and the address
of the last element is > the address of the first element, then
addresses are *NOT* unsigned.  If the address of the last element
is < the address of the first element, the implementation is broken.

If the addresses are not signed *AND* not unsigned (you can allocate
structs or arrays that straddle both boundaries), the implementation
is broken.
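
A sketch of the two straddle checks just described, assuming you could
somehow get pointers to the first and last elements of an object that
crosses the boundary in question (which, as noted below, is exactly
the part most implementations won't let you do):

#include <stdio.h>

static void straddle_verdict(const char *boundary, char *first, char *last)
{
        if (last > first)
                printf("addresses are NOT %s\n", boundary);
        else if (last < first)
                printf("implementation is broken (%s straddle)\n", boundary);
}

/* hypothetical usage:
 *   straddle_verdict("signed",   first_elem, last_elem);  object straddles 0x7fff.../0x8000...
 *   straddle_verdict("unsigned", first_elem, last_elem);  object straddles 0xffff.../0x0000...
 */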

Now, the problem is that many implementations, especially on the
large (e.g. an AMD 64 bit machine which has nowhere near a full
address space of memory actually populated) and tiny (e.g. TI
MSP430G2231, which has 128 bytes of RAM and 2k of program flash)
end of things, won't let you allocate structs or arrays that straddle
either boundary.  This applies especially to physical addresses on
machines with less than a full address space of memory.  So the
test result is inconclusive.  I suspect a lot of 32-bit machines
fall in this category.

Does anyone have a simpler test, or a test that generates answers
on more machines, that doesn't rely on undefined or
implementation-defined behavior?

Nope. Besides, you're ignoring the fact that the compiler may clean up
the mess for you and translate 'signed' addresses to a corresponding
'unsigned' internal representation rendering all tests in 'C'
pointless. The CPU _and_ the compiler in this case are strictly
hypothetical, though.
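
As a purely hypothetical sketch of that kind of clean-up: on a two's
complement machine, flipping the top bit of every address maps signed
ordering onto unsigned ordering, so a compiler could emit plain
unsigned comparisons and the C program would never notice.

#include <stdint.h>

/* a < b as signed exactly when bias(a) < bias(b) as unsigned */
static uint32_t bias(int32_t signed_addr)
{
        return (uint32_t)signed_addr ^ UINT32_C(0x80000000);
}
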
A probably easier but non-C-standard-compliant way to tell whether
addresses are treated as signed or unsigned is to compile a function
like:
int addrcmp(char *a, char *b)
{
        if (a > b)
                return 1;
        else
                return -1;
}

and look at the generated code.  On a 2's complement machine, you're
likely to see a compare instruction followed by a conditional branch.
(Lots of machines, though certainly not all, both ancient and tiny
(e.g. 8008) and modern and large, use "condition flags": bits set by
arithmetic and compare operations and tested by conditional branches.)
A branch on carry/no-carry is an unsigned comparison; a branch on
plus/minus is a signed comparison.  If it's not a 2's complement
machine, it probably uses either a signed-comparison or
unsigned-comparison instruction.
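
A small sketch of the difference this makes: the very same bit
patterns order one way as unsigned values and the other way as signed
ones, which is what separates a branch-on-carry from a branch-on-sign.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint32_t ua = UINT32_C(0x7fffffff), ub = UINT32_C(0x80000000);
        int32_t  sa = INT32_MAX,            sb = INT32_MIN;  /* same bit patterns */

        printf("unsigned: 0x7fffffff < 0x80000000 is %d\n", ua < ub);  /* 1 */
        printf("signed:   0x7fffffff < 0x80000000 is %d\n", sa < sb);  /* 0 */
        return 0;
}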

The existence of relative-branch instructions with a short signed
offset added to a longer program counter register proves nothing
about the signedness of the program counter.

True. However, imagine what happens to segment boundaries (on an Intel
processor) if you were to use a truly signed program counter. They
would no longer correspond to the address of the segment base, but
would extend below that address, and things would become needlessly
complicated and error-prone.
Structs and arrays have to have ascending addresses for their
members/elements.  This plus the operation of the > and < operators
on pointers can make it more than a viewpoint issue.

Yes. But that's a compiler constraint and can be dealt with behind the
scenes at compile-time, for instance by mapping signed addresses to an
equivalent unsigned representation internally.

Nevertheless, unless i very much misunderstand your post (and please
tell me if i do), your argument amounts to saying "signed addresses
are just not practical".
 

Kleuskes & Moos

The signedness of the offset in a relative branch instruction has
no bearing on whether the PC is considered signed or unsigned.
Agreed.
 
(However, I'll note that having NULL represented as all-bits-zero
makes a pretty good case for an unsigned PC (there are *LOTS* of
these machines), and having NULL represented the same as INT_MIN
or LONG_MIN makes a pretty good case for a signed PC (I know of
none of these).  It doesn't really make sense for NULL to punch a
hole in the "middle" of the memory space, with "middle" determined
by the signedness of the PC).

Good point.
 

Kleuskes & Moos

I'm referring here to the addresses that show up in pointers and
get compared with < and > operators.  Those tests (probably on
virtual addresses) need to work correctly.  If the compiler actually
goes to the trouble of translating virtual to physical addresses
and comparing those, fine, but that puts a lot of restriction on
remapping.

If addresses are translated from what you refer to as 'virtual', the
comparison needs to be done on the _untranslated_ versions, and in
practice, i think it is.
For example, the comparison of two pointers to the same object (done
by the compiler with physical addresses) is not allowed to change
during the run of the program.  That means no reordering, in physical
memory, of the pages containing parts of a single object.  This is a
major pain for a virtual memory implementation.  If the program
takes the address of something and hangs on to it, the virtual
memory system may not be allowed to load it anywhere in physical
memory besides where it was first loaded as long as the variable
containing that address (and any variables it was copied to) is
live.

Yes. But only if you compare _physical_ addresses, which, in practice,
you would not do, since the only requirement is that the pointers as
seen from 'C', that is the _virtual_ ones, behave nicely. What happens
at the physical level is unimportant from a 'C' point of view.

For instance, if you are running your software on an x86, the addresses
as seen/generated by your compiler are 'logical addresses' (Intel
terminology), which get translated by the CPU to physical addresses
behind the scenes, but this is completely transparent to the 'C'
program and does not need to be taken into account.
No.  Signed or Unsigned:  *PICK ONE*, you can't have both.  You
also can't have neither.

Hmmm... Not so long ago, you argued

<quote>
Virtual memory addresses may be represented with more complex
structure (e.g. permission bits, ring numbers, etc.) in which case
it's hard to tell what 'signed' or 'unsigned' even means as applied
to the address.
</quote>

Which i think was a pretty good point. Whence the change?
You can, however, set things up so it's impossible to rule out
either one.  I'll claim that this is the case with most 32-bit
systems, and systems that have 50% or less of their maximum
memory actually installed.

In those cases it makes no difference, agreed. I don't comment on
whether or not your claim is true.
If it's signed, you shouldn't be able to allocate an object that
straddles 0x7fff.../0x8000... .  Incidentally, a good representation
for NULL with signed addresses is the bit pattern 0x8000... but it
still has to compare equal to the integer constant 0.
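
A tiny sketch of that last point, i.e. that the bit pattern used for a
null pointer and the source-level constant 0 are separate issues:

#include <stdio.h>
#include <string.h>

int main(void)
{
        char *p = 0;                /* a null pointer, however it is represented */
        unsigned char zeros[sizeof p];

        memset(zeros, 0, sizeof zeros);

        /* always 1, regardless of the representation */
        printf("p == 0         : %d\n", p == 0);
        /* 1 on most machines, but not required by the standard */
        printf("all bits zero? : %d\n", memcmp(&p, zeros, sizeof p) == 0);
        return 0;
}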

<quote>
(However, I'll note that having NULL represented as all-bits-zero
makes a pretty good case for an unsigned PC (there are *LOTS* of
these machines), and having NULL represented the same as INT_MIN
or LONG_MIN makes a pretty good case for a signed PC (I know of
none of these). It doesn't really make sense for NULL to punch a
hole in the "middle" of the memory space, with "middle" determined
by the signedness of the PC).
</quote>

The above just seemed to fit here quite well.

The operation of < and > on pointers needs to work correctly.
struct a {
        int     firstmember;
        int     secondmember;
} foo;

        &foo.firstmember < &foo.secondmember had darn well better
be true, and it had better not change during the run of the program,
even if the struct gets split across pages, regardless of whether
the compiler is comparing virtual or physical addresses.
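
A self-contained version of that check, as a sketch (the assertion has
to hold on any conforming implementation, whatever the compiler does
with addresses internally):

#include <assert.h>

struct a {
        int     firstmember;
        int     secondmember;
} foo;

int main(void)
{
        /* members are laid out in declaration order */
        assert((char *)&foo.firstmember < (char *)&foo.secondmember);
        return 0;
}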

That's not _that_ hard to arrange, but again, it _only_ needs to be
valid for _logical_ addresses (Intel jargon, i think it corresponds,
loosely, to your 'virtual addresses', but i'm not quite sure), not
_physical_ ones.
It's hard to imagine a "truly signed program counter" with segment
numbers in it.  Are the segment numbers what is signed?  Signed program
counter doesn't have to imply (nor forbid) that the *offset* is signed.

Ummm... Not quite sure what you mean by "a PC with segment numbers
in it" but i was referring to signed offsets.
Using physical addresses (signed or unsigned) for comparison can
really kill your virtual memory performance.

Using physical addresses (signed or unsigned) for comparison is
stoopid and no CPU and/or Compiler manufacturer would dream of doing
that.
No, my argument is that on many machines, you simply CANNOT TELL
whether the addresses are signed or not (and therefore caring or
worrying about it is not worthwhile), because you cannot get (virtual)
pointers to feed to the < or > operators of appropriate values to
even do the test.

And it's a good point, too.
Often the reasons are things like "you don't
have that much memory (or swap space) installed", or "half the
address space is reserved for the kernel".

Incidentally, I think my first run-in with unsigned pointers involved
the loop:
        struct reallybig table[MAX];
        struct reallybig *p;

        for (p = &table[MAX]; p >= &table[0]; p--) {
                ... do something with p ...;
        }

and &table (data segment started at 0) turned out to be less than
sizeof(table).  Gee, why does this loop never terminate?

:)
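
For what it's worth, the usual way to make such a descending loop safe
(a sketch with made-up sizes, not the original code) is to compare
before decrementing, so the pointer never drops below &table[0]:

#include <stdio.h>

#define MAX 4                           /* illustrative size */

struct reallybig { int payload[64]; };  /* stand-in for the real struct */

int main(void)
{
        static struct reallybig table[MAX];
        struct reallybig *p;

        /* p never moves below &table[0], so the loop terminates no
         * matter where the array lands in the address space or how the
         * machine orders addresses */
        for (p = &table[MAX]; p != &table[0]; ) {
                p--;
                printf("visiting element %d\n", (int)(p - table));
        }
        return 0;
}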
 

David Thompson

On May 31, 3:23 pm, James Kuyper <[email protected]> wrote:
PDP-10 was (at least) one with integers only signed: 36-bit word, 2's
complement sign+35; for a few double operations, sign+70 = sign+35+ignored+35.
Such a system would not be able to operate, since every relative jmp-
instruction involves the addition (or subtraction, which is basically
the same). So name that fabled platform.
First, some systems don't have relative jumps. Second, branch address
calculation, and for that matter other address calculation, need not
use the same operations as data arithmetic, especially since they
often use separate hardware (i.e. wires/paths and gates), and, except
for <=16-bit CPUs, the address width has often been less than the
(integer) data width and (well) within the signed-positive half of the
range.
 

Kleuskes & Moos

PDP-10 was (at least) one with integers only signed: 36-bit word, 2's
complement sign+35; for a few double operations, sign+70 = sign+35+ignored+35.

Which made it big in the '60s, right alongside the Burroughs machine,
and borrowed its architecture from the PDP-6, dating back to 1963...
36 bits appear to have been the 'in' thing back in those days.

The PDP-11, however, dropped all that crazy stuff and was a lot easier
to program, being byte-addressable and all. And that's the machine that
'C' was developed on originally. So, all in all, it's on par with the
Burroughs machine, in that it would probably not support 'C' at all.
Nice fossil, though.
First, some systems don't have relative jumps.

Then they're, ipso facto, excluded from the argument.
Second, branch address
calculation, and for that matter other address calculation, need not
use the same operations as data arithmetic, especially since they
often use separate hardware (i.e. wires/paths and gates), and, except
for <=16-bit CPUs, the address width has often been less than the
(integer) data width and (well) within the signed-positive half of the
range.

And ultimately all the specialized hardware will end up doing

PC <-- PC + offset.

Probably after a bunch of branch-prediction, reordering and/or
flushing of pipelines and all kinds of other smart stuff, but the
final result stays what it is.

If you think otherwise, please bring an example of what you mean.
 

Kleuskes & Moos

I don't think it *has* to be done for untranslated versions.  Doing
it for translated versions, however, may be equivalent to shooting
yourself in the foot in many situations, much like an implementation
where any undefined behavior launches missiles.  There are a few
where it might not.

An example here:  in x86 real-mode 16-bit huge model, pointers are
represented as abcd:efgh and point at physical address 16*(abcd) +
efgh.  (This is a fixed translation with no "map" subject to getting
changed at runtime, but it is a translation).  There are many
possible representations that point at the same place.  In some of
the memory models (e.g.  large), you can't get these different
representations because all objects are smaller than 64k and you
can't have carries into the segment register from pointer arithmetic
unless you've invoked undefined behavior.  In huge model this isn't
the case (objects are allowed to be > 64k), and you need to "normalize"
a pointer to ijkl:000h or i000:jklh before comparing pointers.
Here, you're comparing physical addresses, not the untranslated
ones.
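
A hedged sketch of the comparison being described (the struct and
function names are mine, not any actual compiler's runtime): each
seg:off pair is reduced to the 20-bit address 16*seg + off before
comparing, since many different pairs name the same byte.

#include <stdint.h>

struct farptr {
        uint16_t seg;
        uint16_t off;
};

static uint32_t linear20(struct farptr p)
{
        return ((uint32_t)p.seg << 4) + p.off;      /* 16*seg + off */
}

static int hugecmp(struct farptr a, struct farptr b)
{
        uint32_t la = linear20(a), lb = linear20(b);

        return (la > lb) - (la < lb);               /* -1, 0 or +1 */
}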

Ok. Two bits of well-intended advice:

a) Do not use Borland products
b) Don't do that.

and an observation:

'An abstract program language which obliges you to worry about the
contents of registers is not worth using'

It has to work right one way or another, and if you are doing full
virtual memory with page-in/page-out, you wouldn't select comparing
physical addresses.  In other (rare) situations, like the x86 16-bit
real-mode huge model with no swapping/paging, you would, because
the virtual ones don't behave nicely.

Ummm.... That would be an excellent reason not to use such platforms,
or reduce the size of your objects, since they are obviously not
supported.
<snip>

I am referring to a PC that *contains* an offset but it is not an
offset alone.  Consider x86 32-bit mode large-model segmented
pointers (48 bits wide).

Ummm... The PC (Program Counter) a.k.a. EIP on Intel-land is a
register and contains a 32-bit value. RTFM, if you don't believe me.

<quote src="http://www.intel.com/products/processor/manuals/, Vol 1,
section 3.5, misleadingly called 'instruction pointer'">
The instruction pointer (EIP) register contains the offset in the
current code segment
for the next instruction to be executed.
It contains an offset (32 bits), segment
number, ring number, and global/local bit.
The "program counter",
regardless of Intel terminology, includes the CS register in large
model since that is an essential part of determining where the
program executes, and the CS part changes arbitrarily often.  Function
and object pointers also include a segment portion.

Ah... What Intel has to say on the matter is of no consequence in the
light of your wisdom. I see. This explains rather a lot.

Sorry. I've had it with 'models'. If you want to discuss TurboC, i'm
sure there's a venue for that.
 

Patrick Scheible

But PDP-10 addresses were 18 bits (extended to 23 bits later), so you
never saw a negative address. Note that those address 36-bit words, not
bytes.
Which made it big in the '60s, right alongside the Burroughs machine,
and borrowed its architecture from the PDP-6, dating back to 1963...
36 bits appear to have been the 'in' thing back in those days.

36 bits made a lot of sense, especially for floats. 32-bit floats don't
have enough precision for a lot of work. With the PDP-11, Vax, etc.,
lots of scientific users had to go to double precision at a substantial
cost in speed when the 36-bit floats would have been enough.
The PDP-11, however, dropped all that crazy stuff and was a lot easier
to program, being byte-addressable and all.

Byte addressing is generally wasteful. Address space is precious and
you don't actually need to address bytes in isolation from each other
very often. Character data is usually less than 8 bits wide. Sixbit is
fine in situations where you don't need control characters or need to
distinguish upper and lower case. ASCII is seven bits. When space is
precious, you don't use 8 bits on data that can be represented in 7. On
a machine with a good instruction set like the PDP-10, there's no speed
penalty for choosing 7 bits instead of 8 to represent characters. On
the PDP-11, on the other hand, programmers use 8-bit characters all the
time, wasting 1/8 of their machine's memory, just because of the hassle
and speed penalty for doing anything else.

Lots of byte-addressed machines are quite a bit slower accessing data
that isn't in word alignment. Generally speaking it's a mistake for a
programming language to act as if there's nothing wrong with unaligned
data when it actually involves a substantial penalty.
And that's the machine that
'C' was developed on originally. So, all in all, it's on par with the
Burroughs machine, in that it would probably not support 'C' at all.

I don't know about Burroughs, but there are a couple of C
implementations on the PDP-10. Because of C's peculiar requirement that
the word size must be a multiple of the char size, the implementers
chose a 9-bit char (even more wasteful than the PDP-11!). C didn't
become popular, partly because of the unfortunate size of char, partly
because, due to that, it's hard to port between other Cs and PDP-10 C,
partly because C was becoming popular at the same time the PDP-10 was
being phased out, and partly because PDP-10 assembly language is a very
nice language.

-- Patrick
 

Uno

Kleuskes & Moos said:
There is a reason, and a very good one, too, that sign-and-magnitude
isn't used anymore. If you have a different opinion, please post the
hardware in question. I'd be curious.

Note that decimal arithmetic was dead, buried, and fossilized until IBM
resurrected it a couple of years ago and it suddenly became the [hottest]
thing in hardware design.

That's news to me. Is hot good?
 
