

Richard Tobin

Mr McIntyre has strong hallucinations when he supposes that
you can have more than 64K with 16-bit pointers!

Amazing then, that they ever bothered to build machines with more than
64k memory, since nothing could address it.

They didn't address it with 16-bit pointers. They addressed it with
16-bit pointers and segments. You can't put that combination in a
16-bit C pointer.

-- Richard
 

Keith Thompson

Mark McIntyre said:
The point is, they can - but not as a single contiguous chunk. Either
you've forgotten how 286's worked, or you didn't have much time to
play with 'em.

Personally, I've never had more than a vague idea of how 286's worked,
nor did I play with them much. Such knowledge shouldn't be a
prerequisite to understanding anything posted here.

A 16-bit pointer cannot, by itself, possibly specify more than 65536
distinct addresses. It's possible, of course, that some additional
context could alter the meanings of the bit patterns of these 16-bit
pointers; if you have 4 bits of such context, you could address 2**20
bytes. But any such context is beyond the scope of standard C --
unless it's implicit in the *type* of the pointer, but even then
conversions to and from void* would have to work properly.
 

jacob navia

Cesar Rabak wrote:
Mark McIntyre wrote:



Would you mind explaining to us how? I'm afraid you did not grok
some aspect of it very well...

I explained elsethread that it is possible only with
16-bit segment registers and a 16-bit pointer. In this case
the pointer is 16 + 16 --> 32 bits (a FAR pointer, as it
was called in MSDOS).

A NEAR pointer (16 bits) can only address one "segment", i.e. 64K.

Mr McIntyre has misunderstood this stuff completely.
 

CBFalconer

Cesar said:
Mark McIntyre wrote:

Would you mind explaining to us how? I'm afraid you did not grok
some aspect of it very well...

Works just fine if CHAR_BIT is 128. :)
 

Steve Summit

jacob said:
I explained elsethread that it is possible only with
16-bit segment registers and a 16-bit pointer. In this case
the pointer is 16 + 16 --> 32 bits (a FAR pointer, as it
was called in MSDOS)

Actually I think it was originally 16 + 16 --> 20 bits,
but that's a story for another day, another newsgroup,
another decade, another century.
 

Mark McIntyre

A 16-bit standard C pointer cannot address more than 2**16
bytes, period. If you want to talk about nonstandard,
nonportable C constructs, please say so, so that everyone else
knows that you are doing so.

Personally I think the vast majority of this thread has been about
precisely that. Trying to diss the point now, by bringing it back onto
pure C topicality, is pretty pointless, IMHO. YMMV, E&OE etc.
--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
 

Keith Thompson

Mark McIntyre said:
Personally I think the vast majority of this thread has been about
precisely that. Trying to diss the point now, by bringing it back onto
pure C topicality, is pretty pointless, IMHO. YMMV, E&OE etc.

Ok, if you don't want to be topical that's more or less ok. But if
you're going to assert that a 16-bit pointer can address more than
2**16 bytes, you should still explain how (without assuming we're all
familiar with the 286 architecture).
 

Yevgen Muntyan

Keith said:
Personally, I've never had more than a vague idea of how 286's worked,
nor did I play with them much. Such knowledge shouldn't be a
prerequisite to understanding anything posted here.

A 16-bit pointer cannot, by itself, possibly specify more than 65536
distinct addresses. It's possible, of course, that some additional
context could alter the meanings of the bit patterns of these 16-bit
pointers; if you have 4 bits of such context, you could address 2**20
bytes. But any such context is beyond the scope of standard C --
unless it's implicit in the *type* of the pointer, but even then
conversions to and from void* would have to work properly.

A funny implementation could keep a mapping from bit representation
to actual memory locations (you're calling it "context"). If a program
can't contain more than N pointers where N is less than 2**32, then
32-bit pointers can be used to point to more than 2**32 bytes of memory.
Such an implementation would need to have very clever pointer operations,
and it would crash when you exceeded the number of pointers allowed at
once, but it would be conforming, wouldn't it?

Best regards,
Yevgen
 

Yevgen Muntyan

Yevgen said:
A funny implementation could keep a mapping from bit representation
to actual memory locations (you're calling it "context"). If a program
can't contain more than N pointers where N is less than 2**32, then
32-bit pointers can be used to point to more than 2**32 bytes of memory.
Such an implementation would need to have very clever pointer operations,
and it would crash when you exceeded the number of pointers allowed at
once, but it would be conforming, wouldn't it?

A little clarification: the total amount of memory on this ridiculous
implementation is 2**33 bytes (in particular, the maximum size of an
allocated object doesn't exceed this); the maximum number of pointers
allowed at once is 2**32; and the size of a pointer is 4 bytes (8-bit
bytes). Then it seems you can't trick it by allocating a huge array and
stuffing pointers to its bytes into that array.

Yevgen
 

Richard Tobin

Yevgen Muntyan said:
A funny implementation could keep a mapping from bit representation
to actual memory locations (you're calling it "context"). If a program
can't contain more than N pointers where N is less than 2**32, then
32-bit pointers can be used to point to more than 2**32 bytes of memory.

Very creative. But I can write pointers out to a file and read them
back, or print them out and ask the user to type them back in again,
so I can have as many pointers in use as I like.

Of course, the %p format could write out the necessary mapping
information, but I don't have to use %p. I could write out the bytes
of the pointer as integers.

-- Richard
 

Yevgen Muntyan

Richard said:
Very creative. But I can write pointers out to a file and read them
back, or print them out and ask the user to type them back in again,
so I can have as many pointers in use as I like.

Of course, the %p format could write out the necessary mapping
information, but I don't have to use %p. I could write out the bytes
of the pointer as integers.

What about an implementation without files? All you have is 2**33 bytes
of memory where the program code and everything else lives. (Yes,
ridiculous, but that's not the point here.)

Best regards,
Yevgen
 

Richard Tobin

What about an implementation without files? All you have is 2**33 bytes
of memory where the program code and everything else lives. (Yes,
ridiculous, but that's not the point here.)

Use 2^32 bits of the memory to correspond to the 2^32 possible pointer
representations. Loop through the addresses of the 2^33 bytes,
setting the bit corresponding to the (32-bit) representation of the
pointer. Eventually there will be a collision: you will find that the
bit is already set. You can recover the pointer that caused that bit
to be set - it's the same as the current one - and now you have two
pointers that compare equal but are supposed to be pointers to
different bytes.

-- Richard
 

Yevgen Muntyan

Richard said:
Use 2^32 bits of the memory to correspond to the 2^32 possible pointer
representations. Loop through the addresses of the 2^33 bytes,
setting the bit corresponding to the (32-bit) representation of the
pointer. Eventually there will be a collision: you will find that the
bit is already set. You can recover the pointer that caused that bit
to be set - it's the same as the current one - and now you have two
pointers that compare equal but are supposed to be pointers to
different bytes.

You won't have two pointers; you will only know that a given bit
representation denoted some pointer at some point in the past.
But your example may be modified so that it stores the bit
representation of the first pointer, and then loops through all
possible addresses until there's a collision. But then again, the
implementation may be SUPER clever, and it may watch what you're
doing - it may see that you stored a pointer (in any form; even if
you split it into bits and shuffle them randomly, it's SUPER clever
after all). You won't be able to store all valid pointers, so it can
always find a fresh bit representation when you request a new pointer.
Super checked pointers.

Yevgen
 

CBFalconer

Richard said:
Very creative. But I can write pointers out to a file and read them
back, or print them out and ask the user to type them back in again,
so I can have as many pointers in use as I like.

Of course, the %p format could write out the necessary mapping
information, but I don't have to use %p. I could write out the bytes
of the pointer as integers.

But first you have to get the pointer. If malloc and friends
refuse to give you one, you don't have those pointers to bandy
about.
 

Giorgos Keramidas

David T. Ashley said:
It isn't immediately clear to me why the call has to fail.
2^16 * 2^16 is 2^32 (4 gigabytes). My system has more virtual
memory than that.

Well, just out of curiosity, I tried it out to see what the
largest approximate value is. Results below.

54135^2 is going to be on the order of 2.5G. That is a pretty
fair hunk of memory.

---------

[nouser@pamc ~]$ cat test3.c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *p;
    int i;

    for (i = 65535; i > 0; i--)
    {
        if (p = calloc(i, i))
        {
            printf("%d is apparently the largest integer that will succeed.\n",
                   i);
            break;
        }
    }
}
[nouser@pamc ~]$ gcc test3.c
[nouser@pamc ~]$ ./a.out
54135 is apparently the largest integer that will succeed.
[nouser@pamc ~]$

Watch out for system specific limits though.

For instance, on a system with a 32-bit size_t type it may in
theory be possible for a single object to have a size as large
as SIZE_MAX, but user-specific limits might kick in a lot
earlier.

<OT>

For instance, on x86 systems the default installation of FreeBSD
shows:

| $ ulimit -a
| core file size (blocks, -c) unlimited
=> | data seg size (kbytes, -d) 524288
| file size (blocks, -f) unlimited
| max locked memory (kbytes, -l) unlimited
| max memory size (kbytes, -m) unlimited
| open files (-n) 7149
| pipe size (512 bytes, -p) 1
| stack size (kbytes, -s) 65536
| cpu time (seconds, -t) unlimited
| max user processes (-u) 3574
=> | virtual memory (kbytes, -v) unlimited
| $

Note, above, that a system-specific limit on the size of the
data segment of a single process will prevent a successful
allocation of memory long before you hit the 4 GB limit of a
32-bit size_t.

You can probably use something like the following program:

#include <assert.h>
#include <inttypes.h>
#include <limits.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main(void)
{
    size_t currsize, allocsize;
    char *p, *tmp;

    /*
     * Try allocating memory by doubling the size in each step.
     */
    allocsize = 1;
    currsize = 0;
    p = NULL;
    while (1) {
        if (currsize)
            allocsize = 2 * currsize;
        if ((SIZE_MAX / 2) < allocsize) {
            fprintf(stderr,
                "size_t limit would be exceeded!\n");
            free(p);
            exit(EXIT_SUCCESS);
        }
        printf("allocating %10ju", (uintmax_t) allocsize);
        fflush(stdout);
        if (p == NULL) {
            tmp = malloc(allocsize);
        } else {
            tmp = realloc(p, allocsize);
        }
        if (tmp == NULL) {
            printf("\nswitching algorithm at %ju bytes\n",
                (uintmax_t) currsize);
            fflush(stdout);
            break;
        }
        p = tmp;
        currsize = allocsize;

        printf(" zeroing");
        fflush(stdout);
        memset(p, 0, currsize);
        printf(" success\n");
        fflush(stdout);
    }

    if (p == NULL || currsize == 0) {
        fprintf(stderr, "Bummer! No allocation is possible.\n");
        exit(EXIT_FAILURE);
    }
    /*
     * Now try allocating repeatedly with 'allocsize', until we
     * fail. When we do, decrease allocsize and loop back.
     */
    allocsize = currsize;
    while (1) {
        if ((SIZE_MAX - allocsize) < currsize || allocsize < 1) {
            printf("Cannot allocate any more memory.\n");
            fflush(stdout);
            break;
        }
        printf("allocating %10ju+%ju",
            (uintmax_t) currsize, (uintmax_t) allocsize);
        fflush(stdout);

        assert(p != NULL);
        tmp = realloc(p, currsize + allocsize);
        if (tmp == NULL) {
            allocsize /= 2;
            printf(" failed, reducing allocsize to %ju\n",
                (uintmax_t) allocsize);
            continue;
        }
        p = tmp;
        currsize += allocsize;

        printf(" zeroing");
        fflush(stdout);
        memset(p, 0, currsize);
        printf(" success\n");
        fflush(stdout);
    }

    printf("Total memory allocated: %ju bytes\n", (uintmax_t) currsize);
    fflush(stdout);
    free(p);
    return EXIT_SUCCESS;
}

When run with an unlimited virtual memory size user-limit, this
will succeed in allocating a *lot* of memory.

But see what happens when user-limits are in place:

| $ ulimit -v 20000
| $ ulimit -a
| core file size (blocks, -c) unlimited
| data seg size (kbytes, -d) 524288
| file size (blocks, -f) unlimited
| max locked memory (kbytes, -l) unlimited
| max memory size (kbytes, -m) unlimited
| open files (-n) 7149
| pipe size (512 bytes, -p) 1
| stack size (kbytes, -s) 65536
| cpu time (seconds, -t) unlimited
| max user processes (-u) 3574
| virtual memory (kbytes, -v) 20000
| $ ./foo
| allocating 1 zeroing success
| allocating 2 zeroing success
| allocating 4 zeroing success
| allocating 8 zeroing success
| allocating 16 zeroing success
| allocating 32 zeroing success
| allocating 64 zeroing success
| allocating 128 zeroing success
| allocating 256 zeroing success
| allocating 512 zeroing success
| allocating 1024 zeroing success
| allocating 2048 zeroing success
| allocating 4096 zeroing success
| allocating 8192 zeroing success
| allocating 16384 zeroing success
| allocating 32768 zeroing success
| allocating 65536 zeroing success
| allocating 131072 zeroing success
| allocating 262144 zeroing success
| allocating 524288 zeroing success
| allocating 1048576 zeroing success
| allocating 2097152 zeroing success
| allocating 4194304 zeroing success
| allocating 8388608
| switching algorithm at 4194304 bytes
| allocating 4194304+4194304 failed, reducing allocsize to 2097152
| allocating 4194304+2097152 failed, reducing allocsize to 1048576
| allocating 4194304+1048576 failed, reducing allocsize to 524288
| allocating 4194304+524288 failed, reducing allocsize to 262144
| allocating 4194304+262144 failed, reducing allocsize to 131072
| allocating 4194304+131072 failed, reducing allocsize to 65536
| allocating 4194304+65536 failed, reducing allocsize to 32768
| allocating 4194304+32768 failed, reducing allocsize to 16384
| allocating 4194304+16384 failed, reducing allocsize to 8192
| allocating 4194304+8192 failed, reducing allocsize to 4096
| allocating 4194304+4096 failed, reducing allocsize to 2048
| allocating 4194304+2048 failed, reducing allocsize to 1024
| allocating 4194304+1024 failed, reducing allocsize to 512
| allocating 4194304+512 failed, reducing allocsize to 256
| allocating 4194304+256 failed, reducing allocsize to 128
| allocating 4194304+128 failed, reducing allocsize to 64
| allocating 4194304+64 failed, reducing allocsize to 32
| allocating 4194304+32 failed, reducing allocsize to 16
| allocating 4194304+16 failed, reducing allocsize to 8
| allocating 4194304+8 failed, reducing allocsize to 4
| allocating 4194304+4 failed, reducing allocsize to 2
| allocating 4194304+2 failed, reducing allocsize to 1
| allocating 4194304+1 failed, reducing allocsize to 0
| Cannot allocate any more memory.
| Total memory allocated: 4194304 bytes
| $

</OT>

Back to more topical stuff: the size_t type can represent the
size of much, much bigger objects, but local configuration
prevents malloc() and realloc() from obtaining that much memory.
 

Yevgen Muntyan

CBFalconer said:
But first you have to get the pointer. If malloc and friends
refuse to give you one, you don't have those pointers to bandy
about.

Of course we assume calloc(1 << 31 + 8, 2) succeeds here.

Yevgen
 

Richard Tobin

Yevgen Muntyan said:
You won't have two pointers, you will only know that given bit
representation denoted some pointer at some point in past.

I see nothing in the standard that prevents me from storing a pointer
by any means that I choose. If I store it by setting a bit in a table
of possible representations, how is that different from storing it in
a variable, writing it to a file, or encoding it in an integer, all of
which are generally agreed to be legal?

-- Richard
 

Yevgen Muntyan

Richard said:
I see nothing in the standard that prevents me from storing a pointer
by any means that I choose. If I store it by setting a bit in a table
of possible representations, how is that different from storing it in
a variable, writing it to a file, or encoding it in an integer, all of
which are generally agreed to be legal?

Good point. Last attempt: you have read-only memory; you can read but
you can't write, so you can't use that table. Somehow I still believe
such a weird implementation is possible (not in a practical sense, of
course). The C standard seems to have made pointers different enough
from plain structures or integers for that to be true.
In any case it's impossible to prove or disprove that such an
implementation exists, and it's unlikely anyone will actually create a
reasonably non-buggy implementation like that :)

Regards,
Yevgen
 

av

the "problem" is not in calloc or malloc but it is in the mathemaical
model ("modello matematico") used for size_t

possible i'm the only one in think that standard C is wrong on
definition of size_t?
 
