
DJ Delorie

Rod Pemberton said:
2) you never built DJGPP demand for nmalloc by submitting it to the DJGPP
archives

He did, there were some buglets that needed tweaking long ago, it's
probably just waiting for someone with write access to review it and
put it in place. Plus backward compatibility (the malloc debug
interface) and all that.
4) DJ has never publicly responded to your posts on nmalloc...

Dude, go WAY back in the djgpp-workers mail archives.
 

CBFalconer

Ben said:
.... snip ...

I assume you're speaking of the segmented 16-bit x86 architecture
as featured in MS-DOS compilers. The object pointers used by
these compilers in some modes were 16 bits wide, and they could
only address 64 kB worth of data. You could use
implementation-specific extensions to address more than 64 kB of
memory, or you could switch your compiler to a mode where
pointers were 32 bits long, but there was no way to address more
than 64 kB of objects with 16-bit pointers and without using C
extensions.

Turbo C 2.01 says the following. In that mode (with some other
options set) I can simply compile virtually any standard C source.
Things will be unwell if the object code exceeds 64 k, but I can
put up with that. near and far are just things the compiler does
internally.

"Compact model: This model has 64K for code, 64K
for stack and static data, and up to 1 MB for
the heap. All functions are near by default and
all data pointers are far by default."

I suspect that this may blow up if function pointers are converted
to void * and back again. However, that is standard conforming.
 

CBFalconer

jacob said:
.... snip ...

Obviously each 16 bit pointer could address only 64K!!!
Mr McIntyre has strong hallucinations when he supposes that
you can have more than 64K with 16 bit pointers!

Emulating Dan Pop's tact again?
 

CBFalconer

Richard said:
.... snip ...

Many people were oppressed by King Henry VII. More suffered under
King Henry VIII.

<OT>
That may also reflect the world population at the time, and the
available oppression methods. The shrubish leader has exceeded
both, IMO. I suspect Adolph Schickelgruber holds the record.

Record candidates should include Genghis Khan, Napoleon,
Robespierre, Stalin, Cromwell, Queen Mary I, Mao, all spammers,
various Czars, Popes, and Roman Emperors. Maybe the results should
be normalized to world population at the time of oppression.
</OT>
 

Bob Martin

in 713360 20070105 054050 CBFalconer said:
<OT>
That may also reflect the world population at the time, and the
available oppression methods. The shrubish leader has exceeded
both, IMO. I suspect Adolph Schickelgruber holds the record.

Record candidates should include Genghis Khan, Napoleon,
Robespierre, Stalin, Cromwell, Queen Mary I, Mao, all spammers,
various Czars, Popes, and Roman Emperors. Maybe the results should
be normalized to world population at the time of oppression.
</OT>

So Adam pushing Eve around would be top of the list?
 

CBFalconer

Richard said:
Bob Martin said:

Other way round. :)

If we count Cain and Abel in either case they achieved a NOL
(normalized oppression level) of 25%. And they are tied by Cain.
Maybe we should limit the field to recorded history. Bible
thumpers can opt-out. Or, if we include a factor for the QOO
(quality of oppression) Cain 'wins' since his QOO is 1.0.
 

Ben Pfaff

CBFalconer said:
Turbo C 2.01 says the following. In that mode (with some other
options set) I can simply compile virtually any standard C source.
Things will be unwell if the object code exceeds 64 k, but I can
put up with that. near and far are just things the compiler does
internally.

"Compact model: This model has 64K for code, 64K
for stack and static data, and up to 1 MB for
the heap. All functions are near by default and
all data pointers are far by default."

I suspect that this may blow up if function pointers are converted
to void * and back again. However, that is standard conforming.

I don't think this contradicts anything I wrote. The compact
model uses 32-bit object pointers, like I said. Function
pointers are different and need not have the same form.
 

CBFalconer

Ben said:
I don't think this contradicts anything I wrote. The compact
model uses 32-bit object pointers, like I said. Function
pointers are different and need not have the same form.

And my point is that no C extensions are needed. Just feed it
portable standard code. I primarily use it to check I haven't
inadvertently relied on an int size larger than 16 bits.
 

Ben Pfaff

CBFalconer said:
And my point is that no C extensions are needed. Just feed it
portable standard code. I primarily use it to check I haven't
inadvertently relied on an int size larger than 16 bits.

That's a non sequitur.

I don't think you've actually followed the discussion. The
original point I followed up on was this, posted by Jacob Navia:
Mark McIntyre wrote:

Excuse me but then the pointer is bigger than 32 bits!!!

Mark McIntyre made the claim that 32-bit pointers could address
more than 2**32 bytes, claiming as evidence an architecture that
resembles 16-bit x86 real mode. I pointed out that in fact, no,
16-bit pointers in such an environment cannot address more than
2**16 bytes. Now you're posting non sequiturs about how 32-bit
pointers can address 2**20 bytes in x86 real mode.
 

Mark McIntyre

I assume you're speaking of the segmented 16-bit x86 architecture
as featured in MS-DOS compilers. The object pointers used by
these compilers in some modes were 16 bits wide, and they could
only address 64 kB worth of data.

The programming manual for the 286, which I have on my shelf, explains
how it can address 1MB memory using 16-bit pointers.
You could use
implementation-specific extensions to address more than 64 kB of
memory,

precisely. This doesn't contradict what I said, by the way.

--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
 

Mark McIntyre

Mark McIntyre made the claim that 32-bit pointers could address
more than 2**32 bytes, claiming as evidence an architecture that
resembles 16-bit x86 real mode. I pointed out that in fact, no,
16-bit pointers in such an environment cannot address more than
2**16 bytes.

The point is, they can - but not as a single contiguous chunk. Either
you've forgotten how 286's worked, or you didn't have much time to
play with 'em.

--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
 

Mark McIntyre

Mr McIntyre has strong hallucinations when he supposes that
you can have more than 64K with 16 bit pointers!

Amazing then, that they ever bothered to build machines with more than
64k memory, since nothing could address it.

Perhaps that's why someone or other said he didn't think anyone would
ever need more than 64K. He was wrong too...


--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
 

Ian Collins

Mark said:
Amazing then, that they ever bothered to build machines with more than
64k memory, since nothing could address it.
Jacob didn't say that; his point was that you could only address 64K in
one block with a pointer comprising a 16-bit segment and 16-bit offset.
You can have as many blocks as there are available segment registers.
 

Ben Pfaff

Mark McIntyre said:
The programming manual for the 286, which I have on my shelf, explains
how it can address 1MB memory using 16-bit pointers.

Yes. But it doesn't explain how you can do so in standard C,
because you can't.
precisely. This doesn't contradict what I said, by the way.

By default, we talk about portable standard C here. If you want
to talk about pointers outside that context, please be specific,
so that others won't be confused.
 

Ben Pfaff

Mark McIntyre said:
The point is, they can - but not as a single contiguous chunk. Either
you've forgotten how 286's worked, or you didn't have much time to
play with 'em.

A 16-bit standard C pointer cannot address more than 2**16
bytes, period. If you want to talk about nonstandard,
nonportable C constructs, please say so, so that everyone else
knows that you are doing so.
 

Joe Wright

Mark said:
Amazing then, that they ever bothered to build machines with more than
64k memory, since nothing could address it.

Perhaps that's why someone or other said he didn't think anyone would
ever need more than 64K. He was wrong too...
I think that was Bill Gates in 1981 or so commenting on the PC having
640K of memory instead of the lowly 64K of CP/M systems. "Nobody will
ever need more than 640K of memory!" will live in infamy.

My current desktop has 1GB of main memory. I got into this Internet
stuff in 1988 with netcom.com in California. I was (e-mail address removed), one
of the first 100 or so subscribers. Our netcom.com was a Sun workstation
with a 56Kbps link to sun.com just across the street in Silicon Valley.

Disk drives of the day were Seagate SCSI of less than 100 MB capacity.
To have a Gigabyte of disk storage on your Internet server was an
impossible dream. Today we can buy a 200GB drive for $95.

Everybody predicting a limited future always catches it in the neck.
 

Cesar Rabak

Mark McIntyre wrote:
The programming manual for the 286, which I have on my shelf, explains
how it can address 1MB memory using 16-bit pointers.

Would you mind explaining to us how? I'm afraid you did not grok some
aspect of it very well...
 

Cesar Rabak

Mark McIntyre wrote:
The point is, they can - but not as a single contiguous chunk. Either
you've forgotten how 286's worked, or you didn't have much time to
play with 'em.
But then it is no longer Standard compliant, right?
 

av

I think that was Bill Gates in 1981 or so commenting on the PC having
640K of memory instead of the lowly 64K of CP/M systems. "Nobody will
ever need more than 640K of memory!" will live in infamy.

My current desktop has 1GB of main memory. I got into this Internet
stuff in 1988 with netcom.com in California. I was (e-mail address removed), one
of the first 100 or so subscribers. Our netcom.com was a Sun workstation
with a 56Kbps link to sun.com just across the street in Silicon Valley.

Disk drives of the day were Seagate SCSI of less than 100 MB capacity.
To have a Gigabyte of disk storage on your Internet server was an
impossible dream. Today we can buy a 200GB drive for $95.

Everybody predicting a limited future always catches it in the neck.

but how is all that space used?
is it used well?
 
