Raj Pashwar said:
As per subject-line, thank-you
David T. Ashley said:
No [modern PC] OS is likely to have the capability to hand the run-time
library a chunk of memory smaller than the page size supported by the
memory management hardware. I don't know what this figure is nowadays,
but I'd guess maybe between 256K and 1M.
You typically wouldn't be able to allocate, say, a 30-byte chunk of
memory directly from the operating system.
The run-time library is likely to take these large chunks allocated by
the operating system and subdivide them so it can hand out smaller
chunks via malloc().
There is no guarantee that the algorithm used by malloc() and free()
would allow any memory to be returned to the OS until every block had
been free()'d.
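To make that concrete, here is a minimal sketch (a hypothetical toy
allocator, not how any real libc works) that grabs one large region from
the OS via POSIX mmap() and bump-allocates small blocks out of it. Note
that it never returns anything to the OS, which illustrates the last
point above:

#include <stddef.h>
#include <stdio.h>
#include <sys/mman.h>  /* POSIX mmap(); MAP_ANONYMOUS is MAP_ANON on some systems */

#define ARENA_SIZE (1 << 20)   /* ask the OS for 1 MiB up front */

static unsigned char *arena;   /* big chunk obtained from the OS */
static size_t arena_used;      /* bump-pointer offset into it */

/* Toy bump allocator: subdivides one OS-provided region into
   small blocks without further system calls. */
static void *toy_malloc(size_t n)
{
    if (!arena) {
        void *mem = mmap(NULL, ARENA_SIZE, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (mem == MAP_FAILED)
            return NULL;
        arena = mem;
    }
    n = (n + 15) & ~(size_t)15;       /* keep 16-byte alignment */
    if (arena_used + n > ARENA_SIZE)
        return NULL;                  /* arena exhausted */
    void *p = arena + arena_used;
    arena_used += n;
    return p;
}

int main(void)
{
    char *p = toy_malloc(30);         /* 30 bytes, no syscall per call */
    printf("30-byte block at %p\n", (void *)p);
    return 0;
}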
On x86, the page sizes are 4kB (normal) and 2MB/4MB (large), with the
latter depending on whether PAE is enabled. Other architectures vary.
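For what it's worth, you can ask the system directly; a quick POSIX
check (sysconf() is standard, and the printed figure is typically 4096
on x86):

#include <stdio.h>
#include <unistd.h>   /* POSIX sysconf() */

int main(void)
{
    /* Base page size the kernel hands out to user space. */
    long page = sysconf(_SC_PAGESIZE);
    printf("page size: %ld bytes\n", page);
    return 0;
}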
I believe you, but I'm somewhat surprised at this figure. Hardware
memory management is usually a tradeoff between silicon complexity,
the complexity of configuring the hardware from software, and
granularity.
I figured with a modern PC having maybe 4G of memory, you might want
to divide the memory into maybe 1,000 - 10,000 pages, so I would have
figured page sizes in the range of 512K - 4M. 4K surprises me.
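A quick check of the arithmetic (assumptions are just the figures
above): 4G of RAM at those page sizes gives far more than 10,000 pages
at 4K.

#include <stdio.h>

int main(void)
{
    /* Worked numbers for the figures above (illustrative only). */
    unsigned long long ram = 4ULL << 30;  /* 4 GiB of RAM */
    printf("4 KiB pages:   %llu pages\n", ram / (4ULL << 10));   /* 1048576 */
    printf("512 KiB pages: %llu pages\n", ram / (512ULL << 10)); /* 8192 */
    printf("4 MiB pages:   %llu pages\n", ram / (4ULL << 20));   /* 1024 */
    return 0;
}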
I'm too lazy to look it all up. The x86 world at the low levels has
an entry cost. Right now I'm in the ARM world.
On 16-May-12 12:29, David T. Ashley wrote:
I figured with a modern PC having maybe 4G of memory, you might want
to divide the memory into maybe 1,000 - 10,000 pages, so I would have
figured page sizes in the range of 512K - 4M. 4K surprises me.
At 1,000 pages, the external fragmentation would be bad. The nice thing
about small page sizes is that it is easier for a process to return
unneeded pages to the operating system: it's easier to find a
contiguous, page-aligned 4K chunk of memory which is not in use and
free it than to find such a 4MB chunk.
But never mind that; the real problem is that address spaces do not
share pages, and neither do distinct memory mappings within an address
space. Every address space and every distinct mapping would eat into
your allotment of 1,000 pages, quickly gobbling up all of it.
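This is also why an allocator can only hand back whole, page-aligned
runs. A Linux-specific sketch using madvise(); the MADV_DONTNEED
behavior for anonymous mappings assumed here is Linux's:

#include <stdio.h>
#include <sys/mman.h>   /* mmap(), madvise(): POSIX/Linux */
#include <unistd.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);
    size_t len = 16 * (size_t)page;   /* map 16 pages */
    unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return 1;
    p[0] = 1;   /* touch a page so it is actually backed */

    /* Release the middle 4 pages back to the kernel. This only
       works because the range is page-aligned and a whole number
       of pages; a 30-byte hole could not be returned. */
    if (madvise(p + 4 * page, 4 * (size_t)page, MADV_DONTNEED) != 0)
        perror("madvise");
    else
        printf("returned 4 pages to the kernel\n");

    munmap(p, len);
    return 0;
}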
I figured with a modern PC having maybe 4G of memory, you might want
to divide the memory into maybe 1,000 - 10,000 pages, so I would have
figured page sizes in the range of 512K - 4M. 4K surprises me.
The OS can have a larger page size. It's easy to make 16 KB pages on
hardware that is designed for 4 KB pages, but not the other way around.
The size of a 'segment' [the unit in which the memory manager requests
more space from the OS] would be more like 2 MB.
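A sketch of that layering, using the post's figures and Linux mmap()
for the OS request; the 16 KB 'page' exists only in the allocator's
bookkeeping:

#include <stdio.h>
#include <sys/mman.h>   /* mmap(): POSIX/Linux */

#define SEGMENT   (2u << 20)   /* 2 MB requested from the OS at once */
#define SOFT_PAGE (16u << 10)  /* 16 KB software page = 4 x 4 KB     */

int main(void)
{
    /* One OS request... */
    unsigned char *seg = mmap(NULL, SEGMENT, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (seg == MAP_FAILED)
        return 1;

    /* ...subdivided into 128 software pages by pure arithmetic.
       The hardware still maps 4 KB pages underneath. */
    for (unsigned i = 0; i < SEGMENT / SOFT_PAGE; i++) {
        unsigned char *soft_page = seg + (size_t)i * SOFT_PAGE;
        soft_page[0] = 0;   /* touch it */
    }
    printf("%u software pages per segment\n", SEGMENT / SOFT_PAGE);
    return 0;
}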
<off-topic> Consider the silicon cost of things like a TLB, and
the run-time cost of a TLB miss, and realize that a large page size
can reduce both at the same time. Many systems nowadays support several
page sizes simultaneously, just for better TLB use. </off-topic>
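On Linux you can request a large hardware page explicitly; this sketch
assumes MAP_HUGETLB support and huge pages already reserved by the
administrator (e.g. via /proc/sys/vm/nr_hugepages), and fails
gracefully otherwise:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>   /* MAP_HUGETLB is Linux-specific */

int main(void)
{
    size_t len = 2u << 20;   /* one 2 MB huge page */
    /* MAP_HUGETLB asks the kernel to back this mapping with a large
       hardware page, so the whole 2 MB costs a single TLB entry. */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap(MAP_HUGETLB)");   /* no huge pages reserved? */
        return 1;
    }
    memset(p, 0, len);
    puts("got a 2 MB huge page");
    munmap(p, len);
    return 0;
}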
OTOH, as soon as you have to start swapping those large pages to and
from disk, you blow away the gains from more efficient TLB usage.
Except for workloads where a single process randomly accesses enough
active pages to overflow the TLBs _and_ the machine has enough RAM that
those pages can be locked, large pages don't seem to make sense. It
apparently occurred to the i386 designers to allow for that case, and
I hear it's common for large database servers, but that's about it.
It seems advantageous to have intermediate page sizes, but that also
adds hardware complexity--and forces the OS to decide what page size to
use for what, which it will inevitably get wrong.
True. "Seven decimal orders of magnitude" has a charming
way of obliterating small gains and losses.
(In other words: As soon as you swap, you've already blown
your whole performance story. CPU ~= 2GHz, disk ~= 0.0000001GHz,
CPU runs ~20,000,000 cycles while waiting for one, count them, one
disk operation. If each CPU cycle were one breath at an adult
rate of <20 respirations per minute, one disk operation would
take upwards of two years. Hell, by the time the disk finishes
the cache is stony cold; the CPU has "forgotten" what it was
doing. What were *you* doing on May 18, 2010, and what did you
have for dinner on that day?)
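The arithmetic behind those figures, using the post's ballpark numbers
(2 GHz CPU, ~10 ms per disk operation, 20 breaths per minute):

#include <stdio.h>

int main(void)
{
    double cpu_hz  = 2e9;    /* ~2 GHz CPU                 */
    double disk_s  = 0.01;   /* ~10 ms per disk operation  */
    double breaths = 20.0;   /* respirations per minute    */

    double stall_cycles = cpu_hz * disk_s;         /* 2e7 cycles lost    */
    double minutes      = stall_cycles / breaths;  /* 1e6 min of breaths */

    printf("cycles per disk op: %.0f\n", stall_cycles);
    printf("one breath per cycle: %.1f years\n",
           minutes / (60.0 * 24.0 * 365.0));       /* ~1.9 years */
    return 0;
}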
Stephen Sprunk said:
There's that problem too, but I was thinking more along the lines of 4MB
pages taking ~1000 times as long as 4kB pages to evict and reload.
I question whether this is a good idea.
It is a good idea. These are -disk drives-. If your drive has a decent
buffer, the time to get the additional pages could be no more than the
time to transfer the data over the bus.
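Rough numbers for that claim; the drive figures here (~10 ms seek,
~100 MB/s sustained transfer) are illustrative assumptions, not from
the thread:

#include <stdio.h>

int main(void)
{
    double seek_s = 0.010;   /* ~10 ms to position the head  */
    double rate   = 100e6;   /* ~100 MB/s sustained transfer */

    double small = seek_s + 4e3 / rate;   /* one 4 kB page */
    double large = seek_s + 4e6 / rate;   /* one 4 MB page */

    /* Seek dominates, so a 1000x larger payload is nowhere near
       1000x the time once the head is already in position. */
    printf("4 kB page: %.2f ms\n", small * 1e3);   /* ~10.04 ms */
    printf("4 MB page: %.2f ms\n", large * 1e3);   /* ~50 ms    */
    printf("ratio: ~%.0fx\n", large / small);      /* ~5x       */
    return 0;
}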