Garbage collection


lawrence.jones

Paul Hsieh said:
If C was merely augmented with total memory consumption statistic
functions and/or heap walking mechanisms and heap checking, all the
main complaints about C's memory management problems would be
immediately addressed.

And a number of very fast and very useful allocation methodologies
would no longer be valid.

-Larry Jones

I hate it when they look at me that way. -- Calvin
 

Chris Torek

I've little idea about GC or how it works ...
What happens when [the only pointer to a successful malloc() result]
goes out of scope?
What about a linked list when it's no longer needed? How does GC know that,
do you tell it? And it knows where all the pointers in each node are. ...

There are a bunch of different ways to implement garbage collectors,
with different tradeoffs. To make one that works for all cases,
though, including circularly-linked data structures, one ultimately
has to compute a sort of "reachability graph".

Suppose, for instance, we have a table somewhere of "all allocated
blocks" (whether this is a list of malloc()s made into an actual
table or tree or whatever, or a set of pages for a page-based
collector, just picture it mentally as a regular old table, as if
it were written out on a big sheet of paper). The table has whatever
entries it needs to keep track of addresses and sizes -- if it is
a malloc() table, that might well be all it needs -- except that
it has a single extra bit, which we can call "referenced".

To do a full GC, we run through the table and clear all the
"referenced" bits. Then, we use some sort of System Magic to find
all "root pointers". In C, this could be "every address register,
plus every valid address" (on a machine that has those), or "those
parts used by the C compiler, ignoring things that might be used
by other languages" (this will probably speed up the GC -- if we
know the C compiler never stores a pointer in registers 5, 6, and
7, we can ignore them; if we know the C compiler never stores
pointers in "text segments" we can skip those, and so on). In
non-C languages, it may be much easier to find all root pointers
(for instance a Lisp system may have just one or a few such values).
Then we hit the really tricky part: for each root pointer, if it
points into some entry in the table, we set the "referenced" bit
to mark that entry as used, and -- recursively -- check the entire
memory area described by that table entry for more pointers. (If
the referenced bit is already set, we assume that the memory area
has been scanned earlier, and skip this scan. This prevents infinite
recursion, if a malloc()ed block contains pointers to itself.)

When we have finished marking each entry according to each root
pointer and all "reachable pointers" found via that pointer, we
make one final pass through the table, freeing those table entries
whose bits are still clear.

In other words, in code form, and making even more assumptions
(that there is just one "kind" of pointer -- if there are three
kinds, VALID_PTR() and/or find_entry_containing() become complicated):

void __scan(TABLE_ENTRY *);
TABLE_ENTRY *__find_entry_containing(POINTER_TYPE);

__GC() {
    TABLE_ENTRY *e;
    POINTER_TYPE p;

    for (e in all table entries)
        e->ref = 0;
    for (p in all possible root pointers)
        if (VALID_PTR(p) && (e = __find_entry_containing(p)) != NULL)
            __scan(e);
    for (e in all table entries)
        if (!e->ref)
            __internal_free(e);
}

void __scan(TABLE_ENTRY *e) {
    TABLE_ENTRY *inner;
    POINTER_TYPE p;

    if (e->ref)
        return; /* nothing to do */
    e->ref = 1;
    for (p taking on all possible valid pointers based on e)
        if ((inner = __find_entry_containing(p)) != NULL)
            __scan(inner);
}

As you might imagine, all of these are pretty tricky to implement
correctly, and the scanning can take a very long time. This is
why there are so many kinds of GC systems.
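The table-plus-mark-bit scheme described above can be sketched in compilable form. This is a minimal, hypothetical model (the names gc_alloc, find_entry, scan, and gc_collect are invented here, and root pointers are passed in explicitly instead of being discovered by System Magic), not how any real collector is organized:

```c
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Simplified mark-and-sweep over an explicit allocation table.
 * Real collectors discover roots from registers and stacks; here
 * the caller supplies the root pointers directly. */

#define MAX_BLOCKS 64

typedef struct {
    void  *addr;   /* start of the allocated block */
    size_t size;   /* size in bytes */
    int    ref;    /* the "referenced" mark bit */
    int    live;   /* slot in use */
} TableEntry;

static TableEntry table[MAX_BLOCKS];

static void *gc_alloc(size_t size) {
    for (int i = 0; i < MAX_BLOCKS; i++) {
        if (!table[i].live) {
            table[i].addr = calloc(1, size);  /* zeroed, so no stale "pointers" */
            table[i].size = size;
            table[i].live = 1;
            return table[i].addr;
        }
    }
    return NULL;
}

/* The __find_entry_containing() of the pseudocode: does p point
 * into any live block? */
static TableEntry *find_entry(void *p) {
    for (int i = 0; i < MAX_BLOCKS; i++)
        if (table[i].live &&
            (char *)p >= (char *)table[i].addr &&
            (char *)p <  (char *)table[i].addr + table[i].size)
            return &table[i];
    return NULL;
}

/* Conservatively scan a block's payload: any word might be a pointer. */
static void scan(TableEntry *e) {
    if (e->ref)
        return;        /* already scanned -- this stops infinite recursion */
    e->ref = 1;
    char *base = e->addr;
    for (size_t off = 0; off + sizeof(void *) <= e->size; off += sizeof(void *)) {
        void *p;
        memcpy(&p, base + off, sizeof p);
        TableEntry *inner = find_entry(p);
        if (inner)
            scan(inner);
    }
}

/* Full collection: clear all marks, mark from roots, sweep the rest.
 * Returns the number of blocks freed. */
static size_t gc_collect(void **roots, size_t nroots) {
    size_t freed = 0;
    for (int i = 0; i < MAX_BLOCKS; i++)
        table[i].ref = 0;
    for (size_t i = 0; i < nroots; i++) {
        TableEntry *e = find_entry(roots[i]);
        if (e)
            scan(e);
    }
    for (int i = 0; i < MAX_BLOCKS; i++)
        if (table[i].live && !table[i].ref) {
            free(table[i].addr);
            table[i].live = 0;
            freed++;
        }
    return freed;
}
```

Note how scan() sets the mark bit before recursing: that is exactly what prevents infinite recursion on blocks that point to themselves or form cycles.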
 

Paul Hsieh

And a number of very fast and very useful allocation methodologies
would no longer be valid.

It's easy to talk, isn't it? If you can perform free() and realloc()
correctly, that means you, in some sense, *know* what those memory
contents are. All the credible strategies I have looked at (bitmaps,
buddy systems, and plain header/footer linkage systems) can, one way or
another, support *all* the useful relevant extensions with minimal
performance impact.

People are voting with their feet with this programming language, and
it's because of false dichotomies like this. If there's some marginal
crazy system that can't support this, it's not going to be updated to
the latest standard anyway. It's like you guys didn't take away any
lesson from the lack of C99 adoption.
 

cr88192

user923005 said:
Looking at the past is always easy.
Prophecy proves more difficult.

yes, however, IME it is not always so difficult...

my approach consists of taking the current observed trends, and "running
them forwards".

the reason I can try to predict so far ahead is the incredibly slow
rate of change WRT programming languages.


now, my claim here is not accuracy, only a vague guess...
however, even vague guesses can be worthwhile...


something different could happen, so then one has to consider, how likely is
it that it will happen this way, and how likely it is that it will happen
some other way...

my understanding of the trends comes from having lived through many of
the changes, and having watched them progress at the terribly slow rate
at which they are going, so I can make a guess as to what will happen
within a related timeframe.

I think, the next 10 years are not too hard to guess (ignoring any
catastrophic events or revolutionary changes, but these are rare enough to
be ignored).

moving much outside this, say, 15 or 20 years, requires a good deal more
speculation.


the reason for a decline in windows and rise of linux would be due to the
current decline in windows and current rise in linux. I had just assumed
that these continue at the current rates.

the reason for a decline in the process model would be because of the rise
of VM-like developments (however, it is very possible that processes may
remain as a vestigial feature, and thus not go away).

as for filesystems, the prediction is that they move from being pure
hierarchies to being annotated with so much metadata that they are, in
effect, based on the network model rather than hierarchical.


other guesses:

I can also claim, in a similar line of thought, that most likely, in 10
years, x86-64 will be the dominant architecture.

estimating from current curves, computers will likely have around 256GB RAM,
and HDs will be around 125TB...

the model doesn't hold up so well WRT processor power, a crude guess is that
there will be around 64 cores, processing power being somewhere between 1.1
and 1.5 TFLOP (assuming only a gradual increase in terms of per-core power
and that the per-core complexity will go up a little faster than the
transistor density). an uncertainty here is in the scope and nature of
future vector extensions, so my guess is likely to be conservative...

3D lithography and stacked chips, along with reaching a maximal
transistor density, are other factors that could throw off predictions
(currently based simply on mental curve approximation).
 

user923005

yes, however, IME it is not always so difficult...

my approach consists of taking the current observed trends, and "running
them forwards".

the reason I can try to predict so far ahead is the incredibly slow
rate of change WRT programming languages.

Exactly. COBOL and Fortran are still going strong.
now, my claim here is not accuracy, only a vague guess...
however, even vague guesses can be worthwhile...

That, and $3.75 will get you a cup of coffee.
something different could happen, so then one has to consider, how likely is
it that it will happen this way, and how likely it is that it will happen
some other way...

my understanding of the trends comes from having lived through many of
the changes, and having watched them progress at the terribly slow rate
at which they are going, so I can make a guess as to what will happen
within a related timeframe.

I think, the next 10 years are not too hard to guess (ignoring any
catastrophic events or revolutionary changes, but these are rare enough to
be ignored).

I think that the next ten years will be an even bigger surprise than
the previous ten years, and the surprises will accelerate. Since
total knowledge doubles every 5 years now, and the trend is going
exponential-exponential, I think that projecting what will be is much
harder than you think.
moving much outside this, say, 15 or 20 years, requires a good deal more
speculation.

I think that 6 months from now is difficult, even for stock prices,
much less industry trends. We can mathematically project these things
but you will see that the prediction and confidence intervals bell out
in an absurd manner after the last data point.

the reason for a decline in windows and rise of linux would be due to the
current decline in windows and current rise in linux. I had just assumed
that these continue at the current rates.

I guess that 15 years from now Windows will still dominate, unless the
Mac takes over or Linux does really well. Linux is not a desktop
force, but it is making server inroads.
the reason for a decline in the process model would be because of the rise
of VM-like developments (however, it is very possible that processes may
remain as a vestigial feature, and thus not go away).

VM developments are nice, but we lose horsepower. I'm not at all
sure it is the right model for computation.
as for filesystems, the prediction is that they move from being pure
hierarchies to being annotated with so much metadata that they are, in
effect, based on the network model rather than hierarchical.

I think that file systems should be database systems, like the AS/400
and even OpenVMS have.
other guesses:

I can also claim, in a similar line of thought, that most likely, in 10
years, x86-64 will be the dominant architecture.

I guess five years, but maybe not. We are going 64 bit here now, both
with machines and OS, but it's still a few weeks off. However, I
guess that our office is atypical.
estimating from current curves, computers will likely have around 256GB RAM,
and HDs will be around 125TB...

256 GB RAM is way too low for 10 years from now. My guess would be 1 TB,
give or take a factor of 2.

I suspect we won't be using platter based hard drive technology ten
years from now. I know of some alternate technologies that will store
1 TB on one square cm and so something like that will take over once
the disk drive manufacturers have depreciated their current capital.
the model doesn't hold up so well WRT processor power, a crude guess is that
there will be around 64 cores, processing power being somewhere between 1.1
and 1.5 TFLOP (assuming only a gradual increase in terms of per-core power
and that the per-core complexity will go up a little faster than the
transistor density). an uncertainty here is in the scope and nature of
future vector extensions, so my guess is likely to be conservative...

I don't know if core count will be the thing that pushes technology or
some completely new idea.
It went like this before:
Mechanical switch
Relay
Vacuum tube
Transistor
IC
VLSI
3D lithography and stacked chips, along with reaching a maximal
transistor density, are other factors that could throw off predictions
(currently based simply on mental curve approximation).

I guess something new will come along right about the time that
silicon can no longer keep up. It always has before.

We get ourselves into trouble when we try to predict the future. Look
at the economic analysis by Marx and Engels. It was very good. But
their predictions on what the future held were not so good.

I think that we have a hard time knowing for sure about tomorrow.
Next week is a stretch. Next year is a guess. Five years is
speculation. Ten years is fantasy. Twenty years is just being silly.

IMO-YMMV

Now, is there some C content in here somewhere? Oh, right -- C is
going to lose force. I predict:
1. C will lose force.
OR
2. C will gain force.
OR
3. C will stay the same.

That's trichotomy for you. And I can guarantee that I am right about
it.
 

Keith Thompson

jacob navia said:
Please, that stuff appears every time I mention the GC here.

I posted it in direct response to a discussion about whether GC breaks
C standard conformance.
"I *could* store pointers in disk and read it later, so the GC
is not conforming"

Storing pointers on disk is not the only case I mentioned.

I can easily imagine a program that encrypts some in-memory data
structure and decrypts it later, to protect against snooping by other
users on the machine. That data structure could easily include the
only pointers to some chunk of memory outside the encrypted region.
You know that this has no practical significance at all. Since a
conforming program that stores pointers in the disk is using
malloc/free and that system is still available, those programs
would CONTINUE TO RUN!

The only case where the program would NOT run is if they use the GC
allocated pointers and put them in the disk!

It's something you have to think about when deciding to use GC.
That's all.

Now that you mention it, I think the point about malloc vs. gc_malloc
(sp?) was brought up in this context before. If memory allocated by
malloc isn't garbage-collected, then there may not be a conformance
issue; the behavior of gc_malloc and of memory allocated by it is
implementation-defined, and the standard doesn't impose the same
requirements on it as it does on malloc-created pointers and what they
point to. So I may have been mistaken. I've participated in a lot of
discussions here; I do occasionally forget some of the details.

Even so, a programmer using GC still has to be aware that there are
some (admittedly exotic) things that he can't do with the resulting
pointers.

And even if I made a minor error, that is no excuse for your
insufferable rudeness. If you're bored by my bringing it up in
response to somebody else's question, then DON'T ANSWER. I wasn't
talking to you anyway.

[snip]
 

Ian Collins

cr88192 said:
I can also claim, in a similar line of thought that, most likely, in 10
years, x86-64 will most likely be the dominant architecture.
It already is, at least on the laptop, desktop and low to mid range
server market. You'd be hard pushed to find a new 32 bit machine these
days.
 

cr88192

user923005 said:
Exactly. COBOL and Fortran are still going strong.

yes, but they are still less dominant than they were...

assembler is still going strong as well FWIW...

That, and $3.75 will get you a cup of coffee.

depends; I have not really been to cafes.
I can get steel cans of coffee 2 for $1...

I think it is $0.75 for coffee in a plastic bottle.
all this is at a nearby convenience store...

I think that the next ten years will be an even bigger surprise than
the previous ten years, and the surprises will accelerate. Since
total knowledge doubles every 5 years now, and the trend is going
exponential-exponential, I think that projecting what will be is much
harder than you think.

yeah.

well, one can wait and see I guess...

I think that 6 months from now is difficult, even for stock prices,
much less industry trends. We can mathematically project these things
but you will see that the prediction and confidence intervals bell out
in an absurd manner after the last data point.

stocks are likely to be much harder I think, considering that they vary
chaotically and have strong feedback properties, rather than gradually over
a long period of time...

my guess is that predictions get far worse the further one moves into the
future, which is why I can speculate 10 years, but I don't really make any
solid claims for 20 years.

in the reverse direction, I can assert with pretty good confidence that
within 6 months or 1 year that the programming-language landscape will be
much the same as it is right now.

I guess that 15 years from now Windows will still dominate, unless the
Mac takes over or Linux does really well. Linux is not a desktop
force, but it is making server inroads.

Windows dominance is possible, but unless recent trends change, I don't
expect it to last.

I also personally doubt that Mac is going to regain dominance (though it
can't really be ruled out either).
the great problem with Mac is that there are not many good reasons for
people TO use it, so it seems most likely that they will not.

at this point, Linux is not a desktop force (or even a very good desktop
OS), however, the Linux driver support situation is steadily improving, and
opensource apps are gaining a strong footing vs commercial apps (for
example, at my college they use OpenOffice rather than MS Office, ...).

as such, it may be not too long before Windows and Linux are "comparable" in
many respects, and if MS throws out much more total crap (like they have
done with Vista, ...), they may just end up convincing a lot more people to
make the switch...

VM developments are nice, but we lose horsepower. I'm not at all
sure it is the right model for computation.

I agree, technically.

VMs are not necessarily the "best" idea, but they may well continue to gain
sway over more conventional static-compilation approaches in many cases.
however, this was more in my longer-term speculation (I still expect static
compilation will be dominant in 10 years, but VMs and hybrid approaches may
become common as well).

I expect there will be a lot more hybrid apps/VMs (usually, with part of the
app being statically compiled, and part of the app running in a VM), than
either purely statically compiled, or purely VM-based apps.

however, I don't really know in which direction this will go (or which exact
form things will take).

I think that file systems should be database systems, like the AS/400
and even OpenVMS have.

I doubt, however, that something like the AS/400 or OpenVMS will gain
sway; more likely, it will be conventional filesystems with some huge
amount of crufty metadata tacked on, probably with features like
union directories, ... fouling up what had formerly been a clean
hierarchical filesystem.

some of this had already been added to Linux, and may change primarily in
the way of becoming "standard" filesystem features, in much the same way as
long filenames, unicode filenames, and symbolic links...

notice on a modern Linux distro just how long the list of mounts can end up
being...

I guess five years, but maybe not. We are going 64 bit here now, both
with machines and OS, but it's still a few weeks off. However, I
guess that our office is atypical.

possibly, or at least this will be when the "bulk" transition is
completed...

it has been about 12 years since 32-bit OSs became the norm, but we still
have some 16-bit apps floating around...

in 10 years, likely x86-64 will still be dominant, but maybe there will be a
competitor by this point or a transition to a new major replacement...

256 GB RAM is way too low for 10 years from now. My guess would be 1 TB,
give or take a factor of 2.

it was a linear guess based on vague memory.

in 1998, a new desktop came with like 64MB RAM and a P2, now a desktop can
come with 4GB.
so, my estimate was linear...

I suspect we won't be using platter based hard drive technology ten
years from now. I know of some alternate technologies that will store
1 TB on one square cm and so something like that will take over once
the disk drive manufacturers have depreciated their current capital.

yeah. I was ignoring the technology, again, guessing based on linear
estimation from past trends.
as for current magnetic recording, AFAIK they are using tricks like vertical
polarization and have multi-layer magnetic recording, ...

I was just making a crude guess that they continue increasing the density
on magnetic platters much as they have done for the past few decades,
never mind that by the time magnetic platters could pull off densities
like this, the underlying recording will probably work very differently.

for example, rather than polarizing specific points in the medium, the
recording will consist of a good number of likely overlapping magnetic
fields, and likely be geometric rather than linear (for example, each point
represents a 3D vector or unit-quaternion value rather than a scalar). the
encoding and decoding electronics then go about converting chunks of data
into masses of overlapping geometric waveforms (data being encoded in terms
of both the orientation and the nature of the paths taken around the surface
of a 4D unit sphere...).

note that a single head would be used, but would likely consist of 4+
directional magnetic sensors (a tripod and a vertical sensor), the
orientation measured via triangulation or similar (note that a transform
would be needed, since the sensor axes would be non-orthogonal, ...).
additional sensors could allow a vertical component to be added as well
(basically, we could then have 6 axes per track, XYZW VL, where XYZW form a
Quaternion, V represents a limited-range vertical component, possibly
allowing encoding arcs or spirals, or stacking spheres, or similar, and L
represents the linear position along the track).

so, I suspect that, yes, we can squeeze a little more density out of these
here magnetic platters (currently, we only exploit 2 magnetic axes, or 3 in
some newer drives...).


I don't know if core count will be the thing that pushes technology or
some completely new idea.
It went like this before:
Mechanical switch
Relay
Vacuum tube
Transistor
IC
VLSI

I don't really know here...

I guess something new will come along right about the time that
silicon can no longer keep up. It always has before.

yeah.


We get ourselves into trouble when we try to predict the future. Look
at the economic analysis by Marx and Engels. It was very good. But
their predictions on what the future held were not so good.

IMO, they screwed up really hard in that they analyzed everything, and then
set out to rework everything in their own image. naturally, that does not
go over so well...

personally I have a lot more respect for Nietzsche...

I think that we have a hard time knowing for sure about tomorrow.
Next week is a stretch. Next year is a guess. Five years is
speculation. Ten years is fantasy. Twenty years is just being silly.

yes, but even for being speculative, there can be at least some merit...

of course, note that I personally tend to use bottom-up planning, rather
than top-down planning, so my approaches tend to be a lot more tolerant to
prediction errors (in my case, I often assume that predictions are likely to
be incorrect anyways, so it is always good to keep a good fallback handy in
case things work out differently than expected...).

IMO-YMMV

Now, is there some C content in here somewhere? Oh, right -- C is
going to lose force. I predict:
1. C will lose force.
OR
2. C will gain force.
OR
3. C will stay the same.

That's trichotomy for you. And I can guarantee that I am right about
it.

2 is unlikely (it is already near the top), as is 3 (the conditions that
were ideal when C was going strong are gradually becoming less the case).

assuming 1, we can then guess from the available most-likely options.
 

Friedrich Dominicus

Bartc said:
What happens in this code fragment:

void fn(void) {
    int *p = malloc(1236);
}

when p goes out of scope? (Presumably some replacement for malloc() is
used.)
I think this one is a really simple example, and I would be very
surprised if any GC would fail on this simple thing. As you can see, p
can never be reached after leaving the function, so it's clear that
it's "free" to be collected.

The point with GC is that you'd better have some language support. C
does not support you with anything. As Ken has pointed out, you can
take apart a pointer and reconstruct it (hopefully one knows how to
do that). But I cannot see how the simple cases can be a problem;
if they were, then the Boehm-Weiser GC could not work at all. But it
works quite well.

So I'd argue that as long as you do not play "dirty" tricks with your
pointers, you can assume that the GC works. Of course, if anyone who is
against GC wouldn't mind posting a very simple example of where this
failed, I really would be curious to learn about it.

I for my part can just see that the Boehm-Weiser GC is used in very
many different packages, and it seems that it works flawlessly. If I had
the chance of using it, I would not hesitate to do so.

For me this is one part of language development where the "machine"
can do better than me, and I'm very happy to give that task to it.


What about a linked list when it's no longer needed? How does GC know that,
do you tell it? And it knows where all the pointers in each node are. I can
think of a dozen ways to confuse a GC so it all sounds like magic to me, if
it works.
Well, if you keep track of every allocation, then you know where your
pointers are; after that it's "just" traversing them all, and if you find
some are no longer reachable, it's safe to throw them away. Of course
bugs can and will happen, but I'd argue that after nearly 50 years of
GCs in diverse languages, the big bummers are gone. I for my part would
trust any application "controlled by GC" more than any somewhat larger
new C program doing all of that by "hand".

But the nice thing is that you have the choice (well, sometimes you
don't); but if I think of a well-known OS and its many "memory
allocation schemes", I start to shiver. It's a miracle that it
works, and it's IMHO a big achievement of the people involved in
programming to get it into that state... However, I doubt
anyone knows how much time they spent hunting down resource problems,
and how much time and effort they spent on developing tools to help
them find bugs. At least it seems a not-so-small industry has
lived well (and still lives) on hunting down such resource
problems. And I can't remember having seen it in another context than C
and derivatives or Pascal and derivatives, languages without an initial GC.

Regards
Friedrich
 

cr88192

Ian Collins said:
It already is, at least on the laptop, desktop and low to mid range
server market. You'd be hard pushed to find a new 32 bit machine these
days.

yes, and it will probably remain so for a while.
before too long the software will probably start to catch up...

 

Ian Collins

cr88192 said:
yes, and it will probably remain so for a while.
in not too long probably the software will start to catch up...
It caught up a long time ago, at least in the Unix world.
 

Herbert Rosenau

lcc-win has been distributing a GC since 2004.

Fool, stop spamming for your proprietary product that has nothing to
do with the topic of this group!

jacob navia <[email protected]> constantly ignores the topic of clc to
spam for his faulty compiler, something that has nothing to do with
the topic of this group. Spammer, go away!

jacob navia <[email protected]> knows nothing about the standard, as he
proves constantly with his spam - or where in the standard is GC
mentioned? Giving something away for free does not justify spamming for
it anyway!

--
Tschau/Bye
Herbert

Visit http://www.ecomstation.de the home of german eComStation
eComStation 1.2R Deutsch ist da!
 

Herbert Rosenau

Surely my intention wasn't to make an alternative out of GC or
not GC! There are many other possibilities.

There is absolutely no cause to trust a compiler whose author is
unable to understand the standard and likes to ignore it anyway.

So jacob navia <[email protected]> proves himself again and again a
fool, not only by spamming but by ignoring the standard too.

--
Tschau/Bye
Herbert

Visit http://www.ecomstation.de the home of german eComStation
eComStation 1.2R Deutsch ist da!
 

Richard

Ian Collins said:
It already is, at least on the laptop, desktop and low to mid range
server market. You'd be hard pushed to find a new 32 bit machine these
days.

For new machines maybe (maybe!), but the fact is that people do not
upgrade that often anymore - the curve has flattened IMO.
 

vippstar

In what ways does that implementation violate the standard?

My bet is that it will incorrectly free pieces of allocated memory
when the only references to that memory are in a file (written by
that process and later read back in by the same process). If lcc-win
actually handles this, its performance likely sucks if it has to
scan gigabyte (or worse, terabyte) files for pointer references.
I think the standard also allows, under the right circumstances,
for pointers to be *encrypted*, then stored in a file, and later
read back, decrypted, and used.
It is possible for C99.
You convert the pointer to intptr_t, encrypt the integer value, write it
to the file, etc.
 

Flash Gordon

It is possible for C99.
You convert the pointer to intptr_t, encrypt the integer value, write it
to the file, etc.

It's possible in C89 as well. You just treat it as an array of unsigned
char!
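As a sketch of that point: the pointer's representation can be copied into an unsigned char buffer (standing in here for the file), the original pointer variable cleared, and the value reconstructed later. Under plain malloc/free this round-trip is well-defined; a collector that scans only memory would see no live reference while the sole copy sits in the "file". The helper names save_ptr and load_ptr are invented for illustration:

```c
#include <string.h>

/* Copy a pointer's bytes into a buffer, as if writing it to a file. */
static void save_ptr(unsigned char *buf, int *p) {
    memcpy(buf, &p, sizeof p);    /* store the representation, byte by byte */
}

/* Reconstruct the pointer from the saved bytes, as if reading it back. */
static int *load_ptr(const unsigned char *buf) {
    int *p;
    memcpy(&p, buf, sizeof p);
    return p;
}
```

With malloc/free the reconstructed pointer is still valid; a GC that reclaimed the block in the meantime would make load_ptr() return a dangling pointer.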

I can actually see potentially good reasons for us (my company) to store
some pointers in a temporary table in a database on occasion, probably
the only reason we do not do this is that the original author did not
think of it. The database is written in something close to standard C
(record locking being the main issue for standard C).
 

Bartc

Chris Torek said:
I've little idea about GC or how it works ...
What happens when [the only pointer to a successful malloc() result]
goes out of scope?
What about a linked list when it's no longer needed? How does GC know
that,
do you tell it? And it knows where all the pointers in each node are. ...
Suppose, for instance, we have a table somewhere of "all allocated
blocks" (whether this is a list of malloc()s made into an actual . . .
To do a full GC, we run through the table and clear all the
"referenced" bits. Then, we use some sort of System Magic to find
all "root pointers". In C, this could be "every address register,

Probably I haven't fully understood, but are you saying something like:

"Scan through all the memory areas that C can store values in (static areas,
automatic (eg. stack) areas, and you're saying registers too), but ignore
heap areas at first.

And look for a value that, if it happens to be a pointer, points into one of
the allocated areas in this table. Then you can extend the search to that
block too."

OK, the bit I have trouble with is: how does GC know if a value in memory is
a pointer, and not an int, character string, float etc that by coincidence
has a bit-pattern that is in the address range of an allocated block?
 

Bartc

Ben Bacarisse said:
Mostly (in C) they don't. Anything that looks like it *might* be a
pointer is treated as if it were one. The term "conservative GC" is
sometimes used for this behaviour.

And I thought it was doing something clever!
 

Chris Torek

Probably I haven't fully understood, but are you saying something like:

"Scan through all the memory areas that C can store values in (static areas,
automatic (eg. stack) areas, and you're saying registers too), but ignore
heap areas at first.

And look for a value that, if it happens to be a pointer, points into one of
the allocated areas in this table. Then you can extend the search to that
block too."

Right. (Using a suitable definition for "heap areas", anyway.)
OK, the bit I have trouble with is: how does GC know if a value in memory is
a pointer, and not an int, character string, float etc that by coincidence
has a bit-pattern that is in the address range of an allocated block?

As I think Ben said, "it doesn't". It just has to assume the worst:
if something "might be" a pointer, it "is" a pointer, for the purpose
of the GC.
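A small illustration of why it must assume the worst: at the byte level, an integer that happens to hold a block's address is indistinguishable from a genuine pointer into it. This hypothetical check mirrors what a conservative scanner does with each word of memory (assuming, as on most common platforms, that intptr_t and void * have the same size):

```c
#include <stddef.h>
#include <string.h>

/* Reinterpret an arbitrary word as a pointer and test whether it
 * would "point into" the given block -- a conservative scanner can
 * do no better than this per-word test. */
static int looks_like_pointer(const void *word, const void *block, size_t size) {
    void *p;
    memcpy(&p, word, sizeof p);   /* any word's bytes might be a pointer */
    return (const char *)p >= (const char *)block &&
           (const char *)p <  (const char *)block + size;
}
```

A real pointer and an integer holding the same address both pass the test, so the collector must keep the block alive in either case; that is the "false positive" cost of conservative GC.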

In other languages, the collector has more information, and can
really tell which things are pointers. GCs thus work better in
those other languages. Of course, it is also the case that in a
language designed with garbage collection in mind, one can dispense
with huge amounts of ugliness found in C (and C++ and other "non-GC"
languages with dynamic allocation) that really exists only because
GC does not (exist). So if you want to work with GC, you probably
should be working with a language other than C (or C++ or any of
those other "non-GC" languages).

(It does not hurt if those languages offer inter-language conventions
so that one can hook up "C modules" for places where C's lack of
dynamic typing, exceptions, reflection, object-orientation, matrix
operations, etc., all turn out to be strengths instead of weaknesses.
Of course, even calling them "C modules" is a bit of a stretch, since
C does not really work in terms of "modules". :) )
 

Richard Tobin

That's why the toast crumbs keep accumulating at the bottom.
There might be a little tray that you can slide out to remove the crumbs
easily.

In my experience it's a little tray that you can slide out to dump all
the crumbs back into the toaster.

-- Richard
 
