
Chris Thomasson

Jon Harrop said:
The problem is that it is irrelevant, not that it is wrong.

It is very relevant INDEED! First you say that a reference count is not 100%
by using the following example:

{
refcount<Bar> bar(new Bar);
f(bar);
g();
}


And claiming that 'bar' will not be collected while g() is executing. Then I
say you're totally wrong and give a simple counter example that renders your
nonsense example totally false:

{
{
refcount<Bar> bar(new Bar);
f(bar);
}
g();
}


And what do you do? Of course you make a silly statement: "it is
irrelevant". LOL! You are getting ridiculous!
 

Chris Thomasson

Chris Thomasson said:
It is very relevant INDEED! First you say that a reference count is not
100% by using the following example:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

It is very relevant INDEED! First you say that a reference count is not 100%
__accurate__ by using the following example:
 

Jon Harrop

Chris said:
A data-structure implementer should always be aware of how he is going to
safely reclaim memory in the presence of multi-threading.
Yes.

Garbage collection does not get you a free lunch here.

Your example before was a linked list. With a GC, these are implemented
completely naively because the GC takes care of everything (a free lunch).
Look at the OCaml and F# standard libraries, for example.
You're incorrect. You need to open your eyes here:

I have proven you wrong by providing a worked counter example. There is no
point in discussing this further.
Now you're just trolling. Why are you thinking in terms of scope here? Did
you know that C++ allows the programmer to do something like:

{
{
Bar bar;
f(bar);
}
g();
}

See? We can go in circles forever here. You're posting nonsense.

This neither proves nor disproves your claim. However, it is relevant to the
previous point because, again, GC provides a free lunch here by collecting
earlier without forcing you to rewrite your code as reference counting did.
Probably not.

I just tested this in OCaml and F# and both collect the value before the
scope ends.
You're comparing creating efficient manual memory management schemes with
programming in assembly language?

Yes. Both are only necessary in very specific circumstances.
Oh boy. Here we go again. I ask you: What reference counting algorithm?
You make blanket statements that are completely false!

I have described OCaml's zero overhead in this context. Show me a reference
counter that uses no space for its reference counts?

Failing that, show me a benchmark where any reference counted program uses
less memory than my GC'd equivalent?
There are some counting algorithms that use pointer stealing. Anyway, you're
making false assumptions based on blanket statements. Drill down on some
specific reference counting algorithms please.

I am referring to all reference counting algorithms that require any space
for their counts, i.e. all of them.
That's no good! Multi-threading is the way things are now. You don't get a
free lunch anymore.

Very well. Let's do it in parallel.
I need to take a look at this, but, are you sure that this even needs
memory management?

Yes: the task rewrites expressions trees. There is a very similar C++
program here:

http://www.codecodex.com/wiki/index.php?title=Derivative

You could use that as a basis and add reference counting to it.
Could I just reserve a large block of memory and carve data-structures out
of it and use caching?

You will run out of memory very quickly. This program spends all of its time
in allocation and collection.
I don't think this needs GC or reference counting.

Unless you detect unused subexpressions and deallocate them you will run out
of memory.
I admire your faith but I would like to see some evidence to back up such
claims because they run contrary to common wisdom accumulated over the
past few decades.

Here is a stupid contrived example:
________________________________________________________
struct object {
object* cache_next;
bool cached;
[...];
};

#define OBJ_DEPTH() 100000

static object g_obj_buf[OBJ_DEPTH()] = { NULL, true };
static object* g_obj_cache = NULL;

void object_prime() {
for(int i = 0; i < OBJ_DEPTH(); ++i) {
g_obj_buf[i].cache_next = g_obj_cache;
g_obj_cache = &g_obj_buf[i];
}
}

object* object_pop() {
object* obj = g_obj_cache;
if (! obj) {
if (obj = malloc(sizeof(*obj))) {
obj->cached = false;
}
}
return obj;
}

void object_push(object* obj) {
if (obj->cached) {
obj->cache_next = g_obj_cache;
g_obj_cache = obj;
} else {
free(obj);
}
}

void foo() {
for (;;) {
object* foo = object_pop();
object_push(foo);
}
}
________________________________________________________

This is a fast object cache that will likely perform better than doing it
the GC way where nobody wants to manage their own memory:
________________________________________________________
void foo() {
for (;;) {
object* foo = gc_malloc(sizeof(*foo));
}
}
________________________________________________________

Yes, manual memory management is more work, and if you don't want to do
that, well, GC can be a life saver indeed.


We need this program to perform an irreducible task that I can code up in a
GC'd language to compare performance.
A garbage collector generally cannot be as accurate as a reference count.
Your contrived scope example is nonsense.

I have proven your original statement wrong. Adding "generally" is better
but there is no evidence to support it.

We can at least test this on the symbolic rewriter by measuring the memory
consumption.
How are the implementation details of a GC irrelevant to a discussion on
GC?

Those aren't implementation details of a GC or, if they were supposed to be,
they are decades out of date. Mark and sweep has been incremental for over
three decades.
Any benchmark on GC these days has to be able to use multiple processes or
threads. Ahh... Here is something we can do! Okay, we can create a
multi-threaded daemon that serves factory requests to multiple processes.
This would use multi-threading, multi-processing, and shared memory:

I want to implement the wait-free factory as a multi-threaded daemon
process with which multiple producers can register the path and name of a
shared library; concurrent consumer processes can then look up a library by
name, dynamically link with it, and call a "common" instance function
(i.e., a defined common API). I guess you could think of it as a highly
concurrent per-computer COM daemon. The factory consumer threads can use
the lock-free reader pattern, and the producer threads can use a form of
mutual exclusion, including, but not limited to, locks.

Or we can do an efficient observer pattern:

I want the wait-free observer to be a multi-threaded daemon that allows
multiple threads to create/register delegates and message types, and allows
consumer threads to register with those delegates and receive their
messages; producer threads create messages and subsequently signal the
delegates that manage those messages to multicast them to their registered
consumers.

Now those programs will definitely test a GC to the limits. BTW, do you
know of a GC that can collect across multiple processes working with shared
memory???

Not multi-process, no. Either multi-threaded with shared memory or
multi-process with message passing.
Which one do you want to do? The multi-thread/process factory or the
multi-thread observer?

I can't see how to make an irreducible task with a correct answer out of
these problem descriptions. I can try to port your C++ code but my
translations will be open to accusations of cheating in the absence of a
well-defined problem to solve.
A cache is an aid to allocation.

The cache on a harddrive is not an "aid to allocation", for example.
Then pick a language that does do that.

Allocations are already cached by the minor (or first generation) heap of
most GCs. There is some sense in caching allocations on .NET because it
incurs a lock. You could do that in F# by preallocating an array of values.

However, I'm not sure that would be beneficial. I think allocating an array
of objects would incur multiple locks and allocating an array of structs
would incur a single lock but an indirection at every access (you cannot
pass structs by reference). I'd have to check this out though.
You should take a look at it.

Will do.
 

Jon Harrop

Chris said:
Then I say you're totally wrong and give a simple counter example...

To be a valid counter example your program would have to prove that
reference counting is always more accurate. Your example does not prove
that. It is not a counter example.
 

Chris Thomasson

Chris Thomasson said:
I admire your faith but I would like to see some evidence to back up such
claims because they run contrary to common wisdom accumulated over the
past few decades.

Here is a stupid contrived example:
________________________________________________________
struct object {
object* cache_next;
bool cached;
[...];
};

#define OBJ_DEPTH() 100000

static object g_obj_buf[OBJ_DEPTH()] = { NULL, true };
static object* g_obj_cache = NULL;

void object_prime() {
for(int i = 0; i < OBJ_DEPTH(); ++i) {
g_obj_buf[i].cache_next = g_obj_cache;
g_obj_cache = &g_obj_buf[i];
}
}


WHOOPS! There are several STUPID TYPOS in there. Well, that's what I get
for typing this out in the newsreader! Anyway, here is the full code for the
very simplistic object cache that will compile with a C compiler:
______________________________________________________________________
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>


typedef struct object_s object;

struct object_s {
object* cache_next;
int cached;
};


#define OBJ_DEPTH() 100000


static object g_obj_buf[OBJ_DEPTH()] = { { NULL } };
static object* g_obj_cache = NULL;


void object_prime(void) {
int i;
for(i = 0; i < OBJ_DEPTH(); ++i) {
g_obj_buf[i].cache_next = g_obj_cache;
g_obj_buf[i].cached = 1;
g_obj_cache = &g_obj_buf[i];
}
}

object* object_pop(void) {
object* obj = g_obj_cache;
if (! obj) {
if (obj = malloc(sizeof(*obj))) {
obj->cache_next = NULL;
obj->cached = 0;
}
} else {
g_obj_cache = obj->cache_next;
}
return obj;
}

void object_push(object* obj) {
if (obj) {
if (obj->cached) {
obj->cache_next = g_obj_cache;
g_obj_cache = obj;
} else {
free(obj);
}
}
}

void foo(unsigned long int depth) {
for (;depth > 0; --depth) {
object* foo = object_pop();
object_push(foo);
}
}


int main(void) {
object_prime();
foo(5);


/*---------------------------------------------------------*/
puts("\n\n\n______________________________________________\n\
press <ENTER> to exit...");
getchar();
return 0;
}

______________________________________________________________________



Sorry about that nonsense, Jon!

;^(...


BTW, thank you for not flaming me too badly on this! ACK!
 

Chris Thomasson

Jon Harrop said:
To be a valid counter example your program would have to prove that
reference counting is always more accurate. Your example does not prove
that. It is not a counter example.

Your example did not prove that reference counting is not accurate.
 

Jerry Coffin

[ ... ]
Jerry

Can I invite you to take a look at this post[1] back from a few years
ago where a MSFT employee laid out his thoughts on why they went the
GC way w/o bothering about refcounting?

Sure. It'll take a while to read the whole thing, so I can't comment on
it in any detail just yet, but my immediate reaction based on glancing
it over is that it appears that their starting point was primarily the
semantics of VB6.

If that's accurate, then it appears there's not really a huge amount to
say: VB-6 is quite a lot different from C++. Specifically, C++ allows
one to execute arbitrary code in a destructor, meaning that the
destruction of an object can (and often does) have implications far
above and beyond that of simply destroying the object itself. In many
cases, correct operation of the program depends completely upon
execution of that code at precisely the correct time.

In VB-6 (or earlier) I don't believe that's the case. At least offhand,
I can't think of any observable semantics associated with destroying
anything (though I've never used VB much, so that may easily be wrong).
If that's the case, changing the timing of destroying objects can't make
an observable difference either. That makes almost any sort of garbage
collection comparatively trivial to incorporate.

In a situation like that, your choice of garbage collection techniques
comes down primarily to trade-offs between memory usage and speed, with
(quite) a bit of tuning to support the specific patterns of object usage
you see. By the time MS was doing this work, people had been working on
garbage collectors for years, and writing something that worked at least
reasonably well was a matter of combining well known elements in
relatively well known ways to get what were probably about the expected
results.
Also, shortly after the post was written, MSFT funded a project to try to
add ref-counting to their Rotor (their version of the open-sourced CLR)
codebase[2] as a kind of feasibility study, and it failed miserably. For the
latter project I have the details only as a Word document and have uploaded
it here[2]. Let me know what you think.

[1] http://discuss.develop.com/archives/wa.exe?A2=ind0010A&L=DOTNET&D=0&P=39459
[2] http://cid-ff220db24954ce1d.skydrive.live.com/browse.aspx/RefcountingToRotor

This second paper is a bit harder to comment on. They openly admit that
they 1) made no attempt at optimization, and 2) didn't profile the code
(tried to, but failed).

They make the case (correct, as a rule, I think) that deterministic
finalization allows one to simplify client code. The fact that their
code was slow in the absence of optimization, and that they don't (or at
that time didn't) know exactly why or how it was so slow, only points to
what we don't really know.

At least at first glance, their implementation sounds pretty slow,
though some points aren't entirely clear. They mention calling AddRef
and Release, but it's not clear whether they mean they're literally
calling member functions, or just using those names to make the idea
clear to people accustomed to COM.

If they were really using COM-style member functions, each increment or
decrement would involve loading the object's vtable pointer, then
looking up the function in the vtable, then calling that function to
increment or decrement. We're up to something like three memory
references, of which only one (the vtable pointer) is at all likely to
be in the cache already.

Then they talk about having to store the count for each object at the
end of the object, with a potentially different offset for each type of
object. That tends to indicate that they'd have to store the proper
offset and look it up for each object. If, heaven forbid, they followed
standard COM practice, they'd have to call _another_ function out of the
vtable to get that offset. If they access the data directly, we're
looking at a couple more memory references, and if they called a COM-
style member function we'd be looking at another three memory references
or so, of which (again) only one would be at all likely to be in the
cache already.

All in all, what would normally be a single reference to memory that
would almost always be in the cache would turn into around four to six
references to memory, only one or two of which would be in the cache on
more than rare occasion.

Reference counting requires only simple operations (incrementing or
decrementing) but it does those quite frequently. You only ever get away
with it at all because those are normally quite fast. When you add
something like 4 references to memory that's not likely to be cached to
every one of those operations, dismal performance sounds like the best
you'd expect.

I should also point out that at no point have I attempted to advocate
reference counting as a general purpose garbage collection solution, or
anything on that order. Quite the contrary, I think if somebody were to
attempt to implement Java, Lisp, Smalltalk, Self, OCaml, etc., using
reference counting as its only method of garbage collection, they'd be
making an extremely unwise decision at best.

My advocacy in this matter is purely for at least a reasonable degree of
accuracy. My assumption is that people considering various forms of
garbage collection should already know something about their memory
usage patterns. Matching that up with the characteristics of various
forms of garbage collection will allow them to make an intelligent
choice.

I have no problem at all if that choice happens to be something other
than reference counting -- in fact, I'd go so far as to say that
reference counting is only a good choice in relatively limited
circumstances. At the same time, within those limited circumstances,
reference counting will typically be a substantially better choice than
most alternatives.

If there was _never_ a situation in which reference counting was useful,
I wouldn't worry much if advice against it was based on inaccurate
reasoning -- even if inaccurate, it wouldn't be particularly misleading.
That's not the situation here. Jon Harrop's statements are both wildly
inaccurate, AND grossly misleading. IMO, that's unacceptable, and his
false statements _need_ to be corrected -- not in any particular hope
that anybody in particular will choose to use reference counting, but
only in the hope that they can make an intelligent, well-informed
decision based on facts, not blatant falsehoods.
 

Jerry Coffin

Citation?

I just gave you a citation, moron! If you want more details: it's used
in an MPEG decoder. MPEG includes I-frames (which are vaguely similar to
JPEG pictures) and P-frames and B-frames. In a P-frame, you use a block
of pixels from a previous frame as a prediction of a block in the
current frame, and then you encode only the differences between that
block and the current block. In a B-frame, you use bidirectional
prediction, meaning you use both a previous AND a succeeding frame for
your prediction.

In managing the memory, you've got a few choices: you can simply keep
the entire previous frame, just in case a succeeding frame might predict
parts of itself from this data. Alternatively, you can keep only the
parts that are really _used_ for prediction -- which, of course, means
that you count up the references, and dispose of each block when it's no
longer needed.

[ ... ]
Mathematica and OCaml demonstrate the opposite.

They demonstrate nothing of the sort.
Yet you still haven't ported the benchmark I cited.

Believing that benchmark means _anything_ about GC demonstrates still
more thoroughly that you haven't a clue of what you're talking about.
No: reference counts are updated whenever the value is referenced or
dereferenced and those are not operations on the value.

If you're going to try to use a word like "dereferenced", you should
learn what it means first. As it stands right now, you come off a lot
like a six-year old trying to sound adult.
That description is 48 years out of date.

Nonsense. I specifically said that "...when carried out as a simple
mark/sweep collector..." This is absolutely true today, just as it was
in the very first Lisp interpreter.
Older generations are not regarded as "permanent".

You're showing still more of your ignorance -- there are certainly
collectors that treat objects as permanent once they reach a sufficient
age.
Which happens all the time and is the reason why reference counting has
worse locality of reference and performance. The other reason is that
reference counting causes fragmentation because it cannot move values.

Still more complete nonsense. "Locality of reference" is apparently
something else you shouldn't use because you obviously don't know what
it means. Reference counting and movement of objects are _completely_
orthogonal. Reference counting deals with figuring out _when_ an object
is dead, and has nothing whatsoever to do with _where_ the object is
stored.

[ ... ]
This is all speculation for which there is overwhelming evidence to the
contrary. In short, if any of your points were correct then people would be
building major GCs on reference counting but they are not.

Still more complete nonsense. Just for example, see David Ungar's
_Outwitting GC Devils: A Hybrid Incremental Garbage Collector_, where he
talks about problems they ran into when they first implemented
generational scavenging, and how they ameliorated them at that time. He
concluded that paper by saying:

Much work remains. We need to analyze the performance of
this system and see what devils plague it now. In the
meantime, this work may rekindle interest in pause-free
reclamation without read barriers, and we hope that those
who come after us in this field may profit from our
experience by watching out for unexpected devils.

Of course, if you want to argue with me about garbage collection, you're
going to end up reading a LOT of those old papers you disdain so much,
because as it stands right now, you don't even have a clue about the
basic background necessary to get started. Trying to discuss recent
developments with you would be a little like trying to teach string
theory to a three year old when he asks why things always fall down
instead of up -- you lack any of the background necessary to begin to
understand any of it.
Even its prophecies that we now know to be wrong?

What would those be? So far, the only one you've disputed HAS come true.
Worse, despite your attempts to change the subject, the citation was
originally about the incremental nature of reference counting -- and
you've yet to show a single shred of evidence that this is not the case.
But they aren't used because everyone has moved on to real GCs because they
are better in almost every respect, including the ones you're trying to
contest.

You obviously don't know WHAT I'm contesting at all!
The memory gap is the name given to the phenomenon that has obsoleted your
argument.

You wish.

[ ... ]
Then you should be able to provide credible references and worked counter
examples as I have.

Your inability to recognize when you've been proved wrong is hardly my
problem.
 

Chris Thomasson

Jon Harrop said:
The language implementation is now split into the compiler and the GC
which
must be designed to cooperate.

It is a great fact that we can use C/C++ __and__ some assembly language to
create any garbage collector you can think of. C/C++ programmers have that
freedom. Does that prove anything? Na. It only shows how C/C++ is at a low
enough level to allow the creation of Java, C#, or basically any other
language and/or GC. Got it?

:^D
 

Chris Thomasson

Jon Harrop said:
Ironic result then. :)

You're trolling AGAIN! You say I am living many years in the past simply
because I make use of several forms of distributed reference counting. You
are full of it, and need to understand that GC is a great tool. It's only a
TOOL! Not an all-purpose answer! There are many different forms of reference
counting and GC; so be it. You make false claims on ALL of them with your
bullshi% blanket ignorant statements. Therefore, I point out problems with
your logic; so be it. GC is good, ref-counting is good: Get it???
 

Chris Thomasson

Jon Harrop said:
Chris Thomasson wrote: [...]
Yes: the task rewrites expressions trees. There is a very similar C++
program here:

http://www.codecodex.com/wiki/index.php?title=Derivative
[...]

I do not have the time either today or tomorrow to create a program that
can compete with a GC lang wrt the specific problem arena at hand. Well, I
am going to give your benchmark a whirl anyway in C++; give me two or three
days please??? I think I can cut the number of new/delete calls by a fairly
wide margin. If I can use an inherently non-garbage-collected language like
C and/or C++ to even get a 1ms gain over one of the other langs that are GC
by nature... Well, if I can do it, will you go ahead and try and compete with
me in creating a process-wide distributed factory/observer pattern? I know
that GC does not collect over multi-process programs. Well, that's your
problem. I use C/C++ which gives me the freedom to implement such things.
Your GC-native languages will have to compete in the multi-threaded __and__
multi-process world. If I can get a 1ms gain, good luck trying to HACK a GC
lang to go into uncharted waters!

:^D


What say you? I know that GC is not multi-process friendly... That's your
problem, not mine. I use C/C++/x86-asm/SPARC-asm... GC lang? Okay. Let's rock
and roll!

:^|
 

Chris Thomasson

Chris Thomasson said:
Jon Harrop said:
Chris Thomasson wrote: [...]
Yes: the task rewrites expressions trees. There is a very similar C++
program here:

http://www.codecodex.com/wiki/index.php?title=Derivative
[...]

I do not have the time either today or tomorrow to create a program that
can compete with a GC lang wrt the specific problem arena at hand. Well, I
am going to give your benchmark a whirl anyway in C++; give me two or three
days please??? I think I can cut the number of new/delete calls by a
fairly wide margin. If I can use an inherently non-garbage-collected
language like C and/or C++ to even get a 1ms gain over one of the other
langs that are GC by nature... Well, if I can do it, will you go ahead and
try and compete with me in creating a process-wide distributed
factory/observer pattern? I know that GC does not collect over
multi-process programs. Well, that's your problem.
[...]

I am not so sure I can even get a 1ms gain; I have not tried yet, but I think
this is going to be a challenge indeed! Thanks for the opportunity! Really,
I appreciate your time and patience with me Doctor!

:^)

BTW, the above was not meant to be sarcastic in any way, shape or form.
 

Alexander Terekhov

Jon Harrop wrote:
[...]
{
Bar bar;
f(bar);
g();
}

Reference counting will keep "bar" alive until its reference count happens
to be zeroed when it drops out of scope even though it is not reachable
during the call to "g()". Real garbage collectors can collect "bar" during
the call to "g" because it is no longer reachable.

The above is pretty much what you get from

using (Bar bar = new Bar()) {
f(bar);
g();
}

in GC environment. See

http://msdn2.microsoft.com/en-us/library/yh598w02(VS.80).aspx
So GCs can clearly collect sooner than reference counters.

You probably mean

using (Bar bar = new Bar()) {
f(bar);
}
g();

which is pretty much what you get from

{
Bar bar;
f(bar);
}
g();

;-)

I'm not sure what that has to do with the shared_ptr<> vs GC topic...

regards,
alexander.
 

Ian Collins

Chris said:
Is a great fact that we can use C/C++ __and__ some assembly language to
create any garbage collector you can think of.

I thought C/C++ resulted in undefined behaviour :)
 

Dmitriy V'jukov

That is irrelevant. Your argument in favor of reference counting was
completely fallacious.

You claimed that reference counting is "more accurate than a traditional
GC could ever be". Consider:

{
Bar bar;
f(bar);
g();
}

Reference counting will keep "bar" alive until its reference count happens
to be zeroed when it drops out of scope even though it is not reachable
during the call to "g()". Real garbage collectors can collect "bar" during
the call to "g" because it is no longer reachable.


Can real garbage collectors do this w/o help from the compiler? I'm not
sure.
With help from the compiler, reference counting can easily collect bar
precisely and promptly at the end of f().


Dmitriy V'jukov
 

Dmitriy V'jukov

Reference counts consume a lot more space.


Yes. For example, OCaml hides two bits inside each pointer totalling zero
overhead. In contrast, a reference counting system is likely to add a
machine word, bloating each and every value by 8 bytes unnecessarily on
modern hardware.


There is a proxy-collector based on reference counting which can be
implemented w/o any per-object overhead.

There is 'one-bit' reference counting which also uses only the low bit of
the pointer.

It seems that you limit your vision only to the decades-old
plain-old-reference-counting algorithm.


Dmitriy V'jukov
 

Chris Thomasson

Chris Thomasson said:
Chris Thomasson said:
Jon Harrop said:
Chris Thomasson wrote: [...]
Yes: the task rewrites expressions trees. There is a very similar C++
program here:

http://www.codecodex.com/wiki/index.php?title=Derivative
[...]
[...]

Actually, the only way I can really think of competing with a GC lang wrt
the C++ code provided in the link above is to simply create a slab
per-concrete-object (e.g., Var, Int, Plus and Times). That will drastically
cut down on calls to new/delete. I will have code for you in a day or two.
Probably two because I am getting ready for a move from the Bay Area to the
South Tahoe Basin. Anyway, here is a simple sketch for the Int class in the
example you linked to:

origin:
_______________________________________________________________________
class Int : public Base {
const int n;
public:
Int(int m) : n(m) {}
~Int() {}
const Base *clone() { return new Int(n); }
const Base *d(const string &v) const { return new Int(0); }
ostream &print(ostream &o) const { return o << n; }
};
_______________________________________________________________________



<sketch w/ typos>
_______________________________________________________________________
#define INT_DEPTH() 10000


struct Int : public Base {
int n;
Int* m_next;

Int(int m) : n(m) {}
~Int() {}

void Ctor(int m) {
n = m;
}

static Int* g_head;
static int g_depth;

static Int* CachePop(int const m) {
Int* _this = g_head;
if (! _this) {
_this = new Int(m);
} else {
g_head = _this->m_next;
--g_depth;
_this->Ctor(m);
}
return _this;
}

static void CachePush(Int* const _this) {
if (g_depth < INT_DEPTH()) {
_this->m_next = g_head;
g_head = _this;
++g_depth;
} else {
delete _this;
}
}


const Base *clone() { return CachePop(n); }
const Base *d(const std::string &v) const { return CachePop(0); }
std::ostream &print(std::ostream &o) const { return o << n; }
};


Int* Int::g_head = NULL;
int Int::g_depth = 0;

_______________________________________________________________________



All I have to do is replace 'Var, Int, Plus and Times' with the VERY simple
cache outlined above and I know it will cut the number of new/delete calls
by a large margin. That's about all I can do in C++. Is that fair?

;^D
 

Chris Thomasson

Ian Collins said:
I thought C/C++ resulted in undefined behaviour :)

I really do like that C and C++ are low-level enough to provide the ability
to actually create other great languages, and the GCs that go along with
them, of course...

:^)
 
