So I want to exhaust a lot of memory


ImpalerCore

What is the best way to exhaust a lot of memory but at the same time
leave enough so that I can test out of memory errors for some
container functions, i.e. like not having enough memory to allocate a
new list node.

So far, all I've done is something like:

const size_t block_size = 1024*1024;
size_t count;

for ( count = 0; malloc( block_size ); ++count );
....

The smaller the block_size, the longer it takes to exhaust memory. So
I plan to start with a large block size, exhaust memory, then reduce
the block size and exhaust memory again, until I get down to a level
at which I can exercise my out-of-memory error handling.

Is this a good strategy? Are there pitfalls that I'm not seeing?
Again, I'm assuming that ending the process reclaims the memory used.
 

Ian Collins

ImpalerCore said:
What is the best way to exhaust a lot of memory but at the same time
leave enough so that I can test out of memory errors for some
container functions, i.e. like not having enough memory to allocate a
new list node.

Use your own mock malloc. That way you can return whatever you like,
including NULL.
 

chrisbazley

ImpalerCore said:
What is the best way to exhaust a lot of memory but at the same time
leave enough so that I can test out of memory errors for some
container functions, i.e. like not having enough memory to allocate a
new list node.

So far, all I've done is something like:

const size_t block_size = 1024*1024;
size_t count;

for ( count = 0; malloc( block_size ); ++count );
...

The smaller the block_size, the longer it takes to exhaust memory.  So
I plan to start with a large block size, exhaust memory, then reduce
the block size and exhaust memory again, until I get down to a level
at which I can exercise my out-of-memory error handling.

Is this a good strategy?  Are there pitfalls that I'm not seeing?
Again, I'm assuming that ending the process reclaims the memory used.

This is exactly the kind of thing that Simon P. Bullen's fortified
memory allocation shell is great at. I don't know how I ever managed
without it (or rather, I don't know why I ever trusted code that
hadn't been memory-squeezed).

His site is gone since Yahoo closed GeoCities (boo! hiss!) but you can
find a cached copy of the relevant page on The Wayback Machine:
http://web.archive.org/web/20020615230941/www.geocities.com/SiliconValley/Horizon/8596/fortify.html

To exercise out-of-memory handling I normally do something like the
following:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include "fortify.h"
#include "mymodule.h"

#define ARRAY_LEN(x) (sizeof(x) / sizeof(x[0]))

int main(int argc, char *argv[])
{
  unsigned long limit = 0;
  static const char * const strings[] =
  {
    "Ella",
    "Inigo",
    "Pip",
    "Estella"
  };
  bool success;

  do
  {
    MyModule *thing;

    Fortify_SetAllocationLimit(limit);
    Fortify_EnterScope();

    /* Test the behaviour of 'mymodule' in low-memory conditions */
    success = mymodule_create(&thing);
    if (success)
    {
      unsigned int i;

      for (i = 0; success && i < ARRAY_LEN(strings); i++)
        success = mymodule_add_string(thing, strings[i]);

      for (i = 0; success && i < ARRAY_LEN(strings); i++)
        success = mymodule_remove_string(thing, strings[i]);

      mymodule_delete(thing);
    }
    /* End test cases for 'mymodule' */

    Fortify_LeaveScope(); /* report any memory leaks */

    /* Exit if a memory leak occurred */
    if (Fortify_GetCurrentAllocation() > 0)
    {
      printf("Allocation limit was %lu\n", limit);
      exit(EXIT_FAILURE);
    }
    limit++;
  }
  while (!success); /* loop until the test completes successfully */

  return EXIT_SUCCESS;
}

Because all of the false-failed memory allocations generate a lot of
output, you might want to comment out the definition of macro
FORTIFY_WARN_ON_FALSE_FAIL in the user options header "ufortify.h" and
recompile Fortify.

If you don't have access to (or authority to modify) the source code
of the version of Fortify that you're using then you can instead use
Fortify_SetOutputFunc() to temporarily install a function that
suppresses all output. [By temporarily, I mean between
Fortify_EnterScope() and Fortify_LeaveScope() in the example above.]

One thing to beware of is that the supplied prototype of the
Fortify_SetFailRate() function in "fortify.h" doesn't match its
definition. (The definition uses the alternative name
Fortify_SetAllocateFailRate that is referred to in the documentation;
I can only assume that the header is wrong.)

HTH,
 

Nobody

ImpalerCore said:
What is the best way to exhaust a lot of memory but at the same time
leave enough so that I can test out of memory errors for some
container functions, i.e. like not having enough memory to allocate a
new list node.
I plan to start with a large block size, exhaust memory, then reduce
the block size and exhaust memory again, until I get down to a level
at which I can exercise my out-of-memory error handling.

Is this a good strategy? Are there pitfalls that I'm not seeing?

Linux systems typically let you allocate as much virtual memory as you
like, then kill the process if it uses too much of it (i.e. when the OS
has trouble trying to "back" the allocation with physical RAM or swap).

You can prevent this with "echo 2 > /proc/sys/vm/overcommit_memory":

/proc/sys/vm/overcommit_memory
This file contains the kernel virtual memory accounting mode.
Values are:

0: heuristic overcommit (this is the default)
1: always overcommit, never check
2: always check, never overcommit

In mode 0, calls of mmap(2) with MAP_NORESERVE are not checked,
and the default check is very weak, leading to the risk of
getting a process "OOM-killed". Under Linux 2.4 any non-zero
value implies mode 1. In mode 2 (available since Linux 2.6), the
total virtual address space on the system is limited to (SS +
RAM*(r/100)), where SS is the size of the swap space, RAM is
the size of the physical memory, and r is the contents of the
file /proc/sys/vm/overcommit_ratio.
 

Ersek, Laszlo

Ian Collins said:
Use your own mock malloc. That way you can return whatever you like,
including NULL.

Yes. Make your container's constructor(s) take an allocator, for
example:

struct allocator
{
  void *(*alloc)(size_t size, void *ctx);
  void (*release)(void *object, void *ctx);
  void *ctx;
};

The constructor should copy the whole struct (or store a pointer to it)
and use these functions for memory management.

Then suppose you have a complex test case that exercises many
allocations.

static int
test_my_container(const struct allocator *a, ...)
{
  /* pass "a" to all container constructors */
  /* return 0 for success, -1 for failure */
}

The following would allow you to hit each individual allocation point
with an OOM failure (takes 1 + n * (n + 1) / 2 allocation requests
altogether, where n is the number of allocations needed to complete a
single full test):


struct myctx
{
  unsigned limit,
           ctr;
};

static void *
myalloc(size_t size, void *v_ctx)
{
  struct myctx *myctx = v_ctx;

  return myctx->ctr++ < myctx->limit ? malloc(size) : 0;
}

static void
myrelease(void *object, void *ctx)
{
  free(object);
}

static void
test_all(void)
{
  struct myctx myctx = { 0u };
  const struct allocator allocator = { &myalloc, &myrelease, &myctx };

  while (-1 == test_my_container(&allocator, ...)) {
    ++myctx.limit;
    myctx.ctr = 0u;
  }
}


This is one of the many kinds of test that SQLite is subjected to:

http://www.sqlite.org/testing.html
3.1 Out-Of-Memory Testing

Another useful "malloc() plugin" (perhaps in combination with the
previous one) is to log the pointers returned by malloc() / passed to
free() (with printf()'s %p conversion specifier). At the end, one can
check the trace for double frees and leaks. (You're not guaranteed to
reach the end if you commit a double free.) The output could look like:

$ printf 'what now' \
| LBZIP2_TRACE_ALLOC=1 lbzip2 \
>/dev/null

8806: malloc(16) == 0x10c2250
8806: malloc(900005) == 0x7fb808b06010
8806: malloc(909605) == 0x7fb808a27010
8806: malloc(55768) == 0x10c24b0
8806: malloc(3600000) == 0x7fb807eed010
8806: malloc(3600136) == 0x7fb807b7e010
8806: malloc(262148) == 0x7fb807b3d010
8806: free(0x7fb807eed010)
8806: free(0x7fb807b7e010)
8806: free(0x7fb807b3d010)
8806: free(0x10c24b0)
8806: free(0x7fb808b06010)
8806: malloc(40) == 0x10c24b0
8806: free(0x10c24b0)
8806: free(0x7fb808a27010)
8806: free(0x10c2250)

Analyzing with "malloc_trace.pl":

$ printf 'what now' \
| LBZIP2_TRACE_ALLOC=1 lbzip2 2>&1 >/dev/null \
| perl -w malloc_trace.pl '(nil)'

8814: peak: 9327678

Grab lbzip2.tar.gz from lacos.hu if you care for the Perl script.

Disclaimer: IIRC p1 == p2 doesn't necessarily imply

0 == strcmp((sprintf(buf1, "%p", p1), buf1), (sprintf(buf2, "%p", p2), buf2))

on old or esoteric platforms.

Cheers,
lacos
 

ImpalerCore

What is the best way to exhaust a lot of memory but at the same time
leave enough so that I can test out of memory errors for some
container functions, i.e. like not having enough memory to allocate a
new list node.

So far, all I've done is something like:

const size_t block_size = 1024*1024;
size_t count;

for ( count = 0; malloc( block_size ); ++count );
...

The smaller the block_size, the longer it takes to exhaust memory.  So
I plan to start with a large block size, exhaust memory, then reduce
the block size and exhaust memory again, until I get down to a level
at which I can exercise my out-of-memory error handling.

Is this a good strategy?  Are there pitfalls that I'm not seeing?
Again, I'm assuming that ending the process reclaims the memory used.

Thanks for all the responses. I have been using dmalloc for general
memory-leak detection, and it seems to work well for me, but I can't
seem to make it work when I try to exercise actual out-of-memory
problems. I'll look at the suggestions and see what I can do to make a
special allocator/malloc that puts an upper limit on the amount of
memory available.

Best regards,
John D.
 

Phil Carmody

Ian Collins said:
Use your own mock malloc. Tat way you can return what ever you like,
including NULL.

Looks like librandomcrash has some potential customers...

Phil
 

Ian Collins

Yes. Make your container's constructor(s) take an allocator, for
example:

That's a bit over the top; I was suggesting defining malloc and free in
the test code.
 

ImpalerCore

What is the best way to exhaust a lot of memory but at the same time
leave enough so that I can test out of memory errors for some
container functions, i.e. like not having enough memory to allocate a
new list node.

So far, all I've done is something like:

const size_t block_size = 1024*1024;
size_t count;

for ( count = 0; malloc( block_size ); ++count );
...

The smaller the block_size, the longer it takes to exhaust memory.  So
I plan to start with a large block size, exhaust memory, then reduce
the block size and exhaust memory again, until I get down to a level
at which I can exercise my out-of-memory error handling.

Is this a good strategy?  Are there pitfalls that I'm not seeing?
Again, I'm assuming that ending the process reclaims the memory used.

After looking at some of the open-source malloc wrappers, I decided to
use a custom allocator to test my library. I use the same technique
that GLib uses: a table of function pointers bound to specific
implementations of malloc, realloc, free, and calloc, with my wrapper
functions calling through these pointers if they exist. This lets me
create several custom allocators and plug them in as my library's
allocator fairly easily. For out-of-memory testing, I can make a
soft-limit allocator that tracks the amount of space used and fails
allocations once the limit is exceeded. I can also create an 'nth bad
alloc' allocator in the spirit of SQLite's memory testing, failing
allocation 1, 2, ... N in turn to invoke memory errors at different
levels within the library. I've already discovered an error in
my_string_t when it wants to expand its buffer via realloc but gets
denied.

I can ensure that my library interface uses the set of my_malloc,
my_realloc, my_free, and my_calloc functions, but what do I do about
malloc and friends' influence on the standard library or 3rd party
libraries? I don't particularly want to replace malloc since I have
been using dmalloc for general memory tracking. I'm not familiar with
how much of the standard library uses the malloc interface internally,
and I can't exactly dovetail my version in without some special non-
portable hooks. Still, the wrapper approach seems to be the best
solution for my needs so far.

Some of my library functions are designed not to return an error code,
yet may encounter an internal out-of-memory error. An example would be
my_list_insert_front:

my_list_t* my_list_insert_front( my_list_t*, void* object );

It's possible that the dynamic allocation of the my_list_t node fails,
but usage of the function dictates that the result is assigned back to
the list pointer. If an out-of-memory error occurs, I can't return
NULL or I'd lose the list. I can have the user rely on checking errno
for ENOMEM, which seems acceptable but maybe not the best method.
Style-wise, I'm not fond of the version that returns an int and takes
a my_list_t**, even though the return-value method of error checking
is generally superior; at this point I would still prefer to check
errno to catch internal list-node allocation failures. This isn't the
only point in my library interface that has this problem, and I'm
still conflicted on the issue.

Does the C standard require malloc to set errno to ENOMEM (or some
equivalent) on an out-of-memory error, or is that a convenience
provided by the malloc implementation? I've not been able to determine
this from my limited reading.

Just some of my thoughts to peruse through.
 

Eric Sosman

[...]
I can ensure that my library interface uses the set of my_malloc,
my_realloc, my_free, and my_calloc functions, but what do I do about
malloc and friends' influence on the standard library or 3rd party
libraries?

That's one of the reasons the Standard says it's undefined
behavior if you try to substitute your own function for part of
the Standard library. The library may have internal links between
its parts, and those mechanisms are not standardized and need not
even be disclosed. For example, fflush(NULL) flushes all open
streams whose most recent operation was output: Something must be
going on behind the scenes to get this to work, and the details of
that "something" are the implementation's business.

Do malloc(), calloc(), realloc(), and free() have such internal
links? They're obviously connected to each other, but do they have
connections to other pieces of the implementation? The Standard
doesn't say that they do -- but it doesn't say that they don't,
either, so you substitute for them at your peril. Perhaps the code
that runs before main() or during program termination "knows" things
about their data structures. Maybe there's a special "back door"
that delivers page-aligned I/O buffers for fopen(). Maybe ...

... which is sort of too bad, because of all the pieces of the
Standard library one would like to be able to override, the memory
manager is surely the most tempting. Implementors know this, so
they're unlikely to put roadblocks in the way just to make trouble.
But if there's a good reason (on some platform) for malloc() et al.
to have internal connections elsewhere, they'll be there. All I
can suggest is careful study of each implementation's documentation,
plus some testing and crossing of fingers.

Some of my library functions are designed not to return an error code,
but it may have an internal out-of-memory error. An example would be
my_list_insert_front:

my_list_t* my_list_insert_front( my_list_t*, void* object );

It's possible that the dynamic allocation of the my_list_t node fails,
but usage of the function dictates that the result is assigned back
the list pointer. If an out-of-memory error occur, I can't return
NULL or I'd lose the list. I can have the user rely on checking errno
for ENOMEM, which seems acceptable but maybe not the best method. I'm
not fond style-wise of the version that returns an int and takes a
my_list_t**, even though the return value method of error checking is
generally superior; at this point I would still prefer to check errno
to catch internal list node allocation failures. This isn't the only
point in my library interface that has this problem, and internally,
I'm still conflicted on this issue.

The errno mechanism is creaky, or even crufty, and (IMHO) should
not be employed as a channel for success/failure indications. Use it
to provide additional information about a failure that's indicated
some other way.
Does the C standard define whether malloc forces errno to ENOMEM or
some equivalent on an out-of-memory error, or is it a convenience of
the malloc implementation? I've not been able to accurately determine
this from my limited reading.

The Standard does not require a failed malloc() to set errno at
all, much less to set it to any particular value. Further, the
Standard does not forbid a *successful* malloc() from setting errno.
 

ImpalerCore

[...]
I can ensure that my library interface uses the set of my_malloc,
my_realloc, my_free, and my_calloc functions, but what do I do about
malloc and friends' influence on the standard library or 3rd party
libraries?

     That's one of the reasons the Standard says it's undefined
behavior if you try to substitute your own function for part of
the Standard library.  The library may have internal links between
its parts, and those mechanisms are not standardized and need not
even be disclosed.  For example, fflush(NULL) flushes all open
streams whose most recent operation was output: Something must be
going on behind the scenes to get this to work, and the details of
that "something" are the implementation's business.

     Do malloc(), calloc(), realloc(), and free() have such internal
links?  They're obviously connected to each other, but do they have
connections to other pieces of the implementation?  The Standard
doesn't say that they do -- but it doesn't say that they don't,
either, so you substitute for them at your peril.  Perhaps the code
that runs before main() or during program termination "knows" things
about their data structures.  Maybe there's a special "back door"
that delivers page-aligned I/O buffers for fopen().  Maybe ...

     ... which is sort of too bad, because of all the pieces of the
Standard library one would like to be able to override, the memory
manager is surely the most tempting.  Implementors know this, so
they're unlikely to put roadblocks in the way just to make trouble.
But if there's a good reason (on some platform) for malloc() et al.
to have internal connections elsewhere, they'll be there.  All I
can suggest is careful study of each implementation's documentation,
plus some testing and crossing of fingers.

Thanks for your insight into the issues. I have no intention of
overriding the local standard library's malloc implementation, but
I'm happy to let dmalloc or a similar library do it for debugging
purposes.
     The errno mechanism is creaky, or even crufty, and (IMHO) should
not be employed as a channel for success/failure indications.  Use it
to provide additional information about a failure that's indicated
some other way.

This is my general feeling as well.
     The Standard does not require a failed malloc() to set errno at
all, much less to set it to any particular value.  Further, the
Standard does not forbid a *successful* malloc() from setting errno.

I feel more confident now that relying on errno is a bad idea. It
sounds like, to preserve the interface, I'll have to mimic an "errno"-
like state that acts much as malloc implementations that set errno to
ENOMEM do. Since my custom wrappers reference the global function
table, that is a place I can embed this "errno"-like boolean state. My
malloc wrappers can update that state correctly and independently of
errno simply by checking the result of the allocation made through the
table's function pointer. This lets me keep the prototype for
my_list_insert_front the same, and check for a memory error using the
state stored in the function table.

i.e.

struct my_allocator_ftable
{
  void* (*malloc) ( size_t size );
  void* (*realloc)( void* p, size_t size );
  void  (*free)   ( void* p );
  void* (*calloc) ( size_t n, size_t size );
  my_bool e_out_of_memory;
};
typedef struct my_allocator_ftable my_allocator_ftable_t;

static my_allocator_ftable_t my_private_allocator_ftable =
{
  standard_malloc,
  standard_realloc,
  standard_free,
  standard_calloc,
  FALSE
};

/* Used to set the custom allocator. */
void my_allocator_set_ftable( /* 4 function pointers */ );

void* my_malloc( size_t size )
{
  void* mem = NULL;

  if ( size )
  {
    mem = my_private_allocator_ftable.malloc( size );
    if ( mem ) {
      my_private_allocator_ftable.e_out_of_memory = TRUE;
    }
  }

  return mem;
}

I'll have to try it out and see how it goes. Thanks for your input.

Best regards,
John D.
 

Ian Collins

Eric said:
[...]
I can ensure that my library interface uses the set of my_malloc,
my_realloc, my_free, and my_calloc functions, but what do I do about
malloc and friends' influence on the standard library or 3rd party
libraries?

That's one of the reasons the Standard says it's undefined
behavior if you try to substitute your own function for part of
the Standard library. The library may have internal links between
its parts, and those mechanisms are not standardized and need not
even be disclosed. For example, fflush(NULL) flushes all open
streams whose most recent operation was output: Something must be
going on behind the scenes to get this to work, and the details of
that "something" are the implementation's business.

Do malloc(), calloc(), realloc(), and free() have such internal
links? They're obviously connected to each other, but do they have
connections to other pieces of the implementation? The Standard
doesn't say that they do -- but it doesn't say that they don't,
either, so you substitute for them at your peril. Perhaps the code
that runs before main() or during program termination "knows" things
about their data structures. Maybe there's a special "back door"
that delivers page-aligned I/O buffers for fopen(). Maybe ...

... which is sort of too bad, because of all the pieces of the
Standard library one would like to be able to override, the memory
manager is surely the most tempting. Implementors know this, so
they're unlikely to put roadblocks in the way just to make trouble.
But if there's a good reason (on some platform) for malloc() et al.
to have internal connections elsewhere, they'll be there. All I
can suggest is careful study of each implementation's documentation,
plus some testing and crossing of fingers.

While I agree with Eric's comments, in practice I have never had
problems substituting my own allocator, for testing and, on some
embedded targets, in production code. In the hosted world, many (if
not all?) Unix-like systems provide a choice of alternative allocators
to meet specific needs (debugging, efficient multi-threading). There
are published comparisons between them in different applications.

A bit OT, but does Windows offer the same options?

So while in theory there is no guarantee, in practice it will work.
 

Eric Sosman

Eric said:
[... library may have undocumented internal dependencies ...]

While I agree with Eric's comments, in practice I have never had
problems substituting my own allocator for testing and in some embedded
targets, production code. In the hosted world, many (if not all?) Unix
like systems provide a choice of alternative allocators to meet specific
needs (debug, efficient multi-threading). [...]

Yes. As I said, the memory manager may be the most frequently
replaced piece of the Standard library, and implementors won't make
it difficult to do so unless they've got good reasons.

However, there's a terminology quibble: If a platform provides
half a dozen different versions of malloc() et al., I think the
Standard's view would be that it's providing half a dozen different
C implementations. In particular, if something in the library needs
a back door into realloc(), all the provided realloc() variants will
have that back door -- and it still needn't be documented. The
implementors are the folks who provide all the parts, and should
be expected to provide compatible parts -- but they don't have to
tell you how to machine your own substitutes.

For example, some C implementations provide functions like
mallinfo(), mallopt(), posix_memalign(), and so on. If a program
uses these extensions, they're unlikely to work with some random
re-implementation of malloc(), calloc(), realloc(), and free().
So while in theory there is no guarantee, in practice it will work.

Yes. Up to a reasonable point, at any rate.
 

ImpalerCore

On 2/11/2010 12:01 PM, ImpalerCore wrote:
[...]

void* my_malloc( size_t size )
{
  void* mem = NULL;

  if ( size )
  {
    mem = my_private_allocator_ftable.malloc( size );
    if ( mem ) {
      my_private_allocator_ftable.e_out_of_memory = TRUE;
    }
  }

  return mem;
}

Found a small error in my_malloc; I have the e_out_of_memory condition
inverted.

void* my_malloc( size_t size )
{
  void* mem = NULL;

  if ( size )
  {
    mem = my_private_allocator_ftable.malloc( size );
    if ( !mem ) {
      my_private_allocator_ftable.e_out_of_memory = TRUE;
    }
  }

  return mem;
}


Best regards,
John D.
 
