Extremely pedantic, but however...


Frederick Gotham

Is it not inefficient to have allowed "free" to accept a null pointer?
Would it not have been better to disallow it, and to provide an auxiliary
function for times when it's wanted:

#define FREE_SAFE(p) ( (void)( (p) && (free((p)), 1) ) )
 

Malcolm

Frederick Gotham said:
> Is it not inefficient to have allowed "free" to accept a null pointer?
> Would it not have been better to disallow it, and to provide an auxiliary
> function for times when it's wanted:
>
> #define FREE_SAFE(p) ( (void)( (p) && (free((p)), 1) ) )

In the olden days freeing a null pointer would crash your computer.
The overhead is probably so trivial that the convenience is worth it.
Plus malloc() is allowed to return null if a region of zero size is
requested. So it is not necessarily illegitimate to free null.
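
(A minimal illustration of that point, using nothing beyond the standard.
Whether malloc(0) returns a null pointer or a unique pointer is
implementation-defined, so portable code can end up passing NULL to free()
quite legitimately:

#include <stdlib.h>

int main(void)
{
    char *p = malloc(0);  /* may be NULL or a unique pointer; both are legal */
    free(p);              /* fine either way: free(NULL) is a no-op */
    return 0;
}
)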
 

Skarmander

Frederick said:
> Is it not inefficient to have allowed "free" to accept a null pointer?

Only in C. Programmers in other languages generally don't care about an
insignificant extra comparison. (free() itself is hardly cheap on most
implementations.)

> Would it not have been better to disallow it, and to provide an auxiliary
> function for times when it's wanted:

Define "better". C already catches a lot of flak for having standard
functions that do not check their arguments (or in some cases cannot check
their arguments). The comparatively small cost of ensuring free(NULL)
doesn't crash is worth it (note that in the general case free() does not
have observable side effects, so making free(NULL) a no-op does not decrease
reliability).
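
(For reference, this is exactly what the standard guarantees: if the
argument to free() is a null pointer, no action occurs. In code:

char *p = NULL;
free(p);   /* guaranteed no-op in standard C */

so the null check is part of free()'s contract, not an optional courtesy.)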

S.
 

Frederick Gotham

Skarmander posted:
> Only in C. Programmers in other languages generally don't care about an
> insignificant extra comparison. (free() itself is hardly cheap on most
> implementations.)

I'm not talking about programmers, I'm talking about computers running
programs. If it takes 20 nanoseconds for a particular machine to check if a
pointer is null, then the execution time of your algorithm is extended by
20 nanoseconds for each time you call "free" (if every one of these null
checks is redundant). Those 20 nanoseconds could have been reclaimed if
"free" didn't check for null pointers.

Whether a particular human being (aka programmer, in this context)
considers 20 nanoseconds to be negligible is irrelevant to my query.

> Define "better".

More efficient. Runs faster. Uses fewer resources.

> C already catches a lot of flak for having standard
> functions that do not check their arguments (or in some cases cannot
> check their arguments).

Depends who you ask. I congratulate C for its efficiency. If you want your
hand to be held, you could get a wrapper library:

size_t strlen_HOLD_MY_HAND(char const *const p)
{
    if (p) return strlen(p);

    return 0;
}

> The comparatively small cost of ensuring
> free(NULL) doesn't crash is worth it (note that in the general case
> free() does not have observable side effects, so making free(NULL) a
> no-op does not decrease reliability).

If you want the feature of being able to invoke "free" upon a null pointer,
then all it takes is something simple like:

#define FREE_SAFE(p) do { void *const q = (p); if (q) free(q); } while (0)

This way, you can use "free" whenever a null check would be redundant,
saving 20 nanoseconds in execution time.
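
(One detail worth noting about the macro above: the do/while(0) wrapper
must not carry its own trailing semicolon inside the definition, or the
macro breaks in an if/else. A sketch of the failure mode, with a
hypothetical cleanup() standing in for any other statement:

if (done)
    FREE_SAFE(p);   /* with a trailing ; in the macro, this expands to  */
else                /* "do { ... } while (0); ;" and the else no longer */
    cleanup();      /* attaches to the if: a syntax error               */

With the semicolon left out of the definition, the caller's own ; completes
the statement and if/else works as expected.)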
 

CBFalconer

Frederick said:
.... snip ...
> If you want the feature of being able to invoke "free" upon a null
> pointer, then all it takes is something simple like:
>
> #define FREE_SAFE(p) do { void *const q = (p); if (q) free(q); } while (0)
>
> This way, you can use "free" whenever a null check would be
> redundant, saving 20 nanoseconds in execution time.

Ridiculous. All you are doing is applying the same test twice.
The following is the code used in my nmalloc package (a #define
converts nfree to free under the appropriate conditions). Note
that nothing is done when ptr is NULL, apart from the ability to
hook in user code. The DBG* lines are null in the release
version. The hook mechanism also allows the user to catch any
free(NULL) calls. See:

<http://cbfalconer.home.att.net/download/>

void nfree(void *ptr)
{
    memblockp m;

    if (hookptr[free_HK]) hookptr[free_HK](0, ptr);

    if (ptr) {
        m = MEMBLKp(ptr);
        DBGPRTF("free(%p)", ptr); SHOWBLKF(m, "");
        if (ISFREE(m) ||     /* bad, refreeing block */
            FOULED(m)) {     /* block is fouled */
            badcallabort("free", 4, m);
            return;          /* he can trap this SIGABRT */
        }
        dofree(m);
#if DEBUGF
        DBGEOLN;
#endif
    }
    else if (hookptr[free_null_HK])
        hookptr[free_null_HK](0, NULL);
} /* nfree */
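
(The identifiers above -- memblockp, MEMBLKp, the DBG* macros, the hook
table -- are all internal to nmalloc. As a generic sketch of the same hook
idea, with every name hypothetical, a wrapper that lets user code observe
free(NULL) calls might look like:

#include <stdlib.h>

static void (*free_null_hook)(void);   /* user-installable, may stay NULL */

void my_free(void *ptr)
{
    if (ptr)
        free(ptr);
    else if (free_null_hook)
        free_null_hook();   /* e.g. log or trap the free(NULL) call */
}
)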
 

Skarmander

Frederick said:
> Skarmander posted:
>> [...]
>
> I'm not talking about programmers, I'm talking about computers running
> programs.

So am I.

> If it takes 20 nanoseconds for a particular machine to check if a
> pointer is null, then the execution time of your algorithm is extended by
> 20 nanoseconds for each time you call "free" (if every one of these null
> checks is redundant). Those 20 nanoseconds could have been reclaimed if
> "free" didn't check for null pointers.

And free() is going to take up much more than 20 nanoseconds, most likely an
order of magnitude more.

> Whether a particular human being (aka programmer, in this context)
> considers 20 nanoseconds to be negligible is irrelevant to my query.

Then your use of "inefficient" is peculiar, since efficiency never
exclusively depends on absolute execution time. If that were the case, you
should focus on making your programs much more efficient by not using
dynamic memory allocation at all, obviating any concerns about free().

> More efficient. Runs faster. Uses fewer resources.

Your idea of better programs is limited.

> Depends who you ask. I congratulate C for its efficiency. If you want your
> hand to be held, you could get a wrapper library:
>
> size_t strlen_HOLD_MY_HAND(char const *const p)
> {
>     if (p) return strlen(p);
>
>     return 0;
> }

This has nothing to do with hand-holding. If you're writing a program where
you need either the length of a string or 0 if there is no string, this
function is exactly what you want. If you're writing a program where you
need the length of a string, calling strlen() with a null pointer is an
error, since that's not a string.

For free() it was decided that calling it with a null pointer is not an
error, that "no memory" is a valid argument, and that freeing "no memory" is
a no-op. This makes sense, since malloc() may return "no memory". Again, I
maintain that the cost one pays for having these semantics, rather than more
intricate ones that allow for a slightly simpler implementation, is
insignificant.

> If you want the feature of being able to invoke "free" upon a null pointer,
> then all it takes is something simple like:
>
> #define FREE_SAFE(p) do { void *const q = (p); if (q) free(q); } while (0)
>
> This way, you can use "free" whenever a null check would be redundant,
> saving 20 nanoseconds in execution time.

That does not address my point, which is that nobody ever benefits enough
from those 20 nanoseconds to warrant changing the semantics of free().
Programmers who are actually concerned with omitting redundant null checks
should invest in their compiler, which is far more likely to pay off.

A check for NULL in strlen() could be significant. My bold assertion is that
a check for NULL in free() never is.

S.
 

jacob navia

Frederick said:
> Skarmander posted:
>> [...]
>
> I'm not talking about programmers, I'm talking about computers running
> programs. If it takes 20 nanoseconds for a particular machine to check if a
> pointer is null, then the execution time of your algorithm is extended by
> 20 nanoseconds for each time you call "free" (if every one of these null
> checks is redundant). Those 20 nanoseconds could have been reclaimed if
> "free" didn't check for null pointers.
>
> Whether a particular human being (aka programmer, in this context)
> considers 20 nanoseconds to be negligible is irrelevant to my query.

20 nanoseconds?

At 2 GHz a test for NULL takes 0.5 nanoseconds, a nanosecond being
1e-9 seconds.

OK, anyway, this is for the principle, you say. Suppose your program makes
1e4 frees/second. That extra test will accumulate to

0.5e-9 * 1e4 --> 0.5e-5 seconds per second of run time; i.e. to make a
difference of 1 second your program should run for 2e5 seconds, i.e. 55.55
hours, or 2.31 days.

Non-stop.

And then you would just see a difference of 1 second, all this doing
10,000 calls to free() in each second.

This program

#include <stdlib.h>
#include <stdio.h>
#define MAXTRIES (1000000*10)
int main(void)
{
    char *a;

    for (int i = 0; i < MAXTRIES; i++) {
        a = malloc(123456);
        free(a);
    }
}

takes 3.484 seconds, i.e. 10 million calls to malloc/free
take 3.484 seconds, so one of them takes 0.0000003484 seconds
on a two-year-old PC running at 2 GHz.

If you do 10,000 malloc/free per second you are using
0.003484 seconds on those calls each second. In 200,000 seconds
your program has spent 11.61 MINUTES in malloc/free, and of
those 11.61 MINUTES you spend 1 second more in that test for
NULL.
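
(For anyone who wants to reproduce figures like these, a portable harness
using only standard C -- clock() measures processor time, and the absolute
numbers will of course vary with the machine and the library:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define MAXTRIES (1000000*10)

int main(void)
{
    clock_t start = clock();
    for (int i = 0; i < MAXTRIES; i++) {
        char *a = malloc(123456);
        free(a);
    }
    printf("%.3f seconds\n",
           (double)(clock() - start) / CLOCKS_PER_SEC);
    return 0;
}
)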
 

Ancient_Hacker

Frederick said:
> Is it not inefficient to have allowed "free" to accept a null pointer?

The test takes one or two instructions. The rest of the code for
free() is at least ten times more instructions, most of them slow
memory-reference instructions.

In addition, since you can only free() something that has been
malloc()ed, and only free() it once, those two test instructions only
get run if you've also malloc()ed something. So you have to add the
overhead of malloc() when considering this issue. malloc(), depending
on the exact implementation, runs from 50 to 500 instructions.

So two extra fast instructions out of 70 to 520 not-so-fast ones isn't
much of a burden.

And that's only for programs that do nothing but malloc() and free().

I would not worry about it.
 

Eric Sosman

Frederick said:
> Is it not inefficient to have allowed "free" to accept a null pointer?

Terribly inefficient. Even worse is the fact that free()
is a function, thus incurring the overhead of marshalling the
argument, transferring control (possibly disrupting pipelines
and instruction caches), remembering a return address, maybe
saving some registers and/or doing a window turn with possible
stack spill, and then unwinding the whole thing when free()'s
business has been done. Pure overhead! Horrible to contemplate!
free() should have been an operator, as in That Other Language.
*Then* we'd finally get some efficiency!

All right, class, let's turn the page. Next, we'll talk
about all the time printf() wastes interpreting format strings.
Who wants to go first?
 

Mark McIntyre

Frederick Gotham posted:

> More efficient.

define more efficient... :)

> Runs faster. Uses fewer resources.

Fast, small, cheap. Perm two of three...

> If you want the feature of being able to invoke "free" upon a null pointer,
> then all it takes is something simple like:
>
> #define FREE_SAFE(p) do { void *const q = (p); if (q) free(q); } while (0)
>
> This way, you can use "free" whenever a null check would be redundant,
> saving 20 nanoseconds in execution time.

At the cost of getting the compiler to create an extra object, perform
a test and insert a loop. This may cost more in terms of time and
resources than the original free.

I assume you're aware of the Three Laws of Optimisation?
--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
 

Samuel Stearley

jacob navia said:
> At 2 GHz a test for NULL takes 0.5 nanoseconds, a nanosecond being
> 1e-9 seconds.

A test is a single instruction. Then a non-executed branch is another
instruction. The MIPS is the most recent CPU I can think of that has
an integrated cmp/branch instruction.

The CPU is (most likely) superscalar, so these instructions are
executing in parallel.

I think that having to fetch 2 more instructions, do branch prediction,
and convert to internal RISC format (if this is an x86 target) is going to
have a larger performance impact than the actual execution.

> This program
>
> #include <stdlib.h>
> #include <stdio.h>
> #define MAXTRIES (1000000*10)
> int main(void)
> {
>     char *a;
>
>     for (int i = 0; i < MAXTRIES; i++) {
>         a = malloc(123456);
>         free(a);
>     }
> }

What about the performance of the following code?
The branch prediction inside of free isn't going to be that great.

#include <stdlib.h>
#include <stdio.h>
#define MAXTRIES (1000000*10/2)
int main(void)
{
    char *a;

    for (int i = 0; i < MAXTRIES; i++) {
        a = malloc(123456);
        free(a);
        free(NULL);
    }
}
 

websnarf

Frederick said:
> Is it not inefficient to have allowed "free" to accept a null pointer?

No. In a balanced malloc/free design, free takes about 30 clocks
minimum to completely execute. So the additional if() test (1 clock
maybe?) is just too trivial to matter. In non-balanced malloc/free
designs, indeed you can make the free() faster, but you've paid the
comparatively bigger penalty for the malloc (maybe 75 clocks?) instead
anyway.

> [...] Would
> it not have been better to disallow it, and to provide an auxiliary function
> for times when it's wanted:
>
> #define FREE_SAFE(p) ( (void)( (p) && (free((p)), 1) ) )

This is worse, because it distributes the checking throughout your
code, rather than centralizing it inside the library. (I.e., your code
footprint increases).

This is besides the fact that since it's a macro, p might be an
expression with side effects that you are evaluating twice.
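
(Concretely, with the macro as posted, a call such as FREE_SAFE(*pp++)
expands to

( (void)( (*pp++) && (free((*pp++)), 1) ) )

so pp is incremented twice and the pointer that gets freed is not the one
that was tested.)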
 

jacob navia

Samuel said:
> A test is a single instruction. Then a non-executed branch is another
> instruction. The MIPS is the most recent CPU I can think of that has
> an integrated cmp/branch instruction.
>
> The CPU is (most likely) superscalar, so these instructions are
> executing in parallel.
>
> I think that having to fetch 2 more instructions, do branch prediction,
> and convert to internal RISC format (if this is an x86 target) is going to
> have a larger performance impact than the actual execution.
>
> What about the performance of the following code?
> The branch prediction inside of free isn't going to be that great.
>
> #include <stdlib.h>
> #include <stdio.h>
> #define MAXTRIES (1000000*10/2)
> int main(void)
> {
>     char *a;
>
>     for (int i = 0; i < MAXTRIES; i++) {
>         a = malloc(123456);
>         free(a);
>         free(NULL);
>     }
> }

It goes up from 3.484 to 3.687. But if I change the program to do

#include <stdlib.h>
#include <stdio.h>
static void fn(void *p)
{
}

#define MAXTRIES (1000000*10)
int main(void)
{
    char *a;

    for (int i = 0; i < MAXTRIES; i++) {
        a = malloc(123456);
        free(a);
        fn(NULL);
    }
}

the time stays the same... within measurement error.

This means that the overhead of calling a
function 10 million times is what is increasing the time here, and the
difference between calling an empty function and calling free()
with NULL is zero.

Probably free() just returns after it sees NULL, that's all.
 

Frederick Gotham

CBFalconer posted:
> Ridiculous. All you are doing is applying the same test twice.

I don't understand you.

The hypothetical "free" would not check for null.

The hypothetical "FREE_SAFE" would check for null once.
 

ozbear

Frederick Gotham wrote:
> Is it not inefficient to have allowed "free" to accept a null pointer? Would
> it not have been better to disallow it, and to provide an auxiliary function
> for times when it's wanted:
>
> #define FREE_SAFE(p) ( (void)( (p) && (free((p)), 1) ) )

Invoking your FREE_SAFE macro above with an argument of malloc(10000)
generates a nice memory leak.
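
(Spelled out: FREE_SAFE(malloc(10000)) expands to

( (void)( (malloc(10000)) && (free((malloc(10000))), 1) ) )

which, when the first allocation succeeds, allocates two blocks and frees
only the second; the first is leaked.)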

Oz
 

Elijah Cardon

Eric Sosman said:
> Terribly inefficient. Even worse is the fact that free()
> is a function, thus incurring the overhead of marshalling the
> argument, transferring control (possibly disrupting pipelines
> and instruction caches), remembering a return address, maybe
> saving some registers and/or doing a window turn with possible
> stack spill, and then unwinding the whole thing when free()'s
> business has been done. Pure overhead! Horrible to contemplate!
> free() should have been an operator, as in That Other Language.
> *Then* we'd finally get some efficiency!
>
> All right, class, let's turn the page. Next, we'll talk
> about all the time printf() wastes interpreting format strings.
> Who wants to go first?

I waste an hour every time I have to lay eyes on the page in K&R with
fprintf specifiers.

EC
 

SM Ryan

# > If it takes 20 nanoseconds for a particular machine to check if a
# > pointer is null, then the execution time of your algorithm is extended by
# > 20 nanoseconds for each time you call "free" (if every one of these null
# > checks is redundant). Those 20 nanoseconds could have been reclaimed if
# > "free" didn't check for null pointers.

This is the same argument as to why Fortran didn't have
zero-trip DO loops. See also, etc etc etc.
 

CBFalconer

Frederick said:
> CBFalconer posted:
>> [...]
>
> I don't understand you.
>
> The hypothetical "free" would not check for null.
>
> The hypothetical "FREE_SAFE" would check for null once.

Are you just trolling? I posted an example of a free that checked
for NULL, and you just snipped it. Every free must do this to meet
the standard's requirements.
 

Keith Thompson

CBFalconer said:
> Are you just trolling? I posted an example of a free that checked
> for NULL, and you just snipped it. Every free must do this to meet
> the standard's requirements.

He did say "hypothetical". The point of this thread is his suggestion
that free() would be more efficient if it *didn't* check for a null
pointer argument; the FREE_SAFE macro would do the check, and be more
or less equivalent to the real-world free(). Obviously this would not
conform to the actual standard.
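
(To make the hypothetical concrete -- a sketch only, not standard C, with
free_unchecked() standing in for a free() that is allowed to assume a
non-null argument. Here it is modelled on top of the real free(), which of
course already checks; only the interface is the point:

#include <stdlib.h>

/* Hypothetical: would have undefined behavior for a null argument. */
static void free_unchecked(void *p)
{
    free(p);   /* a real implementation would omit the null test */
}

#define FREE_SAFE(p) do { void *q_ = (p); if (q_) free_unchecked(q_); } while (0)

/* Callers that know the pointer is non-null call free_unchecked() directly;
   everyone else uses FREE_SAFE(). */
)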
 

Samuel Stearley

No, he's not trolling.
He's saying that his 'FREE_SAFE' macro does not result in the NULL
check being performed twice, because the 'free' that it's calling is a
hypothetical free that lacks such a check.
 
