copy size


sinbad

Is it OK to allow memcpy() to take size 0,
or is it good practice to check the size
and call memcpy() only for sizes greater than 0?
 

Keith Thompson

sinbad said:
Is it OK to allow memcpy() to take size 0,
or is it good practice to check the size
and call memcpy() only for sizes greater than 0?

See section 7.24.1 of the C standard
(http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf is the latest
draft):

Where an argument declared as size_t n specifies the length of the
array for a function, n can have the value zero on a call to that
function.

So yes, it's ok to call memcpy() with a size of 0.

Note that if either s1 or s2 is a null pointer, the behavior is (I
believe) undefined.
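A minimal sketch (not from the thread) illustrating both points: size 0 with valid pointers is well-defined, while a null pointer is not, so a hypothetical wrapper can guard the call.

```c
/* Sketch, not part of the original discussion: memcpy() with size 0
   is well-defined when both pointers are valid (C11 7.24.1), but the
   behavior is undefined if either pointer is null, even for size 0.
   safe_copy is a hypothetical wrapper that guards against that. */
#include <stddef.h>
#include <string.h>

void *safe_copy(void *dst, const void *src, size_t n)
{
    if (n == 0 || dst == NULL || src == NULL)
        return dst;                 /* copy nothing; never pass null to memcpy */
    return memcpy(dst, src, n);
}
```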
 

Eric Sosman

A quick search of the Internet has revealed that you are right.

But it seems odd for three reasons:

a) In almost all the functions I write that have parameters of type
(void *) and size_t (usually occurring together, for the obvious
reasons), I handle the size=0 case. When I handle it, the most
natural construction of the function is that the pointer(s) are not
dereferenced if size=0.

Right. But the Standard does not require the implementation
to use the "most natural" construction. Indeed, it goes out of
its way to permit "unnatural" constructions in pursuit of speed
and other efficiencies.

b) Allowing memcpy(dptr, sptr, 0) means that you decided to push logic
to handle the size=0 case down into the function, by definition. It
could just as easily be handled by the caller. In some programs, the
case of size=0 and an invalid pointer go together. It seems useful to
make it standard that if size=0, neither pointer will be dereferenced.

This seems self-contradictory. If the caller can handle a
special case "just as easily," why is it "useful" to require the
internals of memcpy() to handle it in a particular way?
[...]
d) I'm not a processor design whiz. I do mostly work on 8-bit through
32-bit microcontrollers. These are not really a representative set. I
can't rule out the possibility that on certain families of machines
(branch prediction, state rollback, caching, who knows what else) that
there could be a performance penalty of not requiring that the
pointers be valid even if size=0.

I lack any inside knowledge of the various Committees' thought
processes, but it seems unlikely that the "all arguments must be
valid" requirement arose out of concerns for specific machines.
Rather, it seems more likely that the overarching principle "do
not constrain implementations unnecessarily" was at work. In this
case, the implementation is not required to determine something
about one argument before making use of the others.
 

Eric Sosman

Please don't play Devil's Advocate because you are bored : ).

Creating a memcpy() function that doesn't handle the size=0 case is
just as silly as creating a square root function that calculates
square root for every non-negative integer except 0.

The Standard does in fact require memcpy() to handle the size=0
case, so what's your point?

It forces the users to either (a) wrap the function or (b) handle the
exception case many times, whereas handling it in the function would
require handling it only once.

Concerning (b), you yourself wrote "It could just as easily be
handled by the caller," end quote. If the burden on the caller has
zero weight, where's the utility in removing it?

As to (a), I have never felt the slightest need or inclination
to wrap memcpy(), and have certainly never felt forced to do so.
 

Tim Rentsch

David T. Ashley said:
So yes, it's ok to call memcpy() with a size of 0.

Note that if either s1 or s2 is a null pointer, the behavior is (I
believe) undefined.

A quick search of the Internet has revealed that you are right.

But it seems odd for three reasons:

a) In almost all the functions I write that have parameters of type
(void *) and size_t (usually occurring together, for the obvious
reasons), I handle the size=0 case. When I handle it, the most
natural construction of the function is that the pointer(s) are not
dereferenced if size=0.

b) Allowing memcpy(dptr, sptr, 0) means that you decided to push logic
to handle the size=0 case down into the function, by definition. It
could just as easily be handled by the caller. In some programs, the
case of size=0 and an invalid pointer go together. It seems useful to
make it standard that if size=0, neither pointer will be dereferenced.
[snip other]

Two points:

Dereferencing is not the only way a bad pointer value might cause
undesired behavior. Any use of a pointer in an "address-like"
manner might cause a malfunction if the pointer value is bad.

Implementations of library functions are allowed to make use of
non-portable, implementation-specific assumptions to do what they
do. Actions based on such assumptions might work for bona fide
pointers but fail for, e.g., NULL. For example, we might do a
computation on a pointer value to prepare a pre-fetch of the
memory where the source bytes would be. Such a computation could
misbehave if operating on a bad pointer value, even though no
"dereferencing" in the sense of C semantics has taken place.
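The kind of "address-like" use described above can be sketched like this (a hypothetical implementation, not any real library's code):

```c
/* Hypothetical memcpy-like implementation illustrating "address-like"
   uses of a pointer that are not dereferences: computing a one-past-
   the-end address, or an aligned address for word-sized operations,
   is already undefined in C if the pointer value is bad, even when
   no byte is ever read or written through it. */
#include <stddef.h>
#include <stdint.h>

void *sketch_memcpy(void *restrict dst, const void *restrict src, size_t n)
{
    unsigned char *d = dst;
    const unsigned char *s = src;
    const unsigned char *end = s + n;   /* UB if src is invalid, even for n == 0 */
    uintptr_t word = (uintptr_t)s & ~(uintptr_t)(sizeof(long) - 1);
    (void)word;                         /* e.g., a prefetch target computed up front */
    while (s < end)
        *d++ = *s++;
    return dst;
}
```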
 

Keith Thompson

David T. Ashley said:
Please don't play Devil's Advocate because you are bored : ).

Creating a memcpy() function that doesn't handle the size=0 case is
just as silly as creating a square root function that calculates
square root for every non-negative integer except 0.

sqrt() computes square roots; sqrt(0.0) computes the square root of
0.0. It's not a special case.

memcpy() copies memory; memcpy() with size=0 doesn't copy any memory.

And as Eric points out, the standard already requires support for
memcpy() with size=0 if the s1 and s2 arguments are valid. If they're
not valid, the behavior is undefined, but I don't see that that's a
significant problem.

It forces the users to either (a) wrap the function or (b) handle the
exception case many times, whereas handling it in the function would
require handling it only once.

The only reason to wrap the function or handle exceptional cases is if
you have a call to memcpy() where it's possible to have size=0 *and* one
or more null pointer or otherwise invalid arguments. I don't see that
happening much in real life.
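For the rare case where size 0 and a null pointer do travel together (an optional buffer, say), one guard at the call site suffices. A hypothetical sketch:

```c
/* Hypothetical example, not from the thread: a buffer whose data
   pointer may be NULL exactly when its length is 0. One check at
   the call site keeps the memcpy() arguments valid. */
#include <stddef.h>
#include <string.h>

struct buf {
    unsigned char *data;   /* may be NULL when len == 0 */
    size_t len;
};

void copy_out(unsigned char *dst, const struct buf *b)
{
    if (b->len > 0)        /* implies b->data is non-null in this scheme */
        memcpy(dst, b->data, b->len);
}
```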
 

Tim Rentsch

David T. Ashley said:
This seems self-contradictory. If the caller can handle a
special case "just as easily," why is it "useful" to require the
internals of memcpy() to handle it in a particular way?

Please don't play Devil's Advocate because you are bored : ).

Creating a memcpy() function that doesn't handle the size=0 case is
just as silly as creating a square root function that calculates
square root for every non-negative integer except 0. [snip]

I think you're confusing implementation and specification.

I expect most implementations will work just fine for any pointer
values given to memcpy() when the number of bytes to be moved is
zero.

However, the specification is written considering all possible
implementations, including some that have hardware characteristics
we don't even know about yet. In light of that, it is natural to
give as much freedom as we can to implementors of library
functions like memcpy(), provided the burden on developers using
those functions isn't too high. Here the burden seems pretty low
-- in practice I expect most calls to memcpy() either are known to
have valid pointer arguments, or are inside an if() if the pointer
arguments might not be valid (and so the calls would not be made
if the pointer arguments were null, for example), i.e., the tests
would be made anyway, independent of whether memcpy() always works
for zero length calls. Since the burden seems to be low, we give
implementations more latitude, because even though _most_ won't
need it, _some_ may find it a significant benefit. It's because
we don't know the characteristics of all implementations,
including future implementations, that it makes sense to keep
the specifications as unrestrictive as we reasonably can.
 

Tim Rentsch

christian.bau said:
An implementation may assume that the programmer's code does not
invoke undefined behaviour. Therefore, after a call

memcpy (p, q, n);

the implementation may assume that p != NULL and q != NULL. So in this
code fragment (assuming that p and q are non-const pointers to numeric
objects)

memcpy (p, q, n);
if (p != NULL) *p = 1;
if (q != NULL) *q = 2;

the compiler is free to remove the test whether p != NULL and whether
q != NULL.

I believe this is technically true, but there is a subtlety. The
behavior of memcpy() operating on a null pointer is undefined
only as far as the Standard is concerned. If an implementation
chooses to document (and therefore define) the behavior of
memcpy() upon receiving a null pointer value argument, then the
behavior is defined and the implementation cannot assume that the
'!= NULL' tests are superfluous. An implementation may not act
in contradiction to its own documentation, even for situations
that the Standard labels "undefined behavior".
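One practical consequence for callers, sketched as a hypothetical rearrangement of the fragment above: placing the null tests before the memcpy() call keeps them meaningful, because the compiler may only assume non-null pointers on paths where the call actually happened.

```c
/* Sketch: a memcpy() call lets the compiler assume both pointers are
   non-null on that path. Here memcpy() runs only in the guarded
   branch, so the later tests still matter on the paths where the
   call was skipped and cannot be removed. */
#include <stddef.h>
#include <string.h>

void update(int *p, int *q, size_t n)
{
    if (p != NULL && q != NULL)
        memcpy(p, q, n);
    if (p != NULL) *p = 1;   /* not removable: p may be null here */
    if (q != NULL) *q = 2;   /* likewise for q */
}
```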
 
