Don Y
Hi,
There's rarely a problem generating regression tests for
functions/procedures that use concrete *values*:
sqrt.test
3 --> 1.732
4 --> 2
5 --> 2.236
(apologies if I have misremembered any of these values :> )
And, it is easy to erect the necessary scaffolding for those
sorts of tests without introducing any noticeable uncertainty
in the veracity of the results obtained.
But, I can't see a way to handle this sort of testing on
functions that manipulate *pointers* to objects -- without
adding a fair bit of code to the scaffolding that, itself,
represents a testing issue! (i.e., does the scaffolding
have any bugs that result in false positives or negatives??)
In particular, I am trying to develop a test suite for my
dynamic memory management functions (e.g., malloc and free
types of routines).
I.e., I can't come up with "constant" results against
which to compare test-time results. Sticking with the
traditional malloc/free for example (my routines are
more heavily parameterized), I can create a malloc.test
that pushes size_t's to malloc. But, aside from verifying
the result is not NULL (or, *is* NULL, in some cases!),
there's nothing I can do to check that the result is
actually correct!
[Recall, the location and size of the free store might change
from environment to environment; alignment constraints might
vary, etc.]
E.g., if a test invocation of malloc(100) returns 0x12345678
in one environment, it could just as easily return 0x1234
in *another* (smaller address space). A second, identical test
invocation returning 0x87654321 could likewise be "correct".
About the only thing I can check for is if that second
invocation returned 0x12345699 (which overlaps the previous
allocation at 0x12345678!).
Furthermore, just examining the results from malloc doesn't
tell me that the free list is being maintained properly.
Of course, I can write code to inspect these objects in
greater detail. But, then the test scaffolding starts to
rival the complexity of the functions being tested! (bugs)
I guess the "ideal" I see would be to be able to build a
"memory image" and verify that the functions cause expected
and predicted changes in that "image".
E.g., as the allocation policy is varied, to see the
effects of that reflected in the changes to the free list
before vs. after the invocation.
But, I can't see any way to make this NOT require some sort
of active probing of memory by the test scaffolding (which
means the internals of the functions have to be exposed to
the test suite).
So: how do other folks test for pointer manipulation? etc.