Access violation in free()

spl

I am getting an access violation in the program below at the free()
call. What's wrong here, and how do I rectify it?
----------------------------------
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <conio.h>

char * CopyString(char *s)
{
int length = strlen(s);
char * t = (char *)malloc(length);
strcpy(t,s);
return t;
}


void main()
{
char *destStr;
char *sourceStr = "Test";
destStr = CopyString(sourceStr);
printf("Destination String : %s\n", destStr);
free(destStr);

}
 

Ian Collins

spl said:
I am getting access violation in the below program for the free()
call, Whats wrong here and how to rectify it?
----------------------------------
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <conio.h>
Why?

char * CopyString(char *s)
{
int length = strlen(s);
char * t = (char *)malloc(length);

Drop the cast.
strcpy(t,s);

Here you copy strlen(s)+1 bytes to a buffer strlen(s) bytes long - don't
forget the terminating '\0'.
return t;
}


void main()

In a hosted environment, main returns int: use int main(void).
 

Martin Ambuhl

spl said:
I am getting access violation in the below program for the free()
call, Whats wrong here and how to rectify it?

Try the following and see if you have better luck:
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#if 0
#include <conio.h>
#endif

char *CopyString(char *s)
{
int length = strlen(s);
char *t = malloc(length+1); /* note! */
strcpy(t, s); /* in your original code, this violated the bounds of
the array pointed to by t, corrupting memory. */
return t;
}

#if 0
void main()
#endif
int main(void)
{
char *destStr;
char *sourceStr = "Test";
destStr = CopyString(sourceStr);
printf("Destination String : %s\n", destStr);
free(destStr);
return 0;
}
 

Richard

spl said:
I am getting access violation in the below program for the free()
call, Whats wrong here and how to rectify it?
----------------------------------
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <conio.h>

char * CopyString(char *s)
{
int length = strlen(s);
char * t = (char *)malloc(length);
strcpy(t,s);
return t;
}


void main()
{
char *destStr;
char *sourceStr = "Test";
destStr = CopyString(sourceStr);
printf("Destination String : %s\n", destStr);
free(destStr);

}

You are not mallocing enough memory. So when you do a strcpy you are
putting the end-of-string '\0' over information that might be used
by malloc/free, or that might not even be accessible.


length+1
 

CBFalconer

spl said:
I am getting access violation in the below program for the free()
call, Whats wrong here and how to rectify it?

First fix the various errors, noted below.
----------------------------------
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <conio.h>

No such header in standard C; delete the line.
char * CopyString(char *s)
{
int length = strlen(s);

A string needs 1+strlen bytes.
char * t = (char *)malloc(length);

Don't cast malloc.
strcpy(t,s);
return t;
}

void main()

main always returns an int. Say so. Use "int main(void)"
{
char *destStr;
char *sourceStr = "Test";
destStr = CopyString(sourceStr);
printf("Destination String : %s\n", destStr);
free(destStr);

And here you must return a status. "return 0;" will do.
 

Richard Heathfield

Martin Ambuhl said:

Try the following and see if you have better luck:
char *CopyString(char *s)

Better: char *CopyString(const char *s)
{
int length = strlen(s);

Better: size_t length = strlen(s);
char *t = malloc(length+1); /* note! */
strcpy(t, s); /* in your original code, this violated the bounds of
the array pointed to by t, corrupting memory. */

Better: before copying to the newly created buffer, ensure it exists:

if(t != NULL)
{
strcpy(t, s);
}

or even:
if(t != NULL)
{
memcpy(t, s, length + 1);
}

since you already have that information to hand.
return t;
}

#if 0
void main()
#endif
int main(void)
{
char *destStr;
char *sourceStr = "Test";
destStr = CopyString(sourceStr);
printf("Destination String : %s\n", destStr);

Again, check that destStr is not a null pointer before passing it to
printf.

<snip>
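Putting those suggestions together, one possible version of the function (a sketch only, combining the const qualifier, size_t length, and null-pointer check discussed above):

```c
#include <stdlib.h>
#include <string.h>

char *CopyString(const char *s)
{
    size_t length = strlen(s);
    char *t = malloc(length + 1);   /* +1 for the terminating '\0' */
    if (t != NULL)
    {
        memcpy(t, s, length + 1);   /* length is already to hand */
    }
    return t;
}
```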
 

Tor Rustad

Richard said:
Martin Ambuhl said:

Better: char *CopyString(const char *s)


Better: size_t length = strlen(s);

I consider this better:

assert(NULL != s);

before calling strlen().
Better: before copying to the newly created buffer, ensure it exists:

if(t != NULL)
{
strcpy(t, s);
}

or even:
if(t != NULL)
{
memcpy(t, s, length + 1);
}

Unless you are writing e.g. a non-stop server (or a library for it,
kernel-mode code, etc.), the best thing to do on a memory failure is
usually simply to exit.

Propagating NULL all over the place results in bloated and hard-to-read
code.
 

Richard

Tor Rustad said:
I consider this better:

assert(NULL != s);

before calling strlen().

I have known solid, calm programmers go barmy when confronted with these
asserts. They are generally NOT required and only contribute noise IMO.

Sure there might be a few cases such as initial string allocation but
generally I find ASSERTS everywhere indicative of bad design. A good
design KNOWS that these things are not null in most cases. Too many
times they are just thrown in haphazardly as a "cure all" for poor
design.

Unless you are writing e.g. a non-stop server (or a library for it,
kernel mode code etc.), the normally best thing to do on memory
failures, is simply to exit.

Propagating NULL all over the place, result in bloated and
hard-to-read code.


Richard Heathfield

Tor Rustad said:
I consider this better:

assert(NULL != s);

before calling strlen().

Sure. I should have mentioned that.

Unless you are writing e.g. a non-stop server (or a library for it,
kernel mode code etc.), the normally best thing to do on memory
failures, is simply to exit.

We've had this debate over and over, and I think it's fair to say that
the balance of expert opinion is against you (although it's far from
unanimous), although of course it does depend very much on what you're
writing (as you suggest). The consensus seems to be that, if you're
writing 'generic' code - code that you expect to be used many times by
many programs - then you should report errors rather than terminate the
program. If you're writing the program itself, however, then of course
you do whatever is the right thing for that program. But if my word
processor exit()ed on a memory allocation failure without at least
giving me the chance to save the last twenty minutes' typing, I'd be
looking for a new word processor.
 

Tor Rustad

Richard said:
[...]
I consider this better:

assert(NULL != s);

before calling strlen().

I have known solid, calm programmers go barmy when confronted with these
asserts. They are generally NOT required and only contribute noise IMO.

Hopefully, these programmers have never developed code for
safety-critical applications and gone through an external code audit.

Sure there might be a few cases such as initial string allocation but
generally I find ASSERTS everywhere indicative of bad design. A good
design KNOWS that these things are not null in most cases. Too many
times they are just thrown in haphazardly as a "cure all" for poor
design.

The usage of assert() can make a great difference; the intention is to
detect programming faults, or even design errors.

For solid code, I expect pre-conditions, post-conditions and detection
of invariant violations. The question here, is if such run-time checks
should be included in production code, or not.

The question is *not* whether to include such checks in a debug build;
there they always belong, given that the intrinsic quality of the code
matters.
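
As an illustration of what such pre- and post-conditions might look like (the function and names here are mine, purely for illustration):

```c
#include <assert.h>
#include <string.h>

/* illustrative only: a bounded copy guarded by pre- and post-conditions */
char *copy_into(char *dst, size_t dstsize, const char *src)
{
    assert(dst != NULL && src != NULL);  /* pre-condition: valid pointers */
    assert(strlen(src) < dstsize);       /* pre-condition: room for '\0' */
    strcpy(dst, src);
    assert(strcmp(dst, src) == 0);       /* post-condition: faithful copy */
    return dst;
}
```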
 

Bart van Ingen Schenau

Richard said:
I have known solid, calm programmers go barmy when confronted with
these asserts. They are generally NOT required and only contribute
noise IMO.

Sure there might be a few cases such as initial string allocation but
generally I find ASSERTS everywhere indicative of bad design. A good
design KNOWS that these things are not null in most cases. Too many
times they are just thrown in haphazardly as a "cure all" for poor
design.
IME, asserts are typically used to protect the code against users
(programmers!) who can't read the documentation.
The assert is there to indicate as early as possible that the code is
being abused.
I strongly prefer an assert over having the code produce UB. Having
problems pointed out early in integration testing beats having
management breathing down your neck because the end-users found a
problem that appears to stem from your code (but is actually caused by
someone incorrectly using your code).

Bart v Ingen Schenau
 

Tor Rustad

Richard said:
Tor Rustad said:


We've had this debate over and over, and I think it's fair to say that
the balance of expert opinion is against you (although it's far from
unanimous), although of course it does depend very much on what you're
writing (as you suggest). The consensus seems to be that, if you're
writing 'generic' code - code that you expect to be used many times by
many programs - then you should report errors rather than terminate the
program.

In generic code, I would design a call-back mechanism for error
handling, rather than propagating critical errors all over the place.
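
One shape such a call-back mechanism might take (the names here are hypothetical, not from any existing library):

```c
#include <stdlib.h>

/* hypothetical hook: the library invokes it on allocation failure,
   so callers need not check for NULL at every level */
typedef void (*alloc_fail_fn)(size_t requested);

static alloc_fail_fn fail_handler = NULL;

void set_alloc_fail_handler(alloc_fail_fn h)
{
    fail_handler = h;
}

void *xmalloc(size_t n)
{
    void *p = malloc(n);
    if (p == NULL && fail_handler != NULL)
        fail_handler(n);   /* e.g. save data, log, or exit */
    return p;
}
```

The handler can save state and terminate, longjmp back to a recovery point, or simply log and return, as the application requires.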

Yup, we have discussed this before, and perhaps most experts disagree
with me, but my comment on error management in the draft on "Secure C
Library Functions", did very much seem to have made it into the final
"TR 24731 Part1: Bounce-checking interfaces".

http://groups.google.no/group/comp....st&q=Tor+Rustad&rnum=8&hl=no#25af554fcc8ceeae

Douglas A. Gwyn agreed with me, and some other experts must have too. :)

If you're writing the program itself, however, then of course
you do whatever is the right thing for that program. But if my word
processor exit()ed on a memory allocation failure without at least
giving me the chance to save the last twenty minutes' typing, I'd be
looking for a new word processor.

If a program needs a save-data-and-terminate handler, such a thing can
easily be added to a library. Propagating NULL isn't the best way to go,
IMO.
 

Martin Ambuhl

Tor said:
I consider this better:
assert(NULL != s);
before calling strlen().

All of Richard's suggestions are improvements. I followed my usual path
of preserving as much of the original poster's code as possible while
suggesting changes necessary to fixing his problem. Richard has, quite
reasonably, made suggestions which, while greatly improving the OP's
code, have nothing to do with his stated problem.

On the other hand, your suggestion is not an improvement to the OP's
code, mine which points out how to fix his problem, or that implied by
Richard's improvements. In fact, your suggestion is inane: assert()
should never appear in production code. It helps the end user not a damn
bit, but rather mystifies him. The presence of an assert() which is not
commented out in any program but a testing version is a sign of a
sloppy, lazy programmer.

Unless you are writing e.g. a non-stop server (or a library for it,
kernel mode code etc.), the normally best thing to do on memory
failures, is simply to exit.

This is another sign of a sloppy, lazy programmer. Your claim is not
only false but inane. Try to help the user of your code. You seem to
be the epitome of the user-hostile programmer.
Propagating NULL all over the place, result in bloated and hard-to-read
code.

Richard never suggested "propagating NULL all over the place." What a twit.
 

Tor Rustad

Martin Ambuhl wrote:

[...]

It helps the end user not a damn
bit, but rather mystifies him. The presence of an assert() which is not
commented out in any program but a testing version is a sign of a
sloppy, lazy programmer.

Nonsense; the assert() checks will only be done in debug builds, and
there is no reason to have different sources for debug and production builds.

I have more than once put a debug build into production to track down
hard-to-find bugs in non-trivial servers. Analyzing a crash image after
assert() traps is a very effective way of getting close to the problem.

Hard-to-find bugs typically don't happen on the test system.
 

Richard Heathfield

Martin Ambuhl said:

assert() should never appear in production code.

Well, I agree, but surely the way to remove it is not by omitting it
from the source, but by defining NDEBUG during a production build?

<snip>
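
For example (a sketch): with NDEBUG defined before <assert.h> is included, typically via the compiler command line, assert() expands to nothing, so the same source serves both builds:

```c
/* debug build:       cc prog.c          -> assert() is active      */
/* production build:  cc -DNDEBUG prog.c -> assert() compiles away  */
#include <assert.h>
#include <string.h>

size_t checked_strlen(const char *s)
{
    assert(s != NULL);   /* checked only when NDEBUG is not defined */
    return strlen(s);
}
```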
 

Richard Tobin

Martin Ambuhl said:
In fact, you suggestion is inane: assert()
should never appear in production code. It helps the end user not a damn
bit, but rather mystifies him. The presence of an assert() which is not
commented out in any program but a testing version is a sign of a
sloppy, lazy programmer.

I put assert() in to check for bugs in my program. For example, if a
certain computation should always produce a number between 0 and N, I
might put in an assert() to verify this. What would be the point of
removing the test in the final version of the program? Either the
code is correct, in which case it makes no measurable difference, or
the code is broken, in which case it will proceed to do further
computations with an incorrect value instead of failing immediately.
How does that help anyone?
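
For instance, if the computation were an index reduction into 0..N (a made-up stand-in for the real calculation):

```c
#include <assert.h>

#define N 100

/* stand-in computation: reduce any int into the range 0..N */
int compute_index(int x)
{
    int r = x % (N + 1);
    if (r < 0)
        r += N + 1;           /* C's % can yield negative results */
    assert(r >= 0 && r <= N); /* fail immediately if the logic is broken */
    return r;
}
```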

No doubt there are cases where things are different. In a computer
game, it might be better to continue with some incorrect graphics
rather than quit and lose the player's state. But the programs I
write are used for research, and no result is better than a wrong
result.

-- Richard
 
