Experiment: functional concepts in C

  • Thread starter Ertugrul Söylemez

Ersek, Laszlo

The problem should be fixed at the right place.

[snip mockery]

[snip cursing]

I believe that my remark above is a perfectly crafted work of Usenet art;
the culmination of years of newsgroup experience.

There's no doubt about that. It's poignant and witty, indeed. A pure
essence of vileness.

You might want to examine the reasons why you reacted to it; it has
clearly found a collateral target in you as well.

Yes, and I was expecting you to make this remark as well. (I'm saying
this in a neutral voice.)

I'm not used to being called an idiot, however indirectly. I'm convinced
you called me just that in your earlier post, to which I tried to
respond with arguments. You ignored those (perhaps because you found
them weak). I happen to share Ertugrul's view in this matter (or so I
perceive), hence I was a bit predisposed to become a target. I kind of
felt that technical opinion, and consequently myself, attacked by this
outrageous (and certainly witty) comment.

I don't quite follow the numerics of your reasoning about who you would rather
work with,

A bit less smart, but a bit more humane. Strictly in the "still smart
enough" zone.

The root of the problem here is that we are discussing inanimate objects, which
serve us according to our requirements; to that extent they are ``good'' or
``bad''. Unlike a mother, the operating system is not an independent /person/
who suffers from taking on additional responsibilities for the sake of others.
The very language does not fit.

Truth be told, I didn't read this mother-child antecedent (there's no
way to keep up with such a stream of posts), but even a very bad analogy
doesn't deserve such a comment. Or whatever, I guess I need to grow a
thicker skin. I still think you may have been seduced a little by your
own craft in creating punchlines (cf. "narcissism") -- not that I don't
like to luxuriate in my own (imaginary) smartness :)

Oh well, let's go on with the technical stuff. Sorry for the bad words,
too.

So no, I don't think we should clean up after ourselves because the OS
cannot be trusted to do that. (I don't want to use such an OS.) I simply
feel my mental hygiene ("pedantism") disturbed when I don't clean up on
a *success* exit. (For the librarization of an application, stuff should
be cleaned up even on error exit paths, and I sometimes do that too if
the app is not multi-threaded.) I admit such hair-splitting may have
disadvantageous performance characteristics and may involve some risk,
and if it became unfeasible in any specific case, I'd probably surrender
it with conditional compilation. (I can't write a malloc() without
thinking about a free(), but I could surround the latter with an
#ifdef.) I think when we're on our way towards exit(EXIT_SUCCESS),
that's an incidental circumstance wrt. releasing allocated objects whose
usefulness has ceased due to more localized / specific grounds. (I'm
sure this doesn't make any sense.)
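
To make the #ifdef idea concrete, here is a minimal sketch (TIDY_UP
and buf are illustrative names, not taken from any real program):

#include <stdlib.h>

int main(void)
{
    char *buf = malloc(1024);

    if (buf == NULL)
        return EXIT_FAILURE;

    /* ... use buf for the program's real work ... */

#ifdef TIDY_UP
    free(buf);   /* the pedantic cleanup, on the success path only */
#endif
    return EXIT_SUCCESS;
}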

Perhaps plugging the leaks of an inherited monster business app written
in C is only possible in the way you suggest (patching a gc under the
app), but I still believe that choosing the other approach for new apps
shouldn't preclude them being treated as "actual software".

Sorry for repeating myself.

Cheers,
lacos
 

Ertugrul Söylemez

Ersek, Laszlo said:
I'm not used to being called an idiot, however indirectly. I'm
convinced you called me just that in your earlier post, to which I
tried to respond with arguments. You ignored those (perhaps because
you found them weak). I happen to share Ertugrul's view in this matter
(or so I perceive), hence I was a bit predisposed to become a
target. I kind of felt that technical opinion, and consequently
myself, attacked by this outrageous (and certainly witty) comment.

Really, I don't expect anyone to agree with me. After all, my coding
style works very well for me. I write safe code, which seldom fails
and survives large-scale restructuring. That's all I care about.

These days it doesn't happen too often that I write my code in C anyway.
IMO C is retarded in every way, but unfortunately a lot of C programmers
suffer from the Blub paradox.

Ersek, Laszlo said:
So no, I don't think we should clean up after ourselves because the OS
cannot be trusted to do that. (I don't want to use such an OS.) I
simply feel my mental hygiene ("pedantism") disturbed when I don't
clean up on a *success* exit. (For the librarization of an application,
stuff should be cleaned up even on error exit paths, and I sometimes
do that too if the app is not multi-threaded.) I admit such
hair-splitting may have disadvantageous performance characteristics
and may involve some risk, and if it became unfeasible in any specific
case, I'd probably surrender it with conditional compilation. (I can't
write a malloc() without thinking about a free(), but I could surround
the latter with an #ifdef.) I think when we're on our way towards
exit(EXIT_SUCCESS), that's an incidental circumstance wrt. releasing
allocated objects whose usefulness has ceased due to more localized /
specific grounds. (I'm sure this doesn't make any sense.)

I'd rather listen to Linus Torvalds than to Kaz Smartass Kylheku. Linus
even advocates the use of goto for proper cleanup before exit, be it on
success or error. It's not that I agree with Linus in all respects, but
at least he knows the value of safe, correct code.
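
A minimal sketch of that goto-based cleanup style (the function, file
and buffer are illustrative): a single exit path releases everything,
whether we reached it through success or through failure.

#include <stdio.h>
#include <stdlib.h>

int process(const char *path)
{
    int ret = -1;
    FILE *f = NULL;
    char *buf = NULL;

    f = fopen(path, "rb");
    if (f == NULL)
        goto out;
    buf = malloc(4096);
    if (buf == NULL)
        goto out;

    /* ... work with f and buf ... */
    ret = 0;

out:
    free(buf);            /* free(NULL) is a harmless no-op */
    if (f != NULL)
        fclose(f);
    return ret;
}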


Greets
Ertugrul
 

Dimiter \malkia\ Stanev

Hyman said:
So long that it was noticeable and annoying.
It doesn't matter exactly how long that was.

I ran into the exact same problem today. Lots of std::string objects in
a std::map allocated (part of our parser), and then freeing them all
took quite a long time. Granted, MSVCR90D.DLL (the debug VS2008 runtime)
was the culprit, but we needed the debug code to be fast too (it's part
of testing) - and even in release it takes some time (though not that
much).

The way the OS frees the memory reminds me of how Erlang treats its
processes - if a process dies, the whole memory it allocated goes away
with it.

I also remember that Turbo Pascal used to have regions (kind of like
Objective-C autorelease memory pools). Come to think of it, these are
quite good tools if you don't have a gc.
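
A sketch of such a region in C, in the spirit of Turbo Pascal's
Mark/Release (all names are illustrative; the overflow check is
omitted for brevity):

#include <stddef.h>

static char   region[1 << 20];   /* one fixed pool */
static size_t region_pos = 0;

void *region_alloc(size_t size)
{
    void *p = &region[region_pos];
    region_pos += (size + 15) & ~(size_t)15;   /* keep blocks aligned */
    return p;
}

size_t region_mark(void)           { return region_pos; }
void   region_release(size_t mark) { region_pos = mark; }

Everything allocated after a mark is discarded by a single assignment
in region_release(), which is why such pools are so cheap when you
have no gc.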

Great postings, an eye opener (I did know the OS would free your memory;
I just never thought, though experienced, that freeing it yourself would
slow you down).

As for things that might be affected: imagine a Makefile calling gcc, or
IncrediBuild calling the msvc cl.exe compiler over and over - wouldn't
it be better if gcc/msvc did not free any memory? Especially if you have
a 64-bit machine with more than 2, 3 or 4 GB of virtual memory, that
might be better.

Or CGI, or some kind of converter (jpg -> dds, collada -> 3d model, .wav
-> mp3) and these are scheduled with Make, IncrediBuild, Electric Cloud
or whatever else.

In fact, one could go into these apps and replace malloc with something
like:

#include <stddef.h>

#define HUGE (64 * 1024 * 1024)   /* pick an arena size that fits the app */

static size_t block_pos = 0;
static char block[HUGE];

void *malloc(size_t size)
{
    void *p = block + block_pos;
    block_pos += (size + 15) & ~(size_t)15;   /* keep blocks aligned */
    return p;                                 /* no bounds check, never freed */
}

In fact this is what lots of modern video games do - yes, we try to
avoid allocation where possible (and where not, it's usually isolated in
a pool - say the scripting system, or a trie table, hash table, etc.).
 

Dimiter \malkia\ Stanev

ImpalerCore said:
I don't have a problem with the general idea of letting the OS reclaim
memory, but I don't have the experience to know how and when to make
the decision. If it takes 1, 5, or 30 seconds to end the program,
should that be a sign to just throw the hands up in the air to give it
to the OS to handle? Is there some other process really needing that
extra 5 seconds? Are users complaining about 5 seconds to close the
program? Is it that it costs more to develop the code to free
everything as efficiently, so it's cheaper to just let the OS handle

What if it's a CGI script, or a Makefile calling something to build?
 

Dimiter \malkia\ Stanev

Ben said:
OK, smiley noted, but that is not a comparable situation. Freeing up
the memory used by a data structure is a proper part of its
implementation and testing that such clean-up functions work is a
proper part of the exercise.

In practice, I'd write the memory-freeing code (unless, for some
reason, it really was very complex to write) and simply exclude the
freeing-up code (#ifdef TIDY_UP) prior to exit if I found it was
taking too long.

PS, yes I've used an OS that does not free memory on program exit:
TRIPOS (and yes, there was a good reason for the OS to be designed
that way).

I believe Symbian is like that, and also WinCE if you allocate heaps
(but I'm not sure). Also old Windows 3.0 - 3.1 with GlobalAlloc() (I
think so).
 

Dimiter \malkia\ Stanev

Ertugrul said:
Okay, first let's talk about C. In C memory allocation and freeing is
not quite fast, so you're indeed making a good point. There is also no
garbage collector, which does that job for you. However, there is still
a very important reason to free your memory. At some point in time you
may want to restructure your program, perhaps to enhance it, at which
point it will get you into trouble. We're talking about safety and
modularity here.

On my system it's pretty fast, but I redeclare malloc/free:

#include <stddef.h>

#define HUGE (64 * 1024 * 1024)   /* pick an arena size that fits the app */

static size_t block_pos = 0;
static char block[HUGE];

void *malloc(size_t size)
{
    void *p = block + block_pos;
    block_pos += (size + 15) & ~(size_t)15;   /* keep blocks aligned */
    return p;
}

void free(void *p)
{
    (void)p;   /* deliberately a no-op: the OS reclaims it all at exit */
}
 

Dimiter \malkia\ Stanev

Ertugrul said:
Of course, but as you say that's system-specific and you're left with no
choice anyway. But if you have a choice, you should choose correctness
over small performance gains.

Why? It's like finding an umbrella after it has stopped raining.
 

Dimiter \malkia\ Stanev

Nick said:
if you don't notice them they don't matter!


I too have spent many a happy hour hunting memory leaks.


mine actually were leaks. The memory concerned no longer had any
references to it and wouldn't be freed on shutdown.

Every time a timer was started a tiny little bit of memory went
missing... Over a period of weeks this begins to matter!

Which means you did not shut down and restart your application. If
that's the case, then you have to free the memory; but there are lots of
applications that live for a relatively short time and are scheduled by
build tools (Makefiles, or others) - for them, freeing the memory and
closing the files is not needed.

If at some point you decide to make these tools part of a DLL or .so -
containing them instead of calling them - then you have to free their
memory. Alternatively, provide them with a memory block to allocate
from, still never free individual objects, and once they are finished
free the whole block (autorelease pools in Objective-C, or Mark/Release
blocks in old Pascal; for "C" I would wrap Doug Lea's mspaces and hand
one to them - see the sketch below).

We do this with video games - a new level starts - there is some kind of
stupid memory manager (usually fixed-pool allocator), and when the level
finishes, we just kill the whole thing.
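
A sketch of that "hand them a pool" idea with Doug Lea's mspaces (this
assumes dlmalloc built with MSPACES enabled; level_parse is a
hypothetical client, not part of dlmalloc):

#include "malloc.h"   /* dlmalloc's own header, not the standard one */

void run_level(void)
{
    mspace ms = create_mspace(0, 0);   /* 0 = default capacity, unlocked */

    level_parse(ms);      /* allocates with mspace_malloc(ms, n) and
                             never frees individual objects */

    destroy_mspace(ms);   /* one call reclaims the whole pool */
}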
 

Dimiter \malkia\ Stanev

Ertugrul said:
That's neither pragmatic, nor a solution. Pragmatic would be
determining the scope of a resource and freeing it after that scope is
left. The solution is to find and fix the bugs. What you're proposing
here is just a workaround for bad program behaviour. Also it leads to
even worse maintenance hells, because now you need to manage processes
as an additional resource.

Are you sure? So none of the garbage-collecting languages are pragmatic
for that matter?

And nothing is truer than "The solution is to find and fix the bugs",
although there is not one program running right now without a bug (yes,
Knuth's are an exception); all of them have bugs.
 

Dimiter \malkia\ Stanev

Andrew said:
That might explain why it also takes ten seconds to load, while
Opera takes four and Chrome just over one.

IME since 2.1 (IIRC) Firefox has been progressively slower and
bloatier. The latest version was a slight uptick, but not much.

No, that does not explain it. What it explains is that each of these
browsers is a totally different application from the others - each took
different strategies, or it just happened that way along the road.

Newer Windows versions come with dynamic page swapping, so it hardly
matters how much virtual memory the browser uses (come to think of it,
it does matter, but 500 MB is okay). Ten years ago that number would
have been, say, 15 MB or even 1.5 MB; in 5 years it might as well be
5 GB without a problem.

The important thing is to scale with the times. I don't really care for
an executable that's only 100 KB and uses another 100 KB, if that
executable is my browser.
 

Dimiter \malkia\ Stanev

Robert said:
["Followup-To:" header set to comp.lang.c.]
jacob said:
lcc-win proposes a garbage collector in its standard distribution.
All the problems above are solved with a gc

Why insist on "lcc-win" when the garbage collector can be linked into
any C program on any hosted implementation?

robert

It probably can, as long as you don't play tricks. I've never succeeded
in using one with my apps, as I did lots of pointer fixups when loading
raw data, and used unions with casts to different types.
 

toby

That might explain why it also takes ten seconds to load, while
Opera takes four and Chrome just over one.

IME since 2.1 (IIRC) Firefox has been progressively slower and
bloatier.

My experience chimes with yours and Richard's. While a long-time fan,
I have recently and reluctantly had to abandon Firefox on at least
three platforms (typically in favour of Opera) as a result.

Memory and CPU bloat seem rather important problems for Firefox to
address. I have no doubt that they have the brainpower to do so, but
is it on their radar yet?
 

Nick

Dimiter \"malkia\" Stanev said:
What if it's a CGI script, or Makefile calling something to build?

A CGI program sounds like a perfect example to me, although earlier in
the thread someone was claiming that PHP did this and that a direct
consequence was the security holes PHP introduces (I paraphrase, but
not, I think, incorrectly).

Consider a CGI program that generates a small graphic on the fly, and a
web page contains a few dozen of them.
 

jacob navia

Richard Heathfield wrote:
And before Kaz sets off on another silly rant about psychiatry, it
sounds to me like Laszlo is reaching for the word "elegance". Kaz may or
may not appreciate the concept, but it is a genuine and valid concept
nonetheless. For a program to leave its "housekeeping" to the OS is
inelegant.

Well, I find programs loaded with redundant code much more inelegant.
MY criterion for elegance is small size, speed, doing the maximum with
the minimal set of code that fulfills the task at hand.

I have spent hours DELETING code from lcc-win or the IDE or the
debugger, trying to reduce the size of the program - an activity that is
not done in today's environments. "Cleaning up" means (for me)
reorganizing the code so that it has FEWER statements, factoring
commonly used pieces of code into routines, eliminating unneeded
features, etc.

Keeping redundant code in my program is (for me) a big mistake. The
less code a program has, the easier it is for anybody to understand it.

I avoid as much as possible recoding routines the OS already provides;
for instance, I use Windows routines for directory management as much as
possible instead of writing "portable" ones. Under Unix, I use what the
OS offers for the same tasks, to keep my code SMALL.

The net result is that my programs are incredibly SMALL. For instance, I
have packed a debugger, a project management utility, an editor, grep,
diff, and many other utilities into an 800K program (32 bits). In many
other languages a "Hello world" program is already that big...

The lcc-win compiler is 651 808 bytes long, including an assembler,
an optimizer, and the preprocessor. GNU's "cc1" is 2 623 488 and probably
doesn't even include the assembler.

Obviously we have different meanings for "elegance" here.

jacob
 

jacob navia

Ertugrul Söylemez wrote:
I'd rather listen to Linus Torvalds than to Kaz Smartass Kylheku. Linus
even advocates the use of goto for proper cleanup before exit, be it on
success or error. It's not that I agree with Linus in all respects, but
at least he knows the value of safe, correct code.

Well, OBVIOUSLY in an operating system's own code it would be a VERY bad
idea to rely on the OS to do the cleanup, wouldn't it?

Please try to see things in context. Nobody is advocating forgetting to
clean up when cleanup is necessary. We are talking about the useless
cleanup that is done just before exiting the program...

If we are going to argue RATIONALLY, without pissing contests, we should
limit ourselves to that context.
 

Ersek, Laszlo

reaching for the word "elegance"

Yes; thanks for the help formulating this!
lacos

(PS. I also appreciate that you could unearth my forename from the
"From:" header. The Hungarian ordering of first and last name is
reversed (as in Japanese). I thought applying a phone-directory format
would help - alas, it hasn't worked out very well (till now). I didn't
simply switch the order to match the English one because I post in
Hungarian groups too, and I suspect it would draw stares.)
 

Richard Tobin

Seebs said:
But it's not necessarily a programming error to let the operating system
deallocate resources.

Incidentally, do those who believe programs should free() all
malloc()ed memory always catch interrupt signals in case the user
kills the program leaving memory allocated? It would be intolerable
if interrupting a program caused a memory leak...

Presumably they also believe that programs should always return to
main() rather than calling exit(), abort(), or assert(), in case the
operating system does not reclaim the stack. (Though they probably
have to do this anyway for malloc()ed memory.)
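
(For what it's worth, the portable shape of that obligation is not to
free() inside the handler but to set a flag and clean up in the main
loop; everything below is an illustrative sketch, and do_work() is a
hypothetical stand-in for the program's real work:)

#include <signal.h>
#include <stdlib.h>

static volatile sig_atomic_t got_sigint = 0;

static void on_sigint(int sig)
{
    (void)sig;
    got_sigint = 1;   /* only async-signal-safe work in a handler */
}

static void do_work(void)
{
    /* ... one unit of the program's real work ... */
}

int main(void)
{
    char *buf;

    signal(SIGINT, on_sigint);
    buf = malloc(1024);

    while (!got_sigint)
        do_work();

    free(buf);        /* the "free everything" camp's obligation */
    return 0;
}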

-- Richard
 

bartc

Richard Heathfield said:
Ersek, Laszlo wrote:



And before Kaz sets off on another silly rant about psychiatry, it sounds
to me like Laszlo is reaching for the word "elegance". Kaz may or may not
appreciate the concept, but it is a genuine and valid concept nonetheless.
For a program to leave its "housekeeping" to the OS is inelegant.

In that case perhaps the malloc() family of functions should provide an
extra function such as 'freeall()' (or free(-1) or some such scheme).

Then this will free all the blocks previously allocated with malloc(), since
the implementation knows exactly where all the blocks are, and can do so in
an efficient and linear manner. Or it can forget the individual blocks and
just free the larger blocks it has obtained from the OS.

The alternative, using free(), relies on knowing exactly where all these
blocks are, knowledge which might exist in dozens of pieces of code, or
embedded in thousands of bits of data, and may in the end not even result in
the OS reclaiming the memory, so it could be a waste of time.

(I'm not suggesting a program should rely on freeall(), but if a program
*is* going to exit at any particular point, it makes sense to make use of it
rather than waste time.

But is it also possible that, on program exit (return from main() or on
exit()), a malloc() clean-up routine is already called to do exactly what
I've suggested?)
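
(As far as I know, no: the C runtime is not required to walk the heap at
exit; a typical OS simply reclaims the process's pages wholesale. And a
freeall() could be layered over malloc() by chaining every block through
a hidden header - track_malloc()/track_freeall() below are hypothetical
names, not a real libc extension, and individual frees are omitted for
brevity:)

#include <stdlib.h>

union hdr {
    union hdr *next;
    long double align;   /* force worst-case alignment for the payload */
};

static union hdr *all_blocks = NULL;

void *track_malloc(size_t size)
{
    union hdr *h = malloc(sizeof *h + size);

    if (h == NULL)
        return NULL;
    h->next = all_blocks;
    all_blocks = h;
    return h + 1;        /* payload starts just past the header */
}

void track_freeall(void)
{
    while (all_blocks != NULL) {
        union hdr *next = all_blocks->next;

        free(all_blocks);
        all_blocks = next;
    }
}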
 

Seebs

Redundancy is inelegant. Defence in depth is not.

A very, very good summary.

I'd guess that about 2-5% of the code in my filesystem hackery program
is "defense in depth". Functions which are never called with null pointers,
which check their arguments in case there are null pointers. Functions
which print a string that can't possibly be null, but still print
s ? s : "<nil>"
instead of just passing the string to a printf-like function. (Which, in
turn, I already know prints "(null)" if you ask it to print a null pointer.)
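
The same guard reads well wrapped in a tiny helper (safe_str is an
illustrative name, not taken from the program described above):

#include <stdio.h>

static const char *safe_str(const char *s)
{
    return s ? s : "<nil>";   /* never hand a null pointer to %s */
}

/* usage: printf("user = %s\n", safe_str(name)); */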

It paid off. I've found several fairly significant bugs during some rework
and cleanup, but in the field, we've had exactly one case where a problem
occurred, and the logging information available allowed us to tell the
customer exactly where they had made a usage error that had triggered it.

-s
 
