Sorry, but I disagree. Imagine any kind of tool that makes a single pass over
some data, like a compiler. Imagine it runs out of memory: what should it
do? Prompting the user to free memory is out; it cannot even rely on running
interactively. Yes, it could trigger an internal garbage collection, but
adding one is a significant amount of work, and not even that is guaranteed
to help. It might even cause OOM earlier, due to its own overhead. That's
why a compiler, given a source file that is too large, will simply exit with
an appropriate error. It is not that the problem is impossible to fix, but
that fixing it requires an unreasonable amount of work.
No, make a clean design: define in it what you can do when the amount
of data is much bigger than you may be able to handle, either now or in
the long run. I've written many different interpreters that hung on that
problem and always found a clean solution to get the whole job done
unattended, without leaving the system in a state that hindered other
apps from finishing their own work properly after mine was done.
It is easy: simply define sequence points where you can stop the current
work, process the data you have already acquired (even partially, if
needed), and afterwards restart the work from that sequence point. Crap
like xmalloc() makes it forever impossible to complete such a mission.
There is no need at all to let an app fail only because it runs into one
unsuccessful call to malloc().
That is easy: just store them in a global. Did you perhaps miss the 'to some
extent' above? If you are writing to a temporary file, you simply declare
the FILE* as a global, and if it is non-null you clean it up in an atexit()
handler. Simple and efficient.
Oh, so you are one of those people who write completely unmaintainable
code? I inherited such a thing from someone else and ended up with an
impossible mission: a simple bug could not be fixed because it was not
clear which global was used under which condition. A rewrite of the
whole thing was needed to fix that one simple bug without introducing
another dozen bugs in its place.
Avoiding that mass of global variables also had a nice side effect:
memory usage went down, because some static and malloc'd arrays could be
split into more flexible lists.
Productive, failsafe code is up to 90% nothing but strategy for
recovering from CRT and operating-system errors.
Seriously, you are missing the point _COMPLETELY_. xmalloc() is not the holy
grail, it is not the remedy for any code cancer but it is a tool which
competent programmers can decide to use or not use. If it doesn't work for
your code then so be it, but nobody claimed that it should do that. It is
surely not the right tool for a multithreaded server application because
one too large client request would cause DoS for all clients. It would not
be correct to use it in a library, because there it would force the error
handling on the user of the library and thus strongly reduce its
applicability. Nobody claimed that xmalloc() was the solution there.
Oh, a library that handles errors by itself while requiring the app
to use global variables is so buggy that it is completely unusable
and to be avoided on sight.
Uh, you were saying that you have to use globals to be ready to handle
lack of memory without data loss.
C'mon, that is just bullshitting. If you understood why globals can be
dangerous, you would also understand when they can be used without danger
and even to advantage. Don't pin arguments on me that I didn't make.
True, there are rare cases where globals can be really helpful. But
those are small exceptions.
--
Tschau/Bye
Herbert
Visit
http://www.ecomstation.de the home of German eComStation
eComStation 1.2R in German is here!