xmalloc string functions

R

Richard

Flash Gordon said:
Kelsey Bjarnason wrote, On 30/01/08 11:43:
[snips]

Maybe for you. I, on the other hand, can be running three VMs eating up
a bit over three quarters of the memory, Lotus Notes, another email
client (yes, I have good reasons for using multiple email clients
simultaneously) and several other applications.

Likewise. This box has 2GB RAM installed, and it runs fairly often
at near capacity.

I'm glad I'm not the only one filling the RAM of a 2GB
machine. Actually, one of my colleagues is also finding the 2GB of RAM
is rather tight.
Well, an out of memory error is one thing; I don't mind if my word
processor tells me it can't allocate enough memory for another
document.

It telling me it cannot open another window is fine; killing the word
processor and all the other documents would not be. I *expect* to find
I don't have the memory to do some things sometimes and have to close
down one of the many things I am doing.
I *would* be a tad upset if on doing something as trivial as pulling
down a menu, it ran out of memory and died, taking my edits with it.

Indeed.

"Indeed".

Wonderful. One couldn't make it up.
 
R

Richard

Flash Gordon said:
People have not been saying "continue as if nothing happened".


I don't have time. I'll use whatever I choose until I'm aware of a
severe enough problem to make me change or I find something better.


There are several reasons I am not fond of X already.

Please list them. I would be intrigued to see what your mighty brain has
rejected that the rest of the world can cope with quite happily.
 
R

Richard

Flash Gordon said:
Ian Collins wrote, On 30/01/08 01:11:

This is a company machine, not my personal machine. When it was bought
getting any more RAM in it would have cost an extra 400UKP or so and I
was not going to get away with that.


Not always possible either because you don't own the purse strings (a
company machine) or the cost is prohibitive, or the machine already
has the maximum the hardware supports.

I could probably now get the company to upgrade the machine from 2GB
to 4GB, but then I would probably just get the machine doing more
things and still run out of memory.


Fortunately not all and fortunately the ones I need to use have not
collapsed due to simple memory exhaustion.

What happened to swap btw?
 
R

Richard

Flash Gordon said:
Richard Tobin wrote, On 29/01/08 18:01:

So which of those does the rest of glib use? You would have to avoid
calling any of the functions which call g_new() not just avoid calling
it yourself.


Well, I get the Lotus Notes client reporting that it doesn't have
enough memory to open a new window a *lot* more frequently than I get
it crashing, and the HW it is running on has yet to fail.

Maybe because all the petty checks of almost impossible memory failures
have used all the memory up?
 
R

Richard

Flash Gordon said:
Ben Pfaff wrote, On 30/01/08 03:15:

Note that I stated "out of resource message". This is because
1) I have not bothered to note the exact message
2) I think it is something along those lines

3) You made it up to support your view.
It is also possible that VMware has changed since I am using
Workstation version 6 latest build. Our Server and ESX machines have
enough physical RAM for all the machines they run so there is no
reason for them to run out.

But what about all the email clients you use?
Of course, I might have been lucky and only hit the instances where it
is trapped. However, if it starts aborting on me I will be very quick
to complain to VMware and also investigate the alternative.

Do you pay for VMWare? Would one crash in a year offset the advantages
it has brought you? Or would you prefer a total recall and NO VMWare
until they redevelop it using your top notch best practices?
Lotus Notes has actually reported an "out of memory" error on several
occasions. Again, for all I know there could be lots of places it does
not check and I've been lucky.

Yes.

Considering the way you are damning certain Linux libraries I can assure
you they have never failed on me yet and I have a LOT of apps running.
 
R

Richard

According to the glib memory allocation documentation page, this cannot
be true, as the page clearly states, right near the top, that allocation
either *works* or the application *terminates*.

Some of the functions - including the one mentioned - explicitly say
that they *don't* exit. Not to mention the clue in the function name.

Now obviously it's a flaw in the documentation that it talks as if all
the functions aborted, but you don't want to rest your whole argument
on that flaw, do you? One mistaken statement, and you damn the whole
thing?

Given that this has already been pointed out, I am starting to doubt
your good faith.

-- Richard


It is fairly obvious to me that Flash and Kelsey's only agenda in this
thread is to big up their own perfection. They are talking absolute
rubbish and should know better. Kelsey has now totally rubbished a huge
% of Linux apps because he's too big headed to realise that the error
situations he talks about virtually never, ever happen in the real
world.

All the world is a trade off.
 
R

Richard

Malcolm McLean said:
The problem is, what is fixitup() going to do? Probably it needs
access to objects within the function's local scope to clean up. So it
cannot be a general-purpose function.

Of course not. It's a typical Falconer botch job.
 
M

Malcolm McLean

Richard said:
Please list them. I would be intrigued to see what your mighty brain has
rejected that the rest of the world can cope with quite happily.
The requirement for a colormap just to open a window is stupidly
complicated. By default there should be 2, 16, and 256 colour palettes
defined, then anyone who needs special colours messes about with maps.

Flashing cursors have to be done by hand, as do spinning spinners and
scrolling scrollbar buttons, and there isn't even a synch() command to get
the vertical retrace.

User messages cannot contain pointers so you cannot build a toolkit layer on
top of the existing event system.

I'm no Xlib fan. That's why I'm writing Baby X, my easy to use X toolkit.
The entire thing was built for a remote client / server paradigm that has
been shown to be impractical. However it does get windows up on Linux boxes,
and that is usually all that you need.
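
For anyone who hasn't touched raw Xlib, the "get a window up" case is roughly this (just a sketch with no error handling, using the default visual and colormap):

#include <X11/Xlib.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (dpy == NULL)
        return 1;

    int screen = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                     0, 0, 320, 200, 1,
                                     BlackPixel(dpy, screen),
                                     WhitePixel(dpy, screen));
    XMapWindow(dpy, win);
    XFlush(dpy);

    /* a real program would run an event loop here */

    XCloseDisplay(dpy);
    return 0;
}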
 
A

Antoninus Twink

Chuckle. You really are in love with yourself, aren't you?

Maybe you should go and rewrite Qt etc. too, eh? Knowing what you know,
etc. Just make sure it's as efficient, if not more so, than it was before.

It's clear that many of the "experts" in this group have *never* written
a substantial C program - they just hope to impress the world
with the sheer arrogance of their grandstanding.
 
A

Antoninus Twink

Please list them. I would be intrigued to see what your mighty brain has
rejected that the rest of the world can cope with quite happily.

Let me guess... problem #1, X is actually used in the real world.
Immediately this makes it distasteful to the rarefied types who inhabit
this group and want C kept in the academy, used only for trivial linked
list routines and the like.
 
R

Richard

Kelsey Bjarnason said:
[snips]
The people (person) screaming that the sky is falling is just trying to
push an anti-GNOME agenda, it's really not anything more than that.

I.e. he is a troll, and you have fed him dinner and offered him a place
to stay for the night.

That fact should have been obvious to you immediately upon reading his
message which was full of feigned shock and in his followup message
saying that he has never liked GNOME but couldn't place why he had such
negative feelings for it. Riiiiight.

That would have been me, you miserable, insufferable, dishonest little
prick.

It would appear he has your number. Here is an active contributor to OSS
commenting on his acknowledged errors and all you can do is fixate on
something you appear to know nothing whatsoever about. You are all
theory and no practice.
Let me explain this to you in such a manner even your defective little
excuse for a brain can grasp it.

Aha, you are back to insulting people. It is no wonder you are so
universally derided in this group.
My objections to Gnome are *purely* a matter of personal taste. I
*prefer* other options. I have *never* said Gnome was a bad idea or
didn't deserve to be continued. I *have* stated, explicitly, I think it
*should* continue.

You have stated that it's a load of rubbish.
 
F

Flash Gordon

Totally wrong. For starters, a dialog has text inside. Try

Yes, and you *know* what that text is, so you *know* how much memory it
requires.
to figure out how much memory a given string of text will
take (hint: you can't).

Hint, Xlib does not use a random number generator to draw text.
Bitmaps would do. Then, figure
out how much memory xlib needs for given drawing operations
(this is what I meant by "you need more").

Yes, and you know exactly what needs to be drawn.
Good joke.

I will admit that most people seem incapable of reading documentation.
Yep, draw the dialog hidden by other windows.

On Windows you can draw it whilst it is flagged as hidden. At least, you
could using the last GUI toolkit I used on Windows.
It will be drawn
nicely, except user won't see it. And when user brings it up,
you won't draw it because you "flushed the event queue".
You *have* to paint when X says to paint if you want the user
to see it.

Read the documentation. Flushing the event queue means your client will
send all of the pending events to the X server for actioning. I was
actually looking at the documentation when I wrote that. I.e. flushing
the event queue is analogous to flushing stdout. It also might free up a
little memory!
While a nice nit, it's just a BS nit. Xlib says "draw_whatever()
failed". Great. Handle it. But user won't see that silly text you
wanted to draw. Or are you talking about something else?
Perhaps about "some morons don't know to handle errors"?

Someone was talking about the X server hanging on out of memory. If they
are not handling X error you could get the appearance of hanging without
the actuality.

If the routines are failing you might be able to clear out the events
queued up for your process, freeing up memory to do things.

You might eventually reach the point of logging a failure to a file (or
failing in the attempt) and terminating the application.
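
For what it's worth, installing your own Xlib error handler looks roughly like this (a sketch only; whether carrying on is the right policy depends entirely on the application):

#include <stdio.h>
#include <X11/Xlib.h>

static int on_x_error(Display *dpy, XErrorEvent *ev)
{
    char msg[256];
    XGetErrorText(dpy, ev->error_code, msg, sizeof msg);
    fprintf(stderr, "X request failed: %s (request code %u)\n",
            msg, (unsigned) ev->request_code);
    return 0;   /* the return value is ignored by Xlib */
}

void install_x_error_handler(void)
{
    XSetErrorHandler(on_x_error);   /* replaces the default handler */
}
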
"Some of the time"? Don't you need it to show the dialog
when you need it to? Most of the time X server has enough
memory for everything, it's not so interesting.

Some of the time the X server is running on a different box to the
application (yes, I *do* use X like this); some of the time it is on the
same box. There are too many variables to say that every time you will
succeed. Equally when you try an emergency save you might find the disk
is full.

I can't see anything cryptic about it.
"Better" is from enhancement requests department.

An application that works 1 time in 10 is better than one that never
works (in some circumstances) so do you initially deliver an application
that never works?
Didn't we
talk about saving user data and stuff like that?
Yes.

Of course
you should try to notify user! But it's just a bonus task.
No.

Spawning a small process which will show up a dialog has
much more chances to succeed than trying to show a dialog
in your process on OOM.

Depends on whether your process is out of memory or the machine.
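
For reference, the spawn-a-helper idea would be something like this on a POSIX system (a sketch; "oom-dialog" is a made-up helper, and fork() itself can fail when the whole machine is out of memory):

#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

void notify_user_oom(void)
{
    pid_t pid = fork();                /* may itself fail under memory pressure */
    if (pid == 0) {
        /* child: a fresh, small process stands a better chance of getting
           the memory needed to put up a dialog than the bloated parent */
        execlp("oom-dialog", "oom-dialog",
               "Out of memory: your work has been saved to the recovery file.",
               (char *) NULL);
        _exit(127);                    /* exec failed */
    } else if (pid > 0) {
        waitpid(pid, NULL, 0);         /* optionally wait for the user to dismiss it */
    }
}
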
Send a message over dbus to the
notification daemon (cheaper than a dialog), whatever.
It still has zero to do with how application can or can
not work after malloc() returned NULL.

Wrong, because you might be able to suspend the application until the
user has managed to free up memory and click on the retry button if that
is an appropriate strategy for the application in question.
It's not a framework. It's *nothing*.

Looks like an if statement testing the result of malloc to me.
Background print jobs, background syntax highlighting,
background recalculating text layout, background spell
checking, background resizing, background everything.
There are hundred different things going on in background.
You need a big fat OOM notifier, some central thing
which could signal OOM condition so that all those
guys could cancel themselves. You do not do

if (!(ptr = malloc(12)))
{
    // cancel background print jobs
}

So how does your "big fat OOM notifier" get triggered if not on testing
the return value of malloc?

Remember also this was just one example of a possible recovery strategy,
not the one to be used on every application.
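
I.e. even a "big fat OOM notifier" ends up being driven by exactly that test. Something like this (only a sketch, every name invented):

#include <stdlib.h>

#define MAX_OOM_HANDLERS 16

typedef void (*oom_handler)(void *user_data);

static struct { oom_handler fn; void *data; } oom_handlers[MAX_OOM_HANDLERS];
static int oom_handler_count;

/* background jobs (printing, spell checking, ...) register a cancel callback */
void oom_register(oom_handler fn, void *data)
{
    if (oom_handler_count < MAX_OOM_HANDLERS) {
        oom_handlers[oom_handler_count].fn = fn;
        oom_handlers[oom_handler_count].data = data;
        oom_handler_count++;
    }
}

static void oom_notify(void)
{
    for (int i = 0; i < oom_handler_count; i++)
        oom_handlers[i].fn(oom_handlers[i].data);
}

/* the notifier is still driven by testing malloc()'s return value */
void *malloc_or_notify(size_t size)
{
    void *p = malloc(size);
    if (p == NULL) {
        oom_notify();          /* ask background work to cancel itself */
        p = malloc(size);      /* retry once in case that freed anything */
    }
    return p;                  /* may still be NULL; callers must check */
}
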
Great! A question, what did the event dispatcher
(or whatever) do when it didn't have memory to
process an event (or whatever)?

In this case it was more likely not to have time to process all events
that could be passed from one thread to the other (in fact it was pretty
much guaranteed it could not) so the solution was to throw away events.
This was a deliberate design decision that was correct for this
application, not a general solution for all applications.
If it didn't allocate
memory, what did the application do when it didn't
have memory to process a key press?
"Threads" doesn't really sound like something exciting
or advanced; boring things like handling key press
are a real big job (it's not what I do thanks to
the toolkit; but your application must handle it
itself for sure).

I would have to dig in to whether it is the OS or the application that
generates all events to determine what would happen in every situation.
Just as I would have to dig in to the behaviour of the keyboard buffer
to find out with the interactive non-GUI applications.
Pure total BS. You are not designing an application
starting from Xlib. If you are writing a toolkit
for your application, and implement X protocol, then
maybe.

If I don't call Xlib (or the toolkit) from a given place then the
application is not allocating memory in that place. Of course, the X
Server thread might be, but that is a completely separate process that
could be running on a different box.
Even on Windows, free from many X niceties,
you are not designing the controls you use (unless
you are Borland or Microsoft, in which case you are
hundred times right).

No, but you are still in control of when you call in to the toolkit.
Frankly, if everything was designed from ground again (and
that includes X, not "just" some funny libraries with funny names
starting with 'g'), it would be possible to handle OOM nicely.
It would require big amounts of resources [1] to design and write,
and then big amounts of resources to fix it and get it working
(the OOM handling part, that is). But, as it is, it's from the
ideal world software department.
Just because the world is not ideal is no excuse to make things worse.

What's worse again?

Re-read the thread.
Sure. But I am talking about: a user using GFrobnicator or
KThing losing his data because of OOM.

Everyone who has posted to this thread is a SW user including me.
 
R

Richard

Kelsey Bjarnason said:
[snips]

Segfault is here:

if (!(ptr = malloc(size)))
{
    // Right here, in the untested code.
    // Not because you access NULL, no,
    // just a plain normal bug.
}

Why would you write your code that way?

Let's see:

FILE *fp = fopen...

if ( ! fp )
    fwrite( ..., fp );

Do you generally write to (or read from) files you can't open? No? So
why would you write to or read from a pointer you can't allocate?

If you don't see the difference then god help you. It is a frequent
occurrence for config files or default data files to be missing. It is not
frequent for a VM OS to deny a malloc of 32 bytes. People can delete
files by mistake. They cannot, however, tell the OS to refuse your app
32 bytes without seriously compromising their system and that program's
ability to work correctly.
 
R

Richard

Malcolm McLean said:
The requirement for a colormap just to open a window is stupidly
complicated. By default there should be 2, 16, and 256 colour palettes
defined, then anyone who needs special colours messes about with maps.

Trivial nonsense.
Flashing cursors have to be done by hand, as do spinning spinners and
scrolling scrollbar buttons, and there isn't even a synch() command to
get the vertical retrace.
Bah.


User messages cannot contain pointers so you cannot build a toolkit
layer on top of the existing event system.

I'm no Xlib fan. That's why I'm writing Baby X, my easy to use X
toolkit. The entire thing was built for a remote client / server
paradigm that has been shown to be impractical. However it does get
windows up on Linux boxes, and that is usually all that you need.

And let's see what X does indeed give:

*insert huge list of wonderful X features not least sux and
x-forwarding*
 
F

Flash Gordon

Jeffrey Stedfast wrote, On 03/02/08 19:35:
but later you say there are no one-size-fits-all solutions? :)

That particular part of it is a common element that can be reused where
appropriate :)
If the toolkit being used is not one of those, then it is irrelevant that
some provide a means to do so, particularly if the "some" are not
available for the platform being targeted.

You can always go straight to the X API or the Windows API or whatever
for the emergency code.
I never said "badly designed", though I would agree "sub optimal in an
ideal world". There's a difference (to me, at least).

There is a whole range of design quality, and it is not even a line, more
of a space with several dimensions. Perhaps I should have said "badly
designed in this respect" since in other respects they might approach
perfection.
How can you assert this?

There was meant to be an if in there. As in, "However, if it has enough
memory to do it."
I'll agree with that, and wherever I use malloc() directly (or
g_try_malloc()) I do write error handling, which may or may not include
attempting to pop up an error dialog, depending on the situation.

Well, that is good :)
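
For anyone following along, that checked pattern is simply something like this (a sketch; it assumes GLib headers are available and that the caller copes with NULL):

#include <string.h>
#include <glib.h>

gchar *copy_bytes(const gchar *src, gsize len)
{
    gchar *copy = g_try_malloc(len);   /* returns NULL instead of aborting */

    if (copy == NULL) {
        /* report it however the application sees fit, then let the caller cope */
        g_warning("out of memory copying %" G_GSIZE_FORMAT " bytes", len);
        return NULL;
    }
    memcpy(copy, src, len);
    return copy;
}
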
Right, but as you mentioned was a problem for xmalloc(), we have the same
problem here. Not enough context for most real-world applications to
recover at this point.

OK, yes, but if you override the malloc/calloc/free when running your
emergency recovery code you can use your pre-allocated block for the
allocations that Gtk+ does so avoiding further out-of-memory problems :)
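
What I have in mind for the pre-allocated block is roughly this (only a sketch with made-up names; actually routing a toolkit's allocations through it is the hard part):

#include <stddef.h>
#include <stdlib.h>

static unsigned char emergency_pool[64 * 1024];   /* reserved at start-up */
static size_t emergency_used;
static int in_emergency;

void emergency_mode_begin(void) { in_emergency = 1; }

void *app_malloc(size_t size)
{
    if (!in_emergency)
        return malloc(size);

    /* emergency: bump-allocate from the reserve, 16-byte aligned */
    size_t aligned = (size + 15u) & ~(size_t) 15u;
    if (emergency_used + aligned > sizeof emergency_pool)
        return NULL;                               /* even the reserve is gone */
    void *p = emergency_pool + emergency_used;
    emergency_used += aligned;
    return p;
}

void app_free(void *p)
{
    /* blocks from the pool are simply abandoned; the process exits soon anyway */
    unsigned char *q = p;
    if (q >= emergency_pool && q < emergency_pool + sizeof emergency_pool)
        return;
    free(p);
}
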
Easier said than done, not that it /can't/ be done - but one could easily
argue that this is more effort than it is worth, and unless you are able
to test your failure cases thoroughly, not even reliable.

As with all of life there are tradeoffs to be had.
It is /more/ reliable to routinely auto-save the user's work (as you
mentioned elsewhere, to a file other than the original) because it is
much easier to warn users about problems (potential or no) and certainly
easier to implement recovery should the application crash due to
uncontrollable (kernel crash, power outage, etc) error conditions on the
next application startup.

I agreed that this can be part of your recovery strategy.
Depending on the document, one could write the application such that any
button click (or whatever) would cause an auto-save in addition to some
timeout, thus reducing the likelihood of there being any unsaved changes
at any given point in time.

Or do what some editors I used to use did and literally save all changes
as the user went along. This was saving into a recovery file, not over
the original, and one recovery file would cover all the work done to all
files in that session. The best was the one where you could literally
sit watching it retype everything...
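
The logging approach boils down to something like this (a sketch; the file format and names are invented):

#include <stdio.h>

static FILE *journal;   /* opened once at start-up, so no fopen() is needed later */

int journal_open(const char *path)
{
    journal = fopen(path, "a");
    return journal != NULL;
}

int journal_record_insert(long offset, const char *text)
{
    if (journal == NULL)
        return 0;
    if (fprintf(journal, "I %ld %s\n", offset, text) < 0)
        return 0;
    return fflush(journal) == 0;   /* push each edit to the OS straight away */
}
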
Since you obviously need this auto-save functionality in place if you are
serious about protecting the user's data at all costs anyway, then it
becomes no longer necessary to chain malloc() failures up your call stack
in order to use static emergency buffers.

At this point g_malloc() calling abort() becomes a moot point,
particularly if your auto-save code is robust against memory allocation
errors (keeping a small subsection of code bug free and robust against
all possible error conditions is a lot easier and less costly in
developer time than it is to do that for an application several million
lines of code long).

You should *still* do your damnedest to pop up a dialogue box so the
user knows the crash is due to out-of-memory! Also you want the program
to exit using exit not abort otherwise files might not be flushed before
being closed. Especially important if you use the method I just
suggested of logging everything as you go along rather than an autosave.
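
I.e. if you are going to have a fatal wrapper at all, something more like this (a sketch; the point is the message and exit() rather than abort()):

#include <stdio.h>
#include <stdlib.h>

void *xmalloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL) {
        fprintf(stderr, "fatal: out of memory (%zu bytes requested)\n", size);
        exit(EXIT_FAILURE);   /* exit(), not abort(): open streams are flushed
                                 and atexit() handlers get to run */
    }
    return p;
}
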
Hey, guess what? Evolution did this using an auto-save approach and it
used g_malloc() in much of the application code.

Well, I didn't like evolution anyway, I found it was hogging too many
resources on this machine for not enough benefit ;-)
Different approaches, same end result. Oh, sure, maybe in your ideal
case, the application exits from main() with a 'return 0;' as opposed to
an exit() call (or abort()), but that is irrelevant.

[snip]
You probably need mechanisms to signal the spell checker and print
process anyway to cope with the user choosing to abort them.

This is true, however you still need context information in order to do
so. I never said that the application wouldn't have the ability to cancel
the spell checker or printing, but in order to do so you need context. If
you are in a function being called asynchronously from somewhere which
might not even be your code which may not pass up your particular error
condition, then you are pretty much screwed unless your contexts are all
globally accessible.

Such things are likely to be separate threads, so you just send them an
appropriate signal (not necessarily in the C sense of the word) and give
them a chance to terminate. Then you only need to know how to signal
them which is information you could make globally available (maybe by
exposing a "kill print job" API that keeps static state).
While this may suggest the application (or the libs it depends on) is
poorly designed (or at least not suitably designed), the argument does
little to solve the actual problem at hand.

There are ways.
In the real world of end-user software development (e.g. not software
written for space ships or other areas where human lives are on the line)

Um, those *are* real world situations! ;-)
where the application's design is based on incomplete specifications (as
in they tend to change mid-development)

That applies to SW written for space ships and SW where lives are at
stake as well. Although such things do tend to be better documented and
change controlled.
in combination with insufficient
allotted time,

That certainly applies in the defence industry where I spent 15 years :-/
designing the perfect solution is downright impossible,

Well, nothing is ever perfect!
and so it is, unfortunately, not all too uncommon for the application's
design to be insufficient for every possible error condition.

If this is new to you, then you've never written real-world software and
I would appreciate having your pity... because I, too, would love to
live in Ideal World where I have sufficient time and specifications to
use in order to come up with a proper design before I'm forced to begin
implementation :)

In the real world when asked to reduce an estimate I've been known to
sit and think for a few minutes and then say...

"Well, in my opinion you don't really need these pieces of functionality
since the things they are there for can be said to be covered by these
other things, and so by removing these requirements you remove this
amount of work."

I didn't get any complaints about my response either. I was the expert
and they had to accept my word on it.
[snip]
That it depends on the daemon shows that it is possible, or it would be
a simple case that none do.

Normally they report something the system administrator is able to
understand. At least, most that I use do.

Key word: most :)

The others *could* and not doing so is another aspect of poor quality.
Also a reason I would consider switching to an alternative.
Agreed in so much as they are not an ideal solution to the failing malloc
problem :)

Well, that is a start :)
They are, however, /a/ solution to the problem and might, in some
situations, be more than ample.

Better than dereferencing a null pointer. An application specific
alternative is better than a general purpose one as it can do
application-specific clean-up that a generic wrapper can't.
As have I, in my gui applications even.

So yours aren't all bad ;-)
In an ideal world, perhaps. If you've already got an auto-save feature
then it is not necessarily worth the extra effort.

I would agree that it /is/ worth the effort in the case where the failing
malloc() call is in the auto-save code, however :)

So you aren't as bad as some :)
I would conclude that, like some parts of Evolution, if it is unable to
allocate resources for some non-critical data structure(s), it is
able to report the "out of memory" issue to the user.

I seem to recall you claiming VMWare reported "out of memory" conditions
to the user as well, but as Ben Pfaff noted, VMWare uses xmalloc-like
wrappers as well.

It was some kind of out-of-resource error, and as it was less than a month ago
I've not had it happen again to check exactly what the error message is ;-)
Never said otherwise!

OK :)
Sure, the same goes for any application written on top of glib!

So don't write it on top of glib ;-)
(Not the case if you use g_malloc() of course, but you are hardly forced
to use only g_malloc() just because you link with glib).

Only if none of the other bits you call use g_malloc.
Not necessarily, but I will agree that this is /likely/ the case.

If you attempt to allocate the space on creating the document it is
*definitely* the case that the user will not have had a chance to do
anything with it.
I've used this approach for some simpler applications.

Auto-save is actually not that much different to this.

The logging has the advantage of never having to open a new file
after application start-up, and generally all the resources needed to
write to a file are allocated when you open it :)

Amusing to me is that none of these developers are writing GUI apps or
libs afaict ;-)

Well, I do it occasionally.
It's not hard to find command-line programs and/or general purpose libs
that /are/ robust, like Ben Pfaff's AVL tree library for example, but
none of the ones I know of for writing GUI applications are of this
quality.

If I wanted to write an application that would meet your ideal criteria,
I'd have to write my application from the ground up, including the widget
toolkit. This is not only impractical from the development standpoint,
but also from the user's perspective where the application does not look
like any of his other applications. It would also not be able to share
much with the other applications running on the user's desktop and so
would use a lot more resources than a Good Enough solution.

Of course, you could work on improving the toolkits to deal with the
problem ;-)

Equally, if I had the time I could.
Anyone using glib stand-alone should probably reconsider, especially if
they are writing "mission critical" applications.

Most people, however, use glib via Gtk+ - and being that it is the /only/
practical widget toolkit available to C software developers for Unix, you
can't easily write off glib altogether.

There were widget sets around before the start of the GNOME project and
they have been used successfully.
I honestly would not be surprised if the other major contender in the
widget toolkit space (being Qt) had similar problems wrt memory
allocation failure conditions, but even if it did, you wouldn't be able
to write the application in C afaik (you'd have to switch to c++).

Or write the GUI front end in C++ and the rest in C. Or do the GUI front
end in C# or Java or...
Yes, this is what I've been saying.

So you don't call glib functions from within that code :)
Glad we agree so far.
:)


Well, discussing it here isn't going to get the problem solved. If you
truly feel that strongly about it, then you should either fix the problem
(free software, afterall) or at the very least submit a bug report! ;-)

Oh that I had the time to bug-hunt free SW. Unfortunately I only
occasionally have time to bug-hunt the free libraries I use in the SW my
company sells and definitely don't have the time for other free SW. When
I do have the time I do submit bug reports and/or bug fixes.
You /assume/ that all code paths properly handle OOM conditions
internally and propagate them back up the call stack. But libc is still
only implemented by humans last I checked, so there is a possibility of
bugs.

Yes, and therefore it is pretty much guaranteed that all libc
implementations have bugs somewhere. I would, however, expect the memory
allocation functions to be amongst the most hammered and hence least
buggy parts of the library.
That's a pretty hefty assumption that you CANNOT rely on for mission
critical user data (since that's what your whole argument revolves around
in the g_malloc()-is-evil argument).

It is an evil that could have been avoided.
Because of this possibility, you MUST implement a safety net - aka auto-
save. Once you have auto-save in place and properly written to handle
every conceivable error condition that /it/ may encounter (OOM being
one), then the value gained by using malloc() over g_malloc() in the
remaining areas of the code begins to rapidly lose their practical value
(if the goal is simply to make sure the user's data is saved before
exiting).

Wouldn't you agree?

Personally I would still be likely to switch to a (possibly commercial)
alternative if I hit crashes on out-of-memory and the application did not
inform me that it was shutting down due to out-of-memory. If it lets me
know I am a bit more tolerant, as long as the recovery on restart gets
back my work.

Not always.

OK, *I* always have the choice, even if sometimes it involves changing
job. Not had to go that far yet though :)
For bonus reading, you might check out Richard Gabriel's paper on Worse
Is Better.

GLib's g_malloc() must be "good enough" because more and more Gtk+
applications keep popping up like wildfire just as C overtook LISP due to
the Worse Is Better rule.

What becomes popular is not always determined by what is good. There are
many examples of the worse solution winning out.

Personally I would seriously consider using a language that supports
exception handling if error propagation was going to prove too hard to
be "worth the effort". This would mean switching from C, but I consider
a language to be merely a tool so switching is not a problem.

As to LISP, I never liked it, but that is not an argument for here.
 
R

Richard

Kelsey Bjarnason said:
[snips]

Oh rubbish. If (and it's a big IF) something so fundamental as mallocing
32 bytes fails then the chance of you being able to gracefully retire is
virtually nil.

Ah, I see - you've never heard of multi-user and multi-tasking systems.
Let me explain:

In such a system, more than one program may be executed at the same time,
consuming resources, including memory. If one of the applications
requests a bit of memory _now_ which isn't available, this says nothing
of whether the memory will be available 10 seconds, or even 10
milliseconds, from now.

Such systems have been around since, oh, the 1960's at least, I'm
surprised you've never heard of them.
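
To make that concrete (just a sketch; sleep() is POSIX, not standard C, and whether retrying is sensible is obviously application-specific):

#include <stdlib.h>
#include <unistd.h>   /* sleep() - POSIX, not standard C */

void *malloc_retry(size_t size, int attempts)
{
    for (int i = 0; i < attempts; i++) {
        void *p = malloc(size);
        if (p != NULL)
            return p;
        sleep(1);     /* give other processes a chance to release memory */
    }
    return NULL;      /* the caller still decides what to do about failure */
}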

Even on a single-tasking system, the question of whether an allocation
failure is reason to abort an application is obviously very much in
doubt, as more than one person here has commented that this is not a good
way to design software.

That *you* cannot conceive of cases where allocation failure may not be
fatal isn't relevant: others can.

You seem to miss the point. I can. But they rarely happen, and the fact
that YOU fail to see that you made an arse of yourself by not
reading the documentation properly is not my issue.

I still have some sympathy for NOT checking all mallocs in certain
situations. It matters not about other processes when one's own cannot
malloc 32 bytes.

What have YOU contributed to OSS that gives you the right to rubbish
such people as the Gnome development team?

One of life's sureties is that there's generally a reason for most
things. And you huffing and puffing like some sort of demented walrus
does nothing to change that.

Clearly they weighed up the pros and cons of their design decisions.

It is easy to criticise, not so easy to build a project, in your free
time, like Gnome and Qt etc.
 
M

Malcolm McLean

Flash Gordon said:
Yes, and you *know* what that text is, so you *know* how much memory it
requires.


Hint, Xlib does not use a random number generator to draw text.
You don't understand the situation. X will call for a trivial amount of
memory to handle the font and the like, which is opaque and not accessible
to the user. Nor will it return an error message in this case - although in
practice it will probably generate a BadDrawable.

If the machine is so short of memory that these trivial allocations begin to
fail, the X system has had it. Everything will begin to malfunction. Events
will be dropped, windows won't appear, text won't draw. There's nothing you,
the application programmer, can do.
You might say that this situation shows how badly X was designed. However
imagine if every call to XDrawString() had to be tested for failure
conditions. It would be an intolerable burden on most developers. Not for people
writing software for the space shuttle, but for everyday application programs,
where the result of a failure means loss of a few hours' work at worst.
 
