Linux's approaching Achilles' heel

nbaker2328

Like a runaway freight train, the Open Source Community's "standard
practice" (_faux peer review_ plus shoddy coding standards and casual
dismissal of bug reports pointing out critical flaws http://pulseaudio.org/ticket/158
) is exactly the mind-set that will bring Linux tumbling down the hill
into the valley of the forgotten, non-important OSs that "could have
been".

It is easy to understand that, given the pressure to maintain a
'presence' in the monthly headlines and the desire to outperform the
competition in the number of 'features', some amount of short-cuts
will be taken and code audits skipped so that the next 'distro
release' can announce a new fancy gizmo under its wing. *Some* degree
of this behavior is to be expected in an environment where any "Joe
Six-pack" can start a project and have his code used by and
incorporated into other software downstream. However, I am quite
shocked that the practice is tolerated to the point that it leads to
extremely unstable critical support systems, as detailed in the
following forum threads.

http://ubuntuforums.org/showthread.php?t=612606
http://ubuntuforums.org/showthread.php?t=614962

Nathan.
 
Keith Kanios

Like a runaway freight train, the Open Source Community's "standard
practice" (_faux peer review_ plus shoddy coding standards and casual
dismissal of bug reports pointing out critical flaws http://pulseaudio.org/ticket/158
) is exactly the mind-set that will bring Linux tumbling down the hill
into the valley of the forgotten, non-important OSs that "could have
been".

It is easy to understand that, given the pressure to maintain a
'presence' in the monthly headlines and the desire to outperform the
competition in the number of 'features', some amount of short-cuts
will be taken and code audits skipped so that the next 'distro
release' can announce a new fancy gizmo under its wing. *Some* degree
of this behavior is to be expected in an environment where any "Joe
Six-pack" can start a project and have his code used by and
incorporated into other software downstream. However, I am quite
shocked that the practice is tolerated to the point that it leads to
extremely unstable critical support systems, as detailed in the
following forum threads.

http://ubuntuforums.org/showthread.php?t=612606
http://ubuntuforums.org/showthread.php?t=614962

Nathan.

I wouldn't call audio a *critical* system. If you read the response to
the half-witted comment, you will see why such non-critical systems
would be sacrificed in favor of more critical systems. If you are in
an out-of-memory situation, you will be out of memory across the
board. In those situations, you do the very same thing the human body
does... sacrifice appendages first and keep warm blood pumping to the
vital organs above all else.

A better solution to such a problem would be to front an effort/
campaign to reduce the amount of bloat and unnecessary memory usage.
 
Dan Espen

Like a runaway freight train, the Open Source Community's "standard
practice" (_faux peer review_ plus shoddy coding standards and casual
dismissal of bug reports pointing out critical flaws http://pulseaudio.org/ticket/158
) is exactly the mind-set that will bring Linux tumbling down the hill
into the valley of the forgotten, non-important OSs that "could have
been".

It is easy to understand that, given the pressure to maintain a
'presence' in the monthly headlines and the desire to outperform the
competition in the number of 'features', some amount of short-cuts
will be taken and code audits skipped so that the next 'distro
release' can announce a new fancy gizmo under its wing. *Some* degree
of this behavior is to be expected in an environment where any "Joe
Six-pack" can start a project and have his code used by and
incorporated into other software downstream. However, I am quite
shocked that the practice is tolerated to the point that it leads to
extremely unstable critical support systems, as detailed in the
following forum threads.

http://ubuntuforums.org/showthread.php?t=612606
http://ubuntuforums.org/showthread.php?t=614962

Nathan.

Ah, my friend Nathan, I'm afraid it is you that is the idiot.
I assume these malloc wrappers print a message and then abort.
Do you have any idea what else they can do?

Do you really think a program can carry on and do anything reasonable
when it runs out of memory?

Don't you think it might require something for the program to continue
on? Like maybe memory?

Nevertheless, most of the software I write is middleware, and it does
try to return error indications to the caller on out-of-memory. I
sometimes see dumps produced by programs using my middleware as they
try to report back to the user that something went wrong.
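Dan's middleware approach can be sketched in C. The wrapper name `try_alloc` below is hypothetical, made up for illustration; it just shows the shape of a malloc wrapper that reports failure to the caller instead of aborting:

```c
#include <errno.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical wrapper: instead of printing a message and aborting,
 * return an error indication and let the caller decide how to recover. */
int try_alloc(size_t n, void **out)
{
    void *p = malloc(n);
    if (p == NULL) {
        *out = NULL;
        return ENOMEM;      /* error indication, not an abort */
    }
    *out = p;
    return 0;
}
```

A caller that gets ENOMEM back can release caches, save state, or report the error, rather than dying inside the library.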

If you think you are so smart, find out what the real power of open
source is. Find a better way and submit a patch.

But lose the arrogant attitude.
 
Evenbit

I wouldn't call audio a *critical* system. If you read the response to
the half-witted comment, you will see why such non-critical systems
would be sacrificed in favor of more critical systems. If you are in
an out-of-memory situation, you will be out of memory across the
board. In those situations, you do the very same thing the human body
does... sacrifice appendages first and keep warm blood pumping to the
vital organs above all else.

Oh come on, Keith, you know better than to use the same pithy straw
man that the PulseAudio retard used. We are talking about application
layers that deal primarily with multi-media data... this means the
'desired memory allotment' may run into the tens to hundreds of
gigs... so "across the board" is an extremely weak claim, since it is
very unlikely for any other application's requirement (and this goes
for the other apps currently running) to be anywhere near this size.
A better solution to such a problem would be to front an effort/
campaign to reduce the amount of bloat and unnecessary memory usage.

This can only be successful if it were "drilled into their heads" at
the start of the Freshman programming course and consistently
continued throughout the CompSci regimen.

Nathan.
 
ray

Like a runaway freight train, the Open Source Community's "standard
practice" (_faux peer review_ plus shoddy coding standards and casual
dismissal of bug reports pointing out critical flaws http://pulseaudio.org/ticket/158
) is exactly the mind-set that will bring Linux tumbling down the hill
into the valley of the forgotten, non-important OSs that "could have
been".

It is easy to understand that, given the pressure to maintain a
'presence' in the monthly headlines and the desire to outperform the
competition in the number of 'features', some amount of short-cuts
will be taken and code audits skipped so that the next 'distro
release' can announce a new fancy gizmo under its wing. *Some* degree
of this behavior is to be expected in an environment where any "Joe
Six-pack" can start a project and have his code used by and
incorporated into other software downstream. However, I am quite
shocked that the practice is tolerated to the point that it leads to
extremely unstable critical support systems, as detailed in the
following forum threads.

http://ubuntuforums.org/showthread.php?t=612606
http://ubuntuforums.org/showthread.php?t=614962

Nathan.

The main problem with your argument is, of course, that Vista, which
was delayed several times and had features thrown out so that it could
finally come to market, seems to have even more problems.
 
Keith Kanios

Oh come on, Keith, you know better than to use the same pithy straw
man that the PulseAudio retard used. We are talking about application
layers that deal primarily with multi-media data... this means the
'desired memory allotment' may run into the tens to hundreds of
gigs... so "across the board" is an extremely weak claim, since it is
very unlikely for any other application's requirement (and this goes
for the other apps currently running) to be anywhere near this size.

I don't see how "straw man" applies here. I am simply commenting from
my experience as a system-level programmer.

If one process is hogging all of the physical and swap memory, other
processes are being deprived of that memory. Ask Windows users whether
they would rather lose one application's worth of data or lose all of
their data when the entire system becomes unresponsive.

If the problem is actually with running out of process (virtual)
memory, then I can think of more graceful ways to handle such out-of-
memory situations.
This can only be successful if it were "drilled into their heads" at
the start of the Freshman programming course and consistently
continued throughout the CompSci regimen.

Nathan.

.... instead of Java, C# and garbage collection wiping incompetent
asses? It would be appreciated, but highly unrealistic when software
is market driven. Quality is no longer a factor, it is just reduced
down to time and price.
 
Evenbit

Ah, my friend Nathan, I'm afraid it is you that is the idiot.
I assume these malloc wrappers print a message and then abort.
Do you have any idea what else they can do?

Well, my friend Dan, I really do wish your assumption were correct.
It would be extremely nice (and helpful) if an application would
report an "error condition" before terminating. It would also, by
extension, be extremely nice (and helpful) if a support library would
report said error to the calling application so that the application
developer might have the opportunity to respond in a graceful manner
to environmental conditions. Non-returning function calls certainly
are a bane during debugging sessions.

I am also thinking of the Windows users who are new to Linux. When
programs like Firefox consistently and suddenly "disappear" on them
(the way it does for me) without reporting the "why", they are going
to migrate back to their Microsoft products. At the very least, they
get the dreaded "Blue Screen of Death", which gives a ton more useful
information than something which terminates your application at will.
Now do you see the danger of PulseAudio and other shoddy libraries???

Nathan.
 
Dan Espen

Evenbit said:
Well, my friend Dan, I really do wish your assumption were correct.
It would be extremely nice (and helpful) if an application would
report an "error condition" before terminating. It would also, by
extension, be extremely nice (and helpful) if a support library would
report said error to the calling application so that the application
developer might have the opportunity to respond in a graceful manner
to environmental conditions. Non-returning function calls certainly
are a bane during debugging sessions.

You seem to have missed the point.
When an application is out of memory, almost anything you try to do to
report an error is going to fail.

It takes memory to invoke a function.
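Dan's objection ("it takes memory to invoke a function") can be partly worked around: if the message lives in static storage and goes straight through write(2), the failure path touches no heap at all. A hedged, POSIX-only sketch; the function name is made up for illustration:

```c
#include <unistd.h>

/* Message lives in static storage, so reporting it requires no
 * allocation even when the heap is exhausted. */
static const char oom_msg[] = "fatal: out of memory\n";

/* Write the message directly to stderr via the kernel, bypassing
 * stdio.  Returns the number of bytes written, or -1 on error. */
ssize_t report_oom(void)
{
    return write(STDERR_FILENO, oom_msg, sizeof oom_msg - 1);
}
```

This is a sketch of one technique, not a claim about what any particular library actually does; stack space for the call itself is still required.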
I am also thinking of the Windows users who are new to Linux. When
programs like Firefox consistently and suddenly "disappear" on them
(the way it does for me) without reporting the "why", they are going
to migrate back to their Microsoft products.

Firefox disappearing likely has nothing to do with this issue.

Install the Firefox bug reporting tool and a Firefox failure will
invoke a dialog that sends a bug report back to the developers.
At the very least, they
get the dreaded "Blue Screen of Death", which gives a ton more useful
information than something which terminates your application at will.
Now do you see the danger of PulseAudio and other shoddy libraries???

I don't see any danger.
It's an audio application.
It will stop and I'll look for the problem.
 
Rod Pemberton

Dan Espen said:

Sigh, had to go to Google to read the other six posts that didn't propagate
well...

Although I strongly believe there are reasons to support the claim that
Linux is or will be "tumbling down the hill into the valley of the
forgotten, non-important OSs that 'could have been'," I don't believe the
issue is the mindset of Linux coders, their standards, their failure to fix
bugs, or even other issues such as reversion of prior bug fixes or
filesystem problems...

The real primary issue is money. Can Linux survive long term against a
company with billions in financial and physical capital, licensed and
proprietary software patents, driven programmers who are _paid_ to program
for a living, and an endless supply of software drivers written for their
OS's API by hardware manufacturers? Secondary issues include software
development time for new PC hardware or circuitry and the far-above-average
intellect of "their" large paid programmer base versus the average IQ,
skill, and time constraints of many unpaid "Joe Six-packs". I see Linux
running into a wall due to the rapid continuous changes and advances in PC
circuitry unless a huge infusion of cash is found. A for-profit Linux OS
corporation needs to be formed. Getting Apple to dump OS X for paid copies
of Linux would be a good start. If Linux can't compete with OS X for
profit, I really don't see a long-term PC future. Perhaps one might as well
dump Linux now and embrace OS X...

Personally, I also think some long-term design changes are needed. I'd
recommend adopting a syscall-only based version of Linux as its primary
form, like UML. If only a syscall interface had to be written to bootstrap
Linux, cross-compiling to other platforms would be faster and easier.
Unfortunately, even with a UML version available, Linux's syscall interface
has bloated from 40 implemented functions in v0.01 to 290 in v2.6.17. The
number of syscalls needs to be drastically reduced, or the syscall interface
needs to be built entirely on a small set of functions. I'd also recommend
using some other highly popular interface that allows development of
almost-OS applications, say the SDL library, instead of the current syscall
interface. If SDL, this would allow numerous OS-like applications such as
DOSBox, ScummVM, etc. to run as the "higher level" OS. Writing the
low-level OS portions is a pain. Nobody really wants to do that. It's
already been done fairly well for Linux. Much of the low-level parts of
Linux have been extracted from Linux for the LinuxBIOS FILO project anyway.
Allowing different top-ends to the OS would encourage much more upper-level
OS development and adaptation. This adaptability might be a good long-term
advantage against a corporate competitor that has become stagnant.


Rod Pemberton
 
Frank Kotler

Dan Espen wrote:

....
When an application is out of memory, almost anything you try to do to
report an error is going to fail.
....
Install the Firefox bug reporting tool and a Firefox failure will
invoke a dialog that sends a bug report back to the developers.

Clever, these Firefox developers...

Best,
Frank
 
Evenbit

I wouldn't call audio a *critical* system.

Audio is certainly a critical system for those users who are not
blessed with the normal human attribute of being 'sighted'. Blind
people do not depend on either screen graphics or text from a video
monitor -- they are able to use a PC solely via the audio feedback.
Why should library developers be granted exclusive permission to
determine which systems are *critical* and which are not? Shouldn't
these decisions be left to the application programmer?

Nathan.
 
Evenbit

I don't see how "straw man" applies here. I am simply commenting from
my experience as a system-level programmer.

It is obvious that if you are indeed a "system-level programmer" who
is worth his salt, then you would have _some_ understanding of
modern memory management issues (it is clear from your responses that
you do not). When we issue a call to an OS asking for a chunk of
memory, the OS responds by looking for an area of _contiguous_ free
memory space of the size that we request. So, you see, it is
perfectly possible that an attempt to allocate 50 gigs will fail, while
subsequent calls to the same OS function asking for 10 instances of
10 gigs each will succeed.
If one process is hogging all of the physical and swap memory, other
processes are being deprived of that memory. Ask Windows users how
appreciative it would be to lose one application's worth the data
instead of losing all of your data due to the entire system becoming
unresponsive.

Wouldn't the better choice be to not lose ANY data??? Why do Linux
developers consistently shoot for standards that are _below_ those of
Windows developers? Why should end-users tolerate a less stable
experience -- especially when Linux fans consistently "bill" Linux as
the better(TM) product??
If the problem is actually with running out of process (virtual)
memory, then I can think of more graceful ways to handle such out-of-
memory situations.

This is indeed the issue at hand -- being "more graceful" than killing
the calling application and preventing any error reports from being
issued.
... instead of Java, C# and garbage collection wiping incompetent
asses? It would be appreciated, but highly unrealistic when software
is market driven. Quality is no longer a factor, it is just reduced
down to time and price.

This is the very mind-set and attitude which will get Linux labelled a
"has been" in the OS history books.

Nathan.
 
Bruce Coryell

Rod said:
Sigh, had to go to Google to read the other six posts that didn't propagate
well...

Although I strongly believe there are reasons to support the claim that
Linux is or will be "tumbling down the hill into the valley of the
forgotten, non-important OSs that 'could have been'," I don't believe the
issue is the mindset of Linux coders, their standards, their failure to fix
bugs, or even other issues such as reversion of prior bug fixes or
filesystem problems...

The real primary issue is money. Can Linux survive long term against a
company with billions in financial and physical capital, licensed and
proprietary software patents, driven programmers who are _paid_ to program
for a living, and an endless supply of software drivers written for their
OS's API by hardware manufacturers? Secondary issues include software
development time for new PC hardware or circuitry and the far-above-average
intellect of "their" large paid programmer base versus the average IQ,
skill, and time constraints of many unpaid "Joe Six-packs". I see Linux
running into a wall due to the rapid continuous changes and advances in PC
circuitry unless a huge infusion of cash is found. A for-profit Linux OS
corporation needs to be formed. Getting Apple to dump OS X for paid copies
of Linux would be a good start. If Linux can't compete with OS X for
profit, I really don't see a long-term PC future. Perhaps one might as well
dump Linux now and embrace OS X...

Personally, I also think some long-term design changes are needed. I'd
recommend adopting a syscall-only based version of Linux as its primary
form, like UML. If only a syscall interface had to be written to bootstrap
Linux, cross-compiling to other platforms would be faster and easier.
Unfortunately, even with a UML version available, Linux's syscall interface
has bloated from 40 implemented functions in v0.01 to 290 in v2.6.17. The
number of syscalls needs to be drastically reduced, or the syscall interface
needs to be built entirely on a small set of functions. I'd also recommend
using some other highly popular interface that allows development of
almost-OS applications, say the SDL library, instead of the current syscall
interface. If SDL, this would allow numerous OS-like applications such as
DOSBox, ScummVM, etc. to run as the "higher level" OS. Writing the
low-level OS portions is a pain. Nobody really wants to do that. It's
already been done fairly well for Linux. Much of the low-level parts of
Linux have been extracted from Linux for the LinuxBIOS FILO project anyway.
Allowing different top-ends to the OS would encourage much more upper-level
OS development and adaptation. This adaptability might be a good long-term
advantage against a corporate competitor that has become stagnant.


Rod Pemberton

Actually, there are "for-profit Linux OS corporations" around -- such as
Red Hat, Novell (SUSE), Caldera, and others of their ilk...

OS/2 is still around, though not owned or supported by IBM anymore:
http://www.ecomstation.com/ OS/2 was one sharp operating system about
15 years ago; it just never caught on. But if this company is smart, they
could really position it as a viable alternative to Microsoft.

Another OS that could be a good alternative, if they positioned it a
little better, would be Sun's Solaris operating system. I tried an
evaluation copy and my system really hummed with it, even at 800 MHz.
It's just that the networking support with Linux and MS was a little rough.
 
Fredderic

It is obvious that if you are indeed a "system-level programmer" who
is worth his salt, then you would have _some_ understanding of
modern memory management issues (it is clear from your responses that
you do not). When we issue a call to an OS asking for a chunk of
memory, the OS responds by looking for an area of _contiguous_ free
memory space of the size that we request. So, you see, it is
perfectly possible that an attempt to allocate 50 gigs will fail, while
subsequent calls to the same OS function asking for 10 instances of
10 gigs each will succeed.

That's odd... I was under the impression we had this thing called
paging on modern operating systems. This has two effects: first,
applications are actually allocated memory in complete pages, and
second, those pages can reside anywhere in physical RAM and still
appear contiguous to the application.

The only time this might be an issue is with DMA, where a component
external to the processor (and hence without the benefit of the
kernel's page tables) needs to access data across two or more pages.

Mind you, I'm not a systems level programmer either...

Wouldn't the better choice be to not lose ANY data??? Why do Linux
developers consistently shoot for standards that are _below_ those of
Windows developers? Why should end-users tolerate a less stable
experience -- especially when Linux fans consistently "bill" Linux as
the better(TM) product??

You, mate, are an ass. Every time I have run out of memory on a
Windoze system, the entire system crashed. My wife who still uses
Windoze will attest to that. All current unsaved data, in all
applications, gets flushed down the drain when not even Ctrl-Alt-Del
will respond, and you have to reach for the power button (because
modern machines don't come with a reset button anymore).

Every time I run out of memory on a Linux system, one application gets
hosed, _usually_ the right one. Though occasionally it's, like, my GUI
panel or something, which subsequently gets restarted, causing
something else to die instead, and occasionally it'll roll through two
or three unlucky minor apps before it hits the right one. It can also
be a bitch when it's the X server itself that it decides to kill, but
such is life. I just sit back and watch for a few minutes, after which
I have a system that's at least stable enough to save down anything
that has survived, and either restart the X server myself or give the
whole system a thorough cleanout with a nice soft restart.

It's still a damn sight better than the Windoze way of just locking up
the entire frigging machine, and hosing everything indiscriminately.

This is indeed the issue at hand -- being "more graceful" than killing
the calling application and preventing any error reports from being
issued.

The question is, how exactly do you do that without allocating
additional memory?

Come to think of it, how do you figure out when enough memory is really
enough? My system will quite happily (albeit a little slowly) run with
3-4 times the base memory allocated, as long as no single application
accounts for twice the base memory. In Windoze, it starts to die well
before that.

This is the very mind-set and attitude which will get Linux labelled a
"has been" in the OS history books.

But that is the mind-set that exists industry-wide. One only has to
look at Microsoft's business applications, most of which palm off HTTP
and XML as God's gift to software developers. They've rammed their
stock-standard HTTP/XML libraries into places they simply don't fit,
and focused on making the application look pretty so end users will
like it and not notice the utter shite under the hood. I've seen it
time and time again. Most of the good quality innovative developments
I've seen of late have come from Linux, not Microsoft.


So I really think you've got your head on backwards, mate. Linux's
Achilles' heel, if anything, is the fact that it's doing the job
right, rather than cutting corners and building lock-in boxes in an
attempt to rule the world.


Fredderic
 
Fredderic

Audio is certainly a critical system for those users who are not
blessed with the normal human attribute of being 'sighted'. Blind
people do not depend on either screen graphics or text from a video
monitor -- they are able to use a PC solely via the audio feedback.
Why should library developers be granted exclusive permission to
determine which systems are *critical* and which are not? Shouldn't
these decisions be left to the application programmer?

They're not. Both systems get pretty much the same regard, as far as I
can see. But one would offer the suggestion that without sight, there'd
likely be more memory for the audio system. Plus audio generally has a
lower memory footprint, and so short of audio editors and other
high-end music creation software, a simple screen reader is far less
likely to draw the application killer's gaze, and far more likely to be
automatically restarted even if it did.

You know, I may have missed part of the thread, but it seems to me that
tugging on the accessibility string really is another step down the
ladder for you.


Fredderic
 
Barry Schwarz

You seem to have missed the point.
When an application is out of memory, almost anything you try to do to
report an error is going to fail.

The fact that a particular call to malloc fails does not mean the
application is out of memory. It only means that the requested amount
of contiguous memory is not available.
It takes memory to invoke a function.

A failed request for 1GB would probably have no effect on a call to
perror to report the problem.

On my system, and probably others, memory is divided into subpools.
The fact that the application subpools are exhausted has no impact on
the system subpools which the library routines can use if that is how
they are implemented.
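Barry's point can be illustrated with a sketch: when one huge request fails, perror and smaller requests usually still work, because the process is not actually out of memory. `alloc_shrinking` is a hypothetical helper invented for this example, not anyone's actual API:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: a failed large request does not exhaust the
 * process.  Halve the request until malloc succeeds or a floor is
 * reached, reporting each failure as we go. */
void *alloc_shrinking(size_t want, size_t floor, size_t *got)
{
    while (want >= floor) {
        void *p = malloc(want);
        if (p != NULL) {
            *got = want;    /* tell the caller how much we really got */
            return p;
        }
        perror("malloc");   /* reporting the failure still works here */
        want /= 2;
    }
    *got = 0;
    return NULL;
}
```

Whether the very largest sizes fail depends on the OS's overcommit policy, but the shape of the fallback is the same either way.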


 
Keith Kanios

<<snipped>>

It is obvious that if you are indeed a "system-level programmer" who
is worth his salt, then you would have _some_ understanding of
modern memory management issues (it is clear from your responses that
you do not). When we issue a call to an OS asking for a chunk of
memory, the OS responds by looking for an area of _contiguous_ free
memory space of the size that we request. So, you see, it is
perfectly possible that an attempt to allocate 50 gigs will fail, while
subsequent calls to the same OS function asking for 10 instances of
10 gigs each will succeed.

Yeah, I know. Like, sheesh... how would I know about paging and memory
management if I have only written my own memory managers (rolls eyes).

Even at 4KB page resolution, physical out-of-memory situations *can*
occur and you *need* your system to do some quick and efficient
triage... and amputations if needed.

Stability comes before usability, not the other way around. If you are
physically out of memory, you simply cannot assume that you have
enough memory to perform even the simplest of operations. You want a
prime example of such bad design??? Use up all of your hard drive
space on your Windows box and then run a memory-intensive application/
game... catch you on the flip side of that reset button, buddy... and
pray that your chkdsk runs clean. This may be OK to get away with on
your desktop, but it is absolutely intolerable for a server/
production environment.

It would be wise to catch yourself up on some of these concepts
instead of insisting that you know them because you *think* they
should be that way. It could quite possibly keep you from looking like
a complete newbie.
Wouldn't the better choice be to not lose ANY data??? Why do Linux
developers consistently shoot for standards that are _below_ those of
Windows developers? Why should end-users tolerate a less stable
experience -- especially when Linux fans consistently "bill" Linux as
the better(TM) product??

I am not going to get into an NT vs. Linux war, as I really don't like
either of their designs, and I'll pick BSD over the two any day.
However, I have consistently noticed (i.e. from vast server/desktop
experience) that memory management on Linux is handled much better
than on NT... and this is coming from someone who runs Windows XP
despite that.
This is indeed the issue at hand -- being "more graceful" than killing
the calling application and preventing any error reports from being
issued.

Is it really? Are you absolutely sure that the program is using up its
entire virtual memory space and not just choking on low RAM and HD
space situations??? Links that state this, exactly, would be
appreciated.
This is the very mind-set and attitude which will get Linux labelled a
"has been" in the OS history books.

Nathan.

I think Linux suffers from the very thing that makes it popular. It
tries to be the one OS that can run everywhere and on everything. In
this respect, it suffers in terms of quality. Almost everything is
dependent on gcc to make all of the optimizations. There are too many
redundant libraries, and even then most of them do relatively simple
things. However, you will rarely see a properly configured Linux-based
server that needs to be restarted, short of upgrades, deep
configuration changes and those rare kernel panics. I wish I could say
the same for even the best NT server setups I have come across.

Toaster Linux FTW!!!
 
Evenbit

It would be wise to catch yourself up on some of these concepts
instead of insisting that you know them because you *think* they
should be that way. It could quite possibly keep you from looking like
a complete newbie.

The only reason that I "insist that I know them" is because I *have*
been reading this type of material. I haven't (knowingly) made any
claim about OS functionality that I didn't gain from reading a few
books on the subject.

Nathan.
 
Keith Kanios

It would be wise to catch yourself up on some of these concepts
instead of insisting that you know them because you *think* they
should be that way. It could quite possibly keep you from looking like
a complete newbie.

The only reason that I "insist that I know them" is because I *have*
been reading this type of material. I haven't (knowingly) made any
claim about OS functionality that I didn't gain from reading a few
books on the subject.

Nathan.


Ah... theory. Leaves a nice warm feeling, doesn't it?

Three potential solutions to fix your unsound comments.

1) Re-read those books.
2) Get more modern/informative books.
3) Try a little practical implementation so you can see why it is so
foolish to back such inconsistent theories or potential
misunderstandings.

I am not trying to be too much of an a**hole here, but I have nearly 8
years of actual OS development experience and system-level programming
under my belt. It is not a lot, but I would be willing to pit it
against someone who seems to have just graduated from HLA. So, believe
me when I tell you: YOU ARE WRONG.

Now, adapt, overcome and enjoy the enlightenment that will follow ;)
 
CBFalconer

Barry said:
.... snip ...


The fact that a particular call to malloc fails does not mean the
application is out of memory. It only means that the requested
amount of contiguous memory is not available.

In addition, malloc and friends have no idea what earlier
allocations are being used for. There is no reason the program
cannot react to the error by releasing stored items until the
malloc actually succeeds. As long as success can be thus attained,
there is no reason for the program to fail.
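The release-and-retry idea can be sketched like this. The cache structure and function names are hypothetical, assuming the application keeps a list of discardable buffers it can give back under pressure:

```c
#include <stdlib.h>

/* Hypothetical cache: a singly linked list of discardable buffers. */
struct cache_node {
    struct cache_node *next;
    void *buf;
};

static struct cache_node *cache_head = NULL;

/* Add a discardable buffer to the cache; returns 1 on success. */
int cache_add(void *buf)
{
    struct cache_node *n = malloc(sizeof *n);
    if (n == NULL)
        return 0;
    n->buf = buf;
    n->next = cache_head;
    cache_head = n;
    return 1;
}

/* Drop one cached item; returns 0 if the cache was already empty. */
static int cache_evict_one(void)
{
    struct cache_node *n = cache_head;
    if (n == NULL)
        return 0;
    cache_head = n->next;
    free(n->buf);
    free(n);
    return 1;
}

/* On malloc failure, release cached items and retry; fail only once
 * there is truly nothing left to give back. */
void *alloc_with_eviction(size_t n)
{
    for (;;) {
        void *p = malloc(n);
        if (p != NULL)
            return p;
        if (!cache_evict_one())   /* nothing left to release */
            return NULL;
    }
}
```

The design choice here is that only the application knows which of its allocations are expendable, which is exactly why a library that aborts on malloc failure takes that decision away from it.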
 
