bad alloc


Paul

If your goal is to improve robustness by logging the OOM condition,
then it's not just essential, it's mandatory.  If you fail to do it,
you failed to improve robustness.



Then you don't know many languages!  Java, Python, and many others
provide robust, enterprise-grade logging facilities out of the box.
Haskell, Erlang, and many others provide all sorts of transactional
facilities, depending on exactly what you want, out of the box.




You're just making the cost/value proposition worse, not better.
That's even more code I have to write!


No, that's not even how the discussion started.  One of the major
advocates for handling OOM suggested this was not only possible, but
trivial.


Nope.  All the other requests can die while you're trying to handle
the OOM condition.  Or the other side could drop the request because
they got tired of waiting.  The reality of the matter is that both
will happen.


Yes, it is.  It requires me to rewrite a considerable number of
language and sometimes even OS facilities, something you have admitted
yourself!  The entire reason I'm using a programming language is
because it provides useful facilities for me.  As a result, it isn't
the least bit unreasonable to conclude that rewriting language
facilities is hard. If I wanted to be writing language
facilities, then I'd just write my own damn programming language in
the first place!

But how much effort is it to just enclose all allocations in a try-
catch block?
It's only one try block and this is all provided as a language
feature.
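
For concreteness, the single try block being described might look roughly
like this (a minimal sketch; run_application is an invented stand-in for
whatever the program actually does):

    #include <cstdlib>   // EXIT_FAILURE
    #include <iostream>
    #include <new>       // std::bad_alloc
    #include <vector>

    // Invented stand-in for the real work; any failed throwing allocation
    // below this call surfaces as std::bad_alloc.
    void run_application()
    {
        std::vector<char> buffer(1024);   // ordinary allocations...
        (void)buffer;                     // ...real work would go here
    }

    int main()
    {
        try {
            run_application();
        } catch (const std::bad_alloc&) {
            // A single handler catches every failed allocation in the call tree.
            std::cerr << "out of memory, shutting down\n";
            return EXIT_FAILURE;
        }
        return 0;
    }

Whether that one handler can do anything more useful than report the failure
and exit is, of course, exactly what the rest of this thread argues about.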
Applications that require a response to OOM other than terminate are
an unsubstantial minority.  Systems that cannot permit termination as
an OOM response are almost certainly broken.

Looking at a user application on Windows, termination on a low-memory
condition is permitted, but if the program were just to terminate at
the slightest whiff of low memory it wouldn't be a very well written
program IMHO. At least give an error message and inform the user why
you are closing before terminating, or just terminate the operation the
memory was requested for and keep the program alive, thus allowing
the user to close other apps and free up some memory.

If some other program is leaking memory, your program will get the
blame as being a rogue program because it just crashed in an
unprofessional way. This alone is a good reason for error checking.
 

none

On 09/ 4/11 11:20 AM, James Kanze wrote:
On 09/ 2/11 04:37 PM, Adam Skutt wrote:
[...]
I agree. On a decent hosted environment, memory exhaustion is usually
down to either a system wide problem, or a programming error.
Or an overly complex client request.
Not spotting those is a programming (or specification) error!

And the way you spot them is by catching bad_alloc:).

No, you set upfront bounds on allowable inputs. This is what other
engineering disciplines do, so I'm not sure why computer programmers
would do something different. Algorithms that permit bounded response
to unbounded input are pretty rare in the grand scheme of things.
Even when they exist, they may carry tradeoffs that make them
undesirable or unsuitable (e.g., internal vs. external sort).

So, if I understand you correctly, you are saying that you must always
set up some artificial limits on the external inputs and set them
artificially low so that no matter what is happening in the rest of
the system, the program will never run out of resources....

This seems like a very bad proposition to me. The only way to win is
to reserve and grab at startup time all of the resources you might
potentially ever need in order to meet the worst-case scenario of your
inputs.

Yannick
 

Adam Skutt

So, if I understand you correctly, you are saying that you must always
set up some artificial limits on the external inputs and set them
artificially low so that no matter what is happening in the rest of
the system, the program will never run out of resources....

Yes, if you want to isolate failure processing one request from
another (esp. in a threaded system), you set limits on how much input
can be provided with each request. You reject requests that exceed
the limit.
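
As a rough illustration of rejecting over-limit input up front (the request
type and the 1 MiB figure are invented for the example):

    #include <cstddef>
    #include <stdexcept>
    #include <string>

    // Invented per-request bound: anything larger is refused before processing
    // (and therefore before any large allocation) begins.
    constexpr std::size_t kMaxRequestBytes = 1 << 20;   // 1 MiB, deliberately generous

    std::string handle_request(const std::string& payload)
    {
        if (payload.size() > kMaxRequestBytes)
            throw std::length_error("request exceeds configured limit");
        // ... normal, bounded processing of the payload ...
        return "ok";
    }

The bound caps the worst-case memory any single request can drive, which is
what keeps one oversized request from taking the other in-flight requests
down with it.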

However, this doesn't mean the limits are set artificially low.
Usually memory isn't your bounding constraint, so you'll run out of
database handles, CPU, etc. long before you run out of memory. Per-
request memory limits can be generous and not create an issue.

Of course, I'm personally fine with unbounded input as long as the
user understands the system will break at some point and they get to
keep both of the pieces.
This seems like a very bad proposition to me.  The only way to win is
to reserve and grab at startup time all of the resources you might
potentially ever need in order to meet the worst-case scenario of your
inputs.

No, I'm not sure why you think this follows in the least. I also
think I've explained why this isn't the case several times already.
If you have per-request bounds and the OS can't give you memory when
you ask for it you either need to rewrite your code (so you'll be
terminating anyway) or the OS is likely to terminate (so you'll be
terminating anyway).

Adam
 

Paul

On 09/ 4/11 11:20 AM, James Kanze wrote:
On 09/ 2/11 04:37 PM, Adam Skutt wrote:
[...]
I agree. On a decent hosted environment, memory exhaustion is usually
down to either a system wide problem, or a programming error.
Or an overly complex client request.
Not spotting those is a programming (or specification) error!
And the way you spot them is by catching bad_alloc:).
No, you set upfront bounds on allowable inputs.  This is what other
engineering disciplines do, so I'm not sure why computer programmers
would do something different.  Algorithms that permit bounded response
to unbounded input are pretty rare in the grand scheme of things.
Even when they exist, they may carry tradeoffs that make them
undesirable or unsuitable (e.g., internal vs. external sort).

So, if I understand you correctly, you are saying that you must always
set up some artificial limits on the external inputs and set them
artificially low so that no matter what is happening in the rest of
the system, the program will never run out of resources....

This seems like a very bad proposition to me.  The only way to win is
to reserve and grab at startup time all of the resources you might
potentially ever need in order to meet the worst-case scenario of your
inputs.
This is not possible in the situation where a program is limited by
system memory. As a crude example, take a text editor that opens a new
window to display each text file: the number of windows is limited by
available system RAM.

The type of program you describe is not limited by system RAM; it is
limited by the procedures it executes. For example, Freecell or
Minesweeper are limited in the amount of RAM they can consume.

I think this is why programs specify system requirements, but many
games and stuff specify it as a minimum and suggest that much better
performance will be achieved if more RAM is available.


The language provides us with a mechanism to catch errors and I don't
know why people shun it. I think it has a lot to do with laziness TBH.
 

Paul

Yes, if you want to isolate failure processing one request from
another (esp. in a threaded system), you set limits on how much input
can be provided with each request.  You reject requests that exceed
the limit.

However, this doesn't mean the limits are set artificially low.
Usually memory isn't your bounding constraint, so you'll run out of
database handles, CPU, etc. long before you run out of memory.  Per-
request memory limits can be generous and not create an issue.

Of course, I'm personally fine with unbounded input as long as the
user understands the system will break at some point and they get to
keep both of the pieces.


No, I'm not sure why you think this follows in the least.  I also
think I've explained why this isn't the case several times already.
If you have per-request bounds and the OS can't give you memory when
you ask for it you either need to rewrite your code (so you'll be
terminating anyway) or the OS is likely to terminate (so you'll be
terminating anyway).
I used to play World of Warcraft and my system would sometimes become
slow and sluggish; when I added more RAM the game ran much faster and
smoother.

These programs must be designed in such a way that they operate using
the maximum resources without crashing. If termination was the answer
I wouldn't have been able to play this game, prior to upgrading my
RAM, without it crashing all the time. It didn't crash; it played OK
but became a bit slow when resources were low.
 

James Kanze

Then are you willing to make the claim most Linux, UNIX, OS X and
Windows services are poorly written? Because that's what you just
said, merely using different words.

I don't know about OS X, since I've never used it, but it's true
that Linux and Windows services are far from robust. Neither
are really what I'd take as a model. (I wouldn't use either
Linux or Windows if I needed a reliable server.)
Crashing is a perfectly reasonable response to an error condition,
really most error conditions. I'm not sure why anyone would think
otherwise even for a second.

It depends on the application. For most systems, crashing is
the only acceptable response for a detected programming error.
But depending on the application, running out of memory isn't
necessarily a programming error.
 

James Kanze

I should clarify this part. All Unices provide some limits through
ulimit, which provides mostly per-process limits that are user
discretionary. Few provide the ability to enforce limits on all
invocations of binary X, which is what I assumed you were talking
about and what would be necessary.

That's not what I was talking about, and that's absolutely
unnecessary. Any reliable server will be running on a dedicated
machine, in a dedicated environment. It's impossible to have
any sort of reliable system otherwise.
ulimit isn't helpful because the
limits have to be high enough for the largest program the user wishes
to run, and you cannot prevent the user from setting those limits for
running other binaries.

That's just bullshit. A reliable server will be started from a
shell script, which will set ulimits just before starting it
(and will do nothing else which might use significant amounts of
memory).
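
A start-up script's `ulimit -v` has a programmatic POSIX equivalent; here is
a sketch of a tiny launcher that caps the address space before exec'ing the
server (the 512 MiB figure and the ./server path are invented):

    // POSIX-only sketch: roughly what "ulimit -v" in a start-up script does.
    #include <sys/resource.h>
    #include <unistd.h>
    #include <cstdio>

    int main()
    {
        rlimit lim{};
        lim.rlim_cur = 512ull * 1024 * 1024;    // soft limit, illustrative value
        lim.rlim_max = 512ull * 1024 * 1024;    // hard limit
        if (setrlimit(RLIMIT_AS, &lim) != 0) {  // cap the address space
            std::perror("setrlimit");
            return 1;
        }
        execl("./server", "server", static_cast<char*>(nullptr));
        std::perror("execl");                   // only reached if exec fails
        return 1;
    }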
 

James Kanze

[...]
I'm not sure what you mean by "the facilities out of the box".
No language that I know provides the logging facilities
necessary for a large scale server "out of the box"; C++
actually does far better than most in this regard. And no
language, to my knowledge, provides any sort of transaction
management (e.g. with rollback).
Then you don't know many languages! Java, Python, and many others
provide robust, enterprise-grade logging facilities out of the box.
Haskell, Erlang, and many others provide all sorts of transactional
facilities, depending on exactly what you want, out of the box.

I'm familiar with Java and Python, and neither provide any sort
of transactional management, nor adequate logging for large
scale applications.
You're just making the cost/value proposition worse, not better.
That's even more code I have to write!

You have to write it anyway, since it's part of making the
application robust.
No, that's not even how the discussion started. One of the major
advocates for handling OOM suggested this was not only possible, but
trivial.

Nothing in a large scale application is trivial, but handling
out of memory isn't more difficult than any number of other
things you have to do.
Nope. All the other requests can die while you're trying to handle
the OOM condition. Or the other side could drop the request because
they got tired of waiting. The reality of the matter is that both
will happen.

The reality of the matter is that neither happens, if you
program correctly. I've written robust applications which
handled out of memory, and they worked.
Yes, it is.

It's no harder than a lot of other things necessary to write
correct software.
It requires me to rewrite a considerable number of
language and sometimes even OS facilities, something you have admitted
yourself!

But so do logging, and transaction management, and a lot of
other things.
The entire reason I'm using a programming language is
because it provides useful facilities for me. As a result, it isn't
the least bit unreasonable to conclude that rewriting language
facilities is hard. If I wanted to be writing language
facilities, then I'd just write my own damn programming language in
the first place!

But no language has adequate logging facilities, nor transaction
management facilities, nor a lot of other things you need.
Applications that require a response to OOM other than terminate are
an unsubstantial minority.

Finally something I can agree with. They're definitely a
minority. But they do exist. (I'd guess, on the whole, they
represent less than 10% of the applications I've worked on. But
it's hard to quantify, since a lot of lower level applications
I've worked on in the past didn't use dynamic memory, period.)
Systems that cannot permit termination as
an OOM response are almost certainly broken.

A system is broken if it doesn't meet its requirements.
And it makes justifying handling OOM only harder, not easier! You're
making my case for me!

You're just ignoring the facts. Some applications (a minority)
have to handle OOM, at least in certain cases or configurations.
If they don't they're broken.
 

James Kanze

On 09/ 4/11 11:20 AM, James Kanze wrote:
On 09/ 2/11 04:37 PM, Adam Skutt wrote:
[...]
I agree. On a decent hosted environment, memory exhaustion is usually
down to either a system wide problem, or a programming error.
Or an overly complex client request.
Not spotting those is a programming (or specification) error!
And the way you spot them is by catching bad_alloc:).
No, you set upfront bounds on allowable inputs.

That's another possible solution. Not acceptable for all
applications, and sometimes more difficult to implement than
handling OOM. Applications vary.
This is what other
engineering disciplines do,

Actually, they don't. There's a good reason why soldiers are
required to break step when crossing a bridge.
 

James Kanze

On 09/ 5/11 05:56 AM, James Kanze wrote:
[...]
Seriously, the problem is very much like that of a compiler.
Nest parentheses too deep, and the compiler will run out of
memory. There are two solutions: specify an artificial nesting
limit, which you know you can handle (regardless of how many
connections are active, etc.), or react when you run out of
resources. There are valid arguments for both solutions, and
I've used both, in different applications.
I have also seen both in compilers. For example last time I played, g++
didn't have a recursion limit for templates (as used in
meta-programming) while Sun CC does.

What you mean, of course, is that the recursion limit in g++ is
determined by the available resources; not that there isn't
one:).

There are arguments for both strategies. I've used both,
depending on the application. There is no one "correct"
solution.
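
To make the first strategy concrete, a toy recursive-descent fragment with an
explicit nesting cap might look like this (the limit of 256 is invented); the
second strategy is simply to recurse freely and let bad_alloc, or the stack,
signal the failure:

    #include <cstddef>
    #include <stdexcept>
    #include <string>

    constexpr int kMaxNesting = 256;   // artificial, documented limit (illustrative)

    // Parses one parenthesised group starting at `pos` and returns the position
    // just past it; refuses input nested deeper than kMaxNesting.
    std::size_t parse_group(const std::string& src, std::size_t pos, int depth = 0)
    {
        if (depth > kMaxNesting)
            throw std::runtime_error("nesting too deep");   // bounded, predictable failure
        if (pos >= src.size() || src[pos] != '(')
            throw std::runtime_error("expected '('");
        ++pos;
        while (pos < src.size() && src[pos] == '(')
            pos = parse_group(src, pos, depth + 1);          // nested group
        if (pos >= src.size() || src[pos] != ')')
            throw std::runtime_error("expected ')'");
        return pos + 1;
    }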
 

Adam Skutt

I'm familiar with Java and Python, and neither provide any sort
of transactional management, nor adequate logging for large
scale applications.

They do provide robust, enterprise-grade logging in java.util.logging
and Python's logging module, out of the box. Both provide what just
about every other logging framework on the planet provides; and both
provide something that's worlds better than what C++ provides out of
the box, which is /nothing/. If you believe otherwise, then you either don't know what's
actually done for large-scale applications or you don't actually know
the languages. I'll leave that decision up to you.
You have to write it anyway, since it's part of making the
application robust.

Just because you claim it makes the application more robust doesn't
mean it actually makes the application more robust. All I see is
more code to test and debug, and therefore more points where things
could fail.

Until you give concrete reasoning as to why we should believe a
program that terminates under OOM is less robust than one that does
not, you have no basis for claiming I have to write an OOM handler of
any sort. It's not a tautology that fewer terminations equal improved
robustness, no matter how much you wish it were. So I would start
there, honestly. Good luck with that.
Nothing in a large scale application is trivial, but handling
out of memory isn't more difficult than any number of other
things you have to do.

Yes, it is obviously considerably harder, seeing as I literally have
to do all the work myself. Just about everything else discussed has
frameworks and libraries to make my life easier. Handling OOM does
not.
The reality of the matter is that neither happens, if you
program correctly.  

I can't stop the world on Linux, OS X, Windows, or Solaris. Doing
that would require special language runtime and kernel support. So
no, even if I 'program correctly', the things I mentioned still can
happen. It's unavoidable: you cannot make the other threads stop
executing while you handle the OOM condition. That means they're
vulnerable to the OOM condition as well, as it's a /per process/
condition.
I've written robust applications which
handled out of memory, and they worked.

You mean they've worked in whatever conditions you managed to test
them under. But without source, given your track record of basic
factual errors, I don't really care what you claim you've coded.
It's no harder than a lot of other things necessary to write
correct software.

Show me the library I can download to handle OOM conditions on Linux,
Windows, and OS/X. I'll wait.
But so do logging, and transaction management, and a lot of
other things.

Nope. If the language doesn't provide them, I can download libraries
that provide them for me. The same is not the case for OOM.
Finally something I can agree with.  They're definitely a
minority.  But they do exist.  

If you actually believed this, you wouldn't be making the arguments
you're making. There's a logical contradiction between the claims:
1) Handling OOM improves robustness
2) Handling OOM is easy
3) Handling OOM is only necessary, and only done, for a minority of
applications.

Statement 3 is inherently contradictory with the first two. Moreover,
if you believed they were a minority, you wouldn't be arguing with me
at all, since my argument is inherently a cost/benefit argument. So
no, I don't believe you. Plus, you've contradicted yourself on this
point elsewhere.
A system is broken if it doesn't meet its requirements.

Few systems specify, "You cannot terminate under OOM conditions"
because it's a senseless and pointless requirement.
You're just ignoring the facts.

I'm not ignoring any facts. If the OS might indiscriminately kill my
process anyway, then my precious OOM handler may never even run. That
makes it even harder to justify writing the damn thing in the first
place, not easier! Ergo, you're making my argument for me.

Adam
 

Adam Skutt

That's another possible solution.  Not acceptable for all
applications, and sometimes more difficult to implement than
handling OOM.  Applications vary.

Then provide an example of when it's more difficult. I'll wait.
Actually, they don't.  There's a good reason why soldiers are
required to break step when crossing a bridge.

And if you think that reason is a counterexample to what I said, then
you're simply crazy. Walking over a bridge where you don't know how
much weight it is designed to support (nor can you) isn't relevant here;
we're the ones building the bridge, so we get to set the limits and
tell the world as necessary. Please don't be so disingenuous.

Adam
 

Adam Skutt

That's not what I was talking about, and that's absolutely
unnecessary.  

Then what is necessary? You clearly believe they provide something
(which is simply wrong, ulimit is indeed what is standard) and you
clearly believe whatever is provided is necessary. So stop stalling
and answer, or be a grownup and admit you were wrong twice over.
Any reliable server will be running on a dedicated
machine, in a dedicated environment.  

It's cute that you believe that, but that's hardly the case anymore.
It's impossible to have
any sort of reliable system otherwise.

All those very old IBM mainframe operators and systems would like to
have many words with you. IBM (along with many others) have been
doing virtualization, and doing it well, since before the PC even existed.
I'm not sure why you believe dedicated hardware is necessary. I'm not
sure why you believe it improves robustness.
That's just bullshit.

No, it's not. It's a fundamental limitation of discretionary access
control. I can assign the same limits to /bin/bash as I can to /usr/
bin/firefox, even though the two don't need the same amount of
resources.
 A reliable server will be started from a
shell script, which will set ulimits just before starting it
(and will do nothing else which might use significant amounts of
memory).

Who said we were talking about just servers? I'm sure as hell not
talking about just servers, nor have I ever been talking about just
servers. Hell, you're not even talking about just servers and it
would be disingenuous of you to claim otherwise. UNIX doesn't provide
the resource controls you think it does, but even if it did, they
wouldn't help the problem we're discussing.

Adam
 

Adam Skutt

It depends on the application.  For most systems, crashing is
the only acceptable response for a detected programming error.
But depending on the application, running out of memory isn't
necessarily a programming error.

However, there's generally no way to tell. Plus, your implication 'If
it's not a programming error, I can handle it' simply does not
follow.

Adam
 

Goran

On 09/ 4/11 11:20 AM, James Kanze wrote:
On 09/ 2/11 04:37 PM, Adam Skutt wrote:
     [...]
I agree.  On a decent hosted environment, memory exhaustion is usually
down to either a system wide problem, or a programming error.
Or an overly complex client request.
Not spotting those is a programming (or specification) error!
And the way you spot them is by catching bad_alloc:).
No, you set upfront bounds on allowable inputs.  This is what other
engineering disciplines do, so I'm not sure why computer programmers
would do something different.
Because code runs in a more volatile environment

It does nothing of the sort!  Code does not have to deal with the physical
environment: it doesn't have to be concerned with the external
temperature, humidity, shock, weather conditions, etc.  It does not
care whether the computer has been placed in a server room or outside
in the middle of the desert.  Hardware frequently has to care about
all of these factors, and many more.  Operating systems provide
reasonable levels of isolation between components: software can
generally ignore the other software running on the same computer.
Hardware design has to frequently care about these factors: the mere
placement of ICs on a board can cause them to interfere with one
another!

The list goes on and on, and applies to all of the other engineering
disciplines too.  This is easily by far both the most absurd and most
ignorant thing you've said yet, by a mile.
and tends to handle
more complex (models of) systems.

Because it's cheaper and easier to do such things in software, in no
small part because many of the classical design considerations for
hardware simply disappear.  However, that doesn't change my statement
on setting bounds in the least.
An obvious example: a program operates on a set of X-es in one part,
and on a set of Y-s in another. Both are being "added" to operation as
user goes along. Given system limits, code can operate in a range on A
X-es and 0 Y-s, or 0 X-es and B Y-s, and any many-a-combination in
between. Whichever way you decide on a limit on max count of X or Y,
some use will suffer.

I'm not sure how you think this is relevant to this portion of the
discussion, but it's not even true.  The limit for both may be
excessively generous for any reasonable use case.  Moreover, plenty of
hardware has to process two distinct inputs and still sets bounds, so
it is an accepted technique.
Compound this with the empirical observation
that, beside X and Y, there's U, V, W and many more, and there you
have it.

There is no such empirical observation.

Of course there is. Take a look at any non-trivial codebase. There are
dozens of types (classes, structures), combining themselves in a
myriad of ways. Each of those is instantiated a pretty much unknown
number of times, put into containers, moved around; all sorts of stuff
is happening. Do you e.g. limit the number of string objects that can be
created in your code? I would like to see that codebase.

What the...!?!?

Goran.
 

Fred Zwarts \(KVI\)

"Paul" wrote in message
I used to play World of Warcraft and my system would sometimes become
slow and sluggish; when I added more RAM the game ran much faster and
smoother.

These programs must be designed in such a way that they operate using
the maximum resources without crashing. If termination was the answer
I wouldn't have been able to play this game, prior to upgrading my
RAM, without it crashing all the time. It didn't crash; it played OK
but became a bit slow when resources were low.

This is probably not down to the design of these programs. They run on a
virtual memory OS. If a program does not fit in RAM, the OS backs some of its
memory with swap space. Swap space is usually on hard disk, which is thousands
of times slower than RAM. If RAM is added, the whole program fits in RAM,
which makes it much faster. No design effort in these programs is needed to
use virtual memory, because it is an OS feature, not a program feature.
 

Goran

On 09/ 4/11 11:20 AM, James Kanze wrote:
On 09/ 2/11 04:37 PM, Adam Skutt wrote:
[...]
I agree. On a decent hosted environment, memory exhaustion is usually
down to either a system wide problem, or a programming error.
Or an overly complex client request.
Not spotting those is a programming (or specification) error!
And the way you spot them is by catching bad_alloc:).
No, you set upfront bounds on allowable inputs.  This is what other
engineering disciplines do, so I'm not sure why computer programmers
would do something different.  Algorithms that permit bounded response
to unbounded input are pretty rare in the grand scheme of things.
Even when they exist, they may carry tradeoffs that make them
undesirable or unsuitable (e.g., internal vs. external sort).
So, if I understand you correctly, you are saying that you must always
set up some artificial limits on the external inputs and set them
artificially low so that no matter what is happening in the rest of
the system, the program will never run out of resources....
This seems like a very bad proposition to me.  The only way to win is
to reserve and grab at startup time all of the resources you might
potentially ever need in order to meet the worst-case scenario of your
inputs.

This is not possible in the situation where a program is limited by
system memory. As a crude example, take a text editor that opens a new
window to display each text file: the number of windows is limited by
available system RAM.

Not at all. Say that you simply load said text into memory (crude
approach, works for a massive number of uses). If the file is 3 bytes,
chances are, you'll open thousands. If the file is a couple of megs,
you won't get there.
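
The crude load-it-all approach being described is essentially this (a purely
illustrative sketch):

    #include <fstream>
    #include <sstream>
    #include <string>

    // "Just load the text into memory": the memory cost tracks the file size,
    // so tiny files allow thousands of windows and huge ones exhaust RAM fast.
    std::string load_file(const std::string& path)
    {
        std::ifstream in(path, std::ios::binary);
        std::ostringstream contents;
        contents << in.rdbuf();          // read the whole file
        return contents.str();           // may throw std::bad_alloc for huge files
    }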

Goran.
 

Goran

They do provide robust, enterprise-grade logging in java.util.logging
and Python's logging module, out of the box.  Both provide what just
about every other logging framework on the planet provides; and both
provide something that's worlds better than what C++ provides out of
the box, which is /nothing/. If you believe otherwise, then you either don't know what's
actually done for large-scale applications or you don't actually know
the languages.  I'll leave that decision up to you.



Just because you claim it makes the application more robust doesn't
mean it actually makes the application more robust.  All I see is
more code to test and debug, and therefore more points where things
could fail.

Until you give concrete reasoning as to why we should believe a
program that terminates under OOM is less robust than one that does
not, you have no basis for claiming I have to write an OOM handler of
any sort.

The reasoning has been given to you on several occasions in this very
thread. You are just refusing to acknowledge it.

Here's a pretty typical situation (repeating oneself):
code embarks on a task
said task requires resources
code is chugging along, allocating resources on the way, calculating
(rinse, repeat)
calculation finished, code frees all _temporary_ resources that were
needed for calculation and are not part of result or state change

Now... See that rinse, repeat part? Well, imagine that, for whatever
reason, a resource was unavailable. Calculation is dead in the water.
What's good code to do? Just as on the "happy" path, it should free
all those temporary resources. It should probably also revert all those
state changes. All that frees resources. Once out, there's plenty of
breathing space.
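
In C++ the "free the temporaries, revert the state change" step is largely
what stack unwinding plus RAII already does; a minimal sketch (the task and
types are invented):

    #include <new>
    #include <string>
    #include <vector>

    // Temporaries live in automatic objects and the state change is a single
    // strongly-exception-safe push_back at the end, so an allocation failure
    // anywhere unwinds, frees the temporaries, and leaves `state` untouched.
    bool append_processed(std::vector<std::string>& state, const std::string& input)
    {
        try {
            std::string result;                  // temporary resource
            for (char c : input) {
                result.push_back(c);             // may throw std::bad_alloc
                result.push_back(',');
            }
            state.push_back(std::move(result));  // commit: strong guarantee
            return true;
        } catch (const std::bad_alloc&) {
            // `result` was destroyed during unwinding: its memory is back and
            // `state` is unchanged, so the caller can report failure and go on.
            return false;
        }
    }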

And all you can say is "I've hit OOM, I should die"? Yeah, right.

Further, it might be that said operation is the only operation you do.
In that case (and I said this in my very first post), yeah, dying on
OOM is just as good as rolling the stack and state down and exiting
cleanly. In any other situation, dying on OOM is much less appealing.

Goran.
 

Nick Keighley

On 09/ 4/11 11:20 AM, James Kanze wrote:
On 09/ 2/11 04:37 PM, Adam Skutt wrote:
     [...]
I agree.  On a decent hosted environment, memory exhaustion is usually
down to either a system wide problem, or a programming error.
Or an overly complex client request.
Not spotting those is a programming (or specification) error!
And the way you spot them is by catching bad_alloc:).
No, you set upfront bounds on allowable inputs.  This is what other
engineering disciplines do, so I'm not sure why computer programmers
would do something different.
Because code runs in a more volatile environment

It does nothing of the sort!  Code does not have to deal with the physical
environment: it doesn't have to be concerned with the external
temperature, humidity, shock, weather conditions, etc.  It does not
care whether the computer has been placed in a server room or outside
in the middle of the desert.  Hardware frequently has to care about
all of these factors, and many more.  Operating systems provide
reasonable levels of isolation between components: software can
generally ignore the other software running on the same computer.
Hardware design has to frequently care about these factors: the mere
placement of ICs on a board can cause them to interfere with one
another!

The list goes on and on, and applies to all of the other engineering
disciplines too.  This is easily by far both the most absurd and most
ignorant thing you've said yet, by a mile.
and tends to handle
more complex (models of) systems.

Because it's cheaper and easier to do such things in software, in no
small part because many of the classical design considerations for
hardware simply disappear.  However, that doesn't change my statement
on setting bounds in the least.
An obvious example: a program operates on a set of X-es in one part,
and on a set of Y-s in another. Both are being "added" to the operation
as the user goes along. Given system limits, code can operate in a range
of A X-es and 0 Y-s, or 0 X-es and B Y-s, and many a combination in
between. Whichever way you decide on a limit on the max count of X or Y,
some use will suffer.

I'm not sure how you think this is relevant to this portion of the
discussion, but it's not even true.  The limit for both may be
excessively generous for any reasonable use case.  Moreover, plenty of
hardware has to process two distinct inputs and still sets bounds, so
it is an accepted technique.
Compound this with the empirical observation
that, beside X and Y, there's U, V, W and many more, and there you
have it.

There is no such empirical observation.  That windmill in front of you
is not a dragon.
Add a sprinkle of a volatile environment, as well as
differing environments, because one code base might run in all sorts
of them...
A simple answer to this is to (strive to ;-)) handle OOM gracefully.

Even if everything you'd just written were true, you're still making the
case for handling OOM by termination in reality.  If the environment
were really as diverse and volatile as you claim, and I can't prevent
the condition by setting reasonable bounds, there's really no reason
to believe I can respond to the condition after the fact, either.

Rather than wait for OOM you could detect low-memory conditions and
take evasive action (e.g. reject any further requests and concentrate
on the ones you've got).
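
One portable way to approximate that evasive action is an application-level
soft budget rather than asking the OS; a sketch (the class, threshold and
bookkeeping are invented for illustration):

    #include <atomic>
    #include <cstddef>

    // New requests are refused once outstanding usage crosses a soft threshold,
    // leaving headroom for the requests already in flight to finish.
    class MemoryBudget {
    public:
        explicit MemoryBudget(std::size_t soft_limit) : soft_limit_(soft_limit) {}

        // Called by the dispatcher before accepting a request's estimated cost.
        bool try_reserve(std::size_t bytes)
        {
            std::size_t used = used_.load();
            while (used + bytes <= soft_limit_) {
                if (used_.compare_exchange_weak(used, used + bytes))
                    return true;                 // accepted: budget charged
            }
            return false;                        // evasive action: reject the request
        }

        void release(std::size_t bytes) { used_.fetch_sub(bytes); }

    private:
        const std::size_t soft_limit_;
        std::atomic<std::size_t> used_{0};
    };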
 

Nick Keighley

[limiting inputs] is what other engineering disciplines do,
Actually, they don't.  There's a good reason why soldiers are
required to break step when crossing a bridge.

And if you think that reason is a counterexample to what I said, then
you're simply crazy.  Walking over a bridge where you don't know how
much weight it is designed to support (nor can you) isn't relevant here;

it's not about weight, it's about resonance. Though how the officer
estimates the bridge's resonant frequency is beyond me...
 
