Microsoft abandons the C language

C

Chicken McNuggets

Sorry, this is garbage. MSVC supports all C/C++ standards, plus extra
features (e.g. C++-style // comments are accepted in C programs too).

No it does not. It does not support C99 or C11 and never will according
to Microsoft.
 
S

Stephen Sprunk

Maybe it could be avoided. But not always, especially not if you have
to code against the physical reality. But even in the case where a
deadlock theoretically could be prevented, the tradeoff between "good
enough" and "perfect in 10 man-years" comes into play.

My point is that one cannot say a situation is "unrecoverable" and then,
later in the very same sentence, explain how to recover from it. If one
can recover, then by definition it is not "unrecoverable".
I'd be happy if you tell me how to fix the systems I'm supposed to
interface with. I'm bound by that pesky "good enough", so I cannot do
much more than assume that the opposite side of the socket actually
implements the protocol as described. When that assertion fails, the
inevitable result is more often than not something that can be
described by the onomatopoeticon "Kaboom!"

Obviously, one can't prevent error conditions outside of one's control;
that is what "outside of one's control" means.

However, one _can_ detect and handle such errors gracefully, rather than
simply crashing.

S
 
S

Stephen Sprunk

You're confusing two separate issues: programs that have errors and
programs that have resource demands that can't be met a few times a year
due to activities by humans or other processes. The latter cannot be
corrected within the process; you can use convoluted code everywhere to
deal with rare events you can't correct, with the risk that the convoluted
code is in error, or just abort, catch the signal and report with a
write(2,...), then exit.

If that is what you're actually doing, then the program is not
"crashing"; it is detecting an error condition, reporting it and
gracefully restarting. That's fine, for most customers.

An example of crashing is when bad input causes your code to dereference
a null pointer, which results in the OS killing your process.
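To be concrete, the "catch the signal, report with write(2), then exit"
approach looks roughly like the sketch below. It's only illustrative (the
handler and message are made up), and the handler sticks to
async-signal-safe calls:

#include <signal.h>
#include <unistd.h>

static void report_and_exit(int sig)
{
    /* Only async-signal-safe calls here; printf() etc. are not safe. */
    static const char msg[] = "fatal error detected, exiting\n";
    (void)sig;
    write(2, msg, sizeof msg - 1);
    _exit(1);
}

int main(void)
{
    signal(SIGSEGV, report_and_exit);
    signal(SIGABRT, report_and_exit);
    /* ... real work here; a supervisor or init script restarts us ... */
    return 0;
}

That is detection and reporting, not a crash in the "OS kills the process"
sense.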

S
 
R

Rui Maciel

Don't make my brown eyes China Blue said:
I find it acceptable to crash a couple of times a year due to unrecoverable
resource contentions that can only be recovered by restarting anyway.

What "unrecoverable resource contentions" are you referring to? Because you
were criticising and poking fun at those who took precautions to avoid basic
problems such as bursting the stack, and bursting the stack is hardly a mere
"resource contention" problem. So, you are either claiming that you
perceive crashes caused by bursting the stack as mere "unrecoverable
resource contention" issues, which is an absurd idea and appallingly
dangerous, or you are talking about something entirely different, which
would represent an attempt to pull a strawman.

So, which one is it?


Rui Maciel
 
R

Rui Maciel

Don't make my brown eyes China Blue said:
You're confusing two separate issues: programs that have errors and
programs that have resource demands that can't be met a few times a year
due to activities by humans or other processes.

No matter how you cut it, bursting the call stack is not a mere resource
contention problem. No one who knew what he was doing would describe it as
such, let alone try to downplay it as an irrelevant problem which can be
perfectly fixed by restarting the process.
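For the record, guarding against a burst stack does not require anything
convoluted either; bounding the recursion depth (or rewriting the recursion
as a loop) is usually enough. A minimal sketch, with an arbitrary limit
picked purely for illustration:

struct node { struct node *left, *right; };

#define MAX_DEPTH 10000   /* arbitrary bound, chosen for illustration */

/* Returns 1 on success, 0 if the traversal would go too deep, instead of
   recursing until the call stack bursts. */
static int walk(const struct node *n, unsigned depth)
{
    if (n == NULL)
        return 1;
    if (depth >= MAX_DEPTH)
        return 0;   /* report the failure; don't crash */
    return walk(n->left, depth + 1) && walk(n->right, depth + 1);
}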

The latter cannot be
corrected within the process; you can use convoluted code everywhere to
deal with rare events you can't correct, with the risk that the convoluted
code is in error, or just abort, catch the signal and report with a
write(2,...), then exit.

No matter how convolutedly you code, at some point the system can be so
hosed you can't do anything. The question then is how defensively you want
to code, with the risk that the error is actually in the defensive code,
and at what point you want to deal with the reality that sometimes life
sucks.

No amount of C code is going to stick the ethernet cable back in.

The ethernet cable doesn't pop out of the socket if you burst the call
stack. The cable also stays where it is if a call to malloc() fails. There
are plenty of programs written in C that don't suffer from that sort of
problem, mainly because they were written by people who knew what they
were doing. I'm talking about people who were bothered enough by errors
such as bursting the call stack that they managed to do a good job of
avoiding them.
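Likewise, a failed malloc() is trivially detectable; whether you then
degrade, retry or exit cleanly is a design choice, but none of it requires
a crash. A rough sketch (the xmalloc name is just a common convention, not
anyone's actual code):

#include <stdio.h>
#include <stdlib.h>

/* Allocate or exit cleanly with a diagnostic, rather than letting a
   later dereference of a null pointer take the process down. */
static void *xmalloc(size_t n)
{
    void *p = malloc(n);
    if (p == NULL) {
        fprintf(stderr, "out of memory (%lu bytes requested)\n",
                (unsigned long)n);
        exit(EXIT_FAILURE);
    }
    return p;
}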


Rui Maciel
 
A

Anders Wegge Keller

Stephen Sprunk said:
On 31-Aug-12 13:17, Anders Wegge Keller wrote:
My point is that one cannot say a situation is "unrecoverable" and
then, later in the very same sentence, explain how to recover from
it. If one can recover, then by definition it is not
"unrecoverable".

For some value of "recover". If a server reboot is your idea of a
graceful recovery, I recommend that you start using SCO OpenServer as
a deployment platform :)

But that bitterness aside, I think that we disagree about the
definition of recovery, rather than the tradeoff between effort and
return.
 
L

Les Cargill

Anders said:
Actually, I suspect Brown Eyes lives in that part of the world where
"good enough" is the deciding factor. What "good enough" is, is highly
dependent on the job at hand. A batch job, running every hour, that
can be restarted and recover by itself is "good enough", when it
restarts mid-batch twice a year. The control software for the
Curiosity sky crane is "good enough", when it never fails.

Knowing the difference between those two situations, is what most
managers and customers I know of, spend a lot of time mulling
over. None of them are prepared to pay the NASA-price for the
low-priority batch job.

Thing is, I suspect it's not really about cost. I think there
is a "public choice economics" problem underlying all this.

You just don't really know how to do high-reliability, high-availability
stuff until you've done it. Once you have, it doesn't
appear to cost any more than ... "the old sloppy way."

And as a career choice, I suspect "the old sloppy way" is
economically rewarded. The "good" way is at least
discouraged, somewhat - most of what I have seen in terms of
tools and all in the last thirty years seems to be not about
appealing to mature engineers but rather making the world safer
for ... "newbs".
Maybe it could be avoided. But not always, especially not if you have
to code against the physical reality. But even in the case where a
deadlock theoretically could be prevented, the tradeoff between "good
enough" and "perfect in 10 man-years" comes into play.

I haven't seen a case where there was the remotest possibility of
deadlock in years and years. I still read the story
about Pathfinder and am absolutely dumbfounded. If there's even
the remotest *possibility* of priority inversion, you are
doing it wrong.
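For what it's worth, the fix famously uploaded to Pathfinder was to turn on
priority inheritance for the offending mutex. In POSIX terms that is
roughly the sketch below, assuming the platform supports
PTHREAD_PRIO_INHERIT:

#include <pthread.h>

/* Create a mutex with the priority-inheritance protocol, the standard
   cure for Pathfinder-style priority inversion. */
static int make_pi_mutex(pthread_mutex_t *m)
{
    pthread_mutexattr_t attr;
    int rc = pthread_mutexattr_init(&attr);
    if (rc != 0)
        return rc;
    rc = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    if (rc == 0)
        rc = pthread_mutex_init(m, &attr);
    pthread_mutexattr_destroy(&attr);
    return rc;
}
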
I'd be happy if you tell me how to fix the systems I'm supposed to
interface with. I'm bound by that pesky "good enough", so I cannot do
much more than assume that the opposite side of the socket actually
implements the protocol as described. When that assertion fails, the
inevitable result is more often than not something that can be
described by the onomatopoeticon "Kaboom!"

The economics of defects seems to be incredibly poorly understood.
 
R

Rui Maciel

Don't make my brown eyes China Blue said:
I see you've never had to deal with database deadlocks.

You fix nothing by restarting a process. The only thing that you manage to
do is to get the process back in a state where the problems caused by your
bugs aren't yet being triggered. Meanwhile, your bugs are still there, and
it's only a matter of time before someone is forced to deal with the same
problems caused by the same bugs.

There's an old joke about a group of engineers going on a car trip, and the
punchline was that when the car broke down, the computer engineer suggested
that everyone should get out of the car and then back in to see if that
would get the car to start. It was supposed to be a parody, not a factual
representation of how actual software problems were being tackled in the
real world.


Rui Maciel
 
S

Stephen Sprunk

You just don't really know how to do high-reliability, high-availability
stuff until you've done it. Once you have, it doesn't
appear to cost any more than ... "the old sloppy way."

That is quite true.

A related lesson is that you can take a large system and scale it down,
but you can't take a small system and scale it up. Things like high
scalability, high availability, etc. need to be designed in from the
start; they cannot be added later because they fundamentally change the
design of the system. If done at the start, they don't add much to the
cost--but if left out, you'll eventually have to scrap the entire design
and start over.
The economics of defects seems to be incredibly poorly understood.

As an industry, we seem to have a solid understanding of how much each
call to the support department costs and how much the QA department
costs overall, but nobody seems able to quantify how much it costs to be
_known_ as an unreliable company that makes unreliable products, at
least until the situation has gotten so bad that one starts losing
market share--and few companies recover from that death spiral.

S
 
I

Ian Collins

You fix nothing by restarting a process. The only thing that you manage to
do is to get the process back in a state where the problems caused by your
bugs aren't yet being triggered. Meanwhile, your bugs are still there, and
it's only a matter of time before someone is forced to deal with the same
problems caused by the same bugs.

The original context was unrecoverable resource contentions, not
programming bugs.
 
L

Les Cargill

Stephen said:
That is quite true.

I guess my point is that *apparently*, exposure to this is rare,
and therefore considered costly.
A related lesson is that you can take a large system and scale it down,
but you can't take a small system and scale it up. Things like high
scalability, high availability, etc. need to be designed in from the
start; they cannot be added later because they fundamentally change the
design of the system. If done at the start, they don't add much to the
cost--but if left out, you'll eventually have to scrap the entire design
and start over.


I hadn't considered scalability - W.R.T. software, that seems even
*worse* than reliability and availability in terms of being
properly ... considerable.

Web tools seem particularly terrible at this, although I'd defer
to someone with more experience than I have. I got
quoted - after considerable pulling - that one really popular
system had lambdas (transaction rates) on the order
of 100Hz* - *at best*. Apparently, you just throw
hardware at it...

*it might have been *10*Hz, but that sounds outrageously slow.
As an industry, we seem to have a solid understanding of how much each
call to the support department costs and how much the QA department
costs overall,

I am really quite unsure of that. The source of bias here is "well,
the present budget is <x>, so let's keep doing that" until you *can't*
do that any more, and then guess where cuts come from?

but nobody seems able to quantify how much it costs to be
_known_ as an unreliable company that makes unreliable products, at
least until the situation has gotten so bad that one starts losing
market share--and few companies recover from that death spiral.

Men plan, Schumpeter laughs. Good for him; I really don't
want to drive a Tucker....

The problem then is - now it's a job for the rhetoricians, not
the engineers. And they're *really* expensive and can
actually *do* the Jedi Mind Trick...
 
K

Keith Thompson

Rui Maciel said:
You fix nothing by restarting a process. The only thing that you manage to
do is to get the process back in a state where the problems caused by your
bugs aren't yet being triggered. Meanwhile, your bugs are still there, and
it's only a matter of time before someone is forced to deal with the same
problems caused by the same bugs.
[...]

Restarting a process certainly does fix whatever problems were
caused by the process not running, because now it's running again,
and presumably doing useful work.

Yes, the bug is still there, and you may have to restart the process
again in a few months. And yes, someone will have to deal with
the problems caused by the bugs -- by restarting the process at
the cost of a few minutes of downtime.

Nobody is saying that it's a perfect solution, or that it fixes
the bug. But sometimes it's good enough, and often *better* than
dropping everything else you're doing to fix the bug *right now*.
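In practice that is usually automated: a few lines of supervisor turn
"restart the process" into a few seconds of downtime rather than a few
minutes. A rough POSIX sketch, with ./worker standing in for whatever the
real program is:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    for (;;) {
        int status;
        pid_t pid = fork();
        if (pid == 0) {                  /* child: run the real program */
            execl("./worker", "worker", (char *)NULL);
            _exit(127);                  /* exec failed */
        }
        if (pid < 0) {
            perror("fork");
            return EXIT_FAILURE;
        }
        waitpid(pid, &status, 0);        /* wait for it to die... */
        fprintf(stderr, "worker exited (status %d); restarting\n", status);
        sleep(1);                        /* ...then start it again */
    }
}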
 
S

Stephen Sprunk

I guess my point is that *apparently*, exposure to this is rare,
and therefore considered costly.

It's considered costly because it's rarely designed in from the start,
so people look at the cost of scrapping their design and starting over,
rather than the (smaller total) cost of doing it right the first time.

It's also considered costly because there aren't that many people who
know how to do it right the first time, and simple market economics
tells us that such people will therefore be more costly to employ. In
reality, though, that is less costly than doing it wrong the first time.

Also, many companies are started by people who have dreams of making it
big but are utterly unprepared for what it means to actually have that
happen. There was a UPS commercial several years ago that showed a few
people in an office watching their sales ticker go live and cheering
when it rolled past a hundred orders--and then aghast when it soon
rolled past a hundred _thousand_ orders. For many startups, that isn't
too far off the mark--and it shows. Few survive that level of success;
those that do usually manage it by being bought by a larger company that
knows how to handle it.
I hadn't considered scalability - W.R.T. software, that seems even
*worse* than reliability and availability in terms of being
properly ... considerable.

Scalability, reliability and availability of software are all closely
related, and the most common solution (clustering) addresses all three.

Basically, you cannot assume one of any functional unit; you must assume
there are N+1 of each unit, up to N of which are currently not available
(either due to being down or due to being overloaded), where N can be
anywhere from 0 (for small systems) to dozens or even hundreds (for
large systems). That is not something you can retrofit; it is a
fundamental change in the way you design systems.

(For extra credit, allow each unit within each N+1 group to be ahead or
behind one version of software, which enables on-line upgrades.)
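The client-side consequence is that nothing ever connects to "the" server;
it walks the group until some member answers. A rough sketch of that
pattern, assuming POSIX sockets, with the host list and port supplied by
the caller:

#include <netdb.h>
#include <stddef.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Illustrative only: try each member of an N+1 group in turn and return
   a connected socket to the first member that answers, or -1 if the
   whole group is unreachable. */
static int connect_any(const char *const hosts[], size_t nhosts,
                       const char *port)
{
    size_t i;

    for (i = 0; i < nhosts; i++) {
        struct addrinfo hints, *res, *p;

        memset(&hints, 0, sizeof hints);
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo(hosts[i], port, &hints, &res) != 0)
            continue;                      /* can't resolve: try next member */
        for (p = res; p != NULL; p = p->ai_next) {
            int fd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
            if (fd < 0)
                continue;
            if (connect(fd, p->ai_addr, p->ai_addrlen) == 0) {
                freeaddrinfo(res);
                return fd;                 /* first reachable member wins */
            }
            close(fd);                     /* down or overloaded: try next */
        }
        freeaddrinfo(res);
    }
    return -1;                             /* whole group unavailable */
}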

This is a radically different approach from what someone here described
as the NASA model, where there is exactly one of everything that has to
have perfect reliability and infinite capacity because if any unit
ever fails or gets overloaded, the system crashes--and people die.
I am really quite unsure of that. The source of bias here is "well,
the present budget is <x>, so let's keep doing that" until you *can't*
do that any more, and then guess where cuts come from?

I could tell you, to the penny, exactly how much it costs my employer
for each call to our support line. I could also tell you, to the penny,
exactly how much our QA department costs. I could even tell you, to the
penny, how much it costs to fix all the bugs that QA finds.

What I _can't_ tell you, even to within several orders of magnitude, is
how much it will cost us to _not find_ a bug or how much it will cost us
to _not fix_ said bug.

That's why the "expense" of finding and fixing bugs is always a target
for cuts: the available statistics only show half of the story.

S
 
M

Malcolm McLean

On Sunday, 2 September 2012 20:05:15 UTC+1, Rui Maciel wrote:
Don't make my brown eyes China Blue wrote:


You fix nothing by restarting a process. The only thing that you manage to
do is to get the process back in a state where the problems caused by your
bugs aren't yet being triggered. Meanwhile, your bugs are still there, and
it's only a matter of time before someone is forced to deal with the same
problems caused by the same bugs.
So the question is, how much time, and what's at stake. Are we talking about
an aeroplane falling out of the sky, or one in a million games of Space
Invaders being aborted?
 
L

Les Cargill

Stephen said:
It's considered costly because it's rarely designed in from the start,
so people look at the cost of scrapping their design and starting over,
rather than the (smaller total) cost of doing it right the first time.

I think scrapping a prototype is both easier and cheaper, but it
apparently offends people's sensibilities.
It's also considered costly because there aren't that many people who
know how to do it right the first time, and simple market economics
tells us that such people will therefore be more costly to employ. In
reality, though, that is less costly than doing it wrong the first time.

It also means there's a market bias against *learning* to do it right
the first time. That's unfortunate.
Also, many companies are started by people who have dreams of making it
big but are utterly unprepared for what it means to actually have that
happen.

You got that right. That's a huge effect.
There was a UPS commercial several years ago that showed a few
people in an office watching their sales ticker go live and cheering
when it rolled past a hundred orders--and then aghast when it soon
rolled past a hundred _thousand_ orders. For many startups, that isn't
too far off the mark--and it shows. Few survive that level of success;
those that do usually manage it by being bought by a larger company that
knows how to handle it.

Yep.


Scalability, reliability and availability of software are all closely
related, and the most common solution (clustering) addresses all three.

True.

Basically, you cannot assume one of any functional unit; you must assume
there are N+1 of each unit, up to N of which are currently not available
(either due to being down or due to being overloaded), where N can be
anywhere from 0 (for small systems) to dozens or even hundreds (for
large systems). That is not something you can retrofit; it is a
fundamental change in the way you design systems.

Yes, it is. Although I was ( erroneously ) thinking "reliability"
not in the redundant or hot-backup sense - more in the "don't crash"
sense.
(For extra credit, allow each unit within each N+1 group to be ahead or
behind one version of software, which enables on-line upgrades.)

This is a radically different approach from what someone here described
as the NASA model, where there is exactly one of everything that has to
have perfect reliability and infinite capacity because if any unit
ever fails or gets overloaded, the system crashes--and people die.

*Some* NASA stuff uses "voting" type redundancy, as do other aviation
electronics.
I could tell you, to the penny, exactly how much it costs my employer
for each call to our support line. I could also tell you, to the penny,
exactly how much our QA department costs. I could even tell you, to the
penny, how much it costs to fix all the bugs that QA finds.

Nobody has a knife that sharp :)
What I _can't_ tell you, even to within several orders of magnitude, is
how much it will cost us to _not find_ a bug or how much it will cost us
to _not fix_ said bug.

Ah, right.
That's why the "expense" of finding and fixing bugs is always a target
for cuts: the available statistics only show half of the story.

Indeed; the dreaded "unknown unknown."
 
J

jacob navia

On 03/09/12 00:01, Stephen Sprunk wrote:
This is a radically different approach from what someone here described
as the NASA model, where there is exactly one of everything that has to
have perfect reliability and infinite capacity because if any unit
ever fails or gets overloaded, the system crashes--and people die.

This is an unwarranted insult to the NASA people, who have to be able to
survive and do outstanding work on an ever-decreasing budget. The
systems on Mars, for instance, have worked reliably for years: the
Opportunity rover has been running its real-time OS for eight years already,
and its twin rover Spirit had a few startup problems but worked flawlessly
for many years after that initial problem.

NASA CAN'T replicate everything since every gram sent to Mars or to
Jupiter costs a fortune. You can't send several computers, several
backup systems, etc. You are constrained by the laws of gravity.
 
J

jacob navia

On 02/09/12 13:05, Anse wrote:
I suspect you "I owe you something"?
What?

The strategy of this troll is always the same: just hang a "profound"-
looking sentence onto some posts at random. Obviously there isn't any
discussion or hint of one; it is just always the same pattern.

He has been trolling in the C++ group too, always with the same tactic.

He has no knowledge of anything technical, or about software, or anything
like that; he is just somebody who loves to insult people at
random, and the anonymity of Usenet allows him to do it with ease.

Look at this for instance:

"I was trying to help the flailing duck and you call me a hunter?"
"Aren't you the one who shot him to begin with?"

Etc. Meaningless sentences juxtaposed so that they seem to be
articulated when in fact they aren't.

jacob
 
