Microsoft abandons the C language


Rui Maciel

jacob said:
If you know that your data will almost never go beyond 200 you will
almost always win: you never have a HARD-WIRED limit, and you almost
always get a fast stack allocation.

I didn't state that VLAs are all evil, all the time. They can be handy in
some cases, including the ones you pointed out. I believe the usefulness of
VLAs shines through in cases where a fixed-size array is declared with a
size large enough to fit every conceivable need, and replacing it with a
VLA makes it possible to save a bit of memory.
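
To make that concrete, here is a minimal sketch of the kind of replacement
I mean (function names and sizes are hypothetical, not taken from any real
code):

#include <stddef.h>

/* Worst-case fixed buffer: always takes 4096 ints of stack. */
void process_fixed(const int *data, size_t n)
{
    int scratch[4096];              /* sized for "every conceivable need" */
    for (size_t i = 0; i < n && i < 4096; i++)
        scratch[i] = data[i] * 2;
    /* ... use scratch ... */
}

/* VLA: only takes as much stack as this call actually needs.
   Assumes 0 < n and that n is known to stay small. */
void process_vla(const int *data, size_t n)
{
    int scratch[n];
    for (size_t i = 0; i < n; i++)
        scratch[i] = data[i] * 2;
    /* ... use scratch ... */
}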

In spite of that, VLAs tend to be sold as something they are not: a sort of
malloc() replacement which has all the advantages (fast, easy to use, no
need to manage memory, etc.) and absolutely zero disadvantages. The web
is packed with sites showcasing how VLAs are perfect in every conceivable
scenario, based on examples which are a bit worrisome, such as using them
to define dense matrices of undetermined size and buffers without any size
cap. I won't be surprised if, in the near future, the "C is inherently
unsafe" mantra resurfaces based solely on how VLAs are being marketed
right now and, as a consequence, on how they are and will be employed.
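
A more honest presentation would look something like the following sketch,
where the cap is the whole point (names and the limit are hypothetical):

#include <stdlib.h>
#include <string.h>

#define SMALL_LIMIT 256              /* hypothetical cap, tuned to the stack budget */

void handle(const char *src, size_t n)
{
    if (n > 0 && n <= SMALL_LIMIT) {
        char buf[n];                 /* bounded VLA: the fast, common case */
        memcpy(buf, src, n);
        /* ... work on buf ... */
    } else {
        char *buf = malloc(n);       /* large (or zero) requests go to the heap */
        if (buf == NULL)
            return;                  /* and allocation failure can be detected */
        memcpy(buf, src, n);
        /* ... work on buf ... */
        free(buf);
    }
}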


Rui Maciel
 

Rui Maciel

Nomen said:
However with the UNIX <-> POSIX <-> C <->
What About The Children (legacy code that must continue to fail
identically to the way it has always failed) hot potato tossing that
typifies any attempt to do anything correctly, this will never be fixed in
UNIX, POSIX, or the C language.

It can be argued that VLAs were already fixed in the C programming language,
considering that in C11 they were relegated to a conditional feature that no
implementation needs to support.
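
An implementation that opts out has to say so by predefining
__STDC_NO_VLA__, so code that wants to stay portable can test for it. A
minimal sketch (the function and types here are just illustrative):

#include <stdlib.h>

void demo(size_t n)
{
#ifdef __STDC_NO_VLA__
    /* This implementation does not provide VLAs: fall back to the heap. */
    double *v = malloc(n * sizeof *v);
    if (v != NULL) {
        /* ... */
        free(v);
    }
#else
    /* VLAs are available (assumes 0 < n and n is reasonably small). */
    double v[n];
    (void)v;
    /* ... */
#endif
}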


Rui Maciel
 

Rui Maciel

Don't make my brown eyes China Blue said:
I learned programming using Fortran 77, Pascal, and Algol 61 on CDC 3300s
and Cyber 170s. I suspect I learned a great deal about allocating local
variables with limited space.

I've been using VM since the 1980s. I realise that VM, like recursion and
stack allocation, is one of those radical concepts that have been around
for a mere 50 years, but I have coped with them.

On unix it's easy to monitor a process and respawn it every time it exits.
It's easy to let a process blow its brains out on an unrecoverable error. I
use signal handlers for a last-second error report, let it die, and
respawn. Because I debug code before releasing it into the wild, it dies
from resource problems rather than program errors. Restarting resolves
those problems.
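
(For reference, the monitor-and-respawn pattern being described looks
roughly like this; a minimal POSIX sketch with hypothetical names, not
anyone's actual code.)

#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static void last_words(int sig)
{
    /* Only async-signal-safe calls here: emit a last-second report,
       then die with the original signal. */
    const char msg[] = "worker: fatal signal, going down\n";
    write(STDERR_FILENO, msg, sizeof msg - 1);
    signal(sig, SIG_DFL);
    raise(sig);
}

static void worker(void)
{
    signal(SIGSEGV, last_words);
    signal(SIGBUS, last_words);
    /* ... real work; may crash on an unrecoverable resource problem ... */
    pause();
}

int main(void)
{
    for (;;) {                       /* monitor: respawn on every exit */
        pid_t pid = fork();
        if (pid == 0) { worker(); _exit(0); }
        if (pid < 0) { perror("fork"); return 1; }
        int status;
        waitpid(pid, &status, 0);
        fprintf(stderr, "monitor: worker exited (status %#x), respawning\n",
                (unsigned)status);
        sleep(1);                    /* avoid a tight respawn loop */
    }
}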

This is why appeals to authority, particularly those based on a vague
notion of authority, are meaningless: they say nothing about competence,
and may even suggest the lack of it. If your code is so bad, and your
understanding of the issue so questionable, that you accept frequent
crashes as a natural occurrence and, instead of investing your time in
avoiding those crashes by fixing your bugs and not writing them in the
first place, you opt to duct-tape together a CPR system for your programs,
then I am led to believe that you are in no position to comment on issues
such as coding best practices, let alone how to write safe code.


Rui Maciel
 

Nick Keighley

I know exactly what you mean. I've had to deal with several new
languages recently, including Haskell and Lua (at least, they're new to
me). Lua looks reasonably familiar, but Haskell leaves me feeling the
same way you describe.

Yes, I was left with a badly bruised brain after trying Haskell.

For all I know it may have some significant
advantages compared to the languages I'm more familiar with, at least
for some uses, but for now just the simple conceptual differences are so
daunting that it's difficult to imagine what those advantages might be.

Well, I'm told functional languages are going to become more important
as we get more and more processors (cores?) in our systems.

http://www.gotw.ca/publications/concurrency-ddj.htm

Since Moore's law doesn't seem to have gone away, I suspect the number
of processors is going to rise exponentially. And functional languages
are supposed to handle this well. They don't modify anything
(conceptually at least), so you can distribute the processing arbitrarily.

Perhaps I should try Erlang; it looks less daunting.
 

Stephen Sprunk

Don't make my brown eyes China Blue said:
I find it acceptable to crash a couple times a year due to unrecoverable
resource contentions that can only be recovered by restarting anyway.

You may, but many/most of us, our employers and our customers do not.

If it can be recovered by restarting, then it obviously wasn't
unrecoverable in the first place.

Also, what "resource contentions" are causing your code to crash?
Sounds like sloppy coding, where you just accept bugs rather than fix
them, rather than some inevitable problem.

S
 

Anders Wegge Keller

Stephen Sprunk said:
On 31-Aug-12 06:40, Don't make my brown eyes China Blue wrote:
You may, but many/most of us, our employers and our customers do
not.

Actually, I suspect Brown Eyes lives in that part of the world where
"good enough" is the deciding factor. What "good enough" is, is highly
dependent on the job at hand. A batch job, running every hour, that
can be restarted and recover on its own is "good enough" when it
restarts mid-batch twice a year. The control software for the
Curiosity sky crane is "good enough" when it never fails.

Knowing the difference between those two situations is what most
managers and customers I know of spend a lot of time mulling over.
None of them are prepared to pay the NASA-price for the
low-priority batch job.

If it can be recovered by restarting, then it obviously wasn't
unrecoverable in the first place.

Maybe it could be avoided. But not always, especially not if you have
to code against physical reality. But even in the case where a
deadlock could theoretically be prevented, the tradeoff between "good
enough" and "perfect in 10 man-years" comes into play.

Also, what "resource contentions" are causing your code to crash?
Sounds like sloppy coding, where you just accept bugs rather than
fix them, rather than some inevitable problem.

I'd be happy if you could tell me how to fix the systems I'm supposed to
interface with. I'm bound by that pesky "good enough", so I cannot do
much more than assume that the opposite side of the socket actually
implements the protocol as described. When that assumption fails, the
inevitable result is more often than not something that can be
described by the onomatopoeic "Kaboom!"
 

James Kuyper

On 08/31/2012 02:17 PM, Anders Wegge Keller wrote:
....
Actually, I suspect Brown Eyes lives in that part of the world where
"good enough" is the deciding factor. What "good enough" is, is highly
dependent on the job at hand. A batch job, running every hour, that
can be restarted and recover on its own is "good enough" when it
restarts mid-batch twice a year. The control software for the
Curiosity sky crane is "good enough" when it never fails.

Knowing the difference between those two situations is what most
managers and customers I know of spend a lot of time mulling over.
None of them are prepared to pay the NASA-price for the
low-priority batch job.

My skills are contracted out to NASA, but not on one of those projects
where "failure is not an option". It's just data-analysis software - no
one will die if my software fails, nothing I do will have any effect on
the instruments on the satellites that collect the data, and even if my
software goes radically wrong, there's not even any danger of losing the
input files needed to re-run my program, because copies of those files
are archived in places my program can't reach, no matter how badly it
malfunctions. Some of those archived files have been lost, permanently,
but nothing my programs do could have that effect.

Still, I try to program on the cautious side, and when I read

....
... so I cannot do
much more than assume that the opposite side of the socket actually
implements the protocol as described. When that assumption fails, the
inevitable result is more often than not something that can be
described by the onomatopoeic "Kaboom!"


my immediate reaction was "Why can't you do more than that?". At a
minimum, when writing such code I'd try to validate whatever comes back
from the socket, and try to at least minimize the "Kaboom" if the
validity tests indicate that the protocol is not being implemented as
described. I realize that such validity tests cannot be perfect; that
really good validity tests can cost what you called "the NASA-price".
However, there should be at least a few simple validity tests you can
perform (unless it's a one-way data transfer, with no feedback) to
provide some kind of protection.
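
For example, even a trivial sanity check on a length-prefixed frame already
turns "Kaboom" into a rejected message. A minimal sketch, for a hypothetical
protocol with a 2-byte length prefix (not any real wire format):

#include <stdint.h>
#include <stddef.h>

#define MAX_FRAME 4096               /* hypothetical protocol limit */

/* Returns the payload length on success, -1 if the header is not credible. */
int validate_frame(const uint8_t *buf, size_t got)
{
    if (got < 2)
        return -1;                   /* header not even complete */
    size_t len = (size_t)buf[0] << 8 | buf[1];
    if (len > MAX_FRAME || len + 2 > got)
        return -1;                   /* peer is not following the spec */
    return (int)len;                 /* safe to parse the payload */
}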
 

Keith Thompson

Stephen Sprunk said:
You may, but many/most of us, our employers and our customers do not.

Let me suggest another way of looking at it.

If you have a program that crashes a couple of times a year, for
whatever reason, of course ideally you'd want to fix the bug that
causes the crash.

But unless you have infinite resources, fixing that crash might not
be your highest priority, particularly if crashing and automatically
restarting doesn't cause any serious problems to your users.
(You might even consider preemptively killing and restarting
the program once a month or so, at a time of your choosing that
minimizes inconvenience.)

In practice a twice-yearly crash is going to be one of a number of
reported bugs, and you have to pick and choose which bugs you're
going to spend time on.
 

jacob navia

On 31/08/12 13:40, Don't make my brown eyes China Blue wrote:
I find it acceptable to crash a couple times a year due to unrecoverable
resource contentions that can only be recovered by restarting anyway.

The problem is that you aren't the only one who has this attitude. Now
suppose that your software has just a 1% failure probability and all by
itself crashes twice a year.

If I use 100 components in my software (compiler, linker, operating
system, libraries I am interfacing with), the probability that the
program crashes is almost one...
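
To put a rough number on it (assuming, for the sake of the estimate, that
the failures are independent): with 100 components each failing with
probability 0.01, the chance that at least one of them fails is
1 - 0.99^100, which is about 0.63; and if the individual rates are even a
few times worse, say 5%, it becomes 1 - 0.95^100, roughly 0.994, which
really is almost one.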

In my compiler system I can emit millions of instructions correctly; it
only takes a single wrong one and the program will crash.

Your attitude is acceptable in an environment where:

1) You know the components you use have a MUCH lower failure
rate than 0.01%

2) You have a correct method of handling all possible errors, for
instance restarting a fresh copy automatically, etc.

In other cases you are contributing to the general problem of
unreliable software.
 

lawrence.jones

James Kuyper said:
On 08/28/2012 07:14 AM, David Brown wrote:
...

As I understand it (any committee members who are reading this feel free
to correct me) the members of the ISO committee are national standards
organizations, one per country, with one vote each. For reasons I'm not
sure of, ANSI (the US member) seems to have influence greatly
disproportionate to its voting power.

ANSI has greater influence on some committees simply due to the greater
breadth and depth of expertise on the subject matter that exists in the
US as compared to other countries. It also has greater influence on
committees where it holds the convenorship and/or editorship for
somewhat obvious reasons. In the case of the C committee, all of those
apply.
 

Anders Wegge Keller

...
Still, I try to program on the cautious side, and when I read

...


my immediate reaction was "Why can't you do more than that?". At a
minimum, when writing such code I'd try to validate whatever comes back
from the socket, and try to at least minimize the "Kaboom" if the
validity tests indicate that the protocol is not being implemented as
described.

Case 1:

An airport, where the luggage sorting system interfaces with
another system called FIS (Flight Information Service). The sorting
system receives information about which gate flight SK777 is departing
from, so the luggage can be sorted correctly. The interface is a bit
weird, consisting of a variable-length telegram formatted as
/FIELD/DATA/FIELD/DATA/.... This particular failure happened when FIS
started sending information for flights 3 days in advance. OPS hadn't
yet decided the gate, so we suddenly started receiving
/GATE/N/A/... This didn't parse according to spec, so our system
replied with a NAK, indicating an erroneous telegram. The FIS then
decided to resend the same telegram.

Net result: After some time, our system had no idea about any of the
flights departing from that airport.
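
The mis-parse itself is easy to reproduce with a toy tokenizer. A
hypothetical sketch (nothing like the real system) of why /GATE/N/A/
cannot be split into field/value pairs:

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* The value "N/A" contains the field delimiter, so a naive
       slash-split sees field "GATE" with value "N" and then a
       stray field "A" with no value at all. */
    char telegram[] = "/FLIGHT/SK777/GATE/N/A/";
    char *field = strtok(telegram, "/");
    while (field != NULL) {
        char *value = strtok(NULL, "/");
        printf("field=%s value=%s\n", field, value ? value : "(missing)");
        field = strtok(NULL, "/");
    }
    return 0;
}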

Case 2:

A large national postal service. The sorting system in question was
running in a "dumb mode", where all decisions were made by a host
system. At some point, a bug was introduced in this system, causing it
to lose information about parcels in some specific circumstances. Our
system reported those "lost" parcels on every roundtrip:

Parcel 4935374045DE has circulated 456 times
Parcel 4935374045DE has circulated 457 times
Parcel 4935374045DE has circulated 458 times
.....

Each parcel stuck in the system in this way takes up space, and after
some time the capacity was severely degraded, not to mention the
parcels, which in some cases had been stuck for well over a month.

I realize that such validity tests cannot be perfect; that
really good validity tests can cost what you called "the NASA-price".
However, there should be at least a few simple validity tests you can
perform (unless it's a one-way data transfer, with no feedback) to
provide some kind of protection.

I can (and do) validate all that I want, but that just doesn't help
in situations like these two.
 

Ian Collins

I see you've never had to deal with database deadlocks.

Or systems that overcommit memory! I'm currently investigating a Linux
system where the name service cache daemon started crashing each
morning. The cores all indicate out-of-memory errors. The users don't
notice a few logins taking a wee bit longer once in a while.

So the condition may have been recoverable, but the designers decided
dumping core and restarting was the best option.
 

Les Cargill

BartC said:
I suspect the denotation 'FORTRAN' used to be more popular simply
because a lot of equipment then only worked in upper case...

COMPUTERS USED TO BE A LOT NOISIER!
 

Chicken McNuggets

However OpenGL was never popular with MS, and was only grudgingly
supported. Hopefully that will still be available, but probably stuck at
some ancient version.

Thankfully Microsoft don't need to directly support OpenGL. They only
provide OpenGL 1.1 in the base Windows install. The version of OpenGL
supported on Windows is down to which graphics card you have and what
version of the drivers for said graphics card you are using.

So it really doesn't matter what Microsoft do in relation to OpenGL; you
just need to make sure you have a decent graphics card which has drivers
that support the version of OpenGL you want to use.
 

Stephen Sprunk

Keith Thompson said:
Let me suggest another way of looking at it.

If you have a program that crashes a couple of times a year, for
whatever reason, of course ideally you'd want to fix the bug that
causes the crash.

But unless you have infinite resources, fixing that crash might not
be your highest priority, particularly if crashing and automatically
restarting doesn't cause any serious problems to your users.
(You might even consider preemptively killing and restarting
the program once a month or so, at a time of your choosing that
minimizes inconvenience.)

In practice a twice-yearly crash is going to be one of a number of
reported bugs, and you have to pick and choose which bugs you're
going to spend time on.

Of course, but saying that I don't currently have the resources to
diagnose and fix a particular bug is an entirely different thing than
saying I find that bug "acceptable".

I'm also well aware that a bug that causes a crash twice a year is
difficult to reproduce, which makes it difficult to diagnose and even
more difficult to prove that the proposed fix actually worked. Such
bugs often languish for years because nobody has figured out how to
reproduce them--but that still doesn't mean we find them "acceptable".

S
 

Keith Thompson

Stephen Sprunk said:
On 31-Aug-12 13:56, Keith Thompson wrote: [...]
In practice a twice-yearly crash is going to be one of a number of
reported bugs, and you have to pick and choose which bugs you're
going to spend time on.

Of course, but saying that I don't currently have the resources to
diagnose and fix a particular bug is an entirely different thing than
saying I find that bug "acceptable".

I'm also well aware that a bug that causes a crash twice a year is
difficult to reproduce, which makes it difficult to diagnose and even
more difficult to prove that the proposed fix actually worked. Such
bugs often languish for years because nobody has figured out how to
reproduce them--but that still doesn't mean we find them "acceptable".

I understand what you're saying, but it depends on just what you
mean by the word "acceptable". One could argue that if you leave
the bug unfixed and continue to use the buggy system, then you've
"accepted" it.

But that's just a quibble about the meaning of a mostly irrelevant word.
 

Malcolm McLean

On Saturday, 1 September 2012 20:29:53 UTC+1, Keith Thompson wrote:
I understand what you're saying, but it depends on just what you
mean by the word "acceptable". One could argue that if you leave
the bug unfixed and continue to use the buggy system, then you've
"accepted" it.

One obvious question is how likely it is that the bug will be triggered,
in comparison to the chance that the computer will break?
 

Nobody

So it really doesn't matter what Microsoft do in relation to OpenGL; you
just need to make sure you have a decent graphics card which has drivers
that support the version of OpenGL you want to use.

That almost changed in Windows 8. MS wanted to make OpenGL a wrapper
around the DirectX API, which would have meant that OpenGL couldn't
provide any feature not available via DirectX, nor be any faster. In the
end, that didn't happen because of pushback from nVidia and AMD, who are
too big to be steamrollered, even by MS.
 

Anse

Don't make my brown eyes China Blue said:
I find it acceptable to crash a couple times a year due to
unrecoverable resource contentions that can only be recovered by
restarting anyway.

What therapy did you have? And what about the strippers? Did you shack up
with one? 5 times a year only? No way! I call bullshit.
 
