C99 compiler access

Joe Wright

E. Robert Tisdale said:
Chris said:
What C programming isn't embedded?
By which I mean pure C
as opposed to the pseudo C people do with C++ compilers.

When you think that anything with electricity applied to it
has a micro these days.
Autos have 50 odd each, washing machines, microwaves,
radios, mobile phones, smart cards missiles etc. etc.
That is a lot of devices that get programmed in C.


Yes, but they represent only a tiny fraction of all C programs.
Embedded [C] programmers represent
a tiny fraction of all C programmers.

I was embedded last night. And again tonight, I hope. :)
 
Brian Inglis

Chris said:
What C programming isn't embedded?
By which I mean pure C
as opposed to the pseudo C people do with C++ compilers.

When you think that anything with electricity applied to it
has a micro these days.
Autos have 50 odd each, washing machines, microwaves,
radios, mobile phones, smart cards missiles etc. etc.
That is a lot of devices that get programmed in C.

Yes, but they represent only a tiny fraction of all C programs.
Embedded [C] programmers represent
a tiny fraction of all C programmers.

Where do you get your statistics on the number of embedded and
non-embedded C programmers?
I don't know either way, but am willing to accept that the number of C
programmers working on embedded devices may now be larger than the
number working on OSes, tools, DBMSes, *[iu]x projects in C nowadays,
as the commercial world seems to have switched from C to newer
languages and tools.
 
E. Robert Tisdale

Brian said:
E. Robert Tisdale said:
Embedded [C] programmers represent
a tiny fraction of all C programmers.

Where do you get your statistics
on the number of embedded and non-embedded C programmers?
I don't know either way, but am willing to accept that
the number of C programmers working on embedded devices
may now be larger than the number working on OSes, tools, DBMSes,
*[iu]x projects in C nowadays, as the commercial world
seems to have switched from C to newer languages and tools.
 
Pete Gray

Reading this thread, I was kind of curious as to the "state of play" of the
major languages - a web search revealed quite a useful site ...

http://www.tiobe.com/tpci.htm

.... which shows that there hasn't been a *huge* change in quite a while,
although I was kind of surprised to see the fairly recent dip in Java.

-Pete.
--
http://home.comcast.net/~pete.gray/
 
Douglas A. Gwyn

David said:
If you want to really turn overcommit off, you have to reserve swap
not just for malloc but any time you do a virtual memory operation
that could lead to copying a page later: forking, for instance, or
mapping a file into memory. Since most Unix systems nowadays have
shared libraries that are loaded with mmap, and the libraries continue
to bloat out, you can end up with a *lot* of pointlessly reserved swap
space even by current standards.

You don't need multiple copies of read-only (I or D)
segments. And the reason the rest of a process' space
is R/W is that it will most likely be needed in the
course of performing the algorithm. The only
inherently *dynamic* RAM is the stack (which in a
properly designed app should be bounded by a reasonable
size) and the heap. The main purpose of the heap is
specifically to share the limited RAM resource among
competing processes. It is important for program
reliability that each process be able to sense when a
resource shortage occurs during execution and to retain
control when that happens; the recovery strategy needs
to be specific to the application, but typically would
involve backing out a partially completed transaction,
posting an error notification, scheduling a retry, etc.

Ideally stack overflow would throw an exception, but
anyway malloc is relied on to return a null pointer to
indicate heap resource depletion. For a process to be
abnormally terminated in the middle of a data operation
instead of being given control over when and how to
respond to the condition is unacceptable. If, as you
claim, there is now a consensus among OS designers that
it is okay for systems to behave that way, then that
indicates a serious problem with the OS designers.
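
To make that concrete, here is a minimal sketch of the checked-allocation
pattern in portable C (store_copy is just an illustrative helper; the
recovery policy shown -- report and return an error code -- is one of the
strategies described above):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Allocate a copy of a string; on heap exhaustion, report the
       failure and return control to the caller, which can then back
       out a transaction, schedule a retry, and so on. */
    int store_copy(const char *src, char **out)
    {
        char *p = malloc(strlen(src) + 1);
        if (p == NULL) {
            fprintf(stderr, "store_copy: out of memory\n");
            return -1;              /* the caller retains control */
        }
        strcpy(p, src);
        *out = p;
        return 0;
    }

    int main(void)
    {
        char *copy;
        if (store_copy("hello", &copy) == 0) {
            puts(copy);
            free(copy);
        }
        return 0;
    }

None of this helps, of course, if the OS overcommits and malloc never
returns a null pointer in the first place -- which is exactly the
complaint.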

It also indicates that Linux must *not* be used in any
critical application, at least not without great pains
being taken to determine the maximum aggregate RAM
utilization and providing enough RAM+swap space to
accommodate it, and one still would be taking a chance.

I suppose this is what comes of letting amateurs design
and construct operating systems, and/or write the apps.
There was a time when software engineers were in charge.
 
James Kuyper

Chris Hills said:
What C programming isn't embedded? By which I mean pure C as opposed to
the pseudo C people do with C++ compilers.

People who work too long in one type of environment have a tendency to
forget about the other environments. That doesn't mean that they don't
exist. I work on a project which processes the data coming down from
the MODIS instruments on the Terra and Aqua satellites. We've got
about 88 "process groups" working on this data, each of which consists
of one or more programs, and the bulk of that code is written in C;
the rest is mostly either Fortran 77 or Fortran 90. It can't be
"pseudo C" because we're not allowed to deliver code that needs a C++
compiler to build it.

By the way - if code is written in the common subset of C and C++, in
what sense is it "pseudo C"?
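
For what it's worth, the following trivial program is in that common
subset -- a hedged illustration, not project code. It compiles unchanged
as C or C++ because it casts the result of malloc (required by C++,
harmless in C) and avoids identifiers that are C++ keywords:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* The explicit cast keeps C++ compilers happy; C does not
           need it but permits it. 'sizeof *v' is valid in both. */
        double *v = (double *) malloc(10 * sizeof *v);
        if (v == NULL)
            return EXIT_FAILURE;
        v[0] = 3.14;
        printf("%f\n", v[0]);
        free(v);
        return 0;
    }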
 
Hallvard B Furuseth

Douglas said:
I suppose this is what comes of letting amateurs design
and construct operating systems, and/or write the apps.

You mean unlike the high-quality products of professionals
like Microsoft?

Nobody "let" anyone design and construct either Unix or Windows stuff.
Neither was developed in a dictatorship with mandatory quality control
and laws prohibiting both sale and purchase of products without an
official stamp of quality.
 
Charles Sanders

Douglas A. Gwyn said:
You don't need multiple copies of read-only (I or D)
segments. And the reason the rest of a process' space
is R/W is that it will most likely be needed in the
course of performing the algorithm.

Not necessarily, it could be data that is initialised
during the first few seconds of running and then remains
largely unchanged and could be shared in systems that
implement copy-on-write sharing when a process forks.

The only inherently *dynamic* RAM is the stack (which in a
properly designed app should be bounded by a reasonable
size) and the heap. The main purpose of the heap is
specifically to share the limited RAM resource among
competing processes. It is important for program
reliability that each process be able to sense when a
resource shortage occurs during execution and to retain
control when that happens; the recovery strategy needs
to be specific to the application, but typically would
involve backing out a partially completed transaction,
posting an error notification, scheduling a retry, etc.

But what others are saying is that many applications
are allocating much, much more memory than they need, "just
in case", and that lazy allocation by the OS results in
a large improvement in performance.

Ideally stack overflow would throw an exception, but
anyway malloc is relied on to return a null pointer to
indicate heap resource depletion. For a process to be
abnormally terminated in the middle of a data operation
instead of being given control over when and how to
respond to the condition is unacceptable.

This depends on the purpose. For some uses, a large
improvement in performance may be worth the dangers of lazy
allocation. Also, my (possibly faulty) recollection is that
the memory failure results in a signal, which can in theory
be caught and handled - not that I would want to do it.

If, as you claim, there is now a consensus among OS designers
that it is okay for systems to behave that way, then that
indicates a serious problem with the OS designers.

What David actually said was that the consensus
was that it was "unreasonably expensive", i.e. that the
performance degradation was too large to justify the improvement
in safety. Of course, this is a value judgement, and
different individuals will come to different conclusions,
and the performance/safety trade-off will be different
in a nuclear reactor controller, a critical database server,
and a user's desktop PC, to name a few extreme cases.

It also indicates that Linux must *not* be used in any
critical application, at least not without great pains
being taken to determine the maximum aggregate RAM
utilization and providing enough RAM+swap space to
accommodate it, and one still would be taking a chance.

Yes, you would have to ensure that the RAM+SWAP was
large enough for the worst case, but ensuring this should be
adequate. With overcommitment turned off, it may be the
case that you would have to supply so much RAM+swap that
the system became too expensive.

I suppose this is what comes of letting amateurs design
and construct operating systems, and/or write the apps.
There was a time when software engineers were in charge.

I suspect there may be some truth to the claim about
amateurs designing applications, as modern applications seem
to regard RAM as infinite. However, I think you are a little
unfair concerning OS designers. There seems to have been
a lot of debate and serious consideration of the performance
and safety trade-offs. The fact that some OS designers have
made decisions on these trade-offs that you disagree with
does not, in my opinion, mean that they are "amateurs".

Of course the best is to have both options. If
you need the safety, turn overcommitment off and pay the
price (either in money for more RAM or swap space or in
performance or both). If you do not need the safety,
allow overcommitment and get better performance for the
same cash. If you need safety, but cannot afford the price
of turning overcommitment off, either carefully analyse your
memory usage to ensure you are "reasonably" safe, or get
more money, or lower your target safety level, or lower
your performance standards, or compromise in some other
way.


Charles
 
Stewart Brodie

I agree with that completely: in my opinion, it is just plain unacceptable
for malloc() to lie. When I write code, I always take into account the
possibility of malloc() failure and implement some kind of recovery
strategy. A malloc() that lies, relying on the execution environment to
kill the program off arbitrarily if the memory isn't really there, is of
no use to me whatsoever.

Of course, much of the time all malloc() is at the mercy of the environment
- all it can do is request the extra memory and assume that if it is told
that the allocation was successful, then the memory is there.

The only possible workaround I can think of is to always memset the data to
a non-zero value (you wouldn't want malloc() using some memory-mapping trick
to map several pages of zero data to a single page of zero bytes, after all,
would you?)
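
As a sketch, that workaround amounts to something like the following
(malloc_committed is just an illustrative wrapper, and it assumes the
system really commits a page once non-shareable data has been written
to it):

    #include <stdlib.h>
    #include <string.h>

    /* Force the pages behind an allocation to be genuinely backed by
       writing non-zero data to all of them, defeating any
       shared-zero-page trick the allocator might play. */
    void *malloc_committed(size_t n)
    {
        void *p = malloc(n);
        if (p != NULL)
            memset(p, 0xFF, n);  /* non-zero, so pages cannot be shared */
        return p;
    }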

I don't see why I should have to impede the performance of my applications
simply to work around the environment lying to me/malloc() and then killing
me off when I use the information that I accepted in good faith!

This depends on the purpose. For some uses, a large
improvement in performance may be worth the dangers of lazy
allocation. Also, my (possibly faulty) recollection is that
the memory failure results in a signal, which can in theory
be caught and handled - not that I would want to do it.

Indeed not, given the restrictions on what signal handlers can and cannot
do. The thing is that if malloc tells me that there's a problem, then I
can do something about it as the program can only fail in a handful of
places and I can deal with those situations. The current situation is that
programs can fail at any point where a dynamically allocated object is
accessed due to a memory exhaustion problem.

What David actually said was that the consensus
was that it was "unreasonably expensive", i.e. that the
performance degradation was too large to justify the improvement
in safety. Of course, this is a value judgement, and
different individuals will come to different conclusions,
and the performance/safety trade-off will be different
in a nuclear reactor controller, a critical database server,
and a user's desktop PC, to name a few extreme cases.


Yes, you would have to ensure that the RAM+SWAP was
large enough for the worst case, but ensuring this should be
adequate. With overcommitment turned off, it may be the
case that you would have to supply so much RAM+swap that
the system became too expensive.

That is true. After spending many months trying to build a reliable digital
TV set top box using just the Linux kernel, busybox and our custom
applications, I would not care to repeat the experience. The overcommit
functionality appears to be an all-or-nothing approach - that is, either we
ended up with no page sharing *at all*, which obviously vastly inflates the
total amount of RAM required (inflating it way beyond the amount of RAM in
the device, in fact), or we had to put up with the box just crashing
"randomly" from time to time because the applications were not expecting to
be just terminated by the kernel at arbitrary times. There was no mass
storage device except for a read-only (for day-to-day running purposes)
EEPROM device - therefore no swap at all.
 
Chris Hills

E. Robert Tisdale said:
Chris said:
What C programming isn't embedded?
By which I mean pure C
as opposed to the pseudo C people do with C++ compilers.

When you think that anything with electricity applied to it
has a micro these days.
Autos have 50 odd each, washing machines, microwaves,
radios, mobile phones, smart cards missiles etc. etc.
That is a lot of devices that get programmed in C.

Yes, but they represent only a tiny fraction of all C programs.
Embedded [C] programmers represent
a tiny fraction of all C programmers.

based on what?

There are many thousands of embedded items with C in them. Where are all
these non embedded C programs?

/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills Staffs England /\/\/\/\/\
/\/\/ (e-mail address removed) www.phaedsys.org \/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
 
Chris Hills

Brian Inglis said:
Chris said:
What C programming isn't embedded?
By which I mean pure C
as opposed to the pseudo C people do with C++ compilers.

When you think that anything with electricity applied to it
has a micro these days.
Autos have 50 odd each, washing machines, microwaves,
radios, mobile phones, smart cards missiles etc. etc.
That is a lot of devices that get programmed in C.

Yes, but they represent only a tiny fraction of all C programs.
Embedded [C] programmers represent
a tiny fraction of all C programmers.

Where do you get your statistics on the number of embedded and
non-embedded C programmers?
I don't know either way, but am willing to accept that the number of C
programmers working on embedded devices may now be larger than the
number working on OSes, tools, DBMSes, *[iu]x projects in C nowadays,
as the commercial world seems to have switched from C to newer
languages and tools.

I happen to work where I have visibility of the tools people. They work
mostly in C++; the small OSes are written in C.

What I can't see is where the non-embedded C programs would be.

/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills Staffs England /\/\/\/\/\
/\/\/ (e-mail address removed) www.phaedsys.org \/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
 
Douglas A. Gwyn

Charles said:
But what others are saying is that many applications
are allocating much, much more memory than they need, "just
in case", and that lazy allocation by the OS results in
a large improvement in performance.

Poor application design is no excuse for poor OS design.
Even well-designed apps are being made to suffer erratic
behavior without being given any sensible way to cope
with it.

the memory failure results in a signal, which can in theory
be caught and handled - not that I would want to do it.

But the point at which that occurs during program
execution cannot be reliably controlled by the program.
For example, it might occur in the middle of an
incomplete data structure modification.

and the performance/safety trade-off will be different
in a nuclear reactor controller, a critical database server,
and a user's desktop PC, to name a few extreme cases.

Actually you have no way of accurately predicting the
consequences of a program malfunctioning on a desktop
PC. There is NO EXCUSE for INTENTIONALLY not providing
the "end user" with reliable facilities.

... The fact that some OS designers have
made decisions on these trade-offs that you disagree with
does not, in my opinion, mean that they are "amateurs".

Memory objects are really no different from file objects
in this regard. What would you think if OS designers
applied the same "logic" to opening a file? The open
would just stash the name and always report success.
Then when the program proceeds to try to use the file
data, oops, ABEND. How in the world would we write
reasonable programs that have to operate in such a
hostile environment? And why should we??
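
For contrast, the checked behaviour everyone already relies on for files
-- failure reported at the point of the request -- looks like this
(a minimal sketch; "data.txt" is an arbitrary example name):

    #include <stdio.h>

    int main(void)
    {
        /* Failure is reported here, at the request, not later when
           the data is first used -- exactly the property that an
           overcommitting malloc gives up. */
        FILE *fp = fopen("data.txt", "r");
        if (fp == NULL) {
            perror("data.txt");
            return 1;
        }
        /* ... read from fp ... */
        fclose(fp);
        return 0;
    }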

The general principle *has* to be, when a program
requests access to some controlled resource, it needs
to be told *before proceeding* if the resource is
available. In the case of VM, that is traditionally
done by using a "working-set" model, which has the
further advantages that (a) if an attempt is made to
add another excessively large program to the mix, that
attempt will *immediately* be flagged as a problem, at
an appropriate place to implement a reasonable recovery
policy; (b) if a process gets too big during execution,
it does not terminate *other*, well-behaved processes.
 
Richard Kettlewell

Chris Hills said:
E. Robert Tisdale said:
Chris Hills wrote:
What C programming isn't embedded?
By which I mean pure C as opposed to the pseudo C people do with
C++ compilers.

When you think that anything with electricity applied to it
has a micro these days.
Autos have 50 odd each, washing machines, microwaves,
radios, mobile phones, smart cards missiles etc. etc.
That is a lot of devices that get programmed in C.

Yes, but they represent only a tiny fraction of all C programs.
Embedded [C] programmers represent
a tiny fraction of all C programmers.

based on what?

There are many thousands of embedded items with C in them. Where are
all these non embedded C programs?

There are about 14,000 packages available in the OS I use. Some of
those packages won't contain programs (but rather a lot of them
contain multiple programs); some of them aren't written in C, but my
experience is that (when I look at the source) the majority are. That
should account for at least several thousand C programs.

I think you have to miss rather a lot to ask "what C programming isn't
embedded": for all I know it might be a small minority but it's
definitely there.
 
Arthur J. O'Dwyer

[dropped c.s.c xpost]

Chris Hills said:
What C programming isn't embedded?
By which I mean pure C as opposed to the pseudo C people do with
C++ compilers.
[...]
There are many thousands of embedded items with C in them. Where are
all these non embedded C programs?

There are about 14,000 packages available in the OS I use. Some of
those packages won't contain programs (but rather a lot of them
contain multiple programs); some of them aren't written in C, but my
experience is that (when I look at the source) the majority are. That
should account for at least several thousand C programs.

I think you have to miss rather a lot to ask "what C programming isn't
embedded": for all I know it might be a small minority but it's
definitely there.

Not even a small minority. /Maybe/ a large minority. I think
the truth is that while "99% of all C programs are embedded" is
false, it /is/ the case that a programmer could go his whole life
and yet 99% of the C programs he would deal with would be embedded.
Chris is apparently one of those programmers.

If you use *nix, then I would expect a good 50% of your utility
programs would be written in C. Things like 'cat' and 'grep'
and 'lex' and 'yacc' and 'gcc' and so on.

HTH,
-Arthur
 
Paul D. Smith

sb> The only possible workaround I can think of is to always memset
sb> the data to a non-zero value

Actually, all you have to do is write one byte to each page. That's
enough to force the system to give you all the memory you asked for, and
that's not much of an overhead.
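
A sketch of that cheaper trick, with the page size supplied by the caller
(touch_pages is just an illustrative name; obtaining the page size is the
next question):

    #include <stddef.h>

    /* Write one byte per page so the system must back the whole
       allocation with real memory -- far cheaper than memset-ing
       the entire region. */
    void touch_pages(void *p, size_t n, size_t pagesize)
    {
        volatile char *c = (volatile char *) p;
        size_t i;

        if (pagesize == 0)
            return;
        for (i = 0; i < n; i += pagesize)
            c[i] = 1;              /* non-zero, per the earlier point */
        if (n > 0)
            c[n - 1] = 1;          /* make sure the last page is hit */
    }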

--
 
Niklas Matthies

sb> The only possible workaround I can think of is to always memset
sb> the data to a non-zero value

Actually, all you have to do is write one byte to each page. That's
enough to force the system to give you all the memory you asked for, and
that's not much of an overhead.

How does the program know the page size?

-- Niklas Matthies
 
Dragan Cvetkovic

Niklas Matthies said:
How does the program know the page size?


On POSIX systems one can use getpagesize() or sysconf(_SC_PAGE_SIZE) or
sysconf(_SC_PAGESIZE). No idea about non-POSIX systems (or whether that
concept even makes sense on all of them).

Or was that a rhetorical question?
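
Either way, for the record the lookup is a one-liner on any POSIX system
(a sketch; standard C itself has no notion of a page):

    #include <stdio.h>
    #include <unistd.h>     /* sysconf -- POSIX, not standard C */

    int main(void)
    {
        long pagesize = sysconf(_SC_PAGESIZE);
        if (pagesize == -1) {
            perror("sysconf");
            return 1;
        }
        printf("page size: %ld bytes\n", pagesize);
        return 0;
    }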

Dragan

--
Dragan Cvetkovic,

"To be or not to be is true." -- G. Boole
"No it isn't." -- L. E. J. Brouwer

!!! Sender/From address is bogus. Use reply-to one !!!
 
Paul E. Bennett

Pete said:
Reading this thread, I was kind of curious as to the "state of play" of
the major languages - a web search revealed quite a useful site ...

http://www.tiobe.com/tpci.htm

... which shows that there hasn't been a *huge* change in quite a while.
Although I was kind of surprised to see the fairly recent dip in Java.

Considering some of the other oddball languages they list, I am surprised
they haven't listed Forth (which, by their published calculation method,
would come above Prolog).

--
********************************************************************
Paul E. Bennett ....................<email://peb@a...>
Forth based HIDECS Consultancy .....<http://www.amleth.demon.co.uk/>
Mob: +44 (0)7811-639972 .........NOW AVAILABLE:- HIDECS COURSE......
Tel: +44 (0)1235-811095 .... see http://www.feabhas.com for details.
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************
 
