C99 compiler access


Chris Hills

Douglas A. Gwyn said:
Sorry, I misread it in context as saying that some
compiler vendors were encouraging that attitude.

Some may imply that... See the new proposed secure library
Yes, that kind of problem has existed "forever".
It is a matter of education, not of standards.


Until they are educated about the value of
standards *and why as programmers they should
conform to them as much as possible*, they
aren't going to appreciate C99's standardizing
additional features that they have a good use
for.

So how do we convince them? Most still talk of K&R (usually the first
edition) or "ANSI-C" as some vague sort of thing; they have no idea what
ISO-C is.

 

CBFalconer

Richard said:
Richard Tobin wrote:
.... snip ...

Even so, malloc() is safer, but only because you can sometimes
turn memory overcommitting off; it would be hard to do that with
stack overflows.

Smarter systems, using segments rather than the primitive linear
addressing mode :), simply enlarge the stack segment on overflow
and carry on. This can eventually fail for lack of memory,
virtual or real.
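
To make the malloc() side of that concrete, here is a minimal sketch;
the buffer size is invented, and whether the NULL check actually catches
exhaustion depends on whether the system does eager or lazy allocation,
as discussed above:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        size_t n = 1u << 20;        /* 1 MiB; an arbitrary example size */
        char *buf = malloc(n);

        if (buf == NULL) {          /* with eager allocation, exhaustion
                                       shows up here as a null pointer  */
            fprintf(stderr, "out of memory\n");
            return EXIT_FAILURE;
        }

        memset(buf, 'x', n);        /* with overcommit, a failure would
                                       only surface here, as a signal or
                                       a killed process, not as NULL    */
        free(buf);
        return 0;
    }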
 

Keith Thompson

Chris Hills said:
It is freely available from many places... I did not say it was free.
PCs are freely available, and they are not free.

It's easy to interpret the phrase "freely available" as implying "free
of cost", especially since there's been a great deal of controversy
over whether the standard *should* be free of cost, there are draft
versions that are both freely available and free of cost, and your
next sentence referred to test suites costing a lot of money. (Are
the test suites "freely available" to anyone who has enough money, or
are there other restrictions?)

Thank you for the clarification, but I'm sure I'm not the only one who
interpreted your words the same way.
PLEASE can we not go round this loop again!

Ok.
 

Hallvard B Furuseth

CBFalconer said:
I wasn't thinking about usefulness, just about things that allow
creation of more efficient code in some environment. If there is
another way to express something that allows the efficient code,
and remains standard, I see no point in the extension.

I'm not sure if you are only addressing the last paragraph or all three,
but: when an improvement (an optimization or whatever) gets too cumbersome,
it will often simply not be done. For example, if one had to write a
program to generate the program, or write 100 macros instead of 10.
 

Hallvard B Furuseth

Richard said:
Labels as values are a maintenance nightmare, and probably not very
optimisable, either - poof goes efficiency.

They should certainly be used sparingly, but if you have to do a
computed goto anyway, then I don't see how e.g. a switch full of
'case foo: goto bar;' is more optimisable.
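
For concreteness, a minimal sketch of the two forms being compared; the
first uses GCC's labels-as-values extension, and the state names and the
run/run_switch functions are invented for illustration:

    /* GCC extension: labels as values ("computed goto"); op must be 0 or 1 */
    int run(int op)
    {
        static void *dispatch[] = { &&state_a, &&state_b };
        goto *dispatch[op];
    state_a:
        return 1;
    state_b:
        return 2;
    }

    /* The standard C equivalent: a switch full of plain gotos */
    int run_switch(int op)
    {
        switch (op) {
        case 0: goto state_a;
        case 1: goto state_b;
        default: return 0;
        }
    state_a:
        return 1;
    state_b:
        return 2;
    }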
I'm glad C99 doesn't include
this misfeature. C is not Sinclair Basic, after all.

And inline _is_ in C99, so what's the problem?

It's not a problem, quite the contrary. I mentioned it because
the article I was replying to said there were very few 'efficiency'
extensions to gcc. This is one.
This is the only feature mentioned in this thread which C99 doesn't
have, and which I think could be useful.

I don't see what's wrong with 'pure' etc below, nor with the features
you snipped below my machine-specific examples.
And they're generally very unportable.

Unportable? If you mean they are not standardized, then yes of course
they are unportable. If you mean they would be unportable even if
standardized, I don't know what you mean. It's simply information to
the compiler about the function, which the compiler may make use of.
Machine-specific. Exactly. Nice for a compiler, _not_ good for a
Standard.

Yes. I had the impression that this subthread was about extensions in
GCC in general, not only the ones which might be put in the standard.
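
For anyone who has not met them, the 'pure'-style annotations under
discussion look roughly like this in GCC; the two functions below are
invented for illustration:

    static const int table[256] = { 0 };    /* contents irrelevant here */

    /* 'pure': no side effects; the result may depend on the arguments
       and on global memory, so duplicate calls can be merged. */
    __attribute__((pure))
    static int table_lookup(int key)
    {
        return table[key & 0xff];
    }

    /* 'const': stricter still; the result depends on the argument alone. */
    __attribute__((const))
    static int add_one(int x)
    {
        return x + 1;
    }

    /* Given the annotations, a compiler may evaluate table_lookup(k)
       once in   if (table_lookup(k) > 0 && table_lookup(k) < 9)
       instead of calling it twice. */

This is purely information handed to the compiler; a compiler that
ignores the annotation still produces a correct program.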
 

Brian Inglis

Some may imply that... See the new proposed secure library

Links? References?

Additions of niche features to the standard that will not be backed
by purchasing compilers/upgrades are pointless. None of GNU, MS, or
the embedded vendors seem to be in any hurry to offer C99 compliance.
So how do we convince them? Most still talk of K&R (usually the first
edition) or "ANSI-C" as some vague sort of thing; they have no idea what
ISO-C is.

Few are aware that there was a C99 standard, as they have not seen
implementations offer it: the committees and NBs have to come up with
compelling reasons to upgrade and get them mentioned on the web.
 

Richard Bos

CBFalconer said:
Smarter systems, using segments rather than the primitive linear
addressing mode :), simply enlarge the stack segment on overflow
and carry on. This can eventually fail for lack of memory,
virtual or real.

Of course. And at that point, on a correctly[1] configured system,
malloc() will tell you that this happened, while an overflowing stack
cannot do so, no matter how you've set up the system.

Richard

[1] For the circumstances; bugger advocacy
 

Douglas A. Gwyn

Richard said:
Of course. And at that point, on a correctly[1] configured system,
malloc() will tell you that this happened, while an overflowing stack
cannot do so, no matter how you've set up the system.

Not only that, it isn't very hard (in such a hostile
environment) for malloc to access all the supposedly-
allocated region reported by the OS function in order
to determine whether it is indeed accessible. The only
problem is in catching whatever form of exception might
result (e.g. catch SIGSEGV) and recovering gracefully,
after which malloc can return a null pointer.
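
A rough sketch of that kind of probing, using POSIX sigsetjmp/siglongjmp
to recover from the fault; the page size, the function names, and the
assumption that the failure arrives as a catchable SIGSEGV rather than
an outright kill are all just illustrative assumptions:

    #include <setjmp.h>
    #include <signal.h>
    #include <stdlib.h>

    static sigjmp_buf probe_env;

    static void on_segv(int sig)
    {
        (void)sig;
        siglongjmp(probe_env, 1);         /* unwind back into probe_region */
    }

    /* Touch every page of the region; return 1 if it is really there. */
    static int probe_region(volatile char *p, size_t len)
    {
        const size_t page = 4096;         /* assumed page size */
        void (*old)(int) = signal(SIGSEGV, on_segv);
        int ok = 0;

        if (sigsetjmp(probe_env, 1) == 0) {
            for (size_t i = 0; i < len; i += page)
                p[i] = p[i];              /* faults here if the page was
                                             promised but never backed   */
            ok = 1;
        }
        signal(SIGSEGV, old);
        return ok;
    }

    /* A malloc wrapper in the spirit of the suggestion above. */
    void *checked_malloc(size_t len)
    {
        char *p = malloc(len);
        if (p != NULL && len != 0 && !probe_region(p, len)) {
            free(p);
            return NULL;
        }
        return p;
    }

As the follow-ups point out, doing this defeats the point of lazy
allocation, and on some systems the failure shows up as a kill rather
than a catchable signal.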
 

Douglas A. Gwyn

Michael said:
Memory access
can fail for reasons beyond the implementation's control.

Normally that would only occur due to hardware or system
failure, not by intentional design.
Hell, I'd like it if all the platforms I had to support were completely
free of bugs. Barring that, I can't guarantee "reliable execution of
carefully written programs" anyway. They aren't. Shall I hold off on
further development until I can "get a better platform"?

No, but you should certainly complain to providers of
broken systems about their bugs and other infelicities.
While you're at it, also complain to the malloc
provider for not properly dealing with known system
characteristics.
 

Douglas A. Gwyn

Brian said:
Links? References?

Apparently Chris is referring to a proposal for WG14 to
work on more "secure" (mainly anti-buffer-overflow)
alternatives to existing C standard library functions.
Microsoft delivered a presentation using examples that
they have already implemented as part of their extended
C library. See
http://www.open-std.org/jtc1/sc22/wg14/www/docs/n997.pdf
for an associated official document.
Some people have claimed that this is a proposal to lock
the C standard into a proprietary vendor product, but
that is patently false, as can be seen by reading the
proposal. Randy Meyers of WG14 is currently spearheading
the effort to prepare an actual TR in this area.
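
The flavour of the thing, for those who have not read the document:
gets() cannot be told how big its buffer is, which is the perennial
complaint, while the proposed functions take the destination size as an
explicit argument. A minimal sketch follows; the '_s' calls in the
comment are a paraphrase of the proposal's style, not a quotation of its
exact signatures:

    #include <stdio.h>
    #include <string.h>

    void read_line(void)
    {
        char line[64];

        /* The classic problem: gets(line) has no way to know that 'line'
           holds only 64 bytes, so longer input overruns the buffer.    */

        /* A standard C alternative that does know the size: */
        if (fgets(line, sizeof line, stdin) != NULL)
            line[strcspn(line, "\n")] = '\0';    /* strip the newline */

        /* The "secure" proposal passes the size explicitly, roughly:
               gets_s(line, sizeof line);
               strcpy_s(dst, sizeof dst, src);
           (see the n997 document above for the real interfaces)        */
    }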
 

Dan Pop

Douglas A. Gwyn said:
Richard said:
Of course. And at that point, on a correctly[1] configured system,
malloc() will tell you that this happened, while an overflowing stack
cannot do so, no matter how you've set up the system.

Not only that, it isn't very hard (in such a hostile
environment) for malloc to access all the supposedly-
allocated region reported by the OS function in order
to determine whether it is indeed accessible. The only
problem is in catching whatever form of exception might
result (e.g. catch SIGSEGV) and recovering gracefully,
after which malloc can return a null pointer.

That would defeat one of the greatest advantages of lazy swap allocation:
sparse arrays implemented as ordinary arrays.

Lazy swap allocation was designed with a purpose in mind, and there is no
point in the C implementation defeating it. People who don't want it
can either disable it (on certain platforms this can be done by the user,
on a per-process basis; on others it requires a reboot) or choose a
platform supporting eager swap allocation only.

Lazy swap allocation doesn't affect the implementation's conformance any
more than the possibility of killing any process before normal termination
does, and I have yet to see people complaining about that feature of most
operating systems.

Dan
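
To make the sparse-array point concrete, a small sketch (the size is
invented): with lazy swap allocation only the few pages actually touched
ever consume memory, whereas with eager allocation the whole gigabyte
has to be reserved up front or the malloc() fails:

    #include <stdlib.h>

    #define N (1024u * 1024u * 1024u)        /* a deliberately huge "array" */

    int main(void)
    {
        unsigned char *sparse = malloc(N);   /* 1 GiB of address space */
        if (sparse == NULL)
            return EXIT_FAILURE;

        /* Only a handful of entries are ever used, so with lazy swap
           allocation only a few pages are ever committed: */
        sparse[0]     = 1;
        sparse[N / 2] = 2;
        sparse[N - 1] = 3;

        free(sparse);
        return 0;
    }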
 

Chris Hills

Brian Inglis said:
Links? References?

MS are proposing a new secure library for C... all 2000 of its functions.
The document should be on the standards web site.

It is currently being discussed by the NBs and WG14.
Additions of niche features to the standard that will not be backed
by purchasing compilers/upgrades are pointless.

I agree.
Few are aware that there was a C99 standard, as they have not seen
implementations offer it: the committees and NBs have to come up with
compelling reasons to upgrade and get them mentioned on the web.


On the other hand, why haven't the compiler implementors included the new
parts of C99 as they have gone along? People are saying that it is not
difficult to do. So you would have thought that, as most of the compilers
have had new versions in the last 5 years, there would be a lot more C99
compilers by now.

At the moment there is only one produced by a mainstream commercial
company.


 

Douglas A. Gwyn

Chris said:
MS are proposing a new secure library for C... all 2000 of its functions.

No. They said that their *full* extended C library contains
that many functions. They're certainly not proposing that
the C standard adopt all of them. We are certainly looking
at their interfaces and specs for relevant "secure" versions
of standard functions, since we are interested in pursuing
this area anyway. (In fact we still hear complaints about
gets().)
On the other hand, why haven't the compiler implementors included the new
parts of C99 as they have gone along?

The compilers I use have certainly been incorporating more new
C99 features with each release. I think the one on my main
hosted platform now claims full conformance.
 

Thomas Pornin

According to Dan Pop:
Lazy swap allocation doesn't affect the implementation's conformance any
more than the possibility to kill any process before normal termination
and I have yet to see people complaining about this feature of most
operating systems.

My confused neural system currently spews out the following bit of
information, which I transcribe here with absolutely no warranty of
exactness or suitability for the purpose of the present discussion:

Some years ago, ftp.cdrom.com was a big downloading site. At one point,
they used Alpha hardware with OSF/1. They could handle about 700
simultaneous connected clients. It turned out that by activating lazy
swap allocation, that number rose to 3000. The ftp server software,
combined with the libc malloc() implementation, was allocating much more
memory than was actually needed, thus arbitrarily limiting the number
of concurrent instances unless some sort of overcommit was used.
With overcommit, the server was nonetheless quite stable, so the overall
result was a net gain.


--Thomas Pornin
 

Fergus Henderson

Chris Hills said:
most of the C programming is in the embedded world

Do you have any evidence for that claim?

Of course there are a lot of embedded _processors_ around,
but it would be a vast and shaky leap to go from "most processors"
to "most programming".
 

Dan Pop

Thomas Pornin said:
My confused neural system currently spews out the following bit of
information, which I transcribe here with absolutely no warranty of
exactness or suitability for the purpose of the present discussion:

Some years ago, ftp.cdrom.com was a big downloading site. At one point,
they used Alpha hardware with OSF/1. They could handle about 700
simultaneous connected clients. It turned out that by activating lazy
swap allocation, that number rose to 3000. The ftp server software,
combined with the libc malloc() implementation, was allocating much more
memory than was actually needed, thus arbitrarily limiting the number
of concurrent instances unless some sort of overcommit was used.
With overcommit, the server was nonetheless quite stable, so the overall
result was a net gain.

Exactly 10 years ago, I got a low-end Alpha DEC OSF/1 system as
my desktop machine. The thing had 64 MB of RAM and 128 MB of swap,
which was quite reasonable for a Unix workstation at the time (Linux
was perfectly happy with a lot less). Without lazy swap allocation,
by the time my X session had started, most of the swap space was
already allocated. After starting a few X clients, the system was
out of virtual memory resources.

Since increasing the size of the swap partition was not very practical,
I tried switching to lazy swap allocation. The effects were impressive:
at the end of the X session startup procedure, not a single page of swap
space was allocated. I could start as many X clients as I needed without
using more than half of my swap space. The only time I ran into
trouble was when a netscape process went mad...

Sad conclusion: in the presence of so much software that overallocates
memory, lazy swap allocation is a must for people who cannot afford
to waste virtual memory resources. With the cheap disks of today, this
is far less of an issue than it was back then, however.

Dan
 

Chris Hills

Fergus said:
Do you have any evidence for that claim?

Of course there are a lot of embedded _processors_ around,
but it would be a vast and shaky leap to go from "most processors"
to "most programming".

What C programming isn't embedded? By which I mean pure C, as opposed to
the pseudo-C people do with C++ compilers.

When you think about it, anything with electricity applied to it has a
micro these days: autos have 50-odd each, washing machines, microwaves,
radios, mobile phones, smart cards, missiles, etc. That is a lot of
devices that get programmed in C.

 

Chris Hills

Douglas A. Gwyn said:
No. They said that their *full* extended C library contains
that many functions. They're certainly not proposing that
the C standard adopt all of them. We are certainly looking
at their interfaces and specs for relevant "secure" versions
of standard functions, since we are interested in pursuing
this area anyway. (In fact we still hear complaints about
gets().)


The compilers I use have certainly been incorporating more new
C99 features with each release. I think the one on my main
hosted platform now claims full conformance.

What is it?


 

E. Robert Tisdale

Chris said:
What C programming isn't embedded?
By which I mean pure C
as opposed to the pseudo C people do with C++ compilers.

When you think that anything with electricity applied to it
has a micro these days.
Autos have 50 odd each, washing machines, microwaves,
radios, mobile phones, smart cards missiles etc. etc.
That is a lot of devices that get programmed in C.

Yes, but they represent only a tiny fraction of all C programs.
Embedded [C] programmers represent
a tiny fraction of all C programmers.
 

David A. Holland

> Sad conclusion: in the presence of so much software that overallocates
> memory, lazy swap allocation is a must for people who cannot afford
> to waste virtual memory resources. With the cheap disks of today, this
> is far less of an issue than it was back then, however.

It's not just explicit overallocation as such. There's also a lot of
implicit swap allocation.

If you want to really turn overcommit off, you have to reserve swap
not just for malloc but any time you do a virtual memory operation
that could lead to copying a page later: forking, for instance, or
mapping a file into memory. Since most Unix systems nowadays have
shared libraries that are loaded with mmap, and the libraries continue
to bloat out, you can end up with a *lot* of pointlessly reserved swap
space even by current standards.

The general (but, of course, never universal) consensus among OS
implementors is that fully conservative swap allocation is
unreasonably expensive, and that changing the behavior of just malloc
(so processes can still randomly die on other blocks of memory if you
run out of swap) isn't a particularly good idea. It's also not clear
that, if the system *does* run out of swap, having malloc start
failing is the right response either. This issue was discussed to
death on the Linux kernel mailing list a few years back under the
general heading of "OOM" (out of memory), if anyone's interested in one
particular take on the gory details.

What this means from a C perspective is that swap overcommit isn't
likely to go away anytime soon. This does not mean, however, that
malloc never returns NULL...
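
For example, even on a system that overcommits, a loop like the
following (the chunk size is arbitrary, and the leak is deliberate)
still sees NULL once the address space or an administrative limit such
as RLIMIT_AS runs out, without a single page having been touched:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t chunk = 64u * 1024u * 1024u;   /* 64 MiB per request */
        size_t total = 0;
        void *p;

        /* Deliberately never freed: we just want to see where malloc
           stops saying yes. */
        while ((p = malloc(chunk)) != NULL)
            total += chunk;

        printf("malloc promised %zu MiB before returning NULL\n",
               total / (1024u * 1024u));
        return 0;
    }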
 
