The illusion of "portability"


Default User

jacob said:
In this group there is a bunch of people that call themselves
'regulars' that insist on something called "portability".


Please stop trolling the newsgroup.



Brian
 

Old Wolf

jacob said:
In this group there is a bunch of people that call themselves 'regulars'
that insist on something called "portability".

Funny how all the "portability is a myth" people write
all their code on x86.
 

Old Wolf

Ben Pfaff wrote (re. error checking of malloc):
Why not use a wrapper function that will always do the right
thing?

That reminds me of the course where I learned C. The tutor advised:

#define MALLOC(type, n) ((type *)malloc((n) * sizeof(type)))
 

jaysome

One reasonable option may be to flush stdout before exiting the
program, then call ferror to check whether there was an error.
If there was, terminate the program with an error (after
attempting to report it to stderr).

Did you have a test case for this code? If so, did you ever achieve
decision coverage in the case where ferror() gave an error? That would
be remarkable if you did, and I would like to hear how you did it. The
only way I know how to do it is with preprocessor macros, and even
then it's not the greatest.

Best regards
 

Richard Heathfield

jacob navia said:
Richard Heathfield wrote:

True, C99 didn't really advance the language that much, but it has some
good points.

Nothing I'd bother to include in a letter home, but one or two tiny good
points. Nothing there worth breaking your program's portability over.
Anyway, if we are going to stick to standard C, let's agree
that standard C is Standard C as defined by the standards committee.

K&R C is the original C language, and it's topical here in clc.
C90 is the current de facto standard, and is topical here in clc.
C99 is the current de jure standard, and is topical here in clc.

We discuss them all here. We didn't abandon K&R C just because of C90, and
we're not going to abandon C90 just because of C99.
// comments? More sugar. VLAs? More sugar.

And what do you have against sugar?
You drink your coffee without it?

My point, as I'm sure you know, is that these are trivial changes. If I'm
going to risk the portability of my code, I want a HUGE
functionality/convenience payoff in return. C++ just about gives me that,
which is why I sometimes do use C++ - but C99 certainly does not.
I mean, C is just syntactic sugar for assembly language. Why
not program in assembly then?

For which assembly language is C syntactic sugar? There's more to computers
than "the computer sitting on Mr Navia's desk".
Mixed declarations are progress in the sense that they put the
declaration nearer the use of the variable, which makes reading
the code much easier, especially in big functions.

Fine. When C99 is as widespread as C90 currently is, I'll be glad to take
advantage of them.
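
For what it's worth, a tiny illustration of what is being argued about
(my own example, assuming a C99 compiler for the second form):

#include <stdio.h>

/* C90: everything declared at the top of the block. */
int sum_c90(const int *data, int len)
{
    int i;
    int total = 0;

    for (i = 0; i < len; i++)
        total += data[i];
    return total;
}

/* C99: the declaration may appear where the value is first needed. */
int sum_c99(const int *data, int len)
{
    int total = 0;

    for (int i = 0; i < len; i++)
        total += data[i];
    return total;
}

int main(void)
{
    int v[] = { 1, 2, 3 };

    printf("%d %d\n", sum_c90(v, 3), sum_c99(v, 3));
    return 0;
}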
True, big functions are surely not a bright idea, but they happen :)

I accept that this is not a revolution, or a really big progress in C
but it is a small step, what is not bad.

It's not bad provided it works. It's foolish to adopt a feature just because
you can, if having adopted that feature you then find that your code won't
compile on some of your target platforms.
VLAs are a more substantial step, since they allow you to allocate
precisely the memory the program needs without having to over-allocate
or risk under-allocating arrays.

I see nothing they give that can't be got from a quick malloc.
Under C89 you have to either:
1) allocate memory with malloc

Works for me.
2) Decide a maximum size and declare a local array of that size.

Not so good.
Neither solution is especially good. The first one implies using malloc
with all the associated hassle,

What hassle? You ask for memory, you get a pointer to it or a NULL, end of
problem. And you get to find out whether it worked, rather than have your
program crash out on you if N is way too large for any reason.
and the second risks not allocating enough
memory. C99 allows you to allocate precisely what you need and no more.

So does malloc.
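
For what it's worth, here is a minimal C90 sketch of that alternative
(the function and names are purely illustrative, not anyone's actual
code): you ask malloc for exactly n elements, you find out whether the
request succeeded, and you free the memory when you are done.

#include <stdio.h>
#include <stdlib.h>

/* Illustrative only: allocate exactly n doubles, check the result,
   do some work, then release the memory.  Strictly C90. */
int process(size_t n)
{
    double *buf;
    size_t i;

    buf = malloc(n * sizeof *buf);
    if (buf == NULL) {
        fprintf(stderr, "process: out of memory for %lu elements\n",
                (unsigned long)n);
        return -1;          /* the caller decides what happens next */
    }

    for (i = 0; i < n; i++)
        buf[i] = (double)i;

    free(buf);
    return 0;
}

int main(void)
{
    return process(1000) == 0 ? 0 : EXIT_FAILURE;
}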

They do come in handy, but again, they are not such huge progress.

For once, we agree.
Maybe, maybe not, I have no data concerning this. In any case it
promotes portability (yes, I am not that stupid!

How stupid are you, exactly?
:) since it
defines a common interface for many math functions. Besides, the control
you get from the abstracted FPU is very finely tuned.

You can portably set the rounding mode, for instance, and many other
things. In this sense the math library is quite a big step forward in C99.

Only for people for whom it's useful, which is almost nobody.
That is progress too, but (I do not know why) we never discuss these
things in this group.

Because There Are Other Newsgroups For Discussing Them. If you learn nothing
else from this reply, learn this, at least: comp.lang.c is not a dumping
ground for anything Jacob Navia finds interesting. It is a newsgroup for
the discussion of the C language. Other stuff is also very interesting.
Other stuff is also very useful. Other stuff is also great fun. But other
stuff is discussed in Other Newsgroups. In this newsgroup, we discuss C. In
other newsgroups, we discuss those other things.

Maybe what frustrates me about all this talk of "Stay in C89, C99
is not portable" is that it has taken me years of effort to implement
C99 (and not all of it)

Not all of it. So you don't even have a conforming compiler, and yet you're
suggesting we move to C99? Dream on.
and that not even in this group, where we should
promote standard C, is C99 accepted for what it is: the current standard.

It's the current de jure standard. It will never become the de facto
standard until it meets the same needs that C90 currently meets.
I mean, each one of us has a picture of what C "should be". But if we
are going to reach *some* kind of consensus, it must be the published
standard of the language, whether we like it or not.

But it isn't a consensus. If it were a consensus, it would be widely
implemented. And it isn't, so it isn't.
For instance, I find the fact that main() returns zero even if the
programmer doesn't specify it an abomination. But I implemented it
because it is the standard, even if I do not like it at all.

Fine, but bright people will still explicitly return 0 from main - not just
because omitting it is indeed an abomination, but also to keep their
programs portable to all C90 implementations as well as all C99
implementations.
 

Richard Heathfield

Old Wolf said:
Ben Pfaff wrote (re. error checking of malloc):

That reminds me of the course where I learned C. The tutor advised:

#define MALLOC(type, n) ((type *)malloc((n) * sizeof(type)))

How pointless.
 

Richard Heathfield

Ben Pfaff said:
Why not use a wrapper function that will always do the right
thing?

Because "the right thing" depends on the situation. Simply aborting the
program is a student's way out. Unfortunately, the only worse "solution"
than aborting the program on malloc failure - i.e. not bothering to test at
all - is also the most common "solution", it seems.
 

Flash Gordon

Keith said:
I think you meant to drop the first "not" in that last sentence;
"people not being told" should be "people being told".

I'll write a program to write out 1000 times, "Do not post to comp.lang.c
at almost 2AM."

#include <stdio.h>

int main(void)
{
    int i;

    for (i = 0; i < 1000; i++)
        puts("Do not post to comp.lang.c at almost 2AM.");

    return 0;
}
 

Chris Torek

... What did C99 give us? Mixed declarations? Sugar. // comments?
More sugar.

True, although sometimes a little syntactic sugar helps with
understanding.
VLAs? More sugar.

Here I have to disagree: when you need VLAs, you really need them.
In particular, they allow you to write matrix algebra of the sort
that Fortran has had for years:

void some_mat_op(int m, int n, double mat[m][n]) {
    ...
}

In C89 you have to sneak around the standard to do this at all
(assuming you are using actual "array of array"s rather than
"array of pointers to vectors" or "vector of pointers to vector
of pointers", anyway).
Compound literals - sure, they might come in handy one day.

They missed a bet with them though: a compound literal outside a
function has static duration, but any compound literal within a
function has automatic duration, with no way to give it static
duration. Hence:

int *f(void) {
    static int a[] = { 1, 2, 3, 42 };
    return a;
}

is OK, but you cannot get rid of the name:

int *f(void) {
    return (int []){ 1, 2, 3, 42 };
}

is not, because the anonymous array object vanishes. (You can,
however, make the array "const", provided the function returns
"const int *".) Clearly C99 should have included:

return (static int []){ 1, 2, 3, 42 };

(and the corresponding version with const). :)

(The syntax for compound literals is rather ugly, although I am
not sure how one could specify them unambiguously otherwise.)
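
As an aside, a small sketch (mine, not from the post above) of where
compound literals do come in handy in C99: passing a throwaway
aggregate to a function without inventing a name for it.

#include <stdio.h>

struct point { int x, y; };

static void print_point(const struct point *p)
{
    printf("(%d, %d)\n", p->x, p->y);
}

static int sum(const int *v, int n)
{
    int total = 0;
    for (int i = 0; i < n; i++)
        total += v[i];
    return total;
}

int main(void)
{
    /* Both arguments below are anonymous objects with automatic
       duration; they live until the end of the enclosing block. */
    print_point(&(struct point){ 3, 4 });
    printf("%d\n", sum((int []){ 1, 2, 3, 42 }, 4));
    return 0;
}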
A colossal math library? Hardly anyone needs it ...

Although again, those who want to write their Fortran code in C
will find it handy. :)
 

Chris Torek

(e-mail address removed) wrote:

It depends on a lot of things; sometimes fixed-size arrays really
are more efficient, and sometimes not. In particular, given a
code sequence like:

void f(int n) {
    int i, a[N];
    ... loop i over n items, working with a ...
}

(where n <= N of course), the various a[i]s may be at an "easily
addressed" location like -100(sp)[regI]. Converting this to:

void f(int n) {
    int i, a[n];
    ... loop i over n items ...
}

requires replacing the "pseudo-constant" -100(sp) with a value.
If the function happens to be large enough to exert enough register
pressure to spill important loop variables, this can make a
significant difference. Obviously this depends on (at least) the
target architecture and the number of "live" loop variables.

I implemented this by making the array a pointer that gets its value
automagically when the function starts, by subtracting from the stack
pointer. Essentially

int fn(int n)
{
    int tab[n];
    ...
}

becomes

int fn(int n)
{
    int *tab = alloca(n*sizeof(int));
}

The access is done like any other int *...

As someone else noted, this is not sufficient, because "sizeof tab"
must produce n*sizeof(int), not 1*sizeof(int *).
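
A small illustration of that sizeof point (my sketch, assuming a C99
compiler): for a true VLA, sizeof is evaluated at run time and yields
the size of the whole array, which a plain int * cannot reproduce.

#include <stdio.h>

void demo(int n)
{
    int tab[n];        /* VLA: sizeof is computed at run time */
    int *p = tab;

    printf("sizeof tab = %zu\n", sizeof tab);  /* n * sizeof(int) */
    printf("sizeof p   = %zu\n", sizeof p);    /* just a pointer  */
}

int main(void)
{
    demo(10);
    return 0;
}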

Moreover, one *should* not do this quite so naively. Consider the
following code fragments:

int f(int, int);

void foo(int n) {
    for (int i = 0; i < n; i++) {
        int siz = f(i, n); // usually n, sometimes not
        int arr[siz];

        for (int j = 0; j < siz; j++) {
            ...
        }
    }
}

int main(void) {
    foo(100000);
    ...
}

The inner (nested) loop runs about 10 billion (10 thousand million,
if you use non-US "billion") times, so this will take a while;
meanwhile each trip through the outer loop allocates roughly 100000
times "sizeof(int)" array elements. If sizeof(int) is 4, each array
is approximately 400 kilobytes, which should easily fit on most
desktop computers.

If you naively re-"alloca" the array every trip through the loop,
however, you will need 100000 of these ~400 kB regions, which will
(at about 4 * 10^10 bytes in total) consume a bit over 37 gigabytes.
This is a little bit likely to run out.

The array should be destroyed at the bottom of the loop before
being recreated (with the new size "siz") at the top. (VLAs are
thus very different from alloca(): alloca()ed space is generally
*not* destroyed simply because the block in which it was allocated
has terminated; alloca()d space has "function duration", as it
were.)

Note that on machines with virtual frame pointers (MIPS, x86 with
gcc and -fomit-frame-pointer), a function that uses a VLA forces
the compiler to allocate an actual frame pointer within that
function. VLAs thus also interact with longjmp().
 

Chris Torek

[On testing for "printf" failure by using
fflush(stdout);
if (ferror(stdout)) ...
at the end of the program]

Did you have a test case for this code? If so, did you ever achieve
decision coverage in the case where ferror() gave an error? That would
be remarkable if you did, and I would like to hear how you did it.

It is not *quite* the same, but the principle is identical: in my
kernel-configuration generation program for BSD/OS (a variant is
found in NetBSD), I check the results of writing to the various
files using code like the above (I get to fclose() them though).
I did in fact test it once, by accident: I had a nearly-full disk
partition, and running "config new-kernel" filled it up so that
the generated configuration was incomplete. The program detected
and reported the error.

Redirecting standard output to a nearly-full disk (or limited-size
file, or using whatever other system resource limits are available)
is probably the easiest way to test this kind of code.
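
For reference, a minimal sketch of the check under discussion (my
phrasing of it, not anyone's exact code from this thread):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int i;

    for (i = 0; i < 1000; i++)
        printf("line %d\n", i);

    /* Push out any buffered output, then ask whether anything failed. */
    if (fflush(stdout) == EOF || ferror(stdout)) {
        fprintf(stderr, "error writing standard output\n");
        return EXIT_FAILURE;
    }
    return 0;
}

Redirecting its output to a nearly full filesystem (or, as Ben mentions
elsewhere in the thread, to something system-specific like /dev/full)
is the usual way to see the error branch actually taken.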
 

Chris Torek

... you cannot get rid of the name:

int *f(void) {
    return (int []){ 1, 2, 3, 42 };
}

is not, because the anonymous array object vanishes. (You can,
however, make the array "const", provided the function returns
"const int *".)

I realize I phrased this badly (so that the statement as written
is wrong). What I meant is that you can make compound literals
"const", e.g.:

void f(void) {
    int *ip = (int []){ 1, 2, 3, 42 };
    ...
}

or:

void f(void) {
    const int *ip = (const int []){ 1, 2, 3, 42 };
    ...
}

If you make the compound literal "const", the compiler can generate
a single copy, even if it appears multiple times, although no compiler
is forced to work hard to optimize like this:

const int *p1 = (const int[]){ 1, 2 };
const int *p2 = (const int[]){ 1, 2 };

if (p1 == p2)
    puts("p1==p2; this is allowed");
else
    puts("p1!=p2; this is allowed too");

(this is identical in principle to string-literal sharing).

(It seems to me inconsistent that the objects produced by string
literals always have static duration, while those produced by
compound literals have automatic duration whenever possible.)
 

Richard Bos

I don't understand why anyone would complain against standards.

Because weaning programmers off the Standard means you can shift more of
your own embrace-and-extend crap. As simple as that, IYAM.

Richard
 

Andrew Poelstra

Ben Pfaff wrote (re. error checking of malloc):

That reminds me of the course where I learned C. The tutor advised:

#define MALLOC(type, n) ((type *)malloc((n) * sizeof(type)))

It would've been interesting had any of the clc regs been there while
that class was being taught.
 

Clever Monkey

jacob said:
In this group there is a bunch of people that call themselves 'regulars'
that insist on something called "portability".

Portability for them means the least common denominator.
This is an unlikely definition. Try something like "most conforming to
a standard" or "least depending on undefined behaviour". This is the
mantra I hear over and over in c.l.c.
Write your code so that it will compile in all old and broken
compilers, preferably in such a fashion that it can be moved with no
effort from the embedded system in the coffee machine to the 64-bit
processor on your desktop.
Unless, of course, you know you are unlikely to deploy your app on the
CoffeeMaster 3000.
Sure, you can do that. But as you know, there is no free lunch.
You pay for that "portability" by missing all the progress done
since 1989 in C.
Unless, of course, you only cared about the C99 standard, in which case
you would miss out on all the progress since then. Life is unfair.
Note that there is objectively speaking not a single useful
program in C that can be ported to all machines that run the
language.
No one who knows what they are saying ever suggested otherwise, I'm
pretty sure. A portable, conforming program will be easy to port to a
new platform. The operating system world is full of examples of code
written with portability in mind. Usually the biggest problems you run
into are compiler inconsistencies.

Portability doesn't mean no work; it means less work for a particular
development need. It is about paying a development cost up front to
lessen the maintenance costs down the line. One decides how much
portability they need depending on the task and target platforms.

Like many things in life, it is a trade-off.

Well, it looks like you are really just spoiling for a fight. Based on
this thread so far, it looks like you got one.
 

Ben Pfaff

Old Wolf said:
That reminds me of the course where I learned C. The tutor advised:

#define MALLOC(type, n) ((type *)malloc((n) * sizeof(type)))

Well, at least this kind of macro always casts to the right type,
unlike manual casts.
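
For comparison, a sketch of the commonly recommended cast-free idiom,
which takes the size from the target pointer instead of repeating a
type name (my example, not from this thread):

#include <stdlib.h>

int main(void)
{
    size_t n = 100;
    double *p;

    /* No cast, no repeated type: if p's type ever changes, the
       sizeof expression follows it automatically. */
    p = malloc(n * sizeof *p);
    if (p == NULL)
        return EXIT_FAILURE;

    free(p);
    return 0;
}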
 

Ben Pfaff

jaysome said:
Did you have a test case for this code? If so, did you ever achieve
decision coverage in the case where ferror() gave an error? That would
be remarkable if you did, and I would like to hear how you did it. The
only way I know how to do it is with preprocessor macros, and even
then it's not the greatest.

You can test it with system-specific features, e.g. /dev/full. I
don't recall whether I did so, though.
 

Ben Pfaff

Richard Heathfield said:
Ben Pfaff said:


Because "the right thing" depends on the situation. Simply aborting the
program is a student's way out. Unfortunately, the only worse "solution"
than aborting the program on malloc failure - i.e. not bothering to test at
all - is also the most common "solution", it seems.

I consider calling exit(EXIT_FAILURE) strictly better than
undefined behavior due to dereferencing a null pointer.
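
To ground the disagreement, here is a minimal sketch of the kind of
wrapper being debated (an assumption about its shape, not Ben's actual
code): it applies one fixed policy, report and exit, which is exactly
what Richard objects to and what Ben considers better than ignoring the
failure altogether.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical wrapper: never returns a null pointer; on failure it
   reports the problem and exits.  Whether that is "the right thing"
   is the point in dispute. */
static void *xmalloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL) {
        fprintf(stderr, "out of memory (%lu bytes requested)\n",
                (unsigned long)size);
        exit(EXIT_FAILURE);
    }
    return p;
}

int main(void)
{
    char *s = xmalloc(32);

    strcpy(s, "allocated via the wrapper");
    puts(s);
    free(s);
    return 0;
}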
 

Clever Monkey

jacob said:
Keith Thompson wrote:

Well but this group is about STANDARD C or not?
Yes.

If we do not agree about what standard C is, we can use the standard.
Which standard? One can debate the pros and cons of any sort of
standardization, or the value of one standard that is meant to supersede
another. A standard is a standard, however, and is best discussed
within context.
But if we do not even agree on what standard C is, there can't be
any kind of consensus in this group, you see?
Once we decide which of the major standards we are discussing, then you
will get consensus. Additionally, one can compare and contrast standards.
The talk about "Standard C" then, is just hollow words!!!
You seem to have an idee fixe that because there is more than one
standard, this means that there is no standard. Each standard is
constructed to take previous standards into consideration. As discussed
elsewhere, it is up to the developer to understand how these standards
affect the code he or she may write, and decide how "standard", which
"standard" and how much "portability" is required.

It. Is. A. Trade-off.

Why worry so much? Any legacy language of sufficient vintage has a long
history that includes many de facto or official standards. I work on an
application that has been ported to many different platforms over the
decades, from early Win16 systems through to 64-bit Windows, P390s, a
variety of real Unixes and everything in-between.

While our code is not instantly portable, the work to port to such a
diverse set of platforms is controlled via a core set of libraries (we
call them "quickports", and they have that name for a reason). This is
where grungy platform details can be wrapped up in (wait for it) a
standard interface. I assume this is how most shops do this sort of
thing, since it is the way I've always done it.

Like any collection of code of a certain age we do rely on a little
preprocessor magic, usually to work around non-conforming compilers, and
the majority of the GUI stuff has been redone in Java. The core C code
is as portable and conforming as we need it to be. Without a language
standard we would be writing to each compiler's notion of an ad hoc
standard anyway.
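
A toy sketch of that wrapping idea (the names and the platform test
here are entirely hypothetical, not the actual "quickport" code): the
grungy, platform-specific part is confined to one translation unit, and
everything else calls a single, standard-looking interface.

/* Hypothetical quickport-style wrapper: callers only ever see
   qp_sleep_ms(); the #ifdefs live here and nowhere else. */
#ifdef _WIN32
#include <windows.h>
#else
#include <unistd.h>
#endif

void qp_sleep_ms(unsigned int milliseconds)
{
#ifdef _WIN32
    Sleep(milliseconds);                 /* Win32 */
#else
    usleep(milliseconds * 1000UL);       /* POSIX */
#endif
}

int main(void)
{
    qp_sleep_ms(100);   /* the caller stays portable */
    return 0;
}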

Let me repeat this, in case you missed it: even if there were no
official or de facto standard, each compiling system and platform would
end up dreaming up its own ad hoc standard, anyway. Why not codify
existing good practice and put constraints around edge cases? This is
generally considered a Good Thing.

The notion that we would all just target a single platform using a
single compiler that eschews standards because it knows better (and why
*not* go off into an "obvious" better direction, since you will *never*
need to run on any other platform anyway) rings pretty freaking hollow
to me.
 
