C portability is a myth

Jesse Meyer

I visited your webpage.

On the third example that you gave for optimizations (if statements
within for loops), did you ever _benchmark_ the code you complained
about?

I wrote up a quick example in the same style, and over a million
loops, the 'optimized' example was less than .02 ms faster per loop
on a 1 GHz Thunderbird.
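
Something along these lines -- a reconstruction of the sort of
harness I mean, with a trivial stand-in loop body, not my exact
code:

#include <stdio.h>
#include <time.h>

#define LOOPS 1000000L

/* trivial stand-in for the loop body under test; the real body
   came from the web page in question (hypothetical workload) */
static volatile long sink;

static void variant(int flag)
{
    if (flag)
        sink += 1;
    else
        sink -= 1;
}

int main(void)
{
    clock_t start;
    double ms;
    long i;

    start = clock();
    for (i = 0; i < LOOPS; i++)
        variant((int)(i & 1));
    ms = 1000.0 * (double)(clock() - start) / CLOCKS_PER_SEC;
    printf("%ld calls: %.3f ms total, %.6f ms per call\n",
           LOOPS, ms, ms / LOOPS);
    return 0;
}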

Obviously, this is all dependent on what they were actually doing,
the compiler, the OS, the machine in question, and the phase of the
moon, but perhaps they profiled their performance-critical code
and came up with similar numbers.

One thing is (probably) certain: The original way the code was written
is easier to read and change.

As Knuth said: Premature optimization is the root of all evil.
Until you profile the code in question, you don't know where to
focus your attention.
 
websnarf

Jesse said:
I visited your webpage.

On the third example that you gave for optimizations (if
statements within for loops), did you ever _benchmark_ the
code you complained about?

Yeah -- it's called branch misprediction. It's a question of how many
of them you want to pay for. It's actually just a very simple math
problem.
I wrote up a quick example in the same style, and over a
million loops, the 'optimized' example was less than .02 ms
faster per loop on a 1 GHz Thunderbird.

Well, I don't know what kind of example you wrote. I assume you didn't
play much with the predictability of the branches? You'll notice this
optimization (which I wrote about prior to the existence of the Athlon
CPU) didn't make the code go slower.
Obviously, this is all dependent on what they were actually
doing, the compiler, the OS, the machine in question, and the
phase of the moon, but perhaps they profiled their
performance-critical code and came up with similar numbers.

It also depends on some degree of understanding by the programmer. It's
like math -- once you understand how and why it works, it isn't
something you form an opinion about. It just is what it is.
One thing is (probably) certain: The original way the code
was written is easier to read and change.

Yeah ok, so did you notice that the title of the web page was not
"Guidelines for making maintainable readable code"? You apply
optimization in just enough doses to deal with your bottlenecks. I
would have thought everyone knew this already.
As Knuth said: Premature optimization is the root of all evil.

Yes ok, well Knuth doesn't write 3D video games. And he cleverly put
the word "premature" in there. That's like saying "eating too much
food is bad for you" -- that doesn't mean food is bad for you.
Until you profile the code in question, you don't know
where to focus your attention.

You understand that there is a non-trivial probability that you can
trace that bit of wisdom from wherever you heard it right back to me,
right?
 
Alan Balmer

And so I am not only criticizing:

One of my recent mini-projects had to be developed on a Windows PC and
then recompiled to work on PocketPC. ARM does not like working with
unaligned words as much as the Pentium does, and I used arena
allocation for allocation-performance reasons. All it needed was to
align the arena-allocated structures to 32 bits ...
Pentiums don't like unaligned words, either. They don't give you a bus
error, they just slow to a crawl. Don't use them.

I would guess that much of your problem with porting code is improper
usage of the language. Why was the data misaligned in the first place?
Did the compiler warn you, or did you silence it with casts?
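
For the record, keeping a bump-pointer arena aligned is a one-line
fix. A minimal sketch (hypothetical code, not the original project's
arena):

#include <stddef.h>

struct arena {
    unsigned char *base;  /* start of the arena's memory block */
    size_t used;          /* bytes handed out so far */
    size_t size;          /* total capacity */
};

/* Round the offset up to 'align' (a power of two) before handing
   out memory: ARM then never sees a misaligned word, and x86
   never pays the unaligned-access penalty. */
void *arena_alloc(struct arena *a, size_t nbytes, size_t align)
{
    size_t offset = (a->used + (align - 1)) & ~(align - 1);
    if (offset + nbytes > a->size)
        return NULL;      /* arena exhausted */
    a->used = offset + nbytes;
    return a->base + offset;
}
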
I guess anybody here can tell stories about endianness ...

ARM will work either big or little endian.
 
Guillaume

Real software developers learn to factor their code into functional
components. For instance in Mozilla the part that decodes HTML and the
part that actually displays something should be completely separated.

This should hold true! Unfortunately, it's not always that way.
But how many "real software developers" are out there, anyway?
Oh wait. There are even some people who strongly believe that
this kind of "modular" programming can only be done with an
OO language.

Maybe it's time we raised the bar in software engineering education...
That way when you port Mozilla you port the GUI code and not the HTML
engine.

Yup! Or at least only the platform-dependent stuff: that can be a
lot of things besides the GUI. But any kind of decent programming
practice should make this at least possible without having to rewrite
the whole thing from scratch.
This is no different for any other project... In most apps the UI can
be a relatively insignificant part of the project development....

That depends on the project. Some software pieces are mostly UI
stuff, with nothing of real interest behind it (maybe what's
underneath would be ported in a couple hours, lol). I'm not giving
any names, but...

Actually, I really wish the UI would be an insignificant part of most
software projects.
 
roman ziak

What does the presence or otherwise of Gregory's name in the
NetBSD developer list have to do with his proof that you are
talking nonsense?

My question was how many different platforms you port your programs
to, on average.

To be honest, I got carried away: I offered my mistake together with
a fix to start a discussion about portability, and yet he offered a
nonconstructive insult.
 
Jesse Meyer

Yeah -- it's called branch misprediction. It's a question of how many
of them you want to pay for. It's actually just a very simple math
problem.

You don't have a benchmark: that's a theory. The theory is that
your compiler is converting your code into inefficient instructions,
and that your program is spending a large amount of time in that
segment of code.

Now, I don't know what processor you are running on, but with
the environment you are speaking of (Redmond/games), it seems
that it would probably be from the x86 line. While processor
architecture isn't one of my strengths, I believe that even the
early Pentiums had branch prediction along the lines of "assume
this conditional will evaluate the same way it did previously",
in which case we won't see branch misprediction, but only the
cost of evaluating the conditional.
Well, I don't know what kind of example you wrote. I assume you didn't
play much with the predictability of the branches? You'll notice this
optimization (which I wrote about prior to the existence of the Athlon
CPU) didn't make the code go slower.

Putting a spoiler on a Chevy Metro won't make the car go slower.
It won't make it go measurably faster, though.
It also depends on some degree of understanding by the programmer. It's
like math -- once you understand how and why it works, it isn't
something you form an opinion about. It just is what it is.

This is condescending and adds nothing to the discussion.

Explain why I'm wrong: I'm saying that until you know how much time
your application is spending in a specific spot of code, you _don't_
know if it's worth obfuscating the code for greater efficiency.
You don't even know _if_ obfuscating the code will lead to greater
efficiency: the compiler may optimize your code for you.
Yeah ok, so did you notice that the title of the web page was not
"Guidelines for making maintainable readable code"? You apply
optimization in just enough doses to deal with your bottlenecks. I
would have thought everyone knew this already.

But you never showed that it *was* the bottleneck. It makes as much
sense as deciding your automobile isn't running well and then putting
in a new catalytic converter.

If you improved your performance, it was by luck, not skill. You may
have decreased the maintainability of the code for no real performance
gain.
Yes ok, well Knuth doesn't write 3D video games. And he cleverly put
the word "premature" in there. That's like saying "eating too much
food is bad for you" -- that doesn't mean food is bad for you.

What you seem to be doing is operating under the mistaken impression
that ordering instructions in certain ways will always lead to faster
code: C does not state that a specific set of instructions will be
faster.

Therefore the speed of execution is dependent on your environment,
your compiler, and the phase of the moon. :)

Until you know where the bottlenecks are, you can't remove them.

PS: Your sig delimiter is screwed up: it's dash-dash-space-return.
 
roman ziak

Real software developers learn to factor their code into functional
components. For instance in Mozilla the part that decodes HTML and the
part that actually displays something should be completely separated.

That way when you port Mozilla you port the GUI code and not the HTML
engine.

This is no different for any other project... In most apps the UI can
be a relatively insignificant part of the project development....

Oh please ... "real software developers" ?

Interestingly enough, Firefox (which uses Gecko, as Mozilla does) has
a subdirectory "chrome", which is the heart of the app and contains a
plethora of JavaScript files.

Most of the apps mentioned in this thread went through an evolution
and needed to be changed at least when they were ported to their
second platform.

Only a narrow range of algorithms is part of the language and its
libraries. It is easier to implement a conforming C compiler than a
conforming C++ compiler or a VM-based language whose definition covers
a broader range of algorithms.
Of course you'd know this because of your vast amounts of experience
you required to pass judgement on how bad C is...

And yet, I did not say in this thread or anywhere else that C is bad
or that Java (Pascal, Basic, C++, whatever) is better.
 
websnarf

Jesse said:
Yeah -- its called branch mis-predictions. Its a question of how
many of them you want to pay for. Its actually just a very simple
math problem.

You don't have a benchmark: That's a theory. [...]

I don't have a benchmark that I can share publicly. Remember this
example was used in some seriously proprietary code.
[...] The theory is that your compiler is converting your code
into inefficient instructions,

Well, it's deeper than that. A modern compiler can use a technique
called "unrolling" to speed up loops; indeed, many modern compilers do
that today. But this is premised on the notion that inner loops are
simple, short, and can improve instruction density as a result.
Leaving an if-conditional inside your loops can prevent your compiler
from unrolling the loop, or from gaining much benefit by doing it,
since it leaves extraneous instructions in the execution path. Athlons
are really fast at executing NOPs, but they still have to decode them
before getting to other instructions.
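
To make the shape of the transformation concrete, here is a sketch in
the same spirit as the web page example -- not the proprietary code,
and the per-element operations are stand-ins:

#include <stddef.h>

enum mode { FAST, SLOW };

/* hypothetical per-element operations, for illustration only */
static void fast_step(double *d, size_t i) { d[i] *= 2.0; }
static void slow_step(double *d, size_t i) { d[i] *= 0.5; }

/* Before: the loop-invariant test is paid for (and potentially
   mispredicted) on every iteration, and it blocks unrolling. */
void process_naive(double *d, size_t n, enum mode m)
{
    size_t i;
    for (i = 0; i < n; i++) {
        if (m == FAST)
            fast_step(d, i);
        else
            slow_step(d, i);
    }
}

/* After: the test is hoisted out, leaving two branch-free inner
   loops that the compiler is free to unroll. */
void process_hoisted(double *d, size_t n, enum mode m)
{
    size_t i;
    if (m == FAST) {
        for (i = 0; i < n; i++)
            fast_step(d, i);
    } else {
        for (i = 0; i < n; i++)
            slow_step(d, i);
    }
}
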
and that your program is spending a large amount of time in that
segment of code.

That isn't something I assume this sort of code will do all the time.
Rather, I assume that you've discovered some bottleneck in your code,
that it looks like this, and that you might want to do something
about it.
Now, I don't know what processor you are running on, but with
the environment you are speaking of (Redmond/games), it seems
that it would probably be from the x86 line.

I'd like to point out that Nintendo has a base of operations in Redmond
WA as well. ;)
[...] While processor architecture isn't one of my
strengths,

Ah. So if you don't know about it, then you should assume it isn't
there right?
This is condescending and adds nothing to the discussion.

No, questioning what it is adds nothing to the discussion. This
technique is a form of hoisting. Most people do not question hoisting
as a valid method of optimization (besides some expectation that maybe
your compiler will do it for you). But for some reason you are. How
do you expect me to respond to this?
Explain why I'm wrong: I'm saying that until you know how much
time your application is spending in a specific spot of code,
you _don't_ know if it's worth obfuscating the code for greater
efficiency.

This is a non sequitur. After you figure out where your bottlenecks
are, you have to have some approach to optimizing them (after
considering removing them or rearchitecting your code). The technique
I showed is one such technique. So the argument that it might not be
where a bottleneck is, is not an argument against the effectiveness of
a technique which is only meant to be applied where bottlenecks are.
You don't even know _if_ obfuscating the code will lead to greater
efficiency: the compiler may optimize your code for you.

The reason I brought the technique up in the first place is because I
*know* compilers will *not* do that sort of optimization. This
particular kind of hoist is, in general, too hard for the compiler to
know how to do correctly, and beyond reasonable capabilities of
proving that it is correct (a C compiler, in general, cannot prove
that a variable will remain constant unless it has some a priori
knowledge about the aliasing of all the variables used in the vicinity
of the code). I've looked at this with the best compilers in the
industry -- it's just too hard a problem for them.
But you never showed that it *was* the bottleneck.

That's because whether or not it is a bottleneck will depend on the
circumstances in which it is used. It's easy to show scenarios in
which that same code either is or is not a bottleneck. It's
independent of the technique. This comment can be applied to any
optimization technique.
If you improved your performance, it was by luck, not skill. [...]

This makes no sense whatsoever. I have never improved the performance
of code by luck -- I have no idea how that could ever happen. You've
engineered things by luck? That's an interesting trick I'd like to
see.
[...] You may have decreased the maintainability of the code for
no real performance gain.

Ok, I see -- you're saying that we're not supposed to know the
performance of the output of any code generated by a C compiler, and
yet you know *FOR SURE* that there is no real performance gain from
this technique?
What you seem to be doing is operating under the mistaken impression
that ordering instructions in certain ways will always lead to
faster code:

Really? Where did I say or even imply this? I encourage you to scour
10+ years of my USENET postings or any publication or webpage (you can
use iarchive to find all my old stuff going back to about 1997) for any
such suggestion by me.
[...] C does not state that a specific set of instructions will be
faster.

*You* brought content from my web page into this newsgroup. You
understand? In the twisted universe of comp.lang.c, you can't know the
performance, reliability, or behavior of anything. In my universe, I
deal with things more concretely. Get it?

If you've got a problem with my conception of C or how C relates to
the real world, then why would you go out of your way to voluntarily
bring that content from way over on my webpage into this newsgroup?
PS: Your sig delimiter is screwed up: it's dash-dash-space-return.

Google is screwed up. My life does not revolve around reverse
engineering google.
 
Flash Gordon

You mean where *semantics* get added to the definition?


What are you talking about? A VM can emulate multitasking -- an
underlying OS which does so is irrelevant (in fact, I would be curious
to know whether the Java spec *REQUIRES* the multitasking to be
emulated, so that race conditions are limited to VM instruction
granularity). Interfacing with GUI environments has to be done with
some kind of native method rig up, and again is not a gating factor in
the ability to run Java.

So how do you do this native rig up on a system that does not even
*have* a display?

[...] C on the other hand, does.

C requires the existence of files/stdio, and a system timer. Voting
machines can arguably be considered more secure if they do *NOT*
contain any system timer (consider software which waits for the
election date, or well past the start of a real election before
enacting a "flaw" -- a little hard to do without a timer).

No, C only requires that for a *hosted* implementation. This is one
reason why the C standard talks about both hosted and freestanding
implementations.
Yes, but I think the OP's point is that no pair of any such compilers
accepts the same set of source inputs and does not generate
semantically compatible output for the intersection of code that is
common.

Strange that so many of us (including me) are working on SW that uses
common source files and behaves the same on multiple implementations.
Also strange that I managed to debug some embedded SW by writing a
test harness for it on a Silicon Graphics workstation, building and
running the module on the workstation, and then taking the
*unmodified* sources back to my normal development system and
compiling them, *still* *unmodified*, for a DSP processor -- and it
worked.
I am aware of at least one smart card that was -- they implemented a
picoJava instruction set for it. Think about the complexities involved
in architecture design and bring-up -- there can't possibly be any,
since they have most of it specced out for them already.

I've seen at least one C to Java Bytecode compiler (I don't know how
complete or conforming it is) so if it runs Java Bytecode then there is
at least one C compiler (however bad) that targets it.
You misspelled *least*. Lua, Python and Perl, for example, are *FAR*
more portable. I think the only two languages I know of which are
*less* portable than C are assembly and Basic.

So how come I have a Java application that will run on SCO 5.0.7 but not
on SCO 5.0.5?

Can you provide me with a Java implementation for the TMS320C25 I used
to write SW for? How about the Perl implementation? This processor only
has a *maximum* of 64K program address space and 64K data address space
and our target HW had at most 8K of each.
 
Flash Gordon

Portability != Availability.

In that case every single completely platform-dependent language is
100% portable, because if it runs on exactly one platform it is fully
portable across all platforms it is available on, i.e. that one single
platform.

Therefore, availability is part of portability, because if the
implementation is not available for a given platform, code written for
it is *not* portable to that platform.

This has nothing to do with portability. By your incorrect definition
of portability, assembly is far and away the most portable language.

If your Perl and Java code runs on fewer platforms than my C code,
then by any useful definition of portability well-written C is more
portable than Java or Perl.

As to your assertion that our definitions of portability make
assembler the most portable language, that is obviously complete and
utter bollocks. There is no one assembler language, just as there is
no one natural language, no one procedural language, and no one
object-oriented language. There is Z80 assembler, which is a
completely different language from TMS320C2x assembler, which is a
completely different language from 68000 assembler... The creators of
all those assembler languages would also agree that they are
completely different languages. C, on the other hand, is one language,
as defined by an ISO standard.
 
Lawrence Kirby

Portability != Availability.

Portability is ultimately a measure of how easily you can get code
working on a variety of different platforms. Where implementations of
a language are unavailable, portability is a non-starter. By your
argument the most portable languages are the ones with only one
compiler (which probably acts as the reference that defines the
language). In such cases there can be no portability issues: valid
source code is 100% portable to all implementations.
This has nothing to do with portability.

Sure it does. If your Perl code can only be run on 20% of the processors
(and systems they are in) out there and your C code can be run on 80% it
is pretty obvious which is more portable.
[...] Approximately 150,000,000, or about
15% of them, are the 32-bit and 64-bit processors used as the main CPU
in laptop/desktop/workstation/server applications, an extremely
narrow range of applications.

This has nothing to do with portability.

Sure it does. If there are a lot more platforms you can port to by
using one language over another, then the first language is more
portable in this respect. That's not to say that this is the only
measure of portability. The amount you need to adapt code in the
porting process is another important consideration.

....
This has nothing to do with portability. By your incorrect definition
of portability, assembly is far and away the most portable language.

If there was one assembly language that worked on everything you would be
correct. But you're using the term "assembly language" to cover a
multitude of different and incompatible languages. The portability
consequences of that are rather obvious.

Lawrence
 
Keith Thompson

Jesse Meyer wrote: [...]
PS: Your sig delimiter is screwed up: it's dash-dash-space-return.

Google is screwed up. My life does not revolve around reverse
engineering google.

<OT>
I just tried posting a test message (in misc.test) through
groups.google.com. As far as I can tell, it doesn't automatically
append a signature at all (though I suppose I could have missed
something). If you're adding the signature manually, it's no harder
to get it right.
</OT>
 
websnarf

Lawrence said:
Portability is ultimately a measure of how easily you can get code
working on a variety of different platforms.

http://www.m-w.com/cgi-bin/dictionary?book=Dictionary&va=portable

The two definitions talk about portability as a characteristic
of the code. The number of platforms has nothing to do with the
meaning of the word portable. One says code is portable between one
platform and another. Nobody says its degree of portability is two.

This isn't an opinion or a nuance -- it's what it is. Drill it into
your head: Portability != Availability. They are just not the same
thing.
 
websnarf

Flash said:
So how do you do this native rig up on a system that does not even
*have* a display?

I'm not saying you do. The point is that it's clearly a native
extension of some kind, much like any graphics library in C. Look, I
am not a Java expert, so there's only so much I know -- I just
understand the basics. A Java extension for GUIs or graphics in
general will clearly present a uniform interface; it's just a question
of support for each platform. With C, you can pretty much count on
*EVERY* compiler having a different way of talking to any native
library. For example, Microsoft uses MFC for GUI programming, while
Borland uses OWL.
Strange that so many of us (including me) are working on SW
that uses common source files and behaves the same on multiple
implementations.

Ok, but how do you know they behave the same on multiple systems?
Have you tested it on an 8-, 16-, 32-, and 64-bit system, each of
which may use either ones' complement or two's complement? Have you
ensured that you have used no more than 64K of data (this is
apparently the only thing that's guaranteed by the standard)?
Also strange that I managed to debug some embedded SW by writing
a test harness for it on a Silicon Graphics workstation, building
and running the module on the workstation, and then taking the
*unmodified* sources back to my normal development system and
compiling them, *still* *unmodified*, for a DSP processor -- and
it worked.

Ok, but that's a question of engineering resources. I *too*
write very portable code, but it takes a lot of effort. For example, I
have recently been porting a big integer library, and it has taken me
two weeks so far to support it on 16-, 32-, and 64-bit systems with 6
different compilers. I am down to my last scenario, a big-endian
64-bit system, which I haven't got quite right yet. I have decided
that supporting ones' complement systems, or systems that don't
sign-extend signed integers, will simply have to be ignored.
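
To give a flavor of where the time goes: the basic digit primitive
has to be re-expressed for each word size, because standard C gives
you no double-width product for the widest type. A simplified sketch
(not from my actual library):

#include <stdint.h>

/* One bignum building block: multiply two 32-bit digits, add the
   incoming digit and carry, and split the 64-bit result into a
   digit and an outgoing carry.  With 64-bit digits you would need
   a 128-bit product, which standard C cannot express -- hence the
   per-compiler rework. */
static void mul_add(uint32_t *digit, uint32_t *carry,
                    uint32_t a, uint32_t b)
{
    uint64_t t = (uint64_t)a * b + *digit + *carry;
    *digit = (uint32_t)t;
    *carry = (uint32_t)(t >> 32);
}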

I mean, the fact that it's possible to write non-trivial portable
code in C I kind of equate to the ability to write obfuscated C code,
or quines, or whatever. Yeah, it's possible, but it's a big waste of
time, and at the end of the day, after investing massive amounts of
time accomplishing these ports one at a time, you wonder what exactly
you have accomplished.
I've seen at least one C to Java Bytecode compiler (I don't know
how complete or conforming it is) so if it runs Java Bytecode then
there is at least one C compiler (however bad) that targets it.

Ok what other C compiler is that instance compatible with?
So how come I have a Java application that will run on SCO 5.0.7
but not on SCO 5.0.5?

No idea, I am not a Java expert. Java is not "race condition
compatible" and the spec doesn't tell you what platforms a JVM has to
be ported to. Portability != Availability.
Can you provide me with a Java implementation for the TMS320C25 I used
to write SW for?

Portability != Availability.
[...] How about the Perl implementation? This processor only
has a *maximum* of 64K program address space and 64K data address
space and our target HW had at most 8K of each.

Dunno, but you should try out Lua on it. Either way, I don't think
that processor will ever be able to run my (or any other serious)
multiprecision integer library.
 
Stephen Sprunk

The two definitions talk about portability as a characteristic
of the code. The number of platforms has nothing to do with the
meaning of the word portable. One says code is portable between one
platform and another. Nobody says its degree of portability is two.

This isn't an opinion or a nuance -- it's what it is. Drill it into
your head: Portability != Availability. They are just not the same
thing.

If Java isn't available on my toaster, then Java code isn't portable to that
platform. If C is available on my toaster, then C is more portable than
Java.

Availability is one (of many) components of portability. Nobody is claiming
they're equivalent, but they're related.

S
 
websnarf

Stephen said:
Availability is one (of many) components of portability. Nobody is
claiming they're equivalent, but they're related.

They are different components related by a problem manifested by the
C language standard, namely: can you compile and run a given set of C
source code? Having a compiler on a platform is supposed to give you
some hope that it's possible, but the path is usually convoluted
unless the program is extremely trivial (like a Unix text processing
utility).

Look, the distinction is just a matter of English:

When one "ports" a piece of code, one is not talking about the
solicitation for the availability of a compiler for a given platform.
One is implicitely assuming there is some other target platform with a
different compiler, and one engages in the activity of "porting"
(changing the code -- not installing some compiler.)

When an application becomes "ported", this means that the application
has been modified (including the rare possibility of no changes).
Nobody would say an application becomes "ported" when a new platform
or compiler becomes available and the sources happen to work on it.

When one uses a "portability layer" they are not referring to something
like a "compiler emulator" or cross compiling, or some kind of compiler
detection. Its usually a library module, or source modification tool
that makes a body of code be of such a manifestation that it can run on
multiple platforms more easily irrespective of compiler availability.

If someone generates a "port" it doesn't mean they've made a C compiler
available to a platform where one didn't exist before. It means
they've modified their sources in some way to make a piece of code work
on a platform where it didn't before.

If I am "porting" a piece of code or library, I am not trying to make a
compiler available on a platform. I am just in the process of
modifying it to make it work on a target platform. A system developer
might port a compiler or generate one for a new system, but this is a
port in of itself for the purpose of making a compiler available. For
example when AMD and SUSE first ported gcc over to AMD64, they didn't
take credit for "porting" the rest of the open source universe to their
platform; and the ANSI C standard can't take credit for it either --
because after that was done each application then had to be "ported"
all over again (because there's no guarantee that they'll work -- you
have to test it).

Now if one creates a JVM for a platform (thus making it "available"),
does that mean they have "ported" all the Java applications onto that
platform? At best you can say that the Java *standard* ported the
applications.
 
Flash Gordon

I'm not saying you do. The point is that it's clearly a native
extension of some kind, much like any graphics library in C. Look, I
am not a Java expert, so there's only so much I know -- I just
understand the basics. A Java extension for GUIs or graphics in
general will clearly present a uniform interface; it's just a question
of support for each platform. With C, you can pretty much count on
*EVERY* compiler having a different way of talking to any native
library. For example, Microsoft uses MFC for GUI programming, while
Borland uses OWL.

Actually, C provides exactly *one* method of interfacing to
libraries. It's just the libraries that are available that vary.
However, there are cross-platform GUI libraries.
Ok, but how do you know they behave the same on multiple systems?
Have you tested it on an 8-, 16-, 32-, and 64-bit system, each of
which may use either ones' complement or two's complement? Have you
ensured that you have used no more than 64K of data (this is
apparently the only thing that's guaranteed by the standard)?

It was developed on a 16 bit system and is currently used on 32 bit
systems of both endiannesses.
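
The trick is to build values from bytes with shifts instead of
overlaying structs on buffers. Code in this idiom (a sketch, not our
actual source) simply does not care what the host byte order is:

#include <stdint.h>

/* Read a little-endian 32-bit value from a byte buffer.  The
   shifts operate on values, not on storage, so the result is the
   same on big- and little-endian hosts alike. */
uint32_t read_le32(const unsigned char *p)
{
    return (uint32_t)p[0]
         | ((uint32_t)p[1] << 8)
         | ((uint32_t)p[2] << 16)
         | ((uint32_t)p[3] << 24);
}
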
Ok, but that's a question of engineering resources. I *too*
write very portable code, but it takes a lot of effort.

Not if you know what you are doing. This code took no extra effort. In
fact, I did not even think about portability. It just worked.
For example, I have recently been porting a big integer library, and
it has taken me two weeks so far to support it on 16-, 32-, and 64-bit
systems with 6 different compilers. I am down to my last scenario, a
big-endian 64-bit system, which I haven't got quite right yet. I have
decided that supporting ones' complement systems, or systems that
don't sign-extend signed integers, will simply have to be ignored.

Sounds like the code was badly written.
I mean, the fact that it's possible to write non-trivial portable
code in C I kind of equate to the ability to write obfuscated C code,
or quines, or whatever. Yeah, it's possible, but it's a big waste of
time, and at the end of the day, after investing massive amounts of
time accomplishing these ports one at a time, you wonder what exactly
you have accomplished.

It does not take me vast amounts of extra effort.
Ok what other C compiler is that instance compatible with?

If it follows the standard, then it is compatible with all other C
compilers that follow the standard. If not, then you still have a "C"
compiler targeting all systems with a Java implementation where all such
implementations will behave as closely for C as they do for Java, thus
making C at least as portable as Java by your definition.
No idea, I am not a Java expert. Java is not "race condition
compatible" and the spec doesn't tell you what platforms a JVM has to
be ported to.

Both platforms *have* a JVM. It's just that the JVM on SCO 5.0.5 is
not good enough for the job.
Portability != Availability.

So you keep saying. However, you fail to understand that portability
is the ability to port to different systems; therefore, the lack of an
implementation (or of a sufficiently functional implementation) on all
platforms is a limiting factor on portability.
Portability != Availability.

That does not stop availability from being a limiting factor on
portability.
[...] How about the Perl implementation? This processor only
has a *maximum* of 64K program address space and 64K data address
space and our target HW had at most 8K of each.

Dunno, but you should try out Lua on it. Either way, I don't think
that processor will ever be able to run my (or any other serious)
multiprecision integer library.

I don't think your desktop PC will ever be certified to be fitted in a
military jet. Your point is?
 
Flash Gordon

They are different components related by a problem manifested by the
C language standard, namely: can you compile and run a given set of C
source code? Having a compiler on a platform is supposed to give you
some hope that it's possible, but the path is usually convoluted
unless the program is extremely trivial (like a Unix text processing
utility).

Strange that when I ported a *complex* application to run under Cygwin
it required minimal effort.
Look, the distinction is just a matter of English:

When one "ports" a piece of code, one is not talking about soliciting
the availability of a compiler for a given platform. One is implicitly
assuming there is some other target platform with a different
compiler, and one engages in the activity of "porting" (changing the
code -- not installing some compiler).

And if no such implementation is available, then said port is not
possible unless you are prepared to write one. Therefore, when no
implementation is already available, part of the porting process is
the writing of such an implementation.
When an application becomes "ported", this means that the application
has been modified (including the rare possibility of no changes).
Nobody would say an application becomes "ported" when a new platform
or compiler becomes available and the sources happen to work on it.

Incorrect. People frequently talk about porting applications where no
code changes are required. In such cases the porting is comparatively
simple, but still requires testing.

However, if we do use your terminology, when you get a Java application
to run on a different system without changing the code you are not
porting it, so the ability to do that has nothing to do with portability
(i.e. the ability to port).
When one uses a "portability layer", one is not referring to something
like a "compiler emulator", or cross compiling, or some kind of
compiler detection. It's usually a library module or
source-modification tool that puts a body of code into a form that can
run on multiple platforms more easily, irrespective of compiler
availability.

Irrelevant.

If someone generates a "port" it doesn't mean they've made a C compiler
available to a platform where one didn't exist before. It means
they've modified their sources in some way to make a piece of code work
on a platform where it didn't before.

Not entirely true. If there is no C implementation then part of the
porting process would generally be writing a C implementation. It
normally does not involve this simply because it is exceedingly rare to
find any target without a C implementation already available.
If I am "porting" a piece of code or library, I am not trying to make a
compiler available on a platform.

Unless there is no such compiler, in which case writing the compiler is
part of the porting process.
I am just in the process of modifying it to make it work on a target
platform. A system developer might port a compiler or generate one for
a new system, but that is a port in and of itself, done for the
purpose of making a compiler available. For example, when AMD and SUSE
first ported gcc over to AMD64, they didn't take credit for "porting"
the rest of the open source universe to their platform;

No, because the rest also had to be ported. See my comments above about
porting.
and the ANSI C standard can't take credit for it either --

Yet further down you say the Java standard ports applications,
showing yourself to be inconsistent.
because after that was done, each application then had to be "ported"
all over again (there's no guarantee that they'll work -- you have to
test them).

The same applies to Java applications. I have already pointed out that I
*have* a Java application that does *not* work in all Java implementations.
Now if one creates a JVM for a platform (thus making it "available"),
does that mean they have "ported" all the Java applications onto that
platform? At best you can say that the Java *standard* ported the
applications.

Apart from the fact that not all Java applications will necessarily
work on it, you mean? See my comment above. However, yes, they could
be considered to have ported all the Java applications that do run
without recompilation.

When my company releases Java code we specify which JVMs it *will*
work on, and if someone uses something different then getting it to
work is *their* problem unless they pay us to sort it -- and even then
we won't guarantee it *can* be sorted, hence us telling customers that
they *have* to upgrade to the latest version of SCO (although we are
moving away from it).
 
