A Portable C Compiler

J

jacob navia

http://slashdot.org/

"The leaner, lighter, faster, and most importantly, BSD Licensed,
Compiler PCC has been imported into OpenBSD's CVS and NetBSD's pkgsrc.
The compiler is based on the original Portable C Compiler by S. C.
Johnson, written in the late 70's. Even though much of the compiler has
been rewritten, some of the basics still remain. It is currently not
bug-free, but it compiles on x86 platform, and work is being done on it
to take on GCC's job."

The PCC was the first C compiler I used and studied, back then, when
Unix and C started appearing here in France. We had a source license,
and browsing there I found the PCC code.

The discussion is here.

http://undeadly.org/cgi?action=article&sid=20070915195203&mode=expanded/

It is interesting to see the level of frustration of the BSD people
with GCC. They just want a compiler that is simple, small, and...
supports all the architectures that OpenBSD supports.

Will they succeed?

Of course it is easy to have a compiler that supports 3 back ends, say.
But supporting 10?

With a mixture of weird CPUs etc?

In any case PCC should be up to the task. I remember it ran on the
Honeywell-Bull computers of that time (beginning of the 80s), so
it should run on many others... Running on those was really a
challenge.
 
U

user923005

http://slashdot.org/

"The leaner, lighter, faster, and most importantly, BSD Licensed,
Compiler PCC has been imported into OpenBSD's CVS and NetBSD's pkgsrc.
The compiler is based on the original Portable C Compiler by S. C.
Johnson, written in the late 70's. Even though much of the compiler has
been rewritten, some of the basics still remain. It is currently not
bug-free, but it compiles on x86 platform, and work is being done on it
to take on GCC's job."

The PCC was the first C compiler I used and studied, back then, when
Unix and C started appearing here in France. We had a source license,
and browsing there I found the PCC code.

The discussion is here.

http://undeadly.org/cgi?action=article&sid=20070915195203&mode=expanded/

It is interesting to see the level of frustration of the BSD people
with GCC. They just want a compiler that is simple, small, and...
supports all architectures that Open BSD supports.

Will they succeed?

Of course it is easy to have a compiler that supports 3 back ends, say.
But supporting 10?

With a mixture of weird CPUs etc?

In any case PCC should be up to the task. I remember it run in the
Honeywell-Bull computers of that time (beginning of the 80s), so
it should run in many others... Running with those was really a
challenge.

Starting with PCC and trying to compete with GCC is like starting with
a dinghy and planning to race a 65-foot yacht.

I guess that :
http://www.tendra.org/about/

has a much better chance to succeed.

Other attempts:
http://www.thefreecountry.com/compilers/cpp.shtml
 
E

Erik Trulsson

user923005 said:
Starting with PCC and trying to compete with GCC is like starting with
a dinghy and planning to race a 65-foot yacht.

That depends on in what manner you are trying to compete. It is true that it
seems unlikely that PCC will be able to generate as good code as GCC anytime
in the near future. On the other hand it should not be very difficult to
compete with GCC with regards to compile time and memory usage needed by the
compiler (areas in which GCC is not very good.)

For people trying to do development on older machines these features can be worth
much more than having the generated code run 0.5% faster.

Other people will have other priorities.
 
R

Rui Maciel

Erik said:
That depends on in what manner you are trying to compete. It is true that
it seems unlikely that PCC will be able to generate as good code as GCC
anytime in the near future.  On the other hand it should not be very
difficult to compete with GCC with regards to compile time and memory
usage needed by the compiler (areas in which GCC is not very good.)

Properties such as compile time and memory usage are only relevant to the
compilation process, which is a very tiny part of the whole software
production process. As far as compilers go and what is expected from the
compiler, those features may be nice to have but they are very far from
being important. In fact, they are totally irrelevant.

No one in their right mind prefers a lighter compiler that produces weak or
buggy code to one which is not so light but produces strong, tight and even
secure code.

For people trying to do development on older machines these features can
be worth much more than having the generated code run 0.5% faster.

In this day and age anyone can purchase a very capable system with
multi-core processors for less than 300 euros. It is also possible to buy
used systems for almost nothing. Frankly, I don't believe that build times
are an issue anymore or have been for some time.
Other people will have other priorities.

I don't believe that any developer will ever be willing to trade quality
code for a snappier build process. Naturally it is a nice feature but there
is absolutely no way it would ever be seriously considered for any
tradeoff.


Rui Maciel
 
 
J

jacob navia

Rui said:
Properties such as compile time and memory usage are only relevant to the
compilation process, which is a very tiny part of the whole software
production process.


I have to disagree here.

Each time you make a change in C you have to rebuild. For many projects,
a change can affect a lot of files. Global changes that need a full
recompilation are not done VERY often, but they are done...

This means that a compiler that slows down the development process by
just 30-60 seconds per build is taking 15-30 minutes per day
from each developer...

Multiply that across a team and you see that a lot of the time people
are just waiting for gcc to finish. Of course, this is not visible in small
projects.
As far as compilers go and what is expected from the
compiler, those features may be nice to have but they are very far from
being important. In fact, they are totally irrelevant.

Surely not. A fast compiler allows YOU to develop faster. And that is
important. Gcc is not very fast, mind you.
No one in their right mind prefers a lighter compiler that produces weak or
buggy code to one which is not so light but produces strong, tight and even
secure code.

You are speaking here as if you had never hit a gcc bug...

And yes, a compiler can be slow AND buggy: just look at gcc 3.1.x for
the amd64 platform and you will see what a buggy compiler can be. The
same with the 4.0.x and 4.1 series...

A simpler compiler is surely easier to debug, you see?
In this day and age anyone can purchase a very capable system with
multi-core processors for less than 300 euros. It is also possible to buy
used systems for almost nothing. Frankly, I don't believe that build times
are an issue anymore or have been for some time.

At the company I am working for, a full rebuild takes 10 minutes on a
super hyper fast dual core amd64 using MSVC. Using gcc it takes more
like 45 minutes...
I don't believe that any developer will ever be willing to trade quality
code for a snappier build process.

Quality of code? Gcc's code quality can be great when there are no bugs
in the optimizer... When there are, as is sadly very often the case,
we have to use the debug version... And that code is quite bad.

We get then the worst of both worlds: slow AND buggy.
is absolutely no way it would ever be seriously considered for any
tradeoff.

Since they have a monopoly under linux, there is nothing
anyone can do about that.

DISCLAIMER:
I am biased against it. I use another compiler.
 
W

William Ahern

Rui Maciel said:
Erik Trulsson wrote:
Properties such as compile time and memory usage are only relevant to the
compilation process, which is a very tiny part of the whole software
production process. As far as compilers go and what is expected from the
compiler, those features may be nice to have but they are very far from
being important. In fact, they are totally irrelevant.
No one in their right mind prefers a lighter compiler that produces weak or
buggy code to one which is not so light but produces strong, tight and even
secure code.

Indeed. And in fact, the OpenBSD developers have for years complained that
GCC does not produce strong, tight and secure code. In other words, they
claim that GCC slowly compiles fast, buggy code.
 
F

Flash Gordon

jacob navia wrote, On 18/09/07 14:34:
I have to disagree here.

Each time you make a change in C you have to rebuild. For many projects,
a change can affect a lot of files. Global changes that need a full
recompilation are not done VERY often, but they are done...

This means that a compiler that slows down the development process by
just 30-60 seconds per build is taking 15-30 minutes per day
from each developer...

Multiply that across a team and you see that a lot of the time people
are just waiting for gcc to finish. Of course, this is not visible in small
projects.

That is not long. Wait until you work on a project where a build takes 8
hours!
Surely not. A fast compiler allows YOU to develop faster. And that is
important. Gcc is not very fast, mind you.

Yes, I agree a fast compiler is useful, and gcc is not the fastest around.
You are speaking here as if you had never hit a gcc bug...

And yes, a compiler can be slow AND buggy: just look at gcc 3.1.x for
the amd64 platform and you will see what a buggy compiler can be. The
same with the 4.0.x and 4.1 series...

A simpler compiler is surely easier to debug, you see?

Personally I've hit very few bugs in gcc. They do exist but I don't hit
them often enough to worry about.
At the company I am working for, a full rebuild takes 10 minutes on a
super hyper fast dual core amd64 using MSVC. Using gcc it takes more
like 45 minutes...

A dual core amd64 is *not* super hyper fast. Anyway, make sure you have
make configured to do multiple compilations at once. On Linux it is
often recommended that you set make to compile two files per core at a
time, so on a dual core machine you should be compiling 4 files in parallel.
Quality of code? Gcc's code quality can be great when there are no bugs
in the optimizer... When there are, as is sadly very often the case,
we have to use the debug version... And that code is quite bad.

At -O2 I've *very* rarely hit problems.
We get then the worst of both worlds: slow AND buggy.


Since they have the monopoly under linux, there is nothing
anyone can do about that.

No, gcc does not have a monopoly under Linux. There is tcc (although that
is still flagged as experimental on Ubuntu), TenDRA, and of course Intel's
icc.
DISCLAIMER:
I am biased against it. I use another compiler.

I use gcc a *lot* under Linux, and historically I have used it a fair
bit under SCO and AIX with some use under Cygwin and MinGW as well and
have not hit the level of bugs you claim for it.
 
F

Flash Gordon

William Ahern wrote, On 18/09/07 16:06:
Indeed. And in fact, the OpenBSD developers have for years complained that
GCC does not produce strong, tight and secure code. In other words, they
claim that GCC slowly compiles fast, buggy code.

Not secure is not the same thing as buggy. If you want secure code you
want the code to do something safe on buffer overflows, for example, but
as far as the C standard is concerned, whatever the code does on a buffer
overflow is not a bug in the compiler. By strong and tight they could
also mean things which are nothing to do with whether gcc incorrectly
translates code.
 
R

Rui Maciel

jacob said:
Each time you make a change in C you have to rebuild. for many projects,
a change can affect a lot of files. Global changes that need a full
recompilation are done not VERY often, but they are done...

This means that a compiler that slows down the development process by
just 30-60 seconds per build, it is taking between 15-30 minutes per day
to each developer...

Multiply that for a team and you see that a lot of time people are just
waiting for gcc to finish. Of course, this is not visible in small
projects.

That scenario isn't very realistic. From your numbers and, for the sake of
simplicity, assuming that a working day would be 10 hours, in order for a
developer to spend an extra 30 minutes a day on those hypothetical builds
he would have to perform between 30 and 60 complete builds per day, which
would amount to, on average, each and every developer having to run a
complete build every 10 or 20 minutes.

As is easy to see, that scenario is very unrealistic, not to say
extremely far-fetched. In any scenario vaguely resembling that, the
development team would have to deal with more serious problems than the
compiler speed.

Surely not. A fast compiler allows YOU to develop faster. And that is
important. Gcc is not very fast, mind you.

In order for the compiler speed to have any measurable impact on the
development speed, you have to compile your code on virtually every small
change you do in the code. If that is the case, you have bigger problems to
deal with than your compiler.

You are speaking here as if you had never a gcc bug...

No one is claiming that GCC is perfect nor that it is the fastest compiler
ever built. In fact, GCC is perfectly irrelevant to this subject.

On the other hand, GCC isn't even remotely the piece of crap it is being
made out to be with this PCC deal. In fact, PCC supporters can only claim
its "lightness" as its one favourable feature, completely ignoring and
downplaying the fact that PCC is extremely buggy in its own right and
doesn't even try to optimise the code.

And yes, a compiler can be slow AND buggy, just look at gcc 3.1.xx for
amd64 platform and you will see what a buggy compiler can be. The same
with the 4.0xx and 4.1 series...

Once again, GCC is perfectly irrelevant to this subject.
Nonetheless, I ran a quick check and, according to the GCC project page,
the last release of the 3.1 branch was made on 2002-07-25 and, according to
Wikipedia, the very first processor sporting the AMD 64 architecture was
released in April 2003. In fact, the whole 3.2 branch was released between
the end of the 3.1 branch and the release of the first AMD 64 processor.

So, naturally I have to call bullshit on your statement.

A simpler compiler is surely easier to debug you see?

Simpler, in the sense that it is virtually devoid of any feature like
optimisation, support for multiple target platforms, or support for
languages beyond C. It won't be of much use, but it will be simpler to
maintain.

For the company I am working, a full rebuild takes 10 minutes in a super
hyper fast lane dual core amd64 using MSVC. Using gcc it takes like 45
minutes...

Once again, GCC is perfectly irrelevant to this subject. Why you keep on
whining about GCC is beyond me. Moreover, no one ever claimed that GCC was
the best compiler around or even the fastest. The claim that was made was
that the "people building on old machines" claim is perfectly irrelevant,
knowing that any system sold in the past 3 or 4 years is more than capable
of doing a decent build job.

Quality of code? Gcc's code quality can be great when there is no bugs
in the optimizer... When they are, as it is sadly very often the case,
we have to use the debug version... And that code is quite bad.

We get then the worst of both worlds: slow AND buggy.


Once again, GCC is perfectly irrelevant to this subject.
Nonetheless, let me get this straight. In your own words GCC does in fact
produce great code but if there happens to be a bug somewhere then GCC is
suddenly terrible. Do you honestly believe that right now PCC, which is
devoid of any basic feature and plagued with bugs, is a better compiler than
GCC's C compiler? Do you believe that taking less time to produce
non-optimised, bug-ridden code is in some way better than the heavily
optimised code that GCC generates or even the fact that it supports dozens
of target platforms and 6 programming languages?

Since they have the monopoly under linux, there is nothing
anyone can do about that.

Monopoly? What the hell are you talking about? No one forces anyone or any
linux distro to adopt GCC nor anyone bars anyone from installing and using
some other compiler on a linux distro. Some companies even sell compilers
for linux. Heck, some linux distros even compile their binaries with other
compilers such as Intel's. What on earth are you smoking?


Rui Maciel
 
R

Rui Maciel

William said:
Indeed. And in fact, the OpenBSD developers have for years complained that
GCC does not produce strong, tight and secure code. In other words, they
claim that GCC slowly compiles fast, buggy code.

And in order to fix that they adopt a compiler that is fast but produces
completely unoptimised code that is bug-ridden and does not implement any
security checks? It doesn't seem to be a wise decision.


Rui Maciel
 
 
W

William Ahern

Rui Maciel said:
William Ahern wrote:

And in order to fix that they adopt a compiler that is fast but produces
completely unoptimised code that is bug-ridden and does not implement any
security checks? It doesn't seem to be a wise decision.

The GCC steering committee is unwilling to allow the technical architecture
of GCC to move in certain directions; for example, forms of whole
application analysis (which is pertinent to security features). Look at the
history of LLVM.

OpenBSD's GCC version had Propolice integrated several *years* before it
ever made it into mainline. They've become very frustrated with having to
maintain their own modifications, and in some cases multiple patches, when
they were forced to upgrade GCC versions for some architectures but found
it desirable not to upgrade (or couldn't upgrade) for others.

I can't speak to how bug-ridden PCC is (nor really GCC, either), but I
imagine that, given its size, such bugs might be substantially easier to
diagnose and fix.
 
W

William Ahern

Rui Maciel said:
I don't believe that any developer will ever be willing to trade quality
code for a snappier build process. Naturally it is a nice feature but there
is absolutely no way it would ever be seriously considered for any
tradeoff.

And yet, it is being seriously considered by OpenBSD and NetBSD. The OpenBSD
folks have specifically stated that they would prefer a faster build to
faster code.

Your overly broad and loaded "quality" argument serves only to muddy the
waters. GCC's output isn't shinier than any other output.
 
A

Al Balmer

No one in their right mind prefers a lighter compiler that produces weak or
buggy code to one which is not so light but produces strong, tight and even
secure code.

So, either the compiler in question does not fit that description, or
the BSD programmers are not in their right mind. Or both?
 
J

jacob navia

Rui said:
That scenario isn't very realistic. From your numbers and, for the sake of
simplicity, assuming that a working day would be 10 hours, in order for a
developer to spend an extra 30 minutes a day on those hypothetical builds
he would have to perform between 30 and 60 complete builds per day, which
would amount to, on average, each and every developer having to run a
complete build every 10 or 20 minutes.

This is normal for me.

A build after 10 minutes is quite normal when I am developing. Maybe
more often. I make a typing mistake and discover it at link time...
or I misspell a variable, or call the wrong function; then I correct it
and build again. A normal thing...
As it is easy to see, that scenario is very unrealistic, not to say
extremely far fetched.

Of course, I try to do as few builds as possible when using gcc! In THAT
environment, instead of relying on the machine catching errors, I look
twice at each word I type, like in the old days when I developed
with PCC under Honeywell-Bull.
In any scenario vaguely resembling that, the
development team would have to deal with more serious problems than the
compiler speed.

Yeah. You do not do as many builds, never make typing mistakes, etc.
Great!
In order for the compiler speed to have any measurable impact on the
development speed, you have to compile your code on virtually every small
change you do in the code. If that is the case, you have bigger problems to
deal with than your compiler.

It is not good to use the compiler too much... or what?
No one is claiming that GCC is perfect nor that it is the fastest compiler
ever built. In fact, GCC is perfectly irrelevant to this subject.

On the other hand, GCC isn't even remotely the piece of crap which is being
made out to be with this PCC deal.

Nobody said that GCC is a "piece of crap". In my original message I even
doubted that PCC will stay as simple with 10 supported platforms. You are
exaggerating to turn the discussion into an emotional battle, which is
quite a mistake. Let's discuss without getting excited, OK?
In fact, PCC supporters can only claim
it's "lightness" as the only favourable feature, completely ignoring and
downplaying the fact that PCC is extremely buggy on it's own and doesn't
even try to optimise the code.

Who needs as many optimizations as GCC has?
Do they produce faster code?

Sometimes, sometimes not.

For instance, they started aligning the stack at an 8-byte boundary
on the x86 some years ago, producing code bloat of more than 10%.

That slowed the generated code down so much that I filed a bug report...
They corrected it later. What is amazing is that nobody else had
discovered it.
Once again, GCC is perfectly irrelevant to this subject.
Nonetheless, I've ran a quick check and, according to the GCC project page,
the last release of the 3.1 branch was done in 2002-07-25 and, according to
Wikipedia, the very first processor sporting the AMD 64 architecture was
released in April 2003. In fact, the whole 3.2 branch was released between
the end of the 3.1 branch and the release of the first AMD 64 processor.

Yes, it must have been 3.3.1; I mixed it up.
So, naturally I have to call bullshit on your statement.

Yes, a mixup is bullshit.
Simpler, in the sense that it is virtually devoid of any feature like
optimisation, support for multiple target platforms, or support for
languages beyond C. It won't be of much use, but it will be simpler to
maintain.

Yeah, if it only compiles C it is of not much use, I know.

But maybe it *could* be that *some* people *like* C, you see?
Once again, GCC is perfectly irrelevant to this subject. Why you keep on
whining about GCC is beyond me.

I am not "whining"; this is just a problem of the GCC "support" people.
They think that they can forget their "customers" because they offer
their software for free. I have never treated people pointing out the
bugs in the compiler as "whiners".
Moreover, no one ever claimed that GCC was
the best compiler around or even the fastest.

Yes, that would be really difficult.
The claim that was made was
that the "people building on old machines" claim is perfectly irrelevant,
knowing that any system sold in the past 3 or 4 years is more than capable
of doing a decent build job.

I told you that a build takes like 15 minutes with MSVC; with gcc it
takes more like 30-45 minutes, depending on other load.
Once again, GCC is perfectly irrelevant to this subject.
Nonetheless, let me get this straight. In your own words GCC does in fact
produce great code but if there happens to be a bug somewhere then GCC is
suddently terrible. Do you honestly believe that right now PCC, which is
void of any basic feature and plagued with bugs, is a better compiler than
GCC's C compiler?

Well, it compiled ssh and many other utilities. "Plagued with bugs" is
your own imagination.
Do you believe that taking less time to produce
non-optimised, bug-ridden code is in some way better than the heavily
optimised code that GCC generates or even the fact that it supports dozens
of target platforms and 6 programming languages?

1) I do not care about 4 of those 6 languages. I program in C, and the
company uses C++.
2) Maybe it is great that GCC supports many platforms, but (for instance)
under AIX it is unusable because of too many bugs; we had to use IBM's
AIX compiler. Under Solaris the situation is similar: we use Sun's
compiler. And those are company decisions, not mine.
Monopoly? What the hell are you talking about? No one forces anyone or any
linux distro to adopt GCC nor anyone bars anyone from installing and using
some other compiler on a linux distro.

Look.
Did you know that /usr/lib/libc.so is not a binary shared library?
NO!

It is an ASCII LINKER SCRIPT for GNU's "ld"...

Ahhh you do not use "ld"?

Then you are doomed, stupid!

Most of the headers and system headers under linux are full of
__attribute__ etc etc etc!

I have ported lcc-win to linux and I can tell you that I had to write
my own headers, exactly like under windows, with the BIG difference that
with Microsoft it is still possible to use the windows SDK headers.

Under linux if you do not support __attribute__ you are doomed.
Some companies even sell compilers
for linux.

Borland tried... Where are they now?
Intel is there, though, but that is the only one. I have never heard
from anyone else.

Heck, some linux distros even compile their binaries with other
compilers such as Intel's. What on earth are you smoking?


Rui Maciel


Yeah, I stopped smoking some years ago.

Bad decision; it was a good defense against getting excited about this
kind of stuff.

jacob
 
R

Rui Maciel

William said:
And yet, it is being seriously considered by OpenBSD and NetBSD. The
OpenBSD folks have specifically stated that they would prefer a faster
build to faster code.

Your overly broad and loaded "quality" argument serves only to muddy the
waters. GCC's output isn't shinier than any other output.

Please note that GCC and its features are not, nor have they ever been,
the subject of this discussion. I've stated that compilation time isn't a
relevant feature when we are dealing with compilers. It is a nice feature
to have, but it will never be traded off against features like stability
or optimisation. What good is a compiler if it is quick to compile but
produces code which is buggy and slow?

And yet GCC is being constantly dragged into any comparison with PCC, as if
the objective was to smear GCC instead of defending PCC's strong points.


Rui Maciel
 
R

Rui Maciel

Al said:
So, either the compiler in question does not fit that description, or
the BSD programmers are not in their right mind. Or both?

You have to ask them why they decided to invest their time in that
endeavour. Personally I hope PCC matures and becomes an excellent compiler.
Everyone benefits if there is another FLOSS compiler available. I would be
very happy if it could also support C++ and Fortran and could also offer
better support for the standards.

Nonetheless, the only reasons that have so far been given to justify its
adoption are PCC's reduced compilation times and its license. It has been
speculated that the latter has been the main reason. If that's the case then
I have to say that they don't seem to be in their right minds.


Rui Maciel
 
 
J

jacob navia

Rob said:
Just a minute - surely you only have to do this if you typo in a header?
Or is your build system screwed? Or perhaps you keep code in headers?

(Otherwise only the file you changed generally needs to be rebuilt,
depending on your dependency tree.)

B.
Of course, only one file is rebuilt. But there is always the link step,
which is not really fast, especially with BIG projects.
 
