Free C++ compilers for a classroom

john

Kuberan said:
I like netbeans w/ the c++ extensions. It's also cross platform if
that's a concern.


I tried it on Windows XP SP2/JDK 1.6.0_02, with MinGW and DJGPP and it
did not manage to compile an ISO C++ hello world program with either of
them (it had problems with the provided "makes" and its Makefile).

Do you use it for C/C++ under Windows with any success?
 
john

BobR said:
I am thinking of Dev-C++ 4.9.9.2 on top of MinGW 3.4.5. I have also
downloaded Cygwin, but I want a user-friendly IDE for Windows with an
up-to-date C++-compatible compiler.

Any other ideas?

Sounds good to me.

You might take a look at MinGW Studio for a simpler IDE (it also has a
GNU/Linux port).
MinGWStudio http://www.parinyasoft.com/
[ easily tied to the MinGW installed with Dev-C++, or ?]

It looks abandoned, like Dev-C++.

or:
Code::Blocks http://www.codeblocks.org/
[ I haven't tried it, but heard good things.]

Well, it is in nightly builds, I am not sure which night will have the
least problems... :)
 
john

Erik said:
You should create an empty project, add a source file and then write the
code in that. It might also be a good idea, though not necessary, to
disable the VC++-specific extensions (all that is needed is to change
one option).

OK, the empty project and a new .cpp file is relatively easy; however, it
doesn't support C files in the editor, and I am interested in support
for both C and C++.

Until now I have been using the Dev-C++ 4.9.9.2 and MinGW configuration;
however, MinGW has a bug with long double under C:

long double ld = 123.456789;

printf("%Lf", ld);


prints garbage.
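As far as I can tell this is less a compiler bug than a runtime mismatch: MinGW's gcc passes an 80-bit long double while the MSVC runtime it links against expects a 64-bit one, so its printf misreads "%Lf" arguments. A hedged classroom workaround is to cast down to double before formatting; a sketch (the helper name is my own, not from the thread):

```c
#include <stdio.h>

/* Format a long double portably on MinGW by casting down to double:
   MSVCRT's printf assumes a 64-bit long double while gcc passes an
   80-bit one, so a plain "%Lf" prints garbage.  The cast loses any
   precision beyond double's, but the digits that remain are correct. */
int format_ld(char *buf, size_t size, long double ld)
{
    return snprintf(buf, size, "%f", (double)ld);
}
```

In a quick printout one can equally write `printf("%f\n", (double)ld);` inline. Newer MinGW runtimes also ship their own ANSI-compliant stdio (see `__mingw_printf` / `__USE_MINGW_ANSI_STDIO`), but I have not checked whether the 3.4.5-era runtime supports that.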
 
Guest

OK, the empty project and a new .cpp file is relatively easy; however, it
doesn't support C files in the editor, and I am interested in support
for both C and C++.

I hope you are not teaching one of those C/C++ courses that begin by
teaching the students how to write C programs and then tack on some C++
stuff like classes.
 
robertwessel2

OK, the empty project and a new .cpp file is relatively easy; however, it
doesn't support C files in the editor, and I am interested in support
for both C and C++.


C files are supported; they're just not listed on the menu. Create a
new C++ source file and give it a ".c" extension (instead of letting
the IDE apply the .cpp extension by default).

Also, look into the templates, you should be able to create a template
(that you pick when selecting "new project") that will create the
project with a single main.c source file already in it (perhaps with a
skeleton of main() ).

30 seconds of Googling turned up the following C (only) project
template:

http://www.codeproject.com/useritems/VCNetCustomWizard.asp

Creating a custom template for your beginning students is probably a
good idea anyway.
 
john

Erik said:
I hope you are not teaching one of those C/C++ courses that begin by
teaching the students how to write C programs and then tack on some C++
stuff like classes.

There are two semesters with C and one semester with C++.
 
James Kanze

* James Kanze:
It seems you mean: it's almost a given that a modern learning
institution forbids the students learning anything about the craft
they're supposed to learn.

The craft that they are supposed to be learning is how to write
correct programs. Debuggers certainly aren't any use for that;
they normally are only used once the program is known to be
incorrect.
Well, I don't think they forbid learning.

Which is why they would forbid debuggers, since use (or at least
abuse) of a debugger prevents learning how to reason about your
program.
 
James Kanze

Erik Wikström wrote:
There are two semesters with C and one semester with C++.

That sounds backwards; C++ is at least twice as big as C. Also,
I hope that the C++ comes first; it's generally easier to go
from C++ to C than vice versa.
 
James Kanze

john wrote:
emacs, gmake, g++ and you have the best tool set ever :) what else do
you need?

Vim. Emacs causes carpal tunnel syndrome.

For the rest, it depends on your application. For professional
use, gmake is really about the only game in town, but you do
need a lot to go with it: awk, sed, and a good shell (ksh or
bash) for example. (You don't really write every single line
that gets fed to the compiler, do you? There are always parts
which are generated automatically from shell scripts.) For
someone learning the language, however, it's probably a question
of too much too fast; if they're learning the language, let them
learn the language, without too much hassle concerning build
issues.
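For a course, though, the make side can stay tiny. A minimal sketch of a one-exercise Makefile (target and file names here are placeholders, not anything from the thread):

```make
# Build one single-file exercise; "hello" / "hello.cpp" are placeholders.
CXX      = g++
CXXFLAGS = -Wall -Wextra -pedantic

hello: hello.cpp
	$(CXX) $(CXXFLAGS) -o $@ $<

clean:
	rm -f hello
```

Students then only ever type `make`, and the warning flags make sure they actually see the diagnostics.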

And of course, if you want to teach C++ (as opposed to some
dialect), you really should be using the Comeau compiler, with
the Dinkumware library. It's not free, but it's cheap enough
that a student should be able to afford it.
 
Pete Becker

Which is why they would forbid debuggers, since use (or at least
abuse) of a debugger prevents learning how to reason about your
program.

Tom DeMarco, in his book "Managing Software Projects" (I think that's
the title), suggested that developers not be allowed to use compilers.
Compiling should be part of the test suite. I suspect he was kidding,
but it's a good point, which I've seen stated as: "A fool with a tool
is still a fool."
 
Alf P. Steinbach

* Pete Becker:
Tom DeMarco, in his book "Managing Software Projects" (I think that's
the title), suggested that developers not be allowed to use compilers.
Compiling should be part of the test suite. I suspect he was kidding,
but it's a good point, which I've seen stated as: "A fool with a tool is
still a fool."

As I recall IBM tested the "no compiler" theory in Australia, and it was
a disaster... ;-)

Cheers,

- Alf
 
Alf P. Steinbach

* James Kanze:
The craft that they are supposed to be learning is how to write
correct programs. Debuggers certainly aren't any use for that;

They are very much use for that, yes.

Because in order to write correct programs you need to understand what's
going on, and in my experience a good debugger helps the students
enormously with that.

My students, at vocational school and college level, were not steered
away from debuggers, but the opposite. However, at college level this
practical approach eventually required a hierarchy of lab assistants,
older students earning a few bucks. It's a question of organizing.

they normally are only used once the program is known to be
incorrect.

I think you mean, by professionals. But no, that's not the case either;
I think it depends on the quality of the tools. I probably would not
have used gdb to check or figure out the system's documentation, but
some other debuggers are great tools in that respect.

Which is why they would forbid debuggers, since use (or at least
abuse) of a debugger prevents learning how to reason about your
program.

Hm, "they" again. And no, using a debugger does not prevent the student
from learning to reason about his or her program. In my (rather
extensive) experience, just the opposite: it makes abstract concepts
real, so that students really get it. And in the 90's some debuggers,
especially Borland's, had some functionality specifically for that
purpose, e.g. slow animation of the program's execution. I hope current
teachers have not lost that perspective, but I fear they have, because
at least in the US the fad now is to entice students with eye-candy
multimedia instead of enticing them with really grokking the systems.

Cheers, & hth.,

- Alf
 
James Kanze

* Pete Becker:
As I recall IBM tested the "no compiler" theory in Australia, and it was
a disaster... ;-)

As I recall, it's still being used for a number of critical
systems, where quality is of the utmost importance. It does
reduce the probability of an error in the final code somewhat,
but I'm not convinced that it's cost effective overall; a
compiler can check for things like typos a lot faster and a lot
cheaper than a code reviewer can.

The same thing doesn't hold for a debugger, though. First, of
course, a debugger doesn't detect the presence of an error to
begin with, and can only be used if you have an actual test case
which systematically triggers the error. And even when the
error occurs systematically, it's far more cost effective to
find it in code review than in a test, and if you find it in
code review, the exact error is already known, so you don't need
a debugger. (Also, if you have good, low level unit tests, just
reviewing which tests pass, and which fail, should indicate
fairly well where the error is located.)
 
James Kanze

* James Kanze:
They are very much use for that, yes.
Because in order to write correct programs you need to
understand what's going on, and in my experience a good
debugger helps the students enormously with that.

To write correct programs, you have to understand what's going
on before the code is written. You can't use a debugger on code
that hasn't been written yet.

Debuggers can be useful for understanding poorly written and
poorly documented existing code. But that's not normally what
students are concerned with. They're supposed to be learning
how to write good code; by banning the debugger, you remove the
crutch which allows them to get by without understanding what
they are doing.
My students, at vocational school and college level, were not
steered away from debuggers, but the opposite. However, at
college level this practical approach eventually required a
hierarchy of lab assistants, older students earning a few
bucks. It's a question of organizing.
I think you mean, by professionals.

Actually, I meant by students. Professionals use them mostly
for legacy code, or code from some outside source, which isn't
correctly documented or cleanly written. They can be of
enormous help there. The professionals I know don't use them at
all for the code they write themselves. (In many of the places
I've worked, there hasn't even been a debugger available. And
nobody missed them.)
But no, that's not the case either, but I think it depends on
the quality of the tools. I probably would not have used gdb
to check or figure out the system's documentation, but some
other debuggers are great tools in that respect.

If you have to check and figure out documentation, a debugger
can be a valuable asset. Even gdb. But I think you'd agree
that if the documentation requires checking and figuring out,
that's a problem in itself. We certainly don't want to teach
students that they should write documentation that needs
checking and figuring out, or even that they can get away with
it. It's precisely because this is the major use of a debugger
that I would generally ban its use by students.
 
robertwessel2

As I recall, it's still being used for a number of critical
systems, where quality is of the utmost importance. It does
reduce the probability of an error in the final code somewhat,
but I'm not convinced that it's cost effective overall; a
compiler can check for things like typos a lot faster and a lot
cheaper than a code reviewer can.


The methodology is called "Cleanroom," and is moderately interesting,
since it's rather different than most of the other high-reliability
methodologies. And it focuses almost entirely on preventing bugs from
being introduced into the software project. Compiling and trying to
empirically determine if a piece of code works as expected is
considered wholly inadequate, rather the code must be proved correct
first. And frankly, if the code has compiler catchable syntax errors
in it, it's going to *way* fall short of that requirement.

In any event, the requirement that the coder not be allowed to compile
or test their code (in some cases the dividing line is drawn on the
other side of the compile - IOW the testers cannot compile code, but
that doesn't make much difference) is not really the central point,
it's rather to prevent cheating by the programmers, who otherwise
could do some of their own compiles and unit testing and produce code
that will have a much lower defect density than equally unverified
code. That kills the tracking and feedback mechanism, which assumes
that the number of hard-to-find defects is related to the number of
easy-to-find defects in a predictable way, and by eliminating the
"easy" defects (IOW the ones the programmer will catch in the compile/
unit test cycle), you can no longer track the (hidden) hard-to-find
defects. If you could trust the programmers to accurately report all
those defects, the split would not be (as) necessary.

They've had some success (although like with all high reliability and
formally verified development methodologies, the cost is very high),
but I sure as heck would not like to work under those conditions.
 
Alf P. Steinbach

* James Kanze:
To write correct programs, you have to understand what's going
on before the code is written. You can't use a debugger on code
that hasn't been written yet.

Hm, I think you have too competent people around you... ;-)

Fact is, most students (unless they've been taught by someone a bit
unconventional like me) have only the vaguest grasp of what various
program constructs do, what's really going on, because many and perhaps
most students are taught by the monkey-see-monkey-do principle where
patterns they see are repeated and slightly adapted to new
circumstances, without any real understanding.

In first-year of computer science, students benefit a lot from seeing an
execution position actually jump around in a loop, seeing the effects on
variables, seeing that there is a call stack, and so on. That also
holds for graduates starting in a company, because they're not likely to
have experienced that. I guess that you would never dream of placing a
1 MB object in a local variable except if you had a very good reason for
doing that and knew the exact circumstances under which that code would
be executed; a lot of students and fresh employees do such things
unthinkingly, because they haven't experienced any problem with it.

Debuggers can be useful for understanding poorly written and
poorly documented existing code.

That's most code in existence... :)

Or rather, :-(.

But that's not normally what
students are concerned with. They're supposed to be learning
how to write good code; by banning the debugger, you remove the
crutch which allows them to get by without understanding what
they are doing.

On the contrary, a debugger allows them to understand what they're
doing, at a much deeper and more concrete level than book lernin'.

Actually, I meant by students. Professionals use them mostly
for legacy code, or code from some outside source, which isn't
correctly documented or cleanly written. They can be of
enormous help there. The professionals I know don't use them at
all for the code they write themselves. (In many of the places
I've worked, there hasn't even been a debugger available. And
nobody missed them.)

Again, I think you have too competent people around you... ;-)

Lucky you.


If you have to check and figure out documentation, a debugger
can be a valuable asset. Even gdb. But I think you'd agree
that if the documentation requires checking and figuring out,
that's a problem in itself.

Oh yes, it is a problem.

Unfortunately that's generally the case with Microsoft's documentation.

Microsoft is a big company that makes a lot of libraries that people use
in their real-world programs.

Then there's a ditto company called Sun.

Not to mention one called IBM (although IBM is generally very very
methodical, that doesn't help when the method is all at the wrong level
of abstraction).

And for contemporary teaching I guess Google is unavoidable.

Although I have next to no experience using Google's public libraries
(they're mostly JavaScript), it is telling that most everything is
designated "beta", so I suspect that if I sat down and made a forensic
map application that I thought of yesterday, then I would be spending
quite some time using various debugging techniques to teach myself the
ins and outs of those libraries, and what's reality versus doc.

We certainly don't want to teach
students that they should write documentation that needs
checking and figuring out, or even that they can get away with
it. It's precisely because this is the major use of a debugger
that I would generally ban its use by students.

Ah, well, the days are past when students wrote only small programs that
relied on nothing else than Pascal's built-in Read and Write. But even
there debuggers were a great boon to understanding.

Cheers, & hth.,

- Alf
 
Kai-Uwe Bux

The methodology is called "Cleanroom," and is moderately interesting,
since it's rather different than most of the other high-reliability
methodologies. And it focuses almost entirely on preventing bugs from
being introduced into the software project. Compiling and trying to
empirically determine if a piece of code works as expected is
considered wholly inadequate, rather the code must be proved correct
first. And frankly, if the code has compiler catchable syntax errors
in it, it's going to *way* fall short of that requirement.

Are you serious? You mean for each line they actually prove a lemma saying
that it has the required semicolon or curly brackets? Every time I have seen
or written a proof of correctness for a piece of code, the proof focused
entirely on semantics (because proofs are for humans and humans are
interested in understanding the semantics). I like proving things, and
there are parts of my code base that I only got correct when (after many
failures) I got down to proving them to work. However, such proofs _never_
dealt with syntax errors.
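To make concrete what a semantic proof attached to code looks like, here is a toy sketch of my own (not from any Cleanroom source): binary search over a sorted array, with the loop invariant spelled out in the comments and the easy half of it double-checked by assertions.

```c
#include <assert.h>

/* Binary search over a sorted array.  The "proof" is the invariant:
   if x occurs in a[0..n), then it occurs in a[lo..hi).  It holds on
   entry (lo == 0, hi == n) and each branch preserves it. */
int find(const int *a, int n, int x)
{
    int lo = 0, hi = n;
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;   /* no overflow; lo <= mid < hi */
        if (a[mid] < x)
            lo = mid + 1;   /* everything in a[lo..mid] is < x */
        else if (a[mid] > x)
            hi = mid;       /* everything in a[mid..hi) is > x */
        else
            return mid;
        assert(0 <= lo && lo <= hi && hi <= n);   /* bounds still sane */
    }
    return -1;  /* a[lo..hi) is empty, so by the invariant x is absent */
}
```

The assertions only check the bounds at run time; the real argument is in the comments, which a reviewer verifies once and for all.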

In any event, the requirement that the coder not be allowed to compile
or test their code (in some cases the dividing line is drawn on the
other side of the compile - IOW the testers cannot compile code, but
that doesn't make much difference) is not really the central point,
it's rather to prevent cheating by the programmers, who otherwise
could do some of their own compiles and unit testing and produce code
that will have a much lower defect density than equally unverified
code. That kills the tracking and feedback mechanism, which assumes
that the number of hard-to-find defects is related to the number of
easy-to-find defects in a predictable way, and by eliminating the
"easy" defects (IOW the ones the programmer will catch in the compile/
unit test cycle), you can no longer track the (hidden) hard-to-find
defects. If you could trust the programmers to accurately report all
those defects, the split would not be (as) necessary.

They've had some success (although like with all high reliability and
formally verified development methodologies, the cost is very high),
but I sure as heck would not like to work under those conditions.

By and large, I don't buy this theory. When you try proving correctness of an
algorithm, it is very easy to fool yourself. Wrong proofs are as easy to
write as buggy programs. Moreover, it is _very_ hard to tell a wrong proof
from a correct one (because, contrary to popular belief, proofs are not
formal). It takes mathematicians several years of training to acquire that
skill (note: the most difficult part is to spot a bogus proof of a true
statement).

When dealing with any proof problem, you first toy around with examples and
convince yourself with moral arguments that some idea should work. Being
able to test those theories by implementing them is invaluable for
understanding the problem. And that always comes first, before you can hope
to find a solution (be that a proof or a program).

If this method yields better code quality, I would venture the conjecture
that it is not because the developers are denied access to compilers but
because they are given more time.


Best

Kai-Uwe Bux
 
john

Don't debuggers show things only in assembly, and you have to know
assembly before using them?
 
Guest

Are you serious? You mean for each line they actually prove a lemma saying
that it has the required semicolon or curly brackets? Every time I have seen
or written a proof of correctness for a piece of code, the proof focused
entirely on semantics (because proofs are for humans and humans are
interested in understanding the semantics). I like proving things, and
there are parts of my code base that I only got correct when (after many
failures) I got down to proving them to work. However, such proofs _never_
dealt with syntax errors.

I would suspect that the theory (and probably practice also) is that if
you have to write code that is provably correct, you spend a lot of time
thinking about the code you are writing, so spending a little more to
make sure that the syntax is correct is no big loss. Also, I would
suspect that you allow yourself the syntax errors you write because the
cost of finding them through compilation is so low.

There is this story about Donald E. Knuth (I do not know if it is true
or not) where he participated in a programming contest: he needed the
least amount of time to write the code, produced the best code, and it
all compiled and worked on the first try. His comment on the matter was
that when he learned to program you could not afford to write code with
bugs in it, since compiling meant handing in the punch cards and then
waiting a week to get the results back. If your code did not even
compile, you had just wasted one week.

By and large, I don't buy this theory. When you try proving correctness of an
algorithm, it is very easy to fool yourself. Wrong proofs are as easy to
write as buggy programs. Moreover, it is _very_ hard to tell a wrong proof
from a correct one (because, contrary to popular belief, proofs are not
formal). It takes mathematicians several years of training to acquire that
skill (note: the most difficult part is to spot a bogus proof of a true
statement).

When dealing with any proof problem, you first toy around with examples and
convince yourself with moral arguments that some idea should work. Being
able to test those theories by implementing them is invaluable for
understanding the problem. And that always comes first, before you can hope
to find a solution (be that a proof or a program).

If this method yields better code quality, I would venture the conjecture
that it is not because the developers are denied access to compilers but
because they are given more time.

I know of only one place (though there must be several others) that
practices this methodology, and that is the people who write the code
used in the US Space Shuttles. I would guess that most of them have at
least a PhD in formal software verification or similar, and as you said,
they have a lot of time.
 
