Some errors in MIT's intro C++ course

  • Thread starter Alf P. Steinbach /Usenet

Lie Ryan

Pascal J. Bourguignon wrote:


I think there's one problem with this approach in a beginner's course.

If students actually manage to write programs which compile and run fine
with all extensions disabled on Windows/VC or Linux/GCC, then it seems
very harsh to punish them if their code fails on some exotic machine,
more so as the students themselves have not had the chance to test
their code on those platforms before handing in the homework.


I would tell them that they must not rely on primitive types having a
certain size or on other such things, but I would not want that to
affect their grades in a beginner's course. It is certainly a good thing
for more advanced courses, though, when you hand out assignments for
which students might actually be tempted to rely on specific hardware.

Rather than having the student's code being able to compile for all
those exotic machines; I'd rather like to see the teacher's code compile
on all those exotic machines first before subjecting the student's code
to do the same.

I have been in a university course where I had to debug the teacher's
code (which was going to be used for an assignment) that didn't run
correctly on a Linux machine because the code relied on rand() always
returning a number that fits in a 16-bit integer (which crippled platform
does that in this age of 32-bit and 64-bit CPUs?); it doesn't help that
the code raises hundreds of compiler warnings when compiled with -Wall
-pedantic, and that it liberally mixes tabs and spaces (worse: the tab
width setting for each file is different). Even worse is how they write
assignment specs almost like: "write a C++ program that behaves like
this .exe program".
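
For illustration, the kind of non-portable assumption involved (a
hypothetical reconstruction, not the actual course code): the C standard
only guarantees RAND_MAX >= 32767, and on Linux/glibc it is typically
2147483647, so code written against a 16-bit rand() breaks silently.

#include <cstdio>
#include <cstdlib>

int main()
{
    short table[32768] = {0};

    // Non-portable: the standard only guarantees RAND_MAX >= 32767.
    // On Linux/glibc, RAND_MAX is typically 2147483647, so this write
    // could land far outside the table (undefined behavior):
    // ++table[std::rand()];

    // Portable: reduce the value into the intended range first.
    ++table[std::rand() % 32768];

    std::printf("RAND_MAX here is %d\n", RAND_MAX);
}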
 

Jorgen Grahn

This is the oft-repeated argument I spoke about earlier. Let me copy
and paste my earlier arguments.

http://groups.google.com/group/comp.lang.c++/msg/d18ffde8d09d4f4d



You're welcome to disagree. Many people do. Preferably we won't rehash
the same arguments over and over again though.

It would help if you wouldn't use insulting phrases like "incredibly
brittle and fragile" about C++ on comp.lang.c++. If you had something
more important to say (which I think you probably did) you could have
phrased that differently. Something like "unlike e.g. Java you cannot
sandbox C or C++ code without process borders", or whatever the right
phrasing is.

/Jorgen
 

Keith H Duggar

Perl is pretty good for a lot of tasks - complex reporting, data
extracts, etc. A comparable Java/C++ program would probably have 10x/40x more
lines of code respectively.

Oh my, the nonsense people pull from thin air. On multiple occasions
I've had to convert Perl to C++ and the C++ has /never/ been more than
3x the lines. The range is 1x to 3x. Your 40x claim is nearly comical
(and would be but for the poor noobs that will believe your nonsense).

KHD
 

Joshua Maurice

On Sun, 2010-09-12, Joshua Maurice wrote:

...
It would help if you wouldn't use insulting phrases like "incredibly
brittle and fragile" about C++ on comp.lang.c++. If you had something
more important to say (which I think you probably did) you could have
phrased that differently.  Something like "unlike e.g. Java you cannot
sandbox C or C++ code without process borders", or whatever the right
phrasing is.

Ok. I'll try to use less pejorative terms. I didn't realize that
people would take it as some sort of attack on C++. (I still stand by
my words as an apt characterization, though if it fails to get my
intended message across due to perceived implied intent, then yes, I
should choose different words.) I do like C++, and prefer it to Java
generally. However, I recognize its weaknesses, such as the near-
complete lack of effective intra-process fault isolation.
 

Balog Pal

Ok. I guess we disagree on facts. I believe that a misbehaving Java
library cannot mess up the process as easily as a misbehaving C++
library. The C++ library could trash the entire memory subsystem with
a single bad line of code, which under certain coding styles is quite
easy to make. A race condition with the status quo or with the
upcoming standard. However, short of maliciousness, Java doesn't have
these problems. It's still possible that a bug in a library borks your
whole program, but the odds seem to be a lot less. It can't
inadvertently trash the memory subsystem, or cause a seg fault from a
race condition, etc.
<<

Facts or opinion?
A race condition was undefined behavior at least in Java 1.2 as I last read --
doubt that changed. And a *practical* Java program will crash easily just
by doing resource allocation.

Not corrupting memory so easily is true -- but Java has no notion of passing
objects by value/const ref -- so the objects sit there unprotected from state-
altering effects. How do you calculate a "chance" of what harm the effect of
a bug can be? Please admit you really cannot.

The exception handler at the upper level is hardly any wiser than the designer
and design translator who was already proven wrong by the assert violation
triggered.

And we didn't even talk about Java programs extending boundaries -- how many
use JNI, CORBA or other RPC calls, etc. Or systems that will interpret
data written to certain named files as device requests.
Of course, it's all a matter of degree. Process boundaries are only
good to a degree. Separate physical hardware gives better fault
isolation than separate processes under Linux. In the end, as you say,
good design is required.

In the beginning. So what are we talking about? Why twist the mud? State
can be correct/incorrect/unknown. Discovering a violation puts you where?
After that, it all depends. Perhaps your
particulars require dumping core when your Java process hits a
programmer bug. I'm not in a position to comment on your design goals,
how good your fault isolation in the Java process is, etc. For
example, a misbehaving Java library could still make calls into other
libraries, which might trash the program. It's a judgment call as to
the proper response.

Judgement indeed. I was just reading about the launch of the game 'Elemental:
War of Magic'. It was crashing all over and impossible to play -- yet the CEO
judged it okay the way it is, and was glad the paying customers sent in all
those bug reports promptly. The approach makes me want to puke.
I just wanted to make the observations:
1- Fault tolerance requires fault isolation.
2- Generally one cannot reliably isolate faults inside of a C++
process. Fault isolation must be at the process level for C++.

provided the system sandboxes the process -- and the process concept applies
in the first place...
After some replies, I made one final claim:
3- It's much easier to get more reliable fault isolation inside of a
single process in other languages, like Java, as opposed to C++.

You meant to say, it is way easier to delude yourself into thinking that true, just
because other languages have less UB and no trivial means to corrupt memory
like buffer overruns or access to freed objects.

But it is just delusion -- despite Java managing the object's memory chunk's
lifetime, designing the life of the object places all the same obligations on the
designer and the coder -- and messing it up is exactly that easy.
Resulting in corruption too, only it manifests differently.
And I definitely don't mean to get into a language dick waving
contest. I am just noting that fault isolation does not necessarily
need to be at the process level. It depends on the particulars.

Well, I guess if a program has no state at all, and you restrict it to certain
operations -- i.e. your program just calculates digits of pi -- you can draw
the line elsewhere. Normal programs that are subject to 'development' and
worth discussing are hardly ever that limited.
 

Ian Collins

The very nature of Java, lack of pointer arithmetic, defined and
"sensible" outcomes from race conditions, defined and "sensible"
behavior on null pointer accesses, etc., aka all of its security
features, gives Java a higher degree of fault isolation intra-process
than C++.

I do see where you are coming from, but I don't agree. Java as a
language has a number of specific behavioural requirements. The
language was originally designed to be used in a sandbox and this
behaviour has proved very popular in a number of domains well beyond the
original restricted uses.

What I'm saying is those same rules can be applied to an application
written in C++ (what were the original JVMs written in?). Many problems
don't require them and so those applications don't pay the costs of
supporting those security features. For those that do, the coding
standards for the application can enforce them. I have many
applications based around my XML and web libraries that don't contain a
single pointer dereference or call to delete. The library components
manage object lifetime, null pointer access and threading behaviour.
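
A minimal sketch of the style being described (my own illustration, not
Ian's actual libraries; Element and makeElement are invented names, and
shared_ptr here is the TR1/C++0x facility, with boost::shared_ptr working
the same way): lifetime is managed by reference-counted handles, so the
application code contains no delete and no raw pointer.

#include <iostream>
#include <memory>
#include <string>
#include <vector>

class Element
{
public:
    explicit Element(std::string const& name) : name_(name) {}
    std::string const& name() const { return name_; }
private:
    std::string name_;
};

typedef std::shared_ptr<Element> ElementPtr;

ElementPtr makeElement(std::string const& name)
{
    return ElementPtr(new Element(name));  // the only 'new'; no matching delete anywhere
}

int main()
{
    std::vector<ElementPtr> doc;
    doc.push_back(makeElement("root"));
    doc.push_back(makeElement("child"));
    for (std::size_t i = 0; i != doc.size(); ++i)
        std::cout << doc[i]->name() << '\n';  // access through managed handles
}  // all Elements are released automatically here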
Fault isolation by definition is how isolated separate parts of
your application are from faults in other parts of your application. In
Java, it's basically impossible to corrupt the bits of a local stack
object of someone else's thread which did not have any escaped
references, but it's trivial for that to happen in C++ from a
programmer mistake.

If allowed, yes. If all objects' memory is hidden from the user, no.
 

Joshua Maurice

"Joshua Maurice" <[email protected]>

Ok. I guess we disagree on facts. I believe that a misbehaving Java
library cannot mess up the process as easily as a misbehaving C++
library. The C++ library could trash the entire memory subsystem with
a single bad line of code, which under certain coding styles is quite
easy to make. A race condition with the status quo or with the
upcoming standard. However, short of maliciousness, Java doesn't have
these problems. It's still possible that a bug in a library borks your
whole program, but the odds seem to be a lot less. It can't
inadvertently trash the memory subsystem, or cause a seg fault from a
race condition, etc.
<<

Facts or opinion?
A race condition was undefined behavior at least in Java 1.2 as I last read --
doubt that changed. And a *practical* Java program will crash easily just
by doing resource allocation.

That was many many years ago. See the Java 1.5 memory model. Race
conditions do not result in seg faults nor any other kind of
"undefined" behavior. See the "out of thin air" guarantee as an
example.
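
For contrast, a minimal sketch (using the then-upcoming C++0x/C++11
<thread> facilities for brevity) of the kind of race under discussion:
in C++ the unsynchronized accesses below make the whole program's
behavior undefined, while the Java 1.5 memory model confines the damage
to reading stale or unexpected values.

#include <iostream>
#include <thread>

int counter = 0;  // shared and unsynchronized

void bump()
{
    for (int i = 0; i != 100000; ++i)
        ++counter;  // data race: undefined behavior in C++
}

int main()
{
    std::thread a(bump);
    std::thread b(bump);
    a.join();
    b.join();
    // In C++ the program's behavior is undefined; the Java equivalent
    // merely prints an unpredictable value.
    std::cout << counter << '\n';
}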
Not corrupting memory so easily is true -- but Java has no notion of passing
objects by value/const ref -- so the objects sit there unprotected from state-
altering effects.

Well, they have some limited forms of it, but I agree that it's
generally quite inferior. They do have immutable objects and "const
wrappers" like Collections.unmodifiableList. Kind of a non sequitur,
really.
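
For readers following the contrast, the C++ facility in question (a
minimal sketch): a callee taking a const reference can observe but not
mutate the caller's object, a guarantee a Java method signature cannot
express.

#include <iostream>
#include <vector>

// The const reference lets sum() read the vector, but any attempt to
// modify it is rejected at compile time.
int sum(std::vector<int> const& v)
{
    int total = 0;
    for (std::size_t i = 0; i != v.size(); ++i)
        total += v[i];
    // v.push_back(0);  // error: cannot modify through a const reference
    return total;
}

int main()
{
    std::vector<int> data(3, 7);
    std::cout << sum(data) << '\n';  // data is guaranteed unchanged
}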
How do you calculate a "chance" of what harm the effect of
a bug can be? Please admit you really cannot.

The exception handler at the upper level is hardly any wiser than the designer
and design translator who was already proven wrong by the assert violation
triggered.

This is an argument saying that fault isolation isn't possible. I
think that's silly. I think that fault isolation is practical, and the
only way to achieve fault tolerance and robustness in an application.

Perhaps your implicit argument is that fault isolation must be done at
the "process level" for any language, but I disagree.

Also, one must calculate the chance that one part of the application
can harm another part of the application. It's part of the design of
robustness. Robustness is always a measure of degree. There are no
absolutes. For example, do I need separate processes? Perhaps I'm
concerned that the processes may interact with the OS in a way that
will kill the entire computer, such as allocating too much memory,
screwing with resources, exploiting a bug to get root and doing "bad
things" (tm). So, perhaps I need separate physical machines. Maybe I
even need an uninterruptible power supply. These are very important
questions. As an extreme, perhaps I need to shield my hardware against
EMPs or cosmic rays.
And we didn't even talk about Java programs extending boundaries -- how many
use JNI, CORBA or other RPC calls, etc. Or systems that will interpret
data written to certain named files as device requests.

Indeed. I said it depends on the particulars, and for example if you
have a JNI library written in C++, then you would lose the Java
guarantees of which I spoke. As an example, the JVM is not magic. It's
written in C or C++ or something like it, so all of the same problems
are possible. It's just that the JVM is far better tested and reviewed
than your program will be, so the odds of a bug in the JVM breaking
your program are rather small. Again, it's all about odds.

I would direct you to my post in comp.lang.c++.moderated:
http://groups.google.com/group/comp.lang.c++.moderated/msg/dacba7e87ded4dd7

In short: we're out to make money, so we have to make trade-offs, such
as trade-offs between developer time and robustness.
In the beginning. So what are we talking about? Why twist the mud? State
can be correct/incorrect/unknown. Discovering a violation puts you where?

Just like if you discover a bug in a C++ process, what makes you think
that the state of the OS is known so that you can rollover to a backup
process? What makes you think that the entire computer isn't entirely
borked? Because it's not likely.
Judgement indeed. I was just reading about the launch of the game 'Elemental:
War of Magic'. It was crashing all over and impossible to play -- yet the CEO
judged it okay the way it is, and was glad the paying customers sent in all
those bug reports promptly. The approach makes me want to puke.


provided the system sandboxes the process -- and the process concept applies
in the first place...

I'm sorry. I cannot understand what you are trying to say here. Could
you phrase it in another way please?

If you're trying to argue that fault isolation can only be done at the
process level (or higher), then I disagree, and you have made no
attempt to argue this position besides saying "no it's not".
You meant to say, it is way easier to delude yourself into thinking that true, just
because other languages have less UB and no trivial means to corrupt memory
like buffer overruns or access to freed objects.

Yes. I do mean that. (Well, except for the deluded part.) Because it's
harder for a programmer bug to corrupt arbitrary memory, there is
more fault isolation between different parts inside of a single Java
process.
But it is just delusion -- despite Java managing the object's memory chunk's
lifetime, designing the life of the object places all the same obligations on the
designer and the coder -- and messing it up is exactly that easy.
Resulting in corruption too, only it manifests differently.


Well, I guess if a program has no state at all, and you restrict it to certain
operations -- i.e. your program just calculates digits of pi -- you can draw
the line elsewhere. Normal programs that are subject to 'development' and
worth discussing are hardly ever that limited.

Please see above for my "chance" rebuttal.
 

Joshua Maurice

I do see where you are coming from, but I don't agree. Java as a
language has a number of specific behavioural requirements. The
language was originally designed to be used in a sandbox and this
behaviour has proved very popular in a number of domains well beyond the
original restricted uses.

What I'm saying is those same rules can be applied to an application
written in C++ (what were the original JVMs written in?).  Many problems
don't require them and so those applications don't pay the costs of
supporting those security features.  For those that do, the coding
standards for the application can enforce them.  I have many
applications based around my XML and web libraries that don't contain a
single pointer dereference or call to delete.  The library components
manage object lifetime, null pointer access and threading behaviour.


If allowed, yes. If all objects' memory is hidden from the user, no.

I agree, with two reservations.

1-
As you noted (and as I noted earlier), the correctness of such a
system is conditional on the correctness of the C++ library /
framework or the (J)VM. I suspect that the Sun JVM is much more tested
and reliable than your C++ library / framework. As such, when
evaluating the robustness of such designs, I would be more concerned
about the correctness of your C++ library / framework as opposed to
the JVM, enough so that I would consider it more "required" to use
separate processes for the C++ application.

2-
For those that do, the coding
standards for the application can enforce them.

Code review, application coding standards, etc., collectively
programmer care, can only go so far. Enforcement by the compiler and
runtime as an automated process is far more reliable and robust. I
disagree with your implicit assertion that coding standards are a
perfectly equivalent substitute for compiler enforcement on things
such as type safety.
 

kwikius

kwikius  wrote:
#include <vector>
int main()
{
  std::vector<int> v(100U, 0);
  v[100] = v[-1];
}
compiled without complaint ... didn't crash .. Great!
Program must be working .. ;-)
It crashes with all of the compilers I use.

That must be a very specialized list.

A compiler that guaranteed to crash would be a poor compiler, as it
would have to perform runtime checks (unless it uses some hardware
technology). There is a good chance that an out-of-bounds write will
write valid data ... just in the wrong object. This is a classic cause
of mysterious bugs... Many factors depend on the setup and OS. What
is the actual type of the index parameter... This isn't defined in
a meaningful enough way to make any guarantee about what [-1] will turn
into, AFAICS, in terms of a valid memory offset.

I think it is misleading of Mr Kanze to say that using vect[N] where N
is out of range will cause a crash.. Don't rely on that! Much better to
invoke an exception (so use vect.at(N)).. even if uncaught you are
less likely to be running in a dangerous state.
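
A minimal sketch of the difference: at() performs the bounds check and
throws std::out_of_range, while [] with an out-of-range index is simply
undefined behavior.

#include <iostream>
#include <stdexcept>
#include <vector>

int main()
{
    std::vector<int> v(100U, 0);

    // v[100] = 1;  // undefined behavior: may crash, may silently
    //              // corrupt a neighboring object, may appear to work

    try {
        v.at(100) = 1;  // bounds-checked: throws instead
    }
    catch (std::out_of_range const& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
}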

Integer conversions are one area among many where C++ suffers from weak
typing, which it can't remove as it's based on C. We are talking about a
language that is 40 years old or more. Things have moved on a long way
since then...

I am very happy using gcc :) . It has very good support, it is
tracking the next version of C++ and I can use it on a wide variety of
platforms. I won't be changing anytime soon. ( Manifests..??
Aargh!!! :) ) though I will be interested to try out the LLVM Clang
compiler when it is stable. though it will probably only work on Apple
iPhone ;-)

regards
Andy Little
 

Öö Tiib

I do not think that that is an accurate reflection of what I said. I
specifically stated that good fault tolerance in programs (of all
kinds and languages) is the result of fault isolation.

Sorry, but it keeps sounding like with C++ you cannot cheat but with
Java it is fine to tolerate bugs. How does your code decide from what
module the insanity propagated? Not tested. Unstable. Prototype-
quality parts obviously not disabled. What should it do?
In C++, this
can only really be accomplished at the process level because of the
way C++ is. However, Java has stricter security guarantees on
misbehaving programs, so you can achieve fault isolation inside of a
process in Java to a reasonable level.

I do not understand why you raise some sort of differentiation, more so
antagonism, there. Is it easier to isolate bugs in Java? Great! Why
were these not fixed then? What is the reasonable in-process anti-
insanity medicine in Java that lets your insane medical equipment
take countermeasures and continue to kill the patient in the correct way
this time?
I would still use fault
isolation in the Java medical equipment just like the C++ medical
equipment, but the fault isolation may not be at the process level in
Java.

The most terrible catch-a-bug-and-continue voodoo I have seen during my
life was written in C#. A partially innocent person failing to
interface with such trash got fired to please shareholders. Any time
I hear something about fault-proof blah-blah error-tolerance it makes
me a bit disdainful. Code throwing some "AssertionError" may be fine.
Code catching it without rethrowing cannot be acceptable. Why do you
defend it? It was not a cosmic ray nor a cute puppy peeing there; it was a
programming error in an untested part of the code.
 

James Kanze

#include <vector>
int main()
{
std::vector<int> v(100U, 0);
v[100] = v[-1];
}
compiled without complaint ... didn't crash .. Great!
Program must be working .. ;-)
It crashes with all of the compilers I use.
That must be a very specialized list.
A compiler that guaranteed to crash would be a poor compiler
as it would have to perform runtime checks (unless it uses
some hardware technology).

There's a way of turning it off, if the profiler says you have
to. But you only do so if the profiler says you have to.

[...]
I think it is misleading of Mr Kanze to say that using vect[N]
where N is out of range will cause a crash..

It's true that it's a QoI issue, but any good implementation
will cause a crash.
Don't rely on that! Much better to invoke an exception (so use
vect.at(N)).. even if uncaught you are less likely to be
running in a dangerous state.

Either the profiler says that you can't keep the test, in which
case, you can't use at (since it also does the test), or it
doesn't, in which case, you can use the full checking version.
(Remember too that a lot of accesses will be through iterators,
not through []. And they don't have any equivalent of at.)
 

Alf P. Steinbach /Usenet

* James Kanze, on 13.09.2010 19:00:
Don't rely on that! Much better to invoke an exception (so use
vect.at(N)).. even if uncaught you are less likely to be
running in a dangerous state.

Either the profiler says that you can't keep the test, in which
case, you can't use at (since it also does the test), or it
doesn't, in which case, you can use the full checking version.
(Remember too that a lot of accesses will be through iterators,
not through []. And they don't have any equivalent of at.)

Sometimes the obvious needs to be stated.

I hadn't thought of that.

And I've seen this discussion before, quite a number of times.


Cheers,

- Alf
 

Öö Tiib

You're changing the subject here: I just wanted to disprove the
statement "accessing a vector out of boundaries will 'safely crash'
for sure". Anyway:

Possibly I changed the subject. It was about teaching students originally.
My point was only that they should anyway be taught how to turn on all the
bug detection that is available, how to use debugging tools, and
what help such will provide. In that context James was right that it
will crash. In the general case you are right: without special steps taken
it may run.
I think I mentioned it elsewhere, but I've never used the debug
versions of the containers, and I've never seen them used in projects.
My guess is that they're not used nearly as often as you suggest.

Possibly your code coverage with tests is fine enough not to worry.
Possibly you use static analysis tools that often predict possible
buffer boundary violations. Possibly you even check bounds yourself
explicitly or use <algorithm>s rather than indexes or iterators. The quality
is very different and so are skills and platforms. With MS compilers,
for example, you have to take special steps to turn that debug version
off, and it should be done carefully so as not to get into binary
incompatibility hell between modules.
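
For the record, the mechanisms being alluded to, as I recall them (check
your implementation's documentation): libstdc++ has an opt-in checked
mode, while Visual C++ checks by default in debug builds and controls it
with a macro that must agree across every module you link.

// libstdc++: opt in to the checked containers and iterators:
//     g++ -D_GLIBCXX_DEBUG prog.cpp
// Visual C++ (2010): checking is controlled by _ITERATOR_DEBUG_LEVEL,
// which must match in every translation unit and library linked
// together, or you get the binary incompatibility mess described above:
//     cl /D_ITERATOR_DEBUG_LEVEL=0 prog.cpp    (turns checking off)
#include <vector>

int main()
{
    std::vector<int> v(100U, 0);
    return v[100];  // with a checked mode enabled this aborts with a
                    // diagnostic instead of silently misbehaving
}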
 

kwikius

kwikius  wrote:
#include <vector>
int main()
{
  std::vector<int> v(100U, 0);
  v[100] = v[-1];
}
compiled without complaint ... didn't crash .. Great!
Program must be working .. ;-)
It crashes with all of the compilers I use.
That must be a very specialized list.
A compiler that guaranteed to crash would be a poor compiler
as it would have to perform runtime checks (unless it uses
some hardware technology).

There's a way of turning it off, if the profiler says you have
to.  But you only do so if the profiler says you have to.

    [...]
I think it is misleading of Mr Kanze to say that using vect[N]
where N is out of range will cause a crash..

It's true that it's a QoI issue, but any good implementation
will cause a crash.

Then AFAICS you are saying gcc is not a good implementation, though it
follows the C++ standard in this area. There is no requirement to
crash in this case.

I think maybe you are a compiler salesman... but I'm not buying
it! ... ;-)

regards
Andy Little
 

Joshua Maurice

You know the old tale about Baron Munchhausen, who was sinking in a swamp,
but then pulled himself out by his hair.

Fault isolation is like that. In reality you can do it only on the
Archimedean path, using some fixed point. Otherwise it's just the baron's
tale.


Is it really? I'd classify that line of thought as the usual 'wishful
thinking' -- where you expect the solution to be present, so it is talked
into the world. As words are easy to mince.


It is, but just stating that will not create its foundation anywhere. You may
have it, sometimes you can build it -- but it is a serious business.


Process level is one possible 'fixed point'. Provided your environment
implements process handling that way. (I.e. your 'process' runs in a proper
sandbox (say "user space") while some other part of the system runs
separated (say "kernel space"), and nothing can be done in the former to
mess up the latter. Including I/O operations, processor exceptions, resource
exhaustion, etc. -- the supervisory system shall be able to recover from
anything.)

Even that is not easy to accomplish, even with all the support built into
today's microprocessors. But most OSes aim for exactly that, so you can use
the fruits of the gigantic effort.

More reliable systems use separation of more systems.

I do not state that isolation within a process is impossible in the first
place, but it has a similar amount of requirements and way less support. So it
will be too rarely practical, if you stick to the true meaning of robustness,
and don't just wish it in.


I have heard about calculation of chances in too many sad conversations. All
were really just empty claims and playing Russian roulette with customers/users.

Until I observe something better, I stick to binary: can or cannot violate.
Where it can violate, I calculate it as a 100% chance, and act accordingly. Too
bad others do not -- as the world seems to go Terry Pratchett's way (in
Discworld, one-in-a-million chances seem to happen nine times out of ten...).


Yes, you build the threat model like that.   Not the other way around, as
usual ("who on earth will enter 8000 characters in the password field?"
"access to that variable happens rarely, no way will that race condition
manifest" ... etc ... )


I wouldn't bet on the last one, as my programs tend to run for a decade 24/7
with like 1 defect reported in the period, while JVMs are full of
frightening fixes, and their stability has never impressed me.

But that was not what I was really talking about; for the scope of this
discussion we can assume the JVM works perfectly to its specification, and
just look at whether it is okay for a faulting Java program to throw an
exception at the fault-detecting spot instead of halting, or jumping directly
to the monitor.

I don't think so. For a moment let's even set aside concerns about building
the in-process monitor. Suppose we have it, and it is sitting at the top
end of the exception chain, and once reached can magically discard all the
bad stuff and resume some healthy execution.

What can happen in between? Just two things off the top of my head:
1. code running from finally{} blocks
2. catch{} blocks

Say your program writes some output and uses temporary files. On the
normative path, when it finishes, all the temporary files are removed. It
keeps a neat list of them.
If it detects some problem (not a fault, just a recoverable condition, like
access denied or disk full), it throws an exception, cleanup is done upwards
in finally blocks, and at the top an exception handler tells the user that
another try is due. But the state is happy and clean.

Now suppose there is a fault in the program, and the state is messed up. It
is detected in some assert -- and you also throw an exception -- to be caught
even higher than the previous one. The finally blocks run, and process their
job list -- which can be messed up to any degree, so possibly deleting your
whole disk including mounts. Or maybe just the input files instead of the
temporaries, whatever. Not my idea of being robust, or of containment.
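
A sketch of that scenario in C++ terms, with an RAII guard standing in
for the finally{} block (the file names are made up): the cleanup
faithfully executes whatever is in the job list, so if the fault has
already corrupted that list, the 'recovery' itself does the damage.

#include <cstdio>
#include <string>
#include <vector>

// Stand-in for the finally{} block discussed above: on scope exit it
// removes every file named in the job list, trusting the list blindly.
class TempFileCleaner
{
public:
    void track(std::string const& name) { files_.push_back(name); }
    ~TempFileCleaner()
    {
        for (std::size_t i = 0; i != files_.size(); ++i)
            std::remove(files_[i].c_str());
    }
private:
    std::vector<std::string> files_;
};

int main()
{
    TempFileCleaner cleanup;
    cleanup.track("output.tmp");  // hypothetical temporary file
    // ... work; a recoverable error thrown here unwinds through the
    // guard, which correctly removes output.tmp.
    //
    // But if a programming fault has already scribbled over the job
    // list -- say it now contains "input.dat" -- the very same unwind
    // path deletes the user's input. The cleanup code cannot tell a
    // sane list from a corrupted one.
}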

For the other, is there anything to say? Mis-processing exceptions is not
so different from other kinds of bugs. Execution continues in the bad state.


I like that article. But I would not merge the different subjects.  It is
originally about correct programs with different performance.

Certainly quality is also a 'tradeoff' if you measure up the costs of
problem-detection methods. And we well know that software production is still
in the wild-west era: no enforced quality controls, 'provided as is' license
nonsense and so on. And customers buy, often even demand, the crapware.

My practical view is that there is indeed a limit, and diminishing returns to
detection -- a few problems will remain in coding, and some interactions not
addressed in design. But the usual practices stop quality control several
magnitudes under that point. And replace actual quality with talk, or
making up chances. Or delusions about robustness without any kind of proof
it is there.


Actually it does not make me think just like that; when I'm asked about
'likely', I rather stick to the raw threat model -- what actions can or cannot
happen, and with what consequences. Where it is important, checking or
providing kernel code is due, or suggesting external measures/better
isolation.

Certainly we're not clairvoyant, so as-yet-uncovered bugs are not considered.

But that leads far away from the original point, and instead of
relativization, please defend the in-process containment idea. If you mean
it in "general".


C++ is used for many things, not only in unix/win32-like environments. In
embedded stuff you often do not have any OS at all, so it's up to you to
build the "fixed point" inside or outside.


Maybe so; in my view, if someone claims to have a perpetuum mobile, it is
his job to prove it -- just like I want to see the old Baron lifting himself
from the swamp.

OTOH we may not really be in total disagreement. A Java system
(probably) can be configured such that the executing code is fixed, with an
internal monitor that uses no state at all -- thus protected from in-process
problems. Starting from there it may be possible to build a better one,
with some protected state too.
I'm still sceptical that when talk is about such a thing, it is actually built
around my requirements for a "trusted computing base". Please don't take it
personally; I have just encountered too much 'smoke and mirrors' stuff,
especially with Java-like systems where security was just lied in, referring
to non-existent or irrelevant things.

C-like systems are at least famous for being able to corrupt anything, so we
can skip over an idle discussion. :)


The crucial question is whether that 'more' isolation is 'enough' isolation
for the purpose. That is where I stay sceptical. You know, a beer bottle
is way less fragile than an egg, but the difference is irrelevant if you
drop them from the roof onto concrete. Or even from a meter height.

And to point out again, the original idea was introducing yet an extra gap
between the supposedly safe area and the point of fault discovery.

As I think I say in another post, if you want to argue from evidence
that Java does not do enough to give useful fault isolation intra-
process, I can accept that. My problem is "accepting that the natural
unit of fault isolation is the process and this is beyond question and
doubt".

I do believe we're mostly agreeing. I just wanted to clarify some
points to ensure my meaning was getting across.

As a default position, I still prefer to bring down a Java process
when the program encounters a programming error (hence why I
emphasized "may" way back in this thread). I admit that I'm
rather ignorant in this regard, though I would "sleep better" with
Java throwing on an assert than C++ throwing on an assert.

However, I think I'm alone in this default position compared to some
of my coworkers, some of whom might argue vehemently that a Java
library should never, or only rarely, dump core when it detects a programmer
error (where "dump core" might be "print a stack trace and
Runtime.getRuntime().halt()").
 

Balog Pal

Joshua Maurice said:
As you noted (and as I noted earlier), the correctness of such a
system is conditional on the correctness of the C++ library /
framework or the (J)VM. I suspect that the Sun JVM is much more tested
and reliable than your C++ library / framework.
evaluating the robustness of such designs, I would be more concerned
about the correctness of your C++ library / framework as opposed to
the JVM, enough so that I would consider it more "required" to use
separate processes for the C++ application.

This sounds like a religious comment instead of a technical one.

The JVM is one helluva complex program. And it is written possibly using the
very same C++ library/framework we use. (And it executes on top of another
pretty complex system too...)

Also, I see countless releases of the JVM rolling out. Way more than for C++
compilers and especially C++ libs/frameworks. With a fat list of problems
fixed every week. And the people using Java around me tend to swear a lot at
broken stuff, or at things that give different results on different builds.

My estimate is that more effort has been put into C++ compilers and libs over
time; more time has passed for them to evolve; and their complexity is just a
fraction. So how come you figure the latter shall be less bug-ridden?
Code review, application coding standards, etc., collectively
programmer care, can only go so far. Enforcement by the compiler and
runtime as an automated process is far more reliable and robust. I
disagree with your implicit assertion that coding standards are a
perfectly equivalent substitute for compiler enforcement on things
such as type safety.

Sure, automatic tools like a compiler do a good job -- in their scope. Too bad
that scope is way too limited in the engineering realm.

I snipped the portion where you stated that Java does "sensible" things for
some common problems, like race conditions. This is just a great delusion.
Indeed, from 1.5 the memory model changed, and a race is no longer explicitly
called UB in the langspec. But what do you get instead? In *practice*,
all the same problems.

Java can go no further than stating that a 'correctly synchronized' program
will show a correctly sequenced execution. Whereas if you have a data race,
you will mess up the program state all the same. Calling it 'unexpected' or
'unintuitive' instead of 'undefined' will not help that much. And the
compiler will not provide much help -- it is the designer's job to have
correct synchronization and eliminate data races.

Yes, having UB in a program is bad. But if you substitute for the UB cases
some stock defined behavior that does not match what was *expected* by the
programmer, you still have nothing correct, and the results can be almost
as disastrous. Somewhat less bad on one end due to more contained impact,
but somewhat worse due to this kind of common false sense of security.

The compiler is good for picking up typos, but really the correct design,
design reviews and code reviews are the tools that make it possible to come up
with actually working stuff.

Simplest example: an uninitialized int is UB in C++, and a well-defined 0 in
Java. If you wanted to set it to a particular value but missed some case in
the if/switch, how does the compiler/runtime help you? (Actually for C++ you
have a fair chance of getting a compiler warning with an optimized build, or a
flag from valgrind; certainly if you did set the value, just to a wrong one,
you're left to tests and code review.)
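
A minimal sketch of that example (hypothetical names): the switch forgets
one case, so the int may be read uninitialized, which is undefined
behavior in C++; g++ -Wall on an optimized build has a fair chance of
warning about it, and valgrind flags the read at run time.

#include <iostream>

enum Color { red, green, blue };

int label(Color c)
{
    int value;  // deliberately left uninitialized
    switch (c) {
    case red:   value = 1; break;
    case green: value = 2; break;
    // case blue: missing -- the bug under discussion
    }
    return value;  // undefined behavior when c == blue
}

int main()
{
    std::cout << label(blue) << '\n';
}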
 

Öö Tiib

Let me try to understand your position more.

Why is the process (such as under Linux, or any other reasonable OS) a
good level of fault isolation for C++ programs?

If you want to say that certain systems have to shut down on fault,
then so they do. It has nothing to do with software trying to fix
unknown programming errors in itself at run time, or other such nonsense.
Even the location of the error is not known without analysis. It may
have been diagnosed far from the real cause.
 

BGB / cr88192

Pascal J. Bourguignon said:
Alf P. Steinbach /Usenet said:
[This article is cross-posted to comp.lang.c++ and comp.programming]

I was a bit shocked when I saw this a few days ago. [...]

http://ocw.mit.edu/courses/electric...nce/6-096-introduction-to-c-january-iap-2009/

6.096 Introduction to C++
[...]
- Alf (shocked)

Indeed, this is shocking.

Introductory programming courses are bad in general IME...

IME, they will be run by teachers with a fairly noobish understanding of the
topic in many areas (reserving the more knowledgeable teachers for teaching
more advanced courses, i.e., those where people actually do stuff, rather than
rehashing the basics to groups of students who will likely not move on to do
much of anything related to the field, most taking the course either because
they were curious/bored, because they need something in their elective
credits, or because their major requires it even though they intend to go do
something else).

but, this may be ok, if the teachers still know more than the typical
students, as the students will usually scratch their heads and forget enough
that any errors will not likely matter: they will learn better if and when
they actually go on to do much programming...

and, any student who knows better already knows enough that the course
is optional anyway. it's just the whole thing with colleges: one sits around
in classes, and one either already knows the stuff and does well, or one
doesn't know the topic and is screwed over anyway (since many of these
topics are not something one can adequately pick up in the course of the
course, unless the assignments are severely dumbed down vs the actual
in-class topic...).

or such...
 

Lie Ryan

I maintain a rather less widely used Java open
source tool, which has about 2800 classes, organized into packages,
which seems to be a workable size for one person to maintain. My
experience is that scripting languages won't work for this size of
projects.

Except that in a scripting language, you tend to cut the number of classes
that in Java takes 2800 down to around 400.

Why such a drastic reduction?
1. Java forces you to make everything into classes.
2. Java syntax is rather verbose, requiring you to partition more of
everything, or else things get hairy easily.
3. Scripting languages tend to have large and very specialized
standard libraries.

And that will take the huge number to a much more manageable level.
 

James Kanze

Perl is pretty good for a lot of tasks - complex reporting, data
extracts, etc. A comparable Java/C++ program would probably have 10x/40x more
lines of code respectively.

Perl is probably the worst language I've ever seen. But it does
have a large number of ready-made modules which can simplify
a lot of tasks. (So do C++ and Java, for that matter. But
they're generally a lot more difficult to find.)

In general, scripting languages are good for smaller projects.
They do require fewer lines of code: no variable declarations,
etc. What they leave out is what is generally necessary to
manage a large project: if I consider AWK, in a couple of
hundred lines, it's pretty simple to keep track of which
variables have been used, what they contain (including the
logical type---AWK variables are untyped), etc. Make that tens
of thousands of lines, or more, and it rapidly becomes
impossible. So you require declarations, and let the compiler
do the checking for you.
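
Concretely (a made-up two-liner): in AWK a misspelled variable silently
springs into existence with an empty value, while the declaration
requirement turns the same typo into a compile-time error in C++.

#include <iostream>

int main()
{
    int total = 0;
    total += 42;
    // totl += 1;  // typo: C++ rejects this at compile time
    //             // ("'totl' was not declared in this scope");
    //             // in AWK the same misspelling would silently
    //             // create a new, empty variable
    std::cout << total << '\n';
}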
Interesting. James, everyone respects your programming
skills, but your writings on Java are frankly nuts :)

It's based on concrete experience. Java has some serious
problems.
Look at the open source
world, there are tons of very highly regarded Java projects,
particularly in the areas of XML processing, data processing web
servers, etc.

I'm not saying it's impossible. Just more difficult than with
a better designed language. Most of the larger Java projects
I've seen have been, in fact, GUI front ends to other tools
(e.g. Eclipse). Java is very good for GUI's, not because of the
language, but because of its libraries.

Depending on what you're targeting, Java's distribution model
(with one jar file for all platforms) may be an advantage or
a disadvantage (the "write once, debug everywhere" syndrome).
Some are collaborative, some are single author. A good
example of a widely used single author tool is Saxon maintained by
Michael Kay. I checked an older version, and it has about 1000
classes, organized into packages, typically from 100-200 loc, although
some are larger. I maintain a rather less widely used Java open
source tool, which has about 2800 classes, organized into packages,
which seems to be a workable size for one person to maintain. My
experience is that scripting languages won't work for this size of
projects.

Totally agreed. They do seem to be somewhere in between, where
Java is still quite usable. Whether you choose Java, C# or C++
for such projects depends on a lot of issues: Java is clearly
the least powerful language of the three, but distribution
issues may make it preferable.
There are C++ counterparts for some of the Java projects, e.g. Apache
xerces and xalan C++ versions, but they tend to have fewer features,
larger code bases, weird syntax, custom classes for things like
strings, and the consequent need to spend a day on google to figure
out how it all works.

A lot of the open source projects seem to be open source just to
show off how poorly they are engineered :). (This isn't
necessarily the case for the ones you mention: from what I've
seen, Apache is on a level close to many professional
projects---better than a lot I've seen, even. Historical
reasons also lead to e.g. using your own string class.)
 
