Exception Misconceptions: Exceptions are for unrecoverable errors.


tanix

Kaz Kylheku wrote:
Stefan Ram wrote:
[snip]
  More elegantly? Actually, for correct and secure C++ code,
  all functions need to be written to be »exception safe«, but
  only a minority of C++ programmers does so or even is aware
  of it.

The above is false. Exception-safe code is needed to write code
that avoids resource leaks in the face of an exception.
For instance:
   {
      char *p = new char[256];
      f();
   }
Hm, why would you do this?

To demonstrate one way in which code fails to be exception safe.
Isn't
   {
      vector<char> p(256);
      f();
   }
simpler?

This code no longer demonstrates a resource leak in the face of an exception,
and so it would not have made a suitable example to accompany my article.

Doh?

I guess what Branimir tried to tell you was that you should always release
your resources in a destructor. This automatically gives you the
basic exception guarantee.

Except that in some cases your destructor is not called.
Plus, James Kanze can tell you more about non-trivial destructors.
:--}
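The release-in-a-destructor idea is easy to sketch (a minimal example; the names are mine, not from the thread): the wrapper's destructor runs during stack unwinding, so the buffer is freed even when f() throws.

```cpp
#include <cstddef>
#include <stdexcept>

// Minimal RAII wrapper: the destructor releases the buffer, so it
// runs during stack unwinding if f() throws. (Hypothetical names.)
struct Buffer {
    char* p;
    explicit Buffer(std::size_t n) : p(new char[n]) {}
    ~Buffer() { delete[] p; }
    Buffer(const Buffer&) = delete;
    Buffer& operator=(const Buffer&) = delete;
};

void f() { throw std::runtime_error("boom"); }

bool demo() {
    try {
        Buffer b(256);  // owned by this scope
        f();            // throws; ~Buffer() still runs
    } catch (const std::runtime_error&) {
        return true;    // no leak: delete[] already happened
    }
    return false;
}
```

This is exactly what vector<char> buys in the earlier snippet; the point is only that the cleanup lives in a destructor.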

--
Programmer's Goldmine collections:

http://preciseinfo.org

Tens of thousands of code examples and expert discussions on
C++, MFC, VC, ATL, STL, templates, Java, Python, Javascript,
organized by major topics of language, tools, methods, techniques.
 

tanix

Memory management is not a problem. You can implement GC
for any application, even in assembler. Reference counting
is a simple form of GC and works well in C++ because
of RAII.
The problem is that in Java you don't have RAII, and
resource management you still have to implement by
reference counting; or do you close files immediately in
the same scope and never return descriptors or other
resources to the user? So you have to manually call
addRef/releaseRef to implement GC for everything that is
not a Java object.... in Java.
in java

There is no reference counting in Java as far as I know.
Not that it matters, I guess...
All in all, at first I warmed to the idea of GC;
it is not a problem to have it. But then I tried
Haskell and didn't have memory leaks, rather
"space leaks" ;)
And somebody tried to convince me that conservative GC
is faster than shared_ptr/auto_ptr (what a ....;)


Greets

 

tanix

Yes it is. A mutex held is also a resource, and so is a transaction.
Both should be wrapped in a class having appropriate destructor
semantics.

Yup.

You have to unwind EVERYTHING, no matter how small it is.
Otherwise, sooner or later your box will run out of steam.
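For the mutex case, the standard C++11 wrapper is std::lock_guard; a minimal sketch of why the destructor semantics matter (the demo names are mine):

```cpp
#include <mutex>
#include <stdexcept>

std::mutex m;
int shared_count = 0;

// std::lock_guard unlocks in its destructor, so the mutex is
// released even when the guarded code throws.
void bump_and_maybe_throw(bool do_throw) {
    std::lock_guard<std::mutex> lock(m);
    ++shared_count;
    if (do_throw) throw std::runtime_error("oops");
}   // ~lock_guard() runs here, unlocking m

bool demo() {
    try { bump_and_maybe_throw(true); } catch (...) {}
    bump_and_maybe_throw(false);  // would deadlock if the lock had leaked
    return shared_count == 2;
}
```

A database transaction can be wrapped the same way: commit explicitly on success, roll back in the destructor.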

 

peter koch

Kaz Kylheku wrote:
Stefan Ram wrote:
[snip]
  More elegantly? Actually, for correct and secure C++ code,
  all functions need to be written to be »exception safe«, but
  only a minority of C++ programmers does so or even is aware
  of it.
Why?
The above is false. Exception-safe code is needed to write code
that avoids resource leaks in the face of an exception.
For instance:
   {
      char *p = new char[256];
      f();
   }
hm , why would you do this?
To demonstrate one way in which code fails to be exception safe.
Isn't
   {
      vector<char> p(256);
      f();
   }
simpler?
This code no longer demonstrates a resource leak in the face of an exception,
and so it would not have made a suitable example to accompany my article.

I guess what Branimir tried to tell you was that you should always release
your resources in a destructor. This automatically gives you the
basic exception guarantee.

Except that in some cases your destructor is not called.
Plus, James Kanze can tell you more about non-trivial destructors.

No. The destructor is always called if the process is not terminated.

/Peter
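Both halves of this exchange can be demonstrated in a few lines: destructors of locals do run during stack unwinding, provided the exception is caught somewhere; if it is never caught, the implementation may reach std::terminate without unwinding at all, which is presumably the caveat tanix had in mind. A minimal sketch of the caught case (names mine):

```cpp
#include <stdexcept>

int destroyed = 0;

// Counts how many times the destructor actually ran.
struct Tracer {
    ~Tracer() { ++destroyed; }
};

void g() {
    Tracer t;                       // local object on the stack
    throw std::runtime_error("x");  // unwinding begins here
}

bool demo() {
    try { g(); } catch (...) {}
    return destroyed == 1;  // ~Tracer() ran during unwinding
}
```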
 

Stefan Ram

Branimir Maksimovic said:
Refcounts are negligible in comparison to what gc is doing.
GC cannot be efficient since it cannot access program

»[A]llocation in modern JVMs is far faster than the best
performing malloc implementations. The common code path
for new Object() in HotSpot 1.4.2 and later is
approximately 10 machine instructions (data provided by
Sun; see Resources), whereas the best performing malloc
implementations in C require on average between 60 and 100
instructions per call (Detlefs, et. al.; see Resources).
And allocation performance is not a trivial component of
overall performance -- benchmarks show that many
real-world C and C++ programs, such as Perl and
Ghostscript, spend 20 to 30 percent of their total
execution time in malloc and free -- far more than the
allocation and garbage collection overhead of a healthy
Java application (Zorn; see Resources).«

http://www-128.ibm.com/developerworks/java/library/j-jtp09275.html?ca=dgr-jw22JavaUrbanLegends
I don't want to discuss this, but it is obvious that nothing in Java is
designed with performance in mind. Quite the opposite....

Java 1.6 (aka »Java 6«) is already one of the fastest languages:

http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=all

And Java 1.7 (aka »Java 7«) is reported to be even faster:

»Java 5 <=== 18% faster=== < Java 6 < ===46% faster===< Java 7«

http://www.taranfx.com/blog/java-7-whats-new-performance-benchmark-1-5-1-6-1-7

See also:

http://www.stefankrause.net/wp/?p=9

http://paulbuchheit.blogspot.com/2007/06/java-is-faster-than-c.html

http://www.idiom.com/~zilla/Computer/javaCbenchmark.html
 

Branimir Maksimovic

Stefan said:
Branimir Maksimovic said:
Refcounts are negligible in comparison to what gc is doing.
GC cannot be efficient since it cannot access program

»[A]llocation in modern JVMs is far faster than the best
performing malloc implementations.

Well, look, controlling memory allocation is a crucial performance
feature of C++. You can write special allocators in every
class just by overloading new/delete. The performance gain on
modern hardware comes from where you allocate objects for a
particular class, not from the allocation in itself....
Because depending on the memory layout and dispersion
of objects you can gain 2-10 times speed, because
of the cache and how you access objects.
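That per-class allocator trick can be sketched roughly like so (a hypothetical free-list pool; the chunk size and names are mine): Node overloads operator new/delete so instances come from one contiguous block, which is where the cache-locality gain comes from.

```cpp
#include <cstddef>
#include <vector>

struct Node {
    int value = 0;
    Node* next = nullptr;

    // Class-specific allocation functions: every `new Node` goes
    // through the pool below instead of the general-purpose heap.
    static void* operator new(std::size_t);
    static void operator delete(void*) noexcept;
};

namespace {
    // A slot is either a free-list link or storage for one Node.
    union Slot {
        Slot* next_free;
        alignas(Node) unsigned char bytes[sizeof(Node)];
    };
    Slot* free_list = nullptr;
    std::vector<Slot*> chunks;  // kept so the memory stays reachable
}

void* Node::operator new(std::size_t) {
    if (!free_list) {
        // Grab one contiguous chunk; objects end up close together.
        const std::size_t n = 256;
        Slot* chunk = new Slot[n];
        chunks.push_back(chunk);
        for (std::size_t i = 0; i < n; ++i) {
            chunk[i].next_free = free_list;
            free_list = &chunk[i];
        }
    }
    void* slot = free_list;
    free_list = free_list->next_free;
    return slot;
}

void Node::operator delete(void* p) noexcept {
    Slot* s = static_cast<Slot*>(p);  // push the slot back, LIFO
    s->next_free = free_list;
    free_list = s;
}

bool demo() {
    Node* a = new Node;
    Node* b = new Node;
    delete a;
    Node* c = new Node;       // LIFO free list reuses a's still-warm slot
    bool reused = (c == a);
    delete b;
    delete c;
    return reused;
}
```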

Allocation is not where GC fails, rather deallocation....

Because there is no faster and simpler way to perform collection
than to stop the program, perform collection in multiple threads, then let
the program work....

I think it is clear that this concept fails in combination with
threads, because they share the same address space...
It can work all right with processes, which don't share an address space.

Greets
 

Stefan Ram

Branimir Maksimovic said:
Allocation is not where GC fails, rather deallocation....

Deallocation matters for long-running programs.
A program that is running only a short time might
never need to actually reclaim memory. Otherwise,
I agree that this takes some time indeed.
 

Balog Pal

Stefan Ram said:
Deallocation matters for long-running programs.
A program that is running only a short time might
never need to actually reclaim memory. Otherwise,
I agree that this takes some time indeed.

Hm, you actually suggest that if a program does just a couple of allocations up
front and keeps all the objects, comparing the speed of the allocation makes
sense?

Can you provide an application example where Java's allegedly superfast
allocation can be noticed?
 

Stefan Ram

Balog Pal said:
Can you provide an application example where Java's
allegedly superfast allocation can be noticed?

BTW: The benchmarks I have referred to should have measured
the execution time already /including/ possible GC runs.
So, when they find Java to be nearly as fast as C++,
this already includes the allocation and possibly GC times.

And regarding your question: if allocation in Java took
more time than it does now, those benchmarks would
obviously have found Java to be slower. So the time Java
needs for an allocation can be noticed in nearly every Java
program as a part of the overall performance, since most
Java programs do many allocations.
 

Branimir Maksimovic

Kaz said:
GC is very efficient from an SMP point of view, because it allows
for immutable objects to be truly immutable, over most of their
lifetime.

SMP means nothing in comparison to the performance gain you get
from the CPU cache.

Immutable objects are a really bad idea; for example, all objects
in Haskell are immutable. Array update is O(n), and string
is implemented as a linked list. That's why no one really uses
Haskell's default objects; rather we have a fast mutable string, fast
mutable array, fast mutable this and that, etc., which are
actually structures implemented in C.
And no one actually programs in functional style; rather,
payload code is wrapped in monads ;)
Object copy is an expensive operation on today's hardware.
It is always much faster and cheaper to perform an update
or use copy-on-write and reference-counted strings....
because they are cache friendly. Mutex lock/unlock is a very cheap
operation.
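A rough sketch of that copy-on-write idea (leaning on std::shared_ptr's count for brevity; a production COW string manages the count by hand and has well-known multithreading caveats):

```cpp
#include <memory>
#include <string>

// Copy-on-write sketch: copies share one buffer; only a writer pays
// for a deep copy, and only while the buffer is actually shared.
class CowString {
    std::shared_ptr<std::string> data;
public:
    explicit CowString(const char* s)
        : data(std::make_shared<std::string>(s)) {}

    const std::string& read() const { return *data; }

    void append(const char* s) {
        if (data.use_count() > 1)  // someone else still sees this buffer
            data = std::make_shared<std::string>(*data);  // clone on demand
        *data += s;
    }

    bool shares_buffer_with(const CowString& o) const { return data == o.data; }
};

bool demo() {
    CowString a("abc");
    CowString b = a;                       // cheap copy: refcount bump only
    bool shared = a.shares_buffer_with(b);
    b.append("def");                       // write triggers the deep copy
    return shared && !a.shares_buffer_with(b)
        && a.read() == "abc" && b.read() == "abcdef";
}
```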
Look, I tested a quad Xeon against a home Athlon dual core.
Initializing 256 MB of RAM from 4 threads (each thread 64 MB)
on the quad Xeon at a higher CPU frequency against the old dual Athlon, the Athlon
performs better or the same! The catch-22 is that I tried the fastest Athlon
and got the same result as the old Athlon ;) because they have the same
memory bus speed ;)
On Intels before the i7 architecture, the secret of performance
was not to miss the cache much....

Greets
 

tanix

Branimir Maksimovic said:
Refcounts are negligible in comparison to what gc is doing.
GC cannot be efficient since it cannot access program

»[A]llocation in modern JVMs is far faster than the best
performing malloc implementations. The common code path
for new Object() in HotSpot 1.4.2 and later is
approximately 10 machine instructions (data provided by
Sun; see Resources), whereas the best performing malloc
implementations in C require on average between 60 and 100
instructions per call (Detlefs, et. al.; see Resources).
And allocation performance is not a trivial component of
overall performance -- benchmarks show that many
real-world C and C++ programs, such as Perl and
Ghostscript, spend 20 to 30 percent of their total
execution time in malloc and free -- far more than the
allocation and garbage collection overhead of a healthy
Java application (Zorn; see Resources).«

http://www-128.ibm.com/developerworks/java/library/j-jtp09275.html?ca=dgr-jw22JavaUrbanLegends

Good read.

This is just an insult, and not only an insult to the intelligence
of those who designed the language, but total fiction.
Java 1.6 (aka »Java 6«) is already one of the fastest languages:

Yep, that is what I suspected.
And Java 1.7 (aka »Java 7«) is reported to be even faster:

»Java 5 <=== 18% faster=== < Java 6 < ===46% faster===< Java 7«

http://www.taranfx.com/blog/java-7-whats-new-performance-benchmark-1-5-1-6-1-7

Cool. I like that. Helps me quite a bit.

 

Branimir Maksimovic

tanix said:
This is just an insult and not only an insult to intelligence
of those, who designed the language, but total fiction.

Yeah, right. I'm reading the local OS newsgroups, and sysadmins always
ask how to tweak the VM to perform faster...
Lots of complaints about Java software and performance on high-end
hardware and SANs.....
Yep, that is what I suspected.

Yeah, that site is a really good reference for language benchmarks ;)
Why don't they test an application with more than 100 lines of code ;)

Greets...
 

tanix

Yeah, right. I'm reading the local OS newsgroups, and sysadmins always
ask how to tweak the VM to perform faster...

Does not mean anything to me.
Some people are obsessed beyond reason.
Lots of complaints about Java software and performance on high-end
hardware and SANs.....


Yeah, that site is a really good reference for language benchmarks ;)
Why don't they test an application with more than 100 lines of code ;)

Greets...

 

Isaac Gouy

Yeah, that site is a really good reference for language benchmarks ;)
Why don't they test an application with more than 100 lines of code ;)


It's too difficult to get anyone to read programs that are shorter
than 100 lines of code.
 

James Kanze

GC is a heavy performance killer, especially on multiprocessor systems
in combination with threads.... it is slow, complex and inefficient...

Obviously, you've never actually measured. A lot depends on the
application, but typically, C++ with garbage collection runs
slightly faster than C++ without garbage collection. Especially
in a multi-threaded environment,

[...]
a virtual machine is also a heavy performance killer...

Which explains why some of the leading experts in optimization
claim that it is necessary for the best optimization. (I don't
fully buy that claim, but a virtual machine does have a couple
of advantages when it comes to optimizing: it sees the actual
data being processed, for example, and the actual machine being
run on, and can optimize for both.)
Yes.
I think Java is designed in such a way that it will still be slow in
comparison to other compiled languages... if it is a compiled
language.

First, Java is a compiled language, and second, it's not slower
than any of the other compiled languages, globally. (Specific
programs may vary, of course.)
 

James Kanze

Hm, explain to me how any thread can access or change any
pointer in memory without a lock while GC is collecting....
There is no way for GC to collect without stopping all threads,
without locking.... because GC is just another thread (or threads)
in itself...

Maybe. I've not actually studied the implementations in
detail. I've just measured actual time. And the result is that
over a wide variety of applications, garbage collection is, on
the average, slightly faster. (With some applications where it
is radically faster, and others where it is noticeably slower.)
Of course. GC is a complex program that has only one purpose:
to let the programmer not write free(p), but the programmer
still has to write close(fd).
What's the purpose of that?

Fewer lines of code to write.

If you're paid by the line, garbage collection is a bad thing.
Otherwise, it's a useful tool, to be used when appropriate.
Refcounts are negligible in comparison to what gc is doing.

Reference counting is very expensive in a multithreaded
environment.

And in the end, measurements trump abstract claims.

[...]
Manual memory deallocation is simple, fast and efficient.
Nothing so complex as GC. The cost of new and delete is nothing
in comparison to GC.

That's definitely not true in practice.

[...]
GC cannot be implemented efficiently since it has to mess with
memory...

What you mean is that you don't know how to implement it
efficiently. Nor do I, for that matter, but I'm willing to
accept that there are people who know more about the issues than
I do. And I've measured the results of their work.

[...]
I don't want to discuss this, but it is obvious that nothing
in Java is designed with performance in mind. Quite the
opposite....

You don't want to discuss it, so you state some blatant lie, and
expect everyone to just accept it at face value. Some parts of
Java were definitely designed with performance in mind (e.g.
using int, instead of a class type). Others less so. But the
fact remains that with a good JVM, Java runs just as fast as C++
in most applications. Speed is not an argument against Java
(except for some specific programs), at least on machines which
have a good JVM.
 

James Kanze

tanix said:
[...]
Memory management is not a problem. You can implement GC for
any application, even in assembler. Reference counting is a
simple form of GC and works well in C++ because of RAII.

Reference counting doesn't work in C++, because of cycles. And
reference counting is very, very slow compared to the better
garbage collector algorithms.
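The cycle problem, and the usual C++ answer to it, fits in a few lines (a minimal sketch; std::weak_ptr provides the non-owning back-edge):

```cpp
#include <memory>

// Two nodes linked by shared_ptr in both directions would keep each
// other's refcount above zero forever: a leak reference counting
// cannot see. Making the back-edge a weak_ptr breaks the cycle.
struct Node {
    std::shared_ptr<Node> next;  // owning edge
    std::weak_ptr<Node> prev;    // non-owning back-edge
};

bool demo() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->next = b;   // a owns b
    b->prev = a;   // does not bump a's count
    // Only the local variable owns a; with a shared_ptr back-edge the
    // count would be 2 and neither node could ever be destroyed.
    return a.use_count() == 1 && b.use_count() == 2;
}
```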

[...]
And somebody tried to convince me that conservative GC is
faster than shared_ptr/auto_ptr (what a ....;)

And you refused to even look at actual measurements. I'm aware
of a couple of programs where the Boehm collector significantly
outperforms boost::shared_ptr. (Of course, I'm also aware of
cases where it doesn't. There is no global perfect solution.)
 

James Kanze

»[A]llocation in modern JVMs is far faster than the best
performing malloc implementations. The common code path
for new Object() in HotSpot 1.4.2 and later is
approximately 10 machine instructions (data provided by
Sun; see Resources), whereas the best performing malloc
implementations in C require on average between 60 and 100
instructions per call (Detlefs, et. al.; see Resources).

Although I agree with the final results (having made some actual
measurements), the wording above is very definitely
"advertising". It's a well known fact that *allocation* is very
fast in a copying garbage collector---even 10 instructions seems
like a lot. But this is partially offset by the cost of
collecting, and in early implementations (*not*, presumably
HotSpot) by the fact that each dereference involved an
additional layer of indirection.
And allocation performance is not a trivial component of
overall performance -- benchmarks show that many
real-world C and C++ programs, such as Perl and
Ghostscript, spend 20 to 30 percent of their total
execution time in malloc and free -- far more than the
allocation and garbage collection overhead of a healthy
Java application (Zorn; see Resources).«

That's also a bit of advertising. I really wouldn't call an
interpreter a "typical" program. For that matter, I don't even
know if there are typical programs, C++ is used for so many
different things. (In numeric processing, for example, it's
quite possible for a program to run hours without a single
allocation.)

There's an old saying: don't trust any benchmark you didn't
falsify yourself. Garbage collection is a tool, like any other.
Sometimes (a lot of the time) it helps. Other times it doesn't.
If my experience is in any way typical (but it probably isn't),
its impact on performance is generally negligible, one way or
the other. It's essential for robustness (no dangling
pointers), but a lot of programs don't need that much
robustness. For the rest, it depends on the application, the
programmer, and who knows what other aspects. It's a shame that
it's not officially available as part of the language, but I'd
also oppose any move to make it required.
 

James Kanze

Stefan Ram wrote:

[...]
Allocation is not where GC fails, rather deallocation....

It doesn't fail there, either. But any comparison should take
deallocation into consideration. (Well, formally... there's no
deallocation with garbage collection. But the system must take
some steps to determine when memory can be reused.)
Because there is no faster and simpler way to perform
collection than to stop the program, perform collection in
multiple threads, then let the program work....

Try Googling for "incremental garbage collection".
 
