Exception Misconceptions: Exceptions are for unrecoverable errors.

tanix

tanix said:
James Kanze wrote:
Stefan Ram wrote:
[...]
Allocation is not where GC fails, rather deallocation....
It doesn't fail there, either. But any comparison should take
deallocation into consideration. (Well, formally... there's no
deallocation with garbage collection. But the system must take
some steps to determine when memory can be reused.)

I DO like that one. What a master stroke!

:--}
Because there is no faster and simpler way to perform
collection than to stop the program, perform collection in
multiple threads, then let the program work....
Try Googling for "incremental garbage collection".

Incremental garbage collection is a form of collection where
you don't free everything immediately, but this does not
change the fact that whenever you have to see whether something
is referenced or not, you have to stop the program and examine pointers.

Yes. This IS becoming a matter of life and death, it seems.

:--}

OK, I give up. Merry Christmas! ;)

You do not have to give up. Otherwise, what are we going to do here?
:--}

I'd just like to see a more elegant argument.

 
Branimir Maksimovic

tanix said:
tanix said:
James Kanze wrote:
Stefan Ram wrote:
[...]
Allocation is not where GC fails, rather deallocation....
It doesn't fail there, either. But any comparison should take
deallocation into consideration. (Well, formally... there's no
deallocation with garbage collection. But the system must take
some steps to determine when memory can be reused.)
I DO like that one. What a master stroke!

:--}

Because there is no faster and simpler way to perform
collection than to stop the program, perform collection in
multiple threads, then let the program work....
Try Googling for "incremental garbage collection".

Incremental garbage collection is a form of collection where
you don't free everything immediately, but this does not
change the fact that whenever you have to see whether something
is referenced or not, you have to stop the program and examine pointers.
Yes. This IS becoming a matter of life and death, it seems.

:--}
OK, I give up. Merry Christmas! ;)

You do not have to give up. Otherwise, what are we going to do here?
:--}

I'd just like to see a more elegant argument.

I thought I gave up ;). Merry Christmas again ;)
 
LR

Branimir said:
Java is a compiled language in the sense that any interpreted language
is run-time compiled... but that does not make those languages
compiled...

What about JIT compilation?

LR
 
Branimir Maksimovic

LR said:
What about JIT compilation?

LR
Well, every interpreter can compile code after it interprets it.
Java compiles to byte code, which cannot be executed
natively...

If that counts, then it is a compiled language...

Greets
 
Stefan Ram

LR said:
What about JIT compilation?

Compilation is not a property of languages but of implementations.

(However, when the language has an »eval«, each implementation also
needs to provide an interpreter.)
 
James Kanze

Yes, of course, it all starts with the fact that one cannot
compare the speed of languages but only the speed of
specific programs running under a specific /implementation/
of a language running under a specific operating system
running on specific hardware.

Yes. And the fact that any given implementation (and any given
language, for that matter) will have its strong points and its
weak points. If you want a language to look good, you write to
its strong points, and to the other language's weak points.
What is language-specific is only the fact that some
language features make some kinds of optimization possible
(like restrict in C) or impossible (e.g., when aliasing by
pointers is possible).

There's possible and impossible, but there's also
difficulty. There are C++ compilers, for example, which use
profiler output, and if it makes a difference, generate two
versions of a function, depending on whether there is aliasing
or not, with a quick test at the top to decide which one to use.
But it's a lot more effort than in Java. Similarly, given an
array of Point (where Point is basically two doubles), it's
perfectly conceivable that a Java compiler treat it as an array
of double[2]. But it's a lot more work, and a lot less likely,
than for a C++ compiler.
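
To make that concrete, here is a minimal hand-written C++ sketch of the transformation being described. All names are invented for illustration; a real compiler performs this cloning itself, guided by profile data, below the level of the source language.

#include <cstddef>
#include <functional>

// The compiler suspects that dst and src never overlap, so it emits a
// specialised clone plus a quick runtime overlap test at the top, falling
// back to the conservative version when the test fails.

void scale_no_alias(double* dst, const double* src, std::size_t n, double k)
{
    // In the compiler-generated clone this path is known to be alias-free,
    // so the loop can be vectorised aggressively.
    for (std::size_t i = 0; i != n; ++i)
        dst[i] = k * src[i];
}

void scale_may_alias(double* dst, const double* src, std::size_t n, double k)
{
    // Conservative version: dst may overlap src, so loads and stores
    // cannot be freely reordered.
    for (std::size_t i = 0; i != n; ++i)
        dst[i] = k * src[i];
}

void scale(double* dst, const double* src, std::size_t n, double k)
{
    // Quick test at the top.  std::less gives a total order over pointers,
    // so the comparison is well defined even for unrelated arrays.
    std::less<const double*> before;
    bool overlap = before(dst, src + n) && before(src, dst + n);
    if (overlap)
        scale_may_alias(dst, src, n, k);
    else
        scale_no_alias(dst, src, n, k);
}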
So, if I had to implement some algorithm, I would not reject
Java from the start because it is slow, but would do some
benchmarking with code in the direction of that algorithm.

Exactly. Most of the time, what eliminates Java is that it
doesn't support really robust programming.
After all, /if/ Java is sufficiently fast for my purpose,
it gives me some conveniences, such as run-time array index
checking, automatic memory management and freedom from
the need for (sometimes risky) pointer arithmetic.

It also pretty much makes programming by contract impossible,
requires implementations of concrete classes to be in the same
file as the class definition, and does a number of other things
which make programming in the large difficult.

That said, it's not a bad language for small non-critical
applications. And it has a pretty nice GUI library, and is well
integrated in web server environments.
 
Stefan Ram

James Kanze said:
what eliminates Java is that it doesn't support really robust
programming. It also pretty much makes programming by
contract impossible, requires implementations of concrete
classes to be in the same file as the class definition.

If you would like to provide a minimal example of C++ source code
with a class definition and a concrete class in two different
files, I hope that I can then show how to do the same in Java.
 
James Kanze

James Kanze <[email protected]> wrote:

[...]
Yep. And the more high-level some abstraction is, the more
performance it can gain and the less of even a theoretical
advantage any other approach may claim.

More generally, the more information a compiler has, including
information concerning why some operation is taking place, the
better it can optimize. It's pretty well established that when
the language has built in bounds checking (so the compiler knows
the why of the comparisons), the compiler can generate better
code.
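
A small hand-written C++ illustration of the point, with hypothetical names. A compiler that knows what the comparison is for can do this hoisting itself; here it is spelled out by hand.

#include <cstddef>
#include <stdexcept>
#include <vector>

// Checked access on every iteration: n comparisons, and the potential throw
// in the middle of the loop inhibits some optimisations.
double sum_checked(const std::vector<double>& v, std::size_t n)
{
    double s = 0.0;
    for (std::size_t i = 0; i != n; ++i)
        s += v.at(i);                // bounds check inside the loop
    return s;
}

// When it is known *why* the checks exist, the n per-element checks collapse
// into a single range check before the loop (and a bad n is rejected up
// front instead of part-way through).
double sum_hoisted(const std::vector<double>& v, std::size_t n)
{
    if (n > v.size())
        throw std::out_of_range("sum_hoisted");
    double s = 0.0;
    for (std::size_t i = 0; i != n; ++i)
        s += v[i];                   // unchecked: the range was validated once
    return s;
}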

In the case of a VM, of course, the compiler has very exact
knowledge about the input data, the frequency of the various
paths, and the CPU it is running on. All of which are important
information. Where I have my doubts is because in a VM, the
compiler is severely limited in the time it can take for its
analysis; if an optimizing compiler takes a couple of hours
analysing all of the variations, fine, but in a VM? But since
I'm not an expert in this field, I don't really know.
And that is exactly what I am seeing in my own situation.

The few measurements I've made would bear you out. I'm sure
that there are applications where C++ will be faster, and there
are probably some where Java will be faster, but for most
applications, C++ is chosen not for speed, but because it has
greater expressibility.
 
Branimir Maksimovic

James said:
[...]
Virtual machines are always slower than real machines....
This is a blanket statement by someone who is obsessed with
machine cycles while his grand piece of work is not even worth
mentioning, I'd say.

Above all, it's a statement made by someone who's never made any
actual measurements, nor spoken to experts in the field.

How do you know that?

Greets
 
Branimir Maksimovic

James said:
The few measurements I've made would bear you out. I'm sure
that there are applications where C++ will be faster, and there
are probably some where Java will be faster, but for most
applications, C++ is chosen not for speed, but because it has
greater expressibility.

No. Java is chosen because it is a simplified language which
anyone can learn and maintain in one month. C++ is an ugly
and complex language with a lot of traps and
requires several years and tears to learn.
Java sacrifices a lot of things, but it is a faster language
in the sense that a programmer can use it effectively and
write working programs in much shorter time than in C++.
Simple as that. While C++ has more tools and power
as a language, once you've learned it you can do better
than in Java. But a lot of programmers are not capable
of producing code in time, therefore Java wins.

Greets.
 
tanix

You want me to tell you which search engine it is ;)

Not necessarily. But I am impressed already.
I can't tell you that...

But can you tell me more specifics on how exactly things
differed so drastically?

 
tanix

[...]
a virtual machine is also a heavy performance killer...
Which explains why some of the leading experts in
optimization claim that it is necessary for the best
optimization. (I don't fully buy that claim, but a virtual
machine does have a couple of advantages when it comes to
optimizing: it sees the actual data being processed, for
example, and the actual machine being run on, and can
optimize to both.)
Yep. And the more high-level some abstraction is, the more
performance it can gain and the less of even a theoretical
advantage any other approach may claim.

More generally, the more information a compiler has, including
information concerning why some operation is taking place, the
better it can optimize. It's pretty well established that when
the language has built in bounds checking (so the compiler knows
the why of the comparisons), the compiler can generate better
code.

In the case of a VM, of course, the compiler has very exact
knowledge about the input data, the frequency of the various
paths, and the CPU it is running on. All of which are important
information. Where I have my doubts is because in a VM, the
compiler is severely limited in the time it can take for its
analysis; if an optimizing compiler takes a couple of hours
analysing all of the variations, fine, but in a VM? But since
I'm not an expert in this field, I don't really know.
And that is exactly what I am seeing in my own situation.

The few measurements I've made would bear you out. I'm sure
that there are applications where C++ will be faster, and there
are probably some where Java will be faster, but for most
applications, C++ is chosen not for speed, but because it has
greater expressibility.

Interesting subject.

Not sure it is that simple to settle this expressibility argument.

 
Branimir Maksimovic

tanix said:
But can you tell me more specifics on how exactly things
differed so drastically?

Enough to say that one programmer thought to replace the PHP
part with a Java server, and of course PHP was faster.
He didn't figure that out either...

Greets
 
tanix

Yes, of course, it all starts with the fact that one cannot
compare the speed of languages but only the speed of
specific programs running under a specific /implementation/
of a language running under a specific operating system
running on specific hardware.

Yes. And the fact that any given implementation (and any given
language, for that matter) will have its strong points and its
weak points. If you want a language to look good, you write to
its strong points, and to the other language's weak points.
What is language-specific is only the fact that some
language features make some kinds of optimization possible
(like restrict in C) or impossible (e.g., when aliasing by
pointers is possible).

There's possible and impossible, but there's also
difficulty. There are C++ compilers, for example, which use
profiler output, and if it makes a difference, generate two
versions of a function, depending on whether there is aliasing
or not, with a quick test at the top to decide which one to use.
But it's a lot more effort than in Java. Similarly, given an
array of Point (where Point is basically two doubles), it's
perfectly conceivable that a Java compiler treat it as an array
of double[2]. But it's a lot more work, and a lot less likely,
than for a C++ compiler.
So, if I had to implement some algorithm, I would not reject
Java from the start because it is slow, but would do some
benchmarking with code in the direction of that algorithm.

Exactly. Most of the time, what eliminates Java is that it
doesn't support really robust programming.

Wow. That bites. I'd be curious to see some specifics on this.
It also pretty much makes programming by contract impossible,
requires implementations of concrete classes to be in the same
file as the class definition,

And THAT is a ROYAL drag. No question about it.
and does a number of other things
which make programming in the large difficult.

Well... I don't know what kind of things you are talking about.
That said, it's not a bad language for small non-critical
applications.

Except it is routinely used in MASSIVELY scaled apps
in banks, on Wall Street, etc. I just see those guys too often.
And it has a pretty nice GUI library, and is well
integrated in web server environments.

 
tanix

No. Java is chosen because it is a simplified language

I like to hear that one. Makes me feel good.
which anyone can learn and maintain in one month.

Well, I thought it was just the other way around.
At least if you talk to Java experts, and they ARE experts
by any measure.

What they say is something like:
"well, sure, java has much steaper learning curve.
But once you ease into it, it is a totally different game".

I'd say, from my own experience, yes, even after spending
years with C++, it took me quite a while to completely
rethink almost all I knew, and the further I got, the better
it got. I, personally, think that Java overall has MUCH
more "expressive power", except we may mean different things
by it.

The ease with which I work in Java could not even be COMPARED
to C++, which was a constant pain in the neck with all sorts
of secondary issues I would not even want to worry about.

Just don't ask me what those are.
The last time I looked at my firewall app in C++, it was like:
oh, jeez, I have to see THIS stuff again?

True, it was the MFC flavor of it. But I would not write
anything but the simplest apps in C++, on the level of some
tool or gadget. Too much foolishness, too many unnecessary
complexities that do not buy me much.
C++ is an ugly
Agreed!!!!
:--}

and complex language with a lot of traps and
requires several years and tears to learn.

Now that you say it, I might even start thinking THIS way!
:--}
Java sacrifices a lot of things, but it is a faster language
in the sense that a programmer can use it effectively and
write working programs in much shorter time than in C++.

At least that is what I saw.
And I mean MUCH easier, at least for me.
If you asked me to rewrite my main app in C++,
I'd say sorry.
Simple as that. While C++ has more tools and power
as a language, once you've learned it you can do better
than in Java. But a lot of programmers are not capable
of producing code in time, therefore Java wins.

Well, the problem with producing "code in time" is often
caused by the fact that first comes the time, which is
your "deadline", and then comes the job, and only THEN
comes a job description.

I think it wears a lot of people out, being forever
whipped to hit these dates. Too much frustration, too much
overstressing of people. Too little rationale behind it.

If you work with budgets and top corporate "strategies",
all you have is the amount of money they are willing to
spend on it, and then comes the time frame "to be competitive".

Then comes a rough estimate of human resources,
pretty much pulled out of a hat, without even knowing
what exactly needs to be done, what kind of issues
are going to be addressed, and what kind of problems
are going to appear out of the blue.

Then, managers like to pull these magic 3 months numbers,
no matter what. Whether they have 3 people or 6 people,
they'd have to do it in 3 months. If you port a kernel,
you may get 6 months, or even 12 months. But then you
have to have REAL big guns financing such a long trip.

 
Kaz Kylheku

Well, I've started and stopped an application which controlled Shanghai
airport back in 1993.

This is inconsistent with the observation that you write bullshit
that we might expect from someone who was /born/ in 1993.
 
Branimir Maksimovic

Kaz said:
This is inconsistent with the observation that you write bullshit
that we might expect from someone who was /born/ in 1993.

Well, that depends... I was born in '68.

Let me show you one thing:

Number of nodes: 5855000
Timer: initial randomize: 0.818023
Timer: merge_sort: 5.558880
Timer: randomize after merge: 1.201952
Timer: radix_sort: 2.021901
Timer: randomize after radix: 1.470415
Timer: quick_sort after radix: 3.805699
vnode size : 5855000
Timer: quick_sort nodes by address: 0.730361
Timer: quick_sort: 0.505779
Timer: randomize after quick: 0.911052
cummulative result:
----------------------------------
initial randomize: 3.470863
merge: 21.176704 randomize: 5.611846
radix: 8.425690 randomize: 6.397170
quick sort nodes by address: 3.909824 > vector<void*> quick sort then
fill linked list with nodes sorted by address
quick no address sort after radix: 15.180166 > unoptimized linked list,
nodes are not sorted by address
quick: 2.297783 randomize: 4.075525 > nodes are sorted by address....
7 times faster, same algorithm, almost 6 million nodes
Pt count created:999999
true
true
qsort:0.06568 > this is cache optimized quick sort of million elements
vector
sort:0.134283 > this is sort from gcc lib
lqsort:0.19908 > this is cache optimized sort of linked list of million
elements
lsort:0.437295 > this is linked list sort from gcc's lib
Pt count:0

Which virtual machine can perform crucial cache optimizations?
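
For illustration, a rough C++ sketch of what those numbers suggest; this is my reading of the trick, not the original benchmark code. The node pointers are gathered into a contiguous vector, sorted by address, and the list is relinked in address order before the value sort runs, so later traversals walk the heap mostly forwards.

#include <algorithm>
#include <cstddef>
#include <functional>
#include <vector>

struct Node
{
    int   value;
    Node* next;
};

// Relink the list so that successive nodes are in increasing address order.
// Afterwards a traversal touches memory roughly sequentially, which is far
// friendlier to the cache than chasing randomly scattered pointers.
Node* relink_by_address(Node* head)
{
    std::vector<Node*> nodes;
    for (Node* p = head; p != 0; p = p->next)
        nodes.push_back(p);
    if (nodes.empty())
        return 0;

    // Sort the pointers themselves; std::less gives a total order.
    std::sort(nodes.begin(), nodes.end(), std::less<Node*>());

    for (std::size_t i = 0; i + 1 < nodes.size(); ++i)
        nodes[i]->next = nodes[i + 1];
    nodes.back()->next = 0;
    return nodes.front();
}

The values still need the real sort afterwards; the gain reported above comes from the fact that the subsequent sort (or any other traversal) now chases pointers in roughly increasing address order.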

Greets
 
Kaz Kylheku

How can that possibly be?

How it can be is that surprising truths in the world don't take a pause
so that morons can catch up.
GC kills all threads when it has
to collect?

Stopping threads using the scheduler is more efficient than
throwing locks or atomic instructions into their execution path.

What's more efficient: pausing a thread once in a long while, or having
it constantly trip over some atomic increment or decrement, possibly
millions of times a second?
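
To put that trade-off in concrete terms, a small and deliberately oversimplified C++ sketch (illustrative only): the first function pays two atomic reference-count updates on every call, the second pays nothing per call, which is the situation a tracing collector buys at the price of occasional pauses.

#include <memory>

struct Widget { int value; };

// Every call copies the shared_ptr: one atomic increment on entry and one
// atomic decrement on exit, executed on every iteration of a hot loop.
int read_counted(std::shared_ptr<Widget> w)
{
    return w->value;
}

// No per-call bookkeeping at all; something else (a tracing collector, or
// the caller's ownership) has to guarantee the object stays alive.
int read_traced(const Widget* w)
{
    return w->value;
}

long long sum(const std::shared_ptr<Widget>& w, long long n)
{
    long long total = 0;
    for (long long i = 0; i != n; ++i)
        total += read_counted(w);        // atomic traffic every iteration
    for (long long i = 0; i != n; ++i)
        total += read_traced(w.get());   // plain pointer, no atomics
    return total;
}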
Or can it magically sweep through heap, stack, bss,
etc. and scan without locking all the time or stopping the program?

The job of GC is to find and reclaim unreachable objects.

When an object becomes unreachable, it stays that way. A program does
not lose a reference to an object, and then magically recover the
reference. Thus, in general, garbage monotonically increases as
computation proceeds.

This means that GC can in fact proceed concurrently with the
application. The only risk is that the program will generate more
garbage while the GC is running, which the GC will miss: objects which
the GC finds to be reachable may in fact have become unreachable before
the collection completes. But that's okay; they will be found next time.

This is hinted at in the ``snapshot mark-and-sweep'' paragraph
in the GC algorithms FAQ.

http://www.iecc.com/gclist/GC-algorithms.html
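
A toy, single-threaded C++ mark-and-sweep sketch, purely to make the reachability argument concrete. Everything here is hypothetical; a real concurrent or incremental collector is far more involved.

#include <cstddef>
#include <set>
#include <vector>

struct Object
{
    bool                 marked;
    std::vector<Object*> refs;     // outgoing references held by this object
    Object() : marked(false) {}
};

struct Heap
{
    std::set<Object*> objects;     // every object currently allocated
    std::set<Object*> roots;       // globals, stack slots, registers, ...

    Object* allocate()
    {
        Object* o = new Object;
        objects.insert(o);
        return o;
    }

    void mark(Object* o)
    {
        if (o == 0 || o->marked)
            return;
        o->marked = true;
        for (std::size_t i = 0; i != o->refs.size(); ++i)
            mark(o->refs[i]);      // naive recursion: fine for a toy
    }

    // One collection cycle: mark everything reachable from the roots, then
    // sweep the rest.  Anything that becomes unreachable *during* the cycle
    // may survive it, but garbage stays garbage, so the next cycle reclaims
    // it; that is the point made above.
    void collect()
    {
        for (std::set<Object*>::iterator r = roots.begin(); r != roots.end(); ++r)
            mark(*r);

        for (std::set<Object*>::iterator it = objects.begin(); it != objects.end(); )
        {
            if (!(*it)->marked) {
                delete *it;
                objects.erase(it++);
            } else {
                (*it)->marked = false;   // reset the mark for the next cycle
                ++it;
            }
        }
    }
};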
Explain to me how?

Go study garbage collection. There is lots of literature there.

It's not a small, simple topic.
Manual deallocation does not have to lock at all....

WTF, are you stupid?

Firstly, any comparison between GC and manual deallocation is moronic.

In order to invoke manual deallocation, the program has to be sure
that the object is about to become unreachable, so that it does
not prematurely delete an object that is still in use. Moreover,
the program has to also ensure that it eventually identifies all objects
that are no longer in use. I.e. by the time it calls the function, the
program has already done exactly the same job that is done by the
garbage collector: that of identifying garbage.

/Both/ manual deallocation and garbage collection have to recycle
objects somehow; the deallocation part is a subset of what GC does.
(Garbage collectors integrated with C in fact call free on unreachable
objects; so in that case it is obvious that the cost of /just/ the call
to free is lower than the cost of hunting down garbage /and/ calling
free on it!)

The computation of an object lifetime is not cost free, whether it
is done by the program, or farmed off to automatic garbage collection.

Your point about locking is naively wrong, too. Memory allocators which
are actually in widespread use have internal locks to guard against
concurrent acces by multiple processors. Even SMP-scalable allocators
like Hoard have locks. See, the problem is that even if you shunt
allocation requests into thread-local heaps, a piece of memory may be
freed by a different thread from the one which allocated it. Thread A
allocates an object, thread B frees it. So a lock on the heap has to be
acquired to re-insert the block into the free list.
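
A condensed C++ sketch of that structure, loosely in the spirit of such allocators rather than any real implementation: per-thread free lists need no lock, but a cross-thread free must take the owning heap's lock. Size classes, carving fresh memory from the OS, and so on are all omitted.

#include <mutex>

struct ThreadHeap;

struct Block
{
    ThreadHeap* owner;   // the heap this block was originally carved from
    Block*      next;
};

struct ThreadHeap
{
    Block*     local_free;    // touched only by the owning thread: no lock
    std::mutex remote_lock;   // guards frees arriving from other threads
    Block*     remote_free;

    ThreadHeap() : local_free(0), remote_free(0) {}

    Block* allocate()
    {
        if (!local_free) {
            // Refill from blocks other threads have handed back to us.
            std::lock_guard<std::mutex> g(remote_lock);
            local_free  = remote_free;
            remote_free = 0;
        }
        if (!local_free)
            return 0;             // a real allocator would go to the OS here
        Block* b   = local_free;
        local_free = b->next;
        return b;
    }
};

void deallocate(ThreadHeap& current_heap, Block* b)
{
    if (b->owner == &current_heap) {
        b->next = current_heap.local_free;      // same-thread free: lock-free
        current_heap.local_free = b;
    } else {
        // Cross-thread free: thread A allocated, thread B frees, so B must
        // acquire the owning heap's lock to push the block back.
        std::lock_guard<std::mutex> g(b->owner->remote_lock);
        b->next = b->owner->remote_free;
        b->owner->remote_free = b;
    }
}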
 
