Optimise my ray tracer

Remon van Vliet

Jon Harrop said:
I did and it makes no difference because it is the default. However, running
with -client actually appears to be slightly faster (17.8s vs 18.3s). That
might not be a significant difference though.

That's a really odd result actually; I've never seen tight loops in Java
code run faster on -client than on -server. Any explanation for this?
 
Chris Uppal

That's a really odd result actually; I've never seen tight loops in Java
code run faster on -client than on -server. Any explanation for this?

I'd guess that a small-ish part of it is that the server JVM has a longer
startup time (especially with the fast-startup work in Java 1.5, which I assume
(without checking) only the client JVM bothers with). I can imagine that
making a couple of seconds' difference.

A possible explanation for the rest of the difference would be that the
specific benchmark is structured so that the server JVM cannot optimise it
without doing on-stack replacement of running methods. I don't know what the
real story on that is, but my own experiments indicate that it is unlikely to
optimise a benchmark that consists of a single set of nested loops.

The easy, and obvious, way to find out is to re-structure the benchmark so that
it:

(a) times the relevant operation /itself/, rather than timing the
execution of the whole program.

(b) runs the benchmark in a loop. Probably (since the loop is long) timing
each iteration independently.
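A sketch of that restructuring in Java (the `work` method here is a stand-in for the real trace call, an assumption for illustration):

```java
// Microbenchmark skeleton: run the measured operation in a loop and time
// each iteration separately, so JIT warm-up shows up in the early numbers
// and the later iterations reflect steady-state speed.
class Bench {
    // Stand-in for the operation under test (e.g. the ray-trace call).
    static long work() {
        long acc = 0;
        for (int i = 0; i < 1_000_000; i++) acc += (long) i * i;
        return acc;
    }

    public static void main(String[] args) {
        for (int iter = 0; iter < 5; iter++) {
            long t0 = System.nanoTime();
            long result = work();
            long t1 = System.nanoTime();
            // Early iterations include compilation; later ones are "warm".
            System.out.printf("iter %d: %.3f ms (result %d)%n",
                              iter, (t1 - t0) / 1e6, result);
        }
    }
}
```

With this shape, the server JVM's compiler gets entry points it can compile normally, without needing on-stack replacement.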

Without those changes, I don't see that the benchmark can be saying anything
very interesting. Of course, /with/ the changes it might still turn out that
the server JVM was lagging badly behind C++ on this application. I wouldn't be
surprised myself, since the way it is written involves creating infeasibly
large numbers of intermediate objects -- which would obviously put Java at a
disadvantage. (Not that I'm suggesting that it's an invalid benchmark for that
reason.)

-- chris
 
Jon Harrop

Most of the time is spent in mutually recursive functions, rather than
"tight loops".
I'd guess that a small-ish part of it is that the server JVM has a longer
startup time (especially with the fast-startup stuff in Java 1.5 which I
assume
(without checking) that only the client JVM bothers with). I can imagine
that making a couple of seconds difference.

The difference (of 0.5s) is well within random noise.
A possible explanation for the rest of the difference would be that the
specific benchmark is structured so that the server JVM cannot optimise it
without doing on-stack replacement of running methods. I don't know what
the real story on that is, but my own experiments indicate that it is
unlikely to optimise a benchmark that consists of a single set of nested
loops.

Then what can it optimise?
The easy, and obvious, way to find out is to re-structure the benchmark so
that it:

(a) times the relevant operation /itself/, rather than timing the
execution of the whole program.

If the "start up" time is proportional to the running time of the program
(e.g. because it is the time spent compiling to native code on-the-fly)
then it should be counted as it is. There is no evidence of a large
constant start up time.
(b) runs the benchmark in a loop. Probably (since the loop is long)
timing
each iteration independently.

It would be neither fair nor representative to rerun the Java test many
times.
Without those changes, I don't see that the benchmark can be saying
anything
very interesting. Of course, /with/ the changes it might still turn out
that
the server JVM was lagging badly behind C++ on this application. I
wouldn't be surprised myself, since the way it is written involves
creating infeasible large numbers of intermediate objects -- which would
obviously put Java at a
disadvantage. (Not that I'm suggesting that it's an invalid benchmark for
that reason.)

Indeed, there is no easy way around this and many (most?) numerically
intensive programs use small vectors and matrices.

So the conclusion can only be that Java is not suitable for such
applications.
 
Remon van Vliet

Jon Harrop said:
Most of the time is spent in mutually recursive functions, rather than
"tight loops".


The difference (of 0.5s) is well within random noise.
Startup time is added to the benchmark, and as such it isn't measuring Java
execution performance.
Then what can it optimise?
Well, it should be obvious that nested loops cannot be optimised further by a
VM than by run-time compiling (JITing) them and possibly unrolling smaller
loops. The server VM may, and most likely will, do this more aggressively.
If the "start up" time is proportional to the running time of the program
(e.g. because it is the time spent compiling to native code on-the-fly)
then it should be counted as it is. There is no evidence of a large
constant start up time.
There's no evidence for either point. The fact remains that the VM does
things on startup that a native compiler does at compilation time. If you're
measuring raw execution performance, neither is relevant.
It would be neither fair nor representative to rerun the Java test many
times.

Untrue, if you're comparing execution speed then you should measure just
that. Startup time is not related to the speed at which the actual
computational code gets executed.
Indeed, there is no easy way around this and many (most?) numerically
intensive programs use small vectors and matrices.

So the conclusion can only be that Java is not suitable for such
applications.

That conclusion is severely flawed in my opinion. Computationally intensive
programs do not by definition create a large number of intermediate (or any
other kind of) objects. The fact that applications such as ray tracing are
computationally expensive is completely unrelated to object creation. In my
experience, matrix and vector objects can be reused and pooled to a point
where object creation is minimal. Also, if there's one area where (JITted)
Java code and C(++) code run at comparable speeds, it's
arithmetic/logic/computation.
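The reuse idea might look like the following sketch (a hypothetical `MutVec3` class; the names are assumptions, not taken from the benchmark code):

```java
// A mutable 3D vector that is written into rather than freshly allocated,
// so a hot loop can keep one scratch instance instead of creating millions
// of temporaries.
final class MutVec3 {
    double x, y, z;

    MutVec3 set(double x, double y, double z) {
        this.x = x; this.y = y; this.z = z;
        return this;
    }

    // Stores a + b into this vector; no new object is created.
    MutVec3 addOf(MutVec3 a, MutVec3 b) {
        return set(a.x + b.x, a.y + b.y, a.z + b.z);
    }

    double dot(MutVec3 o) {
        return x * o.x + y * o.y + z * o.z;
    }
}
```

Whether this wins anything over plain allocation depends on the VM's allocator and GC, which is exactly the point under dispute here.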

Finally, I did a test myself with the mentioned code (300):

-client :
// Setup :0.0s
// Raytracing (tracer.trace(size, ss)) :14.7s

- server
// Setup :0.0s
// Raytracing (tracer.trace(size, ss)) :8.8s

Obviously the difference is there. If someone sends me the EXE that runs the
same algorithm, I'll run it here and compare.

Remon van Vliet
 
Vit

Indeed, there is no easy way around this and many (most?) numerically
intensive programs use small vectors and matrices.

So the conclusion can only be that Java is not suitable for such
applications.

Omitting the rest of the discussion, I would like to reply to that
last statement only.

First. Languages are roughly divided into "compiled" and "interpreted".
For the former, more checks and optimisation work can be done at compile
time, while for the latter it all falls to run time. The VM's
interpretation loop, start-up and so on take time that is non-productive
from the end-result point of view.
On one side, Java is implemented as an interpreted language, i.e. with a
JVM interpreter. You pay this price for being platform-independent.
On the other side, Java, in most of its features, is a well-compilable
language, like C and C++.
The solution that I proposed when giving my statistics was to compile
your application for a known target platform (if it is x86/Windows, it
can be JET (http://www.excelsior-usa.com/jet.html), or it can be
GCJ...). The reason was to get rid of the interpreting engine (the JVM)
and to benefit from the powerful optimising techniques implemented in
modern compilers.

Second. A theorem: for any language, any platform and any hardware
there exists a problem that cannot be efficiently solved with them.
Proof: consider any high-degree polynomial or NP-complete problem.
Usually it is not the tool that is to be blamed. Special methods to solve
these "hard tasks" should be elaborated, standard algorithms should be
optimised into special versions, etc.
Consider the case that is now fundamental to programming on Windows:
I mean "invalidation rectangles". I bet that initially the problem was
that redrawing the full heap of windows on a screen when something
changed in one of them was not acceptable (redrawing time being much
longer than the blink of an eye). Suppose that with modern speeds it were
possible to recalculate the full screen picture in less than a screen
refresh period (60 Hz). Who would ever think about that problem?
Hence the statement: a program that needs to be optimised should be
optimised not only at the latest phase, when a VM or a compiler tries
to do its best, and not even (though to a greater extent) at the phase of
coding the algorithm in a particular language, but already at the phase of
algorithm selection.
As for the language, the skills of writing fast-working programs may
be somewhat different from the skills of writing programs "as books
and professors teach". It may be preferable to unroll loops, to
precalculate often-used data into huge static arrays, to write in a mix
of languages (after all, if your program spends 90% of its time in a
5-line loop, isn't that a reason to write this loop in assembler rather
than to reject the language which is perfect for the remaining N-5
lines?), etc.

So, the art of optimizing programs is an art indeed that should be
learnt as we learn to program :)

Vit
Excelsior, LLC
 
Jon Harrop

Remon said:
Startup time is added to the benchmark, and as such it isn't measuring Java
execution performance.

The benchmark clearly is measuring Java execution performance.
Well it should be obvious that nested loops cannot be optimised further by
a VM than run-time compiling (JITing) them and possible unroll smaller
loops. The server VM may and most likely will do this more aggressively.

Then I would expect to see a performance improvement, which I don't.
Untrue, if you're comparing execution speed then you should measure just
that. Startup time is not related to the speed at which the actual
computational code gets executed.

It is, of course. And you'd be able to see the startup time on my graphs
were it not for the fact that it is negligible.
That conclusion is severely flawed in my opinion.

I have presented objective, quantitative evidence supporting it.
Computationally
intensive programs do not by definition create a large number of
intermediate (or any other kind of) objects.

Like 3D vectors?
The fact that applications
such as raytracing are computationally expensive are completely unrelated
to object creation. In my experience matrices and vector objects can be
reused and pooled to a point where object generation is minimal.

Then you're trying to work around a poor allocator and garbage collector.
You don't have to do this in OCaml or SML. That is what I am trying to
measure.
Also, if
there's one area where (JITted) Java code and C(++) code runs at
comparable speeds it's arithmetic/logic/computation.

I'll try those next. What benchmark would show Java in the best possible
light?
Finally, i did a test myself with the mentioned code (300) :

-client :
// Setup :0.0s
// Raytracing (tracer.trace(size, ss)) :14.7s

- server
// Setup :0.0s
// Raytracing (tracer.trace(size, ss)) :8.8s

What's your setup, what values of n and level did you use?
 
Remon van Vliet

Jon Harrop said:
The benchmark clearly is measuring Java execution performance.
You're measuring startup + execution.
Then I would expect to see a performance improvement, which I don't.
Well, I do (see below).
It is, of course. And you'd be able to see the startup time on my graphs
were it not for the fact that it is negligible.
We'll agree to disagree
I have presented objective, quantitative evidence supporting it.
So far not even the results are reproducible on my system. And I maintain
that if object creation is a problem for a language then you should just make
sure you don't do it that often. You can call this a workaround, but the fact
remains that continuously creating thousands of objects on the fly is bad
practice in any language.
Like 3D vectors?
An enormous number of 3D vectors are used in my real-time ray tracer, but
such vector instances are almost always reused in some way or form
without breaking up code clarity. It's a simple matter of knowing what
platform you're coding for.
Then you're trying to work around a poor allocator and garbage collected.
You don't have to do this in OCaml or SML. That is what I am trying to
measure.
I thought you were measuring execution speed? If you use an approach
suitable for OCaml that is highly inefficient in Java then obviously
the benchmark results will reflect that. Note that I'm not arguing about
which language is better or anything; I'm sure the above languages have
their benefits.
I'll try those next. What benchmark would show Java in the best possible
light?
A ray tracer is fine actually, but any alternative that solves a
computationally intensive problem is fine too.
What's your setup, what values of n and level did you use?
I just ran RayTracing.java with args[0] = 300; it ran on a 1.5 GHz Pentium 3
Dell notebook with 1 GB of memory, inside the Eclipse IDE.
 
Jon Harrop

Remon said:
you're measuring startup + execution

For different execution times, yes.
Well, I do (see below).

I had been using my own source. I just tried it with RayTracer.java and I
still see little difference between -client and -server:

$ javac RayTracing.java
$ time java RayTracing 256 >image.pgm

real 0m21.098s
user 0m20.340s
sys 0m0.230s
$ time java -client RayTracing 256 >image.pgm

real 0m21.034s
user 0m20.320s
sys 0m0.180s
$ time java -server RayTracing 256 >image.pgm

real 0m19.634s
user 0m16.640s
sys 0m0.280s

Compared to the most naive OCaml implementation on the same machine:

$ time ./ray 6 256 >image.pgm

real 0m6.374s
user 0m5.860s
sys 0m0.020s

or the optimised OCaml:

$ time ./ray 6 256 >image.pgm

real 0m3.525s
user 0m3.500s
sys 0m0.010s
We'll agree to disagree

Can you see a significant startup time on my graphs?
So far not even the results are reproducible on my system. And I maintain
that if object creation is a problem for a language then just make sure
you don't do it that often. You can call this a workaround but the fact
remains that creating thousands of objects on the fly continuously is bad
practice in any language.

Absolute nonsense. The C++ is doing exactly the same thing and is many times
faster. I bet an OCaml version which abused objects in the same way would
be similarly many times faster than Java.
An enormous amount of 3D vectors are used in my real-time raytracer, but
almost always are instances of such vectors reused in some way or form
without breaking up code clarity. It's a simple matter of knowing what
platform you're coding for.

You're just working around a poor allocator/GC implementation. OCaml and SML
both allocate and GC exactly the same stuff as Java but run many times
faster without the programmer having to understand the details of the
compiler and platform.
I thought you were measuring execution speed? If you use an approach
suitable for OCaml that is highly inefficient in Java then obviously
the benchmark results will reflect that. Note that I'm not arguing about
which language is better or anything; I'm sure the above languages have
their benefits.

So how would you implement 3D vectors efficiently in Java? In-line
everything as you have to in C?
A raytracer is fine actually, but alternatives can be any form of solving
a computationally intensive problem is fine.

So I am showing Java in the best possible light?
What's your setup, what values of n and level did you use?
I just ran RayTracing.java with args[0] = 300; it ran on a 1.5 GHz Pentium
3 Dell notebook with 1 GB of memory, inside the Eclipse IDE.

Perhaps it is Intel specific. I don't have an Intel to try it on...
 
Chris Smith

Jon Harrop said:
It is, of course. And you'd be able to see the startup time on my graphs
were it not for the fact that it is negligible.

Jon,

I don't know what you're seeing, but I can tell you that you are
definitely reaching the wrong conclusion. It can be easily demonstrated
that the Java virtual machine does have a considerable startup time
before reaching application code, on the order of 0.5 to 1.5 seconds.
The exact time depends mainly on the amount of the core API used by an
application, and the amount of actual application code. Your benchmarks
are extremely low on both accounts, so you will probably see a startup
time of about half a second or thereabouts. It's also the case that the
JVM requires "warm up" time before it reaches full speed, and that the
first time a piece of code gets run will not be the fastest.
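One way to separate the two costs Chris describes is to time the computation inside `main` and compare that with the external wall-clock time of the whole process (a rough sketch; the loop body is an arbitrary placeholder for the real workload):

```java
// Time only the computation; `time java StartupSplit` minus the printed
// figure then approximates JVM startup plus class-loading overhead.
class StartupSplit {
    // Arbitrary numeric placeholder for the real workload.
    static double compute(int n) {
        double sum = 0;
        for (int i = 1; i <= n; i++) sum += 1.0 / i;
        return sum;
    }

    public static void main(String[] args) {
        long t0 = System.nanoTime();
        double sum = compute(10_000_000);
        long t1 = System.nanoTime();
        System.out.printf("compute: %.3f s (sum = %.6f)%n",
                          (t1 - t0) / 1e9, sum);
    }
}
```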

How you react to this depends on what you are trying to measure. If
you're testing the ability of Java to execute one-shot batch or command
line applications that run and terminate in a relatively short period of
time, then you should benchmark startup and cold runs just as you're
doing. The _problem_ is when people attempt to extrapolate from these
results to more common problem spaces, such as long-running and user-
interactive applications, where those particular performance factors are
irrelevant or nearly so.

--
www.designacourse.com
The Easiest Way To Train Anyone... Anywhere.

Chris Smith - Lead Software Developer/Technical Trainer
MindIQ Corporation
 
Jon Harrop

Chris said:
How you react to this depends on what you are trying to measure. If
you're testing the ability of Java to execute one-shot batch or command
line applications that run and terminate in a relatively short period of
time, then you should benchmark startup and cold runs just as you're
doing. The _problem_ is when people attempt to extract from these
results to more common problem spaces, such as long-running and user-
interactive applications, where those particular performance factors are
irrelevant or near to it.

Yes. I'm seeing 0.2s startup for Java. The OCaml and MLton run-times have no
startup and seem to be well warmed up after 30s.

I should plot a graph of the running time for the languages vs scene
complexity. That should give a better indication of whether or not Java's
performance is heading in the right direction for bigger problems.
 
Bent C Dalager

Absolute nonsense. The C++ is doing exactly the same thing and is many times
faster.

Does the C++ implementation use heap objects or stack objects?

Cheers
Bent D
 
Jon Harrop

Bent said:
Does the C++ implementation use heap objects or stack objects?

My guess is that the C++ is stack-allocated, and the OCaml and SML are
definitely heap allocated.
 
Jon Harrop

Jon said:
My guess is that the C++ is stack-allocated, and the OCaml and SML are
definitely heap allocated.

Having said that, MLton might optimise the SML to allocate on the stack.
 
Bent C Dalager

My guess is that the C++ is stack-allocated, and the OCaml and SML are
definitely heap allocated.

Ok. I'm not familiar with OCaml and SML, but a stack-based C++
implementation could very well beat a primarily heap-based Java
implementation. It would have to take care not to do excessive copy
constructor calling and have fast destructors, but this seems quite
doable.

I am not aware of any reasonably easy methods of getting stack-like
efficiency out of Java. Presumably the JIT might optimise some heap
objects into stack objects, but pulling this off for the whole
application (or even just the frequently used parts) seems like quite
a challenge for the programmer.
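The heap-to-stack optimisation Bent alludes to is what HotSpot's escape analysis later came to do; whether it fires is VM-dependent, so treat this as a sketch of when it *can* apply, not a guarantee:

```java
// Escape analysis can only remove an allocation when the object provably
// never leaves the method that created it.
class Escape {
    static final class Vec {
        final double x, y;
        Vec(double x, double y) { this.x = x; this.y = y; }
    }

    // The Vec never escapes: the JIT may scalar-replace it (no heap work).
    static double lengthSq(double x, double y) {
        Vec v = new Vec(x, y);
        return v.x * v.x + v.y * v.y;
    }

    // The Vec escapes by being returned, so a real heap allocation remains.
    static Vec make(double x, double y) {
        return new Vec(x, y);
    }
}
```

The results are identical either way; only the allocation cost differs, which is why the effect is invisible in source code and hard for the programmer to control.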

Cheers
Bent D
 
Jon Harrop

Bent said:
Ok. I'm not familiar with OCaml and SML, but a stack-based C++
implementation could very well beat a primarily heap-based Java
implementation. It would have to take care not to do excessive copy
constructor calling and have fast destructors, but this seems quite
doable.

I am not aware of any reasonably easy methods of getting stack-like
effeciency out of Java. Presumably, the JIT might optimize some heap
objects into stack objects but pulling this off for the whole
application (or even just the frequently used parts) seems like quite
a challenge for the programmer.

Note firstly that Java's inefficiency clearly has nothing to do with the
stack, as it is 5x slower than other heap-based methods, and secondly that it
isn't easy to tell when stack allocation will be faster (e.g. a tail-recursive
map is only faster on long lists).

IMHO, there is something very inefficient going on with Java. I suspect a
bad allocator and GC, but this would mean costly objects in a heavily OO
language, which would surprise me.
 
Remon van Vliet

Jon Harrop said:
Note firstly that Java's inefficiency clearly has nothing to do with the
stack as it is 5x slower than other heap-based methods and secondly that it
isn't easy to tell when stack allocation will be faster (e.g. tail
recursive map is only faster on long lists).
I'm still waiting for you to point out what exactly is so inefficient about
Java and its JVM implementations. Repeating a point more than once doesn't
make it true. I agree that stack vs. heap will not explain a factor-of-5
speed difference.
IMHO, there is something very inefficient going on with Java. I suspect a
bad allocator and GC but this would mean costly objects in a heavily OO
language, which would surprise me.
More "Java inefficiency". Object creation is obviously something that will
slow a Java (or any other VM-based OO language) application down if used
excessively, but I would be surprised if that would ever cause serious issues
in properly designed applications. Also, garbage collection does introduce
additional overhead and slowdown, but that's marginal at best, and it saves
programmers a few headaches.

My point being: Java will be slower for certain applications compared to
other languages, but you keep making claims based on what can best be
described as hunches. Even if the benchmark did display the speed
differences you claim (which I still can't reproduce, I might add), you're
still testing a very limited collection of language functionality. Also, one
could argue that even *if* Java is structurally slower than C++, the trade-off
of getting an easier, more structured and in the end better development
language is worth it. That, obviously, is a subjective matter and cannot be
captured in benchmarks.

Remon van Vliet
 
Jon Harrop

Remon said:
I'm still waiting for you to point out what exactly is so inefficient about
Java and its JVM implementations.

I don't know what is so inefficient about Java, I just know that it is
inefficient because all of the other languages that I've tried are many
times faster than Java on this test.
I agree that stack vs. heap will not explain a factor 5 speed difference.
Yes.


More "Java inefficiency".

That is the topic of this thread. :)
Object creation is obviously something that will
slow a Java (or any other VM-based OO language) application down if used
excessively,

I don't believe that this is true. For example, if I were to use OCaml's
objects to implement 3D vectors I don't think it would be much slower, just
unnecessarily verbose.
but i would be surprised if that'll ever cause serious issues
in properly designed applications.

The ray tracer is clearly an application. Are you saying that it isn't
properly designed? If so, how?
Also, garbage collection does introduce
additional overhead and speed decrease, but that's marginal at best, and
reduces a few headaches for programmers.

Sure, SML and OCaml are garbage collected, and the C++ uses destructors to
achieve the same effect (no explicit deallocation).
My point being, Java will be slower for certain applications compared to
other languages, but you keep making claims based on what can best be
descriped as hunches.

Yes. Here is what I know:

1. On 32-bit Athlon t-bird, SML is 5x faster than Java, OCaml is 3.7x faster
than Java, C++ is 3.2x faster than Java.

2. On 64-bit Athlon64, C++ is 2.5x faster than Java, SML is 2x faster than
Java, OCaml is 1.85x faster than Java.
Even if the benchmark would display the speed
differences you claim (which i still cant reproduce i might add)

Perhaps Java runs well on Intel and very poorly on AMD. Has anyone else
found evidence of this?
then
you're still testing a very limited collection of language functionality.

This ray tracer uses data structures (3D vectors, trees and lists), variant
types/inheritance, both floating point and integer arithmetic, both
imperative looping and functional recursion. So I'd say that, for such a
small benchmark, it is actually testing quite a lot.
Also, one could argue that even *if* Java is structurally slower than C++,
the trade-off of getting an easier, more structured and in the end better
development language is worth it. That, obviously, is a subjective matter
and cannot be captured in benchmarks.

I have two points here:

1. Although not ideal, LOC goes some way to capturing the expressiveness of
the languages. Java is not only by far the slowest language on this test
but also the most verbose (least expressive). In contrast, OCaml requires
half as much code and runs twice as fast as Java.

2. Although C++ lacks some of the features of Java (e.g. GC), SML and OCaml
are both much safer, more expressive, higher-level languages than Java. So
the argument that Java's performance is excusable because it has GC is not
valid.
 
Owen Jacobson

Here is what I know:

1. On 32-bit Athlon t-bird, SML is 5x faster than Java, OCaml is 3.7x
faster than Java, C++ is 3.2x faster than Java.

Please be careful what you assert. I'm in no way disputing your
observations, but for correctness you might say that your SML
benchmark program runs five times faster than your Java benchmark program,
not that SML is five times faster than Java.
 
Remon van Vliet

Jon Harrop said:
I don't know what is so inefficient about Java, I just know that it is
inefficient because all of the other languages that I've tried are many
times faster than Java on this test.
Well, if this is the case, could you please post a page with the
benchmarks downloadable for the mentioned languages? If anything, I'm quite
interested in observing the differences and maybe finding out why they are
so excessive. As I mentioned before, I ported a ray tracer to Java once and
the speed decrease was in the single-digit percents, not a factor-of-5
difference.

That is the topic of this thread. :)
True, but at the risk of repeating myself, just claiming it's inefficient is
a bit easy. I just don't get how on earth you can get such huge performance
differences. It's simply so far from my practical experience that I do not
understand how you can get those results. Do note that I'm only familiar
with C.
I don't believe that this is true. For example, if I were to use OCaml's
objects to implement 3D vectors I don't think it would be much slower, just
unnecessarily verbose.
Hm, test this; it's entirely possible OCaml's object management is faster
than Java's, but you should see a significant decrease in performance.
The ray tracer is clearly an application. Are you saying that it isn't
properly designed? If so, how?
If you argue that object creation slows the Java version down by a factor of
5, then designing/implementing it in such a way that object creation is
extensive is not a good design, no? If you reply that it shouldn't matter,
because a proper OOP design should be executed efficiently in an OO language,
then I can only agree. But as mentioned before, I have trouble believing
Java's object instantiation is so much slower than other VM languages'.
Although I'll be the first to admit I can easily be proved wrong with a
simple object creation benchmark, or even a 3D vector math benchmark.
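Such a benchmark could be as small as the following sketch (illustrative only; the timings, and whether the JIT optimises the allocation away, will vary by VM and settings):

```java
// Compare allocating a fresh vector per iteration with reusing one instance.
class AllocBench {
    static final class Vec {
        double x, y, z;
        Vec(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
    }

    static double allocating(int n) {
        double acc = 0;
        for (int i = 0; i < n; i++) {
            Vec v = new Vec(i, i + 1, i + 2);  // fresh object every iteration
            acc += v.x + v.y + v.z;
        }
        return acc;
    }

    static double reusing(int n) {
        double acc = 0;
        Vec v = new Vec(0, 0, 0);              // one instance, overwritten
        for (int i = 0; i < n; i++) {
            v.x = i; v.y = i + 1; v.z = i + 2;
            acc += v.x + v.y + v.z;
        }
        return acc;
    }

    public static void main(String[] args) {
        int n = 10_000_000;
        long t0 = System.nanoTime();
        double a = allocating(n);
        long t1 = System.nanoTime();
        double b = reusing(n);
        long t2 = System.nanoTime();
        System.out.printf("alloc: %.3f s, reuse: %.3f s, same result: %b%n",
                          (t1 - t0) / 1e9, (t2 - t1) / 1e9, a == b);
    }
}
```

Both paths compute the same sum, so any timing gap isolates the cost of allocation plus collection.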
Sure, SML and OCaml are GC and the C++ uses destructors to achieve the same
effect (no explicit deallocation).
Like I said, my knowledge of the other languages is limited; if you say
similar things can be done I'll take your word for it. It's the magic of
them doing the same thing 5 times as fast that I find hard to believe. If
they offer the same benefits that Java offers at 5 times the speed, people
should seriously consider moving to these alternatives ;)
Yes. Here is what I know:

1. On 32-bit Athlon t-bird, SML is 5x faster than Java, OCaml is 3.7x faster
than Java, C++ is 3.2x faster than Java.
That's for your benchmark though. I know it's factually untrue that C++ is
by definition more than 3 times faster than Java.
2. On 64-bit Athlon64, C++ is 2.5x faster than Java, SML is 2x faster than
Java, OCaml is 1.85x faster than Java.


Perhaps Java runs well on Intel and very poorly on AMD. Has anyone else
found evidence of this?
Can't say; something is up with your Java VM and/or config though. My -server
VM runs all math/vector/object-intensive programs at least 30-50% faster, yet
you hardly see any difference.

This ray tracer uses data structures (3D vectors, trees and lists), variant
types/inheritance, both floating point and integer arithmetic, both
imperative looping and functional recursion. So I'd say that, for such a
small benchmark, it is actually testing quite a lot.
If you use the Java Collections Framework for the inner workings of your ray
tracer, you might have found your performance issue if you use the
wrong/synchronized collections. But I trust you didn't. You're still testing
very limited functionality in the stress part of your benchmark, though.
I have two points here:

1. Although not ideal, LOC goes some way to capturing the expressiveness of
the languages. Java is not only by far the slowest language on this test
but also the most verbose (least expressive). In contrast, OCaml requires
half as much code and runs twice as fast as Java.
This I can only agree with; OCaml is apparently quite a bit more
code-efficient than Java.
2. Although C++ lacks some of the features of Java (e.g. GC), SML and OCaml
are both much safer, more expressive, higher-level languages than Java. So
the argument that Java's performance is excusable because it has GC is not
valid.
Safer?

Anyway, like I suggested, how about you make your benchmark programs public
and we can run them on our systems for more benchmark results? Apart
from us disagreeing on some points, it's quite helpful to actually test these
things properly. I'm quite interested in comparing VM languages' performance
in terms of object/instance management, garbage collection (or related
features), vector math, integer math, collections, etc. What do you think?
Let me know if I can help in any way with making this happen. Perhaps we can
define a benchmark that tests language features that often cause performance
issues in CPU-intensive applications.

Remon
 
Jon Harrop

Owen said:
Please be careful what you assert. I'm in no way disputing your
observations, but for correctness you might say that your SML
benchmark program runs five times faster than your Java benchmark program,
not that SML is five times faster than Java.

Yes, absolutely, this is all only in the context of this ray tracer.
 
