How to enable JIT?

R

Roedy Green


Interpreted mode is much slower than HotSpot. I can't think of a
situation where you would want to use it, unless you were tracking
down a bug in HotSpot. You could prove it was HotSpot's fault if the
program worked with -Xint.
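
For example, with a placeholder class name MyClass:

java MyClass

runs with HotSpot's JIT enabled as usual, while

java -Xint MyClass

forces interpreted-only execution; if a bug disappears under -Xint, the
JIT compiler becomes the prime suspect.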
--
Roedy Green Canadian Mind Products
http://mindprod.com

One path leads to despair and utter hopelessness. The other,
to total extinction. Let us pray we have the wisdom to choose correctly.
~ Woody Allen .
 
D

Dmitriy Melnik

Nowadays no problem. The compilers and optimisers are vastly improved.
Java is in theory a much easier language to optimise than C. It is
much harder to do things behind the compiler's back in Java.

When you optimise your algorithms, MEASURE them first to see where the
time is being chewed up. See http://mindprod.com/jgloss/profiler.html
Always work on the current bottleneck.
Good advice. Also, MEASURE them afterward. Some apparent "optimizations" may
fail to help, and might even make things slower.
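
As a rough sketch of what "measure first, measure afterward" can look like in
practice (the workload below is just a stand-in for the real code), with a
warmup loop so HotSpot has compiled the hot path before the timed runs:

public class MeasureSketch {

    // Stand-in workload; replace with the method you are actually optimising.
    static long runTask() {
        long h = 0;
        for (int i = 0; i < 1_000_000; i++) {
            h = 31 * h + i;
        }
        return h;
    }

    public static void main(String[] args) {
        long sink = 0;
        // Warmup: give the JIT a chance to compile runTask() before timing it.
        for (int i = 0; i < 1_000; i++) {
            sink += runTask();
        }
        int runs = 20;
        long best = Long.MAX_VALUE;
        for (int i = 0; i < runs; i++) {
            long t0 = System.nanoTime();
            sink += runTask();
            best = Math.min(best, System.nanoTime() - t0);
        }
        // Print the sink so the JIT cannot eliminate the work as dead code.
        System.out.printf("best of %d runs: %.3f ms (sink=%d)%n",
                          runs, best / 1e6, sink);
    }
}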

Thanks, your help is immeasurable! You literally revitalized my
project! And mindprod.com is becoming a valuable source of
information for me.
 
J

Joshua Cranmer

Lew said:
My French isn't quite up to this. Je regrette.

I personally find technical writing easier to translate than normal
speech, because technical writing has a lot more cognates.

I'll not give a direct translation (too time-consuming to write), but
here's a rough guide to what's being said:

* Numerous hash functions exist, which are a fertile area of research
* A hash function needs extensive research to be declared sufficiently
good for practical use
* For better study, efficient implementations of new hash functions are
needed.

sphlib is a library containing C and Java implementations of several
hash functions, and it aims to fulfill these needs. More precisely, the
goals of sphlib are:
- to contain efficient, portable code, reusable in different projects
- to present clear internal code that others can easily modify for
research
- to present [1] a basis for comparing hash functions

[ Next paragraph talks about need for optimization ]

C was chosen because it combines good optimization opportunities with
reasonable portability; it's also widely used on PCs and servers, and
other languages are able to call C code.

Java was chosen as a representative of virtual-machine languages; sphlib
uses Java in part to get a more complete view of hash function
performance, since C alone can't show how they behave on virtual machines.

[ Discussion of sphlib being free (libre) [2] software ]

Again, this was merely a rough, quick translation. I'm inferring some of
it from context, so parts may be off, but it should be sufficient.

[1] I'm not sure of my translation of "incarner", but I don't want to
get my dictionary out right now.
[2] It is literally "logiciel libre." Nice Romance languages,
distinguishing between libre and gratis without needing parentheses :)
 
L

Lew

According to Lew:
Thomas said:
French government gave money, hence French government got report in
French. Bandwidth measures are numbers, hence "international".

How nice for the French government. Doesn't help me at all, though.
The second link contains the code and a README file in English.

But not a link in English to those things.

The code includes optimized implementations of some hash functions (33
variants) in both C and Java, along with the benchmarking framework,
which includes the necessary warmup.

Well, I was rather hoping that you'd report the results of *your* tests, but
so be it. Thanks anyway.

I'm finding that the language barrier interferes with my ability to find the
appropriate download links. If you feel like actually answering my questions,
I will be glad to see those answers.
 
A

Arne Vajhøj

Lew said:
My French isn't quite up to this. Je regrette.

I am sure some people in France are not happy about it, but
French is not really a world language in IT.

I learned some French about 28 years ago, and about all I can still
do is count to three in French.

Arne
 
T

Tom Anderson

- present [1] a base of comparison of hash functions

[1] I'm not sure of my translation of "incarner", but I don't want to get my
dictionary out right now.

Me neither, but could it mean something like 'create'? Like 'incarnate' in
English?

Whatever it means, the gist is got - thanks for your translation!
[2] It is literally "logiciel libre." Nice Romance languages,
distinguishing between libre and gratis without needing parentheses :)

You can say 'liberated software' in English, but a certain beardy zealot
chose not to, all those years ago.

Anyway, the French still can't distinguish 'warm' from 'hot', so they
lose!

tom
 
D

Dmitriy Melnik

This is a wide area. The best known factorization algorithms for big
numbers (say 450 bits or more) are variants of GNFS. The process begins
with a sieving phase which can be distributed over many nodes. The
bottleneck there is RAM speed, and Java should be fine. The second
phase is harder; it entails performing linear algebra on a matrix which
is bigger (_really_ bigger) than what fits on a single machine. There is
no known good algorithm for performing that computation on a cluster. If
you find out how to spread that phase over several nodes efficiently, then
Java or not Java, you will be famous. Right now, factorization records
(663 bits, if my memory serves me well) used Very Big Machines with Lots
of Fast RAM. On such a system, Java may be a problem, because array
indices in Java are signed 32-bit integers, thus limited to about 2
billion elements.
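
A rough sketch of the usual workaround (my own illustration, not code from
any factoring package): emulate a long-indexed array with fixed-size chunks,
trading one extra indirection per access for capacity well beyond 2^31
entries.

public class LongIndexedIntArray {
    private static final int CHUNK_BITS = 27;            // 2^27 ints per chunk (~512 MB)
    private static final int CHUNK_SIZE = 1 << CHUNK_BITS;
    private static final int CHUNK_MASK = CHUNK_SIZE - 1;

    private final int[][] chunks;

    public LongIndexedIntArray(long length) {
        // Round up to the number of chunks needed for the requested length.
        int chunkCount = (int) ((length + CHUNK_SIZE - 1) >>> CHUNK_BITS);
        chunks = new int[chunkCount][];
        long remaining = length;
        for (int c = 0; c < chunkCount; c++) {
            chunks[c] = new int[(int) Math.min(CHUNK_SIZE, remaining)];
            remaining -= CHUNK_SIZE;
        }
    }

    public int get(long index) {
        return chunks[(int) (index >>> CHUNK_BITS)][(int) (index & CHUNK_MASK)];
    }

    public void set(long index, int value) {
        chunks[(int) (index >>> CHUNK_BITS)][(int) (index & CHUNK_MASK)] = value;
    }
}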

Now if you envision tasks other than factorization of big integers (e.g.
you want to factor many medium-sized integers, or you are interested
in quite different areas, such as exhaustive search for a symmetric
encryption key) then it is hard to say anything in general. You may
want to look at http://distributed.net/: they coordinated the current
record in exhaustive key search (a 64-bit RC5 key). Another resource
is BOINC: http://boinc.berkeley.edu/
BOINC is currently being used by Graz University in an attempt to build
a collision for the SHA-1 hash function: http://boinc.iaik.tugraz.at/

I am implementing a quadratic sieve. I do not aim at beating records
in this area. I am interested in creating a distributed system that
could support a number of algorithms with decent performance. The main
focus is on distribution capabilities, though. My research has only
just started. I plan to use Java technology only. I don't even know yet
what algorithms could be practically implemented on a cluster of a few
dozen nodes.
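
For what it's worth, here is a minimal sketch (my own illustration, not
Dmitriy's code) of one sieving block of a quadratic sieve; each block depends
only on n and the factor base, which is the property that makes the sieving
phase easy to spread over nodes:

import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;

public class QsSieveSketch {

    // Sieve offsets i in [0, length) with x = floor(sqrt(n)) + start + i and
    // Q(x) = x*x - n; return offsets whose Q(x) looks smooth enough to be
    // worth exact trial division later.
    static List<Integer> sieveBlock(BigInteger n, long start, int length,
                                    int[] factorBase, double threshold) {
        double[] logSum = new double[length];
        BigInteger base = n.sqrt().add(BigInteger.valueOf(start)); // Java 9+ for sqrt()

        for (int p : factorBase) {
            BigInteger bp = BigInteger.valueOf(p);
            long nModP = n.mod(bp).longValueExact();
            long baseModP = base.mod(bp).longValueExact();
            double logP = Math.log(p);
            // Roots of x^2 = n (mod p), found by brute force for clarity;
            // a real implementation would use Tonelli-Shanks.
            for (long r = 0; r < p; r++) {
                if ((r * r) % p == nModP) {
                    long first = ((r - baseModP) % p + p) % p;
                    for (long i = first; i < length; i += p) {
                        logSum[(int) i] += logP; // p divides Q(base + i)
                    }
                }
            }
        }

        List<Integer> candidates = new ArrayList<>();
        for (int i = 0; i < length; i++) {
            if (logSum[i] >= threshold) {   // crude smoothness test
                candidates.add(i);
            }
        }
        // A full implementation would now trial-divide Q(base + i) for each
        // candidate, keep the fully smooth relations, and finally solve a
        // linear system over GF(2) -- the phase that is hard to distribute.
        return candidates;
    }
}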
 
L

Lew

These are *my* tests. I wrote the 33 hash function implementations
myself, both C and Java. I ran the experiments and performed the
measurements. I wrote the reports.

These are the source files for the tests. They are not the actual
test runs with their results. But that's all right - obviously you
don't want to share your results and that's your privilege.
That may be a technological barrier. The page features a quite visible,
blinking icon of a floppy disk with an arrow, which basically means "to
download click on me". So either you are using a text-only browser, or

Oh, now *that's* a universal icon.

Regardless, I am interested in your results, which you've made quite
plain you don't wish to share. Thanks anyway.
 
You can disable the JIT using the java.compiler option shown below. The JIT compiles code at run time, so the javac step is unchanged.

javac MyClass.java

java -Djava.compiler=NONE MyClass
or
java -Djava.compiler="" MyClass
 
