I usually use the observer pattern to listen for changes on the data
model, so the visualization only redraws when needed and isn't
synchronized with the numerical computation.
However, I read the matrix data inside my paint methods.
I suspect that you should use /more/ copying of data, not less. Your
simulation engine will run best if it can ignore the possibility that something
else is reading the same data. So it runs at full speed on one thread (not
doing any synchronisation). At extremely long intervals by computer
standards -- roughly once a second, say -- it makes a copy of the current state
of the simulation, and saves it. The test for whether to do that is in the
outermost loop of the simulation, and so that will have negligible effect on
the overall speed. When it determines that it is time to make a copy, it does
so, and then (and only then) uses a synchronised method to save the new
description of the state.
The GUI meanwhile (running on a different thread) updates the screen display at
regular intervals. To do that it uses a synchronised method to get the most
recent copy and refreshes from that. It keeps that copy around so that it can
repaint() itself as necessary.
Depending on how you've structured your existing code, making the copy may be
almost trivial. Note that you will have no display-related code in the
simulation engine at all (not even triggering notifications for any Observers).
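A minimal sketch of that scheme, in plain Java. The state here is just a
double[] and step() is a stand-in for the real maths -- the names and the
once-per-publish clone() are illustrative, not a prescription:

```java
// Simulation engine publishes occasional snapshots; the GUI thread reads
// only the published copy, never the live state.
final class Simulation {
    private double[] state = new double[4];        // live state, engine-only
    private double[] snapshot = state.clone();     // last published copy

    // Called only from the simulation thread, unsynchronised, at full speed.
    void step() {
        for (int i = 0; i < state.length; i++) {
            state[i] += 1.0;                       // stand-in for the real maths
        }
    }

    // Called rarely, from the outermost loop of the simulation.
    synchronized void publishSnapshot() {
        snapshot = state.clone();                  // the only copy ever made
    }

    // Called from the GUI thread at its own repaint rate.
    synchronized double[] latestSnapshot() {
        return snapshot;                           // engine never mutates this array
    }
}
```

Note that the engine keeps mutating state after a publish, but the GUI's copy
is untouched until the next publishSnapshot() -- which is exactly the point.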
Can I use JNI and an optimized native library to implement the
numerical computation?
If I do, can my visualization code be preserved and reused?
Will my system perform better?
A lot depends on how much work you do in each call to JNI. If you are just
doing something trivial like generating the next random number, then almost
certainly not. The cost of crossing the JNI barrier is pretty high, and will
swamp the gains from using (say) Intel's maths libraries. On the other hand,
if you have some slow operation (like matrix multiplication) where the time
taken is high, and -- more importantly -- the time required to copy any
necessary data across the JNI barrier is small in comparison[*] then using the
native libraries may help you.
([*] Copy is O(N) but if the operation is, say, O(N**2) then you can ignore the
cost of the copy.)
It may be that your code is dominated by a small number of slow operations
which can be implemented quickly in an external library. For instance it may
be that array multiplication dominates the time, and that the Intel library has
a particularly well-tuned implementation of that. If that applies then you may
see big gains by using JNI for array multiplication. If not then you'll have
to rewrite your code so that the bulk of the implementation /is/ in C/C++ if
you want to take advantage of Intel's libraries -- e.g. make each step of your
simulation into a single call to JNI.
BTW, the way that Java represents 2D arrays is not efficient, and is probably
incompatible with what an external library would expect. If you represent a
logically two-dimensional array of doubles as a double[][] then each access
will require two indirections. A better scheme (albeit quite a bit more work)
is to represent it as a single double[] and use arithmetic combinations of the
row/column coordinates to find each element. The external library will expect
to find the data in this format anyway, so by using it internally you minimise
the messing around (and perhaps the copying too) when you cross the JNI
barrier. For instance from one of your later posts in this thread:
double[][] matrix1 = new double[50][900];
for (int i = 0; i < 50; i++) {
    final double[] matrix_col = matrix1[i];
    for (int j = 0; j < 900; j++) {
        ....
    }
}
becomes:
double[] matrix = new double[50 * 900];
for (int i = 0; i < 50; i++)
{
    int start = i * 900;
    int end = start + 900;
    for (int j = start; j < end; j++)
    {
        double elem = matrix[j];
        ...
    }
}
Some people have reported seeing useful speedups using that technique (not huge
but useful), but the main reason for using it is so that highly tuned external
implementations of the array operation can work on the data more-or-less
directly.
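For what it's worth, once the data is in that flat row-major layout, even a
pure-Java operation over it is straightforward. A sketch of a matrix multiply
over flat arrays (the dimensions and the class name are illustrative):

```java
// Row-major flat-array matrix multiply: a is (m x n), b is (n x p),
// result is (m x p).  This is the layout a BLAS-style native library
// expects, so the same arrays could be handed across JNI unchanged.
final class FlatMatrix {
    static double[] multiply(double[] a, double[] b, int m, int n, int p) {
        double[] c = new double[m * p];
        for (int i = 0; i < m; i++) {
            for (int k = 0; k < n; k++) {
                double aik = a[i * n + k];        // hoist a[i][k] out of the inner loop
                for (int j = 0; j < p; j++) {
                    c[i * p + j] += aik * b[k * p + j];
                }
            }
        }
        return c;
    }
}
```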
-- chris