eti said:
Dear William,
I did the following:
- given a high number of threads, e.g. 100;
- given an object with a synchronized method accessed concurrently by
the above threads;
- that method implements a sleep, randomly set between 5 and 10 ms.
Result:
- the overall latency caused by all concurrent threads accessing the
object's synchronized method grows with the number of accessing threads,
well beyond 100 ms, that is, more than 10 times the expected time
to execute the very same method with no synchronization constraints!
I suppose this is due to the "long" queue of threads waiting to access
that method: they are all queued up because of the synchronization
lock, and each must wait for the previous threads to release it before
entering the object's synchronized method.
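The kind of test described above can be sketched roughly as follows; the class and method names are illustrative, not from the original post:

```java
import java.util.Random;

// A minimal sketch of the synthetic test described above: many threads
// all calling one synchronized method that sleeps 5-10 ms.
class SyncLatencyTest {
    private final Random random = new Random();

    // Each call holds the object's lock for a random 5-10 ms "work" period.
    synchronized void doWork() throws InterruptedException {
        Thread.sleep(5 + random.nextInt(6));
    }

    // Starts nThreads threads that all call doWork() on one shared object
    // and returns the total elapsed wall-clock time in milliseconds.
    static long runTest(int nThreads) throws InterruptedException {
        final SyncLatencyTest shared = new SyncLatencyTest();
        Thread[] threads = new Thread[nThreads];
        long start = System.currentTimeMillis();
        for (int i = 0; i < nThreads; i++) {
            threads[i] = new Thread(new Runnable() {
                public void run() {
                    try {
                        shared.doWork();
                    } catch (InterruptedException ignored) {
                    }
                }
            });
            threads[i].start();
        }
        for (int i = 0; i < nThreads; i++) {
            threads[i].join();
        }
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) throws InterruptedException {
        // Because the calls serialize on the lock, the total is roughly
        // nThreads * 7.5 ms, far more than a single unsynchronized call.
        System.out.println("elapsed ms: " + runTest(100));
    }
}
```

With 100 threads the serialized sleeps alone account for roughly 500 to 1000 ms of total elapsed time, which matches the observation above.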
This kind of synthetic test is only meaningful if it models your
application well. I would strongly suspect that it does not.
In the first place, 5-10 ms may well be an excessive estimate of the
time that the critical sections would consume. In fact, depending on
how you do it, I think it could be extremely excessive.
In the second place, in your actual app you are unlikely to have all the
threads contending for the same resource at precisely the same time. In
fact, if there is one thread per sensor and one store per sensor then
the sensor threads don't need to contend with each other at all. Even
if some or all of the threads did contend for the same resource(s),
staggering their access in time would reduce the apparent mean
synchronization delay that each would see, and you'll likely get such
staggering for free.
Notice that in the same context, with the same object but with NO
synchronization on that method, the overall latency of all accessing
threads is as expected: the mean value of the random sequence of sleep
timeouts, between 5 and 10 ms!
But that's irrelevant if your app won't work correctly unsynchronized.
If it will then you are foolish to worry about synchronization in the
first place.
Now in our filter servlet context: the filters are the concurrent
threads that access a synchronized webapp-context variable or a
"sensor" synchronized method to store their current latency values.
They would all queue up, and the overall latency, as perceived by the
client, would be excessively high!
You tread very uncertain ground making performance predictions, and
using multiple exclamation points doesn't improve your likelihood of
being right. There is certainly a potential performance problem here,
but whether it would be realized as an actual problem is impossible to
say without trying it.
I think you would be wise to avoid having a large number of threads all
frequently synchronizing on the same object, but that is by no means the
only way it could be done.
On the other hand, if I can work in the reverse direction and have my
sensor access the actual existing filter threads to read their "latency
field", NO synchronization is needed.
Sorry, but that's incorrect, or at least unwise. You still need to
synchronize the acts of updating the filters' data and reading the
filters' data. There is not so much potential for contention here, however.
The problem is just that I have
to know, at any given (sampling) time, the current number and instances
of the spawned filter threads.
And that is part of the reason why it's a poor design choice. The
application context is where information shared among disconnected
application components should go. Just what you store there and how you
access it is the key to solving your problem.
You could proceed more or less as you seem to want by putting some sort
of registry of your filters in the app context. There would still be
synchronization requirements there, but not with the same degree of lock
contention you anticipate.
It is much cleaner, though, to just store some kind of data cubbyholes
in the app context. For instance, put a List in the app context, and
each filter puts an int[1] in the List, which it will use to expose its
current data. (The List takes the place of a filter registry, and has
similar synchronization requirements; the arrays are the shared data
exposed by the filters.) Accesses to the int[]s should still be
synchronized, but you only have two threads contending for each one, and
no more synchronization requirement than if you were executing methods
on the filters themselves. (The actual form of the data objects used is
pretty open; int[] was used as an example, but it is not the best choice
from an OO standpoint.)
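The cubbyhole scheme above might look something like this; the class and method names are illustrative, and in a real webapp the List would be stored as a ServletContext attribute rather than a static field:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch of the "cubbyhole" scheme: a shared List stands in for the
// servlet context attribute, and each filter exposes its data via an
// int[1] it places in that List.
class CubbyholeDemo {
    // The List plays the role of the filter registry.
    static final List cubbyholes =
            Collections.synchronizedList(new ArrayList());

    // Each filter creates its own one-element array and registers it once,
    // typically in its init() method.
    static int[] registerFilter() {
        int[] slot = new int[1];
        cubbyholes.add(slot);
        return slot;
    }

    // The filter updates only its own slot, locking on the array itself.
    static void publishLatency(int[] slot, int latencyMs) {
        synchronized (slot) {
            slot[0] = latencyMs;
        }
    }

    // The sensor walks the list and reads each slot. Only two threads
    // (one filter and the sensor) ever contend for a given array.
    static int[] sample() {
        synchronized (cubbyholes) {
            int[] result = new int[cubbyholes.size()];
            for (int i = 0; i < cubbyholes.size(); i++) {
                int[] slot = (int[]) cubbyholes.get(i);
                synchronized (slot) {
                    result[i] = slot[0];
                }
            }
            return result;
        }
    }
}
```

Because each int[1] is shared by exactly two threads, lock contention is per-slot rather than on one global object, which is the whole point of the design.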
John Bollinger
(e-mail address removed)