How to use the power of dual/quad-core processors in an applet?

Lew

pkriens said:
The Java memory model allows the caches of the processors to differ
for variables that are not synchronized or volatile. So on processor A

Not accurate. It also allows them to differ for variables that are volatile
or accessed in a synchronized way, at times. It also requires that
non-volatile variables and those accessed outside synchronized blocks follow
certain visibility rules.
pkriens said:
you can read a different value for variable x than on processor B
until they are synchronized. Code that works well on a single
processor because there is only one memory can fail subtly on multiple
processors. Obviously the code is wrong, but I think it makes sense to
schedule Java programs on a single CPU unless specifically allowed.

So you are suggesting that one not fix broken code, instead just try to get
your customers not to run it on multi-processor machines? I cannot imagine
any responsible developer advocating such a practice.

And it isn't "code that works well on a single processor"; it's code that is
equally broken on a single processor. What an irresponsible suggestion.
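A minimal, hypothetical sketch of the visibility hazard the two posts above are arguing about (class and field names are mine, not from the thread). Without `volatile`, the JLS permits the reader thread to never observe the write to the flag, on one processor or many; marking it `volatile` establishes the happens-before edge that makes the write visible:

```java
public class VisibilityDemo {
    // volatile: the writer's store happens-before the reader's load.
    // Remove 'volatile' and the JLS no longer guarantees the reader
    // thread ever sees the update -- the loop may spin forever.
    static volatile boolean done = false;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!done) {
                // spin until the write becomes visible
            }
            System.out.println("flag observed");
        });
        reader.start();
        Thread.sleep(100);   // give the reader time to enter the loop
        done = true;         // volatile write, guaranteed to become visible
        reader.join(5000);
        System.out.println(reader.isAlive() ? "reader stuck" : "reader finished");
    }
}
```

The point of the thread stands either way: the unsynchronized version is wrong under the specification regardless of how many cores it happens to run on.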
 
Lew

Lew said:
Not accurate. It also allows them to differ for variables that are
volatile or accessed in a synchronized way, at times. It also requires
that non-volatile variables and those accessed outside synchronized
blocks follow certain visibility rules.


So you are suggesting that one not fix broken code, instead just try to
get your customers not to run it on multi-processor machines? I cannot
imagine any responsible developer advocating such a practice.

And it isn't "code that works well on a single processor"; it's code
that is equally broken on a single processor. What an irresponsible
suggestion.

You are aware that single-processor machines also reschedule instructions and
data, and that broken thread code on even single-processor machines can
manifest its bugs, aren't you? In other words, that your short-sighted and
irresponsibly lazy idea won't even work reliably anyway?
 
Roedy Green

Lew said:
You are aware that single-processor machines also reschedule instructions and
data, and that broken thread code on even single-processor machines can
manifest its bugs, aren't you? In other words, that your short-sighted and
irresponsibly lazy idea won't even work reliably anyway?
Please. Can the ad hominems. It is enough to attack the idea.
 
Patricia Shanahan

Lew said:
Not accurate. It also allows them to differ for variables that are
volatile or accessed in a synchronized way, at times. It also requires
that non-volatile variables and those accessed outside synchronized
blocks follow certain visibility rules.


So you are suggesting that one not fix broken code, instead just try to
get your customers not to run it on multi-processor machines? I cannot
imagine any responsible developer advocating such a practice.

And it isn't "code that works well on a single processor"; it's code
that is equally broken on a single processor. What an irresponsible
suggestion.

I think two issues are getting confused here:

1. Should all code be written, reviewed, and tested to be correct
assuming only the memory model rules in the JLS? My answer is *YES*.

2. Should multi-threaded code be assumed safe to run on multiple cores,
given only experience of its reliability on a single processor? This is
a much more difficult question.

There are bugs that will show up only under high stress with relatively
low probabilities on a single core, but that are more likely to be
visible on multiple cores.

I am especially concerned about Java because it makes shared memory
multi-threading look much easier than it is.

Patricia
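A minimal, hypothetical illustration of the kind of bug Patricia describes (the class and counts are mine): two threads doing an unsynchronized read-modify-write on a shared `int`. This is broken under the JLS on one core or many; multiple cores merely make the lost updates more likely to surface.

```java
public class LostUpdateDemo {
    static int count = 0;   // shared and unsynchronized: a data race

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                count++;    // read, add, write -- not atomic
            }
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start(); b.start();
        a.join();  b.join();
        // Interleavings can only lose increments, never invent them,
        // so count is at most 2,000,000 -- and typically less.
        System.out.println("count = " + count);
        System.out.println("within bound: " + (count <= 2_000_000));
    }
}
```

Synchronizing the increment, or using `java.util.concurrent.atomic.AtomicInteger`, makes the result deterministic; the sketch above is deliberately the broken version.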
 
Lew

Roedy said:
Please. Can the ad hominems. It is enough to attack the idea.

What ad hominem? I said the /idea/ was irresponsibly lazy, nothing about any
individual. In other words, I was already following your advice.
 
Twisted

Ah. Although, I generally try not to mess with these things. My assumption
is the people who design the kernel know a lot more about how to split up
resources than I do....

I'm not sure that's a safe assumption in the one special case that the
people in question work in Redmond. I've had some fairly wacko
behavior happen on dual core Windoze boxen. One XP SP2 system running
a compute-intensive mathematical visualization tool caused temporary
system freezes and stuttering when running on a dual-core machine
until forced to only use one core via Task Manager. In another case, one
high-priority task and two low-priority tasks were running, the latter
having equal priorities. The high-priority task hogged the whole CPU
until confined to one core, whereupon the low-priority tasks should
have divvied up the other core but didn't -- one hogged it and the
other was starved.

OTOH I know that actually programming a scheduler to work ideally is
non-trivial. Weird glitches like "priority inversion" will happen with
some (and just about all naïve) implementations in some situations
with particular combinations of relative priorities, processes
sleeping or blocked on I/O or lock acquisition, and processes awake. I
wouldn't be surprised if even non-Redmond developers don't always get
it right even on serial hardware.
 
Roedy Green

Lew said:
What ad hominem? I said the /idea/ was irresponsibly lazy, nothing about any
individual. In other words, I was already following your advice.

Ideas are not lazy. People are. You characterised him as lazy for his
error. That was uncalled for.
 
Lew

Roedy said:
Ideas are not lazy. People are. You characterised him as lazy for his
error. That was uncalled for.

I swear it's the idea I was talking about. Feel free to misinterpret my
words, but I called the idea lazy, not the person. I have no reason to lie
about it.

I happen not to agree with your statement "ideas are not lazy". Therefore it
is possible for me to call an idea lazy, as I did. You will notice that I was
careful in my post to use the word "idea" as the target of the adjective.

Please do not distort my words or change their intent. I am telling you that
I intended to describe the idea as lazy, not the person. You can tell me that
I mean something different, but I think I am the authority on my own intent.
 
Roedy Green

Twisted said:
One XP SP2 system running
a compute-intensive mathematical visualization tool caused temporary
system freezes and stuttering when running on a dual-core machine
until forced to only use one core via Task Manager.
From Microsoft's point of view, they worry first about making good
programs work properly on dual core. After that is working they can
think about how to stop a defective program from freezing the machine.
 
Twisted

Roedy said:
From Microsoft's point of view, they worry first about making good
programs work properly on dual core. After that is working they can
think about how to stop a defective program from freezing the machine.

Microsoft now sometimes worries about making programs work properly?
When did this happen? There was no news announcement or fanfare...
 
