Setting available CPU cores programmatically

D

Does Java have a programmatic way to set CPU affinity? I know low-level OS invocations can do the trick, but does anyone know of a high-level library API for it?
 
Arne Vajhøj

D said:
Does Java have a programmatic way to set CPU affinity? I know low-level OS invocations can do the trick, but does anyone know of a high-level library API for it?

There is no standard Java API to do that.

Not surprising, given the requirement to support practically any OS.

You can write something in C and call it via JNI.
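
If you are on Linux and don't want to touch JNI, a Java-only workaround is to shell out to the taskset(1) utility and restrict the running JVM's affinity mask. A minimal sketch, assuming Linux, taskset on the PATH, and the usual HotSpot "pid@host" runtime name (not a portable or official API):

import java.lang.management.ManagementFactory;

public class PinToCore {
    public static void main(String[] args) throws Exception {
        // On HotSpot the runtime MXBean name is conventionally "pid@hostname".
        String pid = ManagementFactory.getRuntimeMXBean().getName().split("@")[0];

        // taskset -p <mask> <pid> changes the affinity of a running process;
        // mask 0x1 restricts this JVM (and all its threads) to CPU 0.
        Process p = new ProcessBuilder("taskset", "-p", "0x1", pid)
                .inheritIO()
                .start();
        System.out.println("taskset exited with " + p.waitFor());

        // ... start the workload here, now confined to one core.
    }
}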

Arne
 
John B. Matthews

Arne Vajhøj said:
There is no standard Java API to do that.

Not surprising, given the requirement to support practically any OS.

You can write something in C and call it via JNI.

OP: This raises a question: What effect do you intend to achieve with
such a change? For example, some operating systems allow adjusting the
scheduling priority when a process is started.
 
D

OP: This raises a question: What effect do you intend to achieve with
such a change? For example, some operating systems allow adjusting the
scheduling priority when a process is started.
I'd like to do some benchmarking on how a multi-core machine can speed up a program that runs 100 threads. Specifically, I'd like to be able to draw a table such as:

1 core: 32.3 seconds
2 cores: 29 seconds
3 cores: 27.2 seconds
...

I guess one thing I can do is disable certain cores at the OS level, but the system I have is a multi-user one, and I'd really hate to have one user disturb the use of all other users. So some form of "virtually disabling selected cores" would be ideal.

Thank you!
 
Mark Space

D said:
I guess one thing I can do is disable certain cores at the OS level, but the system I have is a multi-user one, and I'd really hate to have one user disturb the use of all other users. So some form of "virtually disabling selected cores" would be ideal.


If you have the ability to disable certain cores, I'm surprised you
can't do so on a per-user basis. What OS and hardware is this?
 
John B. Matthews

D said:
I'd like to do some benchmarking on how a multi-core machine can speed up a program that runs 100 threads. Specifically, I'd like to be able to draw a table such as:

1 core: 32.3 seconds
2 cores: 29 seconds
3 cores: 27.2 seconds
...

I guess one thing I can do is disable certain cores at the OS level, but the system I have is a multi-user one, and I'd really hate to have one user disturb the use of all other users. So some form of "virtually disabling selected cores" would be ideal.

In addition to numerous other pitfalls, most benchmarks are predicated
on a "quiet machine":

<http://wikis.sun.com/display/HotSpotInternals/MicroBenchmarks>

Does your host operating system offer any features that might help?
 
Lew

Patricia said:
Can you use the number of threads, something you can control in Java, as
a surrogate for number of cores?

Alternatively, the operating system may have ways of restricting the CPUs for a job. What OS are you using on the benchmark machine?

To pick up on this and Mark Space's vaguely similar suggestion, you could use
virtualized hardware, say using Xen, VMWare or KVM, to limit the number of
available CPUs to the whole guest OS.
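
To illustrate Patricia's surrogate idea in code, one can cap the number of concurrently running worker threads with a fixed-size pool instead of disabling cores. A minimal sketch (busyWork is a placeholder; note that JVM and GC threads still use every core, so this only approximates the real measurement):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolScalingBenchmark {
    static final int TASKS = 100;

    public static void main(String[] args) throws InterruptedException {
        int maxCores = Runtime.getRuntime().availableProcessors();
        for (int poolSize = 1; poolSize <= maxCores; poolSize++) {
            // A pool of poolSize threads stands in for poolSize "cores".
            ExecutorService pool = Executors.newFixedThreadPool(poolSize);
            long start = System.nanoTime();
            for (int i = 0; i < TASKS; i++) {
                pool.execute(new Runnable() {
                    public void run() {
                        busyWork();   // placeholder CPU-bound task
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
            long elapsedMs = (System.nanoTime() - start) / 1000000;
            System.out.println(poolSize + " worker(s): " + elapsedMs + " ms");
        }
    }

    // Placeholder workload; substitute the real computation here.
    static void busyWork() {
        double x = 0;
        for (int i = 0; i < 20000000; i++) {
            x += Math.sqrt(i);
        }
        if (x < 0) System.out.println(x);   // keep the result live
    }
}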
 
charlesbos73

OP: This raises a question: What effect do you intend to achieve with
such a change? For example, some operating systems allow adjusting the
scheduling priority when a process is started.

Maybe I'm not understanding your question, but there are a lot of very good reasons to force CPU affinity.

Once you've set the CPU affinity, modern OSes will respect
your choice and won't make the app run on CPUs you didn't
pick, so the effect the OP is intending to achieve [sic] are
very real on every modern Windows, OS X and Un*x OSes.

And that's perfectly normal: in quite a few cases the user (or administrator) simply "knows better" than the OS's scheduler.

This paper from Intel comes to mind:

http://software.intel.com/en-us/articles/improved-linux-smp-scaling-user-directed-processor-affinity/

Having control over the CPU affinity can be very important
for some applications.

This technique is also used on large databases running on 16-core machines, etc.

Googling yields a lot of results where setting the CPU affinity brought very real benefits. Very real, as in "there's no going back to not using this technique".

The case I'm familiar with is, however, completely different: it concerns users who mainly run one big, monstrous piece of software crunching lots of data, in which they spend most of their time because that is their work. Post-production, audio, and 3D software come to mind. For these users, slowdowns and unresponsiveness are a big no-no. In this case it's not uncommon to have "the" software running on (x-1) CPUs and *all the other apps* running on the single CPU left. This ensures better responsiveness for the app in question than any scheduler could provide.

And for these users that's all that matters: that this one app is as responsive as possible, even if some other apps suffer as a result.

Just as in some cases you can trade some memory for a ramdisk and see real benefits (because you know better than the OS's disk caching mechanism), there are Real-World [TM] cases where setting the CPU affinity is desirable.
 
Lew

charlesbos73 said:
Once you've set the CPU affinity, modern OSes will respect
your choice and won't make the app run on CPUs you didn't
pick, so the effect the OP is intending to achieve [sic] are
very real on every modern Windows, OS X and Un*x OSes.
("[sic]" was in the cited post.)

The notation "[sic]" is embedded in a quote following a grammatical or
spelling error or unusual turn of phrase that was in the original, to show
that the citation did not introduce the error. It is not normally used for
responses that contain grammatical errors. If one wished to emphasize that
the grammatical error was intentional, it would have been better to place the
"[sic]" notation after the word "are" than before it.

<http://www.merriam-webster.com/dictionary/sic>
 
John B. Matthews

charlesbos73 said:
Maybe I'm not understanding your question, but there are a lot of very good reasons to force CPU affinity.

Indeed, I was curious if any such reasons applied.
Once you've set the CPU affinity, modern OSes will respect
your choice and won't make the app run on CPUs you didn't
pick, so the effect the OP is intending to achieve [sic] are
very real on every modern Windows, OS X and Un*x OSes.

I don't understand your use of the editing symbol, _sic_, in this context.

charlesbos73 said:
And that's perfectly normal: in quite a few cases the user (or administrator) simply "knows better" than the OS's scheduler.

This paper from Intel comes to mind:

http://software.intel.com/en-us/articles/improved-linux-smp-scaling-user-directed-processor-affinity/

Thank you for this informative link.
[...]
 
