OP: This raises a question: What effect do you intend to achieve with
such a change? For example, some operating systems allow adjusting the
scheduling priority when a process is started.
Maybe I'm not understanding your question, but there are
a lot of very good reasons to force the CPU affinity.
Once you've set the CPU affinity, modern OSes will respect
your choice and won't make the app run on CPUs you didn't
pick, so the effect the OP intends to achieve is
very real on every modern Windows, OS X and Un*x OS.
And that's perfectly normal: in quite a few cases the
user (or administrator) simply "knows better" than
the OS's scheduler.
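As a concrete illustration, here is a minimal sketch of
what that looks like in code. It uses Python's
os.sched_setaffinity, which is Linux-only (Windows has
SetProcessAffinityMask for the same purpose), so take the
exact API as an assumption, not the point:

    import os

    # Pin this process (pid 0 = the calling process) to
    # logical CPUs 0 and 1. From now on the scheduler will
    # only run it on those two CPUs.
    os.sched_setaffinity(0, {0, 1})

    # The OS respects the choice: this prints {0, 1}.
    print(os.sched_getaffinity(0))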
This paper from Intel comes to mind:
http://software.intel.com/en-us/art...smp-scaling-user-directed-processor-affinity/
Having control over the CPU affinity can be very important
for some applications.
This technique is also used on mega-DBs running on
16-core machines, etc.
Googling yields a lot of results where setting the CPU
affinity brought very real benefits. Very real as
in "there's no going back to not using this technique".
The case I'm familiar with is, however, completely
different: it concerns users running mainly
one big, huge monster of an application crunching lots
of data, in which the user spends most of their time
because that is their work. Post-production, audio, and
3D software come to mind. For these users, slowdowns and
unresponsiveness are a big no-no. In this case it's
not uncommon to have "the" software running on (x-1) CPUs
and *all the other apps* running on the single
CPU left over. This ensures better responsiveness for the
app in question than any scheduler could provide.
And for these users that's all that matters: that this
one app is as responsive as possible, even if some
other apps suffer for it.
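Here is a hedged sketch of that "(x-1) CPUs for the big
app, one CPU for everything else" split, again Python on
Linux, where "bigapp" is a made-up placeholder for
whatever the monster application actually is:

    import os
    import subprocess

    ncpus = os.cpu_count()
    # CPUs 1 .. n-1 are reserved for the big app.
    all_but_first = set(range(1, ncpus))

    # Pin this launcher to CPU 0 first: children inherit
    # affinity, so every other app started from here stays
    # on CPU 0.
    os.sched_setaffinity(0, {0})

    # Give the big app every CPU except CPU 0; the mask is
    # set in the child between fork() and exec().
    subprocess.Popen(
        ["bigapp"],
        preexec_fn=lambda: os.sched_setaffinity(0, all_but_first),
    )

In practice admins often do the same thing from the shell
(taskset on Linux launches a command with a given affinity
mask), but the mechanism is identical.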
Just like in some cases you can trade some memory
for a ramdisk and see real benefits (because you
know better than the OS's disk caching mechanism),
there are Real-World [TM] cases where setting the
CPU affinity is desirable.