Change character in string

  • Thread starter Dirk Bruere at NeoPax

Bent C Dalager

Vector, otoh, already has its first fork when .equals() is invoked
on it, so it's too late for grabbing the napkin now. To repair
Vector, it would have to be made unsynchronized itself, and then
each method would have to acquire Collections.synchronizedCollection's
mutex object, and sync on that, first.

Wouldn't this necessarily lead to strict serialization of all
synchronized Collections? The way Java structures its synchronized
blocks you can't first take the napkin, then take two forks, then
release the napkin, then only later release the forks. Or am I missing
something?

Also, it would only work so long as there are no elements within the
lists that use some other collections framework for /their/ contents
since that other framework would synch on its own global mutex.
Maybe that's the reason why Sun says that Vector is slightly
faster: because it doesn't use the
central lock - and that has a hidden price tag that may really hurt.

My guess is that Vector may be slightly faster because it doesn't have
the SynchronizedList indirection layer on top. I can't imagine why a
decent JIT compiler wouldn't be able to optimize that away though.

Cheers,
Bent D
 

Andreas Leitgeb

Bent C Dalager said:
Either way of doing it is wrong: in one case you end up deadlocking
and in the other you're not synchronizing on all the data that you
need. And neither of these flaws is obvious to the user, or even
documented in any way that I know of.

So that's now 100 penalty points this time against sync'd ArrayList?
The world just refuses to be simple.
 

Andreas Leitgeb

I already corrected that; it isn't really how it is.
As it seems, the "mutex" is just a final copy of the wrapper's this,
which doesn't make it too useful, afaict.
Also, it would only work so long as there are no elements within the
lists that use some other collections framework for /their/ contents
since that other framework would synch on its own global mutex.

It's really not a List-, Vector-, or .equals()-specific problem.
Wherever any synchronized method invoked on one object needs access
to synchronized accessors of another object, the problem comes up.
And this could surely be rephrased much more generally.

PS:
Perhaps one has to choose among Vector and ArrayList depending on
whether one prefers deadlocks to corrupted data or vice versa. ;-/
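For concreteness, here is a minimal sketch of the deadlock side of that
trade-off (class name and sizes invented; whether it actually hangs depends
on JDK version and scheduling). Vector.equals() is synchronized on its own
Vector, but iterating over the argument goes through the argument's
synchronized accessors, so two threads comparing in opposite directions can
acquire the two locks in opposite orders:

import java.util.Vector;

public class VectorEqualsDeadlockDemo {
    public static void main(String[] args) {
        Vector<Integer> a = new Vector<>();
        Vector<Integer> b = new Vector<>();
        for (int i = 0; i < 50_000; i++) { a.add(i); b.add(i); }

        // Each thread compares in the opposite direction, inviting
        // lock-order reversal: t1 holds a's lock and wants b's, t2 the converse.
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1_000; i++) a.equals(b); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1_000; i++) b.equals(a); });
        t1.start();
        t2.start(); // if the reversal bites, neither thread ever finishes
    }
}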
 

Lew

Andreas said:
So that's now 100 penalty points this time against sync'd ArrayList?
The world just refuses to be simple.

It's not a penalty against either one. Neither synched class can claim to be
thread-safe.

As Goetz's /Java Concurrency in Practice/ and other sources point out,
whenever one deals with multiple objects cooperating to provide shared state
or multiple methods interacting with shared state, it is not sufficient to
synchronize on only one object or method at a time.

Examples where this pertains include the broken double-checked locking idiom,
check-and-set scenarios, and the thing that messed up 'Vector#equals()':
acquisition of multiple locks by different threads in different orders.

Bent's example of deadlock rested on a deliberate violation of proper
concurrency practice. It is the sort of violation that can readily happen
unintentionally in a larger context.

As to the great "Vector vs. ArrayList" cage match, the decision rests on which
one is easier to control. For applications that need a totally thread-safe
'equals()' comparison of lists, one cannot rely on the internal
synchronization of a synched 'List', neither version. One must use explicit
synchronization, ergo probably an unsynchronized 'List' implementation like
'ArrayList' (without a 'synchronizedList()' wrapper) under a caller's lock.
In other words, both 'Vector' and 'synchronizedList()' lose in favor of
correct coding with careful thought.

Yet another reason to eschew 'Vector' - the myth that it's thread-safe leads
one astray, and the fix leads back to a "real" collections-framework 'List'
anyway.
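To make that prescription concrete, a minimal sketch (class and field names
invented): a pair of unsynchronized ArrayLists guarded by one caller-owned
lock, so equals() never has to juggle two monitors.

import java.util.ArrayList;
import java.util.List;

public class GuardedLists {
    private final Object lock = new Object();
    private final List<String> a = new ArrayList<>();
    private final List<String> b = new ArrayList<>();

    public void addToA(String s) {
        synchronized (lock) { a.add(s); }
    }

    public void addToB(String s) {
        synchronized (lock) { b.add(s); }
    }

    public boolean listsEqual() {
        synchronized (lock) { // the same lock as every mutation
            return a.equals(b);
        }
    }
}

One lock covering both lists means there is no second lock to acquire
mid-comparison, hence no lock-order reversal and no deadlock.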
 

Lew

Andreas said:
This "mutex" is a *non*-static field, so something was wrong
with my previous speculations. No central napkin on the table.

But then, why does it work?

It doesn't, really. It can lead to inconsistent results instead of deadlock.
Correct resolution of Bent's example requires a lock outside the 'List'
altogether.
 

Bent C Dalager

So that's now 100 penalty points this time against sync'd ArrayList?
The world just refuses to be simple.

This is just multithreading rearing its ugly head. It's deceptively
difficult to get right.

After realizing the above List synch issues some years ago I promptly
adopted the idea that all my data structures need to be immutable and
that whenever I think I require one that is not, I need to realize it
is going to be 10x to 100x as expensive to use and maintain. This will
tend to prompt me to consider alternative algorithms first.
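A sketch of that immutable-first style (names invented). Note the defensive
copy: Collections.unmodifiableList is only a view of its argument, so the
backing list must not escape.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class Snapshots {
    // Build once, wrap, share freely; an "update" makes a new snapshot.
    public static List<String> snapshotOf(List<String> source) {
        return Collections.unmodifiableList(new ArrayList<>(source));
    }

    public static void main(String[] args) {
        List<String> v1 = snapshotOf(Arrays.asList("a", "b"));
        List<String> v2 = snapshotOf(Arrays.asList("a", "b"));
        // Safe on any thread with no locking at all, because neither
        // snapshot can change underneath the comparison.
        System.out.println(v1.equals(v2)); // true
    }
}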

And, yes, sometimes I /do/ miss C++'s "const". But only sometimes :)

Cheers,
Bent D
 

Lew

Peter said:
It's not a myth that Vector is thread-safe. It _is_ thread-safe.
What's a "myth" is that you can take any random thread-safe data object
and not concern yourself with threading issues related to the use of
that object. In fact, deadlock is a very real concern with _any_ object
that is thread-safe. Just because an object is thread-safe, that
doesn't mean you don't then need to think about how you're using it.

OK, that's a better expression of what I was driving at.
And the "fix" is to lock _around_ the entire operation that shares two
"thread-safe" objects, so that you don't wind up having them trying to
take each other's locks while they have acquired their own. There's
nothing about that that would "lead back" to ArrayList or other
implementations.

Unless of course you had a separate agenda you're trying to push. :p

No agenda but to express what you phrased more clearly. I've been studying
concurrency since before I learned Java and the only thing I'm sure of today
is that I still have to learn more and tread carefully.
 

Tom Anderson

In the task of comparing two synchronized Lists there is a
"meta"-critical path: the one where a List's own lock and a foreign lock
may(*) be acquired (i.e. the infamous dining philosophers problem).
If for any pair of locks (namely each List's own) there is a master
lock, which must be acquired first, then the problem is avoided.

For the philosophers that would mean, that before even taking their
first fork, they'd first have to grab the one napkin in the center of
the table (ugh :)

Actually, there's another way: if there's a total order on the forks, then
the philosophers just have to agree to always pick them up in that order.
Let's say they're numbered, with fork 1 being the highest priority,
followed by 2, etc. If you've got fork m, you can only ever be waiting for
fork n, where n > m; whoever holds fork n can only ever be waiting for
fork p, where p > n, and since that means p > m, they can't be waiting for
the fork you're holding, and so there's no possibility of deadlocking with
them. You can extend that analysis to show that you also can't make
deadlocks with larger numbers of people.

In the philosophers' case, there's an even simpler solution that doesn't
involve a total order: you colour the forks red or black, and lay them
around the table in alternating colours (and if there are an odd number of
diners, you set an extra place, or give one of them chopsticks). You then
instruct the philosophers to pick up their black fork first, then their
red fork. You can only ever be waiting for a red fork, while holding a
black fork, so you know that nobody who holds the fork you're after can
ever be waiting for a fork you're holding. Deadlock cannot ensue.
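A toy sketch of that red/black rule (names and counts invented, even table
size as assumed above): forks alternate colour around the table and every
philosopher locks his black fork before his red one, so no cycle of waiting
can form.

public class ColouredForks {
    public static void main(String[] args) {
        final int n = 4;                      // an even number of diners
        final Object[] forks = new Object[n]; // fork i sits to the left of diner i
        for (int i = 0; i < n; i++) forks[i] = new Object();

        for (int d = 0; d < n; d++) {
            final Object left = forks[d];
            final Object right = forks[(d + 1) % n];
            // Even-numbered forks are "black"; always lock the black one first.
            final Object first = (d % 2 == 0) ? left : right;
            final Object second = (d % 2 == 0) ? right : left;
            final int diner = d;
            new Thread(() -> {
                for (int meal = 0; meal < 100_000; meal++) {
                    synchronized (first) {
                        synchronized (second) {
                            // eat
                        }
                    }
                }
                System.out.println("diner " + diner + " finished");
            }).start();
        }
    }
}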

The former case is applicable to the list equals problem, although the
latter is not. The solution looks like:

// assume non-identity, non-nullity and listness are known by this point
public boolean equals(List<?> that) {
    List<?> first;
    List<?> second;
    if (order(this, that)) {
        first = this;
        second = that;
    } else {
        first = that;
        second = this;
    }
    synchronized (first) {
        synchronized (second) {
            return list.equals(that); // 'list' is the wrapped backing list
        }
    }
}

static boolean order(List<?> a, List<?> b) {
    // this is the only tricky bit
    // but as long as it's consistent, it doesn't matter how you do it
}

To implement order, i'd just compare the identity hashes:

static boolean order(List<?> a, List<?> b) {
    if (a == b) throw new IllegalArgumentException();
    int ida = System.identityHashCode(a);
    int idb = System.identityHashCode(b);
    if (ida == idb) throw new RuntimeException("whoops");
    return ida > idb;
}

The one weakness there is that it fails for different objects which have
the same identity hash. You could mitigate this by adding some
tie-breaking rules (eg go on the lexicographical order of their class
names), but there's no easy way to guarantee an answer.

You could break ties arbitrarily, and then record the decision in a global
map, so that if you saw the same pair again, you could give a consistent
decision. That's pretty evil, though.
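For illustration, a sketch of that "evil" cache (IdentityPair and all other
names invented). It decides arbitrarily on the first collision and remembers
the decision; note that a WeakHashMap key nothing else references can be
collected, losing the recorded decision, which is part of what makes this
evil.

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.WeakHashMap;

final class IdentityPair {
    final Object x, y;
    IdentityPair(Object x, Object y) { this.x = x; this.y = y; }

    // Reference equality of the two elements, ignoring their order,
    // so (a, b) and (b, a) are the same key.
    @Override public boolean equals(Object o) {
        if (!(o instanceof IdentityPair)) return false;
        IdentityPair p = (IdentityPair) o;
        return (x == p.x && y == p.y) || (x == p.y && y == p.x);
    }
    @Override public int hashCode() {
        // XOR keeps the hash order-insensitive, matching equals().
        return System.identityHashCode(x) ^ System.identityHashCode(y);
    }
}

final class TieBreaker {
    private static final Map<IdentityPair, Object> TIES =
        Collections.synchronizedMap(new WeakHashMap<>());

    static boolean order(List<?> a, List<?> b) {
        if (a == b) throw new IllegalArgumentException();
        int ida = System.identityHashCode(a);
        int idb = System.identityHashCode(b);
        if (ida != idb) return ida > idb;
        // Collision: the first caller records its first argument as the
        // winner; later calls with the same pair, in either order, see
        // the same decision.
        Object first = TIES.computeIfAbsent(new IdentityPair(a, b), k -> a);
        return first == a;
    }
}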

tom
 

Tom Anderson

The same listIterator is used by SynchronizedArrayList but the get()
/it/ calls is that of the base unsynchronized ArrayList.

Either way of doing it is wrong: in one case you end up deadlocking and
in the other you're not synchronizing on all the data that you need. And
neither of these flaws is obvious to the user, or even documented in any
way that I know of.

I have always found this rather serious because, of all the methods you
can call on a List, you'd expect equals() to be among the safer ones. All
sorts of algorithms rely on being able to use it all the time - but
perhaps that's my GUI bias shining through.

I can't vouch for my own unbiasedness, but this definitely looks like a
bug to me. I think you can just about defend the status quo on the grounds
of the wording of synchronizedList's documentation:

Returns a synchronized (thread-safe) list backed by the specified list.

Which says that the returned list is thread-safe (whatever that means),
but nothing about it interacting with other lists in a thread-safe way.
But still, it's lame.

Even with better locking in the synchronizedList wrapper, though, you
can't guarantee thread-safety - if the other list is a normal
non-synchronized one, there's no way to detect or prevent modification by
another thread during the comparison. Still, it would be good to have a
promise that if access to the other list *is* properly synchronized, then
equals() *will* be thread-safe.

tom
 

John B. Matthews

[...]
In the philosophers' case, there's an even simpler solution that
doesn't involve a total order: you colour the forks red or black, and
lay them around the table in alternating colours (and if there are an
odd number of diners, you set an extra place, or give one of them
chopsticks). You then instruct the philosophers to pick up their
black fork first, then their red fork. You can only ever be waiting
for a red fork, while holding a black fork, so you know that nobody
who holds the fork you're after can ever be waiting for a fork you're
holding. Deadlock cannot ensue.
[...]

Here's an example of the approach using philosopher chirality rather
than chopstick color:

<http://www.doc.ic.ac.uk/~jnm/book/book_applets/Diners.html>
<http://www.doc.ic.ac.uk/~jnm/book/book_applets/FixedDiners.html>

Here are three other schemes based on work by Michael Feldman in Ada,
described in the second link:

<http://faculty.cs.wwu.edu/reedyc/CS145_Winter_2005/Lecture_Notes/CS145DiningPhil.html>
<http://faculty.cs.wwu.edu/reedyc/CS145_Winter_2005/Lecture_Notes/diners/society.ads.html>
 

Andreas Leitgeb

Btw., this also extends further: the rivalling school of misosophists at the
next table can have their own table's central napkin, and need not
go over to their arch-foes' table to take theirs. The misosophists, of
course, also won't borrow any of the philosophers' forks.
Actually, there's another way: if there's a total order on the forks, then
the philosophers just have to agree to always pick them up in that order.

That thought also occurred to me - if Java internally maintains some object
identity that won't change during the whole lifetime of the object,
especially not during any GC'ing. (I don't know if "the internal address
of the object" as mentioned in the javadoc for java.lang.Object really has
that property. It should, but it's not guaranteed to exist.) With that,
Java could add a multisync(obj1,obj2,...) feature, that would first make
a consistent re-arrangement of the given objects based on that identity,
and then lock them all in the "same" order.
As a small inconvenience it would be necessary to throw some Exception
if the current Thread already held any of these objects' locks. (Strictly,
it could even allow certain held locks - namely if they are the first few
according to the internal arrangement - but that would make it rather
unpredictable for the programmer, who doesn't know the internal ordering
but may vaguely know about this Thread's already held locks. The trivial
cases could even be caught by the compiler.)
I do not propose this feature - just brainstorm what could be possible.
// assume non-identity, non-nullity and listness are known by this point
public boolean equals(List<?> that) {
    List<?> first;
    List<?> second;
    if (order(this, that)) {
        first = this;
        second = that;
    } else {
        first = that;
        second = this;
    }
    synchronized (first) {
        synchronized (second) {
            return list.equals(that); // 'list' is the wrapped backing list
        }
    }
}

That's about what multisync would do just without exposing the
order and with any (>=2) number of objects. And you must be
sure that the calling thread doesn't yet have any of the locks,
or you still may run into deadlocks (namely if you happens to
have the second one).
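A user-level sketch of what such a multisync could look like (all names
invented): sort the lock targets into a consistent order by identity hash -
with all the non-uniqueness caveats discussed later in the thread - and
implement the thrown-Exception caveat via Thread.holdsLock.

import java.util.Arrays;
import java.util.Comparator;

public final class MultiSync {
    public static void multiSync(Runnable body, Object... locks) {
        Object[] ordered = locks.clone();
        Arrays.sort(ordered, Comparator.comparingInt(System::identityHashCode));
        for (Object lock : ordered) {
            if (Thread.holdsLock(lock)) {
                throw new IllegalStateException("lock already held by this thread");
            }
        }
        lockAll(ordered, 0, body);
    }

    // Acquire the locks recursively, always in the sorted order.
    private static void lockAll(Object[] locks, int i, Runnable body) {
        if (i == locks.length) {
            body.run();
        } else {
            synchronized (locks[i]) {
                lockAll(locks, i + 1, body);
            }
        }
    }
}

Usage would be something like
MultiSync.multiSync(() -> System.out.println(a.equals(b)), a, b);
with ties between equal identity hashes still unhandled.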
 

Little Green Man

In the philosophers' case, there's an even simpler solution that doesn't
involve a total order: you colour the forks red or black, and lay them
around the table in alternating colours (and if there are an odd number of
diners, you set an extra place, or give one of them chopsticks). You then
instruct the philosophers to pick up their black fork first, then their
red fork. You can only ever be waiting for a red fork, while holding a
black fork, so you know that nobody who holds the fork you're after can
ever be waiting for a fork you're holding. Deadlock cannot ensue.

The former case is applicable to the list equals problem, although the
latter is not.

The graph whose nodes are objects that are synchronized and whose
edges are between all (and only) pairs that get locked together must
be two-colorable. This won't be true in general, though as you noted
it is true when the objects form a line, or are even in number and
form a circle, and only adjacent pairs are locked together.
static boolean order(List<?> a, List<?> b) {
    // this is the only tricky bit
    // but as long as it's consistent, it doesn't matter how you do it
}

The evil mythical being is in the details, is it not?

Sun should add a construct

synchronized (x, y, z, ...) {

}

that allows any number of items in the parentheses and always locks
the same set of objects in a deterministic order.

As for how this could be implemented.

One way adds four bytes (eight on 64 bit systems) to an object: add an
"object number" hidden field that is globally unique. (It might also
be used as the identity hash, making those unique, possibly improving
some hash table performance). These might be issued sequentially in
allocation order, but that requires all allocations to go through a
global lock. Smarter might be to partition the space of possible
object numbers into non-overlapping ranges that are assigned to the
thread-local allocation buffers; objects allocated from these get
numbers from the associated range. When a TLAB needs to be
replenished, it also gets a new, previously-unused range. (And the
TLAB now "needs to be replenished" if it can't satisfy an allocation
*either* of memory *or* of an object number). This returns the
performance of allocation close to what it is in present Hotspot VMs.
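As a user-level analogy (not a VM change; names invented), the
range-per-thread idea looks like this: a global counter hands out blocks of
numbers, and each thread then numbers "its" objects out of a thread-local
block with no further locking, the way a TLAB satisfies allocations.

import java.util.concurrent.atomic.AtomicLong;

public final class IdRanges {
    private static final long RANGE = 1L << 20;            // IDs per block
    private static final AtomicLong NEXT_RANGE = new AtomicLong();

    // Per-thread state: {next ID, limit of the current block}.
    private static final ThreadLocal<long[]> LOCAL =
        ThreadLocal.withInitial(() -> new long[] {0, 0});

    public static long nextId() {
        long[] s = LOCAL.get();
        if (s[0] == s[1]) {                                 // block exhausted
            long base = NEXT_RANGE.getAndIncrement() * RANGE;
            s[0] = base;                                    // grab a fresh block
            s[1] = base + RANGE;
        }
        return s[0]++;
    }
}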

Another way would use the object's hashCode return, with tie-breaking
by assigning an "object number" on the fly to only the objects that
come up in ties (which gets reused for the same objects if they come
up again). A global lock only has to be acquired when a particular
object comes up in a tie for the first time. Tiebreaking could proceed
first by hashCode and then by (if different) identityHashCode, further
reducing (in some cases) the need to acquire that global lock.

This method has the advantage of not needing an additional field in
every object, though it needs a way to associate a small subset of all
objects with a tie-breaking number.

Personally, I'd favor ensuring that the identity hash code is globally
unique and using that, with the range-allocation-to-TLAB method to
avoid damaging the performance of allocations. Since objects *already*
have to get, and somehow encode, their identity hash code, this
shouldn't make objects any bigger or allocation any slower if done
right.

Of course, the hashCode() and System.identityHashCode() methods have a
return type of "int", not "long", so though the internal "identity
hash code" would be unique on 64-bit platforms, the hash code method
returns still wouldn't be, though they now would be on 32-bit
platforms. The hash code methods would have to return the low 32 bits
of the internal number.

There's also a consideration with long-running systems eventually
going through the entire pool of numbers (though it might take an
astronomically long time in the 64-bit case). Two solutions to *that*:

* Use 64 bits no matter what, and pray to one of your culture's deities.
* In combination with the new "G1" collector, associate ranges (and
  presumably TLABs) with GC spaces within the heap; whenever that space
  is free of live objects, that range of object numbers is noted as
  available for reuse. Indeed, the spaces and the ranges can be
  permanently associated to one another. Objects move less in G1, so the
  initial address of an object can be used most of the time. When an
  object has been moved and is still live, though, the same initial
  address mustn't be reused. If there is an efficient way to detect that
  such an object exists (with the GC presumably keeping track), the
  address of a new object allocated at the same spot can be "perturbed"
  to give the "object number". The lowest three bits and many of the
  highest bits are all likely to be zero, so flipping bits 0, 1, 2, and
  then 63 (or 31), 62 (or 30), etc. might be used to displace the
  numbers of objects initially allocated in the same spot from one
  another.
* If an OS exists that can hand out huge virtual address spaces to
  processes and efficiently use only as much page file and RAM as the
  process uses from that address space, however distributed, then JVMs
  on that platform can just allocate all objects at unique addresses!

Here, I'd say the best bet is just to associate a range with a TLAB,
and when a range empties or a TLAB empties, use a new range and a new
TLAB; if the global counter used to generate new ranges wraps, so be
it. Low values are likely to be mostly unused. Wrapping should be
rare-to-inconceivable if longs are used internally.

A less efficient but robust solution is to associate with each object
a unique *ternary* string, stored using the bit-pairs 01, 10, and 11,
with 00 a sentinel value, and making this get as long as is needed.
Long-running systems will slowly lose efficiency of allocation,
identity hash codes, and multi-synchronization but never acquire the
risk of deadlock.

The ternary string with terminator reaches 32 bits at 15 ternary
digits stored, so this becomes worse than the current situation, space-
wise, at around 3^15 = 14,348,907 objects allocated. That is a
dismayingly small number. But it only becomes worse, space-wise, than
64-bit binary integers at around 3^31, roughly 6 x 10^14 objects
allocated, or several hundred trillion. (At that point, the ternary
strings grow past 64 bits, at two bits per ternary digit plus the
two-bit sentinel terminator.)

Speed-wise, it is slightly worse from the get-go due to the more
complex operations to be performed.

Normal speed efficiency can be restored by using integers of machine-
native size up until -1 (all Fs in hex; the "maximum" before it wraps
to numbers already used) and then layering on an alternative scheme
for all subsequent objects (which are marked by all having -1 in the
"object number" field). Ternary strings is one such alternative
scheme. So is a second integer field, used until it becomes -1, then a
third, and so forth, or a zero-terminated string of bytes or words
used as a bignum, or something. (A bignum that only has to support
order-comparison and addition, simplifying things somewhat.)

One final option, but with potential difficulties, involves freeing
and reallocating numbers using the garbage collector through garbage
collecting in *both* memory *and* the space of numbers, marking ranges
free when no objects with numbers in that range are live. Another
option involving the GC is to use objects' memory addresses at all
times (very cheap 99% of the time, though it can't be used for the
identity hash code then, unless the latter is copied from the address
and then frozen) and do something "special" if the object is forced to
move. The deadlock risk arises if some objects are being locked in
sequence by two threads AND one of these objects is moved. Suppose A <
B: a thread locks A, then B is moved so that B < A; the other thread locks
B, and now each is waiting on the other object. Oops.

Since the deadlock risk only arises while a multi-synchronized is
actually in the process of acquiring locks, though, we can do a few
tricks:

* Multi-synchronized may start by flagging all of the objects, then
  proceed to get their numbers, then begin acquiring locks, then
  unflag the objects. The garbage collector will refuse to move objects
  that are flagged at the time, waiting until the next GC or just
  waiting until the object is unflagged. A volatile boolean suffices for
  the flag, obviating the need for global locking.
* A GC moving objects can cause multi-synchronizeds that are in progress
  to abort -- that is, the lock-acquisition behavior is viewed as a
  transaction. If it acquires a few locks but hasn't got them all when a
  GC occurs that moves an object, it releases all the acquired locks,
  then begins acquiring locks all over again, in (possibly-changed)
  sequence order. Since none of the objects has been modified by the
  *body* of the synchronized block yet, releasing and reacquiring locks
  doesn't violate thread-safety.

The above might actually be the simplest to implement of all of these
ideas.

I leave it up to Sun's engineers to pick whichever one they think is
best, or come up with an alternative all their own.
 

Little Green Man

  *body* of the synchronized block yet, releasing and reacquiring
locks
  doesn't violate thread-safety.

Actually, no, I did not. That is incorrect. All of those lines were a
maximum of 72 characters long and therefore should not have wrapped.
The responsible party will now apologize in public and publicly
absolve me of any responsibility for *their* poor formatting of *my*
post.
 

Lars Enderin

Little said:
Actually, no, I did not. That is incorrect. All of those lines were a
maximum of 72 characters long and therefore should not have wrapped.
The responsible party will now apologize in public and publicly
absolve me of any responsibility for *their* poor formatting of *my*
post.

Why not blame Google, instead of trying to find somebody else to blame?
I think I know who you are. There can't be more than one person with
that particular syndrome. Your alias is also a give-away.
 

Lew

Little said:
Actually, no, I did not. That is incorrect.

You did not what, violate thread safety?

I am confused.
All of those lines were a maximum of 72 characters long and therefore should not have wrapped.
The responsible party will now apologize in public and publicly
absolve me of any responsibility for *their* poor formatting of *my*
post.

I don't think Google is in the habit of jumping in on these conversations.

What does it matter anyhow? The line wrap in no way impedes the sense of your
post.
 

Tom Anderson

Btw., this also extends further: the rivalling school of misosophists at the
next table can have their own table's central napkin, and need not
go over to their arch-foes' table to take theirs. The misosophists, of
course, also won't borrow any of the philosophers' forks.

That reminds me: there is yet another solution. You have a global map
(probably a WeakHashMap) from Pair<List> to Object, which is lazily filled
with pairs of any interesting lists, mapped to Objects to which there are
no heap references outside the map; pairs (a, b) and (b, a) always map to
the same Object. Pair's equals and hashCode methods have to be based on
the identity of the lists rather than their contents. Then, to do an
equals, you use Pair(this, that) as a key into the map, and use the
resulting object as a mutex.
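A sketch of that map, reusing the identity-based, order-insensitive
IdentityPair key sketched earlier in the thread (names invented; making the
weak references behave is the hard part):

import java.util.Map;
import java.util.WeakHashMap;

final class PairMutexes {
    private static final Map<IdentityPair, Object> MUTEXES = new WeakHashMap<>();

    // (a, b) and (b, a) map to the same key, hence the same mutex.
    static synchronized Object mutexFor(Object a, Object b) {
        return MUTEXES.computeIfAbsent(new IdentityPair(a, b), k -> new Object());
    }
}

An equals() built on it would synchronize on mutexFor(this, that) before
comparing, taking exactly one lock per pair and so never deadlocking against
the reverse comparison.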
That thought also occurred to me - if Java internally maintains some object
identity that won't change during the whole lifetime of the object,
especially not during any GC'ing.

That's exactly what an identity hash code is. See System.identityHashCode,
and the thread we had here several months ago where someone dug out the
actual implementation(s).
(I don't know if "the internal address of the object" as mentioned in
the javadoc for java.lang.Object really has that property. It should,
but it's not guaranteed to exist.) With that, Java could add a
multisync(obj1,obj2,...) feature, that would first make a consistent
re-arrangement of the given objects based on that identity, and then
lock them all in the "same" order.

The problem is that the identity hash is no more than 32 bits in size, and
a JVM can have more than 2**32 objects, at least on 64-bit machines. That,
and the fact that the definition does not promise that objects will have
unique identity hashes, and the implementation that's used doesn't make
any particular effort to achieve it (we think it's a per-thread
pseudorandom number generator).
As a small inconvenience it would be necessary to throw some Exception
if the current Thread already held any of these objects' locks. (Strictly,
it could even allow certain held locks - namely if they are the first few
according to the internal arrangement - but that would make it rather
unpredictable for the programmer, who doesn't know the internal ordering
but may vaguely know about this Thread's already held locks. The trivial
cases could even be caught by the compiler.)

Eh, it's threading, unpredictable is par for the course.
I do not propose this feature - just brainstorm what could be possible.


That's about what multisync would do just without exposing the order and
with any (>=2) number of objects. And you must be sure that the calling
thread doesn't yet have any of the locks, or you still may run into
deadlocks (namely if you happen to have the second one).

Yes, i hadn't thought of that when i wrote my code, but it's quite true.

However, given the limitations of the existing identity hashes, i don't
think it's possible.

If the VM used a GC which, although it moved objects around, never changed
their order (which is not entirely unreasonable - generational GC looks a
lot like that anyway), then it could implement multi-locking using the
order of objects in memory. In fact, it wouldn't have to keep them in
order as long as it could reconstruct the allocation order; if it kept
objects allocation-ordered within particular regions, and knew which order
those regions were allocated in, that would also do it. That would fit
with a generational collector with a pair of semi-spaces for young
objects, which is what i believe the current standard collector is. I
don't know if it preserves object order, though - from what i know of
classical moving collectors, they don't, since they move objects in the
order of a breadth-first traversal of the object graph.

tom
 

Tom Anderson

In the second line of this paragraph, didn't you mean "wouldn't need" ?

Yes. Sorry.
If talk was about pure legacy code, then -source 1.4 should get rid of
all those unchecked-warnings, and such code also doesn't contain
variables with types like List<Foo>. But even if *such* code is
compiled with -source 1.5 and spews out gobs of [unchecked] warnings,
it's still in no way less safe than the same code was back when compiled
with a 1.4 javac.

True, of course.
If, however, you're talking about new sources having to use Vector for
old interfaces, then not changing Vector at all would surely have been
the easiest signal by Sun to declare it unmisunderstandably as obsolete.
Yet, they didn't.

By that point, Vector implemented List, so when List became generic, they
had a choice between adding a binding or a type parameter to Vector, or
having it generate warnings on compilation. I don't think i would have
liked the latter.

tom
 

Tom Anderson

[...]
In the philosophers' case, there's an even simpler solution that
doesn't involve a total order: you colour the forks red or black, and
lay them around the table in alternating colours (and if there are an
odd number of diners, you set an extra place, or give one of them
chopsticks). You then instruct the philosophers to pick up their
black fork first, then their red fork. You can only ever be waiting
for a red fork, while holding a black fork, so you know that nobody
who holds the fork you're after can ever be waiting for a fork you're
holding. Deadlock cannot ensue.
[...]

Here's an example of the approach using philosopher chirality rather
than chopstick color:

<http://www.doc.ic.ac.uk/~jnm/book/book_applets/Diners.html>
<http://www.doc.ic.ac.uk/~jnm/book/book_applets/FixedDiners.html>

Which i think comes to exactly the same thing, but is much easier in terms
of buying forks.

I was wondering why this didn't have the same problem as fork colour does
with odd numbers of philosophers, but then i realised that it doesn't
actually (i think!) need alternation at all: there just has to be *one*
left-handed philosopher. The others can build up long chains of
fork-waiting (in which everyone holds a right fork and is waiting for a
left), but as soon as such a chain hits him, it loses its potential to
cause a deadlock, because he will never be holding a needed (right to him)
fork while waiting for a fork someone else in the chain holds (left to him
- he will have picked that up first). I think that also means, if you
insist on multi-coloured forks, you can get away with one golden fork,
which has to be picked up first, and have the rest be plain stainless
steel, and have the philosophers pick up the left-hand fork first if
they're both stainless. I could be very wrong about all that, though!
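A toy sketch of the single-left-hander variant (names invented): only diner
0 reaches for his left fork first, and that one break in chirality is enough
to keep a full cycle of waiting from closing.

public class OneLeftHander {
    public static void main(String[] args) {
        final int n = 5;                      // odd numbers are fine here
        final Object[] forks = new Object[n];
        for (int i = 0; i < n; i++) forks[i] = new Object();

        for (int d = 0; d < n; d++) {
            final Object left = forks[d];
            final Object right = forks[(d + 1) % n];
            // Everyone is right-handed except diner 0.
            final Object first = (d == 0) ? left : right;
            final Object second = (d == 0) ? right : left;
            final int diner = d;
            new Thread(() -> {
                for (int meal = 0; meal < 100_000; meal++) {
                    synchronized (first) {
                        synchronized (second) {
                            // eat
                        }
                    }
                }
                System.out.println("diner " + diner + " finished");
            }).start();
        }
    }
}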

If by 'three other schemes' you mean the philosopher types listed in the
second link, i don't think those are solutions to the problem, they're
different problematic kinds of philosopher behaviour. I could be missing
something here.


tom
 

Tom Anderson

Sun should add a construct

synchronized (x, y, z, ...) {

}

that allows any number of items in the parentheses and always locks
the same set of objects in a deterministic order.

As for how this could be implemented.

One way adds four bytes (eight on 64 bit systems) to an object: add an
"object number" hidden field that is globally unique. (It might also be
used as the identity hash, making those unique, possibly improving some
hash table performance). These might be issued sequentially in
allocation order, but that requires all allocations to go through a
global lock. Smarter might be to partition the space of possible object
numbers into non-overlapping ranges that are assigned to the
thread-local allocation buffers; objects allocated from these get
numbers from the associated range. When a TLAB needs to be replenished,
it also gets a new, previously-unused range. (And the TLAB now "needs to
be replenished" if it can't satisfy an allocation *either* of memory
*or* of an object number). This returns the performance of allocation
close to what it is in present Hotspot VMs.

See if you can find the thread where we dug up and interpreted the
implementation of identityHashCode - it's almost exactly as you've
described. Except that numbers are only assigned when an object's identity
has is first asked for, not at allocation, thus avoiding the work of
assigning numbers to objects which will never need them.
Another way would use the object's hashCode return, with tie-breaking by
assigning an "object number" on the fly to only the objects that come up
in ties (which gets reused for the same objects if they come up again).
A global lock only has to be acquired when a particular object comes up
in a tie for the first time. Tiebreaking could proceed first by hashCode
and then by (if different) identityHashCode, further reducing (in some
cases) the need to acquire that global lock.

The trouble with that is when you're dealing with mutable objects: a
change in state may mean a change in hashCode order, which means that two
attempts to lock the same pair of objects at different times may go in
different orders, which could lead to deadlock.
There's also a consideration with long-running systems eventually
going through the entire pool of numbers (though it might take an
astronomically long time in the 64-bit case). Two solutions to *that*:

* Use 64 bits no matter what, and pray to one of your culture's deities.

No - if we're going to come up with barmy schemes, we might as well come
up with ones which are guaranteed to work.
* In combination with the new "G1" collector, associate ranges (and
  presumably TLABs) with GC spaces within the heap; whenever that space
  is free of live objects, that range of object numbers is noted as
  available for reuse. Indeed, the spaces and the ranges can be
  permanently associated to one another. Objects move less in G1, so the
  initial address of an object can be used most of the time. When an
  object has been moved and is still live, though, the same initial
  address mustn't be reused. If there is an efficient way to detect that
  such an object exists (with the GC presumably keeping track), the
  address of a new object allocated at the same spot can be "perturbed"
  to give the "object number". The lowest three bits and many of the
  highest bits are all likely to be zero, so flipping bits 0, 1, 2, and
  then 63 (or 31), 62 (or 30), etc. might be used to displace the
  numbers of objects initially allocated in the same spot from one
  another.

I think the detection would be the hard part.

Another option might be to have some kind of monotonically increasing
number - something like time, although it only has to increase when things
get moved - and use that in the high bits, so that objects allocated in
the same place at different times don't get the same number.

In fact, if you just use the offset of the object in the block as the low
bits, then this works as long as every block into which things are
allocated gets a unique prefix for the high bits. So, you have a global
counter, and every time you designate a block as a nursery, you use that
counter to issue it with a prefix. If allocation blocks are 2**n bytes in
size and you allocate on 2**m-byte boundaries, you have wordsize - n + m
high bits to play with; with 1 MB blocks (a number pulled randomly from
the air) and 8-byte boundaries, that's 17 low bits, and 15 high bits on
32-bit machines, or 47 on 64-bit machines. 2**15 is probably not enough;
2**47 is.
* If an OS exists that can hand out huge virtual address spaces to
processes and efficiently use only as much page file and RAM as the
process uses from that address space, however distributed, then JVMs
on that platform can just allocate all objects at unique addresses!

As long as pages are bigger than objects, this is going to lead to
fragmentation - you end up keeping a whole page in core just for one
object.
A less efficient but robust solution is to associate with each object
a unique *ternary* string, stored using the bit-pairs 01, 10, and 11,
with 00 a sentinel value, and making this get as long as is needed.
Long-running systems will slowly lose efficiency of allocation,
identity hash codes, and multi-synchronization but never acquire the
risk of deadlock.

The ternary string with terminator reaches 32 bits at 15 ternary
digits stored, so this becomes worse than the current situation, space-
wise, at around 3^15 = 14,348,907 objects allocated. That is a
dismayingly small number. But it only becomes worse, space-wise, than
64-bit binary integers at around 3^31, roughly 6 x 10^14 objects
allocated, or several hundred trillion. (At that point, the ternary
strings grow past 64 bits, at two bits per ternary digit plus the
two-bit sentinel terminator.)

Speed-wise, it is slightly worse from the get-go due to the more
complex operations to be performed.

Normal speed efficiency can be restored by using integers of machine-
native size up until -1 (all Fs in hex; the "maximum" before it wraps
to numbers already used) and then layering on an alternative scheme
for all subsequent objects (which are marked by all having -1 in the
"object number" field). Ternary strings is one such alternative
scheme. So is a second integer field, used until it becomes -1, then a
third, and so forth, or a zero-terminated string of bytes or words
used as a bignum, or something. (A bignum that only has to support
order-comparison and addition, simplifying things somewhat.)

I am filing this under very clever but unlikely to be a good idea.
Since the deadlock risk only arises while a multi-synchronized is
actually in the process of acquiring locks, though, we can do a few
tricks:

Exactly. I think the idea of allocating persistent, unique-for-all-time
identity numbers to objects isn't necessary - as long as you can get the
GC and locking subsystems to cooperate. Bear in mind that it doesn't
matter in the slightest if we lock a pair of objects in a different order
at different times, as long as we never do it at overlapping times. If we
do lock a-then-b, unlock both, then lock b-then-a, we're fine.
* Multi-synchronized may start by flagging all of the objects, then
  proceed to get their numbers, then begin acquiring locks, then
  unflag the objects. The garbage collector will refuse to move objects
  that are flagged at the time, waiting until the next GC or just
  waiting until the object is unflagged. A volatile boolean suffices for
  the flag, obviating the need for global locking.

Risky - a contended lock could hold up movement, which sounds like bad
news.
* A GC moving objects can cause multi-synchronizeds that are in progress
  to abort -- that is, the lock-acquisition behavior is viewed as a
  transaction. If it acquires a few locks but hasn't got them all when a
  GC occurs that moves an object, it releases all the acquired locks,
  then begins acquiring locks all over again, in (possibly-changed)
  sequence order. Since none of the objects has been modified by the
  *body* of the synchronized block yet, releasing and reacquiring locks
  doesn't violate thread-safety.

Better. This reminds me of Richard Gabriel's PC losering story, and the
winning solution adopted by unix:

http://www.jwz.org/doc/worse-is-better.html

My solution would be to be a bit more MIT, and say that (a) multi-locks
are acquired in allocation order and (b) if objects which are currently
multi-locked (or perhaps just locked at all, for simplicity) are moved,
they must maintain their allocation order with respect to other locked
objects. If we arrange things so that memory is divided into zones with
known non-overlapping ranges of the allocation order (which corresponds
nicely to generations), then that just means taking care when moving
objects from one zone to another, or within a zone, to keep locked ones in
the same order in memory. This avoids the need for identity numbers at
all.

Although i don't know what happens if a multi-lock is underway during a
move; i worry that two threads could end up with different ideas about the
order, but i *think* that as long as they pay attention to when moves
occur, and adjust their locking sequence accordingly, it's okay.

Making all moves, regardless of locking, order-preserving would avoid any
problems. There's even a fighting chance that this is already what
happens.

tom
 

Tom Anderson

I like this idea, but rather than all the detail below, why not require
x, y, and z to be of the same type and to implement Comparable? That
would solve the issue, wouldn't it?

As long as their compareTo methods are threadsafe but not synchronized ...

And even then, this is less general than a more detailed approach. What if
you really want to lock several objects which can't be mutually
comparable? How would you make Lists, the original example here,
comparable (without restricting them to containing comparable elements)?
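A sketch of that Comparable-based variant (helper name invented): it assumes
compareTo() is thread-safe without being synchronized, as required above,
and never returns 0 for distinct objects - otherwise two callers could still
pick opposite orders.

public final class OrderedLocks {
    public static <T extends Comparable<T>> void lockBoth(T a, T b, Runnable body) {
        // Lock in natural order so all callers agree on the sequence.
        T first = a.compareTo(b) <= 0 ? a : b;
        T second = (first == a) ? b : a;
        synchronized (first) {
            synchronized (second) {
                body.run();
            }
        }
    }
}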

tom
 
