Why is "lock" functionality introduced for all objects?

S

supercalifragilisticexpialadiamaticonormalizeringe

Each String instance has the following fields:

private final char value[];
private final int offset;
private final int count;
private int hash;

There are 12 bytes in addition to the char array. The offset and count
fields allow quick sub-string construction, and hash is used to cache
the hashCode result.

Oh, geez, even *more* overhead. And let's not forget the array has its
own separate object header and length field!
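To put rough numbers on it, here's a back-of-the-envelope sketch (my own arithmetic, assuming an 8-byte object header, 4-byte fields and references, and 8-byte alignment on a 32-bit JVM; real JVMs differ, and this aligns the combined total rather than each object separately):

```java
// Back-of-the-envelope String footprint on a 32-bit JVM with the old
// offset/count layout. All sizes here are assumptions, not measurements.
class StringFootprint {
    static long bytesFor(int chars) {
        long stringHeader = 8;        // String object header
        long fields = 4 * 4;          // value ref, offset, count, hash
        long arrayHeader = 8 + 4;     // char[] object header + length field
        long payload = 2L * chars;    // UTF-16 code units
        long raw = stringHeader + fields + arrayHeader + payload;
        return (raw + 7) & ~7L;       // simplification: align the combined total
    }

    public static void main(String[] args) {
        for (int n : new int[] {0, 8, 60}) {
            System.out.println(n + " chars -> ~" + bytesFor(n) + " bytes");
        }
    }
}
```

By this accounting an empty String already costs about 40 bytes before it holds a single character.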

The array may be shared by several String objects.

It usually won't be. Really, how often does anyone use .substring except
for a very short-lived object that usually is fed directly into
StringBuilder.append() or something that calls that under the hood, or
else to an I/O write operation?
In general, many trade-offs in Java, not just the decision to make every
object capable of being a lock, assume that other considerations are
more important than minimizing memory use. For example, caching the hash
code pays four bytes per String in order to have a hash code that
depends on the entire string, without paying the cost of calculating it
repeatedly when a String is used as a hash table key.

Funnily enough, using four characters (if there are that many, else the
whole string) from near the middle of the string would probably work
nearly as well, even for the fairly common cases of many strings sharing
a common prefix, suffix, or both. Strings with highly regular middles
and variable ends are not very common by contrast. And what does that
require?

int mid = length >> 1; // emphasizing that a cheap shift op works
int start = Math.max(mid - 2, 0);
int end = Math.min(mid + 2, length);
int hash = 0;
int fct = 1;
for (int i = start; i < end; ++i) {
    hash += fct * value[i]; // value[] being the String's backing char array
    fct *= 256;
}

For the common case of Latin-1 strings this turns the characters there
into the hash bytes directly. Throw in some Unicode characters and it
gets a bit more interesting, as each character may affect two bytes of
the hash, except the last one of the four.

Of course, they could also have used a smarter caching strategy. When is
hash caching useful? When the string's in a hash map and going to be
looked up in it frequently. But this turns into two subcases:

1. The string already in the hash map is the same *object* as the
string used for lookup.
2. The strings are not the same object, though they have the same
content.

In the latter case, the string passed to get() is obviously not interned
and is probably being constructed anew each time, likely from I/O reads.
Caching its hash is useless since it's going to be GC'd and recreated
sans cached hash. In the former case, the string probably *is* interned,
in which case the smart place for the hash cache is in the *string
interning table* rather than in the individual string objects,
particularly if you could arrange the under-the-hood implementation to
use an int[] to hold *all* the hashes instead of separate int fields all
over the system.
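A toy sketch of that idea (entirely hypothetical; this is not how any real JVM's intern table works, and the HashMap used here would itself recompute hashes, which a real implementation would avoid -- it only shows where the cached values could live):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Hypothetical intern table that caches hash codes in a single int[]
// instead of a per-String field. Names and layout are illustrative only.
class InternTable {
    private final Map<String, Integer> slots = new HashMap<>();
    private String[] strings = new String[16];
    private int[] hashes = new int[16];
    private int size;

    String intern(String s) {
        Integer slot = slots.get(s);
        if (slot != null) return strings[slot];
        if (size == strings.length) grow();
        strings[size] = s;
        hashes[size] = s.hashCode(); // computed once, stored out-of-line
        slots.put(s, size++);
        return s;
    }

    int cachedHash(String interned) {
        return hashes[slots.get(interned)];
    }

    private void grow() {
        strings = Arrays.copyOf(strings, size * 2);
        hashes = Arrays.copyOf(hashes, size * 2);
    }
}
```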
If, for your purposes, minimal memory use is very important, you may
want to consider other languages with other trade-offs.

And here I thought they were trying to heavily push Java for use on
mobile phones and other devices with limited memory.
 
S

supercalifragilisticexpialadiamaticonormalizeringe

Much less than 50% of the object size here is for monitors.

Whoosh! Lew misses my point again, by a country mile no less. What a
surprise.
 
S

supercalifragilisticexpialadiamaticonormalizeringe

yeah...

they made every object lockable but not every object cloneable, whereas
one would think cloning would generally be a higher priority.

If they'd been smarter about managing mutability to begin with (e.g.
fields immutable by default, objects normally immutable) cloning would
not be a priority at all.

Sure, the boxed primitives and Strings are immutable, and that's about
it. We've got mutable stuff out the wazoo, most of which probably
shouldn't be -- java.util.Date, anyone? Not to mention java.awt.Point
and friends. All those conundrums about whether Square should be a
subclass of Rectangle, or Circle of Ellipse, go away if they aren't
mutable. Then they clearly are subclasses. Mutable collections and
arrays also make type reasoning more complicated and don't allow casting
a List<Sub> to a List<Super> as then you might add a non-Sub Super to
the list. If the list wasn't mutable there'd be no problem casting a
List<Sub> to a List<Super>.

I could go on...but I won't.
 
S

supercalifragilisticexpialadiamaticonormalizeringe

No, but you argued with me when I refuted that claim.

True, but not *because* you refuted that claim; rather, because in doing
so you chose to err just as far in the other direction.
So it's not any kind of straw man,
Poppycock.

I also never said that you did make that claim, as such.
Twaddle.

But you disagreed with my refutation of it, putting you in that topic.

I disagreed, specifically, with "most objects are much larger than 4
bytes", and with good reason.
And there are other people in the world besides yourself,

Irrelevant since we're discussing, specifically, your argument with me.
you self-involved little man.

Gratuitous and irrelevant ad hom.
I asked first.

And I provided.
The one making the claim that there is 100% overhead, or
any percent overhead, needs to substantiate the claim.

I've substantiated it plenty, for example the overhead doesn't drop
below 5% for an ArrayList until it has at least 30 items in it.
(Actually it's complicated by how the array grows, if you count empty
slots in the array as overhead; the overhead then jumps to over 50%
when the ArrayList gets a 33rd item, assuming it doubles at powers of 2
when not constructed with a specific initial capacity, and will average
about 25%.)
I've already proven the 100% claim false, as have you,
Irrelevant.

but no one has proven any actual number.

Folderol. I gave some math in my previous post indicating when many
common structures such as Strings have overhead drop below 5% -- for
Strings, it's at 60 characters. A very large proportion of the Strings
in typical systems I've seen are shorter than that.
I haven't asserted an actual number, so have nothing to
substantiate.

So you defend your lack of substantiation of your claims with the
vagueness of those claims?!
Show me the numbers.

Been there, done that, got the T-shirt.
If you disagree with the refutation of that point, then you are on that
topic, and you have an obligation to be responsible for that.

I disagree with a specific claim you made *in* your refutation, but not
with the fact of the refutation. Please do at least *try* to get that
straight in your head.

To recap: There is a difference between disagreeing with "the monitor
overhead is less than 100%" and disagreeing with "most objects are much
larger than 4 bytes".

And all of this is ignoring the fact that the OP likely meant the
monitor doubled the size of the object *header*, not of the *object*.
Though his claim for "double the GC cycles" is highly dubious; even
actually doubling the sizes of all the objects in the system wouldn't
tend to do that with a generational GC and most objects being
short-lived enough to die in the eden space.
You keep using that term. I am not sure that it means what you think it
means.

You're not sure of a lot of things you should be, and, unfortunately,
sure of a lot of things you shouldn't be.
Interesting that you frame this in terms of "opponents". We're not
supposed to be opponents but partners in exploration of the truth.

Tell that to the guy that was the first to start insinuating that maybe
his opponent was intentionally lying. Who was that again? Oh. That's right.
Apparently your purpose is to turn this into some kind of contest, and
you hold an oppositional frame.

Talking to yourself is a sign of a disturbed mind, you know.
I am interested in increasing knowledge here, not doing battle,

Funny. From here it looks like exactly the opposite is true. Consider
these questions:

1. Who brought some actual data into the discussion upon request?
2. Who didn't, and defended that by saying his claims were too vague for
him to be able to do so?
3. Who was the first to start suggesting that the other guy might be
deliberately telling falsehoods?
4. Who started slinging around gratuitous insults like "you
self-involved little man"?

5. Who would rather stick his fingers in his ears and shout "LA LA LA!"
instead of having an intelligent debate on the subject?
 
R

Robert Klemme

Yet in the end the community seems to agree not to use "synchronized"
directly but rather use classes from java.util.concurrent (namely Lock and
Condition). So is this keyword really that important?

Where do you take that from? I know of at least two cases from my recent
development history where it came in extremely handy that all objects
have a monitor. In those cases a lot of objects were stored, and we
needed to synchronize on each individual object to prevent a bottleneck
and allow scalability; a solution using an implementation of
java.util.concurrent.locks.Lock would almost certainly have used
significantly more memory.
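The memory difference he's pointing at is easy to see in a sketch (Entry is a made-up class; the exact numbers depend on the JVM, but the extra per-entry object is unavoidable):

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the trade-off described above: synchronizing on each stored
// object's own monitor costs no extra fields, while a per-object
// ReentrantLock adds a whole extra heap object per entry.
class Entry {
    int value;
    final ReentrantLock lock = new ReentrantLock(); // option 2's extra object

    // Option 1: the entry's own monitor -- no additional memory per entry.
    void incrementWithMonitor() {
        synchronized (this) { value++; }
    }

    // Option 2: an explicit lock object -- one more object (plus its
    // header) for every Entry in the store.
    void incrementWithLock() {
        lock.lock();
        try { value++; } finally { lock.unlock(); }
    }
}
```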

It is (almost) twice as much memory as it could be, and twice as many GC
cycles. Almost, because in the real world the number of objects you need
to synchronize on is far lower than the number of objects you create.

I'd say that heavily depends on the application type. I don't think
such a general statement is warranted.

<snip/>

Kind regards

robert
 
R

Robert Klemme


I have doubts about the viability of the alternatives suggested in that
article. I commented

I don't think the interface is a proper solution. The reason is that you
can pass a lockless instance of a class Foo which implements this
interface anywhere Object is allowed. This means you must check at
runtime whether the instance is lockable or not. That might introduce
significant overhead for applications which synchronize frequently. A
better way I can think of off the top of my head would be a superclass
of Object, but that would break in various other ways (e.g. because
suddenly Object.class.getSuperclass() would return a non-null value,
which breaks the existing contract). This leaves us with a data-only
type. But that would dramatically change the type system; namely, you
would need two different kinds of references. That might not be a big
deal for the compiler, but it may make GC much more complex, because
there would no longer be a uniform object type on the heap. GC is
complex enough in modern JVMs, so this could be a significant burden.

Kind regards

robert
 
T

Tom Anderson

I'm curious why the Java designers once decided to allow every object to
be lockable (i.e. to allow locking on any of them). I know that, as a
result of this design decision, every Java object contains a lock index,
i.e. new Object() results in allocation of at least 8 bytes, where 4
bytes is the object index and 4 bytes is the lock index on a 32-bit JVM.

That's not quite right. In the olden days, it's true that every object
header contained room for a lock pointer - but back then, that meant that
every header was *three* words (12 bytes), not two. Two words were needed
for the header (one for a vtable pointer, one for various other things),
and the third was for the lock.

What happened then was that a very clever chap called David Bacon, who
worked for IBM, invented a thing called a thin lock:

http://www.research.ibm.com/people/d/dfb/papers.html#Bacon98Thin

Which was subsequently improved by another clever chap called Tamiya
Onodera into a thing called a tasuki lock, which you don't hear so much
about.

The details are described quite clearly in the papers, but the upshot is
that an object is created with neither a lock nor a slot for a lock
pointer (and so only a two-word header), and the lock is allocated only
when needed, and then wired in. Some fancy footwork means that the object
doesn't need to grow a pointer when this happens; the header remains two
words, at the expense of some slight awkwardness elsewhere. Some even
fancier footwork means that if only one thread locks the object at a time
(a very common pattern), then a lock doesn't even need to be allocated.
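The bit-twiddling in Bacon's scheme can be sketched roughly like this (the field widths follow the thin-lock paper's description -- a shape bit, a 15-bit owner thread id, an 8-bit recursion count -- but the exact packing here is illustrative, not HotSpot's):

```java
// Rough sketch of a thin-lock word per Bacon's paper; packing is illustrative.
final class ThinLockWord {
    // bits 0-7: recursion count, bits 8-22: owner thread id, bit 23: shape
    static final int COUNT_BITS = 8, OWNER_BITS = 15;
    static final int SHAPE_BIT = 1 << (COUNT_BITS + OWNER_BITS);

    static int thinLock(int ownerId, int recursionCount) {
        return (ownerId << COUNT_BITS) | recursionCount; // shape bit stays 0
    }
    static boolean isThin(int word) { return (word & SHAPE_BIT) == 0; }
    static int owner(int word)      { return (word >>> COUNT_BITS) & ((1 << OWNER_BITS) - 1); }
    static int count(int word)      { return word & ((1 << COUNT_BITS) - 1); }
}
```

Uncontended locking is then a single compare-and-swap installing `thinLock(myThreadId, 1)` into a zero lock word; only when that fails does the lock get "inflated" into a real structure.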
The better decision, IMHO, would be to introduce lock/wait mechanics
only for, say, descendants of a Lockable type.

I agree with this, actually. There might be some small performance
improvement, but it would also make the locking behaviour of code more
explicit, and so clearer.

tom
 
K

KitKat

What happened then was that a very clever chap called David Bacon, who
worked for IBM, invented a thing called a thin lock:

http://www.research.ibm.com/people/d/dfb/papers.html#Bacon98Thin

Which was subsequently improved by another clever chap called Tamiya
Onodera into a thing called a tasuki lock, which you don't hear so much
about.

Are you sure that last one was a "chap"? "Tamiya" sounds rather feminine
to me.
The details are described quite clearly in the papers, but the upshot is
that an object is created with neither a lock nor a slot for a lock
pointer (and so only a two-word header), and the lock is allocated only
when needed, and then wired in. Some fancy footwork means that the
object doesn't need to grow a pointer when this happens; the header
remains two words, at the expense of some slight awkwardness elsewhere.

Such as? I can think of only one possibility that could be even close to
efficient: maintain an IdentityHashMap<Object,Lock> somewhere under the
hood.
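Something like this, at its crudest (purely illustrative; a real JVM would do this natively, and without the coarse global synchronization used here):

```java
import java.util.IdentityHashMap;
import java.util.Map;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Crude sketch of an out-of-line lock table keyed by object identity.
class LockTable {
    private final Map<Object, Lock> locks = new IdentityHashMap<>();

    synchronized Lock lockFor(Object o) {
        return locks.computeIfAbsent(o, k -> new ReentrantLock());
    }
}
```

Locking an object o then means `lockFor(o).lock()` on every acquisition, i.e. a map lookup per lock operation.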
 
K

KitKat

The obvious alternative is to make one of the existing words
dual-purpose, either directly containing its data or containing an index
to a structure containing both the lock and the original use of the
word. That does require, in effect, a spare bit to indicate which mode
the object is in.

Yeah, that could work if you can spare a bit from the non-lock stuff in
the other original two words of object header.

The above assumed that all the bits in the other two words were already
spoken for. But if not, your suggestion fits the phrase "thin lock"
well, since the lock is essentially only 1 bit wide for most objects.
 
T

Tom Anderson

Yeah, that could work if you can spare a bit from the non-lock stuff in the
other original two words of object header.

The above assumed that all the bits in the other two words were already
spoken for. But if not, your suggestion fits well the phrase "thin lock"
since the lock is essentially only 1 bit wide for most objects.

That is indeed pretty much exactly what a thin lock is.

tom
 
T

Tom Anderson

Are you sure that last one was a "chap"? "Tamiya" sounds rather feminine to
me.

Perhaps - and a quick google reveals that it is a girl's name in Hebrew.
However, in Japanese, i believe it's a family name, and that Tamiya
Onodera is Dr Tamiya's name written in the normal Japanese order, putting
his family name first. Although i could be wrong.

The object's identity hash is shuffled between the object and its lock
according to whether it has an expanded lock or not.
I can think of only one possibility that could be even close to
efficient: maintain an IdentityHashMap<Object,Lock> somewhere under the
hood.

That might be memory-efficient, but it would not be at all time-efficient,
as it would require a map lookup to lock an object. Resizing the hash
would be an interesting exercise, too. Actually, i think early JVMs (1.1
era, IIRC, perhaps even 1.0) used something a bit like this; they didn't
use the identity hash, but back then the garbage collector was non-moving,
so they could use addresses as keys, and there was a global lock table
somewhere. I don't know how it handled resizing. Badly, i expect.

tom
 
K

KitKat

Perhaps - and a quick google reveals that it is a girl's name in Hebrew.
However, in Japanese, i believe it's a family name, and that Tamiya
Onodera is Dr Tamiya's name written in the normal Japanese order,
putting his family name first. Although i could be wrong.

???

Regardless of which, "Onodera" also sounds feminine.
The object's identity hash is shuffled between the object and its lock
according to whether it has an expanded lock or not.

That would work, if that's what the second word in the object header
normally is. Assuming it's the heap address at the time of creation, and
objects are aligned on word boundaries, the two lowest-order bits of the
identity hash are going to be zero, so you can use those bits for
something else and mask them off to get the hash.

On the other hand, that suggests a way to make object headers of only
*one* word.

Consider: how likely are we to have four billion vtables in a running
32-bit JVM? Let alone Long.MAX_VALUE - Long.MIN_VALUE + 1 in a 64-bit one?

Reserve a low chunk of the address space (and call it part of permgen?)
for vtables and your vtable pointers get quite short. The vtable pointer
plus a few bits of the object's initial address would still make a
pretty decent identity hash for collections with heterogeneous keys;
homogeneous keys, in my experience, are usually value objects with
overridden hashCode, such as Strings. You can make the
initial-address bits (and the thin-lock bit) the low-order bits. Shift
right one bit to lose the lock bit and have the hash; shift right n bits
for some fairly small n to get the vtable pointer. Vtable lookup is a
tiny bit slower due to a test of the lock bit plus one added shift
instruction on each lookup, but the critical performance points tend to
get JITted into direct calls or branch-predictable is-it-a-Foo?
jump-or-normal-vtable-lookup choices. And the vast majority of
production Java code is I/O bound anyway.

Well, except when the object needs a fat lock. Then the whole word
becomes a pointer to a structure that points to the vtable and contains
the lock and identity hash. Now vtable lookup has an added indirection.
But the bottleneck with such objects will usually be contention for the
lock itself, not CPU cycles.
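Under those assumptions, decoding such a one-word header might look like the following (HASH_BITS is an arbitrary pick for the "few bits" of initial address; everything here is hypothetical, not any real JVM's layout):

```java
// Hypothetical one-word header: low bit = lock mode, then a few
// initial-address bits, high bits = short vtable index.
final class OneWordHeader {
    static final int LOCK_BIT = 1;
    static final int HASH_BITS = 10; // assumption: the "few bits" of address

    static int header(int vtableIndex, int addrBits, boolean fatLock) {
        return (vtableIndex << (HASH_BITS + 1)) | (addrBits << 1) | (fatLock ? 1 : 0);
    }
    static boolean isFat(int h)    { return (h & LOCK_BIT) != 0; }
    // "Shift right one bit to lose the lock bit and have the hash"
    static int identityHash(int h) { return h >>> 1; }
    // "shift right n bits for some fairly small n to get the vtable pointer"
    static int vtableIndex(int h)  { return h >>> (HASH_BITS + 1); }
}
```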
That might be memory-efficient, but it would not be at all
time-efficient, as it would require a map lookup to lock an object.

Map lookups are O(1) and a low level implementation in C built into the
JVM would boil down to masking and shifting the hash and then a pointer
addition and dereference, only needed when you wanted to lock or unlock
an object -- and again, the time spent on this will be dwarfed by the
time spent in contention for the lock anyway, fairly often. Branch
prediction and pipelining might help in the case of high-CPU areas that
lock an object with low contention, in that some of the work might
proceed in parallel with lock acquisition in the absence of contention
(though, only as far as work that can be done in cache or registers,
since the lock must be held prior to any memory reads or writes in the
guarded object, and on initial acquisition failure that work may have to
be repeated later on acquisition).
 
J

Joshua Maurice

On Jun 28, 10:12 pm,
supercalifragilisticexpialadiamaticonormalizeringelimatisticantations
If they'd been smarter about managing mutability to begin with (e.g.
fields immutable by default, objects normally immutable) cloning would
not be a priority at all.

Sure, the boxed primitives and Strings are immutable, and that's about
it. We've got mutable stuff out the wazoo, most of which probably
shouldn't be -- java.util.Date, anyone? Not to mention java.awt.Point
and friends. All those conundrums about whether Square should be a
subclass of Rectangle, or Circle of Ellipse, go away if they aren't
mutable. Then they clearly are subclasses. Mutable collections and
arrays also make type reasoning more complicated and don't allow casting
a List<Sub> to a List<Super> as then you might add a non-Sub Super to
the list. If the list wasn't mutable there'd be no problem casting a
List<Sub> to a List<Super>.

I could go on...but I won't.

If you want a functional language, go use a functional language and
stop complaining that Java is not a functional language.
 
J

Joshua Cranmer

If the list wasn't mutable there'd be no problem casting a
List<Sub> to a List<Super>.

And then I'd complain because my program would be spending more time
copying the values between immutable queues than actually doing work. As
long as the language has the potential for mutable collections (which
most people want for performance reasons), you have the potential for
generics casting issues.
 
S

supercalifragilisticexpialadiamaticonormalizeringe

On Jun 28, 10:12 pm,
supercalifragilisticexpialadiamaticonormalizeringelimatisticantations


If you want a functional language, go use a functional language and
stop complaining that Java is not a functional language.

Contrary to popular belief, immutability is not solely useful in a
functional language. In fact, OO languages benefit greatly if their
"value types" (things you're likely to want to use as hash keys and to
generally represent state) are immutable.
 
S

supercalifragilisticexpialadiamaticonormalizeringe

And then I'd complain because my program would be spending more time
copying the values between immutable queues than actually doing work. As
long as the language has the potential for mutable collections (which
most people want for performance reasons), you have the potential for
generics casting issues.

Lists are, in my experience, typically constructed, then consumed; only
infrequently is a mutable one maintained with recurring episodes of
reading and writing over time. The common case could have been optimized
with better support than the various Collections.unmodifiableFoo()
methods provide. For example, if you could tag a list as not modifiable
the compiler can both disallow writing through it and allow casting from
UnmodifiableList<Sub> to UnmodifiableList<Super>. We kinda have that now
in that we can cast List<Sub> to List<? extends Super> and then the
compiler will indeed not let us add to it, but <? extends Super> is both
awkward and not equal to Super. A lot of methods might be written to
demand a List<Super> even if they won't modify the list, and will thus
work with a List<? extends Super>. More generally it complicates
generics. The fact of the matter is that <? extends X> is kind of like
"unmodifiable, and also <X>", at least for collections; a clearer way
of (separately) expressing "unmodifiable" would have been nice.

So, basically, what I'm saying is that we should have had some notion of
constness in Java. :)
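The behaviour described above can be seen in a small sketch (Super and Sub are placeholder classes, not from any real API):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the variance behaviour: List<Sub> can't be a List<Super>,
// but it can be a read-only List<? extends Super>.
class VarianceDemo {
    static class Super {}
    static class Sub extends Super {}

    // A read-only consumer can accept any List<? extends Super>.
    static int count(List<? extends Super> items) {
        return items.size(); // reading is always safe
    }

    public static void main(String[] args) {
        List<Sub> subs = new ArrayList<>();
        subs.add(new Sub());

        // List<Super> supers = subs;      // does not compile: could then add a non-Sub
        List<? extends Super> view = subs; // allowed: effectively unmodifiable
        // view.add(new Super());          // does not compile: writes are rejected

        System.out.println(count(view));   // prints 1
    }
}
```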
 
B

BGB

And then I'd complain because my program would be spending more time
copying the values between immutable queues than actually doing work. As
long as the language has the potential for mutable collections (which
most people want for performance reasons), you have the potential for
generics casting issues.

well, and probably putting more pressure on the garbage collector.

a great downside of using an FP-like style with a GC and a language/VM
that generally lacks the concept of user-defined value-types is that it
increases the amount of garbage produced (thus increasing the number of
GC cycles).

a language with 'struct' need not have this issue, as then they can use
it for implementing such value types.

but, I have seen cases where people have abused struct (mostly in C#),
generally using references-to-struct for things which would probably
have been better done with a heap-allocated class instance.


in my own personal language, it may be possible to create structs using
a constructor and using 'final' on fields to create an immutable struct.


or such...
 
S

supercalifragilisticexpialadiamaticonormalizeringe

well, and probably putting more pressure on the garbage collector.

Typical immutable usage-patterns create more pressure in one single
predominant way: by producing more very-short-lived temporaries holding
intermediate values. Assuming the JIT doesn't optimize those out of the
heap altogether, they will die in the eden space, which generally makes
them extremely cheap (as the GC cost to clean up eden is proportional
only to the number of survivors, not the number of dead objects). Where
they can be a bit less cheap is that heap space requirements to avoid
more major collections may be higher.
a great downside of using an FP-like style with a GC and a language/VM
that generally lacks the concept of user-defined value-types is that it
increases the amount of garbage produced (thus increasing the number of
GC cycles).

Don't forget that the JIT can optimize local temporary objects that
escape analysis shows never leave the method scope (or have their
identity hash code needed) into being defacto "value type" objects
instead of heap objects.
 
B

BGB

???

Regardless of which, "Onodera" also sounds feminine.

grr... the name is not Latin-based; not everything that ends in 'a' is
female.

not like it is some guy with a name like "Chibichibi Hitomi" or
something, which would be a bit suspect.


"anata wa des-ka?"
"chibi-chibi hitomi wa deeesuuu!" (meanwhile doing an imbalanced stance).

as other people look with a solidly "WTF?" expression upon hearing this.

another person stands up, puts his hands to his face, and a background
voice exclaims "shaaku!" (IOW: "shock!").

That would work, if that's what the second word in the object header
normally is. Assuming it's the heap address at time of creation, and
objects are aligned on word boundaries, the two least order bits of the
identity hash are going to be zero, so you can use those bits for
something else and mask them off to get the hash.

On the other hand, that suggests a way to make object headers of only
*one* word.

Consider: how likely are we to have four billion vtables in a running
32-bit JVM? Let alone Long.MAX_VALUE - Long.MIN_VALUE + 1 in a 64-bit one?

well, that is the cost of full pointers probably.
everything is addressable.

storing type-IDs as an id-number can also work, then one fetches the
vtable/... via an array index. a downside though is that this would have
a potential performance impact, as additional operations are now needed
to access the vtable.


or, one can reserve a chunk of memory (wherever it is) and subtract out
the address. then one can re-add the relative address to the base address.

say, a 64-bit base address known to the VM, and only a 32-bit relative
address is stored in the object (possibly shifted right 3 bits with the
top-3 used for the lock).

in x86-64, this can be done mostly with a single instruction, say:
lea rcx, [rbx+rax*8]

or, say, one calls a method (rdx=object, rbx=magic base pointer):
mov eax, [rdx] ;fetch vtable word from object
mov ecx, [rbx+rax*8+72] ;access vtable entry at offset 72
lea r8, [rbx+rcx] ;add method address to base
call r8 ;call method

Reserve a low chunk of the address space (and call it part of permgen?)
for vtables and your vtable pointers get quite short. The vtable pointer
plus a few bits of the object's initial address would still make a
pretty decent identity hash for collections with heterogeneous keys;
homogeneous keys, in my experience, are usually value objects with
overridden hashCode such as Strings and you can make the
initial-address-bits (and the thin lock bit) the low order bits. Shift
right one bit to lose the lock bit and have the hash; shift right n bits
for some fairly small n to get the vtable pointer. Vtable lookup is a
tiny bit slower due to a test of the lock bit plus one added shift
instruction on each lookup, but the critical performance points tend to
get JITted into direct calls or branch-predictable is-it-a-Foo?
jump-or-normal-vtable-lookup choices. And the vast majority of
production Java code is I/O bound anyway.

Well, except when the object needs a fat lock. Then the whole word
becomes a pointer to a structure that points to the vtable and contains
the lock and identity hash. Now vtable lookup has an added indirection.
But the bottleneck with such objects will usually be contention for the
lock itself, not CPU cycles.



Map lookups are O(1) and a low level implementation in C built into the
JVM would boil down to masking and shifting the hash and then a pointer
addition and dereference, only needed when you wanted to lock or unlock
an object -- and again, the time spent on this will be dwarfed by the
time spent in contention for the lock anyway, fairly often. Branch
prediction and pipelining might help in the case of high-CPU areas that
lock an object with low contention, in that some of the work might
proceed in parallel with lock acquisition in the absence of contention
(though, only as far as work that can be done in cache or registers,
since the lock must be held prior to any memory reads or writes in the
guarded object, and on initial acquisition failure that work may have to
be repeated later on acquisition).

yep.

a table need not be all that expensive.

in my VM at least, interface method dispatch is itself done via the use
of a hash table (as well as using a table for any object locking).


actually, a variant of the relative-address scheme is used as well on
x86-64, but in this case more due to the x86-64 ISA generally using
32-bit offsets for everything (calls/jumps/... are limited to 32-bits
unless one wants to use GPRs to hold the temporary addresses, meaning it
is much more efficient to try to have most of ones' JITted code/data be
within a +-2GB window).

in this case, calls outside of this window are generally handled via
trampolines within the window.

in effect, this window forms an "executable heap". granted, currently
the whole region is read/write/execute, where I guess on some systems
SELinux may make a problem for this, but I have yet to address this
(seems to work fine on my systems... mostly tested with Fedora x86-64).

(note: I also target Win64 and Win32 as well, with Win64 being done in
roughly the same way, and all this being N/A to Win32).

sadly, the region currently uses manual-MM. the GC can scan the region
for references, but code/data/bss sections are not automatically
reclaimed, as I found out after initial implementation that this would
create serious implementation problems, and so instead opted with using
different executable memory for GCed code, as well as it having to
follow special rules, ...


current setup:
4GB RWX (combined code/data/bss, allocation starts from the middle and
follows an even/odd "spiral" pattern).

theoretically, one would have to double-map the code-heap in this case, say:
2GB RX (code/rodata), 2GB RW (data/bss), 2GB RW (alias for code-heap)
or:
2GB RX (code/rodata), 2GB RW (data/bss, alias to first region)


presumably, the standard JVM does something similar internally?...


in the hypothetical object situation, this region would also be used for
object vtables/... as well (but, as given before, would additionally
require a region-base pointer, that or use relative addresses, which
have their own complexities, and require "movsx rax, dword [...]"
instructions).


or such...
 
K

KitKat

grr... the name is not latin-based,

What does Latin have to do with Java, BGB?
not everything that ends in 'a' is female.

No, just the names that do.
not like it is some guy with a name like "Chibichibi Hitomi" or
something, which would be a bit suspect.

Yes, "i" instead of "y" endings are also usually feminine.
"anata wa des-ka?"
"chibi-chibi hitomi wa deeesuuu!" (meanwhile doing an imbalanced stance).

What does your public drunkenness have to do with Java, BGB?
as other people look with a solidly "WTF?" expression upon hearing this.

another person stands up, puts his hands to his face, and a background
voice exclaims "shaaku!" (IOW: "shock!").

What does your hallucination have to do with Java, BGB?
storing type-IDs as an id-number can also work, then one fetches the
vtable/... via an array index. a downside though is that this would have
a potential performance impact, as additional operations are now needed
to access the vtable.

Even the other suggestion involved a shift and mask, as well as a prior
bit-test and branch in case of a thick lock in which event the vtable
was one more indirection away, though branch prediction would take care
of the latter handily for every method invocation not called inside of a
critical section.
or, one can reserve a chunk of memory (wherever it is) and subtract out
the address. then one can re-add the relative address to the base address.

Doubles heap size, if you're suggesting what it sounds like you're
suggesting.
say, a 64-bit base address known to the VM, and only a 32-bit relative
address is stored in the object (possibly shifted right 3 bits with the
top-3 used for the lock).

Limits the heap to what the limit would be in a 32-bit VM.
 
