addEvent - The late entry :)


Richard Cornford

Peter said:
On Jul 20, 5:40 am, Richard Cornford wrote:

[snip]
(but any of those clunky 'bind' methods so beloved of library
authors would do just as well as an example).

Partial application is a common technique in lambda languages
and not looked down upon when used appropriately.

True, and it is a very useful facility to have. But it is only very
occasionally a useful facility to have.
The implementation in JavaScript is not as aesthetic as in some
other languages, somewhat due to the "this" issue in JavaScript;
however, conceptually the binding of some parameters to one
function and producing another function taking no or fewer
parameters is the same. I don't understand why you would
apparently balk at this concept. The use of a "bind" function
is not clunky, in my opinion.

Take the last version of Prototype.js that I looked at (1.6.0.2).
Internally there are 28 calls to its - bind - method and precisely zero
of those pass more than one argument to the method. The method's code
is:-

| bind: function() {
|   if (arguments.length < 2 && Object.isUndefined(arguments[0]))
|     return this;
|   var __method = this, args = $A(arguments), object = args.shift();
|   return function() {
|     return __method.apply(object, args.concat($A(arguments)));
|   }
| },

- so each of those calls to - bind - tests the length of arguments and
finds that it is 1 and so calls Object.isUndefined. If the first
argument is not undefined (and in 9 of those calls the argument passed
is - this -, which can never be undefined by definition) it goes on to
call - $A - in order to make an array of the arguments object, it then
renders that array empty by - shift-ing the one argument out of it, and
later it concatenates that empty array onto the array resulting from
another call to - $A -. The availability of a more efficient alternative
seems like a good idea, even if that alternative were only made
available internally. That alternative could be as simple as:-

simpleBind: function(obj) {
  var fnc = this;
  return (function() {
    return fnc.apply(obj, arguments);
  });
}

- and still fully satisfy all of the internal uses of bind that appear
in Prototype.js.
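
For illustration, a hypothetical use of that simpler method (assuming it is
installed on Function.prototype, as Prototype.js does with - bind -; the
names - widget - and - onClick - are invented for the example):-

var boundClick = widget.onClick.simpleBind(widget);
el.onclick = boundClick; // inside - onClick -, - this - is now always - widget -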

But it is the very fact that Prototype.js uses - bind - so much and yet
in using it never exploits its full capabilities that demonstrates that
the need for all of those capabilities is not that great. Even in code
as general as Prototype.js the ratio of common cases that don't need
'outer' arguments to cases that do is at least 28 to 1. It might be
argued that the real need is so infrequent that it should be handled
externally by the code that needs it. Or that a two stage process where
any 'outer' set of arguments is associated with one function object and
then it is that function that is passed into a much simplified - bind -
method, would provide a better interface. Something like:-

simpleBind: function(obj) {
  var fnc = this;
  return (function() {
    return fnc.apply(obj, arguments);
  });
},
complexBind: function(obj) {
  var fnc = this;
  var args = $A(arguments);
  args.shift();
  return (function() {
    return fnc.apply(this, args.concat($A(arguments)));
  }.simpleBind(obj));
}

(untested) - where the common case enters at one point and the much less
common case enters at another. That uncommon case suffers for its
increased demands but usually, and certainly in Prototype.js's internal
use, the benefit outweighs the losses. And if a particular application
can be seen to employ the (normally) uncommon case extensively it can
employ the original process. Except that it cannot because it is in the
nature of these large-scale internally interdependent general-purpose
libraries that people using them cannot play with the API safely
(maintenance and upgrade headaches follow from the attempt) and the
libraries themselves cannot change their external API without breaking
existing code that uses them.
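
For concreteness, the two entry points sketched above might be used along
these lines (assuming both are installed on Function.prototype; the
function and object names are invented for the example):-

var common = handleClick.simpleBind(widget); // the frequent case
var rare = logEvent.complexBind(widget, 'click', 1); // the rare case with 'outer' arguments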

So, yes 'clunky'. Certainly far short of being well-thought-out or
elegant. Indeed so far short that their only way out is to petition for a
new - bind - method in the new language versions so that faster native
code can rescue their authors from the consequences of original designs.
[snip]
fn.call( el, window.event);
}
el.attachEvent( 'on'+type, el[type+fn])
}
else
el[ 'on'+type] = fn

If this branch is ever good enough it is also always good
enough.

Functionally yes. I think, in this case, the third branch is
unnecessary even if implemented so all three branches have
the same behavior. If someone did think the third branch was
necessary then the first two branches (using addEventListener
or attachEvent) could be justified as performance boosts.

Which assumes that there would be a performance boost. Probably the
calling of event listeners is so fast that measuring a difference would
be difficult. But the extra function call overhead in the -
attachEvent - branch is unlikely to help in that regard.
A large portion of the remainder of your message below is
related to keeping code small which is really just for a
performance boost.

Not just that; size, and the internal complexity that manifests itself in
size, have consequences for understandability and so for maintainability.
Do you happen to know of a way to detect the Safari versions
which do not honour calls to preventDefault for click or
double click events when the listener was attached using
addEventListener?

No, it has not yet been an issue for me. But Safari browsers expose many
'odd' properties in their DOMs so I would bet that an object inference
test could be devised even if a direct feature test could not, and the
result would still be far superior to the UA sniffing that seems to go
on at present.
There is a "legacy" workaround using onclick and ondblclick
properties of elements but these are a bit ugly and have some
drawbacks which need documentation. At this point, since those
versions of Safari have been automatically upgraded, I'd rather
put those versions of Safari down the degradation path as though
they didn't have event models at all. I just don't know how to
detect these versions of Safari.

[snip]
Recently I have been thinking about how to express what it is about the
attempt to be general that tends to result in code that is bloated and
inefficient.

[snip interesting thoughts]

You and Matt Kruse have faced off many times about this whole
"general" library business. I don't quite see the fuss.

Talking (even arguing) about them is a better way of testing the veracity
of ideas than not.
Matt is being a bit extreme by suggesting that the general
position reporting function should be written

Matt's position has tended to be that someone else (someone other than
him) should write it.
even though you have stated it would be 2000+ statements and
far too slow to be practical. Your multiple implementations
approach seems more appropriate in this situation.

In messages like this one, you on the other hand seem to
eschew things "general" (though I don't think you do so 100%).
Take, for example, the scroll reporting code you wrote in the
FAQ notes

http://www.jibbering.com/faq/faq_notes/not_browser_detect.html#bdScroll

I consider that level of "multi-browserness" sufficiently "general".
That is, I would be comfortable using this type of code on the
unrestricted web.

But it is not general. The only dimension in which it is general is the
browser support dimension (though it should not be too bad in the
programmer understanding dimension). It does not even cover the general
case of wanting to find the degree to which a document has been scrolled
because it has no facility for reporting on any document other than the
one containing the SCRIPT element that contained/loaded its code. Add
support for multi-frame object models and you have something else again.
When you write that an "attempt to be general that tends to results
in code that bloated and inefficient" I think it is worth defining
where you draw the line between bloated and non-bloated code and
efficient and inefficient code.

I am not drawing a line. I am expressing the direction of movement that
results from a cause and effect relationship. Understanding the
relationship allows people to draw their own lines at the points which
suit their situations/contexts/applications.
If a "general" event library could be written in 10 lines would
that be acceptable to use in cases where it is more general than
necessary?

Certainly if performance followed size.
What if it was 20 lines? 50 lines? 200 lines? 10000 lines? The
absolute size of the general code does matter to some extent.

Imagine the 200 line version's only drawback was download time
and initial interpretation, with no other runtime penalties.
If that code was already written, would it be worth using in
situations where code only 50% of its size could be used, given the
smaller code is not already written? Writing 100 lines of event
library code is probably not trivial and would require heavy testing.
I would use the 200 line version as it is ready, tested,
cacheable, not really a download burden for the majority of
today's network connections (even most mobile networks).

In that hypothetical situation, I probably would use the code as well.
I think that the extreme positions for and against "general"
are both faulty.

You are not seeing the question of how 'general' is "general". An event
library, no matter how large/capable is not 'general' in every sense. It
should have no element retrieval methods, no retrieval using CSS
selectors, no built-in GUI widgets, etc. It is (potentially) a task
specific component, even if it comprehensively addresses its task.
When general code has acceptable performance, then creating
and maintaining one version is the winner.

Given the number of dimensions to 'general' it has not yet been
demonstrated that truly general code can be created so any "when general
code ..." statement is irrelevant. Certainly if you can get away with a
single implementation of a task-specific component then there is no need
for an alternative.
I think an event library falls into this category.

Of a component that only needs one good implementation? I doubt that; in
Aaron's code to date we have not even discussed the possibilities opened
up by having multiple frames (where normalising the event to -
window.event - is not necessarily the right thing to do). I would not
like the idea of importing a large-ish single document component into
every individual frame when a slightly larger multi frame system could
do the whole job, or of using that larger multiple frame version in a
single page application.
When the general code is unacceptably slow then more optimized
versions must be written. The position reporting
problem falls into this category.

But the position reporting problem also demonstrates that the fact that
actual applications of javascript (web sites/applications, etc) are
specific can be exploited to facilitate problem solving. That is,
because at least some of the possible permutations of applications will
not be present in any specific application those excluded permutations
do not need to be handled by code that will never see them. It is not
just speed, there is also the possibility of getting to cut the Gordian
knot rather than untying it. There is also an advantage in the speed
with which re-useable code can be created, because you do not have to
spend time writing code for situations that you know will not apply. And
there is also a testing advantage, because excluded permutations do not
need to be tested.
"Premature optimization is the root of all evil" seems to apply
and suggests the strategy to use general code until there is push
back from some human (possibly the programmer, testers, customers)
that the code really is too slow. Then, and only then, fallback
to the multiple implementations strategy which is, in many regards,
an optimization strategy.

It is not just optimisation. It is not solving problems that are
currently not your problems, it is not spending time writing code that
is not needed, and it is not spending time testing code that will not be
executed in the context of its use.

Richard.
 

Richard Cornford

For the called event handler I was intending/trying to give a
consistent interface.

This is the sort of information that should come with code when it is
first posted. It is necessary in order to make an informed judgement
about whether the code achieves its specification or not.
For the calling API I was trying to give support for setting single
events per element. But also hoping to offer extended functionality
of individual browsers. This may be a bad thing.

It is a bad thing. There is no point putting a layer over the APIs
provided by the browser if the result is precisely as complex to use as
the existing API. Someone wanting to use that extended functionality is
going to want to know whether it is available or not, and once they have
tested for that they have carried out 70-odd% of the work your functions
do internally, and everything they would have needed to do in order to
decide which listener attaching method to use for themselves.
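
By way of illustration, a minimal sketch of the test such a caller would end
up performing anyway (and which already covers most of the branching the
wrapper performs internally); - el -, - type - and - fn - stand for the
element, event name and listener, as in the code quoted earlier:-

if(el.addEventListener){
  el.addEventListener(type, fn, false);
}else if(el.attachEvent){
  el.attachEvent('on' + type, function(){
    fn.call(el, window.event);
  });
}else{
  el['on' + type] = fn;
}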

Richard.
 

Richard Cornford

Aaron said:
So Opera needs a special case calling attachEvent on windows instead
of addEventListener ?

Event handling on objects outside of the DOM (which pretty much comes
down to window/frame objects) needs to be handled completely
independently of event handling on elements within the DOM. Any
parallels between the two should be regarded as fortunate coincidences
rather than as being related (or implying a relationship). (And then
being cautious about load and unload events)

Richard.
 

Thomas 'PointedEars' Lahn

Peter said:
The spec seems to disagree

"Although all EventListeners on the EventTarget are guaranteed to be
triggered by any event which is received by that EventTarget, no
specification is made as to the order in which they will receive the
event with regards to the other EventListeners on the EventTarget."

http://www.w3.org/TR/DOM-Level-2-Events/events.html#Events-flow-basic

Thanks, I was not aware of that paragraph.

However, implementations so far appear to implement what W3C DOM Level 3
says (currently a Working Draft):

,-<http://www.w3.org/TR/DOM-Level-3-Events/events.html#Events-flow>
|
| [...]
|
| Firstly, the implementation must determine the current target. [...]
|
| Secondly, the implementation must determine the current target's candidate
| event listeners. This is the list of all event listeners that have been
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| registered on the current target in their order of registration. Once
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| determined, the candidate event listeners cannot be changed, adding or
| removing listeners does not affect the current target's candidate event
| listeners.
|
| Finally, the implementation must process all candidate event listeners in
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| order and trigger each listener if all the following conditions are met.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| A listener is triggered by invoking the EventListener.handleEvent() method
| or an equivalent binding-specific mechanism.
|
| * The event object's immediate propagation has not been stopped.
| * The listener has been registered for this event phase.
| * The listener has been registered for this event type.

This behavior makes sense to me as `add' would indicate an order, whereas
`attach' does not.
Full ACK.
"ACK"?

ACK.

[...] would make your posts much easier to read for those
unfamiliar with your uncommon English abbreviation. "ACK" does not
appear in my English dictionary, for example.

It appears in the Jargon File, FOLDOC (which took it from the former), and
Wikipedia, for example. The word can be easily found in these and millions
of other locations with Google, which you are using, many of them explaining
it as it was meant here:

| Results 1 - 10 of about 15,500,000 for ACK. (0.21 seconds)

You are only *playing* stupid, I hope.


PointedEars
 

Peter Michaux

[snip regarding Prototype.js's implementation and use of "bind"]

I'm a little surprised you know so much about the internals of
Prototype.js. I haven't looked inside that library for more than a few
minutes in the last year. All that $, $A, $H, and my personal
"favorite" $$ are successful repellent. I don't even know what these
functions do exactly and in some case not even partially.

The last time I did look in the library it seemed that the
Prototype.js typists don't care about any sort of modularity
in their code. It is one big lump that cannot be untangled. I am not
surprised that their heavy-duty "bind" is used internally. RobG
pointed out some time ago that many functions in Prototype.js start by
calling $ for one of the arguments. Even functions that aren't API
functions do this, which is completely pointless.

This example of "bind" does make a good point that the function is
more "general" than needed in the particular case. But there would
only be a need to write a more trimmed down version if the performance
penalty of the current function is causing a problem. I don't want to
make it seem like writing especially inefficient code is ok until
there are customer complaints but there is a balance which needs to be
struck in the interest of saving development dollars.

I happen to use "bind" for partial application in the opposite way.
I'm not concerned with setting "this" but rather with the partial
application of the explicit arguments to a function. I create a
"thunk" in Lisp terminology (a function that takes no arguments.) I'm
probably a little too keen on Lisp-style of programming right now but
it is a pleasure compared to the bothers of "this".

[snip]
Not just that; size, and the internal complexity that manifests itself in
size, have consequences for understandability and so for maintainability.

I've thought about the saying "premature optimization is the root of
all evil" in several different ways recently. I believe the original
use was with respect to reducing CPU use and so speeding up the
program. Now this saying can apply to size, speed, maintainability,
generality. None of these should be optimized prematurely as the
others usually suffer. That is the reason I pressed the issues I did
below. Sometimes it seems you are advocating optimizing size, speed at
the cost of maintainability (i.e. multiple versions) and generality.

No, it has not yet been an issue for me. But Safari browsers expose many
'odd' properties in their DOMs so I would bet that an object inference
test could be devised even if a direct feature test could not, and the
result would still be far superior to the UA sniffing that seems to go
on at present.

I'd be surprised if there is an object change between versions 2.0.2
and 2.0.3 (if those are the correct numbers, I have to check) where
the bug was fixed. If that one bug was the only thing changed in the
release or if no new objects or properties were added, that would be a
problem for my goal. I suppose I could throw away versions 2.0.3 and
2.0.4 if something noticeable changed in version 2.0.5, for example.
All these versions were automatic upgrades and so none are really in
use now.

[snip]
Talking (even arguing) about them is a better way of testing the veracity
of ideas than not.
Indeed.



Matt's position has tended to be that someone else (someone other than
him) should write it.

I think there is a bit of that but also I think he wants to read an
acknowledgement that using code that is slightly too general for a
situation is tolerable.

But it is not general. The only dimension in which it is general is the
browser support dimension (though it should not be too bad in the
programmer understanding dimension). It does not even cover the general
case of wanting to find the degree to which a document has been scrolled
because it has no facility for reporting on any document other than the
one containing the SCRIPT element that contained/loaded its code. Add
support for multi-frame object models and you have something else again.

Fair enough. I really don't think about multiple frames/windows as I
almost never use them (at least not where this would matter).


[snip]
In that hypothetical situation, I probably would use the code as well.

[point A] I will refer to this below.
You are not seeing the question of how 'general' is "general". An event
library, no matter how large/capable is not 'general' in every sense. It
should have no element retrieval methods, no retrieval using CSS
selectors, no built-in GUI widgets, etc.

I don't know how CSS selectors or built in widgets have anything to do
with an event library.

It is (potentially) a task
specific component, even if it comprehensively addresses its task.

If it could truly comprehensively do its job then I think that would
mean "general".

Given the number of dimensions to 'general' it has not yet been
demonstrated that truly general code can be created so any "when general
code ..." statement is irrelevant. Certainly if you can get away with a
single implementation of a task-specific component then there is no need
for an alternative.

That is a valuable acknowledgement by including "get away with".

Of a component that only needs one good implementation? I doubt that; in
Aaron's code to date we have not even discussed the possibilities opened
up by having multiple frames (where normalising the event to -
window.event - is not necessarily the right thing to do). I would not
like the idea of importing a large-ish single document component into
every individual frame when a slightly larger multi frame system could
do the whole job, or of using that larger multiple frame version in a
single page application.

Given the your acknowledgement at "point a" above, it would seem the
size of "slightly" might play a role.

If the slightly larger multi-frame system was written and there was a
tight deadline, I would use it.

If the single page version was already written and could do the job by
being included in every individual frame then I would use it on a
tight deadline. Caching could be set up with some no-check, far-future
expiration date header so there is no cost to including it in every
page.

This is now going into the "need to know the requirements", which is
your point anyway.

[snip]
It is not just optimisation. It is not solving problems that are
currently not your problems, it is not spending time writing code that
is not needed, and it is not spending time testing code that will not be
executed in the context of its use.

Nicely written; however, if the code is already written, tested, and
available for download from the web, but solves a problem more general
than the problem at hand, where does one draw the line and say it is *too*
general? There must be some observable that indicates this situation.
For example "the client is complaining downloads are too long" or "the
client is complaining the application is not responsive enough" or
"the other programmers are spending too much time on maintenance" or
the genuine expectation that one of these sorts of problems will
arise.

The majority of JavaScirpt programmers (almost all less "uptight" than
us) seem to agree that there is a problem that can be specified and
solved with a library that can be shared on the web to the benefits of
others. Perhaps each project you work on is so radically different,
and perhaps quite advanced, that your given problems are not solved
well by the feature sets of these prepackaged libraries (leaving the
quality of these libraries aside for a moment.)

For my own use, I developed a library and slapped the label "Fork"
onto it. I think it solves roughly the same problem as the base
utilities of YUI!, Prototype.js, jQuery, Dojo, etc. This vaguely
specified problem is what people call the "general" problem and the
use of "general" in this case is incorrect. Your use of "general" is
better. The same problem occurs with the distinction between multi-
browser and cross-browser. Libraries claiming to be cross-browser are
usually just multi-browser for some very small value of "multi". I
will endeavor to be more careful about my use of the word "general".

What would be great is if there was a word for this vaguely specified
problem that so many libraries roughly solve because these libraries,
though more general than necessary in many cases, are acceptable
solutions for what I dare to say are "most" browser scripting
problems. Many of the regulars past and present of this group have
maintained their own solutions to roughly the same problem. Matt
Kruse, Thomas Lahn (I'll probably get crucified for including his name
in this list), David Mark, and I have all shown code we have written
that we can cart around to various projects. The code may be too
general in some cases but the cost of extra download time is
acceptable (perhaps unnoticeable) and the paying customer would rather
we just get on with development than trimming the code down to a
minimal implementation.

The vaguely specified problem is *something* *approximately* like the
following. This is probably closer to the problem that the c.l.js
regulars solve than the mainstream libraries solve but they aren't far
off.

---------------

Full support in this generation of browsers
IE6+, FF1+, O8.5+, S2+
No errors (syntax or runtime) in
IE5+, NN6+, O7+, S1+
Possibly syntax errors but no runtime errors in
IE4+, NN4+
Who cares
IE3-, NN3-

A single page implementation of an event library. (Not worried about
frames/windows as you discussed above.)

Probably many, many other restrictions/requirements/assumptions.

---------------

The above specific problem (or one quite close) seems to be the one
that needs solving most frequently and so is the one for which the
most prefabricated code is available for download on the web. This has been
incorrectly referred to as the "general" problem and the available
solutions have been labeled "cross-browser" even though they are not
even close and don't even attempt to use feature testing well.

Since this problem arises so frequently it is good that programmers
share code to solve this problem (or problems very similar). The fact
that this problem is more general than necessary in many cases is
clearly not a problem. If it were then customers would have complained
enough that some other problem would be solved (perhaps your multiple
implementations system would be popular.)

I have tried solving only the problem at hand for a given web page. As
more requirements arrive from the customer, I find I always end up,
once again, solving this same vaguely specified problem. Perhaps this
vaguely specified problem is exactly at the perceived level of
functionality the web can provide without being too expensive to
develop.

There is something to this vaguely specified problem, don't you agree?

Peter
 

Peter Michaux

The spec seems to disagree
"Although all EventListeners on the EventTarget are guaranteed to be
triggered by any event which is received by that EventTarget, no
specification is made as to the order in which they will receive the
event with regards to the other EventListeners on the EventTarget."

Thanks, I was not aware of that paragraph.

However, implementations so far appear to implement what W3C DOM Level 3
says (currently a Working Draft):

,-<http://www.w3.org/TR/DOM-Level-3-Events/events.html#Events-flow>
|
| [...]
|
| Firstly, the implementation must determine the current target. [...]
|
| Secondly, the implementation must determine the current target's candidate
| event listeners. This is the list of all event listeners that have been
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| registered on the current target in their order of registration. Once
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| determined, the candidate event listeners cannot be changed, adding or
| removing listeners does not affect the current target's candidate event
| listeners.
|
| Finally, the implementation must process all candidate event listeners in
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| order and trigger each listener if all the following conditions are met.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| A listener is triggered by invoking the EventListener.handleEvent() method
| or an equivalent binding-specific mechanism.
|
| * The event object's immediate propagation has not been stopped.
| * The listener has been registered for this event phase.
| * The listener has been registered for this event type.

This behavior makes sense to me as `add' would indicate an order, whereas
`attach' does not.

It doesn't make sense to me and I wish they wouldn't include this in
the specification.

If two unrelated pieces of code are registering listeners on an
element, the order of execution of those listeners would not matter.

If two related listeners are being registered and the order does
matter then they should be combined and added as one listener so the
order is explicit in the code. Anything else and debugging is a
nightmare.
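
A minimal sketch of that combining approach (the listener names are
invented for the example):-

el.addEventListener('click', function(ev){
  validateInput(ev);  // must run first
  updateSummary(ev);  // and only then this; the order is explicit here
}, false);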

Also, I don't think "add" means "append at the end". For example, the
Mathematical concept of addition for numbers is commutative.

Full ACK.

ACK.

[...] would make your posts much easier to read for those
unfamiliar with your uncommon English abbreviation. "ACK" does not
appear in my English dictionary, for example.

It appears in the Jargon File, FOLDOC (which took it from the former), and
Wikipedia, for example. The word can be easily found in these and millions
of other locations with Google, which you are using, many of them explaining
it as it was meant here:

| Results 1 - 10 of about 15,500,000 for ACK. (0.21 seconds)

You are only *playing* stupid, I hope.

Sometimes but I genuinely don't know about 95% of the abbreviations
you use. I didn't know "ACK", for example. Have a look in the group
archives for "ack". There are not many uses and many are the
interjection use of "ACK!!" It seems you studied all the obscure
abbreviations you could and wield them to bewilder your opponents and
gain some sort of sense of superiority. Given how many messages you
type here per month, the extra nine characters you saved with "ACK"
are insignificant.

Peter

P.S. I didn't snip anything because I deemed it all relevant.

P.P.S "P.S." is a well known abbreviation for "post script" and
"P.P.S." is for "post, post script". I'm sure you could see a way of
extending this system for any number of notes included after the main
body of text.
 

Thomas 'PointedEars' Lahn

Peter said:
Why LGPL? There are JavaScript libraries under the MIT and BSD licenses
which are more liberal than LGPL.
Offering your library under the LGPL means, most importantly, you are
offering your library to the world.

In a nutshell:

LGPL: You should use GPL v2 instead.

<http://www.gnu.org/licenses/why-not-lgpl.html>

GPL v2: You should use GPL v3 instead.

<http://www.gnu.org/licenses/rms-why-gplv3.html>

GPL v3: Copyright notices must be included and left in when contained.
Source code must be provided on request, free of charge. Users
may modify and redistribute the program under these conditions.

<http://www.gnu.org/licenses/gpl-3.0.html>

MITL: Copyright notices must be included and left in when contained.
Source code does not need to be provided. Users may modify and
redistribute the program under these conditions.

<http://www.opensource.org/licenses/mit-license.php>

BSDL: See MITL plus the "This product includes software developed by
the University of California, Berkeley and its contributors."
acknowledgement. Advertising with the name of the university
or its contributors is forbidden.

<http://www.opensource.org/licenses/bsd-license.php>

A common misconception is that one has to provide the source code along with
the program and is not allowed to make a profit from GPL'd software. This
is wrong.

Furthermore, distributing a library written in an ECMAScript implementation
under the BSD or MIT licenses is questionable. If the library is to be used
client-side (as here), you will have to provide the source code anyway. And
as for the BSD License, unless you really use University products in it, you
could as well distribute under the MIT or GPL with little difference in your
freedom or that of your users.
I think it is great you are taking the project of writing a library
seriously but if you are going to release your library publicly you may
want to give it a very low version number like 0.0 or 0.1 so people know
you are just starting to learn the issues of both JavaScript and cross
browser coding.

Full ACK.


PointedEars
 

Thomas 'PointedEars' Lahn

Peter said:
Thomas said:
Peter said:
Thomas 'PointedEars' Lahn wrote:
* event listeners *added* with addEventListener() are executed *in
order of addition*;
The spec seems to disagree [...]
http://www.w3.org/TR/DOM-Level-2-Events/events.html#Events-flow-basic

Thanks, I was not aware of that paragraph.

However, implementations so far appear to implement what W3C DOM Level
3 says (currently a Working Draft):

[Event listeners for an event target are to be triggered in order of
registration]

This behavior makes sense to me as `add' would indicate an order,
whereas `attach' does not.
[W3C DOM Level 3 (WD): Event listeners are to be triggered in order
of registration]

It doesn't make sense to me and I wish they wouldn't include this in the
specification.

If two unrelated pieces of code are registering listeners on an element,
the order of execution of those listeners would not matter.

And so their being triggered in order of registration would not be a drawback.
If two related listeners are being registered and the order does matter
then they should be combined and added as one listener so the order is
explicit in the code. Anything else and debugging is a nightmare.

It may not be possible to know the previously registered event listener,
insofar as this solution is not generally applicable. However, I'd rather they
included an interface for a native event registry in the Specification that
could be used to determine all event listeners previously registered by the
same user, at least, so that that one would have to write less
less-efficient (sic!) non-native code for that.
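
A rough sketch of the kind of non-native registry that currently has to be
written in user code (a wrapper that records what it registers; all names
are invented for the example):-

var listenerRegistry = (function(){
  var records = [];
  return {
    add: function(target, type, listener){
      target.addEventListener(type, listener, false);
      records.push({target: target, type: type, listener: listener});
    },
    getListeners: function(target, type){
      var found = [];
      for(var i = 0; i < records.length; i++){
        if(records[i].target == target && records[i].type == type){
          found.push(records[i].listener);
        }
      }
      return found;
    }
  };
})();
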
Also, I don't think "add" means "append at the end". For example, the
Mathematical concept of addition for numbers is commutative.
YMMV.
Full ACK.
"ACK"? ACK.

[...] would make your posts much easier to read for those unfamiliar
with your uncommon English abbreviation. "ACK" does not appear in my
English dictionary, for example.
It appears in the Jargon File, FOLDOC (which took it from the former),
and Wikipedia, for example. The word can be easily found in these and
millions of other locations with Google, which you are using, many of
them explaining it as it was meant here:

| Results 1 - 10 of about 15,500,000 for ACK. (0.21 seconds)

You are only *playing* stupid, I hope.

[...] I genuinely don't know about 95% of the abbreviations you use.

That is your problem alone. There is none among them that is unfamiliar on
Usenet. (Compared to our dear Doctor, for example, I am being quite liberal
also in that regard.) Of course a googlodyte like you now apparently are
does not need to know, but then he had better not go on whining about them.
I didn't know "ACK", for example. Have a look in the group archives for
"ack". There are not many uses and many are the interjection use of
"ACK!!" It seems you studied all the obscure abbreviations you could and
wield them to bewilder your opponents and gain some sort of sense of
superiority. Given how many messages you type here per month, the extra
nine characters you saved with "ACK" are insignificant.

So you are not playing stupid?
[...] P.S. I didn't snip anything because I deemed it all relevant.

P.P.S "P.S." is a well known abbreviation for "post script" and "P.P.S."
is for "post, post script". I'm sure you could see a way of extending
this system for any number of notes included after the main body of text.

OK, you are not playing. You *are* stupid, and you are wasting my time.


Score adjusted

PointedEars
 

kangax

Take the last version of Prototype.js that I looked at (1.6.0.2).
Internally there are 28 calls to its - bind - method and precisely zero
of those pass more than one argument to the method. The method's code
is:-

| bind: function() {
|   if (arguments.length < 2 && Object.isUndefined(arguments[0]))
|     return this;
|   var __method = this, args = $A(arguments), object = args.shift();
|   return function() {
|     return __method.apply(object, args.concat($A(arguments)));
|   }
| },

The 1.6.0.2 release is almost 6 months old. Function.prototype.bind
from the latest revision
(http://github.com/sstephenson/prototype/tree/master/src/base.js#L168)
"forks" the returned function based on the number of arguments given.

Similar changes are about to be made to other Function.prototype.*
methods. Using:

__method.apply(null, [this].concat($A(arguments)));

where:

__method.call(null, this);

would suffice, is no doubt reckless.
 

jdalton

@Richard Cornford
Indeed so far short that their only way out is to petition for a
new - bind - method in the new language versions so that faster native
code can rescue their authors from the consequences of original designs.
As far as I know the Prototype core members have not petitioned for a
native bind method.
I believe other developers have seen its worth and have requested its
addition.
I dislike the tone of your comment. The authors are perfectly capable
and do not need “rescuing”.

You make some very good points about Prototype’s bind method.
There have been some performance patches submitted that deal with bind
and other methods:
http://prototype.lighthouseapp.com/attachments/31739/0467-Optimize-bind-bindAsEventListener.patch
http://prototype.lighthouseapp.com/attachments/33717/0473-faster-curry-wrap-and-methodize.patch

@ Peter Michaux
Prototype.js typists don't seem to care about any sort of modularity
As far as I know the core has avoided modularity because it is harder
to maintain.
If file size is an issue you can gzip and minify it to around 20kb.
RobG pointed out sometime that many functions in Prototype.js start by
calling $ for one of the arguments. Even functions that aren't API
functions do this which is completely pointless.
$() resolves the element(s) from an ID(s) and extends the element(s) if
needed.
Prototype should have a reason for using $() in the places it is used.
You can read the documentation: http://www.prototypejs.org/api/utility
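
For reference, typical $() usage as documented there (the element ids are
invented for the example):-

var content = $('content');         // one id: the extended element (or null)
var panels = $('header', 'footer'); // several ids: an array of extended elements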

- JDD
 

Richard Cornford

jdalton said:
@Richard Cornford
As far as I know the Prototype core members have not petitioned
for a native bind method.

Maybe, but that has not stopped the authors of other similar libraries
making such petitions, and citing Prototype.js methods in order to
bolster their position.
I believe other developers have seen its worth and have
requested its addition.

I dislike the tone of your comment.

Should I take it that the contempt is coming across?
The authors are perfectly
capable and do not need “rescuing”.

The evidence (past and present) suggests otherwise.
You make some very good points about Prototype’s bind method.

You don't say?
There have been some performance patches submitted that deal
with bind and other methods:
http://prototype.lighthouseapp.com/attachments/31739/0467-Optimize-bind-bindAsEventListener.patch
<snip>

That ends up with a method that looks like:-

| bind: function() {
|   if (arguments.length < 2 && Object.isUndefined(arguments[0]))
|     return this;
|   var __method = this, args = $A(arguments), object = args.shift();
|   if (args.length) {
|     return function() {
|       return __method.apply(object, args.concat($A(arguments)));
|     }
|   }
|   return function() {
|     return __method.apply(object, arguments);
|   }
| },

- which is a bit of a half-arse effort at improving performance, and
faulty in terms of the logic it uses. The branch that just returns - this -
is only acted upon if the value of the first argument is undefined, but
the apply and call methods both use the global object as the - this -
value if their first argument is null or undefined. Javascript offers a
very simple test that discriminates null and undefined from all other
values, so that could be used in place of the comparatively heavyweight
and insufficient - Object.isUndefined(arguments[0]) - test.
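
For clarity, the test alluded to here appears in the rewritten method
further down as a loose comparison with null, which is true only for null
and undefined:-

if(obj == null){
  // obj is null or undefined; apply/call would substitute the
  // global object as the - this - value in either case
}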

As the - args - array is only to be used when it has a non-zero length
it would be better to only create that array in the branch that uses it.
It remains where it is because the - object - variable needs to be
initialised for both of the following branches. Except in reality it
does not need to be initialised at all. If instead of using a
variable, - object - had been declared as a single formal parameter for
the method that parameter would have automatically been assigned the
same value as arguments[0]. Having done that the - args - array creation
can be moved into the branch that uses it, and so not executed at all
when only one argument was used with the - bind - method.

Once the - object - parameter is being used it is no longer necessary
to - shift - its value out of the front of the array, and that opens up
the possibility of using an alternative approach to create the array.
Specifically applying the - Array.prototype.slice - to the arguments
object, as in - args = Array.prototype.slice.call(arguments, 1); -,
where fast native code creates the desired array and skips the first
argument in the process.

And finally, using the - concat - method to append an array created from
the arguments object is very convoluted and relatively inefficient when
the arguments object can be used as the second argument to the - apply -
method and the - apply - method could be called on - push -, which will
take any number of arguments and append them to an array. That is:-
__method.apply(object, args.concat($A(arguments))); - can be replaced
with - __method.apply(obj, args.push.apply(args, arguments)); - and
should result in superior performance.

The result might resemble:-

Function.prototype.bind = function(obj){
  var args, fnc;
  if(arguments.length > 1){
    fnc = this;
    args = Array.prototype.slice.call(arguments, 1);
    return (function(){
      return fnc.apply(obj, args.push.apply(args, arguments));
    });
  }else if(obj == null){
    return this;
  }else{
    fnc = this;
    return (function(){
      return fnc.apply(obj, arguments);
    });
  }
};

- which has actually become very different from the authors of
Prototype.js's best efforts to date. And it has lost all of its internal
dependencies on Prototype.js in the process. (Maybe there is a lesson in
that.)

Your faith in the authors of Prototype.js is touching, but no
justification for it is evident in the code they write. And I bet that
when they become aware of some of the possibilities suggested above they
will happily take all the credit for any resulting performance gains
that would be the consequences of their applying them (and some can be
applied all over the place in Prototype.js methods).

Richard.
 

Richard Cornford

Richard Cornford wrote:
The result might resemble:-

Function.prototype.bind = function(obj){
var args, fnc;
if(arguments.length > 1){
fnc = this;
args = Array.prototype.slice.call(arguments, 1);
return (function(){
return fnc.apply(obj, args.push.apply(args, arguments));

That line will work better as:-

return fnc.apply(obj, (args.push.apply(args, arguments)&&args));

or the returned function as:-

return (function(){
args.push.apply(args, arguments);
return fnc.apply(obj, args);
});
});
}else if(obj == null){
return this;
}else{
fnc = this;
return (function(){
return fnc.apply(obj, arguments);
});
}
};
<snip>

Richard.
 

kangax

And finally, using the - concat - method to append an array created from
the arguments object is very convoluted and relatively inefficient when
the arguments object can be used as the second argument to the - apply -
method and the - apply - method could be called on - push -, which will
take any number of arguments and append them to an array. That is:-
__method.apply(object, args.concat($A(arguments))); - can be replaced
with - __method.apply(obj, args.push.apply(args, arguments)); - and
should result in superior performance.

I didn't know about Array.prototype.push being faster than
Array.prototype.concat in this case. Thanks for the tip.
The result might resemble:-

Function.prototype.bind = function(obj){
  var args, fnc;
  if(arguments.length > 1){
    fnc = this;
    args = Array.prototype.slice.call(arguments, 1);
    return (function(){
      return fnc.apply(obj, args.push.apply(args, arguments));

Wouldn't it be better to invoke "push" from the Array.prototype
directly, rather than resolving the reference through the args'
prototype chain?
 

Lasse Reichstein Nielsen

Richard Cornford said:
Function.prototype.bind = function(obj){
var args, fnc;
if(arguments.length > 1){
fnc = this;
args = Array.prototype.slice.call(arguments, 1);

This copies the arguments ...
return (function(){
return fnc.apply(obj, args.push.apply(args, arguments));

... and this push call changes the copy. I.e., every time
the function is called, the args array is made larger.
Also, the push function doesn't return the updated array.

In this case, the concat function would probably be better, i.e.:


The change could be rolled back, e.g.:
return function() {
return fnc.apply(obj, args.concat(arguments));
}

/L
 

Peter Michaux

On Jul 21, 2:37 pm, "Richard Cornford" <[email protected]>
wrote:

[snip]
Function.prototype.bind = function(obj){
  var args, fnc;
  if(arguments.length > 1){
    fnc = this;
    args = Array.prototype.slice.call(arguments, 1);

There is no guarantee that I know of stating the above line will work.
Array prototype properties like "slice" are not generic in ECMAScript
3 like the generics in ECMAScript 4 are planned to be. That is, in
ECMAScript 3, Array.prototype.slice doesn't necessarily work with an
array-like "this" object. The "this" object should be an actually
Array object. If I remember correctly, David Mark suggested he new of
an implementation where the above code would error. I use a loop to
accomplish the goal in the above line.
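
A sketch of that loop-based alternative (variable names invented), copying
the extra arguments without relying on slice being generic:-

var args = [];
for(var i = 1; i < arguments.length; i++){
  args[args.length] = arguments[i];
}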

[snip]

Peter
 

RobG

On Jul 21, 2:37 pm, "Richard Cornford" <[email protected]>
wrote:

[snip]
Function.prototype.bind = function(obj){
  var args, fnc;
  if(arguments.length > 1){
    fnc = this;
    args = Array.prototype.slice.call(arguments, 1);

There is no guarantee that I know of stating the above line will work.
Array prototype properties like "slice" are not generic in ECMAScript
3 like the generics in ECMAScript 4 are planned to be.

In regard to what ECMA-262 Ed. 3 says about Array.prototype.slice,
there is a note at the bottom of Section 15.4.4.10 that says:

NOTE The slice function is intentionally generic; it does
not require that its this value be an Array object. Therefore
it can be transferred to other kinds of objects for use as a
method. Whether the slice function can be applied successfully
to a host object is implementation-dependent.

That is, in
ECMAScript 3, Array.prototype.slice doesn't necessarily work with an
array-like "this" object. The "this" object should be an actually
Array object. If I remember correctly, David Mark suggested he new of
an implementation where the above code would error.

Was that specifically in regard to the arguments object, or in
general?
 

RobG

[snip]
And finally, using the - concat - method to append an array created from
the arguments object is very convoluted and relatively inefficient when
the arguments object can be used as the second argument to the - apply -
method and the - apply - method could be called on - push -, which will
take any number of arguments and append them to an array. That is:-
__method.apply(object, args.concat($A(arguments))); - can be replaced
with - __method.apply(obj, args.push.apply(args, arguments)); - and
should result in superior performance.

I didn't know about Array.prototype.push being faster than
Array.prototype.concat in this case. Thanks for the tip.


The result might resemble:-
Function.prototype.bind = function(obj){
  var args, fnc;
  if(arguments.length > 1){
    fnc = this;
    args = Array.prototype.slice.call(arguments, 1);
    return (function(){
      return fnc.apply(obj, args.push.apply(args, arguments));

Wouldn't it be better to invoke "push" from the Array.prototype
directly, rather than resolving the reference through the args'
prototype chain?

For some value of "better". It seems to me that:

variableObj->args->args[[Prototype]]->push


is fewer lookups than:

variableObj->[[Scope]]->window->Array->prototype->push


and less to type - you may have other criteria. :)
 

Richard Cornford

Lasse said:
This copies the arguments ...


... and this push call changes the copy. I.e., every time
the function is called, the args array is made larger.
Also, the push function doesn't return the updated array.

You are right. That will work for the first call and then be a mess on
all subsequent calls.
In this case, the concat function would probably be better, i.e.:


The change could be rolled back, e.g.:
return function() {
return fnc.apply(obj, args.concat(arguments));
}

Yes it would, but not like that because step 4 in the concat algorithm
checks whether the argument is an array and branches if it is not. And
that would leave the whole arguments object being appended to the array.

It will be necessary to turn the arguments object into an array again,
so:-

return function() {
return fnc.apply(obj, args.concat(args.slice.call(arguments, 0)));
}

Richard.
 

Richard Cornford

Peter Michaux wrote:
On Jul 21, 2:37 pm, Richard Cornford wrote:

[snip]
Function.prototype.bind = function(obj){
  var args, fnc;
  if(arguments.length > 1){
    fnc = this;
    args = Array.prototype.slice.call(arguments, 1);

There is no guarantee that I know of stating the above
line will work.

There is ECMA 262 saying that it will work, at least so long as nobody
has re-assigned to the - Array.prototype - or its - slice - method.
Array prototype properties like "slice" are not generic
in ECMAScript 3 like the generics in ECMAScript 4 are planned
to be.

The spec says they are generic, so if they are not in some
implementations that would be an implementation bug.
That is, in ECMAScript 3, Array.prototype.slice doesn't necessarily
work with an array-like "this" object. The "this" object should be
an actually Array object.

The algorithm is only interested in the - this - object having a -
length - property and 'array index' properties. Though not having
either will not break the - slice - algorithm.
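
A small illustration of that point, applying slice to a plain object that
merely looks like an array:-

var arrayLike = {0: 'a', 1: 'b', 2: 'c', length: 3};
var copy = Array.prototype.slice.call(arrayLike, 1); // ['b', 'c']
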
If I remember correctly, David Mark suggested he knew of
an implementation where the above code would error. I
use a loop to accomplish the goal in the above line.

As Prototype.js has never even come close to being cross-browser such
rumours are not problematic. The approach can be tested in the 3 or 4
supported browsers and shown to work there.

Richard.
 

Richard Cornford

kangax said:
And finally, using the - concat - method to append an array created from
the arguments object is very convoluted and relatively inefficient when
the arguments object can be used as the second argument to the - apply -
method and the - apply - method could be called on - push -, which will
take any number of arguments and append them to an array. That is:-
__method.apply(object, args.concat($A(arguments))); - can be replaced
with - __method.apply(obj, args.push.apply(args, arguments)); - and
should result in superior performance.

I didn't know about Array.prototype.push being faster than
Array.prototype.concat in this case. Thanks for the tip.

Faster, but not usefully so.

However, it seems that the fastest method of turning an arguments object
into an array is:-

((arguments.length == 1)?[arguments[0]]:Array.apply(this, arguments))

- where - this - is the global object in my tests, but should not be
altered by the process so could be any object. I did try null as first
argument, which is fine on everything but Firefox, where it makes the
process considerably slower than using an object reference.

Unfortunately when the Array constructor, called as a function (so not
with the - new - keyword), is only given one argument and that argument
turns out to be a numeric value that is a positive integer smaller than
2 to the power of 32 then you get different behaviour. So the expression
has to include that special handling for (arguments.length == 1). My
tests still show that whole expression outperforming all of the
alternatives that I could think of.
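
Wrapped up as a helper, the expression above might look like this (the name
- toArray - is invented; the single-argument branch avoids the lone argument
being interpreted as an array length when it happens to be a positive
integer):-

function toArray(argsObject){
  return ((argsObject.length == 1) ?
      [argsObject[0]] :
      Array.apply(this, argsObject));
}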

The precise benefit depends on the number of arguments. With zero
arguments the difference can be very small on some browsers (especially
Firefox and IE). I did my comparisons against Prototype.js's - $A -
function, which is considered to be 100% in the following numbers. At
20 arguments Windows Safari 3 executes that expression in 48% of the
time, and at zero arguments 47%. Mac Safari 2 was better, ranging from
16% with 20 arguments down to 34% with zero.

Opera 9.2 showed the next greatest performance change. 23% at 20
arguments and 57% at zero.
IE 7 came next with 62% at 20 arguments and 81% at zero.
IE 6: 65% to 89%.

Firefox 2.0.4 was the worst I have tried to date. At 20 arguments it
only manages 79% and was down to 85% by zero arguments, but the actual
execution time for the expression (and all of the alternatives) was the
longest of the group tested on the same hardware (by at least a factor
of 2) so the actual gain in time saved was still greater than for some
of the better performing JS engines.

Array.prototype.slice.call(arguments, X); - has still got to be the
fastest method if not all of the arguments are wanted, and (strangely) -
Array.prototype.splice.call(arguments, 0, arguments.length); - is
another alternative, but there, with Windows Safari 3, the benefit
dropped off with more arguments and at 8 arguments it was worse than -
$A -.

These were tests on a mixture of Pentium 4, Core 2 Duo and Core 2 Quad
processors and Windows and Mac OSs so the results may not yet be
sufficiently representative for the comparisons to hold in general. I will
probably post the test code for these tomorrow (or soonish) in case
anyone wants to see if other hardware permutations contradict those
results (or try it with other browsers/OSs).

Richard.
 
