My Library TaskSpeed tests updated

Scott Sauyet

David said:
| I wonder if you ran the thing while I had a bad build up there. That
| has happened a couple of times in the last week.

That's always possible. I don't know if you can tell by looking at a
minified version, but it's here:

http://scott.sauyet.com/Javascript/Test/taskspeed/2010-02-15a/frameworks/mylib-min.js

Obviously that's from the version I ran at

http://scott.sauyet.com/Javascript/Test/taskspeed/2010-02-15a/

In addition to the real libraries tested, this one includes the variant
I described with the changes to the loop initialization for My Library
(which slows it down by 15%) and a version with a number of the cheats
I complained about added to the jQuery tests (huge speed-ups).

I tested again today in IE6, which is painfully slow, and I get the
same errors. Maybe it was just a bad build.

-- Scott
 

David Mark

Scott said:
| Well, this is how your post that started this thread began:
|
| | I've updated the TaskSpeed test functions to improve performance. This
| | necessitated some minor additions (and one change) to the OO interface
| | as well.
|
| :)

I know. I just wrote that. I fail to see how it implies that the speed
test results had _anything_ to do with adding a loadClone method to the
OO interface. JFTR, it didn't.
 

David Mark

Scott said:
| That's always possible. I don't know if you can tell by looking at a
| minified version, but it's here:
|
| http://scott.sauyet.com/Javascript/Test/taskspeed/2010-02-15a/frameworks/mylib-min.js

Yes and no. I can re-test with my IETester in IE6 mode anyway. That
may or may not mean anything. Are you using a true blue IE6 or a
simulation of some sort?

| Obviously that's from the version I ran at
|
| http://scott.sauyet.com/Javascript/Test/taskspeed/2010-02-15a/

Okay. I'll just hit that in my IETester IE6 and see what happens. Thanks!

| In addition to the real libraries tested, this one includes the variant
| I described with the changes to the loop initialization for My Library
| (which slows it down by 15%) and a version with a number of the cheats
| I complained about added to the jQuery tests (huge speed-ups).

But jQuery is still bringing up the rear, I assume.

| I tested again today in IE6, which is painfully slow, and I get the
| same errors. Maybe it was just a bad build.

What errors? :) I mean that literally. What is the error message you see?

Thanks!
 

David Mark

Scott said:
| That's always possible. I don't know if you can tell by looking at a
| minified version, but it's here:
|
| http://scott.sauyet.com/Javascript/Test/taskspeed/2010-02-15a/frameworks/mylib-min.js
|
| Obviously that's from the version I ran at
|
| http://scott.sauyet.com/Javascript/Test/taskspeed/2010-02-15a/

I just ran it in (IETester) IE6. The only error I see is at the end
(dojo is undefined). LOL.

Of course, IETester is just some black box that could be doing anything
behind the scenes. I normally test with a real IE6 installation. Just
so happens that that is not convenient for me at the moment and I want
to get to the bottom of this. So, what errors are you seeing?

You mentioned three tests and all have had changes in the last little
while. As empirical evidence is inconclusive, we should turn to the
code at this point anyway. You mentioned sethtml, insertBefore and
insertAfter as having some unspecified errors. Typically, I copy and
paste the test functions into a blank page so that I can debug the
errors. As I can't do that at the moment, I can only go by the code
used in each of these test functions:-


"sethtml" : function() {
var myEl = E().loadNew('p').setText('New Content');

The E constructor is used in almost all of the tests, so we can throw
that out as a suspect. Same for its loadNew method. In contrast, the
setText method is only used by these three tests, so it is a good (if
unlikely) suspect. So let's round that one up:-

ePrototype.setText = function(text) {
    setElementText(this.element(), text);
    return this;
};

....which leads to setElementText, which I know works in IE6 and hasn't
changed in ages. So I think I can safely rule out the setText method.
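
For reference, a text setter like that is normally just a feature-detected
fork along these lines (a sketch of the general pattern, not necessarily
the exact My Library code):-

var setElementText;

if (typeof document.documentElement.textContent == 'string') {
    setElementText = function(el, text) {
        el.textContent = text; // standards path
    };
} else if (typeof document.documentElement.innerText == 'string') {
    setElementText = function(el, text) {
        el.innerText = text; // IE path
    };
}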

Back to the test sethtml function:-

return Q('div').empty().forEach(function(el) {

The - empty - method is used by the "finale" test as well, but let's look
at it too:-

ePrototype.empty = function() {
    emptyNode(this.element());
    return this;
};

....which leads to emptyNode:-

API.emptyNode = emptyNode = function(node) {
    while (node.firstChild) {
        node.removeChild(node.firstChild);
    }
};

....which is not going to have issues.

myEl.loadClone().appendTo(el);

Here is the code behind that chain:-

if (isHostMethod(html, 'cloneNode')) {
    ePrototype.clone = function(bChildren) {
        return this.element().cloneNode(bChildren);
    };
    ePrototype.loadClone = function(bChildren) {
        return this.load(this.clone(bChildren));
    };
}

But note the feature detection and realize that TaskSpeed does not do
any feature detection of the API as it is one of those
run-straight-into-a-wall type of applications. It is likely the authors
never heard of detecting _API_ methods, which is admittedly a newly
introduced concept (well, over two years old at this point). I point
this out as these test functions should not even be created if the
required features are absent. But then TaskSpeed would have to detect
whether the functions were created and that is beyond its current scope.
So don't expect TaskSpeed to degrade gracefully like the My Library
test page demonstrates. It will definitely crash and burn for all in
extremely outdated browsers (which does not describe IE6, of course).
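
To illustrate, gating on the harness side would look something like this
(hypothetical code; TaskSpeed has nothing of the sort):-

var tests = {};

// Create a test only when the methods it relies on were created
// by the feature detection above (names as used in these tests).
if (ePrototype.setText && ePrototype.loadClone) {
    tests['sethtml'] = function() {
        var myEl = E().loadNew('p').setText('New Content');
        return Q('div').empty().forEach(function(el) {
            myEl.loadClone().appendTo(el);
        }).load('div').length();
    };
}

// The harness would then report a missing test as unsupported,
// rather than letting it crash.

Anyway, back to the chain; the appendTo method:-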

ePrototype.appendTo = function(el) {
    el.appendChild(this.element());
    return this;
};

Nothing suspicious there.

}).load('div').length();

This just re-loads the query. It's not a new feature and I am sure all
of this has been previously tested in IE6.

},

"insertbefore" : function() {
var myEl = E().loadNew('p').setText('A Link');
return Q('.fromcode a').forEach(function(a) {
myEl.loadClone().insertBefore(a);
}).length();
},

"insertafter" : function() {
var myEl = E().loadNew('p').setText('A Link');
return Q('.fromcode a').forEach(function(a) {
myEl.loadClone().insertAfter(a);
}).length();
},

I don't see anything suspicious there either. For completeness, here
are the insertBefore and insertAfter methods:-

ePrototype.insertBefore = function(el) {
    var parent = el.parentNode;

    if (parent) {
        parent.insertBefore(this.element(), el);
    }
    return this;
};

ePrototype.insertAfter = function(el) {
    var next = el.nextSibling;
    var parent = el.parentNode;

    if (parent) {
        if (next) {
            parent.insertBefore(this.element(), next);
        } else {
            parent.appendChild(this.element());
        }
    }
    return this;
};

This is from the latest code, but not much of that stuff has changed of
late. I don't see anything there that would cause issues in IE6 and my
testing (with my page, as well as the one you reported as problematic)
with the (possibly bogus) IETester bears that out. I even tested IE5.5
(or some approximation of it). The 5.5 results were wonderfully
affirming (virtually all blacked out except the last two columns). :)

So, can anyone see a problem in a real IE6 installation? I don't have
access to one right at the moment and it irritates me to think I could
have missed something. I'll be the first to admit if I did something
stupid. But I really don't think that is the case at this point (though
was wondering there for a bit).
 

Scott Sauyet

David said:
| I just ran it in (IETester) IE6. The only error I see is at the end
| (dojo is undefined). LOL.
|
| Of course, IETester is just some black box that could be doing anything
| behind the scenes. I normally test with a real IE6 installation. Just
| so happens that that is not convenient for me at the moment and I want
| to get to the bottom of this. So, what errors are you seeing?

Unfortunately, all that shows up in the title of the cell is:

[object Error]

which is not at all useful.
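
Presumably the harness just stringifies whatever it catches, something
like this (my guess at the pattern, with stand-in names; not the actual
TaskSpeed source):

try {
    result = testFunction(); // the timed test function
} catch (e) {
    // In IE6, coercing an Error object to a string gives "[object Error]",
    // so nothing useful survives; e.name and e.message would say more.
    cell.title = e;
}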

I made a simpler version just testing My Library in order to try to
run this down. It's at

http://scott.sauyet.com/Javascript/Test/taskspeed/2010-02-17a/

It's happening for me on my home machine as well as my work one. I'm
using Multiple IEs [1], which installs multiple versions of IE on a
Windows machine. I've never run into any issues beyond the many
versions occasionally sharing some settings, but it could be a problem
with this version.

-- Scott
____________________
[1] http://tredosoft.com/Multiple_IE
 

David Mark

Scott said:
| | I just ran it in (IETester) IE6. The only error I see is at the end
| | (dojo is undefined). LOL.
| |
| | Of course, IETester is just some black box that could be doing anything
| | behind the scenes. I normally test with a real IE6 installation. Just
| | so happens that that is not convenient for me at the moment and I want
| | to get to the bottom of this. So, what errors are you seeing?
|
| Unfortunately, all that shows up in the title of the cell is:
|
| [object Error]
|
| which is not at all useful.

Right. Something screwy is going on there as I've had several reports
of success. But all it takes is one failure to spoil the bunch.

| I made a simpler version just testing My Library in order to try to
| run this down. It's at
|
| http://scott.sauyet.com/Javascript/Test/taskspeed/2010-02-17a/

I'll try it out, but I expect it will "pass" here again.

| It's happening for me on my home machine as well as my work one. I'm
| using Multiple IEs [1], which installs multiple versions of IE on a
| Windows machine. I've never run into any issues beyond the many
| versions occasionally sharing some settings, but it could be a problem
| with this version.

Multiple IE versions can lead to screwy problems, but it galls me that
my library is somehow tripping over this (and the others aren't). If
you would, I would appreciate it if you would make a simpler test case.
It will likely be a good learning exercise in any event (you've
stumbled over something that is throwing me and I thought I'd seen it
all in IE, multi or not).

Paste the test function(s) into a page without the try-catch and see
what it throws up. If you can tell me the error message and line, I'm
sure I can figure it out and fix it without duplicating the issue here.
I'm _very_ curious as to what the problem could be. Also, did you try IE7?

I reviewed the related code and nothing jumped out at me, so I'm in the
dark at this point. That's not somewhere I'm used to being when it
comes to IE. It's particularly irritating as I had multi-IE (or
something similar) on the test machine that went down recently. I
wonder if the problem would have shown up on that one.

Thanks again for your help Scott. Your efforts are _much_ appreciated!
 

David Mark

Scott said:
| | I just ran it in (IETester) IE6. The only error I see is at the end
| | (dojo is undefined). LOL.
| |
| | Of course, IETester is just some black box that could be doing anything
| | behind the scenes. I normally test with a real IE6 installation. Just
| | so happens that that is not convenient for me at the moment and I want
| | to get to the bottom of this. So, what errors are you seeing?
|
| Unfortunately, all that shows up in the title of the cell is:
|
| [object Error]
|
| which is not at all useful.
|
| I made a simpler version just testing My Library in order to try to
| run this down. It's at
|
| http://scott.sauyet.com/Javascript/Test/taskspeed/2010-02-17a/
|
| It's happening for me on my home machine as well as my work one. I'm
| using Multiple IEs [1], which installs multiple versions of IE on a
| Windows machine. I've never run into any issues beyond the many
| versions occasionally sharing some settings, but it could be a problem
| with this version.

I was actually thinking it was a problem with my build, but your
suggestion seems more likely at this point (a problem with the multi-IE6).

I created a simplified test page for you:-

http://www.cinsoft.net/scotttest.html

If it makes it all the way through, you will see a "good!" alert. There
is no try-catch, so any error will be reported. I assume it will be the
sethtml test as it is the first of the three that was reported as having
a problem in your setup(s).
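
For the curious, the page boils down to running the suspect chain in the
open, roughly like this (schematic, not the verbatim source):-

var myEl = E().loadNew('p').setText('New Content'); // throws here if setText is the culprit

Q('div').empty().forEach(function(el) {
    myEl.loadClone().appendTo(el);
});

window.alert('good!'); // reached only if nothing above threw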

Thanks again for your help! Will be very interesting to see what those
errors are about.
 

David Mark

Scott said:

| I was hesitant to dump more IE's on this box, but went ahead and did it.

I should have trusted my initial instincts. I had noted that the only
common denominator was the setText method and sure as hell that was the
one throwing the exception. It wasn't that method's fault though. It
simply exposed an issue with the MultipleIE product.

The line that threw the exception was setting the innerText property of
a newly created element. In the debugger I was able to read the
property ("") as expected, but not set it. That's no good. :(

So this one's not on me. I never really thought it was, but had to know
for sure.

You might want to report this to the MultipleIE people. Appears they
have issues with some of the versions (there are several others noted on
their Website).

Glad that one is off my plate.

Once again, thanks for the feedback! There are no bad bug reports. :)
 

David Mark

Richard Cornford wrote:

[...]
| I prefer to question these notions, and make two observations. The first
| being that when asked for the rational justification for general purpose
| javascript libraries the answers given tend towards the superficial and
| the unthinking. Mostly when I ask the question no attempt to answer is
| given at all. A few people bother to assert that the answer is "obvious"
| (dubious because when the reason for something really is obvious it is
| also trivial to state that reason; meaning the fact that it has not been
| stated when asked for implies that it is not "obvious"). Sometimes the
| next position is that having pre-existing general libraries is a good
| idea in other languages so they must be a good idea in javascript, which
| denies the context of application, as downloading/compiling the
| equivalent of Java's 16 megabytes of libraries every time you want to
| view a web page clearly would not be a good idea.

In my experience, it is even worse. These library devotees truly
believe that using a GP library is forward-thinking, efficient and
virtually a requirement for anything more than a "hello world"
application; whereas, "old-school" JS is backwards time-wasting
bullshit. Of course, none of them has ever written a cross-browser
script (though they often think they have), so there is really no way
for them to make meaningful comparisons.

For example, about a year ago, I was charged with re-vamping a _very_
simple, but uber-critical, public form-based application. It's the kind
of thing I've done a million times: write the basic HTML and CSS, then
the back-end scripting and finally a dash of client-side script to give
it a little flavor. When I first looked at the original I was horrified
and informed management that we were definitely starting over. They
didn't have any problem with that as they brought me in for my expertise
and I spared them the details of why I was throwing out somebody's baby.

It was the usual nonsense. Horribly invalid and heavy XHTML as HTML,
tables-in-tables-in-more-tables, no navigation, all in one page,
ridiculous CSS, didn't work without scripting (showed a loading
animation forever), inaccessible form control widgets, downloaded half a
MB of scripts to do basic form validation, _literally_ fell apart in
lower resolutions and older browsers, etc., etc. I'm sure it isn't
difficult to picture as the Web is full of such ill-advised "modern"
monstrosities. But, for each of them there is some beetle-browed
twit-and-a-half that thinks they created a masterpiece (and usually an
equally clueless manager that lauded them as a genius for "just getting
things done" and "saving time and money"). That's where they get the
egos. And, of course, the catalyst for these "positive" results was
library xyz. So I knew I'd have to do a literally perfect job as the
slightest misstep at any point in the development would give them
ammunition to say "aw, you shoulda just used a library" and "why are you
wasting time?" Needless to say, I was confident that I could withstand
the scrutiny of "forward-thinking" know-nothings as they don't tend to
test things very thoroughly anyway.

So, I put together one of the leanest, most semantic little masterpieces
I've ever done. As I would find out, the first cut wasn't 100% perfect,
but it was _damned close_. The weight of the thing was cut by over 90%
(no exaggeration), it was valid, strict HTML, did not require scripting,
ran on (almost) everything, including older phones, text browsers,
screen readers, etc. And, to assuage the "old-school" challenges that I
knew would be forthcoming from the peanut gallery, I added "cool"
animations that even leveraged the (then) new Webkit transitions so as
to work swimmingly in iPhones/iPods, etc. It was fucking _beautiful_.
I showed it to management before I moved on to carving it up into
templates for integration with the server side scripts, which were
already in place. They marveled at how _fast_ it was and left it at
that. So far so good.

Now, enter beetle-browed nitwit #1. The first thing they wanted to know
was why didn't I use library xyz. The idea that it was a good idea not
to use the piece of shit was just beyond their comprehension. I could
hear the skepticism in their voice (it was a remote job, so I couldn't
see them grimacing) as they questioned me about "reinventing the wheel"
and whether or not this "alien" POJS creation would really work in "all
browsers". Since I had already shown it to the people who mattered, I
didn't bother arguing with them. For all I knew, they wrote the
original and I didn't feel like wasting time explaining why the original
was a complete and utter disgrace.

Then, when I was about done with everything, testing the new forms (yes,
plural as I actually used more than one document), enter beetle-browed
nitwit #2, who announced that he wanted to start "designing" the forms.
It was a real spit-take moment. _Designing_ the forms?! The fucking
thing was in the bag. What could he possibly have meant by that? Well,
I did have some idea and it turned out to be spot on. I mentioned to
him that if he was going to use PNG's to avoid translucent pixels if at
all possible. He seemed puzzled by that comment and asked why. I told
him they would look terrible in IE6 without ugly workarounds. Ah, no
problem, he said, they don't "support" IE6. And yes, corporate users
were the target market. (!) Take it several shambling, bumbling steps
further as I was told not to "waste time" testing in anything less than
IE8 and to forget about "broken" browsers like Opera too. OMFG. I
didn't know where to begin to explain the world to this guy, so I didn't
bother. I knew I sure as hell wasn't signing off on anything that
wasn't tested in IE6 (for a start).

Next thing I know, there are some bogus "XHTML" documents and equally
ludicrous CSS "designs" in my inbox. But I had an out (or so I
thought). I changed the look of mine to (roughly) match his (without
breaking in lower resolutions for one difference) and informed him that
I wouldn't be using his "fine work" as the originals had already been
put into templates for the server and the deadline was less than a
fucking week away. I figured nobody in their right mind(s) would argue
with that. I wasn't wrong about that, but these people were not in
their right minds. To be fair, these people were completely crackers (to
put it mildly).

Next thing I know (and I wonder what prompted it), I had an angry
manager asking me why I hadn't used this guy's "design". I was "wasting
time", they wanted to "just get it done", etc., etc. **** me running.
The little bastard was stirring up shit behind-the-scenes because he was
sure that his way was "right" and mine was "wrong". Simple as that.

So, I tried my best to explain (with visual aids) that junior's "design"
completely fell apart on my PC (and I mean fell apart as in unusable,
even unreadable). They had a pow-wow and came back quizzing me about
the size of my monitor. I told them it was fucking huge (it's a 60"
television for Christ's sake). They had another huddle and came back to
ask what they really wanted to know, which was what _resolution_ was I
running. I told them, in all honesty, that it was 800x600 and oh did
they freak out (knew they would). :)

I demonstrated to them it didn't do much (if any) better at 1024x768 and
explained that users can't be required to use maximized browsers with no
extra toolbars, OS font settings that exactly match their developers,
etc., etc. There was a delay as I assume they tossed this "new"
information around with said nitwit and the final resolution was that I
was using an incorrect setup and must change my display settings to 1280
by whatever immediately so that no more time would be wasted. OMFG
again. I tried to explain that they had no idea what any resolution
would look like on my end as I use large font mode, but they just didn't
want to hear any more about it. I could sense they were getting _very_
angry (at me!)

Then enter nitwit #3 (this shop had them in abundance), who informed me
that they had noticed a client-side issue in Chromium. I said fine. It
should take all of two minutes to diagnose and fix as I know how to
debug a fucking Web application (especially one that is just a trivial
set of forms). They said don't bother as it was just Chromium.
Whatever. I didn't have Chromium and AFAIK you have to build it, so I
put it on my list to check out before the release.

Next thing I know, I'm put on something else and there is this
trying-to-be-ironic post in one of the company forums that referred to
what I had done as old-fashioned, over-complicated, etc. They actually
had the gall to refer to the 100 or so lines of "plain-old" JS as
something that "just works" (with the ironic quotes if you can believe
that) and that they had taken the liberty of dropping in library xyz so
that it could be brought into the "modern" age, using (you guessed it)
fucking queries in lieu of referencing form controls through the forms'
- elements - collections. One of their "justifications" for this was
that there was "some problem" in one of the tested browsers (the
unexplained and never-investigated Chromium issue, of course) and that
clearly if I had used magic library xyz, they'd all have been
celebrating a successful launch by now. Then they systematically carved
away everything good about it, adding XHTML-style markup, lots of
idiotic CSS hacks, Flash, tons more bogus scripting (all powered by
browser sniffing, of course). But that wasn't enough destruction, so
they threw out the server side templates and went with a more "modern"
(and inherently inaccessible) Ajax approach. Basically, it ended up
back where it started. And who was the villain in all of this? You
guessed it.

I warned them that they were courting disaster with all of that bullshit
and they thoughtfully told me they would change it if there were any
problems. I asked them how the hell they would know if there were
problems and predictably they had no real answer (and I could sense they
were miffed by that line of questioning). And I thought all technical
people were supposed to be able to grasp basic logic. Stupid me. :)

The epilogue was that when they finally released the (now) piece of
shit, announcing it to the entire world, I tried it out in IE8
compatibility mode and it dutifully threw an exception and died. Why?
And this is the capper. Without even consulting me (the resident
browser scripting expert), they had changed a line of mine to use
hasAttribute where I had originally checked a DOM property. I guess
they thought that attribute methods were more "standard" or something.
Mother-fucking incompetent idiots. As you might imagine, above all
else, _that_ bit infuriated me the most. I did report the problem to
management and (weeks later) it got spackled over. But who knows how
many sales to IE users were lost in the interim? And I wonder how long
it would have sat like that if I hadn't said something. Stupid them. :)

And, of course, it no longer did anything useful in _any_ browser
without scripting. Though it was no longer an endless loading
animation, perhaps the new one was worse as it allowed the user to fill
out the whole form before they realized they were screwed (submitting
just put them back on an unpopulated form). And, of course, the layout
fell apart in anything less than 1280 by whatever. I don't think I've
ever used that resolution, so I can't say if it actually worked at that
either. It's the same old story. The developers think that all of the
end-users have (or _should_ have) the same exact setup as they do (and
if they don't, **** 'em).

So yeah, communicating with typical "modern" library-happy dip-shit
developers is difficult. They just don't get it (and likely never
will). It's so "obvious" to them that libraries are the way to "just
get things done". Anyone who says otherwise is living in the past,
"programming assembler", "wasting time", etc. Furthermore, end-users
are expected to eventually "catch up" to their "advanced" designs, so
why worry about the odd stragglers? Sites like Ajaxian and
StackOverflow are full of these train wrecks, bantering about how
library abc changed their lives and how they'd never "go back" to
fighting with Javascript (of course not, they got knocked out in the
first round).

I suspect that most of them just don't know anything about software (let
alone cross-browser scripting) and can't be bothered to learn as it
would get in the way of taking money off gullible clients. :(
 

Garrett Smith

Couldn't agree more with that.

A hand rolled QuerySelector is too much work. It is just not worth it.

IE botches attributes so badly that trying to get the workarounds
correct would end up contradicting a good number of libraries out there.

Many developers don't know the difference between attributes and
properties. Many of the libraries, not just jq, have attribute selectors
that behave *incorrectly* (checked, selected, etc).

For an example of that, just try a page with an input. jQuery.com
homepage will do fine:
http://docs.jquery.com/Main_Page

(function() {
    var inp = jQuery('input[value]')[1];
    inp.removeAttribute("value");
    // The same input should not be matched at this point,
    // because its value attribute was removed.
    alert(jQuery('input[value]')[1] == inp);
})();

Results:
IE: "true"
FF: "false"

As a bookmarklet:

javascript:(function(){var inp = jQuery('input[value]')[1];inp.removeAttribute("value");alert(jQuery('input[value]')[1] == inp);})();

By failing to make corrections for IE's attribute bugs, the query
selector fails with wrong results in IE.

Workarounds for IE are possible, but again, the amount of benefit from
being able to arbitrarily use `input[value]` and get a correct,
consistent, specified (by the Selectors API draft) result, is not worth
the effort in added code and complexity.

Instead, where needed, the program could use `input.value` or
`input.defaultValue` to read values and other DOM methods to find the
input, e.g. document.getElementById, document.getElementsByName, etc.
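
For example (the id here is hypothetical):

var input = document.getElementById('quantity');

// defaultValue reflects the value *attribute*; value is the current value.
if (input.value != input.defaultValue) {
    // the control has been edited since the page loaded
}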

[...]

The only way of testing if an event will fire is to subscribe to it and
then fire the event.

Programmatically dispatching the event as a feature test is inherently
flawed because the important questions cannot be answered from the
outcome; e.g. will the div's onfocusin fire when the input is focused?

| I don't know of anything simple. I think that the designers of -
| addEventListener - fell down badly here. It would have been so easy for
| that method to return a boolean; true for success; then if you attempted
| to add a non-supported listener it could return false from which you
| would know that your listener was going to be ineffective. Still, that
| is what happens when the people designing the specs have negligible
| practical experience in the field.

That design requires a definition for "non-supported listener."
 

David Mark

Garrett said:
| Couldn't agree more with that.
|
| A hand rolled QuerySelector is too much work. It is just not worth it.

I agree with that too, but you have to give the people what they want.
A record number of hits today (and it is only half over) confirms that. :)

| IE botches attributes so badly that trying to get the workarounds
| correct would end up contradicting a good number of libraries out there.

Who cares about contradicting them? They contradict each other already!
Though some have copied each other, making for a cross-library comedy
of errors.

http://www.cinsoft.net/slickspeed.html

| Many developers don't know the difference between attributes and
| properties.

Without naming names, John Resig. :)

| Many of the libraries, not just jq, have attribute selectors
| that behave *incorrectly* (checked, selected, etc).

Do they ever. Non-attribute queries too. They foul up everything they
touch. As the browsers have converged over the last five years or so,
the weenies have written maddeningly inconsistent query engines on top
of them, making it appear that cross-browser scripting is still hell on
earth (and, in a way, it is). They are defeating their own stated
purpose (to make cross-browser scripting fun and easy!)

| For an example of that, just try a page with an input. jQuery.com
| homepage will do fine:
|
| http://docs.jquery.com/Main_Page
|
| (function() {
|     var inp = jQuery('input[value]')[1];
|     inp.removeAttribute("value");
|     // The same input should not be matched at this point,
|     // because its value attribute was removed.
|     alert(jQuery('input[value]')[1] == inp);
| })();
|
| Results:
| IE: "true"
| FF: "false"
|
| As a bookmarklet:
|
| javascript:(function(){var inp = jQuery('input[value]')[1];inp.removeAttribute("value");alert(jQuery('input[value]')[1] == inp);})();

That road has been plowed:-

http://www.cinsoft.net/queries.html

This is the gold standard:-

http://www.cinsoft.net/attributes.html

....and virtually all of it has been manifested in My Library.

| By failing to make corrections for IE's attribute bugs, the query
| selector fails with wrong results in IE.

And _tons_ of others. If you don't constantly "upgrade" these things,
they go from some wrong answers to virtually all wrong answers. Try the
SlickSpeed tests in IE5.5 (still in use on Windows 2000, but that's
beside the point) or anything the developers either ignore or haven't
heard of (e.g. Blackberries, PS3, Opera 8, Opera 9, etc.)

| Workarounds for IE are possible, but again, the amount of benefit from
| being able to arbitrarily use `input[value]` and get a correct,
| consistent, specified (by the Selectors API draft) result, is not worth
| the effort in added code and complexity.

Pity none of them read that draft before dumping a QSA layer on to their
already inconsistent DOM (and occasionally XPath) layers. I didn't read
it either, but I had a feeling the browsers were using XPath behind the
scenes (and sure enough, the behavior confirmed that when I got around
to testing QSA). It's insane as QSA is relatively new and has its own
cross-browser quirks.

| Instead, where needed, the program could use `input.value` or
| `input.defaultValue` to read values and other DOM methods to find the
| input, e.g. document.getElementById, document.getElementsByName, etc.

Yes. And JFTR, defaultValue is what reflects the attribute value. All
of the others are reading the value as they think it "makes more sense",
despite the fact that it doesn't match up with XPath or QSA (for obvious
reasons). That dog won't hunt in XML documents either, but they all go
to great lengths to "support" XML (with bizarre inferences to
discriminate XHR results for example). So it's a God-awful mess any way
you slice it.

[...]

| The only way of testing if an event will fire is to subscribe to it and
| then fire the event.

Depends on the context. My Library can detect supported events using a
technique that was first reported here years ago. Sure, users can
disable the effectiveness of some events (e.g. contextmenu). But unless
you design an app that rises or sets with context clicks, it doesn't matter.
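
The commonly posted form of that technique goes something like this (a
sketch; the version in My Library differs in some details):-

var isEventSupported = (function() {
    // Some events only exist on particular elements.
    var TAGNAMES = { 'select': 'input', 'change': 'input',
                     'submit': 'form', 'reset': 'form',
                     'error': 'img', 'load': 'img', 'abort': 'img' };
    return function(eventName) {
        var el = document.createElement(TAGNAMES[eventName] || 'div');
        eventName = 'on' + eventName;
        var supported = (eventName in el);
        if (!supported) {
            // Older browsers reveal support once a handler is assigned
            el.setAttribute(eventName, 'return;');
            supported = typeof el[eventName] == 'function';
        }
        el = null;
        return supported;
    };
})();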

| Programmatically dispatching the event as a feature test is inherently
| flawed because the important questions cannot be answered from the
| outcome; e.g. will the div's onfocusin fire when the input is focused?

Right, it's much worse than my method.

| That design requires a definition for "non-supported listener."

Well, that shouldn't take more than a sentence. ;)
 

Scott Sauyet

Richard said:
| Haven't we already agreed that the test framework was adapted directly
| from one that was designed to test selector engines, and so must have
| been for libraries with selector engines?

The test framework was so adapted. I'm not sure why that implies that
the libraries to be tested have to be the same ones that were being
tested for selector speed. In actual fact, of course, all the
libraries that I've seen tested with this do have selector engines as
well as DOM manipulation tools.

DOM Manipulation tools and selector engines. Obviously you can run
the former against the results of the latter, but more generally, when
you need to manipulate the DOM, you need some way to select the
nodes on which you work. You could use some of the host collections,
getElementById, getElementsByTagName, or some manual walking of the
DOM tree starting with the document node, but somehow you need to do
this. If you try to provide any generic tools to do this, you might
well start down a path that leads to CSS Selector engines. Of course
this is not a requirement, but it's a real possibility; to me it seems
a good fit.

The name "taskspeed" implies pretty much that; that we are testing
actual tasks.

Do you not think that attaching event listeners or selecting elements
can count as tasks?


| Some of the libraries do their DOM manipulation via selector based
| querying. For them there is no alternative. But DOM manipulation tasks
| do not necessitate the use of selector engines (else DOM manipulation
| did not happen prior to about 2006). It is disingenuous to predicate DOM
| manipulation tasks on selector engine use. It makes much more sense to
| see how each library competes doing realistic tasks in whatever way best
| suits them, be it with selector engines or not.

I believe that was adequately answered when I pointed out that the
specification does not actually require a selector engine.

| So all those - addEventListener - and - attachEvent - calls are not
| event handler manipulations then?

Just barely, I would contend. Nothing in the test verifies that the
event handler actually is attached.


| Then that would have been another mistake as "offering classical OO" is
| not necessary for any real world tasks either.

I choose not to use these classical OO simulators, and I don't feel
I'm missing anything, although I'm mostly a Java programmer who is
perfectly comfortable in the classical OO world. The point is that
the tests were written around the libraries.

The "scare quotes" are an expression of 'so called'. "Library" is a
perfectly acceptable name for these things, but tends to get used in
these contexts with connotations that exclude many of the things that
could also reasonably be considered to be libraries.

Yes, there are many other ways to organize code, but I think these
tests were designed for the general-purpose libraries. Competent JS
folks could also write a "library" that passed the tests and did so
efficiently, perhaps even supplying a useful API in the process, but
which does not try to handle the more general problems that the
libraries being tested do. That might be an interesting exercise, but
is not relevant to this discussion.

[ ... Interesting discussion on libraries and code reuse deleted
as I have nothing to add ... ]

| | Really? Very interesting. I didn't realize that it was a
| | system performance issue. I just thought it was a new way
| | of doing things that people started trying around then.

| The thing with trying something before it is viable is that when you
| find out that it is not viable you are not then going to waste time
| promoting the idea.

I don't know how much more viable it was then. I remember writing my
own API in, I think, 2004 that worked something like this:

var a = Finder.byTag(document, "div"),
    b = Finder.filterByClass(a, "navigation"),
    c = Finder.byTag(b, "a"),
    d = Finder.byTagAndClass(document, "li", "special"),
    e = Finder.byTag(d, "a"),
    f = Finder.subtract(c, e);

It was plenty fast enough for my own use, but it was rather verbose to
use. I wish I had thought of how much cleaner this API would have
been:

var f = selector("div.navigation a:not(li.special a)");

Perhaps one general-purpose enough to handle all the possible CSS
selectors would not have been viable then, but I think for what I was
using it, the tools were already in place.

| Recall, though, that in the early days of jQuery a great deal of work
| went into making its selector engine faster. That would not have been
| necessary if it had not been on the very edge of being viable at the
| time.

I remember jQuery doing that once the SlickSpeed tests were released.
Did it happen earlier too?

| Why not? Given a selector and a document all you have to do is verify
| that the correct number of nodes were created and that they all are the
| expected nodes.

Okay, perhaps "would not be easily testable". Maybe this sounds
simpler to you than it does to me, but especially if there is no one
fixed test document, this sounds to me to be much the same as writing
a general-purpose selector engine.

| Automatically testing a system that relies on human interaction is
| inherently problematic.


| But why find the size of collections of elements? That is not a task
| that is common in browser scripting tasks. Even if you need to iterate
| over some collection of elements with something like a - for - loop you
| don't care how large that collection is, only that you can read whatever
| size it is in order to constrain the loop.

Absolutely it would be best if the test infrastructure independently
verified the results. I'm still not convinced that it would be an
easy task without either writing a general-purpose selector engine, or
restricting the test documents to a fairly simple set.

| It forces the 'pure DOM' code to do things that are not necessary for
| real-world tasks, thus constraining the potential implementations of the
| tests to code that is needlessly inefficient in how it addresses the
| tasks. Thus the 'libraries' never get compared against what real
| DOM scripting is capable of, in which case why bother with making the
| comparison at all?

Are you saying that it unnecessarily restricts the set of libraries
that can be tested or that the time spent in the selectors used to
feed back the "results" to the test infrastructure would significantly
skew the timing?

| In making comparisons between the libraries, at doing selector engine
| based tasks (that is, forcing everyone else to play the game jQuery's
| way) they may have some value. But there is no real comparison against a
| 'baseline' unless the baseline is free to do whatever needs doing by any
| means available and where the tasks being performed are realistically
| related to the sorts of things that actually need doing, as opposed to
| being tied up with arbitrary element counting.

So if the infrastructure was expanded to somehow verify the results
rather than ask for a count back, would this solve the majority of the
problems you see?

| So why is the element retrieval for the 'pure DOM' code done with a
| simplified selector engine that receives CSS selector strings as its
| argument?

I would assume that it's because the implementor found it easiest to
do it this way. Note that he's commented out the QSA code, but it was
probably an artifact of his testing with QSA, in which case it's
easier to have a function that responds to the same input as the
native QSA. Surely he could have written something like this
(untested, and changed only minimally) instead:

getSimple: document.createElement("p").querySelectorAll && false ?
    function(tag, className) {
        return this.querySelectorAll((tag || "*") +
            (className ? "." + className : ""));
    } :
    function(tag, className) {
        for (var
            result = [],
            list = this.getElementsByTagName(tag || "*"),
            length = list.length,
            i = 0,
            j = 0,
            node;
            i < length; ++i
        ) {
            node = list[i];
            if (className &&
                    node.className &&
                    node.className.indexOf(className) > -1) {
                result[j++] = node;
            }
        }
        return result;
    }

then used code like this:

return utility.getSimple.call(body, "ul", "fromcode").length;

instead of this:

return utility.getSimple.call(body, "ul.fromcode").length;

Because that's all this trivial selector engine does, "tag.class".


When the "table" function is defined to return "the length of
the query 'tr td'," we can interpret that as counting the results
of running the selector "tr td" in the context of the document
if we have a selector engine, but as "the number of distinct TD
elements in the document which descend from TR
elements"if not.

| We can also observe that in formally valid HTML TD elements are required
| to descend from TR elements and so that the set we are after is actually
| nothing more than all the TD elements in the document, and so wonder why
| the code used in the 'pure DOM' is:-
|
| | tr = body.getElementsByTagName("tr");
| | i = tr.length;
| | for(var total = 0; i;)
| |     total += tr[--i].getElementsByTagName("td").length
| | ;
| | return total;
|
| (code that will produce a seriously faulty result if there were nested
| tables in the document as some TD would end up being counted twice.)
|
| - instead of just:-
|
| return body.getElementsByTagName("td").length;

Although we have a test document at hand, and the BODY would be part
of a formally valid document if properly paired with a valid HEAD, I
don't think we would want to assume that this is the only document to
be tested. Or should our test infrastructure require a formally valid
document? I've worked in environments where parts of the document are
out of my control and not valid; I'd like my tools to be able to run
in such an environment.

| - or better yet, counting the number of TDs in the document before
| adding another 80 (so when the document is smaller and so faster to
| search) and then returning that number plus 80 for the number of TDs
| added gets the job done. I.E.:-
|
| ...
| var total = body.getElementsByTagName("td").length;
| ... //loop in which elements are added
| return total + 80;

| And then, when you start down that path, you know the document and so
| you know it started with 168 TDs and so adding 80 results in 248, so
| just return that number instead. It is completely reasonable for DOM
| scripts to be written for the context in which they are used, and so for
| them to employ information about that context which is gathered at the
| point of writing the code.

This would end up under cheating in my book.

| This comes down to a question of verification; is this code here to
| verify the document structure after the modifications, or to announce
| how many TDs there are in the DOM? If it is for verification then that
| should not be being done inside the test function, and it should not be
| being done differently for each 'library', because where it is done
| impacts on the performance that is supposed to be the subject of the
| tests, and how it is done impacts on its reliability.

While I agree in the abstract, I'm not willing to write that sort of
verification, unless it was agreed to restrict the framework to a
single, well-known test document. As I've argued earlier, I think
that such an agreement could lead to serious cheating.

| Finding elements is an important aspect of DOM scripting, but how often
| do you actually care about how many you have found (at least beyond the
| question of were any found at all)?

The counting done here is just a poor-man's attempt at verification.

| [ ... ] For the TD example above, all the
| verification code has to do is get a count of the number of TDs in the
| DOM before it is modified, run the test, and then count the number
| afterward in order to verify that 80 were added. Even that is more than
| the current 'verification' attempts. To mirror the current set-up all
| you would have to do is have some external code count some collection of
| elements from the document's DOM after the test has been timed. A simple
| selector engine (which is passed the document to search as an argument)
| would be fine for that, each DOM would be subject to the same test code
| and its performance would not matter as it would not be part of the
| operation being timed.

But this doesn't verify that each of the newly added TD's has content
"first" or that these new ones were added at the beginning of the TR,
both requirements listed in the spec.
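
An external checker would need something along these lines for the
table test (assuming the spec means a cell with text "first" at the
front of every row; the function name is mine):

// Run outside the timed region, against the already-modified document.
function verifyTable(doc, tdCountBefore) {
    var tds = doc.getElementsByTagName('td');
    if (tds.length != tdCountBefore + 80) { return false; }
    var rows = doc.getElementsByTagName('tr');
    for (var i = 0; i < rows.length; i++) {
        var cell = rows[i].cells[0];
        var text = cell && (cell.textContent || cell.innerText);
        if (text != 'first') { return false; }
    }
    return true;
}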

| Why does how long the test takes to run matter? Is this a short
| attention span thing; worried that people will get bored waiting? That
| isn't a good enough reason to compromise a test system.

Sorry, I misspoke. It's not the time to actually run the test that
I'm worried about, but the time to do the analysis of the document in
order to write the code to verify the results.
[ ... ]

| | | | Make all the libraries report their results, and note
| | | | if there is any disagreement.
| | |
| | | But reporting results is not part of any genuinely
| | | representative task, and so it should not be timed along
| | | with any given task. The task itself should be timed in
| | | isolation, and any verification employed separately. [ ... ]
| |
| | I think this critique is valid only if you assume that the
| | infrastructure is designed only to test DOM Manipulation. I
| | don't buy that assumption.
|
| The infrastructure should be designed only to test DOM manipulation.

I don't see why. I believe this is the crux of our disagreement.
There are many tasks for which I use Javascript in a browser:
selecting and manipulating elements, performing calculations,
verifying form data, making server requests, loading documents into
frames, keeping track of timers. Why should the test framework test
only DOM manipulation?

[ ... ]

| | In another thread [1], I discuss an updated version of
| | slickspeed, which counts repeated tests over a 250ms span
| | to more accurately time the selectors.

| Way too short. If a browser's clock is +/-56 milliseconds that is more
| than 20% of your total timing. Even if it is the observably common +/-16
| milliseconds then that is 5%. I would want to see this sort of testing
| loop pushed up to over 2 seconds.

Perhaps. It's not available as a parameter to set, but the 250
milliseconds is in only one location in the script. In my testing,
there were inconsistent results if I tried below 100 ms. But by 150,
it was quite consistent. I went up to 250 just to add some margin of
safety. When I've tried with times as high as 10 seconds, I have not
had substantially different results in any browser I've tested (a
relatively limited set, mind you, not going back before IE6, and only
quite recent versions of most of the other modern popular browsers.)
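
The heart of the change is nothing more than a duration-based loop,
along these lines (schematic; the names are mine, not slickspeed's):

// Repeat the operation until minDuration ms have elapsed; a longer
// window dilutes the browser clock's +/-16 ms granularity.
function timeOperation(fn, minDuration) {
    var count = 0;
    var start = new Date().getTime();
    var elapsed = 0;
    while (elapsed < minDuration) {
        fn();
        count++;
        elapsed = new Date().getTime() - start;
    }
    return elapsed / count; // average milliseconds per call
}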


| That is not a hugely realistic test in itself. What exactly would anyone
| do with an array of element IDs? If you were going to use them to look
| up the elements in the DOM why not collect the array of elements and
| save yourself the trouble later?

Actually, I use ids a fair deal to relate different parts of the DOM
together through events. Granted I don't often use them in an array,
but it's easy enough to imagine a realistic case for it:

// myArray contains ["section1", "section2", "section3"]
for (var i = 0, len = myArray.length; i < len; i++) {
    var elt = API.getById(myArray[i]),
        links = API.getByClass("linkTo-" + myArray[i]);
    for (var j = 0, len2 = links.length; j < len2; j++) {
        // Capture elt per iteration so each handler shows its own tab.
        API.register("click", links[j], (function(el) {
            return function(evt) {
                API.showTab(el);
                return false;
            };
        })(elt));
    }
    API.createTab(elt);
}

where API.getById, API.getByClass, API.register, API.createTab, and
API.showTab are defined as you might expect, and links that I want to
open up a new tab have the class "linkTo-" followed by the id of the
element I want in the tab.

| Judging whether that is a realistic variance to impose on the document
| would depend on why you needed this information in the first place.

It's a test infrastructure. If we try to tie it too closely to
particular real-world examples, I'd be afraid of limiting its
flexibility. If we can determine that there really are no real-world
uses of something under test, then we should remove that test. But if
there is at least reason to imagine that the technique could be
usable, then there is no reason to discard it.

| It is realistic to propose that in real-world web pages a server side
| script may be generating something like a tree structure made up of
| nested ULs and that some of its nodes would have IDs where others would
| not. But now, given server side scripting, we have the possibility of
| the server knowing the list of IDs and directly writing it into the
| document somewhere so that it did not need looking up from the DOM with
| client-side scripts, and if the reason for collecting the IDs was to
| send them back to the server we might also wonder whether the server
| could not keep the information in its session and never necessitate
| client-side script doing anything.

Of course we could. But I often do things client-side to offload some
of the processing and storage that would otherwise have to be done
server-side.

| | [ ... ] I definitely wouldn't try to build entirely random
| | documents, only documents for which the results of the tests
| | should be meaningful. The reason I said I probably wouldn't
| | do this is that, while it is by no means impossible, it is also
| | a far from trivial exercise.

| There is certainly much room for improvement in these testing frameworks
| before moving to that point.

Is this something you would be willing to help implement? Your
critique here is very valuable, but some specific code suggestions
would be even more helpful.

| Not exclusively if you want to compare them with a 'pure DOM' baseline.
|
| So the reason for having a 'pure DOM' baseline is to be able to compare
| their performance/code with what could be achieved without the overheads
| imposed by the need to be general.

Yes, and ideally also to have an implementation so transparent that
there is no doubt that its results are correct. I don't think this
implementation reaches that standard, but that should be another goal.

| | [ ... ] I see a fair bit of what could
| | reasonably be considered optimising for the test, and
| | I only really looked at jQuery's, YUI's, and My Library's
| | test code. I wouldn't be surprised to find more in
| | the others.

| But that is in the implementation for the test functions, not the
| libraries themselves. I don't see any reason why the test functions
| cannot be written to exploit the strengths of the individual libraries.

Well, some of the issues were with caching outside the test loop.
This would clearly be mitigated if the framework ran the loops instead
of the test code, but those clearly optimize in a manner counter to
the spirit of the tests. Similarly, there are tests that don't
attach event listeners to the particular items in question but just to
a single parent node. This definitely violates the guidelines.
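
That is, rather than one listener per element, those tests do something
like this (schematic; the id is made up, and the IE attachEvent branch
is omitted for brevity):

document.getElementById('list').addEventListener('click', function(evt) {
    var target = evt.target;
    // Walk up from the click target to the delegated element, if any.
    while (target && target.nodeName.toLowerCase() != 'a') {
        target = target.parentNode;
    }
    if (target) {
        // Handle the click as though the listener were on the link itself.
    }
}, false);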

[ ... ]

| The biggest point for improvement would be in the specification for the
| tasks. They should be more realistic, more specific (and more accurate),
| and probably agreed by all interested parties (as realistic and
| implementable) and then given to the representatives of each
| library/version to implement as best they can. That way nobody can be
| cheated and the results would be the best representation possible of
| what can be achieved by each library.

And I think there's another factor which may be hard to integrate.
The tests are not designed only to show what's achievable for
performance but to show how the library works when it's used as it's
designed to be used. If you have a wrapper for getElementById that
you expect users to use all the time, it's not right to have test code
which bypasses it to gain speed. It's hard to enforce such a rule,
but it still should be stated explicitly so that those who don't comply
can be called out for it.

| Granted, the re-working of the specs for the tasks would not be easy, as
| it would have to address questions such as whether saying an event
| listener should be attached to all of a particular set of elements left
| room for attaching to a common ancestor and delegating the event
| handling. This is one of the reasons that specifying realistic tasks
| should be the approach taken, as for a realistic task the question is
| whether delegation is viable, and if it is, and a library can do it,
| there is no reason why it should not.

Again, is this something that you personally would be willing to help
code?

Thank you for your incisive posts on this subject.

-- Scott