jQuery vs. My Library


Peter Michaux

Helbrax wrote:

Because they are stupid. ;)


How about "constructor?"

"My Library" is overloaded and ambiguous also. "I was using my library
today." vs "I was using My Library today." is a bit too subtle. Also
Googling "My Library" will likely be a bit of trouble.

Peter
 

David Mark

"My Library" is overloaded and ambiguous also.

It's the name of the Web app that generates custom libraries. I think
it is appropriate for that. After you build it, it is _your_ library.

"I was using my library
today." vs "I was using My Library today." is a bit too subtle. Also
Googling "My Library" will likely be a bit of trouble.

"My Library javascript" and even "browser scripting library" result in
the top pick in every search engine I've tested, but the point is
taken.
 

David Mark

Such a community is really "growing"? I just see more and more jQuery
everywhere I look.


I wish more people understood this point. I've found bugs thanks to
testing in "weird" browsers.

They are great for that. :) It's not that I care that Opera 6 has no
documentElement property. I want the library to degrade gracefully in
any environment.
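
For what it's worth, the kind of guard being described is roughly this
(the function name is made up for illustration, not My Library's actual
API):

 // Hypothetical helper: return the root element only where the
 // environment actually provides one, so callers can degrade gracefully.
 function getRootElement(doc) {
    doc = doc || document;
    if (doc && doc.documentElement) {
      return doc.documentElement;
    }
    return null; // e.g. Opera 6, which lacks documentElement
 }

 var root = getRootElement();
 if (root) {
    // safe to proceed with DOM work that needs the root element
 }
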
And, of course, the whole [My Library] is modular to a fault

What is the fault?

Ah, figure of speech. Perhaps not appropriate here. I used that a
ways back to describe Dojo's
automatic downloading and concatenation of dozens of files on every
refresh (through sync XHR no less), with no option to leverage the
files individually (so why are they separate?) I don't think I ever
got a suitable answer on that one (among many other questions).

There are faults to the partial builds though. The use of typeof to
detect functions that may not be part of the build is clearly error-
prone (and I have been slowly weeding those out and replacing them
with API.* tests). The server-side script is fairly primitive as well;
it could easily take care of these issues on its end.
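
Roughly, the difference between the two styles of test looks like this
(a minimal sketch; "fadeIn" is an illustrative name, not the actual
My Library internals):

 var API = API || {};   // stand-in for the library's namespace object

 // Error-prone in a partial build: a bare typeof test against a name
 // that may simply not have been included (typos also go unnoticed).
 function fadeInIfAvailable(el) {
    if (typeof fadeIn == 'function') {
      fadeIn(el);
    }
 }

 // Preferred: test a property of the one API object the build always
 // creates, which only exposes what was actually built in.
 function fadeInViaApi(el) {
    if (API.fadeIn) {
      API.fadeIn(el);
    }
 }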
 

David Mark

David Mark wrote:



Test? Comment? Whatever. English is not my first language.

Yes, that was uncalled for. My apologies as I knew what you meant.
Minimize or ignore criticism aimed at you;

Ludicrous. Do you have examples or are you just making things up?
insult people, distort facts and words.
Examples?

You caused many decent people to run away from this newsgroup, too.

Oh dear. If grown adults cannot control their own PC's and eyes and
choose to flee a newsgroup rather than skip articles by authors they
don't like, I can hardly be named the culprit. If this were a
moderated group, these same "adults" would be crying to the moderator
to install a global filter for them. For God's sake, filter your own
mail.
And you are banned from some other newsgroups. And you in the past
have faked your name to support yourself and your library.
And so on. Seriously.





From now on, maybe it's no longer ridiculous. But it was!

No, it wasn't.
And if you are directing people to your homepage where they will find
that bullshit, then this is what you have done, no more and no less.

It was never bullshit. It was a test of pre-QSA libraries, rather
than apples vs. oranges. The whole jump to QSA is a ridiculous
blunder anyway. Like browser sniffing, it allows the authors to
delude themselves at the expense of their users.
It seems you have some supporters, after all. Not proud of it?

The only "supporters" I really care about are those who send me money,
who have been steadily growing in number since I started promoting
the library as something other than a curiosity. :)

Having trouble with a rusting Prototype or jQuery site? Do drop me a
line. ;)
To be fair: I began to understand something about javascript when I
first stumbled into this newsgroup.
And you too taught me something, amid your rants. The same way one
learns how to properly use a hammer after having smashed his finger, I
would add. :)

Oh, and I tried your library months ago, and I immediately found a
small, insignificant bug in the graphical effects test page.
Insignificant, but...

Well, I am all ears, but it has likely already been addressed as
testing all of the old browsers
turned up some holes in the feature testing. There was a silly
oversight in the enhanced changeImage function as well, which has been
fixed in the posted downloads, but I haven't re-built the dynamic
(builder) version yet.

And again, my pitch has never been that it is 9000+ error-free,
perfectly executed and thoroughly tested lines of JS. Not even close
at this point. My point is that it is light years ahead of the
others. The design is far more ambitious as well (to allow calling
apps to gracefully degrade, rather than run head-first into brick
walls). And the others have that nagging browser sniffing that has to
be updated constantly to keep up with just a handful of major browsers
(in their default configurations). It's night and day in that respect.
 

David Mark

Thanks for the support! I agree that Ajaxian has squandered whatever
integrity it may have had (to be fair, none). And their issue is that I
called their site a "crock of shit". I really think that is fair
comment.
You are 100% correct. There is nothing more relevant that the Ajaxian
could discuss.

I second that.
What is more relevant than a fundamental criticism of all the major JS
libraries and the counter-claim of a superior design?

Exactly. Web developers and the people they work for don't need to
squander their time on endless Beta tests of dubious browser scripts.
It's madness and has been going on for far too long. Most of these
things are still UA sniffing, which guarantees they have no legs.
They fall apart in anything more than a couple of years old as well.
That's just not cross-browser scripting.
The fact that DM has (to put it mildly) an abrasive on-line persona,

Only to those who ask for it. ;)
which even extends to criticism of *them*, should be taken by them as a
challenge.

Well, for example, Resig hides his head in the sane, refusing to even
read this group. Now that's a loser. And I don't think it is
abrasive to call a spade a spade. He had numerous chances to make his
case and flopped every time.
If his claims are so baseless and irrelevant, why don't they
sharpen their knives and assemble a 'dream team' to rebut his claims?

Who would be on this team? I can think of a few people here that
could easily poke holes in My Library (and I welcome that), but notice
that the library mavens prefer to get their science from blogs and
Ajaxian. Think of the authors of the "majors". Do you ever see them
in here, even to ask a question? ISTM that they live in their own
parallel universes where basic rules of logic do not apply (e.g. the
prevalence of the typeof xyz == 'array' gaffe, browser sniffing,
etc.) They call it the "real world", but I don't think that's an
appropriate name for their personal cocoons. ;)
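
For anyone unfamiliar with the gaffe in question: typeof never returns
'array', so such a test can never succeed. For example:

 var a = [1, 2, 3];

 // Always false: typeof reports 'object' for arrays, never 'array'.
 if (typeof a == 'array') {
    // unreachable
 }

 // One common alternative (not necessarily what any given library uses):
 var isArray = Object.prototype.toString.call(a) == '[object Array]';
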
They would probably say they don't want to give him any more publicity,

They might say all sorts of illogical things. They are great for
that.
but if they can't demolish what he's saying technically, then he, and
the issues he raises, deserve to be widely aired.

Well, there's no "if" there. They can't. Their positions are
indefensible. It would be like saying that lead-based paint was
really a good idea and all of this talk of abatement is just jealousy
talking. :)
As somebody with no dog in this fight, I, too, would love to read their
article on this and the followups... I plan to contact them and tell
them so.

From what I've heard, they are getting a lot of that. They've sort of
painted themselves into a corner though. Will be interesting to see
how long it takes them to back off their silly position. Let me know
as I don't read that rag at all. ;)
 

Scott Sauyet

Well, for example, Resig hides his head in the sane, refusing to even
read this group.

Well, that is a beautiful typo! I'll bet John Resig would be willing
to agree that avoiding this group is avoiding insanity. :)

-- Scott
 

David Mark

David said:
Garrett Smith wrote:
Matt Kruse wrote:
With so many globals, I would suggest giving them full names as well
as the single-letter identifiers. E===Element, etc
That would conflict with any code that uses:-
Element.prototype.myFunc = [...]
Yes.  It will likely end up as MyElement, MyForm, MyImage, MyDocument,
etc.
var myEl = MyElement('#test');
Or better myLib.getElement(...), as (structurally) suggested before?

+1 Please do use a namespace

Well, I did (API) for all but the aforementioned constructors. I
haven't decided what I want to do with them yet. They are optional,
of course.
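
A minimal sketch of the namespaced form being suggested (names are
placeholders; nothing here is the library's final API, and MyElement is
assumed to exist as discussed above):

 // Hypothetical: wrap the constructors in one namespace object rather
 // than exposing them as bare globals, so nothing like window.Element
 // gets shadowed.
 var myLib = {
    getElement: function(selector) {
      return MyElement(selector);
    }
 };

 // usage (equivalent to the example above):
 // var myEl = myLib.getElement('#test');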
 

David Mark

Oh yes, I always test with IE6.  I did forget to post those results,
because I didn't have it on the machine I used yesterday.

    934  1105  1771  3825  1113.

Obviously MooTools falls down a bit and Prototype even more.  The rest
were comparable.



I think it's worth testing older libraries in various environments.
What I objected to is the self-aggrandizing manner in which David
Mark promoted the speed of his library, upgrading his library in the
tests to the latest version but leaving the other libraries at two-
year-old versions.  You know he didn't expect anyone to notice.

That's complete bullshit. I hadn't done anything to that page in
years until you started focusing on it. I don't even consider it a
particularly compelling test, as running queries over and over is not
standard practice for an application. You misconstrued my comments
about speed as being strictly related to SlickSpeed.

And again, testing QSA vs. QSA is ridiculous. They are all _fast
enough_ with QSA. Of course, many browsers in use today cannot handle
QSA, so the "slow lanes" are more important comparisons.
When I post some results he responds by saying that I'm testing the
wrong thing.

You were. Testing QSA vs. a library is stupid. Others have mentioned
that too. ;)
Either the browsers are too recent or the computer is
too fast.  It's nonsense, of course.

I never said anything like that, except in reference to being "fast
enough" in brand new PC's, which is not exactly a badge of honor.
What about the millions of users who do not buy new PC's every six
months? There's lots of them out there.
 

David Mark

[...]
To repeat, the people that I know that use the "common" js libraries are
unhappy with all of them.

And for very good reason. For years, these efforts have promoted
themselves as "fixing Javascript" and that they "smooth out" cross-
browser quirks. In reality, it's all been a fraud (or delusion) on
their parts. My Library (and the CWR project before it) came about
after it was determined that these scripts were not solving anything,
but rather "punting" on everything (e.g. sniffing the UA string to
make it look like their designs were realized).

Then there is the outrageous "test-driven" development (as referenced
by one of Resig's posts a couple of years back). What it translates
to is a bunch of people who have no idea how to go about cross-browser
development, using unit tests to shape their designs. It's
programming by observation, not understanding.

They should solve the problems first (without consulting the baseless
UA string), then use unit tests to confirm their solutions are viable
in as many environments as possible. Using the "crystal ball"
approach is folly and results in patchworks that are never really done.
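
As a concrete (if simplified) contrast between the two approaches, with
the feature test standing in for "solving the problem first":

 // The "crystal ball": infer behavior from the UA string, then keep
 // patching as new browsers and versions appear.
 var isOldIE = /MSIE [67]/.test(navigator.userAgent);

 // A feature test: ask the environment directly whether the needed
 // capability exists; unit tests then confirm the solution holds up
 // in as many environments as possible.
 var canUseStandardEvents = !!document.addEventListener;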
 

David Mark

In other words, time has run you over.
Feel free to take a shovel and bury yourself somewhere in a backyard.

You've got the wrong end of the stick for sure. Anyone can make QSA
"fast". There's nothing to it as the browser does all of the
work. ;)

And, of course, the whole idea of dropping QSA on top of incomplete
and incompatible layers of XPath and DOM traversal, just because they
can (or perceive the need to "keep up" with the other half-ass
scripts) is stupid. It's almost beyond belief, except futility has
become the standard in this industry.
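
In outline, the layering being criticized looks something like the
sketch below (not any particular library's code); the point is that the
non-QSA branch is the one older browsers actually exercise:

 // 'legacySelect' stands in for a hand-rolled selector engine built on
 // XPath and/or DOM traversal -- the "slow lane".
 function legacySelect(selector, doc) {
    // trivially simplified for the sketch
    return doc.getElementsByTagName(selector);
 }

 function query(selector, doc) {
    doc = doc || document;
    if (doc.querySelectorAll) {
      // the "fast lane": the browser does all of the work
      return doc.querySelectorAll(selector);
    }
    return legacySelect(selector, doc);
 }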
 

David Mark

Testing is pointless if you don't have any criteria to establish what
the testing means. Speed is usually the last criterion to be
considered; more important ones are:

Right, which makes it strange for David to claim that I was testing
the wrong things.  How in the world could he *know* what tests are
meaningful for me?
The users of such libraries are visitors to web sites. Testing
performance on a developers machine on a LAN (or with client and
server on the same box) is completely the wrong environment.

Actually, I'm not doing much front-end development right now.  But
there's a good chance I'll be doing so soon, for the corporate
intranet at my job.  The project will have 50 - 100 users, most on IE8
or FF3.5, but some probably on Chrome.  I will try to ensure that it
will work in Opera and Safari as well.  I will probably be able to
assume that the users will have Windows XP or Windows 7, and my
machine is the type the company is using to replace old ones.  I can't
assume they will be as powerful as mine, but I also don't need to
worry about 500MHz, single-core processors.
You mean essential.

Yes, but there are limits to what's worth testing for any particular
user.  I'm certainly not expecting this to even look reasonable in
IE3.
Precisely, which is why results from a developer's machine mean very
little.

Unless... :)

For me, ancient processors and FF1 mean very little.




[ ... ]
The slickspeed tests are designed for one purpose only: to test the
speed of CSS selectors. If the "major libraries" fork into browser-
native QSA branches and don't use their CSS selector engines, then
what is being tested? The tests themselves don't even use a suitable
document, they use a document essentially picked at random.
If the tests were to have any real meaning, the test document should
be specifically designed to test several scenarios for each selector,
such as a group of elements close together, some widely separated in a
shallow DOM and others in a deep and complex DOM. It may be that a
particular library comes up trumps in one type of DOM but not in
another. There should also be edge cases of extremely complex
selectors that may never occur in reality, but test the ability of the
engine to correctly interpret the selector and get the right elements.
Speed may be a very low priority in such cases.

There's a lot to be said for that.  But there's also a lot to be said
for a process that weighs the speeds of the selectors depending upon
the likely common usage of each.  A test that weighs these equally has
some clear-cut issues:

    span.highlight
    #myDiv ul div.group ul li:nth-child(7n + 3)
No one's perfect. But subjective criteria like "is the architect a
nice guy" don't rate too highly in my selection criteria. I've worked
with a number of self-opinionated arseholes who were, nevertheless,
very good at their job. I much preferred working with them to the Mr.
Nice Guys who were barely competent but great to talk to over a
beer.  :)

Oh, I'd always prefer to work with someone competent but less
likable.  However, I would hesitate to commit to using his library in
any production environment until there are people helping support it
who seem willing to admit to their faults and are honestly interested
in helping users through their problems.
If it's a one-man show, then I
want that one man to be someone whose responses to requests for help,
to suggestions, and to critiques are helpful rather than abusive.

No need to speculate.

http://groups.google.com/group/my-library-general-discussion/

:)
 

David Mark

Defining "code quality" is a topic that is likely to result in flames.

I did create one unofficial document here:
http://jibbering.com/faq/notes/review/code-guidelines.html

There was much heated debate regarding the "don't modify objects you
don't own" guideline. That document may need to be taken down, actually.

The motivation for the document was bad code reviews. I wanted to
facilitate better code reviews.

Code quality matters. Bugs should be fixed before being pushed to
production, or even given to QA.

Code design and architecture matter, too. Javascript is an extremely
flexible and powerful language. It is easy to create tangled messes in
javascript.




David is not alone; he is just the most obstreperous. That's probably
too kind a term to use.

I too am building a library/framework. Features: IoC to create
factories, AOP event system, dom abstraction layer, and some widgets. It
is all organized into modules. There is no query selector because the
cost of using that just isn't worth it at this point. Native QSA might
be a good option in 5 years when support is more widespread, but not for
the time being.

There is a ton of work that needs to be done on it. It is not easy to
find good javascript developers who have time to donate on this stuff,
so I'm doing it all myself. Slowly.

If you would like to donate your time to provide criticism or feedback
on the code, I would certainly appreciate that very much. I won't call
you a buffoon for doing that, but I reserve my right to challenge the
criticism if I feel it is wrong.

And who was called a buffoon for providing (sensible) criticism
(rather than obvious attempts to muddy the waters with mis-quotes,
misconceived ideas about unit testing with tests designed for other
designs, bad logic, etc.)?
 

Garrett Smith

David said:
[...]
Then there is the outrageous "test-driven" development (as referenced
by one of Resig's posts a couple of years back). What it translates
to is a bunch of people who have no idea how to go about cross-browser
development, using unit tests to shape their designs. It's
programming by observation, not understanding.
You know, you don't have to embarrass yourself like that.
 

Scott Sauyet

That's complete bullshit.  

I may be wrong about your motives or your expectations. But I am not
trying to convince anyone of something I don't believe to be true.
I hadn't done anything to that page in
years until you started focusing on it.  I don't even consider it a
particularly compelling test as running queries over and over is not
standard practice for an application.  You misconstrued that my
comments about speed were strictly related to SlickSpeed.

I didn't start the focus on it. Your original message in this
discussion brought up the speed and suggested that people take your
speed test. Matt Kruse asked for your results, but you simply told
him to try it himself. That's when I added my own tests. This is
from [1]:

| Scott Sauyet wrote:
| >> Matt Kruse wrote:
| >>> David Mark wrote:
| >>>> And take a guess which is faster. Rather, don't guess but try the the
| >>>> Speed Test.
| >>> Have you? Will you post the results?
| >> Huh? The Speed Test on my site. I've ran it in everything from IE8 to
| >> FF1 (and most in between). My Library kills its contemporaries (the
| >> further back you go, the larger the margin).
| > Are you referring to this?:
| > http://www.cinsoft.net/mylib-testspeed.html
|
| Yes.

At that time, the referenced page was a SlickSpeed test [2], although
it has since been changed to one that links to both SlickSpeed and
TaskSpeed tests.

Please remember what I've said in this discussion: My Library performs
very well, and is among the faster ones in many of the tests I've
seen. But you've significantly oversold its speed in your original
and subsequent posts. My Library is not the undisputed fastest
library for SlickSpeed. That's all I've said, but I've given
significant backup to my words by posting up-to-date tests others can
run in their own environments.

-- Scott
____________________
[1] http://groups.google.com/group/comp.lang.javascript/msg/12b79a16826a3a8d
[2] http://web.archive.org/web/20080614033803/http://www.cinsoft.net/mylib-testspeed.html
 

David Mark

David said:
On Jan 23, 9:35 pm, Andrew Poulos <[email protected]> wrote:

[...]
Then there is the outrageous "test-driven" development (as referenced
by one of Resig's posts a couple of years back). What it translates
to is a bunch of people who have no idea how to go about cross-browser
development, using unit tests to shape their designs. It's
programming by observation, not understanding.

You know, you don't have to embarrass yourself like that.

Like what? My point is well-documented (and certainly not mine
alone). Where have you been for the last couple of years? Are you
saying that jQuery (and the like) are not patchwork quilts because of
their reliance on empirical observations, rather than understanding?

See jQuery's attr method for a start. They _obviously_ never
understood the decade-old issue of MSHTML attributes, so they've been
patching it little by little for years, based on tickets filed by
users. They could have (and certainly should have) done the required
research and work to start with. Would have saved a lot of time and
trouble (not to mention embarrassment).

Then there is YUI converting null getAttribute results to empty
strings. Does that indicate understanding? Similarly, jQuery sets
"removed" attributes to empty strings because somebody noticed that
certain attributes were not being removed.
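
The distinction being papered over there is that a missing attribute
and an empty attribute are not the same thing:

 var div = document.createElement('div');

 // Missing attribute: getAttribute reports null in current browsers.
 var missing = div.getAttribute('title');   // null

 // Present-but-empty attribute: getAttribute reports ''.
 div.setAttribute('title', '');
 var empty = div.getAttribute('title');     // ''

 // Coercing null to '' makes the two cases indistinguishable.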

It's the same story over and over. Look at any module in any "major"
library and framework and you see the same sort of confused code
accompanied by the same sort of confused comments (often open-ended
questions rather than assertions).

There's no investigation, no solution and no learning, just patching
based on reports from the field. Software just doesn't work like
that. Seeing as cross-browser scripts constitute a very difficult
form of software development, it doesn't make a whole hell of a lot of
sense to use nonsensical methodology. The futile (and predictable)
results have been apparent for years (to everyone, it seems, except the
self-designated browser Ninjas).
 

Scott Sauyet

Based on the SlickSpeed tests John-David Dalton recently demonstrated,
I've created my own new version of SlickSpeed. It uses essentially
the same timer-loop method as he did, but then calculates and reports
the time per iteration in microseconds, totaling them in milliseconds
so that the final report looks more like the original one: smaller is
better again! :) I've chosen to run the test repeatedly until it's
taken over 250 ms, which seems to give a reasonably good balance
between accuracy and performance; the original 40 selectors take about
10 seconds per library tested.
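
In outline (a sketch of the method as described, not the actual test
page code):

 // Repeat the selector call until at least 250 ms have elapsed, then
 // report microseconds per call.
 function timeSelector(run) {
    var comps = 0, time, start = new Date();
    do {
      comps++;
      run();
    } while ((time = (new Date() - start)) < 250);
    return (time * 1000) / comps;   // microseconds per iteration
 }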

There is still one flaw that Richard Cornford pointed out [1]: the
loop also includes repeated calls to "new Date()". This will serve to
slow down all libraries and somewhat downplay the differences between
them. I'm considering ways to fix it, but for the moment it shouldn't
affect rankings, only the speed ratios between the libraries.

My first pass at this is at:

http://scott.sauyet.com/Javascript/Test/slickspeed/2010-02-12a/

There is a link in the footer to a zip file containing the PHP code
used.

My raw results are below, run against recent versions of the major
browsers on a powerful Windows XP machine. While this is still not a
home run for My Library, it's definitely getting to be a closer
contest on my developer's machine, at least with QSA. While I
understand and partially agree with John-David's objections to the
current My Library QSA code [2], I think it's good enough for this
sort of testing at the moment, and I doubt the fixes to it will change
much in the way of speed.

The big news is that in these tests, in all browsers except IE6 and
IE8 in compatibility mode, My Library (QSA) was the fastest for a
majority of the selectors.

But in none was it the overall fastest. jQuery was the fastest in
everything but IE6, where it came in third behind Dojo and MooTools.
In many browsers, if two of the selectors were optimized to match the
speed of the competition, My Library (QSA) would have been about the
fastest overall library. Those were the two selectors with
":contains": "h1[id]:contains(Selectors)" and
"p:contains(selectors)". In the various IE's there was a different
issue, "p:nth-child(even/odd)" were wrong in both versions of My
Library, and were significantly slower, too.

One other place where My Library might be able to do a little catch-up
is with some of the most commonly used selectors; jQuery is fastest,
and, in some environments, significantly faster than the competition,
at "tag" and "#id", which probably account for a large portion of the
selectors used in daily practice.

The other point to make is that we've pretty much hit the point where
all the libraries are fast enough for most DOM tasks needed, and
especially for querying. So although there will always be some
bragging rights over fastest speed, the raw speeds are likely to
become less and less relevant.

In any case, nice job, David!

(results below.)

-- Scott
____________________
[1] http://groups.google.com/group/comp.lang.javascript/msg/44cf1a85fe8075c0
[2] http://groups.google.com/group/comp.lang.javascript/browse_thread/thread/7ee2e996c3fe952b

=================================================================
Results on Windows XP SP2 with dual 3.0 GHz CPUs, 3.25 GB RAM
=================================================================
dj = Dojo 1.4.0
jq = jQuery 1.4.1
mt = MooTools 1.2.4
ml = My Library, downloaded 2010-02-11
mlqsa = My Library with QuerySelectorAll, downloaded 2010-02-11
pr = Prototype 1.6.1
yui = Yahoo User Interface 3.0


Chrome 3.0.195.27:
------------------
dj:9 jq:6 mt:38 ml:61 mlqsa:12 pr:9 yui:12

Firefox 3.5.7:
--------------
dj:24 jq:16 mt:45 ml:94 mlqsa:22 pr:32 yui:20

IE6:
----
dj:871 jq:1295 mt:1004 ml:4856 mlqsa:4815 pr:2665 yui:1731

IE8 compat:
-----------
dj:188 jq:294 mt:588 ml:2926 mlqsa:1691 pr:1952 yui:707

IE8:
 

Lasse Reichstein Nielsen

Scott Sauyet said:
There is still one flaw that Richard Cornford pointed out [1]: the
loop also includes repeated calls to "new Date()". This will serve to
slow down all libraries and somewhat downplay the differences between
them. I'm considering ways to fix it, but for the moment it shouldn't
affect rankings, only the speed ratios between the libraries.

One approach to minimizing the impact of new Date computations is to
ensure that you don't do them too often.
If the run() function called in the loop is fast, it's better to do a
bunch of them between creating a new date. I.e., instead of

 do {
    comps++;
    run();
 } while ((time = (new Date() - start)) < 250);

maybe do

 do {
    for (var i = 0; i < 1000; i++) {
      run();
    }
    comps += 1000;
 } while ((time = (new Date() - start)) < 250);

The exact number of rounds of course depends on how long the run
function takes. If a single call to run takes more than, say, 100 ms,
then 250 ms (about 3 rounds) isn't really enough to do statistics on anyway.

Looks nice, btw.
/L
 

Scott Sauyet

Lasse Reichstein Nielsen wrote:
One approach to minimizing the impact of new Date computations is to
ensure that you don't do them too often.
If the run() function called in the loop is fast, it's better to do a
bunch of them between creating a new date. I.e., instead of

 do {
    comps++;
    run();
 } while ((time = (new Date() - start)) < 250);

maybe do
 do {
    for (var i = 0; i < 1000; i++) {
      run();
    }
    comps += 1000;
 } while ((time = (new Date() - start)) < 250);

That's one of the approaches I'm considering. But there are two
concerns I have with it. First, if one of them *does* take 100 ms,
then running it 1000 times will really slow down the tests. As it is,
I get a reasonable average if it runs at least a dozen or so times,
which happens in the vast majority of tests; if it takes a
comparatively long time, then I'm still running it at least once, and if
it takes that long once, chances are good that an average of a number
of runs won't change it by an order of magnitude, probably much less.

My second concern is that this still adds overhead to the raw
iteration of the tests. I'm sure the loop is not as bad as the Date
creation, but it's still something to think about.

The other approach I'm thinking about times an equivalent loop
without the selector test and then subtracts that from the measured
time, something like this:

 do {
    comps++;
    run();
 } while ((time = (new Date() - start)) < 250);

 var i = 0, time2, start2 = new Date();
 do {
    i++;
    time2 = (new Date() - start2);
 } while (i < comps);

 time = time - time2;

This *seems* to make sense, but I'm afraid I'm missing something
fundamental. I'll probably at least try this.
Looks nice, btw.

Thanks.

-- Scott
 

Ivan S

Latest Opera (10.10) has a problem with Mylib without QSA and Dojo
(all tests returned error).
 
