jQuery vs. My Library


Lasse Reichstein Nielsen

Jorge said:
Ermm, *cough* *cough*.

I'm really taking a pot-shot at the people making "performance
benchmark suites" that claim to have lasting relevance and to test
something relevant to actual use.
If they are merely a bunch of micro-benchmarks, they are far too
easy to optimize for - without actually helping the performance
of real applications. Like this "dead variable/dead code" elimination,
which wouldn't do anything for a real program that only computes things
it actually needs.

Benchmarks should be built with a *goal*. A benchmark should measure
something relevant that you want to optimize for; then you can use the
benchmark as a measure of the success of your optimizations.
And, preferably, it should compute a result (and check the result!),
and not do stupid things that no right-minded programmer would
do (like running entirely in top-level code or using global variables
where local ones would suffice). That just means that you end up
optimizing for stupid code instead of good code.
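
For instance, a sketch of such a self-checking benchmark (the workload
and the names here are illustrative, not from any real suite):

// A sketch of a goal-style benchmark: runs inside a function,
// uses only local variables, computes a result, and checks it.
function benchSum(iterations) {
  var sum = 0;
  var start = +new Date();
  for (var i = 1; i <= iterations; i++) {
    sum += i;
  }
  var elapsed = +new Date() - start;
  // Checking the result keeps the loop from being dead code.
  var expected = iterations * (iterations + 1) / 2;
  if (sum !== expected) {
    throw new Error("Wrong result: " + sum);
  }
  return elapsed + " ms";
}
benchSum(1e6);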


Another thing to make is a "speed test". It just measures how the
current browsers are doing something *right now*. That's also a
perfectly fine thing to do, for doing comparisons. It's not a problem
if it's just a micro-benchmark, because if it is hit by some optimization
that skews the result, you can just rewrite it until it isn't, and
measure what you need.
It's just normally not suitable for being promoted as a benchmark -
something people should measure themselves against. It's likely to not
stand the test of time.


/L
 

Jorge

Lasse Reichstein Nielsen said:
Benchmarks should be built with a *goal*. [...] It should compute
a result (and check the result!), and not do stupid things that no
right-minded programmer would do. [...]

Of course. When you said "stupid micro-benchmarks" this one came to my
mind:
http://groups.google.com/group/comp.lang.javascript/msg/84fd9cbba33b9edd
 

Lasse Reichstein Nielsen

Jorge said:
Of course. When you said "stupid micro-benchmarks" this one came to my
mind:
http://groups.google.com/group/comp.lang.javascript/msg/84fd9cbba33b9edd

Well ... it isn't doing very well at testing Javascript performance :)

Running the JS code in my eval-box
(http://www.infimum.dk/HTML/javascript/jstest6.html) gives a result
of 0.8 MHz (Chrome 5.0.342.2) and 1.5 MHz (Opera 10.50) [1].
If I wrap the code in a function and call it ("function foo(){...} foo();"),
and make sure that n, v and t are local variables (notice the semicolon
after 20e6 isn't a comma), I get a result of 55.7 MHz and 81 MHz
respectively.

Yes, it's too short, but the bigger problem is that it runs as
top-level code and uses global variables. In a browser, that can mean
an XSS security check for every variable access (at worst), and it
disallows a lot of optimizations because the variables are global and
therefore can be changed from pretty much anywhere at any time.
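
A sketch of that rewrite, using the loop from the benchmark under
discussion:

function foo() {
  // n, v and t are now locals, declared with var and separated
  // by commas, so the engine can treat them as plain stack slots.
  var n = 20e6, v = 0, t = +new Date();
  while (n) {
    v += n/2/n;
    n--;
  }
  return (+new Date() - t) + " ms (v = " + v + ")";
}
foo();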

/L
[1] Running code in the scope of an eval is slooooow!
 

Antony Scriven

If it's a local variable (and you *really* shouldn't use
global variables in a loop, or write benchmark code that
runs at top-level), and it's not read again afterwards,
and it's possible to see that the loop always terminates,
then it's a safe optimization to remove the entire loop.

I.e.
 function test() {
   var x = 42;
   // x is written but never read afterwards, and the loop
   // provably terminates, so the whole loop is dead code
   for (var i = 0; i < 1000000; i++) { x = x * 2; }
 }

This entire function body can safely be optimized away.
Whether a JavaScript engine does the necessary analysis
to determine that is another question, but it's
a possible optimization.

Ah yes, thanks, I see Michael's point now. I was thinking in
more general terms when I wrote that. I imagine the analysis
is much harder to do when global variables or global objects
are involved.
Quite a lot of stupid micro-benchmarks can be entirely
optimized away like this.

Indeed. Those sorts of benchmarks are entirely
unrepresentative of any production code I've seen, and are
a small part of the total cost of developing software. --Antony
 

Jorge

Jorge said:
Of course. When you said "stupid micro-benmarks" this one came to my
mind:
http://groups.google.com/group/comp.lang.javascript/msg/84fd9cbba33b9edd

Lasse Reichstein Nielsen said:
Well ... it isn't doing very well at testing Javascript performance :)

Running the JS code in my eval-box
(http://www.infimum.dk/HTML/javascript/jstest6.html) gives a result
of 0.8 MHz (Chrome 5.0.342.2) and 1.5 MHz (Opera 10.50) [1].
If I wrap the code in a function and call it ("function foo(){...} foo();"),
and make sure that n, v and t are local variables (notice the semicolon
after 20e6 isn't a comma), I get a result of 55.7 MHz and 81 MHz
respectively.

83.7 MHz... that's what FF3.6 gives on my Mac!! Bully. So at the very
least, we know now that JS can + and / and loop at nearly 2/3rds the
"speed of C" (in a vacuum :). That alone is good news, it seems to
me.

Any good idea about what piece of code (anything a bit more complex) I
should test? I don't know, e.g., prime numbers, to see what it gives?
Lasse Reichstein Nielsen said:
Yes, it's too short, but the bigger problem is that it runs as
top-level code and uses global variables. In a browser, that can mean
an XSS security check for every variable access (at worst) and it
disallows a lot of optimizations

Yep, you're absolutely right. In the v8 shell the speed jumped 2x,
and in the jsc shell 5.4x, to 49 MHz (from 9!), etc. You've had a
pretty good idea.
Lasse Reichstein Nielsen said:
because the variables are global and
therefore can be changed from pretty much anywhere at any time.

"changed from pretty much anywhere at any time" ? JS has a single
execution thread... ¿?

(function test () {
  var k = 20e6, n = k, v = 0, t = +new Date();

  while (n) {
    v += n/2/n;
    n--;
  }

  t = +new Date() - t;
  (this.alert || this.print)([
    (k/t/1e3).toFixed(1) + " MHz",
    t + " ms", v]);
})();

Thanks,
 

Michael Haufe (\TNO\)

Jorge said:
83.7 MHz... that's what FF3.6 gives on my Mac!! Bully. So at the very
least, we know now that JS can + and / and loop at nearly 2/3rds the
"speed of C" (in a vacuum :). That alone is good news, it seems to
me.

I don't believe FF 3.6 compiles a trace through eval atm. So I expect
that is the speed of the interpreter.
 

Dr J R Stockton

In comp.lang.javascript message
<8ad81ac9-617f-4aef-bbd7-105824b7b817@g28g2000yqh.googlegroups.com>,
Sat, 6 Mar 2010 16:08:24, Michael Haufe ("TNO") said:
This approach would assume that the implementation in question doesn't
optimize away the loop as a useless construct.

To be careful, one should also measure the time for an absent loop. If
that is the same as for an empty loop, then one must include in all test
loops something which will not be optimised.
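
A sketch of that calibration step (the names and the iteration count
are illustrative):

// Compare the time for no loop at all with an empty loop.
// If the two are indistinguishable, the empty loop has been
// optimised out, and the test loops need real work in them.
function timeIt(body) {
  var t = +new Date();
  body();
  return +new Date() - t;
}
var absent = timeIt(function () {});
var empty = timeIt(function () {
  for (var i = 0; i < 1e7; i++) {}
});
// empty <= absent (within timer resolution) means the empty
// loop was removed and is useless as a baseline.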

To select something which cannot be optimised, one must determine the
smartness of the optimiser.

A counting loop must use a control variable, overt or concealed. In a
language such as Pascal, that could be declared in a procedure
containing the timing tests, and an optimiser could easily see when an
empty loop could be optimised out.

But in JavaScript a loop controlled by J might be followed, after
intervening definitely-non-J statements, by something like
alert(window[prompt("What")]) // enter 'J' or not-'J'
and at loop time the system cannot possibly tell whether the loop-exit
value of J will be needed later.

So at least the optimiser will have to determine the exit value of J, or
to check for such statements later in the execution.

There's a good use (in theory) for 'eval' - its presence late in
code can set a lower bound to the optimiser's level of caution.


So, to ensure that a loop cannot be optimised away, count with J
normally but include in the loop one step of a Lehmer/Park–Miller PRNG.
That step is reasonably fast; and only a really clever and perverse
optimiser would be able to recognise the situation and predict the
result of J steps in order to remove a loop in safety.
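
A sketch of such a loop (the constants are the standard Park-Miller
multiplier 16807 and modulus 2^31 - 1; the rest is illustrative):

function timedLoop(J) {
  // One Lehmer/Park-Miller step per iteration:
  // x = 16807 * x mod (2^31 - 1). Since 16807 * x < 2^53,
  // the arithmetic stays exact in IEEE doubles.
  var x = 1;
  var t = +new Date();
  while (J--) {
    x = (16807 * x) % 2147483647;
  }
  t = +new Date() - t;
  return t + " ms (check: " + x + ")"; // report x so the work is observable
}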

Pause to update Demo 6 in the master copy of
<URL:http://www.merlyn.demon.co.uk/js-quick.htm>.

If I am not mistaken, none of the browsers on this PC optimises out a
simple empty loop of the form Q_ = K_ ; while (Q_--) { } .
 

Scott Sauyet

Okay, it might look that way to you.  To me it is a very concise and
intuitive way to say "select all the anchor elements that are inside
divs with a class of navigation but are not inside list items with a
class of special, and by the way, return them in document order,
please."  That was the requirement.
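
For comparison, roughly what meeting that requirement by hand might
look like (a sketch; "select" is the selector-engine call above,
everything else is illustrative):

// With a selector engine:
//   var links = select("div.navigation a:not(li.special a)");
// A hand-rolled equivalent (sketch only):
function navLinks() {
  var result = [];
  var anchors = document.getElementsByTagName("a"); // document order
  for (var i = 0; i < anchors.length; i++) {
    var inNav = false, inSpecial = false;
    for (var el = anchors[i].parentNode; el && el.tagName; el = el.parentNode) {
      if (el.tagName === "DIV" && /(^|\s)navigation(\s|$)/.test(el.className)) inNav = true;
      if (el.tagName === "LI" && /(^|\s)special(\s|$)/.test(el.className)) inSpecial = true;
    }
    if (inNav && !inSpecial) result.push(anchors[i]);
  }
  return result;
}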

The question remains, though, did the requirement make sense in the first
place?  Was it really not possible to solve this problem differently, for
example with event bubbling, with using the CSS cascade, with using
standards-compliant event-handler attributes, or a combination of them?  
And then, is the approach that had been chosen to meet the requirement more
or at least equally reliable than the alternatives? [ ... ]

These, of course, are useful and important questions.

I'm bothered, though, that a number of people here seem to feel they
know the answers to them in the abstract without any knowledge of
specifics of various situations. There seems to be a consensus among
some of the regulars in c.l.j.s. that CSS selector engines are a bad
idea; I'm not sure of the basis for that consensus, but it seems to be
strong enough that people are willing to assume those who disagree
have misunderstood either their own requirements or have failed to
consider alternatives. There seems to be no room for the possibility
that people have the requirements right, understand the alternatives,
and choose to use selector engines.

I'm reminded of when I came aboard a Java project just after a
consultant had left. He was supposed to be an expert in object-oriented
development and design. He had radically reworked various
sections of code, trying to fix code smells he saw. But he did so
blindly, following some rote rules that might be fine in the abstract,
but didn't always apply. One of his "code smells" was that OO systems
should never need "switch" statements. He methodically went through
the code base, adding little helper classes (this was before "enum" in
Java) to replace the switch statements. In the end, he managed to
replace a handful of poorly conceived switch statements with better
designs. But he also replaced ten times as many switch statements
with code that was longer, less readable, less performant, and far
less maintainable than it started out. The trouble was that the
system had been well designed, and the choices to use switch
statements were made by people who usually did understand the
trade-offs involved.

Sure, there have been times when I've used selector engines when there
were other reasonable alternatives. But there are plenty of times
when event delegation or other alternatives were not practical or even
possible, times when -- with or without a selector engine -- I needed
to operate on a collection of elements in the document.

I'm wondering if people here regularly work in fairly ideal
environments where they're in charge of all the mark-up and all the
CSS as well as all the JS used. While I'm sometimes in that
situation, it's much more common for me to be trying to squeeze some
behavior into existing HTML/CSS, often interspersed with existing JS.
It's often dictated by the project whether I can use inline JS. Often
a JS library has been chosen, and I must use it. Often parts of my
page are generated by black box systems.

Is this sort of environment unusual here?

-- Scott
 

Jorge

Antony Scriven said:
(...) I imagine the analysis
is much harder to do when global variables or global objects
are involved. (...)

I wonder why you and Lasse keep saying this. ISTM that once the JS's
single thread of execution enters that while () {} nothing is going to
interrupt it, so it's safe to assume that nothing but the code inside
the "while" is going to touch them (the globals), isn't it so ?
 

Lasse Reichstein Nielsen

Jorge said:
I wonder why you and Lasse keep saying this. ISTM that once the JS's
single thread of execution enters that while () {} nothing is going to
interrupt it, so it's safe to assume that nothing but the code inside
the "while" is going to touch them (the globals), isn't it so ?

Most of the time, yes. For certain: No.

With global variables, any function call must be assumed to be able to
change the value of any global variable - unless you do cross-function
analysis. In Javascript, cross-function analysis is *hard* because
functions are first class values, so function variables can vary.

With ES5 getters and setters, any property access, including reading
and writing global variables, can call a function that can change
any global variable.
On top of that, any object that sneaks into the loop might be coerced
to a value by calling valueOf or toString on it - which can again change
anything in the world.
And any function called inside the loop, e.g., Math.floor, might
have been overwritten by a malicious user. You really can't trust
anything in Javascript except operators working on primitive values.
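
Two sketches of how that can happen (an ES5 getter on a global, and
valueOf coercion; all names are illustrative):

var counter = 0;

// 1. Reading a global can run arbitrary code via an ES5 getter.
Object.defineProperty(this, "limit", {
  get: function () {
    counter++;        // mutates another global on every read
    return 10;
  }
});
for (var i = 0; i < limit; i++) {
  // the engine cannot assume counter is unchanged here
}

// 2. Coercing an object runs arbitrary code via valueOf.
var sneaky = {
  valueOf: function () {
    counter = -1;     // can "change anything in the world"
    return 1;
  }
};
var sum = 0 + sneaky; // calls valueOf; sum is 1, counter is -1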

This gives many, many cases where a simple one-function analysis must
surrender, because it simply cannot guarantee anything about the usage
of a global variable.

On top of this, accessing global variables in browsers is likely to
incur extra overhead from XSS security checks.

In short: Don't use global variables in speed sensitive code.

It's still not as bad as using eval or with, which mean that
you can determine pretty much nothing statically about the scope -
or about the usage of any variable in scope of the eval.
/L
 

Antony Scriven

Jorge said:
> (...) I imagine the analysis [optimizing empty loops
> into a no-op] is much harder to do when global
> variables or global objects are involved. (...)
I wonder why you and Lasse keep saying this. ISTM that
once the JS's single thread of execution

Actually do you have a reference to the part in the standard
that guarantees that? I can't for the life of me find it.
I'm probably just being thick, sorry.

Jorge said:
enters that while () {} nothing is going to interrupt it,
so it's safe to assume that nothing but the code inside
the "while" is going to touch them (the globals), isn't
it so ?

Sorry, I was talking in more general terms. Consider

for(i=0; i<5; ++i){}

where 'i' is global. If 'i' is used at some point after the
loop, then I'd expect the increments to take place. Whether
'i' is used later is probably impossible to determine.

It may also be inadvisable to replace the loop with 'i=5;'
as an optimization. Certainly in Firefox you can do the
following.

this.watch('i', function () {
  // the handler's return value replaces every value assigned to i
  return 1000;
});
for (i = 0; i < 5; ++i) {}

Thus changing 'i' here has a side effect. And then there's
the case where the condition uses a global as well.

for(i=0; i<imax; ++i){}

I guess you could perform a run-time check on imax here.
But if you don't know whether or not 'i' is going to be
used later on it's probably pointless. --Antony
 

Jorge

Antony Scriven said:
> > (...) I imagine the analysis [optimizing empty
> > loops into a no-op] is much harder to do when
> > global variables or global objects are
> > involved. (...)
>
> I wonder why you and Lasse keep saying this. ISTM
> that once the JS's single thread of execution
Actually do you have a reference to the part in the
standard that guarantees that? I can't for the life of
me find it. I'm probably just being thick, sorry.

I can't find it either in the specs, but have you ever seen one that
isn't?

http://weblogs.mozillazine.org/roadmap/archives/2007/02/threads_suck.html
 

Jorge

Lasse Reichstein Nielsen said:
Most of the time, yes. For certain: No. [...]
In short: Don't use global variables in speed sensitive code. [...]

Ok ok ok. Thanks.
 

Antony Scriven

Jorge said:
I can't find it either in the specs, but have you ever seen one that
isn't?

No. I was just wondering if the standard specifically mentioned
the issue. --Antony
 

Garrett Smith

Scott said:
[...]
I'm bothered, though, that a number of people here seem to feel they
know the answers to them in the abstract without any knowledge of
specifics of various situations. There seems to be a consensus among
some of the regulars in c.l.j.s. that CSS selector engines are a bad
idea; I'm not sure of the basis for that consensus, but it seems to be
strong enough that people are willing to assume those who disagree
have misunderstood either their own requirements or have failed to
consider alternatives. There seems to be no room for the possibility
that people have the requirements right, understand the alternatives,
and choose to use selector engines.

That in and of itself is speaking in the abstract. I much prefer looking
at the situation at hand to make a decision on how to affect the code.

[...]
I'm wondering if people here regularly work in fairly ideal
environments where they're in charge of all the mark-up and all the
CSS as well as all the JS used. While I'm sometimes in that
situation, it's much more common for me to be trying to squeeze some
behavior into existing HTML/CSS, often interspersed with existing JS.
It's often dictated by the project whether I can use inline JS. Often
a JS library has been chosen, and I must use it. Often parts of my
page are generated by black box systems.

Is this sort of environment unusual here?

Most of the time the code is awful. I'd like to look at why this is. I
feel that it is useful to understand a problem because in so doing, the
alternatives to causing the problem usually become self-evident.

Part of the problem comes from low standards of personal
achievement. People just don't try hard. I see this every day at the
gym. I see it in doctors who are often arrogant, yet display a lack of
knowledge of nutrition (often evidenced by their own corpulence). I see
it in customer service representatives who do not read or listen, but
parrot the same lines they're taught. Most relevantly, I see low
standards for achievement on the web. Web developers are copying bad
ideas and practices and doing what they're told by managers who have no
business doing the telling.

The result of not trying hard is that the end product -- the web, in
this case -- suffers greatly. The web is a horrible mess. Most sites
nowadays don't require javascript, but are authored in such a way that
if the user has javascript turned off, then the site does not function.
Other sites display javascript errors or use strategies that are not
forwards compatible or are limited to a subset of browsers.

An even worse problem is the internal quality of sites, where the site
is "working" as far as the client is concerned, but is designed in such
a way that changes are difficult and can have unwanted effects.

Another factor that contributes to badly designed web applications is
project requirements being handed down from the top. The requirements
may pass from marketing, which generates business requirements, to
project managers, to hiring managers, who hire the front-end coder.
Hiring managers usually do not know how to hire qualified front-end
developers. As a result, they tend to end up making decisions based on
not-so-good criteria (e.g. "can you write a merge sort", or the trick
logic questions that are fun, but totally worthless for solving
web-related programming problems).

If the project's ideas and goals are evaluated earlier in the process,
with the front-end developer involved, and iterated on with the
front-end developers rather than handed over at the end of the line as
a final decision of what must be done, then bad design decisions can be
avoided in favor of solutions that solve the problem more simply or
effectively.

No matter what the project is, if it involves writing rich
functionality in the browser, then there is going to be a need to write
javascript functionality, and that functionality will be best organized
into abstractions. How to organize those abstractions requires
assessment of the situation by somebody with experience using the
technologies correctly and with experience organizing systems of
javascript. Knowing how to use a javascript library is not the same
skill as organizing abstractions.

To summarize: every company will have to write its own library.
The degree of sophistication of this library may vary, and the things
that it does will likely vary as well.

The process of adding new features atop existing functionality that
does not elegantly accommodate them results in code debt. Google Groups
is a fine example: many of its "improvements" show that new
functionality added atop a system that was not well designed in the
first place resulted in code that was a total mess.
(Discussions of this exist in the archives.)

I have authored such things as, for example, a "StubManager" which fired
"StubChangeEvents", a "TimeSynchronizer", a "RadioGroup", and a
"ConditionFilter". Each project has use-cases that are unlikely to be
reused in another project. Knowing when, why, and how to create
abstractions for such use cases is important.

In addition to writing its own library, every company will have to
actively maintain its code. A formalized testing process for maintenance
can go a long way toward making it easier to quickly make new changes
and additions to the code, while verifying not only "does it still
work" but also redefining what "work" means; and if there is something
the program is doing that it should not be doing anymore, then the code
performing it can be removed.

When the software does not do what is wanted of it, and when a
behavioral change is wanted, then it becomes necessary to change the
code. The change often requires modifying what already exists, and
that requires understanding everything the existing code should be
doing (and identifying code which is irrelevant or useless -- "dead code").

Layering functionality on top of legacy code can tend to increase
technical debt. It leads to more dead or useless code and general
confusion about what the code does, e.g. "what does this do", "I thought
it was working before", and other such inanities.

At a certain point of debt accrual, changes become so expensive that
important changes must be weighed against the cost of rewriting the
entire system from scratch.

I can recall specific instances where the code was gutted and removed
and the result was a system that was clearer and cleaner than would have
been possible if that had not been done.

One was where the code had included content in the form of:

<script>
if (isMacIE) {
  document.write("<img ..");
  // etc. for NN4, et al.
}
</script>

In that case, I did my best to understand exactly what the code should
be doing and ended up removing the script tag. The result was code that
could be clearly understood by almost anyone.

I can think of many more cases where that sort of approach was not used,
and for a period of time, I was very patient with others, trying not to
push my ideas of how things ought to be done. I recall specifically a
project where I took that mindset: the code was awful, I worked my
ass off, my coworkers were lazy and unprofessional, and the project
failed. I am now not so patient, and make no bones about pointing out
problems in the code. I am not interested in being part of a failing
project.

What I have been doing is creating a base library that will make it
easier to develop widgets and systems. You can still easily write bad
code with it, and if you don't know what you're doing, it is probably
going to be even easier to write bad code (by facilitating, empowering,
and augmenting mistakes on a whole new level). What it does
do is make it easier for someone who wants to write a system for an RIA
(or browser-based app, or whatever you call 'em).

So to answer your question, "Is this sort of environment unusual here?":
I would say that it is not unusual, but is instead a very common
symptom where the developer is asked to add features to a badly designed
system.
 

Jorge

Concurrency wasn't one of the design goals of JavaScript (or later
ECMAScript), and the language lacks any support for access
synchronization. It's possible that some implementations (ActionScript,
maybe, or some of the CommonJS server-side environments) do support
multiple threads, but I think they'd have to extend the language to make
a real multi-threaded system feasible.

AFAIK, all browser script engines have a single main thread, but
recently various implementations of background threads have surfaced
(web workers), which effectively makes JS development on these browsers
potentially multi-threaded. The main difference from most other
multi-threaded platforms is that one thread is privileged (i.e., has
full access to the DOM), while the other threads are intended for
secondary, background calculation tasks. They communicate via a message
passing system instead of sharing access to variables and properties,
which more or less bypasses the need for synchronization.

Yep. But - just nitpicking a little bit - you prolly shouldn't really
call a worker a thread, as it shares no context with its parent. It's a
completely separate process with a completely independent JS
context, with which you communicate only by passing JSON text
messages.
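
A minimal sketch of that message-passing model (the file name
"worker.js" is hypothetical):

// Main page: no variables or DOM objects are shared with the worker;
// only messages cross the boundary, and the data is copied.
var worker = new Worker("worker.js");
worker.onmessage = function (e) {
  alert("worker said: " + e.data);
};
worker.postMessage("start");

// worker.js: a completely separate JS context, no DOM access.
onmessage = function (e) {
  postMessage(e.data + " finished"); // the payload is serialized, not shared
};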
 
