JS framework

Diego Perini

Is there a public ticket in webkit/chrome bug tracker where this is
being discussed? I wasn't aware of implementations starting to work on
(or considering) `Element#match` addition.

No ticket that I know of, but talks about these techniques are taken into
consideration by the KHTML folks, so I suspect WebKit/Chrome are on
that line too; if I am not wrong, KHTML/WebKit have always shared a lot
of code. Here is one reading about it in KHTML:

http://vtokarev.wordpress.com/2008/11/27/css-optimizations-in-khtml/

I bet Hyatt and team will not skip that! They already have more than
one year of QSA experience; maybe that helps.

I really hope they do!


Diego Perini
 

Diego Perini

Diego said:
Diego Perini wrote:
[...]
I see where you are coming from with the API. It would be convenient to
have that supported by native code, but that is not the case.
Nice to hear you deem "match()" adequate for a browser native
implementation, it seems Safari/Chrome will be having that in future
releases.
Is there a public ticket in webkit/chrome bug tracker where this is
being discussed? I wasn't aware of implementations starting to work on
(or considering) `Element#match` addition.
No ticket that I know of, but talks about these techniques are taken into
consideration by the KHTML folks, so I suspect WebKit/Chrome are on
that line too; if I am not wrong, KHTML/WebKit have always shared a lot
of code. Here is one reading about it in KHTML:

Actually, from what I understand, that post talks about internal
optimizations in KHTML's CSS handling. I didn't see anything suggesting
introduction of public `match` method. Did I miss something?

Ouch... re-reading it shows I had incorrectly bookmarked something I
thought related to match and element, but it isn't really so.
On a side note, `match` is something that should probably be proposed to
(and standardized by) WHATWG first, and only then implemented by vendors
- to eliminate any inconsistencies.

As I outlined in the previous link I posted, it was already suggested
in the WebAPI working group by David Andersson, though for some reason
it never made it into a specification.


Diego Perini
 

Garrett Smith

Diego said:
Diego said:
Diego Perini wrote:
Diego Perini wrote:
kangax wrote:
Garrett Smith wrote:
RobG wrote:
Ivan S wrote:
What JS framework would you guys recommend to me?
Try http://www.domassistant.com/
No, don't. It seems to munge the worst of jQuery CSS-style selectors
and Prototype.js augmentation of host objects together in one script
that is also dependent on browser sniffing.
The authors' claim that it is the fastest script in the SlickSpeed
tests isn't substantiated - in Safari 4 it comes in 4th out of 7, even
jQuery is faster.
Knowing how to use document.getElementById, event bubbling (also
called "delegation") would offer much superior runtime performance.
You can't do much with `document.getElementById`. Sometimes you want to
select by class, attribute, descendant or a combination of those. I
agree that more complex selectors are rarely needed (well, except maybe
something like `nth-child("odd")`).
Event delegation is great, but then you might want some kind of `match`
method to determine whether a target element matches selector. `match`
is not necessary, of course, but testing an element manually adds noise
and verbosity.
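The kind of `match` check delegation needs is easy to sketch. The helper below is a hypothetical illustration (the names `matchesSimple`, `hasClass`, and `delegate` are mine, not from any library discussed here); it handles only a tiny "tag.class" subset of selectors, whereas a real engine such as NWMatcher, or the `Element.prototype.matches` method browsers later standardized, covers full CSS3:

```javascript
// Check whether an element has a given class name.
function hasClass(el, cls) {
  return (" " + el.className + " ").indexOf(" " + cls + " ") !== -1;
}

// Match against a deliberately tiny selector subset:
// "tag", ".class", or "tag.class" only.
function matchesSimple(el, selector) {
  var dot = selector.indexOf(".");
  var tag = dot === -1 ? selector : selector.slice(0, dot);
  var cls = dot === -1 ? "" : selector.slice(dot + 1);
  if (tag && el.tagName.toLowerCase() !== tag.toLowerCase()) return false;
  if (cls && !hasClass(el, cls)) return false;
  return true;
}

// Delegation: one listener on a container; walk up from the event
// target, firing the handler on the first ancestor matching selector.
function delegate(container, selector, handler) {
  return function (ev) {
    var target = ev.target;
    while (target && target !== container) {
      if (matchesSimple(target, selector)) {
        return handler.call(target, ev);
      }
      target = target.parentNode;
    }
  };
}
```

A single such listener on a container replaces per-element listeners, which is the verbosity reduction being discussed.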
A codebase that does not have a selector API should be smaller. In
comparison to a codebase that has a selectors API, the one that does
not should be downloaded faster, interpreted faster, and should have a
smaller memory footprint.
That much is... obvious :)
I don't think the last two really matter. Execution of an entire fully CSS3
compliant selector engine (such as NWMatcher - the best one of them I've
ever seen) doesn't take more than a few milliseconds. It's 2ms in
FF3.0.10. Probably not more than 20-30ms in relatively slow IE6/7. Would
you really care about those?
I would definitely care about 20ms.
As discussed in previous mails, I have not ever needed an API to check
the tagName and className; that is a very simple and straightforward
thing to do and does not require abstraction.
In my experience, I have not ever needed to check much more than that. I
can't really think of a problem that would be best solved by NWMatcher,
nor can I imagine where NWMatcher would provide a clearer and faster
abstraction.
Why add something that is not necessary?
FWICS, NWMatcher still uses function decompilation: http://github.com/dperini/nwmatcher/blob/ceaa7fdf733edc1a87777935ed05...
Garrett
--
The official comp.lang.javascript FAQ: http://jibbering.com/faq/
Garrett,
if I were sure nobody had overwritten native functions with
broken replacements I wouldn't have used function decompilation; I
would probably have replaced that hated line:
(/\{\s*\[native code[^\]]*\]\s*\}|^\[function\]$/).test(object[method]);
with a simpler:
typeof object[method] == 'function';
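For what it's worth, the hated line boils down to asking `Function.prototype.toString` whether the function still serializes as native code. A standalone sketch of that idea (the name `looksNative` is mine), which is exactly the implementation-dependent behavior being objected to: most engines render built-ins as `function max() { [native code] }`, but no specification guarantees that form:

```javascript
// Heuristic "is this still the native implementation?" check -- the same
// idea as the quoted regex. Implementation-dependent: engines are not
// required to serialize built-ins with the "[native code]" marker.
function looksNative(object, method) {
  var fn = object[method];
  if (typeof fn !== "function") return false;
  return (/\{\s*\[native code[^\]]*\]\s*\}|^\[function\]$/)
    .test(Function.prototype.toString.call(fn));
}
```

A user-defined replacement fails the test while an untouched built-in passes, which is the distinction `typeof` alone cannot make.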
You don't see the problem because you haven't faced it,
Maybe your way is not the only way. Could it be that there is another
way to approach the problem that is at least as good?
Correct you are. Maybe you have a better solution. Just show me so I
can compare.
but it is a
problem and a very frequent one when you have to ensure compatibility
with code written by others not interested in keeping a clean
environment.
Modifying host objects is known to be problematic.
Again, you are correct, but I have to live with it. I personally don't
do it, but I don't know which library will be chosen by my users, nor
can I force them to choose one or another.
Hiding that mistake does not make it go away. If the code is broken, fix
it. The code should not be broken in the first place, and certainly not
in the way that your workaround caters to.
You correctly use "should", but reality is what I work with. I will be
happy to include APE in my library chooser as soon as possible; still,
I would have to support all the libraries with that shameful habit.
Sure, when there are enough "perfect" libraries I could start to leave
out those that are not perfect.
APE is AFL, so it would probably not be licensable for most projects. Having
effectively no user base, I can remove things that should not be there.
Perhaps in the future I'll go with a less restrictive Mozilla-type
license.
Supporting potential problems in third party libraries is a *choice*.
Piggybacking off the popular libraries probably results in a larger user
base. The obvious downside is that additional code is required to hide
the inherently faulty design of those libraries. What is less obvious is
that the additional code requires "implementation dependent" function
decompilation, which is known to be faulty in at least opera mobile[1]
and is defined to return a FunctionDeclaration, which is not what IE,
Gecko, Safari, Chrome, Opera do when Function.prototype.toString is
called on an anonymous function.
General purpose frameworks are often chosen to "save time". Prototype,
Mootools, Cappuccino, Mojo all modify host objects. Interesting that
NWMatcher has extra code created for working around such problematic
design of the modification of host objects. Wouldn't it make more sense
to just not use those?
The extra code is very small and worth it currently; it comes down to
this: I only needed to know that get/hasAttribute were the native browser
implementations and not a third-party attempt to fix them; no third-party
extension of get/hasAttribute for IE that I know of has the
equivalent capabilities of IE regarding XML namespaced attributes.
If I understand correctly, the concern is that a third party library may
have attempted to fix IE's getAttribute/hasAttribute methods by defining
those methods on the element, and that if that had happened, it would
cause the script to not function properly.
Correct, in that case it was Prototype messing with those native
functions.
Sounds like a great reason not to use Prototype.
XML Namespaced attributes should be avoided. Why would you want to use that?
NWMatcher tries to support as much as possible of the W3C specifications
(the xml:lang attribute and similar) which are supported by the IE natives.
Why on earth do you need to match xml:lang? That would only be supported
by using "getAttribute('xml:lang')" where IE would not interpret a
namespace, but would instead use a nonstandard approach. The colon
character is not valid in attributes.

I don't have much use for it myself either, but that doesn't mean I have to
constrain my project to my own needs. I have bug tracking tools that I
try to follow, and where possible and allowed by specs I try to have it
considered. Is that a problem?

The "xml:lang" property is mentioned in all the CSS >= 2 specs I have
read and it hasn't changed in CSS Selectors Level 3; read 6.3.1 first
and then follow the links to 6.6.3 of the latest CSS Selectors Level 3
draft (March 2009):

http://www.w3.org/TR/css3-selectors/#attribute-selectors

Though you may still say that my implementation is failing in some
way. No problem with that; my objective is to learn and improve.

Not modifying host objects would be an improvement.

| d.isCaching

Mutation events. I remember in Gecko, DOMAttrModified would fire when a
textarea's |value| property was changed and that bug was justified on a
bugzilla ticket. I'm also concerned with the reliability of
"DOMNodeRemoved" when setting innerHTML or textContent.
I would like to do it OT if you have spare time and you really wish!

It is not "off topic". Quite the contrary: It is the essence of not
having a problem that draws the very point I am trying to make: YAGNI
[YAGNI].

The code I am going to show already uses delegation with some tricks
and possibly highly populated "trees" in smaller overlay windows; the
task is to replace "manual" delegation spread throughout the code with
a centralized event manager that could handle delegation using CSS
selectors. This is already handled very well, since hundreds of events
are handled by a few elements that act as event dispatchers (the
bubbling).

Link?

Writing English at this "level" is a bit difficult for me but I am
willing to try hard.

Yes, please do try and I will do my best to understand and where I do
not understand, I will try and make that clear.
It is not clear what problem you solve with 0 lines of code! No lines
of code means no problem, so be real.

Yes, that was my point; no problem to solve.
As an example I can write:

NW.Event.appendDelegate( "p > a", "click", handler_function ); // only first link in each paragraph

That constrains the position of the "activating" link in the paragraph.
That, as is, is solvable by checking tagName, parentNode, and
parentNode.getElementsByTagName("a") === target. However, that strategy
in and of itself is arbitrary and fragile. If the markup were to have a
case for including a non-activating link as the first link in a <p>, the
script would fail. It is rigid for the same reason. You can't change the
order. I've been advocating this whole time that it is best to use
semantic markup. The |class| attribute could be used here.

addCallback(baseNode, handlePanelActuatorClick);

- and then in "handlePanelActuatorClick", do the checking to see if the
target has class "panelActuator".

function checkLinks(ev) {
  var target = getTarget(ev);

  var isPanelActuator = hasClass(target, "panelActuator");
  if ( isTargetFirstLinkInParagraph ) {
    alert("winner");
  }
}

The side effects to that strategy are:
* callback does the checking.
- reduces function calls
- debugging is straightforward
- does not require an extra call to a Selector lookup function
* NWMatcher is not required
- less code overall
- no extra non-standard API to learn
* Encourages the authoring of semantic HTML
- makes automation testing easier (for the same reasons it makes
assertions in the callback easier).
- behavior is deliberately and consistently applied (irrelevant
structural changes won't cause problems)

Drawbacks:
* requires user-defined hasClass(el, klass) and getTarget(ev) methods.
and I add some needed action to all the links in the page with no need
to wait for an onload event, thus ensuring a cross-browser experience
and an early activated interface. You may question the lengthy
namespace or names; yes, the code is free, just use yours...

Can I not question the need for the code itself?
Sure, you can write the above snippet of code shorter and better, but
then you write it once and forget about reusing it.

I would not reuse the implementation. I /would/ reuse the
getNextSiblingElement or hasClass methods. I'd organize those methods
where they tend to get reused together, or where those methods have
common functionality, such as a some shared hidden (scope) variables.

Checking an element's "checked" or "tagName" property is trivial. It
would be pointless to create an abstraction for that.

OTOH, reading an element's checked *attribute* seems pointless. Why
would a program care? A CSS3 compliant selector API would be
required to have that feature, but is it needed by a program?
Now suppose those "links" were also changing, pulled in by some http
request; the event will still be there, no need to redeclare anything.

Yet another benefit to using bubbling. However, NWEvents is not needed
for that. Bubbling comes for free.

One-up to that is to reuse and cache an object decorator/wrapper that
was created lazily, on a bubbled event. That is possible when the
decorating object does not hold a reference to the element, but to an ID
(string).
Add this same technique to some more complex page with several FORMs
and validation as a requirement and you will probably see some benefit
in having all form events also bubble up the document (submit/reset/
focus/blur and more).

Submit does not bubble in IE, though. You could try and capture bubbled
"Enter" key and (submit) button clicks, but that also requires more
aggressive interrogation of the target (to make sure it is not readonly,
not disabled, etc).
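That interrogation can be sketched as a predicate over bubbled events (all names and the simplified event/element shapes here are mine, for illustration; a real cross-browser layer would also need attachEvent/addEventListener plumbing and form lookup):

```javascript
// Decide whether a bubbled click/keypress should be treated as a form
// submit trigger: submit buttons when clicked, Enter in text-like
// fields, and never on disabled or readonly targets.
function isSubmitTrigger(ev) {
  var el = ev.target;
  if (!el || el.disabled || el.readOnly) return false;
  if (ev.type === "click") {
    return (el.tagName === "INPUT" && el.type === "submit") ||
           (el.tagName === "BUTTON" && (el.type || "submit") === "submit");
  }
  if (ev.type === "keypress") {
    // Enter (keyCode 13) in a text-like field submits its form
    return ev.keyCode === 13 && el.tagName === "INPUT" &&
           /^(text|password|search)$/.test(el.type);
  }
  return false;
}
```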

The "but it should work" situations usually make me think about trying a
different approach.

I don't care about reset events, but I'm curious about handling the
bubbled submit. Would you like to post up some code, or a link to the
relevant place? Maybe another thread for that would be better, so that
discussion stays focused on that.

[Selectors API design discussion]
It was a very good idea for me and for others using it; obviously you
can teach me how to do that in 0 lines, I am all ears.

A solution to a problem can not be critiqued if there is no problem
provided.

Code that does not meet the requirements fails on the first criteria.

So, if an assessment is to be made of NWMatcher, doesn't it sound right
and proper to show NWMatcher being used to solve a problem? Given a
problem P, in context, a comparison of P solved with NWMatcher vs P
solved with something else.

It looks like NWMatcher is adapted for jQuery and Prototype, right? If
so, I wonder what those users' good idea(s) were.
It works; NWMatcher passes the only few test suites available to me
(Prototype and jQuery). If you wish one added, please suggest it.

I have some tests of my own, obviously.

Depending on the browser, a checkbox's "checked" attribute may be a
boolean, null, or a string value.

A CSS3 compliant selector API would have to take all that into account.

I can't see a good reason for wanting to read the checked *attribute*.
Why would a script care about that?

The checked *property* should be what is a concern. A textarea's "value"
property, or another property, such as IE's "unselectable", might be
things a program would be concerned about. How would you match those
using CSS3 selectors?
I know elements ID are unique. Thanks.

That wasn't my point. document.getElementById and getElementsByName are
broken in IE[1]. The reason I mentioned that it is a similar type of
thinking. The thinking is "X should work" and then trying to make it work.

A workaround for the IE bug is to replace document.getElementById with a
hand-rolled version. The hand rolled version checks to see if the
non-null element has an "id" property that is the same value. The
workaround is avoidable by not giving one element an ID that matches a
NAME of another (and expecting that nobody else will do that).
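The hand-rolled check described above can be sketched in a few lines (the name `getElementByIdStrict` and the `doc` parameter, standing in for `document`, are mine; a fuller workaround would go on to scan all elements for a real id match, which is omitted here):

```javascript
// Reject IE's bogus matches, where getElementById can return an element
// whose NAME (not id) equals the argument. If the returned element's id
// does not actually match, treat the lookup as "not found".
function getElementByIdStrict(doc, id) {
  var el = doc.getElementById(id);
  if (el && el.id !== id) {
    el = null; // IE matched by name; discard it
  }
  return el;
}
```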

I was just trying to illustrate a point of not trying to patch all
browser bugs. There are way too many of them and they can often be
avoided by just being aware of the problem and not triggering it.
The IDs are in many cases generated by modules in the page; we have no
control over uniqueness between modules' output, but we may solve the
problem by giving a different IFRAME to each module to ensure that.

I don't know what you are referring to. It sounds like you are
describing a mashup.
Well, to achieve that with CSS and HTML in a site, one should be
prepared to produce and maintain hundreds or thousands of different
manually edited static pages; we are miles away from that approach.
Wasn't scripting introduced to solve that too? Or are there some
restrictions that impose the "just effects" web? Isn't jQuery or
APE then enough for that?

Brendan Eich would be able to provide a better answer on why scripting
was introduced. I'm not even sure I know the correct answer. I only know
what is going on for the past 10 years.

I proposed a History page for ecmascript.org, sent as email to one of
the es-discuss maintainers. I'm not expecting a 911 response, but it
would be nice to see such page.
I mean there are other markets and other needs; is it so difficult to
realize that?

That sounds like something Martin Fowler calls "speculative generality".
Well, "algorithm" was indeed too big a word; I meant the mix of different
conditional comparisons (id, tag, class, prev/next, etc.) you would have
to do each time the structure of your DOM changes (excluding the 0 LOC
trick).

There are cases where order matters and cases where an element's
position in the source order is arbitrary, and can often be enforced by
using valid HTML (only <li> inside a list, for example). The markup can
give big hints at what is arbitrary and what is not. The author is
responsible for that. The class or ID usually is not arbitrary.

What I would do is write semantic markup, make it as simple and logical
and obvious as possible, and then code for that. Things that are
arbitrary being changed won't affect the script.
I agree that in MOST cases, surely the majority, this code overhead is
not necessary. What about the other part of the "MOST" cases?


NWMatcher will parse and compile the passed selector to a
corresponding resolver function in javascript. From what I understand
it's exactly what you do manually, as simple as that; the compiled
function is then invoked and saved for later use, and no successive
parsing is done for the same selector, to boost performance. I left
in a function to see the results of the process for demo purposes in
case you are curious about the outcome; use NW.Dom.compile(selector)
and print the resulting string to the console.
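The compile-and-cache idea is easy to picture with a toy version (entirely my own sketch: `compileSimple` handles only a "tag.class" subset, while NWMatcher's real compiler emits far more elaborate source; the caching pattern is the same):

```javascript
// Compiled resolvers, keyed by selector text, built once and reused.
var compiledCache = {};

function compileSimple(selector) {
  if (compiledCache[selector]) return compiledCache[selector];
  var dot = selector.indexOf("."),
      tag = dot === -1 ? selector : selector.slice(0, dot),
      cls = dot === -1 ? "" : selector.slice(dot + 1),
      source = "";
  // Generate plain JS source, inspectable like NW.Dom.compile output.
  if (tag) {
    source += "if (e.tagName.toLowerCase() !== " +
              JSON.stringify(tag.toLowerCase()) + ") return false;\n";
  }
  if (cls) {
    source += "if ((' ' + e.className + ' ').indexOf(' ' + " +
              JSON.stringify(cls) + " + ' ') === -1) return false;\n";
  }
  source += "return true;";
  compiledCache[selector] = new Function("e", source);
  return compiledCache[selector];
}
```

Repeated calls with the same selector skip parsing entirely and reuse the generated function object.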

Selector API is overkill. The only time I can see needing that is for a
StyleSheet-related application. I made a styleSheet editor about four
years ago and used a Selectors API to match all the nodes in the
document based on selector text found in the styleSheet.
Not all browsers have that API extension; only the very latest
browsers have it.

Nobody has previousSiblingElement; that is a user defined function (I
miscommunicated that). "previousElementSibling" is Doug Schepers' choice
of name for the property (as I previously stated below).
I have heard it said, when QSA first appeared in browsers more
than a year ago, you said yourself QSA was mistakenly designed. Until
this is fixed, things like NWMatcher will still be needed, and I am
speaking about the newest and most advanced implementors, Webkit/
Chrome; what about IE6/IE7? Maybe in a few years. NWMatcher will last
some more time for these reasons, be assured.

That sounds like something I might say, though I don't recall
specifically.
Well, I implemented that in javascript... why do you have such doubt
then?


I really look forward to seeing that happen too. I am not in a hurry!

Technology can be improved by developers by making it easier and
simpler, not by teaching difficult actions or hard-to-remember
procedures.

Can you explain a little more? What do you mean by "technology" and
"teaching difficult actions"?
Looking at the code, I think I see a problem in NW.Dom.getChildren:-
| getChildren =
| function(element) {
| // childNodes is slower to loop through because it contains
| // text nodes
| // empty text nodes could be removed at startup to compensate
| // this a bit
| return element[NATIVE_CHILDREN] || element.childNodes;
| },
The children property is inconsistent across the most common browsers.
It is best left alone.
There are no problems there, the returned children collection is
filtered for "nodeType == 1" or "nodeName == tag" afterwards.
That can be easily enough demonstrated.

There is no filtering shown in the code above. None. Where is the test
case of NWMatcher.DOM.getChildren?

This is the relevant code string that is wrapped around during the
function build that does what you are looking for:

// fix for IE gEBTN('*') returning collection with comment nodes
SKIP_COMMENTS = BUGGY_GEBTN ? 'if(e.nodeType!=1){continue;}' : '',


Nowhere did I say that the method serves the purpose you are
trying to give it. You just guessed it!

No, I did not guess. I looked and found an inconsistency.

If someone were going to guess what that method is for (I would not), he
might read the code comment:-

| // retrieve all children elements
| getChildren: getChildren,

- and make a fair guess that it returns child elements.

A fair /expectation/ would be that the method would not return
inconsistent results across browsers.
That may be partly my fault too, by having exposed it as a public
method. ;-)

I was talking about "match()" and "select()", these are the only two
methods meant to be used. Sorry if it is unclear in the code/comments.

Why would you expose other methods if they are not intended to be used?
[snip explanation of children/childNodes]


Regardless of the getChildren method inconsistency, I don't need that.
AISB, YAGNI!
You are wrong. NWMatcher will never return text or comment nodes in
the result set, so it fixes the problems you mentioned on IE, and
since NWMatcher feature tests that specific problem, any new browser
with the same gEBTN bugs will be fixed too.
There are a couple of things wrong with that paragraph.

1) I am *not* wrong. The source code for NW.Dom.getChildren, as posted
above, taken from NWMatcher source on github[2], indicates otherwise.

As I said, I am not trying to have a unified cross-browser
"getChildren"; it is a helper used by the compiled functions. I could
have completely avoided having that function be independent; it was
there to improve speed on IE by quickly discarding text nodes.

Why not just use the NATIVE_CHILDREN variable? Providing inconsistent
results to the caller imposes a responsibility that the caller has to
know about, despite the method name "getChildren" and the comment above it.

The caller must perform a few steps:
1) call getChildren
2) filter out comments, text nodes.

You can probably get away with it if you own all the code, but such code
would not fly where code sharing is common.

I suggest renaming NATIVE_CHILDREN to something less incorrect. Maybe
CHILDREN_OR_CHILDNODES. NATIVE_CHILDREN is incorrect because it can be
"childNodes", which is not children (seems confusing).
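For comparison, one way to give callers consistent results is to do the element filtering inside the helper itself, whichever collection the browser supplies (an alternative sketch of my own, not NWMatcher's code):

```javascript
// Always return an array of element nodes, regardless of whether the
// browser provides `children` (elements only, inconsistently) or only
// `childNodes` (elements mixed with text and comment nodes).
function getElementChildren(element) {
  var source = element.children || element.childNodes,
      result = [];
  for (var i = 0; i < source.length; i++) {
    if (source[i].nodeType === 1) { // 1 === Node.ELEMENT_NODE
      result.push(source[i]);
    }
  }
  return result;
}
```

The trade-off is an extra pass per call, which is exactly the IE speed concern that motivated the original shortcut.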

[snip getChildren example]
No, let's see it this way: getChildren() is there to get the fastest
collection available. It didn't improve things so incredibly, for the
record, but there was a gain.

Using === instead of == to compare nodeType might also make a comparable
improvement of performance. Probably only measurable in extreme cases.

Not referencing the |arguments| object would also help. This has been
discussed here at length.
I have never thought it was to be sold. Incentives come in various
formats!

Yes, they do.
Works for me and for the hundreds of tests it passes.

I wonder, can the following be expected to match:
input[checked]
input[checked=checked]
input:checked:not(:disabled)

How do I select an input that is checked (input.checked == true) and not
disabled?
But I agree that the problem you talk about has been greatly
underestimated by several related working groups. Don't blame me or my
code for those errors.
* 16k is a lot, but NWMatcher is nearly 50k[2].

The "match()" method is the length I said, no more than 16kbytes of
source code; the rest is for the "select()" method (I have no use for
"select()" while everybody else uses only it), the caching code and
the type of checks that you said I should have left out.
* What is the situation you're finding yourself in that you need
"negation pseudo classes"?

Scraping external text content, mostly (hope the term is correct). Also,
in general, when the list of things to do is much bigger than the list of
things not to do (while attaching events).
Chances are, someone has encountered that problem before and has figured
out a way that does not require matching "negation pseudo classes". It
may be that you have a unique problem. That warrants exposition.

Let's say I want all the elements but not SCRIPTs and/or STYLESHEETs
and/or OBJECTs/APPLETs... sounds familiar and useful as a task?

No, I can't say that I've ever been in such a situation where I needed all
elements but not SCRIPT, LINK, OBJECT, APPLET.

I'm familiar with page scraping, though I've never made a mashup.

Code could filter those elements:

var tagsExcluded = /^(SCRIPT|LINK|OBJECT|APPLET)$/;

for (...) {
  if (!tagsExcluded.test(el.tagName)) {
    // process el
  }
}

How is that expressed using "negation pseudo class" selector?

That is:

input:not([checked])

-would match inputs that do not have a checked attribute (not property).

I don't see how you'd use :not() to match elements.

http://www.w3.org/TR/css3-selectors/#negation
| The following selector matches all button elements in an HTML
| document that are not disabled.
|
| button:not([DISABLED])

That example is wrong. The button could have been disabled, but the tag
does not have the disabled attribute declared.
However, it is also part of the CSS3 specification; they may be able to
give you other ideas about that in their docs.

Sure good stuff to study, along with ARIA, which I've meant to read more.
Yeah, you repeated it a few times, you have no use... I see, I will not
blame you for that.

However, I have no errors or warnings in the console using these
helpers, and the results are correct AFAIK.

This should already be a good enough reason to start using them and
try something new.

Honestly, it doesn't do something I need. I think some use-cases would
help show areas that are unused or problematic. If there's a part you
want code-reviewed, post it up :).

I could; you haven't given me a reason not to, but I will
carefully ponder any related/motivated suggestion.

By not using quirks mode, the script is less complicated.
It's a shame few follow; rules are easy to follow and don't
require big efforts, just open minds.

Validation of HTML can be enforced on any project. Just do simple buddy
checks/code reviews. It shouldn't take that long to catch on and pretty
soon everybody validates their code.
Validation is a big target both for NWEvents and NWMatcher. You should
try it.

Try validation?

I validate ruthlessly and have been a big proponent of validating
everywhere I go, often to the annoyance of other developers. I made a
point of adding that to the FAQ. Earlier versions did not mention HTML
validation at all.

Or are you again suggesting me to try NWMatcher? If that is so, then I
feel like I failed to explain my reasons for needing to see
justification for it. I've included some links to "the simplest thing
that could possibly work, but not any simpler," and "YAGNI". I don't
know of a write up for "don't do that".
Thank you for scrutinizing. I have to address some of the concerns you
raised, like removing some public methods to avoid confusing devs.

Do you want more code review? Where? Post a link.
Diego Perini

Garrett

[YAGNI] http://groups.google.com/group/comp.software-eng/msg/f3882fbbb48b80cd?dmode=source
YAGNI on Wikipedia: http://en.wikipedia.org/wiki/You_Ain't_Gonna_Need_It

[DoTheSimplestThingThatCouldPossiblyWork]
http://c2.com/xp/DoTheSimplestThingThatCouldPossiblyWork.html

[Speculative Generality]
http://foozle.berkeley.edu/projects/streek/agile/bad-smells-in-code.html#Speculative+Generality
 

Garrett Smith

Mistakes corrected below.

[...]
That constrains the position of the "activating" link in the paragraph.
That, as is, is solvable by checking tagName, parentNode, and
parentNode.getElementsByTagName("a") === target. However that strategy

Correction: Should be:
parentNode.getElementsByTagName("a")[0] === target
in and of itself is arbitrary and fragile. If the markup were to have a
case for including a non-activating link as the first link in a <p>, the
script would fail. It is rigid for the same reason. You can't change the
order. I've been advocating this whole time that it is best to use
semantic markup. The |class| attribute could be used here.

addCallback(baseNode, handlePanelActuatorClick);

- and then in "handlePanelActuatorClick", do the checking to see if the
target has class "panelActuator".

function checkLinks(ev) {
  var target = getTarget(ev);

  var isPanelActuator = hasClass(target, "panelActuator");
  if ( isTargetFirstLinkInParagraph ) {
    alert("winner");
  }

Correction: should be:

if ( isPanelActuator ) {
  alert("winner");
}

Naming the variable "isTargetFirstLinkInParagraph" would not have
described the intent of the code as well.

Garrett
 

Diego Perini

Reposting these links since the previous message may have gone unnoticed
(the length of this thread is out of control for me):

Online demo/test with the sliced down version of NWMatcher containing
just the needed "match()" method:

http://javascript.nwbox.com/cljs-071809/nwapi/nwapi_test.html

You can download a complete archive of all the source used to build
this demo here (with minified and compressed examples):

http://javascript.nwbox.com/cljs-071809/nwapi-demo-cljs.tgz

Only 5Kbytes gzipped for the match/delegation task to show; I believe
I can shave off one more kilobyte with a bit of work, following the
suggestions to the letter.

Diego said:
Diego Perini wrote:
Diego Perini wrote:
Diego Perini wrote:
kangax wrote:
Garrett Smith wrote:
RobG wrote:
Ivan S pisze:
WhatJSframework would you guys recommend to me?
Tryhttp://www.domassistant.com/
No, don't. It seems to munge the worst of jQuery CSS-style selectors
and Prototype.jsaugmentation of host objects together in one script
that is also depenent on browser sniffing.
The authors' claim that it is the fastest script in the SlickSpeed
tests isn't substantiated - in Safari 4 it comes in 4th out of 7, even
jQuery is faster.
Knowing how to use document.getElementById, event bubbling (also
called "delegation") would offer much superior runtime performance.
You can't do much with `document.getElementById`. Sometimes you want to
select by class, attribute, descendant or a combination of those. I
agree that more complex selectors are rarely needed (well, except maybe
something like `nth-child("odd")`).
Event delegation is great, but then you might want some kind of `match`
method to determine whether a target element matches selector. `match`
is not necessary, of course, but testing an element manually adds noise
and verbosity.
A codebase that does not have a selector API should be smaller. In
comparison to a codebase that has a selectors API, the one that does
notshould be downloaded faster, interpreted faster, and should have a
smaller memory footprint.
That much is... obvious :)
I don't think the last two really matter. Execution of an entire fully
CSS3 compliant selector engine (such as NWMatcher - the best one of them
I've ever seen) doesn't take more than a few milliseconds. It's 2ms in
FF3.0.10. Probably not more than 20-30ms in relatively slow IE6/7. Would
you really care about those?
I would definitely care about 20ms.
As discussed in previous mails, I have not ever needed an API to check
the tagName and className; that is a very simple and straightforward
thing to do and does not require abstraction.
In my experience, I have not ever needed to check much more than that. I
can't really think of a problem that would be best solved by NWMatcher,
nor can I imagine where NWMatcher would provide a clearer and faster
abstraction.
Why add something that is not necessary?
FWICS, NWMatcher still uses function decompilation: http://github.com/dperini/nwmatcher/blob/ceaa7fdf733edc1a87777935ed05...
Garrett
--
The official comp.lang.javascript FAQ: http://jibbering.com/faq/
Garrett,
if I were sure nobody had overwritten native functions with
broken replacements I wouldn't have used function decompilation; I
would probably have replaced that hated line:
(/\{\s*\[native code[^\]]*\]\s*\}|^\[function\]$/).test(object[method]);
with a simpler:
typeof object[method] == 'function';
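For comparison, a sketch of the two checks being contrasted (function names are mine; the decompilation regex is the same "hated line" quoted above). Note that function decompilation is implementation dependent, so the first test is a heuristic, not a guarantee:

```javascript
// looksNative serializes the method and searches for "[native code]";
// isFunction accepts any function, including third-party replacements.
function looksNative(object, method) {
  return (/\{\s*\[native code[^\]]*\]\s*\}|^\[function\]$/)
    .test(object[method]);
}

function isFunction(object, method) {
  return typeof object[method] == 'function';
}
```

The first rejects a third-party replacement of a built-in; the second cannot tell them apart, which is exactly the trade-off discussed here.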
You don't see the problem because you haven't faced it,
Maybe your way is not the only way. Could it be that there is another
way to approach the problem that is at least as good?
Correct you are. Maybe you have a better solution. Just show me so I
can compare.
but it is a
problem and a very frequent one when you have to ensure compatibility
with code written by others not interested in keeping a clean
environment.
Modifying host objects is known to be problematic.
Again, you are correct, but I have to live with it. I personally don't
do it, but I don't know which library will be chosen by my users, nor
can I force them to choose one or another.
Hiding that mistake does not make it go away. If the code is broken, fix
it. The code should not be broken in the first place, and certainly not
in the way that your workaround caters to.
You correctly use "should", but reality is what I work with. I will be
happy to include APE in my library chooser as soon as possible; still,
I would have to support all the libraries with that shameful habit.
Sure, when there are enough "perfect" libraries I could start to leave
out those that are not.
APE is AFL, so it is probably not licensable for most projects. Having
effectively no user-base, I can remove things that should not be there.
Perhaps in the future, I'll go with a less restrictive mozilla-type
license.
Supporting potential problems in third party libraries is a *choice*.
Piggybacking off the popular libraries probably results in a larger user
base. The obvious downside is that additional code is required to hide
the inherently faulty design of those libraries. What is less obvious is
that the additional code requires "implementation dependent" function
decompilation, which is known to be faulty in at least opera mobile[1]
and is defined to return a FunctionDeclaration, which is not what IE,
Gecko, Safari, Chrome, Opera do when Function.prototype.toString is
called on an anonymous function.
General purpose frameworks are often chosen to "save time". Prototype,
Mootools, Cappuccino, Mojo all modify host objects. Interesting that
NWMatcher has extra code created for working around such problematic
design of the modification of host objects. Wouldn't it make more sense
to just not use those?
The extra code is very small and worth it currently; it comes down to
this: I only needed to know that get/hasAttribute were the native
browser implementations and not a third party attempt to fix them; no
third party extension of get/hasAttribute for IE that I know of has the
equivalent capabilities of IE regarding XML namespaced attributes.
If I understand correctly, the concern is that a third party library may
have attempted to fix IE's getAttribute/hasAttribute methods by defining
those methods on the element, and that if that had happened, it would
cause the script to not function properly.
Correct, in that case it was Prototype messing with those native
functions.
Sounds like a great reason not to use Prototype.
XML Namespaced attributes should be avoided. Why would you want to use that?
NWMatcher tries to support as much as possible of the W3C
specifications (the xml:lang attribute and similar) which are supported
by the IE natives.
Why on earth do you need to match xml:lang? That would only be supported
by using "getAttribute('xml:lang')" where IE would not interpret a
namespace, but would instead use a nonstandard approach. The colon
character is not valid in attributes.
I don't have much use for it myself either, but that doesn't mean I
have to constrain my project to my own needs. I have bug tracking tools
that I try to follow, and where possible and per specs I try to have it
considered. Is that a problem?
The "xml:lang" property is mentioned in all the CSS >= 2 specs I have
read and it hasn't changed in the CSS Selectors Level 3, read first
6.3.1 and then follow links to 6.6.3 of latest CSS Selector Level 3
draft March/2009:

Though you may still say that my implementation is failing in some
way. No problems with that, my objective is learn and improve.

Not modifying host objects would be an improvement.

| d.isCaching

You are fully correct again.

I couldn't find a better way to have different caching storage bound
to different documents, that was the reason of it.

But that's again used only for the "select()" method (caching system),
which I did completely remove in the code I posted above.

I posted that incorrectly at the end of one of my previous messages,
so it may have been missed in a too-long message.
Mutation events. I remember in Gecko, DOMAttrModified would fire when a
textarea's |value| property was changed and that bug was justified on a
bugzilla ticket. I'm also concerned with the reliability of
"DOMNodeRemoved" when setting innerHTML or textContent.

Mutation events are also only used to have the fastest "select()"
method; they are currently only supported in Firefox and Opera, though
they should be working in all HTML5 compliant browsers by now.
They are also very, very useful in web coding.

In NWMatcher the "DOMAttrModified" event is feature tested in a good
enough way, I suppose; the problem is inferring that, because it works,
"DOMNodeInserted" and "DOMNodeRemoved" will work too. That is not the
best I could do, really, but I wanted to keep down code size too.

The quirk you describe above about a textarea's value has to do with
the difference between "attributes" and "properties": how browsers
interpret the facts, and their wish to have the two distinguished at
the time of the test.
I would like to do it OT if you have spare time and you really wish!

It is not "off topic". Quite the contrary: It is the essence of not
having a problem that draws the very point I am trying to make: YAGNI
[YAGNI].
The code I am going to show already uses delegation with some tricky
and possibly highly populated "trees" in smaller overlay windows; the
task is to replace "manual" delegation spread throughout the code with
a centralized event manager that can handle delegation using CSS
selectors. This is already handled very well, since hundreds of events
are handled by a few elements that act as event dispatchers (the
bubbling).

Link?

A link would just show the outcome of HTML/CSS work, which is not the
discussion.

Authentication is needed to access the development interface.

OT... I meant "off thread", to avoid annoying others and turning into a
spammer.
Yes, please do try and I will do my best to understand and where I do
not understand, I will try and make that clear.



Yes, that was my point; no problem to solve.

I tried to set up some of what you asked in a couple of posts above; I
did it incorrectly, so maybe you skipped that part.

I have to say you convinced me to split my project in two parts, the
second with the outcome of your suggestions.

There is no room for fixing browser bugs in a W3C or CSS
specification compliant implementation.

I have already sliced a "what's strictly necessary" out of NWEvents +
NWMatcher in previous posts, with links. I can do far better, probably
leaving out the get/hasAttribute fixing that I don't need if I
switch to "properties"-only mode. NWMatcher wanted to be like that in
past versions, but you know I like following specifications and counsel;
success/results vary depending on many factors...

One thing I also admit is having added the whole normal "select()"
method and its cruft/dependencies/cache just to have more people using
it and <small>compete</small> with other similar offerings. I clearly
explained my need for the match() method and the problems "I" think
it solves.
That constrains the position of the "activating" link in the paragraph.
That, as is, is solvable by checking tagName, parentNode, and
parentNode.getElementsByTagName("a")[0] === target. However that
strategy in and of itself is arbitrary and fragile. If the markup were
to have a case for including a non-activating link as the first link in
a <p>, the script would fail. It is rigid for the same reason. You can't
change the order. I've been advocating this whole time that it is best
to use semantic markup. The |class| attribute could be used here.

That was deliberately written so. This is unconstrained:

NW.Event.appendDelegate( "p a", "click", handler_function ); //
all links in each paragraph

if you still need the first "anchor" element in "p", unconstrained by
what's in between "p" and "a", then CSS3 will do it like this:

NW.Event.appendDelegate( "p a:nth-of-type(1)", "click", handler_function );
// only first link in each paragraph (unconstrained)

CSS3 is flexible enough and still quite readable. I hope you can see it
here as a clear separation from structure instead.
addCallback(baseNode, handlePanelActuatorClick);

- and then in "handlePanelActuatorClick", do the checking to see if the
target has class "panelActuator".

function checkLinks(ev) {
    var target = getTarget(ev);

    // check the class, not the position in the paragraph
    var isPanelActuator = hasClass(target, "panelActuator");
    if (isPanelActuator) {
        alert("winner");
    }
}

The side effects to that strategy are:
* callback does the checking.
- reduces function calls
- debugging is straightforward
- does not require an extra call to a Selector lookup function
* NWMatcher is not required
- less code overall
- no extra non-standard API to learn
* Encourages the authoring of semantic HTML
- makes automation testing easier (for the same reasons it makes
assertions in the callback easier).
- behavior is deliberately and consistently applied (irrelevant
structural changes won't cause problems)

Drawbacks:
* requires user-defined hasClass(el, klass) and getTarget(ev) methods.
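For completeness, minimal sketches of those two user-defined helpers (implementations assumed, not taken from APE or any other library):

```javascript
// getTarget normalizes the W3C/IE event models; hasClass does a
// whole-word test against the space-separated className list.
function getTarget(ev) {
  ev = ev || window.event;            // IE stores the event globally
  return ev.target || ev.srcElement;  // W3C target vs IE srcElement
}

function hasClass(el, klass) {
  return (" " + el.className + " ").indexOf(" " + klass + " ") > -1;
}
```

Both are a handful of lines, which is roughly the "infinity of code" being traded against a selector engine in this exchange.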

That's an "infinity" of code for a 0 LOC solution. ;-)

And if you consider supporting IE, you haven't touched yet on how you
would capture or bubble events in your 0 LOC for your one-time
delegation approach. Especially, you haven't touched on the
cross-browser functionality: when your snippet runs, will it run at the
same expected time in all browsers?
Can I not question the need for the code itself?

Sure you can. I appreciate you doing this.
I would not reuse the implementation. I /would/ reuse the
getNextSiblingElement or hasClass methods. I'd organize those methods
where they tend to get reused together, or where those methods have
common functionality, such as a some shared hidden (scope) variables.

Checking an element's "checked" or "tagName" property is trivial. It
would be pointless to create an abstraction for that.

OTOH, reading an element's checked *attribute* seems pointless. Why
would a program care? A CSS3 compliant selector API would be
required to have that feature, but is it needed by a program?

Not really, not needed. It would be needed only to know the original
setting when the initial HTML was parsed, but there are
".defaultValue" properties on most form elements to achieve that, if
other scripts haven't messed with them!

As I already said, I am all for switching this JS world to
"properties" only, to skip this *attribute* "stopper" forever.
Yet another benefit to using bubbling. However, NWEvents is not needed
for that. Bubbling comes for free.

As said, there are a lot of events that actually do not bubble, both in
IE and in other browsers. The HTML5 specification says all events
should be captured/bubbled from window/document to target and vice
versa, except for "onload".

So there are no compliant browsers yet. What next!

Not being able to use delegation on certain form events is not a
problem for simple "guest pages", I agree.

The next option is NWEvents alone for simple selectors, 3 KB gzipped;
the next option is to also load "NWMatcher" to be able to use complex
selectors with the exact same syntax, for a total of only 5 KB gzipped
(as per the links I sent in my previous post).
One-up to that is to reuse and cache an object decorator/wrapper that
was created lazily, on a bubbled event. That is possible when the
decorating object does not hold a reference to the element, but to an ID
(string).
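A sketch of that pattern, assuming a hypothetical `Wrapper` type: the cache holds wrappers keyed by ID strings, and each wrapper re-resolves its element on demand instead of keeping a reference to it, so caching the wrapper does not keep element references alive.

```javascript
var wrapperCache = {};

function Wrapper(id) {
  this.id = id; // string only, never the element itself
}

Wrapper.prototype.getElement = function () {
  // resolve lazily, each time it is needed
  return document.getElementById(this.id);
};

function getWrapper(id) {
  // reuse the wrapper created on a previous bubbled event
  return wrapperCache[id] || (wrapperCache[id] = new Wrapper(id));
}
```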


Submit does not bubble in IE, though. You could try and capture bubbled
"Enter" key and (submit) button clicks, but that also requires more
aggressive interrogation of the target (to make sure it is not readonly,
not disabled, etc).

Submit and all other events can be captured and bubbled in IE by using
NWEvents, 3 KB gzipped.
The "but it should work" situations usually make me think about trying a
different approach.

I already have running code implementing the simple delegation approach
you talk about; my task was to centralize that pattern throughout the
code with some API and improve UI performance. Both of these tasks
will ease swapping out old code in my project.
I don't care about reset events, but I'm curious about handling the
bubbled submit. Would you like to post up some code, or a link to the
relevant place? Maybe another thread for that would be better, so that
discussion stays focused on that.

To have all the events bubbling, and also be able to capture them in
IE, I used the element "activation" mechanism, which works very well in
IE. In practice you can use "activation" events like
"onbeforeactivate", "onactivate", "onbeforedeactivate", and
"ondeactivate"; see the MSDN docs.

These are device-independent events that fire right before any
element is "activated/deactivated", for any device (keyboard,
mouse, etc.) that activates them. If you tab to an input element they
fire before the element is dispatched a focus event.

With this in mind I have set up an event proxying system that allows
simulating capturing/bubbling. It seems to work.

Dean Edwards has also recently released an implementation of this
technique in his base2:

http://dean.edwards.name/jsb/

XUL has behaviors that work in a similar fashion, and IE tried with
their (now dead) "expressions".

It seems many were after this technique or a similar approach in the
past.
[Selectors API design discussion]
It was a very good idea for me and for others using it; obviously, if
you can teach me how to do that in 0 lines, I am all ears.

A solution to a problem can not be critiqued if there is no problem
provided.

Code that does not meet the requirements fails on the first criteria.

So, if an assessment is to be made of NWMatcher, doesn't it sound right
and proper to show NWMatcher being used to solve a problem? Given a
problem P, in context, a comparison of P solved with NWMatcher vs P
solved with something else.

It looks like NWMatcher is adapted for jQuery and Prototype, right? If
so, I wonder what those users' good idea(s) were.
It works; NWMatcher passes the only few test suites available to me
(Prototype's and jQuery's). If you wish one added, please suggest it.
I have some tests of my own, obviously.

Depending on the browser, a checkbox' "checked" attribute may be a
boolean, null, or a string value.

A CSS3 compliant selector API would have to take all that into account.

I can't see a good reason for wanting to read the checked *attribute*.
Why would a script care about that?

The checked *property* should be what is a concern. A textarea's "value"
property, or another property, such as IE's "unselectable", might be
things a program would be concerned about. How would you match those
using CSS3 selectors?

No explicit CSS3 selectors exist for that, but NWMatcher is quite
extensible in that respect; specific selectors can be added at will.
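A sketch of how such property-based extensions might work in principle; this is a hypothetical registration API, not NWMatcher's actual extension mechanism. Each pseudo-class name maps to a test on element *properties*:

```javascript
var pseudos = {};

function registerPseudo(name, test) {
  pseudos[name] = test;
}

function matchPseudo(el, name) {
  // unknown pseudo-classes simply never match
  return pseudos[name] ? pseudos[name](el) : false;
}

// property checks, as discussed above
registerPseudo("checked", function (el) {
  return el.checked === true;
});
registerPseudo("unselectable", function (el) {
  return el.unselectable === "on"; // IE-specific property
});
```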
I know element IDs are unique. Thanks.

That wasn't my point. document.getElementById and getElementsByName are
broken in IE[1]. The reason I mentioned it is that it is a similar type
of thinking: "X should work", and then trying to make it work.

No need for them in NWMatcher; just a simple call to
getElementsByTagName, as you can see from the demo sources, and a
feature test to check whether it is IE, to skip non-element nodes.
A workaround for the IE bug is to replace document.getElementById with a
hand-rolled version. The hand rolled version checks to see if the
non-null element has an "id" property that is the same value. The
workaround is avoidable by not giving one element an ID that matches a
NAME of another (and expecting that nobody else will do that).

I was just trying to illustrate a point of not trying to patch all
browser bugs. There are way too many of them and they can often be
avoided by just being aware of the problem and not triggering it.

Right; most if not all of the bugs are avoided by using
NWEvents/NWMatcher because there are really no calls to possibly buggy
native methods (in the sliced-down example I recently posted).
I don't know what you are referring to. It sounds like you are
describing a mashup.

Yes, done by our users through a visual interface where they also
choose their preferred JavaScript framework and plugins (which may be
different for different pages).
Brendan Eich would be able to provide a better answer on why scripting
was introduced. I'm not even sure I know the correct answer. I only know
what is going on for the past 10 years.

I proposed a History page for ecmascript.org, sent as email to one of
the es-discuss maintainers. I'm not expecting a 911 response, but it
would be nice to see such page.


That sounds like something Martin Fowler calls "speculative generality".



There are cases where order matters and cases where an element's
position in the source order is arbitrary, and can often be enforced by
using valid HTML (only <li> inside a list, for example). The markup can
give big hints at what is arbitrary and what is not. The author is
responsible for that. The class or ID usually is not arbitrary.

What I would do is write semantic markup, make it is simple and logical
and obvious as possible, and then code for that. Things that are
arbitrary being changed won't affect the script.




Selector API is overkill. The only time I can see needing that is for a
StyleSheet-related application. I made a styleSheet editor about four
years ago and used a Selectors API to match all the nodes in the
document based on selector text found in the styleSheet.



Nobody has previousSiblingElement; that is a user defined function (I
miscommunicated that). "previousElementSibling" is Doug Schepers' choice
of name for the property (as I previously stated below).

Oh, OK. Those are then lines to add to the 0 LOC solution.
That sounds like something I might say, though I don't recall
specifically.





Can you explain a little more? What do you mean by "technology" and
"teaching difficult actions"?

Let's see if I can: it is easier to explain to a normal user to press a
button to apply a bold style to a piece of selected text than to explain
Looking at the code, I think I see a problem in NW.Dom.getChildren:-
| getChildren =
| function(element) {
| // childNodes is slower to loop through because it contains
| // text nodes
| // empty text nodes could be removed at startup to compensate
| // this a bit
| return element[NATIVE_CHILDREN] || element.childNodes;
| },
The children property is inconsistent across the most common browsers.
It is best left alone.
There are no problems there; the returned children collection is
filtered for "nodeType == 1" or "nodeName == tag" afterwards.
That can be easily enough demonstrated.
There is no filtering shown in the code above. None. Where is the test
case of NWMatcher.DOM.getChildren?
This is the relevant code string that is wrapped around during the
function build that does what you are looking for:

// fix for IE gEBTN('*') returning collection with comment nodes
SKIP_COMMENTS = BUGGY_GEBTN ? 'if(e.nodeType!=1){continue;}' : '',
Nowhere did I say that the method serves the purpose you are trying to
give it. You just guessed!

No, I did not guess. I looked and found an inconsistency.

If someone were going to guess what that method is for (I would not), he
might read the code comment:-

| // retrieve all children elements
| getChildren: getChildren,

- and make a fair guess that it returns child elements.

A fair /expectation/ would be that the method would not return
inconsistent results across browsers.

I wasn't fixing any bug there, just getting a reference to the
"supposed" best children collection in each different browser; then I
loop over it and filter out non-element nodes. However, it was used
only in one instance and, as I said, only for the "select()" method,
now removed.
Why would you expose other methods if they are not intended to be used?

I have to call these methods from the cached compiled functions, which
are in another scope. I could have used call/apply but I didn't want to
add more overhead and thought this was a better way. To avoid exposing
them, they could be made private and passed to the functions; yes, I'm
not sure what's better here.
[snip explanation of children/childNodes]
Regardless of the getChildren method inconsistency, I don't need that.
AISB, YAGNI!
You are wrong. NWMatcher will never return text or comment nodes in
the result set, so it fixes the problems you mentioned on IE, and
since NWMatcher feature tests that specific problem, any new browser
with the same gEBTN bugs will be fixed too.
There are a couple of things wrong with that paragraph.
1) I am *not* wrong. The source code for NW.Dom.getChildren, as posted
above, taken from NWMatcher source on github[2], indicates otherwise.
As I said, I am not trying to have a unified cross-browser
"getChildren"; it is a helper used by the compiled functions. I could
have completely avoided having that function be independent; it was to
improve speed in IE by quickly discarding text nodes.

Why not just use the NATIVE_CHILDREN variable? Providing inconsistent
results to the caller imposes a responsibility that the caller has to
know about, despite the method name "getChildren" and the comment above it.

The caller must perform a few steps:
1) call getChildren
2) filter out comments, text nodes.

You can probably get away with it if you own all the code, but such code
would not fly where code sharing is common.

I suggest renaming of NATIVE_CHILDREN to something less incorrect. Maybe
CHILDREN_OR_CHILDNODES. NATIVE_CHILDREN is incorrect because it can be
"childNodes", which is not children (seems confusing).

As I said, that was a bad example for both of us to come to talk about;
better to leave it out. I completely agree that if such a named
function were exposed publicly it should at least be a cross-browser
solution and return consistent results browser-wide.
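For the record, a consistent cross-browser version, as suggested above, might look like this sketch (not NWMatcher's code): it prefers the native `children` collection where available, falls back to childNodes, and always filters to element nodes, so IE's comment nodes and other browsers' text nodes are both excluded.

```javascript
function getChildren(element) {
  var source = element.children || element.childNodes,
      result = [], i, node;
  for (i = 0; i < source.length; i += 1) {
    node = source[i];
    if (node.nodeType === 1) { // element nodes only
      result.push(node);
    }
  }
  return result;
}
```

The caller then never has to know which collection the browser supplied.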
[snip getChildren example]


No, let's see it this way: getChildren() is there to get the fastest
collection available. It didn't improve things so incredibly, for the
record, but there was a gain.

Using === instead of == to compare nodeType might also make a comparable
improvement of performance. Probably only measurable in extreme cases.

Yes, maybe to avoid an internal conversion/cast; I will add these
improvements then.
Not referencing the |arguments| object would also help. This has been
discussed here at length.

No need for that either in the "match()" method.
I have never thought it was to be sold. Incentives come in various
forms!

Yes, they do.
Works for me and for the hundreds of tests it passes.

I wonder, can the following be expected to match:
input[checked]
input[checked=checked]
input:checked:not(:disabled)

How do I select an input that is checked (input.checked == true) and not
disabled?

All of the above should work; :checked and :disabled are compared with
element "properties", not with the corresponding attributes as for all
the other pseudos. The [ ] syntax is the only one considering/forcing
an attribute match (but I would also prefer just a property match).

If not please tell me to make corrections.
But I agree that the problem you talk about has been greatly
underestimated by several related working groups. Don't blame me or my
code for those errors.
* 16k is a lot, but NWMatcher is nearly 50k[2].
The "match()" method is the length I said, no more than 16kbytes
source code, the rest is for the "select()" method (I have no use for
the "select()" while everybody else uses only it) the caching code and
the type of checks that you said I should have leaved out.-
Scraping external text content, mostly (hope the term is correct).
Also, in general, when the list of things to do is much bigger than the
list of things not to do (while attaching events).
Let's say I want all the elements but not SCRIPTs and/or STYLESHEETs
and/or OBJECTs/APPLETs... sound familiar and useful as a task?

No, I can't say that I've ever been in such a situation where I needed
all elements but not SCRIPT, LINK, OBJECT, APPLET.

Well, let's then say: all form elements but not buttons... maybe more
useful?
I'm familiar with page scraping, though I've never made a mashup.

Code could filter those elements.

var tagsExcluded = /^(SCRIPT|LINK|OBJECT|APPLET|!)$/;
// "!" is the tagName IE gives comment nodes

for (...) {
    if (!tagsExcluded.test(el.tagName)) {
        // process el
    }
}

How is that expressed using "negation pseudo class" selector?

In CSS3, I believe that several chained :not() should do:

":not(script):not(link):not(object):not(applet)"

with NWMatcher:

":not(script, link, object, applet)"

also understand this is an NWMatcher syntax extension only.
That is:

input:not([checked])

-would match inputs that do not have a checked attribute (not property).

I don't see how you'd use :not() to match elements.

http://www.w3.org/TR/css3-selectors/#negation
| The following selector matches all button elements in an HTML
| document that are not disabled.
|
| button:not([DISABLED])

That example is wrong. The button could have been disabled, but the tag
does not have the disabled attribute declared.

Yeah, there are/were many inconsistencies in previous works/drafts, but
with tickets for implementors they will be sorted out sooner or later.
Sure good stuff to study, along with ARIA, which I've meant to read more.

Believe it or not I have tried to stay with CSS3/HTML5 specifications,
you can see notes about that in the original NWMatcher.
Honestly, it doesn't do something I need. I think some use-cases would
help show areas that are unused or problematic. If there's a part you
want code-reviewed, post it up :).

Already posted, with most of the corrections you suggested, to show the
"match()" method in action.
By not using quirks mode, the script is less complicated.

Older browsers will enter quirks mode by default if no doctype is
specified in the page (really, many of them do).
Validation of HTML can be enforced on any project. Just do simple buddy
checks/code reviews. It shouldn't take that long to catch on and pretty
soon everybody validates their code.



Try validation?

I validate ruthlessly and have been a big proponent of validating
everywhere I go, often to the annoyance of other developers. I made a
point of adding that to the FAQ. Earlier versions did not mention HTML
validation at all.

Or are you again suggesting me to try NWMatcher? If that is so, then I
feel like I failed to explain my reasons for needing to see
justification for it. I've included some links to "the simplest thing
that could possibly work, but not any simpler," and "YAGNI". I don't
know of a write up for "don't do that".

Right, try to do validation with NWEvents and use the "submit"/"change"
capturing/bubbling capabilities.
Do you want more code review? Where? Post a link.

You already have the links, but I fear to ask much more than I already
did! (In every sense) :)


Diego
 
D

David Mark

I've only hit this bug once or twice before, because I don't use many
animations in jQuery. For the most part, the problems that jQuery has
with IE haven't affected me. When they do, I talk about it in order to
figure out the best way to handle it.

So? You are deluded for repeating over and over that jQuery "smooths
over" browser quirks for novices when it can't even deal with the
quirks mode (the rendering mode of choice for novices.) We've been
over this (and similar issues) a hundred times at least.
It's odd to me that you would read the support groups for a script
that you find so repulsive and useless. Seems like a waste of time on
your part.

It's odd to me that you are so fascinated with me. Mind your own
business.
Your criticisms have certainly taught me some things and probably
affected my attitude towards jQuery in some ways. But you are just one
fish in a sea of opinions, ideas, and thoughts.

I'm one of the few who bothers to waste time trying to straighten you
out (and it's getting very old.) You and your "jQuery is a tool" BS.
Jesus. *You* are a tool.
You are not quite as
important or wise or influential as you seem to think you are.

You said all of that, stupid. Where did it come from?
Your
fascination with pointing out how "right you were" two years ago and
how nobody listened to you seems... well... odd to me.

Shouldn't. Typically you are the one I point it out to and you were
the one that made all of the idiotic assertions about jQuery (and
libraries in general) that were later proved completely false. You
were the one who demanded I publish a library for the good of mankind,
etc. Then you went right on ahead and plugged jQuery for two more
years like a broken record. Even when they had to rewrite virtually
the whole stupid thing for IE8 (as predicted) and you couldn't even
upgrade your hacked version of the lib, you popped up here every other
month to start the same "jQuery is simple and concise" conversation
over. Hell, you are doing it again in this very thread. I really
think you are mentally ill.

http://groups.google.com/group/comp...read/thread/415949d1bcce6e6a/03c4d326340e7f7d

Remember that bullshit? Regardless, I'm not posting it for your
benefit at this point. It's to refute your irresponsible promotion of
a script that is *completely* devoid of merit. There's absolutely
nothing there. On the other hand, I did publish a ton of my code as
an easily customizable library and you came up with every idiotic
excuse in the world to avoid it (including that it was "hard" to
build.) Did you make any attempt to improve the code as you indicated
you would? Any sort of collaborative effort at all? Of course not.
Probably you saw the lack of constant patches as indicating a lack of
"progress."

http://en.wikipedia.org/wiki/Cognitive_dissonance
 
M

Miladin Vukasinovic

in message
So? You are deluded for repeating over and over that jQuery "smooths
over" browser quirks for novices when it can't even deal with the
quirks mode (the rendering mode of choice for novices.) We've been
over this (and similar issues) a hundred times at least.


It's odd to me that you are so fascinated with me. Mind your own
business.


I'm one of the few who bothers to waste time trying to straighten you
out (and it's getting very old.) You and your "jQuery is a tool" BS.
Jesus. *You* are a tool.


You said all of that, stupid. Where did it come from?


Shouldn't. Typically you are the one I point it out to and you were
the one that made all of the idiotic assertions about jQuery (and
libraries in general) that were later proved completely false. You
were the one who demanded I publish a library for the good of mankind,
etc. [snip]




You are arguing with a troll.
It is one idiot who spams groups using some 50 fake identities (e.g. Gmail &
web newsfeeds, posted through anon proxies).
On the JS group this troll is using the following identities:

Matt Kruse <[email protected]>
Diego Perini <[email protected]>
WilsonOfCanada <[email protected]>
JRough <[email protected]>
Axel Bock <[email protected]>
Jonathan Fine <[email protected]>
Gabriel Gilini <[email protected]>
Geoff Cox <[email protected]>
.... 20 other etc.

These are all fake identities of one idiot!
They all speak the same way, defend each other, and keep asking the same
stupid questions again and again.. e.g. I heard of JavaScript, is it any
good? Or.. I was thinking of using Java.. somebody replies Java is not
JavaScript.. etc.
It is a troll.
There is no -they-
It is one retard with an extreme amount of spare time who keeps spamming the
group. Probably a nutcase.
Some of the regulars have earlier (once or a few times) hurt his feelings; he
disappears and reappears 2 days later under a different fake identity. The
story continues.

You need access to an NNTP server and a reader;
this troll is spamming through anon proxies and Google groups/web newsfeeds.
You can track it down, as e.g. all the above sender IPs are anon proxies.

e.g.
Jonathan Fine <[email protected]>, sending through the http://www.giganews.com
web interface, hides the sender IP (and the sender IP is a proxy)

Geoff Cox <[email protected]>
X-Usenet-Provider: http://www.giganews.com
NNTP-Posting-Host: 81.179.6.197
again, same thing

Diego Perini <[email protected]>
NNTP-Posting-Host: 79.1.50.78
posting through proxy/Google groups, sender IP is an anon proxy
etc. etc.


These are all fake -identities- of one retard.
You are not arguing with 50 people.
It is one idiot with a lot of spare time.


We have his real name, IP, location.. everything
 
D

David Mark

This thread is over a month old! Were you searching the archives for
something to argue about?

No, I don't read the group constantly, so I missed this gem a month
ago. Nobody listened, indeed. LOL. *You* and various jQuery dolts
didn't listen. That's your problem and you need to deal with it.

And, it seems the jig is up for you. Thanks Miladin! I knew it. :)
 
D

David Mark

[snip]
These are all fake -identities- of one retard.
You are not arguing with 50 people.
It is one idiot with a lot of spare time.

We have his real name, IP, location.. everything

Pick him up for questioning...
 
M

Matt Kruse

It is one idiot who spams groups using some 50 fake identities (e.g. Gmail &
web newsfeeds, posted through anon proxies).
On the JS group this troll is using the following identities:
Matt Kruse <[email protected]>
Diego Perini <[email protected]>
WilsonOfCanada <[email protected]>
JRough <[email protected]>
Axel Bock <[email protected]>
Jonathan Fine <[email protected]>
Gabriel Gilini <[email protected]>
Geoff Cox <[email protected]>
... 20 other etc.

The first rule of Javascript Club is, you do not talk about Javascript
Club.

Matt Kruse (or am I?)
 
M

Miladin Vukasinovic

[snip]
Matt Kruse <[email protected]>
Diego Perini <[email protected]>
WilsonOfCanada <[email protected]>
JRough <[email protected]>
Axel Bock <[email protected]>
Jonathan Fine <[email protected]>
Gabriel Gilini <[email protected]>


Pick him up for questioning...


Here are few more aliases of this troll-

matt prokes <[email protected]>
RobG <[email protected]>
Jason Carlton <[email protected]>
abpteam <[email protected]>
gisat <[email protected]>

Posting a personal photo is not required, you can track him down by
information in posts/headers.

First, this troll constantly keeps "posting", which means he keeps opening
new discussions under different fake identities.
Second, all these new threads (posts) have a sender IP that is an anonymous
proxy (the troll is trying to disguise himself).
Thirdly, the subject and contents of the posts are irrelevant; it is not
important what is said, it is all bullsh*t, a dummy discussion.
E.g. if he created a thread named "Fu*k you all!" nobody would reply.
So he opens a thread like: I am using jQuery, which is just beautiful,
provides an easy way, whatever.. it is really superior to.. etc.
The point is to provoke a reaction, and finally aggravation on the other
side, i.e. to drive the regulars into discussion/argument.
It's a troll.

This troll will open new discussions under different subjects and new fake
identities, so you can notice it by new *users* appearing on the group. Look
into the sender IP, and if posts are sent through an anon proxy + Google
groups or a web newsfeed, it is this troll. And the content of the posts will
be such that it attempts to pull the remaining users of the group into an
aggravated discussion.
 
M

Miladin Vukasinovic

[snip]
These are all fake -identities- of one retard.
You are not arguing with 50 people.
It is one idiot with a lot of spare time.

We have his real name, IP, location.. everything
Pick him up for questioning...



Also, this troll will not always attempt to create an argument, but will do
the complete opposite.
E.g. he will post a stupid question, i.e. how do I whatever, some of the
regulars explain how, and this troll replies Ah thank you so much whoever..
this is beautiful etc.
Do not be fooled by that.
It is to achieve "acceptance" of one fake identity into the group.
 
D

David Mark

[snip]
Matt Kruse (or am I?)

A rube by any other name...

And this isn't exactly news. I think everyone knows VK has been
reborn as "Jorge." You say "The Natural Philosopher", I say "The
Electrician." You say "SteveYoungGoogle", I say "SteveYoungTBird."
Granted, that last one was less than effective. But it's an old
ruse. Flame out and come back as somebody else. Of course, the scope
of this latest deception is unprecedented. "Matt" must have a lot of
time on his hands. ;)
 
R

Richard Cornford

Here are few more aliases of this troll-

matt prokes < ... >
RobG < ... >
Jason Carlton < ... >
abpteam < ... >
gisat < ... >

This list is getting increasingly arbitrary with the passage of time.
Posting a personal photo is not required, you can track him
down by information in posts/headers.

Whatever it is that you think you have observed, your conclusions are
wrong. And whatever testing strategy you may be employing it clearly
does not have the resolution to make the discriminations you think it
is making.
First, this troll constantly keeps "posting" which means he
keeps opening new discusions ... .
<snip>

As this does not describe at least some of the names on your list, your
list is in error by your own assertion.

However, there is always the possibility that the list is truly
arbitrary, that there never was any indication that these are all the
same individual, nor any actual effort to find that evidence, and it
is you who is the troll trying to provoke a time-wasting response.

Richard.
 
J

Jorge

SteveYoungGoogle said:
[snip]


Matt Kruse (or am I?)
A rube by any other name...

And this isn't exactly news. I think everyone knows VK has been
reborn as "Jorge." You say "The Natural Philosopher", I say "The
Electrician."

You say "David Mark" and I say "Roger Gilreath"
You say "David Mark" and I say "Roger"

LOL, I recall that. David Mark (in disguise) was banned from the iphone
web dev group for, among other niceties, calling Joe Hewitt -no more no
less- a moron. Immediately after, he came back again to continue bugging
everybody under the fake identity of Rick Schramm
"(e-mail address removed)". Pathetic. The same one that repeats here
again and again "get a real name". ROTFLOL
 
M

Miladin Vukasinovic

Richard Cornford said:
This list is getting increasingly arbitrary with the passage of time.


Whatever it is that you think you have observed, your conclusions are
wrong. And whatever testing strategy you may be employing it clearly
does not have the resolution to make the discriminations you think it
is making.

<snip>

As this does not describe at least some of the names on your list you
list is in error by your own assertion.

However, there is always the possibility that the list is truly
arbitrary, that there never was any indication that these are all the
same individual, not any actual effort to find that evidence, and it
is you who is the troll trying to provoke a time-wasting response.


One question,
did you check the IPs from that list?

Let me guess.. you didn't.

You can always ask for photos for those on that list to prove yourself
wrong.

Remember, it is you who replies to this idiot's crap; it is your time, and
how you spend it is your decision.
If you enjoy it, I don't mind; you are welcome.
Reply as normal.
 
M

Miladin Vukasinovic

[snip]
A rube by any other name...

And this isn't exactly news. I think everyone knows VK has been
reborn as "Jorge." You say "The Natural Philosopher", I say "The
Electrician." You say "SteveYoungGoogle", I say "SteveYoungTBird."
Granted, that last one was less than effective. But it's an old
ruse. Flame out and come back as somebody else. Of course, the scope
of this latest deception is unprecedented. "Matt" must have a lot of
time on his hands. ;)


SteveYoung, no; the IP is not a proxy.
He is not listed in the above list.

Other, yes.
The Natural Philosopher, matt, and the other matts
are this idiot posting under fake identities.


An explanation for less skilled users, so as not to jump to conclusions.

Posting through Google groups or web newsfeeds does not by itself imply it is
a troll.
It is the fact that posting is made through Google groups/web newsfeeds + an
anon proxy IP + discussions where there are attempts either to build up heat
and aggravation among participants, or to build up traffic.. i.e. doing the
opposite by saying oh thank you, it's beautiful, fantastic, etc. It is to
cause confusion, as that way his post (and himself) is not recognized as a
troll.
(i.e. he tries to mix troll and offensive posts with, by other identities,
kissing up to the regulars)

It is not obvious and not easy to notice, e.g. by less experienced users.
We know this troll's actions from before and the way he tries to troll, and
it does not require effort to trace down this activity.
 
D

David Mark

SteveYoungGoogle said:
[snip]
Matt Kruse (or am I?)
A rube by any other name...
And this isn't exactly news. I think everyone knows VK has been
reborn as "Jorge." You say "The Natural Philosopher", I say "The
Electrician."
You say "David Mark" and I say "Roger Gilreath"
You say "David Mark" and I say "Roger"

LOL, I recall that. David Mark (in disguise) was banned from the iphone

You recall wrong (as usual.) *I* was banned from that ridiculous
group after about three posts and I was certainly using my own name.
web dev group for among other niceties calling Joe Hewitt -no more no
less- a moron.

No (moron), that was the specific interpretation of some other moron
(and I suppose morons think alike.) Anyone who can read would know
otherwise. Sure as hell, his "framework" for the iPhone is (was?) a
major piece of junk.
Immediately after he came back again to continue bugging
everybody under the fake identity of Rick Schramm
"(e-mail address removed)".

Wrong again. Actually there was quite an outcry at my banishment (for
good reason.) Unlike your topic-challenged ramblings, I never bugged
anybody.
Pathetic. The same one that repeats here
again and again "get a real name". ROTFLOL

You've said this repeatedly and been told repeatedly that using an
alias to circumvent some drooling idiot of a moderator is not the same
thing.
 
