to learn jQuery if already using prototype


Thomas 'PointedEars' Lahn

VK said:
Bytecode is platform-independent, of course, because a Virtual Machine
interprets it. As I have said, in that sense at least JavaScript[tm],
and as it turns out JScript also, are compiled languages.

[...]

Please spare us your fantasies about what other people might think. Thank
you in advance.
Javascript is being stored and delivered to the engine in the raw text
source code format. The engine naturally compiles it to be able to use
it, so the engine works with compiled code - but Javascript is not a
compiled language.

Yes, it is. What you describe is called Just-In-Time (JIT) compilation.
Try to see the difference, if you can.

There is no difference at all, Often Wrong. One can even compile the same
source code into a file in bytecode format, and have the same JavaScript VM
execute that (as is possible on NES-compatible servers).
P.S. JScript.NET _is_ a compiled language.

As are JavaScript and JScript. As hard as it may be for you to understand,
compilation does not require a file on the filesystem as its result.
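
Incidentally, the language itself exposes the engine's compiler at run
time. A small illustration (nothing more than that):

var add = new Function('a', 'b', 'return a + b;');
// The Function constructor hands raw source text to the script engine,
// which must compile it on the spot before it can be executed.
add(2, 3); // 5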


PointedEars
 

Matt Kruse

Bytecode is platform-independent, of course, because a Virtual Machine
interprets it.  As I have said, in that sense at least JavaScript[tm],
and as it turns out JScript also, are compiled languages.
Javascript is being stored and delivered to the engine in the raw text
source code format. The engine naturally compiles it to be able to use
it, so the engine works with compiled code - but Javascript is not a
compiled language.

I hate to agree with VK, but I do.

The terms "compiled" and "interpreted" to refer to programming
languages are both vague and don't have exact meanings to begin with.
They are just labels.

Javascript is interpreted rather than compiled because the raw source
code is delivered to the end user. Of course it is turned into a
machine-readable form before execution - all languages are. Else the
term "interpreted language" would be meaningless.

Javascript is not pre-compiled into byte code which is then delivered
to the client's VM to execute (typically). Is there a standard
javascript bytecode definition that all javascript VMs will execute
identically? In contrast, a language like Java is considered to be a
"compiled" language because you can deliver the pre-compiled .class
files, even though they still require a VM to execute, but compiled
code is not the original source. The line there is even blurry because
you _can_ deliver the source and have it compiled "on the fly".

In the end, the discussion of the labels "compiled" and "interpreted"
is pointless because the reality of how it works is known. Especially
when the terms do not have "scientific" meanings and are open to
interpretation.

Matt Kruse
 

Thomas 'PointedEars' Lahn

Matt said:
Bytecode is platform-independent, of course, because a Virtual
Machine interprets it. As I have said, in that sense at least
JavaScript[tm], and as it turns out JScript also, are compiled
languages.
Javascript is being stored and delivered to the engine in the raw text
source code format. The engine naturally compiles it to be able to use
it, so the engine works with compiled code - but Javascript is not a
compiled language.

I hate to agree with VK, but I do.

Your loss.
Javascript is not pre-compiled into byte code which is then delivered to
the client's VM to execute (typically).

Typically (JavaScript[tm] and JScript would cover, say, 95% of the market of
ECMAScript-compliant script engines), it is. You appear to have overlooked
the quotations by their inventors.
Is there a standard javascript bytecode definition that all javascript
VMs will execute identically?

There isn't even *a* "javascript VM" to begin with.

All JavaScript[tm] engines will have to use SpiderMonkey or Rhino; for the
former, we now have its inventor's assertion that there is a bytecode
specification; for the latter, since it is Java-based, we can assume it has
one.

All JScript and JScript .NET engines would have to be that which Microsoft
provides with the Microsoft Script Engine (since it's closed source), so no
surprise there either.
The terms "compiled" and "interpreted" to refer to programming languages
are both vague and don't have exact meanings to begin with. They are just
labels.

This is simply wrong. RTFM.
[...] In the end, the discussion of the labels "compiled" and
"interpreted" is pointless because the reality of how it works is known.

Parse error.
Especially when the terms do not have "scientific" meanings and are open
to interpretation.

No, see above. But what matters here (if it matters at all; to remind you,
the cause of this subthread was a website that compared Prototype.js to Java
and jQuery to Ruby) is that JavaScript and JScript code is compiled *first*.
This does not apply to all languages that are finally interpreted.

What the both of you seem to overlook is that compilation and interpretation
can complement each other.


PointedEars
 

Richard Cornford

Matt said:
Wow, given your view of the jQuery dev team, I'm not sure if
that's even close to a compliment ;)

Well, you know that I like to call things the way that I see them ;-)

They need you a lot more than you need them.
If you're referring to this point:
http://groups.google.com/group/jquery-dev/msg/54b54712bd48ec83
then I can still find it.
<snip>

That is odd. I can see your post from my computer at home but still not
from work. I cannot believe that our firewall is capable of being
sufficiently subtle as to be censoring a single post in a thread (and
certainly not without messing the rest of the page up in the process).
It does have some very silly aspects to its configuration, like we
cannot view the MSDN page on the - responseXML - property of HTTP XML
request objects because its URL contains the character sequence made
from the last two letters of "response" and the first letter of "XML",
but that sort of thing should not come into this case.

Richard.
 

Richard Cornford

Andrew said:
Each individual on his own.

Maybe, but there are circumstances where the best advice possible is to
delete something and start again from scratch, but most individuals who
hear that advice don't regard it as constructive when they do.
Or, in other words: say what you want to say, and I'll
brush off anything I think is unwarranted.

Presumably you mean you will brush off anything that you regard as
unwarranted?
I'm not
setting conditions for prior restraint here.

Requiring what you get to be "constructive" is not a condition?
I never said anything of the sort. I said the minority need
to do more _persuading_.

OK. Why, what is in it for them?
You stated that these libraries were junk

I very much doubt that I did.
as though it were common knowledge.

If I had it would not be because it was common knowledge, but rather
because it was the case.
Clearly it isn't common knowledge.

There are lots of things that are true but are not common knowledge. And
that is even if you are not taking 'common knowledge' as referring to
what is commonly known by ordinary people (ordinary people mostly being
people who have no idea what javascript is in the first place, and
little interest in knowing).
I hold that any technology decision is a question of taste.

Decisions suggest an informed process of deciding. Otherwise we may be
dealing with no more than the accumulated outcome of sequences of random
influences, misconceptions and learnt incantations. If someone writes:-

<script type="javascript">
var url = " ... ";
...
document.write('<scr'+'ipt type="javascript" src="'+url+'"></scr'+'ipt>');
</script>

- there are things about that that are not a question of taste at all.
That the mark-up is invalid is an objective fact. That there are two
unnecessary concatenation operations is a fact, and that the apparent
justification for those additional concatenation operations has missed
the point is also a fact.

Some decisions to do things, or not to do things are not a question of
taste, but rather the consequences of understanding.
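
For reference, the same script insertion written without those mistakes
(a sketch; the only escape actually needed is the "<\/" that stops the
closing tag inside the string from ending the script element early):

<script type="text/javascript">
var url = " ... ";
document.write('<script type="text/javascript" src="' + url
    + '"><\/script>');
</script>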
There is no objective "better" in the sense of Ruby vs.
Python, or vi vs. emacs;

Maybe, but there is an objective "better" in the sense of using:-

if (elem == null) {
    dojo.raise("No element given to dojo.dom.setAttributeNS");
}

- in place of:-

if (
    elem == null ||
    ((elem == undefined) && (typeof elem == "undefined"))
) {
    dojo.raise("No element given to dojo.dom.setAttributeNS");
}

- because the latter is just silly in comparison to the former (as they
both have precisely the same outcome).
there is only the subjective "better" - whichever best
serves the user's own needs.

There seems to be an unfortunate tendency among web developers to lose
sight of who the "user" actually is. The user (for web developers) is the
poor sod looking at an alarming little grey box with yellow 'warning'
triangle just above their web browser's window that says "Your browser
does not support AJAX" and wondering what the hell they are expected to
do about it (get tickets for the next match or something?).
Naturally, this does _not_ mean that everything is relative,
or that it's not worth having passionate arguments thereabout.
I implied as much in my music analogy: friends argue among
themselves over which band is "better," but they all realize
that taste is the ultimate arbiter. These arguments become
tiresome only when people dig trenches and start speaking
in absolutes.


And I submit that is a matter of taste.

Thomas's memory is a matter of taste?
Bugs are bugs,

Not really. There are bugs and there are bugs. A typo in the middle of a
large block of code is something that can happen to anyone, and it could
also easily be missed by others reviewing that code. A glaring error in
something that experience would teach you to always double check and
also should be exposed in any reasonable testing is something else
entirely.
of course, and we welcome bug reports. But you've gone
further than that; you've inferred from "evidence" that
code in Prototype does not do what its author means for
it to do.

No, I said that the evidence was that Prototype.js was (at least in
November last year) only doing what was (apparently) expected by
coincidence; that it had not actually been programmed to do what it was
doing. I also implied that where such evidence existed it was reasonable
to question the understanding of javascript that informed all of the
design decisions that occurred prior to that code being written; such as
the underlying design approach and the resulting API.

(I have also pointed out that Prototype.js is incredibly slow at doing
pretty much anything complex)
Then write your own words.

I did.
That way they'll be _from the heart_.

No they would not. They would be from the head.
You know the point I'm trying to make.

Not really.

The word "censorship" doesn't come within miles of this
thread.

Well this is Usenet so there is no censorship.
I do not own a telecommunications company; I don't have
the means or authority to "censor" anyone.

You would not have the means to censor Usenet even if you did own a
telecommunications company.
I can only imagine the OP was interested in the free
exchange of ideas when he asked you why you thought
jQuery and Prototype were junk.

He (do you have any evidence that he is a 'he'?) did not ask me
anything.

That last sentence is the answer to such a FAQ.

It is already in the FAQ in as many words.
Even a link to that question and answer would be more
helpful than what has happened in this thread.

Not really. The OP is not asking for specific information on javascript,
and there is no code to post in relation to the question. The question
asked was along the lines of "having learnt something about Prototype.js
should I then spend some time learning something about JQuery". To
which the direct answer appears to have been "no" (if a little more
strongly/colourfully expressed). My answer, in as far as I answered the
question at all, was 'learn javascript and browser scripting first and
then you can make up your own mind'.
I mean that they weren't participants before their first post.

And they weren't human before they were conceived.
Many posters, I would venture, only come here when they
need help, and therefore aren't already familiar with the
quirks of the community.

There is no need to "venture" that, it is self-evidently true.
Please search this newsgroup for the terms "Prototype"

What are you expecting? You give a library the same name as a
significant aspect of the language it is written in and then cannot find
specific references to it in the archives of a newsgroup dedicated to
that language. It was a predictably bad choice of name.
and/or "jQuery" and see how quickly you find a well-summarized
critique of either library.

Who said finding that sort of thing out was going to be quick? I bet the
search would still turn out to be informative even if it could not be
instantaneous.

but redefine them for WebKit and IE because the String#replace
approach is much, much faster in these two browsers (but much,
much slower in FF and Opera).

Can you post a test-case that demonstrates that assertion?
Historically IE has been renowned for its lousy performance
with string manipulation, while Mozilla outperformed everyone
else in that area.

I don't have a test-case. The change was made one year ago by
Thomas Fuchs [1]. You're welcome to ask him, though I suspect
he'll punch me in the sternum for having dragged him into this.

He won't have to. I will just dismiss this as yet another
unsubstantiated rumour.
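
For what it's worth, a minimal timing harness of the sort such a claim
would need (a sketch only; the competing implementations are not shown
in this thread):

// Crude micro-benchmark: report how long n runs of fn take.
function time(label, fn, n) {
    var start = new Date().getTime();
    for (var i = 0; i < n; i++) { fn(); }
    return label + ': ' + (new Date().getTime() - start) + ' ms';
}

var s = new Array(1000).join('&lt;p&gt; &amp; &lt;q&gt; ');
time('replace chain', function() {
    s.replace(/&lt;/g, '<').replace(/&gt;/g, '>').replace(/&amp;/g, '&');
}, 1000);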

You haven't demonstrated that anything is baseless

How would you expect me to demonstrate the lack of any technical
foundation for UA string based browser sniffing? I can hardly point to
something that doesn't exist and say "there is the absence of any
technical foundation for all to see". Of course if there was any
technical foundation then that could be pointed at quite easily, but as
the navigator.userAgent string is a reflection of the HTTP User Agent
header then any such direction must lead to the definition of the header
in the HTTP specification, and that definition pretty much says that the
User Agent header is an arbitrary sequence of zero or more characters
that is not even required to be consistent from one request to the next
(i.e. that it is not specified as being a source of information at all).
or ineffective;

Does that need to be demonstrated (again)? It is known that web browsers
use User Agent headers that are indistinguishable form the default UA
header of IE, so how could it be effective to discriminate between
browsers using the UA string whenever two different browsers use UA
headers that are indistinguishable?
you've only revealed a different set of priorities.
You'd rather have 100% guaranteed behavior

I would certainly rather have consistent and predictable behaviour
before worrying about performance.
even if it meant a wildly-varying performance
graph across browsers.

Where is the evidence for "wildly-varying"? I don't think
escaping/unescaping methods are going to be used frequently enough for
their specific performance to matter that much at all. If you used
them internally, or they were fundamental to using the library in the
first place then their performance would be much more significant.
I'd rather have the reverse.

So you would not be certain what the code was going to do, but you would
know that whatever it did it would take about the same amount of time to
do it wherever it was running? I certainly do not have a taste for that
design philosophy.
The check-in is only one year old. It is Thomas's bug, but he
is no rookie,

Handsome is as handsome does. But that was not really my point. One of the
things that gets proposed as a justification for libraries of this sort
(a reason for their not being junk by virtue of what they are) is that
with many individuals contributing there are plenty of eyes looking at
the code to be able to find these sorts of things and fix them up front.
But if it takes me three seconds to find what nobody else had noticed
then it must be the case that there is nobody involved looking with my
eyes.
so I can only surmise that we all make silly mistakes
sometimes. Bad luck for him that he managed to stumble
upon your Shibboleth Bug(TM).

Bad luck for everyone else who manage to let it pass unnoticed.
We listen to criticism, we read bug reports, and we constantly
search for ways to improve the feedback loop.

That all sounds very 'marketing-speak'.
So does John Resig, by the way, so I'd suggest you file a bug
on jQuery's Trac about the "makeArray" mistake.

Why? Polishing the handrails on the Titanic may have made it more
appealing to look at but didn't change the rate at which it sank after
the design flaw coincided with the iceberg.

Richard.
 

Richard Cornford

Not obvious.

There's plenty of bugs in YUI.

Who is talking about bugs? Take this code from the dojo library:-

| if (
|     elem == null ||
|     ((elem == undefined) && (typeof elem == "undefined"))
| ) {
|     dojo.raise("No element given to dojo.dom.setAttributeNS");
| }

The rules for javascript dictate that whenever the -
(elem == undefined) - expression is evaluated (that is, whenever
- elem == null - is false) the result of the expression must be
false, and so the - (typeof elem == "undefined") - expression just
cannot ever be evaluated. The bottom line is that if the author of
that code had understood javascript when writing it the whole thing
would have been:-

if (elem == null) {
    dojo.raise("No element given to dojo.dom.setAttributeNS");
}

- or possibly:-

if (!elem) {
    dojo.raise("No element given to dojo.dom.setAttributeNS");
}

- as there should be no issues following from pushing other
primitive values that have falseness through the exception
throwing path as well as null and undefined.
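
The short-circuiting can be checked directly in any script console (a
quick illustration):

// null and undefined are loosely equal to each other and to nothing
// else, so (x == null) and (x == undefined) always agree:
null == undefined;   // true
0 == null;           // false
'' == undefined;     // false
// Hence whenever (elem == null) is false, (elem == undefined) is also
// false, and the && never reaches (typeof elem == "undefined").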

The first is not a bug; it does exactly what it was written to, and
does it reliably and consistently. But it is a stupid mistake on the
part of its 'programmer', and survived in the dojo source for long
enough to be observed because nobody involved with dojo knew enough
actual javascript to see that it was a stupid mistake and correct it.

YUI may contain bugs but it does not contain this type of stupid
mistake because at least one person (and it only takes one) knows
javascript well enough to be able to see this type of thing and
stop it (presumably at source by ensuring any potential
transgressors become better informed about the language they are
using).

Now JQuery contains the infamous - ( typeof array != "array" )-
stupid mistake, and Prototype.js (at least version 1.6 (which is
not that long ago)) contained the attempt to conditionally employ
function declarations that only worked by coincidence. Neither
of those are bugs as such (they don't stop the respective code
from 'working' (at least to the limited degree to which it is
designed to 'work')), but they are precisely the type of stupid
mistake that follows from code authors having a minimal
understanding of the language they are using. And where those
authors are part of a collective they don't speak for the
knowledge of the specific author responsible but instead
indicate the level of understanding of the _most_
knowledgeable person involved.

A fix for the bug that was demonstrated seems to be
simply putting the &amp; last.

String.prototype.unescapeHTML = function() {
    return this.replace(/&lt;/g, '<')
               .replace(/&gt;/g, '>')
               .replace(/&amp;/g, '&');
};

That would need to be tested out though.

No it does not need to be tested, it is correct. The general
rule is that the character significant in escaping needs to
be processed first when escaping and last when unescaping.

Absolutely. It is a simple bug, and a mistake that in my
experience is made by nearly every programmer who comes to
the issues of encoding/escaping for the web for the first
time (pretty much no matter what their previous level of
experience in other areas). It is something that I have
learnt to double check, habitually, and that is the reason
that I spotted it so quickly.
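
For illustration, a matching pair that follows that rule (a sketch, not
code from any of the libraries under discussion):

// Escaping: '&' must be handled first, or the ampersands produced by
// the later replacements would themselves be escaped again.
String.prototype.escapeHTML = function() {
    return this.replace(/&/g, '&amp;')
               .replace(/</g, '&lt;')
               .replace(/>/g, '&gt;');
};

// Unescaping: '&amp;' must be handled last, or an input such as
// '&amp;lt;' would wrongly come out as '<' instead of '&lt;'.
String.prototype.unescapeHTML = function() {
    return this.replace(/&lt;/g, '<')
               .replace(/&gt;/g, '>')
               .replace(/&amp;/g, '&');
};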

Richard.
 

Richard Cornford

kangax said:
On Apr 20, 5:46 pm, Richard Cornford wrote:

Is it possible to see the above mentioned web applications?

An invisible web application would not be very easy to use (or sell).

But you mean is it possible for you to see them. If you can convince our
marketing department that you are a potential customer they will happily
demonstrate it to you (in your own offices, anywhere on the planet, and
at your convenience). Their question will be "how much property do you
own/manage?", but if the answers is much less than 1000 building they
are probably not going to be interested.

Richard.
 

Lasse Reichstein Nielsen

Thomas 'PointedEars' Lahn said:
Matt Kruse wrote:

This is simply wrong.

No, it's not.

Link?

Compiling and interpreting are processes. Applying the names to
languages as a whole suggests (but is not formally defined) that the
language is inherently using that process. That is not how most
languages are specified. Instead they are specified at the semantic
level, allowing both interpretation and compilation.

If instead we look at the formal definitions of an Interpreter and a
Compiler, then it becomes clearer how to, at least, distinguish the
processes.

An interpreter is defined by *two* things: The interpreted (source)
language, and the implementation language.
In order to use it, you run the implementation language and give
the source language program as input. The application of the
interpreter to the program, Interpreter(source), must have the same
semantics in the implementation language, as the source program has
in the interpreted language.

A compiler is defined by *three* things: The source language, the
target language, and the implementation language.
You run the implementation language with the source program as input
and receive a program in the target language as output.
The target program must have the same semantics in the target language
as the source program has in the source language.


The typical meaning of a "compiled language" is one where the source
program is compiled to another form once and then that form is stored
and run several times. I.e., where the process of compilation is
separate from the process of interpreting the compiled form.
No, see above. But what matters here (if it matters at all; to remind you,
the cause of this subthread was a website that compared Prototype.js to Java
and jQuery to Ruby) is that JavaScript and JScript code is compiled *first*.

The combination of first compiling and then executing implements an
interpreter. Nothing prevents an interpreter from using a compiler as
a component, but its behavior is still clearly that of an interpreter:
it executes the program.
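
To make that concrete, a sketch (the two inner functions are
hypothetical names, not any real engine's API):

// An "interpreter" that happens to contain a compiler stage.
function interpret(source, input) {
    var bytecode = compileToBytecode(source); // internal compilation
    return runBytecode(bytecode, input);      // immediate execution
}
// Seen from outside, interpret(source, input) simply executes the
// program on the input - the behaviour of an interpreter, regardless
// of the compiler hidden inside it.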

That means that Javascript used on the web is always interpreted.
This does not apply to all languages that are finally interpreted.

What the both of you seem to overlook is that compilation and interpretation
can complement each other.

Absolutely. But if "compiled language" and "interpreted language" are to
have any meaning (which I doubt they have), the most significant process
should be the deciding factor, and Javascript is typically interpreted,
whether that interpreter contains an internal compiler or not.

I know there are exceptions where Javascript is compiled offline and
transmitted in compiled form, but that only strengthens the point that
"compiled language" and "interpreted language" are meaningless.

/L
 

Thadeu de Paula

I am learning more and more Prototype and Script.aculo.us and got the
Bungee book... and wonder if I should get some books on jQuery (jQuery
in Action, and Learning jQuery) and start learning about it too?

Once I saw a website comparing Prototype to Java and jQuery to Ruby...
but now that I read more and more about Prototype, it is said that
Prototype actually came from Ruby on Rails development and the creator
of Prototype created it with making Prototype work like Ruby in mind.
Is jQuery also like Ruby? Thanks so much for your help.

They are not bad... if you don't want to use javascript. The better
option is to learn JS. Then, seeing its code, you'll know what it does.

But imagine...
Using Prototype, jQuery etc. you'll need to load these scripts in all
your pages... whether you use all of their functions or not. They create
variables to store long built-in JS properties. This is unnecessary.

The better option is to code your own objects and functions and load
only what is necessary to do specific things. And... about code... if
you use many document.objasdjhk() calls and want to keep the code
smaller... compress it with a javascript compressor and keep the
original code for editing.

Prototype and jQuery are "good" works from people who know JS. But they
may not be the best way of implementing JS in your pages.
 

beegee

The combination of first compiling and then executing implements an
interpreter. Nothing prevents an interpreter from using a compiler as
a component, but its behavior is still clearly that of an interpreter:
it executes the program.

Yes, that is a very clear definition. Now, what is the difference
between an interpreter and a virtual machine? PointedEars suggests
they are the same thing. Certainly both execute a program and both
use a compiler as a component.

Instinctively, I know there is a difference between languages such as
Javascript and Ruby on one hand, and Java and .NET C# on the other. There is
a trade off of speed for expression that I at first attributed to
compilation vs. interpretation. Recently, I've noticed that C# has
added lambdas and a variant (typeless) type to the latest version.
The syntax is kind of a nightmare compared to Javascript but it means
that the VM is doing the same kind of "interpretation" that the Javascript
interpreter is doing. So maybe the difference between these languages
is that one type is oriented toward compilation and the other is
oriented toward interpretation even though they have evolved towards
each other.

Bob
 

Thomas 'PointedEars' Lahn

beegee said:
Now, what is the difference between an interpreter and a virtual machine?
PointedEars suggests they are the same thing. Certainly both execute a
program and both use a compiler as a component.

Not necessarily.
Instinctively, I know there is a difference between languages such as
Javascript and Ruby on one hand, and Java and .NET C# on the other.

So at least partially you would be let down by your instincts. At least as
for JavaScript, JScript, JScript .NET, and Java, there is no difference
regarding this as should have been clear to you by now.


PointedEars
 

Joost Diepenmaat

beegee said:
Recently, I've noticed that C# has
added lambdas and a variant (typeless) type to the latest version.
The syntax is kind of a nightmare compared to Javascript but it means
that the VM is doing the same kind of "interpretation" that the Javascript
interpreter is doing. So maybe the difference between these languages
is that one type is oriented toward compilation and the other is
oriented toward interpretation even though they have evolved towards
each other.

This is nothing new. Take a look at Common Lisp for a language that has
both extreme expressiveness and compilers that can produce very fast
code. Anyway the border between interpreted and compiled implementations
is very fuzzy (unless you just mean that you can transform the source
code to some pre-processed byte stream or stand-alone executable, which
is really quite easy and doesn't really mean much). Many languages that are
typically viewed as interpreted (perl, for instance) can do that.

This whole discussion is pretty meaningless.
 

dhtml

Who is talking about bugs? Take this code from the dojo library:-

| if (
|     elem == null ||
|     ((elem == undefined) && (typeof elem == "undefined"))
| ) {
|     dojo.raise("No element given to dojo.dom.setAttributeNS");
| }

The rules for javascript dictate that whenever the -
(elem == undefined) - expression is evaluated (that is, whenever
- elem == null - is false) the result of the expression must be
false, and so the - (typeof elem == "undefined") - expression just
cannot ever be evaluated. The bottom line is that if the author of
that code had understood javascript when writing it the whole thing
would have been:-

if (elem == null) {
    dojo.raise("No element given to dojo.dom.setAttributeNS");
}

- or possibly:-

if (!elem) {
    dojo.raise("No element given to dojo.dom.setAttributeNS");
}

- as there should be no issues following from pushing other
primitive values that have falseness through the exception
throwing path as well as null and undefined.

The first is not a bug; it does exactly what it was written to, and
does it reliably and consistently. But it is a stupid mistake on the
part of its 'programmer', and survived in the dojo source for long
enough to be observed because nobody involved with dojo knew enough
actual javascript to see that it was a stupid mistake and correct it.

YUI may contain bugs but it does not contain this type of stupid
mistake because at least one person (and it only takes one) knows
javascript well enough to be able to see this type of thing and
stop it (presumably at source by ensuring any potential
transgressors become better informed about the language they are
using).

I really didn't want to be goaded into posting a dumb-and-dumber
competition with other people's code, but you've left me with not very
good choices.


YUI has some pretty bad/obvious bugs in crucial places: augmentObject,
hasOwnProperty, Dom.contains. hasOwnProperty:-

hasOwnProperty: function(o, prop) {
    if (Object.prototype.hasOwnProperty) {
        return o.hasOwnProperty(prop);
    }

    return !YAHOO.lang.isUndefined(o[prop]) &&
        o.constructor.prototype[prop] !== o[prop];
},

- Which will throw errors in IE when - o - is a host object and
return wrong results in Opera when - o - is window.

augmentObject:-

augmentObject: function(r, s) {
    if (!s || !r) {
        throw new Error("Absorb failed, verify dependencies.");
    }
    var a = arguments, i, p, override = a[2];
    if (override && override !== true) {
        // only absorb the specified properties
        for (i = 2; i < a.length; i = i + 1) {
            r[a[i]] = s[a[i]];
        }
    } else {
        // take everything, overwriting only if the third parameter is true
        for (p in s) {
            if (override || !r[p]) {
                r[p] = s[p];
            }
        }

        YAHOO.lang._IEEnumFix(r, s);
    }
},


It is a questionable strategy to do object augmentation on the prototype
chain of the supplier. It would be better to use hasOwnProperty to
filter out the stuff in the supplier's prototype chain. Next, if the
receiver has a property p with a false-ish value, then the override
flag is irrelevant.
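
A sketch of that filtering, borrowing the method from Object.prototype
so that it also works when the supplier is a host object (hypothetical
code, not a proposed patch):

for (p in s) {
    if (Object.prototype.hasOwnProperty.call(s, p)) {
        r[p] = s[p]; // copy own properties only, skip the prototype chain
    }
}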

Dojo calls object augmentation "extend" which to me seems to be
misleading.

isAncestor: function(haystack, needle) {
    haystack = Y.Dom.get(haystack);
    needle = Y.Dom.get(needle);

    if (!haystack || !needle) {
        return false;
    }

    if (haystack.contains && needle.nodeType && !isSafari) {
        // safari contains is broken
        YAHOO.log('isAncestor returning ' + haystack.contains(needle),
                'info', 'Dom');
        return haystack.contains(needle);
    } else if (haystack.compareDocumentPosition && needle.nodeType) {
        YAHOO.log('isAncestor returning ' +
                !!(haystack.compareDocumentPosition(needle) & 16),
                'info', 'Dom');
        return !!(haystack.compareDocumentPosition(needle) & 16);
    } else if (needle.nodeType) {
        // fallback to crawling up (safari)
        return !!this.getAncestorBy(needle, function(el) {
            return el == haystack;
        });
    }
    YAHOO.log('isAncestor failed; most likely needle is not an HTMLElement',
            'error', 'Dom');
    return false;
}

This would return inconsistent results, depending on the browser.
For example:
YAHOO.util.Dom.isAncestor(document.body, document.body);

Though the last is not as obvious a mistake as the others.

There are considerably questionable practices in the Event library.

The connection manager is horribly designed. The fact that it attempts
to do form serialization within itself, is just a horrible decision.
If the author had been forced to write a test for that, he'd probably
have moved that form serialization code somewhere else, to make it
easier to test (easier coverage verification).

[snip]
And where those
authors are part of a collective they don't speak for the
knowledge of the specific author responsible but instead
indicate the level of understanding of the _most_
knowledgeable person involved.

This can lead to blocking code reviews and scapegoating. With a test
driven approach, the only thing to blame is the process, and that's
fixable (besides the fact that the test never has any hard feelings).

Without tests, you get things like blood commits and code freezes.
Some libraries actually do code freezes. And they have at least one
expert. And they have bugs. Dumb ones.
No it does not need to be tested, it is correct. The general
rule is that the character significant in escaping needs to
be processed first when escaping and last when unescaping.

It addresses the problem that was demonstrated in your example. It
does not, however, take into consideration the possibility that - this
- could contain *any* other entities.

If the need to handle - &quot; - or - &amp; - (which is also '&') got
added in later, they'd need to be reviewed by the one, sole expert, to
make sure the person who wrote the amending code didn't make a rookie
mistake. A test could clearly prove it worked.

var s = "&quot;".replace(/&quot;/g, '"');

So the fix addresses only one concern.

Another consideration is that String.prototype.escapeHTML should be
stable, but if there's a bug, and dependencies on that bug, then the
fixing of the bug becomes complicated. It may very well be the case
that some novice programmer used escapeHTML, found that it didn't work
right, made some adjustments in his implementation to compensate for
that bug. In essence, his implementation is now depending on that bug.
This is where I see adding things to built-in prototypes to be risky.

If the programmer had made a method, then that method could always be
deprecated in a future release, if found to be problematic.

So, to sum it up, my recommendations:
1) write a test
2) don't put the methods on String.prototype because they might change
later.
Absolutely. It is a simple bug, and a mistake that in my
experience is made by nearly every programmer who comes to
the issues of encoding/escaping for the web for the first
time (pretty much no matter what their previous level of
experience in other areas). It is something that I have
learnt to double check, habitually, and that is the reason
that I spotted it so quickly.
That was my first time writing an unescape function in Javascript. I
think I might have written one in Java several years ago in a response
filter exercise, though.

If I had to write something more comprehensive to account for more
entities, I'd probably consider looking into inverting control to the
browser's parser using a combination of newDiv.innerHTML and
newDiv.textContent|innerText

document.body.textContent = "&";
document.body.innerHTML; // &amp;

document.body.innerHTML = "&quot;"
document.body.textContent; // "

Obviously not using document.body, but a newly created node. I would
probably write some tests for that, including the cases you posted,
make sure they all fail, then write out some code (the code could be
changed in the future, since there are tests).
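
A sketch of that approach with a created node (assuming textContent
support, with innerText as the fallback for older IE):

// Let the browser's own parser decode the entities.
function unescapeHTML(s) {
    var div = document.createElement('div');
    div.innerHTML = s; // the parser decodes every entity it knows
    return typeof div.textContent == 'string' ?
            div.textContent : div.innerText;
}

unescapeHTML('&amp;lt;'); // '&lt;' - the double-escaped case comes out right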

Garrett
 

Andrew Dupont

Maybe, but there are circumstances where the best advice possible is to
delete something and start again from scratch, but most individuals who
hear that advice don't regard it as constructive when they do.


Presumably you mean you will brush off anything that you regard as
unwarranted?

How is that different?
Requiring what you get to be "constructive" is not a condition?

You're hyper-parsing my statements. I think the response the OP got
_isn't constructive_; clearly you disagree. I think some of the
criticism of Prototype that occurs in this newsgroup _isn't
constructive_, but some _is_. I am saying that I will disregard things
that I don't feel are constructive. Therefore there is no
"requirement" of any sort. Jesus.
OK. Why, what is in it for them?

Nothing. Go eat a taco if you like.
I very much doubt that I did.

Somewhere along the way I got you mixed up with Thomas. For that I
apologize.
There are lots of things that are true but are not common knowledge.

I'm not arguing whether it's true or not. If I say "Martin Scorsese
can't direct for shit," I expect people around me to look at me funny,
because whether my statement is true or not it goes against consensus.
Therefore I might feel a burden to _elaborate_.
Maybe, but there is an objective "better" in the sense of using:-

if (elem == null) {
    dojo.raise("No element given to dojo.dom.setAttributeNS");
}

- in place of:-

if (
    elem == null ||
    ((elem == undefined) && (typeof elem == "undefined"))
) {
    dojo.raise("No element given to dojo.dom.setAttributeNS");
}

And you seem to say that a small handful of silly choices in a
framework mean the entire thing is worthless.

Thomas's memory is a matter of taste?

No, his assessment is a matter of taste. The part that connects his
memory and his opinion.
Not really. There are bugs and there are bugs. A typo in the middle of a
large block of code is something that can happen to anyone, and it could
also easily be missed by others reviewing that code. A glaring error in
something that experience would teach you to always double check and
also should be exposed in any reasonable testing is something else
entirely.

We disagree on the degree to which this is important. I think we've
gone as far as we can on this point.
No, I said that the evidence was that Prototype.js was (at least in
November last year) only doing what was (apparently) expected by
coincidence; that it had not actually been programmed to do what it was
doing. I also implied that where such evidence existed it was reasonable
to question the understanding of javascript that informed all of the
design decisions that occurred prior to that code being written; such as
the underlying design approach and the resulting API.
(I have also pointed out that Prototype.js is incredibly slow at doing
pretty much anything complex)

That's a very vague statement. A few examples would be greatly
appreciated. The words "slow," "anything," and "complex" are relative
to the individual, and obviously must be taken into account when
making a technology decision. jQuery's central API focus — fetching
nodes by CSS selector — means that a line of jQuery code is often much
slower than the equivalent, non-framework-aided code. Many people use
it anyway because they deem it to be worth the trade-off.

I did.


No they would not. They would be from the head.


Not really.

Apparently I need to stop using sarcasm. I also need to stop speaking
abstractly, aside from basic declaratives (e.g., "Your tone is too
harsh."). You are reading the most literal of meanings into every
single word I write.
Well this is Usenet so there is no censorship.


You would not have the means to censor Usenet even if you did own a
telecommunications company.

Case in point. I don't know why you think I was trying to censor
conversation in the first place. My point is that I can't (and don't
want to) censor anything.
And they weren't human before they were conceived.

Needlessly argumentative and willfully dense to the point I'm making.
There is no need to "venture" that, it is self-evidently true.



What are you expecting? You give a library the same name as a
significant aspect of the language it is written in and then cannot find
specific references to it in the archives of a newsgroup dedicated to
that language. It was a predictably bad choice of name.

Again, argumentative. The fact it's a bad name for a library (which I
agree with) is unrelated to the point I am making.
Who said finding that sort of thing out was going to be quick? I bet the
search would still turn out to be informative even if it could not be
instantaneous.

This is like saying I ought to read the entirety of Donald Knuth's
published works before I write a simple algorithm. You may be right in
your "bet," but that's not how a user in need of help is going to
_behave_, so what's the use of pretending otherwise?
How would you expect me to demonstrate the lack of any technical
foundation for UA string based browser sniffing? I can hardly point to
something that doesn't exist and say "there is the absence of any
technical foundation for all to see".  Of course if there was any
technical foundation then that could be pointed at quite easily, but as
the navigator.userAgent string is a reflection of the HTTP User Agent
header then any such direction must lead to the definition of the header
in the HTTP specification, and that definition pretty much says that the
User Agent header is an arbitrary sequence of zero or more characters
that is not even required to be consistent from one request to the next
(i.e. that it is not specified as being a source of information at all).


Does that need to be demonstrated (again)? It is known that web browsers
use User Agent headers that are indistinguishable from the default UA
header of IE, so how could it be effective to discriminate between
browsers using the UA string whenever two different browsers use UA
headers that are indistinguishable?

First of all: we don't sniff the UA string to detect IE. We sniff to
distinguish between other browsers (e.g., between Gecko and WebKit),
but we use a different heuristic for IE.

Second: your gripe seems to be that we use any heuristic at all — that
it's not possible to detect differences in UAs with 100% accuracy.
99.99% accuracy is good enough for me. I'm not insisting that you must
agree; I'm only asking not to be regarded as a savage.
So you would not be certain what the code was going to do, but you would
know that whatever it did it would take about the same amount of time to
do it wherever it was running? I certainly do not have a taste for that
design philosophy.

No, I would rely on the unit tests to demonstrate a consistent
behavior. It doesn't give me 100% certainty because I can't test every
string on earth. (In this case, obviously, we didn't test enough
strings.)
Handsome is as handsome does. But that was not really my point. One of the
things that gets proposed as a justification for libraries of this sort
(a reason for their not being junk by virtue of what they are) is that
with many individuals contributing there are plenty of eyes looking at
the code to be able to find these sorts of things and fix them up front.
But if it takes me three seconds to find what nobody else had noticed
then it must be the case that there is nobody involved looking with my
eyes.

Again: someone else did notice it. But it did go unfixed for about a
year, and I find that disappointing and unusual. We agree on the
former but not the latter.
That all sounds very 'marketing-speak'.

Again, you're being oddly argumentative. I don't care how it sounds to
you.
Why? Polishing the handrails on the Titanic may have made it more
appealing to look at but didn't change the rate at which it sank after
the design flaw coincided with the iceberg.

Are you sure? The bug submission screen has a gigantic text box in
which you can make any sort of condescending judgment you like. How
can you resist?

Cheers,
Andrew
 

Peter Michaux

[snip]
People here will keep complaining about the shoddy quality of
libraries while that quality, slowly, increases.

I keep wondering why the process is so slow. The cores of all the
major libraries like Prototype.js, jQuery, etc are all relatively
small (~a few thousand lines). In a month I'm sure someone could go
through these libraries in great detail and find and fix many
problems.

One reason I can think of is that the library granularity, packaging and
API are not of interest to experts who would know how to fix the
internals. That is, even if these libraries were technically perfect
the experts that complain about them still wouldn't use them. I think
this is primarily true for Prototype.js because augmenting objects you
don't own is something almost everyone stops doing very early on. I
don't know how the Prototype.js folks can find this practice
acceptable as it has burned them quite a few times.
This goes on until
the time when libraries have won, and anybody not using them is
putting themselves at a disadvantage.

I think that time has already come. It seems most c.l.j regulars who
complain about the mainstream libraries do maintain their own
libraries in various forms. Some of the various forms have interesting
APIs and priorities that are quite different than popular libraries.

I think the time will come very soon that maintaining one's own
library may not be feasible because the size of the library will be
too big. The boss will want big fancy widgets faster than before
because someone using a library can produce them quickly even though
they may have known bugs or have bloated code with large unused
portions being served to the client.

Peter
 

VK

I keep wondering why the process is so slow. The cores of all the
major libraries like Prototype.js, jQuery, etc are all relatively
small (~a few thousand lines). In a month I'm sure someone could go
through these libraries in great detail and find and fix many
problems.

Easy to say - really hard to implement. It may take a couple of hours
to "streamline" some code according to a programmer's idea of what is
best. But such freedom of action is available only to a new player on the
market. For a library long used in serious commercial solutions the main
priority is not "beauty" or even effectiveness of code, these are all
secondary matters. The main mandatory priority is backward
compatibility - not only with the previous versions of the library
itself, but also with all current solutions using this library.
Sometimes it locks the possibility to update a segment even if it is
clearly ineffective or wrong: because some big solutions are using
workarounds dependent on this particular segment structure.
This is the main rule and trend of commercial programming: long
used software has an established brand and customer base, but is unable
to be very flexible due to backward compatibility requirements. A new
player has to fight its way through: but being free of current
commercial obligations it can be much more flexible and dynamic, the
better to satisfy new customers. In case of success it gets its own
market share and its own commercial responsibility, thus losing the
initial advantage of flexibility - and the process starts over.
have bloated code with large unused
portions being served to the client.

This kind of complaint, often seen on c.l.j., is really strange. OOP
by itself doesn't provide easy mechanics for "per chunk" code usage.
This is the generic trade-off of OOP: it has high maintainability but
low modularity. In order to use someMethod from library X one needs
to include all of someMethod's dependencies as well: which often means
the whole inheritance chain up to Object. With the common namespace protection
pattern where the whole library is one global object one would need to
write an AI enforced compiler to get only the needed parts and to get
them back together properly. And still in the majority of cases it
will result in copying the entire inheritance chain - so such a compiler
would yield nothing but an error-prone loss of time. OOP library usage is
per library, not per method based. I do not understand why the generic
feature of the OOP paradigm is used to criticize Javascript libraries
alone as if it were some exclusive Javascript defect. In OOP the
regular solution to the problem is having some commonly agreed core
libraries guaranteed to be present and then developing one's own libraries
as extra layer(s) atop the core. Javascript is slowly moving in this
direction. The first necessary step was to let the market clean up
the initial anarchy of N different libraries on each corner. This step
is pretty much accomplished as of this year, with Prototype being an
industry standard and jQuery a second industry-standard-compliant
library. There are also the MooTools and dojo frameworks but they are
operating on a different segment of the market: their purpose is to
move onto the client as much logic as the client can take without
freezing its interface.
 

RobG

Easy to say - really hard to implement. It may take a couple of hours
to "streamline" some code according to a programmer's idea of what is
best. But such freedom of action is available only to a new player on the
market. For a library long used in serious commercial solutions the main
priority is not "beauty" or even effectiveness of code, these are all
secondary matters. The main mandatory priority is backward
compatibility - not only with the previous versions of the library
itself, but also with all current solutions using this library.

By that criterion, Prototype.js is a poor choice. The assertion
doesn't stack up anyway - a site that has been developed with a
particular version of a library is under no compulsion to change to
newer versions as they become available.

Anyway, I understood Peter's comment to be in regard to fixing bugs
and poor design in the actual code, not to change the API.

Sometimes it locks the possibility to update a segment even if it is
clearly ineffective or wrong: because some big solutions are using
workarounds dependent on this particular segment structure.

Anyone who writes code that is dependent on aberrant behaviour
deserves what they get. I see no evidence in Prototype.js that they
refuse to fix bugs because it will cause previous versions to break if
the new version is substituted (noting that there is no compulsion to
use newer versions anyway).
This is the main rule and trend of commercial programming: long
used software has an established brand and customer base, but is unable
to be very flexible due to backward compatibility requirements.

Provide a single example of where the authors of Prototype.js have
refused to fix a bug because it will break backward compatibility.


[...]
This kind of complaint, often seen on c.l.j., is really strange. OOP
by itself doesn't provide easy mechanics for "per chunk" code usage.

Rubbish. The fundamental design of a library can be extremely modular
if that is the designer's choice. It just happens that some popular
libraries are not designed to be modular.

With the common namespace protection
pattern where the whole library is one global object one would need to
write an AI enforced compiler to get only the needed parts and to get
them back together properly.

Not at all - it has been shown here that using:

var XXLIB = {
    fnOne: function(){...},
    fnTwo: function(){...},
    ...
};

provides no more (and possibly less) of a "name space" than the
effectively equivalent:

function XXLIB_fnOne(){...}
function XXLIB_fnTwo(){...}

however the former pattern is used more frequently as it seems more OO
than the latter. Neither pattern forces any kind of internal
dependency. You should try to track down the various ways of creating
an array in Prototype.js.

String.prototype.toArray is the trivial and limited:

toArray: function() {
    return this.split('');
},

the $A function uses the bog standard:

function $A(iterable) {
    if (!iterable) return [];
    if (iterable.toArray) return iterable.toArray();
    var length = iterable.length || 0, results = new Array(length);
    while (length--) results[length] = iterable[length];
    return results;
}

however Enumerable.toArray follows a torturous route through map, Hash
and several other parts of the library. So what does that tell you
about the author's intentions to modularise the code? Clearly it
wasn't a priority (which isn't necessarily a criticism, it's a
statement of fact).

Saying it is impossible to write modular OO code just because a few
popular libraries aren't modular is not a particularly convincing
argument.

And still in the majority of cases it
will result in copying the entire inheritance chain - so such a compiler
would yield nothing but an error-prone loss of time.

There is no reason why a library must be based on an inheritance
chain, nor does that approach necessarily make modularisation more
difficult. Prototype.js takes the approach of extending nearly all
the built-in objects other than Object, however that doesn't
necessarily make one part of the code dependent on another - it is a
consequence of how the library has been written.

OOP library usage is
per library, not per method based.

Continually repeating the same statement does not make it so.
Internal dependencies are not necessarily a fundamental feature of OO
programming per se - the reverse *should* be the norm.
I do not understand why the generic
feature of the OOP paradigm is used to criticize Javascript libraries
alone as if it were some exclusive Javascript defect.

Because it isn't a fundamental "feature" of OO design.
In OOP the
regular solution to the problem is having some commonly agreed core
libraries guaranteed to be present and then developing one's own libraries
as extra layer(s) atop the core.

The same old assertion.
Javascript is slowly moving in this
direction.

It is?
The first necessary step was to let the market clean up
the initial anarchy of N different libraries on each corner. This step
is pretty much accomplished as of this year, with Prototype being an
industry standard and jQuery a second industry-standard-compliant
library.

"Industry standard compliant library". That is there any such thing
as "industry standard" in client-side browser scripting?
 

VK

it has been shown here that using:

var XXLIB = {
    fnOne: function(){...},
    fnTwo: function(){...},
    ...
};

provides no more (and possibly less) of a "name space" than the
effectively equivalent:

function XXLIB_fnOne(){...}
function XXLIB_fnTwo(){...}

The latter ("Macromedia notation") is even more efficient, at least
for JScript, where DispIDs are not reusable, so any lookup chain
abbreviation brings better performance. Alas, the good ol' "Macromedia
notation" is currently a victim of programming fashion. Namely, it is
"out of fashion".
Continually repeating the same statement does not make it so.
Internal dependencies are not necessarily a fundamental feature of OO
programming per se - the reverse *should* be the norm.

Possibly we are talking about different OOP ideas. The one proposed in
the conventional CS departments assumes classes created on the basis of
other classes (superclasses) where the choice of the superclass to
extend is based on the required minimum of augmentation to receive new
class with needed features. This way the modularity of a particular
class depends solely and exclusively on the position of such class in
the inheritance chain. If some class extends Object directly then it
is rather simple to include it directly in some other block. With a
class being on the top or even middle of the chain its inclusion also
requires the inclusion of the whole underlying chain segment. Because no
educated guess can be a priori made about the position of the class X
in library Y, OO modularity overall is low - yet its maintainability
is high, as I said. It is possible to imagine (and to make) a library
where each and every class directly extends Object, no matter how
close some classes would be to each other. Just don't call it an OO-based
library then.
The same old assertion.

"The common knowledge basics mentioning" would be more appropriate :)
Again - unless we are talking about different OO ideas.
It is?
Yep.


"Industry standard compliant library". That is there any such thing
as "industry standard" in client-side browser scripting?

Of course. Say - just a small sample - make a library where the $
identifier is used for your own purposes not related to DOM ID
lookup. Now try to sell it without fixing it. Report the results. ;-)
 

Peter Michaux

I agree there is no difference in the namespace protection gained by
either solution above. I've brought this up occasionally on c.l.js.
The response has been that there is no namespace protection difference,
but there is a performance difference. (There may be a slow hashing algorithm in that
browser?) Richard Cornford wrote that at least one browser does not do
well when there are many global objects. I believe it is somewhere in
this thread

<URL: http://groups.google.com/group/comp..._frm/thread/494e1757fa51fe3f/a504c64b42db8c8d>

Richard seems to like a third option which I think works like this

var localFnOne = XXLIB('fnOne');

which gives the library a chance to "build" the fnOne function. This
also seems to encourage making local copies inside a function of
library functions which is faster when the function calling the
library runs. These local copies, however, do seem to encourage early
binding. That is the library "fnOne" function cannot be redefined
unless there is special effort (not that difficult) in the library
design to allow for that. I've thought about Richard's system quite a
bit and haven't thought of a compelling advantage that the earlier two
versions don't have.
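
For what it's worth, a sketch of what such a lookup/builder function
might look like (entirely hypothetical, not Richard's actual code):

var XXLIB = (function() {
    // Private registry; the builder could instead choose between
    // alternative implementations here, e.g. by feature detection.
    var registry = {
        fnOne: function() { /* ... */ },
        fnTwo: function() { /* ... */ }
    };
    return function(name) {
        return registry[name];
    };
})();

var localFnOne = XXLIB('fnOne'); // a local copy, cheap to call later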
The latter ("Macromedia notation") is even more efficient, at least
for JScript, where DispIDs are not reusable, so any lookup chain
abbreviation brings better performance. Alas, the good ol' "Macromedia
notation" is currently a victim of programming fashion. Namely, it is
"out of fashion".

As I wrote above, I have been told that some browser(s) are slower with
many global symbols. I haven't verified that myself. I much prefer the
idea of the Macromedia style as I don't use the underscore, and it
could easily be reserved for the concept of namespacing, while the dot
could be saved for when a conceptually real OOP object with mutable
state is involved.

One thing I don't like about using a dot for namespacing is someone
might use "this" in one of the functions to refer to the namespace
object. That means local copies cannot be made trivially. For example,

var XXLIB = {
    fnOne: function(){...},
    fnTwo: function(){ this.fnOne(); },
    // ...
};

and then in local code

var fnTwo = XXLIB.fnTwo;

requires using apply

fnTwo.apply(XXLIB, [])


This would be difficult to make necessary with the Macromedia solution
or with Richard's solution.
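
For comparison, a sketch of the same pair in the flat "Macromedia
notation", where no - this - binding is involved and a plain local copy
behaves identically (hypothetical names):

function XXLIB_fnOne() { /* ... */ }
function XXLIB_fnTwo() { XXLIB_fnOne(); } // direct call, no - this -

var fnTwo = XXLIB_fnTwo;
fnTwo(); // works without apply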

Peter
 

VK

Richard Cornford wrote that at least one browser does not do
well when there are many global objects. I believe it is somewhere in
this thread
<URL:http://groups.google.com/group/comp.lang.javascript/browse_frm/thread...>

There is nothing obvious about this problem in the linked thread.
Maybe I looked at the wrong place? Overall, name me a browser that
would _increase_ performance with more global vars created :)

At the same time I am not aware of a browser that would be
_particularly_ bad with numerous global vars - up to the point of an
obvious productivity decrease in comparison with other browsers.

At the same time long lookup chains a.b.c.d.e.f etc. do impact
noticeably at least one browser with non-reusable DispIDs - IE/
JScript.
While preparing my SVL library (Superimposed Vector Language,
a layer interface for SVG+VML) for sale, I couldn't get satisfactorily
smooth rotation of complex 3D shapes on 1.x GHz machines, which was not
acceptable. Then I just rebuilt the entire library in the old top-level
based "Macromedia notation" style and things came to life right
away. I don't mean one could write new levels of Quake in SVL after
that :) - but the productivity became commercially satisfactory for
the customer.
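
The usual mitigation for long lookup chains, for what it's worth, is to
resolve the chain once into a local variable before a hot loop (a
generic sketch, not code from the SVL library):

// Hypothetical deeply nested namespace.
var MYAPP = {
    gfx: {
        shapes: {
            area: function(r) { return 3.14159 * r * r; }
        }
    }
};

// Resolve the chain once; reuse the local alias inside the loop.
var area = MYAPP.gfx.shapes.area;
var total = 0;
for (var i = 0; i < 100000; i++) {
    total += area(2); // one identifier lookup per call instead of four
}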
 
