addEvent - The late entry :)

Jorge

Sooner or later you're going to have to admit that Apple is doing it
very well with their Safari browser. In contrast, the multimillion-dollar
software company's browser is a real pity, even in its latest (still in
beta after 6(?) years) incarnation. They ought to hand out free Red Bulls
for breakfast at Redmond.

--Jorge.
 
Jorge

So you are disputing the degree to which the suspicion and
distrustfulness is excessive and/or irrational?

Yes, because Safari's source code is open, of course.

<kidding>
And as they're using it at Redmond trying to learn how to write
browsers, they would have told everybody already if they had found
anything weird.
</kidding>

--Jorge.
 
dhtml

Why don't you see that it is superfluous to respond to me asserting the
very thing that I have not only already suggested but gone to some
effort to ascertain the truth of?

I actually did agree with the part about using inline code.

How many times have you seen some insanely clever scheme to 'subclass
array' when the author should have been just using slice()? Well then,
this is the simple answer. Hey - at least the code works, right?
<snip - more pointless noise>
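
For reference, the slice() idiom alluded to above is just something
along these lines, converting an array-like object into a genuine Array
rather than trying to 'subclass' Array (a sketch only, with a
hypothetical helper name):

// Hypothetical example: turn the arguments object into a real array.
function toArray() {
  return Array.prototype.slice.call(arguments);
}
var realArray = toArray(1, 2, 3); // [1, 2, 3] -- a genuine Array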

If you'd asked, it probably would have been a much better way to
learn. But then again, nah, that probably wouldn't be possible. You
seem to prefer to impart, rather than receive, knowledge (publicly). A
large portion of what I wrote is not the sort of thing that would seem
interesting to someone who thinks the way you do (based on my
observations of what you write).

You go and write tests for how many milliseconds faster it is to
call Array.push, and post examples. The PrototypeJS guys appreciate
these and call them 'useful nuggets' within the 'piles of dung' you
keep slinging.
It's nice out. That will be all for today!

Ah, doing something smart for a change.

What is that supposed to mean?

Garrett
 
Richard Cornford

I'm a little surprised you know so much about the
internals of Prototype.js.

We do get asked about it from time to time, so some familiarity with the
code seems like a good idea if those questions are to be answered
(responsibly).
I haven't looked inside that library for more than
a few minutes in the last year. All that $, $A, $H,
and my personal "favorite" $$ are a successful repellent.

That particular mistake in Prototype.js does contribute to some pretty
opaque code, both within it and using it.
I don't even know what these functions do exactly,
and in some cases not even partially.

No, but you can guess that whatever it is they will probably do it
badly. The - $A - in the 1.6.0.2 version, for example, would loop for
ever if the argument passed to it was an object with a negative -
length - property (and that is a fault that could be fixed with zero
impact on its performance).
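
As a rough illustration only (not Prototype.js's actual source), a copy
loop written with an inequality test never terminates when the length
is negative, while the obvious guard costs nothing:

function toArrayUnsafe(list) {
  var result = [];
  // 0 never equals -1, so this loop never ends for a negative length
  for (var i = 0; i !== list.length; i++) {
    result[i] = list[i];
  }
  return result;
}

function toArraySafe(list) {
  var result = [];
  // comparing with < terminates immediately for a negative length,
  // yielding an empty array, and is no slower for normal input
  for (var i = 0; i < list.length; i++) {
    result[i] = list[i];
  }
  return result;
}
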
The last time I did look in the library it seemed
that the Prototype.js typists don't care about
any sort of modularity in their code. It is one big lump
that cannot be untangled.

That is another factor that makes understanding it much harder than it
needs to be.
I am not surprised that their heavy-duty "bind" is used
internally. RobG pointed out some time ago that many functions
in Prototype.js start by calling $ for one of the arguments.
Even functions that aren't API functions do this, which is
completely pointless.

Yes, superficially it looks like an effort to make the code tolerate
mistakes made by its programmers (internal and external), but if that
was the intention you have to wonder why functions like $A are so
vulnerable to incorrect argument types.
This example of "bind" does make a good point that the
function is more "general" than needed in the particular
case. But there would only be a need to write a more
trimmed down version if the performance penalty of the
current function is causing a problem.
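
For concreteness, the contrast being discussed is roughly the following
(a sketch only, not Prototype.js's actual code): a 'general' bind that
also supports pre-bound arguments, versus a trimmed version that only
fixes - this -, which is all that many call sites need:

// General form: fixes - this - and prepends any extra arguments.
function bindGeneral(fn, thisObj) {
  var preArgs = Array.prototype.slice.call(arguments, 2);
  return function () {
    var args = preArgs.concat(Array.prototype.slice.call(arguments));
    return fn.apply(thisObj, args);
  };
}

// Trimmed form: only fixes - this -, with no argument copying at all.
function bindThis(fn, thisObj) {
  return function () {
    return fn.apply(thisObj, arguments);
  };
}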

But the performance is bad. I have followed attempts to speed JQuery up
more than Prototype.js, and JQuery is really bad in that regard. Over
the years there have been numerous posts to this group from people
complaining, for example, that sorting a 4,000 row table was
unacceptably slow. To which there is pretty much no answer but to say
that if you create a table that large manipulating it through the DOM
with javascript is going to be slow no matter how much optimisation is
done in the code. But I am reading people complaining about JQuery
performance at table sorting with 200-400 rows.
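
For context, even a fairly economical approach to sorting a table (a
sketch only; it assumes simple text-only cells and a plain string
comparison) still has to move every row through the DOM, which is why
very large tables remain slow regardless of how the library code is
optimised:

function sortTableByColumn(table, colIndex) {
  var tbody = table.tBodies[0];
  var rows = [];
  // copy the live rows collection into a plain array
  for (var i = 0; i < tbody.rows.length; i++) {
    rows[i] = tbody.rows[i];
  }
  // compare the chosen column's cell contents as strings
  rows.sort(function (a, b) {
    var x = a.cells[colIndex].innerHTML;
    var y = b.cells[colIndex].innerHTML;
    return x < y ? -1 : (x > y ? 1 : 0);
  });
  // appendChild moves each (already inserted) row to the end in order
  for (var j = 0; j < rows.length; j++) {
    tbody.appendChild(rows[j]);
  }
}
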
I don't want to make it seem like writing especially
inefficient code is ok until there are customer complaints
but there is a balance which needs to be struck in the
interest of saving development dollars.

Generally it does seem like a good idea to aim to get something working
well before worrying about performance. On the other hand, there are
aspects of performance that can be considered at the design stage and
would be expensive to sort out (re-design) later.

I've thought about the saying "premature optimization is
the root of all evil" in several different ways recently.
I believe the original use was with respect to reducing
CPU use and so speeding up the program. Now this saying
can apply to size, speed, maintainability, generality.

How would premature maintainability work?
None of these should be optimized prematurely as the
others usually suffer.

For which a good picture of what (or rather when) qualifies as
"premature" would be needed. That is likely to end up being answered on
a per-context basis, as you know there will be contexts where
performance is paramount and others where it would be a near
irrelevance.
That is the reason I pressed the issues I did below.
Sometimes it seems you are advocating optimizing size,
speed at the cost of maintainability (i.e. multiple versions)
and generality.

The code authoring strategy that I advocate is primarily aimed at
maximum code re-use and fast development. On their own, those two
aspects are not enough to justify one choice over another, so it is
useful to point out their impacts in other areas. But I don't see what
maintainability has got to do with this as using pre-written, well
tested and reliable implementations of task specific interfaces in the
form of discrete modules does not harm maintainability. Indeed my
experience suggests it aids it considerably; even finding bugs in an
interface implementation allows for fixing those and testing the result
in isolation from the rest of the system, and the results do not tend to
have any (non-positive) impact outside of the module in question.
I'd be surprised if there is an object change between versions
2.0.2 and 2.0.3 (if those are the correct numbers, I have to
check) where the bug was fixed.

I think you would be surprised by how many strange, non-documented and
non-standard features Safari's object model does expose (or has
exposed).

I think there is a bit of that but also I think he wants to
read an acknowledgement that using code that is slightly too
general for a situation is tolerable.

If so that will be because he wants to use such a statement as a
justification for using code that is way too general for a situation.

Fair enough. I really don't think about multiple frames/windows
as I almost never use them (at least not where this would matter).

But code supporting multiple frames is more general than code
that only supports the frame it is loaded into.
In that hypothetical situation, I probably would use the code
as well.

[point A] I will refer to this below.
You are not seeing the question of how 'general' is "general".
An event library, no matter how large/capable, is not 'general'
in every sense. It should have no element retrieval methods,
no retrieval using CSS selectors, no built-in GUI widgets, etc.

I don't know how CSS selectors or built in widgets have anything
to do with an event library.

They shouldn't have anything to do with an event library. Separating
distinct responsibilities into discrete units/modules is a good design
strategy (leaving the extent to which that is done (the sizes/scopes of
those units/modules) the subject of debate).

However, the general-purpose libraries do not follow this strategy (or
where they do, they do not follow through with it). JQuery, for example,
has recently grown a (predictably dodgy) position reporting API as part
of its core. JQuery's plug-in philosophy is one of the few better ideas
in its design, but it runs against pressures for the library to be more
general and in the end loses to the louder popular demand.
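
As an aside, the sort of discrete, single-responsibility event module
being contrasted with those libraries might be sketched along these
lines (illustrative only; not anyone's actual library code):

var addEvent = (function () {
  var el = document.documentElement;
  if (el.addEventListener) {
    // W3C DOM events model
    return function (obj, type, fn) {
      obj.addEventListener(type, fn, false);
    };
  }
  if (el.attachEvent) {
    // older IE model; normalise - this - and the event object
    return function (obj, type, fn) {
      obj.attachEvent('on' + type, function () {
        fn.call(obj, window.event);
      });
    };
  }
  // no supported event API: callers can test for null and degrade
  return null;
})();
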
If it could truly comprehensively do its job then I think that
would mean "general".

In one sense. It would not be generally applicable as the overheads of
handling the multi-frame case (with sub-frames reloading at intervals)
would be considerable and pointless in a single page context.
That is a valuable acknowledgement by including "get away with".

There is always a place for pragmatism, and I have always advocated
designing for the context, which necessitates appreciating the full
context.
Given your acknowledgement at "point A" above, it would
seem the size of "slightly" might play a role.
Yes.

If the slightly larger multi-frame system was written and there
was a tight deadline, I would use it.

How tight is the deadline, and what does 'not written' mean in context?
If an overall strategy of building from re-usable low level components,
through intermediate components, up to the final top level has been
employed then the odds are good that at least some (and likely much) of
the existing code can be re-used in the 'writing from scratch', and then
some other components may be available in addition. If the deadline is
really tight none of that will matter, but otherwise ...
If the single page version was already written and could do
the job by being included in every individual frame then I
would use it on a tight deadline. Caching could be set up with
some no-check, far-future expiration date header so there is
no cost to including it in every page.

That is not "no cost"; it is a minimal download cost, but the code must
still be compiled whenever a page loads, and compiled before any
page-specific set-up can be performed.

Nicely written; however, if the code is already written, tested,
and available for download from the web, but solves a problem
more general than the problem at hand, where does one draw the line
and say it is *too* general? There must be some observable that
indicates this situation. For example "the client is complaining
downloads are too long" or "the client is complaining the
application is not responsive enough" or "the other programmers
are spending too much time on maintenance" or the genuine
expectation that one of these sorts of problems will arise.

And where a less, but still sufficiently, general alternative is
available ("already written, tested, and available for download from the
web")?


===========================================================
The majority of JavaScript programmers (almost all less "uptight"
than us) seem to agree that there is a problem that can be
specified and solved with a library that can be shared on the
web to the benefit of others.

For the majority of "JavaScript Programmers" the "problem" is how to do
their job while having a near negligible technical understanding of the
technologies involved. They are right in seeing anything that lets them
get away with that as a solution to their problem, and it is
unsurprising that they would then apply pressure to authors of these
libraries to cover more of the circumstances they face but could not
cope with for themselves.

It is the solving of a problem, but it is not necessarily resulting in
the best solution for the problems that should be getting the best
solutions (the problems belonging to the clients/employers and projects
being worked on).
Perhaps each project you work on is so radically different,
and perhaps quite advanced, that your given problems are not
solved well by the feature sets of these prepackaged libraries
(leaving the quality of these libraries aside for a moment.)

I am employed as a specialist in a context where a specialist is
necessary. The 'popular' general-purpose libraries really are an
irrelevance in my context; they just cannot cut it there.

But knowing how trivially easy it is to achieve many of the things that
I see these libraries being used for on the public Internet I still see
much of what they do get used for as totally inappropriate (even
disregarding all the browser support inevitably sacrificed whenever one
is used). It is clear that much of what is happening in this regard
follows from people employing other people's examples in the same
unconsidered 'copy-n-paste' style that has been the norm in web
development for its entire existence.
For my own use, I developed a library and slapped the label
"Fork" onto it. I think it solves roughly the same problem
as the base utilities of YUI!, Prototype.js, jQuery, Dojo,
etc. This vaguely specified problem is what people call the
"general" problem and the use of "general" in this case is
incorrect. Your use of "general" is better. The same problem
occurs with the distinction between multi-browser and
cross-browser. Libraries claiming to be cross-browser are
usually just multi-browser for some very small value of
"multi". I will endeavor to be more careful about my use
of the word "general".


What would be great is if there was a word for this vaguely
specified problem that so many libraries roughly solve
because these libraries, though more general than necessary
in many cases, are acceptable solutions for what I dare to
say are "most" browser scripting problems.

I don't think that they are acceptable solutions for most browser
scripting problems. Their shortcomings probably go unnoticed by the
clients that pay for these solutions (because they will tend to use
default configurations of 'common' web browsers and will never have seen
the potential of the optimum solutions to their problems). I don't see
not observing issues as being the same as there not being issues
(consider your own blind-spot for multi-frame scripts because you don't
see that problem in your work).
Many of the regulars past and present of this group have
maintained their own solutions to roughly the same problem.
Matt Kruse, Thomas Lahn (I'll probably get crucified for
including his name in this list), David Mark, and I have
all shown code we have written that we can cart around to
various projects. The code may be too general in some cases
but the cost of extra download time is acceptable (perhaps
unnoticeable) and the paying customer would rather we just
get on with development than trimming the code down to a
minimal implementation.

You don't think that I have code that I can "cart around"?
The vaguely specified problem is *something* *approximately*
like the following. This is probably closer to the problem
that the c.l.js regulars solve than the mainstream libraries
solve but they aren't far off.


---------------


Full support in this generation of browsers:
    IE6+, FF1+, O8.5+, S2+
No errors (syntax or runtime) in:
    IE5+, NN6+, O7+, S1+
Possibly syntax errors but no runtime errors in:
    IE4+, NN4+
Who cares:
    IE3-, NN3-

What about all the other browsers? And what about the configurable
aspects of the browsers you have listed? With ActiveX disabled, IE 6
won't do AJAX despite its being IE 6.

The problem to be solved is to identify and exploit the capabilities of
any browser that provides the necessary capabilities, and to provide
planned and controlled outcomes on all that do not. The "who cares" list
is only acceptable because there does come a point when browsers are too
old to be worth considering, but apart from that, thinking in terms of
lists of browsers should be restricted to the practical necessities of
testing and not be a significant part of the design problem (except in
known environments where a list may be part of the context).
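
The kind of feature test that gives those planned and controlled
outcomes might look something like this (a minimal sketch; a real
implementation would try more ActiveX ProgIDs than the single one
assumed here):

var createXHR = (function () {
  if (typeof XMLHttpRequest !== 'undefined') {
    return function () {
      return new XMLHttpRequest();
    };
  }
  if (typeof ActiveXObject !== 'undefined') {
    return function () {
      try {
        return new ActiveXObject('MSXML2.XMLHTTP');
      } catch (e) {
        // ActiveX disabled: report failure rather than erroring out
        return null;
      }
    };
  }
  // no XHR support at all: a planned, controlled outcome
  return function () {
    return null;
  };
})();
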
A single page implementation of an event library. (Not worried about
frames/windows as you discussed above.)

Probably many, many other restrictions/requirements/assumptions.

---------------

The above specific problem (or one quite close) seems to be
the one that needs solving most frequently and so is the one
for which the most prefabricated code is available for download on
the web.

What you are describing as a problem here is not a problem at all; it is
one possible (even if likely/common) aspect of the solutions to other,
more specific problems.
This has been
incorrectly referred to as the "general" problem and the
available solutions have been labeled "cross-browser" even
though they are not even close and don't even attempt to use
feature testing well.

Since this problem arises so frequently it is good that
programmers share code to solve this problem (or problems
very similar).

Good and bad. Bad in the sense that people who do not appreciate the
consequences adopt code that becomes the limiting factor in the projects
where they use it without knowing that they are doing that, let alone
considering whether that is appropriate for their context.
The fact that this problem is more general than
necessary in many cases is clearly not a problem. If it were
then customers would have complained enough that some other
problem would be solved (perhaps your multiple implementations
system would be popular.)

Customers don't complain; they go somewhere else. Search engines are
used to provide a list of alternatives so it doesn't often matter if
some of those alternatives don't work so long as there are more in the
list. And if you have ever tried complaining about
defective/non-functional/error-throwing web sites you will have quickly
learned why anyone who attempted to complain would rapidly be strongly
discouraged from carrying on.
I have tried solving only the problem at hand for a given
web page. As more requirements arrive from the customer,
I find I always end up, once again, solving this same
vaguely specified problem.

Yes, so what you are describing as the problem is in reality just an
aspect of the solutions to other problems. If you were writing systems
for intranets you would probably find those other problems still
occurring, but your "vaguely specified problem" might never appear at
all.
Perhaps this vaguely specified
problem is exactly at the perceived level of functionality
the web can provide without being too expensive to develop.

Very unlikely.
There is something to this vaguely specified problem, don't
you agree?

No, it is just a symptom of something else.

Richard.
 
dhtml

Richard said:
No, but you can guess that whatever it is they will probably do it
badly. The - $A - in the 1.6.0.2 version, for example, would loop for
ever if the argument passed to it was an object with a negative - length
- property (and that is a fault that could be fixed with zero impact on
its performance).

The same type of bug exists in the Array extras in Firefox.

Array.forEach({length:-1}, function(i){document.title=i;});

Firefox 3:
javascript:void(Array.forEach({length:-1}, function(i){document.title=i;}));

Result:
"Slow Script" dialog (presumably because the -1 length is read as a very
large unsigned integer, so the loop effectively never terminates).

Garrett
 
