Recommendations for JavaScript drop-down menu code


Thomas 'PointedEars' Lahn

Peter said:
That may be true of a library like Prototype that is distributed as a
single file. Libraries like Dojo, YUI are distributed in multiple
files so even as the library gains capabilities it doesn't mean the
whole library has to be downloaded for each page.

It seems granularity of a library's distributed source code is a core
source of contention. Should a library be distributed as one file like
Prototype and jQuery or distributed as many files like YUI to be
combined as needed by a developer working on a particular page? If
multiple files how small should each file be. If multiple files, how
should they be combined for a particular page?

I would go even further. A client-side script library should be like a
real-life library. Not only can you borrow only the books on the
categories you are interested in -- try to use only one script resource for
each aspect of client-side scripting -- but you can also borrow only the
books of the category that you need as references for your essay: every
script resource should be written with the possibility in mind that the
user of the library can take a single method from it and include only the
methods it depends on to make it work.

These goals can collide with each other, of course: you may want to
implement a general testing method in one resource and use it in all other
resources. Sometimes it can then be better to copy that testing method to
the other resources so that another dependency is avoided (which introduces
the issue of version management of that method, though).

Of course, I'd rather have a built-in `import' statement which makes the
script engine resolve the dependencies without having to download the
entire script resource in which a depended-on method is declared. But I
doubt this would be feasible on the Web.


PointedEars
 

Thomas 'PointedEars' Lahn

Thomas said:
Of course, I'd rather have a built-in `import' statement which makes the
script engine resolve the dependencies without having to download the
entire script resource in which a depended-on method is declared. But I
doubt this would be feasible on the Web.

Hmmm, in combination with server-side scripting this might be feasible:
the served client-side script resource would be generated by a
server-side script which includes all the dependent code. It seems worth
investigating.
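A minimal sketch of the dependency-resolution part such a server-side
combiner would need, assuming a hand-written dependency map and
hypothetical method names (nothing here is from an actual library):

```javascript
// Hypothetical combiner core: given a requested method, list it and
// everything it depends on, dependencies first, each exactly once.
// The server would then read the corresponding files and concatenate
// their source into one served script resource.
var deps = {
  showMenu: ['getElementByIdSafe', 'addClass'],
  addClass: ['hasClass'],
  getElementByIdSafe: [],
  hasClass: []
};

function resolve(name, seen, out) {
  if (seen[name]) return out;
  seen[name] = true;
  var required = deps[name] || [];
  for (var i = 0; i < required.length; i++) {
    resolve(required[i], seen, out); // include dependencies first
  }
  out.push(name);
  return out;
}

var order = resolve('showMenu', {}, []);
// order lists getElementByIdSafe, hasClass, addClass, then showMenu
```

A real resource would of course have to declare its own dependencies in
some machine-readable form rather than rely on a map like this.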


PointedEars
 

Richard Cornford

Peter said:
Not accounting for certain browser bugs/incompatibilities due
to not having a long enough history working with browsers to
know about particular bugs. Testing would show the bugs but
testing in currently rare browsers (eg. IE4, Sunrise, Lobo)
may not be considered a wise cost by the business. Using a
prepackaged library that already works in all cases would be
much cheaper and more attractive to a project manager even
with a certain amount of increased download time. Project
timelines and idealism clash on occasion.

That pre-supposes that the 'prepackaged library' was written by someone
familiar with this type of historical browser behaviour (whether it be
a bug or not). But in practice this is precisely the problem being
discussed in the "when is a function not a function" thread. The authors
of dojo and jquery have themselves not had the experience to know that
it is normal for collections to report 'function' from a typeof
operation. So when they encounter a browser that is doing no more or
less than has been done by many browsers in the past they find that
their code cannot cope, and start reporting that as a bug. Obviously
these individuals are not aware that their code has these issues
following from their inexperience, but now your project manager cannot
solve his problem by just using a library unless he has someone who
knows the issues themselves and so can point out which libraries
properly handle them and which do not. At which point the original "not
having a long enough history working with browsers" has stopped
applying.
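For illustration only (this is not code from any of the libraries
mentioned): a test in the defensive style being discussed accepts the
typeof results that callable host objects are known to produce, instead
of insisting on 'function':

```javascript
// typeof alone cannot distinguish a callable host method from a
// collection: some browsers report 'function' for collections, and
// others report 'object' or (for ActiveX methods) 'unknown' for
// things that are perfectly callable. So for host objects, test for
// the set of known-callable typeof results instead.
function isHostMethod(object, property) {
  var t = typeof object[property];
  return t === 'function' ||
         (t === 'object' && object[property] !== null) || // IE host methods
         t === 'unknown';                                 // ActiveX methods
}
```

Note this is only meaningful for host objects; for native values a
plain typeof test against 'function' remains appropriate.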

That may be true of a library like Prototype that is distributed
as a single file. Libraries like Dojo, YUI are distributed in
multiple files so even as the library gains capabilities it
doesn't mean the whole library has to be downloaded for each
page.

The extent to which that solves the problem depends on the sizes of the
chunks and how their dependencies work. The base part of dojo is
3500-odd lines of code and is pretty much essential for anything else.
It is unlikely that that will tend to get any smaller with the passage
of time.

It seems granularity of a library's distributed source code
is a core source of contention. Should a library be distributed
as one file like Prototype and jQuery or distributed as many
files like YUI to be combined as needed by a developer working
on a particular page? If multiple files how small should each
file be. If multiple files, how should they be combined for a
particular page?

Generally javascript code should be sent to the browser in as few chunks
as possible. How code is handled before that is optional (and probably
best driven by the requirements of the development context).

I think there is another way to look at what the library is
providing to the user.

The user is the person sitting in front of a web browser interacting
with the end result. Users should not be confused with programmers as
programmers have much more responsibility for what they do.
In the terms you use below, the library is providing an
interface and a base implementation of that interface.
In the simple cases where it is excessively complex, I
can see that some developers would just say "we will
never agree how simple the simplest implementation of
this interface should be." It may make sense to some
people that the simplest interface should at least work
in the popular 80% of the browsers and having a simpler
version for particular browsers is just more code to
maintain. Although performance and download times are
important it is also necessary to retain a small code
base and so a medium complexity implementation of the
interface as the base implementation may be better to
some people than having many simpler ones kicking around
the hard drive.

For the general problem of maximising code re-use it is not advantageous
to maintain a small code base (quite the opposite). For an individual
project that may be an advantage, but then you can maintain the
distinction between the modules that are in the application and those
that are not.
So the library provides a medium complexity base
implementation and when it is insufficient for a less
common case

And over the top for the more common cases.
the developer writes code to handle the less
common case.

But only if the developer has learnt how to do that.
This is very similar/identical to
what you are suggesting below.

Getting there, but it is still important that the interface design stage
takes into account the need to accommodate the whole range of possible
implementations of the interface.

This would be ideal.

Ideal, and so the question is to what extent it can be achieved in
reality.
When you write "larger-scale" what do you mean? Do you mean
distributed in a single file or a low number of multiple files?
Or do you mean just many lines of code?

Files and code are largely irrelevant. Mostly I am getting at
interdependence and the number and diversity of facilities provided in
interdependent units. Physical size does follow when many facilities are
strongly interdependent, but that is just a consequence.
I think this is a great strategy overall. It formalizes the
idea of "a sufficient implementation."

Many 'sufficient implementations', where each is 'sufficient' for
differing requirements.
Why adhere so strictly to this "and no more" requirement?

Because that criterion provides constant direction. Abandon it and there
is no telling how far things may end up drifting in the opposite
direction.
It is ok performance-wise or more profitable overall
(development dollars vs net income) in many situations
to send 5-10% unused code to the browser.

Maybe 5-10%, but without direction there is nothing to make you stop at
(or near) 10%, and there are plenty of real-world cases where an
imported library is being so slightly used that almost all of the
downloaded code is not being used.

This seems to be an argument against having the absolute
simplest implementations in the collections of objects.
By having a medium-complexity interface implementation as
the base implementation, when a CSS re-design occurs there
_may_ be no need to swap out the simplest implementation
for a slightly more complex implementation.

But that is a very big "may". One CSS change may bring scrolling
overflow into the picture, another may change a floating block element
into a list item, and a whole spectrum of other possibilities exist.
Your "medium-complexity" implementation just cannot take everything into
account without actually being the complex implementation that everyone
is trying to avoid even writing (let alone using).

Because the future re-design cannot be predicted there is no point in
trying. It makes most sense to deal with the current situation but to do
it in a way that has the flexibility to minimise the effort needed to
accommodate the unknowable future changes.
This saves developer time which is more expensive then
production server time.

Yes, so make the task of accommodating any possible future changes in
requirements as quick and simple as possible.
It seems that to implement your strategy of multiple interface
implementations that multiple source files grouped in a directory
for each interface would be a practical approach.

Yes that certainly would be one sensible structure for such a
collection.
Some sort of configuration file and build process could
concatenate the various appropriate files together for a
particular page.

Could, but aggregating the components necessary for a project with copy
and paste gets the job done. Changes in an application's source files
during the development of a project tend to mostly be additions rather
than changes in what has already been created, so any elaborate "build
process" might be added complexity for no good reason.
Is that similar to
the approach you take?

I have not got to the point of expressing capabilities and dependencies
in the sort of machine readable form that would be necessary for an
automated build process. Eventually that may become desirable (or even
necessary), but I prefer to get a good handle on the issues involved
before moving in those sorts of directions (as if you commit yourself to
something inadequate you usually end up getting stuck with it).

Richard.
 

Thomas 'PointedEars' Lahn

Richard Cornford wrote
Peter said:
It seems granularity of a library's distributed source code
is a core source of contention. Should a library be distributed
as one file like Prototype and jQuery or distributed as many
files like YUI to be combined as needed by a developer working
on a particular page? If multiple files how small should each
file be. If multiple files, how should they be combined for a
particular page?

Generally javascript code should be sent to the browser in as
few chunks as possible. [...]

Why?


PointedEars
 

David Mark

Richard Cornford wrote
Generally javascript code should be sent to the browser in as
few chunks as possible. [...]

Why?
One reason is that browsers limit the number of simultaneous http
connections and scripts must be downloaded in their entirety before
the rest of the page can be parsed. Modularity should apply to the
logical design of a script library, but there are other considerations
for the physical file structure.
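A rough, assumed-numbers illustration of that cost: with roughly two
simultaneous connections (the HTTP/1.1-era default) each extra "wave"
of script requests adds about one round trip, regardless of file size:

```javascript
// Simplistic model: ignore transfer time (the same bytes move either
// way) and charge one round-trip time per wave of parallel requests.
function loadTimeMs(files, rttMs, parallel) {
  return Math.ceil(files / parallel) * rttMs;
}

// With assumed 100 ms round trips and 2 connections, one file costs
// one round trip, while the same code split into six files costs three.
```

The figures are invented, but the shape of the trade-off is not: more
files means more request waves, each paid for in latency.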
 

Thomas 'PointedEars' Lahn

Peter said:
Richard Cornford wrote
Peter Michaux wrote:
It seems granularity of a library's distributed source code
is a core source of contention. Should a library be distributed
as one file like Prototype and jQuery or distributed as many
files like YUI to be combined as needed by a developer working
on a particular page? If multiple files how small should each
file be. If multiple files, how should they be combined for a
particular page?
Generally javascript code should be sent to the browser in as
few chunks as possible. [...]
Why?

To reduce page load time.

Maybe we have a definition problem here. Given that the question was which
would be better, one large or several small script resources, how do you
define a chunk?


PointedEars
 

David Mark

Peter said:
Richard Cornford wrote
Peter Michaux wrote:
It seems granularity of a library's distributed source code
is a core source of contention. Should a library be distributed
as one file like Prototype and jQuery or distributed as many
files like YUI to be combined as needed by a developer working
on a particular page? If multiple files how small should each
file be. If multiple files, how should they be combined for a
particular page?
Generally javascript code should be sent to the browser in as
few chunks as possible. [...]
Why?
To reduce page load time.

Maybe we have a definition problem here. Given that the question was which
would be better, one large or several small script resources, how do you
define a chunk?

The fact is that if you break up a large script into smaller chunks
(i.e. files), it will slow down the page load as additional http
connections are required. That doesn't mean you necessarily should
lump every script together in one file. It is just one thing to
consider when determining the physical file structure of your code,
which often does not mirror its logical modularity.
 

Peter Michaux

Peter said:
Richard Cornford wrote
Peter Michaux wrote:
It seems granularity of a library's distributed source code
is a core source of contention. Should a library be distributed
as one file like Prototype and jQuery or distributed as many
files like YUI to be combined as needed by a developer working
on a particular page? If multiple files how small should each
file be. If multiple files, how should they be combined for a
particular page?
Generally javascript code should be sent to the browser in as
few chunks as possible. [...]
Why?
To reduce page load time.

Maybe we have a definition problem here. Given that the question was which
would be better, one large or several small script resources, how do you
define a chunk?

I think "chunks" has meant "files" in this discussion, the question
being whether you send the same code as one file or several files. Sending
as one file saves requests. Requests increase download time.

Peter
 

Peter Michaux

Richard Cornford wrote
That pre-supposes that the 'prepackaged library' was written by someone
familiar with this type of historical browser behaviour (whether it be
a bug or not). But in practice this is precisely the problem being
discussed in the "when is a function not a function" thread. The authors
of dojo and jquery have themselves not had the experience to know that
it is normal for collections to report 'function' from a typeof
operation. So when they encounter a browser that is doing no more or
less than has been done by many browsers in the past they find that
their code cannot cope, and start reporting that as a bug. Obviously
these individuals are not aware that their code has these issues
following from their inexperience, but now your project manager cannot
solve his problem by just using a library unless he has someone who
knows the issues themselves and so can point out which libraries
properly handle them and which do not. At which point the original "not
having a long enough history working with browsers" has stopped
applying.

Point taken, but I think the people adopting some of the worst offender
libraries are deferring to the comparatively greater experience of the
library authors.

The extent to which that solves the problem depends on the sizes of the
chunks and how their dependencies work. The base part of dojo is
3500-odd lines of code and is pretty much essential for anything else.
It is unlikely that that will tend to get any smaller with the passage
of time.

Agreed. Large base file dependencies certainly are offensive when I am
looking at a library's design.

For the general problem of maximising code re-use it is not advantageous
to maintain a small code base (quite the opposite).

That seems contradictory. Can you please expound on that statement?


Because that criteria provides constant direction. Abandon it and there
is no telling how far things may end up drifting in the opposite
direction.

This sounds like fear of a slippery slope and adhering to this
principle gives a sense of doing the right thing as opposed to
entering into grey areas. I too like black and white decision making
but I admire people that are comfortable in the grey areas. I'm
consistently amazed how they have such great confidence that what they
are doing is correct when their decisions are based on instinct rather
than a formal argument. I don't think their decision making approach
is invalid.


But that is a very big "may". One CSS change may bring scrolling
overflow into the picture, another may change a floating block element
into a list item, and a whole spectrum of other possibilities exist.
Your "medium-complexity" implementation just cannot take everything into
account without actual being the complex implementation that everyone is
trying to avoid even writing (let alone using).

Because the future re-design cannot be predicted there is no point in
trying. It makes most sense to deal with the current situation but to do
it in a way that has the flexibility to minimise the effort needed to
accommodate the unknowable future changes.

This purity seems a little too idealistic to me. What I meant is
something like this...if a company uses floating blocks and list items
and has a history of switching between the two then it would make
sense that the base implementation of some interface can handle both
cases rather than one implementation for each situation. If the
company suddenly adds a third variant then it could be lumped in with
the implementation that can handle the previous two situations or a new
implementation could be made. It seems to me that an individual
company will have common gui components or gui experience so the
lumping option isn't a slippery slope to negative infinity. It might
slip to handle five or so cases but that's it. It seems to be more of
a judgement call. The problem in a general purpose library is it could
slip too far because so many companies are trying to wedge their
features into a single implementation.

Yes, so make the task of accommodating any possible future changes in
requirements as quick and simple as possible.

Indeed a good requirement.

Yes that certainly would be one sensible structure for such a
collection.


Could, but aggregating the components necessary for a project with copy
and paste gets the job done. Changes in an application's source files
during the development of a project tend to mostly be additions rather
than changes in what has already been created, so any elaborate "build
process" might be added complexity for no good reason.

What happens when a new browser comes along requiring a change in one
of the interface implementations that has been pasted all over the
place? Sure there is grep or similar but modifying just one file and
typing "build" seems like a better option to me. I really do believe
in the DRY principle for this.

I have not got to the point of expressing capabilities and dependencies
in the sort of machine readable form that would be necessary for an
automated build process. Eventually that may become desirable (or even
necessary), but I prefer to get a good handle on the issues involved
before moving in those sorts of directions (as if you commit yourself to
something inadequate you usually end up getting stuck with it).

It certainly has been difficult for me to negotiate a build process
with the server-side programmers. They automatically assume that I
will be writing JavaScript in single files in the format the code will be
deployed in. To me that is almost the same as expecting a Java programmer
to be directly writing compiled byte code or jar files. The irony is
the server-side programmers have tools galore for accomplishing their
code writing tasks but they don't think I could benefit from some
tools to do my job. I suppose my tools bleed over to their side but
the opposite is not true so my tools are a nuisance.

Peter
 

Richard Cornford

Peter said:
Point taken but I think the people adopting some of the worse
offender libraries are deferring to the comparatively greater
experience of the library authors.

No doubt, but when you have a situation where inexperience tends to lead
to overconfidence, an author's declaration that their own work is fit for
some purpose or another doesn't mean very much.

That seems contradictory. Can you please expound on that
statement?

If a "library" consists of numerous implementations of common interfaces,
where each implementation is suited to specific application contexts,
then the more of these implementations there are in the "library" the
greater the odds are that the next project will be covered by the
"library" and not require the creation of a new implementation. Thus the
larger the collection the greater the chances of being in a position to
take advantage of code re-use.

This has to recognise that there are two code bases; the "library's"
code base and the application's code base. For the application the
smaller the better, but for the "library" the reverse is likely true.
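To make the "many implementations of one common interface" idea
concrete, a sketch with invented names: both functions satisfy the same
(element) -> {left, top} interface, and a page includes only the
cheapest one sufficient for its context:

```javascript
// Implementation 1: sufficient where the element's own offsets are
// relative to the positioning context (the simplest pages).
function getOffsetsSimple(el) {
  return { left: el.offsetLeft, top: el.offsetTop };
}

// Implementation 2: walks the offsetParent chain; sufficient where
// scrolling overflow does not intervene. More capable, more code.
function getOffsetsWalking(el) {
  var left = 0, top = 0;
  while (el) {
    left += el.offsetLeft;
    top += el.offsetTop;
    el = el.offsetParent;
  }
  return { left: left, top: top };
}
```

The library's code base grows with each new implementation, but any
one application's code base only ever carries the one it selected.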
This sounds like fear of a slippery slope

Where slippery slopes exist trepidation/fear is not an unreasonable
attitude to take towards them.
and adhering to this principle gives a sense of doing the
right thing as opposed to entering into grey areas.

A sense of direction is most valuable when working in a grey area.
I too like black and white decision making but I admire
people that are comfortable in the grey areas. I'm
consistently amazed how they have such great confidence
that what they are doing is correct when their decisions
are based on instinct rather than a formal argument. I
don't think their decision making approach is invalid.

Have you considered what an "instinct" is in terms of software
development? Surely it is an impression that a move in one direction
would be a change for the worse and that the opposite direction would be
a change for the better?

This purity seems a little too idealistic to me. What I
meant is something like this...if a company uses floating
blocks and list items and has a history of switching
between the two then it would make sense that the base
implementation of some interface can handle both
cases rather than one implementation for each situation.

Maybe, but you now have two "if"s. If those "if"s can be seen to be true
at the stage of creating the code then they should be taken into
consideration in the design.
If the company suddenly adds a third variant then it could
be lumped in with the implementation that can handle the
previous two situation or a new implementation could be made.

Unless the new design effectively precludes the previous two, at which
point there is little reason to be executing code that could cope with
what has been precluded.
It seems to me that an individual company will have common
gui components or gui experience so the lumping option isn't
a slippery slope to negative infinity. It might slip to handle
five or so cases but that's it.

But what is the point of attempting to handle five or so cases if only
two of them are still possibilities in the context? It is the case that
significant presentation re-design does happen, but it does not happen
frequently and it does tend to be unpredictable. Unpredictable because
the characteristic of designers that is valued in this context is their
ability to come up with something appealingly new.
It seems to be more of a judgement call.

And in my judgement there is no point in trying to second-guess the
future decisions of an unknown graphic designer. If that is not the
applicable situation then different judgements may be appropriate.
The problem in a general purpose library is it could
slip too far because so many companies are trying to wedge
their features into a single implementation.

And if it could slip, and could slip too far, then we do have a slippery
slope here, and an undesirable destination at the bottom of that slope.
Indeed a good requirement.
What happens when a new browser comes along requiring a change
in one of the interface implementations that has been pasted
all over the place?

Well designed modules (with rationally designed feature detection) do
not tend to require changes when new browsers come along. You will
recall that I started experimenting with these modular component design
principles back in 2003, and the intervening period has seen new
versions of many browsers and a number of entirely new browsers, yet
very little of the 2003-2004 code has needed any modification to cope
with them.
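A minimal sketch of that style (illustrative names, not actual library
code): the feature tests run once, up front, and an unanticipated
browser either passes them and works or fails them and is left alone:

```javascript
// Resolve the implementation once from feature tests; callers check
// addListener for null before relying on it, so a browser lacking the
// features degrades cleanly instead of erroring at call time.
var addListener = (function () {
  var doc = typeof document !== 'undefined' ? document : null;
  if (doc && doc.addEventListener) {
    return function (el, type, fn) { el.addEventListener(type, fn, false); };
  }
  if (doc && doc.attachEvent) { // older IE event model
    return function (el, type, fn) { el.attachEvent('on' + type, fn); };
  }
  return null; // features absent: the module does nothing
})();
```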
Sure there is grep or similar but modifying just one file and
typing "build" seems like a better option to me. I really do
believe in the DRY principle for this.

I am not saying that a build process is a bad idea, only that it is over
the top for the simpler contexts.
It certainly has been difficult for me to negotiate a build
process with the server-side programmers. They automatically
assume that I will be writing JavaScript in single files in
the format the code be deployed.

And also assume that they know enough about your expertise that what
they think of as the right thing to do will be the right thing to do?
To me that is almost the same as expecting a Java
programmer to be directly writing compiled byte code
or jar files.

A slight exaggeration, but still expressing the fact that just because
javascript is delivered as source code it should not be assumed that it
would be developed in its delivered form.
The irony is the server-side programmers have tools galore
for accomplishing their code writing tasks but they don't
think I could benefit from some tools to do my job. I
suppose my tools bleed over to their side but the opposite
is not true so my tools are a nuisance.

It is maybe not that surprising that many server-side programmers
exhibit some contempt for javascript programmers, given a general
perception of javascript as a 'toy' language reinforced by the easily
observed fact that most of the people writing it don't really know what
they are doing.

In the end you will be able to change their attitude by demonstrating
that you do know what you are talking about.

Richard.
 

Brian Adkins

That pre-supposes that the 'prepackaged library' was written by someone
familiar with this type of historical browser behaviour (whether it be
a bug or not).

Does it? Maybe it simply pre-supposes the library author is *more*
familiar than the user of the library. There are no perfect solutions
here, so we're left with choosing the least imperfect.

Every "build vs. buy" decision involves pros/cons that must be
evaluated. In my case, although I enjoy the JavaScript language, I
need to spend the majority of my time on server-side development, so I
don't have the time to work out all the browser portability issues,
etc. If a library does a "good enough" job, then it could save
valuable time.

If the detractors of current mainstream JavaScript libraries would
produce better libraries, that would be beneficial, but I expect that
any libraries put forth as an example on this forum would get picked
apart like all the rest.

I doubt you're unaware of these issues, so I'm curious why you asked
the question, "What are the "disadvantages" of writing everything
yourself?"

Brian Adkins
 

David Mark

Does it? Maybe it simply pre-supposes the library author is *more*
familiar than the user of the library. There are no perfect solutions
here, so we're left with choosing the least imperfect.

In the case of JS libraries, you are left with choosing the least
incompetent.
Every "build vs. buy" decision involves pros/cons that must be
evaluated. In my case, although I enjoy the JavaScript language, I
need to spend the majority of my time on server-side development, so I
don't have the time to work out all the browser portability issues,
etc. If a library does a "good enough" job, then it could save
valuable time.

Name one that does a "good enough" job.
If the detractors of current mainstream JavaScript libraries would
produce better libraries, that would be beneficial, but I expect that

I have. But I choose not to give it away for beer money donations.
It is quite beneficial from where I am sitting. For you, not so much.
any libraries put forth as an example on this forum would get picked
apart like all the rest.

I post bits and pieces of my library here all the time. I truly
welcome any attempts to pick apart the examples. If the authors of
the "mainstream libraries" would have done the same, instead of
authoring in a vacuum, perhaps they would have gotten more useful
results.
 

Brian Adkins

Name one that does a "good enough" job.

Depending on the context, Prototype, Scriptaculous, Dojo, YUI,
JQuery ...

Now maybe you're salivating right now because you're familiar with
what you think are major problems in each of the above and can't wait
to enlighten us with your "knowledge", so you may want to re-read
"depending on the context" before you reply.

In other words, there are contexts for which it's more beneficial to
use a subset of one of the above libraries than to write the code from
scratch. I know this to be true because I'm using portions of
Prototype.js via Rails integration with absolutely no issues. It's
fairly trivial stuff in my case, but the advantage is that it's pre-
written, pre-tested code that is open source and available for input
from others, so improvements can be made.
I have. But I choose not to give it away for beer money donations.
It is quite beneficial from where I am sitting. For you, not so much.

I should have stated "produce better open source libraries".
I post bits and pieces of my library here all the time. I truly
welcome any attempts to pick apart the examples. If the authors of
the "mainstream libraries" would have done the same, instead of
authoring in a vacuum, perhaps they would have gotten more useful
results.

Interesting. Would you be willing to post a bit, or piece, of
significant JavaScript code that would pass the gauntlet of
comp.lang.javascript ? I'm not saying it's impossible, only highly
improbable.
 

David Mark

Depending on the context, Prototype, Scriptaculous, Dojo, YUI,
JQuery ...

Now maybe you're salivating right now because you're familiar with
what you think are major problems in each of the above and can't wait
to enlighten us with your "knowledge", so you may want to re-read

I have no desire to enlighten you (or the mouse in your pocket) about
anything. Issues concerning the four libraries listed (and the
Scriptaculous plug-in for Prototype) have been discussed here ad
nauseam.
"depending on the context" before you reply.

If you failed to catch on, perhaps you should re-read previous threads
on the subject.
In other words, there are contexts for which it's more beneficial to
use a subset of one of the above libraries than to write the code from
scratch. I know this to be true because I'm using portions of
Prototype.js via Rails integration with absolutely no issues. It's

What makes you think there are "absolutely no issues?" Nobody
reported any? Using Prototype alone is a major issue.
fairly trivial stuff in my case, but the advantage is that it's pre-

If it is fairly trivial stuff, then why use a 70K library to implement
it?
written, pre-tested code that is open source and available for input

Pre-tested on a small subset of agents and yet there are still lots of
issues.
from others, so improvements can be made.

Yet the code rarely sees significant changes. I've seen the babbling
that goes on in Rails support tickets. You are up the creek if you
are relying on that group to fix anything in a timely fashion.
I should have stated "produce better open source libraries".

Yes, you should have.
Interesting. Would you be willing to post a bit, or piece, of
significant JavaScript code that would pass the gauntlet of
comp.lang.javascript? I'm not saying it's impossible, only highly
improbable.

Can and have. And whether code "passes" is not the issue. The point
is that I post code here in hopes of it getting picked apart as I know
there are people here with the experience to do so. If the authors of
Prototype, jQuery, etc. had bothered to do the same, perhaps they
wouldn't have blundered so badly. Certainly they wouldn't continue to
rely on browser sniffing at this late date.
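
The alternative being advocated here is feature detection: test for the
capability itself instead of inferring it from the user-agent string. A
minimal sketch, assuming nothing from any particular library (the function
names are illustrative, not from any published codebase):

```javascript
// Feature detection: examine the capability directly instead of
// parsing navigator.userAgent (which can be spoofed, or unknown).
// Host object methods may report typeof "function", "object" (some
// ActiveX-based methods in older IE) or "unknown", so a plain
// typeof == "function" check is not enough for host objects.
function isHostMethod(o, m) {
  var t = typeof o[m];
  return t === 'function' || !!(t === 'object' && o[m]) || t === 'unknown';
}

// Only take a code path when the feature actually exists,
// and degrade gracefully when it does not.
function getById(doc, id) {
  if (isHostMethod(doc, 'getElementById')) {
    return doc.getElementById(id);
  }
  return null;
}
```

The test is resolved against the actual environment at run time, so no
assumption about browser identity or version is ever made.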

And aren't you the OP who wanted a canned menu script? Don't tell me
that is what you are using Prototype for. Did you notice that I
posted a link to one earlier in the thread? Feel free to use it and
pick it apart all you want. I would say that the sum of its parts is
fairly significant and I welcome any coherent feedback on it.
 
R

Richard Cornford

Brian said:

Yes it does.
Maybe it simply pre-supposes the library author
is *more* familiar than the user of the library.

If the problem is "not having a long enough history working with
browsers to know about particular bugs" then using code written by
someone else who may have longer experience but still does not have a
"long enough history" cannot solve that problem. It may represent
abdicating responsibility for the problem, which may be perceived as
sufficient, but that does not change the outcome.
There are no perfect solutions here,

That depends what the issues are. There are certainly some complete
solutions to these types of issues.
so we're left with choosing the least imperfect.

If none of these issues had ever been solved, or nobody had ever managed
to create a genuinely cross-browser script then it certainly would by
now be acceptable to always give up the attempt. However, some perfect,
or near perfect, solutions have been demonstrated possible, so defeatism
is not indicated here.
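
As one concrete illustration of the kind of complete solution meant here,
an event-attachment wrapper can be written once with feature tests and
then behave correctly wherever either event API exists. A sketch
(`addListener` is an illustrative name, not a reference to any published
library):

```javascript
// Attach an event handler using whichever API the element supports.
// Feature-tested per element, with no browser sniffing involved.
function addListener(el, type, fn) {
  if (el.addEventListener) {
    // W3C DOM Events model
    el.addEventListener(type, fn, false);
    return true;
  }
  if (el.attachEvent) {
    // Older IE model: attachEvent does not set `this` to the
    // element, so wrap the handler to normalize that.
    el.attachEvent('on' + type, function () {
      fn.call(el);
    });
    return true;
  }
  return false; // neither API present: caller can degrade gracefully
}
```

Calling code can test the boolean result and fall back (for a drop-down
menu, leaving it fully expanded) rather than failing silently.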
Every "build vs. buy" decision involves pros/cons that
must be evaluated. In my case, although I enjoy the
JavaScript language, I need to spend the majority of my
time on server-side development, so I don't have the time
to work out all the browser portability issues, etc. If a
library does a "good enough" job, then it could save
valuable time.

The "saves valuable time" argument gets made from time to time, but it
often does not hold water because for any given library there is still a
need to spend time learning how to use the library. And since no single
library can achieve applicability for all contexts without being both
ridiculously bloated and way over the top for most of those contexts
it then becomes necessary to spend time learning how to use many
libraries.

The advantage of spending the same time learning how to address the
issues of browser scripting for yourself is that anything learnt in the
process is generally knowledge that can be applied to all situations,
while learning how to use any single library is worthless knowledge as
soon as that library proves insufficient or inappropriate for any task.
If the detractors of current mainstream JavaScript libraries
would produce better libraries, that would be beneficial,

Beneficial to who?
but I expect that any libraries put forth as an example on
this forum would get picked apart like all the rest.

Code posted to this newsgroup only gets "picked apart" if it can be
picked apart. And having your own code "picked apart" is an extremely
efficient way of learning to write better code.
I doubt you're unaware of these issues, so I'm curious why
you asked the question, "What are the "disadvantages" of
writing everything yourself?"

I asked the question to see if it would be answered at all, and if
answered to see how coherent the answer would be. Experience here shows
me that many assertions get made as if they were self-evident facts, but
that not all of them are and some of the individuals making the
assertions haven't even thought about what they are saying to the point
of being able to coherently express the reasoning behind their
assertions. This situation is best exposed by asking the questions and
then observing the lack of answers.

Richard.
 
B

Brian Adkins

If none of these issues had ever been solved, or nobody had ever managed
to create a genuinely cross-browser script then it certainly would by
now be acceptable to always give up the attempt. However, some perfect,
or near perfect, solutions have been demonstrated possible, so defeatism
is not indicated here.

That's good news. Are you aware whether this cross-browser JavaScript
knowledge has been consolidated and captured somewhere?
The "saves valuable time" argument gets made from time to time, but it
often does not hold water because for any given library there is still a
need to spend time learning how to use the library. And since no single
library can achieve applicability for all contexts without being both
ridiculously bloated and way over the top for most of those contexts
it then becomes necessary to spend time learning how to use many
libraries.

If the time to learn how to use a library exceeded the time saved by
using the library, then I wouldn't consider the library "good enough".

I like to write my own code as much as anyone, but it's not always the
best investment in time. Over the years, I've benefited by being able
to reuse libraries in various languages, so I was a bit surprised at
what seems like a prevalent attitude of "write it all yourself" on
this newsgroup. Do you think that comes from the constraints of using
libraries in JavaScript, or is there another factor I'm missing?
 
D

David Mark

That's good news. Are you aware if this cross-browser JavaScript
knowledge been consolidated and captured somewhere?



If the time to learn how to use a library exceeded the time saved by
using the library, then I wouldn't consider the library "good enough".

What makes you think you have learned how to use Prototype? The only
way to learn it is to read the code and the only thing you can learn
from that is that it is a bad idea to rely on it.
I like to write my own code as much as anyone, but it's not always the
best investment in time. Over the years, I've benefited by being able
to reuse libraries in various languages, so I was a bit surprised at
what seems like a prevalent attitude of "write it all yourself" on

The attitude is "don't rely on the incompetent output of those who
don't understand browser scripting."
this newsgroup. Do you think that comes from the constraints of using
libraries in JavaScript, or is there another factor I'm missing?

You should have said "the constraints of generalized open source
libraries."

What you are missing is that the "million monkey" approach does not
work for JavaScript libraries. This is especially true once a
severely flawed library (e.g. Prototype) achieves widespread use.
Proposals for even minor changes are debated to death and ultimately
dismissed due to paranoia about breaking existing workarounds on
thousands of sites.

If you must use Prototype or the like, you will have to deal with the
consequences sooner or later. There is no point in seeking
affirmation of your delusional decision here.
 
