Matt said:
> Dubious? Any specific criticisms are welcome.
What would be the point of enumerating the many specific implementation
flaws in your code when you refuse to even recognise the fundamental
design flaw?
> I disagree. There is a lot to be learned by using and inspecting
> pre-written libraries which solve the exact problem you are facing.
There may be something to be learnt about language use, or the
employment of DOM features, but there won't be much to be learnt about
script design. But then you argue that your libraries "solve" the
problem with 10 minutes work, so they may never be subject to
examination by individuals employing them. And an attitude that it is
better to refer people to copy and paste scripts, rather than assisting
them in better understanding the task and its issues, will not assist
them in untangling the code within those libraries.
> A search for "Richard Cornford" +javascript returns few results
That doesn't seem like a search combination calculated to locate code.
> And IMO, snippets of example code are useful and great for discussing
> the finer points of the language and its use, but they aren't
> solutions.
You have a very personal definition of "solution". To my mind a solution
modifies a situation such that there are no problems remaining. In your
definition a solution modifies the problem into something you are
willing to ignore.
> Writing small snippets to do very specific low-level tasks
> is one thing,
Encapsulating commonly needed and specific (usually low level) tasks
into efficient small components is a viable way of authoring re-usable
code. The individual components are not a solution to anything (except
not having to worry about how that particular aspect of the larger
problem is going to be handled), but they are the building blocks of
larger applications. And once any individual component has been
rigorously tested in isolation its behaviour can be relied upon to
contribute towards the creation of a reliable larger application.
Any sufficiently large collection of such components becomes the tools
with which anything can be built, and the nearest thing to a library
that is viable with browser scripting. Though such a collection would
never be imported complete into a web page, it would just be the source
from which suitable components were acquired for a specific application.
But there is no point trying to create and distribute such a collection
of components: the individuals using them need to understand what they
do and how they work in order to choose the correct component for any
situation, and to employ it effectively. And any author may prefer to
choose a level of abstraction that suits their individual style. It is
also more practical to build such a collection in a response to
requirements, so a new requirement may require the creation of a new
component but, if suitably designed, that component becomes available
for re-use in future similar situations.
To that end the greatest good can be done for the prospective browser
scripter by teaching them to build their own components.
> but writing solutions which solve real problems on real web
> sites using a wide range of browsers and supporting features that
> would be needed by a wide range of users is quite another.
This has no bearing. In the development of most things there will be a
stage where viability has been demonstrated (objectively) but no actual
application exists. What sort of progress would be possible if a
demonstration of viability was routinely dismissed because it preceded
its applications?
> In many cases, the "right way to do things" simply doesn't work in
> real-world situations, because of browser bugs and quirks, or because
> it's not generalized enough to be widely useful.
When the "right thing to do" has been demonstrated to be the only way of
handling all browsers (regardless of quirks and bugs) how can that not
be sufficiently general?
> Yes, and I still think you represent about 2% of javascript
> developers with that opinion
You do like to throw numbers about don't you. The implication of that
statement is that on the occasions when the suitability of libraries for
use in a browser scripting context has been debated on this group 98% of
the readers of (and participators in) those debates have disagreed with
the proposition that they are unsuitable, but not one of them has
managed to think up a single viable counter argument to post. So if
there is such a widespread belief in the suitability of libraries in
that context then it doesn't appear to have any rational basis.
> And in any given situation, it's either worth the effort, or it is
> not.
That is a running theme in these discussions, the people who can't do it
believe that there is more effort involved, the people who can do it
don't see much difference. But the latter group must be better qualified
to judge.
> Just because something can be done perfectly doesn't mean it
> justifies the time or expense to do so.
And if there is no significant difference in time or expense?
And last week we were discussing the consequences of needlessly
designing out 5% of turnover.
But whose 80/20 rule is this? What does it actually state? Do your
commercial clients know that, as a software developer, you feel entitled
to design them out of up to 20% of their turnover based on some spurious
"rule" when that is demonstrably avoidable?
> If everyone waited for perfect solutions before releasing software,
> we would never have any software!
Software houses seem very interested in maximising the reliability of
their output: running QA departments, investing in and implementing
design, testing and project management practices intended to minimise
problems, and rapidly identifying and rectifying any that remain. They
care very much that what they release is of the highest achievable
quality; if they could identify perfection prior to releasing software
then they would. QA exists specifically to identify things that need to
be fixed prior to release.
> I think it's always best to promote the best solution to any given
> problem. But a bunch of "code-perfect" snippets still require
> substantial effort and knowledge to assemble into a working solution.
> If someone comes here with a question about how to achieve X, we can
> either point out 25 ways to code correctly and write clean code which
> degrades perfectly and leave them with nothing but pieces to glue
> together, or we can offer them a packaged solution which will solve
> their problem in 10 minutes with 5 lines of code. I prefer the
> latter, which they can dig into and learn from.
Again you are applying your unusual definition of "solution". Take your
table sorting library: someone wants to sort the contents of a table by
clicking on column headers, a common enough desire. You direct them to
your table sorting library and 10 minutes later they have a web page in
which they can sort a table by clicking on the column headers (at least
on the sub-set of javascript capable browsers that fulfil your criteria
of suitability). You would say they have a "solution", they may also say
they have a solution, but what they actually have is a different
problem. Because now they have introduced a javascript dependency that
means no client-side scripting equals no table contents. (They may also
have rendered themselves subject to prosecution under some nation's
accessibility legislation, which may also be considered a problem.)
Now contrast that with the DOM table sorting scripts. OK, they only work
on javascript capable dynamic DOM browsers (but those fall on the
acceptable side of your 80/20 criteria anyway), so they detect the
required dynamic DOM support and only act when it is available, but the
table is defined in the HTML and only manipulated by the script. A worst
case failure may leave the user unable to sort the table (at least on
the client as this process is very amenable to direct server-side
fall-back) but whatever happens the user can still read the contents of
the table. The script provides a useful enhancement to the page, but
does not detract from its usability.
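The enhancement strategy just described can be sketched roughly as follows. This is my own illustration, not code from any actual table-sorting script: the comparison logic lives in a pure, separately testable function, and the DOM wiring acts only after the required features have been detected, so a failure leaves the HTML table intact and readable.

```javascript
// Pure helper: sort an array of row-value arrays on one column. Because
// it touches no DOM objects it can be tested in complete isolation.
function sortRows(rows, col, ascending) {
    var copy = rows.slice(0);
    copy.sort(function (a, b) {
        if (a[col] === b[col]) { return 0; }
        return (a[col] < b[col] ? -1 : 1) * (ascending ? 1 : -1);
    });
    return copy;
}

// DOM wiring (illustrative): the table is already defined in the HTML,
// so if the feature tests fail the script simply does nothing and the
// user can still read the table's contents.
function enhanceSortableTable(table) {
    if (!table || !table.tHead || !table.tBodies ||
        !table.tBodies[0] || !table.tBodies[0].rows) {
        return; // required dynamic DOM support absent: degrade silently
    }
    // ... attach click handlers to the header cells that read the column
    // values, call sortRows, and re-append the rows in the new order ...
}
```

The point of the split is that the worst-case failure mode is "no client-side sorting", never "no table", and the same sortRows logic could sit behind a server-side fallback.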
Your library solves one problem by introducing another, the DOM version
solves the same problem (to the same criteria of acceptability) but does
not introduce any other problems into the situation.
Indeed the DOM version can be layered over a system that displayed and
sorted tables on the server in a way that enabled it to short-circuit
requests for server-side sorting and do that locally whenever the
browser supported dynamic DOM manipulation. Your library would
necessitate two distinct back end processes to achieve similar
reliability, and the transition to the server-side backup in the event
of failure on the client side would be less than transparent. It may be
that the way the inappropriate fundamental design of your libraries
forces you to jump through hoops to create a reliable system is what
contributes to your impression that creating a reliable system is
difficult, time-consuming and expensive.
> Anyone viewing my pages containing _javascript libraries_ without
> javascript enabled is surely missing the point,
I would say that visiting a demonstration of any javascript code with a
javascript disabled browser is a very obvious test for the acceptability
of its degradation strategy (though the author may simplify the test
process by providing a means of directly disabling the script without
necessitating the disabling of javascript).
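One simple way of providing such a switch (purely an illustration; the query-string parameter name is invented) is to honour a flag in the page's URL before doing any enhancement, so a tester can load the same page with and without the script acting:

```javascript
// Hedged sketch: returns true when an invented query-string flag asks for
// the enhancement to be skipped, letting a tester exercise the degraded
// page without disabling javascript in the browser itself.
function enhancementDisabled(search) {
    return /(^|[?&])noenhance=1(&|$)/.test(search);
}

// Typical (illustrative) use at the top of the enhancement code:
// if (typeof location !== 'undefined' &&
//         enhancementDisabled(location.search)) {
//     // do nothing: the page behaves exactly as it would script-less
// }
```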
> and I don't care if the page is broken
> for them. I have a limited amount of time in my
> day, and I can't cater to everyone, nor do I try
You can cater for everyone, but not caring to try is guaranteed to mean
that you never will.
Richard.