'Flavors' of JS?

Jim Witte

What do people feel about this statement: "JS exists in so many
flavours across so many browsers (and across the html/xhtml/xml divides)
that it is becoming undesirable to include any on a site."

Jim

Richard Cornford

Jim said:
What do people feel about this statement: "JS exists in so many
flavours across so many browsers (and across the html/xhtml/xml
divides) that it is becoming undesirable to include any on a site."

It demonstrates as much understanding of browser scripting as your
proposition yesterday of 'making a "Scheme" version of JS'.

Javascript does not exist in "so many flavours", all current browsers
implement javascript based on ECMA 262, at least fully implementing the
2nd edition, with the majority implementing the 3rd. Having a formal
specification for the language provides a clear standard against which
implementations can be judged, and non-conforming implementations can be
identified and corrected so that they do conform. The result is that
there is ever less diversity in implementations, and already a
sufficiently consistent/reliable language core for any practical
purpose.

Browser DOMs have never been consistent so the current state of affairs
does not differ from anything that has gone before, but browsers are now
converging around the W3C DOM standards, providing an ever more
consistent core of functionality.

It has been demonstrated (often) that it is possible to create browser
scripts that exhibit planned behaviour in the face of all of the
permutations of environments that they may encounter on the Internet,
and provide a valuable enhancement when they encounter any environment
that is sufficiently supportive. It may not be a trivial design task to
create such a script but there is no reason for not using it once
created.

So, insofar as anything is "becoming", the task is becoming easier. The
desirability of the use of javascript is not related to the diversity of
environments that can be scripted. It can only be related to the
suitability of the implementation of the script for its environment(s).

In a browser script failure is inevitable (as it is always possible for
the user to disable scripting) so a suitable script must be designed so
it will not cripple (or even harm) a web page when it fails, but once
the design makes sense in the face of total failure there is always a
path of clean degradation to be followed when the browser does not
support the specific features required by the script.
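
As an illustrative sketch of that kind of gated enhancement (the 'data'
id and the stripeRows function are invented for the example; the real
enhancement could be anything):

function stripeRows(table){
    // purely cosmetic enhancement: shade alternate rows
    var rows = table.getElementsByTagName('tr');
    for(var i = 1; i < rows.length; i += 2){
        if(rows[i].style){
            rows[i].style.backgroundColor = '#eef';
        }
    }
}
window.onload = function(){
    // verify each feature needed before acting; if any test fails the
    // script does nothing and the HTML stands on its own
    if(document.getElementById && document.getElementsByTagName){
        var table = document.getElementById('data');
        if(table && table.getElementsByTagName){
            stripeRows(table);
        }
    }
};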

Richard.

Robin Becker

Richard said:
It demonstrates as much understanding of browser scripting as your
proposition yesterday of 'making a "Scheme" version of JS'.

Javascript does not exist in "so many flavours", all current browsers
implement javascript based on ECMA 262, at least fully implementing the
2nd edition, with the majority implementing the 3rd. Having a formal ........
In a browser script failure is inevitable (as it is always possible for
the user to disable scripting) so a suitable script must be designed so
it will not cripple (or even harm) a web page when it fails, but once
the design makes sense in the face of total failure there is always a
path of clean degradation to be followed when the browser does not
support the specific features required by the script.

Richard.

I see the following statistics for the web site that I have data for.
There are 8 browsers with some usage.
We use javascript, but sparingly. It is pretty hard to even
test against all these browsers.

46.79% MSIE 6.0
33.45% Mozilla/5.0
3.00% MSIE 5.5
2.22% MSIE 5.0
1.95% Googlebot/2.1 (+http://www.googlebot.com/bot.html)
1.88% Konqueror/3.2
1.55% libwww-perl/5.63
1.41% Python-urllib/2.0a1
0.81% Mozilla/3.01 (compatible;)
0.72% Konqueror/3.1
0.72% Yahoo! Slurp
0.60% Opera/7.2
0.50% Python-urllib/1.15
4.40% Other/unknown

Richard Cornford

Robin Becker wrote:
I see the following statistics for the web site that I have
data for. There are 8 browsers with some usage.

And this is relevant because?
We use javascript, but sparingly. It is pretty hard to even
test against all these browsers.
<snip>

It always was impossible to test against all browsers, there are just too
many, and always some you have never heard of. That doesn't mean it is
impossible to author for all browsers.

But finding it difficult to test with 8 browsers doesn't sound like you
are trying very hard. The computer I am currently sitting at has 24
installed web browsers on the partition it is currently booted from, and
two more bootable partitions with 30-odd more, and that is only one of
the computers I use for browser testing.

Richard.

Robin Becker

Richard said:
Robin Becker wrote:
<snip>
.......
It always was impossible to test against all browsers, there are just too
many, and always some you have never heard of. That doesn't mean it is
impossible to author for all browsers.

But finding it difficult to test with 8 browsers doesn't sound like you
are trying very hard. The computer I am currently sitting at has 24
installed web browsers on the partition it is currently booted from, and
two more bootable partitions with 30-odd more, and that is only one of
the computers I use for browser testing.

Richard.

Unfortunately I don't get paid to do this full time. Like many, our company has
no real webmaster and certainly no javascript experts. Most of the forms we
produce are tested by the end user. Their brief is usually Mozilla latest, IE
5-6 on PC with Mac/Linux browsers a poor relation. The argument against using
complex javascript is not whether it's feasible, but whether it is economic.
It's just not easy/cheap enough for most producers. It is arguable that, by
coping with all the horrors of the web, the experts just delay standards for JS
and DOM. Differentiation is in the interest of the JS programmers and probably
the browser suppliers.

Douglas Crockford

What do people feel about this statement: "JS exists in so many
flavours across so many browsers (and across the html/xhtml/xml divides)
that it is becoming undesirable to include any on a site."

It is false in many ways.

The JavaScript language processors in browsers are remarkably
consistent. Microsoft's JScript is far more compliant with its official
language standard than Microsoft's C++ is.

HTML/XHTML/XML are independent of the scripting language.

Browsers are horribly inconsistent in their interpretation of HTML and
especially in the interfaces they present to the scripting language.
This is the source of the difficulty. We are still suffering from
mistakes made in the browser wars and the inadequacy of web standards.

In spite of that, it is possible to design scripts that can run in a
wide variety of browsers. It requires knowledge and discipline.

The functionality provided by local computational resources is very
desirable.

And finally, "flavour" is spelled "flavor".

http://www.crockford.com/javascript/javascript.html

Robin Becker

Douglas Crockford wrote:

......
And finally, "flavour" is spelled "flavor".

In English it is spelt "flavour", the "flavor" variant is American. Probably
need some dynamic JS to detect the reader :)

Richard Cornford

Robin Becker wrote:
Unfortunately I don't get paid to do this full time. Like many our
company has no real webmaster and certainly no javascript experts.
Most of the forms we produce are tested by the end user.

So every development mistake equals a disgruntled user?
Their brief is usually Mozilla latest, IE 5-6 on PC
with Mac/Linux browsers a poor relation.

Which explains your log statistics: you design for a limited set of
browsers/configurations, the users of other browsers/configurations
don't hang around clocking up log entries (because they rapidly realise
they are wasting their time), and then you use the log entries to
justify designing for the browsers that your visitors appear to use.
It's a chicken and egg relationship that becomes a vicious circle.
The argument against using complex javascript is not
whether it's feasible, but whether it is economic. It's just not
easy/cheap enough for most producers.

Cross-browser scripts do not have to be complex. Indeed K.I.S.S. is a
very worthwhile design principle to follow.

The economic relationship is not being meaningfully judged. It is always
going to be relatively expensive to employ someone who doesn't know how
to do something to carry out that task. Suppose someone wanted to employ
me to repair a car engine. Given suitable tools/equipment and reference
material I probably could carry out that task, but I would be learning
as I went, and that would be expensive in terms of an hourly rate
because it would take many additional hours for me to learn enough to
succeed at that task (and probably expensive in terms of parts as I
broke things making mistakes).

But I can complete a properly specified browser script as quickly as
anyone else could write a browser specific version, so why would the
cross-browser alternative be more expensive? The economic consideration
arises from employing people for the task who lack the skill to do it
better; basically just the consequences of an initial false economy.

However, it comes down to the question of whether a web site is a
revenue source or not. If it isn't then why would any company bother? If
it is a revenue source then why *unnecessarily* restrict its potential
to produce revenue?
It is arguable that, by coping with all the horrors of the web,
the experts just delay standards for JS and DOM.

Those standards state very clearly that client-side scripting is an
optional extra (just as CSS is). So the universal adoption of DOM
standards wouldn't preclude the need to design for the consequences of
total script failure (as the users of any browsers will always be at
liberty to turn scripting off). And once that possibility has been
covered a script that exclusively employs DOM standard methods is
cross-browser, because if the browser doesn't support those standards it
only needs to detect that fact and cleanly degrade to the underlying
HTML (and back-end systems) that would be all that was available to the
users with scripting disabled/unavailable. That is, an exclusively DOM
standard script should still exhibit planned behaviour in the face of any
browser environment it encounters, regardless of whether that browser
supports the required standard (and/or scripting).

But if it turns out that the browser actually supports a non-standard
feature (possibly as an alternative to an unsupported DOM method) then
there is no good reason for a script not to take advantage of it when
available.

However, in a commercial context, what sort of argument goes "It is the
user's choice of browser that justifies our not doing business with
them", when all browsers are capable of supporting HTTP and HTML (and
particularly HTML forms) and that is all that is needed to actually
carry out business transactions over the Internet?
Differentiation is in the interest of the JS programmers
and probably the browser suppliers.

Differentiation is not in the interests of JS programmers; we are not
masochists. We are working without the certainty of a known environment,
and that is not going to change even with the universal adoption of DOM
standards because the top of the range desktop browsers will always have
additional non-standard features that are not available to all browsers
(on all platforms) and new revisions of (and extensions to) those
standards will continue to be produced. There will also always be a
desire to exploit the available features of any browser to the maximum
extent possible, and the techniques that allow viable scripting in an
unknown environment will still be capable of meaningfully accommodating
that desire (just as they are now).

Richard.
 
R

Richard Cornford

Robin said:
Douglas Crockford wrote:
.....

In English it is spelt "flavour", the "flavor" variant is American.
<snip>

The distinction should probably be between British English and American
English (I wonder how Australians normally spell it?). English is no
longer exclusively the native language of England (and hasn't been for
some considerable time).

But the OP appears to be in the USA so may appreciate the correction for
future use.

Richard.

optimistx

What about this strategy:

Project1. Construct the application with pure html, no javascript. Cost is
C1, calendar time T1

Project2. When the project P1 has been completed, everything is working
well, customers are satisfied, boss is happy,
make a new proposal to the management.

Probably cost C2 is about the same order of magnitude as C1, and the duration
T2 about the same as T1.

Which advantages can I show to the management in order to persuade them to
accept project P2?

a) some seconds in access speeds sometimes? (when checking form input,
mainly).

b) some frills, whistles, bells, which are completely unnecessary and even
annoying for a serious customer?

The future for me as an enthusiastic javascript programmer does not look
very bright, if this is true. Toy language, amusement park for teenagers?

Robin Becker

Richard Cornford wrote:

......
Which explains your log statistics: you design for a limited set of
browsers/configurations, the users of other browsers/configurations
don't hang around clocking up log entries (because they rapidly realise
they are wasting their time), and then you use the log entries to
justify designing for the browsers that your visitors appear to use.
It's a chicken and egg relationship that becomes a vicious circle.

I think actually they specify their desires against a larger
set of statistics. In practice they're doing 90-10% rules of thumb. The first
90% takes half the total development. Somewhere along the path to 100% browser
acceptance is a cut-off point where the additional work cannot be justified.

If a big client rings up our end user saying that something's amiss with his Mac
IE 5.2 then we fix it.
.....
Cross-browser scripts do not have to be complex. Indeed K.I.S.S. is a
very worthwhile design principle to follow.

I agree totally, but their complexity is in the knowledge base that I or any one
else needs to have to get 90, 95 or 100% usability. If I look at any of the web
sites devoted to listing the foibles of the various browsers there are probably
thousands of differences. That is complexity.
The economic relationship is not being meaningfully judged. It is always
going to be relatively expensive to employ someone who doesn't know how

No matter who is employed to do web programming there will be bugs. The cost of
an expert such as yourself may not be low enough to justify the additional 4% of
browsers that might be supportable. The testing costs alone of supporting an
additional 30 browsers would be considerable.

The mechanic argument is relevant. Average cars are now a job for specialists only.
Differentiation is not in the interests of JS programmers; we are not
masochists. We are working without the certainty of a known environment,

I believe earlier you said that companies should employ the best qualified
people. You assert as well that we can cleanly support all browsers. The
conclusion is obvious. Differentiation certainly benefits expert js programmers.
I assume they are masochists only if they really believe it's desirable to
support 30-50 platforms.

Richard Cornford

Robin said:
Richard Cornford wrote:

I think actually they specify their desires against a larger
set of statistics. In practice they're doing 90-10% rules of thumb.
The first 90% takes half the total development. Somewhere along the
path to 100% browser acceptance is a cut-off point where the
additional work cannot be justified.

If a big client rings up our end user saying that something's amiss
with his Mac IE 5.2 then we fix it.

And the big potential client using Mac IE 5.2 (or Safari)? Doesn't
he/she go and look elsewhere?
I agree totally, but their complexity is in the knowledge base that I
or any one else needs to have to get 90, 95 or 100% usability. If I
look at any of the web sites devoted to listing the foibles of the
various browsers there are probably thousands of differences. That is
complexity.

You are looking at the details and missing the design issue. For any
script there are just two possibilities: the browser fully supports the
features required by the script, or it doesn't (with all javascript
incapable browsers falling into the latter category). To support 100% of
browsers (without even knowing anything about all of those browsers) it
is only necessary to design the HTML so it makes sense when the script
fails and write the script so that it detects the availability of the
features that it needs prior to attempting to use them and only acts
when they have been verified as being available.

That simple design pattern covers 100% of browsers from the outset, the
work is in maximising the number of browsers that fall into the
supporting category for each individual script. But now it doesn't
matter if script development cannot push that past 90% of javascript
capable browsers because the remaining 10% are still supported by the
site (along with all javascript incapable browsers) as the underlying
HTML makes sense without the script.
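
To make the pattern concrete, a minimal sketch (the ids and functions
are invented for the example, and this is only one way to arrange the
test):

function supportsRequiredFeatures(){
    // one gate for the whole script: every feature the enhancement
    // needs, verified by inspection rather than by browser name
    return !!(document.getElementById &&
              document.createElement &&
              document.createTextNode);
}
function enhanceSearchForm(){
    var form = document.getElementById('searchForm');
    if(!form){return;}    // expected markup absent: do nothing
    var hint = document.createElement('p');
    hint.appendChild(document.createTextNode('Tip: press Enter to search.'));
    form.appendChild(hint);
}
if(supportsRequiredFeatures()){
    // only browsers passing the gate get the enhancement; all others,
    // including those with scripting disabled, use the HTML form as-is
    window.onload = enhanceSearchForm;
}
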
No matter who is employed to do web programming there will be bugs.
The cost of
an expert such as yourself may not be low enough to justify the
additional 4% of browsers that might be supportable.

Where does 4% come from? The most often quoted figure for javascript
disabled/incapable browsers is 8-12% (so the javascript capable browsers
that lack the required features would be on top of that).

And it would be a mistake to assume that any expertise in browser
scripting carries a premium, it doesn't. Most of the IT world regard
javascript as a toy language that any fool can write (and are usually
happy to assign the task to the nearest fool available, with the
consequences that we observe on the Internet).

But the numbers game is a little more complex than just getting back the
potential customers lost through failing to support their browsers. Let's
say that is 5%, and let's say that 20 competitors are doing the same and
turning away 5% of their potential customers as well (and they are). Now
the first of those sites to welcome all customers regardless of their
browser is not just regaining the 5% they had been turning away but is
potentially picking up the 5% that each of those competitors is still
turning away (5*20 == 100).

Coincidentally, I have just been reading an article reporting a
£50,000.00 out of court settlement of a case brought using the UK
disability discrimination act against a web site for denying access to
its services to the users of the types of browsers/browser
configurations favoured by some groups covered by that act.
The testing costs alone of supporting an
additional 30 browsers would be considerable.

The mechanic argument is relevant. Average cars are now a job for
specialists only.

Which is why it makes sense to hire a suitable specialist to do the job.
I believe earlier you said that companies should employ best
qualified people.

Suitably qualified will do.
You assert as well that we can support cleanly all browsers.
Absolutely.

The conclusion is obvious. Differentiation certainly
benefits expert js programmers.

Differentiation might necessitate browser scripting expertise, but you
won't find many javascript programmers who wouldn't have preferred
complete standardisation of browsers from the outset. Unfortunately we
have to deal with the world as it is rather than as we would like it to
be, and at least the resulting design challenge is a worthy application
for the intellect.
I assume they are masochists only if they really believe
it's desirable to support 30-50 platforms.

There are something in the order of 130 existing web browsers, and new
ones all the time.

Richard.

Matt Kruse

Richard Cornford said:
That simple design pattern covers 100% of browsers from the outset, the
work is in maximising the number of browsers that fall into the
supporting category for each individual script. But now it doesn't
matter if script development cannot push that past 90% of javascript
capable browsers because the remaining 10% are still supported by the
site (along with all javascript incapable browsers) as the underlying
HTML makes sense without the script.

Richard, you write novels on this group every day, but I've never seen
real-world examples of your work, or a web site of the libraries you've
written, or anything. Do you have anything?

I mean, if you actually have time to write all the stuff you do here every
day, and create great reusable code to the standards you push, and put that
into practice in a real-world setting with a project manager hounding you
and clients making ridiculous requests, then more power to you. I'd like to
see the results. I mean, is it all talk and wishful thinking, or do you
actually manage to practice what you preach? The real world is not always as
ideal as you make it sound :)

Richard Cornford

optimistx said:
What about this strategy:

Project1. Construct the application with pure html, no
javascript. Cost is C1, calendar time T1

Project2. When the project P1 has been completed, everything
is working well, customers are satisfied, boss is happy,
make a new proposal to the management.

Probably cost C2 is about the same order of magnitude as
C1, and the duration T2 about the same as T1.

You haven't actually stated what Project 2 is, which makes judging what
it would cost and how long it would take extremely speculative.

If it is a complete replacement for Project 1 then it probably will cost
much the same (aided only by the fact that the back-end programmers will
be able to lift most of the logic directly from Project 1). But that
also renders much of the cost of Project 1 wasted.

If it is a layering of client-side scripting over Project 1 then there
is no reason to expect it to be nearly as expensive or time consuming.
Granted if your proposed design strategy is to be taken literally and
Project 1 has been implemented with no consideration of its suitability
for client-side manipulation the results will be more expensive and time
consuming than they need to be.

The best of both worlds would be most effectively achieved by planning
for the layering of client-side scripting over a 100% reliable
server-side fall-back from the outset, but then you don't have two
projects any more, and most of the design work for what was project 2
now happens in parallel with the designing of the back-end.

Or were you thinking in terms of that stupid "let's deal with disabled
users by having two web sites, one with the full presentation and a text
only version for anyone who can't use the first", with its consequent
ongoing maintenance burden, and the eventual divergence of the content
as the text version gets a low priority and is eventually forgotten?
Which advantages can I show to the management in order to
persuade them to accept project P2?

a) some seconds in access speeds sometimes? (when
checking form input, mainly).

If that is the only client-side functionality then your cost estimate is
way over the top. At this stage the validation algorithms have been
specified and Project 1 already contains a reference implementation (and
if the back-end was in JScript ASP the validation can virtually be cut
and pasted into the client-side).
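
For example, a client-side mirror of one (invented) validation rule
might be layered on like this, with the form still submitting and the
server still performing the authoritative check:

function isValidQuantity(value){
    // the same rule the (hypothetical) server-side validator applies
    return (/^\d{1,4}$/).test(value);
}
window.onload = function(){
    if(!document.getElementById){return;}          // feature gate
    var form = document.getElementById('orderForm');
    if(!form || !form.elements){return;}
    form.onsubmit = function(){
        var field = this.elements['quantity'];
        if(field && !isValidQuantity(field.value)){
            alert('Please enter a whole number of up to 4 digits.');
            return false;    // cancel the submit: saves a round trip
        }
        return true;         // the server still validates regardless
    };
};
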
b) some frills, whistles, bells, which are completely
unnecessary and even annoying for a serious customer?
<snip>

A strange thing with management (especially marketing management) is
that they are the ones who wanted those bells and whistles in the first
place.

In practice any new project will probably start from the position of an
existing site that is already dependent on a limited range of javascript
capable browsers, and has no shortage of bells and whistles that
somebody in the decision making process thought were a good idea. If
they are going to fund a replacement they are going to want it to be
just as capable as its predecessor was.

So the question is, can a replacement be designed that exploits the
capabilities of the browsers that the old version directly supported to
the same (or greater) extent and still be 100% usable and reliable for
the users of any other browser (or unusual configuration of
those browsers)?

And the answer is yes, they can have just as much dynamic front end
stuff on at least as many browsers as it ever worked with, and they can
have 100% reliability for all through back-end fall-back, and they can
have it all in the same site. With the resulting maximised customer base
(and potential to avoid falling foul of national accessibility
legislation, where applicable).

Richard.

Richard Cornford

Matt Kruse wrote:
Richard, you write novels on this group every day,

The hour a day for two months I spent training myself to touch type has
proved one of the most productive things I have ever done over the
intervening years.

But are you complaining? What sort of discussion forum would c.l.j. be if
everyone followed your example and contributed no more than references
to dubious javascript libraries? That is hardly going to contribute to
anyone's understanding and skills, a more universally functional
Internet, or the already tarnished reputation of javascript.
but I've never seen
real-world examples of your work,

If you are failing to find examples of scripts written by me that
demonstrate the application of the principles under discussion here then
I don't think you can be trying very hard.
or a web site of the libraries you've
written, or anything.

Didn't I write one of those novels explaining to you why libraries are
an inappropriate concept for Internet browser scripting?
Do you have anything?

In what sense?
I mean, if you actually have time to write all the stuff you do here
every day, and create great reusable code to the standards you push,
and put that into practice in a real-world setting with a project
manager hounding you and clients making ridiculous requests, then
more power to you. I'd like to see the results. I mean, is it all
talk and wishful thinking, or do you actually manage to practice what
you preach? The real world is not always as ideal as you make it
sound :)

Why do you think that would have any bearing? It either is possible to
create scripts that match their execution to the browser's ability to
support them, and cleanly degrade to viable underlying HTML when not
supported, avoiding introducing a dependency on client-side scripting
into a system that is otherwise 100% reliable, or it is not.

As it has been demonstrated that it in fact is possible to create
scripts to that standard it makes sense that this group should promote
that standard and disseminate an understanding of the techniques needed
to achieve it. Particularly bearing in mind that on the occasions that a
particular proposed action is identified as not amenable to satisfactory
clean degradation the javascript dependent alternatives are always
demonstrably worse, as anyone viewing any page employing any of your
libraries with a javascript disabled browser would rapidly discover.

Richard.

Matt Kruse

Richard Cornford said:
But are you complaining? What sort of discussion forum would c.l.j. be if
everyone followed your example and contributed no more than references
to dubious javascript libraries?

Dubious? Any specific criticisms are welcome.

There are various types of discussions on this group. One type is "Is there
a solution for X?" in which case I post my solution if I have one. My code
is fairly well-tested by thousands of users, and I get many thanks from
users of this group for directing them to a solution they can immediately
implement with success, rather than preaching about why they shouldn't be
doing what they need to do.
That is hardly going to contribute to
anyone's understanding and skills, a more universally functional
Internet, or the already tarnished reputation of javascript.

I disagree. There is a lot to be learned by using and inspecting pre-written
libraries which solve the exact problem you are facing.
If you are failing to find examples of scripts written by me that
demonstrate the application of the principles under discussion here then
I don't think you can be trying very hard.

A search for "Richard Cornford" +javascript returns few results :)
And IMO, snippets of example code are useful and great for discussing the
finer points of the language and its use, but they aren't solutions.
Writing small snippets to do very specific low-level tasks is one thing, but
writing solutions which solve real problems on real web sites using a wide
range of browsers and supporting features that would be needed by a wide
range of users is quite another.

In many cases, the "right way to do things" simply doesn't work in
real-world situations, because of browser bugs and quirks, or because it's
not generalized enough to be widely useful.
Didn't I write one of those novels explaining to you why libraries are
an inappropriate concept for Internet browser scripting?

Yes, and I still think you represent about 2% of javascript developers with
that opinion :)
In what sense?

An example. A finished product. I'm not being facetious. I've learned from
your posts and your articles, and I would find it interesting to see a
finished site which degraded nicely for all browsers and implemented complex
functionality in the ways that you recommend. Or multiple sites.
It either is possible to
create scripts that match their execution to the browser's ability to
support them, and cleanly degrade to viable underlying HTML when not
supported, avoiding introducing a dependency on client-side scripting
into a system that is otherwise 100% reliable, or it is not.

And in any given situation, it's either worth the effort, or it is not.
Just because something can be done perfectly doesn't mean it justifies the
time or expense to do so. The 80/20 rule.
If everyone waited for perfect solutions before releasing software, we would
never have any software!
As it has been demonstrated that it in fact is possible to create
scripts to that standard it makes sense that this group should promote
that standard and disseminate an understanding of the techniques needed
to achieve it.

I think it's always best to promote the best solution to any given problem.
But a bunch of "code-perfect" snippets still require substantial effort and
knowledge to assemble into a working solution. If someone comes here with a
question about how to achieve X, we can either point out 25 ways to code
correctly and write clean code which degrades perfectly and leave them with
nothing but pieces to glue together, or we can offer them a packaged
solution which will solve their problem in 10 minutes with 5 lines of code.
I prefer the latter, which they can dig into and learn from.
Particularly bearing in mind that on the occasions that a
particular proposed action is identified as not amenable to satisfactory
clean degradation the javascript dependent alternatives are always
demonstrably worse, as anyone viewing any page employing any of your
libraries with a javascript disabled browser would rapidly discover.

Anyone viewing my pages containing _javascript libraries_ without javascript
enabled is surely missing the point, and I don't care if the page is broken
for them. I have a limited amount of time in my day, and I can't cater to
everyone, nor do I try :)

Jim Ley

Project1. Construct the application with pure html, no javascript. Cost is
C1, calendar time T1

Project2. When the project P1 has been completed, everything is working
well, customers are satisfied, boss is happy,
make a new proposal to the management.

Probably cost C2 is about the same order of magnitude as C1, and the duration
T2 about the same as T1.

Which advantages can I show to the management in order to persuade them to
accept project P2?

User experience: get the management out of the office (or have them use
their GPRS connection or something back to the system) and they'll see
how long each page navigation takes, how long each stupid mistake on a
form takes, and how much server resource is used (make sure you do
it on a roaming GPRS at 10 GBP per MB).

Get some test users in - the tea lady, the pizza delivery guy,
whoever - and show how frustrating it is and how many mistakes the users
make as they're not getting enough help, or the navigation is
counter-intuitive, etc.

Show them how good JS can improve all those situations, resulting in
more sales or whatever
The future for me as an enthusiastic javascript programmer does not look
very bright, if this is true. Toy language, amusement park for teenagers?

Nope, there's money in JS, it's what I do for a crust.

Also remember we're in a particularly boring time of web-authoring,
there's been nothing new in ages. However XBL * is a very nice
looking technology and that relies on javascript to do anything
interesting, and who knows even XHTML 2.0 might have something
interesting?

Jim.

* The forthcoming W3 version, not the original Mozilla one.

Jim Ley

Yes, and I still think you represent about 2% of javascript developers with
that opinion :)

None of the large commercial projects I've worked on use libraries,
yet many run to thousands of lines of JS. Even the ones where I've had
no input at all to the system design.

Jim.

Richard Cornford

Matt said:
Dubious? Any specific criticisms are welcome.

What would be the point of enumerating the many specific implementation
flaws in your code when you refuse to even recognise the fundamental
design flaw?

I disagree. There is a lot to be learned by using and inspecting
pre-written libraries which solve the exact problem you are facing.

There may be something to be learnt about language use, or the
employment of DOM features, but there won't be much to be learnt about
script design. But then you argue that your libraries "solve" the
problem with 10 minutes work, so they may never be subject to
examination by individuals employing them. And an attitude that it is
better to refer people to copy and paste scripts, rather than assisting
them in better understanding the task and its issues, will not assist
them in untangling the code within those libraries.
A search for "Richard Cornford" +javascript returns few results :)

That doesn't seem like a search combination calculated to locate code.
And IMO, snippets of example code are useful and great for discussing
the finer points of the language and its use, but they aren't
solutions.

You have a very personal definition of "solution". To my mind a solution
modifies a situation such that there are no problems remaining. In your
definition a solution modifies the problem into something you are willing
to ignore.
Writing small snippets to do very specific low-level tasks
is one thing,

Encapsulating commonly needed and specific (usually low level) tasks
into efficient small components is a viable way of authoring re-usable
code. The individual components are not a solution to anything (except
not having to worry about how that particular aspect of the larger
problems is going to be handled), but they are the building blocks of
larger applications. And once any individual component has been
rigorously tested in isolation its behaviour can be relied upon to
contribute towards the creation of a reliable larger application.

Any sufficiently large collection of such components becomes the tools
with which anything can be built, and the nearest thing to a library
that is viable with browser scripting. Though such a collection would
never be imported complete into a web page, it would just be the source
from which suitable components were acquired for a specific application.
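
As an illustration of the sort of component meant here (a sketch only;
the exact interface and level of abstraction are a matter of the
author's preference):

function addListener(element, eventName, handler){
    // one narrow task, feature-tested in isolation
    if(element.addEventListener){                  // W3C DOM Events
        element.addEventListener(eventName, handler, false);
        return true;
    }else if(element.attachEvent){                 // older JScript DOM
        return element.attachEvent('on' + eventName, handler);
    }
    return false;  // caller learns the feature is absent and can degrade
}

A script using it only acts when the component reports success, so the
absence of the feature is just another branch of the clean degradation
path.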

But there is no point trying to create and distribute such a collection
of components: the individuals using them need to understand what they
do and how they work in order to choose the correct component for any
situation, and employ it effectively. And any author may prefer to
choose a level of abstraction that suits their individual style. It is
also more practical to build such a collection in response to
requirements, so a new requirement may require the creation of a new
component but, if suitably designed, that component becomes available
for re-use in future similar situations.

To that end the greatest good can be done for the prospective browser
scripter by teaching them to build their own components.
but writing solutions which solve real problems on real web
sites using a wide range of browsers and supporting features that
would be needed by a wide range of users is quite another.

This has no bearing. In the development of most things there will be a
stage where viability has been demonstrated (objectively) but no actual
application exists. What sort of progress would be possible if a
demonstration of viability was routinely dismissed because it preceded
its applications?
In many cases, the "right way to do things" simply doesn't work in
real-world situations, because of browser bugs and quirks, or because
it's not generalized enough to be widely useful.

When the "right thing to do" has been demonstrated to be the only way of
handling all browsers (regardless of quirks and bugs) how can that not
be sufficiently general?
Yes, and I still think you represent about 2% of javascript
developers with that opinion :)

You do like to throw numbers about, don't you? The implication of that
statement is that on the occasions when the suitability of libraries for
use in a browser scripting context has been debated on this group 98% of
the readers of (and participators in) those debates have disagreed with
the proposition that they are unsuitable, but not one of them has
managed to think up a single viable counter argument to post. So if
there is such a widespread belief in the suitability of libraries in
that context then it doesn't appear to have any rational basis.

And in any given situation, it's either worth the effort, or it is
not.

That is a running theme in these discussions, the people who can't do it
believe that there is more effort involved, the people who can do it
don't see much difference. But the latter group must be better qualified
to judge.
Just because something can be done perfectly doesn't mean it
justifies the time or expense to do so.

And if there is no significant difference in time or expense?
The 80/20 rule.

And last week we were discussing the consequences of needlessly
designing out 5% of turnover.

But whose 80/20 rule is this? What does it actually state? Do your
commercial clients know that, as a software developer, you feel entitled
to design them out of up to 20% of their turnover based on some spurious
"rule" when that is demonstrably avoidable?
If everyone waited for perfect solutions before releasing software,
we would never have any software!

Software houses seem very interested in maximising the reliability of
their output. Running QA departments, investing in and implementing
design, testing and project management practices intended to minimise
problems, and rapidly identify and rectify any that remain. They care
very much that what they release is of the highest achievable quality,
if they could identify perfection prior to releasing software then they
would. QA is there specifically to identify things that need to be fixed
prior to release.
I think it's always best to promote the best solution to any given
problem. But a bunch of "code-perfect" snippets still require
substantial effort and knowledge to assemble into a working solution.
If someone comes here with a question about how to achieve X, we can
either point out 25 ways to code correctly and write clean code which
degrades perfectly and leave them with nothing but pieces to glue
together, or we can offer them a packaged solution which will solve
their problem in 10 minutes with 5 lines of code. I prefer the
latter, which they can dig into and learn from.

Again you are applying your unusual definition of "solution". Take your
table sorting library: someone wants to sort the contents of a table by
clicking on column headers, a common enough desire. You direct them to
your table sorting library and 10 minutes later they have a web page in
which they can sort a table by clicking on the column headers (at least
on the sub-set of javascript capable browsers that fulfil your criteria
of suitability). You would say they have a "solution", they may also say
they have a solution, but what they actually have is a different
problem. Because now they have introduced a javascript dependency that
means no client-side scripting equals no table contents. (They may also
have rendered themselves subject to prosecution under some nation's
accessibility legislation, which may also be considered a problem.)

Now contrast that with the DOM table sorting scripts. OK, they only work
on javascript capable dynamic DOM browsers (but those fall on the
acceptable side of your 80/20 criteria anyway), so they detect the
required dynamic DOM support and only act when it is available, but the
table is defined in the HTML and only manipulated by the script. A worst
case failure may leave the user unable to sort the table (at least on
the client as this process is very amenable to direct server-side
fall-back) but whatever happens the user can still read the contents of
the table. The script provides a useful enhancement to the page, but
does not detract from its usability.
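
In outline, such a script might look like the following sketch (it
assumes a single tbody whose first row holds the header cells, compares
values as strings, and uses an invented table id; a real version would
need more than this):

function cellText(cell){
    // textContent where available, innerText as a fallback
    return (typeof cell.textContent == 'string') ?
                                     cell.textContent : cell.innerText;
}
function sortTableByColumn(table, columnIndex){
    var tbody = table.tBodies[0], rows = [], i;
    for(i = 1; i < tbody.rows.length; i++){  // row 0 assumed to be headers
        rows[rows.length] = tbody.rows[i];
    }
    rows.sort(function(a, b){
        var x = cellText(a.cells[columnIndex]),
            y = cellText(b.cells[columnIndex]);
        return (x < y) ? -1 : ((x > y) ? 1 : 0);
    });
    for(i = 0; i < rows.length; i++){
        tbody.appendChild(rows[i]);  // re-appending moves each row into place
    }
}
function makeSortable(tableId){
    // feature gate: only act when everything needed can be verified
    if(!document.getElementById){return;}
    var table = document.getElementById(tableId);
    if(!table || !table.tBodies || !table.tBodies.length ||
       !table.rows || !table.rows[0] || !table.rows[0].cells){return;}
    var headers = table.rows[0].cells, i;
    for(i = 0; i < headers.length; i++){
        headers[i].onclick = (function(col){
            return function(){sortTableByColumn(table, col);};
        })(i);
    }
}
window.onload = function(){makeSortable('results');};  // id is invented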

Your library solves one problem by introducing another; the DOM version
solves the same problem (to the same criteria of acceptability) but does
not introduce any other problems into the situation.

Indeed the DOM version can be layered over a system that displayed and
sorted tables on the server in a way that enabled it to short-circuit
requests for server-side sorting and do that locally whenever the
browser supported dynamic DOM manipulation. Your library would
necessitate two distinct back end processes to achieve similar
reliability, and the transition to the servers-side backup in the event
of failure on the client side would be less than transparent. It is
maybe the way that the inappropriateness of the fundamental design of
your libraries would require you to jump through hoops to create a
reliable system that is contributing to your impression that creating a
reliable system is difficult, time-consuming and expensive.
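
For instance (a sketch, with invented markup assumptions): if each
column header in the served HTML is an ordinary link back to the
server's sorting script, the client-side version only has to intercept
the click when it can do the work locally. This initialisation would
replace the one in the sketch above, reusing its sortTableByColumn:

window.onload = function(){
    if(!document.getElementById){return;}
    var table = document.getElementById('results');
    if(!table || !table.rows || !table.rows[0] ||
       !table.rows[0].getElementsByTagName){return;}
    // assumes one link per header cell, in column order, each with an
    // href that asks the server to return the page sorted by that column
    var links = table.rows[0].getElementsByTagName('a'), i;
    for(i = 0; i < links.length; i++){
        links[i].onclick = (function(col){
            return function(){
                sortTableByColumn(table, col);  // from the sketch above
                return false;   // cancel the navigation: no server round trip
            };
        })(i);
    }
    // if any test above fails the links still work and the server
    // returns a freshly sorted page, exactly as it would with no script
};
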
Anyone viewing my pages containing _javascript libraries_ without
javascript enabled is surely missing the point,

I would say that visiting a demonstration of any javascript code with a
javascript disabled browser is a very obvious test for the acceptability
of its degradation strategy (though the author may simplify the test
process by providing a means of directly disabling the script without
necessitating the disabling of javascript).
and I don't care if the page is broken
for them. I have a limited amount of time in my
day, and I can't cater to everyone, nor do I try :)

You can cater for everyone, but not caring to try is guaranteed to mean
that you never will.

Richard.