Matt said:
"Richard Cornford" wrote:
Nevertheless, the question wasn't whether or not it should be done,
but how to do it.
Almost all questions asked are how to do something, but many answers
turn out to be that it shouldn't be done at all. Someone wishing to save
their server the burden of doing server-side form validation, for
example, might ask how and be told how to do their validation
client-side, but they *should* be told that they *must* repeat the
validation on the server or leave it as exclusively server-side.
It is irresponsible not to consider the wisdom of any proposed action;
nobody should be handing out loaded guns upon request.
I highly doubt it. I bet at least 95% (if not more) of users visiting
a commerce site have javascript enabled.
Are you falling for the statistics? The most commonly reported
statistics for javascript-incapable/disabled browsers are 8-12%, but the
range of reported statistics that I have seen is 2-80%, which says quite
a lot about the usefulness of reported statistics.
But assuming your 95% guesstimate is correct: now go to the business
people and ask them if they are willing to sacrifice 5% of turnover as a
consequence of an arbitrary and unnecessary design decision. Are they
going to agree to the unnecessary sacrifice or call for the
alternatives?
If you require javascript, you'll only alienate
a small portion of potential buyers, which may
be offset by reduced development costs.
For some reason it is only the people who don't know how to implement
e-commerce sites that are 100% reliable who think that it must cost more
to do so, and they are not in a good position to judge.
If something is built from scratch with the intention of being 100%
reliable then it would not cost any more than any other similar site,
there is no more actual work involved.
You have seen one that degrades to a usable and simple pure HTML UI
without client-side scripting? Where? That has got to be worth a look.
A dynamic option list library shouldn't need to consider the
conditions where it isn't used.
It should be reporting when it is unusable (unsupported) though, else
the script that would otherwise employ it will not know that it needs to
fall back to the server.
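A hedged sketch of that reporting pattern (all names here are illustrative, not from any real library): the factory feature-tests everything it depends on, including the non-standard Option constructor, and returns null when unsupported, so the calling script knows to leave the server-side path in place.

```javascript
// Illustrative sketch of a dynamic option-list helper that reports
// when it is unusable instead of failing silently.
function createOptionSwapper(doc) {
  // Feature-test every dependency rather than assuming support;
  // in a non-scriptable (or non-browser) host this returns null.
  if (!doc || typeof doc.getElementById !== 'function' ||
      typeof Option === 'undefined') {
    return null; // unsupported: caller falls back to the server round-trip
  }
  return {
    swap: function (selectId, items) {
      var sel = doc.getElementById(selectId);
      if (!sel || !sel.options) { return false; }
      sel.options.length = 0; // clear the existing list
      for (var i = 0; i < items.length; i++) {
        sel.options[i] = new Option(items[i].text, items[i].value);
      }
      return true;
    }
  };
}
```

The caller tests the returned value once; a null result means the form keeps its full-page submit behaviour.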
It probably won't matter much. An extra 10k? No big deal.
If you're sending over 200k of data to the client, then that's a bad
decision (I've seen people do it).
With a recommended upper limit of around 80k for the total download of a
page (else people get bored and go elsewhere), 10k is a significant
percentage of any really viable web page.
Very few browsers are incapable of this these days.
But some of the browsers that cannot do it are the latest versions
available for the devices on which they operate. And there is not one
javascript-capable browser that can do it on which scripting cannot be
disabled.
Depends on your
target audience's browser choices, I guess.
The target audience for e-commerce doesn't have to be anything but the
people with enough money to pay for the products, and there is no
relationship between having sufficient money to make purchases and any
particular web browsers (indeed it has been observed that people with
lots of surplus money sometimes have a tendency to buy electronic
gadgets, which is where you find the strangest browsers embedded). The
whole "target audience" thing is just a smokescreen used by people who
are not capable of delivering 100% of customers (and would rather find
any excuse than learn how).
Anyone browsing with
Opera5 or Netscape3 needs to be pushed into upgrading anyway.
That's right, it is the potential customer's fault when you cannot do
business with them.
If a browser supports Javascript,
I expect it to support the core features
of 1.1, at least,
Why? Support for an ECMAScript version would be a more rational
expectation, as that is the standard for the language.
including the ability to swap out option objects.
The ability to swap option elements (using the Option constructor rather
than the DOM) is non-standard, and no matter how many browsers copied
Netscape and implemented it, there are no grounds for hanging the
functionality of an e-commerce web site upon it.
If it doesn't, then it's broken, and the user needs to use a browser
that isn't broken if he or she expects to have a good experience on
the web.
That's right, it is the potential customer's fault when you cannot do
business with them.
But none of this matters if the script has been designed with a path of
clean degradation to either a usable HTML UI or a fall-back to the
server side. If the browser doesn't support the required features the
script can recognise that condition and degrade itself under control.
Everything works for all potential customers, one way or another. Crying
"broken" when the browser doesn't satisfy your unrealistic expectations
is no way of handling the reality.
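That controlled-degradation path can be sketched as a single gate (hypothetical names; the real decision points depend on the page): the HTML form submits to the server unaided, and the script only takes over when everything it needs tests as present.

```javascript
// Sketch: enhance a form only when the host provides what we need.
// If any test fails we do nothing, and the plain HTML form keeps
// submitting to the server exactly as it would without scripting.
function enhanceForm(doc, formId, onSubmit) {
  if (!doc || typeof doc.getElementById !== 'function') { return false; }
  var form = doc.getElementById(formId);
  if (!form || typeof onSubmit !== 'function') { return false; }
  form.onsubmit = onSubmit; // take over submission only now
  return true;
}
```

A false return is not an error; it simply means the unscripted server-side path stays in charge.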
There are some assumptions you must make about the browser's
capabilities.
Yes, you have to assume it can handle HTML over HTTP; that is what
defines a web browser. Everything else is optional. But, from the point
of view of an executing script, the relevant features are testable, so
you don't have to _assume_ anything beyond an ECMAScript-compliant
scripting engine.
What if the browser doesn't implement for() loops? Do
you check for that before using them? heh.
Standard language constructs must be assumed to be functional, else the
browser just isn't scriptable and no code will run (which isn't a
problem when the script has been suitably designed). Features known to
only appear in later language versions should be tested for (or
avoided), as should implementation details that are known to be buggy.
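One way of keeping that distinction explicit (a sketch; the helper name is made up): core syntax is simply used, while host objects and later-version methods are gated by typeof tests before anything relies on them.

```javascript
// Core constructs (for loops, functions) are assumed to work; host
// features are tested by name before any code depends on them.
function hasMethods(host, names) {
  if (!host) { return false; }
  for (var i = 0; i < names.length; i++) {
    if (typeof host[names[i]] !== 'function') { return false; }
  }
  return true;
}
```

For example, `hasMethods(document, ['getElementById', 'createElement'])` would gate a DOM-dependent branch, with a false result routing execution to the degraded path.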
No, that's browser scripting.
Libraries make perfect sense with compiled languages and languages
running locally in a known system. In the case of compiled languages the
final code only needs to contain the code from the libraries that it is
actually going to use (the compiler can recognise dependencies and make
informed decisions about what to include). And languages running locally
can have their library resources easily to hand (on the hard disk) so
there aren't the same download and bandwidth considerations as apply to
browser scripting.
But push pages at people with an ever-decreasing attention span and if
the page doesn't show up before they get bored waiting there is every
chance you have lost a visitor that you wanted. When the window of
opportunity is small and the pipe may be narrow every byte can count.
Code bloat isn't that big of a deal when you're considering a 20k
library which is cached across many pages on a site.
And if the same task can be done with < 10k of site-tailored code?
Users stripping out code from a library is unnecessary and not
recommended, because it makes for maintenance hell later if the
library is updated.
Sending code that will _never_ be executed to remote users is
unnecessary.
So you want to re-invent the wheel each time?
What you save in download times and "superfluous code" will easily be
offset by increased development time and expense.
No, I want to tailor the code to its application: to the extent that its
application differs, the code should differ. Which is exactly what
happens with library code anyway, as the library itself does not do
anything; it just provides resources that still need application-specific
code to use them. The difference is that the library must attempt to
facilitate everything that could be asked of it, while tailored code
only needs to facilitate what is needed of it.
Libraries and code reuse should be encouraged, IMO.
Code re-use is entirely sensible and proper, it is the library concept
that is misguided. Small re-usable task-specific components are
certainly desirable for browser scripting. But it is the nature of
libraries to provide large interdependent and all-encompassing systems
(and if they don't they end up being too inflexible).
Building up from small task-specific components easily creates complex
code that does no more than is required. But try using two separate
libraries and not only will you have all the code for the features you
don't need, but you will also have doubled up all the code for the
necessary internal tasks that are common to both; with three libraries
it is worse again.
Especially for users without the skills to independently develop
code to perform a function which a library could perform in a
black-box manner.
In practice it often takes as much effort to learn how to use a library
(particularly the "cross-browser" DHTML ones) as it would take to script
the browser DOMs directly. So if that time is put into learning browser
APIs instead of library APIs, making your own small components, and
understanding others', is easy, as is gluing them together.
Black-boxes do not have to be big boxes.
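A hedged illustration of that scale (every name here is invented): two tiny, reusable, task-specific components plus one piece of page-specific glue, with nothing unused coming along for the ride.

```javascript
// Component 1: sum basket line totals (prices held in pence).
function totalOf(items) {
  var t = 0;
  for (var i = 0; i < items.length; i++) {
    t += items[i].price * items[i].qty;
  }
  return t;
}
// Component 2: render a pence amount as a pounds string.
function formatPrice(pence) {
  return '\u00A3' + (pence / 100).toFixed(2);
}
// Page-specific glue: only the behaviour this particular page needs.
function basketTotalLabel(items) {
  return 'Total: ' + formatPrice(totalOf(items));
}
```

Each component is a small black box on its own, and the glue is the only part that changes from page to page.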
Richard.