> I don't have that much experience myself with a wide
> range of browsers; I suppose that Jim, Martin or
> Richard could tell us more about this.
We start with two well-established principles relating to browser
scripting:-
1. Making assumptions about the browser environment is
extremely risky.
2. Feature detecting tests should be performed in a way
that is as closely related to the problem as possible
(preferably a direct one-to-one relationship).
We also have the realisation that an overly dogmatic application of
those principles in all circumstances will potentially stand in the way
of being able to create viable scripts.
There may be cases where an assumption that is not strictly valid, but
for which no example of a contrary environment has been identified,
facilitates controlled clean degradation where it might otherwise be
problematic. One such is the assumption that a browser that supports
the dynamic switching of the CSS - display - property will exhibit a
- display - property on its - style - objects that is typeof 'string',
allowing the inference that if the style object has no such property
then the browser is not going to respond to attempts to set - display -
to 'none'.
Personally, I have yet to see a browser that could dynamically switch
the display of an element via the - display - property where - typeof
styleObj.display == 'string' - is not true, nor a non-dynamic browser
where it is not false (assuming a normalised - style - object for
Netscape 4, etc.). However, it remains an assumption.
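Sketched as code (the function names are mine, and purely
illustrative):-

    function canSetDisplay(el){
        // Assumes: a browser that can dynamically switch the CSS
        // display property exposes a string-typed - display -
        // property on the element's style object.
        return Boolean(el && el.style &&
                       (typeof el.style.display == 'string'));
    }

    function hideElement(el){
        if(canSetDisplay(el)){
            el.style.display = 'none';
        } // else: degrade cleanly, leaving the element visible
    }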
Trying to get feature detection as close to the problem as possible
could imply testing everything each and every time it is used. But code
that attempts that is burdening itself heavily, and may even end up
doing more testing than acting. When you are writing DHTML to be as
fluid as is achievable using the combination of HTML and CSS there is a
great deal that may need to be continually examined in terms of the
sizes and positions of elements (as users re-size their browser windows,
change the font-size settings, etc, and can do so at any moment).
Adding, on top of that requirement, full feature detection on every
action stands a chance of rendering the result non-viable (slowing the
script to the point where it is unacceptable to its users), leaving the
only viable menu scripts, for example, the ones that fall apart
whenever the font size is changed, or encouraging page authors to
attempt to pin down the dynamic aspects of web pages so the menus will
not disintegrate.
The necessary, and inescapable, aspect of feature detection is that if a
feature is to be used at all it should be tested to verify that it is
available in the environment prior to its use. But prior to its use does
not necessarily mean prior to each and every use. While it is an
assumption that the environment of any given browser will not
significantly change while a script is executing, it is not that
unreasonable an assumption.
One strategy for reducing the level of feature-detection testing going
on while a script is running is to give it a single "gateway" test that
is executed during an initialisation phase: testing for the features
that the script will be using and then, if the test is passed, using
those features without additional verification. This is based on the
assumption that the environment will not change while the script is
running.
Unfortunately some aspects of tests performed during an initialisation
phase cannot be close enough to the problem to qualify as a one-to-one
relationship; testing DOM elements is one example. Instead of
examining the properties of some unknown element that is actually going
to be used by the script it is sometimes necessary to test the
corresponding properties of an element that is known to exist at that
point. The - document.body - element is a good candidate, as it is
virtually guaranteed to exist (once the opening tag has been parsed or
implied).
So the question is: what is it reasonable to infer from an element such
as - document.body - about the nature of other elements in the DOM?
Such an inference will be based on an assumption and so should be
subject to careful consideration.
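A minimal sketch of that gateway style, with - document.body - as the
test subject (the feature list would depend on the script; this one is
only illustrative):-

    var featuresOK = false;

    function initialise(){
        // One-off gateway test, run once the opening BODY tag has
        // been parsed: verify the features the script will rely on.
        featuresOK = Boolean(document.getElementById &&
                             document.body &&
                             document.body.appendChild &&
                             (typeof document.body.firstChild !=
                                                      'undefined'));
    }

    function doDynamicWork(){
        if(!featuresOK)return; // clean degradation
        // ... use the verified features without re-testing ...
    }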
Mike's question is really about the tests made in the posted code,
specifically - document.body.appendChild - and - typeof
document.body.firstChild != "undefined" -, because the tests are
applied to the - document.body - element while the corresponding method
and property are used on a SPAN element (and could be applied to any
element that allowed text content).
My experience of web browsers suggests that those tests are safe (in
that I know of no browsers where the inference drawn from those two
tests on - document.body - will not hold true for any SPAN element in
the same environment). But I also think that the logic of the test is
reasonable because of the nature of the properties being examined. They
are both part of the W3C Core DOM Node interface, and it is the intention of
the W3C that all of the elements in the DOM implement the Node interface
(along with much else). So it doesn't seem unreasonable to infer that if
any specific element implements the significant part of that interface
then all other elements within the same DOM should also be expected to.
_With_some_caveats_:-
Internet Explorer 4 has a non-W3C standard - appendChild - method on (at
least some of) its elements so it is important that no assumptions be
made based on - appendChild - alone. IE 4 also implements a -
document.createElement - method, as does Opera 6, so that is also a
dangerous property to be inferring anything from. Given a script that
only really needs those two features to be supported on W3C-standard
browsers, I usually throw in an additional test for - replaceChild - just
to ensure that IE 4 does not execute the code. (IE 4 cannot pass the
tests used because it does not implement - document.getElementById -
or - document.createTextNode -)
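So the combined gateway test for such a script might, as a sketch,
read:-

    // IE 4 implements - appendChild - (and, like Opera 6,
    // - document.createElement -) but none of the rest below, so
    // adding - replaceChild - (alongside the tests for
    // - document.getElementById - and - document.createTextNode -)
    // keeps IE 4 out.
    var domSupported = Boolean(document.getElementById &&
                               document.createElement &&
                               document.createTextNode &&
                               document.body &&
                               document.body.appendChild &&
                               document.body.replaceChild &&
                               (typeof document.body.firstChild !=
                                                        'undefined'));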
While I would be happy with examining a BODY element and making
deductions about a SPAN element from it (within the confines of a single
W3C specified interface, or a single property/method (or paired
property, e.g.:- width/height) that when implemented is common to all
elements) there are boundaries that I would not be happy to carry that
deduction across. I would not want to deduce anything about the document
element from the body, or about the body from the document, although the
document should also implement the Node interface. IE 5.0 is the problem
here as its document did not implement the Node interface (and others
may have copied Microsoft's structure at that time).
I would also be cautious about carrying the deduction from an Element to
a Text, Attribute, CDATASection, etc, Node. While the W3C intends all to
implement the Node interface I would want to re-verify the interface on
the type in question. Remember: Text nodes cannot have children, so
while they should have an - appendChild - method the expectation would
be that it is never used, so a browser manufacturer might consider it
safe to just omit it. Indeed there are IE 6 versions that *crash* if
you, for example, attempt to apply - typeof - to the - appendChild -
method of an attribute.
I have also observed (generally older) browsers where the elements
within the HEAD behaved quite differently from the displayed elements
within the BODY, being less amenable to dynamic manipulation, etc. This
would make me reluctant to apply deductions made from BODY elements (and
their descendants) to HEAD elements (and their descendants).
With those caveats, generally I would say that where the expectation is
that when one element implements a particular interface, or single
property/method, all other elements also implement it, positive
verification on any one element can reasonably be regarded as grounds
for assuming that interface/property/method to be implemented on all
the others. That would apply to the W3C Node and
Element interfaces, the HTMLElement interface and various proprietary
features known to be common to elements in certain browsers.
I would, for example, happily assume that if the first element examined
had a numeric - offsetWidth - property then all subsequent elements
would also possess that property (though in that case I would not make
the deduction from the BODY element, as it is likely to be a special
case). And I would also be fairly happy to assume: if - offsetWidth -
then - offsetHeight -, as they wouldn't mean much in isolation.
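Sketched (the caching variable is hypothetical, and the test subject
would be the first non-BODY element the script handled):-

    var hasOffsets = null; // unknown until the first element is seen

    function readWidth(el){
        if(hasOffsets === null){
            // Test once, on the first element handled; if
            // - offsetWidth - is numeric then - offsetHeight -
            // (and both on subsequent elements) are assumed to
            // follow.
            hasOffsets = (typeof el.offsetWidth == 'number');
        }
        return hasOffsets ? el.offsetWidth : 0; // 0 as a safe default
    }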
> Faced with this issue of unknown environment, testing
> features on the very object to be used makes sense,
> and is actually the safest option.
> This approach isn't without problems, though. Testing
> extensively the features on each object can render the
> code unreadable,
The readability argument is often overstated. It is not unusual for
people to comment on not being able to make head or tail of some of the
code I write, because I exploit what I have learnt about javascript
over the past years, and that leaves individuals who are not familiar
with the techniques unable to comprehend the code. (This makes it
particularly amusing when people ask how they should go about
obfuscating code; if it were worth obfuscating there would be no need,
as the only people capable of understanding it would be able to write
it for themselves.) Three years ago I would not have been capable of
understanding the code I write now.
But is it reasonable to suggest that I should presently be writing
code that I would have been capable of understanding three years ago,
when I didn't know a fraction of what I currently know about javascript?
Should I be writing code that I know to be sub-optimal because there are
people in the world who want to be able to write javascript without
learning how best to do so (without even being interested in doing so)?
So I write objects that appear complex. They are complex because they
attempt to address all of the issues that I have learnt need addressing
(maybe not always all, but at least most). Any code addressing those
same issues would exhibit similar complexity, though maybe in a
different form.
They also seem more complex than they really are to individuals who
don't know the techniques I choose to use to address those issues, but I
make an informed choice of the techniques to apply based on my judgement
of which is best suited to the situation (very often for optimum
performance).
Above all else it is important that any apparent complexity in the code
I write is internal to objects that have very simple public interfaces
(and document those interfaces). Making internal complexity
insignificant to third parties who use the code, so long as they don't
have to put any work into maintaining it, which would only become
necessary if I fail to write complete cross-browser code with planned
behaviour in all environments (obviously that is never my intention).
> and break the business flow of the script.
Making that level of a script as clear as possible is always a good idea
as it is where any requested changes would be needed. Either pushing the
complexities needed to handle differing browser environments down so
they are hidden behind simple interfaces, or doing that work up-front
once, certainly does leave that level of a script clearer and more
unified.
> Another problem is that it can prevent code optimisation if
> you leave the tests in place in each situation, not redefining
> the methods (see the Russian Doll pattern;-)).
This applies particularly to general DHTML libraries made up of numerous
functions, where each function tests the browsers for its supported
methods prior to using them. It may be possible to reduce the logic of
the running code to little more than the use of those functions but the
overhead of re-testing on each call, for conditions that are unlikely to
have changed between calls, can rapidly add up to the point where it
becomes a problem in itself.
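The pattern referred to avoids that overhead by having the function
redefine itself on its first call, once the supported mechanism has
been established, so the tests are never repeated. A sketch (using
listener attachment as an illustrative task; the function name is
mine):-

    var addClickListener = function(el, fn){
        // First call only: detect the supported mechanism, then
        // redefine this function so later calls skip the tests.
        if(el.addEventListener){
            addClickListener = function(el, fn){
                el.addEventListener('click', fn, false);
            };
        }else if(el.attachEvent){
            addClickListener = function(el, fn){
                el.attachEvent('onclick', fn);
            };
        }else{
            addClickListener = function(el, fn){
                el.onclick = fn; // last-resort fallback
            };
        }
        addClickListener(el, fn); // delegate to the new version
    };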
> In addition, there's also the cost introduced by such
> techniques, less technical but as important; it requires
> more time, more attention, more experience etc., so definitely
> costs more (I don't know many people who could understand and
> write advanced javascript without problems - in fact I know
> of none apart from in clj - but that's not my job either).
Writing complete code; code that addresses all the relevant issues as it
operates and cleanly degrades when it cannot, is going to be more time
consuming than writing code that disregards the issues and fails
unpredictably. Any additional cost arising from doing the job properly
cannot be a good reason for not bothering.
And badly authored code must carry an additional burden in costs arising
from its unreliability and fragility. Though that may be harder to
quantify, and possibly go unnoticed. Such as a commercial site I looked
at recently where the most unreliable and javascript dependent aspect of
the entire site appeared to be the mechanism for reporting problems,
virtually guaranteeing that owners would not become aware of users
experiencing problems as a result of bad javascript authoring (and so
unaware of any needless loss of revenue resulting from it).
Rather than attempting to reduce the cost of javascript authoring by
tolerating the creation and use of inadequate scripts, I would rather be
concentrating on strategies for reducing costs through easy code re-use.
Which is why I have been writing a lot of low-level interface objects
recently. Because they offer a way of abstracting the complexity of
handling the variations in browser environments behind a simple
interface, and result in easily re-usable code without the code bloat
that follows from the use of large and interdependent javascript
libraries. It is also why I am getting interested in optimising the
configuration of independent chunks of code, because I want those
interface objects to be as self-contained as possible (so they can be
dropped into code that needs them with few (preferably zero) concerns
for interdependencies).
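As a sketch of the kind of self-contained, low-level interface object I
mean (the viewport-width task, and all of the names here, are only
illustrative):-

    var viewport = (function(){
        // The branching needed to read the viewport width in
        // different browsers is resolved once, in here, and hidden
        // behind a single public method.
        var read = function(){return 0;}; // clean-degradation default
        if(typeof window.innerWidth == 'number'){
            read = function(){return window.innerWidth;};
        }else if(document.documentElement &&
                 typeof document.documentElement.clientWidth ==
                                                          'number'){
            read = function(){
                return document.documentElement.clientWidth;
            };
        }else if(document.body &&
                 typeof document.body.clientWidth == 'number'){
            read = function(){return document.body.clientWidth;};
        }
        return {getWidth:function(){return read();}};
    })();

    // Usage:-  var width = viewport.getWidth();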
There is also a point where the expertise required to comprehend the
more advanced techniques, or design a complete script, while possibly
being perceived as expensive, actually reduces development costs itself.
It is not unusual for the inexperienced to get a script to broadly
'work' on one browser and then spend a lot of time thrashing about
trying to extend support to another. I have done it myself, and we see
plenty of questions on that particular subject posted to the group.
These days it is extremely rare for me to encounter new problems (and
then only when testing with the less common browsers/configurations); I
design and write cross-browser code and when I test it it mostly
exhibits the designed behaviour first time. And I can write in a day
what I would have taken a week or more to write 3 years ago. Giving me
more freedom to consider the design of the script and its
implementation. And providing a direct return in reduced hours spent in
script creation, followed by the reduced maintenance costs that follow
from good script design.
> I'm therefore less and less convinced of the approach of
> feature-detecting as "much" as possible. To me the best
> approach is to do something in between, first performing
> a rigorous initialization, testing for all methods on a
> sample object (the document.body in the example code),
> and then moving on to the business logic, without
> testing more than required (object existence).
<snip>
Broadly I concur. Javascript is not particularly fast; the price of a
dynamic, interpreted language. Many optimisations are achieved by not
doing the same thing repeatedly when you can do it once and hold on to
the result, and (at least some, probably most) feature detection is
amenable to handling in that way.
Posting example code to the group is the area where the integration of
feature detecting techniques troubles me most. Most questions are so
simple that they do not warrant a full implementation and instead can be
addressed with little more than a simple function, or just a specific
code example.
It would be remiss to omit the feature detection entirely; that might
give the impression that doing so was acceptable. But an optimum
implementation would usually be above the level of the example code
used, design-wise (particularly the "gateway" initialisation style).
A more local test and initialise pattern (such as the 'Russian doll') is
potentially beyond the comprehension of many questioners, so they may
use the code and find that it works, but they would not necessarily
learn anything from it.
That leaves posting example functions with the feature detection
demonstrated directly in the function, but in a way that means it will
be re-executed on each call (and the implication that that is an
appropriate and 'correct' style for javascript authoring).
On the whole I think it is best that code that demonstrates optimum
patterns does get posted in response to questions on the group. And if
the OPs find the result incomprehensible then at least they will have
learnt that there is more to javascript than they currently understand.
It is not as if those examples will ever be the only ones posted; the
over trivial, incomplete and/or more direct but potentially sub-optimal
examples will always appear alongside the more elaborate examples (and
people have different opinions of what constitutes a good implementation
anyway).
Richard.