Yeah, I know Opera 7 and IE 6 have partial DOM support.
And if you insist on using implementation.hasFeature to verify their
support you will be declining the opportunity to exploit the features
that those browsers do support.
If I do feature test for every individual feature, ISKEET will
get too big.
Which is exactly the problem with the API library concept that I
described. If you insist on encapsulating the interface in one or two
big objects then the code needed to properly exploit the available
features of the browsers and provide suitable notification of their
unavailability will always render an API library that attempts anything
beyond the trivial too bulky for practical use, forcing pages that only
need a tiny fraction of the features provided to download and initialise
a substantial piece of code, most of which they could happily do
without.
Someday, it will. Because it'll slow down feature usage until
standard implementation is complete.
The W3C DOM, even if fully implemented by everyone, is only a small part
of the scriptable aspects of web browsers and there will always be
features supported by some browsers but not all, and clients will want
to exploit those features where available. And it will always continue
to be the case that desktop browsers will have more of those features
than embedded and PDA browsers.
There's already XHTML 2.0 and DOM 3.0 in the make, I wonder
when we'll ever get to see this implemented.
And in the meantime there may be DOM 4 and 5 and so on.
That standards begin to work after a while has been proven
by C and C++.
And the same will happen to web programming.
Even when (and if) everyone adheres to the existing published standards
there will still be non-standard features being introduced and used and
later there will be new standards attempting to formalise the best of
those non-standard features.
The only standard that is important for browser scripting is ECMA 262
(and maybe 327) because given consistent language support the
inconsistencies of the object model being scripted can be coped with.
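For example (a minimal illustrative sketch, not taken from any
particular library), boolean type-converting tests and short-circuit
evaluation, both guaranteed by the language standard, are enough to
bridge differences in the object models being scripted:-

function getElementWithId(id){
    if(document.getElementById){
        return document.getElementById(id);    // W3C DOM browsers
    }else if(document.all){
        return document.all[id];               // IE 4 era object model
    }
    return null;    // testable result when no mechanism is available
}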
Because I don't have proper environments to test other browsers.
I don't claim my library or website is compatible with something
when I have not seen it myself.
testing.
... which is suggested by the W3 consortium ...
In the DOM standard specifications, hasFeature() is declared
to be the means to find out about feature support.
But making decisions based on hasFeature means not using various parts
of the DOM specs at all until they are fully implemented. That might be
necessary with other languages like Java but JavaScript is more flexible
and can take advantage of what actually exists rather than waiting for
some formal declaration of completeness in a DOM implementation. A good
thing too as there aren't many complete implementations around.
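To make the contrast concrete (a rough sketch, not code taken from any
posted library): hasFeature answers a question about whole modules,
while direct detection asks about exactly the objects and methods a
script is about to use:-

// Module-level question: does the browser claim DOM Level 2 Core?
var claimsCore = (document.implementation &&
                  document.implementation.hasFeature &&
                  document.implementation.hasFeature('Core', '2.0'));

// Feature-level questions: do the specific methods this script will
// actually call exist?
var canUseDom = (document.getElementById && document.createElement);

A browser may provide both methods tested in the second expression while
quite honestly returning false from hasFeature because its Core
implementation is incomplete.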
Just another thing that cannot be relied upon for existing
implementations. That's really sad, but I'm convinced that'll
change in the future.
You can be as optimistic as you like but nobody is currently working in
the future. How many clients are going to pay for: "It doesn't do much
now but when the web browsers conform to the standard it will be
spectacular".
Up to now, there's always a fallback.
Up to now you have only tested on, at best, a couple of versions of 3
web browsers and you haven't attempted to implement more than half a
dozen features. With the possible exception of standard form validation
code in (exclusively) HTML documents there are no browser features that
are universally supported.
I don't use any feature without testing for its actual presence,
Yes you do (even in the 0.0.5 code).
That's not true. ISKEET proves it and will do so even more in
the future.
When did implementing an inappropriate technique prove it appropriate?
Copy-and-paste programming problems start when you want to
change something.
Imagine a script of yours that you've used by copy-and-paste in
a couple hundred web pages,
So that's one external JS file then.
and then you find out that you've overlooked some
browser-specifics that you didn't know about before.
Generally well designed, implemented and tested feature detection based
scripts are extremely resilient to unexpected browser-specific problems.
But in the event, I edit the one external JS file.
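As a rough sketch of that resilience (assuming nothing about any
particular library), a script that verifies everything it needs before
acting leaves the document as working plain HTML when the environment
does not cooperate, instead of generating errors:-

function initEnhancements(){
    // Degrade silently: if any required feature is missing the page
    // stays as functional plain HTML rather than producing errors.
    if(!document.getElementById || !document.createElement){
        return;
    }
    // ... enhancements that rely on the verified features go here ...
}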
read the ISKEET manual or some of the other posts here, I just
posted a detailed reply to someone, don't wanna repeat it
umpteen times.
I read the manual and the note in the code. Neither state what the
problem is, they just assert that there is a problem, as do all of your
other replies in this thread. But asserting that there is a problem is
not the same as demonstrating that there is a problem. Your belief that
you have seen a problem in your code does not help as your code is
derived from a series of convoluted assumptions and is as likely to be
introducing the error that you are attributing to Opera as it is to be
suffering from it. Demonstrate the problem in isolation and you have a
point; repeatedly assert that there is a problem that you cannot isolate
and you will not be taken seriously.
I modified the library and didn't have to change one iota on
the actual web page. That's what encapsulation and abstraction
is all about. Update the library -- done!
The encapsulation and the abstraction are not the problem with the
concept. The problem comes with the attempt to wrap the whole thing up
in a couple of large objects to provide an API for the entire browser
DOM.
Small components providing an abstract interface to particular aspects
of a browser's DOM are entirely sensible. For example, a frequent
requirement that would need to be handled differently across various
browsers might be the need to acquire the amount by which a page was
scrolled. The following function encapsulates the required testing and
returns an object that provides a common interface to the desired
information:-
var getPageScroll = (function(){
    var global = this;
    var notSetUp = true;
    var readScroll = {scrollLeft:NaN, scrollTop:NaN};
    var readScrollX = 'scrollLeft';
    var readScrollY = 'scrollTop';
    /* "interface" is a (future) reserved word, so a safer name is used
       for the object exposing the common interface. */
    var scrollInterface = {
        getScrollX:function(){
            return readScroll[readScrollX];
        },
        getScrollY:function(){
            return readScroll[readScrollY];
        }
    };
    /* Decide which element holds the scroll values in the IE-style
       model: documentElement in standards (CSS1Compat) mode, otherwise
       body. Returns the argument unchanged (the NaN defaults) when
       neither is usable. */
    function compatModeTest(obj){
        if((document.compatMode)&&
           (document.compatMode == 'CSS1Compat')&&
           (document.documentElement)&&
           (typeof document.documentElement[readScrollX] == 'number')){
            return document.documentElement;
        }else if((document.body)&&
                 (typeof document.body[readScrollX] == 'number')){
            return document.body;
        }else{
            return obj;
        }
    }
    /* One-off test: prefer window.pageXOffset/pageYOffset where they
       exist, otherwise fall back to the compatMode-based test. */
    function setUp(){
        if(typeof global.pageXOffset == 'number'){
            readScroll = global;
            readScrollY = 'pageYOffset';
            readScrollX = 'pageXOffset';
        }else{
            readScroll = compatModeTest(readScroll);
        }
        notSetUp = false;
    }
    return (function(){
        if(notSetUp){
            setUp();
        }
        return scrollInterface;
    });
})(); //Only call getPageScroll after the opening BODY tag has been parsed.
// USE:
var commonInterface = getPageScroll();
//Keep the interface reference for repeated use.
var xScroll = commonInterface.getScrollX();
var yScroll = commonInterface.getScrollY();
if(!isNaN(xScroll)){
    // scroll values good!
}
Notice the absence of any interest in the type or version of the browser
and the well-defined behaviour (returning NaN, which is testable) in the
face of a failure of the browser to provide any of the scroll value
retrieval mechanisms.
Other versions might include the viewport or document dimensions in the
interface. The choice of which to use would depend entirely on the
information that was required and the code is totally re-usable. But
when the functionality provided is not required the function is just not
included in the JS file (obviously anyone wanting to use such code still
needs to observe interdependencies, such as an element positioning
interface needing access to this interface in order to do its task, but
appropriate comments serve as sufficient reminder, or speed the
resolution of errors of omission).
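A companion component reporting the viewport dimensions might, for
example, follow the same pattern (a sketch only; the names are chosen
for illustration and the property tests parallel those used in
getPageScroll above):-

var getViewportSize = (function(){
    var global = this;
    function readFrom(obj, widthName, heightName){
        return {
            getWidth:function(){return obj[widthName];},
            getHeight:function(){return obj[heightName];}
        };
    }
    return (function(){
        if(typeof global.innerWidth == 'number'){
            return readFrom(global, 'innerWidth', 'innerHeight');
        }else if((document.compatMode == 'CSS1Compat')&&
                 (document.documentElement)&&
                 (typeof document.documentElement.clientWidth == 'number')){
            return readFrom(document.documentElement,
                            'clientWidth', 'clientHeight');
        }else if((document.body)&&
                 (typeof document.body.clientWidth == 'number')){
            return readFrom(document.body, 'clientWidth', 'clientHeight');
        }
        // No mechanism found: NaN results remain testable by the caller.
        return readFrom({clientWidth:NaN, clientHeight:NaN},
                        'clientWidth', 'clientHeight');
    });
})(); // As above, only call after the opening BODY tag has been parsed.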
Any sufficiently large collection of similar functionality-encapsulating
code might be considered a library, though it would be used by cutting
and pasting only the parts required, and so would completely avoid
burdening the pages that use the resulting JS file with the download of
code that they are not going to use.
Richard.