David Mark
Monoliths vs. "Microlibraries"
After the recent spectacular failures of such "mature" monolithic
libraries as Prototype, Dojo and other collections of general-purpose,
"cross-browser" scripts, the pendulum of public opinion has swung back
to the idea that the "optimal" pattern for browser scripting requires
lots of small scripts working together (e.g. "microlibraries").
AIUI, much time was spent at the recent JS conference "debating" which
side was "right" in this non-argument.
This is a non-argument because there is no formal definition of a JS
"library" (let alone a "microlibrary"). But let's take a look at the
two sides to the "debate".
On one side, you've got a bunch of "Angry Nerds" who have spent years
*trying* to come up with cross-browser solutions for common browser
scripting tasks. Rather than publishing functions, they have mashed
their output together into "libraries" or "frameworks" or
"toolkits" (or whatever). Though many of these blobs claim to be
modular, they have traditionally been hamstrung by interdependencies.
Dojo is the perfect example as the base requirements are massive, even
for the simplest enhancement or application. One interdependency that
stood out to me was that their (laughably inept and wildly inaccurate)
query engine is *required* by their XHR "module". Similar giant piles
of incompetently written Javascript (e.g. Ext JS) present the same
problem.
As we've seen, such garbage dumps have required an outrageous amount
of maintenance over the years (requiring Web developers to constantly
download, test and deploy new versions). Yet they still fail
virtually every time they are presented with an environment unknown to
their authors at the time the scripts were published. It's not
surprising as none of these things are cross-browser. They are multi-
browser (due to browser sniffing and other similarly bad inferences),
which implies that they can only be expected to work in environments
where they have been *demonstrated* to work. This range excludes new
browsers and historically many older environments (e.g. IE < 8 or
compatibility mode) are out of reach as well (likely due to
inexperienced Ajax mavens who weren't around for IE5/6/7 and "don't
care" about compatibility mode).
So that "side" is simply clinging to what they have (which has gone
from mostly to completely worthless of late). Best not to follow
failures. Successful cross-browser scripts solve problems for
specific *contexts*; they cannot be described in such concise terms as
"new wave Javascript" (whatever that is).
What of these "pioneers" who are now trying to shout down the monolith
marketers after having seen some sort of light indicating they should
be using only "microlibraries"? The only atom of reality in
their faith (religion is turned to when understanding is lacking) is
that browser scripts should be as small as possible for the *context*
they are written for. They seem to long for a utopia where they can
download lots of very small scripts, mash them together and create
robust applications that run "anywhere". This was the thinking back
around the turn of the century when sites like Dynamic Drive were
popular among Web developers. That movement has long since run its
course. It took a decade and produced virtually nothing of value.
Dynamic Drive begat Prototype, jQuery, Dojo, etc., projects which
failed to further understanding or innovate (in fact, they were
defined by backwards thinking). They've spent most of the last few
years on ludicrous UI layers built on top of their rickety, outdated
(and often inappropriate for JS) library designs. Now some long to go
back and start the futile cycle anew.
I saw a post (a Tweet IIRC) recently that opined that a library that
does not "work cross-browser" is "broken", not a "microlibrary". This
is presumably from the monolithic side, implying that scripts that
don't measure up to their ideals of "cross-browser" functionality are
simply wrong. This serves to illustrate the general confusion that
surrounds this non-debate. There are so many things wrong with this
"argument" that it's hard to choose where to begin.
Again, what's a library? Virtually any script can be called a
library. It generally implies "other people's code" (which highlights
that JS developers are big on abdication of responsibility).
Scripts don't "work cross-browser". They are either designed in cross-
browser fashion or they aren't. The confusion is between cross-
browser and multi-browser scripts. You can prove that a script works
in multi-browser fashion by testing it in the environments where it is
expected to work. Authors of such scripts have always been vehemently
opposed to testing in other environments (or considering the impact of
future environments) because there was a good possibility that their
scripts would simply fall apart (i.e. crash rather than bailing out,
possibly leaving an unusable document behind) in such unknown
scenarios. That's how virtually every library/framework published in
the last ten years has been designed (yet many of them claim to be
cross-browser).
No cross-browser script can be expected to work everywhere. Cross-
browser scripts are designed to work in environments that feature all
of the required host objects and methods. They are designed to
gracefully bow out in lacking environments, leaving the document in
the same state as it would be with scripting turned off. The decision
of whether to carry on or bail out is based on feature detection and
testing (not browser sniffing or multi-browser object inferences).
The definition of "work" for cross-browser scripts is that they
function properly in capable environments and leave the rest of them
alone.
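The bow-out pattern described above can be sketched as follows. This is an illustrative sketch, not code from any particular library: the `enhance` function and the mock environments are hypothetical, standing in for a real enhancement and for capable and lacking browsers.

```javascript
// Hedged sketch: a cross-browser enhancement gates on feature tests
// and bows out silently in lacking environments.
function enhance(api) {
  // Detect exactly the host methods this enhancement will call
  if (!api || typeof api.getElementById != 'function') {
    return false; // degrade gracefully: document left untouched
  }
  // ... carry on with the enhancement ...
  return true;
}

// Mock environments standing in for capable and lacking browsers
var capable = { getElementById: function(id) { return { id: id }; } };
var lacking = {};

enhance(capable); // true: carries on
enhance(lacking); // false: bows out
```

The point is that the decision is made by probing the objects and methods the script will actually use, never by asking "which browser is this?".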
Reality mandates that authors of browser scripts have at least some
knowledge of the history of browsers in order to explain to clients
exactly where these "degradation points" occur (e.g. what happens to
IE 7 users?). Combined with a (sometimes fuzzy) view of which
browsers are actually in use by the target audience (e.g. public Web,
corporate Intranet, etc.), a determination can be made as to an
appropriate cross-browser design.
So the fact that a script does not work in all environments (often
hailed as "all browsers") does not mean that it is not cross-browser.
That determination can only be made by reading the code (abstraction
vs. observation). Of course, most JS developers are not big on
actually reading code (they just want to download it and watch it go).
Consider this dubious function (which is shockingly similar to
innumerable methods found in yesterday's "popular" frameworks):-
function getAttribute(el, n) {
    return el.getAttribute(n);
}
Note that there are no comments to define the context of this
function, so it must be expected to work for all cases and
environments. You can find one of these gems in virtually every
"major" query engine, indicating that they have no shot at working
properly in a significant percentage of browsers in use *today* (e.g.
IE 7, IE 8 compat mode, etc.). This is ironic as the biggest claim to
fame for these things is that they get the IE monkey off your
back.
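To see the sort of failure in question, consider that IE < 8 (and IE 8 compatibility mode) conflated attributes with DOM properties, so `getAttribute('class')` returns null there. The mock element below simulates that misbehavior; it is an illustration, not real IE code.

```javascript
// Mock element simulating IE < 8: getAttribute reads properties, so
// the standard attribute name 'class' misses (the property is
// 'className') and null comes back instead of the attribute value.
var brokenEl = {
  className: 'menu',
  getAttribute: function(n) {
    return this[n] !== undefined ? this[n] : null;
  }
};

// The naive wrapper from above
function getAttribute(el, n) {
  return el.getAttribute(n);
}

getAttribute(brokenEl, 'class');     // null, not 'menu'
getAttribute(brokenEl, 'className'); // 'menu' (non-standard name)
```

A script that assumes the standard behavior everywhere will silently read the wrong values in such environments.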
So let's give the function a context. This will be for an application
that needs to work in most modern browsers, but is explicitly allowed
to degrade in IE < 8 (and compatibility mode). Perhaps the owners are
okay with IE 7 users having a less dynamic experience or they might
use Conditional Comments to include lesser script(s) for those users
or they may simply present them with a static page. That's a decision
that must be made jointly between the developer and the client.
So what's missing from the code, rendering it less than cross-
browser? The feature testing. As library authors have just
*recently* figured out, the getAttribute method has been Broken as
Designed (BAD) in IE since 1999 (and remains so today in compatibility
mode). An example of feature testing that can identify such troubled
environments (among several others) can be found on this test page:-
http://www.cinsoft.net/attributes.html
One particular test result (call it t) is the indicator we are after
for this context. With this result, we can decide the fight or flight
question:-
if (t) {
    var getAttribute = function(el, n) {
        ...
    };
}
There it is. A (dubious example of a) *cross-browser* design, which
is appropriate for the stated context. In theory it should work in
browsers that feature a *working* getAttribute method for elements and
it should degrade (gracefully) in everything else. Of course, it is
only as good as its feature tests, which should be as simple and
direct as possible (i.e. test exactly what you are going to do with
the required objects and methods and nothing else).
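A hedged expansion of that sketch follows. The test `t` here is illustrative: a real script would probe a live element created in the document (e.g. reading back a known class attribute), whereas here a test element is passed in so the pattern can be shown in isolation, and the `initAttributes` name is hypothetical.

```javascript
// Conditional definition: the function exists only where it can work.
var getAttribute; // stays undefined unless the environment passes

function initAttributes(testEl) {
  // Test exactly what we are going to do with the method
  var t = testEl && typeof testEl.getAttribute == 'function' &&
          testEl.getAttribute('class') === 'test';
  if (t) {
    getAttribute = function(el, n) {
      return el.getAttribute(n);
    };
  }
}

// Standing in for a capable browser's test element
initAttributes({
  getAttribute: function(n) { return n === 'class' ? 'test' : null; }
});
// typeof getAttribute is now 'function'; in a failing environment it
// would remain 'undefined' and dependent code would bow out
```

Note that nothing downstream needs to know *why* the function is missing; its absence is the whole story.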
If an application requires just this one function, then its
"gateway" (a test before proceeding) would look like this:-
if (getAttribute) {
    ...
}
Note that the existence of the function itself is the indicator.
That's the *only* reliable way to couple applications, libraries, add-
ons, etc. You detect features of scripts in the exact same way as you
detect features of user agents. If one is missing, you don't fiddle
with the document at all. We've been over this before and it should
be intuitively obvious that any other scheme will be less direct and
prone to compatibility problems as pieces are swapped out or
upgraded. The specific combination of required features and test
results that determine the existence (or lack thereof) of a function
are abstracted by each piece, with none privy to the inner workings of
the others. Predictably, most libraries have missed the boat on this
and have started defining less specific, extraneous flags to give
hints about which functions might work in the current environment.
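The coupling scheme described above can be sketched like this. The `lib` namespace, the module, and the `app` gateway are all hypothetical; the shape of the pattern is what matters.

```javascript
// Coupling by function existence: each module defines its function
// only if its own feature tests pass; the application detects the
// function itself, never a version number or capability flag.
var lib = {}; // shared namespace for the pieces

// Attribute module: publishes lib.getAttribute only on test success
(function() {
  var testsPassed = true; // stands in for this module's feature tests
  if (testsPassed) {
    lib.getAttribute = function(el, n) { return el.getAttribute(n); };
  }
})();

// Application gateway: the function's existence is the only indicator
function app() {
  if (typeof lib.getAttribute != 'function') {
    return false; // a piece is missing; leave the document alone
  }
  // ... enhance the document using lib.getAttribute ...
  return true;
}

app(); // true here; false if the module had withheld its function
```

Swap the module for another rendition and the application neither knows nor cares, as long as the same function appears under the same name when (and only when) it can work.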
Also predictably, the track record for plug-ins working from one
version to the next is appalling (lending perceived ammunition to the
"microlibrary" faction).
In summary, it is ridiculous to argue in general about the perfect
browser scripting design as appropriate designs are always married to
specific contexts. But regardless of context, the discipline of cross-
browser scripting remains the same (as it has for many years). So it
is better to understand the discipline than to choose sides in a war
of buzzwords. You just can't advance without a clearly-defined battle
plan.
Furthermore, unlike in other types of programming (the fields from
which aspiring browser scripting luminaries usually come),
general-purpose libraries and frameworks will *never* work for
cross-browser scripting. It doesn't
matter how many over-complicated script loaders get written or how
many conferences are called to discuss the "problem"; the concept just
doesn't fit. It never has and it never will.
Note that this does not mean "write everything from scratch". That
line is simply a badge of inexperience. You write (or borrow)
functions for specific contexts. Eventually you will end up with
several renditions of the same function, each appropriate for a
specific context. You group these functions together to create
context-specific enhancements and applications. How can the whole
world share such a repository and leverage it to move browser
scripting forward in giant leaps? I don't know the answer to that,
but I do know that the answer will never be found unless the loud
people start asking the right questions.
So (dammit), if you want to have any shot of competing in this
particular arena, you are just going to have to bite the bullet and
learn browser scripting. It's not enough to master the JS language
(though few seem inclined to bother even with that step); browser
scripting is a discipline, and one that cannot be mastered without
understanding its basic concepts (e.g. cross-browser vs. multi-
browser). The whole "argument" of "which size library is best" is
devoid of any such concepts; it's just more confused blithering (and
haven't we had enough of that over the last ten years or so?).