Can there be any doubt at this point that queries were a bad idea?


David Mark

RobG said:
I think the basis of David's argument is that they are a bad idea
because of the differences in implementations.

That's been the result, but not the cause of the problem. The problem
is that it layers a ton of complexity on top of something that
should be very simple (i.e. DOM traversal). The result has been spotty
and buggy implementations that have diverged as the browsers have
converged, thereby taking up the slack in the incompatibility
department. And then there are the related issues of code size and
speed, particularly the latter which has resulted in lots of rapid-fire
updates designed to make tests run faster. It's the stupidest, most
pointless "arms race" in the history of programming and we will be
dealing with the fallout indefinitely now that QSA is becoming standard
equipment. :(

Because various
libraries introduce an additional set of inconsistencies, the range of
possible outcomes is greatly increased.

Yes, and then these bums dumped QSA on top of it, despite _knowing_ that
it would cause myriad more inconsistencies.

http://ejohn.org/blog/thoughts-on-queryselectorall/

It was done in the name of speed, yet there would be no bottleneck if
not for the ill-advised query-based designs. ;) What sort of
"programmers" would universally adopt something they know is
incompatible with their previous efforts? That idiot who botched the
Dojo query engine (and he's an idiot for lots of reasons, not just that)
said something about it being a "specs bug". Look, a spec is a spec.
Never mind if it invalidates your life's work. In this guy's case, his
legacy is Dojo, so he's not exactly going to go down in history as a
luminary anyway. ;)
I think simple CSS queries can be very helpful, but most of those can
be easily replaced with simple ad hoc functions.

The simpler they get, the more likely you can do away with the CSS
selector parsing without any noticeable decrease in convenience
(assuming you find querying by CSS selectors convenient in the first
place). As a rule, performance and simplicity will increase.
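As a rough sketch of what such an ad hoc replacement might look like (the function name is mine, not from any particular library), a "give me all the links inside this container" query reduces to plain DOM traversal with no selector parsing at all:

```javascript
// Hypothetical ad hoc replacement for a simple "all links inside
// this container" query: plain DOM traversal, no selector parsing.
function getLinksIn(container) {
  var links = container.getElementsByTagName('a');
  var result = [];
  // Copy the live NodeList into a plain array.
  for (var i = 0; i < links.length; i++) {
    result.push(links[i]);
  }
  return result;
}
```

Everything this function relies on has been in browsers since the last century, which is the point: no parser, no "engine", nothing to diverge.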
Complex selectors can
be harmful, not only for performance, but they more tightly bind
programming logic to document layout so that a small change may cause
a script to fail or enhancements to become unreliable.

Yes, that's another good point. People go nuts with these things and
expect them to "just work", but the sad truth is that even the "major"
libraries fail on very simple queries in the latest major browsers (and
get worse as you go back in time until they eventually provide nothing
but wrong answers). You can't do progressive enhancement like that as
the calling applications will be hard-pressed to spot wrong answers.
I also suspect that complex selectors are more likely to have
inconsistent results, but don't have any proof.

Well, they have been demonstrated to fail (seemingly at random) in even
the simplest of cases. Doesn't it follow that moving up a degree in
complexity will increase the frequency of inconsistent results?
As for David's hyperbole, I took it as just that. Scale it back to
half-throttle and you get "[CSS] queries can lead to problems", which
is true.

There's a lot more to it than that. These things have pretty much
ruined browser scripting. At a time when it should be trivially simple
to write cross-browser applications, it is instead virtually impossible
if you buy into query-based crap like jQuery, YUI, Dojo, etc. And
unfortunately, lots of Web developers (who don't know JS, browser
scripting or the history behind any of it) have bought into it, ensuring
that most browser-based applications will continue to be laughable.
It wasn't that long ago that it was standard practice to
ensure sites worked without scripting and that script only enhanced
usability.

Ah, there have always been cranks that said otherwise, but it has
certainly gotten worse of late. It's like they don't realize that
search engines can't see their dynamic content either. They see it as
progressive, whereas anyone who has been around has graduated from that
phase (myself in the late 90's) and realized that you _must_ start with
a static HTML page, which is 100% usable and accessible, and build up
from there. Granted, some people like to point out video games and
other non-sites, but the answer is to dynamically generate links to
those pages. Obviously you don't want to index a video game, but you do
want to index the page(s) that lead to it.
Lately there has been a trend to sites that are
dysfunctional without scripting.

Yes, a disgusting trend promoted by people thoroughly ignorant of their
medium.
I expect that within a few years,
browsers that do not have efficient built-in query selector support
will find the web quite unfriendly. That will lead to problems for
less capable browsers and platforms.

If the imbeciles have their way, nothing will work without both
scripting and query support. It's madness. It's also doomed to fail as
technology will replace it with something else and the Web will be
relegated to hobbyists and scientists once again.
Incidentally, I tried surfing with Safari 1.0.3 recently - very few
sites were functional, including apple.com.

That's not surprising. Their own sites don't work on the iPhone and
IIRC, they are a supporter of the SproutCore library. (!)
Considering it's younger
than IE 6 (2004 vs. 2001),

I thought IE6 came out just prior to the end of last century, but I
could be remembering wrong. (?)
it is clear that if IE 6 didn't have the
market share it has, developers would have stopped coding around its
quirks many years ago and it would be dead by now. Imagine a web where
most browsers were DOM and CSS 3 compliant and HTML5 was mostly
implemented, then javascript libraries could focus on efficient
delivery of high-level functionality, not smoothing over browser
quirks.

The thing is that the libraries never could figure out IE6 (or why they
should stop trying to bottle conditional comments). It's not that the
reality is that unclear, but it's been clouded by massive delusions.
And this thread would be irrelevant.

And there would definitely be no need for GP JS libraries at all.
 

David Mark

Peter Michaux wrote:

[...]
I work on such web apps. The implementation overlap between a no-
JavaScript version and a JavaScript version is so small that business
says they don't care to pay for the no-JavaScript version.

I told you how to handle that (again). See my reply to RobG. There is
no "video game" argument.
 

Garrett Smith

RobG said:
[...]


Incidentally, I tried surfing with Safari 1.0.3 recently - very few
sites were functional, including apple.com. Considering it's younger
than IE 6 (2004 vs. 2001), it is clear that if IE 6 didn't have the
market share it has, developers would have stopped coding around its
quirks many years ago and it would be dead by now. Imagine a web where
most browsers were DOM and CSS 3 compliant and HTML5 was mostly
implemented, then javascript libraries could focus on efficient
delivery of high-level functionality, not smoothing over browser
quirks.

And this thread would be irrelevant.

Sounds great.

There are things that HTML 5 will not cover, though. DOM 3 Events and
other DOM specifications are controlled by the w3c.

The authors of the w3c specifications will continue to produce the same
quality APIs that we are all sorely familiar with, and as such, DOM
adapters will be necessary.

Reading styles, listening for events -- these are the two most obvious,
necessary types of functions. Both require a DOM adapter to get
consistent results (out of IE). That should be obvious to anyone who has
been writing cross browser scripts.
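A minimal sketch of the kind of DOM adapter meant here, for the event-listening case (the function name is mine; error handling and the style-reading half are omitted): prefer the W3C method, fall back to IE's attachEvent, and normalize `this` and the event object on the IE path.

```javascript
// Minimal event-listening adapter sketch: W3C addEventListener
// where available, IE's attachEvent as a fallback.
function addListener(el, type, fn) {
  if (el.addEventListener) {
    el.addEventListener(type, fn, false);
  } else if (el.attachEvent) {
    el.attachEvent('on' + type, function () {
      // IE passes the event via window.event and calls the handler
      // with `this` set to window; normalize both to W3C behavior.
      fn.call(el, window.event);
    });
  }
}
```

The wrapper function on the attachEvent branch is exactly the sort of smoothing-over that the specifications could have made unnecessary.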

If the pain of reading styles and listening to events (cross-browser) is
so obvious, then why are the relevant w3c groups doing nothing to
address that? Are they waiting for IE to create another rendering mode
and adopt the existing APIs? Do they believe the existing APIs are done,
perfect, needing no improvement? Why are they not looking at what can be
done to solve these problems?

The w3c individuals should be producing APIs that avoid the shortcomings
of existing APIs, allow IE to opt in without creating yet another
rendering mode, and allow authors to feature-test new methods while
continuing to use existing APIs (w3c and proprietary) as fallbacks.
 

Scott Sauyet

David said:
For God's sake, if you want me to make progress (on the code) cut out
these marathon responses.

Pot. Kettle.

As for me, I can stop any time. In fact I won't even bother
responding to this...

.... never mind.

-- Scott
 

Scott Sauyet

RobG said:
Complex selectors [ ... ] more tightly bind
programming logic to document layout so that a small change may cause
a script to fail or enhancements to become unreliable.

That's odd. I think of selectors as a great way to reduce that
binding. An example I've used here before is

var links = select("div.navigation a:not(li.special a)");

To collect the equivalent list of elements with standard DOM methods
is easily doable, but takes many lines of code, all of which are bound
to the specific document layout. With selectors, if the structure
changes significantly, I can often simply update a single line of
code. Perhaps it will change to

var links = select("#leftNav a, #footer .nav a");

And that's all that I need to change when the layout is updated.
That to me is reduced binding.

-- Scott
 

Laurent vilday

Scott Sauyet said:
RobG said:
Complex selectors [ ... ] more tightly bind
programming logic to document layout so that a small change may cause
a script to fail or enhancements to become unreliable.

That's odd. I think of selectors as a great way to reduce that
binding. An example I've used here before is

var links = select("div.navigation a:not(li.special a)");

To collect the equivalent list of elements with standard DOM methods
is easily doable, but takes many lines of code, all of which are bound
to the specific document layout. With selectors, if the structure
changes significantly, I can often simply update a single line of
code.

Fine, but why do you need a list of links?

The more generic question being: what are you doing with a list of
nodes from a CSS selector?

I can only find two answers (both bad IMO):

1) To add an event listener to those nodes. Which instantly raises a red
flag in my mind, because I believe it is a very (very) bad idea. You
should instead use event delegation on the main node (div.navigation in
the example above).

2) To change the style/className of those nodes. Which raises another
red flag. Just modify the className of the main node and let the
perfectly fine CSS engine do the layout corrections on the children.

CSS selectors in javascript are useless; there is at least one better
solution for every example I have seen so far.
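The event delegation approach can be sketched like this (the function name is mine; it assumes a W3C addEventListener host, so older IE would additionally need the attachEvent/srcElement fallbacks): one listener on the container handles clicks for every matching descendant, present or future.

```javascript
// Event delegation sketch: a single click listener on the container
// stands in for per-node listeners on every matching descendant.
function delegateClicks(container, tagName, handler) {
  container.addEventListener('click', function (e) {
    var node = e.target;
    // Walk up from the event target toward the container, looking
    // for the target itself or an ancestor of the requested tag.
    while (node && node !== container) {
      if (node.tagName &&
          node.tagName.toLowerCase() === tagName.toLowerCase()) {
        handler.call(node, e);
        return;
      }
      node = node.parentNode;
    }
  }, false);
}
```

Used as `delegateClicks(navDiv, 'a', fn)`, this also covers links added to the container after the page loads, which a node-list loop cannot.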
 

Scott Sauyet

Laurent said:
The more generic question being: what are you doing with a list of
nodes from a CSS selector?

I can only find two answers (both bad IMO):

Then I think your imagination might need some broadening. :)

How about any of these:

3. To add an on-load animation to them.

4. To pre-fetch their linked content in order to unobtrusively turn a
static page into a richer UI.

5. To iterate through them and dynamically replace those having
certain properties with JS-enhanced alternatives.

6. To attach pretty tool-tips to them.

7. To scan for use in building a Table of Contents for the page.

8. To help track across many users the manner in which link ordering
affects click rates.

9. To move them elsewhere in the DOM.

10. To delete them since your new Rich UI supplies better ways of
addressing their functionality.


These are the first things I could think of that I have actually done
which do not involve event listeners or styling. I'm sure I could
come up with others. Can't you come up with at least a few yourself?

-- Scott
 

Matt Kruse

Fine, but why do you need a list of links?

I use css selectors quite often to scrape content from pages that I do
not have control over.
For that, it is very convenient.

Matt Kruse
 

Peter Michaux

Ah, there have always been cranks that said otherwise, but it has
certainly gotten worse of late. It's like they don't realize that
search engines can't see their dynamic content either. They see it as
progressive, whereas anyone who has been around has graduated from that
phase (myself in the late 90's) and realized that you _must_ start with
a static HTML page, which is 100% usable and accessible, and build up
from there. Granted, some people like to point out video games and
other non-sites, but the answer is to dynamically generate links to
those pages. Obviously you don't want to index a video game, but you do
want to index the page(s) that lead to it.

You are not making a solid argument for why *all* web pages must be
usable without JavaScript. They don't need to be and, in fact, you erode
your case with the video game argument, which is valid. Google doesn't
need to be able to index my web-based email client, for another
example. There are plenty of web pages that do not need to work
without JavaScript and even if you don't work on them it is still
true. Business sees spending the money to make these pages work
without JavaScript as virtually pointless in the same way that making
a web-based email client also work by fax or snail mail would be
virtually pointless.

Peter
 

Peter Michaux

RobG  wrote:
Complex selectors [ ... ] more tightly bind
programming logic to document layout so that a small change may cause
a script to fail or enhancements to become unreliable.

That's odd.  I think of selectors as a great way to reduce that
binding.  An example I've used here before is

    var links = select("div.navigation a:not(li.special a)");

To collect the equivalent list of elements with standard DOM methods
is easily doable, but takes many lines of code, all of which are bound
to the specific document layout.  With selectors, if the structure
changes significantly, I can often simply update a single line of
code.  Perhaps it will change to

    var links = select("#leftNav a, #footer .nav a");

And that's all that I need to change when the layout  is updated.
That to me is reduced binding.

That is more binding than desirable. If you have control of the HTML
generation, then it is better to make it so that the JavaScript
doesn't need to be updated at all when the HTML changes.

Instead of

select("div.navigation a:not(li.special a)");

use

select(".nonSpecialNavLink");

Now someone can change the div with class "navigation" to be a ul and
your selector doesn't break. Someone can change the li elements to
span elements and the selector doesn't break.

Peter
 

Scott Sauyet

Peter said:
Scott Sauyet wrote:
That is more binding than desirable. If you have control of the HTML
generation, then it is better to make it so that the JavaScript
doesn't need to be updated at all when the HTML changes.

There's a lot I do differently when I have control over the HTML.
Often, though, that's mostly off-limits.

-- Scott
 

RobG

RobG  wrote:
Complex selectors [ ... ] more tightly bind
programming logic to document layout so that a small change may cause
a script to fail or enhancements to become unreliable.

That's odd.  I think of selectors as a great way to reduce that
binding.  An example I've used here before is

    var links = select("div.navigation a:not(li.special a)");

That binds your selector to a pretty exact layout.

To collect the equivalent list of elements with standard DOM methods
is easily doable, but takes many lines of code, all of which are bound
to the specific document layout.

Not at all. It can be done much more efficiently using a class on the
elements you want to affect, so a single line of code with a single
parameter. A getElementsByClassName method is available as a built-in
method in most browsers and is pretty simple to emulate for those
where it isn't. The result will be a method that works reliably in far
more browsers than the current query engines.

You could simply add a class of say "navLink" to those links you want
to select, then you don't care where they are in the page. Or the list
can be restricted to those that are descendants of a particular
element - either way, the result is a selection criterion that is far
less dependent on document layout.

 With selectors, if the structure
changes significantly, I can often simply update a single line of
code.  Perhaps it will change to

    var links = select("#leftNav a, #footer .nav a");

Had you used a class, you'd not need to modify your code at all.
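The emulation RobG mentions is indeed short. A hedged sketch (the function name is mine; it assumes single-class lookups with no regex metacharacters in the class name): use the built-in where present, else filter `getElementsByTagName('*')` with a whitespace-aware test.

```javascript
// Class-based selection: built-in getElementsByClassName where
// available, emulated via getElementsByTagName('*') elsewhere.
function byClass(root, className) {
  if (root.getElementsByClassName) {
    return root.getElementsByClassName(className);
  }
  var all = root.getElementsByTagName('*');
  // Match className as a whole whitespace-delimited token.
  var pattern = new RegExp('(^|\\s)' + className + '(\\s|$)');
  var result = [];
  for (var i = 0; i < all.length; i++) {
    if (pattern.test(all[i].className)) {
      result.push(all[i]);
    }
  }
  return result;
}
```

With this, `byClass(document, 'navLink')` keeps working no matter how the surrounding markup is rearranged, which is the reduced binding being argued for.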
 

David Mark

You are not making a solid argument for why *all* web pages must be
usable without JavaScript.

That's not what I said at all (in the previous times we have discussed
this). In fact, my point is that your video game or inaccessible
email client will be linked to from some Web page and that page should
use a dynamically generated link for that purpose (perhaps even
detecting what features will be needed to run the app).
They don't need to be and, in fact, you erode
your case with the video game argument, which is valid.

You just missed my point.
Google doesn't
need to be able to index my web-based email client, for another
example.

*Exactly*. That's why you can use a dynamically generated link and
not worry about it.
There are plenty of web pages that do not need to work
without JavaScript and even if you don't work on them it is still
true.

I work on Web applications all the time. And the accessible pages
that link to and describe them use dynamically generated links. It
makes perfect sense to me. I think you're just not following my train of
thought. Google doesn't index the apps, but it does index the pages
that describe the apps.
Business sees spending the money to make these pages work
without JavaScript as virtually pointless in the same way that making
a web-based email client also work by fax or snail mail would be
virtually pointless.

It depends on the app. Some make sense without scripting, some do
not. For the ones that do not, it is trivial to keep them away from
Google, as well as users who cannot make use of them.
 
