New Dojo Site--Most incompetent ever?


David Mark

S.T. said:
On dojotoolkit.com? I show two-thirds less size and a third fewer
requests than you're showing. The YSlow Firebug add-on quite likes what
they've done, in fact.

As mentioned, no (Google "Dojo Foundation").

But I'm glad you brought up that site. Assuming your estimates are
correct, it's still way too large (no matter what your mechanical Turk
says).

A few excerpts:-

"Some folks have noticed a new landing page for dojotoolkit.org, one
that includes hard numbers about the performance of Dojo vs. jQuery.
Every library makes tradeoffs for speed in order to provide better APIs,
but JavaScript toolkit performance shootouts obscure that reality more
often than not. After all, there would hardly be a need for toolkits if
the built in APIs were livable. Our new site isn’t arguing that Dojo
gives you the fastest possible way to do each of the tasks in the
benchmark, all we argue is that we provide the fastest implementation
that you’ll love using."

So they are proud of the new site and deluded enough to think that their
botched query engine is something I'll love using. :) And the bit
about the "built in APIs" is the typical BS designed to scare you into
using their crap (ostensibly it smells better than the browsers').

And "hard numbers?" That would be based on TaskSpeed of course. :(

"I gathered the numbers and stand behind them, so let me quickly outline
where they come from, why they’re fair, and why they matter to your app."

No thanks. **** off.

"Similarly Flex, Laszlo, GWT’s UI Binder, and Silverlight have
discovered the value in markup as a simple declarative way for
developers to understand the hierarchical relationships between
components, but because they correspond to completely unambiguous
definitions of components, they rely on compiled code — not reliably
parsed markup — for
final delivery of the UI."

LOL. Look up "pseudo-intellectual" in the dictionary and you'll find a
picture of this guy.

"So from time to time I’d wondered what all the brilliant DHTML hackers
that Google had hired were up to. Obviously, building products. Sure.
But I knew these guys. They do infrastructure, not just kludges and
one-off’s. You don’t build a product like Gmail and have no significant
UI infrastructure to show for it."

Yes, he works for Google.

"There’s a ton of great code in Closure,"

Enough.

And as you like to compare everything to cinsoft.net, I see on Alexa
that this particular site is going down the proverbial toilet (at a rate
of knots).

http://www.alexa.com/siteinfo/dojotoolkit.com#trafficstats

...so there may be some hope for the Web after all:-

http://www.alexa.com/siteinfo/cinsoft.net?p=tgraph&r=home_home

Not bad for a domain that up until recently had no home page. :)

And, one final note (or nail), I see that Dojo deleted my branch. I'm
not complaining, of course, as I told them to (don't have time to walk
them through it and am tired of watching them stumble around). Didn't
take them 24 hours to jump to it either. That's it for them; they've
thrown their last paddle overboard. :)

And if anyone is crazy enough to actually want to fork Dojo (from a
relatively sane starting point), I'd be glad to send the files and
instructions.
 

David Mark

David said:
Groan. You again?

Here you go:-

JavaScript - http://www.dojofoundation.org/
Timeout thread: delay 0 ms
Unhandled exception: [Object DOMException]
name: Error
message: SYNTAX_ERR
stacktrace: n/a; see opera:config#UserPrefs|Exceptions Have Stacktrace

In Opera 10 no less. And check out the "layout" in anything less than a
maximized browser at a very high resolution. You can bet the developers
never did. ;)
 

David Mark

SteveYoungGoogle said:
David said:
SteveYoungGoogle wrote:
S.T. wrote:
On 3/8/2010 1:04 AM, David Mark wrote:
What experienced developers? What Web? Where? And scales?! I've yet
to see a site out of this bunch (even a basic page) that doesn't
download ten times what it should. A quick glance shows that the front
(well only) page of the aforementioned Foundation site weighs in at:-
Total HTTP Requests: 45
Total Size: 454259 bytes
On dojotoolkit.com?
No.
Where then?
Groan. You again?
Here you go:-

JavaScript - http://www.dojofoundation.org/
Timeout thread: delay 0 ms
Unhandled exception: [Object DOMException]
name: Error
message: SYNTAX_ERR
stacktrace: n/a; see opera:config#UserPrefs|Exceptions Have Stacktrace

In Opera 10 no less. And check out the "layout" in anything less than a
maximized browser at a very high resolution. You can bet the developers
never did. ;)

So your thread and OP are entitled "New Dojo Site--Most incompetent
ever?". The OP opens with the question "Have you seen the shiny new
Dojo Toolkit site?". But your figures for a bad site come from
http://www.dojofoundation.org/

Interesting take. As was noted clearly, that aside was about the
_Foundation_ site, which is one of several that link from the single
page Toolkit "site." Everything else in the review applies to that
page. HTH.
 

Lasse Reichstein Nielsen

S.T. said:
Validating is a debugging tool - that's it. It's not important if a
page "passes" or not.

True. What matters is that the page works as expected.
However, for invalid HTML, the HTML specification doesn't say what
the HTML means or how it will be parsed. I.e., you cannot possibly
know whether it will work as expected. That's why invalidly nested
HTML is bad.
No doubt there are lengthy arguments about how critical validating is
to the future of humanity, but the real world uses validation for its
useful purposes and stops there. ALT'ing every single IMG whether
useful or not is a fool's errand.

Except for accessibility, I'd agree. The HTML will still be meaningful.
Escaping every ampersand in a URL is wasted time.

Here I disagree. Again you cannot predict what meaning a browser will
give to your characters, so you can't know that it will work as
expected.
That page says an inline element with position: absolute is computed
as a block level element, which was my original point.

That's CSS, not HTML.
If you write "<span> foo <H1> bar </H1> baz </span>", it is likely to
be interpreted as "<span> foo </span><H1> bar </h1> baz ". No amount
of inline positioning will make the H1 a child of the span.
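If in doubt, parse the string and then serialize what the browser
actually built. A minimal sketch (the output is whatever this
particular parser decided, which is the point):

    // Parse the invalid string, then reserialize the tree the browser
    // actually constructed. Run it in several browsers and compare.
    var container = document.createElement('div');
    container.innerHTML = '<span> foo <H1> bar </H1> baz </span>';
    alert(container.innerHTML); // the corrected structure, not the source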

Yes, I already conceded that point. It wasn't so much confusing the
two as belatedly realizing his context was HTML validation whereas I
was looking at it from the browser's perspective, as I care about what
the browser does with the markup -- not what the W3C thinks of it.

But do you *know* what the browser does with invalid markup?
All browsers?

Have you tried dumping the DOM of the page to see what it's really
turned into? (And if it's the right thing, why not just write that
to begin with).

/L
 

John G Harris

True. What matters is that the page works as expected.
However, for invalid HTML, the HTML specification doesn't say what
the HTML means or how it will be parsed. I.e., you cannot possibly
know whether it will work as expected. That's why invalidly nested
HTML is bad.
<snip>

In addition, you don't know what future browsers will do. It could
require a major re-work.

John
 

Garrett Smith

David said:
SteveYoungGoogle said:
David Mark wrote:
SteveYoungGoogle wrote:
S.T. wrote:
On 3/8/2010 1:04 AM, David Mark wrote:
What experienced developers? What Web? Where? And scales?! I've yet
to see a site out of this bunch (even a basic page) that doesn't
download ten times what it should. A quick glance shows that the front
(well only) page of the aforementioned Foundation site weighs in at:-
Total HTTP Requests: 45
Total Size: 454259 bytes
On dojotoolkit.com?
No.
Where then?
Groan. You again?
Here you go:-

JavaScript - http://www.dojofoundation.org/
Timeout thread: delay 0 ms
Unhandled exception: [Object DOMException]
name: Error
message: SYNTAX_ERR
stacktrace: n/a; see opera:config#UserPrefs|Exceptions Have Stacktrace

In Opera 10 no less.

In Firefox 3.5, I get the following errors:

------------------------------------------------
An invalid or illegal string was specified" code: "12
[Break on this error] (function(){var
_1=null;if((_1||(typeo...meout(dojo._fakeLoadInit,1000);}})();
------------------------------------------------
Use of getBoxObjectFor() is deprecated. Try to use
element.getBoundingClientRect() if possible.
------------------------------------------------

The method getBoxObjectFor is an XUL method that was accidentally
exposed on the DOM. It became popular some time around 2007 or so. The
method was never intended to be available in HTML (and never intended
for the web). Those who initially made the mistake of using it figured
out not to do that over a year ago.
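The safer pattern has long been to feature test for the standard
method first and treat getBoxObjectFor as, at most, an old-Gecko
fallback. A rough sketch (the offset-walking branch is a
simplification that ignores borders and scrolling):

    // Prefer the standard API; only old Gecko exposes getBoxObjectFor.
    function getDocumentTop(el) {
        if (el.getBoundingClientRect) {
            var scrollTop = document.documentElement.scrollTop ||
                document.body.scrollTop || 0;
            return el.getBoundingClientRect().top + scrollTop;
        }
        if (document.getBoxObjectFor) { // old Gecko only
            return document.getBoxObjectFor(el).y;
        }
        var top = 0; // last resort: walk the offsetParent chain
        while (el) {
            top += el.offsetTop;
            el = el.offsetParent;
        }
        return top;
    }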

I see they make an XHR to:

http://www.dojofoundation.org/dojango/media/dojango/_base.js

- to which the server responds with:

| dojo.provide("dojango._base");
| dojo.mixin(dojango, {
|     test: function(){
|         console.log("test");
|     }
| });

Which has the result of calling their misnamed namespacing methods
"dojo.getObject" and "dojo._getProp", all to create an adapter for
console.log, thus requiring Firebug Lite or something else that adds a
global `console` object.
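All of that indirection buys nothing that a short guard would not. A
minimal sketch of the usual pattern (typeof tolerates host objects
that are not callable functions):

    // Log only if a global console with a log method is present.
    function log(msg) {
        if (typeof console != 'undefined' && console &&
                typeof console.log != 'undefined') {
            console.log(msg);
        }
    }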

I don't care much what they do with the code on their site, and in fact
the code on my site is a little old, too (7 years or so).

What bothers me about Dojo is the code behind the marketing. Through the
website and through presentations, the founders of Dojo instill a sense
of confidence and empowerment in the project managers who consider using
Dojo. That in and of itself is not bad -- it is a great thing to be
able to win an audience over (I need to improve in that area).

The problem is the code itself. The code is large. There is disturbingly
faulty logic in the core of Dojo itself (some of it discussed in this
NG's archives).

Projects that have used Dojo, FWIS, tended to have performance issues
and took too much effort to maintain.

A significantly larger project (72+ man-months) having a poor outcome
or failing outright is a more significant failure than a smallish
website that throws a few errors and hangs for two seconds. The Dojo
Foundation site isn't much worse than any other pop website nowadays.

[...]
 

David Mark

Lasse said:
True. What matters is that the page works as expected.

But that's an observation and as you can hardly observe anywhere near
every browser/mode/configuration, past, present and future, it is a good
idea to go with abstractions (e.g. error correction can't be counted on).
However, for invalid HTML, the HTML specification doesn't say what
the HTML means or how it will be parsed. I.e., you cannot possibly
know whether it will work as expected. That's why invalidly nested
HTML is bad.

Yes. That's what I'm getting at.
Except for accessibility, I'd agree. The HTML will still be meaningful.

Accessibility is the rule, not an exception. The ALT attributes are
needed for the HTML to make sense when - for example - images are
disabled or unavailable. Then there are blind people, users of text
browsers, etc.
 

David Mark

Garrett said:
David said:
SteveYoungGoogle said:
David Mark wrote:
SteveYoungGoogle wrote:
S.T. wrote:
On 3/8/2010 1:04 AM, David Mark wrote:
What experienced developers? What Web? Where? And scales?! I've yet
to see a site out of this bunch (even a basic page) that doesn't
download ten times what it should. A quick glance shows that the front
(well only) page of the aforementioned Foundation site weighs in at:-
Total HTTP Requests: 45
Total Size: 454259 bytes
On dojotoolkit.com?
No.
Where then?
Groan. You again?
Here you go:-

JavaScript - http://www.dojofoundation.org/
Timeout thread: delay 0 ms
Unhandled exception: [Object DOMException]
name: Error
message: SYNTAX_ERR
stacktrace: n/a; see opera:config#UserPrefs|Exceptions Have Stacktrace

In Opera 10 no less.

In Firefox 3.5, I get the following errors:

Yep, there's a fork that uses it (when available) in My Library.
The method
was never intended to be available in HTML (and never intended for the web).
Those who initially made the mistake of using it figured out not to do
that over a year ago.

At least a year ago. I've never bothered to take it out of My Library,
though, as that fork isn't used by anything but older FF versions. There
is one highly unnecessary feature test for margins on the HTML element
that I should remove though (it's been on my list forever).

They are positively wacky with that shit. They seem to feel that
downloading (and evaluating) scripts is forward-thinking and the Web
will eventually "catch up" and require such "advanced" technology. Of
course, if you saw their "global" eval method... :)
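For reference, a robust global eval needs nothing exotic. A minimal
sketch, assuming an ES5 engine for the indirect call (older IE offers
the proprietary window.execScript instead):

    // An indirect eval call executes in the global scope per ES5.
    var globalEval = function(code) {
        if (typeof window != 'undefined' && window.execScript) {
            window.execScript(code); // JScript (older IE)
        } else {
            (0, eval)(code); // indirect call -> global scope (ES5)
        }
    };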

It is all part of the idiotic train of thought that every document will
eventually be a huge application, rendering navigation (what browsers
are made to do) an antiquated concept. Of course, if you just avoid the
unload listeners, you can have a huge application spread across multiple
documents and leverage the browsers' intuitive navigation interface.
- to which the server responds with:

| dojo.provide("dojango._base");
| dojo.mixin(dojango, {
|     test: function(){
|         console.log("test");
|     }
| });

Whatever. You get the idea that they just slap things together and
observe that they seem to work in whatever browsers they have on hand.
Actually reviewing these messes must be relegated to the "waste of
time" category. So they end up with huge messes that throw exceptions
in the latest browsers and no time to go back and rewrite them all. I
dare say they never even spoke to Time! :)
Which has the result of calling their misnamed namespacing methods
"dojo.getObject" and "dojo._getProp", all to create an adapter for
console.log, thus requiring Firebug Lite or something else that adds a
global `console` object.

Whatever again. I went over their ill-advised Firebug Lite nonsense
several times.
I don't care much what they do with the code on their site, and in fact
the code on my site is a little old, too (7 years or so).

But do your sites still function without throwing exceptions?
What bothers me about Dojo is the code behind the marketing.

Yes, it is quite bothersome. Talk to the people behind it and you will
find they are bothersome as well. They refer to themselves as "awesome
hackers" (which always turned my stomach) and staunchly refuse to
discuss any abstract concepts, preferring to go strictly by what they
can see and feel (in the current browsers they "care" about, never mind
what the future or past holds). It's no way to run a rodeo.
Through the
website and through presentations, the founders of Dojo instill a sense
of confidence and empowerment in the project managers who consider using
Dojo.

Yes, what a crock.
That in and of itself is not bad -- it is a great thing to be
able to win an audience over (I need to improve in that area).

It's called lying in my book. At best it is perpetuating myths and
delusions as facts.
The problem is the code itself. The code is large. There is disturbingly
faulty logic in the core of Dojo itself (some of it discussed in this
NG's archives).

It's the tip of the iceberg. Wait until you see my review of Dojo 1.4
(coming soon to cinsoft.net). Of course, it looks very much like Dojo
1.3 as nobody over there touches the core, which is understandable once
you grasp the depth of their misunderstandings (who wants to fiddle with
a foundation that they can't even explain).
Projects that have used Dojo, FWIS, tended to have performance issues
and took too much effort to maintain.
Yes.


A significantly larger project (72+ man-months) having a poor outcome
or failing outright is a more significant failure than a smallish
website that throws a few errors and hangs for two seconds. The Dojo
Foundation site isn't much worse than any other pop website nowadays.

Er, size your browser window down a few notches... :)
 

S.T.

As mentioned, no (Google "Dojo Foundation").

Yeah, I see that now. Got confused as the bulk of your post was about
the toolkit site. 450K is steep, especially given 400K of it is images.
~200K is in sponsor logos - they may have had their hands tied there.
But I'm glad you brought up that site. Assuming your estimates are
correct, it's still way too large (no matter what your mechanical Turk
says).

150K and 30 requests seem reasonable in this day and age. I'd personally
sprite the six icons on the bottom to save a few K and get rid of 5
requests, but I wouldn't rush to do it if there were other aspects of
the site to focus on.

CNN home page is a full meg (varies slightly by photos). Gets slightly
better for dial-up users after the first visit as 650K is various JS
stuff that gets cached, but they'll still lose a big chunk of dial-up
users. They don't care and it's not from ignorance.

Sites are no longer built to serve the lowest common denominator, Jakob
Nielsen be damned.
A few excerpts:-

[snip]

You hate the popular libraries. There is a real probability jQuery, Dojo
and YUI are the cause of the Darfur genocide. I get it.
And as you like to compare everything to cinsoft.net, I see on Alexa
that this particular site is going down the proverbial toilet (at a rate
of knots).

http://www.alexa.com/siteinfo/dojotoolkit.com#trafficstats

...so there may be some hope for the Web after all:-

jQuery is evolving into an effective standard at the expense of other
libraries. There will remain a significant user-base for the current
libraries, and by no means is it a 'done deal', but the lion's share is
now headed one direction. There are pros and cons to this consolidation.
http://www.alexa.com/siteinfo/cinsoft.net?p=tgraph&r=home_home

Not bad for a domain that up until recently had no home page. :)

You've got quite a bit of catching up to do. I'd suggest you port some of
the various jQuery (or whomever) tutorials so people can see how it
works. Far easier to learn by example.
And, one final note (or nail), I see that Dojo deleted my branch. I'm
not complaining, of course, as I told them to (don't have time to walk
them through it and am tired of watching them stumble around). Didn't
take them 24 hours to jump to it either. That's it for them; they've
thrown their last paddle overboard. :)

And if anyone is crazy enough to actually want to fork Dojo (from a
relatively sane starting point), I'd be glad to send the files and
instructions.

No idea what you did for them, but sounds like a generous offer.
 

David Mark

S.T. said:
Yeah, I see that now. Got confused as the bulk of your post was about
the toolkit site.

Which, JFTR, is not the site you cited either (it's .org, not .com).
450K is steep, especially given 400K of it is images.
~200K is in sponsor logos - they may have had their hands tied there.

450K is obscene and I have no doubt that it could be done with minimal
degradation for a quarter of that.
150K and 30 requests seem reasonable in this day and age.

The day and age are irrelevant. That site doesn't do anything special.
I'd personally
sprite the six icons on the bottom to save a few K and get rid of 5
requests, but I wouldn't rush to do it if there were other aspects of
the site to focus on.

CNN home page is a full meg (varies slightly by photos).

Then they have incompetent developers. Not news for a news site, that's
for sure.
Gets slightly
better for dial-up users after the first visit as 650K is various JS
stuff that gets cached, but they'll still lose a big chunk of dial-up
users. They don't care and it's not from ignorance.

It is most assuredly from ignorance. 650K of JS is the punchline to a
bad joke.
Sites are no longer built to serve the lowest common denominator, Jakob
Nielsen be damned.

We've been over this. The script for my favorite mockup is 15K. My
whole library is 150K. What the hell could they be doing that needs 650K?!
A few excerpts:-

[snip]

You hate the popular libraries. There is a real probability jQuery, Dojo
and YUI are the cause of the Darfur genocide. I get it.

That's just stupid (and not the slightest bit relevant to what was
snipped). But they are contributing to businesses pissing away lots of
money on bandwidth, lost customers, unnecessary maintenance, etc.
jQuery is evolving into an effective standard at the expense of other
libraries.

No, check jQuery's logo. It is devolving and will never be any sort of
standard. Technology just doesn't work like that.
There will remain a significant user-base for the current
libraries, and by no means is it a 'done deal', but the lion's share is
now headed one direction. There are pros and cons to this consolidation.

It's already hit its peak. It's got nowhere to go but away.
You've got quite a bit of catching up to do.

I'm just getting started. :)
I'd suggest you port some of
the various jQuery (or whomever) tutorials so people can see how it
works. Far easier to learn by example.

I agree that I need more examples. I'm working on a big one right now
that I am sure will be popular (very "wowie").
No idea what you did for them, but sounds like a generous offer.

I got rid of all of the browser sniffing and synchronous XHR. Also
cleaned every miserable file up to the point where they passed JSLint
(finding many typos and other gaffes along the way). Cleaned up some of
the comments too, which varied from confused to nauseating in places.
Those are the broad strokes and it is by no means a completed mission,
but I did get to the point of running (and passing) their unit tests (in
the major browsers at least). Due to circumstances beyond my control,
the effort was cut short.
 

S.T.

True. What matters is that the page works as expected.
However, for invalid HTML, the HTML specification doesn't say what
the HTML means or how it will be parsed. I.e., you cannot possibly
know whether it will work as expected. That's why invalidly nested
HTML is bad.

I worry about what the marketplace has specified, not a W3C decade-long
adventure in producing a "Recommendation" that sometimes is, sometimes
is not followed.

W3C is like the United Nations for geeks. A lumbering organization that
periodically produces some documents that the world then decides
whether or not to follow. What they say means nothing to my
users. What my users see and interact with is what matters to my users.

I don't care, at all, about any document or specification the W3C
produces. I only care about what the market does (or doesn't do) with
those specifications.
Here I disagree. Again you cannot predict what meaning a browser will
give to your characters, so you can't know that it will work as
expected.

I see where you're coming from. It's not a bad practice by any means --
and perhaps I should put more effort into it -- but I'm not too worried
about it.

Put it this way, if a browser comes out and cannot successfully handle
<a href="page.php?a=1&b=2">link</a> -- it's not going to have any
market share to warrant consideration.
That's CSS, not HTML.
If you write "<span> foo<H1> bar</H1> baz</span>", it is likely to
be interpreted as "<span> foo</span><H1> bar</h1> baz ". No amount
of inline positioning will make the H1 a child of the span.

Again, I'm not advocating nesting blocks within inline -- not a good
practice. But should it occur it's really not a big deal. For instance
if I have:

<span class="blue">
We sell blue widgets cheap!
</span>

... and decide, for SEO purposes, I want:

<span class="blue">
We sell
<h2 style="display: inline; font-size:1em;">blue widgets</h2>
cheap!
</span>

... I'm not going to panic. Maybe I'll get around to restructuring
outlying tags, but it won't be because I'm worried whether I'll pass W3C
validation.

An unlikely example. I'd agree it's best to avoid Hx tags inside spans,
but I objected to a scathing condemnation of Dojo's site because they had
a block inside an inline and had the audacity to allow CSS to ensure the
user sees the intended effect. Suggesting they swap the absolute
positioned span to an absolute positioned div is fine. Mocking them
because they haven't bothered to was absurd.
But do you *know* what the browser does with invalid markup?
All browsers?

I don't know what all browsers do with valid markup. I know what the
overwhelming percentage of browser visits to my sites do with my markup.
That's the best I can do.

I have no delusions of my pages being future-proof, whether they
validate or not. I think anyone who believes their pages are
future-proof because they validate on W3C is kidding themselves.
Have you tried dumping the DOM of the page to see what it's really
turned into? (And if it's the right thing, why not just write that
to begin with).

Not sure exactly what you're asking. Not sure how to dump the DOM.

If you're talking computed styles, I tested if an absolute positioned
span was rendered 'display: block' on various browsers

http://jsfiddle.net/9xrYg/
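The gist of it is just reading the computed style. A rough sketch
(note that currentStyle is the cascaded value, not the computed one,
so older IE may still report the specified 'inline'):

    // Read the effective display of an absolutely positioned span.
    // CSS 2.1 computes display: block when position is absolute.
    var span = document.createElement('span');
    span.style.position = 'absolute';
    document.body.appendChild(span);
    var display = window.getComputedStyle ?
        window.getComputedStyle(span, null).display :
        span.currentStyle.display; // cascaded, not computed (older IE)
    alert(display);
    document.body.removeChild(span);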

Also tested innerText/textContent on <span>a<h1>b</h1></span> to see if
current browsers rendered it as <span>a</span><h1>b</h1>. Didn't appear
to be the case - always returned 'ab'.
 

David Mark

S.T. said:
I worry about what the marketplace has specified, not a W3C decade-long
adventure in producing a "Recommendation" that sometimes is, sometimes
is not followed.

The marketplace specifies sites that work. The recommendations are
followed by the browser developers more often than they are not; and
regardless, they are all we have to go on as the browser developers
don't share their error correction algorithms.
W3C is like the United Nations for geeks. A lumbering organization that
periodically produces some documents that the world then decides
whether or not to follow. What they say means nothing to my
users. What my users see and interact with is what matters to my users.

But you can't judge your work by empirical evidence as you can't see
every browser/mode/configuration, past, present and future. To ensure
that your sites work (and continue to work) in the maximum number of
environments, validating your markup is an essential first step. After
that you need to consider what scripts you will allow between your
markup and your users. Just because you can't see something fail
doesn't mean it isn't failing (or will fail) for somebody somewhere.
You start out with no scripts, which means there is nothing to foul up
your documents. With each script that you add, particularly if you do
not know _exactly_ what they are doing, you increase the likelihood of
pissed off users.
I don't care, at all, about any document or specification the W3C
produces. I only care about what the market does (or doesn't do) with
those specifications.

But you can't quantify that. Just be assured that the browser
developers do care about those documents, so they represent your only
solid clues as to what browsers can be expected to do. Trying to
observe browsers to determine what will fly is folly. You could more
easily make general predictions about the behavior of birds by observing
pigeons at the park.
I see where you're coming from. It's not a bad practice by any means --
and perhaps I should put more effort into it -- but I'm not too worried
about it.

It shouldn't take more than five minutes to clean up the typical invalid
document. The bogus entities are some of the easiest to spot and
correct. You shouldn't even need a validation service to fix those, but
should always use one to check your work (another five seconds or so).
Only then can you stop worrying about the problem as it will no longer
exist.
Put it this way, if a browser comes out and cannot successfully handle
<a href="page.php?a=1&b=2">link</a> -- it's not going to have any market
share to warrant consideration.

You have no idea what a browser (or other agent, search engine, etc.)
may do in the future, even those that already enjoy a healthy share of
the market. Look what happens to bad sites every time a new version of
IE comes out. In most cases, it is the sites, not the browser, that
are to blame. I validate my markup and CSS, use sound scripts with
appropriate feature testing and I can't remember the last time I had to
change a thing due to a new browser coming out (and contrary to your
previous assertion, I primarily work on very complicated sites and
applications). Coincidence?
Again, I'm not advocating nesting blocks within inline -- not a good
practice. But should it occur it's really not a big deal. For instance
if I have:

You are very quick to dismiss what you perceive as small issues. Why
not just do things right and avoid the cumulative effect of such an
attitude, which is invariably undesirable behavior (if not today, then
tomorrow; and if not in your browser, then in one of your users').
<span class="blue">
We sell blue widgets cheap!
</span>

... and decide, for SEO purposes, I want:

<span class="blue">
We sell
<h2 style="display: inline; font-size:1em;">blue widgets</h2>
cheap!
</span>

Search engines can't stand tag soup. ;)
... I'm not going to panic.

Of course not, you should simply structure your document appropriately
from the start and the search engines will love them. If you find
yourself trying to fool the search engines, you are doing something wrong.
Maybe I'll get around to restructuring
outlying tags, but it won't be because I'm worried whether I'll pass W3C
validation.

There's no reason to worry about _why_ you are doing it right. If you
simply do things right, everything else (e.g. search engines, disabled
visitors, oddball browsers) will take care of itself.
An unlikely example. I'd agree it's best to avoid Hx tags inside spans,

Yes, of course it is. It's a very silly rookie mistake and I only
pointed it out as the author in question had puffed about being an
"expert" on markup and didn't like hearing that he wasn't. Perhaps
he'll learn something from this dialog. Or perhaps not if I am any
judge of human nature. :(
but I objected to a scathing condemnation of Dojo's site because they had
a block inside an inline and had the audacity to allow CSS to ensure the
user sees the intended effect.

That was one of dozens of rookie mistakes detailed.
Suggesting they swap the absolute
positioned span to an absolute positioned div is fine. Mocking them
because they haven't bothered to was absurd.

It wasn't mocking. "I'm okay, they suck." Now that's mocking. :)
I don't know what all browsers do with valid markup. I know what the
overwhelming percentage of browser visits to my sites do with my markup.
That's the best I can do.

You can't possibly know that and certainly whatever you do know in that
regard has a near future expiration date as new browsers (and browser
versions) come out constantly these days. Add another five minutes of
effort (and a bit more understanding) to your best.
I have no delusions of my pages being future-proof, whether they
validate or not.

All you can do is your best. :) It's worked for me for a very long
time. That's history, not delusion. In contrast, your position sounds
very much like eschewing smoke detectors because you have no delusions
of your house being fireproof. Doesn't make any sense, does it?
I think anyone who believes their pages are
future-proof because they validate on W3C is kidding themselves.

It's just one measure. Put the detectors in. Granted, your house may
still burn down.
Not sure exactly what you're asking. Not sure how to dump the DOM.

Firebug is one tool that will let you inspect the DOM. Newer browsers
have similar tools built in.
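Failing that, a few lines of script will show you the corrected tree.
A minimal sketch:

    // Dump the browser's corrected DOM; innerHTML reserializes the
    // tree the parser built, not the source sent over the wire.
    var ta = document.createElement('textarea');
    ta.rows = 25;
    ta.cols = 80;
    ta.value = document.documentElement.innerHTML;
    document.body.appendChild(ta);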
If you're talking computed styles, I tested if an absolute positioned
span was rendered 'display: block' on various browsers

Well that was a waste of time as you could have just read the specs you
seem so keen on avoiding. ;)
http://jsfiddle.net/9xrYg/

Also tested innerText/textContent on <span>a<h1>b</h1></span> to see if
current browsers rendered it as <span>a</span><h1>b</h1>. Didn't appear
to be the case - always returned 'ab'.

Current browsers?
 

David Mark

S.T. said:
Fascinating tidbit Andrew.

I'm guessing validating is important to programmer types

If you have been reading the replies, you should have realized by now
that it is important to competent and trustworthy types.
because many
have personalities that gravitate toward 0/1 - true/false structure.

That makes no sense at all. Like I said, you should take the five
minutes to install a smoke detector. Yes (obviously), your house may
still burn down.
Unfortunately the web offers no such measure.

You take measures. The Web just so happens to offer tools to help.
Validating against W3C
gives the illusion of such measure, but the scale is irrelevant.

So I say take five minutes and validate and at least skim the results.
You say "the scale is irrelevant?" What does that even mean?
 

S.T.

For me, not caring about validation is the equivalent of incompetence.

Fascinating tidbit Andrew.

I'm guessing validating is important to programmer types because many
have personalities that gravitate toward 0/1 - true/false structure.
Unfortunately the web offers no such measure. Validating against W3C
gives the illusion of such measure, but the scale is irrelevant.
 

S.T.

But you can't judge your work by empirical evidence as you can't see
every browser/mode/configuration, past, present and future. To ensure
that your sites work (and continue to work) in the maximum number of
environments, validating your markup is an essential first step. After
that you need to consider what scripts you will allow between your
markup and your users. Just because you can't see something fail
doesn't mean it isn't failing (or will fail) for somebody somewhere.
You start out with no scripts, which means there is nothing to foul up
your documents. With each script that you add, particularly if you do
not know _exactly_ what they are doing, you increase the likelihood of
pissed off users.

You have a tendency to speak of web pages as static entities that need
to survive the test of time. That's rarely the case.

Only the content need "survive". More often than not, and nearly always
on any large-scale site, content is safely tucked away in a database, or
stored as a number of files in a directory (i.e. images) or other
self-contained manner. Often content has a long life cycle.

The markup to render this content, the server-side scripting to
manipulate and present the content and the client-side scripting to
interact with this content is ever-evolving. It's easily fixed. It's
frequently changed/updated/redone from scratch. It has a shorter life
cycle than the browser market, and infinitely shorter than the W3C.

I can see the points you're making about the dangers of invalid markup
and popular js libraries to a current page's future. However, there is
(virtually) no need to protect my site's markup and interactive
functionality from the future. It's not intended to make it there.

Perfect the content for use in the future. Work in the 'now' for the
presentation.

Short on time.
 

David Mark

S.T. said:
You have a tendency to speak of web pages as static entities that need
to survive the test of time. That's rarely the case.

No, you have misconstrued my message somehow. Static or not, the same
rules apply.
Only the content need "survive". More often than not, and nearly always
on any large-scale site, content is safely tucked away in a database, or
stored as a number of files in a directory (i.e. images) or other
self-contained manner.

Stored as a number of files in a directory? That pretty much describes
anything and everything. And yes, some of those files may be databases.
Often content has a long life cycle.

Some content does, some content doesn't. Irrelevant regardless.
The markup to render this content, the server-side scripting to
manipulate and present the content and the client-side scripting to
interact with this content is ever-evolving.

You aren't really saying much of substance. It sounds as if you are
building towards an excuse to just do anything and never mind what
happens as a consequence.
It's easily fixed.

You don't have to fix what is not broken.
It's
frequently changed/updated/redone from scratch.

What is? You've pretty much said everything is redone from scratch often.
It has a shorter life
cycle than the browser market, and infinitely shorter than the W3C.

You are focusing on the future, but I take it you haven't mastered the
past or present yet.
I can see the points you're making about the dangers of invalid markup
and popular js libraries to a current page's future.

And its present (which is influenced by the past).
However there is
(virtually) no need to protect my site's markup and interactive
functionality from the future.

Ah, so you are talking about _your_ sites. Again, concentrate on the
present for now.
It's not intended to make it there.

And it surely won't if it is already broken in the present.
Perfect the content for use in the future.

"Perfecting" content is beyond the scope of this discussion. I'm not
even sure what that means in most contexts.
Work in the 'now' for the
presentation.

The CSS? Yes, it should work today (and hopefully tomorrow as well or
your future visitors will be upset). And no, it doesn't make the
slightest bit of sense to perpetually rewrite style sheets (or markup
for that matter).

It's a good thing that the W3C is so slow. It makes it easy to reuse
the same templates for years on end. And now that the browsers (at
least the majors) have converged somewhat, it is even less likely that
you would need to jump through hoops for them either. I have the
sneaking suspicion that you are trying to justify doing things in some
extremely hard way that is completely alien to me (and no, it isn't
because I think of all Web pages as static).
Short on time.

If you knew Time as well as I do... :)
 

Garrett Smith

David said:

[...]
The problem is the code itself. The code is large. There is disturbingly
faulty logic in the core of Dojo itself (some of it discussed in this
NG's archives).

It's the tip of the iceberg. Wait until you see my review of Dojo 1.4
(coming soon to cinsoft.net). Of course, it looks very much like Dojo
1.3 as nobody over there touches the core, which is understandable once
you grasp the depth of their misunderstandings (who wants to fiddle with
a foundation that they can't even explain).

The core has many dependencies and so trying to adjust that, with so
many things that require it, is a risk. OTOH, the problems won't go away
by ignoring them. Fixing the problems sounds like a good idea, but then
there would have to be somebody capable of doing that, and if it is
going to be done by the original authors, then much learning should take
place prior to doing that (or different mistakes will be made).

Regarding your review, I would like to suggest the following document:

http://www.jibbering.com/faq/notes/review/

I'd also like comments on how it can be improved.
Er, size your browser window down a few notches... :)

The Dojo site is a fine exemplar of incompetence, but so are others.

A bad corporate website is not nearly as bad as a large corporate
software project failing. Half a million dollars is not something that
should be thrown in the trash.
 

Eric Bednarz

S.T. said:
Put it this way, if a browser comes out and cannot successfully handle
<a href="page.php?a=1&b=2">link</a> -- it's not going to have any
market share to warrant consideration.

That is *one* (1) example to support your thesis. Browsers with any
market share commonly support at least 96 counter examples.

<http://bednarz.nl/tmp/entref/>
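For instance, "copy" is a named character reference, so a raw
ampersand makes the URL ambiguous. A sketch (the outcome depends on
the parser's entity handling, which is exactly the problem):

    // A parser may read "page.php?a=1&copy=2" as "page.php?a=1(c)=2".
    // Writing &amp;copy=2 in the source removes the ambiguity.
    var div = document.createElement('div');
    div.innerHTML = '<a href="page.php?a=1&copy=2">link</a>';
    alert(div.firstChild.href); // may or may not contain U+00A9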
 

Garrett Smith

Eric said:
That is *one* (1) example to support your thesis. Browsers with any
market share commonly support at least 96 counter examples.

<http://bednarz.nl/tmp/entref/>

Good example. It is also important to consider characters that need to
be percent-encoded, such as: (, ), %.

And that's just for A href.

If the HTML is valid, the program can be focused more on the
requirements and less on how browsers handle errors.
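In script, much of that is mechanical, though note that
encodeURIComponent deliberately leaves ( and ) alone. A sketch:

    // Percent-encode a query value, adding the parentheses that
    // encodeURIComponent skips.
    function encodeValue(s) {
        return encodeURIComponent(s)
            .replace(/\(/g, '%28')
            .replace(/\)/g, '%29');
    }
    // encodeValue('50% off (today)') -> "50%25%20off%20%28today%29"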
 
