Jeff said:
When the software uses the W3C/WAI 'guidelines' to rate a
site ......
When the software authors impose a bogus interpretation of the WAI
guidelines there is no reflection in the WAI or its guidelines.
What other criteria are available for accessibility testing?
Human judgement of course; preferably informed and intelligent. The
criterion being: is the result an accessible web site or not?
I have personally experienced an instance where an action taken to
satisfy Bobby's automated testing directly resulted in an otherwise
broadly accessible web site being rendered unusable by anyone who could
not operate a mouse (or similar pointing device). If the site's authors
had had the goal of creating an accessible site, that would have been an
obvious failure; their actual goal was just to satisfy Bobby, so they
succeeded. But it is not difficult to see that an exclusive dependence
on mechanical accessibility checking makes little contribution to the
accessibility of web sites, and is even directly harmful to that cause.
Human judgement: is the result an accessible web site or not? The WAI
provides no more than guidance in making that judgement.
You might want to look at 'guideline 1.1' before making
any further comments.
1.1 Provide a text equivalent for every non-text element
(e.g., via "alt", "longdesc", or in element content). This
includes: images, graphical representations of text (including
symbols), image map regions, animations (e.g., animated GIFs),
applets and programmatic objects, ascii art, frames,
****** scripts *******, images used as list bullets, spacers,
graphical buttons, sounds (played with or without user
interaction), stand-alone audio files, audio tracks of video,
and video.
I have read it, and I have thought about it. And I have concluded that
the "text equivalent" of many things is no text at all. An image acting
as a spacer, sounds played in the background for atmosphere, and almost
anything scripted.
Where scripts stand out is that they may act in a way that could have a
text equivalent. For example, a "tool tip" script may be presenting
supplementary information that should still be included when scripting
it as a tool tip was not viable or meaningful. Or a drop-down navigation
menu, where the absence of dynamic and interactive script support should
leave some means of navigation that will inevitably have a significant
text content.
What we are disputing here is not that there should be viable
alternatives for when scripted actions are impossible or do not make
sense, but how that is to be achieved.
When providing a 'text equivalent' makes sense it makes sense in all
circumstances where the scripted action that it is an alternative to
does not make sense or is impossible. Thus you need a mechanism for
providing those 'text equivalents' that is _mutually_exclusive_ to the
scripted action for which they are an equivalent.
NOSCRIPT elements do not provide that mechanism because they are
mutually exclusive to the wrong condition. They are only used when
script interpretation is unsupported or disabled.
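A hypothetical sketch of that mismatch (invented for illustration, not
taken from the actual pages under discussion): the browser's scripting
engine is present and enabled, so the NOSCRIPT content is never
rendered, yet the unsupported script type means no scripted action
happens either. Neither branch contributes anything.

```html
<!-- Hypothetical sketch: a script in a type this browser's engine
     does not understand. Script interpretation is enabled, so the
     NOSCRIPT content is NOT rendered; but the script never acts
     either, so neither branch provides anything to the user. -->
<script type="text/tcl">
    # TCL source that no mainstream browser will execute
    puts "dynamic menu"
</script>
<noscript>
    <!-- Only rendered when script interpretation is unsupported or
         disabled, which is not the case here. -->
    <a href="sitemap.html">Site map (text navigation)</a>
</noscript>
```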
Scripts designed for clean degradation to viable underlying HTML
(and/or server-side fall-back), on the other hand, will be in a
position to provide those text equivalents both when scripting is
unavailable or disabled and whenever the environment does not support
the facilities needed by the script in order to act. They even
facilitate the selective disabling of scripted actions by users, such
as the user of a screen reader maybe
preferring not to have an animated drop-down menu (because chunks of a
page appearing and disappearing doesn't read that well) but still
preferring client-side form validation.
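A sketch of that clean-degradation approach (the function and feature
names here are invented for illustration, not taken from any actual
page): the drop-down is only built when every host feature it relies
on has been verified, so the underlying HTML list of links remains the
fallback in every other case.

```javascript
// Illustrative sketch of clean degradation (names invented for this
// example). The animated drop-down menu is only built when the
// environment has been verified; otherwise the plain HTML list of
// navigation links is left untouched and remains usable.

// Feature test: does this environment provide what the menu needs?
function canAnimateMenu(global) {
    var doc = global.document;
    return !!(doc &&
              doc.getElementById &&
              doc.createElement &&
              typeof global.setTimeout !== "undefined");
}

// Entry point: act only when the test passes; report what happened.
function initMenu(global, buildDropDown) {
    if (!canAnimateMenu(global)) {
        return false;   // degrade: underlying HTML links stay in use
    }
    buildDropDown(global.document);
    return true;        // enhance: script replaces the static menu
}
```

A user who disables just this one script (or whose environment fails
the test) gets the same static links as a user with no scripting at
all; the fallback conditions collapse into a single outcome.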
You missed a few words from what I posted:
http://www.w3.org/TR/html401/interact/scripts.html#idx-script-6
If the user agent doesn't support scripts, .....
I take that to mean: if the browser doesn't have a scripting
engine or scripting is disabled.
There is no dispute about the mechanism of NOSCRIPT elements.
The browser DID support scripts (scripting enabled),
it just didn't support the TCL script.
Which was Matt's (much as I am loath to admit it (and wish I had
spotted it myself) ;-), very good) point. The NOSCRIPT element did
exactly what it was specified to do, and completely failed to contribute
anything to the outcome. No text equivalent was provided, and Bobby went
away happy that another inaccessible web site had met its dubious
criteria.
Therefore it is up to the programmer to ensure
that the browser supported such scripts.
Would you also ask the programmer to ensure that there were no
interruptions to the network while their script was running, no power
failures, that nobody unplugged any of the computers involved, etc, etc?
There are conditions that are outside of the control of programmers, and
the execution environment of an Internet browser script is a condition
outside of the control of the author of that script.
A browser script cannot know anything about its execution environment
until it starts executing in that environment. And if it never starts
executing it never will know anything about that environment. A
programme, no matter how it is coded, cannot ensure that it will be
executed, only how it will execute if and when it is executed.
What the script author can do is design their script for clean
degradation to underlying viable HTML (and/or server side fall-back) so
that its failure to execute (or its inability to act, or its choice not
to act) leaves that underlying HTML providing the alternative to its
action. And having done that the NOSCRIPT element has become redundant
because not acting through lack of support or choice would have the same
satisfactory outcome as being unable to act.
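One common way of arranging that (a sketch of the generic pattern, not
anything from the disputed pages) is to make the default HTML
behaviour the result of every failure path:

```javascript
// Sketch: wrap a scripted action in a feature test. Returning true
// from an intrinsic event handler lets the browser's default HTML
// behaviour proceed (e.g. follow the link); returning false cancels
// it because the script has acted. If the script never executes at
// all, the default behaviour happens anyway, so no execution, a
// failed feature test, and a user's choice not to act all degrade to
// the same underlying HTML.
function makeGuardedHandler(featureTest, action) {
    return function (event) {
        if (!featureTest()) {
            return true;    // degrade: default HTML behaviour
        }
        action(event);
        return false;       // enhance: the script handled it
    };
}
```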
It is not the browser's responsibility.
Who was ever going to blame the browser? The responsibility lies with
the author. It is a responsibility to achieve a meaningful mutually
exclusive relationship between the successful execution of scripts in an
unknown browser environment and their failure to execute, for whatever
reason.
NOSCRIPT elements do not provide that relationship so it is the
responsibility of the author to employ an alternative mechanism that
does. And once they have done that there is no longer any need for
NOSCRIPT elements.
No, it's an HTML rendering process term
My question was not in what context the term is used. I wanted an
explanation of the sequence of actions and/or events that explains the
use of the words "jump to" in "browser will ignore any script tags and
jump to the noscript tag", because in most contexts where the words
"jump to" are used the accompanying concept would have no relationship
to the actual actions of an HTML parser or renderer with scripting
disabled/unsupported.
It will allow you to have your page pass the W3C/WAI guidelines
No it won't. It might get you past automated accessibility testing
software like Bobby, but the WAI's primary guideline is to create an
accessible web site and NOSCRIPT elements are redundant in achieving
that. And empty NOSCRIPT elements are doubly (and self-evidently)
redundant.
Yes they 'fail to execute' but the handling of this failure
is completely different.
In what way are those two conditions handled differently? In both cases
the content of SCRIPT elements will not be executed and in both cases
the content of NOSCRIPT elements will be displayed/presented.
Right comment, wrong area. So bite me.
In what sense "right comment"?
http://www.htmlguru.com/
Caveat: I haven't checked out every single page on every
available browser but the home page is ample demonstration.
An error dialog and a blank screen on script-enabled IE 6; that is
about as far from "successfully execute on all (script capable)
browsers" as you can get. But an examination of the first couple of
function bodies
suggests at least another couple of browsers where the script will
error-out, and that is without even trying.
There are infinitely better candidates posted to this group on a weekly
basis. But still none are claimed to "successfully execute on all (script
capable) browsers".
Richard.