Anyone else noticed this issue?

Discussion in 'Javascript' started by Justin E. Miller, Mar 7, 2008.

  1. In XHTML 1.0 Strict, the correct declaration for calling an external
    JavaScript is <script type="text/javascript" src="script2.13.js" />,
    right? Running the validator on my page, it accepted both the preceding
    form and <script type="text/javascript" src="script2.13.js"></script>.
    I would figure one would want to use the former, since it uses fewer
    bytes of data, though that is largely irrelevant on most connections.

    I'm using Firefox 3.0 B3 and have noticed that the page is blank,
    despite it passing validation by the W3 validator. The same page works
    in Opera and Safari though. Not surprisingly, it doesn't work in IE 6.
    I'm running Linux, so I can't currently run IE 7 to test it there. It
    seems to be a bug with the programs not handling the XHTML+XML
    declaration, so I submitted a bug report to Mozilla. I just wanted to
    see if it was just my computer screwing things up.

    I'm learning JavaScript through a book that is using XHTML 1.0
    Transitional and these are just scripts from that book. I changed the
    code to be Strict because I prefer to use it.

    They are online at:

    I don't need any feedback on it other than whether or not it works in
    the browser you're using.

    Thanks for your time.
    Justin E. Miller, Mar 7, 2008

  2. SAM (Guest)

    Justin E. Miller wrote:
    I hope nobody takes that argument (fewer bytes of data) seriously.
    A form with checkboxes beside each test case would have been a good
    help for testers (and tests that don't open in the same window,
    i.e. in a frame?).

    IE 5.2 Mac: empty pages for most of the tests; only the first one works.

    iCab 3 tells me, for your index file:
    HTML Error (3/53): Unknown DOCTYPE.
    Everything seems to work, except that the referrer is unknown.

    Firefox for Mac-Intel:
    the referrer is unknown (undefined)
    SAM, Mar 7, 2008

  3. Henry (Guest)

    That is a 'correct declaration' not "the correct declaration".
    For genuine XHTML, where you have the choice, maybe.
    Given that IE 6 does not support XHTML at all that is not surprising.
    IE 7 does not support XHTML at all either, so there is no point. At
    least there would be no point if what you were using was XHTML.
    Which "XHTML+XML declaration"? When served over HTTP(S) the only
    significant declaration is the HTTP Content-Type header.
    It is neither a bug nor your computer. This is an author screw up.
    Understandable, but not necessarily that good an idea.
    Oh dear.
    The HTTP headers sent with that page include:-

    HTTP/1.1 200 OK
    Content-Length: 591
    Date: Fri, 07 Mar 2008 12:53:33 GMT
    Content-Type: text/html
    ETag: "8002cefc-24f-47ce1a94"
    Server: Apache/1.3.34 Ben-SSL/1.55
    Last-Modified: Wed, 05 Mar 2008 03:59:16 GMT

    And there I see a Content-Type header asserting the resource to be
    "text/html", not 'application/xhtml+xml'. Thus your page is not, and
    never has been, XHTML. The mark-up you are using is tag-soup HTML
    mark-up. You are getting varying results because tag-soup HTML mark-up
    (and particularly flavours that resemble XHTML) needs error correction
    applied by the tag-soup HTML parser in order to be represented as an
    HTML DOM (and rendered from that DOM). Error correction, being
    non-standardised (and non-standardisable), is more or less effective
    in different browsers.

    All you are saying in observing that Opera and Safari are handling
    your particular flavour of tag-soup mark-up is that their tag-soup
    parsers are more accommodating than those in IE or Firefox.

    The pity of this is that it is a very common misconception that the
    distinction between XHTML and HTML is determined by the mark-up used,
    while in reality it is entirely down to the HTTP Content-Type headers
    sent with the mark-up. From the scripting point of view this is an
    important distinction because browsers receiving a 'text/html' will
    build an HTML DOM to be scripted while browsers receiving an
    'application/xhtml+xml' (assuming they can handle it at all) will
    build an XHTML DOM. The two types of DOM are significantly distinct,
    and distinct in a way that means that most non-trivial HTML DOM
    scripts will not operate if they are exposed to an XHTML DOM (and the
    reverse also holds). Thus much mark-up that resembles XHTML (but is
    actually tag-soup HTML because it is served as 'text/html') gets
    scripted in a way that means that, in the event of the mark-up ever
    being interpreted as XHTML (i.e. served as 'application/xhtml+xml'),
    the web page in question will instantly be rendered broken.
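The branching described above can be sketched in a few lines. `parserFor` is a hypothetical name used purely for illustration (no such browser API exists), and real browsers do considerably more than this, but it captures the decision the Content-Type header drives:

```javascript
// Hypothetical sketch of the choice a browser makes from the
// Content-Type header (illustration only; parserFor is not a real API).
function parserFor(contentTypeHeader) {
  // Strip parameters such as "; charset=utf-8" and normalise case.
  const mime = contentTypeHeader.split(';')[0].trim().toLowerCase();
  if (mime === 'application/xhtml+xml') {
    return 'XML parser -> XHTML DOM';          // strict, draconian error handling
  }
  if (mime === 'text/html') {
    return 'tag-soup HTML parser -> HTML DOM'; // error-correcting parse
  }
  return 'other handling (plugin, download, ...)';
}

console.log(parserFor('text/html; charset=utf-8'));
// -> "tag-soup HTML parser -> HTML DOM"
console.log(parserFor('application/xhtml+xml'));
// -> "XML parser -> XHTML DOM"
```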

    I would be willing to wager a month's income that the book you have,
    the one using 'XHTML 1.0 Transitional', demonstrated only HTML DOM
    scripting. It certainly does not appear to have explained to you the
    practical distinctions, as regards scripting, between XHTML and HTML.
    You are also going to have to go back to your Firefox bug report and
    apologise for wasting everyone's time.

    You are not necessarily the best judge of what you need in this regard.
    Henry, Mar 7, 2008
  4. VK (Guest)

    Wrong. That is a very common mistake caused by some ugly, erroneous,
    yet alas popular XHTML manuals. This error is sustained by the
    ill-famed "Appendix C" from the W3C, which is plainly wrong in some of
    its parts. That once highly discussed document has only historical
    value by now (as, largely, does XHTML itself), but I once traced the
    possible origin of the error to this source:
    "In SGML-based HTML 4 certain elements were permitted to omit the end
    tag; with the elements that followed implying closure. XML does not
    allow end tags to be omitted. All elements other than those declared
    in the DTD as EMPTY must have an end tag. Elements that are declared
    in the DTD as EMPTY can have an end tag or can use empty element
    shorthand."

    This way <script src="something.js" /> is no more valid than <body />
    or <p /> or <div />.
    If _treated by XML rules_ it produces a well-formed XML structure, but
    in no case will it be a valid XML structure, because no one can turn
    non-empty elements into empty elements (or vice versa) out of personal
    preference, disregarding the declared (or assumed) DTD.

    The SCRIPT element is not, and never was, an EMPTY element. It is a
    NON-EMPTY element and accordingly requires a closing tag. The element
    may have an optional SRC attribute pointing to an external script
    file, as well as script content between the opening and closing tags.

    If both an SRC value and element content are provided, then the
    algorithm goes as follows:
    1) If the given UA supports the SRC attribute and the referenced file
    is accessible over the network, then that file is loaded and parsed;
    the element content, if any, is disregarded.
    2) If the given UA doesn't support the SRC attribute, then its value
    is disregarded and the element content is parsed.
    3) If the given UA does support the SRC attribute but the referenced
    file is not accessible due to a network error, then the SRC value is
    disregarded and the element content is parsed. This last step, despite
    being specified, is alas not implemented in the majority of current
    UAs.
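The three steps above can be sketched as plain selection logic. The function and parameter names below are hypothetical, chosen only for illustration; a real UA performs this selection internally while parsing:

```javascript
// Hedged sketch of the three fallback steps described above.
// effectiveScript and its parameters are hypothetical names.
function effectiveScript(uaSupportsSrc, externalLoaded, externalText, inlineText) {
  if (uaSupportsSrc && externalLoaded) {
    return externalText; // 1) external file wins; element content ignored
  }
  if (!uaSupportsSrc) {
    return inlineText;   // 2) SRC unsupported: parse the element content
  }
  return inlineText;     // 3) network error: fall back to the content
                         //    (the step most UAs fail to implement)
}

console.log(effectiveScript(true, true, 'alert("external")', 'alert("inline")'));
// -> 'alert("external")'
```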

    The fact that one doesn't put anything between the opening and closing
    tags obviously doesn't change anything in the XML rules spelled out
    above. If one needs more information, or room to complain about it,
    then comp.infosystems.www.authoring.html or comp.text.xml are the more
    appropriate places, as the question doesn't really have much to do
    with JavaScript programming itself.

    For actual programming it is only important to remember that the only
    proper form for external scripts is
    <script src="anything.js"></script>
    and any other suggested forms, no matter who suggests them, are wrong
    and will most probably fail in this or that or all environments.
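A rough way to see why the "self-closing" form fails when served as text/html: an HTML parser ignores the trailing slash in the start tag, leaves the SCRIPT element open, and treats everything up to the next </script> as script content. The helper below is only a toy model of that behaviour (not a real parser), written to make the effect visible:

```javascript
// Toy model (not a real parser) of how a tag-soup HTML parser treats
// <script ... />: the trailing "/" in the start tag is ignored, and
// everything up to the next </script> is swallowed as script content.
function htmlScriptContent(markup) {
  const open = /<script\b[^>]*>/i.exec(markup); // "/>" reads as plain ">"
  if (!open) return null;
  const rest = markup.slice(open.index + open[0].length);
  const close = rest.search(/<\/script>/i);
  return close === -1 ? rest : rest.slice(0, close);
}

const page = '<script type="text/javascript" src="a.js" /><p>Hello</p>';
console.log(htmlScriptContent(page));
// -> '<p>Hello</p>'  (the rest of the page vanished into the script element)
```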

    Hope it answers your question.
    VK, Mar 7, 2008
  5. Thanks to everyone who helped answer my questions. I appreciate the help
    and will update my script to reflect what it should be.
    Justin E. Miller, Mar 7, 2008
  6. I'm afraid that statement, in its absolute form, is just plain wrong.
    The document type is _not_ determined by the Content-Type header, but
    by the DOCTYPE declaration in the markup document, or by the inference
    a Web client makes when that declaration is missing. (So much for
    theory.)

    It is instead correct to say that currently known client application
    software handles even Valid XHTML markup as if it were tag-soup HTML
    markup if it is served with `Content-Type: text/html'.

    It is important to make that distinction in the argumentation because
    in the future Web clients may behave differently, and maybe there are
    Web clients that do so today. Hence that wording is correct, with
    emphasis on "the distinction between ... is determined". What the
    outcome of that determination (the selection of a suitable parser) is
    depends on the Web client.

    Thomas 'PointedEars' Lahn, Mar 7, 2008
  7. Henry (Guest)

    Is it? There is no doubt that a browser receiving this page is going
    to interpret it as HTML (not XHTML), build an HTML (not XHTML) DOM to
    be scripted, and so use a tag-soup HTML parser (rather than an XML
    parser) to parse the mark-up.
    Did I say it was? The Content-Type header determines how the content
    will be handled when it arrives. If that header asserts that the
    contents are HTML then they will be parsed as (usually tag-soup) HTML,
    with error correction applied where necessary (and within the
    error-correction facilities provided by the tag-soup parser in
    question).
    The DOCTYPE does not determine the type of a document. The most a
    DOCTYPE could ever do is make an assertion about the type of a
    document. But like any assertion, that assertion can be false.
    And there you have the same problem as when trying to assert a
    character encoding in a META tag inside the mark-up: in that case
    the parser is already decoding characters by the time it gets to
    find out how it 'should' be doing so, a situation with an obvious
    contradiction. Likewise, with a DOCTYPE the parser must already be
    parsing before it gets to see the DOCTYPE. Now an XML/XHTML parser
    might be in a position to throw up its hands in horror at
    encountering an HTML DOCTYPE (or at not finding one at all), but
    for a tag-soup HTML parser that option is not realistic.

    The point of a tag-soup HTML parser is to accommodate errors in the
    mark-up, which means it is expecting to see errors in the mark-up. So
    if it were to encounter a DOCTYPE that would be valid and correct in
    an XHTML document, how could it see that as anything but an error in
    an HTML document? But errors are to be accommodated, and so the
    XHTML-like DOCTYPE changes nothing beyond necessitating some error
    correction.
    As there is no meaningful distinction between an erroneous HTML
    document that happens, by an accumulation of coincidental errors, to
    100% resemble a well-formed and valid XHTML document, and an actual
    well-formed and valid XHTML document, the question of how a client
    application handles a document is very important.
    There was a very short period when Konqueror did try to switch the
    interpretation of mark-up based upon the DOCTYPE rather than the
    HTTP Content-Type header. They stopped doing that for two very good
    reasons. First, there are many more documents with XHTML DOCTYPEs
    than documents that can successfully be handled as XHTML. And
    second, because when scripted even 'Appendix C XHTML' documents
    cannot be handled as XHTML, since almost no HTML DOM scripts will
    function correctly if exposed to an XHTML DOM.
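One concrete example of that breakage, mocked here because the real difference only shows up inside a browser: an HTML parser reports tagName in upper case, while an XML (XHTML) parse preserves the source case, so naive comparisons fail. The node objects below are plain stand-ins, not real DOM nodes:

```javascript
// Mocked illustration (not real DOM nodes) of one common breakage:
// an HTML parser upper-cases tagName, an XML (XHTML) parse keeps the
// source case, so HTML-DOM scripts comparing against 'DIV' break.
const htmlDomNode  = { tagName: 'DIV' }; // what an HTML DOM reports for <div>
const xhtmlDomNode = { tagName: 'div' }; // what an XHTML DOM reports for <div>

function isDiv(node) {
  return node.tagName === 'DIV'; // a typical HTML-DOM-only comparison
}

console.log(isDiv(htmlDomNode));  // -> true
console.log(isDiv(xhtmlDomNode)); // -> false: same mark-up, broken script
```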

    So yes, browsers may attempt this in the future, but they will
    learn their mistake very quickly when they see how broken their
    browser is going to look as a result.

    It is interesting to note what the W3C are doing themselves at the
    moment. Mostly they content-negotiate, serving their mark-up with
    'application/xhtml+xml' headers to Firefox/Gecko browsers. But the
    mark-up validator page is served as 'text/html' despite having
    XHTML-like mark-up. Why? Because it is scripted, and almost no
    non-trivial DOM scripts can operate with both HTML DOMs and XHTML
    DOMs (and the effort required to do content negotiation with scripts
    far exceeds the returns, so people (including the W3C) prefer not to
    attempt it, or just don't know how to do it to start with).
    Practical control of the interpretation still lies with the
    originating server (and so by implication is either under the
    influence of the author or known to be a pre-fixed factor at the time
    of authoring).
    Henry, Mar 10, 2008
