If I wanted to never use innerHTML, what else would I use?
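The usual alternative is to build the nodes with the W3C DOM methods
instead of assigning a markup string. A minimal sketch (the 'list' id is
made up for the example):

  // Instead of: document.getElementById('list').innerHTML = '<li>apple</li>';
  var li = document.createElement('li');              // create the element
  li.appendChild(document.createTextNode('apple'));   // give it its text
  document.getElementById('list').appendChild(li);    // attach it to the page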


John W. Kennedy

Jim said:
The mandatory requirements of XML processing are not good for users;
failing with an incomprehensible error, as Mozilla does, is nothing but
confusing to the user.

This is a fundamental problem with XHTML.

It is /not/ a fundamental problem with XHTML, it is a fundamental
problem with so-called "web browsers" that can't be bothered to follow
standards (in some cases, due to incompetence, but, in Microsoft's case,
because it is their deliberate policy to ignore and sabotage standards
wherever possible).

--
John W. Kennedy
"But now is a new thing which is very old--
that the rich make themselves richer and not poorer,
which is the true Gospel, for the poor's sake."
-- Charles Williams. "Judgement at Chelmsford"
 

Jim Ley

It is /not/ a fundamental problem with XHTML, it is a fundamental
problem with so-called "web browsers" that can't be bothered to follow
standards (in some cases, due to incompetence, but, in Microsoft's case,
because it is their deliberate policy to ignore and sabotage standards
wherever possible).

Er, no, any standard which requires you to show users messages that
they are not equipped to understand is a failure which user agents
with non-technical users will not follow.

Which is why IE doesn't support XML MIME types for XHTML, and Opera
ignores the requirement. Why do you prefer Opera's behaviour to IE's
here?

Jim.
 

John Bokma

Thomas 'PointedEars' Lahn said:
John said:
Thomas 'PointedEars' Lahn said:
Jim Ley wrote:
[...] Thomas 'PointedEars' Lahn [...] wrote:
Jim Ley wrote:
How much support for DOM 3 will be in IE 7? It's struggling to
implement DOM 2 fully.
DOM 3 is about XML, IE7 is an HTML user agent.
Well, AFAIK IE 7 Final has not been released yet. Are you saying
that IE 7 Beta 2 still does not support application/xhtml+xml and
XML document types like XHTML?
The IE7 team are on record saying that application/xhtml+xml will
not be a supported type of IE7.
D'oh.

This is a good thing.

Pardon?

JFTR: I think it is definitely a Bad Thing, for it keeps XHTML a
corner language instead of helping it to become a cornerstone language
of Web authoring.

Do we need such a cornerstone? I have my doubts.
The contradiction and -- I must say -- hypocrisy
expressed by Microsoft in this matter become obvious when you look at
the wannabe-X(HT)ML code they produce (e.g. in the MSDN Library) and
serve as text/html to IE's tag soup parser.

MS is not Bill Gates sitting behind a desk. It's a huge organisation in
which people can perfectly well have different views on one matter.
Yes, I hope so. But only if there is proper support in all widely
distributed user agents. So far that is not the case, and it is a
pity that it appears to stay so, six years after the first XHTML
specification, to which several Microsoft people also contributed.

And it will probably (and hopefully) not happen. XHTML = XML, which
means that one tiny mistake might stop the parser. Remember the Netscape
3.x (IIRC) days, when you got white pages (or was it grey)?
Yes, I can. And once XHTML as application/xhtml+xml, parsed by an XML
parser, gets broad support from user agents, there are no disadvantages
left compared to HTML.

Oh, yes, there are: if the document is not well-formed, the parser has
to give up.
However, I have named both before
(here), and /this/ discussion is not on-topic and I will not continue
it here.

You could give a message-id :-D
That is _their_ problem.


True; however, that is not a valid argument against XHTML as a
hopefully _future_ "mainstream" markup language. And I was talking
about a possible, and for me desirable, future only.

The major valid argument is that the parser is too strict to be
practically useful in the real world.
 

Thomas 'PointedEars' Lahn

John said:
Thomas said:
Jim said:
[...] Thomas 'PointedEars' Lahn [...] wrote:
Jim Ley wrote:
The IE7 team are on record saying that application/xhtml+xml will
not be a supported type of IE7. D'oh.
This is a good thing.
Pardon?

It means it's time to start the official campaign:

"Microsoft is deliberately sabotaging the Web."

"Microsoft's brand-new IE7 is already six years out of date."

"Microsoft Internet Explorer is not a web browser."

Count me in :)


Regards,
PointedEars
 

RobG

Jim said:
But it's pretty irrelevant given that DOM 3 L&S is defined over XML
documents, and not HTML ones... so you're not actually comparing
apples and apples here.

Yes, a good point. Are there any specific issues with using the Firefox
(Gecko?) XMLSerializer with HTML?

As far as I can tell, provided the empty-element tags are fixed
(i.e. '/>' is replaced with '>'), the results are valid HTML 4
Strict. I tested a few pages by using the serializer to generate the page
HTML source and running it through the W3C validator.

Tag names are capitalised, so it seems to understand HTML (I guess from
the DTD) even though it is an XML serializer.
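For reference, the round trip described above is roughly this (a sketch;
the regex is only the simple '/>' fix-up mentioned, not a full converter):

  var serializer = new XMLSerializer();
  // Serialize the live document back into markup
  var markup = serializer.serializeToString(document.documentElement);
  // Replace XML-style empty-element tags ('<br/>') with HTML ones ('<br>')
  markup = markup.replace(/\s*\/>/g, '>');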
 

John Bokma

Ian Collins said:
One I would tout is that XHTML forces you to separate the content from
the style, a good thing IMHO.

How does HTML 4.01 strict fail in this regard?
I don't use any WYSIWYG editors, so I can't comment on how well they
work with XHTML.

I have found that I get a lot more 'code' reuse once the two are
separate.

Is that because you chose to do it or because the language forces you to
do so? As a programmer I always have a good laugh when in a discussion
someone claims that language A is better than B because A "forces"
things. IMO, you should do things because they have a good reason to be
done that way, not because of some odd "force".

It might be that XHTML drops a lot of things that should never have been
in HTML in the first place, but it also adds a few kludges.

And in no way does it stop someone creating a design horror, code-wise.

Not every HTML page that validates is a well-thought-out, well-coded page.
And the same holds for X(HT)ML.
 

John Bokma

but authoring webpages is something people do by hand - or by other
non-machine-repeatable processes - i.e. you plug together a load of
components that tie together content from DBs etc.

Given that all code has bugs, why should we prevent users getting
content simply because of a minor bug in a piece of software?

Yup, true. For pages with static content, one can validate, upload, and
the work is done. But more and more webpages are generated on the fly or
change too often.

How many people don't complain if a bug crashes their application?
 

John Bokma

John W. Kennedy said:
It is /not/ a fundamental problem with XHTML, it is a fundamental
problem with so-called "web browsers" that can't be bothered to follow
standards

The recommendation (it's not a standard) for XML is that if the document
is not well-formed, the parser should *stop* and report.

You really think my mom is waiting for stuff like:

Error at line #121: open tag found without close tag.
(in some cases, due to incompetence, but, in Microsoft's case,
because it is their deliberate policy to ignore and sabotage standards
wherever possible).

You're mistaken: most of what you call standards are recommendations and
working drafts.
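For what it's worth, the stop-and-report behaviour mentioned above is
visible from script too: Mozilla's DOMParser hands back an error document
instead of content when the input is not well-formed (a sketch; the
<parsererror> element is Gecko-specific, other parsers report differently):

  var parser = new DOMParser();
  // Missing </item>, so the document is not well-formed
  var doc = parser.parseFromString('<root><item>oops</root>', 'application/xml');
  if (doc.getElementsByTagName('parsererror').length > 0) {
    alert('Parse error -- exactly the message mom is not waiting for.');
  }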
 

Richard Cornford

John said:
It means it's time to start the official campaign:

"Microsoft is deliberately sabotaging the Web."

"Microsoft's brand-new IE7 is already six years out
of date."

"Microsoft Internet Explorer is not a web browser."

Now try expressing that in terms that my mother will understand and care
enough about to stop her using IE 5.5 as a web browser.

Richard.
 

VK

Ian said:
If user agents were strict,
poorly constructed sites would have to clean up their acts.

If user agents were too strict to give a satisfactory browsing
experience, users would give up on them and switch to less strict user
agents: and I assure you that the supply would meet the demand within a
few days.

Some people keep forgetting that browsers are a *business* and a *fight
for the market*.

John Doe and ACME, Inc. do not give a crap how profoundly hierarchical
the page structure is or how well the content is separated from the layout.

John Doe wants secure browsing with all the cool twists he just saw at
the Jonsons' - and a self-made page for his little daughter without
getting an associate degree in web development.

ACME, Inc. wants the most cost-effective solution with minimum
maintenance cost.

Both of them are absolutely indifferent to how you deliver it: by
separating layout and content or by mixing them through a text
randomizer.

You manage to satisfy _both_ - you get the market (or a good share of
it).

You disappoint either of them - your name goes into history. And no
one will remember in a year how standards-compliant your browser was at
the moment of its death.
 

John Bokma

Ian Collins said:
It doesn't, I was in cloud cuckoo land this morning when I posted
that...

:) been there too (too often)
No, it's all down to the separation: I've found bits of XHTML without
any style included tend to get reused more often than those
constrained to a specific layout. Same goes for the CSS classes.

Yup, true, but you can do the same with HTML 4.01 strict. It all comes
down (again) to the person typing the code.

I have no idea how XHTML could help http://johnbokma.com/ , for example.
 

John Bokma

Ian Collins said:
Doesn't this encourage people to produce well formed mark-up?

Yes, just well-formed. The problem is not the human-generated static
pages, but pages that come out of CMSes and similar systems. IIRC phpBB
uses XHTML, but I don't even want to think about how many pages generated
by it fail, maybe even before people start tweaking the templates.
This
may not be the be all and end all, but surely encouraging good
practice is a good thing and a basis for good design.

IMNSHO, with all coding, it depends on the person doing the coding, not
what the language forces. If one makes a mess in HTML 4.01, the person
will make a mess in XHTML. Maybe not the same kind of mess, but a mess it
will be.
We don't say that about our compilers :)

True, but my mom is not using a compiler, for example. Nor are most of my
friends. Yet they will be confronted with compiler warnings if XHTML
becomes mainstream (which I guess will never happen).
If user agents were strict,

Netscape 3.x was, IIRC. Ages ago I tested my pages with Netscape just for
that reason (on an Indy :p, had to buy extra memory to keep NC going for
several minutes more )
poorly constructed sites would have to clean up their acts. I know
this is pie in the sky, but who knows?

Netscape, because they changed the parser in 4.x (or later? Can't recall).
 

John Bokma

Ian Collins said:
Drifting back on topic, one benefit I'd like to see from strong
XML/XHTML support in user agents is better DOM conformance, to make the
job of us poor JavaScript authors easier. From what I've seen, the two
go hand in hand.

Ok, I know too little about that to comment on it.
It may not be common on the public internet, but I have worked with XML
interfaces that work well for machine-to-machine communication and, with
matching CSS, for humans.

Yes, I use XML internally ((almost) my whole jb site is marked up in XML
files, which are turned into HTML), and I know the advantages from a
developer's point of view, and the disadvantages to my mom :-D
If the internet is to move on, we have to look beyond HTML. The
combination of XML/CSS and JavaScript offers unlimited potential.

Can you explain why HTML + JavaScript can't offer such a thing? Also
because I see no reason why an HTML parser can't fix errors instead of
dying at the first one, and create a parse tree that is well-formed.

XML parsing reminds me too often of the first compilers I used (or wrote
myself), that stopped at the first error. Recompile, next error,
recompile, next error.
Perhaps if there was a good application for XML in porn, there would be
better support :)

A porn application that now and then doesn't show porn, but "Parse error
at line 12123". Uhm....
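On the question above about an HTML parser fixing errors: where an HTML
parser is exposed to script, broken markup is repaired into a well-formed
tree instead of being rejected. A sketch, assuming a browser whose
DOMParser accepts "text/html" (current ones do; browsers of that era did
not):

  var parser = new DOMParser();
  // The <b> is never closed; the HTML parser repairs it rather than giving up
  var doc = parser.parseFromString('<p><b>bold text</p>', 'text/html');
  // The resulting tree is well-formed: the <b> element was closed for us
  alert(doc.body.firstChild.firstChild.nodeName); // "B"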
 

John W. Kennedy

Jim said:
Er, no, any standard which requires you to show users messages that
they are not equipped to understand is a failure which user agents
with non-technical users will not follow.

Like TCP/IP? This is a strawman. Get it right, or don't do it.

--
John W. Kennedy
"But now is a new thing which is very old--
that the rich make themselves richer and not poorer,
which is the true Gospel, for the poor's sake."
-- Charles Williams. "Judgement at Chelmsford"
 

John W. Kennedy

John said:
The recommendation (it's not a standard) for XML is that if the document
is not well-formed, the parser should *stop* and report.

You really think my mom is waiting for stuff like:

Error at line #121 open tag found without close tag.

No, I think your mom is waiting for websites coded by the competent.
You're mistaken, most standards you call standards are recommendations and
working drafts.

A verbal quibble to pass off Microsoft's vicious behavior as acceptable,
and you know it.

--
John W. Kennedy
"But now is a new thing which is very old--
that the rich make themselves richer and not poorer,
which is the true Gospel, for the poor's sake."
-- Charles Williams. "Judgement at Chelmsford"
 

Lasse Reichstein Nielsen

Thomas 'PointedEars' Lahn said:
Lasse Reichstein Nielsen wrote:

Utter nonsense. It is not that hard to write Valid markup, be it HTML,
XML, or XHTML, if you put a little effort into it.

It's not *hard* in any way. Anybody who can close the parentheses
correctly on a Lisp program should have no problem writing XML.

You do have to stay attentive, making sure you don't misspell a tag or
attribute name, that you remember the "/>" on empty elements, that you
write only correct values into attributes, etc. That's not hard, just
demanding.

XML is also quite verbose, especially if you try to give tags
meaningful (and thereby more easily readable) names. And you have to
write the end tag every time, even if the name is exactly the same as
the start tag. An assisting editor could help you here, filling in
the tag name when you start a closing tag.

If your editor doesn't understand the document definition (defined by,
e.g., a DTD or XML Schema), then it can't help you correct typos. If it
also assists you in filling in end tags, you will even create correctly
matched tags when you misspell a start tag.

All this means that writing XML soon becomes mind-numbingly boring.
The combination of requiring attention to detail *and* being boring
is the classic recipe for making stupid errors.

Possible? Absolutely. Hard? Hardly! Slow and error-prone? Check!

So, to make writing XML even remotely pleasant, the editor should
at least be able to check the syntax against a document definition.
Syntax highlighting is also important, since black-on-white XML
isn't very readable either.

XML is the lowest common denominator of formats. It is extremely well
suited for machine reading, and just as unsuitable for human
reading. Sure, it's easy to learn the basic structure, since
everything is really the same. It's also very hard to read any larger
document *because* it all looks the same. Having to pick out the tag
names, i.e., words, in a block of text, i.e., other words, begs for
syntax highlighting, at the least. HTML was actually better, since it
allowed you to write your tags with capital letters if you so
preferred.


But I guess the verbosity is really the killer for me. The boredom.

It's not like parsers are hard to build these days. Dumbing everything
down to XML makes sense for machine-to-machine communication, but for
humans writing information to computers, a language specific to the
application domain can make everything so much easier that it is
ridiculous.


(The development of syntax:

Math:
x |-> x * 2

Scheme:
(lambda (x) (* x 2))

MathML:
<lambda>
  <bvar><ci> x </ci></bvar>
  <apply>
    <times/>
    <ci> x </ci>
    <cn> 2 </cn>
  </apply>
</lambda>

:)

/L
 

John Bokma

Thomas 'PointedEars' Lahn said:

[a] Don't post childish plonk messages
I consider kicking a posting out of a group extremely rude, so if
there was anyone giving a reason for plonking, it was you, not me.

Maybe you're new to Usenet, but breaking a thread into pieces is stupid.
The discussion is here, and although it's partially off-topic, it doesn't
make sense to run a part of it in a group you picked while the other part
goes on here.

Get a life.
 
