If I wanted to never use innerHTML, what else would I use?
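
(A minimal sketch of the usual alternative: build nodes with the DOM API
instead of assigning markup strings. The element names, the id and the
variable below are only placeholders.)

// Instead of: container.innerHTML = "<ul><li>" + text + "</li></ul>";
// build the same structure with DOM calls. Nothing is parsed as markup,
// so "<" or "&" in the data cannot break the page.
const text = "any user-supplied string";              // placeholder data
const container = document.getElementById("output");  // placeholder id

const list = document.createElement("ul");
const item = document.createElement("li");
item.appendChild(document.createTextNode(text));  // stored as text, not parsed
list.appendChild(item);

container.replaceChildren(list);  // older code: clear children, then appendChild(list)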


John Bokma

John W. Kennedy said:
No, I think your mom is waiting for websites coded by the competent.

We're all human, after all. I am afraid that making sure that each and
every piece of code coming out of a program is 100% well formed is
simply not feasible. (It sounds like solving the halting problem to me.)
A verbal quibble to pass off Microsoft's vicious behavior as
acceptable, and you know it.

IMO, the W3C is wrong to waste time on a "standard" that will never make
it (I call *that* incompetence). They *should* focus on improving HTML,
not on dreaming up something that no webmaster is ever going to use,
*unless* UAs are able to handle non-well-formed documents.

And I am sure UAs are not going to be made to choke on each and every
document that's not well-formed, so there goes the XHTML dream.

Computers are here, IMNSHO, to make life easier. Giving no output because
someone made a typing error is just plain crazy in an end-user situation.

It's like selling an email client which rejects all email with a spelling
mistake.
 

John Bokma

Lasse Reichstein Nielsen said:
So, to make writing XML even remotely pleasant, the editor should
at least be able to check the syntax against a document definition.
Syntax highlighting is also important, since black-on-white XML
isn't very readable either.

Well written! I maintain my site in XML, and am already looking for a way
to hide the XML details internally in an editor I am about to write.
(The development of syntax:

Math:
x |-> x * 2

Scheme:
(lambda (x) (* x 2))

MathML:
<lambda>
<bvar><ci> x </ci></bvar>
<apply>
<times/>
<ci> x </ci>
<cn> 2 </cn>
</apply>
</lambda>

:)

Good one, and quite true. Gazing at XML for hours, daily, really does make
you wonder.
 

John W. Kennedy

John said:
We're all human, after all. I am afraid that making sure that each and
every piece of code coming out of a program is 100% well formed is
simply not feasible. (It sounds like solving the halting problem to me.)

Don't be ridiculous; making XML well-formed is trivial. Most languages
with rapid release cycles already have bulletproof XML support either
built-in or in well-known public archives.
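
(To illustrate the claim, a minimal sketch using the DOM and serializer
that browsers ship; any decent XML library behaves much the same way.
The element names here are invented.)

// Build the document as a tree; escaping and closing tags are the
// serializer's problem, so the output can only ever be well-formed.
const doc = document.implementation.createDocument(null, "catalog", null);
const item = doc.createElement("item");
item.setAttribute("note", "3 < 5 & rising");           // escaped on output
item.appendChild(doc.createTextNode("Fish & chips"));
doc.documentElement.appendChild(item);

console.log(new XMLSerializer().serializeToString(doc));
// Roughly:
// <catalog><item note="3 &lt; 5 &amp; rising">Fish &amp; chips</item></catalog>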
IMO, the W3C is wrong to waste time on a "standard" that will never make
it (I call *that* incompetence). They *should* focus on improving HTML,
not on dreaming up something that no webmaster is ever going to use,
*unless* UAs are able to handle non-well-formed documents.

Because webmasters are too lazy to do their job right? Screw 'em!

I just had to spend six hours of my life hand-editing someone's tag soup
into a usable form.

--
John W. Kennedy
"But now is a new thing which is very old--
that the rich make themselves richer and not poorer,
which is the true Gospel, for the poor's sake."
-- Charles Williams. "Judgement at Chelmsford"
 

VK

John said:
I just had to spend six hours of my life hand-editing someone's tag soup
into a usable form.

The soup is in the head, *not* in the standard.

One can make a soup-less HTML delivery system.

You think that XML implies soup-less thinking and data management?
How much would you bet against the practical jokes in my collection?
 

John Bokma

Ian Collins said:
John said:
Ian Collins <[email protected]> wrote:
[..]
I was always under the impression that the HTML DTD could be extended.
There's a big difference from extending and creating something new.
When you extend, you still have all the baggage from the base document
and you can't (I don't think) change existing entities.

You can point to your own DTD. The major question is: is each recent
browser going to honor that?
 

John Bokma

John W. Kennedy said:
Don't be ridiculous; making XML well-formed is trivial.

*sigh*, yes, handcoding and verifying it is.

But you have to think about XML generated on the fly from a database. If
it's trivial to make that well-formed, maybe you can explain why so much
software has so many bugs?

For example, imagine a program that, while fetching from a database and
spitting out XML, suddenly reports:

"Error: value of x should be > 10 and < 23"

That would result in a nice white page now, wouldn't it? And even if you
encode > and <, you still risk not writing out closing elements.
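
(One common defence, sketched below; fetchRows() and respond() are
imaginary placeholders. Build and escape the whole document in memory
and only send it once it is complete, so a failure halfway through
yields an error response instead of half a page.)

// Escape the characters XML cares about in text and attribute values.
function escapeXml(s) {
  return String(s)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&apos;");
}

function rowsToXml(rows) {
  const parts = ["<rows>"];
  for (const row of rows) {
    if (row.x <= 10 || row.x >= 23) {
      // Throws mid-loop, but nothing has been sent to the visitor yet.
      throw new Error("value of x should be > 10 and < 23");
    }
    parts.push('<row x="' + escapeXml(row.x) + '">' + escapeXml(row.label) + "</row>");
  }
  parts.push("</rows>");
  return parts.join("");  // only now is there anything worth sending
}

try {
  respond(200, rowsToXml(fetchRows()));  // both functions are placeholders
} catch (err) {
  respond(500, "Could not build the page: " + err.message);
}
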
Most languages
with rapid release cycles already have bulletproof XML support either
built-in or in well-known public archives.

Uhm, I use Perl, and I certainly wouldn't call its XML support
bulletproof. Maybe you're talking about a different language?
Because webmasters are too lazy to do their job right? Screw 'em!

LOL, yeah

"Lift Your Skinny Fists Like Antennas to Heaven" [1]
I just had to spend six hours of my life hand-editing someone's tag
soup into a usable form.

Now add up all the hours people have to spend cleaning up their sites
and wonder: should a small bunch of people at the W3C waste that much
time and money because they think that a parser should be more strict?

Finally, what's the point of sites showing white pages? A better option
would be to have a browser report parsing issues back to the webmaster
*and* do its best to render the page.

My own site was down for a few hours some time ago. You think people
always report such problems? I was lucky that a friend visited the site.
So what's the point of strict parsers? They don't improve code quality.


[1] Godspeed You Black Emperor! album title
 

John Bokma

Ian Collins said:
At risk of drifting back on topic, for sites which rely on dynamic
content generated in the user agent, an accurate DOM representation is
required. So the markup has to be well formed.

Now this can (and currently is, to varying degrees) be done by the user
agent (look at the innerHTML of some malformed HTML) or by an
application like HTML Tidy, but I'd consider myself a fool if I
generated bad markup and then wondered why the page's dynamic content
didn't work.

So getting back to your last sentence, well formed markup is the first
step to code quality. You can't build on shaky foundations.
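
(That "look at the innerHTML of some malformed HTML" suggestion is easy
to try; a small sketch, with invented markup:)

// Feed the parser some tag soup and read back what it actually built:
// scripts run against the repaired tree, not the markup as written.
const probe = document.createElement("div");
probe.innerHTML = "<table><tr><td>cell</table><p>text <b>bold</p>";
console.log(probe.innerHTML);
// Typically something like:
// <table><tbody><tr><td>cell</td></tr></tbody></table><p>text <b>bold</b></p>
// so any code expecting the structure as written is out of luck.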

Which referred to strict parsers in a user agent. The web works as it is
now, why suddenly spin it back 10 years? Just because some people
*think* that a strict UA suddenly will improve the web? Those people
clearly have little serious programming experience. It's not the rules
that generate quality, it's the person doing the work.

I am not saying that a developer shouldn't use tools to catch mistakes,
but IMNSHO it's crazy to think that by giving someone tools, he/she
suddenly becomes an experienced coder. (I have seen people work around
compiler warnings instead of fixing the issue at hand.)

Again: a well-formed document doesn't mean that it's well formed from a
human point of view.

And IMNSHO, visitors shouldn't be bothered with such issues.
 

ceplma

John said:
The major valid argument is that the parser is too strict to be
practically useful in the real world.

Which is THE good thing about XHTML and why I would like it to be much
more common (or at least for its authoring to be better) -- XHTML being
XML allows me to do all kinds of wild things with it, like processing
with XSLT etc. (Yes, I know I could quite often process HTML with XSLT,
but it is always something close to playing in Las Vegas.)
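
(For what that kind of processing can look like in a browser, a rough
sketch; the file names below are placeholders.)

// Parse an XHTML page and a stylesheet as XML, then transform.
async function transform() {
  const parse = async (url) =>
    new DOMParser().parseFromString(await (await fetch(url)).text(),
                                    "application/xml");

  const source = await parse("page.xhtml");         // placeholder names
  const sheet  = await parse("to-plain-html.xsl");

  const processor = new XSLTProcessor();
  processor.importStylesheet(sheet);

  // This only works because the source parsed as XML in the first place;
  // tag soup would have come back as a parsererror document instead.
  document.body.appendChild(processor.transformToFragment(source, document));
}

transform();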

Matej
 

John Bokma

ceplma said:
Which is THE good thing about XHTML and why I would like it to be much
more common (or at least for its authoring to be better) -- XHTML being
XML allows me to do all kinds of wild things with it, like processing
with XSLT etc. (Yes, I know I could quite often process HTML with XSLT,
but it is always something close to playing in Las Vegas.)

Nobody is going to stop you from doing the real work in XML and
converting it, when needed, into HTML. That's what I do.

But I don't feel the need to move the debugging process to my visitors,
if they would ever report back to me at all (which the majority probably
won't).
 

VK

John said:
Which referred to strict parsers in a user agent. The web works as it is
now, why suddenly spin it back 10 years? Just because some people
*think* that a strict UA suddenly will improve the web? Those people
clearly have little serious programming experience. It's not the rules
that generate quality, it's the person doing the work.

Right. In addition (I missed it in my first post) I would like to say:

If anyone is really starving for a "non-forgiving" browser, then it
already exists. Simply go to <http://www.w3.org/Amaya/> and get your
copy of Amaya.

<quote>
The main motivation for developing Amaya was to provide a framework
that can integrate as many W3C technologies as possible. It is used to
demonstrate these technologies in action while taking advantage of
their combination in a single, consistent environment.
</quote>

After several days... sorry... hours... sorry... I would say minutes...
of browsing one can switch back to the regular "error-tolerant"
browser.

It would be great to hear one's *sincere* feedback. Does one still
dream about a "stricter UA", or was one magically cured? :)
 
