VK said:
Yeah, with the first step and right in the middle
For Firefox DOM Inspector itself you can change the view option by
going in DOM Inspector window to View > Show Whitespace Nodes. This
affects the DOM Inspector display *only*, no DOM Tree changes.
To deal with the ... observed phenomenon ... programwise see for
instance:
<http://www.codingforums.com/showthread.php?t=7028>
<http://developer.mozilla.org/en/docs/Whitespace_in_the_DOM>
<http://developer.mozilla.org/en/docs/Talk:Whitespace_in_the_DOM>
That's some very interesting stuff! However, I think I will just leave
the 'phantom nodes' be for now, and learn to work with them.
After reading the articles you suggested, the thought of attempting to
remove the phantom nodes terrified me somewhat. Perhaps it WILL work
effectively, but it looks to me like there is always something that
just 'might' go wrong, so you may not get the results you'd expect
(for example, you could accidentally concatenate two words without the
whitespace between them). Also, removing them takes time and
resources, and apparently IE renders the phantom nodes differently
from other browsers (no surprises there), so it would make more sense
to place a small abstraction layer over properties such as firstChild,
nextSibling, childNodes etc.: just check whether the child is
whitespace, and if it is, return the next child instead (skipping
again if that happens to be whitespace, too).
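A minimal sketch of such a layer (the function names here are my own
invention, not any standard API); it treats any text node (nodeType 3)
containing only whitespace as a phantom:

```javascript
// A node counts as a "phantom" when it is a text node (nodeType 3)
// whose value is nothing but whitespace.
function isPhantom(node) {
  return node.nodeType === 3 && /^\s*$/.test(node.nodeValue);
}

// Like node.nextSibling, but skips phantom whitespace nodes.
function realNextSibling(node) {
  var sibling = node.nextSibling;
  while (sibling && isPhantom(sibling)) {
    sibling = sibling.nextSibling;
  }
  return sibling; // null when nothing but whitespace remains
}

// Like node.firstChild, but skips leading phantom whitespace nodes.
function realFirstChild(node) {
  var child = node.firstChild;
  return (child && isPhantom(child)) ? realNextSibling(child) : child;
}
```

Because the helpers only read nodeType, nodeValue, firstChild and
nextSibling, they should behave the same in IE and Gecko.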
For childNodes, we could have a method that returns an array of
elements and text nodes without the phantom spaces. How will we know
whether a node is a phantom node? Of this I am not entirely sure. From
observation, phantom nodes tend to be whitespace-only text nodes that
follow elements of nodeType '1'. Although there is a chance that a
whitespace node we assume is a phantom node actually is not, it
doesn't matter: it's very unlikely that we will need to do anything
with the whitespace even if it's not a phantom node, so we can work
with the text or element that follows. As we aren't removing anything
from the page, the formatting will remain the same.
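As a sketch (again with an invented name), such a childNodes
replacement could look like this:

```javascript
// Returns the children of a node as a plain array, with whitespace-only
// text nodes (the phantom nodes) filtered out. Elements and real text
// nodes keep their original order; nothing is removed from the page.
function realChildNodes(node) {
  var result = [];
  for (var i = 0; i < node.childNodes.length; i++) {
    var child = node.childNodes[i];
    var isPhantom = child.nodeType === 3 && /^\s*$/.test(child.nodeValue);
    if (!isPhantom) {
      result.push(child);
    }
  }
  return result;
}
```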
We can also use the normalize() method, which merges adjacent text
nodes into a single node. It is worth being clear about what it does
and doesn't do, though: given an example like this:
<span>
This is
some
text
</span>
the browser already renders the content as
This is some text
because HTML rendering collapses runs of whitespace. normalize()
doesn't change that; it only ensures the text is held in one text node
rather than several (useful after a script has split it up), and it
does not remove the whitespace-only phantom nodes that sit between
elements.
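To make the merging behaviour concrete, here is a hand-rolled
imitation of what normalize() does to a run of text nodes, written
against plain {nodeType, nodeValue} objects rather than a live DOM so
it can run anywhere (the function name is mine):

```javascript
// Imitates the text-node-merging part of Node.normalize() on a plain
// array of {nodeType, nodeValue} objects: runs of adjacent text nodes
// (nodeType 3) are concatenated into a single node. Note that, like
// the real normalize(), nothing here collapses or trims the
// whitespace itself.
function mergeAdjacentTextNodes(children) {
  var merged = [];
  for (var i = 0; i < children.length; i++) {
    var node = children[i];
    var last = merged[merged.length - 1];
    if (node.nodeType === 3 && last && last.nodeType === 3) {
      last.nodeValue += node.nodeValue; // extend the previous text node
    } else {
      merged.push({ nodeType: node.nodeType, nodeValue: node.nodeValue });
    }
  }
  return merged;
}
```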
I think normalize() is a useful method to utilise, but I seem to
remember reading that IE doesn't support it, so I guess it's just a
case of going back to basics, doing everything with cross-platform
(XP) scripting, and not relying on methods that aren't supported by
all browsers.
I am not saying my method is correct; however, I noticed that it
wasn't posted on any of the forums you suggested, and, as you rightly
said, there may never be agreement as to whose method is best.
Granted, sometimes one method is better than the others, and at other
times a different approach is more effective, but in my humble opinion
removing the phantom nodes is a recipe for disaster! I think we should
all just work with them.
I think it's the programmer's responsibility to ensure that their HTML
comes with phantom nodes in all of the right places, or none at all.
If they are sending both the HTML and the JavaScript to the user,
there is no reason for the script not to be compatible with the
markup, and vice versa. Having no phantom nodes at all would be the
preference, as it would save page loading time (marginally per page,
but cumulatively significant on a busy server). I think my method
(which probably isn't unique) should cover both scenarios, and of
course be helpful to people who are writing JavaScript that works
independently of the page (i.e. a Firefox extension).
Thanks again VK.