Output VALUE of INPUT textfield using document.write


Stumped and Confused

Hello,

I really, really, need some help here - I've spent hours trying to find a
solution.

In a nutshell, I'm trying to have a user input a value in a form's
textfield. The value should then be assigned to a variable and output
using document.write.

(Note, there is no submit button or other form elements. Basically
whatever the user places in the INPUT textfield, I want echoed elsewhere
on the webpage.)

---

Here's what hopefully should happen:

[ myName ] (web visitor types value in Input Text Field)

Welcome myName! (value of text field appears in the web page using
document.write.)

---

What I have so far:

I know I can declare a value and use document.write.

var myName = "Tom";

document.write(myName);

But how do I assign the VALUE of the INPUT textfield to a variable name
and have it output using document.write? Please help - I've spent hours on
this!

Thanks in advance.
Stumped & Confused.
 

Yann-Erwan Perio

Stumped said:
In a nutshell, I'm trying to have a user input a value in a form's
textfield. The value should then be assigned to a variable and output
using document.write.

document.write will open a new document, clearing the current document,
which is probably not what you want. The dynWrite function from the FAQ
should give you better results:)

<URL:http://jibbering.com/faq/#FAQ4_15>
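
Roughly, the idea is a helper that rewrites the content of a target
element rather than the whole document. A minimal sketch of that idea,
assuming the element exists and the browser exposes innerHTML (the FAQ's
dynWrite is more thorough than this):

function dynWrite(id, html) {
  var el;
  if (document.getElementById && (el = document.getElementById(id)) &&
      typeof el.innerHTML != "undefined") {
    el.innerHTML = html; // replace the element's content, not the document
    return true;
  }
  return false; // caller can degrade cleanly when unsupported
}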

However if you just want to write text and not HTML, it's probably
better to use standard DOM methods.


<form action="#">
What's your name&nbsp;?
<input>
<input type="button" value=":)"
onclick="hello(this.form.elements[0].value,'foo')">
</form>
<span id="foo"></span>

<script type="text/javascript">
function hello(what, where) {
  var target;
  if (document.getElementById &&
      document.createTextNode &&
      document.body &&
      document.body.appendChild &&
      typeof document.body.firstChild != "undefined"
     ) { // all the methods we want to use are supported

    target = document.getElementById(where);
    if (target) {
      if (target.firstChild == null) {
        // first time calling: add the text node
        target.appendChild(document.createTextNode(""));
      }
      if (/\S/.test(what)) { // not empty
        target.firstChild.nodeValue = "Hello, " + what;
      }
    }
  }
}
</script>
 

PDannyD

Stumped and Confused said:
Hello,

I really, really, need some help here - I've spent hours trying to
find a solution.

In a nutshell, I'm trying to have a user input a value in a form's
textfield. The value should then be assigned to a variable and output
using document.write.

I had a conversion script which may be coerced into doing what you want.
Have to wait until Monday though because it's on the PC at work.

It's bound to be crap because it was one of the first useable scripts I
wrote but if you've not found a solution by Monday I'll post the relevant
bits here.
 

Stumped and Confused

Thank you - although I have found a solution (thank you, Yann-Erwan
Perio), I'll be interested in any possible alternative solutions -
especially, if it helps my learning.

Cheers and thank you!
 

Michael Winter

On Fri, 17 Sep 2004 22:00:58 +0200, Yann-Erwan Perio wrote:

[snip]
<form action="#">

A quick aside:

I know that IE is incapable of understanding that in a link href="" refers
to the current document[1], but it does treat action="" properly (in IE 6
- don't know about earlier versions).

[snip]
if(document.getElementById &&
document.createTextNode &&
document.body &&
document.body.appendChild &&
typeof document.body.firstChild!="undefined"
){ //all the methods we want to use are supported

I did bring this up once before a long time ago, but I forget my wording,
the thread (I think I hijacked one), and the result, so I'll ask again.

Is it safe to assume that because a method, like appendChild, is supported
on one type of node, it will be supported on all nodes? Logic would
suggest that such an assumption is flawed, but is it in practice? Recent
major W3C DOM-supporting browsers should present no problems, but not
having experience with a wide cross-section of user agents and versions,
I'm not absolutely certain.

In the script that you presented, it would be trivial to rewrite it to be
more cautious, but your displayed approach and the one I would use become
disparate when something like the iteration of a collection, and the
application of methods to the contained nodes, is required.
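
For example, a quick (untested) sketch of the more cautious route,
re-checking the method on each node in the collection before it is used:

var spans = document.getElementsByTagName ?
    document.getElementsByTagName("span") : null;
if (spans && document.createTextNode) {
  for (var i = 0; i < spans.length; i++) {
    // test the very node that is about to be acted on
    if (spans[i] && spans[i].appendChild) {
      spans[i].appendChild(document.createTextNode("*"));
    }
  }
}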

[snip]

Mike


[1] I was quite shocked when learning that.
 

Yann-Erwan Perio

Michael said:
Is it safe to assume that because a method, like appendChild, is
supported on one type of node, that it will be supported on all nodes.
Logic would suggest that such an assumption is flawed, but is it in
practice? Recent major W3C DOM-supporting browsers should present no
problems, but not having experience with a wide cross-section of user
agents and versions, I'm not absolutely certain.

I don't have that much experience myself with a wide range of browsers;
I suppose that Jim, Martin or Richard could tell us more about this.

At first sight, your arguments seem very convincing. It is indeed
trivial to change the detection a little so that it tests whether
the methods needed are supported on the object I want to use, and not on
another object.

History demonstrates that object models have differed across user
agents, and that the same model could be implemented very differently
across browsers. Moreover, numerous agents have been created, which
makes it nearly impossible to test a piece of code on every possible
agent (and probably not acceptable from an economic point of view).

Faced with this issue of unknown environment, testing features on the
very object to be used makes sense, and is actually the safest option.

This approach isn't without problems, though. Testing extensively the
features on each object can render the code unreadable, and break the
business flow of the script. This is the same problem as with localized
exceptions handled by try/catch constructs; it forces you to handle the
exception at many levels, probably too many to have something neat. This
is quite contradictory if you consider that the script will have three
possible states: run fine, run in a degraded mode, or don't run (and
degrade fine).

Another problem is that it can prevent code optimisation if you leave
the tests in place in each situation, not redefining the methods (see
the Russian Doll pattern;-)).

In addition, there's also the cost introduced by such techniques, less
technical but as important; it requires more time, more attention, more
experience etc., so definitely costs more (I don't know many people who
could understand and write advanced javascript without problems - in
fact I know of none apart from in clj - but that's not my job either).

I'm therefore less and less convinced of the approach of
feature-detecting as "much" as possible. To me the best approach is to
do something in between, first performing a rigorous initialisation,
testing for all methods on a sample object (the document.body in the
example code), and then moving on to the business logic, without testing
more than required (object existence).

The years up to the version-7 browsers were like a "bubbling cauldron of
ideas"[1], where specifications were designed, tested, and thrown away.
However things differ now, there are existing standards and models which
are recognized among vendors and implemented in the same way. We've
entered a new phase, where things are not written from scratch but built
on solid ground - therefore evolving less quickly (sadly). So in the
future it is unlikely to see diverging models (they wouldn't be profitable).

This means that "DOM-conformant" agents should/will really be
conformant. The strategy of using another object to test for methods
support appears to be valid, since these two objects would implement the
same DOM interfaces (a node is a node); you might indeed have methods
working on an object and not the other, but that'll be more rare with
time (and you could as well have a problem the other way around (reading
a property making the script fail, like mimeTypes in not-so-old Operas)).

Eventually, it comes back to the definition of detection; what model is
supported by the user agent, and to which extent. In the end, only
experience will draw the line; the ability to adopt the two-fold model
requires a good knowledge of the whole history (browsers, DOMs,
implementations) so that the testing phase really covers all major
potential issues; but when you have it, I believe that the strategy of
testing "everything" is simply not worth the effort anymore; I'm more
inclined to do iterative development, doing 95% of the detection at
first (giving the degraded mode to non-supporting agents), and then
correcting if anything wrong comes by (a hole in my experience). There's
a cost of non-quality which, I've come to realise, is simply too high.


It's getting late here, so I hope all of this made sense (wasn't too
disappointing/boring) and that I have, if not answered your question, at
least raised some, without drifting too far away.


Regards,
Yep.
 

PDannyD

Stumped and Confused said:
Thank you - although I have found a solution (thank you, Yann-Erwan
Perio), I'll be interested in any possible alternative solutions -
especially, if it helps my learning.

Cheers and thank you!

Found it on a disk I'd brought home.
It's one of my earliest efforts. It works but there isn't any error checking
and there's no DOCTYPE. I think the <!-- --> comments are no longer
necessary unless you have a very old browser but it doesn't seem to hurt to
leave them in.

Full working source code cut-n-pasted below.

=========================

<html>
<head>
<title>Conversion Calculator</title>
<script type="text/javascript">
<!--
function places(value)
{
  value = Math.round(value * 100) / 100;
  return value;
}

function convert(measure)
{
  var x;
  switch (measure)
  {
    case "mm" :
      x = document.myform.mm.value;
      document.myform.inch.value = places(x / 25.4);
      document.myform.feet.value = places((x / 25.4) / 12);
      document.myform.cm.value = places(x / 10);
      document.myform.metres.value = places(x / 1000);
      break;

    case "inch" :
      x = document.myform.inch.value;
      document.myform.mm.value = places(x * 25.4);
      document.myform.feet.value = places(x / 12);
      document.myform.cm.value = places(x * 2.54);
      document.myform.metres.value = places(x * .0254);
      break;

    case "metres" :
      x = document.myform.metres.value;
      document.myform.mm.value = places(x * 1000);
      document.myform.cm.value = places(x * 100);
      document.myform.inch.value = places(x / .0254);
      document.myform.feet.value = places((x / .0254) / 12);
      break;

    case "feet" :
      x = document.myform.feet.value;
      document.myform.inch.value = places(x * 12);
      document.myform.mm.value = places(x * 12 * 25.4);
      document.myform.cm.value = places(x * 12 * 2.54);
      document.myform.metres.value = places(x * 12 * .0254);
      break;

    case "cm" :
      x = document.myform.cm.value;
      document.myform.mm.value = places(x * 10);
      document.myform.metres.value = places(x / 100);
      document.myform.inch.value = places(x / 2.54);
      document.myform.feet.value = places((x / 2.54) / 12);
      break;

    default:
  }
}
-->
</script>
<noscript>
<p>You need to enable JavaScript to use this utility.</p>
</noscript>

</head>
<body>

<p>
<b>Insert a number in any input field and<br>
it will be automatically converted into all the others.</b>
<br>
This requires Javascript
</p>

<form name="myform">
<input name="mm" onkeyup="convert('mm')"> Millimetres<br>
<br>
<input name="cm" onkeyup="convert('cm')"> Centimetres<br>
<br>
<input name="metres" onkeyup="convert('metres')"> Metres<br>
<br>
<input name="inch" onkeyup="convert('inch')"> Inches<br>
<br>
<input name="feet" onkeyup="convert('feet')"> Feet<br>
<br>
<input type="reset" value="Clear">
</form>

</body>
</html>
 

Michael Winter

Michael said:
Is it safe to assume that because a method, like appendChild, is
supported on one type of node, that it will be supported on all nodes.
Logic would suggest that such an assumption is flawed, but is it in
practice? Recent major W3C DOM-supporting browsers should present no
problems, but not having experience with a wide cross-section of user
agents and versions, I'm not absolutely certain.
[snip]

At first sight, your arguments seem very convincing.

And your counter is just as persuasive. I'll do my best to respond.

[snip]
Faced with this issue of unknown environment, testing features on the
very object to be used makes sense, and is actually the safest option.

Indeed, which is why I favour a full testing strategy, but...
This approach isn't without problems, though. Testing extensively the
features on each object can render the code unreadable, and break the
business flow of the script. This is the same problem as with localized
exceptions handled by try/catch constructs; it forces you to handle the
exception at many levels, probably too many to have something neat. This
is quite contradictory if you consider that the script will have three
possible states: run fine, run in a degraded mode, or don't run (and
degrade fine).

...this is certainly a potential stumbling block. That said, this problem has
always existed in programming. Perhaps with full support for exception
handling, it would be easier to create degradable scripts by moving
decisions regarding fallback to a more abstract level. I suppose that a
detailed example would be needed to investigate that properly, but
unfortunately, the continued use of wish-they-were-dead browsers like NN4
would scuttle any workable solution, should support for them be required.
Another problem is that it can prevent code optimisation if you leave
the tests in place in each situation, not redefining the methods (see
the Russian Doll pattern;-)).

I still haven't read that thread, yet. I started it, but became distracted.
In addition, there's also the cost introduced by such techniques, less
technical but as important; it requires more time, more attention, more
experience etc., so definitely costs more (I don't know many people who
could understand and write advanced javascript without problems - in
fact I know of none apart from in clj - but that's not my job either).

But if you're familiar with such a cautious approach, does it really add
extra cost? People often argue that writing "good" code takes extra time,
but that is simply because they aren't used to writing it. Certainly, a
very complex script will add overhead if there are many possible fallback
routes to cover, but would such a situation arise in your average web site?

There is also the factor of education. It's fine for those versed in
cross-browser scripting to say, "I don't have to worry about testing for
that [whatever "that" may be], because I know it will be there", but it
this piecemeal testing something that should be passed on to others?
Without experience, how can they judge what is needed and what isn't? By
our own admissions, neither of us are fully qualified to make such a
determination with any degree of authority.
I'm therefore less and less convinced of the approach of
feature-detecting as "much" as possible. To me the best approach is to
do something in between, first performing a rigorous initialisation,
testing for all methods on a sample object (the document.body in the
example code), and then moving on to the business logic, without testing
more than required (object existence).

There are merits to that, but I'm still not certain it's something to be
adopted at the moment.
The years up to the version-7 browsers were like a "bubbling cauldron of
ideas"[1], where specifications were designed, tested, and thrown away.
However things differ now, there are existing standards and models which
are recognized among vendors and implemented in the same way. We've
entered a new phase, where things are not written from scratch but built
on solid ground - therefore evolving less quickly (sadly). So in the
future it is unlikely to see diverging models (they wouldn't be
profitable).

[Rant]

And we're back to the original issue: the specifications aren't implemented
consistently. Mozilla claims to fully comply with the various DOM
specifications through the hasFeature method, something which is reserved
for the truly compliant, but some of its bugs completely break
specification. Opera has very good support for the DOM, but it misses some
of the lesser-used, but basic, methods and properties. And let's not
forget IE (Oh, how I really wish we could). Until implementations are
complete and fully adopted by end-users - something that will take many
years - we're stuck with relying on only one thing: feature detection, and
due to the incomplete support, is any testing that is less than
comprehensive reliable?

If vendors decided for once that they'd wait until they finished
development before releasing a product, we might simply be writing:

var imp;
if ((imp = document.implementation) && imp.hasFeature
    && imp.hasFeature('HTML', '2.0'))
{
  // Yay! Full HTML DOM support.
}

when looking for the various DOM methods. However, the typical rush "not
to be left behind" has littered the landscape with little, if anything,
that can truly pass the test above.

[/Rant]
This means that "DOM-conformant" agents should/will really be conformant.

In time, but not now.
The strategy of using another object to test for methods support appears
to be valid, since these two objects would implement the same DOM
interfaces (a node is a node); you might indeed have methods working on
an object and not the other, but that'll be more rare with time (and you
could as well have a problem the other way around (reading a property
making the script fail, like mimeTypes in not-so-old Operas)).

Yes, you would hope so.
Eventually, it comes back to the definition of detection; what model is
supported by the user agent, and to which extent. In the end, only
experience will draw the line; the ability to adopt the two-fold model
requires a good knowledge of the whole history (browsers, DOMs,
implementations) so that the testing phase really covers all major
potential issues; but when you have it, I believe that the strategy of
testing "everything" is simply not worth the effort anymore; I'm more
inclined to do iterative development, doing 95% of the detection at
first (giving the degraded mode to non-supporting agents), and then
correcting if anything wrong comes by (a hole in my experience). There's
a cost of non-quality which, I've come to realise, is simply too high.

The only issue here is, how can you tell if you have omitted something? It
relies on you finding it yourself or it being reported, but neither is
likely to happen. You can't possibly test with all user agents, and many
visitors would ever bother reporting something, as they'd never grasp what
was wrong.

I appreciate your position and I would adopt it, but only if it can be
proved reliable. On a website, it would be a simple matter of updating
code, but posts to this group can't be so easily rectified.
It's getting late here, so I hope all of this made sense (wasn't too
disappointing/boring) and that I have, if not answered your question, at
least raised some, without drifting too far away.

You certainly have made good points. I'm curious to know what the other
regulars here have to say on the matter.

Apologies for the brief rant,
Mike
 

Richard Cornford

Yann-Erwan Perio wrote:
I don't have that much experience myself with a wide
range of browsers; I suppose that Jim, Martin or
Richard could tell us more about this.

We start with two well established principles relating to browser
scripting:-

1. Making assumptions about the browser environment is
extremely risky.
2. Feature detecting tests should be performed in a way
that is as closely related to the problem as possible
(preferably a direct one-to-one relationship).

We also have the realisation that an overly dogmatic application of
those principles in all circumstances will potentially stand in the way
of being able to create viable scripts.

There may be cases where an assumption that is not strictly valid, but
for which no example of a contrary environment has been identified,
facilitates, for example, controlled clean degradation where it might
otherwise be problematic. Such as the assumption that a browser that
dynamically supports the switching of the CSS - display - property will
exhibit a named property of - style - objects that is typeof 'string'.
Allowing the assumption that if the style object has no such property
then the browser is not going to respond to attempts to set - display -
to 'none'.

Personally I am yet to see a browser that could dynamically switch the
display of an element via the - display - property where - typeof
styleObj.display == 'string' - is not true, and also a non-dynamic
browser where it is not false (assuming a normalised - style - object
for Net 4, etc). However, it remains an assumption.
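
As a sketch of how that assumption might be used in practice (illustrative
function names only; the Netscape 4 style-object normalisation is left out):

function canToggleDisplay(el) {
  // infer dynamic support for - display - from the property's type
  return !!(el && el.style && typeof el.style.display == "string");
}

function hideElement(el) {
  if (canToggleDisplay(el)) {
    el.style.display = "none";
  } // else degrade cleanly: the element simply stays visible
}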

Trying to get feature detection as close to the problem as possible
could imply testing everything each and every time it is used. But code
that attempts that is burdening itself heavily, and may even end up
doing more testing than acting. When you are writing DHTML to be as
fluid as is achievable using the combination of HTML and CSS there is a
great deal that may need to be continually examined in terms of the
sizes and positions of elements (as users re-size their browser windows,
change the font-size settings, etc, and can do so at any moment).

Adding, on top of that requirement, full feature detection on every
action stands a chance of rendering the result non-viable (slowing the
script to the point where it is unacceptable to its users). That leaves,
as the only viable menu scripts, for example, the ones that fall apart
whenever the font size is changed, or the ones that encourage page authors
to attempt to pin down the dynamic aspects of web pages so the menus will
not disintegrate.

The necessary, and inescapable, aspect of feature detection is that if a
feature is to be used at all it should be tested to verify that it is
available in the environment prior to its use. But prior to its use does
not necessarily mean prior to each and every use. While it is an
assumption that the environment of any given browser will not
significantly change while a script is executing, it is not that
unreasonable an assumption.

One strategy for reducing the level of feature detection testing going
on while a script is running is to give it a single "gateway" test that
is executed during an initialisation phase. Testing for the features
that the script will be using and then, if the test is passed, using
those features without additional verification. This is based on the
assumption that the environment will not change while the script is
running.
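
A rough sketch of that shape (the features tested are just those from the
earlier example; the names are illustrative, not a recommendation):

// "gateway" test, executed once during an initialisation phase
var domOK = !!(document.getElementById &&
               document.createTextNode &&
               document.body &&
               document.body.appendChild &&
               typeof document.body.firstChild != "undefined");

function appendNote(id, text) {
  if (!domOK) { return; }               // clean degradation
  var el = document.getElementById(id); // no per-call feature re-testing
  if (el) {
    el.appendChild(document.createTextNode(text));
  }
}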

Unfortunately some aspects of tests performed during an initialisation
phase could not be as close to the problem as to qualify as a one-to-one
relationship. Testing DOM elements being one example. Instead of
examining the properties of some unknown element that is actually going
to be used by the script it is sometimes necessary to test the
corresponding properties of an element that is known to exist at that
point. The - document.body - element being a good candidate as it is
virtually guaranteed to exist (once the opening tag has been passed or
implied).

So the question is: what is it reasonable to infer from an element such
as - document.body - about the nature of other elements in the DOM? Such
an inference will be based on an assumption and so should be subject to
careful consideration.

Mike's question is really about the test made in the posted code.
Specifically - document.body.appendChild - and - typeof
document.body.firstChild!="undefined" - because the tests are applied to
the - document.body - element and the corresponding method and property
are used on a SPAN element (and could be applied to any element that
allowed text content).

My experience of web browsers suggests that those tests are safe (in
that I know of no browsers where the inference drawn from those two
tests on - document.body - will not hold true for any SPAN element in
the same environment). But I also think that the logic of the test is
reasonable because of the nature of the properties being examined. They
are both part of W3C Core DOM Node interface, and it is the intention of
the W3C that all of the elements in the DOM implement the Node interface
(along with much else). So it doesn't seem unreasonable to infer that if
any specific element implements the significant part of that interface
then all other elements within the same DOM should also be expected to.
_With_some_caveats_:-

Internet Explorer 4 has a non-W3C standard - appendChild - method on (at
least some of) its elements so it is important that no assumptions be
made based on - appendChild - alone. IE 4 also implements a -
document.createElement - method, as does Opera 6, so that is also a
dangerous property to be inferring anything from. Given a script that
only really needs those two features to be supported on a W3C standard
browser, I usually throw in an additional test for - replaceChild - just
to ensure that IE 4 does not execute the code. (IE 4 cannot pass the
tests used because it does not implement - document.getElementById -
or - document.createTextNode -)
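
For instance, the sort of gateway condition I mean (a sketch; the exact
list of features depends on what the script actually uses):

if (document.createElement &&
    document.body &&
    document.body.appendChild &&
    document.body.replaceChild) { // the extra - replaceChild - test keeps IE 4 out
  // safe to go on creating and appending elements in the W3C style
}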

While I would be happy with examining a BODY element and making
deductions about a SPAN element from it (within the confines of a single
W3C specified interface, or a single property/method (or paired
property, e.g.:- width/height) that when implemented is common to all
elements) there are boundaries that I would not be happy to carry that
deduction across. I would not want to deduce anything about the document
element from the body, or about the body from the document, although the
document should also implement the Node interface. IE 5.0 is the problem
here as its document did not implement the Node interface (and others
may have copied Microsoft's structure at that time).

I would also be cautious about carrying the deduction from an Element to
a Text, Attribute, CDATASection, etc, Node. While the W3C intends all to
implement the Node interface I would want to re-verify the interface on
the type in question. Remember: Text nodes cannot have children, so while
they should have an - appendChild - method the expectation would be that
it is never used, and the browser manufacturer might consider it safe
to just omit it. Indeed there are IE 6 versions that *crash* if you,
for example, attempt to apply typeof to the - appendChild - method of an
attribute.

I have also observed (generally older) browsers where the elements
within the HEAD behaved quite differently from the displayed elements
within the BODY, being less amenable to dynamic manipulation, etc. This
would make me reluctant to apply deductions made from BODY elements (and
their descendants) to HEAD elements (and their descendants).

With those caveats, generally I would say that if the expectation is
that when one element implements a particular interface, or single
property/method, all other elements also implement it, then it is
probably safe to assume that positive verification on any one element
can be regarded as grounds for assuming that interface/property/method
to be implemented on all others. That would apply to W3C Node and
Element interfaces, the HTMLElement interface and various proprietary
features known to be common to elements in certain browsers.

I would, for example, happily assume that if the first element examined
had a numeric - offsetWidth - property then all subsequent elements
would also possess that property (though in that case I would not make
the deduction from the BODY element, as it is likely to be a special
case). And I would also be fairly happy to assume; if - offsetWidth -
then - offsetHeight -, as they wouldn't mean much in isolation.
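
Something along those lines, say (a sketch only; which element gets
examined is arbitrary, provided it is not the BODY):

var sample = document.getElementsByTagName ?
    document.getElementsByTagName("div")[0] : null;
var hasOffsets = !!(sample &&
    typeof sample.offsetWidth == "number" &&
    typeof sample.offsetHeight == "number");
// later code reads offsetWidth/offsetHeight on other elements
// without re-testing each one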

Faced with this issue of unknown environment, testing
features on the very object to be used makes sense,
and is actually the safest option.

This approach isn't without problems, though. Testing
extensively the features on each object can render the
code unreadable,

The readability argument is often overstated. It is not unusual for
people to comment on not being able to make head or tail of some of the
code I write, because I exploit what I have learnt about javascript over
the past years and that leaves individuals who are not familiar with the
techniques unable to comprehend the code. (making it particularly
amusing when people ask about how they should go about obfuscating code
(if it was worth obfuscating there would be no need as the only people
capable of understanding it would be able to write it for themselves)
:). Three years ago I would not have been capable of understanding the
code I write now.

But is it reasonable to suggest that I should presently be writing
code that I would have been capable of understanding three years ago,
when I didn't know a fraction of what I currently know about javascript?
Should I be writing code that I know to be sub-optimal because there are
people in the world who want to be able to write javascript without
learning how best to do so (without even being interested in doing so)?

So I write objects that appear complex. They are complex because they
attempt to address all of the issues that I have learnt need addressing
(maybe not always all, but at least most). Any code addressing those
same issues would exhibit similar complexity, though maybe in a
different form.

They also seem more complex than they really are to individuals who
don't know the techniques I choose to use to address those issues, but I
make an informed choice of the techniques to apply based on my judgement
of which is best suited to the situation (very often for optimum
performance).

Above all else it is important that any apparent complexity in the code
I write is internal to objects that have very simple public interfaces
(and document those interfaces). Making internal complexity
insignificant to third parties who use the code, so long as they don't
have to put any work into maintaining it, which would only become
necessary if I fail to write complete cross-browser code with planned
behaviour in all environments (obviously that is never my intention).
and break the business flow of the script.

Making that level of a script as clear as possible is always a good idea
as it is where any requested changes would be needed. Either pushing the
complexities needed to handle differing browser environments down so
they are hidden behind simple interfaces, or doing that work up-front
once, certainly does leave that level of a script clearer and more
unified.

Another problem is that it can prevent code optimisation if
you leave the tests in place in each situation, not redefining
the methods (see the Russian Doll pattern;-)).

This applies particularly to general DHTML libraries made up of numerous
functions, where each function tests the browser for its supported
methods prior to using them. It may be possible to reduce the logic of
the running code to little more than the use of those functions but the
overhead of re-testing on each call, for conditions that are unlikely to
have changed between calls, can rapidly add up to the point where it
becomes a problem in itself.
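
A sketch of the alternative I have in mind (one reading of the 'Russian
doll' idea: the function re-defines itself after passing its tests once,
so subsequent calls skip them; the names are illustrative):

var setText = function (el, text) {
  if (document.createTextNode && el && el.appendChild &&
      typeof el.firstChild != "undefined") {
    // tests passed: replace this function with a lean version
    setText = function (el, text) {
      if (el.firstChild) {
        el.firstChild.nodeValue = text;
      } else {
        el.appendChild(document.createTextNode(text));
      }
    };
    setText(el, text);
  } // else do nothing: clean degradation
};
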
In addition, there's also the cost introduced by such
techniques, less technical but as important; it requires
more time, more attention, more experience etc., so definitely
costs more (I don't know many people who could understand and
write advanced javascript without problems - in fact I know
of none apart from in clj - but that's not my job either).

Writing complete code; code that addresses all the relevant issues as it
operates and cleanly degrades when it cannot, is going to be more time
consuming than writing code that disregards the issues and fails
unpredictably. Any additional cost arising from doing the job properly
cannot be a good reason for not bothering.

And badly authored code must carry an additional burden in costs arising
from its unreliability and fragility. Though that may be harder to
quantify, and possibly go unnoticed. Such as a commercial site I looked
at recently where the most unreliable and javascript dependent aspect of
the entire site appeared to be the mechanism for reporting problems,
virtually guaranteeing that owners would not become aware of users
experiencing problems as a result of bad javascript authoring (and so
unaware of any needless loss of revenue resulting from it).

Rather than attempting to reduce the cost of javascript authoring by
tolerating the creation and use of inadequate scripts, I would rather be
concentrating on strategies for reducing costs through easy code re-use.
Which is why I have been writing a lot of low-level interface objects
recently. Because they offer a way of abstracting the complexity of
handling the variations in browser environments behind a simple
interface, and result in easily re-usable code without the code bloat
that follows from the use of large and interdependent javascript
libraries. It is also why I am getting interested in optimising the
configuration of independent chunks of code, because I want those
interface objects to be as self-contained as possible (so they can be
dropped into code that needs them with few (preferably zero) concerns
for interdependencies).

There is also a point where the expertise required to comprehend the
more advanced techniques, or design a complete script, while possibly
being perceived as expensive, actually reduces development costs itself.
It is not unusual for the inexperienced to get a script to broadly
'work' on one browser and then spend a lot of time thrashing about
trying to extend support to another. I have done it myself, and we see
plenty of questions on that particular subject posted to the group.

These days it is extremely rare for me to encounter new problems (and
then only when testing with the less common browsers/configurations); I
design and write cross-browser code and when I test it it mostly
exhibits the designed behaviour first time. And I can write in a day
what would have taken me a week or more to write 3 years ago. Giving me
more freedom to consider the design of the script and its
implementation. And providing a direct return in reduced hours spent in
script creation, followed by the reduced maintenance costs that follow
from good script design.
I'm therefore less and less convinced of the approach of
feature-detecting as "much" as possible. To me the best
approach is to do something in between, first performing
a rigorous initialisation, testing for all methods on a
sample object (the document.body in the example code),
and then moving on to the business logic, without
testing more than required (object existence).
<snip>

Broadly I concur. Javascript is not particularly fast; the price of a
dynamic, interpreted language. Many optimisations are achieved by not
doing the same thing repeatedly when you can do it once and hold on to
the result, and (at least some, probably most) feature detection is
amenable to handling in that way.

Posting example code to the group is the area where the integration of
feature detecting techniques troubles me most. Most questions are so
simple that they do not warrant a full implementation and instead can be
addressed with little more than a simple function, or just a specific
code example.

It would be remiss to omit the feature detection entirely; that might
give the impression that doing so was acceptable. But an optimum
implementation would usually be above the level of the example code
used, design wise (particularly the "gateway" initialisation style).

A more local test and initialise pattern (such as the 'Russian doll') is
potentially beyond the comprehension of many questioners, so they may
use the code and find that it works, but they would not necessarily
learn anything from it.

That leaves posting example functions with the feature detection
demonstrated directly in the function, but in a way that means it will
be re-executed on each call (and the implication that that is an
appropriate and 'correct' style for javascript authoring).

On the whole I think it is best that code that demonstrates optimum
patterns does get posted in response to questions on the group. And if
the OPs find the result incomprehensible then at least they will have
learnt that there is more to javascript than they currently understand.
It is not as if those examples will ever be the only ones posted; the
over-trivial, incomplete, and/or more direct but potentially sub-optimal
examples will always appear alongside the more elaborate examples (and
people have different opinions of what constitutes a good implementation
anyway).

Richard.
 

Mick White

Richard said:
We start with two well established principles relating to browser
scripting:-

1. Making assumptions about the browser environment is
extremely risky.
2. Feature detecting tests should be performed in a way
that is as closely related to the problem as possible
(preferably a direct one-to-one relationship).

And don't forget that just because the browser claims support for a
method or property, it doesn't mean that the UA does, in fact, support
it/them.

Case in point: Safari 1.0.2 supports the "cellIndex" property of a table
cell element, but it always returns the Number "0".
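
(One possible guard against that sort of thing, sketched here and not
tested in Safari 1.0.2: check the reported value on a known structure
rather than just the property's existence.)

function cellIndexIsReliable() {
  if (!document.createElement) { return false; }
  var row = document.createElement("tr");
  if (!row.insertCell) { return false; }
  row.insertCell(-1);              // first cell, index 0
  var second = row.insertCell(-1); // second cell, should report index 1
  // a broken implementation reports 0 for every cell
  return !!(second && second.cellIndex == 1);
}
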
Mick
 

Richard Cornford

Michael said:
Yann-Erwan Perio wrote:
But if you're familiar with such a cautious approach,
does it really add extra cost?

If you are familiar with the cautious approach then the chances are that
you are also familiar with a larger proportion of the issues around
browsers scripting, and have developed strategies for addressing them.
Designing an implementation that applies that knowledge may take longer
than disregarding the issues in the design, and may involve physically
writing more code. But once it has become second nature it wouldn't
necessarily take significantly longer, and being in a position to
recognise and address the issues from the outset should result in
implementations that require less ongoing maintenance (properly
implemented cross-browser scripts should exhibit planned behaviour in
100% of environments and be future proof, so they should not need
maintenance at all).
People often argue that writing "good" code takes extra
time, but that is simply because they aren't used to
writing it.

It is often amusing to find that it is the individuals who cannot write
cross-browser code who most vigorously assert that doing so costs more,
when they are not really in a good position to judge.

There is also the factor of education. It's fine for
those versed in cross-browser scripting to say, "I don't
have to worry about testing for that [whatever "that" may
be], because I know it will be there",

As far as I am concerned the only property that I never bother to test
for is a global - document - property.
but is this piecemeal testing something that should be
passed on to others? Without experience, how can they
judge what is needed and what isn't?

Incomplete testing should not be encouraged, but your original question
was more about a style of testing.
By our own admissions, neither of us are fully qualified
to make such a determination with any degree of authority.

Nobody is fully qualified. That would require a detailed understanding of
_all_ scriptable browsers, and I don't think anyone is in a position to
even name them all. Indeed it is the act of actively perusing scriptable web
browsers that convinces me of the impossibility of knowing them all, and
makes me unwilling to trust that anything but the - document - property
is universally implemented.
There are merits to that, but I'm still not certain
it's something to be adopted at the moment.
<snip>

Incomplete testing should not be encouraged, but your original question
was more about a style of testing. Specifically the validity of testing
one host object and then using the tested methods/properties on another.
The more general aspect of Yep's suggestion; the performing of the tests
(or at least as many as are practical) once during an initialisation
phase, is a completely viable strategy that offers advantages in
performance.

As I have said, I would want to restrict the extent to which the results
of tests for properties and methods on one object are applied to other
objects. But the strategies applied to feature detection deserve
consideration, and the development of approaches that gain the
reliability of feature detection without letting it become a burdensome
overhead to running code.

Richard.
 

Yann-Erwan Perio

Richard Cornford wrote:

<snip synthesis>

I liked your synthesis; the clear distinction you made about the nature
of the properties being tested is truly important, and I may have overlooked
that by trying to be too general (I should have gone into more detail).
The readability argument is often overstated. It is not unusual for
people to comment on not being able to make head or tail of some of the
code I write, because I exploit what I have learnt about javascript over
the past years and that leaves individuals who are not familiar with the
techniques unable to comprehend the code.

Actually I didn't think about this kind of readability, although I'll
address the matter afterwards - I meant that the additional tests within
the code would divert the reader from the real logic. Examining a style
object then typeof-ing a CSS property many times, or testing node
manipulation methods repeatedly, has definitely nothing to do with the
real action (highlighting an element, swapping a node...). I've always
been irritated at having to test the environment before writing the
relevant code, hence my wish to separate the two processes as cleanly as
possible.
But is it reasonable to suggest that I should presently be writing
code that I would have been capable of understanding three years ago,
when I didn't know a fraction of what I currently know about javascript?
Should I be writing code that I know to be sub-optimal because there are
people in the world who want to be able to write javascript without
learning how best to do so (without even being interested in doing so)?

I'm frankly unsure about this; the part in me that loves javascript
completely agrees with that; javascript is a powerful language, and using
it to its maximum can only give better results; when some guy doesn't
know it, then he just has to learn it, as with any other language.

Now, my (limited) experience shows that few people are experts or
willing to become experts; yet they're the ones given the responsibilities
to develop and maintain systems. They'll do a fair job (the "you" of
three years ago would have done a pretty good job in javascript), using
sub-optimal yet easier-to-maintain techniques, at least to meet the
clients' requirements, and for a reduced visible cost. Then (and before)
you have analysts who estimate the hidden cost (failures and so on),
and generally find it acceptable (the cost of 'implementing' a non-quality).

I'd really like to agree with you on this cost subject, all the more so as
the arguments you've developed make sense - but they seem to apply in
rather specialised areas. I don't have your experience, though, so I
probably lack proper analytical data to draw a definite conclusion on
this matter.
So I write objects that appear complex. They are complex because they
attempt to address all of the issues that I have learnt need addressing
(maybe not always all, but at least most). Any code addressing those
same issues would exhibit similar complexity, though maybe in a
different form.

Yes; I can only agree there, and have come to recognise and grow familiar
with this kind of complexity.
Making that level of a script as clear as possible is always a good idea
as it is where any requested changes would be needed. Either pushing the
complexities needed to handle differing browser environments down so
they are hidden behind simple interfaces, or doing that work up-front
once, certainly does leave that level of a script clearer and more
unified.

Encapsulating the functionality inside a component might be a solution,
and actually may be the best solution. However when your script is
tightly related to the very nature of the host conceptual model, like
manipulating nodes, to what extent is it correct to encapsulate the
functionality within a component?

I can perfectly accept moving "graphical" logic inside components since
models differ greatly across user agents (your script for measuring
client dimensions is an excellent example of this), but I feel uneasy
about how to code element manipulation (like, years ago, the first time
I encountered vectors and the like in Java, with such a detailed
interface when I was used to javascript arrays).

I realise that maybe my primary mistake is to not define the scope of
javascript applications; you're talking about low-level components,
whereas I wanted to address the approaches as a whole - I'll have to
rethink about it, though I already know I'll have to refine my vision.
Rather than attempting to reduce the cost of javascript authoring by
tolerating the creation and use of inadequate scripts, I would rather be
concentrating on strategies for reducing costs through easy code re-use.

I suppose this is the right approach; as for me, I'm working on a
per-assignment basis, with different teams each time, and different
competencies required for each assignment - so I've just noticed
I might not have been able to capitalize on 'technical' experience so far.
There is also a point where the expertise required to comprehend the
more advanced techniques, or design a complete script, while possibly
being perceived as expensive, actually reduces development costs itself.

A very good point.

On the whole I think it is best that code that demonstrates optimum
patterns does get posted in response to questions on the group. And if
the OPs find the result incomprehensible then at least they will have
learnt that there is more to javascript than they currently understand.
It is not as if those examples will ever be the only ones posted; the
over-trivial, incomplete, and/or more direct but potentially sub-optimal
examples will always appear alongside the more elaborate examples (and
people have different opinions of what constitutes a good implementation
anyway).

ACK; I'll try to support this vision in the future.


Cheers,
Yep.
 

Richard Cornford

Yann-Erwan Perio said:
Richard Cornford wrote:
... . I've always been irritated to have to test the
environment before writing the relevant code, hence my
will to separate the two processes as cleanly as possible.

I entirely agree. To a large extent I want to separate out the testing
for efficiency, so that what only needs doing once can be done only
once, but that does make the code that acts distinct from the code that
tests (more or less) and that distinction is valuable.

I'm frankly unsure about this; the part in me that loves
javascript completely agrees with that; javascript is a
powerful language, and using it to its maximum can only
give better results; when some guy doesn't know it, then
he just has to learn it, as with any other language.

Now, my (limited) experience shows that few people
are experts or willing to become experts; yet they're
the ones given the responsibilities to develop and
maintain systems.

That is the "javascript isn't real programming" attitude, and mostly on
the part of management. So while "real" programming has people working
on methodologies that attempt to cope with the limitations in the human
ability to conceive complexity in large systems, and that promote the
creation of reliable, maintainable, and re-usable code (with the active
intention of reducing development costs), the management who choose to
employ non-programmers to write javascript (because it isn't a "real"
programming job) don't see any of the benefits of the research into
optimising the task of programming.
They'll do a fair job (the "you" of three years ago
would have done a pretty good job in javascript),

Three years ago I was attempting to do the best job I was capable of,
but in retrospect the results were not good (or even fair). And I don't
think I was providing value for money either, as it seemed to take ages
to achieve consistent results in even the limited set of browsers that
were covered by the specifications I was implementing. I certainly
wasn't accounting for a fraction of the issues that have become second
nature over the intervening period, as I was totally ignorant of their
existence (let alone how to handle them).
using sub-optimal yet easier-to-maintain techniques,

One of the things that makes me look back on those early scripts with
feelings approaching shame was the release of Opera 7. Coming as it did
towards the end of the period in which I learnt, and then developed, my
feature detection techniques. Those early scripts either couldn't take
advantage of Opera 7's dynamic capabilities, or they failed
uncontrollably with it. Well, Opera 7 hadn't appeared in the
specification list (how could it) so strictly there was nothing wrong
with that, as such. But the feature detecting scripts that I had been
writing, that had mostly been non-functional (but cleanly degrading) on
Opera <= 6, all took advantage of Opera 7's new dynamic features as soon
as they were exposed to it, without the need to change a single line of
code.

And that, in a large part, is the maintenance issue. You can employ
individuals with a trivial grasp of the subject to create and maintain
scripts, and they may be relatively cheap and easy to employ, but they
are going to take longer to create anything, it will probably be created
in ignorance (or disregard for) of pertinent issues, and it will be
significantly less robust and so need more maintenance. Or you can
employ individuals who have acquired the relevant skills/knowledge and
have the equivalent code produced more quickly, or better code in the same
time, and have results that have been designed to be reliable in any
(and especially the unknown) browser environment, so you can throw more
at them and they will still work. Requiring relatively little (or
potentially no) on-going maintenance.

Encapsulating the functionality inside a component might
be a solution, and actually may be the best solution.
However when your script is tightly related to the very
nature of the host conceptual model, like manipulating
nodes, to what extent is it correct to encapsulate the
functionality within a component?

I can perfectly accept moving "graphical" logic inside
components since models differ greatly across user agents
(your script for measuring client dimensions is an excellent
example of this), but I feel uneasy about how to code elements
manipulation ...

I haven't yet seen Node manipulation as a candidate for a low-level
component. It is mostly a case of the browser either supporting the
desired manipulation (in a W3C DOM compliant, dynamic way) or it being
unsupported. There is not much point in wrapping a single interface in
another that would have to be at least as complex.

There are numerous node related repetitive tasks that can be implemented
as parameterised function calls, and recurrent structures that can be
abstracted into objects that only expose their externally relevant
aspects (such as wrapping an HTML branch in an absolutely positioned DIV
structure that clips to the viewport, you initially pass it its contents
and then tell it where you want it on the page and it clips itself to
suit (and handles changes due to scrolling and re-sizing) internally).

Mostly a component with a simple interface suits situations where two or
more (active) possibilities are facilitated by browsers. Though I am
finding myself recognising more common structures in javascript code
itself, and seeing ways of implementing them as components or
object-augmenting functions. The potential for code re-use goes up each
time I implement one of these, and recently I have surprised myself with
how quickly it has been possible to write some extremely flexible DHTML
scripts using them (and with how little debugging the results need
during testing (which is because the components themselves are
effectively pre-debugged)).
I realise that maybe my primary mistake is to not define
the scope of javascript applications; you're talking about
low-level components, whereas I wanted to address the
approaches as a whole - I'll have to rethink about it, though
I already know I'll have to refine my vision.

I still end up with a top level of task-specific object definitions, a
'gateway' feature detection test or two and the logic to
initialise/instantiate the task-specific objects. The low-level
components act to reduce the size of that structure (and its
complexity), but they don't really alter its nature.
I suppose this is the right approach; as for me, I'm working
on a per-assignment basis, with different teams each time, and
different competencies required for each assignment
- so I've just noticed I might not have been able to capitalize
on 'technical' experience so far.
<snip>

I am not sure that it is the right approach, which is the point of
discussing it in a public forum. I know that code re-use is a good idea
(nobody should take much convincing of that) but my approach of creating
low-level interfaces to the variable aspects of the browser environment
is primarily a strategy aimed at achieving code re-use without the
overheads (in less than efficient code and bloated end results) that
follow from large (and often interdependent) javascript DHTML libraries.
I am encouraged by being able to quickly produce reliable cross-browser
results through the application of that strategy, but that doesn't mean
I am right (or that it is the optimum strategy).

Richard.
 
