'Flavors' of JS?


Matt Kruse

Richard, I believe that you are being intentionally obtuse, so I won't
respond to everything you've written.
I believe we have different views of the world, and maybe both of our views
are correct in their own ways, I don't know. I do know that I have
thousands of sites that have benefited from my approach, and thousands of
users who have thanked me or donated money to me. And I have received some
emails in response to this thread and others expressing disgust with your
attitude in this group. So it's clear to me that I'm not nearly as "wrong"
as you wish to imply. But I'll address a couple of your points...

Richard Cornford said:
What would be the point of enumerating the many specific implementation
flaws in your code when you refuse to even recognise the fundamental
design flaw?

Obviously the world is black and white to you, either people agree with you
or they are wrong.
But your willingness to write over 100 lines of response, yet not point out
a single technical criticism of code, says to me that you'd rather argue
endlessly and prove your correctness than actually accomplish anything.
That's a bummer.
(and, btw, I certainly never claim that my code is perfect - I'm always
working to improve it, as time permits. In many cases, I already know how to
improve it, I just don't have time to do so.)
And an attitude that it is
better to refer people to copy and paste scripts, rather than assisting
them in better understanding the task and its issues, will not assist
them in untangling the code within those libraries.

Not everyone wants or needs to understand javascript in order to use it.
I don't subscribe to such elitist attitudes.
That doesn't seem like a search combination calculated to locate code.

Yet no other suggestions or URLs are posted?
Again, you're more interested in arguing than anything else.
You apparently have no real-world sites or libraries to put forward as
examples of successful implementations of your views?
You have a very personal definition of "solution". To my mind a solution
modifies a situation such that there are no problems remaining.

I think that's a highly flawed definition.
I think a solution accomplishes a goal given a set of requirements.
If supporting non-javascript-enabled browsers isn't a requirement, for
example, then the set of available solutions may change.
Furthermore, I don't think a javascript solution needs to also hold the hand
of the developer and instruct them on how to degrade gracefully, or decide
if they really want to use javascript at all. That's outside the scope of a
javascript library. The user should understand and decide on those issues
before deciding on the javascript solution needed to solve its part.
Encapsulating commonly needed and specific (usually low level) tasks
into efficient small components is a viable way of authoring re-usable
code.

True, but it's also useless by itself, and requires javascript knowledge to
assemble anything that actually accomplishes a goal.
I build small, efficient components also. They exist in my libraries. I just
choose to assemble them into larger scripts which solve a bigger problem.
And once any individual component has been
rigorously tested in isolation its behaviour can be relied upon to
contribute towards the creation of a reliable larger application.

Exactly. And that's the purpose of a library of functions, of course.
But, you hate libraries. You want to re-write everything all the time.
It is
also more practical to build such a collection in a response to
requirements

There are some requirements which are very basic and very common. Creating
libraries for these functions is very practical.

For example, having a "popup date picker" for a date input field is a common
requirement. Yet, it's a pretty complex task to actually implement. Why
would a person who is not interested in learning or writing javascript
build it from scratch with low-level components, rather than using a
pre-packaged solution like mine or others?

Even though the size of my library is nearly 35k, it provides a lot of
functionality out-of-the-box. Most people can have it working very quickly,
have all the features they need, and have all major browsers supported, with
very little effort. If used in many pages, it's probably cached anyway, so
it wouldn't impact speed much at all.

If they built it from scratch to have the same functionality as you would
propose, they may spend 50 or more hours of coding and testing, and spend a
large amount of money to get exactly the same result - or something that
isn't as complete without them even realizing it.

To me, the first option is clearly superior for _most_ people.
Again you are applying your unusual definition of "solution". Take you
table sorting library...

Well, you can certainly find flaws in that library given the current state
of web browsers. I've debated about whether to leave it up there anymore,
but I do still find people who need it, so I've left it up.

It was written way back when 4.x was the most current Netscape browser, and
using DOM methods wasn't even an option yet. Way back then, if you wanted
client-side table sorting, it was a tough, tricky thing to implement.
_Especially_ if you wanted to support Netscape 4. Can you point me to
another client-side table-sorting script which supports Netscape4? They
might exist, but they are rare indeed.

That specific library exists for people writing web apps where javascript is
enabled, and users might be using Netscape 4. Believe it or not, that
situation exists more than any of us would like! And for developers in this
situation, looking for a client-side table sorting script, my library offers
a unique and functional solution for them, when writing it from scratch
might cost them considerable time and money.
Now contrast that with the DOM table sorting scripts. OK, they only work
on javascript capable dynamic DOM browsers (but those fall on the
acceptable side of your 80/20 criteria anyway)

In a wide-open internet situation, perhaps.
I recently had an email exchange with a user whose client still had
Netscape4.78 as their standard browser. All of your fancy DOM stuff simply
didn't apply to him at all, and he was thankful that there were solutions
out there that still supported ancient browsers. Clearly, not all people's
requirements are as simple as you would like to believe.
Your library solves one problem by introducing another; the DOM version
solves the same problem (to the same criteria of acceptability) but does
not introduce any other problems into the situation.

I certainly understand your point. Look at my dhtml tree:
http://www.mattkruse.com/javascript/mktree/
That's an approach to adding javascript functionality to plain html content,
and I love the idea quite a bit. But it simply doesn't work in all
situations.
maybe the way that the inappropriateness of the fundamental design of
your libraries would require you to jump through hoops to create a
reliable system that is contributing to your impression that creating a
reliable system is difficult, time-consuming and expensive.

No, I understand exactly what you're saying, and I've done plenty of things
implemented in the ways you describe. However, NOT ALL SITUATIONS ARE THE
SAME, despite your insistence that everything fit into your pre-defined box.
I would say that visiting a demonstration of any javascript code with a
javascript disabled browser is a very obvious test for the acceptability
of its degradation strategy

Degradation strategy is up to the person implementing the library, not up to
the library itself.
You can cater for everyone, but not caring to try is guaranteed to mean
that you never will.

You are very wrong, sir Richard.
It means that I have 24 hours in a day, and I use them in the ways that are
most beneficial to users of my code, and profitable to me.
If that doesn't please YOU, I simply do not care. :)
 

Java script Dude

Jim Witte said:
What do people feel about this statement: "JS exists in so many
flavours across so many browsers (and across the html/xhtml/xml divides)
that it is becoming undesirable to include any on a site."

Jim

When talking about the main browsers, IE 5.5 SP2+ and Mozilla 1.1+ are
quite similar browsers that are capable of handling similar JavaScript
needs, as long as you use the standards-compliant Mozilla as your
development bed. With all the other browsers (non-IE and non-Moz), you
can set up the pages to have fewer features to ensure minimal errors for
users.

Mozilla is an awesome development platform. And as long as you follow
the standards, all of your code should work in IE with mild cross-browser
tweaking. The biggest problem for JS developers is that they learn the
"simplified" black-hole methods offered in IE and ignore the standards-based
methods in Mozilla (document.all[] vs document.getElementById()). They then
end up banging their heads on all the cases where they entered IE black holes
and need to re-think most of their code.
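
A rough sketch of that standards-first approach (the element id "menu" below
is just an example, not anything from a real page):

// Prefer the W3C DOM method; fall back to IE's proprietary collection only
// when the standard method is missing, and give up quietly otherwise.
function getElement(id) {
  if (document.getElementById) {
    return document.getElementById(id);
  } else if (document.all) {
    return document.all[id];
  }
  return null; // neither mechanism is available in this browser
}

var menu = getElement("menu");
if (menu) {
  menu.style.display = "none"; // act only when the element was actually found
}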

If the other (non-IE) browsers don't maintain compatibility with Mozilla,
then too bad; give those users fewer features.

Mozilla and IE will be the dominant browsers in the future, with the
former gaining ground by the hour.
 

Brian Genisio

Matt said:
Richard, I believe that you are being intentionally obtuse, so I won't
respond to everything you've written.
I believe we have different views of the world, and maybe both of our views
are correct in their own ways, I don't know. I do know that I have
thousands of sites that have benefited from my approach, and thousands of
users who have thanked me or donated money to me. And I have received some
emails in response to this thread and others expressing disgust with your
attitude in this group. So it's clear to me that I'm not nearly as "wrong"
as you wish to imply. But I'll address a couple of your points...

I need to play devil's advocate for a bit. I once developed a full
corporate site using the JSAPI library set, which seemed pretty cool at
the time. Everything looked really great... until we posted to the
public server. I cannot tell you how many complaints I got from people:
"I can't see page X", "I cannot see the text in the third column of
page Y", "My browser locks up when I click on Z".

There was not much I could do, aside from debugging the libraries. At
the time, that was out of scope, and JSAPI had no plans to support IE 6,
in which my pages would not load at all. It was a real mess.

I got so frustrated, that when I did a page redesign, we decided that we
would no longer use something like a JS library. Any scripting would be
done by us, and we would do as much standardization on the server side
(using PHP includes).

I feel real confident with the new site design. I was left feeling that
the concept of a JS library is great in theory, but slow, buggy and
unreliable in practice.

I am not saying "All Libraries are bad". I am saying that I don't think
they are necessarily a good solution in most situations... which
brings me to the next point I feel I need to chime in on...
I think that's a highly flawed definition.
I think a solution accomplishes a goal given a set of requirements.
If supporting non-javascript-enabled browsers isn't a requirement, for
example, then the set of available solutions may change.
Furthermore, I don't think a javascript solution needs to also hold the hand
of the developer and instruct them on how to degrade gracefully, or decide
if they really want to use javascript at all. That's outside the scope of a
javascript library. The user should understand and decide on those issues
before deciding on the javascript solution needed to solve its part.

Boom! Hit it right on the head. To say that "a solution modifies a
situation such that there are no problems remaining" is incorrect. A
perfect, or complete solution makes it so there are no problems
remaining. This is very, VERY rarely the case in software development.

A solution is something that meets the requirements.

A good solution is something that meets the requirements, has a solid
design, facilitates reuse, is maintainable, runs efficiently, and has a
solid user interface. (I know there are more attributes, but you get
my point.)

A perfect solution "modifies a situation such that there are no problems
remaining."

It is, indeed, an ivory tower concept to believe that a perfect solution
exists, without writing all components of a solution with the end goal
in mind.

Brian
 

Richard Cornford

Matt Kruse wrote:
... . I do know that I have thousands of sites that have
benefited from my approach, and thousands of users who
have thanked me or donated money to me.

Hmm, and maybe it is that donated money that motivates you to direct
anyone with an even vaguely related problem to one of your libraries and
to deprecate the people proposing alternative design strategies and
generally authoring javascript to standards suitable for the Internet.
Because your scripts do not approach those standards, the implication that
they are not suited to Internet use would, if widely accepted, directly
impact your revenue stream.

I can understand how financial self interest can motivate you to argue
against people promoting best practices in web authoring in the way that
you do. On the other hand, the most valuable contributors to the group
are not even asking for anything in return, just trying to contribute to
a generally better application of javascript, and on a purely voluntary
basis. My general experience is that you are most likely to hear what
you need to know from someone who has nothing to gain from telling you.
And I have received some emails in response to this thread and others
expressing disgust with your attitude in this group.
So it's clear to me that I'm not nearly as "wrong" as you wish to
imply.

A hearsay straw poll of people whose javascript authoring abilities
leave them feeling unqualified to comment to the group directly.
Obviously the world is black and white to you,
either people agree with you or they are wrong.

It probably comforts you to think that. In practice I am a believer in
reasoned argument as a route to decision making. I can be convinced by a
valid counter argument, and have been (as could be verified through
google as I have engaged in many public debates on newsgroups). And the
fact that my opinions have changed over time stands witness to an
absence of mental rigidity on my part.
But your willingness to write over 100 lines of response, yet
not point out a single technical criticism of code,

What would be the point? If I spend my time enumerating the problems,
and you spend your time correcting them, what is the result? A bad
concept well implemented; really just a waste of both of our efforts.
You may not see it yet, but in the long run my time is better spent
encouraging you to see the design issue.
says to me that you'd rather argue endlessly and prove
your correctness than actually accomplish anything.

Considering the validity of your counter arguments, and responding to
them, is progressing the debate. It accomplishes a better understanding
of the argument.

Not everyone wants or needs to understand javascript in order
to use it. I don't subscribe to such elitist attitudes.

So you actively oppose others attempting to promote an understanding of
the language and the issues related to its application? Well you may
have a motivation for adopting that position, but what is
comp.lang.javascript for if not that? A vending machine for cut and
paste scripts?

I think that's a highly flawed definition.
I think a solution accomplishes a goal given a set of requirements.
If supporting non-javascript-enabled browsers isn't a requirement, for
example, then the set of available solutions may change.

Did anyone ever say that a javascript dependency wouldn't be acceptable
on an Intranet? (Though there are valid arguments for designing Intranet
scripts to Internet standards.) In an Intranet context a "solution" that
introduces a javascript dependency is not necessarily introducing an
alternative problem because it is possible to know in advance that all
of the browsers that will be exposed to the script will be javascript
capable, that is what you get with an Intranet. You can know that there
are no problems remaining in the end result because the javascript
dependence is not a problem.

On the Internet you know in advance that some of the browsers will be
javascript incapable, so you know that javascript will be a problem for
some. That problem is there whether you choose to ignore it or not, and
if its introduction was not necessary that is an important factor in the
assessment of "solutions".
Furthermore, I don't think a javascript solution needs to also hold the hand
of the developer and instruct them on how to degrade gracefully, or decide
if they really want to use javascript at all. That's outside the scope of a
javascript library. The user should understand and decide on those issues
before deciding on the javascript solution needed to solve its part.

If the point of a generalised library is to reduce the effort of
developers using it then it fails at that task if it leaves the problem
of deciding when it is necessary to degrade to the developer using it.
The library should, at minimum, be in a position to be queried about its
viability in whatever environment it may find itself, or else the developer
using it is left with the need to test the environment themselves to see
if the library is going to be functional. But if the library hasn't been
designed with paths of clean degradation, any degradation handling has to
be implemented entirely by the developer, which hardly contributes to
the proposition that library use is convenient and inexpensive. It is
more likely to encourage an attitude in that developer that making
provision for the javascript-incapable is more work than it is worth.
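
As a rough sketch of the minimum I have in mind (every name here is invented
for the illustration, not taken from any existing library):

// Hypothetical library exposing a single viability check, so that the page
// author can find out whether relying on it is safe before wiring anything up.
var MYLIB = {
  isViable: function () {
    // test for every feature the library's behaviour actually depends on
    return !!(document.getElementById && document.createElement);
  },
  activate: function (id) {
    var el = document.getElementById(id);
    if (el) {
      el.style.borderColor = "red"; // trivial stand-in for the real behaviour
    }
  }
};

if (MYLIB.isViable()) {
  MYLIB.activate("example"); // enhance the page
} else {
  // do nothing: the plain HTML remains usable without the script
}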

There is also your suggestion that the users of such a library don't
need to understand javascript authoring issues. But then you are
expecting these individuals without that understanding to recognise the
contexts in which they should be providing fall-back, how (and what) to
test for those conditions, how to integrate any alternative content and
how to implement the scripts needed to do that.
True, but it's also useless by itself, and requires javascript
knowledge to assemble anything that actually accomplishes a goal.

Programming and not programming are mutually exclusive. To program
javascript you need javascript knowledge; you also need to understand,
for example, boolean logic. These aren't cruel impositions intended to
keep the uninitiated from scripting web browsers; they are just the
obvious fundamental requirements for the task. A desire to program
without learning how to program is at best unrealistic.
I build small, efficient components also. They exist in my libraries. I just
choose to assemble them into larger scripts which solve a bigger
problem.

Where do you think those small components come from? They initially get
created to handle aspects of bigger problems; once created, they get
re-used when another problem needs that same facility.
Exactly. And that's the purpose of a library of functions, of course.
But, you hate libraries. You want to re-write everything all the time.

What makes you think I re-write everything? In practice the only part of
a process that I write for most scripts is the application-specific
control logic, and the user of any library has no choice but to write that
anyway. The difference is that I get to include only the functionality
that is needed by a script, while the user of a library has little
choice but to include the entire library for each and every task they want
to use a library for.
There are some requirements which are very basic and very common.
Creating libraries for these functions is very practical.

And the lower the level of the task the more common it becomes.
For example, having a "popup date picker" for a date input field is a common
requirement. Yet, it's a pretty complex task to actually implement.

It is a far more complex task to implement in a truly general way: suddenly
you need to accommodate any possible presentation, any date range, interact
with arbitrary form controls and combinations of forms, deal with any
possible HTML structures and content, and so on. That leaves any general
solution bloated with code needed to handle the possibilities, most of
which will not apply to any actual application, and then the general
version may still have overlooked some possibilities (often the subject
of questions on the group).
Why would a person who is not interested in learning
or writing javascript build it from scratch with low-level
components, rather than using a pre-packaged solution like
mine or others?

Someone not interested in learning or writing javascript is probably
being unrealistic in having a desire to use javascript (and weren't
these the people you just suggested should be capable of identifying and
handling degradation issues for themselves?).

But the primary reasons for site-specific implementations are the ability to
match the implementation code to the specific situation, maximising
efficiency and minimising the associated download, and the considerably
reduced complexity when clean degradation specific to the situation is
implemented directly in the code used.
Even though the size of my library is nearly 35k, it provides a lot of
functionality out-of-the-box. Most people can have it working very quickly,
have all the features they need, and have all major browsers supported, with
very little effort.

But at the end of that process they have a system that will not work for
all potential users (at least on the Internet) where they could have a
system that is usable by all. Unless they are now going to set about
adding a degradation strategy, which will probably take as much effort
to do reliably as it would have taken to implement a task-specific
script with integrated fall-back from the outset.
If used in many pages, it's probably cached anyway, so
it wouldn't impact speed much at all.

Nothing gets cached unless it is downloaded at least once.
If they built it from scratch to have the same functionality as you would
propose, they may spend 50 or more hours of coding and testing, and spend a
large amount of money to get exactly the same result -

50 hours? Not for someone who knew what they were doing. But the result
wouldn't be exactly the same result; it would be a result tailored to
the situation of its use, potentially with the paths of degradation
designed in from the outset and so requiring no additional work in order
to achieve 100% reliability.
or something that isn't as complete
without them even realizing it.

When completeness exceeds need then it is only contributing to code
bloat.
To me, the first option is clearly superior for _most_ people.


Well, you can certainly find flaws in that library given the current state
of web browsers. I've debated about whether to leave it up there anymore,
but I do still find people who need it, so I've left it up.

It was written way back when 4.x was the most current Netscape
browser,

I can see that it is accommodating Netscape 4 that results in that
script being the way it is. But table sorting can still easily fall back
to server-side sorting, so that approach would accommodate Netscape 4's
shortcomings, while the dynamic DOM browsers achieve your 80/20
criteria.

That specific library exists for people writing web apps where
javascript is enabled, and users might be using Netscape 4. ...
<snip>

That's fine, so long as you make it clear that it would be a poor choice
in any other context.

Degradation strategy is up to the person implementing the library,
not up to the library itself.
<snip>

And you make this requirement of the "person who is not interested in
learning or writing javascript"? But isn't demonstration of a library an
application of that library? It will still demonstrate its potential for
degradation in that application.

Richard.
 

Richard Cornford

Brian Genisio wrote:
A perfect solution "modifies a situation such that there are no
problems remaining."

It is, indeed, an ivory tower concept to believe that a perfect
solution exists, ...
<snip>

By any reasonable criteria when no problems remain a solution has been
achieved. The limitation is that it may be so difficult to identify
residual problems that it would be impossible to achieve certainty that
an outcome qualified as a solution.

That problem has a direct parallel in epistemology; it is impossible to
know when something (say a scientific theory) is true. But that doesn't
hinder the growth of knowledge, scientific or technical progress. It
doesn't hinder progress because while truth is an ideal concept and
cannot be attributed to knowledge, not-true can easily be identified. So
progress is achieved by the invention of new ideas and the elimination
of ideas identified as not-true. Resulting in a movement toward truth,
regardless of the theoretically unknowable nature of truth.

A similar state applies to "solution", it may not be possible to be
certain of one when you have it, and it may not actually be practical to
achieve it, but that doesn't prevent it from being a target that can be
moved towards.

An obvious strategy in moving towards an ideal solution would be to follow
the logic that allows progress towards truth, identifying and
eliminating anything that qualifies as not-a-solution.

So, in an Internet context, a proposition that removes the original
problem but introduces an alternative problem that was not inherent in
the system is not moving towards the elimination of problems from the
system; it hasn't even reduced the number of problems, just modified
one. Such a proposition can be identified as not-a-solution.

It doesn't matter how idealistic and unachievable (or unknowable) a goal
is if you can tell when you are moving towards it, and that movement is
objectively progress. The elimination of not-true is a movement towards
truth, the elimination of not-a-solution is a movement towards a
solution, and any sufficiently extended sequence of movements
exclusively towards a destination will eventually result in arrival at
that destination [1], even if it is impossible to identify arrival.

Richard.

[1] Zeno would not necessarily have agreed.
 

Matt Kruse

Richard Cornford said:
Hmm, and maybe it is that donated money that motivates you to direct
anyone with an even vaguely related problem to one of your libraries and
to deprecate the people proposing alternative design strategies and
generally authoring javascript to standards suitable for the Internet.

I direct people to my code if I have a solution which directly solves their
problem, and gives them example code to inspect if they want to see how it
is done. I certainly do not deprecate people proposing alternatives.
Donations are not a driving force for me. Giving people higher-quality
solutions than are available at most of the cut-n-paste javascript sites is
the motivation, and getting many users of my scripts so that I can improve
them, which also directly benefits me and my private work.
So you actively oppose others attempting to promote an understanding of
the language and the issues related to its application?

Not at all. Ideally, in my mind, a poster would get both a quick answer with
a library or function to solve their requirement, and possibly more posts
which discuss the issue in more detail which they can learn from.
Programming and not programming are mutually exclusive. To program
javascript you need javascript knowledge; you also need to understand,
for example, boolean logic. These aren't cruel impositions intended to
keep the uninitiated from scripting web browsers; they are just the
obvious fundamental requirements for the task.

You're confusing "programming" javascript with "using" javascript.
You don't have to know how a car works to drive one, do you?
You don't have to understand the fundamentals of electronics or operating
systems to use a computer, do you?
At some point, the lower levels must be hidden from people operating at higher
levels. I do not think that it's unreasonable to be a web developer and want
javascript functionality without fully understanding everything needed to
make it work. If the javascript coder can sufficiently hide enough from you,
then you just need to deal with an interface, not with the implementation.

A developer can understand enough about javascript to know when it is
appropriate to use it, and how to degrade gracefully in case users don't
have it enabled - but NOT understand it enough to implement a popup div
which is positioned correctly in all browsers (even old ones) and interacts
with the user. There's no reason those details can't be hidden from the
person implementing the library.

For example, if a user wants to have an expandable tree structure, they can
use mine and give their <ul> structure a certain class, and instantly have a
tree implemented. They don't need to know how it works, and they don't need
to have any javascript knowledge. Yet it solves their problem in an elegant
and robust way. In cases where this is possible (obviously not every
situation can be this clean) then this is a great solution for users who
want extended javascript-based functionality, but don't have any programming
knowledge.
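
To give a rough idea of the pattern (this is only an illustrative sketch, not
the actual mktree code, and the class name is made up):

// Turn any <ul> marked with the made-up class into a collapsible tree, and do
// nothing at all in browsers that lack the required DOM support.
function makeTrees() {
  if (!document.getElementsByTagName) return; // unsupported browser: leave the HTML alone
  var lists = document.getElementsByTagName("ul");
  for (var i = 0; i < lists.length; i++) {
    if (lists[i].className == "collapsibleTree") {
      var items = lists[i].getElementsByTagName("li");
      for (var j = 0; j < items.length; j++) {
        items[j].onclick = toggleBranch;
      }
    }
  }
}

function toggleBranch(e) {
  var sub = this.getElementsByTagName("ul")[0];
  if (sub) {
    sub.style.display = (sub.style.display == "none") ? "" : "none";
  }
  // keep the click from toggling the enclosing branches as well
  if (e && e.stopPropagation) { e.stopPropagation(); }
  else if (window.event) { window.event.cancelBubble = true; }
}

window.onload = makeTrees;
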
Where do you think those small components come from? They initially get
created to handle aspects of bigger problems; once created, they get
re-used when another problem needs that same facility.

Do you make these functions available anywhere?
I've long been wanting to get together a solid, robust collection of very
low-level functions that perform certain tasks, which developers can then
either use directly in their pages or incorporate into higher-level scripts.
I don't like the idea of things like full APIs which handle events and
positioning and dhtml and all that - they are too bloated for my tastes.
But, if there was a single function which gave the position of an element,
for example, and it worked in every browser that could possibly be tested,
that would be a very valuable thing to share.

Typically, these low-level functions are re-written by a number of
developers for their own libraries, and very few of them are as complete and
robust as they could be if everyone combined their knowledge and talents.

If you have any ideas about how the users of this group could assemble such
a collection of developer tools, I'd like to hear it.
It is a far more complex task to implement in a truly general way: suddenly
you need to accommodate any possible presentation, any date range, interact
with arbitrary form controls and combinations of forms, deal with any
possible HTML structures and content, and so on. That leaves any general
solution bloated with code needed to handle the possibilities, most of
which will not apply to any actual application

Requirements change. Why re-code, when you could have handled the general
cases from the beginning?
Adding an additional 5k to a library to solve a number of general cases is a
_GOOD THING_. IMO.
would propose, they may spend 50 or more hours of coding and testing, and
spend a large amount of money to get exactly the same result -
50 hours? Not for someone who knew what they were doing.

Unless you have implemented a generalized popup date-picker (if you have,
where is it?), I don't think you understand.
I can see that it is accommodating Netscape 4 that results in that
script being the way it is. But table sorting can still easily fall back
to server-side sorting, so that approach would accommodate Netscape 4's
shortcomings, while the dynamic DOM browsers achieve your 80/20
criteria.

Not when the client says "we use netscape4, and want client-side table
sorting!" :)
 

Brian Genisio

Richard said:
Brian Genisio wrote:


<snip>

By any reasonable criteria when no problems remain a solution has been
achieved. The limitation is that it may be so difficult to identify
residual problems that it would be impossible to achieve certainty that
an outcome qualified as a solution.

That problem has a direct parallel in epistemology; it is impossible to
know when something (say a scientific theory) is true. But that doesn't
hinder the growth of knowledge, scientific or technical progress. It
doesn't hinder progress because while truth is an ideal concept and
cannot be attributed to knowledge, not-true can easily be identified. So
progress is achieved by the invention of new ideas and the elimination
of ideas identified as not-true. Resulting in a movement toward truth,
regardless of the theoretically unknowable nature of truth.

A similar state applies to "solution", it may not be possible to be
certain of one when you have it, and it may not actually be practical to
achieve it, but that doesn't prevent it from being a target that can be
moved towards.

An obvious strategy in moving towards an ideal solution would be to follow
the logic that allows progress towards truth, identifying and
eliminating anything that qualifies as not-a-solution.

So, in an Internet context, a proposition that removes the original
problem but introduces an alternative problem that was not inherent in
the system is not moving towards the elimination of problems from the
system; it hasn't even reduced the number of problems, just modified
one. Such a proposition can be identified as not-a-solution.

Just because a solution A' to problem A creates another problem B, it
does not mean that A' is not a solution to A. Problem B may be less
significant, and unrelated to problem A. Does this mean that A' is not
a solution to A?

For instance, if Problem A is that "It doesn't work", and Solution A' is
"Use X method", new problem B might be "It is somewhat slow
(inefficient)".

In this case, Problem B is not a critical problem (as viewed by the
requirements developers), where Problem A is. Now, assume that Solution
A" fixes problem A, and its new Problem C is solved by C': "An Admin
needs to log in every x hours to do something". Problem C can be
solved through automation, but a new problem D exists: "The system is
complicated, and difficult to maintain".

At this point, the only problem that exists any more is problem D.
Problem A was solved through A"xC', which is what causes problem D.

I know this is a long and boring example, but I am trying to illustrate
a practice that happens in out-the-door products on a regular basis...
Solving one problem, but creating a smaller, unrelated problem that is
manageable.

Ideally, there should be a final solution to A that will be a perfect
solution. This solution may be unrealistic in budget/schedule, and
concessions are made for the A"xC' solution.

Don't get me wrong, I absolutely understand the virtue in speaking
ideally, but I must say that there is a difference between a perfect
solution and a solution. You can talk all the "theory vs proof"
metaphors you want (which I agree with), but in a system like this, a
perfect solution is sometimes practically unattainable. (For example...
having a standards board change something that they don't want to change.)

I digress,
Brian
 

Brian Genisio

Matt said:
Requirements change. Why re-code, when you could have handled the general
cases from the beginning?
Adding an additional 5k to a library to solve a number of general cases is a
_GOOD THING_. IMO.

The only problem I see with that is that 5K of javascript code in
library form is compiled on the first pass of the JS interpreter,
whether it is executed or not. If you are only using one function from
that library, there is a lot of extra processing to include code that
is not being used.

Brian
 

Matt Kruse

Brian Genisio said:
The only problem I see with that is that 5K of javascript code in
library form is compiled on the first pass of the JS interpreter,
whether it is executed or not. If you are only using one function from
that library, there is a lot of extra processing to include code that
is not being used.

In theory, yes. But I've had pages with over 200k of javascript (literally -
and no, I didn't design them) which executed in 2 seconds. Computers are
very fast. Compiling 5k of javascript takes so little time that it becomes
completely negligible.

I'm not in the crowd of people who optimizes code so that, when executed in
a loop 10,000 times, it performs 1 second faster. I think that kind of
optimization is a huge waste of time, and more of a "fun" exercise in
programming rather than a practical one :)
 

Dr John Stockton

JRS: In article <[email protected]>, seen in news:comp.lang.javascript,
Matt Kruse wrote:

The only problem I see with that is that 5K of javascript code in
library form is compiled on the first pass of the JS interpreter,
whether it is executed or not. If you are only using one function from
that library, there is a lot of extra processing to include code that
is not being used.

Those on dial-up or radio links will prefer not to receive an
unnecessary 5K of code.
 

Brian Genisio

Matt said:
In theory, yes. But I've had pages with over 200k of javascript
(literally - and no, I didn't design them) which executed in 2
seconds. Computers are very fast. Compiling 5k of javascript
takes so little time that it becomes completely negligible.

This assumes you have a fast computer. I suppose it all depends on your
audience, but the average computer user these days does not have
anything better than a 300 MHz PC with 64 MB of memory. Many people
have much slower machines still.
I'm not in the crowd of people who optimizes code so that, when executed in
a loop 10,000 times, it performs 1 second faster. I think that kind of
optimization is a huge waste of time, and more of a "fun" exercise in
programming rather than a practical one :)

Hmmm... maybe you should. By always thinking about efficiency,
optimization is rarely necessary. Of course, 1 second faster seems
silly, but what if it is a 1 second improvement from something that takes
1.1 seconds? If you are improving from 2 seconds to 1 second, the
improvement is probably not good enough.

The "fun" exercises you are talking about change the speed by orders of
magnitude (not just linear changes). O(nln) is MUCH faster than O(n^2)
for instance. In the example of 1.1 seconds, versus 0.1 seconds for
10,000 items, what happens if you increase to 20,000 items? You are
likely looking at 2.2 vs 0.2 seconds. We are not talking about being 1
second slower, we are talking about being 1100% slower! And what about
the memory considerations? Memory optimizations are often as important
as speed. What happens if you are using up all of your systems memory?
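
As a made-up illustration of the kind of difference I mean (the data and
function names are hypothetical):

// O(n^2): find the items of list a that also appear in list b by comparing
// every pair of items.
function intersectSlow(a, b) {
  var result = [];
  for (var i = 0; i < a.length; i++) {
    for (var j = 0; j < b.length; j++) {
      if (a[i] === b[j]) { result.push(a[i]); break; }
    }
  }
  return result;
}

// Roughly O(n): build a lookup table from b first, then test membership
// directly instead of re-scanning b for every item of a.
function intersectFast(a, b) {
  var seen = {}, result = [];
  for (var i = 0; i < b.length; i++) { seen[b[i]] = true; }
  for (var j = 0; j < a.length; j++) {
    if (seen[a[j]] === true) { result.push(a[j]); }
  }
  return result;
}

// With 10,000 items in each list, the first version can do up to 100,000,000
// comparisons; the second does about 20,000 table operations.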

If you really think that speed optimizations are a huge waste of time,
then you have never written an application of any substance. I would
recommend _ALWAYS_ thinking about the correct algorithm to use for a
given situation. It will become so "second nature" that it will not
take any extra time to implement it. The result: efficiency every time.

I am not talking about theory here... I am not talking about "fun"
exercises. I am talking about real, out-the-door products used by real
customers. Efficiency always matters.

Ok, I will get off my high horse... It is just that I get sick of having
to find efficiency problems in code after the fact (from me, or from other
developers).

Have a good day,
Brian
 

Matt Kruse

Brian Genisio said:
This assumes you have a fast computer. I suppose it all depends on your
audience, but the average computer user these days does not have
anything better than a 300 MHz PC with 64 MB of memory. Many people
have much slower machines still.

Even so, a difference of 5k in library size is negligible. I have some
200 MHz machines here I can test with; I should run some comparisons, just to
see for sure what the difference is.
By always thinking about efficiency,
optimization is rarely necessary.

I agree, and I do consider efficiency (caching objects and references to
objects, etc). My point is, when a script runs fine and there are no
complaints, spending an hour to squeeze out .0001s better speed is not time
well spent.
Of course, 1 second faster seems
silly, but what if it is a 1 second improvement from something that takes
1.1 seconds?

Over 10,000 iterations?
That's a speed increase for a single iteration (typically) from .00011 seconds
to .00001 seconds, which is completely unnoticeable. If the code is
actually going to be executed 10,000 times, then that's a different story
(and probably a design flaw ;)

I doubt anything is going to actually be executed 10,000 times in
javascript. If it is, there's probably something wrong. But many people who
I see tweaking for speed increases actually have to run something thousands
of times in a loop to even measure the difference in execution time. If it's
only going to run once or twice, and you need to run it 10,000 times to see
a speed increase, then it's not a practical exercise.
If you really think that speed optimizations are a huge waste of time,
then you have never written an application of any substance.

I never said the former, and I certainly have done the latter!
My belief is that speed optimizations are not the best use of a developer's
time if the speed increase is so small as to not even be noticed, and if
a block of code needs to be executed 10,000 times in order to notice a 1
second speed increase, then the time spent optimizing could be better spent
somewhere else.
Ok, I will get off my high horse... It is just that I get sick of having
to find efficiency problems in code after the fact (from me, or from other
developers).

I'd much rather deal with slightly inefficient code than code that isn't
commented and is poorly designed to begin with. I'll gladly sacrifice .1
seconds of speed in exchange for code clarity :)
 

Brian Genisio

Matt said:
I never said the former, and I certainly have done the latter!
My belief is that speed optimizations are not the best use of a developer's
time if the speed increase is so small as to not even be noticed, and if
a block of code needs to be executed 10,000 times in order to notice a 1
second speed increase, then the time spent optimizing could be better spent
somewhere else.

Ok, I thought you were talking globally, as opposed to locally (to
Javascript programming). There are still things you can do in
Javascript to make it faster, but I agree... it is rare to do something
10,000 times.

Of course, in other languages, this type of thing happens all the time.
For instance, the web browser will do this often. Imagine a page with
three frames (4 DOM models in all), and each page is somewhat complex.
Since all attributes are nodes, the node tree can easily make it to
10,000 items. The tree algorithms must be efficient in this matter.

Database searches... could be billions of times.

Also, when I say 10,000 items, that doesn't mean 10,000 times that code
is run. It means a function being executed on 10,000 items. Though it
is rare that you want to run an algorithm on 10,000 items and have a worst
case that is better than 10,000 items. (There are not many O(c)
algorithms on n items out there.)

In my original post, I was saying that if you make something faster,
from 1.1 seconds to 1.0 seconds, this is not a speed increase. You are
likely only changing a constant in the Big O characterization. If,
instead, you can bring something from 1.1 seconds to 0.1 seconds... now
we're talking. This is likely orders of magnitude faster. More than
just a constant change.

I digress,
Brian
 

Matt Kruse

Dr John Stockton said:
Those on dial-up or radio links will prefer not to receive an
unnecessary 5K of code.

They may prefer not to receive an unnecessary 5k of site-wide CSS rules,
too, but do you recommend against using global CSS files? Would you rather
have a separate CSS file for every page on your site, containing only the
definitions required by that page?

Hell, there may be HTML pages with 1k of whitespace! Should everyone start
compressing all of their HTML source so as to not send unnecessary
whitespace characters???

If a user objects to an extra 5k of javascript code, they are surely
browsing without images on, and probably have javascript turned off to begin
with.

It's kind of a ridiculous argument you're trying to make, isn't it?
 

Richard Cornford

Matt Kruse wrote:
You're confusing "programming" javascript with "using" javascript.

Javascript is a programming language; using javascript is programming.
Using a program written in javascript might be considered distinct,
but there aren't that many programs written in javascript that can be
used without writing at least some additional code to control them.
You don't have to know how a car works to drive one, do you?

You cannot drive a car and know nothing about how it works, such as the
fact that it consumes fuel, oil, water, etc, as it operates. Where
understanding how they actually work becomes most valuable is when they
stop working properly.
You don't have to understand the fundamentals of electronics or
operating systems to use a computer, do you?

I have met one or two people who use computers without any understanding
of how they work. They seem to operate on the basis of inventing their
own superstitions about what the computer is doing; it isn't an approach
that allows them to be very productive.

I do not think that it's unreasonable to be a web
developer and want javascript functionality without fully
understanding everything needed to make it work.

A web developer should understand the issues surrounding the use of
javascript prior to using it, and consider those issues when specifying
3rd party scripts to be used. And if they intend to create javascript
themselves, then they are trying to be a programmer and should expect to
have to acquire suitable understanding.
If the javascript coder can
sufficiently hide enough from you, then you just need to
deal with an interface, not with the implementation.

That would depend a lot on the interface. It is certainly possible for
an HTML author to create a suitable HTML structure, give it an ID and
then pass that ID as a string parameter to a javascript function that
then handles everything else, including the degradation (as that would
be just not acting, leaving the HTML unmodified in the page). Such a
script could be used by someone with no javascript knowledge, just clear
documentation, and would be well suited to situations where a javascript
author was writing scripts to be used by HTML and server-script writing
colleagues who needed to do as little work as possible to deploy their
scripts.
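
A minimal sketch of that sort of interface (the function name, the id and the
trivial behaviour are all invented for the example):

// The HTML author writes a normal, working page and makes one call; on any
// browser that cannot support the script it simply does nothing, leaving the
// HTML exactly as it was.
function enhanceSection(id) {
  if (!document.getElementById) { return; }  // no DOM support: leave the page alone
  var el = document.getElementById(id);
  if (!el || !el.style) { return; }          // expected structure missing: do nothing
  el.style.cursor = "pointer";
  el.onclick = function () {
    el.style.backgroundColor = "#ffffcc";    // stand-in for the real enhancement
  };
}

enhanceSection("newsPanel");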

On the other hand, a library providing an interface as an API (or
requiring complex configuration details in the form of javascript
structures) needs considerably more understanding to be usefully
employed.
A developer can understand enough about javascript to know when it is
appropriate to use it, and how to degrade gracefully in case users
don't have it enabled - but NOT understand it enough to implement a
popup div which is positioned correctly in all browsers (even old
ones) and interacts with the user.

How would it be possible for a developer to not know enough to be able
to position a DIV and also know enough to be able to respond usefully
when a browser was not going to be able to position a DIV?
There's no reason those details
can't be hidden from the person implementing the library.

They could be; a script can be inherently cleanly degrading (by
manipulating structures defined in the HTML and not acting on browsers
that cannot support it), and a library could be written to flag its
inability to act usefully (or be queried on the subject), though that
still leaves the person employing such a library with the problem of
doing something useful in response.
For example, if a user wants to have an expandable tree structure,
they can use mine and give their <ul> structure a certain class, and
instantly have a tree implemented. They don't need to know how it
works, and they don't need to have any javascript knowledge. Yet it
solves their problem in an elegant and robust way.

Because that script is based on CSS and manipulating HTML defined
structures it is relatively robust. It is the type of script that is
easy to employ without much understanding of its mechanism. It is also
the type of script that is easy to cleanly degrade, because the list is
in the HTML and the script could detect browser support for the required
features and not act whenever they are not available. It does not
introduce a dependency on javascript.

However, I would not describe your implementation as cleanly degrading
because its response to some unsupporting environments may be to error
out (lacking much in the way of feature detection), though fortunately
before it has done anything to the HTML it is acting on, so the page would
remain usable. The worst it will do is show the user an error message
(generally not considered a good thing in itself).

Do you make these functions available anywhere?

Some.

... . But, if there was a single function which
gave the position of an element, for example, and it worked in every
browser that could possibly be tested, that would be a very valuable
thing to share.

Some browsers do not make any element positioning information available
(except maybe the old Netscape 4 info for A, IMG and layers), so a good
element position interface would also have to be able to signal its
inability to provide useful information.

But you still would not want a general method because a general method
might have to take into account possibilities such as an element being
contained within a scrolling DIV that was scrolled to some extent at the
time. That is a lot of extra work in doing the calculations, but would
not apply in most situations. A range of methods would be better, so the
one best suited to the situation of its use could be used.
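
For instance, one of the simpler variants might look something like this (a
sketch only, which ignores the scrolled-container case just mentioned):

// Walk the offsetParent chain to find a page position, and return null when
// the browser exposes no offset information at all, so the caller can tell
// "position unknown" apart from a genuine position of 0,0.
function getElementPosition(el) {
  if (!el || typeof el.offsetLeft != "number") {
    return null; // signal: no useful information available
  }
  var x = 0, y = 0;
  while (el) {
    x += el.offsetLeft;
    y += el.offsetTop;
    el = el.offsetParent;
  }
  return { x: x, y: y };
}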

If you have any ideas about how the users of this group could
assemble such a collection of developer tools, I'd like to hear it.

When contributors to this group provide detailed explanations, or
examples of cross browser scripts they often feature components that
could be usefully employed in broader contexts. Any sufficiently regular
reader of the group will be exposed to pretty much everything they are
likely to need (and then there are the archives).
Requirements change.
Why re-code, when you could have handled the
general cases from the beginning?

Requirements can change, but they may not, so equally: why code for the
generality when you have a specification to work from?

But in practice a changed requirement would only necessitate changing
parts of a script, probably just replacing a couple of functions (and
maybe just swapping them for others that already exist, maybe with a
little modification to suit).
Adding an additional 5k to a library to solve a number of
general cases is a _GOOD THING_. IMO.

As I said before, you have a theoretical 80k maximum window in which to
serve a page, preferably nearer 40. Every chunk taken needlessly by
script is eating away at the user's willingness to wait. 5k may not
sound like much, and if covering all of the potential changes in
requirements can be accommodated in 5k it may not swing the balance in
itself, but it could be 5k better spent.
Unless you have implemented a generalized popup date-picker (if you
have, where is it?), I don't think you understand.
<snip>

What is this about? I explain to you why I don't think libraries are
suited to browser scripting and you ask me where you can find libraries
that I have written. I explain to you why I don't think broad
generalised scripts are suited to browser scripting, but apparently I
cannot "understand" unless I have spent my time doing something that my
experience tells me is a fundamentally flawed approach.

OK, if you wanted a script that could interact with all of the various
types and combinations of form control to which a date selection
mechanism could be applied then maybe it would take 50 hours (there are
a lot of possibilities to cover). But in reality reasonable site design
would use a consistent style of form control (or control combination)
for the entering of dates wherever it was required (it would be bad UI
design to do otherwise), making accommodating all of the possible
permutations pointless and certainly reducing the task to considerably
less than 50 hours.

Richard.
 

Richard Cornford

Brian Genisio wrote:
I know this is a long and boring example, but I am trying to
illustrate a practice that happens in out-the-door products on a
regular basis... Solving one problem, but creating a smaller,
unrelated problem that is manageable.

It is reasonable to be pragmatic, but I can't see needlessly introducing
a javascript dependency as a smaller and manageable problem. It can only
look that way if you adopt a position of not caring.
Ideally, there should be a final solution to A that will be a perfect
solution. This solution may be unrealistic in budget/schedule, and
concessions are made for the A"xC' solution.
<snip>

Your example is rather frightening. I would not be happy to categorise
it as a solution at all, at least without the qualification "temporary".
Schedule constraints may necessitate it, but budget considerations are
never aided by an increased maintenance burden (that becomes a bit open
ended), and you know full well that problem D will manifest itself at
the worst moment possible.

Richard.
 

Brian Genisio

Richard said:
Brian Genisio wrote:



It is reasonable to be pragmatic, but I can't see needlessly introducing
a javascript dependency as a smaller and manageable problem. It can only
look that way if you adopt a position of not caring.

Are you talking about a solution in Javascript? Or a solution in
general? The scope of Javascript in a browser is very small compared to
the general practice of software development. It is very easy in
Javascript to say that a perfect solution exists for many problems you
encounter. This is because, for the most part, Javascript programming is
a trivial exercise in a small, discrete environment.

How about when your development solution spans multiple operating
systems and multiple languages?
<snip>

Your example is rather frightening. I would not be happy to categorise
it as a solution at all, at least without the qualification "temporary".
Schedule constraints may necessitate it, but budget considerations are
never aided by an increased maintenance burden (that becomes a bit open
ended), and you know full well that problem D will manifest itself at
the worst moment possible.

Richard.

I once worked on a system that integrated four operating systems on over
6 computers, ran software in about 12 different programming
languages, and communicated over 6 communication standards to come up
with a solution that worked well.

When it finally worked, there was one glaring problem... it was
extremely complex. The solution solved the problem, and did it well,
but was difficult to debug and had a steep learning curve. I am
convinced that this solution was as good as anyone could have come up
with.

This is an example of a real-world software project with thousands of
requirements, and it met every one. Is this not a solution?

Brian
 

Dr John Stockton

JRS: In article <[email protected]>, seen in news:comp.lang.javascript,
Matt Kruse wrote:
They may prefer not to receive an unnecessary 5k of site-wide CSS rules,
too, but do you recommend against using global CSS files? Would you rather
have a separate CSS file for every page on your site, containing only the
definitions required by that page?

If the nature of the site was such that I was likely to visit only one
page, then as a user I would naturally prefer only the definitions
needed on that page. But if I was likely to visit many pages, so that
definitions were multiply used, I would prefer collected definitions.

Hell, there may be HTML pages with 1k of whitespace! Should everyone start
compressing all of their HTML source so as to not send unnecessary
whitespace characters???

Yes, they should, at least for popular professional sites; and comments
should also be removed. It is in the interests of their readers, after
all. That assumes, of course, that the HTML is intended to be read only
by browsers and not by people.

There are two common classes of unnecessary whitespace: spaces at the
ends of lines, and indentation. The former can very easily be removed
automatically; removing the latter needs an understanding of <pre>, but
is equally trivial if it is known to be absent.
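
For illustration only, the sort of automatic treatment meant, expressed as
javascript string handling (a build step in any language would do equally
well; the second function must only be applied to markup known to contain no
<pre> sections):

// Remove spaces and tabs at the ends of lines.
function stripTrailingSpaces(html) {
  return html.replace(/[ \t]+(\r?\n)/g, "$1");
}

// Remove indentation at the starts of lines.
function stripIndentation(html) {
  return html.replace(/(\r?\n)[ \t]+/g, "$1");
}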

However, recognise that size reduction is most important for those on
slow links, which are likely to have hardware compression which will be
effective on leading whitespace. Code bloat is far more important.

If a user objects to an extra 5k of javascript code, they are surely
browsing without images on, and probably have javascript turned off to begin
with.

It's kind of a ridiculous argument you're trying to make, isn't it?

You need to be sensible about it. A large all-purpose routine of which
only a small part is likely to be used within a site is not sensible; it
is merely showing off on the part of its author.


Authors should recognise that there is a difference in the use of code
libraries between (Web) javascript and, say, Delphi. A Delphi
programmer can freely use many library units, because the units stay on
the local machine and only the needed parts go into the distributed EXE
(but needs to think a bit more when writing DLLs). But a Web author
with many library files available should be selective about which parts
he puts in Web pages or include files, and how they should be
distributed among those.

Full optimisation is impractical; but a little thought on such matters
should enable avoidance of full pessimisation.


H'mmm - there's another reason for removing redundancy; it diminishes
the load on the server. In particular, it diminishes the download for
an individual author. Large authors will pay directly for the amount of
service provided; small authors may have a fixed allowance, so that more
compact pages mean more readers.


Eschew surplusage!
 

Jim Ley

In theory, yes. But I've had pages with over 200k of javascript (literally -
and no, I didn't design them) which executed in 2 seconds. Computers are
very fast. Compiling 5k of javascript takes so little time that it becomes
completely negligible.

That's fast; the initial hit time for a current project of mine, which
involves 200 JS load, is around 14 seconds on a 2GHz P4 not under
load. It's not all compilation, but it's a significant part of it.

Was the 200k something particularly simple?

Jim.
 

Jim Ley

On Tue, 20 Apr 2004 08:37:13 -0500, "Matt Kruse" wrote:
I doubt anything is going to actually be executed 10,000 times in
javascript. If it is, there's probably something wrong.

You've never done something onmousemove, or used CSS properties with
JS? They are processed a lot... but yes, there's often no point. But
you're talking about seconds; even 1/10th of a second is often too
long in UIs, and users notice it.
I'd much rather deal with slightly inefficient code than code that isn't
commented and is poorly designed to begin with. I'll gladly sacrifice .1
seconds of speed in exchange for code clarity :)

I'd be amazed if your users would; a second is an age.

Jim.
 
