Looking for a code snippet


January Weiner

Hi,

I am looking for a simple JavaScript functionality. Before I try to
reinvent the wheel, maybe someone will point me directly to an existing
code snippet.

I need a message for the user that is displayed while the page is loading;
this message should be updated from time to time (e.g. "computing the
matrix, 80% ready").

I assume this could be done using XMLHttpRequest() -- load the outline of
the page first, displaying the first message, and then regularly ask
the server for news on our computation and feed them into the right place of
the document. (A similar solution w/o Javascript -- reload the page every X
seconds until finished, each time displaying a different message.)

Otherwise, is there a possibility to execute Javascript while the page is
being loaded? This would make the coding on the CGI side easier.

Thanks in advance,

January
 

Thomas 'PointedEars' Lahn

January said:
I need a message for the user that is displayed while the page is loading;
this message should be updated from time to time (e.g. "computing the
matrix, 80% ready").

Why would you need that, given that the UA has a progress display already?
Why are you transmitting message bodies this large in the first place?
I assume this could be done using XMLHttpRequest() -- load the outline of
the page first, displaying the first message, and then regularly ask
the server for news on our computation and feed them into the right place of
the document. (A similar solution w/o Javascript -- reload the page every X
seconds until finished, each time displaying a different message.)

Reads like a really bad idea. The first approach would depend on JS/ES, DOM
and XHR support. The second approach would take infinite time in the worst
case as reloading always reloads the entire document from the start.
Otherwise, is there a possibility to execute Javascript while the page is
being loaded?

Of course there is. You can put the `script' element within your `body'
element at your convenience, provided the result is still Valid markup.
And with strict feature testing you may even attempt to create and insert
elements then.
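A minimal sketch of the latter (untested; the `status' ID and the message
text are placeholders of mine):

<script type="text/javascript">
  if (document.createElement && document.createTextNode
      && document.body && document.body.appendChild)
  {
    var d = document.createElement("div");
    if (d)
    {
      d.id = "status";
      d.appendChild(document.createTextNode("Please wait ..."));
      document.body.appendChild(d);
    }
  }
</script>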

But again, I don't see the point of all of this. You should redesign your
application so that it outputs data in smaller chunks.
This would make the coding on the CGI side easier.

You are not making much sense. Maybe you are only looking for a simple wait
message on form submit, something like what Bugzilla does. For example:

https://bugzilla.mozilla.org/buglis...e+desc&bug_status=__open__&content=javascript


PointedEars
 

January Weiner

Thomas 'PointedEars' Lahn said:
Why would you need that, given that the UA has a progress display already?
Why are you transmitting message bodies this large in the first place?

I think that is because that is what is expected by the user. This is how
things are in my field (bioinformatics). You submit a query (e.g. a
protein sequence), then a complex algorithm on the server side does
something that takes a long time (e.g. a BLAST search), and then the user gets
e.g. 100 results. The user (and that includes me) expects one page
with all results, which you can search using Ctrl-F or print out.

Cf. one of the most popular examples of such search engines, the BLAST web
site: www.ncbi.nlm.nih.gov/blast/

Apart from that -- it is not even the size of the loaded HTML, it just
takes a long time to do the actual algorithm behind the scenes.
Reads like a really bad idea. The first approach would depend on JS/ES, DOM
and XHR support. The second approach would take infinite time in the worst
case as reloading always reloads the entire document from the start.

Interestingly, this is how many of the aforementioned websites work -- and
they seem to be quite popular. Not that it makes them correct, mind you.
However, I solved the problem exactly like you suggested:
Of course there is. You can put the `script' element within your `body'
element at your convenience, provided the result is still Valid markup.
And with strict feature testing you may even attempt to create and insert
elements then.

I did not know that the JavaScript is guaranteed to be executed
immediately as it is read by the browser (I thought you could run into
problems like the browser waiting until certain tags are closed, etc.). But
it seems to work quite OK.
But again, I don't see the point of all of this. You should redesign your
application so that it outputs data in smaller chunks.

I am sure that this could be done somehow. However, I have not the
faintest idea how. I guess I could output the search results e.g. ten at a
time with small "next >" and "< prev" buttons, but that would only make
everyone angry: you would like to print out the whole thing or to be able
to search through the report for whatever interests you. And, additionally,
it would make my application stand out by having an exotic behaviour.
You are not making much sense. Maybe you are only looking for a simple wait
message on form submit, something like what Bugzilla does. For example:

Nope. Sorry, I appear to be a little confused; I am really more at home
with C and Perl. What I mean is: if I do it via automatic page reloading,
this is the way to do it (I know it works like that on several websites and
I have done it myself a few times):

1) the CGI that does the search receives the parameters, forks. One
process shows the "please wait" HTML and makes the page reload itself
automatically. The other part detaches itself and runs the search
program, producing some sockets / files / whatever for IPC.
2) when the CGI is called with a job id (by the reloading page), it does
an IPC / lookup of job-specific files to find out what is happening
with the running job. It either finds that the script is still running
and displays the status information (and forces the page to reload
again) or displays the final output.

For this, one needs a more complex CGI. The way I do it now is much
simpler: every now and then the CGI sends out JavaScript code
that updates the status message and, when it is finished, removes it. No
need to fork, detach, redirect stdin/stdout/stderr, or do IPC.

Cheers,
j.
 

Thomas 'PointedEars' Lahn

January said:
I think that is because that is what is expected by the user. This is how
things are in my field (bioinformatics). You submit a query (e.g. a
protein sequence), then a complex algorithm on the server side does
something that takes a long time (e.g. a BLAST search), and then the user gets
e.g. 100 results. The user (and that includes me) expects one page
with all results, which you can search using Ctrl-F or print out.

Nevertheless, what would it help the user to know that 80% of the results
were already output, if the UA did not already provide that figure through its
progress bar/display? They would have to wait for 100% anyway.
Cf. one of the most popular examples of such search engines, the BLAST web
site: www.ncbi.nlm.nih.gov/blast/

Since bioinformatics does not need to be a field a potential reader is
specialized in (and in the case of this reader it isn't), they would not be
able to make a reasonable and comparable query unless told what to input there.
Apart from that -- it is not even the size of the loaded HTML, it just
takes a long time to do the actual algorithm behind the scenes.

I see.
Interestingly, this is how many of the aforementioned websites work --

No, certainly they don't. It would seem you use a definition of "reloading"
that is not in accordance with the generally accepted definition.
[...] What I mean is: if I do it via automatic page reloading,
this is the way to do it (I know it works like that on several websites and
I have done it myself a few times):

1) the CGI that does the search receives the parameters, forks. One
process shows the "please wait" HTML and makes the page reload itself
automatically. The other part detaches itself and runs the search
program, producing some sockets / files / whatever for IPC.
2) when the CGI is called with a job id (by the reloading page), it does
an IPC / lookup of job-specific files to find out what is happening
with the running job. It either finds that the script is still running
and displays the status information (and forces the page to reload
again) or displays the final output.

For this, one needs a more complex CGI.

That is probably true. However, a previous XHR may serve to display the
progress as well. What would be required then is another HTTP connection
and another resource that yields the percentage of the processing; the
latter could be a publicly available text file that is written occasionally
by the searching CGI script.
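A rough sketch of the client side of that (untested; the resource path,
element ID, and interval are placeholders of mine):

<script type="text/javascript">
  function createRequest()
  {
    if (typeof XMLHttpRequest != "undefined")
    {
      return new XMLHttpRequest();
    }
    else if (typeof ActiveXObject != "undefined")
    {
      return new ActiveXObject("Microsoft.XMLHTTP");
    }

    return null;
  }

  function pollProgress()
  {
    var x = createRequest();
    if (!x || !document.getElementById) return;

    x.onreadystatechange = function()
    {
      if (x.readyState == 4 && x.status == 200)
      {
        // the progress file contains e.g. "80"
        var o = document.getElementById("details");
        if (o && typeof o.innerHTML != "undefined")
        {
          o.innerHTML = x.responseText + " % done";
        }

        // poll again until the job reports completion
        if (x.responseText != "100")
        {
          window.setTimeout(pollProgress, 2000);
        }
      }
    };

    x.open("GET", "/output/jobid-progress.txt", true);
    x.send(null);
  }
</script>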
The way I do it now is much simpler: every now and then the CGI
sends out JavaScript code that updates the status message and, when it
is finished, removes it. No need to fork, detach, redirect
stdin/stdout/stderr, or do IPC.

However, as you have stated, in this case displaying the content would
constitute only a small fraction of the entire process and so the XHR
approach would be more reasonable. The current approach could then
serve as a fallback, though.


PointedEars
 

January Weiner

Nevertheless, what would it help the user to know that 80% of the results
were already output, if the UA did not already provide that figure through its
progress bar/display? They would have to wait for 100% anyway.

Umm, I might have confused an issue or two. My whole point is to inform
the user at what stage the submitted job is and how long it is going to
take. Note that the time to run the job on the server is much longer than
the time that is required to actually put together the HTML results page
(which happens as the very last stage of the job and takes just a moment).
Since bioinformatics does not need to be a field a potential reader is
specialized in (and in the case of this reader it isn't), they would not be
able to make a reasonable and comparable query unless told what to input there.

That is not correct. The tools are used by biologists in general. And
biologists tend to be on the conservative side when it comes to UI. I know
what I am talking about, I am one by training :)
No, certainly they don't. It would seem you use a definition of "reloading"
that is not in accordance with the generally accepted definition.

What I _mean_ is: call the same URL again (using e.g.
HTTP-EQUIV="refresh"), which is the address of a CGI with the job ID as a
parameter. The CGI then repeatedly checks what has happened to the job in
question, and renders a page with the respective information for the user
or with the actual job output.
That is probably true. However, a previous XHR may serve to display the
progress as well. What would be required then is another HTTP connection
and another resource that yields the percentage of the processing; the
latter could be a publicly available text file that is written occasionally
by the searching CGI script.

Precisely. This is what I outlined. This requires having more than one
script, coordination between the process that runs the actual job and the
CGI wrapper, etc. Been there, done that; it is not hard, but if I can
achieve it with a JS-driven progress bar (as I do now), all the better.

Just for reference (and hopefully criticism), this is the way I do it
now:

1) the body tag looks like:
<body onload="document.getElementById('loading').style.display='none'">

2) just behind that I have the following code:
<script type="text/javascript">
document.write(
'<div id="loading" class="progress">Progress:\
<br><div id="status">Loading... please wait</div>\
<br><div id="details"></div></div>')
</script>

[sorry, I do not know how to break a string correctly in JS; the
backslashes before newlines here are shell-style and I just included them
here to format the posting correctly]

The class progress is defined in a CSS file as follows:

.progress {
  position: absolute;
  left: 30%;
  width: 40%;
  top: 20%;
  color: white;
  background-color: crimson;
  padding: 10px;
  margin: 10px;
  border-style: groove;
  border-width: thick;
  border-color: black;
}


3) throughout the document I have statements like
<script type="text/javascript">
document.getElementById('status').innerHTML='Harvesting sequences:';
</script>

or

<script type="text/javascript">
document.getElementById('details').innerHTML='50 % done';
</script>

This way, I achieve the following:
1) no weird things appear if JS is switched off -- just no progress update
is displayed
2) whether my script dies or not, the red progress box is turned off as
soon as the body element close tag is read
3) the progress information is easy to control from within the CGI script.
However, as you have stated, in this case displaying the content would
constitute only a small fraction of the entire process and so the XHR
approach would be more reasonable. The current approach could then
serve as a fallback, though.

Yep, but the CGI sends out chunks of HTML the whole time it is working.

Cheers,

j.
 

Thomas 'PointedEars' Lahn

January said:
Umm, I might have confused an issue or two. My whole point is to inform
the user at what stage the submitted job is and how long it is going to
take. Note that the time to run the job on the server is much longer than
the time that is required to actually put together the HTML results page
(which happens as the very last stage of the job and takes just a moment).

Hmmm. How long it is going to take is not much different from showing a
percentage value.
That is not correct. The tools are used by biologists in general. And
biologists tend to be on the conservative side when it comes to UI. I know
what I am talking about, I am one by training :)

What I meant was that you cannot expect people *here* to know how to use
that particular Web application properly, and so to have a chance to observe
the behavior that you desire.
What I _mean_ is: call the same URL again (using e.g.
HTTP-EQUIV="refresh"), which is the address of a CGI with the job ID as a
parameter. The CGI then repeatedly checks what has happened to the job in
question, and renders a page with the respective information for the user
or with the actual job output.

ACK. I did not expect the application to hold back its results until it
is completely finished. Quick access to served content is the reason why
progressive rendering exists.
Precisely. This is what I outlined. This requires having more than one
script,

It requires one client-side and one server-side script at the minimum.
Which is pretty much what is required already.
coordination between the process that runs the actual job and the
CGI wrapper, etc.

But you would need that anyway if the message for the user was not simply a
wait message but included, say, a progress percentage display. So instead of
writing chunks of client-side script code it would equally be possible to
write a text file that was accessible by a client-side script through
another HTTP request.
Been there, done that; it is not hard, but if I can achieve it with a
JS-driven progress bar (as I do now), all the better.

The progress bar would be there as well but you would not need to write out
chunks of script code for that.
Just for reference (and hopefully criticism), this is the way I do it
now:

1) the body tag looks like:
<body onload="document.getElementById('loading').style.display='none'">

That assumes that if an element can be written with document.write()
(which originates from DOM Level 0), DOM Level 2 methods and proprietary CSS
scripting would be available as well. You should at least test all used features
at runtime so that the message is not written if they are not supported:

http://www.jibbering.com/faq/faq_notes/not_browser_detect.html
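For example, the hiding could then read (only a sketch; `hideLoading' is a
name of mine):

<script type="text/javascript">
  function hideLoading()
  {
    if (document.getElementById)
    {
      var o = document.getElementById("loading");
      if (o && o.style)
      {
        o.style.display = "none";
      }
    }
  }
</script>

<body onload="hideLoading();">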
2) just behind that I have the following code:
<script type="text/javascript">
document.write(
'<div id="loading" class="progress">Progress:\
<br><div id="status">Loading... please wait</div>\
<br><div id="details"></div></div>')

Omit the unnecessary `br' elements; you have block-level (div) elements
already. You also have to escape ETAGO delimiters (`</') within CDATA
content, in ECMAScript implementations preferably with `<\/'.
</script>

[sorry, I do not know how to break a string correctly in JS; the
backslashes before newlines here are shell-style and I just included them
here to format the posting correctly]

'...'
+ '...'

or

new Array(
'...',
'...'
).join("");

or

[
'...',
'...'
].join("");

See http://PointedEars.de/es-matrix for information about the level of
support for each feature.
[...]
3) throughout the document I have statements like
<script type="text/javascript">
document.getElementById('status').innerHTML='Harvesting sequences:';
</script>

or

<script type="text/javascript">
document.getElementById('details').innerHTML='50 % done';
</script>

Bad idea. D::gEBI() is standardized (DOM Level 1+), `innerHTML' is not.
At least you should do feature tests at runtime instead of those reference
worms.
This way, I achieve the following:
1) no weird things appear if JS is switched off -- just no progress update
is displayed

But there are not only the binary states of client-side script support being
switched on and off (or rather be available or not); different levels of
script support and support for the required interfaces have to be taken into
account as well.
2) whether my script dies or not, the red progress box is turned off as
soon as the body element close tag is read

Only hopefully, as you perform no runtime feature tests so far.
3) the progress information is easy to control from within the CGI script.

True.


PointedEars
 
 

January Weiner

Hmmm. How long it is going to take is not much different from showing a
percentage value.

I thought you meant the percentage of the page that was loaded.
What I meant was that you cannot expect people *here* to know how to use
that particular Web application properly, and so to have a chance to observe
the behavior that you desire.

Oops, sorry, my fault.
ACK. I did not expect the application to hold back its results until it
is completely finished. Quick access to served content is the reason why
progressive rendering exists.

Problem is: "search" here is not an SQL query. It is mostly a
sophisticated sequence comparison algorithm related to Needleman-Wunsch.
After you have searched the database (that is, compared each db sequence
with the query), you must sort the results, compute statistics and decide
which results (out of the many thousands) should be shown to the user.
Then, for these results, you fetch some more information (e.g. external links to
protein databases, annotation information etc.) and render the page.
It requires one client-side and one server-side script at the minimum.
Which is pretty much what is required already.
[snip]

But you would need that anyway if the message for the user was not simply a
wait message but included, say, a progress percentage display. So instead of
writing chunks of client-side script code it would equally be possible to
write a text file that was accessible by a client-side script through
another HTTP request.

OK, let me see if I understand: the client script makes asynchronous
requests to find out what is happening to the job on the other side, and
accordingly updates the page that the user is viewing (e.g. it looks for a
link called "..../output/jobid-message.txt" and, depending on its contents,
it either displays the message, e.g. "still running calculations", or
fetches the formatted results from the location given in that message).

This is essentially what I was always doing, albeit my scripts were both
server-side (and the communication between them was slightly more
sophisticated, but also simple enough). The client had only to refresh the
URL at regular intervals.
The progress bar would be there as well but you would not need to write out
chunks of script code for that.

[...]

[...]

Thanks for all the useful JavaScript comments! I have read the links that
you mentioned. I had NO IDEA how complex these issues are and that
there is so much difference between the browsers. The web site on testing
the client just scared the hell out of me. I think I'll stick to my Perlish
CGIs for now :-(( I already have a quite complex system to watch over --
CGIs in Perl, some inline C code, C programs, SQL queries and servers;
adding this level of JavaScript complexity on top of it would make
me go nuts in no time.
Bad idea. D::gEBI() is standardized (DOM Level 1+), `innerHTML' is not.

What would be a better idea? (google,google) Oh, I see:
http://slayeroffice.com/articles/innerHTML_alternatives/

Oh well.
At least you should do feature tests at runtime instead of those reference
worms.

What are "reference worms"?

[...]

Once again, thanks for all the remarks!

Cheers,

j.
 

Thomas 'PointedEars' Lahn

January said:
I thought you meant the percentage of the page that was loaded.

A possibility that AIUI would not apply in your case.
ACK. I did not expect the application to hold back its results until
it is completely finished. Quick access to served content is the
reason why progressive rendering exists.

Problem is: "search" here is not an SQL query. [...]

ACK
[Use XHR with refresh only as fallback]

OK, let me see if I understand: the client script makes asynchronous
requests to find out what is happening to the job on the other side, and
accordingly updates the page that the user is viewing (e.g. it looks for a
link called "..../output/jobid-message.txt" and, depending on its
contents, it either displays the message, e.g. "still running
calculations", or fetches the formatted results from the location given in
that message).

Exactly.

This is essentially what I was always doing, albeit my scripts were both
server-side (and the communication between them was slightly more
sophisticated, but also simple enough). The client had only to refresh
the URL at regular intervals.

With XHR you would need to refresh the whole page only if XHR either was not
supported, or if it was supported and the process result was ready for output.
[...] Thanks for all the useful JavaScript comments! I have read the
links that you mentioned. I had NO IDEA how complex these issues
are and that there is so much difference between the browsers. The web
site on testing the client just scared the hell out of me.

Don't be :)
I think I'll stick to my Perlish CGIs for now :-(( I already have a quite
complex system to watch over -- CGIs in Perl, some inline C code, C
programs, SQL queries and servers; adding this level of JavaScript
complexity on top of it would make me go nuts in no time.

You really should take the time to look into it; it is not that complicated
(and you would find good help here), and an implementation would probably
make your site stand out in your field. Two of the better XHR tutorials are:

http://jibbering.com/2002/4/httprequest.html
http://developer.mozilla.org/en/docs/AJAX:Getting_Started
What would be a better idea? (google,google) Oh, I see:
http://slayeroffice.com/articles/innerHTML_alternatives/

Oh well.

It would seem you misunderstood me. Even though the recommendations you
found look good, I meant that it was a bad idea to use those features
together *untested*:
What are "reference worms"?

It is a slang term I coined some time ago for longer property accesses that
often lack feature tests. For example,

document

is a(n object) reference, as are

document.getElementById

and

document.getElementById('details')

and

document.getElementById('details').innerHTML

However, say for some reason

document.getElementById('details')

yielded `null', `undefined' or in any other way not an object reference,

document.getElementById('details').innerHTML

would cause a runtime error (e.g. `undefined has no properties').
That last line is a reference worm that should be avoided.

You can also find repeated reference worms like

document.forms[...].elements[...].value
document.forms[...].elements[...].disabled
...

where a simple

o.value
o.disabled
...

after

var o = document.forms[...].elements[...];

would have sufficed. In this case, passing the reference to the form
object would have been even more efficient, leading to something like

function submitHandler(f)
{
  var es = f.elements;
  if (es["..."].value == ...
      && es["..."].value == ...)
  {
    return false;
  }

  return true;
}
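Applied to your progress snippets, a feature-tested helper could look like
this (only a sketch; `setStatus' is a name of mine):

function setStatus(id, text)
{
  // test the features before using them
  if (document.getElementById)
  {
    var o = document.getElementById(id);
    if (o && typeof o.innerHTML != "undefined")
    {
      o.innerHTML = text;
    }
  }
}

The CGI would then only need to emit calls like
setStatus('details', '50 % done').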
Once again, thanks for all the remarks!

You're welcome.


Regards,

PointedEars
 

Ian Hobson

January said:
Hi,

I am looking for a simple JavaScript functionality. Before I try to
reinvent the wheel, maybe someone will point me directly to an existing
code snippet.

I need a message for the user that is displayed while the page is loading;
this message should be updated from time to time (e.g. "computing the
matrix, 80% ready").

I assume this could be done using XMLHttpRequest() -- load the outline of
the page first, displaying the first message, and then regularly ask
the server for news on our computation and feed them into the right place of
the document. (A similar solution w/o Javascript -- reload the page every X
seconds until finished, each time displaying a different message.)

Otherwise, is there a possibility to execute Javascript while the page is
being loaded? This would make the coding on the CGI side easier.

Hi January,

If I can start a side thread....

I suspect you might run into a problem with the server. Most web servers
are configured with timeouts that squash any thread that takes too long
to produce results; 300 seconds is common. This protects the (shared)
server from a runaway process.

Your server has to start a separate program to do the work: one that is
NOT controlled by the web server's self-protection mechanisms.

This means the web server and the worker process are only loosely
coupled. The obvious link is for the worker to log its progress to a
file that the server can read. You won't get the current status, only
the last logged point.

We don't know how long the process will take - that depends upon
workload, and other factors we can't control. Therefore the browser will
have to ask for many status reports, and the server will have to acquire
them without stalling.

Conclusion - no solution based upon showing a page as it is received
will work - the thread generating the page will take too long and get
killed.

Because of the time-out protection, the status update *has* to be
initiated by the browser, so that the browser can do the bulk of the
waiting. The on-load event of a status page can set a timeout ending
with a refresh.

You could refresh the whole page, or use AJAX methods to simply get the
latest information, and update the current page locally. Which you
choose is really a matter of taste.

You can refresh the whole page with

onload="window.setTimeout('location.refresh',10000);"

The approach of starting a separate job will have the benefit that if
your browser crashes or the connection is lost, the processing will continue to
completion. If the results are cached to disk, the user can reconnect and
view them. They will not have been lost.

Regards

Ian
 

Thomas 'PointedEars' Lahn

Ian said:
You could refresh the whole page, or use AJAX methods to simply get the
latest information, and update the current page locally. Which you
choose is really a matter of taste.

It isn't, but instead it is a matter of competence (a competent author
wants to reach as many users as possible under reasonable circumstances),
and it should not be an either-this-or-that design decision.
You can refresh the whole page with

onload="window.setTimeout('location.refresh',10000);"

That does nothing but evaluate `undefined' every 10 seconds. Should be:

onload="window.setTimeout('location.reload(true)', 10000);"

But this should be used in addition to meta[http-equiv="refresh"] in case
that `meta' element is not supported (or support for it is disabled, as
possible e.g. in Opera) but script support is. If XHR is to be employed,
too, then this should run only if XHR support was not available.
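A sketch of that layering (the interval is only an example):

<meta http-equiv="refresh" content="5">
...
<body onload="if (!window.XMLHttpRequest && !window.ActiveXObject)
                window.setTimeout('location.reload(true)', 5000);">

That way the script-driven reload runs only where XHR is unavailable, and
the `meta' element covers clients without script support.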

Speaking of which, cache-controlling headers will have to be sent in order
for the meta element-based refresh to work properly.

An interval of 10 seconds is probably too large for this application.


PointedEars
 

January Weiner

Ian Hobson said:
I suspect you might run into a problem with the server. Most web servers
are configured with timeouts that squash any thread that takes too long
to produce results; 300 seconds is common. This protects the (shared)
server from a runaway process.

Nope. Not mine :) Of course I have changed the defaults to suit the jobs
that run for quite a long time. Doing so has a pleasant side effect: I
actually do want to kill off jobs that run longer than, say, fifteen
minutes. If the job is not controlled by the server, then I need to set
quotas on my system, and user-specific quotas on top of that (since other
jobs on the system are allowed to run for a longer time), or build some
functionality like this into the job script (extra work again).
Your server has to start a separate program to do the work: one that is
NOT controlled by the web server's self-protection mechanisms.

As I said somewhere -- that is how I usually do it, but I was just
exploring a new, "neater" / more transparent / cool modern
JavaScriptish way. Although I do not know much about JS, I am quite
fluent in Perl.
This means the web server and the worker process are only loosely
coupled. The obvious link is for the worker to log its progress to a
file that the server can read. You won't get the current status, only
the last logged point.

...or do any other IPC. Yes, that is the way something like that
is usually done.

[snip]
You could refresh the whole page

Yep, this is how I usually did that.
, or use AJAX methods to simply get the
latest information, and update the current page locally. [...]

...as PointedEars suggested.
onload="window.setTimeout('location.refresh',10000);"

or with http refresh, so that non-JS browsers work as well (if I understand
correctly).
The approach of starting a separate job will have the benefit that if
your browser crashes or the connection is lost, the processing will continue to
completion. If the results are cached to disk, the user can reconnect and
view them. They will not have been lost.

That, of course, is a very good point.

Regards,

j.
 

Ian Hobson

Thomas said:
Ian Hobson wrote:

That does nothing but evaluate `undefined' every 10 seconds. Should be:

onload="window.setTimeout('location.reload(true)', 10000);"

Oops. Thanks for the correction.

Memory error :(

Ian
 
