Trouble putting JavaScript in an anchor tag


laredotornado

Hi,

I'm trying to do something simple that is blowing my mind right now.
I'm on Firefox on Mac 10.6.3. I have this function ...

<script type="text/javascript">
  function delete(ruleId) {
    if (confirm("Are you sure you want to delete this rule?")) {
      location = '/sweeps/delete?id=' + ruleId;
    } // if
  } // ruleId
</script>

and then I have this link ...

<a href="javascript:var ret = delete(2);">Delete</a>

Clicking on the link does nothing (function isn't invoked and there
are no JS errors in the console). If I take out the "var ret =", the
browser attempts to load the JavaScript in the address bar and the
output is "true". What am I doing wrong?

Thanks, - Dave
 

Sean Kinsey

laredotornado said:
Hi,

I'm trying to do something simple that is blowing my mind right now.
I'm on Firefox on Mac 10.6.3.  I have this function ...

                <script type="text/javascript">
                        function delete(ruleId) {
                                if (confirm("Are you sure you want to delete this rule?")) {
                                        location = '/sweeps/delete?id=' + ruleId;
                                }      // if
                        }       // ruleId
                </script>

and then I have this link ...

<a href="javascript:var ret = delete(2);">Delete</a>

Clicking on the link does nothing (function isn't invoked and there
are no JS errors in the console).  If I take out the "var ret =", the
browser attempts to load the JavaScript in the address bar and the
output is "true".  What am I doing wrong?

Thanks, - Dave

use "javascript:void(delete(2))" to fix your issue.

BUT.. NEVER EVER EVER expose methods with side effects (create,
modify, delete) using GET (what you are doing).
There have been stories about web spiders that have caused havoc
because of this, and about unexpected behavior in applications due to
some browsers preloading URLs that they 'think' the user might
navigate to.
 

Thomas 'PointedEars' Lahn

laredotornado said:
<a href="javascript:var ret = delete(2);">Delete</a>

Clicking on the link does nothing (function isn't invoked and there
are no JS errors in the console). If I take out the "var ret =", the
browser attempts to load the JavaScript in the address bar and the
output is "true". What am I doing wrong?

`delete' is an operator, a reserved word, a keyword (ES5, 7.6.1.1). It can
never be the identifier of a function declaration (ES5, section 13; you
should have gotten a syntax error before). Since no object, including the
global object, has a `2' property to begin with, evaluation of the
/UnaryExpression/ does not result in a Reference, so the result is `true'
(ES5, section 11.4.1, step 2).

And read the FAQ on why you should not use `javascript:'.
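
For illustration, a minimal sketch of the conventional fix, assuming the
function is renamed (`deleteRule' is a name invented here) and the
`javascript:' pseudo-protocol is replaced by an event handler:

<script type="text/javascript">
  // `delete' is a reserved word, so the function needs another identifier
  function deleteRule(ruleId)
  {
    if (confirm("Are you sure you want to delete this rule?"))
    {
      location = '/sweeps/delete?id=' + ruleId;
    }
  }
</script>

<a href="/sweeps/delete?id=2"
   onclick="deleteRule(2); return false;">Delete</a>

The plain href remains as a fallback for script-less user agents, though
the objection to GET-triggered deletions raised later in this thread still
applies to it.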


PointedEars
 

Thomas 'PointedEars' Lahn

Thomas said:
`delete' is an operator, a reserved word, a keyword (ES5, 7.6.1.1). It
can never be the identifier of a function declaration (ES5, section 13;
you should have gotten a syntax error before). Since no object, including
the global object, has a `2' property to begin with, evaluation of the
/UnaryExpression/ does not result in a Reference, so the result is `true'
(ES5, section 11.4.1, step 2).

Sorry, the explanation is not quite correct. The reason is that `2' cannot
be produced by /MemberExpression/ or /Identifier/, which would result in a
Reference value. Instead, it can only be produced by
/DecimalIntegerLiteral/, through /DecimalLiteral/, through /NumericLiteral/,
through /Literal/, and the result of that is not a Reference value.

var o = {2: "foo"};
with (o)
{
delete 2;
}

/* "foo" */
console.log(o[2]);

with (o)
{
delete o[2];
}

/* undefined */
console.log(o[2]);

o = {a: "foo"};
with (o)
{
delete a;
}

/* undefined */
console.log(o.a);


PointedEars
 

Thomas 'PointedEars' Lahn

Sean said:
use "javascript:void(delete(2))" to fix your issue.

No, `delete' would still be parsed as the operator. Interestingly enough,
it turns out you can declare a function with the identifier `delete' in
Mozilla.org JavaScript 1.8.2 (no syntax error), but you cannot access it.
That you can declare it is a bug.
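
A brief sketch of why the suggested workaround cannot help; `delete' is
still parsed as the operator, so no function is ever called:

// what javascript:var ret = delete(2); actually evaluates
var ret = delete(2);  // (2) is merely a parenthesized operand
console.log(ret);     // true: deleting a non-Reference yields true

// wrapping the expression in void() changes the result, not the parse
console.log(void (delete(2)));  // undefined; still no function call
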
BUT.. NEVER EVER EVER expose methods with side effects (create,
modify, delete) using GET (what you are doing).
True.

There have been stories about web spiders that have caused havoc
because of this,

Those spiders should then be blocked as they would be FUBAR if they existed.
and about unexpected behavior in applications due to some browsers
preloading URLs that they 'think' the user might navigate to.

If that applied here, one could never use the `location' property in Web
applications. You are confusing this with URI-type element attributes, and
it is doubtful whether those browsers should not be considered buggy as well
in that case.

Stop spreading FUD.


PointedEars
 

Sean Kinsey

Thomas 'PointedEars' Lahn wrote:

Those spiders should then be blocked as they would be FUBAR if they existed.

If they existed? Are you questioning the existence of spiders/crawlers?
If that applied here, one could never use the `location' property in Web
applications.  You are confusing this with URI-type element attributes, and
it is doubtful whether those browsers should not be considered buggy as well
in that case.

I am not confused at all; I was referring to the concept of using GET
for operations with side effects, not whether they were accessed using
'location.href=foo' or using a standard anchor element.
And by the way, whether those browsers are 'buggy' or not has nothing
to do with the issue.
Stop spreading FUD.

You've got to be joking; should anyone really take a statement like that
coming from you seriously?
Come on...

Stop rambling.
 

Jeremy J Starcher

Sean said:
There have been stories about web spiders that have caused havoc because
of this,[*]

Thomas 'PointedEars' Lahn said:
Those spiders should then be blocked as they would be FUBAR if they
existed.

If I am understanding the above usage of "this" correctly, referring back
to spiders which have altered data by following links, there have been a
number of cases of spiders following links with side effects and
wiping out data.

(This account is anonymized)
http://thedailywtf.com/Articles/The_Spider_of_Doom.aspx


Things with side effects should be POSTed.

"The "get" method should be used when the form is idempotent (i.e.,
causes no side-effects). Many database searches have no visible side-
effects and make ideal applications for the "get" method.

If the service associated with the processing of a form causes side
effects (for example, if the form modifies a database or subscription to
a service), the "post" method should be used."[1]


[1] http://www.w3.org/TR/html401/interact/forms.html#submit-format
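
For illustration, a hedged sketch of the OP's delete link recast as a POST
form, assuming the server accepts the id as a POST parameter (the action
URL and parameter name are taken from the original example):

<form method="post" action="/sweeps/delete">
  <input type="hidden" name="id" value="2">
  <input type="submit" value="Delete"
    onclick="return confirm('Are you sure you want to delete this rule?');">
</form>

Returning false from the handler cancels the submission, so declining the
confirmation sends no request at all.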
 

Thomas 'PointedEars' Lahn

Jeremy said:
Thomas said:
Sean said:
There have been stories about web spiders that have caused havoc because
of this,[*]
Those spiders should then be blocked as they would be FUBAR if they
existed.

If I am understanding the above usage of "this" correctly, referring back
to spiders which have altered data by following links, there have been a
number of cases of spiders following links with side effects and
wiping out data.

So what? The solution for that is not to change your client-side code, but
to lock those spiders out, if they even still exist. Simpler still, use
script includes only for such code and prevent spiders from indexing them.
And fix server-side code that jumps to conclusions:

Have you even read that article? If Googlebot does not use cookies (i.e.
does not send them), it could not have been considered logged on, and could
not have wreaked havoc with the CMS, had the login test not been written
as ridiculously as

if ($cookieNotSet or $cookieSetToFalse)
{
  // logged on
}

instead of the proper

if ($cookieSet and $cookieSetToTrue)
{
  // ...
}

Things with side effects should be POSTed.

Yes, but for other reasons than suggested here. It's not broken spiders
but crackers that should be guarded against.


PointedEars
 

Thomas 'PointedEars' Lahn

Sean said:
If they existed? Are you questioning the existence of spiders/crawlers?

I am questioning whether spiders/crawlers this buggy would survive for a
considerable time on the Web, and so yes, whether they still exist, if they
ever existed, and whether they were the actual reason for the failure (and
not the buggy Web developer's code).
I am not confused at all; I was referring to the concept of using GET
for operations with side effects, not whether they were accessed using
'location.href=foo' or using a standard anchor element.

But that's the very point. A spider/crawler needs to support a minimum of
ES/JS+DOM to recognize such redirections for what they are. Name one.
And by the way, whether those browsers are 'buggy' or not has nothing
to do with the issue.

Yes, it has. Those browsers would not survive on the Web as nobody would
want to use them.


PointedEars
 

Thomas 'PointedEars' Lahn

Thomas said:
Yes, but for other reasons than suggested here. It's not broken spiders
but crackers that should be guarded against.

... and users hitting the Back button, of course.


PointedEars
 

Hamish Campbell

I am questioning whether spiders/crawlers this buggy would survive for a
considerable time on the Web, and so yes, whether they still exist, if they
ever existed, and whether they were the actual reason for the failure (and
not the buggy Web developer's code).

If a spider can break your site, the issue must be *caused* by buggy
code, but there are plenty of spiders that don't behave 'nicely'. The
New Zealand Web Harvest by the National Library for example. It
ignores robots.txt and traverses as many .nz pages as possible with
the aim of curating public sites as part of their responsibilities
under the National Library Act.
But that's the very point.  A spider/crawler needs to support a minimum of
ES/JS+DOM to recognize such redirections for what they are.  Name one.

Appeal to ignorance. Prove that you can't build a spider like so, and
that no-one has already done so.

Or crackers using automated tools (sometimes known as... spiders). The
circle is complete.

Yes, this is all moot... just code securely in the first place.
 

Thomas 'PointedEars' Lahn

Stefan said:
It's not the spiders who are at fault.

Yes, at least in part they are.
GET requests are supposed to be idempotent, meaning they don't change the
server state [in a significant way]. The first part is specified in the
HTTP specs;

No, that does _not_ follow from the Specifications (here, HTTP/1.1
[RFC 2616]):

| 9.1.2 Idempotent Methods
|
| Methods can also have the property of "idempotence" in that (aside
| from error or expiration issues) the side-effects of N > 0 identical
| requests is the same as for a single request. The methods GET, HEAD,
| PUT and DELETE share this property. [...]

That means the same GET request should have the same side effect no matter
how often it is made. Meaning that it CAN have side effects such as
deletion of data.
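
For illustration, a minimal sketch of that distinction, with hypothetical
handlers (both names are invented here):

var rules = {2: "some rule"};

// Idempotent: the state after N > 0 identical calls is the same as
// after one call, yet there clearly is a side effect.
function deleteRule(id)
{
  delete rules[id];
}

// Not idempotent: the resulting state depends on how many calls are made.
var counter = 0;
function hit()
{
  counter += 1;
}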
[...]
But that's the very point. A spider/crawler needs to support a minimum
of ES/JS+DOM to recognize such redirections for what they are. Name one.

GoogleBot.

You are not referring to
<http://googlewebmastercentral-de.blogspot.com/2009/12/bezahlte-links-in-javascript-code.html>,
are you?

Stefan said:
No it doesn't. You do _not_ allow GET requests to trigger |delete|s in a
web application, you use POST for that.

Re-read the discussion. You are preaching to the choir, misunderstanding
what the issue is.
[...]
You've seen the story on thedailywtf.com that Jeremy Starcher posted.

And had the Web developer writing the login check been competent, the
problem would never have occurred in the first place.
I remember a similar story (possibly even from the same site, but I can't
find it ATM), where a company's web developer couldn't explain why all
of their content mysteriously vanished every time their CEO logged in.
It turned out that the CEO was using the Alexa toolbar to monitor the
ranking of the company's website vs those of their competitors. Part of
what the Alexa toolbar did was to load and analyze every link on a page,
including

<a href="app?id=42&delete=1">[icon]</a>

The CEO was logged in, so the session cookie was sent along with the
request. That story incorrectly blamed Alexa for this ****-up, and their
"solution" was to uninstall the toolbar. Probably not a bad idea, but
the real WTF was to let GET requests trigger deletes.

... *without* authentication, yes. See?


PointedEars
 

Thomas 'PointedEars' Lahn

Ah, OK. I missed the login part.

Stefan said:
It turned out that the CEO was using the Alexa toolbar to monitor the
ranking of the company's website vs those of their competitors. Part of
what the Alexa toolbar did was to load and analyze every link on a page,
including

<a href="app?id=42&delete=1">[icon]</a>

The CEO was logged in, so the session cookie was sent along with the
request. That story incorrectly blamed Alexa for this ****-up, and their
"solution" was to uninstall the toolbar. Probably not a bad idea,

Although an update might have sufficed.

Still, I don't think the Alexa toolbar should have sent the login cookie in
this case. So it was Alexa's fault. And Alexa is not really a browser; I
maintain that any browser that sent authentication information while
prefetching resources is FUBAR and would not/should not survive.


PointedEars
 
