Numeric value problems


Michael Wojcik

Scott said:
Have you? I guess for languages like C/C++ or Java, or other
languages meant for heavy-weight lifting, this makes a good deal of
sense.

A standard peephole algebraic-simplification optimization, often
implemented even in toy compilers. There's a short discussion in the
"dragon book"[1] (section 9.10), and something similar should be in
any general text on compiler design.
I've never run across it before.

Really? Have you worked on compilers? (That's an honest question -
it's not like every good programmer has had occasion to write a compiler.)
Have you seen it for dynamic
languages? I imagine "eval" might erect barriers to many
optimizations that might otherwise be possible.

To some optimizations, yes, but typically not to single-statement and
peephole ones. Those are applied to the parse tree, so they happen
after any manipulation of the statement in question has occurred.

So, for example, if the source contains:

var foo = bar * 1;

there's no reason why the ES interpreter can't eliminate the
multiplication after parsing that statement - no eval is going to
affect the tree after it's been parsed. And similarly, if the code
contains:

var baz = "bar";
var foo = eval(baz + " * 1");

the string concatenation will happen first, then the string "bar * 1"
will be parsed (by eval), and optimizations can then be performed on
that parse tree. Nothing can change the string once it's passed to
eval - even in some future multithreaded ES implementation - so the
dynamic features of the language don't impede optimization at that point.
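A pass like that fits in a few lines. Here is a minimal sketch in JavaScript; the node shapes are invented for illustration and don't match any real engine's AST:

```javascript
// Peephole algebraic simplification on a toy parse tree.
// Hypothetical node shapes: { op, left, right } for operators,
// { num: value } for number literals, { name: id } for identifiers.
function simplify(node) {
  if (!node || !node.op) return node;            // leaf: literal or identifier
  var left = simplify(node.left);
  var right = simplify(node.right);
  if (node.op === '*') {
    if (left.num === 1) return right;            // 1 * x  =>  x
    if (right.num === 1) return left;            // x * 1  =>  x
  }
  return { op: node.op, left: left, right: right };
}

// "bar * 1" parses to this tree; the multiply disappears entirely.
var tree = { op: '*', left: { name: 'bar' }, right: { num: 1 } };
var simplified = simplify(tree);
```

The same walk is where other identities (x + 0, x * 0, and so on) would be pattern-matched.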


Where dynamic languages interfere with optimization is in larger
transformations. For example, inlining short functions is complicated
if those functions can be redefined at runtime.

There's a lot of literature on optimizing the LISP and ML language
families. I've seen some fairly dramatic optimization transformations
for Scheme (a LISP variant with first-class functions), for example,
including a general translation (ie, one proven to be correct for any
program) that converts all recursion into iteration.
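As a small taste of that family of transformations, here is a trampoline, sketched in JavaScript rather than Scheme; it is a far more modest trick than the general translations in the literature, but it shows tail recursion becoming a loop:

```javascript
// A trampoline: instead of recursing, the function returns a thunk
// describing the next step, and a driver loop runs thunks until a
// plain value comes back. Stack depth stays constant.
function trampoline(step) {
  while (typeof step === 'function') {
    step = step();
  }
  return step;
}

// Tail-recursive sum of 1..n, rewritten to return thunks.
function sum(n, acc) {
  if (n === 0) return acc;
  return function () { return sum(n - 1, acc + n); };
}

var total = trampoline(sum(100000, 0));  // no stack overflow
```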


[1] Aho, Sethi, Ullman. _Compilers: Principles, Techniques, and
Tools_. 1986. For a long time the standard textbook for
introduction-to-compilers classes in the US; called the "dragon book"
after its cover illustration.
 

Scott Sauyet

On my Mac, with OS X 10.6.2, parseInt(txt, 10) is the fastest in any
browser: [ ... ]

I modified the test slightly so that the labels would appear on the
charts.

http://scott.sauyet.com/Javascript/Demo/2010-01-22a/

Note that the label "t \ 0" is for "t | 0", which is hard to use in
Google charts.

My results, all on Win XP SP2:

Chrome 3.0.195.27 : http://tinyurl.com/yduzdjw
FF 3.5.7 : http://tinyurl.com/yd9dl7u
IE 8 : http://tinyurl.com/ybu8ykn
Opera 9.64 : http://tinyurl.com/ydxyp8e
Safari 4.0.3 : http://tinyurl.com/ycs69dp

In IE and Opera, "Number(txt)", "parseFloat(txt)", and "parseInt(txt,
10)" were all significantly slower than "+txt", "1 * txt", "txt \ 0",
"txt & 0xffffffff", and "txt >>> 0". Safari was similar, except that
"txt & 0xffffffff" was also relatively slow. My Chrome and Firefox
results were similar to yours.

-- Scott
 

Jorge

On my Mac, with OS X 10.6.2, parseInt(txt, 10) is the fastest in any
browser: [ ... ]

I modified the test slightly so that the labels would appear on the
charts.
Thanks.

   http://scott.sauyet.com/Javascript/Demo/2010-01-22a/

Note that the label "t \ 0" is for "t | 0", which is hard to use in
Google charts.
Ok.

My results, all on Win XP SP2:

    Chrome 3.0.195.27 : http://tinyurl.com/yduzdjw
    FF 3.5.7          : http://tinyurl.com/yd9dl7u
    IE 8              : http://tinyurl.com/ybu8ykn
    Opera 9.64        : http://tinyurl.com/ydxyp8e
    Safari 4.0.3      : http://tinyurl.com/ycs69dp

In IE and Opera, "Number(txt)", "parseFloat(txt)", and "parseInt(txt,
10)" were all significantly slower than "+txt", "1 * txt", "txt \ 0",
"txt & 0xffffffff", and "txt >>> 0".  Safari was similar, except that
"txt & 0xffffffff" was also relatively slow.  My Chrome and Firefox
results were similar to yours.

Not in the latest versions. Windoze XP Service Pack 2:

Safari 4.0.4: http://tinyurl.com/y89u87v
Opera 10.50: http://tinyurl.com/ydhzx3a

It's slower only in IE. What? In IE? How come?
 

Scott Sauyet

On my Mac, with OS X 10.6.2, parseInt(txt, 10) is the fastest in any
browser: [ ... ]
I modified the test slightly so that the labels would appear on the
charts.
Thanks.

   http://scott.sauyet.com/Javascript/Demo/2010-01-22a/
Note that the label "t \ 0" is for "t | 0", which is hard to use in
Google charts.
Ok.

My results, all on Win XP SP2:
    Chrome 3.0.195.27 : http://tinyurl.com/yduzdjw
    FF 3.5.7          : http://tinyurl.com/yd9dl7u
    IE 8              : http://tinyurl.com/ybu8ykn
    Opera 9.64        : http://tinyurl.com/ydxyp8e
    Safari 4.0.3      : http://tinyurl.com/ycs69dp
In IE and Opera, "Number(txt)", "parseFloat(txt)", and "parseInt(txt,
10)" were all significantly slower than "+txt", "1 * txt", "txt \ 0",
"txt & 0xffffffff", and "txt >>> 0".  Safari was similar, except that
"txt & 0xffffffff" was also relatively slow.  My Chrome and Firefox
results were similar to yours.

Not in the latest versions. Windoze XP Service Pack 2:

Safari 4.0.4: http://tinyurl.com/y89u87v
Opera 10.50: http://tinyurl.com/ydhzx3a

Interesting. When I upgraded to Safari 4.0.4, I got similar results
to yours. But in Opera, upgraded to 10.10, running on Windows XP SP2,
my results are very different:

http://tinyurl.com/yejramy

-- Scott
 

Richard Cornford

Interesting. When I upgraded to Safari 4.0.4, I got similar
results to yours. But in Opera, upgraded to 10.10, running
on Windows XP SP2, my results are very different:
<snip>

There is much more to take into account than just browser and browser
version. For example, it was once observed that a *simple* regular
expression match can be faster than an equivalent string comparison.
That is counter-intuitive (given how simple a string comparison is to
implement), and attempts to verify it proved inconclusive because even
on the same browser some people were seeing one result and others
another, for the same test code. It turned out that the regular
expression was being seen as faster on machines using a P4 processor,
while P3 processors showed the string comparison as
faster. This has been attributed to the longer pipeline on the P4,
which seems to work in favour of simple regular expressions while
working against the (smaller and simpler) string comparison code.

It seems that when comparing javascript operation performance it is
necessary to gather statistics from a good range of hardware in
addition to a good range of software (browsers and OSs, and versions
thereof).

In relation to measuring relative performance of javascript operations
there seems to be something missing from these threads recently. And
that is the apparent realisation that performance does not matter that
much whenever things are fast enough, and that the place where things
are least likely to be fast enough will be in the worst performing
browsers. These days the worst performing 'common' browser will either
be IE 6, or for those outside the commercial web development world, IE
7. In cases where it will matter, it will almost always be advisable to
use the operation with the best relative performance in IE 6 or 7 even
if that costs some performance in, say, Chrome or Safari, because the
latter two start off so much faster than the old Microsoft offerings
that they will probably still be faster even after paying for the IE
optimisations. Thus if the old IEs are moved towards acceptable
performance then all the others will probably be fine anyway.

So for the cases where performance is likely to matter, all of these
posted timings would not be providing the most important information
(which changes will help the worst performing browsers) even if they
were detailed/comprehensive enough to draw useful conclusions about
the browsers being tested.

Richard.
 

Scott Sauyet

In relation to measuring relative performance of javascript operations
there seems to be something missing from these threads recently. And
that is the apparent realisation that performance does not matter that
much whenever things are fast enough, and that the place where things
are least likely to be fast enough will be in the worst performing
browsers.

I'm afraid I brought performance into this discussion; it was totally
inappropriate, and, it looks like, at least partially incorrect.

You are absolutely right here, though. Unless the OP was having
performance issues that could be traced to the string-to-number
conversion, the best bet is to use the technique that is most
maintainable. Now all we have to do is decide which one is
cleanest! :)

-- Scott
 

Scott Sauyet

Have you?  I guess for languages like C/C++ or Java, or other
languages meant for heavy-weight lifting, this makes a good deal of
sense.

A standard peephole algebraic-simplification optimization, often
implemented even in toy compilers. There's a short discussion in the
"dragon book"[1] (section 9.10), and something similar should be in
any general text on compiler design.

It's nice to know there's someone covering my back with such
optimizations!

Really? Have you worked on compilers? (That's an honest question -
it's not like every good programmer has had occasion to write a compiler.)

I've had to build compilers for toy languages and in college (many
years ago!) I wrote a compiler for Forth. But all my compiler
experience has been academic, either as a university student or in
reading for my interests. It's never been part of my day job, or even
of my major personal projects. So yes, and no. Even the compiler I
wrote in college was to a very simplified virtual machine. I never
even attempted optimizations like this.

To some optimizations, yes, but typically not to single-statement and
peephole ones. Those are applied to the parse tree, so they happen
after any manipulation of the statement in question has occurred.

So, for example, if the source contains:

        var foo = bar * 1;

there's no reason why the ES interpreter can't eliminate the
multiplication after parsing that statement - no eval is going to
affect the tree after it's been parsed.

No, there's no reason that it can't. But do compiler writers often
add such an optimization, that is, one that looks at every
multiplication to see if one of the multiplicands is hard-coded to 1?
I understand that it's possible to do this, but I'm curious as to
whether it happens.
And similarly, if the code contains:

        var baz = "bar";
        var foo = eval(baz + " * 1");

the string concatenation will happen first, then the string "bar * 1"
will be parsed (by eval), and optimizations can then be performed on
that parse tree. Nothing can change the string once it's passed to
eval - even in some future multithreaded ES implementation - so the
dynamic features of the language don't impede optimization at that point.

Of course, but appropriate analysis of this would also allow the same
optimization:

var baz = "barrister";
var foo = eval(baz.substring(0, Math.floor(Math.random()) == 0 ?
3 : 2) + " * 1");

I'd be really surprised if that were built into any compiler. But it
would be possible. My question is about what real-world compiler
builders actually do.
Where dynamic languages interfere with optimization is in larger
transformations. For example, inlining short functions is complicated
if those functions can be redefined at runtime.

There's a lot of literature on optimizing the LISP and ML language
families. I've seen some fairly dramatic optimization transformations
for Scheme (a LISP variant with first-class functions), for example,
including a general translation (ie, one proven to be correct for any
program) that converts all recursion into iteration.

I've read small amounts of the literature, but never delved deeply
into this. It's fascinating to see what is actually possible.

-- Scott
 

Jorge

I'm afraid I brought performance into this discussion; it was totally
inappropriate

Yeah, sure. Why? Some benchmarking is always a good thing.

This particular one serves to destroy the common belief that function
calls are necessarily slower than operators.
 

Dmitry A. Soshnikov

On Jan 22, 11:10 am, "Dmitry A. Soshnikov"
[...]
Also interesting is that parseInt(...) without a radix is faster than
with radix 10 - presumably some default value is taken at the
implementation level without parsing the radix argument (but again not
in Chrome - there it's slower).

On my Mac, with OS X 10.6.2, parseInt(txt, 10) is the fastest in any
browser:

Yep, it easily can be so; as was correctly mentioned, we have to take
all the other factors into account when analyzing this. Here's my test
code with results (on WinXP):

var
    k,
    n = 500000,
    t1, t2, t3,
    t = +new Date,
    v = '1',
    r;

for (k = n; k--;)
    r = parseInt(v, 10);

t1 = new Date - t;

t = new Date;
for (k = n; k--;)
    r = parseInt(v);

t2 = new Date - t;

t = new Date;
for (k = n; k--;)
    r = +v;

t3 = new Date - t;

alert(
    'parseInt(v, 10): ' + t1 + '\n' +
    'parseInt(v): ' + t2 + '\n' +
    '+v: ' + t3
);

Results:

FF 3.5.7:

parseInt(v, 10): 71
parseInt(v): 22
+v: 76

IE8:

parseInt(v, 10): 578
parseInt(v): 532
+v: 390

Chrome 3.0.195.38:

parseInt(v, 10): 110
parseInt(v): 129
+v: 216

Safari 4.0.3 (531.9.1):

parseInt(v, 10): 122
parseInt(v): 138
+v: 40

Opera 10.10

parseInt(v, 10): 719
parseInt(v): 609
+v: 344

Regards.

/ds
 

Dmitry A. Soshnikov

On Jan 23, 8:47 pm, "Richard Cornford" <[email protected]>
wrote:

[...]
Number(x) - is the most explicit, self-documenting and so clearest
type-conversion-to-number strategy.

Seems so; by its algorithm it's the nearest to the unary plus operator
(or vice versa, no matter).

Also, it's worth noting that `parseInt', unlike `Number' and the unary
plus, can convert data like e.g. '1a', so depending on the situation a
different approach can be chosen:

parseInt('1a', 10); // 1
Number('1a'); // NaN
+'1a'; // NaN
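The differences cut the other way as well; a few edge cases worth knowing (these results are what the specification requires, not implementation quirks):

```javascript
// Empty and whitespace-only strings: ToNumber gives 0, parseInt gives NaN.
var a = Number('');            // 0
var b = parseInt('', 10);      // NaN
var c = +'  42  ';             // 42 - surrounding whitespace is allowed
var d = parseInt('42px', 10);  // 42 - parses leading digits, stops at 'p'
```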

/ds
 

Thomas 'PointedEars' Lahn

Richard said:
I have long-since been convinced by Lasse's proposition that -
Number(x) - is the most explicit, self-documenting and so clearest
type-conversion-to-number strategy.

How can it be the clearest (or cleanest) strategy if except in border
cases you cannot know the outcome, or IOW, if the outcome differs among
implementations?


PointedEars
 

Michael Wojcik

Scott said:
Scott Sauyet wrote:
I've never run across [peephole algebraic simplification] before.
Really? Have you worked on compilers? (That's an honest question -
it's not like every good programmer has had occasion to write a compiler.)

I've had to build compilers for toy languages and in college (many
years ago!) I wrote a compiler for Forth. But all my compiler
experience has been academic, either as a university student or in
reading for my interests. It's never been part of my day job, or even
of my major personal projects. So yes, and no. Even the compiler I
wrote in college was to a very simplified virtual machine. I never
even attempted optimizations like this.

Fair enough. In my undergrad compiler class we *did* implement some
peephole optimizations, dead code elimination, and that sort of thing;
but since classes are always time-restricted, it's a question of where
the instructor wants to focus. Your compiler class might well have
spent more time on, say, parsing than mine did.
No, there's no reason that it can't. But do compiler writers often
add such an optimization, that is, one that looks at every
multiplication to see if one of the multiplicands is hard-coded to 1?
I understand that it's possible to do this, but I'm curious as to
whether it happens.

I guess we'd have to interview a statistically-significant sample of
compiler writers to be sure. :)

I believe commercial compilers generally do use the standard peephole
optimizations - things like algebraic simplification, reduction in
strength, dead code elimination, and so on. (They also typically
include much more ambitious optimizations.) The same probably applies
to a lot of non-commercial compilers; certainly the OCaml team, for
example, is very interested in optimization.

Peephole optimizations are really quite easy to implement. They're
basically a filter between the parser and the code generator, and they
look for simple patterns. And the code to examine all multiplications
is no more difficult than the code to examine one (in fact, it's simpler).
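A minimal sketch of that filter idea, using an invented three-address instruction format (no real backend looks exactly like this):

```javascript
// Peephole pass over a flat instruction list: slide a small window
// along the code, pattern-match, replace. The instruction shapes
// { op, dst, a, b } are made up for this example.
function peephole(code) {
  return code.map(function (ins) {
    if (ins.op === 'MUL' && ins.b === 1) {
      return { op: 'MOV', dst: ins.dst, a: ins.a };       // x * 1  =>  x
    }
    if (ins.op === 'MUL' && ins.b === 2) {
      return { op: 'SHL', dst: ins.dst, a: ins.a, b: 1 }; // x * 2  =>  x << 1
    }
    return ins;
  });
}

var out = peephole([
  { op: 'MUL', dst: 't0', a: 'bar', b: 1 },
  { op: 'MUL', dst: 't1', a: 'bar', b: 2 }
]);
```

The second rewrite (multiply to shift) is the classic "reduction in strength" mentioned above.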
Of course, but appropriate analysis of this would also allow the same
optimization:

var baz = "barrister";
var foo = eval(baz.substring(0, Math.floor(Math.random()) == 0 ?
3 : 2) + " * 1");

I'd be really surprised if that were built into any compiler.

But that's my point - it's no different from the code above. Here it
is in pseudocode:

1. Get the result from Math.random()
2. Get the result from Math.floor()
3. Determine the result from the ternary operator
4. Get the result from baz.substring()
5. Perform string concatenation
6. Pass resulting string to eval()
7. Parse the string - this is what eval always does
8. Optimize the resulting parse tree
9. Evaluate the optimized parse tree

Here's the thing: steps 7 and 9 are what the compiler (or interpreter)
always has to do anyway, when it processes a source file. Adding step
8 is not a lot of work, and it generally pays off, so it's something
compiler writers (seem to) usually do as a matter of course once the
initial compiler is working.

When you have steps 7, 8, and 9, the logical implementation of eval is
to just call them - exactly as the compiler/interpreter does when it
processes a source file.

This was S. R. Russell's famous insight that led to LISP's
read-eval-print loop interpreter [1]: eval is the compiler, and if you
just compile a statement at a time in a loop, you have an interpreter.
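That insight fits in a few lines - a toy read-eval loop that hands each statement to eval in turn (a sketch only; it uses indirect eval so each line runs as global code, and a real REPL would also print results and handle errors):

```javascript
// Russell's insight, minimally: eval already parses (and may
// optimize) and executes; looping it over statements one at a
// time gives you an interpreter.
function runLines(lines) {
  var indirect = eval;  // indirect call: each line evaluates as global code
  var last;
  for (var i = 0; i < lines.length; i++) {
    last = indirect(lines[i]);  // parse + optimize + execute, per statement
  }
  return last;                  // value of the final statement
}

var result = runLines(['var x = 21;', 'x * 2;']);  // 42
```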

So optimization, if it's implemented at all, probably does happen with
arbitrarily complex eval expressions. The key is that it happens when
the final string passed to eval is parsed.

(Of course, optimization may happen before then, too, on the parts of
the expression that produce that string to be evaluated.)


[1] http://www-formal.stanford.edu/jmc/history/lisp/node3.html
 

Richard Cornford

How can it be the clearest (or cleanest) strategy

Where 'clearest strategy' was qualified with "type-conversion-to-
number".
if except in border cases you cannot know the outcome, or
IOW, if the outcome differs among implementations?

So that hangs on an "if". Do the outcomes differ among implementations
(beyond the provisions allowed in the specification for the ToNumber
function(9.3.1) when the number of significant digits goes over 20)?
And if they do isn't that an implementation bug that needs fixing?

In any event, relative terms like "clearest" suggest comparison
between alternatives, and only type-conversion-to-number alternatives
would be appropriate for that comparison, so are there any
environments where - Number(x) - would differ in its outcome from, for
example, - (+x) -, or - (x*1) -? The specs clearly assert that it
shouldn't.

Richard.
 

Scott Sauyet

Fair enough. In my undergrad compiler class we *did* implement some
peephole optimizations, dead code elimination, and that sort of thing;
but since classes are always time-restricted, it's a question of where
the instructor wants to focus. Your compiler class might well have
spent more time on, say, parsing than mine did.

Probably not, as I never had a compiler course per se, just an
assignment in a course on language design. I studied religion and
mathematics, and only took a few computer science courses; language
design was the only really deep one. I keep saying that I'll go back
one of these days and get a second Masters, this time in CS; I'd
really like to learn more about compilers, about DBMSs, and sometimes
just about fundamental algorithms. But there's always so much to do!
I guess we'd have to interview a statistically-significant sample of
compiler writers to be sure. :)

Of course. But you clearly have deeper knowledge of this than I do.
In your experience...?

[ ... ]
Of course, but appropriate analysis of this would also allow the same
optimization:
    var baz = "barrister";
    var foo = eval(baz.substring(0, Math.floor(Math.random()) == 0 ?
3 : 2) + " * 1");
I'd be really surprised if that were built into any compiler.

But that's my point - it's no different from the code above. Here it
is in pseudocode:

1. Get the result from Math.random()

Whoa! Which result? That was the point of the whole thing. The
appropriate analysis I mentioned was that it's clear from the
definitions that

Math.floor(Math.random()) === 0

but it's quite a leap to imagine coding this particular optimization
into your compiler. Or is it not? Do you know if there's much work
at this level?
[ ... ] This was S. R. Russell's famous insight that led to LISP's
read-eval-print loop interpreter [1]: eval is the compiler, and if you
just compile a statement at a time in a loop, you have an interpreter.

So optimization, if it's implemented at all, probably does happen with
arbitrarily complex eval expressions. The key is that it happens when
the final string passed to eval is parsed.

Thank you for this very informative post.

-- Scott
 

Thomas 'PointedEars' Lahn

Richard said:
Where 'clearest strategy' was qualified with "type-conversion-to-
number".

Yes, I noticed that.
So that hangs on an "if".

No, it does not. I tried to use a figure of speech ("How can it be ...
if ...?"), maybe wrong, obviously unsuccessfully.
Do the outcomes differ among implementations
(beyond the provisions allowed in the specification for the ToNumber
function(9.3.1) when the number of significant digits goes over 20)?

Yes, it has been shown before.
And if they do isn't that an implementation bug that needs fixing?

No, for the Specification they implement allows it.
In any event, relative terms like "clearest" suggest comparison
between alternatives, and only type-conversion-to-number alternatives
would be appropriate for that comparison, so are there any
environments were - Number(x) - would differ in its outcome from, for
example, - (+x) -, or - (x*1) -? The specs clearly assert that it
shouldn't.

Read again. AISB, the ES3F still allows implementations to recognize the
octal format in a String argument, but makes no requirement; ES5 disallows
it. Both Editions define a requirement for recognition of the hexadecimal
format.

The clearest strategy is the one most obvious in source code; it does not
apply to `+' or `*1'. The clearest one is that where you can tell without
testing what the outcome is going to be; that does not apply to Number(),
`+', or `*1'.


PointedEars
 

Richard Cornford

Yes, I noticed that.



No, it does not. I tried to use a figure of speech ("How can it
be ... if ...?"), maybe wrong, obviously unsuccessfully.


Yes, it has been showed before.

So it should not be too much trouble to show again. Personally I have
never seen anyone showing - Number(x) - results differing from - (+x)
- results, or any (non-rounding related) differences between
implementations.
No, for the Specification they implement allows it.

No it doesn't. The part of the spec that is relevant here is:-

| 9.3.1 ToNumber Applied to the String Type
|
| ToNumber applied to strings applies the following grammar to the
| input string. If the grammar cannot interpret the string as an
| expansion of StringNumericLiteral, then the result of ToNumber
| is NaN.
|
| StringNumericLiteral :::
| StrWhiteSpace[opt]
| StrWhiteSpace[opt] StrNumericLiteral StrWhiteSpace[opt]
|
| StrWhiteSpace :::
| StrWhiteSpaceChar StrWhiteSpace[opt]
|
| StrWhiteSpaceChar :::
| ...
|
| StrNumericLiteral :::
| StrDecimalLiteral
| HexIntegerLiteral
|
| ...
|
| Some differences should be noted between the syntax of a
| StringNumericLiteral and a NumericLiteral (section 7.8.3):
|
| ...
| * A StringNumericLiteral that is decimal may have any number of
| leading 0 digits.
| ...

Unlike the case with NumericLiterals, there is no potential for
StringNumericLiteral being interpreted as octal because decimal
values are allowed "any number of leading 0 digits", and so there
is no basis for distinguishing them from octal values.

(And appendix B does not include any additions/alternatives for
StringNumericLiteral syntax.)
Read again. AISB, the ES3F still allows implementations to
recognize the octal format in a String argument,

No it doesn't, not for the internal ToNumber function.
but makes no requirement; ES5 disallows it. Both Editions define
a requirement for recognition of the hexadecimal format.

The clearest strategy is the one most obvious in source code;
it does not apply to `+' or `*1'. The clearest one is that
where you can tell without testing what the outcome is going
to be; that does not apply to Number(), `+', or `*1'.

Being able to tell what the outcome would be implies knowing the input
(or at least how all the input possibilities will be handled), and
knowing the input for - Number(x) - does render the output
predictable.
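Concretely, the grammar quoted above requires a conforming implementation to produce these results (hex is in the StringNumericLiteral grammar, octal is not):

```javascript
// Spec-required ToNumber behaviour for string arguments.
var hex = Number('0x10');      // 16 - HexIntegerLiteral is in the grammar
var oct = Number('010');       // 10 - leading zeros are plain decimal
var plus = +'010';             // 10 - unary + uses the same ToNumber
var pi = parseInt('010', 10);  // 10 - an explicit radix removes any ambiguity
```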

Richard.
 

Michael Wojcik

Scott said:
Of course. But you clearly have deeper knowledge of this than I do.
In your experience...?

Well, my experience only includes working on a couple non-toy
compilers, seeing the guts of a few others, and looking at the output
of maybe a dozen or so on top of that. I think they pretty much all
had peephole optimizations like this.

As far as ECMAScript specifically goes, I suppose we could take a
look at the open-source implementations to see. That's a bit far down
on the priority list at the moment, though.
Whoa! Which result?

The one derived at runtime. I'm not explaining something clearly here.
The point is that the optimization in question - removing the
multiply-by-one step - happens at runtime, when the final string is
evaluated in the body of the eval method. By that time, we've called
Math.random, etc.

In other words: we don't expect the optimizer to remove the *string*
"* 1" from the final string that's passed to eval, or even to get rid
of the eval call entirely and just turn this into "var foo = bar".
There are some really ambitious optimizations that use metadata about
standard functions to eliminate them entirely from expressions where
they have no effect, but let's assume that's not happening here.[1]

Instead, we expect this particular optimization to happen at run time,
and only once the string has been passed to eval and eval has parsed
it. In other words:

1. Program starts running
2. Program reaches the line with the eval call
3. The string for eval is constructed
4. eval is called
5. eval parses the string
6. The optimizer performs peephole optimizations on the parse tree
7. The optimized parse tree is executed by the interpreter

Now, we generally can assume this same process would happen even in a
compiled language (that supported an eval operation). There are
basically two ways to implement eval: with an interpreter, or with
just-in-time compilation. In either case, the logical thing to do is
to perform peephole optimization on the parse tree produced by eval's
parser before sending it to the interpreter or JIT compiler.


[1] For one thing, we'd have to know that eg Math.random has not been
replaced by the program prior to this line being executed.
 

Scott Sauyet

The one derived at runtime. I'm not explaining something clearly here.
The point is that the optimization in question - removing the
multiply-by-one step - happens at runtime, when the final string is
evaluated in the body of the eval method. By that time, we've called
Math.random, etc.


Yes, that's where my confusion lay. I thought you were describing
ahead-of-time static optimizations that would remove them from the
syntax tree at a relatively early stage. This makes much more sense
now!

In other words: we don't expect the optimizer to remove the *string*
"* 1" from the final string that's passed to eval, or even to get rid
of the eval call entirely and just turn this into "var foo = bar".
There are some really ambitious optimizations that use metadata about
standard functions to eliminate them entirely from expressions where
they have no effect, but let's assume that's not happening here.[1]

Okay. But at least now you can understand the surprise I'd been
expressing! :)

Again, thank you. It was very informative.

-- Scott
 

Jorge

Yes, that's where my confusion lay.  I thought you were describing
ahead-of-time static optimizations that would remove them from the
syntax tree at a relatively early stage.  This makes much more sense
now!

FF does some static optimizations @ compile time:

(function f () { 2+2; return ("3."+"14")*"2"+x;}).toSource()
-> "(function f() {return 6.28 + x;})"
 
