It makes sense to group functions and objects, or objects and
null, but not both at the same time. To a programmer, the
"typeof" operator reports the type of a value. It groups objects
and the null value together, but distinguishes functions from objects.
Internally, functions are objects, but the null value is its own type.
A reasonable point. typeof may insist that null is an object, which
implies that there is a null object and that the variable holds a
reference to it, but the last thing null will ever do is behave as
if it were an object, and null is categorised separately in the spec.
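In code, the grouping looks like this (a minimal demonstration; the
variable names are mine):

```javascript
// typeof groups null with objects but splits functions out.
var tNull = typeof null;           // "object"
var tObj  = typeof {};             // "object"
var tFn   = typeof function () {}; // "function"

// Yet null supports none of an object's behaviour: any property
// access on it throws, which is why the spec gives Null its own type.
```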
I noted the time taken, and then changed the assignment to x into
+"4.4E+4";
Number("4.4E+4")
parseFloat("4.4E+4")
and compared the times. The results, roughly averaged, in ms
(on a 1GHz Athlon CPU):

        Base   unary +   Number   parseFloat
  IE6   1100    1600      2730      2830
  Moz   1180    2100      7300      6700
  Op7   2150    2800      4520      5320
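The original test harness isn't shown; a minimal sketch of such a
comparison, timing with new Date() (the repetition count and labels
are assumptions, and absolute figures will vary by browser and CPU):

```javascript
// Hypothetical reconstruction of the timing test (not the original code).
function timeIt(label, fn) {
  var reps = 100000; // assumed repetition count
  var start = new Date().getTime();
  for (var i = 0; i < reps; i++) {
    fn();
  }
  var elapsed = new Date().getTime() - start;
  return label + ": " + elapsed + "ms";
}

timeIt("base",       function () { var x = "4.4E+4"; });
timeIt("unary +",    function () { var x = +"4.4E+4"; });
timeIt("Number",     function () { var x = Number("4.4E+4"); });
timeIt("parseFloat", function () { var x = parseFloat("4.4E+4"); });
```

All three conversions yield the same value (44000); only the speed
differs.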
The unary + is indeed much faster, although neither takes
significant time. If you only do a few conversions, the
efficiency is not worth caring about, but if you get into
the tens of thousands or more, the difference becomes
measurable.
That is, only optimize the inner loop. Or, as the saying goes:
"10% of the code takes 90% of the execution time".
Maybe it's my obsession with absolute performance, but I am inclined to
think that when one approach is objectively optimal there needs to
be a positive reason for not using it in any context, no matter how small
the individual gains. Unary + is also very short, so its use will
reduce the download size slightly (even if parenthesised for code clarity).
Incidentally, when I was speed-testing the various methods I noticed
that there is quite a variation in performance with different input.
Obviously the length of the string has an influence, but so does the
nature of the number. If the number is easily represented as an IEEE
double-precision float (2, for example) the conversion is faster than
when the result needs some approximation in its representation. But as
I recall, the most noticeable differences came when type-converting
strings that would result in NaN.
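The NaN cases are easy to reproduce (the example strings are
illustrative). Strings that can't be parsed as numbers produce NaN with
all three methods, though parseFloat will salvage a leading numeric
prefix where unary + and Number reject the whole string:

```javascript
// Non-numeric strings convert to NaN.
var a = +"abc";            // NaN
var b = Number("abc");     // NaN
var c = parseFloat("abc"); // NaN

// parseFloat differs: it parses a leading numeric prefix and
// ignores the rest; unary + and Number require the whole string
// to be numeric.
var d = parseFloat("4.4abc"); // 4.4
var e = +"4.4abc";            // NaN
```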
As they would with Number, parseFloat or parseInt, or when written as
literals. Since JavaScript has only one number type, corresponding to
IEEE 754 double-precision (64-bit) floating point numbers, it can
represent all integers exactly only up to 2^53. The ones below that
limit are also floats with exponents, but the 53-bit significand can
still hold every bit of the integer. Above it, the spacing between
representable values exceeds one, so the doubles can no longer range
over all the integers.
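The limit can be demonstrated directly (a minimal sketch; 2^53 is
where the gap between adjacent doubles first reaches 2):

```javascript
var limit = Math.pow(2, 53); // 9007199254740992

// Below the limit, adjacent integers are still distinct.
var below = (limit - 1 !== limit); // true

// At the limit, 2^53 + 1 is not representable and rounds back to 2^53.
var above = (limit + 1 === limit); // true
```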
I was considering not even mentioning the limit on integers; I figured
that if the site has 2^53 pages the visitor will die of old age before
they get to 2^53+1 ;-)
Richard.