[F'up2 comp.lang.javascript]
Stefan said:
I have read this in a World-Wide Web encyclopedia:
For example, given the following C code:
int f(int i) {
return i + 1;
}
Emscripten would output the following JS code:
function f(i) {
i = i|0;
return (i + 1)|0;
}
Do you think that the »|0« is necessary to express the
C semantics in JavaScript, or could the speed of the
generated code be improved by omitting it?
First of all, this does _not_ express the C semantics in JavaScript, and
there is no other way. “int” is a *generic* type in C/C++; IIUC, the result
could be a 32-bit integer when compiled for a 32-bit platform or a 64-bit
integer when compiled for a 64-bit platform.
<http://en.wikibooks.org/wiki/C_Programming/Reference_Tables#Table_of_Data_Types>
<http://stackoverflow.com/questions/11438794/is-the-size-of-c-int-2-bytes-or-4-bytes>
By contrast, using the binary bitwise OR operator, as with all ECMAScript
binary bitwise operators, *always* creates an IEEE-754 double-precision
*floating-point* value representing a *32-bit* integer value. Not only is
the result such a value; the operands are also converted to such values
internally before the operation is performed.
<http://ecma-international.org/ecma-262/5.1/#sec-11.10>
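For illustration (my own examples, not from the Emscripten output), both the
truncation towards zero and the 32-bit wrap-around can be observed directly:

  3.7 | 0;                // 3   (fractional part discarded)
  -3.7 | 0;               // -3  (truncation towards zero, not flooring)
  2147483647 + 1;         // 2147483648 as a plain Number value
  (2147483647 + 1) | 0;   // -2147483648 (wrap-around at 2^31)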
Conversion to an integer value of the ECMAScript Number type (i.e., where
the fractional part of the mantissa is zero), which appears to be the goal
here, can be better achieved with the Math.floor() and Math.ceil()
functions, e.g.:
if (typeof Number.prototype.toInteger == "undefined")
{
  Number.prototype.toInteger = function () {
    return (this < 0 ? Math.ceil(this) : Math.floor(this));
  };
}
/**
 * Frobnicates this value
 *
 * @param {int} i
 *   The value to be frobnicated
 * @return {int}
 *   The frobnicated value
 */
function f (i)
{
  return (+i).toInteger() + 42;
}
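A quick usage example (my own, for illustration):

  f(3.7);    // 45, as (3.7).toInteger() yields 3
  f(-3.7);   // 39, as (-3.7).toInteger() yields -3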
[JSX:array.js features another converter, for
jsx.array.BigArray.prototype.slice() & friends¹, that for practical reasons
more closely matches the Specification (ToInt32), but does not convert to a
32-bit floating-point integer value.]
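For comparison, the ToInt32 algorithm of the Specification (section 9.5)
could be written in ECMAScript itself roughly as follows (a sketch of mine,
*not* the JSX:array.js code):

  function toInt32 (x)
  {
    var number = +x;
    if (isNaN(number) || number === 0 || !isFinite(number))
    {
      return 0;
    }

    var posInt = (number < 0 ? -1 : 1) * Math.floor(Math.abs(number));

    /* the Specification's modulo always yields a non-negative result */
    var int32bit = posInt % Math.pow(2, 32);
    if (int32bit < 0)
    {
      int32bit += Math.pow(2, 32);
    }

    return (int32bit >= Math.pow(2, 31)
      ? int32bit - Math.pow(2, 32)
      : int32bit);
  }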
Of course, this still does not remotely implement the C semantics. One
aspect of those semantics is that C code in which a non-integer is passed
would not compile.
Since ECMAScript uses dynamic type-checking, it is not possible to prevent
compilation. But at the very least passing unsuitable values should cause
an exception to be thrown, so that it becomes unnecessary to handle them,
e.g.:
function f (i)
{
  if (i % 1 != 0)
  {
    /* JSX:object.js provides jsx.InvalidArgumentError instead */
    throw new TypeError('f: Invalid argument for "i": ' + i + ':'
      + typeof i + '[' + _getClass(i) + ']'
      + (i != null
          ? ' by ' + (_getFunctionName(i.constructor) || '?')
          : '')
      + '; expected integer');
  }

  return i + 42;
}
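For illustration (my examples again):

  f(1);      // 43
  f(1.5);    // throws, since 1.5 % 1 is 0.5
  f("x");    // throws as well, since "x" % 1 yields NaN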
_________
¹ supports arrays with up to 2⁵³−1 numerically indexed elements²
² Because 2⁵³+1 is indistinguishable from 2⁵³ due to precision limits,
so that element overflow could not be detected, I had to reduce
jsx.array.BigArray.MAX_LENGTH to Math.pow(2, 53) - 1 recently.
(This is also the reason why standard Array instances can hold only
up to 2³²_−1_ elements, so that the largest possible index is
Math.pow(2, 32) _- 2_.)
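Which is easy to verify, for example:

  Math.pow(2, 53) + 1 === Math.pow(2, 53);   // true
  Math.pow(2, 53) + 2 === Math.pow(2, 53);   // false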