Ah, found it. The parts that make sense to me are:
register PyObject **stack_pointer;
#define TOP()       (stack_pointer[-1])
#define BASIC_POP() (*--stack_pointer)
...(line 1159)...
        w = POP();                    /* right operand comes off the stack */
        v = TOP();                    /* left operand stays on top */
        if (PyInt_CheckExact(v) && PyInt_CheckExact(w)) {
            /* INLINE: int + int */
            register long a, b, i;
            a = PyInt_AS_LONG(v);
            b = PyInt_AS_LONG(w);
            i = a + b;
            /* result sign differing from both operands' signs
             * means the C long addition overflowed */
            if ((i^a) < 0 && (i^b) < 0)
                goto slow_add;        /* fall back to the generic path */
            x = PyInt_FromLong(i);    /* box the result as a Python int */
... which is more than I had pictured being involved. I understand it
is also specific to CPython. Thanks for the pointer to the code.
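For what it's worth, the cryptic-looking test "(i^a) < 0 && (i^b) < 0"
is a sign-bit overflow check: a signed add can only overflow when both
operands have the same sign, and overflow flips the sign of the result,
so a result whose sign differs from both operands means the add wrapped.
Here is a standalone sketch of the idea; the add_overflowed helper is
mine, not CPython's, and the demo wraps via unsigned arithmetic so it
avoids undefined behavior itself:

#include <stdio.h>
#include <limits.h>

/* i^a is negative exactly when i and a have different sign bits. */
static int add_overflowed(long a, long b, long i)
{
    return (i ^ a) < 0 && (i ^ b) < 0;
}

int main(void)
{
    long a = LONG_MAX, b = 1;
    /* wrap deliberately via unsigned arithmetic, since overflowing
     * a signed add is undefined behavior in C */
    long i = (long)((unsigned long)a + (unsigned long)b);
    printf("%d\n", add_overflowed(a, b, i));     /* prints 1 */
    a = 2; b = 3;
    printf("%d\n", add_overflowed(a, b, a + b)); /* prints 0 */
    return 0;
}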
My basic question was: what is the difference between compilers and
interpreters, and why are interpreters slow? I'm looking at some of
the answer right now in "case BINARY_ADD:".
The basic difference between a (traditional) compiler and an
interpreter is that a compiler emits (assembly) code for a specific
machine. It therefore must know the specifics of that machine (how
many registers it has, its memory addressing modes, etc.), whereas an
interpreter defines itself by its conceptual state, that is, a
virtual machine. The instructions (bytecode) of the virtual machine
are generally higher-level than real machine instructions, and the
semantics of the bytecode are implemented by the interpreter, usually
in a relatively high-level language like C. This means the interpreter
can run without detailed knowledge of the machine, as long as a C
compiler exists. The trade-off is that the interpreter's semantics are
not optimized for that machine: every bytecode instruction pays for
being fetched, decoded, and dispatched in software before any useful
work happens.
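To make that dispatch cost concrete, here is a minimal sketch of the
fetch/decode/execute loop a bytecode interpreter like ceval.c is built
around. The opcodes, encoding, and names here are invented for
illustration; CPython's real loop is far more elaborate:

#include <stdio.h>

/* Toy opcodes -- not CPython's numbering. */
enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

static void run(const int *code)
{
    long stack[64];
    long *sp = stack;              /* like ceval.c's stack_pointer */
    for (;;) {
        switch (*code++) {         /* fetch + decode, paid per instruction */
        case OP_PUSH:
            *sp++ = *code++;       /* operand follows the opcode */
            break;
        case OP_ADD: {             /* toy analogue of BINARY_ADD */
            long w = *--sp;        /* POP() */
            sp[-1] += w;           /* result replaces TOP() */
            break;
        }
        case OP_PRINT:
            printf("%ld\n", sp[-1]);
            break;
        case OP_HALT:
            return;
        }
    }
}

int main(void)
{
    /* bytecode for: print(2 + 3) */
    const int code[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
    run(code);                     /* prints 5 */
    return 0;
}

A compiler would instead emit the one native add instruction (plus a
call to print) with no loop or switch around it; that per-instruction
dispatch overhead is the heart of why straightforward interpreters are
slow.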
This all gets a little more hairy when you start talking about JITs,
runtime optimizations, and the like. For a real in-depth look at the
general topic of interpretation and virtual machines, I'd recommend
Virtual Machines by Smith and Nair (ISBN 1-55860-910-5).
-Dan