Interpreters have advantages and disadvantages.
Interpreted languages can implement dynamic features that can be
very difficult to implement in compiled languages; however,
interpreted languages also tend to be slower than compiled ones.
Ruby _is_ particularly bad in this respect, though. Unfortunately,
a lot of this is tied to what makes Ruby pleasant to use.
For example, a native compiler for Ruby can't sanely generate all the
classes
at compile time and reason about them, which is the normal approach
for more
static languages.
One thing is that new classes can be made available by loading more
code at runtime; that's not hard. What is hard is dealing with the
fact that there are no clearly delineated execution stages in Ruby:
conceptually, code can execute from the moment the very first line
of the script has been parsed, including _inside_ class definitions,
and that code can mutate the classes being defined, or ones that
have already been defined, and can have arbitrary side effects.
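As a sketch of the problem (the `Connection` class and the method names here are made up for illustration), a class body in Ruby is just ordinary code that runs when it is parsed, and it can reach out and change other classes while it runs:

```ruby
# A class body is executable code: it runs as soon as it is loaded,
# can define methods conditionally, and can mutate other classes.
class Connection
  # Ordinary branching, executed at "definition time":
  if ENV["USE_TLS"]
    def open; "tls socket"; end
  else
    def open; "plain socket"; end
  end

  # Side effect on a completely different, already-defined core class:
  String.send(:define_method, :shout) { upcase + "!" }
end

puts Connection.new.open  # depends on the environment at load time
puts "hello".shout        # => "HELLO!"
```

A compiler that sees this file can't know what `Connection#open` will be, or what methods `String` has, without actually executing the class body.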
A lot of the efficiency of compilers for native languages comes from
static knowledge like this. In C++,
for example, a compiler can safely inline a method if it isn't declared
virtual, or if it is declared virtual but the compiler can conclusively
determine which method will be called, which it often can.
It can also do things like hoist vtable lookups out of loops, because
the class hierarchy won't change at runtime, whereas in Ruby every
iteration of a loop could potentially change the entire class hierarchy.
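To make that concrete (the `Greeter` class here is a hypothetical example), a loop body can redefine the very method the loop is calling, so a naive compiler can't hoist the method lookup out of the loop:

```ruby
class Greeter
  def greet; "hello"; end
end

g = Greeter.new
results = []
3.times do |i|
  results << g.greet
  # On the first iteration, redefine the method this loop is calling.
  # The lookup must be redone (or guarded) on every iteration.
  Greeter.class_eval { def greet; "goodbye"; end } if i == 0
end

p results  # => ["hello", "goodbye", "goodbye"]
```

In practice, implementations handle this with inline caches that are invalidated when a method is redefined, rather than repeating the full lookup, but the dynamic lookup semantics still have to be honored.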
These things aren't impossible to overcome, and I'm confident that
near-C-level performance is _possible_ for Ruby (at the worst-case
cost of almost completely re-generating code for large parts of the
app if you do something nasty, like evaling code that re-opens core
classes), but it's not easy.
Even something "trivial" like compactly packing an object into the
smallest possible amount of memory is not easy in Ruby, since you have
no definitive way of knowing the number of instance variables at
"compile time" for all legal programs: new instance variables can be
set at runtime. In the worst case, preserving memory efficiency means
rewriting the layout of every live object in the system.
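As an illustration (the `Point` class is made up for this example), two instances of the same class don't even have to share a set of instance variables, because any code can add one to a single object at runtime:

```ruby
class Point
  def initialize(x, y)
    @x, @y = x, y
  end
end

p1 = Point.new(1, 2)
p2 = Point.new(3, 4)

p p1.instance_variables  # => [:@x, :@y]

# Attach a brand-new instance variable to just ONE instance,
# long after the class was defined:
p2.instance_variable_set(:@label, "special")

p p2.instance_variables  # now includes :@label
p p1.instance_variables  # unchanged
```

So a fixed struct-like layout computed from the class definition is only ever a guess; the implementation must be able to grow an object's storage later.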
And a lot of it could have been made easier without sacrificing much.
Some small restrictions on what is allowed at "compile time", for
example, and a clear delineation of what would be executed at
"compile time" vs. "runtime", would make a lot of optimizations far
easier.
A cleanup of introspection, so that a compiler could reasonably decide
not to support the textual forms of eval (which effectively require
linking in a full interpreter or compiler) without breaking almost
every Ruby script in existence (yes, I'm exaggerating), would also do
wonders.
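The textual eval problem in a nutshell (the method name here is invented for the example): the string handed to `eval` can be assembled at runtime, so the compiler can't know what it will define until it runs, and here it even re-opens a core class:

```ruby
# The code string is built at runtime; no static analysis of the
# source file can tell you what it will define.
method_name = "doubled"
eval "class Integer; def #{method_name}; self * 2; end; end"

p 21.doubled  # => 42
```

Supporting this in compiled code means shipping a full parser and code generator inside every compiled program, which is exactly the "linking in a full interpreter or compiler" cost mentioned above.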
There are many things like that, which come out of designing Ruby
without ever thinking about the implications for compilation.
That's ok, but it does mean that Ruby is one of the least suitable
languages for
compilation I've used, and achieving good performance will take a lot
of extra
effort, and achieving great performance will be damn hard. That said,
I use Ruby
for almost everything I do these days - only very rarely does the
performance of
the interpreter make much difference for me.
Vidar