Ruby vs. Java

Martin DeMello

"I'm going to try both Java and Ruby out before I choose. "

But Java _is_ faster.
It's also uglier and less fun than Ruby.

If you need proper speed, just stick to the static
languages. If you want the fun, go with Ruby. :>

If you want a JVM static language, I'd strongly suggest Scala rather than Java.

martin
 
Lloyd Linklater

Michael said:
You use a hex editor? Real hackers use only ones and zeros -- and that's
only if they have ones.

You had zeros? I worked for a company that could not afford zeros. We
had to use the letter "O".
 
Lloyd Linklater

In the olden days, programs used to be a combination of assembly and a
low level compiled language like C or Pascal. Lotus was actually
written wholly in assembler back in the day. The comparison of C to
assembler was, in the early days, much like the comparison of Java to
Ruby today. However, the speed comparison of assembler to C did not
sustain its original conclusion for long. As compilers improved, it was
possible to write wholly in C and have it execute faster than in pure
assembler.

Personal anecdote #1: actual testing

I was working at Quantum at the time (hard disk maker) and this very
controversy arose. The managers listened to the philosophical debate
for a while and decided to settle it. We had volunteers from each side
to write code in their idiom doing theoretical and practical tests of
speed. By theoretical I mean do <this thing> X times and see how fast
it is. By practical, I mean that you had to see how many seeks,
read/writes, etc you could get in a certain time with the different
algorithms.

The pure language, no assembler, won hands down.

Personal anecdote #2: compiler comparisons

I was a sysop on CompuServe for the Borland topics for a while and this
came up again in various threads. There were several assembler devotees
that were pushing their notions. They maintained that they could write
an assembler routine that was fewer lines than anything that a compiled
language could manage. They tried several examples. In each case,
someone wrote a Pascal method, compiled it, then looked at the generated
assembler and posted it. The compiled version was always shorter than
the one written in assembler.

Does this prove that it will always be so? Nah. It proves that trying
to out-compile the compiler these days is a waste. You MAY get one line
less here or there, but only with tremendous effort, so why bother? Is
assembler useless? Nah, but its reputation is overrated.

What does this mean in this context?

The speed of execution is not as simple as looking at the stopwatch.
With hardware prices coming down down down, you can get muscle machines
to pick up any such slack. But, that costs money. Well, so does
development. If you can develop a lot faster in Ruby, is it worth it to
get another machine and run some of the parts in threads that will get
it done in a faster fashion? If it is not, then speed may not be as
critical as you think.
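
To put rough numbers on that trade-off (all figures hypothetical, purely
for illustration of the comparison, not real costs):

# Hypothetical project: Java takes longer to build, Ruby needs an extra box
dev_weeks_java  = 12
dev_weeks_ruby  = 8
weekly_dev_cost = 2_000   # per developer, per week
extra_server    = 3_000   # one beefier machine to absorb the speed gap

java_cost = dev_weeks_java * weekly_dev_cost                 # => 24000
ruby_cost = dev_weeks_ruby * weekly_dev_cost + extra_server  # => 19000

puts java_cost
puts ruby_cost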

There are many ways to get things moving faster. Here is one I saw
recently that I consider quite helpful. It also has one of my
favorite lines I have ever read in a technical article:

"However, all these caching measures won't hide a basic problem: you are
performing lots of database queries, and it's harshing your mellow."

How can you top that???

In short, Java might run faster, but that should not make a difference.
You can use hardware and design to make up for that. After that is
done, it is the time it takes to develop and maintain the code that
will matter most.

FWIW, and IMHO
 
M. Edward (Ed) Borasky

Lloyd said:
You had zeros? I worked for a company that could not afford zeros. We
had to use the letter "O".

Be careful ... I understand Scott Adams is rather nasty about people
stealing his better Dilbert lines. :) But ... in truth, although ILLIAC
I assembler did have symbolic *addressing*, it did not have symbolic op
codes ... the machine language of the system was so simple and logical
that they weren't needed.

Other notes on that bygone day -- programmers used hexadecimal notation,
but it was called "sexadecimal". And after 9 you had either K, S, N, J,
F, L or +, -, N, J, F, L. Those happened to be the right codes on the
five-hole paper tape that the Teletypes punched.
 
M. Edward (Ed) Borasky

Lloyd said:
In the olden days, programs used to be a combination of assembly and a
low level compiled language like C or Pascal. Lotus was actually
written wholly in assembler back in the day. The comparison of C to
assembler was, in the early days, much like the comparison of Java to
Ruby today. However, the speed comparison of assembler to C did not
sustain its original conclusion for long. As compilers improved, it was
possible to write wholly in C and have it execute faster than in pure
assembler.

Personal anecdote #1: actual testing

I was working at Quantum at the time (hard disk maker) and this very
controversy arose. The managers listened to the philosophical debate
for a while and decided to settle it. We had volunteers from each side
to write code in their idiom doing theoretical and practical tests of
speed. By theoretical I mean do <this thing> X times and see how fast
it is. By practical, I mean that you had to see how many seeks,
read/writes, etc you could get in a certain time with the different
algorithms.

The pure language, no assembler, won hands down.

I'm really curious about two things:

1. The processor architecture, and
2. The language.

There once was an architecture called VLIW, embodied in a
mini-supercomputer called Multiflow. This architecture was so
complicated that it literally *had* to have a compiler -- no human could
even program it, let alone optimize code for it. The compiler used a
technique called "trace scheduling" to do this.

The punchline is that the optimization problem for this beast was
NP-complete. Now *most* compiler optimization problems are NP-complete
once you express them as true combinatorial optimization, and the good
folks at Multiflow weren't oblivious to that fact. However, their
approximations were still slow relative to what simpler architectures
required, and Multiflow went out of business. They disappeared without a
trace.

<ducking>
 
Ari Brown

Wow thanks for all the support everyone!

QUOTE:
---------------------------------------------------------------------------
How do you know that both Ruby and Java are not both too slow?
How do you know that both Ruby and Java are not both way fast enough?
---------------------------------------------------------------------------

Well, have any of you played any kind of MORPG? Ever heard of lag? I
hate lag and I want as little of it as possible. But I guess that also
depends on the quality of the code.

Lag is also largely caused by a DoS of packets to the server. Want a
fix? Use multiple servers (like WoW), and get LOTS of bandwidth.



---------------------------------------------------------------|
~Ari
"I don't suffer from insanity. I enjoy every minute of it" --1337est
man alive
 
Lloyd Linklater

M. Edward (Ed) Borasky said:
I'm really curious about two things:

1. The processor architecture, and
2. The language.

1. It was on a 386, 16MHz with no math coprocessor. (How is THAT for
old???)

2. We were using straight C and Microsoft's 5.1 compiler on DOS 3.1 if
memory serves.

Note: I was the one that did the theoretical programming. My buddy did
the practical and he did it with a kind of a cheat. He would write
code, then look at the assembler that the compiler produced. He would
tweak the code and look again. Whatever produced the fewest assembler
lines is what he used. His code was more than an order of
magnitude faster.

Theoretical speed is somewhat like the velocity metrics on high
performance cars. Just because it goes 0-60 in 4 seconds does not mean
that you can get it to do that. It is much the same with Java vs. Ruby.
Many articles I have read, and with which I agree, say that development
and maintenance are the biggest costs for most projects. Getting things
up and earning revenue as fast as possible is not to be underestimated.
 
Matt Lawrence

That was the previous system; only one key.

Ummm, I actually use an iambic keyer. One side for dots and the other for
dashes.

-- Matt
It's not what I know that counts.
It's what I can remember in time to use.
 
Kenneth McDonald

Nick said:
Which programming language is faster - Ruby or Java?

This is one of the things that will decide whether I use Ruby or Java so
help is appreciated greatly.

Thanks.

Java is much faster than Ruby (on average), and can now approach or even
match the speed of C in many cases. Of course, you'll spend more time
implementing those fast bits of code in Java.

I didn't look at all of the replies to the original note, but none of
the ones I did read mentioned JRuby. Worth checking out; use Ruby
(executed by the Java virtual machine) for the non-performance-critical
parts of your application, and Java for the parts that require speed.
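
For instance, a rough sketch of that split (a hypothetical example,
assuming JRuby's standard Java integration via require 'java'):

require 'java'

# Plain Ruby drives the logic...
values = Array.new(100_000) { rand(1_000) }

# ...while a Java collection and Java's sort do the heavier lifting.
list = java.util.ArrayList.new
values.each { |v| list.add(v) }
java.util.Collections.sort(list)

puts list.get(0)              # smallest value
puts list.get(list.size - 1)  # largest value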

Ken
 
Chad Perrin

Kenneth McDonald said:
Java is much faster than Ruby (on average), and can now approach or even
match the speed of C in many cases. Of course, you'll spend more time
implementing those fast bits of code in Java.

I suspect you mean C++. I don't recall seeing much in the way of
evidence that Java had quite gotten as fast as C -- which is still about
twice as fast as C++ for many purposes.
 
Lloyd Linklater

Java's speed comes largely from its pre-compiling. It was quite slow in
the early days. Is Ruby likely to get such a boon in the foreseeable
future? That would certainly be something that pointy haired managers
could boldly hold forth in meetings for consideration.
 
Eliot Miranda

Lloyd said:
Java's speed comes largely from its pre-compiling. It was quite slow in
the early days. Is Ruby likely to get such a boon in the foreseeable
future? That would certainly be something that pointy haired managers
could boldly hold forth in meetings for consideration.

Um, no. Java's speed comes principally from
a) its non-object numeric types, and
b) from sophisticated VMs that do adaptive optimization (see Sun HotSpot
Server VM)

If by precompiling you mean compiling to bytecode then, no, this won't
of itself give great speed. Java has always compiled to bytecode, but the
early Sun reference VM - a bytecode interpreter - was still slow. It's
easy to write slow bytecode interpreters. YARV for Ruby is currently
also a slow bytecode interpreter.
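
To make point (a) concrete: in Ruby even integer arithmetic is a method
call on a full object, where Java's int is a primitive the VM can keep in
a register. A tiny illustration in plain Ruby (just a sketch):

# 1 + 2 is really the message :+ sent to the Integer object 1
puts 1 + 2          # => 3
puts 1.send(:+, 2)  # => 3, the same call spelled out explicitly
puts 1.class        # integers are objects with a class (Fixnum on 1.8)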

If you want to understand how Ruby can be made to run fast I recommend
reading about the Strongtalk VM and reading Urs Hölzle's thesis,
"Adaptive Optimization for Self: Reconciling High
Performance with Exploratory Programming",
available on the web as
http://www.cs.ucsb.edu/labs/oocsb/papers/urs-thesis.html
http://www.cs.ucsb.edu/labs/oocsb/papers/hoelzle-thesis.pdf

HTH
 
Charles Oliver Nutter

Eliot said:
Um, no. Java's speed comes principally from
a) its non-object numeric types, and
b) from sophisticated VMs that do adaptive optimization (see Sun HotSpot
Server VM)

If by precompiling you mean compiling to bytecode then, no, this won't
of itself give great speed. Java has always compiled to bytecode, but the
early Sun reference VM - a bytecode interpreter - was still slow. It's
easy to write slow bytecode interpreters. YARV for Ruby is currently
also a slow bytecode interpreter.

This is precisely why we've been able to get very good performance in
JRuby. Though we still have an interpreted mode, which runs slower than
Ruby 1.8, we also have a nearly complete Ruby-to-JVM-bytecode compiler
that executes consistently faster than Ruby 1.8 and in some cases faster
than Ruby 1.9. In general, the difficult task has been structuring the
bytecode and the call pipeline in such a way as to allow HotSpot to do
its optimization.

This also means that Java and JRuby and similar adaptive optimizing
runtimes require some "warm-up time". Java code will get faster as it
executes, but for short benchmarks it will usually be much slower than
its full potential. The same applies to JRuby, and as a result JRuby
will be better for longer-running processes (unless, of course, you
don't mind it being a little slow early on).

As far as Ruby vs Java...why not just use both: www.jruby.org

- Charlie
 
Phlip

Chad said:
I suspect you mean C++. I don't recall seeing much in the way of
evidence that Java had quite gotten as fast as C -- which is still about
twice as fast as C++ for many purposes.

C++ is only grossly slower than C in one situation: when someone uses C++
as a Very High Level Language to write a big object model.

For number crunching, C++ Template Metaprogramming can compete with C,
Assembler, and even Fortran.

And, as always, nothing beats simply picking the correct algorithm, first...
 
Robert Dober

If you want to understand how Ruby can be made to run fast I recommend
reading about the Strongtalk VM and reading Urs Hölzle's thesis,
"Adaptive Optimization for Self: Reconciling High
Performance with Exploratory Programming",
available on the web as
http://www.cs.ucsb.edu/labs/oocsb/papers/urs-thesis.html
http://www.cs.ucsb.edu/labs/oocsb/papers/hoelzle-thesis.pdf
http://citeseer.ist.psu.edu/rd/38028164,51308,1,0.25,Download/http://citeseer.ist.psu.edu/cache/papers/cs/3280/http:zSzzSzwww.sunlabs.comzSzresearchzSzselfzSzpaperszSzhoelzle-thesis.pdf/hlzle94adaptive.pdf

Robert
--
I'm an atheist and that's it. I believe there's nothing we can know
except that we should be kind to each other and do what we can for
other people.
-- Katharine Hepburn
 
Isaac Gouy

... and use
Lua. It's harder to program than Ruby but much easier than Java, and its
speed can compete with C.


Compete and lose.

"Lua is a tiny and simple language, partly because it does not try to
do what C is already good for, such as sheer performance, low-level
operations, or interface with third-party software. Lua relies on C
for those tasks."

Preface xiii, Programming in Lua
http://books.google.com/books?id=ZV...lua&btnG=Google+Search&sa=X&oi=print&ct=title
 
r

Nick said:
Which programming language is faster - Ruby or Java?

This is one of the things that will decide whether I use Ruby or Java so
help is appreciated greatly.

Decisions are hard... don't know what you're programming, but Java
plain sucks for web apps.

As for Ruby and web apps--it's 9 months later...had the training, got
the t-shirt, still not doing Ruby or Rails...how long can one carry a
suitcase...?

I don't get paid for anything but web apps, and other enterprise
stuff. So I can't say about other types of programs.

I for one am very tired of wanting to use Ruby for webapps...can't
believe I wasted 9 months downloading all this crap...

rm -rf ~home/ruby-stuff

cd /usr/local/Zend/htdocs

HTH,
-r
 
Isaac Gouy

Charles Oliver Nutter said:
This is precisely why we've been able to get very good performance in
JRuby. Though we still have an interpreted mode, which runs slower than
Ruby 1.8, we also have a nearly complete Ruby-to-JVM-bytecode compiler
that executes consistently faster than Ruby 1.8 and in some cases faster
than Ruby 1.9. In general, the difficult task has been structuring the
bytecode and the call pipeline in such a way as to allow HotSpot to do
its optimization.

This also means that Java and JRuby and similar adaptive optimizing
runtimes require some "warm-up time". Java code will get faster as it
executes, but for short benchmarks it will usually be much slower than
its full potential. The same applies to JRuby, and as a result JRuby
will be better for longer-running processes (unless, of course, you
don't mind it being a little slow early on).



I'm a bit confused about what you might mean, help me understand.

Do you mean small benchmark programs will be "much slower" when run
once rather than run 100 times? How much slower - 0.1x 10x 1000x ?

Do you mean we should not assume small program performance is a
reasonable estimate of large program performance?


Incidentally, I don't think "warm-up time" works as a description of
adaptive optimization - it makes it sound like a one-time thing,
rather than continual profiling, decompilation, and recompilation
adapting to the current hot spot.
 
Charles Oliver Nutter

Isaac said:
I'm a bit confused about what you might mean, help me understand.

Absolutely!

Do you mean small benchmark programs will be "much slower" when run
once rather than run 100 times? How much slower - 0.1x 10x 1000x ?

A description of JRuby internals will help here.

JRuby starts running almost all code in interpreted mode right now. This
is partially because the compiler is incomplete, and can't handle all
syntax, but also partially because parsing + compilation + classloading
costs more than just parsing, frequently so much more that performance
gains during execution are outweighed.

So JRuby currently has the bytecode compiler in JIT mode. As methods are
called, the number of invocations is recorded. After a certain
threshold, they are compiled. We do not do any adaptive optimization in
the compiler at present, though we do a few ahead-of-time optimizations
by inspecting the AST. Compiled code does not (with very few exceptions)
ever deoptimize.
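
A toy sketch of that call-counting trigger in plain Ruby (hypothetical
names and threshold, not JRuby's actual internals):

JIT_THRESHOLD = 50  # made-up number, not JRuby's real trigger

# Count invocations per method; pretend to "compile" once a method is hot.
$invocations = Hash.new(0)

def maybe_jit(name)
  $invocations[name] += 1
  if $invocations[name] == JIT_THRESHOLD
    puts "#{name} is hot after #{JIT_THRESHOLD} calls -- compile to bytecode"
  end
end

def work
  maybe_jit(:work)
  # ... body runs interpreted until the threshold is crossed ...
end

100.times { work }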

Because of the JRuby JIT, we must balance our compilation triggers with
the JVM's. Ideally, we get things compiled quickly enough for HotSpot to
take over and make a big difference without compiling *too many* methods
or compiling them too frequently and having a negative impact on
performance.

Isaac said:
Do you mean we should not assume small program performance is a
reasonable estimate of large program performance?

It depends how small. For example, if the top-level of a script includes
a while loop of a million iterations, it will not be indicative of an
app that has such loops in methods, as part of an object model, and so
on, because that top-level may not ever compile to bytecode (since it's
only called once) or may only execute once and never be JITed by the
JVM. Soon, when the compiler is complete, we could theoretically compile
scripts on load, but it remains to be seen if that will incur more
overhead than it is worth. And it still wouldn't solve the problem of
long-running methods or scripts that are only invoked once.

As an example, try running the two following scripts in JRuby and
comparing the results:

SCRIPT1:

t = Time.now
i = 0
while i < 10_000_000
  i += 1
end
puts Time.now - t
t = Time.now
i = 0
while i < 10_000_000
  i += 1
end
puts Time.now - t
t = Time.now
i = 0
while i < 10_000_000
  i += 1
end
puts Time.now - t
t = Time.now
i = 0
while i < 10_000_000
  i += 1
end
puts Time.now - t
t = Time.now
i = 0
while i < 10_000_000
  i += 1
end
puts Time.now - t

SCRIPT2:

def looper
  i = 0
  while i < 10_000_000
    i += 1
  end
end

5.times {
  t = Time.now
  looper
  puts Time.now - t
}

My results (in seconds):

SCRIPT1:
9.389
9.194
9.207
9.198
9.191

SCRIPT2:
9.128
9.012
2.001
1.822
1.823

This is fairly typical. And this should also be of interest to you for
the Alioth shootout benchmarks; simply re-running the same script in a
loop will not allow JRuby or HotSpot to really get moving, since each
run through the script will define *new* classes and *new* methods that
must "warm up" again. You must leave the methods defined and re-run only
the work portion of the benchmark.

Isaac said:
Incidentally, I don't think "warm-up time" works as a description of
adaptive optimization - it makes it sound like a one-time thing,
rather than continual profiling, decompilation, and recompilation
adapting to the current hot spot.

In our case, it's a bit of both. There's some warm-up time for JRuby to
compile to bytecode, and then there's the adaptive optimization of
HotSpot which is a bit of a black box to us. We are working to reduce
JRuby's warm up time to get HotSpot in the picture sooner.

- Charlie
 
