Why Ruby?

Marc Weber

 - you never know when syntax or similar errors occur.
If you compare PHP with Java, you can extend abstract classes in PHP in
ways that only fail with a runtime error. I'm sure there are some PHP
analysis tools which can detect this kind of error before running your
web application.
Available since PHP 5.3
Heh, thanks. I only knew about create_function().
For those who are too lazy to look it up (PHP lambda summary):
Syntax looks like this:

[static] function (/*args*/) use (&$x, $y) { /* body */ };

You can use "use" to put variables into the scope of the function body.
Note that & will create a reference to the variable, no matter what type
its contents have. Variables that exist in the outer scope but are not
listed in "use" will be null inside the body. [static] is used to not
capture the $this reference.
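
For comparison, Ruby closures capture surrounding variables by reference
automatically, with no "use" clause needed. A quick sketch:

```ruby
x = 10
double = lambda { |n| n * x }  # captures x from the enclosing scope
x = 20                         # the closure sees this update
double.call(3)                 # => 60, not 30
```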

So maybe PHP gets closer. Can you do this as well now?
echo new FooBar()->fun();

About parallelism: I only used ruby to run some php -l commands to check
syntax in parallel. It's perfect for this kind of task: Create a quick
work list, start 3 threads and pop items off from that list.
That's why I recommended learning Ruby rather than PHP: PHP can't get
this job done, even though it is a scripting language.
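
The work-queue pattern described above can be sketched like this (the
file names are hypothetical, and the actual `php -l` call is stubbed
out):

```ruby
require "thread"

# Build a quick work list of files to check.
work = Queue.new
%w[a.php b.php c.php d.php e.php].each { |f| work << f }

done = Queue.new
threads = 3.times.map do
  Thread.new do
    loop do
      file = begin
        work.pop(true)           # non-blocking pop; raises when the queue is drained
      rescue ThreadError
        break
      end
      # system("php", "-l", file)  # the real syntax check would go here
      done << file
    end
  end
end
threads.each(&:join)
done.size                        # every item processed exactly once
```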

Marc Weber
 

David Masover

I realize I'm late to the party, but...

I've asked several friends and associates (application developers) what
programming language they recommend for new development. The most
prevalent answer was Ruby (with Ruby-On-Rails a close second). This was
surprising to me, since my understanding is that Java and C (et al) are
most prevalent.

Quite possible, but it depends entirely on what you're doing.
Is Ruby a good programming language for general purpose usage?

That depends what you're trying to do.
I don't want to skew responses by specifying a particular application or
usage. However, please DO respond with qualified answers if you feel
that is appropriate.

A quick analysis of Ruby's weaknesses:
- Even once you hack in support for Lisp-like macros, it's likely not going
to feel as natural as Lisp.
- Slow. Not as slow as people suppose it is, but it's not C, or even Lisp.
- Can be difficult to bundle into one exe, so it _may_ be difficult for
Windows desktop applications.

Now, I never got enough into Lisp to get really good at macros, and Ruby's
syntax is still flexible enough to do interesting things with it -- in fact,
at least a few of the great examples I've seen of Lisp macros can be done in
Ruby, though they obviously aren't macros in Ruby.

So my answers are mostly going to be qualified by the other two concerns. Ruby
is my favorite language in every other respect, so I'm going to say, choose
Ruby for everything except places where you actually need vertical performance
(performance on a single machine) -- but actually measure it, don't just
assume! -- and for places where your target output is a single .exe, unless
you can figure out a better way to bundle a Ruby app for Windows.

This does mean, by the way, that Ruby is ideal for web development. You
control the installation (so you just make sure to get a web host which
supports Ruby, or which gives you enough control to use it), and you can
always throw more hardware at it, which is cheaper than developer time. There
are exceptions to this rule, but when you actually get to the point where
you're so big that it's worth a few months of developer time to shave 10% off
in performance, that's a nice problem to have, and it's worth getting there
before your competitors do.
That is,
is it worth the time and effort to become proficient?

That's a different question.

I never use Lisp, and I still consider it worth the time and effort to at
least learn the language. Ruby is very easy to pick up, and you should be able
to see very quickly whether or not it's worth the time and effort to become
_more_ proficient.

If you already know Java, many concepts will translate right over, but the
beauty of Ruby's syntax will make it hard for you to look at a Java program
again.

The biggest reason you should learn Ruby is to understand what it means for
code to look pretty, and why you might want your code to look pretty. Look at
some Ruby on Rails examples, and try to keep in mind that Rails is written in
pure Ruby -- that is, Rails is a Ruby library that adds this kind of thing:

30.seconds.from_now
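
A minimal sketch of how a library can add that reading, using Ruby's
open classes -- this is not the actual ActiveSupport implementation,
just an illustration of the mechanism:

```ruby
class Integer
  def seconds
    self                      # treat the number as a duration in seconds
  end

  def from_now(now = Time.now)
    now + self                # Time arithmetic works in seconds
  end
end

30.seconds.from_now           # a Time roughly 30 seconds in the future
```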

However...
Again, I don't want to sway responses by
specifying a background for the learner. Might be a relatively new
student of programming, might be an old-timer with decades of
development experience.

This is also not something you can remove from the question. Again, if you
already know Java, some of the object model will be easier to understand, like
the concept of object references. If you know C++, it may take a bit for you
to understand why it's weird to ask about "passing by value" in Ruby, versus
"passing by reference".

Similarly, if you're just starting out, it depends who you ask -- I would say
you should learn Ruby, so you get excited about programming, and so you
actually start programming faster, without having to learn about nasty low-
level things like pointers and memory allocation. Others would say just the
opposite -- you should start low-level, so that by the time you get to Ruby,
you understand exactly what the language is doing under the covers. Either
way, you should eventually learn both high-level and low-level languages, for
the same reason -- you want to understand just what you're asking the language
to do for you.

On the other hand, if you are already incredibly proficient in something like
assembly language or COBOL, you might find that you've already found your
niche, and your job will likely not become obsolete -- so you might want to
learn Ruby as a curiosity, but it's questionable how useful it will be to the
actual work you do. If you're already incredibly proficient in Lisp, Ruby
might be a hard sell, because there are specific, measurable ways that Ruby is
less powerful than Lisp -- the biggest reason I prefer Ruby is syntax, and
most Lisp people _like_ s-expressions.
 

Brian Candler

Ruby is my language of choice for many applications because of the
following:

(1) the lack of boilerplate. It encourages you to partition your problem
into small digestible bits because you only need to type

class Foo
...
end

to get a class - virtually zero effort. Compare the line noise required
in Perl to do the same thing.

(2) the supremely sane data model - *all* values are references to
objects. (In Perl you have Scalars, Arrays, and References to Arrays,
the latter being held in a scalar variable, and a load of special case
nonsense like filehandles and typeglobs. C++ is worse; you have
integers, pointers to integers and references to integers).
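
A quick illustration of that model -- every variable holds a reference
to an object, so two variables can name the very same one:

```ruby
a = "hello"
b = a                         # b now references the same String object
b << " world"                 # mutate through b...
a                             # => "hello world" -- visible through a too
a.equal?(b)                   # => true: literally the same object
```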

(3) pure personal preference, e.g. I don't like python's run-off-a-cliff
indentation syntax. I had to use it in Occam years ago, and I didn't
like it then either.

(4) the ruby 1.8 C interpreter ("MRI") is portable and runs on lots of
things, even tiny embedded systems with 4MB of flash (e.g. OpenWrt)

(5) it's easy to work in different programming styles. Functional
languages made a lot more sense once I was familiar with things like
blocks and enumerables in ruby.
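
For instance, the usual functional trio maps directly onto blocks and
Enumerable:

```ruby
squares = [1, 2, 3, 4].map    { |n| n * n }          # => [1, 4, 9, 16]
evens   = (1..10).select      { |n| n.even? }        # => [2, 4, 6, 8, 10]
total   = [1, 2, 3].inject(0) { |sum, n| sum + n }   # => 6
```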

(6) it's not Java.

The downsides for me are a lot of accumulated warts and special cases in
the language, generally aimed at "doing the right thing" but sometimes
catching you out. Examples: auto-splat, lambda-vs-proc-vs-block,
different behaviour of ^ and $ in regular expressions. I also detest
ruby 1.9's String handling which means I'm staying on 1.8; I know I'll
ultimately have to move to a different language entirely.

The documentation varies from poor to bad. The language is not
formally defined, neither its syntax nor semantics, and sometimes you
just have to treat it like a black box and experiment to find out how
things actually behave.

Sometimes it can be hard to find your way around other people's code,
because ruby doesn't enforce that class Foo::Bar is defined in file
foo/bar.rb (or that it's even defined in source code at all; it might be
defined dynamically at run time). You're reliant on the good sense of
the person who wrote the code to organise it sensibly, and you can write
bad code in Ruby just as in any other language.

Finally, in some spheres there are simply better tools for the job. If
you want to handle ten thousand concurrent client connections then
erlang is probably a better fit (yeah, there are event-driven libraries
in Ruby which with effort can achieve the same, but this is an area
where erlang excels). Ditto if you want to build systems with huge
uptime and zero-downtime live code upgrades. But try deciphering an
erlang backtrace and you'll wish you were back with ruby.

HTH,

Brian.
 

Seebs

I also detest
ruby 1.9's String handling which means I'm staying on 1.8; I know I'll
ultimately have to move to a different language entirely.

I'm stumped on this one. Overall, I rather prefer 1.9's string model, and
wish it had been that way all along. It makes more sense to me that
"foo"[1] == "o" than that "foo"[1] == 111. It is a little surprising if I
think of it from a C perspective, which is certainly my native perspective,
but overall it's a cleaner answer and more consistent with string handling.
In particular, it's cleaner because it's consistent with slices of more than
one character. :)

Or is there something else in the new String that you don't like?

-s
 

Brian Candler

Seebs said:
Or is there something else in the new String that you don't like?

It's as complex as hell. I took the trouble to document about 200
behaviours of String in 1.9, and I still haven't really scratched the
surface. http://github.com/candlerb/string19/blob/master/string19.rb

The scariest bit for me is that a simple expression like

a = b + c

(where b and c are both Strings) can raise exceptions. Writing your
program so that you can be *sure* it won't raise an exception is hard.
Even the same program running on two different computers with the same
version of ruby 1.9 and the same input data may crash on one but not on
the other.
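
For anyone who hasn't run into it, this is the kind of thing meant here:
concatenating two Strings whose encodings are incompatible (and which
both contain non-ASCII bytes) raises in 1.9.

```ruby
utf8   = "caf\u00E9"                              # encoding: UTF-8
latin1 = "caf\xE9".force_encoding("ISO-8859-1")   # same text, different bytes

begin
  utf8 + latin1
rescue Encoding::CompatibilityError => e
  e.class   # the "simple" + raised an exception
end
```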

I don't want to have to expend effort working around artefacts of the
language, especially when dealing with binary data.
 

Marnen Laibow-Koser

Brian said:
It's as complex as hell. I took the trouble to document about 200
behaviours of String in 1.9, and I still haven't really scratched the
surface. http://github.com/candlerb/string19/blob/master/string19.rb

The scariest bit for me is that a simple expression like

a = b + c

(where b and c are both Strings) can raise exceptions.

So what?
Writing your
program so that you can be *sure* it won't raise an exception is hard.

Not at all. That's what rescue is for.
Even the same program running on two different computers with the same
version of ruby 1.9 and the same input data may crash on one but not on
the other.

I don't want to have to expend effort working around artefacts of the
language, especially when dealing with binary data.

Binary data doesn't belong in Strings. Period. The only reason you
have it in there in the first place is that 1.8's piss-poor String
handling allows you to treat strings as byte arrays.

I haven't used 1.9 yet, so take this with a grain of salt, but my
impression is that encoding-aware Strings that aren't byte arrays is
exactly the right thing for Ruby to have.

Best,
-- 
Marnen Laibow-Koser
http://www.marnen.org
(e-mail address removed)
 

Brian Candler

Marnen said:
Binary data doesn't belong in Strings. Period.

And Ruby doesn't provide any other suitable data type. At least, IO#read
and #write only operate with Strings.

Python 3 is going down the route of two different data types: one for
binary data, one for character data. Erlang similarly has "binaries",
but also lists of integers (if you want a list of codepoints).
 

Michal Suchanek

It's as complex as hell. I took the trouble to document about 200
behaviours of String in 1.9, and I still haven't really scratched the
surface. http://github.com/candlerb/string19/blob/master/string19.rb

The scariest bit for me is that a simple expression like

    a = b + c

(where b and c are both Strings) can raise exceptions. Writing your
program so that you can be *sure* it won't raise an exception is hard.
Even the same program running on two different computers with the same
version of ruby 1.9 and the same input data may crash on one but not on
the other.

I don't want to have to expend effort working around artefacts of the
language, especially when dealing with binary data.

The complexity stems from the inherent issues of handling strings in
multiple encodings. In 1.8 the support was nearly non-existent. In 1.9
the support is improved at the cost of increased complexity.

I think it would be nice if somebody wrote a library that adds
autoconversion to strings. While it's not hard to hack the support for
a particular piece of code doing it as a general library would
probably require a bit of thinking, especially since we still don't
have namespaces.

You can't do much better than what 1.9 has. In 1.8 a = b + c was
guaranteed to not throw an exception but it could easily produce
complete nonsense as result in exactly the cases where 1.9 would throw
an exception. Obviously, you can override the 1.9 + method to do a
conversion automatically instead and face the consequences if the
encoding information in the string was wrong.

I agree that having to deal with this for binary data as well is the
somewhat unfortunate result of sharing the String class for both text
strings and binary data. The upside of such sharing, especially in 1.8,
which lacked the subtyping of String by encoding, was the ability to
interpret binary data as text when looking for textual magic such as
GIF89a.
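
For example, checking a GIF header's magic with ordinary string methods
still works fine on raw bytes:

```ruby
# First bytes of a (hypothetical) file read in binary mode
header = "GIF89a\x2C\x00".force_encoding("ASCII-8BIT")
header.start_with?("GIF89a")   # => true -- text-style matching on binary data
```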

You can't have everything at once. A simple solution fails for some
more complex problems, a more complete solution has to be set up for
any particular simple case.

Thanks

Michal
 

Seebs

It's as complex as hell. I took the trouble to document about 200
behaviours of String in 1.9, and I still haven't really scratched the
surface. http://github.com/candlerb/string19/blob/master/string19.rb
Ahh.

The scariest bit for me is that a simple expression like

a = b + c

(where b and c are both Strings) can raise exceptions. Writing your
program so that you can be *sure* it won't raise an exception is hard.

I'd rather get an exception than silently get incoherent output, though.
I don't want to have to expend effort working around artefacts of the
language, especially when dealing with binary data.

To some extent, I agree, but I was under the impression that you could
address this by specifying a desired encoding.

-s
 

Seebs

I haven't used 1.9 yet, so take this with a grain of salt, but my
impression is that encoding-aware Strings that aren't byte arrays is
exactly the right thing for Ruby to have.

It is certainly a useful thing to have, but I'm not sure that it's a good
idea to do away with byte arrays.

I have a program which listens for UDP packets containing a hunk of data,
which is a string of binary bits and pieces, such as 3-byte integer values,
flag bits, and so on. I can't change the format of the packets. I have
some Ruby code which is doing the obvious thing -- taking the byte arrays
that are returned as string objects by the underlying syscall, and managing
it using unpack(), etcetera.

If strings are not the right tool for holding hunks of binary data, such as
those you'd get from performing a raw binary read(2) on a data file, what is?
The array type seems INCREDIBLY expensive for this -- do I really want to
allocate over two thousand objects to read in a 2KB chunk of data?
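
The unpack() approach is cheap. A sketch of pulling a 3-byte big-endian
integer and a flag byte out of such a packet (the packet layout here is
hypothetical):

```ruby
packet = [0x01, 0x02, 0x03, 0b1010_0000].pack("C*")  # 4 raw bytes
hi, mid, lo, flags = packet.unpack("C4")             # four unsigned bytes
value = (hi << 16) | (mid << 8) | lo                 # reassemble the 3-byte integer
```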

-s
 

Tony Arcieri


I'd rather get an exception than silently get incoherent output, though.

Amen to that; nothing worse than PHP's "3 dog night" + 2 = 5
 

Michal Suchanek

I'd rather get an exception than silently get incoherent output, though.


To some extent, I agree, but I was under the impression that you could
address this by specifying a desired encoding.

Unless you forget ;-)

Thanks

Michal
 

Jonathan Schmidt-Dominé - Developer

Hi!

Ruby is nice when you want a straightforward, easy-to-use language. You
can write stuff very quickly, there are few restrictions, and it is very
consistent (OOP). There are also some nice specials like blocks and
mixins. Ruby has good support for additional libraries (e.g. Qt+KDE for
GUI). But also when you simply want to calculate some things, you can
easily write 100 lines of Ruby. You should also consider that Ruby is
terribly slow. E.g. if you want to try a few billion cases in an
algorithm, that is often possible in realistic time in C++ or even Java,
but not in Ruby. It is even slower than scripting languages like Python
or Falcon. Currently C++ and Ruby are my favourite languages: C++ does
not know a lot of limits, is very fast, and provides very cool things
with templates etc. (compile-time meta-programming), but it is also a
bit complicated and inconsistent; Ruby is a really nice toy I use
whenever it is easily possible.
A lot of people say that C++ is horrible, very inconsistent and
complicated. But that is not true. Most stuff is very consistent, but it
takes more time to learn it; it is easier to think like Ruby than to
think like C++.
And please do not learn Java: it is simply a stupid language,
inconsistent like C++, not as dynamic as Ruby, not as fast as C++, and
without as many compile-time capabilities.

Jonathan

------------------------
Automatically inserted signature:
Long live freedom!
Stop the use of proprietary software!
Operating System: GNU/Linux
Kernel: Linux 2.6.31.8-0.1-default
Distribution: openSuSE 11.2
Qt: 4.6.2
KDE: 4.4.62 (KDE 4.4.62 (KDE 4.5 >= 20100203)) "release 2"
KMail: 1.13.0
http://gnu.org/
http://kde.org/
http://windows7sins.org/
 

Sebastian Hungerecker

It makes more sense to me that
"foo"[1] == "o" than that "foo"[1] == 111. It is a little surprising if I
think of it from a C perspective, which is certainly my native perspective,
but overall it's a cleaner answer and more consistent with string handling.
In particular, it's cleaner because it's consistent with slices of more than
one character. :)
By that logic array[index] should return a single-element array
instead of the element itself to be more consistent with array
slicing.
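
The contrast is easy to see side by side (1.9 semantics):

```ruby
arr = [10, 20, 30]
arr[1]        # => 20: the element itself
arr[1, 1]     # => [20]: a one-element slice

"foo"[1]      # => "o" in 1.9 -- effectively a one-character slice
"foo"[1, 1]   # => "o" as well, so [i] and [i, 1] coincide for String
```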
 

Aaron Gifford

Strings are just fine for binary data, IMNSHO. That's what 'BINARY'
encoding is there for.

Is this a poll? *laugh* It's starting to sound like one.

Aaron out.
 

Marnen Laibow-Koser

Jonathan said:
Hi!

Ruby is nice when you want a straightforward, easy-to-use language. You
can write stuff very quickly, there are few restrictions, and it is very
consistent (OOP).
Yup.

There are also some nice specials like blocks and mixins.

Those aren't specials; they're core language features.
Ruby has good support for additional libraries (e.g. Qt+KDE for GUI).
But also when you simply want to calculate some things, you can easily
write 100 lines of Ruby. You should also consider that Ruby is terribly
slow. E.g. if you want to try a few billion cases in an algorithm, that
is often possible in realistic time in C++ or even Java, but not in
Ruby.

Depends on the implementation. MRI is slow (I wouldn't say "terribly"
slow). Ruby EE and YARV are faster. JRuby is probably faster yet. All
are plenty fast enough for most general-purpose applications.
It is even slower than scripting languages like Python or Falcon.
Currently C++ and Ruby are my favourite languages: C++ does not know a
lot of limits, is very fast, and provides very cool things with
templates etc. (compile-time meta-programming), but it is also a bit
complicated and inconsistent; Ruby is a really nice toy I use whenever
it is easily possible.
A lot of people say that C++ is horrible, very inconsistent and
complicated. But that is not true. Most stuff is very consistent, but it
takes more time to learn it; it is easier to think like Ruby than to
think like C++.

Sorry, no. C++ looks inconsistent because it is: it's C with some
object orientation bolted on.
And please do not learn Java: it is simply a stupid language,
inconsistent like C++, not as dynamic as Ruby, not as fast as C++, and
without as many compile-time capabilities.

Java's more consistent than C++, and more portable. I don't much like
Java, but I'll use it over C++ any day. And of course the JVM is
fabulous when coupled with a *decent* language like JRuby.
Jonathan


Best,
-- 
Marnen Laibow-Koser
http://www.marnen.org
(e-mail address removed)
 

Bill Kelly

Seebs said:
I'd rather get an exception than silently get incoherent output, though.
Likewise.

It seems to me encodings are less artifacts of *the* language and more
artifacts of *language*.
To some extent, I agree, but I was under the impression that you could
address this by specifying a desired encoding.

Indeed, one can force_encoding ASCII-8BIT, if one wants "a = b + c" to
simply concatenate bytes without complaining that one may be jamming two
incompatible encodings together.

Also, reading a file opened in "rb" mode returns strings with encoding
already set to ASCII-8BIT.

So it's still possible to treat strings as binary in 1.9.
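
A small sketch of both escape hatches:

```ruby
require "tempfile"

b = "\xFF\xFE".force_encoding("ASCII-8BIT")
c = "\x00\x01".force_encoding("ASCII-8BIT")
(b + c).bytesize              # 4: plain byte concatenation, no exception

# Files opened in binary mode come back tagged ASCII-8BIT already:
enc = Tempfile.create("blob") do |f|
  f.binmode                   # equivalent to opening with "rb"/"wb"
  f.write(b + c)
  f.rewind
  f.read.encoding             # ASCII-8BIT, i.e. raw bytes
end
```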


If it were really true that at any given point in my program, I can't
be sure that string 'b' doesn't have some random, incompatible encoding
from string 'c', then I think I'd agree with Brian that string handling
in 1.9 has become unreasonably complex.

But in practice, so far it has worked well for me to transcode to UTF-8
at I/O boundaries. (Or, to use "rb" or force ASCII-8BIT if I know I'm
specifically dealing with binary data.)

So far, I'm just not experiencing much pain in dealing with encodings in
1.9. And the places I have encountered exceptions, have been occasions
when I really would have been jamming incompatible encodings together,
and I was glad to know about it rather than be producing bogus data.

(In this case I was reading lines via popen() from a program ostensibly
outputting ISO_8859_1, but which under some circumstances, for some
fields, could output UTF-8 or MACROMAN. So yes, I had to do some extra
work at the I/O boundary to try to handle such cases as well as possible;
but that is hardly Ruby's fault.)


Regards,

Bill
 

Seebs

It makes more sense to me that
"foo"[1] == "o" than that "foo"[1] == 111. It is a little surprising if I
think of it from a C perspective, which is certainly my native perspective,
but overall it's a cleaner answer and more consistent with string handling.
In particular, it's cleaner because it's consistent with slices of more than
one character. :)
By that logic array[index] should return a single-element array
instead of the element itself to be more consistent with array
slicing.

Hmm. You have a point.

I guess, to me, "foo"[1] should be an o. If printing it yields a number,
instead of the letter o, something has gone wrong.

-s
 

Sebastian Hungerecker

I guess, to me, "foo"[1] should be an o. If printing it yields a number,
instead of the letter o, something has gone wrong.
We agree on that. I've always thought ruby should have a Char class, so
"foo" could behave basically like a collection of Char. At least as far
as [] is concerned.
 

Seebs

I guess, to me, "foo"[1] should be an o. If printing it yields a number,
instead of the letter o, something has gone wrong.
We agree on that. I've always thought ruby should have a Char class, so
"foo" could behave basically like a collection of Char. At least as far
as [] is concerned.

That might work.

I think the reason you need a single-character-string now is that things
like UTF-8 may make it ambiguous what the next "character" is, and not all
characters are a single byte.

So there's really two *separate* semantic changes.

1. Subscripting gives textual data rather than raw numbers.
2. Sometimes that textual data isn't a single byte.

These are related, but not quite the same. The issue, I think, is that the
first implies the second, because some encodings have single bytes which are
not a character, but rather, the preamble to a character.
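
Concretely, with a multi-byte character in UTF-8:

```ruby
s = "caf\u00E9"       # "café"
s.length              # => 4 characters
s.bytesize            # => 5 bytes: "é" is two bytes in UTF-8
s[3]                  # => "é", a whole character, never half of one
```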

-s
 
