You can do things with multiple inheritance that can never be done
with interface implementations alone.
Nitpick, but I think Alan Turing would disagree with you
(http://en.wikipedia.org/wiki/Turing_completeness).
Can it be abused? Probably.
Does that justify tying the programmer's hands? I would argue no.
And I would argue "yes". It sounds like you're looking at this from a
programmer's perspective (and more precisely, from a highly skilled
programmer's perspective): "I can handle C++, so there's no reason for a
programming language like Java to exist, or if Java must exist, there's
no reason for it to be significantly different from C++."
To determine whether something justifies a decision-maker's decision,
look at it from the perspective of that decision-maker: Sun wanted to
invent a new programming language that would be attractive to large
corporations. The majority of programmers at the time were using C++, so
Java was designed with similar syntax. But Java didn't need to stay
backwards compatible with C, so the Java team removed a lot of the parts
they deemed "ugly", parts that C++ itself probably retained primarily for
backwards compatibility.
Additionally, the majority of programmers are mediocre (this is by
definition of "mediocre"), and were abusing and getting entangled in some
of C++'s more complex parts. For these particular programmers, things like
multiple-inheritance and other such features were doing more harm than
good, and so they were removed from Java.
By creating and releasing Java, Sun did not "tie" anyone's hands in
the sense of "removing" freedom: Any programmers who want to program in
C++ are free to do so. Instead, they added freedom: programmers who want
a guarantee that they'll never have to encounter MI and the other
"complex" features of C++ can now have it, by working in Java instead.
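For reference, the usual Java substitute for inheriting behavior from two parents is implementing two interfaces and delegating to composed helper objects. A minimal sketch (all class, interface, and method names here are made up for illustration):

```java
// Java forbids "class Duck extends Swimmer, Flyer", so the idiom is
// to implement multiple interfaces and delegate to helper objects.
interface Swimmer { String swim(); }
interface Flyer  { String fly();  }

class SwimmerImpl implements Swimmer {
    public String swim() { return "swimming"; }
}

class FlyerImpl implements Flyer {
    public String fly() { return "flying"; }
}

class Duck implements Swimmer, Flyer {
    private final Swimmer swimmer = new SwimmerImpl();
    private final Flyer flyer = new FlyerImpl();

    // Each interface method is forwarded to the composed helper.
    public String swim() { return swimmer.swim(); }
    public String fly()  { return flyer.fly();  }
}

public class Demo {
    public static void main(String[] args) {
        Duck d = new Duck();
        System.out.println(d.swim() + " and " + d.fly());
    }
}
```

This buys most of what MI buys for behavior reuse, at the cost of some boilerplate forwarding methods; what it cannot express is, for example, an implementation inherited through two concrete base classes.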
I find the statement that "explicit deallocation of memory is a bad
thing" to be laughable. It assumes that this MUST be done in C++. If
you look at boost shared pointers and the like you will find that
object lifetime can be easily and, more importantly, predictably
managed. There seems to be an implicit assumption, in this rather
childish statement and in the philosophy behind garbage collection, that
memory is the only resource that must be managed. File handles,
network connections, and other resources are just as important to
manage, and must be managed in a way that is more deterministic than
can be done by relying on garbage collection. The result is that, if
you code a class in Java that must manage one of these resources, you
MUST add methods that allow the application to release the resources
and these MUST be invoked before the object reference is released.
So, tell me again, how does GC really help here??
It doesn't. GC is meant for collecting memory-related garbage. For
things like network connections and file handles, you'd typically use a
"try-finally" statement to release those resources.
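To make that concrete, here is a small sketch of the try-finally idiom applied to a file handle (the temp-file setup is just scaffolding so the example is self-contained):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class TryFinallyDemo {
    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("demo", ".bin");

        FileOutputStream out = new FileOutputStream(f);
        try {
            out.write(42);
        } finally {
            out.close(); // the handle is released deterministically, not by GC
        }

        FileInputStream in = new FileInputStream(f);
        try {
            System.out.println(in.read()); // prints 42
        } finally {
            in.close(); // runs whether or not read() threw
        }

        f.delete();
    }
}
```

The finally block runs on both the normal and the exceptional path, so the non-memory resource is released at a predictable point regardless of when (or whether) the collector ever looks at the object.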
There are a lot of other "features" of Java that I find very
annoying. Examples:
- Lack of unsigned integer types. I realise that, with two's
complement math, the math operations between signed and unsigned are
all the same. However, I find that not being able to use unsigned
types results in sign extension in places where it was not expected
and results in casting/masking operations that only serve to clutter
the source code.
Has anyone really ever claimed this to be a "feature"? I've
occasionally been annoyed by this: You know, like when I'm writing a game,
and I want some data type which can store a bit more than an int, but
not so much as a long, and which will never store a negative value (e.g.
damage done via some weapon), and I feel a bit bad about wasting 4 extra
bytes by using a long instead of an int. But really, it's a minor
annoyance, and it usually passes as soon as I move onto other parts of the
game.
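The sign-extension surprise described above is easy to reproduce, along with the masking idiom it forces on you:

```java
public class SignExtension {
    public static void main(String[] args) {
        byte b = (byte) 0xFF;     // bit pattern 1111_1111

        int widened = b;          // sign-extends to -1: often a surprise
        int unsigned = b & 0xFF;  // the masking clutter: yields 255

        System.out.println(widened);  // prints -1
        System.out.println(unsigned); // prints 255
    }
}
```

With an unsigned byte type the mask would be unnecessary; in Java, `& 0xFF` (and `& 0xFFFF` for shorts) shows up wherever bytes are treated as raw values, e.g. when reading binary file formats.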
- Bit shift operations only appear to be implemented for the integer
type. The result: if I am trying to do bit shifts on something
smaller than an integer, a byte or a short, the arguments are promoted
to integers and the result of the shift is an integer. So in order to
assign that back to something smaller than an integer, guess what,
yet another cast is needed.
This may be just a personal perception, but bit-shifting seems like
one of those things which is getting "close to the hardware", and thus
outside of Java's area of expertise. The only domain I know where bit
shifting is really used from a mathematical (and thus machine-agnostic)
perspective is cryptography, but I understand that Sun provides some
library support for common cryptographic manipulations, so you shouldn't
need to write your own anyway. I've never used this part of the library,
so I don't know how good, or complete, it is, but from the limited study
of cryptography I've done, the number one lesson seems to be that you
probably shouldn't be writing your own cryptography routines anyway.
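For what it's worth, the promotion the complaint above describes is plain language behavior, not library behavior; a short sketch:

```java
public class ShiftPromotion {
    public static void main(String[] args) {
        byte b = 0x40;               // 0100_0000

        // b << 1 promotes b to int, so the result type is int...
        int asInt = b << 1;

        // ...and assigning back to a byte requires the extra cast
        // the original poster objects to; high bits are truncated.
        byte back = (byte) (b << 1);

        System.out.println(asInt);   // prints 128
        System.out.println(back);    // prints -128
    }
}
```

Note the truncation is not just clutter: `(byte) 128` wraps to -128, so the cast can silently change the value as well.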
My measure of the "goodness" of a language or its implementation is
whether I can accomplish the task I set out to do without having to
fight the language or its implementation. If I have to litter my code
with cluttering casts simply to produce the correct results or to
satisfy the childish demands of the compiler, I become annoyed. If
the language hobbles me rather than allowing me to express the design,
it also becomes annoying. My personal experience is that Java is
cluttered with these kinds of issues.
I think a lot of people feel the same way about, e.g., C++ or Python,
or any other language. E.g. "Memory leak? Can't the
compiler/runtime-environment/whatever see when I'm done with this object
and get rid of it for me?" or "I have to play with the whitespace to make
the compiler happy? WTF?"
I think that until AI has progressed enough for natural language to be
viable as a programming language, for every programming language there will
exist someone out there who will consider some of the "demands" of the
compiler to be "childish". Perhaps even with natural language, there will
be some pointy haired boss out there who will still consider certain
demands (e.g. logical self-consistency) to be childish. "Do what I mean,
not what I say".
It seems to be easy to forget that there are reasons for writing
programs that transcend the language or environment in which they are
developed. At the end of the day, my value to my employer is not in
the languages I have mastered but in what I can produce that can solve
our customers' problems and/or needs. To the extent that the language
and/or environment facilitates that, it becomes a valuable tool. To
the extent that the language/environment fetters that, it becomes less
valuable and less likely to be chosen as a tool. No language or
environment is free from blemish. Java has some real advantages but I
do not consider it a replacement for what can be done in C++.
And I don't think Java is marketed as a replacement for what *can* be
done in C++. There are some programs which are better expressed in C++,
and there are some programs that are better expressed in Java. Because of
momentum, there are a bunch of programs which would be better expressed in
Java, but which are currently being written in C++. It's *those* programs
for which Java is being pushed as a replacement for C++.
- Oliver