Help me!! Why is Java so popular

  • Thread starter amalikarunanayake

Mark Thornton

rad said:
How can it be negative? I'm not saying you're wrong, but how can any
byte-coded language outperform a binary language if they are doing the
same thing? It can't, because you have to convert the byte code to
the native binary stream before you can execute it. So I'm thinking
you mean certain algorithms are more efficiently handled by the JVM?
Please elucidate -- I heard someone say Java memory management now
exceeds C and I thought it was an interesting notion and probably
related to some ingenious optimizations in memory management algorithms,
though I honestly don't know.

Garbage collection can have an advantage in multi threaded applications,
and in Java we can have exact GC. The JIT can optimise (and inline) the
code you are actually using today. I have code where part of the computation
is represented by an interface and the implementation selected depends
on the data being processed.
In a compiled language you can't optimise across the interface call
whereas with Java you can. Even better you can generate byte code at run
time (e.g. compile an expression typed by the user) and then the JIT can
compile that to machine code and if it is simple enough inline it.
Java can devirtualise a method call if at some point you happen to have
only one implementation in the JVM.
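Mark's devirtualisation point can be sketched like this (the names are hypothetical; the behaviour described is the JIT's optimisation of a call site that has only ever seen one implementation):

```java
// Sketch of the pattern Mark describes: a hot loop calling through an
// interface. If the JVM has loaded only one implementation, the JIT
// can devirtualise and inline the call -- an optimisation a statically
// compiled binary cannot perform across a module boundary.
interface Transform {
    double apply(double x);
}

class Doubler implements Transform {   // the only implementation loaded
    public double apply(double x) { return x * 2.0; }
}

public class DevirtDemo {
    static double sum(Transform t, double[] data) {
        double total = 0.0;
        for (double d : data) {
            total += t.apply(d);       // candidate for devirtualisation
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(new Doubler(), new double[] {1.0, 2.0, 3.0}));
        // prints 12.0
    }
}
```

If a second implementation is later loaded, the JIT deoptimises and falls back to a real virtual dispatch, so correctness is preserved.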
No doubt, and Java's fine messaging implementation and rich set of
protocol support, e.g., make it a good vehicle for such things. A
service I'm fine with -- a utility I need to fire up over and over,
not so fine with that. I wouldn't write that in Java.
If it is something you fire up manually then the startup time isn't
important (takes you longer to type the command). If it is run from a
script, then it rather depends on the script (i.e. if the script system
maintains a JVM to execute little tasks in, the startup time can again
be negligible).
At one time there was a Java compiler that let you go from Java
to .EXE. I used it quite a bit, although the .EXEs it generated were
pretty fat for the functionality they implemented. Then again Java's
not about creating .EXEs, so that didn't surprise me.
Such things still exist but have always struggled to match the
performance of regular JVMs.
How does one get Java to run faster than a compiled language?
Take advantage of its strength, the ability to compile dynamically
loaded (or created) code and optimise the whole without regard for
module boundaries. Also take advantage of garbage collection --- its
performance has dramatically improved in recent years.

Mark Thornton
 

raddog58c

One point you're missing (and it doesn't seem you're
alone) is that there is no need to shut down the JVM after
your "utility" is finished. You can just leave the JVM
sitting there, already warmed up and ready to go, just
waiting to be told to load another class.

This does presume the JVM will be needed, and if it is that's fine.
But it uses a fair amount of memory that other applications are now
locked out from -- either that or it has to be paged out, another hit
on system performance.
As a concrete example of this approach, consider a
browser running a sequence of applets, one after another.

Right, if the machine is running a large Java app like an application
web server, or if all your code snippets are Java, then you'd
capitalize on the ready-to-go scenario of an already up and running
JVM.
The only actual data I've seen on this topic is now
quite old, dating from the first few years of Java. IIRC
it was an article in CACM, describing an experiment in
which several dozen computer science students wrote programs
using various languages: Java and C++ and C (I think). The
programs were then assessed for correctness and performance.
The average performance of the Java programs (I forget how
they defined "average" and "performance") was poorer than
that of programs in the other languages, *but* the variation
between languages was much smaller than the variation between
individual programmers. That is, performance depended much
more on programmer skill than on language choice. So the
answer to your question "How does one get Java to run faster"
may simply be "Write better code."

I believe the article was from an age before JIT, back
when Java really was a purely interpreted language. I don't
know whether the experiment has been repeated with more
recent implementations.

This kind of reminds me of my first stab at writing a java program
that I'll share -- feel free to scroll by if this is not of
interest...

My first experience with Java was about 1996 when for grits and shins
I wanted to compare Java against C, since we were doing a lot of C and
Java was this "new kid on the block." So I decided to do a program
that had a reasonable balance of I/O vs computational load as a means
to compare -- it was a searching function a la "grep" to scan log files
on a telephony switch. The logs were over a megabyte with ASCII text
-- pretty standard "logs" if you will.

Both programs required a case-sensitive string enclosed in quotes as
input. I really didn't know any of the particulars of Java, but since
it was block structured I decided for a "fair" comparison I'd use the
same algorithm in both. The algorithm looked more or less like this:

int line = 0;
open file
while( ! eof )
{
    buffer;
    ++line;
    if( readline(buffer) == endoffile )
        eof = true;
    else if( buffer contains string )
        print "found string on line #" line
}
close file;

The print was a printf in C and System.out.println in Java. In C
buffer was declared as "char buffer[nnn]" and in Java it was a "String
buffer" -- for both it was declared inside the loop. The C search
used strstr(buffer, searchStr) and Java used
buffer.indexOf(searchStr).

I know a lot of Java experts are shaking their heads, but remember I
didn't know the nuances of Java; I just knew it was block structured,
and perusing a Java reference book was enough to get it to compile.

The benchmarking test used an RS/6000 with AIX, and the results were
staggering. The C program could search the entire log file and
produce the desired output in about 18 seconds. The Java version ran
for over 11 minutes before it finally blew off because it was out of
memory. I tried and retried and I could not get it to run through to
completion.

Of course today I know using a String was bad and that a StringBuffer
would have been better, plus declaring an Object inside the loop was
causing GC to run nonstop I imagine. I hadn't used any kind of
BufferedReader -- I don't remember what I used to be honest, but it
wasn't the best choice.
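For what it's worth, a modern rewrite of that exercise (a sketch with made-up names, not the original 1996 code) avoids the expensive mistake, which was unbuffered reading more than anything else:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

// Sketch of the "grep" exercise done with a BufferedReader -- the
// choice that matters most for I/O-heavy code like this. A plain
// String per line is fine; readLine() hands back a fresh one anyway.
public class LogGrep {
    // Returns the 1-based line numbers on which searchStr occurs.
    static List<Integer> grep(Reader source, String searchStr) throws IOException {
        List<Integer> hits = new ArrayList<>();
        int line = 0;
        try (BufferedReader in = new BufferedReader(source)) {
            String buffer;
            while ((buffer = in.readLine()) != null) {
                ++line;
                if (buffer.contains(searchStr)) {
                    hits.add(line);
                }
            }
        }
        return hits;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(grep(new StringReader("foo\nbar foo\nbaz"), "foo"));
        // prints [1, 2]
    }
}
```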

So admittedly, I made a lot of java "rookie" mistakes.

Anyways, the exercise was valuable, as I realized Java simplified some
things, but to be good it required some meta knowledge that went
beyond syntax and semantics -- it was very important what objects you
used, where you put them, and that you understood the side effects
caused by the objects (sync vs non-sync, immutable, etc) you used.

For those who only code in Java you're probably saying to yourself
"okay, everyone knows that, so your point is?" -- but if you use most
other languages, a buffer is pretty much a buffer, and whether it's
declared as an object or a character array doesn't have such a
dramatic effect on performance, so outside of the Java community it's
compelling. I went into my exercise unaware of this difference, and
needless to say I wasn't highly impressed with Java at that time.

Java's come a long way since then, and I've come a long way with Java,
too. I'm still learning more about the language and its nuances, and
I do like it. But I'm not so in love with any language, from C to
shining J, to explain away issues if they exist, so that's why I'm
intrigued by performance comparisons where java is said to outperform
natively compiled languages. On a fundamental level that's
impossible; but through ingenuity and circumstance, they do some pretty
amazing things with the language.

To me it still boils down to understanding the problem you're trying
to solve and selecting the best algorithms, language, architectures,
etc, that solve the problem. And I believe fundamentally in realizing
this is not the only problem to be solved, so keep in mind the impact
your solution will have on the other solutions running before,
concurrently and afterwards. Sometimes Java is ideal; other times
it's a bad choice because it's a heavyweight runtime; blindly applying
C, Java or whatever to every problem space isn't very wise.

At any rate, that's why I don't write much Java for my desktop at this
time -- most of what I run does not require a JVM, so to use Java + a
JVM to solve the quick-n-dirties on my workstation would be like
hunting mice with a howitzer. I write lots of Java in the Enterprise
world, on web servers, and to provide distributed services, all of it
engaged within the Spring framework, and it's extremely flexible, easy
to use, and nice.

Everything does have its place.

I do appreciate the responses... thanks!
 

Mark Thornton

raddog58c said:
On Feb 7, 11:14 am, "Chris Uppal" <[email protected]
THIS.org> wrote:
disadvantage out of the gate. The late binding to environment could
help close the gap, but that's not guaranteed because the .EXE can be
precompiled for the target deployment environment and if so the race
is over.

The single/multi processor state can be changed after an application has
been installed. A JVM will adjust accordingly, but what happens to your EXE
that was selected/compiled for the single processor that existed at
install time? There used to be an issue with maths coprocessors (and may
be again if AMDs ideas surface in a product). I think it is still
possible for processor upgrades to add SSE3 or similar capability.

Mark Thornton
 

Lew

raddog58c said:
This does presume the JVM will be needed, and if it is that's fine.
But it uses a fair amount of memory that other applications are now
locked out from -- either that or it has to be paged out, another hit
on system performance.

Do you use Windows?

- Lew
 

Chris Uppal

Mark said:
A JVM will adjust accordingly, but what happens to your EXE
that was selected/compiled for the single processor that existed at
install time?

Which raises the question of how much extra testing is needed to account for
the fact that the JITer will adapt to the host processor and thus be executing
different programs on different machines?

Personally, I think the answer is "damn little" -- except in special cases.
The JVM people have shown themselves to be good at producing implementations
which behave the same despite the adaptive behaviour (except considerations of
speed, naturally ;-) so I'd put my testing budget into looking for the mistakes
/we/ make in preference to the ones that Sun's VM engineers make.

The one qualification I'd make to that is that the testing machines shouldn't
be such that they are likely to /mask/ problems -- so they should be
multiprocessor boxes, and with no more "grunt" than the worst-case target
machine. (Though, I suppose a case could be made that at least some testing
should be done on a machine with as near as possible the same grunt as the
fastest/biggest target machine during the release's anticipated lifetime.)

-- chris
 

Arne Vajhøj

raddog58c said:
These are good... thanks. It would depend on the nature of the
application, as some of the optimizations, unless significant,
wouldn't make up the difference in the time it took to compile the
byte code into machine code.

Yes - unless significant.
This does presume that we're comparing the post-compiled byte code
against the precompiled code in the runtime binary (.EXE, .COM, etc).
The fact the conversion is done at run time and would have to be done
every time the code is run (unless it's cached) puts it at a
disadvantage out of the gate. The late binding to environment could
help close the gap,

Or it could cover the gap 200%.

You are basically proving that Java is not efficient by assuming so.

Meaning you proved nothing.

Arne
 

raddog58c

Do you use Windows?

- Lew


I think I may have answered this, but if not I use Windows, Solaris,
AIX and Linux currently for Java, C++, C and Perl development.
Occasionally I'll (begrudgingly) modify some legacy COBOL batch or
CICS code. I grew up writing embedded controller software for
microprocessors, occasionally ones as high-powered as 80186's --
communications protocol implementation, optical and winchester disk
drivers (SCSI and homespun), smart devices like smart phones. Lots of
device driver dev for MS/DOS 1.10 through Windows, Xenix/Unix, IBM TPF
(Transaction Processing Facility, a high-speed IBM/mainframe OS), VAX/
VMS. Pretty wide range of stuff overall, which gives me both good and
bad perspectives on things.
 

raddog58c

The single/multi processor state can be changed after an application has
been installed. A JVM will adjust accordingly, but what happens to your EXE
that was selected/compiled for the single processor that existed at
install time?

Well, I'm recompiling it for the new target environment while they're
swapping the gear on the production box.

The entire argument around auto-configuration of runtime code is
valid, but it assumes you'll be changing the underlying platform over
the lifetime of the code. While that can and does happen in some
environments, sometimes frequently, there are many envs in which
that's not the case, or change is infrequent enough that it's trivial
to regenerate the runtime for the appropriate target if you need the
fastest speed you can get.

And just as the JVM can autoadapt, a programmer could build different
versions. Obviously it's not a side effect of the environment when
you're spinning it yourself, but just to be fair regarding all
available solutions.... startup logic could perform the same set of
checks to find out what hardware is installed on the host, and the
startup overlord can manipulate the runtime in some manner (rename the
binaries, change startup link pointers, etc) to facilitate the best-of-
breed for that environment accordingly.

The difference, obviously, is that with Java the programmer doesn't
have to concern her or himself with these mundane details -- all java
programs will inherently obtain this effect based on the JVM's
capacity to provide it. The skilled engineer can easily build a
library of platform checking and invoke or autoinstall the right code
when the changed hardware is detected. If that dynamism were needed
based on the problem space, it's a reasonably easy feature to build.
"Back in the day" when controllers were really dumb, we used to write
streaming tapes with interrecord gaps that were the minimum size
possible and still provided ample time to perform inter-record
processing (programming DMA controllers, generating CRCs, etc) on the
slowest processors on the market at the time. It was always
interesting, as we had to run a sequence of instructions at startup to
see how fast they'd execute -- the faster they ran, the longer we
needed to configure our wait/delay loops in the inter-record
processing routines. That stuff was all based on the main clock,
since "smart" controllers weren't available to most devices at the
time, so drivers ran on the host's CPU.

Anyways, I'm not poopoo'ing the dynamic feature of Java -- it's very
cool and has its place to be sure. By the same token, HW engineers
may eventually standardize CPUs and aux processors, like FP
processors, to render the differences insignificant. And it's like a
lot of things provided by Java's dynamism: they're awesome when you
need'm, pointless and wasteful when you don't.

More often than not I don't require the dynamism, so it alone is not
a big selling point for me. Your mileage may vary, and that's cool,
it's just for me Java's dynamics are not why I choose it -- as I
mentioned, for me the overhead is a turn off. Heck, I'm not even
attracted by the fact the programmer is freed from the need to manage
her or his memory space -- I don't personally find memory mgmt
terribly challenging work.

What sells me on Java is the breadth of elegant frameworks, the
general (not always) ease of adding new functionality by adopting
classes into your app, and the completeness of many classes out and
about.

For example, we use Spring framework at the office, and it's a really
fabulous way to construct a distributed network of components as
services, and it does certain things to greatly reduce the "noise" in
code such as using "injection" to clean up constructors. Pretty
compelling stuff, and really a much bigger selling point, IMO, than
running under the auspices of a JVM.
 

raddog58c

Or it could cover the gap 200%.

You are basically proving that Java is not efficient by assuming so.

Meaning you proved nothing.

Arne

I wasn't trying to prove anything. Common sense says if you have to
convert from one format to another before you begin executing, you
have an extra step and obviously all things being equal you are not as
efficient, period, end of sentence.

If the converted code were more efficient, it could make up for the
conversion -- it would depend on the problem space, run duration, and
how well or badly each program were written. The converted code would
need to be more efficient to have a chance of making up the extra
step. If it were equal to or less efficient, you would not make up the
gap.

That's not a proof -- it's an observation of reality, right?

Well written code in a language like C optimally compiled for the
native environment is going to be tough to beat unless you write in
native assembler language. I have actually had to write in native
assembler on more than one occasion in real-time systems where
nanoseconds mattered. That's atypical, but these situations do
exist. Anyone suggesting an interpreted language is the way to go in
these environments either has no understanding of the problem space,
or they've got a lot of explaining to do to make that assertion stick.

At any rate, the bottom line is you can't execute 100 instructions
faster than 50 instructions if they're running on average the same
number of clock cycles -- until someone invents a CPU with an ALU that
executes intermediate code directly, cycles have to be used by the
interpreter to transform to the native binary.

I used to program in Lisp years ago and it had similar issues. Don't hold
me to it, but if I remember right there was a movement by Lisp
aficionados to create a CPU that did exactly that: executed Lisp
directly. These machines obviously never gained a lot of popularity.

There's a beauty in interpreted languages, but instruction-level
efficiency is a trade off you make for the functionality and late-
binding paradigm that interpretation provides. I give the JVM
architects a lot of credit, as under the right circumstances they
glean a lot of efficiency out of Java byte code -- but on balance the
Java code I've worked with in batch, GUI, and web server applications
has not been impressive from a speed standpoint. Functionality wise
it's great, however, so that's the emphasis upon which one should
focus with respect to interp. languages, IMHO.
 

Lew

raddog58c said:
Anyways, I'm not poopoo'ing the dynamic feature ...

"Pooh-poohing". "Pooh-pooh" is to express derision. "Poopoo" is a euphemism
for animal excrement.

- Lew
 

Lew

raddog58c said:
Well written code in a language like C optimally compiled for the
native environment is going to be tough to beat unless you write in
native assembler language. I have actually had to write in native
assembler on more than one occasion in real-time systems where
nanoseconds mattered. That's atypical, but these situations do
exist. Anyone suggesting an interpreted language is the way to go in
these environments either has no understanding of the problem space,
or they've got a lot of explaining to do to make that assertion stick.

Others have pointed out that the JIT compiler can beat compile-time
optimizations in some cases by virtue of having a different view of the
situation. Escape analysis, for example, is a runtime phenomenon.
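A sketch of the kind of code escape analysis helps with (class and method names are illustrative; whether a given JVM actually eliminates the allocation depends on the implementation):

```java
// Sketch of a pattern escape analysis targets: the Point allocated on
// each call never escapes distance(), so the JIT may replace it with
// plain locals (scalar replacement), skipping heap allocation and GC
// entirely -- an optimisation only visible at run time.
public class EscapeDemo {
    static final class Point {
        final double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
    }

    static double distance(double x, double y) {
        Point p = new Point(x, y);         // does not escape this method
        return Math.sqrt(p.x * p.x + p.y * p.y);
    }

    public static void main(String[] args) {
        double total = 0.0;
        for (int i = 0; i < 1_000_000; i++) {
            total += distance(3.0, 4.0);   // each Point is a candidate
        }
        System.out.println(total);         // prints 5000000.0
    }
}
```

A C compiler can do the same only if it can see the allocation and all its uses at compile time; the JIT sees them at run time regardless of which module they came from.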

Because JVM optimizations are global and dynamic, whereas compile-time
optimizations are more local and static, the JVM may actually achieve
significantly better performance because it follows different analysis paths.
This could, and according to what I've read, does achieve far better
performance than "C optimally compiled" code.

Myths are supported by misguided intuition. Reality often moves in surprising
ways.

- Lew
 

Lew

I think I may have answered this, but if not I use Windows, Solaris,
AIX and Linux currently for Java, C++, C and Perl development.
... Pretty wide range of stuff overall, which gives me both good and
bad perspectives on things.

My point is that those who use Windows already accept bloatware and the
concomitant hit on performance. It may not be such an impediment to platform
effectiveness as some may think.

- Lew
 

nukleus

Lew said:
"Pooh-poohing". "Pooh-pooh" is to express derision. "Poopoo" is a euphemism
for animal excrement.

What a gem.

I wish I could have you sitting on my bookshelf.
If only I could push a button and say:

Lew, tell me, what is THIS thingy here?

And, sure enough, there is ALWAYS an answer.

Have you thought of Virtual Lew project?

Could be quite a thing...

Do you want me to quick prototype that thing
to give ya an idea of what can be done?
 

nukleus

Lew said:
My point is that those who use Windows already accept bloatware and the
concomitant hit on performance.

I wouldn't rush to use the performance issue
if I were a java freakazoid.
It may not be such an impediment to platform
effectiveness as some may think.

I agree with this.
Not only bloatware, but MONSTERWARE,
with a footprint of an intergalactic space ship.

But what I like about Microsucks development environment
is that it is very intuitive and very simple in terms
of assisting you with many things you normally do,
starting from editing and down to compilation, run time,
debugging, and you name it. You can edit the HTML or
XML files with full support of toolkits and gadgets,
probably better than some dedicated HTML editors can.
And it is all integrated. Just drag and drop the file
from your directory, and boom, you are in HTML edit mode
and the toolkit is hanging right there, down to style sheet
editing on a sophisticated level.

You want a multi-file search and replace
because you have some idea how to make your code better?
- No problem, number of options.

You want virtually unlimited, multi-file undo/redo?
- Sure, not an issue at all.

You want a sophisticated cut and paste
for cut and paste freaks like myself?
- Well, the easiest thing in the world.
I just wish they wouldn't automatically
indent my comment lines. Cause there is a reason
it is done this way. Borland is MUCH more flexible
in terms of choosing various editing options.

You want just about the most sophisticated debugging
imaginable?
- No problem, be my guest.

You want multi-file bookmarks while editing?
- Piece o cake.

And if you think MS is a bloatware,
yes, it is, but it handles memory very gracefully.
It may eat a hundred megs or so, but it is very
non obtrusive in terms of handling virtual memory
and you don't have to wait for half an hour while
clicking on some tab when it is swapping in and out
a few gigs, just like the other major players do.

I feel it is like a hand in glove.
With all their giga-sizes, it is just about the
most complete and most comfortable environment
to work with, having virtually all the power
you can imagine, even though some of their
automation or "assistance" I could get by without,
only if I could switch it off with some checkbox.

But I can't.
And so I have to live with their automated "assistants",
jumping into my face with just about any mouse move,
before I even click!

And about the UGLIEST thing of all,
is this giga-sized deadly embrace
between Sun and Microsucks,
where anything beyond AWT is not "supported".
The very concept of CLASSPATH is not supported,
packages are replaced with "assemblies",
and just about everything imaginable is done
to annihilate Sun
and convert it to dust.

And Sun does the same thing.
I lived next door to their world headquarters,
the Sun city.

Maan...
What a desolate land.

Amazingly enough, people here are not even willing
to discuss these issues, even though some of them
are very experienced developers, and DO have something
to contribute to the equation.

But they just accepted that shadow theatre
as some kind of ultimate reality,
not to be even questioned.

And so, this NWO thing is marching on.

Stomp

 

Chris Uppal

raddog58c said:
And just as the JVM can autoadapt, a programmer could build different
versions. Obviously it's not a side effect of the environment when
you're spinning it yourself, but just to be fair regarding all
available solutions.... startup logic could perform the same set of
checks to find out what hardware is installed on the host, and the
startup overlord can manipulate the runtime in some manner (rename the
binaries, change startup link pointers, etc) to facilitate the best-of-
breed for that environment accordingly.

That approach runs into the problem of combinatorial explosion -- which is why
it is only used in limited ways and in rather extreme cases. The thing is that
a JIT has more information available to it than any possible static analysis.
That is a /fundamental/ advantage, and cannot be clawed back (though it can be
wasted); just as having to do extra work at runtime is a fundamental
/disadvantage/ which can only be compensated for, but never eliminated.

BTW, I'm not advocating one approach over the other here, just discussing what
the approach taken by current JVM's /is/.

-- chris
 

Chris Uppal

raddog58c said:
Well written code in a language like C optimally compiled for the
native environment is going to be tough to beat unless you write in
native assembler language. I have actually had to write in native
assembler on more than one occasion in real-time systems where
nanoseconds mattered. That's atypical, but these situations do
exist. Anyone suggesting an interpreted language is the way to go in
these environments either has no understanding of the problem space,
or they've got a lot of explaining to do to make that assertion stick.

FWIW, an interpreter can be an advantage in some situations (I mean a classical
interpreter, not a JITing code generator): whenever code space is more of a
bottleneck than execution speed. And reducing the /bulk/ of the code executed
can, depending on architecture, have performance benefits too.

At any rate, the bottom line is you can't execute 100 instructions
faster than 50 instructions if they're running on average the same
number of clock cycles

Actually they can ;-) If the right instructions are issued in the right order
then they can beat a smaller number of instructions doing the same thing but in
the wrong order. There's a nice Wikipedia article on one form of
memory-access optimisation
http://en.wikipedia.org/wiki/Loop_nest_optimization
which I tried translating into C, and was somewhat shocked to find (on one
machine) a full factor of 10 speed up...

Someday, I'm going to translate that code into Java and see how the server
JVM's JITing of the naive and fully hand-optimised versions of the code
compares with that of the C compilers I tried. I can post the C code if
anyone's interested (or wants to check my code or results).
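The loop-interchange idea behind that article can be sketched in Java (an illustrative example, not the C code mentioned above):

```java
// Sketch of loop interchange: both methods compute the same sum, but
// the second walks the array in row-major order (Java's layout), so
// consecutive accesses are cache-friendly stride-1 reads. Same
// instruction count, very different memory-access order.
public class LoopOrder {
    static double sumColumnMajor(double[][] a) {
        double s = 0.0;
        for (int j = 0; j < a[0].length; j++)   // stride-N access pattern
            for (int i = 0; i < a.length; i++)
                s += a[i][j];
        return s;
    }

    static double sumRowMajor(double[][] a) {
        double s = 0.0;
        for (int i = 0; i < a.length; i++)      // interchanged: stride-1
            for (int j = 0; j < a[i].length; j++)
                s += a[i][j];
        return s;
    }

    public static void main(String[] args) {
        double[][] a = new double[1000][1000];
        for (double[] row : a) java.util.Arrays.fill(row, 1.0);
        System.out.println(sumRowMajor(a) == sumColumnMajor(a)); // prints true
    }
}
```

The speed-up factor varies with matrix size and cache hierarchy; the point is only that ordering, not instruction count, can dominate.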

As far as I know, such optimisations are outside the scope of current, or even
projected, JITers ... unfortunately. OTOH weaker optimisations, such as
dynamic loop unrolling, which is sensitive to details of the processor's memory
architecture and to actual data sizes, are certainly feasible -- but I don't
want to give the impression that such optimisations are used today (they may
be, they may not, I just don't know).

There's a beauty in interpreted languages, but instruction-level
efficiency is a trade off you make for the functionality and late-
binding paradigm that interpretation provides. I give the JVM
architects a lot of credit, as under the right circumstances they
glean a lot of efficiency out of Java byte code

You probably already know this, but just to be clear. Java bytecode is
essentially irrelevant in considerations of performance. It's best thought of
as a high-level programming language (OO, with GC, etc) which is used as
a reasonably compact, portable /transmission/ medium.

Whether /any/ bytecode survives until runtime is implementation dependent.
IBM's "Jikes research JVM" (a very impressive bit of work, and incidentally
written in Java) translates /all/ bytecode into native code (but only invokes
an optimiser for discovered hotspots). They report that a fast-and-simple
translation phase takes essentially the same time as just loading and parsing
the bytecode in the first place. The current Sun range (at least for Intel
boxes) use a mixture of more-or-less direct interpretation of the bytecode plus
JITing for the hotspots. (The interpreter itself, by the way, is generated at
runtime as part of JVM startup, using the same code-generation framework as the
JITer uses. I think that's rather cute).

-- chris
 

Lew

What a gem.

I wish I could have you sitting on my bookshelf.
If only I could push a button and say:

Lew, tell me, what is THIS thingy here?

And, sure enough, there is ALWAYS an answer.

Have you thought of Virtual Lew project?

Could be quite a thing...

Do you want me to quick prototype that thing
to give ya an idea of what can be done?

Great idea.

- Lew
 

raddog58c

"Pooh-poohing". "Pooh-pooh" is to express derision. "Poopoo" is a euphemism
for animal excrement.

- Lew



Thank you, and as the radicaldog that I am, I restate that I was not
poopooing Java. ;-)
 

raddog58c

Others have pointed out that the JIT compiler can beat compile-time
optimizations in some cases by virtue of having a different view of the
situation. Escape analysis, for example, is a runtime phenomenon.

Because JVM optimizations are global and dynamic, whereas compile-time
optimizations are more local and static, the JVM may actually achieve
significantly better performance because it follows different analysis paths.
This could, and according to what I've read, does achieve far better
performance than "C optimally compiled" code.

Myths are supported by misguided intuition. Reality often moves in surprising
ways.

- Lew

Fair enough, but the key words are "may" and "could" -- often they
will not, particularly in situations where there really is no
alternative improvement. In that case the additional overhead to
store, manage and reference a knowledge base "may" and "could" further
deplete resources and degrade the overall system's throughput.

It's really 100% context sensitive.
 

raddog58c

My point is that those who use Windows already accept bloatware and the
concomitant hit on performance. It may not be such an impediment to platform
effectiveness as some may think.

- Lew

That's an excellent point.

To the spirit of the thread, and from my experience working in a
reasonably diverse range of languages, I still don't believe Java's
popularity has in any way, shape or form been a byproduct of its
legendary performance. I think in many cases the overhead incurred to
run a JVM is acceptable because of the functionality the code base
provides.

So winding back a bit, the reason that functionality can exist with
Java has much to do with the language/environment which frees
programmers from a lot of resource management and allows them to focus
more on plugging together components. Time to market and all that
jazz.

Like a lot of interpreted languages, Java's development cycle is
pretty excellent I think. A programmer can get a fairly complicated
application off the ground quickly thanks to the interp-language
development paradigm, which allows for immediate detection of errors
and compile-and-test-as-you-code sessions.

I think those are the kinds of things which have made Java popular.
The performance issues are the trade off -- I find most Java-only
coders will explain away any performance issues by pointing out all
the great things you get in return, or how in controlled or specific
circumstances modern JVMs can use environment-specific knowledge to
coerce code into a better behaving form.

I'm fluent in a decent array of languages, compiled and interpreted,
as well as a good number of assembler languages. If it does nothing
else, it helps me to avoid falling madly in love with any one language
and making everything about it okay. The JVM and its mitigation
strategies are great, but it's not about that -- it's about Java's
ease of coding that's made it popular. Coding Java is like traveling
in a Winnebago, whereas coding C is like traveling on a dirt bike.
There are tradeoffs to both....


One other point related to Java on the downside IMO is a side effect
of exactly what's good about Java: ease of code reuse.

I have seen a lot of inefficiently written components used in a lot of
applications. Because Java encourages the programmer to let the JVM
handle a lot of details it seems to me -- and I welcome other opinion
on this -- that many Java coders get "lazy" about what's really going
on inside the code. If it's slow, they write it off as "there's a lot
going on." They seldom dig much below the surface, because that's
what Java seems to be trying to free you from having to do. It's
valuable in Java to make sure the appropriate techniques and objects
are used and to understand the implications of a bad choice -- refer
back to my 1st Java program, the "grep" implementation, to see what
the wrong use/type of objects can do to runtime performance.

Java is popular partly (largely?) due to the ease with which you can
get an app going, but unless one is disciplined that very speed can
lead to less than optimal solutions. I'm certainly not suggesting
easier programming languages are bad, just thinking out loud with
respect to human nature.

Thoughts/experiences?
 
