Do you suggest using an IDE when I'm learning Java?

Jim Janney

BGB / cr88192 said:
be careful not to underestimate the sheer hackery power made available by
these seemingly trivial pointers...

plain pointers allow the likes of custom memory management, garbage
collection, and the implementation of dynamic typing.

function pointers make possible the creation of self-modifying programs
(among other things), as well as custom code-loading, JIT compilers, and
adding "eval" and other features to C, ...


this is part of what makes C hard to learn, but also what makes it difficult
for C and C++ programmers to commit to using only Java...

sadly though, C/C++ and Java are like oil and water...

JNI isn't exactly pretty, is it? The last time I needed to do that I
ended up using SWIG, which handles it pretty well.
 
Tom Anderson

But then, we also think by deleting, which might result in a
negative LOC number, while still indicating an advancement
toward project goals.

Yes! You are absolutely right! Deletion is the developer's most powerful
tool. I always think of Michelangelo's possibly apocryphal approach to
sculpting his famous statue of David: just chip away all the parts of the
block of stone that aren't part of the statue.

Aha. I came across this idea of programming being a design rather than a
manufacturing activity, and thus of source code being a design rather than
a product, a while ago, but forgot who originated it - thanks for
reminding me. He's spot on.

tom
 
Arne Vajhøj

well, in my case, a lot has to do with familiarity...

C# raises the issue of differences between MS's implementation and Mono, and
the lack of good alternatives...

Java leaves one in a world of primarily Java-only...

Why?

Probably 99.9% of Java code only works with other Java code. But IDE's
written in Java are very widely used in the PHP, Python and Ruby worlds.
true, but in my case, OpenGL is the most convenient way (in C and C++) to
pull off portable GUI code (the main other alternative being to maintain
local bitmaps and draw into these...).

What is wrong with GTK, Qt, wxWidgets, etc.?
but, the issue with GTK# is that AFAIK it is not available for MS's .NET
implementation...

providing redundant GUI between WinForms and GTK#, or demanding use of Mono
on Windows, would also be lame...

possibly it would leave one needing to find some way to gloss over the
differences.

Mono works fine on Windows.

A Mono installation is rather non-intrusive. It would not
break anything to install it as part of an application's install.

And with a little careful packaging, a GTK# app should
also run on the MS .NET CLR.
but, this is only if one is willing to write the whole thing in Java, which,
granted, seems like a bit of a strain for someone far more used to
(the relative anarchy of) C and C++...

If there is nothing you really need C for, then learning Java
may be a better option than creating a monster mix of Java and C.

Arne
 
Arne Vajhøj

All humour aside, the significant FORTRAN for many of us must have been
either FORTRAN 66 or FORTRAN 77. My start to programming was actually
FORTRAN 66...now that I look carefully at what FORTRAN 77 added, it
appears that I was also able to get away with not having a CHARACTER
data type.

As near as I can tell FORTRAN IV did have a logical IF.

Yes. But no END IF.

Arne
 
Arne Vajhøj

I think I think more than type, but practically this is a massive
time-waster relative to actually getting code written...

it is like spending time speculating about the future:
this is wasting time, since the future has not happened yet.

The difference between good code and bad code can be rather
significant in cost.

It may cost more to produce good code, but over the life cycle
of a typical application bad code will turn out to be a lot more
expensive.

Arne
 
Arne Vajhøj

If (taken from Wikipedia) thought/thinking is "an intellectual exertion
aimed at finding an answer to a question or the solution of a practical
problem", which is a good definition for our purposes, then how could
something which is designed to help us create and organize and present
information _not_ contribute to better thought/thinking?

The problem is that IDE's are very good at helping with the
trivial stuff (like generating getters and setters, finding syntax
errors, etc.) and do not provide much for all the difficult
stuff (designing API's, designing data structures, picking
algorithms, etc.).
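
For instance, the kind of boilerplate an IDE will happily generate in a
couple of keystrokes (a trivial, purely illustrative Java example):

public class Person {
    private String name;

    // typical IDE-generated accessors
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

No menu item, though, will tell you whether Person should expose a mutable
name at all - that part is still design.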

Arne
 
Arne Vajhøj

No, this is rubbish. Programmers don't spend ages sitting there
thinking, and then do a bit of typing. This is pure fiction. We think as
we type - we think *by* typing, by putting our ideas down, working
through the details, trying things out, seeing what works and what
doesn't. Masses of thinking goes on, but it's not like some caricature
of a mathematician, staring at a ceiling for days on end and then
jotting down a complete theorem at the end of it. To do the thinking,
you have to work through it - and better tools let you work through it
faster. If by the use of autocomplete and type inference and whatnot i
can flesh out a for-loop over the entries of a map in which i filter the
keys with such-and-such a test and transform the corresponding values
with this-or-that function in thirty seconds rather than three minutes,
then that's two minutes and thirty seconds less that's taken me to learn
if my idea for the loop works or not. That's the kind of thing i spend
my time doing when i program, and an IDE does help me to do that faster.
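
(For illustration, the kind of loop being described might look something
like this in Java - the filter test and the transform here are made up:)

import java.util.HashMap;
import java.util.Map;

class LoopSketch {
    // filter the keys with such-and-such a test,
    // transform the corresponding values with this-or-that function
    static Map<String, Integer> filterAndTransform(Map<String, Integer> input) {
        Map<String, Integer> result = new HashMap<String, Integer>();
        for (Map.Entry<String, Integer> e : input.entrySet()) {
            if (e.getKey().startsWith("x"))               // the made-up filter
                result.put(e.getKey(), e.getValue() * 2); // the made-up transform
        }
        return result;
    }
}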

But that is not how good software gets created.

Any software of real-world size created that way will be unmaintainable.
Really complex software created that way will never even work.

You analyze requirements, come up with an architecture and a
design. Producing the actual code is a minor part of the work.

The previously mentioned paper has some statistics for productivity.
It varies with code complexity and project size, but all are
in the range of 1-25 man-months per KLOC. At roughly 160 working
hours per man-month, that works out to about 1/4-6 lines per hour.
Either they are extremely slow typists or they actually
do some heavy thinking between the typing.

So no - it is not fiction that software engineers think more
than type.

The most extreme form I have ever heard about was a project
for some traffic control system - the engineers spent all
their time writing a formal specification of requirements
and functionality - and then they got somebody else to
do the trivial work of typing in the Ada code (which could then
be verified against the specs).

No - I don't think that approach is cost-efficient for most
apps, where it is only about money. But it illustrates how much
of a pure engineering discipline software development can be.
Let us call it the other extreme from your 100% "typing and
thinking as you type" approach.

Arne
 
Mike Schilling

Arne said:
It is not easy.

Some consider that an advantage, because it is part of why it is not
used much!

JNI has to be complex, because it has to expose many of the complicated
features of the JVM (reference counting, exceptions, etc.) without
compromising the safety features of the JVM. Imagine if you could generate
a .h file that represents a class layout and write JNI that is passed object
pointers, allowing you to access the object directly and at the same time
scribble all over memory in the best uncontrolled C fashion. JNI would be a
joy to write, but a nightmare to debug.
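
To make that concrete: here is a minimal sketch of what the JNI discipline
looks like from the Java side (the class and library names are invented).
The C side never sees the object's layout; it receives an opaque jobject
and has to go through JNIEnv functions to touch anything:

public class Counter {
    static {
        System.loadLibrary("counter"); // loads libcounter.so / counter.dll
    }

    private int count; // native code cannot address this field directly...

    // ...the C implementation must look it up with GetFieldID(env, cls, "count", "I")
    // and read/write it via GetIntField/SetIntField - no raw pointers involved.
    public native void increment();
}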
 
Mike Schilling

Arne said:
Yes. But no END IF.

no THEN or ELSE either, just the single-line

IF (logical-expr) statement

F77 added

IF (logical-expr) THEN
...
ELSE IF (logical-expr) THEN
...
ELSE
...
END IF
 
Graham

Clarence Blumstein said:
Do you suggest using an IDE when I'm learning Java? I ask because I'm
about to use Eclipse while learning Java. Did you use an IDE when you
were a beginner?

I haven't read the whole thread but when I started tinkering with Java
on Unix all I had was Vi.

I never liked Forte, ended up using Eclipse, tried JBuilder but found it
less reliable than Eclipse, and never spent the time learning NetBeans when
I should have given it a fair shot.

Now you don't say what your experience is. If you are a total beginner
(or near enough), give BlueJ a shot. It isn't an IDE but a graphical
learning tool with which you can write Java and execute bits of your code.

Graham
 
Arved Sandstrom

Arne said:
The problem is that IDE's are very good at helping with the
trivial stuff (like generating getters and setters, finding syntax
errors, etc.) and do not provide much for all the difficult
stuff (designing API's, designing data structures, picking
algorithms, etc.).

Arne

You're quite correct...if your process for doing the "difficult" stuff
doesn't involve using an IDE. However, in real life most APIs, data
structures, etc. get designed through an iterative coding process in an IDE.

There's nothing wrong with this approach per se. Clients of an API only
see it when they get it, and if your design process leading up to API
release happened to include lots of refactoring in an IDE, as opposed to
UML diagramming and so forth, so what?

Let's look at this another way: why is it somehow superior to do lots of
editing and thinking while using UML and pencil & paper, and it's
somehow inferior to do the same editing and thinking with code? If I'm
thinking up method signatures for my new interface, it's totally
immaterial as to whether I draw a pretty UML diagram or code them up in
a Java interface right away. I may as well - the Java interface is as
meaningful to me or any other programmer as staring at a UML diagram,
and at that stage it's very much a legitimate design artifact.

And my personal belief is that you're more likely, 90 percent of the
time, to get an equally good result faster by involving an IDE at an
early stage, and starting to refine your API or data structure through
use in tests. Your tests are expressing your planned usage...and
*testing* the actual implementation as you refine it. Let's be real - if
you did a whack of design first, you'd still end up doing all that coding
and _refactoring_ and testing anyway, 90 percent of the time...and you
would overall have needed more time to attain the same result.
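
For example, a first cut at an API designed this way might start life as
nothing more than an interface plus a test expressing the planned usage
(everything below is invented for illustration):

import java.util.HashSet;
import java.util.Set;

// the design artifact: a first guess at the API
interface UserStore {
    void save(String name);
    boolean exists(String name);
}

// the simplest implementation that lets the test run
class InMemoryUserStore implements UserStore {
    private final Set<String> names = new HashSet<String>();
    public void save(String name) { names.add(name); }
    public boolean exists(String name) { return names.contains(name); }
}

// the test expresses planned usage; refining it refines the design
public class UserStoreTest {
    public static void main(String[] args) {
        UserStore store = new InMemoryUserStore();
        store.save("alice");
        if (!store.exists("alice"))
            throw new AssertionError("saved user should be found");
    }
}

Renaming save() to put(), or deciding exists() should really return the
user, is exactly the kind of refinement an IDE makes cheap.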

The important thing in software production is that we have good
requirements analysis, good analysis & design, good implementation, good
testing and so forth. But IMHO it's still very much a traditional
waterfall mentality that the design, for example, cannot involve writing
code. A lot of the coding in newer methodologies isn't really
implementation - it's design. And this process - design refinement in
code - is assisted by IDEs.

AHS
 
Tom Anderson

But that is not how good software gets created.

Any software of real-world size created that way will be unmaintainable.
Really complex software created that way will never even work.

You analyze requirements, come up with an architecture and a
design. Producing the actual code is a minor part of the work.

Have you just fallen through a timewarp from the 1970s or something?
That's called 'waterfall', and it doesn't work.
The previously mentioned paper has some statistics for productivity. It
varies with code complexity and project size. But all are in the range
of 1-25 man-months per KLOC. That is 1/4-6 lines per hour. Either they
are extremely slow typists or they actually do some heavy thinking
between the typing.

Or producing one finished line of code involves typing more than one line
of code.
The most extreme form I have ever heard about was a project for some
traffic control system - the engineers spent all their time writing a
formal specification of requirements and functionality - and then they
got somebody else to do the trivial work of typing in the Ada code
(which could then be verified against the specs).

Cases like that have been recorded (the guy on the MULTICS project someone
posted about earlier). They get recorded because they're so exceptional -
the great majority of software is not built that way.

I would be very interested to know in detail how that project was done,
though; i'd make a small wager on the ground truth being rather divergent
from that description.

tom
 
cr88192

Arne Vajhøj said:
The difference between good code and bad code can be rather
significant in cost.

It may cost more to produce good code, but over the life cycle
of a typical application bad code will turn out to be a lot more
expensive.

this doesn't mean one has to forsake good conventions, such as using regular
naming conventions, or modularizing things, ...

I have also found it effective to find things which are similar and use
them as design templates.
in this case, bad design decisions which didn't work, and so were dropped,
will generally have already been caught by the prior implementation.

or, someone can cross-reference between different designs for similar
things, and see if they can find a good set of tradeoffs...


only that one need not sit around trying to plan out everything in advance
(into some distant future time), since almost inevitably this wastes
lots of time thinking (say, a person spends 5 hours planning what only takes
them 1 hour to write) when they could have easily gotten much more done.

similarly, if one tries to address the future, almost inevitably the thing
ends up over-complicated and over-engineered in ways which don't ideally
match the project at hand, or don't exactly match how the project will go in
the future.

over-complexity and over-engineering is something to be avoided...


what then typically matters is making the code work right then, and any
problems can be fixed up as one goes along. (and in the long run, this often
ends up with the better solution...).

if it aint broke, don't fix it (unless there is some clearly better
solution...).


take for example, Microsoft:
much of their technology is built upon decades of kludgery and relatively
short-term hacks to address all manner of problems.

take for example .NET binary images (which are essentially a hacked-over
version of PE/COFF, which is itself a hacked-over version of COFF and
DOS-style EXE's...).


but, it all works fairly solidly nonetheless...

they get Windows sold, and they provide a platform with many niceties and
conveniences.
if it were really that bad, they would either get stuck in their own mess
(unable to add more features, ...), or consumers would be unwilling to buy
their products, or something...
 
cr88192

Arne Vajhøj said:
But that is not how good software gets created.

it depends on who is defining "good"...

Any software of real-world size created that way will be unmaintainable.
Really complex software created that way will never even work.

I have an approx 300 kloc compiler/VM framework (it was closer to 500 kloc,
but I shaved it down some mostly by a great dead-code cleanup...).

and, very little planning was done up front...

(although, admittedly, I often write a short spec describing what feature I am
considering and maybe do a mock-up of the API, so it is maybe more design
than simply beating it together and beating it into working...).

in fact, most of the time I have little idea where it is I am going (in the
long term), so in large part the overall design magically appears out of
nowhereland...

most of the time is spent instead adding features and fixing problems...

You analyze requirements, come up with an architecture and a
design. Producing the actual code is a minor part of the work.

I say, this doesn't scale...
much above about a 25-50 kloc component, this doesn't seem like a reasonable
strategy...

my strategy is more like biology, where code may be duplicated and refined,
and components may be split when they become too large or complex, and dead
or useless code and features are shaved off.

The previously mentioned paper has some statistics for productivity.
It varies with code complexity and project size. But all are
in the range of 1-25 man-months per KLOC. That is 1/4-6 lines
per hour. Either they are extremely slow typists or they actually
do some heavy thinking between the typing.

yeah, that is slow...

my output volume tends to be a bit higher, even though I still spend most of
my time doing other stuff (unrelated to coding or design...).

So no - it is not fiction that software engineers think more
than type.

dunno...

maybe most of them are just lazy...

nothing broke that day == free day to watch stuff and play games...

maybe they can use a KVM switch and a VGA-box to hide the Xbox 360 under
their desk in case their boss just happens to wander by...

can't miss the next episode of Shippuden or One Piece...
oh wow, the Spoony One has released another review and Nostalgia Critic has
reviewed another movie, can't miss this...


and at the slightest hint of footsteps in the background, a quick
double-scroll-lock or alt-tab and they are back in Visual Studio or Eclipse,
staring intently at whatever is the current problem, busily working away at
the task at hand, at least until the boss has left again...

The most extreme form I have ever heard about was a project
for some traffic control system - the engineers spent all
their time writing a formal specification of requirements
and functionality - and then they got somebody else to
do the trivial work of typing in the Ada code (which could then
be verified against the specs).

now what is the cost of a bug in a traffic-control system?
well, apart from obvious problems, like 2 green lights and traffic colliding
head-on, or a green light at the same time as the cross-walk, there is
probably not a whole lot to go wrong...

if they really want to know, they could beat together some tests or a
traffic simulation to look for bugs or performance bottlenecks...
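
as a toy sketch of that idea (everything here is invented), even a few
lines of Java can express the one invariant such a test would hammer on:

// toy model: two crossing directions, true = green
public class SignalCheck {
    static boolean northSouthGreen;
    static boolean eastWestGreen;

    // the invariant: crossing directions must never both be green
    static void checkInvariant() {
        if (northSouthGreen && eastWestGreen)
            throw new IllegalStateException("conflicting green lights!");
    }

    public static void main(String[] args) {
        northSouthGreen = true;  eastWestGreen = false; checkInvariant(); // fine
        northSouthGreen = false; eastWestGreen = true;  checkInvariant(); // fine
        northSouthGreen = true;  eastWestGreen = true;  checkInvariant(); // throws
    }
}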

No - I don't think that approach is cost-efficient for most
apps, where it is only about money. But it illustrates how much
of a pure engineering discipline software development can be.
Let us call it the other extreme from your 100% "typing and
thinking as you type" approach.

I don't think it is effective for most apps...

I also don't think it is likely to result in good software either.
There is far more that a person is likely to see when testing or using an
application than when pondering it...


someone with paper:
wow, this design is so elegant, ...

person using the app:
wow, this sucks, who the hell thought it was a good idea to put this over
here?...

they run it through the profiler, and find that some unanticipated design
issue has caused performance to be horrible, ...


problems like horrible UI's, terrible performance, or bugs that make the
thing unusable are fairly likely to show up in use, but are easy to
overlook if one is trying to design something on paper...

at least with solid prototypes, one can know the thing works, and it is
simply a matter of beating out most of the bugs prior to the final
release...

or such...
 
Arne Vajhøj

no THEN or ELSE either, just the single-line

IF (logical-expr) statement

F77 added

IF (logical-expr) THEN
...
ELSE IF (logical-expr) THEN
...
ELSE
...
END IF

END IF should be read as "block if".

:)

Arne
 
BGB / cr88192

Arne Vajhøj said:
Why?

Probably 99.9% of Java code only works with other Java code. But IDE's
written in Java are very widely used in the PHP, Python and Ruby worlds.

fair enough, but committing to the loss of pointers, and of one's
pre-existing code, is a bit of a leap...

granted, there is no universal reason here, mostly personal ones...
a mixed codebase is easier to deal with.

What is wrong with GTK, Qt, wxWidgets, etc.?

GTK:
GPL, doesn't work well on Windows, doesn't build with MSVC, ...

Qt: well, previously it was proprietary and non-free on Windows, but I think
things have changed; I have not looked into it. IIRC, Qt also requires C++,
and doesn't allow one to write GUI code in plain C.

wxWidgets: no personal experience with this.


typically, I avoid any options which depend on uncontrolled 3rd-party
libraries, preferring instead to stick to options known to exist on a
given arch.

this would mean, for example, that GTK is safe to use on Linux and DirectX
is safe to use on Windows, but trying to use GTK on Windows or DX on Linux
is asking for problems...


the issue though is that glossing over GUI toolkits is a little ugly, hence
my laziness and tendency to roll my own rather than deal with the native
widgets issue (where in my case, my widgets are typically mostly inspired by
the "Windows Classic" style...).

Mono works fine on Windows.

A Mono installation is rather non-intrusive. It would not
break anything to install it as part of an application's install.

this is a bit much to drag along though...

And with a little careful packaging, a GTK# app should
also run on the MS .NET CLR.

well, dunno, I would have to look into whether anyone has pulled this off well...


at least at this time though, AFAIK, Windows Forms also works on Mono...

If there is nothing you really need C for, then learning Java
may be a better option than creating a monster mix of Java and C.

well, it is a matter of familiarity:
one has to "take a dive off the deep end" to willingly abandon their
existing codebase, and for this there needs to be good reason.

a mixed codebase minimizes code loss, but admittedly does add its share of
issues...

granted, this is a subjective answer...


I guess the problem in my case is that my personal Java experience is FAR
less than my C and C++ experience, and I am not sure if I can get done
anywhere near the same level of stuff...

but, JNI is also hideous (I have had some experience with this already;
writing inter-language boilerplate is generally a less-than-pleasant
experience...).

admittedly, I may know of a few hacks I could use...

public class GlueNative {
    public native Object apply(String name, Object[] args); // apply arbitrary C function to args
    public native Object apply(Object ref, Object[] args);  // apply function-object to args
    public native Object getVar(String name);               // get reference to object
    public native Object eval(String expr);                 // eval expr and provide result
    ....
}

where I would likely just reuse some of my stuff for gluing ECMAScript to C
(patch Java into a custom ECMAScript implementation, which is in turn
capable of accessing most of C land, via more internal hackery...).

then with a slight bit of fudging, the API can be improved (mostly adding
lots of type-specific calls, ...).
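
e.g., hypothetical type-specific additions (the names here are made up)
might look like:

// hypothetical type-specific entry points, avoiding Object boxing on each call
public native int applyInt(String name, Object[] args);       // C function returning int
public native double applyDouble(String name, Object[] args); // C function returning double
public native int getVarInt(String name);                     // fetch an int variable directly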

or such...
 
BGB / cr88192

Jim Janney said:
JNI isn't exactly pretty, is it? The last time I needed to do that I
ended up using SWIG, which handles it pretty well.

fair enough...

I haven't used SWIG much personally...

in my case though, it is possible I could write an alternative to javah,
which could maybe generate a more elaborate wrapper (like, actually generate
much of the C-side code as well).

JNI's Java-side interface, with a fairly direct C-side interface, would be
much better than the present situation, where JNI allows a fairly powerful
interface, but is akin to mucking around in pure evil, and JNA moves most of
the mess to the Java side.

however, I am left thinking that part of the problem is how Java and
the JVM are designed, where the design itself impedes having a particularly
transparent interface (essentially, between Java and pretty much anything
not Java / not built on Java...).


hacking around the core issues would essentially put one at odds with the
pre-existing architecture (in one way or another).

but, granted, it may well be possible to tool-up or hack around these issues
in any number of ways, but I have yet to see or hear any particularly good
way to do so (or, at least within the context of an "orthodox" JVM
architecture, for example, some of the stuff done in GCJ or similar being
excluded here...).

the same goes for the possibility of compiling the language for a different
VM architecture, VM-specific hacks (I can imagine a few, if this is allowed),
....


many other languages have different binding semantics, and so don't present
this particular issue...


for example, it is far easier to transparently glue ECMAScript onto an
underlying C-based implementation.
in my case, I created a magic object which "contains" the entire C
toplevel, and so I can just fetch, say, "printf", "strdup", ... from this
object, and call them as if they were ECMAScript functions.

the implementation then uses a whole lot of internal hackery to glue
together the type systems.

but, in all it has worked fairly well (much better than expected, considering
the level of mismatch and internal hackery used, where often it just
"guesses" how best to marshal some data...).

admittedly, the thing currently still has a few unresolved issues:
it can't yet deal with C-side structures or complex pointer-based
types (the logic is not in place for this), ...


sadly, this issue has swayed some of my effort from using Java to using
ECMAScript (even though Java is a somewhat more solid language for building
code).

admittedly, if I could get my own Java implementation into working order, I
could also try gluing Java and ECMAScript (probably via interfaces), but
this will not work with a stock JVM, and my own implementation is far from
being usably complete...

granted, there IS the option of using JNI to expose an API
into this custom ECMAScript VM, and using this to access C land, but
admittedly this is an ugly and inefficient way to do it...


admittedly, the .NET CLR can interface with a lot of my stuff with
relatively little issue (I have tested it; the mechanics all seem to be
in working order...). just, I have a few of my own reservations regarding
the .NET CLR...


but, oh well...
 
