Future reuse of code


Peter E.C. Dashwood

[snip]
(The fundamental goal of commercial computing has been to have computers
that are capable of "understanding" Business needs and meeting them, in a
manner that would enable a Business User (or Users) to identify and design
the system and "explain" what is needed to the computer, in as simple a
manner as possible, without need for in-depth technical skills.

This, to me, screams for the implementation of the DWIM (Do What I Mean)
command.

<G> It certainly would, Doc, were it not for the Interaction and Iteration
which is fundamental to achieving it.
(note the inclusion of the word "explain"...this is an iterative and
interactive process, rather than a "Do what I want/mean" process...)

Pete.
 

Karl Heinz Buchegger

Peter E.C. Dashwood said:
www.aboutlegacycoding.com/Archives/V3/V30501.asp
www.aboutlegacycoding.com/Archives/V3/V30201.asp

I'm sorry, these are links to just two of the articles I have written which
are pertinent. I don't normally promote my own work, but you did ask for
some links and these will at least give you an idea of where I'm coming
from.

No problem.
The links are fine.
[snip example]

There are some assumptions in the above whimsy (the computer has certain
inbuilt "concepts" [you could think of this as a set of components with
all the attributes and Methods of a CUSTOMER, for instance], a natural
language interface, a "test" database for each of its concepts; however, all
of these things are possible with today's technology, and what is currently
"bleeding edge" will be passé in 15 years...), but I don't think anyone with
a programming background would say it was "impossible" or "infeasible".

That reminds me of 'SHRDLU' :)
Of course this demonstrates a solution to just one class of problem. There
are many others. But there are many other solutions also...

Ever read D. Hofstadter's Gödel, Escher, Bach? (I guess you did.)
The augmented semantic network and its size seem to be one of the
big problems with this.

Anyway: We have wandered way off topic, but you gave me a hint to leave
my small little programming world (which concentrates around 3D graphics)
and start looking over the fence again.

Thanks for sharing your thoughts.
 

goose

Joe Zitzelberger said:
???Huh???

Which sort of Java isn't portable? I get binary compatibility on all
desktop/server/enterprise machines and many embedded as well.

a couple of things:
1. my std C programs have portability on *ALL* the platforms
that java has *AND* *MORE*.
2. name the embedded devices.

C compilers are available for all the platforms that java runs on,
but java is only available for platforms that are big enough to run
it.

bottom line: code written in java runs on less platforms than code
written in C.

next time, try thinking *before* you post !
If that
fails (it never has) I get source level compatibility (the compiler is
written in java after all...) across all the platforms.

no you don't, you only get a few platforms. you can count the number
of java platforms without going into a 4-digit number ... minuscule.
C is ubiquitous enough to assume that if you've got a digital machine,
you can get a C compiler for it.

if all you have is java experience, you have to look at each machine
and decide whether or not to use it for your project *before* you
even decide if it meets your *other* requirements.
Now it might not make any sense for me to try and open a dialog box on a
stoplight, and the stoplight manufacturer might well leave those
libraries out, but that hardly makes it non-portable.

tell me, how does one get as stupid as you did ?
write me a crc8 algorithm *in* *java* that I can reuse for the
stoplight!!!

I can write the same thing in C and use it on everything from
a stoplight to a cray.
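For what it's worth, goose's CRC-8 challenge is small enough to sketch in either language. Here is a minimal Java version; the parameters (polynomial 0x07, init 0x00, no reflection, the common SMBus set) are my assumption, since the thread never fixes any:

```java
// A minimal CRC-8 sketch in Java. The parameters (poly 0x07, init 0x00,
// no reflection, no final XOR) are an assumption -- the thread names none.
public class Crc8 {
    static int crc8(byte[] data) {
        int crc = 0x00;                        // initial register value
        for (byte b : data) {
            crc ^= (b & 0xFF);                 // fold the next byte in
            for (int i = 0; i < 8; i++) {
                // shift left; on carry-out, reduce by the generator 0x07
                crc = ((crc & 0x80) != 0) ? ((crc << 1) ^ 0x07) & 0xFF
                                          : (crc << 1) & 0xFF;
            }
        }
        return crc;
    }

    public static void main(String[] args) {
        // the standard check string "123456789" gives 0xF4 for these parameters
        System.out.printf("%02X%n", crc8("123456789".getBytes()));
    }
}
```

Whether any JVM of the era could run this on a stoplight controller is a separate question; goose's point about sub-kilobyte targets stands either way.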
All of this talk is of applications, not applets, which were a cute but
useless toy.
sigh.


Have you ever actually tried porting an application to another
hardware/os using std C?

not only have I *done* (not "tried") this before, I found that the
number of changes /needed/ to get it going on the new platform
was small.

now, if you want to run java code on one of the many platforms that
cannot run java, you have to rewrite.
It is not just a recompile, there are plenty
of issues the original programmers must have planned for -- and they
usually don't.

so you *assume* they don't ???

your answer to me pointing out that java is rather limited
is "C programmers usually write badly" ???

you might double your IQ if you grew a second brain cell

goose,
insulting? surely you're too stupid to notice?
 

Thomas Gagné

Just to split English hairs...
<snip>

C compilers are available for all the platforms that java runs on,
but java is only available for platforms that are big enough to run
it.

bottom line: code written in java runs on less platforms than code
written in C.
Because of its demands, Java runs on /fewer/ platforms than code written
in C. Because of C's lower requirements it can run on lesser platforms
(less CPU and memory).

I think my 12th grade English teacher would be proud. ;-)
 

William M. Klein

One of the comments that I have frequently made in response to Peter's posts
in comp.lang.cobol (and which I will now share with the other newsgroups on
this list) is that IF (and I have mixed opinions on this) the "future" of
business (particularly) computer programming moves more and more into
"component - mix-and-build / drag-and-drop type" end user application design
and DEVELOPMENT, it will STILL require "someone" at the other end of
the new (and improved) tools doing the "lower-level" programming that
provides such tools (and components and libraries).

I can well imagine the current "trend" of every medium-to-large
shop/business/company moving away from having their own (semi-)large
programming staff to using a combination of off-the-shelf (but customizable)
applications and end-user available "tools". However, both of these require
SOMEONE somewhere designing/programming the tools themselves.

Whether this programming is done in OO, "procedural", waterfall, or whatever
type environments, is not something that my "crystal ball" tells me (yet).

If anything, my GUESS is that the "end users" of such tools will be LESS
tolerant of "iterative find a mistake and fix it" applications than users of
in-house developed applications. This means that I can see the "return" of
more strictly enforced "sign-off" of design, testing, development phases of
the TOOLS and components that are delivered to "purchasers".

--
Bill Klein
wmklein <at> ix.netcom.com
Peter E.C. Dashwood said:
Marco,

a good and fair response.
<much snippage>
 

docdwarf

[snip]
(The fundamental goal of commercial computing has been to have computers
that are capable of "understanding" Business needs and meeting them, in a
manner that would enable a Business User (or Users) to identify and design
the system and "explain" what is needed to the computer, in as simple a
manner as possible, without need for in-depth technical skills.

This, to me, screams for the implementation of the DWIM (Do What I Mean)
command.

<G> It certainly would, Doc, were it not for the Interaction and Iteration
which is fundamental to achieving it.
(note the inclusion of the word "explain"...this is an iterative and
interactive process, rather than a "Do what I want/mean" process...)

It would appear that if Meaning could be translated into action by a
single, simple DWIM command then that single command would constitute the
Interaction and no further Iterations would be necessary as what was Meant
would be Done.

DD
 

Alistair Maclean

[snip]
(The fundamental goal of commercial computing has been to have computers
that are capable of "understanding" Business needs and meeting them, in a
manner that would enable a Business User (or Users) to identify and design
the system and "explain" what is needed to the computer, in as simple a
manner as possible, without need for in-depth technical skills.

This, to me, screams for the implementation of the DWIM (Do What I Mean)
command.

DD

Unfortunately Doc, the DWIM command would not be adequate. I have worked
for users where they complained about what they were given because,
although it met their stated requirements and was exactly as they had
asked, it was not what they needed to do their jobs. How about a GWIN
command (gimme wot i need)?
 

William M. Klein

It would appear that if Meaning could be translated into action by a
single, simple DWIM command then that single command would constitute the
Interaction and no further Iterations would be necessary as what was Meant
would be Done.

DD

And on the 8th day, it was said,

"Let there be profits,
and lo,

all the businesses were profitable,
and all the applications worked as desired,
and all the user support personnel were helpful
and ..."
 

Peter E.C. Dashwood

William M. Klein said:
One of the comments that I have frequently made in response to Peter's posts
in comp.lang.cobol (and which I will now share with the other newsgroups on
this list) is that IF (and I have mixed opinions on this) the "future" of
business (particularly) computer programming moves more and more into
"component - mix-and-build / drag-and-drop type" end user application design
and DEVELOPMENT, it will STILL require "someone" at the other end of
the new (and improved) tools doing the "lower-level" programming that
provides such tools (and components and libraries).

It is normal and natural for programmers to see things from a programming
perspective.

What I am suggesting is bigger than that.

I foresee a time when there will be no need for the "lower level"
programming you are talking about, Bill.

The job of building the tools will be complete. And "smart software" will do
the enhancements to them.

There are already specialised tools and wizards to do specific jobs.
(Network management, for instance). Programmers are not redeveloping these
tools; you could argue that they are "complete". They work and do the job
they are designed for. In fact, it would not be desirable to have
programmers "fiddling" with them.

One of the points often overlooked about component based systems is that
these components encapsulate EXPERTISE as well as functionality. I have
purchased components to provide specific functionality, initially to save me
the time of writing it myself, then found that the component provided
Methods I would never have dreamed of (in addition to the ones I needed),
because the programmer who wrote it had many years of EXPERTISE in that
particular area and was aware of things I couldn't be aware of unless I had
also spent years in a particular niche.

I only have one lifetime and I cannot be an expert in everything. Sooner or
later it is necessary to trust someone else's experience.

Your assumption that there will ALWAYS be a requirement for "low level"
programmers to keep maintaining tools is, at best, arguable, at worst, just
dead wrong.

I can well imagine the current "trend" of every medium-to-large
shop/business/company moving away from having their own (semi-)large
programming staff to using a combination of off-the-shelf (but customizable)
applications and end-user available "tools". However, both of these require
SOMEONE somewhere designing/programming the tools themselves.

But not INDEFINITELY.
Whether this programming is done in OO, "procedural", waterfall, or whatever
type environments, is not something that my "crystal ball" tells me (yet).

It really doesn't matter. Sooner or later, the job will be "done"...
If anything, my GUESS is that the "end users" of such tools will be LESS
tolerant of "iterative find a mistake and fix it" applications than users of
in-house developed applications. This means that I can see the "return" of
more strictly enforced "sign-off" of design, testing, development phases of
the TOOLS and components that are delivered to "purchasers".

Well, your guess is wide of the mark, Bill. The iteration is required to get
closer to the goals, within the time allocated to do so (timebox). These ARE
"in-house developed applications". I have managed projects where we did this
(no guessing involved). It is foreign to the way you may have been used to
working, but it is VERY successful, PARTICULARLY with end users. (Why
wouldn't it be? They are involved in the process throughout and feel that it
is as much theirs as IT's.) Users and IT people work together to achieve
specific goals within a specified time period. Programmers write code (or
select and drop components) towards the achievement of the common goal.
Users can see something taking shape as the process unfolds, and they are
able to ensure it meets CURRENT (today's) business requirements and balance
the priorities according to the Business needs.

On such a project, users and programmers discuss requirements together and
the programmers cut code. It is not such a big leap to have smart software
cut the code. I believe that is what will happen within the next 15 years.
It is just too expensive to maintain "multi-lingual" programming departments
in-house. Smart software will take on this role. Even if it isn't
"intelligent", this lack can be compensated for by keeping Humans in the
loop. These Humans will be end Users, not technicians.

Pete.
 

Peter E.C. Dashwood

Karl Heinz Buchegger said:
Peter E.C. Dashwood said:
www.aboutlegacycoding.com/Archives/V3/V30501.asp
www.aboutlegacycoding.com/Archives/V3/V30201.asp

I'm sorry, these are links to just two of the articles I have written which
are pertinent. I don't normally promote my own work, but you did ask for
some links and these will at least give you an idea of where I'm coming
from.

No problem.
The links are fine.
[snip example]

There are some assumptions in the above whimsy (the computer has certain
inbuilt "concepts" [you could think of this as a set of components with
all the attributes and Methods of a CUSTOMER, for instance], a natural
language interface, a "test" database for each of its concepts; however, all
of these things are possible with today's technology, and what is currently
"bleeding edge" will be passé in 15 years...), but I don't think anyone with
a programming background would say it was "impossible" or "infeasible".

That reminds me of 'SHRDLU' :)

Sorry, you lost me ..."SHRDLU"?
Ever read D. Hofstadter's Gödel, Escher, Bach? (I guess you did.)

Nope, none of the above.

Everything I know, I learned on the shop floor. (In 38 years, even if I was
a slow learner (which I'm not...<G>), I'd have to pick up SOMETHING,
wouldn't I?)

I realise that there is a wealth of excellent work available now and I would
encourage young people to read it. (It's a bit late for me...<G>).

Try and understand that in 1965 not a lot was known about the theory of
computing, and what was known was not widely shared because it could mean
competitive advantage. Try to imagine a world WITHOUT standard "best
practices", installation standards, computing science courses, even standard
algorithms... We did things and got programs working. Next time we
wrote a program, we tried to make it "better" than the last one we wrote. (I
still do that to this day, but I realise there are things I could do better
that I am not going to change now...).
The augmented semantic network and its size seem to be one of the
big problems with this.
Anyway: We have wandered way off topic, but you gave me a hint to leave
my small little programming world (which concentrates around 3D graphics)
and start looking over the fence again.

Thanks for sharing your thoughts.

It is always a pleasure, and thank you for being interested.

Pete.
 

jce

Peter E.C. Dashwood said:
It is normal and natural for programmers to see things from a programming
perspective.

What I am suggesting is bigger than that.
I foresee a time when there will be no need for the "lower level"
programming you are talking about, Bill.
What is a "lower level" programmer? The boundaries will shift but there
will still always be a lower level... maybe it's not a bits-and-bytes
programmer but a trainer. It's still a form of programming. Even in a
sophisticated piece of adaptive software someone would have to provide the
learning environment - initially, until that role gets replaced and we move
along the escalator again. I still see this as a role of a "programmer"
more than an "end user" - though the lines may become blurry.
The job of building the tools will be complete. And "smart software" will do
the enhancements to it.
Don't forget the cost factor. It's still cheaper to pay a labourer 80c a
day to strip bark from wood for 10 hours than it is to maintain the
machinery to do it. Money will be the determining factor of what will and
won't happen - and soon after, the class struggle. I assume the same to be
even more true in IT - it is already happening... free overtime is cheaper
than a good toolset.
There are already specialised tools and wizards to do specific jobs.
(Network management, for instance). Programmers are not redeveloping these
tools; you could argue that they are "complete". They work and do the job
they are designed for. In fact, it would not be desirable to have
programmers "fiddling" with them.

Will the same TCP/IP still be used in 15 years? Can we still use the
same network management tools when routers are obsolete and switches aren't
what they used to be?

Hardware *still* drives software. Plug 'n' Play never saved the day... I
don't see tools fixing themselves to work with the hardware unless there is
another operating environment layer... made by...

Another important thing you don't consider is the power of the hobbyist and
the yearning that some people have to do things.
People could all drive automatic cars. But they still opt for manual.
People could ride in buses that drop them off at the exact destination, but
people like to drive and do. The business world isn't shielded from this.
Success of tools is still dependent on their usage. Unless people all jump
onto a new toolset, I think 15 years is too short for anything dominant to
come along and shape the IT world you envision. Linux didn't happen because
of Red Hat or IBM but because of the underlying support from regular workers.
I only have one lifetime and I cannot be an expert in everything. Sooner or
later it is necessary to trust someone else's experience.
Your assumption that there will ALWAYS be a requirement for "low level"
programmers to keep maintaining tools is, at best, arguable, at worst, just
dead wrong.
Not talking for Bill....maybe your view of "low level" is too narrow ;-)
Smart software will take on this role. Even if it isn't
"intelligent", this lack can be compensated for by keeping Humans in the
loop. These Humans will be end Users, not technicians.
What happens when the end User cannot get the results he wants...who's he
gonna call?

User: "Give me X"..
I can give you a kinda X
User: "Give me X"
I can give you a kinda X
User: "Give me X"
I can give you a kinda X
User: "Hey tech geek..can you get this thing to give me X?"
User: "Hey, where'd you go....hey tech geek...."
User: "Hellooo?..is anyone there...."
I can give you a kinda X, you still want it?

If I had to guess....Bill is too narrow and you're too wide....we'll get
something in between I'm sure (in 15 years!)

Last post on the matter - though I will no doubt keep reading :)

JCE

btw: I hope you're a little wrong... because I still have to ride this out
for another few decades whilst you are relaxing on the shore somewhere.
 

Stephane Richard

Are you saying that in 20 years, a programmer won't have the tools to make
his own programming language, his own OS, should he or she decide to? And
they call that progress? I call it going backwards. If this is what
I'm going to face in 20 years, I'll be making endless copies of DOS, Linux
and maybe OS/2 so that I have the choice to do what I want. (Note that I
didn't mention Windows ;-)

Here's my view of things, from my point of view, so you can't sue me for
saying this...hehehe.

We haven't even begun to touch the tip of the iceberg of what we can do as
far as software development goes. And while Microsoft seems to be amazed by
its Windows (where it's been, where it is now, and where it's going to be),
given that some 3 to 4 thousand programmers went into the making of Windows,
I'm not impressed by those results. This having been said, so far, all
we've done for all these decades is make the computer do what we don't want
to do (hefty calculations, any repetitive tasks, games (not for the same
reasons of course :)). But we haven't even begun to tap into the potential
that's ahead of us.

To me, what you are suggesting is that we let others come up with the new
stuff, give the users the ability to adjust/change what was done through
the use of somewhat flexible modules, and that's it for the programmer? I'm
thinking much longer term than that. After this step of yours happens, do
you really think that everything will have been made that can be made in the
whole computer industry? I beg to differ: this approach to the future
of computing is one of many thousands of avenues, and I'm not saying it's
the only way things will go; even if this ever gets made, it won't close the
door to the rest of the potential that is still, to date, untouched.

But that's my vision of it: once your implementation exists and is stable,
do you think the users, ever so evolving as you say (which I do have to
agree that they are), will stay contented with this? That they won't want
more? Give a man an inch, he'll take a foot, etc etc etc... I don't see that
human behavior stopping anytime soon. To stop that human behavior, we might
as well stop populating, since after 5 billion people we can safely assume
we've conceived every possible kind of human being? Not at all :). Far
from it. And the same goes for programming. Your view is one of many
parallel views, views that will all equally evolve, each in their own
specific ways, each towards very specific and unique goals. And as long as
there are computers, there will be programmers. And programming languages
that range from low level to high level. The way Pascal is adjusting to
the current reality of development, I don't doubt that it can adapt to any
new programming concept we can throw at it. It's been doing great at
adapting thus far.

Remember, software development is not a user-only oriented concept. :) At
least not in my book.

And that's my $0.02 worth :)... (OK, maybe there's a couple of dollars in
there instead :).
 

Stephane Richard

Quoted: " SQL Server for example has a "drag and drop" tool that allows
processing streams to be built in minutes. These same streams using
procedural code would take days."

Funny, but in 15 years I don't see Microsoft in the picture. ;-)
 

Roedy Green

bottom line: code written in java runs on less platforms than code
written in C.

Bottom line is the odds of a Java app running correctly without
modifications are much higher than for a C program. C programs don't
specify the desired behaviour nearly strictly enough. To make code
that runs on many platforms you have to create a quite extensive set
of macros, one library for each platform.

Even the size of an int is not nailed down, for heaven's sake.
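Roedy's last point is easy to demonstrate from the Java side: C leaves the width of int implementation-defined (only minimum ranges are guaranteed), while the Java Language Specification fixes every primitive width. A trivial sketch:

```java
// Primitive widths are fixed by the Java Language Specification,
// so these print the same values on every conforming JVM.
public class Widths {
    public static void main(String[] args) {
        System.out.println(Byte.SIZE);      // 8
        System.out.println(Short.SIZE);     // 16
        System.out.println(Integer.SIZE);   // 32
        System.out.println(Long.SIZE);      // 64
        System.out.println(Character.SIZE); // 16 (a UTF-16 code unit)
    }
}
```

The closest C equivalent of this guarantee only arrived with C99's `<stdint.h>` fixed-width types, and even those are optional on exotic targets.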
 

James J. Gavan

Roedy said:
Bottom line is the odds of a Java app running correctly without
modifications are much higher than for a C program. C programs don't
specify the desired behaviour nearly strictly enough. To make code
that runs on many platforms you have to create a quite extensive set
of macros, one library for each platform.

Even the size of an int is not nailed down, for heaven's sake.

Roedy,

Haven't been there but lived real close in Guildford at one stage -
reading you C and Java people, I feel like a Wimbledon spectator with my
head zinging from left to right as the opponents take a swipe at the
ball !

For the uninitiated it really is difficult to balance the truth between
your opposing camps. Is there anywhere, but anywhere, where the observer
can get a reasonably unbiased, balanced view of the pros and cons per
language?

Of course I could suggest if you have a real problem, use OO COBOL <G>.

BTW - took a very quick look at your Java Glossary, and noted your
reference to the lack of FIFO and LIFO in Java lists. Surely that can't be a
big deal; you could possibly clone your own list class. Although
collections (lists) are included in both the Fujitsu and Micro Focus
versions of OO COBOL, our J4 Standards Committee currently has collections
as an on-going topic. I doubt we'll finish up with a collection
specifically geared to FIFO/LIFO. I can handle it quite easily at
present from either an Ordered or SortedCollection :-

*>FIFO
move 1 to MyIndex
invoke MyCollection "at" using MyIndex returning anElement

*>LIFO
invoke MyCollection "size" returning lsSize
*> above gives total elements
*> then I can do either of the following :-

invoke MyCollection "at" using lsSize returning anElement
*> OR
move lsSize to MyIndex
invoke MyCollection "at" using MyIndex returning anElement

If you haven't got what you want, it's James Gosling's fault. (He was born
in Calgary.)
Guess he should have checked the Smalltalk hierarchy more closely before
he sat down to re-invent the wheel <G>.
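As an aside from well after this exchange: later Java versions did grow first-class FIFO/LIFO support in java.util.Deque (ArrayDeque arrived in Java 6), so no cloned list class is needed. A small sketch mirroring Jimmy's COBOL example:

```java
// FIFO and LIFO over one structure via java.util.Deque.
// This is a modern-Java aside; it postdates the thread.
import java.util.ArrayDeque;
import java.util.Deque;

public class FifoLifo {
    public static void main(String[] args) {
        Deque<String> q = new ArrayDeque<>();
        q.addLast("first");
        q.addLast("second");
        q.addLast("third");

        System.out.println(q.peekFirst()); // FIFO front: "first"
        System.out.println(q.peekLast());  // LIFO top:   "third"

        System.out.println(q.pollFirst()); // dequeue (FIFO): "first"
        System.out.println(q.pollLast());  // pop (LIFO):     "third"
    }
}
```

The addFirst/addLast/pollFirst/pollLast pairs correspond directly to the "at 1" and "at size" indexing in the OO COBOL snippet above.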

I might add I can invoke both C and Java, with COBOL classes written to
support invoking Java. I have no need at the moment as I have rich
support of collections and GUIs built into the product.

One comment that came up early on in this thread was "Use the right
tools for the job" - not necessarily those exact words, but a point made
often in the COBOL group. Somebody of course nodded sagely at this pearl
of wisdom. Not always, but more often than not, that phrase translates
to "Use the free or cheapest tools you can get to do the job". Can't
knock people for that attitude, but I do wish they would come at it in an
'honest' mode.

"Using the right tool" - here's one that came up recently from Brazil
in my Tech Support group. "How can I emulate an on-line Help file where
you key in some characters and then the entry is highlighted in the
Listbox ?".

Quite naturally a support person suggested, "Go to this site
www.xxxxx.com and check out their OCX". I thought, "I betcha that's
possible in COBOL". It is. It was a piece of cake. Micro Focus has
values for Windows events, and it looked like some four were
possibilities. All I had to do was a quick test of the four to get the
one which would immediately trigger an event based on a single
keystroke.

Problem solved ! Having done that as an interest exercise, I can already
see where it can be RE-USED in real applications.

With so many COBOLers using old, effective and established (mainframe)
compilers, without any OO, naturally there's a whole raft of people who
automatically address problems through C or Java, or whatever.

Note, none of the above has anything to do with the proselytizing of
components by Pete Dashwood - I'm talking about REALLY using OO COBOL!

Jimmy, Calgary AB
 

goose

Bat Guano said:
big like my mobile phone?

what's your point ? that java runs on your mobile phone ?

<NEWS FLASH> C probably targets that too </NEWS FLASH>

and it also targets many that java does not run on.

so what exactly *is* your point ? java runs on a *fraction*
of the platforms that C targets.


does your mobile have under a K of ram ?

thought not

goose,
java code isn't as portable as C code.
 
