Can a low-level programmer learn OOP?


Neil Cerutti

So Kay actually invented NONE of the concepts that make a PL an
OOPL. He only stated the concepts concisely and named the
result OOP,

Naming and categorizing something shouldn't be underestimated as
an accomplishment, though. The exercise can have profound
results. For example, consider "marriage." ;)
Under a claim of Academic Impunity (or was that "Immunity"),
here's another historical tidbit. In a previous employment we
once had a faculty applicant from CalTech who knew we were
using Simula as our introductory and core language in our CS
program, so he visited Xerox PARC before coming for his
interview. His estimate of Alan Kay and Smalltalk at that time
(early 80s) was that "They wanted to implement Simula but
didn't understand it--so they invented Smalltalk and now don't
understand _it_!"

Heh, heh. Thanks for the interesting info.
 

Steve Holden

Aahz said:
For that matter, even using OOP a bit with C++ and Perl, I didn't get it
until I learned Python.


Newbie. ;-)

(I started with BASIC in 1976.)
Newbie ;-)

(I started with Algol 60 in 1967).
My experience is that learning GUI programming is difficult. Moreover,
GUI programming in C involves a lot of boilerplate that can be automated
more easily with Python. So I think this will be a better solution.
I used to write in C for the SunView platform (back in the days when the
GUI was integrated into the kernel as the only way to get acceptable
speed on the display). From what I remember, "Hello World" took about 40
lines.

The immense (relatively speaking: this was 1985) size of the libraries
required was one of the primary justifications for implementing shared
libraries.
Note very very carefully that Python does not require an OOP style of
programming, but it will almost certainly be the case that you just
naturally start using OOP techniques as you learn Python.

That's very true. I still use a lot of (perhaps too much) procedural
coding, but driving the object-oriented libraries is a great way for a
noob to get started in OOP.

regards
Steve
--
Steve Holden +1 571 484 6266 +1 800 494 3119
Holden Web LLC/Ltd http://www.holdenweb.com
Skype: holdenweb http://del.icio.us/steve.holden
 

Björn Lindqvist

You can sometimes get better performance in C++ than in C, because C++
has "inline". Inline expansion happens before optimization, so you
can have abstractions that cost nothing.

C99 has that too.
Python is a relatively easy language, easier than C++, Java,
or even Perl. It's quite forgiving. The main implementation,
CPython, is about 60x slower than C, though, so if you're trying
to implement, say, a rapidly changing digital oscilloscope display,
the result may be sluggish.

But if the data for that oscilloscope comes from an external device
connected via a serial port, execution speed won't matter.
 

Bruno Desthuilliers

Chris Carlen wrote:
Hi:

From what I've read of OOP, I don't get it. I have also found some
articles profoundly critical of OOP. I tend to relate to these articles.

However, those articles were no more objective than the descriptions of
OOP I've read in making a case. I.e., what objective
data/studies/research indicates that a particular problem can be solved
more quickly by the programmer, or that the solution is more efficient
in execution time/memory usage when implemented via OOP vs. procedural
programming?

None. Definitively. Wrt developer time, it's mostly a matter of whether
it fits your brain: if it does, you'll find it easier, else choose
another programming style. Wrt CPU time and memory, and using
'low-level' languages (C/C++/Pascal etc.), OO is usually worse than
procedural for simple programs. For more complex ones, I'd say it tends
to converge, since these programs, when written procedurally, usually
rely on many abstraction/indirection layers.
The problem for me is that I've programmed extensively in C and .asm on
PC DOS way back in 1988. Then didn't program for nearly 10 years during
which time OOP was popularized. Starting in 1999 I got back into
programming, but the high-level-ness of PC programming and the
completely foreign language of OOP repelled me. My work was in analog
and digital electronics hardware design, so naturally I started working
with microcontrollers in .asm and C. Most of my work involves low-level
signal conditioning and real-time control algorithms, so C is about as
high-level as one can go without seriously losing efficiency.

You may still want to have a look at some more functional languages like
Haskell, OCaml or Erlang. But if you find OO alien, I doubt you'll have
a strong feeling for functional programming.
The
close-to-the-machine-ness of C is ideal here. This is a realm that I
truly enjoy and am comfortable with.

Hence, being a hardware designer rather than a computer scientist, I am
conditioned to think like a machine. I think this is the main reason
why OOP has always repelled me.

OTOH, OO is about machines - at least as conceived by Alan Kay, who
invented the term and most of the concept. According to him, each object
is (a simulation of) a small machine.
Perhaps the only thing that may have clicked regarding OOP is that in
certain cases I might prefer a higher-level approach to tasks which
involve dynamic memory allocation.

While OO without automatic memory management can quickly become a major
PITA, OO and GC are two orthogonal concepts - some languages have
builtin support for OO but nothing specific for memory management
(ObjectPascal, C++, ObjectiveC), and some non-OO languages do have
builtin memory management (mostly but not only in the functional camp).
If I don't need the execution
efficiency of C, then OOP might produce working results faster by not
having to worry about the details of memory management, pointers, etc.

It's not a feature of OO per se. But it's clear that not having (too
much) to worry about memory management greatly enhances productivity.
But I wonder if the OOP programmers spend as much time creating classes
and trying to organize everything into the OOP paradigm as the C
programmer spends just writing the code?

Don't you design your programs? AFAICT, correct design is not easier
with procedural programming.

Now to answer your question, I'd say it depends on your experience of
OO, and of course of the kind of OO language you're using. With
declaratively statically typed languages - like C++, Java etc - you are
forced into a lot of upfront design (way too much IMHO). Dynamic
languages like Smalltalk, Python or Ruby are much more lightweight in
this area, and tend to favor a much more exploratory style - sketch a
quick draft on a napkin, start coding, and evolve the design while
you're coding.

And FWIW, Python doesn't *force* you into OO - while you'll be *using*
objects, you can write most of your code in a procedural way, and only
"fall down" into OO for some very advanced stuff.
Ultimately I don't care what the *name* is for how I program. I just
need to produce results.

Indeed !-)
So that leads back to objectivity. I have a
problem to solve, and I want to find a solution that is as quick as
possible to learn and implement.

Problem:

1. How to most easily learn to write simple PC GUI programs

GUIs are one of the best (and most successful) applications of OO - and
as a matter of fact, even GUI toolkits implemented in plain C tend to
take an OO approach (GTK+ being a clear example, but even the old
Pascal/C Mac GUI API has a somewhat "object based" feeling).
that will
send data to remote embedded devices via serial comms, and perhaps
incorporate some basic (x,y) type graphics display and manipulation
(simple drawing program). Data may result from user GUI input, or from
parsing a text config file. Solution need not be efficient in machine
resource utilization. Emphasis is on quickness with which programmer
can learn and implement solution.

So what you want is a high-level, easy-to-learn language with a rich
collection of libraries. The GoodNews(tm) is that Python is one of the
possible answers.
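
Just to give you an idea of the serial side, here's roughly what it
looks like with pyserial - a sketch only, where the port name, baud rate
and "name=value" protocol are made up for the example, not taken from
your spec:

import serial  # pyserial, a third-party package

def send_setting(port_name, name, value):
    """Send one name=value setting to the device and return its one-line reply."""
    port = serial.Serial(port_name, baudrate=9600, timeout=1)
    try:
        port.write(("%s=%s\r\n" % (name, value)).encode("ascii"))
        return port.readline().decode("ascii").strip()
    finally:
        port.close()

# Example call (port name is a placeholder; "COM3" or similar on Windows):
# print(send_setting("/dev/ttyUSB0", "gain", "42"))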
2. Must be cross-platform: Linux + Windows.

Idem. You can even add most unices and MacOS X to the list.
This factor can have a big
impact on whether it is necessary to learn a new language, or stick with
C. If my platform was only Linux I could just learn GTK and be done
with it. I wouldn't be here in that case.

Possible solutions:

Form 1: Use C and choose a library that will enable cross-platform GUI
development.

Pro: Don't have to learn new language.
Con: Probably will have difficulty with cross-platform implementation
of serial comms. This will probably need to be done twice. This will
waste time.

Con: C is a low-level language (not a criticism - it has been designed
so), which greatly impacts productivity.
Con: the only serious C (not C++) cross-platform GUI toolkit I know of is
GTK+, which is less cross-platform than wxWidgets, and *is* OO.
Form 2: Use Python and PySerial and TkInter or wxWidgets.

I'd probably go for wxWidgets.
Pro: Cross-platform goal will likely be achieved fully.

Very likely. There are a couple of things to take care of, but nothing
close to what you'd have to do in C.
Have a
programmer nearby with extensive experience who can help.
Con: Must learn new language and library.

Yes, obviously. The (other) GoodNews(tm) is that, according to most
estimates, an experienced programmer can become productive in Python in
a matter of weeks at worst (some manage it in a few days). This won't
mean you'll master the language and use it at its best, but don't worry,
you'll get things done, and perhaps in less time than with C.
Must possibly learn a
completely new way of thinking (OOP)

Not necessarily. While Python is OO all the way down - meaning that
everything you'll work with is an object (functions included) - it
doesn't *force* you into OO (IOW: you don't have to define classes to
write a Python program). You can just as well use a procedural - or even
somewhat functional - approach, and most Python programs I've seen so
far are a mix of the three.
not just a new language syntax.

You forgot one of the most important parts of a language: idioms. And
it's definitely *not* idiomatic in Python to use classes when a
simpler solution (plain functions and modules) is enough.
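
As a toy illustration of that point (my own example, nothing from your
problem): parsing a simple "key = value" config file doesn't call for a
class at all - a plain function returning a dict is already idiomatic
Python:

def load_config(path):
    """Read 'key = value' lines into a dict, skipping blanks and # comments."""
    settings = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                key, _, value = line.partition("=")
                settings[key.strip()] = value.strip()
    return settings

# settings = load_config("device.cfg")
# print(settings.get("baudrate", "9600"))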
This might be difficult.

Not necessarily that much.
Form 3: Use LabVIEW

Pro: I think that the cross-platform goal can be met.
Con: Expensive. I would prefer to use an Open Source solution. But
that isn't as important as the $$$. I have also generally found the 2D
diagrammatical programming language of "G" as repelling as OOP. I
suspect that it may take as much time to learn LabVIEW as Python.

I don't have much knowledge of LabVIEW so I can't comment on this. But I
remember a thread here about G, and I guess you'll find Python much more
familiar - even if you'll need some 'thinking adjustment' to grok it.
In
that case the time spent on Python might be better spent since I would
be learning something foundational as opposed to basically just learning
how to negotiate someone's proprietary environment and drivers.

IMHO, the biggest gain (in learning Python vs LabVIEW) is that you'll
add a very valuable tool to your toolbox - the missing link between C
and shell scripts.
Comments appreciated.
HTH
 

Bruno Desthuilliers

Chris Carlen wrote:
(snip)
Why? Why is OOP any better at explaining a state machine to a computer?

I don't know if it's "better", but state machines are the historical
starting point of OO with the Simula language.
I can write state machines all over the place in C,

And even in assembler - so why use C ?-)
which tend to be
the core of most of my embedded programs. I can write them with
hardcoded logic if that seems like the easy thing to do and the
probability of extensive changes is extremely low. They are extremely
easy to read and to code. I have written a table-driven state machine
with arbitrary-length input condition lists. The work was all in
designing the data structures.

Which is another approach to OO. When programming in C, you do use
structs, don't you? And you do write functions operating on instances
of these structs? And possibly turn these structs into ADTs? Well, one
possible definition of "objects" is "ADT + polymorphism".
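
To make that concrete, here's a rough sketch (class names invented for
the example) of how the "struct + functions operating on it" pattern
turns into classes, with polymorphism replacing the explicit dispatch on
a type tag you'd write in C:

class Counter(object):
    """The 'struct' (state) and the functions that operate on it, bundled together."""
    def __init__(self):
        self.count = 0
    def step(self, pulse):
        if pulse:
            self.count += 1

class Timer(object):
    def __init__(self, period):
        self.period = period
        self.elapsed = 0
    def step(self, pulse):
        self.elapsed += 1          # counts ticks regardless of the pulse

# Polymorphism: each object answers step() in its own way - no switch on a type field.
devices = [Counter(), Timer(period=100)]
for tick in range(5):
    for d in devices:
        d.step(pulse=True)
print("count=%d elapsed=%d" % (devices[0].count, devices[1].elapsed))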
Why would OOP be better?

Whoever claims it's absolutely "better" should be shot down. I do find
OO *easier* than pure procedural programming, but I started programming
with mostly OO (or at least object-based) languages, and only then
learned pure procedural languages (and then bits of functional
programming). It's not a matter of being "better", it's a matter of what
style fits your brain. If OO doesn't fit your brain, then it certainly
won't be "better" *for you*.
Different is not better. Popular is not
better. What the academics say is not better. Less lines of code might
be better, if the priority is ease of programming.

and maintenance, and robustness (AFAICT, the defect/LOC ratio is
somewhat constant whatever the language, so less code means fewer bugs).
Or, less machine
execution time or memory usage might be better, if that is the priority.
Indeed.

Until I can clearly understand why one or the other of those goals might
better be realized for a given problem with OOP vs. procedures, I just
don't get it.

Seems quite sane.
I will keep an open mind however, that until I work with it for some
time there is still the possibility that I will have some light go on
about OOP. So don't worry, I'm not rejecting your input.

This is a very simplistic - and as such, debatable - assertion IMHO. On
my Linux box, a cat-like program is hardly faster in C than in Python
(obviously, since such a program is IO-bound and both implementations
will use the native IO libs), and for quite a few computation-heavy
tasks there are Python bindings to highly optimised C (or C++) libs. So
while it's clear that Python is not about raw execution speed, it's
usually quite adequate for most application-level tasks. And when it
isn't, well, it's always possible to recode the critical parts in Pyrex
or C.
 

Alex Martelli

Chris Carlen said:
From what I've read of OOP, I don't get it. I have also found some
articles profoundly critical of OOP. I tend to relate to these articles.

OOP can be abused (particularly with deep or intricate inheritance
structures). But the base concept is simple and clear: you can bundle
state and behavior into a stateful "black box" (of which you may make as
many instances, with independent state but equal behavior, as you need).
Hence, being a hardware designer rather than a computer scientist, I am
conditioned to think like a machine. I think this is the main reason
why OOP has always repelled me.

I'm an MS in EE (minoring in computer engineering) by training (over a
quarter century ago:); I "slid" into programming kind of inexorably
(fate obviously wanted me to:) but paradigms such as OOP (and
functional programming, but that's another subject) always made sense to
me *in direct analogy to my main discipline*. A JK flip-flop and a D
flip-flop are objects with 1-bit states and different behavior; I may
put in my circuit as many (e.g.) J-K flip-flops as I need, and each will
have separate state, even though each will have identical behavior (how
it responds to signals on the J and K lines). I don't need to think
about how a J-K flip-flop is *made*, inside; I use it as a basic
component in designing richer circuits (well, I did back when I DID
design circuits, but I haven't _totally_ forgotten:). I do know how to
make one in terms of transistors, should I ever need to (well, maybe I'd
have to look it up, but I _used_ to know:), but such a need is unlikely
to arise, because it's likely to be there as a basic component in
whatever design library I'm supposed to use for this IC.

Components much richer than J-K flip-flops are obviously more common
nowadays, but remember my real-world experience designing chips is from
the early '80s;-). Nevertheless the concept of a "bundle of state and
behavior" is still there -- and a direct, immediate analogy to OOP.
(Functional programming, OTOH, is analogous to stateless input-output
transformation circuits, an even more basic concept in HW design:). If
anything, it's the concept of "procedural programming" that has no
direct equivalent in HW design (unless you consider microcode "HW", and,
personally, I don't;-). [[Fortunately as a part of the CE minor I did
learn Fortran, Lisp and Pascal, and a few machine-languages too, so I
wasn't totally blown away when I found myself earning a living by
programming rather than by designing chips, but that's another
story:)]]
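
In Python, the analogy is almost literal (just a sketch, with names of
my own choosing - not anyone's production code): a flip-flop class is
the "design library part", and each instance carries its own bit of
state while sharing the same behavior:

class DFlipFlop(object):
    """A one-bit 'bundle of state and behavior'."""
    def __init__(self):
        self.q = 0                # the stored bit

    def clock(self, d):
        self.q = d & 1            # on the clock edge, latch the D input
        return self.q

ff1, ff2 = DFlipFlop(), DFlipFlop()   # two instances, independent state
ff1.clock(1)
print("ff1.q=%d ff2.q=%d" % (ff1.q, ff2.q))   # 1 and 0: same behavior, separate state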


Alex
 

James Stroud

Chris said:
Hi:

From what I've read of OOP, I don't get it. I have also found some
articles profoundly critical of OOP.

I've also found articles critical of Darwinism--but we can chalk that up
to religious zealotry, can't we?

Any gui more complicated than a few entry fields and some checkbuttons
is going to lend itself to OOP--so if you want to do GUI, learn OOP. The
time you spend learning OOP will be about 1/10th the time required to
debug a modestly complicated gui. This is especially true of guis that
require real-time feedback behavior.

If you just want to enter some values and set some flags and then hit
"go", you could always program the GUI in HTML and have a cgi script
process the result. This has a lot of benefits that are frequently
overlooked but tend to be less fun than using a bona fide toolkit like
wx or Qt.

James
 

Hendrik van Rooyen

Chris Carlen said:
Form 2: Use Python and PySerial and TkInter or wxWidgets.

Pro: Cross-platform goal will likely be achieved fully. Have a
programmer nearby with extensive experience who can help.
Con: Must learn new language and library. Must possibly learn a
completely new way of thinking (OOP) not just a new language syntax.
This might be difficult.

This is the way to go. - Trust me on this.

When you describe your history, it is almost an exact parallel to mine.
In my case, I have been doing real low level stuff (mostly 8031 assembler)
since 1982 or so. And then I found Python in a GSM module (Telit), and
I was intrigued.

I really appreciate your comments on OO - they parallel a lot of what I
feel, as there is a lot of apparent BS that does not seem to "do
anything" at first sight.

However - for the GUI stuff, there is an easily understood relationship
between the objects and what you see on the screen - so it's a great way
of getting into OO - as far as people like you and me will go with it,
which is not very far, as we tend to think in machine instructions...

And for what it's worth - you can write assembler-like Python, and it
also works.

The best thing to do is just to spend a few days playing with, say,
Tkinter. I use a reference from the web written by John W. Shipman at
New Mexico Tech - it is succinct and clear, and deserves more widespread
publicity.

Google for it - I have lost the link, although I still have the pdf file.

You will also find the interactive prompt that you get when you type
python at a command prompt invaluable - it lets you play with and debug
small code snippets so that you can learn as you go along - it really speeds
up the whole learning process, and makes it almost painless.

All this talking is just wasting time - you could have had your first frame up
on the screen already, with a blank canvas, ready for drawing. It really goes
that quick, once you start.
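
To show how little code that first frame actually is (a sketch, nothing
more - the window size and the demo line are arbitrary):

try:
    import tkinter as tk      # Python 3 spelling of the module
except ImportError:
    import Tkinter as tk      # Python 2, as at the time of this thread

root = tk.Tk()
root.title("First frame")
canvas = tk.Canvas(root, width=400, height=300, background="white")
canvas.pack(fill="both", expand=True)
canvas.create_line(20, 280, 380, 20)   # something to prove the canvas is live
root.mainloop()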

So the answer to the title question is: yes - a low-level programmer can
learn OOP, and it's in fact easier than it looks, as almost all the
heavy lifting has been done for you by others.

- Hendrik
 

Michele Simionato

Any gui more complicated than a few entry fields and some checkbuttons
is going to lend itself to OOP--so if you want to do GUI, learn OOP.

Yep, there is nothing to be added to that. Except maybe that if you
don't care too much about the look & feel you may consider starting with
Tkinter. Pros:

1. it is part of the standard library, and you already have it;
2. it is possibly the easiest/simplest GUI out there;
3. it runs pretty much everywhere with minimal fuss.

Michele Simionato
 

John J. Lee

[Chris Carlen]
From what I've read of OOP, I don't get it. I have also found some
articles profoundly critical of OOP. I tend to relate to these
articles.

If you want to know the truth, and opt to neither trust a friend or
colleague, nor spend the time to try it yourself, here's a third way:

Compile Qt (a toolkit like wx or Tk) and watch the list of source file
names scroll past. Beautiful! Perhaps there's some better way of
doing GUIs, but watching that list of source files, one realises that
that's an academic question: practically, OOP fits GUIs -- and much of
the other code in Qt -- so well, and so much effort has been put into
these GUI toolkit libraries, that one would be a fool not to use them
right now. A somewhat separate issue: You'd also be a fool not to
apply OOP to the GUI code *you* write *using* one of those OOP GUI
toolkits. Though you won't learn that all at once or without
conscious effort, that's not an obstacle with Python -- you can start
small.

Of course there's some level of experience / project size / project
longevity / number of people involved below which dashing it off using
what you know right now will be quicker, but the break-even point is
not far off in your case, I think.


[Chris Carlen]
However, those articles were no more objective than the descriptions
of OOP I've read in making a case. I.e., what objective
data/studies/research indicates that a particular problem can be
solved more quickly by the programmer, or that the solution is more
efficient in execution time/memory usage when implemented via OOP
vs. procedural programming?
[bruno]
None. Definitively. Wrt developer time, it's mostly a matter of
whether it fits your brain: if it does, you'll find it easier, else
[...]

How do we have confidence that that's true without doing experiments?
AFAIK, only a few such experiments have been done (not counting
research that does not meet basic standards of competence or is not
peer-reviewed).

I think some programming techniques are simply better than others for
certain tasks, even when including the variation in people's abilities
(but excluding the cost of people learning those techniques, which can
of course be significant). Of course, measurement is tricky because
of differences between programmers, but it's not impossible.


John
 

John J. Lee

Note very very carefully that Python does not require an OOP style of
programming,
agree


but it will almost certainly be the case that you just
naturally start using OOP techniques as you learn Python.

There's some truth to this. But stagnation is also very easy to
achieve, without conscious effort to improve.

Also, reading OOP books (and this list) is still beneficial, both
before and after you've understood each concept: before because it
helps to learn new concepts at a faster rate, and to learn concepts
you'd otherwise miss; after because it helps "clean up" and extend
your understanding and because it teaches you standard names for
things, helping communication.


John
 

Rustom Mody

OOP can be abused (particularly with deep or intricate inheritance
structures). But the base concept is simple and clear: you can bundle
state and behavior into a stateful "black box" (of which you may make as
many instances, with independent state but equal behavior, as you need).

Many years ago (86??) Wegner wrote a paper in OOPSLA called Dimensions
of Object Orientation, in which he called the 'base concept' of a
'bundle of state and behavior' 'object-based' programming, and defined
'object-oriented' as object-based + inheritance.

What Alex is saying is (in effect) that object-based is simple and
clear (and useful) whereas the object-orientation is subject to abuse.

This anyway is my experience: C++ programmers are distinctly poorer
programmers than C programmers -- for some strange reason closeness to
the machine has a salutary effect whereas the encouragement of
uselessly over-engineered programs makes worse programmers.

GUI is one of those cases wherein inheritance actually helps people
produce better code but this is something of an exception.

And even here one of the most widely used popularisers of GUIs has
been VB which was (at least initially) not object-oriented. VB shows
that language orientation -- tailoring 'the language' of drag-n-drop
to GUI-building and not just GUI-use -- wins over OOP.
Ruby/Rails is another example of language-oriented programming though
I feel it goes too far in (de)capitalizing, pluralizing,
(de)hyphenizing etc towards 'readability'.
[Sorry if this offends some people -- just my view!]

And this makes me wonder: it seems that Tkinter, wxPython, PyGTK etc.
are so much more popular among Pythonistas than Glade, Dabo etc.

Why is this?
 

Wayne Brehaut

Newbie ;-)

(I started with Algol 60 in 1967).

Newbie ;-)

(I started with Royal McBee LGP 30 machine language (hex input) in
1958, and their ACT IV assembler later! Then FORTRAN IV in 1965. By
1967 I too was using (Burroughs) Algol-60, and 10 years later upgraded
to (DEC-10) Simula-67.)

Going---going---
 

Wayne Brehaut

Many years ago (86??) Wegner wrote a paper in OOPSLA called Dimensions
of Object Orientation, in which he called the 'base concept' of a
'bundle of state and behavior' 'object-based' programming, and defined
'object-oriented' as object-based + inheritance.

Not quite--according to him:

object-based + classes => class-based
class-based + class inheritance => object-oriented

I.e., "object-oriented = objects + classes + inheritance".

This was not the (by then) standard definition, by which being OO would
require all four of:

1. modularity (class-based? object-based?)
2. inheritance (sub-classing)
3. encapsulation (information hiding)
4. polymorphism ((sub-) class-specific response to a message, or
processing of a method)

Unfortunately, most of the "definitions" (usually just hand-waving,
loosey-goosey descriptions) found on the web include none--or only one
or two--of these fundamental requirements by name, and are so loose
that almost any programming paradigm or style would be OO.
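
For what it's worth, here is one way (a sketch with invented names, and
with the caveat that Python's encapsulation is by convention - a leading
underscore - rather than enforced) that all four show up in a few lines
of Python:

class Instrument(object):                  # 1. modularity: state + behavior in one unit
    def __init__(self, name):
        self._name = name                  # 3. encapsulation: "private" by convention only

    def describe(self):
        return "instrument %s" % self._name

class Oscilloscope(Instrument):            # 2. inheritance: sub-classing
    def describe(self):                    # 4. polymorphism: subclass-specific response
        return "scope %s" % self._name

for device in (Instrument("A"), Oscilloscope("B")):
    print(device.describe())               # each (sub)class responds in its own way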
What Alex is saying is (in effect) that object-based is simple and
clear (and useful) whereas the object-orientation is subject to abuse.

But OO is also simple and clear (if clearly defined and explained and
illustrated and implemented), and ANY programming style is subject to
abuse. During the heyday of Pascal as an introductory programming
language (as often misused as more than that), I found that many
students spent much of their time defining the data types their program
would use.
This anyway is my experience: C++ programmers are distinctly poorer
programmers than C programmers -- for some strange reason closeness to
the machine has a salutary effect whereas the encouragement of
uselessly over-engineered programs makes worse programmers.

But this is a tautology: "over-engineered" programs are, by definition
or terminology, not a good thing--independent of what PL or style
they're finally implemented in (assuming that by "engineering" you
mean "design" or similar). Many of my Pascal students over-engineered
their solutions to simple problems too?
GUI is one of those cases wherein inheritance actually helps people
produce better code but this is something of an exception.

This seems to imply that the list of applications you have in mind or
have worked on includes fewer domains that might profit from full OO
instead of just OB. My guess is that there are many application
domains in which analysts and programmers often think in an "OO way",
but implement in just an OB way because of the PL they or their
employer requires or prefers: in some--perhaps many--of these cases
they have to do "manually" what OO would have automated.

There is a problem, though, of (especially university and college)
education and training in OOP "talking about" how glorious OO is, and
requiring students to use OO techniques whether they're most
appropriate or not (the "classes early" pedagogical mindset). And
this problem is compounded by teaching introductory programming using
a language like Java that requires one to use an OO style for even
trivial programs. And by using one of the many very similar
introductory textbooks that talk a lot about OO before actually getting
started on programming, so students don't realize how trivial a
program is required to solve a trivial problem, and hence look for
complexity everywhere--whether it exists or not--and spend a lot of
time supposedly reducing the complexity of an already simple problem
and its method of solution.

But as I noted above, a similar problem occurred with the crop of
students who first learned Pascal: they often spent much of their time
defining the data types their program would use, just as OO
(especially "classes early") graduates tend to waste time
"over-subclassing" and developing libraries of little-used classes.

The answer is always balance, and having an extensive enough toolkit
that one is not forced or encouraged to apply a programming model that
isn't appropriate and doesn't help in any way (including
maintainability). And starting with a language that doesn't brainwash
one into believing that the style it enforces or implies is always the
best--and textbooks that teach proper choice of programming style
instead of rigid adherence to one.

wwwayne
 

Wayne Brehaut

quoth the Wayne Brehaut:


Mel? Is that you?

http://www.pbm.com/~lindahl/mel.html

Ha-ha! Thanks for that!

Although I'm not Mel, the first program I saw running on the LGP-30
was his Blackjack program! In 1958 I took a Numerical Methods course
at the University of Saskatchewan, and we got to program Newton's
forward difference method for the LGP-30. Our "computer centre tour"
was to the attic of the Physics building, where their LGP-30 was
networked to a similar one at the University of Toronto (the first
educational computer network in Canada!), and the operator played a
few hands of Blackjack with the operator there to illustrate how
useful computers could be.

A few years later, as a telecommunications officer in the RCAF, I
helped design (but never got to teach :-( ) a course in LGP-30
architecture and programming using both ML and ACT IV AL, complete
with paper tape input and Charactron Tube
(http://en.wikipedia.org/wiki/Charactron) output--handy, since this
display was also used in the SAGE system.

We weren't encouraged to use card games as examples, so used
navigational and tracking problems involving fairly simple
trigonometry.

wwwayne
 

Dennis Lee Bieber

Depending upon the tool set available, OOP doesn't really have
"programmers" writing code... It has analysts designing objects in the
abstract, feeding them to the toolset, and the toolset creates the code
templates to be filled in with actual implementation details.
Though I should have added that, in Python, the toolset tends to
be... just an editor... I don't think anyone has created the equivalent
of Rational ROSE for Python.
--
Wulfraed Dennis Lee Bieber KD6MOG
HTTP://wlfraed.home.netcom.com/
HTTP://www.bestiaria.com/
 

Ben Finney

Dennis Lee Bieber said:
Though I should have added that, in Python, the toolset tends to
be... just an editor...

Much more than that. The common toolset I see used is:

* A customisable, flexible, programmer's text editor
* The online documentation in a web browser
* A Python interactive shell
* An automated build tool like 'make'
* A programmable shell for ad-hoc tasks
* A multi-session terminal program to run all these simultaneously

What I *don't* see is some single all-in-one tool specific to
programming a particular language. Having learned one good instance of
each of the above, it seems silly to need to learn all of them again
simply because one has started using Python as the programming
language.
 
