Docstrings considered too complicated

  • Thread starter Andreas Waldenburger

mk

Steve said:
Puts me in mind of Mario Wolczko's early attempts to implement Smalltalk
on a VAX 11/750. The only bitmapped display we had available was a Three
Rivers PERQ, connected by a 9600bps serial line. We left it running at
seven o'clock one evening, and by nine am the next day it had brought up
about two thirds of the initial VM loader screen ...

You tell these young kids, and they just don't believe you!

For the uncouth yobs, err, culturally-challenged:


(Four Yorkshiremen)

Regards,
mk
 

D'Arcy J.M. Cain

But I was still able to play Nim and Duck Shoot (after keying it in)!

Did you ever play Star Trek with sound effects? I was never able to
get it to work but supposedly if you put an AM radio tuned to a
specific frequency near the side with the I/O card it would generate
static that was supposed to be the sound of explosions.

Of course, the explosions were happening in a vacuum so maybe the
silence was accurate. :)
 

MRAB

D'Arcy J.M. Cain said:
Did you ever play Star Trek with sound effects? I was never able to
get it to work but supposedly if you put an AM radio tuned to a
specific frequency near the side with the I/O card it would generate
static that was supposed to be the sound of explosions.

Of course, the explosions were happening in a vacuum so maybe the
silence was accurate. :)
A computer with sound? Heresy! :)

There was some hardware that could be added to provide some sort of
video output, IIRC.

<checks wikipedia>

Aha, here it is:
http://en.wikipedia.org/wiki/MK14

and also here:

http://www.nvg.ntnu.no/sinclair/computers/mk14/mk14_photos.htm

The top-left picture looks like it has all its chips, so is the
fully-expanded version.

Oh, I forgot about the red reset button!
 

Andreas Waldenburger

mk said:
For the uncouth yobs, err, culturally-challenged:


(Four Yorkshiremen)


That's the definitive version. I mean, if you're going to talk vintage,
talk vintage.

/W
 

Roy Smith

"D'Arcy J.M. Cain said:
In case some of you youngsters think that there is a typo in the above,
no, he means a total of 640 bytes. In today's terms that would be
approx 0.0000006GB.

640 bytes! Man, did you have it easy. When I was a kid, we had to walk 10
miles uphill to school in the snow, oh, wait, wrong story. When I was a
kid, we had a D2 kit (http://tinyurl.com/yfgyq4u). We had 256 bytes of
RAM. And we thought we were lucky! Some of the other kids, if they wanted
to store a number, they had to scratch the bits into stone tablets with
their bare hands...

When's the last time you had a computer where the instruction manual warned
you against wearing silk or nylon clothing?
 

Gregory Ewing

MRAB said:
Mk14 from Science of Cambridge, a kit with hex keypad and 7-segment
display, which I had to solder together, and also make my own power
supply. I had the extra RAM and the I/O chip, so that's 256B (including
the memory used by the monitor) + 256B additional RAM + 128B more in the
I/O chip.

Luxury! Mine was a Miniscamp, based on a design published in
Electronics Australia in the 70s. 256 bytes RAM, 8 switches
for input, 8 LEDs for output. No ROM -- program had to be
toggled in each time.

Looked something like this:

http://oldcomputermuseum.com/mini_scamp.html

except that mine wasn't built from a kit and didn't look
quite as professional as that one.

It got expanded in various ways, of course ("hacked" would
be a more accurate word). Memory expanded to 1.5KB, hex keyboard
and display (built into an old calculator case), cassette tape
interface based on a circuit scrounged from another magazine
article (never quite got it to work properly, wouldn't go at
more than about 4 bytes/sec, probably because I used resistors
and capacitors salvaged from old TV sets). Still no ROM, though.
Had to toggle in a bootstrap to load the keyboard/display
monitor (256 bytes) from tape.

Somewhere along the way I replaced the CPU with a 6800 -
much nicer instruction set! (Note for newtimers -- that's
*two* zeroes, not three.)

During that period, my holy grail was alphanumeric I/O. I was
envious of people who wrote articles about hooking surplus
teleprinters, paper tape equipment and other such cool
hardware to their homebrew micros -- sadly, no such thing was
available in NZ.

Then one day a breakthrough came -- a relative who worked
in the telephone business (government-owned in NZ at the time)
managed to get me an old teleprinter. It was Baudot, not ASCII,
which meant uppercase only, not much punctuation, and an
annoyingly stateful protocol involving letters/figures shift
characters, but it was heaps better than nothing. A bit of
hackery, of both hardware and software varieties, and I got
it working. It was as noisy as hell, but I could input and
output ACTUAL LETTERS! It was AMAZING!

As a proof of concept, I wrote an extremely small BASIC
interpreter that used one-character keywords. The amount of
room left over for a program was even smaller, making it
completely useless. But it worked, and I had fun writing it.

One thing I never really got a grip on with that computer
was a decent means of program storage. Towards the end of its
life, I was experimenting with trying to turn an old 8-track
cartridge player into a random access block storage device,
using a tape loop. I actually got it to work, more or less,
and wrote a small "TOS" (Tape Operating System) for it that
could store and retrieve files. But it was never reliable
enough to be practical.

By that stage, umpteen layers of hackery using extremely
dubious construction techniques had turned the machine into
something of a Frankenstein monster. Calling it a bird's nest
would have been an insult to most birds. I wish I'd taken
some photos, they would have been good for scaring potential
future grandchildren.

My next computer was a Dick Smith Super 80 (*not* System 80,
which would have been a much better machine), Z80-based, built
from a kit. I had a lot of fun hacking around with that, too...
but that's another story!
 

Gregory Ewing

Steve said:
Puts me in mind of Mario Wolczko's early attempts to implement Smalltalk
on a VAX 11/750. The only bitmapped display we had available was a Three
Rivers PERQ, connected by a 9600bps serial line. We left it running at
seven o'clock one evening, and by nine am the next day it had brought up
about two thirds of the initial VM loader screen ...

A couple of my contemporary postgraduate students worked on
getting Smalltalk to run on an Apple Lisa. Their first attempt
at a VM implementation was written in Pascal, and it wasn't
very efficient. I remember walking into their room one day
and seeing one of them sitting there watching it boot, drawing
stuff on the screen v...e...r...y... s...l...o...w...l...y...

At least their display was wired directly to the machine
running the code. I hate to think what bitmapped graphics at
9600 baud would be like!
 

Gregory Ewing

D'Arcy J.M. Cain said:
Did you ever play Star Trek with sound effects?

Not on that machine, but I played a version on an Apple II
that had normal speaker-generated sounds. I can still
remember the sound that a photon torpedo (a # character IIRC)
made as it lurched its way drunkenly across the sector and
hit its target. Bwoop... bwoop... bwoop... bwoop... bwoop...
bwoowoowoowoowoop! (Yes, a photon torpedo makes exactly
five bwoops when it explodes. Apparently.)

I carried a listing of it around with me for many years
afterwards, and attempted to port it to various machines,
with varying degrees of success. The most successful port
was for a BBC Master that I picked up in a junk shop one
day.

But I couldn't get the sounds right, because the BBC's
sound hardware was too intelligent. The Apple made sounds
by directly twiddling the output bit connected to the
loudspeaker, but you can't do that with a BBC -- you
have to go through its fancy 3-voice waveform generating
chip. And I couldn't get it to ramp up the pitch rapidly
enough to make a proper photon-torpedo "bwoop" sound. :-(

I also discovered that the lovely low-pitched beep that
the original game used to make at the command prompt had
a lot to do with the resonant properties of the Apple
II's big plastic case. Playing a square wave through
something too high-fidelity doesn't sound the same at
all.

D'Arcy J.M. Cain also said:
I was never able to
get it to work but supposedly if you put an AM radio tuned to a
specific frequency near the side with the I/O card it would generate
static that was supposed to be the sound of explosions.

Of course, the explosions were happening in a vacuum so maybe the
silence was accurate. :)

Something like that might possibly happen for real. I could
imagine an explosion in space radiating electromagnetic
noise that would sound explosion-like if you picked it
up on a radio.

This might explain why the Enterprise crew could hear things
exploding when they shot them. They were listening in at RF!
 

Gregory Ewing

Steven said:
True, but one can look at "best practice", or even "standard practice".
For Python coders, using docstrings is standard practice if not best
practice. Using strings as comments is not.

In that particular case, yes, it would be possible to
objectively examine the code and determine whether docstrings
were being used as opposed to above-the-function comments.
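
For anyone following along, that distinction is mechanical. A
minimal sketch (the function names are hypothetical):

def with_docstring(x):
    """Return x times four."""     # a docstring: stored in __doc__,
    return x * 4                   # shown by help() and pydoc

# Return x times four.             # an above-the-function comment:
def with_comment(x):               # invisible to help() and pydoc,
    return x * 4                   # only visible in the source

def with_string_comment(x):
    y = x * 2
    "double it again"              # a string used as a comment: a no-op
    return y * 2                   # expression, the practice Steven means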

However, that's only a very small part of what goes to make
good code. Much more important are questions like: Are the
comments meaningful and helpful? Is the code reasonably
self-explanatory outside of the comments? Is it well
modularised, and common functionality factored out where
appropriate? Are couplings between different parts
minimised? Does it make good use of library code instead
of re-inventing things? Is it free of obvious security
flaws?

You can't *measure* these things. You can't objectively
boil them down to a number and say things like "This code
is 78.3% good; the customer requires it to be at least
75% good, so it meets the requirements in that area."

That's the way in which I believe that software engineering
is fundamentally different from hardware engineering.
 

MRAB

Gregory said:
Luxury! Mine was a Miniscamp, based on a design published in
Electronics Australia in the 70s. 256 bytes RAM, 8 switches
for input, 8 LEDs for output. No ROM -- program had to be
toggled in each time.

Looked something like this:

http://oldcomputermuseum.com/mini_scamp.html

except that mine wasn't built from a kit and didn't look
quite as professional as that one.
[snip]
By the standards of just a few years later, that's not so much a
microcomputer as a nanocomputer!

I was actually interested in electronics at the time, and it was such
things as the Mk14 that led me into computing.
 

python

Not to outdo you guys, but over here in the states, I started out with
a Radio Shack 'computer' that consisted of 10 slidable switches and 10
flashlight bulbs. You ran wires between the slidable switches to
create 'programs'. Wish I could remember what this thing was called - my
google-fu fails me. This was approx 1976 when Popular Science's ELF-1
with 256 bytes was quite a sensation.

Cheers,
Malcolm
 

Steven D'Aprano

Gregory Ewing said:
In that particular case, yes, it would be possible to objectively
examine the code and determine whether docstrings were being used as
opposed to above-the-function comments.

However, that's only a very small part of what goes to make good code.
Much more important are questions like: Are the comments meaningful and
helpful? Is the code reasonably self-explanatory outside of the
comments? Is it well modularised, and common functionality factored out
where appropriate? Are couplings between different parts minimised? Does
it make good use of library code instead of re-inventing things? Is it
free of obvious security flaws?

You can't *measure* these things. You can't objectively boil them down
to a number and say things like "This code is 78.3% good; the customer
requires it to be at least 75% good, so it meets the requirements in
that area."

That's the way in which I believe that software engineering is
fundamentally different from hardware engineering.


You are conflating two independent questions:

(a) Can we objectively judge the goodness of code, or is it subjective?

(b) Is goodness of code quantitative, or is it qualitative?

You can turn any qualitative measurement into a quantitative measurement
by turning it into a score from 0 to 10 (say). Instead of "best/good/
average/poor/crap" just rate it from 1 through 5, and now you have a
measurement that can be averaged and compared with other measurements.

The hard part is turning subjective judgements ("are these comments
useful, or are they pointless or misleading?") into objective judgements.
It may be that there is no entirely objective way to measure such things.
But we can make quasi-objective judgements, by averaging out all the
individual quirks of subjective judgement:

(1) Take 10 independent judges who are all recognised as good Python
coders by their peers, and ask them to give a score of 1-5 for the
quality of the comments, where 1 means "really bad" and 5 means "really
good". If the average score is 4 or higher, gain a point. If the average
score is 3 or lower, lose a point.

(2) Take 10 independent judges, as above, and rate the code on how self-
explanatory it is. An average score of 3 or higher gives a point; an
average of under 2 loses a point.

(Note that I'm more forgiving of non-self-explanatory code than I am of
bad comments. Better to have no comments than bad ones!)

And so on, through all the various metrics you want to measure.

If the total number of points exceeds some threshold, the software
passes, otherwise it fails, and you have a nice list of all the weak
areas that need improvement.

You can do fancy things too, like discard the highest and lowest score
from the ten judges (to avoid an unusually strict, or unusually slack,
judge from distorting the scores).
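
As a minimal sketch of that procedure in Python (the function
names and the pass threshold are assumptions; note that metric (2)
above uses different cut-offs, hence the parameters):

def metric_points(scores, gain_at=4.0, lose_at=3.0):
    """Turn one metric's judge scores (1-5) into +1/0/-1 points,
    discarding the highest and lowest score first."""
    trimmed = sorted(scores)[1:-1]       # drop one outlier at each end
    avg = sum(trimmed) / len(trimmed)
    if avg >= gain_at:
        return 1
    if avg <= lose_at:
        return -1
    return 0

def passes(scores_by_metric, threshold=0):
    """scores_by_metric maps a metric name to its ten judges' scores."""
    total = sum(metric_points(s) for s in scores_by_metric.values())
    return total >= threshold

# e.g. passes({"comments": [4, 5, 4, 4, 3, 5, 4, 4, 2, 5],
#              "self-explanatory": [3, 3, 4, 2, 3, 3, 4, 3, 3, 2]})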

If this all seems too expensive, then you can save money by having fewer
judges, perhaps even as few as a single code reviewer who is trusted to
meet whatever standards you are hoping to apply. Or have the judges rate
randomly selected parts of the code rather than all of it. This will
severely penalise code that isn't self-explanatory and modular, as the
judges will not be able to understand it and consequently give it a low
score.

Of course, there is still a subjective component to this. But it is a
replicable metric: any group of ten judges should give quite similar
scores, up to whatever level of confidence you want, and one can perform
all sorts of objective statistical tests on them to determine whether
deviations are due to chance or not.

To do all this on the cheap, you could even pass it through something
like PyLint, which gives you an objective (but not very complete)
measurement of code quality.

The real problem isn't that defining code quality can't be done, but that
it can't be done *cheaply*. There are cheap methods, but they aren't very
good, and good methods, but they're very expensive.
 

Gregory Ewing

Steven said:
(a) Can we objectively judge the goodness of code, or is it subjective?

(b) Is goodness of code quantitative, or is it qualitative?

Yes, I'm not really talking about numeric vs. non-numeric,
but objective vs. subjective. The measurement doesn't have
to yield a numeric result, it just has to be doable by some
objective procedure. If you can build a machine to do it,
then it's objective. If you have to rely on the judgement of
a human, then it's subjective.
But we can make quasi-objective judgements, by averaging out all the
individual quirks of subjective judgement:

(1) Take 10 independent judges who are all recognised as good Python
coders by their peers, and ask them to give a score of 1-5 for the
quality of the comments...

Yes, but this is an enormous amount of effort to go to, and
at the end of the day, the result is only reliable in a
statistical sense.

This still seems to me to be qualitatively different from
something like testing the tensile strength of a piece of
steel. You can apply a definite procedure and obtain a
definite result, and no human judgement is required at all.
 

Gregory Ewing

MRAB said:
By the standards of just a few years later, that's not so much a
microcomputer as a nanocomputer!

Although not quite as nano as another design published
in EA a couple of years earlier, the EDUC-8:

http://www.sworld.com.au/steven/educ-8/

It had a *maximum* of 256 bytes -- due to the architecture,
there was no way of addressing any more. Also it was
divided into 16-byte pages, with indirect addressing
required to access anything in a different page from
the instruction. Programming for it must have been
rather challenging.

As far as I know, the EDUC-8 is unique in being the
only computer design ever published in a hobby magazine
that *wasn't* based on a microprocessor -- it was all
built out of 9000 and 7400 series TTL logic chips!
 

D'Arcy J.M. Cain

Gregory Ewing said:
Although not quite as nano as another design published
in EA a couple of years earlier, the EDUC-8:

Byte Magazine once published plans for building a computer that had one
instruction. I believe it was a six bit address so that would make it
a max of 64 bytes. If I recall, the single instruction was "SUBTRACT
AND JUMP ON CARRY." Now that's nano!
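
A machine like that is tiny to emulate, too. A minimal sketch of a
subtract-and-branch one-instruction computer in Python (the
three-operand instruction layout is an assumption; real designs
varied):

def run(mem, pc=0):
    """Each instruction is three cells (a, b, target): compute
    mem[b] -= mem[a], and jump to target if the subtraction
    borrowed (went negative); otherwise fall through."""
    while 0 <= pc <= len(mem) - 3:
        a, b, target = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = target if mem[b] < 0 else pc + 3
    return mem

Halting is done by jumping out of range; in 64 bytes you would
pack the operands far more tightly than a Python list does, of
course.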
 

Albert van der Horst

Gregory Ewing said:
Although not quite as nano as another design published
in EA a couple of years earlier, the EDUC-8:

http://www.sworld.com.au/steven/educ-8/

It had a *maximum* of 256 bytes -- due to the architecture,
there was no way of addressing any more. Also it was
divided into 16-byte pages, with indirect addressing
required to access anything in a different page from
the instruction. Programming for it must have been
rather challenging.

As far as I know, the EDUC-8 is unique in being the
only computer design ever published in a hobby magazine
that *wasn't* based on a microprocessor -- it was all
built out of 9000 and 7400 series TTL logic chips!

There was the 74 computer in Elektuur (now Elektor).
That was quite a respectable computer, built (you guessed it)
from 74-series chips. How many were built, I don't know.

Groetjes Albert
 

Albert van der Horst

I definitely remember that old MS-DOS programs would treat
Ctrl-Z as an EOF marker when reading a text file and would
terminate a text file with a Ctrl-Z when writing one.

I don't know if that was because the underlying filesystem
still did everything in blocks or if it was because those
MS-DOS programs were direct ports of CP/M programs. I would
have sworn that the original MS-DOS file API was FCB-based and
worked almost exactly like CP/M. IIRC, the "byte stream" API
showed up (in the OS) several versions later. The byte-stream
API was implemented by many compiler vendors' C libraries on
top of the block-oriented FCB API.

My programming reference manual for MS-DOS 6.0 (1993)
describes the FCB functions as "superseded" (not obsolete or obsolescent).
It states:
"A programmer should not use a superseded function except to
maintain compatibility with versions of MS-DOS earlier than
version 2.0."

FCB did *not* support paths, but you could access the current
directory.
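
That Ctrl-Z convention is easy to demonstrate from the Python
side. A minimal sketch of reading a file the way those DOS-era
text modes did (the code page choice is an assumption):

def read_dos_text(path):
    """The first Ctrl-Z byte (0x1A, SUB) marks end-of-file;
    anything after it is padding, a leftover from file sizes
    being tracked in whole blocks rather than bytes."""
    with open(path, "rb") as f:
        data = f.read()
    eof = data.find(b"\x1a")
    if eof != -1:
        data = data[:eof]
    return data.decode("cp437")    # the usual US DOS code page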

Groetjes Albert
 
