superseded by c++?

Richard Bos

cr88192 said:
that is the issue, some people define OO in some overly narrow way (must
include classes, instances, inheritance, ...).

now, not everyone agrees with this set of definitions...

Indeed, and Paul's plusplusomania is showing again, since the language
I've heard described as "not really OO, because it doesn't implement
feature X" is not ECMAScript, but C++.
(Oh, and he's clearly never worked with Adobe InDesign.)

Richard
 
Paul Hsieh

Interesting, I'm working on such a language at the moment. My last effort
sort of worked but being still statically linked lacked the spontaneity of a
proper dynamic language.

Well, if you are working on a language, may I give you my opinion?
Please stay away from pure dynamic typing. It seems like a good idea,
and can make initial coding easier, but run-time type conflicts are a
class of error that is impossible to test for exhaustively yet is
easily handled by strong typing. Implicit typing is a much better idea.
This way you can remain strongly typed, while still saving a lot of
the redundant type declarations. I think that C++ is trying to move
in that direction, but will not be able to escape the complications
from its C heritage.

I was thinking about doing a language design myself based on this
idea, but I don't really have the time and energy to see it all the
way through. My idea was to, in part, revive BASIC, by leveraging
explicit variable suffix naming conventions: a % suffix is a number, a
$ suffix is a string, and I would add # for associative arrays
(arbitrary mapped hash tables). The idea would be to have a type
system equivalent to Lua's except for strong typing by virtue of the
variable suffixes. (I thought it would be a good idea to have some
nice features like pervasive serialization, but that makes it hard to
encode functions or system objects, such as an open file or a thread,
as first-class values.)

But so long as the associative array was not an explicitly described
type, you still had runtime type conflict potential. Perhaps it
could be saved by having a grammar of suffixes for variable names,
and some sort of declarations like:

#$ ::= { $ -> % }
#@ ::= { $ -> #$, % -> %, @(% <- setradius(%),% <- getarea()) }

where % means integer, @ means methods, and #$ and #@ are user-defined
suffixes (and types). The idea, then, would be that a variable that
had no suffix would, somehow or another, be implicitly typed. If a
function declaration had ambiguously defined parameters, then the
function becomes a template; the parameter types would be determined
at the call site. (Ambiguous or conflicting local variable types
would lead to a compile-time error.) The idea would be to make sure
it was directly translatable to a C back-end. I also had a lot of
ideas for the optimizer (which for this language, it would be to infer
that a number was an integer, for example, or that a hash was really a
vector, or a fixed length array, or that a string was static, etc.)
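
For what it's worth, the generated C for such a templated function
might look something like this (a rough sketch; all the names are
invented for illustration, and a real back-end would of course do a
lot more):

/* hypothetical source:  scale(v, k) returns v * k, with no suffixes,
   so the parameter types are determined per call site */
#include <stdio.h>

static double scale__num(double v, double k) { return v * k; }  /* generic number */
static int    scale__int(int v, int k)       { return v * k; }  /* inferred integer */

int main(void)
{
    printf("%g\n", scale__num(2.5, 4.0));  /* call site with number arguments */
    printf("%d\n", scale__int(3, 7));      /* call site with integer arguments */
    return 0;
}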

My goals would be: Real type checking, but easy to program due to
implicit typing and automatic templating, and truly comparable to C in
terms of performance by way of very good translation to C code. But I
haven't really thought it all the way through.
All I can get out of OO are examples of things I've been able to do in
dynamic/interpreted languages for years. Maybe a bit of nice syntax and the
ability to package things neatly.

Hmmm ... as must be clear from my posts here, the ability to use the
type system to essentially have an "algebra on interfaces" is what
makes OO have some sort of relevance to me. Hiding an implicit "this"
pointer, or associating certain functions to certain data, just
doesn't do anything for me. But inheritance? It's the one real reason
to use OO as far as I am concerned.
Getting back to C (this is c.l.c), that's great as an implementation/target
language. For that purpose it should stay simple (ideally get even simpler
and shed baggage, but that's not going to happen).

Actually C is not a really great target language, because I have never
seen a C compiler that is callable as a library. So the possibilities for
introspection are basically taken away from you, unless you demand
that your deployed software also have access to a local C compiler
(which generally cannot be relied upon). A lot of scripting languages
have a kind of x = dostring("a = b + c; return a;") kind of
functionality, which basically demand that you have a built-in
interpreter, but you are denied the possibility of compiling it
because the compiler is not a callable library. For this and other
reasons, scripting languages typically target a custom bytecode
machine.
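
(For illustration, the dostring() pattern with an embedded interpreter
looks roughly like this using Lua's C API, since Lua came up above; a
minimal sketch, linked against the Lua library, with error handling
kept short. The globals and values here are made up for the example.)

#include <stdio.h>
#include <lua.h>
#include <lualib.h>
#include <lauxlib.h>

int main(void)
{
    lua_State *L = luaL_newstate();      /* create an interpreter instance */
    luaL_openlibs(L);                    /* load the standard libraries */

    /* push inputs as globals, then evaluate a string of source at run time */
    lua_pushnumber(L, 2); lua_setglobal(L, "b");
    lua_pushnumber(L, 3); lua_setglobal(L, "c");

    if (luaL_dostring(L, "a = b + c; return a") != 0)
        fprintf(stderr, "error: %s\n", lua_tostring(L, -1));
    else
        printf("result = %g\n", lua_tonumber(L, -1));   /* prints 5 */

    lua_close(L);
    return 0;
}

No external C compiler is needed at run time, but the price is that the
string runs on Lua's bytecode machine rather than as native code.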
 
Bart C

Paul said:
Well, if you are working on a language, may I give you my opinion?
Please stay away from pure dynamic typing. It seems like a good idea,

OK. It hits performance anyway.
... But inheritance? It's the one real reason
to use OO as far as I am concerned.

If I put in any OO stuff, it will be on my terms, i.e. in a form I can fully
understand.
Actually C is not a really great target language because I have never
seen one callable as a library.

Well I've never used C yet for such a purpose; as a target language I don't
see any problems. I don't quite understand your library reference; do you
mean a function to compile a string of language source? I doubt that will
be a requirement. If it's ever used by anyone else then yes, a C compiler
will be required.
... A lot of scripting languages
have a kind of x = dostring("a = b + c; return a;") kind of
functionality, which basically demand that you have a built-in
interpreter, but you are denied the possibility of compiling it
because the compiler is not a callable library. For this and other
reasons, scripting languages typically target a custom bytecode
machine.

Generating bytecode is very attractive, but the interpreter would have to be
in C (for reasons of portability). The main problem is whether I can get the
required performance in pure C, no matter how good the optimiser.

And yes, in this case it would be easy to compile a string (in my language)
using a built-in interpreter (although more likely to be from a file). No C
compiler is needed, only the C runtime built into the binary.
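
The core dispatch loop can stay quite small in portable C. A minimal
sketch (the opcodes here are made up purely for illustration; a real VM
adds a decoder, a proper value representation and so on):

#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

static void run(const int *ip)
{
    int stack[256];
    int sp = 0;                                        /* stack pointer */

    for (;;) {
        switch (*ip++) {
        case OP_PUSH:  stack[sp++] = *ip++;                break;
        case OP_ADD:   sp--; stack[sp-1] += stack[sp];     break;
        case OP_MUL:   sp--; stack[sp-1] *= stack[sp];     break;
        case OP_PRINT: printf("%d\n", stack[sp-1]);        break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    /* compute (2 + 3) * 4 and print it */
    int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
                      OP_PUSH, 4, OP_MUL, OP_PRINT, OP_HALT };
    run(program);
    return 0;
}

Whether a plain switch like this is fast enough is exactly the open
question; computed-goto dispatch helps, but that is a GCC extension
rather than pure C.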
 
cr88192

Richard Bos said:
Indeed, and Paul's plusplusomania is showing again, since the language
I've heard described as "not really OO, because it doesn't implement
feature X" is not ECMAScript, but C++.
(Oh, and he's clearly never worked with Adobe InDesign.)

yeah.

I think the usual idea for C++ is the use of MI in place of interfaces,
interfaces being a feature thought up to deal with the lack of MI. IMO, the
difference is more one of pedantry than practice though.


I am not too fussy really though, my view of OO being more like this:
we have "objects", which can contain value slots and methods;
we have some means of extending objects;
objects have their own independent state and the concept of "identity".

how these are done exactly, is not too important IMO.

so, we can do OO in C, because we can do these things ourselves.
but, C is not itself OO, because these are not built into the language.
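
for example, something like this (names made up, just a sketch): value
slots in a struct, methods as function pointers, identity via the
object's address, and an explicit "self" pointer where a language would
hide "this":

#include <stdio.h>
#include <stdlib.h>

typedef struct Point Point;
struct Point {
    double x, y;                                      /* value slots */
    void (*move)(Point *self, double dx, double dy);  /* method slot */
};

static void point_move(Point *self, double dx, double dy)
{
    self->x += dx;
    self->y += dy;
}

static Point *point_new(double x, double y)
{
    Point *p = malloc(sizeof *p);
    if (p) { p->x = x; p->y = y; p->move = point_move; }
    return p;            /* each object carries its own state and identity */
}

int main(void)
{
    Point *p = point_new(1.0, 2.0);
    if (!p) return 1;
    p->move(p, 3.0, 4.0);                 /* explicit "this" pointer */
    printf("(%g, %g)\n", p->x, p->y);     /* prints (4, 6) */
    free(p);
    return 0;
}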


now, class/instance and prototypes IMO mostly just represent 2 somewhat
different views.
I personally like prototype OO more, but also would like something that
plays well with static typing (though in a pure form it is overly
constraining; in a looser form it is, in general, fairly useful to have).

in my case, if I were to be stuck with a "pure" language, I would probably
have to pick a dynamic one.

for an impure language, I would probably go with a mostly static one.

lucky for us, such "pure" languages are fairly rare (mostly academic...).
 
cr88192

Bart C said:
Interesting, I'm working on such a language at the moment. My last effort
sort of worked but being still statically linked lacked the spontaneity of
a proper dynamic language.

mine tended to be the inverse, namely, bytecoded languages, with a mostly
dynamic core, but relying on type inference and similar to squeeze a lot
more speed out of them.
All I can get out of OO are examples of things I've been able to do in
dynamic/interpreted languages for years. Maybe a bit of nice syntax and
the ability to package things neatly.

yeah, that is the issue.

mainstream languages gradually add these features, piece by piece, but have
this bad habit of hackishly kludging on each and every feature, and then use
the powers of marketing to make it sound like they have just discovered how
to turn lead into gold or feces into caviar or something...

not only this, but they reimplement these old and well known features, in an
often ugly, hackish, and weak manner, and then come up with completely new
terminology for it.


but, dynamic languages have their bad points as well.

Getting back to C (this is c.l.c), that's great as an implementation/target
language. For that purpose it should stay simple (ideally get even simpler
and shed baggage, but that's not going to happen).

well, at present, I am gradually extending C by adding compiler extensions.
in general, where reasonable, I stick close to things that exist.

many features are ripped fairly directly off of GCC.

another set of features was originally self-innovated, but in its current
form looks very close to GLSL (I have a subset/superset of GLSL style vector
support, builtin quaternion support, ...).


I am currently considering adding lexical closures (basically, as inner
functions with a special keyword).

int (*foo(int x))(int y)
{
    __lambda int bar(int y) { return(x+y); }
    return(bar);
}

now, for implementation reasons, these closures will have a certain odd
quirk (aka: don't go generating them in loops...), but this could probably
be resolved later.
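
a rough sketch of one possible lowering to plain C (names invented; this
is one common approach, not necessarily the exact implementation here):
copy the captured variable into a heap-allocated environment, and
represent the closure as an environment pointer plus a code pointer:

#include <stdio.h>
#include <stdlib.h>

struct bar_env { int x; };                         /* captured variables */

static int bar_code(struct bar_env *env, int y)    /* the inner function body */
{
    return env->x + y;
}

struct closure {
    struct bar_env *env;
    int (*code)(struct bar_env *, int);
};

static struct closure foo(int x)
{
    struct bar_env *env = malloc(sizeof *env);
    if (env) env->x = x;                           /* capture x by value */
    struct closure c = { env, bar_code };
    return c;
}

int main(void)
{
    struct closure add5 = foo(5);
    if (add5.env) {
        printf("%d\n", add5.code(add5.env, 3));    /* prints 8 */
        free(add5.env);
    }
    return 0;
}

an allocation per closure is the usual cost of this approach.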

at present I don't plan on implementing anonymous functions/closures though
(but, this is mostly just a syntactic issue...).


....
 
Ben Bacarisse

CBFalconer said:
Underlined above.

OK, that was my guess, but since you did not provide any evidence I
wanted to be sure. My statement was vague, but I just did not want to
leave the idea in the air that to do proper generics one must forfeit
strict compile-time type checking.
Well, when I was using Pascal solidly, I never missed those
abilities.

Ah, so your argument is not that it is hard to get right but that it
is not needed at all. The bit you underlined was not claiming that
generic programming is important (though I do think it is a Good
Thing); it was claiming that to get it right requires a relatively
complex type system.

This is now way off-topic. We should move this to comp.programming.
 
David Thompson

#if OT == FURTHER


(LOC _is_ a fairly good metric of code _size_, though not the only
one. It's just not a good metric of _value_ or _usefulness_ or any of
the other things we actually want like quality or other ilities.)
I think that may have been the actual point of the original claim -
COBOL (at some unspecified time) had the largest LOC count precisely
because it is an absurdly bloated language.

I don't think that was the point; I think the point is that COBOL was
for several _decades_ used for _an awful lot_ of programs. (I mean
that in the en_US sense of extremely many, not the literal sense of 'a
group of programs that are very bad, or even terrifying'. <G>)

And I don't agree it's inherently bloated. It is somewhat more verbose
than C, but so are most other 3GLs, because C was designed to be
terse, or at least to allow and somewhat encourage brevity. _For the
type of applications it was designed for_ (an important limitation)
competently written COBOL code can be reasonably short and clear.

I think there are two big true things that contributed to this
reputation (in addition to snarkiness and oneupmanship).

One technical one is that there is a fairly big syntactic (source)
overhead per program-unit; this makes it more difficult to use large
numbers of small program-units, or in modern jargon it discourages
factoring. This in turn tends to encourage cut-and-paste programming,
where the same code (or nearly the same) is unnecessarily duplicated.
With sufficient skill and (self or group) discipline this effect can
be substantially reduced -- but that isn't always done; see next.

Another organizational or perhaps social one is that COBOL is pretty
'safe'. There are basically no pointers, and arrays and strings can be
and usually are bounds-checked. Many other errors, like file EOF and
arithmetic overflow (in most cases), are either handled cleanly (which
is not always the same as correctly!) or reported clearly. As a result
programs can be coded successfully -- or nearly so -- by, to put it as
politely as I can, somewhat less clever and knowledgeable (and thus
typically less expensive) people; one common term for this used to be
'code monkey(s)'. And IME such people have a deplorable tendency
toward 'cargo-cult' programming, which tends to incorporate a lot of
unnecessary, wasteful, and even counterproductive steps.

C has the 'advantage', or at least characteristic, that if you are
able to get nontrivial programs to work _at all_ you are likely
intelligent enough to get them to work correctly, for the bulk of the
sort of problems and applications that most people want programmed.
(Though there are still specific domains and applications for which
just being a competent C programmer is not enough.)

(Also, quite a lot of COBOL code was actually generated by various
tools, and was not intended to be readable and wasn't; but the same is
true now for C also. In these cases I don't count the generated code
as really 'source', but rather the input(s) to the tool(s).)
I've been told that one of
the key design goals of COBOL is to allow developers to write code that
a manager who knows nothing about programming could read and understand.

I'm not sure about 'knows _nothing_', but certainly the ability of
someone who is not a 'real' (full-time) programmer to get at least a
rough idea by reading the code was an intentional and important goal.
Remember that at the time COBOL was created essentially the only
competitors were machine and assembly languages generally
incomprehensible to 'lay' people, and FORTRAN, which was designed to
be used by scientists and engineers -- and often described as
unreadable by anyone else, although I think a substantial factor was
that the algorithms being programmed were intellectually difficult,
rather than the language itself being inherently so. The idea that
(more) people could actually understand (part of) what the computer
was doing was novel and very attractive, I think unsurprisingly so.

- formerly david.thompson1 || achar(64) || worldnet.att.net
 
James Kuyper

David said:
#if OT == FURTHER

On Fri, 04 Jan 2008 12:08:38 GMT, James Kuyper


I don't think that was the point; I think the point is that COBOL was
for several _decades_ used for _an awful lot_ of programs. (I mean
that in the en_US sense of extremely many, not the literal sense of 'a
group of programs that are very bad, or even terrifying'. <G>)

I'm sure that was a significant portion of the reason for COBOL's large
line count. It was designed by government committee, and mandated in
many contexts for use on many government contracts; that would be more
than sufficient to ensure the popularity of even a mediocre language.

....
And I don't agree it's inherently bloated. It is somewhat more verbose
than C, but so are most other 3GLs, because C was designed to be
terse, or at least to allow and somewhat encourage brevity. _For the
type of applications it was designed for_ (an important limitation)
competently written COBOL code can be reasonably short and clear.

I learned COBOL nearly three decades ago, without ever having used it
outside of the class where I learned it; my memory has gone hazy, and
I'm sure it's evolved since then. Therefore, it's entirely possible that
I have an inadequate basis for judging the current language. I just
remember writing lines like

MULTIPLY UnitsSold BY UnitPrice GIVING TotalSold.

Forgive me if I've messed up on the details of syntax; I hope I've at
least got the basic principle right. I don't think that's merely
"somewhat more verbose" than

TotalSold = UnitsSold*UnitPrice;

That's a LOT more verbose.

....
....
rather than the language itself being inherently so. The idea that
(more) people could actually understand (part of) what the computer
was doing was novel and very attractive, I think unsurprisingly so.

I'm not knocking the idea; just explaining the concept. To professional
programmers, it sounds like a joke; to managers it's a very important
consideration, and rightly so. I'm not sure it's important enough to
justify choosing COBOL over C, but it's certainly important.
 
CBFalconer

James said:
.... Cobol discussion snip ...

MULTIPLY UnitsSold BY UnitPrice GIVING TotalSold.

Forgive me if I've messed up on the details of syntax; I hope
I've at least got the basic principle right. I don't think that's
merely "somewhat more verbose" than

TotalSold = UnitsSold*UnitPrice;

That's a LOT more verbose.

As compared to:

* TotalSold = * UnitsSold * UnitPrice;

which leaves the newbie totally confused as to the meaning of *. I
could go further. But, ya gotta speak da language.
 
