The way to read STL source code


Stefan Ram

Rod Pemberton said:
learning how that language is converted to assembly

Even given a specific source language and a specific
assembly language, there is no specific way »how that
[source ]language is converted to assembly«, this
instead depends on the compiler[ author].

Moreover, this will not be possible when the language
to be learned is itself an assembly language.
 

Ebenezer

Newsgroups: comp.lang.c++,comp.lang.misc
Followup-To: comp.lang.misc



  Reading others' code /is/ the way to improve the skills:

    - Of course, one should select code written by masters,
      not code written by arbitrary authors. For example,
      in the realm of C++, one might read boost source code.

I suggest "chewing the meat and spitting the bones" with Boost.
Some of it is great, but not all of it.


Brian
Ebenezer Enterprises
http://webEbenezer.net
 

MikeWhy

Stefan Ram said:
The same could be said about reading a programming book.

.... as well as blogs and (no surprise here) usenet posts.
The same could be said about judging a programming book.

This is a different point. The gain in reading books and others' code is
broader exposure to ideas, approaches, and idioms. We aren't all so learned
that only the true masters have something of value to teach us.

Your cup overfloweth, Grasshopper.
 

88888 Dihedral

On Sunday, 19 February 2012 at 12:02:27 UTC+8, Ebenezer wrote:
I suggest "chewing the meat and spitting the bones" with Boost.
Some of it is great, but not all of it.


Brian
Ebenezer Enterprises
http://webEbenezer.net

I am interested only in the vector and the map parts, which
can support the desired auto-size management.
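
A minimal C++ sketch of that automatic size management, using nothing beyond
the standard containers (the numbers are only illustrative):

  #include <iostream>
  #include <map>
  #include <vector>

  int main()
  {
      std::vector<int> v;            // grows as needed; no manual resizing
      for (int i = 0; i < 1000; ++i)
          v.push_back(i);            // capacity is managed by the container

      std::map<int, int> m;          // balanced tree; nodes allocated on demand
      m[42] = 7;                     // inserting a key allocates its node

      std::cout << v.size() << ' ' << m.size() << '\n';   // prints "1000 1"
  }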

For sets of cardinals within 16 bits, I'll use my own
library.

For sparse binary trees of n levels,
I'll use the heap sort style map.

But I solve problems by abstract sets, graphs and trees.
 

Rod Pemberton

Stefan Ram said:
The same could be said about reading a programming book.


The same could be said about judging a programming book.

It depends entirely on the book. Is the book teaching how to program a
certain language or is the book teaching only language concepts? If you
can't tell whether a book is teaching how to program a language, teaching
just the concepts of a specific language, or teaching generic programming
concepts, you'll never make it as a programmer.
(Macchiavelli wrote about this, see the last paragraph in

http://ebooks.adelaide.edu.au/m/machiavelli/niccolo/m149p/chapter23.html )

Macchiavelli also wrote this, see the "Because there are three ..." section:
http://ebooks.adelaide.edu.au/m/machiavelli/niccolo/m149p/chapter22.html
From which source?

E.g., Harbison and Steele's "C: A Reference Manual" is one of the best books
ever written on C and doesn't teach a single thing about programming in C...

Basically, you want to learn the language elements from one which does not
teach how to program. You want one that is purely technical in nature.
Sometimes, these are referred to as programmer's reference manuals. Good
programming concepts should not be required by the language: structured
programming, object-oriented programming. If the language doesn't allow the
programmer to produce junk or "spaghetti code" or use pointers, the language
is most likely worthless. It's up to the programmer to put the puzzle
pieces together using pure programming concepts they've learned from other,
non-language specific, sources.
Starting to code without having ever read example
programs written by others?

Why not? I did.

Many years ago, I taught myself BASIC followed by 6502 assembly. I did so
without *any* books on programming. I only had one trivial programming
class prior to that, in LOGO. BASIC and 6502 were learned entirely from
programmer's reference manuals which did not teach any programming concepts,
like structured code, etc. After learning those, I had Pascal in High
School, Fortran at University, followed by self-taught C, and a PL/I variant
learned entirely on the job, etc.

Personally, I believe that individuals who have not learned how to program
in assembly at a young-ish age, i.e., early teens or pre-teen, never fully
grasp certain programming concepts, like the concept of null. Those who've
learned later tend to have problems with understanding how the OOP, or
dynamically typed, or HLL language they prefer doesn't generate the assembly
they expected. Assembly is the foundation upon which programming
languages are built. As a professional programmer, you must understand what
the high-level code is being converted into. If the compiled code doesn't
do what the high-level code was "supposed" to do, it's your job that's on
the line. You can argue that the C or C++ compiler is not compliant with
the ANSI or ISO specifications all day long, but that won't get you anywhere
with an angry employer.
So one also has to learn at least one assembly language;
which processor do you suggest?

6502. Maybe Z80 ... You had to ask. Many of the fundamental concepts are
still present in microprocessors that were developed in that era, e.g., x86.
Languages that flourished upon 8-bit micros also contain many of the needed
concepts, e.g., C and Forth. Personally, I wouldn't start them out with a
RISC design. Unfortunately, that goes against most current collegiate level
thinking. University professors have embraced the idea that "RISC is the
future", even though it's fundamentally flawed. Unfortunately, modern
languages and architectures are obsoleting the ability to learn some of
those concepts, but those concepts are still needed to use HLLs, e.g., 8-bit
bytes, contiguous memory, etc.


Rod Pemberton
 

Rod Pemberton

Stefan Ram said:
Rod Pemberton said:
learning how that language is converted to assembly

Even given a specific source language and a specific
assembly language, there is no specific way »how that
[source ]language is converted to assembly«, this
instead depends on the compiler[ author].

I believe that's partially true, but not entirely. Many assembly languages
are very similar. Microprocessors standardized on certain concepts around
1974 or so, e.g., 8-bit bytes for ASCII and EBCDIC, contiguous memory,
pointer size equivalent to integer size, typeless integers, etc. Many HLL
compilers, e.g., for C, produce nearly the same code, adjusted for
different platforms. But, it's still the programmer's responsibility to
understand what that HLL code is producing for their specific tools. That
usually just means learning what the tools produce.
Moreover, this will not be possible, when the language
to be learned already is an assembly language.

Mr. Rice's examples were all HLLs.


Rod Pemberton
 

Rod Pemberton

Stefan Ram said:
But we should not forget that there are more man-hours in
maintenance programming than in programming a new program
from scratch.
True.

That means the typical work day of a
programmer is not writing his own code (as you wrote »your
code« above), but maintaining code written by someone else.
True.

And the first step to do this properly is to read and to
understand that code that is to be modified.

No.

Again, you're biasing a novice's understanding with code they don't
understand which may be horrid. The first step is for the person to become
proficient in the language. They don't have to be a master, but they should
be strong in the language. Once proficient, reading the code of others is
no longer a challenge and they can recognize the worth of the code they've
read. If they're able to recognize that it's worthless, they can correct
it. If they can't, they'll propagate that garbage forever via cut-n-paste.
So, training to
read code also is just a preparation for the very activity
that will most often be exercised when being a programmer.

No.


Rod Pemberton
 

Dmitry A. Kazakov

Why not? I did.

So did I, but programming is not what it was 30 years ago. It is less
coding and much more analysis, design, reuse, testing, integration.
Learning must focus on the most important cases and patterns. These cannot be
identified without help, at least without wasting much time. Both starting
from scratch and reading code look poor here.
Personally, I believe that individuals who have not learned how to program
in assembly at a young-ish age, i.e., early teens or pre-teen, never fully
grasp certain programming concepts, like the concept of null.

Huh, I would rather prevent students from being spoiled by programming.
They should learn fundamental things first, e.g. math. Unfortunately, it is
like sex, they get exposed anyway. :)-))
Those who've
learned later tend to have problems with understanding how the OOP, or
dynamically typed, or HLL language they prefer doesn't generate the assembly
they expected. Assembly is the foundation upon which programming
languages are built. As a professional programmer, you must understand what
the high-level code is being converted into.

I doubt that an exposure to some machine code is the right tool for gaining
such understanding. And why did you choose specifically the machine code?
Why not the micro code? Why not the states of the pipelines and the cache?
Why not individual transistors? Why not electrons, atoms and fields?

Another objection is that the machine code is not what is really going on
unless the program is some number-crunching beast. Most of the programming
issues regarding time, resources, and bugs lie in the software
components, synchronization, protocols, hardware I/O, OS calls: the stuff
you cannot learn looking at the machine code.
If the compiled code doesn't
do what the high-level code was "supposed" to do, it's your job that's on
the line. You can argue that the C or C++ compiler is not compliant with
the ANSI or ISO specifications all day long, but that won't get you anywhere
with an angry employer.

Working around compiler bugs rarely requires looking at the generated code.
I don't remember a single case where I needed that.
6502. Maybe Z80 ... You had to ask.

MACRO-11 (PDP-11 or VAX) or at least 68k.
Personally, I wouldn't start them out with a RISC design.

Sure. RISC would prove the weakness of your concept. :)-))
 

Marco van de Voort

["Followup-To:" header set to comp.lang.misc.]
But we should not forget that there are more man-hours in
maintenance programming than in programming a new program
from scratch. That means the typical work day of a
programmer is not writing his own code (as you wrote ?your
code? above), but maintaining code written by someone else.
And the first step to do this properly is to read and to
understand that code that is to be modified. So, training to
read code also is just a preparation for the very activity
that will most often be exercised when being a programmer.

Following that logic, you shouldn't look at "master's" code either, since
the code you will be maintaining most likely won't be done by a "master".

Finding tricks to analyse dodgy code won't be learned by looking at perfect
code.
 

Stanley Rice

Again, you're biasing a novice's understanding with code they don't
understand which may be horrid. The first step is for the person to
become proficient in the language.

How can one become proficient in a language without reading others' code? Maybe reading others' code is the first step while learning a language; after all, we have to read some code snippets and some demos that explain functions written in the language.

I recall that when I began to learn to play basketball, I would watch some videos and NBA games, watching others play basketball, including some of my friends. Of course they are not professional basketball players, but watching them play not only gave me fun, but also improved my skills, or at least my impressions of the game.
 

Bo Persson

Ebenezer said:
I suggest "chewing the meat and spitting the bones" with Boost.
Some of it is great, but not all of it.

Yes, some libraries are so full of configuration macros and workarounds
for ancient compilers that you will have a hard time even finding some
code.

Been there, done that.


Bo Persson
 

Dombo

On 18-Feb-12 2:08, Stanley Rice wrote:
Another reason that I want to read the source code is that I want
to improve my understanding of data structures and algorithms at the
same time. STL covers nearly all the basic data structures and algorithms.

Doing that by reading or single stepping through the STL code is about
the most convoluted way to accomplish that. For a variety of reasons the
code you will find in a typical standard library implementation is far
from straightforward; there is way too much stuff in it that will
distract you from what you really want to learn.

Studying code is not a good way to learn the fundamentals; you are
better off with a good book about data structures and algorithms, and as
an exercise try to implement some by yourself to see if you really grasp
them. After that you might be in a better position to actually
understand the STL code, if the need ever arises.
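
Such an exercise might be, for instance, a toy growable array - a minimal and
deliberately incomplete sketch (the class name and details are only
illustrative), just to check that the grow-and-copy idea behind std::vector
has been understood:

  #include <cstddef>

  // Toy growable array: doubles its storage when full, like a much
  // simplified std::vector<int>. No iterators, no exception safety;
  // copying is disabled and left as a further exercise.
  class IntArray {
      int*        data_ = nullptr;
      std::size_t size_ = 0;
      std::size_t cap_  = 0;
  public:
      IntArray() = default;
      IntArray(const IntArray&) = delete;
      IntArray& operator=(const IntArray&) = delete;
      ~IntArray() { delete[] data_; }

      void push_back(int x) {
          if (size_ == cap_) {                       // out of room: grow
              std::size_t newCap = cap_ ? cap_ * 2 : 8;
              int* p = new int[newCap];
              for (std::size_t i = 0; i < size_; ++i)
                  p[i] = data_[i];                   // copy over old elements
              delete[] data_;
              data_ = p;
              cap_  = newCap;
          }
          data_[size_++] = x;
      }
      std::size_t size() const { return size_; }
      int operator[](std::size_t i) const { return data_[i]; }
  };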
 

Stefan Ram

Rod Pemberton said:
Many years ago, I taught myself BASIC followed by 6502 assembly.

This was the same sequence in my case, with the special
twist that I did not have access to a computer in those
days, so I wrote my BASIC code on paper with a pencil and
did not see it run for some months, until I eventually
got access to a BASIC implementation on a Personal
Electronic Transactor.

But there might be a fallacy here: When one has learned
riding horses in a special way, this might stil apply today.
Horses do not change so fast. But the landscape of
processors and programming languages has changed, even
though the 6502 is still manufactured today.

The fallacay could be: »I learned programming this way,
therefore, every one else also should learn it this way.«
(»Because I only know the way how I did learn it, I cannot
imagine any other possibility.«)

When you learned a language with a 0 pointer, you possibly
understood it in terms of certain 6502 behavior, so it
possibly seemed to you that your 6502 knowledge helped you
to understand the 0 pointer concept. You possibly never had
the experience of learning the 0 pointer concept
without ever having been exposed to an assembly language
before, so you possibly cannot imagine how someone could
grasp the 0 pointer concept without a 6502 model in his
mind. But that does not necessarily mean that it is really
necessary for someone else to learn things in this order.

In my case, for example, I could say: »The best way to learn
programming for everyone is to write the programs on paper
with a pencil for some months, before one starts to sit in
front of a keyboard.« But soon young people will have to
learn how to use a pencil just for this, because they start
to write using keyboards - there are indeed some schools that
are already dropping the teaching of longhand, IIRC.

In my case, the good thing about writing code on paper was
that I did not rush to execute my code, but instead took my
time to write and re-read it carefully and then learned to
»mentally execute« the code: Because I had no implementation,
I »executed« each statement mentally trying to imitate what
the machine would do. A capability, which surely is helpful
in programming.

PS: I made a spelling mistake in another post, I wrote
»Macchiavelli«, but the correct spelling seems to be
»Machiavelli« - sorry.

PPS: some quotations about pencil programming:

»When I wrote TeX originally in 1977 and '78, of course
I didn't have literate programming but I did have
structured programming. I wrote it in a big notebook in
longhand, in pencil. Six months later, after I had gone
through the whole project, I started typing into the computer.«

http://www.gigamonkeys.com/blog/2009/10/05/coders-unit-testing.html

»He declined offers of typing help, and just kept
writing away in pencil. He rewrote parts, copied
things over, erased and rewrote.

Finally André took his neat final pencil copy to a
terminal and typed the whole program in (...)
the VTOC manager worked perfectly from then on.«

http://www.multicians.org/andre.html
 

Nick Keighley

Stefan Ram said:
learning how that language is converted to assembly [is important]

I don't think so. When I come across a novel language feature I like
to think how it would be implemented. But I use an imaginary machine
(or these days, C!). Long ago I knew Z80 assembler (actually much of
Z80 machine code) but I can't see the utility in converting chunks of
C++ (or Scheme or Haskell) into Z80 assembler.
  Even given a specific source language and a specific
  assembly language, there is no specific way »how that
  [source ]language is converted to assembly«, this
  instead depends on the compiler[ author].

I believe that's partially true, but not entirely.
agreed

 Many assembly languages
are very similar.  Microprocessors standardized on certain concepts around
1974 or so, e.g., 8-bit bytes for ASCII and EBCDIC, contiguous memory,
pointer size equivalent to integer size, typeless integers, etc.

but many C programmers (and maybe C++ programmers) assume these are
inviolate laws! They end up writing horribly unportable programs for
no reason except intellectual laziness and lack of imagination about what
hardware engineers (and compiler writers) might do.

Your concentration on assembly code encourages this (naff) type of
coding.
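
An example of the sort of assumption being criticized here - the casts below
are only illustrative:

  #include <cstdint>
  #include <iostream>

  int main()
  {
      int x = 5;
      int* p = &x;
      // Assumes "pointer size equivalent to integer size": often true on
      // 32-bit targets, but on a typical 64-bit target the value no longer
      // fits and the round trip below loses information.
      unsigned int as_int = (unsigned int)(std::uintptr_t)p;
      // The portable spelling uses an integer type defined to hold a pointer:
      std::uintptr_t as_uintptr = (std::uintptr_t)p;
      std::cout << as_int << ' ' << as_uintptr << '\n';
  }
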
 Many HLL
compilers, e.g., for C, produce nearly the same code, adjusted for
different platforms.  But, it's still the programmer's responsibility to
understand what that HLL code is producing for their specific tools.

nope. I get on just fine not knowing what code my HLL is producing.
 

MikeWhy

Nick said:
Stefan Ram said:
learning how that language is converted to assembly [is important]

I don't think so. When I come across a novel language feature I like
to think how it would be implemented. But I use an imaginary machine
(or these days, C!). Long ago I knew Z80 assembler (actually much of
Z80 machine code) but I can't see the utility in converting chunks of
C++ (or Scheme or Haskell) into Z80 assembler.
Even given a specific source language and a specific
assembly language, there is no specific way »how that
[source ]language is converted to assembly«, this
instead depends on the compiler[ author].

I believe that's partially true, but not entirely.
agreed

Many assembly languages
are very similar. Microprocessors standardized on certain concepts
around 1974 or so, e.g., 8-bit bytes for ASCII and EBCDIC,
contiguous memory, pointer size equivalent to integer size, typeless
integers, etc.

but many C programmers (and maybe C++ programmers) assume these are
inviolate laws! They end up writing horribly unportable programs for
no reason except intellectual laziness and lack of imagination about what
hardware engineers (and compiler writers) might do.

Your concentration on assembly code encourages this (naff) type of
coding.
Many HLL
compilers, e.g., for C, produce nearly the same code,
adjusted for different platforms. But, it's still the programmer's
responsibility to understand what that HLL code is producing for
their specific tools.

nope. I get on just fine not knowing what code my HLL is producing.

This might be the salient point, at least for this narrow part of the
discussion.

Each foo(), *foo, foo, ++foo, foo++, and for that matter, '{' and '}' has
in my mind very specific implications and costs. As I type or read each
'if', 'switch', 'for', 'try', 'catch', and 'return', I hold in mind on a
subconscious level the underlying assembler, albeit on a broad generalized
model. For that matter, I also hold a simplified model of how awk or perl
scripts are read and translated as I write them. Reading the Erlang language
spec recently, I can "see" the interpreter, how it's built (or how I might
build one), and the runtime cost of many of its constructs. C#'s runtime
type binding holds few mysteries if you understand COM, and Java's byte-code
interpreter holds no mysteries at all.
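
As one concrete illustration of those per-construct costs (the container
choice is only an example): for non-trivial iterators, pre-increment and
post-increment are not interchangeable in cost, because it++ has to return
a copy of the old iterator.

  #include <list>

  long sum(const std::list<int>& xs)
  {
      long total = 0;
      // ++it avoids the temporary copy that it++ would create; for
      // heavyweight iterator types that copy is the whole difference.
      for (std::list<int>::const_iterator it = xs.begin(); it != xs.end(); ++it)
          total += *it;
      return total;
  }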

Like some others, I also had my start with BASIC, and then dug into 6502
assembler to understand the unannotated ROM disassembly. This seemed a
completely natural progression. (Apparently, Bill and I have an even longer
history together than I had imagined before this moment.)

Before that, though, I wrote toy-sized FORTRAN without the slightest notion
beyond "this is how you iterate over these calculations N number of times",
and "this statement label is how you tie this statement to that other."
Certainly it's possible to write useful programs without understanding the
how or why behind them.

While I don't intend any of this to be judgemental, I don't see how I can do
what I do with just a collection of rote rules and a notion of magic that
happens under the covers. As perhaps extreme examples, memory barriers and
fences are vague notions and superstition with that view, and using CUDA
effectively is not possible at all without an underlying understanding of
how the parts fit together.
 

Rod Pemberton

Stefan Ram said:
The fallacay could be: »I learned programming this way,
therefore, every one else also should learn it this way.«
(»Because I only know the way how I did learn it, I cannot
imagine any other possibility.«)

Well, I believe I already stated why. Younger programmers seem to have
problems with many of the basic concepts of microprocessors. IMO, this
comes from teaching things like RISC, OOP, HLLs, etc.
[...]
In my case, for example, I could say: »The best way to learn
programming for everyone is to write the programs on paper
with a pencil for some months, before one starts to sit in
front of a keyboard.«

Why is that the best way? With pencil and paper, there is no ability to
confirm the correctness of the program. That's one of the fundamental
things taught in math and science: the ability to check your work. Without
an implementation, how does one do that?

Pencil and paper is good in that it forces you to think out the problem and
solution versus that of trial and error until it works without actually
understanding why it now works. But, isn't that what math and sciences
classes in school are supposed to teach? I.e., thinking and problem
solving.
[...]
In my case, the good thing about writing code on paper was
that I did not rush to execute my code, but instead took my
time to write and re-read it carefully and then learned to
»mentally execute« the code: Because I had no implementation,
I »executed« each statement mentally trying to imitate what
the machine would do. A capability, which surely is helpful
in programming.

True.


Rod Pemberton
PS I missed the Machiavelli misspelling. I'm not good with names.
Apparently, I (horrors!) cut-n-pasted. No, I'm not perfect with spelling
either, but good. It's not the only word misspelled by you though. See
"fallacay" "stil" etc.
 

Rod Pemberton

Nick Keighley said:
On Feb 19, 8:54 am, "Rod Pemberton" <[email protected]>
wrote: ....

But I use an imaginary machine [...]

How do you confirm correctness of both high-level and low-level code with an
imaginary machine? Is your confirmation of correctness imaginary also? ...

That's one of the problems with the ANSI and ISO C specifications: it
doesn't specify a working machine model. Without it, there is no reference
working version of C that other implementations can be compared against.
Without it, everything in the specifications is open to interpretation as to
accuracy. That's the major problem that plagues the pedants on comp.lang.c.
Each of them has their own model in their head of how C works and what the
English words and phrases in the C standards mean. The British frequently
have a different understanding of "either-or" versus Americans. It seems
that not one of the pedants has ever programmed in assembly or believes they
need to. None have read "The C Rationale", or read historical works by the
C creators like "The Development of the C Language" or "Portability of C
Programs and the UNIX System" or "The C Language Calling Sequence" etc which
explain what C is supposed to do and make reference to assembly. If they
did, none of them would ever argue against the understanding that C is built
upon assembly and assembly is needed to fully understand C.

Forth programmers hate the fact that their early specifications did specify
a machine model. The problem wasn't that they specified a model, but they
specified a very narrow model that wasn't able to adapt, or had radical
changes in the following specification. But, that is far better than not
specifying one at all. Many Forths were able to comply very accurately with
their machine models. That allowed Forth to be widely implemented. Today,
a couple of the early specifications are loved and still used, while a
couple are hated.
(or these days, C!).

I'm not familiar with the C! language. Did you mean the C language?
Perhaps you meant the C# language? It seems that neither Wikipedia, Esolangs,
Google, nor Yahoo will find anything on C! or a C-exclamation language ...
HOPL is inaccessible for some reason.

However, if C! is a C derivative, it most likely inherited the basic
functionality that a programmer needs to know. Most C derivative
languages have kept that functionality because it has become standard
due to microprocessors.
[...] but I can't see the utility in converting chunks of
C++ (or scheme or Haskell)into Z80 assembler.

Yes, converting C++ for ARM to Z80 to understand how a HLL is converted to
assembly would be questionable ... You'd want to study the ARM assembly
code generated for C++. They need to understand how a HLL is converted to
the assembly for their platform. They need to know how basic language
constructs are converted to assembly, e.g., pointers, control-flow,
procedures or function calls, etc.
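
One low-effort way to do that, assuming a GCC- or Clang-style toolchain, is
to ask the compiler for its assembly output and read it next to the source:

  // call.cpp - a tiny translation unit to study in the generated assembly
  int add(int a, int b) { return a + b; }

  int twice(int x) { return add(x, x); }  // shows a call, or its inlining at -O2

Compiling with, for example, "g++ -S -O2 call.cpp" (clang++ accepts the same
flags) leaves the assembly in call.s, showing how the parameters, the call,
and the return value are actually handled for the target in question.
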
but many C programmers (and maybe C++ programmers)
assume these are inviolate laws! They end up writing horribly
unportable programs for no reason except intellectual laziness
and lack of imagination about what hardware engineers
(and compiler writers) might do.

Your concentration on assembly code encourages this (naff)
type of coding.

Ok, this is the same BS portable C argument that plagues comp.lang.c
thought ...

The first inherent problem with that argument is that microprocessors have
standardized on a basic computing architecture. Nearly all computing
platforms today use that architecture either because they are microprocessor
based or because they use the same hardware, e.g., memory. They
standardized way back around 1974. Mainframes followed suit slightly later.
What that means is that almost no one today - or in the past two decades for
that matter - has access to obscure, obsoleted, perverse platforms - or
other platforms that *NEVER* should've been used to create the ANSI C
standard in the first place - where "portability" of C or C++ is actually
needed. Have you ever had access to a non-emulated EBCDIC platform in your
entire lifetime? (No.) Have you ever had access to a 16-bit byte platform
in your entire lifetime? (No.) Have you ever had access to a 9-bit
character platform in your entire lifetime? (No.) So, like it or not, it's
entirely up to the users of obscure platforms to fix the "non-portable"
nature of C code for their platforms.

The second inherent problem with that argument is even C that is
specification compliant is truly only 30% portable. And, that again is due
to standardization of microprocessors on 8-bit bytes for ASCII or EBCDIC,
contiguous memory, two's complement integers, generic integers so pointers
are the same size, etc. Another 50% of C can be made to work correctly, but
it's not portable by default. The compiler has to make it work. And, 20%
of C is just non-portable. IIRC, you were on c.l.c, so I shouldn't have to
mention stuff like setjmp, signals, buffering, POSIX based file I/O, etc.
nope. I get on just fine not knowing what code my HLL is producing.

Then, I'd guess you won't get fired for producing faulty code either ...
So, that qualifies you as either a novice or hobbyist. (Yes, I suspect you
probably aren't ...) A professional would do so as a matter of course. How
do you know your compiler is compliant with a specification, or your
compiler accurately implements what you expect your code to do?
Testing only? What if you missed something you could've caught had you
simply just read the assembly?


Rod Pemberton
 

Dmitry A. Kazakov

Nick Keighley said:
On Feb 19, 8:54 am, "Rod Pemberton" <[email protected]>
wrote: ...

But I use an imaginary machine [...]

How do you confirm correctness of both high-level and low-level code with an
imaginary machine?

Huh, the whole idea of a correctness proof is to do it *before* running the
code. You need no machine for that.
Is your confirmation of correctness imaginary also? ...

As any confirmation is. Moreover, such imaginary confirmations are much
stronger than any tests, because they hold not only for a concrete machine
and a concrete run, but for any machine and any run.
 

Nick Keighley

learning how that language is converted to assembly [is important]
I don't think so. When I come across a novel language feature I like
to think how it would be implemented. But I use an imaginary machine
(or these days, C!). Long ago I knew Z80 assembler (actually much of
Z80 machine code) but I can't see the utility in converting chunks of
C++ (or Scheme or Haskell) into Z80 assembler.
Even given a specific source language and a specific
assembly language, there is no specific way »how that
[source ]language is converted to assembly«, this
instead depends on the compiler[ author].
I believe that's partially true, but not entirely.
[...]
nope. I get on just fine not knowing what code my HLL is producing.

This might be the salient point, at least for this narrow part of the
discussion.

Each foo(), *foo, foo, ++foo, foo++, and for that matter, '{' and '}' has
in my mind very specific implications and costs.


I don't think this has been true for many years. I certainly don't
think it has been useful.
As I type or read each
'if', 'switch', 'for', 'try', 'catch', and 'return', I hold in mind on a
subconscious level the underlying assembler, albeit on a broad generalized
model.

I may have done that in the past but I don't think I do now. Maybe
deep-deep down in my subconscious there's a Z80 or 68000 running, but I
doubt it.

To me C is the machine at the bottom.
For that matter, I also hold a simplified model of how awk or perl
scripts are read and translated as I write them.

Never had that for Awk. Certainly not for Perl. This may explain why
I'm a rubbish Perl programmer... I have no semantic model for Perl.
Reading the Erlang language
spec recently, I can "see" the interpreter, how it's built (or how I might
build one), and the runtime cost of many of its constructs.

I'm (in a desultory way) grappling with Haskell. Maybe your approach
would help me.
[...] I wrote toy-sized FORTRAN without the slightest notion
beyond "this is how you iterate over these calculations N number of times",
and "this statement label is how you tie this statement to that other."
Certainly it's possible to write useful programs without understanding the
how or why behind them.

I'm not sure I agree. I think you have to have some sort of idea what
is going on. I've been reading Fowler's "Domain Specific Languages"
recently. And I realised that's where I stole the term "Semantic
Model" from. This is one of those rare books where I go "yes! that's
what I was trying to do/say!". It's like he read my mind, sorted it
all out then squirted it on to paper.
While I don't intend any of this to be judgemental, I don't see how I can do
what I do with just a collection of rote rules and a notion of magic that
happens under the covers.

Yes, you have to have some set of primitives and some way of combining
them, and the primitives must have some well-defined semantics
(meaning). I just disagree that some arbitrary assembler is the best
way to describe the semantics. Level confusion at best.

"Dr. I have this pain in my leg!" "Your quarks are misplaced"
 

Nick Keighley

But I use an imaginary machine [...]

How do you confirm correctness of both high-level and low-level code with an
imaginary machine?

How do you "confirm correctness" by translating it into I86 assembler?
to me "correctness" is a mathematical property.

And if it's good enough for Knuth...
 Is your confirmation of correctness imaginary also? ...

code inspection, DbC, unit test
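
A minimal sketch of the latter two, with assert standing in for a contract
check and a hand-written check in place of a test framework (the function
itself is only illustrative):

  #include <cassert>

  // Contract-style precondition checked at the function boundary.
  int divide(int num, int den)
  {
      assert(den != 0 && "precondition: denominator must be non-zero");
      return num / den;
  }

  // A tiny unit test: run it in your head first, then on the machine.
  int main()
  {
      assert(divide(10, 2) == 5);
      assert(divide(9, 3) == 3);
      return 0;
  }
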
That's one of the problems with the ANSI and ISO C specifications: it
doesn't specify a working machine model.

that is one of its major strengths.

Though I've read a few language definitions in my time and *none* of
them went anywhere near the bare metal.
 Without it, there is no reference
working version of C that other implementations can be compared against.

there is a clear definition of the semantics of each primitive.
Luckily uncontaminated by any particular implementation.

We are beginning to repeat ourselves and making no progress in
convincing each other.

I've made my points. Re-read my previous posts if you wish for
reiteration.

[assume massive snippage]
[...]  None have read "The C Rationale",

I have! Lovely document.

[a Clike language] most likely inherited the basic
functionality that a programmer needs to know.

exceptions, closures, continuations, monads,

lots of stuff C doesn't provide.

Ok, this is the same BS portable C argument that plagues comp.lang.c
thought ...

The first inherent problem with that argument is that microprocessors have
standardized on a basic computing architecture.  Nearly all computing
platforms today use that architecture either because they are microprocessor
based or because they use the same hardware, e.g., memory.  They
standardized way back around 1974.  Mainframes followed suit slightly later.
What that means is that almost no one today - or in the past two decades for
that matter - has access to obscure, obsoleted, perverse platforms - or
other platforms that *NEVER* should've been used to create the ANSI C
standard in the first place - where "portability" of C or C++ is actually
needed.

if in the following by platform you mean hardware then in many cases
"yes I have used such things". But if by platform you mean "have I
programmed in C on these things" then "no".
Have you ever had access to a non-emulated EBCDIC platform in your
entire lifetime?  (No.)

but I've encountered non-ASCII character sets.
Have you ever had access to a 16-bit byte platform
in your entire lifetime? (No.)

yes. And used HLLs on them.
 Have you ever had access to a 9-bit
character platform in your entire lifetime? (No.)

I've encountered 5 and 6 bit character sets. I've used 24 bit
machines. I've used 1's complement machines.

[...] it's [...] the
programmer's responsibility to understand what that HLL
code is producing for their specific tools.
nope. I get on just fine not knowing what code my HLL is producing.

Then, I'd guess you won't get fired for producing faulty code either ...
So, that qualifies you as either a novice or hobbyist.  (Yes, I suspect you
probably aren't ...)  A professional would do so as a matter of course.

No. I know plenty of professional programmers and vanishingly few of
them look at the assembler. Most would laugh at the idea.
 How
do you know your compiler is compliant with a specification, or your
compiler accurately implements what you expect your code to do?
Testing only?  What if you missed something you could've caught had you
simply just read the assembly?

In my experience, compiler bugs are very rare.
 
