The way to read STL source code


Nick Keighley

Again, you've confirm my experiences in contrast to NK's claims.

3 or 4 in 20 years sounds about right to me (a little high). Sounds
pretty low to me.

NK claimed bugs in compilers were pretty unusual. Sounds like we
agree.
Exactly.  You won't know the code executes as intended until it is executed,

you can have a pretty good idea. And saring at megabytes of assembler
ain't gonna help you.
which was why I asked this of NK:

RP> How do you confirm correctness of both high-level and
RP> low-level code with an imaginary machine?  Is your
RP> confirmation of correctness imaginary also? ...

oh were you asking if I did unit tests? Yes various validations are
done ***on the HLL code***. Including inspection, unit test and system
test. Whats that got to do with asesmbly code?
To which, I got more than one absurd response that imaginary
confirmation of correctness was entirely valid ...

have you heard of DbC or code inspection?
 

Jorgen Grahn

Actually the more you know the language, the less you would even want
to look at other people's code. Instead, what one ends up doing is reading
tutorials, reference manuals and books on things like algorithms, data
containers and design patterns. The only place you would look at someone
else's code is in those tutorials (but usually in those the code is not
a whole program, but just the relevant lines that demonstrate the thing
being explained).

If you ever need to look at someone else's code it usually means that
you are tasked with maintaining/fixing/refactoring said code. That's like
cleaning someone else's underwear by hand. Yuk.

I think of it more as sex involving more than one person ...

(I see there's a different subthread about all this, so I won't push
it any further here.)

/Jorgen
 

Nomen Nescio

Rod Pemberton said:
The first inherent problem with that argument is that microprocessors have
standardized on a basic computing architecture. Nearly all computing
platforms today use that architecture either because they are microprocessor
based or because they use the same hardware, e.g., memory. They
standardized way back around 1974. Mainframes followed suit slightly
later.

No, mainframes had the architecture in place in 1964 with System 360. That basic
platform still exists today obviously with many extensions as System Z. Why
do you keep trying to rewrite history? IBM solved all the problems Intel has
today back in good old 1964.
What that means is that almost no one today - or in the past two decades for
that matter - has access to obscure, obsoleted, perverse platforms - or
other platforms that *NEVER* should've been used to create the ANSI C
standard in the first place - where "portability" of C or C++ is actually
needed. Have you ever had access to a non-emulated EBCDIC platform in your
entire lifetime?

Sure, for most of my entire lifetime.
Have you ever had access to a 16-bit byte platform in your entire
lifetime?

Yes and I'm pretty sure you have also
(No.) Have you ever had access to a 9-bit character platform in your
entire lifetime?

Maybe, can't remember. Pretty sure the PDP-8 used a 12 bit word but that is
way bigger than what you asked about ;-)
So, like it or not, it's entirely up to the users of obscure platforms to
fix the "non-portable" nature of C code for their platforms.

Agreed but I solve the problem by not using C. YMMV ;-)

Then, I'd guess you won't get fired for producing faulty code either ...
So, that qualifies you as either a novice or hobbyist.

Or an H1B
 

Nick Keighley

wrote:
...

But I use an imaginary machine [...]

let's have some context:-

***
learning how that language is converted to assembly [is important]

I don't think so. When I come across a novel language feature I like
to think how it would be implemented. But I use an imaginary machine
(or these days, C!). Long ago I knew Z80 assembler (actually much of
Z80 machine code) but I can't see the utility in converting chunks of
C++ (or scheme or Haskell) into Z80 assembler.
***

in other words I was talking about implementation of novel language
features. Say virtual functions or closures in languages that support
them.
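
(To make that concrete: here is a minimal C sketch of how a compiler might
lower virtual dispatch, the kind of "imaginary machine" picture meant above.
The names Shape, ShapeVTable and rect_area are purely illustrative.)

#include <stdio.h>

/* Hypothetical sketch: each object carries a hidden pointer to a per-class
   table of function pointers (a "vtable"); a virtual call is an indirect
   call through that table. */
struct Shape;

struct ShapeVTable {
    double (*area)(const struct Shape *self);   /* one slot per virtual function */
};

struct Shape {
    const struct ShapeVTable *vtbl;             /* the hidden pointer */
    double w, h;
};

static double rect_area(const struct Shape *s) { return s->w * s->h; }

static const struct ShapeVTable rect_vtbl = { rect_area };

int main(void)
{
    struct Shape r = { &rect_vtbl, 3.0, 4.0 };
    printf("%f\n", r.vtbl->area(&r));           /* the "virtual call" */
    return 0;
}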
How do you confirm correctness of both high-level and low-level code with an
imaginary machine?  Is your confirmation of correctness imaginary also? ...

for HLLs: code walkthrus, code inspection, unit test, system test, use
of asserts.

I don't confirm correctness at the machine code level, but in
principle the same techniques apply.
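
(As an illustration of "use of asserts" at the HLL level, here is a minimal
sketch with made-up names, showing asserts doubling as executable
pre/post-conditions and as unit-test style checks:)

#include <assert.h>

/* Illustrative only: the function name and bounds are invented. */
static int clamp(int value, int lo, int hi)
{
    assert(lo <= hi);                            /* precondition */
    int result = (value < lo) ? lo : (value > hi) ? hi : value;
    assert(result >= lo && result <= hi);        /* postcondition */
    return result;
}

int main(void)
{
    /* unit-test style checks: the HLL code is actually executed */
    assert(clamp(  5, 0, 10) ==  5);
    assert(clamp( -3, 0, 10) ==  0);
    assert(clamp( 42, 0, 10) == 10);
    return 0;
}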

<snip>
 

Nick Keighley

3 or 4 in 20 years sounds about right to me (a little high). Sounds
pretty low to me.

Say 2 in 20 years; that's 1 every 10 years. So I didn't find them all
myself, people I worked with did. Say the average team size is 5;
that's 1 every 50 man-years. So a programmer working alone would be
lucky to see one compiler bug in his entire career. I call that rare.

<snip>
 

Rod Pemberton

Juha Nieminen said:
Which is one of the reasons why asm is seldom used directly.

Asm is seldom used directly "for what" ... ?

I don't know what you're talking about here.

Assembly is still used directly by assembly programmers, and directly as the
output of most compilers. There are only a handful of compilers that
directly emit machine language, e.g., OpenWatcom. Those that don't emit
assembly typically emit C, which, when compiled by a C compiler, then typically
emits assembly, which, when assembled by an assembler, emits machine code. So,
where is it that "asm is seldom used directly"?

Were you simply trying to say that most assemblers don't support safe
programming features, like type checking, and therefore we use (or should
use) HLLs instead? Ludicrous! Some assemblers have such features if people
would just learn them and use them. Personally, I use HLLs because I don't
have to keep track of a variable's location in my head, e.g., memory,
register, or stack. I don't like playing "Where's Waldo?" when programming.
The original claim was that C supports everything that any other
language supports. Not true. C does not support most of the safety
mechanisms that other languages have.

Sure it does. Don't use the unsafe features:

a) There is only one instance where I've ever used a goto in C. That was
for clarity. Unstructured control-flow is available in C, but one does not
have to use it. Unchecked pointers are available in C, but one does not
have to use them either.

b) That's what languages that emit C do. That's what MISRA C or Safer C do.
Or, you can create a new, safer C syntax, as C derivatives such as Java
do.

If you're interested in which languages emit C or more on language success,
here are a few links to some of my other c.l.m. posts:
http://groups.google.com/group/comp.lang.misc/msg/b48b2f5abe7cff07
http://groups.google.com/group/comp.lang.misc/msg/a8e78f82691706cb
http://groups.google.com/group/comp.lang.forth/msg/47c602f348f362f6


Rod Pemberton
 

Rod Pemberton

Nomen Nescio said:
No, mainframes had the architecture in place in 1964 with System 360.

No, only *one* mainframe - singular - "had the architecture in place ..."

Also, there apparently is no support for the claim that IBM's mainframe was
the genesis of the architecture adopted by microprocessors. The
development of standardized character sets, e.g., ASCII and extended ASCII,
had a strong impact on standardizing 8-bit bytes for characters.
That basic platform still exists today obviously with many extensions as
System Z. Why do you keep trying to rewrite history?

I'm not. Mainframes (plural) didn't standardize on IBM's System 360 design.
Microprocessors did circa 1974. Mainframes (plural) still hadn't done so by
1978. How is that rewriting history?
IBM solved all the problems Intel has today back in good old 1964.

What problems?
Sure, for most of my entire lifetime.

I'm aware of your personal hell. Or, everyone else would call it hell, but
you've embraced it as heaven ... What do you think real Hell will be like
for you? ;-)
Yes and I'm pretty sure you have also

Well, too many systems, let me check ... I don't have info on all of them.
I did mean char size, but if we're talking address unit size then, unfortunately,
that's a "yes" ... However, anyone a few years younger than me can't
answer that as a "yes". So, shut up, you're ruining my point! ;-)


Rod Pemberton
 

Rod Pemberton

BartC said:
....


An interpreter may well be *implemented* in assembly language.

That's typical.
Every instruction that the interpreter deals with could be implemented
by a carefully hand-crafted and optimised block of assembly code!

So? An interpreter still reimplements for a virtual machine what native
assembly already does, i.e., slower.
Sometimes better than automatically compiled code.

The "hand" optimized sequences for interpreters are much smaller than what a
compiler usually optimizes. So, while the small pieces might be optimized
better, overall the larger compiler pieces are better optimized.
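
(For the sake of the argument, a minimal sketch of what such a dispatch loop
looks like in C, with an invented four-opcode bytecode. Every virtual
instruction pays for a fetch and a decode before its hand-optimizable body
runs, which is the overhead being discussed:)

#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };     /* invented bytecode */

static void run(const int *code)
{
    int stack[64];
    int sp = 0;                                  /* stack pointer */
    int pc = 0;                                  /* program counter */
    for (;;) {
        switch (code[pc++]) {                    /* fetch + decode */
        case OP_PUSH:  stack[sp++] = code[pc++];         break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case OP_PRINT: printf("%d\n", stack[sp - 1]);    break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    const int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
    run(program);                                /* prints 5 */
    return 0;
}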
Tell that to the guy who implemented the LuaJIT interpreter. Numeric
benchmarks are often faster than optimised C!

Well, tell him JIT is not an interpreter:
http://en.wikipedia.org/wiki/Just-in-time_compilation
C is pretty good, but it has to spend a lot of time recognising constructs
in the source code [which] could be expressed directly in a higher level
language and which can be executed directly (vector operations for
example). It doesn't always manage to do that.

Those constructs must still be converted to assembly. So, even if you can
"express them directly", what real advantage is that? They have to be
converted to the underlying assembly language no matter what. If you're
looking for "short-hand" to express "missing" some constructs, use some
integers and an array ...
And if implementing something unusual, you can often write almost directly
from pseudo-code in a higher level language, but a quick, throwaway
implementation in C may well be slower; you need to spend time with C in
achieving what's already been done inside the interpreter.
....


Rod Pemberton
 

Rod Pemberton

Juha Nieminen said:

Well, I stated why. It's the part about the research into and
implementation of optimizations.
It's not very hard to believe. If the language has no support
for a certain higher-level concept, then the compiler cannot use
that concept to perform optimizations.

So, even though every "Turing complete" language can be expressed in terms
of an OISC, none of them can optimize a higher-level concept that is not
designed into the high-level language? Do they actually do any
optimization at all according to you? You do understand that no
architecture executes the higher-level concept, yes? I.e., they all execute
the optimized low-level assembly created for those high level concepts
whether designed into the language or not ... In theory, any equivalent
program, no matter what "Turing complete" language is used, when translated
to an OISC and optimized, should produce the exact same program, entirely
independent of whatever high-level concepts that language has or doesn't.
Do you agree?

OISC
http://en.wikipedia.org/wiki/One_instruction_set_computer
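
(For reference, the usual "one instruction" is subleq: subtract and branch if
the result is less than or equal to zero. A minimal sketch of one execution
step, with a made-up halting convention:)

#include <stdio.h>

/* subleq a, b, c:  mem[b] -= mem[a]; if the result is <= 0, jump to c. */
static int step(int mem[], int pc)
{
    int a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];
    mem[b] -= mem[a];
    return (mem[b] <= 0) ? c : pc + 3;
}

int main(void)
{
    /* cells 0..2: one instruction; cells 3,4: data; cell 6: halt marker (sketch convention) */
    int mem[] = { 3, 4, 6, 5, 2, 0, -1 };
    int pc = 0;
    while (mem[pc] >= 0)                     /* negative cell at pc means halt here */
        pc = step(mem, pc);
    printf("mem[4] = %d\n", mem[4]);         /* 2 - 5 = -3 */
    return 0;
}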
You cannot express everything in C that can be expressed in asm.

There is a sharp cutoff between what you can and can't express in C that is
equivalent to assembly. C maps onto basic or generic assembly extremely
well, especially on 8-bit byte, contiguous memory, generic integer,
machines. The major impediment is some of the type checking introduced with
ANSI C. C captures nearly all of the functionality present in early 8-bit
microprocessors, except the carry flag, rotates, etc. However, using that
functionality may require a few casts or having a chart of the type
conversion rules so you can convert types legally. With x86, you generally
can't generate the more specialized or modern assembly instructions, e.g.,
SMM, MMX, PM, etc., without inline assembly. You may also have some
issues with DSPs or RISC instruction sets fitting well with C. Of course,
word-sized machines were known to be a problem for C from the start.
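
(To illustrate the rotate point: there is no rotate operator in C, but the
shift-and-or idiom below is one that mainstream x86 compilers commonly
recognize and lower to a single ROL instruction. That recognition is an
optimization, not a language guarantee, so treat this as a sketch:)

#include <stdint.h>
#include <stdio.h>

/* Portable 32-bit rotate-left written in plain C. */
static uint32_t rotl32(uint32_t x, unsigned n)
{
    n &= 31;                                   /* keep the count in 0..31 */
    return (x << n) | (x >> ((32 - n) & 31));  /* avoids an undefined shift by 32 */
}

int main(void)
{
    printf("%08x\n", rotl32(0x80000001u, 1));  /* prints 00000003 */
    return 0;
}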
(You can probably achieve the same functionality, but
you cannot achieve the same efficiency in every single case.)

It depends on what assembly functionality you're referring to.
Yes, into *C* language optimizations, not other languages.
What C does not support as a concept the C compiler
cannot optimize.

If it's expressed as C, it sure can! That's what C compiler optimization
does.

What you, perhaps, intended to say is that if C language does not support a
certain concept, then the C compiler might not optimize it as effectively as
a native language compiler with aggressive optimization for that concept.
That could be true, but doesn't have to be. See OISC comments above.
Nope. That's the genius in C++ exceptions: Support for exceptions does
not slow down the program in any way (compared to compiling the program
without support for exceptions). (It was, AFAIK, in fact a requirement
by the standardization committee, that exceptions would be added to the
standard only if it's possible to support them without compromising the
speed of the program. It turns out that it's possible.)

(Of course *throwing* an exception has overhead, but that's to be
expected. Exceptions are designed to be used to handle fatal errors,
not for normal operation. The main point is that when no exceptions
are thrown, the code is in no way slower than eg. the equivalent C
program would be, even though exceptions are supported and could
be thrown at any moment, at any point in the code.)

So, if an exception is thrown, C++ slows down due to the overhead. How
is that any different from a signal in C? How is that any different from me
stating there is substantial overhead present in assembly due to
save-restore state mechanism? So, your comments here seem fine to me,
except I don't get the "Nope."


Rod Pemberton
 

Rod Pemberton

Nick Keighley said:
3 or 4 in 20 years sounds about right to me (a little high). Sounds
pretty low to me.

The problem is they are a source of error you haven't located, but should have.
NK claimed bugs in compilers were pretty unusual.
Sounds like we agree.

Yes, I'd agree that you can agree with yourself and reference yourself in
the third person too ...
you can have a pretty good idea.

Since when is "a pretty good idea" equivalent to confirming correctness of
execution?
And saring at megabytes of assembler
ain't gonna help you.

Do you stare at megabytes of HLL code too? That won't help you either ...
oh were you asking if I did unit tests? Yes various validations are
done ***on the HLL code***. Including inspection, unit test and
system test. Whats that got to do with asesmbly code?

Is the code executed as part of these tests? I.e., confirmed, but not
confirmed on an "imaginary machine" ...

(PS. Are you aware two letters in your various words have their locations
switched? It's been quite a few times now ... asesmbly vs. assembly
.... -ign vs. -ing ... etc)
have you heard of DbC or code inspection?

By DbC, you mean like MISRA C, Safer C, or static code analysis, dynamic
code analysis ... or something else?

Wikipedia's DbC page mentions preconditions. I'd say that was the over 500
identical cut-n-pasted if() statements to determine if our data was in the
correct location of the 5Mloc program I once worked on ... No we couldn't
procedurize that if(). That would've been too slow, supposedly.

Wait ... what?!?! So, you believe in code inspection of the HLL code, but
*not* the emitted assembly? Well, I just don't know what to say about that
contradiction ... Are you playing Devil's advocate?


Rod Pemberton
 

Nick Keighley

3 or 4 in 20 years sounds about right to me (a little high). Sounds
pretty low to me. [...]
NK claimed bugs in compilers were pretty unusual.
Sounds like we agree.

Yes, I'd agree that you can agree with yourself and reference yourself in
the third person too ...

I claimed compiler bugs were pretty rare. Someone came up with a
number (3 to 4 cases in 20 years). You said "you've confirm my
experiences". I pointed out that 3-4/20 years is a low rate and hence
that compiler bugs /are/ pretty unusual.

So we agree compiler bugs are fairly unusual?

Note I'm fairly careful about what I say. When I say "fairly unusual"
I don't mean "never happens".

And [staring] at megabytes of assembler ain't gonna help you.

Do you stare at megabytes of HLL code too?  That won't help you either ....

so what's the point of looking at the generated code? If the HLL
verifies then the machine code has been verified as a side effect.
Is the code executed as part of these tests?  I.e., confirmed, but not
confirmed on an "imaginary machine" ...

yes the code is executed. What do you think a unit test is? And I'll
point out again you misquoted me on "imaginary machines". Stop doing
this.
(PS.  Are you aware two letters in your various words have their locations
switched?  It's been quite a few times now ... asesmbly vs. assembly
... -ign vs. -ing ... etc)

yes I'm a rubbish typeist
By DbC, you mean like MISRA C, Safer C, or static code analysis, dynamic
code analysis ... or something else?

Design by Contract
Wikipedia's DbC page mentions preconditions.

and post-conditions?
 I'd say that was the over 500
identical cut-n-pasted if() statements to determine if our data was in the
correct location of the 5Mloc program I once worked on ...  No we couldn't
procedurize that if().  That would've been too slow, supposedly.

no idea what this has to do with DbC
Wait ... what?!?!  So, you believe in code inspection of the HLL code, but
*not* the emitted assembly?  Well, I just don't know what to say about that
contradiction ...  Are you playing Devil's advocate?

no. I see no contradiction. Machine code is validated as a side-effect
of HLL validation. I've /never/ seen anyone inspect the assembler
output of a compiler as a matter of course.
 

Nomen Nescio

Rod Pemberton said:
No, only *one* mainframe - singular - "had the architecture in place ..."

That's kind of a tautology since mainframe is like Kleenex or Jeep. There is
really only one...
Also, there apparently is no support that IBM's mainframe was the genesis
for the use of the same architecture adopted by microprocessors.

I didn't mean to suggest that, only that IBM standardized things very well
ten years earlier than your 1974 timeline for microprocessors ... to the
extent stuff written for OS/360 is still object code compatible 48 years
later. Then 99% of the machines that came after it and were aimed at that
class of machine were either partial or total clones of the S/360, 370,
etc. If that isn't standardization I don't know what is. There haven't been
that many successful mainframe class machines but many many microprocessor
designs have had their day in the sun.
The development of standardized character sets, e.g., ASCII and extended
ASCII, had a strong impact on standardizing 8-bit bytes for characters.

I would suggest EBCDIC had a strong effect on standardizing 8 bit bytes for
characters. I don't know what the timeline was but IBM ruled the world and
so did EBCDIC. Just depends where you are on the space-time continuum. Now
everything's Unicode so I guess the argument is muted somewhat.
I'm not. Mainframes (plural) didn't standardize on IBM's System 360 design.
Microprocessors did circa 1974. Mainframes (plural) still hadn't done so by
1978. How is that rewriting history?


What problems?

For you they might not be problems since Intel is what you know but when we
(IBM guys) look at Intel we ask stuff like

Do you really need to code a loop to move a string of characters or
compare character strings longer than 4 or 8 or 16 bytes??? We haven't
ever had to do that for strings less than 256 bytes, and for 25 years or
more not for strings of any length. To us that looks stupid.

Do you really have only 8 "general purpose registers" and need 2 or 3 of
those to reference the stack? We had 16 true honest to goodness gprs since
the first series of machines. AMD fixed that with AMD64 but they didn't go
far enough considering they had a chance. Given Intel's lack of storage to
storage instructions we have in IBM you really need more registers to
avoid playing hide and go seek on the stack all the time.

Do you really have no register saving convention on Intel? Every OS and
individual function defines what has to be on the stack and what registers
have to be changed? We've had standard linkage for almost the whole
history of the machine and OS and the implementations on Intel look
broken.

There's a bunch more but I don't remember it all now.
Well, too many systems, let me check ... I don't have info on all of them.
I did mean char size, but if talking address unit size then, unfortunately
that's a "yes" ... However, anyone a few years younger than me, can't
answer that as a "yes". So, shut up, you're ruining my point! ;-)

Hehe I'm sure there are plenty of old guys around older than us. For one
thing check out comp.lang.fortran. There are some guys that know how to code
on vacuum tube computers...

The "other" Rod Pemberton
 

Geoff

no. I see no contradiction. Machine code is validated as a side-effect
of HLL validation. I've /never/ seen anyone inspect the assembler
output of a compiler as a matter of course.

I don't do it very often anymore but I did it a lot in an embedded
system using a Microtec Research C compiler for Z80/Z180. It emitted
some code that thrashed around with the HL and DE registers, swapping
them circularly for about 4 instructions. Email to them with sample
code got me a phone call from the compiler maintainer and the compiler
was fixed that afternoon. Routine? Far from it, but checking the
product of the compiler's translation can sometimes produce insights.
 

Nick Keighley

[...] So, you [NK] believe in code inspection of the HLL code, but
*not* the emitted assembly? Well, I just don't know what to say about that
contradiction [...]
no. I see no contradiction. Machine code is validated as a side-effect
of HLL validation. I've /never/ seen anyone inspect the assembler
output of a compiler as a matter of course.

I don't do it very often anymore but I did it a lot in an embedded
system using a Microtec Research C compiler for Z80/Z180. It emitted
some code that thrashed around with the HL and DE registers, swapping
them circularly for about 4 instructions. Email to them with sample
code got me a phone call from the compiler maintainer and the compiler
was fixed that afternoon. Routine? Far from it, but checking the
product of the compiler's translation can sometimes produce insights.

it's a matter of balance. I've looked at assembler in the past and if
I suspected a compiler bug (or had some other good reason) I'd look at
it again. But it certainly isn't an every-day activity for either
myself /or/ for anyone I've worked with in recent years.

I suspect Mr Pemberton is trolling a bit. And I bite.
 

Rod Pemberton

Nomen Nescio said:
....


That's kind of a tautology since mainframe is like Kleenex or Jeep.
There is really only one...

So, you mean ... Crays? Oh, I see ...

It seems our resident IBM pundit offered us another "IBM [still] rules the
world" proclamation from some dystopian reality ... I wouldn't classify
IBM's mainframes as an iconic brand. As history has shown, no one wanted
IBM making their PCs either ...

In reality - which you apparently haven't been exposed to since they
locked you in a room in the early 1960s - there are lots of mainframes,
minicomputers, supercomputers, e.g., Cray, Stratus, Tandems, VAX, ...

Wikipedia refers to "IBM and the Seven Dwarfs", so there are another seven
.... etc.
http://en.wikipedia.org/wiki/Mainframe_computer

BTW... After they let you out of that room, did you still see the color of
the walls? How many years did it take for the color burn-in to fade? Do
you find it harsh on the eyes to read non-orange or non-green text? Are you
converting everything in your house to Nixie tubes for the flicker or vacuum
tube glow? How many boxes of programming manuals and photo-copied terminal
manuals do you have at home? Do you plan on using them for kindling? They
aren't on acid free paper you know ... ;-) Well, you can recycle the jokes
on your buddies at the next poker night, sporting event, or camping trip,
etc.
For you they might not be problems since Intel is what you know

I know gp x86 and knew 6502. Although I've not programmed them in
assembly, I've been exposed to Z80, 68000, DEC VAX, PA-RISC,
Transputer, etc.
[...] but when we
(IBM guys) look at Intel we ask stuff like

Do you really need to code a loop to move a string
of characters or compare character strings longer
than 4 or 8 or 16 bytes???

That's not necessary with x86, but that approach can be both the fastest way and the slowest way.
There are three ways you can do that on x86: string instructions with a
repeat prefix (fastest or slowest depending), loop using the loop
instruction (slow), or loop using a conditional branch (fast).
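
(From the C side the same effect usually needs no hand-written loop at all:
calls like the ones below are routinely lowered by x86 compilers to
rep-prefixed string instructions or to vectorized code, depending on the
compiler, size and target, so take this as a sketch rather than a promise
about any particular toolchain:)

#include <stdio.h>
#include <string.h>

int main(void)
{
    char src[256] = "some payload";
    char dst[256];

    memcpy(dst, src, sizeof src);              /* block move: often REP MOVS or SIMD */
    int same = memcmp(dst, src, sizeof src);   /* block compare: often REP CMPS or SIMD */

    printf("%s (compare = %d)\n", dst, same);  /* prints the payload and 0 */
    return 0;
}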
[...] we haven't ever had to do that for strings less than 256 bytes
and for 25 years or more not for strings of any length.

The original 8086 (1978) had the string instruction
with repeat prefix.
To us that looks stupid.
Ok.

Do you really have only 8 "general purpose registers" and
need 2 or 3 of those to reference the stack?

The 64-bit instruction set has more registers. The 16-bit and 32-bit
instruction sets have 8 GP registers. You need one to reference a stack, if
you use a stack. You need another one if you're using a certain style of
creating the linked list for stack frames. If you're using more than 2 or
3, you're doing something wrong anyway. The instructions are well
distributed so as to minimize the need for them.
We had 16 true honest to goodness gprs since
the first series of machines.

So?

If "we" don't need or use that many registers, why is it important that
you have more than "us"? ...
AMD fixed that with AMD64 but they didn't go
far enough considering they had a chance.

They provided more of them, but register shortage wasn't and still isn't an
issue with Intel, IMO. It's only an issue with those from RISC
backgrounds, post 8-bit microprocessor backgrounds, or university professors
who continue to spread the myth of register starvation as if the problem
wasn't solved ...
Given Intel's lack of storage to storage instructions [...]

What do you mean by that?

For x86 (general purpose instructions), only one instruction is needed for
data moves from register-to-register, register-to-memory, stack-to-register,
or stack-to-memory (and vice-versa). It's only memory-to-memory or
stack-to-stack that requires multiple instructions on x86. Seriously, how
often do you do the latter on your precious IBMs?
Given Intel's lack of storage to storage instructions
we have in IBM you really need more registers to
avoid playing hide and go seek on the stack all the time.

(I'll take it there is an implied "versus what" between
"instructions ... we have".)

You don't understand CISC do you?
http://en.wikipedia.org/wiki/Complex_instruction_set_computing

Didn't you ever code on an 8-bit microprocessor with a non-orthogonal instruction
set? I.e., in the pre-RISC, "define all non-RISC as CISC" era ...

CISC instruction sets aren't orthogonal. This increases code density. They
are also optimized to minimize register need.

It seems you don't understand what a register reorder buffer (ROB) is
either. A ROB allows renaming of a small number of architectural registers
onto a larger set. So, the P6 mapped those 8 registers onto 96. The actual
dependencies of register use between instructions are limited. So, having
only a few named registers isn't really much of an issue.
Do you really have no register saving convention on Intel?

There is MS' cdecl, which is fairly standard for non-GNU, non-FOSS software.
But, other compilers define their own also, e.g., GNU compilers use a
different combination. OpenWatcom's C compiler can produce quite a few
combinations, including register only. (Why am I hearing mental projections
of "... but the x86 is register starved ... "?)

http://en.wikipedia.org/wiki/X86_calling_conventions

There are also standardized object formats, e.g., TIS and OMF:
http://en.wikipedia.org/wiki/Relocatable_Object_Module_Format
http://en.wikipedia.org/wiki/Executable_and_Linkable_Format
Every OS and individual function defines what has to be on
the stack and what registers have to be changed? We've had
standard linkage for almost the whole history of the machine
and OS and the implementations on Intel look
broken. ....

The "other" Rod Pemberton

Ok, could you please stop that? I asked you before.

From using Google and Yahoo, I know what a few "other" Rod Pembertons do
for a living. One is a golfer. One is a banker. etc. It'd be quite rare
if another was a programmer. Of the more common names that begin with "Rod"
whose last name is Pemberton, there are supposedly these people living in
the US:

4 Rod
2 Roderick
7 Rodgers
1 Rodman


Rod Pemberton
 

Rod Pemberton

Nick Keighley said:
On Feb 22, 11:12 am, "Rod Pemberton" <[email protected]>
wrote: ....

[...]
If the HLL verifies then the machine code has been verified as a side
effect.

a) "Verifies" how?
b) No. That just means that whatever was "verified" didn't trigger or run
into the error.
Is the code executed as part of these tests? I.e., confirmed,
but not confirmed on an "imaginary machine" ...

yes the code is executed. [...]
....

And I'll point out again you misquoted me on "imaginary
machines". Stop doing this.

Huh? What are you talking about misquoted? In response to me saying one
needs to check the assembly output, you said:

NK> I don't think so. When I come across a novel
NK> language feature I like to think how it would
NK> be implemented. But I use an imaginary machine
NK> (or these days, C!).

For which, I asked this about the "imaginary machine":

RP> How do you confirm correctness of both high-level
RP> and low-level code with an imaginary machine? Is
RP> your confirmation of correctness imaginary also? ...

From which you progressively migrated into real-world testing of code that was
supposedly being tested somehow on an imaginary machine ... That seems to
be the gist of the thread.
and post-conditions?

That too, but I was mentioning, sarcastically, a real-world repeated
identical if() statement as DbC "pre-conditions". They clearly weren't
coded with the concepts of DbC. They were there to ensure that "wrong data"
didn't enter the "wrong part" of the program and become corrupted or cause
other hazardous unknowns.
I'd say that was the over 500 identical cut-n-pasted if()
[...] ....

Machine code is validated as a side-effect of HLL validation.

Well, definitely not for an imaginary machine ...

And, just because a program currently works correctly with the input sets
you've tried, doesn't mean it will later with a different data set or
different combination of user selections. We ran into that issue all the
time on the 5MLoc program. We ran extremely large datasets and tested and
retested every feature to the point of repugnance. All that means is the
bug hasn't been discovered. That bug can be in compiler output, or it can
be in the programmer's expectations about what the compiler should be doing,
but isn't, doesn't, or didn't. Without verifying the emitted assembly does
what you expect the HLL code to do, how do you know it is?


Rod Pemberton
 

Nick Keighley

On Feb 22, 11:12 am, "Rod Pemberton" <[email protected]>
[...]
If the HLL verifies then the machine code has been verified as a side
effect.

a) "Verifies" how?

um... "verify" n. to confirm the correctness of.

If the HLL is correct the machine code is correct (barring unobserved
side effects). Do these happen a lot?
b) No.  That just means that whatever was "verified" didn't trigger or run
into the error.

doesn't this apply to both HLL and machine code?
yes the code is executed. [...]
And I'll point out again you misquoted me on "imaginary
machines". Stop doing this.

Huh?  What are you talking about misquoted?  In response to me saying one
needs to check the assembly output, you said:

NK> I don't think so. When I come across a novel
NK> language feature I like to think how it would
NK> be implemented. But I use an imaginary machine
NK> (or these days, C!).

this is as an aid to understanding a novel language feature. I imagine
how it might be implemented. This isn't part of a verification
strategy.
For which, I asked this about the "imaginary machine":

RP> How do you confirm correctness of both high-level
RP> and low-level code with an imaginary machine?

I don't. Like I said, it is used as an aid to understanding. Mental
scaffolding.
Is
RP> your confirmation of correctness imaginary also? ...

From which you progressively migrated into real-world testing of code that was
supposedly being tested somehow on an imaginary machine ... That seems to
be the gist of the thread.

I never said I tested on an imaginary machine. Though I have done host
testing as part of the verification of a program destined to run on a
different target.

Well, definitely not for an imaginary machine ...

And, just because a program currently works correctly with the input sets
you've tried, doesn't mean it will later with a different data set or
different combination of user selections.

this applies to all forms of testing. Hence walkthrus, inspection, DbC
etc.
Unit test is supposed to be a little more resistant to this effect as
there is a systematic attempt to check all boundary conditions etc.

The only "fool proof" alternative is formal methods.
 We ran into that issue all the
time on the 5MLoc program.  We ran extremely large datasets and tested and
retested every feature to the point of repugnance.  All that means is the
bug hasn't been discovered.  That bug can be in compiler output, or it can
be in the programmer's expectations about what the compiler should be doing,
but isn't, doesn't, or didn't.  Without verifying the emitted assembly does
what you expect the HLL code to do, how do you know it is?

what makes you think verifying machine code is any easier than
verifying HLL code? How do I validate a mega-byte of HLL by looking at
the machine code?
 

Krice

I have no ideas where to begin to start with, and what to do

I think the best way to learn programming is by learning
concepts. After all source code is just an implementation
of concepts. C++ is a "difficult" language and often there
is limited time to create programs, which has resulted in
poor-quality source code.
That is why learning by reading source code can be a bad idea.
Of course there is some good source code as well, but it's
rare and you still have to learn the concepts to understand it.
 

Rod Pemberton

Nick Keighley said:
....

If the HLL is correct the machine code is correct [...]

This assumes the HLL code and machine code are 100% identical. That assumes
the compiler is 100% perfect in its translation. That also assumes the
libraries are 100% perfect in their implementation. The HLL code can be
100% correct syntactically and the generated machine code can be wrong.
[...] whatever was "verified" didn't trigger or run
into the error.

doesn't this apply to both HLL and machine code?

If you can interpret the HLL code and you can compile it to machine code
too, then yes. Otherwise, no, it only applies to the machine code since the
HLL can't be fully "verified" except by compiling it to machine code.
[...]
And, just because a program currently works correctly with
the input sets you've tried, doesn't mean it will later with a
different data set or different combination of user selections.

this applies to all forms of testing. [...]

If you don't take issue here, why do you do so above?!
what makes you think verifying machine code is any easier
than verifying HLL code?

I didn't say it is.
How do I validate a mega-byte of HLL by looking at
the machine code?

By compiling and studying the output for all the language elements, type
conversions, etc., that you use, to understand whether your compiler is
emitting them correctly, or whether your understanding of what the HLL code
does is correct in terms of the assembly. You don't have to study the entire
output of a large program.
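
(In practice that kind of spot check is just a matter of asking the compiler
for its listing. For example, with GCC or Clang, something like the
following, where the file name and function are of course only illustrative:)

/* spot_check.c: a construct whose translation you want to inspect */
unsigned divide_by_16(unsigned x)
{
    return x / 16;   /* at -O2 you'd expect a shift, not a divide */
}

Compiling it with "gcc -S -O2 spot_check.c" (or "clang -S -O2 spot_check.c")
leaves the generated assembly in spot_check.s, which is small enough to read
in full.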


Rod Pemberton
 

Dombo

On 25-Feb-12 11:25, Rod Pemberton wrote:
By compiling and studying the output for all the language elements, type
conversions, etc., that you use, to understand whether your compiler is
emitting them correctly, or whether your understanding of what the HLL code
does is correct in terms of the assembly.

Only for that particular case; especially with (aggressive)
optimizations enabled there is not a simple mapping of language
constructs to assembly instructions. Sometimes slight variations in the
HLL code may cause the compiler to produce dramatically different output.
You don't have to study the entire output of a large
program.

Studying the assembly output of just a small part of a program proves
exactly nothing, nor does studying the assembly output of the entire
program. Like a compiler might not correctly translate HLL to machine
instructions, the processor may not necessarily execute the machine
instructions correctly in all cases.

If you want to test a piece of software you test whether it produces the
correct responses given a set of stimuli. Often it is a good idea to
automate this process, especially for products with a long life-cycle.
This is a lot cheaper and more useful than manually inspecting the assembly
output of a compiler. In the end the user doesn't give a rat's ass about
the machine instructions; he/she only cares whether the software works correctly.

Personally I inspect the compiler output for only two reasons:
1. To improve my understanding of what the compiler can and cannot optimize.
This has taught me that, contrary to popular belief, code tweaks that make
the code less readable don't necessarily make it faster (often quite the
contrary). Of course you always have to measure regardless.
2. When I have a strong suspicion that code does not execute correctly
because of a compiler bug. For me this is something I do only as a last
resort, after confirming it is definitely not a bug in the HLL code (which
is much more likely), consulting the language specification if necessary.
In my experience the chances of being bitten by a compiler bug are very
small. More often than not, perceived compiler bugs are in reality caused
by HLL code that has undefined behavior or uninitialized variables. But
compiler bugs do exist and are most likely to pop up when aggressive
optimization settings are used.
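
(A small, deliberately broken example of the kind of thing I mean: the
program below has undefined behavior, reading an uninitialized variable, so
it may well behave differently at -O0 and at -O2, and neither behavior is a
compiler bug:)

#include <stdio.h>

int main(void)
{
    int flag;                      /* uninitialized: indeterminate value */
    if (flag)                      /* undefined behavior: reading it */
        printf("the optimizer 'broke' my code\n");
    else
        printf("works on my machine\n");
    return 0;
}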

When C++ was new to me, studying the assembly output of the compiler did
give me a better understanding of the language in a way. It helped me to
better understand why things are the way they are in the
language, and what symptoms you can expect when there are certain flaws
in the HLL code. It certainly gives me an edge compared to colleagues
who are pretty clueless about how a compiler translates code to
machine instructions, and to whom the assembly output looks like a load
of gibberish. That being said, I wouldn't recommend it as the best way to
learn a programming language.
 
