[Q] How far can a stack [LIFO] go in doing automatic garbage collection and preventing memory leaks?


Standish P

[Q] How far can a stack [LIFO] go in doing automatic garbage collection and
preventing memory leaks?
Because a stack has push and pop, it is able to release and allocate
memory. We envisage an exogenous stack which has malloc() associated
with a push and free() associated with a pop.
The algorithm using the stack would have to be "perfect" to prevent
stack overflow or a condition of infinite recursion depth. This would
involve data type checking to filter out invalid input. The task must
be cast in an algorithm that uses the stack. Then the algorithm must
be shown, heuristically or by its metaphor, to be correct using
informal reasoning.
Are there any standard textbooks or papers that show stacks
implemented in C/C++/Python/Forth with malloc/free in push and pop ?
If Forth is a general-purpose processing language based on a stack, is
it possible to convert any and all algorithms to stack-based ones and
thus avoid memory leaks, since a pop automatically releases memory when
free is an intrinsic part of it?
K&R ANSI C has an example of modular programming showing how to
implement a stack, but it uses a fixed-length array. It is also
possibly an endogenous stack. We look for an exogenous stack so that
element size can vary.
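
As a rough illustration of the question, here is a minimal sketch in C
(my own, not taken from any of the textbooks asked about) of such an
"exogenous" stack: each push malloc()s a node that points to
caller-supplied data of arbitrary size, and each pop free()s that node,
so node memory is reclaimed in strict LIFO order. The payload itself
stays owned by the caller; all names are illustrative.

#include <stdlib.h>

struct node {
    void        *payload;   /* caller-owned data of arbitrary size */
    struct node *below;     /* next node down the stack */
};

struct stack { struct node *top; };

/* push: allocating a node is tied to the push */
int push(struct stack *s, void *payload) {
    struct node *n = malloc(sizeof *n);
    if (!n) return -1;
    n->payload = payload;
    n->below = s->top;
    s->top = n;
    return 0;
}

/* pop: freeing the node is tied to the pop; the payload is handed back */
void *pop(struct stack *s) {
    if (!s->top) return NULL;
    struct node *n = s->top;
    void *payload = n->payload;
    s->top = n->below;
    free(n);
    return payload;
}

/* peek at the top element without popping (a top() operation comes up
   later in the thread) */
void *top(const struct stack *s) {
    return s->top ? s->top->payload : NULL;
}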

=======
Standish P <[email protected]>

Another way to pose my question, as occurred to me just now, is to ask
whether a stack is a good abstraction for programming. Certainly, it is
the main abstraction in Forth and PostScript, and readily implementable
in C, C++ and, I assume, Python.

It is true that other languages such as Forth/PostScript have also
borrowed lists from Lisp, in the guise of nested dictionaries, and
Mathematica calls them nested tables as its fundamental data structure.

I am asking for a characterization of the algorithms that benefit from
this abstraction or programming paradigm, and for a comparison with others.

The whole case for OOP is the clustering of thought, i.e. book-keeping,
in the mind of the programmer around a few objects rather than ten or
twenty fold as many functions.

So the name of the game is the same, i.e. to help the programmer keep
track of things for writing fault-free code without chasing every case:
easy visualization, easy recall, communication with fellow programmers
about abstract concepts in terms of real-world objects, and easy
modification and code reuse.
 

John Kelly

Another way to pose my question, as occurred to me just now, is to ask
whether a stack is a good abstraction for programming. Certainly, it is
the main abstraction in Forth and PostScript, and readily implementable
in C, C++ and, I assume, Python.
So the name of the game is the same, i.e. to help the programmer keep
track of things for writing fault-free code without chasing every case:
easy visualization, easy recall, communication with fellow programmers
about abstract concepts in terms of real-world objects, and easy
modification and code reuse.


"Go is an attempt to combine the ease of programming of an interpreted,
dynamically typed language with the efficiency and safety of a
statically typed, compiled language. It also aims to be modern, with
support for networked and multicore computing"

"To make the stacks small, Go's run-time uses segmented stacks. A newly
minted goroutine is given a few kilobytes, which is almost always
enough. When it isn't, the run-time allocates (and frees) extension
segments automatically"

http://golang.org/doc/go_lang_faq.html
 

John Passaniti

Another way to pose my question, as occurred to me just now, is to
ask whether a stack is a good abstraction for programming.
Certainly, it is the main abstraction in Forth and PostScript,
and readily implementable in C, C++ and, I assume, Python.

A stack is a fine abstraction for some kinds of programming. It fails
for others. In languages where functions are first-class entities
that can be stored and passed around like any other kind of data,
stacks can be problematic because a function can out-live the stack
frame it was created in.
It is true that other languages such as Forth/PostScript have also
borrowed lists from Lisp, in the guise of nested dictionaries, and
Mathematica calls them nested tables as its fundamental data structure.
No.

The whole case for OOP is the clustering of thought, i.e. book-keeping,
in the mind of the programmer around a few objects rather than ten or
twenty fold as many functions.

That's one view of OOP. It's not the only one.
So the name of the game is the same, i.e. to help the programmer keep
track of things for writing fault-free code without chasing every case:
easy visualization, easy recall, communication with fellow programmers
about abstract concepts in terms of real-world objects, and easy
modification and code reuse.

You, like probably everyone else who has thought about how to
"simplify" languages, will eventually end up at the same place: you'll
have a model that meets your objectives, but that some programmers
will find unnecessarily limiting. More importantly, you'll run
into some classes of problems for which your simple model makes things
inordinately complicated relative to what other languages and
paradigms offer.

Here's a better idea: Become familiar with the languages you've
cited, and more. I would recommend Forth, Lisp, Smalltalk or Ruby,
Lua or JavaScript. Learn each and then come back and tell us if you
think limiting the programmer to objects with stack-ordered lifetimes
is enough. Oh, and while you're at it, dip your toes into a problem
domain you don't normally do any work in. If you're an embedded
systems guy, then spend some time writing a non-trivial web
application. Go outside your comfort zone and find a problem domain
where cherished idioms and tools no longer apply. I think it will
open your eyes.
 

Standish P

A stack is a fine abstraction for some kinds of programming. It fails
for others. In languages where functions are first-class entities
that can be stored and passed around like any other kind of data,
stacks can be problematic because a function can out-live the stack
frame it was created in.


No.

You are contradicting an earlier poster from the Forth group who
conceded the point about dicts.
That's one view of OOP.  It's not the only one.

And what can you add to enlighten the readers on the other view?
You, like probably everyone else who has thought about how to
"simplify" languages, will eventually end up at the same place: you'll
have a model that meets your objectives, but that some programmers
will find unnecessarily limiting. More importantly, you'll run
into some classes of problems for which your simple model makes things
inordinately complicated relative to what other languages and
paradigms offer.

The objective is to discuss those cases via specific examples (not
generalities), and talk ABOUT them, not AROUND them.
Here's a better idea:  

It's a very fine wild-goose-chase project statement.
 

Standish P

You are contradicting an earlier poster from the Forth group who
conceded the point about dicts.

And what can you add to enlighten the readers on the other view?

The objective is to discuss those cases via specific examples (not
generalities), and talk ABOUT them, not AROUND them.

It's a very fine wild-goose-chase project statement.

Program a universe simulator using a Turing machine.
 

John Passaniti

You are contradicting an earlier poster from the Forth group who
conceded the point about dicts.

Then they are wrong.

You asked if Forth "borrowed" lists from Lisp. It did not. In Lisp,
lists are constructed from a pair of pointers called a "cons cell".
That is the most primitive component that makes up a list. Forth has
no such thing; in Forth, the dictionary (which is traditionally, but
not necessarily, a list) is a data structure in which each word links
to the previously defined word with a pointer. This is in fact one of
the nice things about Lisp; because all lists are created out of the
same primitive cons cell, you can consistently process any list in the
system. In Forth, any lists (such as the dictionary, if it is a list)
are specific to their purpose and have to be treated individually.
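
A rough sketch, in C struct form, of the two shapes being contrasted
(my own illustration, not taken from any particular Lisp or Forth
system; real Forth implementations lay their headers out in many
different ways, so only the linkage is meant to be representative):

/* Lisp: every list is a chain of these uniform two-pointer cells */
struct cons {
    void        *car;   /* the element (or another list) */
    struct cons *cdr;   /* the rest of the list, or NIL */
};

/* Forth (one common, traditional layout): each word header links back
   to the previously defined word, forming a single flat search chain */
struct word {
    struct word  *link;          /* previously defined word in the dictionary */
    unsigned char name_len;      /* counted name field */
    char          name[31];
    void        (*code)(void);   /* code field: what executing the word does */
    /* parameter field (the word's body) follows in memory */
};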

I don't know what you mean by "nested-dictionaries." There is no such
thing in Forth. Dictionaries don't nest. You can create wordlists,
but each wordlist is flat. When most people think of a nested
dictionary, they would think of a structure that would allow any
arbitrary level of nesting, not a string of flat wordlists.
And what can you add to enlighten the readers on the other view?

How one views OOP depends on the language and implementation. Your
statement about having fewer objects than "ten or twenty fold" as many
functions is completely arbitrary and is more a matter of style and
skill in decomposition than an intrinsic quality of objects. The poetic
"clustering of thought" is vague, but I guess it could be an informal
notion of the bundling of related state and methods. And referring to
it as "book-keeping" suggests some kind of static relationship between
state and methods, although that is not the case in architectures that
stress dynamic relationships. Many people only know OOP through
static, class-based models (such as in languages like C++). But there
are other models. Objects can also be represented not with classes
but by cloning existing objects and then mutating them as needed.
Objects can also be represented with a functional interface using a
closure around an environment. In such cases, objects may be far more
fluid than in static class-based models, and shift over time into
different roles. In such systems, "book-keeping" isn't static. Or
put another way, the language and implementation drive the flavor that
a particular object has.
The objective is to discuss those cases via specific examples (not
generalities), and talk ABOUT them, not AROUND them.

I'd be happy to discuss specific examples, but your understanding of
Forth is flawed, and until you learn more about Forth, I don't think
it would be helpful.

And actually, I did provide a specific example. You apparently didn't
understand it, so let me be more explicit. Here is a function in
Scheme:

(define (hello name)
  (lambda ()
    (begin
      (display "Hello ")
      (display name))))

This defines a function that returns another function. You can think
of this as a constructor for a light-weight object that has one value
("name") and one default method (to print "Hello <name>"). The
function that is returned can be stored, passed around, and otherwise
out-live the invocation of this function. For example:

(define example (hello "John"))

In your stack mindset, the value "John" would disappear after the call
to "hello". But in Scheme, the value lives on, as it is part of the
closure captured at the time the function was created.

A stack mindset would not allow this. And this would eliminate the
vast majority of functional programming from your language's
abilities. Maybe you don't care, or maybe you still don't see the
value in this. In that case, I suggest you learn the language and
then think about what your stack mindset prevents.
It's a very fine wild-goose-chase project statement.

No, it is a vivid example of what you don't know-- and what you don't
know is what will limit you later.
 

Dennis Lee Bieber

There are many different ways to implement a heap. The above is
not a good one, and I doubt that it's really used anywhere.
In particular, maintaining a "used list"... The address of a "used"
block is in the possession of the application itself and separate from
the memory management... When the application frees it, it gets added
to the free list... and adjacent freed blocks get merged into larger
ones, etc.
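
To make that concrete, here is a very small free-list allocator sketch
in C over a static arena (my own toy illustration, not any real
allocator): allocation is first-fit with splitting, and freeing
reinserts the block in address order and merges it with an adjacent
successor. Alignment rounding is crude, error handling is minimal, and
merging with a preceding block is left out for brevity; call
arena_init() before anything else.

#include <stddef.h>

#define ARENA_SIZE 4096

typedef struct block {
    size_t        size;   /* payload bytes available in this block */
    struct block *next;   /* next block on the free list (address order) */
} block;

static union { block align; unsigned char bytes[ARENA_SIZE]; } arena;
static block *free_list;

static void arena_init(void) {
    free_list = (block *)arena.bytes;
    free_list->size = ARENA_SIZE - sizeof(block);
    free_list->next = NULL;
}

/* first-fit: walk the free list and split the first block big enough */
static void *arena_alloc(size_t n) {
    n = (n + sizeof(block) - 1) / sizeof(block) * sizeof(block); /* crude rounding */
    block **prev = &free_list;
    for (block *b = free_list; b != NULL; prev = &b->next, b = b->next) {
        if (b->size < n)
            continue;
        if (b->size >= n + 2 * sizeof(block)) {        /* big enough to split */
            block *rest = (block *)((unsigned char *)(b + 1) + n);
            rest->size = b->size - n - sizeof(block);
            rest->next = b->next;
            b->size = n;
            *prev = rest;
        } else {                                       /* hand over the whole block */
            *prev = b->next;
        }
        return b + 1;                                  /* payload follows the header */
    }
    return NULL;                                       /* arena exhausted */
}

/* free: reinsert in address order and merge with an adjacent successor */
static void arena_free(void *p) {
    block *b = (block *)p - 1;
    block **prev = &free_list;
    while (*prev != NULL && *prev < b)
        prev = &(*prev)->next;
    b->next = *prev;
    *prev = b;
    if (b->next &&
        (unsigned char *)(b + 1) + b->size == (unsigned char *)b->next) {
        b->size += sizeof(block) + b->next->size;      /* merge with successor */
        b->next = b->next->next;
    }
}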

Then there is the old Macintosh "handle", which is the address of...
another address. Handles are allocated from a fixed block of memory
containing the addresses of variable-sized blocks elsewhere in memory.
The memory manager is thereby free to move the data around (compacting
free space), updating the address stored in the handle itself as it
does so. Application code never has to know the actual location of the
data.
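
A minimal C sketch of that double indirection (illustrative only; the
names Handle, mm_new and mm_move are made up here and are not the real
Macintosh Memory Manager API, and error checks are omitted):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef void **Handle;              /* pointer to a master pointer */

static Handle mm_new(size_t n) {
    Handle h = malloc(sizeof *h);   /* the master pointer stays at a fixed spot */
    if (h) *h = malloc(n);
    return h;
}

/* the "memory manager" compacting: the data block may move, but the
   application keeps using the same Handle value */
static void mm_move(Handle h, size_t n) {
    void *moved = malloc(n);
    memcpy(moved, *h, n);
    free(*h);
    *h = moved;                     /* only the master pointer changes */
}

int main(void) {
    Handle h = mm_new(6);
    strcpy(*h, "hello");
    mm_move(h, 6);                  /* the data now lives elsewhere */
    printf("%s\n", (char *)*h);     /* still reachable through the handle */
    free(*h);
    free(h);
    return 0;
}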



When one gets down to the roots, everything is just an array of
same-sized items (8, 16, 32, 64, 36, 60 bits depending on era and hardware)
accessed by an index starting at 0. All else is structure added on top
of that.
 

John Nagle

Standish P said:
[Q] How far can a stack [LIFO] go in doing automatic garbage collection and
preventing memory leaks?
Because a stack has push and pop, it is able to release and allocate
memory. We envisage an exogenous stack which has malloc() associated
with a push and free() associated with a pop.

See

How many programmers have applied the ideas of these papers in their
programming practice? I paste the abstract for convenience.

Abstract:
This paper describes a memory management discipline for programs that
perform dynamic memory allocation and de-allocation. At runtime, all
values are put into regions. The store consists of a stack of regions.
All points of region allocation and deallocation are inferred
automatically, using a type and effect based program analysis. The
scheme does not assume the presence of a garbage collector.

That's actually an interesting idea. If you can figure out object
lifetimes at compile time, allocation can be made far more efficient.

One of the basic questions is whether a function will ever "keep"
an object. That is, will the function ever keep a reference to an
object that outlives the return from the function? In many cases,
one can easily determine at compile time that a function will never
keep a passed object. In such a case, you don't have to do reference
count updates on the object. Most math functions have this property,
even vector and matrix math functions - they have no persistent state.
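
A sketch of that "keep"/"non-keep" distinction in C terms (names here
are illustrative only): norm() never retains a reference to its
argument, so a caller can pass a stack-allocated vector and reclaim it
for free at block exit, while keep_vec() stores the pointer in global
state, so its argument has to out-live the call.

#include <math.h>
#include <stdlib.h>

struct vec { double x, y, z; };

/* "non-keep": no reference to v survives the return */
double norm(const struct vec *v) {
    return sqrt(v->x * v->x + v->y * v->y + v->z * v->z);
}

static struct vec *registry[16];
static int registered;

/* "keep": the pointer escapes into global state */
void keep_vec(struct vec *v) {
    if (registered < 16)
        registry[registered++] = v;
}

double example(void) {
    struct vec a = {3, 4, 0};          /* stack allocation is safe: norm() keeps nothing */
    double d = norm(&a);

    struct vec *b = malloc(sizeof *b); /* must out-live the call: keep_vec() keeps it */
    if (b) { *b = a; keep_vec(b); }
    return d;
}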

Python could use a bit of this. If a function argument can
be identified as "non-kept", then the function doesn't need to
do reference count updates on it. If a local variable in
a function is used only by "non-keep" functions and operations,
it can be created on the stack and released cheaply at block exit.

One can go much further in lifetime inference than this, as
the papers demonstrate. There's a big win in the simple optimization
of identifying "non-keep" parameters, especially in mathematical work
where they're very common. It's not clear that getting fancier than
that is a win.
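
As a very crude C sketch of the region idea from the abstract quoted
above (a hand-written illustration only; the real scheme infers region
placement automatically from the program text, and all names here are
made up): a stack of regions, where allocation goes into the topmost
region and popping a region frees everything allocated in it at once.

#include <stdlib.h>

struct chunk { struct chunk *next; /* payload follows the header */ };

struct region {
    struct chunk  *chunks;   /* every allocation made in this region */
    struct region *below;    /* next region down the region stack */
};

static struct region *region_stack;

void region_push(void) {
    struct region *r = malloc(sizeof *r);   /* error checks omitted */
    r->chunks = NULL;
    r->below = region_stack;
    region_stack = r;
}

/* allocate n bytes in the topmost region */
void *region_alloc(size_t n) {
    struct chunk *c = malloc(sizeof *c + n);
    c->next = region_stack->chunks;
    region_stack->chunks = c;
    return c + 1;
}

/* pop the topmost region, freeing everything allocated in it at once */
void region_pop(void) {
    struct region *r = region_stack;
    region_stack = r->below;
    while (r->chunks) {
        struct chunk *c = r->chunks;
        r->chunks = c->next;
        free(c);
    }
    free(r);
}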

Does Shed Skin have this optimization? It should.

John Nagle
 

Torben Ægidius Mogensen

Standish P said:
How many programmers have applied the ideas of these papers in their
programming practice?

The method was built into a SML compiler which was used (among other
things) to build a web server, which was used (among other things) to
make the web pages and course administrative system at the IT University
of Copenhagen, Denmark.

Good enough for you?

Torben
 

Nick Keighley

How are these heaps being implemented? Is there some illustrative
code, or a book showing how to implement these heaps in C, for example?

Any book of algorithms, I'd have thought.

http://en.wikipedia.org/wiki/Dynamic_memory_allocation
http://www.flounder.com/inside_storage_allocation.htm

I've no idea how good either of these is.
Are the dictionaries of Forth and PostScript themselves stacks, if we
consider them as nested two-column tables, which Lisp's lists are in
essence, but with only a single row? Multiple rows would just be
multiple instances of it at the same level inside parens.

I can't make much sense of that. But you seem to see Lisp data
structures in all sorts of strange places. I don't see that Lisp lists
are "nested two-column tables".
We can peek into stacks, which is like car.
No.


If it is not an unusually costly computation, why not allow it? There
is no need to restrict ourselves to push and pop.

Some stacks have a top() operation.

roll( stack_name, num )

itself can give all those postfix permutations that push and pop can't
generate with a single stack. Can we use dictionaries to generate
multiple stacks inside one global stack?

I've no idea what you're on about.
 

Nick Keighley

On 8/17/10 10:19 AM, Standish P wrote

He's saying a Forth dictionary isn't a Lisp s-exp. Well, it isn't.
Not at all.  A Forth dictionary is a simple linked list, not the
complicated kind of nested structures you're referring to.  You really
seem addicted to very complex structures.

I thought he had the opposite problem! I thought he was trying to
knock in all his programming nails with the same stack-based hammer.

 They really aren't necessary for general programming.

whatever *that* is
 

spinoza1111

Is this all that a heap is, or is there more to it? I have been
looking for a simple but complete explanation of the heap for a while
and not gotten to it. I think I am looking for a stack allocation on
the same pattern. On a disk, a file is fragmented into many contiguous
blocks and is accessed automatically.

This you might want to take to the Forth people, because they are
marketing their language as a cure for all that plagues programming
today.

No, they're not. Stack-based languages have seen better days, and Forth
(and the SL/1 language I supported with compilers at Bell-Northern
Research) was last in fashion in the 1970s. Processors could seldom
multitask, so it wasn't recognized that the stack could be a
performance bottleneck, where stack operations cannot be pipelined or
executed in parallel.

John Hennessy of Stanford and MIPS made the stack-must-die case at ACM
ASPLOS in 1987. Niklaus Wirth was also at this conference (at which I
was a fly on the wall), maintaining that the stack was good for the
reliability and verifiability of software.

Forth had a snowball's chance because it forces ordinary programmers
to think in Reverse Polish notation and is, for the above reasons, hard
to pipeline, although of course it can be pipelined.
was wrong, and needs to be brought up to date:
You cannot do everything in a stack
Unless you code an almighty hack
If you're a coding Knight who says, "Neep",
You'll probably need to implement a heap
A pile a heap of benefits you'll reap
If only my advice in your brain you'll keep
And avoid memory leaks from which data doth seep
By using a well-implemented, well structured, and well-documented
Heap!
[Chorus of Sailors]
We will to heart your advice take, and always use a heap!
[Soloist]
Oh thank you do
To this be true
And always my sage advice do keep
That you always need to use a heap!
 

Alex McDonald

No, they're not.

That I agree with.
Stack-based languages have seen better days, and Forth
(and the SL/1 language I supported with compilers at Bell-Northern
Research) was last in fashion in the 1970s. Processors could seldom
multitask, so it wasn't recognized that the stack could be a
performance bottleneck, where stack operations cannot be pipelined or
executed in parallel.

John Hennessy of Stanford and MIPS made the stack-must-die case at ACM
ASPLOS in 1987. Niklaus Wirth was also at this conference (at which I
was a fly on the wall), maintaining that the stack was good for the
reliability and verifiability of software.

Forth had a snowball's chance because it forces ordinary programmers
to think in Reverse Polish notation and is, for the above reasons, hard
to pipeline, although of course it can be pipelined.

I really don't understand much of what you're saying here; Forth can
be implemented on processors that have several hardware-assisted
stacks, one stack, or even no stack at all. Multitasking? Why's that a
problem? And why is it hard to pipeline? Are you thinking of a
specific processor?
 

Paul Rubin

Elizabeth D Rather said:
Lol. Forth supported multitasking on every processor it was
implemented on in the 70's, with blazing speed compared to competitive
techniques. I have never seen stack operations to be a bottleneck.

I think "multitasking" in that post refers to superscalar execution,
which wasn't done in the 1970's except on supercomputers. That the
stack is a bottleneck is the precise reason that optimizing Forth
compilers do complicated flow analysis to translate stack operations
into register operations.
 

John Nagle

I think "multitasking" in that post refers to superscalar execution,
which wasn't done in the 1970's except on supercomputers. That the
stack is a bottleneck is the precise reason that optimizing Forth
compilers do complicated flow analysis to translate stack operations
into register operations.

Some small FORTH machines had dedicated stack hardware. On each
CPU cycle, the CPU could do one stack access, one main memory access,
and one return stack access. This was before cacheing; those CPUs
were slow relative to their memory, so a non-cached
one-instruction-per-clock machine made sense.

In the superscalar era, there's not much of an advantage to avoiding
stack accesses. x86 superscalar machines have many registers not
visible to the program, as the fastest level of cache. In practice,
the top of the stack is usually in CPU registers. The "huge number
of programmer-visible registers" machines like SPARCs turned out to be
a dead end. So did making all the instructions the same width; it
makes the CPU simpler, but not faster, and it bulks up the program
by 2x or so.

John Nagle
 

Standish P

Stacks (at least as far as Forth uses them) and heaps are fundamentally
different things.

...


In Forth, they go in "data space", which might or might not be in the
dictionary, and is almost never in a dynamically managed heap; certainly
not on a stack.
...




Lol.  Forth supported multitasking on every processor it was implemented
on in the 70's, with blazing speed compared to competitive techniques.
I have never seen stack operations to be a bottleneck.


Mostly it had a "snowball's chance" because it was never picked up by
the CS gurus who, AFAIK, never really took a serious look at it.

It's quite possible that the criticism is unfair, but don't you think
that some responsibility must in part be borne by your organization for
not doing a good job of education? I have looked at this book you
authored in the past few weeks and found a link for your convenience
now. It is entitled Advanced .....

http://www.amazon.com/Forth-Applica...books&qid=1282175842&sr=8-1#reader_1419685767

Show me on what page it explains how Forth implements dynamic
binding or lexical binding and takes care of the scope of definition
of the "nouns"?

Provide me with a link, if you kindly would, that can take me to a
tutorial on Forth internals or that discusses this issue.
Cheers,
Elizabeth

She is quite humble. Take a look at this page,

http://www.forth.com/resources/evolution/index.html

She is currently number one in the Forth world, and if there were a
Nobel prize in Forth, it would go to these three.


Authors

Elizabeth D. Rather
FORTH, Inc.
5959 W. Century Blvd.
Suite 700
Los Angeles, CA 90045

Elizabeth Rather is the co-founder of FORTH, Inc. and is a leading
expert in the Forth programming language. Elizabeth was a colleague of
Chuck Moore back when he worked at NRAO in the early 1970s. During his
development of Forth, she became the second ever Forth programmer.
Since then, she has become a leading expert in the language and one of
its main proponents. Elizabeth was the chair of the ANSI Technical
Committee that produced the ANSI Standard for Forth (1994). She is an
author of several books on Forth and gives regular training seminars
on its usage.

Donald R. Colburn

c/o Digital Media Magic
14712 Westbury Rd.
Rockville, MD 20853

Don Colburn was one of the earliest Forth users. He was one of the
founders of the Forth Interest Group, and contributed to the
development of the first public-domain figForth. Subsequently, he
founded Creative Solutions, Inc. (CSI), which introduced MacForth™ in
1984. MacForth was the first programming language capable of running
on the Macintosh when it was first introduced. Don was a member of
the ANSI Technical Committee that produced the ANSI Standard for Forth
(1994). He died in 2009.

Charles H. Moore
Computer Cowboys
40 Cedar Lane
P.O. Box 127
Sierra City, CA 96125

Chuck Moore is Chairman and CTO of Green Arrays, Inc. He co-founded
FORTH, Inc., in 1971 and went on to develop a Forth-based chip
(RTX2000) in the mid 1980s, derivatives of which are still being used
widely by NASA. At Computer Cowboys, Mr. Moore designed the Sh-Boom
microprocessor and then co-founded iTv, an Internet Appliance
manufacturer. During the 1990s, he used his own CAD software to design
several custom VLSI chips, including the F21 processor with a network
interface. More recently, he invented colorForth and ported his VLSI
design tools to it. Moore served as CTO for IntellaSys during
development of the S40 multi-computer chip.
 

Standish P

You asked if Forth "borrowed" lists from Lisp. It did not. In Lisp,
lists are constructed from a pair of pointers called a "cons cell".
That is the most primitive component that makes up a list. Forth has
no such thing; in Forth, the dictionary (which is traditionally, but
not necessarily, a list) is a data structure in which each word links
to the previously defined word with a pointer.

Would you show me a picture, ASCII art or whatever, for Forth? I know
what Lisp lists look like, so I don't need that for comparison. Forth
must have a convention and a standard or preferred practice for its
dicts. However, let me tell you that in PostScript dictionaries can
be nested inside other dictionaries, and any such hierarchical
structure is a nested associative list, which is what linked lists,
nested dictionaries and nested tables are.
 

Keith Thompson

Standish P said:
Mostly it had a "snowball's chance" because it was never picked up by
the CS gurus who, AFAIK, never really took a serious look at it.

It's quite possible that the criticism is unfair, but don't you think
that some responsibility must in part be borne by your organization for
not doing a good job of education? [snip]
Show me on what page it explains how Forth implements dynamic
binding or lexical binding and takes care of the scope of definition
of the "nouns"?
[...]

Show me how this is relevant to comp.lang.c, comp.lang.c++, comp.theory,
or comp.lang.python. Please trim the Newsgroups line.
 

Standish P

It's quite possible that the criticism is unfair, but don't you think
that some responsibility must in part be borne by your organization for
not doing a good job of education? [snip]
Show me on what page it explains how Forth implements dynamic
binding or lexical binding and takes care of the scope of definition
of the "nouns"?
[...]

Show me how this is relevant to comp.lang.c, comp.lang.c++, comp.theory,
or comp.lang.python.  Please trim the Newsgroups line.

Provide a rigorous proof that people are more interested in the
nauseating nude pictures that you post of your mother in the
newsgroups than in the subject of Forth implementation. As a matter of
fact a lot of people in various language groups are interested in
implementation aspects and the languages borrow ideas from each other.

Now, get away from my thread and take away your odious posts, which
positively cause me nausea and vomiting.

We will soon find out the game of stacks.
 

John Kelly

fact a lot of people in various language groups are interested in
implementation aspects and the languages borrow ideas from each other.

Standish sounds like Spinoza in disguise. Nevertheless, ngs like c.l.c
need more diversity. Standards minutiae are boring.
 
