[Q] How far can a stack [LIFO] go toward automatic garbage collection and preventing memory leaks?


Alex McDonald

Hugh said:
[SNIP ;]
The real problem here is that C, Forth and C++ lack automatic garbage
collection. If I have a program in which I have to worry about memory
leaks (as described above), I would be better off to ignore C, Forth
and C++ and just use a language that supports garbage collection. Why
should I waste my time carefully freeing up heap space? I will very
likely not find everything but yet have a few memory leaks anyway.
IOW Hugh has surpassed GIGO to achieve AGG -
*A*utomatic *G*arbage *G*eneration ;)
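
For anyone reading along from the C side, here is a minimal sketch of the kind of leak being argued about -- the function and the loop are made up purely for illustration. In C the explicit free() is the programmer's job; a garbage collector would reclaim the unreachable copies on its own:

#include <stdlib.h>
#include <string.h>

/* Allocate a copy of a string on the heap.  The name and the loop
   below are invented for illustration only. */
char *copy_string(const char *s)
{
    char *p = malloc(strlen(s) + 1);
    if (p != NULL)
        strcpy(p, s);
    return p;
}

int main(void)
{
    for (int i = 0; i < 1000; i++) {
        char *p = copy_string("some symbol");
        /* ... use p ... */
        free(p);  /* leave this out and the loop leaks 1000 allocations;
                     a garbage collector would reclaim them for you */
    }
    return 0;
}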

The C programmers reading this are likely wondering why I'm being
attacked. The reason is that Elizabeth Rather has made it clear to
everybody that this is what she wants: http://groups.google.com/group/comp.lang.forth/browse_thread/thread/c...

Every Forth programmer who aspires to get a job at Forth Inc. is
obliged to attack me. Attacking my software that I posted on the FIG
site is preferred, but personal attacks work too. It is a loyalty
test.

Complete bollox. A pox on your persecution fantasies.

This isn't about Elizabeth Rather or Forth Inc. It's about your
massive ego and blind ignorance. Your example of writing code with
memory leaks *and not caring because it's a waste of your time* makes
me think that you've never been a programmer of any sort. Ever.

In a commercial environment, your slide rule code would be rejected
during unit testing, and you'd be fired and your code sent to the bit
bucket.

This isn't about CS BS; this is about making sure that bank accounts
square, that planes fly, that nuclear reactors stay sub-critical; that
applications can run 24 by 7, 365 days a year without requiring any
human attention.

So who designs and writes compilers for fail-safe systems? Who designs
and writes operating systems that will run for years, non-stop? Where
do they get the assurance that what they're writing is correct -- and
provably so? From people that do research, hard math, have degrees,
and design algorithms and develop all those other abstract ideas you
seem so keen to reject as high-falutin' nonsense.

I'd rather poke myself in the eye than run any of the crap you've
written.
 

Anton Ertl

Alex McDonald said:
Your example of writing code with
memory leaks *and not caring because it's a waste of your time* makes
me think that you've never been a programmer of any sort. Ever.

Well, I find his approach towards memory leaks as described in
<779b992b-7199-4126-bf3a-7ec40ea801a6@j18g2000yqd.googlegroups.com>
quite sensible, use something like that myself, and recommend it to
others.

Followups set to c.l.f (adjust as appropriate).

- anton
 

Nick Keighley

My library is currently inaccessible. Normally I'd have picked up
Sedgewick and seen what he had to say on the subject. And possibly
Knuth (though that requires taking more of a deep breath).

Presumably Plauger's library book includes an implementation of
malloc()/free() so that might be a place to start.

Serves me right for not checking
:-(
The Wikipedia page is worthless.

Odd really, you'd think basic computer science wasn't that hard...
I found even Wikipedia's description of a stack confusing and heavily
biased towards implementation.
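
For what it's worth, since the thread title asks about stacks: below is a minimal LIFO stack in C (the size and names are arbitrary). Values are reclaimed automatically, in reverse order of allocation, when they are popped -- which is about as far as pure stack discipline goes toward the OP's question; anything whose lifetime doesn't nest still ends up on the heap, and that is where the leaks under discussion live.

#include <stdio.h>

#define STACK_MAX 64

/* Last in, first out: push puts a value on top, pop removes the most
   recently pushed one. */
typedef struct {
    int data[STACK_MAX];
    int top;               /* number of values currently on the stack */
} Stack;

int push(Stack *s, int value)
{
    if (s->top == STACK_MAX)
        return -1;                 /* overflow */
    s->data[s->top++] = value;
    return 0;
}

int pop(Stack *s, int *value)
{
    if (s->top == 0)
        return -1;                 /* underflow */
    *value = s->data[--s->top];
    return 0;
}

int main(void)
{
    Stack s = { {0}, 0 };
    int v;
    push(&s, 1);
    push(&s, 2);
    pop(&s, &v);
    printf("%d\n", v);             /* prints 2: last in, first out */
    return 0;
}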
 

John Passaniti

The C programmers reading this are likely wondering why I'm being
attacked. The reason is that Elizabeth Rather has made it clear to
everybody that this is what she wants: [http://tinyurl.com/2bjwp7q]

Hello to those outside of comp.lang.forth, where Hugh usually leaves
his slime trail. I seriously doubt many people will bother to read
the message thread Hugh references, but if you do, you'll get to
delight in the same nonsense Hugh has brought to comp.lang.forth.
Here's the compressed version:

1. Hugh references code ("symtab") that he wrote (in Factor) to
manage symbol tables.
2. I (and others) did some basic analysis and found it to be a poor
algorithm -- both in terms of memory use and performance -- especially
compared to the usual solutions (hash tables, splay trees, etc.; a
sketch of the hash-table approach follows this list).
3. I stated that symtab sucked for the intended application.
4. Hugh didn't like that I called his baby ugly and decided to expose
his bigotry.
5. Elizabeth Rather said she didn't appreciate Hugh's bigotry in the
newsgroup.
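
For readers who skipped the original thread, the "usual solution" mentioned in point 2 looks roughly like this -- a minimal chained hash table in C, offered only as a generic sketch; it is not Hugh's symtab and not what Factor actually ships:

#include <stdlib.h>
#include <string.h>

#define NBUCKETS 256

/* One symbol: a name, a value, and a link to the next entry that
   hashed to the same bucket. */
struct entry {
    char *name;
    int value;
    struct entry *next;
};

static struct entry *buckets[NBUCKETS];

static unsigned hash(const char *s)
{
    unsigned h = 5381;
    while (*s)
        h = h * 33 + (unsigned char)*s++;
    return h % NBUCKETS;
}

struct entry *lookup(const char *name)
{
    struct entry *e;
    for (e = buckets[hash(name)]; e != NULL; e = e->next)
        if (strcmp(e->name, name) == 0)
            return e;
    return NULL;
}

struct entry *insert(const char *name, int value)
{
    struct entry *e = lookup(name);
    if (e == NULL) {
        unsigned h = hash(name);
        e = malloc(sizeof *e);
        if (e == NULL)
            return NULL;
        e->name = malloc(strlen(name) + 1);
        if (e->name == NULL) {
            free(e);
            return NULL;
        }
        strcpy(e->name, name);
        e->next = buckets[h];
        buckets[h] = e;
    }
    e->value = value;
    return e;
}

Lookup and insert are constant time on average, and memory overhead is one small node per symbol -- the baseline symtab was being compared against.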

Yep, that's it. What Hugh is banking on is that you won't read the
message thread, and that you'll blindly accept that Elizabeth is some
terrible ogre with a vendetta against Hugh. The humor here is that
Hugh himself provides a URL that disproves that! So yes, if you care,
do read the message thread. It won't take long for you to get a clear
impression of Hugh's character.
 

John Passaniti

What about using what I learned to write programs that work?
Does that count for anything?

It obviously counts, but it's not the only thing that matters. Where
I'm employed, I am currently managing a set of code that "works" but
the quality of that code is poor. The previous programmer suffered
from a bad case of cut-and-paste programming mixed with an
unsophisticated use of the language. The result is that this code
that "works" is a maintenance nightmare, has poor performance, wastes
memory, and is very brittle. The high level of coupling in the code
means that when you change virtually anything, it invariably breaks
something else.

And then you have the issue of the programmer thinking the code
"works" but it doesn't actually meet the needs of the customer. The
same code I'm talking about has a feature where you can pass a message
over the network and have the value you pass configure a parameter.
It "works" fine, but it's not what the customer wants. The customer
wants to be able to bump the value up and down, not set it to an
absolute value. So does the code "work"? Depends on the definition
of "work."

In my experience, there is a class of software developers who care
only that their code "works" (or more likely, *appears* to work) and
think that is the gold standard. It's an attitude that's easy for
hobbyists to take, but not one that serious professionals can afford
to have. A hobbyist can freely spend hours hacking away and having a
grand time writing code. Professionals are paid for their efforts,
and that means that *someone* is spending both time and money on the
effort. A professional who cares only about slamming out code that
"works" is invariably merely moving the cost of maintaining and
extending the code to someone else. It becomes a hidden cost, but why
do they care... it isn't here and now, and probably won't be their
problem.

If I don't have a professor to pat me on the back, will my
programs stop working?

What a low bar you set for yourself. Does efficiency, clarity,
maintainability, extensibility, and elegance not matter to you?
 

Joshua Maurice

It obviously counts, but it's not the only thing that matters.  Where
I'm employed, I am currently managing a set of code that "works" but
the quality of that code is poor. [SNIP]

A professional who cares only about slamming out code that
"works" is invariably merely moving the cost of maintaining and
extending the code to someone else.

I agree. Sadly, with managers, especially non-technical managers, it's
hard to make this case when the weasel guy says "See! It's working.".
 

John Bokma

John Passaniti said:
The C programmers reading this are likely wondering why I'm being
attacked. The reason is that Elizabeth Rather has made it clear to
everybody that this is what she wants: [http://tinyurl.com/2bjwp7q]

Hello to those outside of comp.lang.forth, where Hugh usually leaves
his slime trail. I seriously doubt many people will bother to read
the message thread Hugh references, but if you do, you'll get to
delight in the same nonsense Hugh has brought to comp.lang.forth.
Here's the compressed version:

I did :). I have somewhat followed Forth from a far, far distance since
the '80s (including hardware), and did read several messages in the
thread, not least because it was not clear what Hugh was referring to.
 

John Passaniti

I agree. Sadly, with managers, especially non-technical
managers, it's hard to make this case when the weasel
guy says "See! It's working.".

Actually, it's not that hard. The key to communicating the true cost
of software development to non-technical managers (and even some
technical ones!) is to express the cost in terms of a metaphor they
can understand. Non-technical managers may not understand the
technology or details of software development, but they can probably
understand money. So finding a metaphor along those lines can help
them to understand.

http://c2.com/cgi/wiki?WardExplainsDebtMetaphor

I've found that explaining the need to improve design and code quality
in terms of a debt metaphor usually helps non-technical managers have
a very real, very concrete understanding of the problem. For example,
telling a non-technical manager that a piece of code is poorly written
and needs to be refactored may not resonate with them. To them, the
code "works" and isn't that the only thing that matters? But put in
terms of a debt metaphor, it becomes easier for them to see the
problem.
 

Joshua Maurice

Actually, it's not that hard.  The key to communicating the true cost
of software development to non-technical managers (and even some
technical ones!) is to express the cost in terms of a metaphor they
can understand. [SNIP]

I've found that explaining the need to improve design and code quality
in terms of a debt metaphor usually helps non-technical managers have
a very real, very concrete understanding of the problem.

But then it becomes a game of "How bad is this code exactly?" and "How
much technical debt have we accrued?". At least in my company's
culture, it is quite hard.
 

Dennis Lee Bieber

It obviously counts, but it's not the only thing that matters. Where
I'm employed, I am currently managing a set of code that "works" but
the quality of that code is poor. The previous programmer suffered
from a bad case of cut-and-paste programming mixed with an
unsophisticated use of the language. [SNIP]

<ack> And I thought such "programmers" were rare...

I had one about 15 years ago who was a cut&paste type... cutting
from OTHERS' code -- so she had no idea what the code even did
operationally.

In this case it was code to access a bank of GPIB/HPIB devices...
This person wrote one program per device in the bank. Problem: Under the
VMS drivers, each program had to do a bus init/reset to connect... That
meant that each program after the first in the set-up script ended up
UNDOING the set-up of the previous program!

I had to redo the suite into a single consolidated program once we
got to the field site that had the hardware. (Yes, we were handicapped
by having to code from spec with no actual hardware until delivery to
the customer <G>)

I'm pretty sure this person is not reading this group -- she retired
and/or quit a few years later, after the second phase fiasco:

The above was a proof-of-concept; the follow-on was to fully
automate the concept. The assignment was split between "control" and
"post-processing" -- she had "control" and I had "post-processing". Two
weeks before CDR she hadn't produced a single concept of how this
control system would function... I essentially architected the entire
system in a two week span! (I'll admit, in hindsight, I could have done
a better job -- I attempted to use VMS asynchronous service traps to
emulate parallel processing and avoid starting three or four disjoint
processes; what I needed was what is now called "threads", but there was no
direct equivalent in VMS FORTRAN 77; but since ASTs are triggered code
[interrupt handlers would be the simple way to describe them] I had to
sort of invert the logic of the application so the main program didn't
do much). After the CDR, she claimed she understood the design and could
code it... <sigh> With the delivery date approaching we had no code from
her that was usable -- we drafted a third person to assist, and I
practically wrote all the code too.
 

Brad

Your example of writing code with
memory leaks *and not caring because it's a waste of your time* makes
me think that you've never been a programmer of any sort.

"Windows applications are immune from memory leaks since programmers
can count on regular crashes to automatically release previously
allocated RAM."
 

Navkirat Singh

"Windows applications are immune from memory leaks since programmers
can count on regular crashes to automatically release previously
allocated RAM."


Sorry if I sound rude, but I have to do this about the Windows applications comment - hahahahaha
 
