Linked lists


Arne Vajhøj

Christian said:
A nice example of a functional language in use is Windows: larger and
larger parts of that OS are written in F#

Considering that you can run Windows without .NET, those parts can not
be that big.

And as far as I know, the biggest .NET parts are .NET itself and the
tools. And they are not written in F#.

Arne
 

Tom Anderson

If it is good, then it will eventually show up in compilers.

Oh, indeed. But he, his supervisor, and his examiners, all computer
scientists of the 'high church' kind, didn't feel that such an implementation
was a necessary part of his work.

And as much as he enjoyed his PhD, he's now hacking user interaction
technology on GNOME in Objective-C, so i think his urge to implement has
finally won out!
Universities are supposed to do basic research, not create products.

Of course. But to my mind, implementation is an essential part of
invention. I wouldn't demand that anyone produce a production-quality
compiler as part of their thesis, but actually implementing the algorithm
they've invented would seem to my naive brain to be a necessity. "Real
artists ship", as Steve Jobs said.
But you do not decide whether to use ArrayList, LinkedList or HashMap
by rolling a die.

You rely on the fact that someone has analyzed their big-O characteristics.

And you do not make your database structures randomly either.

You rely on the fact that someone has invented relational algebra.

You may not be using Math directly, but you are standing
on the shoulders of a lot of mathematicians.

Absolutely! I don't dispute that for a second. Cryptography and random
number generation are other examples of this - crypto involves some
serious mathematical rocket science, and it has to be done absolutely
right for it to do its job. But i don't need to understand it to use it,
any more than i need to understand how a transistor works to program a
computer. You need an appreciation of what sort of thing is going on
behind the scenes, and its implications for you - for instance, you need
to know what O(1) and O(n) are, and which applies to the various
operations of LinkedList and ArrayList. But you don't need to be able to
analyse a sorting algorithm and prove that it's O(n log n). You do need
to be able to do a rough kind of big-O analysis if you're doing
performance tuning, so you can work out if your code has any hope of going
fast, but formal proofs are not generally needed.
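
To make that concrete, here's a rough sketch of the kind of
back-of-the-envelope reasoning i mean, using nothing beyond the standard
java.util collections (a toy illustration of mine, not a benchmark):

  import java.util.ArrayList;
  import java.util.LinkedList;
  import java.util.List;

  public class RoughBigO {
      public static void main(String[] args) {
          List<Integer> array = new ArrayList<Integer>();
          LinkedList<Integer> linked = new LinkedList<Integer>();
          for (int i = 0; i < 100000; i++) {
              array.add(i);   // amortised O(1) append
              linked.add(i);  // O(1) append (tail pointer)
          }

          // Indexed access: O(1) on ArrayList, O(n) on LinkedList,
          // because the linked list has to walk its chain of nodes.
          array.get(50000);
          linked.get(50000);

          // Insertion at the front: O(n) on ArrayList (every element
          // shifts), O(1) on LinkedList (just relink the head node).
          array.add(0, -1);
          linked.addFirst(-1);
      }
  }

No formal proof needed - just knowing which operations walk the whole
structure and which don't is enough to pick the right class.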
There are many years of experience showing that sciences other than
math and CS are a good background for software development.

Exactly.

tom
 

Arne Vajhøj

Tom said:
Of course. But to my mind, implementation is an essential part of
invention. I wouldn't demand that anyone produce a production-quality
compiler as part of their thesis, but actually implementing the
algorithm they've invented would seem to my naive brain to be a
necessity.

No.

It is not unusual in science not to be able to test something.
Absolutely! I don't dispute that for a second. Cryptography and random
number generation are other examples of this - crypto involves some
serious mathematical rocket science, and it has to be done absolutely
right for it to do its job. But i don't need to understand it to use it,
any more than i need to understand how a transistor works to program a
computer.

I would claim that those who have a basic understanding of this stuff
generally write better code that uses it than those who have to treat
it as a black box.
You need an appreciation of what sort of thing is going on
behind the scenes, and its implications for you - for instance, you need
to know what O(1) and O(n) are, and which applies to the various
operations of LinkedList and ArrayList. But you don't need to be able to
analyse a sorting algorithm and prove that it's O(n log n). You do
need to be able to do a rough kind of big-O analysis if you're doing
performance tuning, so you can work out if your code has any hope of
going fast, but formal proofs are not generally needed.

True.

But it is a lot easier to do an informal analysis if one has received
training and done formal analysis a couple of times.

Arne
 

Tom Anderson

I don't buy that. A bad programmer can write COBOL in any language,
including Haskell. However, the kind of people who learn Haskell tend to
be the kind of people who aren't going to do that, hence there's probably
less hackish Haskell than there is hackish java, C, perl, or even LISP.

That's an interesting article, but i think it's total crap (you will be
AMAZED to hear!). He's essentially saying that there's nothing difficult
in java, that difficult concepts are somehow restricted to C (pointers)
and ML/Haskell (functional wizardry). This is trivially untrue: all of us
face difficult problems in java every day of our working lives; it
wouldn't be called work if we didn't. It's just that the problems are not
at the level of pointers and bit-twiddling as they are in C, or formal
fancy footwork as they are in the egghead functional languages. Our daily
problems are mostly about small-scale design [1] - how you break a
required function down into concepts and behaviours, or collaborations and
responsibilities between and of classes. If he wants to ask his
interviewees tough questions to weed out the thick or incapable ones, he
can ask them about that. And since that's what they'll actually be doing
for a living, it might even serve him better than asking them circus
questions like whether they can recite the S combinator backwards.

tom

[1] And, it has to be said, doing your work in the face of crappy code
written by other people (i use a Heraclitean definition of 'other people'
here!).
 

Tom Anderson

No.

It is not unusual in science not to be able to test something.

?

You must have studied a very different kind of science to me.
I would claim that those who have a basic understanding of this stuff
generally write better code that uses it than those who have to treat
it as a black box.

I would agree. There's a principle which my dad calls 'one level down' -
you need to understand how something works one level down from the level
at which you're using it. It doesn't need to be a detailed understanding,
but it needs to be enough to have an idea of how (and whether) the thing
is going to work in any given situation. My contention is that you don't
need to have a great knowledge of CS to get that level of understanding
for the things we use as working programmers.
True.

But it is a lot easier to do an informal analysis if one has received
training and done formal analysis a couple of times.

True.

tom
 

Tom Anderson

A great theoretical approach that rarely, if ever, works in workaday
programming. I have been promised many times the opportunity to
refactor software, then been forbidden to do so based on variants of,
"If it ain't broke, don't fix it."

I have learned that I must do it right the first time, because
management rarely permits, and then not willingly, the opportunity to
improve it.

I agree with you that following the "do it badly now, we can fix it later"
road will lead you into quagmires and infernal realms.

But that's not quite how i read blueparty's dictum; i thought he was
saying that when you don't have a lot of time, you choose the option that
you can actually get done, not the one that's best in some more general
sense. The point being that it's better to deliver something crufty than
to not deliver at all. "Real artists ship", as i've said elsewhere on this
group today. Of course, in reality, choosing the high road of Doing It
Right No Matter How Long It Takes doesn't mean you won't ship, it just
means you'll ship late, or have less time to do everything else that needs
to be done, but that's also a bad outcome.
I have also learned that it's no slower, and usually faster, to create
an optimal solution up front.

That's the crux. I certainly agree that sometimes it's worth spending a
bit longer on a task to build something that's going to be a better
foundation for tomorrow. But sometimes, that effort turns out not to be
necessary, or your idea of what was better turns out to have been wrong,
in which case the extra time is wasted. I have absolutely no idea of the
relative frequencies of these cases - clearly, you believe that the former
is more typical.

One of the axioms of XP is that you should always write the code that's
necessary to solve the problem immediately at hand, not some potential
future problem, which has some bearing on this situation. However, i don't
think any but the most fundamentalist XPist would argue that this means
you should do everything as a quick hack.
Agile shops are sort of an exception to this,

In theory!
but I have not had the privilege of working in those environments.

I am lucky enough to have that privilege, but even here, we have a
pragmatic (or 'lazy', if you prefer) approach to tidying up our mess. When
we come across something foetid in the codebase, rather than dropping
everything else to fix it, we tend to ask "is this hurting us?", and if it
isn't, just leave it as it is. There are a few specific things i can think
of in our current project that have been bugging me for months now (one
major duplication of functionality, one pointless inconsistency), but
which we still haven't got round to fixing, because they don't really
matter - or rather, they matter less than adding the next feature, or
fixing an actual bug.

Although the duplication (where there are two classes A and B which have
pretty similar behaviour) has meant that in another class which has to do
some stuff with a collection of As and Bs, there are separate but very
similar methods for doing that stuff, which means it actually has hurt us.
If i'd been writing the stuff-doing code, i would have taken the
opportunity to refactor, but i wasn't, and the guy who was doing it is the
guy who generated the A-B redundancy in the first place. I get the
impression he believes that the duplication is actually a Good Thing,
although we haven't really talked about why.
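
For what it's worth, the refactoring i keep meaning to do looks roughly
like this - A and B are our real class names, but everything else here
(the Common interface, the weight() method, the StuffDoer class) is
invented purely for illustration:

  import java.util.Collection;

  // Hypothetical sketch only: the real A and B look nothing like this.
  // The point is that pulling the shared behaviour behind one interface
  // lets the stuff-doing class have a single method instead of two
  // near-duplicates.
  interface Common {
      int weight();
  }

  class A implements Common {
      public int weight() { return 1; }
  }

  class B implements Common {
      public int weight() { return 2; }
  }

  class StuffDoer {
      // One method over the shared abstraction, rather than a
      // doStuffWithAs(Collection<A>) and a doStuffWithBs(Collection<B>)
      // that differ only in the element type.
      int doStuff(Collection<? extends Common> items) {
          int total = 0;
          for (Common item : items) {
              total += item.weight();
          }
          return total;
      }
  }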
In my career, when I have refactored I've had to do it surreptitiously, and
put up with getting in trouble for it. The only exception was a team that
was ignored by management for six months, and therefore was able to redesign
the architecture on its own. The result was a 200-fold (not 200%, 200-fold)
increase in productivity for adding features to the product, and a change
from unreliable results to completely reliable results.

Incidentally, this is in line with the theory of software project
management, as described in several books I've read on the subject.
The strangest thing about this to me is that both the theory of software
development and the theory of software project management support
behaviors that managers I've worked with will not usually trust.

This probably won't come as a surprise to anyone who's familiar with the
work of Scott Adams.
Theories like keeping an effective team together even during periods of
inactivity, or the tenets of agile programming. Despite the
anti-academic prejudice expressed at times in this thread, the theories
are actually more pragmatically useful than many of the practices of
less theoretically-inclined practitioners.

This line of argument has a slightly scarecrowish look to it; those of us
proposing the lining up and shooting of computer scientists (and perhaps
by extension, managementologists) are not proposing their replacement with
reason-averse buffoons of the breed all too often found in corner offices,
and thus the abandonment of any and all good ideas. Although that's
certainly a common outcome in real-world situations where intellectuals
have been purged - one of the few things i took away from my high school
history lessons!

tom
 

Martin Gregorie

Actually CMD in later Windows versions is rather powerful, but the syntax
makes assembler look structured and readable.
I was meaning the native (formerly COMMAND) shell. It would be hard to
make it weaker!

However, my main point was the style of programming that GUI programming
seems to encourage - a few large, monolithic, multi-purpose chunks as
opposed to the *nix style of a lot of small, single purpose commands.
Using these to create special-purpose pipelines seems like a good way of
sliding code reuse under an apprentice programmer's skin.
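
To put the same idea in this group's language, here is a rough Java
analogy of my own devising (the class and method names are made up, and
it isn't meant to be a faithful model of any real shell): small,
single-purpose steps, each usable on its own, chained into a
special-purpose pipeline.

  import java.util.function.Function;

  public class PipelineSketch {
      public static void main(String[] args) {
          // Three small, single-purpose "commands"...
          Function<String, String> trim = String::trim;
          Function<String, String> lower = s -> s.toLowerCase();
          Function<String, Integer> length = String::length;

          // ...composed into a special-purpose pipeline for this one job,
          // much as you would chain *nix filters with '|'.
          Function<String, Integer> pipeline =
                  trim.andThen(lower).andThen(length);

          System.out.println(pipeline.apply("  Hello World  ")); // prints 11
      }
  }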

I wasn't really picking on Windows, BTW. Just using it as the best-known
example of unwieldy compendium programs. VAX/VMS and Tandem's Guardian
were at least as bad - Guardian especially, since it seemed to have about
six command line utilities in total, each containing so many commands
that they all had their own internal command line processors. Scripting
gets nearly impossible in that environment.
That type of stuff is probably best learned by experience.
Sure, but it would still form a pretty useful part of courses. I may not
have done a CS course, but like everybody else in IT I've done a shed-
load of programming, database and design courses over the years, and I
don't recall any of them even mentioning that API or CLI design was an
important issue. Not even the ones that mentioned code reuse.
 

Arne Vajhøj

I was meaning the native (formerly COMMAND) shell.

COMMAND is a DOS/Win95/Win98/WinME thing.

Today Windows usually means NT/2000/XP/2003/Vista/2008.
However, my main point was the style of programming that GUI programming
seems to encourage - a few large, monolithic, multi-purpose chunks as
opposed to the *nix style of a lot of small, single purpose commands.

My XP has 366 .EXE files in C:\Windows\System32 - I do not recognize
the few large commands you describe.

Arne
 

Martin Gregorie

I have also learned that it's no slower, and usually faster, to create
an optimal solution up front.

I couldn't agree more. A case in point: in one organisation I worked in,
two similarly sized projects started at the same time. Both were on the
same mainframe and written in COBOL with an IDMSX database.

We spent a lot of time up front getting the data model right while
sorting out a user interface design and security model that would make
the data manipulation easy for the users. Then we designed a program
structure that would work well with the data model and the UI structure.
At this point we could draw and get approval for the screens and write a
few module specs to check that we had the APIs and control structure
right. Finally, we started coding.

Meanwhile the other project had hammered together a data model, drawn a
bunch of screens and started coding with little thought about how it
would all hang together. I know for certain that they didn't design or
implement a coherent system structure because they said "it wasn't
needed". Even the screens designed by different people were inconsistent
in the placement of standard user controls, etc.

By the time we had our first subsystem ready for user review, the other
lot were crowing that they had 80% of their system written and why were
we so far behind them. Then they started systems integration and hit snag
after snag. Meanwhile we knew we had a fully debugged systems structure,
complete with performance instrumentation and diagnostics, because
developing the first subsystem had validated our design. As a result we were
quietly dropping subsystem after subsystem into our structure and not
finding any surprises.

To cut a long story short our system was complete and in productive use,
and spreading its tentacles through the organization due to user demand,
for about a year before the other project managed to go live.
 

Arne Vajhøj

Tom said:
?

You must have studied a very different kind of science to me.

I probably did (my master's degree is in Economics).

But there are plenty of things in science that can not be directly
tested.

Let us take an example: theories about the beginning of the universe.
I would agree. There's a principle which my dad calls 'one level down' -
you need to understand how something works one level down from the level
at which you're using it. It doesn't need to be a detailed
understanding, but it needs to be enough to have an idea of how (and
whether) the thing is going to work in any given situation. My
contention is that you don't need to have a great knowledge of CS to get
that level of understanding for the things we use as working programmers.

My point is that you actually do have a great deal of CS knowledge, but
you are just not thinking about it as CS.

Arne
 

Arved Sandstrom

Tom Anderson wrote: [ SNIP ]
I would agree. There's a principle which my dad calls 'one level down'
- you need to understand how something works one level down from the
level at which you're using it. It doesn't need to be a detailed
understanding, but it needs to be enough to have an idea of how (and
whether) the thing is going to work in any given situation. My
contention is that you don't need to have a great knowledge of CS to
get that level of understanding for the things we use as working
programmers.

My point is that you actually do have a great deal of CS knowledge, but
you are just not thinking about it as CS.

Arne

Computer science and computer engineering both. I'd wager that the
majority of advances in software development come from working
programmers and designers and testers, not from academia.

There's also no shortage of CS academics who jump all over concepts that
appear in industry first, and mine them for papers. Some of this is
valuable, if it puts ad hoc things on a solid foundation, but a lot of it
is parasitism.

I'm not discounting computer science as a discipline that we all use, nor
am I criticizing the many very good computer scientists who have made
significant contributions to what we do. However, the year to year
contribution that computer scientists make is, I believe, overshadowed by
the contributions of actual industry practitioners (i.e. the logical, if
not legal, equivalent of engineers and technicians).

AHS
 

Lew

Tom said:
I agree with you that following the "do it badly now, we can fix it
later" road will lead you into quagmires and infernal realms.

But that's not quite how i read blueparty's dictum; i thought he was
saying that when you don't have a lot of time, you choose the option
that you can actually get done, not the one that's best in some more
general sense. The point being that it's better to deliver something
crufty than to not deliver at all. "Real artists ship", as i've said
elsewhere on this group today. Of course, in reality, choosing the high
road of Doing It Right No Matter How Long It Takes doesn't mean you
won't ship, it just means you'll ship late, or have less time to do
everything else that needs to be done, but that's also a bad outcome.

And I agree with the points you made in your post, including the parts I
snipped. The key is in the definition of "do it right" - I don't endorse some
ivory-tower concept of theoretical perfection that demands ultimate design. I
also don't think it means that one refactors out all duplication of code or
"fixes" code that causes no damage. So when you say,
even here, we have a pragmatic (or 'lazy', if you prefer) approach
to tidying up our mess. When we come across something foetid in the
codebase, rather than dropping everything else to fix it, we tend
to ask "is this hurting us?", and if it isn't, just leave it as it is.

I am right with you. What I do mean by "doing it right" is to use the
cleanest approach one can conceive, with attention to big-O analysis,
programming to interfaces, and other such best practices, while meeting the
deadline. When it is possible to refactor *without* screwing with deadlines,
I like to do it incrementally and with responsibility for timeliness.

There is also the fact that some apparent "cruft" is there for a reason - it
handles some corner case or data anomaly or something that an apparently more
elegant approach would get wrong. Careless refactoring can cause incorrect
behavior.
 

blue indigo

There's also no shortage of CS academics who jump all over concepts that
appear in industry first, and mine them for papers. Some of this is
valuable, if it puts ad hoc things on a solid foundation, but a lot of it
is parasitism.

Actually, the technical term is "commensalism". It's only parasitism if it
harms the host.

(I minored in biology.)
 

Martin Gregorie

But it's thinking you only learn to do by doing.

I'm reminded of the friend of mine who did a PhD on a new way of
representing programs in compilers; he spent three years coming up with
it and proving it was right, and after he submitted his thesis, he
mentioned that it was a shame that he'd never got round to actually
implementing it. I don't mean to belittle his work at all (from what i
understood of it, it was very clever), but the fact is that he has no
real idea if it's a good basis for a compiler, because he hasn't tried
it.
What did he write his algorithm in? If he used Z notation there might be
some justification for his claim.
 

Arne Vajhøj

Arved said:
Tom Anderson wrote: [ SNIP ]
I would agree. There's a principle which my dad calls 'one level down'
- you need to understand how something works one level down from the
level at which you're using it. It doesn't need to be a detailed
understanding, but it needs to be enough to have an idea of how (and
whether) the thing is going to work in any given situation. My
contention is that you don't need to have a great knowledge of CS to
get that level of understanding for the things we use as working
programmers.
My point is that you actually do have a great deal of CS knowledge, but
you are just not thinking about it as CS.

Computer science and computer engineering both. I'd wager that the
majority of advances in software development come from working
programmers and designers and testers, not from academia.

That depends on how you count.

The industry produces tons of small enhancements and refinements.

But my guess would be that most of the inventions that really
revolutionize things come from academia.

Which is how it should be.

Big business is supposed to fund the research that makes money for
them in the next fiscal year.

Universities are supposed to come up with the ideas that can be
commercialized in 5-10-20-40 years.
There's also no shortage of CS academics who jump all over concepts that
appear in industry first, and mine them for papers. Some of this is
valuable, if it puts ad hoc things on a solid foundation, but a lot of it
is parasitism.

Even just explaining known stuff has a purpose.

How do you think students learn things? By only reading stuff that
won the author the Nobel prize (or, in this case, the Turing award)?

No. Besides inventing things, universities also have a role in
communicating them.

Arne
 

Martin Gregorie

Against that, I would hold that the really big things that changed the
internet came from academia.

Sun was founded in Stanford?
Google and its special algorithms were invented in academia. Companies
like Akamai were founded by theorists.

The big game-changing stuff came from academia, no matter if it's the
Google algorithm or Amazon's recommendation system.
Or scientific research establishments:

- the first stored computer program was run on Baby at Manchester
University
- Dr. Wang developed ferrite core RAM at Harvard.
- microcode came from MIT (the Whirlwind) and Cambridge (Maurice Wilkes)
- The first packet switch network was implemented at the National
Physical Laboratory at Teddington, UK. This work fed into ARPANET.
- The World Wide Web came from CERN.

OTOH some equally important stuff came from the industry:

- the punched card and paper tape were commercial inventions
- magnetic tape data storage was commercially developed at UNIVAC
- disk storage (floppy and hard disks) were invented at IBM,
but drums were a prewar invention, apparently in Austria.
- DRAM semiconductor memory was invented at IBM
- Grace Hopper wrote the first compiler at UNIVAC
- operating systems seem to have been a commercial development
- database development was commercial, IDS (the Codasyl DB basis)
by Charles Bachmann at GE, hierarchic (IMS) at IBM and Relational
by Ted Codd, also at IBM.
- C and UNIX were developed at the AT&T research labs.

So, it looks as if the honours are fairly evenly spread, but with the
advantage to the industry.
 

Lew

Or scientific research establishments:

- the first stored computer program was run on Baby at Manchester
  University
- Dr. Wang developed ferrite core RAM at Harvard.
- microcode came from MIT (the Whirlwind) and Cambridge (Maurice Wilkes)
- The first packet switch network was implemented at the National
  Physical Laboratory at Teddington, UK. This work fed into ARPANET.
- The World Wide Web came from CERN.

OTOH some equally important stuff came from the industry:

- the punched card and paper tape were commercial inventions
- magnetic tape data storage was commercially developed at UNIVAC
- disk storage (floppy and hard disks) were invented at IBM,
  but drums were a prewar invention, apparently in Austria.
- DRAM semiconductor memory was invented at IBM
- Grace Hopper wrote the first compiler at UNIVAC
- operating systems seem to have been a commercial development
- database development was commercial, IDS (the Codasyl DB basis)
  by Charles Bachmann at GE, hierarchic (IMS) at IBM and Relational
  by Ted Codd, also at IBM.
- C and UNIX were developed at the AT&T research labs.

So, it looks as if the honours are fairly evenly spread, but with the
advantage to the industry.

George Lucas's employees produced 40-60% of the papers in ACM's
SIGGRAPH for years during the 80s.

This can be seen as computer science or "real-world" programming in
that "real-world" programmers were publishing academic papers based on
"real-world" work, then their academic publications cycled back to
"real-world" practitioners.

Asking which is more significant is like asking whether the sperm or
the egg is more important to creating a baby.
 

Joshua Cranmer

Martin said:
OTOH some equally important stuff came from the industry:

I believe you omitted the transistor, from Bell Labs?
So, it looks as if the honours are fairly evenly spread, but with the
advantage to the industry.

It looks to me as if many of the industry inventions came from industrial
research labs, which isn't exactly the same as being invented by
"working programmers and designers and testers."
 

blueparty

Lew said:
A great theoretical approach that rarely, if ever, works in workaday
programming. I have been promised many times the opportunity to
refactor software, then been forbidden to do so based on variants of,
"If it ain't broke, don't fix it."

That is usually so, but often (sooner or later) an additional
requirement offers the opportunity to improve old things.

I must admit one thing: for the last couple of years I have been my own
boss, so, well, I have fewer problems with management :) But I do have
problems with customers... As someone said:

"Whil I worked for the company my boss was a bastard. Now I work for
myself. My boss is still a bastard, but, at least I respect him"

I have also learned that it's no slower, and usually faster, to create
an optimal solution up front.

Faster might sometimes mean less complicated, less likely to contain
bugs (because of simplicity) or easier to debug.


B
 
