Larry Wall & Cults


Rupert Pigott

John said:
You seem misinformed. Microsoft swallowed up a team from DEC. They
were developing an operating system called PRISM. When the project
was cancelled they quit DEC in protest. These people had more than
100 years of experience in developing multiuser/multitasking
operating systems between them. The fact that the NT kernel is not
entirely stable yet really shouldn't surprise anyone. After all,
Unix has messed with its kernel for 30 years. But the modular
architecture and the microkernel are new ideas in OS design and
should in time lead to a more extensible OS than Unix.

uKernels are *NOT* a new idea at all. They weren't a new idea when
NT was unleashed on the world. What people think of as "NT" is a big
pile of shite that obscures the uKernel. Since the graphics stuff
got put into ring 0, I think you could legitimately claim that
BSD Unix is more of a microkernel than NT. :)
(Unix traditionally has a spaghetti of intercalling function calls as a
kernel.)

Remember NeXTStep?
As for following standards, that's just plain sense.
Note that Mac OS X / Darwin uses a Unix kernel because of all the
interoperability problems OS 9 had talking to Windows and Unix boxes.

Which I believe is derived from a Mach uKernel... The "UNIX" bits
are the FreeBSD userland utilities that surround it.

Cheers,
Rupert
 

Anne & Lynn Wheeler

And everybody seems to think that those people never talked to each
other. Even boasting about whose is bigger, faster, and longer
would transmit new ideas among the bit setters.

some number were co-workers on ctss before some went to 5th floor and
multics and others went to science center on the 4th floor. north
side of 545 tech sq 1st floor had lunch room on the east side and
lounge on west side; besides running into people in the elevator
.... there were coffee breaks and lunch in the lunch room and after
work in the lounge.

melinda, on her site, has a historical write-up with some early ctss,
multics, cp/cms lore:
http://pucc.princeton.edu/~melinda/

an earlier version was posted in eight parts to vmshare computer
conferencing ... vmshare archive:
http://vm.marist.edu/~vmshare/browse?fn=VMHIST01&ft=NOTE
http://vm.marist.edu/~vmshare/browse?fn=VMHIST02&ft=NOTE
http://vm.marist.edu/~vmshare/browse?fn=VMHIST03&ft=NOTE
http://vm.marist.edu/~vmshare/browse?fn=VMHIST04&ft=NOTE
http://vm.marist.edu/~vmshare/browse?fn=VMHIST05&ft=NOTE
http://vm.marist.edu/~vmshare/browse?fn=VMHIST06&ft=NOTE
http://vm.marist.edu/~vmshare/browse?fn=VMHIST07&ft=NOTE
 

Karl A. Krueger

In comp.lang.lisp Rupert Pigott said:
Which I believe is derived from a Mach uKernel... The "UNIX" bits
are the FreeBSD userland utilities that surround it.

Well, no. Mac OS X uses a BSD kernel implemented on top of the Mach
microkernel, much as Apple's experimental MkLinux placed a Linux kernel
on top of Mach. OS X also uses a pretty standard set of BSD libraries
and utilities -- as well as the NeXT-derived ones. (You can tell the
heritage apart pretty easily -- if it's written in Objective-C, it's
from the NeXT side.)

The BSD heritage is a two-way street: Apple has contributed code
developed for OS X back to the FreeBSD and OpenBSD projects, as well as
releasing the whole Unix core of OS X as the open-source Darwin system.

It's also not particularly accurate to say that the reason Apple moved
to Unix was "interoperability". Rather, the old Mac System was simply
never designed for what it ended up being used to do. There were too
many layers of cruft -- and too many design decisions that were right
for 1984 but wrong for 1999. Single-user, cooperative multitasking, and
a network stack designed for small LANs rather than the Internet ... the
old Mac System was a great microcomputer OS but not a great workstation
OS.

When you consider that the first Macs to run OS X were several hundred
times faster than the 1984 Mac, had one thousand times as much RAM, and
had fifty thousand times as much mass storage, it should follow pretty
naturally that the constraints of the old system's design would cease to
be appropriate.

1984 Original Macintosh: 128kB RAM, 8MHz m68k, 400kB disk
1999 Power Macintosh G4: 128MB RAM, 400MHz PPC G4, 20 GB disk
 

Andre Majorel

The fact that the NT kernel is not entirely stable yet really
shouldn't surprise anyone. After all, Unix has messed with its
kernel for 30 years.

I feel compelled to point out that Linux achieved considerably
better stability after just a few years.
 

SM Ryan

# > Not exactly a typical editor function, agreed. I was feeling a little
# > whimsical at the time.
#
# i once did a random email/usenet signature with zippy/yow ... but i
# added two other files to it ... and then i had to fix a feature in
# yow. yow uses a 16bit random number to index a yow file ... it was ok
# as long as your sayings file was less than 64kbytes. i had to modify
# yow to handle files larger than 64kbytes ... the "sayings" file used
# for 6670 separator pages was 167k bytes and the jargon file was 413k
# bytes ... while a current zippy yow file is 52,800 bytes.

It's nice to know people still have time to work on really important things.
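The quoted fix works around a 64 KB ceiling: a 16-bit random number used as a byte offset can only address files up to 65,536 bytes. A minimal sketch of an approach without that limit (hypothetical `random_saying` helper, Python for illustration; the NUL delimiter is an assumption about the sayings-file format):

```python
import random

def random_saying(path, delimiter="\0"):
    # Parse the sayings file into entries and choose one uniformly.
    # Because this indexes entries rather than bytes, file size is not
    # capped by the width of the random number (the 64 KB problem
    # described above for a 16-bit byte offset).
    with open(path, encoding="utf-8", errors="replace") as f:
        sayings = [s.strip() for s in f.read().split(delimiter) if s.strip()]
    return random.choice(sayings)
```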
 

John Thingstad

I feel compelled to point out that Linux achieved considerably
better stability after just a few years.

I feel compelled to reply that Linux is based on the POSIX standard, which
is basically a recipe for writing Unix. They did not write a new
operating system. They implemented a tested and proven one.
 

Dave Hansen

John Thingstad wrote: [...]

uKernels are *NOT* a new idea at all. They weren't a new idea when
NT was unleashed on the world. What people think of as "NT" is a big
pile of shite that obscures the uKernel. Since the graphics stuff
got put into ring 0, I think you could legitimately claim that
BSD Unix is more of a microkernel than NT. :)
(Unix traditionally has a spaghetti of intercalling function calls as a
kernel.)

Remember NeXTStep?

QNX is another example of a microkernel OS, "unixy" without being
unix. It's been around since, what, 1981?

AIUI, it used to be called Q-NIX, until a certain telephone company
complained.

Regards,

-=Dave
 

Alan Balmer

I feel compelled to reply that Linux is based on the POSIX standard, which
is basically a recipe for writing Unix. They did not write a new
operating system. They implemented a tested and proven one.

Huh? Linux is only recently paying some attention to the POSIX
standards. I don't know the current level of compliance, though I'm
pretty sure that some parts of POSIX.4 have been implemented.

I wouldn't describe the POSIX standards as a "recipe for writing
Unix", anyway.
 

Pascal Bourguignon

John Thingstad said:
Note that Mac OS X / Darwin uses a Unix kernel because of all the
interoperability problems OS 9 had talking to Windows and Unix boxes.

No, that's not the reason. The reason is ONLY the lack of virtual
memory management (with separation of address spaces between
processes) in MacOS. That's the one design error in MacOS I
identified in version 1.0 that they've dragged along for 20
years. (And I bet that if they had not made it, AAPL would be $50-$80
now, and they'd have at least 40%-50% market share.) Instead,
they've wasted resources, CEOs and CTOs for 10 years before the NeXT
takeover.


--
__Pascal Bourguignon__ http://www.informatimago.com/

Our enemies are innovative and resourceful, and so are we. They never
stop thinking about new ways to harm our country and our people, and
neither do we.
 

CBFalconer

Peter said:
A quick search using Google will show that while there is a
certain amount of truth in the original story, most of the
details are wrong, and the final bit about the booster rockets
is unsubstantiated. But it's still a cute story.

I know nothing about those stories, but it seems reasonable to me
that the boosters would have been designed to be transportable by
railroad, which ties their dimensions to track gauge.
 

CBFalconer

John said:
.... snip ...

These people had more than 100 years of experience in
developing multiuser/multitasking operating systems between
them. The fact that the NT kernel is not entirely stable yet
really shouldn't surprise anyone. After all, Unix has messed
with its kernel for 30 years. But the modular architecture
and the microkernel are new ideas in OS design and should in
time lead to a more extensible OS than Unix.

The original NT (3.1) was well designed, but slow on the hardware
of the time. Then MS got to work increasing module connectivity
and reducing reliability. This is the usual premature
optimization bug, together with planned obsolescence. The result
is an unmaintainable mess.
 

Rupert Pigott

Alan said:
Huh? Linux is only recently paying some attention to the POSIX
standards. I don't know the current level of compliance, though I'm
pretty sure that some parts of POSIX.4 have been implemented.

Nah, that's been going on since at least 1994 when I installed it.

God only knows, as long as it works I'm not complaining. :)

Cheers,
Rupert
 

Peter Hansen

CBFalconer said:
I know nothing about those stories, but it seems reasonable to me
that the boosters would have been designed to be transportable by
railroad, which ties their dimensions to track gauge.

You know, it's really rather helpful when people take the time to
read the things they are trying to discuss, since quite often
those things end up answering questions that those people
might have.

See the snopes.com article that Dave Hansen (no relation) posted
for more... and a response to your reasonable thoughts above.

-Peter
 

Peter Wilkinson

I have been parsing character files encoded as ASCII and Unicode (UTF-16).

I have written a script for converting Unicode to ASCII given a directory.
I would like the script to test that a given file _is_ in the 'utf_16'
format before attempting to read it, i.e. it should only convert those
files that are Unicode.

Are there any built-in methods for testing a file's encoding?

Peter
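The standard library has no general encoding detector, but for UTF-16 specifically a byte-order mark (BOM) check covers most files written by common tools; BOM-less UTF-16 will slip through it. A minimal sketch (hypothetical `looks_like_utf16` helper):

```python
import codecs

def looks_like_utf16(path):
    # Heuristic: UTF-16 files normally begin with a byte-order mark,
    # either FF FE (little-endian) or FE FF (big-endian). ASCII files
    # never start with those bytes.
    with open(path, "rb") as f:
        head = f.read(2)
    return head in (codecs.BOM_UTF16_LE, codecs.BOM_UTF16_BE)
```

A converter would then read with `open(path, encoding="utf-16")` only when the check passes, and treat everything else as ASCII.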
 

Karl A. Krueger

In comp.lang.lisp Pascal Bourguignon said:
No that's not the reason. The reason is ONLY because of the lack of
virtual memory management (with separation of addressing spaces for
processes) in MacOS.

It was my impression that the Motorola 68000 CPU, upon which the
original Macintosh was based, did not support memory management in
hardware. At least, that's usually given as the reason that portable
Unix systems such as NetBSD will "never" run on the earlier 68k (or,
for that matter, 8086 or 80286) chips.
 

Alan Balmer

Nah, that's been going on since at least 1994 when I installed it.

That's what I mean - it's been (and is still) "going on." It ain't
soup yet, and only recently (imo) has it been taken seriously. I
think pthreads were the defining point for me.

It is certainly not the case that Linux was written by following the
POSIX "recipe."
 

Alan Balmer

It was my impression that the Motorola 68000 CPU, upon which the
original Macintosh was based, did not support memory management in
hardware.

That's what I remember, but wasn't there an MMU available as a
separate chip?
 

Pascal Bourguignon

Karl A. Krueger said:
When you consider that the first Macs to run OS X were several hundred
times faster than the 1984 Mac, had one thousand times as much RAM, and
had fifty thousand times as much mass storage, it should follow pretty
naturally that the constraints of the old system's design would cease to
be appropriate.

Yes, but the first NeXTcube or NeXTstation were not much more
powerful than even the original Macintosh. In any case, at the time
the Macintosh appeared, there were already 680x0-based Unix workstations.
1984 Original Macintosh: 128kB RAM, 8 MHz 68000, 400 kB disk
1989 low-end NeXTcube: 128MB RAM, 25 MHz 68030, 256 MB optical disk!
1999 Power Macintosh G4: 128MB RAM, 400MHz PPC G4, 20 GB disk

NeXTstep could have run on a Mac IIfx (the TI Explorer ran on a Mac IIfx).

--
__Pascal Bourguignon__ http://www.informatimago.com/

Our enemies are innovative and resourceful, and so are we. They never
stop thinking about new ways to harm our country and our people, and
neither do we.

"I don't think it* can be won." (*):the war on terror.
 

Andreas Krey

* Karl A. Krueger ([email protected])
....
It was my impression that the Motorola 68000 CPU, upon which the
original Macintosh was based, did not support memory management in
hardware.

That is not the problem; one can do memory management and multiple
address spaces in external hardware as well. But the MacOS architecture
obviously wanted to be all in one address space, as did the early
windows versions. This makes the GUI easier and networking and fault
isolation harder, but it's a valid tradeoff. :)

What you can't do with the 68000 is virtual memory management
because that requires the processor to save the state of
execution in the middle of an instruction when needed data
is not physically in memory. 68020 and upwards provided that
feature, and the Sun 3/50 used a 68020 and a proprietary memory
management unit mainly consisting of two fast SRAMs to get
virtual memory support.

I don't know whether the 68000 already had user and supervisor
mode which is also (besides an MMU) a prerequisite for completely
jailing user programs.

Andreas
 
