ruby interpreter as mach kernel server (beside bsd)


zuzu

ruby, starting the interactive ruby shell, but with filesystem access
to unix/bsd through the mach microkernel. in other words, a "ruby
machine" (instead of a "smalltalk machine" or "lisp machine").

what do you think?


7/7/2004 11:37 PM
http://www.jpaulmorrison.com/cgi-bin/wiki.pl?PushingTheEnvelope

so... in trying to solve the "ruby space" problem, i've been running a
thought experiment: what if the ruby interpreter / runtime environment
were a kernel server in parallel to a unix "personality" server under
the mach microkernel?

unlike the design of GNU/hurd, most microkernel unix operating systems
implement Mach for the basics (multi-tasking/processes, threads,
multi-processor on single machine or over a network, inter-process
communication / IPC, memory protection) and then a single "unix kernel
server" to fill out the rest of the unix functionality found in
monolithic kernels such as linux.

so if the ruby interpreter were another "kernel server" parallel to
the "unix kernel server", ruby would still interact with the
traditional unix environment, but each object in "ruby space" could be
its own process (as compiled C utilities are in traditional unix) for
truly reusable and fault-resistant components. (this essentially
creates a virtual "ruby machine" not unlike "smalltalk machines" and
"lisp machines" of yore.)

the reason i mention such a wild idea here, other than continuing the
discourse on threads and processes, is how similar the Mach design
feels to flow-based programming (FBP):

1. object-based APIs with communication channels (for example,
*ports*) as object references
2. highly parallel execution, including preemptively scheduled threads
and support for SMP
3. a flexible scheduling framework, with support for real-time usage
4. a complete set of IPC primitives, including messaging, RPC,
(a)synchronization, and notification
5. support for large virtual address spaces, shared memory regions,
and memory objects backed by persistent store
6. proven extensibility and portability, for example across
instruction set architectures and in distributed environments
7. security and resource management as a fundamental principle of
design; all resources are virtualized

http://developer.apple.com/document...rnelProgramming/Mach/chapter_6_section_1.html

it's almost as if the Mach kernel just needs a friendly interface to
running code through it, and ruby could very well be that friendly
syntactic sugar.

of course this grossly underestimates the difficulty of writing kernel
software, particularly for a microkernel such as Mach. paul, your
expertise seems very C-heavy; perhaps you have a better gauge of how
approachable this would be.


7/8/2004 12:26 AM
http://developer.apple.com/document...rnelProgramming/Mach/chapter_6_section_3.html

A thread

* is a point of control flow in a task.
* has access to all of the elements of the containing task.
* executes (potentially) in parallel with other threads, even threads
within the same task.
* has minimal state information for low overhead.

A task

* is a collection of system resources. These resources, with the
exception of the address space, are referenced by ports. These
resources may be shared with other tasks if rights to the ports are so
distributed.
* provides a large, potentially sparse address space, referenced by
virtual address. Portions of this space may be shared through
inheritance or external memory management.
* contains some number of threads.

Note that a task has no life of its own—only threads execute
instructions. When it is said that "task Y does X," what is really
meant is that "a thread contained within task Y does X."


to me, this sounds as though in ruby, a task would map to an object,
and a thread to a function.
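
(a toy sketch of that mapping in plain ruby. nothing mach about it, and
the class name is just made up: the object owns the state the way a task
owns its resources, and each method call spins up its own point of
control.)

require 'thread'

# toy sketch: the object plays the mach "task" (a container of state),
# and each method call runs as its own "thread" (a point of control
# flow inside the task).  plain ruby green threads, made-up names.
class BottleWasher
  def initialize
    @washed = Queue.new              # state owned by the "task"
  end

  def wash(bottle)                   # each call: one more thread in the task
    Thread.new { @washed << "clean #{bottle}" }
  end

  def results(count)
    Array.new(count) { @washed.pop }
  end
end

washer  = BottleWasher.new
threads = %w(a b c).map { |b| washer.wash(b) }
threads.each { |t| t.join }
p washer.results(3)                  # => ["clean a", "clean b", "clean c"] in some order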
 

Robert Klemme

zuzu said:
ruby, starting the interactive ruby shell, but with filesystem access
to unix/bsd through the mach microkernel. in other words, a "ruby
machine" (instead of a "smalltalk machine" or "lisp machine").

what do you think?

Hm, I still don't see the advantages. As for normal Ruby programs, I
can't see any. As for access to kernel functionality, I don't see the
improvement. I guess it's my lack of understanding, so I would greatly
appreciate it if anyone could explain this.

Kind regards

robert
 

gabriele renzi

ruby, starting the interactive ruby shell, but with filesystem access
to unix/bsd through the mach microkernel. in other words, a "ruby
machine" (instead of a "smalltalk machine" or "lisp machine").

I'll use L4::pistachio which should have become stable as of now.
I believe the guys from the Io language had the interpreter running
directly on L4, which is a wonderful microkernel (given my little
understanding of OS theory, anyway)
 

zuzu

Hm, I still don't see the advantages. As for normal Ruby programs, I
can't see any. As for access to kernel functionality, I don't see the
improvement. I guess it's my lack of understanding, so I would greatly
appreciate it if anyone could explain this.

Kind regards

robert

one important aspect i have neglected to emphasize is the nature of
flow-based (aka "agent") programming style in ruby. see
http://www.jpaulmorrison.com/fbp/index.shtml


ok, here is a very concrete example i am faced with:

# given an apple dual-G5, essentially four CPU cores are available to
process instructions in parallel.

# i am writing ruby code in the aforementioned style.
think of a beer bottling plant, where bottles move on a conveyor
through various machines: a washing machine, then a drying machine,
then a filling machine, then a capping machine, then a labeling
machine, then a packaging machine. each machine does not wait for
"all the bottles" to finish before passing the bottles onto the next
machine. each machine simply works with the bottles it has when it
has them.
in data-flow programming, the bottles are data which get
modified/transformed by each "machine" which would likely be
singletons or direct class method calls, and the conveyor belt is an
infinite stream but perceived by each "machine" as a bounded buffer.
so the *problem* becomes that several "machines" need to operate on
their buffers at the same time (in parallel).
at the same time, the user requires introspection to keep an eye on
what's happening where, and be able to make dynamic changes to the
data-flow arbitrarily.
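
(to make that concrete, here's roughly what the conveyor looks like as
plain ruby threads and bounded queues inside one process. the stage
names and the :done token are just made up for illustration.)

require 'thread'

# rough sketch of the conveyor: each "machine" is a thread that pops
# bottles off a bounded buffer (SizedQueue), works on whatever it has,
# and pushes the result onto the next belt.
def machine(name, inbox, outbox)
  Thread.new do
    while (bottle = inbox.pop) != :done
      outbox << "#{bottle}+#{name}"  # "transform" the bottle
    end
    outbox << :done                  # pass the shutdown token along
  end
end

belts  = Array.new(5) { SizedQueue.new(10) }   # conveyor segments
stages = []
%w(wash dry fill cap).each_with_index do |name, i|
  stages << machine(name, belts[i], belts[i + 1])
end

10.times { |n| belts.first << "bottle#{n}" }   # feed the line
belts.first << :done
stages.each { |t| t.join }
until (out = belts.last.pop) == :done
  puts out
end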

# how can ruby utilize the 4 CPU cores for this massively parallel
bounded-buffer data-flow as a single unix process with only internal
threading?

one possible solution i thought of is to port the ruby interpreter as
a Mach microkernel server, sitting beside the bsd "personality"
server. each object would be a Mach task while each function would be
a Mach thread, and objects would communicate via Mach inter-process
communication (IPC). networking and filesystems can also be accessed
through Mach.

ruby gets the parallelization it needs by adopting an existing
microkernel rather than writing one from scratch. in essence, a "ruby
machine".
 

Robert Klemme

one important aspect i have neglected to emphasize is the nature of
flow-based (aka "agent") programming style in ruby. see
http://www.jpaulmorrison.com/fbp/index.shtml

"Flow based" seems to me just another name for "event driven" from what I
read so far. It's a bit graph theory, a bit Petri Nets, a bit concurrency
theory - not nearly as sensational as the author tries to make us think.
# how can ruby utilize the 4 CPU cores for this massively parallel
bounded-buffer data-flow as a single unix process with only internal
threading?

So basically what you want is that Ruby makes use of native threads. I
guess it would be much easier to implement a Ruby interpreter that uses
native threads than to make a Mach microkernel server. And it's more
portable (i.e. POSIX threads). This sounds a bit like the wrong hammer
for your problem. But then again, I'm not a microkernel expert.
one possible solution i thought of is to port the ruby interpreter as
a Mach microkernel server, sitting beside the bsd "personality"
server. each object would be a Mach task while each function would be
a Mach thread, and objects would communicate via Mach inter-process
communication (IPC). networking and filesystems can also be accessed
through Mach.

IMHO making each object a mach task would be overkill. You probably meant
each *component* (i.e. independent self contained processing unit as
described by Paul Morrison) should be a mach task.

Regards

robert


PS: Attached some sample of what I understand from "flow based".
 

zuzu

"Flow based" seems to me just another name for "event driven" from what I
read so far. It's a bit graph theory, a bit Petri Nets, a bit concurrency
theory - not nearly as sensational as the author tries to make us think.

word on graph & concurrency theory, reading up on petri nets now
(wikipedia)... (also reminds me to finish reading 'Linked' by ALB.)

perhaps there's something better for me to read up on event driven
programming besides [http://c2.com/cgi/wiki?EventDrivenProgramming],
but it sounds much earlier in the evolution of an idea.
So basically what you want is that Ruby makes use of native threads. I
guess it would be much easier to implement a Ruby interpreter that uses
native threads than to make a Mach microkernel server. And it's more
portable (i.e. POSIX threads). This sounds a bit like the wrong hammer
for your problem. But then again, I'm not a microkernel expert.

maybe i'm nitpicking, but i feel a problem exists that processes, not
threads, are necessary. when the parent process dies (perhaps because
of a bad thread), all of its threads go with it. this is a problem
when one small error causes my entire application to crash. (one
small error in one object in my web browser should not lose me all of
my "unsaved" rendered pages and URL information with it. just that
one faulty object should die and get respawned.) maintaining my human
productivity with persistent objects is more valuable than the
footprint of many processes. O(1) schedulers make "too many
processes" a moot point in a cheap hardware world anyway, methinks.
IMHO making each object a mach task would be overkill. You probably meant
each *component* (i.e. independent self contained processing unit as
described by Paul Morrison) should be a mach task.

you do not think that paul's "components" essentially map directly to
ruby "objects"?
Regards

robert


PS: Attached some sample of what I understand from "flow based".

word, i'll give it a serious look this weekend.

-z
 

Robert Klemme

zuzu said:
"Flow based" seems to me just another name for "event driven" from what I
read so far. It's a bit graph theory, a bit Petri Nets, a bit concurrency
theory - not nearly as sensational as the author tries to make us
think.

word on graph & concurrency theory, reading up on petri nets now
(wikipedia)... (also reminds me to finish reading 'Linked' by ALB.)
:))

perhaps there's something better for me to read up on event driven
programming besides [http://c2.com/cgi/wiki?EventDrivenProgramming],
but it sounds much earlier in the evolution of an idea.

I think in the telco world this is quite ubiquitous. SDL is used to
design such scenarios (communicating processes) and is widely used in
that area AFAIK.
maybe i'm nitpicking, but i feel a problem exists that processes, not
threads, are necessary. when the parent process dies (perhaps because
of a bad thread), all of its threads go with it. this is a problem
when one small error causes my entire application to crash. (one
small error in one object in my web browser should not lose me all of
my "unsaved" rendered pages and URL information with it. just that
one faulty object should die and get respawned.) maintaining my human
productivity with persistent objects is more valuable than the
footprint of many processes. O(1) schedulers make "too many
processes" a moot point in a cheap hardware world anyway, methinks.

Well, normally a dying Ruby thread does not kill the whole process.
Whether multiple processes or threads is not the major point. The major
point is that you need concurrency for flow based programs to happen, not
a kernel integration. The kernel integration might be a means but it
looks inappropriate to me.
you do not think that paul's "components" essentially map directly to
ruby "objects"?
Exactly.


word, i'll give it a serious look this weekend.

Don't look too hard. I guess it's not overly well designed. Just to get
the picture. Basically a processor has an incoming queue which is dealt
with by a thread. Processing can be anything from printing to sending
something to an attached processor. Concurrency safety is ensured by the
queues. That's about it.
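
In case the attachment gets lost, the skeleton boils down to roughly
this (a condensed sketch of the idea, not the actual attached code):

require 'thread'

# Sketch only: a processor owns an inbound queue, a single thread works
# it off, and "processing" may just mean forwarding the result to the
# next processor in the chain.
class Processor
  def initialize(successor = nil, &work)
    @queue     = Queue.new
    @successor = successor
    @work      = work || lambda { |x| x }
    @thread    = Thread.new { run }
  end

  def <<(item)
    @queue << item                  # concurrency safety comes from the queue
  end

  def shutdown
    @queue << :stop
    @thread.join
    @successor.shutdown if @successor
  end

  private

  def run
    while (item = @queue.pop) != :stop
      result = @work.call(item)
      @successor << result if @successor
    end
  end
end

printer = Processor.new { |x| puts x }
upcaser = Processor.new(printer) { |x| x.upcase }
%w(foo bar baz).each { |s| upcaser << s }
upcaser.shutdown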

Regards

robert
 

zuzu

zuzu said:
one important aspect i have neglected to emphasize is the nature of
flow-based (aka "agent") programming style in ruby. see
http://www.jpaulmorrison.com/fbp/index.shtml

"Flow based" seems to me just another name for "event driven" from what I
read so far. It's a bit graph theory, a bit Petri Nets, a bit concurrency
theory - not nearly as sensational as the author tries to make us
think.

word on graph & concurrency theory, reading up on petri nets now
(wikipedia)... (also reminds me to finish reading 'Linked' by ALB.)
:))

perhaps there's something better for me to read up on event driven
programming besides [http://c2.com/cgi/wiki?EventDrivenProgramming],
but it sounds much earlier in the evolution of an idea.

I think in the telco world this is quite ubiquitous. SDL is used to
design such scenarios (communicating processes) and is widely used in
that area AFAIK.

word, i think i've heard that before. if i think of the specific
context i'll post it.
Well, normally a dying Ruby thread does not kill the whole process.
Whether multiple processes or threads is not the major point. The major
point is that you need concurrency for flow based programs to happen, not
a kernel integration. The kernel integration might be a means but it
looks inappropriate to me.

but processes never corrupt/crash other processes, except in the event
of a kernel panic, correct? however much debate exists over the
safety of threads. though pan-unix compatibility would be much more
popular than a mach kernel implementation (which basically means apple
xnu and gnu/hurd).

http://c2.com/cgi/wiki?ThreadsConsideredHarmful

" Some tasks may be truly independent; having independent simultaneous
flows of control is useful.
* But: Separate processes may be a better solution.
o On some OS's (ie Windows) that is much more expensive than
separate threads (on Unix derivatives, separate processes are much
cheaper)"

and according to john carmack writing quake 3:
# avoid threads if possible
# if you have to have threads, then have only one per CPU
# avoid threads if possible
# share as little data as possible between threads
# are you sure a separate process with shared memory or other IPC wouldn't do?

i think inter-process communication (IPC) is more preferable as well.
(though i'm open to discussing the semantical differences.)

i believe i am asking this same question:
"I would reply to GlenStampoultzis with a question of my own: why use
threads at all if you isolate the parts of your program properly?
Processes with message passing could do just as well, no? --
PierrePhaneuf"

hehe, um, because...?
 

Robert Klemme

zuzu said:
zuzu said:
one important aspect i have neglected to emphasize is the nature of
flow-based (aka "agent") programming style in ruby. see
http://www.jpaulmorrison.com/fbp/index.shtml

"Flow based" seems to me just another name for "event driven" from what I
read so far. It's a bit graph theory, a bit Petri Nets, a bit concurrency
theory - not nearly as sensational as the author tries to make us think.

word on graph & concurrency theory, reading up on petri nets now
(wikipedia)... (also reminds me to finish reading 'Linked' by ALB.)
:))

perhaps there's something better for me to read up on event driven
programming besides [http://c2.com/cgi/wiki?EventDrivenProgramming],
but it sounds much earlier in the evolution of an idea.

I think in the telco world this is quite ubiquitous. SDL is used to
design such scenarios (communicating processes) and is widely used in
that area AFAIK.

word, i think i've heard that before. if i think of the specific
context i'll post it.

Well, normally a dying Ruby thread does not kill the whole process.
Whether multiple processes or threads is not the major point. The major
point is that you need concurrency for flow based programs to happen, not
a kernel integration. The kernel integration might be a means but it
looks inappropriate to me.

but processes never corrupt/crash other processes, except in the event
of a kernel panic, correct? however much debate exists over the
safety of threads. though pan-unix compatibility would be much more
popular than a mach kernel implementation (which basically means apple
xnu and gnu/hurd).

http://c2.com/cgi/wiki?ThreadsConsideredHarmful

" Some tasks may be truly independent; having independent simultaneous
flows of control is useful.
* But: Separate processes may be a better solution.
o On some OS's (ie Windows) that is much more expensive than
separate threads (on Unix derivatives, separate processes are much
cheaper)"

and according to john carmack writing quake 3:
# avoid threads if possible
# if you have to have threads, then have only one per CPU
# avoid threads if possible
# share as little data as possible between threads
# are you sure a separate process with shared memory or other IPC wouldn't do?

i think inter-process communication (IPC) is more preferable as well.
(though i'm open to discussing the semantical differences.)

i believe i am asking this same question:
"I would reply to GlenStampoultzis with a question of my own: why use
threads at all if you isolate the parts of your program properly?
Processes with message passing could do just as well, no? --
PierrePhaneuf"

The usual tradeoff is that threads are cheaper (some OSs call them Light
Weight Processes) because there's less overhead involved during task
switches. Threads automatically share all their memory, while for
processes you have to implement that using the operating system's
mechanisms for shared memory - or message passing. Whatever.
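
To illustrate the difference (a toy example, Unix only because of fork):

# Toy illustration: threads share memory implicitly, processes have to
# pass messages explicitly, here over a pipe.

# Threads: both ends see the same array, no copying involved.
shared = []
t = Thread.new { shared << "from thread" }
t.join
p shared                        # => ["from thread"]

# Processes: the child writes a message into a pipe, the parent reads it.
reader, writer = IO.pipe
pid = fork do
  reader.close
  writer.puts "from child process"
  writer.close
end
writer.close
p reader.gets.chomp             # => "from child process"
Process.wait(pid)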

When I said "The major point is that you need concurrency for flow based
programs to happen, not a kernel integration." I wanted to make clear that
you don't need kernel integration to make flow-based software happen in
Ruby in the first place. It's the concurrency and especially utilization
of several CPU's which can't happen with the current interpreter. (Hence
Ruby2 - AFAIK native threads is a planned feature there.)
hehe, um, because...?

It doesn't make sense. Not every instance does processing - the bottles
are just shoved around without any activity of their own. You don't want
a String to have a thread of control. What should it do?

robert
 

zuzu

zuzu said:

one important aspect i have neglected to emphasize is the nature of
flow-based (aka "agent") programming style in ruby. see
http://www.jpaulmorrison.com/fbp/index.shtml

"Flow based" seems to me just another name for "event driven" from
what I
read so far. It's a bit graph theory, a bit Petri Nets, a bit
concurrency
theory - not nearly as sensational as the author tries to make us
think.

word on graph & concurrency theory, reading up on petri nets now
(wikipedia)... (also reminds me to finish reading 'Linked' by ALB.)

:))

perhaps there's something better for me to read up on event driven
programming besides [http://c2.com/cgi/wiki?EventDrivenProgramming],
but it sounds much earlier in the evolution of an idea.

I think in the telco world this is quite ubiquitous. SDL is used to
design such scenarios (communicating processes) and is widely used in
that area AFAIK.

word, i think i've heard that before. if i think of the specific
context i'll post it.
# how can ruby utilize the 4 CPU cores for this massively parallel
bounded-buffer data-flow as a single unix process with only internal
threading?

So basically what you want is that Ruby makes use of native threads. I
guess it would be much easier to implement a Ruby interpreter that uses
native threads than to make a Mach microkernel server. And it's more
portable (i.e. POSIX threads). This sounds a bit like the wrong hammer
for your problem. But then again, I'm not a microkernel expert.

maybe i'm nitpicking, but i feel a problem exists that processes, not
threads, are necessary. when the parent process dies (perhaps because
of a bad thread), all of its threads go with it. this is a problem
when one small error causes my entire application to crash. (one
small error in one object in my web browser should not lose me all of
my "unsaved" rendered pages and URL information with it. just that
one faulty object should die and get respawned.) maintaining my human
productivity with persistent objects is more valuable than the
footprint of many processes. O(1) schedulers make "too many
processes" a moot point in a cheap hardware world anyway, methinks.

Well, normally a dying Ruby thread does not kill the whole process.
Whether multiple processes or threads is not the major point. The major
point is that you need concurrency for flow based programs to happen, not
a kernel integration. The kernel integration might be a means but it
looks inappropriate to me.

but processes never corrupt/crash other processes, except in the event
of a kernel panic, correct? however much debate exists over the
safety of threads. though pan-unix compatibility would be much more
popular than a mach kernel implementation (which basically means apple
xnu and gnu/hurd).

http://c2.com/cgi/wiki?ThreadsConsideredHarmful

" Some tasks may be truly independent; having independent simultaneous
flows of control is useful.
* But: Separate processes may be a better solution.
o On some OS's (ie Windows) that is much more expensive than
separate threads (on Unix derivatives, separate processes are much
cheaper)"

and according to john carmack writing quake 3:
# avoid threads if possible
# if you have to have threads, then have only one per CPU
# avoid threads if possible
# share as little data as possible between threads
# are you sure a separate process with shared memory or other IPC wouldn't do?

i think inter-process communication (IPC) is more preferable as well.
(though i'm open to discussing the semantical differences.)

i believe i am asking this same question:
"I would reply to GlenStampoultzis with a question of my own: why use
threads at all if you isolate the parts of your program properly?
Processes with message passing could do just as well, no? --
PierrePhaneuf"

The usual tradeoff is that threads are cheaper (some OSs call them Light
Weight Processes) because there's less overhead involved during task
switches. Threads automatically share all their memory, while for
processes you have to implement that using the operating system's
mechanisms for shared memory - or message passing. Whatever.

word. and apparently in 20% of situations, relying on the OS (or
microkernel) to pass messages between processes/tasks works while
threading crashes, probably because of a goof sharing that memory.
with today's inexpensive hardware, 90% of hardware in the field can
handle the penalty of the "heavier" processes/tasks to gain an
increase in human creativity resource productivity.

so if we dismiss threads, either ruby has to be able to talk to its
unix host for its own processes, or it's going to talk to the kernel
for that. i'm not sure, but i think this amounts to the same thing,
which is how i arrived at this original topic to begin with.
When I said "The major point is that you need concurrency for flow based
programs to happen, not a kernel integration." I wanted to make clear that
you don't need kernel integration to make flow-based software happen in
Ruby in the first place. It's the concurrency and especially utilization
of several CPU's which can't happen with the current interpreter. (Hence
Ruby2 - AFAIK native threads is a planned feature there.)


It doesn't make sense. Not every instance does processing - the bottles
are just shoved around without any activity of their own. You don't want
a String to have a thread of control. What should it do?

now that i'm answering the question, i may not have been considering
the inheritance model in ruby (if that's the reality of the
interpreter)... but my thought process was: even data in ruby is
active, and i think this is a positive consequence of data-as-objects.
a String might announce its .length or get .reverse'd or .chomp'd.

more importantly, when the machine filling a bottle dies, i don't want
the repairmen to haul away the bottle with the broken machine. i want
them to take the bottle out, install the new machine, and put that
bottle back in the new machine.

if a bottle breaks, well so it goes. but i feel the computer can work
harder to not throw the baby out with the bath water, rather than
making me create a new baby. (as fun as that may be!)

-z
 

Robert Klemme

word. and apparently in 20% of situations, relying on the OS (or
microkernel) to pass messages between processes/tasks works while
threading crashes, probably because of a goof sharing that memory.
with today's inexpensive hardware, 90% of hardware in the field can
handle the penalty of the "heavier" processes/tasks to gain an
increase in human creativity resource productivity.

so if we dismiss threads, either ruby has to be able to talk to its
unix host for its own processes, or it's going to talk to the kernel
for that. i'm not sure, but i think this amounts to the same thing,
which is how i arrived at this original topic to begin with.

There's DRB. And you have plain sockets. So you can do that with Ruby
today already.
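
A minimal DRb setup looks roughly like this (URIs and the Counter class
are of course just an example):

require 'drb'

# --- server.rb: expose one object over TCP ---
class Counter
  def initialize; @n = 0; end
  def tick; @n += 1; end
end

DRb.start_service("druby://localhost:9000", Counter.new)
DRb.thread.join                 # keep the server process alive

# --- client.rb: call it from another process ---
require 'drb'
DRb.start_service               # needed if the server ever calls back
counter = DRbObject.new_with_uri("druby://localhost:9000")
p counter.tick                  # => 1, executed in the server process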
now that i'm answering the question, i may not have been considering
the inheritance model in ruby (if that's the reality of the
interpreter)... but my thought process was: even data in ruby is
active, and i think this is a positive consequence of data-as-objects.

I'm not sure what exactly you mean by "data is active". It's basic OO
that each object has methods that it can respond to. Activity as its own
thread of control is a different story. But that is not built into Ruby.
You can implement it in Ruby as I've tried to show.
a String might announce its .length or get .reverse'd or .chomp'd.

But only when asked. And constructing an event mechanism around the call
to #length is ridiculous.
more importantly, when the machine filling a bottle dies, i don't want
the repairmen to haul away the bottle with the broken machine. i want
them to take the bottle out, install the new machine, and put that
bottle back in the new machine.

The point being?
if a bottle breaks, well so it goes. but i feel the computer can work
harder to not throw the baby out with the bath water, rather than
making me create a new baby. (as fun as that may be!)

LOL

robert
 

zuzu

There's DRB. And you have plain sockets. So you can do that with Ruby
today already.

i think i've touched on this already as well...

dRuby is cool. (the FBP wiki analyzes linda, btw.) however, my
current understanding is that any one instance of ruby will not
introspect across all available instances of ruby, only itself. this
makes tracking of all accessible ruby objects within any ruby instance
difficult. i suppose introspection could be extended to introspect
across dRuby ports... or maybe i'm plain wrong about this. anyone
here a dRuby expert?

i will almost certainly need to work within such a framework for the
short-term anyway.
I'm not sure what exactly you mean by "data is active". It's basic OO
that each object has methods that it can respond to. Activity as its own
thread of control is a different story. But that is not built into Ruby.
You can implement it in Ruby as I've tried to show.


But only when asked. And constructing an event mechanism around the call
to #length is ridiculous.

silly perhaps, but:
1.) why create an exceptional circumstance for data? again, with O(1)
schedulers extra processes won't suck performance, and keeping data
protected separately increases robustness.
2.) http://c2.com/cgi/wiki?LazyProgrammer or using a thread for data
instead of a process seems like premature optimization, and breaks the
rule from hagakure of "making two things out of one".

basically, do the reasons for adding threads into the mix outweigh the
cost of having to think about them rather than just processes/IPC?
The point being?

...that just because the html renderer in firefox shits a brick is no
reason why i should lose writing this unfinished email. the faulty
object should be replaced like a lightbulb, without turning off all
the electricity in the apartment. (the computer in this analogy being
the entire building, and the internet the city.)
LOL

robert

-z
 

Robert Klemme

zuzu said:
i think i've touched on this already as well...

dRuby is cool. (the FBP wiki analyzes linda, btw.) however, my
current understanding is that any one instance of ruby will not
introspect across all available instances of ruby, only itself. this
makes tracking of all accessible ruby objects within any ruby instance
difficult. i suppose introspection could be extended to introspect
across dRuby ports... or maybe i'm plain wrong about this. anyone
here a dRuby expert?

Not too much of an expert, but this won't work automatically. When
working with multiple DRb servers (= processes) you'd certainly have a
single instance as a repository where all others register themselves.
Typically this is implemented as a naming service. You could as well use
LDAP or similar for this...
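
For instance Rinda, the Linda style tuplespace in the standard library,
gives you most of such a repository already. A sketch (URIs and tuple
layout made up):

require 'drb'
require 'rinda/tuplespace'
require 'rinda/rinda'

# --- registry process: one central tuplespace, exposed via DRb ---
DRb.start_service("druby://localhost:9100", Rinda::TupleSpace.new)
DRb.thread.join

# --- any other process: register itself, look the others up ---
DRb.start_service
ts = Rinda::TupleSpaceProxy.new(
       DRbObject.new_with_uri("druby://localhost:9100"))
ts.write([:service, :bottle_washer, DRb.uri])          # register
_, _, uri = ts.read([:service, :bottle_washer, nil])   # look up (blocks until found)
p uri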
silly perhaps, but:
1.) why create an exceptional circumstance for data? again, with O(1)
schedulers extra processes won't suck performance, and keeping data
protected separately increases robustness.

With all architectures I know the overhead is simply too big. This would
not yield reasonable performance. And it's the wrong abstraction IMHO.
In the real world(TM) we have a differentiation between actors / subjects
(humans, animals, maybe even machines) and objects (passive things). It's
not always good to make one thing from two things - especially if they are
not very similar. Software engineering is not about finding minimal or
maximal abstractions but about finding appropriate abstractions.
2.) http://c2.com/cgi/wiki?LazyProgrammer or using a thread for data
instead of a process seems like premature optimization, and breaks the
rule from hagakure of "making two things out of one".

basically, do the reasons for adding threads into the mix outweigh the
cost of having to think about them rather than just processes/IPC?

I'd say things get more complicated if you make everything active. If you
want to know how fast your car runs at the moment you look at the
speedometer and get the answer immediately. You don't send a request to
the car which in turn answers with a voice message that indicates current
speed.
...that just because the html renderer in firefox shits a brick is no
reason why i should lose writing this unfinished email. the faulty
object should be replaced like a lightbulb, without turning off all
the electricity in the apartment. (the computer in this analogy being
the entire building, and the internet the city.)

I get the feeling we're talking past each other. Care to explain the
concept of "ruby interpreter as mach kernel server"? How does it work?
What does it do?

robert
 

zuzu

zuzu said:
Not too much of an expert, but this won't work automatically. When
working with multiple DRb servers (= processes) you'd certainly have a
single instance as a repository where all others register themselves.
Typically this is implemented as a naming service. You could as well use
LDAP or similar for this...

ugh, this sounds bloated already.
With all architectures I know the overhead is simply too big. This would
not yield reasonable performance. And it's the wrong abstraction IMHO.
In the real world(TM) we have a differentiation between actors / subjects
(humans, animals, maybe even machines) and objects (passive things).

hmm... if you're talking animate and inanimate objects... not really.
everything changes.
It's
not always good to make one thing from two things - especially if they are
not very similar. Software engineering is not about finding minimal or
maximal abstractions but about finding appropriate abstractions.

true. but i do not think this is over-abstraction. we probably need
some real numbers or testing on this though.
I'd say things get more complicated if you make everything active. If you
want to know how fast your car runs at the moment you look at the
speedometer and get the answer immediately. You don't send a request to
the car which in turn answers with a voice message that indicates current
speed.

i need a better analogy. you don't just "know" what your car's speed
is. your speedometer senses the car's speed (from axle rotation or
something) and effects a change in needle display. you sense with
your eyes detecting light reflecting off the dial of the speedometer,
and effect change in speed with the gas pedal. it's always message
passing. the speedometer does in fact "speak" its reading to you via
reflected light.

and again, i find it important that your car will keep moving even if
your speedometer fails.

how is the programming more complicated if the pattern is consistent?
I get the feeling we're talking past each other. Care to explain the
concept of "ruby interpreter as mach kernel server"? How does it work?
What does it do?

exactly what it does now, just not constrained by the unix kernel. if
the unix kernel panics, ruby keeps going... and maybe could restart
the unix kernel. if ruby panics, it could be restarted from unix.
and requests for resources between them would be determined by mach.

-z
 

Richard Dale

That's interesting - I've just implemented a ruby version of DCOP for the
KDE Korundum bindings. DCOP is the equivalent of Cocoa Distributed Objects,
rather than Mach tasks/ipc, which is much lower level. It occurred to me how
you could build an actor-like system in ruby DCOP to schedule workflow in
C++ apps. All KDE apps have interfaces exposed as DCOP, which gives you
cross process introspection, and nice easy to use cross language rpc.

You used to write 'Mach Interface Generator' files like xdr to describe the
marshalling (is it still the same?), so for ruby you would do that
dynamically. But if you're only communicating with other ruby apps that can
understand your protocol, it seems a bit limiting. Does RubyCocoa implement
Distributed Objects? That seems a better place to start on Mac OS X to me.

In Korundum, you define DCOP slots like this:

class MyWidget < KDE::pushButton
  k_dcop 'void mySlot(QString)', 'QPoint getPoint(QString)'

  def initialize(parent, name)
    super
  end

  def mySlot(greeting)
    puts "greeting: #{greeting}"
  end

  def getPoint(msg)
    puts "message: #{msg}"
    return Qt::point.new(50, 100)
  end
end

The 'tag' or unique name for this actor is 'dcopslot/MyWidget/<slotname>' -
the name of the app and the class with the slot.

Here is an example of synchronous communication ('slots' are Qt's in-process
equivalent of DCOP slots):

class SenderWidget < PushButton
  def initialize(parent, name)
    super
    connect(self, SIGNAL('clicked()'), self, SLOT('doit()'))
  end

  slots 'doit()'

  def doit()
    dcopRef = DCOPRef.new("dcopslot", "MyWidget")
    result = dcopRef.call("QPoint getPoint(QString)", "Hello from dcopcall")
    puts "result class: #{result.class.name} x: #{result.x} y: #{result.y}"
  end
end

Of course in ruby the above call() should really look like this, and use
method_missing():

result = dcopRef.getPoint("Hello from dcopcall")

Or asynchronous communication:

class SenderWidget < KDE::pushButton
  def initialize(parent, name)
    super
    Qt::Object::connect(self, SIGNAL('clicked()'), self, SLOT('doit()'))
  end

  slots 'doit()'

  def doit()
    dcopRef = KDE::DCOPRef.new("dcopslot", "MyWidget")
    dcopRef.send("mySlot(QString)", "Hello from dcopsend")
  end
end

And of course you can send DCOPRef's over DCOP to other KDE programs or
actors.

The script for the actor's behaviour definition can be just a ruby string
instance variable, which is eval'd for the current request in the actor's
'become' method. Each response to a message generates the script to respond
to the next message.

So the next step is to try and write the factorial example from Gul
Agha's Actors book in DCOP...

-- Richard
 

Robert Klemme

zuzu said:
ugh, this sounds bloated already.

No. It's much easier to have a central repository than to try to have each
distributed object register itself with every other one. Of course, if you
want complete failover and redundancy things get more complicated.
hmm... if you're talking animate and inanimate objects... not really.
everything changes.

I see you read your Heraclitus. But independently of the truth of such
statements, from a practical point of view it's much more valuable to make
some distinctions.
true. but i do not think this is over-abstraction. we probably need
some real numbers or testing on this though.


i need a better analogy. you don't just "know" what your car's speed
is. your speedometer senses the car's speed (from axle rotation or
something) and effects a change in needle display. you sense with
your eyes detecting light reflecting off the dial of the speedometer,
and effect change in speed with the gas pedal. it's always message
passing. the speedometer does in fact "speak" its reading to you via
reflected light.

Well, I guess you can discover message passing in every process if you just
scrutinize it thoroughly enough.
and again, i find it important that your car will keep moving even if
your speedometer fails.
Sure.

how is the programming more complicated if the pattern is consistent?

If String#length is a synchronous call (like it is today), you just invoke it
and have the answer and can work with it.

If it's asynchronous (as in message oriented systems) you have to place your
request in some inbound queue and make sure, you get another event if the
result is there. Now if the result is sent back with another message you
have to react on it and proceed with whatever calculation you were doing and
that needed the length of the String.

Now, what's more complicated?
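
In code the contrast is roughly this (the asynchronous half is a
deliberately clumsy sketch):

require 'thread'

s = "hello"

# Synchronous: ask, get the answer, go on.
puts s.length                        # => 5

# "Asynchronous" sketch: put a request into an inbound queue, get the
# answer back as another message, only then resume the calculation.
requests, replies = Queue.new, Queue.new
worker = Thread.new do
  str = requests.pop
  replies << str.length
end
requests << s
puts replies.pop                     # => 5, after waiting for the reply
worker.join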
exactly what it does now, just not constrained by the unix kernel. if
the unix kernel panics, ruby keeps going... and maybe could restart
the unix kernel. if ruby panics, it could be restarted from unix.
and requests for resources between them would be determined by mach.

So basically it's just another way to invoke the interpreter with some
better liveliness guarantees.

robert
 

Martin DeMello

Robert Klemme said:
If String#length is a synchronous call (like it is today), you just invoke it
and have the answer and can work with it.

If it's asynchronous (as in message oriented systems) you have to place your
request in some inbound queue and make sure, you get another event if the
result is there. Now if the result is sent back with another message you
have to react on it and proceed with whatever calculation you were doing and
that needed the length of the String.

Now, what's more complicated?

This might mesh well with Oz's dataflow concurrency, where the thread
would essentially suspend itself until the value was available.
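
i.e. a write-once variable whose readers block until somebody binds it.
A quick untested sketch in Ruby:

require 'thread'

# Untested sketch: a write-once dataflow variable; readers block until
# some other thread binds it.
class DataflowVar
  def initialize
    @mutex = Mutex.new
    @cond  = ConditionVariable.new
    @bound = false
  end

  def bind(value)
    @mutex.synchronize do
      raise "already bound" if @bound
      @value, @bound = value, true
      @cond.broadcast               # wake up every waiting reader
    end
  end

  def value
    @mutex.synchronize do
      @cond.wait(@mutex) until @bound
      @value
    end
  end
end

x = DataflowVar.new
reader = Thread.new { puts "got #{x.value}" }   # suspends here
x.bind(42)
reader.join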

martin
 

gabriele renzi

On Mon, 12 Jul 2004 12:12:04 GMT, Martin DeMello wrote:
This might mesh well with Oz's dataflow concurrency, where the thread
would essentially suspend itself until the value was available.

I think this maps quite well to the actor model in Io
(www.iolanguage.com).
In Io you call a method like:

obj methodname(args)

and if you call it like:
obj @methodname(args)

it returns you a Future object.
You then test the future object and, if it's ready, retrieve the value:
future_object isReady
future_object value print

Otherwise you can avoid handling the future object altogether, like this:
obj @@methodname(args) #returns nil

It is an interesting model, and imho seems very OO, simple and
powerful.

Now if I just was good enough to implement Object#async_send() ... ;)
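
maybe a naive version is not even that hard, something like (untested!):

# untested sketch of Object#async_send: run the call in a background
# thread and hand back a little future object, vaguely like Io's @message
class Future
  def initialize(&blk)
    @thread = Thread.new(&blk)
  end

  def ready?
    !@thread.alive?
  end

  def value
    @thread.value                   # blocks until the computation is done
  end
end

class Object
  def async_send(msg, *args)
    Future.new { send(msg, *args) }
  end
end

f = "hello world".async_send(:reverse)
puts f.value                        # => "dlrow olleh"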

PS
I may be wrong about the syntax, not really an Io programmer
 
