POLS - exception comparisons


Ara.T.Howard

what is the meaning of this?


jib:~ > cat a.rb
def method
  raise 'foobar'
end

errors = []

2.times do
  Thread.new do
    begin
      method
    rescue => e
      errors << e
    end
  end.join
end

p errors

p(errors[-2] == errors[-1])

p((errors[-2].class == errors[-1].class and
   errors[-2].message == errors[-1].message and
   errors[-2].backtrace == errors[-1].backtrace))


jib:~ > ruby a.rb
[#<RuntimeError: foobar>, #<RuntimeError: foobar>]
false
true


i'm writing some code which must be fault tolerant: something like


def error_wrap errors = [], &block
  begin
    block.call errors
  rescue => e
    errors << e
    if errors.size >= @fault_tolerance
      raise
    else
      if errors.size == 1 or (errors.size >= 2 and errors[-2] != errors[-1])
        @logger.warn{ errors.last }
        error_wrap errors, &block
      end
    end
  end
end

which runs a block of code, handling errors by logging them if it's a new
error (don't busy the log with duplicate errors), eventually failing if too
many errors are seen.

the problem is that

errors[-2] != errors[-1]

is always true. as the first bit of code shows, the '==' operator seems to be
implemented in a counterintuitive way. doesn't it make sense that exceptions
with the same class, message, and backtrace should be considered the same?

looking briefly at error.c leads me to believe that it's Object#== that's used
to compare exceptions - does this make sense?
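for reference, the comparison i'm after could be factored into a helper - just
a sketch, 'same_error?' is a hypothetical name, not something that exists:

def same_error? a, b
  # compare by contents rather than object identity
  a.class == b.class and
    a.message == b.message and
    a.backtrace == b.backtrace
end

same_error? errors[-2], errors[-1]   # => true for the example above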

regards.


-a
--
===============================================================================
| EMAIL :: Ara [dot] T [dot] Howard [at] noaa [dot] gov
| PHONE :: 303.497.6469
| A flower falls, even though we love it;
| and a weed grows, even though we do not love it.
| --Dogen
===============================================================================
 

Robert Klemme

Ara.T.Howard said:
what is the meaning of this?


jib:~ > cat a.rb
def method
  raise 'foobar'
end

errors = []

2.times do
  Thread.new do
    begin
      method
    rescue => e
      errors << e
    end
  end.join
end

p errors

p(errors[-2] == errors[-1])

p((errors[-2].class == errors[-1].class and
   errors[-2].message == errors[-1].message and
   errors[-2].backtrace == errors[-1].backtrace))


jib:~ > ruby a.rb
[#<RuntimeError: foobar>, #<RuntimeError: foobar>]
false
true


i'm writing some code which must be fault tolerant: something like


def error_wrap errors = [], &block
  begin
    block.call errors
  rescue => e
    errors << e
    if errors.size >= @fault_tolerance
      raise
    else
      if errors.size == 1 or (errors.size >= 2 and errors[-2] != errors[-1])
        @logger.warn{ errors.last }
        error_wrap errors, &block
      end
    end
  end
end

which runs a block of code, handling errors by logging them if it's a new
error (don't busy the log with duplicate errors), eventually failing if too
many errors are seen.

the problem is that

errors[-2] != errors[-1]

is always true. as the first bit of code shows, the '==' operator seems to be
implemented in a counterintuitive way. doesn't it make sense that exceptions
with the same class, message, and backtrace should be considered the same?

looking briefly at error.c leads me to believe that it's Object#== that's used
to compare exceptions - does this make sense?

I guess no one ever bothered because exceptions are usually not stored
somewhere. You just throw them, catch them and print them. But you usually
don't put them in some kind of collection. You could solve this either by
using your comparison code (that compares stack traces etc.) or by converting
the error to a string and comparing those.
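For illustration, the string route boils down to comparing the messages (a
sketch; under the 1.8 semantics discussed here the plain == comparison is
false):

a = RuntimeError.new 'foobar'
b = RuntimeError.new 'foobar'

a == b             # => false (Object#== identity, as discussed above)
a.to_s == b.to_s   # => true  (to_s yields the message text only)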

Some additional remarks about your implementation:

- I'd put the exception only into the array if it's the first one of this
type. Maybe you have good reasons to do it otherwise (to be able to later
detect duplicates for example).

- Recursion is an inefficient way to do what you want; rather, use 'retry'.

- Your method silently exits if it's a repeated error. IMHO that's a bad
solution since the error goes totally unnoticed (it's not logged and not
raised either).

- I don't like the idea that the block receives the errors array, but
that's probably needed to be able to react on this.

So this would be my implementation of your method:

def error_wrap errors = [], &block
  begin
    block.call errors
  rescue => e
    if errors.size >= @fault_tolerance
      raise
    elsif errors.empty? or errors[-1].backtrace != e.backtrace
      errors << e
      @logger.warn { e }
    end

    retry
  end
end
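A hypothetical call site, assuming @fault_tolerance and @logger are set up as
in your snippet (the path is purely illustrative):

require 'logger'

@fault_tolerance = 3
@logger = Logger.new STDERR

error_wrap { File.read '/nfs/some/flaky/file' }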

On the broader perspective: personally I don't think having a *general*
mechanism that automatically retries code when an exception is thrown is a
good idea - whether or not you can repeat the code depends entirely on the
code at hand, and I'd guess that always repeating it is generally not a good
idea. But then you might have multiple very similar pieces of code where it
does indeed make sense to have some mechanism.

Kind regards

robert
 

Ara.T.Howard

I guess no one ever bothered because exceptions are usually not stored
somewhere. You just throw them, catch them and print them. But you usually
don't put them in some kind of collection. You could solve this either by
using your comparison code (that compares stack traces etc.) or by converting
the error to a string and comparing those.

Some additional remarks about your implementation:

- I'd put the exception only into the array if it's the first one of this
type. Maybe you have good reasons to do it otherwise (to be able to later
detect duplicates for example).

yes - unique errors are warned about on the way; iff we eventually succeed,
that's all that happens. however, if we eventually surpass the number of
allowed errors then ALL of them are logged at a fatal level (in order and
including duplicates for completeness) and the program then exits with a
failing status. the code i posted is partial.
- Recursion is an inefficient way to do what you want; rather, use 'retry'.

ruby is inefficient. this code is part of a clustering system that chugs
along running jobs that take between 30 minutes and 5 days, so anything under
15 minutes is considered 'extremely efficient' ;-)
- Your method silently exits if it's a repeated error. IMHO that's a bad
solution since the error goes totally unnoticed (it's not logged and not
raised either).

??

~ > ruby a.rb
W, [2004-08-24T11:01:27.799685 #17735] WARN -- : this is logged exactly once (RuntimeError)
a.rb:19
a.rb:19:in `times'
a.rb:19
a.rb:19:in `call'
a.rb:4:in `error_wrap'
a.rb:19
a.rb:19
a.rb:19: this is logged exactly once (RuntimeError)
from a.rb:19:in `times'
from a.rb:19
from a.rb:19:in `call'
from a.rb:4:in `error_wrap'
from a.rb:12:in `error_wrap'
from a.rb:11
from a.rb:19


~ > cat a.rb
require 'logger'

def error_wrap errors = [], &block
  begin
    block.call errors
  rescue => e
    errors << e
    if errors.size >= @fault_tolerance
      raise
    else
      if errors.size == 1 or (errors.size >= 2 and errors[-2] != errors[-1])
        @logger.warn{ errors.last }
        error_wrap errors, &block
      end
    end
  end
end

@fault_tolerance = 2
@logger = Logger.new STDERR

error_wrap{ 2.times{ raise 'this is logged exactly once' } }

??
- I don't like the idea that the block receives the errors array, but
that's probably needed to be able to react on this.

it lets some routines exit prematurely if certain very specific conditions
are met - but the general rule is to make @fault_tolerance attempts.

i use a general rule when i can't think of anything good to pass to a block -
if there is an execution context, pass that; otherwise pass nothing. in this
case the list of encountered errors is the context.
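for example, a block could use that context to bail out early - a sketch only,
'do_the_real_work' is a hypothetical method:

error_wrap do |errors|
  # bail out (without retrying further) if we've already hit a stale handle
  next if errors.any? { |e| e.is_a? Errno::ESTALE }
  do_the_real_work   # hypothetical
end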

So this would be my implementation of your method:

def error_wrap errors = [], &block
  begin
    block.call errors
  rescue => e
    if errors.size >= @fault_tolerance
      raise
    elsif errors.empty? or errors[-1].backtrace != e.backtrace
      errors << e
      @logger.warn { e }
    end

    retry
  end
end

sure - this is fine too.
On the broader perspective: personally I don't think having a *general*
mechanism that automatically retries code when an exception is thrown is a
good idea - whether or not you can repeat the code depends entirely on the
code at hand, and I'd guess that always repeating it is generally not a good
idea. But then you might have multiple very similar pieces of code where it
does indeed make sense to have some mechanism.

many, many, many pieces of code. if one is writing a production system that
runs on NFS it is unavoidable that many unexpected and rarely occurring errors
will occur. you may think it's only ESTALE, but then it might be EAGAIN, and
then it might be EWOULDBLOCK, and then the sysadmins mount the system with
different options and you have a whole new suite of possible errors. in short,
this system must not stop - it can have errors in the short term - but it must
recover from them and continue to run. in fact, the code that runs this is
also an 'immortal' daemon - one that restarts itself on any exit condition
other than success.

cheers.

-a
--
===============================================================================
| EMAIL :: Ara [dot] T [dot] Howard [at] noaa [dot] gov
| PHONE :: 303.497.6469
| A flower falls, even though we love it;
| and a weed grows, even though we do not love it.
| --Dogen
===============================================================================
 

Robert Klemme

I guess no one ever bothered because exceptions are usually not stored
somewhere. You just throw them, catch them and print them. But you usually
don't put them in some kind of collection. You could solve this either by
using your comparison code (that compares stack traces etc.) or by converting
the error to a string and comparing those.

Some additional remarks about your implementation:

- I'd put the exception only into the array if it's the first one of this
type. Maybe you have good reasons to do it otherwise (to be able to later
detect duplicates for example).

yes - unique errors are warned about on the way; iff we eventually succeed,
that's all that happens. however, if we eventually surpass the number of
allowed errors then ALL of them are logged at a fatal level (in order and
including duplicates for completeness) and the program then exits with a
failing status. the code i posted is partial.
Aha.
- Recursion is an inefficient way to do what you want; rather, use 'retry'.

ruby is inefficient. this code is part of a clustering system that chugs
along running jobs that take between 30 minutes and 5 days, so anything under
15 minutes is considered 'extremely efficient' ;-)
- Your method silently exits if it's a repeated error. IMHO that's a bad
solution since the error goes totally unnoticed (it's not logged and not
raised either).

??

~ > ruby a.rb
W, [2004-08-24T11:01:27.799685 #17735] WARN -- : this is logged exactly once (RuntimeError)
a.rb:19
a.rb:19:in `times'
a.rb:19
a.rb:19:in `call'
a.rb:4:in `error_wrap'
a.rb:19
a.rb:19
a.rb:19: this is logged exactly once (RuntimeError)
from a.rb:19:in `times'
from a.rb:19
from a.rb:19:in `call'
from a.rb:4:in `error_wrap'
from a.rb:12:in `error_wrap'
from a.rb:11
from a.rb:19


~ > cat a.rb
require 'logger'

def error_wrap errors = [], &block
  begin
    block.call errors
  rescue => e
    errors << e
    if errors.size >= @fault_tolerance
      raise
    else
      if errors.size == 1 or (errors.size >= 2 and errors[-2] != errors[-1])
        @logger.warn{ errors.last }
        error_wrap errors, &block
      end
    end
  end
end

@fault_tolerance = 2
@logger = Logger.new STDERR

error_wrap{ 2.times{ raise 'this is logged exactly once' } }

??

Setting @fault_tolerance to 2 and comparing exceptions with != (which you
figured does not work as expected) lead to a false impression. Try this
(btw, 2.times{} is ineffective since the first invocation throws):

def error_wrap errors = [], &block
  begin
    block.call errors
  rescue => e
    errors << e
    if errors.size >= @fault_tolerance
      raise
    else
      if errors.size == 1 or
          (errors.size >= 2 and errors[-2].to_s != errors[-1].to_s)
        $stderr.puts e
        error_wrap errors, &block
      end
    end
  end
end

@fault_tolerance = 10
error_wrap{ raise 'this is logged exactly once' }
puts "no exception here"

Output is

$ ruby x.rb
this is logged exactly once
no exception here

Note also that, because of the recursion, backtraces will never compare
equal.
- I don't like the idea that the block receives the errors array, but
that's probably needed to be able to react on this.

it lets some routines exit prematurely if certain very specific conditions
are met - but the general rule is to make @fault_tolerance attempts.

i use a general rule when i can't think of anything good to pass to a block -
if there is an execution context, pass that; otherwise pass nothing. in this
case the list of encountered errors is the context.

So this would be my implementation of your method:

def error_wrap errors = [], &block
  begin
    block.call errors
  rescue => e
    if errors.size >= @fault_tolerance
      raise
    elsif errors.empty? or errors[-1].backtrace != e.backtrace
      errors << e
      @logger.warn { e }
    end

    retry
  end
end

sure - this is fine too.

But it behaves differently (see above). Apart from that, because it uses
retry, this code does not have the problem of changing stack traces.
many, many, many pieces of code. if one is writing a production system that
runs on NFS it is unavoidable that many unexpected and rarely occurring errors
will occur. you may think it's only ESTALE, but then it might be EAGAIN, and
then it might be EWOULDBLOCK, and then the sysadmins mount the system with
different options and you have a whole new suite of possible errors. in short,
this system must not stop - it can have errors in the short term - but it must
recover from them and continue to run. in fact, the code that runs this is
also an 'immortal' daemon - one that restarts itself on any exit condition
other than success.

Ah, I see. Why don't you just use a general retry as in:

def exec_retry(tries = 3)
  begin
    yield
  rescue Exception
    tries -= 1
    raise if tries == 0
    retry
  end
end
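A hypothetical usage (the path is purely illustrative):

# retry a flaky read up to 5 times before letting the exception propagate
data = exec_retry(5) { File.read '/nfs/some/path' }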

Kind regards

robert
 

Ara.T.Howard

Note also that, because of the recursion, backtraces will never compare
equal.

right you are (and above) - i took your code.
Ah, I see. Why don't you just use a general retry as in:

def exec_retry(tries = 3)
  begin
    yield
  rescue Exception
    tries -= 1
    raise if tries == 0
    retry
  end
end


it's long and terrible, here's a snippet from my post to the nfs list, it's
what i'm implementing:

From (e-mail address removed) Mon Aug 23 09:24:34 2004
Date: Mon, 23 Aug 2004 09:18:11 -0600 (MDT)
From: Ara.T.Howard <[email protected]>
To: (e-mail address removed)
Subject: [NFS] client side - application level 'dead' lock detection


nfs gurus-

i have an application which does a lot of byte-range locking on an nfs mounted
file. twice now, i've seen a 'dead' lock appear on our nfs server - a lock
held by a non-existent pid. none of the processes in question have been dying
unexpectedly (or exiting at all, actually), nor have they been shut down
uncleanly using kill -9, for instance. basically a dead lock simply appears
after some length of time during long periods of successful locking and file
usage - at this point after about 3 weeks of 24/7 lock usage. the file is
never corrupt and i have found that i can clear the dead locks using

mv locked locked.tmp
mv locked.tmp locked

of course, this still leaves a lock on SOME file on the server, but not my
file.

discovering this, i've been thinking of a way to determine via my application
WHEN this situation has arisen and to automatically recover from it. here is
my algorithm so far; it attempts to make the determination that

upon finding this, automatic recovery of the system is attempted.

i'd appreciate any insight as to its effectiveness, taking into account the
subtleties of real-world nfs behaviour. the following
pre-conditions/assumptions/definitions apply:

- a lockd impl that works, excepting for the occasional 'dead' lock existing
  on the server which prevents all clients from obtaining any new locks

- non-blocking lock types do not block, even in the presence of such locks

- moving a file out of the way, and back again, clears such 'dead' locks
  (they may still exist on the server, but clients will be able to lock the
  'new' file)

- the 'file in question' is the file under heavy usage/locking

- a monitor file will be used alongside the file in question. this file is
  simply a zero-length file that all applications will use in addition to
  the file in question.

- a recovery file is simply an empty file used to mark the time of lock
  recovery

- a 'refresher' thread is one which simply loops touching a file and
  sleeping. technically this could be either a thread or a process, so long
  as it does not go to sleep when its process issues a blocking operation
  (like fcntl).

- an auto-recovering lockfile library which uses link(2) - atomic, safe
  over nfs, and containing no bugs ( ;-) ) - exists (i have an impl).


here is the algorithm

0.
   attempt to apply a non-blocking lock of type write/read to the monitor
   file

1.
   a. monitor lock success

      start a refresher thread for the monitor file and attempt to apply a
      non-blocking lock of type write/read (same lock as the one on the
      monitor file) to the file in question - this should always succeed;
      if it does not, raise an error - the algorithm, or something else,
      has failed.

      having acquired both locks, call the callback for the file in
      question and, when it is complete, kill the refresher thread, unlock
      the file in question, and unlock the monitor file.

      if estale is encountered during 1.a. sleep and retry on 0.

   b. monitor lock failure

      iff the monitor file is stale (mtime < now - max_age)

         some process must have died uncleanly while holding the lock (or
         the network/cpu has become very slow for that client) - we
         attempt recovery:

         mark recovery_start_time

         create an nfs safe lockfile to serialize recovery among clients.
         this is a blocking operation.

         iff recovery file exists and is newer than recovery_start_time

            someone else has recovered, sleep and retry on 0.

         else recovery file is older than recovery_start_time or does not
         exist

            recover:

               for file in (monitor file_in_question)
                  mv file file.tmp && mv file.tmp file
               end

               (note that this could cause estale in some other client -
               but they are prepared to deal with this condition)

               touch recovery file

               rm lockfile

            sleep and retry on 0.

      else monitor file is not stale

         some other process must have the lock (its refresher thread is
         running) - sleep and retry on 0.

thanks in advance for any inputs/critiques.





so all the error catching is for applications that happen to be using the
'file in question' at the time of recovery. basically i'm trying to overcome
an error in lockd - it's not easy ;-(


kind regards.

-a
--
===============================================================================
| EMAIL :: Ara [dot] T [dot] Howard [at] noaa [dot] gov
| PHONE :: 303.497.6469
| A flower falls, even though we love it;
| and a weed grows, even though we do not love it.
| --Dogen
===============================================================================




thanks for the help.

-a
--
===============================================================================
| EMAIL :: Ara [dot] T [dot] Howard [at] noaa [dot] gov
| PHONE :: 303.497.6469
| A flower falls, even though we love it;
| and a weed grows, even though we do not love it.
| --Dogen
===============================================================================
 

Robert Klemme

right you are (and above) - i took your code.



it's long and terrible, here's a snippet from my post to the nfs list, it's
what i'm implementing:

<snip>quote</snip>

Uuuh! That sounds ugly! I don't envy you.

Kind regards

robert
 

Yukihiro Matsumoto

Hi,

In message "POLS - exception comparisons"

|doesn't it make sense that exceptions
|with the same class, message, and backtrace should be considered the same?

Interesting idea. I will add this to 1.9 and see how well it works.
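The proposed semantics, sketched as a monkey patch (an illustration of the
idea only, not the eventual 1.9 implementation):

class Exception
  def ==(other)
    other.is_a?(Exception) and
      self.class == other.class and
      message   == other.message and
      backtrace == other.backtrace
  end
end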

matz.
 

Martin DeMello

i have an application which does a lot of byte-range locking on an nfs
mounted file. twice now, i've seen a 'dead' lock appear on our nfs
server - a lock held by a non-existent pid. none of the processes in
question have been dying

When you figure it out, could you post a followup here? Sounds like
something it'd be useful to know about.

martin
 

Pit Capitain

Yukihiro said:
In message "POLS - exception comparisons"

|doesn't it make sense that exceptions
|with the same class, message, and backtrace should be considered the same?

Interesting idea. I will add this to 1.9 and see how well it works.

I can't express exactly why, but to me an exception should (at least logically)
contain the context in which it occurred. By context I not only mean what went
wrong (class, message) and where it occurred (backtrace), but also when
(time, bindings, etc). Something like a continuation.

From that point of view two exceptions could never be the same.

But since exceptions currently don't contain something like a continuation, I'd
be fine with the proposed change.

Regards,
Pit
 

Jamis Buck

Ara.T.Howard said:
the problem is that

errors[-2] != errors[-1]

is always true. as the first bit of code shows, the '==' operator seems to be
implemented in a counterintuitive way. doesn't it make sense that exceptions
with the same class, message, and backtrace should be considered the same?

looking briefly at error.c leads me to believe that it's Object#== that's used
to compare exceptions - does this make sense?

It depends on what you are asking when you want to know if two
exceptions are the same. I can see where, on one hand, you would want
them to be equal if they contain the same information, but on the other
hand, sometimes you might want to know: are these two objects the same
exception that was thrown at some instant x? In that case, it isn't
sufficient to merely check the exception's contents, you have to make
sure they have the same id and are literally the same object.

Don't know if I've thrown any light on the subject or not, Ara. :) But I
can see value in having exceptions compared both ways.
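To illustrate the identity side of that distinction (a small sketch with
made-up values):

a = RuntimeError.new "boom"
b = RuntimeError.new "boom"

a.equal?(a)                 # => true  -- literally the same object
a.equal?(b)                 # => false -- same contents, different objects
a.class == b.class &&
  a.message == b.message    # => true  -- "same information"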

--
Jamis Buck
(e-mail address removed)
http://www.jamisbuck.org/jamis

"I use octal until I get to 8, and then I switch to decimal."
 

Ara.T.Howard

When you figure it out, could you post a followup here? Sounds like
something it'd be useful to know about.

martin

it's figured out. i have it coded in a nice and ugly fashion and have been
testing it for the last 24 hours with a bit of code that 'breaks' the lock
every few seconds and forces recovery in the clients. note that the entire
thing is an emergency-only procedure that takes place ONLY when the system is
already broken (hung locks) and so the solution doesn't have to be 'perfect'.
i'm not talking about race conditions, just that it's difficult for one system
to recover the locking atomically in the presence of broken locks (for obvious
reasons) and, therefore, to ensure not killing some remote client. in my case
this is ideal: my remote clients are remote immortal daemons that will restart
if killed and, in fact, i want this to happen. what i don't want to happen is
for my entire system to hang. i think it's acceptable to say that if your
nfs locking implementation breaks my code may break too; however, it recovers
afterwards automatically. so, in that context - here is the basic solution


here is the basic algorithm

- an empty monitor file will be used in conjunction with the file in
  question. the reason we need an extra file is so we can make guarantees
  about the way in which it is updated (mtime). this file is kept in a
  directory alongside the file in question. this makes recovery much
  easier (atomic).

- apply your lock type (write/read) to the monitor file in non-blocking
  fashion.

  if lock succeeded

    proceed to apply the same lock type, also in non-blocking fashion, to
    the file in question.

    if the lock on the file in question succeeds, then start a thread which
    will loop touching the monitor file to keep it 'fresh' - say every ten
    seconds - and proceed to use the file. note that it's critical to use
    non-blocking locks since blocking locks (in ruby) will stop this
    thread! if you must use blocking locks then a process must be forked
    to keep the monitor file fresh and you cannot use a thread. ensure
    this thread/process is killed when done.

    if the lock on the file in question fails, raise an error - the protocol
    has failed for some reason.

  if lock failed

    if the monitor file is fresh, simply sleep a bit and retry. this will
    be the normal execution path if lockd is working.

    else if the monitor file is stale, one of two things must be true:

      - a process holding the lock has died and the nfs client/server have
        not managed to clean things up (hung lock - lockd bug)

      - a process holding the lock is partitioned on a slow network,
        running on a frozen cpu, or somehow has the lock but cannot keep
        the monitor file fresh. there is a chance that recovery may kill
        this process - but it is already sick and hanging the system.

      in either case we attempt lockd recovery, taking the risk of killing
      a sick remote client. note that MANY remote clients might realize
      the hung situation at once, and so they themselves need to serialize
      recovery without using fcntl locks since those locks may be hung!

      lockd recovery involves:

        - mark recovery start time

        - create an nfs safe lockfile (my lockfile lib). this is a
          'blocking' (poll/sleep) operation to ensure only one remote
          process is attempting recovery at a time.

        - see if someone else has already recovered (flag file exists with
          a timestamp greater than recovery start time). iff so, quit
          recovery and go back to attempting to get the first lock

        - since we have the lockfile and no one else has recovered:

            #
            # prevent new processes from using either file
            #
            mv directory containing files to dir.bak
            #
            # clear lockd locks - (force new inode info)
            #
            for file in monitor, file in question
              cp file tmp
              rm file
              mv tmp file
            end
            #
            # mark recovery time
            #
            touch directory/lockd_recovery
            #
            # restore system
            #
            mv dir.bak directory

        - retry to acquire lock
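a minimal ruby sketch of the 'lock succeeded' path above - it uses flock and
hypothetical paths for brevity, where the real system uses fcntl byte-range
locks over nfs:

require 'fileutils'

MONITOR = 'lock.d/monitor'   # hypothetical monitor file
LOCKED  = 'lock.d/locked'    # hypothetical file in question
MAX_AGE = 60                 # seconds before the monitor counts as 'stale'

def with_monitored_lock
  File.open(MONITOR, 'r+') do |mon|
    unless mon.flock(File::LOCK_EX | File::LOCK_NB)
      # could not get the monitor lock - decide between 'busy' and 'stale'
      return (Time.now - File.mtime(MONITOR)) > MAX_AGE ? :stale : :busy
    end
    # keep the monitor 'fresh' while we hold the lock
    refresher = Thread.new { loop { FileUtils.touch MONITOR; sleep 10 } }
    begin
      File.open(LOCKED, 'r+') do |f|
        raise 'protocol failure' unless f.flock(File::LOCK_EX | File::LOCK_NB)
        yield f
      end
    ensure
      refresher.kill
    end
  end
end

# callers would sleep and retry on :busy, and attempt lockd recovery on :stale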


in my case all remote clients are prepared to get a suite of errors during
transactions, such as Errno::ENOENT, Errno::ESTALE, etc. they allow quite a
few of these by sleeping and retrying but eventually give up and die. the
retry is o.k. because all access to the file must be in a transaction (so it
is by definition o.k. to re-execute on failure), and even if many retries are
made and the transaction is lost - the daemon will exit and restart (logging
this info). this last situation would be bad - but not compared to having the
entire system freeze - again, this is ONLY an emergency effort.


regards.



-a
--
===============================================================================
| EMAIL :: Ara [dot] T [dot] Howard [at] noaa [dot] gov
| PHONE :: 303.497.6469
| A flower falls, even though we love it;
| and a weed grows, even though we do not love it.
| --Dogen
===============================================================================
 

Martin DeMello

it's figured out. i have it coded in a nice and ugly fashion and have been
testing it for the last 24 hours with a bit of code that 'breaks' the lock
every few seconds and forces recovery in the clients. note that the
entire thing is an emergency-only procedure that takes place ONLY when
the system is already broken (hung locks) and so the solution doesn't
have to be 'perfect'.

Interesting. Did it turn out to be a known feature of nfs causing the
hung locks, or is it a genuine bug?

martin
 

Ara.T.Howard

Interesting. Did it turn out to be a known feature of nfs causing the
hung locks, or is it a genuine bug?

a genuine bug - supposedly fixed in future rh kernels, but we are stuck using
the latest 'official' rh kernel, which does contain it. also, from talks on
the nfs list, i've determined that it's not an uncommon problem - one that
people who make heavy use of nfs see from time to time in various impls.
because of that, it's best just to be prepared for it. in 4 months of running
my code i've seen it happen only once, but it's a bit of a deal breaker. in
the end, even an attempt to resolve it is better than nothing since code
experiencing it is already broken.

regards.

-a
--
===============================================================================
| EMAIL :: Ara [dot] T [dot] Howard [at] noaa [dot] gov
| PHONE :: 303.497.6469
| A flower falls, even though we love it;
| and a weed grows, even though we do not love it.
| --Dogen
===============================================================================
 
