noob question: "TypeError" wrong number of args

Holger

Hi guys

Tried searching for a solution to this, but the error message is so
generic that I could not get any meaningful results.

Anyway - error message:
----------------------------------------------------
TypeError: addFile() takes exactly 1 argument (2 given)
----------------------------------------------------

The script is run with two args "arg1" and "arg2":
----------------------------------------------------
import sys

class KeyBase:
    def addFile(file):
        print "initialize the base with lines from this file"

print "These are the args"
print "Number of args %d" % len(sys.argv)
print sys.argv
print sys.version_info
print sys.version

f = sys.argv[1]
print "f = '%s'" % f
b = KeyBase()

b.addFile(f)
----------------------------------------------------

The output - including error message
(looks like stdout and stderr are a bit out of sync...):
----------------------------------------------------
These are the args
Traceback (most recent call last):

Number of args 3
['C:\\home\\<.. bla bla snip ...>\\bin\\test.py', 'arg1', 'arg2']
(2, 4, 2, 'final', 0)
2.4.2 (#67, Oct 30 2005, 16:11:18) [MSC v.1310 32 bit (Intel)]
f = 'arg1'

File "C:\Program Files\ActiveState Komodo
3.5\lib\support\dbgp\pythonlib\dbgp\client.py", line 1806, in runMain
self.dbg.runfile(debug_args[0], debug_args)
File "C:\Program Files\ActiveState Komodo
3.5\lib\support\dbgp\pythonlib\dbgp\client.py", line 1529, in runfile
h_execfile(file, args, module=main, tracer=self)
File "C:\Program Files\ActiveState Komodo
3.5\lib\support\dbgp\pythonlib\dbgp\client.py", line 590, in __init__
execfile(file, globals, locals)
File "C:\home\hbille\projects\bc4rom\bin\test.py", line 20, in
__main__
b.addFile(f)
TypeError: addFile() takes exactly 1 argument (2 given)
----------------------------------------------------

I'm running this inside ActiveState Komodo on WinXP.

Hope one of you wizards can give me pointers to what I'm doing wrong,
or advise me what to modify in my setup.

Thank you!

Regards,
Holger
 
Fredrik Lundh

Holger said:
Tried searching for a solution to this, but the error message is so
generic that I could not get any meaningful results.

Anyway - error message:
----------------------------------------------------
TypeError: addFile() takes exactly 1 argument (2 given)
----------------------------------------------------

The script is run with two args "arg1" and "arg2":
----------------------------------------------------
import sys

class KeyBase:
    def addFile(file):
        print "initialize the base with lines from this file"

when defining your own classes, you must spell out the "self"
argument in your method definitions:

    def addFile(self, file):
        print "initialize the base with lines from this file"

see:

http://pyfaq.infogami.com/what-is-self

</F>
 
Ben Finney

Holger said:
----------------------------------------------------
TypeError: addFile() takes exactly 1 argument (2 given)
----------------------------------------------------

----------------------------------------------------
import sys

class KeyBase:
    def addFile(file):
        print "initialize the base with lines from this file"

You've misunderstood -- or never followed -- the tutorial, especially
how Python does object methods. Please follow the whole tutorial
through, understanding each example as you work through it. You'll
then have a solid basis of knowledge to go on with.

<URL:http://docs.python.org/tut/>
 
Holger

I guess I deserved that. :-(
I *did* read the tutorial, but then I forgot and didn't notice...
My brain is getting slow - so thx for the friendly slap in the face
;-)
 
Edward Elliott

Holger said:
oops, that was kinda embarrassing.

It's really not. You got a completely unhelpful error message saying you
passed 2 args when you only passed one explicitly. The fact that b is also
an argument to b.addFile(f) is totally nonobvious until you know that 1) b
is an object, not a module*, and 2) objects pass references to themselves as
the first argument to their methods. The syntax "b." is completely
different from the syntax of any other type of parameter.

The mismatch between the number of parameters declared in the method
signature and the number of arguments actually passed is nonobvious,
unintuitive, and would trip up anybody who didn't already know what was
going on. It's ugly and confusing. It's definitely a wart on the
language.

Making people pass 'self' explicitly is stupid because it always has to be
the first argument, leading to these kinds of mistakes. The compiler
should handle it for you - and no, explicit is not *always* better than
implicit, just often and perhaps usually. While it's easy to recognize
once you know what's going on, that doesn't make it any less of a wart.

* technically modules may be objects also, but in practice you don't declare
self as a parameter to module functions
 
Steve Holden

Edward said:
It's really not. You got a completely unhelpful error message saying you
passed 2 args when you only passed one explicitly. The fact that b is also
an argument to b.addFile(f) is totally nonobvious until you know that 1) b
is an object, not a module*, and 2) objects pass references to themselves as
the first argument to their methods. The syntax "b." is completely
different from the syntax of any other type of parameter.
Specifically, perhaps it would be better to say "b is an instance of
some Python class or type".

Objects don't actually "pass references to themselves". The interpreter
adds the bound instance as the first argument to a call on a bound method.
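
To make that concrete, a small sketch reusing the KeyBase example from
the start of the thread - a call on a bound method amounts to calling
the unbound method with the instance supplied explicitly:

    class KeyBase:
        def addFile(self, file):
            print "adding", file

    b = KeyBase()
    b.addFile("arg1")             # the interpreter supplies b as 'self'
    KeyBase.addFile(b, "arg1")    # the equivalent explicit call on the unbound method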

I agree that the error message should probably be improved for the
specific case of the wrong number of arguments to a bound method (and
even more specifically when the number of arguments is out by exactly
one - if there's one too many then self may have been omitted from the
parameter list).
The mismatch between the number of parameters declared in the method
signature and the number of arguments actually passed is nonobvious,
unintuitive, and would trip up anybody who didn't already know what was
going on. It's ugly and confusing. It's definitely a wart on the
language.
Sorry, it's a wart on your brain. Read Guido's arguments in favor of an
explicit self argument again before you assert this so confidently. It's
certainly confusing to beginners, but there are actually quite sound
reasons for it (see next paragraph).
Making people pass 'self' explicitly is stupid because it always has to be
the first argument, leading to these kinds of mistakes. The compiler
should handle it for you - and no, explicit is not *always* better than
implicit, just often and perhaps usually. While it's easy to recognize
once you know what's going on, that doesn't make it any less of a wart.
Hmm. I see. How would you then handle the use of unbound methods as
first-class objects? If self is implicitly declared, that implies that
methods can only be used when bound to instances. How, otherwise, would
you have an instance call its superclass's __init__ method if it's no
longer valid to say

class myClass(otherClass):
    def __init__(self):
        otherClass.__init__(self)
        ...
* technically modules may be objects also, but in practice you don't declare
self as a parameter to module functions

The reason you don't do that is because the functions in a module are
functions in a module, not methods of (some instance of) a class.
Modules not only "may be" objects, they *are* objects, but the functions
defined in them aren't methods. What, in Python, *isn't* an object?

regards
Steve
 
Edward Elliott

Steve said:
Objects don't actually "pass references to themselves". The interpreter
adds the bound instance as the first argument to a call on a bound method.

Sure, if you want to get technical. For that matter, objects don't actually
call their methods either -- the interpreter looks up the method name in a
function table and dispatches it. I don't see how shorthand talk about
objects as actors hurts anything unless we're implementing an interpreter.

Sorry, it's a wart on your brain.

Fine it's a wart on my brain. It's still a wart.
Read Guido's arguments in favor of an
explicit self argument again before you assert this so confidently.

I would if I could find it. I'm sure he had good reasons, they may even
convince me. But from my current perspective I disagree.
It's
certainly confusing to beginners, but there are actually quite sound
reasons for it (see next paragraph).

While confusion for beginners is a problem, that's not such a big deal.
It's a trivial fix that they see once and remember forever. What I mind is
its ugliness, that the language makes me do work declaring self when it
knows damn well it won't like my code until I do what it wants (yes I'm
anthropomorphizing interpreters now). The interpreter works for me, I
don't work for it. Things it can figure out automatically, it should
handle.

Hmm. I see. How would you then handle the use of unbound methods as
first-class objects? If self is implicitly declared, that implies that
methods can only be used when bound to instances.

I fail to see the problem here. I'm talking about implicit declaration on
the receiving end. It sounds like you're talking about implicit passing on
the sending end. The two are orthogonal. I can declare
def amethod (a, b):
and have self received implicitly (i.e. get the object instance bound by the
interpreter to the name self). The sender still explicitly provides the
object instance, e.g.
obj.amethod (a,b)
or
class.amethod (obj, a, b)
IOW everything can still work exactly as it does now, only *without me
typing self* as the first parameter of every goddamn method I write. Does
that make sense?
How, otherwise, would
you have an instance call its superclass's __init__ method if it's no
longer valid to say

class myClass(otherClass):
    def __init__(self):
        otherClass.__init__(self)
        ...

Like this:
class myClass(otherClass):
    def __init__():
        otherClass.__init__(self)

self is still there and still bound, I just don't have to type it out. The
interpreter knows where it goes and what it does, automate it already!
The reason you don't do that is because the functions in a module are
functions in a module, not methods of (some instance of) a class.
Modules not only "may be" objects, they *are* objects, but the functions
defined in them aren't methods. What, in Python, *isn't* an object?

If it looks like a duck and it quacks like a duck... Functions and methods
look different in their declaration but the calling syntax is the same.
It's not obvious from the dot notation syntax where the 'self' argument
comes from. Some interpreter magic goes on behind the scenes. Great, I'm
all for it, now why not extend that magic a little bit further?
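
A rough illustration of that point (hypothetical snippet; os.path is
just a convenient stand-in for "some module"):

    import os.path

    os.path.exists("spam.txt")    # dotted call on a module: nothing extra is inserted
    "spam".upper()                # dotted call on an instance: "spam" is inserted as self

The two call sites look alike; only in the second case does the
interpreter insert the object to the left of the dot as the first
argument.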
 
bruno at modulix

Edward said:
It's really not. You got a completely unhelpful error message saying you
passed 2 args when you only passed one explicitly. The fact that b is also
an argument to b.addFile(f) is totally nonobvious until you know that 1) b
is an object, not a module*, and 2) objects pass references to themselves as
the first argument to their methods.

Nope. It's the MethodType object (a descriptor) that wraps the function
that does the job. The object itself is totally unaware of this.
The syntax "b." is completely
different from the syntax of any other type of parameter.

The mismatch between the number of parameters declared in the method
signature and the number of arguments actually passed

There's no mismatch at this level. The arguments passed to the *function*
object wrapped by the method actually match the *function* signature.
is nonobvious,
unintuitive, and would trip up anybody who didn't already know what was
going on. It's ugly and confusing. It's definitely a wart on the
language.

I do agree that the error message is really unhelpful for newbies (now I
don't know how difficult/costly it would be to correct this).
Making people pass 'self'

s/self/the instance/
explicitly is stupid

No. It's actually a feature.

because it always has to be
the first argument, leading to these kinds of mistakes. The compiler
should handle it for you

I don't think this would be possible if we want to keep the full
dynamism of Python. How then could the compiler handle the following code ?

class MyObj(object):
    def __init__(self, name):
        self.name = name

def someFunc(obj):
    try:
        print obj.name
    except AttributeError:
        print "obj %s has no name" % obj

import types
m = MyObj('parrot')
m.someMeth = types.MethodType(someFunc, m, m.__class__)
m.someMeth()
- and no, explicit is not *always* better than
implicit, just often and perhaps usually. While it's easy to recognize
once you know what's going on, that doesn't make it any less of a wart.

* technically modules may be objects also,

s/may be/are/
but in practice you don't declare
self as a parameter to module functions

def someOtherFunc():
    print "hello there"

m.someFunc = someOtherFunc
m.someFunc()
 
Edward Elliott

bruno said:
Nope. It's the MethodType object (a descriptor) that wraps the function
that do the job. The object itself is totally unaware of this.

It's shorthand, not to be taken literally.

s/self/the instance/


No. It's actually a feature.

potato, potahto.

I don't think this would be possible if we want to keep the full
dynamism of Python. How then could the compiler handle the following code
?

class MyObj(object):
    def __init__(self, name):
        self.name = name

class MyObj(object):
    def __init__(name):
        self.name = name

And the rest should work fine. When the interpreter sees a method
declaration, it can automatically 1) add the object instance parameter to
the signature, and 2) automatically bind the name self to the object
instance on dispatch. Everything else is just as before.
def someOtherFunc():
print "hello there"

m.someFunc = someOtherFunc
m.someFunc()

Complete non sequitur, what does this have to do with self?
 
bruno at modulix

Edward said:
It's shorthand, not to be taken literally.

It is to be taken literally. Either you talk about how Python actually
works or the whole discussion is useless.
potato, potahto.

tss...




class MyObj(object):
    def __init__(name):
        self.name = name

You skipped the interesting part, so I repost it and ask again: how
could the following code work without the instance being an explicit
parameter of the function to be used as a method ?

def someFunc(obj):
    try:
        print obj.name
    except AttributeError:
        print "obj %s has no name" % obj

import types
m = MyObj('parrot')
m.someMeth = types.MethodType(someFunc, m, m.__class__)
m.someMeth()

You see, wrapping a function into a method is not done at compile-time,
but at runtime. And it can be done manually outside a class statement.
In the above example, someFunc() can be used as a plain function.

In fact, almost any existing function taking at least one argument can
be turned into a method (in theory at least - practically, you of course
need to make sure the first argument is of a compatible type). This
wouldn't work with some automagical injection of the instance in the
function's local namespace, because you would then have to write
"method"'s code diffently from function's code.
And the rest should work fine. When the interpreter sees a method
declaration,

The interpreter never sees a 'method declaration', since there is no
such thing as a 'method declaration' in Python. The def statement
creates a *function* object:
... def test(self):
...     pass
... print "type(test) is : ", type(test)
...
type(test) is :  <type 'function'>

Edward said:
Complete non sequitur, what does this have to do with self?

It has to do that the obj.name() syntax doesn't imply a *method* call -
it can as well be a plain function call. Also, and FWIW:

... print self.name
...
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
NameError: global name 'self' is not defined



HTH
 
Edward Elliott

bruno said:
It is to be taken literally. Either you talk about how Python
effectively works or the whole discussion is useless.

I started talking about the code-level view (programmer's perspective) so
shorthand was fine. Now that we've moved on to interpreter/compiler-level
stuff, I agree that more precision is warranted.
You skipped the interesting part, so I repost it and ask again: how
could the following code work without the instance being an explicit
parameter of the function to be used as a method ?

def someFunc(obj):
    try:
        print obj.name
    except AttributeError:
        print "obj %s has no name" % obj

import types
m = MyObj('parrot')
m.someMeth = types.MethodType(someFunc, m, m.__class__)
m.someMeth()

I posted the only part that needs modification. Here it is again with the
entire segment:

class MyObj(object):
    def __init__(name):
        self.name = name    # <== interpreter binds name 'self' to object instance;
                            #     compiler adds 'self' to method sig as 1st param.

def someFunc(obj):
    try:
        print obj.name      # <== 'obj' gets bound to first arg passed. when bound
                            #     as a method, first arg will be the object instance;
                            #     when called as a func, it will be the first actual arg.
    except AttributeError:
        print "obj %s has no name" % obj

import types
m = MyObj('parrot')
m.someMeth = types.MethodType(someFunc, m, m.__class__)    # <== binds m to the first
                                                           #     parameter of someFunc as usual
m.someMeth()

You see, wrapping a function into a method is not done at compile-time,
but at runtime. And it can be done manually outside a class statement.
In the above example, someFunc() can be used as a plain function.

All the parameter information has been preserved. Method signatures are
unchanged from their current form, so the interpreter has no trouble
deducing arguments. You just don't actually declare self yourself. When
binding a function to an object as above, the interpreter sees and does
exactly the same thing as now.
This
wouldn't work with some automagical injection of the instance in the
function's local namespace, because you would then have to write
"method"'s code diffently from function's code.

Maybe this will make it clearer:

Programmer's view    Compiler                           Interpreter's view
def func (a, b)      func (a, b)  -> func (a, b)        func (a, b)
def method (a)       method (a)   -> method (self, a)   method (self, a)

IOW the compiler adds 'self' to the front of the parameter list when
processing a method declaration. Interpreter sees the same signature as
now, only programmer doesn't have to write 'self' anymore.
The interpreter never sees a 'method declaration', since there is no
such thing as a 'method declaration' in Python. The def statement
creates a *function* object:

Fine, whatever, compiler sees method declaration, interpreter sees function
object. The point is, the interpreter sees the same thing it does now.
It has to do that the obj.name() syntax doesn't imply a *method* call -
it can as well be a plain function call.

Ok I see your point, but it doesn't matter because the interpreter sees the
same function object as before.

This confusion is partly (mostly? ;) my fault. I haven't been
distinguishing precisely between the interpreter and the compiler because
usually with Python it doesn't matter (in practice). This is clearly one
place it does. In the words of Douglas Adams: We apologize for the
inconvenience.

Also, and FWIW:
... print self.name
...
Traceback (most recent call last):
NameError: global name 'self' is not defined

Exactly, that was my point in the first place.
 
Edward Elliott

Bruno said:
Edward, I know I told you so at least three times, but really,
seriously, do *yourself* a favor : take time to read about descriptors
and metaclasses - and if possible to experiment a bit - so you can get a
better understanding of Python's object model. Then I'll be happy to
continue this discussion (.

Will do, if nothing else it will eliminate language barriers, which we may
be running into at this point (though you've indicated otherwise). It
probably won't happen for another week or two though. I appreciate your
patience and willingness to engage in this discussion.

As a last ditch effort to get my point across:

Compiler, interpreter, magic-codey-runny-thingy, whatever, at some point
something has to translate this source code
def method (self, a, b): something
into a function object (or whatever you're calling the runnable code this
week). Call this translator Foo. Whatever Foo is, it can insert 'self'
into the parameter list for method, e.g. when it sees "def method (a,b)" it
pretends like it saw "def method (self,a,b)" and proceeds as usual. Once it
does that, everything is exactly the same as before.

I can prove that assertion too: make a simple text processor that reads
Python source code and outputs the same source code with only one change:
insert the string 'self" as the first parameter of every "def somemethod".
Next run the output source code with the normal Python interpreter.
Everything functions *exactly* as before because the code is *exactly* the
same as what you would have written if you'd put the 'self's in there
manually. Now make the Python interpreter invoke this text processor as
the first step in processing source code. Voila, python + implicit self.

No changes to the object model.
No changes to dynamic binding.
Same "runnable" code as before.
Where is the problem in this scheme?
Or (since I haven't read up on the object model yet) simply: Is there a
problem?
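
As a rough illustration only, a sketch of the kind of preprocessor
described above (hypothetical, regex-based, and handling only the simple
one-level-of-class layout used in this thread - not a robust tool):

    import re

    DEF_RE = re.compile(r"^(\s*)def\s+(\w+)\s*\((.*)\)\s*:")
    CLASS_RE = re.compile(r"^(\s*)class\s+\w+")

    def add_implicit_self(source):
        # rewrite "def name(args)" to "def name(self, args)" for any def
        # indented deeper than the most recent class statement
        out = []
        class_indent = None
        for line in source.splitlines(True):
            m = CLASS_RE.match(line)
            if m:
                class_indent = len(m.group(1))
            m = DEF_RE.match(line)
            if m and class_indent is not None and len(m.group(1)) > class_indent:
                indent, name, args = m.groups()
                if args.strip():
                    args = "self, " + args
                else:
                    args = "self"
                line = "%sdef %s(%s):\n" % (indent, name, args)
            out.append(line)
        return "".join(out)

    print add_implicit_self(
        "class KeyBase:\n"
        "    def addFile(file):\n"
        "        pass\n")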
 
Ben Finney

Edward Elliott said:
class MyObj(object):
    def __init__(name):
        self.name = name

So the tradeoff you propose is:

- Honour "explicit is better than implicit", but users are confused
over "why do I need to declare the instance in the method
signature?"

against

- Break "explicit is better than implicit", take away some of the
flexibility in Python, and users are confused over "where the heck
did this 'self' thing come from?" or "how the heck do I refer to
the instance object?"

I don't see a net gain by going with the latter.

-1.
 
Ben Finney

Edward Elliott said:
Compiler, interpreter, magic-codey-runny-thingy, whatever, at some point
something has to translate this source code
def method (self, a, b): something
into a function object (or whatever you're calling the runnable code this
week). Call this translator Foo. Whatever Foo is, it can insert 'self'
into the parameter list for method, e.g. when it sees "def method (a,b)" it
pretend like it saw "def method (self,a,b)" and proceed as usual. Once it
does that, everything is exactly the same as before.

So now you're proposing that this be a special case when a function is
declared by that particular syntax, and it should be different to when
a function is created outside the class definition and added as a
method to the object at run-time.

Thus breaking not only "explicit is better than implicit", but also
"special cases aren't special enough to break the rules".

Still -1.
 
Edward Elliott

Ben said:
So now you're proposing that this be a special case when a function is
declared by that particular syntax, and it should be different to when
a function is created outside the class definition and added as a
method to the object at run-time.

Thus breaking not only "explicit is better than implicit", but also
"special cases aren't special enough to break the rules".

Exactly.

Hey, 'for' and 'while' are only special cases of if/goto. Why not ditch
them and get back to basics?

Rules are made to be broken, the key is when.

Method calls are special cases no matter how you slice it. Overall I think
implicit self beats explicit self for the typical case:

def method (a):
    self.a = a                # self magically appears
obj.method (x)

vs

def method (self, a):         # self explicit
    self.a = a
obj.method (x)                # arg count mismatch (except in message passing model)

Not so much for the argument mismatch problem (see start of thread), which
is easily rectified. The former is simply more elegant in my view. Less
clutter, less confusion.

Sure, if we get into calling "class.method (obj, a)" the argument mismatch
problem resurfaces with implicit self. But 1) this is a rarer case, and 2)
that's not my primary objection anyway.

As long as we're trotting out aphorisms, how about DRY: Don't Repeat
Yourself. The rule couldn't be clearer: don't repeat your SELF. ;) Yet
that's exactly what explicitly declaring self does, forces me to needlessly
repeat what everyone already knows: methods take the object instance as
their first parameter.

Whether this is a good idea is subject to debate, and I'd like to hear
discussion on the merits. What I don't want is a silly battle of maxims.
 
Edward Elliott

Ben said:
So the tradeoff you propose is:

- Honour "explicit is better than implicit", but users are confused
over "why do I need to declare the instance in the method
signature?"

against

- Break "explicit is better than implicit", take away some of the
flexibility in Python, and users are confused over "where the heck
did this 'self' thing come from?" or "how the heck do I refer to
the instance object?"

Essentially, but
1. it removes zero flexibility (everything works as before)
2. it's no more confusing than "where did this len/count/dir/type/str/any
other builtin thing come from?"
3. learning "how do i access this instance object" is certainly no harder or
less intuitive than learning "how do i initialize this instance object?"
I don't see a net gain by going with the latter.

Ok. Would you care to explain in more detail?
 
Bruno Desthuilliers

Edward Elliott wrote:
bruno at modulix wrote:
(snip)


I posted the only part that needs modification.
Nope.

Here it is again with the
entire segment:

class MyObj(object):
    def __init__(name):
        self.name = name    # <== interpreter binds name 'self' to object instance;
                            #     compiler adds 'self' to method sig as 1st param.

def someFunc(obj):
    try:
        print obj.name      # <== 'obj' gets bound to first arg passed. when bound
                            #     as a method, first arg will be the object instance;
                            #     when called as a func, it will be the first actual arg.
    except AttributeError:
        print "obj %s has no name" % obj

import types
m = MyObj('parrot')
m.someMeth = types.MethodType(someFunc, m, m.__class__)    # <== binds m to the first
                                                           #     parameter of someFunc as usual
m.someMeth()





All the parameter information has been preserved.
Method signatures are
unchanged from their current form,
so the interpreter has no trouble
deducing arguments. You just don't actually declare self yourself.

In this example, it was named 'obj', to make clear that there was
nothing special about 'self'. As you can see from the call, I didn't
actually pass the first param, since the method wrapper takes care of
it... So if we were to implement your proposition (which seems very
unlikely...), the above code *would not work* - we'd get a TypeError
because of the missing argument.
When
binding a function to an object as above, the interpreter sees and does
exactly the same thing as now.

I'm sorry, but you're just plain wrong. *Please* take time to read about
the descriptor protocol and understand Python's object model.
Maybe this will make it clearer:

Programmer's view    Compiler                           Interpreter's view
def func (a, b)      func (a, b)  -> func (a, b)        func (a, b)
def method (a)       method (a)   -> method (self, a)   method (self, a)

IOW the compiler adds 'self' to the front of the parameter list when
processing a method declaration.

1/ there is *no* 'method declaration' in Python
2/ wrapping functions into methods happens at runtime, *not* at compile
time.

(snip)
Fine, whatever, compiler sees method declaration,

There ain't *nothing* like a 'method declaration' in Python. Zilch,
nada, none, rien... All there is is the def statement that creates a
*function* (and the class statement that creates a class object).
Ok I see your point,

Not quite, I'm afraid.
Exactly, that was my point in the first place.

I'm afraid we don't understand each other here. This was supposed to
come as an illustration that, if some black magic was to 'inject' the
instance (here named 'self') in the local namespace of a 'method' (the
way you see it), we would lose the possibility to turn a function into
a method. Try to re-read both examples with s/obj/self/ in the first one
and s/self/obj/ in this last one.

Edward, I know I told you so at least three times, but really,
seriously, do *yourself* a favor : take time to read about descriptors
and metaclasses - and if possible to experiment a bit - so you can get a
better understanding of Python's object model. Then I'll be happy to
continue this discussion (.

FWIW, I too found at first that having to explicitly declare the
instance as first param of a 'function-to-be-used-as-a-method' was an
awful wart. And by that time (Python 1.5.2), it actually *was* a wart
IMVHO - just like the whole 'old-style-class' stuff, should I say. But
since 'type-unification' and new-style-classes, the wart has turned into
a feature, even if this only becomes obvious once you get a good enough
understanding of how the whole damn thing works.

Following its overall design philosophy, Python exposes (and so lets you
take control of) almost any detail of the object model implementation.
The purpose here is to make simple things simple *and* complex things
possible, and the means is to have a restricted yet consistent set of
mechanisms. It may not be a jewel of pure beauty, but from a practical
POV, it ends up being more powerful than what you'll find in most
main-stream OOPLs - where simple things happen to be not so simple and
complex things are sometimes almost impossible - and yet much more usable
than some more powerful but somewhat cryptic OOPLs (like Common Lisp -
which is probably the most astonishing language ever) where almost
anything is possible but even the simplest things tend to be complex.

Oh, also - should I mention it here ? - Ruby is another pretty nice and
powerful OOPL, with a more 'pure' object model (at least at first sight
- I don't have enough experience with it to know if it holds its
promises). While Python is My Favourite Language(tm), I'm not too
religious about this, and can well understand that someone's feature is
someone else's wart - thanks the Lord, everyone is different and unique
-, and you may feel better with Ruby.
 
