Use empty string for self

paullanier

It seems that lots of people don't like having to prefix self. in front
of instance variables when writing methods in Python. Of course,
whenever someone suggests doing away with 'self' many people point to
the scoping advantages that self brings. But I hadn't seen this
proposal when I searched so I thought I'd throw it out there. Maybe
it's already been thrown out but I like it.

The issue I have with self. is that it makes the code larger and more
complicated than it needs to be, especially in math expressions like:
self.position[0] = self.startx + len(self.bitlist) * self.bitwidth

It really makes the code harder to read. On the other hand,
eliminating the self. would create other issues including readability
with regards to which vars are instance vars and which come from
somewhere else.

But what if we keep the '.' and leave out the self? Then the example
looks like:
.position[0] = .startx + len(.bitlist) * .bitwidth

The 'self' is implied but the scoping rules don't change and it's still
clear when reading it that they are instance variables. We can keep
the self in the method header (or not) but that is really a separate
issue.
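For what it's worth, a legal approximation of this in today's Python (the class and attribute names below are hypothetical, taken from the expression above) is to bind a short local alias to self inside the method:

```python
class Sprite:
    """Hypothetical class built around the names in the example above."""

    def __init__(self):
        self.startx = 10
        self.bitwidth = 4
        self.bitlist = [1, 0, 1]
        self.position = [0, 0]

    def update(self):
        # Bind a one-letter local alias to self; this is perfectly
        # legal and recovers most of the brevity the proposal wants.
        s = self
        s.position[0] = s.startx + len(s.bitlist) * s.bitwidth

sprite = Sprite()
sprite.update()
print(sprite.position[0])  # 10 + 3 * 4 = 22
```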

Any comments? Has this been discussed before?
 
paullanier

Thanks. I thought for sure it must have been discussed before but for
whatever reason, my googling skills couldn't locate it.
 
Roy Smith

Terry Hancock said:
However, there is a slightly less onerous method which
is perfectly legit in present Python -- just use "s"
for "self":

This is being different for the sake of being different. Everybody *knows*
what self means. If you write your code with s instead of self, it just
makes it that much harder for other people to understand it.
 
Terry Hancock

On 28 Feb 2006 15:54:06 -0800
The issue I have with self. is that it makes the code
larger and more complicated than it needs to be,
especially in math expressions like: self.position[0] =
self.startx + len(self.bitlist) * self.bitwidth

It really makes the code harder to read. On the other
hand, eliminating the self. would create other issues
including readability with regards to which vars are
instance vars and which come from somewhere else.

But what if we keep the '.' and leave out the self? Then
the example looks like:
.position[0] = .startx + len(.bitlist) * .bitwidth

I think I'm not the only person who hates this idea. The "."
is just too cryptic, IMHO. The main objection is that it
would require "magic" to make it work, though.

However, there is a slightly less onerous method which
is perfectly legit in present Python -- just use "s"
for "self":

def mymethod(s):
    # ...
    s.position[0] = s.startx + len(s.bitlist) * s.bitwidth
    # ...

"self" is NOT a keyword, it's just a convention.

While I generally prefer to see "self", I still consider
the above pretty readable, and it goes more than halfway
towards your goal.

Others have suggested "_" instead of "s". However, IMHO,
it's less visible, takes up the same space as "s", and
requires the shift key, so I'd rather just use "s".

And yes, it's been discussed to death on the list. ;-)

Cheers,
Terry
 
John Salerno

Roy said:
Yes. To death. Executive summary: self is here to stay.

A related thing I was wondering about was the use of 'self' in class
methods as the first parameter. I understand that right now it is
necessary, but is this something that the language itself requires, or
just the way it is implemented now? It seems like a waste of typing to
always have to put self as the first parameter in every class method. Is
there no way for it to be implied?
 
Grant Edwards

A related thing I was wondering about was the use of 'self' in
class methods as the first parameter.

It's not a related thing, it's the same thing.
I understand that right now it is necessary, but is this
something that the language itself requires,

Yes. Sort of. When declaring a function, you have to declare
all of the formal parameters. For functions that are bound to
class instances as methods, the first formal parameter is the
object instance. It's common practice to call that parameter
"self", but you can call it something else.
or just the way it is implemented now?
No.
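Grant's point can be demonstrated directly: the first parameter below is deliberately named `this` (a hypothetical example, shown in modern Python syntax) and the method still works, because the name is only a convention:

```python
class Counter:
    # The first parameter receives the instance automatically;
    # "self" is a naming convention, not a keyword.
    def bump(this, amount):
        this.total = getattr(this, "total", 0) + amount
        return this.total

c = Counter()
print(c.bump(2))  # 2
print(c.bump(3))  # 5
```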

It seems like a waste of typing

Typing is free. At least compared to the costs of the rest of
the life-cycle of a software project.
to always have to put self as the first parameter in every
class method.

You could call that first parameter to class methods "s" if you
can't afford the three extra letters (I've got lots of extra
letters, and I can send you some if you like). If you do call
it something other than self and somebody else ever has to
maintain your code, they'll be annoyed with you.
Is there no way for it to be implied?

No.
 
John Salerno

Grant said:
It's not a related thing, it's the same thing.

Oh sorry. I thought the OP was asking about having to use self when
qualifying attributes, or even if he was, I didn't realize it was the
same principle as my question. And just now I was reading about new
style classes, and it also seems like a bit of extra typing to have to
subclass object, but I guess that isn't something that can be implied
right now either.
 
James Stroud

John said:
Oh sorry. I thought the OP was asking about having to use self when
qualifying attributes, or even if he was, I didn't realize it was the
same principle as my question. And just now I was reading about new
style classes, and it also seems like a bit of extra typing to have to
subclass object, but I guess that isn't something that can be implied
right now either.

"self" is conceptually necessary. Notice the similarities between
doittoit() and It.doittoit():


py> def doittoit(it):
...     print it.whatzit
...
py> class It:
...     whatzit = 42
...     def doittoit(self):
...         print self.whatzit
...
py> anit = It()
py> doittoit(anit)
42
py> It.doittoit(anit)
42
py> anit.doittoit()
42


If you get this example, I'm pretty sure you will understand "self" and
its necessity.
 
John Salerno

James said:
py> def doittoit(it):
...     print it.whatzit
...
py> class It:
...     whatzit = 42
...     def doittoit(self):
...         print self.whatzit
...
py> anit = It()
py> doittoit(anit)
42
py> It.doittoit(anit)
42
py> anit.doittoit()
42


If you get this example, I'm pretty sure you will understand "self" and
its necessity.

I do get it. I think I will just have to get used to seeing the 'self'
argument but understanding that it's not really something that is always
passed in. I'm trying to train myself to see

def doittoit(self) as def doittoit()

Of course, that might not be a good strategy, because I know when it
isn't used as an instance method (is that C terminology?), then you must
explicitly pass the self argument.
 
Roy Smith

John Salerno said:
I do get it. I think I will just have to get used to seeing the 'self'
argument but understanding that it's not really something that is always
passed in. I'm trying to train myself to see

def doittoit(self) as def doittoit()

That's OK as far as using your C++ experience to help understand
Python by analogy, but don't fall into the trap of trying to write C++
in Python.
 
Grant Edwards

I do get it. I think I will just have to get used to seeing
the 'self' argument but understanding that it's not really
something that is always passed in.

But it _is_ always passed to the function. You can even pass
it explicitly when you call the method if you want:

#!/usr/bin/python

class MyClass:
    def mymethod(self, p1, p2):
        print self, p1, p2

instance = MyClass()

MyClass.mymethod(instance,1,2)

instance.mymethod(1,2)

The two calls are equivalent.
I'm trying to train myself to see

def doittoit(self) as def doittoit()

You would be misleading yourself.
Of course, that might not be a good strategy, because I know
when it isn't used as an instance method (is that C
terminology?), then you must explicitly pass the self
argument.

Exactly.
 
John Salerno

Grant said:
But it _is_ always passed to the function. You can even pass
it explicitly when you call the method if you want:

I meant it isn't always explicitly passed.
#!/usr/bin/python

class MyClass:
    def mymethod(self, p1, p2):
        print self, p1, p2

instance = MyClass()

MyClass.mymethod(instance,1,2)

instance.mymethod(1,2)

The two calls are equivalent.

Can you also say instance.mymethod(instance, 1, 2)?
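For the record, the answer follows from Grant's equivalence: the bound method already supplies the instance, so passing it again gives the method one positional argument too many (a quick check, in modern Python syntax):

```python
class MyClass:
    def mymethod(self, p1, p2):
        return (self, p1, p2)

instance = MyClass()

# Bound call: the instance is supplied automatically.
instance.mymethod(1, 2)

# Passing the instance again is one argument too many.
try:
    instance.mymethod(instance, 1, 2)
except TypeError as exc:
    print("TypeError:", exc)
```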
 
Douglas Alan

This is being different for the sake of being different. Everybody *knows*
what self means. If you write your code with s instead of self, it just
makes it that much harder for other people to understand it.

I always use "s" rather than "self". Are the Python police going to
come and arrest me? Have I committed the terrible crime of being
unPythonic? (Or should that be un_pythonic?)

I rarely find code that follows clear coding conventions to be hard to
understand, as long as the coding convention is reasonable and
consistent.

Something that I do find difficult to understand, as a contrasting
example, is C++ code that doesn't prefix instance variables with "_"
or "m_" (or what have you), or access them via "this". Without such a
cue, I have a hard time figuring out where such variables are coming
from.

Regarding why I use "s" rather than "self", I don't do this to be
different; I do it because I find "self" to be large enough that it is
distracting. It's also a word, which demands to be read. (Cognitive
psychologists have shown that when words are displayed to you your
brain is compelled to read them, even if you don't want to. I
experience this personally when I watch TV with my girlfriend who is
hearing impaired. The captioning is very annoying to me, because
it's hard not to read them, even though I don't want to. The same
thing is true of "self".)

With too many "self"s everywhere, my brain finds it harder to locate
the stuff I'm really interested in. "s." is small enough that I can
ignore it, yet big enough to see when I need to know that information.
It's not a word, so my brain doesn't feel compelled to read it when I
don't want to, and it's shorter, so I can fit more useful code on a
line. Breaking up some code onto multiple lines often makes it
significantly less readable. (Just ask a typical mathematician who,
when shown notations that Computer Science people often use, laughs
in puzzlement at their verbosity. Mathematicians probably could not
do what they do without the more succinct notations that they use.)

Don't take any of this to mean that brevity is always better than
verbosity. It quite often is not. Brevity is good for things that you
do over and over and over again. Just ask Python -- it often knows
this. It's why there are no "begin" and "end" statements in Python.
It's why semicolons aren't required to separate statements that are on
different lines. That stuff is extra text that serves little purpose
other than to clutter up the typical case.

|>oug
 
