organizing your scripts, with plenty of re-use


bukzor

I would assume that putting scripts into a folder with the aim of re-
using pieces of them would be called a package, but since this is an
"anti-pattern" according to Guido, apparently I'm wrong-headed here.
(Reference: http://mail.python.org/pipermail/python-3000/2007-April/006793.html
)

Say you have ~50 scripts or so with lots of re-use (importing from
each other a lot) and you want to organize them into folders. How do
you do this simply?

The intent is to have people be able to check out the directory from
CVS and have the scripts "just work", even if they're not directly on
the PYTHONPATH.

This seems to be the best discussion on the topic, but the conclusion
seemed to be that there's no good way. That seems unthinkable
considering python's dedication to simplicity and elegance.
http://groups.google.com/group/comp.lang.python/browse_thread/thread/c44c769a72ca69fa/


It looks like I'm basically restating this post, which sadly got
dropped without further comment:
http://mail.python.org/pipermail/python-3000/2007-April/006814.html
 

Stef Mientki

bukzor said:
I would assume that putting scripts into a folder with the aim of re-
using pieces of them would be called a package, but since this is an
"anti-pattern" according to Guido, apparently I'm wrong-headed here.
(Reference: http://mail.python.org/pipermail/python-3000/2007-April/006793.html
)

Say you have ~50 scripts or so with lots of re-use (importing from
each other a lot) and you want to organize them into folders. How do
you do this simply?
Interesting question ... and although I have a working setup, I would
like to see other answers.

In my situation I have an estimate of about 2000 scripts (in fact every
script I ever wrote),
with about zero redundancy.
I still don't use packages (because I don't fully understand them),
but by trial and error I found a reasonably good working solution,
with the following specifications:
- (Almost) any script (whatever it uses from the other scripts) can
run standalone
- libraries that need another main program (e.g. a grid component needs
a GUI) can launch another main program to test themselves
- All __init__ files are generated automatically
Although not containing the last ideas, here's an idea of what I do:
http://mientki.ruhosting.nl/data_www/pylab_works/pw_importing.html
cheers,
Stef
 

Steven D'Aprano

I would assume that putting scripts into a folder with the aim of re-
using pieces of them would be called a package,

A package is a special arrangement of folder + modules. To be a package,
there must be a file called __init__.py in the folder, e.g.:

parrot/
+-- __init__.py
+-- feeding/
|   +-- __init__.py
|   +-- eating.py
|   +-- drinking.py
+-- fighting.py
+-- flying.py
+-- sleeping.py
+-- talking.py


This defines a package called "parrot" which includes a sub-package
feeding and modules fighting, flying, sleeping and talking. You can use
it by any variant of the following:

import parrot # loads parrot/__init__.py
import parrot.talking # loads parrot/talking.py
from parrot import sleeping
import parrot.feeding
from parrot.feeding.eating import eat_cracker

and similar.

Common (but not compulsory) behaviour is for parrot/__init__.py to import
all the modules within the package, so that the caller can do this:

import parrot
parrot.feeding.give_cracker()

without needing to manually import sub-packages. The os module behaves
similarly: having imported os, you can immediately use functions in
os.path without an additional import.
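This behaviour is easy to demonstrate. The following self-contained sketch builds a throwaway parrot/ package in a temporary directory and shows that one import is enough (give_cracker is the example function from above; the rest is scaffolding so the sketch runs standalone):

```python
# Sketch: build the parrot/ package on the fly, then show that importing
# the top-level package is enough to reach parrot.feeding.give_cracker().
import os
import sys
import tempfile

root = tempfile.mkdtemp()
feeding_dir = os.path.join(root, "parrot", "feeding")
os.makedirs(feeding_dir)

# parrot/__init__.py imports its sub-package, like os does with os.path
with open(os.path.join(root, "parrot", "__init__.py"), "w") as f:
    f.write("from . import feeding\n")
with open(os.path.join(feeding_dir, "__init__.py"), "w") as f:
    f.write("def give_cracker():\n    return 'cracker given'\n")

sys.path.insert(0, root)  # make the folder containing parrot/ importable
import parrot

print(parrot.feeding.give_cracker())  # no "import parrot.feeding" needed
```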

Just dumping a bunch of modules in a folder doesn't make it a package, it
just makes it a bunch of modules in a folder. Unless that folder is in
the PYTHONPATH, you won't be able to import the modules because Python
doesn't look inside folders. The one exception is that it will look
inside a folder for a __init__.py file, and if it finds one, it will
treat that folder and its contents as a package.

but since this is an
"anti-pattern" according to Guido, apparently I'm wrong-headed here.
(Reference:
http://mail.python.org/pipermail/python-3000/2007-April/006793.html )

Guido's exact words were:

"The only use case seems to be running scripts that happen
to be living inside a module's directory, which I've always seen as an
antipattern."

I'm not sure precisely what he means by that, because modules don't have
directories, they are in directories. Perhaps he meant package.

In that case, the anti-pattern according to Guido is not to put modules
in a folder, but to have modules inside a package be executable scripts.
To use the above example, if the user can make the following call from
the shell:

$ python ./parrot/talking.py "polly want a cracker"

and have the module talking do something sensible, that's an anti-
pattern. Modules inside a package aren't intended to be executable
scripts called by the user. There should be one (or more) front-end
scripts which are called by the user. Since they aren't intended to be
imported, they can be anywhere, not just on the PYTHONPATH. But they
import the modules in the package, and that package *is* in the
PYTHONPATH.

Using the above example, you would install the parrot folder and its
contents somewhere on the PYTHONPATH, and then have a front-end script
(say) "talk-to-parrot" somewhere else. Notice that the script doesn't
even need to be a legal name for a module, since you're never importing
it.
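A minimal sketch of such a front-end script follows. The say() function is invented for illustration, and the parrot package is stubbed with in-memory modules so the sketch runs standalone; in real use the package would simply be on the PYTHONPATH:

```python
# Sketch of a "talk-to-parrot" front-end script. The real parrot package
# would live on the PYTHONPATH; here it is stubbed in-memory so this runs
# standalone. The say() function is a hypothetical API, not from the thread.
import sys
import types

talking = types.ModuleType("parrot.talking")
talking.say = lambda phrase: "Polly says: %s" % phrase  # hypothetical API
parrot = types.ModuleType("parrot")
parrot.talking = talking
sys.modules["parrot"] = parrot
sys.modules["parrot.talking"] = talking

# --- the front-end script proper: import from the package and run ---
from parrot import talking

def main(argv):
    return talking.say(" ".join(argv))

if __name__ == "__main__":
    print(main(["polly", "want", "a", "cracker"]))
```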


Say you have ~50 scripts or so with lots of re-use (importing from each
other a lot) and you want to organize them into folders. How do you do
this simply?

Of course you can have a flat hierarchy: one big folder, like the
standard library, with a mixed bag of very loosely connected modules:

eating.py
drinking.py
feeding.py
fighting.py
flying.py
parrot.py
sleeping.py
talking.py


You can do that, of course, but it's a bit messy -- what if somebody
installs parrot.py and eating.py, but not drinking.py, and as a
consequence parrot.py fails to work correctly? Or what if the user
already has a completely unrelated module talking.py? Chaos.

The std library can get away with dumping (nearly) everything in the one
directory, because it's managed chaos. Users aren't supposed to pick and
choose which bits of the standard library get installed, or install other
modules in the same location.

Three alternatives are:

* put your modules in a sub-folder, and tell the user to change the
Python path to include your sub-folder, and hope they know what you're
talking about;

* put your modules in a package, tell the user to just place the entire
package directory where they normally install Python code, and importing
will just work; or

* have each and every script manually manipulate the PYTHONPATH so that
when the user calls that script, it adds its parent folder to the
PYTHONPATH before importing what it needs. Messy and ugly.
 

Steven D'Aprano

I still don't use (because I don't fully understand them) packages, but
by trial and error I found a reasonable good working solution, with the
following specifications

I find that fascinating. You haven't used packages because you don't
understand them, but you've used another technique that you *also* don't
understand well enough to generate a solution, and had to rely on trial
and error.

Packages are quite well documented. Since the alternative was trial-and-
error on something you also don't fully understand, why did you avoid
packages?
 

Stef Mientki

Steven said:
I find that fascinating. You haven't used packages because you don't
understand them, but you've used another technique that you *also* don't
understand well enough to generate a solution, and had to rely on trial
and error.

Packages are quite well documented. Since the alternative was trial-and-
error on something you also don't fully understand, why did you avoid
packages?
I want to have the possibility to import any file from any other file:

<quote from your other answer>

parrot/
+-- __init__.py
+-- feeding/
|   +-- __init__.py
|   +-- eating.py
|   +-- drinking.py
+-- fighting.py
+-- flying.py
+-- sleeping.py
+-- talking.py
import parrot # loads parrot/__init__.py
import parrot.talking # loads parrot/talking.py
from parrot import sleeping
import parrot.feeding
from parrot.feeding.eating import eat_cracker

</quote>

Instead of the above:
from sleeping import sleeping_in_a_bed
from eating import eat_cracker

anything wrong with that (knowing I've no redundancy) ?

cheers,
Stef
 

Robert Kern

I want to have the possibility to import any file from any other file:

<quote from your other answer>

parrot/
+-- __init__.py
+-- feeding/
|   +-- __init__.py
|   +-- eating.py
|   +-- drinking.py
+-- fighting.py
+-- flying.py
+-- sleeping.py
+-- talking.py
import parrot # loads parrot/__init__.py
import parrot.talking # loads parrot/talking.py
from parrot import sleeping
import parrot.feeding
from parrot.feeding.eating import eat_cracker

</quote>

Instead of the above:
from sleeping import sleeping_in_a_bed
from eating import eat_cracker

anything wrong with that (knowing I've no redundancy) ?

With the package layout, you would just do:

from parrot.sleeping import sleeping_in_a_bed
from parrot.feeding.eating import eat_cracker

This is really much more straightforward than you are making it out to be.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco
 

Buck

With the package layout, you would just do:

   from parrot.sleeping import sleeping_in_a_bed
   from parrot.feeding.eating import eat_cracker

This is really much more straightforward than you are making it out to be.

As in the OP, I need things to "Just Work" without installation
requirements.
The reason for this is that I'm in a large corporate environment
servicing many groups with their own custom environments.

Your solution requires work and knowledge on the user's part, but Stef
seems to be actually solving the complexity problem. It may seem
trivial to you, but it's beyond many people's grasp and brings the
usability and reliability of the system way down.

Like Stef, I was unable to grasp how to properly use python packages
in my environment even after reading the documentation repeatedly over
the course of several months.

The purpose of this thread is to discover and discuss how to use
packages in a user-space (as opposed to python-installation)
environment.
 

Robert Kern

As in the OP, I need things to "Just Work" without installation
requirements.
The reason for this is that I'm in a large corporate environment
servicing many groups with their own custom environments.

The more ad hoc hacks you use rather than the standard approaches, the harder it
is going to be for you to support those custom environments.
Your solution requires work and knowledge on the user's part,

*All* solutions require work and knowledge. There is no free lunch. The
advantage of standard Python packages is that they are understood the best and
the most widely.
but Stef
seems to be actually solving the complexity problem. It may seem
trivial to you, but it's beyond many people's grasp and brings the
usability and reliability of the system way down.

Like Stef, I was unable to grasp how to properly use python packages
in my environment even after reading the documentation repeatedly over
the course of several months.

I do believe that you and Stef are exceptions. The vast majority of Python users
seem to be able to grasp packages well enough.

 

Buck

The more ad hoc hacks you use rather than the standard approaches, the harder it
is going to be for you to support those custom environments.

I too would prefer a standard approach but there doesn't seem to be an
acceptable one.
I do believe that you and Stef are exceptions. The vast majority of Python users
seem to be able to grasp packages well enough.

You're failing to differentiate between a Python programmer and a
system's users. I understand packages well enough, but I need to
reduce the users' requirements down to simply running a command. I
don't see a way to do that as of now without a large amount of
boilerplate code in every script.

I've considered installing the thing to the PYTHONPATH as most people
suggest, but this has a few drawbacks:
* Extremely hard to push thru my IT department. Possibly impossible.
* Local checkouts of scripts use the main installation, rather than
the local, possibly revised package code. This necessitates the
boilerplate that installation to the PYTHONPATH was supposed to avoid.
* We can work around the previous point by requiring a user-owned
dev installation of Python, but this raises the bar to entry past most
of my co-developers' threshold. They are more comfortable with tcsh and
perl...

I think the issue here is that the current python-package system works
well enough for the core python devs but leaves normal python
developers without many options beyond "all scripts in one directory"
or "tons of boilerplate everywhere".
 

Robert Kern

I too would prefer a standard approach but there doesn't seem to be an
acceptable one.


You're failing to differentiate between python programmer and a
system's users. I understand packages well enough, but I need to
reduce the users' requirements down to simply running a command. I
don't see a way to do that as of now without a large amount of
boilerplate code in every script.

I would like to see an example of such boilerplate. I do not understand why
packages would require more than any other organization scheme.
I've considered installing the thing to the PYTHONPATH as most people
suggest, but this has two drawbacks:
* Extremely hard to push thru my IT department. Possibly impossible.
* Local checkouts of scripts use the main installation, rather than
the local, possibly revised package code. This necessitates the
boilerplate that installation to the PYTHONPATH was supposed to avoid.
* We can work around the previous point by requiring a user-owned
dev installation of Python, but this raises the bar to entry past most
of my co-developers threshold. They are more comfortable with tcsh and
perl...

Are you sure that you are not referring to site-packages/ when you say "PYTHONPATH"?

PYTHONPATH is an environment variable that the user can set. He can add
whatever directories he wants to it. The user can make a directory
for his checkouts:

$ mkdir ~/LocalToolCheckouts
$ cd ~/LocalToolCheckouts
$ cvs ...

Now add that directory to the front of his PYTHONPATH:

$ export PYTHONPATH=~/LocalToolCheckouts/:$PYTHONPATH

Now everything works fine. Packages in ~/LocalToolCheckouts will get picked up
before anything else. This is a simple no-installation way to use the normal
Python package mechanism that works well if you don't actually need to build
anything.
I think the issue here is that the current python-package system works
well enough for the core python devs but leaves normal python
developers without much options beyond "all scripts in one directory"
or "tons of boilerplate everywhere".

The "vast majority" I am talking about *are* the normal Python developers.

 

Margie

I would like to see an example of such boilerplate. I do not understand why
packages would require more than any other organization scheme.


Are you sure that you are not referring to site-packages/ when you say "PYTHONPATH"?

PYTHONPATH an environment variable that the user can set. He can add whatever
directories he wants to that environment variable. The user can make a directory
for his checkouts:

   $ mkdir ~/LocalToolCheckouts
   $ cd ~/LocalToolCheckouts
   $ cvs ...

Now add that directory to the front of his PYTHONPATH:

   $ export PYTHONPATH=~/LocalToolCheckouts/:$PYTHONPATH

Now everything works fine. Packages in ~/LocalToolCheckouts will get picked up
before anything else. This is a simple no-installation way to use the normal
Python package mechanism that works well if you don't actually need to build
anything.


The "vast majority" I am talking about *are* the normal Python developers.

I think that Buck's issue with your above example is that he doesn't
want his users to have to type
$ export PYTHONPATH=~/LocalToolCheckouts/:$PYTHONPATH

For example, say that the developer does this:
$ mkdir ~/LocalToolCheckouts
$ cd ~/LocalToolCheckouts
$ cvs ...

Which results in the following directory structure:

~/LocalToolCheckouts/
+-- scripts/
|   +-- myscript.py
+-- parrot/
    +-- __init__.py
    +-- feeding/
        +-- __init__.py
        +-- eating.py
        +-- drinking.py

I think he is looking for a way for users to be able to use scripts/
myscript.py (which imports parrot) without having to change their
PYTHONPATH with something like this:

$ export PYTHONPATH=~/LocalToolCheckouts/:$PYTHONPATH

I'm guessing that Buck has users that are running out of a cvs
repository. Although many would say those users are now "developers",
they really are not. They probably don't even know they are running
from a cvs repository. They in fact may think of it as their own
personal installation, and all they know is that they have scripts
directory and that that scripts directory has some scripts they want
to run.

As Buck said, it can often be very difficult to get things properly
and quickly installed in a large corporate environment, and providing
the user with a way to check out a cvs repository can be very quick
and easy. The problem is that once the user has access to that cvs
repository, it is difficult to tell them "hey, every time you run from
it, you need to execute this special command to set up your PYTHONPATH
environment variable."

I don't really see any good solution to this other than some
boilerplate code at the beginning of each script that modifies
sys.path. I think what I'd do in this situation is have a dot file
that indicates that you are running out of a "dev" (i.e. cvs repository)
area, then some boilerplate code like this at the beginning of your
scripts:

import os, sys

if os.path.exists(os.path.dirname(sys.argv[0]) + "/.dev"):
    sys.path.append(os.path.abspath(os.path.dirname(sys.argv[0])) + "/..")

This basically says "if the .dev file exists, put the parent directory
of the script into sys.path."

When an installation is done, as part of the install the .dev file
should be removed. In that case the libraries should be installed
into the standard site-packages location and the user running from an
install would automatically get their packages from the installation
area.

While having boilerplate code like this at the beginning of the script
is not great, I do think that it will save a lot of user questions/
confusion if the users are frequently running from one or more cvs
repositories. For example, if the user is required to explicitly set
PYTHONPATH and has two cvs repositories with different versions of the
code, a user who is not very Python-knowledgeable will frequently
forget to update their PYTHONPATH and will end up running new scripts
with an old library. Having this boilerplate code should avoid a
problem like that. So I think the pros probably outweigh the cons.


Margie
 

Buck

Thanks. I think we're getting closer to the core of this.

To restate my problem more simply:

My core goal is to have my scripts in some sort of organization better
than a single directory, and still have plenty of re-use between them.
The only way I can see to implement this is to have 10+ lines of
unintelligible hard-coded boilerplate in every runnable script.
That doesn't seem reasonable or pythonic.


I would like to see an example of such boilerplate. I do not understand why
packages would require more than any other organization scheme.

This example is from the 2007 post I referenced in my OP. I'm pretty
sure he meant 'dirname' rather than 'basename', and even then it
doesn't quite work.

http://mail.python.org/pipermail/python-3000/2007-April/006814.html
import os,sys
sys.path.insert(1, os.path.basename(os.path.basename(__file__)))


This is from a co-worker trying to address this topic:
import os, sys

binpath = None  # may be pre-set elsewhere; otherwise derived from the script path
binpath = binpath or os.path.dirname(os.path.realpath(sys.argv[0]))
libpath = os.path.join(binpath, 'lib')

verinfo = sys.version_info
pythonver = 'python%d.%d' % (verinfo[0], verinfo[1])
sys.path.append(os.path.join(libpath, pythonver, 'site-packages'))
sys.path.append(libpath)


This is my personal code:

from sys import path
from os.path import abspath, islink, realpath, dirname, normpath, join

f = __file__
# continue working even if the script is symlinked and then compiled
if f.endswith(".pyc"):
    f = f[:-1]
if islink(f):
    f = realpath(f)
here = abspath(dirname(f))
libpath = join(here, "..", "lib")
libpath = normpath(libpath)
path.insert(1, libpath)

$ export PYTHONPATH=~/LocalToolCheckouts/:$PYTHONPATH
This is a simple no-installation way to use the normal
Python package mechanism that works well if you don't actually need to build
anything.

This seems simple to you, but my users are electrical engineers and
know just enough UNIX commands to get by. Most are afraid of Python.
Half of them will assume the script is borked when they see a
"ImportError: No module named foo". Another 20% will then read the
README and
set their environment wrong (setenv PYTHONPATH foo). The rest will get
it to work after half an hour but never use it again because it was
too complicated. I could fix the error message to tell them exactly
what to do, but at that point I might as well re-write the above
boilerplate code.

I'm overstating my case here for emphasis, but it's essentially true.
--Buck
 

Rami Chowdhury

Thanks. I think we're getting closer to the core of this.

To restate my problem more simply:

My core goal is to have my scripts in some sort of organization better
than a single directory, and still have plenty of re-use between them.
The only way I can see to implement this is to have 10+ lines of
unintelligible hard-coded boilerplate in every runnable script.
That doesn't seem reasonable or pythonic.

Perhaps I've simply not followed this thread closely enough but could you
let us know a little bit more about how you intend / expect the scripts to
be used?

If there's a standard directory you expect them to be dropped into by your
users (e.g. $HOME/scripts) then surely you could do something like:

import mypathmunge

at the top of every script, and then have a mypathmunge.py in
site-packages that goes:

# mypathmunge.py
import sys, os
sys.path.insert(0, os.path.join(os.getenv('HOME'), 'scripts'))

?
 

Robert Kern

I think he is looking for a way for users to be able to use scripts/
myscript.py (which imports parrot) without having to change their
PYTHON path with something like this:

$ export PYTHONPATH=~/LocalToolCheckouts/:$PYTHONPATH

I'm guessing that Buck has users that are running out of a cvs
repository. Although many would say those users are now "developers",
they really are not.

Since he called them "co-developers", I was operating under the assumption that
they were, in fact, developers.
They probably don't even know they are running
from a cvs repository. They in fact may think of it as their own
personal installation, and all they know is that they have scripts
directory and that that scripts directory has some scripts they want
to run.

As Buck said, it can often be very difficult to get things properly
and quickly installed in a large corporate environment, and providing
the user with a way to check out a cvs repository can be very quick
and easy. The problem is that once the user has access to that cvs
repository, it is difficult to tell them "hey, every time you run from
it, you need to execute this special command to set up your PYTHONPATH
environment variable."

No, you tell them: "Add the line

source ~/LocalToolCheckouts/tool-env.csh

to your ~/.tcshrc and start a new shell."

They each do this once, and it works for all of your scripts with no
boilerplate. You can add features to this shell script, too, like adding the
script directory to their $PATH.

Or you can write a simple "installation" script, which doesn't really install
anything, just adds to their ~/.tcshrc files like so.

Or if this still is beyond your users, put a module into your scripts directory
that prepends ~/LocalToolCheckouts/ to sys.path. The only
"boilerplate" you need is one line in each script that imports this module.

But this is all stuff you have to do whether you organize your code into
packages or not. The organization of the code really has no effect on the
installation problems the OP faces.

 

Carl Banks

Thanks. I think we're getting closer to the core of this.

To restate my problem more simply:

My core goal is to have my scripts in some sort of organization better
than a single directory, and still have plenty of re-use between them.
The only way I can see to implement this is to have 10+ lines of
unintelligible hard-coded boilerplate in every runnable script.
That doesn't seem reasonable or pythonic.

Well, ignoring Python packaging conventions isn't reasonable or
Pythonic either, but oh well.

Here's what you should do: create ONE script that is a single entry
point to everything. The script would accept a subcommand, set up
the Python path, and invoke the appropriate Python module (NOT script)
corresponding to that subcommand, say by calling a main() function
you've defined within.

Users would invoke the appropriate tool using the appropriate
subcommand (similar to tools like Subversion and Git).

If you really can't use subcommands, make symlinks to the ONE script,
and have the ONE script check what name it was invoked with.

Notice the key idea in all of this: ONE script. When you design it
that a file can be used either as a script or as a module, you are
asking for trouble.
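A rough sketch of the ONE-script idea, with a stub in place of the real modules (command names and the main() signature are illustrative, not from the thread):

```python
# Sketch of a single front-end that dispatches subcommands to modules'
# main() functions, svn/git style. The command-table entry would really be
# an import like parrot.talking.main; a stub stands in for it here.
import sys

def talk_main(args):  # stand-in for e.g. parrot.talking.main
    return "polly says: " + " ".join(args)

COMMANDS = {"talk": talk_main}

def dispatch(argv):
    # first argument picks the tool; the rest is passed through to it
    if not argv or argv[0] not in COMMANDS:
        return "usage: tool [%s] ..." % "|".join(sorted(COMMANDS))
    return COMMANDS[argv[0]](argv[1:])

if __name__ == "__main__":
    print(dispatch(sys.argv[1:]))
```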


Carl Banks
 

Steven D'Aprano

Notice the key idea in all of this: ONE script. When you design it that
a file can be used either as a script or as a module, you are asking for
trouble.

I agree with everything you said in your post *except* that final
comment. The basic idea of modules usable as scripts is a fine, reliable
one, provided all the modules each script calls are visible in the
PYTHONPATH. (You still have problems with recursive imports, but that can
happen with any module, not just scripts.)

The Original Poster is confusing installation difficulties with code
organization -- his problem is that users have special requirements for
installation, and he's trying to work around those requirements by
organizing his code differently.

As far as I can see, whether or not he uses a package, he will still have
the same problem with installation, namely, that his users aren't
developers, plus for non-technical reasons he can't provide an installer
and has to have users check code out of CVS. Using a package will solve
his internal code organization problems, and a simple setup script that
modifies the user's .tcshrc to include the appropriate PYTHONPATH will
solve his other problem. The solution is to work with the language,
instead of fighting the language.
 

Processor-Dev1l

The more ad hoc hacks you use rather than the standard approaches, the harder it
is going to be for you to support those custom environments.


*All* solutions require work and knowledge. There is no free lunch. The
advantage of standard Python packages is that they are understood the best and
the most widely.



I do believe that you and Stef are exceptions. The vast majority of Python users
seem to be able to grasp packages well enough.
Well, I started using Python many years ago (when there were no
packages), so I am used to managing such messy files as described above.
But it is true that for building larger solutions I always use
packages; it just looks better :).
So my solution:
packages for job tasks, and one big messy folder for my private usage
(system administration tasks, automation, etc).
 

Gabriel Genellina

On Mon, 05 Oct 2009 18:15:15 -0300, Rami Chowdhury wrote:
If there's a standard directory you expect them to be dropped into by
your users (e.g. $HOME/scripts) then surely you could do something like:

import mypathmunge

at the top of every script, and then have a mypathmunge.py in
site-packages that goes:

# mypathmunge.py
import sys, os
sys.path.insert(0, os.path.join(os.getenv('HOME'), 'scripts'))

Since Python 2.6, you don't even need that. There is a per-user
site directory (~/.local/lib/python2.6/site-packages on Linux; under
%APPDATA% on Windows) that is prepended to sys.path automatically; .pth
files found there are honored too (see PEP 370 [1]).

So there is no need to set the PYTHONPATH variable, nor alter the standard
site-packages directory, nor play tricks with sys.path - just install the
modules/packages inside the user site directory (or any other directory
named in a .pth file found there).

[1] http://www.python.org/dev/peps/pep-0370/
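For instance, the location of that per-user directory can be queried from the site module:

```python
# Where the per-user site directory described in PEP 370 lives here.
import site

print(site.USER_SITE)         # e.g. ~/.local/lib/pythonX.Y/site-packages
print(site.ENABLE_USER_SITE)  # false-ish if the mechanism is disabled
```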
 

catafest

I think basically you need to write one Python script and name it
__init__.py.
If you want, this script may include functions for reading your scripts
from the folder.
Put this script in each folder and use Margie's solution.
This allows you to do imports from each folder.
 
