What is a good way of having several versions of a Python module installed in parallel?

Joel Hedlund

Hi!

I write, use and reuse a lot of small Python programs for various purposes in my work. These use a growing number of utility modules that I'm continuously developing and adding to as new functionality is needed. Sometimes I discover earlier design mistakes in these modules, and rather than keeping old garbage around I often rewrite the parts that are unsatisfactory. This often breaks backwards compatibility, and since I don't feel like updating all the code that relies on the old (functional but flawed) modules, I'm left with a library of hacks that depends on stale versions of my utility modules. The way I do it now is that I update the programs as I need them, but this approach makes me feel a bit queasy. It seems to me like I'm thinking about this in the wrong way.

Does anyone else recognize this situation in general? How do you handle it?

I have a feeling it should be possible to have multiple versions of the modules installed simultaneously, and maybe do something like this:

mymodule/
+ mymodule-1.1.3/
+ mymodule-1.1.0/
+ mymodule-0.9.5/
- __init__.py

and have some kind of magic in __init__.py that lets the programmer choose a version after import:

import mymodule
mymodule.require_version("1.1.3")

Is this a good way of thinking about it? What would be an efficient way of implementing it?

Cheers!
/Joel Hedlund
 
Diez B. Roggisch

Joel said:
[...]

I have a feeling it should be possible to have multiple versions of the
modules installed simultaneously, and maybe do something like this:

[...]

import mymodule
mymodule.require_version("1.1.3")

Is this a good way of thinking about it? What would be an efficient way of
implementing it?

Use setuptools. It can do exactly that: install different versions in parallel
as eggs, and with a pre-import require statement you pick the desired one.
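
For example (just a sketch; "mymodule" stands in for whatever your package is
called, but pkg_resources.require is the real setuptools call):

import pkg_resources
pkg_resources.require("mymodule==1.1.3")   # activate one of the parallel eggs
import mymodule

If I remember the option right, installing the eggs with easy_install -m
(multi-version mode) lets several versions sit side by side, with none of them
active until a require() picks one.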

Diez
 
Steve Holden

Diez said:
Use setuptools. It can exactly do that - install different versions parallel
as eggs, and with a pre-import require-statment you require the desired
one.

Diez

Of course a much simpler, less formal solution is to install the
libraries required by a program along with that program, in its own
directory. This more or less guarantees they won't get out of sync.
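
In other words, something along these lines (names invented):

myprogram/
+ mymodule/       (private copy, frozen at the version myprogram was written for)
- myprogram.py

Each program then carries the library versions it actually works with.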

Otherwise, three words:

test driven development

regards
Steve
--
Steve Holden +1 571 484 6266 +1 800 494 3119
Holden Web LLC/Ltd http://www.holdenweb.com
Skype: holdenweb http://del.icio.us/steve.holden

Sorry, the dog ate my .sigline
 
Joel Hedlund

First of all, thanks for all the input - it's appreciated.

Otherwise, three words:

test driven development

Do you also do this for all the little stuff, the small hacks you just
whip together to get a particular task done? My impression is that doing
proper unittests adds a lot of time to development, and I'm thinking
that this may be a low return investment for the small programs.

I try to aim for reusability and generalizability also in my smaller
hacks mainly as a safeguard. My reasoning here is that if I mess up
somehow, sooner or later I'll notice, and then I have a chance of making
realistic damage assessments. But even so, I must admit that I tend to
do quite little testing for these small projects... Maybe I should be
rethinking this?

Cheers!
/Joel
 
Ben Finney

Joel Hedlund said:
Do you also do [test-driven development] for all the little stuff,
the small hacks you just whip together to get a particular task
done? My impression is that doing proper unittests adds a lot of
time to development, and I'm thinking that this may be a low return
investment for the small programs.

My impression is that failing to have reliable code adds a lot of time
to debugging and maintenance, and it is far cheaper to have tested and
flushed out the obvious bugs early in the development process.

I try to aim for reusability and generalizability also in my smaller
hacks mainly as a safeguard.

In which case, you will be maintaining that code beyond the initial
use for which you thought it up. Maintaining code without unit tests
is far more costly than maintaining code with tests, because you have
no regression test suite: you are prone to chasing down bugs that you
thought you'd fixed in an earlier version of the code, but can't figure
out when you broke it again. This time is entirely wasted.

Instead, in code that has unit tests, a bug found means another unit
test to be added (to find and demonstrate that bug). This is work you
must do anyway, to be sure that you can actually reproduce the bug;
test-driven development merely means that you take that test case and
*keep it* in your unit test. Then, once you're assured that you will
find the bug again any time it reappears, go ahead and fix it.

My reasoning here is that if I mess up somehow, sooner or later I'll
notice

With test-driven development I get much closer to the "sooner" end of
that, and much more reliably.

Maybe I should be rethinking this?

That's my opinion, yes.
 
Joel Hedlund

test-driven development merely means that you take that test case and
*keep it* in your unit test. Then, once you're assured that you will
find the bug again any time it reappears, go ahead and fix it.

My presumption has been that in order to do proper test-driven development I would have to make enormous test suites covering all bases for my small hacks before I could get down and dirty with coding (as for example in http://www.diveintopython.org/unit_testing). This of course isn't very appealing when you need something done "now". But if I understand you correctly, if I would formalize what little testing I do, so that I can add to a growing test suite for each program as bugs are discovered and needs arise, would you consider that proper test-driven development? (or rather, is that how you do it?)

Thanks for taking the time!
/Joel
 
Diez B. Roggisch

Joel said:
My presumption has been that in order to do proper test-driven development
I would have to make enormous test suites covering all bases for my small
hacks before I could get down and dirty with coding (as for example in
http://www.diveintopython.org/unit_testing). This of course isn't very
appealing when you need something done "now". But if I understand you
correctly, if I would formalize what little testing I do, so that I can
add to a growing test suite for each program as bugs are discovered and
needs arise, would you consider that proper test-driven development? (or
rather, is that how you do it?)

Sounds good to me. IMHO there are two ways one gathers tests:

- concrete bugs appear, and one writes a test that reproduces the bug and,
once the fix is in, runs smoothly

- new features are planned/implemented, and the tests accompany them right
from the start, to allow... well, to test them :)

I always found it difficult to "just think" of new tests. Of course if you
_start_ with TDD, point two applies right from the start.
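
For the first kind, the test is usually tiny. A sketch (the module and the
reported bug are invented for the example):

# a user reports that parse_size("10 kB") raises ValueError;
# first pin the bug down with a test that fails until it is fixed
import unittest
from myutils import parse_size   # hypothetical utility module

class ParseSizeRegressionTest(unittest.TestCase):
    def test_accepts_space_before_unit(self):
        self.assertEqual(parse_size("10 kB"), 10240)

if __name__ == "__main__":
    unittest.main()

Once it passes, it stays in the suite, so the same bug can't sneak back in
unnoticed.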

Diez
 
Ryan Ginstrom

Joel Hedlund said:
[...] if I understand you correctly, if I would formalize what
little testing I do, so that I can add to a growing test
suite for each program as bugs are discovered and needs
arise, would you consider that proper test-driven
development? (or rather, is that how you do it?)

Have you looked at nose?
http://somethingaboutorange.com/mrl/projects/nose/

It automatically discovers your unit tests and runs them. I have a command
in my Komodo toolbox that runs the nosetests script on the current
directory. So the overhead of writing (and running) unit tests is very
small.
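
For instance, a file like the following (names invented) is all nose needs to
discover and run a test:

# test_textutils.py -- nose collects files and functions whose names start with test
from textutils import wordcount   # hypothetical function under test

def test_wordcount_simple_sentence():
    assert wordcount("the quick brown fox") == 4

Running the nosetests command in that directory finds and runs it; no
boilerplate test classes or suites are required.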

Usually, even when developing one-off scripts, you'll try some test cases,
maybe do some testing at the interactive prompt. That can serve as the basis
of your unit tests. Then, the first time you want to import that module from
another script, you can beef up your unit tests.

If unit testing seems like too much work, people won't do it. So I think
it's good to start out by doing just enough unit testing that it's not
onerous. As you see the benefits, the tests will seem less onerous, of course.

Regards,
Ryan Ginstrom
 
Fredrik Lundh

Diez said:
[...]

I always found it difficult to "just think" of new tests. Of course if you
_start_ with TDD, point two applies right from the start.

an approach that works for me is to start by adding "sanity checks";
that is, straightforward test code that simply imports all modules,
instantiates objects, calls the important methods with common argument
types/counts, etc, to make sure that the library isn't entirely braindead.
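
a first version really can be that small. something like this (the module and
class names are made up):

# smoke test: the package imports and its main entry points don't blow up
import mymodule
import mymodule.parsers

def test_sanity():
    parser = mymodule.parsers.Parser()    # objects can be instantiated
    assert parser.parse("") == []         # a common call with a trivial argument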

this has a bunch of advantages:

- it gets you started with very little effort (especially if you use
doctest; just tinker a little at the interactive prompt, and you
have a first version)
- it gives you a basic structure which makes it easier to add more
detailed tests
- it gives you immediate design feedback -- if it's difficult to
think of even a simple test, is the design really optimal?
- it quickly catches build and installation issues during future
development (including refactoring).

and probably a few more things that I cannot think of right now.

</F>
 
Ben Finney

(Joel, please preserve attribution lines on your quoted material so we
can see who wrote it.)

Joel Hedlund said:
My presumption has been that in order to do proper test-driven
development I would have to make enormous test suites covering all
bases for my small hacks before I could get down and dirty with
coding

This presumption is entirely opposite to the truth. Test-driven
development has the following cycle:

- Red: write one test, watch it fail

- Green: write minimal code to satisfy all existing tests

- Refactor: clean up implementation without changing behaviour

(The colours refer to the graphical display for some unit test
runners: red for a failing test, green for a test pass.)

This cycle is very short; it's the exact opposite to your presumption
of "write huge unit test suites before doing any coding". For me, the
cycles are on the order of a few minutes long; maybe longer if I have
to think about the interface for the next piece of code.


In more detail, the sequence (with rationale) is as follows:

- Write *one* test only, demonstrating one simple assertion about
the code you plan to write.

This requires you to think about exactly what your planned code
change (whether adding new features or fixing bugs) will do, at a
low level, in terms of the externally-visible behaviour, *before*
making changes to the code. The fact that it's a single yes-or-no
assertion keeps the code change small and easily testable.

- Run your automated unit test suite and watch your new test fail.

This ensures that your test actually exercises the code change
you're about to make, and that it will fail when that code isn't
present. Thus your new test becomes a regression test as well.

- Write the simplest thing that could possibly make the new test
pass.

This ensures that you write only the code absolutely necessary to
the new test, and conversely, that *all* your code changes exist
only to satisfy test cases. If, while making the code change, you
think the code should also do something extra, that's not allowed
at this point: you need a new test for that extra feature.

Your current code change must be focussed only on satisfying the
current test, and should do it in the way that lets you write that
code quickly, knowing that you'll be refactoring soon.

- Run your automated unit test suite and watch *all* tests pass.

This ensures that your code change both meets the new test and
doesn't regress any old ones. During this step, while any tests
are failing, you are only allowed to fix them — not add new
features — in the same vein as making the new test pass, by doing
the simplest thing that could possibly work.

Fixing a failing test might mean changing the code, or it might
mean changing the test case if it's become obsoleted by changes in
requirements.

- Refactor the code unit to remove redundancy or other bad design.

This ensures that, while the code unit is fresh in your mind, you
clean it up as you proceed. Refactoring means that you change the
implementation of the code without changing its interface; all
existing tests, including the new one, must continue to pass. If
you cause any test to fail while refactoring, fix the code and
refactor again, until all tests pass.

That is: the discipline requires you to write *one* new test and watch
it fail ("Red"), then write minimal code to make *all* tests pass
("Green"), and only then refactor the code design while maintaining
the "all tests pass" state. Only then do you move on to a new code
change, starting a new cycle by writing a new test.

It ensures that the development process ratchets forward inexorably, with the
code always in a well-designed, well-factored, tested state that meets
all current low-level requirements. This also encourages frequent
commits into version control, because at the end of any cycle you've
got no loose ends.
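
To make the cycle concrete, here is a sketch of a single Red/Green/Refactor
pass (the module and function are invented for the example):

# Red: write one test and watch it fail (slugify doesn't exist yet)
import unittest
from textutils import slugify   # hypothetical code unit under development

class SlugifyTest(unittest.TestCase):
    def test_replaces_spaces_with_hyphens(self):
        self.assertEqual(slugify("hello world"), "hello-world")

if __name__ == "__main__":
    unittest.main()

# Green: the simplest thing that could possibly pass, in textutils.py:
#
#     def slugify(text):
#         return text.replace(" ", "-")
#
# Refactor: clean up while keeping every test green, then start the next
# cycle with a new failing test.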


The above effects — short coding cycles, constant tangible forward
motion, freedom to refactor code as you work on it, freedom to commit
working code to the VCS at the end of any cycle, an ever-increasing
test suite, finding regressed tests the moment you break them instead
of spending ages looking at follow-on symptoms — also have
significantly positive effects on the mood of the programmer.

When I started this discipline, I found to my surprise that I was just
as happy to see a failing test as I was to see all tests passing —
because it was proof that the test worked, and gave contextual
feedback that told me exactly what part of the code unit needed to be
changed, instead of requiring an unexpected, indefinite debugging
trek.

(as for example in http://www.diveintopython.org/unit_testing)

Yes. While that chapter is a good demonstration of how unit tests
work, the impression given by that chapter is an unfortunate
demonstration of the *wrong* way to do unit testing.

The unit test shown in that chapter was *not* written all at once, as
the chapter implies; it was rather built up over time, while
developing the code unit at the same time. I don't know whether
Pilgrim used the above cycle, but I'm positive he wrote the unit test
in small pieces while developing the code unit in correspondingly
small increments.

But if I understand you correctly, if I would formalize what little
testing I do, so that I can add to a growing test suite for each
program as bugs are discovered and needs arise, would you consider
that proper test-driven development? (or rather, is that how you do
it?)

Yes, "test-driven development" is pretty much synonymous with the
above tight cycle of development. If you're writing large amounts of
test code before writing the corresponding code unit, that's not
test-driven development — it's Big Design Up Front in disguise, and is
to be avoided.

For more on test-driven development, the easiest site to start with is
<URL:http://www.testdriven.com/> — which, though much of its community
is focussed on Java, still has much to say that is relevant to any
programmer trying to adopt the practice. Its "web links" section is
also a useful starting point.
 
Neil Cerutti

First of all, thanks for all the input - it's appreciated.


Do you also do this for all the little stuff, the small hacks
you just whip together to get a particular task done? My
impression is that doing proper unittests adds a lot of time to
development, and I'm thinking that this may be a low return
investment for the small programs.

For documenting and testing small hacks, try the doctest
module. It integrates with Python's unit testing modules, so if
you need to "graduate", it is simple to do so.
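
A rough sketch of what that looks like (the function here is just an invented
example):

def normalize(name):
    """Collapse whitespace and lowercase a name.

    >>> normalize("  Joel   HEDLUND ")
    'joel hedlund'
    """
    return " ".join(name.split()).lower()

if __name__ == "__main__":
    import doctest
    doctest.testmod()   # runs every >>> example in the module's docstrings

Executing the script directly checks the examples (testmod(verbose=True) shows
each one); later the same docstrings can be pulled into a unittest suite with
doctest.DocTestSuite.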
 
