Sphinx Doctest: test the code without comparing the output.


Luca Cerone

Dear all,
I am writing the documentation for a Python package using Sphinx.

I have a problem when using doctest blocks in the documentation:
I couldn't manage to get doctest to run a command while completely
ignoring the output.

For example, how can I get a doctest like the following to run correctly?

.. doctest:: example_1

   >>> import random
   >>> x = random.random()
   >>> print(str(x))  # some directive here to completely ignore the output

Since I don't know the value of `x`, ideally in this doctest I only want
to test that the various commands are correct, regardless of
the output produced.

I have tried using the ELLIPSIS directive, but the problem is that the `...`
is interpreted as a line continuation rather than as `any text`:

.. doctest:: example_2

   >>> import random
   >>> x = random.random()
   >>> print(str(x))  # doctest: +ELLIPSIS
   ...

I don't know if there is a way to make Sphinx understand that I want to ignore the whole output. I think the easiest way to solve this would be to differentiate between the ellipsis sequence and the line continuation sequence, but I don't know how to do that.

I know that I could skip the execution of print(str(x)), but this is not what I want; I really would like the command to be executed and the output ignored.
Can you point me to any solution for this issue?

Thanks a lot in advance for your help,
Cheers,
Luca
 

Steven D'Aprano

Dear all,
I am writing the documentation for a Python package using Sphinx.

I have a problem when using doctest blocks in the documentation: I
couldn't manage to get doctest to run a command while completely
ignoring the output.

For example, how can I get a doctest like the following to run
correctly?

.. doctest:: example_1

   >>> import random
   >>> x = random.random()
   >>> print(str(x))  # some directive here to completely ignore the output

The Fine Manual says that the directive you want is #doctest:+SKIP.

http://docs.python.org/2/library/doctest.html#doctest.SKIP

Although it's not explicitly listed as a directive, every option flag
corresponds to a directive. So your example becomes:

>>> x = random.random()
>>> print(str(x))  #doctest:+SKIP
42.012345678901234


(There's no need to convert things to str before printing them.)
 

Luca Cerone

Dear Steven,
thanks for the help.

I am aware that I could have used the SKIP directive (as I hinted in my mail).
Even though the fine manual suggests doing so, I don't agree with it.
The reason is simple: SKIP, as the name suggests, causes the code not to be run at all; it doesn't ignore the output. If you use a SKIP directive on code that contains a typo, or maybe you changed
the name of a keyword to make it more meaningful and forgot to update your docstring, then the error won't be caught.

For example:

.. doctest:: example

   >>> pritn "Hello, World!"  # doctest: +SKIP
   "Hello, World!"

would pass the test. Since I am writing a tutorial for people who have even less experience with Python than me, I want to be sure that the code in my examples runs just fine.
(There's no need to convert things to str before printing them.)

You are right: I modified an example that uses `x` in one of my functions that requires a string as input, and didn't change that.

Thanks again for the help anyway,

Cheers,
Luca
 

Steven D'Aprano

If you use a SKIP
directive on code that contains a typo, or maybe you changed the name of
a keyword to make it more meaningful and forgot to update your
docstring, then the error won't be caught.

And if you ignore the output, the error won't be caught either. What's
the difference?

>>> x = 100*10  # oops, I meant 100**10
>>> print(x)  # pretend the output is ignored
1000

Since doctest is ignoring the output, it won't catch the failure. Even if
your code raises an exception, if you ignore the output, you'll never see
the exception because it is ignored.

So you simply can't do what you want. You can't both ignore the output of
a doctest and have doctest report if the test fails.
 

Luca Cerone

And if you ignore the output, the error won't be caught either. What's
the difference?

>>> x = 100*10  # oops, I meant 100**10
>>> print(x)  # pretend the output is ignored
1000

The difference is that in that case you want to check whether the result is correct or not, because you expect a certain result.

In my case, I don't know what the output is, nor do I care for the purpose of the tutorial. What I care about is being sure that the command in the tutorial is correct, and up to date with the code.

If you try the following, the test will fail (because there is a typo in the code)

.. doctest:: example

   >>> pritn "Hello, World!"

and not because the output doesn't match what you expected.

Even if the command is correct:

.. doctest:: example_2

   >>> print "Hello, World!"

this test will fail because doctest expects the output to match. I want to be able to test that the syntax is correct, the command can be run, and ignore whatever the output is.

Don't focus on the specific print example; I use it just to show what the problem is, which is not what I am writing a tutorial about!
So you simply can't do what you want. You can't both ignore the output of a doctest and have doctest report if the test fails.

OK, maybe it is not possible using doctest. Is there any other way to do what I want? For example, using another Sphinx extension?
 

Chris Angelico

The difference is that in that case you want to check whether the result is correct or not, because you expect a certain result.

In my case, I don't know what the output is, nor do I care for the purpose of the tutorial. What I care about is being sure that the command in the tutorial is correct, and up to date with the code.

I'd call that a smoke-test, rather than something a test
harness/engine should be doing normally. All you do is see if the
program crashes. This can be extremely useful (I smoke-test my scripts
as part of my one-key "deploy to testbox" script, saving me the
trouble of actually running anything - simple syntactic errors or
misspelled function/variable names (in languages where that's a
concept) get caught really early); but if you're using this for a
tutorial, you risk creating a breed of novice programmers who believe
their first priority is to stop the program crashing. Smoke testing is
a tool that should be used by the expert, NOT a sole check given to a
novice.

ChrisA
 

Steven D'Aprano

The difference is that in that case you want to check whether the result
is correct or not, because you expect a certain result.

In my case, I don't know what the output is, nor do I care for the purpose
of the tutorial. What I care about is being sure that the command in the
tutorial is correct, and up to date with the code.

If you try the following, the test will fail (because there is a typo in
the code)

.. doctest:: example

   >>> pritn "Hello, World!"


and not because the output doesn't match what you expected.


That is not how doctest works. That test fails because its output is:

SyntaxError: invalid syntax


not because doctest recognises the syntax error as a failure. Exceptions,
whether they are syntax errors or some other exception, can count as
doctest *successes* rather than failures. Both of these count as passing
doctests:

>>> print("Hello, World!"
Traceback (most recent call last):
  ...
SyntaxError: unexpected EOF while parsing

>>> 1/0
Traceback (most recent call last):
  ...
ZeroDivisionError: division by zero


Consequently, doctest only recognises failures by their output,
regardless of whether that output is a value or an exception.

Even if the command is correct:

.. doctest:: example_2

   >>> print "Hello, World!"

this test will fail because doctest expects the output to match. I want
to be able to test that the syntax is correct, the command can be run,
and ignore whatever the output is.

The only wild-card output that doctest recognises is the ellipsis and, like
all wild-cards, it can match too much if you aren't careful. If the ellipsis
is not matching exactly what you want, add some scaffolding to ensure a
match:

>>> print("x = %s" % x)  # doctest: +ELLIPSIS
x = ...


will work. But a better solution, I think, would be to pick a
deterministic result:

>>> random.seed(42)  # any fixed seed makes the result reproducible
>>> x = random.uniform(1, 100)
>>> print(x)
14.566925510413032

Alas, that only works reliably if you stick to a single Python version.
Although the results of calling random.random are guaranteed to be stable
across versions, random functions built on top of random.random, like
uniform, are not, and may need to be protected with a version check.
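
For instance, a sketch that sidesteps the instability altogether: instead of
pinning the exact repr, assert a property of the value, which is stable
across versions:

>>> import random
>>> x = random.uniform(1, 100)
>>> 1 <= x <= 100
True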
 

Luca Cerone

but if you're using this for a tutorial, you risk creating a breed of
novice programmers who believe their first priority is to stop the
program crashing.

Hi Chris,
actually my priority is to check that the code is correct. I changed the syntax
during the development, and I want to be sure that my tutorial is up to date.

The user will only see the examples that, after testing with doctest, do
run. They won't know that I used doctests for the documentation.

How can I do what you call smoke tests in my Sphinx documentation?
 

Luca Cerone

That is not how doctest works. That test fails because its output is:

OK... is there a tool with which I can test whether my code runs, regardless of the output?
The only wild-card output that doctest recognises is the ellipsis and, like
all wild-cards, it can match too much if you aren't careful.

Actually I want to match the whole output... and you can't, because the ellipsis is the same as the line continuation...

will work. But a better solution, I think, would be to pick a deterministic result:

I think you are sticking too much to the examples I posted, where I used functions that are part of Python, so that everybody could run the code and reproduce the issues.

I don't use random numbers, so I can't apply what you said.
Really, I am looking for a way to test the code while ignoring the output.

I don't know if it is usually a bad choice, but in my case it is what I want/need.

Thanks for the help,
Luca
 

Chris Angelico

Hi Chris,
actually my priority is to check that the code is correct. I changed the syntax
during the development, and I want to be sure that my tutorial is up to date.

The user will only see the examples that, after testing with doctest, will
run. They won't know that I used doctests for the documentation..

How can I do what you call smoke tests in my Sphinx documentation?

I don't know Sphinx, so I can't help there. But maybe you should just
have a pile of .py files, and you import each one and see if you get
an exception?
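
A minimal sketch of that idea, assuming the tutorial snippets live in an
examples/ directory (the layout and names are illustrative, nothing here
is Sphinx-specific):

import glob
import runpy
import traceback

failures = 0
for path in sorted(glob.glob("examples/*.py")):
    try:
        # Run the snippet in a fresh namespace; output is not checked,
        # only whether the code raises.
        runpy.run_path(path)
    except Exception:
        failures += 1
        print("FAILED: %s" % path)
        traceback.print_exc()
    else:
        print("ok: %s" % path)

raise SystemExit(failures)

Anything the snippets print still goes to the console; the only thing
checked is that each file runs to completion without an exception.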

ChrisA
 

Steven D'Aprano

I am looking for a way to test the code while ignoring the output.

This makes no sense. If you ignore the output, the code could do ANYTHING
and the test would still pass. Raise an exception? Pass. SyntaxError?
Pass. Print "99 bottles of beer"? Pass.

I have sometimes written unit tests that just check whether a function
actually is callable:

ignore = function(a, b, c)

but I've come to the conclusion that is just a waste of time, since there
are dozens of other tests that will fail if function isn't callable. But
if you insist, you could always use that technique in your doctests:

>>> ignore = function(a, b, c)

If the function call raises, your doctest will fail, but if it returns
something, anything, it will pass.
 

Ned Batchelder

Hi Chris,
actually my priority is to check that the code is correct. I changed the syntax
during the development, and I want to be sure that my tutorial is up to date.

If you do manage to ignore the output, how will you know that the syntax
is correct? The output for an incorrect syntax line will be an
exception, which you'll ignore. Maybe I don't know enough about the
details of doctest. It's always seemed incredibly limited to me.
Essentially, it's as if you used unittest but the only assertion you're
allowed to make is self.assertEqual(str(X), "....")
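
To make the analogy concrete, a rough unittest equivalent of a single
doctest line (myfun and the expected string are made-up stand-ins):

import unittest

def myfun(x):
    # stand-in for the function under test
    return x

class DocExample(unittest.TestCase):
    def test_example(self):
        # a doctest line ">>> myfun(10)" expecting "10" boils down
        # to this single fixed kind of assertion
        self.assertEqual(str(myfun(10)), "10")

if __name__ == "__main__":
    unittest.main()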

--Ned.
 

Luca Cerone

This makes no sense. If you ignore the output, the code could do ANYTHING
and the test would still pass. Raise an exception? Pass. SyntaxError?
Pass. Print "99 bottles of beer"? Pass.

If you try the commands, you can see that the tests fail. For example:

.. doctest::

   >>> raise Exception("test")

will fail with this message:

File "utils.rst", line 5, in default
Failed example:
    raise Exception("test")
Exception raised:
    Traceback (most recent call last):
      File "/usr/lib/python2.7/doctest.py", line 1289, in __run
        compileflags, 1) in test.globs
      File "<doctest default[0]>", line 1, in <module>
        raise Exception("test")
    Exception: test

So to me this seems OK... "print" will fail as well...
I have sometimes written unit tests that just check whether a function
actually is callable:

ignore = function(a, b, c)

but I've come to the conclusion that is just a waste of time, since there
are dozens of other tests that will fail if function isn't callable. But
if you insist, you could always use that technique in your doctests:

>>> ignore = function(a, b, c)

If the function call raises, your doctest will fail, but if it returns
something, anything, it will pass.

I understand your point, but I am not writing unit tests to check the correctness of the code now. I am only writing a tutorial and assuming that
the code is correct. What I have to be sure of is that the code in the tutorial
can be executed correctly, and some commands print verbose output which can change.

Writing >>> ignore = function(a,b,c) is not enough: the function still prints messages on screen, and this causes the test to fail...
 

Luca Cerone

If you do manage to ignore the output, how will you know that the syntax
is correct? The output for an incorrect syntax line will be an
exception, which you'll ignore.

If the function raises an exception, the test fails, regardless of the output.

Maybe I don't know enough about the details of doctest. It's always seemed incredibly limited to me.

I agree that it has some limitations.

Essentially, it's as if you used unittest but the only assertion you're allowed to make is self.assertEqual(str(X), "....")

I don't know unittest; is it possible to use it within Sphinx?
 

Steven D'Aprano

If you do manage to ignore the output, how will you know that the syntax
is correct? The output for an incorrect syntax line will be an
exception, which you'll ignore. Maybe I don't know enough about the
details of doctest. It's always seemed incredibly limited to me.
Essentially, it's as if you used unittest but the only assertion you're
allowed to make is self.assertEqual(str(X), "....")

More or less :)

Doctests really are documentation first and tests second. That's their
strength. If you want unit tests, you know where to find them :)
 

Neil Cerutti

I understand your point, but I am not writing unit tests to
check the correctness of the code now. I am only writing a tutorial
and assuming that the code is correct. What I have to be sure
of is that the code in the tutorial can be executed correctly, and
some commands print verbose output which can change.

Writing >>> ignore = function(a,b,c) is not enough: the function
still prints messages on screen, and this causes the test to fail...

It won't be very good documentation any more, but nothing stops you
from examining the result in the next doctest and making yourself
happy about it:

>>> result = function(a, b, c)
>>> result.startswith("Hello")
True
>>> result.endswith("!")
True
 

Luca Cerone

It won't be very good documentation any more, but nothing stops you
from examining the result in the next doctest and making yourself
happy about it:

>>> result = function(a, b, c)
>>> result.startswith("Hello")
True
>>> result.endswith("!")
True

Hi Neil, thanks for the hint, but this won't work.

The problem is that the function displays some output informing you of what steps are being performed (some of which are displayed by a 3rd party function that I don't control).

This output "interferes" with the output that should be checked by doctest.

For example, you can check that the following doctest would fail:

.. doctest:: example_fake

   >>> def myfun(x):
   ...     print "random output"
   ...     return x
   >>> myfun(10)
   10

When you run make doctest, the test fails with this message:

File "tutorial.rst", line 11, in example_fake
Failed example:
    myfun(10)
Expected:
    10
Got:
    random output
    10

In this case, imagine that "random output" is really random, so I cannot easily filter it except by ignoring several lines. This would be quite easy if the ellipsis and the line continuation didn't have the same sequence of characters, but unfortunately this is not the case.

The method you proposed still is not applicable, because I have no way to use startswith() and endswith()...

The following code could do what I want, if only I could ignore the output:

.. doctest:: example_fake

   >>> def myfun(x):
   ...     print "random output"
   ...     return x
   >>> result = myfun(10)

It fails with this message:

File "tutorial.rst", line 11, in example_fake
Failed example:
    result = myfun(10)
Expected nothing
Got:
    random output

(line 11 contains: >>> result = myfun(10))

A SKIP directive is not feasible either:

.. doctest:: example_fake

   >>> def myfun(x):
   ...     print "random output"
   ...     return x
   >>> result = myfun(10)  # doctest: +SKIP
   >>> result
   10

fails with this error message:

File "tutorial.rst", line 12, in example_fake
Failed example:
    result
Exception raised:
    Traceback (most recent call last):
      File "/usr/lib/python2.7/doctest.py", line 1289, in __run
        compileflags, 1) in test.globs
      File "<doctest example_fake[2]>", line 1, in <module>
        result
    NameError: name 'result' is not defined

As you can see, it's not that I want something too weird; it's just that sometimes you can't control what the function displays, and ignoring the output is a reasonable way to implement a doctest.

I hope these examples help you understand better what my problem is.

Thanks all of you guys for the hints, suggestions and best practices :)
 

Luca Cerone

I don't know why, but it seems that Google Groups stripped the indentation from the code. I just wanted to assure you that in the examples that I ran,
the definition of myfun contained correctly indented code!

.. doctest:: example_fake

   >>> def myfun(x):
   ...     print "random output"
   ...     return x
   >>> myfun(10)
   10

 

Skip Montanaro

I don't know why, but it seems that Google Groups stripped the indentation from the code.

Because it's Google Groups. :)

800-pound gorillas tend to do pretty much whatever they want.

Skip
 

Neil Cerutti

Hi Neil, thanks for the hint, but this won't work.

The problem is that the function displays some output informing
you of what steps are being performed (some of which are
displayed by a 3rd party function that I don't control).

This output "interferes" with the output that should be checked by doctest.

For example, you can check that the following doctest would fail:

.. doctest:: example_fake

   >>> def myfun(x):
   ...     print "random output"
   ...     return x
   >>> myfun(10)
   10

When you run make doctest, the test fails with this message:

File "tutorial.rst", line 11, in example_fake
Failed example:
    myfun(10)
Expected:
    10
Got:
    random output
    10

In this case, imagine that "random output" is really random, so I
cannot easily filter it except by ignoring several lines. This would
be quite easy if the ellipsis and the line continuation didn't have
the same sequence of characters, but unfortunately this is not the
case.

Perhaps try the "advanced API" and define your own OutputChecker
to add the feature that you need.

Figuring out how to best invoke doctest with your modified
OutputChecker will take some digging in the source, probably
looking at doctest.testmod. I don't see an example in the docs.
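
Something along these lines could serve as a starting point (a minimal
sketch: the IGNORE_OUTPUT flag is registered by hand below and is not a
standard doctest option):

import doctest

# A custom option flag; doctest.register_optionflag is public API.
IGNORE_OUTPUT = doctest.register_optionflag("IGNORE_OUTPUT")

class IgnoringOutputChecker(doctest.OutputChecker):
    """Treat any output as a match when IGNORE_OUTPUT is set."""
    def check_output(self, want, got, optionflags):
        if optionflags & IGNORE_OUTPUT:
            return True  # accept whatever the example printed
        return doctest.OutputChecker.check_output(
            self, want, got, optionflags)

# doctest.testmod() has no checker argument, so wire up the pieces by hand:
# runner = doctest.DocTestRunner(checker=IgnoringOutputChecker())
# for test in doctest.DocTestFinder().find(some_module):
#     runner.run(test)

An unexpected exception would still be reported as a failure, since
doctest handles exceptions before consulting the output checker, which
seems to be exactly the behaviour Luca is after.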
I hope these examples help you understand better what my problem
is.

Yes, I think it's well-defined now.
 
