streamline docs, especially use "import pytest" and "pytest.*" in python code examples instead of "import py" and "py.test.*".
hpk42 committed Nov 17, 2010
1 parent 93a4365 commit a698465
Showing 45 changed files with 436 additions and 447 deletions.
36 changes: 18 additions & 18 deletions doc/assert.txt
@@ -21,27 +21,27 @@ assertion fails you will see the value of ``x``::

$ py.test test_assert1.py
=========================== test session starts ============================
-platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev22
+platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: test_assert1.py

test_assert1.py F

================================= FAILURES =================================
______________________________ test_function _______________________________

def test_function():
> assert f() == 4
E assert 3 == 4
E + where 3 = f()

test_assert1.py:5: AssertionError
-========================= 1 failed in 0.05 seconds =========================
+========================= 1 failed in 0.03 seconds =========================

Reporting details about the failing assertion is achieved by re-evaluating
the assert expression and recording intermediate values.

Note: If evaluating the assert expression has side effects you may get a
warning that the intermediate values could not be determined safely. A
common example for this issue is reading from a file and comparing in one
line::

@@ -57,14 +57,14 @@ assertions about expected exceptions
------------------------------------------

In order to write assertions about raised exceptions, you can use
-``py.test.raises`` as a context manager like this::
+``pytest.raises`` as a context manager like this::

-with py.test.raises(ZeroDivisionError):
+with pytest.raises(ZeroDivisionError):
1 / 0

and if you need to have access to the actual exception info you may use::

-with py.test.raises(RuntimeError) as excinfo:
+with pytest.raises(RuntimeError) as excinfo:
def f():
f()
f()
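Collected into a runnable sketch, the two context-manager forms look like this (the test function names are illustrative, not from the docs above):

```python
import pytest

def test_zero_division():
    # the with-block must raise ZeroDivisionError, else the test fails
    with pytest.raises(ZeroDivisionError):
        1 / 0

def test_recursion_info():
    with pytest.raises(RuntimeError) as excinfo:
        def f():
            f()
        f()
    # excinfo wraps the raised exception; its message is inspectable
    assert "maximum recursion" in str(excinfo.value)
```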
@@ -74,8 +74,8 @@ and if you need to have access to the actual exception info you may use::
If you want to write test code that works on Python2.4 as well,
you may also use two other ways to test for an expected exception::

-py.test.raises(ExpectedException, func, *args, **kwargs)
-py.test.raises(ExpectedException, "func(*args, **kwargs)")
+pytest.raises(ExpectedException, func, *args, **kwargs)
+pytest.raises(ExpectedException, "func(*args, **kwargs)")

both of which execute the specified function with args and kwargs and
assert that the given ``ExpectedException`` is raised. The reporter will
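A brief sketch of the first, callable form; when used this way ``pytest.raises`` also returns the exception info object (``divide`` here is a made-up helper):

```python
import pytest

def divide(a, b):
    return a / b

# call form: the function is invoked with the given args/kwargs and
# the expected exception is asserted to occur
excinfo = pytest.raises(ZeroDivisionError, divide, 1, 0)
assert excinfo.type is ZeroDivisionError
```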
@@ -101,14 +101,14 @@ if you run this module::

$ py.test test_assert2.py
=========================== test session starts ============================
-platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev22
+platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: test_assert2.py

test_assert2.py F

================================= FAILURES =================================
___________________________ test_set_comparison ____________________________

def test_set_comparison():
set1 = set("1308")
set2 = set("8035")
@@ -118,7 +118,7 @@ if you run this module::
E '1'
E Extra items in the right set:
E '5'

test_assert2.py:5: AssertionError
========================= 1 failed in 0.02 seconds =========================

@@ -128,7 +128,7 @@ Special comparisons are done for a number of cases:
* comparing long sequences: first failing indices
* comparing dicts: different entries

..
Defining your own comparison
----------------------------------------------

20 changes: 10 additions & 10 deletions doc/builtin.txt
@@ -1,8 +1,8 @@

-py.test builtin helpers
+pytest builtin helpers
================================================

-builtin py.test.* helpers
+builtin pytest.* helpers
-----------------------------------------------------

You can always use an interactive Python prompt and type::
@@ -28,41 +28,41 @@ You can ask for available builtin or project-custom
captures writes to sys.stdout/sys.stderr and makes
them available successively via a ``capsys.readouterr()`` method
which returns a ``(out, err)`` tuple of captured snapshot strings.

capfd
captures writes to file descriptors 1 and 2 and makes
snapshotted ``(out, err)`` string tuples available
via the ``capfd.readouterr()`` method. If the underlying
platform does not have ``os.dup`` (e.g. Jython) tests using
this funcarg will automatically skip.

tmpdir
return a temporary directory path object
unique to each test function invocation,
created as a sub directory of the base temporary
directory. The returned object is a `py.path.local`_
path object.

monkeypatch
The returned ``monkeypatch`` funcarg provides these
helper methods to modify objects, dictionaries or os.environ::

monkeypatch.setattr(obj, name, value, raising=True)
monkeypatch.delattr(obj, name, raising=True)
monkeypatch.setitem(mapping, name, value)
monkeypatch.delitem(obj, name, raising=True)
monkeypatch.setenv(name, value, prepend=False)
monkeypatch.delenv(name, raising=True)
monkeypatch.syspath_prepend(path)

All modifications will be undone when the requesting
test function has finished its execution. The ``raising``
parameter determines if a KeyError or AttributeError
will be raised if the set/deletion operation has no target.

recwarn
Return a WarningsRecorder instance that provides these methods:

* ``pop(category=None)``: return last warning matching the category.
* ``clear()``: clear list of warnings
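The ``monkeypatch`` and ``recwarn`` descriptions above can be sketched as a small test module; the environment variable name and the warning text are invented for illustration:

```python
import os
import warnings

def test_env_override(monkeypatch):
    # setenv modifies os.environ; the change is undone after the test
    monkeypatch.setenv("APP_PORT", "2222")
    assert os.environ["APP_PORT"] == "2222"

def test_deprecation_recorded(recwarn):
    warnings.warn("old-style call", DeprecationWarning)
    # pop returns the last recorded warning matching the category
    w = recwarn.pop(DeprecationWarning)
    assert "old-style" in str(w.message)
```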

4 changes: 2 additions & 2 deletions doc/doctest.txt
@@ -44,7 +44,7 @@ then you can just invoke ``py.test`` without command line options::

$ py.test
=========================== test session starts ============================
-platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev22
-test path 1: /tmp/doc-exec-519
+platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
+test path 1: /tmp/doc-exec-66

============================= in 0.00 seconds =============================
20 changes: 10 additions & 10 deletions doc/example/controlskip.txt
@@ -8,22 +8,22 @@ Here is a ``conftest.py`` file adding a ``--runslow`` command
line option to control skipping of ``slow`` marked tests::

# content of conftest.py
-import py
+import pytest
def pytest_addoption(parser):
parser.addoption("--runslow", action="store_true",
help="run slow tests")

def pytest_runtest_setup(item):
if 'slow' in item.keywords and not item.config.getvalue("runslow"):
-py.test.skip("need --runslow option to run")
+pytest.skip("need --runslow option to run")

We can now write a test module like this::

# content of test_module.py
-import py
-slow = py.test.mark.slow
+import pytest
+slow = pytest.mark.slow

def test_func_fast():
pass
@@ -34,22 +34,22 @@ We can now write a test module like this::

and when running it will see a skipped "slow" test::

-$ py.test test_module.py -rs # "-rs" means report on the little 's'
+$ py.test test_module.py -rs # "-rs" means report details on the little 's'
=========================== test session starts ============================
-platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev22
+platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: test_module.py

test_module.py .s
========================= short test summary info ==========================
-SKIP [1] /tmp/doc-exec-557/conftest.py:9: need --runslow option to run
+SKIP [1] /tmp/doc-exec-104/conftest.py:9: need --runslow option to run

=================== 1 passed, 1 skipped in 0.02 seconds ====================

Or run it including the ``slow`` marked test::

$ py.test test_module.py --runslow
=========================== test session starts ============================
-platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev22
+platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: test_module.py

test_module.py ..
14 changes: 7 additions & 7 deletions doc/example/mysetup.txt
@@ -49,15 +49,15 @@ You can now run the test::

$ py.test test_sample.py
=========================== test session starts ============================
-platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev22
+platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: test_sample.py

test_sample.py F

================================= FAILURES =================================
_______________________________ test_answer ________________________________

-mysetup = <conftest.MySetup instance at 0x1ca5cf8>
+mysetup = <conftest.MySetup instance at 0x16f5998>

def test_answer(mysetup):
app = mysetup.myapp()
@@ -84,7 +84,7 @@ the previous example to add a command line option
and to offer a new mysetup method::

# content of ./conftest.py
-import py
+import pytest
from myapp import MyApp

def pytest_funcarg__mysetup(request): # "mysetup" factory function
@@ -105,7 +105,7 @@ and to offer a new mysetup method::
def getsshconnection(self):
host = self.config.option.ssh
if host is None:
-py.test.skip("specify ssh host with --ssh")
+pytest.skip("specify ssh host with --ssh")
return execnet.SshGateway(host)


@@ -122,14 +122,14 @@ Running it yields::

$ py.test test_ssh.py -rs
=========================== test session starts ============================
-platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev22
+platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: test_ssh.py

test_ssh.py s
========================= short test summary info ==========================
-SKIP [1] /tmp/doc-exec-560/conftest.py:22: specify ssh host with --ssh
+SKIP [1] /tmp/doc-exec-107/conftest.py:22: specify ssh host with --ssh

-======================== 1 skipped in 0.03 seconds =========================
+======================== 1 skipped in 0.02 seconds =========================

If you specify a command line option like ``py.test --ssh=python.org`` the test will execute as expected.

12 changes: 6 additions & 6 deletions doc/example/nonpython.txt
@@ -27,7 +27,7 @@ now execute the test specification::

nonpython $ py.test test_simple.yml
=========================== test session starts ============================
-platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev22
+platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: test_simple.yml

test_simple.yml .F
@@ -39,7 +39,7 @@ now execute the test specification::
no further details known at this point.
========================= short test summary info ==========================
FAIL test_simple.yml::hello
-==================== 1 failed, 1 passed in 0.43 seconds ====================
+==================== 1 failed, 1 passed in 0.06 seconds ====================

You get one dot for the passing ``sub1: sub1`` check and one failure.
Obviously in the above ``conftest.py`` you'll want to implement a more
@@ -58,11 +58,11 @@ reporting in ``verbose`` mode::

nonpython $ py.test -v
=========================== test session starts ============================
-platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev22 -- /home/hpk/venv/0/bin/python
+platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30 -- /home/hpk/venv/0/bin/python
test path 1: /home/hpk/p/pytest/doc/example/nonpython

-test_simple.yml <- test_simple.yml:1: usecase: ok PASSED
-test_simple.yml <- test_simple.yml:1: usecase: hello FAILED
+test_simple.yml:1: usecase: ok PASSED
+test_simple.yml:1: usecase: hello FAILED

================================= FAILURES =================================
______________________________ usecase: hello ______________________________
@@ -71,7 +71,7 @@
no further details known at this point.
========================= short test summary info ==========================
FAIL test_simple.yml::hello
-==================== 1 failed, 1 passed in 0.07 seconds ====================
+==================== 1 failed, 1 passed in 0.06 seconds ====================

While developing your custom test collection and execution it's also
interesting to just look at the collection tree::
37 changes: 14 additions & 23 deletions doc/faq.txt
@@ -65,9 +65,9 @@ Is using funcarg- versus xUnit setup a style question?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

For simple applications and for people experienced with nose_ or
-unittest-style test setup using `xUnit style setup`_
+unittest-style test setup using `xUnit style setup`_ often
feels natural. For larger test suites, parametrized testing
-or setup of complex test resources using funcargs_ is recommended.
+or setup of complex test resources using funcargs_ may feel more natural.
Moreover, funcargs are ideal for writing advanced test support
code (like e.g. the monkeypatch_, the tmpdir_ or capture_ funcargs)
because the support code can register setup/teardown functions
@@ -82,18 +82,17 @@ in a managed class/module/function scope.
Why the ``pytest_funcarg__*`` name for funcarg factories?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

-When experimenting with funcargs an explicit registration mechanism
-was considered. But lacking a good use case for this indirection and
-flexibility we decided to go for `Convention over Configuration`_ and
-allow to directly specify the factory. Besides removing the need
-for an indirection it allows to "grep" for ``pytest_funcarg__MYARG``
-and will safely find all factory functions for the ``MYARG`` function
-argument. It helps to alleviate the de-coupling of function
-argument usage and creation.
+We alternatively implemented an explicit registration mechanism for
+function argument factories. But lacking a good use case for this
+indirection and flexibility we decided to go for `Convention over
+Configuration`_ and rather have factories specified by convention.
+Besides removing the need for a registration indirection it allows to
+"grep" for ``pytest_funcarg__MYARG`` and will safely find all factory
+functions for the ``MYARG`` function argument.

.. _`Convention over Configuration`: http:https://en.wikipedia.org/wiki/Convention_over_Configuration

-Can I yield multiple values from a factory function?
+Can I yield multiple values from a funcarg factory function?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

There are two conceptual reasons why yielding from a factory function
@@ -126,24 +125,16 @@ by pickling and thus implicitly re-import a lot of local modules.
Unfortunately, setuptools-0.6.11 does not protect its generated
command line script with ``if __name__=='__main__'``. This leads
to infinite recursion when running a test that instantiates Processes.
-There are these workarounds:
-
-* `install Distribute`_ as a drop-in replacement for setuptools
-  and install py.test
-
-* `directly use a checkout`_ which avoids all setuptools/Distribute
-  installation
-
-If those options are not available to you, you may also manually
+A good solution is to `install Distribute`_ as a drop-in replacement
+for setuptools and then re-install ``pytest``. Otherwise you could
fix the script that is created by setuptools by inserting an
``if __name__ == '__main__'``. Or you can create a "pytest.py"
script with this content and invoke that with the python version::

-import py
+import pytest
if __name__ == '__main__':
-py.cmdline.pytest()
+pytest.main()

-.. _`directly use a checkout`: install.html#directly-use-a-checkout

.. _`install distribute`: http:https://pypi.python.org/pypi/distribute#installation-instructions
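As a sketch of the replacement API (assuming a pytest version where ``pytest.main()`` accepts an explicit argument list and returns the session's exit status):

```python
import pytest

# pytest.main() runs an in-process pytest session; it accepts an
# argument list and returns the exit status as an int-like value.
# "--version" merely prints the pytest version and exits successfully.
exit_status = pytest.main(["--version"])
```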
