Formatted more documentation to use the 88 column limit from Black
agronholm committed Oct 2, 2022
1 parent 76d10db commit a6510b9
Showing 3 changed files with 109 additions and 95 deletions.
41 changes: 22 additions & 19 deletions docs/basics.rst
@@ -3,8 +3,8 @@ The basics

.. py:currentmodule:: anyio
AnyIO requires Python 3.6.2 or later to run. It is recommended that you set up a
virtualenv_ when developing or playing around with AnyIO.

Installation
------------
@@ -34,13 +34,13 @@ The simplest possible AnyIO program looks like this::

run(main)

This will run the program above on the default backend (asyncio). To run it on another
supported backend, say trio_, you can use the ``backend`` argument, like so::

run(main, backend='trio')

But AnyIO code is not required to be run via :func:`run`. You can just as well use the
native ``run()`` function of the backend library::

import sniffio
import trio
@@ -62,30 +62,33 @@ Backend specific options
Asyncio:

* ``debug`` (``bool``, default=False): Enables `debug mode`_ in the event loop
* ``use_uvloop`` (``bool``, default=False): Use the faster uvloop_ event loop
implementation, if available
* ``policy`` (``AbstractEventLoopPolicy``, default=None): the event loop policy instance
to use for creating a new event loop (overrides ``use_uvloop``)

Trio: options covered in the
`official documentation
<https://trio.readthedocs.io/en/stable/reference-core.html#trio.run>`_

.. note:: The default value of ``use_uvloop`` was ``True`` before v3.2.0.
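
As a hedged sketch of how these options are typically supplied (this assumes the
``backend_options`` keyword argument of :func:`run`; adjust to the API version you are
using), the asyncio options above could be passed like this::

    from anyio import run, sleep

    async def main():
        await sleep(1)

    # The keys mirror the asyncio options listed above; uvloop is only used if installed.
    run(main, backend='asyncio', backend_options={'debug': True, 'use_uvloop': True})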

.. _debug mode:
https://docs.python.org/3/library/asyncio-eventloop.html#enabling-debug-mode
.. _uvloop: https://pypi.org/project/uvloop/

Using native async libraries
----------------------------

AnyIO lets you mix and match code written for AnyIO and code written for the
asynchronous framework of your choice. There are a few rules to keep in mind however:

* You can only use "native" libraries for the backend you're running, so you cannot, for
example, use a library written for trio together with a library written for asyncio.
* Tasks spawned by these "native" libraries on backends other than trio_ are not subject
to the cancellation rules enforced by AnyIO
* Threads spawned outside of AnyIO cannot use :func:`.from_thread.run` to call
asynchronous code
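
As a hedged illustration of the first rule above (the specific calls are only
placeholders), native trio_ APIs may be mixed in only when AnyIO itself is running on
the trio backend::

    import trio

    from anyio import run, sleep

    async def main():
        await sleep(1)        # AnyIO API, works on any backend
        await trio.sleep(1)   # native trio API, only valid on the trio backend

    run(main, backend='trio')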

.. _virtualenv: https://docs.python-guide.org/dev/virtualenvs/
.. _trio: https://github.com/python-trio/trio
134 changes: 71 additions & 63 deletions docs/cancellation.rst
@@ -3,34 +3,36 @@ Cancellation and timeouts

.. py:currentmodule:: anyio
The ability to cancel tasks is the foremost advantage of the asynchronous programming
model. Threads, on the other hand, cannot be forcibly killed and shutting them down will
require perfect cooperation from the code running in them.

Cancellation in AnyIO follows the model established by the trio_ framework. This means
that cancellation of tasks is done via so-called *cancel scopes*. Cancel scopes are used
as context managers and can be nested. Cancelling a cancel scope cancels all cancel
scopes nested within it. If a task is waiting on something, it is cancelled immediately.
If the task is just starting, it will run until it first tries to run an operation
requiring waiting, such as :func:`~sleep`.

A task group contains its own cancel scope. The entire task group can be cancelled by
cancelling this scope.
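
A minimal sketch (the ``worker`` coroutine is just a stand-in)::

    from anyio import create_task_group, run, sleep

    async def worker():
        await sleep(10)

    async def main():
        async with create_task_group() as tg:
            tg.start_soon(worker)
            tg.start_soon(worker)
            # Cancelling the task group's own scope cancels both workers.
            tg.cancel_scope.cancel()

    run(main)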

.. _trio: https://trio.readthedocs.io/en/latest/reference-core.html#cancellation-and-timeouts

Timeouts
--------

Networked operations can often take a long time, and you usually want to set up some
kind of a timeout to ensure that your application doesn't stall forever. There are two
principal ways to do this: :func:`~move_on_after` and :func:`~fail_after`. Both are used
as synchronous context managers. The difference between these two is that the former
simply exits the context block prematurely on a timeout, while the other raises a
:exc:`TimeoutError`.

Both methods create a new cancel scope, and you can check the deadline by accessing the
:attr:`~.abc.CancelScope.deadline` attribute. Note, however, that an outer cancel scope
may have an earlier deadline than your current cancel scope. To check the actual
deadline, you can use the :func:`~current_effective_deadline` function.
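
As a hedged sketch of the distinction (the printed values are illustrative only)::

    from anyio import current_effective_deadline, fail_after, move_on_after, run

    async def main():
        with move_on_after(10):
            with fail_after(30) as scope:
                print(scope.deadline)                # about 30 seconds from now
                print(current_effective_deadline())  # limited by the outer scope

    run(main)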

Here's how you typically use timeouts::

@@ -52,8 +54,9 @@ Here's how you typically use timeouts::
Shielding
---------

There are cases where you want to shield your task from cancellation, at least
temporarily. The most important such use case is performing shutdown procedures on
asynchronous resources.

To accomplish this, open a new cancel scope with the ``shield=True`` argument::

@@ -77,14 +80,16 @@ To accomplish this, open a new cancel scope with the ``shield=True`` argument::

run(main)

The shielded block will be exempt from cancellation except when the shielded block
itself is being cancelled. Shielding a cancel scope is often best combined with
:func:`~move_on_after` or :func:`~fail_after`, both of which also accept
``shield=True``.
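
For instance, a minimal sketch of a shielded cleanup step (``resource`` and its
``aclose()`` method are hypothetical)::

    from anyio import move_on_after

    async def close_resource(resource):
        # Allow up to five seconds for the close operation, even if the surrounding
        # task has already been cancelled.
        with move_on_after(5, shield=True):
            await resource.aclose()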

Finalization
------------

Sometimes you may want to perform cleanup operations in response to the failure of the
operation::

async def do_something():
try:
@@ -93,11 +98,11 @@ Sometimes you may want to perform cleanup operations in response to the failure
# (perform cleanup)
raise

In some specific cases, you might only want to catch the cancellation exception. This is
tricky because each async framework has its own exception class for that and AnyIO
cannot control which exception is raised in the task when it's cancelled. To work around
that, AnyIO provides a way to retrieve the exception class specific to the currently
running async framework, using :func:`~get_cancelled_exc_class`::

from anyio import get_cancelled_exc_class

@@ -109,11 +114,12 @@ retrieve the exception class specific to the currently running async framework,
# (perform cleanup)
raise

.. warning:: Always reraise the cancellation exception if you catch it. Failing to do so
may cause undefined behavior in your application.

If you need to use ``await`` during finalization, you need to enclose it in a shielded
cancel scope, or the operation will be cancelled immediately since it's in an already
cancelled scope::

async def do_something():
try:
@@ -127,20 +133,21 @@ scope, or the operation will be cancelled immediately since it's in an already cancelled scope::
Avoiding cancel scope stack corruption
--------------------------------------

When using cancel scopes, it is important that they are entered and exited in LIFO (last
in, first out) order within each task. This is usually not an issue since cancel scopes
are normally used as context managers. However, in certain situations, cancel scope
stack corruption might still occur:

* Manually calling ``CancelScope.__enter__()`` and ``CancelScope.__exit__()``, usually
from another context manager class, in the wrong order
* Using cancel scopes with ``[Async]ExitStack`` in a manner that couldn't be achieved by
nesting them as context managers
* Using the low level coroutine protocol to execute parts of the coroutine function in
different cancel scopes
* Yielding in an async generator while enclosed in a cancel scope

Remember that task groups contain their own cancel scopes so the same list of risky
situations applies to them too.

As an example, the following code is highly dubious::

@@ -150,16 +157,17 @@ As an example, the following code is highly dubious::
tg.start_soon(foo)
yield

The problem with this code is that it violates structural concurrency: what happens if
the spawned task raises an exception? The host task would be cancelled as a result, but
the host task might be long gone by the time that happens. Even if it weren't, any
enclosing ``try...except`` in the generator would not be triggered. Unfortunately there
is currently no way to automatically detect this condition in AnyIO, so in practice you
may simply experience some weird behavior in your application as a consequence of
running code like above.

Depending on how they are used, this pattern is, however, *usually* safe to use in
asynchronous context managers, so long as you make sure that the same host task keeps
running throughout the entire enclosed code block::

# Okay in most cases!
@async_context_manager
@@ -173,12 +181,12 @@ generator fixtures. Starting from 3.6, however, each async generator fixture is run from
start to end in the same task, making it possible to have task groups or cancel scopes
safely straddle the ``yield``.

When you're implementing the async context manager protocol manually and your async
context manager needs to use other context managers, you may find it necessary to call
their ``__aenter__()`` and ``__aexit__()`` directly. In such cases, it is absolutely
vital to ensure that their ``__aexit__()`` methods are called in the exact reverse order
of the ``__aenter__()`` calls. To this end, you may find the
:class:`~contextlib.AsyncExitStack` class very useful::

from contextlib import AsyncExitStack

@@ -189,9 +197,9 @@ class very useful::
async def __aenter__(self):
self._exitstack = AsyncExitStack()
await self._exitstack.__aenter__()
self._task_group = await self._exitstack.enter_async_context(
create_task_group()
)

async def __aexit__(self, exc_type, exc_val, exc_tb):
return await self._exitstack.__aexit__(exc_type, exc_val, exc_tb)

.. _backport: https://pypi.org/project/async-exit-stack/
29 changes: 16 additions & 13 deletions docs/contributing.rst
@@ -1,20 +1,21 @@
Contributing to AnyIO
=====================

If you wish to contribute a fix or feature to AnyIO, please follow these guidelines.

When you make a pull request against the main AnyIO codebase, GitHub runs the AnyIO test
suite against your modified code. Before making a pull request, you should ensure that
the modified code passes tests locally. To that end, the use of tox_ is recommended. The
default tox run first runs ``pre-commit`` and then the actual test suite. To run the
checks on all environments in parallel, invoke tox with ``tox -p``.

To build the documentation, run ``tox -e docs`` which will generate a directory named
``build`` in which you may view the formatted HTML documentation.

AnyIO uses pre-commit_ to perform several code style/quality checks. It is recommended
to activate pre-commit_ on your local clone of the repository (using
``pre-commit install``) to ensure that your changes will pass the same checks on GitHub.

.. _tox: https://tox.readthedocs.io/en/latest/install.html
.. _pre-commit: https://pre-commit.com/#installation
@@ -31,7 +32,8 @@ To get your changes merged to the main codebase, you need a Github account.
#. Create a branch for your pull request, like ``git checkout -b myfixname``
#. Make the desired changes to the code base.
#. Commit your changes locally. If your changes close an existing issue, add the text
``Fixes XXX.`` or ``Closes XXX.`` to the commit message (where XXX is the issue
number).
#. Push the changeset(s) to your forked repository (``git push``)
#. Navigate to Pull requests page on the original repository (not your fork) and click
"New pull request"
@@ -42,4 +44,5 @@ To get your changes merged to the main codebase, you need a Github account.
If you have trouble, consult the `pull request making guide`_ on opensource.com.

.. _main AnyIO repository: https://github.com/agronholm/anyio
.. _pull request making guide:
https://opensource.com/article/19/7/create-pull-request-github
