update docs
fafhrd91 committed Mar 15, 2017
1 parent 6b1982d commit 06b0491
Showing 6 changed files with 166 additions and 196 deletions.
33 changes: 32 additions & 1 deletion CHANGES.rst
@@ -1,11 +1,42 @@
Major aiohttp 2.0 release!
==========================

Warning! This is not the final release. It contains backward incompatible changes;
please check compatibility before installing it on production systems.

For this release we completely refactored the low-level HTTP handling implementation,
and `uvloop` finally delivers a performance improvement. Overall performance should be
around 70-90% better than the 1.x version.

We took the opportunity to fix long-standing API design problems across the whole package.
Client exception handling has been cleaned up and is now much more straightforward. Client payload
management has been simplified and can be extended with custom types. The client connection pool
implementation has been redesigned as well: there is no longer any need to actively release responses,
aiohttp handles connection release automatically.
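
A minimal sketch of the new behavior, assuming an existing client ``session`` (the URL is a
placeholder); the connection goes back to the pool when the ``async with`` block exits::

   async with session.get('https://example.com/data') as resp:
       body = await resp.text()
   # no explicit resp.release() call is needed any more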

Another major change: aiohttp development has moved to the public organization https://github.com/aio-libs
The aiohttp community would like to thank `Keepsafe` (https://www.getkeepsafe.com) for its support in the early days of the project.

Alas, we had to make backward incompatible changes. Please check the migration document: https://aiohttp.readthedocs.io/en/latest/migration.html

Please report any problems with or annoyances about the API to https://github.com/aio-libs/aiohttp

You can install and test this release with::

pip install https://github.com/aio-libs/aiohttp/archive/2.0.0rc1.tar.gz#egg=aiohttp-2.0.0rc1


CHANGES
=======
-------


`2.0.0rc1` (2017-03-14)
-----------------------

- Properly handle payload errors #1710

- Added `ClientWebSocketResponse.get_extra_info()` #1717

- It is not possible to combine the ``Transfer-Encoding`` header with the ``chunked``
  parameter; the same applies to ``compress`` and the ``Content-Encoding`` header #1655

62 changes: 20 additions & 42 deletions docs/client.rst
@@ -119,6 +119,11 @@ You can also access the response body as bytes, for non-text requests::
The ``gzip`` and ``deflate`` transfer-encodings are automatically
decoded for you.

.. note::

   This method reads the whole response body into memory. If you are
   planning to read a lot of data, consider using a streaming response instead.
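
A minimal sketch of such streaming, assuming ``resp`` is an in-flight response
(the file name and chunk size are arbitrary)::

   with open('big.download', 'wb') as fd:
       while True:
           chunk = await resp.content.read(1024)  # read up to 1 KiB at a time
           if not chunk:
               break
           fd.write(chunk)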


JSON Response Content
---------------------
@@ -164,30 +169,6 @@ It is not possible to use :meth:`~ClientResponse.read`,
explicit reading from :attr:`~ClientResponse.content`.


Releasing Response
------------------

Don't forget to release the response after use. This ensures explicit
behavior and proper connection pooling.

The easiest way to release the response correctly is the ``async with`` statement::

   async with session.get(url) as resp:
       pass

But an explicit :meth:`~ClientResponse.release` call may also be used::

   await resp.release()

However, this is not necessary if you use the :meth:`~ClientResponse.read`,
:meth:`~ClientResponse.json` and :meth:`~ClientResponse.text` methods.
They release the connection internally, but it is better not to rely on that
behavior.

If the response still contains unconsumed data (i.e. data not yet received from the server),
the underlying connection is closed and not re-used by the connection pool.


Custom Headers
--------------

@@ -303,22 +284,21 @@ As a simple case, simply provide a file-like object for your body::
   await session.post('https://some.url/streamed', data=f)
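
A complete sketch of that pattern, assuming an existing client ``session``
(the file name and URL are placeholders)::

   with open('massive-body', 'rb') as f:
       await session.post('https://some.url/streamed', data=f)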


Or you can provide a :ref:`coroutine<coroutine>` that yields bytes objects::

   @asyncio.coroutine
   def my_coroutine():
       chunk = yield from read_some_data_from_somewhere()
       if not chunk:
           return
       yield chunk

.. warning:: The ``yield`` expression is forbidden inside ``async def``.

.. note::

   It is not a standard :ref:`coroutine<coroutine>`, as it yields values, so it
   cannot be used like ``yield from my_coroutine()``.
   :mod:`aiohttp` handles such coroutines internally.

Or you can use an `aiohttp.streamer` object::

   @aiohttp.streamer
   def file_sender(writer, file_name=None):
       with open(file_name, 'rb') as f:
           chunk = f.read(2**16)
           while chunk:
               yield from writer.write(chunk)
               chunk = f.read(2**16)

   # Then you can use `file_sender` as a data provider:

   async with session.post('https://httpbin.org/post',
                           data=file_sender(file_name='huge_file')) as resp:
       print(await resp.text())

Also it is possible to use a :class:`~aiohttp.streams.StreamReader`
object. Let's say we want to upload a file from another request and
@@ -356,17 +336,15 @@ Uploading pre-compressed data
-----------------------------

To upload data that is already compressed before passing it to aiohttp, call
the request function with ``compress=False`` and set the used compression
algorithm name (usually deflate or zlib) as the value of the
``Content-Encoding`` header::
the request function with the used compression algorithm name (usually deflate or zlib)
as the value of the ``Content-Encoding`` header::

   async def my_coroutine(session, headers, my_data):
       data = zlib.compress(my_data)
       headers = {'Content-Encoding': 'deflate'}
       async with session.post('https://httpbin.org/post',
                               data=data,
                               headers=headers,
                               compress=False):
                               headers=headers):
           pass

