Fix typos #12

Open · wants to merge 1 commit into master

README.md (1 addition & 1 deletion)
@@ -50,7 +50,7 @@ Spawns tasks with `map_n(fn, iterable, cb, ctx)`, then waits for results with `a

`spawn` and `map` methods is probably what you should use in 99% of cases. Their overhead is minimal (~3% execution time), and even in worst cases memory usage is insignificant.

-`spawn_n`, `map_n` and `itermap` methods give you more control and flexibily, but they come with a price of higher overhead. They spawn all tasks that you want, and most of the tasks wait their turn "in background". If you spawn too much (10**6+ tasks) -- you'll use most of the memory you have in system, also you'll lose a lot of time on "concurrency management" of all the tasks spawned.
+`spawn_n`, `map_n` and `itermap` methods give you more control and flexibility, but they come with a price of higher overhead. They spawn all tasks that you want, and most of the tasks wait their turn "in background". If you spawn too much (10**6+ tasks) -- you'll use most of the memory you have in system, also you'll lose a lot of time on "concurrency management" of all the tasks spawned.

Play with `python tests/loadtest.py -h` to understand what you want to use.
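To make the distinction above concrete, here is a minimal usage sketch of the two recommended methods. The `AioPool` class name, its import path and `join()` are assumed from the project README; only `size`, `spawn` and `map` appear in the code changed by this PR.

```python
# Minimal sketch only. `AioPool` and `join()` are assumed from the project
# README; they are not part of the lines changed in this PR.
import asyncio
from asyncio_pool import AioPool

async def fetch(i):
    await asyncio.sleep(0.01)        # stand-in for real I/O
    return i * 2

async def main():
    pool = AioPool(size=10)          # at most 10 coroutines run at once
    await pool.spawn(fetch(1))       # waits for a free slot, then schedules
    results = await pool.map(fetch, range(100))  # spawns tasks and collects results
    await pool.join()                # wait for anything still running
    print(results[:5])

asyncio.run(main())
```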

asyncio_pool/base_pool.py (6 additions & 6 deletions)
@@ -19,7 +19,7 @@ def __init__(self, size=1024, *, loop=None):

Support asynchronous context management protocol (`aenter`, `aexit`).

-The main idea behind spwaning methods is -- they return newly created
+The main idea behind spawning methods is -- they return newly created
futures, not "native" ones, returned by `pool.create_task` or used for
`await`. Read more about this in readme and docstrings below.
'''
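A small sketch of the context-manager support mentioned in this docstring follows. The `AioPool` name is assumed from the README, and exiting the `async with` block is assumed to wait for spawned tasks, as the "asynchronous context management protocol" wording suggests.

```python
# Sketch, assuming `async with` joins the pool on exit.
import asyncio
from asyncio_pool import AioPool

async def work(i):
    await asyncio.sleep(0.01)
    return i

async def main():
    async with AioPool(size=5) as pool:
        futures = [await pool.spawn(work(i)) for i in range(20)]
    # the spawned futures hold the coroutine results once the block exits
    print(sum(f.result() for f in futures))

asyncio.run(main())
```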
@@ -164,7 +164,7 @@ async def spawn(self, coro, cb=None, ctx=None):
If callback `cb` coroutine function (not coroutine itself!) is passed,
`coro` result won't be assigned to created future, instead, `cb` will
be executed with it as a first positional argument. Callback function
-should accept 1,2 or 3 positional arguments. Full callback sigature is
+should accept 1,2 or 3 positional arguments. Full callback signature is
`cb(res, err, ctx)`. It makes no sense to create a callback without
`coro` result, so first positional argument is mandatory.
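A short sketch of the callback contract described above: `cb` must be a coroutine function and receives the result instead of it being stored on the future. The worker and callback names are illustrative, and the exact shape of `err` should be checked against the library documentation.

```python
# Sketch of cb(res, err, ctx); names below are illustrative only.
import asyncio
from asyncio_pool import AioPool

async def worker(i):
    return i * i

async def on_done(res, err, ctx):    # full 3-argument form; 1 or 2 args also work
    if err is None:
        print('ok:', res, 'ctx:', ctx)
    else:
        print('failed:', err)        # exact contents of `err` per library docs

async def main():
    pool = AioPool(size=4)
    await pool.spawn(worker(3), cb=on_done, ctx={'job': 3})
    await pool.join()

asyncio.run(main())
```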

@@ -224,7 +224,7 @@ async def map(self, fn, iterable, cb=None, ctx=None, *,

`get_result` is function, that accepts future as only positional
argument, whose goal is to extract result from future. You can pass
-your own, or use inluded `getres` object, that has 3 extractors:
+your own, or use included `getres` object, that has 3 extractors:
`getres.dont` will return future untouched, `getres.flat` will return
exception object if coroutine crashed or was cancelled, otherwise will
return result of a coroutine (or of the callback), `getres.pair` will
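The description of the third extractor is cut off by the diff context above. Below is a sketch using the two extractors that are fully described here, assuming `getres` can be imported from `asyncio_pool` alongside the pool class.

```python
# Sketch only; the `getres` import path is assumed, the extractor names come
# from the docstring above.
import asyncio
from asyncio_pool import AioPool, getres

async def may_fail(i):
    if i == 2:
        raise ValueError(i)
    return i * 10

async def main():
    pool = AioPool(size=3)
    # getres.flat: plain result on success, the exception object on failure
    flat = await pool.map(may_fail, range(4), get_result=getres.flat)
    # getres.dont: the pool's future itself, untouched
    futs = await pool.map(may_fail, range(4), get_result=getres.dont)
    print(flat)                       # e.g. [0, 10, ValueError(2), 30]
    print(all(f.done() for f in futs))

asyncio.run(main())
```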
@@ -251,7 +251,7 @@ async def itermap(self, fn, iterable, cb=None, ctx=None, *, flat=True,

async def cancel(self, *futures, get_result=getres.flat):
'''Cancels spawned or waiting tasks, found by their `futures`. If no
-`futures` are passed -- cancels all spwaned and waiting tasks.
+`futures` are passed -- cancels all spawned and waiting tasks.

Cancelling futures, returned by pool methods, usually won't help you
to cancel executing tasks, so you have to use this method.
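A sketch of the behaviour this docstring describes: cancelling the futures returned by the pool does not stop running tasks, while `pool.cancel()` does. `spawn_n` is named in the README excerpt above and is assumed here to take a coroutine like `spawn`; the return value of `cancel` matches the implementation shown in the next hunk.

```python
# Sketch: with no futures passed, cancel() cancels everything spawned or
# waiting and returns (number_cancelled, results).
import asyncio
from asyncio_pool import AioPool

async def hang():
    await asyncio.sleep(3600)        # would never finish on its own

async def main():
    pool = AioPool(size=2)
    for _ in range(5):
        await pool.spawn_n(hang())   # schedule without waiting for a free slot
    cancelled, results = await pool.cancel()   # no args: cancel all tasks
    print('cancelled:', cancelled)   # should report all 5 with this setup
    await pool.join()

asyncio.run(main())
```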
@@ -277,7 +277,7 @@ async def cancel(self, *futures, get_result=getres.flat):
if tasks:
cancelled = sum(1 for task in tasks if task.cancel())
await aio.wait(tasks) # let them actually cancel
-# need to collect them anyway, to supress warnings
+# need to collect them anyway, to suppress warnings
return cancelled, [get_result(fut) for fut in _futures]


@@ -288,4 +288,4 @@ def _get_loop():

if hasattr(aio, 'get_running_loop'):
return aio.get_running_loop()
-return aio.get_event_loop()
\ No newline at end of file
+return aio.get_event_loop()

docs/_readme_template.md (1 addition & 1 deletion)
@@ -50,7 +50,7 @@ Spawns tasks with `map_n(fn, iterable, cb, ctx)`, then waits for results with `a

`spawn` and `map` methods is probably what you should use in 99% of cases. Their overhead is minimal (~3% execution time), and even in worst cases memory usage is insignificant.

-`spawn_n`, `map_n` and `itermap` methods give you more control and flexibily, but they come with a price of higher overhead. They spawn all tasks that you want, and most of the tasks wait their turn "in background". If you spawn too much (10**6+ tasks) -- you'll use most of the memory you have in system, also you'll lose a lot of time on "concurrency management" of all the tasks spawned.
+`spawn_n`, `map_n` and `itermap` methods give you more control and flexibility, but they come with a price of higher overhead. They spawn all tasks that you want, and most of the tasks wait their turn "in background". If you spawn too much (10**6+ tasks) -- you'll use most of the memory you have in system, also you'll lose a lot of time on "concurrency management" of all the tasks spawned.

Play with `python tests/loadtest.py -h` to understand what you want to use.
