Age | Commit message | Author |
|
Signed-off-by: Marc-Antoine Perennou <Marc-Antoine@Perennou.com>
Co-authored-by: Alice Ryhl <alice@ryhl.io>
Co-authored-by: Carl Lerche <me@carllerche.com>
|
|
Co-authored-by: Alice Ryhl <alice@ryhl.io>
Co-authored-by: Carl Lerche <me@carllerche.com>
|
|
As tokio does not rely on lock poisoning, we can
avoid unwrapping on every lock by handling
the `PoisonError` in the Mutex shim.
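A minimal sketch of what such a shim could look like (illustrative only; names and visibility here are not necessarily Tokio's internals): on a poisoned lock, the guard is recovered from the `PoisonError` instead of panicking.
```rust
use std::sync::{Mutex as StdMutex, MutexGuard};

// Illustrative shim: recover the guard from a poisoned lock instead of
// unwrapping, since the surrounding code never relies on poisoning.
pub(crate) struct Mutex<T>(StdMutex<T>);

impl<T> Mutex<T> {
    pub(crate) fn new(value: T) -> Self {
        Mutex(StdMutex::new(value))
    }

    pub(crate) fn lock(&self) -> MutexGuard<'_, T> {
        match self.0.lock() {
            Ok(guard) => guard,
            Err(poisoned) => poisoned.into_inner(),
        }
    }
}
```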
Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
|
|
|
|
|
|
Co-authored-by: Eliza Weisman <eliza@buoyant.io>
|
|
This enables `block_in_place` to be used in more contexts. Specifically,
it allows you to block whenever you are off the tokio runtime (e.g., when
you are not using tokio at all, or are inside a `spawn_blocking` closure),
as well as in the threaded scheduler's `block_on`. Blocking in `LocalSet`
and the basic scheduler's `block_on` is still disallowed.
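As a rough usage sketch (assuming `tokio::task::block_in_place` on the threaded scheduler; feature flags and version details may differ):
```rust
use tokio::task;

#[tokio::main]
async fn main() {
    // Run a blocking operation inline; the worker thread hands off its
    // scheduler duties so other tasks keep making progress.
    let sum = task::block_in_place(|| {
        // Some synchronous, CPU-heavy or blocking work.
        (0u64..1_000_000).sum::<u64>()
    });
    println!("{}", sum);
}
```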
Fixes #2327.
Fixes #2393.
|
|
The work-stealing scheduler includes an optimization where each worker
includes a single slot to store the **last** scheduled task. Tasks in the
scheduler's LIFO slot are executed next. This improves throughput and
reduces latency for message-passing patterns.
Previously, this optimization was susceptible to starving other tasks in
certain cases. If two tasks ping-pong between each other without ever
yielding, the worker would never execute other tasks.
An earlier PR (#2160) introduced a form of preemption. Each task is
allocated a per-poll operation budget. Tokio resources return ready
until the budget is depleted, at which point they always return
`Pending`.
This patch leverages the operation budget to limit the LIFO scheduler
optimization. When executing tasks from the LIFO slot, the budget is
**not** reset. Once the budget goes to zero, the task in the LIFO slot
is pushed to the back of the queue.
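To make the ping-pong pattern concrete, here is a sketch of two tasks that keep each other permanently ready over `tokio::sync::mpsc` channels (written against the channel API where `Sender::send` takes `&self`); without the budget limit, the LIFO slot would let this pair monopolize its worker:
```rust
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    let (tx1, mut rx1) = mpsc::channel::<u64>(1);
    let (tx2, mut rx2) = mpsc::channel::<u64>(1);

    // Task A: whenever a message arrives, reply immediately.
    tokio::spawn(async move {
        while let Some(n) = rx1.recv().await {
            if tx2.send(n + 1).await.is_err() {
                break;
            }
        }
    });

    // Task B: same, in the other direction.
    let tx1_b = tx1.clone();
    tokio::spawn(async move {
        while let Some(n) = rx2.recv().await {
            if tx1_b.send(n + 1).await.is_err() {
                break;
            }
        }
    });

    // Kick off the ping-pong. In a long-running program the two tasks keep
    // each other permanently ready; the budget ensures the worker still
    // gets around to other tasks.
    tx1.send(0).await.unwrap();
}
```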
|
|
A single call to `poll` on a top-level task may potentially do a lot of
work before it returns `Poll::Pending`. If a task runs for a long period
of time without yielding back to the executor, it can starve other tasks
waiting on that executor to execute them, or drive underlying resources.
See for example rust-lang/futures-rs#2047, rust-lang/futures-rs#1957,
and rust-lang/futures-rs#869. Since Rust does not have a runtime, it is
difficult to forcibly preempt a long-running task.
Consider a future like this one:
```rust
use tokio::stream::{Stream, StreamExt};

async fn drop_all<I: Stream + Unpin>(mut input: I) {
    while let Some(_) = input.next().await {}
}
```
It may look harmless, but consider what happens under heavy load if the
input stream is _always_ ready. If we spawn `drop_all`, the task will
never yield, and will starve other tasks and resources on the same
executor.
This patch adds a `coop` module that provides an opt-in mechanism for
futures to cooperate with the executor to avoid starvation. This
alleviates the problem above:
```rust
use tokio::stream::{Stream, StreamExt};

async fn drop_all<I: Stream + Unpin>(mut input: I) {
    while let Some(_) = input.next().await {
        tokio::coop::proceed().await;
    }
}
```
The call to `proceed` will coordinate with the executor to make sure
that every so often control is yielded back to the executor so it can
run other tasks.
The implementation uses a thread-local counter that simply counts how
many "cooperation points" we have passed since the task was first
polled. Once the "budget" has been spent, any subsequent points will
return `Poll::Pending`, eventually making the top-level task yield. When
it finally does yield, the executor resets the budget before
running the next task.
The budget per task poll is currently hard-coded to 128. Eventually, we
may want to make it dynamic as more cooperation points are added. The
number 128 was chosen more or less arbitrarily to balance the cost of
yielding unnecessarily against the time an executor may be "held up".
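A minimal sketch of the thread-local budget mechanism described here (names and details are illustrative; the real `coop` module is more involved):
```rust
use std::cell::Cell;
use std::task::Poll;

thread_local! {
    // Remaining "cooperation points" for the task currently being polled.
    static BUDGET: Cell<u32> = Cell::new(128);
}

// Called by the executor before polling a task.
fn reset_budget() {
    BUDGET.with(|b| b.set(128));
}

// Called at each cooperation point inside a leaf future's `poll`.
fn poll_proceed() -> Poll<()> {
    BUDGET.with(|b| {
        let left = b.get();
        if left == 0 {
            // Budget spent: force the task to yield back to the executor.
            Poll::Pending
        } else {
            b.set(left - 1);
            Poll::Ready(())
        }
    })
}

fn main() {
    reset_budget();
    let mut pending = 0;
    // After 128 cooperation points, subsequent points report Pending.
    for _ in 0..130 {
        if poll_proceed().is_pending() {
            pending += 1;
        }
    }
    assert_eq!(pending, 2);
}
```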
At the moment, all the tokio leaf futures ("resources") call into coop,
but external futures have no way of doing so. We probably want to
continue limiting coop points to leaf futures in the future, but may
want to also enable third-party leaf futures to cooperate to benefit the
ecosystem as a whole. This is reflected in the methods marked as `pub`
in `mod coop` (even though the module is only `pub(crate)`). We will
likely also eventually want to expose `coop::limit`, which lets
sub-executors and manual `impl Future` blocks prevent a single sub-task
from spending the entire poll budget.
Benchmarks (see tokio-rs/tokio#2160) suggest that the overhead of `coop`
is marginal.
|
|
A refactor of the scheduler internals focusing on simplifying and
reducing unsafety. There are no fundamental logic changes.
* The state transitions of the core task component are refined and
reduced.
* `basic_scheduler` has most unsafety removed.
* `local_set` has most unsafety removed.
* `threaded_scheduler` limits most unsafety to its queue implementation.
|
|
|
|
Previously, thread-locals used by the various drivers were situated
with the driver code. This resulted in state being spread out and many
thread-locals being required to run a runtime.
This PR coalesces the thread-locals into a single struct.
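A rough sketch of the idea (placeholder types, not Tokio's actual internals): a single per-thread context holds all driver state instead of one thread-local per driver.
```rust
use std::cell::RefCell;

// Illustrative placeholder types; not Tokio's real driver handles.
struct IoHandle;
struct TimeHandle;

// One per-thread context instead of one thread-local per driver.
#[derive(Default)]
struct RuntimeContext {
    io: Option<IoHandle>,
    time: Option<TimeHandle>,
}

thread_local! {
    static CONTEXT: RefCell<RuntimeContext> = RefCell::new(RuntimeContext::default());
}

fn main() {
    // Entering a runtime installs all driver handles in one place.
    CONTEXT.with(|ctx| {
        let mut ctx = ctx.borrow_mut();
        ctx.io = Some(IoHandle);
        ctx.time = Some(TimeHandle);
    });
}
```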
|
|
Calls to tasks should not be nested. Currently, while a task is being
executed and the runtime is shutting down, a call to wake() can result
in the wake target being dropped. This, in turn, results in the drop
handler being called.
If the user holds a ref cell borrow, a mutex guard, or any such value,
dropping the task inline can result in a deadlock.
The fix is to permit tasks to be scheduled during the shutdown process
and to drop them once they are popped from the queue.
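A minimal, self-contained sketch of the hazard (using `RefCell` to stand in for the user-held borrow; this is not Tokio's internal code):
```rust
use std::cell::RefCell;
use std::rc::Rc;

// If waking a task drops it inline, its drop handler may touch state
// that the caller still has borrowed.
struct Task {
    shared: Rc<RefCell<Vec<u32>>>,
}

impl Drop for Task {
    fn drop(&mut self) {
        // Panics with a borrow error because the caller still holds a
        // borrow; with a non-reentrant mutex this would deadlock instead.
        self.shared.borrow_mut().clear();
    }
}

fn main() {
    let shared = Rc::new(RefCell::new(vec![1, 2, 3]));
    let task = Task { shared: shared.clone() };

    let _guard = shared.borrow(); // a ref cell borrow held by user code
    // Simulate wake() dropping the task inline while the borrow is held.
    drop(task); // -> BorrowMutError
}
```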
Fixes #1929, #1886
|
|
## Motivation
There's currently an issue in `task::LocalSet` where dropping the local
set can result in an infinite loop if a task running in the local set is
notified from outside the local set (e.g. by a timer). This was reported
in issue #1885.
This issue exists because the `Drop` impl for `task::local::Scheduler`
does not drain the queue of tasks notified externally, the way the basic
scheduler does. Instead, only the local queue is drained, leaving some
tasks in place. Since those tasks are never removed, the loop that keeps
cancelling tasks until the owned task list is empty never terminates.
I think this issue was due to the `Drop` impl being written before a
remote queue was added to the local scheduler, and the need to close the
remote queue as well was overlooked.
## Solution
This branch solves the problem by clearing the local scheduler's remote
queue as well as the local one.
I've added a test that reproduces the behavior. The test fails on master
and passes after this change.
In addition, this branch factors out the common task queue logic in the
basic scheduler runtime and the `LocalSet` struct in `tokio::task`. This
is because, as more work has been done on `LocalSet`, it has grown closer
and closer to the basic scheduler in behavior, and factoring out the
shared code reduces the risk of errors caused by `LocalSet` not doing
something that the basic scheduler does. The queues are now encapsulated
by a `MpscQueues` struct in `tokio::task::queue` (crate-public). As a
follow-up, I'd also like to look into changing this type to use the same
remote queue type as the threadpool (a linked list).
In particular, I noticed the basic scheduler has a flag that indicates
the remote queue has been closed, which is set when dropping the
scheduler. This prevents tasks from being added after the scheduler has
started shutting down, stopping a potential task leak. Rather than
duplicating this code in `LocalSet`, I thought it was probably better to
factor it out into a shared type.
There are a few cases where there are small differences in behavior,
though, so there is still a need for separate types implemented _using_
the new `MpscQueues` struct. However, it should cover most of the
identical code.
Note that this diff is rather large, due to the refactoring. However, the
actual fix for the infinite loop is very simple. It can be reviewed on its own
by looking at commit 4f46ac6. The refactor is in a separate commit, with
the SHA 90b5b1f.
Fixes #1885
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
|
|
The "global executor" thread-local is to track where to spawn new tasks,
**not** which scheduler is active on the current thread. This fixes a
bug with scheduling tasks on the basic_scheduler by tracking the
currently active basic_scheduler with a dedicated thread-local variable.
Fixes: #1851
|
|
## Motivation
In earlier versions of `tokio`, the `current_thread::Runtime` type could
be used to run `!Send` futures. However, PR #1716 merged the
current-thread and threadpool runtimes into a single type, which can no
longer run `!Send` futures. There is still a need in some cases to
support futures that don't implement `Send`, and the `tokio-compat`
crate requires this in order to provide APIs that existed in `tokio`
0.1.
## Solution
This branch implements the API described by @carllerche in
https://github.com/tokio-rs/tokio/pull/1716#issuecomment-549496309. It
adds a new `LocalSet` type and `spawn_local` function to `tokio::task`.
The `LocalSet` type is used to group together a set of tasks which must
run on the same thread and don't implement `Send`. These are available
when a new "rt-util" feature flag is enabled.
Currently, the local task set is run by passing it a reference to a
`Runtime` and a future to `block_on`. In the future, we may also want
to investigate allowing spawned futures to construct their own local
task sets, which would be executed on the worker that the future is
executing on.
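Roughly, usage looks like the following sketch (method names follow the API described above and the 0.2-era `LocalSet`; exact signatures may have changed since):
```rust
use std::rc::Rc;
use tokio::runtime::Runtime;
use tokio::task::LocalSet;

fn main() {
    let mut rt = Runtime::new().unwrap();
    let local = LocalSet::new();

    // Run the local set on the runtime; tasks spawned with `spawn_local`
    // stay on this thread and may hold !Send values such as Rc.
    local.block_on(&mut rt, async {
        let data = Rc::new(42); // Rc is !Send
        tokio::task::spawn_local(async move {
            println!("value: {}", data);
        })
        .await
        .unwrap();
    });
}
```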
In order to implement the new API, I've made some internal changes to
the `task` module and `Schedule` trait to support scheduling both `Send`
and `!Send` futures.
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
|
|
* runtime: cleanup and add config options
This patch finishes the cleanup as part of the transition to Tokio 0.2.
A number of changes were made to take advantage of having all Tokio
types in a single crate. Also, fixes using Tokio types from
`spawn_blocking`.
* Many threads, one resource driver
Previously, in the threaded scheduler, a resource driver (mio::Poll /
timer combo) was created per thread. This was more or less fine, except
it required balancing across the available drivers. When using a
resource driver from **outside** of the thread pool, balancing is
tricky. That design was originally chosen to avoid having a dedicated
driver thread.
Now, instead of creating many resource drivers, a single resource driver
is used. Each scheduler thread will attempt to "lock" the resource
driver before parking on it. If the resource driver is already locked,
the thread uses a condition variable to park. Contention should remain
low as, under load, the scheduler avoids using the drivers.
* Add configuration options to enable I/O / time
New configuration options are added to `runtime::Builder` to allow
enabling I/O and time drivers on a runtime instance basis. This is
useful when wanting to create lightweight runtime instances to execute
compute only tasks.
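A sketch of what this enables (assuming 0.2-era builder methods such as `threaded_scheduler`, `enable_io`, and `enable_time`; exact names have shifted between Tokio versions):
```rust
use tokio::runtime;

fn main() -> std::io::Result<()> {
    // A lightweight runtime for compute-only tasks: no I/O or time driver
    // is started.
    let mut compute = runtime::Builder::new()
        .threaded_scheduler()
        .build()?;
    compute.block_on(async {
        // Pure computation; tokio::net / tokio::time are not usable here.
    });

    // A runtime with the I/O and time drivers explicitly enabled.
    let mut full = runtime::Builder::new()
        .threaded_scheduler()
        .enable_io()
        .enable_time()
        .build()?;
    full.block_on(async {
        // Timers, sockets, etc. are available on this runtime.
    });
    Ok(())
}
```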
* Bug fixes
The condition variable parker is updated to the same algorithm used in
`std`. This is motivated by some potential deadlock cases discovered by
`loom`.
The basic scheduler is fixed to fairly schedule tasks. `push_front` was
accidentally used instead of `push_back`.
I/O, time, and spawning now work from within `spawn_blocking` closures.
* Misc cleanup
The threaded scheduler is no longer generic over `P: Park`. Instead, it
is hard coded to a specific parker. Tests, including loom tests, are
updated to use `Runtime` directly. This provides greater coverage.
The `blocking` module is moved back into `runtime` as all usage is
within `runtime` itself.
|
|
|
|
`tokio::spawn` now returns a `JoinHandle` to obtain the result of the task:
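For illustration, a minimal sketch of using the returned handle (assuming the `tokio::main` macro is enabled):
```rust
#[tokio::main]
async fn main() {
    // Spawn a task and keep its JoinHandle.
    let handle = tokio::spawn(async {
        1 + 2
    });

    // Awaiting the handle yields the task's output (or a JoinError if the
    // task panicked or was cancelled).
    let result = handle.await.unwrap();
    assert_eq!(result, 3);
}
```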
Closes #887.
|
|
It no longer supports executing `!Send` futures. Its use case is wanting
a "light" runtime. "Local" task execution, using a different strategy,
will come later.
This patch also renames `thread_pool` -> `threaded_scheduler`, but
only in public APIs for now.
|