## Motivation
Currently, the per-task `tracing` spans generated by tokio's `tracing`
feature flag include the `std::any::type_name` of the future that was
spawned. When future combinators and/or libraries like Tower are in use,
these future names can get _quite_ long. Furthermore, when formatting
the `tracing` spans with their parent spans as context, any other task
spans in the span context where the future was spawned from can _also_
include extremely long future names.
In some cases, this can result in extremely high memory use just to
store the future names. For example, in Linkerd, when we enable
`tokio=trace` to enable the task spans, there's a spawned task whose
future name is _232990 characters long_. A proxy with only 14 spawned
tasks generates a task list that's over 690 KB. Enabling task spans
under load results in the process getting OOM killed very quickly.
## Solution
This branch removes future type names from the spans generated by
`spawn`. As a replacement, to allow identifying which `spawn` call a
span corresponds to, the task span now contains the source code location
where `spawn` was called, when the compiler supports the
`#[track_caller]` attribute. Since `track_caller` was stabilized in Rust
1.46.0, and our minimum supported Rust version is 1.45.0, we can't
assume that `#[track_caller]` is always available. Instead, we have a
RUSTFLAGS cfg, `tokio_track_caller`, that guards whether or not we use
it. I've also added a `build.rs` that detects the compiler minor
version, and sets the cfg flag automatically if the current compiler
version is >= 1.46. This means users shouldn't have to enable
`tokio_track_caller` manually.
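A minimal sketch of such a version-detection build script (the parsing and structure here are illustrative and may differ from Tokio's actual `build.rs`):

```rust
// build.rs — hypothetical sketch: detect the rustc minor version and set
// the `tokio_track_caller` cfg when the compiler is 1.46 or newer.
use std::process::Command;

// Parse the minor version out of `rustc --version` output,
// e.g. "rustc 1.46.0 (04488afe3 2020-08-24)" -> Some(46).
fn parse_minor(version: &str) -> Option<u32> {
    version.split('.').nth(1)?.parse().ok()
}

fn rustc_minor_version() -> Option<u32> {
    // Cargo sets RUSTC for build scripts; fall back to plain `rustc`.
    let rustc = std::env::var("RUSTC").unwrap_or_else(|_| "rustc".to_string());
    let output = Command::new(rustc).arg("--version").output().ok()?;
    parse_minor(&String::from_utf8(output.stdout).ok()?)
}

fn main() {
    if rustc_minor_version().map_or(false, |minor| minor >= 46) {
        println!("cargo:rustc-cfg=tokio_track_caller");
    }
}
```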
Here's the trace output from the `chat` example, before this change:
![Screenshot_20201030_110157](https://user-images.githubusercontent.com/2796466/97741071-6d408800-1a9f-11eb-9ed6-b25e72f58c7b.png)
...and after:
![Screenshot_20201030_110303](https://user-images.githubusercontent.com/2796466/97741112-7e899480-1a9f-11eb-9197-c5a3f9ea1c05.png)
Closes #3073
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
|
|
Move common code and tracing integration into Handle
Fixes #2998
Closes #3004
Signed-off-by: Marc-Antoine Perennou <Marc-Antoine@Perennou.com>
|
|
|
|
tokio:
merge rt-core and rt-util as rt
rename rt-threaded to rt-multi-thread
tokio-util:
rename rt-core to rt
Closes #2942
|
|
Co-authored-by: Alice Ryhl <alice@ryhl.io>
Co-authored-by: Carl Lerche <me@carllerche.com>
|
|
Uses the infrastructure added by #2828 to enable switching
`TcpListener::accept` to use `&self`.
This also switches `poll_accept` to use `&self`. While doing so introduces
a hazard, `poll_*` style functions are considered low-level. Most users
will use the `async fn` variants, which are more misuse-resistant.
`TcpListener::incoming()` is temporarily removed as it has the same
problem as `TcpSocket::by_ref()` and will be implemented later.
|
|
|
|
|
|
Co-authored-by: Eliza Weisman <eliza@buoyant.io>
|
|
## Motivation
When debugging asynchronous systems, it can be very valuable to inspect
what tasks are currently active (see #2510). The [`tracing` crate] and
related libraries provide an interface for Rust libraries and
applications to emit and consume structured, contextual, and async-aware
diagnostic information. Because this diagnostic information is
structured and machine-readable, it is a better fit for the
task-tracking use case than textual logging — `tracing` spans can be
consumed to generate metrics ranging from a simple counter of active
tasks to histograms of poll durations, idle durations, and total task
lifetimes. This information is potentially valuable to both Tokio users
*and* to maintainers.
Additionally, `tracing` is maintained by the Tokio project and is
becoming widely adopted by other libraries in the "Tokio stack", such as
[`hyper`], [`h2`], and [`tonic`] and in [other] [parts] of the broader Rust
ecosystem. Therefore, it is suitable for use in Tokio itself.
[`tracing` crate]: https://github.com/tokio-rs/tracing
[`hyper`]: https://github.com/hyperium/hyper/pull/2204
[`h2`]: https://github.com/hyperium/h2/pull/475
[`tonic`]: https://github.com/hyperium/tonic/blob/570c606397e47406ec148fe1763586e87a8f5298/tonic/Cargo.toml#L48
[other]: https://github.com/rust-lang/chalk/pull/525
[parts]: https://github.com/rust-lang/compiler-team/issues/331
## Solution
This PR is an MVP for instrumenting Tokio with `tracing` spans. When the
"tracing" optional dependency is enabled, every spawned future will be
instrumented with a `tracing` span.
The generated spans are at the `TRACE` verbosity level, and have the
target "tokio::task", which may be used by consumers to filter whether
they should be recorded. They include fields for the type name of the
spawned future and for what kind of task the span corresponds to (a
standard `spawn`ed task, a local task spawned by `spawn_local`, or a
`blocking` task spawned by `spawn_blocking`). Because `tracing` has
separate concepts of "opening/closing" and "entering/exiting" a span, we
enter these spans every time the spawned task is polled. This allows
collecting data such as:
- the total lifetime of the task from `spawn` to `drop`
- the number of times the task was polled before it completed
- the duration of each individual time that the span was polled (and
therefore, aggregated metrics like histograms or averages of poll
durations)
- the total time a span was actively being polled, and the total time
it was alive but **not** being polled
- the time between when the task was `spawn`ed and the first poll
As an example, here is the output of a version of the `chat` example
instrumented with `tracing`:
![image](https://user-images.githubusercontent.com/2796466/87231927-e50f6900-c36f-11ea-8a90-6da9b93b9601.png)
And, with multiple connections actually sending messages:
![trace_example_1](https://user-images.githubusercontent.com/2796466/87231876-8d70fd80-c36f-11ea-91f1-0ad1a5b3112f.png)
I haven't added any `tracing` spans in the example, only converted the
existing `println!`s to `tracing::info!` and `tracing::error!` for
consistency. The span durations in the above output are generated by
`tracing-subscriber`. Of course, a Tokio-specific subscriber could
generate even more detailed statistics, but that's follow-up work once
basic tracing support has been added.
Note that the `Instrumented` type from `tracing-futures`, which attaches
a `tracing` span to a future, was reimplemented inside of Tokio to avoid
a dependency on that crate. `tracing-futures` has a feature flag that
enables an optional dependency on Tokio, and I believe that if another
crate in a dependency graph enables that feature while Tokio's `tracing`
support is also enabled, it would create a circular dependency that
Cargo wouldn't be able to handle. Also, it avoids a dependency for a
very small amount of code that is unlikely to ever change.
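The enter-on-each-poll pattern can be sketched as follows; this is illustrative rather than Tokio's actual code, and it substitutes a hand-rolled `Span` stand-in (tracking an enter count) so the example has no dependency on the `tracing` crate:

```rust
use std::cell::Cell;
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// Stand-in for `tracing::Span`: counts how many times it has been entered.
struct Span {
    entered: Cell<u32>,
}

// RAII guard: the span is "exited" when this guard drops.
struct Entered<'a>(&'a Span);

impl Span {
    fn new() -> Self {
        Span { entered: Cell::new(0) }
    }
    fn enter(&self) -> Entered<'_> {
        self.entered.set(self.entered.get() + 1);
        Entered(self)
    }
}

// A future stored together with its span; the span is entered for the
// duration of every poll, mirroring the behavior described above.
struct Instrumented<F> {
    inner: F,
    span: Span,
}

impl<F: Future> Future for Instrumented<F> {
    type Output = F::Output;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // Safety: `inner` is never moved out of the pinned struct.
        let this = unsafe { self.get_unchecked_mut() };
        let _enter = this.span.enter(); // exited when `_enter` drops
        unsafe { Pin::new_unchecked(&mut this.inner) }.poll(cx)
    }
}
```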
There is, of course, room for plenty of future work here. This might
include:
- instrumenting other parts of `tokio`, such as I/O resources and
channels (possibly via waker instrumentation)
- instrumenting the threadpool so that the state of worker threads
can be inspected
- writing `tracing-subscriber` `Layer`s to collect and display
Tokio-specific data from these traces
- using `track_caller` (when it's stable) to record _where_ a task
was `spawn`ed from
However, this is intended as an MVP to get us started on that path.
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
|
|
|
|
Simplifies the coop implementation. Prunes unused code and creates a
`Budget` type to track the current budget.
|
|
## Motivation
Currently, an issue exists where a `LocalSet` has a single cooperative
task budget that's shared across all futures spawned on the `LocalSet`
_and_ by any future passed to `LocalSet::run_until` or
`LocalSet::block_on`. Because these methods will poll the `run_until`
future before polling spawned tasks, it is possible for that future to
_always_ deterministically starve the entire `LocalSet` so that no local
tasks can proceed. When the completion of that future _itself_ depends
on other tasks on the `LocalSet`, this will then result in a deadlock,
as in issue #2460.
A detailed description of why this is the case, taken from [this
comment][1]:
`LocalSet` wraps each poll of a local task in `budget`:
https://github.com/tokio-rs/tokio/blob/947045b9445f15fb9314ba0892efa2251076ae73/tokio/src/task/local.rs#L406
This is identical to what tokio's other schedulers do when running
tasks, and in theory should give each task its own budget every time
it's polled.
_However_, `LocalSet` is different from other schedulers. Unlike the
runtime schedulers, a `LocalSet` is itself a future that's run on
another scheduler, in `block_on`. `block_on` _also_ sets a budget:
https://github.com/tokio-rs/tokio/blob/947045b9445f15fb9314ba0892efa2251076ae73/tokio/src/runtime/basic_scheduler.rs#L131
The docs for `budget` state that:
https://github.com/tokio-rs/tokio/blob/947045b9445f15fb9314ba0892efa2251076ae73/tokio/src/coop.rs#L73
This means that inside of a `LocalSet`, the calls to `budget` are
no-ops. Instead, each future polled by the `LocalSet` is subtracting
from a single global budget.
`LocalSet`'s `RunUntil` future polls the provided future before polling
any other tasks spawned on the local set:
https://github.com/tokio-rs/tokio/blob/947045b9445f15fb9314ba0892efa2251076ae73/tokio/src/task/local.rs#L525-L535
In this case, the provided future is `JoinAll`. Unfortunately, every
time a `JoinAll` is polled, it polls _every_ joined future that has not
yet completed. When the number of futures in the `JoinAll` is >= 128,
this means that the `JoinAll` immediately exhausts the task budget. This
would, in theory, be a _good_ thing --- if the `JoinAll` had a huge
number of `JoinHandle`s in it and none of them are ready, it would limit
the time we spend polling those join handles.
However, because the `LocalSet` _actually_ has a single shared task
budget, this means polling the `JoinAll` _always_ exhausts the entire
budget. There is now no budget remaining to poll any other tasks spawned
on the `LocalSet`, and they are never able to complete.
[1]: https://github.com/tokio-rs/tokio/issues/2460#issuecomment-621403122
## Solution
This branch solves this issue by resetting the task budget when polling
a `LocalSet`. I've added a new function to `coop` for resetting the task
budget to `UNCONSTRAINED` for the duration of a closure, and thus
allowing the `budget` calls in `LocalSet` to _actually_ create a new
budget for each spawned local task. Additionally, I've changed
`LocalSet` to _also_ ensure that a separate task budget is applied to
any future passed to `block_on`/`run_until`.
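The budget-reset mechanism described here can be sketched with plain thread-locals (names and the budget constant are illustrative, not Tokio's exact internals):

```rust
use std::cell::Cell;

const INITIAL_BUDGET: u8 = 128;

thread_local! {
    // `None` means "unconstrained": no budget is currently in force.
    static BUDGET: Cell<Option<u8>> = Cell::new(None);
}

// Nested calls are no-ops: if a budget is already set, the outer one wins.
fn budget<R>(f: impl FnOnce() -> R) -> R {
    let had_budget = BUDGET.with(|b| b.get().is_some());
    if had_budget {
        return f();
    }
    BUDGET.with(|b| b.set(Some(INITIAL_BUDGET)));
    let out = f();
    BUDGET.with(|b| b.set(None));
    out
}

// The fix: clear the budget for the duration of `f`, so that `budget`
// calls made inside `f` establish genuinely fresh budgets.
fn with_unconstrained<R>(f: impl FnOnce() -> R) -> R {
    let prev = BUDGET.with(|b| b.replace(None));
    let out = f();
    BUDGET.with(|b| b.set(prev));
    out
}

// A resource checks (and decrements) the current budget before proceeding.
fn poll_proceed() -> bool {
    BUDGET.with(|b| match b.get() {
        None => true,     // unconstrained
        Some(0) => false, // budget exhausted: force `Pending`
        Some(n) => {
            b.set(Some(n - 1));
            true
        }
    })
}
```

Here, `with_unconstrained` plays the role of the new `coop` function: by clearing the thread-local budget around the `LocalSet`'s own poll, each nested `budget` call really does create a fresh budget.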
Additionally, I've added a test reproducing the issue described in
#2460. This test fails prior to this change, and passes after it.
Fixes #2460
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
|
|
|
|
|
|
The link to `tokio::main` was relative to the `tokio_macros` crate in the
source directory. This is why it worked in a local build of the
documentation but not on docs.rs.
Refs: #1473
|
|
This enables `block_in_place` to be used in more contexts. Specifically,
it allows you to block whenever you are off the tokio runtime (e.g., if
you are not using tokio, or are in a `spawn_blocking` closure), and in
the threaded scheduler's `block_on`. Blocking in `LocalSet` and the
basic scheduler's `block_on` is still disallowed.
Fixes #2327.
Fixes #2393.
|
|
local (#2416)
|
|
|
|
|
|
This does not count as a breaking change as it fixes a
regression and a soundness bug.
|
|
Included changes
- all simple references like `<type>.<name>.html` for these types
- enum
- fn
- struct
- trait
- type
- simple references for methods, like struct.DelayQueue.html#method.poll
Refs: #1473
|
|
The work-stealing scheduler includes an optimization where each worker
has a single slot to store the **last** scheduled task. The task in the
scheduler's LIFO slot is executed next. This speeds up message-passing
patterns and reduces their latency.
Previously, this optimization was susceptible to starving other tasks in
certain cases. If two tasks ping-pong between each other without ever
yielding, the worker would never execute other tasks.
An earlier PR (#2160) introduced a form of pre-emption. Each task is
allocated a per-poll operation budget. Tokio resources return ready
until the budget is depleted, at which point they always return
`Pending`.
This patch leverages the operation budget to limit the LIFO scheduler
optimization. When executing tasks from the LIFO slot, the budget is
**not** reset. Once the budget goes to zero, the task in the LIFO slot
is pushed to the back of the queue.
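The interaction between the LIFO slot and the budget can be sketched as a toy model (not the actual worker code; the task type and budget constant are stand-ins):

```rust
use std::collections::VecDeque;

type Task = u32; // stand-in for a real task handle

struct Worker {
    lifo_slot: Option<Task>,
    run_queue: VecDeque<Task>,
    budget: u32,
}

impl Worker {
    fn next_task(&mut self) -> Option<Task> {
        if let Some(task) = self.lifo_slot.take() {
            if self.budget > 0 {
                // Running from the LIFO slot does NOT reset the budget, so
                // a ping-pong pair cannot monopolize the worker forever.
                return Some(task);
            }
            // Budget exhausted: demote the LIFO task to the back of the
            // run queue so other tasks get a chance to run.
            self.run_queue.push_back(task);
        }
        // Tasks picked from the run queue start with a fresh budget.
        self.budget = 128;
        self.run_queue.pop_front()
    }
}
```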
|
|
This PR was prompted by a few cases of people not noticing that
`Runtime::handle` can be cloned, and therefore not realizing it could be
moved to another thread.
|
|
A refactor of the scheduler internals focusing on simplifying and
reducing unsafety. There are no fundamental logic changes.
* The state transitions of the core task component are refined and
reduced.
* `basic_scheduler` has most unsafety removed.
* `local_set` has most unsafety removed.
* `threaded_scheduler` limits most unsafety to its queue implementation.
|
|
The previous implementation performed a load that might be part of a
data race; the value read was used only when the race did not occur.
This would be well defined in a memory model where a load that is part
of a race merely returns an undefined value; the Rust memory model, on
the other hand, defines it to be undefined behaviour.
Perform the read conditionally to avoid the data race.
Covered by existing loom tests after changing the causality check to be
immediate rather than deferred.
Fixes: #2087
|
|
|
|
Adds `is_cancelled()` and `is_panic()` methods to `JoinError`, as well as
`into_panic()` and `try_into_panic()` methods which, when applicable,
return the payload of the panic.
|
|
|
|
|
|
|
|
* add subsections for the blocking and yielding examples in task mod
* flesh out yield_now rustdoc
* add a must_use for yield_now
|
|
This PR introduces a new pattern for task-local storage. It allows for storage
and retrieval of data in an asynchronous context. It does so using a new pattern
based on past experience.
A quick example:
```rust
tokio::task_local! {
static FOO: u32;
}
FOO.scope(1, async move {
some_async_fn().await;
assert_eq!(FOO.get(), 1);
}).await;
```
## Background of task-local storage
The goal for task-local storage is to be able to provide some ambient context in
an asynchronous context. One primary use case is for distributed tracing style
systems where a request identifier is made available during the context of a
request / response exchange. In a synchronous context, thread-local storage
would be used for this. However, with asynchronous Rust, logic is run in a
"task", which is decoupled from an underlying thread. A task may run on many
threads and many tasks may be multiplexed on a single thread. This hints at the
need for task-local storage.
### Early attempt
Futures 0.1 included a [task-local storage][01] strategy. This was based around
using the "runtime task" (more on this later) as the scope. When a task was
spawned with `tokio::spawn`, a task-local map would be created and assigned
with that task. Any task-local value that was stored would be stored in this
map. Whenever the runtime polled the task, it would set the task context,
enabling task-local accesses to find the map.
There are two main problems with this strategy, which ultimately led to the
removal of runtime task-local storage:
1) In asynchronous Rust, a "task" is not a clear-cut thing.
2) The implementation did not leverage the significant optimizations that the
compiler provides for thread-local storage.
### What is a "task"?
With synchronous Rust, a "thread" is a clear concept: the construct you get with
`thread::spawn`. With asynchronous Rust, there is no strict definition of a
"task". A task is most commonly the construct you get when calling
`tokio::spawn`. The construct obtained with `tokio::spawn` will be referred to
as the "runtime task". However, it is also possible to multiplex asynchronous
logic within the context of a runtime task. APIs such as
[`task::LocalSet`][local-set], [`FuturesUnordered`][futures-unordered],
[`select!`][select], and [`join!`][join] provide the ability to embed a mini
scheduler within a single runtime task.
Revisiting the primary use case, setting a request identifier for the duration
of a request response exchange, here is a scenario in which using the "runtime
task" as the scope for task-local storage would fail:
```rust
task_local!(static REQUEST_ID: Cell<u64> = Cell::new(0));
let request1 = get_request().await;
let request2 = get_request().await;
let (response1, response2) = join!{
async {
REQUEST_ID.with(|cell| cell.set(request1.identifier()));
process(request1).await
},
async {
REQUEST_ID.with(|cell| cell.set(request2.identifier()));
process(request2).await
},
};
```
`join!` multiplexes the execution of both branches on the same runtime task.
Given this, if `REQUEST_ID` is scoped by the runtime task, the request ID would
leak across the request / response exchange processing.
This is not a theoretical problem, but was hit repeatedly in practice. For
example, Hyper's HTTP/2.0 implementation multiplexes many request / response
exchanges on the same runtime task.
### Compiler thread-local optimizations
A second smaller problem with the original task-local storage strategy is that
it required re-implementing "thread-local storage" like constructs but without
being able to get the compiler to help optimize. A discussion of how the
compiler optimizes thread-local storage is out of scope for this PR description,
but suffice to say a task-local storage implementation should be able to
leverage thread-locals as much as possible.
## A new task-local strategy
Introduced in this PR is a new strategy for dealing with task-local storage.
Instead of using the runtime task as the thread-local scope, the proposed
task-local API allows the user to define any arbitrary scope. This solves the
problem of binding task-locals to the runtime task:
```rust
tokio::task_local!(static FOO: u32);
FOO.scope(1, async move {
some_async_fn().await;
assert_eq!(FOO.get(), 1);
}).await;
```
The `scope` function establishes a task-local scope for the `FOO` variable. It
takes a value to initialize `FOO` with and an async block. The `FOO` task-local
is then available for the duration of the provided block. `scope` returns a new
future that must then be awaited on.
`tokio::task_local` will define a new thread-local. The future returned from
`scope` will set this thread-local at the start of `poll` and unset it at the
end of `poll`. `FOO.get` is a simple thread-local access with no special logic.
This strategy solves both problems. Task-locals can be scoped at any level and
can leverage thread-local compiler optimizations.
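A simplified, non-generic sketch of that `scope` future (a single hypothetical `u32` task-local; the real implementation is macro-generated and generic over the stored type):

```rust
use std::cell::Cell;
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

thread_local! {
    // Backing thread-local for a single hypothetical task-local `FOO`.
    static FOO: Cell<Option<u32>> = Cell::new(None);
}

// The future returned by `scope`: it owns the value and swaps it into the
// thread-local only while the wrapped future is being polled.
struct TaskLocalFuture<F> {
    value: Option<u32>,
    future: F,
}

impl<F: Future> Future for TaskLocalFuture<F> {
    type Output = F::Output;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // Safety: `future` is never moved out of the pinned struct.
        let this = unsafe { self.get_unchecked_mut() };
        // Set the thread-local at the start of the poll...
        let prev = FOO.with(|f| f.replace(this.value.take()));
        let res = unsafe { Pin::new_unchecked(&mut this.future) }.poll(cx);
        // ...and unset it (restoring any outer scope) at the end.
        this.value = FOO.with(|f| f.replace(prev));
        res
    }
}
```

Outside of a poll the thread-local is empty, so nothing leaks across tasks multiplexed on the same thread.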
Going back to the previous example:
```rust
task_local! {
static REQUEST_ID: u64;
}
let request1 = get_request().await;
let request2 = get_request().await;
let (response1, response2) = join!{
async {
let identifier = request1.identifier();
REQUEST_ID.scope(identifier, async {
process(request1).await
}).await
},
async {
let identifier = request2.identifier();
REQUEST_ID.scope(identifier, async {
process(request2).await
}).await
},
};
```
There is no longer a problem with request identifiers leaking.
## Disadvantages
The primary disadvantage of this strategy is that the "set and forget" pattern
with thread-locals is not possible.
```rust
thread_local! {
static FOO: Cell<usize> = Cell::new(0);
}
thread::spawn(|| {
FOO.with(|cell| cell.set(123));
do_work();
});
```
In this example, `FOO` is set at the start of the thread and automatically
cleared when the thread terminates. While this is nice in some cases, it
only really makes sense because the scope is clear: the thread itself.
A similar pattern can be achieved with the proposed strategy, but it would
require explicitly setting the scope at the root of `tokio::spawn`.
Additionally, one should only do this if the runtime task is the
appropriate scope for the specific task-local variable.
Another disadvantage is that this new method does not support lazy
initialization; it requires an explicit `LocalKey::scope` call to set the
task-local value. Since task-locals differ from thread-locals, this is an
acceptable trade-off.
[01]: https://docs.rs/futures/0.1.29/futures/task/struct.LocalKey.html
[local-set]: #
[futures-unordered]: https://docs.rs/futures/0.3.1/futures/stream/struct.FuturesUnordered.html
[select]: https://docs.rs/futures/0.3.1/futures/macro.select.html
[join]: https://docs.rs/futures/0.3.1/futures/macro.join.html
|
|
Previously acquire operations reading a value written by a successful
CAS in `drop_join_handle_fast` did not synchronize with it. The CAS
wasn't guaranteed to happen before the task deallocation, and so
created a data race between the two.
Use release success ordering to ensure synchronization.
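As a generic illustration of the ordering requirement (a toy example, not the actual task state machine):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// For a later `Acquire` load (e.g. one guarding deallocation) to
// synchronize with a successful CAS, the CAS needs a success ordering of
// at least `Release`. `AcqRel` provides that Release half on success.
fn try_claim(state: &AtomicUsize) -> bool {
    state
        .compare_exchange(0, 1, Ordering::AcqRel, Ordering::Acquire)
        .is_ok()
}
```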
|
|
Tweak context to remove more fns and usage of `Option`. Remove
`ThreadContext` struct as it is reduced to just `Handle`. Avoid passing
around individual driver handles and instead limit to the
`runtime::Handle` struct.
|
|
|
|
Currently, the only way to run a `tokio::task::LocalSet` is to call its
`block_on` method with a `&mut Runtime`, like
```rust
let mut rt = tokio::runtime::Runtime::new().unwrap();
let local = tokio::task::LocalSet::new();
local.block_on(&mut rt, async {
// whatever...
});
```
Unfortunately, this means that `LocalSet` doesn't work with the
`#[tokio::main]` and `#[tokio::test]` macros, since the `main`
function is _already_ inside of a call to `block_on`.
**Solution**
This branch adds a `LocalSet::run` method, which takes a future and
returns a new future that runs that future on the `LocalSet`. This
is analogous to `LocalSet::block_on`, except that it can be called in
an async context.
Additionally, this branch implements `Future` for `LocalSet`. Awaiting
a `LocalSet` will run all spawned local futures until they complete.
This allows code like
```rust
#[tokio::main]
async fn main() {
let local = tokio::task::LocalSet::new();
local.spawn_local(async {
// ...
});
local.spawn_local(async {
// ...
tokio::task::spawn_local(...);
// ...
});
local.await;
}
```
The `LocalSet` docs have been updated to show the usage with
`#[tokio::main]` rather than with manually created runtimes, where
applicable.
Closes #1906
Closes #1908
Fixes #2057
|
|
`Waker::will_wake` compares both a data pointer and a vtable to decide
whether wakers are equivalent. To avoid false negatives during
comparison, use the same vtable for a waker stored in `WakerRef`.
|
|
Previously, thread-locals used by the various drivers were situated
with the driver code. This resulted in state being spread out and many
thread-locals being required to run a runtime.
This PR coalesces the thread-locals into a single struct.
|
|
Calls to tasks should not be nested. Currently, while a task is being
executed and the runtime is shutting down, a call to wake() can result
in the wake target being dropped. This, in turn, results in the drop
handler being called.
If the user holds a RefCell borrow, a mutex guard, or any such value,
dropping the task inline can result in a deadlock.
The fix is to permit tasks to be scheduled during the shutdown process
and to drop them once they are popped from the queue.
Fixes #1929, #1886
|
|
|
|
|
|
Currently, a `LocalSet` does not notify the `LocalFuture` again at the
end of a tick. This means that if we didn't poll every task in the run
queue during that tick (e.g. there are more than 61 tasks enqueued),
those tasks will not be polled.
This commit fixes this issue by changing `local::Scheduler::tick` to
return whether or not the local future needs to be notified again, and
waking the task if so.
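The shape of the fix can be sketched as follows (a toy model; the constant and names are illustrative):

```rust
use std::collections::VecDeque;

const MAX_TASKS_PER_TICK: usize = 61;

struct Scheduler {
    queue: VecDeque<u32>, // stand-in for the run queue of tasks
}

impl Scheduler {
    // Polls up to `MAX_TASKS_PER_TICK` tasks; returns true if tasks remain,
    // in which case the caller must wake the local future so the leftover
    // tasks are polled on the next tick.
    fn tick(&mut self) -> bool {
        for _ in 0..MAX_TASKS_PER_TICK {
            match self.queue.pop_front() {
                Some(_task) => { /* poll the task here */ }
                None => return false,
            }
        }
        !self.queue.is_empty()
    }
}
```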
Fixes #1899
Fixes #1900
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
|
|
## Motivation
There's currently an issue in `task::LocalSet` where dropping the local
set can result in an infinite loop if a task running in the local set is
notified from outside the local set (e.g. by a timer). This was reported
in issue #1885.
This issue exists because the `Drop` impl for `task::local::Scheduler`
does not drain the queue of tasks notified externally, the way the basic
scheduler does. Instead, only the local queue is drained, leaving some
tasks in place. Since these tasks are never removed, the loop that
tries to cancel tasks until the owned task list is totally empty
spins forever.
I think this issue was due to the `Drop` impl being written before a
remote queue was added to the local scheduler, and the need to close the
remote queue as well was overlooked.
## Solution
This branch solves the problem by clearing the local scheduler's remote
queue as well as the local one.
I've added a test that reproduces the behavior. The test fails on master
and passes after this change.
In addition, this branch factors out the common task queue logic in the
basic scheduler runtime and the `LocalSet` struct in `tokio::task`. This
is because as more work was done on the `LocalSet`, it has gotten closer
and closer to the basic scheduler in behavior, and factoring out the
shared code reduces the risk of errors caused by `LocalSet` not doing
something that the basic scheduler does. The queues are now encapsulated
by a `MpscQueues` struct in `tokio::task::queue` (crate-public). As a
follow-up, I'd also like to look into changing this type to use the same
remote queue type as the threadpool (a linked list).
In particular, I noticed the basic scheduler has a flag that indicates
the remote queue has been closed, which is set when dropping the
scheduler. This prevents tasks from being added after the scheduler has
started shutting down, stopping a potential task leak. Rather than
duplicating this code in `LocalSet`, I thought it was probably better to
factor it out into a shared type.
There are a few cases where there are small differences in behavior,
though, so there is still a need for separate types implemented _using_
the new `MpscQueues` struct. However, it should cover most of the
identical code.
Note that this diff is rather large, due to the refactoring. However, the
actual fix for the infinite loop is very simple. It can be reviewed on its own
by looking at commit 4f46ac6. The refactor is in a separate commit, with
the SHA 90b5b1f.
Fixes #1885
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
|
|
* Make JoinError Sync
* Move Mutex inside JoinError internals, hide its constructors
* Deprecate JoinError constructors, fix internal usages
|
|
|
|
## Motivation
Currently, `tokio::task::LocalSet`'s `block_on` method requires the
future to live for the 'static lifetime. However, this bound is not
required — the future is wrapped in a `LocalFuture`, and then passed
into `Runtime::block_on`, which does _not_ require a `'static` future.
This came up while updating `tokio-compat` to work with version 0.2. To
mimic the behavior of `tokio` 0.1's `current_thread::Runtime::run`, we
want to be able to have a runtime block on the `recv` future from an
mpsc channel indicating when the runtime is idle. To support `!Send`
futures, as the old `current_thread::Runtime` did, we must do so inside
of a `LocalSet`. However, with the current bounds, we cannot await an
`mpsc::Receiver`'s `recv` future inside the `LocalSet::block_on` call.
## Solution
This branch removes the unnecessary `'static` bound.
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
|
|
Some feature flags are missing and some are duplicated.
Closes #1836
|
|
It turns out that the `Scheduler::release` method on `LocalSet`'s
`Scheduler` *is* called when the `Scheduler` is dropped with tasks
still running. Currently, that method is `unreachable!`, which means
that dropping a `LocalSet` with tasks running will panic.
This commit fixes the panic by pushing released tasks to
`pending_drop`, the same strategy `BasicScheduler` uses.
Fixes #1842
|