|
Provides a `select!` macro for concurrently waiting on multiple async
expressions. The macro has goals and syntax similar to the one provided
by the `futures` crate, but differs significantly in implementation.
First, this implementation does not require special traits to be
implemented on futures or streams (i.e., no `FusedFuture`). A design goal
is to be able to pass a "plain" `async fn` result into the `select!` macro.
Even without `FusedFuture`, this `select!` implementation is able to
handle all the cases the `futures::select!` macro can handle. It does this
by supporting pre-poll conditions on branches and result pattern
matching. For pre-conditions, each branch is able to include a condition
that disables the branch if it evaluates to false. This allows the user
to guard futures that have already been polled, preventing double
polling. Pattern matching can be used to disable streams that complete.
A second big difference is that the macro is implemented almost entirely as a
declarative macro. The biggest advantage to using this strategy is that
the user will not need to alter the rustc recursion limit except in the
most extreme cases.
The resulting future also tends to be smaller in many cases.
|
|
Make sure the tail mutex is acquired when `condvar` is notified,
otherwise the wakeup may be lost and the sender could be left waiting.
Use `notify_all()` instead of `notify_one()` to ensure that the correct
sender is woken. Finally, only do any of this when there are no more
readers left.
Additionally, fix a bug where calling `send()` could panic when the slot
had another pending send.
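The pattern being enforced here is the standard one for condition variables: hold the mutex guarding the state while notifying, so a waiter cannot check the state and park in between. A generic `std` sketch of that pattern (a stand-in, not the actual channel internals):

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

fn rendezvous() -> bool {
    let pair = Arc::new((Mutex::new(false), Condvar::new()));
    let pair2 = Arc::clone(&pair);

    let waiter = thread::spawn(move || {
        let (lock, cvar) = &*pair2;
        let mut ready = lock.lock().unwrap();
        // Re-check the flag after every wakeup (spurious wakeups happen).
        while !*ready {
            ready = cvar.wait(ready).unwrap();
        }
    });

    {
        let (lock, cvar) = &*pair;
        // Acquire the mutex *before* notifying: the waiter is then either
        // not yet parked (and will observe `ready == true`) or already
        // parked (and will receive the notification). No wakeup is lost.
        let mut ready = lock.lock().unwrap();
        *ready = true;
        cvar.notify_all();
    }

    waiter.join().is_ok()
}

fn main() {
    assert!(rendezvous());
}
```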
|
|
The `&mut self` requirement on `TcpStream` methods ensures that at most
two tasks use the stream: one for reading and one for writing.
`TcpStream::split` allows two separate tasks to hold a reference to a single
`TcpStream`. `TcpStream::{peek,poll_peek}` only poll for read readiness, and
therefore are safe to use with a `ReadHalf`.
Instead of duplicating `TcpStream::poll_peek`, both `poll_peek` methods now
delegate to a private method that relies on the fact that only a
`&TcpStream` is required.
Closes #2136
|
|
replace website link, because the previous one was broken
|
|
|
|
|
|
|
|
|
|
|
|
* io: Clean up split check
* fix tests
|
|
The Tokio runtime provides a "shell" runtime when `rt-core` is not
available. This shell runtime is enough to support `#[tokio::main]` and
`#[tokio::test]`.
A previous change disabled these two attr macros when `rt-core` was not
selected. This patch fixes this by re-enabling the `main` and `test`
attr macros without `rt-core` and adds some integration tests to prevent
future regressions.
|
|
`AsyncRead` is safe to implement, but an implementation can report that it
read more bytes than it actually did. `poll_read_buf`, on the other hand,
implicitly trusts that the returned length is correct, which makes it
possible to advance the buffer past what has actually been initialized.
An alternative fix could be to avoid the panic and instead advance by
`n.min(b.len())`.
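The hazard generalizes to any length-reporting read API. An analogous blocking sketch with `std::io::Read`, showing a misbehaving reader and the defensive clamp mentioned above:

```rust
use std::io::Read;

/// A reader that (incorrectly) claims to have produced more bytes than
/// it wrote. This is legal to write in safe Rust, since `Read` (like
/// `AsyncRead`) is safe to implement.
struct LyingReader;

impl Read for LyingReader {
    fn read(&mut self, _buf: &mut [u8]) -> std::io::Result<usize> {
        Ok(1024) // lie: nothing was actually written into `_buf`
    }
}

fn clamped_read() -> (usize, usize) {
    let mut buf = [0u8; 8];
    let claimed = LyingReader.read(&mut buf).unwrap();
    // Trusting `claimed` blindly would advance a cursor past the
    // initialized region; clamping bounds it to the buffer length.
    let advanced = claimed.min(buf.len());
    (claimed, advanced)
}

fn main() {
    assert_eq!(clamped_read(), (1024, 8));
}
```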
|
|
Loom currently does not compile on windows due to a
transitive dependency on `generator`. The `generator`
crate builds have started to fail on windows CI. Loom
is not run under windows, however, so removing the
loom dependency on windows is sufficient to fix CI.
Refs: https://github.com/Xudong-Huang/generator-rs/issues/19
|
|
* Add a method to test if split streams come from the same stream.
The exposed stream ID can also be used as key in associative containers.
* Document the fact that split stream IDs can dangle.
|
|
|
|
* add subsections for the blocking and yielding examples in task mod
* flesh out yield_now rustdoc
* add a must_use for yield_now
|
|
* runtime: add Handle::try_current
Makes it possible to get a `Handle` only if a `Runtime` has been started, without panicking if that isn't the case.
* Use an error instead
|
|
This PR introduces a new pattern for task-local storage. It allows for
storage and retrieval of data in an asynchronous context, using an approach
informed by past experience.
A quick example:
```rust
tokio::task_local! {
static FOO: u32;
}
FOO.scope(1, async move {
some_async_fn().await;
assert_eq!(FOO.get(), 1);
}).await;
```
## Background of task-local storage
The goal of task-local storage is to provide some ambient context in
an asynchronous context. One primary use case is for distributed tracing style
systems where a request identifier is made available during the context of a
request / response exchange. In a synchronous context, thread-local storage
would be used for this. However, with asynchronous Rust, logic is run in a
"task", which is decoupled from an underlying thread. A task may run on many
threads and many tasks may be multiplexed on a single thread. This hints at the
need for task-local storage.
### Early attempt
Futures 0.1 included a [task-local storage][01] strategy. This was based around
using the "runtime task" (more on this later) as the scope. When a task was
spawned with `tokio::spawn`, a task-local map would be created and assigned
with that task. Any task-local value that was stored would be stored in this
map. Whenever the runtime polled the task, it would set the task context
enabling access to find the value.
There are two main problems with this strategy, which ultimately led to the
removal of runtime task-local storage:
1) In asynchronous Rust, a "task" is not a clear-cut thing.
2) The implementation did not leverage the significant optimizations that the
compiler provides for thread-local storage.
### What is a "task"?
With synchronous Rust, a "thread" is a clear concept: the construct you get with
`thread::spawn`. With asynchronous Rust, there is no strict definition of a
"task". A task is most commonly the construct you get when calling
`tokio::spawn`. The construct obtained with `tokio::spawn` will be referred to
as the "runtime task". However, it is also possible to multiplex asynchronous
logic within the context of a runtime task. APIs such as
[`task::LocalSet`][local-set], [`FuturesUnordered`][futures-unordered],
[`select!`][select], and [`join!`][join] provide the ability to embed a mini
scheduler within a single runtime task.
Revisiting the primary use case, setting a request identifier for the duration
of a request response exchange, here is a scenario in which using the "runtime
task" as the scope for task-local storage would fail:
```rust
task_local!(static REQUEST_ID: Cell<u64> = Cell::new(0));
let request1 = get_request().await;
let request2 = get_request().await;
let (response1, response2) = join!{
async {
REQUEST_ID.with(|cell| cell.set(request1.identifier()));
process(request1)
},
async {
REQUEST_ID.with(|cell| cell.set(request2.identifier()));
process(request2)
},
};
```
`join!` multiplexes the execution of both branches on the same runtime task.
Given this, if `REQUEST_ID` is scoped by the runtime task, the request ID would
leak across the request / response exchange processing.
This is not a theoretical problem, but was hit repeatedly in practice. For
example, Hyper's HTTP/2.0 implementation multiplexes many request / response
exchanges on the same runtime task.
### Compiler thread-local optimizations
A second smaller problem with the original task-local storage strategy is that
it required re-implementing "thread-local storage" like constructs but without
being able to get the compiler to help optimize. A discussion of how the
compiler optimizes thread-local storage is out of scope for this PR description,
but suffice it to say that a task-local storage implementation should be able to
leverage thread-locals as much as possible.
## A new task-local strategy
Introduced in this PR is a new strategy for dealing with task-local storage.
Instead of using the runtime task as the thread-local scope, the proposed
task-local API allows the user to define any arbitrary scope. This solves the
problem of binding task-locals to the runtime task:
```rust
tokio::task_local!(static FOO: u32);
FOO.scope(1, async move {
some_async_fn().await;
assert_eq!(FOO.get(), 1);
}).await;
```
The `scope` function establishes a task-local scope for the `FOO` variable. It
takes a value to initialize `FOO` with and an async block. The `FOO` task-local
is then available for the duration of the provided block. `scope` returns a new
future that must then be awaited on.
`tokio::task_local` will define a new thread-local. The future returned from
`scope` will set this thread-local at the start of `poll` and unset it at the
end of `poll`. `FOO.get` is a simple thread-local access with no special logic.
This strategy solves both problems. Task-locals can be scoped at any level and
can leverage thread-local compiler optimizations.
Going back to the previous example:
```rust
task_local! {
static REQUEST_ID: u64;
}
let request1 = get_request().await;
let request2 = get_request().await;
let (response1, response2) = join!{
async {
let identifier = request1.identifier();
REQUEST_ID.scope(identifier, async {
process(request1).await
}).await
},
async {
let identifier = request2.identifier();
REQUEST_ID.scope(identifier, async {
process(request2).await
}).await
},
};
```
There is no longer a problem with request identifiers leaking.
## Disadvantages
The primary disadvantage of this strategy is that the "set and forget" pattern
with thread-locals is not possible.
```rust
thread_local! {
static FOO: Cell<usize> = Cell::new(0);
}
thread::spawn(|| {
FOO.with(|cell| cell.set(123));
do_work();
});
```
In this example, `FOO` is set at the start of the thread and automatically
cleared when the thread terminates. While this is convenient in some cases,
it only makes sense because the scope of the value is clear: the thread
itself.
A similar pattern can be achieved with the proposed strategy, but it would
require explicitly setting the scope at the root of `tokio::spawn`. Additionally, one
should only do this if the runtime task is the appropriate scope for the
specific task-local variable.
Another disadvantage is that this new method does not support lazy
initialization; it requires an explicit `LocalKey::scope` call to set the
task-local value. Since task-locals differ from thread-locals, this is
acceptable.
[01]: https://docs.rs/futures/0.1.29/futures/task/struct.LocalKey.html
[local-set]: #
[futures-unordered]: https://docs.rs/futures/0.3.1/futures/stream/struct.FuturesUnordered.html
[select]: https://docs.rs/futures/0.3.1/futures/macro.select.html
[join]: https://docs.rs/futures/0.3.1/futures/macro.join.html
|
|
* One more clippy fix, remove special instructions from CI
* Fix Collect description
|
|
|
|
Provides an asynchronous equivalent to `Iterator::collect()`. A sealed
`FromStream` trait is added. Stabilization is pending Rust supporting
`async` trait fns.
|
|
fixes #2064, #2106
|
|
Asynchronous equivalent to `Iterator::chain`.
|
|
An async equivalent to `iter::once`
|
|
`stream::empty()` is the asynchronous equivalent to
`std::iter::empty()`. `pending()` provides a stream that never becomes
ready.
|
|
Provides an equivalent to stream `select()` from futures-rs. `merge`
best describes the operation (vs. `select`). `futures-rs` named the
operation "select" for historical reasons and did not rename it back to
`merge` in 0.3. The operation is most commonly named `merge` elsewhere
as well (e.g., ReactiveX).
|
|
|
|
|
|
|
|
With the rt-threaded feature flag we create a threaded scheduler by
default. The documentation had a copy-and-paste error from the section
about the basic scheduler.
|
|
Previously, acquire operations reading a value written by a successful
CAS in `drop_join_handle_fast` did not synchronize with it. The CAS
wasn't guaranteed to happen before the task deallocation, and so
created a data race between the two.
Use release success ordering to ensure synchronization.
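The general shape of the fix can be illustrated with a plain atomic (a simplified stand-in for the task state, not the actual tokio internals):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

const IDLE: usize = 0;
const COMPLETE: usize = 1;

fn demo() -> usize {
    let state = AtomicUsize::new(IDLE);

    // A CAS whose *success* ordering is `Release` publishes every write
    // made before it to any thread that later loads the new value with
    // `Acquire`. With a `Relaxed` success ordering there is no such
    // happens-before edge, and the reader may observe stale memory:
    // the data race described above.
    state
        .compare_exchange(IDLE, COMPLETE, Ordering::Release, Ordering::Relaxed)
        .expect("state was IDLE");

    state.load(Ordering::Acquire)
}

fn main() {
    assert_eq!(demo(), COMPLETE);
}
```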
|
|
|
|
Previously, when the threaded scheduler was in the shutdown process, it
would hold a lock while dropping in-flight tasks. If those tasks
included a drop handler that attempted to wake a second task, the wake
operation would attempt to acquire a lock held by the scheduler. This
results in a deadlock.
Dropping the lock before dropping tasks resolves the problem.
Fixes #2046
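The shape of the fix, sketched with `std` primitives (a stand-in, not the scheduler's real types): take the tasks out while holding the lock, release it, and only then run the drop handlers.

```rust
use std::sync::{Arc, Mutex};

struct Scheduler {
    queue: Vec<Task>,
    woken: Vec<u32>,
}

// A task whose drop handler needs the scheduler lock (e.g. to wake
// another task). Dropping it while the lock is held would deadlock.
struct Task {
    id: u32,
    sched: Arc<Mutex<Scheduler>>,
}

impl Drop for Task {
    fn drop(&mut self) {
        self.sched.lock().unwrap().woken.push(self.id);
    }
}

fn shutdown() -> Vec<u32> {
    let sched = Arc::new(Mutex::new(Scheduler {
        queue: Vec::new(),
        woken: Vec::new(),
    }));
    let task = Task { id: 7, sched: Arc::clone(&sched) };
    sched.lock().unwrap().queue.push(task);

    // Drain the queue under the lock, then drop the guard *before*
    // dropping the tasks themselves.
    let tasks = {
        let mut guard = sched.lock().unwrap();
        std::mem::take(&mut guard.queue)
    }; // lock released here
    drop(tasks); // drop handlers may now re-acquire the lock safely

    let woken = sched.lock().unwrap().woken.clone();
    woken
}

fn main() {
    assert_eq!(shutdown(), vec![7]);
}
```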
|
|
|
|
Previously, if an IO event was received during the runtime shutdown
process, it was possible to enter a deadlock. This was due to the
scheduler shutdown logic not expecting tasks to get scheduled once the
worker was in the shutdown process.
This patch fixes the deadlock by checking the queues for new tasks after
each call to park. If a new task is received, it is forcefully shutdown.
Fixes #2061
|
|
* io: Fix the Seek adapter and add a tested example.
If the first `AsyncSeek::start_seek` call returns `Ready`,
`AsyncSeek::poll_complete` will be called.
Previously, a `start_seek` that immediately returned `Ready` would cause
the Seek adapter to return `Pending` without registering a `Waker`.
* fs: Do not return write errors from methods on `AsyncSeek`.
Write errors should only be returned on subsequent writes or on flush.
Also copy the `last_write_err` assert from `poll_read` to both
`start_seek` and `poll_complete` for consistency.
|
|
|
|
Brings back old macro implementations and updates the version of
tokio-macros that tokio depends on.
Prepares a new release.
|
|
|
|
|
|
|
|
* Links are added where missing and examples are improved.
* Improve `stdin`, `stdout`, and `stderr` documentation by going
into more detail about what can go wrong in concurrent
situations, and provide examples for `stdout` and `stderr`.
|
|
Tweak context to remove more fns and usage of `Option`. Remove
`ThreadContext` struct as it is reduced to just `Handle`. Avoid passing
around individual driver handles and instead limit to the
`runtime::Handle` struct.
|
|
|
|
Currently, the only way to run a `tokio::task::LocalSet` is to call its
`block_on` method with a `&mut Runtime`, like
```rust
let mut rt = tokio::runtime::Runtime::new().unwrap();
let local = tokio::task::LocalSet::new();
local.block_on(&mut rt, async {
// whatever...
});
```
Unfortunately, this means that `LocalSet` doesn't work with the
`#[tokio::main]` and `#[tokio::test]` macros, since the `main`
function is _already_ inside of a call to `block_on`.
**Solution**
This branch adds a `LocalSet::run` method, which takes a future and
returns a new future that runs that future on the `LocalSet`. This
is analogous to `LocalSet::block_on`, except that it can be called in
an async context.
Additionally, this branch implements `Future` for `LocalSet`. Awaiting
a `LocalSet` will run all spawned local futures until they complete.
This allows code like
```rust
#[tokio::main]
async fn main() {
let local = tokio::task::LocalSet::new();
local.spawn_local(async {
// ...
});
local.spawn_local(async {
// ...
tokio::task::spawn_local(...);
// ...
});
local.await;
}
```
The `LocalSet` docs have been updated to show the usage with
`#[tokio::main]` rather than with manually created runtimes, where
applicable.
Closes #1906
Closes #1908
Fixes #2057
|
|
Adds `Handle::current()` for accessing a handle to the runtime
associated with the current thread. This handle can then be
passed to other threads in order to spawn or perform other
runtime related tasks.
|
|
`Waker::will_wake` compares both a data pointer and a vtable to
decide whether two wakers are equivalent. To avoid false negatives during
comparison, use the same vtable for a waker stored in `WakerRef`.
|
|
|
|
Fixes #2009
|
|
Document that conversion from `std` types must be done from within
the Tokio runtime context.
|