Age | Commit message | Author |
|
|
|
* doc: fix typo in addr
* doc: fix typo in stream
* doc: fix typo in stream/collect
|
|
Previously, dropping the Write handle would issue a `shutdown(Both)`. However,
shutting down the read half is not portable and not the correct action to take.
This changes the behavior of OwnedWriteHalf to only perform a `shutdown(Write)`
on drop.
|
|
This took me a while to catch on to because I didn't think there was any reason to investigate the individual documentation of each half. As someone dealing with TCP streams directly for the first time (without prior experience from other languages), this caught me by surprise.
|
|
Although very similar in many regards, illumos and Solaris have been
diverging since the end of OpenSolaris. With the addition of illumos as
a Rust target, it must be wired into the same interfaces which it was
consuming when running under the 'solaris' target.
|
|
Refs: #2307
|
|
Refs: rust-lang/rustfmt#4140
|
|
Included changes:
- all simple references like `<type>.<name>.html` for these types:
  - enum
  - fn
  - struct
  - trait
  - type
- simple references for methods, like `struct.DelayQueue.html#method.poll`
Refs: #1473
|
|
The streams documentation referred to the module-level 'split' doc, which is no longer there.
|
|
* Update listener docs
* re-wrap text and add links
|
|
The Incoming types currently don't take ownership of the listener, but
in most cases, users who want to use the Listener as a stream will only
want to use the stream from that point on. So, implement Stream directly
on the Listener types.
|
|
The `&mut self` requirements for `TcpStream` methods ensure that there are at
most two tasks using the stream: one for reading and one for writing.
`TcpStream::split` allows two separate tasks to hold a reference to a single
`TcpStream`. `TcpStream::{peek,poll_peek}` only poll for read readiness, and
therefore are safe to use with a `ReadHalf`.
Instead of duplicating `TcpStream::poll_peek`, a private method is now used by
both `poll_peek` methods that uses the fact that only a `&TcpStream` is
required.
Closes #2136
|
|
Provides an asynchronous equivalent to `Iterator::collect()`. A sealed
`FromStream` trait is added. Stabilization is pending Rust supporting
`async` trait fns.
|
|
Document that conversion from `std` types must be done from within
the Tokio runtime context.
|
|
also add an async version of `fs::canonicalize`
|
|
`cargo fmt` has a bug where it does not format modules scoped with
feature flags.
|
|
`ToSocketAddrs` is a sealed trait pending changes in Rust that will allow
defining async trait fns. Until then, `net::lookup_host` is provided as a way
to convert a `T: ToSocketAddrs` into `SocketAddr`s.
|
|
Introduces `StreamExt` trait. This trait will be used to add utility functions
to make working with streams easier. This patch includes two functions:
* `next`: a future returning the next item in the stream.
* `map`: transform each item in the stream.
|
|
The `process_socket` function is hidden from the user, which makes the example
fail to compile if copied by the reader.
|
|
This used to be exposed in 0.1, but was switched to private during the
upgrade. The `async fn` is sufficient for many, but not all cases.
Closes #1556
|
|
This fixes the API docs for both `TcpListener::incoming` and
`UnixListener::incoming`. The function now takes `&mut self` instead of
`self`. Adds an example for both functions.
|
|
Also makes the `tokio::net::{tcp, udp, unix}` modules hold only "utility"
types. The primary types are in `tokio::net` directly.
|
|
* runtime: cleanup and add config options
This patch finishes the cleanup as part of the transition to Tokio 0.2.
A number of changes were made to take advantage of having all Tokio
types in a single crate. Also, fixes using Tokio types from
`spawn_blocking`.
* Many threads, one resource driver
Previously, in the threaded scheduler, a resource driver (mio::Poll /
timer combo) was created per thread. This was more or less fine, except
it required balancing across the available drivers. When using a
resource driver from **outside** of the thread pool, balancing is
tricky. The change was originally done to avoid having a dedicated driver
thread.
Now, instead of creating many resource drivers, a single resource driver
is used. Each scheduler thread will attempt to "lock" the resource
driver before parking on it. If the resource driver is already locked,
the thread uses a condition variable to park. Contention should remain
low as, under load, the scheduler avoids using the drivers.
* Add configuration options to enable I/O / time
New configuration options are added to `runtime::Builder` to allow
enabling I/O and time drivers on a runtime instance basis. This is
useful when wanting to create lightweight runtime instances to execute
compute only tasks.
* Bug fixes
The condition variable parker is updated to the same algorithm used in
`std`. This is motivated by some potential deadlock cases discovered by
`loom`.
The basic scheduler is fixed to fairly schedule tasks. `push_front` was
accidentally used instead of `push_back`.
I/O, time, and spawning now work from within `spawn_blocking` closures.
* Misc cleanup
The threaded scheduler is no longer generic over `P: Park`. Instead, it
is hard coded to a specific parker. Tests, including loom tests, are
updated to use `Runtime` directly. This provides greater coverage.
The `blocking` module is moved back into `runtime` as all usage is
within `runtime` itself.
|
|
Tokio will track changes to bytes until 0.5 is released.
|
|
Link fix only. After this fix, `cargo doc --package` succeeds.
|
|
The misc `split` types (`ReadHalf`, `WriteHalf`, `SendHalf`, `RecvHalf`)
are moved up a module and the `*::split` module is removed.
|
|
The I/O driver is made private and moved to `tokio::io::driver`. `Registration` is
moved to `tokio::io::Registration` and `PollEvented` is moved to `tokio::io::PollEvented`.
Additionally, the concurrent slab used by the I/O driver is cleaned up and extracted to
`tokio::util::slab`, allowing it to eventually be used by other types.
|
|
Removes dependencies between Tokio feature flags. For example, `process`
should not depend on `sync` simply because it uses the `mpsc` channel.
Instead, feature flags represent **public** APIs that become available
with the feature enabled. When the feature is not enabled, the
functionality is removed. If another Tokio component requires the
functionality, it stays as `pub(crate)`.
The threaded scheduler is now exposed under `rt-threaded`. This feature
flag only enables the threaded scheduler and does not include I/O,
networking, or time. Those features must be explicitly enabled.
A `full` feature flag is added that enables all features.
`stdin`, `stdout`, `stderr` are exposed under `io-std`.
Macros are used to scope code by feature flag.
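Under this scheme a consumer opts into exactly the APIs it needs; a sketch of what the `Cargo.toml` side might look like (version number illustrative):

```toml
[dependencies]
# Only the threaded scheduler; no I/O, networking, or time.
tokio = { version = "0.2", features = ["rt-threaded"] }

# Or opt into everything at once:
# tokio = { version = "0.2", features = ["full"] }
```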
|
|
In an effort to reach API stability, the `tokio` crate is shedding its
_public_ dependencies on crates that either a) do not provide a stable
(1.0+) release with longevity guarantees or b) do not match the `tokio`
release cadence. Of course, implementing `std` traits fits the
requirements.
The one exception, for now, is the `Stream` trait found in `futures_core`.
It is expected that this trait will not change much and will be moved into `std`.
Since Tokio has not yet reached 1.0, I feel that it is acceptable to maintain
a dependency on this trait given how foundational it is.
Since the `Stream` implementation is optional, types that are logically
streams provide `async fn next_*` functions to obtain the next value.
Avoiding the `next()` name prevents fn conflicts with `StreamExt::next()`.
Additionally, some misc cleanup is also done:
- `tokio::io::io` -> `tokio::io::util`.
- `delay` -> `delay_until`.
- `Timeout::new` -> `timeout(...)`.
- `signal::ctrl_c()` returns a future instead of a stream.
- `{tcp,unix}::Incoming` is removed (due to lack of `Stream` trait).
- `time::Throttle` is removed (due to lack of `Stream` trait).
- Fix: `mpsc::UnboundedSender::send(&self)` (no more conflict with `Sink` fns).
|
|
This patch started as an effort to make `time::Timer` private. However, in an
effort to get the build compiling again, more and more changes were made. This
probably should have been broken up, but here we are. I will attempt to
summarize the changes here.
* Feature flags are reorganized to make them clearer. `net-driver` becomes
`io-driver`. `rt-current-thread` becomes `rt-core`.
* The `Runtime` can be created without any executor. This replaces `enter`. It
also allows creating I/O / time drivers that are standalone.
* `tokio::timer` is renamed to `tokio::time`. This brings it in line with `std`.
* `tokio::timer::Timer` is renamed to `Driver` and made private.
* The `clock` module is removed. Instead, an `Instant` type is provided. This
type defaults to calling `std::time::Instant`. A `test-util` feature flag can
be used to enable hooking into time.
* The `blocking` module is moved to the top level and is cleaned up.
* The `task` module is moved to the top level.
* The thread-pool's in-place blocking implementation is cleaned up.
* `runtime::Spawner` is renamed to `runtime::Handle` and can be used to "enter"
a runtime context.
|
|
Now, all types are under `runtime`. `executor::util` is moved to a top
level `util` module.
|
|
When the crates were merged, each component kept its own `loom` file
containing mocked types it needed. This patch unifies them all in one
location.
|
|
## Motivation
The `futures` crate's [`compat` module][futures-compat] provides
interoperability between `futures` 0.1 and `std::future` _future types_
(e.g. implementing `std::future::Future` for a type that implements the
`futures` 0.1 `Future` trait). However, this on its own is insufficient
to run code written against `tokio` 0.1 on a `tokio` 0.2 runtime, if
that code also relies on `tokio`'s runtime services. If legacy tasks are
executed that rely on `tokio::timer`, perform IO using `tokio`'s
reactor, or call `tokio::spawn`, those API calls will fail unless there
is also a runtime compatibility layer.
## Solution
As proposed in #1549, this branch introduces a new `tokio-compat` crate,
with implementations of the thread pool and current-thread runtimes that
are capable of running both tokio 0.1 and tokio 0.2 tasks. The compat
runtime creates a background thread that runs a `tokio` 0.1 timer and
reactor, and sets itself as the `tokio` 0.1 executor as well as the
default 0.2 executor. This allows 0.1 futures that use the 0.1 timer,
reactor, and executor APIs to run alongside `std::future` tasks on the
0.2 runtime.
### Examples
Spawning both `tokio` 0.1 and `tokio` 0.2 futures:
```rust
use futures_01::future::lazy;

tokio_compat::run(lazy(|| {
    // spawn a `futures` 0.1 future using the `spawn` function from the
    // `tokio` 0.1 crate:
    tokio_01::spawn(lazy(|| {
        println!("hello from tokio 0.1!");
        Ok(())
    }));

    // spawn an `async` block future on the same runtime using `tokio`
    // 0.2's `spawn`:
    tokio_02::spawn(async {
        println!("hello from tokio 0.2!");
    });

    Ok(())
}))
```
Futures on the compat runtime can use `timer` APIs from both 0.1 and 0.2
versions of `tokio`:
```rust
use std::time::{Duration, Instant};
use futures_01::future::lazy;
use tokio_compat::prelude::*;

tokio_compat::run_03(async {
    // Wait for a `tokio` 0.1 `Delay`...
    let when = Instant::now() + Duration::from_millis(10);
    tokio_01::timer::Delay::new(when)
        // convert the delay future into a `std::future` that we can `await`.
        .compat()
        .await
        .expect("tokio 0.1 timer should work!");
    println!("10 ms have elapsed");

    // Wait for a `tokio` 0.2 `Delay`...
    let when = Instant::now() + Duration::from_millis(20);
    tokio_02::timer::delay(when).await;
    println!("20 ms have elapsed");
});
```
## Future Work
This is just an initial implementation of a `tokio-compat` crate; there
are more compatibility layers we'll want to provide before that crate is
complete. For example, we should also provide compatibility between
`tokio` 0.2's `AsyncRead` and `AsyncWrite` traits and the `futures` 0.1
and `futures` 0.3 versions of those traits. In #1549, @carllerche also
suggests that the `compat` crate provide reimplementations of APIs that
were removed from `tokio` 0.2 proper, such as the `tcp::Incoming`
future.
Additionally, there is likely extra work required to get the
`tokio-threadpool` 0.1 `blocking` APIs to work on the compat runtime.
This will be addressed in a follow-up PR.
Fixes: #1605
Fixes: #1552
Refs: #1549
[futures-compat]: https://rust-lang-nursery.github.io/futures-api-docs/0.3.0-alpha.19/futures/compat/index.html
|
|
A step towards collapsing Tokio sub crates into a single `tokio`
crate (#1318).
The sync implementation is now provided by the main `tokio` crate.
Functionality can be opted out of by using the various net related
feature flags.
|
|
A step towards collapsing Tokio sub crates into a single `tokio`
crate (#1318).
The executor implementation is now provided by the main `tokio` crate.
Functionality can be opted out of by using the various net related
feature flags.
|
|
## Motivation
The `tokio_net::driver` module currently stores the state associated
with scheduled IO resources in a `Slab` implementation from the `slab`
crate. Because inserting items into and removing items from `slab::Slab`
requires mutable access, the slab must be placed within a `RwLock`. This
has the potential to be a performance bottleneck, especially in the context of
the work-stealing scheduler where tasks and the reactor are often located on
the same thread.
`tokio-net` currently reimplements the `ShardedRwLock` type from
`crossbeam` on top of `parking_lot`'s `RwLock` in an attempt to squeeze
as much performance as possible out of the read-write lock around the
slab. This introduces several dependencies that are not used elsewhere.
## Solution
This branch replaces the `RwLock<Slab>` with a lock-free sharded slab
implementation.
The sharded slab is based on the concept of _free list sharding_
described by Leijen, Zorn, and de Moura in [_Mimalloc: Free List
Sharding in Action_][mimalloc], which describes the implementation of a
concurrent memory allocator. In this approach, the slab is sharded so
that each thread has its own thread-local list of slab _pages_. Objects
are always inserted into the local slab of the thread where the
insertion is performed. Therefore, the insert operation need not be
synchronized.
However, since objects can be _removed_ from the slab by threads other
than the one on which they were inserted, removal operations can still
occur concurrently. Therefore, Leijen et al. introduce a concept of
_local_ and _global_ free lists. When an object is removed on the same
thread it was originally inserted on, it is placed on the local free
list; if it is removed on another thread, it goes on the global free
list for the heap of the thread from which it originated. To find a free
slot to insert into, the local free list is used first; if it is empty,
the entire global free list is popped onto the local free list. Since
the local free list is only ever accessed by the thread it belongs to,
it does not require synchronization at all, and because the global free
list is popped from infrequently, the cost of synchronization has a
reduced impact. A majority of insertions can occur without any
synchronization at all; and removals only require synchronization when
an object has left its parent thread.
The sharded slab was initially implemented in a separate crate (soon to
be released), vendored in-tree to decrease `tokio-net`'s dependencies.
Some code from the original implementation was removed or simplified,
since it is only necessary to support `tokio-net`'s use case, rather
than to provide a fully generic implementation.
[mimalloc]: https://www.microsoft.com/en-us/research/uploads/prod/2019/06/mimalloc-tr-v1.pdf
## Performance
These graphs were produced by out-of-tree `criterion` benchmarks of the
sharded slab implementation.
The first shows the results of a benchmark where an increasing number of
items are inserted into and then removed from a slab concurrently by five
threads. It compares the performance of the sharded slab implementation
with a `RwLock<slab::Slab>`:
<img width="1124" alt="Screen Shot 2019-10-01 at 5 09 49 PM" src="https://user-images.githubusercontent.com/2796466/66078398-cd6c9f80-e516-11e9-9923-0ed6292e8498.png">
The second graph shows the results of a benchmark where an increasing
number of items are inserted and then removed by a _single_ thread. It
compares the performance of the sharded slab implementation with an
`RwLock<slab::Slab>` and a `mut slab::Slab`.
<img width="925" alt="Screen Shot 2019-10-01 at 5 13 45 PM" src="https://user-images.githubusercontent.com/2796466/66078469-f0974f00-e516-11e9-95b5-f65f0aa7e494.png">
Note that while the `mut slab::Slab` (i.e. no read-write lock) is
(unsurprisingly) faster than the sharded slab in the single-threaded
benchmark, the sharded slab outperforms the un-contended
`RwLock<slab::Slab>`. This case, where the lock is uncontended and only
accessed from a single thread, represents the best case for the current
use of `slab` in `tokio-net`, since the lock cannot be conditionally
removed in the single-threaded case.
These benchmarks demonstrate that, while the sharded approach introduces
a small constant-factor overhead, it offers significantly better
performance across concurrent accesses.
## Notes
This branch removes the following dependencies from `tokio-net`:
- `parking_lot`
- `num_cpus`
- `crossbeam_util`
- `slab`
This branch adds the following dev-dependencies:
- `proptest`
- `loom`
Note that these dev dependencies were used to implement tests for the
sharded-slab crate out-of-tree, and were necessary in order to vendor
the existing tests. Alternatively, since the implementation is tested
externally, we _could_ remove these tests in order to avoid picking up
dev-dependencies. However, this means that we should try to ensure that
`tokio-net`'s vendored implementation doesn't diverge significantly from
upstream's, since it would be missing a majority of its tests.
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
|