path: root/tokio/src/sync/mutex.rs
Date | Commit message | Author
2020-10-09 | fs: future proof `File` (#2930) | Carl Lerche
Changes inherent methods to take `&self` instead of `&mut self`. This brings the API in line with `std`. This patch is implemented by using a `tokio::sync::Mutex` to guard the internal `File` state. This is not an ideal implementation strategy, but it does not make a big impact compared to having to dispatch operations to a background thread followed by a blocking syscall. In the future, the implementation can be improved as we explore async file-system APIs provided by the operating system (iocp / io_uring). Closes #2927
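As a rough sketch of the pattern described above (`SharedFile` is a hypothetical stand-in for tokio's real `File` internals):

```rust
use tokio::sync::Mutex;

// Hypothetical stand-in for the real `File` internals; illustrates how an
// async Mutex lets inherent methods take `&self` rather than `&mut self`.
struct SharedFile {
    state: Mutex<Vec<u8>>,
}

impl SharedFile {
    async fn write_all(&self, data: &[u8]) {
        // Concurrent callers are serialized by the lock, not by `&mut self`.
        let mut state = self.state.lock().await;
        state.extend_from_slice(data);
    }
}
```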
2020-09-23 | sync: add `get_mut()` for `Mutex`, `RwLock` (#2856) | Daniel Henry-Mantilla
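A minimal usage sketch of `get_mut()`:

```rust
use tokio::sync::Mutex;

fn main() {
    let mut mutex = Mutex::new(1);
    // `get_mut` takes `&mut self`, which proves no other handle exists,
    // so the inner value can be borrowed without locking.
    *mutex.get_mut() += 1;
    assert_eq!(*mutex.get_mut(), 2);
}
```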
2020-09-12 | sync: add const-constructors for some sync primitives (#2790) | mental
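A sketch of what the const constructor enables (at the time, `Mutex::const_new` was gated on the `parking_lot` feature):

```rust
use tokio::sync::Mutex;

// A const constructor lets the Mutex be initialized in a `static`,
// with no lazy-initialization wrapper required.
static COUNTER: Mutex<u64> = Mutex::const_new(0);

async fn bump() -> u64 {
    let mut n = COUNTER.lock().await;
    *n += 1;
    *n
}
```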
Co-authored-by: Mikail Bagishov <bagishov.mikail@yandex.ru>
Co-authored-by: Eliza Weisman <eliza@buoyant.io>
Co-authored-by: Alice Ryhl <alice@ryhl.io>
2020-07-31 | sync: better Debug for Mutex (#2725) | Mikail Bagishov
2020-07-20 | sync: "which kind of mutex?" section added to doc (#2658) | Alice Ryhl
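The short version of that doc section: the async Mutex earns its keep when a guard must be held across an `.await` point, which a `std::sync::Mutex` guard cannot safely do. A sketch (using current tokio names such as `tokio::time::sleep`):

```rust
use std::time::Duration;
use tokio::sync::Mutex;

async fn hold_across_await(mutex: &Mutex<u32>) {
    let mut guard = mutex.lock().await;
    // The guard stays live across this await; with std's Mutex this would
    // block the thread, and its guard is not meant to cross await points.
    tokio::time::sleep(Duration::from_millis(10)).await;
    *guard += 1;
}
```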
2020-06-13 | sync: allow unsized types in Mutex and RwLock (#2615) | Taiki Endo
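A sketch of what the unsized support allows, relying on ordinary unsized coercion through a `Box`:

```rust
use tokio::sync::Mutex;

async fn demo() {
    // Coerce Mutex<[i32; 3]> to the unsized Mutex<[i32]> behind a Box.
    let mutex: Box<Mutex<[i32]>> = Box::new(Mutex::new([1, 2, 3]));
    let mut guard = mutex.lock().await;
    guard[0] = 10;
}
```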
2020-06-07 | chore: fix ci failure on master (#2593) | Taiki Endo
* Fix clippy warnings
* Pin rustc version to 1.43.1 on macOS

Refs: https://github.com/rust-lang/rust/issues/73030
2020-05-31 | docs: use intra-links in the docs (#2575) | xliiv
2020-05-06 | rt: simplify coop implementation (#2498) | Carl Lerche
Simplifies the coop implementation: prunes unused code and creates a `Budget` type to track the current budget.
2020-04-29 | mutex: add `OwnedMutexGuard` for `Arc<Mutex<T>>`s (#2455) | Eliza Weisman
This PR adds a new `OwnedMutexGuard` type and `lock_owned` and `try_lock_owned` methods for `Arc<Mutex<T>>`. This is pretty much the same as the similar APIs added in #2421. I've also corrected some existing documentation that incorrectly implied that the existing `lock` method cloned an internal `Arc` — I think this may be a holdover from `tokio` 0.1's `Lock` type? Signed-off-by: Eliza Weisman <eliza@buoyant.io>
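A minimal sketch of the new API:

```rust
use std::sync::Arc;
use tokio::sync::{Mutex, OwnedMutexGuard};

async fn spawn_holding_lock(lock: Arc<Mutex<u32>>) {
    // `lock_owned` takes `self: Arc<Self>`, so the guard owns a clone of
    // the Arc and is 'static: it can move into a spawned task.
    let mut guard: OwnedMutexGuard<u32> = lock.clone().lock_owned().await;
    tokio::spawn(async move {
        *guard += 1;
    });
}
```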
2020-04-21 | sync: improve mutex documentation (#2405) | João Oliveira
Co-authored-by: Taiki Endo <te316e89@gmail.com>
Co-authored-by: Alice Ryhl <alice@ryhl.io>
2020-04-12 | docs: replace some html links with rustdoc paths (#2381) | xliiv
Included changes:

- all simple references like `<type>.<name>.html` for these types:
  - enum
  - fn
  - struct
  - trait
  - type
- simple references for methods, like `struct.DelayQueue.html#method.poll`

Refs: #1473
2020-04-03 | sync: ensure Mutex, RwLock, and Semaphore futures are Send + Sync (#2375) | Eliza Weisman
Previously, the `Mutex::lock`, `RwLock::{read, write}`, and `Semaphore::acquire` futures in `tokio::sync` implemented `Send + Sync` automatically. This was by virtue of being implemented using a `poll_fn` that only closed over `Send + Sync` types. However, this broke in PR #2325, which rewrote those types using the new `batch_semaphore`. Now, they await an `Acquire` future, which contains a `Waiter`, which internally contains an `UnsafeCell`, and thus does not implement `Sync`. Since removing previously implemented traits breaks existing code, this inadvertently caused a breaking change. There were tests ensuring that the `Mutex`, `RwLock`, and `Semaphore` types themselves were `Send + Sync`, but no tests that the _futures they return_ implemented those traits. I've fixed this by adding an explicit impl of `Sync` for the `batch_semaphore::Acquire` future. Since the `Waiter` type held by this struct is only accessed when borrowed mutably, it is safe for it to implement `Sync`. Additionally, I've added to the bounds checks for the affected `tokio::sync` types to ensure that returned futures continue to implement `Send + Sync` in the future.
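The bound checks mentioned here follow the usual compile-time assertion pattern; roughly (a sketch, not the exact test code):

```rust
use tokio::sync::Mutex;

// If the returned future ever stops being Send + Sync,
// these calls become compile errors.
fn require_send<T: Send>(_: &T) {}
fn require_sync<T: Sync>(_: &T) {}

fn check_lock_future(mutex: &'static Mutex<u32>) {
    let fut = mutex.lock();
    require_send(&fut);
    require_sync(&fut);
}
```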
2020-03-23 | sync: new internal semaphore based on intrusive lists (#2325) | Eliza Weisman
## Motivation

Many of Tokio's synchronization primitives (`RwLock`, `Mutex`, `Semaphore`, and the bounded MPSC channel) are based on the internal semaphore implementation, called `semaphore_ll`. This semaphore type provides a lower-level internal API for the semaphore implementation than the public `Semaphore` type, and supports "batch" operations, where waiters may acquire more than one permit at a time, and batches of permits may be released back to the semaphore.

Currently, `semaphore_ll` uses an atomic singly-linked list for the waiter queue. The linked list implementation is specific to the semaphore. This implementation therefore requires a heap allocation for every waiter in the queue. These allocations are owned by the semaphore, rather than by the task awaiting permits from the semaphore. Critically, they are only _deallocated_ when permits are released back to the semaphore, at which point it dequeues as many waiters from the front of the queue as can be satisfied with the released permits. If a task attempts to acquire permits from the semaphore and is cancelled (such as by timing out), its waiter nodes remain in the list until they are dequeued while releasing permits. In cases where large numbers of tasks are cancelled while waiting for permits, this results in extremely high memory use for the semaphore (see #2237).

## Solution

@Matthias247 has proposed that Tokio adopt the approach used in his `futures-intrusive` crate: using an _intrusive_ linked list to store the wakers of tasks waiting on a synchronization primitive. In an intrusive list, each list node is stored as part of the entry that node represents, rather than in a heap allocation that owns the entry. Because futures must be pinned in order to be polled, the necessary invariant of such a list --- that entries may not move while in the list --- may be upheld by making the waiter node `!Unpin`. In this approach, the waiter node can be stored inline in the future, rather than requiring a separate heap allocation, and cancelled futures may remove their nodes from the list.

This branch adds a new semaphore implementation that uses the intrusive list added to Tokio in #2210. The implementation is essentially a hybrid of the old `semaphore_ll` and the semaphore used in `futures-intrusive`: while a `Mutex` around the wait list is necessary, since the intrusive list is not thread-safe, the permit state is stored outside of the mutex and updated atomically. The mutex is acquired only when accessing the wait list — if a task can acquire sufficient permits without waiting, it does not need to acquire the lock. When releasing permits, we iterate over the wait list from the end of the queue until we run out of permits to release, and split off all the nodes that received enough permits to wake up into a separate list. Then, we can drain the new list and notify those wakers *after* releasing the lock. Because the split operation only modifies the pointers on the head node of the split-off list and the new tail node of the old list, it is O(1) and does not require an allocation to return a variable-length number of waiters to notify.

Because of the intrusive list invariants, the API provided by the new `batch_semaphore` is somewhat different than that of `semaphore_ll`. In particular, the `Permit` type has been removed. This type was primarily intended to allow the reuse of a wait list node allocated on the heap. Since the intrusive list means we can avoid heap-allocating waiters, this is no longer necessary.
Instead, acquiring permits is done by polling an `Acquire` future returned by the `Semaphore` type. The use of a future here ensures that the waiter node is always pinned while waiting to acquire permits, and that a reference to the semaphore is available to remove the waiter if the future is cancelled.

Unfortunately, the current implementation of the bounded MPSC requires a `poll_acquire` operation, and has methods that call it while outside of a pinned context. Therefore, I've left the old `semaphore_ll` implementation in place to be used by the bounded MPSC, and updated the `Mutex`, `RwLock`, and `Semaphore` APIs to use the new implementation. Hopefully, a subsequent change can update the bounded MPSC to use the new semaphore as well.

Fixes #2237

Signed-off-by: Eliza Weisman <eliza@buoyant.io>
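From the caller's side, none of this machinery is visible: acquiring is just awaiting the future, and dropping that future mid-wait is what triggers the node removal described above. A sketch against the current public `Semaphore` (where `acquire` returns a `Result`):

```rust
use std::sync::Arc;
use tokio::sync::Semaphore;

async fn limited(sem: Arc<Semaphore>) {
    // Awaiting pins the Acquire future (and the waiter node inside it).
    // If this future is dropped before completing, the waiter unlinks
    // itself from the intrusive list instead of leaking.
    let _permit = sem.acquire().await.expect("semaphore closed");
    // ... hold the permit while doing the guarded work ...
}
```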
2020-03-16 | Add cooperative task yielding (#2160) | Jon Gjengset
A single call to `poll` on a top-level task may potentially do a lot of work before it returns `Poll::Pending`. If a task runs for a long period of time without yielding back to the executor, it can starve other tasks waiting on that executor to execute them, or drive underlying resources. See for example rust-lang/futures-rs#2047, rust-lang/futures-rs#1957, and rust-lang/futures-rs#869. Since Rust does not have a runtime, it is difficult to forcibly preempt a long-running task.

Consider a future like this one:

```rust
use tokio::stream::{Stream, StreamExt};

async fn drop_all<I: Stream + Unpin>(mut input: I) {
    while let Some(_) = input.next().await {}
}
```

It may look harmless, but consider what happens under heavy load if the input stream is _always_ ready. If we spawn `drop_all`, the task will never yield, and will starve other tasks and resources on the same executor.

This patch adds a `coop` module that provides an opt-in mechanism for futures to cooperate with the executor to avoid starvation. This alleviates the problem above:

```rust
use tokio::stream::{Stream, StreamExt};

async fn drop_all<I: Stream + Unpin>(mut input: I) {
    while let Some(_) = input.next().await {
        tokio::coop::proceed().await;
    }
}
```

The call to `proceed` will coordinate with the executor to make sure that every so often control is yielded back to the executor so it can run other tasks.

The implementation uses a thread-local counter that simply counts how many "cooperation points" we have passed since the task was first polled. Once the "budget" has been spent, any subsequent points will return `Poll::Pending`, eventually making the top-level task yield. When it finally does yield, the executor resets the budget before running the next task.

The budget per task poll is currently hard-coded to 128. Eventually, we may want to make it dynamic as more cooperation points are added. The number 128 was chosen more or less arbitrarily to balance the cost of yielding unnecessarily against the time an executor may be "held up".

At the moment, all the tokio leaf futures ("resources") call into coop, but external futures have no way of doing so. We probably want to continue limiting coop points to leaf futures in the future, but may want to also enable third-party leaf futures to cooperate to benefit the ecosystem as a whole. This is reflected in the methods marked as `pub` in `mod coop` (even though the module is only `pub(crate)`). We will likely also eventually want to expose `coop::limit`, which enables sub-executors and manual `impl Future` blocks to avoid one sub-task spending all of their poll budget.

Benchmarks (see tokio-rs/tokio#2160) suggest that the overhead of `coop` is marginal.
2020-02-17 | Add `Mutex::into_inner` method (#2250) | Waffle Lapkin
Add a `Mutex::into_inner` method that consumes the mutex and returns the underlying value. Fixes: #2211
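A minimal sketch:

```rust
use tokio::sync::Mutex;

fn main() {
    let mutex = Mutex::new(vec![1, 2, 3]);
    // Consuming the Mutex by value hands back the data with no locking.
    let data: Vec<i32> = mutex.into_inner();
    assert_eq!(data, vec![1, 2, 3]);
}
```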
2020-01-24 | docs: use third form in API docs (#2027) | Oleg Nosov
2020-01-03 | sync: add batch op support to internal semaphore (#2004) | Carl Lerche
Extend the internal semaphore to support batch operations. With this PR, consumers of the semaphore are able to atomically request more than one permit. This is useful for implementing an `RwLock`.
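The RwLock construction hinted at here works roughly like this: readers take one permit each, a writer atomically takes all of them. A hedged sketch using today's public `Semaphore::acquire_many` (the PR itself targets the internal semaphore):

```rust
use tokio::sync::Semaphore;

const MAX_READERS: u32 = 32;

async fn read(sem: &Semaphore) {
    // Each reader holds one permit; up to MAX_READERS read concurrently.
    let _permit = sem.acquire().await.expect("closed");
}

async fn write(sem: &Semaphore) {
    // A writer takes every permit in one batch, excluding all readers.
    let _permits = sem.acquire_many(MAX_READERS).await.expect("closed");
}
```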
2019-12-23 | doc: add additional Mutex example (#2019) | Stephen Carman
2019-12-18 | sync: encapsulate TryLockError variants (#1980) | Carl Lerche
As there is currently only one variant, make the error type an opaque struct.
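With an opaque error, callers simply branch on `Ok`/`Err`; a minimal sketch:

```rust
use tokio::sync::Mutex;

fn main() {
    let mutex = Mutex::new(0);
    let _guard = mutex.try_lock().expect("uncontended");
    // The error is an opaque struct, so there are no variants to match on.
    assert!(mutex.try_lock().is_err());
}
```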
2019-12-17 | sync: add Semaphore (#1973) | Michael P. Jung
Provide an asynchronous Semaphore implementation. This is useful for synchronizing concurrent access to a shared resource.
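A usage sketch (assuming the current API, where `Semaphore::new(n)` sets the initial permit count and `acquire` returns a `Result`):

```rust
use std::sync::Arc;
use tokio::sync::Semaphore;

#[tokio::main]
async fn main() {
    // At most three tasks may hold a permit (and thus touch the shared
    // resource) at any one time.
    let sem = Arc::new(Semaphore::new(3));
    let mut handles = Vec::new();
    for i in 0..10 {
        let sem = sem.clone();
        handles.push(tokio::spawn(async move {
            let _permit = sem.acquire().await.expect("closed");
            println!("task {i} holds a permit");
        }));
    }
    for h in handles {
        h.await.unwrap();
    }
}
```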
2019-12-17 | sync: print MutexGuard inner value for debugging (#1961) | yim7
2019-12-10 | Add Mutex::try_lock and (Unbounded)Receiver::try_recv (#1939) | Michael P. Jung
2019-12-09 | sync::Mutex: Fix typo in documentation (#1934) | Danilo Bargen
2019-12-09 | sync::Mutex: Add note about the absence of poisoning (#1933) | Danilo Bargen
2019-12-06 | sync: fix Mutex when lock future dropped before complete (#1902) | Michael P. Jung
The bug caused the mutex to reach a state where it is locked and cannot be unlocked. Fixes #1898
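The failure mode is easiest to see with a timeout, which drops the in-flight lock future when it fires; a sketch:

```rust
use std::time::Duration;
use tokio::sync::Mutex;
use tokio::time::timeout;

async fn lock_with_timeout(mutex: &Mutex<u32>) {
    // If the timeout wins, the lock future is dropped before completion --
    // exactly the scenario that used to leave the mutex permanently locked.
    match timeout(Duration::from_millis(5), mutex.lock()).await {
        Ok(mut guard) => *guard += 1,
        Err(_elapsed) => { /* gave up waiting */ }
    }
}
```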
2019-11-15 | Limit `futures` dependency to `Stream` via feature flag (#1774) | Carl Lerche
In an effort to reach API stability, the `tokio` crate is shedding its _public_ dependencies on crates that either a) do not provide a stable (1.0+) release with longevity guarantees or b) do not match the `tokio` release cadence. Of course, implementing `std` traits fits the requirements.

The one exception, for now, is the `Stream` trait found in `futures_core`. It is expected that this trait will not change much and will be moved into `std`. Since Tokio has not yet reached 1.0, I feel that it is acceptable to maintain a dependency on this trait given how foundational it is.

Since the `Stream` implementation is optional, types that are logically streams provide `async fn next_*` functions to obtain the next value. Avoiding the `next()` name prevents fn conflicts with `StreamExt::next()`.

Additionally, some misc cleanup is also done:

- `tokio::io::io` -> `tokio::io::util`.
- `delay` -> `delay_until`.
- `Timeout::new` -> `timeout(...)`.
- `signal::ctrl_c()` returns a future instead of a stream.
- `{tcp,unix}::Incoming` is removed (due to lack of `Stream` trait).
- `time::Throttle` is removed (due to lack of `Stream` trait).
- Fix: `mpsc::UnboundedSender::send(&self)` (no more conflict with `Sink` fns).
2019-10-29 | sync: move into `tokio` crate (#1705) | Carl Lerche
A step towards collapsing Tokio sub crates into a single `tokio` crate (#1318). The sync implementation is now provided by the main `tokio` crate. Functionality can be opted out of by using the various net related feature flags.