path: root/tokio/src/sync/batch_semaphore.rs
Age | Commit message | Author
2020-12-02 | sync: make add_permits panic with usize::MAX >> 3 permits (#3188) | Blas Rodriguez Irizar
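A minimal sketch of the guarded behavior, assuming the public `tokio::sync::Semaphore` API (the maximum is documented as `usize::MAX >> 3`):

```rust
use tokio::sync::Semaphore;

fn main() {
    let semaphore = Semaphore::new(1);

    // Adding a reasonable number of permits is fine.
    semaphore.add_permits(1);

    // Permit counts are packed into a usize alongside flag bits, so the
    // count is capped; exceeding the cap is a caller bug and now panics
    // instead of silently overflowing.
    // semaphore.add_permits(usize::MAX >> 3); // would panic
}
```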
2020-10-09 | fs: future proof `File` (#2930) | Carl Lerche
Changes inherent methods to take `&self` instead of `&mut self`. This brings the API in line with `std`. This patch is implemented by using a `tokio::sync::Mutex` to guard the internal `File` state. This is not an ideal implementation strategy, but it does not make a big impact compared to the existing cost of dispatching operations to a background thread followed by a blocking syscall. In the future, the implementation can be improved as we explore async file-system APIs provided by the operating system (IOCP / io_uring).

Closes #2927
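A hedged sketch of the strategy described above; the types are illustrative stand-ins, not tokio's actual `File` internals:

```rust
use tokio::sync::Mutex;

// Illustrative stand-in for the real internal state (which also holds
// the std file handle and in-flight operation state).
struct FileState {
    pos: u64,
}

struct File {
    inner: Mutex<FileState>,
}

impl File {
    // Taking `&self` works because the async mutex serializes access.
    async fn set_pos(&self, pos: u64) {
        let mut state = self.inner.lock().await;
        state.pos = pos;
    }
}

#[tokio::main]
async fn main() {
    let f = File { inner: Mutex::new(FileState { pos: 0 }) };
    f.set_pos(42).await;
}
```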
2020-10-08 | chore: Fix clippy lints (#2931) | bdonlan
Closes: #2929
Co-authored-by: Bryan Donlan <bdonlan@amazon.com>
2020-10-02 | Fix new clippy warning (#2899) | Alice Ryhl
2020-09-25 | chore: handle std `Mutex` poisoning in a shim (#2872) | Zahari Dichev
As tokio does not rely on poisoning, we can avoid unwrapping on every lock by handling the `PoisonError` in the `Mutex` shim.

Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>
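A minimal sketch of such a shim, using the standard poison-recovery pattern (illustrative, not tokio's exact internal module):

```rust
use std::sync::{Mutex as StdMutex, MutexGuard, PoisonError};

// Shim around std's Mutex: recover from poisoning instead of panicking,
// since tokio does not rely on poisoning for correctness.
struct Mutex<T>(StdMutex<T>);

impl<T> Mutex<T> {
    fn new(t: T) -> Self {
        Mutex(StdMutex::new(t))
    }

    fn lock(&self) -> MutexGuard<'_, T> {
        // If another thread panicked while holding the lock, take the
        // guard out of the PoisonError and keep going.
        self.0.lock().unwrap_or_else(PoisonError::into_inner)
    }
}

fn main() {
    let m = Mutex::new(0);
    *m.lock() += 1;
    assert_eq!(*m.lock(), 1);
}
```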
2020-09-24 | sync: support mpsc send with `&self` (#2861) | Carl Lerche
Updates the mpsc channel to use the intrusive-waker-based semaphore. This enables using `Sender` with `&self`. Instead of using `Sender::poll_ready` to ensure capacity and then updating the `Sender` state, an `async fn Sender::reserve()` is added. This function returns a `Permit` value representing the reserved capacity.

Fixes: #2637
Refs: #2718 (intrusive waiters)
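A short usage example of the API added here (channel size and values are arbitrary):

```rust
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel(1);

    // `reserve()` takes `&self` and waits for capacity; the returned
    // `Permit` represents one reserved slot, so sending cannot fail.
    let permit = tx.reserve().await.expect("channel closed");
    permit.send("hello");

    assert_eq!(rx.recv().await, Some("hello"));
}
```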
2020-09-12 | sync: add const-constructors for some sync primitives (#2790) | mental
Co-authored-by: Mikail Bagishov <bagishov.mikail@yandex.ru>
Co-authored-by: Eliza Weisman <eliza@buoyant.io>
Co-authored-by: Alice Ryhl <alice@ryhl.io>
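A minimal sketch of why `const` constructors matter: they let a primitive live in a `static` with no lazy or runtime initialization. The `Semaphore` here is a toy stand-in, not tokio's:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

struct Semaphore {
    permits: AtomicUsize,
}

impl Semaphore {
    // `const fn` means this can run at compile time, so the value can
    // be placed directly in a `static`.
    const fn new(permits: usize) -> Self {
        Semaphore { permits: AtomicUsize::new(permits) }
    }
}

static GLOBAL: Semaphore = Semaphore::new(8);

fn main() {
    assert_eq!(GLOBAL.permits.load(Ordering::Acquire), 8);
}
```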
2020-09-02 | util: add `const fn` support for internal `LinkedList`. (#2805) | mental
2020-08-09 | sync: typo in impl Semaphore (#2745) | Blas Rodriguez Irizar
2020-08-08 | sync: show correct permits in fmt::Debug (#2750) | Blas Rodriguez Irizar
Fixes: #2744
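A hedged sketch of the bug's shape (the `PERMIT_SHIFT` layout below is illustrative, not tokio's exact encoding): the permit count shares its usize with low-order flag bits, so `Debug` must print the decoded count rather than the raw atomic:

```rust
use std::fmt;
use std::sync::atomic::{AtomicUsize, Ordering};

const PERMIT_SHIFT: usize = 1; // illustrative flag-bit width

struct Semaphore {
    permits: AtomicUsize,
}

impl Semaphore {
    fn available_permits(&self) -> usize {
        self.permits.load(Ordering::Acquire) >> PERMIT_SHIFT
    }
}

impl fmt::Debug for Semaphore {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("Semaphore")
            // Print the decoded count, not the raw (shifted) value.
            .field("permits", &self.available_permits())
            .finish()
    }
}

fn main() {
    let s = Semaphore { permits: AtomicUsize::new(5 << PERMIT_SHIFT) };
    assert_eq!(format!("{:?}", s), "Semaphore { permits: 5 }");
}
```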
2020-07-21 | sync: support larger number of semaphore permits (#2607) | Kornel
2020-06-07 | chore: fix ci failure on master (#2593) | Taiki Endo
* Fix clippy warnings
* Pin rustc version to 1.43.1 in macOS

Refs: https://github.com/rust-lang/rust/issues/73030
2020-05-21 | coop: Undo budget decrement on Pending (#2549) | Jon Gjengset
This patch updates the coop logic so that the budget is only decremented if a future makes progress (that is, if it returns `Ready`). This is realized by restoring the budget to its former value after `poll_proceed` _unless_ the caller indicates that it made progress.

The thinking here is that we always want tasks to make progress when we poll them. With the way things were, if a task polled 128 resources that could make no progress and just returned `Pending`, then a 129th resource that _could_ make progress would not be polled. Worse yet, this could manifest as a deadlock: if the first 128 resources were all _waiting_ for the 129th resource, it would _never_ be polled.

The downside of this change is that `Pending` resources now do not take up any part of the budget, even though they _do_ take up time on the executor. If a task is particularly aggressive (or unoptimized) and polls a large number of resources that cannot make progress whenever it is polled, coop will let it run potentially much longer before yielding than it could before. The impact of this should be relatively contained though, because tasks that behaved this way in the past probably ignored `Pending` _anyway_, so whether a resource returned `Pending` due to coop or due to lack of progress may not make a difference to them.
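A minimal sketch of the mechanism, with illustrative names rather than tokio's internal coop API: the poll tentatively charges the budget, and the charge is rolled back on drop unless progress was reported.

```rust
struct Budget(u8);

struct Coop<'a> {
    budget: &'a mut Budget,
    made_progress: bool,
}

impl Coop<'_> {
    fn made_progress(&mut self) {
        self.made_progress = true;
    }
}

impl Drop for Coop<'_> {
    fn drop(&mut self) {
        if !self.made_progress {
            // The poll returned `Pending`: refund the decrement so a run
            // of fruitless polls cannot exhaust the budget.
            self.budget.0 += 1;
        }
    }
}

// Charge one unit up front; the charge sticks only if the caller later
// reports progress via `made_progress()`.
fn poll_proceed(budget: &mut Budget) -> Option<Coop<'_>> {
    if budget.0 == 0 {
        return None; // budget exhausted: the task should yield
    }
    budget.0 -= 1;
    Some(Coop { budget, made_progress: false })
}

fn main() {
    let mut budget = Budget(2);

    // A poll that returns Pending: the decrement is undone on drop.
    drop(poll_proceed(&mut budget).unwrap());
    assert_eq!(budget.0, 2);

    // A poll that makes progress keeps the charge.
    let mut coop = poll_proceed(&mut budget).unwrap();
    coop.made_progress();
    drop(coop);
    assert_eq!(budget.0, 1);
}
```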
2020-05-16 | sync: document maximum number of permits (#2539) | ZSL
2020-05-12 | sync: use intrusive list strategy for broadcast (#2509) | Carl Lerche
Previously, in the broadcast channel, receiver wakers were passed to the sender via an atomic stack with allocated nodes. When a message was sent, the stack was drained. This caused a problem when many receivers pushed a waiter node and were then dropped: the waiter nodes remained indefinitely in cases where no values were sent. This patch switches broadcast to use the intrusive linked-list waiter strategy used by `Notify` and `Semaphore`.
2020-05-06 | rt: simplify coop implementation (#2498) | Carl Lerche
Simplifies the coop implementation: prunes unused code and creates a `Budget` type to track the current budget.
2020-04-03 | sync: ensure Mutex, RwLock, and Semaphore futures are Send + Sync (#2375) | Eliza Weisman
Previously, the `Mutex::lock`, `RwLock::{read, write}`, and `Semaphore::acquire` futures in `tokio::sync` implemented `Send + Sync` automatically. This was by virtue of being implemented using a `poll_fn` that only closed over `Send + Sync` types.

However, this broke in PR #2325, which rewrote those types using the new `batch_semaphore`. Now, they await an `Acquire` future, which contains a `Waiter`, which internally contains an `UnsafeCell`, and thus does not implement `Sync`. Since removing previously implemented traits breaks existing code, this inadvertently caused a breaking change. There were tests ensuring that the `Mutex`, `RwLock`, and `Semaphore` types themselves were `Send + Sync`, but no tests that the _futures they return_ implemented those traits.

I've fixed this by adding an explicit impl of `Sync` for the `batch_semaphore::Acquire` future. Since the `Waiter` type held by this struct is only accessed when borrowed mutably, it is safe for it to implement `Sync`. Additionally, I've added bounds checks for the affected `tokio::sync` types to ensure that the returned futures continue to implement `Send + Sync` in the future.
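A hedged sketch of both halves of the fix, using illustrative types rather than tokio's real `Acquire` and `Waiter`:

```rust
use std::cell::UnsafeCell;
use std::marker::PhantomData;

struct Waiter {
    _state: UnsafeCell<usize>, // UnsafeCell makes the type !Sync by default
}

struct Acquire<'a> {
    _waiter: Waiter,
    _sem: PhantomData<&'a ()>,
}

// SAFETY (the patch's argument): the waiter's UnsafeCell is only touched
// while the future is borrowed mutably during poll, so sharing `&Acquire`
// across threads cannot race on it.
unsafe impl Sync for Acquire<'_> {}

// Compile-time bounds check: if a later change silently removes an
// auto-trait from the future, this stops compiling.
fn assert_send_sync<T: Send + Sync>() {}

fn main() {
    assert_send_sync::<Acquire<'static>>();
}
```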
2020-03-27 | sync: fix possible dangling pointer in semaphore (#2340) | Eliza Weisman
## Motivation

When cancelling futures which are waiting to acquire semaphore permits, there is a possible dangling pointer if notified futures are dropped after the notified wakers have been split into a separate list. Because these futures' wait queue nodes are no longer in the main list guarded by the lock, their `Drop` impls will complete immediately, and they may be dropped while still in the list of tasks to notify.

## Solution

This branch fixes this by popping from the wait list inside the lock. The wakers of popped nodes are temporarily stored in a stack array, so that they can be notified after the lock is released. Since the size of the stack array is fixed, we may in some cases have to loop multiple times, acquiring and releasing the lock, until all permits have been released. This may also have the possible side advantage of preventing a thread releasing a very large number of permits from starving other threads that need to enqueue waiters.

I've also added a loom test that can reliably reproduce a segfault on master, but passes on this branch (after a lot of iterations).

Signed-off-by: Eliza Weisman <eliza@buoyant.io>
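A compressed sketch of that release path; `notify_all` is a hypothetical helper and a plain `Vec` stands in for the intrusive wait list:

```rust
use std::sync::Mutex;
use std::task::Waker;

const NUM_WAKERS: usize = 32; // fixed-size stack buffer, as in the patch

fn notify_all(queue: &Mutex<Vec<Waker>>) {
    loop {
        let mut wakers: [Option<Waker>; NUM_WAKERS] = std::array::from_fn(|_| None);
        let mut count = 0;

        {
            let mut list = queue.lock().unwrap();
            // Pop *inside* the lock: a concurrently cancelled future can
            // never observe its node in the "to notify" list after its
            // Drop impl has already run.
            while count < NUM_WAKERS {
                match list.pop() {
                    Some(waker) => {
                        wakers[count] = Some(waker);
                        count += 1;
                    }
                    None => break,
                }
            }
        } // lock released before waking anyone

        for slot in wakers.iter_mut().take(count) {
            slot.take().unwrap().wake();
        }

        if count < NUM_WAKERS {
            return; // queue drained; otherwise loop and re-acquire the lock
        }
    }
}

fn main() {
    let queue: Mutex<Vec<Waker>> = Mutex::new(Vec::new());
    notify_all(&queue); // empty queue: returns immediately
}
```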
2020-03-26 | rt: track loom changes + tweak queue (#2315) | Carl Lerche
Loom is having a big refresh to improve performance and tighten up the concurrency model. This diff tracks those changes. Included in the changes is the removal of `CausalCell` deferred checks, since they were technically undefined behavior in the C++11 memory model. To address this, the work-stealing queue is updated to avoid needing this behavior, by limiting the queue to one concurrent stealer.
2020-03-23 | sync: new internal semaphore based on intrusive lists (#2325) | Eliza Weisman
## Motivation

Many of Tokio's synchronization primitives (`RwLock`, `Mutex`, `Semaphore`, and the bounded MPSC channel) are based on the internal semaphore implementation, called `semaphore_ll`. This semaphore type provides a lower-level internal API for the semaphore implementation than the public `Semaphore` type, and supports "batch" operations, where waiters may acquire more than one permit at a time, and batches of permits may be released back to the semaphore.

Currently, `semaphore_ll` uses an atomic singly-linked list for the waiter queue. The linked list implementation is specific to the semaphore, and requires a heap allocation for every waiter in the queue. These allocations are owned by the semaphore, rather than by the task awaiting permits from the semaphore. Critically, they are only _deallocated_ when permits are released back to the semaphore, at which point it dequeues as many waiters from the front of the queue as can be satisfied with the released permits. If a task attempts to acquire permits from the semaphore and is cancelled (such as by timing out), its waiter nodes remain in the list until they are dequeued while releasing permits. In cases where large numbers of tasks are cancelled while waiting for permits, this results in extremely high memory use for the semaphore (see #2237).

## Solution

@Matthias247 has proposed that Tokio adopt the approach used in his `futures-intrusive` crate: using an _intrusive_ linked list to store the wakers of tasks waiting on a synchronization primitive. In an intrusive list, each list node is stored as part of the entry that node represents, rather than in a heap allocation that owns the entry. Because futures must be pinned in order to be polled, the necessary invariant of such a list --- that entries may not move while in the list --- may be upheld by making the waiter node `!Unpin`. In this approach, the waiter node can be stored inline in the future, rather than requiring a separate heap allocation, and cancelled futures may remove their nodes from the list.

This branch adds a new semaphore implementation that uses the intrusive list added to Tokio in #2210. The implementation is essentially a hybrid of the old `semaphore_ll` and the semaphore used in `futures-intrusive`: while a `Mutex` around the wait list is necessary, since the intrusive list is not thread-safe, the permit state is stored outside of the mutex and updated atomically. The mutex is acquired only when accessing the wait list: if a task can acquire sufficient permits without waiting, it does not need to acquire the lock.

When releasing permits, we iterate over the wait list from the end of the queue until we run out of permits to release, and split off all the nodes that received enough permits to wake up into a separate list. Then, we can drain the new list and notify those wakers *after* releasing the lock. Because the split operation only modifies the pointers on the head node of the split-off list and the new tail node of the old list, it is O(1) and does not require an allocation to return a variable-length number of waiters to notify.

Because of the intrusive list invariants, the API provided by the new `batch_semaphore` is somewhat different from that of `semaphore_ll`. In particular, the `Permit` type has been removed. This type was primarily intended to allow the reuse of a wait list node allocated on the heap. Since the intrusive list means we can avoid heap-allocating waiters, this is no longer necessary.
Instead, acquiring permits is done by polling an `Acquire` future returned by the `Semaphore` type. The use of a future here ensures that the waiter node is always pinned while waiting to acquire permits, and that a reference to the semaphore is available to remove the waiter if the future is cancelled.

Unfortunately, the current implementation of the bounded MPSC requires a `poll_acquire` operation, and has methods that call it while outside of a pinned context. Therefore, I've left the old `semaphore_ll` implementation in place to be used by the bounded MPSC, and updated the `Mutex`, `RwLock`, and `Semaphore` APIs to use the new implementation. Hopefully, a subsequent change can update the bounded MPSC to use the new semaphore as well.

Fixes #2237
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
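A compressed sketch of the hybrid layout and its lock-free fast path (illustrative types; a `Vec` again stands in for the intrusive wait list):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Mutex;
use std::task::Waker;

// Permits live in an atomic *outside* the mutex; the mutex only guards
// the wait list, which is touched solely when waiting is required.
struct Semaphore {
    permits: AtomicUsize,
    waiters: Mutex<Vec<Waker>>,
}

impl Semaphore {
    // Fast path: if enough permits are available, acquire them with a
    // CAS loop and never take the wait-list lock.
    fn try_acquire(&self, n: usize) -> bool {
        let mut curr = self.permits.load(Ordering::Acquire);
        loop {
            if curr < n {
                // Not enough permits: an `Acquire` future would now take
                // the lock and push its (pinned, inline) waiter node.
                return false;
            }
            match self.permits.compare_exchange(
                curr,
                curr - n,
                Ordering::AcqRel,
                Ordering::Acquire,
            ) {
                Ok(_) => return true,
                Err(actual) => curr = actual,
            }
        }
    }
}

fn main() {
    let sem = Semaphore {
        permits: AtomicUsize::new(4),
        waiters: Mutex::new(Vec::new()),
    };
    assert!(sem.try_acquire(3));
    assert!(!sem.try_acquire(2)); // only 1 permit left
}
```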