author    Stephen Carman <hntd187@users.noreply.github.com>  2019-12-23 10:18:30 -0800
committer Carl Lerche <me@carllerche.com>  2019-12-23 10:18:30 -0800
commit    6ff4e349e28a4d89098f2587e70c86281c2ae182 (patch)
tree      f674d0790dbb618e04aec161e2f8a828e7533b55 /tokio/src/sync/mutex.rs
parent    adc5186ebd1290c2f144e153a87e147d257f8b0f (diff)

doc: add additional Mutex example (#2019)
Diffstat (limited to 'tokio/src/sync/mutex.rs')
 -rw-r--r--  tokio/src/sync/mutex.rs | 45 +
 1 file changed, 45 insertions(+), 0 deletions(-)
diff --git a/tokio/src/sync/mutex.rs b/tokio/src/sync/mutex.rs
index 3c7f029b..fe891159 100644
--- a/tokio/src/sync/mutex.rs
+++ b/tokio/src/sync/mutex.rs
@@ -26,6 +26,51 @@
//! }
//! ```
//!
+//! Another example, sharing the mutex across several spawned tasks:
+//! ```rust,no_run
+//! #![warn(rust_2018_idioms)]
+//!
+//! use tokio::sync::Mutex;
+//! use std::sync::Arc;
+//!
+//! #[tokio::main]
+//! async fn main() {
+//!     let count = Arc::new(Mutex::new(0));
+//!
+//!     for _ in 0..5 {
+//!         let my_count = Arc::clone(&count);
+//!         tokio::spawn(async move {
+//!             for _ in 0..10 {
+//!                 let mut lock = my_count.lock().await;
+//!                 *lock += 1;
+//!                 println!("{}", lock);
+//!             }
+//!         });
+//!     }
+//!
+//!     loop {
+//!         if *count.lock().await >= 50 {
+//!             break;
+//!         }
+//!     }
+//!     println!("Count hit 50.");
+//! }
+//! ```
+//! There are a few things to note in this example:
+//! 1. The mutex is wrapped in an [`std::sync::Arc`] so that it can be shared across tasks.
+//! 2. Each spawned task acquires the lock and releases it on every iteration.
+//! 3. The data the Mutex protects is mutated by dereferencing the obtained lock guard,
+//!    as in `*lock += 1` and `*count.lock().await`.
+//!
+//! Tokio's Mutex works in a simple FIFO (first in, first out) style: as requests for the lock
+//! are made, Tokio queues them up and grants the lock when it is that requester's turn. In that
+//! way the Mutex is "fair" and predictable in how it grants access to the inner data, which is
+//! why the output of this program is an in-order count to 50. The lock is released and
+//! reacquired on every iteration, so each task effectively goes to the back of the line after
+//! it increments the value once. Also, since only a single lock is held at any given time,
+//! there is no possibility of a race condition when mutating the inner value.
+//!
//! Note that in contrast to `std::sync::Mutex`, this implementation does not
//! poison the mutex when a thread holding the `MutexGuard` panics. In such a
//! case, the mutex will be unlocked. If the panic is caught, this might leave