
Commit 5f8e2cb

add mod level docs for sync
Signed-off-by: Yoshua Wuyts <[email protected]>
1 parent 4cab868 commit 5f8e2cb

1 file changed: +143 −0 lines changed

src/sync/mod.rs

Lines changed: 143 additions & 0 deletions
@@ -4,6 +4,149 @@
//!
//! [`std::sync`]: https://doc.rust-lang.org/std/sync/index.html
//!
//! ## The need for synchronization
//!
//! Conceptually, a Rust program is a series of operations which will
//! be executed on a computer. The timeline of events happening in the
//! program is consistent with the order of the operations in the code.
//!
//! Consider the following code, operating on some global static variables:
//!
//! ```rust
//! static mut A: u32 = 0;
//! static mut B: u32 = 0;
//! static mut C: u32 = 0;
//!
//! fn main() {
//!     unsafe {
//!         A = 3;
//!         B = 4;
//!         A = A + B;
//!         C = B;
//!         println!("{} {} {}", A, B, C);
//!         C = A;
//!     }
//! }
//! ```
//!
//! It appears as if some variables stored in memory are changed, an addition
//! is performed, the result is stored in `A`, and the variable `C` is
//! modified twice.
//!
//! When only a single thread is involved, the results are as expected:
//! the line `7 4 4` gets printed.
//!
//! As for what happens behind the scenes, when optimizations are enabled the
//! final generated machine code might look very different from the code:
//!
//! - The first store to `C` might be moved before the store to `A` or `B`,
//!   _as if_ we had written `C = 4; A = 3; B = 4`.
//!
//! - The assignment of `A + B` to `A` might be removed, since the sum can be
//!   stored in a temporary location until it gets printed, with the global
//!   variable never getting updated.
//!
//! - The final result could be determined just by looking at the code
//!   at compile time, so [constant folding] might turn the whole
//!   block into a simple `println!("7 4 4")`.
//!
//! The compiler is allowed to perform any combination of these
//! optimizations, as long as the final optimized code, when executed,
//! produces the same results as the one without optimizations.
//!
//! Due to the [concurrency] involved in modern computers, assumptions
//! about the program's execution order are often wrong. Access to
//! global variables can lead to nondeterministic results, **even if**
//! compiler optimizations are disabled, and it is **still possible**
//! to introduce synchronization bugs.
//!
//! Note that thanks to Rust's safety guarantees, accessing global (static)
//! variables requires `unsafe` code, assuming we don't use any of the
//! synchronization primitives in this module.
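//!
//! For comparison, the same sequence of operations can be sketched with the
//! atomic integer types from `std::sync::atomic`, which can be read and
//! written without any `unsafe` blocks (the `SeqCst` orderings below are the
//! most conservative choice, not something the original example requires):
//!
//! ```rust
//! use std::sync::atomic::{AtomicU32, Ordering};
//!
//! static A: AtomicU32 = AtomicU32::new(0);
//! static B: AtomicU32 = AtomicU32::new(0);
//! static C: AtomicU32 = AtomicU32::new(0);
//!
//! fn main() {
//!     A.store(3, Ordering::SeqCst);
//!     B.store(4, Ordering::SeqCst);
//!     // Perform the addition on loaded values, then store the sum back into `A`.
//!     let sum = A.load(Ordering::SeqCst) + B.load(Ordering::SeqCst);
//!     A.store(sum, Ordering::SeqCst);
//!     C.store(B.load(Ordering::SeqCst), Ordering::SeqCst);
//!     println!(
//!         "{} {} {}",
//!         A.load(Ordering::SeqCst),
//!         B.load(Ordering::SeqCst),
//!         C.load(Ordering::SeqCst)
//!     );
//!     C.store(A.load(Ordering::SeqCst), Ordering::SeqCst);
//! }
//! ```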
//!
//! [constant folding]: https://en.wikipedia.org/wiki/Constant_folding
//! [concurrency]: https://en.wikipedia.org/wiki/Concurrency_(computer_science)
//!
//! ## Out-of-order execution
//!
//! Instructions can execute in a different order from the one we define, due to
//! various reasons:
//!
//! - The **compiler** reordering instructions: If the compiler can issue an
//!   instruction at an earlier point, it will try to do so. For example, it
//!   might hoist memory loads to the top of a code block, so that the CPU can
//!   start [prefetching] the values from memory.
//!
//!   In single-threaded scenarios, this can cause issues when writing
//!   signal handlers or certain kinds of low-level code.
//!   Use [compiler fences] to prevent this reordering.
//!
//! - A **single processor** executing instructions [out-of-order]:
//!   Modern CPUs are capable of [superscalar] execution,
//!   i.e., multiple instructions might be executing at the same time,
//!   even though the machine code describes a sequential process.
//!
//!   This kind of reordering is handled transparently by the CPU.
//!
//! - A **multiprocessor** system executing multiple hardware threads
//!   at the same time: In multi-threaded scenarios, you can use two
//!   kinds of primitives to deal with synchronization, as sketched in the
//!   example after this list:
//!   - [memory fences] to ensure memory accesses are made visible to
//!     other CPUs in the right order.
//!   - [atomic operations] to ensure simultaneous access to the same
//!     memory location doesn't lead to undefined behavior.
//!
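//! The following sketch uses the release/acquire orderings provided by
//! [atomic operations] (the same effect can also be obtained with stand-alone
//! [memory fences]) to publish a value from one thread to another; the
//! `DATA`/`READY` names and the spin loop are purely illustrative:
//!
//! ```rust
//! use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
//! use std::thread;
//!
//! static DATA: AtomicU32 = AtomicU32::new(0);
//! static READY: AtomicBool = AtomicBool::new(false);
//!
//! fn main() {
//!     let producer = thread::spawn(|| {
//!         DATA.store(42, Ordering::Relaxed);
//!         // The `Release` store guarantees that the write to `DATA` above is
//!         // visible to any thread that observes `READY == true` via an
//!         // `Acquire` load.
//!         READY.store(true, Ordering::Release);
//!     });
//!
//!     // Spin until the flag is set; afterwards the data is guaranteed visible.
//!     while !READY.load(Ordering::Acquire) {
//!         thread::yield_now();
//!     }
//!     assert_eq!(DATA.load(Ordering::Relaxed), 42);
//!
//!     producer.join().unwrap();
//! }
//! ```
//!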
//! [prefetching]: https://en.wikipedia.org/wiki/Cache_prefetching
//! [compiler fences]: https://doc.rust-lang.org/std/sync/atomic/fn.compiler_fence.html
//! [out-of-order]: https://en.wikipedia.org/wiki/Out-of-order_execution
//! [superscalar]: https://en.wikipedia.org/wiki/Superscalar_processor
//! [memory fences]: https://doc.rust-lang.org/std/sync/atomic/fn.fence.html
//! [atomic operations]: https://doc.rust-lang.org/std/sync/atomic/index.html
//!
//! ## Higher-level synchronization objects
//!
//! Most of the low-level synchronization primitives are quite error-prone and
//! inconvenient to use, which is why async-std also exposes some
//! higher-level synchronization objects.
//!
//! These abstractions can be built out of lower-level primitives.
//! For efficiency, the sync objects in async-std are usually
//! implemented with the help of the scheduler, which is
//! able to reschedule the tasks while they are blocked on acquiring
//! a lock.
//!
//! The following is an overview of the available synchronization
//! objects:
//!
//! - [`Arc`]: Atomically Reference-Counted pointer, which can be used
//!   in multithreaded environments to prolong the lifetime of some
//!   data until all the threads have finished using it.
//!
//! - [`Barrier`]: Ensures multiple threads will wait for each other
//!   to reach a point in the program, before continuing execution all
//!   together.
//!
//! - [`channel`]: Multi-producer, multi-consumer queues, used for
//!   message-based communication. Can provide a lightweight
//!   inter-task synchronization mechanism, at the cost of some
//!   extra memory.
//!
//! - [`Mutex`]: Mutual Exclusion mechanism, which ensures that at
//!   most one task at a time is able to access some data.
//!
//! - [`RwLock`]: Provides a mutual exclusion mechanism which allows
//!   multiple readers at the same time, while allowing only one
//!   writer at a time. In some cases, this can be more efficient than
//!   a mutex. A usage sketch follows this list.
//!
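//! For instance, a value shared between tasks might be wrapped in an [`Arc`]
//! and an [`RwLock`]; a minimal sketch, assuming the `read` and `write`
//! methods are `async`, mirroring their `std` counterparts:
//!
//! ```rust
//! use async_std::sync::{Arc, RwLock};
//! use async_std::task;
//!
//! fn main() {
//!     task::block_on(async {
//!         let lock = Arc::new(RwLock::new(0u32));
//!
//!         // Any number of readers can hold the lock at the same time.
//!         assert_eq!(*lock.read().await, 0);
//!
//!         // A writer gets exclusive access to the value.
//!         let cloned = lock.clone();
//!         task::spawn(async move {
//!             *cloned.write().await += 1;
//!         })
//!         .await;
//!
//!         assert_eq!(*lock.read().await, 1);
//!     });
//! }
//! ```
//!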
//! [`Arc`]: crate::sync::Arc
//! [`Barrier`]: crate::sync::Barrier
//! [`Condvar`]: crate::sync::Condvar
//! [`channel`]: fn.channel.html
//! [`Mutex`]: crate::sync::Mutex
//! [`Once`]: crate::sync::Once
//! [`RwLock`]: crate::sync::RwLock
//!
//! # Examples
//!
//! Spawn a task that updates an integer protected by a mutex:
