
Commit e662bc9

---
yaml --- r: 224702 b: refs/heads/tmp c: 6a8aec3 h: refs/heads/master v: v3
1 parent c53130e commit e662bc9


43 files changed: +273 −267 lines

[refs]

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@ refs/tags/0.11.0: e1247cb1d0d681be034adb4b558b5a0c0d5720f9
 refs/tags/0.12.0: f0c419429ef30723ceaf6b42f9b5a2aeb5d2e2d1
 refs/heads/beta: 83dee3dfbb452a7558193f3ce171b3c60bf4a499
 refs/tags/1.0.0-alpha: e42bd6d93a1d3433c486200587f8f9e12590a4d7
-refs/heads/tmp: a8b7146f70f9c9409d205adc324da559dfd4ddde
+refs/heads/tmp: 6a8aec35612b1ca7968a962a47e7fabea0eeafe2
 refs/tags/1.0.0-alpha.2: 4c705f6bc559886632d3871b04f58aab093bfa2f
 refs/tags/homu-tmp: e58601ab085591c71a27ae82137fc313222c2270
 refs/tags/1.0.0-beta: 8cbb92b53468ee2b0c2d3eeb8567005953d40828

branches/tmp/src/doc/tarpl/README.md

Lines changed: 24 additions & 19 deletions
@@ -2,33 +2,38 @@
 
 # NOTE: This is a draft document, and may contain serious errors
 
-So you've played around with Rust a bit. You've written a few simple programs
-and you think you grok the basics. Maybe you've even read through *[The Rust
-Programming Language][trpl]* (TRPL). Now you want to get neck-deep in all the
+So you've played around with Rust a bit. You've written a few simple programs and
+you think you grok the basics. Maybe you've even read through
+*[The Rust Programming Language][trpl]*. Now you want to get neck-deep in all the
 nitty-gritty details of the language. You want to know those weird corner-cases.
-You want to know what the heck `unsafe` really means, and how to properly use
-it. This is the book for you.
+You want to know what the heck `unsafe` really means, and how to properly use it.
+This is the book for you.
 
-To be clear, this book goes into serious detail. We're going to dig into
+To be clear, this book goes into *serious* detail. We're going to dig into
 exception-safety and pointer aliasing. We're going to talk about memory
 models. We're even going to do some type-theory. This is stuff that you
-absolutely don't need to know to write fast and safe Rust programs.
+absolutely *don't* need to know to write fast and safe Rust programs.
 You could probably close this book *right now* and still have a productive
 and happy career in Rust.
 
-However if you intend to write unsafe code -- or just really want to dig into
-the guts of the language -- this book contains invaluable information.
+However if you intend to write unsafe code -- or just *really* want to dig into
+the guts of the language -- this book contains *invaluable* information.
 
-Unlike TRPL we will be assuming considerable prior knowledge. In particular, you
-should be comfortable with basic systems programming and basic Rust. If you
-don't feel comfortable with these topics, you should consider [reading
-TRPL][trpl], though we will not be assuming that you have. You can skip
-straight to this book if you want; just know that we won't be explaining
-everything from the ground up.
+Unlike *The Rust Programming Language* we *will* be assuming considerable prior
+knowledge. In particular, you should be comfortable with:
 
-Due to the nature of advanced Rust programming, we will be spending a lot of
-time talking about *safety* and *guarantees*. In particular, a significant
-portion of the book will be dedicated to correctly writing and understanding
-Unsafe Rust.
+* Basic Systems Programming:
+  * Pointers
+  * [The stack and heap][]
+  * The memory hierarchy (caches)
+  * Threads
+
+* [Basic Rust][]
+
+Due to the nature of advanced Rust programming, we will be spending a lot of time
+talking about *safety* and *guarantees*. In particular, a significant portion of
+the book will be dedicated to correctly writing and understanding Unsafe Rust.
 
 [trpl]: ../book/
+[The stack and heap]: ../book/the-stack-and-the-heap.html
+[Basic Rust]: ../book/syntax-and-semantics.html

branches/tmp/src/doc/tarpl/SUMMARY.md

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@
 * [Ownership](ownership.md)
 * [References](references.md)
 * [Lifetimes](lifetimes.md)
-* [Limits of Lifetimes](lifetime-mismatch.md)
+* [Limits of lifetimes](lifetime-mismatch.md)
 * [Lifetime Elision](lifetime-elision.md)
 * [Unbounded Lifetimes](unbounded-lifetimes.md)
 * [Higher-Rank Trait Bounds](hrtb.md)

branches/tmp/src/doc/tarpl/atomics.md

Lines changed: 28 additions & 33 deletions
@@ -17,7 +17,7 @@ face.
 The C11 memory model is fundamentally about trying to bridge the gap between the
 semantics we want, the optimizations compilers want, and the inconsistent chaos
 our hardware wants. *We* would like to just write programs and have them do
-exactly what we said but, you know, fast. Wouldn't that be great?
+exactly what we said but, you know, *fast*. Wouldn't that be great?
 
 
 
@@ -35,20 +35,20 @@ y = 3;
 x = 2;
 ```
 
-The compiler may conclude that it would be best if your program did
+The compiler may conclude that it would *really* be best if your program did
 
 ```rust,ignore
 x = 2;
 y = 3;
 ```
 
-This has inverted the order of events and completely eliminated one event.
+This has inverted the order of events *and* completely eliminated one event.
 From a single-threaded perspective this is completely unobservable: after all
 the statements have executed we are in exactly the same state. But if our
-program is multi-threaded, we may have been relying on `x` to actually be
-assigned to 1 before `y` was assigned. We would like the compiler to be
+program is multi-threaded, we may have been relying on `x` to *actually* be
+assigned to 1 before `y` was assigned. We would *really* like the compiler to be
 able to make these kinds of optimizations, because they can seriously improve
-performance. On the other hand, we'd also like to be able to depend on our
+performance. On the other hand, we'd really like to be able to depend on our
 program *doing the thing we said*.
 
 
@@ -57,15 +57,15 @@ program *doing the thing we said*.
 # Hardware Reordering
 
 On the other hand, even if the compiler totally understood what we wanted and
-respected our wishes, our hardware might instead get us in trouble. Trouble
+respected our wishes, our *hardware* might instead get us in trouble. Trouble
 comes from CPUs in the form of memory hierarchies. There is indeed a global
 shared memory space somewhere in your hardware, but from the perspective of each
 CPU core it is *so very far away* and *so very slow*. Each CPU would rather work
-with its local cache of the data and only go through all the anguish of
-talking to shared memory only when it doesn't actually have that memory in
+with its local cache of the data and only go through all the *anguish* of
+talking to shared memory *only* when it doesn't actually have that memory in
 cache.
 
-After all, that's the whole point of the cache, right? If every read from the
+After all, that's the whole *point* of the cache, right? If every read from the
 cache had to run back to shared memory to double check that it hadn't changed,
 what would the point be? The end result is that the hardware doesn't guarantee
 that events that occur in the same order on *one* thread, occur in the same
@@ -99,13 +99,13 @@ provides weak ordering guarantees. This has two consequences for concurrent
 programming:
 
 * Asking for stronger guarantees on strongly-ordered hardware may be cheap or
-  even free because they already provide strong guarantees unconditionally.
+  even *free* because they already provide strong guarantees unconditionally.
   Weaker guarantees may only yield performance wins on weakly-ordered hardware.
 
-* Asking for guarantees that are too weak on strongly-ordered hardware is
+* Asking for guarantees that are *too* weak on strongly-ordered hardware is
   more likely to *happen* to work, even though your program is strictly
-  incorrect. If possible, concurrent algorithms should be tested on
-  weakly-ordered hardware.
+  incorrect. If possible, concurrent algorithms should be tested on weakly-
+  ordered hardware.
 
 
 
@@ -115,10 +115,10 @@ programming:
 
 The C11 memory model attempts to bridge the gap by allowing us to talk about the
 *causality* of our program. Generally, this is by establishing a *happens
-before* relationship between parts of the program and the threads that are
+before* relationships between parts of the program and the threads that are
 running them. This gives the hardware and compiler room to optimize the program
 more aggressively where a strict happens-before relationship isn't established,
-but forces them to be more careful where one is established. The way we
+but forces them to be more careful where one *is* established. The way we
 communicate these relationships are through *data accesses* and *atomic
 accesses*.
 
@@ -130,10 +130,8 @@ propagate the changes made in data accesses to other threads as lazily and
 inconsistently as it wants. Mostly critically, data accesses are how data races
 happen. Data accesses are very friendly to the hardware and compiler, but as
 we've seen they offer *awful* semantics to try to write synchronized code with.
-Actually, that's too weak.
-
-**It is literally impossible to write correct synchronized code using only data
-accesses.**
+Actually, that's too weak. *It is literally impossible to write correct
+synchronized code using only data accesses*.
 
 Atomic accesses are how we tell the hardware and compiler that our program is
 multi-threaded. Each atomic access can be marked with an *ordering* that
@@ -143,10 +141,7 @@ they *can't* do. For the compiler, this largely revolves around re-ordering of
 instructions. For the hardware, this largely revolves around how writes are
 propagated to other threads. The set of orderings Rust exposes are:
 
-* Sequentially Consistent (SeqCst)
-* Release
-* Acquire
-* Relaxed
+* Sequentially Consistent (SeqCst) Release Acquire Relaxed
 
 (Note: We explicitly do not expose the C11 *consume* ordering)
 
@@ -159,13 +154,13 @@ synchronize"
 
 Sequentially Consistent is the most powerful of all, implying the restrictions
 of all other orderings. Intuitively, a sequentially consistent operation
-cannot be reordered: all accesses on one thread that happen before and after a
-SeqCst access stay before and after it. A data-race-free program that uses
+*cannot* be reordered: all accesses on one thread that happen before and after a
+SeqCst access *stay* before and after it. A data-race-free program that uses
 only sequentially consistent atomics and data accesses has the very nice
 property that there is a single global execution of the program's instructions
 that all threads agree on. This execution is also particularly nice to reason
 about: it's just an interleaving of each thread's individual executions. This
-does not hold if you start using the weaker atomic orderings.
+*does not* hold if you start using the weaker atomic orderings.
 
 The relative developer-friendliness of sequential consistency doesn't come for
 free. Even on strongly-ordered platforms sequential consistency involves
@@ -175,8 +170,8 @@ In practice, sequential consistency is rarely necessary for program correctness.
 However sequential consistency is definitely the right choice if you're not
 confident about the other memory orders. Having your program run a bit slower
 than it needs to is certainly better than it running incorrectly! It's also
-mechanically trivial to downgrade atomic operations to have a weaker
-consistency later on. Just change `SeqCst` to `Relaxed` and you're done! Of
+*mechanically* trivial to downgrade atomic operations to have a weaker
+consistency later on. Just change `SeqCst` to e.g. `Relaxed` and you're done! Of
 course, proving that this transformation is *correct* is a whole other matter.
 
 
@@ -188,15 +183,15 @@ Acquire and Release are largely intended to be paired. Their names hint at their
 use case: they're perfectly suited for acquiring and releasing locks, and
 ensuring that critical sections don't overlap.
 
-Intuitively, an acquire access ensures that every access after it stays after
+Intuitively, an acquire access ensures that every access after it *stays* after
 it. However operations that occur before an acquire are free to be reordered to
 occur after it. Similarly, a release access ensures that every access before it
-stays before it. However operations that occur after a release are free to be
+*stays* before it. However operations that occur after a release are free to be
 reordered to occur before it.
 
 When thread A releases a location in memory and then thread B subsequently
 acquires *the same* location in memory, causality is established. Every write
-that happened before A's release will be observed by B after its release.
+that happened *before* A's release will be observed by B *after* its release.
 However no causality is established with any other threads. Similarly, no
 causality is established if A and B access *different* locations in memory.
 
@@ -235,7 +230,7 @@ weakly-ordered platforms.
 # Relaxed
 
 Relaxed accesses are the absolute weakest. They can be freely re-ordered and
-provide no happens-before relationship. Still, relaxed operations are still
+provide no happens-before relationship. Still, relaxed operations *are* still
 atomic. That is, they don't count as data accesses and any read-modify-write
 operations done to them occur atomically. Relaxed operations are appropriate for
 things that you definitely want to happen, but don't particularly otherwise care

branches/tmp/src/doc/tarpl/borrow-splitting.md

Lines changed: 5 additions & 5 deletions
@@ -2,7 +2,7 @@
 
 The mutual exclusion property of mutable references can be very limiting when
 working with a composite structure. The borrow checker understands some basic
-stuff, but will fall over pretty easily. It does understand structs
+stuff, but will fall over pretty easily. It *does* understand structs
 sufficiently to know that it's possible to borrow disjoint fields of a struct
 simultaneously. So this works today:
 
@@ -50,7 +50,7 @@ to the same value.
 
 In order to "teach" borrowck that what we're doing is ok, we need to drop down
 to unsafe code. For instance, mutable slices expose a `split_at_mut` function
-that consumes the slice and returns two mutable slices. One for everything to
+that consumes the slice and returns *two* mutable slices. One for everything to
 the left of the index, and one for everything to the right. Intuitively we know
 this is safe because the slices don't overlap, and therefore alias. However
 the implementation requires some unsafety:
@@ -93,10 +93,10 @@ completely incompatible with this API, as it would produce multiple mutable
 references to the same object!
 
 However it actually *does* work, exactly because iterators are one-shot objects.
-Everything an IterMut yields will be yielded at most once, so we don't
-actually ever yield multiple mutable references to the same piece of data.
+Everything an IterMut yields will be yielded *at most* once, so we don't
+*actually* ever yield multiple mutable references to the same piece of data.
 
-Perhaps surprisingly, mutable iterators don't require unsafe code to be
+Perhaps surprisingly, mutable iterators *don't* require unsafe code to be
 implemented for many types!
 
 For instance here's a singly linked list:

branches/tmp/src/doc/tarpl/casts.md

Lines changed: 2 additions & 2 deletions
@@ -1,13 +1,13 @@
 % Casts
 
 Casts are a superset of coercions: every coercion can be explicitly
-invoked via a cast. However some conversions require a cast.
+invoked via a cast. However some conversions *require* a cast.
 While coercions are pervasive and largely harmless, these "true casts"
 are rare and potentially dangerous. As such, casts must be explicitly invoked
 using the `as` keyword: `expr as Type`.
 
 True casts generally revolve around raw pointers and the primitive numeric
-types. Even though they're dangerous, these casts are infallible at runtime.
+types. Even though they're dangerous, these casts are *infallible* at runtime.
 If a cast triggers some subtle corner case no indication will be given that
 this occurred. The cast will simply succeed. That said, casts must be valid
 at the type level, or else they will be prevented statically. For instance,

branches/tmp/src/doc/tarpl/checked-uninit.md

Lines changed: 1 addition & 1 deletion
@@ -80,7 +80,7 @@ loop {
     // because it relies on actual values.
     if true {
         // But it does understand that it will only be taken once because
-        // we unconditionally break out of it. Therefore `x` doesn't
+        // we *do* unconditionally break out of it. Therefore `x` doesn't
         // need to be marked as mutable.
         x = 0;
         break;

branches/tmp/src/doc/tarpl/concurrency.md

Lines changed: 3 additions & 3 deletions
@@ -2,12 +2,12 @@
 
 Rust as a language doesn't *really* have an opinion on how to do concurrency or
 parallelism. The standard library exposes OS threads and blocking sys-calls
-because everyone has those, and they're uniform enough that you can provide
+because *everyone* has those, and they're uniform enough that you can provide
 an abstraction over them in a relatively uncontroversial way. Message passing,
 green threads, and async APIs are all diverse enough that any abstraction over
 them tends to involve trade-offs that we weren't willing to commit to for 1.0.
 
 However the way Rust models concurrency makes it relatively easy design your own
-concurrency paradigm as a library and have everyone else's code Just Work
+concurrency paradigm as a library and have *everyone else's* code Just Work
 with yours. Just require the right lifetimes and Send and Sync where appropriate
-and you're off to the races. Or rather, off to the... not... having... races.
+and you're off to the races. Or rather, off to the... not... having... races.

branches/tmp/src/doc/tarpl/constructors.md

Lines changed: 3 additions & 3 deletions
@@ -37,14 +37,14 @@ blindly memcopied to somewhere else in memory. This means pure on-the-stack-but-
 still-movable intrusive linked lists are simply not happening in Rust (safely).
 
 Assignment and copy constructors similarly don't exist because move semantics
-are the only semantics in Rust. At most `x = y` just moves the bits of y into
-the x variable. Rust does provide two facilities for providing C++'s copy-
+are the *only* semantics in Rust. At most `x = y` just moves the bits of y into
+the x variable. Rust *does* provide two facilities for providing C++'s copy-
 oriented semantics: `Copy` and `Clone`. Clone is our moral equivalent of a copy
 constructor, but it's never implicitly invoked. You have to explicitly call
 `clone` on an element you want to be cloned. Copy is a special case of Clone
 where the implementation is just "copy the bits". Copy types *are* implicitly
 cloned whenever they're moved, but because of the definition of Copy this just
-means not treating the old copy as uninitialized -- a no-op.
+means *not* treating the old copy as uninitialized -- a no-op.
 
 While Rust provides a `Default` trait for specifying the moral equivalent of a
 default constructor, it's incredibly rare for this trait to be used. This is

branches/tmp/src/doc/tarpl/conversions.md

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ a different type. Because Rust encourages encoding important properties in the
 type system, these problems are incredibly pervasive. As such, Rust
 consequently gives you several ways to solve them.
 
-First we'll look at the ways that Safe Rust gives you to reinterpret values.
+First we'll look at the ways that *Safe Rust* gives you to reinterpret values.
 The most trivial way to do this is to just destructure a value into its
 constituent parts and then build a new type out of them. e.g.
 
branches/tmp/src/doc/tarpl/data.md

Lines changed: 3 additions & 3 deletions
Original file line numberDiff line numberDiff line change
@@ -1,5 +1,5 @@
11
% Data Representation in Rust
22

3-
Low-level programming cares a lot about data layout. It's a big deal. It also
4-
pervasively influences the rest of the language, so we're going to start by
5-
digging into how data is represented in Rust.
3+
Low-level programming cares a lot about data layout. It's a big deal. It also pervasively
4+
influences the rest of the language, so we're going to start by digging into how data is
5+
represented in Rust.
