
Commit 2702548

---
yaml --- r: 229165 b: refs/heads/try c: 412366f h: refs/heads/master i: 229163: 33540c5 v: v3
1 parent ca22811 commit 2702548

132 files changed (+1058, -1970 lines)


[refs]

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 ---
 refs/heads/master: aca2057ed5fb7af3f8905b2bc01f72fa001c35c8
 refs/heads/snap-stage3: 1af31d4974e33027a68126fa5a5a3c2c6491824f
-refs/heads/try: 5f841eb82476b6de4ebb4eb199e9b070a0a0493c
+refs/heads/try: 412366fe36ea5c44803c3a9bf9943684a5068d63
 refs/tags/release-0.1: 1f5c5126e96c79d22cb7862f75304136e204f105
 refs/tags/release-0.2: c870d2dffb391e14efb05aa27898f1f6333a9596
 refs/tags/release-0.3: b5f0d0f648d9a6153664837026ba1be43d3e2503

branches/try/.travis.yml

Lines changed: 2 additions & 1 deletion
@@ -20,7 +20,8 @@ sudo: false
 before_script:
   - ./configure --enable-ccache
 script:
-  - make tidy check -j4
+  - make tidy
+  - make rustc-stage1 -j4

 env:
   - CXX=/usr/bin/g++-4.7

branches/try/mk/docs.mk

Lines changed: 5 additions & 5 deletions
@@ -77,7 +77,7 @@ ERR_IDX_GEN = $(RPATH_VAR2_T_$(CFG_BUILD)_H_$(CFG_BUILD)) $(ERR_IDX_GEN_EXE)

 D := $(S)src/doc

-DOC_TARGETS := trpl nomicon style error-index
+DOC_TARGETS := trpl tarpl style error-index
 COMPILER_DOC_TARGETS :=
 DOC_L10N_TARGETS :=

@@ -287,12 +287,12 @@ doc/book/index.html: $(RUSTBOOK_EXE) $(wildcard $(S)/src/doc/trpl/*.md) | doc/
 	$(Q)rm -rf doc/book
 	$(Q)$(RUSTBOOK) build $(S)src/doc/trpl doc/book

-nomicon: doc/nomicon/index.html
+tarpl: doc/adv-book/index.html

-doc/nomicon/index.html: $(RUSTBOOK_EXE) $(wildcard $(S)/src/doc/nomicon/*.md) | doc/
+doc/adv-book/index.html: $(RUSTBOOK_EXE) $(wildcard $(S)/src/doc/tarpl/*.md) | doc/
 	@$(call E, rustbook: $@)
-	$(Q)rm -rf doc/nomicon
-	$(Q)$(RUSTBOOK) build $(S)src/doc/nomicon doc/nomicon
+	$(Q)rm -rf doc/adv-book
+	$(Q)$(RUSTBOOK) build $(S)src/doc/tarpl doc/adv-book

 style: doc/style/index.html


branches/try/mk/tests.mk

Lines changed: 2 additions & 2 deletions
@@ -162,8 +162,8 @@ $(foreach doc,$(DOCS), \
 	$(eval $(call DOCTEST,md-$(doc),$(S)src/doc/$(doc).md)))
 $(foreach file,$(wildcard $(S)src/doc/trpl/*.md), \
 	$(eval $(call DOCTEST,$(file:$(S)src/doc/trpl/%.md=trpl-%),$(file))))
-$(foreach file,$(wildcard $(S)src/doc/nomicon/*.md), \
-	$(eval $(call DOCTEST,$(file:$(S)src/doc/nomicon/%.md=nomicon-%),$(file))))
+$(foreach file,$(wildcard $(S)src/doc/tarpl/*.md), \
+	$(eval $(call DOCTEST,$(file:$(S)src/doc/tarpl/%.md=tarpl-%),$(file))))
 ######################################################################
 # Main test targets
 ######################################################################

branches/try/src/doc/nomicon/README.md

Lines changed: 0 additions & 38 deletions
This file was deleted.

branches/try/src/doc/nomicon/data.md

Lines changed: 0 additions & 5 deletions
This file was deleted.

branches/try/src/doc/reference.md

Lines changed: 2 additions & 2 deletions
@@ -1199,8 +1199,8 @@ An example of an `enum` item and its use:

 ```
 enum Animal {
-  Dog,
-  Cat,
+    Dog,
+    Cat
 }

 let mut a: Animal = Animal::Dog;

branches/try/src/doc/tarpl/README.md

Lines changed: 39 additions & 0 deletions
@@ -0,0 +1,39 @@
+% The Advanced Rust Programming Language
+
+# NOTE: This is a draft document, and may contain serious errors
+
+So you've played around with Rust a bit. You've written a few simple programs and
+you think you grok the basics. Maybe you've even read through
+*[The Rust Programming Language][trpl]*. Now you want to get neck-deep in all the
+nitty-gritty details of the language. You want to know those weird corner-cases.
+You want to know what the heck `unsafe` really means, and how to properly use it.
+This is the book for you.
+
+To be clear, this book goes into *serious* detail. We're going to dig into
+exception-safety and pointer aliasing. We're going to talk about memory
+models. We're even going to do some type-theory. This is stuff that you
+absolutely *don't* need to know to write fast and safe Rust programs.
+You could probably close this book *right now* and still have a productive
+and happy career in Rust.
+
+However if you intend to write unsafe code -- or just *really* want to dig into
+the guts of the language -- this book contains *invaluable* information.
+
+Unlike *The Rust Programming Language* we *will* be assuming considerable prior
+knowledge. In particular, you should be comfortable with:
+
+* Basic Systems Programming:
+  * Pointers
+  * [The stack and heap][]
+  * The memory hierarchy (caches)
+  * Threads
+
+* [Basic Rust][]
+
+Due to the nature of advanced Rust programming, we will be spending a lot of time
+talking about *safety* and *guarantees*. In particular, a significant portion of
+the book will be dedicated to correctly writing and understanding Unsafe Rust.
+
+[trpl]: ../book/
+[The stack and heap]: ../book/the-stack-and-the-heap.html
+[Basic Rust]: ../book/syntax-and-semantics.html

branches/try/src/doc/nomicon/SUMMARY.md renamed to branches/try/src/doc/tarpl/SUMMARY.md

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@
 * [Ownership](ownership.md)
 * [References](references.md)
 * [Lifetimes](lifetimes.md)
-* [Limits of Lifetimes](lifetime-mismatch.md)
+* [Limits of lifetimes](lifetime-mismatch.md)
 * [Lifetime Elision](lifetime-elision.md)
 * [Unbounded Lifetimes](unbounded-lifetimes.md)
 * [Higher-Rank Trait Bounds](hrtb.md)

branches/try/src/doc/nomicon/atomics.md renamed to branches/try/src/doc/tarpl/atomics.md

Lines changed: 28 additions & 33 deletions
@@ -17,7 +17,7 @@ face.
 The C11 memory model is fundamentally about trying to bridge the gap between the
 semantics we want, the optimizations compilers want, and the inconsistent chaos
 our hardware wants. *We* would like to just write programs and have them do
-exactly what we said but, you know, fast. Wouldn't that be great?
+exactly what we said but, you know, *fast*. Wouldn't that be great?



@@ -35,20 +35,20 @@ y = 3;
 x = 2;
 ```

-The compiler may conclude that it would be best if your program did
+The compiler may conclude that it would *really* be best if your program did

 ```rust,ignore
 x = 2;
 y = 3;
 ```

-This has inverted the order of events and completely eliminated one event.
+This has inverted the order of events *and* completely eliminated one event.
 From a single-threaded perspective this is completely unobservable: after all
 the statements have executed we are in exactly the same state. But if our
-program is multi-threaded, we may have been relying on `x` to actually be
-assigned to 1 before `y` was assigned. We would like the compiler to be
+program is multi-threaded, we may have been relying on `x` to *actually* be
+assigned to 1 before `y` was assigned. We would *really* like the compiler to be
 able to make these kinds of optimizations, because they can seriously improve
-performance. On the other hand, we'd also like to be able to depend on our
+performance. On the other hand, we'd really like to be able to depend on our
 program *doing the thing we said*.


@@ -57,15 +57,15 @@ program *doing the thing we said*.
 # Hardware Reordering

 On the other hand, even if the compiler totally understood what we wanted and
-respected our wishes, our hardware might instead get us in trouble. Trouble
+respected our wishes, our *hardware* might instead get us in trouble. Trouble
 comes from CPUs in the form of memory hierarchies. There is indeed a global
 shared memory space somewhere in your hardware, but from the perspective of each
 CPU core it is *so very far away* and *so very slow*. Each CPU would rather work
-with its local cache of the data and only go through all the anguish of
-talking to shared memory only when it doesn't actually have that memory in
+with its local cache of the data and only go through all the *anguish* of
+talking to shared memory *only* when it doesn't actually have that memory in
 cache.

-After all, that's the whole point of the cache, right? If every read from the
+After all, that's the whole *point* of the cache, right? If every read from the
 cache had to run back to shared memory to double check that it hadn't changed,
 what would the point be? The end result is that the hardware doesn't guarantee
 that events that occur in the same order on *one* thread, occur in the same
@@ -99,13 +99,13 @@ provides weak ordering guarantees. This has two consequences for concurrent
 programming:

 * Asking for stronger guarantees on strongly-ordered hardware may be cheap or
-  even free because they already provide strong guarantees unconditionally.
+  even *free* because they already provide strong guarantees unconditionally.
   Weaker guarantees may only yield performance wins on weakly-ordered hardware.

-* Asking for guarantees that are too weak on strongly-ordered hardware is
+* Asking for guarantees that are *too* weak on strongly-ordered hardware is
   more likely to *happen* to work, even though your program is strictly
-  incorrect. If possible, concurrent algorithms should be tested on
-  weakly-ordered hardware.
+  incorrect. If possible, concurrent algorithms should be tested on weakly-
+  ordered hardware.



@@ -115,10 +115,10 @@ programming:

 The C11 memory model attempts to bridge the gap by allowing us to talk about the
 *causality* of our program. Generally, this is by establishing a *happens
-before* relationship between parts of the program and the threads that are
+before* relationships between parts of the program and the threads that are
 running them. This gives the hardware and compiler room to optimize the program
 more aggressively where a strict happens-before relationship isn't established,
-but forces them to be more careful where one is established. The way we
+but forces them to be more careful where one *is* established. The way we
 communicate these relationships are through *data accesses* and *atomic
 accesses*.

@@ -130,10 +130,8 @@ propagate the changes made in data accesses to other threads as lazily and
 inconsistently as it wants. Mostly critically, data accesses are how data races
 happen. Data accesses are very friendly to the hardware and compiler, but as
 we've seen they offer *awful* semantics to try to write synchronized code with.
-Actually, that's too weak.
-
-**It is literally impossible to write correct synchronized code using only data
-accesses.**
+Actually, that's too weak. *It is literally impossible to write correct
+synchronized code using only data accesses*.

 Atomic accesses are how we tell the hardware and compiler that our program is
 multi-threaded. Each atomic access can be marked with an *ordering* that
@@ -143,10 +141,7 @@ they *can't* do. For the compiler, this largely revolves around re-ordering of
 instructions. For the hardware, this largely revolves around how writes are
 propagated to other threads. The set of orderings Rust exposes are:

-* Sequentially Consistent (SeqCst)
-* Release
-* Acquire
-* Relaxed
+* Sequentially Consistent (SeqCst) Release Acquire Relaxed

 (Note: We explicitly do not expose the C11 *consume* ordering)

@@ -159,13 +154,13 @@ synchronize"

 Sequentially Consistent is the most powerful of all, implying the restrictions
 of all other orderings. Intuitively, a sequentially consistent operation
-cannot be reordered: all accesses on one thread that happen before and after a
-SeqCst access stay before and after it. A data-race-free program that uses
+*cannot* be reordered: all accesses on one thread that happen before and after a
+SeqCst access *stay* before and after it. A data-race-free program that uses
 only sequentially consistent atomics and data accesses has the very nice
 property that there is a single global execution of the program's instructions
 that all threads agree on. This execution is also particularly nice to reason
 about: it's just an interleaving of each thread's individual executions. This
-does not hold if you start using the weaker atomic orderings.
+*does not* hold if you start using the weaker atomic orderings.

 The relative developer-friendliness of sequential consistency doesn't come for
 free. Even on strongly-ordered platforms sequential consistency involves
@@ -175,8 +170,8 @@ In practice, sequential consistency is rarely necessary for program correctness.
 However sequential consistency is definitely the right choice if you're not
 confident about the other memory orders. Having your program run a bit slower
 than it needs to is certainly better than it running incorrectly! It's also
-mechanically trivial to downgrade atomic operations to have a weaker
-consistency later on. Just change `SeqCst` to `Relaxed` and you're done! Of
+*mechanically* trivial to downgrade atomic operations to have a weaker
+consistency later on. Just change `SeqCst` to e.g. `Relaxed` and you're done! Of
 course, proving that this transformation is *correct* is a whole other matter.


@@ -188,15 +183,15 @@ Acquire and Release are largely intended to be paired. Their names hint at their
 use case: they're perfectly suited for acquiring and releasing locks, and
 ensuring that critical sections don't overlap.

-Intuitively, an acquire access ensures that every access after it stays after
+Intuitively, an acquire access ensures that every access after it *stays* after
 it. However operations that occur before an acquire are free to be reordered to
 occur after it. Similarly, a release access ensures that every access before it
-stays before it. However operations that occur after a release are free to be
+*stays* before it. However operations that occur after a release are free to be
 reordered to occur before it.

 When thread A releases a location in memory and then thread B subsequently
 acquires *the same* location in memory, causality is established. Every write
-that happened before A's release will be observed by B after its release.
+that happened *before* A's release will be observed by B *after* its release.
 However no causality is established with any other threads. Similarly, no
 causality is established if A and B access *different* locations in memory.

@@ -235,7 +230,7 @@ weakly-ordered platforms.
 # Relaxed

 Relaxed accesses are the absolute weakest. They can be freely re-ordered and
-provide no happens-before relationship. Still, relaxed operations are still
+provide no happens-before relationship. Still, relaxed operations *are* still
 atomic. That is, they don't count as data accesses and any read-modify-write
 operations done to them occur atomically. Relaxed operations are appropriate for
 things that you definitely want to happen, but don't particularly otherwise care
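
As supplementary context for the atomics.md hunks above (not part of this commit): the Release/Acquire causality the text describes can be sketched with the `std::sync::atomic` API. The function name and the payload value `42` are illustrative only; this is a minimal sketch, not the book's own example.

```rust
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

// The producer publishes a payload, then raises a flag with Release.
// A consumer that observes the flag with an Acquire load is guaranteed
// to also observe the payload write: a happens-before edge is established.
fn release_acquire_demo() -> usize {
    let data = Arc::new(AtomicUsize::new(0));
    let ready = Arc::new(AtomicBool::new(false));

    let (d, r) = (Arc::clone(&data), Arc::clone(&ready));
    let producer = thread::spawn(move || {
        d.store(42, Ordering::Relaxed);   // the payload write "stays before"...
        r.store(true, Ordering::Release); // ...this Release store
    });

    // Spin until the Acquire load observes the Release store.
    while !ready.load(Ordering::Acquire) {}
    let seen = data.load(Ordering::Relaxed); // guaranteed to see 42

    producer.join().unwrap();
    seen
}

fn main() {
    println!("observed {}", release_acquire_demo());
}
```

Note that if both stores used `Relaxed`, the consumer could legally observe the flag without observing the payload, which is exactly the "no happens-before relationship" pitfall the Relaxed hunk describes.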

branches/try/src/doc/nomicon/borrow-splitting.md renamed to branches/try/src/doc/tarpl/borrow-splitting.md

Lines changed: 12 additions & 20 deletions
@@ -2,7 +2,7 @@

 The mutual exclusion property of mutable references can be very limiting when
 working with a composite structure. The borrow checker understands some basic
-stuff, but will fall over pretty easily. It does understand structs
+stuff, but will fall over pretty easily. It *does* understand structs
 sufficiently to know that it's possible to borrow disjoint fields of a struct
 simultaneously. So this works today:

@@ -27,27 +27,19 @@ However borrowck doesn't understand arrays or slices in any way, so this doesn't
 work:

 ```rust,ignore
-let mut x = [1, 2, 3];
+let x = [1, 2, 3];
 let a = &mut x[0];
 let b = &mut x[1];
 println!("{} {}", a, b);
 ```

 ```text
-<anon>:4:14: 4:18 error: cannot borrow `x[..]` as mutable more than once at a time
-<anon>:4 let b = &mut x[1];
-                 ^~~~
-<anon>:3:14: 3:18 note: previous borrow of `x[..]` occurs here; the mutable borrow prevents subsequent moves, borrows, or modification of `x[..]` until the borrow ends
-<anon>:3 let a = &mut x[0];
-                 ^~~~
-<anon>:6:2: 6:2 note: previous borrow ends here
-<anon>:1 fn main() {
-<anon>:2 let mut x = [1, 2, 3];
-<anon>:3 let a = &mut x[0];
-<anon>:4 let b = &mut x[1];
-<anon>:5 println!("{} {}", a, b);
-<anon>:6 }
-         ^
+<anon>:3:18: 3:22 error: cannot borrow immutable indexed content `x[..]` as mutable
+<anon>:3 let a = &mut x[0];
+                 ^~~~
+<anon>:4:18: 4:22 error: cannot borrow immutable indexed content `x[..]` as mutable
+<anon>:4 let b = &mut x[1];
+                 ^~~~
 error: aborting due to 2 previous errors
 ```

@@ -58,7 +50,7 @@ to the same value.

 In order to "teach" borrowck that what we're doing is ok, we need to drop down
 to unsafe code. For instance, mutable slices expose a `split_at_mut` function
-that consumes the slice and returns two mutable slices. One for everything to
+that consumes the slice and returns *two* mutable slices. One for everything to
 the left of the index, and one for everything to the right. Intuitively we know
 this is safe because the slices don't overlap, and therefore alias. However
 the implementation requires some unsafety:

@@ -101,10 +93,10 @@ completely incompatible with this API, as it would produce multiple mutable
 references to the same object!

 However it actually *does* work, exactly because iterators are one-shot objects.
-Everything an IterMut yields will be yielded at most once, so we don't
-actually ever yield multiple mutable references to the same piece of data.
+Everything an IterMut yields will be yielded *at most* once, so we don't
+*actually* ever yield multiple mutable references to the same piece of data.

-Perhaps surprisingly, mutable iterators don't require unsafe code to be
+Perhaps surprisingly, mutable iterators *don't* require unsafe code to be
 implemented for many types!

 For instance here's a singly linked list:
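
(The singly-linked-list listing itself is truncated in this diff view.) As a side note, not part of the commit: the safe `split_at_mut` API that the hunks above discuss can be exercised like this; the function name `bump_halves` is illustrative only.

```rust
// split_at_mut hands back two non-overlapping &mut slices, which the
// borrow checker accepts simultaneously even though both view `x`.
fn bump_halves(x: &mut [i32]) {
    let (left, right) = x.split_at_mut(1); // left = x[..1], right = x[1..]
    left[0] += 10;
    right[0] += 10;
}

fn main() {
    let mut x = [1, 2, 3];
    bump_halves(&mut x);
    println!("{:?}", x); // prints [11, 12, 3]
}
```

This works precisely because the two returned slices are disjoint, so no aliasing mutable access ever exists, even though proving that requires `unsafe` inside `split_at_mut`'s implementation.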
