# Introduction

- The Rust language is designed from the ground up to support pervasive
- and safe concurrency through lightweight, memory-isolated tasks and
- message passing.
-
- Rust tasks are not the same as traditional threads - they are what are
- often referred to as _green threads_, cooperatively scheduled by the
- Rust runtime onto a small number of operating system threads. Being
- significantly cheaper to create than traditional threads, Rust can
- create hundreds of thousands of concurrent tasks on a typical 32-bit
- system.
+ Rust supports concurrency and parallelism through lightweight tasks.
+ Rust tasks are significantly cheaper to create than traditional
+ threads, with a typical 32-bit system able to run hundreds of
+ thousands simultaneously. Tasks in Rust are what are often referred to
+ as _green threads_, cooperatively scheduled by the Rust runtime onto a
+ small number of operating system threads.

Tasks provide failure isolation and recovery. When an exception occurs
in rust code (either by calling `fail` explicitly or by otherwise performing
@@ -20,11 +16,11 @@ to `catch` an exception as in other languages. Instead tasks may monitor
each other to detect when failure has occurred.
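
One way to observe a child's failure is the `task::try` function in the
`task` module, which runs an owned closure in a new task and reports the
outcome. The following is only a sketch: it assumes that `task::try`
returns a `Result` whose error case means the child failed, and the helper
functions are placeholders.

~~~~
# fn some_condition() -> bool { false }
# fn calculate_result() -> int { 0 }
// Run the closure in a fresh child task; if that child calls `fail`,
// the parent survives and simply sees an error value.
let result: Result<int, ()> = do task::try {
    if some_condition() {
        calculate_result()
    } else {
        fail ~"oops!";
    }
};
assert result.is_err();
~~~~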

Rust tasks have dynamically sized stacks. When a task is first created
- it starts off with a small amount of stack (currently in the low
- thousands of bytes, depending on platform) and more stack is acquired as
- needed. A Rust task will never run off the end of the stack as is
- possible in many other languages, but they do have a stack budget, and
- if a Rust task exceeds its stack budget then it will fail safely.
+ it starts off with a small amount of stack (in the hundreds to
+ low thousands of bytes, depending on platform), and more stack is
+ added as needed. A Rust task will never run off the end of the stack as
+ is possible in many other languages, but they do have a stack budget,
+ and if a Rust task exceeds its stack budget then it will fail safely.

Tasks make use of Rust's type system to provide strong memory safety
guarantees, disallowing shared mutable state. Communication between
@@ -36,12 +32,12 @@ explore some typical patterns in concurrent Rust code, and finally
discuss some of the more exotic synchronization types in the standard
library.

- ## A note about the libraries
+ # A note about the libraries

While Rust's type system provides the building blocks needed for safe
and efficient tasks, all of the task functionality itself is implemented
in the core and standard libraries, which are still under development
- and do not always present a consistent interface.
+ and do not always present a nice programming interface.

In particular, there are currently two independent modules that provide
a message passing interface to Rust code: `core::comm` and `core::pipes`.
@@ -70,96 +66,43 @@ concurrency at the moment.
[`std::arc`]: std/arc.html
[`std::par`]: std/par.html

- # Basics
+ # Spawning a task

- The programming interface for creating and managing tasks is contained
- in the `task` module of the `core` library, making it available to all
- Rust code by default. At it's simplest, creating a task is a matter of
- calling the `spawn` function, passing a closure to run in the new
- task.
+ Spawning a task is done using the various spawn functions in the
+ module `task`. Let's begin with the simplest one, `task::spawn()`:

~~~~
- # use io::println;
use task::spawn;
+ use io::println;

- // Print something profound in a different task using a named function
- fn print_message() { println("I am running in a different task!"); }
- spawn(print_message);
-
- // Print something more profound in a different task using a lambda expression
- spawn( || println("I am also running in a different task!") );
+ let some_value = 22;

- // The canonical way to spawn is using `do` notation
do spawn {
-     println("I too am running in a different task!");
+     println(~"This executes in the child task.");
+     println(fmt!("%d", some_value));
}
~~~~

- In Rust, there is nothing special about creating tasks - the language
- itself doesn't know what a 'task' is. Instead, Rust provides in the
- type system all the tools necessary to implement safe concurrency,
- _owned types_ in particular, and leaves the dirty work up to the
- core library.
-
- The `spawn` function has a very simple type signature: `fn spawn(f:
- ~fn())`. Because it accepts only owned closures, and owned closures
- contained only owned data, `spawn` can safely move the entire closure
- and all its associated state into an entirely different task for
- execution. Like any closure, the function passed to spawn may capture
- an environment that it carries across tasks.
-
- ~~~
- # use io::println;
- # use task::spawn;
- # fn generate_task_number() -> int { 0 }
- // Generate some state locally
- let child_task_number = generate_task_number();
-
- do spawn {
-     // Capture it in the remote task
-     println(fmt!("I am child number %d", child_task_number));
- }
- ~~~
-
- By default tasks will be multiplexed across the available cores, running
- in parallel, thus on a multicore machine, running the following code
- should interleave the output in vaguely random order.
+ The argument to `task::spawn()` is a [unique
+ closure](#unique-closures) of type `fn~()`, meaning that it takes no
+ arguments and generates no return value. The effect of `task::spawn()`
+ is to fire up a child task that will execute the closure in parallel
+ with the creator.
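
Because the closure passed to `spawn` owns everything it captures, a parent
may fire up any number of children at once, each carrying its own copy of
the captured data. The following is only a small sketch, using only
`int::range`, `fmt!`, and `println` as they appear elsewhere on this page.

~~~~
# use task::spawn;
# use io::println;
// Each iteration spawns a child that captures its own copy of
// `child_number`; the children run in parallel with the parent.
for int::range(0, 5) |child_number| {
    do spawn {
        println(fmt!("child number %d is running", child_number));
    }
}
~~~~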

- ~~~
- # use io::print;
- # use task::spawn;
+ # Communication

- for int::range(0, 20) |child_task_number| {
-     do spawn {
-         print(fmt!("I am child number %d\n", child_task_number));
-     }
- }
- ~~~
-
- ## Communication
-
- Now that we have spawned a new task, it would be nice if we could
- communicate with it. Recall that Rust does not have shared mutable
- state, so one task may not manipulate variables owned by another task.
- Instead we use *pipes*.
-
- Pipes are simply a pair of endpoints, with one for sending messages
- and another for receiving messages. Pipes are low-level communication
- building-blocks and so come in a variety of forms, appropriate for
- different use cases, but there are just a few varieties that are most
- commonly used, which we will cover presently.
-
- The simplest way to create a pipe is to use the `pipes::stream`
- function to create a `(Chan, Port)` pair. In Rust parlance a 'channel'
- is a sending endpoint of a pipe, and a 'port' is the recieving
- endpoint. Consider the following example of performing two calculations
- concurrently.
+ Now that we have spawned a child task, it would be nice if we could
+ communicate with it. This is done using *pipes*. Pipes are simply a
+ pair of endpoints, with one for sending messages and another for
+ receiving messages. The easiest way to create a pipe is to use
+ `pipes::stream`. Imagine we wish to perform two expensive
+ computations in parallel. We might write something like:

~~~~
use task::spawn;
use pipes::{stream, Port, Chan};

- let (chan, port): (Chan<int>, Port<int>) = stream();
+ let (chan, port) = stream();

do spawn {
    let result = some_expensive_computation();
@@ -173,19 +116,17 @@ let result = port.recv();
# fn some_other_expensive_computation() {}
~~~~

- Let's examine this example in detail. The `let` statement first creates a
- stream for sending and receiving integers (recall that `let` can be
- used for destructuring patterns, in this case separating a tuple into
- its component parts).
+ Let's walk through this code line-by-line. The first line creates a
+ stream for sending and receiving integers:

- ~~~~
- # use pipes::{stream, Chan, Port};
- let (chan, port): (Chan<int>, Port<int>) = stream();
+ ~~~~ {.ignore}
+ # use pipes::stream;
+ let (chan, port) = stream();
~~~~

- The channel will be used by the child task to send data to the parent task,
- which will wait to recieve the data on the port. The next statement
- spawns the child task.
+ This port is where we will receive the message from the child task
+ once it is complete. The channel will be used by the child to send a
+ message to the port. The next statement actually spawns the child:

~~~~
# use task::{spawn};
@@ -199,15 +140,14 @@ do spawn {
}
~~~~

- Notice that `chan` was transferred to the child task implicitly by
- capturing it in the task closure. Both `Chan` and `Port` are sendable
- types and may be captured into tasks or otherwise transferred between
- them. In the example, the child task performs an expensive computation
- then sends the result over the captured channel.
+ This child will perform the expensive computation and send the result
+ over the channel. (Under the hood, `chan` was captured by the
+ closure that forms the body of the child task. This capture is
+ allowed because channels are sendable.)

- Finally, the parent continues by performing some other expensive
- computation and then waiting for the child's result to arrive on the
- port:
+ Finally, the parent continues by performing
+ some other expensive computation and then waiting for the child's result
+ to arrive on the port:

~~~~
# use pipes::{stream, Port, Chan};
@@ -218,73 +158,7 @@ some_other_expensive_computation();
let result = port.recv();
~~~~

- The `Port` and `Chan` pair created by `stream` enable efficient
- communication between a single sender and a single receiver, but
- multiple senders cannot use a single `Chan`, nor can multiple
- receivers use a single `Port`. What if our example needed to
- perform multiple computations across a number of tasks? In that
- case we might use a `SharedChan`, a type that allows a single
- `Chan` to be used by multiple senders.
-
- ~~~
- # use task::spawn;
- use pipes::{stream, SharedChan};
-
- let (chan, port) = stream();
- let chan = SharedChan(move chan);
-
- for uint::range(0, 3) |init_val| {
-     // Create a new channel handle to distribute to the child task
-     let child_chan = chan.clone();
-     do spawn {
-         child_chan.send(some_expensive_computation(init_val));
-     }
- }
-
- let result = port.recv() + port.recv() + port.recv();
- # fn some_expensive_computation(_i: uint) -> int { 42 }
- ~~~
-
- Here we transfer ownership of the channel into a new `SharedChan`
- value. Like `Chan`, `SharedChan` is a non-copyable, owned type
- (sometimes also referred to as an 'affine' or 'linear' type). Unlike
- `Chan` though, `SharedChan` may be duplicated with the `clone()`
- method. A cloned `SharedChan` produces a new handle to the same
- channel, allowing multiple tasks to send data to a single port.
- Between `spawn`, `stream` and `SharedChan` we have enough tools
- to implement many useful concurrency patterns.
-
- Note that the above `SharedChan` example is somewhat contrived since
- you could also simply use three `stream` pairs, but it serves to
- illustrate the point. For reference, written with multiple streams it
- might look like the example below.
-
- ~~~
- # use task::spawn;
- # use pipes::{stream, Port, Chan};
-
- let ports = do vec::from_fn(3) |init_val| {
-     let (chan, port) = stream();
-
-     do spawn {
-         chan.send(some_expensive_computation(init_val));
-     }
-
-     port
- };
-
- // Wait on each port, accumulating the results
- let result = ports.foldl(0, |accum, port| *accum + port.recv() );
- # fn some_expensive_computation(_i: uint) -> int { 42 }
- ~~~
-
- # Unfinished notes
-
- ## Actor patterns
-
- ## Linearity, option dancing, owned closures
-
- ## Creating a task with a bi-directional communication path
+ # Creating a task with a bi-directional communication path

A very common thing to do is to spawn a child task where the parent
and child both need to exchange messages with each other. The
@@ -353,4 +227,3 @@ assert from_child.recv() == ~"0";

The parent task first calls `DuplexStream` to create a pair of bidirectional endpoints. It then uses `task::spawn` to create the child task, which captures one end of the communication channel. As a result, both parent
and child can send and receive data to and from the other.
-
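
For reference, a minimal sketch of that exchange; it assumes a `DuplexStream`
constructor reachable as `std::comm::DuplexStream` (the exact path may
differ) and the number-to-string protocol implied by the assertion above.

~~~~
# use std::comm::DuplexStream; // assumed location of DuplexStream
# use task::spawn;
let (from_child, to_child) = DuplexStream();
do spawn {
    // Child: receive numbers from the parent, answer with their string
    // form, and stop once it has echoed zero.
    loop {
        let value: int = to_child.recv();
        to_child.send(fmt!("%d", value));
        if value == 0 { break; }
    }
}
from_child.send(22);
assert from_child.recv() == ~"22";
from_child.send(0);
assert from_child.recv() == ~"0";
~~~~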