- Broadcasting messages
- =====================
+ Broadcasting
+ ============

.. currentmodule:: websockets

-
- .. admonition:: If you just want to send a message to all connected clients,
-     use :func:`broadcast`.
+ .. admonition:: If you want to send a message to all connected clients,
+     use :func:`~asyncio.connection.broadcast`.
    :class: tip

-     If you want to learn about its design in depth, continue reading this
-     document.
+     If you want to learn about its design, continue reading this document.
+
+     For the legacy :mod:`asyncio` implementation, use
+     :func:`~legacy.protocol.broadcast`.

WebSocket servers often send the same message to all connected clients or to a
subset of clients for which the message is relevant.

- Let's explore options for broadcasting a message, explain the design
- of :func:`broadcast`, and discuss alternatives.
+ Let's explore options for broadcasting a message, explain the design of
+ :func:`~asyncio.connection.broadcast`, and discuss alternatives.

For each option, we'll provide a connection handler called ``handler()`` and a
function or coroutine called ``broadcast()`` that sends a message to all
@@ -24,7 +25,7 @@ connected clients.
Integrating them is left as an exercise for the reader. You could start with::

    import asyncio
-     import websockets
+     from websockets.asyncio.server import serve

    async def handler(websocket):
        ...
@@ -39,7 +40,7 @@ Integrating them is left as an exercise for the reader. You could start with::
            await broadcast(message)

    async def main():
-         async with websockets.serve(handler, "localhost", 8765):
+         async with serve(handler, "localhost", 8765):
            await broadcast_messages()  # runs forever

    if __name__ == "__main__":
@@ -82,11 +83,13 @@

Here's a coroutine that broadcasts a message to all clients::

+     from websockets import ConnectionClosed
+
    async def broadcast(message):
        for websocket in CLIENTS.copy():
            try:
                await websocket.send(message)
-             except websockets.ConnectionClosed:
+             except ConnectionClosed:
                pass

There are two tricks in this version of ``broadcast()``.
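
``CLIENTS`` is assumed to be a module-level set of connections maintained by
the connection handler. A minimal sketch of that registration pattern, shown
only for illustration::

    CLIENTS = set()

    async def handler(websocket):
        # Register the connection, keep it open, and unregister it on close.
        CLIENTS.add(websocket)
        try:
            await websocket.wait_closed()
        finally:
            CLIENTS.remove(websocket)
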
@@ -117,11 +120,11 @@ which is usually outside of the control of the server.

If you know for sure that you will never write more than ``write_limit`` bytes
within ``ping_interval + ping_timeout``, then websockets will terminate slow
- connections before the write buffer has time to fill up.
+ connections before the write buffer can fill up.

- Don't set extreme ``write_limit``, ``ping_interval``, and ``ping_timeout``
- values to ensure that this condition holds. Set reasonable values and use the
- built-in :func:`broadcast` function instead.
+ Don't set extreme values of ``write_limit``, ``ping_interval``, or
+ ``ping_timeout`` to ensure that this condition holds! Instead, set reasonable
+ values and use the built-in :func:`~asyncio.connection.broadcast` function.
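
For instance, a sketch of starting the server with these settings left at
moderate, explicit values (the values below are only illustrative, not
recommendations)::

    from websockets.asyncio.server import serve

    async def main():
        async with serve(
            handler,
            "localhost",
            8765,
            ping_interval=20,     # send a keepalive ping every 20 seconds
            ping_timeout=20,      # close the connection if the pong is late
            write_limit=2 ** 15,  # high-water mark of the write buffer, in bytes
        ) as server:
            await server.serve_forever()
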

The concurrent way
------------------
@@ -134,7 +137,7 @@ Let's modify ``broadcast()`` to send messages concurrently::
    async def send(websocket, message):
        try:
            await websocket.send(message)
-         except websockets.ConnectionClosed:
+         except ConnectionClosed:
            pass

    def broadcast(message):
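
The hunk ends at the ``def broadcast(message):`` line. In this pattern, the
body typically schedules one ``send()`` task per client, along these lines (a
sketch, not necessarily the exact code in the document)::

    def broadcast(message):
        # Fire off one task per client; send() already swallows ConnectionClosed.
        for websocket in CLIENTS:
            asyncio.create_task(send(websocket, message))
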
@@ -179,20 +182,20 @@ doesn't work well when broadcasting a message to thousands of clients.
179
182
180
183
When you're sending messages to a single client, you don't want to send them
181
184
faster than the network can transfer them and the client accept them. This is
182
- why :meth: `~server.WebSocketServerProtocol .send ` checks if the write buffer
183
- is full and, if it is, waits until it drain , giving the network and the
184
- client time to catch up. This provides backpressure.
185
+ why :meth: `~asyncio. server.ServerConnection .send ` checks if the write buffer is
186
+ above the high-water mark and, if it is, waits until it drains , giving the
187
+ network and the client time to catch up. This provides backpressure.
185
188
186
189
Without backpressure, you could pile up data in the write buffer until the
187
190
server process runs out of memory and the operating system kills it.
188
191
189
- The :meth: `~server.WebSocketServerProtocol .send ` API is designed to enforce
192
+ The :meth: `~asyncio. server.ServerConnection .send ` API is designed to enforce
190
193
backpressure by default. This helps users of websockets write robust programs
191
194
even if they never heard about backpressure.
192
195
193
196
For comparison, :class: `asyncio.StreamWriter ` requires users to understand
194
- backpressure and to await :meth: `~asyncio.StreamWriter.drain ` explicitly
195
- after each :meth: `~asyncio.StreamWriter.write `.
197
+ backpressure and to await :meth: `~asyncio.StreamWriter.drain ` after each
198
+ :meth: `~asyncio.StreamWriter.write ` — or at least sufficiently frequently .
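
Concretely, the :class:`asyncio.StreamWriter` pattern looks roughly like this
generic sketch, unrelated to websockets itself::

    import asyncio

    async def send_lines(writer: asyncio.StreamWriter, lines: list[bytes]) -> None:
        for line in lines:
            writer.write(line)    # only buffers the data
            await writer.drain()  # waits while the buffer is above the high-water mark
        writer.close()
        await writer.wait_closed()
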

When broadcasting messages, backpressure consists in slowing down all clients
in an attempt to let the slowest client catch up. With thousands of clients,
@@ -203,14 +206,14 @@ How do we avoid running out of memory when slow clients can't keep up with the
broadcast rate, then? The most straightforward option is to disconnect them.

If a client gets too far behind, eventually it reaches the limit defined by
- ``ping_timeout`` and websockets terminates the connection. You can read the
- discussion of :doc:`keepalive and timeouts <./timeouts>` for details.
+ ``ping_timeout`` and websockets terminates the connection. You can refer to
+ the discussion of :doc:`keepalive and timeouts <timeouts>` for details.

- How :func:`broadcast` works
- ---------------------------
+ How :func:`~asyncio.connection.broadcast` works
+ -----------------------------------------------

- The built-in :func:`broadcast` function is similar to the naive way. The main
- difference is that it doesn't apply backpressure.
+ The built-in :func:`~asyncio.connection.broadcast` function is similar to the
+ naive way. The main difference is that it doesn't apply backpressure.

This provides the best performance by avoiding the overhead of scheduling and
running one task per client.
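
Calling it is a single synchronous function call; a minimal usage sketch,
assuming the same ``CLIENTS`` set as above::

    from websockets.asyncio.connection import broadcast

    def broadcast_to_clients(message):
        # Write the message to every connection without waiting for slow clients.
        broadcast(CLIENTS, message)
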
@@ -321,9 +324,9 @@ the asynchronous iterator returned by ``subscribe()``.
Performance considerations
--------------------------

- The built-in :func:`broadcast` function sends all messages without yielding
- control to the event loop. So does the naive way when the network and clients
- are fast and reliable.
+ The built-in :func:`~asyncio.connection.broadcast` function sends all messages
+ without yielding control to the event loop. So does the naive way when the
+ network and clients are fast and reliable.

For each client, a WebSocket frame is prepared and sent to the network. This
is the minimum amount of work required to broadcast a message.
@@ -343,7 +346,7 @@ However, this isn't possible in general for two reasons:

All other patterns discussed above yield control to the event loop once per
client because messages are sent by different tasks. This makes them slower
- than the built-in :func:`broadcast` function.
+ than the built-in :func:`~asyncio.connection.broadcast` function.

There is no major difference between the performance of per-client queues and
publish–subscribe.
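
For reference, a minimal sketch of the per-client queue pattern mentioned here
(illustrative only; the document's own version may differ)::

    import asyncio

    from websockets import ConnectionClosed

    CLIENT_QUEUES = set()

    async def handler(websocket):
        # Give each client its own queue and drain it at the client's own pace.
        queue = asyncio.Queue()
        CLIENT_QUEUES.add(queue)
        try:
            while True:
                message = await queue.get()
                try:
                    await websocket.send(message)
                except ConnectionClosed:
                    break
        finally:
            CLIENT_QUEUES.remove(queue)

    def broadcast(message):
        # Enqueue the message for every client; no client blocks the others.
        for queue in CLIENT_QUEUES:
            queue.put_nowait(message)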