
Commit 1cf0df4

Windsooon authored and miss-islington committed

bpo-36654: Add examples for using tokenize module programmatically (GH-18187)

(cherry picked from commit 4b09dc7)
Co-authored-by: Windson yang <[email protected]>

1 parent 321491a commit 1cf0df4

File tree

1 file changed: +19 −0 lines changed

Doc/library/tokenize.rst

Lines changed: 19 additions & 0 deletions
@@ -278,3 +278,22 @@ The exact token type names can be displayed using the :option:`-e` option:
     4,10-4,11:          RPAR           ')'
     4,11-4,12:          NEWLINE        '\n'
     5,0-5,0:            ENDMARKER      ''
+
+Example of tokenizing a file programmatically, reading unicode
+strings instead of bytes with :func:`generate_tokens`::
+
+    import tokenize
+
+    with tokenize.open('hello.py') as f:
+        tokens = tokenize.generate_tokens(f.readline)
+        for token in tokens:
+            print(token)
+
+Or reading bytes directly with :func:`.tokenize`::
+
+    import tokenize
+
+    with open('hello.py', 'rb') as f:
+        tokens = tokenize.tokenize(f.readline)
+        for token in tokens:
+            print(token)
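For context, the two examples this commit adds differ only in the input type: `generate_tokens()` expects a `readline` callable that yields `str` lines, while `tokenize.tokenize()` expects one that yields `bytes` (and emits an extra `ENCODING` token first). A minimal sketch of that difference, using in-memory `io` buffers instead of the `hello.py` file from the diff (the file name and source string here are illustrative assumptions):

```python
import io
import tokenize

source = "x = 1\n"

# generate_tokens() takes a readline that returns str lines.
str_tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))

# tokenize() takes a readline that returns bytes lines.
byte_tokens = list(tokenize.tokenize(io.BytesIO(source.encode("utf-8")).readline))

# The bytes-based API prepends an ENCODING token; the rest match.
print(str_tokens[0].type == tokenize.NAME)       # first token is NAME 'x'
print(byte_tokens[0].type == tokenize.ENCODING)  # bytes variant starts with ENCODING
```

Apart from the leading `ENCODING` token, both calls produce the same token stream for the same source, which is why the documentation presents them as parallel alternatives.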
