Commit fcb7e35

Automated tutorials push

1 parent 85c8720 commit fcb7e35

197 files changed (+12108, -13027 lines)

_downloads/3195443a0ced3cabc0ad643537bdb5cd/introyt1_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -34,7 +34,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "03d00eb0",
+"id": "333110fd",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -50,7 +50,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "a2258b92",
+"id": "bee03c61",
 "metadata": {},
 "source": [
 "\n",

_downloads/4355e2cef7d17548f1e25f97a62828c4/template_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -31,7 +31,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "4e1876e7",
+"id": "6b84cd50",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -47,7 +47,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "57340e2a",
+"id": "c7942b32",
 "metadata": {},
 "source": [
 "\n",

_downloads/63a0f0fc7b3ffb15d3a5ac8db3d521ee/tensors_deeper_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -34,7 +34,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "c82232a1",
+"id": "9a5e39d6",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -50,7 +50,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "aafac980",
+"id": "bbfa1eb2",
 "metadata": {},
 "source": [
 "\n",

_downloads/770632dd3941d2a51b831c52ded57aa2/trainingyt.ipynb

Lines changed: 2 additions & 2 deletions
@@ -35,7 +35,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "5e949b91",
+"id": "e7355df9",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -51,7 +51,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "3593a657",
+"id": "3a8b0583",
 "metadata": {},
 "source": [
 "\n",

_downloads/c28f42852d456daf9af72da6c6909556/captumyt.ipynb

Lines changed: 2 additions & 2 deletions
@@ -37,7 +37,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "241f94be",
+"id": "c46e89e7",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -53,7 +53,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "eb62a5d6",
+"id": "122eb44e",
 "metadata": {},
 "source": [
 "\n",

_downloads/e2e556f6b4693c2cef716dd7f40caaf6/tensorboardyt_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -35,7 +35,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "6881b39d",
+"id": "fb14dc6c",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -51,7 +51,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "4cb84077",
+"id": "b2b08c1a",
 "metadata": {},
 "source": [
 "\n",

_downloads/ed9d4f94afb79f7dada6742a06c486a5/autogradyt_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -34,7 +34,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "5a85e8ec",
+"id": "90f4d3a2",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -50,7 +50,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "01153738",
+"id": "4fa489f6",
 "metadata": {},
 "source": [
 "\n",

_downloads/fe726e041160526cf828806536922cf6/modelsyt_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -34,7 +34,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "a6756670",
+"id": "539ee7a6",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -50,7 +50,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "52962b28",
+"id": "d1844aa4",
 "metadata": {},
 "source": [
 "\n",
_images/sphx_glr_coding_ddpg_001.png: -212 Bytes

(Nine further image files changed; size deltas: -3.48 KB, 205 Bytes, -239 Bytes, -29 Bytes, -719 Bytes, 198 Bytes, 1.12 KB, -97 Bytes, -2.46 KB.)

_sources/advanced/coding_ddpg.rst.txt

Lines changed: 21 additions & 21 deletions
@@ -1634,26 +1634,26 @@ modules we need.
 
 
   0%|          | 0/10000 [00:00<?, ?it/s]
-  8%|8         | 800/10000 [00:00<00:06, 1413.52it/s]
- 16%|#6        | 1600/10000 [00:03<00:20, 403.13it/s]
- 24%|##4       | 2400/10000 [00:04<00:12, 586.58it/s]
- 32%|###2      | 3200/10000 [00:04<00:08, 768.34it/s]
- 40%|####      | 4000/10000 [00:05<00:06, 920.14it/s]
- 48%|####8     | 4800/10000 [00:05<00:04, 1040.30it/s]
- 56%|#####6    | 5600/10000 [00:06<00:03, 1121.15it/s]
-reward: -2.56 (r0 = -3.32), reward eval: reward: -0.01, reward normalized=-2.85/6.79, grad norm= 86.98, loss_value= 477.54, loss_actor= 16.11, target value: -17.80:  56%|#####6    | 5600/10000 [00:07<00:03, 1121.15it/s]
-reward: -2.56 (r0 = -3.32), reward eval: reward: -0.01, reward normalized=-2.85/6.79, grad norm= 86.98, loss_value= 477.54, loss_actor= 16.11, target value: -17.80:  64%|######4   | 6400/10000 [00:08<00:04, 796.06it/s]
-reward: -0.20 (r0 = -3.32), reward eval: reward: -0.01, reward normalized=-3.04/5.86, grad norm= 82.28, loss_value= 291.16, loss_actor= 16.75, target value: -19.42:  64%|######4   | 6400/10000 [00:08<00:04, 796.06it/s]
-reward: -0.20 (r0 = -3.32), reward eval: reward: -0.01, reward normalized=-3.04/5.86, grad norm= 82.28, loss_value= 291.16, loss_actor= 16.75, target value: -19.42:  72%|#######2  | 7200/10000 [00:09<00:04, 670.32it/s]
-reward: -2.85 (r0 = -3.32), reward eval: reward: -0.01, reward normalized=-2.93/6.12, grad norm= 169.17, loss_value= 310.63, loss_actor= 15.04, target value: -18.15:  72%|#######2  | 7200/10000 [00:10<00:04, 670.32it/s]
-reward: -2.85 (r0 = -3.32), reward eval: reward: -0.01, reward normalized=-2.93/6.12, grad norm= 169.17, loss_value= 310.63, loss_actor= 15.04, target value: -18.15:  80%|########  | 8000/10000 [00:11<00:03, 611.26it/s]
-reward: -4.98 (r0 = -3.32), reward eval: reward: -0.01, reward normalized=-2.72/4.96, grad norm= 71.58, loss_value= 203.21, loss_actor= 16.10, target value: -17.88:  80%|########  | 8000/10000 [00:12<00:03, 611.26it/s]
-reward: -4.98 (r0 = -3.32), reward eval: reward: -0.01, reward normalized=-2.72/4.96, grad norm= 71.58, loss_value= 203.21, loss_actor= 16.10, target value: -17.88:  88%|########8 | 8800/10000 [00:12<00:02, 579.99it/s]
-reward: -5.15 (r0 = -3.32), reward eval: reward: -5.77, reward normalized=-2.74/4.88, grad norm= 67.79, loss_value= 177.52, loss_actor= 19.43, target value: -18.64:  88%|########8 | 8800/10000 [00:15<00:02, 579.99it/s]
-reward: -5.15 (r0 = -3.32), reward eval: reward: -5.77, reward normalized=-2.74/4.88, grad norm= 67.79, loss_value= 177.52, loss_actor= 19.43, target value: -18.64:  96%|#########6| 9600/10000 [00:16<00:01, 368.64it/s]
-reward: -4.63 (r0 = -3.32), reward eval: reward: -5.77, reward normalized=-2.72/4.96, grad norm= 100.61, loss_value= 233.46, loss_actor= 19.22, target value: -19.81:  96%|#########6| 9600/10000 [00:17<00:01, 368.64it/s]
-reward: -4.63 (r0 = -3.32), reward eval: reward: -5.77, reward normalized=-2.72/4.96, grad norm= 100.61, loss_value= 233.46, loss_actor= 19.22, target value: -19.81: : 10400it [00:19, 348.57it/s]
-reward: -4.79 (r0 = -3.32), reward eval: reward: -5.77, reward normalized=-4.03/4.05, grad norm= 168.97, loss_value= 157.23, loss_actor= 23.69, target value: -28.33: : 10400it [00:20, 348.57it/s]
+  8%|8         | 800/10000 [00:00<00:06, 1416.92it/s]
+ 16%|#6        | 1600/10000 [00:03<00:21, 394.72it/s]
+ 24%|##4       | 2400/10000 [00:04<00:13, 572.27it/s]
+ 32%|###2      | 3200/10000 [00:04<00:09, 740.78it/s]
+ 40%|####      | 4000/10000 [00:05<00:06, 881.14it/s]
+ 48%|####8     | 4800/10000 [00:06<00:05, 999.44it/s]
+ 56%|#####6    | 5600/10000 [00:06<00:04, 1087.14it/s]
+reward: -2.21 (r0 = -2.33), reward eval: reward: 0.00, reward normalized=-1.95/6.62, grad norm= 55.87, loss_value= 464.75, loss_actor= 13.97, target value: -12.47:  56%|#####6    | 5600/10000 [00:07<00:04, 1087.14it/s]
+reward: -2.21 (r0 = -2.33), reward eval: reward: 0.00, reward normalized=-1.95/6.62, grad norm= 55.87, loss_value= 464.75, loss_actor= 13.97, target value: -12.47:  64%|######4   | 6400/10000 [00:08<00:04, 783.28it/s]
+reward: -0.13 (r0 = -2.33), reward eval: reward: 0.00, reward normalized=-2.44/5.91, grad norm= 85.41, loss_value= 286.78, loss_actor= 12.42, target value: -15.80:  64%|######4   | 6400/10000 [00:09<00:04, 783.28it/s]
+reward: -0.13 (r0 = -2.33), reward eval: reward: 0.00, reward normalized=-2.44/5.91, grad norm= 85.41, loss_value= 286.78, loss_actor= 12.42, target value: -15.80:  72%|#######2  | 7200/10000 [00:09<00:04, 667.62it/s]
+reward: -3.46 (r0 = -2.33), reward eval: reward: 0.00, reward normalized=-1.97/6.10, grad norm= 156.42, loss_value= 294.56, loss_actor= 12.12, target value: -12.42:  72%|#######2  | 7200/10000 [00:10<00:04, 667.62it/s]
+reward: -3.46 (r0 = -2.33), reward eval: reward: 0.00, reward normalized=-1.97/6.10, grad norm= 156.42, loss_value= 294.56, loss_actor= 12.12, target value: -12.42:  80%|########  | 8000/10000 [00:11<00:03, 606.34it/s]
+reward: -4.76 (r0 = -2.33), reward eval: reward: 0.00, reward normalized=-2.34/5.31, grad norm= 90.94, loss_value= 266.09, loss_actor= 17.22, target value: -15.30:  80%|########  | 8000/10000 [00:12<00:03, 606.34it/s]
+reward: -4.76 (r0 = -2.33), reward eval: reward: 0.00, reward normalized=-2.34/5.31, grad norm= 90.94, loss_value= 266.09, loss_actor= 17.22, target value: -15.30:  88%|########8 | 8800/10000 [00:13<00:02, 569.06it/s]
+reward: -5.06 (r0 = -2.33), reward eval: reward: -4.19, reward normalized=-2.55/5.54, grad norm= 88.47, loss_value= 273.97, loss_actor= 18.95, target value: -18.19:  88%|########8 | 8800/10000 [00:16<00:02, 569.06it/s]
+reward: -5.06 (r0 = -2.33), reward eval: reward: -4.19, reward normalized=-2.55/5.54, grad norm= 88.47, loss_value= 273.97, loss_actor= 18.95, target value: -18.19:  96%|#########6| 9600/10000 [00:17<00:01, 367.86it/s]
+reward: -4.87 (r0 = -2.33), reward eval: reward: -4.19, reward normalized=-3.12/4.83, grad norm= 62.93, loss_value= 200.26, loss_actor= 17.99, target value: -21.88:  96%|#########6| 9600/10000 [00:17<00:01, 367.86it/s]
+reward: -4.87 (r0 = -2.33), reward eval: reward: -4.19, reward normalized=-3.12/4.83, grad norm= 62.93, loss_value= 200.26, loss_actor= 17.99, target value: -21.88: : 10400it [00:19, 342.03it/s]
+reward: -6.21 (r0 = -2.33), reward eval: reward: -4.19, reward normalized=-3.63/4.73, grad norm= 70.11, loss_value= 197.44, loss_actor= 21.66, target value: -25.21: : 10400it [00:20, 342.03it/s]
 
 
 
@@ -1723,7 +1723,7 @@ To iterate further on this loss module we might consider:
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 0 minutes 31.669 seconds)
+   **Total running time of the script:** ( 0 minutes 31.944 seconds)
 
 
 .. _sphx_glr_download_advanced_coding_ddpg.py:

_sources/advanced/dynamic_quantization_tutorial.rst.txt

Lines changed: 3 additions & 3 deletions
@@ -517,9 +517,9 @@ models run single threaded.
 .. code-block:: none
 
     loss: 5.167
-    elapsed time (seconds): 204.3
+    elapsed time (seconds): 203.4
     loss: 5.168
-    elapsed time (seconds): 115.2
+    elapsed time (seconds): 118.1
 
 
@@ -541,7 +541,7 @@ Thanks for reading! As always, we welcome any feedback, so please create an issu
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 5 minutes 27.712 seconds)
+   **Total running time of the script:** ( 5 minutes 29.977 seconds)
 
 
 .. _sphx_glr_download_advanced_dynamic_quantization_tutorial.py:

_sources/advanced/neural_style_tutorial.rst.txt

Lines changed: 21 additions & 21 deletions
@@ -410,20 +410,20 @@ network to evaluation mode using ``.eval()``.
 Downloading: "https://download.pytorch.org/models/vgg19-dcbb9e9d.pth" to /var/lib/ci-user/.cache/torch/hub/checkpoints/vgg19-dcbb9e9d.pth
 
   0%|          | 0.00/548M [00:00<?, ?B/s]
-  7%|7         | 40.5M/548M [00:00<00:01, 424MB/s]
- 15%|#4        | 81.0M/548M [00:00<00:01, 370MB/s]
- 21%|##1       | 117M/548M [00:00<00:01, 365MB/s]
- 28%|##8       | 155M/548M [00:00<00:01, 378MB/s]
- 35%|###5      | 193M/548M [00:00<00:00, 386MB/s]
- 43%|####3     | 237M/548M [00:00<00:00, 411MB/s]
- 51%|#####1    | 282M/548M [00:00<00:00, 427MB/s]
- 59%|#####8    | 322M/548M [00:00<00:00, 405MB/s]
- 66%|######5   | 362M/548M [00:00<00:00, 391MB/s]
- 73%|#######3  | 401M/548M [00:01<00:00, 398MB/s]
- 80%|########  | 440M/548M [00:01<00:00, 400MB/s]
- 88%|########7 | 480M/548M [00:01<00:00, 406MB/s]
- 95%|#########4| 520M/548M [00:01<00:00, 408MB/s]
-100%|##########| 548M/548M [00:01<00:00, 403MB/s]
+  8%|7         | 41.5M/548M [00:00<00:01, 435MB/s]
+ 15%|#5        | 83.0M/548M [00:00<00:01, 431MB/s]
+ 23%|##2       | 124M/548M [00:00<00:01, 392MB/s]
+ 30%|##9       | 162M/548M [00:00<00:01, 310MB/s]
+ 35%|###5      | 193M/548M [00:00<00:01, 294MB/s]
+ 41%|####      | 223M/548M [00:00<00:01, 297MB/s]
+ 47%|####7     | 260M/548M [00:00<00:00, 323MB/s]
+ 55%|#####5    | 304M/548M [00:00<00:00, 363MB/s]
+ 62%|######2   | 342M/548M [00:01<00:00, 374MB/s]
+ 70%|#######   | 386M/548M [00:01<00:00, 401MB/s]
+ 79%|#######8  | 431M/548M [00:01<00:00, 420MB/s]
+ 86%|########6 | 471M/548M [00:01<00:00, 392MB/s]
+ 93%|#########2| 510M/548M [00:01<00:00, 385MB/s]
+100%|##########| 548M/548M [00:01<00:00, 370MB/s]
 
 
 
@@ -744,22 +744,22 @@ Finally, we can run the algorithm.
 
 Optimizing..
 run [50]:
-Style Loss : 4.094805 Content Loss: 4.188426
+Style Loss : 4.119137 Content Loss: 4.126241
 
 run [100]:
-Style Loss : 1.117979 Content Loss: 3.008747
+Style Loss : 1.133011 Content Loss: 3.011671
 
 run [150]:
-Style Loss : 0.712450 Content Loss: 2.650858
+Style Loss : 0.717595 Content Loss: 2.657820
 
 run [200]:
-Style Loss : 0.476656 Content Loss: 2.489839
+Style Loss : 0.481263 Content Loss: 2.491316
 
 run [250]:
-Style Loss : 0.344013 Content Loss: 2.401936
+Style Loss : 0.347754 Content Loss: 2.402972
 
 run [300]:
-Style Loss : 0.264354 Content Loss: 2.349211
+Style Loss : 0.263970 Content Loss: 2.349388
 
 
 
@@ -768,7 +768,7 @@ Finally, we can run the algorithm.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 0 minutes 11.342 seconds)
+   **Total running time of the script:** ( 0 minutes 11.629 seconds)
 
 
 .. _sphx_glr_download_advanced_neural_style_tutorial.py:

_sources/advanced/numpy_extensions_tutorial.rst.txt

Lines changed: 1 addition & 1 deletion
@@ -303,7 +303,7 @@ The backward pass computes the gradient ``wrt`` the input and the gradient ``wrt
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 0 minutes 0.377 seconds)
+   **Total running time of the script:** ( 0 minutes 0.369 seconds)
 
 
 .. _sphx_glr_download_advanced_numpy_extensions_tutorial.py:
