Commit 5142851

docs: documentation for v1.4.0
Signed-off-by: Naren Dasan <[email protected]>
1 parent 5b156dc commit 5142851

File tree

324 files changed: 137151 additions, 0 deletions


Large diffs are not rendered by default; additions per file are shown in parentheses (all files below have 0 deletions).

docs/v1.4.0/.nojekyll (whitespace-only changes)
docs/v1.4.0/_cpp_api/classtorch__tensorrt_1_1DataType.html (+888)
docs/v1.4.0/_cpp_api/classtorch__tensorrt_1_1Device_1_1DeviceType.html (+831)
docs/v1.4.0/_cpp_api/classtorch__tensorrt_1_1TensorFormat.html (+863)
docs/v1.4.0/_cpp_api/classtorch__tensorrt_1_1ptq_1_1Int8CacheCalibrator.html (+827)
docs/v1.4.0/_cpp_api/classtorch__tensorrt_1_1ptq_1_1Int8Calibrator.html (+837)
docs/v1.4.0/_cpp_api/define_macros_8h_1a18d295a837ac71add5578860b55e5502.html (+722)
docs/v1.4.0/_cpp_api/define_macros_8h_1a282fd3c0b1c3a215148ae372070e1268.html (+722)
docs/v1.4.0/_cpp_api/define_macros_8h_1a31398a6d4d27e28817afb0f0139e909e.html (+722)
docs/v1.4.0/_cpp_api/define_macros_8h_1a35703561b26b1a9d2738ad7d58b27827.html (+722)
docs/v1.4.0/_cpp_api/define_macros_8h_1abd1465eb38256d3f22cc1426b23d516b.html (+722)
docs/v1.4.0/_cpp_api/define_macros_8h_1abe87b341f562fd1cf40b7672e4d759da.html (+722)
docs/v1.4.0/_cpp_api/define_macros_8h_1ad19939408f7be171a74a89928b36eb59.html (+722)
docs/v1.4.0/_cpp_api/define_macros_8h_1adad592a7b1b7eed529cdf6acd584c883.html (+722)
docs/v1.4.0/_cpp_api/dir_cpp.html (+705)
docs/v1.4.0/_cpp_api/dir_cpp_include.html (+706)
docs/v1.4.0/_cpp_api/dir_cpp_include_torch_tensorrt.html (+709)
docs/v1.4.0/_cpp_api/enum_logging_8h_1a130f65408ad8cbaee060f05e8db69558.html (+760)
docs/v1.4.0/_cpp_api/enum_torch__tensorrt_8h_1a3fbe5d72e4fc624dbd038853079620eb.html (+739)
docs/v1.4.0/_cpp_api/file_cpp_include_torch_tensorrt_logging.h.html (+761)
docs/v1.4.0/_cpp_api/file_cpp_include_torch_tensorrt_macros.h.html (+747)
docs/v1.4.0/_cpp_api/file_cpp_include_torch_tensorrt_ptq.h.html (+758)
docs/v1.4.0/_cpp_api/file_cpp_include_torch_tensorrt_torch_tensorrt.h.html (+773)
docs/v1.4.0/_cpp_api/function_logging_8h_1a0593f776f469c20469e2f729fc7861a3.html (+722)
docs/v1.4.0/_cpp_api/function_logging_8h_1a0c012cb374addd90eb1f42eaec570650.html (+728)
docs/v1.4.0/_cpp_api/function_logging_8h_1a56e110feaaba2c3fd44bd201fd21a76a.html (+728)
docs/v1.4.0/_cpp_api/function_logging_8h_1a7cb50492421ea9de4e3db895819df6f2.html (+728)
docs/v1.4.0/_cpp_api/function_logging_8h_1ac46ac0901cb97e3ae6e93b45f24e90b8.html (+731)
docs/v1.4.0/_cpp_api/function_logging_8h_1ad2efd47b6c3689e58ccc595680579ae5.html (+728)
docs/v1.4.0/_cpp_api/function_logging_8h_1af8f3443813315af7901903d25dd495cc.html (+722)
docs/v1.4.0/_cpp_api/function_ptq_8h_1a226e3c83379d1012cde8578c1c86b16c.html (+737)
docs/v1.4.0/_cpp_api/function_ptq_8h_1a6186e305f47c1d94b6130ef6c7f7e178.html (+743)
docs/v1.4.0/_cpp_api/function_torch__tensorrt_8h_1a5b405fd3bf3c8fc2e2a54cbbab979797.html (+737)
docs/v1.4.0/_cpp_api/function_torch__tensorrt_8h_1a6e19490a08fb1553c9dd347a5ae79db9.html (+737)
docs/v1.4.0/_cpp_api/function_torch__tensorrt_8h_1a81f9783517335dda877d8cfcf38987c9.html (+743)
docs/v1.4.0/_cpp_api/function_torch__tensorrt_8h_1ac4ab8313ae72c2c899ea31548b528528.html (+728)
docs/v1.4.0/_cpp_api/function_torch__tensorrt_8h_1ad1acd06eaeaffbbcf6e7ebf426891384.html (+728)
docs/v1.4.0/_cpp_api/function_torch__tensorrt_8h_1ad6a4ee8ca6c8f6e5519eb1128ec7f4a1.html (+723)
docs/v1.4.0/_cpp_api/function_torch__tensorrt_8h_1ae8d56472106eeef37fbe51ff7f40c9b2.html (+737)
docs/v1.4.0/_cpp_api/namespace_torch.html (+708)
docs/v1.4.0/_cpp_api/namespace_torch_tensorrt.html (+756)
docs/v1.4.0/_cpp_api/namespace_torch_tensorrt__logging.html (+737)
docs/v1.4.0/_cpp_api/namespace_torch_tensorrt__ptq.html (+733)
docs/v1.4.0/_cpp_api/namespace_torch_tensorrt__torchscript.html (+734)
docs/v1.4.0/_cpp_api/program_listing_file_cpp_include_torch_tensorrt_logging.h.html (+737)
docs/v1.4.0/_cpp_api/program_listing_file_cpp_include_torch_tensorrt_macros.h.html (+736)
docs/v1.4.0/_cpp_api/program_listing_file_cpp_include_torch_tensorrt_ptq.h.html (+876)
docs/v1.4.0/_cpp_api/program_listing_file_cpp_include_torch_tensorrt_torch_tensorrt.h.html (+1039)
docs/v1.4.0/_cpp_api/structtorch__tensorrt_1_1Device.html (+879)
docs/v1.4.0/_cpp_api/structtorch__tensorrt_1_1GraphInputs.html (+737)
docs/v1.4.0/_cpp_api/structtorch__tensorrt_1_1Input.html (+1058)
docs/v1.4.0/_cpp_api/structtorch__tensorrt_1_1torchscript_1_1CompileSpec.html (+897)
docs/v1.4.0/_cpp_api/torch_tensort_cpp.html (+1104)
docs/v1.4.0/_cpp_api/unabridged_orphan.html (+794)
Lines changed: 84 additions & 0 deletions
@@ -0,0 +1,84 @@
"""
.. _dynamo_compile_resnet:

Compiling ResNet using the Torch-TensorRT Dynamo Frontend
==========================================================

This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a ResNet model."""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

import torch
import torch_tensorrt
import torchvision.models as models

# %%

# Initialize model with half precision and sample inputs
model = models.resnet18(pretrained=True).half().eval().to("cuda")
inputs = [torch.randn((1, 3, 224, 224)).to("cuda").half()]

# %%
# Optional Input Arguments to `torch_tensorrt.dynamo.compile`
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

# Enabled precision for TensorRT optimization
enabled_precisions = {torch.half}

# Whether to print verbose logs
debug = True

# Workspace size for TensorRT
workspace_size = 20 << 30

# Maximum number of TRT Engines
# (Lower value allows more graph segmentation)
min_block_size = 3

# Operations to Run in Torch, regardless of converter support
torch_executed_ops = {}

# %%
# Compilation with `torch_tensorrt.dynamo.compile`
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

# Build and compile the model with torch_tensorrt.dynamo.compile, using the Torch-TensorRT backend
optimized_model = torch_tensorrt.dynamo.compile(
    model,
    inputs,
    enabled_precisions=enabled_precisions,
    debug=debug,
    workspace_size=workspace_size,
    min_block_size=min_block_size,
    torch_executed_ops=torch_executed_ops,
)

# %%
# Equivalently, we could have run the above via the convenience frontend, like so:
# `torch_tensorrt.compile(model, ir="dynamo_compile", inputs=inputs, ...)`

# %%
# Inference
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

# Does not cause recompilation (same batch size as input)
new_inputs = [torch.randn((1, 3, 224, 224)).half().to("cuda")]
new_outputs = optimized_model(*new_inputs)

# %%

# Does cause recompilation (new batch size)
new_batch_size_inputs = [torch.randn((8, 3, 224, 224)).half().to("cuda")]
new_batch_size_outputs = optimized_model(*new_batch_size_inputs)

# %%
# Cleanup
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

# Finally, we use Torch utilities to clean up the workspace
torch._dynamo.reset()

with torch.no_grad():
    torch.cuda.empty_cache()
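
The script above notes that the same compilation can also be run through the convenience frontend, `torch_tensorrt.compile(model, ir="dynamo_compile", inputs=inputs, ...)`. The following is only a sketch of what that call could look like, assuming the extra keyword arguments shown earlier are forwarded unchanged to the Dynamo path (the exact set of supported keywords may vary by release):

import torch
import torch_tensorrt
import torchvision.models as models

# Same model and sample inputs as in the example above
model = models.resnet18(pretrained=True).half().eval().to("cuda")
inputs = [torch.randn((1, 3, 224, 224)).to("cuda").half()]

# Convenience frontend; ir="dynamo_compile" selects the Dynamo path.
# The remaining keywords are assumed to be forwarded to
# torch_tensorrt.dynamo.compile, mirroring the explicit call above.
optimized_model = torch_tensorrt.compile(
    model,
    ir="dynamo_compile",
    inputs=inputs,
    enabled_precisions={torch.half},
    debug=True,
    workspace_size=20 << 30,
    min_block_size=3,
)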
Lines changed: 151 additions & 0 deletions
@@ -0,0 +1,151 @@
{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "\n\n# Compiling a Transformer using torch.compile and TensorRT\n\nThis interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a transformer-based model.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Imports and Model Definition\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "import torch\nimport torch_tensorrt\nfrom transformers import BertModel"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# Initialize model with float precision and sample inputs\nmodel = BertModel.from_pretrained(\"bert-base-uncased\").eval().to(\"cuda\")\ninputs = [\n    torch.randint(0, 2, (1, 14), dtype=torch.int32).to(\"cuda\"),\n    torch.randint(0, 2, (1, 14), dtype=torch.int32).to(\"cuda\"),\n]"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Optional Input Arguments to `torch_tensorrt.dynamo.compile`\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# Enabled precision for TensorRT optimization\nenabled_precisions = {torch.float}\n\n# Whether to print verbose logs\ndebug = True\n\n# Workspace size for TensorRT\nworkspace_size = 20 << 30\n\n# Maximum number of TRT Engines\n# (Lower value allows more graph segmentation)\nmin_block_size = 3\n\n# Operations to Run in Torch, regardless of converter support\ntorch_executed_ops = {}"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Compilation with `torch_tensorrt.dynamo.compile`\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# Build and compile the model with torch_tensorrt.dynamo.compile, using the Torch-TensorRT backend\noptimized_model = torch_tensorrt.dynamo.compile(\n    model,\n    inputs,\n    enabled_precisions=enabled_precisions,\n    debug=debug,\n    workspace_size=workspace_size,\n    min_block_size=min_block_size,\n    torch_executed_ops=torch_executed_ops,\n)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Equivalently, we could have run the above via the convenience frontend, like so:\n`torch_tensorrt.compile(model, ir=\"dynamo_compile\", inputs=inputs, ...)`\n\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Inference\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# Does not cause recompilation (same batch size as input)\nnew_inputs = [\n    torch.randint(0, 2, (1, 14), dtype=torch.int32).to(\"cuda\"),\n    torch.randint(0, 2, (1, 14), dtype=torch.int32).to(\"cuda\"),\n]\nnew_outputs = optimized_model(*new_inputs)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# Does cause recompilation (new batch size)\nnew_inputs = [\n    torch.randint(0, 2, (4, 14), dtype=torch.int32).to(\"cuda\"),\n    torch.randint(0, 2, (4, 14), dtype=torch.int32).to(\"cuda\"),\n]\nnew_outputs = optimized_model(*new_inputs)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Cleanup\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# Finally, we use Torch utilities to clean up the workspace\ntorch._dynamo.reset()\n\nwith torch.no_grad():\n    torch.cuda.empty_cache()"
      ]
    }
  ],
  "metadata": {
    "kernelspec": {
      "display_name": "Python 3",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.10.6"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
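
Both the script and the notebooks above set `workspace_size = 20 << 30`. As a quick arithmetic check (not part of the committed files), the shift expands to 20 * 2^30 bytes, i.e. a 20 GiB TensorRT workspace:

# 20 << 30 shifts 20 left by 30 bits: 20 * 2**30 bytes = 20 GiB
workspace_size = 20 << 30
assert workspace_size == 20 * 2**30 == 21_474_836_480
print(f"{workspace_size / 2**30:.0f} GiB")  # prints: 20 GiB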
Lines changed: 151 additions & 0 deletions
@@ -0,0 +1,151 @@
{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "\n\n# Compiling ResNet using the Torch-TensorRT Dynamo Frontend\n\nThis interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a ResNet model.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Imports and Model Definition\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "import torch\nimport torch_tensorrt\nimport torchvision.models as models"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# Initialize model with half precision and sample inputs\nmodel = models.resnet18(pretrained=True).half().eval().to(\"cuda\")\ninputs = [torch.randn((1, 3, 224, 224)).to(\"cuda\").half()]"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Optional Input Arguments to `torch_tensorrt.dynamo.compile`\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# Enabled precision for TensorRT optimization\nenabled_precisions = {torch.half}\n\n# Whether to print verbose logs\ndebug = True\n\n# Workspace size for TensorRT\nworkspace_size = 20 << 30\n\n# Maximum number of TRT Engines\n# (Lower value allows more graph segmentation)\nmin_block_size = 3\n\n# Operations to Run in Torch, regardless of converter support\ntorch_executed_ops = {}"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Compilation with `torch_tensorrt.dynamo.compile`\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# Build and compile the model with torch_tensorrt.dynamo.compile, using the Torch-TensorRT backend\noptimized_model = torch_tensorrt.dynamo.compile(\n    model,\n    inputs,\n    enabled_precisions=enabled_precisions,\n    debug=debug,\n    workspace_size=workspace_size,\n    min_block_size=min_block_size,\n    torch_executed_ops=torch_executed_ops,\n)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Equivalently, we could have run the above via the convenience frontend, like so:\n`torch_tensorrt.compile(model, ir=\"dynamo_compile\", inputs=inputs, ...)`\n\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Inference\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# Does not cause recompilation (same batch size as input)\nnew_inputs = [torch.randn((1, 3, 224, 224)).half().to(\"cuda\")]\nnew_outputs = optimized_model(*new_inputs)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# Does cause recompilation (new batch size)\nnew_batch_size_inputs = [torch.randn((8, 3, 224, 224)).half().to(\"cuda\")]\nnew_batch_size_outputs = optimized_model(*new_batch_size_inputs)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Cleanup\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# Finally, we use Torch utilities to clean up the workspace\ntorch._dynamo.reset()\n\nwith torch.no_grad():\n    torch.cuda.empty_cache()"
      ]
    }
  ],
  "metadata": {
    "kernelspec": {
      "display_name": "Python 3",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.10.6"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
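
Neither example compares the compiled module's output against the original model. A small sanity check along the following lines can be useful; this is only a sketch, assuming the `model` and `optimized_model` variables from the ResNet example above are still in scope and using a loose tolerance appropriate for fp16 TensorRT kernels:

import torch

with torch.no_grad():
    sample = [torch.randn((1, 3, 224, 224)).half().to("cuda")]
    ref = model(*sample)            # original PyTorch ResNet
    out = optimized_model(*sample)  # Torch-TensorRT compiled module
    # Some releases return the outputs wrapped in a tuple/list; unwrap if needed.
    out = out[0] if isinstance(out, (list, tuple)) else out
    # fp16 TensorRT kernels will not match PyTorch bit-for-bit; compare loosely.
    max_diff = (ref - out).abs().max().item()
    print(f"max abs difference: {max_diff:.4f}")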
