`inference/generativeai/llm-workshop/lab12-hosting-controlnet-models-on-sagemaker/README.md` (15 additions, 1 deletion)
@@ -2,7 +2,7 @@
 ## Overview
 
-In this notebook, we explore how to build generative fill application and host Stable Diffusion/ ControlNet / segment anything models on SageMaker asynchronous endpoint using BYOC (Bring-your-own-container).
+In this notebook, we will explore how to build a generative fill application and host Stable Diffusion / ControlNet / Segment Anything models on a SageMaker asynchronous endpoint using an AWS Deep Learning Container (DLC).
 
 You will find 2 Jupyter Notebooks: 1 for running with Amazon SageMaker Studio and 1 for running with Amazon SageMaker Notebook.
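The revised overview line describes hosting the models on a SageMaker asynchronous endpoint from a DLC image. As a rough, minimal sketch (not taken from the workshop notebooks), a deployment with the SageMaker Python SDK could look like the following; the model artifact path, bucket, endpoint instance type, and framework version are illustrative assumptions, not values from this lab.

```python
# Minimal sketch: deploy a packaged model to a SageMaker asynchronous endpoint
# using a prebuilt PyTorch inference DLC. All S3 paths, the instance type, and
# the framework version below are placeholders, not values from the workshop.
import sagemaker
from sagemaker.model import Model
from sagemaker.predictor import Predictor
from sagemaker.async_inference import AsyncInferenceConfig

session = sagemaker.Session()
role = sagemaker.get_execution_role()

# Look up a prebuilt PyTorch inference DLC image URI for the current region.
image_uri = sagemaker.image_uris.retrieve(
    framework="pytorch",
    region=session.boto_region_name,
    version="2.0.1",
    py_version="py310",
    instance_type="ml.g5.2xlarge",
    image_scope="inference",
)

model = Model(
    image_uri=image_uri,
    model_data="s3://<your-bucket>/model/model.tar.gz",  # placeholder artifact
    role=role,
    sagemaker_session=session,
    predictor_cls=Predictor,
)

# Asynchronous inference queues requests and writes results to S3 instead of
# returning them inline, which suits long-running image-generation calls.
async_config = AsyncInferenceConfig(
    output_path=f"s3://{session.default_bucket()}/async-output/",
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",
    async_inference_config=async_config,
)

# Submit a request stored in S3; the call returns immediately with a handle
# to the S3 location where the result will appear.
response = predictor.predict_async(input_path="s3://<your-bucket>/input/request.json")
print(response.output_path)
```

Because requests are queued and results land in S3, this pattern tolerates the long, large-payload generation calls typical of Stable Diffusion and ControlNet workloads better than a synchronous endpoint would.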
@@ -39,6 +39,14 @@ You will find 2 Jupyter Notebooks: 1 for running with Amazon SageMaker Studio an
 }
 ```
 
+* Tested image, kernel, and instance:
+```
+image: PyTorch 2.0.1 Python 3.10 CPU Optimized
+kernel: Python 3
+instance: ml.m5.4xlarge
+```
+
 2) Running with Amazon SageMaker Notebook
 
 * Permission Policies
@@ -68,6 +76,12 @@ You will find 2 Jupyter Notebooks: 1 for running with Amazon SageMaker Studio an
 }
 ```
 
+* Tested kernel:
+```
+kernel: conda_pytorch_p310
+```
+
 ## Note
 
 1. You may need to adjust IAM roles definition to achieve fine grained access control.
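The note above recommends tightening the IAM role definition. As a purely hypothetical sketch (not part of the workshop), one way to grant only the access the notebook needs is to attach a scoped inline policy to the execution role with boto3; the role name, endpoint name, and bucket are placeholders.

```python
# Hypothetical illustration: attach a narrowly scoped inline policy so the role
# can only invoke the asynchronous endpoint and touch the specific bucket it
# uses. Role name, endpoint name, and bucket are placeholders.
import json
import boto3

iam = boto3.client("iam")

scoped_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["sagemaker:InvokeEndpointAsync"],
            "Resource": "arn:aws:sagemaker:*:*:endpoint/<your-endpoint-name>",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::<your-bucket>/*",
        },
    ],
}

# Attach the policy inline to the execution role used by the notebook.
iam.put_role_policy(
    RoleName="<your-sagemaker-execution-role>",
    PolicyName="fine-grained-endpoint-access",
    PolicyDocument=json.dumps(scoped_policy),
)
```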