Here are the complete [Predictor docs](../../../docs/deployments/realtime-api/predictors.md).
## Specify Python dependencies
Create a `requirements.txt` file to specify the dependencies needed by `predictor.py`. Cortex will automatically install them into your runtime once you deploy:
```text
torch
transformers==3.0.*
```
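
A minimal `predictor.py` that uses these dependencies might look like the following sketch (based on the `PythonPredictor` interface described in the Predictor docs above; the GPT-2 model choice and the `max_length` value are illustrative assumptions, not necessarily the exact implementation):

```python
# predictor.py -- a minimal sketch, not necessarily the exact implementation

import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel


class PythonPredictor:
    def __init__(self, config):
        # run on a GPU when one is available, otherwise fall back to CPU
        self.device = "cuda" if torch.cuda.is_available() else "cpu"
        self.tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
        self.model = GPT2LMHeadModel.from_pretrained("gpt2").to(self.device)

    def predict(self, payload):
        # payload is the parsed JSON request body, e.g. {"text": "machine learning is"}
        tokens = self.tokenizer.encode(payload["text"], return_tensors="pt").to(self.device)
        prediction = self.model.generate(tokens, max_length=50, do_sample=True)  # max_length is illustrative
        return self.tokenizer.decode(prediction[0])
```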
## Deploy your model locally
You can create APIs from any Python runtime that has access to Docker (e.g. the Python shell or a Jupyter notebook):
```python
import cortex

cx_local = cortex.client("local")

api_spec = {
    "name": "text-generator",
    "kind": "RealtimeAPI",
    "predictor": {
        "type": "python",
        "path": "predictor.py"
    }
}

cx_local.deploy(api_spec, project_dir=".")
```
Monitor the status of your API using `cortex get`:
```bash
$ cortex get --watch
```

Show additional information for your API (e.g. its endpoint) using `cortex get <api_name>`:

```bash
$ cortex get text-generator

status   last update   avg request   2XX
live     1m            -             -

endpoint: http://localhost:8889
```
You can also stream logs from your API:
```bash
$ cortex logs text-generator

...
```
Once your API is live, use `curl` to test your API (it will take a few seconds to generate the text):

```bash
$ curl http://localhost:8889 \
    -X POST -H "Content-Type: application/json" \
    -d '{"text": "machine learning is"}'

"machine learning is ..."
```
## Deploy your model to AWS
Cortex can automatically provision infrastructure on your AWS account and deploy your models as production-ready web services:
```bash
$ cortex cluster up
```
This creates a Cortex cluster in your AWS account, which will take approximately 15 minutes. After your cluster is created, you can deploy to your cluster by using the same code and configuration as before:
```python
import cortex

cx_aws = cortex.client("aws")

api_spec = {
    "name": "text-generator",
    "kind": "RealtimeAPI",
    "predictor": {
        "type": "python",
        "path": "predictor.py"
    }
}

cx_aws.deploy(api_spec, project_dir=".")
```
Monitor the status of your APIs using `cortex get` from your CLI:
```bash
$ cortex get --watch
```

Show additional information for your API (e.g. its endpoint) using `cortex get <api_name>`:

```bash
$ cortex get text-generator --env aws

status   up-to-date   requested   last update   avg request   2XX
...
```
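
You can also exercise the deployed API from Python; here is a minimal sketch using the `requests` package (the `endpoint` value below is a hypothetical placeholder; substitute the endpoint reported by `cortex get text-generator --env aws`):

```python
import requests

# hypothetical placeholder -- use the endpoint reported by
# `cortex get text-generator --env aws`
endpoint = "<your-api-endpoint>"

response = requests.post(endpoint, json={"text": "machine learning is"})
print(response.text)
```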
When you make a change to your `predictor.py` or your API configuration, you can update your API by re-deploying.
## Run on GPUs
If your Cortex cluster is using GPU instances (configured during cluster creation), or if you are running locally with an NVIDIA GPU, you can run your text generator API on GPUs. Add the `compute` field to your API configuration and re-deploy:
```python
api_spec = {
    "name": "text-generator",
    "kind": "RealtimeAPI",
    "predictor": {
        "type": "python",
        "path": "predictor.py"
    },
    "compute": {
        "gpu": 1
    }
}

cx_aws.deploy(api_spec, project_dir=".")
```
As your new API is initializing, the old API will continue to respond to prediction requests. Once the API's status becomes "live" (with one up-to-date replica), traffic will be routed to the updated version. You can track the status of your API using `cortex get`:
```bash
$ cortex get --env aws --watch

realtime api     status     up-to-date   stale   requested   last update   avg request   2XX
text-generator   updating   0            1       1           29s           -             -
```

### A note about rolling updates in dev environments

In development environments, you may wish to disable rolling updates, since rolling updates require additional cluster resources. For example, a rolling update of a GPU-based API will require at least two GPUs, which can require a new instance to spin up if your cluster only has one instance. To disable rolling updates, set `max_surge` to 0 in the `update_strategy` configuration of your API spec:

```python
api_spec = {
    "name": "text-generator",
    "kind": "RealtimeAPI",
    "predictor": {
        "type": "python",
        "path": "predictor.py"
    },
    "compute": {
        "gpu": 1
    },
    "update_strategy": {
        "max_surge": 0
    }
}
```
## Cleanup
Deleting APIs will free up cluster resources and allow Cortex to scale down to the minimum number of instances you specified during cluster creation:
```python
cx_local.delete_api("text-generator")

cx_aws.delete_api("text-generator")
```

Deleting an API will not spin down your cluster.