Commit 4e03b2d

Add image tag to API config docs (#1581)

1 parent 7dc6608 commit 4e03b2d

2 files changed (+14, -8 lines)


docs/deployments/batch-api/api-configuration.md

Lines changed: 7 additions & 4 deletions
````diff
@@ -8,6 +8,7 @@ Reference the section below which corresponds to your Predictor type: [Python](#
 
 ## Python Predictor
 
+<!-- CORTEX_VERSION_BRANCH_STABLE x2 -->
 ```yaml
 - name: <string> # API name (required)
   kind: BatchAPI
@@ -16,7 +17,7 @@ Reference the section below which corresponds to your Predictor type: [Python](#
     path: <string> # path to a python file with a PythonPredictor class definition, relative to the Cortex root (required)
     config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (can be overridden by config passed in job submission) (optional)
     python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-    image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu or quay.io/cortexlabs/python-predictor-gpu based on compute)
+    image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:master or quay.io/cortexlabs/python-predictor-gpu:master based on compute)
     env: <string: string> # dictionary of environment variables
   networking:
     endpoint: <string> # the endpoint for the API (default: <api_name>)
@@ -32,6 +33,7 @@ See additional documentation for [compute](../compute.md), [networking](../netwo
 
 ## TensorFlow Predictor
 
+<!-- CORTEX_VERSION_BRANCH_STABLE x3 -->
 ```yaml
 - name: <string> # API name (required)
   kind: BatchAPI
@@ -50,8 +52,8 @@ See additional documentation for [compute](../compute.md), [networking](../netwo
     batch_interval: <duration> # the maximum amount of time to spend waiting for additional requests before running inference on the batch of requests
     config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (can be overridden by config passed in job submission) (optional)
     python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-    image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor)
-    tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-gpu or quay.io/cortexlabs/tensorflow-serving-cpu based on compute)
+    image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:master)
+    tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-gpu:master or quay.io/cortexlabs/tensorflow-serving-cpu:master based on compute)
     env: <string: string> # dictionary of environment variables
   networking:
     endpoint: <string> # the endpoint for the API (default: <api_name>)
@@ -67,6 +69,7 @@ See additional documentation for [compute](../compute.md), [networking](../netwo
 
 ## ONNX Predictor
 
+<!-- CORTEX_VERSION_BRANCH_STABLE x2 -->
 ```yaml
 - name: <string> # API name (required)
   kind: BatchAPI
@@ -81,7 +84,7 @@ See additional documentation for [compute](../compute.md), [networking](../netwo
     ...
     config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (can be overridden by config passed in job submission) (optional)
     python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-    image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-gpu or quay.io/cortexlabs/onnx-predictor-cpu based on compute)
+    image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-gpu:master or quay.io/cortexlabs/onnx-predictor-cpu:master based on compute)
     env: <string: string> # dictionary of environment variables
   networking:
     endpoint: <string> # the endpoint for the API (default: <api_name>)
````
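To illustrate the `image` field this commit documents, here is a minimal sketch of a BatchAPI spec that pins the Predictor image to an explicit tag instead of relying on the default. The API name and predictor path are hypothetical, and the field nesting under a `predictor` block follows Cortex's usual API spec layout:

```yaml
# cortex.yaml -- hypothetical minimal BatchAPI spec pinning the Predictor image tag
- name: my-batch-api                  # hypothetical API name
  kind: BatchAPI
  predictor:
    type: python
    path: predictor.py                # hypothetical path to the PythonPredictor definition
    image: quay.io/cortexlabs/python-predictor-cpu:master  # explicit tag rather than the default
```

Pinning the tag keeps the cluster from silently picking up a newer default image when the docs (and defaults) move to a new release.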

docs/deployments/realtime-api/api-configuration.md

Lines changed: 7 additions & 4 deletions
````diff
@@ -8,6 +8,7 @@ Reference the section below which corresponds to your Predictor type: [Python](#
 
 ## Python Predictor
 
+<!-- CORTEX_VERSION_BRANCH_STABLE x2 -->
 ```yaml
 - name: <string> # API name (required)
   kind: RealtimeAPI
@@ -27,7 +28,7 @@ Reference the section below which corresponds to your Predictor type: [Python](#
     threads_per_process: <int> # the number of threads per process (default: 1)
     config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (optional)
     python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-    image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu or quay.io/cortexlabs/python-predictor-gpu based on compute)
+    image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:master or quay.io/cortexlabs/python-predictor-gpu:master based on compute)
     env: <string: string> # dictionary of environment variables
   networking:
     endpoint: <string> # the endpoint for the API (aws only) (default: <api_name>)
@@ -63,6 +64,7 @@ See additional documentation for [models](models.md), [parallelism](parallelism.
 
 ## TensorFlow Predictor
 
+<!-- CORTEX_VERSION_BRANCH_STABLE x3 -->
 ```yaml
 - name: <string> # API name (required)
   kind: RealtimeAPI
@@ -88,8 +90,8 @@ See additional documentation for [models](models.md), [parallelism](parallelism.
     threads_per_process: <int> # the number of threads per process (default: 1)
     config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (optional)
     python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-    image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor)
-    tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-gpu or quay.io/cortexlabs/tensorflow-serving-cpu based on compute)
+    image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:master)
+    tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-gpu:master or quay.io/cortexlabs/tensorflow-serving-cpu:master based on compute)
     env: <string: string> # dictionary of environment variables
   networking:
     endpoint: <string> # the endpoint for the API (aws only) (default: <api_name>)
@@ -125,6 +127,7 @@ See additional documentation for [models](models.md), [parallelism](parallelism.
 
 ## ONNX Predictor
 
+<!-- CORTEX_VERSION_BRANCH_STABLE x2 -->
 ```yaml
 - name: <string> # API name (required)
   kind: RealtimeAPI
@@ -145,7 +148,7 @@ See additional documentation for [models](models.md), [parallelism](parallelism.
     threads_per_process: <int> # the number of threads per process (default: 1)
     config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (optional)
    python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-    image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-gpu or quay.io/cortexlabs/onnx-predictor-cpu based on compute)
+    image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-gpu:master or quay.io/cortexlabs/onnx-predictor-cpu:master based on compute)
     env: <string: string> # dictionary of environment variables
   networking:
     endpoint: <string> # the endpoint for the API (aws only) (default: <api_name>)
````
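For the TensorFlow Predictor, two images are involved and both now carry explicit default tags. A hypothetical sketch of a RealtimeAPI spec pinning both (the API name and predictor path are invented, and the model configuration is elided since its exact fields are not shown in this diff):

```yaml
# cortex.yaml -- hypothetical RealtimeAPI spec pinning both TensorFlow images
- name: my-realtime-api               # hypothetical API name
  kind: RealtimeAPI
  predictor:
    type: tensorflow
    path: predictor.py                # hypothetical path to the TensorFlowPredictor definition
    # ... (model configuration omitted)
    image: quay.io/cortexlabs/tensorflow-predictor:master
    tensorflow_serving_image: quay.io/cortexlabs/tensorflow-serving-cpu:master
```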
