Accept class definition from Python Client directly #1587


Merged · 12 commits · Nov 24, 2020
11 changes: 1 addition & 10 deletions dev/generate_python_client_md.sh
@@ -52,7 +52,7 @@ sed -i "s/# cortex.client.Client/# cortex.client.Client\n/g" $ROOT/docs/miscella
sed -i "s/](#cortex\./](#/g" $ROOT/docs/miscellaneous/python-client.md
sed -i "s/](#client\.Client\./](#/g" $ROOT/docs/miscellaneous/python-client.md

-# indentdation
+# indentation
sed -i "s/ \* / \* /g" $ROOT/docs/miscellaneous/python-client.md
sed -i "s/#### /## /g" $ROOT/docs/miscellaneous/python-client.md

@@ -61,12 +61,3 @@ sed -i 's/[[:space:]]*$//' $ROOT/docs/miscellaneous/python-client.md
truncate -s -1 $ROOT/docs/miscellaneous/python-client.md

pip3 uninstall -y cortex

-cat << EOF
-
-#### MANUAL EDITS REQUIRED ####
-
-- Copy the docstring for \`client(env: str)\` in pkg/workloads/cortex/client/__init__.py into the generated docs and unindent
-
-Then check the diff
-EOF
15 changes: 14 additions & 1 deletion docs/cluster-management/install.md
@@ -56,7 +56,20 @@ import requests
local_client = cortex.client("local")

# deploy the model as a realtime api and wait for it to become active
deployments = local_client.deploy("./cortex.yaml", wait=True)

api_spec={
"name": "iris-classifier",
"kind": "RealtimeAPI",
"predictor": {
"type": "python",
"path": "predictor.py",
"config": {
"model": "s3://cortex-examples/pytorch/iris-classifier/weights.pth"
}
}
}

deployments = local_client.deploy(api_spec, project_dir=".", wait=True)

# get the api's endpoint
url = deployments[0]["api"]["endpoint"]
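For context, the `"path": "predictor.py"` entry in the spec above points at a file in the project directory that implements Cortex's Python predictor interface. A minimal sketch of such a file, assuming the standard `__init__`/`predict` interface (illustrative, not the actual cortex-examples implementation):

```python
# predictor.py -- illustrative sketch of a Cortex Python predictor
class PythonPredictor:
    def __init__(self, config):
        # config is the "config" dict from the api spec; a real
        # implementation would download and load the model weights here
        self.model_path = config["model"]

    def predict(self, payload):
        # payload is the parsed request body
        return {"model": self.model_path, "input": payload}
```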
46 changes: 12 additions & 34 deletions docs/miscellaneous/python-client.md
@@ -29,37 +29,6 @@ client(env: str)

Initialize a client based on the specified environment.

-To deploy and manage APIs on a new cluster:
-
-1. Spin up a cluster using the CLI command `cortex cluster up`.
-   An environment named "aws" will be created once the cluster is ready.
-2. Initialize your client:
-
-```python
-import cortex
-c = cortex.client("aws")
-c.deploy("./cortex.yaml")
-```
-
-To deploy and manage APIs on an existing cluster:
-
-1. Use the command `cortex cluster info` to get the Operator Endpoint.
-2. Configure a client to your cluster:
-
-```python
-import cortex
-c = cortex.cluster_client("aws", operator_endpoint, aws_access_key_id, aws_secret_access_key)
-c.deploy("./cortex.yaml")
-```
-
-To deploy and manage APIs locally:
-
-```python
-import cortex
-c = cortex.client("local")
-c.deploy("./cortex.yaml")
-```

**Arguments**:

- `env` - Name of the environment to use.
@@ -136,14 +105,23 @@ Delete an environment configured on this machine.
## deploy

```python
-| deploy(config_file: str, force: bool = False, wait: bool = False) -> list
+| deploy(api_spec: dict, predictor=None, pip_dependencies=[], conda_dependencies=[], project_dir: Optional[str] = None, force: bool = False, wait: bool = False) -> list
```

-Deploy or update APIs specified in the config_file.
+Deploy an API.

**Arguments**:

-- `config_file` - Local path to a yaml file defining Cortex APIs.
+- `api_spec` - A dictionary defining a single Cortex API. Schema can be found here:
+  → Realtime API: https://docs.cortex.dev/v/master/deployments/realtime-api/api-configuration
+  → Batch API: https://docs.cortex.dev/v/master/deployments/batch-api/api-configuration
+  → Traffic Splitter: https://docs.cortex.dev/v/master/deployments/realtime-api/traffic-splitter
+- `predictor` - A Cortex Predictor class implementation. Not required when deploying a traffic splitter.
+  → Realtime API: https://docs.cortex.dev/v/master/deployments/realtime-api/predictors
+  → Batch API: https://docs.cortex.dev/v/master/deployments/batch-api/predictors
+- `pip_dependencies` - A list of PyPI dependencies that will be installed before running your predictor class.
+- `conda_dependencies` - A list of Conda dependencies that will be installed before running your predictor class.
+- `project_dir` - Path to a python project.
- `force` - Override any in-progress api updates.
- `wait` - Streams logs until the APIs are ready.

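Putting the new signature together, deploying without a project directory by passing the class itself could look like this (a minimal sketch against a local environment; the API name and config are illustrative):

```python
import cortex

class PythonPredictor:
    def __init__(self, config):
        self.greeting = config.get("greeting", "hello")

    def predict(self, payload):
        return {"message": self.greeting, "input": payload}

api_spec = {
    "name": "hello-api",
    "kind": "RealtimeAPI",
    "predictor": {"config": {"greeting": "hi"}},
}

c = cortex.client("local")
# deploy() pickles the class with dill and fills in predictor.path
# and predictor.type in the spec before submitting it
deployments = c.deploy(api_spec, predictor=PythonPredictor, wait=True)
```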
2 changes: 1 addition & 1 deletion pkg/workloads/cortex/client/README.md
@@ -51,7 +51,7 @@ import requests
local_client = cortex.client("local")

# deploy the model as a realtime api and wait for it to become active
deployments = local_client.deploy("./cortex.yaml", wait=True)
deployments = local_client.deploy_project(config_file="./cortex.yaml", wait=True)

# get the api's endpoint
url = deployments[0]["api"]["endpoint"]
31 changes: 0 additions & 31 deletions pkg/workloads/cortex/client/cortex/__init__.py
@@ -24,37 +24,6 @@ def client(env: str):
"""
Initialize a client based on the specified environment.

-To deploy and manage APIs on a new cluster:
-
-1. Spin up a cluster using the CLI command `cortex cluster up`.
-   An environment named "aws" will be created once the cluster is ready.
-2. Initialize your client:
-
-```python
-import cortex
-c = cortex.client("aws")
-c.deploy("./cortex.yaml")
-```
-
-To deploy and manage APIs on an existing cluster:
-
-1. Use the command `cortex cluster info` to get the Operator Endpoint.
-2. Configure a client to your cluster:
-
-```python
-import cortex
-c = cortex.cluster_client("aws", operator_endpoint, aws_access_key_id, aws_secret_access_key)
-c.deploy("./cortex.yaml")
-```
-
-To deploy and manage APIs locally:
-
-```python
-import cortex
-c = cortex.client("local")
-c.deploy("./cortex.yaml")
-```

Args:
env: Name of the environment to use.

106 changes: 106 additions & 0 deletions pkg/workloads/cortex/client/cortex/client.py
@@ -18,9 +18,15 @@
import sys
import subprocess
import threading
import yaml
import uuid
import dill
import inspect
from pathlib import Path

from typing import List, Dict, Optional, Tuple, Callable, Union
from cortex.binary import run_cli, get_cli_path
from cortex import util


class Client:
@@ -33,7 +39,107 @@ def __init__(self, env: str):
"""
self.env = env

# CORTEX_VERSION_MINOR x5
def deploy(
self,
api_spec: dict,
predictor=None,
pip_dependencies=[],
conda_dependencies=[],
project_dir: Optional[str] = None,
force: bool = False,
wait: bool = False,
) -> list:
"""
Deploy an API.

Args:
api_spec: A dictionary defining a single Cortex API. Schema can be found here:
→ Realtime API: https://docs.cortex.dev/v/master/deployments/realtime-api/api-configuration
→ Batch API: https://docs.cortex.dev/v/master/deployments/batch-api/api-configuration
→ Traffic Splitter: https://docs.cortex.dev/v/master/deployments/realtime-api/traffic-splitter
predictor: A Cortex Predictor class implementation. Not required when deploying a traffic splitter.
→ Realtime API: https://docs.cortex.dev/v/master/deployments/realtime-api/predictors
→ Batch API: https://docs.cortex.dev/v/master/deployments/batch-api/predictors
pip_dependencies: A list of PyPI dependencies that will be installed before the predictor class implementation is invoked.
conda_dependencies: A list of Conda dependencies that will be installed before the predictor class implementation is invoked.
project_dir: Path to a python project.
force: Override any in-progress api updates.
wait: Streams logs until the APIs are ready.

Returns:
Deployment status, API specification, and endpoint for each API.
"""

if project_dir is not None and predictor is not None:
raise ValueError(
"`predictor` and `project_dir` parameters cannot be specified at the same time, please choose one"
)

if project_dir is not None:
cortex_yaml_path = os.path.join(project_dir, f".cortex-{uuid.uuid4()}.yaml")

with util.open_temporarily(cortex_yaml_path, "w") as f:
yaml.dump([api_spec], f) # write a list
return self._deploy(cortex_yaml_path, force, wait)

project_dir = Path.home() / ".cortex" / "deployments" / str(uuid.uuid4())
with util.open_tempdir(str(project_dir)):
cortex_yaml_path = os.path.join(project_dir, "cortex.yaml")

if predictor is None:
# for deploying a traffic splitter
with open(cortex_yaml_path, "w") as f:
yaml.dump([api_spec], f) # write a list
return self._deploy(cortex_yaml_path, force=force, wait=wait)

# Change if PYTHONVERSION changes
expected_version = "3.6"
actual_version = f"{sys.version_info.major}.{sys.version_info.minor}"
if actual_version < expected_version:
raise Exception("cortex is only supported for python versions >= 3.6") # unexpected
if actual_version > expected_version:
is_python_set = any(
conda_dep.startswith("python=") or "::python=" in conda_dep
for conda_dep in conda_dependencies
)

if not is_python_set:
conda_dependencies = [
f"conda-forge::python={sys.version_info.major}.{sys.version_info.minor}.{sys.version_info.micro}"
] + conda_dependencies

if len(pip_dependencies) > 0:
with open(project_dir / "requirements.txt", "w") as requirements_file:
requirements_file.write("\n".join(pip_dependencies))

if len(conda_dependencies) > 0:
with open(project_dir / "conda-packages.txt", "w") as conda_file:
conda_file.write("\n".join(conda_dependencies))

if not inspect.isclass(predictor):
raise ValueError("predictor parameter must be a class definition")

with open(project_dir / "predictor.pickle", "wb") as pickle_file:
dill.dump(predictor, pickle_file)
if api_spec.get("predictor") is None:
api_spec["predictor"] = {}

if predictor.__name__ == "PythonPredictor":
predictor_type = "python"
if predictor.__name__ == "TensorFlowPredictor":
predictor_type = "tensorflow"
if predictor.__name__ == "ONNXPredictor":
predictor_type = "onnx"

api_spec["predictor"]["path"] = "predictor.pickle"
api_spec["predictor"]["type"] = predictor_type

with open(cortex_yaml_path, "w") as f:
yaml.dump([api_spec], f) # write a list
return self._deploy(cortex_yaml_path, force=force, wait=wait)

def _deploy(
self,
config_file: str,
force: bool = False,
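The piece that makes this work is that `dill`, unlike the standard `pickle` module, can serialize a class defined in the calling script by value, so the serving side can rebuild it without access to the client's source tree. A standalone sketch of that round trip (separate from the PR's code, runnable on its own):

```python
import dill

class PythonPredictor:
    def __init__(self, config):
        self.prefix = config["prefix"]

    def predict(self, payload):
        return self.prefix + str(payload)

# serialize the class definition itself, as deploy() does
with open("predictor.pickle", "wb") as f:
    dill.dump(PythonPredictor, f)

# later, potentially in a different process, rebuild and use the class
with open("predictor.pickle", "rb") as f:
    LoadedPredictor = dill.load(f)

print(LoadedPredictor({"prefix": ">> "}).predict("hello"))  # prints ">> hello"
```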
39 changes: 39 additions & 0 deletions pkg/workloads/cortex/client/cortex/util.py
@@ -0,0 +1,39 @@
# Copyright 2020 Cortex Labs, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from contextlib import contextmanager
import os
from pathlib import Path
import shutil


@contextmanager
def open_temporarily(path, mode):
    file = open(path, mode)

    try:
        yield file
    finally:
        file.close()
        os.remove(path)


@contextmanager
def open_tempdir(dir_path):
    Path(dir_path).mkdir(parents=True)

    try:
        yield dir_path
    finally:
        shutil.rmtree(dir_path)
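For reference, a usage sketch of these two helpers (paths are illustrative): `open_temporarily` guarantees the file is closed and deleted even if the body raises, and `open_tempdir` creates the directory on entry and removes it recursively on exit.

```python
from cortex import util

# the file exists only for the duration of the block
with util.open_temporarily("scratch-cortex.yaml", "w") as f:
    f.write("- name: example\n")

# the directory (and anything written into it) is gone after the block
with util.open_tempdir("scratch-deployment") as dir_path:
    print(dir_path)
```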
8 changes: 7 additions & 1 deletion pkg/workloads/cortex/client/setup.py
@@ -94,7 +94,13 @@ def run(self):
"cortex = cortex.binary:run",
],
},
-    install_requires=(["importlib-resources; python_version < '3.7'"]),
+    install_requires=(
+        [
+            "importlib-resources; python_version < '3.7'",
+            "pyyaml>=5.3.0",
+            "dill==0.3.2",  # lines up with dill package version used in cortex serving code
+        ]
+    ),
python_requires=">=3.6",
cmdclass={
"install": InstallBinary,