
support nvidia-docker2 natively in local mode. #426

Merged · 3 commits, merged Oct 13, 2018
1 change: 1 addition & 0 deletions in CHANGELOG.rst

```diff
@@ -6,6 +6,7 @@ CHANGELOG
 =========

 * feature: Local Mode: Add support for Batch Inference
+* enhancement: Local Mode: support nvidia-docker2 natively

 1.11.2
 ======
```
9 changes: 7 additions & 2 deletions in src/sagemaker/local/image.py

```diff
@@ -362,8 +362,8 @@ def _generate_compose_file(self, command, additional_volumes=None, additional_en
         }

         content = {
-            # Some legacy hosts only support the 2.1 format.
-            'version': '2.1',
+            # Use version 2.3 as a minimum so that we can specify the runtime
+            'version': '2.3',
             'services': services,
             'networks': {
                 'sagemaker-local': {'name': 'sagemaker-local'}
@@ -415,6 +415,11 @@ def _create_docker_host(self, host, environment, optml_subdirs, command, volumes
             }
         }

+        # For GPU support, pass in nvidia as the runtime; this is equivalent
+        # to setting --runtime=nvidia on the docker command line.
+        if self.instance_type == 'local_gpu':
+            host_config['runtime'] = 'nvidia'
+
         if command == 'serve':
             serving_port = sagemaker.utils.get_config_value('local.serving_port',
                                                             self.sagemaker_session.config) or 8080
```
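The effect of the two changes can be seen in a minimal standalone sketch. Note this is not the SDK's actual API: `build_host_config` and the `algo-1` service name are hypothetical simplifications of what `_create_docker_host` and `_generate_compose_file` build.

```python
# Sketch of the runtime-injection logic in this PR. The function name and
# config shape are illustrative assumptions, not the real SageMaker SDK API.

def build_host_config(instance_type, image='sagemaker-training:latest'):
    """Build a docker-compose service entry, adding the nvidia runtime
    for local GPU instances (requires nvidia-docker2)."""
    host_config = {
        'image': image,
        'stdin_open': True,
        'tty': True,
    }
    # Equivalent to `docker run --runtime=nvidia ...`
    if instance_type == 'local_gpu':
        host_config['runtime'] = 'nvidia'
    return host_config


content = {
    # Compose file format 2.3 is the minimum that supports the
    # per-service `runtime` key, hence the version bump in this PR.
    'version': '2.3',
    'services': {'algo-1': build_host_config('local_gpu')},
}
print(content['services']['algo-1'].get('runtime'))  # -> nvidia
```

A CPU instance type (e.g. `local`) simply omits the `runtime` key, so the same compose file generation path works with or without a GPU.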