
Commit c6c8169

feat(notebooks): update the api
#### notebooks:v1

The following keys were added:

- resources.projects.resources.locations.resources.runtimes.methods.getIamPolicy (Total Keys: 14)
- resources.projects.resources.locations.resources.runtimes.methods.setIamPolicy (Total Keys: 12)
- resources.projects.resources.locations.resources.runtimes.methods.testIamPermissions (Total Keys: 12)
- schemas.ExecutionTemplate.properties.kernelSpec.type (Total Keys: 1)
- schemas.VertexAIParameters.properties.env (Total Keys: 2)
1 parent 05f7497 commit c6c8169
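The three runtime IAM methods added here expose the standard Cloud IAM surface on notebook runtimes. A minimal sketch of calling two of them through the discovery-based Python client; the project, location, and runtime names are placeholders, Application Default Credentials are assumed, and the permission string is illustrative:

```python
from googleapiclient.discovery import build

# Build the Notebooks v1 client (uses Application Default Credentials).
service = build("notebooks", "v1")

# Placeholder resource name; substitute a real project, location, and runtime.
resource = "projects/my-project/locations/us-central1/runtimes/my-runtime"

# getIamPolicy: fetch the current IAM policy on the runtime.
policy = (
    service.projects().locations().runtimes()
    .getIamPolicy(resource=resource)
    .execute()
)

# testIamPermissions: ask which of the listed permissions the caller holds.
perms = (
    service.projects().locations().runtimes()
    .testIamPermissions(
        resource=resource,
        body={"permissions": ["notebooks.runtimes.get"]},
    )
    .execute()
)
```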

File tree

5 files changed: +357, -85 lines changed


docs/dyn/notebooks_v1.projects.locations.executions.html

Lines changed: 26 additions & 14 deletions
@@ -118,20 +118,24 @@ <h3>Method Details</h3>
   },
   "containerImageUri": "A String", # Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
   "dataprocParameters": { # Parameters used in Dataproc JobType executions. # Parameters used in Dataproc JobType executions.
-    "cluster": "A String", # URI for cluster used to run Dataproc execution. Format: 'projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
+    "cluster": "A String", # URI for cluster used to run Dataproc execution. Format: `projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}`
   },
-  "inputNotebookFile": "A String", # Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
+  "inputNotebookFile": "A String", # Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: `gs://{bucket_name}/{folder}/{notebook_file_name}` Ex: `gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb`
   "jobType": "A String", # The type of Job to be used on this execution.
+  "kernelSpec": "A String", # Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
   "labels": { # Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
     "a_key": "A String",
   },
   "masterType": "A String", # Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when `scaleTier` is set to `CUSTOM`. You can use certain Compute Engine machine types directly in this field. The following types are supported: - `n1-standard-4` - `n1-standard-8` - `n1-standard-16` - `n1-standard-32` - `n1-standard-64` - `n1-standard-96` - `n1-highmem-2` - `n1-highmem-4` - `n1-highmem-8` - `n1-highmem-16` - `n1-highmem-32` - `n1-highmem-64` - `n1-highmem-96` - `n1-highcpu-16` - `n1-highcpu-32` - `n1-highcpu-64` - `n1-highcpu-96` Alternatively, you can use the following legacy machine types: - `standard` - `large_model` - `complex_model_s` - `complex_model_m` - `complex_model_l` - `standard_gpu` - `complex_model_m_gpu` - `complex_model_l_gpu` - `standard_p100` - `complex_model_m_p100` - `standard_v100` - `large_model_v100` - `complex_model_m_v100` - `complex_model_l_v100` Finally, if you want to use a TPU for training, specify `cloud_tpu` in this field. Learn more about the [special configuration options for training with TPU](https://cloud.google.com/ai-platform/training/docs/using-tpus#configuring_a_custom_tpu_machine).
-  "outputNotebookFolder": "A String", # Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
+  "outputNotebookFolder": "A String", # Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: `gs://{bucket_name}/{folder}` Ex: `gs://notebook_user/scheduled_notebooks`
   "parameters": "A String", # Parameters used within the 'input_notebook_file' notebook.
-  "paramsYamlFile": "A String", # Parameters to be overridden in the notebook during execution. Ref https://papermill.readthedocs.io/en/latest/usage-parameterize.html on how to specifying parameters in the input notebook and pass them here in an YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
+  "paramsYamlFile": "A String", # Parameters to be overridden in the notebook during execution. Ref https://papermill.readthedocs.io/en/latest/usage-parameterize.html on how to specifying parameters in the input notebook and pass them here in an YAML file. Ex: `gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml`
   "scaleTier": "A String", # Required. Scale tier of the hardware used for notebook execution. DEPRECATED Will be discontinued. As right now only CUSTOM is supported.
   "serviceAccount": "A String", # The email address of a service account to use when running the execution. You must have the `iam.serviceAccounts.actAs` permission for the specified service account.
   "vertexAiParameters": { # Parameters used in Vertex AI JobType executions. # Parameters used in Vertex AI JobType executions.
+    "env": { # Environment variables. At most 100 environment variables can be specified and unique. Example: GCP_BUCKET=gs://my-bucket/samples/
+      "a_key": "A String",
+    },
     "network": "A String", # The full name of the Compute Engine [network](/compute/docs/networks-and-firewalls#networks) to which the Job should be peered. For example, `projects/12345/global/networks/myVPC`. [Format](https://cloud.google.com/compute/docs/reference/rest/v1/networks/insert) is of the form `projects/{project}/global/networks/{network}`. Where {project} is a project number, as in `12345`, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
   },
 },
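The hunk above documents the two new `ExecutionTemplate` fields, `kernelSpec` and `vertexAiParameters.env`. A hedged sketch of an execution create request that sets both, reusing the `service` client from the earlier sketch; all resource names and values are placeholders:

```python
# Sketch of an Execution body using the new fields; values are placeholders.
body = {
    "description": "Nightly sentiment run",
    "executionTemplate": {
        "scaleTier": "CUSTOM",
        "masterType": "n1-standard-4",
        "inputNotebookFile": "gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb",
        "outputNotebookFolder": "gs://notebook_user/scheduled_notebooks",
        # New: override the kernel when the execution target's kernel spec
        # name differs from the one recorded in the notebook file.
        "kernelSpec": "python3",
        "vertexAiParameters": {
            # New: up to 100 unique environment variables for the job.
            "env": {"GCP_BUCKET": "gs://my-bucket/samples/"},
        },
    },
}

operation = (
    service.projects().locations().executions()
    .create(
        parent="projects/my-project/locations/us-central1",
        executionId="sentiment-run-1",
        body=body,
    )
    .execute()
)
```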
@@ -232,20 +236,24 @@ <h3>Method Details</h3>
   },
   "containerImageUri": "A String", # Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
   "dataprocParameters": { # Parameters used in Dataproc JobType executions. # Parameters used in Dataproc JobType executions.
-    "cluster": "A String", # URI for cluster used to run Dataproc execution. Format: 'projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
+    "cluster": "A String", # URI for cluster used to run Dataproc execution. Format: `projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}`
   },
-  "inputNotebookFile": "A String", # Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
+  "inputNotebookFile": "A String", # Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: `gs://{bucket_name}/{folder}/{notebook_file_name}` Ex: `gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb`
   "jobType": "A String", # The type of Job to be used on this execution.
+  "kernelSpec": "A String", # Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
   "labels": { # Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
     "a_key": "A String",
   },
   "masterType": "A String", # Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when `scaleTier` is set to `CUSTOM`. You can use certain Compute Engine machine types directly in this field. The following types are supported: - `n1-standard-4` - `n1-standard-8` - `n1-standard-16` - `n1-standard-32` - `n1-standard-64` - `n1-standard-96` - `n1-highmem-2` - `n1-highmem-4` - `n1-highmem-8` - `n1-highmem-16` - `n1-highmem-32` - `n1-highmem-64` - `n1-highmem-96` - `n1-highcpu-16` - `n1-highcpu-32` - `n1-highcpu-64` - `n1-highcpu-96` Alternatively, you can use the following legacy machine types: - `standard` - `large_model` - `complex_model_s` - `complex_model_m` - `complex_model_l` - `standard_gpu` - `complex_model_m_gpu` - `complex_model_l_gpu` - `standard_p100` - `complex_model_m_p100` - `standard_v100` - `large_model_v100` - `complex_model_m_v100` - `complex_model_l_v100` Finally, if you want to use a TPU for training, specify `cloud_tpu` in this field. Learn more about the [special configuration options for training with TPU](https://cloud.google.com/ai-platform/training/docs/using-tpus#configuring_a_custom_tpu_machine).
-  "outputNotebookFolder": "A String", # Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
+  "outputNotebookFolder": "A String", # Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: `gs://{bucket_name}/{folder}` Ex: `gs://notebook_user/scheduled_notebooks`
   "parameters": "A String", # Parameters used within the 'input_notebook_file' notebook.
-  "paramsYamlFile": "A String", # Parameters to be overridden in the notebook during execution. Ref https://papermill.readthedocs.io/en/latest/usage-parameterize.html on how to specifying parameters in the input notebook and pass them here in an YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
+  "paramsYamlFile": "A String", # Parameters to be overridden in the notebook during execution. Ref https://papermill.readthedocs.io/en/latest/usage-parameterize.html on how to specifying parameters in the input notebook and pass them here in an YAML file. Ex: `gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml`
   "scaleTier": "A String", # Required. Scale tier of the hardware used for notebook execution. DEPRECATED Will be discontinued. As right now only CUSTOM is supported.
   "serviceAccount": "A String", # The email address of a service account to use when running the execution. You must have the `iam.serviceAccounts.actAs` permission for the specified service account.
   "vertexAiParameters": { # Parameters used in Vertex AI JobType executions. # Parameters used in Vertex AI JobType executions.
+    "env": { # Environment variables. At most 100 environment variables can be specified and unique. Example: GCP_BUCKET=gs://my-bucket/samples/
+      "a_key": "A String",
+    },
     "network": "A String", # The full name of the Compute Engine [network](/compute/docs/networks-and-firewalls#networks) to which the Job should be peered. For example, `projects/12345/global/networks/myVPC`. [Format](https://cloud.google.com/compute/docs/reference/rest/v1/networks/insert) is of the form `projects/{project}/global/networks/{network}`. Where {project} is a project number, as in `12345`, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
   },
 },
@@ -263,7 +271,7 @@ <h3>Method Details</h3>

 Args:
   parent: string, Required. Format: `parent=projects/{project_id}/locations/{location}` (required)
-  filter: string, Filter applied to resulting executions. Currently only supports filtering executions by a specified schedule_id. Format: "schedule_id="
+  filter: string, Filter applied to resulting executions. Currently only supports filtering executions by a specified schedule_id. Format: `schedule_id=`
   orderBy: string, Sort by field.
   pageSize: integer, Maximum return size of the list call.
   pageToken: string, A previous returned page token that can be used to continue listing from the last result.
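As the `filter` doc above notes, only `schedule_id` filtering is currently supported. A small sketch of the list call, again reusing the `service` client with placeholder names:

```python
# List executions belonging to one schedule; names are placeholders.
response = (
    service.projects().locations().executions()
    .list(
        parent="projects/my-project/locations/us-central1",
        filter="schedule_id=my-schedule",  # Only schedule_id filtering is supported.
        pageSize=50,
    )
    .execute()
)
```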
@@ -288,20 +296,24 @@ <h3>Method Details</h3>
   },
   "containerImageUri": "A String", # Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
   "dataprocParameters": { # Parameters used in Dataproc JobType executions. # Parameters used in Dataproc JobType executions.
-    "cluster": "A String", # URI for cluster used to run Dataproc execution. Format: 'projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
+    "cluster": "A String", # URI for cluster used to run Dataproc execution. Format: `projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}`
   },
-  "inputNotebookFile": "A String", # Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
+  "inputNotebookFile": "A String", # Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: `gs://{bucket_name}/{folder}/{notebook_file_name}` Ex: `gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb`
   "jobType": "A String", # The type of Job to be used on this execution.
+  "kernelSpec": "A String", # Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
   "labels": { # Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
     "a_key": "A String",
   },
   "masterType": "A String", # Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when `scaleTier` is set to `CUSTOM`. You can use certain Compute Engine machine types directly in this field. The following types are supported: - `n1-standard-4` - `n1-standard-8` - `n1-standard-16` - `n1-standard-32` - `n1-standard-64` - `n1-standard-96` - `n1-highmem-2` - `n1-highmem-4` - `n1-highmem-8` - `n1-highmem-16` - `n1-highmem-32` - `n1-highmem-64` - `n1-highmem-96` - `n1-highcpu-16` - `n1-highcpu-32` - `n1-highcpu-64` - `n1-highcpu-96` Alternatively, you can use the following legacy machine types: - `standard` - `large_model` - `complex_model_s` - `complex_model_m` - `complex_model_l` - `standard_gpu` - `complex_model_m_gpu` - `complex_model_l_gpu` - `standard_p100` - `complex_model_m_p100` - `standard_v100` - `large_model_v100` - `complex_model_m_v100` - `complex_model_l_v100` Finally, if you want to use a TPU for training, specify `cloud_tpu` in this field. Learn more about the [special configuration options for training with TPU](https://cloud.google.com/ai-platform/training/docs/using-tpus#configuring_a_custom_tpu_machine).
-  "outputNotebookFolder": "A String", # Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
+  "outputNotebookFolder": "A String", # Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: `gs://{bucket_name}/{folder}` Ex: `gs://notebook_user/scheduled_notebooks`
   "parameters": "A String", # Parameters used within the 'input_notebook_file' notebook.
-  "paramsYamlFile": "A String", # Parameters to be overridden in the notebook during execution. Ref https://papermill.readthedocs.io/en/latest/usage-parameterize.html on how to specifying parameters in the input notebook and pass them here in an YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
+  "paramsYamlFile": "A String", # Parameters to be overridden in the notebook during execution. Ref https://papermill.readthedocs.io/en/latest/usage-parameterize.html on how to specifying parameters in the input notebook and pass them here in an YAML file. Ex: `gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml`
   "scaleTier": "A String", # Required. Scale tier of the hardware used for notebook execution. DEPRECATED Will be discontinued. As right now only CUSTOM is supported.
   "serviceAccount": "A String", # The email address of a service account to use when running the execution. You must have the `iam.serviceAccounts.actAs` permission for the specified service account.
   "vertexAiParameters": { # Parameters used in Vertex AI JobType executions. # Parameters used in Vertex AI JobType executions.
+    "env": { # Environment variables. At most 100 environment variables can be specified and unique. Example: GCP_BUCKET=gs://my-bucket/samples/
+      "a_key": "A String",
+    },
     "network": "A String", # The full name of the Compute Engine [network](/compute/docs/networks-and-firewalls#networks) to which the Job should be peered. For example, `projects/12345/global/networks/myVPC`. [Format](https://cloud.google.com/compute/docs/reference/rest/v1/networks/insert) is of the form `projects/{project}/global/networks/{network}`. Where {project} is a project number, as in `12345`, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
   },
 },
@@ -313,7 +325,7 @@ <h3>Method Details</h3>
     },
   ],
   "nextPageToken": "A String", # Page token that can be used to continue listing from the last result in the next list call.
-  "unreachable": [ # Executions IDs that could not be reached. For example, ['projects/{project_id}/location/{location}/executions/imagenet_test1', 'projects/{project_id}/location/{location}/executions/classifier_train1'].
+  "unreachable": [ # Executions IDs that could not be reached. For example: ['projects/{project_id}/location/{location}/executions/imagenet_test1', 'projects/{project_id}/location/{location}/executions/classifier_train1']
     "A String",
   ],
 }</pre>
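The list response pages through `nextPageToken` and reports unreachable execution IDs separately rather than failing the call. A paging sketch using the client library's generated `list_next` helper; the `executions` items field is assumed from the response schema, and names are placeholders:

```python
# Page through all executions in a location, surfacing unreachable IDs.
request = service.projects().locations().executions().list(
    parent="projects/my-project/locations/us-central1"
)
while request is not None:
    response = request.execute()
    for execution in response.get("executions", []):
        print(execution["name"])
    # IDs the service could not reach are reported here, not raised as errors.
    for missing in response.get("unreachable", []):
        print("unreachable:", missing)
    # list_next returns None once the response carries no nextPageToken.
    request = service.projects().locations().executions().list_next(
        previous_request=request, previous_response=response
    )
```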
