
Commit 8e4181d

feat(container): update the api
#### container:v1

The following keys were added:
- schemas.ClusterUpdate.properties.desiredNodePoolAutoConfigLinuxNodeConfig.$ref (Total Keys: 1)
- schemas.NodeConfig.properties.localSsdEncryptionMode.type (Total Keys: 1)
- schemas.NodePoolAutoConfig.properties.linuxNodeConfig (Total Keys: 2)
- schemas.UpgradeInfoEvent (Total Keys: 13)

#### container:v1beta1

The following keys were added:
- schemas.ClusterUpdate.properties.desiredNodePoolAutoConfigLinuxNodeConfig.$ref (Total Keys: 1)
- schemas.NodeConfig.properties.localSsdEncryptionMode.type (Total Keys: 1)
- schemas.NodePoolAutoConfig.properties.linuxNodeConfig (Total Keys: 2)
- schemas.UpgradeInfoEvent (Total Keys: 13)
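This commit adds `desiredNodePoolAutoConfigLinuxNodeConfig` to `ClusterUpdate` and `linuxNodeConfig` to `NodePoolAutoConfig`. A minimal sketch of building the corresponding request body as a plain dict, which is what the generated Python client expects; the nested `sysctls` map and the example sysctl value are assumptions for illustration, not taken from this diff:

```python
def build_linux_sysctl_update(sysctls):
    """Build a ClusterUpdate body targeting the new
    desiredNodePoolAutoConfigLinuxNodeConfig field. The nested
    'sysctls' map is an assumed LinuxNodeConfig sub-field."""
    return {
        "update": {
            "desiredNodePoolAutoConfigLinuxNodeConfig": {
                "sysctls": dict(sysctls),
            },
        },
    }

# Such a body could then be passed as the request body of
# projects().locations().clusters().update(name=..., body=body)
# on a discovery-built "container" v1 client (not executed here).
body = build_linux_sysctl_update({"net.core.somaxconn": "4096"})
print(body["update"]["desiredNodePoolAutoConfigLinuxNodeConfig"])
```

Because the client is discovery-based, no library upgrade beyond the regenerated docs is needed to send the new field.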
1 parent 0ec6e07 commit 8e4181d

10 files changed: +520 −138 lines

docs/dyn/container_v1.projects.locations.clusters.html

Lines changed: 62 additions & 16 deletions
Large diffs are not rendered by default.

docs/dyn/container_v1.projects.locations.clusters.nodePools.html

Lines changed: 19 additions & 16 deletions
@@ -153,10 +153,10 @@ <h3>Method Details</h3>
   "autoprovisioned": True or False, # Can this node pool be deleted automatically.
   "enabled": True or False, # Is autoscaling enabled for this node pool.
   "locationPolicy": "A String", # Location policy used when scaling up a nodepool.
-  "maxNodeCount": 42, # Maximum number of nodes for one location in the NodePool. Must be >= min_node_count. There has to be enough quota to scale up the cluster.
-  "minNodeCount": 42, # Minimum number of nodes for one location in the NodePool. Must be >= 1 and <= max_node_count.
-  "totalMaxNodeCount": 42, # Maximum number of nodes in the node pool. Must be greater than total_min_node_count. There has to be enough quota to scale up the cluster. The total_*_node_count fields are mutually exclusive with the *_node_count fields.
-  "totalMinNodeCount": 42, # Minimum number of nodes in the node pool. Must be greater than 1 less than total_max_node_count. The total_*_node_count fields are mutually exclusive with the *_node_count fields.
+  "maxNodeCount": 42, # Maximum number of nodes for one location in the node pool. Must be >= min_node_count. There has to be enough quota to scale up the cluster.
+  "minNodeCount": 42, # Minimum number of nodes for one location in the node pool. Must be greater than or equal to 0 and less than or equal to max_node_count.
+  "totalMaxNodeCount": 42, # Maximum number of nodes in the node pool. Must be greater than or equal to total_min_node_count. There has to be enough quota to scale up the cluster. The total_*_node_count fields are mutually exclusive with the *_node_count fields.
+  "totalMinNodeCount": 42, # Minimum number of nodes in the node pool. Must be greater than or equal to 0 and less than or equal to total_max_node_count. The total_*_node_count fields are mutually exclusive with the *_node_count fields.
   },
   "bestEffortProvisioning": { # Best effort provisioning. # Enable best effort provisioning for nodes
   "enabled": True or False, # When this is enabled, cluster/node pool creations will ignore non-fatal errors like stockout to best provision as many nodes as possible right now and eventually bring up all target number of nodes
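The hunk above documents that the per-location `*_node_count` fields and the cluster-wide `total_*_node_count` fields are mutually exclusive, and relaxes the minimum bound to >= 0. A sketch of a client-side check one might run on a NodePoolAutoscaling dict before sending it; the function name and error messages are mine, not part of the API:

```python
def check_autoscaling_body(autoscaling):
    """Validate a NodePoolAutoscaling dict against the constraints
    described in the updated docs: per-location and total_* counts
    are mutually exclusive, and min must be >= 0 and <= max."""
    per_location = {"minNodeCount", "maxNodeCount"}
    totals = {"totalMinNodeCount", "totalMaxNodeCount"}
    keys = set(autoscaling)
    if keys & per_location and keys & totals:
        raise ValueError(
            "*_node_count and total_*_node_count fields are mutually exclusive"
        )
    for lo, hi in (("minNodeCount", "maxNodeCount"),
                   ("totalMinNodeCount", "totalMaxNodeCount")):
        if lo in autoscaling:
            if autoscaling[lo] < 0:
                raise ValueError(f"{lo} must be >= 0")
            if hi in autoscaling and autoscaling[lo] > autoscaling[hi]:
                raise ValueError(f"{lo} must be <= {hi}")
    return autoscaling

# A valid per-location body passes through unchanged.
check_autoscaling_body({"enabled": True, "minNodeCount": 0, "maxNodeCount": 5})
```

The service performs its own validation; this kind of check only catches the mutual-exclusion mistake before a round trip.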
@@ -248,6 +248,7 @@ <h3>Method Details</h3>
   "localSsdCount": 42, # Number of local NVMe SSDs to use. The limit for this value is dependent upon the maximum number of disks available on a machine per zone. See: https://cloud.google.com/compute/docs/disks/local-ssd for more information. A zero (or unset) value has different meanings depending on machine type being used: 1. For pre-Gen3 machines, which support flexible numbers of local ssds, zero (or unset) means to disable using local SSDs as ephemeral storage. 2. For Gen3 machines which dictate a specific number of local ssds, zero (or unset) means to use the default number of local ssds that goes with that machine type. For example, for a c3-standard-8-lssd machine, 2 local ssds would be provisioned. For c3-standard-8 (which doesn't support local ssds), 0 will be provisioned. See https://cloud.google.com/compute/docs/disks/local-ssd#choose_number_local_ssds for more info.
   },
   "localSsdCount": 42, # The number of local SSD disks to be attached to the node. The limit for this value is dependent upon the maximum number of disks available on a machine per zone. See: https://cloud.google.com/compute/docs/disks/local-ssd for more information.
+  "localSsdEncryptionMode": "A String", # Specifies which method should be used for encrypting the Local SSDs attached to the node.
   "loggingConfig": { # NodePoolLoggingConfig specifies logging configuration for nodepools. # Logging configuration.
   "variantConfig": { # LoggingVariantConfig specifies the behaviour of the logging component. # Logging variant configuration.
   "variant": "A String", # Logging variant deployed on nodes.
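The new `localSsdEncryptionMode` field is a plain string enum on NodeConfig. A sketch of attaching it to a NodeConfig body; the enum values listed here are assumptions about the service-side enum and should be verified against the published GKE reference before use:

```python
# Assumed enum values; verify against the published NodeConfig reference.
ASSUMED_SSD_ENCRYPTION_MODES = {
    "LOCAL_SSD_ENCRYPTION_MODE_UNSPECIFIED",
    "STANDARD_ENCRYPTION",
    "EPHEMERAL_KEY_ENCRYPTION",
}

def node_config_with_ssd_encryption(local_ssd_count, mode):
    """Return a NodeConfig fragment that sets the new
    localSsdEncryptionMode field alongside localSsdCount."""
    if mode not in ASSUMED_SSD_ENCRYPTION_MODES:
        raise ValueError(f"unknown localSsdEncryptionMode: {mode!r}")
    return {
        "localSsdCount": local_ssd_count,
        "localSsdEncryptionMode": mode,
    }

print(node_config_with_ssd_encryption(2, "EPHEMERAL_KEY_ENCRYPTION"))
```

Because the client serializes whatever dict it is given, an unknown mode would otherwise only be rejected server-side.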
@@ -575,10 +576,10 @@ <h3>Method Details</h3>
   "autoprovisioned": True or False, # Can this node pool be deleted automatically.
   "enabled": True or False, # Is autoscaling enabled for this node pool.
   "locationPolicy": "A String", # Location policy used when scaling up a nodepool.
-  "maxNodeCount": 42, # Maximum number of nodes for one location in the NodePool. Must be >= min_node_count. There has to be enough quota to scale up the cluster.
-  "minNodeCount": 42, # Minimum number of nodes for one location in the NodePool. Must be >= 1 and <= max_node_count.
-  "totalMaxNodeCount": 42, # Maximum number of nodes in the node pool. Must be greater than total_min_node_count. There has to be enough quota to scale up the cluster. The total_*_node_count fields are mutually exclusive with the *_node_count fields.
-  "totalMinNodeCount": 42, # Minimum number of nodes in the node pool. Must be greater than 1 less than total_max_node_count. The total_*_node_count fields are mutually exclusive with the *_node_count fields.
+  "maxNodeCount": 42, # Maximum number of nodes for one location in the node pool. Must be >= min_node_count. There has to be enough quota to scale up the cluster.
+  "minNodeCount": 42, # Minimum number of nodes for one location in the node pool. Must be greater than or equal to 0 and less than or equal to max_node_count.
+  "totalMaxNodeCount": 42, # Maximum number of nodes in the node pool. Must be greater than or equal to total_min_node_count. There has to be enough quota to scale up the cluster. The total_*_node_count fields are mutually exclusive with the *_node_count fields.
+  "totalMinNodeCount": 42, # Minimum number of nodes in the node pool. Must be greater than or equal to 0 and less than or equal to total_max_node_count. The total_*_node_count fields are mutually exclusive with the *_node_count fields.
   },
   "bestEffortProvisioning": { # Best effort provisioning. # Enable best effort provisioning for nodes
   "enabled": True or False, # When this is enabled, cluster/node pool creations will ignore non-fatal errors like stockout to best provision as many nodes as possible right now and eventually bring up all target number of nodes
@@ -670,6 +671,7 @@ <h3>Method Details</h3>
   "localSsdCount": 42, # Number of local NVMe SSDs to use. The limit for this value is dependent upon the maximum number of disks available on a machine per zone. See: https://cloud.google.com/compute/docs/disks/local-ssd for more information. A zero (or unset) value has different meanings depending on machine type being used: 1. For pre-Gen3 machines, which support flexible numbers of local ssds, zero (or unset) means to disable using local SSDs as ephemeral storage. 2. For Gen3 machines which dictate a specific number of local ssds, zero (or unset) means to use the default number of local ssds that goes with that machine type. For example, for a c3-standard-8-lssd machine, 2 local ssds would be provisioned. For c3-standard-8 (which doesn't support local ssds), 0 will be provisioned. See https://cloud.google.com/compute/docs/disks/local-ssd#choose_number_local_ssds for more info.
   },
   "localSsdCount": 42, # The number of local SSD disks to be attached to the node. The limit for this value is dependent upon the maximum number of disks available on a machine per zone. See: https://cloud.google.com/compute/docs/disks/local-ssd for more information.
+  "localSsdEncryptionMode": "A String", # Specifies which method should be used for encrypting the Local SSDs attached to the node.
   "loggingConfig": { # NodePoolLoggingConfig specifies logging configuration for nodepools. # Logging configuration.
   "variantConfig": { # LoggingVariantConfig specifies the behaviour of the logging component. # Logging variant configuration.
   "variant": "A String", # Logging variant deployed on nodes.
@@ -863,10 +865,10 @@ <h3>Method Details</h3>
   "autoprovisioned": True or False, # Can this node pool be deleted automatically.
   "enabled": True or False, # Is autoscaling enabled for this node pool.
   "locationPolicy": "A String", # Location policy used when scaling up a nodepool.
-  "maxNodeCount": 42, # Maximum number of nodes for one location in the NodePool. Must be >= min_node_count. There has to be enough quota to scale up the cluster.
-  "minNodeCount": 42, # Minimum number of nodes for one location in the NodePool. Must be >= 1 and <= max_node_count.
-  "totalMaxNodeCount": 42, # Maximum number of nodes in the node pool. Must be greater than total_min_node_count. There has to be enough quota to scale up the cluster. The total_*_node_count fields are mutually exclusive with the *_node_count fields.
-  "totalMinNodeCount": 42, # Minimum number of nodes in the node pool. Must be greater than 1 less than total_max_node_count. The total_*_node_count fields are mutually exclusive with the *_node_count fields.
+  "maxNodeCount": 42, # Maximum number of nodes for one location in the node pool. Must be >= min_node_count. There has to be enough quota to scale up the cluster.
+  "minNodeCount": 42, # Minimum number of nodes for one location in the node pool. Must be greater than or equal to 0 and less than or equal to max_node_count.
+  "totalMaxNodeCount": 42, # Maximum number of nodes in the node pool. Must be greater than or equal to total_min_node_count. There has to be enough quota to scale up the cluster. The total_*_node_count fields are mutually exclusive with the *_node_count fields.
+  "totalMinNodeCount": 42, # Minimum number of nodes in the node pool. Must be greater than or equal to 0 and less than or equal to total_max_node_count. The total_*_node_count fields are mutually exclusive with the *_node_count fields.
   },
   "bestEffortProvisioning": { # Best effort provisioning. # Enable best effort provisioning for nodes
   "enabled": True or False, # When this is enabled, cluster/node pool creations will ignore non-fatal errors like stockout to best provision as many nodes as possible right now and eventually bring up all target number of nodes
@@ -958,6 +960,7 @@ <h3>Method Details</h3>
   "localSsdCount": 42, # Number of local NVMe SSDs to use. The limit for this value is dependent upon the maximum number of disks available on a machine per zone. See: https://cloud.google.com/compute/docs/disks/local-ssd for more information. A zero (or unset) value has different meanings depending on machine type being used: 1. For pre-Gen3 machines, which support flexible numbers of local ssds, zero (or unset) means to disable using local SSDs as ephemeral storage. 2. For Gen3 machines which dictate a specific number of local ssds, zero (or unset) means to use the default number of local ssds that goes with that machine type. For example, for a c3-standard-8-lssd machine, 2 local ssds would be provisioned. For c3-standard-8 (which doesn't support local ssds), 0 will be provisioned. See https://cloud.google.com/compute/docs/disks/local-ssd#choose_number_local_ssds for more info.
   },
   "localSsdCount": 42, # The number of local SSD disks to be attached to the node. The limit for this value is dependent upon the maximum number of disks available on a machine per zone. See: https://cloud.google.com/compute/docs/disks/local-ssd for more information.
+  "localSsdEncryptionMode": "A String", # Specifies which method should be used for encrypting the Local SSDs attached to the node.
   "loggingConfig": { # NodePoolLoggingConfig specifies logging configuration for nodepools. # Logging configuration.
   "variantConfig": { # LoggingVariantConfig specifies the behaviour of the logging component. # Logging variant configuration.
   "variant": "A String", # Logging variant deployed on nodes.
@@ -1222,10 +1225,10 @@ <h3>Method Details</h3>
   "autoprovisioned": True or False, # Can this node pool be deleted automatically.
   "enabled": True or False, # Is autoscaling enabled for this node pool.
   "locationPolicy": "A String", # Location policy used when scaling up a nodepool.
-  "maxNodeCount": 42, # Maximum number of nodes for one location in the NodePool. Must be >= min_node_count. There has to be enough quota to scale up the cluster.
-  "minNodeCount": 42, # Minimum number of nodes for one location in the NodePool. Must be >= 1 and <= max_node_count.
-  "totalMaxNodeCount": 42, # Maximum number of nodes in the node pool. Must be greater than total_min_node_count. There has to be enough quota to scale up the cluster. The total_*_node_count fields are mutually exclusive with the *_node_count fields.
-  "totalMinNodeCount": 42, # Minimum number of nodes in the node pool. Must be greater than 1 less than total_max_node_count. The total_*_node_count fields are mutually exclusive with the *_node_count fields.
+  "maxNodeCount": 42, # Maximum number of nodes for one location in the node pool. Must be >= min_node_count. There has to be enough quota to scale up the cluster.
+  "minNodeCount": 42, # Minimum number of nodes for one location in the node pool. Must be greater than or equal to 0 and less than or equal to max_node_count.
+  "totalMaxNodeCount": 42, # Maximum number of nodes in the node pool. Must be greater than or equal to total_min_node_count. There has to be enough quota to scale up the cluster. The total_*_node_count fields are mutually exclusive with the *_node_count fields.
+  "totalMinNodeCount": 42, # Minimum number of nodes in the node pool. Must be greater than or equal to 0 and less than or equal to total_max_node_count. The total_*_node_count fields are mutually exclusive with the *_node_count fields.
   },
   "clusterId": "A String", # Deprecated. The name of the cluster to upgrade. This field has been deprecated and replaced by the name field.
   "name": "A String", # The name (project, location, cluster, node pool) of the node pool to set autoscaler settings. Specified in the format `projects/*/locations/*/clusters/*/nodePools/*`.
