
Commit 50d9aa1

chore(container): update the api
#### container:v1

The following keys were added:

- schemas.AddonsConfig.properties.gcePersistentDiskCsiDriverConfig (Total Keys: 1)
- schemas.Autopilot (Total Keys: 3)
- schemas.Cluster.properties.autopilot (Total Keys: 1)
- schemas.Cluster.properties.notificationConfig (Total Keys: 1)
- schemas.ClusterUpdate.properties.desiredNotificationConfig (Total Keys: 1)
- schemas.ClusterUpdate.properties.desiredPrivateIpv6GoogleAccess (Total Keys: 1)
- schemas.GcePersistentDiskCsiDriverConfig (Total Keys: 3)
- schemas.LinuxNodeConfig (Total Keys: 4)
- schemas.NetworkConfig.properties.privateIpv6GoogleAccess (Total Keys: 1)
- schemas.NodeConfig.properties.kubeletConfig (Total Keys: 1)
- schemas.NodeConfig.properties.linuxNodeConfig (Total Keys: 1)
- schemas.NodeKubeletConfig (Total Keys: 5)
- schemas.NotificationConfig (Total Keys: 3)
- schemas.PubSub (Total Keys: 4)
- schemas.UpdateNodePoolRequest.properties.kubeletConfig (Total Keys: 1)
- schemas.UpdateNodePoolRequest.properties.linuxNodeConfig (Total Keys: 1)

#### container:v1beta1

The following keys were added:

- schemas.Autopilot (Total Keys: 3)
- schemas.Cluster.properties.autopilot (Total Keys: 1)
- schemas.ClusterUpdate.properties.desiredPrivateIpv6GoogleAccess (Total Keys: 1)
- schemas.NetworkConfig.properties.privateIpv6GoogleAccess (Total Keys: 1)
- schemas.UpgradeAvailableEvent (Total Keys: 6)
1 parent 8e8af01 · commit 50d9aa1

10 files changed, +11401 −10846 lines changed
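The new surface is callable straight away through the generated client. The sketch below reads two of the newly added Cluster fields (`autopilot` and `notificationConfig`) via google-api-python-client; it is a minimal sketch that assumes application-default credentials are available, and the project, location, and cluster names are placeholders.

```python
from googleapiclient.discovery import build

# Build the GKE client from the container v1 discovery document.
service = build("container", "v1")

# Hypothetical resource name: substitute your own project/location/cluster.
name = "projects/my-project/locations/us-central1/clusters/my-cluster"

cluster = service.projects().locations().clusters().get(name=name).execute()

# Keys added in this revision of the API surface; a missing key simply
# means the feature is not configured on the cluster.
print(cluster.get("autopilot"))           # schemas.Cluster.properties.autopilot
print(cluster.get("notificationConfig"))  # schemas.Cluster.properties.notificationConfig
```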

- docs/dyn/container_v1.projects.locations.clusters.html: 128 additions & 16 deletions (large diff not rendered by default)
- docs/dyn/container_v1.projects.locations.clusters.nodePools.html: 49 additions & 9 deletions (large diff not rendered by default)
- docs/dyn/container_v1.projects.zones.clusters.html: 128 additions & 16 deletions (large diff not rendered by default)
- docs/dyn/container_v1.projects.zones.clusters.nodePools.html: 49 additions & 9 deletions (large diff not rendered by default)
- docs/dyn/container_v1beta1.projects.locations.clusters.html: 34 additions & 21 deletions (large diff not rendered by default)

docs/dyn/container_v1beta1.projects.locations.clusters.nodePools.html: 9 additions & 9 deletions
```diff
@@ -100,7 +100,7 @@ <h2>Instance Methods</h2>
 <p class="firstline">Sets the NodeManagement options for a node pool.</p>
 <p class="toc_element">
 <code><a href="#setSize">setSize(name, body=None, x__xgafv=None)</a></code></p>
-<p class="firstline">Sets the size for a specific node pool.</p>
+<p class="firstline">SetNodePoolSizeRequest sets the size of a node pool. The new size will be used for all replicas, including future replicas created by modifying NodePool.locations.</p>
 <p class="toc_element">
 <code><a href="#update">update(name, body=None, x__xgafv=None)</a></code></p>
 <p class="firstline">Updates the version and/or image type of a specific node pool.</p>
@@ -144,7 +144,7 @@ <h3>Method Details</h3>
 ],
 &quot;bootDiskKmsKey&quot;: &quot;A String&quot;, # The Customer Managed Encryption Key used to encrypt the boot disk attached to each node in the node pool. This should be of the form projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME]. For more information about protecting resources with Cloud KMS Keys please see: https://cloud.google.com/compute/docs/disks/customer-managed-encryption
 &quot;diskSizeGb&quot;: 42, # Size of the disk attached to each node, specified in GB. The smallest allowed disk size is 10GB. If unspecified, the default disk size is 100GB.
-&quot;diskType&quot;: &quot;A String&quot;, # Type of the disk attached to each node (e.g. &#x27;pd-standard&#x27; or &#x27;pd-ssd&#x27;) If unspecified, the default disk type is &#x27;pd-standard&#x27;
+&quot;diskType&quot;: &quot;A String&quot;, # Type of the disk attached to each node (e.g. &#x27;pd-standard&#x27;, &#x27;pd-ssd&#x27; or &#x27;pd-balanced&#x27;) If unspecified, the default disk type is &#x27;pd-standard&#x27;
 &quot;ephemeralStorageConfig&quot;: { # EphemeralStorageConfig contains configuration for the ephemeral storage filesystem. # Parameters for the ephemeral storage filesystem. If unspecified, ephemeral storage is backed by the boot disk.
 &quot;localSsdCount&quot;: 42, # Number of local SSDs to use to back ephemeral storage. Uses NVMe interfaces. Each local SSD is 375 GB in size. If zero, it means to disable using local SSDs as ephemeral storage.
 },
```
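The widened `diskType` comment now names `pd-balanced` alongside `pd-standard` and `pd-ssd`. As a hedged sketch, `pd-balanced` can be requested at node-pool creation like this; the resource names are placeholders, and the body follows the CreateNodePoolRequest shape these docs describe elsewhere.

```python
from googleapiclient.discovery import build

service = build("container", "v1beta1")

# Placeholder parent cluster, in the documented resource-name format.
parent = "projects/my-project/locations/us-central1/clusters/my-cluster"

body = {
    "nodePool": {
        "name": "balanced-pool",
        "initialNodeCount": 3,
        "config": {
            "diskType": "pd-balanced",  # newly listed next to pd-standard / pd-ssd
            "diskSizeGb": 100,          # default is 100GB; the minimum is 10GB
        },
    }
}

op = (
    service.projects().locations().clusters().nodePools()
    .create(parent=parent, body=body)
    .execute()
)
print(op["name"])  # name of the long-running Operation
```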
```diff
@@ -226,7 +226,7 @@ <h3>Method Details</h3>
 &quot;podIpv4CidrSize&quot;: 42, # [Output only] The pod CIDR block size per node in this node pool.
 &quot;selfLink&quot;: &quot;A String&quot;, # [Output only] Server-defined URL for the resource.
 &quot;status&quot;: &quot;A String&quot;, # [Output only] The status of the nodes in this pool instance.
-&quot;statusMessage&quot;: &quot;A String&quot;, # [Output only] Additional information about the current status of this node pool instance, if available.
+&quot;statusMessage&quot;: &quot;A String&quot;, # [Output only] Deprecated. Use conditions instead. Additional information about the current status of this node pool instance, if available.
 &quot;upgradeSettings&quot;: { # These upgrade settings control the level of parallelism and the level of disruption caused by an upgrade. maxUnavailable controls the number of nodes that can be simultaneously unavailable. maxSurge controls the number of additional nodes that can be added to the node pool temporarily for the time of the upgrade to increase the number of available nodes. (maxUnavailable + maxSurge) determines the level of parallelism (how many nodes are being upgraded at the same time). Note: upgrades inevitably introduce some disruption since workloads need to be moved from old nodes to new, upgraded ones. Even if maxUnavailable=0, this holds true. (Disruption stays within the limits of PodDisruptionBudget, if it is configured.) Consider a hypothetical node pool with 5 nodes having maxSurge=2, maxUnavailable=1. This means the upgrade process upgrades 3 nodes simultaneously. It creates 2 additional (upgraded) nodes, then it brings down 3 old (not yet upgraded) nodes at the same time. This ensures that there are always at least 4 nodes available. # Upgrade settings control disruption and speed of the upgrade.
 &quot;maxSurge&quot;: 42, # The maximum number of nodes that can be created beyond the current size of the node pool during the upgrade process.
 &quot;maxUnavailable&quot;: 42, # The maximum number of nodes that can be simultaneously unavailable during the upgrade process. A node is considered available if its status is Ready.
```
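The `upgradeSettings` comment above embeds a worked example (a 5-node pool with maxSurge=2 and maxUnavailable=1). Restated as plain arithmetic rather than API code:

```python
# Worked example from the upgradeSettings comment: 5-node pool,
# maxSurge=2, maxUnavailable=1.
pool_size, max_surge, max_unavailable = 5, 2, 1

parallelism = max_surge + max_unavailable    # 3 nodes upgraded at a time
min_available = pool_size - max_unavailable  # at least 4 nodes stay available
assert (parallelism, min_available) == (3, 4)
```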
```diff
@@ -411,7 +411,7 @@ <h3>Method Details</h3>
 ],
 &quot;bootDiskKmsKey&quot;: &quot;A String&quot;, # The Customer Managed Encryption Key used to encrypt the boot disk attached to each node in the node pool. This should be of the form projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME]. For more information about protecting resources with Cloud KMS Keys please see: https://cloud.google.com/compute/docs/disks/customer-managed-encryption
 &quot;diskSizeGb&quot;: 42, # Size of the disk attached to each node, specified in GB. The smallest allowed disk size is 10GB. If unspecified, the default disk size is 100GB.
-&quot;diskType&quot;: &quot;A String&quot;, # Type of the disk attached to each node (e.g. &#x27;pd-standard&#x27; or &#x27;pd-ssd&#x27;) If unspecified, the default disk type is &#x27;pd-standard&#x27;
+&quot;diskType&quot;: &quot;A String&quot;, # Type of the disk attached to each node (e.g. &#x27;pd-standard&#x27;, &#x27;pd-ssd&#x27; or &#x27;pd-balanced&#x27;) If unspecified, the default disk type is &#x27;pd-standard&#x27;
 &quot;ephemeralStorageConfig&quot;: { # EphemeralStorageConfig contains configuration for the ephemeral storage filesystem. # Parameters for the ephemeral storage filesystem. If unspecified, ephemeral storage is backed by the boot disk.
 &quot;localSsdCount&quot;: 42, # Number of local SSDs to use to back ephemeral storage. Uses NVMe interfaces. Each local SSD is 375 GB in size. If zero, it means to disable using local SSDs as ephemeral storage.
 },
@@ -493,7 +493,7 @@ <h3>Method Details</h3>
 &quot;podIpv4CidrSize&quot;: 42, # [Output only] The pod CIDR block size per node in this node pool.
 &quot;selfLink&quot;: &quot;A String&quot;, # [Output only] Server-defined URL for the resource.
 &quot;status&quot;: &quot;A String&quot;, # [Output only] The status of the nodes in this pool instance.
-&quot;statusMessage&quot;: &quot;A String&quot;, # [Output only] Additional information about the current status of this node pool instance, if available.
+&quot;statusMessage&quot;: &quot;A String&quot;, # [Output only] Deprecated. Use conditions instead. Additional information about the current status of this node pool instance, if available.
 &quot;upgradeSettings&quot;: { # These upgrade settings control the level of parallelism and the level of disruption caused by an upgrade. maxUnavailable controls the number of nodes that can be simultaneously unavailable. maxSurge controls the number of additional nodes that can be added to the node pool temporarily for the time of the upgrade to increase the number of available nodes. (maxUnavailable + maxSurge) determines the level of parallelism (how many nodes are being upgraded at the same time). Note: upgrades inevitably introduce some disruption since workloads need to be moved from old nodes to new, upgraded ones. Even if maxUnavailable=0, this holds true. (Disruption stays within the limits of PodDisruptionBudget, if it is configured.) Consider a hypothetical node pool with 5 nodes having maxSurge=2, maxUnavailable=1. This means the upgrade process upgrades 3 nodes simultaneously. It creates 2 additional (upgraded) nodes, then it brings down 3 old (not yet upgraded) nodes at the same time. This ensures that there are always at least 4 nodes available. # Upgrade settings control disruption and speed of the upgrade.
 &quot;maxSurge&quot;: 42, # The maximum number of nodes that can be created beyond the current size of the node pool during the upgrade process.
 &quot;maxUnavailable&quot;: 42, # The maximum number of nodes that can be simultaneously unavailable during the upgrade process. A node is considered available if its status is Ready.
@@ -544,7 +544,7 @@ <h3>Method Details</h3>
 ],
 &quot;bootDiskKmsKey&quot;: &quot;A String&quot;, # The Customer Managed Encryption Key used to encrypt the boot disk attached to each node in the node pool. This should be of the form projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME]. For more information about protecting resources with Cloud KMS Keys please see: https://cloud.google.com/compute/docs/disks/customer-managed-encryption
 &quot;diskSizeGb&quot;: 42, # Size of the disk attached to each node, specified in GB. The smallest allowed disk size is 10GB. If unspecified, the default disk size is 100GB.
-&quot;diskType&quot;: &quot;A String&quot;, # Type of the disk attached to each node (e.g. &#x27;pd-standard&#x27; or &#x27;pd-ssd&#x27;) If unspecified, the default disk type is &#x27;pd-standard&#x27;
+&quot;diskType&quot;: &quot;A String&quot;, # Type of the disk attached to each node (e.g. &#x27;pd-standard&#x27;, &#x27;pd-ssd&#x27; or &#x27;pd-balanced&#x27;) If unspecified, the default disk type is &#x27;pd-standard&#x27;
 &quot;ephemeralStorageConfig&quot;: { # EphemeralStorageConfig contains configuration for the ephemeral storage filesystem. # Parameters for the ephemeral storage filesystem. If unspecified, ephemeral storage is backed by the boot disk.
 &quot;localSsdCount&quot;: 42, # Number of local SSDs to use to back ephemeral storage. Uses NVMe interfaces. Each local SSD is 375 GB in size. If zero, it means to disable using local SSDs as ephemeral storage.
 },
@@ -626,7 +626,7 @@ <h3>Method Details</h3>
 &quot;podIpv4CidrSize&quot;: 42, # [Output only] The pod CIDR block size per node in this node pool.
 &quot;selfLink&quot;: &quot;A String&quot;, # [Output only] Server-defined URL for the resource.
 &quot;status&quot;: &quot;A String&quot;, # [Output only] The status of the nodes in this pool instance.
-&quot;statusMessage&quot;: &quot;A String&quot;, # [Output only] Additional information about the current status of this node pool instance, if available.
+&quot;statusMessage&quot;: &quot;A String&quot;, # [Output only] Deprecated. Use conditions instead. Additional information about the current status of this node pool instance, if available.
 &quot;upgradeSettings&quot;: { # These upgrade settings control the level of parallelism and the level of disruption caused by an upgrade. maxUnavailable controls the number of nodes that can be simultaneously unavailable. maxSurge controls the number of additional nodes that can be added to the node pool temporarily for the time of the upgrade to increase the number of available nodes. (maxUnavailable + maxSurge) determines the level of parallelism (how many nodes are being upgraded at the same time). Note: upgrades inevitably introduce some disruption since workloads need to be moved from old nodes to new, upgraded ones. Even if maxUnavailable=0, this holds true. (Disruption stays within the limits of PodDisruptionBudget, if it is configured.) Consider a hypothetical node pool with 5 nodes having maxSurge=2, maxUnavailable=1. This means the upgrade process upgrades 3 nodes simultaneously. It creates 2 additional (upgraded) nodes, then it brings down 3 old (not yet upgraded) nodes at the same time. This ensures that there are always at least 4 nodes available. # Upgrade settings control disruption and speed of the upgrade.
 &quot;maxSurge&quot;: 42, # The maximum number of nodes that can be created beyond the current size of the node pool during the upgrade process.
 &quot;maxUnavailable&quot;: 42, # The maximum number of nodes that can be simultaneously unavailable during the upgrade process. A node is considered available if its status is Ready.
```
```diff
@@ -887,14 +887,14 @@ <h3>Method Details</h3>
 
 <div class="method">
 <code class="details" id="setSize">setSize(name, body=None, x__xgafv=None)</code>
-<pre>Sets the size for a specific node pool.
+<pre>SetNodePoolSizeRequest sets the size of a node pool. The new size will be used for all replicas, including future replicas created by modifying NodePool.locations.
 
 Args:
 name: string, The name (project, location, cluster, node pool id) of the node pool to set size. Specified in the format `projects/*/locations/*/clusters/*/nodePools/*`. (required)
 body: object, The request body.
 The object takes the form of:
 
-{ # SetNodePoolSizeRequest sets the size a node pool.
+{ # SetNodePoolSizeRequest sets the size of a node pool.
 &quot;clusterId&quot;: &quot;A String&quot;, # Required. Deprecated. The name of the cluster to update. This field has been deprecated and replaced by the name field.
 &quot;name&quot;: &quot;A String&quot;, # The name (project, location, cluster, node pool id) of the node pool to set size. Specified in the format `projects/*/locations/*/clusters/*/nodePools/*`.
 &quot;nodeCount&quot;: 42, # Required. The desired node count for the pool.
```
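To round out the reworded setSize documentation, here is a minimal call sketch. The resource name is a placeholder in the documented `projects/*/locations/*/clusters/*/nodePools/*` format, and only the required `nodeCount` is set in the body, since `clusterId` is deprecated in favor of `name`.

```python
from googleapiclient.discovery import build

service = build("container", "v1beta1")

# Placeholder node pool name; substitute your own identifiers.
name = ("projects/my-project/locations/us-central1"
        "/clusters/my-cluster/nodePools/default-pool")

# Per the docstring, the new size applies to all replicas, including
# future replicas created by modifying NodePool.locations.
op = (
    service.projects().locations().clusters().nodePools()
    .setSize(name=name, body={"nodeCount": 5})
    .execute()
)
print(op.get("status"))  # status of the returned Operation
```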
