<pclass="firstline">Sets the size for a specific node pool.</p>
103
+
<pclass="firstline">SetNodePoolSizeRequest sets the size of a node pool. The new size will be used for all replicas, including future replicas created by modifying NodePool.locations.</p>
<pclass="firstline">Updates the version and/or image type of a specific node pool.</p>
@@ -144,7 +144,7 @@ <h3>Method Details</h3>
       ],
       "bootDiskKmsKey": "A String", # The Customer Managed Encryption Key used to encrypt the boot disk attached to each node in the node pool. This should be of the form projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME]. For more information about protecting resources with Cloud KMS Keys please see: https://cloud.google.com/compute/docs/disks/customer-managed-encryption
       "diskSizeGb": 42, # Size of the disk attached to each node, specified in GB. The smallest allowed disk size is 10GB. If unspecified, the default disk size is 100GB.
-      "diskType": "A String", # Type of the disk attached to each node (e.g. 'pd-standard' or 'pd-ssd') If unspecified, the default disk type is 'pd-standard'
+      "diskType": "A String", # Type of the disk attached to each node (e.g. 'pd-standard', 'pd-ssd' or 'pd-balanced') If unspecified, the default disk type is 'pd-standard'
       "ephemeralStorageConfig": { # EphemeralStorageConfig contains configuration for the ephemeral storage filesystem. # Parameters for the ephemeral storage filesystem. If unspecified, ephemeral storage is backed by the boot disk.
         "localSsdCount": 42, # Number of local SSDs to use to back ephemeral storage. Uses NVMe interfaces. Each local SSD is 375 GB in size. If zero, it means to disable using local SSDs as ephemeral storage.
       },
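For orientation, here is a minimal sketch of where the widened `diskType` field sits in a node pool request body. Everything below is illustrative: the pool name, node count, and chosen disk type are placeholders, not values taken from this diff.

```python
# Hypothetical fragment of a CreateNodePoolRequest body, showing the newly
# documented 'pd-balanced' value; all names and numbers are placeholders.
create_node_pool_body = {
    "nodePool": {
        "name": "example-pool",        # placeholder pool name
        "initialNodeCount": 3,
        "config": {
            "diskSizeGb": 100,         # the documented default if unspecified
            "diskType": "pd-balanced", # now listed alongside 'pd-standard'
                                       # and 'pd-ssd'
        },
    },
}
```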
@@ -226,7 +226,7 @@ <h3>Method Details</h3>
     "podIpv4CidrSize": 42, # [Output only] The pod CIDR block size per node in this node pool.
     "selfLink": "A String", # [Output only] Server-defined URL for the resource.
     "status": "A String", # [Output only] The status of the nodes in this pool instance.
-    "statusMessage": "A String", # [Output only] Additional information about the current status of this node pool instance, if available.
+    "statusMessage": "A String", # [Output only] Deprecated. Use conditions instead. Additional information about the current status of this node pool instance, if available.
     "upgradeSettings": { # These upgrade settings control the level of parallelism and the level of disruption caused by an upgrade. maxUnavailable controls the number of nodes that can be simultaneously unavailable. maxSurge controls the number of additional nodes that can be added to the node pool temporarily for the time of the upgrade to increase the number of available nodes. (maxUnavailable + maxSurge) determines the level of parallelism (how many nodes are being upgraded at the same time). Note: upgrades inevitably introduce some disruption since workloads need to be moved from old nodes to new, upgraded ones. Even if maxUnavailable=0, this holds true. (Disruption stays within the limits of PodDisruptionBudget, if it is configured.) Consider a hypothetical node pool with 5 nodes having maxSurge=2, maxUnavailable=1. This means the upgrade process upgrades 3 nodes simultaneously. It creates 2 additional (upgraded) nodes, then it brings down 3 old (not yet upgraded) nodes at the same time. This ensures that there are always at least 4 nodes available. # Upgrade settings control disruption and speed of the upgrade.
       "maxSurge": 42, # The maximum number of nodes that can be created beyond the current size of the node pool during the upgrade process.
       "maxUnavailable": 42, # The maximum number of nodes that can be simultaneously unavailable during the upgrade process. A node is considered available if its status is Ready.
@@ -411,7 +411,7 @@ <h3>Method Details</h3>
       ],
       "bootDiskKmsKey": "A String", # The Customer Managed Encryption Key used to encrypt the boot disk attached to each node in the node pool. This should be of the form projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME]. For more information about protecting resources with Cloud KMS Keys please see: https://cloud.google.com/compute/docs/disks/customer-managed-encryption
       "diskSizeGb": 42, # Size of the disk attached to each node, specified in GB. The smallest allowed disk size is 10GB. If unspecified, the default disk size is 100GB.
-      "diskType": "A String", # Type of the disk attached to each node (e.g. 'pd-standard' or 'pd-ssd') If unspecified, the default disk type is 'pd-standard'
+      "diskType": "A String", # Type of the disk attached to each node (e.g. 'pd-standard', 'pd-ssd' or 'pd-balanced') If unspecified, the default disk type is 'pd-standard'
       "ephemeralStorageConfig": { # EphemeralStorageConfig contains configuration for the ephemeral storage filesystem. # Parameters for the ephemeral storage filesystem. If unspecified, ephemeral storage is backed by the boot disk.
         "localSsdCount": 42, # Number of local SSDs to use to back ephemeral storage. Uses NVMe interfaces. Each local SSD is 375 GB in size. If zero, it means to disable using local SSDs as ephemeral storage.
       },
@@ -493,7 +493,7 @@ <h3>Method Details</h3>
     "podIpv4CidrSize": 42, # [Output only] The pod CIDR block size per node in this node pool.
     "selfLink": "A String", # [Output only] Server-defined URL for the resource.
     "status": "A String", # [Output only] The status of the nodes in this pool instance.
-    "statusMessage": "A String", # [Output only] Additional information about the current status of this node pool instance, if available.
+    "statusMessage": "A String", # [Output only] Deprecated. Use conditions instead. Additional information about the current status of this node pool instance, if available.
     "upgradeSettings": { # These upgrade settings control the level of parallelism and the level of disruption caused by an upgrade. maxUnavailable controls the number of nodes that can be simultaneously unavailable. maxSurge controls the number of additional nodes that can be added to the node pool temporarily for the time of the upgrade to increase the number of available nodes. (maxUnavailable + maxSurge) determines the level of parallelism (how many nodes are being upgraded at the same time). Note: upgrades inevitably introduce some disruption since workloads need to be moved from old nodes to new, upgraded ones. Even if maxUnavailable=0, this holds true. (Disruption stays within the limits of PodDisruptionBudget, if it is configured.) Consider a hypothetical node pool with 5 nodes having maxSurge=2, maxUnavailable=1. This means the upgrade process upgrades 3 nodes simultaneously. It creates 2 additional (upgraded) nodes, then it brings down 3 old (not yet upgraded) nodes at the same time. This ensures that there are always at least 4 nodes available. # Upgrade settings control disruption and speed of the upgrade.
       "maxSurge": 42, # The maximum number of nodes that can be created beyond the current size of the node pool during the upgrade process.
       "maxUnavailable": 42, # The maximum number of nodes that can be simultaneously unavailable during the upgrade process. A node is considered available if its status is Ready.
@@ -544,7 +544,7 @@ <h3>Method Details</h3>
       ],
       "bootDiskKmsKey": "A String", # The Customer Managed Encryption Key used to encrypt the boot disk attached to each node in the node pool. This should be of the form projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME]. For more information about protecting resources with Cloud KMS Keys please see: https://cloud.google.com/compute/docs/disks/customer-managed-encryption
       "diskSizeGb": 42, # Size of the disk attached to each node, specified in GB. The smallest allowed disk size is 10GB. If unspecified, the default disk size is 100GB.
-      "diskType": "A String", # Type of the disk attached to each node (e.g. 'pd-standard' or 'pd-ssd') If unspecified, the default disk type is 'pd-standard'
+      "diskType": "A String", # Type of the disk attached to each node (e.g. 'pd-standard', 'pd-ssd' or 'pd-balanced') If unspecified, the default disk type is 'pd-standard'
       "ephemeralStorageConfig": { # EphemeralStorageConfig contains configuration for the ephemeral storage filesystem. # Parameters for the ephemeral storage filesystem. If unspecified, ephemeral storage is backed by the boot disk.
         "localSsdCount": 42, # Number of local SSDs to use to back ephemeral storage. Uses NVMe interfaces. Each local SSD is 375 GB in size. If zero, it means to disable using local SSDs as ephemeral storage.
       },
@@ -626,7 +626,7 @@ <h3>Method Details</h3>
     "podIpv4CidrSize": 42, # [Output only] The pod CIDR block size per node in this node pool.
     "selfLink": "A String", # [Output only] Server-defined URL for the resource.
     "status": "A String", # [Output only] The status of the nodes in this pool instance.
-    "statusMessage": "A String", # [Output only] Additional information about the current status of this node pool instance, if available.
+    "statusMessage": "A String", # [Output only] Deprecated. Use conditions instead. Additional information about the current status of this node pool instance, if available.
     "upgradeSettings": { # These upgrade settings control the level of parallelism and the level of disruption caused by an upgrade. maxUnavailable controls the number of nodes that can be simultaneously unavailable. maxSurge controls the number of additional nodes that can be added to the node pool temporarily for the time of the upgrade to increase the number of available nodes. (maxUnavailable + maxSurge) determines the level of parallelism (how many nodes are being upgraded at the same time). Note: upgrades inevitably introduce some disruption since workloads need to be moved from old nodes to new, upgraded ones. Even if maxUnavailable=0, this holds true. (Disruption stays within the limits of PodDisruptionBudget, if it is configured.) Consider a hypothetical node pool with 5 nodes having maxSurge=2, maxUnavailable=1. This means the upgrade process upgrades 3 nodes simultaneously. It creates 2 additional (upgraded) nodes, then it brings down 3 old (not yet upgraded) nodes at the same time. This ensures that there are always at least 4 nodes available. # Upgrade settings control disruption and speed of the upgrade.
       "maxSurge": 42, # The maximum number of nodes that can be created beyond the current size of the node pool during the upgrade process.
       "maxUnavailable": 42, # The maximum number of nodes that can be simultaneously unavailable during the upgrade process. A node is considered available if its status is Ready.
@@ -890,11 +890,11 @@ <h3>Method Details</h3>
 <pre>SetNodePoolSizeRequest sets the size of a node pool. The new size will be used for all replicas, including future replicas created by modifying NodePool.locations.

 Args:
   name: string, The name (project, location, cluster, node pool id) of the node pool to set size. Specified in the format `projects/*/locations/*/clusters/*/nodePools/*`. (required)
   body: object, The request body.
     The object takes the form of:

-{ # SetNodePoolSizeRequest sets the size a node pool.
+{ # SetNodePoolSizeRequest sets the size of a node pool.
   "clusterId": "A String", # Required. Deprecated. The name of the cluster to update. This field has been deprecated and replaced by the name field.
   "name": "A String", # The name (project, location, cluster, node pool id) of the node pool to set size. Specified in the format `projects/*/locations/*/clusters/*/nodePools/*`.
   "nodeCount": 42, # Required. The desired node count for the pool.