README.md (6 additions, 1 deletion)
@@ -277,9 +277,14 @@ The node_pools variable takes the following parameters:
 | max_count | Maximum number of nodes in the NodePool. Must be >= min_count. Cannot be used with total limits. | 100 | Optional |
 | total_max_count | Total maximum number of nodes in the NodePool. Must be >= min_count. Cannot be used with per zone limits. | null | Optional |
 | max_pods_per_node | The maximum number of pods per node in this cluster | null | Optional |
+| strategy | The upgrade strategy to be used for upgrading the nodes. Valid values are `SURGE` or `BLUE_GREEN` | "SURGE" | Optional |
 | max_surge | The number of additional nodes that can be added to the node pool during an upgrade. Increasing max_surge raises the number of nodes that can be upgraded simultaneously. Can be set to 0 or greater. | 1 | Optional |
 | max_unavailable | The number of nodes that can be simultaneously unavailable during an upgrade. Increasing max_unavailable raises the number of nodes that can be upgraded in parallel. Can be set to 0 or greater. | 0 | Optional |
-| min_count | Minimum number of nodes in the NodePool. Must be >=0 and <= max_count. Should be used when autoscaling is true. Cannot be used with total limits. | 1 | Optional |
+| node_pool_soak_duration | Time needed after draining the entire blue pool. After this period, the blue pool will be cleaned up. By default, it is set to one hour (3600 seconds). The maximum length of the soak time is 7 days (604,800 seconds). | "3600s" | Optional |
+| batch_soak_duration | Soak time after each batch gets drained, with the default being zero seconds. | "0s" | Optional |
+| batch_node_count | Absolute number of nodes to drain in a batch. If it is set to zero, this phase will be skipped. | null | Optional |
+| batch_percentage | Percentage of nodes to drain in a batch. Must be in the range of [0.0, 1.0]. If it is set to zero, this phase will be skipped. | null | Optional |
+| min_count | Minimum number of nodes in the NodePool. Must be >=0 and <= max_count. Should be used when autoscaling is true | 1 | Optional |
 | total_min_count | Total minimum number of nodes in the NodePool. Must be >=0 and <= max_count. Should be used when autoscaling is true. Cannot be used with per zone limits. | null | Optional |
 | name | The name of the node pool || Required |
 | node_count | The number of nodes in the nodepool when autoscaling is false. Otherwise defaults to 1. Only valid for non-autoscaling clusters || Required |
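
For orientation, here is a minimal sketch of how a node pool could opt into blue/green upgrades using the parameters documented above. The module source is this repo's registry address; the pool values are illustrative rather than taken from this PR, and since a batch is sized either absolutely (`batch_node_count`) or proportionally (`batch_percentage`), only one of the two is set:

```hcl
module "gke" {
  source = "terraform-google-modules/kubernetes-engine/google"

  # Required inputs (project_id, name, region, network, subnetwork,
  # ip_range_pods, ip_range_services, ...) omitted for brevity.

  node_pools = [
    {
      name         = "blue-green-pool"
      machine_type = "e2-medium"

      # Upgrade-strategy parameters introduced by this change:
      strategy                = "BLUE_GREEN"
      node_pool_soak_duration = "7200s" # soak the blue pool for 2 hours
      batch_soak_duration     = "60s"   # wait 60s after each batch drains
      batch_percentage        = 0.25    # drain 25% of the blue pool per batch
    },
  ]
}
```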
autogen/main/README.md (6 additions, 1 deletion)
@@ -213,9 +213,14 @@ The node_pools variable takes the following parameters:
 | max_count | Maximum number of nodes in the NodePool. Must be >= min_count. Cannot be used with total limits. | 100 | Optional |
 | total_max_count | Total maximum number of nodes in the NodePool. Must be >= min_count. Cannot be used with per zone limits. | null | Optional |
 | max_pods_per_node | The maximum number of pods per node in this cluster | null | Optional |
+| strategy | The upgrade strategy to be used for upgrading the nodes. Valid values are `SURGE` or `BLUE_GREEN` | "SURGE" | Optional |
 | max_surge | The number of additional nodes that can be added to the node pool during an upgrade. Increasing max_surge raises the number of nodes that can be upgraded simultaneously. Can be set to 0 or greater. | 1 | Optional |
 | max_unavailable | The number of nodes that can be simultaneously unavailable during an upgrade. Increasing max_unavailable raises the number of nodes that can be upgraded in parallel. Can be set to 0 or greater. | 0 | Optional |
-| min_count | Minimum number of nodes in the NodePool. Must be >=0 and <= max_count. Should be used when autoscaling is true. Cannot be used with total limits. | 1 | Optional |
+| node_pool_soak_duration | Time needed after draining the entire blue pool. After this period, the blue pool will be cleaned up. By default, it is set to one hour (3600 seconds). The maximum length of the soak time is 7 days (604,800 seconds). | "3600s" | Optional |
+| batch_soak_duration | Soak time after each batch gets drained, with the default being zero seconds. | "0s" | Optional |
+| batch_node_count | Absolute number of nodes to drain in a batch. If it is set to zero, this phase will be skipped. | null | Optional |
+| batch_percentage | Percentage of nodes to drain in a batch. Must be in the range of [0.0, 1.0]. If it is set to zero, this phase will be skipped. | null | Optional |
+| min_count | Minimum number of nodes in the NodePool. Must be >=0 and <= max_count. Should be used when autoscaling is true | 1 | Optional |
 | total_min_count | Total minimum number of nodes in the NodePool. Must be >=0 and <= max_count. Should be used when autoscaling is true. Cannot be used with per zone limits. | null | Optional |
description = "Enable the Identity Service component, which allows customers to use external identity providers with the K8S API."
759
758
default = false
760
759
}
760
+
761
+
variable "strategy" {
762
+
type = string
763
+
description = "The upgrade stragey to be used for upgrading the nodes. Valid values of state are: `SURGE`; `BLUE_GREEN`. By default strategy is `SURGE` (Optional)"
764
+
default = "SURGE"
765
+
}
766
+
767
+
variable "max_surge" {
768
+
type = number
769
+
description = "The number of additional nodes that can be added to the node pool during an upgrade. Increasing max_surge raises the number of nodes that can be upgraded simultaneously. Can be set to 0 or greater (Optional)"
770
+
default = null
771
+
}
772
+
773
+
variable "max_unavailable" {
774
+
type = number
775
+
description = "The number of nodes that can be simultaneously unavailable during an upgrade. Increasing max_unavailable raises the number of nodes that can be upgraded in parallel. Can be set to 0 or greater (Optional)"
776
+
default = null
777
+
}
778
+
779
+
variable "node_pool_soak_duration" {
780
+
type = string
781
+
description = "Time needed after draining the entire blue pool. After this period, the blue pool will be cleaned up (Optional)"
782
+
default = "3600s"
783
+
}
784
+
785
+
variable "batch_soak_duration" {
786
+
type = string
787
+
description = "Soak time after each batch gets drained (Optionial)"
788
+
default = "0s"
789
+
}
790
+
791
+
variable "batch_percentage" {
792
+
type = string
793
+
description = "Percentage of the blue pool nodes to drain in a batch (Optional)"
794
+
default = null
795
+
}
796
+
797
+
variable "batch_node_count" {
798
+
type = number
799
+
description = "The number of blue nodes to drain in a batch (Optional)"
modules/beta-private-cluster-update-variant/README.md (13 additions, 1 deletion)
@@ -163,6 +163,9 @@ Then perform the following commands on the root folder:
 | add\_master\_webhook\_firewall\_rules | Create master\_webhook firewall rules for ports defined in `firewall_inbound_ports` | `bool` | `false` | no |
 | add\_shadow\_firewall\_rules | Create GKE shadow firewall (the same as default firewall rules with firewall logs enabled). | `bool` | `false` | no |
 | authenticator\_security\_group | The name of the RBAC security group for use with Google security groups in Kubernetes RBAC. Group name must be in format gke-security-groups@yourdomain.com | `string` | `null` | no |
+| batch\_node\_count | The number of blue nodes to drain in a batch (Optional) | `number` | `null` | no |
+| batch\_percentage | Percentage of the blue pool nodes to drain in a batch (Optional) | `string` | `null` | no |
+| batch\_soak\_duration | Soak time after each batch gets drained (Optional) | `string` | `"0s"` | no |
 | cloudrun | (Beta) Enable CloudRun addon | `bool` | `false` | no |
 | cloudrun\_load\_balancer\_type | (Beta) Configure the Cloud Run load balancer type. External by default. Set to `LOAD_BALANCER_TYPE_INTERNAL` to configure as an internal load balancer. | `string` | `""` | no |
@@ -227,6 +230,8 @@ Then perform the following commands on the root folder:
 | master\_authorized\_networks | List of master authorized networks. If none are provided, disallow external access (except the cluster node IPs, which GKE automatically whitelists). | `list(object({ cidr_block = string, display_name = string }))` | `[]` | no |
 | master\_global\_access\_enabled | Whether the cluster master is accessible globally (from any region) or only within the same region as the private endpoint. | `bool` | `true` | no |
 | master\_ipv4\_cidr\_block | (Beta) The IP range in CIDR notation to use for the hosted master network | `string` | `"10.0.0.0/28"` | no |
+| max\_surge | The number of additional nodes that can be added to the node pool during an upgrade. Increasing max\_surge raises the number of nodes that can be upgraded simultaneously. Can be set to 0 or greater (Optional) | `number` | `null` | no |
+| max\_unavailable | The number of nodes that can be simultaneously unavailable during an upgrade. Increasing max\_unavailable raises the number of nodes that can be upgraded in parallel. Can be set to 0 or greater (Optional) | `number` | `null` | no |
 | monitoring\_enable\_managed\_prometheus | Configuration for Managed Service for Prometheus. Whether or not the managed collection is enabled. | `bool` | `false` | no |
 | monitoring\_enabled\_components | List of services to monitor: SYSTEM\_COMPONENTS, WORKLOADS (provider version >= 3.89.0). Empty list is default GKE configuration. | `list(string)` | `[]` | no |
 | monitoring\_service | The monitoring service that the cluster should write metrics to. Automatically send metrics from pods in the cluster to the Google Cloud Monitoring API. VM metrics will be collected by Google Compute Engine regardless of this setting. Available options include monitoring.googleapis.com, monitoring.googleapis.com/kubernetes (beta) and none | `string` | `"monitoring.googleapis.com/kubernetes"` | no |
@@ -236,6 +241,7 @@ Then perform the following commands on the root folder:
 | network\_policy\_provider | The network policy provider. | `string` | `"CALICO"` | no |
 | network\_project\_id | The project ID of the shared VPC's host (for shared vpc support) | `string` | `""` | no |
 | node\_metadata | Specifies how node metadata is exposed to the workload running on the node | `string` | `"GKE_METADATA"` | no |
+| node\_pool\_soak\_duration | Time needed after draining the entire blue pool. After this period, the blue pool will be cleaned up (Optional) | `string` | `"3600s"` | no |
 | node\_pools | List of maps containing node pools | `list(map(any))` | <pre>[<br>  {<br>    "name": "default-node-pool"<br>  }<br>]</pre> | no |
 | node\_pools\_labels | Map of maps containing node labels by node-pool name | `map(map(string))` | <pre>{<br>  "all": {},<br>  "default-node-pool": {}<br>}</pre> | no |
 | node\_pools\_linux\_node\_configs\_sysctls | Map of maps containing linux node config sysctls by node-pool name | `map(map(string))` | <pre>{<br>  "all": {},<br>  "default-node-pool": {}<br>}</pre> | no |
@@ -259,6 +265,7 @@ Then perform the following commands on the root folder:
 | shadow\_firewall\_rules\_log\_config | The log\_config for shadow firewall rules. You can set this variable to `null` to disable logging. | <pre>object({<br>  metadata = string<br>})</pre> | <pre>{<br>  "metadata": "INCLUDE_ALL_METADATA"<br>}</pre> | no |
 | shadow\_firewall\_rules\_priority | The firewall priority of GKE shadow firewall rules. The priority should be less than default firewall, which is 1000. | `number` | `999` | no |
 | skip\_provisioners | Flag to skip all local-exec provisioners. It breaks `stub_domains` and `upstream_nameservers` variables functionality. | `bool` | `false` | no |
+| strategy | The upgrade strategy to be used for upgrading the nodes. Valid values are `SURGE` or `BLUE_GREEN`. By default, strategy is `SURGE` (Optional) | `string` | `"SURGE"` | no |
 | stub\_domains | Map of stub domains and their resolvers to forward DNS queries for a certain domain to an external DNS server | `map(list(string))` | `{}` | no |
 | subnetwork | The subnetwork to host the cluster in (required) | `string` | n/a | yes |
 | timeouts | Timeout for cluster operations. | `map(string)` | `{}` | no |
@@ -342,9 +349,14 @@ The node_pools variable takes the following parameters:
 | max_count | Maximum number of nodes in the NodePool. Must be >= min_count. Cannot be used with total limits. | 100 | Optional |
 | total_max_count | Total maximum number of nodes in the NodePool. Must be >= min_count. Cannot be used with per zone limits. | null | Optional |
 | max_pods_per_node | The maximum number of pods per node in this cluster | null | Optional |
+| strategy | The upgrade strategy to be used for upgrading the nodes. Valid values are `SURGE` or `BLUE_GREEN` | "SURGE" | Optional |
 | max_surge | The number of additional nodes that can be added to the node pool during an upgrade. Increasing max_surge raises the number of nodes that can be upgraded simultaneously. Can be set to 0 or greater. | 1 | Optional |
 | max_unavailable | The number of nodes that can be simultaneously unavailable during an upgrade. Increasing max_unavailable raises the number of nodes that can be upgraded in parallel. Can be set to 0 or greater. | 0 | Optional |
-| min_count | Minimum number of nodes in the NodePool. Must be >=0 and <= max_count. Should be used when autoscaling is true. Cannot be used with total limits. | 1 | Optional |
+| node_pool_soak_duration | Time needed after draining the entire blue pool. After this period, the blue pool will be cleaned up. By default, it is set to one hour (3600 seconds). The maximum length of the soak time is 7 days (604,800 seconds). | "3600s" | Optional |
+| batch_soak_duration | Soak time after each batch gets drained, with the default being zero seconds. | "0s" | Optional |
+| batch_node_count | Absolute number of nodes to drain in a batch. If it is set to zero, this phase will be skipped. | null | Optional |
+| batch_percentage | Percentage of nodes to drain in a batch. Must be in the range of [0.0, 1.0]. If it is set to zero, this phase will be skipped. | null | Optional |
+| min_count | Minimum number of nodes in the NodePool. Must be >=0 and <= max_count. Should be used when autoscaling is true | 1 | Optional |
 | total_min_count | Total minimum number of nodes in the NodePool. Must be >=0 and <= max_count. Should be used when autoscaling is true. Cannot be used with per zone limits. | null | Optional |
 | name | The name of the node pool || Required |
 | placement_policy | Placement type to set for nodes in a node pool. Can be set as [COMPACT](https://cloud.google.com/kubernetes-engine/docs/how-to/compact-placement#overview) if desired || Optional |
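
Because this variant also exposes the new settings as module-level inputs (see the inputs table earlier in this file), a caller can set them directly on the module block. A sketch under the same assumptions as above, with required arguments omitted and values illustrative:

```hcl
module "gke" {
  source = "terraform-google-modules/kubernetes-engine/google//modules/beta-private-cluster-update-variant"

  # Required inputs (project_id, name, region, network, subnetwork,
  # ip_range_pods, ip_range_services, ...) omitted for brevity.

  # Module-level upgrade-strategy inputs from this change:
  strategy                = "BLUE_GREEN"
  node_pool_soak_duration = "3600s" # clean up the blue pool an hour after it drains
  batch_soak_duration     = "0s"    # no pause between batches
  batch_node_count        = 1       # drain one blue node per batch
}
```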