README.md (16 additions, 0 deletions)
@@ -108,6 +108,22 @@ Then perform the following commands on the root folder:
- `terraform apply` to apply the infrastructure build
- `terraform destroy` to destroy the built infrastructure
## Upgrade to v3.0.0

v3.0.0 is a breaking release. Refer to the
[Upgrading to v3.0 guide][upgrading-to-v3.0] for details.

## Upgrade to v2.0.0

v2.0.0 is a breaking release. Refer to the
[Upgrading to v2.0 guide][upgrading-to-v2.0] for details.

## Upgrade to v1.0.0

Version 1.0.0 of this module introduces a breaking change: adding the `disable-legacy-endpoints` metadata field to all node pools. This metadata is required by GKE and [determines whether the `/0.1/` and `/v1beta1/` paths are available in the nodes' metadata server](https://cloud.google.com/kubernetes-engine/docs/how-to/protecting-cluster-metadata#disable-legacy-apis). If your applications do not require access to the node's metadata server, you can leave the default value of `true` provided by the module. If your applications require access to the metadata server, be sure to read the linked documentation to see if you need to set the value for this field to `false` to allow your applications access to the above metadata server paths.

In either case, upgrading to module version `v1.0.0` will trigger a recreation of all node pools in the cluster.
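The module manages this metadata for you. For context, here is a minimal sketch of the equivalent setting on a raw `google_container_node_pool` resource; the resource and names below are hypothetical and not part of this module's code:

```
# Illustrative only: when using this module you do not declare this yourself.
resource "google_container_node_pool" "example" {
  name    = "example-pool"
  cluster = "example-cluster"

  node_config {
    metadata = {
      # "true" disables the legacy /0.1/ and /v1beta1/ metadata server paths;
      # "false" keeps them reachable from workloads on the node.
      "disable-legacy-endpoints" = "true"
    }
  }
}
```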
<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
modules/beta-private-cluster-update-variant/README.md (0 additions, 135 deletions)
@@ -278,141 +278,6 @@ The project has the following folders and files:
- /README.MD: This file.
- /modules: Private and beta sub modules.

## Templating

To more cleanly handle cases where desired functionality would require complex duplication of Terraform resources (i.e. [PR 51](https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/pull/51)), this repository is largely generated from the [`autogen`](/autogen) directory.

The root module is generated by running `make generate`. Changes to this repository should be made in the [`autogen`](/autogen) directory where appropriate.

Note: The correct sequence to update the repo using autogen functionality is to run
`make generate && make generate_docs`. This will create the various Terraform files, and then
generate the Terraform documentation using `terraform-docs`.

### Autogeneration of documentation from .tf files

Run

```
make generate_docs
```
### Integration test

Integration tests are run through [test-kitchen](https://github.com/test-kitchen/test-kitchen), [kitchen-terraform](https://github.com/newcontext-oss/kitchen-terraform), and [InSpec](https://github.com/inspec/inspec).

Six test-kitchen instances are defined:

- `deploy-service`
- `node-pool`
- `shared-vpc`
- `simple-regional`
- `simple-zonal`
- `stub-domains`

The test-kitchen instances in `test/fixtures/` wrap identically-named examples in the `examples/` directory.

#### Setup

1. Configure the [test fixtures](#test-configuration)
2. Download a Service Account key with the necessary permissions and put it in the module's root directory with the name `credentials.json`.
   - Requires the [permissions to run the module](#configure-a-service-account)
   - Requires `roles/compute.networkAdmin` to create the test suite's networks
   - Requires `roles/resourcemanager.projectIamAdmin` since service account creation is tested
3. Build the Docker container for testing:

   ```
   make docker_build_kitchen_terraform
   ```
4. Run the testing container in interactive mode:

   ```
   make docker_run
   ```

   The module root directory will be loaded into the Docker container at `/cft/workdir/`.
5. Run kitchen-terraform to test the infrastructure:

   1. `kitchen create` creates Terraform state and downloads modules, if applicable.
   2. `kitchen converge` creates the underlying resources. Run `kitchen converge <INSTANCE_NAME>` to create resources for a specific test case.
   3. Run `kitchen converge` again. This is necessary due to an oddity in how `networkPolicyConfig` is handled by the upstream API. (See [#72](https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/issues/72) for details).
   4. `kitchen verify` tests the created infrastructure. Run `kitchen verify <INSTANCE_NAME>` to run a specific test case.
   5. `kitchen destroy` tears down the underlying resources created by `kitchen converge`. Run `kitchen destroy <INSTANCE_NAME>` to tear down resources for a specific test case.

Alternatively, you can simply run `make test_integration_docker` to run all the test steps non-interactively.

If you wish to parallelize running the test suites, it is also possible to offload the work onto Concourse to run each test suite for you using the command `make test_integration_concourse`. The `.concourse` directory will be created and contain all of the logs from the running test suites.
When running tests locally, you will need to use your own test project environment. You can configure your environment by setting all of the following variables:

Each test-kitchen instance is configured with a `variables.tfvars` file in the test fixture directory, e.g. `test/fixtures/node_pool/terraform.tfvars`.
For convenience, since all of the variables are project-specific, these files have been symlinked to `test/fixtures/shared/terraform.tfvars`.
Similarly, each test fixture has a `variables.tf` to define these variables, and an `outputs.tf` to facilitate providing necessary information for `inspec` to locate and query against created resources.
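As an illustration, such a fixture `terraform.tfvars` typically holds only project-specific values; the variable names and values below are hypothetical examples rather than the exact set any fixture defines:

```
# Hypothetical example values; the real variable names come from each
# fixture's variables.tf.
project_id                     = "my-test-project"
region                         = "us-east4"
zones                          = ["us-east4-a", "us-east4-b"]
compute_engine_service_account = "ci-test@my-test-project.iam.gserviceaccount.com"
```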
Each test-kitchen instance creates a GCP Network and Subnetwork fixture to house resources, and may create any other necessary fixture data as needed.

### Autogeneration of documentation from .tf files

Run

```
make generate_docs
```
### Linting

The makefile in this project will lint or sometimes just format any shell,
Python, golang, Terraform, or Dockerfiles. The linters will only be run if
the makefile finds files with the appropriate file extension.

All of the linter checks are in the default make target, so you just have to
run

```
make -s
```

The -s is for 'silent'. Successful output looks like this
modules/private-cluster-update-variant/README.md (0 additions, 135 deletions)
@@ -257,141 +257,6 @@ The project has the following folders and files:
- /README.MD: This file.
- /modules: Private and beta sub modules.

## Templating

To more cleanly handle cases where desired functionality would require complex duplication of Terraform resources (i.e. [PR 51](https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/pull/51)), this repository is largely generated from the [`autogen`](/autogen) directory.

The root module is generated by running `make generate`. Changes to this repository should be made in the [`autogen`](/autogen) directory where appropriate.

Note: The correct sequence to update the repo using autogen functionality is to run
`make generate && make generate_docs`. This will create the various Terraform files, and then
generate the Terraform documentation using `terraform-docs`.

### Autogeneration of documentation from .tf files

Run

```
make generate_docs
```
### Integration test

Integration tests are run through [test-kitchen](https://github.com/test-kitchen/test-kitchen), [kitchen-terraform](https://github.com/newcontext-oss/kitchen-terraform), and [InSpec](https://github.com/inspec/inspec).

Six test-kitchen instances are defined:

- `deploy-service`
- `node-pool`
- `shared-vpc`
- `simple-regional`
- `simple-zonal`
- `stub-domains`

The test-kitchen instances in `test/fixtures/` wrap identically-named examples in the `examples/` directory.

#### Setup

1. Configure the [test fixtures](#test-configuration)
2. Download a Service Account key with the necessary permissions and put it in the module's root directory with the name `credentials.json`.
   - Requires the [permissions to run the module](#configure-a-service-account)
   - Requires `roles/compute.networkAdmin` to create the test suite's networks
   - Requires `roles/resourcemanager.projectIamAdmin` since service account creation is tested
3. Build the Docker container for testing:

   ```
   make docker_build_kitchen_terraform
   ```
4. Run the testing container in interactive mode:

   ```
   make docker_run
   ```

   The module root directory will be loaded into the Docker container at `/cft/workdir/`.
5. Run kitchen-terraform to test the infrastructure:

   1. `kitchen create` creates Terraform state and downloads modules, if applicable.
   2. `kitchen converge` creates the underlying resources. Run `kitchen converge <INSTANCE_NAME>` to create resources for a specific test case.
   3. Run `kitchen converge` again. This is necessary due to an oddity in how `networkPolicyConfig` is handled by the upstream API. (See [#72](https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/issues/72) for details).
   4. `kitchen verify` tests the created infrastructure. Run `kitchen verify <INSTANCE_NAME>` to run a specific test case.
   5. `kitchen destroy` tears down the underlying resources created by `kitchen converge`. Run `kitchen destroy <INSTANCE_NAME>` to tear down resources for a specific test case.

Alternatively, you can simply run `make test_integration_docker` to run all the test steps non-interactively.

If you wish to parallelize running the test suites, it is also possible to offload the work onto Concourse to run each test suite for you using the command `make test_integration_concourse`. The `.concourse` directory will be created and contain all of the logs from the running test suites.
When running tests locally, you will need to use your own test project environment. You can configure your environment by setting all of the following variables:

Each test-kitchen instance is configured with a `variables.tfvars` file in the test fixture directory, e.g. `test/fixtures/node_pool/terraform.tfvars`.
For convenience, since all of the variables are project-specific, these files have been symlinked to `test/fixtures/shared/terraform.tfvars`.
Similarly, each test fixture has a `variables.tf` to define these variables, and an `outputs.tf` to facilitate providing necessary information for `inspec` to locate and query against created resources.
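For illustration, a fixture `outputs.tf` simply re-exports the identifiers the InSpec controls need in order to locate resources; the output names and the `module.example` reference below are hypothetical stand-ins, not this repository's actual fixture code:

```
# Hypothetical fixture outputs; "example" stands in for whatever module
# instance the fixture's main.tf declares.
output "project_id" {
  value = var.project_id
}

output "cluster_name" {
  description = "Cluster name for the InSpec controls to query."
  value       = module.example.name
}

output "location" {
  description = "Cluster location used when querying the GKE API."
  value       = module.example.location
}
```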
Each test-kitchen instance creates a GCP Network and Subnetwork fixture to house resources, and may create any other necessary fixture data as needed.
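A rough sketch of what such a network fixture can look like in Terraform; the resource names, variables, and CIDR ranges here are placeholders rather than the repository's actual fixture code:

```
# Illustrative fixture network and subnetwork with secondary ranges for GKE.
resource "google_compute_network" "test" {
  name                    = "kitchen-test-network"
  project                 = var.project_id
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "test" {
  name          = "kitchen-test-subnet"
  project       = var.project_id
  region        = var.region
  network       = google_compute_network.test.self_link
  ip_cidr_range = "10.0.0.0/17"

  secondary_ip_range {
    range_name    = "pods"
    ip_cidr_range = "172.16.0.0/16"
  }

  secondary_ip_range {
    range_name    = "services"
    ip_cidr_range = "172.18.0.0/20"
  }
}
```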
### Autogeneration of documentation from .tf files

Run

```
make generate_docs
```
### Linting

The makefile in this project will lint or sometimes just format any shell,
Python, golang, Terraform, or Dockerfiles. The linters will only be run if
the makefile finds files with the appropriate file extension.

All of the linter checks are in the default make target, so you just have to
run

```
make -s
```

The -s is for 'silent'. Successful output looks like this