
Commit 99eb259

[test] Fix workspace integration tests (#17222)
* [tests] unique ContextURL for commit only
* [tests] include stdout in the error message for commit. Sometimes commit fails with status 1 without error output, so it's helpful to look at stdout.
* [tests] fix TestGitActions by using the test case context. Prior to this, createWorkspace was working, but getWorkspace could not find the created workspace.
* [test] bypass exit code 1 on `git commit` with `--allow-empty`
* [test] fix gcloud auth: do setup before auth
* Show all output. Might remove later...
* [test] avoid using deleted user, identity, and token
* [test] add organizationId to CreateWorkspaceOptions. It's expected on the TypeScript side: https://github.com/gitpod-io/gitpod/blob/90b7d658589fd6e9fb239ff239591b9f1218fc83/components/server/src/workspace/gitpod-server-impl.ts#L1227-L1235
* [test] orgId is required on createWorkspace, but sometimes there's no team 🤷
* [test] fix git context tests. We use UBP now; there is no more Unleashed. Also, remove the "ff" feature flag code (which was for PVC). It was mutating the username, resulting in Code 460 errors on createWorkspace.
* [test] use the example test as the example
* [test] fix context tests when run as the gitpod-integration-test user
* [test] clean-up
* [test] wait for workspaces to stop, to avoid intermittent test failures
* [test] add code owners. This way, we can assert tests are passing for all teams prior to merging.
* [test] limit the number of tests that can run in parallel
* [test] no parallel tests, to see if the flakiness goes away... and bump the timeout because we reduced parallel runs
* [preview] update the VM image to have parity with production. This:
  1. updates from K3s 1.23 to 1.26
  2. requires that we remove PodSecurityPolicy changes (as it's no longer supported)
  3. resolves intermittent disk pressure issues
* [preview] no PSP, in support of the VM image update
  * We were getting PSP from rook/ceph, which I think was for PVC
  * We were getting PSP from the monitoring-satellite
* [test] don't wait for workspace stop in git_test.go; we're testing git actions. Why? We miss state transitions (it's not guaranteed each one will be returned), and there are other tests waiting. For example, in the log below, we miss INITIALIZING, RUNNING, and STOPPING:

  ```
  workspace.go:369: attempt to create the workspace as user 0565bb3c-e724-4da9-84fb-22e2a7b23b8c, with context github.com/gitpod-io/gitpod-test-repo/tree/integration-test/commit
  workspace.go:411: attempt to get the workspace information: gitpodio-gitpodtestrepo-nscsowy1njb
  workspace.go:423: not preparing
  workspace.go:432: got the workspace information: gitpodio-gitpodtestrepo-nscsowy1njb
  workspace.go:460: wait for workspace to be fully up and running
  workspace.go:569: prepare for a connection with ws-manager
  workspace.go:590: established for a connection with ws-manager
  workspace.go:598: check if the status of workspace is in the running phase: 462f1325-3019-4547-8666-508e8353335e
  workspace.go:631: status: 462f1325-3019-4547-8666-508e8353335e, PENDING
  workspace.go:598: check if the status of workspace is in the running phase: 462f1325-3019-4547-8666-508e8353335e
  workspace.go:631: status: 462f1325-3019-4547-8666-508e8353335e, PENDING
  workspace.go:598: check if the status of workspace is in the running phase: 462f1325-3019-4547-8666-508e8353335e
  workspace.go:631: status: 462f1325-3019-4547-8666-508e8353335e, CREATING
  workspace.go:598: check if the status of workspace is in the running phase: 462f1325-3019-4547-8666-508e8353335e
  workspace.go:631: status: 462f1325-3019-4547-8666-508e8353335e, CREATING
  workspace.go:598: check if the status of workspace is in the running phase: 462f1325-3019-4547-8666-508e8353335e
  workspace.go:631: status: 462f1325-3019-4547-8666-508e8353335e, CREATING
  workspace.go:598: check if the status of workspace is in the running phase: 462f1325-3019-4547-8666-508e8353335e
  workspace.go:504: waiting for stopping the workspace: 462f1325-3019-4547-8666-508e8353335e
  workspace.go:514: attemp to delete the workspace: 462f1325-3019-4547-8666-508e8353335e
  workspace.go:797: confirmed the worksapce is stopped: 462f1325-3019-4547-8666-508e8353335e, STOPPED
  workspace.go:538: successfully terminated workspace
  git_test.go:172: failed to wait for the workspace to start up: cannot wait for workspace: context deadline exceeded
  ```

* [preview] retry installing trust-manager, and use trust-manager from the Packer image
* [test] clarify the USER_TOKEN value for preview environments
* Cleanup
* [preview] remove commented-out YAML related to PodSecurityPolicy
1 parent c2923d8 commit 99eb259

File tree

18 files changed: +177 -392 lines changed

.github/CODEOWNERS

Lines changed: 15 additions & 0 deletions
```diff
@@ -128,3 +128,18 @@
 /CHANGELOG.md
 /components/ide/jetbrains/backend-plugin/gradle-latest.properties
 /components/ide/jetbrains/gateway-plugin/gradle-latest.properties
+
+#
+# Add so that teams assert we're not breaking each other's integration tests
+/test/pkg/agent @gitpod-io/engineering-workspace
+/test/pkg/integration @gitpod-io/engineering-ide @gitpod-io/engineering-workspace
+/test/pkg/report @gitpod-io/engineering-workspace
+/test/tests/workspace @gitpod-io/engineering-workspace
+/test/tests/smoke-test @gitpod-io/engineering-ide @gitpod-io/engineering-workspace
+/test/tests/ide @gitpod-io/engineering-ide
+/test/tests/components/content-service @gitpod-io/engineering-workspace
+/test/tests/components/database @gitpod-io/engineering-webapp
+/test/tests/components/image-builder @gitpod-io/engineering-workspace
+/test/tests/components/server @gitpod-io/engineering-webapp
+/test/tests/components/ws-daemon @gitpod-io/engineering-workspace
+/test/tests/components/ws-manager @gitpod-io/engineering-workspace
```

.github/workflows/workspace-integration-tests.yml

Lines changed: 6 additions & 11 deletions
```diff
@@ -38,21 +38,14 @@ jobs:
     steps:
       # sometimes auth fails with:
       # google-github-actions/setup-gcloud failed with: EACCES: permission denied, mkdir '/__t/gcloud'
+      - name: Set up Cloud SDK
+        uses: google-github-actions/setup-gcloud@v1
       - id: auth
         uses: google-github-actions/auth@v1
         continue-on-error: true
         with:
           token_format: access_token
           credentials_json: "${{ secrets.GCP_CREDENTIALS }}"
-      # so we retry on failure
-      - id: auth-retry
-        uses: google-github-actions/auth@v1
-        if: steps.auth.outcome == 'failure'
-        with:
-          token_format: access_token
-          credentials_json: "${{ secrets.GCP_CREDENTIALS }}"
-      - name: Set up Cloud SDK
-        uses: google-github-actions/setup-gcloud@v1
       # do this step as early as possible, so that Slack Notify failure has the secret
       - name: Get Secrets from GCP
         id: "secrets"
@@ -193,7 +186,7 @@ jobs:
           args+=( "-kubeconfig=/home/gitpod/.kube/config" )
           args+=( "-namespace=default" )
           [[ "$USERNAME" != "" ]] && args+=( "-username=$USERNAME" )
-          args+=( "-timeout=60m" )
+          args+=( "-timeout=90m" )

           BASE_TESTS_DIR="$GITHUB_WORKSPACE/test/tests"
           CONTENT_SERVICE_TESTS="$BASE_TESTS_DIR/components/content-service"
@@ -225,7 +218,8 @@ jobs:
           fi

           set +e
-          go test -p 2 -v ./... "${args[@]}" -run '.*[^.SerialOnly]$' 2>&1 | go-junit-report -subtest-mode=exclude-parents -set-exit-code -out "TEST-${TEST_NAME}-PARALLEL.xml" -iocopy
+          # running tests in parallel saves time, but is flakey.
+          go test -p 1 --parallel 1 -v ./... "${args[@]}" -run '.*[^.SerialOnly]$' 2>&1 | go-junit-report -subtest-mode=exclude-parents -set-exit-code -out "TEST-${TEST_NAME}-PARALLEL.xml" -iocopy
           RC=${PIPESTATUS[0]}
           set -e
@@ -240,6 +234,7 @@ jobs:
         uses: test-summary/action@v2
         with:
           paths: "test/tests/**/TEST-*.xml"
+          show: "all"
         if: always()
       - name: Slack Notification
         uses: rtCamp/action-slack-notify@v2
```

.werft/vm/manifests/rook-ceph/common.yaml

Lines changed: 0 additions & 256 deletions
```diff
@@ -80,24 +80,6 @@ rules:
     resources: ["volumesnapshots/status"]
     verbs: ["update", "patch"]
 ---
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRole
-metadata:
-  name: 'psp:rook'
-  labels:
-    operator: rook
-    storage-backend: ceph
-    app.kubernetes.io/part-of: rook-ceph-operator
-rules:
-  - apiGroups:
-      - policy
-    resources:
-      - podsecuritypolicies
-    resourceNames:
-      - 00-rook-privileged
-    verbs:
-      - use
----
 kind: ClusterRole
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
@@ -701,156 +683,6 @@ subjects:
     name: rook-ceph-system
     namespace: rook-ceph # namespace:operator
 ---
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRoleBinding
-metadata:
-  name: rook-ceph-system-psp
-  labels:
-    operator: rook
-    storage-backend: ceph
-    app.kubernetes.io/part-of: rook-ceph-operator
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: 'psp:rook'
-subjects:
-  - kind: ServiceAccount
-    name: rook-ceph-system
-    namespace: rook-ceph # namespace:operator
----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRoleBinding
-metadata:
-  name: rook-csi-cephfs-plugin-sa-psp
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: 'psp:rook'
-subjects:
-  - kind: ServiceAccount
-    name: rook-csi-cephfs-plugin-sa
-    namespace: rook-ceph # namespace:operator
----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRoleBinding
-metadata:
-  name: rook-csi-cephfs-provisioner-sa-psp
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: 'psp:rook'
-subjects:
-  - kind: ServiceAccount
-    name: rook-csi-cephfs-provisioner-sa
-    namespace: rook-ceph # namespace:operator
----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRoleBinding
-metadata:
-  name: rook-csi-rbd-plugin-sa-psp
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: 'psp:rook'
-subjects:
-  - kind: ServiceAccount
-    name: rook-csi-rbd-plugin-sa
-    namespace: rook-ceph # namespace:operator
----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRoleBinding
-metadata:
-  name: rook-csi-rbd-provisioner-sa-psp
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: 'psp:rook'
-subjects:
-  - kind: ServiceAccount
-    name: rook-csi-rbd-provisioner-sa
-    namespace: rook-ceph # namespace:operator
----
-# We expect most Kubernetes teams to follow the Kubernetes docs and have these PSPs.
-# * privileged (for kube-system namespace)
-# * restricted (for all logged in users)
-#
-# PSPs are applied based on the first match alphabetically. `rook-ceph-operator` comes after
-# `restricted` alphabetically, so we name this `00-rook-privileged`, so it stays somewhere
-# close to the top and so `rook-system` gets the intended PSP. This may need to be renamed in
-# environments with other `00`-prefixed PSPs.
-#
-# More on PSP ordering: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#policy-order
-apiVersion: policy/v1beta1
-kind: PodSecurityPolicy
-metadata:
-  name: 00-rook-privileged
-  annotations:
-    seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'runtime/default'
-    seccomp.security.alpha.kubernetes.io/defaultProfileName: 'runtime/default'
-spec:
-  privileged: true
-  allowedCapabilities:
-    # required by CSI
-    - SYS_ADMIN
-    - MKNOD
-  fsGroup:
-    rule: RunAsAny
-  # runAsUser, supplementalGroups - Rook needs to run some pods as root
-  # Ceph pods could be run as the Ceph user, but that user isn't always known ahead of time
-  runAsUser:
-    rule: RunAsAny
-  supplementalGroups:
-    rule: RunAsAny
-  # seLinux - seLinux context is unknown ahead of time; set if this is well-known
-  seLinux:
-    rule: RunAsAny
-  volumes:
-    # recommended minimum set
-    - configMap
-    - downwardAPI
-    - emptyDir
-    - persistentVolumeClaim
-    - secret
-    - projected
-    # required for Rook
-    - hostPath
-  # allowedHostPaths can be set to Rook's known host volume mount points when they are fully-known
-  # allowedHostPaths:
-  #   - pathPrefix: "/run/udev" # for OSD prep
-  #     readOnly: false
-  #   - pathPrefix: "/dev" # for OSD prep
-  #     readOnly: false
-  #   - pathPrefix: "/var/lib/rook" # or whatever the dataDirHostPath value is set to
-  #     readOnly: false
-  # Ceph requires host IPC for setting up encrypted devices
-  hostIPC: true
-  # Ceph OSDs need to share the same PID namespace
-  hostPID: true
-  # hostNetwork can be set to 'false' if host networking isn't used
-  hostNetwork: true
-  hostPorts:
-    # Ceph messenger protocol v1
-    - min: 6789
-      max: 6790 # <- support old default port
-    # Ceph messenger protocol v2
-    - min: 3300
-      max: 3300
-    # Ceph RADOS ports for OSDs, MDSes
-    - min: 6800
-      max: 7300
-    # # Ceph dashboard port HTTP (not recommended)
-    # - min: 7000
-    #   max: 7000
-    # Ceph dashboard port HTTPS
-    - min: 8443
-      max: 8443
-    # Ceph mgr Prometheus Metrics
-    - min: 9283
-      max: 9283
-    # port for CSIAddons
-    - min: 9070
-      max: 9070
----
 kind: Role
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
@@ -1147,38 +979,6 @@ subjects:
     name: rook-ceph-cmd-reporter
     namespace: rook-ceph # namespace:cluster
 ---
-apiVersion: rbac.authorization.k8s.io/v1
-kind: RoleBinding
-metadata:
-  name: rook-ceph-cmd-reporter-psp
-  namespace: rook-ceph # namespace:cluster
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: psp:rook
-subjects:
-  - kind: ServiceAccount
-    name: rook-ceph-cmd-reporter
-    namespace: rook-ceph # namespace:cluster
----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: RoleBinding
-metadata:
-  name: rook-ceph-default-psp
-  namespace: rook-ceph # namespace:cluster
-  labels:
-    operator: rook
-    storage-backend: ceph
-    app.kubernetes.io/part-of: rook-ceph-operator
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: psp:rook
-subjects:
-  - kind: ServiceAccount
-    name: default
-    namespace: rook-ceph # namespace:cluster
----
 # Allow the ceph mgr to access resources scoped to the CephCluster namespace necessary for mgr modules
 kind: RoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
@@ -1194,20 +994,6 @@ subjects:
     name: rook-ceph-mgr
     namespace: rook-ceph # namespace:cluster
 ---
-apiVersion: rbac.authorization.k8s.io/v1
-kind: RoleBinding
-metadata:
-  name: rook-ceph-mgr-psp
-  namespace: rook-ceph # namespace:cluster
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: psp:rook
-subjects:
-  - kind: ServiceAccount
-    name: rook-ceph-mgr
-    namespace: rook-ceph # namespace:cluster
----
 # Allow the ceph mgr to access resources in the Rook operator namespace necessary for mgr modules
 kind: RoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
@@ -1238,20 +1024,6 @@ subjects:
     name: rook-ceph-osd
     namespace: rook-ceph # namespace:cluster
 ---
-apiVersion: rbac.authorization.k8s.io/v1
-kind: RoleBinding
-metadata:
-  name: rook-ceph-osd-psp
-  namespace: rook-ceph # namespace:cluster
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: psp:rook
-subjects:
-  - kind: ServiceAccount
-    name: rook-ceph-osd
-    namespace: rook-ceph # namespace:cluster
----
 # Allow the osd purge job to run in this namespace
 kind: RoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
@@ -1267,20 +1039,6 @@ subjects:
     name: rook-ceph-purge-osd
     namespace: rook-ceph # namespace:cluster
 ---
-apiVersion: rbac.authorization.k8s.io/v1
-kind: RoleBinding
-metadata:
-  name: rook-ceph-purge-osd-psp
-  namespace: rook-ceph # namespace:cluster
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: psp:rook
-subjects:
-  - kind: ServiceAccount
-    name: rook-ceph-purge-osd
-    namespace: rook-ceph # namespace:cluster
----
 # Allow the rgw pods in this namespace to work with configmaps
 kind: RoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
@@ -1296,20 +1054,6 @@ subjects:
     name: rook-ceph-rgw
     namespace: rook-ceph # namespace:cluster
 ---
-apiVersion: rbac.authorization.k8s.io/v1
-kind: RoleBinding
-metadata:
-  name: rook-ceph-rgw-psp
-  namespace: rook-ceph # namespace:cluster
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: psp:rook
-subjects:
-  - kind: ServiceAccount
-    name: rook-ceph-rgw
-    namespace: rook-ceph # namespace:cluster
----
 # Grant the operator, agent, and discovery agents access to resources in the rook-ceph-system namespace
 kind: RoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
```

components/gitpod-protocol/go/gitpod-service.go

Lines changed: 1 addition & 0 deletions
```diff
@@ -2053,6 +2053,7 @@ type UpdateOwnAuthProviderParams struct {
 type CreateWorkspaceOptions struct {
 	StartWorkspaceOptions
 	ContextURL string `json:"contextUrl,omitempty"`
+	OrganizationId string `json:"organizationId,omitempty"`
 	IgnoreRunningWorkspaceOnSameCommit bool `json:"ignoreRunningWorkspaceOnSameCommit,omitemopty"`
 	IgnoreRunningPrebuild bool `json:"ignoreRunningPrebuild,omitemopty"`
 	AllowUsingPreviousPrebuilds bool `json:"allowUsingPreviousPrebuilds,omitemopty"`
```

dev/preview/BUILD.yaml

Lines changed: 1 addition & 1 deletion
```diff
@@ -33,7 +33,7 @@ scripts:
       export TF_VAR_preview_name="${TF_VAR_preview_name:-$(previewctl get name)}"
       export TF_VAR_vm_cpu="${TF_VAR_vm_cpu:-6}"
       export TF_VAR_vm_memory="${TF_VAR_vm_memory:-12Gi}"
-      export TF_VAR_vm_storage_class="${TF_VAR_vm_storage_class:-longhorn-gitpod-k3s-202209251218-onereplica}"
+      export TF_VAR_vm_storage_class="${TF_VAR_vm_storage_class:-longhorn-gitpod-k3s-202304191605-onereplica}"
       ./workflow/preview/deploy-harvester.sh

 - name: delete-preview
```

0 commit comments

Comments
 (0)