Update install.sh to use kubectl create #2771
Conversation
Tested this out locally using a fresh kind cluster:
$ kind delete cluster ; kind create cluster
$ git checkout master
$ ./scripts/install.sh v0.21.1
customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/olmconfigs.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operatorconditions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
The CustomResourceDefinition "clusterserviceversions.operators.coreos.com" is invalid: metadata.annotations: Too long: must have at most 262144 bytes
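For context on that error: client-side kubectl apply stores a full copy of the object in the kubectl.kubernetes.io/last-applied-configuration annotation, and the clusterserviceversions CRD is large enough that this copy pushes it past the 262144-byte annotation cap. A rough way to see how close the manifest is to the limit (the release asset URL is my assumption of what install.sh downloads):

# Total size in bytes of the CRD manifest that install.sh feeds to kubectl;
# the clusterserviceversions CRD is by far the largest entry in it.
curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.21.1/crds.yaml | wc -c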
And when pulling down these changes:
$ kind delete cluster ; kind create cluster
$ gh pr checkout 2771
$ ./scripts/install.sh v0.21.1
customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/olmconfigs.operators.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/operatorconditions.operators.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/olmconfigs.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/operatorconditions.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com condition met
namespace/olm serverside-applied
namespace/operators serverside-applied
serviceaccount/olm-operator-serviceaccount serverside-applied
clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager serverside-applied
clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm serverside-applied
olmconfig.operators.coreos.com/cluster serverside-applied
deployment.apps/olm-operator serverside-applied
deployment.apps/catalog-operator serverside-applied
clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit serverside-applied
clusterrole.rbac.authorization.k8s.io/aggregate-olm-view serverside-applied
operatorgroup.operators.coreos.com/global-operators serverside-applied
operatorgroup.operators.coreos.com/olm-operators serverside-applied
clusterserviceversion.operators.coreos.com/packageserver serverside-applied
catalogsource.operators.coreos.com/operatorhubio-catalog serverside-applied
Waiting for deployment "olm-operator" rollout to finish: 0 of 1 updated replicas are available...
deployment "olm-operator" successfully rolled out
Waiting for deployment "catalog-operator" rollout to finish: 0 of 1 updated replicas are available...
deployment "catalog-operator" successfully rolled out
Package server phase: Installing
Package server phase: Succeeded
deployment "packageserver" successfully rolled out
FYI - it looks like SSA (server-side apply) GA'd in k8s 1.22, so this solution is potentially problematic for older clusters.
Hmm -- should we instead …
@exdx That seems like a valid alternative, but something I realize now is: what happens if I have an OLM installation that was stamped out by this install.sh and that used the …
Yes, that's the update path, and we don't have a supported way of doing it today unfortunately (tracked in #2695). The update would fail without the …
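To summarize the trade-offs being discussed here (a rough sketch, not taken from the script; the crds.yaml file name is illustrative):

# Client-side apply: copies the whole manifest into the
# kubectl.kubernetes.io/last-applied-configuration annotation, which is what
# overflows the 262144-byte limit for the clusterserviceversions CRD.
kubectl apply -f crds.yaml

# Server-side apply: tracks field ownership in managedFields instead of the
# annotation, so the size problem goes away, but SSA only went GA in k8s 1.22.
kubectl apply --server-side -f crds.yaml

# Plain create: no annotation at all and works on older servers, but it cannot
# update objects that already exist.
kubectl create -f crds.yaml

# One possible (unsupported) escape hatch for updating an existing
# installation's CRDs, which also avoids the annotation:
kubectl replace -f crds.yaml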
Force-pushed from 6737f60 to 33e86c2
Updated the script to use kubectl create.
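For reference, the flow ends up roughly like the sketch below. The variable names, release URL, and the exact form of the pre-existing-installation check are assumptions paraphrased from the commit message, not quotes from install.sh.

# Minimal sketch of the install flow described in this PR.
release="${1:-v0.21.1}"
namespace=olm
base_url="https://github.com/operator-framework/operator-lifecycle-manager/releases/download/${release}"

# Abort if an existing OLM installation is detected, since create cannot
# update objects that already exist.
if kubectl get deployment olm-operator -n "${namespace}" > /dev/null 2>&1; then
    echo "OLM appears to be installed already in the ${namespace} namespace. Exiting."
    exit 1
fi

# create instead of apply: nothing is written to the
# kubectl.kubernetes.io/last-applied-configuration annotation.
kubectl create -f "${base_url}/crds.yaml"
kubectl wait --for=condition=established -f "${base_url}/crds.yaml"
kubectl create -f "${base_url}/olm.yaml"
kubectl rollout status -w deployment/olm-operator -n "${namespace}"
kubectl rollout status -w deployment/catalog-operator -n "${namespace}"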
/approve
Nice work
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: awgreene, exdx, timflannagan
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@timflannagan @exdx this does bring into question what version of k8s we support; should we commit to a set number of the most recent releases?
I think ideally we would support N-2 releases, but I don't think we necessarily have the ability to ensure that unless we build out the e2e suite to also test on past versions -- which is possible. Since our upstream support guarantee is best effort, I think always being on the latest k8s release and suggesting that users on older clusters use an older OLM version is reasonable.
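If we ever do want that coverage, something like the loop below could run the installer against a handful of past releases via kind; the node image tags are just examples, not a proposal for the exact matrix.

# Exercise the installer against several k8s minor versions using kind node images.
for image in kindest/node:v1.22.9 kindest/node:v1.23.6 kindest/node:v1.24.0; do
    kind create cluster --name olm-compat --image "${image}"
    ./scripts/install.sh v0.21.1 || echo "install failed on ${image}"
    kind delete cluster --name olm-compat
done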
/lgtm
/retest-required Please review the full test history for this PR and help us cut down flakes.
I'm not sure why, but that test seems to be consistently failing on this PR, which suggests it may be related. /hold
Looks like it passed. /hold cancel
/retest-required Please review the full test history for this PR and help us cut down flakes.
/override flaky-e2e-tests
@exdx: /override requires a failed status context or a job name to operate on.
Only the following contexts were expected:
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Going to merge manually since the flaky-e2e job is not required, and the bot is hung up on it for some reason.
@exdx I just gave it a go and it fails to install on …
Steps to reproduce:
Outcome:
Output:
# curl -sL https://raw.githubusercontent.com/exdx/operator-lifecycle-manager/33e86c2850975a793e121b200476a95511179dc6/scripts/install.sh | bash -s v0.21.1
customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/olmconfigs.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operatorconditions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/olmconfigs.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/operatorconditions.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com condition met
namespace/olm created
namespace/operators created
serviceaccount/olm-operator-serviceaccount created
clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
olmconfig.operators.coreos.com/cluster created
deployment.apps/olm-operator created
deployment.apps/catalog-operator created
clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
operatorgroup.operators.coreos.com/global-operators created
operatorgroup.operators.coreos.com/olm-operators created
clusterserviceversion.operators.coreos.com/packageserver created
catalogsource.operators.coreos.com/operatorhubio-catalog created
Waiting for deployment "olm-operator" rollout to finish: 0 of 1 updated replicas are available...
deployment "olm-operator" successfully rolled out
Waiting for deployment "catalog-operator" rollout to finish: 0 of 1 updated replicas are available...
deployment "catalog-operator" successfully rolled out
Package server phase: Installing
CSV "packageserver" failed to reach phase succeeded |
Hi @joaomlneto, I'd recommend opening a separate issue, as this seems unrelated to the size of the last-applied-configuration annotation. I'm not sure how well OLM works on smaller k8s environments like microk8s or k3s; I know there were some issues in the past.
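For anyone who hits the same "failed to reach phase succeeded" message, a few commands that usually narrow down the cause before opening that issue (the packageserver CSV lives in the olm namespace that the script creates):

# Inspect why the packageserver CSV is stuck.
kubectl get csv packageserver -n olm -o jsonpath='{.status.phase}{"\n"}'
kubectl describe csv packageserver -n olm
kubectl get pods -n olm
kubectl logs -n olm deployment/olm-operator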
The install of OLM CRDs via install.sh via apply was failing due to the
'last-applied-configuration' annotation causing the size of the CRD
annotations to be too large for the server to accept. Creating the CRDs via
kubectl create does not cause the annotation to be automatically
appended to the object, so the application goes through successfully.
Installing via create means that the install.sh script does not support updating an
existing OLM installation, but there are already checks in place to abort the
install if an existing OLM installation is detected.
Signed-off-by: Daniel Sover [email protected]
Closes #2767
Description of the change:
Motivation for the change:
Reviewer Checklist
Docs updated or added to /doc
Tests marked [FLAKE] are truly flaky
Tests that remove the [FLAKE] tag are no longer flaky