Bug 1825330: support creating v1beta CRDs to avoid data loss #1470
Conversation
@exdx: This pull request references Bugzilla bug 1825330, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker. 3 validation(s) were run on this bug
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
It would be nice to have a test that reproduces the issue so we can be certain of the cause and catch it in the future.
Putting this here for record-keeping purposes: kubernetes/kubernetes#87231
/hold There is so much wrong about conversion.
Closing this in favor of adding a v1beta1 codepath (something we had originally done in prior versions of the v1 support PR). There will be no client-side conversion between v1beta1 and v1 CRD types in OLM.
@exdx: This pull request references Bugzilla bug 1825330, which is valid. 3 validation(s) were run on this bug
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Replace hub and spoke CRD conversion with direct unmarshaling to v1. This follows advice to avoid the internal k8s types and conversions client-side (see operator-framework/operator-lifecycle-manager#1470 (comment)).
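For illustration, decoding a CRD manifest straight into the v1 Go type (rather than round-tripping through the internal hub type) can look roughly like the sketch below; the helper name is hypothetical and the PR's actual code may differ.

package olm

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	"sigs.k8s.io/yaml"
)

// unmarshalV1CRD decodes a raw CRD manifest (YAML or JSON) directly into the
// v1 type, with no round-trip through the internal apiextensions types.
// Hypothetical helper; it only illustrates the approach described above.
func unmarshalV1CRD(manifest []byte) (*apiextensionsv1.CustomResourceDefinition, error) {
	crd := &apiextensionsv1.CustomResourceDefinition{}
	if err := yaml.Unmarshal(manifest, crd); err != nil {
		return nil, fmt.Errorf("unmarshaling v1 CRD: %v", err)
	}
	return crd, nil
}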
return nil
}
convertedCRD := &apiextensions.CustomResourceDefinition{}
if err := apiextensionsv1beta1.Convert_v1beta1_CustomResourceDefinition_To_apiextensions_CustomResourceDefinition(newCRD, convertedCRD, nil); err != nil {
There is still a conversion here.
Here we convert to the internal version to do static schema validation. Specifically, NewSchemaValidator, which creates an OpenAPI schema validator for a given CRD validation, expects an internal CRD type, so OLM does the conversion here when validating the new CRD.
The internal CRD is never written to the cluster again. We can make a follow-up BZ to use dry-run APIs instead of using the internal validation package, if necessary.
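For reference, that conversion-for-validation path looks roughly like the sketch below. The conversion function is the one shown in the diff above; the NewSchemaValidator and ValidateCustomResource calls assume the k8s.io/apiextensions-apiserver/pkg/apiserver/validation package as it existed around this release (signatures may differ in other versions), and the wrapper function name is illustrative.

package olm

import (
	"fmt"

	"k8s.io/apiextensions-apiserver/pkg/apis/apiextensions"
	apiextensionsv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	"k8s.io/apiextensions-apiserver/pkg/apiserver/validation"
	"k8s.io/apimachinery/pkg/util/validation/field"
)

// validateAgainstNewSchema converts the incoming v1beta1 CRD to the internal type
// only so the apiserver's schema validator can be built from it, then validates an
// existing custom resource against that schema. The internal object is never
// written back to the cluster.
func validateAgainstNewSchema(newCRD *apiextensionsv1beta1.CustomResourceDefinition, existingCR interface{}) error {
	converted := &apiextensions.CustomResourceDefinition{}
	if err := apiextensionsv1beta1.Convert_v1beta1_CustomResourceDefinition_To_apiextensions_CustomResourceDefinition(newCRD, converted, nil); err != nil {
		return fmt.Errorf("converting to internal CRD type: %v", err)
	}
	// Top-level validation block; per-version schemas would need the same treatment.
	validator, _, err := validation.NewSchemaValidator(converted.Spec.Validation)
	if err != nil {
		return err
	}
	if errs := validation.ValidateCustomResource(field.NewPath(""), existingCR, validator); len(errs) > 0 {
		return errs.ToAggregate()
	}
	return nil
}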
Yes, would prefer dry-run (as a follow-up). The schema validator is also considered a private component.
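For context on the dry-run alternative, a server-side dry-run create hands validation to the apiserver without persisting anything. A minimal sketch, assuming the apiextensions clientset at a version whose Create takes a context (older versions differ); the helper name is illustrative:

package olm

import (
	"context"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// dryRunCreateCRD asks the apiserver to run admission and validation for the CRD
// without persisting it. Sketch only; real code would handle AlreadyExists, etc.
func dryRunCreateCRD(ctx context.Context, client apiextensionsclient.Interface, crd *apiextensionsv1.CustomResourceDefinition) error {
	_, err := client.ApiextensionsV1().CustomResourceDefinitions().Create(ctx, crd, metav1.CreateOptions{
		DryRun: []string{metav1.DryRunAll},
	})
	return err
}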
How can we dry-run against a CRD API that has not been applied to the cluster yet?
The goal is to catch the mistake before the CRD has been created.
There seems to be little practical issue with using the internal types, correct? We could just as easily read the JSON schema from the CRD spec and use an off-the-shelf schema validator to accomplish the same goals.
…n to v1. Due to data loss during client-side conversions, OLM will support two different paths for v1 and v1beta1 CRDs.
Force-pushed from ea2997d to 5f054ee.
/test e2e-gcp
2 similar comments
/test e2e-gcp
/test e2e-gcp
/lgtm
return crd, nil
}

// UnmarshalV1 takes in a CRD manifest and returns a v1beta1 versioned CRD object.
Suggested change:
- // UnmarshalV1 takes in a CRD manifest and returns a v1beta1 versioned CRD object.
+ // UnmarshalV1Beta1 takes in a CRD manifest and returns a v1beta1 versioned CRD object.
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: benluddy, exdx. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@exdx: All pull requests linked via external trackers have merged: operator-framework/operator-lifecycle-manager#1470. Bugzilla bug 1825330 has been moved to the MODIFIED state. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Replace hub and spoke CRD conversion with direct unmarshaling to v1. This follows advice to avoid the internal k8s types and conversions client-side (see operator-framework/operator-lifecycle-manager#1470 (comment)).
Replace hub and spoke CRD conversion with direct unmarshaling to v1. This follows advice to avoid the internal k8s types and conversions client-side (see operator-framework/operator-lifecycle-manager#1470 (comment)). (cherry picked from commit 7d6ac12ec8bc5a864646bae56bf80d138c922b61)
Replace hub and spoke CRD conversion with direct unmarshaling to v1. This follows advice to avoid the internal k8s types and conversions client-side (see operator-framework/operator-lifecycle-manager#1470 (comment)). (upstream api commit: 7d6ac12ec8bc5a864646bae56bf80d138c922b61)
Replace hub and spoke CRD conversion with direct unmarshaling to v1. This follows advice to avoid the internal k8s types and conversions client-side (see operator-framework/operator-lifecycle-manager#1470 (comment)). Upstream-repository: api Upstream-commit: 7d6ac12ec8bc5a864646bae56bf80d138c922b61
Description of the change:
The v1 CRD support originally intended to create and update all CRDs at v1, converting those provided at v1beta1. However, due to issues around validation and the fact that v1 CRDs are not fully backward-compatible with v1beta1, OLM will have to create v1beta1 CRDs with the v1beta1 client. This PR adds back support for creating v1beta1 CRDs.
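As a rough sketch (not the PR's actual code), the two code paths could be selected from the manifest's declared apiVersion, so each CRD is created with the client matching the version it was authored in and no client-side conversion ever happens:

package olm

import (
	"context"
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

// createCRDFromManifest dispatches on the manifest's apiVersion so that no
// client-side conversion between v1 and v1beta1 is needed. Hypothetical helper;
// the PR's actual implementation may differ.
func createCRDFromManifest(ctx context.Context, client apiextensionsclient.Interface, manifest []byte) error {
	var tm metav1.TypeMeta
	if err := yaml.Unmarshal(manifest, &tm); err != nil {
		return err
	}
	switch tm.APIVersion {
	case apiextensionsv1.SchemeGroupVersion.String(): // apiextensions.k8s.io/v1
		crd := &apiextensionsv1.CustomResourceDefinition{}
		if err := yaml.Unmarshal(manifest, crd); err != nil {
			return err
		}
		_, err := client.ApiextensionsV1().CustomResourceDefinitions().Create(ctx, crd, metav1.CreateOptions{})
		return err
	case apiextensionsv1beta1.SchemeGroupVersion.String(): // apiextensions.k8s.io/v1beta1
		crd := &apiextensionsv1beta1.CustomResourceDefinition{}
		if err := yaml.Unmarshal(manifest, crd); err != nil {
			return err
		}
		_, err := client.ApiextensionsV1beta1().CustomResourceDefinitions().Create(ctx, crd, metav1.CreateOptions{})
		return err
	default:
		return fmt.Errorf("unsupported CRD apiVersion %q", tm.APIVersion)
	}
}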
Motivation for the change:
Reviewer Checklist
/docs