🌱 [WIP] Conditional Controllers #1527
Conversation
/assign @DirectXMan12
I think I'm missing some background here, but what use case is this for? This seems like it would only come up in operators which can do stuff to any specified object type based on runtime inputs? That seems legit but want to clarify, since we've gotten questions about this kind of feature before when "two normal controllers that both have an early return check" was the far simpler solution :)
The use case we are targeting is when the owner of the operator is not an admin of the cluster(s) it is operating on, and thus doesn't have any control over when CRDs are installed and uninstalled. As the owner of the controller/operator, my operator should still run successfully on such a cluster and gracefully transition from reconciling while the CRD I'm operating on is present to waiting for that CRD to be reinstalled when it no longer exists on the cluster (and vice versa). Running two separate controllers that intentionally stop themselves is not always viable when you're running multiple operators in a single binary (e.g. multi-cluster, or just sharing a cache for perf reasons like kubernetes' controller-manager).
(Force-pushed from b5f1ed8 to 32f8b21.)
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: kevindelgado
The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
@kevindelgado: The following tests failed, say `/retest` to rerun all failed tests:

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
@kevindelgado: PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
nice feature
@kevindelgado This is an awesome feature for us; we implemented a similar one ourselves, but with a poor workaround because of the informer's sealed stop channel. Hope the PR gets merged soon.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this PR with `/reopen`
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close
@k8s-triage-robot: Closed this PR.

In response to this:

> /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@tossmilestone: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

> /reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@kevindelgado Will this feature be abandoned? If you are busy, I can help work on it and get it done.
This is an updated implementation of the design to add conditional controller support from #1192
It offers a way to configure controllers to seamlessly start/stop/restart themselves based on whether the CRD they watch exists on the cluster or not. It does so by:

- Adding a done channel to the cache's `MapEntry` that can be retrieved from the cache's `Informers` interface and that fires when the informer has stopped.
- Adding a `ConditionalSource` interface that is implemented by a `ConditionalKind`, which has a `Ready()` method that blocks until the kind's type is installed on the cluster and ready to be controlled, and a `StartNotifyDone()` method that starts the underlying source with a channel that fires when the source (and its underlying informer) has stopped.
- Adding `ConditionalSource`s known as `conditionalWatches` that can be added like regular watches via the controller's `Watch` method. For any `ConditionalSource` that `Watch()` is called with, the controller uses `Ready()` to determine when to `StartNotifyDone()` it, and uses the done channel returned by `StartNotifyDone()` to determine when the watch has stopped and it should wait on `Ready()` again.
- Adding a `Runnable` known as a `ConditionalRunnable` that has a `Ready()` method to indicate when the underlying `Runnable` can be run.
- Adding a `conditional` option to the builder's `For`, `Owns`, and `Watches` inputs that creates `ConditionalRunnable`s to be run by the manager.

Currently it has been manually tested, and there is a single integration test that checks the whole CRD install/uninstall/reinstall flow. I intend to add more unit tests for each individual component that has changed, but wanted to get feedback on the overall design first. A rough sketch of what these interfaces might look like follows.
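To make the description concrete, here is a minimal sketch of the `ConditionalSource` contract described above, written against the era-appropriate `source.Source` signature. This is an illustration inferred from the PR description, not the actual code from the branch; the exact signatures and package layout in the PR may differ.

```go
package conditional

import (
	"context"

	"k8s.io/client-go/util/workqueue"
	"sigs.k8s.io/controller-runtime/pkg/handler"
	"sigs.k8s.io/controller-runtime/pkg/predicate"
	"sigs.k8s.io/controller-runtime/pkg/source"
)

// ConditionalSource sketches the contract described above: a Source that can
// report when its kind is ready to be watched, and that can be started with
// a done channel that fires once its underlying informer has stopped.
type ConditionalSource interface {
	source.Source

	// Ready blocks until the kind's type is installed on the cluster and
	// ready to be controlled, or until the context is cancelled.
	Ready(ctx context.Context) error

	// StartNotifyDone starts the underlying source and returns a channel
	// that fires when the source (and its underlying informer) has stopped.
	StartNotifyDone(ctx context.Context, h handler.EventHandler,
		q workqueue.RateLimitingInterface, prct ...predicate.Predicate) (<-chan struct{}, error)
}
```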
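And a hypothetical sketch of the start/stop/restart cycle the description says the controller runs for each conditional watch: wait on `Ready()`, start the source via `StartNotifyDone()`, wait on the done channel, then loop. The function `runConditionalWatch` and its shape are invented for illustration; the PR wires equivalent logic into the controller's `Watch` method.

```go
// runConditionalWatch drives a ConditionalSource through its lifecycle:
// block until the watched kind's CRD exists, start the watch, wait until
// the underlying informer stops (e.g. the CRD was uninstalled), then wait
// for readiness again so the watch restarts when the CRD is reinstalled.
func runConditionalWatch(ctx context.Context, src ConditionalSource,
	h handler.EventHandler, q workqueue.RateLimitingInterface) error {
	for {
		// Block until the kind's type is installed and served.
		if err := src.Ready(ctx); err != nil {
			return err // context cancelled or an unrecoverable error
		}

		// Start the source; done fires when its informer stops.
		done, err := src.StartNotifyDone(ctx, h, q)
		if err != nil {
			return err
		}

		select {
		case <-done:
			// The informer stopped; loop and wait on Ready() again.
		case <-ctx.Done():
			return ctx.Err()
		}
	}
}
```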