@@ -87,8 +87,16 @@ summarization of these discussions are presented below.
wants to support these changes, and even if they were on board, it could take
a long time to land these changes upstream.

- * TODO: Bubbling up errors from apimachinery.
-
+ * Surfacing the watch error handler from the reflector in client-go through the
+   informer. Controller-runtime could then look for specific errors and decide
+   how to handle them, such as terminating the informer when an error indicates
+   that the informer is no longer viable (for example, when the resource is
+   uninstalled). The advantage is that we'd be pushing updates from the informer
+   to the manager when errors arise (such as when the resource disappears), which
+   would lead to more responsive informer shutdown that doesn't require a
+   separate watch mechanism to determine whether to remove informers. As with
+   de-registering EventHandlers, the downside is that we would need api-machinery
+   to support these changes, which might take a long time to coordinate and
+   implement. A rough sketch of how this could look is given below.
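+
+ A minimal sketch of how controller-runtime might consume such a surfaced
+ handler, assuming client-go exposed a hook along the lines of
+ `SetWatchErrorHandler` on the shared informer; the `stopInformer` callback is
+ hypothetical glue standing in for the manager-side removal logic:
+
+ ```go
+ package informerwatch
+
+ import (
+     apierrors "k8s.io/apimachinery/pkg/api/errors"
+     "k8s.io/client-go/tools/cache"
+ )
+
+ // watchForFatalErrors installs a handler that inspects watch errors
+ // surfaced by the reflector and tears the informer down when the error
+ // means it can never recover.
+ func watchForFatalErrors(informer cache.SharedIndexInformer, stopInformer func()) error {
+     return informer.SetWatchErrorHandler(func(r *cache.Reflector, err error) {
+         if apierrors.IsNotFound(err) {
+             // The watched type is gone (e.g. its CRD was uninstalled),
+             // so retrying the watch is pointless: remove the informer.
+             stopInformer()
+             return
+         }
+         // Transient errors keep client-go's default log-and-retry behavior.
+         cache.DefaultWatchErrorHandler(r, err)
+     })
+ }
+ ```
+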
### Minimal hooks needed to use informer removal externally

@@ -103,13 +111,18 @@ The proposal to do this is:
or not and use this field to not empty the controller’s `startWatches` when the
controller is stopped.

+ A proof of concept PR is at
+ [#1180](https://github.com/kubernetes-sigs/controller-runtime/pull/1180).
+
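+ A minimal sketch of the proposed hook, using hypothetical names for the
+ internal fields (`watchDescription` stands in for controller-runtime's
+ internal record of a source registered via `Watch`):
+
+ ```go
+ package controller
+
+ import "sync"
+
+ // watchDescription is a stand-in for controller-runtime's internal
+ // record of a source registered via Watch().
+ type watchDescription struct{}
+
+ // controller is a trimmed-down sketch of the internal controller type.
+ type controller struct {
+     mu           sync.Mutex
+     started      bool
+     startWatches []watchDescription
+ }
+
+ // stop marks the controller as stopped but deliberately keeps
+ // startWatches populated, so that a later Start can replay the same
+ // watches against fresh informers.
+ func (c *controller) stop() {
+     c.mu.Lock()
+     defer c.mu.Unlock()
+     c.started = false
+     // Previously the registered watches were dropped here, which made
+     // restarting the controller impossible:
+     // c.startWatches = nil
+ }
+ ```
+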
#### Risks and Mitigations

* We lack a consistent story around multi-cluster support and introducing
changes such as this without fully thinking through the multi-cluster story
- might bind us for future designs. We think that restarting
- controllers is a valid use-case even for single cluster regardless of the
- multi-cluster use case.
+ might bind us for future designs. We think gracefully handling degraded
+ functionality in informers we start, as end users modify the cluster, is a
+ valid use case that exists whenever the cluster administrator is different
+ from the controller administrator, and it should be handled regardless of its
+ application in multi-cluster environments.

* [#1139](https://github.com/kubernetes-sigs/controller-runtime/pull/1139) discusses why
the ability to start a controller more than once was taken away. It's a little