
[WIP] ✨ Adding timeout while waiting for cache to sync #580


Closed

Conversation

shawn-hurley

@shawn-hurley shawn-hurley commented Sep 1, 2019

fixes #562

This adds a timeout, derived from the context, while waiting for the cache to sync. The default value is 30 seconds, as I didn't have a better guess for what it should be. (A rough sketch of the intended shape follows the discussion points below.)

Discussion:

  1. What should the correct default be?
  2. Should we create a specific error that one could check for? I would think we would ultimately want such a check to return true.
  3. Need help adding a test for this. All I can think to do is change the user that is making the calls to a user that does not have access to a resource. That seemed daunting; is there some other way to trigger the cache sync failure?
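
A rough sketch of the shape described above, assuming the wait is driven by client-go's InformerSynced functions (names here are illustrative, not the actual diff):

import (
	"context"
	"fmt"
	"time"

	toolscache "k8s.io/client-go/tools/cache"
)

// waitForCacheSyncWithTimeout applies the proposed 30 second default only when the
// caller's context carries no deadline, then blocks until the informer syncs or the
// context expires.
func waitForCacheSyncWithTimeout(ctx context.Context, hasSynced toolscache.InformerSynced) error {
	if _, ok := ctx.Deadline(); !ok {
		var cancel context.CancelFunc
		ctx, cancel = context.WithTimeout(ctx, 30*time.Second)
		defer cancel()
	}
	if !toolscache.WaitForCacheSync(ctx.Done(), hasSynced) {
		// Discussion point 2: a dedicated timeout error here would give callers
		// something concrete to check for.
		return fmt.Errorf("timed out waiting for cache to sync")
	}
	return nil
}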

@k8s-ci-robot k8s-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Sep 1, 2019
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: shawn-hurley
To complete the pull request process, please assign directxman12
You can assign the PR to them by writing /assign @directxman12 in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the size/M Denotes a PR that changes 30-99 lines, ignoring generated files. label Sep 1, 2019
@shawn-hurley
Author

/cc @DirectXMan12 @joelanford

@k8s-ci-robot
Contributor

@shawn-hurley: GitHub didn't allow me to request PR reviews from the following users: joelanford.

Note that only kubernetes-sigs members and repo collaborators can review this PR, and authors cannot review their own PRs.

In response to this:

/cc @DirectXMan12 @joelanford


Waiting for cache to sync happens when getting a new informer
in the informer map.
Contributor

@DirectXMan12 DirectXMan12 left a comment


default timeout

See inline

specific error

Yeah, let's return a timeout error so that returns true trivially.

test

Preferably, fake out the HasSynced method or have a dummy informer or something. Alternatively, maybe fake out the restmapper to contain a fake resource; then it won't ever sync because it can't get anything out of the server?
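
One cheap way to exercise the "fake out HasSynced" route, assuming the wait goes through client-go's WaitForCacheSync (a sketch, not this PR's actual test):

import (
	"testing"
	"time"

	toolscache "k8s.io/client-go/tools/cache"
)

func TestWaitFailsWhenInformerNeverSyncs(t *testing.T) {
	// Stand-in for an informer that can never sync, e.g. because RBAC denies list/watch.
	neverSynced := func() bool { return false }

	// Close the stop channel after a short grace period, simulating the timeout firing.
	stop := make(chan struct{})
	go func() {
		time.Sleep(100 * time.Millisecond)
		close(stop)
	}()

	if toolscache.WaitForCacheSync(stop, neverSynced) {
		t.Fatal("expected WaitForCacheSync to report failure for an informer that never syncs")
	}
}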

if err != nil {
return err
}

return cache.Reader.List(ctx, out, opts...)
}

// addTimeout adds a default 30 second timeout to a child context
// if one does not exist.
func addTimeout(ctx context.Context) (context.Context, context.CancelFunc) {
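
The body of addTimeout is cut off in the excerpt above; given the PR description (a 30 second default applied only when the caller's context has no deadline), it presumably looks roughly like this:

func addTimeout(ctx context.Context) (context.Context, context.CancelFunc) {
	// Hypothetical body: respect an existing deadline, otherwise add the default.
	if _, ok := ctx.Deadline(); ok {
		return ctx, func() {}
	}
	return context.WithTimeout(ctx, 30*time.Second)
}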
Contributor


maybe change this to be addTimeoutIfUnset or something.

Also, do we want a default timeout? I'm unsure

Author


Also, do we want a default timeout? I'm unsure

That could also be an option; it might be easier to punt the decision to the implementer, as they probably know their operator and what it would expect better.

On the other hand, I think a case where we are never going to sync (a permissions error especially) and will hang a worker is a bug that should be fixed.

I am torn :)

Contributor


That could also be an option; it might be easier to punt the decision to the implementer, as they probably know their operator and what it would expect better.

That's my opinion -- defaults might fail weirdly in certain conditions, but hanging forever is bad too. The problem is that with defaults, there's no way to say "no, never time out" except to go astronomically large. Let's punt on the default for now (do it in a separate PR) so we can at least get the capability in and unbreak people that want to use this.

Relatedly: what's the practical timeout on client->server requests? If you just eat everything after the handshake on a direct client call, how long will we wait? Forever? If not, we can match that by default

Member


I agree with @DirectXMan12. My opinion is that when context.Context is exposed to callers, it's generally to explicitly hand over full control of deadlining. I'd be a little confused if I passed context.Background() and it timed out after 30s.

If we're not going to default the timeout, how do we handle GetInformerForKind and GetInformer, which don't accept a context.Context?

  • Internally use context.Background()
  • Make a breaking change to the Informers interface to include a Context parameter
  • Add a new interface that has methods that accept a Context and check whether the cache implementation supports it when calling (roughly sketched below).
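
For the third option, a hypothetical shape (illustrative names, with signatures approximated from the controller-runtime interfaces of the time):

import (
	"context"

	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
	toolscache "k8s.io/client-go/tools/cache"
)

// InformersWithContext is a hypothetical add-on interface; callers would type-assert
// for it and fall back to the existing context-free methods when it isn't implemented.
type InformersWithContext interface {
	GetInformerWithContext(ctx context.Context, obj runtime.Object) (toolscache.SharedIndexInformer, error)
	GetInformerForKindWithContext(ctx context.Context, gvk schema.GroupVersionKind) (toolscache.SharedIndexInformer, error)
}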

Then there's the concern of how that propagates up the call stack. For example, what would we pass here?

On the other hand, I think a case where we are never going to sync (a permissions error especially) and will hang a worker is a bug that should be fixed.

Is it possible to propagate these errors back up to the caller so that a worker doesn't hang?

Author

@shawn-hurley shawn-hurley Sep 5, 2019


I will remove the defaults and add the timeout error.

Is it possible to propagate these errors back up to the caller so that a worker doesn't hang?

We would have to change the underlying informer to do this; I don't know how we could do that without a backward-incompatible change ATM.

I like @DirectXMan12's idea of using the client's default timeout for all of these, as it should be expected. Note: we should add a godoc comment that one of the reasons this can time out is permissions.

Contributor


we pass here

context.TODO, with a note and a bug so that when we go to make breaking changes, we fix the signature of Start to include a context

Contributor


(any internal uses where you say "we really should have a real context here, but can't because of the interface" should be TODO, not Background)

Contributor


As for propagating those errors up, it would be nice eventually, but we'd need error wrapping from Go 1.13 before that becomes even slightly close to possible.
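
For reference, once Go 1.13 wrapping lands, the propagation could look roughly like this (ErrCacheSyncTimeout is a hypothetical sentinel, not an existing API):

import "errors"

// Hypothetical sentinel the cache could return when an informer never syncs.
var ErrCacheSyncTimeout = errors.New("timed out waiting for cache to sync")

// Deep in the cache the underlying cause gets wrapped:
//
//	return fmt.Errorf("failed to wait for caches to sync: %w", ErrCacheSyncTimeout)
//
// and a caller can detect it without string matching, so a worker can bail out
// instead of hanging:
//
//	if errors.Is(err, ErrCacheSyncTimeout) { ... }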

@shawn-hurley
Author

Hopefully getting back to this today, sorry about the delay

@alvaroaleman
Member

@shawn-hurley Anything one can do to help out here? The current behavior of just continuing to run but not doing anything when CRDs or RBAC are wrong is pretty terrible IMO.

Maybe a simple approach to avoiding that case would be to extend manager.Options with a cacheSyncTimeout that, if non-zero, adds another runnable that just does a GetCache().WaitForCacheSync(ctxWithTimeout.Done())?
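
A rough sketch of that suggestion, using the stop-channel Runnable signature of that era (CacheSyncTimeout is a hypothetical option, not an existing manager.Options field):

import (
	"context"
	"fmt"
	"time"

	"sigs.k8s.io/controller-runtime/pkg/manager"
)

// addCacheSyncCheck registers a runnable that fails the manager if the caches
// do not sync within the configured timeout, instead of running forever doing nothing.
func addCacheSyncCheck(mgr manager.Manager, timeout time.Duration) error {
	ctxWithTimeout, cancel := context.WithTimeout(context.Background(), timeout)
	return mgr.Add(manager.RunnableFunc(func(stop <-chan struct{}) error {
		defer cancel()
		if !mgr.GetCache().WaitForCacheSync(ctxWithTimeout.Done()) {
			return fmt.Errorf("timed out waiting for caches to sync after %s", timeout)
		}
		return nil
	}))
}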

@djzager
Contributor

djzager commented Oct 28, 2019

Anything one can do to help out here? The current behavior of just continuing to run but not doing anything when CRDs or RBAC are wrong is pretty terrible IMO.

@alvaroaleman I'm working on creating a PR to finish up this patch based on existing comments.

@k8s-ci-robot
Contributor

@shawn-hurley: The following test failed, say /retest to rerun all failed tests:

Test name: pull-controller-runtime-test-master
Commit: 813459b
Rerun command: /test pull-controller-runtime-test-master



@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jan 22, 2020
@k8s-ci-robot
Contributor

@shawn-hurley: PR needs rebase.


DirectXMan12 pushed a commit that referenced this pull request Jan 31, 2020
📖 bumped up the latest stable version
@DirectXMan12
Contributor

close this for now, if anyone wants to pick it up, feel free

(IIRC, it's just a matter of folks being busy)

@DirectXMan12 DirectXMan12 added the out-of-time-pick-me-up Someone ran out of time, feel free to pick this up label Feb 5, 2020
@DirectXMan12
Contributor

/help

Labels
cncf-cla: yes Indicates the PR's author has signed the CNCF CLA.
do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress.
needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD.
out-of-time-pick-me-up Someone ran out of time, feel free to pick this up.
size/M Denotes a PR that changes 30-99 lines, ignoring generated files.
Development

Successfully merging this pull request may close these issues.

Create timeout and error when informer set up in the cache is hanging
6 participants