Session level API caching #634

Merged
merged 2 commits into master on Sep 25, 2018
Conversation

bigkraig

No description provided.

@k8s-ci-robot k8s-ci-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Sep 21, 2018
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: bigkraig

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added approved Indicates a PR has been approved by an approver from all required OWNERS files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Sep 21, 2018
@coveralls

Coverage Status

Coverage increased (+0.5%) to 28.868% when pulling 6ae8e16 on session-caching into f959dc6 on master.

@M00nF1sh
Collaborator

M00nF1sh commented Sep 25, 2018

Nice work! 👍
A few remaining questions before adding the merge label 😸:

  1. Should we leverage LayeredCache instead of sync.Map? (No changes needed in this PR.)

  2. Should we directly embed the AWS API in our wrapper API interfaces? It's convenient, but it locks us to a specific AWS Go SDK version. (Unrelated to this PR, but I'm wondering what the correct abstraction level for our AWS API wrapper should be, and whether we should cache there instead, since caching at the session level is "kind of hacky".)

  3. Do we still need caching once we implement "only refresh the specific ingress when changes happen"?

@bigkraig
Copy link
Author

  1. That's a good idea. I can backlog it on the caching package.

  2. I need to be convinced one way or the other on this one. I'm not really worried about being locked to an AWS SDK version; maybe I should be?

  3. We will probably still need it for larger deployments. The API throttles at relatively low rates, and it's troublesome to track down what is making API calls when you hit the limit. Anything we can do to reduce superfluous queries is good in my book.

@M00nF1sh
Collaborator

For #2, I'm actually OK with either solution if we add more integration test cases for the caching (just in case AWS breaks the internal implementation somehow).

For #3, I'm afraid our local caching might cause problems. E.g., our ingress controller wants to update the security groups on nodes, but it could override changes made by other components or users, because there is no incremental API like addSecurityGroup and AWS doesn't support compare-and-set.

@M00nF1sh
Collaborator

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Sep 25, 2018
@k8s-ci-robot k8s-ci-robot merged commit fe631df into master Sep 25, 2018
@bigkraig bigkraig deleted the session-caching branch October 1, 2018 15:19
bigkraig pushed a commit that referenced this pull request Oct 1, 2018