Support Managed Prometheus #1298
Conversation
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up to date status, view the checks section at the bottom of the pull request.
Thanks for the PR @abousias
```
@@ -95,6 +95,15 @@ resource "google_container_cluster" "primary" {
      enable_components = var.monitoring_enabled_components
    }
  }
  dynamic "monitoring_config" {
```
There is an existing dynamic monitoring_config; I think we can nest managed_prometheus as a dynamic under that.
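For illustration, that nesting could look roughly like the sketch below. This is not the PR's actual code: the variable names are taken from this thread, and the for_each conditions are assumptions.

```hcl
dynamic "monitoring_config" {
  # Assumed condition: emit the block when either feature is requested.
  for_each = (length(var.monitoring_enabled_components) > 0 || var.enable_managed_prometheus) ? [1] : []

  content {
    enable_components = var.monitoring_enabled_components

    # Nested dynamic block, emitted only when managed Prometheus is requested.
    dynamic "managed_prometheus" {
      for_each = var.enable_managed_prometheus ? [1] : []
      content {
        enabled = true
      }
    }
  }
}
```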
Thanks for the quick response and the review comment @bharathkkb! I kept it separate to have a separate for_each check, as in one case we have var.monitoring_enabled_components = [] but we still want managed Prometheus enabled. I didn't think of nesting with another dynamic! I will try that!
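For context, the separate-block layout being described would look roughly like this. It is a sketch based on the discussion, not the PR's exact diff; the variable names are taken from this thread.

```hcl
# Existing block, driven only by the enabled-components list.
dynamic "monitoring_config" {
  for_each = length(var.monitoring_enabled_components) > 0 ? [1] : []
  content {
    enable_components = var.monitoring_enabled_components
  }
}

# Separate block with its own for_each check, so managed Prometheus can be
# enabled even when var.monitoring_enabled_components is [].
dynamic "monitoring_config" {
  for_each = var.enable_managed_prometheus ? [1] : []
  content {
    managed_prometheus {
      enabled = true
    }
  }
}
```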
@bharathkkb the nested dynamic, the way I have it now, doesn't seem to be working as expected. With var.enable_managed_prometheus set to true, the service didn't get enabled and the tfstate looks like this:

```json
...
"monitoring_config": [
  {
    "enable_components": [
      "SYSTEM_COMPONENTS"
    ],
    "managed_prometheus": []
  }
],
...
```

I am looking into it.
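As a general Terraform note (not a diagnosis of the exact code in this PR): a dynamic block is emitted once per element of its for_each, so an empty "managed_prometheus": [] in state means the inner for_each evaluated to an empty collection. A boolean flag has to be wrapped in a conditional collection, for example:

```hcl
dynamic "managed_prometheus" {
  # The block is emitted once per element of for_each, so it only appears
  # when the flag is set; otherwise no managed_prometheus block is produced.
  for_each = var.enable_managed_prometheus ? [true] : []
  content {
    enabled = managed_prometheus.value
  }
}
```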
Interestingly, switching to a static block gives an API error. I am reverting to the initial implementation, as I don't have a working solution at the moment.
I am getting the error with the reverted code too. The issue didn't appear earlier because in my test I was applying the change to an existing cluster; now I am seeing it with a brand-new cluster.
I am looking into it.
Hey @jroiseux, I was about to check exactly the same conditional 😄! I will take a look and copy it! Many thanks for the help! Nice to know that it will work 😉!
@bharathkkb ready for re-review. It works fine now.
With the nested dynamic, the plan wants to remove SYSTEM_COMPONENTS; here is the plan against an existing cluster:

Whereas the separate-block one leaves the enabled_components untouched:

This is because the dynamic is "forced" to produce the block but var.monitoring_enabled_components is []. My existing cluster is also based on the main module instead of the beta ones; I am checking if that is an issue for me.

@jroiseux this is most probably happening with your version, isn't it?
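To make the "forced" behaviour above concrete: the nested-dynamic version effectively renders something like the block below when only managed Prometheus is requested (a sketch inferred from the discussion, not actual plan output):

```hcl
monitoring_config {
  # var.monitoring_enabled_components is [], so the plan now wants to drop
  # the previously enabled SYSTEM_COMPONENTS.
  enable_components = []

  managed_prometheus {
    enabled = true
  }
}
```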
@abousias Yes, it happens on my version too.

This is what the statefile looks like when I use the current module for "beta-private-cluster" (without the changes from this PR) and without setting var.monitoring_enabled_components:

I think it might be an issue with the default value, as enabled_components seems to end up as ["SYSTEM_COMPONENTS"] even though the default is [].
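For reference, the variable in question is declared with an empty default, roughly like this (description paraphrased; an empty list appears to leave the GKE server-side default of SYSTEM_COMPONENTS in place):

```hcl
variable "monitoring_enabled_components" {
  type        = list(string)
  default     = []
  description = "List of monitoring components to enable; an empty list keeps the GKE default."
}
```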
@bharathkkb I am reverting back to the original implementation, leaving Terraform to merge the blocks into one well after the evaluation of the for_each expressions. As @jroiseux also points out, the other option would be to change the default value of var.monitoring_enabled_components to something like ["SYSTEM_COMPONENTS"], but I am not sure that is a good idea.
@abousias
Thank you @bharathkkb!
@bharathkkb and @jroiseux was the issue, where the plan was trying to disable SYSTEM_COMPONENTS, addressed?
Yes, it is no longer trying to disable it because of the change described in the PR description.
Indeed, sorry I missed the PR description! I double-checked in my scenarios too; it works. Thanks again @jroiseux!
With the release of the 4.25.0 "beta" provider, it is now possible to enable Managed Prometheus. With this we can now enable this feature in the beta submodules.