[Jsonnet] Fix memberlist when using a stateful ruler #6662

Merged: 3 commits merged on Dec 12, 2022

Conversation

@Whyeasy (Contributor) commented Jul 12, 2022

Signed-off-by: Whyeasy [email protected]

What this PR does / why we need it:

When using stateful rulers with memberlist as the ring store, generating the resources with jsonnet throws an error about the ruler_deployment. The ruler_statefulset also did not receive the memberlist labels.
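
For context, a hedged sketch of the configuration combination that triggers the problem; the flag names (`memberlist_ring_enabled`, `ruler_enabled`, `stateful_rulers`) are the ones discussed later in this thread, and the block below is illustrative rather than the exact config used:

    // Hypothetical environment config enabling the combination described above.
    {
      _config+:: {
        memberlist_ring_enabled: true,  // use memberlist as the ring KV store
        ruler_enabled: true,
        stateful_rulers: true,          // run the ruler as a StatefulSet instead of a Deployment
      },
    }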

Checklist

  • Documentation added
  • Tests updated
  • Is this an important fix or new feature? Add an entry in the CHANGELOG.md.
  • Changes that require user attention or interaction to upgrade are documented in docs/sources/upgrading/_index.md

@Whyeasy requested a review from a team as a code owner July 12, 2022 06:21
@grafanabot (Collaborator) commented:

./tools/diff_coverage.sh ../loki-main/test_results.txt test_results.txt ingester,distributor,querier,querier/queryrange,iter,storage,chunkenc,logql,loki

Change in test coverage per package. Green indicates 0 or positive change, red indicates that test coverage for a package fell.

+           ingester	0%
+        distributor	0%
+            querier	0%
+ querier/queryrange	0%
+               iter	0%
+            storage	0%
+           chunkenc	0%
+              logql	0%
+               loki	0%

@Whyeasy (Contributor, Author) commented Jul 12, 2022

It looks like something went wrong with the jsonnet bundler in the pipeline rather than with the code contribution. Is this correct?

@grafanabot (Collaborator) commented:

./tools/diff_coverage.sh ../loki-main/test_results.txt test_results.txt ingester,distributor,querier,querier/queryrange,iter,storage,chunkenc,logql,loki

Change in test coverage per package. Green indicates 0 or positive change, red indicates that test coverage for a package fell.

+           ingester	0%
+        distributor	0%
+            querier	0%
+ querier/queryrange	0%
+               iter	0%
+            storage	0%
+           chunkenc	0%
+              logql	0%
+               loki	0%

@chaudum (Contributor) left a comment

lgtm, maybe another 👀 from @cstyan ?

stale bot commented Sep 21, 2022

Hi! This issue has been automatically marked as stale because it has not had any
activity in the past 30 days.

We use a stalebot among other tools to help manage the state of issues in this project.
A stalebot can be very useful in closing issues in a number of cases; the most common
is closing issues or PRs where the original reporter has not responded.

Stalebots are also emotionless and cruel and can close issues which are still very relevant.

If this issue is important to you, please add a comment to keep it open. More importantly, please add a thumbs-up to the original issue entry.

We regularly sort for closed issues which have a stale label sorted by thumbs up.

We may also:

  • Mark issues as revivable if we think it's a valid issue but isn't something we are likely
    to prioritize in the future (the issue will still remain closed).
  • Add a keepalive label to silence the stalebot if the issue is very common/popular/important.

We are doing our best to respond, organize, and prioritize all issues, but it can be a challenging task;
our sincere apologies if you find yourself at the mercy of the stalebot.

The stale bot added the "stale" label (A stale issue or PR that will automatically be closed) on Sep 21, 2022
@Whyeasy (Contributor, Author) commented Sep 21, 2022

I'd still like to get this fix in 😄 but I'm waiting on another look at it from @cstyan.

@MichelHollands removed the "stale" label on Nov 7, 2022
@grafanabot (Collaborator) commented:

./tools/diff_coverage.sh ../loki-target-branch/test_results.txt test_results.txt ingester,distributor,querier,querier/queryrange,iter,storage,chunkenc,logql,loki

Change in test coverage per package. Green indicates 0 or positive change, red indicates that test coverage for a package fell.

+           ingester	0%
+        distributor	0%
+            querier	0%
+ querier/queryrange	0%
+               iter	0%
+            storage	0%
+           chunkenc	0%
+              logql	0%
+               loki	0%

@cstyan (Contributor) commented Nov 29, 2022

@Whyeasy Can you show me the error you get? We're only using statefulsets internally and don't get a jsonnet error about the deployments. But you are right, the label for the statefulset was missing. Nothing would have broken; the label is only used for joining the ring, and we have all pods that use memberlist join the same ring, but it's still worth fixing.
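
For readers following the mechanics, a rough illustration of why the gossip label matters, assuming a typical memberlist setup. This is a sketch, not the actual Loki jsonnet library code; the service name, port, and label value are assumptions, and the label key is taken from the error Whyeasy quotes below:

    // Illustrative only: a headless Service selects every pod carrying the gossip
    // member label, and memberlist peers join via that Service's DNS name, so all
    // labelled components end up in the same ring.
    {
      local gossip_labels = { loki_gossip_member: 'true' },  // key from the error below; value assumed

      gossip_ring_service: {
        apiVersion: 'v1',
        kind: 'Service',
        metadata: { name: 'gossip-ring' },
        spec: {
          clusterIP: 'None',  // headless: DNS resolves to all member pod IPs
          selector: gossip_labels,
          ports: [{ name: 'tcp', port: 7946, targetPort: 7946 }],  // memberlist default port
        },
      },
    }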

@Whyeasy (Contributor, Author) commented Nov 30, 2022

@cstyan, I get the following error:

Error: got an error while extracting env `environments/dev`: recursion did not resolve in a valid Kubernetes object. In path `.ruler_deployment.spec.template.metadata.labels` found key `loki_gossip_member` of type `string` instead.

While reproducing the error, I noticed that the following change also fixed my issue:

ruler_deployment+: if !$._config.memberlist_ring_enabled || !$._config.ruler_enabled || $._config.stateful_rulers then {} else gossipLabel,

But I guess it's better to stay consistent here and handle the ruler_statefulset as well 😄
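
For reference, a minimal sketch of the consistent version being suggested, assuming the existing `gossipLabel` helper and the `$._config` flags quoted above (the exact merged change may differ):

    // Gate the gossip label on the same stateful_rulers flag for both workloads,
    // so only the ruler resource that is actually rendered gets the label.
    ruler_deployment+:
      if !$._config.memberlist_ring_enabled || !$._config.ruler_enabled || $._config.stateful_rulers
      then {}
      else gossipLabel,

    ruler_statefulset+:
      if !$._config.memberlist_ring_enabled || !$._config.ruler_enabled || !$._config.stateful_rulers
      then {}
      else gossipLabel,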

@cstyan (Contributor) commented Dec 12, 2022

Sorry, I missed your reply. I'll check this out more this week.

@cstyan (Contributor) commented Dec 12, 2022

I'm not able to reproduce the error you get in our environments, which only have stateful rulers deployed, but I don't see any issue with merging this since it just checks the stateful ruler config value for both the deployment and the statefulset.

@cstyan merged commit 9d5665e into grafana:main on Dec 12, 2022