kibana metrics endpoint reporting as down with "server returned HTTP status 404 Not Found" #3058
Did you solve your issue?
@pbn4 No, I didn't actually solve it. I had closed it because I read a note in the Helm chart saying it requires the kibana-prometheus-exporter plugin, which I hadn't realized, and assumed that was the issue. However, I added the 7.8.0 version of kibana-prometheus-exporter (to align with the Kibana version of the latest chart) using the plugins[0] value, and although it was added (I can see it in /opt/bitnami/kibana/plugins/kibana-prometheus-exporter) I still get a 404 error when hitting the endpoint (e.g. http://127.0.0.1:5601/_prometheus/metrics from the pod itself, and in Prometheus the target shows DOWN). On the pod, I ran ./kibana-plugin list, which lists the plugin, so it appears to have installed. I opened pjhampton/kibana-prometheus-exporter#169 with the author, but he said it is working and there have been lots of downloads without any reports, so at this point I'm not sure why the plugin installs through the Bitnami Helm chart but isn't running. If you have any thoughts, that would be great; I'm kind of at a dead end right now.
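The checks described above can be scripted. A sketch, assuming a pod named kibana-0 (a placeholder — substitute your actual pod name):

```shell
# Verify the plugin is registered (pod name is a placeholder)
kubectl exec kibana-0 -- /opt/bitnami/kibana/bin/kibana-plugin list

# Probe the exporter endpoint from inside the pod; 200 means it is serving
kubectl exec kibana-0 -- curl -s -o /dev/null -w '%{http_code}\n' \
  http://127.0.0.1:5601/_prometheus/metrics
```

A 404 here with the plugin listed is exactly the symptom reported: installed on disk, but not loaded by the running Kibana.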
Not able to get the plugin to return metrics with the Bitnami Helm chart version. Looking for some thoughts.
Hi @davejhahn, you're right. Kibana requires the kibana-prometheus-exporter plugin to be installed. It is only vaguely mentioned in values.yaml and in the NOTES (shown when installing the chart) of the Kibana subchart, not the Elasticsearch one. The following change allows the Kibana chart to be installed with the Prometheus plugin:
```diff
diff --git a/bitnami/kibana/values.yaml b/bitnami/kibana/values.yaml
index a09a3a606..defc31864 100644
--- a/bitnami/kibana/values.yaml
+++ b/bitnami/kibana/values.yaml
@@ -58,6 +58,7 @@ updateStrategy:
 ## List of plugins to install
 ##
 plugins:
+- https://github.com/pjhampton/kibana-prometheus-exporter/releases/download/6.8.10/kibana-prometheus-exporter-6.8.10.zip
 # - https://github.com/fbaligand/kibana-enhanced-table/releases/download/v1.5.0/enhanced-table-1.5.0_7.3.2.zip
 ## Saved objects to import (NDJSON format)
```

However, it looks like the plugin gets installed after Kibana starts, meaning that it still does not work. In the meantime there are several workarounds:
Also, the chart should make it a lot simpler to enable monitoring by documenting the required steps. As you can see, this requires some major changes to the initialization of the container/chart (or at least a rethink), so I'm afraid I cannot give an ETA for when this will be fixed. If you feel confident enough to try to make the changes yourself, we'd be glad to review a PR that fixes this issue!
@marcosbc I created my own image from the Bitnami Kibana image, the only difference being the installation of the plugin. That doesn't seem to work either; it returns a 500 error when accessing the endpoint. So unlike the Helm package, which installs it after Kibana is started, this installs it before Kibana starts up, so the problem you described above shouldn't be happening. The author of the widely used plugin asked if Bitnami has a fork of Kibana, since it works fine for him (and apparently everyone else using it). I'm assuming not, but wasn't sure. Is there any reason this wouldn't work either? I'm almost at the point of giving up on getting metrics to work with the Bitnami instance. My alternative at this point is to use a different Helm package and create my own ServiceMonitor, but since everything else is working nicely, I am hesitant to do that. Any insight would be great.
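For reference, a custom image along these lines is the approach being discussed. This is a minimal sketch, not the reporter's actual Dockerfile: the base tag and the plugin ZIP URL are assumptions, with the URL following the release naming pattern of the 6.8.10 ZIP shown earlier — check the plugin's releases page for the exact 7.8.0 artifact name:

```dockerfile
# Sketch only: base tag and plugin ZIP URL are assumptions.
# The plugin version must match the Kibana version exactly,
# or Kibana will refuse to load it.
FROM bitnami/kibana:7.8.0
RUN /opt/bitnami/kibana/bin/kibana-plugin install \
    https://github.com/pjhampton/kibana-prometheus-exporter/releases/download/7.8.0/kibana-prometheus-exporter-7.8.0.zip
```

Installing at build time avoids the chart's install-after-start ordering problem described earlier in the thread.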
Hi @davejhahn, we have been investigating the issue. We saw pjhampton/kibana-prometheus-exporter#186, where you are showing the Dockerfile that you are using. We tried the following values:
```yaml
image:
  tag: custom
elasticsearch:
  hosts:
    - localhost
  port: 9200
sidecars:
  - name: elasticsearch
    image: bitnami/elasticsearch:latest
    ports:
      - name: http
        containerPort: 9200
```
With the steps above, we could reproduce your issue. It seems the issue is caused by some kind of incompatibility with IPv6, or at least by the configuration key
Could you please try on your side?
I will give it a try and report back, thanks!
@andresbono I created the ConfigMap and upgraded, specifying the ConfigMap name in configurationCM, but the readiness probe is failing: Readiness probe failed: HTTP probe failed with statuscode: 503. Any ideas?
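For reference, the kind of ConfigMap the chart's configurationCM value consumes might look like the following. This is a sketch, not the reporter's actual manifest: the name kibana-conf and the kibana.yml keys are assumptions, and the elasticsearch.hosts URL must resolve from inside the Kibana pod:

```yaml
# Hypothetical ConfigMap referenced via the chart's configurationCM value.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kibana-conf
data:
  kibana.yml: |
    server.host: 0.0.0.0
    server.port: 5601
    elasticsearch.hosts:
      - http://elasticsearch-elasticsearch-coordinating-only:9200
```

A 503 from the readiness probe usually means Kibana itself is not healthy — often because the elasticsearch.hosts URL in this file does not resolve or respond.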
Got a chance to look at this again. Thinking it was the elasticsearch.hosts setting, I changed it to point to the Bitnami install I am using (I verified the URL with curl from the working Kibana before upgrading): http://elasticsearch-coordinating-only:9200. It still can't get past the readiness probe because Kibana doesn't start. Kibana pod log:
kubectl describe pod:
Hi @davejhahn, you can try to add:

```yaml
extraEnvVars:
  - name: NAMI_LOG_LEVEL
    value: trace
```

In any case, I would recommend you to use a sidecar Elasticsearch container as explained in #3058 (comment) (only for testing purposes), just to confirm that at least the problem with the metrics plugin is resolved.
@andresbono ah, after looking at the logging and trying the URL that it was complaining about, I found it was just wrong: it was http://elasticsearch-coordinating-only:9200 instead of http://elasticsearch-elasticsearch-coordinating-only:9200. Once I updated the ConfigMap, everything was good. Seeing the endpoint as UP in Prometheus and data reporting. Thanks for your help!
Leaving this open for the original issue...
Great, @davejhahn! Thanks for confirming it.
I too faced this issue, but your suggested workaround (custom image and ConfigMap) helped me fix it.
Nice, @Prakash-droid! Thanks for confirming it. The original issue is still pending to be resolved.
Hello, I am facing a similar issue; however, for me the probes fail because their path is /login. When I server-side apply a change to check /status instead, it works fine. Can I add the ability to change the health-check route as a PR, or is there another solution?
Hi @fmubaidien, regarding the feature you need: currently it supports a custom path, but the
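Until the chart exposes the probe path directly, one possible workaround is overriding the probes wholesale. This is a sketch that assumes the chart supports customLivenessProbe/customReadinessProbe values (Bitnami charts commonly do, but check this chart's values.yaml before relying on these keys):

```yaml
# Assumption: the chart honors customLivenessProbe/customReadinessProbe.
customLivenessProbe:
  httpGet:
    path: /status
    port: 5601
  initialDelaySeconds: 120
customReadinessProbe:
  httpGet:
    path: /status
    port: 5601
  initialDelaySeconds: 30
```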
Which chart:
kibana-5.2.6
Describe the bug
Installed the Bitnami Kibana chart with metrics.enabled=true, metrics.serviceMonitor.enabled=true, metrics.serviceMonitor.namespace=monitoring. The ServiceMonitor is created, but no pods or sidecars are created for metrics. The target endpoint appears in Prometheus, but as DOWN with "server returned HTTP status 404 Not Found".
To Reproduce
Steps to reproduce the behavior:
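A reproduction sketch based on the values listed in the description (the release name is a placeholder):

```shell
helm install kibana bitnami/kibana \
  --set metrics.enabled=true \
  --set metrics.serviceMonitor.enabled=true \
  --set metrics.serviceMonitor.namespace=monitoring
```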
Expected behavior
Endpoint is accessible and returning UP
Version of Helm and Kubernetes:
helm version:
kubectl version:
Additional context
Based on the URL, I am not sure if metrics are built into Kibana (it is the same port as Kibana, 5601) or if the URL being reported is wrong. As mentioned above, no other pods are created or sidecars added, so I am not sure where metrics are supposed to come from.