
Imported AKS cluster does not show any workload level metrics #4658

Closed
jls-tschanzc opened this issue Jun 11, 2020 · 10 comments

@jls-tschanzc

What kind of request is this (question/bug/enhancement/feature request):

bug

Steps to reproduce (least amount of steps as possible):

  1. Import an AKS Cluster
  2. Enable Monitoring (0.0.7 or 0.1.0)
  3. Wait a while
  4. Go to Project -> Namespace -> Workload (Or browse to any workload/pod metrics in Grafana)
  5. Open the "Workload Metrics"

Result:

It always states "Not enough data for graph". In Grafana it only shows requests/limits and no CPU/Memory data.

Other details that may be helpful:

It has already been this way on Rancher 2.3.5 / Monitoring 0.0.7 and remained the same after upgrading to Rancher 2.4.4 / Monitoring 0.1.0.

It does show cluster level metrics.

Environment information

  • Rancher version (rancher/rancher or rancher/server image tag, or shown bottom left in the UI): 2.4.4
  • Installation option (single install/HA): HA

Cluster information

  • Cluster type (Hosted/Infrastructure Provider/Custom/Imported): AKS Imported
  • Machine type (cloud/VM/metal) and specifications (CPU/memory): Cloud, 1.9 vCPU / 4.5 GB
  • Kubernetes version (use kubectl version): v1.15.10
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T23:34:25Z", GoVersion:"go1.14.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.10", GitCommit:"059c666b8d0cce7219d2958e6ecc3198072de9bc", GitTreeState:"clean", BuildDate:"2020-04-03T15:17:29Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}

gz#12942

@Rancheroo

Rancheroo commented Jun 19, 2020

I also have the same issue on
Rancher 2.3.3
AKS 1.14.8
Monitoring 0.0.6

Please advise if more info is required.

@Ileriayo

Try opening port 8472 on the cluster nodes.

@bentastic27

bentastic27 commented Nov 10, 2020

I'm seeing the same behavior.

Rancher 2.4.8
AKS 1.17.11 (even Rancher-created)
Monitoring 0.1.2

The metrics-server logs include lines like the following, which may be related:

E1110 14:30:11.289027       1 reststorage.go:160] unable to fetch pod metrics for pod default/nginx-649c7cb7f9-xqmxg: no metrics known for pod

@bentastic27

This doesn't seem to work with monitoring 0.1.4 either.

@GGGitBoy

GGGitBoy commented Nov 24, 2020

I tried to reproduce these steps and hit this error. If your case is the same as mine, the following may help:

[screenshot of the error]

Looking at the kubelet process on the AKS node, it listens on the default port 10250, but this differs from the port we scrape:

[screenshot: kubelet process listening on port 10250]

Why this happens:

After monitoring is enabled on certain imported clusters, the UI sets exporter-kubelets.https to false:

[screenshot: exporter-kubelets.https set to false]

So the scrape port becomes 10255 instead of 10250, which is not what we expect for an AKS cluster.
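The port selection described above can be sketched as follows. This is a minimal illustration of the behavior, not Rancher's actual code, and the function name is hypothetical:

```shell
# Hypothetical sketch of the scrape-port selection described above:
#   exporter-kubelets.https=true  -> scrape the secure kubelet port 10250
#   exporter-kubelets.https=false -> scrape the read-only port 10255,
#                                    which AKS kubelets do not serve
kubelet_scrape_port() {
  if [ "$1" = "true" ]; then
    echo 10250
  else
    echo 10255
  fi
}

kubelet_scrape_port true    # -> 10250
kubelet_scrape_port false   # -> 10255
```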

Workaround:

  1. Add the monitoring answer exporter-kubelets.https: true, for example:

[screenshot: monitoring answers with exporter-kubelets.https: true]

  2. Save and wait for the monitoring deployment to complete.

Now it works fine:

[screenshot: workload metrics displaying correctly]
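For reference, the answer from step 1 written out as a key/value pair (monitoring v1 answers format; the value is quoted because answer values are strings):

```yaml
# Cluster > Tools > Monitoring > Advanced Options (monitoring v1 answers).
# Forces Prometheus to scrape the kubelet over HTTPS on port 10250:
exporter-kubelets.https: "true"
```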

@bentastic27

@GGGitBoy that is a solid workaround for the time being. The downside is that the answer disappears from the monitoring config page after you hit save, so you have to remember to re-add it on every change.

@GGGitBoy

GGGitBoy commented Dec 5, 2020

Yep, this is because the UI sends an incorrect exporter-kubelets.https parameter based on the cluster type (AKS). It will be fixed later.

@deniseschannon deniseschannon transferred this issue from rancher/rancher Nov 27, 2021
@gaktive gaktive added this to the v2.6.5 milestone Mar 14, 2022
@nwmac nwmac modified the milestones: v2.6.5, v2.6.6 Mar 21, 2022
@nwmac nwmac modified the milestones: v2.6.6, v2.6.x May 9, 2022
@nwmac nwmac added the kind/bug label Jun 20, 2022
@gaktive gaktive added the kind/tech-debt Technical debt label Nov 7, 2022
@gaktive gaktive modified the milestones: v2.7.x, v2.7.next1 Jan 10, 2023
@catherineluse
Contributor

@gaktive I realized that this issue is for the deprecated monitoring V1. Therefore I suggest closing it because we are no longer updating the V1 apps. In addition to that, the UI is moving away from injecting certain values into Helm charts because it could potentially introduce a version incompatibility issue between the Helm chart version and the Rancher version.

I think it would be better to address this issue through documentation in these places:

  • Rancher documentation for imported AKS clusters
  • Rancher documentation for the monitoring application
  • The monitoring Helm chart README

@gaktive
Member

gaktive commented Jan 17, 2023

Closing as won't fix. For docs people who see this, we should document as @catherineluse suggests.

@catherineluse
Contributor

catherineluse commented Jan 19, 2023

I have opened a docs ticket to track the work of figuring out if this kind of requirement needs to be documented for Monitoring V2. rancher/rancher-docs#377

@zube zube bot removed the [zube]: Done label Apr 18, 2023

9 participants