--disable-metrics case #199

Open
romantico88 opened this issue Jul 22, 2021 · 10 comments

@romantico88

Hello, I saw that there is a possibility to disable some metrics.
I want to disable, for example, openstack_nova_server_status and openstack_cinder_status. When I use --disable-metric=openstack_server_status, it does not work.

Which metric names should I use: the ones from the OpenStack Ceilometer/metrics services, or the ones from the exporter? How can I see them?

@alexeymyltsev
Collaborator

Hello,
As mentioned in the description, it should be the service name, "-", then the metric name:
--disable-metric can be specified in the format: service-metric (i.e: cinder-snapshots)
That means you should use:
-d nova-server_status -d cinder-volume_status
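
For illustration only, here is a hypothetical Go sketch (not the exporter's actual parsing code) of how a service-metric value such as nova-server_status splits into its two parts:

package main

import (
	"fmt"
	"strings"
)

// splitDisableMetric turns "nova-server_status" into ("nova", "server_status").
func splitDisableMetric(v string) (service, metric string) {
	parts := strings.SplitN(v, "-", 2) // split on the first "-" only
	if len(parts) != 2 {
		return v, ""
	}
	return parts[0], parts[1]
}

func main() {
	fmt.Println(splitDisableMetric("nova-server_status"))   // nova server_status
	fmt.Println(splitDisableMetric("cinder-volume_status")) // cinder volume_status
}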

@romantico88
Author

romantico88 commented Jul 23, 2021

I've also tried this. The exporter starts, but after curling the metrics endpoint I get an error after 11-12 seconds
(it happens when disabling nova-server_status):

Jul 23 11:26:06 openstack-exporter[8605]: panic: runtime error: invalid memory address or nil pointer dereference
Jul 23 11:26:06 openstack-exporter[8605]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x8f11f3]
Jul 23 11:26:06 openstack-exporter[8605]: goroutine 28 [running]:
Jul 23 11:26:06 openstack-exporter[8605]: github.com/openstack-exporter/openstack-exporter/exporters.ListAllServers(0xc000070ba0, 0xc000070720, 0xf1eb40, 0x2)
Jul 23 11:26:06 openstack-exporter[8605]: /app/exporters/nova.go:289 +0x4f3
Jul 23 11:26:06 openstack-exporter[8605]: github.com/openstack-exporter/openstack-exporter/exporters.(*BaseOpenStackExporter).RunCollection(0xc000070ba0, 0xc00042bbd0, 0xa60f25, 0x9, 0xc000070720, 0x0, 0x0)
Jul 23 11:26:06 openstack-exporter[8605]: /app/exporters/exporter.go:98 +0x20c
Jul 23 11:26:06 openstack-exporter[8605]: github.com/openstack-exporter/openstack-exporter/exporters.(*BaseOpenStackExporter).Collect(0xc000070ba0, 0xc000070720)
Jul 23 11:26:06 openstack-exporter[8605]: /app/exporters/exporter.go:121 +0x126
Jul 23 11:26:06 openstack-exporter[8605]: github.com/prometheus/client_golang/prometheus.(*Registry).Gather.func1()
Jul 23 11:26:06 openstack-exporter[8605]: /go/pkg/mod/github.com/prometheus/[email protected]/prometheus/registry.go:443 +0x19d
Jul 23 11:26:06 openstack-exporter[8605]: created by github.com/prometheus/client_golang/prometheus.(*Registry).Gather
Jul 23 11:26:06 openstack-exporter[8605]: /go/pkg/mod/github.com/prometheus/[email protected]/prometheus/registry.go:454 +0x57d

Using cinder-volume_status

there is no error, but openstack_cinder_volume_status is still present in the metrics.

I used the newest version (downloaded the amd64 tar.gz) on CentOS 7 / CentOS 8.

@romantico88
Author

romantico88 commented Jul 23, 2021

I also used the default cinder-snapshots from the example and the metric is still present in the exports.

I'm running the exporter like below:

/usr/bin/openstack-exporter --os-client-config /etc/openstack-exporter/clouds.yaml --web.listen-address=":9170" mycloud --log.level="debug" -d cinder-snapshots
time curl http://localhost:9170/metrics 2>&1 | grep snap
# TYPE openstack_cinder_snapshots gauge
openstack_cinder_snapshots 23

So my question is: what am I doing wrong? :)

@alexeymyltsev
Collaborator

> I've also tried this. The exporter starts, but after curling the metrics endpoint I get an error after 11-12 seconds (it happens when disabling nova-server_status): panic: runtime error: invalid memory address or nil pointer dereference [...]
>
> Using cinder-volume_status there is no error, but openstack_cinder_volume_status is still present in the metrics. I used the newest version (downloaded the amd64 tar.gz) on CentOS 7 / CentOS 8.

This panic happens because several metrics are collected in one function: the function still runs, but if one of its metrics has been disabled it cannot be written, and the exporter panics.

We should work on this and create a new issue for it.
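
To make the failure mode concrete, here is a minimal, self-contained Go sketch (hypothetical code, not the exporter's real implementation): several metrics are emitted from one listing function, so removing a disabled metric's entry leaves a nil behind, and the shared write path dereferences it, matching the nil pointer panic above.

package main

import "github.com/prometheus/client_golang/prometheus"

// metricEntry stands in for whatever per-metric bookkeeping the exporter keeps.
type metricEntry struct {
	Metric *prometheus.Desc
}

func main() {
	metrics := map[string]*metricEntry{
		"server_status": {prometheus.NewDesc("openstack_nova_server_status", "server status", []string{"name"}, nil)},
		"total_vms":     {prometheus.NewDesc("openstack_nova_total_vms", "total VMs", nil, nil)},
	}
	// Simulate "-d nova-server_status": the entry is removed from the map...
	delete(metrics, "server_status")

	ch := make(chan prometheus.Metric, 2)
	ch <- prometheus.MustNewConstMetric(metrics["total_vms"].Metric, prometheus.GaugeValue, 42)
	// ...but the shared listing code still tries to write it: the map lookup
	// returns nil, and the .Metric field access panics with a nil pointer dereference.
	ch <- prometheus.MustNewConstMetric(metrics["server_status"].Metric, prometheus.GaugeValue, 1, "vm-1")
}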

@alexeymyltsev
Collaborator

> I also used the default cinder-snapshots from the example and the metric is still present in the exports. [...] So my question is: what am I doing wrong? :)

I have tested this several times with mostly the same command line:
go run *.go --os-client-config=./clouds.yaml --disable-cinder-agent-uuid default --log.level="debug" -d cinder-snapshots
and it works without any problem:

time curl http://localhost:9180/metrics 2>&1 | grep snap
curl http://localhost:9180/metrics 2>&1  0,00s user 0,01s system 0% cpu 18,888 total
grep --color=auto --exclude-dir={.bzr,CVS,.git,.hg,.svn} snap  0,01s user 0,00s system 0% cpu 18,888 total

@jvleminc

jvleminc commented Oct 27, 2021

I am facing the same issue, related to another issue I created: #208

The container starts up fine, but when calling /metrics the goroutine error is thrown:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x8cbc66]

goroutine 42 [running]:
github.com/openstack-exporter/openstack-exporter/exporters.ListZonesAndRecordsets(0xc0004b40f0, 0xc0004ff200, 0xedc1e0, 0x2)
	/app/exporters/designate.go:95 +0xc36
github.com/openstack-exporter/openstack-exporter/exporters.(*BaseOpenStackExporter).RunCollection(0xc0004b40f0, 0xc0004be080, 0xa399e7, 0x5, 0xc0004ff200, 0x0, 0x1)
	/app/exporters/exporter.go:106 +0x1eb
github.com/openstack-exporter/openstack-exporter/exporters.(*BaseOpenStackExporter).Collect(0xc0004b40f0, 0xc0004ff200)
	/app/exporters/exporter.go:127 +0x101
github.com/prometheus/client_golang/prometheus.(*Registry).Gather.func1()
	/go/pkg/mod/github.com/prometheus/[email protected]/prometheus/registry.go:443 +0x19d
created by github.com/prometheus/client_golang/prometheus.(*Registry).Gather
	/go/pkg/mod/github.com/prometheus/[email protected]/prometheus/registry.go:535 +0xe12

Update: I see #200 exists for this issue; will comment there.

@LaurentDumont

I see the same issue when disabling nova-server_status with --disable-metric. We'd like to remove it because it's obviously a really expensive metric on bigger clouds.

@frmorais

Is this fixed? I'm having the same issue when using --disable-slow-metrics

@jvleminc

@frmorais I closed #208 after upgrading fixed it for me.

@frmorais

frmorais commented Jun 27, 2022

@jvleminc In my case I'm still getting the 'panic: runtime error', and I'm using the latest Docker image available.

EDIT:
It seems to me that image_bytes is considered a slow metric, so when using --disable-slow-metrics it gives an error because the other Glance metrics cannot be collected:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x85261a]
goroutine 131 [running]:
github.com/openstack-exporter/openstack-exporter/exporters.ListImages(0xc0002b60c0, 0x2f?)
/exporters/glance.go:54 +0x39a
github.com/openstack-exporter/openstack-exporter/exporters.(*BaseOpenStackExporter).RunCollection(0xc0002b60c0, 0xc00039db50, {0x97c14e, 0x6}, 0x0?)
/exporters/exporter.go:99 +0x182
github.com/openstack-exporter/openstack-exporter/exporters.(*BaseOpenStackExporter).Collect(0xc0002b60c0, 0xc00059bf60?)
/exporters/exporter.go:122 +0x105
github.com/prometheus/client_golang/prometheus.(*Registry).Gather.func1()
/go/pkg/mod/github.com/prometheus/[email protected]/prometheus/registry.go:443 +0xfb
created by github.com/prometheus/client_golang/prometheus.(*Registry).Gather
/go/pkg/mod/github.com/prometheus/[email protected]/prometheus/registry.go:535 +0xb0b

Disabling the image service completely with '--disable-service.image' fixes this, but then of course no Glance metrics are available.
I tested this with an older branch (v1.4.0) and the issue was not there.
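
One possible defensive pattern, sketched here under assumed types and names (it is not the project's actual fix), is to look the descriptor up first and skip the write when the metric has been disabled, instead of dereferencing a nil entry:

package sketch

import "github.com/prometheus/client_golang/prometheus"

// emitIfEnabled writes a gauge only when its descriptor is still registered;
// a metric removed by --disable-metric or --disable-slow-metrics is skipped.
func emitIfEnabled(metrics map[string]*prometheus.Desc, name string,
	ch chan<- prometheus.Metric, value float64, labelValues ...string) {
	desc, ok := metrics[name]
	if !ok || desc == nil {
		return // metric disabled: skip it instead of panicking
	}
	ch <- prometheus.MustNewConstMetric(desc, prometheus.GaugeValue, value, labelValues...)
}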
