
Unbalanced number of queues on nodes #2175

Closed
kramarov666 opened this issue Jul 26, 2023 · 1 comment

kramarov666 commented Jul 26, 2023

  • VerneMQ Version:
  • OS: Docker 1.12.3-alpine
  • Erlang/OTP version (if building from source): -
  • Cluster size/standalone: 3

Hello. I'm running VerneMQ 1.12.3-alpine as a StatefulSet inside a Kubernetes cluster.
Restarts of cluster nodes (pods) can sometimes leave the queues unbalanced. For instance:
node 1 queue - 3.6k
node 2 queue - 6k
node 3 queue - 900

The number of clients equals the number of queues, so each client has its own queue.

Are there any tricks to force a rebalance across the nodes so they hold an equal number of queues, other than restarting them again?

The config:

listener.ssl.use_identity_as_username=on
max_offline_messages=-1
listener.ssl.iotrelay.cafile=/etc/ssl/vernemq/ca.cer
vmq_acl.acl_file=/etc/vernemq/custom/vmq.acl
allow_register_during_netsplit=on
log.console.level=debug
vmq_webhooks.pool_max_connections=500
allow_unsubscribe_during_netsplit=on
accept_eula=yes
listener.ssl.iotrelay.keyfile=/etc/ssl/vernemq/server.iot-relay.key
plugins.vmq_passwd=off
listener.ssl.iotrelay.certfile=/etc/ssl/vernemq/server.iot-relay.cer
vmq_webhooks.onsubscribe.hook=on_register
allow_publish_during_netsplit=on
vmq_webhooks.onclientgone.hook=on_client_gone
listener.ssl.default=0.0.0.0:8883
listener.ssl.require_certificate=on
plugins.vmq_webhooks=on
listener.ssl.keyfile=/etc/ssl/vernemq/server.iot.key
listener.ssl.cafile=/etc/ssl/vernemq/ca.cer
listener.tcp.ditto=0.0.0.0:1880
listener.ssl.iotrelay=0.0.0.0:8884
log.console=console
listener.tcp.localhost=127.0.0.1:1883
max_online_messages=-1
vmq_webhooks.onsubscribe.endpoint=http://device-monitor.prod-namespace.svc.cluster.local:8080/webhooks/connected/on_register
listener.ssl.iotrelay.ciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256:TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384:TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256:TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
listener.ssl.ciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256:TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384:TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256:TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
allow_anonymous=on
vmq_webhooks.onclientgone.endpoint=http://device-monitor.prod-namespace.svc.cluster.local:8080/webhooks/disconnected/on_client_gone
listener.ssl.certfile=/etc/ssl/vernemq/server.iot.cer
plugins.vmq_acl=on
plugins.vmq_generic_msg_store=off
allow_subscribe_during_netsplit=on
erlang.distribution.port_range.minimum = 9100
erlang.distribution.port_range.maximum = 9109
listener.tcp.default = 10.250.140.72:1883
listener.ws.default = 10.250.140.72:8080
listener.vmq.clustering = 10.250.140.72:44053
listener.http.metrics = 10.250.140.72:8888


ioolkos commented Jul 27, 2023

@kramarov666 thanks. When your client connections are long-standing, you can see temporarily unbalanced numbers. This is not necessarily a bad thing per se, but I understand the question. Since the distribution is ultimately determined by the clients and the load balancer (that is, external behaviour), we'd need to trigger that external behaviour by disconnecting clients. Disconnecting clients administratively is currently possible (see vmq-admin session disconnect), but it's not easy to do in batches, since the command disconnects a single, known ClientId.

If you want to give this a go and look into implementing batch disconnects, let me know.
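Until batch disconnects exist as a built-in command, one could script them on top of the single-client command. The sketch below is an assumption, not an official tool: it parses a pipe-delimited session table (as a hypothetical stand-in for `vmq-admin session show --client_id --node` output, whose exact format may differ by version) and emits one `vmq-admin session disconnect` command per client on the overloaded node.

```python
# Hypothetical sketch: generate per-client disconnect commands for one node.
# The table layout and the disconnect flag syntax are assumptions; check them
# against your VerneMQ version before running anything.

def batch_disconnect_cmds(session_table: str, target_node: str) -> list[str]:
    """Return one 'vmq-admin session disconnect' command per client on target_node."""
    cmds = []
    for line in session_table.splitlines():
        cols = [c.strip() for c in line.strip().strip("|").split("|")]
        if len(cols) != 2 or cols[0] == "client_id":
            continue  # skip malformed lines and the header row
        client_id, node = cols
        if node == target_node:
            cmds.append(f"vmq-admin session disconnect client-id={client_id}")
    return cmds

if __name__ == "__main__":
    # Example input imitating a two-column session table.
    sample = """\
|client_id|node|
|sensor-1|VerneMQ@node2|
|sensor-2|VerneMQ@node1|
|sensor-3|VerneMQ@node2|
"""
    for cmd in batch_disconnect_cmds(sample, "VerneMQ@node2"):
        print(cmd)
```

Printing the commands first (instead of executing them) lets you review and throttle the disconnects, since kicking many clients at once just moves the thundering herd to the load balancer.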


👉 Thank you for supporting VerneMQ: https://github.com/sponsors/vernemq
👉 Using the binary VerneMQ packages commercially (.deb/.rpm/Docker) requires a paid subscription.

@ioolkos ioolkos closed this as completed Aug 1, 2023