Master dead, slave becomes master, proxy can't work #83

Open
chenjie199234 opened this issue Oct 23, 2020 · 1 comment

chenjie199234 commented Oct 23, 2020

Redis and proxy info

redis version: 6.0.8
proxy version: 1.0-beta2

Test key info

test key: test0
key slot: 641
test key type: list
test command: rpush
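For reference, the slot quoted above can be reproduced offline: Redis Cluster assigns a key to `CRC16(key) mod 16384` using the XMODEM variant of CRC16, hashing only the content of a `{...}` hash tag when one is present. A minimal independent sketch (not code from Redis itself):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16/XMODEM: polynomial 0x1021, initial value 0x0000, no reflection."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def keyslot(key: str) -> int:
    """Redis Cluster hash slot for a key, honoring {...} hash tags."""
    k = key.encode()
    start = k.find(b"{")
    if start != -1:
        end = k.find(b"}", start + 1)
        if end > start + 1:            # non-empty tag: hash only its content
            k = k[start + 1:end]
    return crc16_xmodem(k) % 16384

print(keyslot("test0"))  # → 641, matching the slot reported above
```

Any key sharing the hash tag `{test0}` lands in the same slot, which is how multi-key operations are kept on one node.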

Cluster info

redis-cli --cluster create 127.0.0.1:10001 127.0.0.1:10002 127.0.0.1:10003 127.0.0.1:10004 127.0.0.1:10005 127.0.0.1:10006 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 127.0.0.1:10005 to 127.0.0.1:10001
Adding replica 127.0.0.1:10006 to 127.0.0.1:10002
Adding replica 127.0.0.1:10004 to 127.0.0.1:10003
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 4fafba0af2cb398ce498854c3ab52639087f02d3 127.0.0.1:10001
   slots:[0-5460] (5461 slots) master
M: d4a19fa7c44c1db090ebb1e3d24b5baa3baafbdd 127.0.0.1:10002
   slots:[5461-10922] (5462 slots) master
M: 3fa833a7094be90fb9555c7686ec33ec78abe930 127.0.0.1:10003
   slots:[10923-16383] (5461 slots) master
S: 1d715390c335d0e0771c71129fff2c6c92cafc6f 127.0.0.1:10004
   replicates d4a19fa7c44c1db090ebb1e3d24b5baa3baafbdd
S: 5dc7cac19e358903121baed263adf0fd90ff21c7 127.0.0.1:10005
   replicates 3fa833a7094be90fb9555c7686ec33ec78abe930
S: 0374a01c6ffcf6e3b681515aed54291e6658bad6 127.0.0.1:10006
   replicates 4fafba0af2cb398ce498854c3ab52639087f02d3
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
....
>>> Performing Cluster Check (using node 127.0.0.1:10001)
M: 4fafba0af2cb398ce498854c3ab52639087f02d3 127.0.0.1:10001
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 0374a01c6ffcf6e3b681515aed54291e6658bad6 127.0.0.1:10006
   slots: (0 slots) slave
   replicates 4fafba0af2cb398ce498854c3ab52639087f02d3
S: 1d715390c335d0e0771c71129fff2c6c92cafc6f 127.0.0.1:10004
   slots: (0 slots) slave
   replicates d4a19fa7c44c1db090ebb1e3d24b5baa3baafbdd
M: d4a19fa7c44c1db090ebb1e3d24b5baa3baafbdd 127.0.0.1:10002
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: 3fa833a7094be90fb9555c7686ec33ec78abe930 127.0.0.1:10003
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 5dc7cac19e358903121baed263adf0fd90ff21c7 127.0.0.1:10005
   slots: (0 slots) slave
   replicates 3fa833a7094be90fb9555c7686ec33ec78abe930
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Proxy info

./redis-cluster-proxy 127.0.0.1:10001
[2020-10-23 14:28:41.028/M] Redis Cluster Proxy v0.9.102
[2020-10-23 14:28:41.028/M] Commit: (00000000/0)
[2020-10-23 14:28:41.028/M] PID: 4033
[2020-10-23 14:28:41.028/M] OS: Darwin 19.6.0 x86_64
[2020-10-23 14:28:41.028/M] Bits: 64
[2020-10-23 14:28:41.028/M] Log level: info
[2020-10-23 14:28:41.028/M] Connections pool size: 10 (respawn 2 every 50ms if below 10)
[2020-10-23 14:28:41.029/M] Listening on *:7777
[2020-10-23 14:28:41.029/M] Starting 8 threads...
[2020-10-23 14:28:41.029/M] Fetching cluster configuration...
[2020-10-23 14:28:41.032/M] Cluster Address: 127.0.0.1:10001
[2020-10-23 14:28:41.032/M] Cluster has 3 masters and 3 replica(s)
[2020-10-23 14:28:41.032/M] Increased maximum number of open files to 10518 (it was originally set to 2560).
[2020-10-23 14:28:41.057/M] All thread(s) started!

While 10001 is alive

redis-cli -p 7777
127.0.0.1:7777> rpush test0 abc
(integer) 1

10001 dead, proxy not restarted

redis-cli -p 7777
127.0.0.1:7777> rpush test0 abc
(integer) 1
127.0.0.1:7777> rpush test0 123
(error) ERR Cluster node disconnected: 127.0.0.1:10001
127.0.0.1:7777> rpush test0 123
(error) ERR Cluster node disconnected: 127.0.0.1:10001

10001 dead, proxy restarted

redis-cli -p 7777
127.0.0.1:7777> rpush test0 123
(the command blocks here; no reply)

Cluster nodes while 10001 is dead

redis-cli -p 10002
127.0.0.1:10002> cluster nodes
1d715390c335d0e0771c71129fff2c6c92cafc6f 127.0.0.1:10004@20004 slave d4a19fa7c44c1db090ebb1e3d24b5baa3baafbdd 0 1603434857135 2 connected
d4a19fa7c44c1db090ebb1e3d24b5baa3baafbdd 127.0.0.1:10002@20002 myself,master - 0 1603434859000 2 connected 5461-10922
4fafba0af2cb398ce498854c3ab52639087f02d3 127.0.0.1:10001@20001 master,fail - 1603434716804 1603434712000 1 disconnected
0374a01c6ffcf6e3b681515aed54291e6658bad6 127.0.0.1:10006@20006 master - 0 1603434856000 7 connected 0-5460
5dc7cac19e358903121baed263adf0fd90ff21c7 127.0.0.1:10005@20005 slave 3fa833a7094be90fb9555c7686ec33ec78abe930 0 1603434858155 3 connected
3fa833a7094be90fb9555c7686ec33ec78abe930 127.0.0.1:10003@20003 master - 0 1603434859175 3 connected 10923-16383
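The `CLUSTER NODES` output above shows the failover completed on the cluster side: 10001 is flagged `master,fail` and 10006 now serves slots 0-5460, including slot 641. The proxy evidently never re-resolves this. As a hedged illustration (not code from the proxy), current slot ownership can be re-derived by parsing raw `CLUSTER NODES` output:

```python
def owner_of_slot(cluster_nodes: str, slot: int) -> str:
    """Return host:port of the live master serving `slot`,
    parsed from raw CLUSTER NODES output."""
    for line in cluster_nodes.strip().splitlines():
        parts = line.split()
        flags = parts[2]
        if "master" not in flags or "fail" in flags:
            continue                        # skip slaves and failed masters
        for rng in parts[8:]:               # slot ranges start at field 9
            if rng.startswith("["):         # importing/migrating marker
                continue
            lo, _, hi = rng.partition("-")
            if int(lo) <= slot <= int(hi or lo):
                return parts[1].split("@")[0]   # strip the cluster-bus port
    raise LookupError(f"no live master owns slot {slot}")
```

Fed the output above, slot 641 resolves to 127.0.0.1:10006 and slot 5461 to 127.0.0.1:10002; a routing layer that re-ran this lookup after a connection error would follow the failover instead of returning `Cluster node disconnected`.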

hsafe commented Aug 28, 2022

I observed the same issue. The only difference is that mine is a production cluster, but it has the same layout: 3 masters and 3 slaves. When a master goes down, the new shape of the cluster is not picked up by the proxy, and that causes errors. Can you confirm whether the view of the cluster is only built when the service starts and never updated afterwards? It would really help us to get past this issue, because master/slave role changes happen very frequently in our cluster. Much appreciated.
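Both reports are consistent with a topology that is fetched once at startup and never refreshed. The refresh-on-failure pattern that cluster-aware clients typically use can be sketched as follows; this is a hypothetical wrapper, not the proxy's actual code, and `fetch_topology` and `send` are injected stand-ins for "query CLUSTER SLOTS" and "send the command to a node":

```python
class RoutingClient:
    """Hypothetical sketch: route by slot, re-fetch topology when a node fails."""

    def __init__(self, fetch_topology):
        self._fetch = fetch_topology       # () -> {(lo, hi): "host:port"}
        self._topology = self._fetch()     # view built at startup...

    def node_for(self, slot: int) -> str:
        for (lo, hi), addr in self._topology.items():
            if lo <= slot <= hi:
                return addr
        raise LookupError(f"slot {slot} unassigned")

    def execute(self, slot: int, send):
        try:
            return send(self.node_for(slot))
        except ConnectionError:
            self._topology = self._fetch()     # ...but refreshed on failure,
            return send(self.node_for(slot))   # so the retry follows the failover
```

Real clients also refresh on `-MOVED` redirects, not only on dropped connections, but even this minimal retry would have routed the `rpush` above to 10006 after 10001 died.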
