One of the slaves reports an error with 1062 #757
Comments
Hi! Thanks for your contribution! Great first issue!
This is an automatic vacation reply from QQ Mail.
Hello, I have received your email and will reply to you as soon as possible.
Check your slaves' GTIDs, your master's GTIDs, and the server UUIDs. Error code 1062 (duplicate entry) here comes from a replication failure, so you should check the GTIDs first.
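For reference, a minimal way to compare GTID state, assuming GTID-based replication on MySQL 5.7/8.0 (`\G` is the mysql client's vertical-output terminator, not SQL):

```sql
-- Run on the primary and on every replica, then diff the results.
SELECT @@server_uuid          AS server_uuid,
       @@global.gtid_executed AS gtid_executed;

-- On each replica, also check the replication threads and any applier error.
SHOW SLAVE STATUS\G   -- SHOW REPLICA STATUS\G on MySQL 8.0.22+
```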
There are 5 replicas. While I was writing data to the leader's IP, I deleted the leader's PVC and pod. After a new pod started up, some of the replicas (not all) reported this error. The error shows that the GTID's source id is the new pod's server_uuid, but I confirmed that the SQL statements were executed a few minutes earlier, before I deleted the pod of the old leader. It seems that some of the replicas got the wrong binlog position, even though master_auto_position was set to true. Or the new master's binlog has something wrong with it.
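To help narrow that down, here is a sketch of what to inspect on an affected replica, assuming a single default replication channel: compare Retrieved_Gtid_Set with Executed_Gtid_Set to see whether the failing transaction was fetched again after the failover, and confirm auto-positioning is really in effect.

```sql
-- Retrieved_Gtid_Set, Executed_Gtid_Set and Auto_Position are all fields of
-- this output; the failing GTID should show up in Retrieved_Gtid_Set.
SHOW SLAVE STATUS\G

-- Double-check that the channel is configured with GTID auto-positioning.
SELECT CHANNEL_NAME, AUTO_POSITION
  FROM performance_schema.replication_connection_configuration;
```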
I am in the same situation as you. Can we communicate about it?
You could check whether it hits a duplicate key.
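One way to check that, sketched below: look at the failing transaction in the new primary's binlog, then look for the corresponding row on the stopped replica. The database, table, and key names here are placeholders; take the real ones from the decoded event.

```sql
-- On the new primary: list events in the binlog file named in the error
-- (mysql-bin.000004); the failing transaction ends at end_log_pos 715.
SHOW BINLOG EVENTS IN 'mysql-bin.000004' LIMIT 20;

-- On the replica that stopped: check whether the row being inserted already
-- exists. Replace db/table/column with the ones from the decoded event.
SELECT * FROM some_db.some_table WHERE some_pk = 'duplicate-value';
```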
General Question
One of the slaves reports 1062, and all of the slaves are set to read_only, so how did this happen? I tried to fix it, but there were too many errors like this. How can I solve it? Rebuild the pod?
Last_SQL_Errno: 1062
Last_SQL_Error: Coordinator stopped because there were error(s) in the worker(s). The most recent failure being: Worker 1 failed executing transaction 'b125d82f-a2bb-11ed-bc73-16fea3664962:2' at master log mysql-bin.000004, end_log_pos 715. See error log and/or performance_schema.replication_applier_status_by_worker table for more details about this failure or others, if any.
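Two things that can be done from SQL, sketched under the assumption that the data for GTID 'b125d82f-a2bb-11ed-bc73-16fea3664962:2' is already present on the replica. First read the exact worker error the coordinator message points to; then either rebuild the replica (delete its pod/PVC so the operator re-clones it, the safer option) or skip the offending GTID with an empty transaction:

```sql
-- 1) The detailed per-worker error referenced by the coordinator message.
SELECT WORKER_ID, LAST_ERROR_NUMBER, LAST_ERROR_MESSAGE, LAST_ERROR_TIMESTAMP
  FROM performance_schema.replication_applier_status_by_worker
 WHERE LAST_ERROR_NUMBER <> 0;

-- 2) Optional, risky workaround: mark the duplicate transaction as executed
--    by committing an empty transaction for its GTID, then restart the
--    applier. Only do this if the rows are verified to exist already;
--    otherwise rebuild the replica from a fresh clone instead.
STOP SLAVE SQL_THREAD;
SET GTID_NEXT = 'b125d82f-a2bb-11ed-bc73-16fea3664962:2';
BEGIN;
COMMIT;
SET GTID_NEXT = 'AUTOMATIC';
START SLAVE SQL_THREAD;
```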