One of the slaves reports an error with 1062 #757

Closed
liarby opened this issue Feb 3, 2023 · 7 comments
Labels
question Further information is requested

Comments

@liarby

liarby commented Feb 3, 2023

General Question

One of the slaves reports error 1062, but all of the slaves are set with read_only. How did this happen? I tried to fix it, but there were too many errors like this. How can I solve it? Should I rebuild the pod?
Last_SQL_Errno: 1062
Last_SQL_Error: Coordinator stopped because there were error(s) in the worker(s). The most recent failure being: Worker 1 failed executing transaction 'b125d82f-a2bb-11ed-bc73-16fea3664962:2' at master log mysql-bin.000004, end_log_pos 715. See error log and/or performance_schema.replication_applier_status_by_worker table for more details about this failure or others, if any.
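
The error message itself points at where to look next. A minimal sketch of the queries one might run on the failing slave (assuming MySQL 5.7/8.0 with performance_schema enabled):

```sql
-- Per-worker applier errors: the table the error message references
SELECT CHANNEL_NAME, WORKER_ID, LAST_ERROR_NUMBER, LAST_ERROR_MESSAGE, LAST_ERROR_TIMESTAMP
FROM performance_schema.replication_applier_status_by_worker
WHERE LAST_ERROR_NUMBER <> 0;

-- Overall replication state, including Retrieved_Gtid_Set / Executed_Gtid_Set
SHOW SLAVE STATUS\G
```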

@liarby liarby added the question Further information is requested label Feb 3, 2023
@github-actions

github-actions bot commented Feb 3, 2023

Hi! thanks for your contribution! great first issue!

@liarby
Author

liarby commented Feb 3, 2023 via email

@acekingke
Contributor

Check your slaves' GTIDs and your master's GTIDs, and the UUIDs of the servers. Error code 1062 always comes up when replication breaks, so you should check the GTIDs first.
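
A minimal sketch of those checks, assuming standard GTID-based replication (run on the master and on each slave, then compare the output):

```sql
-- Identity and executed GTID set of this server
SELECT @@GLOBAL.server_uuid   AS server_uuid,
       @@GLOBAL.gtid_executed AS gtid_executed;

-- On a slave only: GTIDs received from the master so far
SELECT CHANNEL_NAME, RECEIVED_TRANSACTION_SET
FROM performance_schema.replication_connection_status;
```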

@liarby
Author

liarby commented Feb 12, 2023

There are 5 replicas. While I was writing data through the leader's IP, I deleted the leader's PVC and pod. After a new pod started up, some of the replicas (not all) reported this error.

The error shows that the GTID's source id is the new pod's server_uuid, but I confirmed that the SQL statements had been executed a few minutes earlier, before I deleted the pod of the old leader.

It seems that some of the replicas got the wrong binlog position, even though master_auto_position was set to true. Or there is something wrong with the new master's binlog.
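
A minimal sketch of how one might verify this and, if only a handful of transactions conflict, move past the failing GTID. The GTID below is the one from the error message; committing an empty transaction under it discards that row change on the slave, so rebuilding the replica from a fresh clone is the safer option when many transactions conflict:

```sql
-- Confirm auto-positioning is actually in effect for the channel
SELECT CHANNEL_NAME, HOST, AUTO_POSITION
FROM performance_schema.replication_connection_configuration;

-- One common workaround for a single duplicate-key transaction:
-- commit an empty transaction under the failing GTID, then restart the SQL thread.
STOP SLAVE SQL_THREAD;
SET GTID_NEXT = 'b125d82f-a2bb-11ed-bc73-16fea3664962:2';
BEGIN; COMMIT;
SET GTID_NEXT = 'AUTOMATIC';
START SLAVE SQL_THREAD;
```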

@goeason-world

I am in the same situation as you; can we communicate?


@liarby
Author

liarby commented Feb 13, 2023

What is your WeChat ID? Let me add you.

@acekingke
Contributor

You could check whether it is hitting a duplicate key.
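
To make that concrete: the worker error message names the duplicate value and key, and you can check whether that row already exists on the slave. A minimal sketch (the table name and id value are hypothetical placeholders for whatever the error message reports):

```sql
-- The 1062 message has the form "Duplicate entry '...' for key '...'"
SELECT LAST_ERROR_MESSAGE
FROM performance_schema.replication_applier_status_by_worker
WHERE LAST_ERROR_NUMBER = 1062;

-- Look the reported row up on the slave (mydb.mytable / id = 42 are placeholders)
SELECT * FROM mydb.mytable WHERE id = 42;
```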
