TiCDC sink re-establishes in some unnecessary cases #9695
Labels
affects-6.5
This bug affects the 6.5.x (LTS) versions.
affects-7.1
This bug affects the 7.1.x (LTS) versions.
area/ticdc
Issues or PRs related to TiCDC.
severity/moderate
type/bug
The issue is confirmed as a bug.
What did you do?
After the TiCDC sink has been stuck for a while, TiCDC destroys and re-establishes it. However, the current stuck detection is not accurate enough, so in some cases a stuck sink is reported mistakenly; the Analysis section below describes one such case, and a minimal sketch of how a purely time-based check can misfire follows here.
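As an illustration only, here is a minimal, hypothetical sketch of a purely time-based stuck check. The threshold value, function name, and overall shape are assumptions for the sketch, not TiCDC's actual implementation; it only shows why a sink that legitimately has nothing to flush for longer than the threshold (for example, when the only periodic events are syncpoints) can be flagged as stuck even though it is healthy.

```go
package main

import (
	"fmt"
	"time"
)

// stuckThreshold is an assumed detection threshold for this sketch,
// not TiCDC's actual value.
const stuckThreshold = 150 * time.Second

// sinkLooksStuck is a hypothetical, purely time-based check: it reports the
// sink as stuck whenever no progress has been observed for stuckThreshold,
// regardless of whether there was any data to write in that window.
func sinkLooksStuck(lastAdvance, now time.Time) bool {
	return now.Sub(lastAdvance) > stuckThreshold
}

func main() {
	// With syncpoints arriving, say, every 10 minutes and no other traffic,
	// a table sink can legitimately sit idle for longer than the threshold.
	lastAdvance := time.Now().Add(-3 * time.Minute)

	// The check fires even though the sink is healthy, so the sink would be
	// destroyed and re-established unnecessarily.
	fmt.Println(sinkLooksStuck(lastAdvance, time.Now())) // prints: true
}
```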
What did you expect to see?
No response
What did you see instead?
In those cases, the sink is re-established, which is not expected.
Versions of the cluster
Master branch.
Analysis
Users who configure consistent.level = eventual and enable-syncpoint = true can hit this problem more easily, especially when syncpoint-interval is larger than 150s, which is in fact the most common situation. The false detection causes the table sink to be re-established too frequently, and can even prevent the checkpoint from advancing at all. A sketch of such a changefeed configuration is shown below.
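For reference, a changefeed configuration along these lines triggers the scenario. The key spellings and the interval value follow the description above and may differ between TiCDC versions, so treat this as a sketch rather than a verified config file:

```toml
# Sketch of the triggering changefeed configuration; key names may vary by version.
enable-syncpoint = true
# An interval larger than 150s makes the false "stuck" report more likely.
syncpoint-interval = "10m"

[consistent]
# Eventual consistency (redo log enabled), as described in the analysis.
level = "eventual"
```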