Bridge Node Stuck After Archive RPC Restart #4090
Comments
Can you share the version of the archival consensus node?
We have also encountered this issue on Ubuntu 24.04 with celestia-app version
Actions Taken and Observations:
2) Restarting both the bridge and full node
To prevent potential downtime, we have migrated the node for now. Without manually restarting the node, the issue is not recoverable.
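Since the only mitigation reported in this thread is a manual restart or migration, an external liveness check can at least shorten the time to intervention. Below is a minimal watchdog sketch; the endpoint, the `header.LocalHead` method name, and the response shape are assumptions for illustration and may not match the real celestia-node RPC (which also normally requires an auth token).

```go
// watchdog.go: a minimal, illustrative stall detector for a bridge node.
// Assumptions (not from this thread): the node exposes a JSON-RPC endpoint on
// localhost:26658 and a method that returns the local head; the method name
// "header.LocalHead" and the response shape are placeholders, and a real setup
// would also need the node's auth token.
package main

import (
	"bytes"
	"encoding/json"
	"log"
	"net/http"
	"time"
)

const rpcURL = "http://localhost:26658" // placeholder endpoint

type rpcResponse struct {
	Result struct {
		Header struct {
			Height string `json:"height"`
		} `json:"header"`
	} `json:"result"`
}

// localHeadHeight queries the node for its current local head height.
func localHeadHeight() (string, error) {
	reqBody := []byte(`{"jsonrpc":"2.0","id":1,"method":"header.LocalHead","params":[]}`)
	resp, err := http.Post(rpcURL, "application/json", bytes.NewReader(reqBody))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var out rpcResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	return out.Result.Header.Height, nil
}

func main() {
	var lastHeight string
	for {
		height, err := localHeadHeight()
		if err != nil || height == lastHeight {
			// No progress (or RPC unreachable): alert or trigger a restart here,
			// e.g. by paging an operator or calling out to the init system.
			log.Println("bridge node appears stuck, intervention needed")
		} else {
			log.Printf("head height advanced to %s", height)
			lastHeight = height
		}
		time.Sleep(30 * time.Second)
	}
}
```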
Added it to the description.
Hey @cristaloleg! 👋 I don't think it is about the consensus node. I have just upgraded the node, and I had to manually restart the bridge node; it seems fine right now. I will update here if something happens.
We can confirm encountering the same issue: after restarting the RPC node, the bridge node became stuck. Restarting the bridge node resolves the problem.
celestia-node version:
Celestia bridge service logs:
confirming the same here after the upgrade today, cc @renaynay |
Celestia Node version
v0.21.3-mocha
Celestia Consensus Node version
3.3.0-mocha
OS
Ubuntu 22.04.5 LTS (Jammy Jellyfish)
Steps to reproduce it
Restart the archive RPC while the bridge node is running.
Expected result
The bridge node should be able to recover from an archive RPC restart without requiring a manual service restart.
Actual result
The bridge node became unresponsive. It repeatedly logged fetcher and listener errors. The issue persisted until a manual restart was performed.
Relevant log output
Is the node "stuck"? Has it stopped syncing?
Yes
Notes
We encountered an issue today on our testnet bridge node. After restarting the archive RPC, our bridge node got stuck and could not recover on its own. However, a manual service restart resolved the issue.
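To make the expected behavior concrete, the sketch below shows a generic reconnect-with-backoff loop around an RPC subscription. It is not celestia-node's actual fetcher or listener code; the `Dialer`, `Client`, and `Block` types are invented for the example and only illustrate how a consumer could redial after the archive RPC restarts instead of staying stuck.

```go
// reconnect.go: illustrative reconnect-with-backoff loop for an RPC subscription.
// None of this mirrors celestia-node internals; Dialer, Client, and Block are
// placeholders invented for the sketch.
package main

import (
	"context"
	"errors"
	"log"
	"time"
)

// Client is a placeholder for whatever subscribes to new blocks over RPC.
type Client interface {
	Subscribe(ctx context.Context) (<-chan Block, error)
	Close() error
}

type Block struct{ Height int64 }

// Dialer re-establishes the RPC connection, e.g. after the remote endpoint restarts.
type Dialer func(ctx context.Context) (Client, error)

// runWithReconnect keeps a subscription alive: when the stream errors out or the
// channel closes (as happens when the archive RPC restarts), it redials with
// exponential backoff instead of waiting for a manual service restart.
func runWithReconnect(ctx context.Context, dial Dialer, handle func(Block)) error {
	backoff := time.Second
	const maxBackoff = time.Minute
	for {
		client, err := dial(ctx)
		if err != nil {
			log.Printf("dial failed: %v; retrying in %s", err, backoff)
		} else {
			blocks, err := client.Subscribe(ctx)
			if err == nil {
				backoff = time.Second // reset after a successful (re)connect
				err = consume(ctx, blocks, handle)
			}
			client.Close()
			log.Printf("subscription ended: %v; reconnecting in %s", err, backoff)
		}

		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(backoff):
		}
		if backoff *= 2; backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}

// consume drains the subscription until it closes or the context is cancelled.
func consume(ctx context.Context, blocks <-chan Block, handle func(Block)) error {
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case b, ok := <-blocks:
			if !ok {
				return errors.New("subscription channel closed")
			}
			handle(b)
		}
	}
}

func main() {} // entry point omitted; runWithReconnect is the illustrative piece
```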