Explore all unavailability scenarios and the behavior of the JSON-RPC client we are using.
Execution Engine errors: the EL can sometimes return internal errors due to software issues; for example, Nethermind occasionally returns internal errors on concurrency race conditions. Rather than handling each error explicitly based on its message, a basic retry (e.g. 3 retries) could be done to see if the EL recovers.
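A minimal sketch of what such a basic retry could look like, assuming a generic async `rpcCall` and the standard JSON-RPC internal error code (-32603); the helper name and error shape are illustrative, not the actual client API:

```ts
// Hypothetical sketch: retry a JSON-RPC call only when the EL reports an
// internal error (code -32603); anything else is rethrown immediately.
async function retryOnInternalError<T>(
  rpcCall: () => Promise<T>,
  maxRetries = 3,
  delayMs = 1000
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await rpcCall();
    } catch (err) {
      lastError = err;
      // Only internal errors are worth retrying; other errors are not transient.
      if ((err as {code?: number}).code !== -32603) throw err;
      if (attempt < maxRetries) {
        await new Promise((resolve) => setTimeout(resolve, delayMs));
      }
    }
  }
  throw lastError;
}
```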
Basically there are 3 fundamental APIs: engine_newPayload (retry needed); engine_forkchoiceUpdated (no retry needed without payload attributes, since it only tells the EL where our head is and will be called again within 12 seconds anyway, but with payload attributes retry up to 3 times at a 1-2 second interval); and engine_getPayloadV1 (again 3 retries at a 2 second interval).
Conclusion: in order to keep things simple, use 3 retries at a 1-2 second interval for all the endpoints that need to be retried (see the sketch below).
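A sketch of how that conclusion could be expressed as a per-endpoint policy table, reusing the retryOnInternalError helper from the previous sketch; the method names follow the Engine API, everything else (types, values, helper names) is illustrative:

```ts
interface RetryPolicy {
  retries: number;
  delayMs: number;
}

// One uniform policy for every endpoint that needs retries, per the conclusion above.
// forkchoiceUpdated is only routed through the retry wrapper by the caller when
// payload attributes are present; without them it is called once and left alone.
const retryPolicies: Record<string, RetryPolicy> = {
  engine_newPayloadV1: {retries: 3, delayMs: 2000},
  engine_forkchoiceUpdatedV1: {retries: 3, delayMs: 2000},
  engine_getPayloadV1: {retries: 3, delayMs: 2000},
};

async function callWithRetryPolicy<T>(method: string, call: () => Promise<T>): Promise<T> {
  const policy = retryPolicies[method];
  // Endpoints without a policy are called exactly once.
  if (policy === undefined) return call();
  return retryOnInternalError(call, policy.retries, policy.delayMs);
}
```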
Also add metrics for:
histogram of retry attempts per call
histogram of overall request times to the EL
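A rough sketch of those two histograms, using prom-client as an example metrics library; metric names, labels, and buckets are assumptions, not the client's actual metric definitions:

```ts
import {Histogram} from "prom-client";

// Retry attempts per call, bucketed at whole attempts (0 = succeeded first try).
const retryAttempts = new Histogram({
  name: "execution_engine_retry_attempts",
  help: "Number of retry attempts per execution engine call",
  labelNames: ["method"],
  buckets: [0, 1, 2, 3],
});

// Overall request time to the EL, including time spent in retries.
const requestDuration = new Histogram({
  name: "execution_engine_request_duration_seconds",
  help: "Overall request time to the execution engine, including retries",
  labelNames: ["method"],
  buckets: [0.1, 0.5, 1, 2, 5, 10],
});

// Example usage wrapping a call; the retry loop would increment `attempts`.
async function timedCall<T>(method: string, call: () => Promise<T>): Promise<T> {
  const start = Date.now();
  const attempts = 0;
  try {
    return await call();
  } finally {
    retryAttempts.labels(method).observe(attempts);
    requestDuration.labels(method).observe((Date.now() - start) / 1000);
  }
}
```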