activeStake - db-sync epoch_stake discrepancy #587
Is this still happening @CyberCyclone ? On my local instance:

```graphql
{
  activeStake_aggregate(where: {epochNo: {_eq: 289}}) {
    aggregate {
      count
    }
  }
}
```

```json
{
  "data": {
    "activeStake_aggregate": {
      "aggregate": {
        "count": "801247"
      }
    }
  }
}
```
Yes, epoch 289 is still stuck at 1000 delegates. Epoch 290 has the correct number. I'm going to look into this further on Sunday and see if I can figure it out. I had one local instance where this happened, but I've since had two production instances that have NOT had this issue.
Any updates on this, @CyberCyclone?
I was misled by Hasura. The epoch_stake table does not have as many rows as the pagination claims, and cardano-graphql is querying correctly. I therefore see this as a cardano-db-sync issue rather than a cardano-graphql issue.

Unfortunately, since I did a docker-compose down and brought the containers back up six days ago to see if that would fix it, the container logs have been deleted, so I can't see whether a particular error occurred.

Either way, if there was an error while epoch_stake was being populated, or if the containers were stopped, there should be a check at some point to verify that epoch_stake has completed, along with the ability to resume importing. This could perhaps be managed in conjunction with IntersectMBO/cardano-db-sync#797.

I personally see this as a low-priority issue, as at this point there's no evidence of it being a bug. If this happened, most people would be able to drop the DB and reimport from the snapshot.
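A per-epoch count run directly in Postgres would make this kind of gap visible without relying on Hasura's pagination estimate (which is based on the planner's row estimate, not an exact count). A minimal sketch, assuming the standard cardano-db-sync schema where `epoch_stake` has an `epoch_no` column:

```sql
-- Count epoch_stake rows for the most recent epochs; an epoch with
-- far fewer rows than its neighbours suggests an interrupted bulk insert.
SELECT epoch_no, COUNT(*) AS stake_rows
FROM epoch_stake
GROUP BY epoch_no
ORDER BY epoch_no DESC
LIMIT 5;
```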
Steps to reproduce the bug
Fully synced cardano-graphql:5.1.0.
Initially everything was fully synced and running normally. After leaving it for a day, I came back and the containers had crashed and were spitting out an error about an incorrect Postgres password, but I suspect Postgres had simply died. After restarting, everything seemed to continue on fine.
After leaving it for another day, I noticed that for epoch 289 (the current epoch), the activeStake_aggregate count query above only returns a count of 1000. I suspected that the change mentioned in IntersectMBO/cardano-db-sync#709 meant the bulk insert hadn't restarted after the containers crashed.
I've looked at the epoch_stake table and it returned 2497980 rows.
I'm not entirely sure on what's going on here, but the aggregate count isn't increasing.
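One way to narrow down whether the mismatch is in the GraphQL layer or in the data itself is to count the rows for the affected epoch directly in Postgres and compare that with what the API returns. A hedged sketch, again assuming the standard cardano-db-sync `epoch_stake` table:

```sql
-- Exact count for epoch 289; if this also returns 1000 while other
-- epochs have full counts, the rows were never written by db-sync
-- and the GraphQL aggregate is reporting accurately.
SELECT COUNT(*)
FROM epoch_stake
WHERE epoch_no = 289;
```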