This 2TB restore finished the first 1TB at a more or less consistent pace, but it clearly slowed down over time. It's unclear whether there's anything to do here, but I'm opening this for discussion.
This was a restore on a 10-node cluster with no other traffic. Ignore everything after the drop; one of the nodes ran out of disk.
My current theory is that the naive rate limiting is at fault here. We allow num_nodes outstanding Import requests at any given time, so it's possible that some nodes weren't doing any work. I'm not sure why that would become more likely over time. We do start the restore by scattering leaseholders, but it's hard to tell whether that distribution becomes less balanced over time because of quiescence. A sketch of the kind of limiter I mean is below.
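To make the "some nodes weren't doing any work" failure mode concrete, here's a minimal Go sketch (not the actual restore code) of a global semaphore sized to the node count. Because the limit is cluster-wide rather than per-node, nothing prevents all outstanding Import requests from landing on the same few leaseholders while other nodes sit idle. The names `span`, `leaseholderFor`, and `sendImport` are hypothetical stand-ins.

```go
package main

import (
	"fmt"
	"sync"
)

type span struct{ start, end string }

// leaseholderFor stands in for routing a span to its leaseholder node.
func leaseholderFor(sp span) int { return len(sp.start) % 3 }

// sendImport stands in for issuing an Import request for a span.
func sendImport(node int, sp span) { fmt.Printf("node %d imports %v\n", node, sp) }

func runImports(spans []span, numNodes int) {
	sem := make(chan struct{}, numNodes) // at most num_nodes requests in flight, cluster-wide
	var wg sync.WaitGroup
	for _, sp := range spans {
		sem <- struct{}{} // acquire a slot; blocks once num_nodes requests are outstanding
		wg.Add(1)
		go func(sp span) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			// The slot isn't tied to any particular node, so several
			// concurrent Imports may target the same leaseholder.
			sendImport(leaseholderFor(sp), sp)
		}(sp)
	}
	wg.Wait()
}

func main() {
	spans := []span{{"a", "b"}, {"b", "c"}, {"c", "d"}, {"d", "e"}}
	runImports(spans, 3)
}
```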
Alternatively, RocksDB compactions become more expensive as the amount of data grows; maybe that has something to do with it.
This could be related to #14108. We could take a look at the node statistics page to see if anything is increasing appreciably over time (e.g. disk I/O).