Executive supporters batch requests + mobile delegate names #807
Conversation
Left some minor suggestions, but nice work figuring out the issue.
{delegate.name.split(' - ').map((name, i) => (
  <Text key={delegate.name + '-' + i}>{limitString(name, limitTextLength, '...')}</Text>
))}
I think this is fine as is, but a small suggestion: this display logic for splitting the delegate name could be refactored into a small helper function. That would (a) add some context about what it's doing (via the function name or a comment) for future maintainers, and (b) make it reusable if it's ever needed somewhere else.
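For example, a minimal sketch of such a helper (`splitDelegateName` is a hypothetical name, not code from this PR):

```typescript
// Hypothetical helper extracting the split logic from the JSX above, so the
// " - " delimiter convention for delegate names is documented in one place.
function splitDelegateName(fullName: string): string[] {
  // Delegate names may encode multiple display parts joined by " - "
  return fullName.split(' - ');
}
```

In the JSX this would become `splitDelegateName(delegate.name).map(...)`, keeping the render path free of string-parsing details.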
// Split voters array into chunks of 1000 addresses
const chunkSize = 1000;
It could also be helpful for future maintainers to leave a short comment noting that this is set to 1000 due to the batch request limit.
const chunkSize = 1000;
const voterChunks = Array.from({ length: Math.ceil(voters.length / chunkSize) }, (_, i) =>
  voters.slice(i * chunkSize, i * chunkSize + chunkSize)
);
Nice work. One thing I'm curious about, though, is how the JsonRpcBatchProvider handles batching on its end. From what I understand, the provider just continuously adds batched requests to its internal cache and executes them after a 10ms delay.
If that's the case, I wonder how the chunking here is actually working. But maybe it's because each loop over our chunks creates a new pendingBatch in the provider, as seen here.
In any case it does seem to be working, so that's great.
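A minimal sketch of why awaiting each chunk could line up with a fresh pendingBatch per chunk (this is an assumption about the provider's internals; `fetchInChunks` and its callback are illustrative, not code from the PR):

```typescript
// Illustrative only: awaiting each chunk before starting the next means all
// requests for one chunk are issued together (so a batching provider can
// coalesce them), while the next chunk's requests start a new pending batch.
async function fetchInChunks<T>(
  chunks: string[][],
  call: (addr: string) => Promise<T>
): Promise<T[]> {
  const results: T[] = [];
  for (const c of chunks) {
    // Promise.all fires every request in this chunk at once; awaiting here
    // lets the in-flight batch flush before the next chunk begins.
    results.push(...(await Promise.all(c.map(call))));
  }
  return results;
}
```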
Seems to work well, nice fix.
What does this PR do?