
Executive supporters batch requests + mobile delegate names #807

Merged — 4 commits into develop from mobile-delegate-name, Jul 21, 2023

Conversation

hernandoagf (Collaborator)

What does this PR do?

  • Fix an issue where executive supporters were not being fetched because the batch RPC requests hit an Alchemy rate limit
  • Split the aligned delegate names into two rows on mobile screens, so the delegate name is always visible regardless of the length of the AVC name

@vercel

vercel bot commented Jul 19, 2023

governance-portal-v2 — ✅ Ready, preview updated Jul 20, 2023 6:46pm (UTC)

@what-the-diff

what-the-diff bot commented Jul 19, 2023

PR Summary

  • Enhanced Delegate Name Display
    A new file, splitDelegateName.ts, was added to the modules/delegates/helpers directory. It contains a function that splits delegate names into two parts, making them more readable on smaller screens and in tight spaces.

  • Updated AddressIconBox.tsx Component
    AddressIconBox.tsx was updated in how it renders delegate names. When a maximum text length is set, the delegate name is now split and rendered as multiple Text components rather than a single one, improving readability for users.

  • Alleviating API Restrictions on fetchExecutiveVoteTally.ts
    Changes in fetchExecutiveVoteTally.ts address Alchemy's batch request limit. Voter information is now fetched in chunks of 1000 addresses rather than in a single large request; related variables were adjusted accordingly, and the total vote tally is accumulated across the fetched chunks. This keeps each request within the allowed API limits.
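The chunk-and-accumulate flow described in this bullet can be sketched as follows; the function names and tally shape here are illustrative assumptions, not the portal's actual code:

```typescript
// Sketch of chunked batch fetching (hypothetical names, not the actual
// governance-portal-v2 code).
type VoteTally = Record<string, number>;

function chunkAddresses(voters: string[], chunkSize = 1000): string[][] {
  // 1000 keeps each batch under the Alchemy batch request limit
  return Array.from({ length: Math.ceil(voters.length / chunkSize) }, (_, i) =>
    voters.slice(i * chunkSize, i * chunkSize + chunkSize)
  );
}

async function fetchVoteTally(
  voters: string[],
  fetchVotes: (chunk: string[]) => Promise<VoteTally>
): Promise<VoteTally> {
  const tally: VoteTally = {};
  for (const chunk of chunkAddresses(voters)) {
    // Each chunk is fetched as its own batch; partial tallies are merged
    const partial = await fetchVotes(chunk);
    for (const [spell, weight] of Object.entries(partial)) {
      tally[spell] = (tally[spell] ?? 0) + weight;
    }
  }
  return tally;
}
```

With 2500 voters this produces three batches (1000, 1000, 500) instead of one request of 2500.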

adamgoth (Collaborator) previously approved these changes Jul 20, 2023

@adamgoth left a comment


Left some minor suggestions but nice work figuring out the issue

Comment on lines 79 to 81
{delegate.name.split(' - ').map((name, i) => (
  <Text key={delegate.name + '-' + i}>{limitString(name, limitTextLength, '...')}</Text>
))}

Think this is fine as is, but a small suggestion: this display logic for splitting the delegate name could be refactored into a little helper function, which would A) add some context about what it's doing (via the function name or a comment) for future maintainers and B) make it reusable if it's needed somewhere else
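A helper along the lines suggested here might look like the sketch below; the actual splitDelegateName.ts mentioned in the PR summary may differ, and this body is only inferred from the inline snippet's split on ' - ':

```typescript
// Hypothetical shape of the helper, inferred from the inline snippet.
// Splits an aligned delegate's display name into the delegate name and the
// AVC name so they can be rendered on separate rows on mobile.
function splitDelegateName(name: string): string[] {
  // "Aligned Delegate - AVC Name" -> ["Aligned Delegate", "AVC Name"]
  return name.split(' - ');
}
```

Names without the ' - ' separator come back as a single-element array, so the render would still produce exactly one Text component for them.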

Comment on lines 43 to 44
// Split voters array into chunks of 1000 addresses
const chunkSize = 1000;

Could also be helpful for future maintainers to leave a short comment about it being set to 1000 due to the batch request limits

Comment on lines +45 to 48
const chunkSize = 1000;
const voterChunks = Array.from({ length: Math.ceil(voters.length / chunkSize) }, (_, i) =>
voters.slice(i * chunkSize, i * chunkSize + chunkSize)
);


Nice work, one thing I'm curious about though is how the JsonRpcBatchProvider handles batching on its end. From what I understand, the provider just continuously adds batched requests to its internal cache and executes them with a 10ms delay.

If that's the case I wonder how the chunking here is actually working. But maybe it's because each iteration over our chunks creates a new pendingBatch in the provider as seen here.

In any case it does seem to be working, so that's great


@generatorpoint generatorpoint left a comment


Seems to work well, nice fix

@b-pmcg b-pmcg merged commit ed483b9 into develop Jul 21, 2023
@b-pmcg b-pmcg deleted the mobile-delegate-name branch July 21, 2023 20:41
5 participants