
fix(breaking): add storage gap to da service manager storage #215

Merged · 1 commit · Jan 29, 2024

Conversation

@stevennevins (Contributor)

Why are these changes needed?

EigenDAServiceManager's storage needs a gap to prevent future changes to its layout from shifting the storage of contracts that inherit from it. This change adds the standard 50-slot storage buffer between each contract in the inheritance chain: three slots are currently used, so a 47-slot gap is appended to the end to account for future changes.

The attached screenshot shows the storage layout before and after, illustrating that the __GAP was needed after the three storage slots currently used by EigenDA to prevent shifting the entire storage layout if a new variable or mapping is added.

[Screenshot (2024-01-26): storage layout before and after adding the __GAP]
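As a rough illustration of the pattern described above (a sketch only; the variable names are hypothetical, not the actual EigenDAServiceManager storage):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.12;

// Illustrative sketch of the standard 50-slot storage-gap convention.
// Variable names are hypothetical, not the real EigenDA storage.
abstract contract ServiceManagerStorageSketch {
    // Three storage slots currently in use.
    address internal batchConfirmer;                           // slot 0
    uint32 internal taskNumber;                                // slot 1
    mapping(uint32 => bytes32) internal batchIdToMetadataHash; // slot 2

    // Reserve the remaining 47 of the 50 slots. A future variable takes
    // the place of one gap entry (shrinking the array by one slot), so
    // the layout of derived contracts never shifts.
    uint256[47] private __GAP;
}
```

The key invariant is that the used slots plus the gap always sum to 50, so every contract in the inheritance chain starts at a fixed, predictable slot offset.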

@stevennevins stevennevins requested a review from 0x0aa0 January 26, 2024 19:11
@pakim249CAL

Have you considered making use of unstructured storage? This completely removes any storage management issues with upgradeable contracts. OpenZeppelin makes use of it now.

@stevennevins (Contributor, Author)

> Have you considered making use of unstructured storage? This completely removes any storage management issues with upgradeable contracts. OpenZeppelin makes use of it now.

Yeah, this is a good suggestion. Thanks for surfacing it.
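For readers unfamiliar with the suggestion above, here is a minimal sketch of unstructured (namespaced) storage in the spirit of OpenZeppelin's ERC-7201 pattern; the namespace string and struct fields are hypothetical, not actual EigenDA code:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.12;

// Sketch of unstructured (namespaced) storage. Instead of relying on the
// compiler's sequential slot assignment (and gaps to pad it), all state
// lives in a struct pinned to a fixed, hash-derived slot, so inheritance
// order no longer affects the layout.
abstract contract UnstructuredStorageSketch {
    struct ServiceManagerStorage {
        address batchConfirmer;
        uint32 taskNumber;
        mapping(uint32 => bytes32) batchIdToMetadataHash;
    }

    // Hash-derived slot; a collision with the compiler's sequentially
    // assigned slots is infeasible. (ERC-7201 derives the slot with a
    // slightly more elaborate formula than this plain keccak256.)
    bytes32 private constant STORAGE_SLOT =
        keccak256("example.storage.EigenDAServiceManager");

    function _getStorage()
        internal
        pure
        returns (ServiceManagerStorage storage $)
    {
        bytes32 slot = STORAGE_SLOT;
        assembly {
            $.slot := slot
        }
    }

    // Example accessor: all reads and writes go through the pointer.
    function _setBatchConfirmer(address confirmer) internal {
        _getStorage().batchConfirmer = confirmer;
    }
}
```

The trade-off is that every state access goes through the `_getStorage()` helper, but in exchange no gaps or slot bookkeeping are needed when variables are added.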

@0x0aa0 0x0aa0 merged commit d06dec1 into m2-mainnet-contracts Jan 29, 2024
2 checks passed
@stevennevins stevennevins deleted the fix/service-manager-storage-gap branch March 11, 2024 18:22