feat(katana): compute block commitments #2588
Conversation
Walkthrough

Ohayo, sensei! This pull request introduces significant updates across multiple files, primarily focusing on enhancing the workspace configuration.

Changes
Assessment against linked issues
Possibly related PRs
Actionable comments posted: 24
🧹 Outside diff range and nitpick comments (19)
crates/katana/storage/provider/src/traits/trie.rs (2)
10-17: Add documentation for the `ClassTrieWriter` trait.

Consider adding documentation to explain:
- The purpose and responsibility of this trait
- The meaning of the returned `Felt` value
- Any requirements or invariants for the updates map
- Example usage if applicable

Example:

```rust
/// Trait for writing class hash updates to a trie structure.
///
/// # Returns
/// Returns a `Felt` representing the new root hash after updates.
pub trait ClassTrieWriter: Send + Sync {
    // ... rest of the implementation
}
```
19-26: Add documentation for the `ContractTrieWriter` trait.

Similar to `ClassTrieWriter`, please add documentation explaining:
- The trait's purpose and responsibility
- The meaning of the returned `Felt` value
- Any requirements or invariants for the state updates
crates/katana/storage/db/Cargo.toml (1)
27-27: Consider moving smallvec to workspace dependencies, sensei!

For better version management and consistency across the workspace, consider moving the smallvec dependency to the workspace level in the root Cargo.toml.

```diff
-smallvec = "1.13.2"
+smallvec.workspace = true
```

crates/katana/storage/db/src/models/trie.rs (2)
7-13: Add documentation for enum variants, sensei!

While the enum is well-structured, it would benefit from documentation explaining the purpose and use case of each variant (`Trie`, `Flat`, `TrieLog`).

```diff
 #[repr(u8)]
 #[derive(Debug, Clone, Copy, Serialize, Deserialize)]
+/// Represents different types of database keys in the trie system
 pub enum TrieDatabaseKeyType {
+    /// Regular trie node
     Trie,
+    /// Flat storage entry
     Flat,
+    /// Trie operation log entry
     TrieLog,
 }
```
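For context on why `#[repr(u8)]` matters here, the discriminant behavior can be sketched in a standalone example. This is a hedged illustration, not code from the PR: the serde derives are omitted and the `from_u8` decoding helper is hypothetical, but the 0/1/2 discriminants follow from Rust's default assignment in declaration order.

```rust
// Because the enum is #[repr(u8)], each variant carries a stable one-byte
// discriminant (0, 1, 2 in declaration order by Rust's default rules),
// which is what makes it suitable as a database key prefix.
#[repr(u8)]
#[derive(Debug, Clone, Copy, PartialEq)]
enum TrieDatabaseKeyType {
    Trie,
    Flat,
    TrieLog,
}

impl TrieDatabaseKeyType {
    // Decode a discriminant byte read back from storage.
    fn from_u8(byte: u8) -> Option<Self> {
        match byte {
            0 => Some(Self::Trie),
            1 => Some(Self::Flat),
            2 => Some(Self::TrieLog),
            _ => None,
        }
    }
}
```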
15-21: Add documentation and consider key validation, sensei!

The struct could benefit from:
- Documentation explaining its purpose and field usage
- Validation or constraints on the `key` field's content

```diff
+/// Represents a key in the trie database system,
+/// combining the key type and the actual key bytes
 #[derive(Debug, Clone, Serialize, Deserialize)]
 pub struct TrieDatabaseKey {
+    /// The type of the database key
     pub r#type: TrieDatabaseKeyType,
+    /// The actual key bytes
     pub key: Vec<u8>,
 }
```

crates/katana/trie/src/lib.rs (1)
14-18: Clean trait definition, sensei!

The trait and its blanket implementation are well-designed. The documentation clearly explains its purpose as a shorthand for the combined traits.

Consider adding a code example to the documentation to show typical usage:

````diff
 /// A helper trait to define a database that can be used as a Bonsai Trie.
 ///
 /// Basically a short hand for `BonsaiDatabase + BonsaiPersistentDatabase<BasicId>`.
+///
+/// # Example
+/// ```
+/// use your_crate::{BonsaiTrieDb, YourDatabase};
+///
+/// // Any type implementing both required traits automatically implements BonsaiTrieDb
+/// let db: impl BonsaiTrieDb = YourDatabase::new();
+/// ```
````

crates/katana/primitives/src/state.rs (2)
33-48: Ohayo! Consider using checked arithmetic for len calculation.

The implementation looks good, but for very large state updates, the length calculation could potentially overflow. Consider using checked addition to handle this edge case gracefully.

```diff
 pub fn len(&self) -> usize {
-    let mut len: usize = 0;
+    let mut len = self.deployed_contracts.len();
+    len = len.checked_add(self.replaced_classes.len())
+        .expect("State update length overflow");
+    len = len.checked_add(self.declared_classes.len())
+        .expect("State update length overflow");
-    len += self.deployed_contracts.len();
-    len += self.replaced_classes.len();
-    len += self.declared_classes.len();
     // ... continue with checked_add for other fields
```
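The checked-addition idea above can be shown as a standalone sketch. The field names mirror a subset of the `StateUpdates` struct under review, but the element types here are simplified placeholders:

```rust
// Sum collection lengths with checked_add so an overflow panics with a clear
// message instead of silently wrapping in release builds.
struct StateUpdates {
    deployed_contracts: Vec<u64>,
    replaced_classes: Vec<u64>,
    declared_classes: Vec<u64>,
}

impl StateUpdates {
    fn len(&self) -> usize {
        [
            self.deployed_contracts.len(),
            self.replaced_classes.len(),
            self.declared_classes.len(),
        ]
        .into_iter()
        // try_fold short-circuits to None if any checked_add overflows.
        .try_fold(0usize, |acc, n| acc.checked_add(n))
        .expect("state update length overflow")
    }
}
```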
98-99: Document the purpose of magic numbers.

The usage of `Felt::ONE` and `Felt::ZERO` at lines 98-99 needs documentation explaining their significance in the state diff computation.

```diff
+    // These represent the class updates count and root count respectively,
+    // as specified in the Starknet state diff computation specification
     .chain(iter::once(Felt::ONE))
     .chain(iter::once(Felt::ZERO))
```

crates/katana/primitives/src/receipt.rs (1)
222-244: Efficient implementation of message hashing, sensei!

The implementation shows attention to performance:
- Pre-allocates vector capacity to avoid reallocations
- Uses iterator folding for clean and efficient implementation
- Follows the correct hashing algorithm structure

One minor suggestion for improved readability: consider extracting the message payload hash computation into a separate method:

```diff
 fn compute_messages_to_l1_hash(&self) -> Felt {
     let messages = self.messages_sent();
     let messages_len = messages.len();

     let mut accumulator: Vec<Felt> = Vec::with_capacity((messages_len * 3) + 1);
     accumulator.push(Felt::from(messages_len));

+    fn compute_payload_hash(payload: &[Felt]) -> Felt {
+        let len = Felt::from(payload.len());
+        let payload = iter::once(len).chain(payload.iter().cloned()).collect::<Vec<Felt>>();
+        hash::Poseidon::hash_array(&payload)
+    }
+
     let elements = messages.iter().fold(accumulator, |mut acc, msg| {
-        let len = Felt::from(msg.payload.len());
-        let payload = iter::once(len).chain(msg.payload.clone()).collect::<Vec<Felt>>();
-        let payload_hash = hash::Poseidon::hash_array(&payload);
+        let payload_hash = compute_payload_hash(&msg.payload);
         acc.push(msg.from_address.into());
         acc.push(msg.to_address);
         acc.push(payload_hash);
         acc
     });

     hash::Poseidon::hash_array(&elements)
 }
```

crates/katana/storage/provider/src/lib.rs (1)
399-410: LGTM! Well-structured implementation for block commitments.

The ContractTrieWriter implementation complements the ClassTrieWriter, providing a complete solution for computing block commitments through trie updates. The delegation pattern is consistently maintained, ensuring clean separation of concerns.
This implementation aligns well with the block commitment computation feature. The separation into class and contract tries allows for efficient and organized state management, which is crucial for blockchain state transitions.
crates/katana/storage/db/src/tables.rs (1)
173-176: Consider adding documentation for the new trie tables, sensei!

While the table definitions are correct, adding documentation comments would help explain:
- The purpose of each trie table
- The relationship between these tables
- Any specific usage patterns or constraints

Example documentation:

```rust
/// Class trie table for storing and managing class-related merkle tries
ClassTrie: (TrieDatabaseKey) => TrieDatabaseValue,
/// Contract trie table for managing contract-specific merkle tries
ContractTrie: (TrieDatabaseKey) => TrieDatabaseValue,
/// Contract storage trie table for handling contract storage merkle tries
ContractStorageTrie: (TrieDatabaseKey) => TrieDatabaseValue
```

Also applies to: 233-238
crates/katana/primitives/src/chain_spec.rs (1)
56-57: Ohayo sensei! Consider documenting the purpose of state_diff_length.

The new field `state_diff_length` is initialized to 0, but its purpose and significance in block header computation isn't immediately clear to other developers.

Add a doc comment explaining the field's purpose:

```diff
 let header = Header {
+    // Length of the state diff for this block, used in computing block commitments
     state_diff_length: 0,
```
crates/katana/trie/src/class.rs (2)
27-34: Review `BonsaiStorageConfig` parameters for optimal performance

Ohayo, sensei! The current configuration sets `max_saved_trie_logs` and `max_saved_snapshots` to `Some(0)`, and `snapshot_interval` to `u64::MAX`. This might prevent any logs or snapshots from being saved, which could impact debugging and recovery. Please verify that these settings align with your intended operational requirements.
20-22: Consider deriving additional traits for `ClassTrie`

Ohayo, sensei! Deriving traits like `Clone` or implementing `Default` for `ClassTrie` could enhance its usability and flexibility within the codebase, depending on how instances of this struct are managed.

crates/katana/storage/db/src/trie.rs (1)
122-122: Empty implementation of the `snapshot` method

The `snapshot` method is currently empty. If this method is intended to perform an action, please implement it. Otherwise, consider adding documentation to explain why it's empty.

Ohayo, sensei! Let me know if you'd like help with this implementation.
crates/katana/core/src/backend/mod.rs (2)
68-74: Ohayo, sensei! Consider avoiding unnecessary cloning of `state_updates`

In the `do_mine_block` method, `execution_output.states.state_updates.clone()` is being cloned when passing to `commit_block`. If possible, consider passing a reference to avoid the performance overhead of cloning, especially if `state_updates` is large. [performance]

Apply this diff:

```diff
 let block = self.commit_block(
     block_env.clone(),
-    execution_output.states.state_updates.clone(),
+    &execution_output.states.state_updates,
     txs,
     &receipts,
 )?;
```

Adjust the `commit_block` method to accept a reference to `StateUpdates`:

```diff
 fn commit_block(
     &self,
     block_env: BlockEnv,
-    state_updates: StateUpdates,
+    state_updates: &StateUpdates,
     transactions: Vec<TxWithHash>,
     receipts: &[ReceiptWithTxHash],
 ) -> Result<SealedBlock, BlockProductionError> {
```
220-222: Ohayo, sensei! Avoid cloning `state_updates` in `compute_state_diff_commitment`

In `compute_state_diff_commitment`, cloning `self.state_updates` can be avoided if `compute_state_diff_hash` accepts a reference. This enhances performance by reducing memory usage. [performance]

Apply this diff:

```diff
 fn compute_state_diff_commitment(&self) -> Felt {
-    compute_state_diff_hash(self.state_updates.clone())
+    compute_state_diff_hash(self.state_updates)
 }
```

Ensure that `compute_state_diff_hash` accepts a reference to `StateUpdates`.
.crates/katana/primitives/src/block.rs (2)
77-77: Typo in comment: 'uncommited' should be 'uncommitted'

Ohayo, sensei! There's a small typo in the comment; please correct 'uncommited' to 'uncommitted' for clarity.
133-138: Typo in variable name `concant`; should be `concat`

Ohayo, sensei! The variable `concant` in the `compute_hash` method seems to be a typo. Consider renaming it to `concat` for consistency and clarity.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (27)
- Cargo.toml (4 hunks)
- crates/katana/core/Cargo.toml (2 hunks)
- crates/katana/core/src/backend/mod.rs (3 hunks)
- crates/katana/core/src/backend/storage.rs (3 hunks)
- crates/katana/primitives/src/block.rs (5 hunks)
- crates/katana/primitives/src/chain_spec.rs (2 hunks)
- crates/katana/primitives/src/receipt.rs (2 hunks)
- crates/katana/primitives/src/state.rs (3 hunks)
- crates/katana/storage/db/Cargo.toml (2 hunks)
- crates/katana/storage/db/src/codecs/postcard.rs (2 hunks)
- crates/katana/storage/db/src/lib.rs (1 hunks)
- crates/katana/storage/db/src/models/mod.rs (1 hunks)
- crates/katana/storage/db/src/models/trie.rs (1 hunks)
- crates/katana/storage/db/src/tables.rs (7 hunks)
- crates/katana/storage/db/src/trie.rs (1 hunks)
- crates/katana/storage/provider/Cargo.toml (1 hunks)
- crates/katana/storage/provider/src/lib.rs (3 hunks)
- crates/katana/storage/provider/src/providers/db/mod.rs (1 hunks)
- crates/katana/storage/provider/src/providers/db/trie.rs (1 hunks)
- crates/katana/storage/provider/src/providers/fork/mod.rs (4 hunks)
- crates/katana/storage/provider/src/traits/mod.rs (1 hunks)
- crates/katana/storage/provider/src/traits/trie.rs (1 hunks)
- crates/katana/trie/Cargo.toml (1 hunks)
- crates/katana/trie/src/class.rs (1 hunks)
- crates/katana/trie/src/contract.rs (1 hunks)
- crates/katana/trie/src/lib.rs (1 hunks)
- crates/saya/provider/src/rpc/mod.rs (1 hunks)
✅ Files skipped from review due to trivial changes (5)
- crates/katana/storage/db/src/models/mod.rs
- crates/katana/storage/provider/src/providers/db/mod.rs
- crates/katana/storage/provider/src/traits/mod.rs
- crates/katana/trie/Cargo.toml
- crates/katana/trie/src/contract.rs
🔇 Additional comments (44)
crates/katana/storage/provider/src/traits/trie.rs (1)
1-8: Ohayo! Clean and well-organized imports, sensei!

The imports are properly organized and follow best practices, with external crates first followed by internal ones.
crates/katana/storage/db/Cargo.toml (1)
11-11: Ohayo! The katana-trie workspace dependency looks good, sensei!

The addition aligns well with the block commitment computation feature.
crates/katana/storage/provider/Cargo.toml (1)
12-12: Ohayo! The new dependencies look good, sensei! ٩(◕‿◕。)۶

The additions of `katana-trie`, `bitvec`, and `starknet-types-core` are well-structured and appropriate for implementing block commitments. The use of workspace versioning ensures consistent dependency management across the project.

Also applies to: 20-21
crates/katana/storage/db/src/codecs/postcard.rs (2)
14-14: Ohayo! Clean import addition, sensei!

The import follows the existing style and organization pattern.

42-42: Excellent choice using postcard serialization for TrieDatabaseValue!

The postcard serialization is well-suited for trie database values, offering:
- Space efficiency for binary data
- Strong type safety guarantees
- Good performance characteristics
crates/katana/storage/db/src/models/trie.rs (1)
1-6: Ohayo! Clean and well-organized imports, sensei!

The imports are minimal and purposeful, properly organizing external and internal dependencies.
crates/katana/trie/src/lib.rs (2)
1-13: Ohayo! Clean and well-organized imports, sensei!

The choice of dependencies and their organization shows good practices:
- Using `anyhow` for flexible error handling
- Re-exporting `bonsai_trie` makes the API more accessible
- Clear separation of external and internal modules
20-43: 🛠️ Refactor suggestion / ⚠️ Potential issue

Ohayo sensei! A few suggestions to enhance robustness

The implementation is generally sound, but there are several areas for improvement:

- Replace unwrap calls with proper error handling:

```diff
-    let mut bs = BonsaiStorage::<_, _, H>::new(bonsai_db, config).unwrap();
+    let mut bs = BonsaiStorage::<_, _, H>::new(bonsai_db, config)
+        .map_err(|e| anyhow::anyhow!("Failed to create BonsaiStorage: {}", e))?;

-    bs.insert(IDENTIFIER, key.as_bitslice(), value).unwrap();
+    bs.insert(IDENTIFIER, key.as_bitslice(), value)
+        .map_err(|e| anyhow::anyhow!("Failed to insert value at index {}: {}", id, e))?;

-    bs.commit(id).unwrap();
+    bs.commit(id)
+        .map_err(|e| anyhow::anyhow!("Failed to commit storage: {}", e))?;

-    Ok(bs.root_hash(IDENTIFIER).unwrap())
+    bs.root_hash(IDENTIFIER)
+        .map_err(|e| anyhow::anyhow!("Failed to get root hash: {}", e))
```

- The use of `to_be_bytes()` needs verification for different architectures.
- Consider making IDENTIFIER more specific and documented:

```diff
-const IDENTIFIER: &[u8] = b"1";
+/// Identifier used for the Merkle tree entries.
+/// This is used as a namespace for all entries in this tree.
+const MERKLE_TREE_IDENTIFIER: &[u8] = b"MERKLE_TREE_V1";
```

- Consider adding input validation:
  - Maximum allowed length for the input slice
  - Handling of empty input
crates/katana/core/Cargo.toml (1)
16-16: Ohayo! The new dependencies look good, sensei! ٩(◕‿◕。)۶

The addition of `katana-trie` and `starknet-types-core` as workspace dependencies aligns perfectly with the objective of computing block commitments. Both dependencies follow the project's conventions for workspace inheritance.

Also applies to: 31-31
crates/katana/primitives/src/state.rs (1)
90-107: ⚠️ Potential issue

Add validation for maximum state size.

The function should validate that the total number of elements won't exceed the maximum supported size for the Poseidon hash function.

```diff
 pub fn compute_state_diff_hash(states: StateUpdates) -> Felt {
+    const MAX_ELEMENTS: usize = /* value from Poseidon implementation */;
+    if states.len() > MAX_ELEMENTS {
+        panic!("State diff too large: {} elements", states.len());
+    }
     // ... rest of the function
```

crates/katana/storage/db/src/lib.rs (2)
Line range hint 1-144: Excellent code organization and error handling!

The existing database initialization and version management code is well-structured and thoroughly tested. The error handling is robust, particularly around version mismatches and malformed version files. This provides a solid foundation for the new trie functionality.

16-16: Ohayo sensei! The trie module integration looks good!

The addition of the `trie` module aligns well with the PR's objective of computing block commitments. Its placement among other database-related modules is logical and maintains good code organization.

Let's verify the trie module's integration:
✅ Verification successful
Let me gather more information about the trie module implementation to ensure proper integration.
Ohayo sensei! The trie module integration is properly implemented and well-structured!
The verification shows that:
- The trie module implements the required `BonsaiDatabase` and `BonsaiPersistentDatabase` traits from `katana-trie`
- Proper error handling and type safety are in place
- The module is correctly integrated with the database transaction system
- Dependencies are properly set up across the relevant crates
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Verify trie module integration and dependencies

# Check if the trie module file exists
fd --type f "trie.rs" --exec echo "Found trie module at: {}"

# Look for trie-related dependencies in Cargo.toml files
rg -l "katana-trie" -g "Cargo.toml"

# Check for any existing trie-related implementations
ast-grep --pattern 'trait $name { $$$ trie($$$) { $$$ } $$$ }'
```

Length of output: 603
Script:
```bash
#!/bin/bash
# Check the content of the trie module in db crate
cat ./crates/katana/storage/db/src/trie.rs

# Check for trie usage patterns in the codebase
rg "use.*trie" -g "*.rs" -A 2

# Look for any trie-related trait implementations
ast-grep --pattern 'impl $_ for $_ { $$$ trie($$$) { $$$ } $$$ }'
```

Length of output: 9466
crates/saya/provider/src/rpc/mod.rs (2)
Line range hint 151-155: Clean error handling implementation, sensei!

The error handling for PendingUpdate is well-implemented using anyhow with a clear error message.

93-93: ⚠️ Potential issue

Ohayo sensei! The `state_diff_length` initialization needs attention.

The field is currently hardcoded to 0, which might not accurately reflect the actual state diff length. This could affect block hash computation since the summary indicates this field is used in the Poseidon hash calculation.

Consider computing the actual state diff length from the state updates. Here's a suggested approach:

```diff
-    state_diff_length: 0,
+    state_diff_length: block.state_update.state_diff.storage_diffs.len() as u64,
```

Let's verify if the state diff length is used consistently:
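The idea of deriving the length from the diff's components can be sketched standalone. Note this is a hedged illustration: the field names below are assumptions modeled on typical Starknet state-diff structures, not the exact types in this PR, and the full set of components that should be counted needs to be confirmed against the spec.

```rust
// Compute a state diff length as the sum of its component counts
// instead of hardcoding 0.
struct StateDiff {
    storage_diffs: Vec<(u64, u64)>,   // (key, value) pairs
    deployed_contracts: Vec<u64>,
    declared_classes: Vec<u64>,
    nonces: Vec<u64>,
}

fn state_diff_length(diff: &StateDiff) -> u64 {
    (diff.storage_diffs.len()
        + diff.deployed_contracts.len()
        + diff.declared_classes.len()
        + diff.nonces.len()) as u64
}
```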
Cargo.toml (4)
38-38: Ohayo! New workspace member looks good, sensei!

The addition of `"crates/katana/trie"` aligns with the PR objective to compute block commitments, as tries are commonly used for this purpose.
221-221: Feature reordering is harmless, sensei!

The reordering of tonic features (`"gzip"`, `"tls"`, `"tls-roots"`) is purely cosmetic and doesn't affect functionality.
250-250: Version bump and new feature look solid, sensei!

The update of `starknet-types-core` to `0.1.7` with the addition of the `"hash"` feature aligns with the need for computing block commitments.
252-252: New dependency for bit manipulation, nice choice sensei!

The addition of `bitvec = "1.0.1"` is appropriate for efficient bit-level operations, which are often needed in trie implementations.

Let's verify if this dependency is actually used:
✅ Verification successful
Ohayo! The bitvec dependency is well utilized across the trie implementation, sensei!

The verification shows the `bitvec` crate is actively used in multiple key components:
- `crates/katana/trie/src/lib.rs` and `class.rs` for core trie functionality
- `crates/katana/storage/provider/src/providers/db/trie.rs` for the storage implementation

The imports cover essential bitvec features:
- `BitVec` for bit vector operations
- `Msb0` for most-significant-bit ordering
- `AsBits` for bit-level view transformations

🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Search for bitvec usage in the codebase
rg "use.*bitvec" --type rust
```

Length of output: 501
crates/katana/primitives/src/receipt.rs (4)
1-6: Ohayo! The imports look good, sensei!

All new imports are properly utilized in the implementation, particularly for hashing operations and derive macros.
180-189: Elegant struct design, sensei!

The `ReceiptWithTxHash` struct is well-designed with:
- Appropriate derive macros for common traits
- Smart use of `#[deref]` and `#[as_ref]` for transparent access to receipt methods
- Clear field documentation
192-194: Simple and effective constructor, sensei!

The constructor implementation is clean and straightforward.
201-218: Review the commented code and gas consumption implementation.

A few points to consider:
- There's a commented out line for `l1_data_gas`. Is this intentionally excluded or should it be implemented?
- `Felt::ZERO` is used for L2 gas consumption (line 214). Is this a temporary placeholder?

Let's check if `l1_data_gas` is used elsewhere in the codebase:

crates/katana/storage/provider/src/lib.rs (2)
22-22: LGTM! Clean trait import.

Ohayo! The import is well-organized and properly scoped to the traits::trie module.

386-397: LGTM! Clean trait implementation, sensei!

The implementation follows the established delegation pattern consistently used throughout the provider, properly forwarding the calls to the underlying database implementation.
crates/katana/core/src/backend/storage.rs (3)
23-23: Ohayo! Import looks good, sensei!

The new trie-related traits are properly imported and well-organized with other trait imports.
70-71: Implementation bounds are perfectly aligned, sensei!

The trait implementation bounds correctly mirror the trait declaration, maintaining consistency.
47-48: Trait bounds addition looks solid, sensei!

The new trie writer traits enhance the Database capabilities for block commitment computation. Let's verify that all implementors properly implement these new traits.
✅ Verification successful
All implementations are properly in place, sensei!
The verification shows that all necessary implementations of `ClassTrieWriter` and `ContractTrieWriter` are present for the relevant types:
- `BlockchainProvider<Db>`
- `DbProvider<Db>`
- `ForkedProvider`

Each implementation follows the same pattern and signature, ensuring consistent behavior across the codebase.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Verify all Database implementors have the new trait implementations
# Search for Database implementors and their corresponding ClassTrieWriter
# and ContractTrieWriter implementations

echo "Searching for Database implementors..."
rg "impl.*Database.*for" -A 5

echo "Searching for ClassTrieWriter implementations..."
rg "impl.*ClassTrieWriter.*for" -A 5

echo "Searching for ContractTrieWriter implementations..."
rg "impl.*ContractTrieWriter.*for" -A 5
```

Length of output: 17733
crates/katana/storage/db/src/tables.rs (3)
13-13: Ohayo! Clean trait definition for trie functionality.

The `Trie` trait is well-defined with appropriate type constraints, ensuring type safety for trie-specific operations.

Also applies to: 39-39
241-243: Clean trait implementations!

The `Trie` trait implementations are concise and follow Rust's implementation patterns correctly.
50-50: Ohayo! Let's verify the integration of these new tables, sensei!

The addition of three new trie tables (`NUM_TABLES` increased to 26) represents a significant change. Let's verify their integration:

✅ Verification successful
Let me gather more information about the initialization and usage of these tables.
Let me check one more crucial aspect about the table initialization and usage.
Ohayo! The trie tables are properly integrated, sensei!
The verification shows that the new trie tables are well-integrated into the system:
- All three tries (ClassTrie, ContractTrie, ContractStorageTrie) have proper implementations
- The tries are integrated with both DbProvider and ForkedProvider
- The table count (NUM_TABLES = 26) is correctly referenced in tests
- The trie system has a complete implementation chain from high-level providers down to the database layer
The codebase shows proper initialization, usage patterns, and test coverage for these trie tables, indicating a well-structured architectural change.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Verify the initialization and usage of new trie tables

# Check for database initialization code
echo "Checking database initialization..."
rg -l "ClassTrie|ContractTrie|ContractStorageTrie" --type rust

# Check for any TODO or FIXME comments related to tries
echo "Checking for pending work..."
rg "TODO|FIXME" -C 2 --type rust | rg -i "trie"
```

Length of output: 750
Script:
```bash
#!/bin/bash
# Check the table definitions and initialization
echo "=== Checking table definitions ==="
rg "Table::new\(" -A 5 crates/katana/storage/db/src/tables.rs

# Check the actual usage of these tries in the core functionality
echo -e "\n=== Checking trie usage in core functionality ==="
ast-grep --pattern 'impl $_ { $$$ fn $_(trie: $_) { $$$ } }'

# Check if there are any test cases for these tries
echo -e "\n=== Checking test coverage ==="
rg "test|#\[test\]" -B 2 -A 5 | rg -i "trie"
```

Length of output: 5468
Script:
```bash
#!/bin/bash
# Check the table initialization in the database setup
echo "=== Checking database setup ==="
rg "NUM_TABLES" -B 5 -A 10 crates/katana/storage/db/src/tables.rs

# Check for any direct references to table indices
echo -e "\n=== Checking table index references ==="
rg "TABLE_\w+\s*=\s*\d+" crates/katana/storage/db/src/tables.rs

# Check the trie implementation
echo -e "\n=== Checking trie implementations ==="
rg "impl.*Trie" -A 5 --type rust
```

Length of output: 8178
crates/katana/storage/provider/src/providers/fork/mod.rs (3)
4-4: Ohayo! The imports look good, sensei!

The new imports are well-organized and necessary for implementing the trie-related functionality.
Also applies to: 20-20, 41-41
601-611: ⚠️ Potential issue

Ohayo! This implementation needs attention, sensei!

The current implementation of `ContractTrieWriter` is a stub that ignores the state updates and returns `Felt::ZERO`. This could lead to incorrect state root calculations.

Consider implementing the following:
- Process the state updates to compute contract commitments
- Use appropriate hashing mechanism for commitment calculation
- Add error handling for invalid state updates
Let's verify if this is the only implementation:
589-599: Verify if stub implementation is intentional, sensei!

The current implementation ignores the parameters and returns `Felt::ZERO`. This could lead to incorrect block commitment calculations if this implementation is used in production.

Let's check if there are any TODOs or related implementations:
✅ Verification successful
Let me generate additional verification scripts to check if this is intentional:
Let me check one more aspect to understand if this is part of a larger pattern in the fork provider:
Ohayo sensei! The stub implementation is intentional for the fork provider
Looking at the implementation patterns in ForkedProvider, both `ClassTrieWriter` and `ContractTrieWriter` are intentionally stubbed with `Felt::ZERO` returns. This is consistent with the fork provider's design, where it acts as a read-only view of forked state and doesn't need to compute new commitments.

Key evidence:
- Both trie writers (`ClassTrieWriter` and `ContractTrieWriter`) follow the same pattern of ignoring inputs and returning `Felt::ZERO`
- The fork provider properly implements read operations like `state_root` but intentionally stubs write operations
- This matches the provider's role as a forked state viewer rather than a full state manager
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Search for other ClassTrieWriter implementations or related TODOs
echo "Searching for other ClassTrieWriter implementations..."
rg "impl ClassTrieWriter" -A 10

echo "Searching for TODOs related to class trie..."
rg "TODO.*class.*trie" -i
```

Length of output: 48203
Script:
```bash
#!/bin/bash
# Search for TODOs or comments related to ForkedProvider implementation
echo "Searching for TODOs or comments about ForkedProvider..."
rg "TODO.*ForkedProvider" -i

# Search for any comments about block commitment or state root in the fork provider
echo "Searching for comments about block commitment in fork provider..."
rg "(commitment|state_root).*fork" -B 2 -A 2

# Check if there are any tests for ForkedProvider
echo "Checking for fork provider tests..."
fd "test.*fork" --type f --exec rg "test.*ForkedProvider" {}
```

Length of output: 1012
Script:
```bash
#!/bin/bash
# Check other trait implementations in ForkedProvider
echo "Checking other trait implementations in ForkedProvider..."
rg "impl.*for ForkedProvider" -A 10

# Look for any comments about fork provider's design
echo "Looking for design comments..."
rg "ForkedProvider" -B 3 -A 3 -g "*.rs" --no-filename | grep -i "//.*fork"
```

Length of output: 20052
crates/katana/primitives/src/chain_spec.rs (2)
375-376: LGTM! Test coverage looks good, sensei.

The test case properly verifies the initialization of `state_diff_length` in the block header.
Line range hint 1-24: Verify the impact on block commitment computation.

The addition of `state_diff_length` appears to be part of block commitment computation. Let's verify its usage in the codebase.

crates/katana/trie/src/class.rs (2)
44-45: Confirm the correct use of `Poseidon::hash` with constants

Ohayo, sensei! In the `apply` method, `Poseidon::hash(&CONTRACT_CLASS_LEAF_V0, compiled_class_hash)` is invoked. Please ensure that using `CONTRACT_CLASS_LEAF_V0` as a constant in the hash function aligns with cryptographic standards and the intended behavior for state updates.
25-25: Verify the identifier format for consistency

Ohayo, sensei! The identifier `b"0xclass"` is used for the Bonsai Trie. Please ensure that this identifier complies with any naming conventions or formats required by the Bonsai Trie system or related protocols.

Run the following script to check for consistent identifier usage across the codebase:
✅ Verification successful
Identifier format is consistent with related implementations

Ohayo, sensei! After checking the codebase, I can confirm that the identifier format follows the established pattern. We found two similar identifiers:
- `b"0xclass"` in ClassTrie
- `b"0xcontract"` in the storage provider

Both follow the same format: a `b"0x"` prefix followed by a descriptive name. This is a consistent pattern across the Bonsai Trie implementations.

🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Verify that the identifier `b"0xclass"` is used consistently.
# Test: Search for Bonsai Trie identifiers. Expect: Only `b"0xclass"` is used.

rg 'BonsaiTrie::<.*>::IDENTIFIER' -A 1 \
  | rg -v 'b"0xclass"' && echo "Inconsistent identifiers found."
```

Length of output: 61
Script:
#!/bin/bash # Let's try a broader search for identifiers in trie-related code rg "IDENTIFIER.*=.*b\"0x" -B 2 -A 1 # Also search for any static byte string identifiers rg "static.*\[u8\].*=.*b\"0x" -B 2 -A 1 # Search for any struct implementations that might use identifiers ast-grep --pattern 'impl $_ { const IDENTIFIER: $_ = $_ }'Length of output: 869
crates/katana/storage/db/src/trie.rs (1)

124-126: Verify the `merge` method implementation. The `merge` method returns `Ok(())` without performing any merge logic. Please confirm if this is the intended behavior. Ohayo, sensei! Is this method expected to be a no-op? If not, I can assist with implementing the merge functionality.

crates/katana/core/src/backend/mod.rs (1)
58-60: Ohayo, sensei! Great job updating to `ReceiptWithTxHash`. Updating the receipts to use `ReceiptWithTxHash` improves the association between transactions and their receipts. This enhances traceability and aligns with the updated method signatures.

crates/katana/primitives/src/block.rs (5)
1-2: Ohayo, sensei! Necessary imports added correctly. The imports for `cairo_short_string_to_felt` and `short_string` are properly included and support the new functionality in `compute_hash`.

13-16: Good reorganization of type definitions. Ohayo, sensei! Moving the `BlockNumber` and `BlockHash` type definitions improves code readability and structure.
174-199: `concat_counts` function correctly implements the concatenation logic. Ohayo, sensei! The `concat_counts` function properly concatenates the counts and L1 DA mode into a single felt value as per the specification.

324-337: Good addition of tests for the `concat_counts` function. Ohayo, sensei! The added tests for `concat_counts` ensure the correctness of the concatenation logic and enhance code reliability.
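The layout being tested can be illustrated without `Felt` arithmetic. The sketch below is a hypothetical stand-in (not the crate's actual `concat_counts`) that packs the three counts and the DA-mode bit into a 32-byte big-endian buffer, following the slot layout described in the review: one 64-bit slot per count, then the DA-mode bit followed by 63 zero bits.

```rust
/// Hypothetical illustration of the header bit layout: three 64-bit count
/// slots followed by a 1-bit L1 DA mode flag padded with 63 zero bits.
/// This is a sketch for intuition, not the crate's real `concat_counts`.
fn concat_counts_sketch(
    transaction_count: u32,
    event_count: u32,
    state_diff_length: u32,
    l1_da_mode_is_blob: bool,
) -> [u8; 32] {
    let mut out = [0u8; 32];
    // Each count is widened to u64 and written big-endian into its slot.
    out[0..8].copy_from_slice(&u64::from(transaction_count).to_be_bytes());
    out[8..16].copy_from_slice(&u64::from(event_count).to_be_bytes());
    out[16..24].copy_from_slice(&u64::from(state_diff_length).to_be_bytes());
    // The DA mode occupies the most significant bit of the final slot.
    if l1_da_mode_is_blob {
        out[24] = 0b1000_0000;
    }
    out
}
```

Because each slot is 64 bits wide while the inputs are `u32`, no count can spill into a neighboring slot, which matches the overflow reasoning elsewhere in this review.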
93-94: Ensure `state_diff_length` is properly initialized in all `Header` instances. Ohayo, sensei! The addition of `state_diff_length` requires updating all instances where `Header` is instantiated to include this new field. You can run the following script to find `Header` initializations that may need updating:

✅ Verification successful

No initialization issues found with the `state_diff_length` field. Ohayo, sensei! After careful verification, I can confirm that all `Header` initializations are safe because:

- The `Default` implementation properly initializes `state_diff_length` to 0
- All found initializations either:
  - Use `..Default::default()` to fill missing fields
  - Are complete struct initializations where missing fields are handled by the compiler

The Rust compiler's strict initialization checks ensure we won't miss this field in any case.
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:
#!/bin/bash
# Description: Find initializations of `Header` that may be missing `state_diff_length`.
# Test: Search for `Header` initializations without `state_diff_length`. Expect: No matches.
ast-grep --pattern 'Header { $$$ }' --lang rust | grep -B 5 -A 5 'Header' | grep -v 'state_diff_length'

Length of output: 4472

Script:
#!/bin/bash
# Let's check the Header struct definition and its Default implementation
ast-grep --pattern 'impl Default for Header' --lang rust

# Also check if there are any direct struct initializations without Default::default()
ast-grep --pattern 'struct Header { $$$ }' --lang rust

Length of output: 1837
impl Encode for TrieDatabaseKey {
    type Encoded = Vec<u8>;

    fn encode(self) -> Self::Encoded {
        let mut encoded = Vec::new();
        encoded.push(self.r#type as u8);
        encoded.extend(self.key);
        encoded
    }
}
🛠️ Refactor suggestion
Optimize vector allocation, sensei!
Consider pre-allocating the vector capacity to avoid potential reallocations:
fn encode(self) -> Self::Encoded {
- let mut encoded = Vec::new();
+ let mut encoded = Vec::with_capacity(1 + self.key.len());
encoded.push(self.r#type as u8);
encoded.extend(self.key);
encoded
}
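To make the pre-allocation point concrete, here is a small self-contained sketch (a generic tag-plus-key encoder, not the crate's real `Encode` impl) showing that the capacity reserved up front covers the whole output:

```rust
// Hypothetical encoder mirroring the pattern above: one tag byte followed
// by the raw key bytes, with the exact capacity reserved in advance.
fn encode_key(tag: u8, key: &[u8]) -> Vec<u8> {
    let mut encoded = Vec::with_capacity(1 + key.len());
    encoded.push(tag);
    encoded.extend_from_slice(key);
    encoded
}
```

Since `push` and `extend_from_slice` stay within the reserved capacity, no reallocation happens during encoding.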
impl Decode for TrieDatabaseKey {
    fn decode<B: AsRef<[u8]>>(bytes: B) -> Result<Self, CodecError> {
        let bytes = bytes.as_ref();
        if bytes.is_empty() {
            panic!("emptyy buffer")
        }

        let r#type = match bytes[0] {
            0 => TrieDatabaseKeyType::Trie,
            1 => TrieDatabaseKeyType::Flat,
            2 => TrieDatabaseKeyType::TrieLog,
            _ => panic!("Invalid trie database key type"),
        };

        let key = bytes[1..].to_vec();

        Ok(TrieDatabaseKey { r#type, key })
    }
}
Fix error handling and bounds checking, sensei!
Several issues need attention:
- Replace panics with proper error handling
- Fix typo in "emptyy"
- Add bounds checking before slice access
Apply these improvements:
fn decode<B: AsRef<[u8]>>(bytes: B) -> Result<Self, CodecError> {
let bytes = bytes.as_ref();
if bytes.is_empty() {
- panic!("emptyy buffer")
+ return Err(CodecError::InvalidLength);
}
+
+ if bytes.len() < 2 {
+ return Err(CodecError::InvalidLength);
+ }
let r#type = match bytes[0] {
0 => TrieDatabaseKeyType::Trie,
1 => TrieDatabaseKeyType::Flat,
2 => TrieDatabaseKeyType::TrieLog,
- _ => panic!("Invalid trie database key type"),
+ _ => return Err(CodecError::InvalidFormat),
};
let key = bytes[1..].to_vec();
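The suggested error paths can be sketched in isolation. The snippet below uses local stand-in types, since the exact variants of the crate's `CodecError` are an assumption here:

```rust
// Stand-in types for illustration only; the real code uses CodecError and
// TrieDatabaseKeyType from the katana storage crate.
#[derive(Debug, PartialEq)]
enum DecodeError {
    Empty,
    TooShort,
    InvalidType,
}

#[derive(Debug, PartialEq)]
enum KeyType {
    Trie,
    Flat,
    TrieLog,
}

fn decode_key(bytes: &[u8]) -> Result<(KeyType, Vec<u8>), DecodeError> {
    if bytes.is_empty() {
        return Err(DecodeError::Empty); // was: panic!("emptyy buffer")
    }
    if bytes.len() < 2 {
        return Err(DecodeError::TooShort); // a type byte alone carries no key
    }
    let ty = match bytes[0] {
        0 => KeyType::Trie,
        1 => KeyType::Flat,
        2 => KeyType::TrieLog,
        _ => return Err(DecodeError::InvalidType), // was: panic!
    };
    Ok((ty, bytes[1..].to_vec()))
}
```

Every failure mode now surfaces as a `Result` the caller can handle, instead of aborting the process.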
pub fn compute_state_diff_hash(states: StateUpdates) -> Felt {
    let replaced_classes_len = states.replaced_classes.len();
    let deployed_contracts_len = states.deployed_contracts.len();
    let updated_contracts_len = Felt::from(deployed_contracts_len + replaced_classes_len);
    // flatten the updated contracts into a single list of Felt values
    let updated_contracts = states.deployed_contracts.into_iter().chain(states.replaced_classes);
    let updated_contracts = updated_contracts.flat_map(|(addr, hash)| vec![addr.into(), hash]);

    let declared_classes = states.declared_classes;
    let declared_classes_len = Felt::from(declared_classes.len());
    let declared_classes = declared_classes.into_iter().flat_map(|e| vec![e.0, e.1]);

    let deprecated_declared_classes = states.deprecated_declared_classes;
    let deprecated_declared_classes_len = Felt::from(deprecated_declared_classes.len());

    let storage_updates = states.storage_updates;
    let storage_updates_len = Felt::from(storage_updates.len());
    let storage_updates = storage_updates.into_iter().flat_map(|update| {
        let address = Felt::from(update.0);
        let storage_entries_len = Felt::from(update.1.len());
        let storage_entries = update.1.into_iter().flat_map(|entries| vec![entries.0, entries.1]);
        iter::once(address).chain(iter::once(storage_entries_len)).chain(storage_entries)
    });

    let nonce_updates = states.nonce_updates;
    let nonces_len = Felt::from(nonce_updates.len());
    let nonce_updates = nonce_updates.into_iter().flat_map(|nonce| vec![nonce.0.into(), nonce.1]);
|
🛠️ Refactor suggestion
Sensei, consider extracting flattening logic into separate methods.
The flattening logic for each state component could be moved into separate helper methods to improve readability and maintainability.
+impl StateUpdates {
+ fn flatten_contracts(&self) -> (Felt, impl Iterator<Item = Felt>) {
+ let len = self.deployed_contracts.len() + self.replaced_classes.len();
+ let iter = self.deployed_contracts
+ .iter()
+ .chain(&self.replaced_classes)
+ .flat_map(|(addr, hash)| vec![(*addr).into(), *hash]);
+ (Felt::from(len), iter)
+ }
+ // Add similar methods for other components
+}
pub fn compute_state_diff_hash(states: StateUpdates) -> Felt {
- let replaced_classes_len = states.replaced_classes.len();
- let deployed_contracts_len = states.deployed_contracts.len();
- let updated_contracts_len = Felt::from(deployed_contracts_len + replaced_classes_len);
- let updated_contracts = states.deployed_contracts.into_iter()...
+ let (updated_contracts_len, updated_contracts) = states.flatten_contracts();
// ... rest of the function
Committable suggestion was skipped due to low confidence.
assert_eq!(Tables::ALL[23].name(), ClassTrie::NAME);
assert_eq!(Tables::ALL[24].name(), ContractTrie::NAME);
Add missing test assertion for ContractStorageTrie, sensei!
The test cases are missing an assertion for the ContractStorageTrie
table type. Consider adding:
assert_eq!(Tables::ALL[23].name(), ClassTrie::NAME);
assert_eq!(Tables::ALL[24].name(), ContractTrie::NAME);
+assert_eq!(Tables::ALL[25].name(), ContractStorageTrie::NAME);
assert_eq!(Tables::Headers.table_type(), TableType::Table);
// ... other assertions ...
assert_eq!(Tables::ClassTrie.table_type(), TableType::Table);
assert_eq!(Tables::ContractTrie.table_type(), TableType::Table);
+assert_eq!(Tables::ContractStorageTrie.table_type(), TableType::Table);
Also applies to: 302-303
contract_leafs
    .into_iter()
    .map(|(address, mut leaf)| {
        let storage_root = storage_trie_db.root_hash(&address.to_bytes_be()).unwrap();
Handle errors when retrieving storage_root
Ohayo, sensei! Calling storage_trie_db.root_hash(...).unwrap()
may panic if an error occurs. Handling the error will improve reliability.
Apply this diff:
-let storage_root = storage_trie_db.root_hash(&address.to_bytes_be()).unwrap();
+let storage_root = storage_trie_db.root_hash(&address.to_bytes_be())?;
    .class_hash
    .unwrap_or(provider.class_hash_of_contract(*address).unwrap().unwrap_or_default());

let storage_root = contract_leaf.storage_root.expect("root need to set");
Avoid panics by handling potential `None` values in `storage_root`

Ohayo, sensei! Using `contract_leaf.storage_root.expect("root need to set")` may panic if `storage_root` is `None`. Consider handling the `None` case or returning an error. Modify the code to handle `None` values appropriately, perhaps by returning an error if `storage_root` is missing:
-let storage_root = contract_leaf.storage_root.expect("root need to set");
+let storage_root = match contract_leaf.storage_root {
+ Some(root) => root,
+ None => return Err(/* appropriate error */),
+};
Committable suggestion was skipped due to low confidence.
let class_trie_root = ClassTrieWriter::insert_updates(
    &self.trie_provider,
    self.header.number,
    &self.state_updates.declared_classes,
)
.unwrap();
Ohayo, sensei! Handle potential errors instead of unwrapping

In `compute_new_state_root`, using `.unwrap()` after `insert_updates` may cause a panic if an error occurs. Consider handling the `Result` properly to enhance robustness.
Apply this diff:
let class_trie_root = ClassTrieWriter::insert_updates(
&self.trie_provider,
self.header.number,
&self.state_updates.declared_classes,
)
- .unwrap();
+ ?; // Propagate the error or handle appropriately
let contract_trie_root = ContractTrieWriter::insert_updates(
&self.trie_provider,
self.header.number,
self.state_updates,
)
- .unwrap();
+ ?; // Propagate the error or handle appropriately
This will require updating the function signature to return a Result<Felt, Error>
and handling the error accordingly.
Also applies to: 255-261
    compute_merkle_root::<hash::Poseidon>(&tx_hashes).unwrap()
}
Ohayo, sensei! Handle errors when computing Merkle roots

Calls to `compute_merkle_root` using `.unwrap()` can lead to panics if an error occurs. Consider handling the potential errors to improve reliability.
Modify the code:
fn compute_transaction_commitment(&self) -> Felt {
let tx_hashes = self.transactions.iter().map(|t| t.hash).collect::<Vec<TxHash>>();
- compute_merkle_root::<hash::Poseidon>(&tx_hashes).unwrap()
+ compute_merkle_root::<hash::Poseidon>(&tx_hashes)?
}
fn compute_receipt_commitment(&self) -> Felt {
let receipt_hashes = self.receipts.iter().map(|r| r.compute_hash()).collect::<Vec<Felt>>();
- compute_merkle_root::<hash::Poseidon>(&receipt_hashes).unwrap()
+ compute_merkle_root::<hash::Poseidon>(&receipt_hashes)?
}
fn compute_event_commitment(&self) -> Felt {
// ...
- compute_merkle_root::<hash::Poseidon>(&hashes).unwrap()
+ compute_merkle_root::<hash::Poseidon>(&hashes)?
}
Update the function signatures to return Result<Felt, Error>
as necessary.
Also applies to: 217-218, 243-244
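The shape of that change can be sketched with simplified types. The stub below is not the real Poseidon-based `compute_merkle_root`; it only illustrates `?`-based propagation replacing `unwrap()`:

```rust
// Simplified stand-ins: u64 hashes and a unit error type instead of
// Felt and the real trie error.
#[derive(Debug, PartialEq)]
struct MerkleError;

fn compute_merkle_root(hashes: &[u64]) -> Result<u64, MerkleError> {
    if hashes.is_empty() {
        return Err(MerkleError); // the real code would surface the trie error
    }
    // Placeholder fold standing in for the actual Merkle computation.
    Ok(hashes.iter().fold(0u64, |acc, h| acc.rotate_left(7) ^ h))
}

fn compute_transaction_commitment(tx_hashes: &[u64]) -> Result<u64, MerkleError> {
    // `?` propagates the failure to the caller instead of panicking.
    let root = compute_merkle_root(tx_hashes)?;
    Ok(root)
}
```

Once the helpers return `Result`, the caller (here, block sealing) decides whether the failure aborts block production or is reported upward.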
  block_env: BlockEnv,
  state_updates: StateUpdates,
  transactions: Vec<TxWithHash>,
- receipts: &[Receipt],
+ receipts: &[ReceiptWithTxHash],
🛠️ Refactor suggestion
Ohayo, sensei! Modify `commit_block` to accept `state_updates` by reference

Following up, accepting `state_updates` as a reference in `commit_block` improves performance by avoiding unnecessary clones. [performance]
Apply this diff:
fn commit_block(
&self,
block_env: BlockEnv,
- state_updates: StateUpdates,
+ state_updates: &StateUpdates,
transactions: Vec<TxWithHash>,
receipts: &[ReceiptWithTxHash],
) -> Result<SealedBlock, BlockProductionError> {
fc4eda8 to 00749ad
Actionable comments posted: 9
🧹 Outside diff range and nitpick comments (9)
crates/katana/storage/provider/src/traits/trie.rs (1)

10-17: Add documentation for public traits. Ohayo! The traits look well-structured, but as public APIs, they should include documentation explaining their purpose, usage, and the meaning of the returned `Felt` (presumably a commitment). Add documentation like this:

+/// Handles the insertion of class updates into a trie structure and computes block commitments.
+///
+/// # Arguments
+/// * `block_number` - The block number for which updates are being inserted
+/// * `updates` - Map of class hashes to their compiled versions
+///
+/// # Returns
+/// A commitment representing the updated trie state
 #[auto_impl::auto_impl(&, Box, Arc)]
 pub trait ClassTrieWriter: Send + Sync {

Also applies to: 19-26
crates/katana/storage/db/src/models/trie.rs (1)

7-13: Add explicit values and documentation for enum variants, sensei! The enum variants should have explicit values to prevent breaking changes if variants are reordered:

 #[repr(u8)]
 #[derive(Debug, Clone, Copy, Serialize, Deserialize)]
+/// Represents different types of trie database keys
 pub enum TrieDatabaseKeyType {
-    Trie,
-    Flat,
-    TrieLog,
+    Trie = 0,
+    Flat = 1,
+    TrieLog = 2,
 }

crates/katana/primitives/src/state.rs (2)
34-48: Ohayo! The len() implementation looks good, sensei! The method correctly aggregates the total number of state updates. Consider using `sum()` for a more idiomatic approach to the storage updates iteration:

- for updates in self.storage_updates.values() {
-     len += updates.len();
- }
+ len += self.storage_updates.values().map(|updates| updates.len()).sum::<usize>();
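A self-contained sketch of the suggested `sum()` form, with simplified key/value types standing in for the real `StateUpdates` fields:

```rust
use std::collections::BTreeMap;

// Simplified stand-in for StateUpdates: only a few fields, to show how
// the nested storage updates contribute to the total length.
#[derive(Default)]
struct StateUpdatesSketch {
    nonce_updates: BTreeMap<u64, u64>,
    declared_classes: BTreeMap<u64, u64>,
    storage_updates: BTreeMap<u64, BTreeMap<u64, u64>>,
}

impl StateUpdatesSketch {
    fn len(&self) -> usize {
        self.nonce_updates.len()
            + self.declared_classes.len()
            // One entry per storage slot, summed across all contracts.
            + self.storage_updates.values().map(|u| u.len()).sum::<usize>()
    }
}
```

The iterator form removes the mutable accumulator and makes the per-contract aggregation a single expression.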
90-90: Consider defining the magic string as a constant, sensei. The magic string "STARKNET_STATE_DIFF0" should be defined as a module-level constant for better maintainability and reusability.

+const STATE_DIFF_MAGIC: Felt = short_string!("STARKNET_STATE_DIFF0");
 pub fn compute_state_diff_hash(states: StateUpdates) -> Felt {
     // ...
-    let magic = short_string!("STARKNET_STATE_DIFF0");
+    let magic = STATE_DIFF_MAGIC;

crates/katana/storage/provider/src/providers/db/trie.rs (1)
22-27: Add documentation for the ContractLeaf struct. Ohayo, sensei! The `ContractLeaf` struct would benefit from documentation explaining its purpose and the significance of its optional fields. This would help other developers understand when and why fields might be None. Add documentation like this:

+/// Represents the state of a contract in the trie
+/// * `class_hash` - The hash of the contract's class
+/// * `storage_root` - The root hash of the contract's storage trie
+/// * `nonce` - The current nonce of the contract
 #[derive(Debug, Default)]
 struct ContractLeaf {
     pub class_hash: Option<Felt>,
     pub storage_root: Option<Felt>,
     pub nonce: Option<Felt>,
 }

crates/katana/core/src/backend/mod.rs (2)
78-80: Ohayo, sensei! Consider implementing the suggested TODO improvement. The TODO comment suggests changing `insert_block_with_states_and_receipts` to accept `ReceiptWithTxHash` directly. This would avoid the unnecessary conversion and improve code efficiency. Would you like me to help implement this improvement or create a GitHub issue to track this task?
: Ohayo, sensei! Add documentation for the UncommittedBlock structThe
UncommittedBlock
struct introduces important functionality but lacks documentation. Consider adding doc comments to explain:
- The purpose of the struct
- The relationship between its fields
- The lifecycle of an uncommitted block
Add documentation like this:
/// Represents a block that has not yet been committed to the chain. /// This struct holds all the necessary components to compute block commitments /// and create a sealed block. pub struct UncommittedBlock<'a, P> where P: ClassTrieWriter + ContractTrieWriter, { /// Partial header containing block metadata header: PartialHeader, /// List of transactions to be included in the block transactions: Vec<TxWithHash>, // ... (document other fields) }crates/katana/primitives/src/block.rs (1)
319-338
: LGTM! Comprehensive test coverage for concat_counts.The test cases effectively cover:
- Regular values with both Calldata and Blob modes
- Edge case with maximum u32 value (0xFFFFFFFF)
However, sensei, consider adding these test cases:
#[test] fn test_edge_cases() { // Test maximum values for all fields let max = Header::concat_counts( u32::MAX, u32::MAX, u32::MAX, L1DataAvailabilityMode::Blob ); // Test zero values with both modes let zeros_blob = Header::concat_counts( 0, 0, 0, L1DataAvailabilityMode::Blob ); let zeros_calldata = Header::concat_counts( 0, 0, 0, L1DataAvailabilityMode::Calldata ); assert_ne!(zeros_blob, zeros_calldata); }crates/katana/trie/src/class.rs (1)
36-57: Consider adding documentation comments to public methods. Ohayo, sensei! Adding Rustdoc comments to the `new` and `apply` methods in the `ClassTrie` struct will enhance code readability and help other developers understand their functionality and usage.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (3)
- Cargo.lock is excluded by !**/*.lock
- spawn-and-move-db.tar.gz is excluded by !**/*.gz
- types-test-db.tar.gz is excluded by !**/*.gz
📒 Files selected for processing (27)
- Cargo.toml (4 hunks)
- crates/katana/core/Cargo.toml (2 hunks)
- crates/katana/core/src/backend/mod.rs (3 hunks)
- crates/katana/core/src/backend/storage.rs (3 hunks)
- crates/katana/primitives/src/block.rs (5 hunks)
- crates/katana/primitives/src/chain_spec.rs (2 hunks)
- crates/katana/primitives/src/receipt.rs (2 hunks)
- crates/katana/primitives/src/state.rs (3 hunks)
- crates/katana/storage/db/Cargo.toml (2 hunks)
- crates/katana/storage/db/src/codecs/postcard.rs (2 hunks)
- crates/katana/storage/db/src/lib.rs (1 hunks)
- crates/katana/storage/db/src/models/mod.rs (1 hunks)
- crates/katana/storage/db/src/models/trie.rs (1 hunks)
- crates/katana/storage/db/src/tables.rs (7 hunks)
- crates/katana/storage/db/src/trie.rs (1 hunks)
- crates/katana/storage/provider/Cargo.toml (1 hunks)
- crates/katana/storage/provider/src/lib.rs (3 hunks)
- crates/katana/storage/provider/src/providers/db/mod.rs (1 hunks)
- crates/katana/storage/provider/src/providers/db/trie.rs (1 hunks)
- crates/katana/storage/provider/src/providers/fork/mod.rs (4 hunks)
- crates/katana/storage/provider/src/traits/mod.rs (1 hunks)
- crates/katana/storage/provider/src/traits/trie.rs (1 hunks)
- crates/katana/trie/Cargo.toml (1 hunks)
- crates/katana/trie/src/class.rs (1 hunks)
- crates/katana/trie/src/contract.rs (1 hunks)
- crates/katana/trie/src/lib.rs (1 hunks)
- crates/saya/provider/src/rpc/mod.rs (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (19)
- Cargo.toml
- crates/katana/core/Cargo.toml
- crates/katana/core/src/backend/storage.rs
- crates/katana/primitives/src/chain_spec.rs
- crates/katana/primitives/src/receipt.rs
- crates/katana/storage/db/Cargo.toml
- crates/katana/storage/db/src/codecs/postcard.rs
- crates/katana/storage/db/src/lib.rs
- crates/katana/storage/db/src/models/mod.rs
- crates/katana/storage/db/src/tables.rs
- crates/katana/storage/db/src/trie.rs
- crates/katana/storage/provider/Cargo.toml
- crates/katana/storage/provider/src/lib.rs
- crates/katana/storage/provider/src/providers/db/mod.rs
- crates/katana/storage/provider/src/providers/fork/mod.rs
- crates/katana/storage/provider/src/traits/mod.rs
- crates/katana/trie/Cargo.toml
- crates/katana/trie/src/contract.rs
- crates/saya/provider/src/rpc/mod.rs
🔇 Additional comments (14)
crates/katana/storage/provider/src/traits/trie.rs (2)

10-26: Previous refactoring suggestion is still applicable, sensei. The shared pattern between both traits suggests potential for generalization.

10-11: LGTM: Proper thread-safety constraints. The `Send + Sync` bounds are appropriate for traits that will be used across thread boundaries, especially in the context of block commitment computation.

Also applies to: 19-20

crates/katana/storage/db/src/models/trie.rs (2)

23-32: Vector pre-allocation suggestion was previously made.

34-52: Previous error handling concerns remain valid.

crates/katana/trie/src/lib.rs (2)

1-8: Ohayo! Clean and well-organized imports, sensei! The imports are properly organized and include all necessary dependencies for Merkle root computation using Bonsai Trie.

14-18: Excellent trait abstraction, sensei! The `BonsaiTrieDb` trait provides a clean abstraction over the Bonsai database requirements with clear documentation.

crates/katana/primitives/src/state.rs (1)
62-107: Verify hash computation correctness. The state diff hash computation appears to handle all state components correctly, but let's verify the order and format matches the expected protocol specification.
✅ Verification successful
Let's also check if there are any protocol specifications or documentation:
Ohayo sensei! The state diff hash computation has been verified
The implementation follows the correct protocol specification with the magic value "STARKNET_STATE_DIFF0" and properly orders all state components:
- Updated contracts (deployed + replaced classes)
- Declared classes
- Deprecated declared classes
- Storage updates
- Nonce updates
The backend implementation in
crates/katana/core/src/backend/mod.rs
correctly uses this function to compute the state diff commitment for block sealing.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for any test files or documentation that specify the expected state diff hash format
rg -l "STARKNET_STATE_DIFF0" --type rust

Length of output: 78
Script:
#!/bin/bash
# Let's search for test files related to state diff hash computation
rg -l "compute_state_diff_hash" --type rust

# Also search for any test cases or constants related to state diff
rg "state_diff.*test" --type rust -A 5

Length of output: 160
Script:
#!/bin/bash
# Look for documentation files
fd -e md -e txt | xargs rg -l "state.*(diff|hash)"

# Check for any protocol-related constants or specifications
rg "protocol.*state.*diff" --type rust

Length of output: 93
Script:
#!/bin/bash
# Let's check the backend implementation to see how it's used
rg -A 10 -B 10 "compute_state_diff_hash" crates/katana/core/src/backend/mod.rs

# Let's also look for any state diff related structs and implementations
ast-grep --pattern 'struct StateUpdates { $$$ }'

# And check for any related test modules
rg "#\[test\].*state" --type rust -A 10

Length of output: 1976
Script:
# Let's find the StateUpdates struct definition
ast-grep --pattern 'struct StateUpdates'

# And check for any protocol version or specification constants
rg "STARKNET_STATE_DIFF" -A 5 -B 5

# Look for any state-related test files specifically
fd test.*state.*rs

Length of output: 988
crates/katana/storage/provider/src/providers/db/trie.rs (1)
29-40: LGTM! Clean implementation of ClassTrieWriter. Ohayo, sensei! The implementation properly handles errors and efficiently applies updates to the class trie.
crates/katana/primitives/src/block.rs (4)
13-16
: LGTM! Type definitions are well-placed.Ohayo! The movement of
BlockNumber
andBlockHash
type definitions to the top of the file improves code organization and readability.
102-159
: LGTM! Robust block hash computation implementation.The new
compute_hash
implementation:
- Follows StarkWare's sequencer specification
- Uses Poseidon hash with correct field ordering
- Properly handles all header fields including the new state_diff_length
161-199
: Verify bit manipulation boundaries in concat_counts.The
concat_counts
method uses bit manipulation to pack multiple values into a single felt. While the implementation looks correct, we should verify the maximum values that can be safely handled.Let's analyze the maximum values:
✅ Verification successful
Ohayo sensei! After analyzing the code and test cases, I can now provide a conclusive response about the bit manipulation boundaries.
The bit manipulation in concat_counts is safe and verified ✨
The implementation is safe because:
- Each field (transaction_count, event_count, state_diff_length) is allocated 64 bits, while being stored as u32 (32 bits), providing sufficient headroom
- The test cases verify both normal cases (100, 200, 300) and boundary cases (0xFFFFFFFF for transaction_count)
- Real usage in the codebase shows safe patterns, using
len() as u32
which can't overflow the allocated bits- The L1 DA mode uses only 1 bit (0 or 1) with proper padding of 63 bits
The field layout is well-documented in the code comments and the implementation matches the specification exactly.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Search for potential overflow scenarios in tests rg -B 2 -A 2 "concat_counts.*0x[fF]+" --type rust # Search for actual usage to verify real-world values rg "state_diff_length.*=.*\d+" --type rustLength of output: 622
Script:
#!/bin/bash # Let's check the test cases for concat_counts to understand the boundary testing ast-grep --pattern 'fn test_concat_counts$$$' # Also check if there are any other usages of transaction_count and event_count rg "transaction_count.*=.*\d+" --type rust rg "event_count.*=.*\d+" --type rust # Look for any comments or documentation about size limits rg -B 2 -A 2 "(transaction|event).*count.*bits" --type rustLength of output: 295
Script:
#!/bin/bash # Let's check all test cases in the block.rs file rg -B 5 -A 5 "#\[test\]" crates/katana/primitives/src/block.rs # Check for any constants or type definitions related to these counts rg "type .*(Count|Length)" --type rust rg "const .*(COUNT|LENGTH)" --type rust # Look for any documentation about the block structure rg -l "struct (Block|Header)" --type rust | xargs rg -B 5 -A 5 "struct (Block|Header)"Length of output: 16532
93-93
: Verify state diff length initialization.The new
state_diff_length
field tracks the size of state changes. While it's properly initialized to 0 in the Default implementation, we should ensure it's correctly populated during block creation.Also applies to: 208-208
✅ Verification successful
Let me gather more context about the state diff length initialization by checking where and how blocks are created.
The pattern will help us verify if state_diff_length is properly set during block creation, while also checking test cases for validation.
Let me search for more specific information about state_diff_length initialization and usage.
Ohayo sensei! Based on the code analysis, I can now provide a definitive response about the state_diff_length initialization concern.
State diff length is properly initialized and populated ✨
The state_diff_length field is correctly handled throughout the codebase:
- It's initialized to 0 in the Default implementation (crates/katana/primitives/src/block.rs)
- It's properly populated during block creation in the backend (crates/katana/core/src/backend/mod.rs), where it's set to self.state_updates.len() as u32
- It's used appropriately in block hash computation and state commitment calculations
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for state_diff_length assignments
rg -A 3 "state_diff_length.*=" --type rust

Length of output: 395
Script:
#!/bin/bash
# Search for Block struct construction/creation
ast-grep --pattern 'Block { $$$state_diff_length$$$}'

# Also check for block creation methods
rg -A 5 "fn.*Block" --type rust

# And look for state_diff_length usage in tests
rg -A 3 "state_diff_length" --type rust --type test

Length of output: 65063
Script:
#!/bin/bash
# Search for state_diff_length in block-related code
rg -A 5 "state_diff_length" crates/katana/primitives/src/block.rs crates/katana/core/src/backend/mod.rs

# Look for state_diff_length in tests
rg -A 5 "state_diff_length.*=" crates/katana/*/tests/

# Check Default implementation
ast-grep --pattern 'impl Default for Block { $$$state_diff_length$$$}'

Length of output: 4993
crates/katana/trie/src/class.rs (2)
51-51
: ⚠️ Potential issue — Ensure safe slicing when converting bytes to BitVec

Ohayo, sensei! In bytes.as_bits()[5..], slicing from index 5 assumes that bytes has sufficient length. To prevent potential out-of-bounds errors, please ensure that bytes is at least 5 bytes long, or handle cases where it might be shorter.

Apply this diff to add a check:

+ if bytes.len() < 5 {
+     return Err(ErrorType::InvalidLength);
+ }
  let bv: BitVec<u8, Msb0> = bytes.as_bits()[5..].to_owned();

Alternatively, adjust the slicing logic to accommodate varying lengths.

Likely invalid or redundant comment.
52-56
: ⚠️ Potential issue — Avoid panics by handling unwrap() calls in the apply method

Ohayo, sensei! The apply method contains multiple unwrap() calls that can cause panics if an operation fails. To improve robustness, consider handling these potential errors gracefully.

Apply this diff to handle the errors:

- self.bonsai_storage.insert(Self::IDENTIFIER, &bv, &value).unwrap();
+ self.bonsai_storage.insert(Self::IDENTIFIER, &bv, &value)?;

- self.bonsai_storage.commit(BasicId::new(block_number)).unwrap();
+ self.bonsai_storage.commit(BasicId::new(block_number))?;

- self.bonsai_storage.root_hash(Self::IDENTIFIER).unwrap()
+ self.bonsai_storage.root_hash(Self::IDENTIFIER)?

Remember to adjust the method signature to return Result<Felt, ErrorType> and handle propagated errors appropriately.

Likely invalid or redundant comment.
        _ => panic!("Invalid trie database key type"),
    };

    let key = bytes[1..].to_vec();
Add length validation before slice access, sensei!
The current implementation assumes bytes[1..] exists without validation. This could lead to out-of-bounds access if the input is exactly 1 byte long.
Add length validation:
- let key = bytes[1..].to_vec();
+ if bytes.len() < 2 {
+ return Err(CodecError::InvalidLength);
+ }
+ let key = bytes[1..].to_vec();
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:

let key = bytes[1..].to_vec();

Suggested:

if bytes.len() < 2 {
    return Err(CodecError::InvalidLength);
}
let key = bytes[1..].to_vec();
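To see the suggested length check end to end, here is a hypothetical, self-contained sketch of a one-byte-type-prefix key codec in the spirit of the code under review. The error variants and free functions are illustrative assumptions, not Katana's actual API.

```rust
// Hypothetical stand-ins for the types discussed in the review; names mirror
// the PR but the exact shapes are assumptions for this sketch.
#[derive(Debug, Clone, Copy, PartialEq)]
enum TrieDatabaseKeyType {
    Trie = 0,
    Flat = 1,
    TrieLog = 2,
}

#[derive(Debug, PartialEq)]
struct TrieDatabaseKey {
    r#type: TrieDatabaseKeyType,
    key: Vec<u8>,
}

#[derive(Debug, PartialEq)]
enum CodecError {
    InvalidLength,
    InvalidFormat,
}

// One type byte followed by the raw key bytes.
fn encode(k: &TrieDatabaseKey) -> Vec<u8> {
    let mut out = Vec::with_capacity(1 + k.key.len());
    out.push(k.r#type as u8);
    out.extend_from_slice(&k.key);
    out
}

fn decode(bytes: &[u8]) -> Result<TrieDatabaseKey, CodecError> {
    // Reject buffers too short to hold a type byte plus at least one key
    // byte -- exactly the bounds check the review asks for.
    if bytes.len() < 2 {
        return Err(CodecError::InvalidLength);
    }
    let r#type = match bytes[0] {
        0 => TrieDatabaseKeyType::Trie,
        1 => TrieDatabaseKeyType::Flat,
        2 => TrieDatabaseKeyType::TrieLog,
        _ => return Err(CodecError::InvalidFormat),
    };
    Ok(TrieDatabaseKey { r#type, key: bytes[1..].to_vec() })
}

fn main() {
    let key = TrieDatabaseKey { r#type: TrieDatabaseKeyType::Flat, key: vec![1, 2, 3] };
    let encoded = encode(&key);
    assert_eq!(decode(&encoded), Ok(key));

    // The error paths return Err instead of panicking or indexing out of bounds.
    assert_eq!(decode(&[0]), Err(CodecError::InvalidLength));
    assert_eq!(decode(&[9, 1]), Err(CodecError::InvalidFormat));
    println!("codec roundtrip ok");
}
```

The `len() < 2` guard encodes the invariant that a valid key needs the type byte plus at least one key byte, so the later `bytes[1..]` slice can never go out of bounds.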
pub fn compute_merkle_root<H>(values: &[Felt]) -> Result<Felt>
where
    H: StarkHash + Send + Sync,
{
    use bonsai_trie::id::BasicId;
    use bonsai_trie::{databases, BonsaiStorage, BonsaiStorageConfig};

    // the value is irrelevant
    const IDENTIFIER: &[u8] = b"1";

    let config = BonsaiStorageConfig::default();
    let bonsai_db = databases::HashMapDb::<BasicId>::default();
    let mut bs = BonsaiStorage::<_, _, H>::new(bonsai_db, config).unwrap();

    for (id, value) in values.iter().enumerate() {
        let key = BitVec::from_iter(id.to_be_bytes());
        bs.insert(IDENTIFIER, key.as_bitslice(), value).unwrap();
    }

    let id = bonsai_trie::id::BasicIdBuilder::new().new_id();
    bs.commit(id).unwrap();

    Ok(bs.root_hash(IDENTIFIER).unwrap())
}
🛠️ Refactor suggestion
Consider these improvements to the Merkle root computation, sensei!
While the implementation is functional, there are several areas for improvement:
- The constant identifier could be more descriptive
- Missing validation for empty input
- Using unwrap() could lead to panic
- Memory efficiency could be improved by preallocating the BitVec
Here's a suggested improvement:
-const IDENTIFIER: &[u8] = b"1";
+const MERKLE_TREE_IDENTIFIER: &[u8] = b"merkle_tree";
pub fn compute_merkle_root<H>(values: &[Felt]) -> Result<Felt>
where
H: StarkHash + Send + Sync,
{
+ if values.is_empty() {
+ return Err(anyhow::anyhow!("Cannot compute Merkle root for empty values"));
+ }
+
use bonsai_trie::id::BasicId;
use bonsai_trie::{databases, BonsaiStorage, BonsaiStorageConfig};
let config = BonsaiStorageConfig::default();
let bonsai_db = databases::HashMapDb::<BasicId>::default();
- let mut bs = BonsaiStorage::<_, _, H>::new(bonsai_db, config).unwrap();
+ let mut bs = BonsaiStorage::<_, _, H>::new(bonsai_db, config)
+ .map_err(|e| anyhow::anyhow!("Failed to create Bonsai storage: {}", e))?;
for (id, value) in values.iter().enumerate() {
- let key = BitVec::from_iter(id.to_be_bytes());
- bs.insert(IDENTIFIER, key.as_bitslice(), value).unwrap();
+ let mut key = BitVec::with_capacity(64);
+ key.extend_from_bitslice(&id.to_be_bytes());
+ bs.insert(MERKLE_TREE_IDENTIFIER, key.as_bitslice(), value)
+ .map_err(|e| anyhow::anyhow!("Failed to insert value: {}", e))?;
}
let id = bonsai_trie::id::BasicIdBuilder::new().new_id();
- bs.commit(id).unwrap();
+ bs.commit(id)
+ .map_err(|e| anyhow::anyhow!("Failed to commit storage: {}", e))?;
- Ok(bs.root_hash(IDENTIFIER).unwrap())
+ bs.root_hash(MERKLE_TREE_IDENTIFIER)
+ .map_err(|e| anyhow::anyhow!("Failed to get root hash: {}", e))
📝 Committable suggestion
Original:

pub fn compute_merkle_root<H>(values: &[Felt]) -> Result<Felt>
where
    H: StarkHash + Send + Sync,
{
    use bonsai_trie::id::BasicId;
    use bonsai_trie::{databases, BonsaiStorage, BonsaiStorageConfig};

    // the value is irrelevant
    const IDENTIFIER: &[u8] = b"1";

    let config = BonsaiStorageConfig::default();
    let bonsai_db = databases::HashMapDb::<BasicId>::default();
    let mut bs = BonsaiStorage::<_, _, H>::new(bonsai_db, config).unwrap();

    for (id, value) in values.iter().enumerate() {
        let key = BitVec::from_iter(id.to_be_bytes());
        bs.insert(IDENTIFIER, key.as_bitslice(), value).unwrap();
    }

    let id = bonsai_trie::id::BasicIdBuilder::new().new_id();
    bs.commit(id).unwrap();

    Ok(bs.root_hash(IDENTIFIER).unwrap())
}

Suggested:

pub fn compute_merkle_root<H>(values: &[Felt]) -> Result<Felt>
where
    H: StarkHash + Send + Sync,
{
    if values.is_empty() {
        return Err(anyhow::anyhow!("Cannot compute Merkle root for empty values"));
    }

    use bonsai_trie::id::BasicId;
    use bonsai_trie::{databases, BonsaiStorage, BonsaiStorageConfig};

    const MERKLE_TREE_IDENTIFIER: &[u8] = b"merkle_tree";

    let config = BonsaiStorageConfig::default();
    let bonsai_db = databases::HashMapDb::<BasicId>::default();
    let mut bs = BonsaiStorage::<_, _, H>::new(bonsai_db, config)
        .map_err(|e| anyhow::anyhow!("Failed to create Bonsai storage: {}", e))?;

    for (id, value) in values.iter().enumerate() {
        let mut key = BitVec::with_capacity(64);
        key.extend_from_bitslice(&id.to_be_bytes());
        bs.insert(MERKLE_TREE_IDENTIFIER, key.as_bitslice(), value)
            .map_err(|e| anyhow::anyhow!("Failed to insert value: {}", e))?;
    }

    let id = bonsai_trie::id::BasicIdBuilder::new().new_id();
    bs.commit(id)
        .map_err(|e| anyhow::anyhow!("Failed to commit storage: {}", e))?;

    bs.root_hash(MERKLE_TREE_IDENTIFIER)
        .map_err(|e| anyhow::anyhow!("Failed to get root hash: {}", e))
}
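The function above hands the actual root computation to bonsai-trie. As a dependency-free illustration of the folding idea only, here is a minimal binary Merkle-style fold using the standard library's hasher — a stand-in for illustration, not the Stark-friendly commitment Katana actually computes.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in pair hash; the real code uses a StarkHash implementation.
fn hash_pair(a: u64, b: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (a, b).hash(&mut h);
    h.finish()
}

// Fold the leaves level by level into a single root. A lone leaf is its own
// root in this sketch, and an odd leaf at the end of a level is paired with
// itself. Empty input is an explicit None, mirroring the review's point that
// the empty case deserves handling rather than a buried unwrap.
fn merkle_root(leaves: &[u64]) -> Option<u64> {
    if leaves.is_empty() {
        return None;
    }
    let mut level: Vec<u64> = leaves.to_vec();
    while level.len() > 1 {
        level = level
            .chunks(2)
            .map(|c| hash_pair(c[0], *c.last().unwrap()))
            .collect();
    }
    Some(level[0])
}

fn main() {
    assert_eq!(merkle_root(&[]), None);
    assert_eq!(merkle_root(&[7]), Some(7));
    // Deterministic: the same leaves always fold to the same root.
    assert_eq!(merkle_root(&[1, 2, 3]), merkle_root(&[1, 2, 3]));
    println!("root = {:#x}", merkle_root(&[1, 2, 3]).unwrap());
}
```

The `Option` return mirrors the empty-input validation the review requests; the bonsai-trie version instead keys each leaf by its big-endian index and asks the committed trie for its root hash.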
crates/katana/trie/src/lib.rs
Outdated
mod class;
mod contract;

pub use class::ClassTrie;
|
💡 Codebase verification
Remove unused contract module
The contract module is declared but not used or re-exported. The module contains contract-related types and traits that are not being utilized in the current codebase.
🔗 Analysis chain
Verify the necessity of the contract module.
The contract module is declared but doesn't seem to be used or re-exported.
Let's verify its usage:
The previous search might have been too specific. Let's broaden our search to find any usage of the contract module and examine its contents to make a more informed decision.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check if contract module is used elsewhere
rg -l "use.*trie::contract"
Length of output: 29
Script:
#!/bin/bash
# Let's check if the contract module file exists and its content
fd contract.rs --exec cat {}
# Also check for any references to contract in the codebase
rg -g '!target/' -l "contract"
# Check if there are any types or functions defined in contract module that might be used
ast-grep --pattern 'struct $name {$$$}' --lang rust contract.rs
ast-grep --pattern 'impl $name {$$$}' --lang rust contract.rs
Length of output: 64586
        .collect::<Vec<_>>()
};

const IDENTIFIER: &[u8] = b"0xcontract";
🛠️ Refactor suggestion
Move the identifier constant to module level
Ohayo, sensei! The identifier constant should be defined at the module level for better visibility and reusability.
+/// Identifier for the contract trie
+const CONTRACT_TRIE_IDENTIFIER: &[u8] = b"0xcontract";
-const IDENTIFIER: &[u8] = b"0xcontract";
📝 Committable suggestion
Original:

const IDENTIFIER: &[u8] = b"0xcontract";

Suggested:

/// Identifier for the contract trie
const CONTRACT_TRIE_IDENTIFIER: &[u8] = b"0xcontract";
let config = BonsaiStorageConfig {
    max_saved_trie_logs: Some(0),
    max_saved_snapshots: Some(0),
    snapshot_interval: u64::MAX,
};
🛠️ Refactor suggestion
Consider making trie configuration values configurable
Ohayo, sensei! The hardcoded configuration values could be made configurable through environment variables or a configuration file for better flexibility.
Consider extracting these values to a configuration:
+const DEFAULT_MAX_SAVED_TRIE_LOGS: Option<u64> = Some(0);
+const DEFAULT_MAX_SAVED_SNAPSHOTS: Option<u64> = Some(0);
+const DEFAULT_SNAPSHOT_INTERVAL: u64 = u64::MAX;
let config = BonsaiStorageConfig {
- max_saved_trie_logs: Some(0),
- max_saved_snapshots: Some(0),
- snapshot_interval: u64::MAX,
+ max_saved_trie_logs: self.config.max_saved_trie_logs.unwrap_or(DEFAULT_MAX_SAVED_TRIE_LOGS),
+ max_saved_snapshots: self.config.max_saved_snapshots.unwrap_or(DEFAULT_MAX_SAVED_SNAPSHOTS),
+ snapshot_interval: self.config.snapshot_interval.unwrap_or(DEFAULT_SNAPSHOT_INTERVAL),
};
Committable suggestion was skipped due to low confidence.
// computes the contract state leaf hash
fn contract_state_leaf_hash(
    provider: impl StateProvider,
    address: &ContractAddress,
    contract_leaf: &ContractLeaf,
) -> Felt {
    let nonce =
        contract_leaf.nonce.unwrap_or(provider.nonce(*address).unwrap().unwrap_or_default());

    let class_hash = contract_leaf
        .class_hash
        .unwrap_or(provider.class_hash_of_contract(*address).unwrap().unwrap_or_default());

    let storage_root = contract_leaf.storage_root.expect("root need to set");

    // hPed(
    //   hPed(
    //     hPed(
    //       class_hash,
    //       storage_root
    //     ),
    //     nonce
    //   ),
    //   0
    // )
    Pedersen::hash(
        &Pedersen::hash(&Pedersen::hash(&class_hash, &storage_root), &nonce),
        &Felt::ZERO,
    )
}
🛠️ Refactor suggestion
Improve error handling and use constants in hash calculation
Ohayo, sensei! The function has several areas for improvement:
- Error handling for state provider calls
- Define constant for the zero value
- Consider breaking down the nested hash calculation for better readability
Consider this improvement:
+/// Zero value used in the final hash calculation
+const FINAL_HASH_PADDING: Felt = Felt::ZERO;
fn contract_state_leaf_hash(
provider: impl StateProvider,
address: &ContractAddress,
contract_leaf: &ContractLeaf,
-) -> Felt {
+) -> crate::ProviderResult<Felt> {
let nonce =
- contract_leaf.nonce.unwrap_or(provider.nonce(*address).unwrap().unwrap_or_default());
+ contract_leaf.nonce.unwrap_or(provider.nonce(*address)?.unwrap_or_default());
let class_hash = contract_leaf
.class_hash
- .unwrap_or(provider.class_hash_of_contract(*address).unwrap().unwrap_or_default());
+ .unwrap_or(provider.class_hash_of_contract(*address)?.unwrap_or_default());
- let storage_root = contract_leaf.storage_root.expect("root need to set");
+ let storage_root = contract_leaf.storage_root
+ .ok_or_else(|| crate::ProviderError::Custom("storage root not set".into()))?;
+ // Calculate hashes step by step for better readability
+ let hash1 = Pedersen::hash(&class_hash, &storage_root);
+ let hash2 = Pedersen::hash(&hash1, &nonce);
+ let final_hash = Pedersen::hash(&hash2, &FINAL_HASH_PADDING);
- Pedersen::hash(
- &Pedersen::hash(&Pedersen::hash(&class_hash, &storage_root), &nonce),
- &Felt::ZERO,
- )
+ Ok(final_hash)
}
📝 Committable suggestion
Original:

// computes the contract state leaf hash
fn contract_state_leaf_hash(
    provider: impl StateProvider,
    address: &ContractAddress,
    contract_leaf: &ContractLeaf,
) -> Felt {
    let nonce =
        contract_leaf.nonce.unwrap_or(provider.nonce(*address).unwrap().unwrap_or_default());

    let class_hash = contract_leaf
        .class_hash
        .unwrap_or(provider.class_hash_of_contract(*address).unwrap().unwrap_or_default());

    let storage_root = contract_leaf.storage_root.expect("root need to set");

    // hPed(hPed(hPed(class_hash, storage_root), nonce), 0)
    Pedersen::hash(
        &Pedersen::hash(&Pedersen::hash(&class_hash, &storage_root), &nonce),
        &Felt::ZERO,
    )
}

Suggested:

// computes the contract state leaf hash

/// Zero value used in the final hash calculation
const FINAL_HASH_PADDING: Felt = Felt::ZERO;

fn contract_state_leaf_hash(
    provider: impl StateProvider,
    address: &ContractAddress,
    contract_leaf: &ContractLeaf,
) -> crate::ProviderResult<Felt> {
    let nonce =
        contract_leaf.nonce.unwrap_or(provider.nonce(*address)?.unwrap_or_default());

    let class_hash = contract_leaf
        .class_hash
        .unwrap_or(provider.class_hash_of_contract(*address)?.unwrap_or_default());

    let storage_root = contract_leaf.storage_root
        .ok_or_else(|| crate::ProviderError::Custom("storage root not set".into()))?;

    // Calculate hashes step by step for better readability
    let hash1 = Pedersen::hash(&class_hash, &storage_root);
    let hash2 = Pedersen::hash(&hash1, &nonce);
    let final_hash = Pedersen::hash(&hash2, &FINAL_HASH_PADDING);

    Ok(final_hash)
}
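The leaf formula both versions implement is h(h(h(class_hash, storage_root), nonce), 0). Below is a stand-in sketch of that nesting, with u64 values and the standard library hasher replacing field elements and Pedersen — an assumption purely to show the step-by-step structure the review recommends.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in two-to-one hash; Katana uses Pedersen over field elements.
fn h(a: u64, b: u64) -> u64 {
    let mut s = DefaultHasher::new();
    (a, b).hash(&mut s);
    s.finish()
}

// h(h(h(class_hash, storage_root), nonce), 0), computed step by step as the
// review suggests instead of as one nested expression.
fn contract_state_leaf_hash(class_hash: u64, storage_root: u64, nonce: u64) -> u64 {
    let h1 = h(class_hash, storage_root);
    let h2 = h(h1, nonce);
    h(h2, 0) // final padding with zero
}

fn main() {
    let leaf = contract_state_leaf_hash(0xc1a55, 0x0057, 7);
    // The computation is deterministic, and changing any input changes the leaf.
    assert_eq!(leaf, contract_state_leaf_hash(0xc1a55, 0x0057, 7));
    assert_ne!(leaf, contract_state_leaf_hash(0xc1a55, 0x0057, 8));
    println!("leaf = {leaf:#x}");
}
```

Breaking the nesting into named intermediate hashes also gives each step a place to attach error handling, which is what the suggested `ProviderResult` version does.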
let block = self.commit_block(
    block_env.clone(),
    execution_output.states.state_updates.clone(),
    txs,
    &receipts,
)?;
🛠️ Refactor suggestion
Ohayo, sensei! Avoid unnecessary cloning of state updates
The state_updates is cloned twice: once in the method call and again in compute_state_diff_commitment. Consider passing it by reference to avoid unnecessary memory allocation.
Apply this diff:
let block = self.commit_block(
block_env.clone(),
- execution_output.states.state_updates.clone(),
+ &execution_output.states.state_updates,
txs,
&receipts,
)?;
Then update the method signature:
fn commit_block(
&self,
block_env: BlockEnv,
- state_updates: StateUpdates,
+ state_updates: &StateUpdates,
transactions: Vec<TxWithHash>,
receipts: &[ReceiptWithTxHash],
) -> Result<SealedBlock, BlockProductionError>
Committable suggestion was skipped due to low confidence.
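The borrow-instead-of-clone change suggested above, in miniature, with a simplified stand-in for the PR's StateUpdates type:

```rust
// Simplified stand-in for the PR's StateUpdates; the real type holds maps of
// storage, nonce, and class updates.
#[derive(Debug)]
struct StateUpdates {
    entries: Vec<(u64, u64)>,
}

// Taking &StateUpdates means the caller keeps ownership and nothing is
// copied, unlike the double .clone() flagged in the review.
fn commit_block(updates: &StateUpdates) -> usize {
    updates.entries.len()
}

fn main() {
    let updates = StateUpdates { entries: vec![(1, 2), (3, 4)] };
    let n = commit_block(&updates);
    // `updates` is still usable after the call; no clone was needed.
    assert_eq!(n, updates.entries.len());
    println!("committed {n} entries");
}
```

The same signature change propagates: every helper that only reads the updates can take `&StateUpdates`, and a clone is needed only where ownership genuinely must transfer.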
fn compute_event_commitment(&self) -> Felt {
    // h(emitter_address, tx_hash, h(keys), h(data))
    fn event_hash(tx: TxHash, event: &Event) -> Felt {
        let keys_hash = hash::Poseidon::hash_array(&event.keys);
        let data_hash = hash::Poseidon::hash_array(&event.data);
        hash::Poseidon::hash_array(&[tx, event.from_address.into(), keys_hash, data_hash])
    }

    // the iterator will yield all events from all the receipts, each one paired with the
    // transaction hash that emitted it: (tx hash, event).
    let events = self.receipts.iter().flat_map(|r| r.events().iter().map(|e| (r.tx_hash, e)));

    let mut hashes = Vec::new();
    for (tx, event) in events {
        let event_hash = event_hash(tx, event);
        hashes.push(event_hash);
    }

    // compute events commitment
    compute_merkle_root::<hash::Poseidon>(&hashes).unwrap()
}
🛠️ Refactor suggestion
Ohayo, sensei! Optimize event commitment computation
The event commitment computation could be optimized in several ways:
- Pre-allocate the hashes vector with the exact size needed
- Use iterator chaining instead of explicit loops
Apply this diff:
fn compute_event_commitment(&self) -> Felt {
// h(emitter_address, tx_hash, h(keys), h(data))
fn event_hash(tx: TxHash, event: &Event) -> Felt {
let keys_hash = hash::Poseidon::hash_array(&event.keys);
let data_hash = hash::Poseidon::hash_array(&event.data);
hash::Poseidon::hash_array(&[tx, event.from_address.into(), keys_hash, data_hash])
}
let events = self.receipts.iter().flat_map(|r| r.events().iter().map(|e| (r.tx_hash, e)));
-
- let mut hashes = Vec::new();
- for (tx, event) in events {
- let event_hash = event_hash(tx, event);
- hashes.push(event_hash);
- }
+ let total_events: usize = self.receipts.iter().map(|r| r.events().len()).sum();
+ let hashes: Vec<_> = events
+ .map(|(tx, event)| event_hash(tx, event))
+ .collect();
compute_merkle_root::<hash::Poseidon>(&hashes).unwrap()
}
Committable suggestion was skipped due to low confidence.
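Structurally, the event commitment hashes each event as h(tx_hash, emitter, h(keys), h(data)) and then folds the per-event hashes into one value. A sketch of that shape, with a standard-library hasher standing in for Poseidon and a flat hash standing in for the Merkle-root step (both assumptions for illustration):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for Poseidon::hash_array over field elements.
fn hash_array(vals: &[u64]) -> u64 {
    let mut s = DefaultHasher::new();
    vals.hash(&mut s);
    s.finish()
}

struct Event {
    from_address: u64,
    keys: Vec<u64>,
    data: Vec<u64>,
}

// h(tx_hash, emitter_address, h(keys), h(data)) -- the per-event hash shape
// from the PR.
fn event_hash(tx: u64, e: &Event) -> u64 {
    hash_array(&[tx, e.from_address, hash_array(&e.keys), hash_array(&e.data)])
}

fn main() {
    // (tx hash, event) pairs, as the flat_map over receipts yields them.
    let events = vec![
        (1u64, Event { from_address: 10, keys: vec![1], data: vec![2, 3] }),
        (2u64, Event { from_address: 11, keys: vec![4], data: vec![] }),
    ];

    // Pre-size the vector, as the review recommends.
    let mut hashes = Vec::with_capacity(events.len());
    for (tx, e) in &events {
        hashes.push(event_hash(*tx, e));
    }

    // Flat hash standing in for the Merkle-root step over the event hashes.
    let commitment = hash_array(&hashes);
    assert_eq!(hashes.len(), 2);
    println!("commitment = {commitment:#x}");
}
```

Because every event hash already binds the emitting transaction's hash, the final commitment changes if an event is reordered, dropped, or attributed to a different transaction.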
00749ad
to
ea8cd64
Compare
Actionable comments posted: 6
🧹 Outside diff range and nitpick comments (4)
crates/katana/storage/db/src/models/trie.rs (2)
54-81
: Enhance test coverage for error cases, sensei!

While the happy path tests are good, we should add tests for error cases:
- Empty buffer
- Invalid type byte
- Buffer with only one byte
#[test]
fn test_decode_error_cases() {
    // Test empty buffer
    assert!(matches!(
        TrieDatabaseKey::decode(vec![]),
        Err(CodecError::InvalidLength)
    ));

    // Test invalid type
    assert!(matches!(
        TrieDatabaseKey::decode(vec![3]),
        Err(CodecError::InvalidFormat)
    ));

    // Test single byte buffer
    assert!(matches!(
        TrieDatabaseKey::decode(vec![0]),
        Err(CodecError::InvalidLength)
    ));
}
7-13
: Add documentation for key types, sensei!

Consider adding documentation comments to explain:
- The purpose of each key type
- When each type should be used
- Any constraints or considerations
/// Represents different types of keys in the trie database
#[repr(u8)]
#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq)]
pub enum TrieDatabaseKeyType {
    /// Used for storing trie nodes
    Trie = 0,
    /// Used for storing flat key-value pairs
    Flat,
    /// Used for storing trie modification logs
    TrieLog,
}

crates/katana/storage/provider/src/providers/db/trie.rs (2)
22-27
: Add documentation for ContractLeaf struct and its fields

Ohayo, sensei! While the struct is well-designed, it would benefit from documentation explaining its purpose and the significance of each field.
Add documentation like this:
 #[derive(Debug, Default)]
+/// Represents the state of a contract in the trie
 struct ContractLeaf {
+    /// The class hash of the contract, if available
     pub class_hash: Option<Felt>,
+    /// The root hash of the contract's storage trie
     pub storage_root: Option<Felt>,
+    /// The current nonce of the contract
     pub nonce: Option<Felt>,
 }
136-146
: Enhance hash computation documentation

Ohayo, sensei! While the ASCII art documentation is helpful, it could be more descriptive and better formatted.
Consider improving the documentation:
-    // hPed(
-    //   hPed(
-    //     hPed(
-    //       class_hash,
-    //       storage_root
-    //     ),
-    //     nonce
-    //   ),
-    //   0
-    // )
+    // Contract state leaf hash computation:
+    //
+    // The final hash is computed as a series of nested Pedersen hashes:
+    //
+    //              ┌─────────┐
+    //              │  Final  │
+    //              │  Hash   │
+    //              └────┬────┘
+    //         ┌─────────┴──────────┐
+    //         │                    │
+    //    ┌────┴────┐           ┌───┴───┐
+    //    │   h3    │           │   0   │
+    //    └────┬────┘           └───────┘
+    //    ┌────────┴─────────┐
+    //    │                  │
+    //  ┌─┴───┐          ┌───┴───┐
+    //  │ h2  │          │ nonce │
+    //  └─┬───┘          └───────┘
+    //    │
+    //  ┌─┴───────────────────┐
+    //  │                     │
+    //  │    class_hash,      │
+    //  │    storage_root     │
+    //  └─────────────────────┘
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (3)
crates/katana/storage/db/src/models/trie.rs (1 hunks)
crates/katana/storage/db/src/trie.rs (1 hunks)
crates/katana/storage/provider/src/providers/db/trie.rs (1 hunks)
🔇 Additional comments (4)
crates/katana/storage/db/src/models/trie.rs (3)
1-21
: LGTM! Well-structured type definitions, sensei!
The type definitions are clean, efficient, and well-organized. The use of repr(u8)
for the enum is a good choice for space efficiency.
23-32
: Previous review comment about vector pre-allocation is still valid.
The suggestion to pre-allocate the vector capacity would improve performance by avoiding reallocations.
34-52
: Previous review comments about error handling and bounds checking are still valid.
The implementation needs proper error handling and bounds checking as mentioned in previous comments.
crates/katana/storage/provider/src/providers/db/trie.rs (1)
29-40
: LGTM! Clean and well-implemented ClassTrieWriter
Ohayo, sensei! The implementation is concise, handles errors appropriately, and follows good practices.
32476c6
to
070711c
Compare
Actionable comments posted: 4
🧹 Outside diff range and nitpick comments (12)
crates/katana/trie/src/lib.rs (1)
30-32
: Consider making the storage backend configurable, sensei!

Currently, the implementation uses HashMapDb by default. For production use cases, you might want to:

- Accept a generic database type parameter that implements BonsaiTrieDb
- Allow passing configuration options from the caller
- Support different storage backends (e.g., MDBX as mentioned in the PR objectives)
crates/katana/storage/db/src/models/trie.rs (2)
1-21
: Ohayo! Consider adding documentation for the types, sensei!

The type definitions look good! The enum representation and derive macros are appropriate. However, adding documentation comments would help other developers understand:

- The purpose of each TrieDatabaseKeyType variant
- The relationship between TrieDatabaseKey and TrieDatabaseValue
- Usage examples

Add documentation like this:

/// Represents different types of trie database keys
#[repr(u8)]
#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq)]
pub enum TrieDatabaseKeyType {
    /// Regular trie node
    Trie = 0,
    /// Flat storage entry
    Flat,
    /// Trie operation log
    TrieLog,
}
54-81
: Add tests for error cases and edge conditions, sensei!

While the happy path tests look good, consider adding tests for:
- Empty input
- Invalid key type values
- Single byte input
- Very large keys
Here's a suggested test:
#[test]
fn test_decode_errors() {
    // Empty input
    assert!(matches!(
        TrieDatabaseKey::decode(vec![]),
        Err(CodecError::InvalidLength)
    ));

    // Invalid key type
    assert!(matches!(
        TrieDatabaseKey::decode(vec![255, 1, 2, 3]),
        Err(CodecError::InvalidFormat)
    ));

    // Single byte input
    assert!(matches!(
        TrieDatabaseKey::decode(vec![0]),
        Err(CodecError::InvalidLength)
    ));
}

crates/katana/storage/provider/src/providers/db/trie.rs (2)
22-27
: Add documentation for ContractLeaf struct

Ohayo, sensei! Consider adding documentation to explain the purpose and usage of the ContractLeaf struct and its fields. This will help other developers understand the data structure better.
Add documentation like this:
+/// Represents the state of a contract in the trie
+///
+/// This struct contains the essential components that define a contract's state:
+/// - class_hash: The hash of the contract's class
+/// - storage_root: The root hash of the contract's storage trie
+/// - nonce: The current nonce of the contract
 #[derive(Debug, Default)]
 struct ContractLeaf {
+    /// The hash of the contract's class implementation
     pub class_hash: Option<Felt>,
+    /// The root hash of the contract's storage trie
     pub storage_root: Option<Felt>,
+    /// The current nonce of the contract
     pub nonce: Option<Felt>,
 }
121-150
: Improve hash computation documentation

Ohayo, sensei! While the ASCII art is helpful, consider converting it to proper doc comments for better IDE integration.
Here's a suggested improvement:
-// computes the contract state leaf hash
+/// Computes the contract state leaf hash using nested Pedersen hashes
+///
+/// The hash is computed as follows:
+/// ```text
+/// h_final = Pedersen(
+///     Pedersen(
+///         Pedersen(class_hash, storage_root),
+///         nonce
+///     ),
+///     0
+/// )
+/// ```
+///
+/// # Arguments
+/// * `provider` - The state provider to fetch default values
+/// * `address` - The contract address
+/// * `contract_leaf` - The contract leaf containing state information
+///
+/// # Returns
+/// The computed state leaf hash as a Felt
 fn contract_state_leaf_hash(

crates/katana/core/src/backend/mod.rs (2)
78-80
: Ohayo, sensei! Consider implementing the suggested TODO optimization

The TODO comment suggests changing insert_block_with_states_and_receipts to accept ReceiptWithTxHash directly. This would avoid the conversion at line 80 and improve performance by eliminating an unnecessary transformation.

Would you like me to help implement this optimization?
: Ohayo, sensei! Add documentation for trie_provider fieldConsider adding documentation to explain the role and requirements of the
trie_provider
field, as it's crucial for understanding how the block commitment process interacts with the trie implementation.crates/katana/primitives/src/block.rs (1)
319-338
: Consider adding more test cases for robustness.

The current test coverage is good, but could be enhanced with these additional cases:
- Error cases with panic conditions
- Boundary testing for all three counter fields simultaneously
- Different combinations of L1 DA modes with various counter values
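The packing that Header::concat_counts performs — several counters plus a one-bit DA flag shifted into a single word — can be sketched as follows. The 32-bit field widths below are an assumption purely for illustration; Katana packs into a Felt with its own layout.

```rust
// Hypothetical layout for illustration only: 32 bits per counter and a 1-bit
// data-availability flag, packed into a u128 via shifts. Not Katana's actual
// field widths.
fn concat_counts(
    transaction_count: u32,
    event_count: u32,
    state_diff_length: u32,
    blob_da_mode: bool,
) -> u128 {
    (transaction_count as u128) << 96
        | (event_count as u128) << 64
        | (state_diff_length as u128) << 32
        | (blob_da_mode as u128) << 31
}

fn main() {
    // All counters saturated with the flag set: every counter field is all
    // ones and the flag occupies the top bit of the remaining low word.
    let packed = concat_counts(u32::MAX, u32::MAX, u32::MAX, true);
    assert_eq!(packed, 0xffffffff_ffffffff_ffffffff_80000000u128);

    // Each field round-trips back out of the packed word.
    assert_eq!((packed >> 96) as u32, u32::MAX);
    assert_eq!((packed >> 31) & 1, 1);
    println!("packed = {packed:#x}");
}
```

Boundary tests like the one suggested here are valuable precisely because an off-by-one in any shift silently corrupts the adjacent field, which only an all-ones input reliably exposes.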
Here's a suggested additional test case:
#[test]
fn header_concat_counts_boundary_cases() {
    // Test all counters at max simultaneously
    let expected = felt!("0xffffffffffffffffffffffffffffffffffffffff80000000");
    let actual = Header::concat_counts(
        u32::MAX,
        u32::MAX,
        u32::MAX,
        L1DataAvailabilityMode::Blob
    );
    assert_eq!(actual, expected);
}

crates/katana/storage/provider/src/lib.rs (1)
386-410
: Consider adding documentation for the trie writer implementations.

Ohayo sensei! While the implementations are clean, it would be helpful to add documentation comments explaining:

- The purpose of these trait implementations
- What the returned Felt represents (presumably a Merkle/Patricia root)
- Any important considerations about the block number parameter

Apply this diff to add documentation:

+/// Implementation of ClassTrieWriter for BlockchainProvider
+///
+/// This implementation allows computing class trie roots by inserting class hash updates
+/// at specific block numbers. The returned Felt represents the root hash of the class trie.
 impl<Db> ClassTrieWriter for BlockchainProvider<Db>
 where
     Db: ClassTrieWriter,
 {
+    /// Inserts class hash updates for a given block number and returns the new root
+    ///
+    /// # Arguments
+    /// * `block_number` - The block number at which these updates occurred
+    /// * `updates` - Mapping of class hashes to their compiled class hashes
+    ///
+    /// # Returns
+    /// The root hash of the class trie after applying the updates
     fn insert_updates(

+/// Implementation of ContractTrieWriter for BlockchainProvider
+///
+/// This implementation allows computing contract state trie roots by inserting state updates
+/// at specific block numbers. The returned Felt represents the root hash of the state trie.
 impl<Db> ContractTrieWriter for BlockchainProvider<Db>
 where
     Db: ContractTrieWriter,
 {
+    /// Inserts contract state updates for a given block number and returns the new root
+    ///
+    /// # Arguments
+    /// * `block_number` - The block number at which these updates occurred
+    /// * `state_updates` - The state updates to apply
+    ///
+    /// # Returns
+    /// The root hash of the contract state trie after applying the updates
     fn insert_updates(

crates/katana/storage/db/src/tables.rs (3)
39-39
: Ohayo! Consider adding documentation for the Trie trait, sensei!

The Trie trait serves as a marker trait for trie-specific tables, but it would benefit from documentation explaining its purpose and relationship with the database structure.

Add documentation like this:

+/// A marker trait for trie-specific tables that store merkle tree nodes.
+/// These tables use `TrieDatabaseKey` as keys and `TrieDatabaseValue` as values
+/// to maintain the tree structure.
 pub trait Trie: Table<Key = TrieDatabaseKey, Value = TrieDatabaseValue> {}
233-238
: Consider adding documentation for the trie tables, sensei!

While the table names are self-explanatory, it would be beneficial to document their specific roles in the merkle tree structure.
Add documentation like this:
-    /// Class trie
+    /// Stores merkle tree nodes for class state commitment computation
     ClassTrie: (TrieDatabaseKey) => TrieDatabaseValue,
-    /// Contract trie
+    /// Stores merkle tree nodes for contract state commitment computation
     ContractTrie: (TrieDatabaseKey) => TrieDatabaseValue,
-    /// Contract storage trie
+    /// Stores merkle tree nodes for contract storage state commitment computation
     ContractStorageTrie: (TrieDatabaseKey) => TrieDatabaseValue
Line range hint 1-244: Consider adding trie-specific test cases, sensei!
While the basic table functionality is tested, we should add specific test cases for the trie functionality to ensure proper integration.

Add a new test function like this:

#[test]
fn test_trie_tables() {
    use super::*;

    // Verify that the trie tables implement the Trie trait
    fn assert_implements_trie<T: Trie>() {}
    assert_implements_trie::<ClassTrie>();
    assert_implements_trie::<ContractTrie>();
    assert_implements_trie::<ContractStorageTrie>();

    // Verify the key/value types
    assert_eq!(
        std::any::type_name::<<ClassTrie as Table>::Key>(),
        std::any::type_name::<TrieDatabaseKey>()
    );
    assert_eq!(
        std::any::type_name::<<ClassTrie as Table>::Value>(),
        std::any::type_name::<TrieDatabaseValue>()
    );
}
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (1)
- Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (27)
- Cargo.toml (3 hunks)
- crates/katana/core/Cargo.toml (2 hunks)
- crates/katana/core/src/backend/mod.rs (3 hunks)
- crates/katana/core/src/backend/storage.rs (3 hunks)
- crates/katana/primitives/src/block.rs (5 hunks)
- crates/katana/primitives/src/chain_spec.rs (2 hunks)
- crates/katana/primitives/src/receipt.rs (2 hunks)
- crates/katana/primitives/src/state.rs (3 hunks)
- crates/katana/storage/db/Cargo.toml (2 hunks)
- crates/katana/storage/db/src/codecs/postcard.rs (2 hunks)
- crates/katana/storage/db/src/lib.rs (1 hunks)
- crates/katana/storage/db/src/models/mod.rs (1 hunks)
- crates/katana/storage/db/src/models/trie.rs (1 hunks)
- crates/katana/storage/db/src/tables.rs (7 hunks)
- crates/katana/storage/db/src/trie.rs (1 hunks)
- crates/katana/storage/provider/Cargo.toml (1 hunks)
- crates/katana/storage/provider/src/lib.rs (3 hunks)
- crates/katana/storage/provider/src/providers/db/mod.rs (1 hunks)
- crates/katana/storage/provider/src/providers/db/trie.rs (1 hunks)
- crates/katana/storage/provider/src/providers/fork/mod.rs (4 hunks)
- crates/katana/storage/provider/src/traits/mod.rs (1 hunks)
- crates/katana/storage/provider/src/traits/trie.rs (1 hunks)
- crates/katana/trie/Cargo.toml (1 hunks)
- crates/katana/trie/src/class.rs (1 hunks)
- crates/katana/trie/src/contract.rs (1 hunks)
- crates/katana/trie/src/lib.rs (1 hunks)
- crates/saya/provider/src/rpc/mod.rs (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (17)
- Cargo.toml
- crates/katana/core/Cargo.toml
- crates/katana/core/src/backend/storage.rs
- crates/katana/primitives/src/chain_spec.rs
- crates/katana/storage/db/Cargo.toml
- crates/katana/storage/db/src/codecs/postcard.rs
- crates/katana/storage/db/src/lib.rs
- crates/katana/storage/db/src/models/mod.rs
- crates/katana/storage/provider/Cargo.toml
- crates/katana/storage/provider/src/providers/db/mod.rs
- crates/katana/storage/provider/src/providers/fork/mod.rs
- crates/katana/storage/provider/src/traits/mod.rs
- crates/katana/storage/provider/src/traits/trie.rs
- crates/katana/trie/Cargo.toml
- crates/katana/trie/src/class.rs
- crates/katana/trie/src/contract.rs
- crates/saya/provider/src/rpc/mod.rs
🔇 Additional comments (26)
crates/katana/trie/src/lib.rs (3)
14-18: Ohayo! Clean trait definition, sensei!
The `BonsaiTrieDb` trait provides a nice abstraction over the database requirements. The documentation and implementation are clear and follow Rust best practices.

20-23: Well-defined function signature, sensei!
The generic constraints and error-handling approach are appropriate for this use case.

24-43: Previous review comment is still applicable
The implementation could benefit from the improvements suggested in the previous review comment, including better constant naming, input validation, error handling, and memory-efficiency optimizations.
crates/katana/primitives/src/state.rs (3)
33-48: Ohayo! The len() implementation looks solid, sensei!
The method efficiently computes the total length by aggregating all state components, and the clippy attribute is appropriately used.

62-89: Previous refactoring suggestion is still applicable.
The flattening logic for the different state components follows similar patterns and could benefit from extraction into helper methods.

90-107: Verify hash computation order consistency.
The order of elements in the final hash computation is crucial for consistency. Let's verify that this order matches other implementations in the codebase.
✅ Verification successful
Let me gather more information about state diff hash computation patterns.
The state diff hash computation order is consistent with the codebase standards
The implementation follows the established pattern in the codebase, where:
- The magic prefix "STARKNET_STATE_DIFF0" is used consistently
- The order of elements matches the standard sequence:
  1. Updated contracts (deployed + replaced)
  2. Declared classes
  3. Deprecated declared classes
  4. Storage updates
  5. Nonce updates

This is verified by examining other Poseidon hash implementations in the codebase, which follow similar structured patterns with magic prefixes and consistent element ordering.
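To make the documented ordering concrete, here is a schematic, hedged sketch (not Katana's actual code) that assembles the flattened element list in the same sequence, using u64 values as stand-ins for Felt field elements; the MAGIC constant is only a placeholder for short_string!("STARKNET_STATE_DIFF0"):

```rust
// Schematic sketch of the state-diff element ordering described above,
// using u64 values as stand-ins for Felt field elements.
fn flatten_state_diff(
    updated_contracts: &[(u64, u64)],           // (address, class hash)
    declared_classes: &[(u64, u64)],            // (class hash, compiled class hash)
    storage_updates: &[(u64, Vec<(u64, u64)>)], // (address, [(key, value)])
    nonce_updates: &[(u64, u64)],               // (address, nonce)
) -> Vec<u64> {
    const MAGIC: u64 = 0xD1FF; // placeholder for short_string!("STARKNET_STATE_DIFF0")
    let mut out = vec![MAGIC, updated_contracts.len() as u64];
    for (addr, hash) in updated_contracts {
        out.extend([*addr, *hash]);
    }
    out.push(declared_classes.len() as u64);
    for (class_hash, compiled) in declared_classes {
        out.extend([*class_hash, *compiled]);
    }
    out.push(0); // deprecated declared classes count (none in this sketch)
    out.extend([1, 0]); // the ONE/ZERO placeholder pair from the implementation
    out.push(storage_updates.len() as u64);
    for (addr, entries) in storage_updates {
        out.push(*addr);
        out.push(entries.len() as u64);
        for (key, value) in entries {
            out.extend([*key, *value]);
        }
    }
    out.push(nonce_updates.len() as u64);
    for (addr, nonce) in nonce_updates {
        out.extend([*addr, *nonce]);
    }
    out
}
```

Only the ordering matters here; the real implementation feeds the resulting list to Poseidon::hash_array.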
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for other Poseidon hash computations with similar patterns
rg -A 5 "STARKNET_STATE_DIFF0"
# Look for other state diff hash implementations
ast-grep --pattern 'fn compute_state_diff_hash($$$) {
$$$
}'
Length of output: 592
Script:
#!/bin/bash
# Search for any state diff hash related code with more context
rg -A 10 "state_diff.*hash" --ignore-case
# Look for Poseidon hash array usage patterns
rg -A 5 "Poseidon::hash_array"
# Search for any state diff related structs/implementations
ast-grep --pattern 'struct StateDiff {
$$$
}'
Length of output: 8011
crates/katana/storage/provider/src/providers/db/trie.rs (1)
29-40: LGTM! Clean implementation of ClassTrieWriter
The implementation properly handles errors and follows a clean approach to updating the class trie.
crates/katana/primitives/src/receipt.rs (5)
1-6: Ohayo! Import statements look good, sensei!
The imports are well-organized, and all are utilized effectively in the implementation.

180-189: Excellent struct design, sensei!
The struct is well-designed with appropriate derive macros and smart use of the `#[deref]` and `#[as_ref]` attributes, allowing transparent access to the underlying `Receipt` methods while maintaining encapsulation.

226-241: Excellent optimization for message hashing, sensei!
The implementation shows great attention to performance:
- Pre-allocating vector capacity to avoid reallocations
- Efficient use of iterator folding for message processing

196-200: Great documentation with reference link!
The documentation clearly explains the purpose and includes a link to the official Starknet documentation for further reference.

209-218: Verify gas consumption implementation.
A few concerns in the hash computation:
- Line 214: the hard-coded `Felt::ZERO` for L2 gas consumption needs verification
- Line 216: the commented-out `l1_data_gas` suggests an incomplete implementation
crates/katana/core/src/backend/mod.rs (1)
173-208: Ohayo, sensei! Consider handling edge cases in the commit method
The `commit` method assumes all required data is valid. Consider adding validation for:
- An empty transaction list
- Mismatched transaction and receipt counts
- Zero state updates

This would help prevent invalid block creation and improve error reporting.
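A minimal, hypothetical sketch of such validation — the struct and field names are illustrative stand-ins and do not necessarily match Katana's real UncommittedBlock:

```rust
// Hypothetical sketch of pre-commit validation; field names are illustrative
// and do not necessarily match Katana's UncommittedBlock.
struct UncommittedBlockSketch {
    transactions: Vec<u64>, // stand-ins for transactions
    receipts: Vec<u64>,     // stand-ins for receipts
}

impl UncommittedBlockSketch {
    fn validate(&self) -> Result<(), String> {
        // Reject mismatched transaction/receipt counts before committing.
        if self.transactions.len() != self.receipts.len() {
            return Err(format!(
                "transaction/receipt count mismatch: {} vs {}",
                self.transactions.len(),
                self.receipts.len()
            ));
        }
        Ok(())
    }
}
```

Calling a check like this at the top of commit would surface invalid inputs before any commitments are computed.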
crates/katana/primitives/src/block.rs (4)
Line range hint 1-16: Ohayo sensei! Clean organization of imports and type definitions.
The reorganization of type definitions and the addition of new imports for Poseidon hash functionality is well-structured and follows Rust best practices.

102-159: Excellent implementation of the Poseidon hash computation!
The implementation is thorough, well-documented, and follows StarkWare's specification exactly. The order of fields in the hash computation is correct, and the code is clear and maintainable.

208-208: LGTM! Consistent default initialization.
The default initialization of `state_diff_length` to 0 is consistent with other counter fields.

93-94: Verify state_diff_length usage across the codebase.
The addition of the `state_diff_length` field is well-placed within the counter fields group. Let's verify its usage across the codebase.
✅ Verification successful
Let me analyze the initialization patterns and related state diff functionality to ensure proper usage.
The state_diff_length field is properly integrated across the codebase
The field is correctly used in:
- Block header hash computation with the proper bit layout (a 64-bit allocation)
- Initialized consistently to 0 in genesis/default blocks
- Populated with the actual state-updates length in the backend implementation
- Used alongside related state-diff functionality in a coherent manner
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for state_diff_length usage and initialization
# Look for potential initialization points and usage
rg "state_diff_length" --type rust
Length of output: 1208
Script:
#!/bin/bash
# Check the initialization and usage context of state_diff_length
rg "state_diff_length" -B 3 -A 3 --type rust
# Check for any state diff related functionality
rg "state.*diff" --type rust
Length of output: 18272
crates/katana/storage/provider/src/lib.rs (3)
22-22
: LGTM! Clean trait import.
Ohayo! The import of trie-related traits is well-organized and follows the module structure.
386-397
: LGTM! Clean ClassTrieWriter implementation.
The implementation correctly delegates to the underlying provider while maintaining the proper trait bounds. The method signature properly handles block numbers and class hash updates, returning a Felt value that represents the computed root.
399-410
: LGTM! Clean ContractTrieWriter implementation.
The implementation follows the same pattern as ClassTrieWriter, properly delegating to the provider while maintaining consistent error handling through ProviderResult. The state updates are passed through correctly to compute the contract state root.
Let's verify the trait implementations in the underlying provider:
✅ Verification successful
Let me search with a different pattern to find these trait implementations.
Ohayo! The ContractTrieWriter implementation is properly supported across the provider chain, sensei!
The verification shows that ContractTrieWriter is correctly implemented in:
- DbProvider (base implementation)
- ForkedProvider
- BlockchainProvider (the wrapper we reviewed)
All implementations maintain consistent signatures and return types using ProviderResult, following the same pattern as ClassTrieWriter. The trait hierarchy is complete and properly connected.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the trait implementations in the DB provider
# Expected: Find implementations of ClassTrieWriter and ContractTrieWriter
# Search for ClassTrieWriter implementation
echo "Searching for ClassTrieWriter implementation..."
ast-grep --pattern 'impl$$$ClassTrieWriter for DbProvider'
# Search for ContractTrieWriter implementation
echo "Searching for ContractTrieWriter implementation..."
ast-grep --pattern 'impl$$$ContractTrieWriter for DbProvider'
Length of output: 340
Script:
#!/bin/bash
# Search for trait implementations in the entire codebase
echo "Searching for ClassTrieWriter implementations..."
rg "impl.*ClassTrieWriter" -A 5
echo "\nSearching for ContractTrieWriter implementations..."
rg "impl.*ContractTrieWriter" -A 5
Length of output: 3246
crates/katana/storage/db/src/tables.rs (4)
13-13
: LGTM! Clean import addition, sensei!
The new import for trie-related types is well-placed and properly scoped.
173-176
: LGTM! Well-structured table definitions, sensei!
The new trie tables are properly integrated into the existing table structure with consistent type definitions.
241-243
: LGTM! Clean trait implementations, sensei!
The `Trie` trait implementations for the three trie tables are correct and consistent.
276-277
: The test coverage is incomplete, sensei!
The test assertions are missing for the `ContractStorageTrie` table, as noted in the past review comments.
Also applies to: 302-303
crates/katana/storage/db/src/trie.rs (2)
66-69: Ohayo, sensei! The `get` method is well implemented
The `get` method correctly retrieves the value associated with the given key.

133-145: Ohayo, sensei! The `to_db_key` function correctly maps `DatabaseKey` variants
The function properly converts `DatabaseKey` variants into `TrieDatabaseKey`, ensuring consistent key handling across different key types.
Actionable comments posted: 9
🧹 Outside diff range and nitpick comments (13)
crates/katana/trie/src/class.rs (2)
27-34: Document the configuration values.
Ohayo, sensei! The configuration sets specific values that might not be immediately obvious to future maintainers:
- max_saved_trie_logs: Some(0)
- max_saved_snapshots: Some(0)
- snapshot_interval: u64::MAX

Consider adding a comment explaining why these specific values were chosen.
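One hedged way to document them inline — the field names are those quoted above, while the rationales are plausible assumptions rather than confirmed facts:

```rust
// max_saved_trie_logs: Some(0)  -- assumed: no trie change logs are kept, so
//                                  reverting to earlier commits is not needed
// max_saved_snapshots: Some(0)  -- assumed: historical reads go through the
//                                  database rather than in-memory snapshots
// snapshot_interval: u64::MAX   -- assumed: effectively disables periodic snapshots
```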
36-40: Add detailed documentation for the apply method.
Ohayo, sensei! Consider adding documentation that explains:
- The purpose of the method
- The format and requirements for the input updates
- The meaning of the returned root hash
- Any side effects on the storage state

+    /// Applies a batch of class updates to the trie and computes the new root hash.
+    ///
+    /// # Arguments
+    /// * `block_number` - The block number for which these updates are being applied
+    /// * `updates` - A map of class hashes to their compiled class hashes
+    ///
+    /// # Returns
+    /// The new root hash of the trie after applying all updates
    pub fn apply(

crates/katana/primitives/src/state.rs (2)
33-48: Ohayo! Consider adding documentation for the len method, sensei.
While the implementation is clear, adding documentation would help explain:
- The purpose of this length calculation
- What the returned value represents in the context of state updates
- Why empty state is valid (hence the clippy allow)

Add documentation like:

+    /// Returns the total number of individual state updates across all categories.
+    /// This includes the sum of:
+    /// - Deployed contracts
+    /// - Replaced classes
+    /// - Declared classes
+    /// - Deprecated declared classes
+    /// - Nonce updates
+    /// - Storage updates (counting each key-value pair)
    #[allow(clippy::len_without_is_empty)]
    pub fn len(&self) -> usize {
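As a quick sanity check of the aggregation described, here is a self-contained sketch using plain BTreeMaps in place of the actual StateUpdates fields (names and types are stand-ins, not Katana's API):

```rust
// Sketch of the length aggregation the doc comment describes, with plain
// collections standing in for the StateUpdates maps.
use std::collections::BTreeMap;

fn state_updates_len(
    deployed: &BTreeMap<u64, u64>,
    replaced: &BTreeMap<u64, u64>,
    declared: &BTreeMap<u64, u64>,
    deprecated: &[u64],
    nonces: &BTreeMap<u64, u64>,
    storage: &BTreeMap<u64, BTreeMap<u64, u64>>,
) -> usize {
    deployed.len()
        + replaced.len()
        + declared.len()
        + deprecated.len()
        + nonces.len()
        // storage updates count each key-value pair individually
        + storage.values().map(|entries| entries.len()).sum::<usize>()
}
```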
90-90: Document the magic values, sensei.
The magic string `STARKNET_STATE_DIFF0` and the use of `ONE` and `ZERO` at lines 98-99 should be documented to explain their significance in the state diff computation.

Add a comment explaining these constants:

+    // STARKNET_STATE_DIFF0: Magic value that identifies this as a state diff hash
+    // The following constants (ONE and ZERO) represent placeholder values for
+    // future extensions of the state diff format
    let magic = short_string!("STARKNET_STATE_DIFF0");

crates/katana/storage/provider/src/providers/db/trie.rs (2)
- The purpose of this specific hashing scheme
- Why this particular order of inputs is used
- Why zero is used as the final input
Consider adding a detailed doc comment:
/// Computes the contract state leaf hash using a nested Pedersen hash scheme. /// The hash is computed as follows: /// 1. First level: hash(class_hash, storage_root) to combine contract identity /// 2. Second level: hash(result1, nonce) to include state sequence /// 3. Final level: hash(result2, 0) for domain separation
42-119
: Consider implementing transaction rollback mechanismOhayo, sensei! The implementation would benefit from proper transaction rollback handling in case of failures during the multi-step update process. This ensures data consistency when errors occur during storage updates, contract updates, or commitment phases.
Consider wrapping the entire update process in a transaction that can be rolled back on failure. This could be implemented by:
- Starting a transaction at the beginning of
insert_updates
- Implementing rollback logic for each step
- Only committing the transaction after all operations succeed
crates/katana/primitives/src/receipt.rs (3)
180-189
: LGTM! Well-structured struct definition with appropriate derives.The struct design effectively combines transaction hash with receipt data. Consider adding a doc comment explaining the purpose of the
#[deref]
attribute for better maintainability.pub struct ReceiptWithTxHash { /// The hash of the transaction. pub tx_hash: TxHash, - /// The raw transaction. + /// The raw transaction receipt. + /// This field can be dereferenced to access all Receipt methods directly. #[deref] #[as_ref] pub receipt: Receipt, }
214-214: Document the zero L2 gas consumption.
Sensei, consider adding a comment explaining why L2 gas consumption is set to zero. This will help future maintainers understand the implementation decision.

-    Felt::ZERO, // L2 gas consumption.
+    Felt::ZERO, // L2 gas consumption is currently not tracked in Katana
220-244: Enhance documentation for message hash computation.
The message hash computation is complex and could benefit from more detailed documentation. Consider adding:
- An example of the hash computation flow
- An explanation of the accumulator structure

-    // H(n, from, to, H(payload), ...), where n, is the total number of messages, the payload is
-    // prefixed by its length, and h is the Poseidon hash function.
+    /// Computes the hash of L1 messages using the following format:
+    /// H(n, from₁, to₁, H(payload₁), from₂, to₂, H(payload₂), ...), where:
+    /// - n: total number of messages
+    /// - fromᵢ: sender address of message i
+    /// - toᵢ: recipient address of message i
+    /// - H(payloadᵢ): Poseidon hash of [len, payload₁, ..., payload_len]
+    ///
+    /// The function pre-allocates memory for efficiency and uses fold to
+    /// build the final array of elements to be hashed.
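The accumulator layout in the suggested doc comment can be sketched like this, with a toy hash over (len, payload...) standing in for Poseidon and u64 stand-ins for Felt:

```rust
// Toy stand-in for Poseidon over a length-prefixed payload.
fn hash_payload(payload: &[u64]) -> u64 {
    std::iter::once(payload.len() as u64)
        .chain(payload.iter().copied())
        .fold(0u64, |acc, x| acc.rotate_left(7) ^ x)
}

// Builds [n, from1, to1, H(payload1), from2, to2, H(payload2), ...].
fn message_elements(messages: &[(u64, u64, Vec<u64>)]) -> Vec<u64> {
    // Pre-allocate: 1 count element + 3 elements per message.
    let mut out = Vec::with_capacity(1 + messages.len() * 3);
    out.push(messages.len() as u64);
    for (from, to, payload) in messages {
        out.extend([*from, *to, hash_payload(payload)]);
    }
    out
}
```

The real implementation would hash the resulting element array once more with Poseidon to obtain the final commitment.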
Line range hint
77-94
: Consider reordering struct fields to match hash computation order.The fields in the
Header
struct don't match the order in which they're used incompute_hash
. This makes it harder to verify the correctness of the hash computation. Consider reordering the fields to match the order in the hash computation (as documented in the comments forcompute_hash
).
319-338: Enhance test coverage for edge cases.
The test module could benefit from additional test cases:
- Test all possible L1DataAvailabilityMode values
- Test combinations of maximum values for all fields
- Add property-based tests to verify that the concatenated value can be correctly unpacked

Would you like me to help generate these additional test cases?
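A property-style round-trip check of the suggested kind can be sketched as follows (32-bit lanes in a u128 are used for brevity; the real header packs 64-bit counters into a 251-bit felt, but the pack/unpack symmetry under test is the same):

```rust
// Illustrative pack/unpack of three counters plus a 1-bit DA-mode flag.
// Lane widths are chosen for this sketch only; they do not match the real layout.
fn pack(tx_count: u32, event_count: u32, state_diff_len: u32, blob_mode: bool) -> u128 {
    ((tx_count as u128) << 65)
        | ((event_count as u128) << 33)
        | ((state_diff_len as u128) << 1)
        | (blob_mode as u128)
}

fn unpack(packed: u128) -> (u32, u32, u32, bool) {
    (
        ((packed >> 65) & 0xFFFF_FFFF) as u32,
        ((packed >> 33) & 0xFFFF_FFFF) as u32,
        ((packed >> 1) & 0xFFFF_FFFF) as u32,
        packed & 1 == 1,
    )
}
```

A property-based test would assert unpack(pack(...)) is the identity for arbitrary inputs, including the maximum value of each lane.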
crates/katana/storage/provider/src/lib.rs (1)
399-410: LGTM! Well-structured trait implementation for contract state commitments.
The `ContractTrieWriter` implementation maintains consistency with the codebase patterns and properly handles state-update commitments. This implementation, along with `ClassTrieWriter`, forms a complete system for computing block commitments for both contract states and class updates.

For future scalability, consider:
- Adding batch-processing capabilities for handling multiple blocks' commitments
- Implementing caching mechanisms for frequently accessed state roots
- Adding methods to verify these commitments against a known state
- Adding methods to verify these commitments against a known state
crates/katana/storage/db/src/models/trie.rs (1)
58-80
: Consider adding tests for error cases, sensei!Ohayo, sensei! While your existing tests validate successful encoding and decoding, adding tests for error scenarios would enhance robustness. Testing how
decode
handles invalid or malformed inputs ensures your error handling is solid.Here are some examples:
#[test] fn test_decode_empty_input() { let result = TrieDatabaseKey::decode(Vec::new()); assert!(result.is_err()); } #[test] fn test_decode_invalid_type() { let input = vec![255, 1, 2, 3]; // Invalid type let result = TrieDatabaseKey::decode(input); assert!(result.is_err()); }
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (3)
- Cargo.lock is excluded by !**/*.lock
- spawn-and-move-db.tar.gz is excluded by !**/*.gz
- types-test-db.tar.gz is excluded by !**/*.gz
📒 Files selected for processing (27)
- Cargo.toml (3 hunks)
- crates/katana/core/Cargo.toml (2 hunks)
- crates/katana/core/src/backend/mod.rs (3 hunks)
- crates/katana/core/src/backend/storage.rs (3 hunks)
- crates/katana/primitives/src/block.rs (5 hunks)
- crates/katana/primitives/src/chain_spec.rs (2 hunks)
- crates/katana/primitives/src/receipt.rs (2 hunks)
- crates/katana/primitives/src/state.rs (3 hunks)
- crates/katana/storage/db/Cargo.toml (2 hunks)
- crates/katana/storage/db/src/codecs/postcard.rs (2 hunks)
- crates/katana/storage/db/src/lib.rs (1 hunks)
- crates/katana/storage/db/src/models/mod.rs (1 hunks)
- crates/katana/storage/db/src/models/trie.rs (1 hunks)
- crates/katana/storage/db/src/tables.rs (7 hunks)
- crates/katana/storage/db/src/trie.rs (1 hunks)
- crates/katana/storage/provider/Cargo.toml (1 hunks)
- crates/katana/storage/provider/src/lib.rs (3 hunks)
- crates/katana/storage/provider/src/providers/db/mod.rs (1 hunks)
- crates/katana/storage/provider/src/providers/db/trie.rs (1 hunks)
- crates/katana/storage/provider/src/providers/fork/mod.rs (4 hunks)
- crates/katana/storage/provider/src/traits/mod.rs (1 hunks)
- crates/katana/storage/provider/src/traits/trie.rs (1 hunks)
- crates/katana/trie/Cargo.toml (1 hunks)
- crates/katana/trie/src/class.rs (1 hunks)
- crates/katana/trie/src/contract.rs (1 hunks)
- crates/katana/trie/src/lib.rs (1 hunks)
- crates/saya/provider/src/rpc/mod.rs (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- crates/katana/trie/src/contract.rs
🚧 Files skipped from review as they are similar to previous changes (16)
- Cargo.toml
- crates/katana/core/Cargo.toml
- crates/katana/core/src/backend/storage.rs
- crates/katana/primitives/src/chain_spec.rs
- crates/katana/storage/db/Cargo.toml
- crates/katana/storage/db/src/codecs/postcard.rs
- crates/katana/storage/db/src/lib.rs
- crates/katana/storage/db/src/models/mod.rs
- crates/katana/storage/db/src/tables.rs
- crates/katana/storage/provider/Cargo.toml
- crates/katana/storage/provider/src/providers/db/mod.rs
- crates/katana/storage/provider/src/providers/fork/mod.rs
- crates/katana/storage/provider/src/traits/mod.rs
- crates/katana/storage/provider/src/traits/trie.rs
- crates/katana/trie/Cargo.toml
- crates/saya/provider/src/rpc/mod.rs
🔇 Additional comments (18)
crates/katana/trie/src/lib.rs (3)
9-10: Previous review comment about unused module is still valid
The contract module appears to be unused, as previously noted.

14-18: Ohayo, sensei! The trait definition looks perfect!
The `BonsaiTrieDb` trait is well-documented and follows Rust best practices with its generic implementation.

20-43: Previous review suggestions for Merkle root computation are still valid
The previous review provided comprehensive improvements for:
- Better constant naming
- Proper error handling
- Input validation
- BitVec optimization

Additionally, sensei, I noticed one more architectural consideration:

Consider adding a configuration struct to make the implementation more flexible:
pub struct MerkleConfig {
identifier: &'static [u8],
storage_config: BonsaiStorageConfig,
}
impl Default for MerkleConfig {
fn default() -> Self {
Self {
identifier: b"merkle_tree",
storage_config: BonsaiStorageConfig::default(),
}
}
}
This would allow users to customize the behavior without modifying the core implementation.
crates/katana/trie/src/class.rs (2)
1-17: LGTM! Well-organized imports and documented constants.
Ohayo, sensei! The imports are properly organized, and the constant is well-documented with a reference to the architecture documentation.

19-22: LGTM! Clean struct definition with proper trait bounds.
The struct is well-defined with appropriate generic constraints.
crates/katana/primitives/src/state.rs (1)
106-106: Consider adding test vectors for the hash computation.
The Poseidon hash computation is a critical part of the state commitment. It would be valuable to have test vectors ensuring consistent hash computation across different implementations.

Let me help you verify if there are any existing test vectors:
crates/katana/storage/provider/src/providers/db/trie.rs (1)
1-27: LGTM! Well-structured imports and ContractLeaf definition
The imports are well-organized, and the `ContractLeaf` struct provides a clean representation of contract state with appropriate optional fields.
crates/katana/primitives/src/receipt.rs (2)
1-6: LGTM! Imports are well-organized.
Ohayo! The new imports appropriately support the hash computation requirements for block commitments.

226-228: LGTM! Efficient memory pre-allocation.
Nice optimization with pre-allocating the vector capacity! This avoids potential reallocations during message processing.
crates/katana/core/src/backend/mod.rs (3)
68-73: Ohayo, sensei! Avoid unnecessary cloning of state updates
The `state_updates` value is cloned unnecessarily. Consider passing it by reference to avoid the performance overhead.

147-157: Ohayo, sensei! LGTM - Well-structured block representation
The `UncommittedBlock` struct is well-designed with appropriate lifetime annotations and clear separation of concerns.

173-208: Ohayo, sensei! LGTM - Well-structured block commitment implementation
The `commit` method follows a clear and logical flow:
- Computes all required commitments
- Constructs the header with all necessary fields
- Computes the block hash
- Returns a properly sealed block

The implementation demonstrates good separation of concerns and maintainability.
crates/katana/primitives/src/block.rs (2)
Line range hint 1-16: Ohayo sensei! Clean organization of imports and type definitions.
The reorganization of type definitions and the addition of the necessary imports for the Poseidon hash implementation is well-structured.

102-159: Verify protocol version string conversion behavior.
The conversion of the protocol version to a felt at line 155 uses `cairo_short_string_to_felt`, which may fail for certain protocol version strings. Consider adding tests to verify this behavior for all possible protocol versions.
crates/katana/storage/provider/src/lib.rs (2)
22-22: LGTM! Clean trait import.
Ohayo! The import of the `ClassTrieWriter` and `ContractTrieWriter` traits is well-placed with other trait imports.
386-397: LGTM! Clean trait implementation with proper delegation.
The `ClassTrieWriter` implementation follows the established pattern of delegating to the underlying provider while maintaining proper error handling. The return type `Felt` indicates that it computes a commitment/root value for the class updates.
Let's verify the trait definition and its usage:
✅ Verification successful
Implementation correctly follows the trait contract and delegation pattern
The verification confirms:
- The trait is properly defined in `traits/trie.rs` with the required `Send + Sync` bounds
- `BlockchainProvider<Db>` correctly delegates to the underlying provider, which must implement `ClassTrieWriter`
- `DbProvider<Db>` provides the concrete implementation that actually updates the trie and returns the new root
- The delegation chain and error handling are consistent
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the ClassTrieWriter trait definition and its usage
# Expected: Find the trait definition and other implementations
# Search for the trait definition
echo "=== ClassTrieWriter trait definition ==="
rg -p "trait ClassTrieWriter"
# Search for other implementations
echo "=== Other ClassTrieWriter implementations ==="
ast-grep --pattern 'impl $_ ClassTrieWriter for $_'
Length of output: 2410
crates/katana/storage/db/src/models/trie.rs (1)
1-81: Well-structured implementation, sensei!
Ohayo, sensei! The overall implementation is clean and follows Rust best practices. The use of custom `Encode` and `Decode` traits provides clear serialization logic, and the unit tests enhance code reliability.
crates/katana/storage/db/src/trie.rs (1)
1-145: Ohayo, sensei! The implementation looks solid overall.
The code is well-structured, and the integration of the `TrieDb` with the bonsai database traits is implemented correctly.
crates/katana/trie/src/class.rs
Outdated
let updates: Vec<_> = updates
    .iter()
    .map(|(class_hash, compiled_class_hash)| {
        let hash = Poseidon::hash(&CONTRACT_CLASS_LEAF_V0, compiled_class_hash);
        (*class_hash, hash)
    })
    .collect();
🛠️ Refactor suggestion
Consider avoiding intermediate vector allocation.
The current implementation collects all updates into an intermediate vector before processing them. This could be optimized by processing updates directly from the iterator:
- let updates: Vec<_> = updates
- .iter()
- .map(|(class_hash, compiled_class_hash)| {
- let hash = Poseidon::hash(&CONTRACT_CLASS_LEAF_V0, compiled_class_hash);
- (*class_hash, hash)
- })
- .collect();
-
- for (key, value) in updates {
+ for (class_hash, compiled_class_hash) in updates {
+ let value = Poseidon::hash(&CONTRACT_CLASS_LEAF_V0, compiled_class_hash);
+ let bytes = class_hash.to_bytes_be();
Committable suggestion was skipped due to low confidence.
crates/katana/trie/src/class.rs
Outdated
pub fn apply(
    &mut self,
    block_number: BlockNumber,
    updates: &BTreeMap<ClassHash, CompiledClassHash>,
) -> Felt {
    let updates: Vec<_> = updates
        .iter()
        .map(|(class_hash, compiled_class_hash)| {
            let hash = Poseidon::hash(&CONTRACT_CLASS_LEAF_V0, compiled_class_hash);
            (*class_hash, hash)
        })
        .collect();

    for (key, value) in updates {
        let bytes = key.to_bytes_be();
        let bv: BitVec<u8, Msb0> = bytes.as_bits()[5..].to_owned();
        self.bonsai_storage.insert(Self::IDENTIFIER, &bv, &value).unwrap();
    }

    self.bonsai_storage.commit(BasicId::new(block_number)).unwrap();
    self.bonsai_storage.root_hash(Self::IDENTIFIER).unwrap()
}
🛠️ Refactor suggestion
Add input validation for empty updates.
Consider adding a fast path for empty updates to avoid unnecessary storage operations:
pub fn apply(
&mut self,
block_number: BlockNumber,
updates: &BTreeMap<ClassHash, CompiledClassHash>,
) -> Felt {
+ if updates.is_empty() {
+ return self.bonsai_storage.root_hash(Self::IDENTIFIER)?;
+ }
Committable suggestion was skipped due to low confidence.
pub fn compute_state_diff_hash(states: StateUpdates) -> Felt {
    let replaced_classes_len = states.replaced_classes.len();
    let deployed_contracts_len = states.deployed_contracts.len();
    let updated_contracts_len = Felt::from(deployed_contracts_len + replaced_classes_len);
    // flatten the updated contracts into a single list of Felt values
    let updated_contracts = states.deployed_contracts.into_iter().chain(states.replaced_classes);
    let updated_contracts = updated_contracts.flat_map(|(addr, hash)| vec![addr.into(), hash]);

    let declared_classes = states.declared_classes;
    let declared_classes_len = Felt::from(declared_classes.len());
    let declared_classes = declared_classes.into_iter().flat_map(|e| vec![e.0, e.1]);

    let deprecated_declared_classes = states.deprecated_declared_classes;
    let deprecated_declared_classes_len = Felt::from(deprecated_declared_classes.len());

    let storage_updates = states.storage_updates;
    let storage_updates_len = Felt::from(storage_updates.len());
    let storage_updates = storage_updates.into_iter().flat_map(|update| {
        let address = Felt::from(update.0);
        let storage_entries_len = Felt::from(update.1.len());
        let storage_entries = update.1.into_iter().flat_map(|entries| vec![entries.0, entries.1]);
        iter::once(address).chain(iter::once(storage_entries_len)).chain(storage_entries)
    });

    let nonce_updates = states.nonce_updates;
    let nonces_len = Felt::from(nonce_updates.len());
    let nonce_updates = nonce_updates.into_iter().flat_map(|nonce| vec![nonce.0.into(), nonce.1]);

    let magic = short_string!("STARKNET_STATE_DIFF0");
    let elements: Vec<Felt> = iter::once(magic)
        .chain(iter::once(updated_contracts_len))
        .chain(updated_contracts)
        .chain(iter::once(declared_classes_len))
        .chain(declared_classes)
        .chain(iter::once(deprecated_declared_classes_len))
        .chain(deprecated_declared_classes)
        .chain(iter::once(Felt::ONE))
        .chain(iter::once(Felt::ZERO))
        .chain(iter::once(storage_updates_len))
        .chain(storage_updates)
        .chain(iter::once(nonces_len))
        .chain(nonce_updates)
        .collect();

    hash::Poseidon::hash_array(&elements)
}
🛠️ Refactor suggestion
Ohayo sensei! Let's improve the structure of this function.
The function is quite long and has repeated flattening patterns. The previous review's suggestion about extracting the flattening logic is still valid and would improve maintainability.
Consider breaking this down into smaller, focused methods:
impl StateUpdates {
+    fn flatten_contracts(&self) -> (Felt, impl Iterator<Item = Felt> + '_) {
+        let len = self.deployed_contracts.len() + self.replaced_classes.len();
+        let iter = self
+            .deployed_contracts
+            .iter()
+            .chain(self.replaced_classes.iter())
+            .flat_map(|(addr, hash)| vec![Felt::from(*addr), *hash]);
+        (Felt::from(len), iter)
+    }
+
+    fn flatten_declared_classes(&self) -> (Felt, impl Iterator<Item = Felt> + '_) {
+        let iter = self.declared_classes.iter().flat_map(|(k, v)| vec![*k, *v]);
+        (Felt::from(self.declared_classes.len()), iter)
+    }
+
+    fn flatten_storage(&self) -> (Felt, impl Iterator<Item = Felt> + '_) {
+        let iter = self.storage_updates.iter().flat_map(|(address, entries)| {
+            let address = Felt::from(*address);
+            let entries_len = Felt::from(entries.len());
+            let entries = entries.iter().flat_map(|(k, v)| vec![*k, *v]);
+            iter::once(address).chain(iter::once(entries_len)).chain(entries)
+        });
+        (Felt::from(self.storage_updates.len()), iter)
+    }
}
Committable suggestion was skipped due to low confidence.
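The length-prefixed encoding that compute_state_diff_hash builds — each section emitted as a count followed by its flattened items, all chained behind a magic value — can be illustrated with plain integers standing in for Felt values. This is a hypothetical sketch; encode_section and the constants below are illustrative only, not part of the codebase:

```rust
use std::iter;

// Hypothetical helper: emit a section as [count, flattened items...],
// mirroring how compute_state_diff_hash length-prefixes each group.
// Note the count is the number of logical entries (e.g. contracts),
// not the number of flattened words.
fn encode_section(count: u64, flattened: Vec<u64>) -> Vec<u64> {
    iter::once(count).chain(flattened).collect()
}

fn main() {
    // Two updated contracts, flattened as (address, class_hash) pairs.
    let updated_contracts = vec![0xA1, 0x10, 0xA2, 0x20];
    // One declared class, flattened as (class_hash, compiled_class_hash).
    let declared_classes = vec![0xC1, 0xD1];

    // Sections are chained behind a magic prefix, then hashed as one array.
    let mut elements = vec![0xDEAD]; // stand-in for the "STARKNET_STATE_DIFF0" magic
    elements.extend(encode_section(2, updated_contracts));
    elements.extend(encode_section(1, declared_classes));

    assert_eq!(elements, vec![0xDEAD, 2, 0xA1, 0x10, 0xA2, 0x20, 1, 0xC1, 0xD1]);
    println!("{elements:?}");
}
```

In the real function the final vector is fed to Poseidon's hash_array; the point here is only the [magic, len, items..., len, items...] layout.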
        for (address, storage_entries) in &state_updates.storage_updates {
            for (key, value) in storage_entries {
                let bytes = key.to_bytes_be();
                let bv: BitVec<u8, Msb0> = bytes.as_bits()[5..].to_owned();
Add bounds check for bit slicing operation
Ohayo, sensei! The bit slice operation bytes.as_bits()[5..] could panic if the input is less than 5 bits long. Consider adding a length check:
- let bv: BitVec<u8, Msb0> = bytes.as_bits()[5..].to_owned();
+ let bits = bytes.as_bits();
+ if bits.len() < 5 {
+ return Err(crate::ProviderError::Custom("Invalid key length".into()));
+ }
+ let bv: BitVec<u8, Msb0> = bits[5..].to_owned();
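For context on the [5..] slice: a Felt serializes to 32 bytes (256 bits) while the trie keys are 251 bits wide, so the top 5 bits are always dropped. A stdlib-only sketch of that truncation, with a hypothetical bits_be helper standing in for the bitvec crate:

```rust
// Expand a big-endian byte array into individual bits, most significant first.
fn bits_be(bytes: &[u8]) -> Vec<bool> {
    bytes.iter().flat_map(|b| (0..8).rev().map(move |i| (b >> i) & 1 == 1)).collect()
}

fn main() {
    // A 32-byte (256-bit) value, like Felt::to_bytes_be() produces.
    let mut bytes = [0u8; 32];
    bytes[31] = 0b0000_0101; // value 5

    let bits = bits_be(&bytes);
    assert_eq!(bits.len(), 256);

    // Dropping the first 5 bits leaves the 251-bit key used by the trie.
    let key = &bits[5..];
    assert_eq!(key.len(), 251);

    // The low-order bits still encode the original value (0b101).
    assert_eq!(&key[key.len() - 3..], &[true, false, true]);
    println!("key length: {}", key.len());
}
```

Since a felt's byte representation is always 32 bytes, the slice cannot actually underflow in practice — which is why the panic the review warns about would only surface if the key type changed.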
impl<Db: Database> ClassTrieWriter for DbProvider<Db> {
    fn insert_updates(
        &self,
        block_number: BlockNumber,
        updates: &BTreeMap<ClassHash, CompiledClassHash>,
    ) -> crate::ProviderResult<Felt> {
        let db = TrieDb::<tables::ClassTrie, <Db as Database>::TxMut>::new(self.0.tx_mut()?);
        let mut trie = katana_trie::ClassTrie::new(db);
        let new_root = trie.apply(block_number, updates);
        Ok(new_root)
    }
}
Add error handling for trie operations
Ohayo, sensei! The apply
operation on line 37 should handle potential errors that might occur during trie updates.
Apply this diff:
- let new_root = trie.apply(block_number, updates);
+ let new_root = trie.apply(block_number, updates)
+ .map_err(|e| crate::ProviderError::Custom(format!("Failed to apply class updates: {}", e)))?;
/// Computes the hash of the receipt. This is used for computing the receipts commitment.
///
/// See the Starknet [docs] for reference.
///
/// [docs]: https://docs.starknet.io/architecture-and-concepts/network-architecture/block-structure/#receipt_hash
pub fn compute_hash(&self) -> Felt {
    let messages_hash = self.compute_messages_to_l1_hash();
    let revert_reason_hash = if let Some(reason) = self.revert_reason() {
        starknet_keccak(reason.as_bytes())
    } else {
        Felt::ZERO
    };

    hash::Poseidon::hash_array(&[
        self.tx_hash,
        self.receipt.fee().overall_fee.into(),
        messages_hash,
        revert_reason_hash,
        Felt::ZERO, // L2 gas consumption.
        self.receipt.fee().gas_consumed.into(),
        // self.receipt.fee().l1_data_gas.into(),
    ])
}
💡 Codebase verification
Based on the search results, I can see that l1_data_gas
is a well-established field used throughout the codebase, particularly in block headers, receipts, and gas calculations. The field is actively used in multiple components:
- It's part of the block header structure (
BlockHeader
) - It's used in gas price calculations and resource tracking
- It's included in RPC responses
- It's used in the execution environment
Looking at the implementation in receipt.rs
, the commented-out field is actually part of the receipt hash calculation that follows the Starknet documentation (as mentioned in the code comment linking to block-structure/#receipt_hash).
The commented out l1_data_gas field should be included in the hash computation
The field is a core part of the Starknet protocol's gas accounting and block structure. Its exclusion from the receipt hash computation appears to be an oversight since it's actively used throughout the rest of the codebase.
🔗 Analysis chain
Verify the commented out l1_data_gas field.
The implementation has a commented out field in the hash computation. Please verify if this should be included or document why it's excluded.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for l1_data_gas usage in the codebase
rg "l1_data_gas" -A 2 -B 2
Length of output: 23576
// TODO: maybe should change the arguments for insert_block_with_states_and_receipts to
// accept ReceiptWithTxHash instead to avoid this conversion.
let receipts = receipts.into_iter().map(|r| r.receipt).collect::<Vec<_>>();
🛠️ Refactor suggestion
Ohayo, sensei! Consider removing the redundant receipt conversion
The TODO comment and conversion from ReceiptWithTxHash
to regular receipts could be eliminated by updating the insert_block_with_states_and_receipts
method signature to accept ReceiptWithTxHash
directly.
Would you like me to help update the method signature to avoid this conversion?
Codecov Report
Attention: Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## main #2588 +/- ##
==========================================
+ Coverage 55.12% 55.63% +0.51%
==========================================
Files 385 392 +7
Lines 47775 48373 +598
==========================================
+ Hits 26337 26914 +577
- Misses 21438 21459 +21
☔ View full report in Codecov by Sentry.
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (1)
crates/katana/trie/src/lib.rs (1)
45-66: Consider expanding test coverage, sensei!
While the current test is well-documented and validates against known values from Pathfinder, consider adding these test cases:
- Empty input validation
- Single value case
- Large number of values (stress test)
- Edge cases with specific values (e.g., zero, max felt value)
Here's a suggested test expansion:
#[test]
fn test_empty_input() {
    let result = compute_merkle_root::<hash::Pedersen>(&[]);
    assert!(result.is_err());
}

#[test]
fn test_single_value() {
    let result = compute_merkle_root::<hash::Pedersen>(&[Felt::from(1)]).unwrap();
    // Add expected value here
}

#[test]
fn test_large_input() {
    let large_input: Vec<Felt> = (0..1000).map(Felt::from).collect();
    let result = compute_merkle_root::<hash::Pedersen>(&large_input);
    assert!(result.is_ok());
}
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (1)
crates/katana/trie/src/lib.rs
(1 hunks)
🔇 Additional comments (3)
crates/katana/trie/src/lib.rs (3)
9-10: The contract module appears unused, sensei!
This issue was previously identified in an earlier review. The contract module is declared but not used or re-exported.
14-18: Ohayo! Clean trait definition, sensei!
The helper trait is well-documented and follows Rust best practices with a blanket implementation.
20-43: Previous suggestions for Merkle root computation still apply, sensei!
The improvements suggested in the previous review regarding error handling, constant naming, and BitVec optimization are still relevant.
5d82720 to 397e624
Actionable comments posted: 13
🧹 Outside diff range and nitpick comments (7)
crates/katana/trie/src/lib.rs (2)
62-74: Consider expanding test coverage, sensei!
While the current test case is good and properly attributed to Pathfinder, consider adding these scenarios:
- Empty vector case
- Single element case
- Large number of elements
- Edge cases with specific values
Would you like me to help generate additional test cases?
76-87: Enhance contract state hash test coverage, sensei!
The test case is well-documented with Pathfinder reference values. Consider adding these scenarios:
- Zero class hash case
- Maximum felt value cases
- Different nonce values
Would you like me to help generate additional test cases?
crates/katana/storage/provider/src/providers/db/trie.rs (1)
17-22: Add documentation for ContractLeaf struct and its fields
Ohayo, sensei! While the struct is well-designed, it would benefit from documentation explaining its purpose and the significance of each field.
Add documentation like this:
+/// Represents the state of a contract in the trie
 #[derive(Debug, Default)]
 struct ContractLeaf {
+    /// The hash of the contract's class
     pub class_hash: Option<Felt>,
+    /// The root hash of the contract's storage trie
     pub storage_root: Option<Felt>,
+    /// The current nonce of the contract
     pub nonce: Option<Felt>,
 }
crates/katana/primitives/src/block.rs (1)
319-338: Consider adding more edge cases to tests, sensei!
While the current test coverage is good, consider adding these edge cases:
- All fields at max u32 value
- All fields at 0
- Mixed large and small values
crates/katana/storage/db/src/trie/contract.rs (1)
80-82: Consider defining bonsai_identifier as a constant
Ohayo, sensei! The bonsai_identifier function returns a hardcoded value b"1". Consider defining this value as a constant or making it configurable for clarity and potential future flexibility.
crates/katana/storage/db/src/trie/mod.rs (2)
80-81: Implement the get_by_prefix method
Ohayo, sensei! The get_by_prefix method is marked with todo!(). Implementing this functionality will enhance the database's capabilities in retrieving entries by prefix.
Would you like assistance in implementing the get_by_prefix method?
127-135: Complete the persistence methods
Ohayo, sensei! The methods snapshot, merge, and transaction in the BonsaiPersistentDatabase implementation are either empty or contain todo!(). Completing these methods is crucial for proper transaction handling and data persistence.
Would you like assistance in completing these methods?
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (3)
- Cargo.lock is excluded by !**/*.lock
- spawn-and-move-db.tar.gz is excluded by !**/*.gz
- types-test-db.tar.gz is excluded by !**/*.gz
📒 Files selected for processing (28)
- Cargo.toml (3 hunks)
- crates/katana/core/Cargo.toml (2 hunks)
- crates/katana/core/src/backend/mod.rs (3 hunks)
- crates/katana/core/src/backend/storage.rs (3 hunks)
- crates/katana/primitives/src/block.rs (5 hunks)
- crates/katana/primitives/src/chain_spec.rs (2 hunks)
- crates/katana/primitives/src/receipt.rs (2 hunks)
- crates/katana/primitives/src/state.rs (3 hunks)
- crates/katana/storage/db/Cargo.toml (2 hunks)
- crates/katana/storage/db/src/codecs/postcard.rs (2 hunks)
- crates/katana/storage/db/src/lib.rs (1 hunks)
- crates/katana/storage/db/src/models/mod.rs (1 hunks)
- crates/katana/storage/db/src/models/trie.rs (1 hunks)
- crates/katana/storage/db/src/tables.rs (7 hunks)
- crates/katana/storage/db/src/trie/class.rs (1 hunks)
- crates/katana/storage/db/src/trie/contract.rs (1 hunks)
- crates/katana/storage/db/src/trie/mod.rs (1 hunks)
- crates/katana/storage/provider/Cargo.toml (1 hunks)
- crates/katana/storage/provider/src/lib.rs (3 hunks)
- crates/katana/storage/provider/src/providers/db/mod.rs (1 hunks)
- crates/katana/storage/provider/src/providers/db/trie.rs (1 hunks)
- crates/katana/storage/provider/src/providers/fork/mod.rs (4 hunks)
- crates/katana/storage/provider/src/traits/mod.rs (1 hunks)
- crates/katana/storage/provider/src/traits/trie.rs (1 hunks)
- crates/katana/trie/Cargo.toml (1 hunks)
- crates/katana/trie/src/lib.rs (1 hunks)
- crates/saya/provider/src/rpc/mod.rs (1 hunks)
- examples/spawn-and-move/manifest_dev.json (5 hunks)
🚧 Files skipped from review as they are similar to previous changes (19)
- Cargo.toml
- crates/katana/core/Cargo.toml
- crates/katana/core/src/backend/storage.rs
- crates/katana/primitives/src/chain_spec.rs
- crates/katana/primitives/src/receipt.rs
- crates/katana/primitives/src/state.rs
- crates/katana/storage/db/Cargo.toml
- crates/katana/storage/db/src/codecs/postcard.rs
- crates/katana/storage/db/src/lib.rs
- crates/katana/storage/db/src/models/mod.rs
- crates/katana/storage/db/src/models/trie.rs
- crates/katana/storage/db/src/tables.rs
- crates/katana/storage/provider/Cargo.toml
- crates/katana/storage/provider/src/providers/db/mod.rs
- crates/katana/storage/provider/src/providers/fork/mod.rs
- crates/katana/storage/provider/src/traits/mod.rs
- crates/katana/storage/provider/src/traits/trie.rs
- crates/katana/trie/Cargo.toml
- crates/saya/provider/src/rpc/mod.rs
🔇 Additional comments (26)
crates/katana/trie/src/lib.rs (2)
1-14
: Ohayo! Clean and well-organized imports and trait definition, sensei!
The BonsaiTrieDb
trait is well-documented and provides a helpful shorthand for commonly combined traits. The implementation is elegant and follows Rust best practices.
16-39
: Previous review comment is still applicable, sensei!
The earlier review comment about improving error handling, constant naming, and memory efficiency remains valid.
crates/katana/core/src/backend/mod.rs (4)
147-171
: LGTM! Well-structured implementation
The UncommittedBlock
struct is well-designed with proper lifetime annotations and trait bounds. The implementation effectively manages references to avoid unnecessary cloning.
186-208
: LGTM! Proper header construction
The header construction and hash computation are implemented correctly with all required fields properly set.
68-73
: 🛠️ Refactor suggestion
Ohayo, sensei! Optimize state updates handling
The state_updates
is being cloned unnecessarily in the commit_block
call. Consider passing it by reference to avoid the performance overhead of cloning.
Apply this diff:
let block = self.commit_block(
block_env.clone(),
- execution_output.states.state_updates.clone(),
+ &execution_output.states.state_updates,
txs,
&receipts,
)?;
Likely invalid or redundant comment.
224-244
: 🛠️ Refactor suggestion
Ohayo, sensei! Optimize event commitment computation
The event commitment computation could be more efficient by:
- Pre-allocating the vector with the exact size
- Using iterator chaining for better readability and performance
Apply this diff:
fn compute_event_commitment(&self) -> Felt {
// h(emitter_address, tx_hash, h(keys), h(data))
fn event_hash(tx: TxHash, event: &Event) -> Felt {
let keys_hash = hash::Poseidon::hash_array(&event.keys);
let data_hash = hash::Poseidon::hash_array(&event.data);
hash::Poseidon::hash_array(&[tx, event.from_address.into(), keys_hash, data_hash])
}
let events = self.receipts.iter().flat_map(|r| r.events().iter().map(|e| (r.tx_hash, e)));
-
- let mut hashes = Vec::new();
- for (tx, event) in events {
- let event_hash = event_hash(tx, event);
- hashes.push(event_hash);
- }
+ let total_events: usize = self.receipts.iter().map(|r| r.events().len()).sum();
+ let mut hashes: Vec<_> = Vec::with_capacity(total_events);
+ hashes.extend(events.map(|(tx, event)| event_hash(tx, event)));
compute_merkle_root::<hash::Poseidon>(&hashes).unwrap()
}
Likely invalid or redundant comment.
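The per-event hashing pipeline discussed above — flat-mapping receipts into (tx_hash, event) pairs and hashing each pair — can be sketched with stand-in types. DefaultHasher replaces Poseidon and the final Merkle-root fold is omitted; everything here is illustrative, not the katana API:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Simplified receipt: each event reduced to its data words.
struct Receipt {
    tx_hash: u64,
    events: Vec<Vec<u64>>,
}

// Stand-in for the Poseidon-based event hash in the suggestion.
fn event_hash(tx_hash: u64, event: &[u64]) -> u64 {
    let mut h = DefaultHasher::new();
    (tx_hash, event).hash(&mut h);
    h.finish()
}

// One hash per event across all receipts, preserving receipt order.
fn compute_event_hashes(receipts: &[Receipt]) -> Vec<u64> {
    receipts
        .iter()
        .flat_map(|r| r.events.iter().map(move |e| (r.tx_hash, e)))
        .map(|(tx, e)| event_hash(tx, e.as_slice()))
        .collect()
}

fn main() {
    let receipts = vec![
        Receipt { tx_hash: 1, events: vec![vec![10], vec![20]] },
        Receipt { tx_hash: 2, events: vec![vec![30]] },
    ];
    let hashes = compute_event_hashes(&receipts);
    assert_eq!(hashes.len(), 3);
    println!("{} event hashes", hashes.len());
}
```

In the real code the resulting vector is folded into a commitment with compute_merkle_root; the sketch only shows the flat-map-then-hash shape.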
crates/katana/primitives/src/block.rs (5)
13-16
: LGTM! Type definitions are well-organized.
The movement of BlockNumber
and BlockHash
type definitions improves code organization while maintaining functionality.
102-159
: LGTM! Block hash computation is well-implemented.
The implementation correctly follows the specified Poseidon hash algorithm and includes all required fields in the proper order. The documentation clearly references the source implementation.
161-199
: Previous review comment about compile-time checks is still applicable.
The bit concatenation implementation is correct, but the suggestion to add compile-time checks for field size assumptions remains valid.
208-208
: LGTM! Default initialization is appropriate.
The initialization of state_diff_length
to 0 is consistent with other numeric field defaults.
93-93
: Verify the impact of adding state_diff_length field.
Ohayo! The addition of state_diff_length
field looks good, but let's verify its usage across the codebase.
✅ Verification successful
Field state_diff_length
is properly initialized and used across the codebase
Ohayo! After checking the usage, I can confirm that:
- The field is properly initialized with default value 0 in chain specs and RPC
- It's correctly populated from state updates length in the backend
- It's properly included in block hash computation alongside transaction_count and event_count
- All necessary components handle this field appropriately
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for state_diff_length usage to ensure proper initialization and handling
rg "state_diff_length" --type rust
Length of output: 1208
crates/katana/storage/provider/src/lib.rs (3)
22-22
: LGTM! Clean import addition.
The new import for trie-related traits is well-organized and properly scoped.
399-410
: Verify state update consistency across blocks.
The ContractTrieWriter implementation correctly delegates to the provider. The returned Felt likely represents a commitment or root hash of the updated state.
#!/bin/bash
# Description: Verify state update handling consistency
# Look for state update processing patterns across the codebase
# Check state update processing patterns
rg -B 2 -A 2 "ContractTrieWriter.*insert_updates.*state_updates"
# Look for related state root computations
ast-grep --pattern 'fn $_ ($$$) -> ProviderResult<Felt>'
386-397
: Ohayo sensei! Verify block number ordering for class updates.
The implementation looks clean and follows the established patterns. However, let's ensure the block number ordering is maintained for consistent state transitions.
✅ Verification successful
Ohayo sensei! The block number ordering appears to be properly handled.
The implementation delegates the block number handling to the underlying database implementation through the trait bound Db: ClassTrieWriter
. Since this is a trait implementation that simply forwards the call, the responsibility for maintaining block number ordering lies with the concrete database implementation. The trait contract itself ensures consistency by accepting the block number as a parameter.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check if class updates are processed in sequential block order
# Look for any non-sequential block number usage in class trie updates
rg -B 2 -A 2 "ClassTrieWriter.*insert_updates.*block_number"
Length of output: 959
Script:
#!/bin/bash
# Let's check the implementation of the storage provider to see how block numbers are handled
ast-grep --pattern 'impl $_ for $_ {
$$$
fn insert_updates(&self, block_number: BlockNumber, $$$) -> ProviderResult<Felt> {
$$$
}
$$$
}'
# Also check for any block number validation or ordering logic
rg -B 3 -A 3 "block_number.*validate|validate.*block_number"
Length of output: 2126
Script:
#!/bin/bash
# Let's check the underlying storage implementation for block number handling
rg -B 3 -A 3 "impl.*Storage.*block" --type rust
# Also check for any block-related methods in the provider
ast-grep --pattern 'fn $name($$$block_number: BlockNumber$$$) -> $ret {
$$$
}'
Length of output: 132
examples/spawn-and-move/manifest_dev.json (3)
6-1120
: Ohayo! The world configuration looks well-structured, sensei!
The world interface implementation provides a comprehensive set of features including:
- Resource management (models, events, contracts)
- Permission system (owners and writers)
- Event tracking
- Entity management
- Metadata handling
The layout system effectively handles different data types and the event definitions properly track all state changes.
Line range hint 1124-1955
: Verify init_calldata consistency across contracts, sensei!
While the contract structure is well-organized with proper namespace prefixing, there's an inconsistency in the init_calldata:
- Three contracts have empty init_calldata: []
- The 'others' contract has init_calldata: ["0xff"]
Line range hint 1956-2000
: Ohayo! Please verify the empty model and event definitions, sensei!
All models and events are defined with empty members. This could be intentional if the definitions are externalized, but please verify:
- Message, MockToken, Moves, PlayerConfig, Position, ServerProfile models
- ContractInitialized and Moved events
crates/katana/storage/db/src/trie/class.rs (1)
40-40
: Ohayo, sensei! Verify the correctness of bit slicing in key generation
The slicing operation [5..]
on hash.to_bytes_be().as_bits()
in the insert
method may need verification to ensure it captures the intended portion of the hash. Please confirm that this accurately represents the desired bits for the key.
crates/katana/storage/db/src/trie/contract.rs (2)
35-35
: Verify correct slicing of bits in key computation
Ohayo, sensei! The code slices off the first 5 bits when converting to BitVec
. Please verify that this is intentional and aligns with the expected key format for the trie.
Also applies to: 68-68
22-26
: Review BonsaiStorageConfig
settings
Ohayo, sensei! The BonsaiStorageConfig
is set with max_saved_trie_logs
and max_saved_snapshots
as Some(0)
, and snapshot_interval
as u64::MAX
. Please verify if these settings are appropriate for your use case, or if adjustments are needed to handle trie logs and snapshots effectively.
Also applies to: 55-59
crates/katana/storage/db/src/trie/mod.rs (6)
1-29
: Well-structured definition of TrieDb
Ohayo, sensei! The TrieDb
struct and its initial implementation are well-defined, making good use of generics and the PhantomData
marker.
71-74
: Confirm the correctness of the get
method
Ohayo, sensei! The get
method correctly retrieves values using the converted database key.
84-95
: Ensure proper handling in the insert
method
Ohayo, sensei! The insert
method effectively inserts new entries and returns the old value if it exists.
97-106
: Validate the remove
method functionality
Ohayo, sensei! The remove
method correctly deletes the specified key and returns the old value.
108-112
: Efficient check in the contains
method
Ohayo, sensei! The contains
method appropriately checks for the existence of a key.
138-150
: Accurate key conversion in to_db_key
function
Ohayo, sensei! The to_db_key
function correctly matches the DatabaseKey
variants and constructs the corresponding TrieDatabaseKey
.
// H(H(H(class_hash, storage_root), nonce), 0), where H is the pedersen hash
pub fn compute_contract_state_hash(
    class_hash: &ClassHash,
    storage_root: &Felt,
    nonce: &Felt,
) -> Felt {
    const CONTRACT_STATE_HASH_VERSION: Felt = Felt::ZERO;
    let hash = Pedersen::hash(class_hash, storage_root);
    let hash = Pedersen::hash(&hash, nonce);
    Pedersen::hash(&hash, &CONTRACT_STATE_HASH_VERSION)
}
🛠️ Refactor suggestion
Enhance robustness of contract state hash computation, sensei!
While the implementation correctly follows the formula H(H(H(class_hash, storage_root), nonce), 0), consider these improvements:
+const CONTRACT_STATE_HASH_VERSION_ZERO: Felt = Felt::ZERO;
pub fn compute_contract_state_hash(
class_hash: &ClassHash,
storage_root: &Felt,
nonce: &Felt,
) -> Felt {
- const CONTRACT_STATE_HASH_VERSION: Felt = Felt::ZERO;
+ // Validate inputs
+ debug_assert!(!class_hash.is_zero(), "class_hash cannot be zero");
+
let hash = Pedersen::hash(class_hash, storage_root);
let hash = Pedersen::hash(&hash, nonce);
- Pedersen::hash(&hash, &CONTRACT_STATE_HASH_VERSION)
+ Pedersen::hash(&hash, &CONTRACT_STATE_HASH_VERSION_ZERO)
}
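The chaining shape H(H(H(class_hash, storage_root), nonce), 0) is easy to see in isolation. The sketch below uses std's DefaultHasher as a stand-in two-to-one hash — it is not Pedersen and produces no Starknet-compatible values; only the chaining structure mirrors the function under review:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in two-to-one hash (NOT Pedersen): just for showing the chaining shape.
fn h(a: u64, b: u64) -> u64 {
    let mut hasher = DefaultHasher::new();
    (a, b).hash(&mut hasher);
    hasher.finish()
}

// Mirrors compute_contract_state_hash: H(H(H(class_hash, storage_root), nonce), 0).
fn contract_state_hash(class_hash: u64, storage_root: u64, nonce: u64) -> u64 {
    const VERSION: u64 = 0;
    let hash = h(class_hash, storage_root);
    let hash = h(hash, nonce);
    h(hash, VERSION)
}

fn main() {
    let a = contract_state_hash(0x11, 0x22, 1);
    // Deterministic for identical inputs; any input change alters the result.
    assert_eq!(a, contract_state_hash(0x11, 0x22, 1));
    assert_ne!(a, contract_state_hash(0x11, 0x22, 2));
    println!("{a:#x}");
}
```

The trailing version field (zero today) gives the leaf format room to evolve without colliding with current hashes, which is why it is hashed in last.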
impl<Db: Database> ContractTrieWriter for DbProvider<Db> {
    fn insert_updates(
        &self,
        block_number: BlockNumber,
        state_updates: &StateUpdates,
    ) -> crate::ProviderResult<Felt> {
        let mut contract_leafs: HashMap<ContractAddress, ContractLeaf> = HashMap::new();

        let leaf_hashes: Vec<_> = {
            let mut storage_trie_db = StorageTrie::new(self.0.tx_mut()?);

            // First we insert the contract storage changes
            for (address, storage_entries) in &state_updates.storage_updates {
                for (key, value) in storage_entries {
                    storage_trie_db.insert(*address, *key, *value);
                }
                // insert the contract address in the contract_leafs to put the storage root later
                contract_leafs.insert(*address, Default::default());
            }

            // Then we commit them
            storage_trie_db.commit(block_number);

            for (address, nonce) in &state_updates.nonce_updates {
                contract_leafs.entry(*address).or_default().nonce = Some(*nonce);
            }

            for (address, class_hash) in &state_updates.deployed_contracts {
                contract_leafs.entry(*address).or_default().class_hash = Some(*class_hash);
            }

            for (address, class_hash) in &state_updates.replaced_classes {
                contract_leafs.entry(*address).or_default().class_hash = Some(*class_hash);
            }

            contract_leafs
                .into_iter()
                .map(|(address, mut leaf)| {
                    let storage_root = storage_trie_db.root(&address);
                    leaf.storage_root = Some(storage_root);

                    let latest_state = self.latest().unwrap();
                    let leaf_hash = contract_state_leaf_hash(latest_state, &address, &leaf);

                    (address, leaf_hash)
                })
                .collect::<Vec<_>>()
        };

        let mut contract_trie_db = ContractTrie::new(self.0.tx_mut()?);

        for (k, v) in leaf_hashes {
            contract_trie_db.insert(k, v);
        }

        contract_trie_db.commit(block_number);
        Ok(contract_trie_db.root())
    }
}
🛠️ Refactor suggestion
Optimize storage operations and improve code structure
Ohayo, sensei! The implementation could benefit from optimization and better organization.
Consider these improvements:
- Batch storage operations:
+ let mut storage_updates = Vec::with_capacity(storage_entries.len());
for (key, value) in storage_entries {
- storage_trie_db.insert(*address, *key, *value);
+ storage_updates.push((*key, *value));
}
+ storage_trie_db.batch_insert(*address, &storage_updates)?;
- Extract update logic into separate methods:
impl<Db: Database> DbProvider<Db> {
    fn process_storage_updates(
        &self,
        storage_trie_db: &mut StorageTrie<impl Database>,
        address: &ContractAddress,
        storage_entries: &BTreeMap<Felt, Felt>,
    ) -> crate::ProviderResult<()> {
        // Storage update logic
    }

    fn update_contract_state(
        &self,
        contract_leafs: &mut HashMap<ContractAddress, ContractLeaf>,
        state_updates: &StateUpdates,
    ) {
        // State update logic
    }
}
impl<Db: Database> ClassTrieWriter for DbProvider<Db> {
    fn insert_updates(
        &self,
        block_number: BlockNumber,
        updates: &BTreeMap<ClassHash, CompiledClassHash>,
    ) -> crate::ProviderResult<Felt> {
        let mut trie = trie::ClassTrie::new(self.0.tx_mut()?);

        for (class_hash, compiled_hash) in updates {
            trie.insert(*class_hash, *compiled_hash);
        }

        trie.commit(block_number);
        Ok(trie.root())
    }
}
Improve error handling in ClassTrieWriter implementation
Ohayo, sensei! The implementation should handle potential errors from trie operations.
Apply this diff to improve error handling:
 fn insert_updates(
     &self,
     block_number: BlockNumber,
     updates: &BTreeMap<ClassHash, CompiledClassHash>,
 ) -> crate::ProviderResult<Felt> {
-    let mut trie = trie::ClassTrie::new(self.0.tx_mut()?);
+    let mut trie = trie::ClassTrie::new(self.0.tx_mut()
+        .map_err(|e| crate::ProviderError::Custom(format!("Failed to get transaction: {}", e)))?);
     for (class_hash, compiled_hash) in updates {
-        trie.insert(*class_hash, *compiled_hash);
+        trie.insert(*class_hash, *compiled_hash)
+            .map_err(|e| crate::ProviderError::Custom(format!("Failed to insert class: {}", e)))?;
     }
-    trie.commit(block_number);
-    Ok(trie.root())
+    trie.commit(block_number)
+        .map_err(|e| crate::ProviderError::Custom(format!("Failed to commit: {}", e)))?;
+    Ok(trie.root()?)
 }
Committable suggestion skipped: line range outside the PR's diff.
// computes the contract state leaf hash
fn contract_state_leaf_hash(
    provider: impl StateProvider,
    address: &ContractAddress,
    contract_leaf: &ContractLeaf,
) -> Felt {
    let nonce =
        contract_leaf.nonce.unwrap_or(provider.nonce(*address).unwrap().unwrap_or_default());

    let class_hash = contract_leaf
        .class_hash
        .unwrap_or(provider.class_hash_of_contract(*address).unwrap().unwrap_or_default());

    let storage_root = contract_leaf.storage_root.expect("root need to set");

    compute_contract_state_hash(&class_hash, &storage_root, &nonce)
}
Improve error handling in contract_state_leaf_hash function
Ohayo, sensei! The function should handle errors properly instead of using unwrap().
Apply this diff:
 fn contract_state_leaf_hash(
     provider: impl StateProvider,
     address: &ContractAddress,
     contract_leaf: &ContractLeaf,
-) -> Felt {
+) -> crate::ProviderResult<Felt> {
     let nonce = contract_leaf
         .nonce
-        .unwrap_or(provider.nonce(*address).unwrap().unwrap_or_default());
+        .unwrap_or(provider.nonce(*address)?.unwrap_or_default());
     let class_hash = contract_leaf
         .class_hash
-        .unwrap_or(provider.class_hash_of_contract(*address).unwrap().unwrap_or_default());
+        .unwrap_or(provider.class_hash_of_contract(*address)?.unwrap_or_default());
     let storage_root = contract_leaf
         .storage_root
-        .expect("root need to set");
+        .ok_or_else(|| crate::ProviderError::Custom("Storage root must be set".into()))?;
-    compute_contract_state_hash(&class_hash, &storage_root, &nonce)
+    Ok(compute_contract_state_hash(&class_hash, &storage_root, &nonce))
 }
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
fn contract_state_leaf_hash(
    provider: impl StateProvider,
    address: &ContractAddress,
    contract_leaf: &ContractLeaf,
) -> crate::ProviderResult<Felt> {
    let nonce = contract_leaf
        .nonce
        .unwrap_or(provider.nonce(*address)?.unwrap_or_default());
    let class_hash = contract_leaf
        .class_hash
        .unwrap_or(provider.class_hash_of_contract(*address)?.unwrap_or_default());
    let storage_root = contract_leaf
        .storage_root
        .ok_or_else(|| crate::ProviderError::Custom("Storage root must be set".into()))?;
    Ok(compute_contract_state_hash(&class_hash, &storage_root, &nonce))
}
        };

        let db = TrieDb::<tables::ClassTrie, Tx>::new(tx);
        let inner = BonsaiStorage::new(db, config).unwrap();
Ohayo, sensei! Consider handling potential errors instead of using unwrap()
Calling unwrap() on BonsaiStorage::new(db, config) can cause a panic if initialization fails. To enhance robustness, consider handling the Result with the ? operator or matching the error to provide meaningful feedback.
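The propagation pattern the comment suggests can be sketched with stand-in types — the error type, its message, and the `fail_init` flag below are illustrative, not katana's or bonsai-trie's actual API:

```rust
// Minimal sketch of propagating a construction error instead of panicking.
// `StorageError`, `BonsaiStorage::new`, and `fail_init` are hypothetical
// stand-ins used only to demonstrate the pattern.
#[derive(Debug)]
struct StorageError(String);

struct BonsaiStorage;

impl BonsaiStorage {
    fn new(fail_init: bool) -> Result<Self, StorageError> {
        if fail_init {
            Err(StorageError("bonsai init failed".to_string()))
        } else {
            Ok(BonsaiStorage)
        }
    }
}

struct ClassTrie {
    #[allow(dead_code)]
    inner: BonsaiStorage,
}

impl ClassTrie {
    // Returning Result lets the caller decide how to react, instead of
    // aborting the node on a storage initialization failure.
    fn new(fail_init: bool) -> Result<Self, StorageError> {
        let inner = BonsaiStorage::new(fail_init)?; // was: .unwrap()
        Ok(Self { inner })
    }
}

fn main() {
    assert!(ClassTrie::new(false).is_ok());
    assert!(ClassTrie::new(true).is_err());
    println!("construction errors propagate instead of panicking");
}
```

The same shape applies to each of the constructors flagged in this review: replace the `.unwrap()` with `?` and let the enclosing function return a `Result`.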
        };

        let db = TrieDb::<tables::ContractStorageTrie, Tx>::new(tx);
        let inner = BonsaiStorage::new(db, config).unwrap();
Handle potential errors instead of using unwrap()
Ohayo, sensei! Using unwrap() may cause the application to panic if an error occurs. Consider handling errors appropriately to ensure robustness.
Also applies to: 36-36, 40-40, 44-44
        };

        let db = TrieDb::<tables::ContractTrie, Tx>::new(tx);
        let inner = BonsaiStorage::new(db, config).unwrap();
Handle potential errors instead of using unwrap()
Ohayo, sensei! Using unwrap() may cause the application to panic if an error occurs. Consider handling errors appropriately to ensure robustness.
Also applies to: 69-69, 73-73, 77-77
    pub fn new(tx: Tx) -> Self {
        let config = BonsaiStorageConfig {
            max_saved_trie_logs: Some(0),
            max_saved_snapshots: Some(0),
            snapshot_interval: u64::MAX,
        };

        let db = TrieDb::<tables::ContractStorageTrie, Tx>::new(tx);
        let inner = BonsaiStorage::new(db, config).unwrap();

        Self { inner }
    }
🛠️ Refactor suggestion
Refactor duplicate code in new methods
Ohayo, sensei! The new methods in StorageTrie and ContractTrie share similar logic. Consider refactoring the common code into a shared function or struct to adhere to the DRY (Don't Repeat Yourself) principle.
Also applies to: 54-65
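The shared-constructor idea can be sketched with stand-in types. Everything below is illustrative — the real code would be generic over katana's table and transaction types rather than the marker structs used here:

```rust
use std::marker::PhantomData;

// Stand-in for bonsai-trie's configuration struct.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct BonsaiStorageConfig {
    max_saved_trie_logs: Option<usize>,
    max_saved_snapshots: Option<usize>,
    snapshot_interval: u64,
}

// One generic wrapper replaces the near-identical structs; the table type is
// carried as a zero-sized marker, so the class, contract, and storage tries
// become aliases of the same type with a single constructor.
struct Trie<Tb> {
    config: BonsaiStorageConfig,
    _table: PhantomData<Tb>,
}

impl<Tb> Trie<Tb> {
    // The shared constructor: the config now lives in exactly one place.
    fn new() -> Self {
        let config = BonsaiStorageConfig {
            max_saved_trie_logs: Some(0),
            max_saved_snapshots: Some(0),
            snapshot_interval: u64::MAX,
        };
        Self { config, _table: PhantomData }
    }
}

// Hypothetical marker types standing in for katana's table definitions.
struct ClassTable;
struct ContractStorageTable;

type ClassTrie = Trie<ClassTable>;
type StorageTrie = Trie<ContractStorageTable>;

fn main() {
    let class_trie = ClassTrie::new();
    let storage_trie = StorageTrie::new();
    // Both tries share the identical configuration without duplicated code.
    assert_eq!(class_trie.config, storage_trie.config);
    println!("snapshot_interval = {}", class_trie.config.snapshot_interval);
}
```

In the actual crate the constructor would also take the transaction and build the `TrieDb` for the right table, but the duplication the comment points at (the repeated config literal and `BonsaiStorage::new` call) disappears the same way.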
    fn remove_by_prefix(&mut self, prefix: &DatabaseKey<'_>) -> Result<(), Self::DatabaseError> {
        let mut cursor = self.tx.cursor_mut::<Tb>()?;
        let walker = cursor.walk(None)?;

        let mut keys_to_remove = Vec::new();
        // iterate over all entries in the table
        for entry in walker {
            let (key, _) = entry?;
            if key.key.starts_with(prefix.as_slice()) {
                keys_to_remove.push(key);
            }
        }

        for key in keys_to_remove {
            let _ = self.tx.delete::<Tb>(key, None)?;
        }

        Ok(())
    }
🛠️ Refactor suggestion
Optimize remove_by_prefix for better performance
Ohayo, sensei! The remove_by_prefix method iterates over all entries in the table to find keys matching the prefix, which might be inefficient for large datasets.
Consider optimizing this method by leveraging more efficient database operations. If the underlying database supports prefix searches or deletions, using those features can improve performance.
Would you like assistance in refactoring this method?
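The seek-and-early-exit idea can be sketched with a sorted map standing in for the database table. This is only an illustration of the access pattern — the real implementation would position an MDBX cursor at the prefix rather than use a `BTreeMap`:

```rust
use std::collections::BTreeMap;

// Sketch: because keys are stored in sorted order, we can seek to the first
// key >= prefix and stop at the first key that no longer matches, instead of
// walking the entire table as the current implementation does.
fn remove_by_prefix(table: &mut BTreeMap<Vec<u8>, u64>, prefix: &[u8]) {
    let keys: Vec<Vec<u8>> = table
        .range(prefix.to_vec()..)                   // seek to the prefix
        .take_while(|(k, _)| k.starts_with(prefix)) // early exit once past it
        .map(|(k, _)| k.clone())
        .collect();
    for k in keys {
        table.remove(&k);
    }
}

fn main() {
    let mut table = BTreeMap::new();
    table.insert(b"aa:1".to_vec(), 1u64);
    table.insert(b"aa:2".to_vec(), 2);
    table.insert(b"ab:1".to_vec(), 3);
    remove_by_prefix(&mut table, b"aa:");
    // Only the non-matching entry survives.
    assert_eq!(table.len(), 1);
    assert!(table.contains_key(b"ab:1".as_slice()));
    println!("remaining entries: {}", table.len());
}
```

With a cursor-based store the same shape applies: `walk` from the prefix instead of from `None`, and break out of the loop at the first key that does not start with the prefix.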
    type Batch = ();
    type DatabaseError = Error;

    fn create_batch(&self) -> Self::Batch {}
Explicitly return () in create_batch
Ohayo, sensei! The create_batch method currently has an empty body. Since Self::Batch is defined as (), it's clearer to explicitly return ().
Consider applying this diff:
 fn create_batch(&self) -> Self::Batch {
+    ()
 }
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
fn create_batch(&self) -> Self::Batch {
    ()
}
resolves #1027
Summary by CodeRabbit
New Features
- Introduced the new katana-trie crate.
- Added a dependency on bitvec and updates to starknet-types-core.
Bug Fixes
Documentation
- Updated manifest_dev.json for better contract organization and functionality.
Chores