refactor(katana): rpc class types improvements #2712
Conversation
Walkthrough
Ohayo, sensei! The pull request introduces extensive modifications across various files.
Changes
Possibly related PRs
Suggested reviewers
Actionable comments posted: 11
🧹 Outside diff range and nitpick comments (23)
crates/katana/rpc/rpc-types/Cargo.toml (1)

30-30: Consider using workspace version for serde_json_pythonic, sensei!

Using a fixed version 0.1.2 for serde_json_pythonic while other dependencies use workspace versions could lead to version conflicts. Consider moving this to workspace-level version management.

-serde_json_pythonic = "0.1.2"
+serde_json_pythonic.workspace = true

crates/katana/primitives/Cargo.toml (2)
30-30: Ohayo sensei! Consider workspace-level version management.

The new dependency serde_json_pythonic is directly pinned while other dependencies use workspace-level version management. Consider moving this to the workspace manifest for consistent dependency management across the project.

-serde_json_pythonic = "0.1.2"
+serde_json_pythonic.workspace = true
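For context, workspace-level version management declares the version once at the workspace root, and member crates inherit it. A hedged sketch of what that root entry might look like (the workspace layout shown is an assumption, not the repo's actual manifest):

```toml
# Workspace-root Cargo.toml (illustrative sketch)
[workspace.dependencies]
serde_json_pythonic = "0.1.2"

# A member crate's Cargo.toml then inherits the pinned version with:
# serde_json_pythonic.workspace = true
```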
30-30: Consider making the dependency optional.

If the pythonic string handling is only needed for specific features, consider making this dependency optional and gating it behind a feature flag.

-serde_json_pythonic = "0.1.2"
+serde_json_pythonic = { version = "0.1.2", optional = true }

 [features]
 default = [ "serde" ]
+pythonic = ["dep:serde_json_pythonic"]

crates/katana/primitives/src/utils/class.rs (2)
28-31: Update function documentation to reflect new return type, sensei!

The function comment still references SierraClass but the function now returns ContractClass. Let's update the documentation to match the implementation.

-/// Parse a [`str`] into a [`SierraClass`].
+/// Parse a [`str`] into a [`ContractClass`].
 pub fn parse_sierra_class(class: &str) -> Result<ContractClass, serde_json::Error> {
     let sierra = serde_json::from_str::<SierraContractClass>(class)?;
     Ok(ContractClass::Class(sierra))
Line range hint 7-13: Consider addressing the network limits TODO, sensei!

The TODO comment raises important points about:
- Making the network limit configurable
- Sharing the limit with dojo-world without creating circular dependencies

Would you like assistance in designing a solution for these architectural concerns?
crates/katana/storage/db/src/codecs/mod.rs (1)
Consider using postcard serialization for ContractClass, sensei!
I noticed that postcard binary serialization is already being used in the codebase (
crates/katana/storage/db/src/codecs/postcard.rs
). For database storage, it would be more efficient to use postcard instead of JSON serialization. This would:
- Reduce storage size
- Improve serialization/deserialization performance
- Maintain consistency with other parts of the codebase
Example implementation:
impl Compress for ContractClass {
    type Compressed = Vec<u8>;

    fn compress(self) -> Self::Compressed {
        postcard::to_allocvec(&self).unwrap()
    }
}

impl Decompress for ContractClass {
    fn decompress<B: AsRef<[u8]>>(bytes: B) -> Result<Self, CodecError> {
        postcard::from_bytes(bytes.as_ref()).map_err(|e| CodecError::Decode(e.to_string()))
    }
}

🔗 Analysis chain
Line range hint 75-86: Consider using a more efficient serialization format, sensei!

While JSON serialization is human-readable and flexible, for database storage you might want to consider more efficient binary formats like:
- Bincode (Rust-specific, very efficient)
- MessagePack (cross-language, more compact than JSON)
- Protobuf (cross-language, schema-based)

Also, noting that the PR objectives mention implementing from_string_pythonic - this might affect how we handle serialization in the future.

Let's check if any binary serialization formats are already in use:
Let's check if there are any binary serialization formats already in use for database storage specifically:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for usage of common serialization formats
echo "Checking for serialization formats in use:"
rg -i "serde|bincode|messagepack|protobuf" --type rust

Length of output: 102490
Script:
#!/bin/bash
# Search for database-related serialization in storage
rg -B2 -A2 "impl.*Compress|impl.*Decompress" --type rust

Length of output: 7034
crates/katana/primitives/src/genesis/mod.rs (1)
Line range hint 1-150: Ohayo! Don't forget about the TODO from PR description, sensei!

The class type improvements look good, but the PR objectives mention implementing from_string_pythonic. This functionality isn't addressed in the current changes.

Would you like help implementing the from_string_pythonic function? I can assist in creating a new issue to track this TODO.

crates/katana/primitives/src/genesis/constant.rs (2)
105-105: Ohayo! Consider following Rust naming conventions for contract files

The contract filename controller_CartridgeAccount.contract_class.json uses mixed case. Consider using snake_case for better alignment with Rust conventions, e.g., controller_cartridge_account.contract_class.json.

98-105: Reminder about pending TODO

Ohayo sensei! I notice the PR description mentions implementing from_string_pythonic, but it's not addressed in this file. Should this functionality be added here since we're working with contract classes?

Would you like help implementing the from_string_pythonic function?

crates/katana/rpc/rpc-api/src/starknet.rs (1)
105-109: Reminder: Implement from_string_pythonic function

The PR description mentions a TODO for implementing from_string_pythonic. This might be related to the contract class handling.

Would you like me to help implement the from_string_pythonic function or create a GitHub issue to track this task?

crates/katana/storage/provider/src/providers/db/state.rs (2)
118-121: Track the TODO for direct ContractClass storage, sensei! 📝

The TODO comment indicates a future improvement to store ContractClass directly. This aligns with the simplified approach in set_class.

Would you like me to create a GitHub issue to track this TODO for storing ContractClass directly? This would help ensure this technical debt is addressed in future iterations.

Line range hint 189-193: Clean historical state implementation, sensei! 🌟

The simplified class retrieval maintains proper historical state verification while aligning with the new storage pattern. Consider adding metrics/logging to track the performance of historical lookups.

Consider adding Prometheus metrics to track:
- Historical class lookup latency
- Cache hit/miss ratios for frequently accessed historical states
crates/katana/executor/tests/fixtures/mod.rs (2)
Line range hint 31-31: Ohayo sensei! Consider addressing the TODO comments

There are two important TODOs that should be tracked:
- Removing support for legacy contract declarations
- Updating transactions to include valid signatures
The second TODO is particularly important as it relates to security considerations in the test fixtures.
Would you like me to help create GitHub issues to track these improvements?
Also applies to: 89-89
Line range hint 91-207: Consider extracting test data to fixture constants, sensei!

The valid_blocks fixture contains numerous hardcoded values. Consider:
- Moving the test data to constants or a separate test data file
- Creating helper functions for common test data setup
- Adding documentation for the test scenarios
This would improve maintainability and make the test intentions clearer.
Would you like me to demonstrate how to refactor this into a more maintainable structure?
crates/katana/primitives/src/conversion/rpc.rs (1)
Line range hint 290-314: Ohayo sensei! Consider adding tests for flattened_sierra_to_compiled_class

I notice we have tests for legacy class conversion, but the test for flattened_sierra_to_compiled_class seems to be missing. This function is crucial for our contract class handling.

Would you like me to help generate comprehensive tests for this function? This would help ensure reliability during the transition from FlattenedSierraClass to ContractClass.

crates/katana/executor/src/implementation/blockifier/state.rs (1)
276-278: Consider adding error handling for class compilation, sensei!

While the test setup looks good, it might be worth adding error handling for the class compilation process, especially since we're working with default classes that are crucial for the test environment.

- let class = DEFAULT_ACCOUNT_CLASS.clone();
- let compiled_class = DEFAULT_ACCOUNT_CLASS_CASM.clone();
+ let class = DEFAULT_ACCOUNT_CLASS.clone();
+ let compiled_class = DEFAULT_ACCOUNT_CLASS_CASM
+     .clone()
+     .try_compile()
+     .expect("Failed to compile default account class");

crates/katana/primitives/src/genesis/json.rs (1)
295-295: Ohayo sensei! Consider optimizing clone operations for better performance.

The pattern .clone().into() is used consistently across multiple locations for class type conversion. While this works correctly, frequent cloning of potentially large class data structures might impact performance.

Consider these improvements:
- If possible, use references where ownership isn't required
- Consider implementing the From trait to avoid explicit .into() calls
- Evaluate if cloning can be deferred or avoided in some cases

Example optimization:
- class: CONTROLLER_ACCOUNT_CLASS.clone().into(),
+ class: ContractClass::from(&*CONTROLLER_ACCOUNT_CLASS),

Also applies to: 390-390, 725-725, 899-899, 906-906
crates/katana/primitives/src/class.rs (1)
51-53: Ohayo sensei! Avoid variable shadowing to enhance code clarity

In the compile method, the variable casm is declared twice, which can be confusing. Consider renaming the second casm to improve readability.

Apply this diff to rename the variable:

 let casm = CasmContractClass::from_contract_class(class, true, usize::MAX)?;
-let casm = CompiledClass::Class(casm);
+let compiled_class = CompiledClass::Class(casm);
-Ok(casm)
+Ok(compiled_class)

crates/katana/rpc/rpc-types/src/class.rs (3)
153-161: Simplify assignment of program.debug_info

Ohayo, sensei! Instead of using serde_json::to_value::<Option<()>>(None)?, you can directly assign serde_json::Value::Null to program.debug_info for clarity and simplicity.

Apply this diff to simplify the assignment:

 fn compress_legacy_program(mut program: LegacyProgram) -> Result<Vec<u8>, ConversionError> {
     // We don't need the debug info in the compressed program.
-    program.debug_info = serde_json::to_value::<Option<()>>(None)?;
+    program.debug_info = serde_json::Value::Null;
     let bytes = serde_json::to_vec(&program)?;
     let mut gzip_encoder = flate2::write::GzEncoder::new(Vec::new(), flate2::Compression::fast());
     Write::write_all(&mut gzip_encoder, &bytes)?;
     Ok(gzip_encoder.finish()?)
 }
177-195: Optimize conversions to avoid unnecessary serialization

Ohayo, sensei! In the TryFrom implementations for FlattenedSierraClass and CompressedLegacyContractClass, consider implementing direct field mappings instead of serializing to and from JSON. This would enhance performance by eliminating the overhead of serialization.
206-292: Use expect instead of unwrap in test cases for better diagnostics

Ohayo, sensei! In your test functions, using expect with descriptive messages instead of unwrap can make debugging easier if a test fails due to deserialization errors.

Apply this change to your test deserializations:

- let class = serde_json::from_str::<SierraContractClass>(json).unwrap();
+ let class = serde_json::from_str::<SierraContractClass>(json).expect("Failed to deserialize SierraContractClass");
- let class = serde_json::from_str::<LegacyContractClass>(json).unwrap();
+ let class = serde_json::from_str::<LegacyContractClass>(json).expect("Failed to deserialize LegacyContractClass");

crates/katana/rpc/rpc-types/src/transaction.rs (1)
95-96: Ohayo, sensei! Noticed a TODO for refactoring contract class types

There's a TODO comment suggesting changing the contract class type for broadcasted transactions to katana-rpc-types for easier conversion. Implementing this could streamline the conversion process and reduce potential errors.

Would you like assistance in implementing this change? I can help draft the code or open a GitHub issue to track this task.
crates/katana/storage/provider/src/providers/db/mod.rs (1)
665-666: Ohayo, sensei! Consider avoiding unnecessary cloning of class before compiling

Cloning class before compiling might be unnecessary if the compile method can be called on a reference. Avoiding the clone could enhance performance and reduce memory usage.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (3)
Cargo.lock is excluded by !**/*.lock
spawn-and-move-db.tar.gz is excluded by !**/*.gz
types-test-db.tar.gz is excluded by !**/*.gz
📒 Files selected for processing (26)
crates/katana/executor/src/implementation/blockifier/state.rs (6 hunks)
crates/katana/executor/tests/fixtures/mod.rs (1 hunks)
crates/katana/primitives/Cargo.toml (1 hunks)
crates/katana/primitives/src/chain_spec.rs (3 hunks)
crates/katana/primitives/src/class.rs (2 hunks)
crates/katana/primitives/src/conversion/rpc.rs (2 hunks)
crates/katana/primitives/src/genesis/constant.rs (2 hunks)
crates/katana/primitives/src/genesis/json.rs (6 hunks)
crates/katana/primitives/src/genesis/mod.rs (1 hunks)
crates/katana/primitives/src/utils/class.rs (2 hunks)
crates/katana/rpc/rpc-api/src/starknet.rs (4 hunks)
crates/katana/rpc/rpc-types/Cargo.toml (2 hunks)
crates/katana/rpc/rpc-types/src/class.rs (1 hunks)
crates/katana/rpc/rpc-types/src/lib.rs (2 hunks)
crates/katana/rpc/rpc-types/src/transaction.rs (5 hunks)
crates/katana/rpc/rpc/src/starknet/mod.rs (5 hunks)
crates/katana/rpc/rpc/src/starknet/read.rs (3 hunks)
crates/katana/storage/db/src/codecs/mod.rs (2 hunks)
crates/katana/storage/db/src/tables.rs (5 hunks)
crates/katana/storage/provider/Cargo.toml (1 hunks)
crates/katana/storage/provider/src/providers/db/mod.rs (1 hunks)
crates/katana/storage/provider/src/providers/db/state.rs (3 hunks)
crates/katana/storage/provider/src/providers/fork/backend.rs (1 hunks)
crates/katana/storage/provider/src/test_utils.rs (1 hunks)
crates/katana/storage/provider/tests/fixtures.rs (2 hunks)
crates/saya/provider/src/rpc/mod.rs (2 hunks)
🔇 Additional comments (48)
crates/katana/rpc/rpc-types/Cargo.toml (1)
13-13
: Ohayo! New dependencies look good, but let's verify flate2 usage.
The addition of these dependencies aligns well with the PR objectives, especially serde_json_pythonic
for the planned from_string_pythonic
implementation. However, let's ensure flate2
is actually needed.
Also applies to: 29-31
✅ Verification successful
Ohayo! The flate2 dependency is actively used for contract class compression
The flate2
dependency is correctly added and actively used in the codebase, particularly in:
- crates/katana/rpc/rpc-types/src/class.rs for compressing/decompressing legacy program data
- crates/katana/primitives/src/conversion/rpc.rs for contract class compression operations
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check if flate2 is actually used in the codebase
# Expected: Find actual compression/decompression usage
echo "Checking for flate2 usage..."
rg -l "use.*flate2|::flate2"
echo "Checking for specific compression methods..."
rg "Compression|Decompression|compress|decompress"
Length of output: 14032
crates/katana/storage/provider/Cargo.toml (1)
35-35
: Verify the removal of serde_json from test-utils
The removal of serde_json from test-utils feature aligns with making it a main dependency, but we should verify that this doesn't affect any existing tests.
✅ Verification successful
The removal of serde_json from test-utils is safe
Looking at the codebase, serde_json is now correctly placed as both a main dependency and a dev-dependency in Cargo.toml. The test_utils.rs file and other test modules don't show any usage of serde_json under test-utils feature flag, so the removal is safe.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any test files that might be affected by the removal of serde_json from test-utils
# Search for test files using serde_json in tests
rg -l "use.*serde_json" $(fd -t f "test.*\.rs$" crates/katana/storage/provider/)
# Search for #[cfg(feature = "test-utils")] blocks that might use serde_json
rg -B 3 -A 3 '#\[cfg\(feature = "test-utils"\)\]' crates/katana/storage/provider/
Length of output: 215
Script:
#!/bin/bash
# Let's check the content of test_utils.rs and any tests that might use serde_json
cat crates/katana/storage/provider/src/test_utils.rs
# Also check for any test modules in the main source files
rg "mod test" crates/katana/storage/provider/src/
# And check the actual Cargo.toml to see the dependency structure
cat crates/katana/storage/provider/Cargo.toml
Length of output: 4372
crates/katana/primitives/Cargo.toml (1)
30-30
: Verify security implications of early-version dependency.
The dependency is at version 0.1.2, which indicates an early development stage with potential API instability.
✅ Verification successful
Let me gather more information about this package's repository and maintenance status.
Early version dependency appears stable and safe for use
Ohayo sensei! After careful verification, the serde_json_pythonic
dependency at v0.1.2 appears to be safe:
- It's the latest version with significant adoption (~63k recent downloads)
- No security vulnerabilities reported
- Used specifically for Pythonic JSON formatting in contract class handling
- Has minimal scope (no dependencies) reducing attack surface
- Codebase uses it in a controlled manner for serialization only
The early version number reflects its focused functionality rather than instability. The package is appropriately used within its intended purpose in the Starknet-related code.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for security advisories and latest versions
# Check crates.io for latest versions and download stats
curl -s https://crates.io/api/v1/crates/serde_json_pythonic | jq '{latest_version: .crate.max_version, downloads: .crate.downloads, recent_downloads: .crate.recent_downloads}'
# Check for security advisories
gh api graphql -f query='
{
securityVulnerabilities(first: 5, ecosystem: RUST, package: "serde_json_pythonic") {
nodes {
advisory {
summary
severity
publishedAt
}
vulnerableVersionRange
firstPatchedVersion {
identifier
}
}
}
}'
Length of output: 615
Script:
#!/bin/bash
# Check repository details and commit history
curl -s https://crates.io/api/v1/crates/serde_json_pythonic | jq '{repository: .crate.repository, description: .crate.description, created_at: .crate.created_at, updated_at: .crate.updated_at}'
# Check dependencies of the package
curl -s https://crates.io/api/v1/crates/serde_json_pythonic/0.1.2 | jq '.version.deps'
# Look for usage patterns in the codebase
rg -C2 "serde_json_pythonic" --type rust
Length of output: 2040
crates/katana/primitives/src/utils/class.rs (2)
5-5
: Ohayo! Clean import organization, sensei!
The consolidated import of class types from the local module improves code organization and makes the relationship between different class types more explicit.
23-25
: 🛠️ Refactor suggestion
Consider moving the ContractClass import to module level, sensei!
The local import within the function scope is unusual. Since ContractClass
is already imported at the module level (line 5), this local import appears redundant and can be removed.
- use katana_cairo::lang::starknet_classes::contract_class::ContractClass;
let class: ContractClass = serde_json::from_value(class)?;
Ok(CasmContractClass::from_contract_class(class, true, usize::MAX)?)
Likely invalid or redundant comment.
crates/katana/rpc/rpc-types/src/lib.rs (2)
8-8
: Ohayo! New module addition looks good, sensei!
The addition of the class
module aligns well with the refactoring objectives for improving class types handling.
53-53
: Verify the impact of commenting out ContractClass type alias
Ohayo sensei! While commenting out this type alias is part of the refactoring, we should ensure all existing usages have been properly migrated.
Let's check for any remaining references:
crates/katana/storage/provider/src/test_utils.rs (1)
52-54
: Ohayo! Clean simplification of class instantiation, sensei! 🎋
The direct usage of parse_sierra_class
result makes the code more straightforward and aligns well with the class types improvements initiative.
Let's verify the consistency of this pattern across other test files:
✅ Verification successful
Ohayo! Your class parsing is perfectly aligned across the codebase, sensei! ✨
The direct usage of parse_sierra_class
in test_utils.rs
follows the consistent pattern seen throughout the codebase. All instances follow the same approach:
- The function is properly defined in primitives/src/utils/class.rs
- It's used consistently in test files and constants
- The pattern of unwrapping the result and wrapping in Arc when needed is uniform
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for similar class instantiation patterns in test files
# Expected: Other test files should follow similar direct class instantiation
# Search for parse_sierra_class usage in test files
rg -t rust "parse_sierra_class.*unwrap\(\)" --glob "**/test*.rs"
# Look for any remaining FlattenedSierraClass usage that might need updating
ast-grep --pattern 'FlattenedSierraClass'
Length of output: 940
Script:
#!/bin/bash
# Let's check the actual usage context of parse_sierra_class in test files
# and see if there are any inconsistencies in class handling
# Check for parse_sierra_class function definition and its return type
rg -t rust "fn parse_sierra_class" -B 2 -A 2
# Look for actual usage patterns of parse_sierra_class
rg -t rust "parse_sierra_class" -B 2 -A 2
# Check for any other class parsing patterns in test files
rg -t rust "class.*=.*parse" --glob "**/test*.rs" -B 2 -A 2
Length of output: 5907
crates/katana/storage/db/src/codecs/mod.rs (2)
5-5
: Ohayo! Import looks good, sensei!
The import statement for ContractClass
is properly placed and aligns with the new implementations below.
Line range hint 82-86
: Ohayo! The decompression implementation looks solid, sensei!
The error handling is properly implemented using map_err
, and the implementation follows the pattern used elsewhere in the codebase.
crates/katana/primitives/src/genesis/mod.rs (2)
124-124
: Ohayo! Clean implementation of class conversion, sensei!
The direct usage of .into()
for class conversion simplifies the code while maintaining type safety. This change aligns well with the broader class types improvements mentioned in the PR objectives.
132-132
: Verify slot feature compatibility, sensei!
While the class conversion implementation looks good, let's ensure this change maintains compatibility with the slot feature's requirements.
✅ Verification successful
Slot feature compatibility is properly maintained, sensei!
The change is correctly implemented within the slot feature context. The CONTROLLER_ACCOUNT_CLASS is:
- Consistently used with #[cfg(feature = "slot")] across the codebase
- Properly integrated in genesis initialization
- Correctly paired with CONTROLLER_CLASS_HASH
- Used consistently with .into() conversion in all similar contexts
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any slot feature-specific class handling
# Look for other slot feature usages that might interact with CONTROLLER_CLASS
rg --type rust -C 3 '#\[cfg\(feature = "slot"\)\].*CONTROLLER_CLASS'
# Check for any potential class conversion implementations specific to the slot feature
ast-grep --pattern 'impl $_ for ContractClass {
$$$
}'
Length of output: 2089
Script:
#!/bin/bash
# Let's check for any slot feature-specific implementations or usages of CONTROLLER_ACCOUNT_CLASS
# Search for CONTROLLER_ACCOUNT_CLASS usage in slot feature context
rg --type rust -C 3 'CONTROLLER_ACCOUNT_CLASS.*#\[cfg\(feature = "slot"\)\]'
# Look for any direct references to CONTROLLER_ACCOUNT_CLASS
rg --type rust -C 3 'CONTROLLER_ACCOUNT_CLASS'
# Check for any Into/From implementations related to the class type
ast-grep --pattern 'impl $_ for ContractClass {
fn into(self) -> $_ {
$$$
}
}'
Length of output: 6929
crates/katana/primitives/src/genesis/constant.rs (2)
5-5
: Ohayo sensei! Import changes look good!
The import of ContractClass
aligns well with the codebase's transition from SierraClass
to ContractClass
.
98-99
: Verify consumers of DEFAULT_ACCOUNT_CLASS
The transition from SierraClass
to ContractClass
looks good, but let's ensure all consumers are compatible with this type change.
✅ Verification successful
Type change from SierraClass to ContractClass is safe and compatible
All usages of DEFAULT_ACCOUNT_CLASS
are compatible with the type change:
- In chain_spec.rs, it's used in genesis class initialization where the type is explicitly converted using .into()
- In blockifier/state.rs, it's used in test assertions and class initialization
- In genesis/json.rs, it's used for serialization in genesis configuration
The type change maintains backward compatibility and all consumers handle the ContractClass type correctly.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for usages of DEFAULT_ACCOUNT_CLASS to verify compatibility
rg "DEFAULT_ACCOUNT_CLASS" --type rust -B 2 -A 2
Length of output: 21548
crates/katana/storage/provider/tests/fixtures.rs (3)
9-9
: Ohayo! Clean import update, sensei!
The import changes align well with the class types improvements, maintaining good organization of related types.
Line range hint 142-142
: Clean usage of the new type in mock data, sensei!
The integration of SierraContractClass
within the mock state updates is well-structured, maintaining consistency with the type system improvements.
38-40: Verify JSON compatibility with both class types, sensei!

The same JSON file is used for both DOJO_WORLD_COMPILED_CLASS and DOJO_WORLD_SIERRA_CLASS. Let's verify the structure compatibility.

Also noting that this might be related to the TODO for implementing from_string_pythonic. Would you like me to help track this in a new issue?
✅ Verification successful
Ohayo! The JSON structure is compatible with both class types, sensei!
The verification shows that:
- The JSON file contains all required fields: sierra_program, contract_class_version, and entry_points_by_type
- Both DOJO_WORLD_COMPILED_CLASS and DOJO_WORLD_SIERRA_CLASS use the same JSON file correctly
- The JSON structure includes additional fields like abi and sierra_program_debug_info which support both class types
No issues found with using the same JSON source for both class types. The deserialization approach is valid and maintainable.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify JSON structure compatibility with both class types
# Check if the JSON has required fields for both class types
jq 'has("sierra_program") and has("contract_class_version") and has("entry_points_by_type")' \
crates/katana/storage/provider/tests/../../db/benches/artifacts/dojo_world_240.json
# Display the top-level structure of the JSON
jq 'keys' crates/katana/storage/provider/tests/../../db/benches/artifacts/dojo_world_240.json
Length of output: 394
Script:
#!/bin/bash
# Let's verify the usage of this JSON file in both contexts
# First, find where DOJO_WORLD_COMPILED_CLASS is defined
rg "DOJO_WORLD_COMPILED_CLASS" -A 3
# Then check if there are any type definitions or conversions between these classes
ast-grep --pattern 'struct CompiledClass {
$$$
}'
ast-grep --pattern 'struct SierraContractClass {
$$$
}'
Length of output: 552
crates/saya/provider/src/rpc/mod.rs (2)
17-17
: Ohayo! The new import looks good, sensei!
The addition of RpcContractClass
and RpcSierraContractClass
aligns with the PR's objective of improving class type handling.
Line range hint 1-1: Ohayo! Don't forget about from_string_pythonic, sensei!

The PR objectives mention implementing from_string_pythonic, but I don't see it in this file. Is this planned for a different file or should it be added here?
Let's check if it's implemented elsewhere:
crates/katana/rpc/rpc-api/src/starknet.rs (2)
12-12
: Ohayo! Import changes look good, sensei!
The addition of RpcContractClass
and reorganization of imports align well with the class types improvements initiative.
Also applies to: 22-23
105-109
: API method changes are well-structured, sensei!
The transition to RpcContractClass
as the return type for both get_class
and get_class_at
methods is consistent with the class types improvements initiative.
Let's verify the consistency of these changes across the codebase:
Also applies to: 126-126
✅ Verification successful
Type changes are properly implemented with bidirectional conversions, sensei!
The transition to RpcContractClass
is well-structured with:
- Proper implementation of get_class method returning RpcContractClass
- Bidirectional TryFrom conversions between ContractClass and RpcContractClass
- Consistent enum variant handling for both Legacy and Sierra classes
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify consistent usage of RpcContractClass
# Look for any remaining references to the old ContractClass type
rg -l "ContractClass" --type rust
# Check for implementations of these methods
ast-grep --pattern 'impl $_ for $_ {
$$$
async fn get_class($$$) -> $_ {
$$$
}
$$$
}'
Length of output: 2384
Script:
#!/bin/bash
# Let's check the implementation details of get_class in rpc/rpc/src/starknet/read.rs
rg "async fn get_class" crates/katana/rpc/rpc/src/starknet/read.rs -A 5
# Also check the RpcContractClass definition and its relationship with ContractClass
rg "RpcContractClass" crates/katana/rpc/rpc-types/src/class.rs -A 5
# Check for any conversion implementations between these types
ast-grep --pattern 'impl From<ContractClass> for RpcContractClass'
ast-grep --pattern 'impl From<RpcContractClass> for ContractClass'
Length of output: 1764
crates/katana/rpc/rpc/src/starknet/read.rs (3)
11-11
: Ohayo sensei! The import looks good!
The new import for RpcContractClass
is properly organized with other RPC types.
53-53
: Verify the impact of return type change, sensei!
The change from ContractClass
to RpcContractClass
could affect API consumers.
Let's check for potential breaking changes:
✅ Verification successful
Return type change from ContractClass to RpcContractClass is properly handled

The change is part of a well-structured type conversion system in the codebase:
- RpcContractClass in the rpc-types crate is specifically designed for the RPC layer
- Proper conversion traits (TryFrom) are implemented between ContractClass and RpcContractClass
- All consumers of get_class_at are using the correct RPC types
- The change maintains API consistency across the RPC layer
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for direct uses of get_class_at's return value
rg -A 2 "get_class_at\(" --type rust
# Search for any remaining ContractClass usages that might need updating
rg "ContractClass" --type rust
Length of output: 41615
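For readers unfamiliar with the pattern, here is a minimal, self-contained sketch of the bidirectional fallible conversion described above. The types (`StorageClass`, `RpcClass`, `ConversionError`) are illustrative stand-ins, not the actual katana definitions:

```rust
// Illustrative stand-ins for the storage-side and RPC-side class types.
// These are NOT the actual katana definitions; the point is the shape of
// the bidirectional fallible conversion.
#[derive(Debug, Clone, PartialEq)]
struct StorageClass {
    program: Vec<u8>,
}

#[derive(Debug, Clone, PartialEq)]
struct RpcClass {
    // The RPC side carries the program as text; plain UTF-8 here for brevity.
    program: String,
}

#[derive(Debug)]
struct ConversionError(String);

impl TryFrom<StorageClass> for RpcClass {
    type Error = ConversionError;
    fn try_from(value: StorageClass) -> Result<Self, Self::Error> {
        // A conversion step that can genuinely fail, hence TryFrom over From.
        let program = String::from_utf8(value.program)
            .map_err(|e| ConversionError(e.to_string()))?;
        Ok(RpcClass { program })
    }
}

impl TryFrom<RpcClass> for StorageClass {
    type Error = ConversionError;
    fn try_from(value: RpcClass) -> Result<Self, Self::Error> {
        Ok(StorageClass { program: value.program.into_bytes() })
    }
}

fn main() {
    let original = StorageClass { program: b"sierra".to_vec() };
    let rpc = RpcClass::try_from(original.clone()).unwrap();
    let back = StorageClass::try_from(rpc).unwrap();
    // Round-tripping through the RPC representation is lossless.
    assert_eq!(original, back);
}
```

Having both directions as `TryFrom` (rather than infallible `From`) keeps the error path explicit at the RPC boundary, which is what the verification above is checking for.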
117-117: Verify the type conversion implementation, sensei!
The change to `RpcContractClass` looks good, but let's verify the conversion logic.
Let's check the implementation details:
✅ Verification successful
Type conversion looks solid, sensei! ✨
The `TryFrom<ContractClass>` implementation for `RpcContractClass` properly handles both Sierra and Legacy contract classes with appropriate error handling. The conversion is type-safe and handles all variants of the enum.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for class_at_hash implementation to verify type conversion
ast-grep --pattern 'fn class_at_hash($_) -> $_ {
$$$
}'
# Check for any conversion traits between ContractClass and RpcContractClass
rg "impl.*From.*ContractClass.*for.*RpcContractClass" --type rust
Length of output: 219
Script:
#!/bin/bash
# Let's check the implementation of TryFrom for RpcContractClass
rg -A 10 "impl.*TryFrom.*ContractClass.*for.*RpcContractClass" --type rust
# Also check where class_at_hash is implemented
ast-grep --pattern 'fn class_at_hash($_) -> $_ {
$$$
}' --lang rust
Length of output: 1044
crates/katana/storage/provider/src/providers/db/state.rs (1)

75-77: Ohayo! Clean simplification of class storage, sensei! 🎋
The direct storage of `ContractClass` in `tables::Classes` is a welcome improvement that reduces complexity. However, let's verify that all callers are prepared for this change.
✅ Verification successful
Ohayo! The class storage changes are safe to proceed, sensei! 🎋
The search results show that `FlattenedSierraClass` references are only used in:

- Type definitions and conversions in primitives
- RPC handling and tests
- Contract backend implementation
- Developer tools (sozo)

None of these usages conflict with the simplified class storage change in `DbProvider`, as they're either handling the class type at different layers or performing type conversions. The direct storage approach in `tables::Classes` remains compatible with all current usages.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that all callers are using the new ContractClass type
# Look for any remaining references to the old class handling pattern
rg -g '!target' -g '*.rs' "FlattenedSierraClass|SierraClasses"
Length of output: 2287
crates/katana/executor/tests/fixtures/mod.rs (1)

44-47: Ohayo sensei! The class handling simplification looks good!
The removal of `.flatten()` aligns well with the broader refactoring of class types. The code is now more straightforward and consistent with the new `ContractClass` approach.
Let's verify that we're not missing any flattening requirements elsewhere:
✅ Verification successful
Ohayo sensei! The removal of `.flatten()` is safe and verified!
The search results confirm that the remaining `.flatten()` calls in the codebase are unrelated to the contract class handling changes:

- Most usages are for iterator flattening (e.g., in `block_producer.rs`, `artifacts.rs`)
- The class-related `.flatten()` calls in `sozo` and the RPC modules serve different purposes, working with the `FlattenedSierraClass` type from the starknet crate

The changes in the test fixture align perfectly with the broader refactoring without any side effects.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any remaining flatten() calls that might need similar updates
# Search for flatten calls in related contexts
rg -A 2 'flatten\(\)' --type rust
# Look for FlattenedSierraClass usage that might need updating
ast-grep --pattern 'FlattenedSierraClass'
Length of output: 4812
crates/katana/primitives/src/conversion/rpc.rs (2)

Line range hint 200-207: Verify the need for public access to `compress_legacy_program_data`
The function has been made public, which expands our API surface. Let's ensure this change is necessary and being used appropriately.
Also, consider adding documentation to explain:
- When to use this function
- Expected input/output behavior
- Any performance considerations with the fast compression level
✅ Verification successful
Let me verify the module visibility and usage patterns to ensure we have a complete picture.
Ohayo sensei! Based on the verification results, I can now provide a conclusive response about the `compress_legacy_program_data` function.
Making the function public is justified and correctly implemented
The function is used internally within the same module for converting legacy contract classes, specifically in the conversion flow where it's called to compress program data before further processing. The public visibility is necessary because:

- It's used by the module's public API for contract class conversion
- It follows a consistent pattern with its counterpart `decompress_legacy_program_data`
- It's part of a larger compression/decompression pattern used throughout the codebase for handling legacy program data
However, documentation would indeed be helpful for future maintainers. Consider adding:
/// Compresses legacy program data using gzip with fast compression level.
///
/// This function is used internally for contract class conversion and storage optimization.
/// The compressed data can be decompressed using `decompress_legacy_program_data`.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check usage of compress_legacy_program_data
# Look for external calls to this function
echo "Checking compress_legacy_program_data usage..."
rg "compress_legacy_program_data" --type rust -B 2 -A 2
Length of output: 1897
Script:
#!/bin/bash
# Check if this module is re-exported or used in public interfaces
rg "pub.*use.*rpc" --type rust
# Check the module structure to understand visibility requirements
fd -e rs -x cat {} | rg "^mod |^pub mod"
Length of output: 8811
Script:
#!/bin/bash
# Look for any other compression-related functions to understand the pattern
rg "compress|decompress" --type rust
# Check if there are any tests using this function
rg "compress_legacy_program_data" --type rust -g "*test*.rs"
Length of output: 12924
22-25: Ohayo sensei! Verify import usage after refactoring
Since we're moving away from `FlattenedSierraClass`, let's ensure all these imports are still necessary.
Also applies to: 28-28
✅ Verification successful
Let me analyze the usage of these imports more deeply to ensure we're making the right call.
Ohayo sensei! All imports are actively used in the codebase
After analyzing the code, all the imported types are actively used in the conversion functions:

- `FlattenedSierraClass` is used in multiple conversion functions
- `CompressedLegacyContractClass` and `ContractClass` are used in RPC conversions
- All the Legacy* types (e.g., `LegacyContractAbiEntry`, `LegacyEventAbiEntry`, etc.) are used in the conversion functions for handling legacy contract data
- `FunctionStateMutability` is used in ABI conversions

The imports are necessary for the contract class conversion functionality.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check usage of imported types
# Look for usages of FlattenedSierraClass and other imported types
echo "Checking FlattenedSierraClass usage..."
rg "FlattenedSierraClass" --type rust
echo "Checking other imported types usage..."
rg "ClassHash|CompiledClassHash|LegacyContractClass" --type rust
Length of output: 60787
Script:
#!/bin/bash
# Check specific usage of imported types in the file under review
echo "Checking usage of imported types in conversion/rpc.rs..."
rg -A 5 "FlattenedSierraClass|CompressedLegacyContractClass|ContractClass|FunctionStateMutability|LegacyContractAbiEntry|LegacyContractEntryPoint|LegacyEntryPointsByType|LegacyEventAbiEntry|LegacyEventAbiType|LegacyFunctionAbiEntry|LegacyFunctionAbiType|LegacyStructAbiEntry|LegacyStructAbiType|LegacyStructMember|LegacyTypedParameter" crates/katana/primitives/src/conversion/rpc.rs
echo -e "\nChecking function implementations using these types..."
ast-grep --pattern $'fn $FUNC($ARGS) -> $RET {
$$$
}' crates/katana/primitives/src/conversion/rpc.rs
Length of output: 23289
crates/katana/storage/db/src/tables.rs (3)

2-2: Ohayo sensei! Clean and consistent renaming from `SierraClasses` to `Classes`
The changes look good! The renaming from `SierraClasses` to `Classes` and the type update from `FlattenedSierraClass` to `ContractClass` are applied consistently across imports, table enum definition, and table type mapping.
Also applies to: 164-164, 208-208
266-266: Test assertions properly updated
The test assertions have been correctly updated to reflect the table renaming, maintaining comprehensive test coverage.
Also applies to: 292-292
2-2: Verify consistent renaming across codebase
Ohayo sensei! Let's ensure there are no lingering references to `SierraClasses` or `FlattenedSierraClass` in the codebase.
Also applies to: 164-164, 208-208
✅ Verification successful
No remaining references to old class names in active code
Ohayo sensei! After analyzing the codebase, I can confirm that:

- References to `SierraClasses` only exist in documentation and monitoring files (`database.md` and Grafana dashboards)
- References to `FlattenedSierraClass` are properly being used in test files and conversion utilities
- The new `ContractClass` is consistently used across the codebase for the actual implementation

The changes are safe and properly implemented.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for any remaining references to old names
echo "Checking for remaining references to SierraClasses..."
rg "SierraClasses"
echo "Checking for remaining references to FlattenedSierraClass..."
rg "FlattenedSierraClass"
echo "Verifying consistent usage of ContractClass..."
rg "ContractClass"
Length of output: 43720
crates/katana/executor/src/implementation/blockifier/state.rs (3)

261-267: Ohayo! The test helper function looks good, sensei!
The refactored `new_sierra_class()` function now returns a tuple of `(ContractClass, CompiledClass)`, which aligns well with the new class types improvements. The implementation correctly parses both the sierra and compiled classes from the JSON artifact.

400-401: LGTM! Clean class insertion in the test setup.
The insertion of both the legacy and sierra classes into the `declared_classes` map is well-structured and maintains consistency with the new class types system.
Line range hint 451-459: Verify class compilation error handling in assertions, sensei!
The assertions for class compilation could benefit from more explicit error handling to provide better feedback during test failures.
- assert_eq!(actual_legacy_compiled_class,
- Some(new_legacy_class.compile().expect("can compile"))
+ assert_eq!(
+ actual_legacy_compiled_class,
+ Some(new_legacy_class
+ .compile()
+ .expect("Failed to compile legacy class: compilation error"))
);
crates/katana/primitives/src/chain_spec.rs (2)

292-292: LGTM! Class initialization improvements look good.
The changes to use `.into()` for class initialization align well with the class types improvements objective.
Also applies to: 300-300

476-476: LGTM! Test assertions are properly updated.
The test assertions now correctly use direct reference comparison with the class constants, maintaining consistency with the class type improvements.
Also applies to: 490-490
crates/katana/primitives/src/genesis/json.rs (1)

34-34: LGTM! Import changes align with the new class type system.
The import of `ContractClass` and `SierraContractClass` is correctly added to support the refactoring from `SierraClass` to the new class type system.
crates/katana/primitives/src/class.rs (1)

155-155: Ohayo sensei! Remove `pub use` statement inside function
The `pub use` statement inside the `compute_legacy_class_hash` function is unusual and may not compile. `pub use` declarations should be at the module level, not within functions.
Apply this diff to fix the issue:
-fn compute_legacy_class_hash(class: &LegacyContractClass) -> Result<Felt, ComputeClassHashError> {
- pub use starknet::core::types::contract::legacy::LegacyContractClass as StarknetRsLegacyContractClass;
+pub use starknet::core::types::contract::legacy::LegacyContractClass as StarknetRsLegacyContractClass;
+fn compute_legacy_class_hash(class: &LegacyContractClass) -> Result<Felt, ComputeClassHashError> {
Likely invalid or redundant comment.
crates/katana/rpc/rpc-types/src/class.rs (3)

42-54: Conversion implementations are correct and well-structured
Ohayo, sensei! The `TryFrom` implementations for converting between `ContractClass` and `RpcContractClass` properly handle both the `Class` and `Legacy` variants. The error handling using `ConversionError` ensures robustness.

82-118: Efficient conversion between `SierraContractClass` and `RpcSierraContractClass`
Ohayo, sensei! The conversion logic accurately maps the fields between `SierraContractClass` and `RpcSierraContractClass`. The use of `to_string_pythonic` for the ABI serialization preserves the intended format.

135-151: Accurate handling of legacy contract class compression
Ohayo, sensei! The conversions for `LegacyContractClass` correctly implement compression and decompression of the program field, ensuring data integrity during the process.
crates/katana/rpc/rpc-types/src/transaction.rs (3)

1-1: Ohayo, sensei! Importing `Arc` is appropriate
The addition of `use std::sync::Arc;` is necessary for handling smart pointers with `Arc` later in the code.
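For context on why `Arc` is the right tool here: `Arc::unwrap_or_clone` (stable on recent Rust versions) moves the inner value out when the caller holds the only reference and clones otherwise, so the conversion pays for a clone only when the class is actually shared. A minimal std-only sketch:

```rust
use std::sync::Arc;

fn main() {
    // Sole owner: the inner Vec is moved out without a clone.
    let class = Arc::new(vec![1u8, 2, 3]);
    let owned = Arc::unwrap_or_clone(class);
    assert_eq!(owned, vec![1, 2, 3]);

    // Shared ownership: the inner value is cloned and the other Arc survives.
    let shared = Arc::new(String::from("sierra"));
    let other = Arc::clone(&shared);
    let owned2 = Arc::unwrap_or_clone(shared);
    assert_eq!(owned2, "sierra");
    assert_eq!(*other, "sierra");
}
```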
8-8: Including `compiled_class_hash_from_flattened_sierra_class`
The added import is required for computing the compiled class hash from the flattened Sierra class.

25-25: Ohayo, sensei! Adding necessary imports for RPC contract classes
The import of `RpcContractClass`, `RpcLegacyContractClass`, and `RpcSierraContractClass` is appropriate for handling contract class conversions.
crates/katana/rpc/rpc/src/starknet/mod.rs (4)

20-20: Ohayo, sensei! Importing `ClassHash` correctly
The import of `ClassHash` from `katana_primitives::class` is appropriate and necessary for handling class hashes within the module.

38-38: Ohayo, sensei! Importing `RpcContractClass`
Bringing in `RpcContractClass` from `katana_rpc_types::class` aligns with the updated representation of contract classes in the RPC context.

48-48: Ohayo, sensei! Updated imports for transaction types
Adding `PriceUnit`, `ResultPageRequest`, `TransactionExecutionStatus`, and `TransactionStatus` from `starknet::core::types` ensures that the necessary types are available for transaction processing and status handling.

286-286: Ohayo, sensei! Correctly retrieving the contract class
The method `class_at_address` appropriately obtains the class hash and fetches the corresponding RPC contract class. This aligns with the updated handling of contract classes.
@@ -26,13 +26,13 @@
 starknet = { workspace = true, optional = true }
 tokio = { workspace = true, optional = true }

 alloy-primitives = { workspace = true, optional = true }
-serde_json = { workspace = true, optional = true }
+serde_json = { version = "1.0.133" }
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🛠️ Refactor suggestion
Ohayo sensei! Consider keeping serde_json as a workspace dependency
The change from a workspace-managed to a specific version of serde_json could lead to version conflicts if other crates in the workspace use different versions. Additionally, removing the `optional = true` flag makes it a required dependency.
Consider this alternative:
-serde_json = { version = "1.0.133" }
+serde_json = { workspace = true }
impl Compress for ContractClass {
    type Compressed = Vec<u8>;
    fn compress(self) -> Self::Compressed {
        serde_json::to_vec(&self).unwrap()
    }
}
Ohayo sensei! Let's handle serialization errors gracefully!
The current implementation uses `unwrap()`, which could panic if serialization fails. We should handle potential errors gracefully.
Here's a safer implementation:
impl Compress for ContractClass {
type Compressed = Vec<u8>;
fn compress(self) -> Self::Compressed {
- serde_json::to_vec(&self).unwrap()
+ // If serialization fails, return an empty vector to maintain the type signature
+ // The error will be caught during decompression
+ serde_json::to_vec(&self).unwrap_or_default()
}
}
Alternatively, if you prefer to make compression fallible:
impl Compress for ContractClass {
- type Compressed = Vec<u8>;
- fn compress(self) -> Self::Compressed {
- serde_json::to_vec(&self).unwrap()
+ type Compressed = Result<Vec<u8>, serde_json::Error>;
+ fn compress(self) -> Self::Compressed {
+ serde_json::to_vec(&self)
}
}
Committable suggestion skipped: line range outside the PR's diff.
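The fallible alternative suggested above can be shown end-to-end with stand-in types. Note that `Payload`, `CompressError`, and this `Compress` trait are hypothetical simplifications, not katana's actual definitions; only the associated-type change from `Vec<u8>` to `Result<_, _>` is the point:

```rust
// Hypothetical stand-ins: this is not katana's actual `Compress` trait, just
// the associated-type change from `Vec<u8>` to `Result<_, _>` in isolation.
#[derive(Debug, PartialEq)]
enum CompressError {
    Serialize(String),
}

trait Compress {
    type Compressed;
    fn compress(self) -> Self::Compressed;
}

struct Payload {
    data: Vec<u8>,
}

impl Compress for Payload {
    // The fallible variant: callers must handle serialization failure
    // instead of risking a panic from `unwrap()`.
    type Compressed = Result<Vec<u8>, CompressError>;

    fn compress(self) -> Self::Compressed {
        if self.data.is_empty() {
            // Stand-in for a real serialization error.
            Err(CompressError::Serialize("empty payload".into()))
        } else {
            Ok(self.data)
        }
    }
}

fn main() {
    assert!(Payload { data: vec![] }.compress().is_err());
    assert_eq!(Payload { data: vec![1, 2] }.compress().unwrap(), vec![1, 2]);
}
```

The design trade-off: the fallible variant pushes error handling onto every caller of `compress()`, while the `unwrap_or_default()` variant keeps the trait signature but silently swallows failures until decompression.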
let class = RpcSierraContractClass::try_from(s).unwrap();
let class = ContractClass::try_from(RpcContractClass::Class(class)).unwrap();
|
Enhance error handling in class conversion, sensei!
The current implementation uses `unwrap()`, which could cause panics if the conversion fails. Consider using proper error handling to provide better diagnostics.
Here's a suggested improvement:
- let class = RpcSierraContractClass::try_from(s).unwrap();
- let class = ContractClass::try_from(RpcContractClass::Class(class)).unwrap();
+ let class = RpcSierraContractClass::try_from(s)
+ .map_err(|e| anyhow!("Failed to convert Sierra class: {}", e))?;
+ let class = ContractClass::try_from(RpcContractClass::Class(class))
+ .map_err(|e| anyhow!("Failed to convert RPC class: {}", e))?;
This change will:
- Provide better error messages
- Prevent runtime panics
- Maintain the error chain for debugging
RpcContractClass::Sierra(class) => {
    let value = serde_json::to_value(class).unwrap();
    let class = serde_json::from_value(value).unwrap();
    (hash, ContractClass::Class(class))
}
Ohayo sensei! Let's improve error handling and performance.
The current implementation has several concerns:
- Using `unwrap()` can cause runtime panics if serialization/deserialization fails
- The double serialization (to JSON and back) seems unnecessary and could impact performance
Consider applying this safer implementation:
RpcContractClass::Sierra(class) => {
- let value = serde_json::to_value(class).unwrap();
- let class = serde_json::from_value(value).unwrap();
- (hash, ContractClass::Class(class))
+ let class = serde_json::from_value(serde_json::to_value(class)
+ .map_err(|e| ProviderError::ParsingError(format!("Failed to serialize Sierra class: {}", e)))?)
+ .map_err(|e| ProviderError::ParsingError(format!("Failed to deserialize Sierra class: {}", e)))?;
+ (hash, ContractClass::Class(class))
}
Also, could you explain why we need the double serialization here? If there's no specific reason, we could potentially simplify this to a direct conversion.
//
// TODO: add an `AbiFormatter` trait and let users choose which one to use.
let abi = class.abi.as_ref();
let abi_str = to_string_pythonic(abi.unwrap_or(&abi::Contract::default())).unwrap();
Ohayo sensei!
Handle potential errors when converting ABI to string
Using `unwrap()` in `to_string_pythonic` may cause a panic if serialization fails. It's safer to propagate the error using the `?` operator.
Apply this diff to handle the error properly:
-let abi_str = to_string_pythonic(abi.unwrap_or(&abi::Contract::default())).unwrap();
+let abi_str = to_string_pythonic(abi.unwrap_or(&abi::Contract::default()))?;
#[allow(clippy::large_enum_variant)]
#[derive(Debug, Clone, Eq, PartialEq)]
#[cfg_attr(feature = "serde", derive(::serde::Serialize, ::serde::Deserialize))]
pub enum ContractClass {
-    Class(FlattenedSierraClass),
+    Class(SierraContractClass),
🛠️ Refactor suggestion
Ohayo sensei!
Consider boxing large enum variants to improve performance
The `ContractClass` enum contains large variants, as indicated by the `#[allow(clippy::large_enum_variant)]` attribute. Boxing the large variants can reduce the memory footprint and improve performance by avoiding unnecessary data copying.
Apply this diff to box the large variants:
#[allow(clippy::large_enum_variant)]
#[derive(Debug, Clone, Eq, PartialEq)]
#[cfg_attr(feature = "serde", derive(::serde::Serialize, ::serde::Deserialize))]
pub enum ContractClass {
- Class(SierraContractClass),
- Legacy(LegacyContractClass),
+ Class(Box<SierraContractClass>),
+ Legacy(Box<LegacyContractClass>),
}
Committable suggestion skipped: line range outside the PR's diff.
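The size effect behind clippy's `large_enum_variant` lint can be demonstrated with `std::mem::size_of`. The payload struct below is a made-up stand-in for a large class body; the numbers are illustrative, not katana's actual sizes:

```rust
use std::mem::size_of;

// A made-up large payload standing in for a Sierra class body (~512 bytes).
#[allow(dead_code)]
struct BigSierra {
    _program: [u64; 64],
}

// Every value of this enum is at least as large as its biggest variant.
#[allow(dead_code)]
enum Unboxed {
    Class(BigSierra),
    Legacy(u8),
}

// Boxing the large variant shrinks the enum to roughly pointer size.
#[allow(dead_code)]
enum Boxed {
    Class(Box<BigSierra>),
    Legacy(u8),
}

fn main() {
    assert!(size_of::<Unboxed>() >= 512);
    assert!(size_of::<Boxed>() <= 2 * size_of::<usize>());
    println!(
        "unboxed: {} bytes, boxed: {} bytes",
        size_of::<Unboxed>(),
        size_of::<Boxed>()
    );
}
```

The trade-off is one extra heap allocation and pointer indirection per boxed value, which is usually a good deal when the large variant is only one of several.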
let rpc_class = Arc::unwrap_or_clone(tx.contract_class);
let rpc_class = RpcLegacyContractClass::try_from(rpc_class).unwrap();
let class = ContractClass::try_from(RpcContractClass::Legacy(rpc_class)).unwrap();

let class_hash = class.class_hash().unwrap();
Avoid using `unwrap()` to prevent potential panics
In the conversion logic for `BroadcastedDeclareTransaction::V1`, using `unwrap()` on fallible operations can cause the application to panic if an error occurs.
Consider handling the possible errors gracefully. Here's a suggested refactor:
let rpc_class = Arc::unwrap_or_clone(tx.contract_class);
let rpc_class = RpcLegacyContractClass::try_from(rpc_class)?;
let class = ContractClass::try_from(RpcContractClass::Legacy(rpc_class))?;
let class_hash = class.class_hash()?;
This way, errors are propagated up the call stack, allowing for proper error handling.
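The same propagation style can be sketched with a toy two-step conversion. `ClassError` and `declare` below are hypothetical stand-ins for the real error type and conversion chain, not katana code:

```rust
// Hypothetical error type and conversion chain; the real code would return
// the RPC layer's error type instead of panicking via `unwrap()`.
#[derive(Debug)]
struct ClassError(String);

impl From<std::num::ParseIntError> for ClassError {
    fn from(e: std::num::ParseIntError) -> Self {
        ClassError(e.to_string())
    }
}

// Two fallible steps chained with `?`: the first failure is returned to the
// caller rather than panicking, mirroring the suggested try_from chain.
fn declare(raw: &str) -> Result<u64, ClassError> {
    let class_hash: u64 = raw.trim().parse()?; // step 1: may fail to parse
    if class_hash == 0 {
        return Err(ClassError("class hash must be non-zero".into())); // step 2
    }
    Ok(class_hash)
}

fn main() {
    assert_eq!(declare("42").unwrap(), 42);
    assert!(declare("not-a-hash").is_err());
    assert!(declare("0").is_err());
}
```

The `From<ParseIntError>` impl is what lets `?` convert the lower-level error automatically, the same mechanism the suggested refactors rely on.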
let rpc_class = Arc::unwrap_or_clone(tx.contract_class);
let rpc_class = RpcSierraContractClass::try_from(rpc_class).unwrap();
let class = ContractClass::try_from(RpcContractClass::Class(rpc_class)).unwrap();
Handle potential errors in conversion without `unwrap()`
In `BroadcastedDeclareTransaction::V2`, the use of `unwrap()` can lead to panics. To enhance robustness, it's better to handle errors gracefully.
Here's a suggested refactor:
let rpc_class = Arc::unwrap_or_clone(tx.contract_class);
let rpc_class = RpcSierraContractClass::try_from(rpc_class)?;
let class = ContractClass::try_from(RpcContractClass::Class(rpc_class))?;
This approach ensures that errors are appropriately propagated.
let rpc_class = Arc::unwrap_or_clone(tx.contract_class);
let rpc_class = RpcSierraContractClass::try_from(rpc_class).unwrap();
let class = ContractClass::try_from(RpcContractClass::Class(rpc_class)).unwrap();
Prevent panics by avoiding `unwrap()` during conversions
In `BroadcastedDeclareTransaction::V3`, similar to V1 and V2, using `unwrap()` may cause panics on errors.
Consider the following refactor:
let rpc_class = Arc::unwrap_or_clone(tx.contract_class);
let rpc_class = RpcSierraContractClass::try_from(rpc_class)?;
let class = ContractClass::try_from(RpcContractClass::Class(rpc_class))?;
This ensures that any conversion errors are handled properly, preventing unexpected panics.
) -> StarknetApiResult<RpcContractClass> {
    self.on_io_blocking_task(move |this| {
        let state = this.state(&block_id)?;

        let Some(class) = state.class(class_hash)? else {
            return Err(StarknetApiError::ClassHashNotFound);
        };

-       match class {
-           ContractClass::Legacy(class) => Ok(legacy_inner_to_rpc_class(class)?),
-           ContractClass::Class(sierra) => Ok(StarknetRsContractClass::Sierra(sierra)),
-       }
+       Ok(RpcContractClass::try_from(class).unwrap())
Ohayo, sensei! Avoid using `unwrap()` to prevent panics
Using `unwrap()` on `RpcContractClass::try_from(class)` can cause a panic if the conversion fails. It's safer to handle the `Result` to prevent unexpected crashes.
Apply this diff to handle the conversion error:
- Ok(RpcContractClass::try_from(class).unwrap())
+ match RpcContractClass::try_from(class) {
+ Ok(rpc_class) => Ok(rpc_class),
+ Err(e) => Err(StarknetApiError::UnexpectedError { reason: e.to_string() }),
+ }
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (3)
crates/katana/storage/provider/src/providers/db/state.rs (3)
75-75: Ohayo sensei! LGTM, but consider enhancing error handling
The simplified class storage implementation aligns well with the PR objectives. However, we might want to add some error context for debugging purposes.
Consider wrapping the error with additional context:
-        db_tx.put::<tables::Classes>(hash, class)?;
+        db_tx.put::<tables::Classes>(hash, class).map_err(|e| {
+            ProviderError::Custom(format!("Failed to store class {hash}: {e}"))
+        })?;
184-184: Enhance error handling for historical class retrieval
The implementation maintains temporal consistency while simplifying class retrieval. However, we could improve error handling for better debugging.
Consider adding more context to errors:
-        Ok(self.tx.get::<tables::Classes>(hash)?)
+        self.tx.get::<tables::Classes>(hash).map_err(|e| {
+            ProviderError::Custom(format!(
+                "Failed to retrieve historical class {hash} at block {}: {e}",
+                self.block_number
+            ))
+        })
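The context-adding pattern behind these nitpicks, reduced to a std-only sketch. `ProviderError::Custom` here is a stand-in for the real provider error, and the lookup uses `ok_or_else` in place of a fallible storage call:

```rust
use std::collections::HashMap;

// Stand-in for the provider's error type; the real one lives in katana's
// storage provider crate.
#[derive(Debug, PartialEq)]
enum ProviderError {
    Custom(String),
}

// `ok_or_else` plays the role of `map_err` on the storage call: the bare
// "not found" becomes an error that records which class hash failed.
fn get_class(db: &HashMap<u64, String>, hash: u64) -> Result<&String, ProviderError> {
    db.get(&hash)
        .ok_or_else(|| ProviderError::Custom(format!("Failed to retrieve class {hash:#x}")))
}

fn main() {
    let mut db = HashMap::new();
    db.insert(1u64, String::from("sierra class"));

    assert_eq!(get_class(&db, 1).unwrap(), "sierra class");

    // The error now identifies exactly which key was missing.
    let err = get_class(&db, 0xdead).unwrap_err();
    assert_eq!(err, ProviderError::Custom("Failed to retrieve class 0xdead".into()));
}
```

Using the lazy `ok_or_else`/`map_err` closures means the error message is only formatted on the failure path, so the happy path stays allocation-free.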
Line range hint 75-184: Ohayo! Great architectural improvements in class type handling
The changes effectively streamline the class type handling by:
- Unifying class storage under a single table
- Removing unnecessary type conversions
- Maintaining historical state access
This aligns perfectly with the PR objectives of simplifying the conversion process from RPC types to primitive types.
Consider documenting these architectural changes in the module-level documentation to help future maintainers understand the design decisions.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (3)
crates/katana/executor/src/implementation/blockifier/state.rs
(8 hunks)crates/katana/storage/db/src/tables.rs
(5 hunks)crates/katana/storage/provider/src/providers/db/state.rs
(3 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- crates/katana/executor/src/implementation/blockifier/state.rs
- crates/katana/storage/db/src/tables.rs
🔇 Additional comments (1)
crates/katana/storage/provider/src/providers/db/state.rs (1)

116-116: Verify backward compatibility with existing class data
The simplified class retrieval looks good, but we should ensure existing data can still be accessed.
Let's verify if there are any existing class references that might be affected:
✅ Verification successful
Backward compatibility is maintained with existing class data
Ohayo sensei! After analyzing the codebase, I can confirm that the simplified class retrieval maintains backward compatibility:
- The
tables::Classes
table is still properly maintained alongsideCompiledClasses
andCompiledClassHashes
- The storage layer correctly handles both legacy and new class types through the same interface
- Class data is consistently written using
put::<tables::Classes>
when classes are stored - The retrieval operation is a direct mapping to the same underlying storage table
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for any references to the old class types or tables
rg -A 2 "SierraClass|CompiledClass|Legacy"
# Search for database migration code
fd -e rs -e sql "migrate|migration"
Length of output: 93635
Script:
#!/bin/bash
# Check if there are any references to the old class storage or access patterns
rg -A 2 "tables::Classes|get_class|set_class" crates/katana/storage/provider/src/
# Check if there are any migrations or schema changes related to class storage
fd -e rs "schema|migration" crates/katana/storage/db/src/
Length of output: 6930
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (2)
crates/katana/rpc/rpc-types/src/class.rs (2)

153-168: Optimize compression utilities

Consider these improvements for better performance:
- Pre-allocate the decompression buffer with a reasonable size
- Define compression level as a const
+const LEGACY_PROGRAM_COMPRESSION_LEVEL: flate2::Compression = flate2::Compression::fast();
+
 fn compress_legacy_program(mut program: LegacyProgram) -> Result<Vec<u8>, ConversionError> {
     program.debug_info = serde_json::to_value::<Option<()>>(None)?;
     let bytes = serde_json::to_vec(&program)?;
-    let mut gzip_encoder = flate2::write::GzEncoder::new(Vec::new(), flate2::Compression::fast());
+    let mut gzip_encoder = flate2::write::GzEncoder::new(Vec::with_capacity(bytes.len()), LEGACY_PROGRAM_COMPRESSION_LEVEL);
     Write::write_all(&mut gzip_encoder, &bytes)?;
     Ok(gzip_encoder.finish()?)
 }

 fn decompress_legacy_program(compressed_data: &[u8]) -> Result<LegacyProgram, ConversionError> {
     let mut decoder = flate2::read::GzDecoder::new(compressed_data);
-    let mut decompressed = Vec::new();
+    let mut decompressed = Vec::with_capacity(compressed_data.len() * 2);
     std::io::Read::read_to_end(&mut decoder, &mut decompressed)?;
     Ok(serde_json::from_slice::<LegacyProgram>(&decompressed)?)
 }
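The pre-allocation point in the suggestion can be shown with plain std types: sizing the buffer up front avoids repeated reallocation while bytes are appended. The 2x initial guess mirrors the suggested `compressed_data.len() * 2`; the sizes below are arbitrary.

```rust
fn main() {
    let compressed_len = 1024;
    // Pre-allocate roughly twice the compressed size, as the suggestion does.
    let mut decompressed: Vec<u8> = Vec::with_capacity(compressed_len * 2);
    let start_capacity = decompressed.capacity();

    // Simulate streaming decompressed bytes into the buffer, 256 at a time.
    for chunk in std::iter::repeat([0u8; 256]).take(8) {
        decompressed.extend_from_slice(&chunk);
    }

    assert_eq!(decompressed.len(), 2048);
    // Everything fit in the initial allocation, so no growth was needed.
    assert_eq!(decompressed.capacity(), start_capacity);
}
```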
197-304: Enhance test coverage

Consider adding:
- Negative test cases for invalid inputs
- Parameterized tests to reduce code duplication
- Tests for compression/decompression edge cases
Example negative test:
#[test]
fn test_invalid_sierra_program() {
    let invalid_program = vec![Felt::from(u128::MAX)];
    let class = RpcSierraContractClass {
        sierra_program: invalid_program,
        contract_class_version: "0.1.0".to_string(),
        entry_points_by_type: ContractEntryPoints::default(),
        abi: "{}".to_string(),
    };
    assert!(SierraContractClass::try_from(class).is_err());
}
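The parameterized-test suggestion can be sketched as a table-driven loop, using a hypothetical version check in place of the real class types: each case carries its input and expected result, so adding a case is one line instead of a new test function.

```rust
// Hypothetical validator standing in for the real version handling.
fn is_supported_version(v: &str) -> bool {
    matches!(v, "0.1.0" | "0.1.1")
}

fn main() {
    // Each tuple is (input, expected) -- the "parameters" of the test.
    let cases = [("0.1.0", true), ("0.1.1", true), ("9.9.9", false), ("", false)];
    for (input, expected) in cases {
        assert_eq!(is_supported_version(input), expected, "case {input:?}");
    }
}
```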
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (1)
crates/katana/rpc/rpc-types/src/class.rs
(1 hunks)
🔇 Additional comments (2)
crates/katana/rpc/rpc-types/src/class.rs (2)

22-41: LGTM! Well-structured type definitions

The RPC type definitions are well-organized with proper error handling and documentation.

170-195: LGTM! Well-documented conversions

The starknet-rs conversions are well-implemented with clear documentation about performance implications.
use katana_cairo::starknet_api::serde_utils::deserialize_optional_contract_class_abi_entry_vector;
use katana_primitives::class::{ContractClass, LegacyContractClass, SierraContractClass};
use katana_primitives::{
    Felt, {self},
Fix the Felt import syntax
Ohayo! The import statement has a syntax error with an unnecessary self reference.
- Felt, {self},
+ Felt,
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- Felt, {self},
+ Felt,
.map(|f| BigUintAsHex { value: f.to_biguint() })
.collect::<Vec<_>>();
Add error handling for biguint conversion
The to_biguint() conversion could potentially fail for invalid values.

Consider wrapping the conversion in a Result:
- .map(|f| BigUintAsHex { value: f.to_biguint() })
+ .map(|f| f.to_biguint().map(|v| BigUintAsHex { value: v }))
+ .collect::<Result<Vec<_>, _>>()?;
- .map(|f| BigUintAsHex { value: f.to_biguint() })
- .collect::<Vec<_>>();
+ .map(|f| f.to_biguint().map(|v| BigUintAsHex { value: v }))
+ .collect::<Result<Vec<_>, _>>()?;
Codecov Report

Attention: Patch coverage is

Additional details and impacted files

@@ Coverage Diff @@
## main #2712 +/- ##
==========================================
+ Coverage 56.22% 56.26% +0.03%
==========================================
Files 410 415 +5
Lines 52892 53183 +291
==========================================
+ Hits 29741 29921 +180
- Misses 23151 23262 +111

☔ View full report in Codecov by Sentry.
The goal is to reduce the steps in converting classes from RPC types to primitive types. Currently we rely on starknet-rs for RPC types, while using cairo-lang types for internal usages (i.e. execution). This required a very ugly ceremony for conversion, as the two don't share the same types (only Felt!): the current approach is to convert the starknet-rs types to serde_json::Value first and then deserialize that into the cairo-lang types. This is damn slow!!

So I'm solving this by defining the RPC types ourselves, using the same internal types (for the inner fields) as cairo-lang's, so we don't need to perform any conversions from RPC to internal. I also replace any usages of starknet-rs class types in the db and go fully with cairo-lang as the primitives types.
continuation of #2709
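The core idea of the PR can be sketched with std-only types: if the RPC-side type reuses the primitive inner types, the conversion is a plain field move instead of a serialize/deserialize round-trip through serde_json::Value. All names below are illustrative stand-ins for the real Katana/cairo-lang types.

```rust
#[derive(Debug, Clone, PartialEq)]
struct Felt(u128); // the one type both sides already shared

// Hypothetical RPC-side type whose fields reuse the primitive inner types.
struct RpcSierraClass {
    sierra_program: Vec<Felt>,
    contract_class_version: String,
}

// Hypothetical primitive (cairo-lang-side) type.
struct SierraClass {
    sierra_program: Vec<Felt>,
    contract_class_version: String,
}

impl From<RpcSierraClass> for SierraClass {
    fn from(c: RpcSierraClass) -> Self {
        // No intermediate JSON value: the fields move across directly.
        SierraClass {
            sierra_program: c.sierra_program,
            contract_class_version: c.contract_class_version,
        }
    }
}

fn main() {
    let rpc = RpcSierraClass {
        sierra_program: vec![Felt(1), Felt(2)],
        contract_class_version: "0.1.0".to_string(),
    };
    let class: SierraClass = rpc.into();
    assert_eq!(class.sierra_program, vec![Felt(1), Felt(2)]);
    assert_eq!(class.contract_class_version, "0.1.0");
}
```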