-
Notifications
You must be signed in to change notification settings - Fork 179
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
refactor(torii-grpc): gzip compression & query opt #2542
Conversation
Codecov ReportAttention: Patch coverage is
Additional details and impacted files@@ Coverage Diff @@
## main #2542 +/- ##
==========================================
+ Coverage 69.44% 69.46% +0.01%
==========================================
Files 388 388
Lines 49881 50015 +134
==========================================
+ Hits 34640 34742 +102
- Misses 15241 15273 +32 ☔ View full report in Codecov by Sentry. |
WalkthroughOhayo, sensei! The pull request introduces significant updates to the Changes
Possibly related PRs
Suggested reviewers
📜 Recent review detailsConfiguration used: .coderabbit.yaml 📒 Files selected for processing (1)
🧰 Additional context used🔇 Additional comments (1)
🪧 TipsChatThere are 3 ways to chat with CodeRabbit:
Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments. CodeRabbit Commands (Invoked using PR comments)
Other keywords and placeholders
Documentation and Community
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (5)
crates/torii/grpc/proto/world.proto (1)
104-107
: Ohayo again, sensei! The new message type is well-structured!The
RetrieveEntitiesStreamingResponse
message type is well-designed for streaming entity data. Theentity
field allows for sending individual entities, while theremaining_count
provides valuable progress information to the client.One small suggestion to consider:
You might want to add a comment explaining the purpose of the
remaining_count
field, especially if it's expected to be used for progress tracking or other client-side logic. For example:message RetrieveEntitiesStreamingResponse { types.Entity entity = 1; - uint32 remaining_count = 2; + // The number of entities left to be streamed. Can be used for progress tracking. + uint32 remaining_count = 2; }This addition would enhance the self-documentation of the protocol.
Cargo.toml (1)
Line range hint
239-245
: Ohayo, sensei! Intriguing addition of Alloy dependencies!The inclusion of various Alloy-related dependencies is an interesting development. It seems we're integrating with the Alloy Ethereum library, which could bring powerful Ethereum-related capabilities to our project. The consistent versioning and careful feature control (with
default-features = false
) show thoughtful integration.Could you provide more context on how we plan to use these Alloy components in our project? This would help ensure we're leveraging them effectively and securely.
If you'd like, I can help draft documentation or example code showcasing the intended usage of these new Alloy dependencies in our project. Just let me know!
crates/torii/grpc/src/client.rs (1)
Line range hint
55-61
: Ohayo, sensei! Excellent compression implementation!The changes to the
new
method for non-WASM targets are looking great! Here's what I like:
- The updated return type
Result<Self, Error>
allows for more robust error handling. Nice touch!- The addition of Gzip compression for both accepting and sending data is a smart move for improving network efficiency.
One small suggestion to make this even better:
Consider extracting the compression settings into a constant or configuration parameter. This would make it easier to adjust or disable compression in the future if needed. For example:
const COMPRESSION_ENCODING: CompressionEncoding = CompressionEncoding::Gzip; // Then in the new method: .accept_compressed(COMPRESSION_ENCODING) .send_compressed(COMPRESSION_ENCODING)What do you think, sensei? This small change could make our code even more flexible!
crates/torii/grpc/src/server/mod.rs (2)
1239-1249
: Review CORS configuration for completenessThe addition of CORS constants improves cross-origin support. However, consider reviewing the following:
- Ensure that all necessary headers are included in
DEFAULT_EXPOSED_HEADERS
andDEFAULT_ALLOW_HEADERS
based on your application's requirements.- Verify that
DEFAULT_MAX_AGE
aligns with your desired caching policy for preflight requests.
Line range hint
14-45
: Organize imports for readabilityThe addition of new imports improves functionality but has resulted in a less organized import section. Consider grouping imports logically and alphabetically to enhance readability.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (1)
Cargo.lock
is excluded by!**/*.lock
📒 Files selected for processing (7)
- Cargo.toml (1 hunks)
- crates/torii/grpc/Cargo.toml (1 hunks)
- crates/torii/grpc/build.rs (1 hunks)
- crates/torii/grpc/proto/world.proto (2 hunks)
- crates/torii/grpc/src/client.rs (3 hunks)
- crates/torii/grpc/src/server/mod.rs (11 hunks)
- crates/torii/server/src/proxy.rs (2 hunks)
🧰 Additional context used
🔇 Additional comments (20)
crates/torii/grpc/build.rs (1)
19-19
: Ohayo, sensei! Let's talk about this proto compilation tweak.The change from
.compile()
to.compile_protos()
for the WebAssembly target looks interesting. This modification seems to be aligning the WebAssembly compilation method with the non-WebAssembly one, which is a good practice for consistency.Let's double-check this change against the
wasm-tonic-build
documentation:This will help us confirm if the new method is the recommended approach for WebAssembly targets, sensei.
✅ Verification successful
Ohayo, sensei!
Everything looks good with the.compile_protos()
usage incrates/torii/grpc/build.rs
. Since it's the only instance and there's no conflicting information in the documentation, we're all set!🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Verify the usage of .compile_protos() in wasm-tonic-build documentation # Test: Search for .compile_protos() usage in wasm-tonic-build documentation rg --type rust 'compile_protos' $(fd -t f 'wasm-tonic-build')Length of output: 152
Script:
#!/bin/bash # Description: Search for all usages of `.compile_protos()` in the codebase rg --type rust '\.compile_protos\('Length of output: 130
crates/torii/grpc/Cargo.toml (1)
48-49
: Ohayo, sensei! New dependencies look good!The addition of
tower-http
andhttp
as workspace dependencies for non-WASM targets is a solid choice. These libraries are commonly used for building robust HTTP services and middleware in Rust.This change suggests that you're enhancing the HTTP capabilities of the gRPC server, possibly for features like request/response compression or additional middleware. It aligns well with the PR objective of implementing gzip compression.
Keep up the excellent work, sensei!
crates/torii/grpc/proto/world.proto (2)
28-29
: Ohayo, sensei! New streaming RPC method looks great!The addition of
RetrieveEntitiesStreaming
is a solid improvement. It allows for efficient retrieval of entities as a stream, which can be particularly useful for handling large datasets or providing real-time updates. The method signature is consistent with the existingRetrieveEntities
method, making it intuitive for developers to use.
28-29
: Ohayo once more, sensei! Overall, these changes are a great addition!The introduction of the
RetrieveEntitiesStreaming
RPC method and theRetrieveEntitiesStreamingResponse
message type significantly enhances the World service. These additions enable efficient streaming of entity data, which can greatly improve performance for large datasets and enable real-time updates for clients.The new features are well-integrated with the existing service structure and maintain consistency in naming and parameter usage. This thoughtful design will make it easier for developers to adopt and use the new streaming functionality.
Great work on improving the capabilities of the World service!
Also applies to: 104-107
crates/torii/server/src/proxy.rs (2)
Line range hint
20-33
: Ohayo, sensei! LGTM: Enhanced gRPC support in CORS configuration.The addition of "grpc-accept-encoding" and "grpc-encoding" to the
DEFAULT_ALLOW_HEADERS
is a positive change. These headers enable the server to handle compressed gRPC requests and responses, which aligns perfectly with the PR's objective of implementing gzip compression.
35-36
: Excellent addition, sensei! CORS configuration updated for gRPC compression.The inclusion of "grpc-encoding" in
DEFAULT_EXPOSED_HEADERS
is a smart move. This change allows clients to access the compression-related header in cross-origin requests, which is essential for proper gRPC compression support. It's perfectly aligned with the PR's goal of implementing gzip compression.Cargo.toml (4)
219-219
: Ohayo, sensei! Excellent addition of gzip compression support!The inclusion of the "gzip" feature in the
tonic
dependency is a wise move. This enhancement will allow for compression in gRPC communications, potentially reducing network traffic and improving overall performance. Well done!
Line range hint
1-249
: Ohayo, sensei! Let's wrap up this enlightening journey through ourCargo.toml
!Overall, the changes in this file are promising and seem to be pushing our project in an exciting direction. Here's a quick summary of the key points:
- The addition of gzip compression support in
tonic
is a great performance booster.- Updating our WASM-compatible gRPC dependencies keeps us current and secure.
- The introduction of Alloy dependencies opens up new possibilities for Ethereum integration.
- The explicit versioning of Starknet dependencies, while unclear in purpose, ensures consistency.
These updates appear to be enhancing our project's capabilities in terms of performance, compatibility, and functionality. However, it would be beneficial to:
- Verify that our WASM-related features still work as expected with the updated dependencies.
- Provide more context on how we plan to utilize the new Alloy components.
- Clarify the reasoning behind the Starknet dependency specification changes.
Great work on keeping our dependencies up-to-date and expanding our toolkit, sensei! Let's ensure we fully understand and leverage these changes to their full potential.
Line range hint
247-249
: Ohayo, sensei! A question about the Starknet dependency updates.I noticed that the versions for
starknet
,starknet-crypto
, andstarknet-types-core
have been explicitly specified. While explicitly defining versions is generally a good practice for stability, it appears that these versions haven't actually changed from their previous values.Could you clarify the reasoning behind modifying these lines? Was this done for consistency in the
Cargo.toml
file, or is there another purpose I might be missing?
225-228
: Ohayo, sensei! Great job updating the WASM-compatible gRPC dependencies!These updates bring our WASM-compatible gRPC stack to the latest versions, which is excellent for maintaining security and leveraging performance improvements. The consistent versioning for
wasm-tonic
andwasm-tonic-build
is particularly noteworthy.To ensure everything works smoothly with these updates, it would be wise to run our WASM-related tests and verify that there are no breaking changes or unexpected behaviors.
✅ Verification successful
Ohayo, sensei!
Since the previous verification failed due to the missing
cargo
command, let's directly check if the WASM-compatible gRPC dependencies have been updated to the correct versions inCargo.toml
.
Ohayo, sensei!
It looks like the previous verification script had trouble extracting the correct version numbers from
Cargo.toml
. Let's refine the script to accurately parse both simple and table-formatted dependency declarations.
Ohayo, sensei!
Great news! All WASM-compatible gRPC dependencies in
Cargo.toml
have been successfully updated to the correct versions. Everything looks good to go!🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Verify WASM-related tests pass after dependency updates # Run WASM-related tests cargo test --workspace --features wasm # Check for any WASM-related compiler warnings cargo check --workspace --features wasm 2>&1 | grep -i wasmLength of output: 170
Script:
#!/bin/bash # Description: Verify that WASM-compatible gRPC dependencies have the correct versions in Cargo.toml # Define expected versions declare -A dependencies=( ["tonic-web-wasm-client"]="0.6.0" ["wasm-prost"]="0.13" ["wasm-tonic"]="0.12" ["wasm-tonic-build"]="0.12" ) # Initialize a flag to track mismatches mismatch_found=0 # Loop through each dependency and verify its version for dep in "${!dependencies[@]}"; do expected_version="${dependencies[$dep]}" # Extract the version from Cargo.toml actual_version=$(rg "^$dep" Cargo.toml | head -n1 | sed -E "s/.*\"([^\"]+)\".*/\1/") if [[ "$actual_version" != "$expected_version"* ]]; then echo "Version mismatch for $dep: expected $expected_version, found $actual_version" mismatch_found=1 else echo "$dep: version $actual_version is correct." fi done # Exit with non-zero status if any mismatches found if [[ $mismatch_found -eq 1 ]]; then exit 1 fiLength of output: 1535
Script:
#!/bin/bash # Description: Verify that WASM-compatible gRPC dependencies have the correct versions in Cargo.toml # Define expected versions declare -A dependencies=( ["tonic-web-wasm-client"]="0.6.0" ["wasm-prost"]="0.13" ["wasm-tonic"]="0.12" ["wasm-tonic-build"]="0.12" ) # Initialize a flag to track mismatches mismatch_found=0 # Function to extract version extract_version() { local dep="$1" # Match either version = "x.y.z" or version = "x.y" awk -v dep="$dep" ' /^\s*'"$dep"'\s*=/ { if ($0 ~ /\{/) { # Table-formatted dependency match($0, /version\s*=\s*"([^"]+)"/, arr) if (arr[1] != "") { print arr[1] exit } } else { # Simple dependency match($0, /"([^"]+)"/, arr) if (arr[1] != "") { print arr[1] exit } } } ' Cargo.toml } # Loop through each dependency and verify its version for dep in "${!dependencies[@]}"; do expected_version="${dependencies[$dep]}" # Extract the version from Cargo.toml actual_version=$(extract_version "$dep") if [[ "$actual_version" != "$expected_version"* ]]; then echo "Version mismatch for $dep: expected $expected_version, found $actual_version" mismatch_found=1 else echo "$dep: version $actual_version is correct." fi done # Exit with non-zero status if any mismatches found if [[ $mismatch_found -eq 1 ]]; then exit 1 fiLength of output: 2964
crates/torii/grpc/src/client.rs (3)
7-7
: Ohayo, sensei! Nice import addition!The addition of
CompressionEncoding
import is spot-on for enabling Gzip compression in our gRPC client. It's well-placed with other imports, making our code more efficient and ready for some compression action!
67-72
: Ohayo again, sensei! Consistent WASM implementation!Great job on keeping the WASM implementation consistent with the non-WASM version! The changes look solid:
- The
Result<Self, Error>
return type aligns perfectly with the non-WASM version, ensuring consistent error handling across targets.- The Gzip compression settings are applied correctly for WASM targets, maintaining efficiency across different environments.
As mentioned in the previous comment, consider applying the same suggestion of extracting the compression settings into a constant or configuration parameter here as well. This would ensure consistency and flexibility across both implementations.
Keep up the excellent work, sensei!
Line range hint
55-72
: Ohayo, sensei! Let's wrap this up with a bow!Great job on implementing Gzip compression for our gRPC client! Here's a summary of the impactful changes:
- Gzip compression added for both WASM and non-WASM targets, which should boost our network efficiency. Nicely done!
- The
new
method signatures now returnResult<Self, Error>
, providing more robust error handling. Smart move!These changes are consistent and well-implemented across the board. However, there's one thing we should double-check:
We need to ensure that any code calling these
new
methods is updated to handle the newResult
return type. Let's run a quick check:This will help us spot any places where we might need to update our error handling. What do you think, sensei? Shall we run this check to ensure our changes are fully integrated?
✅ Verification successful
Ohayo, sensei! 🎉
All
WorldClient::new
calls are correctly handling theResult
with the?
operator. No issues found with error handling in the current implementation.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Find all occurrences of WorldClient::new to ensure they handle the Result # Search for WorldClient::new calls echo "Searching for WorldClient::new calls:" rg "WorldClient::new" --type rust -C 3 # Search for potential error handling patterns echo "\nChecking for error handling patterns:" rg "WorldClient::new.*\?|WorldClient::new.*unwrap|WorldClient::new.*expect" --type rust -C 3Length of output: 1932
crates/torii/grpc/src/server/mod.rs (7)
1270-1272
: Confirm Gzip compression settingsEnabling Gzip compression with
.accept_compressed(CompressionEncoding::Gzip)
and.send_compressed(CompressionEncoding::Gzip)
enhances performance for clients supporting compression. Ensure that clients interacting with your server can handle compressed responses to avoid compatibility issues.
1277-1299
: Verify CORS and gRPC-Web layer integrationThe integration of
CorsLayer
andGrpcWebLayer
is crucial for gRPC-Web support. Please confirm:
- The CORS configuration correctly mirrors request origins and handles credentials securely.
- The combination of layers does not introduce unintended side effects or security vulnerabilities.
55-56
: Update imports to reflect new method signaturesWith several methods now being
async
, ensure that all associated imports, includingRetrieveEntitiesStreamingResponse
, are correctly updated to support asynchronous execution.
1029-1030
: Add missingSend
bound to type aliasThe type alias
RetrieveEntitiesStreamingResponseStream
includes theSend
trait bound, which is essential for concurrency. Confirm that all type aliases for stream responses include necessary trait bounds.
297-298
:⚠️ Potential issueOhayo, sensei! Ensure thread safety with
par_iter
usageThe use of
par_iter()
at lines 297-298 enhances performance by parallelizing the mapping of rows to entities. However, please verify thatarrays_rows
andschemas
are thread-safe or properly synchronized to prevent potential data races when accessed concurrently.Consider using thread-safe data structures or synchronization mechanisms if necessary.
1125-1160
:⚠️ Potential issueVerify the correctness of
retrieve_entities_streaming
implementationThe new asynchronous method
retrieve_entities_streaming
provides streaming of entities, which is a great addition. Please ensure that:
- The channel buffer size (
channel(100)
) is appropriate for your use case to prevent potential bottlenecks or memory issues.- Error handling within the spawned task is robust, especially if
tx.send()
fails.
607-607
: 🛠️ Refactor suggestionConsider parallelizing the iterator for consistency
At line 607,
iter()
is used instead ofpar_iter()
. For consistency and performance gains similar to those in lines 297-298, consider usingpar_iter()
here as well.Apply this change:
- .iter() + .par_iter()Likely invalid or redundant comment.
crates/torii/grpc/src/server/mod.rs
Outdated
schemas: &[Ty], | ||
) -> Result<proto::types::Entity, Error> { | ||
let hashed_keys = Felt::from_str(&row.get::<String, _>("id")).map_err(ParseError::FromStr)?; | ||
let models = schemas | ||
.iter_mut() | ||
.iter() | ||
.map(|schema| { | ||
map_row_to_ty("", &schema.name(), schema, row, arrays_rows)?; | ||
Ok(schema.as_struct().unwrap().clone().into()) | ||
let mut ty = schema.clone(); | ||
map_row_to_ty("", &schema.name(), &mut ty, row, arrays_rows)?; | ||
Ok(ty.as_struct().unwrap().clone().into()) |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Handle potential None
values when unwrapping
In map_row_to_entity
, the use of .unwrap()
at line 870 could cause a panic if the value is None
. Consider handling this case explicitly to improve robustness.
Apply this diff to handle None
values:
- Ok(ty.as_struct().unwrap().clone().into())
+ if let Some(struct_ty) = ty.as_struct() {
+ Ok(struct_ty.clone().into())
+ } else {
+ Err(Error::from(QueryError::InvalidType))
+ }
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
schemas: &[Ty], | |
) -> Result<proto::types::Entity, Error> { | |
let hashed_keys = Felt::from_str(&row.get::<String, _>("id")).map_err(ParseError::FromStr)?; | |
let models = schemas | |
.iter_mut() | |
.iter() | |
.map(|schema| { | |
map_row_to_ty("", &schema.name(), schema, row, arrays_rows)?; | |
Ok(schema.as_struct().unwrap().clone().into()) | |
let mut ty = schema.clone(); | |
map_row_to_ty("", &schema.name(), &mut ty, row, arrays_rows)?; | |
Ok(ty.as_struct().unwrap().clone().into()) | |
schemas: &[Ty], | |
) -> Result<proto::types::Entity, Error> { | |
let hashed_keys = Felt::from_str(&row.get::<String, _>("id")).map_err(ParseError::FromStr)?; | |
let models = schemas | |
.iter() | |
.map(|schema| { | |
let mut ty = schema.clone(); | |
map_row_to_ty("", &schema.name(), &mut ty, row, arrays_rows)?; | |
if let Some(struct_ty) = ty.as_struct() { | |
Ok(struct_ty.clone().into()) | |
} else { | |
Err(Error::from(QueryError::InvalidType)) | |
} |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 5
🧹 Outside diff range and nitpick comments (2)
crates/dojo-types/src/schema.rs (1)
355-355
: Ohayo again, sensei! Nice adaptation here!The
to_sql_value
method has been updated to work with the newoption
method implementation. It correctly maintains the original functionality of returning the option name as aString
.A small optimization suggestion:
- self.option().map(|option| option.name.clone()) + self.option().map(|option| option.name.to_string())Using
to_string()
instead ofclone()
can be slightly more efficient forString
types, as it avoids potential unnecessary allocations.crates/torii/grpc/proto/schema.proto (1)
21-27
: Ohayo, sensei! Clarify the usage ofbytes
fields for large integersThe fields
i128
,u128
, andu256
are represented asbytes
since Protobuf does not natively support 128-bit or 256-bit integer types. To improve maintainability, consider adding comments explaining how these fields should be serialized and deserialized, including details about endianness and any specific formatting requirements.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (4)
- crates/dojo-types/src/schema.rs (4 hunks)
- crates/torii/grpc/proto/schema.proto (1 hunks)
- crates/torii/grpc/src/types/mod.rs (0 hunks)
- crates/torii/grpc/src/types/schema.rs (2 hunks)
💤 Files with no reviewable changes (1)
- crates/torii/grpc/src/types/mod.rs
🧰 Additional context used
🔇 Additional comments (2)
crates/dojo-types/src/schema.rs (2)
Line range hint
330-341
: Ohayo, sensei! This change looks great!The
option
method has been refactored to return a reference to theEnumOption
instead of cloning the name string. This improvement:
- Enhances performance by avoiding unnecessary string cloning.
- Provides more information to the caller by returning the entire
EnumOption
.- Maintains proper error handling for invalid cases.
Great job on this optimization!
451-453
: Ohayo once more, sensei! Excellent adaptation!The
format_member
function has been skillfully updated to work with the newoption
method of theEnum
struct. It now properly handles theResult
returned byoption()
and formats the output accordingly. The implementation is concise, readable, and correctly handles both success and error cases.Great job on maintaining consistency throughout the codebase!
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
With the current implementation any non-gzip format is still accepted? Or must be gzip
?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 11
🧹 Outside diff range and nitpick comments (5)
crates/torii/grpc/proto/schema.proto (1)
10-27
: Ohayo, sensei! Excellent refactoring of the Primitive messageThe restructuring of the
Primitive
message is a significant improvement:
- The use of
oneof
provides a more flexible and type-safe way to represent different primitive types.- It simplifies the schema by eliminating the need for separate
PrimitiveType
andValue
messages.- The inclusion of various numeric types and custom types like
felt252
enhances the expressiveness of the schema.However, there's a small nitpick:
Consider adding comments to explain the purpose of custom types like
felt252
,class_hash
, andcontract_address
for better documentation. For example:+ // felt252: Field element in the Starknet network bytes felt252 = 14; + // class_hash: Unique identifier for a Starknet contract class bytes class_hash = 15; + // contract_address: Address of a Starknet contract bytes contract_address = 16;crates/torii/grpc/proto/types.proto (2)
32-35
: Ohayo, sensei! The newModel
message looks great!The structure of the
Model
message is well-defined and logical. It nicely represents a hierarchical model with a name and child elements.Consider adding a brief comment above the message to explain its purpose and usage, like this:
// Represents a hierarchical model structure with a name and child elements. message Model { string name = 1; repeated Ty children = 2; }
41-41
: Ohayo again, sensei! Nice update to theEntity
message!The change from
repeated Struct
torepeated Model
for themodels
field is a great improvement. It provides more structure and type safety to the entity representation.To maintain clarity, consider updating the comment above the
Entity
message to reflect this change:message Entity { // The entity's hashed keys bytes hashed_keys = 1; // Models associated with the entity, using the new Model message type repeated Model models = 2; }crates/torii/client/src/client/mod.rs (2)
8-9
: Consider grouping imports from the same crate.The imports from the
futures
crate can be consolidated for better readability.Apply this diff to group the imports:
-use futures::lock::Mutex; -use futures::stream::MapOk; -use futures::{Stream, StreamExt, TryStreamExt}; +use futures::{ + lock::Mutex, + stream::MapOk, + Stream, + StreamExt, + TryStreamExt, +};
15-17
: Unused imports detected.The imports
EventUpdateStreaming
andIndexerUpdateStreaming
fromtorii_grpc::client
andRetrieveEntitiesResponse
,RetrieveEventsResponse
,SubscribeEntityResponse
fromtorii_grpc::proto::world
are not used directly in this file.Consider removing these unused imports to keep the code clean.
-use torii_grpc::client::{EventUpdateStreaming, IndexerUpdateStreaming}; -use torii_grpc::proto::world::{ - RetrieveEntitiesResponse, RetrieveEventsResponse, SubscribeEntityResponse, -}; +use torii_grpc::client::WorldClient;
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (5)
- crates/torii/client/src/client/mod.rs (5 hunks)
- crates/torii/grpc/proto/schema.proto (2 hunks)
- crates/torii/grpc/proto/types.proto (1 hunks)
- crates/torii/grpc/src/client.rs (8 hunks)
- crates/torii/grpc/src/types/schema.rs (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- crates/torii/grpc/src/client.rs
🧰 Additional context used
🔇 Additional comments (4)
crates/torii/grpc/proto/schema.proto (3)
23-23
: Ohayo, sensei! Consider renaming thebool
field to improve clarityAs mentioned in a previous review, the field
bool
of typebool
might cause confusion because the field name is the same as its type. It's a good practice to avoid using reserved words or type names as field names to prevent potential conflicts or misunderstandings.Consider renaming the field to
boolean_value
orflag
for clarity:- bool bool = 13; + bool boolean_value = 13;This change would improve code readability and reduce the potential for confusion in languages where field names and types are used in similar contexts.
38-38
: Ohayo, sensei! Verify the impact of changing Struct to Array in Ty messageThe
Ty
message has been updated to replaceStruct
withArray
:- Struct struct = 4; + Array struct = 4;This change suggests that structs are now represented as arrays, which could have significant implications:
- It may simplify the representation of structured data.
- However, it might also lead to loss of field names or other struct-specific metadata.
Please ensure that all code processing
Ty
messages has been updated to handle structs as arrays. You may want to run the following command to check for any remaining uses of the oldStruct
type:Also, verify that this change doesn't negatively impact the ability to represent complex data structures in your system.
✅ Verification successful
Ohayo, sensei! It seems the previous command didn't recognize the
proto
file type. Let's try searching forStruct
within all.proto
files using the following command:
Ohayo, sensei! 🎉
Great news! The
Struct
type has been successfully replaced withArray
across all.proto
files. There are no remaining instances ofStruct
that might affect the handling ofTy
messages.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
rg --type proto 'Struct'
Length of output: 54
Script:
#!/bin/bash rg 'Struct' --glob '*.proto'Length of output: 28
5-6
: Ohayo, sensei! Consider the impact of changing Enum representationThe
Enum
message has undergone significant changes:
- The
name
field (previously a string) has been replaced with anoption
field of typeuint32
.- The order of fields has been changed, with
option
now being the first field.These changes may have the following impacts:
- Enum options are now represented by numeric identifiers instead of names, which might affect readability and debugging.
- The serialization/deserialization of Enum messages may be affected due to the field order change.
Please ensure that all code using the
Enum
message has been updated to reflect these changes. You may want to run the following command to check for any remaining uses of the oldname
field:✅ Verification successful
Ohayo, sensei! It looks like ripgrep doesn't recognize the
proto
file type. Let's redefine it and try searching again:
Ohayo, sensei! It looks like the previous command had an issue. Let's try searching the
.proto
files directly for any remaining instances of the oldname
field:
Ohayo, sensei! 🎉 The
name
field has been successfully removed from allEnum
definitions in the.proto
files.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
rg --type proto 'Enum.*name'
Length of output: 60
Script:
#!/bin/bash # Define 'proto' file type and search for 'Enum.*name' rg 'Enum.*name' --type-add 'proto:*.proto' --type protoLength of output: 156
Script:
rg -g "*.proto" 'Enum.*name'Length of output: 30
crates/torii/grpc/src/types/schema.rs (1)
66-124
: Ohayo sensei, potential mismatch betweenTyType
andty
variantsIn the
parse
method, each arm of thematch
statement assumes that the providedty
matches the expected variant. For example, at line 68~:if let Ty::Primitive(primitive) = ty { ... } else { return Err(...) }While you handle the mismatch by returning an error, consider whether this approach is robust against incorrect or malicious input. If there's a possibility that
ty
could be of an unexpected variant, additional validation or restructuring might be beneficial.You can run the following script to check for any mismatches between
TyType
variants andty
variants in the codebase:This script uses
ast-grep
to find patterns wherematch
statements onTyType
are followed byif let
checks onty
, ensuring they align correctly.
let models = &self.metadata().models; | ||
Ok(entities.into_iter().map(|e| e.map(models)).collect::<Result<Vec<Entity>, _>>()?) |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Handle potential mapping errors explicitly.
Similar to the previous comment, handle errors when mapping event messages.
Apply this diff:
Ok(entities.into_iter().map(|e| {
- e.map(models)
+ e.map(models).map_err(|err| {
+ Error::MappingError(format!("Failed to map event message: {:?}", err))
+ })
}).collect::<Result<Vec<Entity>, _>>()?)
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
let models = &self.metadata().models; | |
Ok(entities.into_iter().map(|e| e.map(models)).collect::<Result<Vec<Entity>, _>>()?) | |
let models = &self.metadata().models; | |
Ok(entities.into_iter().map(|e| { | |
e.map(models).map_err(|err| { | |
Error::MappingError(format!("Failed to map event message: {:?}", err)) | |
}) | |
}).collect::<Result<Vec<Entity>, _>>()?) |
type SubscriptionId = u64; | ||
type EntityMappedStream = MapOk< | ||
tonic::Streaming<SubscribeEntityResponse>, | ||
Box<dyn Fn(SubscribeEntityResponse) -> (SubscriptionId, Entity) + Send>, | ||
>; | ||
|
||
#[derive(Debug)] | ||
pub struct EntityUpdateStreaming(EntityMappedStream); | ||
|
||
impl Stream for EntityUpdateStreaming { | ||
type Item = <EntityMappedStream as Stream>::Item; | ||
fn poll_next( | ||
mut self: std::pin::Pin<&mut Self>, | ||
cx: &mut std::task::Context<'_>, | ||
) -> std::task::Poll<Option<Self::Item>> { | ||
self.0.poll_next_unpin(cx) | ||
} | ||
} |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🛠️ Refactor suggestion
Implement Stream
for EntityUpdateStreaming
more concisely.
Since EntityUpdateStreaming
is a newtype over EntityMappedStream
, you can delegate the Stream
implementation using Deref
.
Simplify the Stream
implementation:
use std::ops::Deref;
#[derive(Debug)]
pub struct EntityUpdateStreaming(EntityMappedStream);
+impl Deref for EntityUpdateStreaming {
+ type Target = EntityMappedStream;
+ fn deref(&self) -> &Self::Target {
+ &self.0
+ }
+}
impl Stream for EntityUpdateStreaming {
- type Item = <EntityMappedStream as Stream>::Item;
- fn poll_next(
- mut self: std::pin::Pin<&mut Self>,
- cx: &mut std::task::Context<'_>,
- ) -> std::task::Poll<Option<Self::Item>> {
- self.0.poll_next_unpin(cx)
- }
+ // The Stream implementation is automatically provided via Deref.
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
type SubscriptionId = u64; | |
type EntityMappedStream = MapOk< | |
tonic::Streaming<SubscribeEntityResponse>, | |
Box<dyn Fn(SubscribeEntityResponse) -> (SubscriptionId, Entity) + Send>, | |
>; | |
#[derive(Debug)] | |
pub struct EntityUpdateStreaming(EntityMappedStream); | |
impl Stream for EntityUpdateStreaming { | |
type Item = <EntityMappedStream as Stream>::Item; | |
fn poll_next( | |
mut self: std::pin::Pin<&mut Self>, | |
cx: &mut std::task::Context<'_>, | |
) -> std::task::Poll<Option<Self::Item>> { | |
self.0.poll_next_unpin(cx) | |
} | |
} | |
type SubscriptionId = u64; | |
type EntityMappedStream = MapOk< | |
tonic::Streaming<SubscribeEntityResponse>, | |
Box<dyn Fn(SubscribeEntityResponse) -> (SubscriptionId, Entity) + Send>, | |
>; | |
use std::ops::Deref; | |
#[derive(Debug)] | |
pub struct EntityUpdateStreaming(EntityMappedStream); | |
impl Deref for EntityUpdateStreaming { | |
type Target = EntityMappedStream; | |
fn deref(&self) -> &Self::Target { | |
&self.0 | |
} | |
} | |
impl Stream for EntityUpdateStreaming { | |
// The Stream implementation is automatically provided via Deref. | |
} |
let models = self.metadata().models.clone(); | ||
|
||
Ok(EntityUpdateStreaming(stream.map_ok(Box::new(move |res| { | ||
res.entity.map_or( | ||
(res.subscription_id, Entity { hashed_keys: Felt::ZERO, models: vec![] }), | ||
|entity| { | ||
(res.subscription_id, entity.map(&models).expect("must able to serialize")) | ||
}, | ||
) | ||
})))) |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Avoid using expect
in asynchronous code.
Using expect("must able to serialize")
can cause the application to panic if the mapping fails. It's better to handle errors gracefully.
Modify the code to handle errors without panicking.
Ok(EntityUpdateStreaming(stream.map_ok(Box::new(move |res| {
res.entity.map_or(
(res.subscription_id, Entity { hashed_keys: Felt::ZERO, models: vec![] }),
|entity| {
- (res.subscription_id, entity.map(&models).expect("must able to serialize"))
+ match entity.map(&models) {
+ Ok(mapped_entity) => (res.subscription_id, mapped_entity),
+ Err(err) => {
+ // Handle the error, e.g., log it and return a default entity
+ // or propagate the error
+ (res.subscription_id, Entity { hashed_keys: Felt::ZERO, models: vec![] })
+ }
+ }
},
)
}))))
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
let models = self.metadata().models.clone(); | |
Ok(EntityUpdateStreaming(stream.map_ok(Box::new(move |res| { | |
res.entity.map_or( | |
(res.subscription_id, Entity { hashed_keys: Felt::ZERO, models: vec![] }), | |
|entity| { | |
(res.subscription_id, entity.map(&models).expect("must able to serialize")) | |
}, | |
) | |
})))) | |
let models = self.metadata().models.clone(); | |
Ok(EntityUpdateStreaming(stream.map_ok(Box::new(move |res| { | |
res.entity.map_or( | |
(res.subscription_id, Entity { hashed_keys: Felt::ZERO, models: vec![] }), | |
|entity| { | |
match entity.map(&models) { | |
Ok(mapped_entity) => (res.subscription_id, mapped_entity), | |
Err(err) => { | |
// Handle the error, e.g., log it and return a default entity | |
// or propagate the error | |
(res.subscription_id, Entity { hashed_keys: Felt::ZERO, models: vec![] }) | |
} | |
} | |
}, | |
) | |
})))) |
let models = &self.metadata().models; | ||
Ok(entities.into_iter().map(|e| e.map(models)).collect::<Result<Vec<Entity>, _>>()?) |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Handle potential mapping errors explicitly.
When mapping entities, any error from map(models)
is currently propagated using the ?
operator. It's better to handle potential errors explicitly to provide more context.
Consider wrapping the mapping in a closure that handles errors gracefully.
Ok(entities.into_iter().map(|e| {
- e.map(models)
+ e.map(models).map_err(|err| {
+ Error::MappingError(format!("Failed to map entity: {:?}", err))
+ })
}).collect::<Result<Vec<Entity>, _>>()?)
Committable suggestion was skipped due to low confidence.
let models = self.metadata().models.clone(); | ||
|
||
Ok(EntityUpdateStreaming(stream.map_ok(Box::new(move |res| { | ||
res.entity.map_or( | ||
(res.subscription_id, Entity { hashed_keys: Felt::ZERO, models: vec![] }), | ||
|entity| { | ||
(res.subscription_id, entity.map(&models).expect("must able to serialize")) | ||
}, | ||
) | ||
})))) |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Avoid using expect
in asynchronous code.
The same issue applies here as in the previous comment regarding expect
.
Apply the similar changes to handle errors gracefully.
Ok(EntityUpdateStreaming(stream.map_ok(Box::new(move |res| {
res.entity.map_or(
(res.subscription_id, Entity { hashed_keys: Felt::ZERO, models: vec![] }),
|entity| {
- (res.subscription_id, entity.map(&models).expect("must able to serialize"))
+ match entity.map(&models) {
+ Ok(mapped_entity) => (res.subscription_id, mapped_entity),
+ Err(err) => {
+ // Handle the error appropriately
+ (res.subscription_id, Entity { hashed_keys: Felt::ZERO, models: vec![] })
+ }
+ }
},
)
}))))
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
let models = self.metadata().models.clone(); | |
Ok(EntityUpdateStreaming(stream.map_ok(Box::new(move |res| { | |
res.entity.map_or( | |
(res.subscription_id, Entity { hashed_keys: Felt::ZERO, models: vec![] }), | |
|entity| { | |
(res.subscription_id, entity.map(&models).expect("must able to serialize")) | |
}, | |
) | |
})))) | |
let models = self.metadata().models.clone(); | |
Ok(EntityUpdateStreaming(stream.map_ok(Box::new(move |res| { | |
res.entity.map_or( | |
(res.subscription_id, Entity { hashed_keys: Felt::ZERO, models: vec![] }), | |
|entity| { | |
match entity.map(&models) { | |
Ok(mapped_entity) => (res.subscription_id, mapped_entity), | |
Err(err) => { | |
// Handle the error appropriately | |
(res.subscription_id, Entity { hashed_keys: Felt::ZERO, models: vec![] }) | |
} | |
} | |
}, | |
) | |
})))) |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (5)
crates/torii/core/src/error.rs (1)
38-41
: Ohayo, sensei! Nice addition of new error variants!The new
IntegerOverflow
andExpectedDataType
variants in theParseError
enum are well-integrated and provide more specific error handling. They allow for detailed error messages, which is great for debugging and user feedback.Consider adding examples in the comments for these new variants to illustrate their usage. For instance:
/// Represents an integer overflow error. /// /// # Example /// ``` /// let error = ParseError::IntegerOverflow("Value 1000 exceeds u8 range".to_string()); /// ``` #[error("Integer overflow: {0}")] IntegerOverflow(String), /// Represents an error when the expected data type doesn't match. /// /// # Example /// ``` /// let error = ParseError::ExpectedDataType("Expected u64, got String".to_string()); /// ``` #[error("Expected data type: {0}")] ExpectedDataType(String),This would make the code even more self-documenting and helpful for other developers, sensei!
crates/torii/grpc/src/types/schema.rs (4)
38-61
: Ohayo sensei! Great addition, but let's make it even saferThe new
map
method is a solid implementation for convertingproto::types::Entity
toEntity
. It correctly handles the conversion ofhashed_keys
and constructsStruct
instances from the provided models.However, there's a potential panic point on line 57:
Ok(ty.as_struct().unwrap().clone())To make this code more robust, consider handling the
None
case explicitly:Ok(ty.as_struct().ok_or(SchemaError::ExpectedDataType("struct".to_string()))?.clone())This change will propagate the error instead of panicking if
as_struct()
returnsNone
.
65-130
: Ohayo sensei! Impressive parsing logic, but let's future-proof itThe new
parse
method is a comprehensive implementation for populating aTy
instance. It handles all current type cases with appropriate error handling and type consistency checks.To make this code more robust against future changes, consider adding a wildcard arm to the match statement:
match ty_type { // ... existing arms ... _ => return Err(SchemaError::UnsupportedType("Unknown TyType variant".to_string())), }This addition will ensure that any new
TyType
variants added in the future are handled gracefully, preventing potential panics and making the code more maintainable.
Line range hint
178-231
: Ohayo sensei! Robust implementation, but let's make it even betterThe updated
TryFrom<proto::types::Primitive>
implementation forPrimitive
is a significant improvement. The use of pattern matching and explicit error handling for integer overflows makes the code more robust and safer.To further improve this implementation, consider adding a wildcard arm to the match statement:
match &value { // ... existing arms ... _ => return Err(SchemaError::UnsupportedType("Unknown PrimitiveType variant".to_string())), }This addition will ensure that any new
PrimitiveType
variants added in the future are handled gracefully, preventing potential panics and making the code more maintainable.
233-289
: Ohayo sensei! Good implementation, but let's handle None cases more carefullyThe updated
From<Primitive>
implementation forproto::types::Primitive
is generally good, handling allPrimitive
variants correctly. However, the use ofunwrap_or_default()
could potentially mask missing data by replacingNone
with default values.Consider handling
None
cases more explicitly. For example:Primitive::I8(i8) => { proto::types::primitive::PrimitiveType::I8(i8.ok_or(SchemaError::MissingExpectedData("i8 value".to_string()))? as i32) },This approach would make it clear when data is missing, rather than silently using default values. Apply similar changes for other primitive types to ensure that missing data is properly handled and reported.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (3)
- crates/torii/core/src/error.rs (1 hunks)
- crates/torii/grpc/src/server/mod.rs (12 hunks)
- crates/torii/grpc/src/types/schema.rs (4 hunks)
🧰 Additional context used
🔇 Additional comments (4)
crates/torii/grpc/src/types/schema.rs (3)
26-29
: Ohayo sensei! Excellent error handling enhancementThe addition of
ExpectedDataType
andIntegerOverflow
error variants improves the granularity of error reporting. This change will make debugging and error handling more precise and informative.
134-146
: Ohayo sensei! Clean and concise implementationThe updated
From<Ty>
implementation forproto::types::Ty
is a great improvement. The use of pattern matching makes the code more readable and maintainable. It correctly handles all variants ofTy
without any apparent issues.Great job on this refactoring!
Line range hint
1-289
: Ohayo sensei! Overall, a significant improvement to the schema handlingThe changes made to this file represent a substantial improvement in type conversions, error handling, and overall code structure. The new
map
method inEntity
, theparse
method inTy
, and the updated implementations forPrimitive
conversions all contribute to a more robust and maintainable codebase.Key improvements include:
- More granular error handling with new
SchemaError
variants.- Comprehensive parsing logic for different types.
- Explicit handling of integer overflows.
While there are a few areas where error handling could be further refined (as mentioned in previous comments), these changes significantly enhance the safety and clarity of the schema handling code.
Great work on this refactoring, sensei!
crates/torii/grpc/src/server/mod.rs (1)
1272-1274
: Gzip compression enhancement looks great!Ohayo, sensei! The addition of Gzip compression with
.accept_compressed(CompressionEncoding::Gzip)
and.send_compressed(CompressionEncoding::Gzip)
improves the efficiency of data transmission.
CorsLayer::new() | ||
.allow_origin(AllowOrigin::mirror_request()) | ||
.allow_credentials(true) | ||
.max_age(DEFAULT_MAX_AGE) | ||
.expose_headers( | ||
DEFAULT_EXPOSED_HEADERS | ||
.iter() | ||
.cloned() | ||
.map(HeaderName::from_static) | ||
.collect::<Vec<HeaderName>>(), | ||
) | ||
.allow_headers( | ||
DEFAULT_ALLOW_HEADERS | ||
.iter() | ||
.cloned() | ||
.map(HeaderName::from_static) | ||
.collect::<Vec<HeaderName>>(), | ||
), | ||
) |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Review CORS settings for potential security risks
Ohayo, sensei! The current CORS configuration uses AllowOrigin::mirror_request()
with allow_credentials(true)
. This combination can expose your application to security vulnerabilities, as it allows credentials to be sent with requests from any origin, which might lead to cross-site request forgery (CSRF) attacks.
Consider specifying allowed origins explicitly or revising the CORS settings to ensure that only trusted origins can send requests with credentials. For example:
- .allow_origin(AllowOrigin::mirror_request())
+ .allow_origin(AllowOrigin::list(vec![
+ "https://trusted.domain.com".parse().unwrap(),
+ ]))
This modification enhances security by restricting cross-origin requests to known, trusted domains.
Committable suggestion was skipped due to low confidence.
for (i, entity) in res.entities.iter().enumerate() { | ||
tx.send(Ok(RetrieveEntitiesStreamingResponse { | ||
entity: Some(entity.clone()), | ||
remaining_count: (res.total_count - (i + 1) as u32), | ||
})) | ||
.await | ||
.unwrap(); | ||
} | ||
}); |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Avoid unwrapping Results to prevent potential panics
Ohayo, sensei! In the loop starting at line 1147, using .unwrap()
after tx.send(...).await
could lead to a panic if the receiver has been dropped. To enhance reliability, it's recommended to handle the potential error gracefully.
Consider applying this diff to handle the error:
for (i, entity) in res.entities.iter().enumerate() {
- tx.send(Ok(RetrieveEntitiesStreamingResponse {
+ if let Err(e) = tx.send(Ok(RetrieveEntitiesStreamingResponse {
entity: Some(entity.clone()),
remaining_count: (res.total_count - (i + 1) as u32),
- }))
+ }))
.await
- .unwrap();
+ {
+ // Optionally log the error
+ eprintln!("Failed to send response: {}", e);
+ break;
+ }
}
This change ensures that the code handles send errors without panicking, maintaining the robustness of the streaming response.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
for (i, entity) in res.entities.iter().enumerate() { | |
tx.send(Ok(RetrieveEntitiesStreamingResponse { | |
entity: Some(entity.clone()), | |
remaining_count: (res.total_count - (i + 1) as u32), | |
})) | |
.await | |
.unwrap(); | |
} | |
}); | |
for (i, entity) in res.entities.iter().enumerate() { | |
if let Err(e) = tx.send(Ok(RetrieveEntitiesStreamingResponse { | |
entity: Some(entity.clone()), | |
remaining_count: (res.total_count - (i + 1) as u32), | |
})) | |
.await | |
{ | |
// Optionally log the error | |
eprintln!("Failed to send response: {}", e); | |
break; | |
} | |
} | |
}); |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (3)
crates/torii/grpc/src/types/mod.rs (3)
43-43
: Ohayo, sensei! The new field looks good, but consider clarifying its purpose.The addition of
dont_include_hashed_keys: bool
to theQuery
struct provides more control over query behavior. However, the negative form in the field name might be slightly confusing.Consider renaming it to something more positive and descriptive, such as
include_unhashed_keys_only: bool
or adding a comment to explain its purpose.
202-202
: Ohayo, sensei! The implementation looks correct, but consider improving readability.The
From<Query>
implementation forproto::types::Query
has been correctly updated to include the newdont_include_hashed_keys
field. Great job on maintaining consistency!For improved readability, especially as the struct grows, consider using a multi-line format:
Self { clause: value.clause.map(|c| c.into()), limit: value.limit, offset: value.offset, dont_include_hashed_keys: value.dont_include_hashed_keys, }
Ohayo, sensei!
We've identified multiple instances where
Value::from(
is still used across the codebase. This suggests that the removal of theFrom<Value>
implementation forproto::types::Value
might not be fully addressed and could impact other parts of the project.Affected Files:
crates/torii/graphql/src/object/model.rs
crates/torii/graphql/src/object/entity.rs
crates/torii/graphql/src/object/metadata/mod.rs
crates/torii/graphql/src/object/event_message.rs
crates/torii/graphql/src/object/connection/mod.rs
crates/torii/graphql/src/object/event.rs
crates/torii/graphql/src/query/mod.rs
crates/katana/primitives/src/da/encoding.rs
🔗 Analysis chain
Line range hint
1-1
: Ohayo, sensei! Can you provide more context on the removal ofFrom<Value>
forproto::types::Value
?The AI-generated summary mentions the removal of the
From<Value>
implementation forproto::types::Value
, but this change is not visible in the provided code. This removal might have implications for other parts of the codebase.Could you please clarify the reasons for this removal and confirm that all affected code has been updated accordingly? Here's a script to help verify the impact:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Search for usages of From<Value> for proto::types::Value # Test: Search for potential usages of the removed implementation rg --type rust "From<Value>\s+for\s+proto::types::Value" ./crates rg --type rust "Value::from\(" ./cratesLength of output: 4736
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (3)
- crates/torii/grpc/proto/types.proto (2 hunks)
- crates/torii/grpc/src/server/mod.rs (27 hunks)
- crates/torii/grpc/src/types/mod.rs (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- crates/torii/grpc/proto/types.proto
🧰 Additional context used
🔇 Additional comments (12)
crates/torii/grpc/src/server/mod.rs (12)
Line range hint
201-210
: Ohayo, sensei! The addition ofdont_include_hashed_keys
enhances privacy controlThe new
dont_include_hashed_keys
parameter allows clients to opt-out of receiving hashed keys in the response, which is beneficial for maintaining privacy and reducing unnecessary data transmission.
302-303
: Ohayo, sensei! Correct propagation ofdont_include_hashed_keys
The
dont_include_hashed_keys
parameter is properly passed tomap_row_to_entity
, ensuring consistent behavior throughout the data retrieval process.
376-377
: Ohayo, sensei! Proper implementation inquery_by_hashed_keys
The
dont_include_hashed_keys
parameter is correctly utilized when fetching entities, maintaining the intended functionality.
491-492
: Ohayo, sensei! Accurate use inquery_by_keys
The integration of
dont_include_hashed_keys
in thequery_by_keys
method aligns with the overall privacy enhancements.
615-616
: Ohayo, sensei! Consistent application inquery_by_member
The parameter
dont_include_hashed_keys
is effectively incorporated, ensuring that hashed keys are excluded when requested.
676-677
: Ohayo, sensei! Effective usage inquery_by_composite
The inclusion of
dont_include_hashed_keys
in composite queries demonstrates thorough attention to privacy considerations.
Line range hint
760-817
: Ohayo, sensei! Comprehensive integration inretrieve_entities
The parameter
dont_include_hashed_keys
is appropriately propagated through various query clauses, ensuring consistent behavior regardless of the query type.
876-889
: Ensure correct handling ofdont_include_hashed_keys
inmap_row_to_entity
The logic correctly determines whether to include
hashed_keys
based on thedont_include_hashed_keys
flag. This approach effectively maintains privacy when needed.
Line range hint
1044-1054
: Ohayo, sensei! The streaming response types are well-definedThe introduction of
RetrieveEntitiesStreamingResponseStream
and its associated type aliases enhances the ability to handle streaming data efficiently.
1140-1175
: Previous issue regarding.unwrap()
inretrieve_entities_streaming
remainsThe use of
.unwrap()
aftertx.send(...).await
could lead to a panic if the receiver has been dropped. Handling this potential error would improve the robustness of the streaming implementation.
1254-1265
: Ohayo, sensei! CORS configuration constants are appropriately definedDefining
DEFAULT_MAX_AGE
,DEFAULT_EXPOSED_HEADERS
, andDEFAULT_ALLOW_HEADERS
enhances clarity and maintainability of CORS settings.
1293-1311
: Previous security concern regarding CORS settings remainsThe use of
AllowOrigin::mirror_request()
withallow_credentials(true)
may introduce security vulnerabilities related to cross-origin requests.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (1)
crates/torii/grpc/src/types/mod.rs (1)
43-43
: Ohayo, sensei! New query option looks good, but consider positive naming.The addition of the
dont_include_hashed_keys
field to theQuery
struct is a welcome improvement, allowing for more flexible querying. However, the negative naming might be slightly confusing.Consider renaming to a positive form for clarity, such as
include_hashed_keys: bool
with the default set totrue
. This would make the intent clearer and align with common boolean naming conventions.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (3)
- crates/torii/grpc/proto/types.proto (1 hunks)
- crates/torii/grpc/src/server/mod.rs (26 hunks)
- crates/torii/grpc/src/types/mod.rs (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- crates/torii/grpc/proto/types.proto
🧰 Additional context used
🔇 Additional comments (7)
crates/torii/grpc/src/types/mod.rs (2)
202-202
: Ohayo, sensei! Conversion implementation updated correctly.The
From<Query>
implementation forproto::types::Query
has been properly updated to include the newdont_include_hashed_keys
field. This ensures that the query options are correctly preserved when converting between types.
Line range hint
1-458
: Ohayo, sensei! Could you clarify the removal ofFrom<Value>
implementation?The AI summary mentions that the
From<Value>
implementation forproto::types::Value
has been removed. However, this implementation is not visible in the provided code snippet. Could you provide more context about this removal?To better understand the impact of this change, please run the following script:
This will help us identify any potential areas that might be affected by the removal of this implementation.
crates/torii/grpc/src/server/mod.rs (5)
14-14
: LGTM!Ohayo, sensei! The added
use
statements import necessary modules for the new functionalities introduced, such as time durations, HTTP headers, channels, compression encoding, gRPC-Web support, and CORS layers. These imports are appropriate and correctly specified.Also applies to: 20-20, 36-36, 38-38, 41-41, 45-45, 55-57
1042-1043
: LGTM!The introduction of
RetrieveEntitiesStreamingResponseStream
type aliases enhances code readability by defining the streaming response types for the newretrieve_entities_streaming
method.Also applies to: 1052-1052
1252-1263
: LGTM!The addition of default constants for CORS configuration enhances the maintainability and configurability of the server. Defining
DEFAULT_MAX_AGE
,DEFAULT_EXPOSED_HEADERS
, andDEFAULT_ALLOW_HEADERS
as constants improves code clarity.
1283-1285
: LGTM!The server is correctly configured to accept and send compressed data using Gzip. This optimization can improve network efficiency for clients that support compression.
1290-1312
:⚠️ Potential issueReview CORS settings for potential security risks
Ohayo, sensei! The current CORS configuration uses
AllowOrigin::mirror_request()
withallow_credentials(true)
. This combination can expose your application to security vulnerabilities, as it allows credentials to be sent with requests from any origin, which might lead to cross-site request forgery (CSRF) attacks.Consider specifying allowed origins explicitly or revising the CORS settings to ensure that only trusted origins can send requests with credentials. For example:
.layer( CorsLayer::new() - .allow_origin(AllowOrigin::mirror_request()) + .allow_origin(AllowOrigin::list(vec![ + "https://trusted.domain.com".parse().unwrap(), + ])) .allow_credentials(true) .max_age(DEFAULT_MAX_AGE) .expose_headers(This modification enhances security by restricting cross-origin requests to known, trusted domains.
Likely invalid or redundant comment.
async fn retrieve_entities_streaming( | ||
&self, | ||
request: Request<RetrieveEntitiesRequest>, | ||
) -> ServiceResult<Self::RetrieveEntitiesStreamingStream> { | ||
let query = request | ||
.into_inner() | ||
.query | ||
.ok_or_else(|| Status::invalid_argument("Missing query argument"))?; | ||
|
||
let (tx, rx) = channel(100); | ||
let res = self | ||
.retrieve_entities( | ||
ENTITIES_TABLE, | ||
ENTITIES_MODEL_RELATION_TABLE, | ||
ENTITIES_ENTITY_RELATION_COLUMN, | ||
query, | ||
) | ||
.await | ||
.map_err(|e| Status::internal(e.to_string()))?; | ||
tokio::spawn(async move { | ||
for (i, entity) in res.entities.iter().enumerate() { | ||
tx.send(Ok(RetrieveEntitiesStreamingResponse { | ||
entity: Some(entity.clone()), | ||
remaining_count: (res.total_count - (i + 1) as u32), | ||
})) | ||
.await | ||
.unwrap(); | ||
} | ||
}); | ||
|
||
Ok( | ||
Response::new( | ||
Box::pin(ReceiverStream::new(rx)) as Self::RetrieveEntitiesStreamingStream | ||
), | ||
) | ||
} |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Ensure proper error handling in the spawned task
Ohayo, sensei! The implementation of the retrieve_entities_streaming
method correctly streams entities to the client. However, the spawned task doesn't handle potential errors that might occur during the send operation, especially if the receiver has been dropped. To enhance robustness, consider handling the tx.send(...).await
result without unwrapping.
Apply this diff to handle potential send errors:
for (i, entity) in res.entities.iter().enumerate() {
- tx.send(Ok(RetrieveEntitiesStreamingResponse {
+ if let Err(e) = tx.send(Ok(RetrieveEntitiesStreamingResponse {
entity: Some(entity.clone()),
remaining_count: (res.total_count - (i + 1) as u32),
- }))
+ }))
.await
- .unwrap();
+ {
+ // Optionally log the error
+ eprintln!("Failed to send response: {}", e);
+ break;
+ }
}
This change ensures the task handles send errors gracefully without panicking.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
async fn retrieve_entities_streaming( | |
&self, | |
request: Request<RetrieveEntitiesRequest>, | |
) -> ServiceResult<Self::RetrieveEntitiesStreamingStream> { | |
let query = request | |
.into_inner() | |
.query | |
.ok_or_else(|| Status::invalid_argument("Missing query argument"))?; | |
let (tx, rx) = channel(100); | |
let res = self | |
.retrieve_entities( | |
ENTITIES_TABLE, | |
ENTITIES_MODEL_RELATION_TABLE, | |
ENTITIES_ENTITY_RELATION_COLUMN, | |
query, | |
) | |
.await | |
.map_err(|e| Status::internal(e.to_string()))?; | |
tokio::spawn(async move { | |
for (i, entity) in res.entities.iter().enumerate() { | |
tx.send(Ok(RetrieveEntitiesStreamingResponse { | |
entity: Some(entity.clone()), | |
remaining_count: (res.total_count - (i + 1) as u32), | |
})) | |
.await | |
.unwrap(); | |
} | |
}); | |
Ok( | |
Response::new( | |
Box::pin(ReceiverStream::new(rx)) as Self::RetrieveEntitiesStreamingStream | |
), | |
) | |
} | |
async fn retrieve_entities_streaming( | |
&self, | |
request: Request<RetrieveEntitiesRequest>, | |
) -> ServiceResult<Self::RetrieveEntitiesStreamingStream> { | |
let query = request | |
.into_inner() | |
.query | |
.ok_or_else(|| Status::invalid_argument("Missing query argument"))?; | |
let (tx, rx) = channel(100); | |
let res = self | |
.retrieve_entities( | |
ENTITIES_TABLE, | |
ENTITIES_MODEL_RELATION_TABLE, | |
ENTITIES_ENTITY_RELATION_COLUMN, | |
query, | |
) | |
.await | |
.map_err(|e| Status::internal(e.to_string()))?; | |
tokio::spawn(async move { | |
for (i, entity) in res.entities.iter().enumerate() { | |
if let Err(e) = tx.send(Ok(RetrieveEntitiesStreamingResponse { | |
entity: Some(entity.clone()), | |
remaining_count: (res.total_count - (i + 1) as u32), | |
})) | |
.await | |
{ | |
// Optionally log the error | |
eprintln!("Failed to send response: {}", e); | |
break; | |
} | |
} | |
}); | |
Ok( | |
Response::new( | |
Box::pin(ReceiverStream::new(rx)) as Self::RetrieveEntitiesStreamingStream | |
), | |
) | |
} |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (1)
crates/torii/grpc/src/types/mod.rs (1)
43-43
: Ohayo, sensei! Consider renaming the new field for clarity.The new
dont_include_hashed_keys
field adds functionality to exclude hashed keys from query results. However, the negative boolean naming might be confusing. Consider renaming it toinclude_hashed_keys
for better clarity and consistency with positive boolean naming conventions.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (3)
- crates/torii/grpc/src/server/mod.rs (28 hunks)
- crates/torii/grpc/src/types/mod.rs (2 hunks)
- crates/torii/grpc/src/types/schema.rs (2 hunks)
🧰 Additional context used
🔇 Additional comments (9)
crates/torii/grpc/src/types/schema.rs (4)
Line range hint
8-20
: Ohayo sensei! Excellent improvements in error handlingThe enhancements to the
SchemaError
enum and the improved error handling throughout the file are commendable. The new error variants provide more specific error information, which will be valuable for debugging and error reporting.The use of
ok_or
withSchemaError::MissingExpectedData
in variousTryFrom
implementations is a good practice. It ensures that missing data is properly reported and helps maintain the integrity of the conversion process.For example, the error handling in the
TryFrom<proto::types::Primitive>
implementation is well done:let value = primitive .primitive_type .ok_or(SchemaError::MissingExpectedData("primitive_type".to_string()))?;This approach provides clear and specific error messages, which will be helpful for troubleshooting issues in the future.
Also applies to: 66-68, 148-150
Line range hint
1-264
: Ohayo sensei! Well-structured and consistent implementationThe overall structure and consistency of the file are commendable. The
TryFrom
andFrom
implementations follow a consistent pattern throughout the file, which enhances readability and maintainability.Key points of approval:
- Consistent naming conventions are used throughout the file.
- The structure of
TryFrom
andFrom
implementations is uniform, making it easy to understand and maintain.- The error handling approach is consistent across different implementations.
This consistency will make it easier for other developers to understand and work with this code in the future. Great job on maintaining a clean and organized codebase!
Line range hint
245-264
: Ohayo sensei! Consider adding a wildcard arm for future-proofingThe updated implementation for
TryFrom<proto::types::Ty>
looks good and handles all current cases ofTyType
. However, to make your code more robust against future changes, it's a good practice to include a wildcard arm in the match statement.Consider adding a wildcard arm to handle any unexpected
TyType
variants that might be added in the future:proto::types::ty::TyType::Array(array) => Ok(Ty::Array( array.children.into_iter().map(TryInto::try_into).collect::<Result<Vec<_>, _>>()?, )), proto::types::ty::TyType::Bytearray(string) => Ok(Ty::ByteArray(string)), + _ => Err(SchemaError::UnsupportedType("Unknown TyType variant".to_string())),
This change will ensure that your code gracefully handles any new variants that might be added to
TyType
in the future.To verify the completeness of our
TyType
handling, let's search for all definitions and usages ofTyType
:This will help us ensure that we're handling all current variants and identify any potential places where new variants might be added in the future.
✅ Verification successful
Wildcard Arm Verification Completed
The
TryFrom<proto::types::Ty>
implementation currently handles all existingTyType
variants. While adding a wildcard arm isn't necessary at this time, it could serve as a good practice for future-proofing your code against potential new variants.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Search for TyType definitions and usages rg --type rust 'enum TyType|TyType::' -C 3Length of output: 3212
189-242
:⚠️ Potential issueOhayo sensei! Be cautious with
unwrap_or_default()
usageThe updated implementation matches the new structure well. However, the use of
unwrap_or_default()
for all Option types could potentially mask missing data by replacingNone
with default values (e.g.,0
for numerics).Consider explicitly handling
None
cases or propagating theOption
to preserve the distinction betweenNone
and a default value. Here's an example of how you could modify the code:- Primitive::I8(i8) => { - proto::types::primitive::PrimitiveType::I8(i8.unwrap_or_default() as i32) - } + Primitive::I8(i8) => { + proto::types::primitive::PrimitiveType::I8(i8.map(|v| v as i32).unwrap_or_else(|| -1)) // Use a sentinel value like -1 for None + }Apply similar changes for other primitive types to ensure that missing values are appropriately handled and not silently replaced with defaults.
To verify the impact of this change, we can search for usages of these primitive types in the codebase:
This will help us understand if there are any cases where
None
values are expected and should be preserved.crates/torii/grpc/src/types/mod.rs (2)
202-207
: Ohayo, sensei! The implementation looks good!The
From<Query>
implementation forproto::types::Query
has been correctly updated to include the newdont_include_hashed_keys
field. This change ensures consistency between theQuery
struct and its proto representation.
Line range hint
1-1
: Ohayo, sensei! Could you provide more context on the removedFrom<Value>
implementation?The AI-generated summary mentions that the
From<Value>
implementation forproto::types::Value
has been removed. However, this implementation is not visible in the provided code snippet. Could you please provide more information about why this implementation was removed and howValue
types will be converted to proto types going forward?crates/torii/grpc/src/server/mod.rs (3)
1157-1192
: Ohayo, sensei! Excellent implementation ofretrieve_entities_streaming
The addition of the
retrieve_entities_streaming
method enhances the server by enabling streaming of entities. This approach can significantly improve performance and resource usage for clients handling large datasets.
1271-1282
: Ohayo, sensei! Well-defined CORS configuration constantsDefining
DEFAULT_MAX_AGE
,DEFAULT_EXPOSED_HEADERS
, andDEFAULT_ALLOW_HEADERS
as constants improves code readability and maintainability. It centralizes CORS settings, making future adjustments more straightforward.
1061-1062
: Ohayo, sensei! Clear definition of streaming response typesThe introduction of
RetrieveEntitiesStreamingResponseStream
type alias enhances code clarity by simplifying type definitions for streaming responses.
proto::types::primitive::PrimitiveType::I64(int) => Primitive::I64(Some(*int)), | ||
proto::types::primitive::PrimitiveType::I128(bytes) => Primitive::I128(Some( | ||
i128::from_be_bytes(bytes.as_slice().try_into().map_err(SchemaError::FromSlice)?), | ||
)), | ||
proto::types::primitive::PrimitiveType::U8(int) => Primitive::U8(Some(*int as u8)), | ||
proto::types::primitive::PrimitiveType::U16(int) => Primitive::U16(Some(*int as u16)), | ||
proto::types::primitive::PrimitiveType::U32(int) => Primitive::U32(Some(*int)), | ||
proto::types::primitive::PrimitiveType::U64(int) => Primitive::U64(Some(*int)), | ||
proto::types::primitive::PrimitiveType::U128(bytes) => Primitive::U128(Some( | ||
u128::from_be_bytes(bytes.as_slice().try_into().map_err(SchemaError::FromSlice)?), | ||
)), | ||
proto::types::primitive::PrimitiveType::Usize(int) => Primitive::USize(Some(*int)), | ||
proto::types::primitive::PrimitiveType::Felt252(felt) => { | ||
Primitive::Felt252(Some(Felt::from_bytes_be_slice(felt.as_slice()))) | ||
} | ||
proto::types::value::ValueType::ByteValue(bytes) => { | ||
match proto::types::PrimitiveType::from_i32(primitive_type) { | ||
Some(proto::types::PrimitiveType::I128) => { | ||
Primitive::I128(Some(i128::from_be_bytes( | ||
bytes.as_slice().try_into().map_err(SchemaError::FromSlice)?, | ||
))) | ||
} | ||
Some(proto::types::PrimitiveType::U128) => { | ||
Primitive::U128(Some(u128::from_be_bytes( | ||
bytes.as_slice().try_into().map_err(SchemaError::FromSlice)?, | ||
))) | ||
} | ||
Some(proto::types::PrimitiveType::U256) => { | ||
Primitive::U256(Some(U256::from_be_slice(bytes.as_slice()))) | ||
} | ||
Some(proto::types::PrimitiveType::Felt252) => { | ||
Primitive::Felt252(Some(Felt::from_bytes_be_slice(bytes.as_slice()))) | ||
} | ||
Some(proto::types::PrimitiveType::ClassHash) => { | ||
Primitive::ClassHash(Some(Felt::from_bytes_be_slice(bytes.as_slice()))) | ||
} | ||
Some(proto::types::PrimitiveType::ContractAddress) => { | ||
Primitive::ContractAddress(Some(Felt::from_bytes_be_slice( | ||
bytes.as_slice(), | ||
))) | ||
} | ||
_ => return Err(SchemaError::UnsupportedType("ByteValue".to_string())), | ||
} | ||
proto::types::primitive::PrimitiveType::ClassHash(felt) => { | ||
Primitive::ClassHash(Some(Felt::from_bytes_be_slice(felt.as_slice()))) | ||
} | ||
proto::types::value::ValueType::StringValue(str) => { | ||
match proto::types::PrimitiveType::from_i32(primitive_type) { | ||
Some(proto::types::PrimitiveType::I8) => { | ||
Primitive::I8(Some(str.parse().map_err(SchemaError::ParseIntError)?)) | ||
} | ||
Some(proto::types::PrimitiveType::I16) => { | ||
Primitive::I16(Some(str.parse().map_err(SchemaError::ParseIntError)?)) | ||
} | ||
Some(proto::types::PrimitiveType::I32) => { | ||
Primitive::I32(Some(str.parse().map_err(SchemaError::ParseIntError)?)) | ||
} | ||
Some(proto::types::PrimitiveType::I64) => { | ||
Primitive::I64(Some(str.parse().map_err(SchemaError::ParseIntError)?)) | ||
} | ||
Some(proto::types::PrimitiveType::I128) => { | ||
Primitive::I128(Some(str.parse().map_err(SchemaError::ParseIntError)?)) | ||
} | ||
Some(proto::types::PrimitiveType::U8) => { | ||
Primitive::U8(Some(str.parse().map_err(SchemaError::ParseIntError)?)) | ||
} | ||
Some(proto::types::PrimitiveType::U16) => { | ||
Primitive::U16(Some(str.parse().map_err(SchemaError::ParseIntError)?)) | ||
} | ||
Some(proto::types::PrimitiveType::U32) => { | ||
Primitive::U32(Some(str.parse().map_err(SchemaError::ParseIntError)?)) | ||
} | ||
Some(proto::types::PrimitiveType::U64) => { | ||
Primitive::U64(Some(str.parse().map_err(SchemaError::ParseIntError)?)) | ||
} | ||
Some(proto::types::PrimitiveType::U128) => { | ||
Primitive::U128(Some(str.parse().map_err(SchemaError::ParseIntError)?)) | ||
} | ||
Some(proto::types::PrimitiveType::Usize) => { | ||
Primitive::USize(Some(str.parse().map_err(SchemaError::ParseIntError)?)) | ||
} | ||
Some(proto::types::PrimitiveType::Felt252) => { | ||
Primitive::Felt252(Some(Felt::from_str(str).map_err(SchemaError::FromStr)?)) | ||
} | ||
Some(proto::types::PrimitiveType::ClassHash) => Primitive::ClassHash(Some( | ||
Felt::from_str(str).map_err(SchemaError::FromStr)?, | ||
)), | ||
Some(proto::types::PrimitiveType::ContractAddress) => { | ||
Primitive::ContractAddress(Some( | ||
Felt::from_str(str).map_err(SchemaError::FromStr)?, | ||
)) | ||
} | ||
_ => return Err(SchemaError::UnsupportedType("StringValue".to_string())), | ||
} | ||
proto::types::primitive::PrimitiveType::ContractAddress(felt) => { | ||
Primitive::ContractAddress(Some(Felt::from_bytes_be_slice(felt.as_slice()))) | ||
} | ||
proto::types::primitive::PrimitiveType::U256(bytes) => Primitive::U256(Some( | ||
U256::from_be_bytes(bytes.as_slice().try_into().map_err(SchemaError::FromSlice)?), | ||
)), |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
💡 Codebase verification
Ohayo sensei! Potential data loss areas identified across the codebase due to unchecked integer conversions.
Multiple files exhibit similar unchecked casts for I8
, I16
, U8
, and U16
types, which may lead to data loss if values exceed the target type's range. It's recommended to implement checked conversions consistently across all instances to ensure data integrity and application stability.
🔗 Analysis chain
Ohayo sensei! Consider using checked conversions for integer types
The new implementation is more readable and maintainable. However, there are potential issues with integer conversions that may lead to data loss. For example, casting *int as i8
on line 154 could result in data loss if *int
exceeds the range of i8
(-128 to 127).
Consider using checked conversions to safely handle potential overflows. Here's an example of how you could modify the code:
- proto::types::primitive::PrimitiveType::I8(int) => Primitive::I8(Some(*int as i8)),
+ proto::types::primitive::PrimitiveType::I8(int) => Primitive::I8(Some(i8::try_from(*int).map_err(|_| SchemaError::InvalidConversion)?)),
Apply similar changes for other integer conversions (I16, U8, U16, etc.) to ensure safe handling of all integer types.
To verify the potential for overflow, we can search for usages of these primitive types:
This will help us understand if there are any cases where larger integers might be passed to these smaller types.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for usages of I8, I16, U8, and U16 primitive types
rg --type rust 'Primitive::(I8|I16|U8|U16)' -C 3
Length of output: 32691
Summary by CodeRabbit
New Features
Query
struct to include a new field for controlling the inclusion of hashed keys.Bug Fixes
Documentation