feat: [#280] Add Text, Json, etc methods #731
Conversation
Walkthrough

The pull request introduces significant enhancements to database schema management by modifying interfaces and structs related to column definitions across the various database grammars, adding new methods for defining column types such as Text and Json.
Actionable comments posted: 5
🧹 Outside diff range and nitpick comments (24)
contracts/database/schema/column.go (2)
Line range hint 39-50: Implementation details appear to be missing

The `Column` struct needs to be updated to support the new `GetAllowed()` interface method. Consider:

- Adding an `Allowed []string` field to store allowed values
- Adding a method to set allowed values for better encapsulation

Here's a suggested implementation:

```diff
 type Column struct {
 	Autoincrement bool
 	Collation     string
 	Comment       string
 	Default       string
 	Name          string
 	Nullable      bool
 	Type          string
 	TypeName      string
+	Allowed       []string
 }
```
8-9: Consider adding a setter method for allowed values

For consistency with other methods in the interface, consider adding:

```go
// SetAllowed sets the allowed values for the column
SetAllowed(values []string) ColumnDefinition
```

database/schema/grammars/wrap.go (2)
53-60: Add documentation explaining the SQL Server 'N' prefix

The implementation looks good, but would benefit from documentation explaining that the 'N' prefix in SQL Server is used for Unicode string literals.

```diff
+// Quotes wraps multiple string values in quotes, adding the 'N' prefix for SQL Server
+// to indicate Unicode string literals. For other drivers, it simply quotes the values.
 func (r *Wrap) Quotes(value []string) []string {
```
53-60: Consider handling edge cases

The method could be more robust by handling nil and empty slice inputs explicitly.

```diff
 func (r *Wrap) Quotes(value []string) []string {
+	if len(value) == 0 {
+		return []string{}
+	}
 	return collect.Map(value, func(v string, _ int) string {
```

database/schema/column.go (2)
9-9: Consider maintaining consistency with other fields

The `allowed` field differs from the established pattern where other fields use pointer types. Additionally, consider maintaining alphabetical ordering of fields for better readability.

```diff
-	allowed []string
+	allowed *[]string
```
Line range hint 9-37: Consider adding validation helpers

Since this implementation supports enum columns, it might be beneficial to add helper methods for validating values against the allowed list. This could help prevent invalid data at the application level.

Example helper method:

```go
func (r *ColumnDefinition) IsAllowedValue(value string) bool {
	if r.allowed == nil {
		return true // Non-enum columns allow any value
	}
	for _, allowed := range r.allowed {
		if allowed == value {
			return true
		}
	}
	return false
}
```

contracts/database/schema/grammar.go (2)
46-47: Consider documenting allowed values handling for TypeEnum.

While the method signature is correct, it would be helpful to document how implementers should handle the allowed values from the ColumnDefinition. This is particularly important since the AI summary mentions a new `GetAllowed()` method in the ColumnDefinition interface.

```diff
-	// TypeEnum Create the column definition for an enumeration type.
+	// TypeEnum Create the column definition for an enumeration type.
+	// The implementation should use column.GetAllowed() to retrieve the valid enum values.
```
52-55: Document the distinction between Json and Jsonb types.

Consider enhancing the documentation to clarify the difference between Json and Jsonb types, as this would help implementers choose the appropriate type for their database system.

```diff
-	// TypeJson Create the column definition for a json type.
-	TypeJson(column ColumnDefinition) string
-	// TypeJsonb Create the column definition for a jsonb type.
-	TypeJsonb(column ColumnDefinition) string
+	// TypeJson Create the column definition for a json type.
+	// Stores JSON data as text and must be parsed on each query.
+	TypeJson(column ColumnDefinition) string
+	// TypeJsonb Create the column definition for a jsonb type.
+	// Stores JSON data in a binary format for faster querying and indexing.
+	TypeJsonb(column ColumnDefinition) string
```

contracts/database/schema/blueprint.go (1)
Line range hint 14-77: Consider grouping related methods with interfaces.

Given the growing number of column type methods, consider splitting this large interface into smaller, focused interfaces (e.g., `TextColumns`, `JsonColumns`, etc.) that can be composed into the main `Blueprint` interface. This would improve maintainability and make the contract more modular.

database/schema/grammars/sqlite_test.go (1)
229-235: LGTM with suggestions for enhanced test coverage.

The test implementation correctly verifies the basic enum type functionality. However, consider adding more test cases to cover:

- Empty allowed values list
- Special characters in enum values
- Case sensitivity handling
- Single allowed value

Example additional test cases:

```go
func (s *SqliteSuite) TestTypeEnum() {
	tests := []struct {
		name     string
		allowed  []string
		expected string
	}{
		{
			name:     "basic enum",
			allowed:  []string{"a", "b"},
			expected: `varchar check ("a" in ('a', 'b'))`,
		},
		{
			name:     "empty allowed values",
			allowed:  []string{},
			expected: `varchar`,
		},
		{
			name:     "special characters",
			allowed:  []string{"user's", "admin's"},
			expected: `varchar check ("a" in ('user''s', 'admin''s'))`,
		},
		{
			name:     "single value",
			allowed:  []string{"active"},
			expected: `varchar check ("a" in ('active'))`,
		},
	}

	for _, test := range tests {
		s.Run(test.name, func() {
			mockColumn := mocksschema.NewColumnDefinition(s.T())
			mockColumn.EXPECT().GetName().Return("a").Once()
			mockColumn.EXPECT().GetAllowed().Return(test.allowed).Once()

			s.Equal(test.expected, s.grammar.TypeEnum(mockColumn))
		})
	}
}
```

database/schema/grammars/sqlite.go (2)
192-198: Consider adding JSON validation constraint.

While storing JSON as TEXT is correct for SQLite, consider adding a CHECK constraint to validate JSON format:

```diff
 func (r *Sqlite) TypeJson(column schema.ColumnDefinition) string {
-	return "text"
+	return fmt.Sprintf(`text check ("%s" is json)`, column.GetName())
 }

 func (r *Sqlite) TypeJsonb(column schema.ColumnDefinition) string {
-	return "text"
+	return fmt.Sprintf(`text check ("%s" is json)`, column.GetName())
 }
```
200-202: Consider deduplicating text type methods.

All text-related methods return the same type. Consider using a single helper method to reduce code duplication:

```diff
+func (r *Sqlite) typeText() string {
+	return "text"
+}
+
 func (r *Sqlite) TypeLongText(column schema.ColumnDefinition) string {
-	return "text"
+	return r.typeText()
 }

 func (r *Sqlite) TypeMediumText(column schema.ColumnDefinition) string {
-	return "text"
+	return r.typeText()
 }

 func (r *Sqlite) TypeText(column schema.ColumnDefinition) string {
-	return "text"
+	return r.typeText()
 }

 func (r *Sqlite) TypeTinyText(column schema.ColumnDefinition) string {
-	return "text"
+	return r.typeText()
 }
```

Also applies to: 208-210, 212-214, 220-222
database/schema/grammars/mysql.go (3)
242-249: Add documentation for JSON type compatibility

Consider adding documentation to explain that `TypeJsonb` exists for PostgreSQL compatibility but defaults to regular JSON in MySQL.

```diff
 func (r *Mysql) TypeJson(column schema.ColumnDefinition) string {
 	return "json"
 }

+// TypeJsonb returns "json" as MySQL doesn't have a native JSONB type.
+// This method exists for PostgreSQL compatibility.
 func (r *Mysql) TypeJsonb(column schema.ColumnDefinition) string {
 	return "json"
 }
```
250-261: Add documentation for text type size limits

Consider adding documentation to clarify the maximum size limits for each text type:

- TINYTEXT: 255 bytes
- TEXT: 65,535 bytes
- MEDIUMTEXT: 16,777,215 bytes
- LONGTEXT: 4,294,967,295 bytes

Also applies to: 262-263, 283-285
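The size thresholds above can be sketched as a small selection helper; note that `chooseMysqlTextType` is a hypothetical illustration, not part of the framework's API:

```go
package main

import "fmt"

// chooseMysqlTextType picks the smallest MySQL text type able to hold
// size bytes, following the documented per-type limits.
// Hypothetical helper for illustration only.
func chooseMysqlTextType(size uint64) string {
	switch {
	case size <= 255:
		return "tinytext"
	case size <= 65535:
		return "text"
	case size <= 16777215:
		return "mediumtext"
	default:
		return "longtext"
	}
}

func main() {
	fmt.Println(chooseMysqlTextType(200))    // tinytext
	fmt.Println(chooseMysqlTextType(100000)) // mediumtext
}
```

This makes the boundaries explicit, which is the kind of information the suggested doc comments would carry.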
279-281: Add support for unsigned TINYINT

Consider adding support for unsigned TINYINT to allow values from 0 to 255.

```diff
 func (r *Mysql) TypeTinyInteger(column schema.ColumnDefinition) string {
-	return "tinyint"
+	if column.GetUnsigned() {
+		return "tinyint unsigned"
+	}
+	return "tinyint"
 }
```

database/schema/grammars/sqlserver.go (2)
251-264: LGTM! Consider adding documentation about text type mappings

The implementation correctly maps all text types to `nvarchar(max)` as per SQL Server's type system. Consider adding a comment explaining the mapping of MySQL/PostgreSQL text types to SQL Server equivalents for better maintainability.
Line range hint 214-287: Consider adding comprehensive type mapping documentation

The implementation shows good consistency in type mappings, but consider adding:

- A type mapping table in comments or documentation
- SQL Server version compatibility requirements
- Performance implications of using certain types (especially for JSON and text fields)
- Migration considerations from other databases

This would greatly help maintainers and users understand the type system differences between databases.
database/schema/grammars/postgres.go (2)
261-275: LGTM! Appropriate mapping of text types to PostgreSQL's native TEXT type

The implementation correctly maps the various text types to PostgreSQL's native TEXT type, which is the recommended approach since PostgreSQL's TEXT type has no practical length limit.

Note: PostgreSQL's TEXT type is the most efficient choice, as it has no overhead compared to the other text types and no performance penalty for storing large values.
253-259: LGTM! Correct implementation of JSON types

The implementation correctly supports both of PostgreSQL's JSON types:

- `json`: stores JSON data as text
- `jsonb`: stores JSON data in a binary format for better performance

Note: JSONB is generally preferred over JSON in PostgreSQL as it offers better performance for querying and indexing.
database/schema/grammars/mysql_test.go (1)
308-313: Consider adding edge cases to the enum test

The test correctly verifies basic enum functionality, but could be enhanced to cover additional scenarios:

- Empty allowed values list
- Values containing special characters or quotes
- Case sensitivity handling

Consider expanding the test with additional cases:

```diff
 func (s *MysqlSuite) TestTypeEnum() {
-	mockColumn := mocksschema.NewColumnDefinition(s.T())
-	mockColumn.EXPECT().GetAllowed().Return([]string{"a", "b"}).Once()
-
-	s.Equal(`enum('a', 'b')`, s.grammar.TypeEnum(mockColumn))
+	tests := []struct {
+		name     string
+		allowed  []string
+		expected string
+	}{
+		{
+			name:     "basic values",
+			allowed:  []string{"a", "b"},
+			expected: "enum('a', 'b')",
+		},
+		{
+			name:     "empty values",
+			allowed:  []string{},
+			expected: "enum()",
+		},
+		{
+			name:     "values with special chars",
+			allowed:  []string{"user's", "admin's"},
+			expected: "enum('user\\'s', 'admin\\'s')",
+		},
+	}
+
+	for _, test := range tests {
+		s.Run(test.name, func() {
+			mockColumn := mocksschema.NewColumnDefinition(s.T())
+			mockColumn.EXPECT().GetAllowed().Return(test.allowed).Once()
+			s.Equal(test.expected, s.grammar.TypeEnum(mockColumn))
+		})
+	}
 }
```

database/schema/blueprint_test.go (1)
139-157: LGTM! Well-structured test following established patterns.

The test case is well-implemented with good coverage of both default and custom length scenarios. It maintains consistency with other similar tests in the file (e.g., TestString).

A minor suggestion to enhance test coverage: consider adding a test case for edge cases such as zero or negative length values to ensure proper validation, similar to this:

```diff
 func (s *BlueprintTestSuite) TestChar() {
 	// ... existing test cases ...
+
+	// Test edge cases
+	s.blueprint.Char(column, 0)
+	s.Contains(s.blueprint.GetAddedColumns(), &ColumnDefinition{
+		length: &length, // Should fall back to default length
+		name:   &column,
+		ttype:  &ttype,
+	})
 }
```

database/schema/grammars/postgres_test.go (1)
351-357: Enhance test coverage for enum type handling

While the basic test case looks good, consider adding more test cases to cover:

- Empty allowed values array
- Values containing special characters or quotes
- Case sensitivity handling
- Maximum varchar length validation

Example test cases to add:

```go
func (s *PostgresSuite) TestTypeEnum() {
	tests := []struct {
		name     string
		colName  string
		allowed  []string
		expected string
	}{
		{
			name:     "basic enum",
			colName:  "status",
			allowed:  []string{"active", "inactive"},
			expected: `varchar(255) check ("status" in ('active', 'inactive'))`,
		},
		{
			name:     "empty allowed values",
			colName:  "empty_enum",
			allowed:  []string{},
			expected: `varchar(255)`,
		},
		{
			name:     "values with special chars",
			colName:  "special",
			allowed:  []string{"can't", "won't"},
			expected: `varchar(255) check ("special" in ('can''t', 'won''t'))`,
		},
	}

	for _, test := range tests {
		s.Run(test.name, func() {
			mockColumn := mocksschema.NewColumnDefinition(s.T())
			mockColumn.EXPECT().GetName().Return(test.colName).Once()
			mockColumn.EXPECT().GetAllowed().Return(test.allowed).Once()

			s.Equal(test.expected, s.grammar.TypeEnum(mockColumn))
		})
	}
}
```

database/schema/schema_test.go (2)
1632-1651: Consider grouping related column types together

The column definitions could be organized better by grouping related types together (e.g., all text types, all numeric types, etc.). This would improve readability and maintenance.

Consider reorganizing the columns in this order:

- ID/Primary key columns
- Numeric types (BigInteger, Integer, Float, etc.)
- String/Text types (Char, Text, LongText, etc.)
- JSON types (Json, Jsonb)
- Date/Time types (when uncommented)
Line range hint 1144-1145: Track unimplemented test cases

There are TODO comments for implementing tests for:

- Drop all types
- Drop all views

These should be tracked and implemented to ensure complete test coverage of the schema functionality.

Would you like me to help create GitHub issues to track these pending implementations?

Also applies to: 1149-1150
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
⛔ Files ignored due to path filters (3)
- `mocks/database/schema/Blueprint.go` is excluded by `!mocks/**`
- `mocks/database/schema/ColumnDefinition.go` is excluded by `!mocks/**`
- `mocks/database/schema/Grammar.go` is excluded by `!mocks/**`
📒 Files selected for processing (16)
- `contracts/database/schema/blueprint.go` (4 hunks)
- `contracts/database/schema/column.go` (1 hunks)
- `contracts/database/schema/grammar.go` (1 hunks)
- `database/schema/blueprint.go` (7 hunks)
- `database/schema/blueprint_test.go` (1 hunks)
- `database/schema/column.go` (2 hunks)
- `database/schema/grammars/mysql.go` (4 hunks)
- `database/schema/grammars/mysql_test.go` (1 hunks)
- `database/schema/grammars/postgres.go` (4 hunks)
- `database/schema/grammars/postgres_test.go` (1 hunks)
- `database/schema/grammars/sqlite.go` (3 hunks)
- `database/schema/grammars/sqlite_test.go` (1 hunks)
- `database/schema/grammars/sqlserver.go` (4 hunks)
- `database/schema/grammars/sqlserver_test.go` (1 hunks)
- `database/schema/grammars/wrap.go` (2 hunks)
- `database/schema/schema_test.go` (14 hunks)
🔇 Additional comments (27)
contracts/database/schema/column.go (1)
8-9: LGTM on the interface addition!

The new `GetAllowed()` method follows the existing interface patterns and supports the enum column functionality.
database/schema/grammars/wrap.go (1)
8-8: LGTM: Import addition is appropriate

The collect package import is necessary for the new Quotes method implementation.
contracts/database/schema/grammar.go (3)
40-41: LGTM! TypeChar method signature and documentation.

The method signature and documentation follow the established pattern and are consistent with the interface's style.
56-57: LGTM! Text-related method signatures and documentation.

The various text type methods (LongText, MediumText, Text, TinyText) are well-defined and follow a consistent pattern. The hierarchy of text sizes is clear from the method names.

Also applies to: 60-61, 62-63, 66-67
40-67: Verify implementation requirements across different database systems.

The interface additions look good overall. However, please ensure that all supported database systems (mentioned in the implementation) can handle these column types, particularly:

- The distinction between Json and Jsonb (as some databases might not support both)
- The various text type sizes (as size limits might vary between databases)
contracts/database/schema/blueprint.go (3)
52-53: LGTM! Text-related methods are well-structured.

The new text-related methods (`LongText`, `MediumText`, `Text`, `TinyText`) follow a consistent pattern and provide a complete range of text column types.

Also applies to: 58-59, 70-71, 76-77
48-51: Verify database compatibility for JSON types.

The addition of `Json` and `Jsonb` methods is good, but ensure that the underlying database implementations handle these types appropriately, especially `Jsonb`, which is PostgreSQL-specific.
14-15: Ensure proper validation in concrete implementations.

For the new `Char` and `Enum` methods:

- `Char`: consider validating length constraints based on the database engine's limitations
- `Enum`: ensure that concrete implementations validate the provided array of allowed values

Also applies to: 24-25
database/schema/grammars/sqlite.go (1)
168-170: LGTM! Correctly implements SQLite's type affinity.

The implementation aligns with SQLite's type affinity rules, where both CHAR and VARCHAR are treated as TEXT storage class.
database/schema/grammars/mysql.go (1)
Line range hint 213-286: Verify test coverage for new type methods

Please ensure comprehensive test coverage for the new type methods, including edge cases such as:

- CHAR with invalid lengths
- ENUM with empty allowed values
- JSON with complex data structures
- Text types with data exceeding size limits
database/schema/blueprint.go (5)
32-32: LGTM: Good refactoring of numeric column methods

The refactoring to use `createAndAddColumn` consistently across all numeric column types reduces code duplication and improves maintainability.

Also applies to: 64-64, 68-68, 148-148, 172-172, 192-192, 216-216
45-55: LGTM: Well-implemented Char column with length handling

The implementation correctly handles both default and custom length specifications using `constants.DefaultStringLength`.
77-82: LGTM: Good implementation of Enum column type

The method properly stores allowed values for enum validation.

Please ensure that the database driver's grammar implementation correctly handles these allowed values when generating the DDL.
155-161: LGTM: Clean implementation of Text and JSON column types

The implementations are consistent and follow the established pattern using `createAndAddColumn`.

Also applies to: 163-165, 175-176, 207-209, 219-220
297-312: LGTM: Well-structured column creation implementation

The method correctly handles both creation and modification scenarios. The conditional command addition for non-create operations is properly implemented.

Since this method modifies shared state (the columns slice), please verify thread safety if this code might be used in concurrent operations.
database/schema/grammars/sqlserver_test.go (1)
287-293: LGTM! Well-structured test case for enum type in SQL Server.

The test correctly verifies:

- Base type as nvarchar(255)
- CHECK constraint syntax
- Unicode string literals (N prefix)
- Multiple allowed values handling

Consider adding test cases for:

- Empty allowed values
- Values containing special characters
- Case sensitivity handling
database/schema/grammars/sqlserver.go (4)
214-216: LGTM! Correct use of SQL Server's Unicode fixed-length character type

The implementation properly uses `nchar` with a length specification, which is the appropriate Unicode-aware fixed-length character type in SQL Server.
243-250: Consider JSON validation and SQL Server version compatibility

While `nvarchar(max)` works for storing JSON data, consider these improvements:

- Add JSON validation using the `ISJSON` function for SQL Server 2016+
- Add a comment noting that Jsonb is identical to Json in SQL Server
- Document minimum SQL Server version requirements
280-282: Add documentation about tinyint range differences

While the implementation is correct, SQL Server's `tinyint` is unsigned (0-255), unlike other databases where it's typically signed (-128 to 127). Consider adding a comment to warn about this difference to prevent potential data truncation issues.
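The range mismatch can be made concrete with a small sketch; `fitsSqlServerTinyint` and `fitsSignedTinyint` are hypothetical helpers for illustration, not framework code:

```go
package main

import "fmt"

// fitsSqlServerTinyint reports whether v fits SQL Server's unsigned
// tinyint range (0 to 255).
func fitsSqlServerTinyint(v int) bool { return v >= 0 && v <= 255 }

// fitsSignedTinyint reports whether v fits the signed tinyint range
// (-128 to 127) typical of other databases, e.g. MySQL's TINYINT.
func fitsSignedTinyint(v int) bool { return v >= -128 && v <= 127 }

func main() {
	// 200 fits SQL Server's tinyint but overflows a signed tinyint,
	// and -1 does the reverse — hence the truncation risk.
	fmt.Println(fitsSqlServerTinyint(200), fitsSignedTinyint(200)) // true false
	fmt.Println(fitsSqlServerTinyint(-1), fitsSignedTinyint(-1))   // false true
}
```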
284-286: LGTM! Appropriate mapping for tinytext

The implementation correctly maps `tinytext` to `nvarchar(255)`, maintaining equivalent capacity and Unicode support.
database/schema/grammars/postgres.go (4)
215-222: LGTM! Correct implementation of CHAR type

The implementation properly handles both fixed-length and default CHAR types according to PostgreSQL specifications.
298-300: LGTM! Reasonable mapping for TINYTEXT

The implementation maps TINYTEXT to VARCHAR(255), which is a sensible choice for maintaining compatibility with other databases that have this type.
232-234: Consider using PostgreSQL's native ENUM type

While the current implementation using VARCHAR with a CHECK constraint is functional, PostgreSQL provides a native ENUM type that might be more appropriate. Native ENUMs offer:

- Type safety
- Storage efficiency
- Better performance

Example of native ENUM implementation:

```diff
-return fmt.Sprintf(`varchar(255) check ("%s" in (%s))`, column.GetName(), strings.Join(r.wrap.Quotes(column.GetAllowed()), ", "))
+return fmt.Sprintf("CREATE TYPE %s_enum AS ENUM (%s); %s_enum",
+	column.GetName(),
+	strings.Join(r.wrap.Quotes(column.GetAllowed()), ", "),
+	column.GetName())
```

Please ensure that:

- The column name is properly escaped to prevent SQL injection
- The allowed values are properly validated and escaped
294-296: LGTM! Appropriate mapping for TINYINT to SMALLINT

The implementation correctly maps TINYINT to PostgreSQL's SMALLINT type, which is the closest equivalent since PostgreSQL doesn't have a native TINYINT type.
database/schema/grammars/mysql_test.go (1)
Line range hint 1-314: Add tests for Text and Json types

Based on the PR objectives mentioning new Text and Json methods, corresponding test cases should be added to maintain consistent test coverage. Consider adding:

- `TestTypeText()` for text type handling
- `TestTypeJson()` for JSON type handling

This will ensure the new functionality is properly tested like other column types.

Would you like me to help generate these test cases following the existing patterns?
database/schema/schema_test.go (2)
1646-1651: Verify JSON handling across different databases

The implementation of JSON columns varies significantly across databases:

- PostgreSQL: native json and jsonb types
- MySQL: JSON type
- SQLite: stored as TEXT
- SQL Server: stored as nvarchar(max)

Ensure that the application layer properly handles these differences, especially for jsonb, which may not have equivalent functionality in non-PostgreSQL databases.
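The per-driver mapping listed above can be sketched as a lookup; the driver names and `jsonColumnType` function are illustrative assumptions, not the framework's actual constants or API:

```go
package main

import "fmt"

// jsonColumnType returns the column type each driver uses for JSON
// data, per the mapping described above; binary selects jsonb where
// the driver supports it. Hypothetical sketch for illustration.
func jsonColumnType(driver string, binary bool) string {
	switch driver {
	case "postgres":
		if binary {
			return "jsonb"
		}
		return "json"
	case "mysql":
		return "json"
	case "sqlite":
		return "text" // no native JSON column type
	case "sqlserver":
		return "nvarchar(max)" // no native JSON column type
	default:
		return ""
	}
}

func main() {
	// Requesting binary JSON only changes the result on PostgreSQL.
	fmt.Println(jsonColumnType("postgres", true)) // jsonb
	fmt.Println(jsonColumnType("sqlite", true))   // text
}
```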
1668-1687: LGTM! Comprehensive test coverage for column types

The test assertions thoroughly verify:

- Column presence
- Type mappings
- Collation settings
- Default values
- Nullability
- Comments

The implementation provides good coverage across different database drivers.
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (1)
database/schema/grammars/wrap_test.go (1)
49-56: Consider adding more test cases for better coverage.

To make the test more robust, consider adding:

- Edge cases: empty slice, nil slice, strings containing special characters or quotes
- Tests for other supported drivers (MySQL, SQLite)
- Documentation comments explaining the expected behavior for each driver

Example implementation:

```diff
 func (s *WrapTestSuite) TestQuotes() {
+	// Test empty and nil slices
+	s.Equal([]string{}, s.wrap.Quotes([]string{}))
+	s.Equal([]string{}, s.wrap.Quotes(nil))
+
+	// Test PostgreSQL driver (default)
 	result := s.wrap.Quotes([]string{"value1", "value2"})
 	s.Equal([]string{"'value1'", "'value2'"}, result)

+	// Test strings with special characters
+	result = s.wrap.Quotes([]string{"val'ue1", "val\"ue2"})
+	s.Equal([]string{"'val''ue1'", "'val\"ue2'"}, result)

+	// Test SQL Server driver
 	s.wrap.driver = database.DriverSqlserver
 	result = s.wrap.Quotes([]string{"value1", "value2"})
 	s.Equal([]string{"N'value1'", "N'value2'"}, result)
+
+	// Test MySQL driver
+	s.wrap.driver = database.DriverMysql
+	result = s.wrap.Quotes([]string{"value1", "value2"})
+	s.Equal([]string{"'value1'", "'value2'"}, result)
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
- `database/schema/grammars/wrap_test.go` (1 hunks)
🔇 Additional comments (1)
database/schema/grammars/wrap_test.go (1)
49-56: LGTM! The test implementation looks good.

The test correctly verifies the `Quotes` method behavior for both PostgreSQL and SQL Server drivers.
⚠️ Performance Alert ⚠️
Possible performance regression was detected for benchmark.
Benchmark result of this commit is worse than the previous benchmark result, exceeding the threshold of 1.50.

| Benchmark suite | Current: b1bdbbd | Previous: ea85653 | Ratio |
|---|---|---|---|
| Benchmark_Fatal | 2e-7 ns/op, 0 B/op, 0 allocs/op | 1e-7 ns/op, 0 B/op, 0 allocs/op | 2 |
| Benchmark_Fatal - ns/op | 2e-7 ns/op | 1e-7 ns/op | 2 |
This comment was automatically generated by workflow using github-action-benchmark.
Codecov Report

Attention: Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff             @@
##           master     #731      +/-   ##
==========================================
- Coverage   69.58%   69.37%   -0.22%
==========================================
  Files         210      210
  Lines       16943    17022      +79
==========================================
+ Hits        11790    11809      +19
- Misses       4496     4556      +60
  Partials      657      657
```

☔ View full report in Codecov by Sentry.
📑 Description
Summary by CodeRabbit
✅ Checks