FE Release 2024-09-16 #3129
Conversation
- @synapsecns/[email protected]
* feat: scaffold `MulticallTarget`
* test: define unit tests for Multicall target
* test: add cases with no revert data
* test: check that `msg.sender` is preserved
* feat: implement `MulticallTarget`
* fix: bubble reverts correctly
* docs: add NatSpec
Co-authored-by: Trajan0x <[email protected]>
* adding more txn tracing and new sdk methods
* small naming nits
* adding new function to return status and information at once
- @synapsecns/[email protected]
Walkthrough
This pull request introduces multiple changes across various files. A new linter for detecting magic numbers is added in the .golangci.yml configuration.
Codecov Report
Attention: Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## fe-release #3129 +/- ##
====================================================
+ Coverage 33.38834% 38.23421% +4.84586%
====================================================
Files 343 423 +80
Lines 20603 24363 +3760
Branches 82 119 +37
====================================================
+ Hits 6879 9315 +2436
- Misses 13147 14309 +1162
- Partials 577 739 +162
Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Bundle Report
Changes will increase total bundle size by 4.18kB (0.01%) ⬆️. This is within the configured threshold ✅
Detailed changes
ℹ️ Bundle size includes cached data from a previous commit
Actionable comments posted: 3
Outside diff range and nitpick comments (7)
packages/contracts-rfq/package.json (1)
4-4: Approved: Version bump to 0.3.0 indicates significant updates.
The version change from 0.2.14 to 0.3.0 suggests the addition of new features or improvements in a backward-compatible way, as per semantic versioning conventions.
Please ensure that the release notes or changelog document the specific changes and enhancements introduced in this version. This will help users and developers understand the scope and impact of the update.
docs/bridge/docs/rfq/API/API.md (1)
28-36: Great addition to the API documentation!
The new section "API Version Changes" provides clear guidelines on how API users should handle version changes. Including the "X-Api-Version" header in each response is a good practice for API versioning, and referring to the versions.go file for detailed version information is helpful (a small client-side check is sketched after the lint notes below).
There seems to be a missing preposition in this sentence:
Likewise, it will be their responsibility review the versions.go file, to research & understand how any changes may affect their integration, and to implement any necessary adjustments resulting from the API changes.
Consider adding the word "to" before "review":
-Likewise, it will be their responsibility review the versions.go file, to research & understand how any changes may affect their integration, and to implement any necessary adjustments resulting from the API changes.
+Likewise, it will be their responsibility to review the versions.go file, to research & understand how any changes may affect their integration, and to implement any necessary adjustments resulting from the API changes.
Tools
LanguageTool
[uncategorized] ~36-~36: Possible missing preposition found.
Context: ...kewise, it will be their responsibility review the versions.go file, to research & und...(AI_HYDRA_LEO_MISSING_TO)
Markdownlint
28-28: null
Emphasis used instead of a heading(MD036, no-emphasis-as-heading)
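As a small illustration of how an API consumer might act on this header, the sketch below checks X-Api-Version on each response; the header name comes from the documentation above, while the URL and pinned version are placeholders, not part of the PR:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Placeholder endpoint; substitute the real RFQ API URL.
	resp, err := http.Get("https://rfq.example.invalid/quotes")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	// Pin the version the integration was built and tested against.
	const expectedVersion = "v1" // placeholder value
	if got := resp.Header.Get("X-Api-Version"); got != expectedVersion {
		fmt.Printf("API version changed (%q != %q): review versions.go before trusting responses\n", got, expectedVersion)
	}
}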
services/rfq/api/docs/swagger.yaml (1)
102-105: Approved with a suggestion
Adding the X-Api-Version header to the API response is a good practice for API versioning and evolution.
Consider enhancing the description to include a direct link to the relevant section of the API documentation for more detailed information about the versioning scheme and its implications.
core/metrics/README.md (1)
15-34: Excellent documentation for configuring multiple OTLP exporters!
The table clearly outlines the environment variables for specifying endpoints and transport protocols for additional OTLP exporters. This will greatly help users in configuring multiple exporters, which is not supported by default in OpenTelemetry clients.
Please address the TODO comment by fully documenting the options for OTEL_EXPORTER_OTLP_SECURE_MODE and OTEL_EXPORTER_OTLP_HEADERS to ensure that the documentation is complete and comprehensive.
core/metrics/otlp.go (3)
41-57: LGTM! Consider adding tests to cover error handling.
The code dynamically creates additional exporters based on the presence of environment variables, using a loop counter to generate suffixes for the variable names. This allows a flexible number of exporters to be configured (a rough sketch of this pattern appears after the coverage notes below).
The loop correctly breaks when no more environment variables are found, and the created exporters are properly appended to the list.
To improve test coverage, consider adding tests to cover the error-handling paths at lines 52-55. Do you want me to generate the test code or open a GitHub issue to track this task?
Tools
GitHub Check: codecov/patch
[warning] 52-55: core/metrics/otlp.go#L52-L55
Added lines #L52 - L55 were not covered by tests
[warning] 57-57: core/metrics/otlp.go#L57
Added line #L57 was not covered by tests
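For orientation only, the loop described above might look roughly like the sketch below; the numbered environment-variable name and the makeExporter callback are illustrative assumptions, not the actual code in otlp.go:

package metrics

import (
	"context"
	"fmt"
	"os"

	"go.opentelemetry.io/otel/sdk/trace"
)

// buildAdditionalExporters keeps creating exporters while a numbered endpoint
// variable is present and stops at the first missing suffix.
func buildAdditionalExporters(
	ctx context.Context,
	makeExporter func(ctx context.Context, endpoint string) (trace.SpanExporter, error),
) ([]trace.SpanExporter, error) {
	var exporters []trace.SpanExporter
	for i := 1; ; i++ {
		// Variable name is an assumption for illustration.
		endpoint := os.Getenv(fmt.Sprintf("OTEL_EXPORTER_OTLP_ENDPOINT_%d", i))
		if endpoint == "" {
			break // no more numbered exporters configured
		}
		exp, err := makeExporter(ctx, endpoint)
		if err != nil {
			return nil, fmt.Errorf("could not create exporter %d: %w", i, err)
		}
		exporters = append(exporters, exp)
	}
	return exporters, nil
}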
131-176: LGTM! Consider adding tests to cover error handling.
The makeOTLPExporter function encapsulates the logic for creating an OTLP exporter based on the provided configuration. It correctly retrieves the configuration values using the getEnvSuffix function and checks the compatibility of the transport type with the security settings (a sketch of a getEnvSuffix-style helper follows the coverage notes below).
The buildClientFromTransport function is called with the appropriate transport type and options to create the OTLP client, and the OTLP exporter is properly created and returned using the created client.
To improve test coverage, consider adding tests to cover the error-handling paths at lines 150-151, 157-158, 168-169, and 173-174. Do you want me to generate the test code or open a GitHub issue to track this task?
Tools
GitHub Check: codecov/patch
[warning] 150-151: core/metrics/otlp.go#L150-L151
Added lines #L150 - L151 were not covered by tests
[warning] 157-158: core/metrics/otlp.go#L157-L158
Added lines #L157 - L158 were not covered by tests
[warning] 168-169: core/metrics/otlp.go#L168-L169
Added lines #L168 - L169 were not covered by tests
[warning] 173-174: core/metrics/otlp.go#L173-L174
Added lines #L173 - L174 were not covered by tests
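A minimal sketch of a getEnvSuffix-style helper; the name comes from the review above, while the real signature and defaulting behavior in otlp.go may differ:

package metrics

import "os"

// getEnvWithSuffix reads an environment variable whose name carries an
// optional numeric suffix (e.g. OTEL_EXPORTER_OTLP_TRANSPORT_2) and falls
// back to a default when the variable is unset.
func getEnvWithSuffix(name, suffix, fallback string) string {
	if val := os.Getenv(name + suffix); val != "" {
		return val
	}
	return fallback
}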
178-197: LGTM! Consider adding a TODO comment for URL validation.
The buildClientFromTransport function provides a flexible way to create an OTLP client based on the transport type and options. The use of a variadic list of Option functions allows for extensible configuration of the client.
The transportOptions struct stores the HTTP and gRPC options separately, and each provided Option function is applied to configure the client. The switch statement correctly creates and returns the corresponding OTLP client based on the transport type (a generic sketch of this options pattern follows the coverage notes below).
Consider adding a TODO comment at line 187 to remind future developers to implement URL validation for the OTLP client.
Tools
GitHub Check: codecov/patch
[warning] 183-184: core/metrics/otlp.go#L183-L184
Added lines #L183 - L184 were not covered by tests
[warning] 190-191: core/metrics/otlp.go#L190-L191
Added lines #L190 - L191 were not covered by tests
[warning] 194-195: core/metrics/otlp.go#L194-L195
Added lines #L194 - L195 were not covered by tests
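The functional-options pattern the comment describes can be sketched generically as below; the types are illustrative stand-ins (plain strings instead of the real OTLP HTTP and gRPC option types), not the definitions in otlp.go:

package main

import "fmt"

// transportOpts collects options for both transports so that one
// configuration value applies to whichever client is ultimately built.
type transportOpts struct {
	httpOpts []string
	grpcOpts []string
}

// Option mutates both option slices.
type Option func(*transportOpts)

func withURL(url string) Option {
	return func(o *transportOpts) {
		o.httpOpts = append(o.httpOpts, "http endpoint="+url)
		o.grpcOpts = append(o.grpcOpts, "grpc endpoint="+url)
	}
}

func withHeaders(headers string) Option {
	return func(o *transportOpts) {
		o.httpOpts = append(o.httpOpts, "http headers="+headers)
		o.grpcOpts = append(o.grpcOpts, "grpc headers="+headers)
	}
}

// buildClient applies every option, then picks the option set that matches
// the requested transport, mirroring the switch described in the review.
func buildClient(transport string, opts ...Option) []string {
	cfg := &transportOpts{}
	for _, opt := range opts {
		opt(cfg)
	}
	if transport == "grpc" {
		return cfg.grpcOpts
	}
	return cfg.httpOpts
}

func main() {
	fmt.Println(buildClient("http", withURL("https://collector.example.invalid:4318"), withHeaders("api-key=secret")))
}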
Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Files selected for processing (29)
- .golangci.yml (1 hunks)
- core/go.mod (1 hunks)
- core/metrics/README.md (1 hunks)
- core/metrics/export_test.go (1 hunks)
- core/metrics/multiexporter.go (1 hunks)
- core/metrics/multiexporter_test.go (1 hunks)
- core/metrics/otlp.go (5 hunks)
- core/metrics/otlp_test.go (1 hunks)
- core/metrics/rookout.go (1 hunks)
- docs/bridge/docs/rfq/API/API.md (1 hunks)
- packages/contracts-rfq/CHANGELOG.md (1 hunks)
- packages/contracts-rfq/contracts/interfaces/IMulticallTarget.sol (1 hunks)
- packages/contracts-rfq/contracts/utils/MulticallTarget.sol (1 hunks)
- packages/contracts-rfq/package.json (1 hunks)
- packages/contracts-rfq/test/MulticallTarget.t.sol (1 hunks)
- packages/contracts-rfq/test/harnesses/MulticallTargetHarness.sol (1 hunks)
- packages/rest-api/CHANGELOG.md (1 hunks)
- packages/rest-api/package.json (1 hunks)
- packages/rest-api/src/app.ts (1 hunks)
- packages/synapse-interface/CHANGELOG.md (1 hunks)
- packages/synapse-interface/components/LanguageSelector.tsx (1 hunks)
- packages/synapse-interface/next.config.js (1 hunks)
- packages/synapse-interface/package.json (1 hunks)
- packages/synapse-interface/public/blacklist.json (1 hunks)
- services/rfq/api/docs/docs.go (5 hunks)
- services/rfq/api/docs/swagger.json (5 hunks)
- services/rfq/api/docs/swagger.yaml (5 hunks)
- services/rfq/api/rest/handler.go (4 hunks)
- services/rfq/api/rest/server.go (1 hunks)
Files skipped from review due to trivial changes (7)
- core/metrics/rookout.go
- packages/rest-api/CHANGELOG.md
- packages/rest-api/package.json
- packages/synapse-interface/CHANGELOG.md
- packages/synapse-interface/package.json
- services/rfq/api/rest/handler.go
- services/rfq/api/rest/server.go
Additional context used
GitHub Check: codecov/patch
core/metrics/multiexporter.go
[warning] 55-58: core/metrics/multiexporter.go#L55-L58
Added lines #L55 - L58 were not covered by tests
[warning] 64-66: core/metrics/multiexporter.go#L64-L66
Added lines #L64 - L66 were not covered by tests
[warning] 79-80: core/metrics/multiexporter.go#L79-L80
Added lines #L79 - L80 were not covered by tests
core/metrics/otlp.go
[warning] 37-38: core/metrics/otlp.go#L37-L38
Added lines #L37 - L38 were not covered by tests
[warning] 52-55: core/metrics/otlp.go#L52-L55
Added lines #L52 - L55 were not covered by tests
[warning] 57-57: core/metrics/otlp.go#L57
Added line #L57 was not covered by tests
[warning] 150-151: core/metrics/otlp.go#L150-L151
Added lines #L150 - L151 were not covered by tests
[warning] 157-158: core/metrics/otlp.go#L157-L158
Added lines #L157 - L158 were not covered by tests
[warning] 168-169: core/metrics/otlp.go#L168-L169
Added lines #L168 - L169 were not covered by tests
[warning] 173-174: core/metrics/otlp.go#L173-L174
Added lines #L173 - L174 were not covered by tests
[warning] 183-184: core/metrics/otlp.go#L183-L184
Added lines #L183 - L184 were not covered by tests
[warning] 190-191: core/metrics/otlp.go#L190-L191
Added lines #L190 - L191 were not covered by tests
[warning] 194-195: core/metrics/otlp.go#L194-L195
Added lines #L194 - L195 were not covered by tests
[warning] 222-230: core/metrics/otlp.go#L222-L230
Added lines #L222 - L230 were not covered by tests
Markdownlint
packages/contracts-rfq/CHANGELOG.md
9-9: Expected: h2; Actual: h3
Heading levels should only increment by one level at a time(MD001, heading-increment)
docs/bridge/docs/rfq/API/API.md
28-28: null
Emphasis used instead of a heading(MD036, no-emphasis-as-heading)
LanguageTool
docs/bridge/docs/rfq/API/API.md
[uncategorized] ~36-~36: Possible missing preposition found.
Context: ...kewise, it will be their responsibility review the versions.go file, to research & und...(AI_HYDRA_LEO_MISSING_TO)
GitHub Check: CodeQL
packages/rest-api/src/app.ts
[failure] 560-560: Database query built from user-controlled sources
This query string depends on a user-provided value.
[failure] 625-625: Database query built from user-controlled sources
This query string depends on a user-provided value.
Additional comments not posted (61)
core/metrics/export_test.go (2)
4-6: LGTM!
The function signature and documentation are clear and concise. The function takes a string input and returns a map of key-value pairs, which aligns with the expected behavior based on the documentation.
4-6: Verify the implementation of the headersToMap function.
The HeadersToMap function relies on the headersToMap helper function for the actual conversion logic. To ensure the correctness of the HeadersToMap function, please provide the implementation of the headersToMap function for review.
Verification successful
Implementation verified, but improvements suggested
The headersToMap function is implemented correctly for basic use cases. It properly splits the input string into key-value pairs and populates a map. However, a few potential improvements could make it more robust (see the sketch after the verification scripts below):
- Consider trimming whitespace from keys and values to avoid issues with extra spaces.
- Add support for URL-decoded values, which are common in HTTP headers.
- Handle cases where a header might appear multiple times by using a map[string][]string instead.
- Include special handling for empty input strings.
These suggestions would enhance the function's ability to handle various edge cases in real-world scenarios.
Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the implementation of the `headersToMap` function.
# Test: Search for the function definition. Expect: The function implementation.
rg --type go -A 10 $'func headersToMap'
Length of output: 625
Script:
#!/bin/bash
# Description: Retrieve the complete implementation of the `headersToMap` function.
# Search for the function definition and include more context lines.
rg --type go -A 20 $'func headersToMap'
Length of output: 976
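A possible hardened variant reflecting the trimming and empty-input suggestions above; this is a sketch, not the function that actually lives in core/metrics/otlp.go:

package metrics

import "strings"

// headersToMapTrimmed splits "key1=value1,key2=value2" into a map, trimming
// surrounding whitespace and skipping pairs without a key or an '='.
func headersToMapTrimmed(input string) map[string]string {
	result := make(map[string]string)
	if strings.TrimSpace(input) == "" {
		return result
	}
	for _, pair := range strings.Split(input, ",") {
		parts := strings.SplitN(pair, "=", 2)
		if len(parts) != 2 {
			continue // malformed pair, no '='
		}
		key := strings.TrimSpace(parts[0])
		if key == "" {
			continue
		}
		result[key] = strings.TrimSpace(parts[1])
	}
	return result
}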
packages/contracts-rfq/contracts/interfaces/IMulticallTarget.sol (3)
7-10: LGTM!
The Result struct is well-defined and effectively encapsulates the outcome of a call with its success and returnData fields.
12-12: LGTM!
The multicallNoResults function is well-defined, with appropriate parameters and visibility. It allows executing multiple calls without returning results, and the ignoreReverts parameter provides flexibility in handling failed calls.
13-18: LGTM!
The multicallWithResults function is well-defined, with appropriate parameters, return value, and visibility. It allows executing multiple calls and returns an array of Result structs, with the ignoreReverts parameter providing flexibility in handling failed calls. The returned Result array allows inspecting the outcome of each call.
packages/synapse-interface/next.config.js (1)
47-47: LGTM! Ensure translations are available for the new locale.
The addition of the zh-CN locale to the i18n configuration is a positive step towards enhancing the application's multi-language support. This change aligns well with the PR objective of broadening the application's accessibility to diverse linguistic contexts.
Please ensure that the necessary translations for the zh-CN locale are provided in the application to deliver a seamless user experience for Chinese-speaking users.
packages/contracts-rfq/test/harnesses/MulticallTargetHarness.sol (6)
14-17: LGTM!
The function logic is correct, and the implementation is accurate. Setting addressField to msg.sender captures the caller's address as intended.
19-22: LGTM!
The function logic is correct, and the implementation is accurate. Setting addressField to the provided _addressField parameter allows for external control of the field value.
24-27: LGTM!
The function logic is correct, and the implementation is accurate. Setting uintField to the provided _uintField parameter allows for external control of the field value.
29-31: LGTM!
The function logic is correct, and the implementation is accurate. Reverting with the CustomError custom error demonstrates the usage of custom errors for revert scenarios.
33-35: LGTM!
The function logic is correct, and the implementation is accurate. Reverting with the REVERT_MESSAGE constant demonstrates a standard revert scenario with a predefined message.
37-42: LGTM!
The function logic is correct, and the implementation is accurate. Using inline assembly to revert without a specific message demonstrates an undetermined revert scenario. The solhint-disable-next-line comment is appropriately used to disable the solhint warning for the inline assembly usage.
core/metrics/multiexporter_test.go (1)
14-45: LGTM!
The test function TestMultiExporter is well-structured and covers the core functionality of the MultiExporter. It correctly sets up the necessary exporters, creates test spans, and verifies that the spans are exported to both exporters. The assertions are appropriate and ensure the expected behavior.
Additionally, the test validates the shutdown process of the MultiExporter by attempting to export spans after shutdown and checking for an error. This ensures that the MultiExporter can handle export requests even after being shut down.
Overall, the test implementation is solid and ensures the reliability and correctness of the MultiExporter functionality.
core/metrics/otlp_test.go (1)
9-71: Excellent test coverage!
The test function TestHeadersToMap provides comprehensive coverage for the HeadersToMap function. The use of table-driven tests with well-defined test cases ensures that the function handles various scenarios correctly, including basic input, empty input, extra spaces, missing values, missing keys, and multiple equal signs.
The test cases are clearly organized and cover important edge cases, making it easy to understand and maintain the tests. Good job!
core/metrics/multiexporter.go (4)
1-12: LGTM!
The package declaration and imports are appropriate for the functionality of the package.
13-17: LGTM!
The MultiExporter interface is well-defined and serves the purpose of allowing multiple exporters. The method names are clear and self-explanatory.
19-30: LGTM!
The multiExporter struct and NewMultiExporter function are well-defined and serve the purpose of creating a new multi-exporter. The comments provide clear documentation on the usage and purpose of the function.
71-76: LGTM!
The implementation of the Shutdown method is correct and utilizes the doParallel method effectively to handle the graceful shutdown of all registered exporters concurrently.
packages/synapse-interface/components/LanguageSelector.tsx (1)
15-15: LGTM!
The new language option for Simplified Chinese has been added correctly to the languages array. The language code 'zh-CN' is a valid locale tag (ISO 639-1 language code plus region subtag), and the language name '中文(简体)' is the correct representation in Chinese characters.
This addition enhances the language selection functionality and provides support for users who prefer Simplified Chinese.
.golangci.yml (1)
105-106: Great addition of the mnd linter to detect magic numbers!
The mnd linter is a valuable tool for identifying magic numbers in the codebase. By discouraging unexplained numeric literals and promoting named constants, it improves code readability and maintainability and reduces the risk of errors related to magic numbers.
Developers should review their code and replace magic numbers with appropriately named constants to comply with this linter (a small before/after sketch follows below). While this may require some initial effort, it will contribute to a cleaner and more maintainable codebase in the long run.
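For context, the kind of change the mnd linter nudges toward is replacing a bare literal with a named constant; this is a generic illustration, not code from this PR:

package server

import "time"

// Before: client := &http.Client{Timeout: 30 * time.Second} // "30" is flagged as a magic number.
// After: give the literal a name that explains it.
const defaultRequestTimeout = 30 * time.Second

func requestTimeout() time.Duration {
	return defaultRequestTimeout
}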
packages/contracts-rfq/CHANGELOG.md (1)
6-11: LGTM! The multicall target abstraction feature looks good.
The multicall target abstraction feature introduced in version 0.3.0 is a notable enhancement that can improve the efficiency of handling multiple calls in the contracts-rfq package. The feature is well-documented with the associated issue SLT-134 and commit reference.
Tools
Markdownlint
9-9: Expected: h2; Actual: h3
Heading levels should only increment by one level at a time(MD001, heading-increment)
packages/contracts-rfq/contracts/utils/MulticallTarget.sol (3)
19-30: LGTM! The multicallNoResults function is implemented securely.
The function correctly uses delegatecall to preserve the msg.sender and execute the batched calls in the same context as the original caller. The non-payable nature of the function mitigates potential security risks associated with delegatecall and msg.value.
The ignoreReverts flag provides flexibility in error handling, allowing the caller to choose whether to ignore or propagate revert errors.
40-58: LGTM! The multicallWithResults function is implemented securely.
The function correctly uses delegatecall to preserve the msg.sender and execute the batched calls in the same context as the original caller. The non-payable nature of the function mitigates potential security risks associated with delegatecall and msg.value.
The ignoreReverts flag provides flexibility in error handling, allowing the caller to choose whether to ignore or propagate revert errors. The function also preserves and returns the results of each call as an array of Result structs, providing additional flexibility and information to the caller.
63-75: LGTM! The _bubbleRevert function is implemented correctly.
The function serves the important purpose of bubbling up the original revert reason from the underlying call, enhancing error transparency and debugging capabilities. It preserves the same custom error or revert string, if one was used.
The use of inline assembly is justified in this case, as it allows for low-level manipulation of the revert reason and enables bubbling up the original error message. The function is appropriately marked as pure since it does not interact with the contract's state and only operates on the provided returnData.
services/rfq/api/docs/swagger.yaml (4)
126-129: Looks good!
The addition of the X-Api-Version header is consistent with the previous change. The suggestion to enhance the description from the previous comment applies here as well.
143-146: Looks good!
The addition of the X-Api-Version header is consistent with the previous changes. The suggestion to enhance the description from the first comment applies here as well.
185-188: Looks good!
The addition of the X-Api-Version header is consistent with the previous changes. The suggestion to enhance the description from the first comment applies here as well.
212-215: Looks good!
The addition of the X-Api-Version header is consistent with the previous changes. The suggestion to enhance the description from the first comment applies here as well.
core/metrics/otlp.go (4)
33-40: LGTM!
The code correctly initializes a list of exporters, starting with a primary exporter created using the default configuration.
Tools
GitHub Check: codecov/patch
[warning] 37-38: core/metrics/otlp.go#L37-L38
Added lines #L37 - L38 were not covered by tests
60-71: LGTM!
The code correctly creates a MultiExporter using the list of exporters, allowing multiple exporters to be used concurrently. The baseHandler is properly initialized with the MultiExporter and other configuration options.
199-247: LGTM!
The transportOptions struct provides a clean way to store and manage the HTTP and gRPC options separately. The Option function type allows for a flexible and extensible way to configure the client options.
The withURL, withSecure, and withHeaders functions are well-defined and correctly append the corresponding options to both HTTP and gRPC options. The withSecure function properly configures the client with secure or insecure options based on the provided boolean value, and the withHeaders function correctly converts the provided headers string to a map using the headersToMap function.
Tools
GitHub Check: codecov/patch
[warning] 222-230: core/metrics/otlp.go#L222-L230
Added lines #L222 - L230 were not covered by tests
249-269: LGTM!
The headersToMap function provides a useful utility to convert a string of key-value pairs to a map. It correctly initializes an empty map to store the result and properly splits the input string by comma to get the key-value pairs.
For each key-value pair, it correctly splits the pair by "=" to get the key and value, and adds the key-value pair to the result map only if the pair is valid. The function returns the resulting map of key-value pairs.
core/go.mod (1)
66-66: LGTM!
The introduction of a direct dependency on google.golang.org/grpc at version v1.64.0 aligns with the AI-generated summary and appears to be a valid enhancement to the project's architecture. This change may improve the project's capabilities for remote procedure calls and communication between services.
Ensure that the gRPC version is compatible with the other dependencies in the project and that there are no version conflicts.
services/rfq/api/docs/swagger.json (4)
33-39: LGTM!
Adding the X-Api-Version header to the response is a good practice for API versioning. The description provides clarity and directs users to the documentation for more information.
70-76: Skipping the review.
The changes in this code segment are the same as the ones reviewed and approved in the previous code segment.
102-107: Skipping the review.
The changes in this code segment are the same as the ones reviewed and approved in the previous code segments.
166-171: Skipping the review.
The changes in these code segments are the same as the ones reviewed and approved in the previous code segments.
Also applies to: 201-207
services/rfq/api/docs/docs.go (4)
44-50: LGTM!
The addition of the X-Api-Version header to the response definition for the /ack endpoint's PUT operation enhances the API documentation by providing clear information about the API version. The header type and description are appropriate, and the change aligns with the PR objective of improving API versioning clarity.
81-87: Looks good!
The addition of the X-Api-Version header to the response definition for the /bulk_quotes endpoint's PUT operation enhances the API documentation by providing clear information about the API version. The header type and description are appropriate, and the change aligns with the PR objective of improving API versioning clarity.
113-118: Approved!
The addition of the X-Api-Version header to the response definition for the /contracts endpoint's GET operation enhances the API documentation by providing clear information about the API version. The header type and description are appropriate, and the change aligns with the PR objective of improving API versioning clarity.
177-182: Looks great!
The addition of the X-Api-Version header to the response definition for the /quotes endpoint's GET operation enhances the API documentation by providing clear information about the API version. The header type and description are appropriate, and the change aligns with the PR objective of improving API versioning clarity.
packages/contracts-rfq/test/MulticallTarget.t.sol (14)
15-19: LGTM!
The setUp function correctly initializes the test environment by creating a new instance of MulticallTargetHarness and setting some initial values. This is a standard practice in test contracts.
21-23: LGTM!
The getEncodedStringRevertMessage function correctly encodes the string revert message using abi.encodeWithSignature. This is a helper function used in the test cases.
25-32: LGTM!
The getMsgSenderData function correctly encodes an array of function calls using abi.encodeCall. This is a helper function used to generate test data for the msg.sender test cases.
34-41: LGTM!
The getMsgSenderResults function correctly constructs an array of expected results using the IMulticallTarget.Result struct. This is a helper function used to generate expected results for the msg.sender test cases.
43-50: LGTM!
The getNoRevertsData function correctly encodes an array of function calls that do not revert using abi.encodeCall. This is a helper function used to generate test data for the "no reverts" test cases.
52-59: LGTM!
The getNoRevertsResults function correctly constructs an array of expected results using the IMulticallTarget.Result struct. This is a helper function used to generate expected results for the "no reverts" test cases.
61-68: LGTM!
The getCustomErrorRevertData function correctly encodes an array of function calls that revert with a custom error using abi.encodeCall. This is a helper function used to generate test data for the "custom error revert" test cases.
70-77: LGTM!
The getCustomErrorRevertResults function correctly constructs an array of expected results using the IMulticallTarget.Result struct. This is a helper function used to generate expected results for the "custom error revert" test cases.
79-86: LGTM!
The getStringRevertData function correctly encodes an array of function calls that revert with a string message using abi.encodeCall. This is a helper function used to generate test data for the "string revert" test cases.
88-95: LGTM!
The getStringRevertResults function correctly constructs an array of expected results using the IMulticallTarget.Result struct. This is a helper function used to generate expected results for the "string revert" test cases.
97-104: LGTM!
The getUndeterminedRevertData function correctly encodes an array of function calls that revert with an undetermined error using abi.encodeCall. This is a helper function used to generate test data for the "undetermined revert" test cases.
106-113: LGTM!
The getUndeterminedRevertResults function correctly constructs an array of expected results using the IMulticallTarget.Result struct. This is a helper function used to generate expected results for the "undetermined revert" test cases.
117-123: LGTM!
The test_multicallNoResults_ignoreReverts_noReverts function correctly tests the multicallNoResults function with no reverts and ignoreReverts set to true. It asserts that the state of the contract is correctly updated after the multicall.
125-132: LGTM!
The test_multicallNoResults_ignoreReverts_withMsgSender function correctly tests the multicallNoResults function with msg.sender and ignoreReverts set to true. It asserts that the state of the contract is correctly updated after the multicall.
packages/rest-api/src/app.ts (5)
475-507: LGTM!
The function correctly validates the input parameters, handles errors, and retrieves the Synapse transaction ID using the Synapse SDK. The implementation looks good.
555-561: Verify the GraphQL query construction.
The static analysis tool has flagged a potential issue with the GraphQL query being built from user-controlled sources. Please ensure that the synapseTxId parameter is properly sanitized before being included in the query to prevent any potential security vulnerabilities.
Do you want me to generate a code snippet demonstrating proper input sanitization or open a GitHub issue to track this task?
Tools
GitHub Check: CodeQL
[failure] 560-560: Database query built from user-controlled sources
This query string depends on a user-provided value.
510-582: LGTM!
The function correctly validates the input parameters, handles errors, and retrieves the bridge transaction status using the Synapse SDK. It also fetches additional transaction details from an external GraphQL endpoint when the status is found. The overall implementation looks good.
Tools
GitHub Check: CodeQL
[failure] 560-560: Database query built from user-controlled sources
This query string depends on a user-provided value.
620-626: Verify the GraphQL query construction.
The static analysis tool has flagged a potential issue with the GraphQL query being built from user-controlled sources. Please ensure that the originChainId and txHash parameters are properly sanitized before being included in the query to prevent any potential security vulnerabilities (a sketch of the variables-based approach follows the CodeQL notes below).
Do you want me to generate a code snippet demonstrating proper input sanitization or open a GitHub issue to track this task?
Tools
GitHub Check: CodeQL
[failure] 625-625: Database query built from user-controlled sources
This query string depends on a user-provided value.
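The standard remedy for the flagged pattern is to keep the query text static and pass user input through GraphQL variables. The affected endpoint itself is TypeScript; the sketch below shows the same request shape in Go, with a placeholder URL and field names that are assumptions, not the real schema:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// The user-supplied value goes into the variables object, never into the
	// query string itself, so it cannot alter the query structure.
	payload := map[string]any{
		"query": `query ($synapseTxId: String!) {
  bridgeTransactions(kappa: $synapseTxId) { toInfo { txnHash } }
}`,
		"variables": map[string]string{"synapseTxId": "0xabc..."},
	}
	body, err := json.Marshal(payload)
	if err != nil {
		fmt.Println("encoding failed:", err)
		return
	}
	resp, err := http.Post("https://graphql.example.invalid/", "application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}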
585-648: LGTM!
The function correctly validates the input parameters, handles errors, and retrieves the destination transaction details using a GraphQL query. It provides a clear response indicating the transaction status and includes the relevant transaction information when available. The overall implementation looks good.
Tools
GitHub Check: CodeQL
[failure] 625-625: Database query built from user-controlled sources
This query string depends on a user-provided value.
packages/synapse-interface/public/blacklist.json (1)
537-538: Verify the rationale for blacklisting this address.
The change looks good in terms of the JSON syntax and Ethereum address format. However, blacklisting an address is a significant action that can negatively impact the user experience for the owner of that address.
Please provide more context on why this specific address "0xB0A2e43D3E0dc4C71346A71484aC6a2627bbCbeD" is being added to the blacklist. Some potential reasons could be:
- The address was involved in a security incident or hack
- The address belongs to a known bad actor in the ecosystem
- The address violated the terms of service
Documenting the rationale will help maintain an audit trail and ensure there is a valid justification for this action.
const defaultTimeout = 30 * time.Second

// ExportSpans exports a batch of spans.
func (m *multiExporter) ExportSpans(parentCtx context.Context, ss []trace.ReadOnlySpan) error {
	return m.doParallel(parentCtx, func(ctx context.Context, exporter trace.SpanExporter) error {
		return exporter.ExportSpans(ctx, ss)
	})
}

func (m *multiExporter) doParallel(parentCtx context.Context, fn func(context.Context, trace.SpanExporter) error) error {
	ctx, cancel := context.WithTimeout(parentCtx, defaultTimeout)
	defer cancel()

	var wg sync.WaitGroup
	var errors []error
	var mu sync.Mutex

	wg.Add(len(m.exporters))
	for _, exporter := range m.exporters {
		go func(exporter trace.SpanExporter) {
			defer wg.Done()
			err := fn(ctx, exporter)
			if err != nil {
				mu.Lock()
				errors = append(errors, fmt.Errorf("error in doMultiple: %w", err))
				mu.Unlock()
			}
		}(exporter)
	}

	wg.Wait()
	if len(errors) > 0 {
		// nolint: wrapcheck
		return multierr.Combine(errors...)
	}

	return nil
}
Add tests to cover the missing lines in the doParallel method.
The implementation of the ExportSpans method and the doParallel helper method is correct and handles concurrency and error collection effectively.
However, static analysis hints indicate that some lines in the doParallel method are not covered by tests:
- Lines 55-58: Appending errors to the errors slice.
- Lines 64-66: Combining and returning the collected errors.
Please add tests to cover these lines and ensure the behavior is thoroughly tested (a sketch of such a test appears after the coverage notes below).
Do you want me to generate the missing test cases or open a GitHub issue to track this task?
Tools
GitHub Check: codecov/patch
[warning] 55-58: core/metrics/multiexporter.go#L55-L58
Added lines #L55 - L58 were not covered by tests
[warning] 64-66: core/metrics/multiexporter.go#L64-L66
Added lines #L64 - L66 were not covered by tests
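A sketch of the kind of test that would exercise the error-collection path, using a stub exporter that always fails. The module path and the NewMultiExporter signature are assumptions based on the review text, not verified against the repository:

package metrics_test

import (
	"context"
	"errors"
	"testing"

	sdktrace "go.opentelemetry.io/otel/sdk/trace"

	// Module path assumed from the repository layout.
	"github.com/synapsecns/sanguine/core/metrics"
)

// failingExporter always errors, forcing the error-collection branch to run.
type failingExporter struct{}

func (failingExporter) ExportSpans(context.Context, []sdktrace.ReadOnlySpan) error {
	return errors.New("export failed")
}

func (failingExporter) Shutdown(context.Context) error { return nil }

func TestExportSpansCollectsErrors(t *testing.T) {
	exporter := metrics.NewMultiExporter(failingExporter{}) // assumed constructor shape
	if err := exporter.ExportSpans(context.Background(), nil); err == nil {
		t.Fatal("expected the failing exporter's error to be surfaced")
	}
}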
// AddExporter adds an exporter to the multi exporter.
func (m *multiExporter) AddExporter(exporter trace.SpanExporter) {
	m.exporters = append(m.exporters, exporter)
}

var _ trace.SpanExporter = &multiExporter{}
Add tests to cover the AddExporter method.
The implementation of the AddExporter method is correct and adds a new exporter to the exporters slice. The type assertion ensures that the multiExporter struct implements the trace.SpanExporter interface.
However, static analysis hints indicate that the lines in the AddExporter method are not covered by tests:
- Lines 79-80: Appending the new exporter to the exporters slice.
Please add tests to cover the AddExporter method and ensure its behavior is thoroughly tested.
Do you want me to generate the missing test cases or open a GitHub issue to track this task?
Tools
GitHub Check: codecov/patch
[warning] 79-80: core/metrics/multiexporter.go#L79-L80
Added lines #L79 - L80 were not covered by tests
# [0.3.0](https://github.com/synapsecns/sanguine/compare/[email protected]@0.3.0) (2024-09-10)

### Features
Fix the heading level increment.
The heading level increments from h2 to h3, violating the Markdownlint rule MD001. Heading levels should only increment by one level at a time.
Apply this diff to fix the heading level:
-### Features
+## Features
Tools
Markdownlint
9-9: Expected: h2; Actual: h3
Heading levels should only increment by one level at a time(MD001, heading-increment)
Actionable comments posted: 2
Outside diff range and nitpick comments (11)
core/metrics/export_test.go (1)
4-6: LGTM! Consider adding tests.
The HeadersToMap function looks good. It provides a clean public API for converting headers to a map.
Consider adding unit tests to ensure the expected behavior, especially around edge cases like empty headers, malformed headers, etc.
core/metrics/multiexporter.go (1)
55-58: Add missing test coverage.
The static analysis hints indicate that the following lines are not covered by tests:
- Lines 55-58
- Lines 64-66
- Lines 79-80
Please add appropriate test cases to ensure the robustness and reliability of the MultiExporter implementation.
Do you want me to generate the missing test cases or open a GitHub issue to track this task?
Also applies to: 64-66, 79-80
Tools
GitHub Check: codecov/patch
[warning] 55-58: core/metrics/multiexporter.go#L55-L58
Added lines #L55 - L58 were not covered by tests
docs/bridge/docs/rfq/API/API.md (1)
28-36: Great addition of the API version header and related documentation!
The new section provides clear guidance to API users on detecting and handling version changes. Referring to the versions.go file for detailed version information is helpful.
Please fix the section title to use a heading instead of emphasis, as indicated by the static analysis hint. You can use the following markdown syntax for a heading:
-**API Version Changes**
+## API Version Changes
Tools
Markdownlint
28-28: null
Emphasis used instead of a heading(MD036, no-emphasis-as-heading)
core/metrics/otlp.go (5)
33-71: Add tests to improve coverage.
The static analysis tool codecov/patch reports that several lines in this function are not covered by tests, such as the error-handling code paths. To ensure the robustness of the implementation, please add tests to cover these missing lines.
Do you want me to help write the missing tests or open a GitHub issue to track this task?
Tools
GitHub Check: codecov/patch
[warning] 37-38: core/metrics/otlp.go#L37-L38
Added lines #L37 - L38 were not covered by tests
[warning] 52-55: core/metrics/otlp.go#L52-L55
Added lines #L52 - L55 were not covered by tests
[warning] 57-57: core/metrics/otlp.go#L57
Added line #L57 was not covered by tests
141-176: Add tests for error handling.
The makeOTLPExporter function performs important validations and error handling based on the environment variable configurations. However, the static analysis tool codecov/patch reports that some of the error-handling lines (150-151, 157-158, 168-169, 173-174) are not covered by tests.
To ensure the reliability of the error handling, please add tests to cover these lines.
Do you want me to help write the missing tests or open a GitHub issue to track this task?
Tools
GitHub Check: codecov/patch
[warning] 150-151: core/metrics/otlp.go#L150-L151
Added lines #L150 - L151 were not covered by tests
[warning] 157-158: core/metrics/otlp.go#L157-L158
Added lines #L157 - L158 were not covered by tests
[warning] 168-169: core/metrics/otlp.go#L168-L169
Added lines #L168 - L169 were not covered by tests
[warning] 173-174: core/metrics/otlp.go#L173-L174
Added lines #L173 - L174 were not covered by tests
179-197: Add a test for unknown transport type.
The buildClientFromTransport function handles the creation of the OTLP client based on the transport type and options. The error handling for unknown transport types (lines 194-195) is good, but the static analysis tool codecov/patch reports that these lines are not covered by tests.
To ensure the reliability of the error handling, please add a test case that covers the scenario of an unknown transport type.
Do you want me to help write the test or open a GitHub issue to track this task?
Tools
GitHub Check: codecov/patch
[warning] 183-184: core/metrics/otlp.go#L183-L184
Added lines #L183 - L184 were not covered by tests
[warning] 190-191: core/metrics/otlp.go#L190-L191
Added lines #L190 - L191 were not covered by tests
[warning] 194-195: core/metrics/otlp.go#L194-L195
Added lines #L194 - L195 were not covered by tests
219-238: Urgent: Add tests for TLS setup.
The withSecure function sets up the secure mode configuration for both HTTP and gRPC clients. However, the static analysis tool codecov/patch reports that the lines responsible for configuring TLS credentials (line 222) and the TLS client config (lines 226-230) are not covered by tests.
Given the critical importance of secure communication, it is crucial to have thorough tests for the TLS setup. Please prioritize adding test cases that cover these lines to ensure the correctness and reliability of the secure mode configuration (a small construction sketch follows the coverage notes below).
I strongly recommend addressing this as soon as possible. Do you want me to help write the necessary tests or open a high-priority GitHub issue to track this task?
Tools
GitHub Check: codecov/patch
[warning] 222-230: core/metrics/otlp.go#L222-L230
Added lines #L222 - L230 were not covered by tests
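For reference, the two secure-mode building blocks the comment refers to can be constructed as below; this is a free-standing illustration of a TLS client config plus gRPC transport credentials, not the code in withSecure:

package main

import (
	"crypto/tls"
	"fmt"

	"google.golang.org/grpc/credentials"
)

func main() {
	// TLS client config, suitable for an HTTP-based exporter client.
	tlsCfg := &tls.Config{MinVersion: tls.VersionTLS12}

	// Matching transport credentials for a gRPC-based client.
	creds := credentials.NewTLS(tlsCfg)
	fmt.Println("gRPC security protocol:", creds.Info().SecurityProtocol) // prints "tls"
}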
249-269: LGTM with a suggestion!
The headersToMap function correctly converts a comma-separated string of key-value pairs to a map. It handles the case where a key-value pair may not have a value by checking the length of the split result.
One suggestion is to add a comment clarifying the assumption about the input format. For example:
// headersToMap converts a comma-separated string of key-value pairs to a map.
// The input string should follow the format "key1=value1,key2=value2,...".
// Keys and values should not contain commas or equal signs.
func headersToMap(input string) map[string]string {
    // ...
}
This comment will help users understand the expected format of the input string and the limitations of the function.
packages/rest-api/src/app.ts (3)
475-507: LGTM with a minor suggestion!
The /getSynapseTxId endpoint implementation looks good. It correctly validates the required parameters and uses the Synapse.getSynapseTxId method to retrieve the transaction ID.
One minor suggestion would be to improve the error handling for the Synapse.getSynapseTxId promise rejection. Instead of providing a generic error message, you could include the actual error message returned by the SDK for more specific debugging information.
Apply this diff to improve the error handling:
.catch((err) => {
  res.status(400).send({
    message: 'Ensure that your request matches the following format: /getSynapseTxId?originChainId=8453&bridgeModule=SynapseRFQ&txHash=0x4acd82091b54cf584d50adcad9f57c61055beaca130016ecc3798d3d61f5487a',
-   error: err.message,
+   error: `Failed to retrieve Synapse transaction ID: ${err.message}`,
  })
})
510-582: Looks good with a few suggestions!
The /getBridgeTxStatus endpoint implementation is solid. It correctly validates the required parameters, uses the Synapse.getBridgeTxStatus method to retrieve the transaction status, and fetches additional transaction details from an external GraphQL endpoint.
A few suggestions for improvement:
Consider adding more specific error handling for the GraphQL request. Instead of a generic "Error fetching bridge transaction status" message, you could include the actual error message returned by the GraphQL endpoint for more specific debugging information.
Be cautious about the dependency on the external GraphQL endpoint. If that service is unavailable or responds with unexpected data, it could lead to issues in your endpoint. Consider adding appropriate error handling and fallback mechanisms to gracefully handle such scenarios.
Ensure that the synapseTxId parameter is properly sanitized before using it in the GraphQL query construction to prevent potential injection vulnerabilities.
Apply this diff to improve the error handling for the GraphQL request:
} catch (err) {
  res.status(400).send({
-   message: 'Error fetching bridge transaction status',
+   message: `Error fetching bridge transaction status: ${err.message}`,
    error: err.message,
  })
}
Tools
GitHub Check: CodeQL
[failure] 560-560: Database query built from user-controlled sources
This query string depends on a user-provided value.
585-648: Solid implementation with a few suggestions!
The /getDestinationTx endpoint implementation looks good overall. It correctly validates the required parameters, constructs a GraphQL query to fetch transaction details from an external endpoint, and returns the appropriate status based on the results.
A few suggestions for improvement:
Consider adding more specific error handling for the GraphQL request. Instead of a generic "Error fetching bridge transaction status" message, you could include the actual error message returned by the GraphQL endpoint for more specific debugging information.
Be cautious about the dependency on the external GraphQL endpoint. If that service is unavailable or responds with unexpected data, it could lead to issues in your endpoint. Consider adding appropriate error handling and fallback mechanisms to gracefully handle such scenarios.
Ensure that the originChainId and txHash parameters are properly sanitized before using them in the GraphQL query construction to prevent potential injection vulnerabilities.
Apply this diff to improve the error handling for the GraphQL request:
} catch (err) {
  res.status(400).send({
-   message: 'Error fetching bridge transaction status',
+   message: `Error fetching bridge transaction status: ${err.message}`,
    error: err.message,
  })
}
Tools
GitHub Check: CodeQL
[failure] 625-625: Database query built from user-controlled sources
This query string depends on a user-provided value.
Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Files selected for processing (29)
- .golangci.yml (1 hunks)
- core/go.mod (1 hunks)
- core/metrics/README.md (1 hunks)
- core/metrics/export_test.go (1 hunks)
- core/metrics/multiexporter.go (1 hunks)
- core/metrics/multiexporter_test.go (1 hunks)
- core/metrics/otlp.go (5 hunks)
- core/metrics/otlp_test.go (1 hunks)
- core/metrics/rookout.go (1 hunks)
- docs/bridge/docs/rfq/API/API.md (1 hunks)
- packages/contracts-rfq/CHANGELOG.md (1 hunks)
- packages/contracts-rfq/contracts/interfaces/IMulticallTarget.sol (1 hunks)
- packages/contracts-rfq/contracts/utils/MulticallTarget.sol (1 hunks)
- packages/contracts-rfq/package.json (1 hunks)
- packages/contracts-rfq/test/MulticallTarget.t.sol (1 hunks)
- packages/contracts-rfq/test/harnesses/MulticallTargetHarness.sol (1 hunks)
- packages/rest-api/CHANGELOG.md (1 hunks)
- packages/rest-api/package.json (1 hunks)
- packages/rest-api/src/app.ts (1 hunks)
- packages/synapse-interface/CHANGELOG.md (1 hunks)
- packages/synapse-interface/components/LanguageSelector.tsx (1 hunks)
- packages/synapse-interface/next.config.js (1 hunks)
- packages/synapse-interface/package.json (1 hunks)
- packages/synapse-interface/public/blacklist.json (1 hunks)
- services/rfq/api/docs/docs.go (5 hunks)
- services/rfq/api/docs/swagger.json (5 hunks)
- services/rfq/api/docs/swagger.yaml (5 hunks)
- services/rfq/api/rest/handler.go (4 hunks)
- services/rfq/api/rest/server.go (1 hunks)
Files skipped from review due to trivial changes (9)
- core/metrics/rookout.go
- packages/contracts-rfq/package.json
- packages/rest-api/CHANGELOG.md
- packages/rest-api/package.json
- packages/synapse-interface/CHANGELOG.md
- packages/synapse-interface/package.json
- packages/synapse-interface/public/blacklist.json
- services/rfq/api/rest/handler.go
- services/rfq/api/rest/server.go
Additional context used
GitHub Check: codecov/patch
core/metrics/multiexporter.go
[warning] 55-58: core/metrics/multiexporter.go#L55-L58
Added lines #L55 - L58 were not covered by tests
[warning] 64-66: core/metrics/multiexporter.go#L64-L66
Added lines #L64 - L66 were not covered by tests
[warning] 79-80: core/metrics/multiexporter.go#L79-L80
Added lines #L79 - L80 were not covered by tests
core/metrics/otlp.go
[warning] 37-38: core/metrics/otlp.go#L37-L38
Added lines #L37 - L38 were not covered by tests
[warning] 52-55: core/metrics/otlp.go#L52-L55
Added lines #L52 - L55 were not covered by tests
[warning] 57-57: core/metrics/otlp.go#L57
Added line #L57 was not covered by tests
[warning] 150-151: core/metrics/otlp.go#L150-L151
Added lines #L150 - L151 were not covered by tests
[warning] 157-158: core/metrics/otlp.go#L157-L158
Added lines #L157 - L158 were not covered by tests
[warning] 168-169: core/metrics/otlp.go#L168-L169
Added lines #L168 - L169 were not covered by tests
[warning] 173-174: core/metrics/otlp.go#L173-L174
Added lines #L173 - L174 were not covered by tests
[warning] 183-184: core/metrics/otlp.go#L183-L184
Added lines #L183 - L184 were not covered by tests
[warning] 190-191: core/metrics/otlp.go#L190-L191
Added lines #L190 - L191 were not covered by tests
[warning] 194-195: core/metrics/otlp.go#L194-L195
Added lines #L194 - L195 were not covered by tests
[warning] 222-230: core/metrics/otlp.go#L222-L230
Added lines #L222 - L230 were not covered by tests
Markdownlint
packages/contracts-rfq/CHANGELOG.md
9-9: Expected: h2; Actual: h3
Heading levels should only increment by one level at a time(MD001, heading-increment)
docs/bridge/docs/rfq/API/API.md
28-28: null
Emphasis used instead of a heading(MD036, no-emphasis-as-heading)
GitHub Check: CodeQL
packages/rest-api/src/app.ts
[failure] 560-560: Database query built from user-controlled sources
This query string depends on a user-provided value.
[failure] 625-625: Database query built from user-controlled sources
This query string depends on a user-provided value.
Additional comments not posted (51)
core/metrics/export_test.go (1)
4-6
: Verify the correctness of theheadersToMap
implementation.The
HeadersToMap
function delegates the actual conversion logic to theheadersToMap
helper function. Ensure thatheadersToMap
is well-tested and handles various edge cases correctly.Run the following script to verify the
headersToMap
implementation:Verification successful
Implementation and testing of
headersToMap
appear adequateThe
headersToMap
function is implemented correctly for the basic use case of splitting header strings into key-value pairs. It's indirectly tested through theHeadersToMap
wrapper function inTestHeadersToMap
, which uses a table-driven approach, likely covering various scenarios.Potential improvements to consider:
- Handling of empty input strings
- Trimming whitespace from keys and values
- Dealing with malformed input (e.g., missing '=' in a pair)
The function's usage is limited to testing contexts, which reduces its potential impact on the overall system.
Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the correctness of the `headersToMap` implementation.
# Test 1: Search for the `headersToMap` function definition.
# Expect: The function definition should exist in one of the Go files.
rg --type go $'func headersToMap'
# Test 2: Search for unit tests for the `headersToMap` function.
# Expect: There should be unit tests covering various scenarios.
rg --type go $'func Test(.*?)headersToMap'
Length of output: 151
Script:
#!/bin/bash
# Inspect the implementation of headersToMap
echo "=== headersToMap implementation ==="
sed -n '/func headersToMap/,/^}/p' core/metrics/otlp.go
# Search for tests using HeadersToMap
echo -e "\n=== Tests using HeadersToMap ==="
rg --type go 'HeadersToMap\(' -A 5 -B 5
# Look for usages of HeadersToMap in the codebase
echo -e "\n=== Usages of HeadersToMap ==="
rg --type go 'HeadersToMap\('
Length of output: 2571
packages/contracts-rfq/contracts/interfaces/IMulticallTarget.sol (3)
1-2
: LGTM!The SPDX license identifier and Solidity version pragma are correctly specified.
4-10
: LGTM!The notice provides useful context, and the
Result
struct is correctly defined within the interface.
12-18
: LGTM!The
multicallNoResults
andmulticallWithResults
functions are correctly defined with appropriate parameters and return types. TheignoreReverts
parameter and the returned array ofResult
structs provide flexibility and detailed information about the multicall execution.packages/synapse-interface/next.config.js (1)
47-47
: LGTM! The addition ofzh-CN
locale enhances multilingual support.The inclusion of the
zh-CN
locale in thei18n
configuration is a positive change that expands the application's accessibility to Chinese-speaking users. This aligns with the goal of providing a more inclusive and globally accessible user experience.Ensure that the necessary translations and localized content are added for the new locale.
packages/contracts-rfq/test/harnesses/MulticallTargetHarness.sol (1)
6-43
: LGTM!The
MulticallTargetHarness
contract is a well-structured testing harness that extends theMulticallTarget
contract. It provides a comprehensive set of functions for testing various scenarios, including state variable manipulation and different types of reverts.The contract follows the Solidity naming conventions, uses appropriate data types, and imports the necessary dependencies. The code is clean, readable, and well-documented.
Overall, this contract is a valuable addition to the testing suite and can improve the test coverage and quality of the codebase.
core/metrics/multiexporter_test.go (1)
14-45
: LGTM!The test function
TestMultiExporter
is well-structured and effectively tests the core functionality of theMultiExporter
. It covers the following aspects:
- Initialization of the
MultiExporter
with multiple exporters- Exporting spans using the
ExportSpans
method and verifying that they are exported to all the underlying exporters- Shutting down the
MultiExporter
using theShutdown
method and verifying that subsequent attempts to export spans do not result in errorsThe test uses appropriate assertions to validate the expected behavior and leverages the OpenTelemetry testing utilities effectively.
Great job on adding this comprehensive test for the
MultiExporter
functionality!core/metrics/otlp_test.go (1)
9-71
: LGTM!The test function
TestHeadersToMap
is well-structured and covers a good range of scenarios. The use of table-driven testing,reflect.DeepEqual
, and sub-tests are all good practices. The test cases are easy to understand and maintain.core/metrics/multiexporter.go (1)
1-83
: Excellent implementation of the MultiExporter!The
MultiExporter
interface and its implementationmultiExporter
are well-designed and follow best practices. The use of goroutines and wait groups for concurrent execution of the export operation across multiple exporters is an effective approach. The timeout context ensures that the export operation doesn't block indefinitely, and themultierr
package is used appropriately to combine errors from multiple exporters.The code is well-documented with clear comments explaining the purpose and functionality of each method. Overall, this implementation enhances the flexibility and scalability of trace data exporting in applications using OpenTelemetry.
Tools
GitHub Check: codecov/patch
[warning] 55-58: core/metrics/multiexporter.go#L55-L58
Added lines #L55 - L58 were not covered by tests
[warning] 64-66: core/metrics/multiexporter.go#L64-L66
Added lines #L64 - L66 were not covered by tests
[warning] 79-80: core/metrics/multiexporter.go#L79-L80
Added lines #L79 - L80 were not covered by testspackages/synapse-interface/components/LanguageSelector.tsx (1)
15-15
: LGTM!The addition of the Simplified Chinese language option to the
languages
array is correct and follows the existing pattern. It will enable users to select Simplified Chinese as a language in the interface.packages/contracts-rfq/CHANGELOG.md (1)
6-11
: LGTM!The changelog entry for version 0.3.0 follows the conventional commits format and includes an accurate compare link and release date.
Tools
Markdownlint
9-9: Expected: h2; Actual: h3
Heading levels should only increment by one level at a time (MD001, heading-increment)
packages/contracts-rfq/contracts/utils/MulticallTarget.sol (3)
19-30
: LGTM!The
multicallNoResults
function is well-implemented and provides a useful functionality for batching calls while preserving themsg.sender
. The logic is correct, and the function handles the revert cases appropriately based on theignoreReverts
flag.The function is non-payable, which ensures that only calls with
msg.value == 0
can be batched, preventing potential security risks.Overall, the implementation is solid and adds value to the contract.
40-58
: LGTM!The
multicallWithResults
function is well-implemented and provides a useful functionality for batching calls while preserving themsg.sender
and returning the results of each call. The logic is correct, and the function handles the revert cases appropriately based on theignoreReverts
flag.The function is non-payable, which ensures that only calls with
msg.value == 0
can be batched, preventing potential security risks.Returning the success status and return data of each call as an array of
Result
structs is a nice touch, as it allows for further processing or error handling of the results.Overall, the implementation is solid and adds value to the contract.
63-75
: LGTM!The
_bubbleRevert
function is a well-implemented utility function that bubbles up the revert reason from the underlying call. The logic is correct and follows the implementation from OpenZeppelin'sAddress
library, which is a trusted source.Using inline assembly to bubble up the revert reason is an efficient approach that minimizes the gas cost. The function also preserves the custom error or revert string, if one was used, which is important for error handling and debugging.
Reverting with a custom error
MulticallTarget__UndeterminedRevert
when thereturnData
is empty is a good fallback mechanism to provide a meaningful error message in case the underlying call doesn't provide one.Overall, the implementation is solid and adds value to the contract.
services/rfq/api/docs/swagger.yaml (4)
102-105
: LGTM!Adding the
X-Api-Version
header to the API response is a good practice for API versioning and compatibility management. The description provides clarity on the purpose of the header.
126-129
: Looks good!The addition of the
X-Api-Version
header to the "Upsert quotes" endpoint response is consistent with the previous change and follows the API versioning best practice.
143-146
: Looks good to me!The addition of the
X-Api-Version
header to the "Get contract addresses" endpoint response aligns with the API versioning strategy and is consistent with the changes made to other endpoints.
185-188
: LGTM!The addition of the
X-Api-Version
header to the responses of the "Get quotes" and "Upsert quote" endpoints is consistent with the changes made to other endpoints and follows the API versioning best practice. The description provides clarity on the purpose of the header.Also applies to: 212-215
core/metrics/README.md (3)
15-16
: LGTM!The overview of environment variable support and the explanation for introducing custom variables are clear and informative. The reference to the
multiexporter.go
file is helpful for understanding the implementation details.
17-30
: Great documentation update!The table listing the additional environment variables for configuring multiple OTLP exporters is well-structured and informative. The descriptions are clear, and the inclusion of default values is helpful. The note about adding more exporters by incrementing the variable names provides flexibility and extensibility.
31-32
: Consistent and informative.Mentioning that the same configuration pattern can be applied to
OTEL_EXPORTER_OTLP_SECURE_MODE
andOTEL_EXPORTER_OTLP_HEADERS
is consistent with the previous segment and provides additional information for configuring OTLP exporters.core/metrics/otlp.go (5)
Line range hint
96-115
: LGTM!The
handleShutdown
function implementation looks good:
- It ensures a graceful shutdown of the tracer provider.
- The short wait time allows in-process spans to complete.
- Using a separate context with a timeout for the shutdown process is a good practice.
- Logging the errors as warnings is appropriate for the best-effort shutdown.
132-134
: LGTM!The
getEnvSuffix
function is simple and straightforward. It correctly retrieves the value of an environment variable with a suffix, providing the default value if the variable is not set.
136-138
: LGTM!The
makeEnv
function is a trivial concatenation of the environment variable name and suffix. It is simple and correct.
210-217
: LGTM!The
withURL
function is a simple and straightforward way to set the endpoint URL option for both HTTP and gRPC clients. It correctly appends the appropriate options to thetransportOptions
struct.
240-247
: LGTM!The
withHeaders
function correctly sets the headers option for both HTTP and gRPC clients. It uses theheadersToMap
function to convert the comma-separated string of key-value pairs to a map, which is then appended to thetransportOptions
struct.core/go.mod (1)
66-66
: Verify the impact of the gRPC dependency change.The change from an indirect to a direct dependency on
google.golang.org/grpc
package may have implications on the project's dependency management, build process, and runtime behavior.To assess the impact, run the following script:
If Tests 1 and 2 yield relevant results, it confirms that the project is directly utilizing gRPC functionalities.
If Test 3 reveals any breaking changes or compatibility issues, the code may need to be adapted accordingly.Verification successful
Verified: gRPC dependency change is justified and low-risk.
The change from an indirect to a direct dependency on
google.golang.org/grpc
v1.64.0 is well-supported by the codebase's extensive use of gRPC across multiple services. The update introduces minor changes and improvements without major breaking changes. Consider the following actions:
- Review usage of
InPayload.Data
andOutPayload.Data
as they're now deprecated.- Update any code relying on the
GRPC_GO_ADVERTISE_COMPRESSORS
environment variable.- Consider adopting
NewClient
instead of deprecatedDial
andDialContext
methods.- Verify that the removal of the
github.com/golang/protobuf
dependency doesn't affect your code.Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description:
# - Verify if the gRPC package is being used directly in the codebase.
# - Check if there are any breaking changes or compatibility issues with the updated gRPC version.

# Test 1: Search for direct usage of the gRPC package.
# Expect: Find occurrences of gRPC types, functions, or methods being used.
rg --type go -A 5 $'google.golang.org/grpc'

# Test 2: Search for gRPC server/client initialization.
# Expect: Find occurrences of server/client setup using gRPC.
rg --type go -A 10 $'grpc.NewServer|grpc.Dial'

# Test 3: Check the changelog or release notes of the gRPC package for any breaking changes or compatibility issues between the previous indirect version and the new direct version v1.64.0.
# Expect: No breaking changes or compatibility issues that could affect the project.
gh release view v1.64.0 --repo https://github.com/grpc/grpc-go

Length of output: 15071
services/rfq/api/docs/swagger.json (4)
33-39
: LGTM!The addition of the
X-Api-Version
header to the/ack
endpoint's200
response is consistent with the changes made in other endpoints. The header type and description are appropriate.
70-76
: Looks good!The
X-Api-Version
header is correctly added to the/bulk_quotes
endpoint's200
response. The header type and description match the changes made in other endpoints.
102-107
: Approved.The
X-Api-Version
header is properly added to the/contracts
endpoint's200
response. The header type and description are consistent with the changes made in other endpoints.
166-171
: Changes look good!The
X-Api-Version
header is correctly added to both theGET
andPUT
operations of the/quotes
endpoint. The header type and description are consistent across all modified endpoints.Also applies to: 201-207
services/rfq/api/docs/docs.go (4)
44-50
: LGTM!The addition of the
X-Api-Version
header to the/ack
endpoint response definition is consistent with the overall API versioning enhancement. The header type and description are appropriate.
81-87
: LGTM!The addition of the
X-Api-Version
header to the/bulk_quotes
endpoint response definition is consistent with the overall API versioning enhancement. The header type and description are appropriate.
113-118
: LGTM!The addition of the
X-Api-Version
header to the/contracts
endpoint response definition is consistent with the overall API versioning enhancement. The header type and description are appropriate.
177-182
: LGTM!The addition of the
X-Api-Version
header to the GET and PUT/quotes
endpoint response definitions is consistent with the overall API versioning enhancement. The header type and description are appropriate, and it's good to see the same header being added to both methods of the endpoint for consistency.Also applies to: 212-218
packages/contracts-rfq/test/MulticallTarget.t.sol (14)
15-19
: LGTM!The
setUp
function correctly initializes the test harness and sets the initial state. The logic is straightforward and there are no apparent issues.
21-23
: LGTM!The
getEncodedStringRevertMessage
function correctly encodes a string revert message usingabi.encodeWithSignature
. The logic is clear and there are no apparent issues.
25-32
: LGTM!The
getMsgSenderData
function correctly generates an array of encoded function calls usingabi.encodeCall
. The logic is clear and there are no apparent issues.
34-41
: LGTM!The
getMsgSenderResults
function correctly generates an array of expected results usingIMulticallTarget.Result
structs. The logic is clear and there are no apparent issues.
43-50
: LGTM!The
getNoRevertsData
function correctly generates an array of encoded function calls that do not revert usingabi.encodeCall
. The logic is clear and there are no apparent issues.
52-59
: LGTM!The
getNoRevertsResults
function correctly generates an array of expected results usingIMulticallTarget.Result
structs. The logic is clear and there are no apparent issues.
61-68
: LGTM!The
getCustomErrorRevertData
function correctly generates an array of encoded function calls with a custom error revert usingabi.encodeCall
. The logic is clear and there are no apparent issues.
70-77
: LGTM!The
getCustomErrorRevertResults
function correctly generates an array of expected results usingIMulticallTarget.Result
structs. It correctly encodes the custom error selector for the reverting call. The logic is clear and there are no apparent issues.
79-86
: LGTM!The
getStringRevertData
function correctly generates an array of encoded function calls with a string revert usingabi.encodeCall
. The logic is clear and there are no apparent issues.
88-95
: LGTM!The
getStringRevertResults
function correctly generates an array of expected results usingIMulticallTarget.Result
structs. It correctly encodes the string revert message for the reverting call. The logic is clear and there are no apparent issues.
97-104
: LGTM!The
getUndeterminedRevertData
function correctly generates an array of encoded function calls with an undetermined revert usingabi.encodeCall
. The logic is clear and there are no apparent issues.
106-113
: LGTM!The
getUndeterminedRevertResults
function correctly generates an array of expected results usingIMulticallTarget.Result
structs. It correctly encodes an empty string for the undetermined revert. The logic is clear and there are no apparent issues.
117-123
: LGTM!The
test_multicallNoResults_ignoreReverts_noReverts
function correctly tests themulticallNoResults
function with no reverts andignoreReverts
set totrue
. It retrieves the encoded data, calls the function, and asserts the expected state changes. The logic is clear and there are no apparent issues.
125-132
: LGTM!The
test_multicallNoResults_ignoreReverts_withMsgSender
function correctly tests themulticallNoResults
function withmsg.sender
andignoreReverts
set totrue
. It retrieves the encoded data, sets themsg.sender
, calls the function, and asserts the expected state changes. The logic is clear and there are no apparent issues.packages/rest-api/src/app.ts (2)
560-560
: Potential injection vulnerability in GraphQL query construction.The static analysis tool has flagged a potential injection vulnerability in the construction of the GraphQL query using the
synapseTxId
parameter.To mitigate this risk, ensure that the
synapseTxId
parameter is properly sanitized before using it in the query construction. You can use a library likegraphql-tag
to safely construct the query using variables instead of string interpolation.Apply this diff to safely construct the GraphQL query using
graphql-tag
:+ import gql from 'graphql-tag'; // ... - const graphqlQuery = ` + const graphqlQuery = gql` { bridgeTransactions( useMv: true - kappa: "${txIdWithout0x}" + kappa: $txIdWithout0x ) { toInfo { chainID address txnHash USDValue tokenSymbol blockNumber formattedTime } } } `; const graphqlResponse = await fetch(graphqlEndpoint, { method: 'POST', headers: { 'Content-Type': 'application/json', }, - body: JSON.stringify({ query: graphqlQuery }), + body: JSON.stringify({ + query: graphqlQuery, + variables: { txIdWithout0x }, + }), });Tools
GitHub Check: CodeQL
[failure] 560-560: Database query built from user-controlled sources
This query string depends on a user-provided value.
625-625
: Potential injection vulnerability in GraphQL query construction.The static analysis tool has flagged a potential injection vulnerability in the construction of the GraphQL query using the
originChainId
andtxHash
parameters.To mitigate this risk, ensure that these parameters are properly sanitized before using them in the query construction. You can use a library like
graphql-tag
to safely construct the query using variables instead of string interpolation.Apply this diff to safely construct the GraphQL query using
graphql-tag
:+ import gql from 'graphql-tag'; // ... - const graphqlQuery = ` + const graphqlQuery = gql` { bridgeTransactions( useMv: true - chainIDFrom: ${originChainId} + chainIDFrom: $originChainId - txnHash: "${txHash}" + txnHash: $txHash ) { toInfo { chainID address txnHash USDValue tokenSymbol blockNumber formattedTime } } } `; const graphqlResponse = await fetch(graphqlEndpoint, { method: 'POST', headers: { 'Content-Type': 'application/json', }, - body: JSON.stringify({ query: graphqlQuery }), + body: JSON.stringify({ + query: graphqlQuery, + variables: { originChainId, txHash }, + }), });Tools
GitHub Check: CodeQL
[failure] 625-625: Database query built from user-controlled sources
This query string depends on a user-provided value.
# magic numbers
- mnd
Enable the `mnd` linter to detect magic numbers.
The `mnd` linter is being added to the list of disabled linters, which contradicts the purpose of introducing it. To effectively detect and prevent the use of magic numbers in the codebase, the `mnd` linter should be enabled.
Remove the following lines to enable the `mnd` linter:
- # magic numbers
- - mnd
Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
# magic numbers
- mnd
### Features

* **contracts-rfq:** Multicall target abstraction [SLT-134] ([#3078](https://github.com/synapsecns/sanguine/issues/3078)) ([100324f](https://github.com/synapsecns/sanguine/commit/100324f269f77f73fc10913d0162676f5f918996))
Fix the heading level for the feature entry.
The feature entry is using heading level 3 (h3), but the conventional commits format expects heading level 2 (h2) for the feature type.
Apply this diff to fix the heading level:
-### Features
+## Features
Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
### Features

* **contracts-rfq:** Multicall target abstraction [SLT-134] ([#3078](https://github.com/synapsecns/sanguine/issues/3078)) ([100324f](https://github.com/synapsecns/sanguine/commit/100324f269f77f73fc10913d0162676f5f918996))

## Features

* **contracts-rfq:** Multicall target abstraction [SLT-134] ([#3078](https://github.com/synapsecns/sanguine/issues/3078)) ([100324f](https://github.com/synapsecns/sanguine/commit/100324f269f77f73fc10913d0162676f5f918996))
Tools
Markdownlint
9-9: Expected: h2; Actual: h3
Heading levels should only increment by one level at a time (MD001, heading-increment)
Co-authored-by: Trajan0x <[email protected]>
Deploying sanguine-fe with Cloudflare Pages
…ization (#3126) * RESTful refactor * Adds controllers and routes * Validations * Middleware for error handling * Adds tests --------- Co-authored-by: defi-moses <[email protected]>
Dismissed: packages/rest-api/src/controllers/getBridgeTxStatusController.ts
Dismissed: packages/rest-api/src/controllers/getBridgeTxStatusController.ts
Dismissed: packages/rest-api/src/controllers/getDestinationTxController.ts
- @synapsecns/[email protected]
Actionable comments posted: 5
Outside diff range and nitpick comments (17)
packages/rest-api/src/controllers/tokenListController.ts (1)
3-5
: Improve the function implementation.The function correctly returns the
tokenList
object as a JSON response. However, consider the following improvements:
- Add error handling to catch and respond with appropriate errors.
- Add request validation to ensure that the endpoint is called with the expected parameters.
- Add authentication and authorization checks to protect the endpoint from unauthorized access.
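For illustration, a minimal sketch of the first improvement (error handling) might look like the following; the import path and response wording are assumptions rather than the package's actual code, and request validation and auth checks are left to dedicated middleware:

```typescript
import { Request, Response } from 'express'
import { tokenList } from '../constants/bridgeable'

// Hypothetical hardened version of the controller: the happy path still
// returns the static token list, but unexpected failures now surface as a
// 500 response instead of crashing the request.
export const tokenListController = (req: Request, res: Response) => {
  try {
    res.json(tokenList)
  } catch (err) {
    res.status(500).json({
      error: 'Unable to return the token list. Please try again later.',
      details: (err as Error).message,
    })
  }
}
```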
packages/rest-api/src/utils/formatBNToString.ts (1)
4-22
: The function logic is correct, but it could be simplified.The function correctly handles the different cases for formatting the BigNumber instance into a string representation with different precision levels.
However, the function could be simplified by using the
toFixed
method instead of converting the number to a string.Apply this diff to simplify the function:
- if (rawNumber === 0) { - return rawNumber.toFixed(1) - } - return rawNumber.toString() + return rawNumber.toFixed(decimalPlaces)packages/rest-api/src/controllers/swapController.ts (1)
6-33
: TheswapController
function implementation looks solid!The function follows a clear and logical structure:
- It validates the request using
express-validator
and returns a 400 status code if there are validation errors.- It extracts the necessary parameters from the request query and response locals.
- It converts the amount to Wei using the
parseUnits
function, accounting for the token's decimal precision.- It calls the
swapQuote
method from theSynapse
service with the required parameters.- It formats the response to present the maximum amount out in a human-readable format using
formatUnits
and sends it back as a JSON response.- It handles errors by returning a 500 status code with a generic error message and the error details.
The function provides a robust API endpoint for handling token swaps, with proper request validation and error handling.
Consider extracting the error handling logic into a separate middleware function or utility. This would help keep the controller function focused on its core responsibility and improve code reusability. For example:
const errorHandler = (err, req, res, next) => { res.status(500).json({ error: 'An unexpected error occurred. Please try again later.', details: err.message, }) } // Use the errorHandler middleware in the Express app app.use(errorHandler)Then, you can simplify the
catch
block in theswapController
function:} catch (err) { next(err) }This way, the error handling logic is centralized and can be reused across different controller functions.
packages/rest-api/src/controllers/bridgeController.ts (1)
38-43
: Consider providing more detailed error messages.Instead of returning a generic error message, consider providing more specific error messages based on the type of error that occurred. This will help clients better understand and debug any issues they may encounter.
For example, you could modify the error handling to include more specific error messages:
} catch (err) { res.status(500).json({ - error: 'An unexpected error occurred in /bridge. Please try again later.', + error: `An unexpected error occurred in /bridge: ${err.message}`, details: err.message, }) }packages/rest-api/src/controllers/bridgeTxInfoController.ts (1)
6-50
: Add unit tests with mockedSynapse
methods.To ensure the
bridgeTxInfoController
function behaves correctly and handles different scenarios properly, consider adding unit tests that cover various cases, such as successful requests, validation errors, and network errors. In the unit tests, you can mock theSynapse.allBridgeQuotes
andSynapse.bridge
methods to control their behavior and test how the controller handles different responses from these methods. This will help catch potential bugs and regressions, and provide confidence in the correctness of the controller.packages/rest-api/src/tests/swapTxInfoRoute.test.ts (5)
10-21
: Add assertions for the correctness of the response data.While the test checks for the presence of expected properties in the response, it does not assert the correctness of the actual data. Consider adding assertions to validate the values of
data
,to
, andvalue
properties to ensure the API returns the expected results.
21-21
: Reduce the timeout value to speed up the test execution.The current timeout value of 10 seconds is quite high for a unit test. Consider reducing it to a lower value (e.g., 5 seconds) to speed up the test execution while still allowing enough time for the API to respond.
33-33
: Reduce the timeout value to speed up the test execution.Similar to the previous test, consider reducing the timeout value to a lower value (e.g., 5 seconds) to speed up the test execution while still allowing enough time for the API to respond.
43-43
: Reduce the timeout value to speed up the test execution.As with the other tests, consider reducing the timeout value to a lower value (e.g., 5 seconds) to speed up the test execution while still allowing enough time for the API to respond.
53-53
: Reduce the timeout value to speed up the test execution.Consistent with the suggestions for the other tests, consider reducing the timeout value to a lower value (e.g., 5 seconds) to speed up the test execution while still allowing enough time for the API to respond.
packages/rest-api/src/controllers/getDestinationTxController.ts (1)
65-71
: Improve error handling to provide more specific and helpful error messages.The current error handling catches any unexpected errors and responds with a generic error message. Consider providing more specific and helpful error messages based on the type of error encountered. This will aid in debugging and troubleshooting issues.
For example, you can update the error handling logic to include specific error messages for different scenarios:
} catch (err) { if (err instanceof GraphQLError) { res.status(500).json({ error: 'An error occurred while fetching data from the GraphQL endpoint.', details: err.message, }); } else if (err instanceof NetworkError) { res.status(500).json({ error: 'A network error occurred while communicating with the GraphQL endpoint.', details: err.message, }); } else { res.status(500).json({ error: 'An unexpected error occurred in /getDestinationTx. Please try again later.', details: err.message, }); } }packages/rest-api/src/tests/getSynapseTxIdRoute.test.ts (6)
10-19
: LGTM! Consider reducing the test timeout.The test case looks good and covers the happy path scenario with valid input. The assertions are checking for the expected status code and response body property.
However, the test case has a timeout of 10 seconds, which might be too high for a unit test. Consider reducing the timeout to a lower value, such as 1-2 seconds, to ensure faster test execution.
21-29
: LGTM! Consider reducing the test timeout.The test case looks good and covers the scenario when
originChainId
query parameter is missing. The assertions are checking for the expected status code and error object property.However, the test case has a timeout of 10 seconds, which might be too high for a unit test. Consider reducing the timeout to a lower value, such as 1-2 seconds, to ensure faster test execution.
31-39
: LGTM! Consider reducing the test timeout.The test case looks good and covers the scenario when
bridgeModule
query parameter is missing. The assertions are checking for the expected status code and error object property.However, the test case has a timeout of 10 seconds, which might be too high for a unit test. Consider reducing the timeout to a lower value, such as 1-2 seconds, to ensure faster test execution.
41-48
: LGTM! Consider reducing the test timeout.The test case looks good and covers the scenario when
txHash
query parameter is missing. The assertions are checking for the expected status code and error object property.However, the test case has a timeout of 10 seconds, which might be too high for a unit test. Consider reducing the timeout to a lower value, such as 1-2 seconds, to ensure faster test execution.
50-59
: LGTM! Consider reducing the test timeout.The test case looks good and covers the scenario when
originChainId
query parameter is non-numeric. The assertions are checking for the expected status code and error object property.However, the test case has a timeout of 10 seconds, which might be too high for a unit test. Consider reducing the timeout to a lower value, such as 1-2 seconds, to ensure faster test execution.
61-73
: LGTM! Consider reducing the test timeout.The test case looks good and covers the scenario when
bridgeModule
query parameter is invalid. The assertions are checking for the expected status code and error object property.However, the test case has a timeout of 10 seconds, which might be too high for a unit test. Consider reducing the timeout to a lower value, such as 1-2 seconds, to ensure faster test execution.
Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Files ignored due to path filters (1)
yarn.lock
is excluded by!**/yarn.lock
,!**/*.lock
Files selected for processing (42)
- packages/rest-api/.babelrc (1 hunks)
- packages/rest-api/.eslintrc.js (1 hunks)
- packages/rest-api/CHANGELOG.md (1 hunks)
- packages/rest-api/jest.config.js (1 hunks)
- packages/rest-api/package.json (2 hunks)
- packages/rest-api/src/app.ts (1 hunks)
- packages/rest-api/src/constants/index.ts (1 hunks)
- packages/rest-api/src/controllers/bridgeController.ts (1 hunks)
- packages/rest-api/src/controllers/bridgeTxInfoController.ts (1 hunks)
- packages/rest-api/src/controllers/getBridgeTxStatusController.ts (1 hunks)
- packages/rest-api/src/controllers/getDestinationTxController.ts (1 hunks)
- packages/rest-api/src/controllers/getSynapseTxIdController.ts (1 hunks)
- packages/rest-api/src/controllers/indexController.ts (1 hunks)
- packages/rest-api/src/controllers/swapController.ts (1 hunks)
- packages/rest-api/src/controllers/swapTxInfoController.ts (1 hunks)
- packages/rest-api/src/controllers/tokenListController.ts (1 hunks)
- packages/rest-api/src/middleware/showFirstValidationError.ts (1 hunks)
- packages/rest-api/src/routes/bridgeRoute.ts (1 hunks)
- packages/rest-api/src/routes/bridgeTxInfoRoute.ts (1 hunks)
- packages/rest-api/src/routes/getBridgeTxStatusRoute.ts (1 hunks)
- packages/rest-api/src/routes/getDestinationTxRoute.ts (1 hunks)
- packages/rest-api/src/routes/getSynapseTxIdRoute.ts (1 hunks)
- packages/rest-api/src/routes/index.ts (1 hunks)
- packages/rest-api/src/routes/indexRoute.ts (1 hunks)
- packages/rest-api/src/routes/swapRoute.ts (1 hunks)
- packages/rest-api/src/routes/swapTxInfoRoute.ts (1 hunks)
- packages/rest-api/src/routes/tokenListRoute.ts (1 hunks)
- packages/rest-api/src/services/synapseService.ts (1 hunks)
- packages/rest-api/src/tests/bridgeRoute.test.ts (1 hunks)
- packages/rest-api/src/tests/bridgeTxInfoRoute.test.ts (1 hunks)
- packages/rest-api/src/tests/getBridgeTxStatusRoute.test.ts (1 hunks)
- packages/rest-api/src/tests/getDestinationTxRoute.test.ts (1 hunks)
- packages/rest-api/src/tests/getSynapseTxIdRoute.test.ts (1 hunks)
- packages/rest-api/src/tests/indexRoute.test.ts (1 hunks)
- packages/rest-api/src/tests/swapRoute.test.ts (1 hunks)
- packages/rest-api/src/tests/swapTxInfoRoute.test.ts (1 hunks)
- packages/rest-api/src/tests/tokenListRoute.test.ts (1 hunks)
- packages/rest-api/src/utils/findTokenInfo.ts (1 hunks)
- packages/rest-api/src/utils/formatBNToString.ts (1 hunks)
- packages/rest-api/src/utils/getTokenDecimals.ts (1 hunks)
- packages/rest-api/src/validations/validateTokens.ts (1 hunks)
- packages/rest-api/tsconfig.json (3 hunks)
Files skipped from review as they are similar to previous changes (2)
- packages/rest-api/CHANGELOG.md
- packages/rest-api/package.json
Additional context used
Learnings (5)
packages/rest-api/src/routes/tokenListRoute.ts (1)
Learnt from: abtestingalpha PR: synapsecns/sanguine#3126 File: packages/rest-api/src/routes/tokenListRoute.ts:7-7 Timestamp: 2024-09-13T20:00:10.635Z Learning: The `tokenListRoute` returns a static token list and does not require input validation.
packages/rest-api/src/tests/tokenListRoute.test.ts (2)
Learnt from: abtestingalpha PR: synapsecns/sanguine#3126 File: packages/rest-api/src/tests/indexRoute.test.ts:21-31 Timestamp: 2024-09-16T17:19:34.740Z Learning: In this project, tests intentionally use hardcoded values for counts to capture changes in the number of available chains and tokens.
Learnt from: abtestingalpha PR: synapsecns/sanguine#3126 File: packages/rest-api/src/routes/tokenListRoute.ts:7-7 Timestamp: 2024-09-13T20:00:10.635Z Learning: The `tokenListRoute` returns a static token list and does not require input validation.
packages/rest-api/tsconfig.json (1)
Learnt from: abtestingalpha PR: synapsecns/sanguine#3126 File: packages/rest-api/tsconfig.json:16-16 Timestamp: 2024-09-13T20:00:47.255Z Learning: In this codebase, 'noUnusedLocals' is set to false during development to prevent build breaks when experimenting, and unused variables are caught during linting.
packages/rest-api/src/tests/indexRoute.test.ts (1)
Learnt from: abtestingalpha PR: synapsecns/sanguine#3126 File: packages/rest-api/src/tests/indexRoute.test.ts:21-31 Timestamp: 2024-09-16T17:19:34.740Z Learning: In this project, tests intentionally use hardcoded values for counts to capture changes in the number of available chains and tokens.
packages/rest-api/src/tests/bridgeRoute.test.ts (1)
Learnt from: abtestingalpha PR: synapsecns/sanguine#3126 File: packages/rest-api/src/tests/indexRoute.test.ts:21-31 Timestamp: 2024-09-16T17:19:34.740Z Learning: In this project, tests intentionally use hardcoded values for counts to capture changes in the number of available chains and tokens.
Additional comments not posted (81)
packages/rest-api/.babelrc (1)
1-3
: LGTM!The Babel configuration looks good. The combination of
@babel/preset-env
and@babel/preset-typescript
presets will enable the project to use modern JavaScript and TypeScript features while ensuring browser compatibility.packages/rest-api/src/constants/index.ts (1)
1-5
: LGTM!The constant
VALID_BRIDGE_MODULES
is a useful addition to the codebase. It can be used to validate user input, generate documentation, and enforce consistency when referencing bridge modules.The array contains a list of valid bridge module names, which can help prevent errors and improve the reliability of the application.
packages/rest-api/src/controllers/tokenListController.ts (1)
1-1
: LGTM!The import statement correctly imports the
tokenList
object from theconstants/bridgeable
module.packages/rest-api/.eslintrc.js (1)
3-10
: LGTM!The changes to the ESLint configuration are logically sound and serve a specific purpose. Turning off the
prettier/prettier
rule for thejest.config.js
file allows for greater flexibility in formatting configurations for Jest-related configurations. This change enables developers to bypass Prettier's formatting checks in the context of thejest.config.js
file, while the overall structure of the ESLint configuration remains intact.The changes have a localized impact and only affect the formatting rules for the
jest.config.js
file. They do not introduce any potential issues or risks to the codebase.packages/rest-api/src/routes/indexRoute.ts (1)
1-9
: LGTM!The code segment correctly sets up an Express route for handling GET requests to the root path ('/'). It follows the standard pattern and imports the necessary dependencies.
The route invokes the
indexController
to handle the requests, which is a good separation of concerns. The actual response will depend on the implementation of theindexController
.Overall, this is a solid foundation for the API route.
packages/rest-api/src/routes/tokenListRoute.ts (1)
1-9
: LGTM!The new route for retrieving the token list is implemented correctly:
- It follows the standard Express routing pattern.
- The
tokenListController
is imported and used appropriately.- The route is properly defined and exported.
Based on the provided learnings, the lack of input validation is acceptable as the route returns a static token list.
packages/rest-api/src/app.ts (3)
3-3
: Good refactoring!Abstracting the routing logic into a separate module improves code organization and maintainability.
8-8
: Good addition!The
express.json()
middleware is necessary for parsing JSON request bodies. This is a required setup for handling JSON data in the API.
9-9
: Looks good!Mounting the
routes
module at the root path completes the refactoring of abstracting routing logic. This keeps theapp.ts
file clean and focused on application setup.packages/rest-api/jest.config.js (1)
1-10
: LGTM! The Jest configuration settings are appropriate for the project.The configuration file sets up a robust testing framework tailored for a TypeScript-based REST API project:
- Using
ts-jest
as the preset ensures seamless processing of TypeScript files during testing.- Setting the test environment to 'node' is suitable for running tests in a Node.js context.
- Defining the root directory as
src
ensures that Jest looks for test files in the correct location.- Transforming TypeScript files with
babel-jest
enables proper transpilation during testing.- The specified file extensions and module resolution settings are comprehensive and appropriate for the project.
Overall, the configuration establishes a solid foundation for effective testing of the TypeScript codebase.
packages/rest-api/src/services/synapseService.ts (4)
1-2
: LGTM!The imports are necessary for initializing the Synapse SDK with JSON-RPC providers. The code segment looks good.
4-4
: LGTM!The import is necessary for mapping over the chain configurations to extract RPC URLs and chain IDs. The code segment looks good.
6-9
: LGTM!The mapping logic is correct and necessary for initializing the Synapse SDK. The code segment looks good.
11-11
: LGTM!The initialization logic is correct and necessary for interacting with multiple blockchain networks through the Synapse protocol. The code segment looks good.
packages/rest-api/src/utils/findTokenInfo.ts (1)
3-17
: LGTM!The
findTokenInfo
function is implemented correctly:
- It properly handles the case when the provided chain doesn't exist in
BRIDGE_MAP
.- It efficiently iterates through the token addresses to find a match for the provided token symbol.
- It returns the token information (address and decimals) when a match is found, and
null
otherwise.The logic is sound and the code is readable. Great job!
packages/rest-api/src/utils/getTokenDecimals.ts (1)
3-19
: LGTM!The
getTokenDecimals
function is well-implemented and provides a flexible way to retrieve token decimal values based on the blockchain context. Here are some key points:
- The function handles different scenarios for defining token decimals (object or single value).
- It iterates over the
tokensList
object efficiently usingObject.entries()
.- The case-insensitive comparison of token addresses ensures accurate matching.
- The default return value of 18 decimals aligns with the common standard for many cryptocurrencies.
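A minimal sketch of such a lookup, assuming each token entry keeps per-chain addresses plus either a single `decimals` number or a per-chain decimals map; the names and import path are illustrative only:

```typescript
import { tokensList } from '../constants/bridgeable'

export const getTokenDecimals = (chainId: string, tokenAddress: string): number => {
  for (const [, token] of Object.entries<any>(tokensList)) {
    const addressOnChain = token.addresses?.[chainId]
    // Compare addresses case-insensitively so checksummed input still matches.
    if (addressOnChain?.toLowerCase() === tokenAddress.toLowerCase()) {
      return typeof token.decimals === 'number'
        ? token.decimals
        : token.decimals[chainId]
    }
  }
  // Fall back to the common 18-decimal standard when the token is unknown.
  return 18
}
```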
Overall, the function enhances the usability and flexibility of token-related operations in the application.
packages/rest-api/src/utils/formatBNToString.ts (1)
1-3
: Imports look good.The imports are correctly used in the function.
packages/rest-api/src/routes/getDestinationTxRoute.ts (1)
1-22
: LGTM!The Express.js router is set up correctly to handle GET requests for retrieving transaction details. The use of
express-validator
to enforce validation rules on the request parameters is a good practice. The validation rules are appropriate for the requiredoriginChainId
andtxHash
parameters.The
showFirstValidationError
middleware is used effectively to handle validation errors and provide clear error messages for invalid requests. ThegetDestinationTxController
is invoked only after successful validation, maintaining a clean separation of concerns between routing, validation, and controller logic.Overall, this code segment enhances the API's robustness by ensuring that only valid requests are processed.
packages/rest-api/src/middleware/showFirstValidationError.ts (1)
1-24
: LGTM! The middleware enhances error handling and maintains consistency.The
showFirstValidationError
middleware follows a clear and logical flow for handling validation errors in an Express.js application. It utilizes theexpress-validator
library effectively to check for validation results and extracts the first error if any exist.The JSON response constructed by the middleware provides useful details about the validation error, including the value that failed validation, the error message, the specific field (if applicable), and the location of the error. This helps clients understand and fix the validation issues easily.
When there are no validation errors, the middleware correctly calls the
next()
function to pass control to the next middleware in the stack, ensuring the smooth flow of the request processing pipeline.Overall, this middleware enhances error handling by providing clear feedback to clients about validation issues and helps maintain a consistent error response format across the application.
packages/rest-api/src/validations/validateTokens.ts (1)
5-22
: LGTM! The token validation middleware looks good.The
validateTokens
middleware function provides a clean and reusable way to validate token parameters in Express.js routes. It leverages theexpress-validator
library to enforce validation rules and incorporates a custom validation step to ensure the token is valid using thefindTokenInfo
utility function.The middleware enhances the control flow by ensuring that only valid tokens are processed further in the request lifecycle and stores the validated token information in
req.res.locals.tokenInfo
for subsequent middleware or route handlers to access.Ensure that the
findTokenInfo
utility function is implemented correctly and handles edge cases appropriately. You can use the following script to search for its implementation and verify its correctness:Verification successful
The
findTokenInfo
function is implemented correctly, but there are areas for improvement.The function properly handles the main edge cases, such as when the chain is not found in BRIDGE_MAP or when the token symbol is not found in the chain data. However, there are a few points to consider:
- Case sensitivity: The function doesn't handle case-insensitive token symbol comparisons, which could lead to issues if the casing doesn't match exactly.
- Lack of unit tests: No unit tests were found for this function, which is a potential risk for maintaining its correctness over time.
- BRIDGE_MAP structure: While partially visible, we can't confirm if it's consistently structured across all chains and tokens.
Suggestions for improvement:
- Implement case-insensitive token symbol comparison.
- Add comprehensive unit tests for the
findTokenInfo
function.- Ensure the BRIDGE_MAP structure is consistent and well-documented.
Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the implementation of the `findTokenInfo` utility function.

# Test: Search for the function implementation. Expect: The function should handle edge cases appropriately.
ast-grep --lang typescript --pattern $'function findTokenInfo($_, $_) { $$$ }'

Length of output: 81
Script:
#!/bin/bash
# Description: Comprehensive search for findTokenInfo function or its equivalent

# Search for any occurrence of "findTokenInfo"
echo "Searching for findTokenInfo:"
rg "findTokenInfo" -n --context 3

# Search for imports of findTokenInfo
echo -e "\nSearching for imports of findTokenInfo:"
rg "import.*findTokenInfo" -n

# Search for similar function names related to token info
echo -e "\nSearching for similar function names:"
rg "function.*(find|get|retrieve).*[Tt]oken[Ii]nfo" -n --context 3

# Search for arrow function definitions related to token info
echo -e "\nSearching for arrow function definitions:"
rg "const.*[Tt]oken[Ii]nfo.*=.*\(.*\).*=>" -n --context 3

Length of output: 2786
Script:
#!/bin/bash
# Description: Gather more information about findTokenInfo function and related components

# Display the full implementation of findTokenInfo
echo "Full implementation of findTokenInfo:"
cat packages/rest-api/src/utils/findTokenInfo.ts

# Display the structure of BRIDGE_MAP
echo -e "\nStructure of BRIDGE_MAP:"
rg "BRIDGE_MAP\s*=\s*" packages/rest-api/src/constants/bridgeMap.ts -n --context 5

# Search for unit tests related to findTokenInfo
echo -e "\nSearching for unit tests of findTokenInfo:"
rg "describe.*findTokenInfo|test.*findTokenInfo" -n --context 3

Length of output: 1051
packages/rest-api/src/controllers/getSynapseTxIdController.ts (1)
5-27
: LGTM!The
getSynapseTxIdController
function is well-structured and follows best practices for handling Express.js requests. It properly validates the request, extracts the necessary parameters, invokes theSynapse
service, and returns appropriate responses based on the outcome.The error handling is also implemented correctly, ensuring that any errors are caught and returned with a 500 status code and relevant details.
packages/rest-api/tsconfig.json (4)
2-2
: LGTM!The single line formatting for the
include
section is more concise and does not affect functionality.
6-6
: LGTM!The single line formatting for the
lib
section is more concise and does not affect functionality.
16-16
: Ensure unused locals are cleaned up before merging to production.Setting
noUnusedLocals
tofalse
aligns with the codebase's practice of allowing unused locals during development and catching them during linting. However, please ensure that all unused local variables are properly cleaned up before merging this change to production.
25-27
: LGTM!The addition of the
downlevelIteration
compiler option is useful for backward compatibility with ES5 and ES3 when iterating over objects that are not arrays or array-like.The single line formatting for the
files
section is more concise and does not affect functionality.packages/rest-api/src/routes/swapTxInfoRoute.ts (1)
1-28
: LGTM! The code segment follows best practices and provides robust request validation.The code segment demonstrates several strengths:
- The use of
express-validator
for request validation ensures that only valid requests are processed.- The validations cover important aspects such as ensuring the chain is supported, tokens are valid, and the amount is numeric.
- The custom validation function
validateTokens
promotes code reusability.- The
showFirstValidationError
middleware ensures that validation errors are handled appropriately.- Delegating the request handling to a separate controller (
swapTxInfoController
) promotes separation of concerns and maintainability.Overall, the code segment provides a secure and modular approach to handling requests for retrieving swap transaction information.
packages/rest-api/src/routes/swapRoute.ts (1)
1-28
: Excellent input validation and error handling!The code segment demonstrates a robust approach to input validation and error handling for the token swap API. The use of
express-validator
library allows for comprehensive validation checks on all the required parameters, ensuring that only valid requests are processed.The custom validation for the
chain
parameter against theCHAINS_ARRAY
is a great way to enforce the supported chains. ThevalidateTokens
middleware ensures that bothfromToken
andtoToken
are valid for the specified chain, further enhancing the API's reliability.The
showFirstValidationError
middleware provides a centralized way to handle validation errors, promoting code reusability and maintainability.Overall, this approach significantly reduces the likelihood of processing invalid requests and improves the API's robustness. Great job!
packages/rest-api/src/controllers/indexController.ts (1)
4-30
: Excellent implementation of theindexController
function!The function is well-structured, follows a clear logical flow, and effectively utilizes the
tokensList
andCHAINS_ARRAY
constants to gather and process the necessary data. The data processing logic is concise and efficient, usingObject.values
andObject.entries
to extract and structure the data into a comprehensive response format.The response includes a welcome message, a list of available chains, and a structured list of tokens with their corresponding chains, enhancing the API's usability by clearly presenting the available resources to the client. This allows clients to easily parse and utilize the data for their specific needs.
Error handling is implemented to catch any unexpected errors and provide meaningful error messages to the client, ensuring a robust and informative API experience.
Overall, this implementation provides a solid foundation for clients to interact with the swap and bridge service effectively.
packages/rest-api/src/routes/getSynapseTxIdRoute.ts (1)
10-30
: LGTM!The route definition is well-structured and follows best practices:
- It validates the request parameters using
express-validator
before processing the request.- The validations ensure that the required parameters (
originChainId
,bridgeModule
, andtxHash
) are present and of the correct type.- The additional validation for
bridgeModule
against a list of valid bridge modules enhances the robustness of the API.- The use of
showFirstValidationError
middleware ensures that validation errors are handled gracefully.- The separation of the route definition and the business logic (handled by
getSynapseTxIdController
) promotes a clean and maintainable code structure.packages/rest-api/src/routes/index.ts (5)
1-2
: LGTM!The import statement for the
express
module is correct and necessary for creating an Express.js application.
3-11
: LGTM!The import statements for various route handlers are well-organized and modular. This structure enhances the maintainability and scalability of the API by clearly delineating different functionalities into separate route modules.
13-13
: LGTM!The instantiation of the Express router using
express.Router()
is correct and allows for modular and organized handling of routes.
15-23
: LGTM!The route definitions using
router.use()
are well-structured and cover various functionalities related to the REST API. Each route is associated with a specific path and its corresponding handler, allowing for organized and modular handling of incoming requests.
25-25
: LGTM!Exporting the Express router as the default export is correct and allows it to be easily imported and used in other parts of the application.
packages/rest-api/src/tests/indexRoute.test.ts (2)
10-19
: LGTM!The test is correctly verifying the response status and welcome message for the root path.
21-31
: LGTM!The test is correctly verifying the response status and the number of available chains and tokens for the root path.
Based on the learnings, the hardcoded values for the counts are intentionally used to capture changes in the number of available chains and tokens. This approach is valid for this project.
packages/rest-api/src/controllers/swapController.ts (1)
1-4
: Imports look good!The necessary imports for
express-validator
,@ethersproject/units
, and theSynapse
service are included.packages/rest-api/src/routes/bridgeRoute.ts (1)
1-34
: LGTM! The code segment follows good practices for input validation and error handling.The code segment demonstrates several positive aspects:
- It uses
express-validator
for input validation, ensuring data integrity and preventing invalid requests from reaching the controller.- It includes custom validation for
fromChain
andtoChain
to ensure only supported chains are accepted.- It utilizes the
validateTokens
function to validate the presence of tokens associated with the chains.- It validates the
amount
parameter to ensure it is numeric.- It employs the
showFirstValidationError
middleware to handle validation errors and provide meaningful error responses to the client.- It passes the request to the
bridgeController
only after successful validation, maintaining a good separation of concerns.These practices enhance the API's robustness, usability, and maintainability.
packages/rest-api/src/routes/getBridgeTxStatusRoute.ts (1)
1-38
: LGTM!The code segment implements a well-structured Express.js route for handling GET requests to retrieve the status of bridge transactions. The route includes comprehensive validation checks for incoming request parameters using the
express-validator
library.The validations ensure that only valid data is processed by the API, checking for the presence and validity of
destChainId
,bridgeModule
, andsynapseTxId
parameters. The use of constants for valid chain IDs and bridge modules helps maintain consistency and makes the code more maintainable.If any validation fails, the middleware function
showFirstValidationError
is invoked to handle the error response in a centralized manner. If all validations pass, the request is forwarded to thegetBridgeTxStatusController
for further processing.Overall, the code segment is well-organized, follows best practices, and enhances the API by providing a structured way to validate inputs for bridge transaction status requests.
packages/rest-api/src/routes/bridgeTxInfoRoute.ts (5)
11-33
: LGTM!The route definition is correctly structured, including the validation checks, error handling middleware, and the controller invocation.
14-19
: LGTM!The validation check for
fromChain
correctly ensures that the value is numeric, corresponds to a supported chain, exists, and is required.
20-25
: LGTM!The validation check for
toChain
correctly ensures that the value is numeric, corresponds to a supported chain, exists, and is required.
26-27
: LGTM!The validation checks for
fromToken
andtoToken
correctly invoke thevalidateTokens
function to validate the tokens associated with the specified chains.
28-28
: LGTM!The validation check for
amount
correctly ensures that the value is numeric.packages/rest-api/src/controllers/swapTxInfoController.ts (1)
6-42
: LGTM!The
swapTxInfoController
function follows a clear and logical flow for processing swap transactions. It incorporates the following best practices:
- Validates incoming request data using
express-validator
.- Retrieves token information from the response locals.
- Converts the
amount
to Wei format using theparseUnits
function.- Interacts with the
Synapse
service to obtain a swap quote and execute the swap operation.- Returns the transaction information in JSON format upon successful execution.
- Handles errors by returning appropriate status codes and error messages.
The code is well-structured, modular, and maintains a clear separation of concerns. The error handling ensures a robust and user-friendly API experience.
packages/rest-api/src/controllers/bridgeController.ts (2)
1-6
: LGTM!The imports are correctly used and necessary for the functionality of the
bridgeController
function.
7-44
: ThebridgeController
function is well-structured and follows best practices.
- The function uses
express-validator
for request validation, ensuring that any errors are returned with a 400 status code.- It extracts the necessary parameters from the request query and
res.locals
, and converts theamount
to Wei using theparseUnits
function.- It calls the
Synapse.allBridgeQuotes
method with the extracted parameters to fetch bridge quotes.- The response from
Synapse.allBridgeQuotes
is processed to format themaxAmountOut
andfeeAmount
using theformatBNToString
utility function.- The formatted payload is sent back to the client as a JSON response.
- In case of any errors, a 500 status code is returned with an error message.
The function provides a structured way to retrieve and format bridge transaction quotes, improving the user experience for clients interacting with the bridge service.
packages/rest-api/src/controllers/bridgeTxInfoController.ts (1)
6-50
: LGTM! The overall structure and logic of the function are well-designed.The
bridgeTxInfoController
function follows a clear and logical flow for handling the request, processing the data, and returning the response. It properly validates the request usingexpress-validator
, extracts the necessary information from the request, retrieves bridge quotes using theSynapse.allBridgeQuotes
method, processes the quotes in parallel usingPromise.all
, and invokes theSynapse.bridge
method for each quote to obtain transaction information. The function returns the results as a JSON response or a 500 status with error details if an error occurs.packages/rest-api/src/tests/swapRoute.test.ts (5)
`9-9`: Great test suite! The test suite is well-structured and covers important scenarios for the `/swap` route, including:

- Valid input test
- Unsupported chain test
- Missing `toToken` test
- Missing `amount` test

This ensures the robustness of the `/swap` route.

`10-22`: LGTM! The test case is well-written and covers the happy path scenario for the `/swap` route. It ensures that the route returns a successful response with the expected properties for valid input parameters. The increased timeout of 10 seconds is appropriate considering the real Synapse service call.

`24-35`: Excellent error handling test! The test case is well-written and covers an important error scenario for the `/swap` route. It ensures that the route returns a 400 status code and the expected error message when an unsupported chain is provided. The increased timeout of 10 seconds is appropriate considering the real Synapse service call.

`37-46`: Great validation test! The test case is well-written and covers an important validation scenario for the `/swap` route. It ensures that the route returns a 400 status code and the expected error message when the `toToken` parameter is missing. The increased timeout of 10 seconds is appropriate considering the real Synapse service call.

`48-57`: Another great validation test! The test case is well-written and covers another important validation scenario for the `/swap` route. It ensures that the route returns a 400 status code and the expected error message when the `amount` parameter is missing. The increased timeout of 10 seconds is appropriate considering the real Synapse service call.
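For reference, a hedged sketch of what one of these validation tests typically looks like with supertest; the query parameters and the error-response shape are assumptions, since the actual test file is not reproduced here.

```typescript
import request from 'supertest'
import express from 'express'

import swapRoute from '../routes/swapRoute'

const app = express()
app.use('/swap', swapRoute)

describe('/swap validation (sketch)', () => {
  it('returns 400 when amount is missing', async () => {
    const response = await request(app)
      .get('/swap')
      .query({ chain: '1', fromToken: 'USDC', toToken: 'DAI' }) // assumed params

    expect(response.status).toBe(400)
    // Assumes the error payload names the missing field.
    expect(JSON.stringify(response.body)).toContain('amount')
  }, 10000)
})
```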
packages/rest-api/src/controllers/getDestinationTxController.ts (2)
`6-72`: LGTM! The overall implementation of the `getDestinationTxController` function looks good. It follows a clear and logical flow for handling the request, fetching data from the GraphQL endpoint, and responding with the formatted transaction details. The use of `express-validator` for request validation and the handling of different response scenarios (completed vs pending transactions) are well-implemented.

`15-36`: Verify the security of the GraphQL query construction. The GraphQL query is constructed using template literals and directly interpolates the `originChainId` and `txHash` values from the request query. Ensure that these values are properly sanitized and validated to prevent potential injection vulnerabilities. Run the following script to verify the usage of `originChainId` and `txHash`:
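That verification script aside, one way to remove the interpolation risk is to pass the values as GraphQL variables rather than splicing them into the query string. A minimal sketch follows; the endpoint URL and the query/field names are assumptions for illustration.

```typescript
const GRAPHQL_ENDPOINT = 'https://explorer.example.com/graphql' // hypothetical endpoint

const query = `
  query DestinationTx($chainId: Int!, $txHash: String!) {
    bridgeTransactions(chainIDFrom: $chainId, txnHash: $txHash) {
      toInfo { chainID address hash value }
    }
  }
`

export const fetchDestinationTx = async (originChainId: string, txHash: string) => {
  const response = await fetch(GRAPHQL_ENDPOINT, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // Variables are serialized separately, so user input never edits the query text.
    body: JSON.stringify({ query, variables: { chainId: Number(originChainId), txHash } }),
  })
  return response.json()
}
```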
packages/rest-api/src/controllers/getBridgeTxStatusController.ts (5)
`1-6`: LGTM! The import statements are correctly specified and necessary for the functionality of the controller.

`7-22`: LGTM! The function correctly uses `express-validator` for request validation, handles validation errors appropriately, and calls the `getBridgeTxStatus` method with the correct arguments.

`23-48`: LGTM! The code segment correctly checks for the presence of a status, constructs the GraphQL query with the necessary fields and parameters, and sends the POST request to the GraphQL endpoint with the correct headers and body.

`50-71`: LGTM! The code segment correctly processes the GraphQL response, extracts the necessary information, handles the presence and absence of `toInfo` appropriately, and formats the transaction value using the `getTokenDecimals` utility.

`72-79`: LGTM! The `catch` block correctly handles any unexpected errors that may occur within the function and returns a 500 status code with an appropriate error message and details.

packages/rest-api/src/tests/getDestinationTxRoute.test.ts (4)
`10-30`: LGTM! The test case is well-structured and covers the happy path scenario. It checks for the presence of all the expected properties in the response body and has an appropriate timeout.

`32-39`: LGTM! The test case is well-structured and covers the error scenario for missing `originChainId`. It checks for the presence of the expected error field in the response body and has an appropriate timeout.

`41-47`: LGTM! The test case is well-structured and covers the error scenario for missing `txHash`. It checks for the presence of the expected error field in the response body and has an appropriate timeout.

`49-57`: LGTM! The test case is well-structured and covers the error scenario for non-numeric `originChainId`. It checks for the presence of the expected error field in the response body and has an appropriate timeout.

packages/rest-api/src/tests/bridgeRoute.test.ts (5)
`10-23`: LGTM! The test case covers a valid scenario and makes appropriate assertions on the response status code and body structure. The timeout of 15000 ms seems reasonable for an API call.

`25-38`: LGTM! The test case covers an error scenario for an unsupported `fromChain` value and makes appropriate assertions on the response status code and error message. The timeout of 10000 ms seems reasonable for an API call.

`40-50`: LGTM! The test case covers an error scenario for an unsupported `toChain` value and makes appropriate assertions on the response status code and error message. The timeout of 10000 ms seems reasonable for an API call.

`52-62`: LGTM! The test case covers an error scenario for a missing `fromToken` parameter and makes appropriate assertions on the response status code and error field. The timeout of 10000 ms seems reasonable for an API call.

`64-74`: LGTM! The test case covers an error scenario for a missing `amount` parameter and makes appropriate assertions on the response status code and error field. The timeout of 10000 ms seems reasonable for an API call.

packages/rest-api/src/tests/bridgeTxInfoRoute.test.ts (6)
`10-28`: LGTM! The test case for valid input is well-structured and covers the essential assertions:

- Verifies the 200 status code.
- Checks that the response body is a non-empty array.
- Validates the presence of expected properties in each response element.

The 10-second timeout is appropriate for an API call.

`30-44`: LGTM! The test case for an unsupported `fromChain` value is well-structured and covers the essential assertions:

- Verifies the 400 status code.
- Checks that the response body contains an error object with the expected message.

The 10-second timeout is appropriate for an API call.

`46-57`: LGTM! The test case for an unsupported `toChain` value is well-structured and covers the essential assertions:

- Verifies the 400 status code.
- Checks that the response body contains an error object with the expected message.

The 10-second timeout is appropriate for an API call.

`59-69`: LGTM! The test case for a missing `fromToken` query parameter is well-structured and covers the essential assertions:

- Verifies the 400 status code.
- Checks that the response body contains an error object with the expected missing field.

The 10-second timeout is appropriate for an API call.

`71-81`: LGTM! The test case for a missing `amount` query parameter is well-structured and covers the essential assertions:

- Verifies the 400 status code.
- Checks that the response body contains an error object with the expected missing field.

The 10-second timeout is appropriate for an API call.

`83-93`: LGTM! The test case for a missing `destAddress` query parameter is well-structured and covers the essential assertions:

- Verifies the 400 status code.
- Checks that the response body contains an error object with the expected missing field.

The 10-second timeout is appropriate for an API call.
packages/rest-api/src/tests/getBridgeTxStatusRoute.test.ts (6)
`10-28`: LGTM! The test case covers the happy path scenario with valid input and checks for the expected response status code and properties in the response body. The timeout of 10 seconds is reasonable for an API call.

`30-42`: LGTM! The test case covers the error scenario with an unsupported `destChainId` and checks for the expected error status code and message in the response body. The timeout of 10 seconds is reasonable for an API call.

`44-56`: LGTM! The test case covers the error scenario with an invalid `bridgeModule` and checks for the expected error status code and message in the response body. The timeout of 10 seconds is reasonable for an API call.

`58-65`: LGTM! The test case covers the error scenario with a missing required parameter `synapseTxId` and checks for the expected error status code and field in the response body. The timeout of 10 seconds is reasonable for an API call.

`67-75`: LGTM! The test case covers the error scenario with a missing required parameter `destChainId` and checks for the expected error status code and field in the response body. The timeout of 10 seconds is reasonable for an API call.

`77-85`: LGTM! The test case covers the error scenario with a missing required parameter `bridgeModule` and checks for the expected error status code and field in the response body. The timeout of 10 seconds is reasonable for an API call.
import request from 'supertest'
import express from 'express'

import tokenListRoute from '../routes/tokenListRoute'

const app = express()
app.use('/tokenList', tokenListRoute)

describe('Index Route', () => {
  it('should return a list of tokens with chain address mappings', async () => {
    const response = await request(app).get('/tokenList')

    expect(response.status).toBe(200)

    const keys = Object.keys(response.body)

    expect(keys.length).toBe(62)
    expect(response.body['ETH']['addresses']['1']).toBe(
      '0x0000000000000000000000000000000000000000'
    )
    expect(response.body['SYN']['addresses']['1']).toBe(
      '0x0f2d719407fdbeff09d87557abb7232601fd9f29'
    )
  })
})
LGTM! The test case is well-structured and covers the essential aspects of the `tokenListRoute` functionality.
The test case follows the Arrange-Act-Assert pattern and makes appropriate assertions on the response status and body structure. It also verifies the specific token addresses for 'ETH' and 'SYN'.
Consider adding a comment to explain the significance of the hardcoded values, such as the expected number of tokens and specific token addresses. This will provide clarity for future maintainers.
Additionally, consider a more flexible approach for asserting the token addresses to accommodate future changes in the token list. For example, you could store the expected token addresses in a separate configuration file and use them in the assertions.
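One possible shape for that configuration, sketched here with a hypothetical fixture module and only the two addresses already asserted in the test:

```typescript
// e.g. src/tests/fixtures/expectedTokens.ts (hypothetical path)
export const EXPECTED_TOKEN_ADDRESSES: Record<string, Record<string, string>> = {
  ETH: { '1': '0x0000000000000000000000000000000000000000' },
  SYN: { '1': '0x0f2d719407fdbeff09d87557abb7232601fd9f29' },
}

// Inside the existing test, iterate over the fixture instead of hardcoding each assertion.
export const assertTokenAddresses = (body: Record<string, any>) => {
  for (const [symbol, chains] of Object.entries(EXPECTED_TOKEN_ADDRESSES)) {
    for (const [chainId, address] of Object.entries(chains)) {
      expect(body[symbol]['addresses'][chainId]).toBe(address)
    }
  }
}
```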
const synapseTxId = await Synapse.getSynapseTxId(
  Number(originChainId),
  bridgeModule,
  txHash
)
Consider adding error handling for `originChainId` parsing. The function assumes that the `originChainId` provided in the query parameters is a valid number. However, if an invalid value is provided, the `Number()` conversion might result in `NaN`. To handle this scenario gracefully, consider adding a check for `NaN` and returning an appropriate error response.
Here's an example of how you can modify the code:
+const parsedOriginChainId = Number(originChainId)
+if (isNaN(parsedOriginChainId)) {
+  return res.status(400).json({ error: 'Invalid originChainId provided.' })
+}
 const synapseTxId = await Synapse.getSynapseTxId(
-  Number(originChainId),
+  parsedOriginChainId,
   bridgeModule,
   txHash
 )
Committable suggestion was skipped due to low confidence.
validateTokens('fromChain', 'fromToken', 'fromToken'),
validateTokens('toChain', 'toToken', 'toToken'),
check('amount').isNumeric(),
check('destAddress').isString(),
Validate the `destAddress` format based on the `toChain`. The current validation check only ensures that the `destAddress` value is a string. However, it does not validate whether the value is a valid address format for the specified `toChain`. Consider adding an additional validation to check if the `destAddress` format is valid based on the `toChain`. This can help prevent invalid or malformed addresses from being accepted, reducing the risk of failed transactions or incorrect recipient addresses.
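A minimal sketch of one way to do this for EVM chains, using ethers' address check inside an express-validator custom rule; the assumption that every supported `toChain` is EVM-compatible would need to be confirmed, and non-EVM chains would need their own format checks.

```typescript
import { check } from 'express-validator'
import { isAddress } from '@ethersproject/address'

export const validDestAddress = check('destAddress').custom((value, { req }) => {
  // For EVM chains, isAddress covers length and checksum validation.
  if (!isAddress(value)) {
    throw new Error(`Invalid destAddress for chain ${req.query.toChain}`)
  }
  return true
})
```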
export const bridgeTxInfoController = async (req, res) => {
  const errors = validationResult(req)
  if (!errors.isEmpty()) {
    return res.status(400).json({ errors: errors.array() })
  }

  try {
    const { fromChain, toChain, amount, destAddress } = req.query
    const fromTokenInfo = res.locals.tokenInfo.fromToken
    const toTokenInfo = res.locals.tokenInfo.toToken

    const amountInWei = parseUnits(amount.toString(), fromTokenInfo.decimals)

    const quotes = await Synapse.allBridgeQuotes(
      Number(fromChain),
      Number(toChain),
      fromTokenInfo.address,
      toTokenInfo.address,
      amountInWei
    )

    const txInfoArray = await Promise.all(
      quotes.map(async (quote) => {
        const txInfo = await Synapse.bridge(
          destAddress,
          quote.routerAddress,
          Number(fromChain),
          Number(toChain),
          fromTokenInfo.address,
          amountInWei,
          quote.originQuery,
          quote.destQuery
        )
        return txInfo
      })
    )
    res.json(txInfoArray)
  } catch (err) {
    res.status(500).json({
      error:
        'An unexpected error occurred in /bridgeTxInfo. Please try again later.',
      details: err.message,
    })
  }
}
Consider breaking down the function into smaller, reusable functions. The `bridgeTxInfoController` function is quite large and contains multiple responsibilities, such as request validation, data extraction, bridge quote retrieval, and transaction information processing. To improve readability and maintainability, consider breaking down the function into smaller, reusable functions that handle specific tasks. For example, you could extract the request validation logic into a separate middleware function, and create separate functions for retrieving bridge quotes and processing transaction information.
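As one possible decomposition, sketched under the assumption that the helper lives in the same module and reuses the existing `Synapse` import and `Synapse.bridge` call signature:

```typescript
// Helper that turns quotes into transaction info, so the controller only
// orchestrates validation, parsing, and the response.
// Assumes the same `Synapse` import already used by the controller.
const buildTxInfoArray = (
  quotes: any[],
  destAddress: string,
  fromChain: number,
  toChain: number,
  fromTokenAddress: string,
  amountInWei: any
) =>
  Promise.all(
    quotes.map((quote) =>
      Synapse.bridge(
        destAddress,
        quote.routerAddress,
        fromChain,
        toChain,
        fromTokenAddress,
        amountInWei,
        quote.originQuery,
        quote.destQuery
      )
    )
  )
```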
} catch (err) {
  res.status(500).json({
    error:
      'An unexpected error occurred in /bridgeTxInfo. Please try again later.',
    details: err.message,
  })
}
Improve the error handling with more specific error messages. The current error handling in the `catch` block returns a generic 500 status with a generic error message and the error details. To provide more helpful information to the client, consider improving the error handling by returning more specific error messages based on the type of error that occurred. For example, you could differentiate between validation errors, network errors, and internal server errors, and return appropriate status codes and error messages for each case.
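A hedged sketch of that idea; the `err.name`/`err.code` values checked here are assumptions about how validation and network failures surface, not confirmed behavior of the SDK.

```typescript
const sendErrorResponse = (res, err) => {
  if (err?.name === 'ValidationError') {
    return res.status(400).json({ error: 'Invalid request parameters.', details: err.message })
  }
  if (err?.code === 'ECONNREFUSED' || err?.code === 'ETIMEDOUT') {
    return res.status(502).json({ error: 'Upstream service unavailable. Please retry later.', details: err.message })
  }
  return res.status(500).json({
    error: 'An unexpected error occurred in /bridgeTxInfo. Please try again later.',
    details: err.message,
  })
}
```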
lgtm
* use experimental logger to debug * fix lint * [goreleaser]
Actionable comments posted: 4
Outside diff range and nitpick comments (1)
contrib/opbot/botmd/botmd.go (1)
`41-43`: Add tests for the uncovered lines. The static analysis hints indicate that the added lines related to the experimental logger integration are not covered by tests. To ensure the reliability and maintainability of the code, it's important to have adequate test coverage.
Do you want me to generate the missing test cases or open a GitHub issue to track this task?
Also applies to: 48-48
Tools
GitHub Check: codecov/patch
[warning] 41-43: contrib/opbot/botmd/botmd.go#L41-L43
Added lines #L41 - L43 were not covered by tests
Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Files selected for processing (3)
- contrib/opbot/botmd/botmd.go (2 hunks)
- contrib/opbot/botmd/commands.go (3 hunks)
- contrib/opbot/go.mod (1 hunks)
Additional context used
GitHub Check: codecov/patch
contrib/opbot/botmd/botmd.go
[warning] 41-43: contrib/opbot/botmd/botmd.go#L41-L43
Added lines #L41 - L43 were not covered by tests
[warning] 48-48: contrib/opbot/botmd/botmd.go#L48
Added line #L48 was not covered by tests

contrib/opbot/botmd/commands.go
[warning] 85-85: contrib/opbot/botmd/commands.go#L85
Added line #L85 was not covered by tests
[warning] 294-294: contrib/opbot/botmd/commands.go#L294
Added line #L294 was not covered by tests
[warning] 343-348: contrib/opbot/botmd/commands.go#L343-L348
Added lines #L343 - L348 were not covered by tests
[warning] 350-350: contrib/opbot/botmd/commands.go#L350
Added line #L350 was not covered by tests
[warning] 359-359: contrib/opbot/botmd/commands.go#L359
Added line #L359 was not covered by tests
Additional comments not posted (3)
contrib/opbot/botmd/botmd.go (2)
`14-14`: LGTM! The import statement for the experimental logger is correct.

`35-35`: Experimental logger integration looks good! The addition of the `logger` field to the `Bot` struct and its initialization in the `NewBot` function using the `otelzap` and `experimentalLogger` packages is implemented correctly. This change enhances the bot's logging capabilities, which can aid in debugging and monitoring.

Also applies to: 41-43, 48-48

contrib/opbot/go.mod (1)

`31-31`: LGTM! The change from an indirect to a direct dependency on the `github.com/uptrace/opentelemetry-go-extra/otelzap` package looks good. This aligns with modern practices for observability and logging using OpenTelemetry and the Zap logging library.
@@ -82,6 +82,7 @@
 // search for the transaction
 res, err := b.signozClient.SearchTraces(ctx.Context(), signoz.Last3Hr, searchMap)
 if err != nil {
+  b.logger.Errorf(ctx.Context(), "error searching for the transaction: %v", err)
Add a test case for the error scenario.
The error logging statement improves the visibility of potential failures. However, the static analysis tool indicates that this line is not covered by tests.
Please add a test case that simulates the error scenario to ensure the logging statement is executed as expected.
Do you want me to generate the test case or open a GitHub issue to track this task?
Tools
GitHub Check: codecov/patch
[warning] 85-85: contrib/opbot/botmd/commands.go#L85
Added line #L85 was not covered by tests
@@ -290,6 +291,7 @@
 }
 }
 if err != nil {
+  b.logger.Errorf(ctx.Context(), "error fetching quote request: %v", err)
Add a test case for the error scenario.
The error logging statement improves the visibility of potential failures. However, the static analysis tool indicates that this line is not covered by tests.
Please add a test case that simulates the error scenario to ensure the logging statement is executed as expected.
Do you want me to generate the test case or open a GitHub issue to track this task?
Tools
GitHub Check: codecov/patch
[warning] 294-294: contrib/opbot/botmd/commands.go#L294
Added line #L294 was not covered by tests
if err != nil {
  b.logger.Errorf(ctx, "error fetching quote request: %v", err)
Add a test case for the error scenario.
The error logging statement improves the visibility of potential failures. However, the static analysis tool indicates that this line is not covered by tests.
Please add a test case that simulates the error scenario to ensure the logging statement is executed as expected.
Do you want me to generate the test case or open a GitHub issue to track this task?
Tools
GitHub Check: codecov/patch
[warning] 350-350: contrib/opbot/botmd/commands.go#L350
Added line #L350 was not covered by tests
 txHash, err = relClient.GetTxHashByNonce(
   ctx,
   &relapi.GetTxByNonceRequest{
     ChainID: rawRequest.OriginChainID,
     Nonce:   nonce,
   })
 if err != nil {
   b.logger.Errorf(ctx, "error fetching quote request: %v", err)
   return fmt.Errorf("error fetching quote request: %w", err)
 }
 return nil
 },
-retry.WithMaxAttempts(3),
-retry.WithMaxAttemptTime(15*time.Second),
+retry.WithMaxAttempts(5),
+retry.WithMaxAttemptTime(30*time.Second),
 )
 if err != nil {
   b.logger.Errorf(ctx.Context(), "error fetching quote request: %v", err)
Tip
Codebase Verification
Add test cases for rfqRefund retry mechanism
The review has revealed a lack of specific test cases for the rfqRefund function and its newly implemented retry mechanism. To ensure the reliability and correctness of this critical functionality, please consider the following actions:
- Create new test cases in contrib/opbot/botmd/commands_test.go specifically for the rfqRefund function.
- Implement tests that cover both successful scenarios and scenarios where retries are necessary.
- Use the structure of TestRetryWithBackoff in core/retry/retry_test.go as a reference for testing retry mechanisms.
- Ensure that the new tests verify the behavior with the specified MaxAttempts (5) and MaxAttemptTime (30 seconds).
Example test structure:
func TestRfqRefundRetryMechanism(t *testing.T) {
// Test successful scenario
t.Run("Success", func(t *testing.T) {
// Implement test for successful rfqRefund execution
})
// Test retry scenario
t.Run("RetrySuccess", func(t *testing.T) {
// Implement test where rfqRefund succeeds after retries
})
// Test max retries exceeded
t.Run("MaxRetriesExceeded", func(t *testing.T) {
// Implement test where rfqRefund fails after max retries
})
// Test max attempt time exceeded
t.Run("MaxAttemptTimeExceeded", func(t *testing.T) {
// Implement test where rfqRefund fails due to timeout
})
}
These tests will help verify the impact of the retry mechanism changes and ensure the robustness of the rfqRefund function.
Analysis chain
Add test cases for the retry mechanism changes and verify the impact.
The changes to the retry mechanism aim to provide a more robust handling of transient errors. However, the static analysis tool indicates that the added and modified lines are not covered by tests.
Please consider the following:
- Add test cases that simulate transient errors to ensure the retry mechanism behaves as expected.
- Verify the impact of the increased maximum attempts and attempt time on the function's behavior and performance.
Run the following script to analyze the impact of the retry mechanism changes:
Do you want me to generate the test cases or open a GitHub issue to track this task?
Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Analyze the impact of the retry mechanism changes.
# Test: Search for the `retry.WithBackoff` usage. Expect: Only the updated usage.
rg --type go $'retry\\.WithBackoff' -A 5
# Test: Search for test cases that simulate transient errors. Expect: At least one test case.
rg --type go $'[Tt]est.*rfqRefund' -A 20 | rg $'err.*fetching quote request'
Length of output: 32450
Script:
#!/bin/bash
# Search for test files related to rfqRefund or commands.go
fd -e go test | xargs rg -i 'rfqrefund|commands\.go'
# Look for existing retry mechanism tests
rg --type go 'func Test.*Retry' -g '*_test.go'
# Check for any existing tests in the same package as commands.go
fd -e go | rg 'contrib/opbot/botmd/.*_test\.go'
Length of output: 292
Script:
#!/bin/bash
# Examine the contents of commands_test.go for rfqRefund related tests
rg -i 'func test.*rfqrefund' contrib/opbot/botmd/commands_test.go
# Check for any usage of retry mechanism in the test file
rg 'retry\.WithBackoff' contrib/opbot/botmd/commands_test.go
# Look at the structure of TestRetryWithBackoff for reference
rg -A 10 'func TestRetryWithBackoff' core/retry/retry_test.go
Length of output: 501
Tools
GitHub Check: codecov/patch
[warning] 343-348: contrib/opbot/botmd/commands.go#L343-L348
Added lines #L343 - L348 were not covered by tests
[warning] 350-350: contrib/opbot/botmd/commands.go#L350
Added line #L350 was not covered by tests
[warning] 359-359: contrib/opbot/botmd/commands.go#L359
Added line #L359 was not covered by tests
* descending timestamp support * better code * fix order [goreleaser] * [goreleaser] * differnt delimiter [goreleaser] * fix [goreleaser] --------- Co-authored-by: Trajan0x <[email protected]>
Actionable comments posted: 1
Outside diff range and nitpick comments (1)
contrib/opbot/botmd/commands.go (1)
`54-59`: Update command description to include 'order' parameter. The command now accepts an optional 'order' parameter, but the description doesn't mention it. Including information about the 'order' parameter in the description will help users understand how to use the command fully.
Consider updating the description to include details about the 'order' parameter, e.g., "Find a transaction in signoz, with an optional 'order' parameter ('asc' or 'desc') to sort the results."
Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Files selected for processing (1)
- contrib/opbot/botmd/commands.go (6 hunks)
Additional comments not posted (1)
contrib/opbot/botmd/commands.go (1)
`63-63`: LGTM! Splitting tags using "@" aligns with the new command format and improves tag parsing.
order := strings.ToLower(ctx.Request().Param("order"))
isAscending := order == "a" || order == "asc"
if isAscending {
  sort.Slice(traceList, func(i, j int) bool {
    return traceList[i].Timestamp.Before(traceList[j].Timestamp)
  })
}
Ensure consistent sorting by setting a default sort order
Currently, if the 'order' parameter is not provided or doesn't match expected values, the `traceList` may not be sorted, leading to inconsistent results. It's recommended to define a default sort order (e.g., descending) and handle invalid inputs gracefully.
Apply this diff to set descending order as the default and handle invalid 'order' inputs:
order := strings.ToLower(ctx.Request().Param("order"))
-isAscending := order == "a" || order == "asc"
-if isAscending {
- sort.Slice(traceList, func(i, j int) bool {
- return traceList[i].Timestamp.Before(traceList[j].Timestamp)
- })
-}
+switch order {
+case "a", "asc":
+ sort.Slice(traceList, func(i, j int) bool {
+ return traceList[i].Timestamp.Before(traceList[j].Timestamp)
+ })
+case "", "d", "desc":
+ sort.Slice(traceList, func(i, j int) bool {
+ return traceList[i].Timestamp.After(traceList[j].Timestamp)
+ })
+default:
+ _, err := ctx.Response().Reply("Invalid 'order' parameter. Use 'asc' or 'desc'.")
+ if err != nil {
+ log.Println(err)
+ }
+ return
+}
Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
order := strings.ToLower(ctx.Request().Param("order"))
switch order {
case "a", "asc":
  sort.Slice(traceList, func(i, j int) bool {
    return traceList[i].Timestamp.Before(traceList[j].Timestamp)
  })
case "", "d", "desc":
  sort.Slice(traceList, func(i, j int) bool {
    return traceList[i].Timestamp.After(traceList[j].Timestamp)
  })
default:
  _, err := ctx.Response().Reply("Invalid 'order' parameter. Use 'asc' or 'desc'.")
  if err != nil {
    log.Println(err)
  }
  return
}
function multicallNoResults(bytes[] calldata data, bool ignoreReverts) external {
    for (uint256 i = 0; i < data.length; ++i) {
        // We perform a delegate call to ourself to preserve the msg.sender. This is identical to `msg.sender`
        // calling the functions directly one by one, therefore doesn't add any security risks.
        // Note: msg.value is also preserved when doing a delegate call, but this function is not payable,
        // so it's always 0 and not a security risk.
        (bool success, bytes memory result) = address(this).delegatecall(data[i]);
        if (!success && !ignoreReverts) {
            _bubbleRevert(result);
        }
    }
}
Check notice
Code scanning / Slither
Calls inside a loop Low
Check warning
Code scanning / Slither
Low-level calls Warning
function multicallWithResults(
    bytes[] calldata data,
    bool ignoreReverts
)
    external
    returns (Result[] memory results)
{
    results = new Result[](data.length);
    for (uint256 i = 0; i < data.length; ++i) {
        // We perform a delegate call to ourself to preserve the msg.sender. This is identical to `msg.sender`
        // calling the functions directly one by one, therefore doesn't add any security risks.
        // Note: msg.value is also preserved when doing a delegate call, but this function is not payable,
        // so it's always 0 and not a security risk.
        (results[i].success, results[i].returnData) = address(this).delegatecall(data[i]);
        if (!results[i].success && !ignoreReverts) {
            _bubbleRevert(results[i].returnData);
        }
    }
}
Check notice
Code scanning / Slither
Calls inside a loop Low
Check warning
Code scanning / Slither
Low-level calls Warning
function _bubbleRevert(bytes memory returnData) internal pure {
    // Look for revert reason and bubble it up if present
    if (returnData.length > 0) {
        // The easiest way to bubble the revert reason is using memory via assembly
        // solhint-disable-next-line no-inline-assembly
        assembly {
            let returndata_size := mload(returnData)
            revert(add(32, returnData), returndata_size)
        }
    } else {
        revert MulticallTarget__UndeterminedRevert();
    }
}
Check warning
Code scanning / Slither
Assembly usage Warning
77859d5: docs preview link
77859d5: synapse-interface preview link
Summary by CodeRabbit
Release Notes
New Features
Enhancements
`X-Api-Version` header across multiple endpoints.

Bug Fixes
Tests
a2582c5: docs preview link
a2582c5: synapse-interface preview link
749bd4b: docs preview link
749bd4b: synapse-interface preview link
2e20047: docs preview link
2e20047: synapse-interface preview link
707eca0: docs preview link
707eca0: synapse-interface preview link
d988f7b: docs preview link
d988f7b: synapse-interface preview link