Update flow-go #673
Conversation
Review skipped: auto reviews are disabled on base/target branches other than the default branch. Please check the settings in the CodeRabbit UI.
Caution
Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments.
🛑 Comments failed to post (44)
tests/web3js/debug_util_test.js (1)
5-15: 💡 Codebase verification
Consider adding error handling test for debug_flowHeightByBlock
The review suggestion is valid. Looking at the codebase:
- `debug_util_test.js` only tests the happy path for `debug_flowHeightByBlock`
- Other test files in the codebase (like `eth_filter_endpoints_test.js`) demonstrate proper error handling patterns
- `debug_traces_test.js` shows comprehensive test coverage with multiple scenarios

Here's a more specific suggestion:

```diff
+const LATEST_BLOCK = 'latest'
+const INVALID_BLOCK = -1
+
-it('should retrieve flow height', async () => {
+it('should retrieve flow height for latest block', async () => {
     let response = await helpers.callRPCMethod(
         'debug_flowHeightByBlock',
-        ['latest']
+        [LATEST_BLOCK]
     )
     assert.equal(response.status, 200)
     assert.isDefined(response.body)

     let height = response.body.result
     assert.equal(height, conf.startBlockHeight)
 })
+
+it('should handle invalid block number for flow height', async () => {
+    let response = await helpers.callRPCMethod(
+        'debug_flowHeightByBlock',
+        [INVALID_BLOCK]
+    )
+    assert.equal(response.status, 400)
+    assert.isDefined(response.body.error)
+})
```

🔗 Analysis chain
Consider enhancing test coverage and error handling
While the happy path is covered, the test could be improved in several ways:
- Add error handling for the RPC call
- Include negative test cases (e.g., invalid block numbers)
- Make the test description more specific about what it's testing
- Consider extracting 'latest' as a named constant
Here's a suggested enhancement:
```diff
+const LATEST_BLOCK = 'latest'
+
-it('should retrieve flow height', async () => {
+it('should retrieve correct flow height for latest block', async () => {
     let response = await helpers.callRPCMethod(
         'debug_flowHeightByBlock',
-        ['latest']
+        [LATEST_BLOCK]
     )
+
+    // Verify HTTP response
     assert.equal(response.status, 200)
     assert.isDefined(response.body)

+    // Verify height matches configuration
     let height = response.body.result
     assert.equal(height, conf.startBlockHeight)
 })
+
+it('should handle invalid block number gracefully', async () => {
+    let response = await helpers.callRPCMethod(
+        'debug_flowHeightByBlock',
+        [-1] // Invalid block number
+    )
+    assert.equal(response.status, 400)
+})
```

Let's verify if there are any other test files that might need similar improvements:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Find other test files that might need similar improvements
fd -e js -e ts test_ tests/ --exec grep -l "debug_flowHeightByBlock"

# Check for error handling patterns in existing tests
rg "should.*handle.*error" tests/
```

Length of output: 104
Script:
```shell
#!/bin/bash
# Let's try a broader search to understand the test patterns in the codebase

# First, find all test files
fd -e js -e ts ".*test.*" tests/

# Then, let's look for any debug-related tests
rg -l "debug_" tests/

# And check for existing error handling patterns in tests
rg "it\(.*error.*" tests/
rg "it\(.*invalid.*" tests/
```

Length of output: 1786
Script:
```shell
#!/bin/bash
# Let's examine the debug-related test files more closely
cat tests/web3js/debug_util_test.js
cat tests/web3js/debug_traces_test.js

# Also check for any existing error handling patterns in these files
rg -A 5 "it\(" tests/web3js/debug_util_test.js
rg -A 5 "it\(" tests/web3js/debug_traces_test.js
```

Length of output: 19465
services/requester/utils.go (1)
21-31:
⚠️ Potential issue

Add input validation and error handling.
The function assumes valid input and doesn't handle potential errors. Consider:
- Validating input script format
- Adding error return for malformed scripts
- Using bytes.Replace to avoid string conversions
```diff
-func replaceAddresses(script []byte, chainID flow.ChainID) []byte {
+func replaceAddresses(script []byte, chainID flow.ChainID) ([]byte, error) {
+	if len(script) == 0 {
+		return nil, fmt.Errorf("empty script")
+	}
+
 	s := string(script)
+	if !strings.Contains(s, "import") {
+		return script, nil // No modifications needed
+	}
+
 	// iterate over all the import name and address pairs and replace them in script
 	for _, contract := range contracts {
 		s = strings.ReplaceAll(s,
 			fmt.Sprintf("import %s", contract.Name),
 			fmt.Sprintf("import %s from %s", contract.Name, contract.Address.HexWithPrefix()),
 		)
 	}
-	return []byte(s)
+	return []byte(s), nil
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```go
	if len(script) == 0 {
		return nil, fmt.Errorf("empty script")
	}

	s := string(script)
	if !strings.Contains(s, "import") {
		return script, nil // No modifications needed
	}

	// iterate over all the import name and address pairs and replace them in script
	for _, contract := range contracts {
		s = strings.ReplaceAll(s,
			fmt.Sprintf("import %s", contract.Name),
			fmt.Sprintf("import %s from %s", contract.Name, contract.Address.HexWithPrefix()),
		)
	}

	return []byte(s), nil
}
```
storage/mocks/mocks.go (1)
26-26: 🛠️ Refactor suggestion
Consider making timestamp deterministic for testing
Using `time.Now().Second()` can lead to non-deterministic test behavior. Consider making this value configurable or using a fixed timestamp for consistent test results.

```diff
- Timestamp: uint64(time.Now().Second()),
+ Timestamp: uint64(1234567890), // Use a fixed timestamp for deterministic tests
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```go
Timestamp: uint64(1234567890), // Use a fixed timestamp for deterministic tests
```
tests/web3js/eth_get_storage_at_test.js (1)
39-40:
⚠️ Potential issue

Fix variable declaration

The variable `newValue` is assigned without `let` or `const`, implicitly creating a global, which could lead to unexpected behavior.

```diff
- newValue = 100
+ const newValue = 100
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```javascript
const newValue = 100
updateData = deployed.contract.methods.store(newValue).encodeABI()
```
tests/web3js/eth_revert_reason_test.js (2)
80-93: 🛠️ Refactor suggestion
Extract duplicated polling logic into a helper function.
This polling logic is identical to the previous implementation. To improve maintainability and reduce code duplication, consider extracting it into a reusable helper function.
Consider adding this helper to the `helpers.js` file:

```javascript
// Add to helpers.js
async function waitForReceipt(txHash, maxAttempts = 50) {
    let receipt = null;
    let response = null;
    let attempts = 0;
    while (receipt == null && attempts < maxAttempts) {
        response = await callRPCMethod(
            'eth_getTransactionReceipt',
            [txHash]
        );
        receipt = response.body.result;
        if (receipt == null) {
            attempts++;
            await new Promise(resolve => setTimeout(resolve, 100));
        }
    }
    if (receipt == null) {
        throw new Error('Timeout waiting for transaction receipt');
    }
    return response;
}
```

Then simplify both polling sections to:

```diff
- let rcp = null
- while (rcp == null) {
-     rcp = await helpers.callRPCMethod(
-         'eth_getTransactionReceipt',
-         [txHash]
-     )
-     if (rcp.body.result == null) {
-         rcp = null
-     }
- }
+ let rcp = await helpers.waitForReceipt(txHash);
```
43-56: 🛠️ Refactor suggestion
Add timeout and delay to the polling mechanism.
While the polling mechanism is a good approach for waiting for transaction receipts, it could be improved by:
- Adding a timeout to prevent infinite loops
- Including a small delay between attempts to avoid overwhelming the node
Consider refactoring to:
```diff
 let txHash = response.body.result

 let rcp = null
+const maxAttempts = 50; // 5 seconds total timeout
+let attempts = 0;
 // wait until the transaction is executed & indexed, and its
 // receipt becomes available.
-while (rcp == null) {
+while (rcp == null && attempts < maxAttempts) {
     rcp = await helpers.callRPCMethod(
         'eth_getTransactionReceipt',
         [txHash]
     )
     if (rcp.body.result == null) {
         rcp = null
+        attempts++;
+        await new Promise(resolve => setTimeout(resolve, 100)); // 100ms delay
     }
 }
+if (rcp == null) {
+    throw new Error('Timeout waiting for transaction receipt');
+}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```javascript
let txHash = response.body.result

let rcp = null
const maxAttempts = 50; // 5 seconds total timeout
let attempts = 0;
// wait until the transaction is executed & indexed, and its
// receipt becomes available.
while (rcp == null && attempts < maxAttempts) {
    rcp = await helpers.callRPCMethod(
        'eth_getTransactionReceipt',
        [txHash]
    )
    if (rcp.body.result == null) {
        rcp = null
        attempts++;
        await new Promise(resolve => setTimeout(resolve, 100)); // 100ms delay
    }
}
if (rcp == null) {
    throw new Error('Timeout waiting for transaction receipt');
}
```
storage/pebble/register_storage_test.go (2)
1-160: 💡 Codebase verification
Test coverage needs improvement for RegisterStorage
The current test coverage has gaps in the implementation. Missing test cases:

- Error handling for iterator creation and closure in `lookupRegister`
- Panic recovery in the `Get` method
- Edge cases for `lookupKey` encoding/decoding
- Error cases for batch operations in `Store`
- Concurrent access scenarios
🔗 Analysis chain
Verify test coverage for RegisterStorage implementation.
While the tests cover many scenarios, let's verify the coverage against the actual implementation.
Let me gather more information about the RegisterStorage implementation to properly verify the test coverage.
Let me check the actual implementation in register_storage.go to compare with test coverage.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Check test coverage and find any untested methods in RegisterStorage

# Find all methods in RegisterStorage
echo "Finding RegisterStorage methods:"
ast-grep --pattern 'func ($r *RegisterStorage) $method($$$) $$$'

# Find corresponding test cases
echo -e "\nFinding test cases:"
rg -p "func Test.*Register.*"
```

Length of output: 397
Script:
```shell
#!/bin/bash
# Find the RegisterStorage implementation
echo "RegisterStorage implementation:"
ast-grep --pattern 'type RegisterStorage struct { $$$ } $$$ func ($r *RegisterStorage) $method($$$) $$$'

# Look for interface that RegisterStorage implements
echo -e "\nRegisterStorage interface:"
ast-grep --pattern 'type $name interface { $$$ Get($$$) $$$ Store($$$) $$$ $$$ }'

# Find the actual implementation file
echo -e "\nFile containing RegisterStorage:"
fd -e go register_storage
```

Length of output: 528
Script:
```shell
#!/bin/bash
# Get the implementation details
echo "RegisterStorage implementation details:"
cat storage/pebble/register_storage.go

# Find all test functions and their contents for better coverage analysis
echo -e "\nTest implementations:"
rg -A 10 "^func Test" storage/pebble/register_storage_test.go
```

Length of output: 8365
93-160: 🛠️ Refactor suggestion
Improve test naming and coverage for snapshot tests.
The snapshot tests effectively verify caching behavior, but there are some areas for improvement:
- The test names are duplicated (both using "get register" at lines 101 and 113)
- Missing tests for cache invalidation scenarios
- Test scenarios would benefit from documentation
Apply these changes to improve the test structure:
```diff
 func Test_StorageSnapshot(t *testing.T) {
+	// Test scenarios:
+	// 1. Empty snapshot returns empty value
+	// 2. Cached values reduce storage access
+	// 3. Nil values are properly cached
 	t.Parallel()

 	// ... existing setup ...

-	runDB("get register", t, func(t *testing.T, db *Storage) {
+	runDB("empty snapshot returns empty value", t, func(t *testing.T, db *Storage) {
 		// ... existing test ...
 	})

-	runDB("get register", t, func(t *testing.T, db *Storage) {
+	runDB("cached values reduce storage access", t, func(t *testing.T, db *Storage) {
 		// ... existing test ...
 	})

 	// Add new test for cache invalidation
+	runDB("cache invalidation on height change", t, func(t *testing.T, db *Storage) {
+		t.Parallel()
+		count := uint64(0)
+		storageGet := func(id flow.RegisterID, height uint64) (flow.RegisterValue, error) {
+			count++
+			return []byte{byte(count)}, nil
+		}
+		s := NewStorageSnapshot(storageGet, 0)
+
+		// First access at height 0
+		v1, err := s.GetValue(owner, key)
+		require.NoError(t, err)
+
+		// Access at different height should invalidate cache
+		s = NewStorageSnapshot(storageGet, 1)
+		v2, err := s.GetValue(owner, key)
+		require.NoError(t, err)
+		require.NotEqual(t, v1, v2)
+		require.Equal(t, uint64(2), count)
+	})
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```go
func Test_StorageSnapshot(t *testing.T) {
	// Test scenarios:
	// 1. Empty snapshot returns empty value
	// 2. Cached values reduce storage access
	// 3. Nil values are properly cached
	t.Parallel()

	owner := []byte("0x1")
	ownerAddress := flow.BytesToAddress(owner)
	key := []byte("0x3")
	value := []byte{0x4}

	runDB("empty snapshot returns empty value", t, func(t *testing.T, db *Storage) {
		t.Parallel()

		r := NewRegisterStorage(db, ownerAddress)
		s, err := r.GetSnapshotAt(0)
		require.NoError(t, err)

		v, err := s.GetValue(owner, key)
		require.NoError(t, err)
		require.Empty(t, v)
	})

	runDB("cached values reduce storage access", t, func(t *testing.T, db *Storage) {
		t.Parallel()

		count := uint64(0)
		storageGet := func(id flow.RegisterID, height uint64) (flow.RegisterValue, error) {
			count++
			return value, nil
		}
		s := NewStorageSnapshot(storageGet, 0)

		v, err := s.GetValue(owner, key)
		require.NoError(t, err)
		require.Equal(t, value, v)

		v, err = s.GetValue(owner, key)
		require.NoError(t, err)
		require.Equal(t, value, v)

		// value should be cached
		require.Equal(t, uint64(1), count)
	})

	runDB("get register - cache nil", t, func(t *testing.T, db *Storage) {
		t.Parallel()

		count := uint64(0)
		storageGet := func(id flow.RegisterID, height uint64) (flow.RegisterValue, error) {
			count++
			return nil, nil
		}
		s := NewStorageSnapshot(storageGet, 0)

		v, err := s.GetValue(owner, key)
		require.NoError(t, err)
		require.Empty(t, v)

		v, err = s.GetValue(owner, key)
		require.NoError(t, err)
		require.Empty(t, v)

		// value should be cached
		require.Equal(t, uint64(1), count)
	})

	runDB("cache invalidation on height change", t, func(t *testing.T, db *Storage) {
		t.Parallel()

		count := uint64(0)
		storageGet := func(id flow.RegisterID, height uint64) (flow.RegisterValue, error) {
			count++
			return []byte{byte(count)}, nil
		}
		s := NewStorageSnapshot(storageGet, 0)

		// First access at height 0
		v1, err := s.GetValue(owner, key)
		require.NoError(t, err)

		// Access at different height should invalidate cache
		s = NewStorageSnapshot(storageGet, 1)
		v2, err := s.GetValue(owner, key)
		require.NoError(t, err)
		require.NotEqual(t, v1, v2)
		require.Equal(t, uint64(2), count)
	})
}
```
tests/web3js/verify_cadence_arch_calls_test.js (1)
6-13: 🛠️ Refactor suggestion
Improve test organization and verification
The test setup could be enhanced in several ways:
- Use describe blocks to organize related test cases
- Make the test title more specific to what's being tested
- Add verification for successful contract deployment
Consider refactoring like this:
```diff
-it('should be able to use Cadence Arch calls', async () => {
+describe('Cadence Arch Integration Tests', () => {
+    let contractAddress;
+    let deployed;
+
+    beforeEach(async () => {
     let latest = await web3.eth.getBlockNumber()
     let expectedBlockHeight = conf.startBlockHeight
     assert.equal(latest, expectedBlockHeight)
+
     let deployed = await helpers.deployContract('storage')
     let contractAddress = deployed.receipt.contractAddress
+    assert.ok(contractAddress, 'Contract should be deployed successfully')
+    })
+
+    it('should verify random source calls', async () => {
```

Committable suggestion skipped: line range outside the PR's diff.
storage/register_delta_test.go (1)
25-29: 🛠️ Refactor suggestion
Remove unused parameter `evmBlockHeight`.

The `delta` helper function accepts an `evmBlockHeight` parameter that is never used. The snapshot is always created at height 0.

```diff
-delta := func(t *testing.T, r *pebbleStorage.RegisterStorage, evmBlockHeight uint64) *storage.RegisterDelta {
+delta := func(t *testing.T, r *pebbleStorage.RegisterStorage) *storage.RegisterDelta {
 	ss, err := r.GetSnapshotAt(0)
 	require.NoError(t, err)
 	return storage.NewRegisterDelta(ss)
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```go
delta := func(t *testing.T, r *pebbleStorage.RegisterStorage) *storage.RegisterDelta {
	ss, err := r.GetSnapshotAt(0)
	require.NoError(t, err)
	return storage.NewRegisterDelta(ss)
}
```
storage/pebble/blocks.go (1)
202-222: 🛠️ Refactor suggestion
Enhance error handling and parameter validation
The method has several areas that could be improved:
- The sanity check should explicitly handle all possible error cases, not just ErrStorageNotInitialized
- The batch parameter should be validated
- Error message formatting is inconsistent (some use "failed to" while others use "can't")
Consider applying these improvements:
```diff
 func (b *Blocks) InitHeights(cadenceHeight uint64, cadenceID flow.Identifier, batch *pebble.Batch) error {
+	if batch == nil {
+		return fmt.Errorf("batch cannot be nil")
+	}
+
 	// sanity check, make sure we don't have any heights stored, disable overwriting the database
 	_, err := b.LatestEVMHeight()
-	if !errors.Is(err, errs.ErrStorageNotInitialized) {
-		return fmt.Errorf("can't init the database that already has data stored")
+	if err == nil {
+		return fmt.Errorf("failed to init: database already contains data")
+	} else if !errors.Is(err, errs.ErrStorageNotInitialized) {
+		return fmt.Errorf("failed to check database state: %w", err)
 	}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```go
func (b *Blocks) InitHeights(cadenceHeight uint64, cadenceID flow.Identifier, batch *pebble.Batch) error {
	if batch == nil {
		return fmt.Errorf("batch cannot be nil")
	}

	// sanity check, make sure we don't have any heights stored, disable overwriting the database
	_, err := b.LatestEVMHeight()
	if err == nil {
		return fmt.Errorf("failed to init: database already contains data")
	} else if !errors.Is(err, errs.ErrStorageNotInitialized) {
		return fmt.Errorf("failed to check database state: %w", err)
	}

	if err := b.store.set(latestCadenceHeightKey, nil, uint64Bytes(cadenceHeight), batch); err != nil {
		return fmt.Errorf("failed to init latest Cadence height at: %d, with: %w", cadenceHeight, err)
	}

	if err := b.store.set(latestEVMHeightKey, nil, uint64Bytes(0), batch); err != nil {
		return fmt.Errorf("failed to init latest EVM height at: %d, with: %w", 0, err)
	}

	// we store genesis block because it isn't emitted over the network
	genesisBlock := models.GenesisBlock(b.chainID)
	if err := b.Store(cadenceHeight, cadenceID, genesisBlock, batch); err != nil {
		return fmt.Errorf("failed to store genesis block at Cadence height: %d, with: %w", cadenceHeight, err)
	}
```
cmd/run/cmd.go (1)
60-68:
⚠️ Potential issue

Remove duplicate shutdown message.
There's a duplicate shutdown message after the select statement that's redundant since shutdown messages are already logged within the select cases.
Apply this diff to remove the duplicate message:
```diff
 select {
 case <-osSig:
 	log.Info().Msg("OS Signal to shutdown received, shutting down")
 	cancel()
 case <-done:
 	log.Info().Msg("done, shutting down")
 	cancel()
 }
-
-log.Info().Msg("OS Signal to shutdown received, shutting down")
-cancel()
```

Committable suggestion skipped: line range outside the PR's diff.
tests/go.mod (1)
223-230: 💡 Codebase verification
Update golang.org/x packages to address security vulnerabilities
Based on the verification results, I recommend the following updates:
- Update golang.org/x/crypto from v0.28.0 to the latest stable version to address the Terrapin attack (CVE-2023-48795)
- The other golang.org/x packages (sys, term, text) are indirect dependencies and their current versions are not affected by known vulnerabilities
The project uses Go 1.22 which is compatible with the latest versions of these packages.
🔗 Analysis chain
Security: Verify compatibility with latest golang.org/x updates
The updates to golang.org/x packages include security-related modules:
- crypto: v0.28.0
- sys: v0.26.0
- term: v0.25.0
- text: v0.19.0
Please ensure these versions are compatible with all dependencies and consider updating to the latest stable versions if available.
Let me generate additional verification scripts to check the dependencies and compatibility.
The previous results showed security advisories for crypto package and we need to verify the actual usage and dependencies before making a final recommendation.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Check for newer versions of golang.org/x packages and any security advisories

# Check for latest versions of golang.org/x packages
for pkg in "crypto" "sys" "term" "text"; do
    echo "Checking golang.org/x/$pkg..."
    gh api graphql -f query='{
        repository(owner: "golang", name: "'$pkg'") {
            releases(last: 1) {
                nodes { tagName createdAt }
            }
        }
    }'
done

# Check for any security advisories
gh api graphql -f query='
{
    securityVulnerabilities(ecosystem: GO, first: 10, package: "golang.org/x/crypto") {
        nodes {
            advisory { summary severity publishedAt }
            vulnerableVersionRange
            firstPatchedVersion { identifier }
        }
    }
}'
```

Length of output: 4290
Script:
```shell
#!/bin/bash
# Check go.mod files for dependencies that might be affected by golang.org/x updates
echo "Checking main go.mod file..."
cat go.mod

echo -e "\nChecking all go.mod files for dependencies on golang.org/x packages..."
fd -e mod -x grep -l "golang.org/x/" {}

echo -e "\nChecking for direct usage of crypto package..."
rg -l "golang.org/x/crypto"

echo -e "\nChecking Go version..."
grep "^go " go.mod
```

Length of output: 11582
services/replayer/blocks_provider.go (2)
45-49:
⚠️ Potential issue

Ensure thread-safe access to `latestBlock`

The `latestBlock` field in `BlocksProvider` is accessed and modified without synchronization in `OnBlockReceived()` and `GetSnapshotAt()`. If `BlocksProvider` is used concurrently by multiple goroutines, this could result in race conditions.

Consider adding a mutex to safeguard access to `latestBlock`.

Proposed changes:

```diff
+import "sync"
+
 type BlocksProvider struct {
 	blocks      storage.BlockIndexer
 	chainID     flowGo.ChainID
 	tracer      *tracers.Tracer
 	latestBlock *models.Block
+	mu          sync.RWMutex
 }

 func (bp *BlocksProvider) OnBlockReceived(block *models.Block) error {
+	bp.mu.Lock()
+	defer bp.mu.Unlock()
+
 	if bp.latestBlock != nil && bp.latestBlock.Height != (block.Height-1) {
 		return fmt.Errorf(
 			"%w: received new block: %d, non-sequential of latest block: %d",
 			models.ErrInvalidHeight,
 			block.Height,
 			bp.latestBlock.Height,
 		)
 	}

 	bp.latestBlock = block
 	return nil
 }

 func (bp *BlocksProvider) GetSnapshotAt(height uint64) (
 	evmTypes.BlockSnapshot,
 	error,
 ) {
+	bp.mu.RLock()
+	defer bp.mu.RUnlock()
+
 	if bp.latestBlock != nil && bp.latestBlock.Height == height {
 		return &blockSnapshot{
 			BlocksProvider: bp,
 			block:          *bp.latestBlock,
 		}, nil
 	}

 	// Proceed to fetch block from indexer if not found in latestBlock
 	// Remaining code...
 }
```

Also applies to: 65-78
28-35: 🛠️ Refactor suggestion
Handle errors explicitly when retrieving block hash

In the anonymous function within `BlockContext()`, errors from `bs.blocks.GetByHeight(n)` and `block.Hash()` are silently ignored, returning an empty hash. This could lead to unexpected behavior if the hash is essential for downstream processes. Consider handling the errors appropriately, either by logging them or by modifying the function signature to return an error.

Proposed change:

```diff
 func(n uint64) gethCommon.Hash {
 	block, err := bs.blocks.GetByHeight(n)
 	if err != nil {
+		// Handle or log the error
+		fmt.Printf("Error retrieving block at height %d: %v\n", n, err)
 		return gethCommon.Hash{}
 	}
 	blockHash, err := block.Hash()
 	if err != nil {
+		// Handle or log the error
+		fmt.Printf("Error computing hash for block at height %d: %v\n", n, err)
 		return gethCommon.Hash{}
 	}
 	return blockHash
 },
```

Committable suggestion skipped: line range outside the PR's diff.
services/requester/remote_cadence_arch.go (1)
33-33:
⚠️ Potential issue

Add synchronization to protect concurrent access to `cachedCalls`

The `cachedCalls` map in the `RemoteCadenceArch` struct is accessed and modified in the `Run` and `runCall` methods without any synchronization mechanism. If instances of `RemoteCadenceArch` are used concurrently across multiple goroutines, this could lead to data races and unpredictable behavior.

To fix this issue, consider adding a `sync.RWMutex` to synchronize access to the `cachedCalls` map. Here's how you can modify the code:

```diff
+import "sync"
+
 type RemoteCadenceArch struct {
 	blockHeight uint64
 	client      *CrossSporkClient
 	chainID     flow.ChainID
 	cachedCalls map[string]evmTypes.Data
+	mutex       sync.RWMutex
 }

 func (rca *RemoteCadenceArch) Run(input []byte) ([]byte, error) {
 	key := hex.EncodeToString(crypto.Keccak256(input))

+	rca.mutex.RLock()
 	if result, ok := rca.cachedCalls[key]; ok {
+		rca.mutex.RUnlock()
 		return result, nil
 	}
+	rca.mutex.RUnlock()

 	evmResult, err := rca.runCall(input)
 	if err != nil {
 		return nil, err
 	}
 	return evmResult.ReturnedData, nil
 }

 func (rca *RemoteCadenceArch) runCall(input []byte) (*evmTypes.ResultSummary, error) {
 	// ... existing code ...
 	key := hex.EncodeToString(crypto.Keccak256(input))
+	rca.mutex.Lock()
 	rca.cachedCalls[key] = evmResult.ReturnedData
+	rca.mutex.Unlock()
 	return evmResult, nil
 }
```

Also applies to: 67-69, 119-119
storage/pebble/storage_test.go (3)
43-50:
⚠️ Potential issue

Potential Resource Leak: Missing `batch.Close()` Call

In the `get stored block` test, the batch created is not being closed. Failing to close the batch may lead to resource leaks. Ensure that `batch.Close()` is called after committing the batch.

Apply this diff to fix the issue:

```diff
 runDB("get stored block", t, func(t *testing.T, db *Storage) {
 	const height = uint64(12)
 	cadenceID := flow.Identifier{0x1}
 	cadenceHeight := uint64(20)

 	bl := mocks.NewBlock(height)
 	blocks := NewBlocks(db, flowGo.Emulator)
 	batch := db.NewBatch()
+	defer func() {
+		require.NoError(t, batch.Close())
+	}()

 	err := blocks.InitHeights(config.EmulatorInitCadenceHeight, flow.Identifier{0x1}, batch)
 	require.NoError(t, err)

 	err = blocks.Store(cadenceHeight, cadenceID, bl, batch)
 	require.NoError(t, err)

 	err = batch.Commit(pebble.Sync)
 	require.NoError(t, err)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```go
	batch := db.NewBatch()
	defer func() {
		require.NoError(t, batch.Close())
	}()

	err := blocks.InitHeights(config.EmulatorInitCadenceHeight, flow.Identifier{0x1}, batch)
	require.NoError(t, err)

	err = blocks.Store(cadenceHeight, cadenceID, bl, batch)
	require.NoError(t, err)

	err = batch.Commit(pebble.Sync)
```
24-32:
⚠️ Potential issue

Potential Resource Leak: Missing `batch.Close()` Call

In the `store block` test, the batch created is not being closed. Failing to close the batch may lead to resource leaks. It's important to ensure that `batch.Close()` is called after committing the batch.

Apply this diff to fix the issue:

```diff
 func TestBlock(t *testing.T) {
 	runDB("store block", t, func(t *testing.T, db *Storage) {
 		bl := mocks.NewBlock(10)
 		blocks := NewBlocks(db, flowGo.Emulator)
 		batch := db.NewBatch()
+		defer func() {
+			require.NoError(t, batch.Close())
+		}()

 		err := blocks.InitHeights(config.EmulatorInitCadenceHeight, flow.Identifier{0x1}, batch)
 		require.NoError(t, err)

 		err = blocks.Store(20, flow.Identifier{0x1}, bl, batch)
 		require.NoError(t, err)

 		err = batch.Commit(pebble.Sync)
 		require.NoError(t, err)
 	})
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```go
		batch := db.NewBatch()
		defer func() {
			require.NoError(t, batch.Close())
		}()

		err := blocks.InitHeights(config.EmulatorInitCadenceHeight, flow.Identifier{0x1}, batch)
		require.NoError(t, err)

		err = blocks.Store(20, flow.Identifier{0x1}, bl, batch)
		require.NoError(t, err)

		err = batch.Commit(pebble.Sync)
```
76-82:
⚠️ Potential issue

Potential Resource Leak: Missing `batch.Close()` Call

In the `get not found block error` test, the batch created is not being closed. Failing to close the batch may lead to resource leaks. Please ensure you call `batch.Close()` after committing the batch.

Apply this diff to fix the issue:

```diff
 runDB("get not found block error", t, func(t *testing.T, db *Storage) {
 	blocks := NewBlocks(db, flowGo.Emulator)
 	batch := db.NewBatch()
+	defer func() {
+		require.NoError(t, batch.Close())
+	}()

 	err := blocks.InitHeights(config.EmulatorInitCadenceHeight, flow.Identifier{0x1}, batch)
 	require.NoError(t, err)

 	err = blocks.Store(2, flow.Identifier{0x1}, mocks.NewBlock(1), batch) // init
 	require.NoError(t, err)

 	err = batch.Commit(pebble.Sync)
 	require.NoError(t, err)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```go
	batch := db.NewBatch()
	defer func() {
		require.NoError(t, batch.Close())
	}()

	err := blocks.InitHeights(config.EmulatorInitCadenceHeight, flow.Identifier{0x1}, batch)
	require.NoError(t, err)

	err = blocks.Store(2, flow.Identifier{0x1}, mocks.NewBlock(1), batch) // init
	require.NoError(t, err)

	err = batch.Commit(pebble.Sync)
```
services/evm/executor.go (1)
97-100: 🛠️ Refactor suggestion
Ensure thread safety of `BlockExecutor`'s mutable state
The `BlockExecutor` modifies `s.gasUsed` and `s.txIndex` during execution. If instances of `BlockExecutor` are accessed by multiple goroutines, this could lead to data races. Consider adding synchronization mechanisms or documenting that `BlockExecutor` is not safe for concurrent use.
api/utils.go (1)
95-97: ⚠️ Potential issue
Avoid including user input in error messages
Including the invalid hex string in the error message may expose sensitive information. It's safer to provide a generic error message to prevent potential information leakage.
Apply this diff to modify the error message:
if err != nil { - return common.Hash{}, fmt.Errorf("invalid hex string: %s", s) + return common.Hash{}, fmt.Errorf("invalid hex string") }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.if err != nil { return common.Hash{}, fmt.Errorf("invalid hex string") }
services/replayer/call_tracer_collector.go (1)
76-82:
⚠️ Potential issue
Potential data race on
resultsByTxID
due to concurrent map access
The
resultsByTxID
map is accessed concurrently inCollect
and during tracing callbacks without synchronization, which can lead to data races or runtime panics. In Go, maps are not safe for concurrent access across multiple goroutines.Apply this diff to synchronize access to
resultsByTxID
using a mutex:+import "sync" type CallTracerCollector struct { tracer *tracers.Tracer resultsByTxID map[common.Hash]json.RawMessage logger zerolog.Logger + mu sync.Mutex } func (ct *CallTracerCollector) Collect(txID common.Hash) (json.RawMessage, error) { + ct.mu.Lock() result, found := ct.resultsByTxID[txID] if !found { + ct.mu.Unlock() return nil, fmt.Errorf("trace result for tx %s not found", txID.String()) } delete(ct.resultsByTxID, txID) + ct.mu.Unlock() return result, nil } ... func (ct *CallTracerCollector) OnTxEnd(receipt *types.Receipt, err error) { ... + ct.mu.Lock() ct.resultsByTxID[receipt.TxHash] = res + ct.mu.Unlock() ... }Also applies to: 134-134
models/events.go (1)
236-243: 🛠️ Refactor suggestion
Avoid modifying input events by sorting in-place
In
NewSingleBlockEvents
, the code sortsevents.Events
in place, modifying the inputflow.BlockEvents
. This could lead to unintended side effects if the caller usesevents
elsewhere. Consider creating a copy of theevents.Events
slice before sorting to avoid modifying the original input.Apply this diff to create a copy of the events before sorting:
func NewSingleBlockEvents(events flow.BlockEvents) BlockEvents { + eventsCopy := events + eventsCopy.Events = make([]flow.Event, len(events.Events)) + copy(eventsCopy.Events, events.Events) // first we sort all the events in the block, by their TransactionIndex, // and then we also sort events in the same transaction, by their EventIndex. - sort.Slice(events.Events, func(i, j int) bool { + sort.Slice(eventsCopy.Events, func(i, j int) bool { if eventsCopy.Events[i].TransactionIndex != eventsCopy.Events[j].TransactionIndex { return eventsCopy.Events[i].TransactionIndex < eventsCopy.Events[j].TransactionIndex } return eventsCopy.Events[i].EventIndex < eventsCopy.Events[j].EventIndex }) - cdcEvents, err := NewCadenceEvents(events) + cdcEvents, err := NewCadenceEvents(eventsCopy) return BlockEvents{ Events: cdcEvents, Err: err, } }Also applies to: 245-249
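The copy-before-sort idea in the diff above generalizes to any slice a function receives but does not own. A small self-contained sketch:

```go
package main

import (
	"fmt"
	"sort"
)

// sortedCopy returns a sorted copy of in, leaving the caller's
// slice untouched — the same idea as copying events.Events before
// calling sort.Slice in the review comment above.
func sortedCopy(in []int) []int {
	out := make([]int, len(in))
	copy(out, in)
	sort.Ints(out)
	return out
}

func main() {
	orig := []int{3, 1, 2}
	s := sortedCopy(orig)
	fmt.Println(orig, s) // [3 1 2] [1 2 3]
}
```

Since Go slices share their backing array, sorting the parameter in place would silently reorder the caller's data; the explicit `make`+`copy` severs that link.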
storage/pebble/register_storage.go (1)
213-229:
⚠️ Potential issue
Use a pointer receiver for
StorageSnapshot.GetValue
to ensure cache updates
The
GetValue
method modifies thecache
map, but sinceStorageSnapshot
has a value receiver, the cache updates won't persist beyond the method scope. Changing the receiver to a pointer ensures the cache behaves as intended.Apply this diff to fix the receiver:
-func (s StorageSnapshot) GetValue(owner []byte, key []byte) ([]byte, error) { +func (s *StorageSnapshot) GetValue(owner []byte, key []byte) ([]byte, error) {Ensure that all calls to methods of
StorageSnapshot
use the pointer receiver as well.📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.func (s *StorageSnapshot) GetValue(owner []byte, key []byte) ([]byte, error) { id := flow.CadenceRegisterID(owner, key) value, ok := s.cache[id] if ok { return value, nil } // get from index val, err := s.storageGet(id, s.evmBlockHeight) if err != nil { return nil, err } // non-existing key will also be cached with `nil` value. s.cache[id] = val return val, nil }
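A note on the receiver suggestion above: in Go a value receiver operates on a copy of the struct, so writes to plain fields of that copy are lost, while writes into an existing map field still persist (the copied map header references the same buckets). The plain-field case in miniature:

```go
package main

import "fmt"

type counter struct{ n int }

// incValue has a value receiver: it mutates a copy, so the
// caller's struct is unchanged.
func (c counter) incValue() { c.n++ }

// incPointer has a pointer receiver: the mutation persists.
func (c *counter) incPointer() { c.n++ }

func main() {
	c := counter{}
	c.incValue()
	fmt.Println(c.n) // still 0
	c.incPointer()
	fmt.Println(c.n) // 1
}
```

Whether the review's concern applies to `StorageSnapshot` therefore depends on whether the cache map is always initialized before `GetValue` runs; a pointer receiver is the safe choice either way.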
services/ingestion/engine.go (2)
152-157:
⚠️ Potential issue
Avoid using
log.Fatal()
in a deferred function
Using
e.log.Fatal()
within a deferred function will terminate the application abruptly if an error occurs during batch closure. This can prevent graceful shutdowns and proper error handling. Instead, consider logging the error and allowing the application to continue or handle the error appropriately.Apply this diff to replace
e.log.Fatal()
withe.log.Error()
:defer func(batch *pebbleDB.Batch) { err := batch.Close() if err != nil { - e.log.Fatal().Err(err).Msg("failed to close batch") + e.log.Error().Err(err).Msg("failed to close batch") } }(batch)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.defer func(batch *pebbleDB.Batch) { err := batch.Close() if err != nil { e.log.Error().Err(err).Msg("failed to close batch") } }(batch)
260-262: 🛠️ Refactor suggestion
Include the original error when returning
The error message returned does not include the original error
err
, which can make debugging more difficult. Wrapping the original error provides additional context for understanding the failure.Apply this diff to include the original error:
err = e.registerStore.Store(registerEntriesFromKeyValue(res.StorageRegisterUpdates()), blockEvents.Height, batch) if err != nil { - return fmt.Errorf("failed to store state changes on block: %d", events.Block().Height) + return fmt.Errorf("failed to store state changes on block %d: %w", events.Block().Height, err) }Committable suggestion skipped: line range outside the PR's diff.
api/debug.go (2)
84-106:
⚠️ Potential issue
Avoid potential nil pointer dereference in
isDefaultCallTracer
In
TraceTransaction
, the functionisDefaultCallTracer(config)
is called without checking ifconfig
orconfig.Tracer
is nil. This could lead to a nil pointer dereference ifconfig
orconfig.Tracer
is nil.Apply this diff to add necessary nil checks:
func isDefaultCallTracer(config *tracers.TraceConfig) bool { if config == nil { return false } + if config.Tracer == nil { + return false + } if *config.Tracer != replayer.TracerName { return false } tracerConfig := json.RawMessage(replayer.TracerConfig) return slices.Equal(config.TracerConfig, tracerConfig) }Committable suggestion skipped: line range outside the PR's diff.
253-362:
⚠️ Potential issue
Handle possible nil
config
inTraceCall
In the
TraceCall
method, ensure thatconfig.TraceConfig
is properly initialized to avoid nil pointer dereferences when accessing it later.Apply this diff to initialize
config.TraceConfig
if it's nil:if config == nil { config = &tracers.TraceCallConfig{} } +if config.TraceConfig == nil { + config.TraceConfig = &tracers.TraceConfig{} +} tracer, err := tracerForReceipt(&config.TraceConfig, nil) if err != nil { return nil, err }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.func (d *DebugAPI) TraceCall( _ context.Context, args ethTypes.TransactionArgs, blockNrOrHash rpc.BlockNumberOrHash, config *tracers.TraceCallConfig, ) (interface{}, error) { tx, err := encodeTxFromArgs(args) if err != nil { return nil, err } // Default address in case user does not provide one from := d.config.Coinbase if args.From != nil { from = *args.From } if config == nil { config = &tracers.TraceCallConfig{} } if config.TraceConfig == nil { config.TraceConfig = &tracers.TraceConfig{} } tracer, err := tracerForReceipt(&config.TraceConfig, nil) if err != nil { return nil, err } height, err := resolveBlockTag(&blockNrOrHash, d.blocks, d.logger) if err != nil { return nil, err } cdcHeight, err := d.blocks.GetCadenceHeight(height) if err != nil { return nil, err } block, err := d.blocks.GetByHeight(height) if err != nil { return nil, err } blocksProvider := replayer.NewBlocksProvider( d.blocks, d.config.FlowNetworkID, tracer, ) viewProvider := query.NewViewProvider( d.config.FlowNetworkID, flowEVM.StorageAccountAddress(d.config.FlowNetworkID), d.registerStore, blocksProvider, BlockGasLimit, ) view, err := viewProvider.GetBlockView(block.Height) if err != nil { return nil, err } to := gethCommon.Address{} if tx.To != nil { to = *tx.To } rca := requester.NewRemoteCadenceArch(cdcHeight, d.client, d.config.FlowNetworkID) opts := []query.DryCallOption{} opts = append(opts, query.WithTracer(tracer)) opts = append(opts, query.WithExtraPrecompiledContracts([]types.PrecompiledContract{rca})) if config.StateOverrides != nil { for addr, account := range *config.StateOverrides { // Override account nonce. 
if account.Nonce != nil { opts = append(opts, query.WithStateOverrideNonce(addr, uint64(*account.Nonce))) } // Override account(contract) code. if account.Code != nil { opts = append(opts, query.WithStateOverrideCode(addr, *account.Code)) } // Override account balance. if account.Balance != nil { opts = append(opts, query.WithStateOverrideBalance(addr, (*big.Int)(*account.Balance))) } if account.State != nil && account.StateDiff != nil { return nil, fmt.Errorf("account %s has both 'state' and 'stateDiff'", addr.Hex()) } // Replace entire state if caller requires. if account.State != nil { opts = append(opts, query.WithStateOverrideState(addr, *account.State)) } // Apply state diff into specified accounts. if account.StateDiff != nil { opts = append(opts, query.WithStateOverrideStateDiff(addr, *account.StateDiff)) } } } _, err = view.DryCall( from, to, tx.Data, tx.Value, tx.Gas, opts..., ) if err != nil { return nil, err } return tracer.GetResult() }
bootstrap/bootstrap.go (2)
498-498:
⚠️ Potential issue
Fix typo in variable name 'evmBlokcHeight'
The variable
evmBlokcHeight
appears to have a typo. It should beevmBlockHeight
for clarity and consistency.Apply this diff to correct the variable name:
- evmBlokcHeight := uint64(0) + evmBlockHeight := uint64(0)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.evmBlockHeight := uint64(0)
504-507:
⚠️ Potential issue
Update error message to use the correct block height variable
In the error message, the block height is hardcoded as
0
instead of usingevmBlockHeight
. This may lead to confusion during debugging.Apply this diff to use the correct variable in the error message:
- return nil, fmt.Errorf("could not get register snapshot at block height %d: %w", 0, err) + return nil, fmt.Errorf("could not get register snapshot at block height %d: %w", evmBlockHeight, err)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.snapshot, err := registerStore.GetSnapshotAt(evmBlokcHeight) if err != nil { return nil, fmt.Errorf("could not get register snapshot at block height %d: %w", evmBlockHeight, err) }
services/ingestion/event_subscriber.go (2)
450-472: 🛠️ Refactor suggestion
Potential unbounded memory growth in recovery mode
In the
accumulateEventsMissingBlock
function, ther.recoveredEvents
slice can grow indefinitely if the missing block event is never found, which may lead to excessive memory usage. Consider adding a limit to the number of events accumulated or implementing a timeout to prevent potential memory exhaustion.You can apply the following change to add a limit to the number of recovery attempts:
func (r *RPCEventSubscriber) accumulateEventsMissingBlock(events flow.BlockEvents) models.BlockEvents { txEvents := events.Events + const maxRecoveryEvents = 10000 // Maximum number of events to accumulate // Sort `EVM.TransactionExecuted` events sort.Slice(txEvents, func(i, j int) bool { if txEvents[i].TransactionIndex != txEvents[j].TransactionIndex { return txEvents[i].TransactionIndex < txEvents[j].TransactionIndex } return txEvents[i].EventIndex < txEvents[j].EventIndex }) + if len(r.recoveredEvents)+len(txEvents) > maxRecoveryEvents { + err := fmt.Errorf("reached maximum recovery events limit (%d)", maxRecoveryEvents) + return models.NewBlockEventsError(err) + } r.recoveredEvents = append(r.recoveredEvents, txEvents...) events.Events = r.recoveredEvents // ...Committable suggestion skipped: line range outside the PR's diff.
238-243:
⚠️ Potential issueCheck for nil
EVMContract
to prevent nil pointer dereferenceWhen retrieving the
EVMContract
address, there's a potential risk of a nil pointer dereference ifEVMContract
isnil
. Ensure thatEVMContract
is notnil
before accessing itsAddress
.Apply this diff to add a nil check:
evmContracts := systemcontracts.SystemContractsForChain(r.chain) +if evmContracts.EVMContract == nil { + return 0, fmt.Errorf("EVMContract is nil for chain ID %s", r.chain) +} evmAddress := common.Address(evmContracts.EVMContract.Address)Committable suggestion skipped: line range outside the PR's diff.
models/events_test.go (3)
345-345:
⚠️ Potential issue
Use
NewMultiBlockEvents
instead ofNewSingleBlockEvents
In
TestNewMultiBlockEvents
, the test case at line 345 incorrectly callsNewSingleBlockEvents
instead ofNewMultiBlockEvents
. This causes the test to use the wrong event processing function.Apply this diff to correct the function call:
- evmEvents := NewSingleBlockEvents(blockEvents) + evmEvents := NewMultiBlockEvents(blockEvents)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.evmEvents := NewMultiBlockEvents(blockEvents)
85-85:
⚠️ Potential issue
Correct the construction of
flow.Identifier
At line 85,
flow.Identifier
is initialized with{0x1}
, which creates an identifier shorter than the required 32 bytes.flow.Identifier
expects a 32-byte array. Using an incorrectly sized identifier may lead to unpredictable behavior in the tests.Apply this diff to fix the issue by using the
flow.HexToID
function or initializing with a 32-byte array:- BlockID: flow.Identifier{0x1}, + BlockID: flow.HexToID("0000000000000000000000000000000000000000000000000000000000000001"),📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.BlockID: flow.HexToID("0000000000000000000000000000000000000000000000000000000000000001"),
218-225: 🛠️ Refactor suggestion
Handle potential nil pointer dereference for
block
In the test case starting at line 218, there's a possibility that
block
could benil
ifnewBlock
returns an error. Even thoughrequire.NoError(t, err)
is called, it's safer to ensureblock
is notnil
before using it.Apply this diff to add a check:
require.NoError(t, err) + require.NotNil(t, block)
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.block, blockEvent, err := newBlock(1, hashes) require.NoError(t, err) require.NotNil(t, block) blockEvent.TransactionIndex = 4 blockEvent.EventIndex = 0 blockEvents.Events = append(blockEvents.Events, blockEvent) // parse the EventStreaming API response evmEvents := NewSingleBlockEvents(blockEvents)
tests/web3js/debug_traces_test.js (1)
28-31:
⚠️ Potential issue
Declare the variable
response
before useThe variable
response
is used without prior declaration in multiple places. This can lead to unintended global variable creation, which may cause bugs and hinder maintainability. In JavaScript, variables should be declared usinglet
,const
, orvar
before they are used.Apply this diff to declare
response
appropriately:+ let response = await helpers.callRPCMethod( 'debug_traceTransaction', [receipt.transactionHash, callTracer] )
Repeat this change for all instances where
response
is first assigned without declaration to ensure proper variable scoping.Also applies to: 47-50, 86-88, 137-139, 190-192, 247-249, 303-305, 356-358, 383-385, 456-458, 489-491, 503-505, 533-535
api/api.go (3)
794-794: 🛠️ Refactor suggestion
⚠️ Potential issue
Handle errors appropriately in
EstimateGas
In the
EstimateGas
method, when there's an error encoding the transaction (encodeTxFromArgs
), the function returns theBlockGasLimit
without reporting the error. This could mask underlying issues and provide misleading information to the caller. Consider returning an error to inform the client about the encoding failure.
922-922: 🛠️ Refactor suggestion
Use dynamic block gas limit instead of static
BlockGasLimit
When calculating
gasUsedRatio
, using the staticBlockGasLimit
constant may result in inaccurate ratios if the actual block gas limit differs. It's better to use the actual gas limit of the block (e.g.,block.GasLimit
) to ensure accurate calculations.
1031-1031: 🛠️ Refactor suggestion
Set block's actual gas limit in
prepareBlockResponse
The
GasLimit
field in the block response is set to the staticBlockGasLimit
constant. To accurately represent the block data, use the actual gas limit from the block (block.GasLimit
) instead of a constant value.storage/register_delta.go (2)
80-129: 🛠️ Refactor suggestion
Consider making
allocateSlabIndex
concurrency safeThe
allocateSlabIndex
function manipulates shared state (e.g., account status and slab indices) without synchronization. If this function is called concurrently, race conditions may occur, leading to incorrect slab index allocations or data corruption. Consider implementing synchronization mechanisms, such as mutexes, to ensure thread-safe operations.
57-63:
⚠️ Potential issue
Correct
ValueExists
to accurately detect existing entries with empty values
The
ValueExists
method checks iflen(value) > 0
, which will returnfalse
if a key exists but its value is an empty byte slice ([]byte{}
). This could lead to incorrect assumptions about the presence of keys with empty values. Consider modifying the logic to differentiate between a missing key and a key with an empty value by checking for anil
error fromGetValue
.Apply this diff to adjust the method:
func (r *RegisterDelta) ValueExists(owner []byte, key []byte) (bool, error) { _, err := r.GetValue(owner, key) if err != nil { - return false, err + if errors.Is(err, storage.ErrValueNotFound) { + return false, nil + } + return false, err } - return len(value) > 0, nil + return true, nil }Ensure that
storage.ErrValueNotFound
correctly represents the error returned when a key does not exist.Committable suggestion skipped: line range outside the PR's diff.
services/requester/requester.go (3)
59-69:
⚠️ Potential issue
Reintroduce
context.Context
parameters to interface methodsThe removal of
context.Context
from the methodsGetBalance
,GetNonce
,GetCode
,Call
, andEstimateGas
might limit the ability to handle cancellations, timeouts, and propagate deadlines, which are essential for network operations and long-running computations. It is a best practice in Go to includecontext.Context
as the first parameter in such methods to enable better control over their execution.Apply the following diff to reintroduce
context.Context
parameters:func GetBalance( + ctx context.Context, address common.Address, height uint64, ) (*big.Int, error) func GetNonce( + ctx context.Context, address common.Address, height uint64, ) (uint64, error) func GetCode( + ctx context.Context, address common.Address, height uint64, ) ([]byte, error) func Call( + ctx context.Context, tx *types.LegacyTx, from common.Address, height uint64, stateOverrides *ethTypes.StateOverride, ) ([]byte, error) func EstimateGas( + ctx context.Context, tx *types.LegacyTx, from common.Address, height uint64, stateOverrides *ethTypes.StateOverride, ) (uint64, error)Also applies to: 74-79, 82-86
189-189:
⚠️ Potential issue
Ensure
replaceAddresses
function is defined and accessibleThe function
replaceAddresses
is used at several points (lines 189, 234, and 414) but its definition is not present in this file. IfreplaceAddresses
has been removed or moved, this will result in a compilation error. Ensure thatreplaceAddresses
is properly defined or imported from the correct package.If
replaceAddresses
is no longer needed, consider removing its usage or replacing it with the appropriate logic.Also applies to: 234-234, 414-414
559-561:
⚠️ Potential issue
Handle potential integer overflow in
AddOne64th
functionThe
AddOne64th
function addsn/64
ton
, which could cause an integer overflow whenn
is close tomath.MaxUint64
. This could lead to incorrect gas calculations and unexpected behaviors.Consider modifying the function to detect and handle overflow:
func AddOne64th(n uint64) uint64 { + increment := n / 64 + if n > math.MaxUint64 - increment { + // Handle overflow by capping the value at math.MaxUint64 or returning an error + return math.MaxUint64 + } return n + increment }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.func AddOne64th(n uint64) uint64 { increment := n / 64 if n > math.MaxUint64 - increment { // Handle overflow by capping the value at math.MaxUint64 or returning an error return math.MaxUint64 } return n + increment }
0de5ef9 into feature/local-tx-reexecution

Description
Update flow-go to include onflow/flow-go#6759

For contributor use:
- master branch
- Files changed in the Github PR explorer

Summary by CodeRabbit

Release Notes

New Features
- debug_flowHeightByBlock
- DebugAPI
- BlockExecutor for executing transactions in a blockchain context
- RegisterStorage for managing register values in a blockchain environment

Bug Fixes

Documentation

Tests

Chores