Update flow-go #673

Merged

merged 2 commits on Nov 25, 2024

Conversation

Contributor

@janezpodhostnik janezpodhostnik commented Nov 25, 2024

Description

Update flow-go to include onflow/flow-go#6759.


For contributor use:

  • Targeted PR against master branch
  • Linked to GitHub issue with discussion and accepted design OR link to spec that describes this work.
  • Code follows the standards mentioned here.
  • Updated relevant documentation
  • Re-reviewed Files changed in the GitHub PR explorer
  • Added appropriate labels

Summary by CodeRabbit

Release Notes

  • New Features

    • Enhanced documentation for the EVM Gateway, including a new API method debug_flowHeightByBlock.
    • Added transaction tracing capabilities in the DebugAPI.
    • Introduced BlockExecutor for executing transactions in a blockchain context.
    • Implemented RegisterStorage for managing register values in a blockchain environment.
  • Bug Fixes

    • Improved error handling and validation across various API methods.
  • Documentation

    • Updated README.md to clarify API usage and configuration options.
  • Tests

    • Expanded test coverage for new and existing functionalities, including transaction tracing and state overrides.
  • Chores

    • Updated dependencies to ensure compatibility with the latest libraries and frameworks.

Contributor

coderabbitai bot commented Nov 25, 2024

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

The pull request spans multiple files, focusing on the Makefile, the API layer, and documentation. Key modifications include removing the mock generation commands, restructuring BlockChainAPI and DebugAPI to improve transaction handling and tracing, and introducing new utility functions. The README.md has also been updated to clarify API functionality and configuration options.

Changes

| File Path | Change Summary |
| --- | --- |
| Makefile | Removed mock generation commands for AccountIndexer and services/traces. |
| README.md | Added debug_flowHeightByBlock method to JSON-RPC API section; updated configuration flags and clarity. |
| api/api.go | Removed accounts field from BlockChainAPI; added BlockGasLimit constant; updated method signatures. |
| api/debug.go | Enhanced DebugAPI with new fields and methods for transaction tracing; improved error handling. |
| api/encode_transaction.go | Removed file containing encodeTxFromArgs function. |
| api/pull.go | Updated transaction handling to use *ethTypes.Transaction instead of *Transaction. |
| api/stream.go | Changed return type of prepareBlockHeader to *ethTypes.BlockHeader. |
| api/utils.go | Introduced utility functions for block resolution and error handling. |
| api/wallet.go | Updated SignTransaction and SendTransaction method signatures to use ethTypes.TransactionArgs. |
| bootstrap/bootstrap.go | Added Registers field to Storages struct; modified event ingestion logic. |
| cmd/run/cmd.go | Updated Run function signature for context management; enhanced shutdown handling. |
| config/config.go | Removed several configuration fields related to tracing. |
| eth/types/types.go | Changed package name from api to types; updated TransactionArgs and Transaction structs. |
| models/block.go | Changed FixedHash field type; updated decodeBlockEvent function signature. |
| models/events.go | Added fields and methods for handling event payloads in CadenceEvents. |
| services/ingestion/engine.go | Updated Engine struct to include new fields and improved event processing logic. |
| services/replayer/blocks_provider.go | Introduced BlocksProvider for managing block snapshots. |
| storage/index.go | Removed AccountIndexer interface and related methods. |
| storage/pebble/accounts.go | Removed Accounts struct and methods for managing account operations. |
| storage/register_delta.go | Added RegisterDelta struct for managing register state changes. |
| tests/e2e_web3js_test.go | Added new end-to-end tests for various functionalities. |

Possibly related PRs

Suggested labels

Improvement, Compatibility, Documentation

Suggested reviewers

  • peterargue
  • zhangchiqing
  • m-Peter

Poem

🐇 In the meadow, changes bloom,
With mocks removed, there's more room.
APIs refined, the code's a delight,
Tracing transactions, all feels just right!
So hop along, let’s test and play,
A brighter path in code today! 🌼



@janezpodhostnik janezpodhostnik changed the base branch from main to feature/local-tx-reexecution November 25, 2024 19:35
Contributor

@coderabbitai coderabbitai bot left a comment


Caution

Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments.

🛑 Comments failed to post (44)
tests/web3js/debug_util_test.js (1)

5-15: 💡 Codebase verification

Consider adding error handling test for debug_flowHeightByBlock

The review suggestion is valid. Looking at the codebase:

  1. The debug_util_test.js only tests the happy path for debug_flowHeightByBlock
  2. Other test files in the codebase (like eth_filter_endpoints_test.js) demonstrate proper error handling patterns
  3. The debug_traces_test.js shows comprehensive test coverage with multiple scenarios

Here's a more specific suggestion:

+const LATEST_BLOCK = 'latest'
+const INVALID_BLOCK = -1
+
-it('should retrieve flow height', async () => {
+it('should retrieve flow height for latest block', async () => {
     let response = await helpers.callRPCMethod(
         'debug_flowHeightByBlock',
-        ['latest']
+        [LATEST_BLOCK]
     )
     assert.equal(response.status, 200)
     assert.isDefined(response.body)
 
     let height = response.body.result
     assert.equal(height, conf.startBlockHeight)
 })
+
+it('should handle invalid block number for flow height', async () => {
+    let response = await helpers.callRPCMethod(
+        'debug_flowHeightByBlock',
+        [INVALID_BLOCK]
+    )
+    assert.equal(response.status, 400)
+    assert.isDefined(response.body.error)
+})
🔗 Analysis chain

Consider enhancing test coverage and error handling

While the happy path is covered, the test could be improved in several ways:

  1. Add error handling for the RPC call
  2. Include negative test cases (e.g., invalid block numbers)
  3. Make the test description more specific about what it's testing
  4. Consider extracting 'latest' as a named constant

Here's a suggested enhancement:

+const LATEST_BLOCK = 'latest'
+
-it('should retrieve flow height', async () => {
+it('should retrieve correct flow height for latest block', async () => {
     let response = await helpers.callRPCMethod(
         'debug_flowHeightByBlock',
-        ['latest']
+        [LATEST_BLOCK]
     )
+    
+    // Verify HTTP response
     assert.equal(response.status, 200)
     assert.isDefined(response.body)
 
+    // Verify height matches configuration
     let height = response.body.result
     assert.equal(height, conf.startBlockHeight)
 })
+
+it('should handle invalid block number gracefully', async () => {
+    let response = await helpers.callRPCMethod(
+        'debug_flowHeightByBlock',
+        [-1]  // Invalid block number
+    )
+    assert.equal(response.status, 400)
+})

Let's verify if there are any other test files that might need similar improvements:

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Find other test files that might need similar improvements
fd -e js -e ts test_ tests/ --exec grep -l "debug_flowHeightByBlock"

# Check for error handling patterns in existing tests
rg "should.*handle.*error" tests/

Length of output: 104


Script:

#!/bin/bash
# Let's try a broader search to understand the test patterns in the codebase
# First, find all test files
fd -e js -e ts ".*test.*" tests/

# Then, let's look for any debug-related tests
rg -l "debug_" tests/

# And check for existing error handling patterns in tests
rg "it\(.*error.*" tests/
rg "it\(.*invalid.*" tests/

Length of output: 1786


Script:

#!/bin/bash
# Let's examine the debug-related test files more closely
cat tests/web3js/debug_util_test.js
cat tests/web3js/debug_traces_test.js

# Also check for any existing error handling patterns in these files
rg -A 5 "it\(" tests/web3js/debug_util_test.js
rg -A 5 "it\(" tests/web3js/debug_traces_test.js

Length of output: 19465

services/requester/utils.go (1)

21-31: ⚠️ Potential issue

Add input validation and error handling.

The function assumes valid input and doesn't handle potential errors. Consider:

  1. Validating input script format
  2. Adding error return for malformed scripts
  3. Using bytes.Replace to avoid string conversions
-func replaceAddresses(script []byte, chainID flow.ChainID) []byte {
+func replaceAddresses(script []byte, chainID flow.ChainID) ([]byte, error) {
+	if len(script) == 0 {
+		return nil, fmt.Errorf("empty script")
+	}
+
 	s := string(script)
+	if !strings.Contains(s, "import") {
+		return script, nil  // No modifications needed
+	}
+
 	// iterate over all the import name and address pairs and replace them in script
 	for _, contract := range contracts {
 		s = strings.ReplaceAll(s,
 			fmt.Sprintf("import %s", contract.Name),
 			fmt.Sprintf("import %s from %s", contract.Name, contract.Address.HexWithPrefix()),
 		)
 	}
 
-	return []byte(s)
+	return []byte(s), nil
}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

	if len(script) == 0 {
		return nil, fmt.Errorf("empty script")
	}

	s := string(script)
	if !strings.Contains(s, "import") {
		return script, nil  // No modifications needed
	}

	// iterate over all the import name and address pairs and replace them in script
	for _, contract := range contracts {
		s = strings.ReplaceAll(s,
			fmt.Sprintf("import %s", contract.Name),
			fmt.Sprintf("import %s from %s", contract.Name, contract.Address.HexWithPrefix()),
		)
	}

	return []byte(s), nil
}
storage/mocks/mocks.go (1)

26-26: 🛠️ Refactor suggestion

Consider making timestamp deterministic for testing

Using time.Now().Second() can lead to non-deterministic test behavior. Consider making this value configurable or using a fixed timestamp for consistent test results.

-			Timestamp:           uint64(time.Now().Second()),
+			Timestamp:           uint64(1234567890), // Use a fixed timestamp for deterministic tests
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

			Timestamp:           uint64(1234567890), // Use a fixed timestamp for deterministic tests
tests/web3js/eth_get_storage_at_test.js (1)

39-40: ⚠️ Potential issue

Fix variable declaration

The variable newValue is declared without let or const, which could lead to unexpected behavior.

-    newValue = 100
+    const newValue = 100
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

    const newValue = 100
    updateData = deployed.contract.methods.store(newValue).encodeABI()
tests/web3js/eth_revert_reason_test.js (2)

80-93: 🛠️ Refactor suggestion

Extract duplicated polling logic into a helper function.

This polling logic is identical to the previous implementation. To improve maintainability and reduce code duplication, consider extracting it into a reusable helper function.

Consider adding this helper to the helpers.js file:

// Add to helpers.js
async function waitForReceipt(txHash, maxAttempts = 50) {
    let response = null; // hoisted so it stays in scope for the final return
    let receipt = null;
    let attempts = 0;
    
    while (receipt == null && attempts < maxAttempts) {
        response = await callRPCMethod(
            'eth_getTransactionReceipt',
            [txHash]
        );
        receipt = response.body.result;
        
        if (receipt == null) {
            attempts++;
            await new Promise(resolve => setTimeout(resolve, 100));
        }
    }
    
    if (receipt == null) {
        throw new Error('Timeout waiting for transaction receipt');
    }
    
    return response;
}

Then simplify both polling sections to:

-   let rcp = null
-   while (rcp == null) {
-       rcp = await helpers.callRPCMethod(
-           'eth_getTransactionReceipt',
-           [txHash]
-       )
-       if (rcp.body.result == null) {
-           rcp = null
-       }
-   }
+   let rcp = await helpers.waitForReceipt(txHash);

43-56: 🛠️ Refactor suggestion

Add timeout and delay to the polling mechanism.

While the polling mechanism is a good approach for waiting for transaction receipts, it could be improved by:

  1. Adding a timeout to prevent infinite loops
  2. Including a small delay between attempts to avoid overwhelming the node

Consider refactoring to:

    let txHash = response.body.result
    let rcp = null
+   const maxAttempts = 50;  // 5 seconds total timeout
+   let attempts = 0;
    // wait until the transaction is executed & indexed, and its
    // receipt becomes available.
-   while (rcp == null) {
+   while (rcp == null && attempts < maxAttempts) {
        rcp = await helpers.callRPCMethod(
            'eth_getTransactionReceipt',
            [txHash]
        )
        if (rcp.body.result == null) {
            rcp = null
+           attempts++;
+           await new Promise(resolve => setTimeout(resolve, 100));  // 100ms delay
        }
    }
+   if (rcp == null) {
+       throw new Error('Timeout waiting for transaction receipt');
+   }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

    let txHash = response.body.result
    let rcp = null
    const maxAttempts = 50;  // 5 seconds total timeout
    let attempts = 0;
    // wait until the transaction is executed & indexed, and its
    // receipt becomes available.
    while (rcp == null && attempts < maxAttempts) {
        rcp = await helpers.callRPCMethod(
            'eth_getTransactionReceipt',
            [txHash]
        )
        if (rcp.body.result == null) {
            rcp = null
            attempts++;
            await new Promise(resolve => setTimeout(resolve, 100));  // 100ms delay
        }
    }
    if (rcp == null) {
        throw new Error('Timeout waiting for transaction receipt');
    }
storage/pebble/register_storage_test.go (2)

1-160: 💡 Codebase verification

Test coverage needs improvement for RegisterStorage

The current test coverage has gaps in the implementation:

  • Missing test cases:
    • Error handling for iterator creation and closure in lookupRegister
    • Panic recovery in Get method
    • Edge cases for lookupKey encoding/decoding
    • Error cases for batch operations in Store
    • Concurrent access scenarios (a minimal sketch follows below)
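
As one illustration of the last item, here is a minimal sketch of a concurrent-read test, reusing the runDB helper, NewRegisterStorage constructor, and register fixtures that the existing tests in this file already use; the test body itself is an assumption, not part of the PR. Assertions inside the goroutines use testify's assert (rather than require) so a failure does not call t.FailNow from a non-test goroutine.

runDB("concurrent snapshot reads", t, func(t *testing.T, db *Storage) {
	t.Parallel()

	owner := []byte("0x1")
	key := []byte("0x3")

	r := NewRegisterStorage(db, flow.BytesToAddress(owner))
	s, err := r.GetSnapshotAt(0)
	require.NoError(t, err)

	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// all readers should observe the same (empty) value; running
			// under `go test -race` will flag unsynchronized cache writes
			v, err := s.GetValue(owner, key)
			assert.NoError(t, err)
			assert.Empty(t, v)
		}()
	}
	wg.Wait()
})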
🔗 Analysis chain

Verify test coverage for RegisterStorage implementation.

While the tests cover many scenarios, let's verify the coverage against the actual implementation.


Let me gather more information about the RegisterStorage implementation to properly verify the test coverage.


Let me check the actual implementation in register_storage.go to compare with test coverage.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check test coverage and find any untested methods in RegisterStorage

# Find all methods in RegisterStorage
echo "Finding RegisterStorage methods:"
ast-grep --pattern 'func ($r *RegisterStorage) $method($$$) $$$' 

# Find corresponding test cases
echo -e "\nFinding test cases:"
rg -p "func Test.*Register.*" 

Length of output: 397


Script:

#!/bin/bash
# Find the RegisterStorage implementation
echo "RegisterStorage implementation:"
ast-grep --pattern 'type RegisterStorage struct {
  $$$
}

$$$

func ($r *RegisterStorage) $method($$$) $$$'

# Look for interface that RegisterStorage implements
echo -e "\nRegisterStorage interface:"
ast-grep --pattern 'type $name interface {
  $$$
  Get($$$) $$$
  Store($$$) $$$
  $$$
}'

# Find the actual implementation file
echo -e "\nFile containing RegisterStorage:"
fd -e go register_storage

Length of output: 528


Script:

#!/bin/bash
# Get the implementation details
echo "RegisterStorage implementation details:"
cat storage/pebble/register_storage.go

# Find all test functions and their contents for better coverage analysis
echo -e "\nTest implementations:"
rg -A 10 "^func Test" storage/pebble/register_storage_test.go

Length of output: 8365


93-160: 🛠️ Refactor suggestion

Improve test naming and coverage for snapshot tests.

The snapshot tests effectively verify caching behavior, but there are some areas for improvement:

  1. The test names are duplicated (both using "get register" at lines 101 and 113)
  2. Missing tests for cache invalidation scenarios
  3. Test scenarios would benefit from documentation

Apply these changes to improve the test structure:

 func Test_StorageSnapshot(t *testing.T) {
+    // Test scenarios:
+    // 1. Empty snapshot returns empty value
+    // 2. Cached values reduce storage access
+    // 3. Nil values are properly cached
+
     t.Parallel()
     // ... existing setup ...

-    runDB("get register", t, func(t *testing.T, db *Storage) {
+    runDB("empty snapshot returns empty value", t, func(t *testing.T, db *Storage) {
         // ... existing test ...
     })

-    runDB("get register", t, func(t *testing.T, db *Storage) {
+    runDB("cached values reduce storage access", t, func(t *testing.T, db *Storage) {
         // ... existing test ...
     })

     // Add new test for cache invalidation
+    runDB("cache invalidation on height change", t, func(t *testing.T, db *Storage) {
+        t.Parallel()
+        count := uint64(0)
+        storageGet := func(id flow.RegisterID, height uint64) (flow.RegisterValue, error) {
+            count++
+            return []byte{byte(count)}, nil
+        }
+        s := NewStorageSnapshot(storageGet, 0)
+        
+        // First access at height 0
+        v1, err := s.GetValue(owner, key)
+        require.NoError(t, err)
+        
+        // Access at different height should invalidate cache
+        s = NewStorageSnapshot(storageGet, 1)
+        v2, err := s.GetValue(owner, key)
+        require.NoError(t, err)
+        require.NotEqual(t, v1, v2)
+        require.Equal(t, uint64(2), count)
+    })
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

func Test_StorageSnapshot(t *testing.T) {
    // Test scenarios:
    // 1. Empty snapshot returns empty value
    // 2. Cached values reduce storage access
    // 3. Nil values are properly cached

    t.Parallel()
    owner := []byte("0x1")
    ownerAddress := flow.BytesToAddress(owner)
    key := []byte("0x3")
    value := []byte{0x4}

    runDB("empty snapshot returns empty value", t, func(t *testing.T, db *Storage) {
        t.Parallel()

        r := NewRegisterStorage(db, ownerAddress)
        s, err := r.GetSnapshotAt(0)
        require.NoError(t, err)

        v, err := s.GetValue(owner, key)
        require.NoError(t, err)
        require.Empty(t, v)
    })

    runDB("cached values reduce storage access", t, func(t *testing.T, db *Storage) {
        t.Parallel()

        count := uint64(0)

        storageGet := func(id flow.RegisterID, height uint64) (flow.RegisterValue, error) {
            count++
            return value, nil
        }

        s := NewStorageSnapshot(storageGet, 0)

        v, err := s.GetValue(owner, key)
        require.NoError(t, err)
        require.Equal(t, value, v)

        v, err = s.GetValue(owner, key)
        require.NoError(t, err)
        require.Equal(t, value, v)

        // value should be cached
        require.Equal(t, uint64(1), count)
    })

    runDB("get register - cache nil", t, func(t *testing.T, db *Storage) {
        t.Parallel()

        count := uint64(0)

        storageGet := func(id flow.RegisterID, height uint64) (flow.RegisterValue, error) {
            count++
            return nil, nil
        }

        s := NewStorageSnapshot(storageGet, 0)

        v, err := s.GetValue(owner, key)
        require.NoError(t, err)
        require.Empty(t, v)

        v, err = s.GetValue(owner, key)
        require.NoError(t, err)
        require.Empty(t, v)

        // value should be cached
        require.Equal(t, uint64(1), count)
    })

    runDB("cache invalidation on height change", t, func(t *testing.T, db *Storage) {
        t.Parallel()
        count := uint64(0)
        storageGet := func(id flow.RegisterID, height uint64) (flow.RegisterValue, error) {
            count++
            return []byte{byte(count)}, nil
        }
        s := NewStorageSnapshot(storageGet, 0)
        
        // First access at height 0
        v1, err := s.GetValue(owner, key)
        require.NoError(t, err)
        
        // Access at different height should invalidate cache
        s = NewStorageSnapshot(storageGet, 1)
        v2, err := s.GetValue(owner, key)
        require.NoError(t, err)
        require.NotEqual(t, v1, v2)
        require.Equal(t, uint64(2), count)
    })
}
tests/web3js/verify_cadence_arch_calls_test.js (1)

6-13: 🛠️ Refactor suggestion

Improve test organization and verification

The test setup could be enhanced in several ways:

  1. Use describe blocks to organize related test cases
  2. Make the test title more specific to what's being tested
  3. Add verification for successful contract deployment

Consider refactoring like this:

-it('should be able to use Cadence Arch calls', async () => {
+describe('Cadence Arch Integration Tests', () => {
+  let contractAddress;
+  let deployed;
+
+  beforeEach(async () => {
     let latest = await web3.eth.getBlockNumber()
     let expectedBlockHeight = conf.startBlockHeight
     assert.equal(latest, expectedBlockHeight)
+
-    let deployed = await helpers.deployContract('storage')
-    let contractAddress = deployed.receipt.contractAddress
+    deployed = await helpers.deployContract('storage')
+    contractAddress = deployed.receipt.contractAddress
+    assert.ok(contractAddress, 'Contract should be deployed successfully')
+  })
+
+  it('should verify random source calls', async () => {

Committable suggestion skipped: line range outside the PR's diff.

storage/register_delta_test.go (1)

25-29: 🛠️ Refactor suggestion

Remove unused parameter evmBlockHeight.

The delta helper function accepts an evmBlockHeight parameter that is never used. The snapshot is always created at height 0.

-delta := func(t *testing.T, r *pebbleStorage.RegisterStorage, evmBlockHeight uint64) *storage.RegisterDelta {
+delta := func(t *testing.T, r *pebbleStorage.RegisterStorage) *storage.RegisterDelta {
    ss, err := r.GetSnapshotAt(0)
    require.NoError(t, err)
    return storage.NewRegisterDelta(ss)
}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

	delta := func(t *testing.T, r *pebbleStorage.RegisterStorage) *storage.RegisterDelta {
		ss, err := r.GetSnapshotAt(0)
		require.NoError(t, err)
		return storage.NewRegisterDelta(ss)
	}
storage/pebble/blocks.go (1)

202-222: 🛠️ Refactor suggestion

Enhance error handling and parameter validation

The method has several areas that could be improved:

  1. The sanity check should explicitly handle all possible error cases, not just ErrStorageNotInitialized
  2. The batch parameter should be validated
  3. Error message formatting is inconsistent (some use "failed to" while others use "can't")

Consider applying these improvements:

 func (b *Blocks) InitHeights(cadenceHeight uint64, cadenceID flow.Identifier, batch *pebble.Batch) error {
+    if batch == nil {
+        return fmt.Errorf("batch cannot be nil")
+    }
+
     // sanity check, make sure we don't have any heights stored, disable overwriting the database
     _, err := b.LatestEVMHeight()
-    if !errors.Is(err, errs.ErrStorageNotInitialized) {
-        return fmt.Errorf("can't init the database that already has data stored")
+    if err == nil {
+        return fmt.Errorf("failed to init: database already contains data")
+    } else if !errors.Is(err, errs.ErrStorageNotInitialized) {
+        return fmt.Errorf("failed to check database state: %w", err)
     }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

func (b *Blocks) InitHeights(cadenceHeight uint64, cadenceID flow.Identifier, batch *pebble.Batch) error {
    if batch == nil {
        return fmt.Errorf("batch cannot be nil")
    }

    // sanity check, make sure we don't have any heights stored, disable overwriting the database
    _, err := b.LatestEVMHeight()
    if err == nil {
        return fmt.Errorf("failed to init: database already contains data")
    } else if !errors.Is(err, errs.ErrStorageNotInitialized) {
        return fmt.Errorf("failed to check database state: %w", err)
    }

    if err := b.store.set(latestCadenceHeightKey, nil, uint64Bytes(cadenceHeight), batch); err != nil {
        return fmt.Errorf("failed to init latest Cadence height at: %d, with: %w", cadenceHeight, err)
    }

    if err := b.store.set(latestEVMHeightKey, nil, uint64Bytes(0), batch); err != nil {
        return fmt.Errorf("failed to init latest EVM height at: %d, with: %w", 0, err)
    }

    // we store genesis block because it isn't emitted over the network
    genesisBlock := models.GenesisBlock(b.chainID)
    if err := b.Store(cadenceHeight, cadenceID, genesisBlock, batch); err != nil {
        return fmt.Errorf("failed to store genesis block at Cadence height: %d, with: %w", cadenceHeight, err)
    }
cmd/run/cmd.go (1)

60-68: ⚠️ Potential issue

Remove duplicate shutdown message.

There's a duplicate shutdown message after the select statement that's redundant since shutdown messages are already logged within the select cases.

Apply this diff to remove the duplicate message:

 		select {
 		case <-osSig:
 			log.Info().Msg("OS Signal to shutdown received, shutting down")
 			cancel()
 		case <-done:
 			log.Info().Msg("done, shutting down")
 			cancel()
 		}
-
-		log.Info().Msg("OS Signal to shutdown received, shutting down")
-		cancel()

Committable suggestion skipped: line range outside the PR's diff.

tests/go.mod (1)

223-230: 💡 Codebase verification

Update golang.org/x packages to address security vulnerabilities

Based on the verification results, I recommend the following updates:

  • Update golang.org/x/crypto from v0.28.0 to the latest stable version to address the Terrapin attack (CVE-2023-48795)
  • The other golang.org/x packages (sys, term, text) are indirect dependencies and their current versions are not affected by known vulnerabilities

The project uses Go 1.22 which is compatible with the latest versions of these packages.

🔗 Analysis chain

Security: Verify compatibility with latest golang.org/x updates

The updates to golang.org/x packages include security-related modules:

  • crypto: v0.28.0
  • sys: v0.26.0
  • term: v0.25.0
  • text: v0.19.0

Please ensure these versions are compatible with all dependencies and consider updating to the latest stable versions if available.


Let me generate additional verification scripts to check the dependencies and compatibility.

The previous results showed security advisories for crypto package and we need to verify the actual usage and dependencies before making a final recommendation.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for newer versions of golang.org/x packages and any security advisories

# Check for latest versions of golang.org/x packages
for pkg in "crypto" "sys" "term" "text"; do
  echo "Checking golang.org/x/$pkg..."
  gh api graphql -f query='{
    repository(owner: "golang", name: "'$pkg'") {
      releases(last: 1) {
        nodes {
          tagName
          createdAt
        }
      }
    }
  }'
done

# Check for any security advisories
gh api graphql -f query='
{
  securityVulnerabilities(ecosystem: GO, first: 10, package: "golang.org/x/crypto") {
    nodes {
      advisory {
        summary
        severity
        publishedAt
      }
      vulnerableVersionRange
      firstPatchedVersion {
        identifier
      }
    }
  }
}'

Length of output: 4290


Script:

#!/bin/bash
# Check go.mod files for dependencies that might be affected by golang.org/x updates
echo "Checking main go.mod file..."
cat go.mod

echo -e "\nChecking all go.mod files for dependencies on golang.org/x packages..."
fd -e mod -x grep -l "golang.org/x/" {}

echo -e "\nChecking for direct usage of crypto package..."
rg -l "golang.org/x/crypto"

echo -e "\nChecking Go version..."
grep "^go " go.mod

Length of output: 11582

services/replayer/blocks_provider.go (2)

45-49: ⚠️ Potential issue

Ensure thread-safe access to latestBlock

The latestBlock field in BlocksProvider is accessed and modified without synchronization in OnBlockReceived() and GetSnapshotAt(). If BlocksProvider is used concurrently by multiple goroutines, this could result in race conditions.

Consider adding a mutex to safeguard access to latestBlock.

Proposed changes:

+import "sync"

 type BlocksProvider struct {
     blocks      storage.BlockIndexer
     chainID     flowGo.ChainID
     tracer      *tracers.Tracer
     latestBlock *models.Block
+    mu          sync.RWMutex
 }

 func (bp *BlocksProvider) OnBlockReceived(block *models.Block) error {
+    bp.mu.Lock()
+    defer bp.mu.Unlock()
     if bp.latestBlock != nil && bp.latestBlock.Height != (block.Height-1) {
         return fmt.Errorf(
             "%w: received new block: %d, non-sequential of latest block: %d",
             models.ErrInvalidHeight,
             block.Height,
             bp.latestBlock.Height,
         )
     }
     bp.latestBlock = block
     return nil
 }

 func (bp *BlocksProvider) GetSnapshotAt(height uint64) (
     evmTypes.BlockSnapshot,
     error,
 ) {
+    bp.mu.RLock()
+    defer bp.mu.RUnlock()
     if bp.latestBlock != nil && bp.latestBlock.Height == height {
         return &blockSnapshot{
             BlocksProvider: bp,
             block:          *bp.latestBlock,
         }, nil
     }
     // Proceed to fetch block from indexer if not found in latestBlock
     // Remaining code...
 }

Also applies to: 65-78


28-35: 🛠️ Refactor suggestion

Handle errors explicitly when retrieving block hash

In the anonymous function within BlockContext(), errors from bs.blocks.GetByHeight(n) and block.Hash() are silently ignored, returning an empty hash. This could lead to unexpected behavior if the hash is essential for downstream processes. Consider handling the errors appropriately, either by logging them or by modifying the function signature to return an error.

Proposed change:

func(n uint64) gethCommon.Hash {
    block, err := bs.blocks.GetByHeight(n)
    if err != nil {
+       // Handle or log the error
+       fmt.Printf("Error retrieving block at height %d: %v\n", n, err)
        return gethCommon.Hash{}
    }
    blockHash, err := block.Hash()
    if err != nil {
+       // Handle or log the error
+       fmt.Printf("Error computing hash for block at height %d: %v\n", n, err)
        return gethCommon.Hash{}
    }
    return blockHash
},

Committable suggestion skipped: line range outside the PR's diff.

services/requester/remote_cadence_arch.go (1)

33-33: ⚠️ Potential issue

Add synchronization to protect concurrent access to cachedCalls

The cachedCalls map in the RemoteCadenceArch struct is accessed and modified in the Run and runCall methods without any synchronization mechanism. If instances of RemoteCadenceArch are used concurrently across multiple goroutines, this could lead to data races and unpredictable behavior.

To fix this issue, consider adding a sync.RWMutex to synchronize access to the cachedCalls map. Here's how you can modify the code:

+import "sync"

 type RemoteCadenceArch struct {
     blockHeight uint64
     client      *CrossSporkClient
     chainID     flow.ChainID
     cachedCalls map[string]evmTypes.Data
+    mutex       sync.RWMutex
 }

 func (rca *RemoteCadenceArch) Run(input []byte) ([]byte, error) {
     key := hex.EncodeToString(crypto.Keccak256(input))

+    rca.mutex.RLock()
     if result, ok := rca.cachedCalls[key]; ok {
+        rca.mutex.RUnlock()
         return result, nil
     }
+    rca.mutex.RUnlock()

     evmResult, err := rca.runCall(input)
     if err != nil {
         return nil, err
     }
     return evmResult.ReturnedData, nil
 }

 func (rca *RemoteCadenceArch) runCall(input []byte) (*evmTypes.ResultSummary, error) {
     // ... existing code ...
     key := hex.EncodeToString(crypto.Keccak256(input))
+    rca.mutex.Lock()
     rca.cachedCalls[key] = evmResult.ReturnedData
+    rca.mutex.Unlock()

     return evmResult, nil
 }

Also applies to: 67-69, 119-119

storage/pebble/storage_test.go (3)

43-50: ⚠️ Potential issue

Potential Resource Leak: Missing batch.Close() Call

In the get stored block test, the batch created is not being closed. Failing to close the batch may lead to resource leaks. Ensure that batch.Close() is called after committing the batch.

Apply this diff to fix the issue:

 runDB("get stored block", t, func(t *testing.T, db *Storage) {
 	const height = uint64(12)
 	cadenceID := flow.Identifier{0x1}
 	cadenceHeight := uint64(20)
 	bl := mocks.NewBlock(height)

 	blocks := NewBlocks(db, flowGo.Emulator)
 	batch := db.NewBatch()
+	defer func() {
+		require.NoError(t, batch.Close())
+	}()
 	err := blocks.InitHeights(config.EmulatorInitCadenceHeight, flow.Identifier{0x1}, batch)
 	require.NoError(t, err)

 	err = blocks.Store(cadenceHeight, cadenceID, bl, batch)
 	require.NoError(t, err)

 	err = batch.Commit(pebble.Sync)
 	require.NoError(t, err)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

		batch := db.NewBatch()
		defer func() {
			require.NoError(t, batch.Close())
		}()
		err := blocks.InitHeights(config.EmulatorInitCadenceHeight, flow.Identifier{0x1}, batch)
		require.NoError(t, err)

		err = blocks.Store(cadenceHeight, cadenceID, bl, batch)
		require.NoError(t, err)

		err = batch.Commit(pebble.Sync)

24-32: ⚠️ Potential issue

Potential Resource Leak: Missing batch.Close() Call

In the store block test, the batch created is not being closed. Failing to close the batch may lead to resource leaks. It's important to ensure that batch.Close() is called after committing the batch.

Apply this diff to fix the issue:

 func TestBlock(t *testing.T) {

	runDB("store block", t, func(t *testing.T, db *Storage) {
		bl := mocks.NewBlock(10)
		blocks := NewBlocks(db, flowGo.Emulator)
		batch := db.NewBatch()
+		defer func() {
+			require.NoError(t, batch.Close())
+		}()
		err := blocks.InitHeights(config.EmulatorInitCadenceHeight, flow.Identifier{0x1}, batch)
		require.NoError(t, err)

		err = blocks.Store(20, flow.Identifier{0x1}, bl, batch)
		require.NoError(t, err)

		err = batch.Commit(pebble.Sync)
		require.NoError(t, err)
	})
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

		batch := db.NewBatch()
		defer func() {
			require.NoError(t, batch.Close())
		}()

		err := blocks.InitHeights(config.EmulatorInitCadenceHeight, flow.Identifier{0x1}, batch)
		require.NoError(t, err)

		err = blocks.Store(20, flow.Identifier{0x1}, bl, batch)
		require.NoError(t, err)

		err = batch.Commit(pebble.Sync)

76-82: ⚠️ Potential issue

Potential Resource Leak: Missing batch.Close() Call

In the get not found block error test, the batch created is not being closed. Failing to close the batch may lead to resource leaks. Please ensure you call batch.Close() after committing the batch.

Apply this diff to fix the issue:

 runDB("get not found block error", t, func(t *testing.T, db *Storage) {
 	blocks := NewBlocks(db, flowGo.Emulator)

 	batch := db.NewBatch()
+	defer func() {
+		require.NoError(t, batch.Close())
+	}()
 	err := blocks.InitHeights(config.EmulatorInitCadenceHeight, flow.Identifier{0x1}, batch)
 	require.NoError(t, err)
 	err = blocks.Store(2, flow.Identifier{0x1}, mocks.NewBlock(1), batch) // init
 	require.NoError(t, err)

 	err = batch.Commit(pebble.Sync)
 	require.NoError(t, err)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

		batch := db.NewBatch()
		defer func() {
			require.NoError(t, batch.Close())
		}()
		err := blocks.InitHeights(config.EmulatorInitCadenceHeight, flow.Identifier{0x1}, batch)
		require.NoError(t, err)
		err = blocks.Store(2, flow.Identifier{0x1}, mocks.NewBlock(1), batch) // init
		require.NoError(t, err)

		err = batch.Commit(pebble.Sync)
services/evm/executor.go (1)

97-100: 🛠️ Refactor suggestion

Ensure thread safety of BlockExecutor's mutable state

The BlockExecutor modifies s.gasUsed and s.txIndex during execution. If instances of BlockExecutor are accessed by multiple goroutines, this could lead to data races. Consider adding synchronization mechanisms or documenting that BlockExecutor is not safe for concurrent use.
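
If concurrent use ever becomes a requirement, a minimal sketch of the guarded-state option; the field names gasUsed and txIndex come from this comment, while the struct layout and helper are hypothetical:

import "sync"

type BlockExecutor struct {
	mu      sync.Mutex // guards the two mutable fields below
	gasUsed uint64
	txIndex uint
	// ... remaining fields elided
}

// recordExecution is a hypothetical helper showing how the mutable state
// could be updated atomically between transaction executions.
func (s *BlockExecutor) recordExecution(gasConsumed uint64) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.gasUsed += gasConsumed
	s.txIndex++
}

The alternative, documenting that BlockExecutor is not safe for concurrent use, avoids lock overhead on the hot path.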

api/utils.go (1)

95-97: ⚠️ Potential issue

Avoid including user input in error messages

Including the invalid hex string in the error message may expose sensitive information. It's safer to provide a generic error message to prevent potential information leakage.

Apply this diff to modify the error message:

 if err != nil {
-	return common.Hash{}, fmt.Errorf("invalid hex string: %s", s)
+	return common.Hash{}, fmt.Errorf("invalid hex string")
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

	if err != nil {
		return common.Hash{}, fmt.Errorf("invalid hex string")
	}
services/replayer/call_tracer_collector.go (1)

76-82: ⚠️ Potential issue

Potential data race on resultsByTxID due to concurrent map access

The resultsByTxID map is accessed concurrently in Collect and during tracing callbacks without synchronization, which can lead to data races or runtime panics. In Go, maps are not safe for concurrent access across multiple goroutines.

Apply this diff to synchronize access to resultsByTxID using a mutex:

+import "sync"

 type CallTracerCollector struct {
     tracer        *tracers.Tracer
     resultsByTxID map[common.Hash]json.RawMessage
     logger        zerolog.Logger
+    mu            sync.Mutex
 }

 func (ct *CallTracerCollector) Collect(txID common.Hash) (json.RawMessage, error) {
+    ct.mu.Lock()
     result, found := ct.resultsByTxID[txID]
     if !found {
+        ct.mu.Unlock()
         return nil, fmt.Errorf("trace result for tx %s not found", txID.String())
     }
     delete(ct.resultsByTxID, txID)
+    ct.mu.Unlock()
     return result, nil
 }

 ...

 func (ct *CallTracerCollector) OnTxEnd(receipt *types.Receipt, err error) {
     ...
+    ct.mu.Lock()
     ct.resultsByTxID[receipt.TxHash] = res
+    ct.mu.Unlock()
     ...
 }

Also applies to: 134-134

models/events.go (1)

236-243: 🛠️ Refactor suggestion

Avoid modifying input events by sorting in-place

In NewSingleBlockEvents, the code sorts events.Events in place, modifying the input flow.BlockEvents. This could lead to unintended side effects if the caller uses events elsewhere. Consider creating a copy of the events.Events slice before sorting to avoid modifying the original input.

Apply this diff to create a copy of the events before sorting:

 func NewSingleBlockEvents(events flow.BlockEvents) BlockEvents {
+	eventsCopy := events
+	eventsCopy.Events = make([]flow.Event, len(events.Events))
+	copy(eventsCopy.Events, events.Events)

 	// first we sort all the events in the block, by their TransactionIndex,
 	// and then we also sort events in the same transaction, by their EventIndex.
-	sort.Slice(events.Events, func(i, j int) bool {
-		if events.Events[i].TransactionIndex != events.Events[j].TransactionIndex {
-			return events.Events[i].TransactionIndex < events.Events[j].TransactionIndex
-		}
-		return events.Events[i].EventIndex < events.Events[j].EventIndex
-	})
+	sort.Slice(eventsCopy.Events, func(i, j int) bool {
+		if eventsCopy.Events[i].TransactionIndex != eventsCopy.Events[j].TransactionIndex {
+			return eventsCopy.Events[i].TransactionIndex < eventsCopy.Events[j].TransactionIndex
+		}
+		return eventsCopy.Events[i].EventIndex < eventsCopy.Events[j].EventIndex
+	})

-	cdcEvents, err := NewCadenceEvents(events)
+	cdcEvents, err := NewCadenceEvents(eventsCopy)
 	return BlockEvents{
 		Events: cdcEvents,
 		Err:    err,
 	}
 }

Also applies to: 245-249

storage/pebble/register_storage.go (1)

213-229: ⚠️ Potential issue

Use a pointer receiver for StorageSnapshot.GetValue to ensure cache updates

The GetValue method modifies the cache map, but since StorageSnapshot has a value receiver, the cache updates won't persist beyond the method scope. Changing the receiver to a pointer ensures the cache behaves as intended.

Apply this diff to fix the receiver:

-func (s StorageSnapshot) GetValue(owner []byte, key []byte) ([]byte, error) {
+func (s *StorageSnapshot) GetValue(owner []byte, key []byte) ([]byte, error) {

Ensure that all calls to methods of StorageSnapshot use the pointer receiver as well.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

func (s *StorageSnapshot) GetValue(owner []byte, key []byte) ([]byte, error) {
	id := flow.CadenceRegisterID(owner, key)
	value, ok := s.cache[id]
	if ok {
		return value, nil
	}

	// get from index
	val, err := s.storageGet(id, s.evmBlockHeight)
	if err != nil {
		return nil, err
	}

	// non-existing key will also be cached with `nil` value.
	s.cache[id] = val
	return val, nil
}
services/ingestion/engine.go (2)

152-157: ⚠️ Potential issue

Avoid using log.Fatal() in a deferred function

Using e.log.Fatal() within a deferred function will terminate the application abruptly if an error occurs during batch closure. This can prevent graceful shutdowns and proper error handling. Instead, consider logging the error and allowing the application to continue or handle the error appropriately.

Apply this diff to replace e.log.Fatal() with e.log.Error():

 defer func(batch *pebbleDB.Batch) {
     err := batch.Close()
     if err != nil {
-        e.log.Fatal().Err(err).Msg("failed to close batch")
+        e.log.Error().Err(err).Msg("failed to close batch")
     }
 }(batch)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

	defer func(batch *pebbleDB.Batch) {
		err := batch.Close()
		if err != nil {
			e.log.Error().Err(err).Msg("failed to close batch")
		}
	}(batch)

260-262: 🛠️ Refactor suggestion

Include the original error when returning

The error message returned does not include the original error err, which can make debugging more difficult. Wrapping the original error provides additional context for understanding the failure.

Apply this diff to include the original error:

 err = e.registerStore.Store(registerEntriesFromKeyValue(res.StorageRegisterUpdates()), blockEvents.Height, batch)
 if err != nil {
-    return fmt.Errorf("failed to store state changes on block: %d", events.Block().Height)
+    return fmt.Errorf("failed to store state changes on block %d: %w", events.Block().Height, err)
 }

Committable suggestion skipped: line range outside the PR's diff.

api/debug.go (2)

84-106: ⚠️ Potential issue

Avoid potential nil pointer dereference in isDefaultCallTracer

In TraceTransaction, the function isDefaultCallTracer(config) is called without checking if config or config.Tracer is nil. This could lead to a nil pointer dereference if config or config.Tracer is nil.

Apply this diff to add necessary nil checks:

func isDefaultCallTracer(config *tracers.TraceConfig) bool {
	if config == nil {
		return false
	}

+	if config.Tracer == nil {
+		return false
+	}

	if *config.Tracer != replayer.TracerName {
		return false
	}

	tracerConfig := json.RawMessage(replayer.TracerConfig)
	return slices.Equal(config.TracerConfig, tracerConfig)
}

Committable suggestion skipped: line range outside the PR's diff.


253-362: ⚠️ Potential issue

Handle possible nil config in TraceCall

In the TraceCall method, ensure that config.TraceConfig is properly initialized to avoid nil pointer dereferences when accessing it later.

Apply this diff to initialize config.TraceConfig if it's nil:

 if config == nil {
 	config = &tracers.TraceCallConfig{}
 }

+if config.TraceConfig == nil {
+	config.TraceConfig = &tracers.TraceConfig{}
+}

 tracer, err := tracerForReceipt(&config.TraceConfig, nil)
 if err != nil {
 	return nil, err
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

func (d *DebugAPI) TraceCall(
	_ context.Context,
	args ethTypes.TransactionArgs,
	blockNrOrHash rpc.BlockNumberOrHash,
	config *tracers.TraceCallConfig,
) (interface{}, error) {
	tx, err := encodeTxFromArgs(args)
	if err != nil {
		return nil, err
	}

	// Default address in case user does not provide one
	from := d.config.Coinbase
	if args.From != nil {
		from = *args.From
	}

	if config == nil {
		config = &tracers.TraceCallConfig{}
	}

	if config.TraceConfig == nil {
		config.TraceConfig = &tracers.TraceConfig{}
	}

	tracer, err := tracerForReceipt(&config.TraceConfig, nil)
	if err != nil {
		return nil, err
	}

	height, err := resolveBlockTag(&blockNrOrHash, d.blocks, d.logger)
	if err != nil {
		return nil, err
	}

	cdcHeight, err := d.blocks.GetCadenceHeight(height)
	if err != nil {
		return nil, err
	}

	block, err := d.blocks.GetByHeight(height)
	if err != nil {
		return nil, err
	}

	blocksProvider := replayer.NewBlocksProvider(
		d.blocks,
		d.config.FlowNetworkID,
		tracer,
	)
	viewProvider := query.NewViewProvider(
		d.config.FlowNetworkID,
		flowEVM.StorageAccountAddress(d.config.FlowNetworkID),
		d.registerStore,
		blocksProvider,
		BlockGasLimit,
	)

	view, err := viewProvider.GetBlockView(block.Height)
	if err != nil {
		return nil, err
	}

	to := gethCommon.Address{}
	if tx.To != nil {
		to = *tx.To
	}
	rca := requester.NewRemoteCadenceArch(cdcHeight, d.client, d.config.FlowNetworkID)

	opts := []query.DryCallOption{}
	opts = append(opts, query.WithTracer(tracer))
	opts = append(opts, query.WithExtraPrecompiledContracts([]types.PrecompiledContract{rca}))
	if config.StateOverrides != nil {
		for addr, account := range *config.StateOverrides {
			// Override account nonce.
			if account.Nonce != nil {
				opts = append(opts, query.WithStateOverrideNonce(addr, uint64(*account.Nonce)))
			}
			// Override account(contract) code.
			if account.Code != nil {
				opts = append(opts, query.WithStateOverrideCode(addr, *account.Code))
			}
			// Override account balance.
			if account.Balance != nil {
				opts = append(opts, query.WithStateOverrideBalance(addr, (*big.Int)(*account.Balance)))
			}
			if account.State != nil && account.StateDiff != nil {
				return nil, fmt.Errorf("account %s has both 'state' and 'stateDiff'", addr.Hex())
			}
			// Replace entire state if caller requires.
			if account.State != nil {
				opts = append(opts, query.WithStateOverrideState(addr, *account.State))
			}
			// Apply state diff into specified accounts.
			if account.StateDiff != nil {
				opts = append(opts, query.WithStateOverrideStateDiff(addr, *account.StateDiff))
			}
		}
	}
	_, err = view.DryCall(
		from,
		to,
		tx.Data,
		tx.Value,
		tx.Gas,
		opts...,
	)

	if err != nil {
		return nil, err
	}

	return tracer.GetResult()
}
bootstrap/bootstrap.go (2)

498-498: ⚠️ Potential issue

Fix typo in variable name 'evmBlokcHeight'

The variable evmBlokcHeight appears to have a typo. It should be evmBlockHeight for clarity and consistency.

Apply this diff to correct the variable name:

-	evmBlokcHeight := uint64(0)
+	evmBlockHeight := uint64(0)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

		evmBlockHeight := uint64(0)

504-507: ⚠️ Potential issue

Update error message to use the correct block height variable

In the error message, the block height is hardcoded as 0 instead of using evmBlockHeight. This may lead to confusion during debugging.

Apply this diff to use the correct variable in the error message:

-	return nil, fmt.Errorf("could not get register snapshot at block height %d: %w", 0, err)
+	return nil, fmt.Errorf("could not get register snapshot at block height %d: %w", evmBlockHeight, err)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

		snapshot, err := registerStore.GetSnapshotAt(evmBlockHeight)
		if err != nil {
			return nil, fmt.Errorf("could not get register snapshot at block height %d: %w", evmBlockHeight, err)
		}
services/ingestion/event_subscriber.go (2)

450-472: 🛠️ Refactor suggestion

Potential unbounded memory growth in recovery mode

In the accumulateEventsMissingBlock function, the r.recoveredEvents slice can grow indefinitely if the missing block event is never found, which may lead to excessive memory usage. Consider adding a limit to the number of events accumulated or implementing a timeout to prevent potential memory exhaustion.

You can apply the following change to add a limit to the number of recovery attempts:

 func (r *RPCEventSubscriber) accumulateEventsMissingBlock(events flow.BlockEvents) models.BlockEvents {
     txEvents := events.Events
+    const maxRecoveryEvents = 10000  // Maximum number of events to accumulate
     // Sort `EVM.TransactionExecuted` events
     sort.Slice(txEvents, func(i, j int) bool {
         if txEvents[i].TransactionIndex != txEvents[j].TransactionIndex {
             return txEvents[i].TransactionIndex < txEvents[j].TransactionIndex
         }
         return txEvents[i].EventIndex < txEvents[j].EventIndex
     })
+    if len(r.recoveredEvents)+len(txEvents) > maxRecoveryEvents {
+        err := fmt.Errorf("reached maximum recovery events limit (%d)", maxRecoveryEvents)
+        return models.NewBlockEventsError(err)
+    }
     r.recoveredEvents = append(r.recoveredEvents, txEvents...)
     events.Events = r.recoveredEvents
     // ...

Committable suggestion skipped: line range outside the PR's diff.


238-243: ⚠️ Potential issue

Check for nil EVMContract to prevent nil pointer dereference

When retrieving the EVMContract address, there's a potential risk of a nil pointer dereference if EVMContract is nil. Ensure that EVMContract is not nil before accessing its Address.

Apply this diff to add a nil check:

 evmContracts := systemcontracts.SystemContractsForChain(r.chain)
+if evmContracts.EVMContract == nil {
+    return 0, fmt.Errorf("EVMContract is nil for chain ID %s", r.chain)
+}
 evmAddress := common.Address(evmContracts.EVMContract.Address)

Committable suggestion skipped: line range outside the PR's diff.

models/events_test.go (3)

345-345: ⚠️ Potential issue

Use NewMultiBlockEvents instead of NewSingleBlockEvents

In TestNewMultiBlockEvents, the test case at line 345 incorrectly calls NewSingleBlockEvents instead of NewMultiBlockEvents. This causes the test to use the wrong event processing function.

Apply this diff to correct the function call:

-			evmEvents := NewSingleBlockEvents(blockEvents)
+			evmEvents := NewMultiBlockEvents(blockEvents)
📝 Committable suggestion


		evmEvents := NewMultiBlockEvents(blockEvents)

85-85: ⚠️ Potential issue

Make the flow.Identifier construction explicit

At line 85, flow.Identifier is initialized with {0x1}. Since flow.Identifier is a fixed-size 32-byte array, the literal is legal, but it sets only the first byte (yielding 0x0100…00), which is easy to misread as the identifier for the value 1 (0x00…01). Constructing the identifier explicitly removes the ambiguity and keeps the test deterministic.

Apply this diff to build the identifier with flow.HexToID:

-    		BlockID: flow.Identifier{0x1},
+    		BlockID: flow.HexToID("0000000000000000000000000000000000000000000000000000000000000001"),
📝 Committable suggestion


			BlockID: flow.HexToID("0000000000000000000000000000000000000000000000000000000000000001"),

218-225: 🛠️ Refactor suggestion

Handle potential nil pointer dereference for block

In the test case starting at line 218, there's a possibility that block could be nil if newBlock returns an error. Even though require.NoError(t, err) is called, it's safer to ensure block is not nil before using it.

Apply this diff to add a check:

 			require.NoError(t, err)
+			require.NotNil(t, block)
📝 Committable suggestion


		block, blockEvent, err := newBlock(1, hashes)
		require.NoError(t, err)
		require.NotNil(t, block)
		blockEvent.TransactionIndex = 4
		blockEvent.EventIndex = 0
		blockEvents.Events = append(blockEvents.Events, blockEvent)

		// parse the EventStreaming API response
		evmEvents := NewSingleBlockEvents(blockEvents)
tests/web3js/debug_traces_test.js (1)

28-31: ⚠️ Potential issue

Declare the variable response before use

The variable response is used without prior declaration in multiple places. This can lead to unintended global variable creation, which may cause bugs and hinder maintainability. In JavaScript, variables should be declared using let, const, or var before they are used.

Apply this diff to declare response appropriately:

+        let response = await helpers.callRPCMethod(
         'debug_traceTransaction',
         [receipt.transactionHash, callTracer]
     )

Repeat this change for all instances where response is first assigned without declaration to ensure proper variable scoping.

Also applies to: 47-50, 86-88, 137-139, 190-192, 247-249, 303-305, 356-358, 383-385, 456-458, 489-491, 503-505, 533-535

api/api.go (3)

794-794: 🛠️ Refactor suggestion

⚠️ Potential issue

Handle errors appropriately in EstimateGas

In the EstimateGas method, when there's an error encoding the transaction (encodeTxFromArgs), the function returns the BlockGasLimit without reporting the error. This could mask underlying issues and provide misleading information to the caller. Consider returning an error to inform the client about the encoding failure.
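A minimal sketch of the suggested change, assuming the method returns (hexutil.Uint64, error) and that the current fallback looks roughly like the line removed below (the exact code may differ):

 	tx, err := encodeTxFromArgs(args)
 	if err != nil {
-		return hexutil.Uint64(BlockGasLimit), nil
+		// Surface the encoding failure instead of silently
+		// reporting the block gas limit as the estimate.
+		return 0, fmt.Errorf("failed to encode transaction for gas estimate: %w", err)
 	}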


922-922: 🛠️ Refactor suggestion

Use dynamic block gas limit instead of static BlockGasLimit

When calculating gasUsedRatio, using the static BlockGasLimit constant may result in inaccurate ratios if the actual block gas limit differs. It's better to use the actual gas limit of the block (e.g., block.GasLimit) to ensure accurate calculations.
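A sketch of the change; the field names are assumptions about the block model:

-	gasUsedRatio := float64(block.TotalGasUsed) / float64(BlockGasLimit)
+	gasUsedRatio := float64(block.TotalGasUsed) / float64(block.GasLimit)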


1031-1031: 🛠️ Refactor suggestion

Set block's actual gas limit in prepareBlockResponse

The GasLimit field in the block response is set to the static BlockGasLimit constant. To accurately represent the block data, use the actual gas limit from the block (block.GasLimit) instead of a constant value.
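A sketch along the same lines, again assuming the block model exposes its own gas limit:

-	GasLimit: hexutil.Uint64(BlockGasLimit),
+	GasLimit: hexutil.Uint64(block.GasLimit),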

storage/register_delta.go (2)

80-129: 🛠️ Refactor suggestion

Consider making allocateSlabIndex concurrency safe

The allocateSlabIndex function manipulates shared state (e.g., account status and slab indices) without synchronization. If this function is called concurrently, race conditions may occur, leading to incorrect slab index allocations or data corruption. Consider implementing synchronization mechanisms, such as mutexes, to ensure thread-safe operations.
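One possible shape, sketched under the assumption that RegisterDelta is the receiver and owns the shared state (the atree.SlabIndex return type is copied from the surrounding code); confining each delta to a single goroutine would be an equally valid alternative:

type RegisterDelta struct {
	mu sync.Mutex // guards account status and slab indices
	// ... existing fields
}

func (r *RegisterDelta) allocateSlabIndex(owner []byte) (atree.SlabIndex, error) {
	// Serialize allocation so that concurrent callers can never
	// be handed the same slab index.
	r.mu.Lock()
	defer r.mu.Unlock()
	// ... existing allocation logic
}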


57-63: ⚠️ Potential issue

Correct ValueExists to accurately detect existing entries with empty values

The ValueExists method checks if len(value) > 0, which will return false if a key exists but its value is an empty byte slice ([]byte{}). This could lead to incorrect assumptions about the presence of keys with empty values. Consider modifying the logic to differentiate between a missing key and a key with an empty value by checking for a nil error from GetValue.

Apply this diff to adjust the method:

 func (r *RegisterDelta) ValueExists(owner []byte, key []byte) (bool, error) {
-	value, err := r.GetValue(owner, key)
+	_, err := r.GetValue(owner, key)
 	if err != nil {
+		if errors.Is(err, storage.ErrValueNotFound) {
+			return false, nil
+		}
 		return false, err
 	}
-	return len(value) > 0, nil
+	return true, nil
 }

This also requires the errors package to be imported.

Ensure that storage.ErrValueNotFound correctly represents the error returned when a key does not exist.

Committable suggestion skipped: line range outside the PR's diff.
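If no such sentinel exists yet, a minimal definition could be the following (an assumption — reuse an existing not-found error if the storage package already defines one):

// in package storage
var ErrValueNotFound = errors.New("value not found")

GetValue would then need to return it, wrapped or unwrapped, whenever a key is absent.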

services/requester/requester.go (3)

59-69: ⚠️ Potential issue

Reintroduce context.Context parameters to interface methods

The removal of context.Context from the methods GetBalance, GetNonce, GetCode, Call, and EstimateGas might limit the ability to handle cancellations, timeouts, and propagate deadlines, which are essential for network operations and long-running computations. It is a best practice in Go to include context.Context as the first parameter in such methods to enable better control over their execution.

Apply the following diff to reintroduce context.Context parameters:

 func GetBalance(
+    ctx context.Context,
     address common.Address,
     height uint64,
 ) (*big.Int, error)

 func GetNonce(
+    ctx context.Context,
     address common.Address,
     height uint64,
 ) (uint64, error)

 func GetCode(
+    ctx context.Context,
     address common.Address,
     height uint64,
 ) ([]byte, error)

 func Call(
+    ctx context.Context,
     tx *types.LegacyTx,
     from common.Address,
     height uint64,
     stateOverrides *ethTypes.StateOverride,
 ) ([]byte, error)

 func EstimateGas(
+    ctx context.Context,
     tx *types.LegacyTx,
     from common.Address,
     height uint64,
     stateOverrides *ethTypes.StateOverride,
 ) (uint64, error)

Also applies to: 74-79, 82-86
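Once the parameters are restored, callers can propagate request-scoped cancellation. A sketch, where evm stands for whatever implements this interface and address/height come from the surrounding handler:

ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

balance, err := evm.GetBalance(ctx, address, height)
if err != nil {
	// a cancelled or timed-out context surfaces here
	return err
}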


189-189: ⚠️ Potential issue

Ensure replaceAddresses function is defined and accessible

The function replaceAddresses is used at several points (lines 189, 234, and 414), but its definition is not present in this file. If it was removed rather than moved elsewhere in the package, these call sites will fail to compile. Ensure that replaceAddresses is still defined in the package or imported from the correct location.

If replaceAddresses is no longer needed, consider removing its usage or replacing it with the appropriate logic.

Also applies to: 234-234, 414-414
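If the helper does need to be reintroduced, one plausible shape is sketched below — this is an assumption about its behavior (rewriting Cadence address placeholders), not the authoritative implementation; restore the original if it was dropped by accident:

// replaceAddresses substitutes placeholder contract addresses in a
// Cadence script with chain-specific ones. Illustrative only.
func replaceAddresses(script []byte, contracts map[string]flow.Address) []byte {
	s := string(script)
	for placeholder, addr := range contracts {
		s = strings.ReplaceAll(s, placeholder, addr.HexWithPrefix())
	}
	return []byte(s)
}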


559-561: ⚠️ Potential issue

Handle potential integer overflow in AddOne64th function

The AddOne64th function adds n/64 to n, which could cause an integer overflow when n is close to math.MaxUint64. This could lead to incorrect gas calculations and unexpected behaviors.

Consider modifying the function to detect and handle overflow:

 func AddOne64th(n uint64) uint64 {
-    return n + (n / 64)
+    increment := n / 64
+    if n > math.MaxUint64 - increment {
+        // Handle overflow by capping the value at math.MaxUint64 or returning an error
+        return math.MaxUint64
+    }
+    return n + increment
 }
📝 Committable suggestion


func AddOne64th(n uint64) uint64 {
	increment := n / 64
	if n > math.MaxUint64 - increment {
		// Handle overflow by capping the value at math.MaxUint64 or returning an error
		return math.MaxUint64
	}
	return n + increment
}

@janezpodhostnik janezpodhostnik merged commit 0de5ef9 into feature/local-tx-reexecution Nov 25, 2024
2 checks passed
@m-Peter m-Peter deleted the janez/update-flow-go branch November 26, 2024 09:14