Sinner #1380
Conversation
Review Status
Actionable comments generated: 0
Files ignored due to filter (2)
- go.work
- services/sinner/go.mod
Files selected for processing (4)
- .codecov.yml (1 hunks)
- services/sinner/.goreleaser.yml (1 hunks)
- services/sinner/Makefile (1 hunks)
- services/sinner/README.md (1 hunks)
Files skipped from review due to trivial changes (4)
- .codecov.yml
- services/sinner/.goreleaser.yml
- services/sinner/Makefile
- services/sinner/README.md
Codecov Report

Additional details and impacted files:

```
@@             Coverage Diff              @@
##              master      #1380        +/-  ##
===================================================
+ Coverage   51.16094%  51.39717%   +0.23623%
===================================================
  Files            366        371          +5
  Lines          25195      25480        +285
  Branches         271        270          -1
===================================================
+ Hits           12890      13096        +206
- Misses         11035      11089         +54
- Partials        1270       1295         +25
```

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.
Review Status
Actionable comments generated: 7
Configuration used: CodeRabbit UI
Files selected for processing (5)
- services/sinner/README.md (1 hunks)
- services/sinner/config/indexer/config.go (1 hunks)
- services/sinner/service/indexer.go (1 hunks)
- services/sinner/service/indexer_test.go (1 hunks)
- services/sinner/service/service_test.go (1 hunks)
Files skipped from review due to trivial changes (2)
- services/sinner/README.md
- services/sinner/service/service_test.go
Additional comments: 16
services/sinner/config/indexer/config.go (7)
- 1-14: The package imports and package declaration look good. No issues found.
- 16-40: The `ContractType` enum and `ContractTypeFromString` function are well implemented. They provide a clear way to handle different contract types.
- 42-56: The `Config` struct is well defined and includes all necessary fields for the configuration of the sinner's data consumption. The use of struct tags for YAML parsing is correct.
- 58-76: The `ChainConfig` and `ContractConfig` structs are well defined and include all necessary fields for the configuration of a chain and a contract respectively. The use of struct tags for YAML parsing is correct.
- 78-94: The `IsValid` method on `Config` is well implemented. It checks for the presence of required fields and validates each chain configuration.
- 96-111: The `IsValid` method on `ChainConfig` is well implemented. It checks for the presence of required fields and validates each contract configuration. It also checks for duplicate contract addresses.
- 113-127: The `IsValid` method on `ContractConfig` is well implemented. It checks for the presence of required fields and validates the contract type.

services/sinner/service/indexer.go (9)

- 1-19: Imports are well organized and relevant to the code. No unused imports detected.
- 21-33: The `ChainIndexer` struct has been updated to include a `refreshRate` field. This is a good practice as it allows the refresh rate to be configurable.
- 35-45: The `NewChainIndexer` function has been updated to accept a `refreshRate` parameter. Ensure that all calls to this function throughout the codebase have been updated to match the new signature.
- 79-89: The `Index` function now accepts a `context.Context` parameter. This is a good practice as it allows the function to be cancelled if necessary.
- 91-101: The `processContract` function now accepts a `context.Context` parameter and returns an error. This is a good practice as it allows the function to be cancelled if necessary and provides better error handling.
- 103-119: The `createEventParser` function now returns an error. This is a good practice as it provides better error handling.
- 121-142: The `indexContractEvents` function now returns an error. This is a good practice as it provides better error handling.
- 144-173: The `fetchBlockRange` function now returns an error. This is a good practice as it provides better error handling.
- 175-226: The `processBlocksInRange` function now returns an error. This is a good practice as it provides better error handling.
```go
// DecodeConfig parses in a config from a file.
func DecodeConfig(filePath string) (cfg Config, err error) {
	if _, err := os.Stat(filePath); os.IsNotExist(err) {
		return Config{}, fmt.Errorf("config file does not exist: %w", err)
	}
	input, err := os.ReadFile(filepath.Clean(filePath))
	if err != nil {
		return Config{}, fmt.Errorf("failed to read file: %w", err)
	}
	err = yaml.Unmarshal(input, &cfg)
	if err != nil {
		return Config{}, fmt.Errorf("could not unmarshall config %s: %w", ellipsis.Shorten(string(input), 30), err)
	}
	err = cfg.IsValid()
	if err != nil {
		return cfg, err
	}

	return cfg, nil
}
```
The `DecodeConfig` function is well implemented and handles errors appropriately. However, the check for an empty file path is still missing. This can lead to a misleading error message if an empty string is passed as the file path.

```diff
+ if filePath == "" {
+     return Config{}, fmt.Errorf("config file path is empty")
+ }
  if _, err := os.Stat(filePath); os.IsNotExist(err) {
      return Config{}, fmt.Errorf("config file does not exist: %w", err)
  }
```
Committable suggestion (Beta)
```go
// DecodeConfig parses in a config from a file.
func DecodeConfig(filePath string) (cfg Config, err error) {
	if filePath == "" {
		return Config{}, fmt.Errorf("config file path is empty")
	}
	if _, err := os.Stat(filePath); os.IsNotExist(err) {
		return Config{}, fmt.Errorf("config file does not exist: %w", err)
	}
	input, err := os.ReadFile(filepath.Clean(filePath))
	if err != nil {
		return Config{}, fmt.Errorf("failed to read file: %w", err)
	}
	err = yaml.Unmarshal(input, &cfg)
	if err != nil {
		return Config{}, fmt.Errorf("could not unmarshall config %s: %w", ellipsis.Shorten(string(input), 30), err)
	}
	err = cfg.IsValid()
	if err != nil {
		return cfg, err
	}
	return cfg, nil
}
```
```go
wllt, err := wallet.FromRandom()
Nil(t.T(), err)
t.testBackend.FundAccount(ctx, wllt.Address(), *big.NewInt(params.Ether))
```
The wallet is being created from a random source and then funded. This could lead to potential loss of funds if the wallet's private key is not stored or backed up. Consider using a deterministic source for generating the wallet in a test environment.
```go
tx, err := originContract.SendBaseMessage(txContext.TransactOpts, t.destinationChainID, [32]byte{}, 1, paddedRequest, []byte{})
Nil(t.T(), err)
err = t.scribeDB.StoreEthTx(ctx, tx, t.originChainID, common.BigToHash(big.NewInt(int64(3))), 3, uint64(1))
Nil(t.T(), err)
```
The error handling here is not consistent. In line 44, the error is checked immediately after the function call. However, in line 46, another function is called before the error is checked. This could lead to unexpected behavior if an error occurs in the first function call. Consider checking the error immediately after each function call.
```diff
- tx, err := originContract.SendBaseMessage(txContext.TransactOpts, t.destinationChainID, [32]byte{}, 1, paddedRequest, []byte{})
- Nil(t.T(), err)
- err = t.scribeDB.StoreEthTx(ctx, tx, t.originChainID, common.BigToHash(big.NewInt(int64(3))), 3, uint64(1))
+ tx, sendErr := originContract.SendBaseMessage(txContext.TransactOpts, t.destinationChainID, [32]byte{}, 1, paddedRequest, []byte{})
+ Nil(t.T(), sendErr)
+ if sendErr != nil {
+     return
+ }
+ storeErr := t.scribeDB.StoreEthTx(ctx, tx, t.originChainID, common.BigToHash(big.NewInt(int64(3))), 3, uint64(1))
+ Nil(t.T(), storeErr)
+ if storeErr != nil {
+     return
+ }
```
Committable suggestion (Beta)
```go
tx, sendErr := originContract.SendBaseMessage(txContext.TransactOpts, t.destinationChainID, [32]byte{}, 1, paddedRequest, []byte{})
Nil(t.T(), sendErr)
if sendErr != nil {
	return
}
storeErr := t.scribeDB.StoreEthTx(ctx, tx, t.originChainID, common.BigToHash(big.NewInt(int64(3))), 3, uint64(1))
Nil(t.T(), storeErr)
if storeErr != nil {
	return
}
```
```go
go func() {
	for {
		select {
		case <-ctx.Done():
			return
		case <-time.After(timeout):
			testDB.UNSAFE_DB().WithContext(ctx).Find(&model.OriginSent{}).First(&originEvent)
			if len(originEvent.Message) > 0 {
				// cancel if message stored
				cancelIndexing()
			}
		}
	}
}()
```
This goroutine is running indefinitely until a message is stored or the context is done. This could potentially lead to a deadlock if the message is never stored. Consider adding a maximum number of retries or a total timeout to prevent this.
```go
Equal(t.T(), sentLog.TxIndex, originEvent.TxIndex)

// Get and check the message status
messageStatus, err := testDB.RetrieveMessageStatus(ctx, originEvent.MessageHash)
Nil(t.T(), err)
Equal(t.T(), sentLog.TxHash.String(), *messageStatus.OriginTxHash)
Equal(t.T(), graphqlModel.MessageStateLastSeenOrigin, *messageStatus.LastSeen)
})
```
The test assertions are not descriptive enough. If the test fails, it might be difficult to understand what went wrong. Consider adding a message to each assertion to describe what it is testing.
```diff
- Equal(t.T(), sentLog.TxIndex, originEvent.TxIndex)
+ Equal(t.T(), sentLog.TxIndex, originEvent.TxIndex, "Expected TxIndex to be equal")
```
Committable suggestion (Beta)
```go
Equal(t.T(), sentLog.TxIndex, originEvent.TxIndex, "Expected TxIndex to be equal")

// Get and check the message status
messageStatus, err := testDB.RetrieveMessageStatus(ctx, originEvent.MessageHash)
Nil(t.T(), err)
Equal(t.T(), sentLog.TxHash.String(), *messageStatus.OriginTxHash, "Expected TxHash to be equal")
Equal(t.T(), graphqlModel.MessageStateLastSeenOrigin, *messageStatus.LastSeen, "Expected LastSeen to be equal")
})
```
```go
func (t *ServiceSuite) storeTestLog(ctx context.Context, tx *types.Transaction, chainID uint32, blockNumber uint64) (*types.Log, error) {
	t.testBackend.WaitForConfirmation(ctx, tx)
	receipt, err := t.testBackend.TransactionReceipt(ctx, tx.Hash())
	if err != nil {
		return nil, fmt.Errorf("failed to get receipt for transaction %s: %w", tx.Hash().String(), err)
	}
	receipt.Logs[0].BlockNumber = blockNumber
	for _, log := range receipt.Logs {
		err = t.scribeDB.StoreLogs(ctx, chainID, *log)
		if err != nil {
			return nil, fmt.Errorf("error storing swap log: %w", err)
		}
	}
	return receipt.Logs[len(receipt.Logs)-1], nil
}
```
The errors from `t.testBackend.TransactionReceipt` and `t.scribeDB.StoreLogs` are funneled through a single reused `err` variable. Using distinct, descriptively named error variables makes each failure path unambiguous and prevents accidental reuse.
```diff
- receipt, err := t.testBackend.TransactionReceipt(ctx, tx.Hash())
- if err != nil {
-     return nil, fmt.Errorf("failed to get receipt for transaction %s: %w", tx.Hash().String(), err)
- }
- receipt.Logs[0].BlockNumber = blockNumber
- for _, log := range receipt.Logs {
-     err = t.scribeDB.StoreLogs(ctx, chainID, *log)
-     if err != nil {
-         return nil, fmt.Errorf("error storing swap log: %w", err)
-     }
- }
+ receipt, receiptErr := t.testBackend.TransactionReceipt(ctx, tx.Hash())
+ if receiptErr != nil {
+     return nil, fmt.Errorf("failed to get receipt for transaction %s: %w", tx.Hash().String(), receiptErr)
+ }
+ receipt.Logs[0].BlockNumber = blockNumber
+ for _, log := range receipt.Logs {
+     storeErr := t.scribeDB.StoreLogs(ctx, chainID, *log)
+     if storeErr != nil {
+         return nil, fmt.Errorf("error storing swap log: %w", storeErr)
+     }
+ }
```
Committable suggestion (Beta)
```go
func (t *ServiceSuite) storeTestLog(ctx context.Context, tx *types.Transaction, chainID uint32, blockNumber uint64) (*types.Log, error) {
	t.testBackend.WaitForConfirmation(ctx, tx)
	receipt, receiptErr := t.testBackend.TransactionReceipt(ctx, tx.Hash())
	if receiptErr != nil {
		return nil, fmt.Errorf("failed to get receipt for transaction %s: %w", tx.Hash().String(), receiptErr)
	}
	receipt.Logs[0].BlockNumber = blockNumber
	for _, log := range receipt.Logs {
		storeErr := t.scribeDB.StoreLogs(ctx, chainID, *log)
		if storeErr != nil {
			return nil, fmt.Errorf("error storing swap log: %w", storeErr)
		}
	}
	return receipt.Logs[len(receipt.Logs)-1], nil
}
```
Review Status
Actionable comments generated: 6
Configuration used: CodeRabbit UI
Files selected for processing (2)
- services/sinner/service/service_test.go (1 hunks)
- services/sinner/service/sinner.go (1 hunks)
Additional comments: 2
services/sinner/service/service_test.go (2)
- 1-16: The import section is well organized and follows the Go convention of separating standard library imports from third-party imports.
- 94-103: The test assertions are clear and check the important aspects of the functionality.
```go
// TestSinner tests Sinner.
func (t *ServiceSuite) TestSinner() {
	t.RunOnAllDBs(func(testDB db.TestEventDB) {
		// Store test logs and txs
		err := t.scribeDB.StoreLogs(t.GetTestContext(), t.originChainID, t.originTestLog)
		Nil(t.T(), err)
		err = t.scribeDB.StoreEthTx(t.GetTestContext(), t.originTestTx, t.originChainID, t.originTestLog.BlockHash, t.originTestLog.BlockNumber, uint64(t.originTestLog.TxIndex))
		Nil(t.T(), err)
		err = t.scribeDB.StoreLastIndexed(t.GetTestContext(), t.originTestLog.Address, t.originChainID, 625782, false)
		Nil(t.T(), err)

		err = t.scribeDB.StoreLogs(t.GetTestContext(), t.destinationChainID, t.destinationTestLog)
		Nil(t.T(), err)
		err = t.scribeDB.StoreEthTx(t.GetTestContext(), t.destinationTestTx, t.destinationChainID, t.destinationTestLog.BlockHash, t.destinationTestLog.BlockNumber, uint64(t.destinationTestLog.TxIndex))
		Nil(t.T(), err)
		err = t.scribeDB.StoreLastIndexed(t.GetTestContext(), t.destinationTestLog.Address, t.destinationChainID, 1975780, false)
		Nil(t.T(), err)

		originConfig := indexerConfig.ChainConfig{
			ChainID: t.originChainID,
			Contracts: []indexerConfig.ContractConfig{{
				ContractType: "origin",
				Address:      t.originTestLog.Address.String(),
				StartBlock:   625780,
			}},
		}

		destinationConfig := indexerConfig.ChainConfig{
			ChainID: t.destinationChainID,
			Contracts: []indexerConfig.ContractConfig{{
				ContractType: "execution_hub",
				Address:      t.destinationTestLog.Address.String(),
				StartBlock:   1975778,
			}},
		}

		config := indexerConfig.Config{
			ScribeURL:          t.scribeFetcherPath,
			DBPath:             t.scribeDBPath,
			DefaultRefreshRate: 1,
			DBType:             "sqlite",
			SkipMigrations:     true,
			Chains:             []indexerConfig.ChainConfig{originConfig, destinationConfig},
		}

		sinner, err := service.NewSinner(testDB, config, t.metrics)
		Nil(t.T(), err)

		originEvent := model.OriginSent{}
		destinationEvent := model.Executed{}

		indexingCtx, cancelIndexing := context.WithCancel(t.GetTestContext())
		go func() {
			err = sinner.Index(indexingCtx)
			Nil(t.T(), err)
		}()

		timeout := 2 * time.Second
		go func() {
			for {
				select {
				case <-t.GetTestContext().Done():
					return
				case <-time.After(timeout):
					// check db
					testDB.UNSAFE_DB().WithContext(t.GetTestContext()).Find(&model.OriginSent{}).First(&originEvent)
					testDB.UNSAFE_DB().WithContext(t.GetTestContext()).Find(&model.Executed{}).First(&destinationEvent)
					if len(originEvent.MessageHash) > 0 && len(destinationEvent.MessageHash) > 0 {
						// cancel if message stored
						cancelIndexing()
					}
				}
			}
		}()
		<-indexingCtx.Done()

		// Check parity of events
		Equal(t.T(), t.originTestLog.TxHash.String(), originEvent.TxHash)
		Equal(t.T(), t.destinationTestLog.TxHash.String(), destinationEvent.TxHash)

		// Get and check the message status
		messageStatus, err := testDB.RetrieveMessageStatus(t.GetTestContext(), originEvent.MessageHash)
		Nil(t.T(), err)
		Equal(t.T(), t.originTestLog.TxHash.String(), *messageStatus.OriginTxHash)
		Equal(t.T(), graphqlModel.MessageStateLastSeenDestination, *messageStatus.LastSeen)
	})
}
```
The `TestSinner` function is comprehensive and tests various aspects of the `Sinner` service. It includes setup, execution, and verification steps. However, it's a bit long and could benefit from being broken down into smaller, more focused test functions. This would improve readability and maintainability.
```go
err := t.scribeDB.StoreLogs(t.GetTestContext(), t.originChainID, t.originTestLog)
Nil(t.T(), err)
err = t.scribeDB.StoreEthTx(t.GetTestContext(), t.originTestTx, t.originChainID, t.originTestLog.BlockHash, t.originTestLog.BlockNumber, uint64(t.originTestLog.TxIndex))
Nil(t.T(), err)
err = t.scribeDB.StoreLastIndexed(t.GetTestContext(), t.originTestLog.Address, t.originChainID, 625782, false)
Nil(t.T(), err)

err = t.scribeDB.StoreLogs(t.GetTestContext(), t.destinationChainID, t.destinationTestLog)
Nil(t.T(), err)
err = t.scribeDB.StoreEthTx(t.GetTestContext(), t.destinationTestTx, t.destinationChainID, t.destinationTestLog.BlockHash, t.destinationTestLog.BlockNumber, uint64(t.destinationTestLog.TxIndex))
Nil(t.T(), err)
err = t.scribeDB.StoreLastIndexed(t.GetTestContext(), t.destinationTestLog.Address, t.destinationChainID, 1975780, false)
Nil(t.T(), err)
```
The error handling here is good, as it checks for errors after each operation. However, it might be more efficient to use a loop to iterate over a list of operations, reducing code repetition.
```go
indexingCtx, cancelIndexing := context.WithCancel(t.GetTestContext())
go func() {
	err = sinner.Index(indexingCtx)
	Nil(t.T(), err)
}()

timeout := 2 * time.Second
go func() {
	for {
		select {
		case <-t.GetTestContext().Done():
			return
		case <-time.After(timeout):
			// check db
			testDB.UNSAFE_DB().WithContext(t.GetTestContext()).Find(&model.OriginSent{}).First(&originEvent)
			testDB.UNSAFE_DB().WithContext(t.GetTestContext()).Find(&model.Executed{}).First(&destinationEvent)
			if len(originEvent.MessageHash) > 0 && len(destinationEvent.MessageHash) > 0 {
				// cancel if message stored
				cancelIndexing()
			}
		}
	}
}()
```
This section starts the `Index` method in a goroutine and periodically checks the database for certain events. It's a good practice to use context cancellation to stop the indexing process once the required events are found. However, the `time.After` function creates a new timer on each iteration, which can lead to resource leaks if not properly handled. Consider using `time.NewTicker` instead.
```go
// nolint:gocognit
func NewSinner(eventDB db.EventDB, config indexerConfig.Config, handler metrics.Handler) (*Sinner, error) {
	chainIndexers := make(map[uint32]*ChainIndexer)
	httpClient := http.Client{
		Timeout: 10 * time.Second,
		Transport: &http.Transport{
			ResponseHeaderTimeout: 10 * time.Second,
		},
	}
```
The HTTP client is now created within the `NewSinner` function and used to create a new fetcher. This is a good improvement, but it would be even better to pass the HTTP client as a parameter to the function. This would make the function more testable and flexible, as it would allow you to use different HTTP clients for different environments (e.g., a mock client for testing, a client with different timeout settings for production, etc.).

```diff
- func NewSinner(eventDB db.EventDB, config indexerConfig.Config, handler metrics.Handler) (*Sinner, error) {
+ func NewSinner(eventDB db.EventDB, config indexerConfig.Config, handler metrics.Handler, httpClient *http.Client) (*Sinner, error) {
```
Committable suggestion (Beta)

```go
// nolint:gocognit
func NewSinner(eventDB db.EventDB, config indexerConfig.Config, handler metrics.Handler, httpClient *http.Client) (*Sinner, error) {
	chainIndexers := make(map[uint32]*ChainIndexer)
```
```go
// nolint gocognit,cyclop
func getChainIndexer(eventDB db.EventDB, chainID uint32, fetcher fetcherpkg.ScribeFetcher, chainConfig indexerConfig.ChainConfig, refreshRate time.Duration) (*ChainIndexer, error) {
	parsers := Parsers{
		ChainID: chainID,
	}
	for i := range chainConfig.Contracts {
		switch chainConfig.Contracts[i].ContractType {
		case "origin":
			originParser, err := origin.NewParser(common.HexToAddress(chainConfig.Contracts[i].Address), eventDB, chainID)
			if err != nil {
				return nil, fmt.Errorf("could not create origin parser: %w", err)
			}
			parsers.OriginParser = originParser
		case "execution_hub":
			destinationParser, err := destination.NewParser(common.HexToAddress(chainConfig.Contracts[i].Address), eventDB, chainID)
			if err != nil {
				return nil, fmt.Errorf("could not create execution_hub parser: %w", err)
			}
			parsers.DestinationParser = destinationParser
		}
	}

	chainIndexer := NewChainIndexer(eventDB, parsers, fetcher, chainConfig, refreshRate)

	return chainIndexer, nil
}
```
The `getChainIndexer` function is quite complex and has a high cyclomatic complexity due to the nested switch-case and for-loop. Consider breaking it down into smaller, more manageable functions. For example, you could create a separate function to handle the creation of the `Parsers` struct. This would make the code easier to read and maintain, and it would also make it easier to test each individual component of the function.
```go
func (e Sinner) Index(ctx context.Context) error {
	var wg sync.WaitGroup
	errChan := make(chan error, len(e.config.Chains)) // Buffered to prevent goroutine blockage

	// Listen for errors
	go func() {
		for err := range errChan {
			logger.ReportSinnerError(fmt.Errorf("could not livefill explorer: %w", err), 0, logger.SinnerIndexingFailure)
		}
	}()

	for i := range e.config.Chains {
		chainConfig := e.config.Chains[i]
		chainIndexer := e.indexers[chainConfig.ChainID]

		wg.Add(1)
		go func(chainCfg *indexerConfig.ChainConfig, indexer *ChainIndexer) {
			defer wg.Done()

			for {
				chainCtx, cancelChainCtx := context.WithCancel(ctx)

				select {
				case <-ctx.Done(): // global context canceled
					errChan <- fmt.Errorf("global context canceled")
					cancelChainCtx() // cancel the local context before returning
					return
				default:
					err := chainIndexer.Index(chainCtx)
					cancelChainCtx() // cancel the local context immediately after its use

					if err != nil {
						errChan <- fmt.Errorf("error indexing chain %d: %w", chainCfg.ChainID, err)
						continue // continue trying
					}
					return
				}
			}
		}(&chainConfig, chainIndexer)
	}

	wg.Wait()
	close(errChan)

	return nil
}
```
The `Index` function is well-structured and handles errors properly. It uses a buffered error channel to prevent goroutine blockage and a wait group to ensure all goroutines finish before the function returns. However, the error handling could be improved by returning the first error that occurs instead of just logging it. This would allow the caller to handle the error and decide whether to continue or not.
```diff
 func (e Sinner) Index(ctx context.Context) error {
 	var wg sync.WaitGroup
 	errChan := make(chan error, len(e.config.Chains)) // Buffered to prevent goroutine blockage

+	var firstErr error
+	done := make(chan struct{})
 	// Listen for errors
 	go func() {
+		defer close(done)
 		for err := range errChan {
+			if firstErr == nil {
+				firstErr = err
+			}
 			logger.ReportSinnerError(fmt.Errorf("could not livefill explorer: %w", err), 0, logger.SinnerIndexingFailure)
 		}
 	}()

 	for i := range e.config.Chains {
 		chainConfig := e.config.Chains[i]
 		chainIndexer := e.indexers[chainConfig.ChainID]

 		wg.Add(1)
 		go func(chainCfg *indexerConfig.ChainConfig, indexer *ChainIndexer) {
 			defer wg.Done()

 			for {
 				chainCtx, cancelChainCtx := context.WithCancel(ctx)

 				select {
 				case <-ctx.Done(): // global context canceled
 					errChan <- fmt.Errorf("global context canceled")
 					cancelChainCtx() // cancel the local context before returning
 					return
 				default:
 					err := chainIndexer.Index(chainCtx)
 					cancelChainCtx() // cancel the local context immediately after its use

 					if err != nil {
 						errChan <- fmt.Errorf("error indexing chain %d: %w", chainCfg.ChainID, err)
 						continue // continue trying
 					}
 					return
 				}
 			}
 		}(&chainConfig, chainIndexer)
 	}

 	wg.Wait()
 	close(errChan)
+	<-done
+	if firstErr != nil {
+		return firstErr
+	}

 	return nil
 }
```
Committable suggestion (Beta)

```go
func (e Sinner) Index(ctx context.Context) error {
	var wg sync.WaitGroup
	errChan := make(chan error, len(e.config.Chains)) // Buffered to prevent goroutine blockage

	var firstErr error
	done := make(chan struct{})
	// Listen for errors and remember the first one
	go func() {
		defer close(done)
		for err := range errChan {
			if firstErr == nil {
				firstErr = err
			}
			logger.ReportSinnerError(fmt.Errorf("could not livefill explorer: %w", err), 0, logger.SinnerIndexingFailure)
		}
	}()

	for i := range e.config.Chains {
		chainConfig := e.config.Chains[i]
		chainIndexer := e.indexers[chainConfig.ChainID]

		wg.Add(1)
		go func(chainCfg *indexerConfig.ChainConfig, indexer *ChainIndexer) {
			defer wg.Done()

			for {
				chainCtx, cancelChainCtx := context.WithCancel(ctx)

				select {
				case <-ctx.Done(): // global context canceled
					errChan <- fmt.Errorf("global context canceled")
					cancelChainCtx() // cancel the local context before returning
					return
				default:
					err := chainIndexer.Index(chainCtx)
					cancelChainCtx() // cancel the local context immediately after its use

					if err != nil {
						errChan <- fmt.Errorf("error indexing chain %d: %w", chainCfg.ChainID, err)
						continue // continue trying
					}
					return
				}
			}
		}(&chainConfig, chainIndexer)
	}

	wg.Wait()
	close(errChan)
	<-done
	if firstErr != nil {
		return firstErr
	}

	return nil
}
```
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files selected for processing (1)
- services/sinner/README.md (1 hunks)
Files skipped from review due to trivial changes (1)
- services/sinner/README.md
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files selected for processing (2)
- contrib/promexporter/internal/gql/dfk/client.gen.go (1 hunks)
- contrib/promexporter/internal/gql/dfk/models.gen.go (6 hunks)
Additional comments: 4
contrib/promexporter/internal/gql/dfk/client.gen.go (1)
- 33-48: The changes to the `Query` struct seem to be in line with the PR summary. Ensure that the new fields are being used correctly and that the GraphQL queries are updated accordingly. Also, verify that the backend GraphQL API supports these new fields.

contrib/promexporter/internal/gql/dfk/models.gen.go (3)
- 127-128: The addition of `AccessoryAuction` to the `Accessory` struct and the corresponding filter fields in `AccessoryFilter` are consistent with the PR summary. Ensure that the new fields are being used correctly in the rest of the codebase.
- 502-503: The addition of `ArmorAuction` to the `Armor` struct and the corresponding filter fields in `ArmorFilter` are consistent with the PR summary. Ensure that the new fields are being used correctly in the rest of the codebase.
- 2916-2917: The addition of `WeaponAuction` to the `Weapon` struct and the corresponding filter fields in `WeaponFilter` are consistent with the PR summary. Ensure that the new fields are being used correctly in the rest of the codebase.
Review Status
Actionable comments generated: 1
Configuration used: CodeRabbit UI
Files selected for processing (1)
- services/sinner/db/model/model.go (1 hunks)
Additional comments: 4
services/sinner/db/model/model.go (4)
- 1-7: The import statements are well organized and only necessary packages are imported. Good job.
- 9-12: The `GetAllModels` function is simple and straightforward. It returns all the models for migration. This is a good practice as it makes it easy to add or remove models in the future.
- 27-42: Global variables are declared and initialized in the `init` function. This is a good practice as it ensures that these variables are initialized before they are used.
- 130-139: The struct definitions are clear and well-documented. Each field has a corresponding comment explaining its purpose, which is a good practice. The use of `gorm` tags for database mapping is also correct.
func init() {
	allModels := GetAllModels()
	namer := dbcommon.NewNamer(allModels)
	TxHashFieldName = namer.GetConsistentName("TxHash")
	ChainIDFieldName = namer.GetConsistentName("ChainID")
	BlockNumberFieldName = namer.GetConsistentName("BlockNumber")
	ContractAddressFieldName = namer.GetConsistentName("ContractAddress")
	MessageHashFieldName = namer.GetConsistentName("MessageHash")

	OriginTxHashFieldName = namer.GetConsistentName("OriginTxHash")
	DestinationTxHashFieldName = namer.GetConsistentName("DestinationTxHash")
}
The `init` function is used to initialize global variables. This is a common practice in Go, but be aware that the order of initialization is not guaranteed across different files. If these variables are used in the `init` functions of other files, it could lead to unexpected behavior.
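The ordering guarantees within one file can be sketched concretely. Within a single package file, package-level variables are initialized before any `init()` runs, and multiple `init()` functions in the same file run top to bottom; across files of the same package the order depends on how the compiler receives the files, which is exactly why cross-file `init` dependencies are fragile. A minimal self-contained illustration:

```go
package main

import "fmt"

// order records initialization steps as they happen.
var order []string

// Package-level variable initializers run before any init() in this file.
var a = record("var a")

func record(s string) string {
	order = append(order, s)
	return s
}

// Multiple init() functions in one file run in source order.
func init() {
	order = append(order, "init 1")
}

func init() {
	order = append(order, "init 2")
}

func main() {
	fmt.Println(order) // [var a init 1 init 2]
}
```

The guarantee shown here holds per file; the review's warning is about relying on the same ordering *between* files, which the spec does not promise.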
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files selected for processing (1)
- services/sinner/db/model/model.go (1 hunks)
Additional comments: 2
services/sinner/db/model/model.go (2)
- 1-139: The code looks good overall. The new structs `MessageStatus`, `OriginSent`, `Executed`, and `LastIndexed` are well defined with appropriate field tags for the ORM. The `GetAllModels` function and the `init` function are also correctly implemented. However, ensure that the `init` function does not cause any unexpected behavior due to the order of initialization across different files.
- 14-25: The `init` function is used to initialize global variables. This is a common practice in Go, but be aware that the order of initialization is not guaranteed across different files. If these variables are used in the `init` functions of other files, it could lead to unexpected behavior.
  - The previous comment is still valid.
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files selected for processing (1)
- services/sinner/db/model/model.go (1 hunks)
Additional comments: 4
services/sinner/db/model/model.go (4)
- 1-139: The code looks well-structured and follows Go best practices. The use of `gorm` tags for struct fields is correct and will ensure proper mapping between Go struct fields and database columns. The `init` function is used appropriately to initialize global variables. The `GetAllModels` function is a good abstraction for retrieving all models for migration. The struct definitions are clear and well-documented.
- 14-25: The previous comment about the `init` function is still valid. The order of initialization is not guaranteed across different files. If these variables are used in the `init` functions of other files, it could lead to unexpected behavior.
- 44-128: The struct definitions for `MessageStatus`, `OriginSent`, and `Executed` are clear and well-documented. The use of `gorm` tags for struct fields is correct and will ensure proper mapping between Go struct fields and database columns.
- 130-139: The `LastIndexed` struct is well-defined and documented. The use of `gorm` tags for struct fields is correct and will ensure proper mapping between Go struct fields and database columns.
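The `gorm` tag mechanism the comments refer to is just Go struct tags, which can be inspected with the standard library. A minimal sketch (the struct and tag values below are hypothetical stand-ins, not the PR's actual model definitions, and gorm itself is not invoked):

```go
package main

import (
	"fmt"
	"reflect"
)

// With gorm, a tag like `gorm:"column:tx_hash;index"` pins the column name
// and requests an index; without an explicit column, gorm derives a
// snake_case name from the field name.
type OriginSent struct {
	TxHash      string `gorm:"column:tx_hash;index"`
	ChainID     uint32 `gorm:"column:chain_id"`
	BlockNumber uint64 `gorm:"column:block_number"`
}

// columnTag reads the raw gorm tag for a field via reflection, which is the
// same information the ORM consumes when building the schema.
func columnTag(field string) string {
	f, ok := reflect.TypeOf(OriginSent{}).FieldByName(field)
	if !ok {
		return ""
	}
	return f.Tag.Get("gorm")
}

func main() {
	fmt.Println(columnTag("TxHash")) // column:tx_hash;index
}
```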
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files selected for processing (9)
- services/explorer/consumer/client/resolver-client/server.go (6 hunks)
- services/scribe/api/data_test.go (5 hunks)
- services/scribe/cmd/cmd.md (1 hunks)
- services/scribe/graphql/client/client.go (2 hunks)
- services/scribe/graphql/client/queries/queries.graphql (1 hunks)
- services/scribe/graphql/server/graph/queries.resolvers.go (1 hunks)
- services/scribe/graphql/server/graph/resolver/server.go (6 hunks)
- services/scribe/graphql/server/graph/schema/queries.graphql (1 hunks)
- services/scribe/service/indexer/indexer_test.go (1 hunks)
Files skipped from review due to trivial changes (2)
- services/scribe/cmd/cmd.md
- services/scribe/graphql/client/client.go
Additional comments: 21
services/scribe/graphql/client/queries/queries.graphql (1)
- 16-18: The new optional parameter `$asc` has been added to the `GetLogsRange` query. This is a good addition as it allows the caller to specify the desired sorting order of the logs. Ensure that all calls to this function throughout the codebase have been updated to handle this new parameter.

services/scribe/graphql/server/graph/queries.resolvers.go (1)
- 33-47: The new parameter `asc` is optional and is checked for `nil` before being dereferenced, which is good for avoiding potential nil-pointer-dereference issues. The logic for retrieving logs in ascending or descending order based on the `asc` parameter is correctly implemented. Ensure that all calls to this function throughout the codebase have been updated to match the new signature.

services/scribe/graphql/server/graph/schema/queries.graphql (1)
- 24-28: The addition of the `asc` field to the `logsRange` query is a good way to allow clients to specify the sorting order of the results. Ensure that the backend logic correctly handles this new field. Also, consider adding the `asc` field to other range queries for consistency.

  `+ asc: Boolean = False`

services/explorer/consumer/client/resolver-client/server.go (6)
- 83-83: Ensure that all calls to this function throughout the codebase have been updated to match the new signature.
- 141-141: Ensure that all implementations of the `QueryResolver` interface have been updated to match the new function signature.
- 422-422: Ensure that the `asc` argument is being correctly passed to the `LogsRange` resolver function.
- 876-876: The default value for `asc` is set to `False`. Ensure that this is the intended behavior.
- 1431-1439: Ensure that the `asc` argument is being correctly unmarshalled and added to the `args` map.
- 3016-3016: Ensure that the `asc` argument is being correctly passed to the `LogsRange` function.

services/scribe/graphql/server/graph/resolver/server.go (6)
- 83-83: Ensure that all calls to this function throughout the codebase have been updated to match the new signature.
- 141-141: Ensure that all calls to this function throughout the codebase have been updated to match the new signature.
- 422-422: Ensure that all calls to this function throughout the codebase have been updated to match the new signature.
- 876-876: The default value for `asc` is set to `False`. Ensure that this default behavior is expected and documented.
- 1431-1439: The `asc` argument is being unmarshalled and added to the `args` map. This is a standard way to handle optional arguments in GraphQL.
- 3016-3016: Ensure that all calls to this function throughout the codebase have been updated to match the new signature.

services/scribe/api/data_test.go (5)
- 5-11: The new import statement `github.com/synapsecns/sanguine/core` has been added. Ensure that the package is available and the path is correct.
- 51-65: The function `GetLogsRange` has been updated to accept an additional argument `ascending *bool`. The test case has been updated to test getting logs in ascending order. The assertion statements have been modified to compare the expected and actual values in the correct order. Ensure that the function `GetLogsRange` is correctly implemented to handle the new argument.
- 220-233: The test case for checking the last and first stored block numbers is not correct. It assumes that the last and first stored block numbers are the same, which might not be the case in a real-world scenario. You should store multiple block times with different block numbers and then check the last and first stored block numbers.
- 298-301: The test case for checking the last indexed block number is not correct. It assumes that the last indexed block number is the same as the block number used to store the last indexed block. However, the last indexed block number should be retrieved from the database and not assumed.
- 397-400: The test case for checking the last confirmed block number is not correct. It assumes that the last confirmed block number is the same as the block number used to store the last confirmed block. However, the last confirmed block number should be retrieved from the database and not assumed.

services/scribe/service/indexer/indexer_test.go (1)
- 470-473: Ensure that the new URL "https://arbitrum.llamarpc.com" is correct and accessible. Also, verify that the backend service at this URL is compatible with the `backend.DialBackend` function and can handle the requests made by the `TestTxTypeNotSupported` function.
Review Status
Actionable comments generated: 2
Configuration used: CodeRabbit UI
Files selected for processing (5)
- services/sinner/fetcher/client/client.go (1 hunks)
- services/sinner/fetcher/client/queries/queries.graphql (1 hunks)
- services/sinner/fetcher/client/schema/queries.graphql (1 hunks)
- services/sinner/fetcher/scribe.go (1 hunks)
- services/sinner/service/indexer.go (1 hunks)
Files skipped from review due to trivial changes (2)
- services/sinner/fetcher/client/schema/queries.graphql
- services/sinner/fetcher/scribe.go
Additional comments: 11
services/sinner/fetcher/client/queries/queries.graphql (1)
- 1-116: The GraphQL queries look well-structured and cover a wide range of data retrieval needs for the Synapse interchain network. Ensure that the backend GraphQL server is set up to handle these queries correctly and efficiently. Also, consider adding error handling in the client code that calls these queries to handle any potential errors returned by the server.
services/sinner/fetcher/client/client.go (1)
- 1-1: This file is auto-generated. Any changes made to it will be overwritten the next time the code is generated. Ensure that the code generation is correct and up-to-date.
services/sinner/service/indexer.go (9)
- 1-19: The import section is well organized and follows the Go convention of grouping standard library imports separately from third-party imports.
- 21-33: The `ChainIndexer` struct has been updated to include a `refreshRate` field of type `time.Duration`. This is a good practice as it allows the refresh rate to be configurable.
- 35-45: The `NewChainIndexer` function has been updated to accept a `refreshRate` parameter. Ensure that all calls to this function throughout the codebase have been updated to match the new signature.
- 79-89: The `Index` method now takes a `context.Context` parameter. This is a good practice as it allows the method to be cancelled or timed out.
- 91-101: The `processContract` method now takes a `context.Context` parameter and returns an error. This is a good practice as it allows the method to be cancelled or timed out and improves error handling.
- 103-119: The `createEventParser` method now returns an error. This is a good practice for error handling.
- 121-142: The `indexContractEvents` method now returns an error. This is a good practice for error handling.
- 144-173: The `fetchBlockRange` method now returns an error. This is a good practice for error handling.
- 175-226: The `processBlocksInRange` method now returns an error. This is a good practice for error handling.
Review Status
Actionable comments generated: 1
Configuration used: CodeRabbit UI
Files selected for processing (1)
- services/sinner/service/indexer.go (1 hunks)
Additional comments: 10
services/sinner/service/indexer.go (10)
- 1-19: The import section is well organized and imports only the necessary packages. Good use of aliasing to avoid naming conflicts.
- 21-33: The `ChainIndexer` struct is well defined with clear field names. The addition of `refreshRate` is a good way to control the frequency of indexing operations.
- 35-45: The `NewChainIndexer` function is updated correctly to include the `refreshRate` parameter. It initializes a new `ChainIndexer` object with the provided parameters.
- 47-54: The `getBackoffConfig` function is a good way to encapsulate the backoff configuration. It uses a jittered exponential backoff, which is a good strategy for retrying operations in distributed systems.
- 79-89: The `Index` function correctly uses the `errgroup` package to handle errors that occur in multiple goroutines. This is a good practice for managing concurrent operations.
- 91-101: The `processContract` function correctly creates an event parser and indexes the contract events in a goroutine. It also correctly handles errors that may occur during these operations.
- 103-119: The `createEventParser` function correctly creates an event parser based on the contract type. It also correctly handles errors that may occur during this operation.
- 121-142: The `indexContractEvents` function correctly indexes all events for a contract. It uses a context for cancellation and correctly handles errors that may occur during this operation.
- 144-173: The `fetchBlockRange` function correctly fetches the block range for the contract. It uses a context for cancellation and correctly handles errors that may occur during this operation.
- 175-226: The `processBlocksInRange` function correctly processes all blocks in the range for the contract. It uses a context for cancellation and correctly handles errors that may occur during this operation.
Description
Tracking the message lifecycle for the Synapse interchain network.
What's in here
- `/api`: A GraphQL interface to interact with message data; currently supports the following queries.
- `/cmd`: CLI for starting Sinner's indexer and server.
- `/contracts`: Holds parsers for each event on each contract. Wraps contract ABIs from agents to reduce maintenance complexity as contracts and agents are upgraded.
- `/db`: Handles all read and write operations to Sinner's MySQL database.
- `/fetcher`: Gets logs from Scribe; wraps the Scribe API.
- `/graphql`: GraphQL schema and resolvers for the API.
- `/logger`: Sinner's logger.
- `/metadata`: Metrics constants.
- `/service`: Holds the indexer logic for all chains and all contracts.
- `/types`: Holds types used around Sinner.
Summary by CodeRabbit