
Process messages concurrently #288

Merged · 62 commits into main · Jun 4, 2024

Conversation

cam-schultz (Collaborator) commented May 13, 2024

Why this should be merged

Fixes #31
Refactors ApplicationRelayer to be more akin to a standalone worker, as opposed to a composite piece of a subnet Listener

How this works

  • Adds methods ProcessHeight and ProcessMessage to ApplicationRelayer. ProcessHeight calls ProcessMessage in a separate goroutine for each Warp message in the block. ProcessMessage can also be called on its own (as is the case for manual Warp message relays)
  • Decouples ApplicationRelayer from Listener, ExternalHandler, and MessageManager
  • ExternalHandler adds the method RegisterRequestID, which routes all inbound app responses for that ID to the caller. Used to route messages directly to the initiating ApplicationRelayer
  • MessageManager is now a factory for MessageHandlers, an ephemeral type that handles message processing for a single Warp message
  • CheckpointManager simplified to only perform periodic writes. The previous prepare-stage-commit flow is now encapsulated in ApplicationRelayer.ProcessHeight
  • Adds BatchMessage e2e test that sends multiple Teleporter messages in a single transaction. This tests concurrent message processing within a single block. Solidity infra added to tests/ to support the new BatchCrossChainMessage contract.

How this was tested

CI
Added batch tx e2e test
TODO: before/after throughput testing (see #185)

How is this documented

N/A. No functional changes; this is purely a throughput improvement.

var eg errgroup.Group
for _, msg := range msgs {
	eg.Go(func() error {
		return r.ProcessMessage(msg)
	})
}
cam-schultz (Collaborator, Author):
Upgraded to go 1.22 so that this closure captures the value of msg on each iteration, rather than the variable itself.

Contributor:
That's an interesting change for a language upgrade. I'm surprised if this change doesn't break some apps.

geoff-vball (Contributor):
@cam-schultz There's a flag you can set in 1.21 that enables this feature, so we can keep our versions consistent.

cam-schultz (Collaborator, Author):
Thanks for pointing that out @geoff-vball . I elected to go with the workaround of assigning a local variable in each iteration rather than include that experimental flag in the build, since we're trying to lock into stable language features.

Base automatically changed from db-manager to main May 14, 2024 17:52
geoff-vball previously approved these changes May 31, 2024
@geoff-vball geoff-vball dismissed stale reviews from michaelkaplan13 and themself via 7d2f8ee May 31, 2024 20:42
@cam-schultz cam-schultz merged commit b3c24b2 into main Jun 4, 2024
7 checks passed
@cam-schultz cam-schultz deleted the app-relayer-worker branch June 4, 2024 20:58
bernard-avalabs (Contributor) left a comment

Looks like this was merged before I finished my second pass, but it LGTM. Just a minor nit comment.


// Check for the expected number of responses, and clear from the map if all expected responses have been received
// TODO: we can improve performance here by independently locking the response channel and response count maps
responses := h.responsesCount[requestID]
bernard-avalabs (Contributor):
nit: perhaps add an "ok" check on the map lookup just to defensively check for logic errors.

Successfully merging this pull request may close these issues.

Process messages from the same chain concurrently
5 participants